
Linear map

In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping $V \to W$ between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism.

If a linear map is a bijection then it is called a linear isomorphism. In the case where $V = W$, a linear map is called a linear endomorphism. Sometimes the term linear operator refers to this case,[1] but the term "linear operator" can have different meanings for different conventions: for example, it can be used to emphasize that $V$ and $W$ are real vector spaces (not necessarily with $V = W$),[citation needed] or it can be used to emphasize that $V$ is a function space, which is a common convention in functional analysis.[2] Sometimes the term linear function has the same meaning as linear map, while in analysis it does not.

A linear map from $V$ to $W$ always maps the origin of $V$ to the origin of $W$. Moreover, it maps linear subspaces in $V$ onto linear subspaces in $W$ (possibly of a lower dimension);[3] for example, it maps a plane through the origin in $V$ to either a plane through the origin in $W$, a line through the origin in $W$, or just the origin in $W$. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations.

In the language of category theory, linear maps are the morphisms of vector spaces.

Definition and first consequences

Let $V$ and $W$ be vector spaces over the same field $K$. A function $f : V \to W$ is said to be a linear map if for any two vectors $\mathbf{u}, \mathbf{v} \in V$ and any scalar $c \in K$ the following two conditions are satisfied:

  • Additivity / operation of addition
     $f(\mathbf{u} + \mathbf{v}) = f(\mathbf{u}) + f(\mathbf{v})$
  • Homogeneity of degree 1 / operation of scalar multiplication
     $f(c\mathbf{u}) = c f(\mathbf{u})$

Thus, a linear map is said to be operation preserving. In other words, it does not matter whether the linear map is applied before (the right hand sides of the above examples) or after (the left hand sides of the examples) the operations of addition and scalar multiplication.

By the associativity of the addition operation denoted as +, for any vectors $\mathbf{u}_1, \ldots, \mathbf{u}_n \in V$ and scalars $c_1, \ldots, c_n \in K$ the following equality holds:[4][5]

$f(c_1 \mathbf{u}_1 + \cdots + c_n \mathbf{u}_n) = c_1 f(\mathbf{u}_1) + \cdots + c_n f(\mathbf{u}_n)$
Thus a linear map is one which preserves linear combinations.

Denoting the zero elements of the vector spaces $V$ and $W$ by $\mathbf{0}_V$ and $\mathbf{0}_W$ respectively, it follows that $f(\mathbf{0}_V) = \mathbf{0}_W$. Let $c = 0$ and $\mathbf{v} \in V$ in the equation for homogeneity of degree 1:

$f(\mathbf{0}_V) = f(0\mathbf{v}) = 0 f(\mathbf{v}) = \mathbf{0}_W$

A linear map $V \to K$ with $K$ viewed as a one-dimensional vector space over itself is called a linear functional.[6]

These statements generalize to any left-module ${}_R M$ over a ring $R$ without modification, and to any right-module upon reversing of the scalar multiplication.
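
As a concrete check of the two defining conditions, the following NumPy sketch (illustrative only; the matrix and test vectors are arbitrary choices) verifies additivity and homogeneity for a map of the form $\mathbf{x} \mapsto A\mathbf{x}$:

```python
import numpy as np

# A fixed matrix defines a map f(x) = A @ x from R^3 to R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def f(x):
    return A @ x

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

# Additivity: f(u + v) == f(u) + f(v)
assert np.allclose(f(u + v), f(u) + f(v))
# Homogeneity of degree 1: f(c u) == c f(u)
assert np.allclose(f(c * u), c * f(u))
# Consequence derived above: the origin is mapped to the origin.
assert np.allclose(f(np.zeros(3)), np.zeros(2))
```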

Examples

  • A prototypical example that gives linear maps their name is a function $f : \mathbb{R} \to \mathbb{R},\ x \mapsto cx$, of which the graph is a line through the origin.[7]
  • More generally, any homothety $\mathbf{v} \mapsto c\mathbf{v}$ centered in the origin of a vector space is a linear map (here c is a scalar).
  • The zero map $\mathbf{x} \mapsto \mathbf{0}$ between two vector spaces (over the same field) is linear.
  • The identity map on any module is a linear operator.
  • For real numbers, the map $x \mapsto x^2$ is not linear.
  • For real numbers, the map $x \mapsto x + 1$ is not linear (but is an affine transformation).
  • If $A$ is an $m \times n$ real matrix, then $A$ defines a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$ by sending a column vector $\mathbf{x} \in \mathbb{R}^n$ to the column vector $A\mathbf{x} \in \mathbb{R}^m$. Conversely, any linear map between finite-dimensional vector spaces can be represented in this manner; see § Matrices, below.
  • If $f : V \to W$ is an isometry between real normed spaces such that $f(0) = 0$, then $f$ is a linear map. This result is not necessarily true for complex normed spaces.[8]
  • Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions (a linear operator is a linear endomorphism, that is, a linear map with the same domain and codomain); see the coordinate sketch following this list. Indeed,
     $\frac{d}{dx}\left(af(x) + bg(x)\right) = a\frac{df(x)}{dx} + b\frac{dg(x)}{dx}$
  • A definite integral over some interval I is a linear map from the space of all real-valued integrable functions on I to $\mathbb{R}$. Indeed,
     $\int_u^v \left(af(x) + bg(x)\right)\,dx = a\int_u^v f(x)\,dx + b\int_u^v g(x)\,dx$
  • An indefinite integral (or antiderivative) with a fixed integration starting point defines a linear map from the space of all real-valued integrable functions on $\mathbb{R}$ to the space of all real-valued, differentiable functions on $\mathbb{R}$. Without a fixed starting point, the antiderivative maps to the quotient space of the differentiable functions by the linear space of constant functions.
  • If $V$ and $W$ are finite-dimensional vector spaces over a field F, of respective dimensions m and n, then the function that maps linear maps $f : V \to W$ to n × m matrices in the way described in § Matrices (below) is a linear map, and even a linear isomorphism.
  • The expected value of a random variable (which is in fact a function, and as such an element of a vector space) is linear, as for random variables $X$ and $Y$ we have $E[X + Y] = E[X] + E[Y]$ and $E[aX] = aE[X]$, but the variance of a random variable is not linear.
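
As a coordinate illustration of the differentiation example above (a sketch only; the degree bound 3 and the monomial basis are arbitrary choices), differentiation on polynomials of degree at most 3 is given by a 4 × 4 matrix acting on coefficient vectors:

```python
import numpy as np

# The coefficient vector [c0, c1, c2, c3] represents c0 + c1*x + c2*x^2 + c3*x^3.
# Differentiation sends it to [c1, 2*c2, 3*c3, 0]; in the monomial basis this
# linear map is the matrix D below.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)

p = np.array([5.0, -1.0, 4.0, 2.0])   # 5 - x + 4x^2 + 2x^3
print(D @ p)                          # [-1.  8.  6.  0.] = -1 + 8x + 6x^2
```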

Linear extensions

Often, a linear map is constructed by defining it on a subset of a vector space and then extending by linearity to the linear span of the domain. Suppose $X$ and $Y$ are vector spaces and $f : S \to Y$ is a function defined on some subset $S \subseteq X$. Then a linear extension of $f$ to $X$, if it exists, is a linear map $F : X \to Y$ defined on $X$ that extends $f$[note 1] (meaning that $F(s) = f(s)$ for all $s \in S$) and takes its values from the codomain of $f$.[9] When the subset $S$ is a vector subspace of $X$ then a ($Y$-valued) linear extension of $f$ to all of $X$ is guaranteed to exist if (and only if) $f : S \to Y$ is a linear map.[9] In particular, if $f$ has a linear extension to $\operatorname{span} S$, then it has a linear extension to all of $X$.

The map $f : S \to Y$ can be extended to a linear map $F : \operatorname{span} S \to Y$ if and only if whenever $n > 0$ is an integer, $c_1, \ldots, c_n$ are scalars, and $s_1, \ldots, s_n \in S$ are vectors such that $0 = c_1 s_1 + \cdots + c_n s_n$, then necessarily $0 = c_1 f(s_1) + \cdots + c_n f(s_n)$.[10] If a linear extension of $f$ exists then the linear extension $F$ is unique and

$F(c_1 s_1 + \cdots + c_n s_n) = c_1 f(s_1) + \cdots + c_n f(s_n)$
holds for all $n, c_1, \ldots, c_n$ and $s_1, \ldots, s_n$ as above.[10] If $S$ is linearly independent then every function $f : S \to Y$ into any vector space has a linear extension to a (linear) map $\operatorname{span} S \to Y$ (the converse is also true).

For example, if $X = \mathbb{R}^2$ and $Y = \mathbb{R}$ then the assignment $(1, 0) \to 1$ and $(0, 1) \to 2$ can be linearly extended from the linearly independent set of vectors $S = \{(1, 0), (0, 1)\}$ to a linear map on $\operatorname{span}\{(1, 0), (0, 1)\} = \mathbb{R}^2$. The unique linear extension $F : \mathbb{R}^2 \to \mathbb{R}$ is the map that sends $(x, y) = x(1, 0) + y(0, 1) \in \mathbb{R}^2$ to

$F(x, y) = x(1) + y(2) = x + 2y.$

Every (scalar-valued) linear functional $f$ defined on a vector subspace of a real or complex vector space $X$ has a linear extension to all of $X$. Indeed, the Hahn–Banach dominated extension theorem even guarantees that when this linear functional $f$ is dominated by some given seminorm $p : X \to \mathbb{R}$ (meaning that $|f(m)| \leq p(m)$ holds for all $m$ in the domain of $f$) then there exists a linear extension to $X$ that is also dominated by $p$.
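
The $\mathbb{R}^2$ worked example above can be reproduced numerically. The sketch below is illustrative only; it assumes the set $S$ forms a basis, so that coordinates can be found by solving a linear system:

```python
import numpy as np

# Assignment on the linearly independent set S = {(1, 0), (0, 1)}:
S = np.array([[1.0, 0.0],
              [0.0, 1.0]]).T       # columns are the vectors of S
values = np.array([1.0, 2.0])      # f(1, 0) = 1, f(0, 1) = 2

def F(v):
    # Write v as a linear combination of the vectors in S,
    # then extend by linearity.
    coeffs = np.linalg.solve(S, v)
    return coeffs @ values

print(F(np.array([3.0, 4.0])))     # 3*1 + 4*2 = 11.0, i.e. F(x, y) = x + 2y
```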

Matrices

If $V$ and $W$ are finite-dimensional vector spaces and a basis is defined for each vector space, then every linear map from $V$ to $W$ can be represented by a matrix.[11] This is useful because it allows concrete calculations. Matrices yield examples of linear maps: if $A$ is a real $m \times n$ matrix, then $f(\mathbf{x}) = A\mathbf{x}$ describes a linear map $\mathbb{R}^n \to \mathbb{R}^m$ (see Euclidean space).

Let $\mathbf{v}_1, \ldots, \mathbf{v}_n$ be a basis for $V$. Then every vector $\mathbf{v} \in V$ is uniquely determined by the coefficients $c_1, \ldots, c_n$ in the field $\mathbb{R}$:

$\mathbf{v} = c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n$

If $f : V \to W$ is a linear map,

$f(\mathbf{v}) = f(c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n) = c_1 f(\mathbf{v}_1) + \cdots + c_n f(\mathbf{v}_n)$

which implies that the function f is entirely determined by the vectors $f(\mathbf{v}_1), \ldots, f(\mathbf{v}_n)$. Now let $\mathbf{w}_1, \ldots, \mathbf{w}_m$ be a basis for $W$. Then we can represent each vector $f(\mathbf{v}_j)$ as

$f(\mathbf{v}_j) = a_{1j} \mathbf{w}_1 + \cdots + a_{mj} \mathbf{w}_m$

Thus, the function $f$ is entirely determined by the values of $a_{ij}$. If we put these values into an $m \times n$ matrix $M$, then we can conveniently use it to compute the vector output of $f$ for any vector in $V$. To get $M$, every column $j$ of $M$ is a vector

$\begin{pmatrix} a_{1j} \\ \vdots \\ a_{mj} \end{pmatrix}$
corresponding to $f(\mathbf{v}_j)$ as defined above. To define it more clearly, for some column $j$ that corresponds to the mapping $f(\mathbf{v}_j)$,
$\mathbf{M} = \begin{pmatrix} \cdots & a_{1j} & \cdots \\ & \vdots & \\ & a_{mj} & \end{pmatrix}$
where $M$ is the matrix of $f$. In other words, every column $j = 1, \ldots, n$ has a corresponding vector $f(\mathbf{v}_j)$ whose coordinates $a_{1j}, \ldots, a_{mj}$ are the elements of column $j$. A single linear map may be represented by many matrices. This is because the values of the elements of a matrix depend on the bases chosen.
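
The construction of $M$ can be carried out directly in coordinates: each column is the coordinate vector of $f(\mathbf{v}_j)$. A minimal sketch (with an arbitrarily chosen map and the standard bases of $\mathbb{R}^3$ and $\mathbb{R}^2$ assumed):

```python
import numpy as np

# A linear map f: R^3 -> R^2, here given directly as a Python function.
def f(x):
    return np.array([x[0] + 2.0 * x[1], 3.0 * x[2] - x[1]])

# With the standard bases, column j of M is f(e_j).
basis = np.eye(3)
M = np.column_stack([f(e) for e in basis])

x = np.array([1.0, -2.0, 0.5])
assert np.allclose(M @ x, f(x))   # the matrix reproduces the map
print(M)
```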

The matrices of a linear transformation can be represented visually:

  1. Matrix for $T$ relative to $B$: $A$
  2. Matrix for $T$ relative to $B'$: $A'$
  3. Transition matrix from $B'$ to $B$: $P$
  4. Transition matrix from $B$ to $B'$: $P^{-1}$
 
The relationship between matrices in a linear transformation

Starting in the bottom left corner $[\mathbf{v}]_{B'}$ and looking for the bottom right corner $[T(\mathbf{v})]_{B'}$, one would left-multiply; that is, $A'[\mathbf{v}]_{B'} = [T(\mathbf{v})]_{B'}$. The equivalent method would be the "longer" method going clockwise from the same point, such that $[\mathbf{v}]_{B'}$ is left-multiplied with $P^{-1}AP$, or $P^{-1}AP[\mathbf{v}]_{B'} = [T(\mathbf{v})]_{B'}$.

Examples in two dimensions

In two-dimensional space $\mathbb{R}^2$ linear maps are described by 2 × 2 matrices. These are some examples:

  • rotation
    • by 90 degrees counterclockwise:
       $\mathbf{A} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$
    • by an angle θ counterclockwise:
       $\mathbf{A} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$
  • reflection
    • through the x axis:
       $\mathbf{A} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
    • through the y axis:
       $\mathbf{A} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$
    • through a line making an angle θ with the origin:
       $\mathbf{A} = \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix}$
  • scaling by 2 in all directions:
     $\mathbf{A} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = 2\mathbf{I}$
  • horizontal shear mapping:
     $\mathbf{A} = \begin{pmatrix} 1 & m \\ 0 & 1 \end{pmatrix}$
  • skew of the y axis by an angle θ:
     $\mathbf{A} = \begin{pmatrix} 1 & -\sin\theta \\ 0 & \cos\theta \end{pmatrix}$
  • squeeze mapping:
     $\mathbf{A} = \begin{pmatrix} k & 0 \\ 0 & \frac{1}{k} \end{pmatrix}$
  • projection onto the y axis:
     $\mathbf{A} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$

If a linear map is only composed of rotation, reflection, and/or uniform scaling, then the linear map is a conformal linear transformation.
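
The matrices above act on coordinate vectors by multiplication. A short sketch (illustrative; the angles and vectors are arbitrary) applies a rotation and a reflection, and checks that composing two rotations adds their angles:

```python
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

reflect_x = np.array([[1.0, 0.0],
                      [0.0, -1.0]])

v = np.array([1.0, 0.0])
print(rotation(np.pi / 2) @ v)              # ~ [0, 1]: 90 degrees counterclockwise
print(reflect_x @ np.array([2.0, 3.0]))     # [2, -3]: reflection through the x axis

# Composition of rotations corresponds to adding the angles.
a, b = 0.3, 1.1
assert np.allclose(rotation(a) @ rotation(b), rotation(a + b))
```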

Vector space of linear maps

The composition of linear maps is linear: if $f : V \to W$ and $g : W \to Z$ are linear, then so is their composition $g \circ f : V \to Z$. It follows from this that the class of all vector spaces over a given field K, together with K-linear maps as morphisms, forms a category.

The inverse of a linear map, when defined, is again a linear map.

If $f_1 : V \to W$ and $f_2 : V \to W$ are linear, then so is their pointwise sum $f_1 + f_2$, which is defined by $(f_1 + f_2)(\mathbf{x}) = f_1(\mathbf{x}) + f_2(\mathbf{x})$.

If $f : V \to W$ is linear and $\alpha$ is an element of the ground field $K$, then the map $\alpha f$, defined by $(\alpha f)(\mathbf{x}) = \alpha (f(\mathbf{x}))$, is also linear.

Thus the set $\mathcal{L}(V, W)$ of linear maps from $V$ to $W$ itself forms a vector space over $K$,[12] sometimes denoted $\operatorname{Hom}(V, W)$.[13] Furthermore, in the case that $V = W$, this vector space, denoted $\operatorname{End}(V)$, is an associative algebra under composition of maps, since the composition of two linear maps is again a linear map, and the composition of maps is always associative. This case is discussed in more detail below.

Given again the finite-dimensional case, if bases have been chosen, then the composition of linear maps corresponds to matrix multiplication, the addition of linear maps corresponds to matrix addition, and the multiplication of linear maps with scalars corresponds to the multiplication of matrices with scalars.
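
This correspondence can be checked directly in coordinates; the sketch below (with arbitrary small random matrices standing in for maps expressed in chosen bases) verifies that composition, pointwise sum, and scalar multiples of linear maps match the corresponding matrix operations:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))   # matrix of f: R^3 -> R^2
B = rng.standard_normal((4, 2))   # matrix of g: R^2 -> R^4
C = rng.standard_normal((2, 3))   # matrix of another map h: R^3 -> R^2
x = rng.standard_normal(3)

f = lambda v: A @ v
g = lambda v: B @ v
h = lambda v: C @ v

assert np.allclose(g(f(x)), (B @ A) @ x)        # composition <-> matrix product
assert np.allclose(f(x) + h(x), (A + C) @ x)    # pointwise sum <-> matrix sum
assert np.allclose(2.5 * f(x), (2.5 * A) @ x)   # scalar multiple <-> scaled matrix
```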

Endomorphisms and automorphisms

A linear transformation $f : V \to V$ is an endomorphism of $V$; the set of all such endomorphisms $\operatorname{End}(V)$ together with addition, composition and scalar multiplication as defined above forms an associative algebra with identity element over the field $K$ (and in particular a ring). The multiplicative identity element of this algebra is the identity map $\operatorname{id} : V \to V$.

An endomorphism of $V$ that is also an isomorphism is called an automorphism of $V$. The composition of two automorphisms is again an automorphism, and the set of all automorphisms of $V$ forms a group, the automorphism group of $V$, which is denoted by $\operatorname{Aut}(V)$ or $\operatorname{GL}(V)$. Since the automorphisms are precisely those endomorphisms which possess inverses under composition, $\operatorname{Aut}(V)$ is the group of units in the ring $\operatorname{End}(V)$.

If $V$ has finite dimension $n$, then $\operatorname{End}(V)$ is isomorphic to the associative algebra of all $n \times n$ matrices with entries in $K$. The automorphism group of $V$ is isomorphic to the general linear group $\operatorname{GL}(n, K)$ of all $n \times n$ invertible matrices with entries in $K$.
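
For finite-dimensional $V$ this can be tested on matrices: an endomorphism is an automorphism exactly when its matrix is invertible. A small sketch (with an arbitrary 3 × 3 example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# A represents an automorphism exactly when it is invertible (nonzero determinant).
print(np.linalg.det(A))                   # nonzero (= 3), so A lies in GL(3, R)
A_inv = np.linalg.inv(A)
assert np.allclose(A_inv @ A, np.eye(3))  # the inverse map is again linear
```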

Kernel, image and the rank–nullity theorem

If $f : V \to W$ is linear, we define the kernel and the image or range of $f$ by

$\ker(f) = \{\mathbf{x} \in V : f(\mathbf{x}) = \mathbf{0}\}$
$\operatorname{im}(f) = \{\mathbf{w} \in W : \mathbf{w} = f(\mathbf{x}),\ \mathbf{x} \in V\}$

$\ker(f)$ is a subspace of $V$ and $\operatorname{im}(f)$ is a subspace of $W$. The following dimension formula is known as the rank–nullity theorem:[14]

$\dim(\ker(f)) + \dim(\operatorname{im}(f)) = \dim(V)$

The number $\dim(\operatorname{im}(f))$ is also called the rank of $f$ and written as $\operatorname{rank}(f)$, or sometimes, $\rho(f)$;[15][16] the number $\dim(\ker(f))$ is called the nullity of $f$ and written as $\operatorname{null}(f)$ or $\nu(f)$.[15][16] If $V$ and $W$ are finite-dimensional, bases have been chosen and $f$ is represented by the matrix $A$, then the rank and nullity of $f$ are equal to the rank and nullity of the matrix $A$, respectively.
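
The rank–nullity theorem can be checked numerically for a map given by a matrix; a brief sketch with an arbitrarily chosen matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [0.0, 1.0, 0.0, 1.0]])   # a map from R^4 to R^3

rank = np.linalg.matrix_rank(A)        # dim im(f)
nullity = A.shape[1] - rank            # dim ker(f), by rank-nullity
print(rank, nullity)                   # 2 2, and 2 + 2 = dim(R^4) = 4
```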

Cokernel

A subtler invariant of a linear transformation $f : V \to W$ is the cokernel, which is defined as

$\operatorname{coker}(f) := W / f(V) = W / \operatorname{im}(f)$

This is the dual notion to the kernel: just as the kernel is a subspace of the domain, the co-kernel is a quotient space of the target. Formally, one has the exact sequence

$0 \to \ker(f) \to V \to W \to \operatorname{coker}(f) \to 0$

These can be interpreted thus: given a linear equation f(v) = w to solve,

  • the kernel is the space of solutions to the homogeneous equation f(v) = 0, and its dimension is the number of degrees of freedom in the space of solutions, if it is not empty;
  • the co-kernel is the space of constraints that the solutions must satisfy, and its dimension is the maximal number of independent constraints.

The dimension of the co-kernel and the dimension of the image (the rank) add up to the dimension of the target space. For finite dimensions, this means that the dimension of the quotient space W/f(V) is the dimension of the target space minus the dimension of the image.

As a simple example, consider the map f: $\mathbb{R}^2 \to \mathbb{R}^2$, given by f(x, y) = (0, y). Then for an equation f(x, y) = (a, b) to have a solution, we must have a = 0 (one constraint), and in that case the solution space is (x, b), or equivalently stated, (0, b) + (x, 0) (one degree of freedom). The kernel may be expressed as the subspace (x, 0) < V: the value of x is the freedom in a solution – while the cokernel may be expressed via the map $W \to \mathbb{R}$, $(a, b) \mapsto (a)$: given a vector (a, b), the value of a is the obstruction to there being a solution.
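
In coordinates, the dimensions appearing in this example can be read off from the matrix of f; a small sketch (illustrative only):

```python
import numpy as np

A = np.array([[0.0, 0.0],
              [0.0, 1.0]])            # matrix of f(x, y) = (0, y)

rank = np.linalg.matrix_rank(A)       # dim im(f) = 1
nullity = A.shape[1] - rank           # dim ker(f) = 1 (the x-axis)
coker_dim = A.shape[0] - rank         # dim coker(f) = dim(W) - rank = 1
print(nullity, coker_dim)             # 1 1: one degree of freedom, one constraint
```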

An example illustrating the infinite-dimensional case is afforded by the map $f : \mathbb{R}^\infty \to \mathbb{R}^\infty$, $\{a_n\} \mapsto \{b_n\}$, with $b_1 = 0$ and $b_{n+1} = a_n$ for $n > 0$. Its image consists of all sequences with first element 0, and thus its cokernel consists of the classes of sequences with identical first element. Thus, whereas its kernel has dimension 0 (it maps only the zero sequence to the zero sequence), its co-kernel has dimension 1. Since the domain and the target space are the same, the rank and the dimension of the kernel add up to the same sum as the rank and the dimension of the co-kernel ($\aleph_0 + 0 = \aleph_0 + 1$), but in the infinite-dimensional case it cannot be inferred that the kernel and the co-kernel of an endomorphism have the same dimension (0 ≠ 1). The reverse situation obtains for the map $h : \mathbb{R}^\infty \to \mathbb{R}^\infty$, $\{a_n\} \mapsto \{c_n\}$, with $c_n = a_{n+1}$. Its image is the entire target space, and hence its co-kernel has dimension 0, but since it maps all sequences in which only the first element is non-zero to the zero sequence, its kernel has dimension 1.

Index

For a linear operator with finite-dimensional kernel and co-kernel, one may define index as:

$\operatorname{ind}(f) := \dim(\ker f) - \dim(\operatorname{coker} f),$
namely the degrees of freedom minus the number of constraints.

For a transformation between finite-dimensional vector spaces, this is just the difference dim(V) − dim(W), by rank–nullity. This gives an indication of how many solutions or how many constraints one has: if mapping from a larger space to a smaller one, the map may be onto, and thus will have degrees of freedom even without constraints. Conversely, if mapping from a smaller space to a larger one, the map cannot be onto, and thus one will have constraints even without degrees of freedom.
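
The sketch below (using arbitrary randomly generated matrices) illustrates that for maps between finite-dimensional spaces the index depends only on the dimensions, not on the particular map:

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(3):
    A = rng.standard_normal((3, 5))   # some map from R^5 to R^3
    rank = np.linalg.matrix_rank(A)
    ker_dim = 5 - rank                # rank-nullity
    coker_dim = 3 - rank
    print(ker_dim - coker_dim)        # always 5 - 3 = 2
```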

The index of an operator is precisely the Euler characteristic of the 2-term complex 0 → V → W → 0. In operator theory, the index of Fredholm operators is an object of study, with a major result being the Atiyah–Singer index theorem.[17]

Algebraic classifications of linear transformations

No classification of linear maps could be exhaustive. The following incomplete list enumerates some important classifications that do not require any additional structure on the vector space.

Let V and W denote vector spaces over a field F and let T: VW be a linear map.

Monomorphism

T is said to be injective or a monomorphism if any of the following equivalent conditions are true:

  1. T is one-to-one as a map of sets.
  2. ker T = {0_V}
  3. dim(ker T) = 0
  4. T is monic or left-cancellable, which is to say, for any vector space U and any pair of linear maps R: UV and S: UV, the equation TR = TS implies R = S.
  5. T is left-invertible, which is to say there exists a linear map S: WV such that ST is the identity map on V.

Epimorphism

T is said to be surjective or an epimorphism if any of the following equivalent conditions are true:

  1. T is onto as a map of sets.
  2. coker T = {0_W}
  3. T is epic or right-cancellable, which is to say, for any vector space U and any pair of linear maps R: WU and S: WU, the equation RT = ST implies R = S.
  4. T is right-invertible, which is to say there exists a linear map S: WV such that TS is the identity map on W.

Isomorphism

T is said to be an isomorphism if it is both left- and right-invertible. This is equivalent to T being both one-to-one and onto (a bijection of sets) or also to T being both epic and monic, and so being a bimorphism.

If T: VV is an endomorphism, then:

  • If, for some positive integer n, the n-th iterate of T, $T^n$, is identically zero, then T is said to be nilpotent.
  • If $T^2 = T$, then T is said to be idempotent.
  • If T = kI, where k is some scalar, then T is said to be a scaling transformation or scalar multiplication map; see scalar matrix.

Change of basis

Given a linear map that is an endomorphism, with matrix A in the basis B of the space, it transforms vector coordinates [u] as [v] = A[u]. As vectors change with the inverse of B (vectors are contravariant), its inverse transformation is [v] = B[v'].

Substituting this in the first expression

$B[v'] = AB[u']$
hence
$[v'] = B^{-1}AB[u'] = A'[u']$

Therefore, the matrix in the new basis is $A' = B^{-1}AB$, where $B$ is the matrix of the given basis.
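
A numerical check of this formula (with an arbitrary endomorphism matrix A and an arbitrary invertible change-of-basis matrix B, both chosen for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])        # matrix of the endomorphism in the old basis
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # columns are the new basis vectors in old coordinates

A_new = np.linalg.inv(B) @ A @ B  # matrix of the same map in the new basis

# Acting in new coordinates and converting back to the old basis agrees
# with acting directly in old coordinates.
u_new = np.array([2.0, -1.0])
u_old = B @ u_new
assert np.allclose(B @ (A_new @ u_new), A @ u_old)
print(A_new)
```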

Therefore, linear maps are said to be 1-co- 1-contra-variant objects, or type (1, 1) tensors.

Continuity

A linear transformation between topological vector spaces, for example normed spaces, may be continuous. If its domain and codomain are the same, it will then be a continuous linear operator. A linear operator on a normed linear space is continuous if and only if it is bounded, which is the case, for example, when the domain is finite-dimensional.[18] An infinite-dimensional domain may have discontinuous linear operators.

An example of an unbounded, hence discontinuous, linear transformation is differentiation on the space of smooth functions equipped with the supremum norm (a function with small values can have a derivative with large values, while the derivative of 0 is 0). For a specific example, sin(nx)/n converges to 0, but its derivative cos(nx) does not, so differentiation is not continuous at 0 (and by a variation of this argument, it is not continuous anywhere).
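
The sup norms behind this example can be sampled on a grid (a numerical illustration, not a proof; the interval, grid size, and values of n are arbitrary choices):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 10001)
for n in (1, 10, 100, 1000):
    f = np.sin(n * x) / n          # sampled sup norm ~ 1/n, tends to 0
    df = np.cos(n * x)             # derivative, sampled sup norm stays near 1
    print(n, np.max(np.abs(f)), np.max(np.abs(df)))
```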

Applications

A specific application of linear maps is for geometric transformations, such as those performed in computer graphics, where the translation, rotation and scaling of 2D or 3D objects is performed by the use of a transformation matrix. Linear mappings also are used as a mechanism for describing change: for example, in calculus they correspond to derivatives; or in relativity, they are used as a device to keep track of the local transformations of reference frames.

Another application of these transformations is in compiler optimizations of nested-loop code, and in parallelizing compiler techniques.

See also

  • Additive map – Z-module homomorphism
  • Antilinear map – Conjugate homogeneous additive map
  • Bent function – Special type of Boolean function
  • Bounded operator – Linear transformation between topological vector spaces
  • Cauchy's functional equation – Functional equation
  • Continuous linear operator
  • Linear functional – Linear map from a vector space to its field of scalars
  • Linear isometry – Distance-preserving mathematical transformation

Notes

  1. ^ "Linear transformations of V into V are often called linear operators on V." Rudin 1976, p. 207
  2. ^ Let V and W be two real vector spaces. A mapping a from V into W is called a 'linear mapping' or 'linear transformation' or 'linear operator' [...] from V into W, if
      $a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}$ for all $\mathbf{u}, \mathbf{v} \in V$,
      $a(\lambda \mathbf{u}) = \lambda a\mathbf{u}$ for all $\mathbf{u} \in V$ and all real λ. Bronshtein & Semendyayev 2004, p. 316
  3. ^ Rudin 1991, p. 14
    Here are some properties of linear mappings $\Lambda : X \to Y$ whose proofs are so easy that we omit them; it is assumed that $A \subset X$ and $B \subset Y$:
    1. $\Lambda 0 = 0.$
    2. If A is a subspace (or a convex set, or a balanced set) the same is true of $\Lambda(A)$
    3. If B is a subspace (or a convex set, or a balanced set) the same is true of $\Lambda^{-1}(B)$
    4. In particular, the set:
       $\Lambda^{-1}(\{0\}) = \{\mathbf{x} \in X : \Lambda \mathbf{x} = 0\} = N(\Lambda)$
      is a subspace of X, called the null space of $\Lambda$.
  4. ^ Rudin 1991, p. 14. Suppose now that X and Y are vector spaces over the same scalar field. A mapping $\Lambda : X \to Y$ is said to be linear if $\Lambda(\alpha \mathbf{x} + \beta \mathbf{y}) = \alpha \Lambda \mathbf{x} + \beta \Lambda \mathbf{y}$ for all $\mathbf{x}, \mathbf{y} \in X$ and all scalars $\alpha$ and $\beta$. Note that one often writes $\Lambda \mathbf{x}$, rather than $\Lambda(\mathbf{x})$, when $\Lambda$ is linear.
  5. ^ Rudin 1976, p. 206. A mapping A of a vector space X into a vector space Y is said to be a linear transformation if: $A(\mathbf{x}_1 + \mathbf{x}_2) = A\mathbf{x}_1 + A\mathbf{x}_2,\ A(c\mathbf{x}) = cA\mathbf{x}$ for all $\mathbf{x}, \mathbf{x}_1, \mathbf{x}_2 \in X$ and all scalars c. Note that one often writes $A\mathbf{x}$ instead of $A(\mathbf{x})$ if A is linear.
  6. ^ Rudin 1991, p. 14. Linear mappings of X onto its scalar field are called linear functionals.
  7. ^ "terminology - What does 'linear' mean in Linear Algebra?". Mathematics Stack Exchange. Retrieved 2021-02-17.
  8. ^ Wilansky 2013, pp. 21–26.
  9. ^ a b Kubrusly 2001, p. 57.
  10. ^ a b Schechter 1996, pp. 277–280.
  11. ^ Rudin 1976, p. 210 Suppose $\{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$ and $\{\mathbf{y}_1, \ldots, \mathbf{y}_m\}$ are bases of vector spaces X and Y, respectively. Then every $A \in L(X, Y)$ determines a set of numbers $a_{i,j}$ such that
     $A\mathbf{x}_j = \sum_{i=1}^{m} a_{i,j}\,\mathbf{y}_i \quad (1 \leq j \leq n)$
    It is convenient to represent these numbers in a rectangular array of m rows and n columns, called an m by n matrix:
     $[A] = \begin{bmatrix} a_{1,1} & a_{1,2} & \ldots & a_{1,n} \\ a_{2,1} & a_{2,2} & \ldots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \ldots & a_{m,n} \end{bmatrix}$
    Observe that the coordinates $a_{i,j}$ of the vector $A\mathbf{x}_j$ (with respect to the basis $\{\mathbf{y}_1, \ldots, \mathbf{y}_m\}$) appear in the jth column of $[A]$. The vectors $A\mathbf{x}_j$ are therefore sometimes called the column vectors of $[A]$. With this terminology, the range of A is spanned by the column vectors of $[A]$.
  12. ^ Axler (2015) p. 52, § 3.3
  13. ^ Tu (2011), p. 19, § 3.1
  14. ^ Horn & Johnson 2013, 0.2.3 Vector spaces associated with a matrix or linear transformation, p. 6
  15. ^ a b Katznelson & Katznelson (2008) p. 52, § 2.5.1
  16. ^ a b Halmos (1974) p. 90, § 50
  17. ^ Nistor, Victor (2001) [1994], "Index theory", Encyclopedia of Mathematics, EMS Press: "The main question in index theory is to provide index formulas for classes of Fredholm operators ... Index theory has become a subject on its own only after M. F. Atiyah and I. Singer published their index theorems"
  18. ^ Rudin 1991, p. 15 1.18 Theorem Let $\Lambda$ be a linear functional on a topological vector space X. Assume $\Lambda \mathbf{x} \neq 0$ for some $\mathbf{x} \in X$. Then each of the following four properties implies the other three:
    1. $\Lambda$ is continuous
    2. The null space $N(\Lambda)$ is closed.
    3. $N(\Lambda)$ is not dense in X.
    4. $\Lambda$ is bounded in some neighbourhood V of 0.
  1. ^ One map $F$ is said to extend another map $f$ if when $f$ is defined at a point $s$, then so is $F$, and $F(s) = f(s)$.

Bibliography

  • Axler, Sheldon Jay (2015). Linear Algebra Done Right (3rd ed.). Springer. ISBN 978-3-319-11079-0.
  • Bronshtein, I. N.; Semendyayev, K. A. (2004). Handbook of Mathematics (4th ed.). New York: Springer-Verlag. ISBN 3-540-43491-7.
  • Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces (2nd ed.). Springer. ISBN 0-387-90093-4.
  • Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. ISBN 978-0-521-83940-2.
  • Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
  • Kubrusly, Carlos (2001). Elements of Operator Theory. Boston: Birkhäuser. ISBN 978-1-4757-3328-0. OCLC 754555941.
  • Lang, Serge (1987). Linear Algebra (3rd ed.). New York: Springer-Verlag. ISBN 0-387-96412-6.
  • Rudin, Walter (1973). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 25 (1st ed.). New York: McGraw-Hill. ISBN 978-0-07-054225-9.
  • Rudin, Walter (1976). Principles of Mathematical Analysis (3rd ed.). New York: McGraw-Hill. ISBN 978-0-07-054235-8.
  • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (2nd ed.). New York: McGraw-Hill. ISBN 978-0-07-054236-5. OCLC 21163277.
  • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (2nd ed.). New York: Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
  • Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
  • Swartz, Charles (1992). An Introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067.
  • Tu, Loring W. (2011). An Introduction to Manifolds (2nd ed.). Springer. ISBN 978-0-8218-4419-9.
  • Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications. ISBN 978-0-486-49353-4. OCLC 849801114.