
0. Presentation of the Course

Exams

100% of the grade; can use any notes, including the internet. Take-home exam.

Types of Questions

  • Numerical Questions
  • Theoretical questions

Difficulty Levels

  • 60-80 marks: questions of the same type as those solved during the lectures
  • 20-40 marks: questions that require some creativity and reasoning

Topics

Mainly about algebra and calculus

  • Matrices
  • Systems of Linear Equations
  • Vector Spaces
  • Linear Mappings

1. The Basics

Basic Definitions and notation

A set is a collection of items, named elements, characterised by a certain property: $A = \{2, 4, 7, D, 24\}$. An element $x$ belongs to $A$, indicated as $x \in A$. Several elements of $A$ compose a subset $S$: $S \subset A$.

To indicate all the elements of the set: $\forall x \in A$. To say there exists (at least) one element of $A$ which has a certain property (such that): $\exists x \in A \ni'$.
To say there exists only/exactly one element of $A$ which has a certain property: $\exists! x \in A \ni'$. IF statement 1 THEN statement 2: $\rightarrow$. Statement 1 IF AND ONLY IF statement 2: $\iff$.

Cardinality of a set

Coincident: every element of $A$ is also an element of $B$ and every element of $B$ is also an element of $A$. The cardinality of a set $A$ is the number of elements contained in $A$. An empty set is indicated with $\emptyset$.

Intersection and Union

Intersection: $C = A \cap B$ - the set containing all the elements that are in both $A$ and $B$. Union: $C = A \cup B$ - the set containing all the elements that are in either or both of the sets $A$ and $B$. Difference: $C = A \setminus B$ - the set containing all the elements that are in $A$ but not in $B$.

Associativity of the Intersection

$(A\cap B) \cap C = A\cap (B\cap C)$

Numbers and Number sets

A set is finite if its cardinality is a finite number. A set is infinite if its cardinality is $\infty$. A set is continuous if $\forall x_0 \in A: \exists x \ni' |x-x_0|$ is smaller than any arbitrarily small positive number.

Types of Number Sets

Continuing from last semester: $\mathbb{C}$ - the set of numbers that can be expressed as $a+jb$ where $a,b\in \R$ and the imaginary unit $j=\sqrt{-1}$.

Cartesian Product

New set generated by all the possible pairs: $C=A\times B$. Example: $A=\{1,5,7\}$, $B=\{2,3\}$, $C=A\times B = \{(1,2),(1,3),(5,2),(5,3),(7,2),(7,3)\}$.
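A minimal Python sketch of the same example, using `itertools.product` to enumerate the pairs (the sets A and B are the ones above):

```python
from itertools import product

A = {1, 5, 7}
B = {2, 3}

# Cartesian product: every ordered pair (a, b) with a in A and b in B
C = set(product(A, B))
print(C)  # the six pairs (1,2), (1,3), (5,2), (5,3), (7,2), (7,3)
```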

Relations

Order Relation

Indicated with $\preceq$ if the following properties are verified:

  • reflexivity
  • transitivity
  • antisymmetry

Example

Let $x,y \in \N$ and consider $(x,y) \in \mathscr{R}$ if and only if $x$ is a multiple of $y$. More formally, $(x,y)\in \mathscr{R}\iff \frac{x}{y} = k$ with $k\in \N$.

Equivalence

Indicated with $\equiv$ if the following properties are verified:

  • Reflexivity: $x \equiv x$
  • Symmetry: if $x\equiv y$ then $y\equiv x$
  • Transitivity: if $x\equiv y$ and $y\equiv z$ then $x \equiv z$

Equivalence Classes

Let $\mathscr{R}$ be an equivalence relation defined on $A$. The equivalence class of an element $a$ is the set defined as $[a]=\{ x \in A \mid x \equiv a \}$

Partition

The set of all the elements equivalent to $a \in A$ is called the equivalence class and is indicated with $[a]$.

Functions/Mapping

A relation is said to be a mapping or function when it relates to any element of a set a unique element of another. Let $A$ and $B$ be two sets; a mapping $f:A \to B$ is a relation $\mathscr{R} \subseteq A\times B$ such that: $\forall x \in A\ \exists! y \in B \ni' (x,y) \in \mathscr{R}$ (for every element $x$, you find only one element $y$ in the other set such that $y = f(x)$).

The statement $(x,y) \in \mathscr{R}$ can also be expressed as $y=f(x)$. Unary operator: $f:A \to B$. Binary operator: $f:A \times A \to B$ or $f:A\times C\to B$. Internal composition law: $f:A\times A\to A$.

Algebraic structure: a set endowed with one or more internal composition laws, e.g. $(A,+,.)$.

Injective Functions

$f: A \to B$ is an injection if the function values of two different elements are always different: $\forall x_1, x_2$: if $x_1 \ne x_2$ then $f(x_1) \ne f(x_2)$. The graph never crosses a horizontal line twice.

Surjective Functions

$f: A \to B$ is surjective if all elements of $B$ are mapped by an element of $A$: $\forall y\in B$ it follows that $\exists x \in A$ such that $y=f(x)$. The graph has no holes: every value on the y axis is reached, i.e. every horizontal line crosses the graph.

Bijective Functions

$f: A \to B$ is bijective when both injection and surjection are verified.

Numeric Vector

Numeric Vector

  • Let $n\in \N$ and $n>0$
  • Indicated with $\R^n$, the set of ordered n-tuples of real numbers
  • A generic element $a = (a_1,a_2,...,a_n)$ is named a numeric vector or simply a vector of order $n$

Sum of Vectors

  • Sum of vectors: $c = (a_1 + b_1, a_2 + b_2, ..., a_n + b_n)$
  • Can also do the same for $a-b$

Scalars

A numeric value $\lambda \in \R$ is a scalar. The product of a vector by a scalar is the vector $c=(\lambda a_1,\lambda a_2,...,\lambda a_n)$. It behaves very much like ordinary multiplication.

Scalar Product

Generated by the sum of the products of each pair of corresponding components: $ab = c = a_1b_1 + a_2b_2 + ... + a_nb_n$. Example:

  • $a = (1, 0, 3)$
  • $b = (2, 1, -2)$
  • $ab = (1 \cdot 2) + (0 \cdot 1) + (3 \cdot (-2)) = 2 + 0 - 6 = -4$

Properties

  • Symmetry: $ab=ba$
  • Associativity: $\lambda (ba) = (\lambda a)b = a(\lambda b)$
  • Distributivity: $a(b+c)=ab+ac$
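As a quick sanity check, the example above can be reproduced in NumPy (a minimal sketch; the vectors are the ones from the example):

```python
import numpy as np

a = np.array([1, 0, 3])
b = np.array([2, 1, -2])

# scalar (dot) product: sum of the products of corresponding components
print(np.dot(a, b))   # -4
# symmetry: ab = ba
print(np.dot(b, a))   # -4
```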

Matrices

Matrix

Let $m$ and $n$ be 2 natural numbers > 0. A matrix ($m\times n$) is a generic table of the kind

$A=\begin{pmatrix} a_{1,1}&a_{1,2}&...&a_{1,n} \\ a_{2,1}&a_{2,2}&...&a_{2,n} \\ ...&...&...&...\\ a_{m,1}&a_{m,2}&...&a_{m,n} \end{pmatrix}$ where each matrix element $a_{i,j}\in \R$

  • $m$ = rows, $n$ = columns
  • $n$ order square matrix: $A \in \R_{n,n}$
  • rectangular: $m \ne n$

Matrix Transpose

  • The transpose matrix is $A^T$ whose elements are the same as those of $A$ but with $\forall i,j : a^T_{i,j} = a_{j,i}$
  • Swap the index of each row with the index of each column (flip the matrix over its diagonal)

Symmetry

  • Only on square matrix
  • When $\forall i,j : a_{i,j} = a_{j,i}$

Diagonal and Trace

  • The diagonal of a matrix is the ordered n-tuple of the elements that display the same index twice: $a_{i,i}$ for $i$ from 1 to $n$
  • The trace is the sum of the diagonal elements, tr($A$)

Null Matrices

  • A matrix is said to be null, $O$, if all its elements are zeros

Identity Matrices

  • Square matrix whose diagonal elements are all ones while all the other extra-diagonal elements are zero

Matrix Operations: Sum and Product

Matrix Sum

$\forall i,j : c_{i,j} = a_{i,j} + b_{i,j}$. Both matrices must be the same size! Matrices can also be subtracted from one another in the same way.

Properties of the matrix sum

  • commutativity: $A + B = B + A$
  • associativity: $(A + B) + C = A + (B + C)$
  • neutral element: $A + O = A$
  • opposite element: $\forall A \in \R_{m,n} : \exists! B \in \R_{m,n} \ni' A + B = O$

Product of a scalar and a matrix

The matrix $C$ is defined as: $\forall i,j : c_{i,j} = \lambda a_{i,j}$. Each element is simply multiplied by the scalar.

Properties of multiplying by a scalar

  • associativity: $\forall A \in \R_{m,n}$ and $\forall \lambda, \mu \in \R$:
    • $(\lambda \mu)A = (A\mu)\lambda = (A\lambda)\mu$


Matrix product

The product of matrices $A$ and $B$ is a matrix $C = AB$ whose generic element $c_{i,j}$ is the scalar product of the $i^{th}$ row of $A$ with the $j^{th}$ column of $B$.

  • Just the scalar product of row vectors of AA with column vectors of BB
  • Number of columns in the first matrix must be the same as the number of rows in the second

$c_{2,1} = a_2 b^1$ (the scalar product of the second row of $A$ with the first column of $B$)

  • left distributivity: $A(B + C) = AB + AC$
  • right distributivity: $(B + C)A = BA + CA$
  • associativity: $A(BC) = (AB)C$
  • transpose of the product: $(AB)^T = B^T A^T$
  • neutral element: $\forall A : AI = A$
  • absorbing element: $\forall A : AO = O$

Commutable: $AB=BA$, one with respect to the other. Every matrix $A$ is commutable with $O$ (and the result is always $O$) and with $I$ (and the result is always $A$).
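A minimal NumPy sketch of the row-by-column product and of two of the properties above (the matrices are made up for illustration):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

C = A @ B   # c_ij = scalar product of row i of A with column j of B
print(C)

# the product is generally NOT commutative
print(np.array_equal(A @ B, B @ A))          # False

# transpose of the product: (AB)^T = B^T A^T
print(np.array_equal((A @ B).T, B.T @ A.T))  # True
```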

Determinant of a Matrix

(takes decades to understand)

  • Most important concepts of maths
  • Permutation: a different way of ordering items; the number of permutations of $n$ items is $n!$
  • Fundamental Permutation: a reference ordering of the sequence
  • Inversion: every time two objects in a permutation follow each other in reverse order with respect to the fundamental permutation
  • Even class permutation: a permutation that has undergone an even number of inversions
    • Also have Odd class permutation

Associated Product

Take $n$ elements of the matrix such that no two of them are in the same column or row. Their product is referred to as the associated product and is indicated with the symbol $\epsilon(c)$. Ordering the factors according to the row index, the associated product is $a_{1,c_1} a_{2,c_2} \cdots a_{n,c_n}$, where $(c_1, c_2, ..., c_n)$ is a permutation of the column indices.

Coefficient $\eta_k$

Considering $1, 2, ..., n$ as the fundamental permutation, the scalar $\eta_k$ is defined as $+1$ if the permutation of the column indices in the $k^{th}$ associated product is of even class and $-1$ if it is of odd class.

Determinant of a Matrix

Indicated as det $A$, it is the function $\det: \R_{n,n} \to \R$ defined as the sum of the $n!$ associated products, each multiplied by its coefficient: $\det A = \sum^{n!}_{k=1} \eta_k\, \epsilon(c_k)$. For a $3\times 3$ matrix: take the left-to-right diagonals with $+$ and the right-to-left diagonals with $-$.
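A small sketch of this definition (sum of the $n!$ signed associated products), checked against NumPy's built-in determinant; the matrix is made up:

```python
import numpy as np
from itertools import permutations

def det_by_definition(A):
    """Determinant as the sum of the n! associated products, each with
    sign +1 (even class permutation) or -1 (odd class permutation)."""
    n = len(A)
    total = 0.0
    for cols in permutations(range(n)):
        # count inversions with respect to the fundamental permutation 0,1,...,n-1
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if cols[i] > cols[j])
        sign = -1 if inversions % 2 else 1
        prod = 1.0
        for row, col in enumerate(cols):   # factors ordered by row index
            prod *= A[row, col]
        total += sign * prod
    return total

A = np.array([[1, 2, 0],
              [3, 1, 4],
              [0, 2, 2]])
print(det_by_definition(A))   # -18.0
print(np.linalg.det(A))       # -18.0 (up to rounding)
```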

Linear Dependence/Independence

Linear Combination

A row is a linear combination of the other rows if it can be represented as a weighted sum of those rows by means of scalars.

Linear dependence

The rows are linearly dependent if the null vector can be expressed as a linear combination of the $m$ rows ($n$ columns) by means of non-null scalars. Rows are linearly dependent if

$\exists \lambda_1,\lambda_2,...,\lambda_m\ne0,0,...,0$ such that $\lambda_1 a_1 + \lambda_2 a_2 + ... + \lambda_m a_m = o$

Linear Independence

The rows are linearly independent if the only way to express a row of all zeros as a linear combination of the $m$ rows ($n$ columns) is by means of null scalars.

Fundamental property of Linear Dependence

r rows are linearly dependent if and only if at least one row can be expressed as the linear combination of the others

Properties of determinants

The determinant of a matrix $A$ is equal to the determinant of its transpose matrix.

  • Transpose matrix: $\det A = \det A^T$
  • Triangular matrix: the determinant is equal to the product of the diagonal elements
  • Row (column) swap: if two rows (columns) are swapped, the determinant of the resulting matrix $A_s$ is $-\det A$
  • Null determinant: if two rows (columns) are identical then $\det A=0$
    • If and only if the rows (columns) are linearly dependent
    • if and only if at least one row (column) is a linear combination of the other rows (columns)
    • if a row (column) is proportional to another row (column)
  • Invariant determinant: if to a row (column) the elements of another row (column), all multiplied by the same scalar, are added, the determinant remains the same
  • Row (column) multiplied by a scalar: if a row (column) is multiplied by $\lambda$, the determinant becomes $\lambda \det A$
  • Matrix multiplied by a scalar: if $\lambda$ is a scalar, $\det(\lambda A) = \lambda ^n \det A$
  • Determinant of the product: the determinant of the product of two matrices is equal to the product of the determinants: $\det(AB) = \det(A)\det(B) = \det(B)\det(A) = \det(BA)$

Adjugate Matrices

Submatrix

Obtained from $A$ by cancelling $m-r$ rows and $n-s$ columns, where $r,s$ are 2 positive integers such that $1\le r \le m$ and $1\le s\le n$.

  • Rows don't have to be contiguous
  • $A=\begin{pmatrix} 3&3&1&0\\ 2&4&1&2\\ 5&1&1&1 \end{pmatrix}$
  • A submatrix can be obtained by cancelling the second row and the second and fourth columns:
  • $\begin{pmatrix} 3&1\\ 5&1 \end{pmatrix}$

  • Minor: the determinant of a square submatrix
  • Major: the minor of a largest square submatrix
  • Complement submatrix: obtained by cancelling from $A$ the $i^{th}$ row and the $j^{th}$ column corresponding to the element $a_{i,j}$
  • Complement minor: its determinant, indicated $M_{i,j}$
  • Cofactor: $A_{i,j} = (-1)^{i+j}M_{i,j}$

Adjugate Matrix

$A=\begin{pmatrix} a_{1,1}&a_{1,2}&...&a_{1,n} \\ a_{2,1}&a_{2,2}&...&a_{2,n} \\ ...&...&...&...\\ a_{n,1}&a_{n,2}&...&a_{n,n} \end{pmatrix}$

Let $A_{i,j}$ be the cofactor for $a_{i,j}$. The adjugate matrix (adjunct or adjoint) of $A$ is: $adj(A)=\begin{pmatrix} A_{1,1}&A_{2,1}&...&A_{n,1} \\ A_{1,2}&A_{2,2}&...&A_{n,2} \\ ...&...&...&...\\ A_{1,n}&A_{2,n}&...&A_{n,n} \end{pmatrix}$ (the transpose of the matrix of cofactors).

Dan terms: transpose it (flip $i,j$ around), then for each entry calculate the determinant of the submatrix that remains once its row and column are removed (the complement minor, as you would do normally). Then alternate between $+1$ and $-1$ (use the cofactor formula $(-1)^{i+j}$).

Tutorial Def:

  1. Start from $A$
  2. Take $A^T$
  3. Compute all complement minors
  4. Multiply by the coefficients $(-1)^{i+j}$ = adjugate matrix
  5. $A^{-1} = \frac{1}{\det A}\, adj(A)$
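A sketch of these steps in NumPy (cofactors from complement minors, adjugate as the transpose of the cofactor matrix, inverse as adj($A$)/det $A$); the $3\times 3$ matrix is made up:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]

# cofactors: A_ij = (-1)^(i+j) * complement minor M_ij
cof = np.zeros_like(A)
for i in range(n):
    for j in range(n):
        sub = np.delete(np.delete(A, i, axis=0), j, axis=1)  # cancel row i and column j
        cof[i, j] = (-1) ** (i + j) * np.linalg.det(sub)

adjA = cof.T                        # adjugate = transpose of the cofactor matrix
A_inv = adjA / np.linalg.det(A)     # A^{-1} = adj(A) / det A

print(np.allclose(A @ A_inv, np.eye(n)))   # True
```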

Laplace Theorems

Theorem 1

The determinant of $A$ can be computed as the sum of the elements of any row (column), each multiplied by the corresponding cofactor: $\det A = \sum ^n _{j=1} a_{i,j} A_{i,j}$ for any arbitrary $i$, or $\det A = \sum ^n _{i=1} a_{i,j} A_{i,j}$ for any arbitrary $j$.

The determinant of a one-element matrix is just the element ($\det(a) = a$), so by applying the theorem recursively we can compute the determinant of a square matrix of any order.

Theorem 2

The sum of the elements of a row (column) multiplied by the corresponding cofactors related to another row (column) is always zero: $\sum ^n _{j=1} a_{i,j} A_{k,j}=0$ for any arbitrary $k\ne i$, and $\sum ^n _{i=1} a_{i,j} A_{i,k}=0$ for any arbitrary $k\ne j$.

The cofactors come from the row you remove (you take the determinant of the submatrix of the remaining part); $k$ is the index of the other row whose cofactors are used.
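A quick numerical check of both Laplace theorems (expansion of a row by its own cofactors gives det $A$; expansion against another row's cofactors gives 0). The matrix is made up:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]

def cofactor(A, i, j):
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)

# Theorem 1: row i=0 expanded by its own cofactors gives det A
print(sum(A[0, j] * cofactor(A, 0, j) for j in range(n)))   # 8.0
print(np.linalg.det(A))                                     # 8.0

# Theorem 2: row i=0 against the cofactors of row k=1 gives 0
print(sum(A[0, j] * cofactor(A, 1, j) for j in range(n)))   # ~0.0
```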

Introduction to Matrix inversion

Inverting a matrix

For a square matrix $A$, the inverse is $A^{-1}$, which is defined as the matrix for which $AA^{-1}=I$.

  • Invertible matrix: $\exists$ a matrix $B \in \R_{n,n} \mid AB=I=BA$
  • Unique inverse: $\exists!$ a matrix $B \in \R_{n,n} \mid AB=I=BA$
  • Inverse matrix $A^{-1}$: $A^{-1} = \frac{1}{\det A}\, adj(A)$
  • Singular / non-invertible / linearly dependent: $\det(A) = 0$
  • Non-singular: $\det(A)\ne 0$
  • Inverse of a matrix product: $(AB)^{-1}=B^{-1}A^{-1}$

Orthogonal Matrices

$A\in \R_{n,n}$ is said to be orthogonal if the product between it and its transpose is the identity matrix:

$AA^T = I = A^TA$

An orthogonal matrix is always non-singular and its determinant is either 1 or -1

A matrix is orthogonal if and only if the sum of the squares of the elements of each row (column) is equal to 1 and the scalar product of any two distinct rows (columns) is equal to 0: $\sum^n_{j=1}a^2_{i,j} =1$ and $\forall i \ne j: a_i a_j=0$.
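A small check that a rotation matrix satisfies the conditions above (rows of unit norm, distinct rows orthogonal, determinant $\pm 1$); the angle is arbitrary:

```python
import numpy as np

theta = 0.7                      # arbitrary rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A @ A.T, np.eye(2)))          # True: A A^T = I = A^T A
print(np.linalg.det(A))                         # 1.0 (always 1 or -1 for orthogonal matrices)
print(np.allclose((A ** 2).sum(axis=1), 1.0))   # each row's squared elements sum to 1
print(np.isclose(A[0] @ A[1], 0.0))             # distinct rows have zero scalar product
```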

Rank of a Matrix

Rank: for $A\in \R_{m,n}$, the rank of the matrix $A$, indicated as $\rho_A$, is the highest order of a non-singular square submatrix $A_\rho \subset A$. So the max value of the rank of a $3\times 3$ matrix is 3; if its rows are linearly dependent (determinant 0), go down to the $2\times 2$ submatrices, and so forth. If $A$ is the null matrix then its rank is taken equal to 0.

For $A\in \R_{n,n}$: the matrix $A$ has $\rho_A$ linearly independent rows (columns); the rank is exactly this number of linearly independent rows (columns).
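A minimal sketch of the definition: in the made-up matrix below the third row is the sum of the first two, so the only $3\times 3$ submatrix is singular and the rank drops to 2:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, 1, 1],
              [1, 3, 4]])   # row 3 = row 1 + row 2 (linearly dependent)

print(np.isclose(np.linalg.det(A), 0))   # True: the 3x3 submatrix is singular
print(np.linalg.matrix_rank(A))          # 2: a non-singular 2x2 submatrix exists
```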

Sylvester's Lemma

Let $A\in \R_{n,n}$ and $B\in \R_{n,q}$. Let $A$ be non-singular and $\rho_B$ be the rank of the matrix $B$. It follows that the rank of the product matrix $AB$ is $\rho_B$.

Law of Nullity

This follows from the previous lemma. Let $\rho_A$ and $\rho_B$ be the ranks of $A$ and $B$ and $\rho_{AB}$ be the rank of the product $AB$. It follows that: $\rho_{AB}\ge \rho_A+\rho_B-n$

Systems of Linear Equations

Basically simultaneous equations. Can be written as $Ax=b$.

The coefficient matrix $A$ is said to be the incomplete matrix. The matrix $A^c\in \R_{m,n+1}$, whose first $n$ columns are those of the matrix $A$ and whose $(n+1)^{th}$ column is the vector $b$, is said to be the complete matrix:

$A^c=(A|b)$

Cramer's Theorem

If $A$ is non-singular, there is only one solution simultaneously satisfying all the equations: if $\det A\ne 0$ then $\exists! x$ such that $Ax=b$.

Hybrid Matrix

The hybrid matrix with respect to the $i^{th}$ column is the matrix $A_i$ obtained from $A$ by substituting the $i^{th}$ column with $b$ (switch $a^i$ with $b$).

Cramer's Method

For a given system of linear equations $Ax=b$ with $A$ non-singular, a generic solution element $x_i$ of $x$ can be computed as $x_i=\frac{\det A_i}{\det A}$

For a $3\times 3$ matrix you would repeat this 3 times, replacing each of the 3 columns with $b$ in turn.
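A sketch of Cramer's method on a made-up $3\times 3$ system: each $x_i$ is the determinant of the hybrid matrix (column $i$ replaced by $b$) over $\det A$:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

detA = np.linalg.det(A)          # must be non-zero (non-singular)
x = np.zeros(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                 # hybrid matrix: replace the i-th column with b
    x[i] = np.linalg.det(Ai) / detA

print(x)
print(np.allclose(A @ x, b))     # True: the solution satisfies the system
```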

Rouchè-Capelli Theorem

Solutions for systems of linear equations

  • compatible if it has at least one solution
  • determined if it has only one solution
  • undetermined if it has infinite solutions
  • incompatible if it has no solutions (in this case the complete matrix $A^c=A|b$ has a rank different from that of $A$)

Theorem

$Ax=b$ is compatible if and only if the incomplete and complete matrices ($A$ and $A^c$) are characterised by the same rank, $\rho_A = \rho_{A^c}=\rho$, named the rank of the system.

Cases

$\rho_A<\rho_{A^c}$: incompatible, no solutions. $\rho_A=\rho_{A^c}$: the system is compatible. Under these conditions, three cases can be identified:

  1. $\rho_A=\rho_{A^c}=\rho=n=m$: the system is a Cramer's system and can be solved with Cramer's method
  2. $\rho_A=\rho_{A^c}=\rho=n<m$: $\rho$ equations of the system compose a Cramer's system. The remaining $m-\rho$ equations are linear combinations of the others; these equations are redundant and there is only 1 solution
  3. $\rho_A=\rho_{A^c}=\rho\begin{cases}< n\\\le m\end{cases}$: the system is undetermined and has $\infty^{n-\rho}$ solutions

How to choose the equations to cancel?

  • Cannot cancel just any equation
  • Need to ensure that the cancellation of an equation does not change the rank of the incomplete matrix $A$
  • Have to cancel only linearly dependent equations, that is, linearly dependent rows of the matrix $A^c$

General Solutions of Undetermined Systems

  • Select $\rho$ linearly independent rows of the complete matrix $A^c$
  • Choose (arbitrarily) $n-\rho$ variables and replace them with parameters
  • Solve the system of linear equations with respect to the remaining variables
  • Express the parametric vector as a sum of vectors, where each parameter appears in only one vector

Homogeneous Systems of Linear Equations

A system is homogeneous if the vector of known terms $b$ is composed of only zeros and is indicated with $O$. It always has at least the null vector as a solution.

General solution of a system

If $x$ is a solution of the homogeneous system then $\lambda x$ is also a solution to the system.

Combination of solutions

If $\alpha$ and $\beta$ are both solutions then every linear combination of these two n-tuples is a solution of the system: $\forall\lambda,\mu\in\R$, $(\lambda\alpha_1+\mu\beta_1,\lambda\alpha_2+\mu\beta_2,...,\lambda\alpha_n+\mu\beta_n)$ is a solution.

Special Property

A homogeneous system of $n$ equations in $n+1$ variables has $\infty^1$ solutions, all proportional to a single n-tuple.

  • Cramer's Method requires the calculation of 1 determinant of an $n$ order matrix and $n$ determinants of $n$ order (hybrid) matrices
  • This means roughly $n!+n(n!)$ elementary operations
  • Need to find alternative methods as this takes too long

Theoretical Foundations of Direct Methods

Elementary Row Operations

  • $E_1$: swap of two rows $a_i$ and $a_j$: $a_i \leftarrow a_j$, $a_j \leftarrow a_i$
  • $E_2$: multiplication of a row $a_i$ by a scalar $\lambda\in\R$: $a_i\leftarrow\lambda a_i$
  • $E_3$: substitution of a row $a_i$ by the sum of the row $a_i$ and another row $a_j$: $a_i\leftarrow a_i+a_j$

Equivalent Matrices and Equivalent Systems

Equivalent matrices: applying the elementary row operations on $A$, we obtain a new matrix $C\in\R_{m,n}$ that is said to be equivalent to $A$.

Equivalent systems: 2 systems of linear equations in the same variables, $Ax=b$ and $Cx=d$. These two systems are equivalent if they have the same solutions.

Consider a system of $m$ linear equations in $n$ variables $Ax=b$. Let $A^c\in\R_{m,n+1}$ be the complete matrix associated with this system. If another system of linear equations is associated with a complete matrix $A^{'c}\in\R_{m,n+1}$ equivalent to $A^c$, then the two systems are equivalent.

  • $E_1$: the swap of two rows; the equations of the system are swapped, with no effect on the solution of the system
  • $E_2$: same solutions; the modified system is equivalent to the original one
  • $E_3$: after the operation, the modified system is equivalent in solutions to the original one

Direct Methods

Direct methods are methods for solving systems of linear equations by manipulating the original matrix to produce an equivalent system that can be easily solved. A typical manipulation is the transformation of a system of linear equations into a triangular system.

Gaussian Elimination

$Ax=b$

  1. Construct the complete matrix $A^c$
  2. Apply the elementary row operations to obtain a staircase complete matrix and a triangular incomplete matrix, that is, we add rows to obtain a null element in the desired position
  3. Write down the new system of linear equations
  4. Solve the $n^{th}$ equation of the system and use the result to solve the $(n-1)^{th}$
  5. Continue recursively until the first equation

To obtain the matrix we would do: $r^{(2)}_2=r^{(1)}_2+\left(\frac{-a^{(1)}_{2,1}}{a^{(1)}_{1,1}}\right) r^{(1)}_1$. As the elimination goes on, the expressions get longer and longer, creating a matrix with leading zeros.

Dan method: $a_{n,m}-(a_{n,1}\times a_{n-1,m})$. This completes the first 2 rows and the first column; then increase $m$ to do the second column.
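A compact sketch of Gaussian elimination with back substitution on the complete matrix (no pivoting, so it assumes the diagonal entries never become zero; the system is made up):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 4.0])

Ac = np.hstack([A, b.reshape(-1, 1)])   # complete matrix (A | b)
n = len(b)

# forward elimination: add multiples of row k to zero the entries below the pivot
for k in range(n - 1):
    for i in range(k + 1, n):
        m = -Ac[i, k] / Ac[k, k]
        Ac[i, :] = Ac[i, :] + m * Ac[k, :]

# back substitution: solve the n-th equation first, then move upwards
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (Ac[i, -1] - Ac[i, i + 1:n] @ x[i + 1:]) / Ac[i, i]

print(x)
print(np.allclose(A @ x, b))   # True
```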

LU factorisation

A direct method that transforms a matrix into a matrix product $LU$, where $L$ is a lower triangular matrix having diagonal elements all equal to 1 and $U$ is an upper triangular matrix: $Ax=b\Rightarrow LUx=b$. Pose $Ux=y$, solve at first the triangular system $Ly=b$ and then extract $x$ from the triangular system $Ux=y$. It does not alter the vector of known terms $b$.

Any non-singular matrix can be split into such a product $A=LU$. If the matrix is square and non-singular ($\det A \ne 0$), then $\exists!$ a lower triangular matrix $L$ having all the diagonal elements equal to 1 and $\exists!$ an upper triangular matrix $U$ such that $A=LU$.
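A sketch of Doolittle LU factorisation ($L$ with unit diagonal) followed by the two triangular solves $Ly=b$ and $Ux=y$; it assumes the matrix needs no row swaps (same made-up system as above):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 4.0])
n = len(b)

# factorisation: A = LU, L unit lower triangular, U upper triangular
L = np.eye(n)
U = A.copy()
for k in range(n - 1):
    for i in range(k + 1, n):
        L[i, k] = U[i, k] / U[k, k]
        U[i, :] = U[i, :] - L[i, k] * U[k, :]

# forward substitution: L y = b
y = np.zeros(n)
for i in range(n):
    y[i] = b[i] - L[i, :i] @ y[:i]

# back substitution: U x = y
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

print(np.allclose(L @ U, A))   # True: A = LU
print(np.allclose(A @ x, b))   # True: x solves the system
```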

Equivalence of Gaussian elimination and LU factorisation

  • Gaussian elimination and LU factorisation are both direct methods
  • They are not identical but they are equivalent
  • Whilst performing the algorithm the LU factorisation is implicitly performed
  • Both Gaussian elimination and LU factorisation have a lower complexity than theoretical methods
  • The complexity is in the order of n3n^3 operations

Short introduction to Iterative Methods

Jacobi's Method: starts from an initial guess $x^{(0)}$ and iteratively applies an update formula to approach the solution of the system, $x_i^{(k+1)} = \frac{1}{a_{i,i}}\left(b_i - \sum_{j\ne i} a_{i,j} x_j^{(k)}\right)$. Unlike direct methods, which converge to the theoretical solution in a finite time, iterative methods are approximate since they converge only under some conditions.
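A sketch of the Jacobi iteration (each new component uses only values from the previous guess; it converges here because the made-up matrix is diagonally dominant):

```python
import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 4.0])

x = np.zeros(3)                  # initial guess x^(0)
for _ in range(50):              # fixed number of iterations for simplicity
    x_new = np.zeros_like(x)
    for i in range(3):
        s = A[i, :] @ x - A[i, i] * x[i]      # sum of a_ij * x_j for j != i
        x_new[i] = (b[i] - s) / A[i, i]
    x = x_new

print(x)
print(np.allclose(A @ x, b))     # True (within tolerance)
```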

Definition of Vector Space

Vector Space

  • Let $E$ be a non-null set and $K$ be a scalar set
  • Vectors: elements of the set $E$
  • $+$ : internal composition law, $E\times E \to E$
  • $.$ : external composition law, $K\times E\to E$
  • $(E,+,.)$ is said to be a vector space of the vector set $E$ over the scalar field $(K,+,\cdot)$ if and only if the ten vector space axioms are verified

Axioms

  • $E$ is closed with respect to the internal composition law: $\forall u, v \in E : u + v \in E$
  • $E$ is closed with respect to the external composition law: $\forall u \in E$ and $\forall \lambda \in K : \lambda u \in E$
  • Commutativity for the internal composition law: $\forall u, v \in E : u + v = v + u$
  • Associativity for the internal composition law: $\forall u, v, w \in E : u + (v + w) = (u + v) + w$
  • Neutral element for the internal composition law: $\forall u \in E : \exists! o \in E \mid u + o = u$
  • Opposite element for the internal composition law: $\forall u \in E : \exists! {-u} \in E \mid u + (-u) = o$
  • Associativity for the external composition law: $\forall u \in E$ and $\forall \lambda, \mu \in K : \lambda(\mu u) = (\lambda\mu)u = \lambda\mu u$
  • Distributivity 1: $\forall u, v \in E$ and $\forall \lambda \in K : \lambda(u + v) = \lambda u + \lambda v$
  • Distributivity 2: $\forall u \in E$ and $\forall \lambda, \mu \in K : (\lambda + \mu)u = \lambda u + \mu u$
  • Neutral element for the external composition law: $\forall u \in E : \exists! 1 \in K \mid 1u = u$

Vector Subspace

Let $(E,+,.)$ be a vector space, $U \subset E$ and $U \ne \emptyset$. The triple $(U,+,.)$ is a vector subspace of $(E,+,.)$ if $(U,+,.)$ is a vector space over the same field $K$ with respect to both the composition laws.

Proposition shows that we do not need to prove all 10 axioms, just need to prove closure of the two composition laws.

Null vector in Vector Spaces

Let $(E,+,.)$ be a vector space over a field $K$. Every vector subspace $(U,+,.)$ of $(E,+,.)$ contains the null vector. For every vector space $(E,+,.)$, at least two vector subspaces exist.

Intersection and Sum Spaces

Intersection Spaces

If $(U,+,.)$ and $(V,+,.)$ are two vector subspaces of $(E,+,.)$, then $(U\cap V,+,.)$ is a vector subspace of $(E,+,.)$.

Sum Space

Let $(U,+,.)$ and $(V,+,.)$ be two vector subspaces of $(E,+,.)$. The sum subset is the set $S=U+V$ defined as $S=U+V=\{ w\in E \mid \exists u \in U, v \in V \mid w=u+v \}$

Direct Sum

Let $(U,+,.)$ and $(V,+,.)$ be two vector subspaces of $(E,+,.)$. If $U\cap V = \{ o\}$, the subset sum $S=U+V$ is indicated as $S=U\oplus V$ and named the subset direct sum.

Linear dependence in $n$ dimensions

Basic Definition

  • A linear combination of the $n$ vectors by means of $n$ scalars is the vector $\lambda_1v_1+\lambda_2v_2+...+\lambda_nv_n$
  • The vectors are said to be linearly dependent if the null vector $o$ can be expressed as a linear combination by means of scalars $\lambda_1,\lambda_2,...,\lambda_n\ne 0,0,...,0$
  • The vectors are linearly independent if the null vector $o$ can be expressed as a linear combination only by means of the scalars $0,0,...,0$
  • Are linearly dependent if and only if at least one of them can be expressed as linear combination of the others
  • Would then check them as you would do in matrices

Linear Span

The set containing the totality of all the possible linear combinations of the $n$ vectors by means of $n$ scalars: $L (v_1 , v_2 , . . . , v_n ) = \{ \lambda_1 v_1 + \lambda_2 v_2 + . . . + \lambda_n v_n \mid \lambda_1 , \lambda_2 , . . . , \lambda_n \in K \}$

Properties

  1. The span $L(v_1,v_2,...,v_n)$ with the composition laws is a vector subspace of $(E,+,.)$
  2. Let $s\in \N$ with $s<n$. If $s$ vectors are linearly independent while each of the remaining $n-s$ vectors is a linear combination of the $s$ linearly independent vectors, then $L (v_1 , v_2 , . . . , v_n ) = L (v_1 , v_2 , . . . , v_s )$
  3. $L (v_1 , v_2 , . . . , v_n ) = L (v_{\sigma(1)} , v_{\sigma(2)} , . . . , v_{\sigma(n)})$ for any permutation $\sigma$: the span does not depend on the order of the vectors

Basis of a Vector Space

Basis

A vector space is said to be finite-dimensional if $\exists$ a finite number of vectors $v_1, ..., v_n$ such that the vector space $(L,+,.)=(E,+,.)$, where the span $L$ is $L(v_1, ..., v_n)$.

A basis $B=\{v_1, ..., v_n\}$ of $(E,+,.)$ is a set of vectors $\in E$ that verify the following properties: the vectors are linearly independent and they span $E$, i.e. $L(v_1, ..., v_n) = E$.

Null Vector and Linear Dependence

If one of the vectors is equal to the null vector, then the vectors are linearly dependent.

Steinitz Lemma

Let $(E,+,.)$ be a finite-dimensional vector space and $L(v_1, ..., v_n)=E$ its span. Let $w_1, ..., w_s$ be $s$ linearly independent vectors $\in E$. It follows that $s\le n$: the number of vectors in a set of linearly independent vectors cannot be higher than the number of vectors spanning the vector space.

Corollary of the Steinitz Lemma

Continuing from above: let $B=\{w_1, ..., w_s\}$ be a basis of $(E,+,.)$; it follows that $s\le n$. The vectors composing the basis are linearly independent, so by Steinitz's Lemma it follows immediately that $s\le n$.

Dimension of a Basis

Order of a Basis

The number of vectors composing a basis is said to be the order of the basis. All the bases of a vector space have the same order.

Dimension of a Vector Space

The order of a basis is said to be the dimension and is indicated with dim $(E,+,.)$ or dim($E$). The dimension dim $(E,+,.)=n$ of a vector space is:

  • the maximum number of linearly independent vectors of E
  • the minimum number of vectors spanning E

Linear independence and dimension

In an $n$-dimensional vector space, $n$ vectors span the space if and only if they are linearly independent.

Grassmann Formula

Reduction and Extension of a basis

Basis Reduction Theorem: if some (linearly dependent) vectors are removed from a set of vectors spanning $E$, a basis of $(E,+,.)$ is obtained. Basis Extension Theorem: let $w_1, ..., w_s$ be $s$ linearly independent vectors of the vector space; if $w_1, ..., w_s$ are not already a basis, they can be extended to a basis (by adding other linearly independent vectors).

Unique Representation

If the $n$ vectors are linearly dependent while $n-1$ of them are linearly independent, there is a unique way to express one vector as a linear combination of the others (this is how you would represent one row in terms of the others, identifying the $\lambda$ coefficients).

Grassmann's Formula

Let $(U,+,.)$ and $(V,+,.)$ be vector subspaces of $(E,+,.)$. Then, $\dim(U+V)+\dim(U\cap V) = \dim(U) + \dim(V)$

Basic Definitions of Mappings

Mapping and Domain

Let $U$ be a set such that $U\subseteq E$. The relation $f$ is said to be a mapping when $\forall u\in U : \exists!w\in F \ni' f(u)=w$. The set $U$ is said to be the domain and is indicated with $dom(f)$.

A vector $w$ such that $w=f(u)$ is said to be the mapped (or transformed) of $u$ through $f$.

Image

The image ($Im$) is the set of all vectors $w$ for which there exists $u$ such that $f(u)=w$: $Im(f)=\{w\in F \mid \exists u\in E\ni' f(u)=w\}$

dom/domain = input; Image = output. The image is also a subset of $F$.

Surjective, injective, bijective

A mapping $f$ is surjective if the image of $f$ coincides with $F$: $Im(f)=F$

A mapping $f$ is injective if $\forall u,v \in E$: $f(u)=f(v)\Rightarrow u=v$ (if $u$ is different from $v$, then $f(u)$ is different from $f(v)$)

A mapping $f$ is bijective if $f$ is injective and surjective

Linear Mappings

Linear Mapping

Linear mapping if the following properties are valid:

  • Additivity: $\forall u,v \in E:f(u+v)=f(u)+f(v)$
  • Homogeneity: $\forall\lambda \in K$ and $\forall v\in E :f(\lambda v)= \lambda f(v)$

(This happens very very rarely)

Affine Mapping

A mapping is said to be affine if $g(v) = f(v) - f(u)$ is linear (for a fixed vector $u$).

Linear Mappings

The image of $U$ through $f$, $f(U)$, is the set $f(U)=\{w\in F \mid \exists u \in U \ni' f(u)=w\}$. It follows that the triple $(f(U),+,.)$ is a vector subspace of $(F,+,.)$.

Inverse Image

Let $f: E\to F$ and $W\subset F$. The inverse image of $W$ through $f$, indicated with $f^{-1}(W)$, is the set defined as: $f^{-1}(W)=\{u\in E \mid f(u)\in W\}$

Matrix Representation

A linear mapping can be expressed as the product of a matrix by a vector: $y=f(x)=Ax$

Image from a matrix

The mapping $y=f(x)$ is expressed as a matrix equation $y=Ax$. It follows that the image of the mapping $Im(f)$ is spanned by the column vectors of the matrix $A$: $Im(f)=L(a^1,a^2,...,a^n)$ where $A=(a^1,a^2,...,a^n)$

(Just take the matrix $A$ and treat each of its columns as its own vector spanning the image.)

Endomorphisms and Kernel

Endomorphism

Let $f: E\to F$. If $E=F$, i.e. $f:E\to E$, then the linear mapping is an endomorphism.

Null Mapping

$O:E \to F$ is a mapping defined as $\forall v \in E: O(v) = o_F$

Identity Mapping

($E$ and $F$ should have the same dimension.) $I:E\to F$ is a mapping defined as $\forall v\in E: I(v)=v$

Matrix Representation

Let $f: E\to E$ be an endomorphism. The mapping is the multiplication of a square matrix by a vector: $f(x)=Ax$

$y=f(x)=Ax$. The inverse function is $x=f^{-1}(y)=A^{-1}y$

Linear dependence

This needs to be an endomorphism: if $v_1, ..., v_n$ are linearly dependent then so are $f(v_1), ..., f(v_n)$.

Kernel

Let $f: E\to F$ be a linear mapping. $\ker(f)=\{v\in E \mid f(v) = o_F\}$

Kernel as Vector Space

The triple $(\ker(f),+,.)$ is a vector subspace of $(E,+,.)$

Kernel and Injection

Let $f:E\to F$ be a linear mapping and $u,v\in E$. It follows that $f(u)=f(v)$ if and only if $u-v\in \ker(f)$

Theorem

A mapping is injective if and only if: $\ker(f)=\{o_E\}$

Linear Independence and injection

Let $f: E\to F$ be a linear mapping and $v_1, ..., v_n$ be $n$ linearly independent vectors $\in E$. If $f$ is injective then $f(v_1), ..., f(v_n)$ are also linearly independent vectors of $F$. An endomorphism $f$ is invertible if and only if it is injective.

Rank-Nullity Theorem

Let $f:E\to F$ be a linear mapping. The rank of $f$ is the dimension of the image, $rk(f) := \dim(Im(f))$. The rank is an invariant of the map.

The dimension of the kernel, $\dim(\ker(f))$, is said to be the nullity of the mapping.

Theorem

Under the hypotheses above, the sum of the rank and the nullity of a mapping is equal to the dimension of the vector space $(E,+,.)$: $\dim(\ker(f)) + \dim(Im(f)) = \dim(E)$. Usually $\dim(Im(f))$ is the hardest to calculate and this theorem gives an easy way to compute it as $\dim(E)-\dim(\ker(f))$.

Proof

If $\dim(\ker(f))=0$ then $f$ is injective. If the basis spans the image, the vectors are linearly independent.
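A numerical illustration of the theorem for a mapping $f(x)=Ax$ with a made-up matrix: the rank is the dimension of the column space, the nullity is the dimension of the null space, and they sum to $\dim(E)$ = number of columns (assumes SciPy is available for `null_space`):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])      # third column = first + second, E = R^3

rank = np.linalg.matrix_rank(A)       # dim(Im(f))
nullity = null_space(A).shape[1]      # dim(ker(f))

print(rank, nullity, rank + nullity)  # 2 1 3 = dim(E)
```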

Corollaries of Rank-Nullity Theorem

Let $f:E\to E$ be an endomorphism where $(E,+,.)$ is a finite-dimensional vector space.

  • If ff is injective then it is also surjective
  • if ff is surjective then it is also injective

Corollary

Let $f:E\to F$ be a linear mapping with vector spaces $(E,+,.)$ and $(F,+,.)$.

  • Consider $\dim(E)>\dim(F)$. It follows that the mapping is not injective.
  • Consider $\dim(E)<\dim(F)$. It follows that the mapping is not surjective.

Eigenvalues and Eigenvectors

Let $f:E\to E$ be an endomorphism where $(E,+,.)$ is a vector space over the scalar field $K$. Every vector $x$ such that $f(x)=\lambda x$, with $\lambda$ a scalar and $x\in E\setminus \{o_E\}$, is said to be an eigenvector of the endomorphism $f$ related to the eigenvalue $\lambda$. (Eigenvalues and eigenvectors are building blocks of reality.)

Dan terms: $\lambda$ = eigenvalue (a number multiplying $x$); $x$ = eigenvector (a non-null vector); finding them leads to a homogeneous system (right-hand side of zeros).

Eigenspace

Let $f:E\to E$ be an endomorphism. The set $V(\lambda)\subset E$ with $\lambda\in K$ is defined as $V(\lambda)=\{o_E\}\cup\{x\in E \mid f(x)=\lambda x\}$. This is said to be the eigenspace of the endomorphism $f$ related to the eigenvalue $\lambda$. The dimension of the eigenspace is said to be the geometric multiplicity of the eigenvalue $\lambda$ and is indicated with $\gamma_m$.

Determining Eigenvalues and Eigenvectors

Let $f:E\to E$ be an endomorphism and let $p(\lambda)$ be the order $n$ characteristic polynomial related to the endomorphism, $p(\lambda) = \det(A - \lambda I)$; the eigenvalues are the roots of $p(\lambda)$. The number of times an eigenvalue appears as a root of $p(\lambda)$ is called its algebraic multiplicity.
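A sketch with NumPy: the eigenvalues are the roots of the characteristic polynomial $\det(A-\lambda I)$, and each eigenvector satisfies $Ax=\lambda x$ (the matrix is made up):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# roots of the characteristic polynomial p(lambda) = det(A - lambda I)
print(np.roots(np.poly(A)))          # 3 and 1

# eigenvalues and eigenvectors directly
vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):     # eigenvectors are the columns of vecs
    print(np.allclose(A @ v, lam * v))   # True: f(x) = lambda x
```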

Introduction to Diagonalization

Diagonalization leaves one variable for each line, in a diagonal manner. Not all the mappings/matrices can be diagonalized: we need enough linearly independent eigenvectors to form the columns of $P$.

Let $f:E\to E$ be an endomorphism. The matrix is diagonalizable if and only if it has $n$ linearly independent eigenvectors; this is the case when:

  • All the eigenvalues are distinct, or
  • The algebraic multiplicity of each eigenvalue coincides with its geometric multiplicity
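A sketch of diagonalization for the same kind of symmetric matrix: $P$ has the eigenvectors as columns, $D$ the eigenvalues on the diagonal, and $A = P D P^{-1}$ (here $P$ even comes out orthogonal, $P^{-1}=P^T$):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, P = np.linalg.eig(A)     # columns of P = n linearly independent eigenvectors
D = np.diag(vals)              # diagonal matrix of the eigenvalues

print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}
print(np.allclose(np.linalg.inv(P), P.T))         # True here: symmetric A with distinct eigenvalues gives orthogonal P
```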

Symmetric Mappings

If the mapping is characterised by a symmetric matrix, then it can be proved that:

  • The mapping is always diagonalisable
  • The transformation matrix $P$ can be built to be orthogonal ($P^{-1}=P^{T}$). This also means the new reference system given by the eigenvectors is a convenient orthogonal system.