It wasn’t my greatest week of all time, but it was relatively productive. I’m happy to say I made it through nine videos, although I unfortunately found a lot of what was taught confusing. In the big picture, I feel like I made some decent progress understanding linear algebra in general this week, but there were definitely a few things that Trefor talked about that flew completely over my head. Unlike the last few weeks, for some reason I didn’t write out many of my own notes this week but just took screen shots of the nine videos and wrote out what each screen shot was explaining. (Or at least my interpretation of what each screen shot was explaining. 🙄) I’m not sure why I didn’t write my own notes for each video. Probably because it takes a lot longer doing it that way, and I was keen to get through as many videos as possible. Plus, as you’ll see, it didn’t make much sense for me to write my own notes for a lot of videos based on how they were presented. Even though I should have made more of an effort to write out notes, I now only have 30 videos left in this playlist, so I’ve at least got that going for me, which is nice. 🙃
(I’m going to list videos in chronological order, add their screen shots, and write below each screen shot what I wrote in my notes about each of them.)
Video 1 – The Vector Space of Polynomials: Span, Linear Independence and Basis
This SS shows vectors in R2 that when moved…
Are linearly dependent.
Question: Are the expressions (1 – x), (1 + x), and x2 linearly independent or linearly dependent?
This is where I get lost… It looks like you set the sum of the terms equal to 0, with coefficients t1, t2, and t3 in front of each term. I now think that’s the standard test: if the ONLY way the sum can equal 0 is with t1 = t2 = t3 = 0, the terms are linearly independent. You then do some algebra to group the coefficients of 1, x, and x2, which I think could (should?) be thought of as a vector [1, x, x2], but I’m not sure.
At the end it shows that you can separate the equation and state that each coefficient expression MUST equal 0 for the entire equation to equal 0. (Which, if the only way to do that is t1 = t2 = t3 = 0, I believe means the terms would be L.I.)
I don’t get this part either. I guess you can then turn the three expressions that equal 0 into a 3×3 matrix which holds the coefficients attached to each t variable. You can use E.R.O.s to turn the matrix into RREF, and because a leading one shows up in every column, the only solution is t1 = t2 = t3 = 0, which… I guess means the expressions (1 – x), (1 + x), and x2 are linearly independent… 🤔
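(After the fact, I tried to sanity-check this with a few lines of Python using sympy. This is just my reconstruction of the check, not necessarily how Trefor sets it up: each polynomial becomes a column of coefficients with respect to 1, x, and x2, and then you look at the RREF.)

```python
from sympy import Matrix

# Each COLUMN holds one polynomial's coefficients with respect
# to 1, x, x2 (rows = constant term, x term, x2 term):
#   1 - x -> ( 1, -1, 0)
#   1 + x -> ( 1,  1, 0)
#   x2    -> ( 0,  0, 1)
A = Matrix([[ 1, 1, 0],
            [-1, 1, 0],
            [ 0, 0, 1]])

rref_form, pivots = A.rref()
print(rref_form)  # the 3x3 identity matrix
print(pivots)     # (0, 1, 2) -> a leading one in every column

# A leading one in every column means t1 = t2 = t3 = 0 is the ONLY
# solution to t1(1 - x) + t2(1 + x) + t3(x2) = 0, so the three
# polynomials are linearly independent.
```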
At this point, Trefor states that there’s a relationship between vectors being L.D. or L.I. and polynomials being L.D. or L.I., which I don’t really understand. I believe in this SS he’s going through the notation for the span of a set of polynomials, which I think works the same way as finding the span of a set of vectors.
(I of course could be wrong about everything I just wrote.)
(Ok, so I have no clue what’s happening, especially because this was the first video I watched this week and I’m having a hard time remembering what Trefor was saying. I don’t know what I was talking about in my notes but here’s what I wrote:)
He sets up the example by saying that instead of setting the variables to tx you set them to cx, where “c” stands for coefficient, and the solution works out for ANY polynomial a + bx + cx2 because (I guess) if you can turn the coefficient matrix into RREF with a leading one in every row, then the polynomials span ALL of P2, the space of polynomials of degree ≤ 2. (Which I think works a lot like R3, since each polynomial boils down to its three coefficients.)
(…😐)
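(Here’s another hedged sympy sketch, reusing the same made-up matrix from above, that I think shows the span idea: the system can be solved no matter what a, b, and c are.)

```python
from sympy import Matrix, linsolve, symbols

a, b, c = symbols('a b c')

# Same coefficient matrix as before. The question now: can
# t1(1 - x) + t2(1 + x) + t3(x2) equal a + bx + cx2
# no matter what a, b, and c are?
A = Matrix([[ 1, 1, 0],
            [-1, 1, 0],
            [ 0, 0, 1]])
target = Matrix([a, b, c])

print(linsolve((A, target)))
# {((a - b)/2, (a + b)/2, c)} -> a solution exists for EVERY a, b, c,
# so the three polynomials span all of P2.
```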
Video 2 – Subspaces are the Natural Subsets of Linear Algebra | Definition + First Examples
(This was another video I didn’t really understand. 😕)
Notation for the requirements of a “subspace”.
Apparently this is not a subspace because a vector can be scaled out of it.
Apparently the green line is a subspace because 1) it goes through the origin, 2) you can scale the vectors to whatever length and they stay on the green line, and 3) the red and yellow arrows can be added and scaled infinitely and they stay on the green line.
(Does this mean that if either vector pointed in a different direction than the green line, i.e. didn’t sit on top of it, the green line would not be a subspace? I don’t know.)
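(To try to answer my own question a bit, here’s a little numeric sketch of the two closure rules. The specific lines and vectors are ones I made up, not from the video.)

```python
import numpy as np

def on_green_line(v):
    # a line through the origin: y = 2x (all multiples of (1, 2))
    return np.isclose(v[1], 2 * v[0])

def on_shifted_line(v):
    # a line that misses the origin: y = 2x + 1
    return np.isclose(v[1], 2 * v[0] + 1)

u = np.array([1.0, 2.0])  # sits on the line through the origin
w = np.array([1.0, 3.0])  # sits on the shifted line

print(on_green_line(5 * u))      # True  -> closed under scaling
print(on_green_line(u + 2 * u))  # True  -> closed under addition
print(on_shifted_line(5 * w))    # False -> scaling pushes the vector off
                                 #          the line, so it's NOT a subspace
```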
Video 3 – The Span is a Subspace | Proof + Visualization
Trefor introduces the idea that the span of a and b is the set of ALL combinations of a and b, which in this case forms a 2D plane; any vector x on that plane can be written as a combination of a and b, i.e. x ∈ span{a, b}.
(I think this means that the plane created by a and b would go on infinitely, although it doesn’t show this in the image.)
This is the notational proof that the span of a and b is a subspace. (Of whatever bigger space a and b live in, I think.)
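(For my own reference, here’s my attempt at reconstructing the three things I believe the proof has to check, written out in LaTeX. This is my sketch, not Trefor’s exact notation, with W = span{a, b} sitting inside whatever Rn the vectors live in.)

```latex
\begin{align}
\vec{0} &= 0\vec{a} + 0\vec{b} \in W
  && \text{(contains the zero vector)} \\
(t_1\vec{a} + t_2\vec{b}) + (s_1\vec{a} + s_2\vec{b})
  &= (t_1 + s_1)\vec{a} + (t_2 + s_2)\vec{b} \in W
  && \text{(closed under addition)} \\
c\,(t_1\vec{a} + t_2\vec{b})
  &= (c\,t_1)\vec{a} + (c\,t_2)\vec{b} \in W
  && \text{(closed under scalar multiplication)}
\end{align}
```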
Video 4 – The Null Space and Column Space of a Matrix | Algebraically and Geometrically
When you do some matrix transformation, some vectors may…
Collapse down to…
The “Column Space”, which is the red line (b). (Note: b will be in Rm given that the matrix transformation has dimensions m×n.)
The green line relates to the “Null Space”. It’s the set of vectors that BEFORE the transformation occurs will…
Collapse down to…
Zero. (The vectors that collapse to zero make up the “Null Space”. Since they’re picked out BEFORE the transformation, they live in Rn. I think.)
This SS and the one below outline the notation for the null space of a transformation, Null(A), and the column space of a transformation, Col(A).
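(A hedged sympy sketch of the two spaces, using a made-up 2×3 matrix of my own, so m = 2 and n = 3. If I’ve got it right, Null(A) should live in R3 and Col(A) in R2.)

```python
from sympy import Matrix

# A made-up 2x3 matrix, i.e. a transformation from R3 to R2.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

print(A.nullspace())
# [Matrix([-2, 1, 0]), Matrix([-3, 0, 1])] -> the inputs in R3
# that A collapses down to zero (the null space, living in Rn)

print(A.columnspace())
# [Matrix([1, 2])] -> the outputs A can actually produce
# (the column space, living in Rm)
```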
Video 5 – The Basis of a Subspace
In this video Trefor was stating that the basis vectors e1 and e2 can be combined and scaled to reach ANY vector in R2.
You can rotate and stretch basis vectors as much as you want and still span R2.
Here he says you can denote e1 and e2 as a1 and a2 to generalize them, meaning you can take ANY two (linearly independent) vectors to act as basis vectors. Apparently this means that if you put those basis vectors into a matrix and turn it into RREF, there will be a leading one in every row.
Here he was saying that >2 vectors in R2 means at least one of them is redundant, i.e. it can be built from the others.
Rules for the “Basis of a Subspace”. I don’t totally know how L.I. plays into it, but I think it’s there to rule out redundant vectors: a basis has to span the subspace without any vector that could be built from the others.
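(A sympy sketch of that RREF check, using two vectors I made up myself.)

```python
from sympy import Matrix

# Two made-up vectors standing in for a1 and a2.
a1 = [2, 1]
a2 = [1, 3]

A = Matrix([a1, a2]).T  # put them into a matrix as columns
rref_form, pivots = A.rref()
print(rref_form)  # the 2x2 identity matrix
print(pivots)     # (0, 1)

# A leading one in every ROW means the two vectors span R2;
# a leading one in every COLUMN means neither one is redundant (L.I.).
# Spanning + linear independence = a basis.
```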
Video 6 – Finding a Basis for the Null Space or Column Space of a Matrix A
I didn’t understand this video but was still somewhat able to follow along with the math, even though what was actually going on, and why the math works the way it does, was lost on me.
In the first three screen shots above, Trefor does some LA to show that the basis for the null space of matrix A comes from (maybe?) the free variables, which correspond to the second and fourth columns of the matrix.
Trefor says here that the columns x2 and x4 are redundant and can be created with combinations of x1 and x3.
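(Here’s a hedged sympy sketch using a made-up 3×4 matrix of my own where, like in the video, columns x2 and x4 end up as the redundant, i.e. free, columns.)

```python
from sympy import Matrix

# A made-up 3x4 matrix where columns 2 and 4 are NOT pivot columns:
# column 2 = 2 * column 1, and column 4 = column 1 + column 3.
A = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 1],
            [0, 0, 0, 0]])

rref_form, pivots = A.rref()
print(pivots)  # (0, 2) -> columns 1 and 3 are the pivot columns

print(A.nullspace())
# [Matrix([-2, 1, 0, 0]), Matrix([-1, 0, -1, 1])]
# -> one basis vector for Null(A) per free column (x2 and x4)
```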
Video 7 – Finding a Basis for Col(A) when A is not in REF Form
The matrix on the right is the REF of the matrix on the left. Columns x1 and x3 are leading-one columns. (Which, it turns out, are also known as “pivot” columns.)
You can go back to the OG matrix on the left and note that its columns x1 and x3 are the pivot columns. (I believe this means those columns of the ORIGINAL matrix, not the REF one, form the basis for the column space.)
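(And a sympy sketch of that, again with made-up numbers. The punchline, as far as I can tell: find the pivot columns with RREF, but grab them from the ORIGINAL matrix.)

```python
from sympy import Matrix

# A made-up matrix that is NOT already in REF.
A = Matrix([[1, 2, 1],
            [2, 4, 3],
            [3, 6, 4]])

rref_form, pivots = A.rref()
print(pivots)  # (0, 2) -> x1 and x3 are the pivot columns

# Take those columns from the ORIGINAL A as the basis for Col(A).
basis = [A.col(i) for i in pivots]
print(basis)  # [Matrix([1, 2, 3]), Matrix([1, 3, 4])]
```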
Video 8 – Coordinate Systems From Non-Standard Bases | Definitions + Visualizations
This SS started the video by showing standard basis vectors and a third random, green vector.
Shows SBVs can create the green vector.
Shows non-SBVs.
Shows how to use non-SBVs to get to the green vector.
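(A quick numpy sketch of the idea, with a made-up basis and a made-up “green” vector of my own. Finding the coordinates just means solving Bc = v, where B holds the basis vectors as columns.)

```python
import numpy as np

# Made-up non-standard basis vectors and a made-up "green" target vector.
b1 = np.array([2.0, 1.0])
b2 = np.array([-1.0, 1.0])
v = np.array([3.0, 3.0])

B = np.column_stack([b1, b2])   # basis vectors as columns
coords = np.linalg.solve(B, v)  # solve B @ coords = v
print(coords)  # [2. 1.] -> v = 2*b1 + 1*b2, so v's coordinates
               #            in this basis are (2, 1)
```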
Video 9 – Writing Vectors in a New Coordinate System **Example**
I will let my notes speak for themselves on this one.
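(Since my notes here are just screenshots, here’s a tiny hedged sketch, reusing the same made-up basis from above, of the kind of back-and-forth computation I believe this example video walks through.)

```python
import numpy as np

B = np.column_stack([[2.0, 1.0], [-1.0, 1.0]])  # same made-up basis as above

# New coordinates -> standard vector: just multiply.
coords = np.array([2.0, 1.0])
v = B @ coords
print(v)  # [3. 3.]

# Standard vector -> new coordinates: solve (or invert B).
print(np.linalg.solve(B, v))  # back to [2. 1.]
```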
So ya, not the greatest week of all time, but I at least got through a bunch of videos even if I didn’t understand a lot of what was going on in them. As is often the case, I’m guessing I’ll reach a point later on where a lot of what I watched this week will click into place after I watch videos on math that builds on top of this. Like I said at the start, I’m pumped that I only have 30 more vids left to watch in this playlist! I’m hoping I can get through them all by early November, which I think is doable. I’m excited to get back to KA and am optimistic that the KA Linear Algebra course will be easier to get through when I get back there. As always, fingers crossed Week 267 is a productive one so that I can get back to KA sooner rather than later! 🤞🏼