I got through eight videos this week from the Linear Algebra playlist by Dr. Trefor Bazet. I probably should have gotten through more, so I’m a bit disappointed about that, but all-in-all I feel like I had a solid week and made progress understanding linear algebra, in general. Most of what was covered in the videos was review of things I’d already seen on KA, but a lot of what I think are fundamental concepts to LA sunk in after watching these videos. I feel like I have a pretty strong intuitive grasp on matrices now and what they represent. I also now have a better understanding of what the definitions of linear dependence and independence are, basis vectors, and the names for the outputs of certain matrix transformations. That said, I certainly haven’t got it all figured out (as you’ll see at the bottom of this post where I talk about the last video I watched this week), but, overall, I definitely have a better idea of what’s going on with LA and how to do LA. (Still don’t have much of a grasp on why it all works though… But I’m getting there!)
I don’t have very much time to write this post, so this is going to be a speed run through the eight videos I watched:
Video 1
I wrote down a few layman’s points that I took away from this video that I think are correct but still am not 100% sure about:
- Vectors are linearly dependent if you can find some combination of coefficients for them, not all zero, so that they add up to the 0 vector. (This could be wrong.)
- I think this means that if some vectors all lie in a plane, for example, then a vector pointing out of that plane into a third dimension would be linearly IN-dependent from them.
- If the ONLY coefficients, ai, that work are all 0, then the vectors are linearly IN-dependent. (The all-zero combination always gives the 0 vector, so it doesn’t count for dependence — it just means the “vectors” don’t leave the origin, so of course the value stays at 0. Also, if one of the vectors, xi, is the 0 vector, the set is automatically linearly dependent.)
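To sanity-check the points above, here’s a quick numpy sketch (not from the video — the vectors are just numbers I made up). A set of vectors is dependent exactly when the matrix built from them has rank lower than the number of vectors:

```python
import numpy as np

# Two vectors in R^2 that point along the same line: one is a
# scalar multiple of the other, so a nontrivial combination
# 2*v1 + (-1)*v2 = 0 exists and the set is linearly DEpendent.
v1 = np.array([1.0, 2.0])
v2 = np.array([2.0, 4.0])
dependent = np.linalg.matrix_rank(np.column_stack([v1, v2])) < 2
print(dependent)  # True

# Swap in a vector off that line and the only combination that
# gives the 0 vector is the trivial one (all coefficients 0),
# so this set is linearly INdependent.
v3 = np.array([0.0, 1.0])
independent = np.linalg.matrix_rank(np.column_stack([v1, v3])) == 2
print(independent)  # True
```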
Video 2
I’ll let my notes above speak for themselves on this one.
Video 3
I watched this video on Wednesday and, looking back at the screen shot now, don’t really remember what Trefor was talking about. It’s clearly a flow chart to tell whether a set of vectors is linearly dependent or independent. I wrote out in my notes, “this just covers linear dependence versus linear independence and gives an explanation.” I think I may have missed the big picture of what Trefor was saying though. Reading it over now, it seems like all he was saying is that if some combination of a set of vectors leads back to the OG, then they’re linearly dependent on each other.
Video 4
This was probably the most helpful video I watched this week. As you can see from the screen shot, it covered what a matrix transformation does and how it works. For whatever reason, this video really helped me wrap my head around the difference between what the rows represent in a matrix (the variables) and what the columns represent (vectors). As you can see from my notes, I also started to strongly grasp how matrix-vector multiplication takes a vector in a certain dimension (in the example the vector’s in R5) and outputs a vector in the matrix’s dimension (which in the example was R3).
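To double-check that R5-in, R3-out idea, here’s a little numpy sketch (the matrix entries are just placeholder numbers, not the ones from the video). A 3×5 matrix has 5 columns (one per input coordinate) and 3 rows (one per output coordinate), and the output is a combination of the matrix’s columns weighted by the vector’s entries:

```python
import numpy as np

# A 3x5 matrix: 5 columns, 3 rows. Multiplying it by a vector
# in R^5 produces a vector in R^3.
A = np.arange(15).reshape(3, 5)  # placeholder numbers 0..14
x = np.array([1, 0, 2, 0, 1])    # a vector in R^5

y = A @ x
print(y.shape)  # (3,) -- the output lives in R^3

# Same answer, built as a combination of A's columns, which is
# the "columns are vectors" way of seeing the multiplication:
y_cols = 1 * A[:, 0] + 2 * A[:, 2] + 1 * A[:, 4]
print(np.array_equal(y, y_cols))  # True
```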
Video 5
This was a helpful video where Trefor just worked through some matrix transformation examples algebraically and, as you can see, explained what words are used to denote which transformation.
Video 6
I didn’t understand much of what Trefor talked about in this video. Up to this screen shot, Trefor was just going through the rules for a linear transformation. (As a heads up, I’m having a hard time understanding the difference between a linear transformation and a matrix transformation.)
In the previous two screen shots, Trefor took the vectors he drew out and rotated them around the origin. It was pretty cool the way he was able to keep the axes stationary but “grab” and rotate the vectors around the OG. As he was rotating the vectors, he was explaining that even though they’re rotating, they maintain the same proportion and distance to each other and, because of this, it turns out the rotation is actually a linear transformation, as long as the rotation is around the OG.
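Here’s a quick numpy check of that claim (the angle and vectors are just ones I picked, not from the video). A rotation around the origin is linear, which means rotating a combination of vectors gives the same answer as combining the rotated vectors:

```python
import numpy as np

theta = np.pi / 3  # rotate 60 degrees around the origin
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])

# Linearity: rotating a combination equals combining the rotations.
lhs = R @ (2 * u + 5 * v)
rhs = 2 * (R @ u) + 5 * (R @ v)
print(np.allclose(lhs, rhs))  # True
```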
Video 7
In this video Trefor talks about basis vectors, which seem like they’re almost the same thing as unit vectors, but not quite. The difference may be that a unit vector has a length of 1 but could point in ANY direction, whereas a standard basis vector also has a length of 1 but points along exactly one axis. Trefor denotes basis vectors with ei where e denotes “basis vector” and the subscript “i” denotes whatever dimension the basis vector points along, a.k.a. whatever row the 1 would be in (with 0’s in all the other rows).
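To make that concrete, here’s a small numpy sketch (my own example, not from the video). In R3 the standard basis vectors are the columns of the 3×3 identity matrix, and any vector is a combination of them with its own entries as the coefficients:

```python
import numpy as np

# The standard basis vectors e_i in R^3: a 1 in row i,
# 0's everywhere else -- i.e. the columns of the identity matrix.
I = np.eye(3)
e1, e2, e3 = I[:, 0], I[:, 1], I[:, 2]
print(e2)  # [0. 1. 0.]

# Any vector is a combination of the e_i, with the vector's own
# entries as the coefficients:
x = np.array([4.0, -1.0, 7.0])
print(np.allclose(x, 4 * e1 + (-1) * e2 + 7 * e3))  # True
```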
Video 8
I watched this final video three times at the end of the week and still don’t understand whatever main point Trefor is trying to make. I don’t understand the difference between a matrix transformation and a linear transformation, at least in part because I don’t understand what a linear transformation is. This video was still helpful for me to see how you can multiply a vector by what I think of as a “unit matrix” (a.k.a. the identity matrix, a square matrix with 1’s in a diagonal line from the top left corner to the bottom right corner and 0’s in all the other spaces), which results in a linear combination of the vector’s coefficients multiplied by their respective basis vectors. (I’m not sure if any of that made sense, but I think I have it figured out. Even if I do, I definitely don’t know how to explain it.) He wraps up by saying that since you can apply the transformation, T, to each of the basis vectors, ei, and stack the outputs together into a matrix, A, that’s like turning the transformation, T, into a matrix, which then makes a linear transformation the same thing as a matrix transformation.
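Here’s my attempt at checking that wrap-up point with numpy (my own made-up transformation, not Trefor’s example). Take a linear transformation T written as a plain function, feed it the basis vectors, stack the outputs as the columns of a matrix A, and then A times a vector gives the same answer as T applied to that vector:

```python
import numpy as np

# A hypothetical linear transformation T on R^2, written as a
# plain function: it swaps the two coordinates (a reflection
# across the line y = x).
def T(v):
    return np.array([v[1], v[0]])

# Feed T the standard basis vectors; the outputs become the
# columns of a matrix A.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])

# Now A @ x matches T(x) for any x, which is the sense in which
# a linear transformation IS a matrix transformation.
x = np.array([3.0, -5.0])
print(np.allclose(A @ x, T(x)))  # True
```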
(AGH! I’m pretty sure none of that made sense… I’m hoping I’m on the right track though. 🤞🏼)
And that’s where I ended this week. It was a bit of a disappointing ending as I (clearly) didn’t understand the final video, but like I said initially, I do think it was a solid week, overall.
I’ve officially gotten through 28 of the 82 videos from this playlist. I still have a long way to go, but I’m optimistic that I can get through the entire playlist in the next few weeks. In all likelihood, it probably won’t be until November, but maybe I can pick up the pace and get through it by mid-October. When I finally do get through this playlist, I think I’ll be in a great position to crush the Linear Algebra playlist on KA. It’s annoying because I feel like I’m SO close to finishing off the MATH section of KA, and yet at the same time I feel SOOO far away. I know I’ll get there, but it’s definitely taking a long time…