I had a relatively subpar week studying math. I made it through seven videos on Dr. Trefor’s Linear Algebra playlist and probably only studied for ~3 hours. I had one potentially key breakthrough (I think), but that’s all. Given how little I studied, I can’t be too surprised that the week didn’t amount to much. I have some sympathy for myself for not getting more done, as I have a lot going on these days, but at the same time, it’s pretty disappointing that this is taking me so long… The good news is that I’ll be starting this week on the 68th video of 83 in the playlist (I think Trefor added an extra video since I first started), so I’m likely two weeks away from getting through it. In the big picture, the other good news is that I definitely have a better intuitive understanding of what’s going on with LA than when I first started this playlist. I’m guessing (and praying 🙏🏼) that when I get back to KA, I’ll be able to get through the Linear Algebra course much faster and with better comprehension after finishing this playlist.
Just like the past few weeks, here are screen shots from the eight videos I got through this week and my notes for some of them:
Video 1 – What Eigenvalues and Eigenvectors Mean…
In this screen shot, the matrix in the top left transformed the two vectors e1 and e2; e1 got rotated, but e2 didn’t rotate and was only stretched. The fact that it doesn’t rotate is what makes e2 an eigenvector of this particular matrix.
In the above two screen shots, you can see that the vector [1, 1] also doesn’t rotate, meaning that it too is an eigenvector of this particular matrix.
I remembered hearing about and working on eigenvectors before on KA, but their definition never really stuck with me. This week I think I was finally able to grasp what they are. As per my note, if a vector is transformed by a matrix but doesn’t rotate (it only gets scaled), then it’s an eigenvector of that matrix. (Maybe…)
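To test whether I actually get this, here’s a minimal sketch in Python. The matrix is just one I made up for illustration, not necessarily the one from the video:

```python
import numpy as np

# A made-up 2x2 matrix, just for illustration (not necessarily the one from the video)
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

v = np.array([1.0, 0.0])   # a candidate eigenvector
w = A @ v                  # transform it with the matrix

# If v is an eigenvector, then A @ v is just a scaled copy of v (no rotation).
# Here A @ [1, 0] = [3, 0] = 3 * [1, 0], so v is an eigenvector with eigenvalue 3.
print(w)                            # [3. 0.]
print(v[0] * w[1] - v[1] * w[0])    # 0.0 -> w is parallel to v, i.e. no rotation
```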
Video 2 – Using Determinants to Compute Eigenvalues & Eigenvectors
This video worked through the formula for finding eigenvalues and eigenvectors, but I don’t really understand what’s going on with the proof. To me it seems like it’s saying the matrix, A, is equal to λ, which, as far as I know, is the “eigenvalue”, i.e. the particular coefficient, or ‘stretch value’, associated with each eigenvector. So, in other words, for eigenvectors, A equals the eigenvalue of each one.
(I’m pretty sure none of what I just wrote made any sense and was likely incorrect. 🙃)
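Writing this post up, I tried to reconstruct what I think the proof was actually saying. This is the standard derivation as I understand it, so it may not match Trefor’s exact steps. The key point is that A is not equal to λ; rather, A acts on an eigenvector the same way that multiplying by the plain number λ does:

```latex
A\vec{v} = \lambda \vec{v}
\;\Longrightarrow\;
A\vec{v} - \lambda I \vec{v} = \vec{0}
\;\Longrightarrow\;
(A - \lambda I)\vec{v} = \vec{0}
```

And since an eigenvector v has to be nonzero, the matrix A − λI must squash that nonzero vector down to the zero vector, which can only happen when det(A − λI) = 0. Solving that equation for λ is what gives you the eigenvalues.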
Video 3 – Example: Computing Eigenvalues and Eigenvectors
This video carried on from the previous one. Here Trefor worked through an example of how to calculate eigenvalues and eigenvectors. Unfortunately, whatever he was trying to explain didn’t really sink in. 😔 Nonetheless, I added what I wrote in my notes under each screen shot:
This shows you how to get the eigenvalues (which are λ = 2 and λ = 3). (I don’t understand why the whole expression with λ in it gets set equal to 0.)
This shows you how to get the eigenvector, [1, 0], which you do by plugging the eigenvalue λ = 2 back into the matrix in place of λ. (I also don’t understand the bottom where it says x = s[1, 0].)
This screen shot shows the same process as the previous one, except it plugs λ = 3 into the matrix and gets x = s[1, 1].
Here Trefor was just saying that when you multiply the eigenvectors, [1, 0] and [1, 1], by the matrix, they will never rotate, which is what makes them eigenvectors. (I feel like I’ve said that 100 times at this point. EIGENVECTORS DON’T ROTATE!)
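The matrix itself didn’t make it into my screen shots, but working backwards from the eigenvalues (2 and 3) and eigenvectors ([1, 0] and [1, 1]) in my notes, I’m assuming it was [[2, 1], [0, 3]]. Here’s a quick NumPy check of the whole example under that assumption:

```python
import numpy as np

# Assumed matrix, reconstructed from the eigenvalues/eigenvectors in my notes
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# NumPy solves det(A - lambda*I) = 0 and finds the eigenvectors for us
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # [2. 3.]
print(eigenvectors)   # columns: [1, 0] and [0.707..., 0.707...] (i.e. [1, 1] scaled down)

# Check the defining equation A @ v = lambda * v for each pair
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))   # True, True
```

(Doing this also helped the x = s[1, 1] notation click a bit for me: the s just means ANY scalar multiple of the vector counts as an eigenvector, which is why NumPy is free to hand back [0.707, 0.707] instead of [1, 1].)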
Video 4 – A Range of Possibilities for Eigenvalues and Eigenvectors
In this video Trefor worked through a few examples of matrices with different eigenvalues and eigenvectors. All three of them were confusing, but I screen-shotted the first one, which is just below. In that example, Trefor talked about how, for this particular matrix, ALL vectors being multiplied by the matrix would be considered eigenvectors (I think) because the matrix just scaled everything by 2. I believe this is because the matrix he was referring to had a top-left to bottom-right diagonal line of 2’s with 0’s everywhere else, as in the little sketch here (screen shot after it):
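Here’s a minimal Python version of that idea. I’m assuming a 2×2 matrix with 2’s on the diagonal, which may not be the exact matrix from the video:

```python
import numpy as np

# 2's on the diagonal, 0's everywhere else (i.e. 2 times the identity matrix)
A = np.array([[2.0, 0.0],
              [0.0, 2.0]])

# EVERY vector just gets scaled by 2 and never rotates, so every nonzero
# vector is an eigenvector of this matrix, all with eigenvalue 2
for v in (np.array([1.0, 0.0]), np.array([3.0, -5.0]), np.array([2.0, 7.0])):
    print(A @ v, "is exactly", 2 * v)
```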
Also, in the screen shot below, I was really having a hard time understanding the notation x = s[1, 0], so I asked CGPT to help explain it to me, and this is what it said:
Video 5 – Visualizing Diagonalization
I took a detour here with the video just below. It popped up on YouTube beneath Trefor’s playlist and looked helpful, so I watched it. Here’s the video, with a potential insight I may have had written below it:
“It could be that the reason you want to be able to change basis vectors is so that you can change them to eigenvectors so that with diagonalization, for example, a 3D object ONLY stretches and DOESN’T rotate during a transformation”. That’s what I wrote in my notes and if this is true, I could see why the math would be much simpler to do it this way.
This screen shot shows how it can be much simpler to do matrix multiplication with a diagonal matrix (non-zero entries only on the diagonal, 0’s everywhere else) than with two matrices where all of the entries/elements are filled in with numbers other than 0.
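To convince myself, I tried a tiny made-up example in Python (my numbers, not the ones from the video):

```python
import numpy as np

# Two made-up diagonal matrices
D1 = np.diag([2.0, 3.0])
D2 = np.diag([5.0, 7.0])

# Multiplying diagonal matrices just multiplies the matching diagonal entries --
# none of the usual rows-times-columns bookkeeping
print(D1 @ D2)                            # [[10. 0.], [0. 21.]]
print(np.diag([2.0 * 5.0, 3.0 * 7.0]))    # same thing
```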
This shows that you can get to the same vector using different basis vectors. At this part of the video, the narrator was making the point I wrote out above: that it can be easier to change the basis vectors to eigenvectors and then solve the transformation with eigenvectors that don’t rotate than to work with a different set of basis vectors that would. He said:
“First you change your basis, then you scale your direction-invariant vectors, then you change back to the regular basis.”
Which apparently is what the equation Aᵏ = PDᵏP⁻¹ means, which shows up two videos down from here. ⬇️ ⬇️
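Here’s the k = 1 version of that equation in Python (the powers come up again in Video 8 below), reusing the matrix I assumed back in Video 3, with its eigenvectors as the columns of P and its eigenvalues on the diagonal of D:

```python
import numpy as np

A = np.array([[2.0, 1.0],    # the same assumed matrix from Video 3
              [0.0, 3.0]])

P = np.array([[1.0, 1.0],    # columns are the eigenvectors [1, 0] and [1, 1]
              [0.0, 1.0]])
D = np.diag([2.0, 3.0])      # the matching eigenvalues on the diagonal

# "Change basis, scale, change back": P @ D @ P^-1 rebuilds A exactly
print(P @ D @ np.linalg.inv(P))   # [[2. 1.], [0. 3.]] -- same as A
```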
Video 6 – Diagonal Matrices are Freaking Awesome
This video just worked through an example of how to calculate the eigenvalues and eigenvectors of a diagonal matrix, which I don’t really understand. What I literally wrote in my notes after watching this video three times was, “I don’t know what’s going on with this stuff.”
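The one concrete thing I could verify for myself in Python: for a diagonal matrix, the eigenvalues are just the entries sitting on the diagonal, and the standard basis vectors e1 and e2 are the eigenvectors (the numbers here are made up):

```python
import numpy as np

D = np.diag([4.0, 9.0])   # a made-up diagonal matrix

eigenvalues, eigenvectors = np.linalg.eig(D)
print(eigenvalues)    # [4. 9.] -- just the diagonal entries
print(eigenvectors)   # [[1. 0.], [0. 1.]] -- e1 and e2
```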
Video 7 – How the Diagonalization Process Works
In this video Trefor walked through the theoretical proofs for diagonalization. (I think…)
Video 8 – Compute Large Powers of a Matrix Via Diagonalization
In this video Trefor just talked about why it’s helpful to use diagonalization: it simplifies the math of taking matrices to large exponents. But I believe it also helps with matrix multiplication in general, not ONLY with taking a matrix to a large power.
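Here’s a sketch of why it simplifies things, again reusing my assumed matrix from Video 3. Dᵏ only requires raising the two diagonal entries to the k-th power, so Aᵏ = PDᵏP⁻¹ is one cheap step instead of k full matrix multiplications:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
D = np.diag([2.0, 3.0])

k = 10
# D^k is trivial: just raise each diagonal entry to the k-th power
D_k = np.diag(np.diag(D) ** k)

print(P @ D_k @ np.linalg.inv(P))      # A^10 via diagonalization
print(np.linalg.matrix_power(A, k))    # same answer, computed the long way
```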
And that, unfortunately, was all I got done this week. So ya, not only was it only a few videos, but I barely understood any of what I watched… It’s disappointing, but I do think I’m making headway with linear algebra. Everything I’m learning, or maybe just the way it’s all being presented, seems very conceptual and hard to grasp. I’m thinking I should maybe get CGPT to explain what’s going on with some of the concepts and give me more real-world examples of what Trefor’s talking about in these videos. I feel like that might help to make some of the concepts more tangible. In any case, my main goal for this coming week is to set myself up well for the following week so I can get through the entire playlist by the end of that week. There are 15 videos left to go, so if I could get through eight of them this week, that would be pretty solid. For the millionth/billionth time, fingers crossed I can make it happen! 🤞🏼