Week 269 – Oct. 21st to Oct. 27th

I only made it through six videos in Dr. TB’s Linear Algebra playlist this week, but I actually made a TON of progress. On Tuesday I got my ass kicked by the 68th video in the playlist. The same notation I was struggling with last week came up in that video, and I just about had a nervous breakdown. It was one of the lowest points I’ve had in the past few months and I was super frustrated. I decided that I needed to branch out and Google a bunch of videos from other creators to try and figure out the notation. On Wednesday and Thursday, I ended up watching six other videos which helped me SO much in understanding parametric vector notation (the notation I couldn’t understand), eigenvalues and eigenvectors, and matrix multiplication in general. I also got a better grasp on why certain variables are called “free” variables. So, it was an enlightening week to say the least, all because I decided to branch out from the Linear Algebra playlist. Going forward, I NEED to remember to do this more regularly. Hearing different people talk about the same concept REALLY helps broaden my understanding of what’s going on, since they each approach it from slightly different angles using slightly different language. So ya, it was a solid week, which was a huge relief. 😮‍💨

Before I get into the six videos I watched from Dr. TB’s playlist, below are the six videos I watched outside of it. The first video is the one that helped me understand how parametric vector notation works, which is why I’ve added screenshots from it along with my own notes.

The part I was struggling with the most with parametric vector notation was 1) understanding that if a column in a REF matrix DOESN’T contain a leading 1 (a “leading” 1 is the FIRST nonzero entry in its row, meaning all elements to its left in that row MUST be 0’s), then the variable for that column is known as a “free variable”, and 2) once you solve the equations for the non-free variables, you put those equations into a single vector, [x1, x2, x3, x4] = [the equations for each variable], and then FACTOR OUT the free variables. (This is the final step in my notes above.)
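To make sure this sticks, here’s a tiny sympy sketch I put together with my own made-up system (NOT the one from the video). It spits out the solution with the free variable still in it, so I can double-check the factor-out step by hand:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Made-up augmented matrix, already in reduced form:
#   x1 + 2*x3 = 1
#   x2 - 3*x3 = 2
# Column 3 has no leading 1, so x3 is the free variable.
aug = sp.Matrix([[1, 0,  2, 1],
                 [0, 1, -3, 2]])

# Solves for the non-free variables in terms of the free one:
#   x1 = 1 - 2*x3,  x2 = 2 + 3*x3,  x3 = x3 (free)
print(sp.linsolve(aug, x1, x2, x3))

# Factoring out the free variable x3 gives the parametric vector form:
#   [x1, x2, x3] = [1, 2, 0] + x3 * [-2, 3, 1]
```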

This video helped me understand what “free” in free variables means. The host states that if, for example, you have the equation for a line, x + y = 2, and you subtract y from both sides, i.e. x = 2 – y, you could say that ‘y’ is a free variable in that you can freely change it to any value, and ‘x’ is then dependent on whatever you input for ‘y’. I don’t know if this is EXACTLY the definition of “free variable” but I’m thinking it’s probably close, and it definitely helps simplify in my mind what it means for a variable to be free.
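Just to hammer the idea home for myself, here’s the host’s x = 2 – y example as a quick Python loop (‘y’ gets picked freely, ‘x’ tags along):

```python
# 'y' is free: pick anything; 'x' is then forced by x = 2 - y.
for y in [0, 1, 5, -3]:
    x = 2 - y
    print(f"y = {y} (chosen freely) -> x = {x}")
```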

Here are the six videos I watched from Dr. TB’s playlist:

Video 1 – Full Example: Diagonalizing a Matrix

Looking back over my notes for this question, I’m pretty confused by all of it and don’t remember or really intuitively understand what’s going on. I definitely have a better grasp on how the math works than I did at the beginning of the week, but I don’t think my understanding is strong enough to do a better job explaining what my notes above say.
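That said, I’ll at least leave future-me a tiny sympy sketch of what “diagonalizing” actually computes, using a random 2×2 I made up (NOT the matrix from the video):

```python
import sympy as sp

# Made-up 2x2 matrix (not the one from the video)
A = sp.Matrix([[4, 1],
               [2, 3]])

# Diagonalizing finds P (columns = eigenvectors) and D (diagonal
# matrix of eigenvalues) such that A = P * D * P^-1
P, D = A.diagonalize()
print(P)                   # eigenvector matrix
print(D)                   # diagonal matrix of eigenvalues 2 and 5
print(P * D * P**-1 == A)  # True -> the factorization checks out
```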

Video 2 – COMPLEX Eigenvalues, Eigenvectors & Diagonalization **Full Example**

Again, looking at this question, I can’t explain any better what my notes already say. One thing I could mention, however, is that part of the purpose of this video was to show that you can solve for the eigenvalues and eigenvectors of a matrix where the eigenvalues are complex, i.e. involve √–1 = i, but that gets you into complex numbers, which Trefor said are outside the scope of this playlist and wouldn’t be covered in any further detail. Nonetheless, I still found this video helpful for practicing solving for eigenvalues and eigenvectors.
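One standard example of this (my own go-to, not necessarily the matrix Trefor used) is the 90° rotation matrix: it rotates EVERY vector, so no real vector just gets stretched, and the eigenvalues come out as ±i:

```python
import sympy as sp

# 90-degree rotation matrix: every vector gets rotated, so there's
# no real eigenvector and no real eigenvalue.
R = sp.Matrix([[0, -1],
               [1,  0]])

print(R.eigenvals())  # the eigenvalues are the complex pair i and -i
```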

Video 3 – Visualizing Diagonalization & Eigen-Bases

The point of this video is that when doing some matrix transformation, it’s often easier to compute the transformation by switching the basis vectors to eigenvectors, doing the matrix transformation on the eigenvectors (which will simply stretch the vectors, meaning you won’t have to deal with rotating them), and then switching the grid back to the OG grid, a.k.a. basis, after the transformation.

Here Trefor starts by saying if you have a diagonal matrix multiplying two SBV’s on a standard grid…

The diagonal matrix will simply stretch those SBV’s and the grid. They DON’T rotate.

To take the grid back to its original shape, you again multiply by the diagonal matrix but put a 1/x in place of each element on the diagonal where x = whatever the element was in the OG matrix.
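Here’s a quick numpy sanity check of that 1/x idea with a made-up diagonal matrix (my numbers, not Trefor’s):

```python
import numpy as np

D = np.diag([2.0, 3.0])          # stretches x by 2 and y by 3
D_inv = np.diag([1/2.0, 1/3.0])  # 1/x on the diagonal undoes it

v = np.array([1.0, 1.0])
print(D @ v)            # [2. 3.] -> stretched
print(D_inv @ (D @ v))  # [1. 1.] -> back to the OG vector
```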

Trefor got into another example here where he started with a grid and basis vectors for some non-diagonal transformation.

This shows how after that transformation the grid ends up stretching AND rotating by squishing inwards.

This shows the eigenvector for that same transformation before the transformation.

This shows the same transformation applied to the eigenvectors, which you can see ONLY stretches them. Therefore, if you change basis from the SBV’s to the eigenvectors of the OG matrix, THEN do the transformation, the computation will be simpler as you’ll only need to stretch the eigenvectors/grid and not rotate it.
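Here’s a little numpy check of the “eigenvectors only get stretched” idea, using a made-up non-diagonal matrix (again, my own, not the one from the video):

```python
import numpy as np

# Made-up non-diagonal transformation
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# A standard basis vector gets stretched AND rotated:
e1 = np.array([1.0, 0.0])
print(A @ e1)  # [2. 1.] -> no longer points along e1

# An eigenvector of A only gets stretched:
v = np.array([1.0, 1.0])  # eigenvector of A with eigenvalue 3
print(A @ v)   # [3. 3.] -> exactly 3 * v, same direction
```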

Video 4 – Similar Matrices have Similar Properties

As I’ve said already, I’m definitely making progress understanding everything, but I really don’t understand the notation from this video well enough to explain it. Nonetheless, here are two screenshots from the video and what I wrote in my notes for each screenshot:

“Algebraic proof that A and B matrices will be equal (or similar?) in some ways.”

“Further talks about the proof.”

(Very helpful notes. 🙄)
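Since those notes are useless, here’s a quick sympy check I wrote of what I THINK the claim is: if B = P⁻¹AP for some invertible P (A and P below are my own made-up matrices), then A and B share the same trace, determinant, and eigenvalues:

```python
import sympy as sp

# Made-up matrix A and invertible matrix P (not from the video)
A = sp.Matrix([[4, 1],
               [2, 3]])
P = sp.Matrix([[1, 1],
               [0, 1]])

B = P.inv() * A * P  # B is "similar" to A by definition

print(A.trace(), B.trace())          # same trace: 7 and 7
print(A.det(), B.det())              # same determinant: 10 and 10
print(A.eigenvals(), B.eigenvals())  # same eigenvalues: 2 and 5
```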

Video 5 – The Similarity Relationship Represents a Change of Basis

I don’t really know what Trefor was talking about here. I’m pretty sure this has to do with the idea that you can switch vectors to eigenvectors, do a matrix transformation, and then switch back to the OG basis and vectors. But on top of that, I think Trefor is saying that the bases/grids between the OG grid and the eigenvector grid MUST be similar in certain ways in order for the whole thing to work. I could be wrong about that though.

Video 6 – Dot Products and Length

This video was helpful for me to understand how the formula for the length of a vector works by taking the square root of the vector’s dot product with itself. I’d even go as far as saying that I understand why the formula works, which is something I don’t often get to say. (That said, I don’t intuitively understand why Pythag’s Theorem works, but apart from that, I understand why a vector’s length – no matter how many components it has – is equal to the square root of its dot product with itself. 💪🏼)
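Here’s my own quick numpy check of the formula with a made-up vector (the square root of the dot product matches numpy’s built-in length function):

```python
import numpy as np

v = np.array([3.0, 4.0, 12.0])

length_via_dot = np.sqrt(v @ v)     # sqrt(v . v) = sqrt(9 + 16 + 144)
length_builtin = np.linalg.norm(v)  # numpy's own length function

print(length_via_dot, length_builtin)  # 13.0 13.0
```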

I watched one more video at the end of the week for technically a total of seven vids from the LA playlist, but a lot of it went over my head so I didn’t think I should add it here. 🤷🏻‍♂️

It’s possible I could get through this playlist by the end of next week. 🙏🏼 I’m on the 74th of 83 vids, meaning I’d have to get through ten videos next week, i.e. two per day. I think I have a pretty good shot given that 1) I made a ton of progress this week understanding linear algebra, in general, by watching the six vids at the start of this post, and 2) I’m more motivated than ever to get through this so I can get back to KA, finish the Linear Algebra course, and FI-NA-LLY get through the math section of KA. 😤 I want to get there SO bad, and I’m so close. I’m sure I’ll be even more motivated if I can get through Trefor’s LA playlist this week, which is why I want to do it and keep my momentum going. As always, fingers crossed I can make it happen. 🤞🏼