Week 270 – Oct. 28th to Nov. 3rd

It was a photo-finish, but I made it through all 10 videos from Dr. TB’s Linear Algebra playlist this week. 📸 💨 🏁 To be fair, there were a bunch of things that went well over my head, but all in all, I think I understood a decent amount of what I watched, at least compared to weeks past. I watched a handful of videos outside of his playlist this week as well, but not as many as last week. I’m not going to bother adding those videos to this post, but I thought it would be worth mentioning that 1) I did, and 2) I’m still finding it super helpful to hear different creators talk about the same concept. As I said last week, listening to multiple people approach a problem from different perspectives and come at the concept from different “angles” (so to speak) REALLY helps broaden my understanding of what they’re all trying to get at. Now that I’m finally through Dr. TB’s playlist, I need to remember to keep doing this once I head back to KA, which, I must say, I’m PUMPED to finally be doing. 💪🏼 😊

Here are the ten videos I watched this week from Dr. TB’s playlist and the notes I took from them:

Video 1 – Distance, Angles, Orthogonality and Pythagoras for Vectors

I feel like my notes do a pretty good job explaining what went down in this vid so I’m not going to elaborate.

Video 2 – Orthogonal Bases are Easy to Work With!

Looking back at this screenshot and my notes, I have no clue what’s going on with this video. I think what’s being explained is that if you dot some vector with a particular basis vector from an orthogonal basis, you’re left with ONLY the vector’s component in the direction of the basis vector you dotted it with. (Pretty sure that makes no sense…) For example, if you were to dot a vector x = [3, 4, 5] with ĵ = [0, 1, 0] you’d be left with x ⋅ ĵ = 4, i.e. the scalar component of x in the ĵ direction, and scaling ĵ back up by that number gives you (x ⋅ ĵ)ĵ = [0, 4, 0]. I’m not at all confident that I’m correct about any of that, or if that was even the point of the video… 🙃
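For what it’s worth, here’s a quick sanity check of that example in code (a minimal numpy sketch using the same made-up vectors):

```python
import numpy as np

# A vector and the standard basis vector j-hat
x = np.array([3, 4, 5])
j_hat = np.array([0, 1, 0])

# Dotting x with j-hat leaves the SCALAR component of x in the j direction...
print(x @ j_hat)            # 4

# ...and scaling j-hat by that number recovers the vector component.
print((x @ j_hat) * j_hat)  # [0 4 0]
```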

Video 3 – Orthogonal Decomposition Theorem Part 1: Defining the Orthogonal Complement

Again, I don’t really know what’s going on with this video. (I also have a feeling I’m going to be writing that over and over again in this post…) Here’s what I wrote in my notes for this vid:

“I’m not 100% sure what Trefor is talking about in this vid, but I think he’s saying if you have a subspace such as a plane, say denoted with W, and you have a set of orthogonal vectors to that plane, say y, it is easier (maybe?) to find the vectors that create W, {u1, u2, … , un}, than to find all the vectors y. AND (the point is) if you find the vectors {u1, u2, … , un} you prove orthogonality to all vectors in the set y.”

Reading that back, that doesn’t really make sense to me. 👍🏼
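If I had to guess at the point now, I think it’s this: to check that a vector is orthogonal to an ENTIRE subspace W (i.e. that it’s in the orthogonal complement, W⊥), you only have to check it against the basis vectors {u1, u2, … , un} of W, not against every single vector in W. A tiny numpy sketch of that idea (the subspace and vectors are my own made-up example):

```python
import numpy as np

# W = the xy-plane inside R^3, spanned by these two basis vectors
u1 = np.array([1, 0, 0])
u2 = np.array([0, 1, 0])

# A candidate vector for the orthogonal complement of W
z = np.array([0, 0, 7])

# z is orthogonal to ALL of W if (and only if) it is orthogonal to
# each basis vector -- no need to test infinitely many vectors.
print(z @ u1 == 0 and z @ u2 == 0)  # True
```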

Video 4 – The Geometric View on Orthogonal Projections

I’m surprised to say that what Trefor’s talking about in this video actually kind of makes sense to me. In layman’s terms, I believe what the first screenshot is saying is, “if you take two vectors, y and u, and put their tails in the same spot, then project y onto u at exactly a 90° angle, they’ll create a right triangle where the opposite side would be a vector orthogonal to u (and denoted with z), and the adjacent side would be some scalar version of u with the scalar coefficient being α, i.e. αu.”

I believe the second screenshot is showing how you can dot a vector x with the standard basis vectors (SBVs) to isolate the vector’s x-, y-, or z-component. (I think.)
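Here’s the right-triangle picture from the first screenshot as a tiny numpy sketch (the vectors are my own made-up example): project y onto u, and the leftover piece z really does sit at a right angle to u.

```python
import numpy as np

y = np.array([3.0, 4.0])
u = np.array([1.0, 0.0])

# alpha = (y . u) / (u . u), the scalar that stretches u into the projection
alpha = (y @ u) / (u @ u)
proj = alpha * u   # the "adjacent side" of the right triangle (i.e. alpha*u)
z = y - proj       # the "opposite side", orthogonal to u

print(proj)    # [3. 0.]
print(z)       # [0. 4.]
print(z @ u)   # 0.0 -> a right angle, as promised
```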

Video 5 – Orthogonal Decomposition Part 2

Once again, I’m a bit confused reading this one back so I’m just going to rewrite what I had in my notes under each screenshot:

“This shows the setup for the question: Vector y (blue, diagonal arrow from OG) being projected onto xy-plane. Projection = u1 in/on x-axis + u2 in/on y-axis. z = orthogonal vector.”

“Dr. TB starts to show the proof for ŷ, where it says:

  • ŷ = ((y ⋅ u1)/(u1 ⋅ u1))u1 + … + ((y ⋅ up)/(up ⋅ up))up

First of all, (y ⋅ u1)/(u1 ⋅ u1) = α (the scaling coefficient that scales the basis vector u1 to ŷ’s length in the x-direction), so the first term works out to αu1. Similarly, it doesn’t show it on the screenshot, but ((y ⋅ u2)/(u2 ⋅ u2))u2 would do the same thing for ŷ’s length parallel to the y-axis. Finally, [the equation in the bullet above] is a generic formula that works for subspaces of as many dimensions as you want (i.e. ‘p’ dimensions), but in this example ŷ would only have 2 dimensions, which would be the xy-plane.”

“Trefor states that ŷ IS a member of W because the basis vectors of ŷ, {u1, … , up}, ARE the basis vectors of W. His next question is, “how can you be sure that vector z is a member of W⊥ [pronounced “W-perp”, i.e. the orthogonal complement of W]?” Solution is z ⋅ ui = 0 (where ui is ANY of the basis vectors of W, I think).”

(I don’t think any of what I just wrote makes sense, but I don’t know what’s happening, so I’m just going to move on…)
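Since this is where the whole decomposition finally comes together, here’s the formula from the bullet above as a small numpy sketch (the vectors and the choice of W are my own made-up example, with W being the xy-plane inside R^3):

```python
import numpy as np

# Orthogonal basis for W (the xy-plane inside R^3)
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])

y = np.array([2.0, 3.0, 5.0])

# y-hat = ((y.u1)/(u1.u1))u1 + ((y.u2)/(u2.u2))u2 -- the projection of y onto W
y_hat = (y @ u1) / (u1 @ u1) * u1 + (y @ u2) / (u2 @ u2) * u2
z = y - y_hat  # the leftover part of y, which should live in W-perp

print(y_hat)           # [2. 3. 0.] -- a member of W
print(z)               # [0. 0. 5.]
print(z @ u1, z @ u2)  # 0.0 0.0 -- z is orthogonal to every basis vector of W
```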

Video 6 – Proving That Orthogonal Projections are a Form of Minimization

Video 7 – Using Gram-Schmidt to Orthogonalize a Basis

Here Trefor is setting up the question he’s going to answer to explain what the Gram-Schmidt equation is and how/why it works. In layman’s terms, imagine you have three vectors that are not orthogonal but are linearly independent, meaning their span is 3D.

Question: how would you change those BVs to be orthogonal, such as î, ĵ, and k̂?

First off, you don’t change the first vector, x1; you just say the first orthogonal basis vector, v1, is the OG vector, i.e. v1 = x1.

To find the second BV, v2, you have to project x2 onto v1 (which will end up being αv1), then subtract that projected vector from x2 to get v2.

This shows what I just explained above.

This shows the geometric, 3D version of what I explained above.

This states that the formula works past three dimensions, which Trefor denotes with vp.
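To try to cement the procedure for myself, here’s the whole thing as a short numpy sketch (my own code, not Trefor’s, and it skips any normalizing step, so it hands back an orthogonal basis rather than an orthonormal one):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthogonal basis."""
    basis = []
    for x in vectors:
        # v = x minus x's projection onto each basis vector found so far
        v = x.astype(float)
        for u in basis:
            v -= (x @ u) / (u @ u) * u
        basis.append(v)
    return basis
```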

Video 8 – Full Example: Using Gram-Schmidt

In this video Trefor worked through an example of how to use the G-S equation:

I liked working through this video because the vector algebra wasn’t too hard, and it was helpful practice. I think I’m FINALLY starting to feel comfortable with doing vector/matrix operations. 😮‍💨
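In that spirit, here’s what the gram_schmidt sketch from above spits out for some concrete (made-up, linearly independent) vectors:

```python
import numpy as np

x1 = np.array([1, 1, 0])
x2 = np.array([1, 0, 1])
x3 = np.array([0, 1, 1])

v1, v2, v3 = gram_schmidt([x1, x2, x3])  # from the sketch above

print(v1)  # [1. 1. 0.]
print(v2)  # [ 0.5 -0.5  1. ]
print(v3)  # roughly [-0.667  0.667  0.667], i.e. [-2/3, 2/3, 2/3]

# Every pair should dot to (roughly) zero
print(v1 @ v2, v1 @ v3, v2 @ v3)  # ~0.0 across the board
```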

Videos 9 & 10 – Least Squares Approximations & Reducing the Least Squares Approximation to Solve a System

These were actually two videos that bled into each other. I was planning on rewriting the notes I took from both videos in this post, but I’m drained already so here are screenshots of the super messy notes I took instead:
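Beyond the screenshots, here’s my attempt at the punchline of both videos as a numpy sketch (the system is my own toy example): when Ax = b has no exact solution, you solve the “normal equations” AᵀAx̂ = Aᵀb instead, which picks out the x̂ that makes Ax̂ as close to b as it can possibly get.

```python
import numpy as np

# An overdetermined system: 3 equations, 2 unknowns -- no exact solution
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

# Normal equations: (A^T A) x_hat = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)  # [0.333... 0.333...] -- the least-squares solution

# The leftover error b - A @ x_hat is orthogonal to the columns of A
print(A.T @ (b - A @ x_hat))  # ~[0. 0.]
```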

Aaand, I’m done. Done with this post, and done with this playlist. (Thank the LAWD. 🙏🏼)

I can’t believe I’ll finally be getting back to KA next week. I don’t remember when I first started Trefor’s playlist and am too lazy right now to go back and figure it out, but I’m sure it’s been at least two months, maybe three. I’m hoping I’ll find the KA vids a lot easier to grasp now and am pretty optimistic that I will. But even if they’re not completely clear, like I said at the top, I’ll just Google videos that cover the same concepts, which I think will make things a lot easier going forward. I only have five videos left to get through in the unit I’m currently in on KA, so I’m hoping that by the end of next week I’ll be into the second of the three units in Linear Algebra. Then I’ll just need to get through the Differential Equations unit, which isn’t too long, redo the course challenge for Multivariable Calculus, and I’ll be DONE. THE END IS NEAR! 😭