Week 209 – Aug. 28th to Sept. 3rd

I’m happy to say that my fifth year working on KA got off to a pretty good start! I didn’t achieve my goal of getting all the way through the section Optimizing Multivariable Functions this week, but managed to get about halfway there. I didn’t take too many notes overall this week but did cover a lot of material and learned about quite a few new and somewhat difficult vector operations, terms and concepts. For the most part I was able to wrap my head around everything I worked through, but I also know that I’ll need more practice with everything that was covered before the concepts “stick” and become intuitive and simple for me to understand. Compared to the past few weeks, however, this week was a very productive one, so I’m really happy and optimistic about how I’ve kicked off my FIFTH YEAR (😱) working on KA. ☺️

My week started off in the section Quadratic Approximations on the video titled The Hessian Matrix. My understanding of this matrix is that it’s kind of like the derivative of the gradient, i.e. a matrix that holds all the second partial derivatives of a function. Grant said that in multivariable calculus, the Hessian matrix is the equivalent of the second derivative in single variable calculus. Here’s a screen shot and a few notes I took of what the Hessian matrix looks like:
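
To also write it out for myself (this is my own LaTeX transcription from memory, so the notation might differ slightly from the video), the Hessian of a two-variable function f(x, y) is:

\mathbf{H}f(x, y) =
\begin{bmatrix}
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \, \partial y} \\
\frac{\partial^2 f}{\partial y \, \partial x} & \frac{\partial^2 f}{\partial y^2}
\end{bmatrix}

Every entry is a second partial derivative of f, which is why the whole matrix acts like the multivariable version of the second derivative.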

The Hessian matrix is used with the quadratic approximation formula and makes it possible to rewrite the equation in vector form. I still find both forms very confusing but am able to understand the general concept of what is going on in them. I more or less know how they work but don’t have much understanding of why they work. In any case, here’s a screen shot from the article Quadratic Approximation that shows how the quadratic approximation can be rewritten in vector form, which is more concise:
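
Writing the vector form out here as well (again my own LaTeX, so hopefully I’ve copied it down correctly):

Q_f(\mathbf{x}) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0)^T (\mathbf{x} - \mathbf{x}_0) + \frac{1}{2} (\mathbf{x} - \mathbf{x}_0)^T \, \mathbf{H}f(\mathbf{x}_0) \, (\mathbf{x} - \mathbf{x}_0)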

One thing to note here is that the bolded x represents a vector (the convention is that bolded lowercase letters typically denote vectors) while the bolded H represents the Hessian matrix. Also, the superscripted “T” on ‘(x – x0)T‘ means ‘transpose’, which means to flip the vector from a column to a row (think from vertical to horizontal). Now that I’m looking at this screen shot, the whole thing is very confusing to me… 😔
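
Here’s a quick example I wrote out for myself of what transposing a two-component column vector looks like:

\begin{bmatrix} a \\ b \end{bmatrix}^T = \begin{bmatrix} a & b \end{bmatrix}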

Here are a few questions I worked through from the Hessian Matrix exercise:

Question 1

Question 2

Finally, here’s an example from the article Quadratic Approximation of how to work through a quadratic approximation in non-vector form:
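
To test myself, I also tried a tiny example of my own (not from the article, so fingers crossed my math is right). For f(x, y) = x²y approximated near the point (x0, y0) = (1, 1), the partial derivatives are:

f_x = 2xy, \quad f_y = x^2, \quad f_{xx} = 2y, \quad f_{xy} = 2x, \quad f_{yy} = 0

Plugging in (1, 1) gives f = 1, f_x = 2, f_y = 1, f_xx = 2, f_xy = 2 and f_yy = 0, so the quadratic approximation works out to:

Q_f(x, y) = 1 + 2(x - 1) + (y - 1) + (x - 1)^2 + 2(x - 1)(y - 1)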

I didn’t get started on the following section, Optimizing Multivariable Functions, until Friday but managed to get through the first two videos and first two exercises by the end of the week. The first video and exercise talked about what are known as global maximum and minimum points, and local maxima and minima points. (I also just learned that maxima is the plural of maximum.) Here’s a screen shot from the first vid:

The global maximum point is the highest point on the 3D render in the screen shot above. It indicates which (x, y) coordinates result in the greatest z-coordinate. Local maxima points would be the tops of the other, smaller hills in the 3D render. Conversely, the global minimum and local minima points are the (x, y) coordinates that result in the lowest z-values, which you can’t really see in the screen shot.

To calculate these values, you find out where the gradient equals 0, i.e. where there’s no slope in the x- or y-direction (or any combination of the two). Below are a few questions I worked through from the first exercise. I didn’t know how to solve them at first but then realized all I needed to do was find the gradient of the given function and then input the given (x, y) or (x, y, z) coordinates into the gradient to see if it outputted 0 in every component. As a side note, an output of all zeros is apparently known as the ‘zero vector’. Here are two of the questions I worked through, plus a small example of my own after them:

Question 3

Question 4
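
Both of those questions boiled down to the same method, so here’s a quick example I made up for myself (not from KA). For f(x, y) = x² + y² – 2x, the gradient is:

\nabla f = \begin{bmatrix} 2x - 2 \\ 2y \end{bmatrix}

Setting both components equal to 0 gives (x, y) = (1, 0), so plugging (1, 0) into the gradient outputs the zero vector. Since f(x, y) = (x – 1)² + y² – 1 is a paraboloid opening upwards, that point is the global minimum.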

The final video I watched talked about something called a saddle point, which is also a spot where the gradient outputs 0 but not because it’s the highest point on a ‘hill’ or the lowest point in a ‘hole’, so to speak. To be honest, I’m not 100% sure I fully understand the math behind saddle points yet, but my takeaway is that the function is at a maximum along one direction and at a minimum along the other, so the slope in both the x- and y-directions still works out to 0 at that point even though it’s neither a peak nor a pit. The easier way to think through a saddle point is that it simply looks like a horse saddle. 🐴 Here’s a screen shot from the video which makes it clearer:
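
The classic example (which I believe is the one Grant uses) is f(x, y) = x² – y², whose gradient is:

\nabla f = \begin{bmatrix} 2x \\ -2y \end{bmatrix}

Both components equal 0 at the origin, but along the x-axis f(x, 0) = x² has a minimum there while along the y-axis f(0, y) = –y² has a maximum, so (0, 0) is a saddle point rather than a max or a min.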

The last exercise I worked through was very straightforward and just asked me to look at an image and state whether the graph had any points on it where the gradient equalled 0. Here are three example questions from that exercise where the first question shows a global minimum, the second question shows a function that has infinite points where the gradient equals 0, and the third question shows a saddle point:

Question 5

Question 6

Question 7

Like I said at the start, I’m definitely happy with how this past week went and with how I’ve started my fifth year on KA. KA says that I’m now 64% of the way through this unit, Applications of Multivariable Functions (320/500 M.P.), but I don’t think that’s a good indication of how much more I have left to go. I still have 13 videos, seven articles and one exercise left to get through. I’m hoping I can finish this unit off by the end of September, which seems like a pretty reasonable goal. From there I will only have two more units and the course challenge left to get through, and then I’ll be DONE with Multivariable Calculus and will have achieved the goal I set for myself four years ago. As I’ve mentioned before, there’s still another subject in the Math section of KA, Linear Algebra, which I plan on starting after MV Calc. Then, after getting through that, I plan on doing the physics, chemistry and biology courses, so I don’t think I’ll be done with KA anytime soon. After four years, I’m not too worried about the timeline of getting through it all at this point. I know I’ll get through it eventually, one step at a time! 🧑🏻‍🔬