Week 48 – July 27th to Aug. 2nd

My first girlfriend broke up with me when I was 18 years old which, funny enough, is the time I first started going to the gym. I initially found it quite difficult to motivate myself to work out. Fast forward 12 years and working out is as integral a part of my life as eating breakfast and watching YouTube. Similarly, 8 years ago my diet consisted primarily of Tim Hortons and food that came out of a box, that came out of a freezer. Around 3-4 years ago I altered my diet to consist mainly of whole foods and pretty close to zero processed food. These are two examples of things I’ve done which I consider lifestyle changes, i.e. changes I’ve made that have stuck for good. I’m starting to wonder if I’m going through another similar type of ‘educational’ change in my life now. Having been at this for 48 weeks, I’m realizing how much I’m enjoying working through KA and am starting to think I may continue to self-educate for the rest of my life. I’ve recently been more inclined to look up words I’m unsure of or places I don’t know, which I think ties into this idea. It will be interesting to see if this trend continues.

(I apologize if that rant came across as super cocky, but I’m pretty happy with how things are going and have honestly noticed a shift in my mindset regarding education.)

In all the time I’ve been working on KA, I haven’t learned as many definitions in any other week as I did in Week 48. I finished the unit Study Design and got halfway through the unit Probability and, between the two, I ended up writing down 39 definitions. I think 39 new terms is probably too many for me to remember in the span of 1 week and I’m hoping it doesn’t happen much more often. I was pleased with the effort I put in this week, spending ~2 hours each day on KA, especially considering that I don’t find learning definitions all that enjoyable.

Oddly enough, the first definition I learned this week was the definition of the subject statistics itself. I was surprised that it took until the 6th unit of the course before I was given a direct definition of statistics, which is “a form of mathematical analysis that deals with collecting, presenting, and analyzing data”. Statistical questions are ones that, to answer, require you to collect data with variability. The data must have more than 1 data point (i.e. be part of a dataset), and the comparison must involve more than just two fixed values (ex. asking “is A > B?” is not a statistical question). If you’re wondering whether a question falls under the category of statistics, ask yourself “is there variation?”.

I was then taught the definitions of correlation and causality, which I had heard before but had never specifically been taught the mathematical definitions of. Both of these words have to do with running an experiment:

  • Correlation
    • A and B are related, i.e. there is a connection between A and B, however this connection could be due to multiple factors.
    • A ↔ B
  • Causality
    • A causes B.
    • A → B

When conducting an experiment, in order to prove causality you must control every variable except for the one being tested. If an experiment shows signs of a relationship between A and B but all the variables haven’t been accounted for, you can only state that there’s a correlation between A and B, not causality.

Next I learned about how experiments can produce biased results. The following are 9 ways in which bias can occur (brace yourselves for the incoming onslaught of definitions) but, FYI, there are quite a few other types of biases I found online which I didn’t add here:

  1. Selection Bias/Undercoverage Bias
    • Selecting individuals, groups or data for analysis in such a way that proper randomization is not achieved resulting in a sample that does not properly represent the population.
  2. Time Interval Bias
    • Intentionally sampling within a specific time range.
  3. Confirmation Bias
    • The tendency to favor information that confirms one’s own beliefs.
  4. Omitted Variable Bias
    • The absence of relevant variables in a model.
    • Occurs when relevant variables are left out of (or removed from) the model.
  5. Recall/Recency Bias
    • When participants do not recall previous events, memories, or details.
    • When participants unconsciously place greater value on recent events.
  6. Observer Bias
    • When the observers’ subjective viewpoints affect how they assess subjective criteria or record subjective information.
  7. Funding Bias
    • Skewing a study to support its financial sponsor.
  8. Response Bias
    • When people answer survey questions untruthfully or misleadingly.
  9. Voluntary Response Bias
    • When a sample is composed of volunteers who self-select into the sample.

Moving on, sampling is when a fraction of a population is studied in order to draw some type of conclusion about the overall population. If you were to survey 500 people in a town of 30,000, the 500 people surveyed would be considered the sample. There are ‘good’ and ‘bad’ ways to come up with a sample, so to speak. In order to produce accurate results, it’s important that a sample is selected randomly, otherwise it may not be, and most likely isn’t, representative of the larger population. A few ways to take a sample are listed below (with a small code sketch of the ‘good’ methods after the list):

  • Bad
    • Convenience Sampling
      • The researcher chooses a sample that is readily available in some non-random way.
      • Ex. surveying the last 20 people to go into a bar in order to draw conclusions about all patrons of the bar.
    • Voluntary Response Sample
      • When researchers put out a request for people to join a sample and each individual person decides whether or not to take part.
  • Good
    • Simple Random Sample
      • Every member of a population has an equal chance of being selected.
      • Ex. putting names in a hat and drawing at random.
      • This is often done with a computer program because “humans are notoriously bad at selecting randomly.” – Sal
    • Stratified Random Sample
      • The population is first split up into groups. The overall sample consists of an equal percentage of members from each group, and the members from each group are chosen randomly.
    • Cluster Random Sample
      • The population is first split up into groups. Every member from a handful of the groups (i.e. clusters) is selected. The handful of groups/clusters is selected at random.
    • Systematic Random Sample
      • Members of a population are put into a random order. A starting point is selected at random and every nth member is selected to be in the sample (ex. every 5th or 10th member).
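
To make these sampling methods more concrete for myself, here’s a minimal Python sketch of the four ‘good’ methods. The population of 30 numbers, the group sizes, and the sample sizes are all just made-up assumptions for the example:

```python
import random

random.seed(0)                                    # so the results are repeatable
population = list(range(1, 31))                   # a made-up population of 30 members

# Simple random sample: every member has an equal chance of being picked.
simple = random.sample(population, 5)

# Stratified random sample: split the population into groups (strata), then
# randomly pick the same number of members from each group.
strata = [population[0:10], population[10:20], population[20:30]]
stratified = [m for group in strata for m in random.sample(group, 2)]

# Cluster random sample: split the population into clusters, pick a couple of
# clusters at random, and take EVERY member of the chosen clusters.
clusters = [population[i:i + 5] for i in range(0, 30, 5)]
cluster_sample = [m for c in random.sample(clusters, 2) for m in c]

# Systematic random sample: put the members in a random order, pick a random
# starting point, and take every 5th member from there.
shuffled = random.sample(population, len(population))
start = random.randrange(5)
systematic = shuffled[start::5]

print(simple, stratified, cluster_sample, systematic, sep="\n")
```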

The last thing I learned about in the unit Study Design was the specific types of statistical studies. Again, this is another thing that in my mind should have been explained at the beginning of the unit rather than at the end. Nonetheless, the different types of studies are:

  • Sample Study
    • Taking a (hopefully random) sample from a population and surveying them to make an estimation about a parameter of the population.
      • A parameter can be anything you’d like to study about a population, ex. number of shoes per household, number of years studying French, number of vacations taken per year, etc.
  • Observation Study
    • A study that aims to see if there’s a correlation between 2 variables.
    • Think (x, y) charts or bivariate/scatterplots.
  • Experiments
    • Attempt to prove/show causality.
    • Must have a ‘control’ group and a ‘treatment’ group that are identical in every single way apart from the variable in question being applied to the treatment group.
    • Matched Pair Experiments
      1. The experiment has two rounds: after the first round, the ‘control’ group and ‘treatment’ group switch spots and the experiment is repeated a second time, or
      2. Members of a study are split into pairs that are matched as nearly as possible on a common trait and then each pair is split up, one going into the ‘control’ group and the other going into the ‘treatment’ group.

I only got ~50% of the way through the unit Probability but still ended up covering a substantial amount of material. To start, it was explained that the probability of an event occurring can only be between 0 and 1, which can also be written as a percent. Probability is often denoted as P(A) where A is the event in question. The notation translates literally to “the probability of A occurring equals…”. When trying to calculate probability, it’s helpful to start by thinking of it as:

  • P(A) = “# of possibilities that meet the condition” / “# of equally likely possibilities”
  • Ex. P(flipping a coin to land on heads) = 1 head / 2 possible outcomes = 1/2 = 0.5 = 50%
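
Since that formula is really just counting, here’s a tiny Python snippet that works it out the same way for the coin flip above and for rolling an even number on a die (the die example is my own addition, not from KA):

```python
# Probability = "# of possibilities that meet the condition" / "# of equally likely possibilities"
coin = ["heads", "tails"]                                      # flipping a fair coin
p_heads = len([o for o in coin if o == "heads"]) / len(coin)   # 1 / 2

die = [1, 2, 3, 4, 5, 6]                                       # rolling a fair six-sided die
p_even = len([x for x in die if x % 2 == 0]) / len(die)        # 3 / 6

print(p_heads)   # 0.5
print(p_even)    # 0.5
```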

After going through a number of videos on basic probability and working through their corresponding exercises, I then started working on sets, which can be thought of as collections of distinct objects. Sets often take the form of numbers (ex. you have a set of numbers on a bingo card) but can be made up of anything such as cars, people, farm animals, etc. An example of how to denote sets would be:

  • X = {3, 12, 5, 13} or
  • Y = {14, 15, 6, 3} or
  • Anything = {x, y, z}

The { } brackets are used to show a set of something which is annoying since those are the brackets I used to use to denote the square root symbol. Going forward, I will use those brackets to denote sets of anything and find a new way of denoting the square root sign.

I then moved on to set notation. Up to this point, I’ve been taught the notation used for adding and subtracting sets and for describing them in general. The five set notation symbols I learned this week are:

  1. Intersection (“⋂”)
    • Objects that belong to set A “and” B.
    • If there are two sets containing numbers, ex. set A and set B, the intersection would be any numbers that occur in both sets.
    • Ex. A = {3, 12, 5, 13, 14}, B = {14, 15, 6, 3}
      • A ⋂ B = {3, 14}
  2. Union (“⋃”)
    • Objects that belong to set A “or” B.
    • In two sets of numbers (set A and B), the union of the two sets would contain each and every number found in either set, but with no duplicates.
    • Ex. A = {3, 12, 5, 13, 14}, B = {14, 15, 6, 3}
      • A ⋃ B = {3, 12, 5, 13, 14, 15, 6}
  3. Relative Complement (“\”)
    • Think of it as subtraction.
    • The relative complement of set B in A is all the numbers found in A but not in B.
    • Ex. A = {1, 2, 3, 4, 5, 6}, B = {2, 4, 6}
      • A\B = {1, 3, 5}
  4. Null Set (“∅”)
    • When a set is “empty”.
    • Subtracting a set from itself (i.e. taking the relative complement of a set from itself) results in a null set.
    • Ex. A = {1, 2, 3, 4, 5, 6}
      • A\A = ∅
  5. Element/Member of (“∈”)
    • Is used to say “x is a part of set A”.
    • Ex. A = {1, 2, 3, 4, 5, 6}
      • 3 ∈ A
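
Conveniently, Python’s built-in sets support pretty much exactly these five operations, so here’s a quick sketch using the same example sets from the intersection/union examples above (just a sketch to check my understanding, not something from KA):

```python
A = {3, 12, 5, 13, 14}
B = {14, 15, 6, 3}

print(A & B)    # intersection, A ⋂ B -> {3, 14}
print(A | B)    # union, A ⋃ B -> {3, 5, 6, 12, 13, 14, 15} (sets are unordered, so print order may vary)
print(A - B)    # relative complement of B in A, A\B -> {12, 5, 13}
print(A - A)    # null/empty set, A\A = ∅ -> set()
print(3 in A)   # element/member of, 3 ∈ A -> True
```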

After going through those set notation terms, I was then taught about Subsets, Strict Subsets, Supersets, and Strict Supersets:

  • Subset (“⊆”)
    • Every member/element in one set is contained within another.
    • Ex. A = {1, 2, 3, 4, 5, 6}, B = {2, 4, 6}
      • B ⊆ A
  • Strict Subset (“⊊”)
    • Every member/element in set B is contained within set A BUT every member of set A is NOT contained in set B.
    • Essentially the same thing as a subset, but gives a bit more information since a subset could be equal to the set it’s being compared to, whereas a strict subset cannot.
    •  Ex. A = {1, 2, 3, 4, 5, 6}, B = {2, 4, 6}
      • B ⊊ A
  • Superset (“⊇”)
    • Set A has all the elements of set B, or more.
    • Ex. A = {1, 2, 3, 4, 5, 6}, B = {2, 4, 6}
      • A ⊇ B
  • Strict Superset (“⊋”)
    • Set A has all the elements of set B, AND more.
    • Ex. A = {1, 2, 3, 4, 5, 6}, B = {2, 4, 6}
      • A ⊋ B
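
These four also map directly onto Python’s set comparison operators, so here’s a small sketch using the same A and B from the examples above:

```python
A = {1, 2, 3, 4, 5, 6}
B = {2, 4, 6}

print(B <= A)   # subset,          B ⊆ A -> True
print(B < A)    # strict subset,   B ⊊ A -> True (a subset of A AND not equal to A)
print(A >= B)   # superset,        A ⊇ B -> True
print(A > B)    # strict superset, A ⊋ B -> True
print(A <= A)   # a set is always a subset of itself -> True
print(A < A)    # ...but never a strict subset of itself -> False
```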

Lastly I learned about the difference between theoretical probability vs experimental probability and the Law of Large Numbers:

  • Theoretical Probability
    • When you control and can account for all the variables of an event and there are no outside variables that can influence the outcome of the event.
    • Ex. flipping a coin is considered theoretical probability since there are (essentially) only two possible outcomes (unless somehow it landed on its side).
  • Experimental Probability
    • The probability of an event occurring based on past statistical experience/evidence.
    • Ex. if the Leafs won 50/60 games in some alternate universe, you’d say they had a 5/6 chance of winning their next game based on experimental probability.
  • Law of Large Numbers
    • States that when conducting an experiment, the experimental probability will get closer to the theoretical probability the more the experiment is run (a.k.a. the ‘larger’ number of times the experiment is run).
    • “The average of the results obtained from a large number of trials should be closer to the expected value, and will tend to become closer to the expected value, as more trials are performed.” – Wikipedia
    • Ex. the more you flip a coin, the closer the mean of all of the flips will get to 0.5.
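
To see the Law of Large Numbers in action, here’s a small simulation sketch that flips a simulated fair coin more and more times and prints the experimental probability of heads as it creeps toward the theoretical 0.5 (the flip counts are arbitrary):

```python
import random

random.seed(0)
heads = 0
flips = 0

# Flip a simulated fair coin more and more times; the experimental probability
# of heads should drift toward the theoretical probability of 0.5.
for target in (10, 100, 1_000, 10_000, 100_000):
    while flips < target:
        heads += random.random() < 0.5   # True counts as 1, False as 0
        flips += 1
    print(f"{flips:>7} flips -> experimental P(heads) = {heads / flips:.4f}")
```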

As I finish writing this post I’m just realizing now how many definitions I went through this week. I’m also realizing that I really don’t enjoy learning definitions nearly as much as I enjoy working through actual math problems. I’m hopeful that I can finish off Probability (720/1600 M.P.) by the middle of the week and then get through the following unit Counting, Permutations, and Combinations (0/500 M.P.) before the end of the week. I’m mostly hopeful, however, that I’m done with learning definitions.