Exercise 8: Linear models, continued

This homework assignment is designed to give you a deeper understanding of linear models. First, we’ll dive into the math behind the closed-form solution of maximum likelihood estimation. In the first section below, write your answers using LaTeX equation formatting.

Note: Check out this page and this page for resources on how to do LaTeX formatting. You can also double-click on the question cells in this notebook to see how the math in the questions is formatted.


1. Deriving the Maximum Likelihood Estimate for Simple Linear Regression (6 points)

Using the mean squared error (MSE) as your objective function (the quantity you’re trying to minimize when you fit your model) allows for a closed-form solution to finding the maximum likelihood estimate (MLE) of your model parameters in linear regression. Let’s consider the simple, single-predictor-variable model, i.e. simple linear regression: \(Y = \beta_0 + \beta_1 X\).
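To build intuition for what the MSE objective measures before deriving its minimizer, here is a minimal sketch in base R using simulated data (the variable names and true parameter values are arbitrary choices for illustration, not part of the assignment):

```r
# Illustration of the MSE objective for simple linear regression,
# on simulated data with known "true" parameters (intercept 2, slope 3).
set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)

# MSE as a function of a candidate intercept (b0) and slope (b1)
mse <- function(b0, b1) mean((y - (b0 + b1 * x))^2)

mse(2, 3)  # near the true parameters: small (roughly the noise variance)
mse(0, 0)  # far from them: much larger
```

Fitting the model amounts to finding the \((\beta_0, \beta_1)\) pair that makes this quantity as small as possible, which is exactly what the derivatives in parts b and c accomplish analytically.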

a) Use algebra to show how you can expand out \(MSE(\beta_0, \beta_1)\) to get from i to ii below.

i) \(E[ (Y-(\beta_0 + \beta_1 X))^2]\)

ii) \(E[Y^2] -2 \beta_0E[Y]-2 \beta_1 Cov[X,Y]-2 \beta_1 E[X]E[Y]+ \beta_0^2 +2 \beta_0 \beta_1 E[X]+\beta_1^2 Var[X]+ \beta_1^2 (E[X])^2\)

Answer:

Write your answer here

b) Prove that the MLE of \(\beta_0\) is \(E[Y]- \beta_1 E[X]\) by taking the derivative of ii above, with respect to \(\beta_0\), setting the derivative to zero, and solving for \(\beta_0\).

Answer:

Write your answer here

c) Prove that the MLE for \(\beta_1\) is \(Cov[X,Y]/Var[X]\) by taking the derivative of equation ii above, with respect to \(\beta_1\), setting the derivative to zero, and solving for \(\beta_1\). Hint: after you’ve simplified / expanded a bit, plug in the solution for \(\beta_0\) from part b.

Answer:

Write your answer here


2. Connecting to data (4 points)

Now let’s connect this to some real data. Once again we’ll be using the unrestricted_trimmed_1_7_2020_10_50_44.csv file from the Homework/hcp_data folder in the class GitHub repository.

This data is a portion of the Human Connectome Project database. It provides cognitive task measures and brain morphology measurements from 1206 participants. The full description of each variable is provided in the HCP_S1200_DataDictionary_April_20_2018.csv file in the Homework/hcp_data folder in the class GitHub repository.

a) Use the setwd and read.csv functions to load data from the unrestricted_trimmed_1_7_2020_10_50_44.csv file. Then use the tidyverse tools to make a new data frame d1 that includes only the subject ID (Subject), Flanker Task performance (Flanker_Unadj), and total grey matter volume (FS_Total_GM_Vol) variables, and remove all NA values.

Use the head function to look at the first few rows of each data frame.
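As a reference for the general pattern (not the answer itself), here is a sketch of the select-and-drop-NA workflow on a stand-in data frame with the same column names; the commented-out setwd path is hypothetical, and you would replace the toy data frame with the real read.csv call:

```r
# Generic sketch of the load / select / drop-NA pattern.
library(tidyverse)

# setwd("path/to/Homework/hcp_data")  # hypothetical path; adjust to your clone
# d0 <- read.csv("unrestricted_trimmed_1_7_2020_10_50_44.csv")

# Stand-in data frame with the assignment's column names, for illustration:
d0 <- data.frame(
  Subject = 1:4,
  Flanker_Unadj = c(110, NA, 95, 102),
  FS_Total_GM_Vol = c(6.8e5, 7.1e5, NA, 6.5e5),
  Other_Var = letters[1:4]
)

d1 <- d0 %>%
  select(Subject, Flanker_Unadj, FS_Total_GM_Vol) %>%  # keep three columns
  drop_na()                                            # remove rows with NAs

head(d1)  # rows 1 and 4 survive; the two rows containing NAs are dropped
```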

# WRITE YOUR CODE HERE

b) Now we’re going to see if the solutions we proved above actually line up with the model fit that R gives us (it should…). Calculate what the \(\beta_0\) and \(\beta_1\) coefficients should be for a simple linear regression model using Flanker_Unadj as \(Y\) and FS_Total_GM_Vol as \(X\). Use the formulas we derived above (\(\beta_1 = Cov[X,Y]/Var[X]\), \(\beta_0 = E[Y] - \beta_1 E[X]\)). Then use lm() to compare the coefficients you calculated with the ones R gives you.
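To see the comparison pattern before applying it to the HCP variables, here is a sketch on simulated data (the algebra is identical for any \(X\) and \(Y\); the variable names and parameter values below are made up for illustration):

```r
# Closed-form coefficients vs. lm(), on simulated data.
set.seed(42)
X <- rnorm(200, mean = 50, sd = 10)
Y <- 100 + 0.5 * X + rnorm(200, sd = 5)

b1_hat <- cov(X, Y) / var(X)          # beta_1 = Cov[X,Y] / Var[X]
b0_hat <- mean(Y) - b1_hat * mean(X)  # beta_0 = E[Y] - beta_1 * E[X]

fit <- lm(Y ~ X)
coef(fit)          # lm's (Intercept) and X coefficients
c(b0_hat, b1_hat)  # should agree with coef(fit) up to floating-point error
```

The two agree exactly (not just approximately) because R’s cov() and var() both use the same n − 1 denominator, which cancels in the ratio, leaving the ordinary least-squares slope.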

# WRITE YOUR CODE HERE

DUE: 5pm EST, Feb 28, 2024

IMPORTANT Did you collaborate with anyone on this assignment? If so, list their names here.

Someone’s Name