Coursera: Machine Learning (Week 2) [Assignment Solution] - Andrew Ng
Don't just copy-paste the code for the sake of completion.
Make sure you understand the code first.
In this exercise, you will implement linear regression and get to see it work on data. Before starting on this programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.
This exercise consists of the following files:
- ex1.m - Octave/MATLAB script that steps you through the exercise
- ex1_multi.m - Octave/MATLAB script for the later parts of the exercise
- ex1data1.txt - Dataset for linear regression with one variable
- ex1data2.txt - Dataset for linear regression with multiple variables
- submit.m - Submission script that sends your solutions to our servers
- [*] warmUpExercise.m - Simple example function in Octave/MATLAB
- [*] plotData.m - Function to display the dataset
- [*] computeCost.m - Function to compute the cost of linear regression
- [*] gradientDescent.m - Function to run gradient descent
- [#] computeCostMulti.m - Cost function for multiple variables
- [#] gradientDescentMulti.m - Gradient descent for multiple variables
- [#] featureNormalize.m - Function to normalize features
- [#] normalEqn.m - Function to compute the normal equations
[*] indicates files you will need to complete
[#] indicates optional exercises
warmUpExercise.m :
function A = warmUpExercise()
%WARMUPEXERCISE Example function in octave
%   A = WARMUPEXERCISE() is an example function that returns the 5x5 identity matrix

A = [];

% ============= YOUR CODE HERE ==============
% Instructions: Return the 5x5 identity matrix
%               In octave, we return values by defining which variables
%               represent the return values (at the top of the file)
%               and then set them accordingly.

A = eye(5); % It's a built-in function to create an identity matrix

% ===========================================

end
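As a quick sanity check, call the function from the Octave prompt; it should return the 5x5 identity matrix (Octave displays the result of eye(5) as a Diagonal Matrix; MATLAB prints it as a plain matrix):

>> warmUpExercise()
ans =

Diagonal Matrix

   1   0   0   0   0
   0   1   0   0   0
   0   0   1   0   0
   0   0   0   1   0
   0   0   0   0   1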
plotData.m :
function plotData(x, y)
%PLOTDATA Plots the data points x and y into a new figure
%   PLOTDATA(x,y) plots the data points and gives the figure axes labels of
%   population and profit.

figure; % open a new figure window

% ====================== YOUR CODE HERE ======================
% Instructions: Plot the training data into a figure using the
%               "figure" and "plot" commands. Set the axes labels using
%               the "xlabel" and "ylabel" commands. Assume the
%               population and revenue data have been passed in
%               as the x and y arguments of this function.
%
% Hint: You can use the 'rx' option with plot to have the markers
%       appear as red crosses. Furthermore, you can make the
%       markers larger by using plot(..., 'rx', 'MarkerSize', 10);

plot(x, y, 'rx', 'MarkerSize', 10);      % Plot the data
ylabel('Profit in $10,000s');            % Set the y-axis label
xlabel('Population of City in 10,000s'); % Set the x-axis label

% ============================================================

end
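A minimal usage sketch, assuming ex1data1.txt is in the working directory with the city population in the first column and the food-truck profit in the second (as in the course dataset):

data = load('ex1data1.txt'); % comma-separated: population, profit
x = data(:, 1);              % population of a city in 10,000s
y = data(:, 2);              % profit in $10,000s
plotData(x, y);              % red-cross scatter plot with labelled axes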
computeCost.m :
function J = computeCost(X, y, theta)
%COMPUTECOST Compute cost for linear regression
%   J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
%   parameter for linear regression to fit the data points in X and y

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
%               You should set J to the cost.

%%%%%%%%%%%%% CORRECT %%%%%%%%%
% h = X*theta;
% temp = 0;
% for i = 1:m
%   temp = temp + (h(i) - y(i))^2;
% end
% J = (1/(2*m)) * temp;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%% CORRECT: Vectorized Implementation %%%%%%%%%
J = (1/(2*m)) * sum(((X*theta) - y).^2);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% =========================================================================

end
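Both versions compute the same squared-error cost J(theta) = (1/(2m)) * sum_i (h_theta(x^(i)) - y^(i))^2 with h_theta(x) = theta' * x; the vectorized one simply folds the loop into element-wise operations. A quick check, continuing from the plotData snippet above (X must have a column of ones prepended for the intercept term, as ex1.m does for you):

m = length(y);        % number of training examples
X = [ones(m, 1), x];  % prepend the intercept column
theta = zeros(2, 1);  % start from all-zero parameters
J = computeCost(X, y, theta)
% on ex1data1.txt with theta = [0; 0], the assignment's
% expected cost is approximately 32.07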
gradientDescent.m :
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.

    %%%%%%%%% CORRECT %%%%%%%
    % error = (X * theta) - y;
    % temp0 = theta(1) - ((alpha/m) * sum(error .* X(:,1)));
    % temp1 = theta(2) - ((alpha/m) * sum(error .* X(:,2)));
    % theta = [temp0; temp1];
    %%%%%%%%%%%%%%%%%%%%%%%%%

    %%%%%%%%% CORRECT %%%%%%%
    % error = (X * theta) - y;
    % temp0 = theta(1) - ((alpha/m) * X(:,1)'*error);
    % temp1 = theta(2) - ((alpha/m) * X(:,2)'*error);
    % theta = [temp0; temp1];
    %%%%%%%%%%%%%%%%%%%%%%%%%

    %%%%%%%%% CORRECT %%%%%%%
    error = (X * theta) - y;
    theta = theta - ((alpha/m) * X' * error);
    %%%%%%%%%%%%%%%%%%%%%%%%%

    % ============================================================

    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);

end

end
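All three variants perform the same simultaneous update theta_j := theta_j - (alpha/m) * sum_i (h_theta(x^(i)) - y^(i)) * x_j^(i); the uncommented one vectorizes it over the whole parameter vector at once, so it also works unchanged for any number of features. A minimal run, using the settings ex1.m uses:

theta = zeros(2, 1);
alpha = 0.01;     % learning rate
num_iters = 1500; % number of gradient steps
[theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters);
% J_history should decrease on every iteration; on ex1data1.txt the
% assignment expects theta to land near [-3.6303; 1.1664]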
computeCostMulti.m :
function J = computeCostMulti(X, y, theta)
%COMPUTECOSTMULTI Compute cost for linear regression with multiple variables
%   J = COMPUTECOSTMULTI(X, y, theta) computes the cost of using theta as the
%   parameter for linear regression to fit the data points in X and y

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
%               You should set J to the cost.

J = (1/(2*m)) * sum(((X*theta) - y).^2);

% =========================================================================

end
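Note that the vectorized cost is identical to the single-variable version. An equivalent form replaces the sum with an inner product of the error vector with itself; either would be a valid function body here:

error = (X * theta) - y;
J = (1/(2*m)) * (error' * error); % same value as the sum-based version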
gradientDescentMulti.m :
function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
%GRADIENTDESCENTMULTI Performs gradient descent to learn theta
%   theta = GRADIENTDESCENTMULTI(x, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCostMulti) and gradient here.

    %%%%%%%% CORRECT %%%%%%%%%%
    error = (X * theta) - y;
    theta = theta - ((alpha/m) * X' * error);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%

    % ============================================================

    % Save the cost J in every iteration
    J_history(iter) = computeCostMulti(X, y, theta);

end

end
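The file list above also mentions featureNormalize.m and normalEqn.m, which this post does not reproduce. Below is a minimal sketch of the standard approach from the lectures: z-score normalization, and the closed-form normal equation theta = pinv(X'*X)*X'*y. It assumes automatic broadcasting, which Octave and recent MATLAB releases support (older MATLAB needs bsxfun).

featureNormalize.m (sketch):

function [X_norm, mu, sigma] = featureNormalize(X)
%FEATURENORMALIZE Scales each feature to zero mean and unit standard deviation
mu = mean(X);                % row vector of per-column means
sigma = std(X);              % row vector of per-column standard deviations
X_norm = (X - mu) ./ sigma;  % z-score each column (relies on broadcasting)
end

normalEqn.m (sketch):

function theta = normalEqn(X, y)
%NORMALEQN Computes the closed-form least-squares solution for theta
theta = pinv(X' * X) * X' * y; % pinv stays robust even if X'*X is singular
end

Gradient descent needs normalized features to converge quickly; the normal equation does not, but it becomes expensive when the number of features is very large.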
Feel free to ask your doubts in the comments section; I will do my best to answer them.
If you found this post helpful, kindly comment and share it.
That is the simplest way to encourage me to keep doing such work.
Thanks & Regards,
- Wolf