
Coursera: Machine Learning (Week 1) Quiz - Linear Regression with One Variable | Andrew NG

 



2. Linear Regression with One Variable
Don't just copy and paste for the sake of completion.
The solutions posted here are for reference only.
They are meant to unblock you if you get stuck somewhere.
Make sure you understand the material first.
1. Consider the problem of predicting how well a student does in her second year of college/university, given how well she did in her first year. Specifically, let x be equal to the number of “A” grades (including A−, A, and A+ grades) that a student receives in their first year of college (freshman year). We would like to predict the value of y, which we define as the number of “A” grades they get in their second year (sophomore year).
Here each row is one training example. Recall that in linear regression, our hypothesis is hθ(x) = θ₀ + θ₁x, and we use m to denote the number of training examples.

[Image: training set table of (x, y) values]
For the training set given above (note that this training set may also be referenced in other questions in this quiz), what is the value of m? In the box below, please enter your answer (which should be a number between 0 and 10).
Answer: 4
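For reference, here is a minimal Python sketch of what m means: it is simply the number of rows in the training set. The (x, y) pairs below are an assumption (a commonly circulated version of this table, chosen to be consistent with the answers in this post), not necessarily the exact values shown in your quiz image.

    # Assumed training set: x = number of "A" grades in year 1, y = number in year 2.
    training_set = [(3, 2), (1, 2), (0, 1), (4, 3)]

    # m denotes the number of training examples, i.e. the number of rows.
    m = len(training_set)
    print(m)  # 4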
  1. Many substances that can burn (such as gasoline and alcohol) have a chemical structure based on carbon atoms; for this reason they are called hydrocarbons. A chemist wants to understand how the number of carbon atoms in a molecule affects how much energy is released when that molecule combusts (meaning that it is burned). The chemist obtains the dataset below. In the column on the right, “kJ/mol” is the unit measuring the amount of energy released.

    [Image: dataset listing the number of carbon atoms in each molecule (x) and the energy released on combustion in kJ/mol (y)]

    2. You would like to use linear regression (hθ(x) = θ₀ + θ₁x) to estimate the amount of energy released (y) as a function of the number of carbon atoms (x). Which of the following do you think will be the values you obtain for θ₀ and θ₁? You should be able to select the right answer without actually implementing linear regression.
    •  θ₀ = −569.6, θ₁ = 530.9
    •  θ₀ = −1780.0, θ₁ = −530.9
    •  θ₀ = −569.6, θ₁ = −530.9
    •  θ₀ = −1780.0, θ₁ = 530.9
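As a rough sanity check of how θ₀ and θ₁ could be obtained, here is a hedged least-squares sketch. The combustion energies below are approximate textbook values for methane through butane, used only for illustration; they are not the dataset from the quiz image, so the fitted numbers will not match the options above. The point is only that both the slope and the intercept come out negative when y is a negative kJ/mol value whose magnitude grows with x.

    import numpy as np

    # Illustrative data (assumed, not the quiz's): energy released on combustion is
    # reported as a negative kJ/mol value and becomes more negative as x grows.
    carbon_atoms = np.array([1, 2, 3, 4])                   # x: number of carbon atoms
    energy_kj_mol = np.array([-890, -1560, -2220, -2878])   # y: approximate kJ/mol

    # Least-squares fit of y = theta0 + theta1 * x.
    theta1, theta0 = np.polyfit(carbon_atoms, energy_kj_mol, 1)
    print(theta0, theta1)  # both come out negative for this kind of data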
2. For this question, assume that we are using the training set from Q1.
Recall that our definition of the cost function was
J(θ₀, θ₁) = (1/(2m)) · Σ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾)², where the sum runs over i = 1, …, m.
What is J(0, 1)? In the box below, please enter your answer (simplify fractions to decimals when entering your answer, and use ‘.’ as the decimal delimiter, e.g., 1.5).
Answer: 0.5
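To see where 0.5 comes from, here is a small sketch of the cost function. It assumes the same hypothetical training set as in the Q1 sketch above ((3, 2), (1, 2), (0, 1), (4, 3)), which reproduces the stated answer.

    # Assumed (x, y) training set, as above.
    X = [3, 1, 0, 4]
    Y = [2, 2, 1, 3]
    m = len(X)

    def h(x, theta0, theta1):
        # Hypothesis: h_theta(x) = theta0 + theta1 * x
        return theta0 + theta1 * x

    def J(theta0, theta1):
        # Cost: (1 / (2m)) * sum of squared errors over the training set.
        return sum((h(x, theta0, theta1) - y) ** 2 for x, y in zip(X, Y)) / (2 * m)

    print(J(0, 1))  # 0.5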
3. Suppose we set θ₀ = 0, θ₁ = 0.5 in the linear regression hypothesis from Q1. What is hθ(6)?
Answer: 3
3. Suppose we set θ₀ = −2, θ₁ = 0.5 in the linear regression hypothesis from Q1. What is hθ(6)?
Answer: 1
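The arithmetic for this version is a one-liner: plug the given θ values into the hypothesis.

    def h(x, theta0, theta1):
        # h_theta(x) = theta0 + theta1 * x
        return theta0 + theta1 * x

    print(h(6, -2, 0.5))  # -2 + 0.5 * 6 = 1.0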
4. Let f be some function so that f(θ₀, θ₁) outputs a number. For this problem, f is some arbitrary/unknown smooth function (not necessarily the cost function of linear regression, so f may have local optima).
Suppose we use gradient descent to try to minimize f(θ₀, θ₁) as a function of θ₀ and θ₁.
Which of the following statements are true? (Check all that apply.)
    •  If θ₀ and θ₁ are initialized at the global minimum, then one iteration will not change their values.
    •  Setting the learning rate α to be very small is not harmful, and can only speed up the convergence of gradient descent.
    •  No matter how θ₀ and θ₁ are initialized, so long as α is sufficiently small, we can safely expect gradient descent to converge to the same solution.
    •  If the first few iterations of gradient descent cause f(θ₀, θ₁) to increase rather than decrease, then the most likely cause is that we have set the learning rate α to too large a value.
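As a quick sanity check of the statements above, here is a small gradient-descent sketch on a toy bowl-shaped function (an assumed example, not the course's cost function). Starting exactly at the global minimum leaves the parameters unchanged, while an overly large learning rate makes the function value grow instead of shrink.

    def f(t0, t1):
        # Toy smooth function with its global minimum at (0, 0).
        return t0 ** 2 + t1 ** 2

    def grad(t0, t1):
        # Gradient of f.
        return 2 * t0, 2 * t1

    def step(t0, t1, alpha):
        # One gradient-descent update.
        g0, g1 = grad(t0, t1)
        return t0 - alpha * g0, t1 - alpha * g1

    # Initialized at the global minimum: one iteration does not change the values.
    print(step(0.0, 0.0, alpha=0.1))   # (0.0, 0.0)

    # Learning rate too large: f increases on every iteration (divergence).
    t0, t1 = 1.0, 1.0
    for _ in range(3):
        t0, t1 = step(t0, t1, alpha=1.5)
        print(f(t0, t1))  # 8.0, 32.0, 128.0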

4. In the given figure, the cost function J(θ₀, θ₁) has been plotted against θ₀ and θ₁, as shown in ‘Plot 2’. The contour plot for the same cost function is given in ‘Plot 1’. Based on the figure, choose the correct options (check all that apply).
[Image: Plot 1 (contour plot) and Plot 2 (surface plot) of the cost function J(θ₀, θ₁)]
    •  If we start from point B, gradient descent with a well-chosen learning rate will eventually help us reach at or near point A, as the value of cost function J(θ₀, θ₁) is maximum at point A.
    •  If we start from point B, gradient descent with a well-chosen learning rate will eventually help us reach at or near point C, as the value of cost function J(θ₀, θ₁) is minimum at point C.
    •  Point P (the global minimum of Plot 2) corresponds to point A of Plot 1.
    •  If we start from point B, gradient descent with a well-chosen learning rate will eventually help us reach at or near point A, as the value of cost function J(θ₀, θ₁) is minimum at point A.
    •  Point P (the global minimum of Plot 2) corresponds to point C of Plot 1.
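If the figure itself is not available, a short matplotlib sketch of an assumed toy cost surface (not the exact figure from the quiz) shows how the two plots relate: the lowest point of the 3-D surface corresponds to the centre of the innermost ring of the contour plot.

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # registers the 3-D projection on older matplotlib

    # Toy bowl-shaped surface standing in for J(theta0, theta1).
    t0 = np.linspace(-3, 3, 100)
    t1 = np.linspace(-3, 3, 100)
    T0, T1 = np.meshgrid(t0, t1)
    J = T0 ** 2 + T1 ** 2

    fig = plt.figure(figsize=(10, 4))

    # "Plot 1": contour plot; the minimum sits at the centre of the rings.
    ax1 = fig.add_subplot(1, 2, 1)
    ax1.contour(T0, T1, J, levels=20)
    ax1.set_title("Contour plot")

    # "Plot 2": surface plot; the minimum is the lowest point of the bowl.
    ax2 = fig.add_subplot(1, 2, 2, projection="3d")
    ax2.plot_surface(T0, T1, J, cmap="viridis")
    ax2.set_title("Surface plot")

    plt.show()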
5. Suppose that for some linear regression problem (say, predicting housing prices as in the lecture), we have some training set, and for our training set we managed to find some θ₀, θ₁ such that J(θ₀, θ₁) = 0.
Which of the statements below must then be true? (Check all that apply.)
    •  Gradient descent is likely to get stuck at a local minimum and fail to find the global minimum.
    •  For this to be true, we must have θ₀ = 0 and θ₁ = 0, so that hθ(x) = 0.
    •  For this to be true, we must have y⁽ⁱ⁾ = 0 for every value of i = 1, 2, …, m.
    •  Our training set can be fit perfectly by a straight line, i.e., all of our training examples lie perfectly on some straight line.
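A tiny sketch of the straight-line option: with a hypothetical training set that lies exactly on a line, the cost from Q2's definition comes out to 0.

    # Hypothetical training set lying exactly on the line y = 1 + 2x.
    X = [0, 1, 2, 3]
    Y = [1, 3, 5, 7]
    m = len(X)

    def J(theta0, theta1):
        # Same cost definition as in Q2: sum of squared errors over 2m.
        return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(X, Y)) / (2 * m)

    print(J(1, 2))  # 0.0 -- a perfect fit gives zero cost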
-----------------------------------------------------------------------
                  Machine Learning Coursera-All weeks solutions [Assignment + Quiz]   click here
                                                                        &
                  Coursera Google Data Analytics Professional Quiz Answers   click here

          Feel free to ask your doubts in the comments section; I will do my best to answer them.
          If you find this post helpful, kindly comment and share it.
          That is the simplest way to encourage me to keep doing such work.


          Thanks & Regards,
          - Wolf
