Coursera: Machine Learning (Week 3) Quiz - Regularization | Andrew Ng
2. Regularization
Don't just copy and paste for the sake of completion. The solutions uploaded here are only for reference; they are meant to unblock you if you get stuck somewhere. Make sure you understand them first.
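For reference, every question below turns on the regularized logistic regression cost function from the lectures (m training examples, n features; by convention the bias term θ₀ is not regularized):

$$
J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[\,y^{(i)}\log h_\theta(x^{(i)}) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2
$$

The larger λ is, the more heavily large parameter values are penalized.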
- You are training a classification model with logistic regression. Which of the following statements are true? Check all that apply.
- Introducing regularization to the model always results in equal or better performance on the training set.
- Introducing regularization to the model always results in equal or better performance on examples not in the training set.
- Adding a new feature to the model always results in equal or better performance on the training set.
- Adding many new features to the model helps prevent overfitting on the training set.
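These training-set claims are easy to probe empirically. Below is a minimal sketch (assuming numpy and scikit-learn are installed; the data is synthetic and not part of the quiz): with enough added polynomial features, an unregularized model can fit even pure-noise labels almost perfectly, which is overfitting rather than protection against it, while turning regularization up makes training accuracy drop.

```python
# Minimal sketch (assumes numpy and scikit-learn >= 1.2; on older
# scikit-learn versions use penalty='none' instead of penalty=None).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (rng.random(40) < 0.5).astype(int)  # random labels: pure noise

# Adding many polynomial features lets the model fit the training set
# (near-)perfectly -- that is overfitting, not preventing it.
Xp = PolynomialFeatures(degree=6, include_bias=False).fit_transform(X)

unreg = LogisticRegression(penalty=None, max_iter=20000).fit(Xp, y)
reg = LogisticRegression(C=1e-2, max_iter=20000).fit(Xp, y)  # strong regularization

print("train accuracy, lambda = 0:  ", unreg.score(Xp, y))  # typically ~1.0
print("train accuracy, large lambda:", reg.score(Xp, y))    # typically lower
```

The gap in the two printed accuracies is why "regularization always gives equal or better training-set performance" is false, while adding features (more parameters to fit with) can never make the best achievable training fit worse.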
- Suppose you ran logistic regression twice, once with λ = 0, and once with λ = 1. One of the times you got one set of parameters θ, and the other time you got a different set; however, you forgot which value of λ corresponds to which θ. Which one do you think corresponds to λ = 1?
ANSWER - The θ whose entries are smaller in magnitude, since a larger λ penalizes large parameter values more heavily.
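A quick way to convince yourself of this (a minimal sketch, assuming numpy and scikit-learn; scikit-learn's C parameter plays the role of 1/λ, up to scaling, and the data here is synthetic, not the quiz's):

```python
# Minimal sketch: a larger lambda shrinks the learned parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = (X @ np.array([3.0, -2.0]) > 0).astype(int)  # linearly separable labels

# In scikit-learn, C is the INVERSE of the regularization strength,
# so a huge C approximates lambda = 0 and C = 1 stands in for lambda = 1.
for C, label in [(1e6, "lambda ~ 0"), (1.0, "lambda = 1")]:
    theta = LogisticRegression(C=C, max_iter=10000).fit(X, y).coef_[0]
    print(f"{label}: theta = {np.round(theta, 2)}")

# The run standing in for lambda = 1 yields the theta with the smaller
# magnitudes -- which is how you tell the two quiz runs apart.
```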
- Which of the following statements about regularization are true? Check all that apply.
- Using a very large value of λ cannot hurt the performance of your hypothesis; the only reason we do not set λ to be too large is to avoid numerical problems.
- Because logistic regression outputs values 0 ≤ h_θ(x) ≤ 1, its range of output values can only be “shrunk” slightly by regularization anyway, so regularization is generally not helpful for it.
- Consider a classification problem. Adding regularization may cause your classifier to incorrectly classify some training examples (which it had correctly classified when not using regularization, i.e. when λ = 0).
- Using too large a value of λ can cause your hypothesis to overfit the data; this can be avoided by reducing λ.
- Which of the following statements about regularization are true? Check all that apply.
- Using a very large value of λ cannot hurt the performance of your hypothesis; the only reason we do not set λ to be too large is to avoid numerical problems.
- Because logistic regression outputs values 0 ≤ h_θ(x) ≤ 1, its range of output values can only be “shrunk” slightly by regularization anyway, so regularization is generally not helpful for it.
- Because regularization causes J(θ) to no longer be convex, gradient descent may not always converge to the global minimum (when λ > 0, and when using an appropriate learning rate α).
- Using too large a value of λ can cause your hypothesis to underfit the data; this can be avoided by reducing λ.
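To see the underfitting effect concretely, here is a minimal sketch (numpy and scikit-learn assumed; synthetic data, not the course's): as λ grows, even the training accuracy falls, so examples the λ = 0 model classified correctly start being misclassified, which is exactly the true option in both versions of this question.

```python
# Minimal sketch: too large a lambda underfits, lowering TRAINING accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 2))
y = ((X[:, 0] ** 2 + X[:, 1] ** 2) < 1).astype(int)  # circular true boundary
Xp = PolynomialFeatures(degree=3, include_bias=False).fit_transform(X)

for lam in [1e-3, 1.0, 1e3]:  # scikit-learn's C is (roughly) 1/lambda
    clf = LogisticRegression(C=1.0 / lam, max_iter=10000).fit(Xp, y)
    print(f"lambda = {lam:g}: training accuracy = {clf.score(Xp, y):.2f}")
```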
- 4. In which one of the following figures do you think the hypothesis has overfit the training set?
- (The four answer choices were figures, not reproduced here; the second figure was marked as the answer. The overfit hypothesis is the one whose decision boundary twists to fit nearly every training example. See the sketch after question 5.)
- 5. In which one of the following figures do you think the hypothesis has underfit the training set?
- (The four answer choices were figures, not reproduced here; the second figure was marked as the answer. The underfit hypothesis is the one whose decision boundary is too simple to capture the pattern in the training data.)
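For questions 4 and 5, the following minimal sketch (numpy and matplotlib assumed; synthetic data, not the original course figures) regenerates the kind of plots they refer to: an underfit straight line, a reasonable fit, and an overfit high-degree polynomial.

```python
# Minimal sketch: underfit vs. good fit vs. overfit, using polynomial
# regression on noisy 1-D data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1, 1, 20))
y = np.sin(3 * x) + 0.25 * rng.normal(size=x.size)
grid = np.linspace(-1, 1, 200)

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, deg, title in zip(axes, [1, 3, 9], ["underfit", "good fit", "overfit"]):
    coeffs = np.polyfit(x, y, deg)          # least-squares polynomial fit
    ax.scatter(x, y, s=15, color="k")
    ax.plot(grid, np.polyval(coeffs, grid))
    ax.set_title(f"degree {deg}: {title}")
    ax.set_ylim(-2, 2)
plt.tight_layout()
plt.show()
```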
Feel free to ask any doubts in the comments section; I will do my best to answer them. If you find this helpful, kindly comment and share the post. That is the simplest way to encourage me to keep doing such work.

Thanks & Regards,
- Wolf