Home › Forums › Assignment Coursera › Deep Learning Specialization (Andrew Ng) › Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization › Week 1: Practical Aspects of Deep Learning
Tagged: Andrew Ng, Coursera, Deep Learning, Deep Learning Specialization, Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization



August 6, 2020 at 1:49 pm · #1073 · Abhishek Tyagi (Keymaster)
1. If you have 10,000,000 examples, how would you split the train/dev/test set?
Answer
98% train . 1% dev . 1% test
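With millions of examples, the dev and test sets only need to be large enough to give a reliable estimate, so a 98/1/1 split is typical. A minimal numpy sketch of that split (the array names are hypothetical, and the dataset is scaled down from 10,000,000 for illustration):

```python
import numpy as np

# Hypothetical dataset, scaled down from 10,000,000 examples.
m = 10_000
X = np.random.rand(m, 5)              # features
y = np.random.randint(0, 2, size=m)   # labels

# Shuffle once so all three splits come from the same distribution.
rng = np.random.default_rng(0)
idx = rng.permutation(m)
n_train = int(0.98 * m)   # 98% train
n_dev = int(0.01 * m)     # 1% dev

train_idx = idx[:n_train]
dev_idx = idx[n_train:n_train + n_dev]
test_idx = idx[n_train + n_dev:]      # remaining 1% test

X_train, y_train = X[train_idx], y[train_idx]
X_dev, y_dev = X[dev_idx], y[dev_idx]
X_test, y_test = X[test_idx], y[test_idx]
```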
2. The dev and test sets should:
Answer
Come from the same distribution.
3. If your neural network model seems to have high variance, which of the following would be promising things to try?
Answer
Add regularization
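The regularization idea behind questions 3, 5, and 6 can be sketched with L2 regularization: a penalty term is added to the cost, and the resulting extra gradient term shrinks the weights on every update. A minimal numpy illustration (the function names and the values of lambda, m, and the learning rate are hypothetical):

```python
import numpy as np

def l2_cost(cross_entropy_cost, weights, lambd, m):
    # Add (lambda / 2m) * sum of squared weights to the unregularized cost.
    l2_term = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in weights)
    return cross_entropy_cost + l2_term

def update_with_weight_decay(W, dW, lambd, m, lr):
    # The extra (lambd / m) * W gradient term shrinks W each step:
    #   W <- (1 - lr * lambd / m) * W - lr * dW   ("weight decay")
    return W - lr * (dW + (lambd / m) * W)

# Hypothetical numbers: with a zero data gradient, W shrinks by
# the factor (1 - 0.1 * 10 / 100) = 0.99 on this step.
W = np.ones((2, 2))
W_new = update_with_weight_decay(W, dW=np.zeros((2, 2)), lambd=10.0, m=100, lr=0.1)
```

Increasing lambda makes both the penalty and the shrinkage factor larger, which is why the weights are pushed closer to 0.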
4. You are working on an automated checkout kiosk for a supermarket, and are building a classifier for apples, bananas and oranges. Suppose your classifier obtains a training set error of 0.5%, and a dev set error of 7%. Which of the following are promising things to try to improve your classifier? (Check all that apply.)
Answer

Increase the regularization parameter lambda
Get more training data
5. What is weight decay?
Answer
A regularization technique (such as L2 regularization) that results in gradient descent shrinking the weights on every iteration.
6. What happens when you increase the regularization hyperparameter lambda?
Answer
Weights are pushed toward becoming smaller (closer to 0).
7. With the inverted dropout technique, at test time:
Answer
You do not apply dropout (do not randomly eliminate units) and do not keep the 1/keep_prob factor in the calculations used in training.
8. Increasing the parameter keep_prob from (say) 0.5 to 0.6 will likely cause the following: (Check the two that apply.)
Answer
Reducing the regularization effect
Causing the neural network to end up with a lower training set error
9. Which of these techniques are useful for reducing variance (reducing overfitting)? (Check all that apply.)
Answer
L2 regularization
Data augmentation
Dropout
10. Why do we normalize the inputs x?
Answer
It makes the cost function faster to optimize
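Normalization in question 10 means giving each input feature zero mean and unit variance, so the features sit on a similar scale and the cost surface becomes rounder and easier for gradient descent to traverse. A minimal numpy sketch with a hypothetical input matrix:

```python
import numpy as np

def normalize_inputs(X):
    # Zero-mean, unit-variance normalization, computed per feature (column).
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-8   # small constant avoids division by zero
    return (X - mu) / sigma, mu, sigma

# Hypothetical features on very different scales.
X = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
X_norm, mu, sigma = normalize_inputs(X)
```

At test time the same mu and sigma computed on the training set should be reused, since dev/test data must go through the identical transform.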

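The inverted-dropout behaviour from questions 7 and 8 can be sketched as follows; the layer activations and the keep_prob value here are hypothetical. Training divides the masked activations by keep_prob so their expected value is unchanged, which is exactly why test time needs neither the mask nor the scaling factor:

```python
import numpy as np

keep_prob = 0.8  # hypothetical value; raising it weakens the regularization effect

def forward_train(a, rng):
    # Training: randomly zero units, then scale by 1/keep_prob so the
    # expected activation is unchanged (the "inverted" part).
    mask = rng.random(a.shape) < keep_prob
    return (a * mask) / keep_prob

def forward_test(a):
    # Test time: no dropout mask and no 1/keep_prob scaling at all.
    return a

rng = np.random.default_rng(0)
a = np.ones((4, 1000))
a_train = forward_train(a, rng)   # mean stays near 1 thanks to the rescaling
a_test = forward_test(a)          # just the raw activations
```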
