April 15, 2020 at 2:42 am #370
Abhishek Tyagi (Keymaster)
1)- Your first task is to perform regression using Estimators on the Boston Housing data. Please visit <b>this</b> notebook to answer the following questions. Follow the instructions given in the Colab notebook to train a linear regression model on the data for 3000 steps. On evaluating on the test data, what is the range of the average loss of the trained model?
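The Q1 setup can be sketched roughly as follows, assuming the TF Estimator API (`tf.estimator.LinearRegressor`) that the notebook uses; the feature names, batch size, and train/test split here are illustrative, not the notebook's exact code.

```python
import tensorflow as tf
from tensorflow.keras.datasets import boston_housing

# Boston Housing: 13 numeric features per example, target is the house price.
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
NUM_FEATURES = x_train.shape[1]

# One numeric feature column per input dimension.
feature_columns = [
    tf.feature_column.numeric_column(f"f{i}") for i in range(NUM_FEATURES)
]

def make_input_fn(x, y, training=True):
    def input_fn():
        feats = {f"f{i}": x[:, i] for i in range(NUM_FEATURES)}
        ds = tf.data.Dataset.from_tensor_slices((feats, y))
        if training:
            ds = ds.shuffle(len(y)).repeat()  # repeat so train() can run 3000 steps
        return ds.batch(32)
    return input_fn

est = tf.estimator.LinearRegressor(feature_columns=feature_columns)
est.train(make_input_fn(x_train, y_train), steps=3000)

# `average_loss` is the mean squared error per example on the test data.
metrics = est.evaluate(make_input_fn(x_test, y_test, training=False))
print(metrics["average_loss"])
```

The reported `average_loss` is what the question's answer ranges refer to.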
2)- Now train a DNN regressor model on the data for 3000 steps. Your network should have one hidden layer of 10 neurons. Leave the other parameters (except config) on their default values. On evaluating on the test data, what is the range of the average loss of the trained model?
Answer- More than 300
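The Q2 setup (one hidden layer of 10 neurons, other parameters left at their defaults) can be sketched as below, again assuming the `tf.estimator` API; data handling details are illustrative.

```python
import tensorflow as tf
from tensorflow.keras.datasets import boston_housing

(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
NUM_FEATURES = x_train.shape[1]
cols = [tf.feature_column.numeric_column(f"f{i}") for i in range(NUM_FEATURES)]

def make_input_fn(x, y, training=True):
    def input_fn():
        feats = {f"f{i}": x[:, i] for i in range(NUM_FEATURES)}
        ds = tf.data.Dataset.from_tensor_slices((feats, y))
        if training:
            ds = ds.shuffle(len(y)).repeat()
        return ds.batch(32)
    return input_fn

# hidden_units=[10] gives exactly one hidden layer with 10 neurons.
est = tf.estimator.DNNRegressor(feature_columns=cols, hidden_units=[10])
est.train(make_input_fn(x_train, y_train), steps=3000)

metrics = est.evaluate(make_input_fn(x_test, y_test, training=False))
print(metrics["average_loss"])
```

For Q3, rerun the same code with `steps=6000` and also evaluate on the training data, then compare the two `average_loss` values.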
3)- Train the model of Q2 for 6000 steps instead of 3000 steps. Evaluate the trained model on both the training data and the test data. What is the range of the difference between the average loss on the training data and the test data?
Answer- Between 50-100
4)- Train a boosted trees regressor on the data for 50 steps. Set n_batches_per_layer to 1, center_bias to True, and leave the other parameters (except config) on their default values. Evaluate the trained model on both the training data and the test data. What is the range of the difference between the average loss on the training data and the test data?
Answer- Less than 10
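The Q4 setup can be sketched as below, assuming `tf.estimator.BoostedTreesRegressor`. With `n_batches_per_layer=1`, the whole training set is typically fed as a single batch, as in the TensorFlow boosted trees tutorial; the exact input pipeline here is illustrative.

```python
import tensorflow as tf
from tensorflow.keras.datasets import boston_housing

(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
NUM_FEATURES = x_train.shape[1]
cols = [tf.feature_column.numeric_column(f"f{i}") for i in range(NUM_FEATURES)]

def make_input_fn(x, y, n_epochs=None):
    def input_fn():
        feats = {f"f{i}": x[:, i] for i in range(NUM_FEATURES)}
        # Whole dataset as one batch, matching n_batches_per_layer=1.
        return (tf.data.Dataset.from_tensor_slices((feats, y))
                .repeat(n_epochs)
                .batch(len(y)))
    return input_fn

est = tf.estimator.BoostedTreesRegressor(
    cols, n_batches_per_layer=1, center_bias=True)
est.train(make_input_fn(x_train, y_train), max_steps=50)

train_metrics = est.evaluate(make_input_fn(x_train, y_train, n_epochs=1))
test_metrics = est.evaluate(make_input_fn(x_test, y_test, n_epochs=1))
diff = abs(train_metrics["average_loss"] - test_metrics["average_loss"])
print(diff)
```

The printed `diff` is the quantity whose range the question asks about.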
5)- In the next 3 questions, you will observe the plotted graphs in the notebook and answer simple questions about DFCs (directional feature contributions). We have plotted the feature contributions for the 15th example of the test data. Which feature has the largest contribution (positive or negative) to the predicted value?
6)- In continuation of Q5, if we increase the value of RM while keeping the other contributions constant, what happens to our predicted value?
7)- How does the contribution of the RM feature change with an increase in its value from 6 to higher values?
8)- Which of the following data augmentation techniques can be done using tf.keras.preprocessing.image.ImageDataGenerator?
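As a reminder of what `ImageDataGenerator` covers: it handles geometric and photometric augmentations such as rotations, shifts, shear, zoom, and flips, configured through constructor arguments. A minimal sketch with a dummy image batch:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Typical augmentations supported directly by ImageDataGenerator.
datagen = ImageDataGenerator(
    rotation_range=40,       # random rotations up to 40 degrees
    width_shift_range=0.2,   # horizontal shifts up to 20% of width
    height_shift_range=0.2,  # vertical shifts up to 20% of height
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)

img = np.random.rand(1, 64, 64, 3)  # dummy batch of one 64x64 RGB image
batch = next(datagen.flow(img, batch_size=1))
print(batch.shape)  # augmented batch has the same shape: (1, 64, 64, 3)
```

Techniques outside this parameter list (e.g. CutMix or MixUp) are not provided by `ImageDataGenerator`.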
9)- What are the input and output shapes of an embedding layer with vocab_size = 1000 and embedding dimension = 25?
Answer- Input shape: (samples, sequence_length), Output shape: (samples, sequence_length, 25)
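The shapes in the answer can be verified directly: an `Embedding` layer is a lookup table mapping each integer token index to a dense vector of length `output_dim`, so it appends one axis of that size.

```python
import numpy as np
import tensorflow as tf

# vocab_size = 1000, embedding dimension = 25.
layer = tf.keras.layers.Embedding(input_dim=1000, output_dim=25)

tokens = np.random.randint(0, 1000, size=(4, 10))  # (samples, sequence_length)
vectors = layer(tokens)
print(vectors.shape)  # (4, 10, 25): (samples, sequence_length, 25)
```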
10)- When we learn embeddings from a large corpus of data, we might learn embeddings that are biased in a certain way. A good set of embeddings should be free of any bias. Let e<sub>x</sub> be the embedding for word x. Which of the options is correct about the following statements?
i. e<sub>girl</sub> − e<sub>boy</sub>
ii. e<sub>aunt</sub> − e<sub>uncle</sub>
iii. e<sub>brother</sub> − e<sub>sister</sub>
Answer- i and ii should be approximately equal
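The intuition behind the answer can be illustrated with toy vectors (the numbers below are hypothetical, chosen only to make the geometry visible): the gender difference for the girl/boy pair should roughly match the one for the aunt/uncle pair, while brother − sister points the opposite way.

```python
import numpy as np

# Hypothetical 2-D embeddings: axis 0 is some semantic dimension,
# axis 1 is a "gender direction".
e = {
    "girl":    np.array([1.0,  0.9]),
    "boy":     np.array([1.0, -0.9]),
    "aunt":    np.array([0.5,  0.85]),
    "uncle":   np.array([0.5, -0.85]),
    "brother": np.array([0.2, -0.9]),
    "sister":  np.array([0.2,  0.9]),
}

d1 = e["girl"] - e["boy"]        # statement i
d2 = e["aunt"] - e["uncle"]      # statement ii
d3 = e["brother"] - e["sister"]  # statement iii

print(np.allclose(d1, d2, atol=0.2))   # i and ii are approximately equal
print(np.allclose(d1, -d3, atol=0.2))  # iii is roughly the negation of i
```

So i and ii capture the same direction, while iii has its sign flipped, matching the stated answer.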