Welcome to Pass4Success


Google Professional Data Engineer Exam - Topic 3 Question 70 Discussion

Actual exam question for Google's Professional Data Engineer exam
Question #: 70
Topic #: 3
[All Professional Data Engineer Questions]

Which of the following are examples of hyperparameters? (Select 2 answers.)

A) Number of hidden layers
B) Number of nodes in each hidden layer
C) Biases
D) Weights

Suggested Answer: C
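For readers skimming the thread below: the recurring distinction is that hyperparameters (such as the number of hidden layers and the nodes per layer) are fixed before training, while parameters (weights and biases) are learned during it. A minimal pure-Python sketch of that split, with all names and sizes illustrative:

```python
import random

# Hyperparameters: chosen *before* training (the architecture dials).
n_hidden_layers = 2        # illustrative choice
nodes_per_layer = [8, 4]   # one entry per hidden layer
n_inputs = 16              # input feature count (illustrative)

def init_layer(n_in, n_out):
    """Randomly initialize one layer's parameters (to be learned later)."""
    weights = [[random.gauss(0.0, 0.1) for _ in range(n_out)] for _ in range(n_in)]
    biases = [0.0] * n_out
    return weights, biases

# Parameters: created from the hyperparameters, then updated by training.
layers = []
fan_in = n_inputs
for n_out in nodes_per_layer:
    layers.append(init_layer(fan_in, n_out))
    fan_in = n_out

# Gradient descent would modify the weights and biases inside `layers`,
# but never touches n_hidden_layers or nodes_per_layer.
assert len(layers) == n_hidden_layers
```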

Contribute your Thoughts:

Pearlene
5 months ago
Yeah, A and B are the classic examples.
Alton
5 months ago
Wait, are weights not hyperparameters? That’s surprising!
Annmarie
5 months ago
Totally agree, A and B are the right choices!
Wynell
5 months ago
I thought biases were hyperparameters too?
Katy
6 months ago
A and B are definitely hyperparameters!
Virgina
6 months ago
I'm a bit confused; I thought both the number of hidden layers and the number of nodes were hyperparameters, but I can't recall for sure.
Eliz
6 months ago
I feel like biases and weights are more like parameters, not hyperparameters.
Rikki
6 months ago
I remember practicing a question like this, and I think the number of nodes in each hidden layer is also a hyperparameter.
Twana
6 months ago
I think the number of hidden layers is definitely a hyperparameter, but I'm not sure about the other one.
Wei
6 months ago
Yep, I agree with Emily. Biases and weights are model parameters that get updated during training, not hyperparameters that are set beforehand.
Brianne
6 months ago
I think the key is to focus on what can be adjusted before training versus what gets learned during training. The number of layers and nodes are set upfront, so those are hyperparameters.
Mari
6 months ago
Wait, are biases and weights considered hyperparameters too? I'm a bit confused on that.
Bambi
6 months ago
Okay, I've got this. The number of hidden layers and the number of nodes in each layer are definitely hyperparameters, since they're set before training the model.
Gerri
6 months ago
Hmm, this looks like a tricky one. I'll need to think carefully about the difference between hyperparameters and model parameters.
Miriam
6 months ago
Hmm, I'm a little unsure about this one. The options seem pretty similar, and I'm not totally clear on the difference between interfund reimbursement and interfund exchange. I'll have to think this through carefully.
Glen
6 months ago
Hmm, I'm a bit unsure about this one. The options seem similar, and I want to make sure I understand the differences between them. Let me think this through carefully.
Chanel
7 months ago
Isn't Service Discoverability more about how easily services can be found rather than restricting access? I feel a bit uncertain on this one.
Emilio
11 months ago
I'm going with A and B. Gotta love those hidden layers and node counts! They're like the dials on a machine learning control panel.
Domitila
9 months ago
It's important to understand the role of each hyperparameter in order to optimize the model.
Dulce
9 months ago
I think biases and weights are more like parameters that get adjusted during training.
Cassi
9 months ago
Yeah, those hidden layers and node counts can really impact the performance of the model.
Oliva
10 months ago
I agree, A and B are definitely hyperparameters that can be tuned.
Tabetha
11 months ago
Haha, this is a tricky one! I bet the exam creators just wanted to see if we can distinguish between the model's parameters and its hyperparameters. Time to brush up on my ML terminology!
Jennifer
9 months ago
B) Number of nodes in each hidden layer
Sharmaine
10 months ago
A) Number of hidden layers
Lizbeth
11 months ago
Wait, I thought hyperparameters were external to the model, like the learning rate or the batch size. Isn't that what we're supposed to be looking for here?
Anjelica
11 months ago
B) Number of nodes in each hidden layer
Gerri
11 months ago
A) Number of hidden layers
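Lizbeth's point generalizes: besides architecture choices like layer and node counts, training-level settings such as the learning rate and batch size are also hyperparameters. A toy sketch (all values illustrative) showing one gradient-descent step that changes a weight while leaving the hyperparameters untouched:

```python
# Training-level hyperparameters: fixed before the loop starts.
learning_rate = 0.1
batch_size = 32

# A single weight (a parameter) for the model y_hat = w * x.
w = 0.0

# Toy batch with targets y = 2x, inputs scaled into [0, 1).
data = [(i / batch_size, 2.0 * i / batch_size) for i in range(batch_size)]

# One gradient-descent step on mean squared error.
grad = sum(2.0 * (w * x - y) * x for x, y in data) / batch_size
w -= learning_rate * grad

# w moved away from 0.0 toward the true slope 2.0;
# learning_rate and batch_size did not change.
```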
Georgeanna
12 months ago
I think biases and weights are also hyperparameters because they are set before training the model.
Brent
12 months ago
I agree with Tawanna, hyperparameters include the number of hidden layers and nodes in each layer.
Frederic
1 year ago
I'm pretty sure C and D are also hyperparameters. Biases and weights are part of the model's architecture and can be adjusted during the training process.
Gilma
10 months ago
I agree, adjusting biases and weights can have a significant impact on the model's learning process.
Salley
10 months ago
Yes, biases and weights are important hyperparameters that can affect the model's performance.
Becky
10 months ago
I think you're right, biases and weights are also hyperparameters that can be tuned.
Linn
10 months ago
D) Weights
Dierdre
11 months ago
C) Biases
Annmarie
11 months ago
B) Number of nodes in each hidden layer
Eden
11 months ago
A) Number of hidden layers
Tawanna
1 year ago
A) Number of hidden layers and B) Number of nodes in each hidden layer
German
1 year ago
Hmm, I think A and B are the correct hyperparameters. The number of hidden layers and nodes in each layer are crucial hyperparameters that can be tuned to optimize the model's performance.
