
Google Professional Machine Learning Engineer Exam - Topic 3 Question 88 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 88
Topic #: 3

You need to train a computer vision model that predicts the type of government ID present in a given image using a GPU-powered virtual machine on Compute Engine. You use the following parameters:

* Optimizer: SGD

* Image shape: 224x224

* Batch size: 64

* Epochs: 10

* Verbose: 2

During training you encounter the following error: ResourceExhaustedError: OOM when allocating tensor. What should you do?

A) Change the optimizer

B) Reduce the batch size

C) Change the learning rate

D) Reduce the image shape
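To see why the image shape and batch size are the levers that matter here, it helps to do the back-of-envelope memory arithmetic. The sketch below is a rough estimate for a single float32 activation tensor only (real training also holds weights, gradients, and optimizer state); the function name and the 4-bytes-per-float assumption are illustrative, not from the exam.

```python
def activation_bytes(batch, height, width, channels, dtype_bytes=4):
    """Rough memory for one float32 activation tensor of shape (batch, H, W, C)."""
    return batch * height * width * channels * dtype_bytes

# Input tensor at the question's parameters: 64 x 224 x 224 x 3, float32.
full = activation_bytes(64, 224, 224, 3)        # ~38.5 MB for this one tensor

# Halving the batch size halves per-step activation memory...
half_batch = activation_bytes(32, 224, 224, 3)

# ...while halving each image dimension cuts it fourfold.
half_image = activation_bytes(64, 112, 112, 3)

assert half_batch * 2 == full
assert half_image * 4 == full
```

The optimizer choice and the learning rate never appear in this calculation, which is exactly why options A and C cannot fix an OOM.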

Suggested Answer: D

A ResourceExhaustedError means the GPU ran out of memory while allocating a tensor during training. Per-step memory is dominated by the size of the activation tensors, which scales linearly with batch size and with image height times width, so the fix must shrink one of those. Reducing the image shape (option D) cuts the per-example tensor size: going from 224x224 to 112x112 reduces activation memory roughly fourfold. Reducing the batch size (option B) also lowers memory proportionally and is a common first remedy in practice. Changing the optimizer (option A) or the learning rate (option C) alters the weight-update rule, not the memory footprint, so neither addresses the OOM.
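A common defensive pattern when you hit this error is to catch it and retry with a smaller batch. The sketch below uses a stand-in exception class and a hypothetical train() function so it runs without TensorFlow; with TF the analogous exception type is tf.errors.ResourceExhaustedError, raised by model.fit when GPU memory is exhausted.

```python
# Stand-in for tf.errors.ResourceExhaustedError, so the sketch is self-contained.
class ResourceExhaustedError(RuntimeError):
    pass

def train(batch_size):
    # Hypothetical stand-in for model.fit: pretend the GPU can only
    # hold 16 samples' worth of activations per step.
    if batch_size > 16:
        raise ResourceExhaustedError("OOM when allocating tensor")
    return f"trained with batch_size={batch_size}"

def train_with_backoff(batch_size):
    """Halve the batch size on OOM until training fits in memory."""
    while batch_size >= 1:
        try:
            return train(batch_size)
        except ResourceExhaustedError:
            batch_size //= 2  # halve and retry
    raise RuntimeError("OOM even at batch_size=1")

print(train_with_backoff(64))  # 64 -> OOM, 32 -> OOM, 16 -> succeeds
```

The same backoff idea applies to the image shape: resize the inputs smaller and retry, at some cost in accuracy.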


Contribute your Thoughts:

Andra
3 months ago
Lowering the learning rate might help too, but batch size is key.
upvoted 0 times
...
Ernie
4 months ago
Wait, can you really just change the image shape? Seems risky!
upvoted 0 times
...
Shantell
4 months ago
Changing the optimizer won't help with memory issues.
upvoted 0 times
...
Lisha
4 months ago
I agree, batch size is usually the first thing to try!
upvoted 0 times
...
Odette
4 months ago
Just reduce the batch size to 32 or 16.
upvoted 0 times
...
Glendora
4 months ago
I feel like adjusting the learning rate could be useful, but I'm not confident it would solve the out of memory problem. Batch size seems more straightforward.
upvoted 0 times
...
Vernell
5 months ago
I practiced a similar question where reducing the image size helped with memory errors. So, maybe option D could be a valid approach too?
upvoted 0 times
...
Sharen
5 months ago
I'm not entirely sure, but I think changing the optimizer might not directly address the memory issue. It seems more related to resource allocation.
upvoted 0 times
...
Billy
5 months ago
I remember we discussed how batch size can significantly impact memory usage during training. Reducing it might help with the out of memory error.
upvoted 0 times
...
Ben
5 months ago
I'm feeling pretty confident about this one. The out of memory error is clearly a resource issue, so reducing the batch size is the logical solution. I'll make sure to try that first.
upvoted 0 times
...
Maryln
5 months ago
Ah, I've seen this kind of issue before. Reducing the batch size is definitely the way to go. That should free up enough memory to get the training running smoothly.
upvoted 0 times
...
Enola
5 months ago
Okay, I'm a bit confused here. The error message mentions running out of memory, but I'm not sure if changing the optimizer or learning rate would really help with that. I might try reducing the image shape instead.
upvoted 0 times
...
Mona
5 months ago
Hmm, this looks like a tricky one. I think I'll try reducing the batch size first - that seems like the most straightforward way to address the out of memory error.
upvoted 0 times
...
Sylvie
10 months ago
Alright, who's the genius that chose a 224x224 image shape for a GPU-powered VM? That's like trying to fit a monster truck in a Smart car!
upvoted 0 times
Xenia
10 months ago
C) Change the learning rate
upvoted 0 times
...
Annamae
10 months ago
B) Reduce the batch size
upvoted 0 times
...
Thaddeus
10 months ago
A) Change the optimizer
upvoted 0 times
...
...
Lindsey
11 months ago
Hmm, 'out of Memory' error? Looks like someone's been skipping their GPU diet. Time to go on a batch size reduction binge!
upvoted 0 times
Gaston
9 months ago
D) Reduce the image shape
upvoted 0 times
...
Shawnta
9 months ago
C) Change the learning rate
upvoted 0 times
...
Fletcher
10 months ago
B) Reduce the batch size
upvoted 0 times
...
Gayla
10 months ago
A) Change the optimizer
upvoted 0 times
...
...
Mammie
11 months ago
Changing the learning rate? I don't think that's going to help with the OOM error. Gotta free up that GPU memory, my friend.
upvoted 0 times
...
Reita
11 months ago
I would try reducing the image shape first. Smaller input size means less memory required for the model, and you can always resize the images later.
upvoted 0 times
...
Thad
11 months ago
Reducing the batch size seems like the obvious choice here. Too large a batch can easily exhaust GPU memory, especially with high-resolution images.
upvoted 0 times
Garry
9 months ago
Changing the learning rate could also potentially help with memory management.
upvoted 0 times
...
Laurel
9 months ago
C) Change the learning rate
upvoted 0 times
...
Norah
9 months ago
Yes, reducing the batch size is a common solution to memory errors during training.
upvoted 0 times
...
Val
9 months ago
B) Reduce the batch size
upvoted 0 times
...
Bulah
9 months ago
I think changing the optimizer might also be worth trying to optimize memory usage.
upvoted 0 times
...
Lisbeth
9 months ago
A) Change the optimizer
upvoted 0 times
...
Tijuana
10 months ago
That's a good point, reducing the batch size should help with the memory issue.
upvoted 0 times
...
Evelynn
10 months ago
B) Reduce the batch size
upvoted 0 times
...
...
Paris
11 months ago
I think changing the optimizer might also help in resolving the memory error.
upvoted 0 times
...
Margurite
11 months ago
I agree with Casey, reducing the batch size should help with the memory issue.
upvoted 0 times
...
Casey
11 months ago
I think we should reduce the batch size.
upvoted 0 times
...
