Welcome to Pass4Success


Google Professional Machine Learning Engineer Exam - Topic 4 Question 68 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 68
Topic #: 4
[All Professional Machine Learning Engineer Questions]

You have trained an XGBoost model that you plan to deploy on Vertex AI for online prediction. You are now uploading your model to Vertex AI Model Registry, and you need to configure the explanation method so that online prediction requests are returned with minimal latency. You also want to be alerted when the model's feature attributions meaningfully change over time. What should you do?

Suggested Answer: A
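For readers mapping answer A onto the SDK: below is a minimal sketch of the explanation configuration that would be passed when uploading the model to Vertex AI Model Registry. The display name, bucket path, and container image in the comments are placeholders (not from the question), and the actual upload call is shown only in comments since it requires a real GCP project.

```python
# Explanation parameters matching answer A: sampled Shapley with a small
# path_count. path_count is the number of feature permutations sampled per
# prediction, so a low value like 5 keeps per-request explanation latency low.
explanation_parameters = {
    "sampled_shapley_attribution": {"path_count": 5}
}

# Passed at upload time via the google-cloud-aiplatform SDK, e.g.
# (placeholder names; needs a real project and a model artifact in GCS):
#
# from google.cloud import aiplatform
# model = aiplatform.Model.upload(
#     display_name="xgb-model",
#     artifact_uri="gs://YOUR_BUCKET/model/",
#     serving_container_image_uri=(
#         "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-6:latest"
#     ),
#     explanation_parameters=aiplatform.explain.ExplanationParameters(
#         explanation_parameters
#     ),
# )

print(explanation_parameters["sampled_shapley_attribution"]["path_count"])
```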

Contribute your Thoughts:

Mollie
4 months ago
Is training-serving skew really the right monitoring objective?
upvoted 0 times
...
Timothy
4 months ago
Totally agree with A, sampled Shapley is solid!
upvoted 0 times
...
Anissa
4 months ago
Wait, why would you use a path count of 50? Seems excessive!
upvoted 0 times
...
Donette
4 months ago
I think B might be better for accuracy though.
upvoted 0 times
...
Ammie
5 months ago
A is the best choice for minimal latency!
upvoted 0 times
...
Tony
5 months ago
I’m leaning towards option A because it mentions prediction drift, which seems crucial for monitoring changes over time.
upvoted 0 times
...
Nickole
5 months ago
I feel like the path count could really impact performance, but I can't remember if 5 or 50 is better for Shapley.
upvoted 0 times
...
Jesus
5 months ago
I think we practiced a question similar to this, and I recall that Integrated Gradients might be more suitable for certain models.
upvoted 0 times
...
Daren
5 months ago
I remember we discussed Shapley values in class, but I'm not sure if sampled Shapley is the best choice for low latency.
upvoted 0 times
...
Dorathy
5 months ago
I'm pretty confident I know the right approach here. Integrated Gradients with a path count of 50 will give us more accurate explanations, and training-serving skew monitoring will catch any issues with the model's performance on new data.
upvoted 0 times
...
Jani
5 months ago
Okay, I think I've got this. Sampled Shapley with a path count of 5 should give us fast explanations for online predictions. And monitoring for prediction drift is the right way to track changes in feature attributions over time.
upvoted 0 times
...
Leonardo
5 months ago
This looks like a tricky question about configuring model explanations and monitoring on Vertex AI. I'll need to carefully read through the options and think about the trade-offs.
upvoted 0 times
...
Dorothea
5 months ago
Hmm, I'm a bit confused about the difference between prediction drift and training-serving skew as monitoring objectives. I'll need to review those concepts before deciding.
upvoted 0 times
...
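On the drift-vs-skew point: training-serving skew compares serving data against the training distribution, while prediction drift compares recent serving data against earlier serving data, which is what "attributions changing over time" calls for. A hedged sketch of that monitoring objective as a plain config dict (the feature name "age" and the 0.3 threshold are made-up placeholders; a real Model Monitoring job needs a live endpoint, so no API call is made here):

```python
# Prediction-drift objective with attribution-score thresholds: an alert
# fires when a feature's attribution distribution drifts past its threshold
# relative to earlier serving traffic.
monitoring_objective = {
    "prediction_drift_detection_config": {
        # "age" and 0.3 are illustrative placeholders, not from the question.
        "attribution_score_drift_thresholds": {"age": {"value": 0.3}},
    },
    # Feature attributions must be enabled for attribution-based monitoring.
    "explanation_config": {"enable_feature_attributes": True},
}

print(
    monitoring_objective["prediction_drift_detection_config"]
    ["attribution_score_drift_thresholds"]["age"]["value"]
)
```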
Giovanna
6 months ago
Okay, I'm going to take my time and work through this methodically. I don't want to rush and miss something important.
upvoted 0 times
...
Melvin
2 years ago
I'm with you guys on this one. Integrated Gradients is a solid choice, but I think the higher path count of 50 is the way to go. As for the monitoring objective, I'd definitely go with training-serving skew. It's going to be way more useful than just tracking prediction drift, which doesn't give you the full picture.
upvoted 0 times
Cheryl
2 years ago
Definitely, training-serving skew provides a more comprehensive view of model performance.
upvoted 0 times
...
Dalene
2 years ago
Yeah, monitoring training-serving skew will give us more insights than just prediction drift.
upvoted 0 times
...
Gilma
2 years ago
I agree, deploying to Vertex AI Endpoints is the way to go.
upvoted 0 times
...
Leslie
2 years ago
I think Integrated Gradients with a path count of 5 is a good choice.
upvoted 0 times
...
Elza
2 years ago
C) Create a Model Monitoring job that uses training-serving skew as the monitoring objective.
upvoted 0 times
...
Jenelle
2 years ago
B) Deploy the model to Vertex AI Endpoints.
upvoted 0 times
...
Twanna
2 years ago
B) Specify Integrated Gradients as the explanation method with a path count of 5.
upvoted 0 times
...
Armando
2 years ago
You know, I was thinking the same thing. Integrated Gradients is another good explanation method, but the path count of 50 seems more appropriate to get reliable feature attributions. And using training-serving skew as the monitoring objective is a smart move to stay on top of any changes in the model's behavior over time.
upvoted 0 times
...
Nana
2 years ago
I agree with Theodora. Sampled Shapley can be a good choice, but I think a higher path count is necessary to get meaningful feature attributions. The question also mentions wanting to be alerted when feature attributions change over time, so I would go with the option that uses training-serving skew as the monitoring objective, as that's likely more relevant to detecting changes in feature importance.
upvoted 0 times
...
Theodora
2 years ago
Hmm, this is an interesting question. I think the key here is to choose an explanation method that can provide feature attributions with minimal latency, which is important for online prediction requests. Sampled Shapley seems like a good option, but I'm not sure if a path count of 5 is enough to get accurate feature attributions. I might go with a higher path count, like 50, to ensure more reliable explanations.
upvoted 0 times
...
