
Google Exam Professional Machine Learning Engineer Topic 4 Question 73 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 73
Topic #: 4

You have a custom job that runs on Vertex AI on a weekly basis. The job is implemented using a proprietary ML workflow that produces the datasets, models, and custom artifacts, and sends them to a Cloud Storage bucket. Many different versions of the datasets and models were created. Due to compliance requirements, your company needs to track which model was used for making a particular prediction, and needs access to the artifacts for each model. How should you configure your workflows to meet these requirements?

Suggested Answer: D
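
Based on the comments below, option D corresponds to registering each model in the Vertex AI Model Registry and using labels for provenance. As a rough sketch only (the project, bucket, container image, and label values are placeholders, not taken from the question), the weekly job could register its output like this with the Vertex AI Python SDK:

    # Rough sketch: register the weekly job's model with provenance labels.
    # All names and values here are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    model = aiplatform.Model.upload(
        display_name="weekly-workflow-model",
        artifact_uri="gs://my-bucket/models/run-2024-06-01/",  # artifacts written by the job
        serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",
        labels={
            "workflow_run": "run-2024-06-01",
            "dataset_version": "ds-2024-06-01",
        },
    )

    # Logging model.resource_name alongside each prediction request is what lets
    # you trace a prediction back to the registered model and its stored artifacts.
    print(model.resource_name)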

Contribute your Thoughts:

Lauran
10 months ago
I personally think option C) Use the Vertex AI Metadata API inside the custom Job is the most efficient solution.
Gwenn
10 months ago
I disagree, I believe option D) Register each model in Vertex AI Model Registry is the way to go.
Brett
11 months ago
I think the best option is A) Configure a TensorFlow Extended (TFX) ML Metadata database, and use the ML Metadata API.
Naomi
11 months ago
I'm not sure, but option C sounds good to me. Using Vertex AI Metadata API inside the job seems like a practical approach.
Thomasena
11 months ago
I agree with Georgeanna. It's important to have a centralized database to keep track of all the models and datasets.
Georgeanna
11 months ago
I think option A is the best choice. By using TFX ML Metadata database, we can easily track the model used for predictions.
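
For context on what option A would involve in practice, here is a rough, self-managed sketch using the open-source ml-metadata (MLMD) library; the SQLite path, artifact type, and URIs are invented for illustration and are not part of the question:

    # Rough sketch: record a model artifact in a TFX ML Metadata (MLMD) store.
    from ml_metadata import metadata_store
    from ml_metadata.proto import metadata_store_pb2

    config = metadata_store_pb2.ConnectionConfig()
    config.sqlite.filename_uri = "/tmp/mlmd.sqlite"  # placeholder; a managed DB in practice
    store = metadata_store.MetadataStore(config)

    # Define a Model artifact type once, then record each model version the job produces.
    model_type = metadata_store_pb2.ArtifactType(name="Model")
    model_type.properties["version"] = metadata_store_pb2.STRING
    model_type_id = store.put_artifact_type(model_type)

    model = metadata_store_pb2.Artifact(type_id=model_type_id,
                                        uri="gs://my-bucket/models/run-2024-06-01/")
    model.properties["version"].string_value = "run-2024-06-01"
    [model_id] = store.put_artifacts([model])

Note that with this approach your team operates and backs up the metadata database itself, which is part of why some commenters consider it overkill compared with the managed Vertex AI options.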
Twana
1 year ago
Hmm, I'm not sure about option B. Relying on autologging in Vertex AI may not give us enough control over the metadata. And a separate TFX metadata database (option A) sounds like overkill for this use case.
Eladia
1 year ago
Option D also sounds promising - registering the models in the Vertex AI Model Registry and using labels could be a simple way to manage the versioning and provenance. But I'm not sure how robust that would be for a complex workflow.
Honey
1 year ago
I'm leaning towards option C. Using the Vertex AI Metadata API seems like the most direct way to link the models, datasets, and artifacts together. Plus, we can create custom context and execution details to meet the compliance needs.
Emerson
11 months ago
I'm convinced, option C it is!
Shawnee
11 months ago
It's definitely a more structured way to track model usage and artifacts.
Brande
12 months ago
Using events to link everything sounds like a good approach.
Alona
12 months ago
That makes sense, it's important to link all the necessary information together.
Kristeen
12 months ago
We can create custom context and execution details as needed for compliance.
Norah
12 months ago
Agreed, using the Vertex AI Metadata API seems like the most direct solution.
Truman
12 months ago
I think option C is the best choice.
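
To make the option C discussion above concrete, here is a rough sketch (placeholder names, not from the question) of a custom job recording lineage with the Vertex AI SDK's ML Metadata support, creating artifacts and an execution and linking them with input/output events:

    # Rough sketch: record dataset -> execution -> model lineage from inside the job.
    # Project, bucket, and display names are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    dataset = aiplatform.Artifact.create(
        schema_title="system.Dataset",
        uri="gs://my-bucket/datasets/run-2024-06-01/",
        display_name="training-data-run-2024-06-01",
    )
    model = aiplatform.Artifact.create(
        schema_title="system.Model",
        uri="gs://my-bucket/models/run-2024-06-01/",
        display_name="model-run-2024-06-01",
    )

    with aiplatform.start_execution(
        schema_title="system.ContainerExecution",
        display_name="weekly-training-run-2024-06-01",
    ) as execution:
        execution.assign_input_artifacts([dataset])   # input event
        execution.assign_output_artifacts([model])    # output event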
Chaya
1 year ago
Whoa, this question looks like a real brain-teaser! We definitely need to track the models and artifacts for compliance, but it's not clear which option is the best approach.