Amazon MLS-C01 Exam - Topic 2 Question 114 Discussion

Actual exam question for Amazon's MLS-C01 exam
Question #: 114
Topic #: 2
[All MLS-C01 Questions]

An online delivery company wants to choose the fastest courier for each delivery at the moment an order is placed. The company wants to implement this feature for existing users and new users of its application. Data scientists have trained separate models with XGBoost for this purpose, and the models are stored in Amazon S3. There is one model for each city where the company operates.

The engineers are hosting these models on Amazon EC2 to respond to web client requests, with one instance for each model, but the instances run at only 5% CPU and memory utilization, and the operations engineers want to avoid managing unnecessary resources.

Which solution will enable the company to achieve its goal with the LEAST operational overhead?

Suggested Answer: B

The best solution for this scenario is a multi-model endpoint in Amazon SageMaker, which hosts multiple models on the same endpoint and invokes them dynamically at runtime. This lets the company reduce the operational overhead of managing multiple EC2 instances and model servers while leveraging the scalability, security, and performance of SageMaker hosting services. A multi-model endpoint also lowers hosting costs by improving endpoint utilization: the company pays only for the models loaded in memory and the API calls that are made.

To use a multi-model endpoint, the company prepares a Docker container based on the open-source multi-model server, a framework-agnostic library that loads and serves multiple models from Amazon S3. The company then creates a multi-model endpoint in SageMaker pointing to the S3 bucket that contains all the models, and invokes the endpoint from the web client at runtime, specifying the TargetModel parameter according to the city of each request. This approach also lets the company add or remove models from the S3 bucket without redeploying the endpoint, and use different versions of the same model for different cities if needed.

References:

Use Docker containers to build models

Host multiple models in one container behind one endpoint

Multi-model endpoints using Scikit Learn

Multi-model endpoints using XGBoost
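As a rough sketch of how the invocation in option B could look from the client side: the helper below maps a city to a model artifact and calls the multi-model endpoint with the TargetModel parameter. The endpoint name (`courier-mme`) and the per-city artifact naming convention are assumptions for illustration only, not part of the question.

```python
def target_model_for(city: str) -> str:
    """Map a city to its model artifact name under the endpoint's S3 prefix.

    The naming convention (one <city>.tar.gz per city) is a hypothetical
    example, not something the question specifies.
    """
    return f"{city.lower().replace(' ', '_')}.tar.gz"


def fastest_courier(city: str, payload: str, endpoint_name: str = "courier-mme"):
    """Invoke the SageMaker multi-model endpoint, routing to the city's model.

    Requires AWS credentials and a deployed multi-model endpoint; the
    endpoint name here is a placeholder.
    """
    import boto3  # AWS SDK; imported lazily so the helper above runs offline

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        TargetModel=target_model_for(city),  # selects which model the MME serves
        ContentType="text/csv",
        Body=payload,
    )
    return response["Body"].read()
```

Because TargetModel is resolved per request, adding a city is just uploading a new artifact to the S3 prefix; no endpoint redeployment is needed.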


Contribute your Thoughts:

Macy
2 months ago
Not sure if D is worth the extra effort for separate endpoints.
upvoted 0 times
...
Tammara
2 months ago
Wait, why not just use A? Sounds easier to manage.
upvoted 0 times
...
Ailene
2 months ago
I disagree, C could work just as well with less complexity.
upvoted 0 times
...
Alesia
3 months ago
I feel like option A might not be efficient since it suggests doing batch processing, which could delay responses for users.
upvoted 0 times
...
Tabetha
3 months ago
I practiced a similar question where we had to optimize resource usage, and I think option B would be the best choice for that.
upvoted 0 times
...
Alita
3 months ago
I remember studying about multi-model endpoints in SageMaker, so option B seems like a good fit since it reduces overhead.
upvoted 0 times
...
Malika
3 months ago
B seems like the best option for reducing overhead.
upvoted 0 times
...
Erick
3 months ago
I'm leaning towards Option C, where we keep a single EC2 instance and use a model server to load the models from S3 as needed. This seems like it could be the simplest solution to implement, and the use of API Gateway to integrate with the web client is a nice touch. However, I'm a bit concerned about the scalability of this approach as the number of cities grows.
upvoted 0 times
...
Shalon
4 months ago
I'm not entirely sure, but I think keeping a single EC2 instance could lead to performance issues if multiple requests come in at once.
upvoted 0 times
...
Gerri
4 months ago
B is definitely the way to go for efficiency!
upvoted 0 times
...
Domitila
4 months ago
If I were answering this, I would probably go with Option D. Creating separate SageMaker endpoints for each city seems like the most scalable and maintainable solution, even if it might require a bit more initial setup. The ability to easily invoke the right model for each request is a key requirement, and SageMaker makes that relatively straightforward.
upvoted 0 times
...
Maile
4 months ago
Option B with the SageMaker multi-model endpoint seems like a good balance between reducing operational overhead and maintaining real-time performance. By leveraging the managed service, the company can avoid managing individual instances while still being able to serve models for different cities on-demand.
upvoted 0 times
...
Brynn
4 months ago
I'm a bit confused by the different options. It's not clear to me which one would have the least operational overhead while still meeting the company's requirements. I might need to do some additional research on the different AWS services mentioned to fully understand the implications of each approach.
upvoted 0 times
...
Tiffiny
5 months ago
This seems like a straightforward question about optimizing the deployment of machine learning models. I would carefully read through the options and consider the tradeoffs between operational overhead, scalability, and real-time performance.
upvoted 0 times
...
Lenna
5 months ago
I'm a bit torn between B and D. Both seem to have their advantages, but I'm not sure which one would truly be the "least operational overhead." I'll need to carefully weigh the pros and cons of each.
upvoted 0 times
...
Brigette
5 months ago
I feel pretty confident about this one. I think option D is the way to go - using separate SageMaker endpoints for each city seems like the most scalable and low-maintenance solution.
upvoted 0 times
...
Zena
5 months ago
Okay, let me think this through. I'm leaning towards option B, as it seems to leverage SageMaker's multi-model capabilities, which could help reduce the operational overhead. But I'll need to double-check the details.
upvoted 0 times
...
Marge
5 months ago
Hmm, I'm a bit confused by the details here. I think I need to re-read the question a few times to make sure I'm not missing anything important.
upvoted 0 times
...
Antonio
6 months ago
This looks like a tricky question. I'm not sure if I fully understand the requirements yet, but I'll try to break it down step-by-step.
upvoted 0 times
...
Tegan
11 months ago
I'm not a fan of the 'single instance for all models' approach in option C. That's just asking for trouble when demand increases. Give me the SageMaker goodness any day!
upvoted 0 times
Therese
11 months ago
I think using SageMaker with separate endpoints for each city is the way to go.
upvoted 0 times
...
Charlena
11 months ago
I agree, having a single instance for all models seems risky.
upvoted 0 times
...
...
Leontine
11 months ago
Now this is more like it! Separate SageMaker endpoints for each city, that's a clean and scalable solution. The client can just invoke the right endpoint based on the request.
upvoted 0 times
Ashley
10 months ago
D) Prepare a Docker container based on the prebuilt images in Amazon SageMaker. Replace the existing instances with separate SageMaker endpoints, one for each city where the company operates. Invoke the endpoints from the web client, specifying the URL and EndpointName parameter according to the city of each request.
upvoted 0 times
...
Arleen
10 months ago
B) Prepare an Amazon SageMaker Docker container based on the open-source multi-model server. Remove the existing instances and create a multi-model endpoint in SageMaker instead, pointing to the S3 bucket containing all the models. Invoke the endpoint from the web client at runtime, specifying the TargetModel parameter according to the city of each request.
upvoted 0 times
...
...
Emerson
12 months ago
Hmm, using a single EC2 instance to host all the models? That could become a bottleneck. Plus, the API Gateway integration adds unnecessary complexity.
upvoted 0 times
...
Evangelina
12 months ago
Option B with the multi-model server in SageMaker seems like a good fit. Centralized model management and real-time inference capabilities - sounds like the right balance of features.
upvoted 0 times
...
Alpha
12 months ago
That's a good point, Mollie. Option D could also be a great solution for the company.
upvoted 0 times
...
Mollie
12 months ago
I prefer option D. Having separate SageMaker endpoints for each city will ensure faster delivery times.
upvoted 0 times
...
Hailey
12 months ago
The SageMaker batch transform solution in option A sounds interesting, but it may not be suitable for real-time inference. We need a more responsive approach.
upvoted 0 times
Catalina
11 months ago
C) Keep only a single EC2 instance for hosting all the models. Install a model server in the instance and load each model by pulling it from Amazon S3. Integrate the instance with the web client using Amazon API Gateway for responding to the requests in real time, specifying the target resource according to the city of each request.
upvoted 0 times
...
Tracie
11 months ago
B) Prepare an Amazon SageMaker Docker container based on the open-source multi-model server. Remove the existing instances and create a multi-model endpoint in SageMaker instead, pointing to the S3 bucket containing all the models. Invoke the endpoint from the web client at runtime, specifying the TargetModel parameter according to the city of each request.
upvoted 0 times
...
Fidelia
11 months ago
A) Create an Amazon SageMaker notebook instance for pulling all the models from Amazon S3 using the boto3 library. Remove the existing instances and use the notebook to perform a SageMaker batch transform for performing inferences offline for all the possible users in all the cities. Store the results in different files in Amazon S3. Point the web client to the files.
upvoted 0 times
...
...
Graham
1 year ago
I agree with Alpha. Option B seems efficient and will help avoid managing unnecessary resources.
upvoted 0 times
...
Alpha
1 year ago
I think option B is the best choice. Using a multi-model endpoint in SageMaker will reduce operational overhead.
upvoted 0 times
...