Welcome to Pass4Success


Databricks Exam Databricks-Generative-AI-Engineer-Associate Topic 5 Question 4 Discussion

Actual exam question for Databricks's Databricks-Generative-AI-Engineer-Associate exam
Question #: 4
Topic #: 5

A Generative AI Engineer developed an LLM application using the provisioned throughput Foundation Model API. Now that the application is ready to be deployed, they realize their volume of requests is not high enough to justify a dedicated provisioned throughput endpoint. They want to choose the most cost-effective strategy for their application.

What strategy should the Generative AI Engineer use?

A. Switch to using External Models instead
B. Deploy the model using pay-per-token throughput
C. Change to a model with fewer parameters
D. Manually throttle the incoming batch of requests

Suggested Answer: B

Problem Context: The engineer needs a cost-effective deployment strategy for an LLM application with relatively low request volume.

Explanation of Options:

Option A: Switching to external models may not provide the required control or integration necessary for specific application needs.

Option B: Using a pay-per-token model is cost-effective, especially for applications with variable or low request volumes, as it aligns costs directly with usage.

Option C: Changing to a model with fewer parameters could reduce costs, but might also impact the performance and capabilities of the application.

Option D: Manually throttling requests is a less efficient and potentially error-prone strategy for managing costs.

Option B is ideal because it offers flexibility and cost control, aligning expenses directly with the application's usage patterns.
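To see why pay-per-token wins at low volume, here is a back-of-the-envelope comparison. All prices below are hypothetical placeholders, not actual Databricks rates; the point is the shape of the trade-off: a provisioned throughput endpoint carries a fixed hourly cost regardless of traffic, while pay-per-token cost scales directly with usage.

```python
# Hypothetical illustrative prices only -- not actual Databricks rates.
PAY_PER_TOKEN_USD = 0.50 / 1_000_000   # $ per token on a pay-per-token endpoint
PROVISIONED_USD_PER_HOUR = 10.0        # fixed hourly cost of a dedicated endpoint
HOURS_PER_MONTH = 730.0                # an always-on endpoint bills every hour


def monthly_cost_pay_per_token(tokens_per_month: int) -> float:
    """Cost scales linearly with usage: zero traffic means zero cost."""
    return tokens_per_month * PAY_PER_TOKEN_USD


def monthly_cost_provisioned(hours: float = HOURS_PER_MONTH) -> float:
    """Fixed cost: you pay for the endpoint whether or not requests arrive."""
    return hours * PROVISIONED_USD_PER_HOUR


def break_even_tokens() -> float:
    """Monthly token volume at which pay-per-token matches an always-on endpoint."""
    return monthly_cost_provisioned() / PAY_PER_TOKEN_USD


if __name__ == "__main__":
    low_volume = 10_000_000  # 10M tokens/month, a low-traffic application
    print(f"pay-per-token: ${monthly_cost_pay_per_token(low_volume):.2f}/month")
    print(f"provisioned:   ${monthly_cost_provisioned():.2f}/month")
    print(f"break-even at {break_even_tokens():,.0f} tokens/month")
```

At rates like these, a low-traffic application would need billions of tokens per month before a dedicated endpoint pays for itself, which is why pay-per-token (Option B) aligns best with low and variable request volume.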


Contribute your Thoughts:

Fernanda
24 days ago
Ah, the joys of scaling AI applications. I'd say option B is the way to go, but maybe they should also consider a backup plan just in case. You know, like a Plan B.

Albina
29 days ago
I bet this Generative AI Engineer wishes they had a crystal ball to see the future. Oh well, I'd go with option B and cross my fingers.

Rosita
4 days ago
I think option B sounds like a safe bet.

Jolanda
1 month ago
Manually throttling the requests? That's just asking for trouble. Option D seems like a band-aid solution to me.

Reed
13 days ago
D: Changing to a model with fewer parameters might help reduce hardware constraint issues as well.

Maryann
20 days ago
B: Maybe switching to External Models would be a better long-term strategy.

Madelyn
21 days ago
C: Deploying the model using pay-per-token throughput could also be a cost-effective option.

Soledad
23 days ago
A: I agree, manually throttling requests is not a sustainable solution.

Oliva
1 month ago
Hmm, I'm not so sure about that. Reducing the number of parameters might be a better idea to avoid hardware constraints. Option C looks promising.

Rolande
1 month ago
I think option B is the way to go. Pay-per-token throughput sounds like a good cost-effective solution for this scenario.

Edelmira
16 days ago
I think it depends on the specific needs of the application and the budget constraints.

Freeman
24 days ago
But wouldn't switching to External Models be a better long-term solution?

Fannie
28 days ago
I agree, option B seems like the most cost-effective choice.

Tasia
2 months ago
I disagree, I believe deploying the model using pay-per-token throughput would be more cost-effective in the long run.

Yolando
2 months ago
I think the best strategy would be to switch to using External Models instead.
