
Databricks Exam Databricks Certified Generative AI Engineer Associate Topic 5 Question 4 Discussion

Actual exam question from the Databricks Certified Generative AI Engineer Associate exam
Question #: 4
Topic #: 5

A Generative AI Engineer developed an LLM application using the provisioned throughput Foundation Model API. Now that the application is ready to be deployed, they realize their request volume is not high enough to justify their own provisioned throughput endpoint. They want to choose the strategy that is most cost-effective for their application.

What strategy should the Generative AI Engineer use?

Suggested Answer: B

Problem Context: The engineer needs a cost-effective deployment strategy for an LLM application with relatively low request volume.

Explanation of Options:

Option A: Switching to external models may not provide the control or integration that the application specifically needs.

Option B: Using a pay-per-token model is cost-effective, especially for applications with variable or low request volumes, as it aligns costs directly with usage.

Option C: Changing to a model with fewer parameters could reduce costs, but might also impact the performance and capabilities of the application.

Option D: Manually throttling requests is a less efficient and potentially error-prone strategy for managing costs.

Option B is therefore the best choice: it offers flexibility and cost control by aligning expenses directly with the application's usage patterns.
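
To make Option B concrete, here is a minimal sketch of how an application could call a Databricks pay-per-token Foundation Model endpoint through the OpenAI-compatible client. The workspace URL, token environment variable, and endpoint name used below are illustrative assumptions, not details from the question.

import os
from openai import OpenAI

# Minimal sketch (illustrative assumptions): query a pay-per-token Foundation
# Model serving endpoint instead of a dedicated provisioned throughput endpoint.
client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],  # Databricks personal access token (assumed env var)
    base_url="https://<your-workspace>.cloud.databricks.com/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-meta-llama-3-1-70b-instruct",  # example pay-per-token endpoint name
    messages=[{"role": "user", "content": "Summarize our refund policy in two sentences."}],
    max_tokens=128,
)
print(response.choices[0].message.content)

Because billing is per token processed, low or bursty request volumes do not pay for idle capacity, and the same client code could later be pointed at a provisioned throughput endpoint if traffic grows.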


Contribute your Thoughts:

Fernanda
7 months ago
Ah, the joys of scaling AI applications. I'd say option B is the way to go, but maybe they should also consider a backup plan just in case. You know, like a Plan B.
upvoted 0 times
Albina
8 months ago
I bet this Generative AI Engineer wishes they had a crystal ball to see the future. Oh well, I'd go with option B and cross my fingers.
upvoted 0 times
Berry
6 months ago
Let's hope it works out for the Generative AI Engineer.
upvoted 0 times
Rosendo
6 months ago
Agreed, it's better to have cost guarantees.
upvoted 0 times
Delbert
7 months ago
Yeah, pay-per-token throughput seems like a good choice.
upvoted 0 times
Rosita
7 months ago
I think option B sounds like a safe bet.
upvoted 0 times
Jolanda
8 months ago
Manually throttling the requests? That's just asking for trouble. Option D seems like a band-aid solution to me.
upvoted 0 times
Reed
7 months ago
Changing to a model with fewer parameters might help reduce hardware constraint issues as well.
upvoted 0 times
Maryann
7 months ago
Maybe switching to External Models would be a better long-term strategy.
upvoted 0 times
Madelyn
7 months ago
Deploying the model using pay-per-token throughput could also be a cost-effective option.
upvoted 0 times
Soledad
7 months ago
I agree, manually throttling requests is not a sustainable solution.
upvoted 0 times
Oliva
8 months ago
Hmm, I'm not so sure about that. Reducing the number of parameters might be a better idea to avoid hardware constraints. Option C looks promising.
upvoted 0 times
Rolande
8 months ago
I think option B is the way to go. Pay-per-token throughput sounds like a good cost-effective solution for this scenario.
upvoted 0 times
Edelmira
7 months ago
I think it depends on the specific needs of the application and the budget constraints.
upvoted 0 times
Freeman
7 months ago
But wouldn't switching to External Models be a better long-term solution?
upvoted 0 times
Fannie
8 months ago
I agree, option B seems like the most cost-effective choice.
upvoted 0 times
Tasia
8 months ago
I disagree, I believe deploying the model using pay-per-token throughput would be more cost-effective in the long run.
upvoted 0 times
Yolando
9 months ago
I think the best strategy would be to switch to using External Models instead.
upvoted 0 times
