
Databricks Exam Databricks-Generative-AI-Engineer-Associate Topic 2 Question 3 Discussion

Actual exam question from the Databricks Databricks-Generative-AI-Engineer-Associate exam
Question #: 3
Topic #: 2

A Generative AI Engineer is building an LLM to generate article summaries in the form of a type of poem, such as a haiku, given the article content. However, the initial output from the LLM does not match the desired tone or style.

Which approach will NOT improve the LLM's ability to produce the desired response?

A. Provide explicit instructions in the prompt specifying the desired tone and style (e.g., a haiku).
B. Use a neutralizer to normalize the tone and style of the underlying documents.
C. Provide few-shot examples of the desired output format.
D. Fine-tune the LLM on a dataset containing examples of the desired tone and style.

Suggested Answer: B

The task is to get the LLM to generate poem-style article summaries, such as haikus, in the desired tone and style. Using a neutralizer to normalize the tone and style of the underlying documents (option B) will not help achieve this. Here's why:

Neutralizing Underlying Documents: A neutralizer aims to reduce or standardize the tone of input data. However, this contradicts the goal, which is to generate text with a specific tone and style (like haikus). Neutralizing the source documents will strip away the richness of the content, making it harder for the LLM to generate creative, stylistic outputs like poems.

Why Other Options Improve Results:

A (Explicit Instructions in the Prompt): Directly instructing the LLM to generate text in a specific tone and style helps align the output with the desired format (e.g., haikus). This is a common and effective prompt engineering technique; see the prompt sketch after this list.

C (Few-shot Examples): Providing examples of the desired output format helps the LLM understand the expected tone and structure, making it easier to generate similar outputs; this is also covered in the sketch after this list.

D (Fine-tuning the LLM): Fine-tuning the model on a dataset that contains examples of the desired tone and style is a powerful way to improve its ability to generate outputs that match the target format; a sketch of the training-data format follows the conclusion below.
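
To make options A and C concrete, here is a minimal sketch that combines an explicit style instruction with two few-shot haiku examples in a single chat prompt. It assumes an OpenAI-compatible client pointed at a model serving endpoint; the base URL, token, model name, and example texts are placeholders, not values from the question.

# Minimal sketch: explicit style instruction (option A) plus few-shot
# examples (option C) in one chat prompt. Endpoint, token, and model
# name below are placeholders for an OpenAI-compatible serving endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://<workspace-host>/serving-endpoints",  # placeholder
    api_key="<access-token>",                               # placeholder
)

article_text = "A new battery design doubles electric-car range."  # placeholder article

messages = [
    # Option A: explicit instruction describing the desired tone and form.
    {"role": "system",
     "content": ("Summarize the article as a haiku: three lines of 5, 7, and 5 "
                 "syllables, calm and reflective in tone. Return only the haiku.")},
    # Option C: few-shot examples showing the expected input/output shape.
    {"role": "user", "content": "Article: City parks report record spring visitor numbers."},
    {"role": "assistant", "content": "Gates open at dawn\nfootsteps fill the waking grass\nspring counts every guest"},
    {"role": "user", "content": "Article: A library reopens after a decade of repairs."},
    {"role": "assistant", "content": "Doors unlocked at last\nold pages breathe the new light\nreaders drift back home"},
    # The actual article to summarize.
    {"role": "user", "content": f"Article: {article_text}"},
]

response = client.chat.completions.create(
    model="databricks-meta-llama-3-1-70b-instruct",  # placeholder model name
    messages=messages,
)
print(response.choices[0].message.content)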

Therefore, using a neutralizer (option B) is not an effective method for achieving the goal of generating stylized poetic summaries.
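
To make option D concrete, here is a minimal sketch of preparing fine-tuning data as chat-style JSONL records, a common format for instruction fine-tuning. The article/haiku pairs, the output path, and the exact record schema are assumptions; adapt them to whatever the chosen fine-tuning service expects.

import json

# Minimal sketch for option D: write (article, haiku) pairs as chat-style
# JSONL training records. Pairs and output path are placeholders.
training_pairs = [
    ("A new battery design doubles electric-car range.",
     "New cells hold more charge\nthe highway forgets its fear\nof the empty gauge"),
    ("City parks report record spring visitor numbers.",
     "Gates open at dawn\nfootsteps fill the waking grass\nspring counts every guest"),
]

with open("haiku_finetune.jsonl", "w", encoding="utf-8") as f:
    for article, haiku in training_pairs:
        record = {
            "messages": [
                {"role": "system", "content": "Summarize the article as a 5-7-5 haiku."},
                {"role": "user", "content": f"Article: {article}"},
                {"role": "assistant", "content": haiku},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")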


Contribute your Thoughts:

Blondell
17 days ago
I don't know, maybe we should just ask the LLM to write a limerick instead. At least those are supposed to be a bit nonsensical.
upvoted 0 times
Craig
20 days ago
Normalizing the tone? That's like trying to make a poem sound 'business-casual'. I'm not feeling it.
upvoted 0 times
Theodora
8 days ago
A: Maybe providing explicit instructions would help the LLM understand what tone and style to use.
upvoted 0 times
Bettina
20 days ago
Ah, the old 'make it do what I want' approach. Good luck with that, my friend.
upvoted 0 times
Corrina
27 days ago
Wait, we're supposed to generate poetry? I thought this was a tech exam. I'm out of my element here.
upvoted 0 times
Timothy
3 days ago
B: Yeah, we're working on making the article summaries more poetic.
upvoted 0 times
Val
5 days ago
A: Don't worry, we're just trying to improve the LLM's response.
upvoted 0 times
Cristina
1 month ago
If I wanted to write a haiku, I'd just use a Haiku generator. Why are we making this so complicated?
upvoted 0 times
Shenika
7 days ago
C: Fine-tuning the LLM on the desired dataset might be the best approach.
upvoted 0 times
Luisa
9 days ago
B: Using a Haiku generator would be too simple for this project.
upvoted 0 times
Dudley
29 days ago
A: Maybe the LLM needs more training data.
upvoted 0 times
Barabara
2 months ago
I disagree, I believe option C will not help achieve the desired response.
upvoted 0 times
Emogene
2 months ago
I think option B will not improve the LLM's response.
upvoted 0 times
