How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?
Comprehensive and Detailed In-Depth Explanation:
Temperature rescales the logits that feed the softmax during decoding. Increasing it (e.g., to 2.0) flattens the distribution, giving lower-probability tokens a better chance of being sampled and thus increasing diversity, so Option C is correct. Option A exaggerates: the top tokens still matter, they are just less dominant. Option B is backwards: decreasing temperature sharpens the distribution rather than broadening it. Option D is false: temperature directly alters the probability distribution, not generation speed. In practice, this parameter controls how creative the output is.
Reference: OCI 2025 Generative AI documentation likely reiterates temperature effects under decoding parameters.
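As a quick illustration of the mechanism, the sketch below (plain NumPy, not an OCI API; the logits are invented for the example) divides the logits by the temperature before the softmax. Values above 1.0 flatten the resulting distribution; values below 1.0 sharpen it.

import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into probabilities.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0, 0.5]  # hypothetical next-token scores

print(softmax_with_temperature(logits, 1.0))  # baseline distribution
print(softmax_with_temperature(logits, 2.0))  # flatter: more diversity
print(softmax_with_temperature(logits, 0.5))  # sharper: top token dominates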
Why is it challenging to apply diffusion models to text generation?
Comprehensive and Detailed In-Depth Explanation:
Diffusion models, widely used for image generation, iteratively denoise pure noise into a structured output. Images are continuous (pixel values), whereas text is categorical (discrete tokens), so the denoising process does not transfer directly to text; it struggles in discrete spaces. This makes Option C correct. Option A is false: text generation can benefit from complex models. Option B is incorrect: text is categorical, not continuous. Option D is wrong: diffusion models are not inherently image-only, they are simply better suited to continuous data. Research does adapt diffusion to text, but it is less straightforward.
Reference: OCI 2025 Generative AI documentation likely discusses diffusion models under generative techniques, noting their image focus.
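To see concretely why the continuous/discrete mismatch matters, consider the forward noising step. In the hypothetical sketch below (all values invented), adding Gaussian noise to pixel intensities yields another valid point in the same continuous space, while adding the same noise to token IDs yields fractional values that correspond to no vocabulary entry.

import numpy as np

rng = np.random.default_rng(0)

# Continuous data: noisy pixel intensities are still meaningful values,
# so a model can learn to reverse the noising step.
pixels = np.array([0.2, 0.8, 0.5])
print(pixels + rng.normal(scale=0.1, size=pixels.shape))

# Discrete data: "token ID 1042.37" maps to no token, so the same
# Gaussian denoising process has no direct analogue for text.
token_ids = np.array([1042, 7, 3519], dtype=float)
print(token_ids + rng.normal(scale=0.1, size=token_ids.shape))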
Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about PromptTemplate in relation to input_variables?
Comprehensive and Detailed In-Depth Explanation:
In LangChain, PromptTemplate supports any number of input_variables (zero, one, or more), allowing flexible prompt design, so Option C is correct. The example happens to show two, but that is not a requirement. Option A (minimum of two) is false: no such limit exists. Option B (a single variable) is too restrictive. Option D (no variables) contradicts the class's purpose: variables are optional but fully supported. This adaptability aids prompt engineering.
Reference: OCI 2025 Generative AI documentation likely covers PromptTemplate under LangChain prompt design.
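The sketch below (assuming a current langchain-core install; the template strings are invented for illustration) shows that zero, one, and two input variables are all accepted:

from langchain_core.prompts import PromptTemplate

# Two variables, as in the question.
two_vars = PromptTemplate(
    input_variables=["human_input", "city"],
    template="Answer {human_input} about {city}.",
)

# A single variable is equally valid.
one_var = PromptTemplate(
    input_variables=["topic"],
    template="Summarize {topic} in one sentence.",
)

# Zero variables: a fixed prompt.
no_vars = PromptTemplate(input_variables=[], template="Tell me a joke.")

print(two_vars.format(human_input="a question", city="Austin"))
print(one_var.format(topic="vector databases"))
print(no_vars.format())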
What does a higher number assigned to a token signify in the "Show Likelihoods" feature of the language model token generation?
Comprehensive and Detailed In-Depth Explanation:
In "Show Likelihoods," a higher number (the probability or log-likelihood score) indicates that a token is more likely to follow the current token, reflecting the model's prediction confidence, so Option B is correct. Option A (less likely) is the opposite. Option C (unrelated) misreads the feature: the likelihood ties tokens together contextually. Option D (the only token considered) assumes greedy decoding, which is not what the feature shows. This view helps users understand which continuations the model prefers.
Reference: OCI 2025 Generative AI documentation likely explains "Show Likelihoods" under token generation insights.
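A hypothetical sketch of what such a view computes (the candidate tokens and logits are invented, not taken from the OCI console): the softmax turns raw scores into per-token probabilities, and a higher number marks the continuation the model considers more likely.

import math

# Invented logits for candidates following "The capital of France is".
candidates = {" Paris": 9.1, " Lyon": 4.3, " pizza": 0.2}

total = sum(math.exp(v) for v in candidates.values())
for token, logit in candidates.items():
    prob = math.exp(logit) / total
    # Higher likelihood = greater model confidence in this continuation.
    print(f"{token!r}: p={prob:.4f}, log-likelihood={math.log(prob):.2f}")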
What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?
Comprehensive and Detailed In-Depth Explanation:
The "stop sequence" parameter defines a string (e.g., "." or "\n") that, when generated, halts text generation, giving control over output length and structure, so Option A is correct. Option B (a penalty) describes the frequency/presence penalties. Option C (maximum tokens) is a separate parameter. Option D (randomness) relates to temperature. Stop sequences ensure precise termination.
Reference: OCI 2025 Generative AI documentation likely details stop sequences under generation parameters.
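The observable effect is equivalent to truncating the output at the first occurrence of the stop string. A minimal sketch (the helper name and sample text are illustrative, not part of any OCI SDK):

def apply_stop_sequence(generated: str, stop: str) -> str:
    # Return text up to, and excluding, the first stop sequence.
    index = generated.find(stop)
    return generated if index == -1 else generated[:index]

raw = "Dear customer, thanks for writing.\nInternal note: escalate."
print(apply_stop_sequence(raw, "\n"))  # stops at the newline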