Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about PromptTemplate in relation to input_variables?
Comprehensive and Detailed In-Depth Explanation:
In LangChain, PromptTemplate supports any number of input_variables (zero, one, or more), allowing flexible prompt design, so Option C is correct. The example shows two variables, but that is not a requirement. Option A (a minimum of two) is false; no such limit exists. Option B (a single variable only) is too restrictive. Option D (no variables) contradicts the class's purpose; variables are optional but supported. This adaptability aids prompt engineering.
Reference: OCI 2025 Generative AI documentation likely covers PromptTemplate under LangChain prompt design.
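To illustrate the point, here is a minimal sketch using a plain-Python stand-in for PromptTemplate (built on str.format, so it mimics LangChain's rendering without requiring the library): a template may declare zero, one, or many input variables.

```python
# Minimal stand-in illustrating PromptTemplate's flexibility with
# input_variables; this is NOT the real LangChain class, just a sketch
# of the same zero/one/many-variable behavior.
class PromptTemplate:
    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        # Fill each declared variable into the template string.
        return self.template.format(**kwargs)

# Two variables, as in the question's example:
two = PromptTemplate(
    input_variables=["human_input", "city"],
    template="User said: {human_input}. City: {city}",
)
print(two.format(human_input="hello", city="Austin"))

# One variable also works:
one = PromptTemplate(input_variables=["topic"], template="Write about {topic}.")
print(one.format(topic="LLMs"))

# Zero variables are allowed too:
zero = PromptTemplate(input_variables=[], template="Say hi.")
print(zero.format())
```

The real LangChain class behaves the same way with respect to input_variables: the list simply names the placeholders the template expects, however many there are.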
What does a higher number assigned to a token signify in the "Show Likelihoods" feature of the language model token generation?
Comprehensive and Detailed In-Depth Explanation:
In "Show Likelihoods," a higher number (probability score) indicates that a token is more likely to follow the current token, reflecting the model's prediction confidence, so Option B is correct. Option A (less likely) is the opposite. Option C (unrelated to the current token) misinterprets the feature; likelihoods tie tokens together contextually. Option D (the only possible token) assumes greedy decoding, which is not the feature's purpose. This helps users understand model preferences.
Reference: OCI 2025 Generative AI documentation likely explains "Show Likelihoods" under token generation insights.
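The scores behind such a feature can be sketched as a softmax over the model's logits for the candidate next tokens. The token names and logit values below are made up for illustration; only the relationship (higher score = more likely next token) is the point.

```python
import math

def softmax(logits):
    # Convert raw logits to probabilities; subtracting the max
    # keeps the exponentials numerically stable.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next tokens and their (made-up) logits.
candidates = ["Paris", "London", "banana"]
logits = [4.0, 2.5, -1.0]

probs = softmax(logits)
for token, p in sorted(zip(candidates, probs), key=lambda t: -t[1]):
    # A higher number means the token is more likely to come next.
    print(f"{token}: {p:.3f}")
```

Here "Paris" gets the highest score, meaning the model considers it the most likely continuation, exactly what a higher number in "Show Likelihoods" conveys.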
What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?
Comprehensive and Detailed In-Depth Explanation:
The "stop sequence" parameter defines a string (e.g., "." or "\n") that, when generated, halts text generation, giving control over output length and structure, so Option A is correct. Option B (a penalty) describes the frequency/presence penalties. Option C (maximum tokens) is a separate parameter. Option D (randomness) relates to temperature. Stop sequences ensure precise termination.
Reference: OCI 2025 Generative AI documentation likely details stop sequences under generation parameters.
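The effect of a stop sequence can be approximated client-side by truncating text at the first occurrence of the stop string. This is a simplified sketch of the behavior only; the actual service stops generation server-side, token by token.

```python
def apply_stop_sequence(text, stop):
    # Cut the output at the first occurrence of the stop sequence;
    # the stop string itself is not included in the result.
    idx = text.find(stop)
    return text if idx == -1 else text[:idx]

raw = "First sentence. Second sentence. Third sentence."
print(apply_stop_sequence(raw, "."))  # generation would halt at the first "."
```

Note how this differs from a max-token limit (Option C): the cutoff is triggered by content, not by length, which is why stop sequences are useful for enforcing output structure.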
Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?
Comprehensive and Detailed In-Depth Explanation:
OCI Generative AI typically offers pretrained models for summarization (A), generation (B), and embeddings (D), aligning with common generative tasks. Translation models (C) are less emphasized in generative AI services and are often handled by specialized NLP platforms, making C the category that is NOT offered. While technically possible, translation is not a core OCI Generative AI focus based on standard offerings.
Reference: OCI 2025 Generative AI documentation likely lists model categories under pretrained options.
How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?
Comprehensive and Detailed In-Depth Explanation:
In OCI, fine-tuned customer models are stored in Object Storage and encrypted by default, ensuring privacy and security per cloud best practices, so Option B is correct. Option A (shared storage) violates privacy. Option C (unencrypted storage) contradicts security standards. Option D (Key Management) stores encryption keys, not models. Encryption protects customer data.
Reference: OCI 2025 Generative AI documentation likely details storage security under fine-tuning workflows.