Welcome to Pass4Success


Microsoft Exam AI-900 Topic 5 Question 79 Discussion

Actual exam question for Microsoft's AI-900 exam
Question #: 79
Topic #: 5

What should you implement to identify hateful responses returned by a generative AI solution?

Suggested Answer: D
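To make the options concrete, here is a minimal toy sketch of the content-filtering approach several commenters mention: a filtering layer that inspects a generated response and flags hate-category content. Real services (e.g. Azure AI Content Safety) use trained classifiers with severity scores rather than a keyword list; the terms and threshold below are purely illustrative placeholders.

```python
# Toy content-filtering layer placed after a generative model's output.
# HATE_TERMS is a placeholder list, not a real lexicon; production systems
# use trained classifiers, not keyword matching.
HATE_TERMS = {"hateful", "slur_example"}

def filter_response(response: str, threshold: int = 1) -> dict:
    """Return the response plus a flag when hate-category terms are detected."""
    tokens = response.lower().split()
    hits = sum(1 for t in tokens if t.strip(".,!?") in HATE_TERMS)
    return {
        "response": response,
        "flagged": hits >= threshold,  # flagged responses would be blocked or reviewed
        "hate_hits": hits,
    }

print(filter_response("A perfectly polite answer."))
print(filter_response("Some hateful text here."))
```

The point of the sketch is the placement, not the matching logic: filtering runs on the model's *output*, which is what lets it identify hateful responses, whereas prompt engineering and fine-tuning shape what the model generates in the first place.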

Contribute your Thoughts:

Linn
27 days ago
Content filtering? More like content censorship, am I right? We should be encouraging free expression, not stifling it!
upvoted 0 times
Eden
1 month ago
Fine-tuning, all the way. You can really hone the AI's responses to make sure they're on point and not crossing any lines.
upvoted 0 times
Lindsay
2 days ago
Prompt engineering is crucial to guide the AI in providing appropriate responses.
upvoted 0 times
Janey
4 days ago
Abuse monitoring can also be helpful in detecting any inappropriate content generated by the AI.
upvoted 0 times
Dudley
8 days ago
Fine-tuning is definitely important to identify and prevent hateful responses.
upvoted 0 times
Ronnie
1 month ago
I believe content filtering could also be useful in identifying hateful responses.
upvoted 0 times
Sina
1 month ago
I agree with Leota: abuse monitoring can help identify hateful responses.
upvoted 0 times
Teddy
1 month ago
Abuse monitoring is a must! You need to keep a close eye on what's coming out of that AI, and shut down any hateful nonsense right away.
upvoted 0 times
Wayne
5 days ago
D) fine-tuning
upvoted 0 times
Mila
12 days ago
C) content filtering
upvoted 0 times
Misty
13 days ago
B) abuse monitoring
upvoted 0 times
Micah
16 days ago
A) prompt engineering
upvoted 0 times
Leota
1 month ago
I think we should implement abuse monitoring.
upvoted 0 times
Cherelle
2 months ago
Prompt engineering, for sure. That way, you can train the AI to stay positive and avoid generating anything offensive in the first place.
upvoted 0 times
Phil
7 days ago
Fine-tuning the AI model can further refine its ability to avoid generating offensive content.
upvoted 0 times
Ernie
8 days ago
Content filtering is another useful tool to ensure only appropriate responses are generated.
upvoted 0 times
Lynda
11 days ago
Abuse monitoring could also help in identifying and filtering out any negative content.
upvoted 0 times
Jovita
21 days ago
Prompt engineering is definitely important to prevent hateful responses.
upvoted 0 times
Roxane
2 months ago
I think content filtering is the way to go. Gotta keep those hateful responses out of the system, you know?
upvoted 0 times
Teri
1 month ago
Prompt engineering might help guide the AI to generate more positive responses instead of hateful ones.
upvoted 0 times
Teddy
1 month ago
Abuse monitoring could also be useful to catch any inappropriate content before it's generated.
upvoted 0 times
Major
1 month ago
Content filtering is definitely important to weed out those hateful responses.
upvoted 0 times
