Welcome to Pass4Success


Microsoft Exam AI-900 Topic 5 Question 79 Discussion

Actual exam question for Microsoft's AI-900 exam
Question #: 79
Topic #: 5

What should you implement to identify hateful responses returned by a generative AI solution?

A) prompt engineering
B) abuse monitoring
C) content filtering
D) fine-tuning

Suggested Answer: D
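For context on what the thread keeps contrasting: content filtering inspects each generated response and flags harm categories such as hate. Here is a toy, keyword-based sketch of that idea only — real services such as Azure AI Content Safety use trained classifiers with severity levels, not keyword matching, and the function name and blocklist terms below are hypothetical placeholders:

```python
# Toy post-generation content filter (illustrative only).
# Real content-filtering services classify text into harm categories
# (hate, violence, sexual, self-harm) with severity scores; this sketch
# just checks a hypothetical blocklist to show the flag-after-generation idea.

HATE_TERMS = {"hateterm1", "hateterm2"}  # hypothetical placeholder terms

def flag_hateful(response: str) -> bool:
    """Return True if the generated response should be flagged."""
    words = {w.strip(".,!?").lower() for w in response.split()}
    return not HATE_TERMS.isdisjoint(words)

print(flag_hateful("A perfectly friendly reply."))       # False
print(flag_hateful("This contains hateterm1, sadly."))   # True
```

The point of the sketch is the placement in the pipeline: the filter runs on the model's output before it reaches the user, which is what distinguishes it from prompt engineering (input side) and abuse monitoring (usage patterns over time).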

Contribute your Thoughts:

Linn
2 months ago
Content filtering? More like content censorship, am I right? We should be encouraging free expression, not stifling it!
upvoted 0 times
Eden
2 months ago
Fine-tuning, all the way. You can really hone the AI's responses to make sure they're on point and not crossing any lines.
upvoted 0 times
Lindsay
1 month ago
Prompt engineering is crucial to guide the AI in providing appropriate responses.
upvoted 0 times
Janey
1 month ago
Abuse monitoring can also be helpful in detecting any inappropriate content generated by the AI.
upvoted 0 times
Dudley
1 month ago
Fine-tuning is definitely important to identify and prevent hateful responses.
upvoted 0 times
Ronnie
2 months ago
I believe content filtering could also be useful in identifying hateful responses.
upvoted 0 times
Sina
2 months ago
I agree with Leota; abuse monitoring can help identify hateful responses.
upvoted 0 times
Teddy
2 months ago
Abuse monitoring is a must! You need to keep a close eye on what's coming out of that AI, and shut down any hateful nonsense right away.
upvoted 0 times
Wayne
1 month ago
D) fine-tuning
upvoted 0 times
Mila
1 month ago
C) content filtering
upvoted 0 times
Misty
2 months ago
B) abuse monitoring
upvoted 0 times
Micah
2 months ago
A) prompt engineering
upvoted 0 times
Leota
2 months ago
I think we should implement abuse monitoring.
upvoted 0 times
Cherelle
3 months ago
Prompt engineering, for sure. That way, you can train the AI to stay positive and avoid generating anything offensive in the first place.
upvoted 0 times
Phil
1 month ago
Fine-tuning the AI model can further refine its ability to avoid generating offensive content.
upvoted 0 times
Ernie
1 month ago
Content filtering is another useful tool to ensure only appropriate responses are generated.
upvoted 0 times
Lynda
1 month ago
Abuse monitoring could also help in identifying and filtering out any negative content.
upvoted 0 times
Jovita
2 months ago
Prompt engineering is definitely important to prevent hateful responses.
upvoted 0 times
Roxane
3 months ago
I think content filtering is the way to go. Gotta keep those hateful responses out of the system, you know?
upvoted 0 times
Teri
2 months ago
Prompt engineering might help guide the AI to generate more positive responses instead of hateful ones.
upvoted 0 times
Teddy
2 months ago
Abuse monitoring could also be useful to catch any inappropriate content before it's generated.
upvoted 0 times
Major
2 months ago
Content filtering is definitely important to weed out those hateful responses.
upvoted 0 times
