Welcome to Pass4Success


Microsoft Exam AI-900 Topic 5 Question 79 Discussion

Actual exam question for Microsoft's AI-900 exam
Question #: 79
Topic #: 5

What should you implement to identify hateful responses returned by a generative AI solution?

Suggested Answer: D
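For context on the content-filtering approach several commenters mention below: in practice this is done with a moderation service such as Azure AI Content Safety, which classifies text into categories (hate, violence, sexual, self-harm) with severity scores. The following is only a toy sketch of the idea, screening generated text before it reaches the user; the term list, threshold, and function name are invented for illustration and this is not the real Azure API.

```python
# Toy illustration of content filtering: check generated text against
# a blocked-term list for the "hate" category before returning it.
# A real service (e.g. Azure AI Content Safety) uses trained classifiers
# and severity levels, not keyword matching.

# Placeholder blocklist -- stand-in terms, not a real hate lexicon.
HATE_TERMS = {"hateful-term-a", "hateful-term-b"}

def filter_response(text: str) -> tuple[str, bool]:
    """Return (text_to_show, was_blocked) for a generated response."""
    # Normalize tokens: lowercase and strip trailing punctuation.
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & HATE_TERMS:
        # Replace the flagged response rather than showing it.
        return "[response blocked by content filter]", True
    return text, False
```

The key design point the question tests: the filter runs on the model's *output* (and usually the input prompt too), so hateful responses are caught even when the prompt looked harmless.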

Contribute your Thoughts:

Ronnie
7 days ago
I believe content filtering could also be useful in identifying hateful responses.
upvoted 0 times
Sina
9 days ago
I agree with Leota, abuse monitoring can help identify hateful responses.
upvoted 0 times
Teddy
10 days ago
Abuse monitoring is a must! You need to keep a close eye on what's coming out of that AI, and shut down any hateful nonsense right away.
upvoted 0 times
Leota
11 days ago
I think we should implement abuse monitoring.
upvoted 0 times
Cherelle
15 days ago
Prompt engineering, for sure. That way, you can train the AI to stay positive and avoid generating anything offensive in the first place.
upvoted 0 times
Roxane
19 days ago
I think content filtering is the way to go. Gotta keep those hateful responses out of the system, you know?
upvoted 0 times
Teri
2 days ago
Prompt engineering might help guide the AI to generate more positive responses instead of hateful ones.
upvoted 0 times
Teddy
3 days ago
Abuse monitoring could also be useful to catch any inappropriate content before it's generated.
upvoted 0 times
Major
11 days ago
Content filtering is definitely important to weed out those hateful responses.
upvoted 0 times
