Welcome to Pass4Success
Salesforce Exam Salesforce AI Associate Topic 4 Question 35 Discussion

Actual exam question for Salesforce's Salesforce AI Associate exam
Question #: 35
Topic #: 4

Contribute your Thoughts:

Ashley
2 months ago
Ooh, I bet the answer is 'All of the above' - that's always the safe choice, right? Just kidding, just kidding. Gotta keep those AI models in check, though.
upvoted 0 times
Delsie
10 days ago
A) Ongoing auditing and monitoring of data that is used in AI applications
upvoted 0 times
...
Delisa
24 days ago
C) Using data that contains more examples of minority groups than majority groups
upvoted 0 times
...
Stanford
29 days ago
B) Excluding data features from the AI application to benefit a population
upvoted 0 times
...
Son
1 month ago
A) Ongoing auditing and monitoring of data that is used in AI applications
upvoted 0 times
...
...
Mohammad
2 months ago
Using more data on minority groups? Now that's an interesting idea. But I wonder if it would just create a new kind of bias. Hmm, tough choice.
upvoted 0 times
Jenise
1 month ago
C) Using data that contains more examples of minority groups than majority groups
upvoted 0 times
...
Alyce
1 month ago
B) Excluding data features from the AI application to benefit a population
upvoted 0 times
...
Mose
1 month ago
A) Ongoing auditing and monitoring of data that is used in AI applications
upvoted 0 times
...
...
Carman
2 months ago
Excluding data features? Sounds like a quick fix, but we all know that can lead to other problems. Better to get that data in order first.
upvoted 0 times
Shenika
27 days ago
C) Using data that contains more examples of minority groups than majority groups
upvoted 0 times
...
Gerald
1 month ago
A) That's true, ongoing auditing and monitoring of data is crucial to ensure fairness in AI applications.
upvoted 0 times
...
Carisa
2 months ago
B) Excluding data features from the AI application to benefit a population
upvoted 0 times
...
Anastacia
2 months ago
A) Ongoing auditing and monitoring of data that is used in AI applications
upvoted 0 times
...
...
Reita
2 months ago
I think excluding data features from the AI application to benefit a population could also help in mitigating bias.
upvoted 0 times
...
Laquanda
2 months ago
I believe option C) Using data that contains more examples of minority groups is also a good technique to ensure fairness.
upvoted 0 times
...
Samira
3 months ago
Ongoing auditing and monitoring of data is the way to go. Gotta keep an eye on those AI models to make sure they're not biased, you know?
upvoted 0 times
Alishia
2 months ago
B) Excluding data features from the AI application to benefit a population
upvoted 0 times
...
Darell
2 months ago
That's right, we need to constantly check the data to ensure fairness.
upvoted 0 times
...
Shanice
2 months ago
A) Ongoing auditing and monitoring of data that is used in AI applications
upvoted 0 times
...
...
Ivory
3 months ago
I agree with Lisha. It's important to regularly check the data for bias.
upvoted 0 times
...
Lisha
3 months ago
I think the answer is A) Ongoing auditing and monitoring of data.
upvoted 0 times
...
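The "ongoing auditing and monitoring" answer that several commenters favor can be made concrete with a simple periodic check. The sketch below (hypothetical data and a hypothetical `audit_positive_rates` helper, not any Salesforce API) compares a model's positive-prediction rate across demographic groups and flags large gaps, one common way such an audit is implemented:

```python
# Minimal sketch of an ongoing bias audit: compare the rate of positive
# predictions across groups and flag the model if the gap is too large.
# All names and data here are illustrative assumptions.

from collections import defaultdict

def audit_positive_rates(records, threshold=0.2):
    """records: iterable of (group, prediction) pairs, prediction in {0, 1}.

    Returns (per-group positive rates, True if the max rate gap exceeds
    threshold)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in records:
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > threshold

# Hypothetical audit run: group B receives far fewer positive predictions
# than group A, so the audit should flag a disparity.
data = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7
rates, flagged = audit_positive_rates(data)
```

Run on a schedule against fresh production data, a check like this is what "ongoing monitoring" amounts to in practice, as opposed to the one-off data fixes described in options B and C.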
