
Snowflake Exam DSA-C02 Topic 2 Question 24 Discussion

Actual exam question for Snowflake's DSA-C02 exam
Question #: 24
Topic #: 2

You are training a binary classification model to support admission approval decisions for a college degree program.

How can you evaluate whether the model is fair and doesn't discriminate based on ethnicity?

A) Evaluate each trained model with a validation dataset and use the model with the highest accuracy score.
B) Remove the ethnicity feature from the training dataset.
C) Compare disparity between selection rates and performance metrics across ethnicities.
D) None of the above.

Suggested Answer: C

By using ethnicity as a sensitive field and comparing the disparity in selection rates and performance metrics across each ethnicity value, you can evaluate the fairness of the model.
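To make the suggested approach concrete, here is a minimal Python sketch (not part of the exam question; the DataFrame and the column names `ethnicity`, `y_true`, and `y_pred` are hypothetical). It computes the selection rate and basic performance metrics for each ethnicity value, then reports the disparity between the best- and worst-treated groups:

```python
import pandas as pd
from sklearn.metrics import precision_score, recall_score

# Hypothetical evaluation table: one row per applicant in a held-out set.
# `ethnicity` is the sensitive field, `y_true` the actual admission outcome,
# `y_pred` the model's 0/1 prediction. Replace with your own validation output.
results = pd.DataFrame({
    "ethnicity": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true":    [1,   0,   1,   0,   1,   0,   1,   0],
    "y_pred":    [1,   0,   1,   1,   0,   0,   1,   0],
})

# Selection rate and performance metrics for each ethnicity value.
rows = []
for group, part in results.groupby("ethnicity"):
    rows.append({
        "ethnicity": group,
        "selection_rate": part["y_pred"].mean(),  # share of applicants approved
        "recall": recall_score(part["y_true"], part["y_pred"], zero_division=0),
        "precision": precision_score(part["y_true"], part["y_pred"], zero_division=0),
    })
report = pd.DataFrame(rows)
print(report)

# Disparity: the gap between the best- and worst-treated groups. Large gaps in
# selection rate or recall suggest the model treats ethnicity groups unequally.
print("selection-rate disparity:", report["selection_rate"].max() - report["selection_rate"].min())
print("recall disparity:", report["recall"].max() - report["recall"].min())
```

In practice you would run this over a real validation set; fairness toolkits such as Fairlearn package the same comparison (per-group metric frames and demographic parity differences) out of the box.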


Contribute your Thoughts:

Rosalyn
4 months ago
C is the way to go. Gotta look at the selection rates and metrics for each ethnicity to ensure there's no discrimination happening. Anything less is just sloppy work.
upvoted 0 times
Shizue
4 months ago
Haha, option D - 'None of the above' - that's the easy way out. Real data scientists roll up their sleeves and tackle the tough questions of fairness and equity.
upvoted 0 times
Kimberlie
2 months ago
B) Remove the ethnicity feature from the training dataset.
upvoted 0 times
Edelmira
3 months ago
C) Compare disparity between selection rates and performance metrics across ethnicities.
upvoted 0 times
Felton
3 months ago
A) Evaluate each trained model with a validation dataset and use the model with the highest accuracy score.
upvoted 0 times
Sylvia
4 months ago
I think evaluating each trained model with a validation dataset and using the model with the highest accuracy score is the best approach.
upvoted 0 times
Gilbert
4 months ago
I agree with Rosalia, comparing disparity is important to ensure fairness and avoid discrimination.
upvoted 0 times
Maryanne
4 months ago
But wouldn't removing the ethnicity feature from the training dataset be a better option to ensure fairness?
upvoted 0 times
Truman
4 months ago
B is a cop-out, just removing the feature won't solve the underlying bias. Gotta dig deeper and actually analyze the model's performance across groups.
upvoted 0 times
Winfred
3 months ago
B) Remove the ethnicity feature from the training dataset.
upvoted 0 times
Yaeko
3 months ago
C) Compare disparity between selection rates and performance metrics across ethnicities.
upvoted 0 times
Maira
3 months ago
A) Evaluate each trained model with a validation dataset and use the model with the highest accuracy score.
upvoted 0 times
Rosalia
4 months ago
I think we should compare disparity between selection rates and performance metrics across ethnicities.
upvoted 0 times
Stanton
4 months ago
Option C seems legit, gotta check that model isn't biased against any ethnicities. Don't want to repeat the college admission scandals, am I right?
upvoted 0 times
Katheryn
3 months ago
C) Compare disparity between selection rates and performance metrics across ethnicities.
upvoted 0 times
Amie
4 months ago
A) Evaluate each trained model with a validation dataset and use the model with the highest accuracy score.
upvoted 0 times
