Cloud Kicks prepares a dataset for an AI model and identifies some inconsistencies in the data.
What is the most appropriate action the company should take?
What are the potential consequences of an organization suffering from poor data quality?
What is an example of ethical debt?
''Launching an AI feature after discovering a harmful bias is an example of ethical debt. Ethical debt describes the potential harm or risk created by unethical or irresponsible decisions or actions related to AI systems. It can accumulate over time and have negative consequences for users, customers, partners, or society. Shipping a feature despite a known harmful bias adds to that debt by exposing users to unfair or inaccurate results that can erode their trust, satisfaction, or well-being.''
A consultant conducts a series of Consequence Scanning workshops to support testing diverse datasets.
Which Salesforce Trusted AI Principle is being practiced?
''Conducting a series of Consequence Scanning workshops to support testing diverse datasets practices Salesforce's Trusted AI Principle of Inclusivity. Inclusivity states that AI systems should be designed and developed with respect for diversity and for the inclusion of different perspectives, backgrounds, and experiences. Consequence Scanning workshops engage a range of stakeholders to identify and assess the potential impacts and implications of AI systems on different groups or domains, which supports Inclusivity by helping ensure that diverse datasets are used to test and evaluate those systems.''
A financial institution plans a campaign for preapproved credit cards.
How should they implement Salesforce's Trusted AI Principle of Transparency?
''Flagging sensitive variables and their proxies to prevent discriminatory lending practices is how they should implement Salesforce's Trusted AI Principle of Transparency. Transparency states that AI systems should be designed and developed with clarity and openness about how they work and why they make certain decisions, and that users should be able to access relevant information and documentation about the AI systems they interact with. Flagging sensitive variables and their proxies means identifying and marking variables that could lead to discrimination or unfair treatment based on a person's identity or characteristics (for example age, gender, or race), as well as attributes that can stand in for them, such as income or credit score. Flagging these variables supports Transparency by allowing users to understand and evaluate the data used or generated by the AI system.''
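As an illustration of what flagging sensitive variables and their proxies can look like in practice, here is a minimal Python sketch. The column names, the choice of age as the sensitive variable, the use of simple correlation as the proxy check, and the 0.8 cutoff are all assumptions made for this example; they are not part of the exam answer or of any Salesforce tooling.

```python
import pandas as pd

# Hypothetical loan-application features; column names are assumptions for illustration.
df = pd.DataFrame({
    "age": [23, 45, 31, 52, 38],
    "zip_code": [94105, 10001, 60601, 94105, 10001],
    "income": [42000, 88000, 61000, 95000, 70000],
    "years_at_address": [1, 12, 4, 15, 7],
    "credit_score": [640, 720, 680, 750, 700],
})

# Variables treated as sensitive for this campaign (an assumed list, not a fixed one).
SENSITIVE = {"age"}

# Columns that correlate strongly with a sensitive variable may act as proxies
# and deserve review before the model is trained. The cutoff is arbitrary here.
PROXY_THRESHOLD = 0.8

flags = {}
corr = df.corr(numeric_only=True)
for col in df.columns:
    if col in SENSITIVE:
        flags[col] = "sensitive"
    elif any(abs(corr.loc[col, s]) >= PROXY_THRESHOLD for s in SENSITIVE if s in corr):
        flags[col] = "possible proxy"

# Report the flagged columns so they can be reviewed and documented.
for name, reason in flags.items():
    print(f"{name}: {reason}")
```

Columns flagged this way would then be reviewed and documented before modeling, which is the kind of openness about the data that the Transparency answer describes.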