As AI use grows, so does the need for transparency and interpretability to mitigate bias, racism, and discrimination. This talk covers building trustworthy, human-oriented AI.
The need for interpretable artificial intelligence systems grows along with the prevalence of artificial intelligence applications in everyday life. Despite their successes, these systems have limitations and drawbacks. The most significant are the lack of transparency and interpretability behind their behavior, which leaves users with little understanding of how these models make particular decisions. Every day, users of these systems report new cases of bias, racism, and discrimination. In this talk, I discuss bias in AI and how to mitigate it to build trustworthy AI products using a human-oriented, critical approach.
Key takeaways
- Importance of Transparency: Understand the critical need for transparent AI systems to build user trust and ensure accountability in decision-making processes.
- Mitigating Bias: Learn strategies to identify and mitigate bias in AI models to prevent discrimination and promote fairness.
- Human-Centered Approach: Emphasize the importance of a human-oriented approach in AI development, ensuring that ethical considerations are prioritized.
- Interdisciplinary Collaboration: Recognize the role of various tech professionals, including software engineers, data engineers, and data scientists, in creating ethical AI.
- Real-World Impact: Reflect on the real-world implications of AI products and the responsibility of tech professionals to create systems that positively impact society.
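As a concrete starting point for the "Mitigating Bias" takeaway, one common way to quantify bias in a binary classifier is the demographic parity difference: the gap between groups in the rate of positive predictions. The sketch below is illustrative only; the group names and prediction data are hypothetical, and real audits would use a fairness toolkit and far more rigorous analysis.

```python
# Illustrative sketch: demographic parity difference, one simple
# fairness metric for a binary classifier's outputs.
# All data below is hypothetical.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in positive-prediction rates across groups.

    0.0 means every group receives positive outcomes at the same
    rate; larger values indicate greater disparity.
    """
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two demographic groups.
preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 = 62.5% positive
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 2/8 = 25.0% positive
}

gap = demographic_parity_difference(preds)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A large gap like this does not by itself prove discrimination, but it flags a disparity worth investigating with the human-oriented, critical approach the talk advocates.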