ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence |

by ForHumanity Center

ATGO AI is a podcast channel from ForHumanity. This podcast brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).

Copyright: ForHumanity Center

Episodes

#OpenBox The Data Brokers & Emerging Governance with Heidi Part 2

20m · Published 20 Dec 11:34

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For this purpose, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series.

I spoke with Heidi Saas.

Heidi is a Data Privacy and Technology Attorney. She regularly advises SMEs and start-ups working in a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity Contributor and algorithmic auditor.

This is part 2. She speaks about how enterprises can manage these challenges through good governance practices.

#OpenBox The Data Brokers & Emerging Governance with Heidi

21m · Published 14 Dec 20:58

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For this purpose, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series.


I spoke with Heidi Saas. Heidi is a Data Privacy and Technology Attorney. She regularly advises SMEs and start-ups working in a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity Contributor and algorithmic auditor.

This is part 1. She speaks about how regulations are emerging in the context of data brokers and how enterprises need to adapt to the changing compliance environment in managing data.

#openbox - Open issues and problems in dealing with dark patterns

13m · Published 06 Dec 12:51

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For this purpose, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series.

Today, we have with us Marie Potel-Saville, the founder and CEO of amurabi, a legal innovation-by-design agency. She was a lawyer for over 10 years at Magic Circle law firms such as Freshfields and Allen & Overy in London, Brussels and Paris. She is also the founder of Fair-Patterns, a SaaS platform to fight against dark patterns. She is spearheading efforts to address the challenging problem of deceptive designs in applications using innovative technology. We are going to be discussing some nuances with her on this.

In this episode, Marie speaks about enterprise approaches to working on fair patterns and the emerging regulatory interest in addressing the gap.

#OpenBox - Open issues in dealing with dark patterns and/or deceptive designs

20m · Published 23 Nov 13:26

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For this purpose, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series.

Today, we have with us Marie Potel-Saville, the founder and CEO of amurabi, a legal innovation-by-design agency. She was a lawyer for over 10 years at Magic Circle law firms such as Freshfields and Allen & Overy in London, Brussels and Paris. She is also the founder of Fair-Patterns, a SaaS platform to fight against dark patterns. She is spearheading efforts to address the challenging problem of deceptive designs in applications using innovative technology. We are going to be discussing some nuances with her on this.


In this episode, Marie speaks about the key considerations in dealing with deceptive designs and how fair patterns enable a better business proposition.

#openbox Bias identification and mitigation with Patrick Hall - Part 2

22m · Published 02 Nov 12:39

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For this purpose, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series.

Today, we have with us Patrick Hall. Patrick is an Assistant Professor at George Washington University. He is conducting research in support of the NIST AI Risk Management Framework and is a contributor to NIST's work on building a Standard for Identifying and Managing Bias in Artificial Intelligence. He also runs the open-source initiative "Awesome Machine Learning Interpretability", which maintains and curates a list of practical and awesome responsible machine learning resources. He is also one of the authors of Machine Learning for High-Risk Applications, published by O'Reilly, and manages the AI Incident Database.

This is part 2 of the episode.

He speaks about key approaches for bias mitigation and their limitations. He also discusses the open problems in this area.

#Openbox - bias discussion with Patrick Hall

22m · Published 18 Oct 09:36

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For this purpose, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series.


Today, we have with us Patrick Hall. Patrick is an Assistant Professor at George Washington University. He is conducting research in support of the NIST AI Risk Management Framework and is a contributor to NIST's work on building a Standard for Identifying and Managing Bias in Artificial Intelligence. He also runs the open-source initiative "Awesome Machine Learning Interpretability", which maintains and curates a list of practical and awesome responsible machine learning resources. He is also one of the authors of Machine Learning for High-Risk Applications, published by O'Reilly, and manages the AI Incident Database.

He speaks about key considerations for bias metrics across varied types of data. He also discusses the open problems in this area.

#OPENBOX Navigating Causality with Aleksander Molak - Part 2

23m · Published 28 Sep 12:21

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For this purpose, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have with us Aleksander Molak. Aleksander is a Machine Learning Researcher, Educator, Consultant, and Author who gained experience working with Fortune 100, Fortune 500, and Inc. 5000 companies across Europe, the USA, and Israel, designing and building large-scale machine learning systems. On a mission to democratize causality for businesses and machine learning practitioners, Aleksander is a prolific writer, creator, and international speaker. He is the author of the book Causal Inference and Discovery in Python.

This is Part 2. He discusses some critical considerations regarding causality, including honest reflections on how to leverage causality for humanity.

#OPENBOX - Navigating Causal Discovery with Aleksander Molak

25m · Published 28 Sep 12:17

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For this purpose, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have with us Aleksander Molak. Aleksander is a Machine Learning Researcher, Educator, Consultant, and Author who gained experience working with Fortune 100, Fortune 500, and Inc. 5000 companies across Europe, the USA, and Israel, designing and building large-scale machine learning systems. On a mission to democratize causality for businesses and machine learning practitioners, Aleksander is a prolific writer, creator, and international speaker. He is the author of the book Causal Inference and Discovery in Python.

This is Part 1. He discusses open issues and considerations in causal discovery, directed acyclic graphs, and causal effect estimators.

#OpenBox - Charting the Sociotechnical Gap in Explainable AI - Part 2

22m · Published 05 Sep 10:37

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. Today, we have with us Upol Ehsan, a Researcher and Doctoral Candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining AI, HCI, and philosophy, his work in Explainable AI (XAI) and Responsible AI aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His work has pioneered the area of Human-centered Explainable AI (a sub-field of XAI), receiving multiple awards at ACM CHI, FAccT, and HCII, and has been covered in major media outlets. By promoting equity and ethics in AI, he wants to ensure stakeholders who aren't at the table do not end up on the menu. Outside research, he is a founder and advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets.

We discuss the paper titled "Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI", which will be presented at CSCW 2023 and was co-authored with Koustuv Saha, Munmun de Choudhury, and Mark Riedl.

Upol explains specific nuances of why explainability cannot be considered independently of the model development and deployment environment.

This is part 2 of the discussion.

#OpenBox - Charting the Sociotechnical Gap in Explainable AI - Part 1

23m · Published 05 Sep 10:24

OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. Today, we have with us Upol Ehsan, a Researcher and Doctoral Candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining AI, HCI, and philosophy, his work in Explainable AI (XAI) and Responsible AI aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His work has pioneered the area of Human-centered Explainable AI (a sub-field of XAI), receiving multiple awards at ACM CHI, FAccT, and HCII, and has been covered in major media outlets. By promoting equity and ethics in AI, he wants to ensure stakeholders who aren't at the table do not end up on the menu. Outside research, he is a founder and advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets.

We discuss the paper titled "Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI", which will be presented at CSCW 2023 and was co-authored with Koustuv Saha, Munmun de Choudhury, and Mark Riedl.

Upol explains specific nuances of why explainability cannot be considered independently of the model development and deployment environment.

This is part 1 of the discussion.

ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence | has 38 episodes in total, all non-explicit content. Total playtime is 10:15:04. The language of the podcast is English. This podcast was added on August 26th, 2022. It might contain more episodes than the ones shown here. It was last updated on May 5th, 2024 at 01:40.
