
ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence |

by ForHumanity Center

ATGO AI is a podcast channel from ForHumanity. This podcast brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).

Copyright: ForHumanity Center

Episodes

#OpenBox - Charting the Sociotechnical Gap in Explainable AI - Part 2

22m · Published 05 Sep 10:37

OPENBOX aims to bring an easier understanding of open problems that helps in finding solutions for such problems. Today, we have with us Upol Ehsan, a Researcher and Doctoral Candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining AI, HCI, and philosophy, his work in Explainable AI (XAI) and Responsible AI aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His work has pioneered the area of Human-centered Explainable AI (a sub-field of XAI), has received multiple awards at ACM CHI, FAccT, and HCII, and has been covered in major media outlets. By promoting equity and ethics in AI, he wants to ensure stakeholders who aren’t at the table do not end up on the menu. Outside research, he is a founder and advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets.

We discuss the paper titled “Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI,” which will be presented at CSCW 2023 and was co-authored with Koustuv Saha, Munmun de Choudhury, and Mark Riedl.

Upol explains specific nuances of why explainability cannot be considered independently of the model development and deployment environment.

This is part 2 of the discussion.

--- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

#OpenBox - Charting the Sociotechnical Gap in Explainable AI - Part 1

23m · Published 05 Sep 10:24

OPENBOX aims to bring an easier understanding of open problems that helps in finding solutions for such problems. Today, we have with us Upol Ehsan, a Researcher and Doctoral Candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining AI, HCI, and philosophy, his work in Explainable AI (XAI) and Responsible AI aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His work has pioneered the area of Human-centered Explainable AI (a sub-field of XAI), has received multiple awards at ACM CHI, FAccT, and HCII, and has been covered in major media outlets. By promoting equity and ethics in AI, he wants to ensure stakeholders who aren’t at the table do not end up on the menu. Outside research, he is a founder and advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets.

We discuss the paper titled “Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI,” which will be presented at CSCW 2023 and was co-authored with Koustuv Saha, Munmun de Choudhury, and Mark Riedl.

Upol explains specific nuances of why explainability cannot be considered independently of the model development and deployment environment.

This is part 1 of the discussion.

--- Send in a voice message: https://podcasters.spotify.com/pod/show/ryan-carrier3/message

#OPENBOX - Open issues in Data Poisoning defence with Antonio Part 2

22m · Published 07 Mar 05:48

OPENBOX aims at bringing an easier understanding of open problems that helps in finding solutions for such problems. For the said purpose, I interview researchers and practitioners who have published works on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as podcast series.

My name is Sundar. I am an Ethics and Risk professional and an AI Ethics researcher. I am the host of this podcast.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.

Today, we have with us Antonio. He is a PhD student at University Ca' Foscari of Venice working in the fields of adversarial machine learning and computer vision. He is expected to join CISPA labs, Saarbrücken, Germany. He is passionate about machine learning security and closely follows cutting-edge research in this space. He also authored a paper with Kathrin (one of our earlier podcast guests). We are discussing the paper “Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning,” which he co-authored.

In this part 2 of the podcast, he speaks about emerging types of attacks wherein the attack approaches are less sophisticated, but impactful. 

--- Send in a voice message: https://anchor.fm/ryan-carrier3/message

#OPENBOX - Data Poisoning and associated Open issues with Antonio CINA

16m · Published 07 Mar 04:53

OPENBOX aims at bringing an easier understanding of open problems that helps in finding solutions for such problems. For the said purpose, I interview researchers and practitioners who have published works on open problems in a variety of areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.

Today, we have with us Antonio. He is a PhD student at University Ca' Foscari of Venice working in the fields of adversarial machine learning and computer vision. He is expected to join CISPA labs, Saarbrücken, Germany. He is passionate about machine learning security and closely follows cutting-edge research in this space. He also authored a paper with Kathrin (one of our earlier podcast guests). We are discussing the paper “Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning,” which he co-authored.

In this part 1, he speaks about the varied attack vectors and specific open issues in this space.

--- Send in a voice message: https://anchor.fm/ryan-carrier3/message

#OPENBOX - Eric Smith - Human Evaluation of Open-domain Conversations - Part 2

16m · Published 21 Oct 14:57

OPENBOX aims to bring an easier understanding of open problems that helps in finding solutions for such problems. For the said purpose, I interview researchers and practitioners who have published works on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have Eric with us. Eric is a Research Engineer at Facebook AI Research (FAIR). He is interested in questions around (a) conversational AI: how to make it better and how to evaluate it, and (b) bias in language models. He is interested in understanding languages and their underlying constructs. We will cover a paper titled “Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents,” published in May 2022, which he co-authored.

This is part 2 of the podcast. Eric discusses known limitations of per-turn, per-dialogue, and pairwise evaluation methods.

--- Send in a voice message: https://anchor.fm/ryan-carrier3/message

#OPENBOX - Eric Smith - Human Evaluation of Open-domain Conversations - Part 1

18m · Published 21 Oct 14:55

OPENBOX aims to bring an easier understanding of open problems that helps in finding solutions for such problems. For the said purpose, I interview researchers and practitioners who have published works on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have Eric with us. Eric is a Research Engineer at Facebook AI Research (FAIR). He is interested in questions around (a) conversational AI: how to make it better and how to evaluate it, and (b) bias in language models. He is interested in understanding languages and their underlying constructs. We will cover a paper titled “Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents,” published in May 2022, which he co-authored.

This is part 1 of the podcast. In this podcast, Eric discusses human evaluation in open-domain conversational contexts, Likert scales, and subjective outcomes. 

--- Send in a voice message: https://anchor.fm/ryan-carrier3/message

#OPENBOX - Carol Anderson - Zero-Shot Learning Part 2

9m · Published 21 Oct 14:46

OPENBOX aims to bring an easier understanding of open problems that helps in finding solutions for such problems. For the said purpose, I interview researchers and practitioners who have published works on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have Carol with us. Carol Anderson is a machine learning practitioner who has been working on NLP for the past 5 years, most recently with Nvidia. Before this, she worked in the field of molecular biology. She is currently focusing on AI ethics, given the potential harm that AI can cause to people, and she wants to use her skills to prevent those harms. I am glad to be having a conversation with her. We will cover a paper titled “Issues with Entailment-based Zero-shot Text Classification,” published in 2021, and also discuss specific practical issues associated with zero-shot learning.

This is part 2 of the conversation. She covers the difficulty of classifying among the available options, labor-intensive label design, and the underlying bias encoded in models. 

--- Send in a voice message: https://anchor.fm/ryan-carrier3/message

#OPENBOX - Carol Anderson - Zero Shot Learning Part 1

19m · Published 21 Oct 14:45

OPENBOX aims to bring an easier understanding of open problems that helps in finding solutions for such problems. For the said purpose, I interview researchers and practitioners who have published works on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have Carol with us. Carol Anderson is a machine learning practitioner who has been working on NLP for the past 5 years, most recently with Nvidia. Before this, she worked in the field of molecular biology. She is currently focusing on AI ethics, given the potential harm that AI can cause to people, and she wants to use her skills to prevent those harms. I am glad to be having a conversation with her. We will cover a paper titled “Issues with Entailment-based Zero-shot Text Classification,” published in 2021, and also discuss specific practical issues associated with zero-shot learning.

This is part 1 of the conversation. 

--- Send in a voice message: https://anchor.fm/ryan-carrier3/message

#OPENBOX - Maura Pintor - Improving Optimization of Adversarial Examples - Part 1

23m · Published 21 Oct 14:34

OPENBOX aims to bring an easier understanding of open problems that helps in finding solutions for such problems. For the said purpose, I interview researchers and practitioners who have published works on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.

Today, we have with us Maura. Maura Pintor is a Postdoctoral Researcher at the PRA Lab at the University of Cagliari, Italy. She received the MSc degree in Telecommunications Engineering with honors in 2018 and the Ph.D. degree in Electronic and Computer Engineering from the University of Cagliari in 2022. Her Ph.D. thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations", provides a framework for optimizing and debugging adversarial attacks. Maura co-authored a paper titled “Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples,” which is the focus of this podcast.

This is part 1 of the podcast. In this podcast, Maura covers why evaluating defenses is complex, the nature of mitigation failures, and why robustness is overestimated in evasion attacks, in the context of open issues.

--- Send in a voice message: https://anchor.fm/ryan-carrier3/message

#OPENBOX - Maura Pintor - Open issues in improving optimization of adversarial examples - Part 2

12m · Published 21 Oct 14:30

OPENBOX aims to bring an easier understanding of open problems that helps in finding solutions for such problems. For the said purpose, I interview researchers and practitioners who have published works on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.

Today, we have with us Maura. Maura Pintor is a Postdoctoral Researcher at the PRA Lab at the University of Cagliari, Italy. She received the MSc degree in Telecommunications Engineering with honors in 2018 and the Ph.D. degree in Electronic and Computer Engineering from the University of Cagliari in 2022. Her Ph.D. thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations", provides a framework for optimizing and debugging adversarial attacks. Maura co-authored a paper titled “Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples,” which is the focus of this podcast.

This is part 2 of the podcast. In this podcast, Maura covers transferability analysis, testing ML models for robustness, and challenges associated with repurposing models, in the context of open issues.

--- Send in a voice message: https://anchor.fm/ryan-carrier3/message

ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence | has 40 episodes in total of non-explicit content. Total playtime is 10:53:33. The language of the podcast is English. This podcast was added on August 26th, 2022, and it might contain more episodes than the ones shown here. It was last updated on May 21st, 2024, 01:10.
