ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence

#openbox Bias identification and mitigation with Patrick Hall - Part 2

22m · ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence | · 02 Nov 12:39

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. For this purpose, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning, to collect a simplified understanding of these problems. The interviews are published as a podcast series.

Today, we have with us Patrick Hall. Patrick is an Assistant Professor at George Washington University. He conducts research in support of the NIST AI Risk Management Framework and is a contributor to NIST's work on building a standard for identifying and managing bias in artificial intelligence. He also runs the open-source initiative “Awesome Machine Learning Interpretability”, which curates and maintains a list of practical responsible machine learning resources. He is one of the authors of Machine Learning for High-Risk Applications, published by O’Reilly, and he also manages the AI Incident Database.

This is part 2 of the episode.

Patrick speaks about key approaches to bias mitigation and their limitations, and discusses the open problems in this area.

More episodes from ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence |

#openbox - image watermarking in training with Kirill Part 2

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. It develops criteria for the independent audit of AI systems. To learn more, visit https://forhumanity.center/.

Today, we have with us Kirill. He is an ML Ph.D. student at Technische Universität Berlin and part of the UMI Lab, where he works on interpretability and explainable AI, studying abstractions and representations in deep neural networks. He is also a passionate photographer. We talk about his recent paper “Mark My Words: Dangers of Watermarked Images in ImageNet”, which he presented at the European Conference on Artificial Intelligence a few months ago.

This is part 2 of the episode.

#OpenBox - Dangers of watermarked images in training

Kirill, an ML Ph.D. student at Technische Universität Berlin, discusses his recent paper “Mark My Words: Dangers of Watermarked Images in ImageNet”, presented at the European Conference on Artificial Intelligence.

This is part 1 of the episode.

#OpenBox The Data Brokers & Emerging Governance with Heidi Part 2

I spoke with Heidi Saas. Heidi is a Data Privacy and Technology Attorney who regularly advises SMEs and start-ups across a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity contributor and algorithmic auditor.

This is part 2. She speaks about how enterprises can manage these challenges through good governance practices.

#OpenBox The Data Brokers & Emerging Governance with Heidi

I spoke with Heidi Saas. Heidi is a Data Privacy and Technology Attorney who regularly advises SMEs and start-ups across a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity contributor and algorithmic auditor.

This is part 1. She speaks about how regulations are emerging around data brokers, and how enterprises need to adapt to the changing compliance environment in managing data.

#openbox - Open issues and problems in dealing with dark patterns

Today, we have with us Marie. Marie Potel-Saville is the founder and CEO of Amurabi, a legal-innovation-by-design agency. She was a lawyer for over 10 years at Magic Circle law firms such as Freshfields and Allen & Overy in London, Brussels and Paris. She is also the founder of Fair-Patterns, a SaaS platform for fighting dark patterns, and is spearheading efforts to address the challenging problem of deceptive design in applications using innovative technology. We discuss some of the nuances of this work with her.

In this episode, Marie speaks about enterprise approaches to adopting fair patterns and the emerging regulatory interest in addressing this gap.
