ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence |

#OPENBOX - Navigating Causal Discovery with Aleksander Molak

25m · ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence | · 28 Sep 12:17

OPENBOX aims to make open problems easier to understand, which in turn helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning, collecting a simplified understanding of these problems. These conversations are published as a podcast series.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To learn more, visit https://forhumanity.center/.

Today, we have with us Aleksander. Aleksander Molak is a machine learning researcher, educator, consultant, and author who gained experience working with Fortune 100, Fortune 500, and Inc. 5000 companies across Europe, the USA, and Israel, designing and building large-scale machine learning systems. On a mission to democratize causality for businesses and machine learning practitioners, Aleksander is a prolific writer, creator, and international speaker. He is the author of the book Causal Inference and Discovery in Python.

This is Part 1. He discusses open issues and considerations in causal discovery, directed acyclic graphs (DAGs), and causal effect estimators.


The episode #OPENBOX - Navigating Causal Discovery with Aleksander Molak from the podcast ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence | has a duration of 25:26. It was first published on 28 Sep at 12:17. The cover art and the content belong to their respective owners.

More episodes from ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence |

#openbox - image watermarking in training with Kirill Part 2

OPENBOX aims to make open problems easier to understand, which in turn helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning, collecting a simplified understanding of these problems. These conversations are published as a podcast series.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To learn more, visit https://forhumanity.center/.

Today, we have with us Kirill. He is an ML Ph.D. student at Technische Universität Berlin and part of the UMI Lab, where he works on Interpretability and Explainable AI. He studies abstractions and representations in deep neural networks. He is also a passionate photographer. We are going to be talking about his recent paper “Mark My Words: Dangers of Watermarked Images in ImageNet”, which he presented at the European Conference on Artificial Intelligence a few months ago.

This is Part 2 of the podcast.


#OpenBox - Dangers of watermarked images in training

OPENBOX aims to make open problems easier to understand, which in turn helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning, collecting a simplified understanding of these problems. These conversations are published as a podcast series for curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To learn more, visit https://forhumanity.center/.

Today, we have with us Kirill. He is an ML Ph.D. student at Technische Universität Berlin and part of the UMI Lab, where he works on Interpretability and Explainable AI. He studies abstractions and representations in deep neural networks. He is also a passionate photographer. We are going to be talking about his recent paper “Mark My Words: Dangers of Watermarked Images in ImageNet”, which he presented at the European Conference on Artificial Intelligence a few months ago.

This is Part 1 of the podcast.


#OpenBox The Data Brokers & Emerging Governance with Heidi Part 2

OPENBOX aims to make open problems easier to understand, which in turn helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning, collecting a simplified understanding of these problems. These conversations are published as a podcast series.

I spoke with Heidi Saas.

Heidi is a data privacy and technology attorney. She regularly advises SMEs and startups working in a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity Contributor and algorithmic auditor.

This is Part 2. She speaks about how enterprises can manage these challenges through good governance practices.


#OpenBox The Data Brokers & Emerging Governance with Heidi

OPENBOX aims to make open problems easier to understand, which in turn helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning, collecting a simplified understanding of these problems. These conversations are published as a podcast series.

I spoke with Heidi Saas. Heidi is a data privacy and technology attorney. She regularly advises SMEs and startups working in a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity Contributor and algorithmic auditor.

This is Part 1. She speaks about how regulations are emerging in the context of data brokers and how enterprises need to adapt to the changing compliance environment in managing data.


#openbox - Open issues and problems in dealing with dark patterns

OPENBOX aims to make open problems easier to understand, which in turn helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning, collecting a simplified understanding of these problems. These conversations are published as a podcast series.

Today, we have with us Marie. Marie Potel-Saville is the founder and CEO of amurabi, a legal innovation-by-design agency. She was a lawyer for over 10 years at Magic Circle law firms such as Freshfields and Allen & Overy in London, Brussels, and Paris. She is also the founder of Fair-Patterns, a SaaS platform that fights dark patterns. She is spearheading efforts to address the challenging problem of deceptive designs in applications using innovative technology. We are going to be discussing some of the nuances of this work with her.

In this episode, Marie speaks about enterprise approaches to implementing fair patterns and the emerging regulatory interest in addressing the gap.
