When a Bot is the Judge

32m · Radio Berkman · 29 Nov 20:21

We encounter algorithms all the time. There are algorithms that can guess within a fraction of a percentage point whether you’ll like a certain movie on Netflix, a post on Facebook, or a link in a Google search. But Risk Assessment Tools now being adopted by criminal justice systems all across the country - from Arizona, to Kentucky, to Pennsylvania, to New Jersey - are made to guess whether you’re likely to flee the jurisdiction of your trial, or commit a crime again if you are released. With stakes as high as this — human freedom — some are asking for greater caution and scrutiny regarding the tools being developed. Chris Bavitz, managing director of the Cyberlaw Clinic at Harvard Law School, helped draft an open letter to the state legislature of Massachusetts about Risk Assessment Tools, co-signed by a dozen researchers working on the Ethics and Governance of Artificial Intelligence. He spoke with Gretchen Weber about why we need more transparency and scrutiny in the adoption of these tools. Read the open letter here: https://cyber.harvard.edu/publications/2017/11/openletter

The episode When a Bot is the Judge from the podcast Radio Berkman has a duration of 32:44. It was first published 29 Nov 20:21.

More episodes from Radio Berkman

A spotlight on Nieman-Berkman Klein Fellow Jonathan Jackson

Jonathan Jackson is a co-founder of Blavity Inc., a technology and media company for black millennials. Blavity’s mission is to "economically and creatively support Black millennials across the African diaspora, so they can pursue the work they love, and change the world in the process." Blavity has grown immensely since its founding in 2014 — among other things, spawning five unique sites, reaching over 7 million visitors a month, and organizing a number of technology, activism, and entrepreneurship conferences. Jackson is also a Joint Fellow with the Nieman Foundation and the Berkman Klein Center for Internet & Society for 2018-2019. During his time here, he says, he is looking for frameworks and unique ways to measure black cultural influence (and the economic impact of black creativity) in the US and around the world. Jonathan sat down with the Berkman Klein Center’s Victoria Borneman to talk about his work. Music from this episode: "Jaspertine" by Pling - Licensed under Creative Commons Attribution Noncommercial (3.0) More information about this work, including a transcript, can be found here: https://cyber.harvard.edu/story/2019-01/get-know-berkman-klein-fellow-jonathan-jackson

A spotlight on 2018 Berkman Klein Fellow Amy Zhang

Berkman Klein Center interns sat down with 2018 Berkman Klein Center Fellow Amy Zhang, to discuss her work on combating online harassment and misinformation as well as her research as a Fellow.

How Youth Are Reinventing Instagram and Why Having Multiple Accounts Is Trending

According to a recent Pew Research Center study, Instagram is the second most popular platform among 13 to 17-year-olds in the US, after YouTube. Nearly 72 percent of US teenagers are on the image-sharing platform. Our Youth & Media team looked at how teens are using Instagram to figure out who they are. Though Instagram may seem like just a photo-sharing platform, users have molded it into a more complex social media environment, with dynamics and a shared internal language almost as complex as a typical middle or high school. This episode was produced by Tanvi Kanchinadam, Skyler Sallick, Quinn Robinson, Jessi Whitby, Sonia Kim, Alexa Hasse, Sandra Cortesi, and Andres Lombana-Bermudez. More information about this work, including a transcript, can be found here: https://cyber.harvard.edu/story/2018-11/how-youth-are-reinventing-instagram-and-why-having-multiple-accounts-trending

Fake News & How To Stop It

Even before Election Day, 2016, observers of technology & journalism were delivering warnings about the spread of fake news. Headlines like “Pope Francis Shocks World, Endorses Donald Trump For President” and “Donald Trump Protestor Speaks Out, Was Paid $3500 To Protest” would pop up, seemingly out of nowhere, and spread like wildfire. Both of those headlines, and hundreds more like them, racked up millions of views and shares on social networks, gaining enough traction to earn mentions in the mainstream press. Fact checkers only had to dig one layer deeper to find that the original publishers of these stories were entirely fake, clickbait news sites, making up false sources, quotes, and images, often impersonating legitimate news outlets, like ABC, and taking home thousands of dollars a month in ad revenue. But by that time, the damage of fake news was done - the story of the $3500 protestor already calcified in the minds of the casual news observer as fact. It turns out that it’s not enough to expect your average person to be able to tell the difference between news that is true and news that seems true. Unlike the food companies who create the products on our grocery shelves, news media are not required by law to be licensed, inspected, or bear a label of ingredients and nutrition facts, not that they should or could be. But the gatekeepers of news media that we encounter in the digital age - the social media platforms like Facebook and Twitter, search engines like Google, and content hosts like YouTube - could and should be pitching in to help news consumers navigate the polluted sea of content they interact with on a daily basis. That’s according to Berkman Klein Center co-founder Jonathan Zittrain and Zeynep Tufekci, a techno-sociologist who researches the intersection of politics, news, and the internet. They joined us recently to discuss the phenomenon of fake news and what platforms can do to stop it. 
Facebook and Google have recently instituted processes to remove fake news sites from their ad networks. And since this interview Facebook has also announced options allowing users to flag fake news, and a partnership with the fact-checking website Snopes to offer a layer of verification on questionable sites. For more on this episode visit: https://cyber.harvard.edu/interactive/radioberkman238 CC-licensed content this week: Neurowaxx: “Pop Circus” (http://ccmixter.org/files/Neurowaxx/14234) Photo by Flickr user gazeronly (https://www.flickr.com/photos/gazeronly/10612167956/)
