
TechLaw Chat

by Matthew Lavy & Iain Munro

A series of short podcasts exploring emerging and topical issues in technology law.

Copyright: (c) 2020, Matthew Lavy & Iain Munro

Episodes

Surveillance tech and nosey neighbours

8m · Published 16 Feb 15:48

As increasingly sophisticated video and audio recording devices become available to householders at only moderate cost, deployment of such surveillance tech by householders is becoming ubiquitous. However, those deploying these devices do not always consider the impact of their surveillance tech on neighbouring properties or the legal ramifications of that impact. This episode explores this theme, and considers the causes of action and practical steps available to a neighbour adversely affected by overly intrusive surveillance tech. 

References:

  • For a couple of illustrative cases that have reached the Courts, see Fairhurst v Woodard [2021] 10 WLUK 151 and Woolley & Woolley v Akbar or Akram [2017] SC EDIN 7.
  • ICO guidance is available for people being filmed and those using domestic CCTV. Some discussion of the 'household exemption' is found in František Ryneš v Úřad pro ochranu osobních údajů [2015] 1 W.L.R. 2607.
  • Those in neighbour disputes should consider these steps, including using a mediation service.

Child safety on a video-sharing platform

8m · Published 11 Jan 11:31

As of now, the UK has not enacted online harms legislation, and social media platforms in general are under no statutory duty to protect children from harmful content. However, providers of video-sharing platforms do have statutory obligations in that regard, set out in Part 4B of the Communications Act 2003 (added to the Act by amendment in 2020). Amongst other things, section 368Z1 of the Act requires providers of such platforms to take appropriate measures to protect under-18s from videos and audio-visual commercial communications containing "restricted material". Regardless of the statutory obligations (or lack thereof in the case of non-video social media platforms), many platforms expend considerable effort seeking to protect children from harm.

In this episode, we consider how a video-sharing start-up might focus its resources in order to comply with its statutory obligations and to maximise the prospects that it offers a safe environment for children. We are joined in this endeavour by Dr Elena Martellozzo, Associate Professor in Criminology at the Centre for Child Abuse and Trauma Studies (CATS) at Middlesex University. Elena has extensive experience of applied research within the criminal justice arena; her research explores children and young people's online behaviour, sexual grooming and online harm, and police practice in the area of child sexual abuse. A leading researcher and global voice in the field of child protection, victimology, policing and cybercrime, she is a prolific writer and has participated in highly sensitive research with the police, the IWF, the NSPCC, the OCC, the Home Office and other government departments. Elena has also acted as an advisor on child online protection to governments and practitioners in Italy (since 2004) and Bahrain (2016, developing a national child internet safety policy framework).

Further reading:

  • Part 4B of the Communications Act 2003 can be found here: https://www.legislation.gov.uk/ukpga/2003/21/part/4B
  • A description of the Internet Watch Foundation technology suite can be found here: https://www.iwf.org.uk/our-technology/
  • A series of recommendations for various stakeholders (including tech companies) in relation to protection of children online in the age of COVID is made in the Glitch report.
  • An article by Dr Martellozzo and her team on the effect of harmful content on children can be found on Sage Open here.
  • Dr Martellozzo explains the grooming process in Chapter 4 of Bryce, Robinson and Petherick, Child Abuse and Neglect: Forensic Issues in Evidence, Impact and Management (Academic Press, 2019).
  • In the LSE-hosted blogpost Speaking Up: Contributing to the fight against gender-based online violence, Dr Martellozzo, Paula Bradbury and Emma Short provide commentary and references on this issue.


Kris Proposes Drone Delivery

9m · Published 21 Dec 12:07

This end-of-year episode explores the viability of delivering Christmas gifts by drone in UK airspace. Kris has ambitious plans involving the precision drop of parcels down chimneys. We discuss the legal risks that arise and the hurdles that will have to be jumped if the Civil Aviation Authority is to authorise that plan.

Further reading:

  • The primary guidance document for those wishing to operate unmanned aircraft systems within the UK is CAP722. It sets out the relevant law and provides substantial amounts of operational material and guidance.
  • The Civil Aviation Authority’s ‘Drone and Model Aircraft Code’ can be found here.
  • The two legislative sources referred to in the podcast are the Air Navigation Order 2016 and the Civil Aviation Act 1982.

 

Data protection representative actions: door slammed shut or door ajar?

8m · Published 11 Nov 17:19

The long-anticipated Supreme Court decision in Lloyd v Google [2021] UKSC 50 was handed down on 10 November 2021. Reversing the decision of the Court of Appeal and reinstating the first instance decision of Warby J, the Supreme Court held that Richard Lloyd could not pursue a damages claim as representative of the class of individuals affected by Google's alleged breach of the Data Protection Act 1998 in relation to the so-called "safari workaround". The reasoning is involved, and the Judgment bears reading in full. In essence, however, the court held that establishing a right to damages for breach of the Data Protection Act 1998, and quantifying those damages, involved a claimant-by-claimant analysis that, in each case, must identify the breach affecting that claimant, the loss suffered by that claimant, and the causal connection between breach and loss. The claims were accordingly unsuitable in principle for a representative action. The Judgment also addressed in some detail the nature of damages for breach of data protection legislation, and the nature and scope of representative actions under CPR 19.6.

In this episode we explore some of the ramifications of the decision through a scenario involving a data breach at an online marketplace.

The Judgment may be found here, and a press summary here.

The Emperor's New Tokens?

8m · Published 29 Jul 14:50

Non-fungible tokens (or 'NFTs') are a blockchain-based mechanism for uniquely identifying digital assets and verifying both authenticity and ownership. An increasingly popular use case for NFTs (albeit only one of several) involves the creation and sale of digital art. Notwithstanding that the NFT marketplace for digital art is dynamic and growing (with some NFTs selling at auction for vast sums), the legal basis of NFTs and, critically, the nature of what a purchaser actually acquires when purchasing an NFT artwork, are not universally understood. We explore these issues in this episode, which concerns the purchase of an NFT image for commercial use.

Further reading:

  • A useful introduction to NFTs, together with links for further reading, can be found on the Ethereum website here: https://ethereum.org/en/nft/.
  • An entertaining explanation of the phenomenon of digital art NFTs can be found here: https://www.theverge.com/22310188/nft-explainer-what-is-blockchain-crypto-art-faq.
  • For those who want to dig into the detail, the most commonly used technical standard for NFTs is currently ERC-721, which is explained here: https://ethereum.org/en/developers/docs/standards/tokens/erc-721/.
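The core idea behind ERC-721, a contract-level mapping from a unique token ID to a single owner (plus a URI pointing at the associated asset), can be sketched as a toy model. This is illustrative Python, not Solidity and not the actual standard; the class and addresses are invented for the example, though `owner_of` mirrors ERC-721's real `ownerOf` function:

```python
class ToyERC721:
    """Toy model of ERC-721's core idea: each token ID maps to exactly one owner."""

    def __init__(self):
        self._owners = {}      # token_id -> owner address
        self._token_uris = {}  # token_id -> metadata URI (e.g. a link to the artwork)

    def mint(self, token_id, owner, token_uri):
        if token_id in self._owners:
            raise ValueError("token already exists")  # IDs are unique, hence 'non-fungible'
        self._owners[token_id] = owner
        self._token_uris[token_id] = token_uri

    def owner_of(self, token_id):
        return self._owners[token_id]  # mirrors ERC-721's ownerOf()

    def transfer(self, sender, recipient, token_id):
        if self._owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self._owners[token_id] = recipient


registry = ToyERC721()
registry.mint(1, "0xAlice", "ipfs://QmExampleArtwork")
registry.transfer("0xAlice", "0xBob", 1)
print(registry.owner_of(1))  # 0xBob
```

Note what the model records: ownership of the token itself, which merely points at the artwork. Nothing in the mechanism transfers copyright in the underlying image, which is precisely the gap discussed in the episode.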

Keeping it ethical

9m · Published 08 Jul 15:15

AI companies need to engage with the ethical implications of their systems. That involves planning ahead: in this episode, we therefore look at the European Union’s proposed AI regulation, and – with the help of our guest, Patricia Shaw – discuss its application in an EdTech context. The proposed regulation is available here.

Patricia Shaw is CEO of Beyond Reach Consulting Ltd, providing AI/data ethics strategy, public policy engagement, bespoke AI/data ethics risk and governance advice, and advisory board services, across financial services, public sector (Health- and EdTech), and smart cities. 

 

Trish is passionate about Responsible AI. She is an expert advisor to IEEE's Ethical Certification Program for Autonomous Intelligent Systems and its P7003 (algorithmic bias) standards programme, and a Fellow of ForHumanity, contributing to the Independent Audit of AI Systems. She contributed to The Institute for Ethical AI in Education's Ethical Framework for AI in Education, and is a Fellow of the Royal Society of Arts, having served on the Advisory Board for the 'Power over information' project concerning regulation of online harms.

 

A non-practising solicitor, public speaker, and author, Trish is also Chair of the Trustee Board of the Society for Computers and Law, a member of the Board of ITechLaw, and Vice Chair of its AI Committee. She is listed in the 2021 '100 Brilliant Women in AI Ethics™'.

At the AI's discretion

8m · Published 07 Apr 09:37

Where a contract confers a discretion on one party that materially affects the rights of its counterparty, the discretion must be exercised rationally. The Supreme Court held in Braganza v BP Shipping Ltd [2015] UKSC 17 that exercising a discretion rationally involves (i) taking the right things (and only the right things) into account, and (ii) avoiding a decision that no reasonable decision-maker could have reached. In this episode, we explore how those principles might operate in the context of a discretion exercised automatically by a machine learning algorithm. We do so in the context of a fraud detection algorithm and an online farmers' market somewhere in East Anglia.

Further reading:

  • This episode was inspired by Tom Whittaker's thought-provoking article on the case of TF Global Markets (UK) (trading as ThinkMarkets) v Financial Ombudsman Service Limited [2020] EWHC 3178 (Admin). The article may be found here: https://www.lexology.com/library/detail.aspx?g=ad5569ea-af1a-4040-b596-a6a29b3c73b0
  • Supreme Court decision in Braganza v BP Shipping Ltd [2015] UKSC 17: https://www.supremecourt.uk/cases/uksc-2013-0099.html
  • Anyone with any doubts as to the prevalence of AI-based fraud detection systems might like to do this: https://letmegooglethat.com/?q=ai+fraud+detection. There is no problem in principle with using such tools. The issue (in a contractual context) is how their outputs are translated into discretionary decisions.
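The contractual point above, that a model's output must still be translated into a discretionary decision, can be illustrated with a toy sketch. Everything here (the function name, the factors, the threshold) is invented for illustration; the idea is simply that a Braganza-compliant automated discretion should record which factors were taken into account so the decision's rationality can later be reviewed:

```python
def exercise_discretion(transaction, fraud_score, threshold=0.9):
    """Toy sketch: turn a fraud model's score into a contractual decision,
    recording the factors considered (cf. Braganza limb (i)) so the
    decision can later be shown to be one a reasonable decision-maker
    could reach (limb (ii))."""
    relevant_factors = {
        "fraud_score": fraud_score,
        "amount": transaction["amount"],
        "prior_disputes": transaction["prior_disputes"],
    }
    # The score alone does not decide; it is weighed with other relevant factors.
    suspend = fraud_score >= threshold and transaction["prior_disputes"] > 0
    return {
        "decision": "suspend" if suspend else "allow",
        "factors_considered": relevant_factors,  # audit trail for rationality review
    }


result = exercise_discretion({"amount": 250.0, "prior_disputes": 1}, fraud_score=0.95)
print(result["decision"])  # suspend
```

The design choice worth noting is the audit trail: a decision that cannot later show what was, and was not, taken into account is hard to defend as rational.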

Computer says "Go!"

9m · Published 18 Mar 11:04

Fully autonomous vehicles may be a few years away, but cars offering so-called "eyes off/hands off", or "Level 3", automation, whereby the car is sufficiently capable that the driver's role is limited to taking over control when requested by the car to do so, are expected to be commercially available in the very near future. In this episode we flash forward to summer 2023 and an accident involving a Level 3 autonomous vehicle. We consider how existing legal frameworks cope with the liability issues that arise when AI takes control of the driving but the driver remains in the safety chain as a fallback for when the automation cannot cope.

 

Further reading:

  • Riley-Smith QC and McCormick, ‘Liability for Physical Damage’ in The Law of Artificial Intelligence (2020), which helped to inspire our scenario.
  • Glassbrook, Northey and Milligan, A Practical Guide to the Law of Driverless Cars (2019).
  • The hack involved in this scenario: https://interestingengineering.com/teslas-autopilot-can-be-tricked-in-just-a-split-second-through-this-method

The Black Box problem

9m · Published 05 Feb 15:27

AI can improve how businesses make decisions. But how does a business explain the rationale behind AI decisions to its customers? In this episode, we explore this issue through the scenario of a bank that uses AI to evaluate loan applications and needs to be able to explain to customers why an application may have been rejected. We do so with the help of Andrew Burgess, founder of Greenhouse Intelligence ([email protected]).

 

About Andrew: He has worked as an advisor to C-level executives in Technology and Sourcing for the past 25 years. He is considered a thought-leader and practitioner in AI and Robotic Process Automation, and is regularly invited to speak at conferences on the subject. He is a strategic advisor to a number of ambitious companies in the field of disruptive technologies. Andrew has written two books - The Executive Guide to Artificial Intelligence (Palgrave MacMillan, 2018) and, with the London School of Economics, The Rise of Legal Services Outsourcing (Bloomsbury, 2014). He is Visiting Senior Fellow in AI and RPA at Loughborough University and Expert-In-Residence for AI at Imperial College’s Enterprise Lab. He is a prolific writer on the ‘future of work’ both in his popular weekly newsletter and in industry magazines and blogs. 

 

Further reading:

  • ICO and The Alan Turing Institute, ‘Explaining decisions made with AI’ (2020)
  • ICO, ‘Guide to the General Data Protection Regulation (GDPR)’ (2021)
  • The Data Protection & Privacy chapter in The Law of Artificial Intelligence (Sweet & Maxwell, 2020)
  • An explanation of the SHAP and LIME tools mentioned by Andrew can be found at https://towardsdatascience.com/idea-behind-lime-and-shap-b603d35d34eb, and a deeper explanation for the more mathematically minded can be found here: https://www.kdnuggets.com/2019/12/interpretability-part-3-lime-shap.html 
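The core idea behind LIME, fitting a simple interpretable model to a black-box model's behaviour in the neighbourhood of one input, can be sketched in a few lines of numpy. This is a simplification for illustration, not the real `lime` library; the stand-in "loan model" and all names are assumptions made for the example:

```python
import numpy as np

def black_box(X):
    # Stand-in for an opaque loan-scoring model (invented for illustration):
    # feature 0 helps the applicant, feature 1 hurts.
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1])))

def lime_style_weights(x, n_samples=2000, scale=0.1, seed=0):
    """Fit a local linear surrogate around point x: perturb the input,
    query the black box, and solve a proximity-weighted least-squares fit.
    The coefficients approximate each feature's local influence."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = black_box(X)
    dist = np.linalg.norm(X - x, axis=1)
    sw = np.sqrt(np.exp(-((dist / scale) ** 2)))   # nearby samples count more
    A = np.hstack([X, np.ones((n_samples, 1))])    # linear model with intercept
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                               # per-feature local weights


coefs = lime_style_weights(np.array([0.5, 0.5]))
# Expect a positive weight for the helpful feature and a negative one for the other.
print(coefs)
```

The surrogate's coefficients are what gets shown to the customer: not the model's internals, but a locally faithful explanation of why this application was scored as it was.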

 

Track it with a Smart Contract?

9m · Published 22 Dec 09:02

This podcast explores the benefits and limitations of Smart Contracts in the context of human-provided services by considering the practicalities of using Smart Contracts to regulate the contractual relationship between brands and social media influencers.

 

Further reading:

  • For a more detailed discussion of smart contracts, see the Legal Statement on cryptoassets and smart contracts, November 2019, published by the LawTech Delivery Panel UK Jurisdiction Taskforce (downloadable here).
  • If you’d like to see one of the starting points - N. Szabo, “Smart contracts: building blocks for digital markets.” (1996) EXTROPY: The Journal of Transhumanist Thought, (16), 18(2), available in revised draft here. 
  • For some insight into the vulnerabilities – see, e.g., Singh et al ‘Blockchain smart contracts formalization: Approaches and challenges to address vulnerabilities’, Computers & Security 88 (2020) 101654.
  • A promising application of Smart Contracts is in supply chains – discussed here in the Harvard Business Review. 
  • Our scenario was inspired by a CoinDesk article on 22 January 2015 (‘Ex-Rugby Star: Smart Contracts Could Prevent Legal Disputes in Sport’).

TechLaw Chat has 14 episodes in total, all of non-explicit content. Total playtime is 2:09:01. The language of the podcast is English. This podcast was added on 28 October 2022 and last updated on 23 March 2024 at 13:41. It might contain more episodes than the ones shown here.
