Unconscionable Acts: Assigning Responsibility for AI-Generated Evil  

Written by Allison McVey

Edited by Austin McNichols, Esme Merrill, and Cecilia Murphy

Gonzalez v. Google (2023) and related cases like Twitter v. Taamneh (2023) offer early insight into how the Supreme Court is beginning to assess questions of legal responsibility raised by artificial intelligence, adding a further wrinkle to ongoing debates over tech companies' liability for the information published on their platforms. Section 230 of the Communications Decency Act absolves internet service providers of liability for user-generated content circulating on their platforms; but as artificial intelligence revolutionizes this already potently influential and unregulated media form, it is crucial that courts hold tech companies responsible for the terroristic content proliferating on and emanating from their sites. One way to institute a higher standard of responsibility for user-targeted algorithms is to treat them as legal employees of the companies that maintain them, subjecting those companies to the doctrine of respondeat superior. Under this framework, technology companies would have a duty to restrain damaging conduct and prevent it from reaching extremist users.
