Article: The Mathematics of Regulatory Fragmentation

This study explores the rapidly evolving landscape of state-level digital platform regulation in the United States and uncovers a surprising mathematical dimension to how overlapping laws shape technology design, compliance costs, and market dynamics.

🔍 What the Research Explores

State governments have introduced a patchwork of social media safety laws that impose technical mandates on online platforms, especially mandates aimed at protecting young users. While well-intentioned, these regulations do not simply add compliance costs as jurisdictions pile on more rules. Instead:

  • Each new state requirement interacts with every other, creating multiplicative — not additive — technical burdens for platforms.

  • This exponential growth in complexity stems from a combinatorial reality: as more distinct regulations are introduced, the number of potential conflicts and design constraints multiplies rapidly.

  • Smaller platforms and new market entrants are particularly disadvantaged, as they face disproportionately high engineering and operational costs to satisfy conflicting rules across jurisdictions.

  • Ironically, regulatory fragmentation may also undermine the very safeguards these laws are meant to provide, by incentivizing workarounds and fragmenting the user experience.
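The additive-versus-multiplicative point above can be sketched with a toy calculation (illustrative only, not the paper's actual model): if each regulation simply added a compliance task, burdens would grow linearly in the number of regulations n; if any two regulations can conflict, potential interactions grow as C(n, 2); and if any subset of regulations can jointly constrain a design, the count of possible constraint combinations grows exponentially.

```python
from math import comb

def additive_tasks(n: int) -> int:
    """Additive view: each of n regulations adds one compliance task."""
    return n

def pairwise_conflicts(n: int) -> int:
    """Multiplicative view: any two distinct regulations may conflict,
    so potential pairwise interactions grow as C(n, 2)."""
    return comb(n, 2)

def subset_constraints(n: int) -> int:
    """Combinatorial view: any nonempty subset of regulations can
    jointly constrain a design, giving 2**n - 1 possibilities."""
    return 2**n - 1

for n in (5, 10, 20):
    print(f"n={n}: additive={additive_tasks(n)}, "
          f"pairwise={pairwise_conflicts(n)}, "
          f"subsets={subset_constraints(n)}")
```

At n=20 regulations, the additive view counts 20 tasks, while pairwise conflicts already number 190 and possible joint-constraint combinations exceed a million; this is the scaling dynamic that falls hardest on small platforms without large compliance teams.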

📌 Why It Matters

This research highlights a critical, often overlooked dimension of digital policy: the interactions between laws matter just as much as the content of the laws themselves. By applying mathematical reasoning to regulation, this work provides policymakers, researchers, and technologists with a new lens for assessing the real-world effects of decentralized digital governance.

The paper contributes to debates on platform regulation, digital governance, and the economics of compliance: topics that are central to recent legislative efforts across the U.S. and around the world.

📥 Read the Paper

The full paper (58 pages) is available on SSRN.

[Article] Nerd Harder: A Typology of Techno-Legal Solutionist Logics in Child Online Safety Laws

Co-authored with Lorcan Neill and Evan Ringel, this project examines recently enacted state-level child online safety laws (COSLs) and shows how different techno-legal solutionist logics manifest in these legislative efforts.

Our analysis demonstrates three interdependent patterns: (1) the checklist fallacy (reducing safety to discrete technical features), (2) the false promise of age verification (assuming identity verification will prevent harm), and (3) the design determinism myth (overestimating design’s power to shape social outcomes).

The appeal of techno-legal solutionism transcends borders: from California to Brussels, it offers policymakers seemingly clear solutions to complex problems. However, our analysis shows that this approach fundamentally misunderstands both the social shaping of technology and the complexity of youth well-being. Technologies can influence outcomes by offering (or not) certain design features (i.e., affordances); yet these designs do not determine the outcomes. This overconfidence that technology can determine an outcome risks ignoring the more complex and nuanced forces shaping children's online experiences. Moving forward requires abandoning the fallacy that we can simply "nerd harder" our way to youth safety, and instead embracing the more challenging work of developing comprehensive, nuanced approaches that recognize both the limitations and possibilities of technical intervention.

The Article is open access here.

New article: “Digital intermediaries and transparency reports as strategic communications”

A new study authored by Amanda Reid and Evan Ringel examines how "transparency reports" have become an institutionalized practice among digital intermediaries. This work frames platforms' transparency reports as corporate social responsibility (CSR) disclosures, and it argues they represent an emerging institutional practice shaped by isomorphic pressures (organizations becoming more similar by mimicking each other). Moreover, the article notes that while CSR research exists in other sectors, there's a gap in studying CSR in the tech sector. This research makes two main theoretical contributions. First, the empirical evidence shows how this practice has spread across companies and across jurisdictions around the world. And second, it offers a two-fold explanation for why different companies do this: (1) Big Tech companies use them as legitimacy-seeking strategic communications, and (2) SMEs (small and midsize enterprises) copy Big Tech's practices through "mimetic isomorphism."

Amanda Reid & Evan Ringel, Digital Intermediaries & Transparency Reports as Strategic Communications, 41 The Information Society 91-109 (2025), doi: 10.1080/01972243.2025.2453529.

See also Amanda Reid, Evan Ringel & Shanetta M. Pendleton, Transparency Reports as CSR Reports: Motives, Stakeholders, and Strategies, 20 Social Responsibility Journal 81-107 (2024), doi: 10.1108/SRJ-03-2023-0134; Amanda Reid, Shanetta M. Pendleton & Lightning E.H. JM Czabovsky, Big Tech Transparency Reports & CSR: Longitudinal Content Analysis of News Coverage, 13 The Journal of Social Media in Society 122-154 (2024), https://www.thejsms.org/index.php/JSMS/article/view/1447/693

2024 Hargrove Colloquium: Media Law in the Age of Artificial Intelligence

On April 16, 2024, the Center for Media Law and Policy will be hosting the 2024 Hargrove Colloquium. The topic for this year's colloquium is Media Law in the Age of Artificial Intelligence. Come hear from David McCraw, deputy general counsel at The New York Times Co. and author of the book Truth in Our Times: Inside the Fight for Press Freedom in the Age of Alternative Facts, as well as Ruth Okediji from Harvard Law School, who served as a member of the National Academies' Committee on the Impact of Copyright Policy on Innovation in the Digital Era.

Our distinguished panel of experts will examine efforts at the federal and state level to prevent potential abuse of AI and will delve into the impact of generative AI on critical areas of media law, offering insights and sparking thought-provoking discussion. Key areas of focus will include:

  • Copyright Law: Who owns the creative output generated by AI? What is the impact on copyright holders when their work is used in training AI systems? How will existing copyright frameworks adapt to accommodate generative AI?
  • Defamation and Tort Law: Who, if anyone, can be held liable for harmful or defamatory content that AI generates? What are the legal implications for users and platforms employing AI-powered algorithms to curate and publish information?
  • Political Communication: How is AI being used in political campaigns and advertising? What are the potential risks and safeguards around AI-powered misinformation and voter manipulation?
  • Journalism: How is AI transforming the news industry? What are the legal and ethical considerations surrounding AI-generated news? How can journalists leverage AI while upholding journalistic integrity?

The Colloquium will take place at 7:00 PM at the George Watts Hill Alumni Center at the University of North Carolina and is free and open to the public. Visitor parking is available in the Rams Head Parking Deck.

You can read more about the colloquium on our event page.

New UNC Center on Technology Policy

I’m thrilled that UNC is launching a new center focused on technology policy!  The UNC Center on Technology Policy (CTP) will hold its first public event on Friday, April 29, but they have already been working hard — and having an impact — on the conversation about how to regulate online content, with a fantastic policy brief on “Understanding, Enforcement, and Investment: Options and Opportunities for State Regulation of Online Content.”

CTP’s mission is to help craft public policy for a better internet. Utilizing an interdisciplinary academic framework, CTP works to identify knowledge gaps and develop actionable policy frameworks that will enable us to realize the potential benefits of technology while minimizing its harms. By working closely with students and expanding the University’s offerings in technology policy analysis, we seek to cultivate and train the field’s future practitioners.  For more on CTP’s plans, you can read a recent overview of the center in The Well.

The new center is led by Matt Perault, a professor of the practice at UNC's School of Information & Library Science (SILS) and a consultant on technology policy issues. He previously led the Center on Science & Technology Policy at Duke University and was a professor of the practice at Duke's Sanford School of Public Policy. Before that, Matt worked at Facebook, where he was a director on the public policy team and the head of the global policy development team. He covered issues ranging from antitrust to law enforcement to human rights and oversaw the company's policy work on emerging technologies like artificial intelligence and virtual reality. Matt holds a law degree from Harvard Law School, a Master's degree in Public Policy from Duke's Sanford School of Public Policy, and a Bachelor's degree in political science from Brown University.

To mark their public launch, CTP will be hosting an event on Zoom at noon on Friday, April 29, about state efforts to regulate platform content.  They have a fantastic lineup of panelists, including Emma Llansó (Center for Democracy & Technology), Wendy Gooditis (VA House of Delegates), Mary-Rose Papandrea (UNC School of Law), and Steve DelBianco (NetChoice).  You can register for the event, which is free and open to the public, here.

The new center, which is based at the SILS, will work closely with UNC’s Center on Information, Technology, and Public Life and the UNC Center for Media Law and Policy.  Welcome to the neighborhood, CTP!
