WITSA Webinar On the Societal Impact of Open Foundation Models

April 3, 2024

On April 3rd, WITSA hosted its third “WITSA Viewpoint: AI Regulation” Webinar, entitled “On the Societal Impact of Open Foundation Models” and featuring Sayash Kapoor, a Ph.D. candidate at Princeton University's Center for Information Technology Policy.


For those of you who missed it, the Webinar is available on the WITSA YouTube channel.

One of the biggest tech policy debates today concerns the future of AI, especially foundation models and generative AI. Should open AI models be restricted? This question is central to several policy efforts, including the EU AI Act and the U.S. Executive Order on Safe, Secure, and Trustworthy AI. In this talk, Sayash Kapoor discussed the benefits and risks of open foundation models and introduced a framework for assessing their marginal risk compared to closed models or existing technology. The framework helps explain why the marginal risk is low in some cases, clarifies disagreements in past studies by revealing their differing assumptions about risk, and can help foster more constructive debate going forward.


The presentation was based on a recent publication entitled “On the Societal Impact of Open Foundation Models: Analyzing the benefits and risks of foundation models with widely available weights”, co-authored by Sayash Kapoor along with 24 other authors spanning 16 organizations across academia, industry, and civil society.


Governments around the world are crafting different policies on foundation models. The design and implementation of these policies should consider both open and closed foundation model developers. In particular, open foundation models provide significant societal benefits in terms of the distribution of power, innovation, and transparency. While open foundation models are conjectured to contribute to malicious uses of AI, the weakness of evidence is striking. More research is necessary to assess the marginal risk of open foundation models.

Policymakers should also consider the potential for AI regulation to have unintended consequences on the vibrant innovation ecosystem around open foundation models. When regulations directly address open foundation models, the precise definition used to identify these models and developers should be duly considered. Hinging regulation exclusively on open weights may not be appropriate given the gradient of release. Hostile actors, for instance, could leverage open data and source code—without model weights—to retrain models and generate comparable harms.

And even when regulations do not directly address open foundation models, they may have an adverse impact: liability for downstream harms and strict content provenance requirements may suppress the open foundation model ecosystem. Consequently, if policymakers are to implement such interventions, direct consultation with the open foundation model community should take place, with due consideration given to their interests.


Sayash Kapoor is currently co-authoring a book titled AI Snake Oil with Arvind Narayanan, which provides a critical analysis of AI capabilities, separating the hype from the true advances. His research examines the societal impacts of artificial intelligence, with a focus on reproducibility, transparency, and accountability in AI systems. His work has been recognized with various awards, including a best paper award at ACM FAccT, an impact recognition award at ACM CSCW, and inclusion in TIME’s inaugural list of the 100 most influential people in AI.

For more information: See Background on AI Snake Oil.