Measuring the Openness of AI Foundation Models: Competition and Policy Implications

Thibault Schrepel, Jason Potts

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

This paper provides the first comprehensive evaluation of AI foundation model licenses as drivers of innovation commons. We introduce our analysis by outlining how AI licenses regulate access privileges to the fundamental inputs of AI innovation commons. We show that AI licenses operate as a bottleneck, as their level of openness directly influences the flow of knowledge and information into the commons. We then introduce a new methodology for evaluating the openness of AI foundation models. Our methodology extends beyond purely technical considerations to more accurately reflect AI licenses’ contribution to innovation commons. We proceed to apply it to today’s most prominent models—including OpenAI’s GPT-4, Meta’s Llama 3, Google’s Gemini, Mistral’s 8×7B, and MidJourney’s V6—and find significant differences from existing AI openness rankings. We conclude by proposing concrete policy recommendations for regulatory and competition agencies interested in fostering AI commons based on our findings.
Original language: English
Pages (from-to): 1-26
Number of pages: 26
Journal: Information & Communications Technology Law
Publication status: Published - 5 Mar 2025

