
Still a drop in the bucket: new data on global AI safety research


2025-04-30

The latest update from ETO's Research Almanac

Last year, ETO released what we believe was, at the time, the most systematic published analysis of global research into AI safety, drawing on data from our Research Almanac and the hundreds of millions of articles in our Merged Academic Corpus. We recently updated the Almanac with data through the end of 2023, giving us an opportunity to revisit last year's findings. Here's an update on the state of global AI safety research through ETO's lens:

Key findings

  • AI safety research is growing fast, with an especially big jump in 2023 relative to prior years - but it's still a drop in the bucket of AI research overall.
  • American universities and companies lead, followed by organizations in China, the UK, Germany, and Canada.
  • Chinese organizations may be less prevalent in AI safety research than in other AI-related domains. But in both the U.S. and China, AI safety research is a small fraction of overall research effort.
👉
What counts as AI safety research? We use a very broad definition, aiming to capture a wide variety of concepts commonly linked to AI safety, security, and related concerns. There's no single authoritative definition of AI safety, so we tried to capture everything in the ballpark. We trained a machine learning model to identify articles meeting our AI safety definition across the Merged Academic Corpus. The definition has room for interpretation, and the model sometimes produces false positives, so the numbers in this post (and in all ETO resources that cover AI safety research) should be considered estimates.
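ETO's actual classifier is not public, so purely as an illustration of the general approach described above - a supervised model trained to flag AI safety-related articles from their text - here is a minimal sketch using scikit-learn, with made-up example data and hypothetical labels:

```python
# Illustrative sketch only: a text classifier over article abstracts.
# The training texts, labels, and model choice here are assumptions,
# not ETO's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled abstracts: 1 = AI safety-related, 0 = not.
texts = [
    "adversarial robustness of deep neural networks",
    "alignment of large language models with human values",
    "image segmentation for medical scans",
    "graph neural networks for traffic forecasting",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Classify a new abstract; the output is an estimate, not ground truth.
pred = model.predict(["detecting reward hacking in reinforcement learning"])
```

As the callout notes, any such model produces false positives, which is why counts derived from it should be read as estimates rather than exact totals.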
  • According to the latest data from the Research Almanac, about 45,000 AI safety-related articles were released between 2018 and 2023. (This total, and the other Research Almanac-derived findings in this post, are based on articles with English titles and abstracts in our Merged Academic Corpus; they omit articles published solely in Chinese and non-public research. For further details and caveats, see the Almanac documentation.)
  • AI safety research grew 312% between 2018 and 2023 - a notable uptick relative to the longer-term trend.
  • Despite this rapid growth, safety research still comprises only about 2% of all research into AI. (Even though safety research is growing, AI research as a whole is growing too.)
  • Pound for pound, AI safety research is highly cited - the average AI safety-related research article has been cited 28 times, compared with 16 times for the average AI article.
  • 30% of the AI safety-related articles in the Research Almanac dataset had authors from American organizations. 12% had Chinese-affiliated authors, and 18% had European-affiliated authors. (Note that some articles lack information about authors' countries of affiliation, and articles published solely in Chinese are not captured, which could affect the numbers for Chinese-affiliated authors.)
  • Looking only at highly cited articles, America's advantage holds. 44% of top-cited AI safety articles (defined as the 10% of articles in each publication year with the most citations) had American-affiliated authors, compared to 18% with Chinese-affiliated authors and 17% with European-affiliated authors. Compared to our last analysis, U.S. authors dropped a couple percentage points relative to European authors, but because our numbers are fuzzy estimates, the difference may not be meaningful.
  • Relative to their American counterparts, Chinese-affiliated authors tend to be less prevalent in AI safety research than in AI research overall or in other AI-related subfields (in all cases, looking only at research articles with English titles or abstracts). That said, China still claims the number two spot overall - and AI safety research is a small "slice of the pie" for both the United States and China.
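The "top-cited" measure used above (the 10% of articles in each publication year with the most citations) can be sketched in a few lines of pandas. This is an illustration with made-up data and assumed column names, not ETO's actual code:

```python
import pandas as pd

# Hypothetical article data; "year" and "citations" columns are assumptions.
articles = pd.DataFrame({
    "year":      [2022, 2022, 2022, 2022, 2023, 2023, 2023, 2023],
    "citations": [5, 40, 12, 90, 3, 55, 20, 8],
})

# Per-year citation threshold at the 90th percentile.
thresholds = articles.groupby("year")["citations"].quantile(0.9)

# Keep articles at or above their publication year's threshold.
top_cited = articles[
    articles["citations"] >= articles["year"].map(thresholds)
]
```

Computing the cutoff within each publication year, rather than over the whole corpus, keeps older articles (which have had more time to accumulate citations) from crowding out recent ones.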
👉
To view the next five leading countries in AI safety research and see how global authorship has evolved over time across all countries, visit the "Countries" section in the Research Almanac.

Top organizations

  • The biggest producers of AI safety-related articles are American institutions well known for strength in artificial intelligence: Google, Carnegie Mellon, and Stanford. Other than Tsinghua University in fifth place, the remaining top ten institutions are all American.
  • When only highly cited articles are counted, Carnegie Mellon rises to the top of the table, and Oxford and the Chinese Academy of Sciences break into the top ten.
👉
To view the top ten companies active in AI safety research, visit the "Patents and industry" section in the Research Almanac.

For more insight into the fast-growing field of AI safety, visit its subject page in the Research Almanac, search for "AI safety" using the Map of Science subject search, or explore country-level research production and collaboration trends in our Country Activity Tracker. As always, we're glad to help - visit our support hub to contact us, book live support with an ETO staff member, or access the latest documentation for our tools and data. 🤖
