Friday, May 16, 2025

Is AI Security Work Best Done In Academia or Industry? Part 1 – Communications of the ACM

A question that has been asked in our circles for a few years is: which is the better place to do AI research, academia or industry?

An insightful Science article from 2023 by Neil Thompson and colleagues at MIT shed light on part of this question. Its title, “The Growing Influence of Industry in AI Research,” gives away the crux of its conclusion.

A second-level question being debated in our circles is where it is better to do machine learning (ML) security research: academia or industry? With respect to the broader AI question, an obvious, if unsatisfyingly evasive, answer is that the optimal option is to be in both camps, say as an academic who holds a visiting position at an industrial organization. Most of the leading AI organizations have Visiting Scientist positions, clothed in different garbs. Curiously, that optimal answer does not apply to the narrower question of ML security research.

In this post, I ponder this question and, spoiler alert, present both sides of the coin, for you to decide based on your individual circumstances. How is that for unsatisfyingly evasive!

The majority opinion on the broader question of AI research, and it is a large majority, is that the groundbreaking advances are coming from industry and will continue to come from there. For AI security research, though, the picture is less clear: for one, the question has not been discussed formally, and there are counterbalancing factors.

First, let us look at reasons why industry should be the place to do AI security work. There are three primary factors:

  1. Vast compute resources
  2. Vast amounts of data
  3. Large teams of well-paid, smart professionals

All three reasons are self-evident once one pauses to ponder them for a bit. Each of these arguments has been made before, in the context of the broader question of the advantages of doing AI research in industry.

Vast compute resources

The foundation models, these days mostly transformer-based, take seemingly infinite amounts of processing power to train. The cost runs into at least the tens of millions of dollars. And so no academic team even dares to attempt that; the hubris of Icarus leading to his literal downfall is etched into many of our consciousnesses. So what we in academia are left to do is fine-tune the pre-trained models (a million thanks to those corporations that have released their models) and engineer prompts.

Vast amounts of data

The deep learning era has ushered in the primacy of data over rules, or in academic arcana, data over control. As models have become larger, their appetite for data has only grown, to the point where it seems practically insatiable today. The trite playbook for the relentless march of AI seems to be: build a bigger model following the winning architecture of the times (CNNs, transformers, etc.) and feed its appetite for data. And out comes a model that seems just that little bit more intelligent.

The compensation structure, a.k.a. the Greenbacks

A Google DeepMind Research Scientist’s salary starts at around $150,000 and quickly jumps to the vicinity of $250,000 as one becomes a Senior Research Scientist. In ballpark numbers, smoothing over outliers due to generous PIs (Principal Investigators) or miserly PIs (or generous/miserly academic departments), a Ph.D. student at a top research university makes $25,000-$35,000 a year, and a post-doctoral scholar $60,000-$90,000 a year. So it would seem logical that unsentimental market forces would push the best and the brightest away from the academic treadmill and into the warm embrace of industrial organizations.

Each of these factors is true, in its place, and the combination of the three indeed makes for a powerful concoction for the big advances in AI coming from industrial organizations. However, the picture is not wholly one-sided, as breakthrough advances in AI models have been made time and again by academic teams (either solely, or academia-led teams with some industrial collaboration). This is because generating the kernel of a sublime, game-changing idea needs imagination or creativity or their ilk, and does not necessarily rely on any of the three factors above. Take two notable examples:

  • The backpropagation algorithm, the algorithm at the root of deep learning’s flowering, came from the University of Toronto (Geoffrey Hinton, co-recipient of the 2024 Nobel Prize in Physics).
  • Recurrent Neural Networks (RNNs) were conceptualized in a simpler form at Princeton (John Hopfield, co-recipient of the 2024 Nobel Prize in Physics) and then at UCSD (David Rumelhart).

Now consider the counterpoints to the three factors above, followed by one towering advantage that makes it attractive for AI security research to flourish in academic lands.

To be continued . . .

This post was originally published on Distant Whispers.

Saurabh would like to thank Rama Govindaraju of Nvidia for providing insightful comments on a draft of this article. The views in the article however are Saurabh’s own.

Saurabh Bagchi is a professor of Electrical and Computer Engineering and Computer Science at Purdue University, where he leads a university-wide center on resilience called CRISP. His research interests are in distributed systems and dependable computing, and he and his group have the most fun making and breaking large-scale usable software systems for the greater good.
