Saturday, April 26, 2025

Reversing the Fossilization of Computer Science Conferences – Communications of the ACM



Computer science research famously distinguishes itself from other fields by the prevalent role of conferences as a publication venue rather than just a meeting opportunity. For two decades, academics have been discussing this phenomenon, usually to lament it. Top conferences have tweaked their selection mechanisms by introducing such features as multi-cycle reviews and journal-first publication, which have not fundamentally altered the picture. For better or worse, conferences remain the first choice for first publication of new research.

Why not, after all? What counts is not the medium, but whether the publication culture fosters innovation. Conferences as they exist do not entirely succeed in that role. People attending the main conferences in each subfield of the discipline increasingly complain of the banality of many contributions (theirs excluded, of course). A number of important papers of recent years were not published in traditional academic conferences (such as those organized by ACM or IEEE or with Springer proceedings), but simply released; two examples are the original Bitcoin paper by Nakamoto (which many people have observed would never have passed refereeing in an academic conference on distributed systems) and the paper “Attention Is All You Need,” which introduced the “transformer” technique at the core of the current LLM boom.

What threatens to make conferences irrelevant is a specific case of the general phenomenon of bureaucratization of science. Some of the bureaucratization process is inevitable: research no longer involves a few thousand elite members in a dozen countries (as it did before the mid-1900s), but is a global academic and industry business drawing in enormous amounts of money and millions of players for whom a publication is not just an opportunity to share their latest results, but a career step. This context does not, however, justify what computer science conferences have become.

In principle, a scientific conference is a meeting for reporting and discussing innovative ideas and achievements. That lofty goal remains, but increasingly comes second to the business role: the major conference in a subfield is a yearly exam and résumé-building exercise for its members. Having a paper accepted is a coveted brownie point for your forthcoming applications and promotions. While the two goals—scientific and careerist—are not entirely incompatible, the dominance of the second one in modern conferences gnaws at their very nature. As a simple example, consider a paper that introduces a new concept, but does not completely work out its implications and has a number of imperfections. In the careerist view, it is normal to reject it as not ready for full endorsement. In the scientific view, the question for the program committee (PC) becomes: is the idea important enough to warrant publication even if it still has rough edges? The answer may well be yes.

The problem with the careerist approach is that when conferences primarily serve as an annual qualifying exam, they accord ever-increasing importance, among selection criteria, to purely formal rules. Each of the major conferences in computer science and software engineering has its own little subculture, meaning that at any given time a paper with any chance of acceptance must conform, down to an extreme level of detail, to a very rigid preconception of what a suitable paper looks like for that community.

Some of the consequences have crossed the border into the ridiculous. The 2023 Call for Papers for OOPSLA/SPLASH (a major conference on programming and systems) has a litany of rules and restrictions without any connection to research quality. They include this requirement on bibliographic references: “Papers are expected to use author-year citations. Author-year citations may be used as either a noun phrase, such as ‘The lambda calculus was originally conceived by Church [1932]’, or a parenthetic phrase, such as ‘The lambda calculus [Church 1932] was intended as a foundation for mathematics’.” What in the world does such a recommendation have to do with programming theory? Obviously someone in the committee does not like sentences such as “[Church 32] introduced the lambda calculus” even though they are perfectly clear and normal. Everyone is entitled to their pet peeves, but under what privilege can one impose them on an entire community? I do not like split infinitives and dangling participles; shall I make them causes for desk-rejection the next time I am PC chair? Of course not. Being a PC chair is not an exercise in power. It should be a humbling opportunity to serve the community by helping to select the most promising and sound innovations of the year.
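For readers unfamiliar with the distinction the CFP draws, author-year citation packages such as LaTeX’s natbib express it as two different commands. A minimal sketch follows; the bibliography key `church1932` is hypothetical, and the exact bracket style depends on the package options and bibliography style in use:

```latex
\documentclass{article}
% The "square" option selects square brackets for citations,
% matching the CFP's examples.
\usepackage[square]{natbib}
\begin{document}
% Noun-phrase form: renders roughly as "Church [1932]"
The lambda calculus was originally conceived by \citet{church1932}.

% Parenthetic form: renders roughly as "[Church 1932]"
The lambda calculus \citep{church1932} was intended as a
foundation for mathematics.

\bibliographystyle{plainnat}
\bibliography{refs}
\end{document}
```

The point, of course, is that both forms are mechanical variants of the same citation; mandating one over the other says nothing about the quality of the research being cited.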

The “write bibliographic references my way or else!” imposition is only a small if despicable regulation, but it is reflective of much more serious ones. Refereeing for these conferences is often of the “gotcha” type, looking for tiny deviations from the accepted standard of the moment. Many of these top-level, historically prestigious conferences produce the impression that “all papers look the same.” Authors see a winning formula, including a fixed structure, and apply it for fear of rejection. Some good and even outstanding papers do manage to get through, of course, but others do not because they fail to conform. Good workhorse-type papers that adhere to the canons of the moment and have all details completely ironed out, with or without a real potential impact, have a better chance.

One may say, “What’s wrong with a standard?” Well, innovation proceeds by departures from the standard. Recently, I have been re-reading some milestone papers in CS, SE, and logic, and was struck by how unlike one another they are. True, “They would not be accepted today” is not an interesting argument, since the state of the art evolves; but one cannot help thinking that if the standards at the time had been as focused on form over substance as they are today, some of these papers would have been rejected back then. (For a hilarious parody of petty refereeing, see Simone Santini’s 2005 IEEE Computer paper “We are sorry to inform you…”, of which I found a copy here.)

The problem of the conference-as-yearly-exam is that everyone submits the year’s best work to it, resulting in a plethora of papers that are at the very least decent contributions from competent professionals. Since top conferences boast of their high rejection rates, typically 80% to 90%, referees must look for reasons to reject the papers in their pile rather than arguments for accepting them. The workhorse kind of paper has all the details right even if its message is not that exciting; the brilliant but tentative innovation sometimes does not. Too often papers get rejected because they do not check one of the obligatory boxes in the standard structure. Failure to cite a relevant piece of work is one of the most common cases, but many more faults can be found if one is looking for departures from the requisite pattern.

An interesting case in software engineering is dismissal for lack of “evaluation.” It would be, of course, ridiculous to deny the benefits that the emphasis on systematic empirical measurement has brought to software engineering in the last three decades. But it has become difficult today to publish conceptual work not yet backed by systematic quantitative studies. Such contributions do have a place, as theoretical physics does alongside experimental science. The problem here is not the evolution of the field, but the risk of enforcing a single dogma, never a good recipe for progress.

One characteristic of the current academic culture is that it treats being a PC member or (better) chair as a valuable career achievement—another “brownie point,” for “service.” Often, as a result, the PC is staffed by junior, ambitious academics intent on filling their résumés. Note that it does not matter for these résumés whether the person did a good or bad job as a referee! Participating in such activities should be an honor in itself, and should not carry any career reward at all. The current practice is one of the sources of the problems of conferences, amplified by the anonymity of refereeing. We end up having experts’ work adjudicated by beginners. Some of the more exotic requirements cited above probably follow from inexperience too: a senior scientist is unlikely to tell others what syntactical roles are acceptable for bibliographic references. Such arrogance is more typical of beginners. The phenomenon of conference positions as résumé-building steps for aspiring novices seems to be a modern one, a result of the careerization of conferences. I very much doubt that the submissions of Einstein, Curie, Planck, and such to the Solvay conferences were assessed by postdocs. Top conferences should be the responsibility of the established leaders in the field.

Academic conferences will not go away any time soon, or lose their status as the publication target of choice, so what can we do to improve them? Here are a few modest ideas. One is a matter of mindset: PC chairs should instill in their PCs a culture of looking for sound innovations and accepting deficiencies that do not affect the soundness and potential value of the work. They should remind the PC members that they are not a promotion committee, but are there to foster the progress of their discipline. They should simplify the Calls for Papers and remove absurd bureaucratic rules: once you have specified the page limit, the deadline, a URL for submission, and the theme of the conference, you should pretty much leave authors alone. The rules of research and publication ethics, sound science, and quality writing are there implicitly and will be applied in the reviews. Then let the best minds compete.

Another important change is to stop attaching any career value to conference-management activities and, as noted above, to put the final responsibility in the hands of the most competent experts in the field—not just as the “conference chair,” who will secure a conference venue and pronounce the opening welcome, but as the actual program chair, in charge of managing the selection process and ensuring that the conference meets its main goal, which is not to add one line to a few people’s Google Scholar profiles, but to advance the state of human knowledge.

The spectacular revival of conferences after the Covid hiatus has shown how much researchers want and need to meet. By focusing on substance over form, conferences can return to their basic mission.

Bertrand Meyer is a professor and Provost at the Constructor Institute (Schaffhausen, Switzerland) and chief technology officer of Eiffel Software (Goleta, CA).
