It’s February 2025 and this, like some of my other recent posts (here, here), could be a snapshot of a landscape being transformed by the roaring river of artificial intelligence. But it isn’t. Rather, I’ll describe my puzzlement that some parts of the landscape that I’m close to aren’t changing much, and speculate about the causes of this (probably temporary) stability.
For context: Over the past year or two, large language models (LLMs) and other neural networks, which I’ll just refer to as artificial intelligence / AI for simplicity, have demonstrated abilities at least as strong as those of the average human for tasks related to writing, reading comprehension, and college-level test taking in many fields. Recently, to pick two among almost innumerable examples of advances: (i) OpenAI’s o3 model answered graduate-level biology, physics, and chemistry questions with 87% accuracy and reached average-human level on a general visual reasoning assessment (Dec. 2024; article). (ii) An experiment involving a university introductory physics course reported large learning improvements for students using an “AI tutor” compared to other teaching methods (magazine article, paper). Much of what I read online (like the Marginal Revolution blog) is, understandably, preoccupied with AI developments. Given all this, you’d think that a sizable fraction of the conversations, meetings, and seminars I’m part of, whether related to research, teaching, or university administration, would relate to or at least mention artificial intelligence. The actual fraction, however, is close to zero.
Why? I can only speculate about the reasons. Some, if true, are more consequential than others for institutions like universities. These potential reasons are not mutually exclusive; in fact, several of them overlap.
(1) People are unaware of or unfamiliar with the current state of AI.
This may seem absurd, like being unaware of the Trump presidency, but even within the past few months I’ve encountered natural sciences faculty who have essentially never used LLMs like Claude or ChatGPT, either not at all or perhaps only tinkering for a few minutes. One can easily consume media in which AI is barely mentioned or only superficially discussed. How large is this category of unaware people? I don’t know.
(2) People lack the time to become familiar enough with AI to meaningfully discuss it.
This overlaps with (1) but is worth noting separately. The field is moving at such a rapid pace that it’s hard to keep up with what exists and what can and can’t be done. I struggle with this myself; my “to read” / “to try” list keeps getting longer. It’s especially hard to keep up if one wants a deep understanding of how AI works.
(3) A lack of agency related to trends in AI.
Implementations of AI will come, but, like the weather, there’s nothing you or I can do to influence how it all plays out. I don’t agree with this perspective, but given the rapid rate of change it makes some sense.
(4) AI implementation will be harder in practice than people think, and so isn’t worth prematurely worrying about.
There’s a difference between generating rhymes for entertainment and writing a rigorous research article, for example, and the latter is much harder than the former. I think this view has some validity for the experimental sciences — no LLM could have predicted, for example, that gut bacteria can stimulate gut contractions. We had to do the experiment, be surprised, and spend a few years figuring out what’s going on. I’m less sure this is valid for other areas, including a lot of “routine” theoretical or computational work.
(5) AI’s capabilities aren’t yet strong enough to care about, and may never be.
In other words, AI is impressive but not human-expert-equivalent, and that’s the relevant benchmark. To give an example: My main use of AI currently is offloading boring programming tasks, like writing Python code to extract data from Excel sheets or to do tedious manipulations of dictionaries that I don’t care to write out. Claude Sonnet, for example, is fantastic at this. I sometimes give it more challenging tasks; it does very well, but not quite well enough to take me out of the loop with respect to debugging or re-designing. For that, it’s important that I’m a strong programmer myself. The AI tool, therefore, can be thought of as an assistant for what I’m normally doing rather than as a transformational change.
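To make concrete what I mean by a boring task, here’s a minimal sketch, with made-up file and column names, of the sort of Excel-to-dictionary chore I’d happily hand off to an LLM rather than write myself:

```python
# A sketch of a typical "boring" task; the file name and the column names
# ("sample_id", "trial", "value") are invented for illustration.
import pandas as pd

def excel_to_nested_dict(path="measurements.xlsx", sheet="Sheet1"):
    """Read an Excel sheet and group its rows into a nested dictionary,
    keyed first by sample ID and then by trial number."""
    df = pd.read_excel(path, sheet_name=sheet)
    nested = {}
    for _, row in df.iterrows():
        # e.g. nested["S01"][3] = 0.42
        nested.setdefault(row["sample_id"], {})[int(row["trial"])] = float(row["value"])
    return nested
```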
Again, the landscape is changing rapidly. Some would argue that this gulf will never be bridged, but I am skeptical. Just yesterday, I read an account of OpenAI’s o1 writing what’s claimed to be a decent economics article on its own: X link.
(6) Antipathy towards the concept of AI.
Many people are irrationally dismissive of AI because, it seems to me, of a sort of belief in vitalism: the idea that thought has some indescribable underlying mechanism that can’t be reproduced by assemblies of atoms and molecules other than those found in a human body. I suspect that this belief, together with an aversion to the inhuman, blinds people to the capabilities of AI.
(7) Implementing AI will be painful; we’d rather not discuss the issues involved.
Many of our institutional structures are rather precarious. For example, consider how we, in part, fund graduate students in the sciences in the US. First-year Ph.D. students in Physics are employed as teaching assistants, teaching part time (<50%) while pursuing their studies. This is also the case for about half the physics graduate students beyond the first year (at my institution), and for first-year Ph.D. students in other natural science departments. These positions are paid for by the university — i.e. by undergraduate student tuition fees and by taxpayer dollars. As noted at the start of the post, recent studies have shown that AI tutors are very effective — in the context of teaching introductory university physics probably at least as effective as graduate teaching assistants. Suppose all this is true, and our undergraduates would learn more, at less cost, with AI teaching assistants. Should we implement this? We’d then reduce the number of graduate students, the amount of research universities can do, the number of Ph.D.’s produced, etc. Whether these outcomes are good or bad is debatable, but the changes would certainly be dramatic. Massive change is hard to contemplate, and hard to confront!
(8) Everyone is discussing AI, but not when I’m around.
Self-explanatory.
I’d guess that a university, or other organization, that can deal with these factors before others do can pull ahead, preparing itself for the future. Unless it’s Reason 8. It’s not 8, is it? Guys? … guys? …
Today’s illustration
Cacti, not discussing AI.
— Raghuveer Parthasarathy. February 2, 2025

Comments

I’ll go with 2, 7, and 6, in that order. No time, worried about the fallout, and annoyance that this is even a thing I “should” be worrying about, as someone trying to teach people how to think on their own.
I have also been categorically advised not to even think about using AI for any type of human subjects research, even if anonymized. Plus, the times I have tried to use it to summarize open-ended survey results have not yielded as much value as simply skimming the results myself (at least with N ~ 200 per survey). So for #6, I do think there is some je ne sais quoi to human thought, at least as experienced by the thinker. When I don’t want to do the thinking and just want the results, #6 does not apply.
Question related to a podcast I recently listened to on this: How would you feel if your spouse wrote you a wonderful love letter that made you cry, but then later told you that AI had written it?
I will concur that 2, 7, and 6 are probably the most likely, though for me it’s more like 6, 2, 7, and 3. Antipathy among academics is strong, myself included…
Personally, I believe AI couldn’t make me cry (how could it be personal then?). If it did, I would want to know how much was AI-generated: did AI write the whole thing, or did it just refine writing you already had? How much effort did you put into this? Why did you write me something if you didn’t want to write it? I hope you don’t feel obligated to write something. Now my gf is reading me love letters, so I appreciate the prompt haha.