Artificial Irreverence
Plus my year of social science wrapped (and next year pre-wrapped)
To be perfectly honest, I write this blog for purely selfish reasons: scratching intellectual itches, finding a healthy outlet for rants about polarization, a little dose of literacy to stave off the brainrot virus, etc. To my few regular readers and the many more who occasionally drop by to witness these incidents of writing: thank you for tuning in.
I’ve admittedly had a solidly mediocre run on Substack in 2025: a grand total of four posts followed by a total flatline since July. If you’ve been at all following what’s been going on with our nation’s federal statistical agencies and adjacent industries, you surely understand why our day jobs have been a bit more stressful than usual this year. I also took on a fun gig this fall teaching data science at Columbia’s new master’s program in political analytics.1 Safe to say blog-writing has been more of a side-side-quest.
Still, I’ve deeply enjoyed my few years of writing here on and off, for audiences small and very small (I’m not here to make a serious buck or get on the Ezra Klein show!). It has helped me to crawl out of the bog of academic social science and sit down to think about important, interesting, and utterly unpublishable social science topics: whether the 20th or 21st century sucked more, the lies told about pizza statistics, and illusions of group differences in political polls. Yes, I was incredibly wrong sometimes, such as when I said that Brad Lander ‘won the NYC mayoral media race’. But I’m happy with what I’ve put out there, and I’ve enjoyed writing in a format that’s somewhere between ‘hot take in a hundred characters’ and ‘decennially published treatise of generalized knowledge.’
Artificial Irreverence
This year, by way of my work at NORC, I’ve accumulated many thoughts on AI in general2 and some thoughts on AI and writing in particular. In short, AI for writing (for me) has been a wholly mixed, if not slightly bad, bag.3
To use some analogies, large language models — all recent reasoning and agentic advances considered — are great at playing scribe, translator, secretary, copyist, editor (all important for the interpretation and synthesis of knowledge given context), okay as librarian or research assistant (similar, but requiring complex synthesis and an understanding of subtext), and terrible at the function of writer (producing and articulating ideas in a way that commands human attention).
Attention is the term for the mechanism that makes LLMs work: it codifies the many linguistic relationships within your prompt, and between your prompt and everything else the LLM has been trained on. Human attention ironically works in the opposite way: it seeks not coherence with past patterns but novelty from them.
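For the curious, that "codifying relationships" part can be sketched in a few lines of numpy. This is a toy scaled dot-product attention head with made-up dimensions (4 tokens, 8-dimensional embeddings), not anything from a real model; the point is just that every output is a weighted blend of what's already in the context.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Score each query against each key, normalize the scores into
    # probabilities, then take a weighted average of the value rows.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = attention(Q, K, V)

# Each row of weights is a probability distribution over tokens:
# attention can only remix material already in the distribution.
assert np.allclose(w.sum(axis=1), 1.0)
```

Every output row is a convex combination of the value rows, which is one way to see why this machinery pulls toward existing patterns rather than away from them.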
Structurally, AI behaves like regression to the mean, shaving away the interesting, novel, sometimes weird, sometimes off-putting variances in human writing. As my colleague John Roman puts it, “AI is about finding the middle of the word distribution, and charting the most average path. AI never whomps, it tucks in.” If AI performs regression to the mean, AI slop steepens the regression slope: those bland outputs get reingested into future models’ pretraining data, making the next generation of chatbots (and content) even blander. To the LinkedIn ‘thought leaders’ parroting some version of “that’s not quitting, that’s resilience” in your posts … that’s not thought leadership, that’s literally KILLING the INTERNET! And if we keep it up, the long-run equilibrium of the Dead Internet is that nobody actually reads4 anything.
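To make the steepening-slope claim concrete, here's a toy simulation under my own crude assumption: each generation refits a normal distribution to the last generation's "text" and then keeps only the most average outputs, within one standard deviation of the mean. It's a cartoon, not a model of real pretraining, but the variance collapse is the point.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for the variety of human writing: one unit of "voice" variance.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

stds = []
for generation in range(5):
    stds.append(data.std())
    mu, sigma = data.mean(), data.std()
    # Each generation "writes" by resampling near what it read...
    data = rng.normal(mu, sigma, size=10_000)
    # ...and slop filtering keeps only the most average outputs.
    data = data[np.abs(data - mu) < sigma]

# The spread shrinks every generation: blander and blander.
assert all(a > b for a, b in zip(stds, stds[1:]))
```

Five generations in, most of the original variance is gone, which is the Dead Internet equilibrium in miniature.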
The painfully obvious AI slop I see in public and private life has pushed me in the direction of artificial irreverence. There are at least three fundamental risks to the writers themselves when they outsource the actual meat and potatoes of writing to AI.
First, we have evidence that AI usage accumulates what researchers call ‘cognitive debt’: an initial reduction of cognitive engagement on a particular task that can erode long-term learning ability. As it turns out, the struggling part of learning serves a purpose. Students, I’m looking at you. We of course don’t fully understand the long-term impacts of cognitive debt (we don’t really understand any long-term impacts of AI!), but the U.S. public already recognizes the deterioration of human abilities as the most likely societal risk of AI adoption. And as I said earlier, I mostly write for my own learning, so I (and I suspect most other writers) definitely agree!
Second, we have evidence that there is a reputational cost to being known as ‘guy who uses AI.’ According to a Harvard Business Review survey, 40% of U.S. employees report having received “AI workslop” (low-effort, AI-generated content) in the previous month. And there’s a penalty to getting caught sending it: 53% of workslop recipients report being annoyed, 38% confused, and 22% offended at the sender. This naturally invites a sort of inverse Turing test: if you could choose to receive written content—a blog post, news article, report, paper, or other distillation of knowledge—from an AI or a human, which would you pick? Yep, that’s what I thought (and three-quarters of all U.S. adults agree).5
Third, there is an underrated kind of brainrot, different from the first, that I would hypothesize occurs most among (a) people who are least exposed to good writing (more likely teens and younger adults) and (b) people who are least aware of AI technology and its artifacts (more likely older adults): the inability to actually discern even obvious AI-generated writing from human writing.
Where AI is useful is in small tweaks to highly scripted, routinized writing tasks: tone-smithing delicate parts of an email, optimizing the ‘action verbiage’ in resumés, condensing content down into slick slide decks, polishing up sections of academic papers (I still sometimes write those). Your god-gifted narrative voice wins you precisely zero points in these contexts. If used carefully, this is exactly where some of the time-saving gains of AI might come in.
And though AI is a poor replacement for thinking, it’s a decent aid for thinking out loud. Much like how writing shapes thought, or how talking to an inanimate rubber duck helps you debug, interacting with a gigantic real-time autocompletion machine can clarify what you already think. Less co-author, more interlocutor (as the website says, “what’s on your mind?”). It’s no different from talking things through with a slightly clueless, but enthusiastic and well-meaning friend when you’re stuck.
My Year Wrapped
Having finished my main spiel, you are fully welcome to close out this tab. Who am I really (and for that matter, who are you) when Charli XCX is right there? But if you do care to know, outside of Substack, I’ve had a pretty good year. Some things that might interest some of you:
I’ve spent a lot of time thinking about what generative AI methodologically means for survey research: what problems it can solve, what problems it introduces, how to design evaluations to identify both, and how to correctly use the right tools. Some of that work is soon to be published, and I’ve spoken about it here, here, here, and here.
Some of my academic work on other topics has finally seen the light of publication day including: the media effects of gen AI political deepfakes, the politicization of tobacco ads, and how a new question format can help reduce biases in event polling.
I finished guest-editing a new collection on politics and elections at Nature Scientific Data: 32 open-access papers spanning parliamentary speeches, voter attitudes, electoral geography, media datasets, legislative agendas, and elections across roughly a dozen countries. Many of them use LLMs in careful, thoughtful, and genuinely interesting ways. The best part is that you can download each and every dataset for free.
And yep, I got married.6
A basket of things I’ve been reading as we cross over into the late 2020s (read it and weep):
Subliminal learning, weird generalization, and reward hacks (from researchers at Anthropic, Truthful AI, Berkeley, and others): You can induce surprising, misaligned behaviors (e.g., saying ‘women belong in the kitchen’) in otherwise “aligned” LLMs that are fine-tuned on signals that are unrelated to the task itself (e.g., archaic bird names from 1838) or come from misaligned LLMs (e.g., random number sequences from a model trained to be misogynistic).
NYC politics 101 (from Sachi Takahashi-Rial): A fantastic Substack on NYC politics, including highly informative bangers like when and where NYC can’t complete public projects on time and a primer on property taxes.
Chatbot Voting Advice Applications inform but seldom sway young unaligned voters (Velez, Green, and Sevi): Political scientists show that voting advice chatbots can indeed improve knowledge of where the parties stand on personally relevant issues (by up to +13%!), improve general issue identification accuracy (~4%), but do not change partisan sentiment or vote choice. One for the “positive AI societal impacts” pile.
The quietest crime story of the year (John Roman): Keep the positive vibes flowing with this read from one of the leading criminologists in the country. Anyone who’s paying attention knows that crime has gone down in 2025, but is this a story of cities getting better or a rising national tide? Find out.
You can afford a trad life (Matt Yglesias, perennial Spiciest Headline award winner): The decline of the two-parent, one-income household is a reflection of lifestyle inflation, sectoral wage gains, and social progress, not economic deprivation; pairs nicely as a response to Michael Green’s equally spicy take that the American poverty line is $145k.
The Joe Rogan of the left, right, and center is just … Joe Rogan (team from NYU’s Center for Social Media and Politics): In 2024, much of the podcast manosphere, including Joe Rogan, on net actually leaned left in the positions taken by both hosts and guests, though with some notable variation. Even in our polarized times, ideology is a complicated and heterogeneous bundle!
What your news diet says about your politics (G. Elliott Morris): From one of America’s top data journalists, more evidence that Fox News viewers live in a singularly siloed political universe, and why that may not bode ill for Democrats in 2026.
We polled young Americans on antisemitism and Israel-Palestine. Here’s what we found (Yale Youth Poll): Harvard’s scrubbing me from their files for this endorsement betrayal, but you should still check out some fantastically thorough survey research from these good folks.
I’ll end with a grab bag of ideas sitting in my drafts folder that I never published on this year. Consider this next year ‘pre-wrapped’: maybe I’ll write about them (let me know if you think I should!), maybe I won’t, or maybe you should write about them (drop me a line if you do!). Here they are:
Has Fox News decreased big city tourism? This year NYC has actually seen increased tourism overall, but the threat of ICE may have deterred foreign tourists. Most NYC tourists are actually from blue states, though it’s unclear how that share has trended over time.
Cliff’s edge problems7 in public policy.
What are good and bad analogies for AI? Parrot? Mirror? Assistant? Tool?
Does crossing party lines enhance or reduce credibility? A question worth asking in a time of (1) high-profile intraparty break-ups and interparty love-fests and (2) low party attachment among young voters.
How dangerous is biking? (Data on this is hard to find!)
AI researchers need to understand statistics better.
Office politics as explained by organizational behavior. This one’s for the Herb Simon stans.
To another year of happy interruptions.
Shout out to the titans of the industry who served as my guest speakers: Damon Roberts, Kiegan Rice, Carl Bialik, and Jakob Hess. Please check out their great work!
More of this (good takes only) coming soon!
By contrast, I find AI to be extremely helpful in playing this role of a junior developer, analyst, or data scientist: I wrote up a cheeky, now seriously outdated post on public attitudes towards AI using AI to conduct nearly all of the data analysis. AI is an amazing code collaborator, though I’m cautious that it may not be a time-saver for all tasks. But for serious, “voice-y” writing, I have found it to be a totally mixed bag.
Video, of course, is a different story given its unique and automate-able quality to hack attention at the tails of the Internet-going age distribution. This still might be a hyperbolic outcome and you should read Kevin Munger’s post on a different possible equilibrium.
The corollary to this is that your preference is driven partly by your prior relative trust in AI vs. humans, which can be contextual and dependent on the skill level of said humans, and probably partly by taste (I’m sure there are hundreds of Silicon Valley executives who no longer and will never again read anything written by humans).
On that note, thanks to my wife (Borat voice) for editing this post.