
Should ChatGPT make me feel better or worse about failing out of Harvard?

AI did the reading.

Zach Seward

Maya Bodnick, a first-year student at Harvard College, asked her professors and teaching assistants to grade papers generated by OpenAI's cutting-edge GPT-4 model. She used real essay prompts from real classes—e.g., "What has caused the many presidential crises in Latin America in recent decades (5-7 pages)?"—and, to guard against anti-AI bias, told the graders that some of the papers were written by her, when in fact all of them were generated by GPT-4.

How'd the bot do? Bodnick reports:

Not only can ChatGPT-4 pass a typical social science and humanities-focused freshman year at Harvard, but it can get pretty good grades. As shown in the report card above, ChatGPT-4 got mostly As and Bs and one C; these grades averaged out to a 3.34 GPA.

Other studies have already shown that GPT-4 can ace AP Art History, score a 1410 on the SATs, and pass the Bar exam required of practicing lawyers. But I took a particular interest in this new experiment because, two decades ago, I failed out of Harvard in my junior year with a GPA well below 3.34.

To be clear, it wasn't the quality of my essays that got me kicked out so much as the assignments I didn't do at all. I disliked Harvard's rigid conception of the liberal arts and struggled to care about the courses I was required to take, preferring small classes in departments like Black studies and creative writing. I poured most of my energy into the school newspaper, which helped jumpstart my career once all those skipped lectures finally caught up with me.

Now we know that a machine-learning model could probably get into Harvard (let's assume its parents are alumni and/or major donors) and definitely graduate (now that you no longer have to pass a swimming test) with outstanding grades in subjects like science, history, and economics and a few gentleman's Cs in English. It could then go into finance or consulting and make a six-figure salary like 42% of Harvard's class of 2023 who chose to enter the workforce after graduation.

So how should I feel about this?

I have always felt bad about wasting my parents' tuition money, although there's a strong argument to be made that the value of a Harvard education is not the degree but the social currency and connections to an elitist network. Indeed, all of the job offers I received after failing out were from Harvard alumni. Years later, an editor at the New York Times offered me a job there after an interview that consisted almost entirely of discussing our respective experiences in Cambridge. I probably also benefited from "Harvard dropout" becoming a kind of cultural flex thanks to famous examples like Mark Zuckerberg, who was a year ahead of me.

I also wish I had smoked a little less weed in those years and stomached the required classes (or gone to Brown), because I would have learned a lot more at a prime stage in life for absorbing knowledge. Harvard has outstanding faculty, and though I bristled at the format, you could do a lot worse than just sitting back and listening to Luke Menand or Evelynn Hammonds think out loud. AI can transcribe those lectures about 1960s American culture or Black female sexuality, but it can't apply what it heard to completely different contexts later in life.

But what I mostly feel about "ChatGPT goes to Harvard" is vindicated—not for failing out, which was an unnecessary flourish, but for my youthful disregard and borderline contempt toward Harvard, which made it so easy to leave the place behind without feeling like a failure. It's commonly said that elite American higher education is about teaching you how to think for yourself, but what I found on campus was a student body eager to conform, to align themselves with a proven legacy, to get things right. The place is conservative to its core.

The vibe was not unlike how generative pretrained transformer (GPT) models cast about for the highest likelihood of success. Harvard students can obviously write a cogent response to a prompt like, "Pick a modern president and identify his three greatest successes and three greatest failures (6-8 pages)," but all that really proves is they did the reading. Well, GPT-4 did all the reading.

Harvard teaches obedience to a traditional model of education, which includes how to write a 6-8 page essay that should really just be a blog post. That this can also be accomplished by a piece of software shouldn't scare or impress us, but make us question what the hell Harvard has been seeking to accomplish with all those assignments in the first place. Should we value the most correct and polished response or the most interesting answer flush with novel insight?

It was no surprise to me that GPT-4 struggled the most in expository writing, a first-year Harvard class focused on the art of constructing an argument (and, not incidentally, taught by professional writers rather than academics). "This essay’s claims are consistently large and unclear," the teacher wrote about GPT-4's close reading of Middlemarch, "in part because they are unmoored from analysis and in part because the essay’s key terms are not clearly defined." Expos 20, taught by Jane Rosenzweig, was one of my favorite classes in college. I did all the work.

Reflecting on her experiment, Bodnick concludes that rampant cheating with AI will expose the liberal arts as a poor form of education for the next generation of elites, who should instead be trained for new types of AI-proof jobs. She's probably right about the cheating. I hope she's wrong about the liberal arts. Maybe, instead, AI will force universities to teach thinking and not conforming.