Learning to Live with AI

This is the personal blog of Ben Collins-Sussman.

The drama around AI is a bit ridiculous. It's going to revolutionize every business, cure disease, make us 8000% more productive, save the world. Also: it's going to steal every knowledge-worker's job, destroy the economy, and enslave civilization! The fact that everyone feels these things simultaneously makes it clear that it's not just a fad. It's also clear that it doesn't matter whether I like AI or not; any future job I ever have is going to expect me -- as a leader -- to make decisions about it. I don't have to like it, but I'm going to have to know when to use it, or perhaps when to defend against its abuse. I was joking with a friend that it's a bit like nuclear energy: it can create abundant power, and it can also be turned into a bomb. And now companies like OpenAI and Google have been passing out little bits of plutonium from street-carts to every passerby. It's time for me to take some home and study it.

OK, so AI is overhyped -- but that's just the nature of the tech industry. If anyone uses the word "mobile" (in 2010), "cloud" (in 2014), or now "AI" (in 2024), venture capitalists will instinctively throw money at it. Every LinkedIn post and TED talk pushes the hype. But just because one has a toxic reaction to hype doesn't mean the value isn't real. Once the Hype Cycle dies down, it's obvious we're still left with powerful new tools. How powerful these tools will ultimately be is still hard to see. At one extreme, they could revolutionize every industry; at the other extreme -- the absolute minimum impact -- we still have an amazing new human-computer interface. At last we can talk to computers naturally, with almost no limits on syntax or semantics (just like in Star Trek!), rather than speaking to them through narrow, pre-defined channels (like we do with Siri or Alexa). Even the minimum impact is an incredible step forward in computer usability.

But instead of expressing outrage and trying to stuff the plutonium back in the tube, the interesting question is how to use this tool for good rather than for evil. The industry is doing its usual "throw spaghetti at the wall" routine to see what sticks. I saw a cartoon last week where a product manager tells a CEO, "Hey, we have a new tool that gives a lot of nondeterministic and frequently-wrong answers", and the CEO replies, "Cool, let's put it in every product!".

At the same time, I see some people in a panic -- particularly teachers. My professor friends are racing to adapt to the fact that essentially any creative or writing assignment can now be done by AI. There was a similar panic in the 1970s when students suddenly had electronic calculators at home. "Oh no, it's the end of math education! Kids will lose all ability to do arithmetic!" But twenty years later, a graphing calculator was required for every high school math class, and math education carried on. The curriculum adapted: instead of banning calculators, we now teach kids exactly how and when to use them as a practical augmentation to their core reasoning skills.

This idea of adaptation is a key theme in Ethan Mollick's book, Co-Intelligence: Living and Working with AI -- a book that was recommended to me, and that I highly recommend to everyone else. It converted me from a curmudgeon yelling at AIs to get off my lawn into a sort of gleeful mad scientist who pokes and plays with them every day. I now consider AI a playground of mystery and wonder, ripe for experimentation.

In the last couple of months, I've been using AI in all sorts of ways to support my creative activities.

The common theme in all of these experiments is my attempt to do what Ethan Mollick calls "becoming a cyborg." When I use a calculator, it dramatically augments and speeds up my ability to do math, but I'm still the "responsible human" driving the bus: I'm ultimately responsible for checking the accuracy and sanity of the final answer. If we think of LLMs as "creative" calculators, then we can move faster on fuzzy activities that require imprecise, lateral thinking: music, art, writing, brainstorming. As long as one checks for possible B.S. coming from the LLM, the results can be amazing. In other words, you have to be the one driving the bus at all times.

So, coming back to the cartoon criticism: how can I possibly be praising something that frequently produces wrong answers? My reply is: you're holding it wrong.

The world is full of intellectual problems. Some require precise answers (e.g., math problems, medical information, looking up scientific or historical facts). Some have no single correct answer, only a "space" of answers with different attributes and tradeoffs. This latter category is where we should be using LLMs -- and in fact, we should only be using them on those sorts of problems. ("Suggest changes to my writing or artwork", "help me plan a vacation itinerary", "let's imagine a new product", "give me the gist of this proposal".) The LLM provides great convenience and creativity boosts in these cases -- we just need to make sure that the human drives the bus and sanity-checks the output. Perhaps in a few years, professors won't be banning LLMs, but rather requiring their responsible use as part of the curriculum... just like graphing calculators. This is certainly what Mollick is predicting.

And so I continue to educate myself. Last summer I learned to train tiny neural-network models using TensorFlow. This summer, I've started teaching myself how to extend a general-purpose LLM on a new set of data. As that project progresses, I'll post updates. Meanwhile, I'll be over in the corner, wearing my goggles and lab-coat.
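
For the curious, here's roughly what one of those toy training experiments looks like in TensorFlow -- a minimal sketch, not my actual code; the dataset and tiny model below are invented purely for illustration:

```python
# A minimal sketch of training a tiny neural network with TensorFlow/Keras.
# The data and architecture are invented for illustration, not the actual
# experiment described above.
import numpy as np
import tensorflow as tf

# Toy dataset: label each 2-D point by whether its coordinates sum past 1.0.
rng = np.random.default_rng(seed=42)
X = rng.random((1000, 2)).astype("float32")
y = (X.sum(axis=1) > 1.0).astype("float32")

# A tiny model: one hidden layer, one sigmoid output for binary classification.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

loss, accuracy = model.evaluate(X, y, verbose=0)
print(f"training accuracy: {accuracy:.2f}")
```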

published June 14, 2024