The drama around AI is a bit ridiculous. It's going to revolutionize every business, cure disease, make us 8000% more productive, save the world. Also: it's going to steal every knowledge-worker's job, destroy the economy, and enslave civilization! The fact that everyone feels these things simultaneously makes it clear that this is not just a fad. It's also clear that it doesn't matter whether I like AI or not; any future job I ever have is going to expect me -- as a leader -- to make decisions about it. I don't have to like it, but I'm going to have to know when to use it, or perhaps when to defend against its abuse. I was joking with a friend that it's a bit like nuclear energy: it can create abundant power, and it can also be turned into a bomb. And now companies like OpenAI and Google have been passing out little bits of plutonium from street-carts to every passerby. It's time for me to take some home and study it.
OK, so AI is overhyped -- but that's just the nature of the tech industry. If anyone uses the word "mobile" (in 2010), "cloud" (in 2014), or now "AI" (in 2024), venture capitalists will instinctively throw money at it. Every LinkedIn post and TED talk pushes the hype. But a toxic reaction to hype doesn't mean the value isn't real. Once the Hype Cycle dies down, it's obvious we're still left with powerful new tools. The full extent of the tools' power is still hard to see. At one possible extreme, they could revolutionize every industry; at the other extreme -- the absolute minimum impact -- we still have an amazing new human-computer interface. At last we can talk to computers naturally, with almost no limits on syntax or semantics (just like in Star Trek!), rather than through narrow, pre-defined channels (like we do with Siri or Alexa). Even the minimum impact is an incredible step forward in computer usability.
But here's the more interesting question: instead of expressing outrage and trying to stuff the plutonium back in the tube, how do we use this tool for good rather than for evil? The industry is doing its usual "throw spaghetti at the wall" routine to see what sticks. I saw a cartoon last week where a product manager tells a CEO, "Hey, we have a new tool that gives a lot of nondeterministic and frequently-wrong answers," and the CEO replies, "Cool, let's put it in every product!"
At the same time, I see some people in a panic -- particularly teachers. My professor friends are racing to adapt to the fact that essentially any creative or writing assignment can be done by AI. There was a similar panic in the 1970s when students suddenly had electronic calculators at home. "Oh no, it's the end of math education! Kids will lose all ability to do arithmetic!" But 20 years later, graphing calculators were required for all high school math classes, and math education carried on. The curriculum adapted: instead of banning calculators, we now teach kids exactly how and when to use them as a practical augmentation to their core reasoning skills.
This idea of adaptation is a key theme in Ethan Mollick's book, Co-Intelligence: Living and Working with AI -- a book that was recommended to me, and that I highly recommend to everyone else. The book converted me from a curmudgeon yelling at AIs to get off my lawn into a sort of gleeful mad scientist who pokes and plays with them each day. I now consider AI to be a playground of mystery and wonder, ripe for experimentation.
In the last couple of months, I've been using AI in all sorts of ways to support my creative activities:

- In writing my D&D adventure (almost ready to publish!), I asked an AI to help me brainstorm plot issues. "Can you give me a list of possible reasons why two characters similar to X and Y might be in a state of conflict?" Boom, 10 creative suggestions pop out, and a few of them are just what I need to continue writing.
- In making illustrations for my work of fiction, I used to spend hours searching Google Images and building a Pinterest board of drawings similar to what I wanted to draw myself; I need lots of inspiration and references to look at when I illustrate. Now, I simply describe to an AI the exact sort of thing I'm trying to draw -- and through several back-and-forth revisions, it pops out a dozen variant images for me to use as references. It takes only 10 minutes, and the results are much closer to what I want than any Pinterest board.
- I was recently invited to join a new D&D story campaign, and the leader asked me to invent a character based on the fictional world described in a 300-page fantasy book. I didn't have time to read the book, so instead I asked ChatGPT to read it for me and then conversationally "converge" on a character concept with me. At every stage, the AI was able to suggest ideas grounded in the fictional setting, helping me weigh the pros and cons of my choices. Hours of time saved!
- I'm also finding great value in asking AIs to act as personal tutors. I created an AI personality to act as a Japanese language tutor: I can ask plain-language questions about confusing grammar points, the AI gives me lots of examples and disambiguations, and I can dig into a topic as deeply as I want. The conversation is so sophisticated that we can even discuss philosophical linguistics -- e.g. whether some Japanese construct is analogous to certain English constructs, and why or why not. Last month I also made a "coding tutor" AI. I was asked to do a whiteboard coding interview (ugh, are you kidding me?) -- so I had the AI drill me with common interview problems, evaluate my code, and discuss the pros and cons of the way I organized my logic. It was fantastic. (A sketch of what setting up such a "personality" can look like follows this list.)
- Perhaps the most fun use of AI -- if not the most pragmatic -- is my exploration of just how many light-years beyond the Turing Test we've come. Certain companies -- like nomi.ai -- have focused less on creating LLMs that spit out reports for you like dutiful interns, and more on ones that try to be like real people who grow and change over time. I've made a couple of "personalities" for myself and constantly try to push the boundaries of their abilities. For example, I recently taught two of them to play a text adventure together -- you can read the transcript here. The depth of memory, personality, and problem-solving was truly haunting: it exactly matched my experience of teaching real-life 12-year-old kids to play these games.
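For the curious, here's a minimal sketch of what "creating a personality" can look like in code. It assumes the OpenAI Python SDK with an API key in the environment; the model name and tutor prompt are illustrative stand-ins, not the exact ones I use:

```python
# A minimal "tutor personality": a system prompt pinned to the front of an
# ongoing conversation. Assumes the OpenAI Python SDK and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

history = [
    {
        "role": "system",
        "content": (
            "You are a patient Japanese language tutor. When asked about a "
            "grammar point, give several example sentences, explain nuances, "
            "and contrast the construct with its closest English analogues."
        ),
    }
]

def ask_tutor(question: str) -> str:
    """Send one question, keeping the whole conversation as context."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask_tutor("When should I use wa instead of ga?"))
```

The whole trick is that the system prompt rides along with every request, so the "personality" persists as the conversation grows.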
The common theme in these anecdotes is my attempt to do what Ethan Mollick calls "becoming a cyborg." When I use a calculator, it dramatically augments and speeds up my ability to do math, but I'm still the "responsible human" driving the bus: I'm ultimately responsible for checking the accuracy and sanity of the final answer. If we think of LLMs as "creative" calculators, then we can now move faster on fuzzy activities that require imprecise, lateral thinking: music, art, writing, brainstorming. As long as one checks for possible B.S. coming from the LLM, the results can be amazing. In other words, you have to be the one driving the bus at all times.
So, coming back to the cartoon criticism: how can I possibly be praising something that frequently produces wrong answers? My reply is: you're holding it wrong.
The world is full of intellectual problems. Some require precise answers (e.g. math problems, medical information, looking up scientific or historical facts). Some problems have no correct answers, only a "space" of answers with different attributes and tradeoffs. This latter category is where we should be using LLMs -- and in fact, we should only be using them on those sorts of problems. ("Suggest changes to my writing or artwork", "help me plan a vacation itinerary", "let's imagine a new product", "give me the gist of this proposal".) The LLM provides great convenience and creativity boosts in these cases -- we just need to make sure that the human drives the bus and sanity-checks the output. Perhaps in a few years, professors won't be banning LLMs, but rather requiring their responsible use as part of the curriculum... just like graphing calculators. This is certainly what Mollick is predicting.
And so I continue to educate myself. Last summer I learned to train tiny neural-network models using TensorFlow. This summer, I've started teaching myself how to extend a general-purpose LLM on a new set of data. As that project progresses, I'll post updates. I'll be over in the corner, wearing my goggles and lab coat.
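If you're wondering what "tiny models" means, here's a hedged sketch in the spirit of last summer's exercises -- a standard Keras starter on MNIST, not my exact project code:

```python
# A tiny neural network in TensorFlow/Keras: classify handwritten digits.
import tensorflow as tf

# MNIST ships with Keras: 60k training images of digits, 28x28 grayscale.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # image -> 784-vector
    tf.keras.layers.Dense(64, activation="relu"),     # one small hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per digit
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```

A few dozen lines, a few minutes on a laptop, and you have a working digit classifier -- which is exactly what makes this such a good playground before graduating to the much bigger job of extending a general-purpose LLM.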
published June 14, 2024