We’re all going through it right now — an existential crisis spurred on by the blistering rise of generative artificial intelligence. Or perhaps, if the abundance of “AI stages of grief” posts in my social feeds is any indication, we’re simply mourning a recently bygone era when the only competition for humans was…other humans. The self-assured, heavily filtered, and highly curated “look at me” influencer culture that’s dominated the past decade is now giving way to an uncertain, insecurity-riddled era of “What about me?”
We’re all desperate to make sense of the dizzying pace of change ushered in by ChatGPT’s linguistic prowess, MidJourney’s spellbinding creative efficiencies, or Bing’s newfound search “personality.” But inertia will only serve to fossilize your industry, your skillset, and your perspective.
This techno-paranoia is nothing new; we’ve been through all of this before. It’s merely the start of yet another cycle of technological adoption. While there are those who — for engineering, enthusiast, or market reasons — are cheering on AI’s development and use, others are taking more of a wait-and-see approach.
Generative AI, however, is not a fad you can simply observe from the sidelines, nor is it a passing, ill-conceived trend in search of a purpose like the nonexistent “metaverse.” Much like the hardware and services we depend on so much today — your iPhones, your Google searches, your Facebook accounts, your Twitch streams, your TikTok feeds — generative AI will shift from harbinger of the end times to an integral part of the fabric of modern life. And, in doing so, it will create jobs, entertainment, news, connections, and most importantly, culture.
Still, knowing that we’ll come out on the other side of this AI evolution stronger and more emboldened does nothing to quell the present pervasive panic. And that’s likely because the speed of generative AI’s advancement will far surpass anything we’ve lived through before. Whereas Moore’s Law once held us to a relatively comfortable two-year cycle of technological advancement, AI obeys no such cadence. According to research from venture capital firm Sequoia Capital, the computing power used to train AI models “increased by six orders of magnitude” over a five-year period.
If you think you feel unmoored now by all of generative AI’s advances — text to video, text to 3D, text to music, etc. — just wait until the end of the decade. During a recent earnings call, NVIDIA CEO Jensen Huang predicted that the company, which currently manufactures the GPUs that help power ChatGPT, will create innovations in chipsets, algorithms, and operating systems that could “accelerate AI by another million times” within ten years.
Don’t worry — you’re not alone in failing to imagine what that superpowered-AI world could even look like. What you can clearly grasp right now, however, is that the creation, use, and sale of generative AI will not belong solely to the tech sector. Huang’s vision of our near-future touches upon a pivot he believes nearly all businesses will make to becoming “AI factories,” essentially producing and selling AIs in addition to physical products.
Much like the proliferation of apps on our smartphones, so too will there be an AI for everything. Have a great idea for an app, but no clue how to build it? That’s already no longer an issue. Need a storyboard for your stellar ideas or to illustrate your script, but can’t draw? Again, that’s now an old problem. With this burgeoning AI toolset in our literal hands, we’ll no longer need to dwell on our deficiencies, on the skills we lack and the resulting physical, professional, and emotional constraints. Instead, we’ll be able to focus on executing our dreams with a symphony of AI assistance.
Already, AI-generated manuscripts are growing in popularity: Reuters recently reported that Amazon’s self-publishing Kindle Direct platform now boasts more than 200 books listing ChatGPT as an author. These AI-written manuscripts are flooding the inboxes of traditional publishers, too, prompting the sci-fi and fantasy magazine Clarkesworld to close submissions indefinitely, pending a review of its processes.
While the rosy DIY-via-AI era will surely come to supercharge us all, it’s still a little ways off. The stark reality is that generative AIs like ChatGPT, the infamous large language model (LLM) powering Microsoft’s new Bing, are not infallible. Rather, their output is a patchwork of convincing, human-like prose that can be riddled with inaccuracies. This is to be expected, as these LLMs are still learning and are essentially in the beta phase of testing, albeit a very public one.
Knowing that this is all a large-scale experiment and bearing in mind generative AI’s present limits — i.e., accuracy, transparency, and legality — we can make use of this tech responsibly right now. We can learn from it, adapt it to our needs, and develop guidelines for the future. Ignore those parameters and you might find yourself in a situation like CNET just did. The tech site faced backlash over its stealthy rollout of generative AI-authored articles. CNET’s finance how-tos, which began publishing in November 2022 and were engineered to win the site search-ranking dominance, backfired because they were full of errors. The site also failed to clearly disclose its use of generative AI to readers, garnering it further disdain.
More recently, generative AI’s hiccups have made it onto the front page of The New York Times. Microsoft’s superpowered Bing earned sensational negative press for the search engine’s off-the-rails, Fatal Attraction-esque chat with tech writer Kevin Roose. The mishap only fed the flames of the public’s paranoia around the rise of generative AI.
Need more proof that generative AI is still very much a work in progress, one that requires constant public input and tweaking? Consider this 2022 study from Cornell University, which pegged the reliability of the best LLMs at just under 60 percent — a wide margin of error. So as plausible as results from co-piloted Bing, ChatGPT, or YouChat may seem, these systems still (and potentially may always) require heavy fact-checking; they require a human element. They breed efficiency, but they don’t displace us. It’s why at least one university professor is having their students correct historical essays written by ChatGPT, as noted by Reddit user SunRev.
It’s apparent that Big Tech, too, knows generative AI isn’t quite ready for primetime. It’s why Google, the current ruler of search, had held off on deploying Bard, the company’s Bing+ChatGPT rival. We only know about Bard now because Microsoft’s flurry of buzzy headlines forced Google’s hand. And when it did act, well, some Googlers labeled Bard’s announcement as “rushed” and “botched.”
If ever there were a lesson to be learned from the whiplash-inducing ascendance of social media, it’s that the “Move fast and break things” motto needs to be modified for this new era of generative AI experimentation. Now it’s more like: “Move fast and break things… but not public trust.”
So far, Microsoft and OpenAI have successfully managed to wrestle the narrative and its associated spotlight onto their corner of the Big Tech pie. The pair have already cemented the public’s association of generative AI with their respective companies, which was essentially step one of the battle. Now, if OpenAI is to be believed, comes the phase of caution and considered learning. In fact, OpenAI revealed this new, more thoughtful approach to ChatGPT’s development in a blog post, saying that the company plans to “carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one.”
Except we all know that won’t be the case. Nothing that’s transpired in the past three months points to any slowdown in public-facing generative AI development. If anything, expect the announcements and innovations to only ramp up, with new applications and new entrants to the space.
Make no mistake — the coming AI sea change will surely upend our previous modes of living, of working, and even of socializing. But we’ve still got time to plot out our place in that reshaped reality; to mold it in our favor. So it’s worth taking advantage of this transitional period to shift our internal cross-examination. The question we should be asking ourselves about generative AI now is not “What about me?” Instead we should be wondering:
“How do I make this work for me?”