Two vowels have come to dominate our attention lately. Two vowels that can, quite literally, generate headlines, upend your business, enhance your creativity, bolster your productivity, and alter the status quo irrevocably. But what happens when some of the best and the brightest tech minds come together in Austin, Texas to discuss those two vowels at an event known for its consonants?
For anyone attending this year’s SXSW festival in search of AI enlightenment, the notion that we humans are the masters of our own AI destiny was a constant refrain across every major session. (Aaaand, exhale!) There was, however, one other major and somewhat sobering takeaway from our time in The Lone Star State:
No one really knows what lies ahead.
Sure, some of the more persuasive pundits, like affable futurist Amy Webb, may have gently eased us into a fleeting sense of false confidence with slick and entertaining presentations on the uncertain years ahead. And, in doing so, they may have restored our sense of control during what is clearly an out-of-control moment in human history. But the stark reality is we’re only going to figure out this generative AI-assisted world to come by bravely plodding forward into it with arms somewhat open and a futurist’s roadmap in our back pocket.
Whether you like, love, fear, reject, or disapprove of generative AI is of little consequence — it’s here to stay. You still need to actively use AI so we can collectively steer its development, reveal its strengths and weaknesses, discover its untapped potential, hold it accountable, and teach it to be an additive positive in our lives. The onus is on all of us to make its inevitable omnipresence palatable.
Think of it this way: If AI is ultimately a mirror for humanity, then it’s on us to reflect back our best selves. That means we need to provide the training (i.e., the corpus of the internet and our prompts) and the discipline (i.e., moderation and feedback) to best mold it.
With that guiding principle as our North Star in this increasingly automated world, let’s take a look at some of the other key AI revelations from SXSW 2023 that’ll either have you breathing a sigh of relief or reaching for the Xanax.
1. The rise of the creative generalists
Perhaps one of the more reassuring predictions about AI to come from SXSW 2023 was this notion of democratized creativity. Ian Beacraft, chief futurist of Signal and Cipher, believes that those who will be best equipped to adapt to an AI-assisted workplace are the ones who haven’t specialized their skillset — the jacks-of-all-trades, masters-of-none. So if you work in a cross-departmental or cross-disciplinary creative role, better days are ahead! AI will merely level up your portfolio of creative skills. Got that? We are ALL artists.
2. Prompting the new digital divide
Amy Webb, CEO of the Future Today Institute, presented this nugget of prognostication as the new model of “rich” vs. “poor.” Remember that bit about not losing our jobs to AI? Well, let’s be clear, there’s an implied caveat and it applies to AI-savvy workers. As with any language, fluency is key to effective and efficient communication, and the same applies to interfacing with AI through prompts. The new digital divide will separate the prompt engineering savants from the prompt engineering illiterati, and award roles and salaries accordingly.
3. We no longer need to speak “machine” to the machines
Many of us will never learn to code, having long ceded that engineering superpower to the math-inclined among us. And it seems as though we won’t ever need to? According to John Maeda, VP of design and artificial intelligence at Microsoft, speaking “human” to the machines will matter more in this new AI-assisted era. And it makes sense given that our main interface with these Large Language Models (LLMs), like ChatGPT, Bard, or Bing, centers around prompt engineering using natural language. It kind of puts those brand-new, high-paying prompt engineering roles into context, huh?
4. The metaverse will be built by user-generated content
The “metaverse” is perhaps the cringiest of all of Web3’s bluster, but we can’t ignore that one day soon it will actually exist. The unexpected twist? We the people, with an AI assist, will build it — not Big Tech. Jerome Pesenti, Meta’s former VP of AI, explained that these interconnected worlds will spring up through the use of generative AI and the assets we create and share. Unsurprisingly, this metaverse building is the exact kind of consumer-facing application Meta demoed at its Connect developer conference.
If AI is ultimately a mirror for humanity, then it’s on us to reflect back our best selves.
5. From an internet we search to an internet that searches us
This cleverly worded headscratcher comes via futurist Amy Webb. What Webb is actually saying here is that everything we do — yes, even your bathroom habits — will become a trainable data point for AI. In this sense, AI will be able to understand us even better than we think we know ourselves, and anticipate our needs and desires. It’s ambient computing writ large. And as Webb sees it, our embrace of digital tech during lockdown life was the first major contributor towards this shift.
6. AI for the people and by the people
Responsible innovation is hardly ever the bedfellow of scale. And never has this been more true than in the explosive public release of generative AI. (We’re looking at you, OpenAI.) But it’s good to know certain individuals operating in the upper echelons of the tech world have humanity’s best interests at heart. Take, for example, the founders of Humane, the aptly named AI hardware/software startup from former Apple engineers that puts people first, or researchers like Patrick Gage Kelley, who works with Google to make sure the what, why, and how of AI is easily explainable.
7. An FYP for your HMO or PPO
It’s hard to envision how something like, say, TikTok’s addictive For You Page (FYP) would translate to the realm of medicine, but Anne Wojcicki, cofounder and CEO of 23andMe, thinks that’s on the way. That’s right — personalization trends from social feeds could bleed into personalization in healthcare. And it makes sense when you consider the supportive impact of online communities that could arise from users scrolling through a feed of patients dealing with similar health conditions. Or imagine even a tailored (and engaging) healthcare landing page that zeroes in on the doctors, procedures, services, and support that matter most to you. Approachable healthcare. It shouldn’t sound as far-fetched as it does.
8. Prevention as the best medicine
23andMe’s Wojcicki was also confident that advances in AI would bolster a new model of preventive healthcare and popularize risk prediction for patients. The tech is already being used for numerous medical applications, one of which involves researchers creating a multitude of new protein structures. So it’s not that much of a stretch to train it on our DNA. No doubt this novel approach to healthcare would make generous use of 23andMe’s genetic data, so this one may be a bit self-serving. We see you, Anne.
9. A very, very modern sort of love
Remember the movie Her with the breathy and alluring AI voiced by Scarlett Johansson? Yeah, it’s not so much the realm of science fiction anymore. In fact, a handful of futurists at SXSW 2023 were pretty confident that AI will eventually become a confidant for some folks. And, eventually, maybe even an embodied lover? AI companion apps already exist and we’ve already heard tales of emotional entanglements with AI, so expect this romantic application of the tech to rapidly evolve.
10. AI will always wake up and choose chaos
Maya Angelou said: “When people show you who they are, believe them the first time.” Who knew that would also apply to AI? Well, it does. As Jerome Pesenti pointed out, once an AI reveals itself to be chaotic or acts offensively, then you know that it can always be “jailbroken” to revert back to that chaotic state. Unsettling but good to know!
11. Assigning AI identity
The way futurist Kevin Kelly sees it, there are four potential frameworks for how humanity will come to perceive AI: slaves, pets, gods, aliens. Right off the bat, you can identify the two troubling modalities that should be immediately stricken from the list. But chin up, fellow human, because according to Kelly, we won’t give in to our lesser impulses and impose domineering authority upon our AI copilots, nor blindly subjugate ourselves to their whims. Instead, we’ll ultimately come to view AI as “artificial aliens” — something intelligent but altogether different from us. In the near term, expect to view them primarily as “pets” and cringe at instances of robotic “abuse” (see Spot [from Boston Dynamics] fall over). Just wait until AI can feel.
12. Bye-bye metaverse baddies, hello trololol bots
Le sigh. We have to talk about the “metaverse” again, but this time there’s some good news… and an existential question. Turns out, the major benefit to living in an AI-generated virtual world is that you also get an “off switch” for trolls. So if anyone starts to make your meta-playground into a meta-nightmare, just turn ’em off. Now comes the tricky part: Did you just ban a bully or a bullying bot? (Does your head hurt, too?)
13. Anything you can do, AI can likely do better
Consider this the era of our great reckoning: the one in which we realize that inherently “human activities” are really just mechanical processes. You know those joyrides you like to take? That game of chess you like to play with your niece? That conversation you enjoyed having with the Whole Foods cashier? Sorry to burst your bubble, but according to futurist Kevin Kelly, these are all actions that AIs can easily automate, with more to come. Pour one out for human fragility.
14. Reddit is (a good chunk of) the mother brain
No matter the AI-focused talk at SXSW 2023, there was always one elephant in the room and its name is Reddit. More than a handful of futurists at the Austin-based event hinted at the popular community-based website’s role in training LLMs. But it was Reddit COO Jen Wong who said the quiet part out loud during their panel. “Where do you think they get their answers from?” Hrmmm. Where, indeed? Maybe there’s a subreddit for that.
15. It’s not about copyright, it’s about right of reference
The U.S. Copyright Office is still grappling with the thorny issue of intellectual property rights granted to works derived in whole or part from generative AI. (The U.S. slow to legislate? No surprise there!) In the meantime, futurist Kevin Kelly thinks we should be taking a different approach — one that’s less about authorship and more about reference rights. In fact, Kelly believes creators may even someday jockey for the right to opt in to an AI training set. Influencers, this is your final form.
We the people, with an AI assist, will build the metaverse — not Big Tech.
16. 2024 will be the last “human” election in the U.S.
Did it even need to be said? If there’s one thing we can all depend upon, it’s humanity’s capacity to take a tool and do our worst with it. Generative AI, meet the 2024 U.S. election. Before you grab your pitchforks, keep in mind this warning comes from Greg Brockman, cofounder of OpenAI, aka the-house-that-built-ChatGPT. As you might’ve guessed, Brockman envisions future election cycles rife with deepfakes and plausible disinformation campaigns at massive scale. Yeah, we hate it here, too.
17. The EU is our only near-term hope for AI regulation
Much like how the European Union led the way and created safeguards around digital privacy and security with its General Data Protection Regulation (GDPR), so too are we now looking for its guidance on AI. The EU’s proposed Artificial Intelligence Act, which is entering into negotiations with policymakers this April, could very well have sweeping ramifications for how AI systems like ChatGPT are designed, implemented, and rolled out to the public in the U.S., and whether or not they’re labeled “high risk.”
18. Let AI be a hot mess
With one contentious piece of legislation about to hit the books (see above), the regulated fate of AI is still up in the air. But that heavy-handed legislative tactic is something engineers like Brockman and futurists like Kelly would advise against. Instead, they propose a sort of wait-and-see-and-then-clean-up approach. In other words, in order to know how AI can be abused, we have to let people abuse it first. Then, and only then, can we craft laws regarding responsible use, privacy, and safety. This has strong let-the-kid-touch-the-hot-stove-and-get-burned-so-they-know-better vibes. And I think we all know how that played out for social media.
19. TFW AI learns from itself
Currently, AI learns from the internet, and the internet, as we know it, has been written by humans. But what happens when AI learns from an internet that’s been written by AI? Welp. There’s no guidance for this one, but hat tip to Google researcher Patrick Gage Kelley for embedding this metaphysical quandary in our brains. Consider it food for gnawing thought.
Remember what we said earlier about driving AI’s destiny? Better keep your prompt-writing hands at 10 and 2.