By Angus Stevens, CEO and cofounder of Start Beyond, Chair and cofounder of the Australian Metaverse Advisory Council (AMAC)
Would you like AI with that? Those two letters are everywhere – in your messages, your search bar, your documents, on social media, in the news and in pitch decks to tech investors. At a recent panel at SXSW Sydney, a significant show of hands agreed that adding AI to everything had gone too far. There’s no doubt AI is currently in its own hype cycle.
The first problem is that everyone means something different when they use the term ‘AI’. That matters because there are valid concerns about what some of this technology does, yet legitimate uses end up lumped into the same category.
Artificial intelligence is an umbrella term that covers a range of technologies, from self-driving cars and facial and voice recognition to digital assistants and creative tools. Problems arise when AI produces content that is misinformation at best (so-called hallucinations) or deadly at worst (offering recipes that call for mixing ammonia and bleach, with no oversight). High-profile legal cases feature writers and artists whose works have been fed into generative AI systems without permission.
It falls to us in the tech industry to communicate the value of AI for the benefit of our clients and the wider public, so that when the hype dies down, the genuinely useful parts have enough momentum to endure.
Using AI in the metaverse
Just like virtual reality (VR) in 2017, AI is having its moment, and like virtual and augmented reality (AR) development, it needs to add value to have a life beyond the hype cycle. VR and AR saw a great deal of hype around their use in games, but some of their most powerful ongoing applications are in areas such as training and education, giving learners access to simulations and experiences in controlled environments.
A prevalent use of AI in the metaverse is powering avatar interactions and creating bots that enhance the user experience in virtual spaces. These bots might act as instructors or guides, providing interactive training modules that make the metaverse more interactive, immersive and intuitive. They could also run simulations where users practise various scenarios and explore different ways of examining or handling a situation. With enough use, these AI training tools could create personalised training environments that guide users through processes with realistic simulations.
We use AI in some of our production – to clean up images and build out virtual worlds, for example – but clients continually ask us to add AI to areas where it would be problematic.
For instance, we have a VR experience that teaches aged care workers how to manage patients with dementia. It provides scenarios that give trainees skills and experience without the potential physical and emotional risks of learning on the job. The proposed idea is that AI-powered bots could help care for elderly individuals by providing assistance and companionship while easing the workload of human caregivers. The challenge is that you need very reliable internet, strong guardrails around what the AI can and can’t say, and a clear decision about how realistic the AI should be. There is also the risk that a patient could form an unhealthy attachment to the AI, to the detriment of their human connections.
Then there’s the key question: is ‘easing the load’ beneficial to both caregiver and patient, or will it dehumanise the role altogether? Delegating a task to AI risks ultimately devaluing the work. I would rather see AI take care of the administrative side of healthcare and leave the caregiving to humans.
The challenges are real
There are more challenges with the current crop of AI technology that we will need to address before we can openly embrace AI and its full potential. Beyond the question of whether something needs a human touch, it’s clear that the hype has overtaken the actual capabilities of the technology. Many of its most useful functions are still prototypes, and let’s not ignore that some of the most impressive displays of AI, such as Tesla’s Optimus robot, are still largely human-controlled, leaving us with a new generation of Mechanical Turks.
Further to that, for all its potential to assist with environmental management, AI such as chatbots and generative tools like ChatGPT consumes far more resources than the value it delivers would suggest. According to The Washington Post, which worked with researchers at the University of California, a query that generates a 100-word response uses roughly a 500ml bottle of water, based on the water needed to keep the servers in data centres cool enough to function.
As with every tech hype cycle, not all AI will succeed, but some version of AI will likely become as valuable to society as VR has. We’re already seeing AI take root in practical applications, and it’s only a matter of time before certain AI technologies prove indispensable. The question remains: will AI become a cheaper, more efficient replacement for human labour, or will it simply add complexity and cost to processes humans already handle well?