Humans And AI Face/Off

AI-created algorithms are advancing “deep fake” technology to the point of being mildly useful or extremely nefarious.

I recently encountered an article about the real-time manipulation of AI-generated faces. The article includes a video demonstrating the technology and steps to reproduce it yourself. KnowBe4, a global security provider, was duped in four video interviews by a North Korean agent using deep fake AI technology. This follows Microsoft’s announcement earlier this year of VASA-1, their algorithm for turning a single portrait of a person, along with an audio clip of their voice, into a fully animated video of that person, complete with realistic expressions. Combined with other advances in AI-driven generation, it is safe to say that, for better or worse, the technology to produce convincing real-time representations of human individuals has advanced enough that widespread disruption in the near future is a real possibility. Despite some beneficial uses, the balance of potential applications leans overwhelmingly nefarious.


You can be whoever you want to be

From a technical standpoint, these technologies are amazing, and I marvel at all the things you could do with them. With models trained on your voice, likeness, and language, you could present a full-featured avatar of yourself, complete with video and audio. Certain kinds of routine or mundane business could be handled without your reluctant involvement. People with social anxiety could use it to avoid awkward interactions and project themselves with confidence instead. Everyone becomes instantly photogenic in their pictures—doing things they normally wouldn’t be brave enough to do and looking good doing it. Even when a meeting does require personal attention, there’s no need to shower: your real-time avatar will be poised and coiffed, and you’ll have research and suggested responses at your unkempt fingertips.

Being yourself, even with the help of an effortless and perfect facade, can be a little much for some. Thankfully, you’re not limited to AI-model representations of yourself. You can present the visage of another person altogether—an alter ego, a super pseudonym. This could be particularly useful for individuals who want to be perceived with biases different from those they are used to experiencing, or for people who are persecuted, stalked, or whose lives may be in danger. While many people might use this to further disconnect from their authentic selves, it might help empower others to return to social life.


Dr Sbaitso and Mr Hyde

These technologies are positioned to further move our society in the direction it has been heading for some time: towards social isolation and disconnection by replacing authentic human interactions with soulless, virtual ones. Media is the surrogate at the center of this replacement. News, social, entertainment, and interactive media have all evolved to better hook into human emotional needs—effectively becoming sources of addiction and social maladjustment. Now, you can add an additional layer of separation by not even having to represent yourself in this brave new virtual world—you can have an AI model represent an artificial, idealized version of yourself. It is often said that AI will take our jobs, and in this case, it seems, you could lose the job of being yourself.


Fighting your Inner Daemons

The fallout from this technology will not be limited to unintended consequences. Its potential to enable fraud simply cannot be overstated. Grandparent scams and dating-app scams are about to be turbocharged. The scammers are ready for their FaceTime close-ups now, and Kim Jong Un is doing Cameos. We’ve already heard about deep-faked robocalls in Joe Biden’s voice; it’s surprising that we haven’t seen more of these unauthorized impersonations. Or maybe we have. The need for novel forms of trust and authentication is becoming all too clear. A society without trust is a dying one. Our systems of curation and validation have been unable to keep up with the monsoons of false information that now flow freely through our networks, and in the same way, our laws and systems of enforcement won’t be able to cope with the coming trust crises.

The direction of media technology and its use in society has been all wrong. Technology should enhance or enable natural, in-real-life experiences, not replace them. It should not form an addictive avenue of escape or false promise. A virtual everything at your fingertips is worse than a lame alternative to real life; it’s a dangerous fool’s errand, a trap, and a road to a potential dystopian future. In an ever-more-disconnected world, technology has the potential to disconnect us even further. We should treat these new advancements as an inflection point to think about the future we need, and the future that no longer needs us.


A New Trust

It has become all too clear that the tools available to society are no longer up to the task of maintaining trust. Populations have grown too large, and technology has advanced too quickly, for those tools to keep up. With that in mind, I’d like to introduce SoftAtomic, a technology company I’ve formed with the goal of advancing the technology of trust. I’m developing a suite of tools to help individuals, businesses, and governments perform reliable, accurate, and automated comparison, analysis, authentication, and trust assessment. It is my hope that these tools will be instrumental in reclaiming our trust in one another. If you want to follow the company’s progress, send me an email at geordi [at] softatomic [dot] dev.
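To give a flavor of the kind of primitive such trust tools might build on—this is my own hedged sketch, not SoftAtomic’s actual design—a recording could be tagged with a keyed hash at capture time, so that anyone holding the key can later verify the media hasn’t been altered:

```python
import hashlib
import hmac

def sign_media(media_bytes: bytes, key: bytes) -> str:
    # Produce an authentication tag only key holders can generate.
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, tag: str) -> bool:
    # Recompute the tag and compare in constant time.
    expected = sign_media(media_bytes, key)
    return hmac.compare_digest(expected, tag)

key = b"device-secret-key"        # hypothetical per-device capture key
original = b"...video frames..."  # stands in for real media bytes

tag = sign_media(original, key)
print(verify_media(original, key, tag))          # untampered media verifies
print(verify_media(original + b"x", key, tag))   # any alteration is detected
```

This simple symmetric scheme only works when signer and verifier share a secret; real provenance systems lean on public-key signatures and certificate chains so that anyone can verify, but the core idea—bind the bytes to an identity at the moment of capture—is the same.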

Doppelganger? I barely knew her!

Geordi

For those about to rock, we salute you.
