Well, imagine if stealing your identity could include stealing your image. And if scammers could then use that image to put words in your mouth and – in some cases – fake your very actions. This isn’t just some outlandish thought experiment, but a foreseeable hazard if we fail to prepare for a surge in the production of “deepfakes”.
It is important to ask at this nascent stage: should your descendants be able to consent to your deepfake resurrection?
As AI technology advances and proliferates, so do the methods of would-be attackers and fraudsters. And as our world becomes more networked, the attack surface grows and grows. In short, it is a very exciting time to be a technically-minded crook.
While GPT-3 has been quick to impress us, it was also quick to demonstrate its dark side. A system trained on such huge amounts of human data was always going to take on both the good and the bad that lie therein.
How should we respond to distressing, manipulative, or abusive behavior in an immersive and interactive environment? Particularly when it graduates from "content" to something much more like a lived experience?
Makers would do well to consider how fluidly users can progress from "tapping to talking" in different domains. If a voice-controlled design cuts out a natural part of the process (as with shopping), makes the interaction clunky (as with anything that requires fluent conversation), or makes us more nervous (as with personal admin), there will still be thinking to do…
In truth, no matter what gadgetry emerges victorious at the end of CES, there will still be some fundamental "meta themes" affecting technology in 2019. And though they may not have secured as many column inches as cutesy robots and 5G this week, these core topics are likely to have more staying power.
From customer service bots like Erica, to the Google Duplex debacle, to robotic citizens like Sophia of Arabia, we are perpetually looking to craft AI in our own image. But why?