Episodes like the Cambridge Analytica scandal opened our eyes to the fact that some of our nearest and dearest technologies had become fully socialized before we truly understood the force of their influence. Consequently, new tools and gadgets coming down the line are being closely examined so that we can begin to uncover any damaging consequences that could manifest 10, 20, or even 100 years from now.
What was previously the exclusive domain of engineers, programmers, and scientific specialists is now fertile territory for politicians, campaigners, ethicists, philosophers, psychologists, and sociologists, all of whom are trying to get to grips with the ethics of technology and its implications for broader society.
But there is some good news. A counter-movement of socially conscious technology is undoubtedly on the rise. Just a quick stroll around the exhibition hall at the recent TechCrunch Disrupt conference in San Francisco made clear that there’s a new cohort of ethically engaged technologists out to reset the balance and demonstrate how their products can be leveraged for noble causes.
Take Jaunt’s virtual reality experience, Decisions: Party’s Over. Sponsored by the massive alcoholic beverages conglomerate Diageo, the simulation follows the journeys of four partygoers as they navigate the dangers of binge drinking. In theory, this VR functions as a very modern cautionary tale, deterring would-be party animals from overindulging by immersing them in—and scaring them with!—the harrowing consequences that could potentially befall them, including sexual assault.
Rather less didactic was CityCop, a mobile app its creators call a “Waze against crime.” The platform works in two ways: first, by encouraging citizens to report crime directly through the app—a kind of 21st-century neighborhood watch—and second, by pulling and mapping crime data directly from social media and public crime reports, crowdsourcing offenses in real time. All hotspots are displayed on a live local map, so that residents and visitors can proceed with caution (and would-be offenders can see where they’re being watched).
CityCop is already operational with 250,000 users in four cities across Latin America, but its inventor, Nadim Curi, hopes to roll out the app more broadly—and ultimately for it to become a useful tool for law enforcement. This is where technology and community-mindedness collide, in a nice way.
On the main stages, too, social consciousness prevailed. DNA company 23andMe told those gathered about its new $300 million, four-year project with GSK to advance the discovery of new disease therapies and cures through the study of genetics. And in the notorious “Startup Battlefield” pitch contest, we saw companies born out of concerns for consumer privacy, like D-ID (a respectable fifth place in the competition), which uses AI to protect the biometric data present in photographs from facial recognition algorithms—without making perceptible changes to the image. That’s right: we are at once being protected and cured by some of the very mechanisms we’ve also been encouraged to fear—genetic data collection and artificial intelligence.
So are we at a turning point? Will coming years bring a raft of ethical tech, supporting and complementing the existing values of society?
Perhaps. But even if this isn’t the case, it is still important to reflect upon these test cases of “tech for good” (and it’s worth noting that those highlighted are just a few among many like-minded efforts). That’s because, against a backdrop of cynicism and fear-mongering, they demonstrate that artificially intelligent systems and related technologies are not inherently pernicious. Rather, they can be instrumental in helping us navigate some of humanity’s most vexing problems: crime, disease, and personal fallibility, to name a few.
Trust, as the saying goes, is difficult to gain and easy to lose. But if these kinds of technologies succeed, they just may upend the one-dimensional and sensationalist criticisms of tech, and help regain fragile public confidence.