In 2000, a group of researchers at Georgia Tech launched a project they called “The Aware Home.” The collective of computer scientists and engineers built a three-story experimental home with the intent of producing an environment that was “capable of knowing information about itself and the whereabouts and activities of its inhabitants.” The team installed a vast network of “context aware sensors” throughout the house and on wearable computers worn by the home’s occupants. The hope was to establish an entirely new domain of knowledge — one that would create efficiencies in home management, improve health and well-being, and provide support for groups like the elderly.
“GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing.”
Regina Rini, Philosopher
In recent years, the AI circus really has come to town and we’ve been treated to a veritable parade of technical aberrations seeking to dazzle us with their human-like intelligence. Many of these sideshows have been “embodied” AI, where the physical form usually functions as a cunning disguise for a clunky, pre-programmed bot. Like the world’s first “AI anchor”, launched by a Chinese TV network and — how could we ever forget — Sophia, Saudi Arabia’s first robotic citizen.
But last month there was a furore around something altogether more serious: a system The Verge called “an invention that could end up defining the decade to come.” Its name is GPT-3, and it could certainly make our future a lot more complicated.
So, what is all the fuss about? And how might this supposed tectonic shift in technological development change the lives of the rest of us?
With COVID-19 lockdown restrictions issued across the globe, millions of us have been forced to hunker down “in place”, or to severely limit our movements outside the home. On hearing the news, most of us will have reached reflexively for the nearest device — if we didn’t learn it from that device to begin with. Yet most of us are locked in a love-hate relationship with the presiding artefacts of our time, and we often resent tech’s power over us.
Nevertheless, new circumstances can breed new attitudes. Having spent the last few years debating whether or not technology will destroy us, we may find that March 2020 at least partially redeems our faith in it, by demonstrating how fortunate we are to have some incredibly sophisticated tools in our homes.
For many, they are currently the sole portal to the outside world.
The dust has now settled after the madness of the world’s biggest annual tech fest, the Consumer Electronics Show (CES) in Las Vegas, NV. Since the show’s kick-off in early January, a parade of weird and wonderful new devices has dominated tech headlines; from lab-produced pork to RollBot, Charmin’s robotic savior for those “stranded on the commode without a roll.”
The event itself really isn’t for the faint-hearted. It’s easy to feel overwhelmed by the sheer volume of companies vying to embed their (often ridiculous) tech gadgetry into our lives – both at work and at play. There is, of course, lots of money to be made from finding that elusive sweet spot: the point at which problem-solving, convenience, and affordability converge.
The following is a guest post by Erin Green, PhD, a Brussels-based AI ethics and public engagement specialist. For more on the European scene, check out my recent interview with Hill + Knowlton Strategies, “Creating Ethical Rules for AI.”
When it comes to the global AI stage, China and the US consistently grab headlines as their so-called arms race heats up, while countries like Japan and South Korea lead the way in innovation and social receptivity. Europe, though, is taking a slightly different approach – partly by choice, partly by design.
Somewhat independent of these interests, the EU itself is trying to carve out space in terms of regulatory prowess and in bringing coherence to a rather chaotic European AI scene. Think this is a bureaucratic exercise with not much reach or consequence beyond the Berlaymont? Just remember all those GDPR emails that clogged up your inbox sometime around May 25, 2018. The EU has real regulatory reach.
Introducing his students to the study of the human brain, Jeff Lichtman, a Harvard Professor of Molecular and Cellular Biology, once asked: “If understanding everything you need to know about the brain was a mile, how far have we walked?” He received answers like ‘three-quarters of a mile’, ‘half a mile’, and ‘a quarter of a mile’.
The professor’s response? “I think about three inches.”
Last month, Lichtman’s quip made it into the pages of a new report by the Royal Society examining the prospects for neural (or “brain-computer”) interfaces, a hot research area that has seen billions of dollars of funding poured into it over the last few years, and not without cause. The worldwide market for neurotech products – defined as “the application of electronics and engineering to the human nervous system” – is projected to reach as much as $13.3 billion by 2022.
“What’s Hecuba to him, or he to Hecuba, That he should weep for her?”
At the close of Act II, Scene ii, Hamlet questions how the performers in a play about the siege of Troy are able to convey such emotion – feel such empathy – for the stranger queen of an ancient city.
The construct here is complex: a play within a play, sparking a key moment of introspection and, ultimately, self-doubt. It is no coincidence that in this same work we find perhaps the earliest use of the term “my mind’s eye,” heralding a shift in theatrical focus from traditions of enacted disputes, lovers’ passions, and farce, to a more nuanced kind of drama that issues from psychological turmoil.
Hamlet is generally considered to be a work of creative genius. For many laboring in the creative arts, works like this serve as aspirational benchmarks: indelible reminders of the brilliant outlands of human creativity.
Now, for the first time in our history, humans have a rival in deliberate acts of aesthetic creation. In the midst of the avalanche of artificial intelligence hype comes a new promise – creative AI; here to relieve us of burdensome tasks including musical, literary, and artistic composition.
Last month, Oscar Schwartz wrote an article for OneZero with a familiarly provocative headline: “What If an Algorithm Could Predict Your Unborn Child’s Intelligence?” The piece described the work of Genomic Prediction, a US company using machine learning to pick through the genetic data of embryos to establish the risk of health conditions. Given the title of the article, the upshot won’t surprise you. Prospective parents can now use this technology to expand their domain over the “design” of new offspring – and “cognitive ability” is among the features up for selection.
Setting aside the contention over whether intelligence is even heritable, the ethical debate around this sort of pre-screening is hardly new. Gender selection has been a live issue for years now. Way back in 2001, Oxford University bioethicist Julian Savulescu caused controversy by proposing a principle of “Procreative Beneficence” stating that “couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others, based on the relevant, available information.” (Opponents of procreative beneficence vociferously pointed out that – regrettably – Savulescu’s principle would likely lead to populations dominated by tall, pale males…).
“One child, one teacher, one book, one pen can change the world.”
These are the inspirational words of activist Malala Yousafzai, best known as “the girl who was shot by the Taliban” for championing female education in her home country of Pakistan. This modest, pared-down idea of schooling is cherished by many. There is something noble about it, perhaps because it harkens back to the very roots of intellectual enquiry. No tools and no distractions; just ideas and conversation.
Traditionalists may be reminded of the largely bygone “chalk and talk” methods of teaching, rooted in the belief that students need little more than firm, directed pedagogical instruction to prepare them for the world. Many still reminisce about these relatively uncomplicated teaching techniques, but we should be careful not to misread Yousafzai’s words as prescribing simplicity as the optimal condition for education.
On the contrary, her comments describe a baseline.