It is a human inclination to want to look good. Our desire to impress keeps the fashion industry alive; it motivates many of us to work or study hard; and there are billions of dollars to be made from our desperation to appear fit and healthy. So it should come as no surprise that, as algorithms hold more and more sway over decision-making and the conferral of status (e.g. via credit or hiring decisions), many of us are keen to put our best foot forward and play to their discernible preferences.
This is certainly true of those in business, as discovered by the authors of the working paper How to Talk When A Machine is Listening: Corporate Disclosure in the Age of AI. An article posted by the National Bureau of Economic Research describes the study’s findings:
Companies go beyond machine readability and manage the sentiment and tone of their disclosures to induce algorithmic readers to draw favorable conclusions about the content. For example, companies avoid words that are listed as negative in the directions given to algorithms.
The researchers show this by contrasting the occurrence of positive and negative words from the Harvard Psychosocial Dictionary — which has long been used by human readers — with those from an alternative, finance-specific dictionary that was published in 2011 and is now used extensively to train machine readers. After 2011, companies expecting high machine readership significantly reduced their use of words labelled as negative in the finance-specific dictionary, relative to words that might be close synonyms in the Harvard dictionary but were not included in the finance publication.
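The technique the study relies on is simple to picture: count how many of a document's words appear on a dictionary's "negative" list. Here is a minimal sketch of that kind of scoring; the word lists are hypothetical stand-ins for illustration, not the actual Harvard or finance-specific dictionaries.

```python
# Toy dictionary-based tone scoring. Both word lists below are invented
# illustrative subsets, NOT the real Harvard or finance dictionaries.
HARVARD_NEGATIVE = {"decline", "loss", "adverse", "weak"}
FINANCE_NEGATIVE = {"impairment", "restatement", "litigation"}

def tone(text: str, negative_words: set) -> float:
    """Return the fraction of words flagged as negative by the given list."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,") in negative_words)
    return hits / len(words)

disclosure = "Litigation and impairment charges caused an adverse decline."
print(tone(disclosure, HARVARD_NEGATIVE))   # negativity per the Harvard-style list
print(tone(disclosure, FINANCE_NEGATIVE))   # negativity per the finance-style list
```

The gaming move the paper describes falls straight out of this design: if you know which list the machine reader uses, you can swap a flagged word for an unflagged near-synonym and lower your measured negativity without changing what you actually disclose.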
And companies aren’t just tweaking written public documents. The piece goes on to say that:
Managers who know that their disclosure documents are being parsed by machines may also recognize that voice analyzers may be used to identify vocal patterns and emotions in their commentary. Using machine learning software trained on a sample of conference call audio from 2010 to 2016, the researchers show that the vocal tones of managers at companies with higher expected machine readership are measurably more positive and excited.
That’s right. You may not have realized it, but there’s a whole lot of gaming going on as we endeavor to re-write and re-order to please our machines and — ultimately — give ourselves the advantage of looking a little better than perhaps we really are. In this case, it’s something like “botox for business.”
However, the practice is in no way limited to the peacocking firms might do in front of investors. In fact, increasingly, it seems that no domain is safe. Take this article about how a seventh grader and his mother learned to trick an algorithmic online education platform, Edgenuity, and send the 12-year-old’s grades soaring from Fs to A+s without him actually “learning a thing.”
Writing for The Verge, Monica Chin describes how the two plotted to fox the system after the child received a poor grade within a second of submitting the answers to a history test:
Simmons watched Lazare complete more assignments. She looked at the correct answers, which Edgenuity revealed at the end. She surmised that Edgenuity’s AI was scanning for specific keywords that it expected to see in students’ answers. And she decided to game it.
Now, for every short-answer question, Lazare writes two long sentences followed by a disjointed list of keywords — anything that seems relevant to the question. “The questions are things like… ‘What was the advantage of Constantinople’s location for the power of the Byzantine empire,’” Simmons says. “So you go through, okay, what are the possible keywords that are associated with this? Wealth, caravan, ship, India, China, Middle East, he just threw all of those words in.”
Apparently, that “word salad” is enough to get a perfect grade on any short-answer question in an Edgenuity test.
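Simmons's inference is easy to reproduce. Below is a minimal sketch of a keyword-matching grader of the kind she surmised Edgenuity uses; the rubric, threshold, and scoring rule are all assumptions for illustration, since the platform's real internals are not public.

```python
# Hypothetical keyword-matching grader: full marks once enough rubric
# keywords appear, regardless of whether the answer is coherent.
def grade(answer: str, rubric_keywords: set, full_credit_at: int = 2) -> int:
    """Award 100 if enough rubric keywords appear; partial credit otherwise."""
    found = {w.strip(".,").lower() for w in answer.split()} & rubric_keywords
    return 100 if len(found) >= full_credit_at else round(100 * len(found) / full_credit_at)

rubric = {"trade", "wealth", "caravan", "ship", "india", "china"}

thoughtful = "Constantinople sat on key routes, so goods flowed through it."
word_salad = "Location good. Empire strong. Wealth caravan ship India China trade."

print(grade(thoughtful, rubric))   # a reasoned answer that misses the exact keywords
print(grade(word_salad, rubric))   # a disjointed list that trips every keyword
```

Note the failure mode: a genuine answer phrased in its own words can score worse than an incoherent keyword dump, which is exactly the exploit Lazare used.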
This is another, even starker, example of how the human inclination to write or present an offering in a particular way is being entirely reshaped to gain an advantage in a world of algorithms — arguably to the detriment of the work’s quality, as well as the student’s education.
Where such puppet dances are occurring, there is no shortage of online advice tailored to those who must learn its steps. The kids interviewed for The Verge article above had become adept at scouring the internet to locate treasure troves of the “key words” that fool learning algorithms and secure A grades. The adult equivalent might be articles like this one from CNBC, which teaches job applicants precisely how to massage their resumes in order to get noticed by the machine-driven hiring filters. It shrieks:
Job seekers, forget recruiters and hiring managers. There’s a new gatekeeper standing between you and your dream job that you need to please first.
Three-fourths of all resumes never even get seen by human eyes… So if you want to get hired, you’ll need to beat these bots.
Thankfully, that’s not hard to do. It just requires tweaking your resume to deliver exactly what the software system’s been programmed to search for — and nothing it hasn’t been told to want.
These tweaks include reformatting to a specific layout that systems can read (in one sample of 1000, 43% were thrown out for not being readable!), removing any images, logos, graphics, photographs or even color (so anything personal or creative is out…), using common headlines like “education” or “professional experience” rather than jazzy ones like “what I’ve been working on”, and (of course!) lacing the whole thing with a healthy dose of keywords stripped directly from the job description itself.
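The keyword-lacing advice makes sense once you see what the screen is probably doing: measuring overlap between the resume's vocabulary and the job posting's. Here is a rough sketch of such a filter; the matching logic and pass threshold are illustrative assumptions, not any real vendor's algorithm.

```python
import re

# Hypothetical ATS-style keyword screen: a resume passes if it covers
# enough of the job description's distinctive terms.
def passes_screen(resume: str, job_description: str, min_overlap: float = 0.5) -> bool:
    """Pass the resume if it shares enough key terms with the posting."""
    def terms(text: str) -> set:
        return {w for w in re.findall(r"[a-z+#]+", text.lower()) if len(w) > 3}
    required = terms(job_description)
    return len(terms(resume) & required) / len(required) >= min_overlap

jd = "Seeking analyst with Python, Excel, forecasting and reporting experience."
tailored = "Analyst experienced in Python, Excel, forecasting, reporting."
creative = "I turn messy numbers into stories executives act on."

print(passes_screen(tailored, jd))  # the vocabulary is lifted straight from the posting
print(passes_screen(creative, jd))  # strong pitch, wrong vocabulary
```

Under this kind of scoring, the winning strategy is verbatim conformity to the posting's wording, which is precisely what the CNBC piece recommends.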
To stand out, we’re being advised to do just the opposite of what generations past were encouraged to do: we must conform. If we don’t, we risk losing opportunities and the credit we might otherwise have earned from human eyes.
So, what’s the problem here? Writing to the chooser’s tastes — be they an investor, a teacher, or a hiring mechanism — is hardly new, and guessing at human preferences is arguably far more difficult. At least with this kind of standardization we can all learn the tricks and win the advantage!
But not so fast.
The online learning example clearly illustrates that “playing to the algorithm” can badly compromise the quality of the work involved. In this case, good grades are the reward for clever thinking, but not for demonstrating a true understanding of the material involved.
The same could be said of how hiring algorithms influence the resume. But even if we can’t agree that they force applicants to churn out badly written resumes, it still seems true that in demanding uniformity these systems strip us of an opportunity to showcase creativity or personality. Indeed, hiring AI actually penalizes quirks in favor of cross-candidate consistency and predictability.
And if this effort to diminish self-expression and diversity doesn’t worry you, then hopefully you are worried about how it threatens to embed further societal inequities.
At a time when colleges and companies are attempting to address the racial and social imbalances within their institutions, we find ourselves with another subtle but problematic gatekeeping issue. Those who have not been let in on the magic keywords will not “pass go”, further compounding the many other ways in which algorithms have been found to embed bias across a variety of domains.
As the mother who conspired to trick the online learning system commented:
“He’s getting an A+ because his parents have graduate degrees and have an interest in tech. Otherwise he would still be getting Fs. What does that tell you about… the digital divide in this online learning environment?”