You’re right — robots did steal your job.


Automation makes the news a lot these days, often in discussions about the future of work. The robots of these headlines appear fully formed: sentient constructions of hardware and code, ready to swoop in and replace human workers. Given the long-standing tradition of hyper-intelligent machines in science fiction, it is unsurprising that conversations about automation draw on these same notions of human redundancy.

In thinking about automation as a concern of the future, it is easy to overlook the changes it has already wrought. This is especially true as artificial intelligence becomes a bigger part of automation (and, of course, as the two concepts continue to be conflated).

While automation can be based on AI, it doesn’t have to be. However, with dramatic growth in the ability to collect, store, and manipulate data — thanks to developments like increases in computing power, the creation of cloud computing, and new sources of data like social media and mobile technology — automation increasingly incorporates AI.

Broadly speaking, AI is rooted in the idea of “thinking machines” that can reason like a human might, while automation is about turning tasks into a series of directions a machine can follow. Automation has been a major part of work since the industrial revolution, and can be entirely mechanical.

Artificial intelligence (as a practical, implemented reality) is a much more recent development, and focuses on creating machines that can identify and learn from patterns. As a result, unlike automation, AI is expected to respond to novel situations without being given explicit instructions for each new scenario.

Media coverage focuses on the risk that AI will out-think and out-perform its human creators. Debates hinge on ideas about wholesale substitution of machines for human workers. Both automation and artificial intelligence, however, are already part of work, and increasingly they are part of the process of finding work.

The immediate threat to workers isn’t from AI-reliant automation that will perform the same jobs as human workers with no need for oversight or guidance (although there is plenty of work underway towards that goal). Currently, much of automation’s role is opaque or invisible, and the risks of AI come less from its ability to reason like a human and more from the stark discrepancies between human and artificial reasoning. The threats to work posed by machines are not presently caused by their ability to overpower human intellect; instead, they are a function of the ways in which robots continue to fail at tasks humans take for granted.

Consider applicant tracking systems, or ATS. Used by nearly all Fortune 500 companies, and essentially any time a candidate applies for a job online, such systems don’t simply organize prospective employees; they also sort and filter them based on set parameters. ATS is an example of automation, with some ATS software also using artificial intelligence to screen candidates.

Like human recruiters, ATS can make mistakes in selecting the most promising candidates for a position. Unlike a human being, however, the software lacks the cognitive flexibility that can make all the difference in getting a job-seeker’s resume onto a hiring manager’s desk.

Imagine a job posting that requires a bachelor’s degree in chemistry. A human can infer that someone with a degree in a related field — such as biochemistry or chemical engineering — may have the same requisite skills. ATS software cannot; at least, not unless it is explicitly instructed to also consider applicants from those disciplines.

Similarly, an ATS may reject a candidate with a graduate degree in chemistry if the parameters provided instruct it to look for a bachelor’s degree, or a candidate with a bachelor of arts rather than a bachelor of science. An ATS may also reject candidates if their resume does not use enough of the same keywords as the job posting, or if their resume is formatted in a way that makes it hard for the system to read (such as including bullets that are not perfectly round, or in some cases, using a serif font).
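To make that rigidity concrete, here is a minimal sketch of the kind of rule-based screen described above. It is illustrative only: the field names and the exact-match rule are assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of a rigid, rule-based ATS screen.
# Illustrative only: field names and the exact-match rule are
# assumptions, not any vendor's actual implementation.

REQUIRED_DEGREE = "BS Chemistry"  # parameter taken from the job posting

def passes_screen(candidate: dict) -> bool:
    """Forward a candidate only if a degree exactly matches the required string."""
    return REQUIRED_DEGREE in candidate["degrees"]

candidates = [
    {"name": "Alpha", "degrees": ["BS Chemistry"]},     # forwarded
    {"name": "Bravo", "degrees": ["BS Biochemistry"]},  # rejected: related field
    {"name": "Carol", "degrees": ["BA Chemistry"]},     # rejected: BA, not BS
    {"name": "Dana",  "degrees": ["MS Chemistry"]},     # rejected: graduate degree
]

for c in candidates:
    verdict = "forwarded" if passes_screen(c) else "rejected"
    print(c["name"], "->", verdict)
```

A human reviewer might reasonably forward all four of these candidates; the exact-match rule forwards one.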

While the body of research on ATS use and its outcomes is less than robust, some investigations report that as many as 3 out of 4 applicants for a position will be rejected without a human being ever seeing their resume. Considering that some portion of those rejections are related to software issues — cases where the ATS is unable to read a candidate’s resume, or is given overly restrictive parameters — it’s clear that an unknown number of people are rejected in error. According to a recent survey of human resources professionals, over 60% believe that automated processes are incorrectly filtering out qualified applicants.

The priorities of ATS software, which aims to identify qualified job-seekers while minimizing the time spent reviewing applications from unqualified candidates, also shape outcomes. One of the tricky things about solving human problems via automation is that a test can be tuned for sensitivity or for specificity, but improving one typically comes at the cost of the other.

Sensitivity has to do with how good a classifier is at identifying items that meet the criteria. For ATS, having high sensitivity means correctly identifying every applicant who meets the minimum criteria for the job. Specificity is the ability of a classifier to reject items that do not meet given criteria. In the context of ATS, that means correctly eliminating any applicant who does not meet the minimum criteria for the job.
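In standard classifier terms, sensitivity is TP / (TP + FN) and specificity is TN / (TN + FP), where TP, FN, TN, and FP count true positives, false negatives, true negatives, and false positives. A quick worked example in Python, with counts invented purely for illustration:

```python
# Sensitivity and specificity for an ATS treated as a binary classifier.
# The counts below are invented for illustration.

tp = 40    # qualified applicants the system forwarded   (true positives)
fn = 10    # qualified applicants the system rejected    (false negatives)
tn = 180   # unqualified applicants the system rejected  (true negatives)
fp = 20    # unqualified applicants the system forwarded (false positives)

sensitivity = tp / (tp + fn)  # share of qualified applicants correctly forwarded
specificity = tn / (tn + fp)  # share of unqualified applicants correctly rejected

print(f"sensitivity = {sensitivity:.2f}")  # 0.80
print(f"specificity = {specificity:.2f}")  # 0.90
```

Tightening the screening parameters pushes specificity up and sensitivity down; the ten false negatives here are the qualified applicants no human ever sees.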

Systems that exist to reduce the need for human review often prioritize specificity over sensitivity, meaning that applicant tracking systems are designed to be better at rejecting candidates who might not be a good match than at identifying job-seekers who may fit the position. From a company’s perspective, there are numerous potential employees who could fill an opening. Failing to forward one qualified applicant for human review may be of negligible cost, so long as recruiters see enough applications to successfully fill the position.

From this point of view, using an ATS makes sense. Online job postings can lead to hundreds of applicants vying for a single position, especially in regions or eras of high unemployment. Automation is one way to address the truly monumental task of combing through all the resulting resumes, many of which may legitimately fail to meet the basic requirements for the position.

Hiring is costly in terms of both time and money, and employers have good reason to seek ways to reduce its expense. Furthermore, human recruiters are themselves biased, and may discriminate on the basis of name, perceived race, and other aspects of a candidate’s background.

There is a risk, however, of trading one form of bias for another. Encoding a process in software can create the sense that it is fixed and unchangeable, even in the face of evidence that a better way exists. Automation bias, the human tendency to overlook errors produced by automated systems and to assume a computed answer is correct, amplifies this issue.

Under a paradigm that treats automation as an endpoint, artificial intelligence is merely the latest development and the supposed future of work. Automation is presumed to be the solution to the error-prone inefficiencies of human workers.

Redesigning entrenched systems frequently calls for the kind of flexible thinking that continues to challenge machines. Artificial intelligence strives to be forward-looking, but it is past-referential: it only creates based on things it has already seen, in the form of data presented to it directly or gathered from the world.

There are no measurements of events that have not yet occurred. Thus, in making predictions about future outcomes, models and systems look to the past for patterns. Sometimes, these patterns reveal something about the best choices; if every successful hire for an analyst role has had a degree in mathematics, perhaps such a degree is necessary for the job.

But if an ATS filters out any applicant without such a degree before they can interview — based on the assumption that the people currently occupying the role are uniquely successful in the position and that their qualifications inherently underlie that success — it becomes much harder to say that candidates with mathematics degrees are the best at the job. The best compared to what? Perhaps there are other people who would have done the job better, somewhere in the circular file.
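A toy simulation makes the circularity visible. The setup below is entirely hypothetical: ability is generated independently of degree, so neither group is actually better at the job, yet because the screen forwards only one degree, every successful hire ends up carrying it.

```python
# Toy simulation of the feedback loop described above.
# Hypothetical setup: ability is independent of degree, so neither
# group is actually better at the job.
import random

random.seed(0)

candidates = [
    {"degree": random.choice(["math", "economics"]),
     "ability": random.random()}
    for _ in range(1000)
]

# The screen forwards only math degrees, so only they can ever be hired.
forwarded = [c for c in candidates if c["degree"] == "math"]
hired = sorted(forwarded, key=lambda c: c["ability"], reverse=True)[:20]

share_math = sum(c["degree"] == "math" for c in hired) / len(hired)
print(f"successful hires with math degrees: {share_math:.0%}")  # always 100%
```

Any model trained on these hires would "learn" that a math degree predicts success, even though the data can say nothing about the economics graduates who were never given the chance.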

It’s an oversimplification, but this example is emblematic of a real challenge in automating complex processes. Humans design systems, including automation; to what degree can any model be separated fully from the beliefs of its creators? Decisions about how much weight to give to various metrics, and about what is worth measuring and how to quantify it, introduce human judgments and human prejudices that automation cannot eliminate. A significant dilemma remains in relying on computers to solve human problems: it is difficult to build an unbiased process or model in a society that is itself biased.

When a society or system grants more advantages or opportunities to some groups than others, people from those groups succeed at a rate that outpaces those who received fewer of the same benefits. Determining the necessary qualities or traits for success by looking at who is currently successful makes it very easy to amplify existing discrimination. Using a machine to do it lends a veneer of objectivity, but the underlying system is not free of the biases of the people who built it, or of the society in which it was built.

The silent, unobtrusive automation in technologies like applicant tracking systems is widespread. Automation has driven unprecedented human progress, but it shifts attention toward problems that can be measured in particular ways. Computation is easily applied to problems that are quantifiable, but much of life is murky, nuanced, and hard to capture as data.

ATS software filters candidates based on the degree to which their resume fits the job description, but perhaps some skepticism is called for regarding how well a resume represents an employee. Many of the traits of a valuable worker — curiosity, critical thinking, cognitive flexibility, self-direction — may be shown indirectly, if at all, on a resume. In a society that does not afford truly equal opportunities, how well can a list of past accomplishments hope to portray future potential?

Automation and artificial intelligence are rapidly evolving technologies, and the day may come when superintelligent computers pose a threat to the human workforce (and to humanity in general). In the meantime, there must be more discussion about the threat of invisible, widespread systems that already make decisions impacting humans. After all, there’s no need to worry that robots will steal your job if they prevent you from ever getting one in the first place.
