
In the foreseeable future, the Turing test, in which a human judge tries to determine whether an unseen communicator is a human or a computer, will undoubtedly acquire a significant variant: a test to see if a computer can determine whether the unseen communicator is a human or another computer.

Brett Frischmann, professor at Benjamin N. Cardozo School of Law, and Evan Selinger, philosophy professor at the Rochester Institute of Technology, cite this need in their book, Being Human in the 21st Century. Frischmann further argues that changes in technology and our environment are slowly but surely making humans more machinelike.

It seems intuitively obvious that computers are becoming more humanlike. The question is, are humans really becoming more robotic?

The general feeling at the start of the computer revolution was that machines would perform the menial aspects of production and processing, while humans would be free to handle the more complex areas and conditions that algorithms cannot handle. Not only has it not turned out that way thus far, but artificial intelligence (AI) experts expect this prognosis to be turned on its head in the future.

Computers are already driving cars, performing medical procedures, and beating all human opponents at Jeopardy!, Go, and chess. Experts' predictions vary about AI, the current evolutionary path for computers and robots. In the July 17, 2017, issue of Fox News Tech, Elon Musk, founder of SpaceX and cofounder of Tesla, contends that because AI will be able to beat humans at everything by 2030, and could decide simply to eliminate humans, preemptive regulation of AI development is required. In the May–June 2016 issue of Harvard Magazine, Richard Freeman is concerned more with the dire economic fate of workers if the ownership of robots is not shared between business owners and workers. In a July 14, 2015, article in Nature News, Steve Wozniak, cofounder of Apple, is probably the most sanguine of all. While conceding that developing AI technologies carries some dangers, he feels that AI robots will probably treat humans as kindly as humans treat their pets.

Whether you subscribe to the dire predictions of Elon Musk, the economic warnings of Richard Freeman, or the optimism of Steve Wozniak, these scenarios generally take place fairly well into the future. A far more critical period may be upon us right now.

This will be a danger period simply because it is a transitional one. More and more functions formerly performed by humans will be taken over by AI. Historically, during transitional periods, people are confused, and events may occur that are not in the public good. Here are some trends that do presage an unhealthy, robotic role for humans:

  1. Humans are becoming unwilling, unable, or unauthorized to handle work situations that are not covered by existing procedures, protocols, or algorithms. Historically, with computer processing, handling these situations was the province of humans.
  2. Humans in large numbers are already performing the menial tasks necessary for preparing data for input to computers, just the opposite of what was supposed to happen. There is no evidence that this would abate under AI, and even if it does, it may adversely affect human privacy.
  3. Partly because of the stultifying effects of Items 1 and 2, humans, in general, lack the critical thinking skills necessary to detect the machinations of demagogues and power brokers, who take advantage of technology to further their individual and special-interest goals rather than the public good.

Here are some personal examples of Items 1 and 2 above:

Case 1: In 2016, in response to a hefty increase in my insurance premium resulting from a claim I submitted, I switched insurance carriers with no lapse in coverage. For some reason, a form indicating the switch was either not sent to the department of motor vehicles, or lost at the DMV. Both my wife’s car and mine are on the same policy.

Several months later, I received a letter from the DMV stating that I’d had a lapse in liability insurance coverage on my car, and could either pay a $50 fine or contest the finding. The letter listed a website address for contesting the fine. The problem was, the website provided no means to contest the finding, just an opportunity to pay the fine. I then called the department directly to try to straighten this out. As most people can attest, it can be very difficult to reach a live person at some government agencies. The representative I was finally able to reach conceded that there had been no lapse in coverage, but the DMV computer could not accept the correction because the switch in coverage had occurred more than six months in the past (which it hadn’t). She suggested I claim the switch in coverage occurred at a more recent date, and that I pay the lapse-in-liability-coverage fine.

Lying about the date I had switched insurance companies did not seem like a good idea. I then sent a certified letter to the state’s secretary of transportation explaining the whole situation. No response. Fearing I was running out of options, I sent an e-mail to a DMV complaint line. An agent named Elizabeth responded, asking for more information. After I provided this, she wrote, “I’ll send an e-mail.” I admit I was skeptical, doubting that her e-mail could be more effective than my certified letter. Shortly thereafter, she sent me a note saying it was all resolved.

Or so I thought. A week later I received another letter from the DMV threatening to come to my home and forcibly remove the license plates from my wife’s car if I did not pay the $50 lapse-in-liability-coverage fine. I again called the DMV office, and this time Elizabeth answered the phone. “I’ll send an e-mail,” she said cheerfully. Shortly thereafter, I received a note from the DMV that the problem had been cleared.

Case 2: Some years ago the Internal Revenue Service associated my address with that of a construction company that had, unfortunately, neglected to file a tax return for the previous year. The letter levied a hefty fine. I tried calling the IRS, but after being placed on interminable hold, I finally gave up. I then wrote a certified letter to the IRS, pointing out that this was a residential address and that there was not, nor had there ever been, a construction company at this address. No response. The next month the IRS sent another letter doubling the fine. When I mentioned to a friend that this was causing me a great deal of angst, he said, “Oh, you have to call Mrs. G—- at the Greensboro IRS office.” I called Mrs. G—-, who investigated the matter and simply said, “It’s all resolved.”

One could argue that these two situations are simply anecdotal information illustrating bureaucratic inefficiency rather than a tendency toward robotization of the workforce. However, these situations are not uncommon. In addition, of all the personnel I contacted in regard to these two problems, Elizabeth and Mrs. G—- were the only ones interested in and capable of redressing obvious injustices in a satisfactory way. What happens when Mrs. G—- retires and Elizabeth finds an employer more appreciative of her e-mail-writing skills? In Case 2, many might opt, and perhaps not unwisely, to simply pay the fine, assuming it is not excessive, to avoid the stress and huge amount of time and effort involved in resolving the issue.

A common implementation of robotization in the business world is in cost avoidance, particularly in the customer service area. The one-on-one encounter of a customer with a service agent would seem to be the most expensive method of problem-solving, given the attempts by businesses to curtail or eliminate these interactions. To reduce the cost of these transactions, businesses have tried understaffing customer service departments (resulting in long customer wait times), charging for customer service, using prepared frequently asked questions (FAQ) lists, requiring e-mails for presenting problems, not publishing the telephone numbers for direct interaction with service personnel (a practice that can be thwarted via an Internet search), and encouraging the use of online chat exchanges. The last item is the most conducive to turning service personnel into robots. Chat room agents generally handle the problems of two to six customers simultaneously.

Given the agent’s scripted responses, it is counterproductive for the customer to vent his or her frustrations via the chat exchange. For example, in response to the message “I hope you’re doing well today,” one is tempted to respond with something like, “I’d be doing a lot better if your software did what it’s supposed to do.” Not only does that take a lot of keystrokes, but it will prompt a scripted response like, “I’m sorry to hear that,” followed by a hiatus while the agent goes on to handle another customer’s problem.

Frischmann cites electronic contracts as an example of consumers’ robotic responses. “You see a little button that says ‘click to agree’ and what do you do? You click. Because it’s a stimulus response,” he says. “It’s easy to dismiss those things. But…every day, you and I and millions of other people routinely respond to a stimulus and click and go without understanding what we’re getting ourselves into.” This indicates we are behaving like machines.

In considering what distinguishes humans from machines, Frischmann and Selinger cite the human combination of common sense, rational thinking, and irrational thinking. Frischmann points out that as humans, our emotions sometimes make us behave irrationally.

It appears that historically, when the balance between emotions and rational thinking gets out of whack, emotions generally prevail. Rational thinking eventually takes over, but the length of that interim can be devastating. Hence the urgency for heightening our critical thinking skills.

This imbalance could be happening right now, and the march toward greater use of AI could be involved. In a PBS NewsHour report, Carl Frey of Oxford University maintained that in the 2016 U.S. presidential election, anxiety over the rapid increase of automation was strongly linked to votes for Donald Trump.

Of course, as a society, we can always ignore the loss-of-control warnings of Elon Musk, dismiss the economic dangers cited by Richard Freeman, and fail to beef up our critical thinking skills during this transitional period. We can instead hope for the eventual governance, sooner rather than later, of benevolent robots. Just don’t bet on that last one.


Bill Carr is a writer and education specialist who is originally from New York and now lives in North Carolina. His essay, “Decision Time,” appeared in the Riggwelter literary journal. He has published several articles on online education and the computer industry, and worked for many years at IBM as an education specialist. Many of his short stories have been published in literary journals. He received his master’s degree from Brooklyn College, and currently serves as chairperson of the North Carolina B’nai B’rith Institute of Judaism.