It's inevitable that artificial intelligence will replace some jobs. But emerging AI systems not only support their human colleagues, they actively collaborate with them, tapping the unique strengths of both.

From the heavy machinery of the industrial revolution to the computer age of digitization and, more recently, the rapid advances in artificial intelligence, new technology has often raised the spectre of job losses. But experts believe that as well as replacing jobs, AI systems are set to augment human efforts, improving efficiency and reducing the burden of arduous tasks that can be offloaded to intelligent algorithms. This means humans cooperating and interacting directly with AI.

"For those of us who do not lose our jobs to automation, we are going to be working with increasingly intelligent software, side-by-side," said J.P. Gownder, vice president and principal analyst at research firm Forrester. "It is going to become applicable to almost every kind of business process you can imagine."

By combining the power of cloud computing with advances in machine learning, AI assistants can take on some of the cognitive burden for human workers, who are then free to focus on the tasks they are better suited to. It's already happening at many businesses: a 2020 Deloitte survey of 1,300 CIOs and senior technology leaders found that only 12% of organizations are using AI to replace workers, while 60% are using AI to assist staff instead.

Collaborative design

Take generative design, for example. Designers and engineers have long relied on computer-aided design (CAD) tools to create 3D "drawings" of components or products in fields such as manufacturing. With generative design, the user feeds parameters such as material type, performance criteria, and cost requirements into the algorithm, which then creates a huge range of alternative models, whether for a machine component or a piece of furniture, that the designer or engineer can select from. The result can be unusual, organic designs that don't match the expected aesthetic of what humans typically produce but fit the specifications, sometimes more efficiently.

In practice, it takes some of the grunt work out of a design process that requires designers and engineers to create many iterations of their work, according to Seth Hindman, strategy manager for Autodesk's generative design and machine learning products. This in turn frees users up to focus on higher-value aspects of their role.

"[Generative design] is incredibly complementary as a collaborator to the engineer, because engineers don't have the time, nor do they even have the inclination, to explore the full design space," he said. "It's about augmenting and focusing the engineer on actually doing engineering."

Autodesk's work on generative design began with the creation of the experimental platform Project Dreamcatcher at its R&D arm. The technology has been piloted by industrial firms such as Airbus, which used it to create lightweight aircraft components, while renowned architect and designer Philippe Starck used the generative design platform in a chair design project. The technology has since made its way into Autodesk's commercial Fusion 360 product, which is used by businesses such as Lightning Motorcycles, a manufacturer of electric motorcycles in San Jose, Calif.
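In spirit, a generative design run is a constrained search of a large design space: the engineer supplies the constraints, the software proposes feasible options, and the human chooses among them. The sketch below is a minimal, hypothetical illustration of that loop; the parameter names, the DesignSpec class, and the toy scoring formulas are invented for this article and are not Autodesk's Fusion 360 API.

```python
# Hypothetical sketch of a generative design loop: sample a constrained design
# space, score each candidate, keep the feasible ones, and surface the best
# options for a human to choose from. All names and formulas are illustrative.
import random
from dataclasses import dataclass


@dataclass
class DesignSpec:
    max_mass_kg: float        # mass budget for the part
    min_safety_factor: float  # required strength margin under load
    max_cost_usd: float       # manufacturing cost ceiling


def evaluate(design: dict) -> tuple[float, float, float]:
    """Stand-in for a real simulation/solver: derive mass, safety factor,
    and cost from the design variables with toy formulas."""
    mass = 0.4 * design["wall_thickness_mm"] + 2.0 * design["lattice_density"]
    safety = 0.8 * design["wall_thickness_mm"] + 1.5 * design["lattice_density"]
    cost = 20 * design["wall_thickness_mm"] + 50 * design["lattice_density"]
    return mass, safety, cost


def generate_candidates(spec: DesignSpec, n: int = 10_000) -> list[dict]:
    """Randomly explore the design space, keep only feasible designs, and
    rank them so the engineer reviews the most promising options first."""
    feasible = []
    for _ in range(n):
        design = {
            "wall_thickness_mm": random.uniform(1.0, 6.0),
            "lattice_density": random.uniform(0.1, 0.9),
        }
        mass, safety, cost = evaluate(design)
        if (mass <= spec.max_mass_kg
                and safety >= spec.min_safety_factor
                and cost <= spec.max_cost_usd):
            feasible.append({**design, "mass_kg": mass, "safety": safety, "cost_usd": cost})
    return sorted(feasible, key=lambda d: d["mass_kg"])[:20]


if __name__ == "__main__":
    spec = DesignSpec(max_mass_kg=3.0, min_safety_factor=2.5, max_cost_usd=120.0)
    for option in generate_candidates(spec)[:3]:
        print(option)
```

A real system replaces the toy evaluate() function with physics simulation and far richer geometry, but the division of labor is the same: the human sets goals and constraints, the algorithm explores, and the human picks from the results.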
Generative design has allowed the bike-maker to punch above its weight by enabling designers to create new parts more quickly and efficiently, said CEO and founder Richard Hatfield. Previously, Lightning's team would design a part and then conduct analyses around strength and other specifications before making modifications, a time-consuming process, he said.

"With the generative design software, it's capable of doing millions of these iterations and simulations with a huge speed improvement compared to what it takes to do it manually. It's like trying to draw a component with a pen and paper versus using software to do iteration," said Hatfield. "It's a big leap forward."

Lightning Motorcycles designers created the swingarm highlighted in blue using Autodesk's generative design software.

For Alexandre Martin, a product designer at Austrian design studio Edera Safety, which creates personal safety equipment such as spine-protector harnesses, using the Fusion 360 generative design tools hands-on has saved significant time in his day-to-day work. "Generative design speeds up my design process tenfold," he said. "It's akin to having a super-efficient creative team doing months of work, then I pick and choose the most effective result."

Collaborating with AI opens the door to new design possibilities that might otherwise have seemed counterintuitive, Martin said. "The AI shows me iterations I may have deemed illogical or simply overlooked, and it really feels like a logical part of the design process," he said.

Human-AI partnerships at work

Many workers already interact with AI in more subtle ways, often without realizing it, from getting instant translations in office software to accepting canned reply suggestions in email. At the same time, interactions with AI assistants are becoming more sophisticated. Voice assistants such as Alexa, Google Assistant, Siri, and Cortana that are familiar in our personal lives have begun to make inroads into the workplace, for instance helping users locate information or book meetings. It means interacting with AI more directly.

With recent advances in call center software from the likes of Google and Amazon Web Services, call center agents can now work with AI assistants that coach them through each customer interaction: surfacing supporting notes and information, discerning customer sentiment, and suggesting responses, all in real time (a rough sketch of what such an assist loop does appears at the end of this section). Rather than automating away the job entirely with a chatbot, the AI helps the agent provide better service, improving customer satisfaction and consequently increasing sales.

"That is a case where the call center person is not being replaced by AI, but is using AI side-by-side to gain a better handle on that interaction," said Forrester's Gownder. "It is not widespread, but it is starting to happen."

It's not just in the office that workers are interacting with AI. Collaborative robots, or "cobots," have become more prevalent in factories, where they operate alongside engineers to hold heavy objects or tools in place, and in warehouses, such as Amazon's huge facilities, where robots help human workers pick and pack goods for delivery.
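To make the agent-assist idea above concrete, here is a minimal, hypothetical sketch of the kind of loop such a tool runs for every customer message. The keyword heuristics and the tiny knowledge base are invented for illustration; commercial products use trained language models rather than word lists, but the shape of the output (sentiment, supporting notes, a suggested approach) is similar.

```python
# Hypothetical agent-assist loop: gauge sentiment, surface a relevant
# knowledge-base note, and suggest an approach the human agent can accept,
# edit, or ignore. Word lists and articles are invented for illustration.
import re

KNOWLEDGE_BASE = {
    "refund": "Refunds are issued to the original payment method within 5-7 days.",
    "shipping": "Standard shipping takes 3-5 business days; expedited takes 1-2.",
}

NEGATIVE_WORDS = {"angry", "frustrated", "terrible", "cancel", "unacceptable"}


def assist_agent(customer_message: str) -> dict:
    """Return real-time coaching for the agent handling this message."""
    words = set(re.findall(r"[a-z']+", customer_message.lower()))
    sentiment = "negative" if words & NEGATIVE_WORDS else "neutral/positive"
    # Surface any knowledge-base article whose topic appears in the message.
    notes = [text for topic, text in KNOWLEDGE_BASE.items() if topic in words]
    suggestion = (
        "Acknowledge the customer's frustration before addressing the issue."
        if sentiment == "negative"
        else "Answer the question directly and offer further help."
    )
    return {"sentiment": sentiment, "supporting_notes": notes, "suggested_approach": suggestion}


print(assist_agent("I'm frustrated, my refund still hasn't arrived"))
```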
Creating AI systems that interact with humans in a natural and reliable way means anticipating and adapting to the needs of human workers, or in other words, learning to be a good team player, according to Julie Shah, associate professor in the Department of Aeronautics and Astronautics at MIT and head of the Interactive Robotics Group at the Computer Science and Artificial Intelligence Laboratory (CSAIL).

"There is enormous potential, not for the [AI] to take over very challenging jobs where we rely on humans to parse ambiguous and uncertain information, but to understand some part of how we do this and support it, provide the right information, and make suggestions so that the human is freed up to do the more challenging aspect of the work," said Shah.

Her research focuses on how AI robots can interact with human workers more effectively, whether that's finding ways to ensure robots deliver the right materials at the right time on an automotive production line or developing intelligent support systems that aid humans in challenging decision-making tasks.

"Everything that I do in my lab is focused on developing AI that fits together as a puzzle piece to enhance human capability, rather than to replace or supplant what a human is doing," she said. "The key technology behind that is the ability to infer what a person is thinking, their mental state, and being able to anticipate what they'll do next, jump in and offer the right information or the right physical materials at the right time."

It means mimicking the complex processes that humans are adept at carrying out, for instance by developing algorithms that predict and anticipate the movements of workers.

"So much of my work is focused on 'how do you provide the right information at the right time in the right sequence? How do you provide the right parts on the assembly line at the right time in the right sequence?' They are task-allocation scheduling problems; that is what makes our world work," said Shah.

One research project MIT CSAIL conducted at the Beth Israel Deaconess Medical Center investigated human willingness to trust AI in the workplace. It involved using an AI system hosted in a humanoid Nao robot to provide scheduling suggestions on a hospital labor ward, a setting where continuous split-second decisions are required to coordinate care.

MIT researchers created an AI system to provide recommendations for room allocation and nurse assignment for C-sections and other procedures at Beth Israel Deaconess Medical Center.

In charge is the head nurse, who is tasked with coordinating 10 nurses, 20 patients, and 20 rooms at once. Scheduling involves a huge number of variables, with head nurses having to predict factors such as when a woman will arrive in labor and how long the labor will last. "[They] are basically performing an air traffic controller role on a hospital floor, deciding which patients go to which rooms and which nurses are assigned to which patients," said Shah.

The AI system was trained to replicate the scheduling carried out by the head nurse, with the ability to anticipate room assignments and suggest which nurses to assign to a particular procedure. The nurse could query the robot, which responded with suggestions using text-to-speech software. During the live pilot demonstration, nurses accepted the AI's recommendations 90% of the time and rejected "low-quality" suggestions at the same rate. Feedback from nurses was positive, with those involved highlighting benefits for training new staff and for sharing workloads.
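Shah's framing of these problems as task-allocation and scheduling can be illustrated with a deliberately simplified example. The greedy heuristic, names, and data below are invented for this article; the MIT system learned its recommendations from how the hospital's own nurses actually schedule, rather than from a hand-written rule like this.

```python
# Toy task-allocation sketch: given waiting patients, free rooms, and nurses,
# recommend an assignment the charge nurse can accept or override.
# Data and heuristic are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Nurse:
    name: str
    patients_assigned: int = 0


@dataclass
class Patient:
    name: str
    urgency: int  # higher means the patient needs a room sooner


def recommend_assignments(patients: list[Patient], free_rooms: list[str],
                          nurses: list[Nurse]) -> list[tuple[str, str, str]]:
    """Greedy recommendation: take the most urgent patient first, place them
    in the next free room, and assign the least-loaded nurse."""
    plan = []
    for patient in sorted(patients, key=lambda p: p.urgency, reverse=True):
        if not free_rooms:
            break
        room = free_rooms.pop(0)
        nurse = min(nurses, key=lambda n: n.patients_assigned)
        nurse.patients_assigned += 1
        plan.append((patient.name, room, nurse.name))
    return plan


patients = [Patient("Patient A", urgency=3), Patient("Patient B", urgency=5)]
nurses = [Nurse("Kim", patients_assigned=2), Nurse("Lopez", patients_assigned=1)]
for who, room, nurse in recommend_assignments(patients, ["LDR-4", "LDR-7"], nurses):
    print(f"Recommend: {who} -> room {room}, nurse {nurse}")
```

Even in this toy form, the output is a recommendation rather than an action, which mirrors how the hospital pilot worked: the human stays in charge of the final call.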
In AI we trust?

As more workers interact with AI in their jobs, both employees and their organizations may question when it's appropriate to rely on an algorithm to make major decisions and when a human's contextual knowledge is of greater value. Each has its relative strengths, and an AI system can sidestep some of the biases that a human may not even be aware of.

"The algorithm might be able to incorporate information that would be costly for the human to gather," said Susan Athey, Economics of Technology Professor at Stanford Graduate School of Business and an associate director of the Stanford Institute for Human-Centered Artificial Intelligence.

She raised the example of screening résumés from job applicants. "A human looking at a résumé might have an overall stereotype of a university from a few people, but an AI might be able to read that same information and come up with a more accurate assessment of what that particular university means," she said. "Maybe the algorithm knows that it's a weak state university, but the engineering program is actually highly selective, and the human doesn't take the time to gather that information."

At the same time, AI algorithms are fallible and may have unintended biases built into them, so transparency is important to ensure that humans know how far an algorithm can be trusted.

"You would not want an algorithm to always override [human decisions where they disagree]: it is contextual," said Athey. "You need to build algorithms that communicate enough information in a way that humans can understand whether they should listen to the algorithm or listen to themselves, and how they incorporate the information from the algorithm."

Shah makes the point that making a system trustable is not the same as making it trustworthy. In the aviation industry, for instance, numerous accidents have resulted from pilots' reliance on imperfect cockpit automation systems.

"We know that it is relatively easy to engender inappropriate trust in a system," she said. "There are small things you can do: if you make it more anthropomorphic, if you have it speak to you rather than provide you with a text read-out of the instruction, [then] people are more likely to comply with a system's recommendations and trust it.

"It is not a matter of making these systems trustable, [but of] helping a person calibrate their trust in a system appropriately; [it's important to] understand when it is making decisions within the bounds of its competence and when it is outside of those bounds, so that the person can shore up the machine's capability."

She added: "We often ask, 'How willing is a person to trust the system?' That's the wrong question to ask. The right question to ask is: 'Is the system trustworthy?' There's a trustable system versus a trustworthy system."

Why not just automate jobs completely?

Why the need for humans and AI to cooperate? Why not automate jobs wholesale? One answer is that, for now at least, it is not technically possible in most cases. Humans still perform significantly better at certain tasks, according to researchers. A key advantage that humans hold over AI is the ability to use intuition to solve problems they have not encountered previously, drawing on information from a range of sources. In other words, we can use our common sense.
"Humans can extrapolate circumstances they haven't seen before very well because they have a lot of common sense," said Stanford's Athey. "An AI and a human are going to have different strengths. [Humans] can make sure their predictions are not wildly off, while an AI responds only to the data it has."

AI, on the other hand, excels at processing volumes of data that a human brain would struggle with. "AI can look at a lot more data; more data about a situation, as well as potentially incorporate a larger set of outcomes than a human will have encountered in their own personal experience," Athey said.

Philippe Starck worked with furniture maker Kartell and Autodesk Research using generative design to create "the first chair in production created by artificial intelligence in collaboration with human beings," according to Autodesk.

While AI may eventually be equipped to carry out a wider range of tasks, the technology is currently limited in what it can achieve without human input. Even in selecting which data is provided to the machines, we are structuring the world. When training an AI system on certain images, it is still humans who take the photos and frame them around objects of interest. "It is a very different problem to recognize things in the environment when a robot is freely navigating and the images are not framed up with our eyes," said MIT's Shah.

"We as humans do have a unique capability that AI will not have for the foreseeable future, which is the ability to take an unstructured problem and structure it," said Shah. "Once we have structured a problem, AI is very valuable and performs quite well, but I think we often underestimate the effort that goes into structuring a problem for AI today."

"There is a fallacy out there that AI can be human-like," said Gina Schaefer, a managing director with Deloitte Consulting and head of the consultancy's intelligent automation practice. "We are so, so, so far away from an AI being human-like. It can do such foundational things, such amazing things, but it also lacks the abilities that a five-year-old child might [have] today, to understand context and other types of things. And so that's the beauty of the interaction," Schaefer said. "What has been overlooked is that, while you can replace humans with some of this technology, the benefit is to enable humans to do what is uniquely human about their job."

Preparing for a human-AI workforce

If implemented correctly, AI can be a boon for both employers and employees, with the latter able to spend less time on repetitive work. "In the ideal situation, these technologies are helping make better decisions, giving more insights, helping people execute certain tasks on their behalf automatically, or automating processes that are clearly not something that employees want to engage in anyway," said Forrester's Gownder.

While businesses see the advantage of workers interacting with AI, it can require training and a shift in skills, with greater emphasis on creativity and complex reasoning as jobs are adapted. It is important for employers to actively support workers as they interact with AI more frequently, Gownder stressed. "Are your employees equipped with the right culture, skills, and inclinations to be able to start working with increasingly intelligent software? A lot of people may not want to do that or may not have the skills; they may be intimidated by the technology," he said.
Making the transition will require a major adjustment for many organizations: 59% believe it is important to redesign jobs to integrate AI in the next 12 to 18 months, according to the Deloitte study, but only 7% say they are ready to do so. And only a small proportion of respondents (17%) are making significant investments in reskilling.

"It is quite possible to craft a better employee experience by investing in AI and automation, but it is also possible to do this wrong, as with anything," Gownder cautioned.

What is clear is that AI will touch all sorts of jobs more directly in the coming years. "Although it is leading-edge behavior for many, it is going to become extremely important soon," said Gownder. "We are all going to see our jobs transform in the next decade by intelligent software and automation, and we need to start getting ready for it."