Robot Use: New Possibilities and Challenges

Robin R. Murphy, Ph.D., Raytheon Professor of Computer Science and Engineering, Texas A&M University, was the first recipient of a National Science Foundation grant using the term “rescue robots.” CREDIT: Jan Dufek

While robots, drones, and machine learning take over many dangerous, dull, and demanding jobs, issues surrounding data security, funding priorities, and muddled policies remain to be solved.

By Sandra Guy, SWE Contributor

Alina Bartley can recall as if it were yesterday how nervous she would be, suited up in a mask, harness, and headlamp so she could climb inside a chemical plant’s dark, nitrogen-purged vessel to inspect nuts, bolts, welds, and platings.

“I’m scared of heights … and I’d often be 100 feet up in the air,” said Bartley, a SWE member who conducted such inspections for four of her six years working as a chemical engineer and a project engineer at an oil company in Houston. “It was dangerous work (the inspector could suffocate if her mask or breathing apparatus failed or fell off), and all you got at the end of the day were some measurements and pictures using a handheld camera.”

Bartley, who holds a bachelor’s degree in chemical engineering and an MBA, is now a director specializing in supply chain management for Alvarez & Marsal Corporate Performance Improvement consultancy. She believes that using robots for such hazardous jobs “is really exciting” and can lead to safer, more accurate, efficient, and cost-effective results.

“We can’t put people at risk to do that kind of work in the 21st century,” said Bartley.

Perhaps that is why robots have become familiar sights in other kinds of dull, demanding, or dangerous jobs, including in foundries; on auto assembly lines; in wildfires; and in aerial, land, and water search and rescue operations. These days robots weld, lift heavy equipment, vacuum-clean sewers, stir molten metal, collect and package radioactive waste, and do repetitive tasks in toxic dust-filled rooms. Drones — a type of robot — fly over disaster sites to spot survivors, while specially designed robots search through debris in ways that animals and people cannot.

Robots, drones, and other types of unmanned autonomous vehicles use laser and/or radar scanners for navigation, thermal imaging to spot survivors, and video cameras to record and broadcast details back to remote crews.

Alina Bartley, a director with Alvarez & Marsal Corporate Performance Improvement, conducted inspections during four of the six years she previously worked as a chemical engineer and a project engineer for an oil company in Houston.

Rescue robots and unmanned systems face headwinds

Emergency responders in the United States who use drones for rescues have run into a thicket. That is because in 2020 the Department of Justice (DOJ) banned using federal grants to buy drones and other unmanned aerial systems from foreign groups deemed threats.

The DOJ guidance memo, dated Oct. 5, 2020, said the effort sought to “promote the security of unmanned aerial systems by requiring applicants for Office of Justice Programs loans to prove they can mitigate any cybersecurity and privacy risks posed by these systems, and that the applicant has a plan to address any civil liberties-related complaints that could arise.”

The Department of Justice has made the ban official: No agency may use DOJ funds for any unmanned aircraft manufactured by a “covered foreign entity,” wrote Tannyr Watkins, public affairs specialist with the Office of Justice Programs, in a March 21 email.

A “covered foreign entity,” in DOJ lingo, is one the department deems “subject to or vulnerable to extrajudicial direction from a foreign government,” a category that includes the world’s leading drone manufacturer, Da-Jiang Innovations (DJI), Watkins wrote.

Adam Welsh, DJI’s head of global policy, however, told CNBC that the Shenzhen-based company requires users to opt in before any of their data is shared with DJI and prevents its drone data from being transmitted over the internet, citing these measures as evidence of its products’ security.

The ban hit the rescue community hard because DJI products were the most popular and widely used. DJI dominates the global drone-maker market with an estimated 70% market share. The global drone market is expected to grow from $30.6 billion in 2022 to $55.8 billion by 2030, according to a report by Drone Industry Insights. Meanwhile, U.S. rescue and law-enforcement communities are now turning to American-made drones, according to industry reports.

Some private companies, such as Drone Amplified in Lincoln, Nebraska, are working with non-Chinese drone manufacturers to adapt their systems to different drones. Drone Amplified’s co-founder and CEO Carrick Detweiler, Ph.D., has said he supports building a domestic U.S. drone industry.

“We as citizens have to ‘up’ our expectations and demand accountability from our political leaders and make the market incented.”

– Robin R. Murphy, Ph.D., Raytheon Professor of Computer Science and Engineering, Texas A&M University

But for now, American drones cost more than ones made by DJI, they don’t have the same capabilities, and they take longer to produce and ship, said Robin R. Murphy, Ph.D., Raytheon Professor of Computer Science and Engineering at Texas A&M University. “The way we do emergency management in the United States is ad hoc,” Dr. Murphy said. “It’s a low-volume and low-profit market. There’s no centralized purchasing among municipalities or state agencies.”

Dr. Murphy was awarded the first National Science Foundation grant using the term “rescue robots” in 1996, and she led the first use of small unmanned aircraft systems (drones) in a disaster, in Hurricane Katrina’s aftermath in 2005. She also has led or participated in 14 drone deployments to disasters in the past 18 years.

From her vantage point, the U.S. Federal Emergency Management Agency, which operates under the Department of Homeland Security, needs to take action to set minimum rescue-drone standards; help local government agencies pay for rescue drones; and, as a result, encourage drone makers to compete and innovate to win funding grants.

Dr. Murphy points out that high-tech, camera-equipped search tools should be available in many shapes and sizes. One example is a snakelike device to look inside piles of rubble without crushing anyone underneath. A rescue tool should be able to send data in short, easy-to-understand bursts designed for each person making split-second decisions in a rescue, Dr. Murphy said. Emergency responders on the ground need video and data on where to dig, and structural engineers need an aerial 3D view of damage. Two human operators are better than one in picking out and disseminating the drone’s or robot’s transmissions, she said.

A robot’s artificial intelligence (AI) can add another timely layer of help in desperate situations, Dr. Murphy said. “AI could catch indicators that the rescuers are fatigued and increase the font size being used for messages or reinforce going through each step of a process, rather than taking shortcuts.”

“We as citizens have to ‘up’ our expectations and demand accountability from our political leaders and make the market incented,” she said.

Ambika Dubey, software engineer, Microsoft Corp. CREDIT: Ambika Dubey

Data collection raises questions

Beyond mundane tasks and rescues, robots’ futures have become increasingly controversial. What kind of information is being collected, and with whom is it being shared?

The Chicago Police Department started a secretive drone program using off-budget cash to pay for the new technology, the Chicago Sun-Times reported on May 12, 2021. Leaked emails revealed that police intended to use the drone to search for missing persons, take crime-scene photos, and be involved in unspecified “terrorism-related issues.”

In Los Angeles, police began using drones in 2015 for public safety and were able to use drones to detect brush-fire hotspots in 2017. Although the Los Angeles Police Department consulted with the American Civil Liberties Union (ACLU), the department’s drone use has continued to draw opposition from critics who say its privacy measures are not strict enough. Police officials say, however, that drones are used only by SWAT officers, hazardous materials specialists, and bomb squad personnel. Drone use also requires approval through the chain of command.

Hector Villagra, J.D., executive director of the ACLU of Southern California, said, “The LAPD must put in place strong policies to ensure that the use of drones is limited to the narrow and approved purposes; that binding policies on use, access, retention, and sharing of information gathered from drones are in place to protect privacy; and that there is an auditing and oversight mechanism in place to ensure those policies are followed.”

On the other hand, despite the need for oversight and privacy protections, public safety advocates say some laws hinder police from using drones to monitor public events, inadvertently allowing safety threats to go undetected. One example is an Illinois state law that prevented officials in Highland Park, a suburb north of Chicago, from using a surveillance drone during the city’s 2022 Fourth of July parade, according to a March 24 news report in the Chicago Sun-Times. During that parade, a rooftop sniper killed seven people and wounded 48 others.

Added to these concerns are worries about individual and state-sponsored hackers who have learned to evade cybersecurity tools.

Robots infused with AI and data pose a conundrum

Ambika Dubey, a SWE member and a software engineer with Microsoft Corp., watched in February as the computer giant eliminated the robotic autonomous-controls team she worked on. The team’s goal was to use reinforcement learning so that manufacturing-line robots could explore possible actions and learn over time which ones lead to the most desirable outcome.

The team was eliminated shortly after Microsoft announced Jan. 18 that it would cut 10,000 jobs, or about 5% of its workforce, because of inflation and a slowing global economy. It was the company’s largest layoff in more than eight years.

Dubey said a robot doing specific tasks can be advantageous. For example, a robot vacuum cleaner follows a floor map, remembers it for next time, stops when it detects edges to prevent the robot from falling down a flight of stairs, and empties the dust and dirt that it has collected in a receptacle.
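
Dubey’s vacuum example boils down to a tightly constrained control loop. The minimal sketch below is hypothetical, not drawn from any real product; the class and sensor names (RobotVacuum, cliff_detected) are illustrative stand-ins for the behavior she describes.

```python
# Hypothetical sketch of a constrained robot-vacuum control loop: follow a
# remembered floor map, stop at detected edges, and dock when the bin is full.
# All names here (RobotVacuum, cliff_detected) are illustrative.

class RobotVacuum:
    def __init__(self, floor_map):
        self.floor_map = floor_map   # map remembered from a previous run
        self.bin_full = False        # set True when the dust receptacle fills

    def cliff_detected(self):
        # Stand-in for an infrared edge sensor; a real robot reads hardware here.
        return False

    def step(self, cell):
        if self.cliff_detected():
            return "stop"              # avoid falling down a flight of stairs
        if self.bin_full:
            return "return to dock"    # empty collected dust and dirt
        return f"clean {cell}"

    def run(self):
        return [self.step(cell) for cell in self.floor_map]


print(RobotVacuum(["kitchen", "hallway", "bedroom"]).run())
```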

“A lot of the dangerous, misbehaving technology of robots tend to be the ones that are more general purpose,” Dubey said. “The general-purpose assistive AI trying to help tends to be the tech that I get most frustrated with. We’ve set up our lights and our thermostat to connect to Google Home, but every time I try to open or close the blinds, the voice assistant will answer, ‘Sorry, I can’t control the volume on that.’

“If you constrain the scope of the problem, it’s much easier to create a robot or AI that will be extremely useful for that use case,” Dubey said. “A perfect use case for this technology would be a well-defined problem, where the user could craft clearly understandable goals, rather than an open-ended, all-knowing and all-doing robot.”

In fact, Dubey and her now-dissolved Microsoft team had been building a platform through which users could train an AI — over many iterations in a simulator — to learn what actions to take to reach user-defined objectives. With the project cut in February, Dubey now works in Azure’s Internet of Things (IoT) division.
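
The trial-and-error training loop Dubey describes can be pictured with a toy example. The sketch below is only a guess at the general pattern, not the team’s actual platform: a tabular Q-learning agent explores a five-cell simulated track over many episodes and gradually learns which moves reach a user-defined goal cell.

```python
import random

# Toy tabular Q-learning loop, illustrative only (not the Microsoft platform).
# An agent on a five-cell track explores moves over many simulated episodes and
# learns which actions reach a user-defined goal cell.

GOAL, N_STATES, ACTIONS = 4, 5, (-1, +1)   # goal cell, track size, moves
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # learning rate, discount, exploration

for episode in range(500):                 # "many iterations in a simulator"
    state = 0
    while state != GOAL:
        # Explore a random action occasionally; otherwise exploit the best known one.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else -0.01
        # Nudge the value estimate toward the reward plus discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy should step right (+1) toward the goal.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```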

Separately, Goldman Sachs estimated in a November 2022 report that “a $6 billion market (or more) in people-sized-and-shaped robots is achievable in the next 10 to 15 years.” This market “would be able to fill 4% of the projected U.S. manufacturing labor shortage by 2030 and 2% of global elderly care demand by 2035,” according to Goldman Sachs.

Currently, companies ranging from Figure to Tesla to Agility Robotics are working on humanoid robots.

“A machine cannot solve a problem that hasn’t been solved. A machine wouldn’t know which items to test to make a vaccine, but it could run 100 million [mixtures] to see what worked.”

– Pamela Rutledge, Ph.D., director, Media Psychology Research Center

AI enters the workday

In early March of this year, artificial intelligence technology entered the daily work world through generative pretrained transformers (GPTs), a type of AI known as a large language model, or LLM.

The technology is a neural network machine-learning model that needs only a brief text or visual prompt from the user to generate results that are more pertinent, accurate, and sophisticated than those of its predecessors.
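
To give a sense of how little input such a model needs, the sketch below sends a single short prompt through the openai Python package as it existed around the time of this writing; the API key placeholder, model choice, and prompt are illustrative examples only, not part of any product described in this article.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# A single short prompt is enough to elicit a long-form, conversational answer.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user",
         "content": "In two sentences, explain how rescue drones help find survivors."},
    ],
)

print(response.choices[0].message["content"])
```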

The technology comes from OpenAI, founded in December 2015 by Elon Musk; Sam Altman; Greg Brockman; Ilya Sutskever, Ph.D.; Wojciech Zaremba, Ph.D.; and John Schulman, Ph.D. as a nonprofit research company. Musk left the company’s board in 2018 to avoid a conflict of interest with the AI research that Tesla is conducting.

Since then, OpenAI has transformed into a closed-source, for-profit company. In November 2022, OpenAI released ChatGPT to generate humanlike texts in a conversational way. The dialogue format lets ChatGPT write essays and answer follow-up questions, but also enables the technology to admit to its mistakes and challenge incorrect premises.

Microsoft incorporated the latest version — GPT-4 — into its updated Bing search engine in early February. Then, on March 14, OpenAI released GPT-4 to the public.

Microsoft co-founder Bill Gates wrote in his March 21 blog that artificial intelligence — and the sudden proliferation of chatbots — “is as revolutionary as mobile phones and the internet.”

One of OpenAI’s first partnerships aims to help people with visual impairments. Be My Eyes, a free mobile app that lets visually impaired and low-vision people ask sighted people to describe what their phones see, introduced a “Virtual Volunteer” that offers AI-powered help at any time.

Users can send images, such as a photo of their refrigerator’s contents, through the app to the Virtual Volunteer. The Virtual Volunteer can identify what is in the refrigerator, offer recipes, and give step-by-step directions on how to prepare a dish using the ingredients. The Virtual Volunteer can also read maps, describe how a dress looks, and direct the user to a workout machine at a gym.

Microsoft Office, now part of Microsoft 365 and including Word, PowerPoint, Excel, and Outlook, will also soon be updated with the latest OpenAI technology to power its “Copilot” AI assistant.

The AI-powered Copilots can generate emails, documents, and slide decks from knowledge the software has gained scanning corporate files and listening to conference calls, Microsoft announced on March 16. Microsoft, parent company of LinkedIn, is an investor in OpenAI.

OpenAI launched plug-ins for ChatGPT on March 24, letting the chatbot retrieve answers from the web and interact with specific sites and online services. Previously, ChatGPT could pull information only from its training data.

Pamela Rutledge, Ph.D., director, Media Psychology Research Center. CREDIT: Cathy Gregory
Lokesh Ramamoorthi, lecturer, software engineering and cybersecurity, University of Miami. CREDIT: University of Miami, College of Engineering

Google responded by saying it, too, is testing its own AI technology that can help people write in Gmail and Docs and, eventually, in its Chat, Meet, Slides, and Sheets platforms.

Of course, such advances raise issues of plagiarism; copyright infringement; potential privacy violations; mischief-making in deepfake videos, audio, and text; questions about data collection methods; and the unintended biases that data incorporate.

Pamela Rutledge, Ph.D., director of the Media Psychology Research Center, an independent organization in Las Vegas that promotes the positive use and development of media and technology, said the underlying data require qualitative analysis to confirm accuracy, especially given structural discrimination.

“Most of what is happening is in a black box, and no one is aware of [the accuracy of the data being fed to the AI technologies],” Dr. Rutledge said. “AI can process really well, quickly, and impressively, but it doesn’t have judgment.”

No matter how many mundane or even marginally intellectual tasks AI technology can perform, Dr. Rutledge said job opportunities of the future will “lie in the places where you’re doing something uniquely human.”

“If the latest computer technology can write code, God bless them. That’s a repetitive task,” Dr. Rutledge said. “Think how cool it will be if people were freed up to spend more time making the application work better (rather than spending that time writing the code for the application).”

People will always be needed to visualize, think ahead, and anticipate problems, Dr. Rutledge said.

“A machine cannot solve a problem that hasn’t been solved,” she said. “A machine wouldn’t know which items to test to make a vaccine, but it could run 100 million iterations of [mixtures] to see which worked.”

Lokesh Ramamoorthi, a lecturer in software engineering and cybersecurity in the department of electrical and computer engineering at the University of Miami, regards AI favorably.

“As a teacher and a technologist, I see [AI development] as a positive thing,” Ramamoorthi said. “It is trying to push humans into a more intelligent world. Yes, unfortunately, there will be jobs that may become obsolete. However, humans are very adaptive. It pushes us further forward.”

Ramamoorthi said he has already saved time writing letters and enhancing his course content using ChatGPT. “It gives me a beautiful template,” he said. “Once I get the frame, I review it and modify it to my needs. I sometimes use it to create interesting question prompts so that my lectures can be more engaging.

“But I use ChatGPT as my assistant to enhance my tasks — not as an expert consultant,” Ramamoorthi said. “For my course assessments, I moved to paper-based exams and cloud-based labs. If the students cheat, they are responsible for the consequences,” he said.

The students are going to be in the workplace where AI technologies such as ChatGPT exist, Ramamoorthi noted. “I teach my students how to use these technologies effectively and the limitations and biases of these tools.”

As for a possible next step — a sentient robot or AI — the very idea raises incredulity and deep concerns.

Dubey said part of the complexity involves defining “sentience” in an AI setting. The practicality of creating a sentient AI goes back to the idea of constraining the problem, she said. “Even among humans, our individual knowledge is constrained by our life experiences and domain expertise, so creating a sentient AI with no constraints at all would be an incredible challenge.”

COPYRIGHT 2023 SWE MAGAZINE. ALL RIGHTS RESERVED.