Purdue Researchers Build New Models to Detect Human Trust in Smart Machines

New “classification models” detect how well humans trust the intelligent machines they collaborate with, a step toward enhancing teamwork and the quality of those interactions.

How should intelligent machines be designed so as to “earn” the trust of humans? New models are informing these designs. (Purdue University photo/Marshall Farthing)

The long-term goal of the general field of research is to design smart machines that can change their behavior to improve human trust in them. The new models were created in research led by assistant professor Neera Jain and associate professor Tahira Reid, in Purdue University’s School of Mechanical Engineering.

Intelligent machines and, more broadly, intelligent systems are becoming increasingly common in the everyday lives of humans. As humans are increasingly required to interact with intelligent systems, trust becomes an important factor for synergistic interactions.

Neera Jain, Assistant Professor, School of Mechanical Engineering, Purdue University.

For instance, industrial workers and aircraft pilots regularly interact with automated systems. Humans will sometimes override these intelligent machines unnecessarily if they believe the system is faltering.

“It is well established that human trust is central to successful interactions between humans and machines,” Reid said.

The scientists have created two types of “classifier-based empirical trust sensor models,” a step toward enhancing trust between humans and intelligent machines.

The work aligns with Purdue’s Giant Leaps celebration, which acknowledges the university’s global advancements in algorithms, AI, and automation as part of Purdue’s 150th anniversary. This is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world challenges.

The models use two methods that offer data to gauge trust: electroencephalography and galvanic skin response. The first records brainwave patterns, and the second tracks variations in the electrical characteristics of the skin, offering psychophysiological “feature sets” correlated with trust.
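
The article does not detail the exact features extracted from these signals, so the sketch below is only a minimal illustration of how windowed EEG and GSR features might be computed; the sampling rate, frequency bands, and statistics are assumptions, not values from the study.

```python
# Illustrative sketch only: windowed feature extraction from EEG and GSR signals.
# Sampling rate, frequency bands, and statistics are assumptions, not study values.
import numpy as np
from scipy.signal import welch, find_peaks

FS_EEG = 128  # assumed EEG sampling rate (Hz)

def eeg_band_powers(eeg_window, fs=FS_EEG, bands=((4, 8), (8, 13), (13, 30))):
    """Mean theta/alpha/beta band power for each EEG channel (array: channels x samples)."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(256, eeg_window.shape[-1]), axis=-1)
    features = []
    for low, high in bands:
        mask = (freqs >= low) & (freqs < high)
        features.append(psd[:, mask].mean(axis=-1))  # one value per channel per band
    return np.concatenate(features)

def gsr_features(gsr_window):
    """Simple statistics of a galvanic skin response window (1-D array)."""
    peaks, _ = find_peaks(gsr_window, prominence=0.01)
    return np.array([gsr_window.mean(), gsr_window.std(), float(len(peaks))])

def trust_feature_vector(eeg_window, gsr_window):
    """Concatenate EEG and GSR features into one psychophysiological feature vector."""
    return np.concatenate([eeg_band_powers(eeg_window), gsr_features(gsr_window)])
```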

Forty-five human participants put on wireless EEG headsets and wore a device on one hand to measure galvanic skin response.

One of the new models, a “general trust sensor model,” uses the same set of psychophysiological features for all 45 participants. The other model is customized for each human subject, yielding higher mean accuracy at the cost of additional training time. The two models had mean accuracies of 71.22% and 78.55%, respectively.
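
The article does not name the classifier the team used, so the sketch below uses a generic scikit-learn estimator purely to illustrate the difference between the two strategies: one model trained on data pooled across all participants (the general model) versus one model fit per participant (the customized model).

```python
# Sketch of the two modelling strategies described above; the classifier choice
# (a random forest) is an assumption for illustration, not the one used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_general_model(X, y):
    """One model trained on feature vectors pooled across all participants."""
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def train_customized_models(X, y, subject_ids):
    """One model per participant: higher mean accuracy, but more training per person."""
    subject_ids = np.asarray(subject_ids)
    models = {}
    for sid in np.unique(subject_ids):
        mask = subject_ids == sid
        models[sid] = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[mask], y[mask])
    return models
```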

This is the first time EEG measurements have been used to gauge trust in real time, that is, continuously rather than after a specific event.

“We are using these data in a very new way,” Jain said. “We are looking at it in sort of a continuous stream as opposed to looking at brain waves after a specific trigger or event.”

Findings are described in a research paper published in a special issue of the Association for Computing Machinery’s Transactions on Interactive Intelligent Systems. The journal’s special issue is titled “Trust and Influence in Intelligent Human-Machine Interaction.”  The paper was authored by mechanical engineering graduate student Kumar Akash; former graduate student Wan-Lin Hu, who is now a postdoctoral research associate at Stanford University; Jain and Reid.

“We are interested in using feedback-control principles to design machines that are capable of responding to changes in human trust level in real time to build and manage trust in the human-machine relationship,” Jain said. “In order to do this, we require a sensor for estimating human trust level, again in real-time. The results presented in this paper show that psychophysiological measurements could be used to do this.”

The subject of human trust in machines is crucial for the efficient operation of “human-agent collectives.”

The future will be built around human-agent collectives that will require efficient and successful coordination and collaboration between humans and machines. Say there is a swarm of robots assisting a rescue team during a natural disaster. In our work, we are dealing with just one human and one machine, but ultimately we hope to scale up to teams of humans and machines.

Neera Jain, Assistant Professor, School of Mechanical Engineering, Purdue University.

Algorithms have been introduced to automate many processes.

“But we still have humans there who monitor what’s going on,” Jain said. “There is usually an override feature, where if they think something isn’t right they can take back control.”

Sometimes this action is not necessary.

“You have situations in which humans may not understand what is happening so they don’t trust the system to do the right thing,” Reid said. “So they take back control even when they really shouldn’t.”

In certain cases, such as pilots overriding the autopilot, taking back control might actually hamper the safe operation of the aircraft and lead to accidents.

“A first step toward designing intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real time,” Jain said.

To validate their method, the researchers asked 581 online participants to operate a driving simulation in which a computer identified obstacles in the road. In some scenarios, the computer correctly identified obstacles 100% of the time, while in other scenarios it erroneously identified the obstacles 50% of the time.

So, in some cases, it would tell you there is an obstacle, so you hit the brakes and avoid an accident, but in other cases, it would incorrectly tell you an obstacle exists when there was none, so you hit the brakes for no reason.

Tahira Reid, Associate Professor, School of Mechanical Engineering, Purdue University.
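
A minimal sketch of the two trial conditions described above appears below; the function name and structure are hypothetical, but the 100% and 50% reliability levels are those reported for the study.

```python
# Sketch of the reliable vs. faulty trial conditions; structure is illustrative,
# but the 100% and 50% reliability levels are those reported for the study.
import random

def computer_report(obstacle_present: bool, reliability: float) -> bool:
    """Return the computer's obstacle report; it is correct with probability `reliability`."""
    correct = random.random() < reliability
    return obstacle_present if correct else not obstacle_present

# Reliable condition: the obstacle report is always correct.
reliable = computer_report(obstacle_present=True, reliability=1.0)
# Faulty condition: the report is correct only half the time, which erodes trust.
faulty = computer_report(obstacle_present=False, reliability=0.5)
```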

The testing enabled the scientists to identify psychophysiological features that are linked to human trust in intelligent systems, and to design a trust sensor model accordingly. “We hypothesized that the trust level would be high in reliable trials and be low in faulty trials, and we validated this hypothesis using responses collected from 581 online participants,” she said.

The results confirmed that the technique efficiently induced trust and distrust in the intelligent machine.

“In order to estimate trust in real time, we require the ability to continuously extract and evaluate key psychophysiological measurements,” Jain said. “This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor.”

The EEG headset records signals across nine channels, each channel capturing signals from a different part of the brain.

“Everyone’s brainwaves are different, so you need to make sure you are building a classifier that works for all humans.”

For autonomous systems, human trust can be classified into three groups: dispositional, situational, and learned.

Dispositional trust refers to the component of trust that is based on demographics such as culture and gender, which carry their own potential biases.

We know there are probably nuanced differences that should be taken into consideration. Women trust differently than men, for example, and trust also may be affected by differences in age and nationality.

Tahira Reid, Associate Professor, School of Mechanical Engineering, Purdue University.

Situational trust may be affected by a task’s level of risk or effort, while learned trust is based on the human’s past experience with autonomous systems.

The models they constructed are called classification algorithms.

“The idea is to be able to use these models to classify when someone is likely feeling trusting versus likely feeling distrusting,” she said.
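
As a toy illustration of what such classification looks like in code, the snippet below trains a classifier on synthetic feature vectors and labels a new window as “trusting” or “distrusting”; the data, feature count, and label coding are all made up for demonstration.

```python
# Toy end-to-end illustration: a trained classifier labels a new feature window as
# "trusting" or "distrusting". Data, feature count, and label coding are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 30))    # 200 windows x 30 psychophysiological features (made up)
y_train = rng.integers(0, 2, size=200)  # 1 = trusting, 0 = distrusting (assumed coding)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
new_window = rng.normal(size=(1, 30))
state = "trusting" if clf.predict(new_window)[0] == 1 else "distrusting"
print("predicted state:", state)
```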

Jain and Reid have also explored dispositional trust to account for cultural and gender differences, as well as dynamic models that can forecast how trust will change over time based on the data.

The study received funding from the National Science Foundation. The scientists have published several papers since the work started in 2015.

