Automated Computer System Identifies Liars 82.5 Percent of the Time

Inspired by the work of psychologists who study the human face for clues that someone is telling a high-stakes lie, University at Buffalo computer scientists are exploring whether machines can also read the visual cues that give away deceit.

Results so far are promising: In a study of 40 videotaped conversations, an automated system that analyzed eye movements correctly identified whether interview subjects were lying or telling the truth 82.5 percent of the time.

That’s a better accuracy rate than expert human interrogators typically achieve in lie-detection judgment experiments, said Ifeoma Nwogu, a research assistant professor at UB’s Center for Unified Biometrics and Sensors (CUBS) who helped develop the system. In published results, even experienced interrogators average closer to 65 percent, Nwogu said.

“What we wanted to understand was whether there are signal changes emitted by people when they are lying, and can machines detect them? The answer was yes, and yes,” said Nwogu.

The research was peer-reviewed, published and presented as part of the 2011 IEEE Conference on Automatic Face and Gesture Recognition.

Nwogu’s colleagues on the study included CUBS scientists Nisha Bhaskaran and Venu Govindaraju, and UB communication professor Mark G. Frank, a behavioral scientist whose primary area of research has been facial expressions and deception.

In the past, Frank’s attempts to automate deceit detection have used systems that analyze changes in body heat or examine a slew of involuntary facial expressions.

The automated UB system tracked a different trait -- eye movement. The system employed a statistical technique to model how people moved their eyes in two distinct situations: during regular conversation, and while fielding a question designed to prompt a lie.

People whose pattern of eye movements changed between the first and second scenario were assumed to be lying, while those who maintained consistent eye movement were assumed to be telling the truth. In other words, when the critical question was asked, a strong deviation from normal eye movement patterns suggested a lie.

Previous experiments in which human judges coded facial movements found documentable differences in eye contact at times when subjects told a high-stakes lie.

Nwogu and her fellow computer scientists created an automated system that could verify and improve on the cues human coders had used to classify liars and truth tellers. The next step will be to expand the number of subjects studied and to develop automated systems that analyze body language in addition to eye contact.

Nwogu said that while the sample size was small, the findings are exciting.

They suggest that computers may be able to learn enough about a person’s behavior in a short time to assist with a task that challenges even experienced interrogators. The videos used in the study showed people of various skin colors, under varied head poses and lighting, and with obstructions such as glasses.

This does not mean machines are ready to replace human questioners, however -- only that computers can be a helpful tool in identifying liars, Nwogu said.

She noted that the technology is not foolproof: A very small percentage of subjects studied were excellent liars, maintaining their usual eye movement patterns as they lied. Also, the nature of an interrogation and interrogators’ expertise can influence the effectiveness of the lie-detection method.

The videos used in the study were culled from a set of 132 that Frank recorded during a previous experiment.

In Frank’s original study, 132 interview subjects were given the option to “steal” a check made out to a political party or cause they strongly opposed.

Subjects who took the check but lied about it successfully to a retired law enforcement interrogator received rewards for themselves and a group they supported. Subjects caught lying incurred a penalty: they and their group received no money, while the group they despised did. Subjects who did not steal the check faced a similar punishment if judged to be lying, but received a smaller sum for being judged truthful.

The interrogators opened each interview by posing basic, everyday questions. Following this mundane conversation, the interrogators asked about the check. At this critical point, the monetary rewards and penalties increased the stakes of lying, creating an incentive to deceive and do it well.

In their study on automated deceit detection, Nwogu and her colleagues selected 40 videotaped interrogations.

They used the mundane beginning of each to establish what normal, baseline eye movement looked like for each subject, focusing on the rate of blinking and the frequency with which people shifted their direction of gaze.

The scientists then used their automated system to compare each subject’s baseline eye movements with eye movements during the critical section of each interrogation -- the point at which interrogators stopped asking everyday questions and began inquiring about the check.

If the machine detected unusual variations from baseline eye movements at this time, the researchers predicted the subject was lying.
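The article does not spell out the researchers’ actual statistical model, but the per-subject logic it describes -- establish a baseline from the mundane questions, then flag strong deviations during the check questions -- can be sketched in a few lines. Everything in the sketch below (the two features, the relative-change score, the 0.5 threshold) is an illustrative assumption, not the published method.

```python
# Illustrative sketch only: the study's actual statistical model is not
# reproduced here. The features, scoring rule, and threshold are assumptions
# chosen to convey the baseline-vs-critical comparison described above.
from dataclasses import dataclass


@dataclass
class EyeStats:
    blink_rate: float       # blinks per minute
    gaze_shift_rate: float  # gaze-direction changes per minute


def deviation_score(baseline: EyeStats, critical: EyeStats) -> float:
    # Average relative change in each feature: a crude stand-in for the
    # researchers' statistical comparison of the two interview phases.
    blink_dev = abs(critical.blink_rate - baseline.blink_rate) / max(baseline.blink_rate, 1e-6)
    gaze_dev = abs(critical.gaze_shift_rate - baseline.gaze_shift_rate) / max(baseline.gaze_shift_rate, 1e-6)
    return (blink_dev + gaze_dev) / 2.0


def classify(baseline: EyeStats, critical: EyeStats, threshold: float = 0.5) -> str:
    # A strong deviation from the subject's own baseline is treated as a
    # signal of deception, per the study's core assumption.
    return "deceptive" if deviation_score(baseline, critical) > threshold else "truthful"


# A subject whose blink and gaze-shift rates roughly double during the
# critical questions is flagged; one who stays near baseline is not.
print(classify(EyeStats(blink_rate=15.0, gaze_shift_rate=20.0),
               EyeStats(blink_rate=29.0, gaze_shift_rate=38.0)))  # deceptive
print(classify(EyeStats(blink_rate=15.0, gaze_shift_rate=20.0),
               EyeStats(blink_rate=16.0, gaze_shift_rate=21.0)))  # truthful
```

Because each subject is compared only against his or her own baseline, the approach sidesteps the fact that normal blink and gaze behavior varies widely from person to person; it also explains the failure mode Nwogu notes above, since an excellent liar who holds steady eye movements produces no deviation to detect.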
