We know you are lying!


Artificial intelligence can detect untruths better than humans. Should we use it?

Everybody lies – and that’s the truth. But we’re about to get sprung, because AI (artificial intelligence) has proven very good at detecting what’s true and what isn’t. 

The problem is that if we implement AI lie detection, we run the risk of upsetting social harmony.  

Recent research indicates that people are not especially good at detecting lies. Our success rate is “not much better than chance”. 

That’s partly because, according to “truth default theory”, we default to assuming that what we are hearing is true. 

Researchers in Germany found that AI outperformed humans at text-based lie detection, and that people were much more likely to voice their suspicion that they had been lied to when they had AI support. 

According to a study led by Professor Alicia von Schenk of Julius-Maximilians-Universität Würzburg, “Only under specific conditions, such as when discrepancies or contradictions are noticed, this default to truth is overridden, and skepticism arises.

“This theory explains why detecting deception is challenging and why people are often susceptible to believing false information, such as the content of deepfake videos.” In an interview, Prof von Schenk said the results of their research suggested that using AI for detecting lies could significantly disrupt social harmony. 

“If people more frequently express the suspicion that their counterpart may have lied, this fosters general mistrust and increases polarisation between people who already struggle to trust one another,” she said. 

The study’s co-author, Prof Victor Klockmann, said that AI’s superior ability to identify lies had a positive side in this era of fake news, dubious statements by politicians, and manipulated videos.

“It may be possible to prevent dishonesty and explicitly encourage honesty in communication,” he said. 

However, when dealing with statements that might or might not be factual, only one-third of participants in the study took up the opportunity to ask the AI algorithm for its assessment. 

“While people are currently still reluctant to use technical support to detect lies, organisations and institutions may embrace it differently,” he said. 

Prof Klockmann suggested AI could be used in communications with suppliers or customers, in job interviews, and in verifying insurance claims. 

But the study said, “Consulting a lie-detection algorithm, or delegating accusations to the algorithm, could reduce accusers’ sense of accountability, increase the psychological distance from the accused, and blur questions of liability. 

“As a result, we might witness a rise in accusations, leading to new social, legal, and ethical challenges.” 

The researchers have called for a comprehensive legal framework to regulate the impact of AI lie-detection algorithms. 


Further reading: Newsreel, iScience, UWLax 

Author

Brett Debritz

Communications Specialist, National Seniors Australia
