New Biometric Tool Helps Confirm the Identity of Smartphone Users

In public spaces, people can often be seen wearing earphones or earbuds. Zhanpeng Jin, a computer scientist at the University at Buffalo (UB), was curious about the prevalence of this old-meets-new technology, particularly on college campuses.

When a sound is played into someone’s ear, the sound propagates through and is reflected and absorbed by the ear canal—all of which produce a unique signature that can be recorded by a microphone attached to the earbud, which then sends the information via Bluetooth to the user’s smartphone for verification. (Image credit: University at Buffalo)

We have so many students walking around with speakers in their ears. It led me to wonder what else we could do with them.

Zhanpeng Jin, PhD, Associate Professor, Department of Computer Science and Engineering, School of Engineering and Applied Sciences, University at Buffalo

That interest has resulted in a new biometric tool called EarEcho. A research group headed by Jin is developing the device, which uses modified wireless earbuds to authenticate smartphone users via the unique geometry of their ear canal.

A prototype of the system is described in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies—a journal published by the Association for Computing Machinery. The prototype achieved an accuracy of approximately 95%.

A provisional patent application for the new technology has been filed by UB’s Technology Transfer office.

How EarEcho Works

The researchers built the prototype from off-the-shelf components, including a tiny microphone and a pair of in-ear earphones. They also developed models to share data between EarEcho’s components, as well as acoustic signal-processing techniques to reduce noise interference.
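The article does not say which noise-reduction techniques the team used. As a purely illustrative sketch, one simple approach is to keep only the frequency band around the probe sound and discard out-of-band ambient noise (every name and parameter below is hypothetical):

```python
import numpy as np

def bandpass_fft(signal, fs, low_hz, high_hz):
    """Zero out FFT bins outside [low_hz, high_hz] to suppress
    out-of-band ambient noise -- a crude band-pass filter."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return np.fft.irfft(spectrum * mask, n=len(signal))

# Example: a 1 kHz probe tone buried in low-frequency street rumble.
fs = 8000
t = np.arange(fs) / fs
probe = np.sin(2 * np.pi * 1000 * t)
rumble = 0.8 * np.sin(2 * np.pi * 60 * t)
cleaned = bandpass_fft(probe + rumble, fs, 500, 2000)
# The 60 Hz rumble falls outside the pass band and is removed,
# while the 1 kHz probe tone survives intact.
```

Real systems would use more sophisticated filtering, but the idea is the same: the echo of interest lives in a known frequency range, and interference outside that range can be cut away.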

When a sound plays into someone’s ear, it propagates through the ear canal and is reflected and absorbed along the way. This process produces a unique signature that the microphone can capture.

It doesn’t matter what the sound is; everyone’s ears are different, and we can show that in the audio recording. This uniqueness can lead to a new way of confirming the identity of the user, equivalent to fingerprinting.

Zhanpeng Jin, PhD, Associate Professor, Department of Computer Science and Engineering, School of Engineering and Applied Sciences, University at Buffalo

The data recorded by the microphone is then transmitted to the smartphone over the earbuds’ Bluetooth connection, where it is analyzed.
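The paper's exact features and classifier are not described in this article. As a toy sketch of the general idea—an ear canal shapes the echo's spectrum, and that shape can be compared against a signature stored at enrollment—the following is entirely hypothetical (the band-energy features, the distance threshold, and the simulated "canals" are all assumptions for illustration):

```python
import numpy as np

def spectral_signature(echo, n_bands=16):
    """Summarize an echo as log energy in coarse frequency bands --
    a stand-in for whatever acoustic features EarEcho actually uses."""
    power = np.abs(np.fft.rfft(echo)) ** 2
    bands = np.array_split(power, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

def verify(echo, enrolled, threshold=0.5):
    """Accept the wearer if the new echo's signature is close to the
    signature stored at enrollment (hypothetical distance threshold)."""
    return float(np.linalg.norm(spectral_signature(echo) - enrolled)) < threshold

# Toy demo: model two ear canals as different spectral responses
# applied to the same probe sound.
rng = np.random.default_rng(0)
probe = rng.standard_normal(4000)                        # sound played into the ear
owner_canal = rng.uniform(0.5, 1.5, 2001)                # simulated canal response
other_canal = owner_canal * np.linspace(0.5, 2.0, 2001)  # a different ear

def echo_through(canal):
    # The canal reflects/absorbs frequencies differently, shaping the echo.
    return np.fft.irfft(np.fft.rfft(probe) * canal, n=len(probe))

enrolled = spectral_signature(echo_through(owner_canal))
print(verify(echo_through(owner_canal), enrolled))  # True  (same ear)
print(verify(echo_through(other_canal), enrolled))  # False (different ear)
```

The design point this illustrates is that the probe sound itself doesn't matter—as Jin notes, what's compared is how the individual ear transforms it.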

To test the device, the researchers had 20 subjects listen to audio samples containing music, speech, and other content. Tests were performed in diverse environments, such as a shopping mall and a street, and with the subjects in varied positions: standing, sitting, and with the head tilted.

EarEcho was about 95% accurate when given a single second to authenticate a subject. That score rose to 97.5% when the system monitored the subject over 3-second windows.
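The article does not say how decisions are pooled across a window, but a quick back-of-the-envelope calculation shows why longer windows help: if each 1-second check were an independent coin flip correct with probability 0.95 (an idealization—real errors are correlated), a best-of-three vote over a 3-second window would succeed whenever at least 2 of the 3 checks are right.

```python
# Probability that a majority vote over three independent 1-second
# checks is correct: all three right, or exactly two of three right.
p = 0.95
majority = p**3 + 3 * p**2 * (1 - p)
print(f"{majority:.3f}")  # 0.993
```

The measured 97.5% sits below this idealized 99.3% bound, consistent with errors not being fully independent across consecutive seconds.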

How EarEcho can be Used

Hypothetically, users could rely on EarEcho to unlock their smartphones, reducing the need for other authentication methods such as facial recognition, fingerprints, and passcodes.

But Jin believes the device’s main use will be the continuous authentication of smartphone users. EarEcho works whenever people are listening through their earbuds. It is a passive system: users do not have to take any action—such as submitting a voice command or a fingerprint—for it to work, he added.

Jin argued that such a system is well suited to situations where users must confirm their identity, such as making mobile payments. It could also eliminate the need to re-enter passcodes or fingerprints when a phone locks after a period of inactivity.

Think about that. Just by wearing the earphones, which many people already do, you wouldn’t have to do anything to unlock your phone.

Zhanpeng Jin, PhD, Associate Professor, Department of Computer Science and Engineering, School of Engineering and Applied Sciences, University at Buffalo

Other co-authors of the study include Yang Gao and Wei Wang, both graduate students in Jin’s laboratory; Wei Sun, PhD, associate professor in the Department of Communicative Disorders and Sciences in the College of Arts and Sciences; and Vir V. Phoha, PhD, professor of electrical engineering and computer science at Syracuse University.

