Thought Leaders

Using Facial Expression to Enhance Virtual Reality for Disabled Users

Arindam Dey, University of Queensland; Dr. Mark Billinghurst, University of South Australia

AZoSensors speaks with Arindam Dey from the University of Queensland and Dr. Mark Billinghurst from the University of South Australia about their research into using facial expressions to navigate virtual reality environments, making virtual reality more accessible for people with a wide range of disabilities.

Facial expressions have been a subject of interest in virtual reality for over a decade. However, the use of facial expressions as a means of interaction – navigation and manipulation of objects – in VR has never before been explored. Why is this?

Facial expressions in VR have been researched for a long time. However, using facial expressions to enable interaction has been difficult, primarily because the bulk of the face is covered by the VR headset, which hides many of the facial features. Researchers therefore first had to work out how to detect facial expressions reliably and with high accuracy. Achieving sufficient accuracy took a long time, and in fact, this is still an ongoing topic of research.

Image Credit: Shutterstock.com/franz12

We now have technology that is reliable enough to detect at least some facial expressions and use them for various purposes in virtual environments. However, the bulk of research has used facial expressions to make virtual avatars appear more realistic, and until now, no one has used them for interaction.

One reason could be that making facial expressions might not be an enjoyable method of interaction; another is that most people are interested in developing VR technology for able-bodied users, who make up the majority of the consumer market.

One of our research directions has always been using VR “for good,” and as a result, we decided to use facial expressions so that users who may otherwise not be able to use VR can do so.

What inspired your research into using facial expressions to influence objects in a virtual reality (VR) setting?

We have always been interested in “for good” research using virtual and augmented reality technologies and how AR and VR can be used to improve people’s lives. We have run multiple workshops on this topic over the last few years to motivate the research community to showcase and discuss more research in this direction.

One direction for this “for good” research is to make these technologies more accessible and inclusive.

A major assumption about VR users is that they will use hand-held controllers that come with commercial VR headsets for interaction. We thought about other groups of users in the community who cannot use their hands to interact in VR. That’s when we started planning an alternative method for interaction that does not require hands.

Fortunately, at that time, three motivated students (Bowen Yuan, Aaron Goh and Gaurav Gupta) joined our research group at the University of Queensland for their final-year research project, and when we pitched the idea to them, they were super keen to make it a reality. With their help, using facial expressions for interaction in VR became a reality.

In conventional VR settings, it is common to use touchpads or hand-held controllers to move objects. Can you provide an overview of how your team was able to capture the facial expressions used to trigger specific actions in virtual reality settings?

We used an EEG device manufactured by Emotiv, which records brain activity through 14 different electrodes. At the same time, this device is capable of measuring certain muscle activities in the face, primarily so that “noise,” or unwanted data, in the recorded brain activity can be detected.

Video: Interacting in VR with Facial Expressions (Video Credit: Arindam Dey/YouTube.com)

Normally, data processing techniques are used to remove this noise from the neural data before any analysis is done. However, we used the noise in the data to detect facial expressions. The EEG device provides an interface to achieve this; the manufacturer calls these detections “smart artifacts.” So when the user clenches their teeth or makes other facial expressions, this causes noise in the EEG data, which can be detected and the expression recognized.

We then connected this EEG system to our VR system and used the three chosen facial expressions to interact with the VR environments. In other words, we did not develop the technology to detect facial expressions, but we used it in a novel way, for the first time, to enable interaction in VR through facial expressions. More technical details of this interface are available in the paper.
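To make the underlying idea a little more concrete, the following is a minimal, hypothetical sketch of how muscle-artifact “noise” in EEG channels could be turned into discrete expression events. It is not the Emotiv detector or the authors’ code; the sample rate, channel grouping, thresholds and the smile/frown split are illustrative assumptions only, and a real application would simply consume the headset software’s own facial-expression detections.

```python
import numpy as np

# Illustrative assumptions only: consumer-headset-like sample rate and a
# 250 ms analysis window. A real system would use the vendor's detections.
SAMPLE_RATE = 128
WINDOW = SAMPLE_RATE // 4


def window_rms(samples: np.ndarray) -> float:
    """Root-mean-square amplitude of one window of one channel."""
    return float(np.sqrt(np.mean(samples ** 2)))


def classify_window(frontal: np.ndarray, temporal: np.ndarray,
                    artifact_threshold: float = 80.0) -> str | None:
    """Return a coarse expression label for one window, or None.

    Muscle activity shows up as high-amplitude noise: clenching mainly on
    temporal (jaw) channels, smiling and frowning mainly on frontal ones.
    The threshold and the smile/frown split below are toy heuristics, not
    a validated classifier.
    """
    frontal_rms = window_rms(frontal)
    temporal_rms = window_rms(temporal)

    if max(frontal_rms, temporal_rms) < artifact_threshold:
        return None                      # no strong muscle artifact
    if temporal_rms > frontal_rms:
        return "clench"
    # Crude frontal split: moderate frontal bursts -> "smile",
    # very strong ones -> "frown" (purely illustrative).
    return "smile" if frontal_rms < 2 * artifact_threshold else "frown"


# Example on simulated windows: quiet background EEG vs. a jaw-muscle burst.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 10, WINDOW)
clench_burst = rng.normal(0, 150, WINDOW)
print(classify_window(quiet, quiet))         # -> None
print(classify_window(quiet, clench_burst))  # -> "clench"
```

The point of the sketch is simply that strong facial-muscle activity stands out clearly against background EEG, which is why detections like these are possible even though the headset was designed to treat that activity as noise.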

Can you provide some examples of certain facial expressions and what actions they enabled in the VR settings? 

There are seven different facial expressions that the EEG device can detect. We used only three of them: smile, frown and clench. The smile was used to start moving, and the frown was used to stop movement.

The clench was used to perform certain actions in the environment, such as picking up objects or shooting zombies. For example, in VR, a user could smile to begin moving forward in the direction they are looking, and then frown to stop moving.

Image Credit: Shutterstock.com/G-Stock Studio

The main reason for using these three expressions was that we found they are more reliably and accurately detected by the EEG system when wearing the headset, and users also found them easy to perform. 
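As an illustration of how the three expressions described above could drive interaction once they are recognized, here is a small, hypothetical sketch of an interaction loop: smile starts gaze-directed movement, frown stops it, and clench triggers an action. The class, the Vec3 type and the try_interact() placeholder are assumptions made for the sketch, not the authors’ implementation; a real VR application would hook the same logic into its engine’s update loop and ray-casting.

```python
from dataclasses import dataclass


@dataclass
class Vec3:
    """Minimal 3D vector for the sketch (a real engine has its own type)."""
    x: float
    y: float
    z: float


class ExpressionLocomotion:
    """Maps recognized expression events to simple VR commands,
    following the scheme described in the interview."""

    SPEED = 1.5  # metres per second; illustrative value only

    def __init__(self) -> None:
        self.moving = False

    def on_expression(self, expression: str) -> None:
        if expression == "smile":
            self.moving = True       # start gaze-directed movement
        elif expression == "frown":
            self.moving = False      # stop movement
        elif expression == "clench":
            self.try_interact()      # e.g. pick up an object, shoot a zombie

    def update(self, position: Vec3, gaze_direction: Vec3, dt: float) -> Vec3:
        """Advance the user's position each frame while moving."""
        if not self.moving:
            return position
        return Vec3(position.x + gaze_direction.x * self.SPEED * dt,
                    position.y + gaze_direction.y * self.SPEED * dt,
                    position.z + gaze_direction.z * self.SPEED * dt)

    def try_interact(self) -> None:
        # Placeholder hook: a real application would ray-cast along the
        # gaze direction and act on whatever object it hits.
        print("interaction triggered")


# Example usage: smile to start, advance one 90 Hz frame, frown to stop.
loco = ExpressionLocomotion()
loco.on_expression("smile")
new_pos = loco.update(Vec3(0.0, 0.0, 0.0), Vec3(0.0, 0.0, 1.0), dt=1 / 90)
loco.on_expression("frown")
```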

How did you measure the efficacy of your facial expression-based method? What were the three virtual environments that you used?

We ran a user study to measure the performance of the facial expression-based method compared to a traditional hand-held controller-based method. The primary measures were neural activity, physiological signals such as electrodermal activity, usability, and the sense of presence or immersion in VR.

We used three different environments eliciting three different types of emotions. A happy environment exposed the participants to a bright open field with many butterflies flying around them; they had to catch the butterflies using a net. A neutral environment placed the participants in a bright warehouse where they had to pick up objects from the shelves. In the scary environment, the participants were placed in a dark warehouse with many zombies attacking them, and they had to survive by shooting (eliminating) the zombies. All of these environments had appropriate sound effects.

Our results indicated that, in general, the controller-based method performed well, but participants felt more immersed in the VR environments when they used the facial expressions.

It should be noted that our goal was not to prove that facial expressions were better than hand-held controllers for interacting with VR, but to test the viability of facial expressions as an interaction method.

This is because it will give a group of users the ability to use VR that was otherwise not possible for them.

Now that we have proven the viability, in the future, we want to improve the facial expression-based interaction so that it is more comparable to the traditional methods but with more advanced and less cumbersome technologies.

Did you come across any challenges during your research, and if so, how did you overcome them?

The main challenge was the pandemic. It restricted the use of lab facilities, and when we were ready and allowed to run the user study, we had to follow strict health and safety guidelines. Needless to say, finding participants to try our systems was also a challenge, as people were skeptical about stepping outside and using shared equipment, which is understandable.

Technically, there were some challenges in terms of connecting the EEG device and the facial expressions to the experimental system that we developed. 

In terms of usability, how does using facial expression compare to conventional controllers?

In terms of usability, fully-abled people found it harder to make facial expressions for input compared to using a traditional hand-held controller. Compared to pressing buttons on a controller, which are recognized every time, facial expression recognition has its flaws.

However, for people who cannot use a hand-held controller, our system gives them an ability that they did not have before.

At present, it is difficult for disabled people to interact in a VR environment. How will your facial-expression-based method make VR more inclusive to amputees or those with motor neuron disease, for example?

As previously mentioned, our goal was not to revolutionize VR but instead to make it a more inclusive environment. If people have control over their facial expressions, they will be able to use our system to interact in VR.

In this way, we are making VR more accessible for people with a wide range of disabilities.

Are there any applications for this facial recognition technology beyond the entertainment industry?

Our technique could be used in many different VR applications, not just for gaming and entertainment. For example, the technology could be used for VR training experiences or social VR experiences. Basically, any VR application that uses simple movement or interaction could be adapted for our system.

What are the next steps for your research?

There are many next steps that we could take. We need to test the system with disabled people to see how usable it is for them. Until now, we have only been able to test the technology with fully-abled people.

We would also like to explore methods for facial recognition that are faster and more usable than the current technique. For example, we could use fewer EEG sensors or other sensors like EMG to measure muscle movements. There are many exciting directions that this research could go in.

About Arindam Dey

Arindam Dey is a computer scientist on a mission to make the Metaverse (AR/VR) better and more inclusive for users in various ways. He is currently an Honorary Academic at the University of Queensland. Until February 2022, he was a Lecturer at the University of Queensland’s School of ITEE, primarily focusing on Mixed Reality, Empathic Computing, and Human-Computer Interaction. He co-founded and directed the Empathic XR and Pervasive Computing Laboratory. He believes in designing solutions for users and putting users ahead of the technology. Most of his work involves user research and statistics.

Before joining the University of Queensland, he was a Research Fellow at the Empathic Computing Laboratory (UniSA), working between 2015 and 2018 with one of the world leaders in the field of Augmented Reality, Prof. Mark Billinghurst. Mark’s pioneering work in the field of Empathic Computing directed him to this enticing research area of utilizing emotion and cognition in extended reality (XR) interfaces. Earlier, he held postdoctoral positions at the University of Tasmania, Worcester Polytechnic Institute (USA), and James Cook University.

About Prof. Mark Billinghurst

Prof. Mark Billinghurst has a wealth of knowledge and expertise in human-computer interface technology, particularly in the area of Augmented Reality (the overlay of three-dimensional images on the real world).

In 2002, the former HIT Lab US Research Associate completed his Ph.D. in Electrical Engineering at the University of Washington, under the supervision of Professor Thomas Furness III and Professor Linda Shapiro. As part of the research for his thesis, titled Shared Space: Exploration in Collaborative Augmented Reality, Dr. Billinghurst invented the Magic Book, an animated children’s book that comes to life when viewed through a lightweight head-mounted display (HMD).

Not surprisingly, Dr. Billinghurst has achieved several accolades in recent years for his contribution to Human Interface Technology research. He was awarded a Discover Magazine Award for Entertainment in 2001 for creating the Magic Book technology. He was selected as one of eight leading New Zealand innovators and entrepreneurs to be showcased at the Carter Holt Harvey New Zealand Innovation Pavilion at the America’s Cup Village from November 2002 until March 2003. In 2004, he was nominated for a prestigious World Technology Network (WTN) World Technology Award in the education category, and in 2005, he was appointed to the New Zealand Government’s Growth and Innovation Advisory Board.

Originally educated in New Zealand, Dr. Billinghurst is a two-time graduate of Waikato University, where he completed a BCMS (Bachelor of Computing and Mathematical Science) with first-class honors in 1990 and a Master of Philosophy (Applied Mathematics & Physics) in 1992.

Research interests: Dr. Billinghurst’s research focuses primarily on advanced 3D user interfaces such as:

  • Wearable Computing – Spatial and collaborative interfaces for small wearable computers. These interfaces address the idea of what is possible when you merge ubiquitous computing and communications on the body.
  • Shared Space – An interface that demonstrates how augmented reality, the overlaying of virtual objects on the real world, can radically enhance face-to-face and remote collaboration.
  • Multimodal Input – Combining natural language and artificial intelligence techniques to allow human-computer interaction with an intuitive mix of voice, gesture, speech, gaze and body motion.

Disclaimer: The views expressed here are those of the interviewee and do not necessarily represent the views of AZoM.com Limited (T/A) AZoNetwork, the owner and operator of this website. This disclaimer forms part of the Terms and Conditions of use of this website.

Written by Bethan Davies

Bethan has just graduated from the University of Liverpool with a First Class Honors in English Literature and Chinese Studies. Throughout her studies, Bethan worked as a Chinese Translator and Proofreader. Having spent five years living in China, Bethan has a profound interest in photography, travel and learning about different cultures. She also enjoys taking her dog on adventures around the Peak District. Bethan aims to travel more of the world, taking her camera with her.


