Simplifying the Generation of Three-Dimensional Holographic Displays

Holograms have long held the promise of offering immersive three-dimensional (3D) experiences, but the challenges involved in generating them have limited their widespread use. Capitalizing on the recent developments in deep learning, researchers from Chiba University now propose a game-changing approach that utilizes neural networks to transform ordinary two-dimensional color images into 3D holograms. The proposed approach can simplify 3D hologram generation and can find applications in numerous fields, including healthcare and entertainment.

Researchers propose a computationally inexpensive deep learning approach that uses three neural networks to transform two-dimensional images captured with regular cameras into three-dimensional holograms. Image Credit: Kunal Mukherjee from Flickr

Holograms that offer a three-dimensional (3D) view of objects provide a level of detail that is unattainable by regular two-dimensional (2D) images. Owing to their ability to offer a realistic and immersive experience of 3D objects, holograms hold enormous potential for use in various fields, including medical imaging, manufacturing, and virtual reality. Holograms are traditionally constructed by recording the three-dimensional data of an object and its interactions with light. However, this technique is highly computationally intensive and requires a special camera to capture the 3D images, which makes the generation of holograms challenging and limits their widespread use.

In recent years, many deep-learning methods have also been proposed for generating holograms. These methods can create holograms directly from 3D data acquired with RGB-D cameras, which record both the color and depth information of an object. This approach circumvents many of the computational challenges associated with the conventional method and represents an easier way to generate holograms.

Now, a team of researchers led by Professor Tomoyoshi Shimobaba of the Graduate School of Engineering, Chiba University, has proposed a novel deep-learning-based approach that further streamlines hologram generation by producing holograms directly from regular 2D color images captured with ordinary cameras. Yoshiyuki Ishii and Tomoyoshi Ito of the Graduate School of Engineering, Chiba University, were also part of this study, which was made available online on August 2, 2023, in Optics and Lasers in Engineering.

Explaining the rationale behind this study, Prof. Shimobaba says, “There are several problems in realizing holographic displays, including the acquisition of 3D data, the computational cost of holograms, and the transformation of hologram images to match the characteristics of a holographic display device. We undertook this study because we believe that deep learning has developed rapidly in recent years and has the potential to solve these problems.”

The proposed approach employs three deep neural networks (DNNs) to transform a regular 2D color image into data that can be used to display a 3D scene or object as a hologram. The first DNN takes a color image captured with a regular camera as input and predicts the associated depth map, providing information about the 3D structure of the image. The second DNN then uses both the original RGB image and the depth map produced by the first DNN to generate a hologram. Finally, the third DNN refines the hologram generated by the second DNN, making it suitable for display on different devices.
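To make the pipeline concrete, the following is a minimal, hypothetical sketch in PyTorch of how such a three-network cascade could be wired together. The module names, layer choices, and the two-channel hologram representation are illustrative assumptions made for readability, not the architecture reported in the paper.

```python
# Hypothetical sketch of the three-DNN pipeline described above (not the authors' code).
import torch
import torch.nn as nn

class DepthEstimator(nn.Module):
    """DNN 1: predicts a single-channel depth map from an RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # depth normalized to [0, 1]
        )

    def forward(self, rgb):
        return self.net(rgb)

class HologramGenerator(nn.Module):
    """DNN 2: maps the RGB image plus predicted depth to a raw hologram.

    The two output channels loosely stand in for the real and imaginary
    parts of a complex hologram field (an assumption for this sketch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1),
        )

    def forward(self, rgb, depth):
        return self.net(torch.cat([rgb, depth], dim=1))

class HologramRefiner(nn.Module):
    """DNN 3: refines the raw hologram so it suits a particular display device."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, hologram):
        return self.net(hologram)

def rgb_to_hologram(rgb, dnn1, dnn2, dnn3):
    """Runs the full 2D-image-to-hologram pipeline on a batch of RGB images."""
    depth = dnn1(rgb)                # step 1: estimate depth from color alone
    raw_hologram = dnn2(rgb, depth)  # step 2: generate hologram from RGB + depth
    return dnn3(raw_hologram)        # step 3: adapt hologram to the display

if __name__ == "__main__":
    dnn1, dnn2, dnn3 = DepthEstimator(), HologramGenerator(), HologramRefiner()
    image = torch.rand(1, 3, 256, 256)               # dummy RGB input
    hologram = rgb_to_hologram(image, dnn1, dnn2, dnn3)
    print(hologram.shape)                            # torch.Size([1, 2, 256, 256])
```

The key point the sketch illustrates is that, once the first network is trained, only an ordinary RGB image is needed at inference time; the depth information is inferred rather than measured.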

The researchers found that the speed of data processing and hologram generation with the proposed approach was superior to that of a state-of-the-art graphics processing unit. “Another noteworthy benefit of our approach is that the reproduced image of the final hologram can represent a natural 3D reproduced image. Moreover, since depth information is not used during hologram generation, this approach is inexpensive and does not require 3D imaging devices such as RGB-D cameras after training,” adds Prof. Shimobaba, while discussing the results further.

In the near future, this approach could find applications in head-up and head-mounted displays for generating high-fidelity 3D imagery. Likewise, it could revolutionize in-vehicle holographic head-up displays, which may be able to present the necessary information about people, roads, and signs to passengers in 3D. The proposed approach is thus expected to pave the way for the development of ubiquitous holographic technology.

Source: https://www.chiba-u.ac.jp/e/
