Please tell us a bit about your company and why you are attending Sensors Expo & Conference 2018.
We are Metawave, and we are focused on long-range sensors with imaging capabilities - basically giving the car a brain, vision, and perception of the world around it at very long distances - distances that the camera and LiDAR cannot reach.
Our technology is based on radar. We are also targeting 5G and fixed wireless because our core technology also solves a big challenge for this market.
There is a huge focus on autonomous vehicles because sensors are a big part of the capabilities of these futuristic cars. There has been a lot of coverage on the camera and LiDAR side, but not so much around radar - mainly because people are focused on the mainstream way of solving the radar challenge, which does not work.
We went in the opposite direction - we focused on the analog domain before digitizing the signal. What can you do to enable the radar to go very far? The only way is to squeeze the signal into a very narrow cone - a very intense beam - so that it propagates a long distance and you also get a good reflection from the object.
In all sensors, that is the only way to solve the propagation-loss challenge. We do this using analog beamforming - our own proprietary technology. We were the first to demonstrate this technology.
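The physics behind squeezing the signal into a narrow cone is the standard radar range equation: maximum detection range scales with the square root of antenna gain, and gain rises as the beamwidths shrink. A rough back-of-the-envelope sketch (all beamwidths, powers, and sensitivities below are illustrative assumptions, not Metawave's specifications):

```python
import math

def antenna_gain(az_deg, el_deg):
    # Approximate directive gain of a beam from its -3 dB beamwidths:
    # G ≈ 4π / (θ_az · θ_el), with the angles in radians.
    return 4 * math.pi / (math.radians(az_deg) * math.radians(el_deg))

def max_range(gain, pt=1.0, wavelength=3e8 / 77e9, rcs=10.0, p_min=1e-13):
    # Radar range equation solved for range (monostatic, same antenna
    # for transmit and receive, hence gain squared):
    # R_max = (Pt · G² · λ² · σ / ((4π)³ · P_min)) ** (1/4)
    return (pt * gain**2 * wavelength**2 * rcs /
            ((4 * math.pi)**3 * p_min)) ** 0.25

wide = antenna_gain(60, 20)   # broad, unfocused beam (hypothetical)
narrow = antenna_gain(2, 10)  # tightly focused analog beam (hypothetical)

# Narrowing the beam raises gain 60x, extending range by sqrt(60) ≈ 7.7x.
print(f"range ratio: {max_range(narrow) / max_range(wide):.1f}x")
```

The point of the sketch is the scaling law: because range grows only as the fourth root of received power, concentrating energy into a narrow beam is the main lever for reaching distances that wide-beam radars cannot.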
You gave two talks this year at the show - can you give a brief description of them for our readers?
One was an automotive workshop. They brought in camera companies and LiDAR companies - we were the only radar company.
The whole audience understood the key enabling technologies of these scanning sensors and all of their capabilities, but of course, these sensor companies do not talk about the limitations. I am a physicist, so from that point of view, I shared the propagation challenges - especially in bad weather conditions, when you have dense fog or dirty roads due to snow.
The LiDAR and the camera will struggle to sense the environment around them, but the radar will survive. That is why radar is considered a very robust sensor. Eventually it will be the primary sensor enabling level four and level five capabilities. Without our radar, in my opinion, you can never reach level four or level five. It is exciting!
How do you see autonomous vehicle sensors changing in the future?
In the conventional automotive sector, the chip companies supply materials and components to the tier ones. The tier ones deliver components and modules to the car companies, and the car companies sell the cars and make a profit. But now, with mobility or shared mobility as a service, the business model is changing.
The initial level four and level five deployments will be fleets or robo-taxis, not autonomous vehicles sold to consumers. They will monetize miles driven or usage. It will be a recurring business model for automotive companies, and they will differentiate their service by showing that the number one priority is safety.
Ideally, the consumer would like cool cars with all the nice apps and features too. In the end, though, safety will always be number one. We stress that, and we make sure the car has enough time to react by letting it see far away. We focus on safety. That is our number one priority.
Obviously, down the road as they become very safe, we will look at how to decrease the cost.
These are deep-learning engines - we train our own radar at the edge, independently from the sensor fusion and the other sensors, to provide this early information to the car. The radar tells the LiDAR or the camera, "Focus on this trouble area as the car gets closer, to give me time to make a decision."
Eventually, when every car on the road is level four or five, they will interfere with each other. Part of our safety campaign is making sure we are immune to interference and cause less of it. We minimize the interference level we cause to a degree that competing technologies cannot match.
Who came up with the idea of your technology?
I am a physicist with a PhD from MIT in particle physics. As part of my journey as a physicist, I also worked in the defense sector, on optical components and systems.
Three years ago, I was approached by Xerox PARC (Palo Alto Research Center), the research arm of Xerox, located here in Palo Alto. They have been around for decades. I was very happy to be asked to lead their spin-out start-up, and we are still hosted on the Xerox PARC campus.
It is very exciting to work with them, understand what they have done in that space, and see the connection between the opportunity and the technology. It is a broad technology - not developed enough to go commercial at the time - but we have a vision of its capability. By now we have filed over 50 patents in the past year, because we want to own that space. We understand that we will solve the problem.
What are you hoping to achieve at Sensors Expo & Conference 2018?
I am not here to compete against LiDAR. I think all sensors are going to be very important, but I just want to make sure that the capability of the radar is well understood from day one.
Radar is new to a lot of people, so it is very exciting to start teaching the audience. The audience is basically the customers and partners: we need the chip providers, the sensor fusion providers, the tier ones, and also the media, who are trying to understand exactly what radar can do. I can sense a reluctance to expose the limitations of LiDAR because so much money has been invested in it, so I think they are waiting until some of the investors make money before they promote radar.
Will LiDAR and radar work in partnership?
Initially, they will work in harmony.
You have these convolutional neural network algorithms that need to be trained. The more data you provide the engine, the better - it becomes smarter and able to more accurately predict behaviour as it drives through unknown areas. Initially, all of these sensors (LiDAR, radar, etc.) are required because you need a lot of data.
Training an engine with data from all of the sensors makes more sense than using just one for now. In my opinion, it is very important for the car audience to work with all the sensors, but also to train their engines with data from the leading sensor. They have to rank the sensors by priority. Today, of course, radar is primary in the Tesla and in other cars (Mercedes, for example). Others will make that decision as more and more advanced radars become available on the market.
The cameras are always going to be secondary, and then of course the LiDAR. The radar is always going to be the first sensor to see at these long distances. As the car gets closer, it passes this information to the other sensors - the LiDAR and the camera. Today, the LiDAR and the camera have a range of maybe 75 to 200 meters. They scan the whole scene all the time because they do not have time to analyze a few regions and decide which region to focus on. Scanning the whole scene takes time and power.
Most of these cars are going to be electric. If you drain the battery every 20 or 30 miles, that is not a business you can be in, right? If the radar provides information to the camera and the LiDAR, then they will focus only on those areas, which means they require less power and can process the information much quicker.
That is the initial phase of all of them working together; then, in the long term - 10 to 15 years from now - you have radar and cameras working together to enable level four and level five.
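The hand-off described above - radar spotting a distant object first, then cueing the camera to a small region of interest so it does not process the whole frame - can be sketched in a few lines. The mapping, field of view, image size, and margin below are all hypothetical illustrations of the idea, not any vendor's actual interface:

```python
from dataclasses import dataclass

@dataclass
class RadarDetection:
    azimuth_deg: float  # bearing of the target relative to boresight
    range_m: float      # distance reported by the radar

def roi_for_camera(det, image_width=1920, fov_deg=90, margin_px=100):
    # Map a radar bearing onto a horizontal window of camera pixels so
    # the vision pipeline can restrict processing to that strip of the
    # image. Simple linear mapping; a real system would use a full
    # radar-to-camera calibration.
    center = image_width / 2 * (1 + det.azimuth_deg / (fov_deg / 2))
    return (max(0, int(center - margin_px)),
            min(image_width, int(center + margin_px)))

# Radar sees something 280 m dead ahead, beyond camera/LiDAR range;
# the camera is told to watch only the central strip of the frame.
det = RadarDetection(azimuth_deg=0.0, range_m=280.0)
print(roi_for_camera(det))  # → (860, 1060)
```

The power argument in the interview falls out of this directly: cropping the search to a window of roughly 200 out of 1920 columns means the downstream detector touches about a tenth of the pixels it would otherwise process.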