Like it or not, self-driving vehicles in one form or another represent the future of automotive transportation, potentially saving lives and allowing many more vehicles to share a crowded highway system. Even today, technologies that warn drivers about lane departure and that can even control speed and steering are in limited use. As self-driving vehicles mature, the technologies that will control and sense the world around these vehicles remain in question. LIDAR—a combination of LIght and raDAR—has come to dominate this discussion, with a wide range of supporters but with one very notable detractor.
LIDAR works similarly to radar. Instead of a radio wave, however, it shoots focused laser pulses into its environment and measures the time each reflection takes to return. From this data it builds a point cloud of the surrounding area, letting the autonomous vehicle “see” its surroundings, not as an image but as a 3D model, as if creating a video-game world based on the car’s surroundings, with the car itself inside.
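The math behind each LIDAR return is simple time-of-flight geometry: halve the pulse’s round trip to get range, then use the beam’s pointing angles to place a 3D point. The sketch below illustrates this idea in Python; the function names and values are illustrative, not any particular sensor’s API.

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_time_of_flight(round_trip_s: float) -> float:
    """Distance to target: the pulse travels out and back, so halve the path."""
    return C * round_trip_s / 2.0

def point_from_return(round_trip_s: float, azimuth_deg: float, elevation_deg: float):
    """Turn one return (round-trip time plus beam angles) into an (x, y, z) point."""
    r = range_from_time_of_flight(round_trip_s)
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Spherical-to-Cartesian conversion, sensor at the origin.
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# A reflection returning after roughly 667 nanoseconds corresponds to a
# target about 100 m away -- the range class of today's spinning units.
print(range_from_time_of_flight(667e-9))
```

Repeating this calculation for every laser pulse as the beam sweeps the scene is what produces the point cloud the vehicle navigates by.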
A Variety of Possible Solutions
Other approaches attempt to give vehicles the same perception using different technologies, such as vision sensors (cameras), ultrasound sensors, and radar. Each has advantages and disadvantages. Vision sensors, for example, can read road signs or detect pedestrians but cannot see three-dimensionally. In extreme cases, these sensors can even mistake a flat image of something, such as a pedestrian or bicycle, for the real object. Ultrasound sensors can detect distance but see from a fixed perspective and give no visual context. Radar systems can cover an area at very long range, even through rain and fog, but have difficulty with non-metallic objects like wood, plastic, and, most importantly, humans.
LIDAR Leads the Way
LIDAR units, as used today, sweep a laser beam across their surroundings with a spinning mirror and can sense with a resolution of a few centimeters at around a 100m range. While incredibly capable, LIDAR units are quite expensive compared to other sensing technologies. The good news is that prices are falling rapidly, from around $80,000 in 2013 to a tenth of that, or $8,000, in late 2017, according to MIT Technology Review and Ars Technica. That is still a huge cost to add to a new car, but if this pace continues, price will not be as much of a barrier as it is today. Though expensive, most self-driving cars now use this technology, including vehicles from Alphabet’s Waymo as well as Uber™ and Toyota®.
Assuming costs do come down, what’s not to like? One potential drawback is reliability: the units depend on a physically spinning mirror that must endure the bumps and acceleration of everyday driving. Nor can one entirely discount aesthetics. Because LIDAR units require a 360-degree view of the surroundings, you’re paying for something that looks like a coffee pot perched on top of your otherwise-beautiful $100,000 car.
Self-driving car pioneer Elon Musk has been outspoken about his dislike of this technology in automotive applications, and Tesla™, of which he’s CEO, doesn’t currently employ it in self-driving vehicles. Since these automobiles tend to be beautiful pieces of artistic engineering, one can’t help but wonder whether the solution’s ugliness contributes, overtly or subconsciously, to this hesitation.
David Hall, founder and CEO of manufacturer Velodyne LiDAR®, boasts that the new VLS-128™ system can now reach an impressive distance of 300m. This is a dramatic improvement over the 120m range of the system’s predecessor, allowing for better navigation at high speeds. Other new technologies under development include solid-state LIDAR sensors, which can be manufactured at a much lower price and do not require a spinning mirror. These systems cannot see around a car from a 360-degree perspective, but multiple units could theoretically combine to form a full picture of the surroundings.
Another application of solid-state LIDAR comes from the startup AEye, whose sensors merge solid-state LIDAR with low-light cameras. Combining the two sensing methods in one unit allows an onboard processor to focus LIDAR scanning on specific areas, capturing important details in the same manner that human eyes focus on areas of interest while ignoring unimportant ones.
While engineers must weigh the pros and cons of any new technology, we must be ready for the idea that, at some level, vehicle autonomy is coming. Perhaps we’ll embrace LIDAR in one form or another once it becomes the primary technology for autonomous automobiles. Or perhaps it will be something entirely new. Regardless of how this debate unfolds, it’s a certainty that a variety of sensors paired with advanced artificial intelligence (AI) will be necessary to keep passengers safely riding to their destinations.