Why Doesn’t Tesla Care About Using Sensors for Autonomous Driving?
Self-driving vehicles employ various sensor types: light detection and ranging (LiDAR) for measuring long, variable ranges; ultrasonic sensors for short ranges; and radar, which works much like LiDAR but relies on radio waves instead of laser light.
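All three work on the same round-trip principle: emit a signal, time the echo, and convert that time into a distance. The minimal sketch below shows why the same echo time implies very different ranges for light-based and sound-based sensors (the echo time used is just an illustrative value):

```python
# Minimal sketch: radar, LiDAR, and ultrasonic sensors all estimate distance
# from a signal's round-trip time; they differ in what travels, and how fast.
SPEED_OF_LIGHT = 299_792_458.0   # m/s -- radar (radio waves) and LiDAR (laser light)
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 C -- ultrasonic sensors

def range_from_round_trip(round_trip_s: float, wave_speed: float) -> float:
    """Distance to the reflecting object: the signal travels there and back."""
    return wave_speed * round_trip_s / 2

# A 1-microsecond echo corresponds to an object ~150 m away for LiDAR/radar...
print(range_from_round_trip(1e-6, SPEED_OF_LIGHT))  # ~149.9 m
# ...but only ~0.17 mm for ultrasound, which is why ultrasonic sensors are
# confined to short-range duties like parking assistance.
print(range_from_round_trip(1e-6, SPEED_OF_SOUND))  # ~0.00017 m
```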
Self-driving tech leaders such as General Motors, Waymo, and Mercedes-Benz all rely on sensors, but not Tesla. The Texas-based automaker did use both radar and cameras to make its Autopilot semi-autonomous driving system possible, but in May 2021 it announced that it was ditching radar on the Model 3 and Model Y in North America, shifting its focus to a solely camera-based approach that it calls Tesla Vision.

But what were the reasons behind Tesla’s decision to remove radar and ultrasonic sensors from its cars and not even consider LiDAR or maps? Let’s explore this topic further.
Computer Vision: Tesla’s Plan
Tesla has developed its own computer vision system, called Tesla Vision, to process what its self-driving cars see. Built on Nvidia's CUDA, a parallel computing platform designed for graphics processing units (GPUs), this end-to-end system powers Tesla's Autopilot and self-driving technology. It relies on computer vision to make sense of the visual information gathered by the vehicle's cameras.
Rather than using LiDAR, Tesla’s approach involves training the computer to recognize and interpret the visual world, with the goal of achieving autonomous driving capabilities. The manufacturer says it can dramatically speed up the training process thanks to its use of machine learning and its very own neural network, which runs on a supercomputer called Dojo.
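Tesla's actual network, training pipeline, and Dojo hardware are proprietary, so as a rough illustration of what a camera-only perception step looks like, the sketch below runs one frame through an off-the-shelf detector from torchvision. The model, input, and confidence threshold are stand-ins, not anything Tesla uses:

```python
# Hedged sketch of a camera-only perception step: pixels in, labeled boxes out.
# An off-the-shelf torchvision detector stands in for Tesla's proprietary network.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Stand-in for one RGB frame from a vehicle camera: 3 x H x W, values in [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep only confident detections -- downstream planning would consume these
# boxes instead of radar returns or a LiDAR point cloud.
keep = detections["scores"] > 0.8
print(detections["boxes"][keep], detections["labels"][keep])
```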

Cost Reduction
Tesla’s shift from sensor-based approaches to computer vision is primarily motivated by cost. Tesla aims to reduce vehicle prices by minimizing the number of parts required. However, eliminating parts can pose a challenge when the system cannot function without them, and Tesla drew a lot of criticism when it announced it was removing radar from its cars.
A research paper from Cornell University suggests that stereo cameras can generate a 3D map almost as precise as a LiDAR map. That's a compelling point: instead of investing $7,500 in a LiDAR unit, a carmaker could use a handful of far cheaper cameras costing only $5. So when Tesla claims that LiDAR may become obsolete in the near future, it might be onto something.
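The idea behind that line of research is that depth can be recovered from the disparity between two camera views, since depth = focal length × baseline / disparity. Here is a minimal sketch using OpenCV; the image files, focal length, and camera baseline are hypothetical placeholders, not values from the paper:

```python
# Minimal stereo-depth sketch: two cheap cameras, a disparity map, and
# depth = focal_length * baseline / disparity. All parameters are assumed.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical image pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: OpenCV returns disparity as fixed-point int16, scaled by 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

focal_length_px = 700.0   # assumed focal length, in pixels
baseline_m = 0.54         # assumed distance between the two cameras, in metres

# Depth in metres wherever a valid (positive) disparity was found.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_length_px * baseline_m / disparity[valid]
```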

The other side of the coin is that after removing radar support, Tesla's Autopilot system suffered several feature downgrades that took months to restore. Additionally, many Tesla owners have reported issues with the camera-only system, such as frequent "phantom braking" events, where the vehicle brakes unnecessarily for nonexistent obstacles.
Although many companies consider sensors such as LiDAR and radar essential for reliable self-driving, Tesla has chosen computer vision due to its potential for faster development. While LiDAR and radar can detect obstacles with high accuracy today, cameras still require further refinement to achieve the same level of reliability. Nonetheless, Tesla believes that the computer vision approach is the way forward.

Lower Complexity
While a greater number of sensors can offer real advantages, including a richer picture of the environment through sensor fusion, it also presents significant drawbacks. More sensors mean more complicated software and more complex data pipelines, and the supply chain and production processes during vehicle assembly become trickier.
Furthermore, each sensor has to be calibrated and its corresponding software maintained, and that calibration must stay accurate for the fusion process to operate correctly, as the sketch below illustrates.
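To make that concrete, here is a toy example, not anything from a production stack, of fusing a camera range estimate with a radar range estimate by inverse-variance weighting, and of how a single miscalibrated sensor skews the fused result:

```python
# Toy illustration of why fusion needs well-calibrated inputs: combine two
# range estimates, weighting each by how much we trust it (lower variance
# means more weight). All numbers below are made up for illustration.
def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> float:
    """Inverse-variance weighted average of two independent estimates."""
    w_a, w_b = 1 / var_a, 1 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

# Both sensors see a car ~20 m ahead; radar is assumed more precise at range.
print(fuse(20.4, 1.0, 19.9, 0.25))  # ~20.0 m

# A radar with an uncorrected +2 m calibration bias drags the fused answer
# with it -- the math is simple, but keeping every sensor honest in the
# field is the hard, ongoing part.
print(fuse(20.4, 1.0, 21.9, 0.25))  # ~21.6 m
```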

Despite the potential advantages of more sensors, the cost and complexity of integrating them into a system cannot be overlooked. Tesla’s decision to decrease the number of sensors in its vehicles demonstrates the trade-off between the benefits and drawbacks of incorporating more sensors.
Code Verbosity
Code verbosity is a common issue in software development: unnecessary complexity and length make code hard to comprehend and maintain. In Tesla's case, supporting radar and ultrasonic sensors alongside cameras meant extra code paths, adding verbosity that led to processing delays and inefficiencies.
To mitigate this problem, Tesla adopted the computer vision approach to minimize verbosity, enhance software performance and reliability, and provide a better user experience for its customers, as the sketch below illustrates.
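As a rough illustration of the verbosity argument, consider how each sensor modality tends to bring its own decoding and clean-up path, while a camera-only design collapses to one. Every function below is a hypothetical stub, not real Autopilot code:

```python
# Hypothetical sketch: each extra modality adds its own processing path.
def detect_objects(frame):
    return [{"kind": "car", "range_m": 20.0}]        # vision path

def filter_clutter(radar_sweep):
    return [r for r in radar_sweep if r > 0.5]       # radar-specific clean-up

def debounce(ultrasonic_pings):
    return [p for p in ultrasonic_pings if p < 2.0]  # ultrasonic-specific clean-up

def process_multimodal(frame, radar_sweep, ultrasonic_pings):
    # Three input paths, plus the cross-sensor reconciliation they imply.
    return {
        "objects": detect_objects(frame),
        "radar": filter_clutter(radar_sweep),
        "near": debounce(ultrasonic_pings),
    }

def process_vision_only(frame):
    # One path, one source of truth: the camera-only argument in miniature.
    return {"objects": detect_objects(frame)}
```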
Elon Musk’s Philosophy
Tesla CEO Elon Musk has a distinctive philosophy when it comes to designing and manufacturing electric vehicles. His "best part is no part" mentality aims to reduce complexity, cost, and weight wherever possible, and it shows in Tesla's vehicles, with their minimalist design and user-friendly interface.
One aspect of this philosophy is the decision to remove sensors from Tesla vehicles and not consider the use of LiDAR technology. While some competitors rely on LiDAR sensors to help their self-driving cars see the world around them, Musk has criticized this approach as a fool’s errand. He’s also said that any company that relies on this type of tech is doomed. He argues that LiDAR is too expensive and that mapping the world and keeping it up-to-date are too costly. Instead, Tesla focuses on vision-based systems, which he believes are more effective and cost-efficient.
According to Musk, roads are designed to be interpreted with vision, and Tesla's technology is optimized to navigate the world with cameras and other vision-based sensors. It also means, he argues, that camera-only vehicles can adapt to new road conditions better than systems that depend on extensive pre-mapped datasets in order to function.
However, speaking to Electrek, Musk said that he's not unwilling to use radar; he just believes the current quality of radar isn't up to scratch. "A very high-resolution radar would be better than [Tesla Vision], but such a radar does not exist," he said. "I mean, Vision with high-res radar would be better than pure Vision." As the tech improves and the price drops, we may see radar reintegrated into Tesla's cars.
Are Sensors Going to Be Phased Out?
In a Forbes interview with Jesse Levinson, CEO of Zoox (Amazon's self-driving subsidiary), the topic of Tesla's decision to drop sensors from its cars came up. Levinson acknowledged that adding more sensors makes a system more complex and noisy, but argued that the benefits outweigh the costs.
With more development, vision alone may eventually suffice, but today's computers still can't interpret a scene the way the human brain can. Tesla has a lot of work ahead if it ever wants to build vehicles that drive themselves fully, without any driver input.
Tesla isn’t the only auto-manufacturer with self-driving features, you know.
These are the best free movies I found on Tubi, but there are heaps more for you to search through.
Flagship price, mid-range phone.
One casual AI chat exposed how vulnerable I was.
It saves me hours and keeps my sanity intact.
So much time invested, and for what?