Ganesh Venkataramanan, who led Tesla’s ambitious Dojo supercomputer project for the past five years, has left the company. Bloomberg reported the development, citing sources familiar with the matter. Peter Bannon, a former executive at Apple Inc. and a director at Tesla for the last seven years, has now taken the helm of the project.
Venkataramanan's departure from Tesla last month is now stirring conversations about the potential impacts on Tesla's future initiatives. His contributions to the Dojo project have been pivotal, especially in designing the custom D1 chip that powers the supercomputer. Venkataramanan, with his extensive experience, including a significant tenure at Advanced Micro Devices Inc. (AMD), was a crucial asset in setting up Tesla’s AI hardware and silicon teams in 2016.
Dojo: A Cornerstone for Tesla’s Self-Driving Aspirations
The Dojo supercomputer is a critical element of Tesla's strategy to enhance its self-driving capabilities. Designed to train machine learning models integral to Tesla's autonomous systems, Dojo processes vast amounts of data captured by Tesla vehicles. This rapid data processing is essential for improving the company’s algorithms, with analysts suggesting that Dojo could be a significant competitive advantage for Tesla. In a recent estimate, Morgan Stanley suggested the project could add as much as $500 billion to Tesla’s value.
Elon Musk has been vocal about the company's commitment to the Dojo project, planning an investment exceeding $1 billion by the end of 2024. The project's importance was underscored in Tesla's decision to shift from relying on Nvidia Corp.’s supercomputers to developing Dojo, poised to rival systems from Hewlett Packard Enterprise Co. and IBM.
Looking Ahead: Impact and Future Prospects
The recent leadership changes raise questions about the future direction of the Dojo project. Venkataramanan's exit, coupled with the departure of another critical artificial intelligence player from Tesla last year, Andrej Karpathy, signals a transition period for the company’s AI and self-driving teams.
However, Tesla's robust talent pool, blending experienced and emerging professionals, offers a silver lining. Bannon's promotion to lead the Dojo project is seen as a strategic move, leveraging his experience and insights gained from his tenure at Apple. Moreover, the recent installation of Dojo hardware in Palo Alto, California, marks a step forward in centralizing and enhancing the project’s capabilities.
Tesla’s ambitions for Dojo extend to making it one of the world’s top supercomputers. The company envisions reaching a computational capability of 100 exaflops* by October 2024, a testament to its commitment to advancing artificial intelligence and self-driving technology.
* Confused about “exaflops”? “Flops” stands for floating-point operations per second, a measure of how fast a computer can process data. “Exa” means a billion billion, or a 1 followed by 18 zeros (1,000,000,000,000,000,000). So when we say a computer can perform 100 exaflops, it can do 100 billion billion calculations per second. That’s incredibly fast!
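To make the footnote’s arithmetic concrete, here is a minimal Python sketch. The 100-exaflop figure and the “exa” prefix come from the article; the laptop performance figure is a rough assumption for scale only.

```python
# Illustrative arithmetic for the "exaflops" footnote.
EXA = 10**18                      # SI prefix "exa": a billion billion

dojo_target = 100 * EXA           # Tesla's stated 100-exaflop goal, in FLOP/s
print(f"{dojo_target:,} FLOP/s")  # 100,000,000,000,000,000,000 FLOP/s

# For scale: assume a laptop sustains ~100 gigaflops (10**11 FLOP/s).
# This figure is a rough assumption, not from the article.
laptop = 10**11
print(f"{dojo_target // laptop:,} such laptops")  # 1,000,000,000 such laptops
```

In other words, hitting that target would be roughly equivalent to the combined throughput of a billion ordinary laptops, under the assumption above.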
Tesla’s FSD has made some truly incredible strides since V11, and as of FSD V12.5, the experience has been hands-free for vehicles with a cabin camera.
However, a persistent point of frustration for many users is the strictness of the Driver Monitoring System (DMS), often referred to as the “nag.” In a recent interaction on X, Mike P detailed his grievances about how strict the DMS was.
This post drew a response from Elon Musk, who said, “You’re right.” Just a few days and a relatively unassuming point release later, Tesla has already decided to take action to improve its DMS.
The core issue here, which many who use FSD can attest to, isn’t about wanting to be irresponsible. Instead, it is about the current system’s sensitivity. The DMS can feel overly punitive for brief, normal interactions with the vehicle’s center display.
User Experience Woes
Mike P’s experience was common - you can’t even glance at the display to change the song or add a nav stop without the DMS warning you to pay attention.
If you continue, you risk receiving an FSD strike. As a result, many drivers disable FSD and type in their destination while driving manually - something that, even to a casual observer, is clearly far more dangerous.
This highlights a safety paradox: a system designed to ensure attentiveness can sometimes lead to less safe workarounds. One must acknowledge that Tesla is in an odd position, being incredibly cautious about safety and ensuring it stays within NHTSA guidelines. However, the nag today is overkill in some situations, such as glancing at the center screen.
Tesla Confirms Change
Musk’s relatively concise answer resonated with his previous outlook on the matter. During Tesla’s Q1 2025 Earnings Call, he acknowledged that the DMS can be too strict and mentioned that Tesla is actively looking into ways to loosen those restrictions. He also pointed out the irony of the current system encouraging users to disengage FSD for simple tasks, only to re-engage it moments later - a less-than-safe cycle.
In a post on X, Ashok Elluswamy, Tesla’s VP of Autopilot AI, delivered welcome news. He confirmed that the latest FSD update, V13.2.9, includes a loosening of the cabin camera nag. This is an undocumented change, and one that we’re very excited to see.
This undocumented change is the latest step in Tesla’s overall plan toward Unsupervised FSD, which would drop the DMS completely. Previous updates, like the shift to vision-based driver attention monitoring in V12.4 and V12.5, aimed to balance safety with user experience.
What Does This Mean?
While the full extent of the changes in V13.2.9 will become clearer as Software Update 2025.14.6 rolls out to more FSD users, the confirmation of a loosened cabin camera nag suggests a few things.
This likely means greater tolerance for brief glances at the screen for essential tasks, whether it be adjusting climate settings, inputting a nav destination, or changing the current song. It could also include a potentially more forgiving threshold for looking away, especially in low-speed scenarios. Today, the DMS does not ding you for using the display or looking away while the vehicle is waiting at a red light, but Tesla could expand this to driving under 10 mph (16 km/h).
Ashok Elluswamy, Tesla's Vice President of Autopilot and AI Software, recently discussed the current state and future ambitions of Tesla's artificial intelligence programs. He covered FSD before extending the conversation to the broader topics of robotics and Artificial General Intelligence (AGI).
Journey to Truly Autonomous Driving
At the core of Tesla’s AI efforts lies the quest for fully autonomous vehicles. Ashok reiterated the long-term vision where, eventually, all newly manufactured cars are expected to be self-driving, with older, human-driven cars potentially becoming items for specialized hobbies or unique purposes.
However, he did acknowledge that the current advanced driver assistance systems (ADAS), including Tesla’s own FSD, require better reliability before the human can be completely removed from the equation.
The development process, he emphasized, is fundamentally rooted in machine learning rather than traditional programming. A crucial aspect of this is that AI is consistent across every vehicle, learning collectively from the fleet’s experiences rather than being unique to each car.
Progress in AI is continuous.
Safety and reliability remain Tesla’s focus for FSD. Now, with Tesla just weeks away from launching its Robotaxi Network in Austin, Texas, this is more true than ever, as any accidents could cause a delay in the program’s expansion or stop the program entirely.
No LiDAR
Ashok confirmed that Tesla still has no interest in LiDAR while discussing Tesla's vision-based sensor suite. He reiterated that cost and scalability remain key concerns with LiDAR, adding that its perceived usefulness diminishes as vision-based systems continue to improve.
Beyond the Road: FSD and Robotics
Ashok described Tesla’s AI network poetically - a “digital living being.” This emphasizes the organic way FSD absorbs information from the environment and learns from it. But FSD isn’t just for cars. Tesla uses FSD, as well as the same AI4 hardware from its vehicles, for its humanoid robot, Optimus.
Ashok expects that there will be a tremendous wave in robotics over the next 10 to 20 years. A key part of this will be the development of humanoid robots, which he believes will eventually be capable of complex industrial and domestic tasks, interacting with natural language, likely by 2035.
This recent surge in AI capabilities has been heavily driven by advancements in deep learning and the availability of massive computing power. Tesla is making heavy investments in both software and hardware. It recently started construction of its Cortex 2.0 Supercomputer cluster at Giga Texas.
Envisioning Sustainable Abundance & AGI
The conversation also covered the topic of Artificial General Intelligence. Ashok offered a bold prediction that AGI could arrive in as little as the next 10 years, based on the rate of advancement he has seen so far. He further projected that AI-based software could become capable of performing most human tasks, from spreadsheets to robotic athletics, within the next 15 years.
This technological leap, he believes, ties into Tesla’s newer mission statement of sustainable abundance. Sustainable abundance is where the combination of intelligent machines and effective robotics helps to move greater portions of society away from poverty. This has become Tesla’s guiding philosophy since the 2025 All-Hands Meeting earlier this year.
Sustainable abundance should be a win-win scenario for all involved, helping reshape both production and creative industries to help humans do what they want to do rather than what they have to do.
Future of Mobility
As FSD and other AGI tech mature, Ashok believes that all cars being manufactured by 2035 will be autonomous. By then, the very concept of car ownership may transform. Owning a car could become a more “premium experience,” as the convenience and efficiency of self-driving vehicles might make personal ownership less of a necessity for many people. This shift would also necessitate infrastructure improvements to accommodate potentially increased vehicle usage.
We took a look at what the future may look like when autonomous vehicles become commonplace. It’ll have a drastic effect on our society, as parking lots will need to be a fraction of the size they are today, drop-off and loading zones will need to be bigger, and, for the most part, road signs may no longer be needed.
Will need this big time in the future. With autonomous vehicles we'll have affordable premium transport for everyone. This will likely increase traffic due to the increased usage, even though each vehicle is much more efficiently utilized. https://t.co/xvdvmxmzxd
Touching on the Indian vehicle market, Ashok noted that EVs, especially when combined with technologies like FSD, are well suited to the typical travel patterns in India and could make a big difference. With Tesla eyeing a potential factory in India in the coming years, there’s a lot riding on its ability to take on the challenge of Indian roadways, where traffic laws are often neither well enforced nor widely followed.
Ashok’s interview was a fantastic look into what he believes will be next for Tesla - and he left with some parting advice for the next generation of engineers.
His advice: master core concepts and leverage the wealth of online resources available, value talent and innovation over traditional corporate hierarchies, and don’t forget your priorities: work and family.
You can watch the full interview here. Closed captioning is available.