Tesla's latest Full Self-Driving (FSD) Beta v12 is reaching users after an extended rollout over the weekend. This update is not just another iteration; it represents a leap in self-driving technology, primarily due to its integration of end-to-end neural networks for vehicle control. So far, it seems to be living up to the hype.
A Neural Network Driven Approach
At the core of FSD v12 is a shift from traditional programming to neural network-based decision-making. This allows the vehicle to process raw camera footage and vehicle kinematics directly into driving actions, mimicking human cognitive processes more closely than ever before. Ashok Elluswamy, Director of Autopilot Software at Tesla, highlighted the monumental effort to surpass the capabilities of the previous v11, setting a new standard for FSD's future.
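To make "end-to-end" concrete: instead of separate hand-coded modules for perception, planning, and control, a single learned function maps sensor input straight to driving actions. The toy sketch below uses invented sizes and random weights and has nothing in common with Tesla's actual network; it only illustrates the shape of that interface.

```python
import math
import random

# Toy illustration of the end-to-end idea: one learned function maps raw
# camera input plus vehicle kinematics straight to driving actions, with no
# hand-written rules in between. All sizes and weights here are invented.

random.seed(0)

CAMERA_FEATURES = 16  # stand-in for processed camera pixels
KINEMATICS = 4        # e.g. speed, yaw rate, acceleration, steering angle
HIDDEN = 8
CONTROLS = 2          # steering command, acceleration command

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

W1 = rand_matrix(CAMERA_FEATURES + KINEMATICS, HIDDEN)
W2 = rand_matrix(HIDDEN, CONTROLS)

def layer(x, w):
    """One dense layer with tanh activation, keeping outputs bounded."""
    return [math.tanh(sum(xi * w[i][j] for i, xi in enumerate(x)))
            for j in range(len(w[0]))]

def driving_policy(camera, kinematics):
    """Raw perception + vehicle state in, driving actions out - one function."""
    return layer(layer(camera + kinematics, W1), W2)

controls = driving_policy([0.5] * CAMERA_FEATURES, [0.0] * KINEMATICS)
print(len(controls))  # 2 control outputs, each bounded to [-1, 1]
```

The point of the sketch is the signature of `driving_policy`: everything between sensors and controls is learned weights, which is what distinguishes v12 from the hand-coded logic it replaced.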
Ability to Reverse Is Coming
The release has garnered widespread acclaim, with tech leaders like Michael Dell praising its capabilities and likening the car's performance to human-like driving proficiency. Many in the Tesla community have been posting their weekend drives, including Chuck Cook, who was amazed at one point of his drive, referring to a maneuver as “Robo-taxi navigation.” Cook believed he was in too tight a spot to pull off a U-turn, but his Tesla managed it. Elluswamy commented: “Reverse coming soon when the Actually Smart Summon and the FSD models merge together over the next few releases.”
This clip blew my mind, next level for FSD Beta v12.3 in finding its way out of a bad situation. Nice work @Tesla_AI
There were numerous examples of v12.3 navigating complex driving scenarios easily, showcasing significant improvements over earlier versions.
Next-Level Capabilities on the Horizon
Elon Musk has teased that v12.4 will introduce even more advanced features, noting that Tesla's training compute constraints have eased considerably. In fact, he was so thrilled with the next update that he said it could have been called version 13. He posted: “V12.4 is another big jump in capabilities. Our constraint in training compute is much improved.”
Tesla’s FSD trajectory seems to be hitting a new level, as it appears to be headed toward approval of use on roadways in Europe. While it is taking longer than initially believed, the dream of autonomous driving seems to be getting closer to reality.
Just over a week into the Robotaxi launch, Tesla began laying the groundwork for a more scalable remote supervision model, which will be key to achieving success with the Robotaxi Network.
About a week ago, Elon Musk posted on X that Tesla will likely reach the crucial safety threshold to enable this shift within a month or two. While that means at least another month of in-vehicle Safety Monitors, it does provide us with a timeline of what to expect.
“As soon as we feel it is safe to do so. Probably within a month or two. We continue to improve the Tesla AI with each mile driven.” - Elon Musk
This timeline came in response to a question about Tesla’s plans for the ratio of autonomous vehicles to remote supervisors. The more vehicles a single human can supervise, the better, especially if that ratio can be pushed to something dramatic, like 100:1. A single human operator could then manage an entire city of Robotaxis, which will be critical to making the Robotaxi Network profitable.
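The economics behind that ratio are simple division: the cost of one supervisor is spread over every vehicle they watch. A back-of-the-envelope sketch, where the $30/hour supervisor cost is an invented placeholder and not a figure Tesla has published:

```python
# Hypothetical supervision labor cost per vehicle-hour at different
# vehicles-per-supervisor ratios. The wage below is an assumption for
# illustration only, not a published Tesla number.

def supervision_cost_per_vehicle_hour(hourly_wage: float,
                                      vehicles_per_supervisor: int) -> float:
    """Labor cost attributed to each vehicle for one hour of operation."""
    return hourly_wage / vehicles_per_supervisor

WAGE = 30.0  # assumed fully loaded hourly cost of one remote supervisor

for ratio in (1, 3, 100):
    cost = supervision_cost_per_vehicle_hour(WAGE, ratio)
    print(f"{ratio:>3} vehicles per supervisor -> ${cost:.2f} per vehicle-hour")
```

At 1:1 the supervisor costs as much as a human driver would; at 100:1 the labor overhead per ride becomes almost negligible, which is why the ratio is the number to watch.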
While Tesla works towards that ambitious future, it is also taking immediate steps to improve the current user experience during the Austin pilot program, where 15-minute wait times have become the norm.
Solving for Wait Times
According to Eric E, one of Tesla’s principal engineers on Robotaxi, the current 15-minute wait times are a classic logistics challenge: the supply of vehicles is lower than the demand for rides. Tesla is pursuing a two-pronged solution.
First, Tesla is directly increasing supply by hiring more Safety Monitors/Vehicle Operators in Austin, even hosting an on-site hiring event.
We're looking to hire more Vehicle Operators in Austin, TX to accelerate Robotaxi deployment. We will be hosting an onsite hiring event next Thursday. Please consider applying to the official job posting and completing this hiring event form:
Second, Tesla is working to make FSD and the Robotaxi fleet management software faster and smarter. This means they are utilizing the data from the pilot to better orchestrate the fleet by predicting demand and pre-positioning vehicles in prime locations to reduce wait times. After dropping someone off, the vehicle can start traveling to areas of higher demand, even if someone hasn’t booked a ride yet.
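A toy version of that pre-positioning logic: given per-zone demand forecasts, send each idle vehicle to the zone with the largest predicted shortfall. The zone names, numbers, and greedy rule are all invented for illustration; Tesla has not published its dispatch algorithm.

```python
# Minimal greedy pre-positioning sketch, assuming the fleet software has
# per-zone demand forecasts. Everything here is hypothetical, not Tesla's
# actual orchestration logic.

def preposition(idle_vehicles: int,
                forecast: dict,
                staged: dict) -> dict:
    """Greedily send idle vehicles to zones with the largest forecast shortfall."""
    plan = {zone: 0 for zone in forecast}
    for _ in range(idle_vehicles):
        # shortfall = predicted rides minus vehicles already staged or assigned
        zone = max(forecast, key=lambda z: forecast[z] - staged[z] - plan[z])
        plan[zone] += 1
    return plan

forecast = {"downtown": 12, "airport": 7, "suburbs": 2}  # predicted rides next interval
staged = {"downtown": 5, "airport": 1, "suburbs": 2}     # vehicles already positioned
print(preposition(6, forecast, staged))
# {'downtown': 4, 'airport': 2, 'suburbs': 0}
```

Even this crude rule captures the idea in the paragraph above: a car finishing a drop-off doesn't sit still, it moves toward where the next ride is most likely to appear.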
Next Up: Remote Supervision
These immediate fixes are all in service of that much larger goal. Scaling the Robotaxi Network isn’t just about having more cars; it’s about increasing the number of vehicles a single human can safely supervise remotely, which is a requirement for Robotaxi to turn a profit.
Elon’s comments give us a rough timeline: a more flexible and favorable ratio, such as 3:1 (still far from the ideal 100:1), is likely to be achieved within a few months.
Tesla is committed to safety, as evidenced by the safety monitors in the vehicle. A single incident could not only tarnish the public’s view of the Robotaxi Network but could also halt Tesla’s operations altogether.
The data gathered from more Robotaxis on the road is crucial to the whole project. Tesla is gathering more data and issuing newer FSD builds specific to the Robotaxi.
As FSD requires less remote oversight per mile driven autonomously, Tesla can safely increase the number of vehicles per remote supervisor, moving the service closer to its ultimate goal.
Tesla has laid out an aggressive roadmap for the Robotaxi Network and its next few phases. We’ll have to wait and see how it plays out over the next few months, and whether Tesla feels comfortable enough to expand the geofence and remove safety monitors.
Following the recent news that Grok is almost ready for Tesla vehicles, Elon Musk confirmed on X that the next major step involves Optimus, Tesla’s humanoid robot. xAI’s advanced Grok models will eventually serve as the voice and brain of Optimus, a convergence of Musk’s two biggest AI ventures - Tesla and xAI.
This will pair a physical humanoid robot - the brawn - with a new brain, Grok. The integration is about more than giving Optimus a voice: it suggests Tesla is thinking ahead and intends to use Grok to understand the environment around Optimus, while FSD handles the robot’s movements.
The combination of Optimus and Grok creates a relationship where each component plays to its strengths.
For years, Tesla’s robotics team has focused on the immense challenge of physical autonomy. Optimus learns complex tasks by watching humans, effectively training itself from video. This helps Optimus develop the physical dexterity needed to work in the real world. This is the brawn: the ability to navigate, manipulate objects, and perform useful work.
Grok provides the conversational brain, adding a layer of natural language understanding, reasoning, and interaction. Instead of needing a computer, a specialized app, or pre-programmed commands to instruct Optimus, a user will be able to simply talk to it naturally. This makes Optimus far more approachable and useful, especially for tasks in dynamic environments such as the workplace or the home.
xAI and Tesla
Viewed from a different perspective, this move isn’t just about upgrading one product. It is the clearest evidence yet that xAI and Tesla are collaborating to build a single, unified AI platform. Musk’s biographer, Walter Isaacson, believes Tesla and xAI will merge, and seeing both play critical roles in creating Optimus suggests that may very well be the case.
Transformation to a Humanoid Robot
The confirmation of Grok in Optimus is one of the most significant milestones for the project to date. While Optimus’s ability to walk and work (and dance) is already an incredible engineering feat, those have all been physical abilities. Adding the ability to interact with Optimus in a human-like way will transform it from a machine into a true, general-purpose humanoid robot.
The ability to understand nuanced requests, ask clarifying questions, and respond intelligently is what will ultimately make Optimus a daily fixture in our lives.