Tesla FSD Beta v12 Auto Parks, Completes U-Turns, But Removes Traffic-Aware Cruise Control Ability

By Kevin Armstrong
Tesla has released FSD Beta v12 to some customers

Tesla's FSD Beta v12.2.1, included in update 2023.44.30.20, recently started going out to some owners, and videos of it in action are now appearing on X. There are several examples of impressive technology at work, but also evidence that more work is needed.

Ashok Elluswamy, Tesla's Director of Autopilot Software, recently highlighted the sophistication of FSD Beta v12 on X, emphasizing how the system's end-to-end approach is tackling complex driving scenarios with remarkable ease. He was responding to a video of FSD maneuvering around a large puddle.

FSD V12 Does U-Turns

One of the standout features of FSD Beta v12 is its ability to execute U-turns seamlessly when required by the route. Real-world examples show both the good and the bad of this highly advanced maneuver. X user AI DRIVR, an account that has posted several high-quality videos of v12.2.1 in action, demonstrates a flawless U-turn.

Unfortunately, not all U-turns posted on X are as pretty; Randolph Kim has been experimenting with several scenarios. While his later videos showed better behavior with U-turns and roundabouts, the earlier attempts required disengagement.

Parking Mode / First Glimpse at Park Seek

In our first glimpse of FSD v12 during Musk's livestream, we noticed a new behavior when the vehicle reached its destination: instead of just stopping, it pulled over to the side of the road. The newest release appears to go one step further.

In a video by ArthurFromX, the vehicle is navigating to a parking lot. Not only does it reach the lot, but it hunts around for a spot and then parks itself without any additional instructions.

This could be our first glimpse at Tesla’s upcoming Park Seek feature that will eventually let the vehicle drop you off at the door and then go park itself.

Return of the Snapshot Button

Tesla appears to have reintroduced the Snapshot button in this update, at least for some owners. The Snapshot button allows drivers to send additional information to Tesla regarding Autopilot's performance. Together with the existing voice command feedback option, it provides Tesla with invaluable data to further improve the FSD system.

Automatic Set Speed Offset

Another noteworthy addition is the Automatic Set Speed Offset feature, which lets the vehicle automatically adjust its set speed based on factors such as road type, traffic flow, and environmental conditions. The video below shows this feature in action. The feature is turned off by default and currently applies only to street-level roads, but it's a shift toward more human-like behavior for FSD Beta.
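To make the idea concrete, here is a purely hypothetical sketch of what a context-dependent set-speed offset could look like. This is not Tesla's implementation; the function, thresholds, and values are all illustrative assumptions:

```python
# Purely hypothetical illustration - not Tesla's actual logic.
# Idea: instead of a fixed driver-chosen offset, the offset adapts to context.

def automatic_set_speed_offset(road_type: str, traffic_flow_mph: float,
                               speed_limit_mph: float) -> float:
    """Return an illustrative set-speed offset in mph relative to the posted limit."""
    # Per the article, the feature currently applies only to street-level roads.
    if road_type != "street":
        return 0.0
    # Loosely follow surrounding traffic, capped to a modest margin.
    offset = traffic_flow_mph - speed_limit_mph
    return max(-5.0, min(offset, 5.0))

print(automatic_set_speed_offset("street", traffic_flow_mph=38.0, speed_limit_mph=35.0))  # 3.0
```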

TACC is No Longer Accessible

Tesla recently revised the Autopilot activation method to reduce confusion, giving drivers two choices: a single pull of the stalk to enable FSD Beta, or the traditional two pulls. With FSD Beta v12, however, drivers are now required to use the single-pull method.

Traffic-Aware Cruise Control (TACC) has traditionally been activated with one pull of the stalk and Autopilot with two, but with the single-pull activation method, TACC becomes unavailable. That wasn't a big issue while drivers could still choose the two-pull option, but with v12, Tesla now requires FSD Beta to use single-pull activation.

This means that if a driver chooses to use FSD Beta, TACC is no longer accessible. The only way to enable it is to go to Controls > Autopilot, turn off FSD Beta, and choose Autosteer (or TACC) instead. However, re-enabling FSD Beta later requires the vehicle to be in Park, so switching between Autosteer and FSD Beta isn't practical. For drivers who rely on TACC, this could be a significant drawback of this release.


Several drivers have praised FSD Beta v12's ability to navigate complex situations, its better decision-making, and its smoother behavior. However, as with any cutting-edge technology, there have been instances where the system's responses have room for improvement, highlighting the importance of its continued development.


Tesla Included FSD V12.6.1 and V13.2.4 in the Same Update: What Caused This and What It Means

By Karan Singh

Tesla launched two FSD updates simultaneously on Saturday night, and what’s most interesting is that they arrived on the same software version. We’ll dig into that a little later, but for now, there’s good news for everyone. For Hardware 3 owners, FSD V12.6.1 is launching to all vehicles, including the Model 3 and Model Y. For AI4 owners, FSD V13.2.4 is launching, starting with the Cybertruck.

FSD V13.2.4

A new V13 build is now rolling out to the Cybertruck and is expected to arrive for the rest of the AI4 fleet soon. This build appears to be focused on bug fixes: the Cybertruck's release notes are unchanged, and it's unlikely to include any new features when it arrives on other vehicles.

While this update focuses on bug fixes, Tesla’s already working on bigger features for FSD V13.3, which we have already confirmed to include improvements to highway following and speed control.

FSD V12.6.1

FSD V12.6.1 builds upon V12.6, which is the latest FSD version for HW3 vehicles. While FSD V12.6 was only released for the redesigned Model S and Model X with HW3, FSD V12.6.1 is adding support for the Model 3 and Model Y.

While this is only a bug-fix release for users coming from FSD V12.6, it includes massive improvements for anyone coming from an older FSD version. Two of the biggest changes are the new end-to-end highway stack, which now uses FSD V12 for highway driving, and a redesigned controller that lets FSD drive with V13-like smoothness.

It also adds speed profiles, earlier lane changes, and more. You can read our in-depth look at all the changes in FSD V12.6.

Same Update, Multiple FSD Builds

What's interesting about this software version is that it "includes" two FSD updates: V12.6.1 for HW3 vehicles and V13.2.4 for HW4 vehicles. It's less special, though, once you understand what's happening under the hood.

The vehicle's firmware and the Autopilot firmware are actually completely separate. While downloading a firmware update may look like a single process, the vehicle is performing several steps during this period. First, it downloads the vehicle firmware; upon unpacking that update, it's told which Autopilot/FSD firmware to download next.

While the FSD firmware is separate, the vehicle can't download just any FSD build. The FSD version is hard-coded in the vehicle firmware that was just downloaded, which helps Tesla keep the infotainment and Autopilot firmware tightly coupled and leads to fewer issues.

What we’re seeing here is that HW3 vehicles are being told to download one FSD version, while HW4 vehicles are being told to download a different version.
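As a rough sketch of the idea (this is not Tesla's actual update code; the names and structure below are hypothetical), the vehicle firmware effectively carries a hard-coded mapping from hardware platform to the one FSD build it's allowed to fetch:

```python
# Hypothetical sketch of how one vehicle firmware version can pin different
# FSD builds per Autopilot hardware platform. Names and structure are
# illustrative only - this is not Tesla's actual update mechanism.

# The FSD build is hard-coded per hardware platform inside the vehicle firmware.
PINNED_FSD_BUILDS = {
    "HW3": "FSD V12.6.1",
    "HW4": "FSD V13.2.4",
}

def fsd_build_for(platform: str) -> str:
    """Return the only FSD build this vehicle firmware is allowed to install."""
    return PINNED_FSD_BUILDS[platform]

for hw in ("HW3", "HW4"):
    print(f"{hw} -> {fsd_build_for(hw)}")
```

In other words, one vehicle firmware version can legitimately point HW3 and HW4 cars at two different FSD builds without the update process itself changing.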

While this is the first time Tesla has had two FSD versions tied to the same vehicle software version, the underlying process hasn't changed, and it won't lead to faster FSD updates or the ability to download FSD separately. It's simply the direct result of the divergence of HW3 and HW4.

While HW3 and HW4 remained on essentially the same FSD version until recently, it's now necessary to deploy different versions for the two platforms. We expect this to be the norm going forward, with HW3 on a noticeably different FSD version than HW4. Not every update will include two different FSD versions, but we may see it occasionally, depending on which vehicle firmware features Autopilot depends on.

Thanks to Greentheonly for helping us understand what happened with this release and for the insight into Tesla’s processes.

Nvidia's Cosmos Offers Synthetic Training Data, Following Tesla's Lead

By Karan Singh

At the 2025 Consumer Electronics Show, Nvidia showed off its new consumer graphics cards, home-scale compute machines, and commercial AI offerings. One of these offerings included the new Nvidia Cosmos training system.

Nvidia is a close partner of Tesla - in fact, it produces and supplies the GPUs that Tesla uses to train FSD: the H100s and, soon, H200s located at the new Cortex Supercomputing Cluster at Giga Texas. With Cosmos, Nvidia will also challenge Tesla's lead in developing and deploying synthetic training data for autonomous driving - something Tesla is already doing.

However, this is far more important for other manufacturers. We’re going to take a look at what Nvidia is offering and how it compares to what Tesla is already doing. We’ve done a few deep dives into how Tesla’s FSD works, how Tesla streamlines FSD, and, more recently, how they optimize FSD. If you want to get familiar with a bit of the lingo and the background knowledge, we recommend reading those articles before continuing, but we’ll do our best to explain how all this synthetic data works.

Nvidia Cosmos

Nvidia's Cosmos is a generative AI model created to accelerate the development of physical AI systems, including robots and autonomous vehicles. Remember - Tesla's FSD is the same software that powers its humanoid robot, Optimus. Like Tesla, Nvidia is aiming to tackle physical, real-world deployments of AI anywhere from your home to your street to your workplace.

Cosmos is a physics-aware engine that learns from real-world video and builds simulated video inputs. It tokenizes that data to help AI systems learn more quickly, all based on the video fed into the system. Sound familiar? That's exactly how FSD learns as well.

Cosmos can also run sensor-fused simulations. That means it can take multiple input sources - video, LiDAR, audio, or whatever else the user provides - and fuse them into a single world simulation for an AI model to learn from. This helps train, test, and validate autonomous vehicle behavior in a safe, synthetic format while also providing a massive breadth of data.
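Conceptually, sensor fusion for simulation boils down to aligning multiple modalities on a shared timeline and merging them into one world-state sample a model can train on. The sketch below is purely illustrative - it is not Nvidia's Cosmos API, and every name in it is hypothetical:

```python
from dataclasses import dataclass, field

# Purely illustrative - not Nvidia's Cosmos API. The point is only that
# multiple sensor streams are aligned on a shared timestamp and merged into
# one world-state sample for training.

@dataclass
class FusedSample:
    timestamp_s: float
    camera_tokens: list                               # tokenized video patches (hypothetical IDs)
    lidar_points: list = field(default_factory=list)  # (x, y, z) returns, if LiDAR is provided
    audio_level: float = 0.0                          # stand-in for an audio feature

def fuse(timestamp_s, camera_tokens, lidar_points=None, audio_level=0.0):
    """Merge whatever modalities are available at one timestamp into a single sample."""
    return FusedSample(timestamp_s, camera_tokens, lidar_points or [], audio_level)

# Example: one fused sample built from three modalities.
sample = fuse(12.5, camera_tokens=[101, 7, 42],
              lidar_points=[(1.0, 0.2, 0.0)], audio_level=0.3)
print(sample)
```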

Data Scaling

Of course, Cosmos itself still requires video input - the more video you feed it, the more simulations it can generate and run. Data scaling is a necessity for AI applications: the more data you feed the system, the more scenarios it can build to train itself on.

Synthetic data also has a problem - how realistic is it? Can it predict real-world situations? In early 2024, Elon Musk commented on this problem, noting that data scales effectively without limit both in the real world and in simulation, but that the better way to gather testing data is from the real world. After all, no AI can fully predict the real world just yet - in fact, that's an excellent quantum computing problem that the brightest minds are working on.

Yun-Ta Tsai, an engineer on Tesla's AI team, also mentioned that writing code or generating scenarios can't cover everything the real world produces - not even the wildest AI hallucinations would come up with it all. There are plenty of optical phenomena and real-world situations that don't fit neatly into the rigid training sets an AI would generate, so real-world data is absolutely essential to building a system that can train a useful real-world AI.

Tesla has billions of miles of real-world video that can be used for training, according to Tesla’s Social Media Team Lead Viv. This much data is essential because even today, FSD encounters “edge cases” that can confuse it, slow it down, or render it incapable of continuing, throwing up the dreaded red hands telling the user to take over.

Cosmos was trained on approximately 20 million hours of footage, including human activities like walking and manipulating objects. By comparison, Tesla's fleet gathers approximately 2,380 hours of real-world video every minute. At that rate, every 140 hours - just shy of 6 days - Tesla's fleet gathers 20 million hours of footage. That's back-of-the-napkin math, calculated at an assumed average speed of 60 mph.
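For anyone who wants to verify that napkin math, here is the arithmetic using the figures above (the 2,380-hours-per-minute collection rate and the 60 mph average speed are the article's own estimates):

```python
# Back-of-the-napkin check of the fleet data-collection rate cited above.
fleet_rate_hours_per_minute = 2_380      # hours of video gathered per minute of wall-clock time
cosmos_training_hours = 20_000_000       # approximate footage Cosmos was trained on

minutes_needed = cosmos_training_hours / fleet_rate_hours_per_minute
hours_needed = minutes_needed / 60

print(f"{hours_needed:.0f} hours")       # ~140 hours
print(f"{hours_needed / 24:.1f} days")   # ~5.8 days, i.e. just shy of 6
```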

Generative Worlds

Both Tesla's FSD and Nvidia's Cosmos can generate highly realistic, physics-based worlds. These life-like environments simulate the movement of people and traffic as well as the real-world placement of obstacles such as curbs, fences, and buildings.

Tesla uses a combination of real-world and synthetic data, weighted heavily toward real-world data. Companies that rely on Cosmos, by contrast, will weight their data heavily toward synthetically created situations, drastically limiting the kinds of cases that appear in their training datasets.

As such, while generative worlds may be useful for quickly validating an AI, we would argue they aren't as useful as real-world data for actually training one.

Overall, Cosmos is an exciting step - others are clearly following in Tesla’s footsteps, but they’re extremely far behind in real-world data. Tesla has built a massive first-mover advantage in AI and autonomy, and others are now playing catch-up.

We're excited to see how Tesla's future deployment of its Dojo supercomputer for data labeling adds to its pre-existing lead, how Cortex will expand, and what competitors bring to the table. After all, competition breeds innovation - and that's how Tesla innovated in the EV space to begin with.
