Tesla Releases FSD V13.2: Adds Ability to Reverse, Start FSD from Park, Autopark at Destination and Much More

By Karan Singh
Image: DirtyTesla/YouTube

Last night, Tesla finally launched FSD V13.2, bringing a bevy of new features to its early access testers in update 2024.39.10. While they just missed the loose Thanksgiving deadline, they still managed to deliver it in November - another big win for the Tesla AI team.

Early Access Only

FSD V13.2 started rolling out to early access testers, who generally get hands-on with the latest builds ahead of everyone else. They’re effectively Tesla’s trusted testers who aren’t running internal builds, and they help catch scenarios that fall outside Tesla’s fairly extensive safety training suite.

If no major issues are spotted, Tesla will begin a slow rollout to more and more vehicles over the next few weeks. Assuming all goes well with this build, it could be in most customers’ hands by Christmas.

Of course, as a reminder, FSD V13 is still limited to vehicles equipped with AI4 - and, for now, that excludes the Cybertruck. The Cybertruck is on its own FSD branch, without access to Actually Smart Summon and Speed Profiles, but with End to End on the Highway. It was recently updated to 2024.39.5 (FSD V12.5.5.3).

FSD V13.2 Features

Let’s take a look at everything in FSD V13.2 - the build going out now with Tesla software update 2024.39.10. While we previously got a short preview of what to expect from V13, we can now see everything included in V13.2.

Start FSD from Park, Reverse & Park at Destination

Parked to Parked has been the goal for FSD for quite a while now. Elon Musk has been calling it the key to demonstrating Tesla’s autonomy framework since the release of V12.3.6 - back when V12.5 was but a glimmer in the Tesla AI team’s eye.

Now, with V13, FSD has integrated three key functionalities.

Unpark: FSD can now be started while you’re still parked. Simply set your destination, then tap and hold the new Start FSD button. The car will shift out of Park and into Drive or Reverse to get to its destination.

Reverse: FSD has finally gained the ability to shift. Not only can the vehicle go into Reverse now, but it can seamlessly shift between Park, Drive, and Reverse all by itself - it can even perform 3-point turns.

Park: When FSD reaches its destination, it will now park itself if it finds an open parking spot near the final location. Tesla says further improvements are coming, and drivers will eventually be able to choose between pulling over or parking in a spot, driveway, or garage.

If everything goes smoothly on a drive, users no longer need to give the vehicle any input at all from its original location to its final parking spot - nothing beyond supervision, unless an intervention is genuinely needed.
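To make the flow concrete, here’s a minimal sketch of that parked-to-parked sequence modeled as a simple gear state machine. Everything here - the class, states, and triggers - is a hypothetical illustration, not Tesla’s internal design.

```python
from enum import Enum, auto

class Gear(Enum):
    PARK = auto()
    REVERSE = auto()
    DRIVE = auto()

class ParkedToParkedDrive:
    """Toy model of the parked-to-parked flow described above.
    All names and triggers are hypothetical, for illustration only."""

    def __init__(self) -> None:
        self.gear = Gear.PARK

    def start_fsd(self, needs_reverse_to_exit: bool) -> None:
        # "Unpark": shift out of Park once the driver taps and holds
        # the Start FSD button with a destination set.
        self.gear = Gear.REVERSE if needs_reverse_to_exit else Gear.DRIVE

    def three_point_turn(self) -> None:
        # "Reverse": the system can shift between Drive and Reverse
        # on its own, e.g. for a 3-point turn.
        for gear in (Gear.DRIVE, Gear.REVERSE, Gear.DRIVE):
            self.gear = gear

    def arrive(self, open_spot_found: bool) -> None:
        # "Park": shift back into Park at the destination if an
        # open spot is found near the final location.
        if open_spot_found:
            self.gear = Gear.PARK

drive = ParkedToParkedDrive()
drive.start_fsd(needs_reverse_to_exit=True)
drive.arrive(open_spot_found=True)
assert drive.gear is Gear.PARK  # parked-to-parked, no driver input
```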

Full Resolution AI4 Video Input

Until now, FSD V12.5 and V12.6 have used reduced image quality and framerates to match the lower resolution and refresh rate of Hardware 3’s cameras. For the first time, FSD will use AI4’s (previously known as Hardware 4) cameras at their higher resolution and at 36 frames per second.

In short, that means better image quality both for training and in real-world use, and higher accuracy for things like reading signage and estimating distance.
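For a rough sense of scale, here’s a quick back-of-the-envelope calculation. The camera resolutions below are commonly reported figures for Hardware 3 and AI4, not official Tesla specs, and earlier builds also ran at lower framerates, so the real-world jump is even larger than this per-frame comparison suggests.

```python
# Rough sanity check of the jump from HW3-equivalent input to
# full-resolution AI4 input. Resolutions are commonly reported
# figures, not official Tesla specs.
HW3_RES = (1280, 960)    # ~1.2 MP per camera (reported)
AI4_RES = (2896, 1876)   # ~5.4 MP per camera (reported)
FPS = 36                 # frame rate cited for V13

hw3_px_per_sec = HW3_RES[0] * HW3_RES[1] * FPS
ai4_px_per_sec = AI4_RES[0] * AI4_RES[1] * FPS

print(f"HW3-equivalent input: {hw3_px_per_sec / 1e6:.0f} MP/s per camera")  # 44
print(f"Full AI4 input:       {ai4_px_per_sec / 1e6:.0f} MP/s per camera")  # 196
print(f"Ratio: {ai4_px_per_sec / hw3_px_per_sec:.1f}x more pixels")         # 4.4x
```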

Speed Profiles for All Roads

FSD V12.5.6.2 brought new and improved Speed Profiles to both city streets and highways, including the new Hurry Mode, which replaced Assertive Mode. However, on V12.5.6.2 there was a limitation: roads needed a fairly high minimum speed limit of 50 mph (80 km/h) or higher. That restriction is now gone - Speed Profiles apply to all speed limits on city streets.

Native AI4 Inputs and Neural Network Architecture

Similar to the video resolution and refresh rate, AI4 has a lot of new hardware features that help optimize how fast FSD’s AI model can run. We previously dug into how Tesla’s Universal Translator streamlines FSD for each platform - this is a case of having fewer constraints and more room for optimization compared to Hardware 3.

5x Training Compute

Cortex, Tesla’s massive new supercomputer cluster at Giga Texas, is now online and crunching data at a truly staggering rate. It’s one of the fastest AI clusters in the world - and it’s dedicated to FSD. With FSD close to feature complete, Tesla now has 5x the training compute crunching away at the March of 9s.

Faster Decision Making

Tesla refactored its image-processing pipeline in FSD V13 - another huge set of changes to improve performance. This release delivers 2x faster photon-to-control latency: the time from light hitting the camera sensors to the car issuing a control output. In layman’s terms, that’s faster decision-making - it was already faster than a human, and now it’s twice as fast as before.
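To put that in perspective, here’s a small illustration of what halving reaction latency means in distance traveled at highway speed. Tesla hasn’t published absolute latency numbers, so the 100 ms baseline below is purely a hypothetical round figure.

```python
# Illustrative only: Tesla hasn't published absolute latency figures,
# so the baseline here is a hypothetical round number.
baseline_latency_s = 0.100               # hypothetical pre-V13 latency
v13_latency_s = baseline_latency_s / 2   # "2x faster" per Tesla's notes

speed_mph = 60
speed_m_per_s = speed_mph * 0.44704      # mph -> m/s

for label, latency in [("before", baseline_latency_s), ("V13", v13_latency_s)]:
    print(f"{label}: {speed_m_per_s * latency:.1f} m traveled before reacting")
# before: 2.7 m traveled before reacting
# V13: 1.3 m traveled before reacting
```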

Collecting Data for Audio Input

One of the features Tesla lists in FSD V13.2 is the ability for the vehicle to collect and share audio snippets. The vehicle will ask whether you’re okay with sharing 10-second audio clips with Tesla so that, in the future, it can detect emergency vehicles by sound.

Camera Visibility Detection

The vehicle will now prompt you at the end of a drive if visibility issues were detected. The new option lives under Controls > Service > Camera Visibility. Tesla will also retain images from the cameras when the vehicle experiences visibility issues during a drive so they can be reviewed later.

As of FSD V12.5.6.2, your Tesla will warn you when its cameras need cleaning - and guide you through cleaning them too. This, along with less intrusive notifications that FSD is degraded, will be a fantastic change for those who aren’t always driving in sunny weather.

Better Collision Avoidance

With all the changes to the AI model in V13 come changes to how the system perceives and handles collision avoidance.

FSD has already earned a reputation for cleanly avoiding T-bone collisions from red-light runners, and it’s only going to get better from here.

Vehicle to Fleet Communication

One of the features V12.5 was supposed to bring was fleet-based dynamic routing. If a route was closed, your Tesla would turn around and navigate through an alternative path - and also warn the rest of the fleet of the closure.

V13 lets AI4 vehicles do this, and it’s another piece of the Robotaxi network Tesla needs in place to ensure that once they deploy their first fleets, they function well. With new job postings for Robotaxi engineers and talks with Palo Alto about launching a Robotaxi service, things are on track for both Unsupervised FSD and Robotaxi sometime in 2025.

Better Traffic Controller

Another big update is a redesigned traffic controller, which makes for smoother and more accurate tracking of other vehicles and objects around the car. We dug into how the traffic controller processes information in a previous article, where you can learn all about how Tesla’s signal processing works.

Upcoming Improvements

Tesla has mentioned a lot of upcoming improvements planned for FSD V13 too, including bigger models, audio inputs, better navigation and routing, fewer false-braking events, destination options, and better camera-occlusion handling. That’s a pretty big list for V13, so we’ll keep an eye on these features as they arrive in future releases.

What About Hardware 3?

Tesla’s previous roadmap update didn’t mention HW3 getting FSD V13. Instead, those on Hardware 3 will need to keep waiting for Tesla to optimize another FSD model - until then, they’ll remain on FSD V12.5.4.2, which is still a fairly capable build.

Tesla has mentioned that they could potentially upgrade HW3 computers - not cameras - if engineers aren’t able to get Unsupervised FSD working on HW3. While there isn’t much to share yet, it certainly looks like HW3 owners will receive some sort of free hardware upgrade in the future; it’s just not clear when, or what it will involve.

Keep an eye out in the new year for updates on what’s coming next for HW3. We hope to see an optimized V13 build eventually make its way to HW3 - Tesla has been working hard on this, so let’s give them some time.

Release Date

For everyone who’s been patiently waiting to see more of FSD V13 since the sneak peek at We, Robot, you’ll be waiting a bit longer. This build is currently going out to early access testers, who serve as a critical step in Tesla’s safety-verification process.

Once Tesla is comfortable with the disengagement rate, they’ll evaluate the results, make any final changes, and begin rolling it out in waves. Fingers crossed, wider waves of V13 will make their way to AI4 S3XY vehicles and the Cybertruck by Christmas.

Tesla Plans Massive 10x Robotaxi Expansion: A Look at the Potential New Area

By Karan Singh
Image: Not a Tesla App

With Tesla’s first major expansion of the Robotaxi Geofence now complete and operational, they’ve been hard at work with validation in new locations - and some are quite the drive from the current Austin Geofence.

Validation fleet vehicles have been spotted operating in a wider perimeter around the city, from rural roads on the west end to the more complex areas closer to the airport. Tesla mentioned during their earnings call that Robotaxi has already completed 7,000 miles in Austin, and that its area of operation will expand to roughly 10 times the current size. This lines up with the validation vehicles we’ve been tracking around Austin.

Based on the spread of the new sightings, the potential next geofence could cover a staggering 450 square miles - a tenfold increase from the current service area of roughly 42 square miles. You can check this out in our map below with the sightings we’re tracking.

Expanding into these new areas would match Tesla’s stated tenfold increase, and the new footprint would cover approximately 10% of the 4,500-square-mile Austin metropolitan area. If Tesla can offer Robotaxi service across that entire area, it would prove they can tackle just about any city in the United States.
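For those following along, here’s a quick sanity check of those figures, using the approximate areas cited above:

```python
# Quick check of the figures above (all areas approximate).
current_area_sq_mi = 42      # current Austin service area
potential_area_sq_mi = 450   # potential next geofence
metro_area_sq_mi = 4500      # Austin metropolitan area

print(f"Expansion: {potential_area_sq_mi / current_area_sq_mi:.1f}x")    # ~10.7x
print(f"Share of metro: {potential_area_sq_mi / metro_area_sq_mi:.0%}")  # 10%
```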

From Urban Core to Rural Roads

The locations of the validation vehicles show a clear intent to move beyond the initial urban and suburban core and prepare the Robotaxi service for a much wider range of uses.

In the west, validation fleet vehicles have been spotted as far as Marble Falls - a much more rural environment that features different road types, higher speed limits, and potentially different challenges. 

In the south, Tesla has been expanding towards Kyle, part of the growing Austin-San Antonio suburban corridor along I-35. San Antonio is only 80 miles away (roughly a 90-minute drive) and could easily become part of the existing Robotaxi area if Tesla obtains regulatory approval there.

In the east, we haven’t spotted any new validation vehicles. This is likely because Tesla’s validation vehicles originate from Giga Texas, which sits east of Austin. We won’t really know whether Tesla is expanding in this direction until vehicles start pushing past Giga Texas toward Houston.

Finally, some validation vehicles have been spotted just north of the newly expanded boundaries, meaning Tesla isn’t done in that direction either. This area holds the largest suburban stretches of Austin, which so far haven’t been serviced by any form of autonomous vehicle.

Rapid Scaling

This new, widespread validation effort confirms what we already know: Tesla pushes for an intensive period of public data gathering and system testing in a new area right before expanding the geofence. The sheer scale of this validation zone tells us Tesla isn’t taking this slowly - the next step will be a great leap instead, something they essentially confirmed during the Q&A session on the recent earnings call. The goal is clearly to bring the entire Austin metropolitan area into the Robotaxi network.

While the previous expansion showed how Tesla can scale the network, this new phase of validation testing demonstrates just how fast they can validate and expand it. Validating across rural, suburban, and urban areas simultaneously shows their confidence in the new Robotaxi FSD builds.

Tesla has said these Robotaxi improvements will make their way to customer FSD builds sometime in Q3 2025, so there is a lot to look forward to.

Caught on Video: Tesla FSD Tackles a Toll Booth — Here’s How It Pulled It Off

By Karan Singh
Image: @DirtyTesLa on X

For years, the progress of Tesla’s FSD has been measured by smoother turns, better lane centering, and more confident unprotected left turns. But as the system matures, a new, more subtle form of intelligence is emerging - one that shifts its attention to the human nuances of navigating roads. A new video posted to X shows the most recent FSD build, V13.2.9, demonstrating this in a remarkable real-world scenario.

Toll Booth Magic

In the video, a Model Y running FSD pulls up to a toll booth and smoothly comes to a stop, allowing the driver to handle payment. The car waits patiently as the driver interacts with the attendant. Then, at the precise moment the toll booth operator finishes the transaction and says “Have a great day”, the vehicle starts moving, proceeding through the booth - all without any input from the driver.

Notably, there’s no gate at this toll booth - the interaction happened entirely naturally with FSD.

How It Really Works

While the timing was perfect, FSD wasn’t listening to the conversation for clues (maybe one day, with Grok?). The reality, as explained by Ashok Elluswamy, Tesla’s VP of AI, is even more impressive.

FSD simply uses the cameras on the side of the vehicle to watch the exchange between the driver and the attendant. The neural network has been trained on enough data that it can visually recognize the conclusion of a transaction - money or a card changing hands, then hands pulling away - and understands that this is the trigger to proceed.
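As a thought experiment, the trigger might look conceptually like the toy sketch below: a vision model scoring each side-camera frame for “transaction complete” and resuming once confident. The function names and threshold are entirely hypothetical - Tesla’s end-to-end network doesn’t expose an interface like this.

```python
import random

def transaction_complete_prob(frame) -> float:
    """Stand-in for a learned vision model that scores whether the
    driver-attendant exchange looks finished (money or card exchanged,
    hands pulling away). A real system would be a neural network;
    this stub returns a random score purely for illustration."""
    return random.random()

THRESHOLD = 0.9  # hypothetical confidence bar to resume driving

def watch_toll_booth(side_camera_frames) -> str:
    for frame in side_camera_frames:
        if transaction_complete_prob(frame) >= THRESHOLD:
            return "proceed"  # creep forward through the booth
    return "hold"             # keep waiting at the window

print(watch_toll_booth(range(100)))  # almost certainly "proceed"
```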

The Bigger Picture

This capability is far more significant than just a simple party trick. FSD is gaining the ability to perceive and navigate a world built for humans in the most human-like fashion possible.

If FSD can learn what a completed toll transaction looks like, it’s an example of the countless other complex scenarios it’ll be able to handle in the future. This same visual understanding could be applied to navigating a fast-food drive-thru, interacting with a parking garage attendant, passing through a security checkpoint, or boarding a ferry or vehicle train - all things we thought would come much later.

These human-focused interactions will eventually become even more useful as FSD grows more confident in responding to humans on the road - like when a police officer directs traffic a certain way, or a construction worker flags you through a site. These are real-world events that happen every day, and it isn’t surprising to see FSD picking up on the subtleties and nuances of human interaction.

This isn’t a pre-programmed feature for a specific toll booth. It is an emergent capability of the end-to-end AI neural nets. By learning from millions of videos across billions of miles, FSD is beginning to build a true contextual understanding of the world. The best part - with a 10x context increase on its way, this understanding will grow rapidly and become far more powerful.

These small, subtle moments of intelligence are the necessary steps to a truly robust autonomous system that can handle the messy, unpredictable nature of human society.
