What’s Coming in Tesla FSD V13

By Karan Singh
Not a Tesla App

As part of an update to its AI roadmap, Tesla announced the features that will be included in FSD v13. Tesla provided many details about what we can expect, and there’s a lot of info to break down.

Tesla’s VP of AI, Ashok Elluswamy, also revealed that FSD v13 is expected to make FSD Unsupervised feature complete. That doesn’t mean that autonomy will be ready, as each feature will still need to work at safety levels higher than a human, but it means every key feature of autonomous vehicles will be present in FSD v13.

Let’s examine the v13 feature list Tesla and Tesla employees have recently provided to see exactly what’s coming.

Higher Resolution Video & Native AI4

FSD v12 was trained on Tesla’s HW3 cameras, with the AI4 cameras downsampled to match. For the first time, Tesla will use AI4's native camera resolution to get the clearest image possible. Tesla is not only increasing the resolution but also raising the capture rate to 36 FPS (frames per second). This should result in much smoother video and let the vehicle detect objects earlier and more precisely. It will be a big boon for FSD, but it comes at the cost of processing all of this additional information.

The HW3 cameras have a resolution of about 1.2 megapixels, while the AI4 cameras have a resolution of 5.44 megapixels. That’s roughly a 4.5x increase in raw resolution, which is a lot of new data for the inference computer and AI models to handle.

Yun-Ti Tsai, a Senior Staff Engineer on Tesla’s AI team, mentioned on X that the total data bandwidth is 1.3 gigapixels per second at 36 Hz, with near-zero latency between capture and inference. This is one of the baseline features for getting v13 off the ground, and it should translate into better vehicle performance, improved sign reading, and lots of little upgrades.
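Those figures are easy to sanity-check. Here is a back-of-envelope sketch in Python using only the numbers quoted above; the implied camera count at the end is our own rough inference, not something Tesla has stated:

```python
# Back-of-envelope check of the camera numbers quoted in the article.
HW3_MP = 1.2             # approximate HW3 camera resolution, megapixels
AI4_MP = 5.44            # AI4 camera resolution, megapixels
FPS = 36                 # stated capture rate
TOTAL_PX_PER_S = 1.3e9   # stated total bandwidth, pixels per second

# Raw per-camera resolution increase from HW3 to AI4
resolution_gain = AI4_MP / HW3_MP
print(f"resolution gain: {resolution_gain:.1f}x")  # ~4.5x, matching the article

# Pixels processed per frame, summed across all cameras
pixels_per_frame_mp = TOTAL_PX_PER_S / FPS / 1e6
print(f"pixels per frame: {pixels_per_frame_mp:.1f} MP")  # ~36.1 MP

# How many full-resolution AI4 streams that bandwidth would cover
implied_streams = TOTAL_PX_PER_S / FPS / (AI4_MP * 1e6)
print(f"implied full-res streams: {implied_streams:.1f}")  # ~6.6
```

The implied count of roughly six to seven full-resolution streams suggests either that not every camera runs at full resolution or that the quoted bandwidth figure is rounded; Tesla hasn’t clarified.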

Bigger Models, Bigger Context, Better Data

The next big item is that Tesla will triple both the size of the FSD model and its context length. In simple terms, FSD will have a lot more information to draw upon, both about what is happening right now (the context length) and from what it has learned in training (the model size). Tesla has made the FSD brain bigger and increased how much it can remember, giving it far more data to work with when making decisions.

Beyond that, Tesla has also massively expanded the data scaling and training compute to match, increasing the amount of training data by 4.2x and training compute by 5x.

Audio Intake

Tesla’s FSD has famously only relied upon visual data—equivalent to what humans can access. LiDAR hasn’t been on Tesla’s books except for model validation, and radar, while used in the past, was mostly phased out.

Now, Tesla AI will integrate audio intake into FSD’s models, with a focus on better handling of emergency vehicles. FSD will soon be able to react to emergency vehicles, even before it sees them. This is big news and is in line with how Tesla has been approaching FSD—through a very human-like lens.

We’re excited to see how these updates pan out, but there was one more thing: Elluswamy also confirmed on X that they’ll add the ability for FSD to honk the horn.

Other Improvements

The other improvements, while major, can be summarized pretty simply: Tesla is focusing on improving smoothness and safety in various ways. The v13 AI will be trained to better predict and adapt for collision avoidance, navigation, and adherence to traffic controls. This will make FSD more predictable for users and other drivers and improve general safety.

Beyond that, Tesla is also working on better aligning the map and navigation inputs with what FSD actually does. In complex situations, FSD may take a different turn or exit even when navigation says to go the other way. This update should close that gap and ensure that your route and FSD’s path planner match closely.

Of course, Tesla will also be adding Unpark, Reverse, and Park capabilities, as well as support for destination options, including parking in a spot, driveway, or garage, or simply pulling over at a specific point, such as an entrance.

Finally, they’re also working on improved camera self-cleaning and better handling of camera occlusion. Currently, FSD can and will clean the front cameras if they are obscured by debris, but only if they are fully blocked; partial blockages don’t trigger the wipers. Additionally, when the B-pillar cameras are blinded by sunlight, FSD tends to have difficulty staying centered in the lane. This update is expected to address both issues.

FSD V13 Release Date

Tesla announced that FSD v13 will be released to employees this week; however, it’ll take several iterations before it’s released to the public. Tesla mentioned that they expect FSD v13 to reach customers around v13.3, and surprisingly, they state this will happen around the Thanksgiving timeframe, just a few weeks away.

Tesla is known for delays with its FSD releases, so we’re cautious about the late November timeline. However, the real takeaway is that FSD v13 is expected to offer a substantial leap in capability over the next few months—even if it’s exclusive to AI4.

Tesla’s Robotaxi Easter Egg: Surprise Tip

By Karan Singh
BLKMDL3

Tesla has always embraced whimsy in its software, packing it with playful Easter eggs and surprises. From transforming the on-screen car into James Bond’s submarine to the ever-entertaining Emissions Testing Mode and the fan-favorite Rainbow Road, these hidden features have become a signature part of Tesla’s software.

Of course, launching a new product like the Robotaxi wouldn’t be complete without a fun little Easter egg of its own. The end-of-ride screen in the Robotaxi app presents a familiar option: “Leave a tip.”

Anyone pleased with their Robotaxi ride may be tempted to leave a tip. However, tapping the button brings up our favorite hedgehog instead of a payment screen.

The app displays a message, alongside the familiar Tesla hedgehog, that simply states “Just kidding.”

While it's a fun prank, it’s also a playful nod to what Tesla really wants to reinforce: the economic advantage of an autonomous Robotaxi network. Without a driver, there is simply no need to tip.

Even Elon is in on the joke. It is a small detail, but it’s all about those small details with Tesla.

First Recorded Tesla Robotaxi Intervention: UPS Truck Encounter [VIDEO]

By Karan Singh
@BLKMDL3 on X

Over the last few days, we’ve seen some exceptionally smooth performance from the latest version of FSD on Tesla’s Robotaxi Network pilot. However, the entire purpose of an early access program with Safety Monitors is to identify and learn from edge cases.

This week, the public saw the first recorded instance of a Safety Monitor intervention, providing a first look at how they’re expected to stop the vehicle.

The event involved a complex, low-speed interaction with a reversing UPS truck. The Safety Monitor intervened to stop the Robotaxi immediately, potentially avoiding a collision with the delivery truck. Let’s break down this textbook case of real-world unpredictability.

The Intervention [VIDEO]

In a video from a ride in Austin, a Robotaxi is preparing to pull over to its destination on the right side of the road, with its turn signal active. Ahead, a UPS truck comes to a stop. As the Model Y begins turning into the spot, the UPS truck, seemingly without signaling, starts to reverse. At this point, the Safety Monitor stepped in and pressed the In Lane Stop button on the main display, bringing the Robotaxi to an immediate halt.

This is precisely why Tesla has employed Safety Monitors in this initial pilot. They are there to proactively manage ambiguous situations where the intentions of other drivers are unclear. The system worked as designed, but it raises a key question: What would FSD have done on its own? It’s not clear whether the vehicle saw the truck backing up, or what it would do when it finally detected it. It’s also unclear whether the UPS driver recognized that the Robotaxi was pulling into the same spot at the exact same time.

It’s possible this wouldn’t have resulted in a collision at all, but the Safety Monitor did the right thing by stepping in to prevent a potential one, even at low speed. Any collision just days after the Robotaxi Network launch could cause complications for Tesla.

Who Would Be At Fault?

This scenario is a classic edge case. It involves unclear right-of-way and unpredictable human behavior. Even for human drivers, the right-of-way here is complicated. While a reversing vehicle often bears responsibility, a forward-moving vehicle must also take precautions to avoid a collision. This legal and practical gray area is what makes these scenarios so challenging for AI to navigate.

Would the Robotaxi have continued, assuming the reversing truck would stop?

Or would it have identified the potential conflict and used its own ability to stop and reverse?

Without the intervention, it’s impossible to say for sure. However, crucial context comes from a different clip involving, surprisingly, another UPS delivery truck.

A Tale of Two Trucks

In a separate video posted on X, another Robotaxi encounters a remarkably similar situation. In that instance, as another UPS delivery truck obstructs the path forward, the Robotaxi comes to a stop to let its two passengers out just a few feet from their destination.

Once they depart, the Robotaxi successfully reverses and performs a three-point turn to extricate itself from a tight spot. That was all done without human intervention, by correctly identifying the situation. 

This second clip is vital because it proves that the Robotaxi's FSD build has the underlying logic and capability to handle these scenarios. It can, and does, use reverse to safely navigate complex situations.

A Valuable Data Point

Far from being a failure, this first intervention should be seen as a success for Tesla’s safety methodology. It shows the safety system is working, allowing monitors to mitigate ambiguous events proactively.

More importantly, this incident provides Tesla’s FSD team with an invaluable real-world data point.

By comparing the intervened ride with the successful autonomous one, Tesla’s engineers can fine-tune FSD’s decision-making, which will likely have a positive impact on its edge case handling in the near future.

This is the purpose of a public pilot — to find the final edge cases and build a more robust system, one unpredictable reversing truck at a time.
