Inside Tesla’s FSD: Patent Explains How FSD Works

By Karan Singh
Not a Tesla App

Thanks to a Tesla patent published last year, we have a great look into how FSD operates and the various systems it uses. SETI Park, who examines and writes about patents, also highlighted this one on X.

This patent breaks down the core technology used in Tesla’s FSD and gives us a great understanding of how FSD processes and analyzes data.

To make this easily understandable, we’ll divide it up into sections and break down how each section impacts FSD.

Vision-Based

First, this patent describes a vision-only system—just like Tesla’s goal—to enable vehicles to see, understand, and interact with the world around them. The patent describes multiple cameras, some with overlapping coverage, that capture a 360-degree view around the vehicle, mimicking, and improving on, human vision.

What’s most interesting is that the system rapidly adapts to the various focal lengths and perspectives of the different cameras around the vehicle. It then combines all this to build a cohesive picture—but we’ll get to that part shortly.
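As a rough illustration of what unifying different lenses involves, here is a minimal Python sketch that re-projects pixels from cameras with different focal lengths into a single shared "virtual lens." All intrinsic values are invented for illustration; the patent does not publish Tesla’s camera parameters, and this assumes the cameras share an optical center, which real rigs only approximate.

```python
import numpy as np

# Hypothetical camera intrinsics (focal lengths and principal point in
# pixels). These numbers are illustrative only.
def intrinsics(fx, fy, cx, cy):
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

K_wide   = intrinsics(580.0, 580.0, 640.0, 360.0)    # wide-angle camera
K_narrow = intrinsics(2320.0, 2320.0, 640.0, 360.0)  # narrow/telephoto camera
K_common = intrinsics(1160.0, 1160.0, 640.0, 360.0)  # shared "virtual lens"

def rectifying_homography(K_src, K_dst):
    """Pixel-space homography that re-renders an image as if it were taken
    at the target focal length (same optical center, no rotation)."""
    return K_dst @ np.linalg.inv(K_src)

H_wide = rectifying_homography(K_wide, K_common)

# A pixel from the wide camera, mapped into the common frame:
p = np.array([800.0, 400.0, 1.0])  # homogeneous pixel coordinate
p_common = H_wide @ p
p_common /= p_common[2]            # normalize the homogeneous scale
print(p_common[:2])                # -> [960. 440.]
```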

Branching

The system is divided into two branches - one for Vulnerable Road Users, or VRUs, and one for everything else. It's a pretty simple divide - VRUs are defined as pedestrians, cyclists, baby carriages, skateboarders, animals - essentially anything that can get hurt. The non-VRU branch handles everything else: cars, emergency vehicles, traffic cones, debris, etc.

Splitting detection into two branches lets FSD look for, analyze, and prioritize each category appropriately. Essentially, VRUs are prioritized over other objects throughout the Virtual Camera system.
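As a hedged sketch of the idea (not Tesla's actual code or taxonomy), routing and prioritization could look something like this in Python, where the class list and Detection structure are assumptions made up for illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical VRU taxonomy; the patent describes the category, not an
# exhaustive list of labels.
VRU_CLASSES = {"pedestrian", "cyclist", "stroller", "skateboarder", "animal"}

class Branch(Enum):
    VRU = auto()
    NON_VRU = auto()

@dataclass
class Detection:
    label: str
    confidence: float

def route(det: Detection) -> Branch:
    """Send each detection down the branch that specializes in it."""
    return Branch.VRU if det.label in VRU_CLASSES else Branch.NON_VRU

def prioritize(detections: list[Detection]) -> list[Detection]:
    """VRUs first, then everything else, highest confidence first."""
    return sorted(detections,
                  key=lambda d: (route(d) is not Branch.VRU, -d.confidence))

scene = [Detection("car", 0.97), Detection("pedestrian", 0.88),
         Detection("traffic_cone", 0.91), Detection("cyclist", 0.76)]
for det in prioritize(scene):
    print(route(det).name, det.label)
```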

The many data streams and how they're processed. (Image: Not a Tesla App)

Virtual Camera

Tesla processes all of that raw imagery, feeds it into the VRU and non-VRU branches, and extracts only the essential information, which is used for object detection and classification.

The system then draws these objects on a 3D plane and creates “virtual cameras” at varying heights. Think of a virtual camera as a real camera you’d use to shoot a movie. It allows you to see the scene from a certain perspective.

The VRU branch uses its virtual camera at human height, which enables a better understanding of VRU behavior. This is likely because far more data exists at human height than from overhead or any other angle. Meanwhile, the non-VRU branch raises its virtual camera above that height, letting it see over and around obstacles for a wider view of traffic.

This effectively provides two forms of input for FSD to analyze—one at the pedestrian level and one from a wider view of the road around it.
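Mathematically, a virtual camera is just a pose: a height and viewing angle that 3D detections get re-projected into. Here is a small Python sketch of that idea; the 1.7 m and 4.0 m heights and the pitch angle are hypothetical values chosen to mirror the "human height" and "elevated" description above, not figures from the patent:

```python
import numpy as np

def virtual_camera(height_m: float, pitch_deg: float) -> np.ndarray:
    """World-to-camera extrinsic (4x4) for a virtual camera placed at a
    given height in a y-up world, pitched down by pitch_deg."""
    p = np.radians(pitch_deg)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(p), -np.sin(p)],
                  [0.0, np.sin(p),  np.cos(p)]])
    C = np.array([0.0, height_m, 0.0])  # camera center in world coordinates
    E = np.eye(4)
    E[:3, :3] = R
    E[:3, 3] = -R @ C                   # standard world-to-camera translation
    return E

# Hypothetical heights: eye level for the VRU branch, elevated for non-VRU.
E_vru = virtual_camera(height_m=1.7, pitch_deg=0.0)
E_traffic = virtual_camera(height_m=4.0, pitch_deg=15.0)

point = np.array([0.0, 0.0, 20.0, 1.0])  # a road point 20 m ahead (homogeneous)
print("VRU view:", (E_vru @ point)[:3])
print("Traffic view:", (E_traffic @ point)[:3])
```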

3D Mapping

Now, all this data has to be combined. The two virtual cameras are synced, and all their information is fed back into the system to maintain an accurate 3D map of what’s happening around the vehicle.

And it's not just the cameras. The Virtual Camera system and 3D mapping work together with the car’s other sensors to incorporate movement data—speed and acceleration—into the analysis and production of the 3D map.

This system is best understood through the FSD visualization displayed on the screen. It picks up and tracks many moving cars and pedestrians at once, but what we see is only a fraction of the information it’s tracking. Think of each object as having a list of properties that isn’t displayed on the screen. For example, a pedestrian may have properties, accessible by the system, that state how far away it is, which direction it’s moving, and how fast it’s going.

Other moving objects, such as vehicles, may have additional properties: their width, height, speed, direction, planned path, and more. Even non-VRU objects contain properties - the road, for example, has its width, speed limit, and more determined from AI and map data.

The vehicle itself has its own set of properties, such as speed, width, length, planned path, etc. When you combine everything, you end up with a great understanding of the surrounding environment and how best to navigate it.
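One way to picture this is as a property record per tracked object, plus one for the ego vehicle, all living in a shared scene map. The schema below is a hypothetical Python sketch of that mental model, not Tesla's actual data layout:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    # Hypothetical property set; the patent describes per-object attributes,
    # not this exact structure.
    object_id: int
    category: str                                  # "pedestrian", "car", ...
    is_vru: bool
    position_m: tuple[float, float, float]         # in the vehicle's map frame
    velocity_mps: tuple[float, float, float]
    size_m: tuple[float, float, float]             # width, height, length
    heading_deg: float
    predicted_path: list[tuple[float, float]] = field(default_factory=list)

@dataclass
class EgoState:
    speed_mps: float
    width_m: float
    length_m: float
    planned_path: list[tuple[float, float]] = field(default_factory=list)

@dataclass
class SceneMap:
    """The live 3D map: the ego vehicle plus everything tracked around it."""
    ego: EgoState
    objects: list[TrackedObject] = field(default_factory=list)

    def vrus_first(self) -> list[TrackedObject]:
        # Mirror the system-wide priority: VRUs ahead of everything else.
        return sorted(self.objects, key=lambda o: not o.is_vru)
```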

The Virtual Mapping of the VRU branch. (Image: Not a Tesla App)

Temporal Indexing

Tesla calls this feature Temporal Indexing. In layman’s terms, this is how the vision system analyzes images over time and keeps track of them. Rather than working with single snapshots, FSD uses a series of them, which lets it understand how objects are moving. This enables object path prediction and allows FSD to estimate where vehicles or objects might be, even when it doesn’t have a direct view of them.

This temporal indexing is done through “Video Modules” - the actual “brains” that analyze sequences of images, tracking objects over time and estimating their velocities and future paths.
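In its simplest form, a temporal index is a rolling buffer of timestamped observations per object, from which velocity and a short-horizon prediction fall out. This toy Python sketch shows the concept, including predicting through an occlusion; it is not the patent's actual Video Module:

```python
from collections import deque

class Track:
    """Rolling buffer of timestamped positions for one tracked object."""
    def __init__(self, history: int = 8):
        self.buffer = deque(maxlen=history)  # entries: (t_seconds, x_m, y_m)

    def observe(self, t, x, y):
        self.buffer.append((t, x, y))

    def velocity(self):
        """Finite-difference velocity over the buffered window."""
        if len(self.buffer) < 2:
            return (0.0, 0.0)
        (t0, x0, y0), (t1, x1, y1) = self.buffer[0], self.buffer[-1]
        dt = t1 - t0
        return ((x1 - x0) / dt, (y1 - y0) / dt)

    def predict(self, t):
        """Constant-velocity extrapolation: where we expect the object at
        time t, even if it is currently hidden from the cameras."""
        t_last, x, y = self.buffer[-1]
        vx, vy = self.velocity()
        dt = t - t_last
        return (x + vx * dt, y + vy * dt)

car = Track()
for i in range(5):                 # a car advancing 1.5 m per 100 ms frame
    car.observe(t=i * 0.1, x=1.5 * i, y=0.0)
print(car.velocity())              # -> (15.0, 0.0), i.e. about 15 m/s
print(car.predict(t=1.0))          # estimated position during an occlusion
```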

Once again, heavy traffic is an excellent example: the FSD visualization keeps track of many vehicles in the lanes around you, even those not in your direct line of sight.

End-to-End

Finally, the patent also mentions that the entire system, from front to back, can be - and is - trained together. This training approach, which now includes end-to-end AI, optimizes overall system performance by letting each individual component learn how to interact with other components in the system.
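For intuition, here is a toy PyTorch sketch of what "trained together" means in practice: one shared backbone feeding both branch heads, optimized with a single combined loss so gradients flow through every component at once. The architecture, sizes, and class counts are invented for illustration and are not Tesla's:

```python
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    """Shared vision backbone with separate VRU and non-VRU heads."""
    def __init__(self, feat_dim=64, vru_classes=5, other_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU())
        self.vru_head = nn.Linear(feat_dim, vru_classes)
        self.other_head = nn.Linear(feat_dim, other_classes)

    def forward(self, images):
        feats = self.backbone(images)
        return self.vru_head(feats), self.other_head(feats)

model = TwoBranchNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(4, 3, 64, 64)        # dummy batch of camera crops
vru_labels = torch.randint(0, 5, (4,))
other_labels = torch.randint(0, 10, (4,))

vru_out, other_out = model(images)
# One joint loss: both heads backpropagate into the shared backbone, so each
# component learns to cooperate with the others.
loss = (nn.functional.cross_entropy(vru_out, vru_labels)
        + nn.functional.cross_entropy(other_out, other_labels))
opt.zero_grad()
loss.backward()
opt.step()
```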

How everything comes together. (Image: Not a Tesla App)

Summary

Essentially, Tesla sees FSD as a brain, and the cameras are its eyes. It has a memory, and that memory enables it to categorize and analyze what it sees. It can keep track of a wide array of objects and properties to predict their movements and determine a path around them. This is a lot like how humans operate, except FSD can track far more objects at once and determine their properties, like speed and size, much more accurately. On top of that, it can do so faster than a human and in all directions at once.

FSD and its vision-based camera system essentially create a live 3D map of the road that is constantly updated and used to make decisions.

Tesla Autonomously Delivers Its First Vehicle to Customer — And It’s More Impressive Than Expected [VIDEO]

By Karan Singh
Not a Tesla App

In a world first, Tesla has successfully completed its first fully autonomous delivery of a new vehicle from Gigafactory Texas to a customer’s home. While Musk announced this was coming, some of the details make the achievement even more impressive.

Traveling on the Highway

A Tesla Model Y left the factory, navigating highways at speeds up to 72 mph, a day ahead of Tesla’s previously announced schedule. Most critically, Elon confirmed two key factors that make this achievement even more impressive than Tesla’s launch of the Robotaxi last week.

There were no Safety Monitors in the car, and no remote operators took control of the Model Y at any time, really making this an amazing achievement.

While the launch of the Robotaxi was an amazing step for Tesla, this one easily takes the cake.

No Safety Monitor, No Passengers, No Limits

The significance of this event lies in just how it differs from the current Robotaxi service operating in Austin.

First and most importantly, there was no Safety Monitor. Nobody was sitting up front, ready to tap one of the emergency stop buttons on the screen. The vehicle was empty, fresh from the factory. This is the unsupervised experience and future that we’ve been waiting for.

“Max speed was 72 mph” - Ashok Elluswamy

Why There Was No Safety Monitor

However, there is an important distinction with this autonomous ride: there were no passengers. This is the crucial regulatory difference. By operating as a logistics trip rather than a commercial ride-hailing service, Tesla was likely able to bypass many of the stringent rules governing passenger transport.

This freedom is what enabled the other key difference: operating with fewer restrictions. That included a 72 mph top speed on highways well outside the geofenced Robotaxi network currently available in Austin.

Ahead of Schedule

This event wasn’t a surprise - Elon had previously stated that Tesla expected the first fully autonomous delivery to happen on June 28th. He even built some flex time into that estimate, saying the timing could slip into early July.

It turns out the additional time wasn’t needed, as Tesla delivered its first vehicle a day early. Tesla appears to be rapidly gathering data from its fleet of slightly modified Model Ys cruising the streets of Austin, which likely gave it the confidence to green-light this delivery.

Video of the Drive

Tesla shared a video of the entire drive, from the vehicle leaving Giga Texas to it arriving at the customer’s home. The entire trip took 30 minutes, crossing parking lots and traveling on the highway.

While there are some disadvantages to autonomous deliveries, they could lower the cost of a vehicle significantly.

Challenging Uber Eats and Others

This successful delivery is another fantastic use case for FSD that could become an entire business in and of itself for Tesla. The ability to autonomously move vehicles, potentially with cargo inside, has massive implications for Tesla’s factory-to-customer logistics and could eventually challenge services like Uber Eats and Skip the Dishes.

Additionally, logistics-focused autonomy may be easier to scale than the Robotaxi network. It sidesteps many of the complex safety, liability, and customer-facing service challenges that come with carrying human passengers. This could be a faster and clearer path for regulatory approval.

Fork in the Road

But it's more than just a new business.

Back in 2022, Elon commissioned an art piece that now stands outside Giga Texas. It is, quite literally, A Fork in the Road. Part of Elon’s greater goal is to ensure humanity passes the Great Filter of the Fermi paradox, and that means generating green energy, electrifying and automating transportation, and moving toward sustainable abundance.

The point of the fork is that Tesla’s first autonomous delivery isn’t just a publicity stunt. We’re finally here, at the fork in the road: true autonomous capability demonstrated on public highways under a specific and challenging set of conditions. That’s true Level 4 autonomy with no one in the car.

While Robotaxi is a fantastic step toward changing personal transport, this successful delivery proves that FSD has even more uses beyond what we’ve seen so far.

Tesla Issues Physical Recall for Some Model 3 & Model Y Vehicles Over Seat Fasteners

By Karan Singh
Not a Tesla App

Tesla has issued a new, voluntary safety recall for a small batch of Model 3 and Model Y vehicles due to an issue with improperly tightened fasteners in the first-row seats. 

The recall impacts only about 48 vehicles and will require a Tesla service visit to resolve. 

Improperly Tightened Fasteners

According to the recall notice, the issue stems from first-row seats that may have been assembled with improperly torqued fasteners attaching the seat back to the seat bottom. In some cases, the fasteners may be loose or missing, which could cause a rattle or allow the seat back to detach.

This is a critical safety issue, as a seat back that is not properly anchored could detach, leaving the driver or passenger unsupported and increasing the risk of an accident.

According to Tesla’s investigation, the issue originated from a production change made for vehicles manufactured between April 3rd and May 7th of 2025. However, not all vehicles built within that window are impacted. The recall covers 30 Model Y (2026 model year) and 18 Model 3 (2025 model year) vehicles, across all variants, including RWD, AWD, and Performance (for the Model 3).

Thankfully, there have been no incidents related to this issue to date.

The Fix

Since this is a physical recall, Tesla will inspect impacted vehicles and replace or properly retorque the seat fasteners as needed, free of charge.

Owners of impacted vehicles have already been contacted under the voluntary recall, and most vehicles should be repaired by the time the notice is formally issued.

You can also check if your VIN is impacted by a recall using Tesla’s Recall Tool.

Tesla has noted the repair should take approximately one hour of work at a Service Center, and up to two hours if a Mobile Ranger addresses the recall.
