Inside Tesla’s FSD: Patent Explains How FSD Works

By Karan Singh
Not a Tesla App

Thanks to a Tesla patent published last year, we have a great look into how FSD operates and the various systems it uses. SETI Park, who examines and writes about patents, also highlighted this one on X.

This patent breaks down the core technology used in Tesla’s FSD and gives us a great understanding of how FSD processes and analyzes data.

To make this easily understandable, we’ll divide it up into sections and break down how each section impacts FSD.

Vision-Based

First, the patent describes a vision-only system, in line with Tesla's stated approach, that enables vehicles to see, understand, and interact with the world around them. It covers multiple cameras, some with overlapping coverage, that together capture a 360-degree view around the vehicle, mimicking and improving on human vision.

What’s most interesting is that the system rapidly adapts to the varying focal lengths and perspectives of the different cameras around the vehicle, then combines everything into a cohesive picture; we’ll get to that part shortly.
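To make that concrete, here is a minimal Python sketch of the idea, our own illustration and not taken from the patent: pixel coordinates from cameras with different focal lengths and mounting directions are converted into bearing angles in a single vehicle-centric frame, so downstream logic sees one cohesive picture. The camera names, focal lengths, and mounting angles below are made up for illustration.

```python
import math

# A minimal sketch (not Tesla's actual code): converting pixel detections from
# cameras with different focal lengths into bearing angles in a shared
# vehicle-centric frame. All camera parameters here are illustrative only.

CAMERAS = {
    # name: (focal_length_px, camera yaw in degrees relative to vehicle forward)
    "main_forward":   (800.0,    0.0),
    "narrow_forward": (2400.0,   0.0),
    "left_repeater":  (700.0, -120.0),
    "right_repeater": (700.0,  120.0),
}

IMAGE_WIDTH_PX = 1280  # assumed sensor width for this sketch

def pixel_to_vehicle_bearing(camera: str, u_px: float) -> float:
    """Map a horizontal pixel coordinate to a bearing angle (degrees) in the
    vehicle frame, accounting for that camera's focal length and mounting."""
    focal_px, cam_yaw_deg = CAMERAS[camera]
    # Offset from the optical center, then pinhole model: angle = atan(offset / f)
    offset_px = u_px - IMAGE_WIDTH_PX / 2.0
    angle_in_camera = math.degrees(math.atan2(offset_px, focal_px))
    return cam_yaw_deg + angle_in_camera

if __name__ == "__main__":
    # The wide main camera covers large angles; the narrow camera resolves a
    # small angular window in finer detail, yet both report the same units.
    print(pixel_to_vehicle_bearing("main_forward", 1100.0))    # ~30 degrees off-axis
    print(pixel_to_vehicle_bearing("narrow_forward", 1000.0))  # ~8.5 degrees off-axis
```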

Branching

The system is divided into two branches: one for Vulnerable Road Users, or VRUs, and one for everything else. It's a pretty simple divide: VRUs are defined as pedestrians, cyclists, baby carriages, skateboarders, and animals, essentially anything that can get hurt. The non-VRU branch focuses on everything else, such as cars, emergency vehicles, traffic cones, and debris.

Splitting detection into two branches enables FSD to look for, analyze, and then prioritize each category differently. Essentially, VRUs are prioritized over other objects throughout the Virtual Camera system.
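As a rough illustration of that routing, here is a small Python sketch, our own and not from the patent, that splits detections into VRU and non-VRU lists and keeps VRUs at the front of the queue when only a limited number of objects can be handled. The class names and scores are hypothetical.

```python
# A minimal sketch (illustrative, not Tesla's code) of the two-branch split the
# patent describes: detections are routed to a VRU branch or a non-VRU branch,
# and VRUs are prioritized when downstream capacity is limited.

VRU_CLASSES = {"pedestrian", "cyclist", "stroller", "skateboarder", "animal"}

def split_detections(detections):
    """detections: list of dicts like {"cls": "pedestrian", "score": 0.92}."""
    vru, non_vru = [], []
    for det in detections:
        (vru if det["cls"] in VRU_CLASSES else non_vru).append(det)
    return vru, non_vru

def prioritize(detections, budget):
    """Keep the highest-confidence detections, VRUs first."""
    vru, non_vru = split_detections(detections)
    ranked = sorted(vru, key=lambda d: d["score"], reverse=True) + \
             sorted(non_vru, key=lambda d: d["score"], reverse=True)
    return ranked[:budget]

if __name__ == "__main__":
    frame = [
        {"cls": "car", "score": 0.99},
        {"cls": "pedestrian", "score": 0.70},
        {"cls": "traffic_cone", "score": 0.95},
    ]
    print(prioritize(frame, budget=2))  # the pedestrian is kept ahead of the cone
```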

The many data streams and how they're processed.

Virtual Camera

Tesla processes all of that raw imagery, feeds it into the VRU and non-VRU branches, and picks out only the essential information, which is then used for object detection and classification.

The system then draws these objects on a 3D plane and creates “virtual cameras” at varying heights. Think of a virtual camera as a real camera you’d use to shoot a movie. It allows you to see the scene from a certain perspective.

The VRU branch places its virtual camera at human height, which enables a better understanding of VRU behavior, likely because far more useful data exists from a human-height perspective than from above or any other angle. Meanwhile, the non-VRU branch raises its virtual camera above that height, letting it see over and around obstacles for a wider view of traffic.

This effectively provides two forms of input for FSD to analyze—one at the pedestrian level and one from a wider view of the road around it.
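Here is a simplified Python sketch of what a "virtual camera" does, under our own assumptions rather than the patent's actual math: 3D points already placed in the scene are re-projected through a pinhole model from a chosen viewpoint height, so the same pedestrian appears differently from eye level than from a raised vantage point. The heights and focal length are illustrative.

```python
import math

# A minimal sketch (assumptions, not Tesla's implementation) of a "virtual
# camera": 3D scene points are re-projected from a chosen viewpoint. The VRU
# branch uses roughly human eye height; the non-VRU branch raises the viewpoint
# for a wider view over traffic. Heights and focal length are illustrative.

def project(point_xyz, cam_height_m, focal_px=500.0):
    """Project a 3D point (x forward, y left, z up, metres) into a forward-
    looking virtual camera mounted cam_height_m above the road."""
    x, y, z = point_xyz
    if x <= 0.1:
        return None  # behind or too close to the camera
    u = focal_px * (-y) / x                  # horizontal image coordinate
    v = focal_px * (cam_height_m - z) / x    # vertical image coordinate
    return (u, v)

if __name__ == "__main__":
    pedestrian_head = (12.0, 1.5, 1.7)
    # From roughly eye level the head sits on the horizon line (v near 0);
    # from a raised viewpoint it appears well below the horizon.
    print(project(pedestrian_head, cam_height_m=1.7))  # (-62.5, 0.0)
    print(project(pedestrian_head, cam_height_m=4.0))  # (-62.5, ~95.8)
```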

3D Mapping

Now, all this data has to be combined. These two virtual cameras are synced - and all their information and understanding are fed back into the system to keep an accurate 3D map of what’s happening around the vehicle. 

And it's not just the cameras. The Virtual Camera system and 3D mapping work together with the car’s other sensors to incorporate movement data—speed and acceleration—into the analysis and production of the 3D map.

This system is best understood by the FSD visualization displayed on the screen. It picks up and tracks many moving cars and pedestrians at once, but what we see is only a fraction of all the information it’s tracking. Think of each object as having a list of properties that isn’t displayed on the screen. For example, a pedestrian may have properties that can be accessed by the system that state how far away it is, which direction it’s moving, and how fast it’s going.

Other moving objects, such as vehicles, may have additional properties: their width, height, speed, direction, planned path, and more. Even static, non-VRU elements carry properties; the road, for example, has its width, speed limit, and more determined from AI and map data.

The vehicle itself has its own set of properties, such as speed, width, length, planned path, etc. When you combine everything, you end up with a great understanding of the surrounding environment and how best to navigate it.
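A rough Python sketch of how such a property list might be structured, purely illustrative and not Tesla's data model, could look like the following, with each tracked object and the ego vehicle carrying their own fields.

```python
from dataclasses import dataclass, field

# A minimal sketch (illustrative only) of the kind of per-object property list
# described above: each tracked object carries more state than the on-screen
# visualization shows, and the ego vehicle has its own set of properties.

@dataclass
class TrackedObject:
    kind: str                      # "pedestrian", "car", ...
    is_vru: bool
    distance_m: float
    heading_deg: float             # direction of travel in the vehicle frame
    speed_mps: float
    width_m: float = 0.0
    height_m: float = 0.0
    predicted_path: list = field(default_factory=list)  # future (x, y) points

@dataclass
class EgoVehicle:
    speed_mps: float
    width_m: float
    length_m: float
    planned_path: list = field(default_factory=list)

if __name__ == "__main__":
    scene = [
        TrackedObject("pedestrian", True, 15.0, 90.0, 1.4),
        TrackedObject("car", False, 40.0, 0.0, 13.0, width_m=1.9, height_m=1.5),
    ]
    ego = EgoVehicle(speed_mps=12.0, width_m=1.98, length_m=4.75)
    closest_vru = min((o for o in scene if o.is_vru), key=lambda o: o.distance_m)
    print(closest_vru.kind, closest_vru.distance_m)
```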

The Virtual Mapping of the VRU branch.

Temporal Indexing

Tesla calls this feature Temporal Indexing. In layman’s terms, this is how the vision system analyzes images over time and keeps track of them. Rather than working from a single snapshot, FSD works from a series of them, which allows it to understand how objects are moving. This enables object path prediction and also allows FSD to estimate where vehicles or objects might be, even when it doesn’t have a direct view of them.

This temporal indexing is done through “Video Modules”, which are the actual “brains” that analyze the sequences of images, tracking them over time and estimating their velocities and future paths.

Once again, heavy traffic and the FSD visualization, which keeps track of many vehicles in lanes around you—even those not in your direct line of sight—are excellent examples.
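To illustrate the idea, though not the patent's actual video modules, here is a small Python sketch that keeps a short time-indexed history of an object's position, estimates its velocity over that window, and extrapolates where the object will be, even across a brief gap in observations. The numbers and window size are made up.

```python
from collections import deque

# A minimal sketch of temporal indexing (illustrative only): a short
# time-indexed history of positions supports velocity estimation and
# constant-velocity prediction of where the object will be next.

class TemporalTrack:
    def __init__(self, history=8):
        self.samples = deque(maxlen=history)  # (timestamp_s, x_m, y_m)

    def observe(self, t, x, y):
        self.samples.append((t, x, y))

    def velocity(self):
        """Average velocity over the stored window (m/s)."""
        if len(self.samples) < 2:
            return (0.0, 0.0)
        (t0, x0, y0), (t1, x1, y1) = self.samples[0], self.samples[-1]
        dt = t1 - t0
        return ((x1 - x0) / dt, (y1 - y0) / dt)

    def predict(self, t_future):
        """Constant-velocity extrapolation to a future timestamp."""
        t_last, x_last, y_last = self.samples[-1]
        vx, vy = self.velocity()
        dt = t_future - t_last
        return (x_last + vx * dt, y_last + vy * dt)

if __name__ == "__main__":
    track = TemporalTrack()
    for i in range(5):
        track.observe(t=i * 0.1, x=20.0 - i * 0.5, y=3.0)  # car closing at 5 m/s
    print(track.velocity())             # (-5.0, 0.0)
    print(track.predict(t_future=1.0))  # position 0.6 s after the last sighting
```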

End-to-End

Finally, the patent also mentions that the entire system, from front to back, can be - and is - trained together. This training approach, which now includes end-to-end AI, optimizes overall system performance by letting each individual component learn how to interact with other components in the system.
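For a sense of what "trained together" means in practice, here is a tiny PyTorch sketch, assuming nothing about Tesla's actual network: a stand-in backbone and prediction head are wrapped into one model and optimized against a single loss, so gradients from the final output shape every component at once rather than each part being tuned in isolation.

```python
import torch
import torch.nn as nn

# A minimal end-to-end training sketch (illustrative; not Tesla's network):
# the feature backbone and the prediction head are optimized jointly.

backbone = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # stand-in for per-camera encoders
head = nn.Sequential(nn.Linear(32, 4))                   # stand-in for object/path outputs
model = nn.Sequential(backbone, head)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fake data standing in for camera-derived features and labeled targets.
features = torch.randn(128, 64)
targets = torch.randn(128, 4)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()   # gradients flow through the head *and* the backbone together
    optimizer.step()
```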

How everything comes together.

Summary

Essentially, Tesla sees FSD as a brain, and the cameras are its eyes. It has a memory, and that memory enables it to categorize and analyze what it sees. It can keep track of a wide array of objects and properties to predict their movements and determine a path around them. This is a lot like how humans operate, except that FSD can track far more objects at once and determine their properties, like speed and size, much more accurately. On top of that, it can do so faster than a human and in all directions at once.

FSD and its vision-based camera system essentially create a 3D live map of the road that is constantly and consistently updated and used to make decisions.

Tesla Confirms Upcoming FSD Rollout in Australia and New Zealand

By Karan Singh
Not a Tesla App

The long wait for FSD (Supervised) in Australia and New Zealand may be coming to an end. Thom Drew, Tesla’s Country Director for Australia & New Zealand, has confirmed on LinkedIn that Tesla has been working with local authorities in both countries and that there are no regulatory blockers for the release of FSD in the region.

The confirmation came in response to questions following Tesla’s FSD demo video in Sydney, Australia.

Hurdles Cleared

For many years, the main question surrounding the release of FSD in other Asia-Pacific countries, especially Australia, has been the status of regulatory approval. Drew’s statement provides the clearest answer yet regarding regulatory barriers, and it appears that the path is clear from a governmental standpoint.

“We have been working with local authorities across AU & NZ and there are no regulatory blockers for release. We are running through the final stages of validation prior to public release. Looking to start with HW4 on certain vehicles and then release in phases from there.”

  • Thom Drew, Tesla’s Country Director for Australia & New Zealand (LinkedIn)

With the regulatory question answered, the timeline for the release is now entirely in Tesla’s hands. According to his statement, Tesla is in the final phases before a public rollout, likely meaning Tesla is doing some final testing and verification on local roads before flipping the switch.

The Rollout Plan: HW4 First

Drew also provided the first details on how Tesla plans to launch FSD in the two countries, and it seems to be a similar approach to the one Tesla took in China.

That means that the release will begin with AI4 (HW4) equipped vehicles first. Once those vehicles are up and running, they will slowly begin to phase in older AI3 (HW3) vehicles over the next few releases.

For owners of HW3 vehicles (everything we know about the HW3 upgrade), this phased release means they’ll be waiting a little longer than other owners, but at least there’s progress and some clear next steps now. A little more waiting isn’t too bad, especially when you consider just how long many owners in Australia and New Zealand have been waiting for any semblance of FSD in their countries. Tesla outlined Q2 2025 as the target availability date for FSD in RHD markets back in September 2024, so this timing is roughly on track with that original announcement.

Hopefully, Tesla also opens up the opportunity for FSD transfer for HW3 owners in both countries, as we’re sure many people would upgrade alongside the official release of FSD to the latest hardware.

And just in case you thought the first video wasn’t true because it wasn’t upside down - well, Tesla provided us the original too.

Tesla’s Q2 2025 Earnings Call: What to Expect and Top Questions

By Karan Singh
Not a Tesla App

Another quarter has passed, and that means it’s time to submit questions and vote for Tesla’s Q2 2025 Earnings Call. While Q1 was a tough quarter for the company, Q2 saw some recovery in sales, although there’s still some work to be done.

However, there’s always a lot to be excited about during Tesla’s Q&A session, where we usually learn a lot about future software improvements and upcoming vehicles. We may hear more about FSD Unsupervised, Robotaxi, the more affordable vehicle, or the upcoming larger six-seater Model Y, the Model Y L. Tesla also mentioned a potential FSD price hike back in the Q1 2025 Earnings Call, so that could be brought up as well.

Tesla’s Q2 So Far

Tesla has already released their Q2 2025 Production and Delivery numbers, which were up from Q1 of this year, but still down compared to Q2 last year.

                              Production    Deliveries
Model 3/Y                        396,835       373,728
Model S, X, and Cybertruck        13,409        10,394
Total                            410,244       384,122

How to Submit & Vote

Tesla lets shareholders submit a question that will be voted on and may be answered during the Q&A session. To submit your own question or vote on an already submitted question, you’ll need to be a verified shareholder. You can go to Say’s platform and link your brokerage accounts.

Once your account is verified, you’ll be able to log in and use your shares to vote on your own question or on someone else’s.

Here’s the link to get started on Say’s Tesla Q&A. You must submit your questions and votes by July 23rd, 2025, at 4:00 PM EDT.

Top Questions So Far

Unsurprisingly, people have already been submitting questions, and here are the top ones so far. 

  1. Can you give us some insight how robotaxis have been performing so far and what rate you expect to expand in terms of vehicles, geofence, cities, and supervisors?

  2. What are the key technical and regulatory hurdles still remaining for unsupervised FSD to be available for personal use? Timeline?

  3. What specific factory tasks is Optimus currently performing, and what is the expected timeline for scaling production to enable external sales? How does Tesla envision Optimus contributing to revenue in the next 2–3 years?

  4. Can you provide an update on the development and production timeline for Tesla’s more affordable models? How will these models balance cost reduction with profitability, and what impact do you expect on demand in the current economic climate?

  5. Are there any news for HW3 users getting retrofits or upgrades? Will they get HW4 or some future version of HW5?

  6. When do you anticipate customer vehicles to receive unsupervised FSD?

And here are some other ones we found interesting:

  • Have any meaningful Optimus milestones changed for this year or next and will thousands of Optimus be performing tasks in Tesla factories by year end?

  • Are front bumper cameras going to be necessary for unsupervised full self driving? If so, what is the companies plan to retrofit vehicles that do not have them?

  • Will there be a new AI day to explain the advancements the Autopilot, Optimus, and Dojo/chip teams have made over the past several years. We still do not know much about the HW4.

Earnings Call Details

Tesla will hold its earnings call on Wednesday, July 23rd, at 4:00 PM EDT. It's still early for an access link, but we’ll make sure we have a link up on the site before the earnings call that day.

If you do miss the earnings call, no worries. We will provide a full recap following the call, and we’ll also do some in-depth dives into what was said and what we know.
