Thanks to a Tesla patent published last year, we have a detailed look at how FSD operates and the various systems it uses. SETI Park, who examines and writes about patents, highlighted this one on X.
The patent breaks down the core technology behind Tesla’s FSD and gives us a clear understanding of how it processes and analyzes data.
To make this easily understandable, we’ll divide it up into sections and break down how each section impacts FSD.
Vision-Based
First, the patent describes a vision-only system, in line with Tesla’s stated goal, that enables vehicles to see, understand, and interact with the world around them. It describes multiple cameras, some with overlapping coverage, that capture a 360-degree view around the vehicle, mimicking, and improving on, human vision.
What’s most interesting is that the system rapidly adapts to the various focal lengths and perspectives of the different cameras around the vehicle. It then combines all of this to build a cohesive picture, but we’ll get to that part shortly.
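To picture what the system has to reconcile, here is a minimal sketch of a multi-camera rig in Python. The camera names, fields of view, and mounting angles are illustrative assumptions on our part, not values from the patent; the point is simply that the fusion step must stitch mismatched viewpoints into one coherent picture.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    """One physical camera on the car; all values below are illustrative guesses."""
    name: str
    fov_deg: float   # horizontal field of view
    yaw_deg: float   # mounting direction relative to the vehicle's nose

RIG = [
    Camera("front_main", 70.0, 0.0),
    Camera("front_wide", 120.0, 0.0),
    Camera("left_repeater", 90.0, -135.0),
    Camera("right_repeater", 90.0, 135.0),
    Camera("rear", 130.0, 180.0),
]

def covered(bearing_deg: float) -> bool:
    """Check whether at least one camera sees a given bearing around the car."""
    def angular_diff(a: float, b: float) -> float:
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return any(angular_diff(bearing_deg, cam.yaw_deg) <= cam.fov_deg / 2 for cam in RIG)
```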
Branching
The system is divided into two branches: one for Vulnerable Road Users, or VRUs, and one for everything else. The divide is simple: VRUs are pedestrians, cyclists, baby carriages, skateboarders, animals, essentially anything that can get hurt. The non-VRU branch handles everything else, such as cars, emergency vehicles, traffic cones, and debris.
Splitting detection into two branches enables FSD to look for, analyze, and prioritize each category differently. Essentially, VRUs are prioritized over other objects throughout the Virtual Camera system.
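As a rough illustration of the branching, here is a Python sketch that routes detections by category. The class lists come from the examples above; the function and branch names are hypothetical, not Tesla’s.

```python
from enum import Enum, auto

# Hypothetical category lists based on the examples above; the patent's
# actual taxonomy may differ.
class VRUClass(Enum):
    PEDESTRIAN = auto()
    CYCLIST = auto()
    BABY_CARRIAGE = auto()
    SKATEBOARDER = auto()
    ANIMAL = auto()

class NonVRUClass(Enum):
    CAR = auto()
    EMERGENCY_VEHICLE = auto()
    TRAFFIC_CONE = auto()
    DEBRIS = auto()

def route_detection(label) -> str:
    """Send anything that can get hurt to the (higher-priority) VRU branch."""
    return "vru_branch" if isinstance(label, VRUClass) else "non_vru_branch"

print(route_detection(VRUClass.CYCLIST))   # -> vru_branch
print(route_detection(NonVRUClass.CAR))    # -> non_vru_branch
```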
The many data streams and how they're processed.
Not a Tesla App
Virtual Camera
Tesla processes all of that raw imagery, feeds it into the VRU and non-VRU branches, and picks out only the essential information, which is used for object detection and classification.
The system then draws these objects on a 3D plane and creates “virtual cameras” at varying heights. Think of a virtual camera as a real camera you’d use to shoot a movie. It allows you to see the scene from a certain perspective.
The VRU branch places its virtual camera at human height, which enables a better understanding of VRU behavior, probably because there is far more data at human height than from above or any other angle. Meanwhile, the non-VRU branch raises its camera above that height, letting it see over and around obstacles for a wider view of traffic.
This effectively provides two forms of input for FSD to analyze—one at the pedestrian level and one from a wider view of the road around it.
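Here is a minimal sketch of what configuring those two viewpoints might look like. The heights are our guesses for illustration, not figures from the patent.

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    """A synthetic viewpoint rendered from the fused multi-camera features."""
    branch: str
    height_m: float  # height of the virtual viewpoint above the road surface

# Heights here are illustrative assumptions, not values taken from the patent.
VIRTUAL_CAMERAS = [
    VirtualCamera(branch="vru", height_m=1.7),      # roughly human eye level
    VirtualCamera(branch="non_vru", height_m=3.5),  # raised to see over obstacles
]

def camera_for(branch: str) -> VirtualCamera:
    """Look up the viewpoint a given branch renders its scene from."""
    return next(vc for vc in VIRTUAL_CAMERAS if vc.branch == branch)
```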
3D Mapping
Now, all this data has to be combined. The two virtual cameras are synced, and everything they capture is fed back into the system to maintain an accurate 3D map of what’s happening around the vehicle.
And it's not just the cameras. The Virtual Camera system and 3D mapping work together with the car’s other sensors to incorporate movement data—speed and acceleration—into the analysis and production of the 3D map.
This system is easiest to understand through the FSD visualization displayed on the screen. It picks up and tracks many moving cars and pedestrians at once, but what we see is only a fraction of the information being tracked. Think of each object as having a list of properties that isn’t displayed on the screen. For example, a pedestrian may have properties, accessible to the system, stating how far away it is, which direction it’s moving, and how fast it’s going.
Other moving objects, such as vehicles, may have additional properties, such as width, height, speed, direction, and planned path. Even static elements have properties: the road, for example, has its width, speed limit, and more determined from AI and map data.
The vehicle itself has its own set of properties, such as speed, width, length, planned path, etc. When you combine everything, you end up with a great understanding of the surrounding environment and how best to navigate it.
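A simple way to picture these per-object property lists is as typed records. The field names and the ego-vehicle example below are our assumptions for illustration, not fields named in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    """Illustrative property record for one tracked object; the field names
    are assumptions, not taken from the patent."""
    kind: str                    # "pedestrian", "vehicle", "road", ...
    distance_m: float            # how far away it is
    heading_deg: float           # which direction it's moving
    speed_mps: float             # how fast it's going
    width_m: float | None = None   # known for vehicles, lanes, etc.
    height_m: float | None = None
    planned_path: list[tuple[float, float]] = field(default_factory=list)

# The ego vehicle keeps the same kind of record about itself:
ego = TrackedObject(kind="ego", distance_m=0.0, heading_deg=0.0,
                    speed_mps=13.4, width_m=2.0,
                    planned_path=[(0.0, 0.0), (0.0, 50.0)])
```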
The Virtual Mapping of the VRU branch.
Not a Tesla App
Temporal Indexing
Tesla calls this feature Temporal Indexing. In layman’s terms, it’s how the vision system analyzes images over time and keeps track of them. Rather than working from a single snapshot, FSD works with a series of them, which lets it understand how objects are moving. This enables object path prediction and also allows FSD to estimate where vehicles or objects might be, even when it doesn’t have a direct view of them.
This temporal indexing is done through “Video Modules,” which are the actual “brains” that analyze the sequences of images, track objects over time, and estimate their velocities and future paths.
Once again, the FSD visualization in heavy traffic is an excellent example: it keeps track of many vehicles in the lanes around you, even those not in your direct line of sight.
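Here is a bare-bones sketch of the idea behind temporal indexing: keep time-stamped observations per object, estimate velocity from them, and extrapolate when the object is out of view. This is a toy constant-velocity model of the concept, not Tesla’s actual Video Module.

```python
from collections import deque

class Track:
    """Toy temporal index for one object: time-stamped positions, a velocity
    estimate, and constant-velocity extrapolation for occluded moments."""

    def __init__(self, horizon: int = 16):
        self.history = deque(maxlen=horizon)  # (t, x, y) snapshots

    def observe(self, t: float, x: float, y: float) -> None:
        self.history.append((t, x, y))

    def velocity(self) -> tuple[float, float]:
        if len(self.history) < 2:
            return (0.0, 0.0)
        (t0, x0, y0), (t1, x1, y1) = self.history[0], self.history[-1]
        dt = (t1 - t0) or 1e-6  # guard against zero elapsed time
        return ((x1 - x0) / dt, (y1 - y0) / dt)

    def predict(self, t: float) -> tuple[float, float]:
        """Estimate position at time t, even without a direct view of the object."""
        t1, x1, y1 = self.history[-1]
        vx, vy = self.velocity()
        return (x1 + vx * (t - t1), y1 + vy * (t - t1))

# A car seen moving 1 m/s along x can be dead-reckoned one second ahead:
tr = Track()
tr.observe(0.0, 0.0, 0.0)
tr.observe(1.0, 1.0, 0.0)
print(tr.predict(2.0))  # -> (2.0, 0.0)
```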
End-to-End
Finally, the patent also mentions that the entire system, from front to back, can be, and is, trained together. This training approach, which now includes end-to-end AI, optimizes overall performance by letting each component learn how to interact with the others.
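To make “trained together” concrete, here is a minimal PyTorch sketch in which a single loss and optimizer span a shared backbone and both branch heads, so gradients flow through the whole stack at once. The modules, shapes, and loss are stand-ins for illustration, not Tesla’s architecture.

```python
import torch

# Placeholder modules standing in for the vision backbone and the two branches.
backbone = torch.nn.Linear(256, 128)
vru_head = torch.nn.Linear(128, 16)
non_vru_head = torch.nn.Linear(128, 16)

# One optimizer over every stage's parameters: the pipeline is trained jointly.
params = (list(backbone.parameters()) + list(vru_head.parameters())
          + list(non_vru_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def training_step(features, vru_target, non_vru_target):
    shared = backbone(features)
    loss = (torch.nn.functional.mse_loss(vru_head(shared), vru_target)
            + torch.nn.functional.mse_loss(non_vru_head(shared), non_vru_target))
    opt.zero_grad()
    loss.backward()  # gradients flow end-to-end, from both heads into the backbone
    opt.step()
    return loss.item()
```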
How everything comes together.
Not a Tesla App
Summary
Essentially, Tesla sees FSD as a brain, and the cameras are its eyes. It has a memory, and that memory enables it to categorize and analyze what it sees. It can keep track of a wide array of objects and their properties to predict their movements and determine a path around them. This is a lot like how humans operate, except FSD can track far more objects at once and determine properties like speed and size much more accurately. On top of that, it can do it faster than a human and in all directions at once.
FSD and its vision-based camera system essentially create a 3D live map of the road that is constantly and consistently updated and used to make decisions.
Tesla’s Dan W Priestley attended the Advanced Clean Transportation (ACT) Expo in Anaheim, California, and provided an update on Tesla’s Semi truck program. The presentation covered several key developments on the status of Tesla’s Nevada Semi Factory, refinements to the Semi, and Tesla’s plans for charging and ramping production through 2026.
Let’s dig in and take a look at everything that was captured by the Out of Spec team at ACT Expo. The original video is embedded below if you’d like to watch it.
Semi Factory & Production Ramp
Priestley reaffirmed the timeline from Tesla’s Q4 2024 Earnings Call: Tesla will scale Semi production in 2026. To achieve this, Tesla has been actively building out and expanding the Gigafactory Nevada site specifically to support Semi production. The dedicated Semi facility will have a targeted annual capacity of 50,000 trucks.
Once production begins, Tesla will integrate the initial trucks into its own logistics operations. This will serve both as a final real-world testing ground and as an opportunity for Tesla to gather data internally. Customer deliveries are planned to follow throughout 2026 as the ramp continues.
Reuters also reported that Tesla is hiring over 1,000 new employees at the Semi Factory to begin the rapid ramping of the program.
The Semi has already amassed 7.9 million miles across Tesla’s current testing and operational fleets, providing real-world data along the way. Feedback has been exceptionally positive, with many drivers praising the Semi’s performance and comfort.
New Tesla Semi Features
Of course, it wouldn’t be a Tesla keynote without showing off something new. The Semi will be available in 500-mile and 300-mile range configurations, now featuring updated mirror designs and a drop-down glass section to improve visibility and allow easier interaction with external elements, such as control panels at ports.
New Electric Power Take-Off (e-PTO)
The Tesla Semi will also feature a new capability called Electric Power Take-Off, or e-PTO. Similar to the PTO systems found on other trucks, it allows the Semi’s high-voltage battery to power auxiliary equipment at variable voltages, including climate-controlled reefer trailers, potentially replacing the noisy, polluting diesel generators traditionally used for this purpose.
Charging and Batteries
Tesla is also working on an updated battery pack for the Semi’s final production design. The new pack is more cost-effective to manufacture and slightly smaller than before, but the truck maintains the same range through efficiency gains. Dan also confirmed during his keynote that the Semi’s battery cells will be sourced domestically in the United States, helping to alleviate potential tariff burdens.
On the charging front, Tesla is using MCS (the Megawatt Charging System), capable of 1.2 MW and designed specifically for the Semi. The system uses the same V4 charging hardware found at Supercharger sites but focuses on that larger power output. Alongside a smaller physical footprint, the V4 cabinets can be configured for either dedicated Semi charging or shared power with regular Superchargers. Tesla is also working on an integrated overnight charging product but isn’t ready to talk about it yet.
46 Semi Charger Sites Coming
The 46 new MCS sites coming soon.
Out of Spec BITS/YouTube
Finally, Tesla has made substantial investments in a public charging network for the Semi. There are currently 46 sites in progress throughout the United States, with significant expansion planned for 2026 and 2027. The sites are strategically located along major truck routes and within industrial areas to support long-haul and regional operations, and Tesla is aiming to offer the lowest possible energy costs to operators to help incentivize adoption.
This was one of the best updates to the Tesla Semi we’ve received since its initial unveiling. It seems that the Semi will receive a big portion of Tesla’s attention in 2026, while Robotaxi and FSD Unsupervised take the stage this year.
The Tesla Semi has the potential to transform transportation even more dramatically than EVs already have, serving as a testament to Tesla’s mission to electrify the world.
Sentry Mode is an invaluable tool for owners, capable of keeping the vehicle safe and secure even when you’re not around. That’s especially true recently, given the misguided and unfortunate incidents surrounding Tesla ownership, including damage to Tesla vehicles, showrooms, and Superchargers.
B-pillar Camera Recording and Dashcam Viewer
With the 2025 Spring Update on 2025.14, Tesla is expanding Sentry Mode’s functionality for certain vehicles with some much-needed changes. Sentry Mode and Dashcam can now record footage from the vehicle’s B-pillar cameras. These cameras are located on the side pillars of the vehicle, between the front and rear doors.
This adds two crucially needed viewpoints, making Tesla’s Sentry Mode a truly 360-degree security system. These cameras also provide the best angles for capturing license plates when parked, so they will be greatly appreciated by owners in the event of an incident.
These vehicles are also receiving an improved Dashcam Viewer, which now displays all six camera feeds along the bottom and adds a new grid view. It also lets users jump backward or forward in the video in 15-second increments.
However, to the disappointment of many owners, not all vehicles are receiving these updates due to the additional processing power needed.
Limited to Hardware 4 Vehicles, Ryzen Isn’t Enough
We have confirmed that Tesla is only adding the additional camera recording and improved Dashcam Viewer on hardware 4 (HW4 / AI4) vehicles. The newer hardware presumably has the additional processing power and bandwidth needed to handle recording and saving the two additional video streams during Sentry Mode and Dashcam.
For the time being, owners of HW3 vehicles are not receiving this feature. This includes all vehicles with HW3, even those with AMD Ryzen infotainment systems. If you’re not sure whether your vehicle has HW3 or HW4, you can refer to our FSD hardware guide.
While there’s no doubt that recording two additional camera streams would be more computationally intensive, we hope that Tesla adds the improved Dashcam Viewer to HW3 vehicles in a future update.
Cybertruck Also Missing Improved Sentry Mode
Surprisingly, and most confusingly for many, the Cybertruck is also not receiving the improved Dashcam Viewer and B-pillar camera recording with this update. This struck us as odd, especially since the Cybertruck is currently the only vehicle with the improved, more efficient version of Sentry Mode.
Every Cybertruck is equipped with HW4 and AMD Ryzen infotainment units, so this clearly isn’t a hardware restriction. It’s possible the more efficient Sentry Mode is playing a role here due to the infrastructure changes. However, we expect Tesla to address this in a future update and eventually release these features for the Cybertruck as well.
Given the Cybertruck’s high visibility and its status as a frequent target for both positive and negative attention, many owners hoped that the Cybertruck would be one of the vehicles to receive this feature.
Adaptive Headlights
Tesla finally started rolling out its adaptive headlights in North America. While the new Model Y already came with the feature when it was released last month, other vehicles with matrix headlights are now receiving the feature in the Spring Update.
All vehicles with matrix headlights are receiving this feature, which includes the new and old Model 3, first-gen Model Y, and the new Model S and Model X.
If you’re not sure whether your vehicle includes matrix headlights, check out our guide. Interestingly, older vehicles that were retrofitted with matrix headlights, whether after an accident or a user replacement, are also receiving the adaptive headlights feature.
Legacy Model S & Model X
As with most updates, the older legacy Model S and Model X are not receiving all the features included in this update. Unfortunately, some features, including the Blind Spot Camera on the instrument cluster, Save Trunk Height Based on Location, and Keep Accessory Power On, are limited to the new Model S and X.
Legacy S and X models will receive Alternative Trip Plans, Avoid Highways (requires an Intel MCU), and Keyboard Languages.
These vehicles are also receiving all the features in the Minor Updates section except for the visualization showing how far the door is open, which is exclusive to the Cybertruck. The additions include improved music search results, contact photos in the phone app, automatic connection to hotspots, the ability to view third-party chargers and Supercharger amenities, and various improvements to music services.
While many users will be disappointed not to receive the B-pillar camera recording and Dashcam Viewer improvements, it’s important to remember that Tesla typically does a great job of bringing features to older vehicles, at least for the Model 3 and Model Y. If a feature isn’t added, it’s usually due to a hardware limitation.