How Tesla’s FSD Works - Part 2

By Karan Singh
Not a Tesla App

Back in November, we dove into how FSD works based on Tesla’s patents, and Tesla has recently filed additional patents covering how it trains FSD.

This particular patent is titled “Predicting Three-Dimensional Features for Autonomous Driving” - and it’s all about using Tesla Vision to establish a ground truth - which enables the rest of FSD to make decisions and navigate through the environment. 

This patent essentially explains how FSD can generate a model of the environment around it and then analyze that information to create predictions.

Time Series

Creating a sequence of data over time - a Time Series - is the basis for how FSD understands the environment. Tesla Vision, in combination with the internal vehicle sensors (speed, acceleration, position, etc.), establishes data points over time. These data points come together to create the time series.

By analyzing that time series, the system establishes a “ground truth” - a highly accurate and precise representation of the road, its features, and what is around the vehicle. For example, FSD may observe a lane line from multiple angles and distances as the vehicle moves through time, allowing it to determine the line’s precise 3D shape in the world. This system helps FSD to maintain a coherent truth as it moves forward - and allows it to establish the location of things in space around it, even if they were initially hidden or unclear.
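To make the idea concrete, here is a minimal sketch of how repeated sightings of the same lane-line point, seen from different vehicle positions as the car moves, can be fused into a single more accurate estimate. This is purely illustrative - the poses, noise values, and data layout are invented, not Tesla's actual code:

```python
import math

def to_world(ego_xy, ego_heading, obs_xy):
    """Transform an observation from the vehicle frame into the world frame."""
    c, s = math.cos(ego_heading), math.sin(ego_heading)
    ox, oy = obs_xy
    return (ego_xy[0] + c * ox - s * oy,
            ego_xy[1] + s * ox + c * oy)

def fuse_observations(time_series):
    """Average repeated world-frame sightings of the same lane point.

    time_series: list of (ego_xy, ego_heading, obs_xy_in_vehicle_frame).
    Per-frame noise averages out across the series, yielding a
    'ground truth' estimate for the point.
    """
    pts = [to_world(pose, h, obs) for pose, h, obs in time_series]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

# Three frames observing the same lane-line point (true position ~ (10, 2))
# as the vehicle moves forward along the x-axis.
series = [
    ((0.0, 0.0), 0.0, (10.1, 2.05)),
    ((2.0, 0.0), 0.0, (7.95, 1.98)),
    ((4.0, 0.0), 0.0, (6.0, 1.97)),
]
print(fuse_observations(series))  # ≈ (10.02, 2.0)
```

The key point is that no single frame needs to be perfect; the fused estimate converges on the true position as more frames accumulate.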

Author’s Note

Interestingly, Tesla’s patent mentions sensors other than Tesla Vision, including radar, LiDAR, and ultrasonic sensors. While Tesla no longer uses radar (despite HD radars being present on the current Model S and Model X) or ultrasonic sensors, it does use LiDAR for training.

However, this LiDAR use is limited to establishing accurate ground-truth sensor data for training purposes. No Tesla vehicle actually ships with LiDAR sensors. You can read about Tesla’s LiDAR training rigs here.

Associating the Ground Truth

Once the ground truth is established, it is linked to specific points in time within the time series - usually a single image or the amalgamation of a set of images. This association is critical - it allows the system to predict the complete 3D structure of the environment from just a single snapshot. In addition, these associations serve as a learning tool to help FSD understand the environment around it.

Imagine FSD has figured out the exact curve of a lane line using data from the time series. It then connects this knowledge to the particular image in the sequence where the lane line was visible. Finally, it applies what it has learned - the exact curve, plus the image sequence and data - to predict the 3D shape of the line going forward, even if it can’t know for certain what the line will look like in the future.
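Continuing the lane-line example, here is a minimal sketch of how a time-series-derived ground truth could be attached to a single frame as a training label. The frame ID, the straight-line fit, and the data format are all hypothetical - the patent doesn't specify a particular parameterization:

```python
def fit_lane_line(points):
    """Least-squares line fit y = m*x + b through fused lane points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cov = sum((p[0] - mx) * (p[1] - my) for p in points)
    var = sum((p[0] - mx) ** 2 for p in points)
    m = cov / var
    return m, my - m * mx

def make_training_pair(frame_id, fused_points):
    """Associate the time-series-derived ground truth with one frame,
    producing a (snapshot, 3D-aware label) pair for supervised training."""
    m, b = fit_lane_line(fused_points)
    return {"frame": frame_id, "label": {"slope": m, "intercept": b}}

# Lane points recovered from the time series, paired with one camera frame.
pair = make_training_pair("cam_front_0042",
                          [(0.0, 2.0), (5.0, 2.5), (10.0, 3.0)])
print(pair)
```

The resulting pair is exactly the shape a supervised model wants: one input image, one geometric target derived from many frames.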

Author’s Note

This isn’t part of the patent, but when you combine that predictive knowledge with precise and effective map data, FSD can better understand the lay of the road and plan its maneuvers ahead of time. We do know that FSD takes mapping information into account. However, live road information from the ground truth takes priority - mapping is just context, after all.

That is why, when roads are incorrectly mapped - such as when a roundabout replaces a 4-way stop - FSD is still capable of traversing the intersection.

Three Dimensional Features

Representing features that the system picks up in 3D is essential, too. This means that the lane lines, to continue our previous example, must be represented not just left to right, but also up and down - and tracked through time. This 3D understanding is vital for accurate navigation and path planning, especially on roads with curves, hills, or other varying terrain.
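One simple way to capture both the horizontal curve and the vertical profile of a lane line in a single object is a polynomial curve in three dimensions. This is an illustrative parameterization only - the patent doesn't prescribe one - and the coefficients below are made up:

```python
def lane_point(t, coeffs):
    """Evaluate a lane line as a 3D curve: x advances with the parameter t,
    while y (lateral offset) and z (elevation) follow quadratics in t.

    coeffs = {"y": (c0, c1, c2), "z": (c0, c1, c2)}
    """
    def poly(c, t):
        return c[0] + c[1] * t + c[2] * t * t
    return (t, poly(coeffs["y"], t), poly(coeffs["z"], t))

# A lane line that drifts gently right while climbing a slight hill.
curve = {"y": (2.0, 0.05, 0.001), "z": (0.0, 0.02, 0.0)}
print(lane_point(10.0, curve))  # ≈ (10.0, 2.6, 0.2)
```

A flat 2D representation would collapse the z component entirely - which is exactly why hills and crests demand the third dimension.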

Automated Training Data Generation

One of the major advantages of this entire 3D system is that it generates training data automatically. As the vehicle drives, it collects sensor data and creates time series associated with ground truths.

Tesla does exactly this when it uploads data from your vehicle and analyzes it with its supercomputers. The machine learning model uses all the information it receives to improve its prediction capabilities. This is becoming an increasingly automated process, as Tesla moves away from manually labeling data and instead labels data automatically with AI.
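A rough sketch of what such an automated labeling pipeline could look like follows. The log format and the minimum-observation quality filter are assumptions for illustration - the idea is simply that every log entry already carries its own ground truth, so no human annotator is needed:

```python
MIN_OBSERVATIONS = 3  # assumed quality bar: a feature must be seen in >= 3 frames

def auto_label(drive_log):
    """Yield (image, label) training samples with no human annotation.

    Each log entry carries the ground truth recovered from the surrounding
    time series; poorly observed entries are skipped to keep labels clean.
    """
    for entry in drive_log:
        if entry["observations"] >= MIN_OBSERVATIONS:
            yield {"image": entry["frame"], "target": entry["ground_truth"]}

log = [
    {"frame": "f001", "observations": 5, "ground_truth": {"lane_curve": [2.0, 0.05]}},
    {"frame": "f002", "observations": 1, "ground_truth": {"lane_curve": [2.1, 0.04]}},
    {"frame": "f003", "observations": 4, "ground_truth": {"lane_curve": [2.2, 0.03]}},
]
samples = list(auto_label(log))
print([s["image"] for s in samples])  # ['f001', 'f003']
```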

Semantic Labelling

The patent also discusses semantic labeling - a topic covered in our AI Labelling Patent article. In short, Tesla labels lane lines as “left lane” or “right lane,” depending on the 3D environment generated through the time series.

On top of that, vehicles and other objects can also be labelled, such as “merging” or “cutting in.” All of these automatically applied labels help FSD prioritize how it analyzes information and what it expects the environment around it to do.
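A toy version of this kind of geometric labeling might look like the following. The thresholds, track format, and label names are illustrative, not Tesla's - the point is that labels fall directly out of the 3D geometry:

```python
def label_lane(lateral_offset):
    """Label a lane line relative to the ego vehicle: negative lateral
    offsets (in meters) are to the left, positive to the right."""
    return "left lane" if lateral_offset < 0 else "right lane"

def label_vehicle(track):
    """Label another vehicle 'cutting in' if its lateral distance to the
    ego lane center (meters, per frame) shrinks to within roughly half a
    lane width by the end of the track."""
    start, end = track[0], track[-1]
    if abs(end) < 1.8 and abs(end) < abs(start):
        return "cutting in"
    return "keeping lane"

print(label_lane(-1.7))                 # left lane
print(label_vehicle([3.5, 2.6, 1.2]))   # cutting in
print(label_vehicle([3.5, 3.5, 3.6]))   # keeping lane
```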

How and When Tesla Uploads Data

Tesla doesn’t simply upload everything the vehicle captures - even though it did draw an absolutely astounding 1.28 TB from the author’s Cybertruck once it received FSD V13.2.2. Instead, uploads transmit selective sensor information based on triggers. These triggers can include incorrect predictions, user interventions, or failures in path planning.

Tesla can also request all data from certain vehicles based on vehicle type and location - hence the absurd 1.28 TB request from one of the first Canadian Cybertrucks. This allows Tesla to collect data from specific driving scenarios - data it needs to build models that adapt to more circumstances - while keeping data collection focused and training efficient.
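The upload logic described above could be sketched like this. The trigger names, clip format, and fleet-request shape are all hypothetical - the sketch just shows the two paths to an upload: a trigger firing, or a fleet-wide request matching the vehicle:

```python
# Hypothetical trigger names based on the events described in the article.
TRIGGERS = {"intervention", "prediction_mismatch", "planning_failure"}

def should_upload(clip, fleet_request=None):
    """Decide whether a clip gets uploaded: either a trigger fired during
    the clip, or the clip matches an active fleet-wide data request
    (vehicle type + region)."""
    if clip["events"] & TRIGGERS:
        return True
    if fleet_request:
        return (clip["vehicle"] == fleet_request["vehicle"]
                and clip["region"] == fleet_request["region"])
    return False

quiet_clip = {"events": set(), "vehicle": "cybertruck", "region": "CA"}
print(should_upload(quiet_clip))                                        # False
print(should_upload(quiet_clip, {"vehicle": "cybertruck", "region": "CA"}))  # True
print(should_upload({"events": {"intervention"},
                     "vehicle": "model3", "region": "US"}))             # True
```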

How It Works

To wrap it all up: the model applies predictions to better navigate through the environment. It uses data collected over time, encapsulated in a 3D representation of the environment around the vehicle. Using that 3D environment, Tesla’s FSD formulates predictions about what the environment ahead will look like.

This process provides a good portion of the context that is needed for FSD to actually make decisions. But there are quite a few more layers to the onion that is FSD.

Adding in Other Layers

The rest of the decision-making process lies in understanding moving and static objects on the road, as well as identifying and reducing risk to vulnerable road users. Tesla’s 3D mapping also identifies and predicts the paths of other moving objects, which enables its path planning. While this isn’t part of this particular patent per se, it is still an essential element of the entire system.

If all that technical information is interesting to you, we recommend checking out the rest of our series on Tesla’s patents.

We’ll continue to dive deep into Tesla’s patents, as they provide a unique and interesting way to explain how FSD actually works behind the curtain. It’s an excellent chance to peek at the silicon brains that make the decisions in your car, and to see how Tesla’s engineers actually structure FSD.

Tesla’s LFP Factory in North America Almost Complete — More LFP Vehicles Could Follow

By Karan Singh
Not a Tesla App

In a new video posted to X, Tesla is showing the progress of its first Lithium Iron Phosphate (LFP) cell manufacturing factory in North America. The facility, located in Sparks, Nevada, will be used to produce LFP battery cells for Megapacks and Powerwall.

However, the implications of this new factory extend beyond Tesla Energy. By on-shoring the production of these cost-effective batteries, Tesla is not only securing its energy supply chain but also opening the door to potentially reintroducing LFP-based vehicles in North America.

Megapack First

The immediate beneficiary of the new Nevada LFP facility is Tesla’s Energy division. LFP chemistry is ideal for stationary storage products like Megapack and Powerwall. It offers a very long life cycle, is extremely thermally stable and safe, and is significantly cheaper to produce than nickel-based batteries, partly because it contains no cobalt.

Until now, Tesla has relied on suppliers like CATL in China for these cells. A dedicated, domestic supply will enable Tesla to dramatically ramp up Megapack production to meet North America’s increasing demand for grid-scale energy. On the other hand, Megafactory Shanghai continues to utilize CATL’s LFP batteries and will support the rest of the world. 

Tesla first revealed its plans to onshore LFP production in North America at the Q1 2025 Earnings Call. Doing so will help it reduce costs, innovate on new technology, and insulate itself from geopolitical supply chain risks.

A Potential Return for LFP Vehicles?

Another exciting application is what this new factory means for Tesla’s budget-oriented lineup. For years, Tesla has been constrained in its ability to offer LFP-based vehicles in North America. While LFP packs are used in other markets for certain standard-range RWD vehicles, tariffs on imported Chinese cells made it difficult to bring those cells to North America.

With a domestic supply of LFP cells produced in Nevada, this tariff-related barrier will be mostly eliminated, pending the sourcing of lithium from a North American site. This is likely to lead to the reintroduction of LFP-based vehicles to the North American market, possibly in late 2026 or 2027.

An American-made LFP pack could lead to a more affordable base Model 3 or Model Y, or potentially help Tesla cut costs on the next-generation Affordable Model even further. This would give customers a lower-cost entry point without sacrificing much range, with the added benefit that LFP packs can be regularly charged to 100%.

Mega Nevada

With Mega Nevada now progressing well, Tesla is in an excellent position to continue iterating on its vertical integration and scaling Megapack and Powerwall—two of Tesla’s fastest-growing businesses—further. There are tons of benefits for consumers in the future as Tesla continues down this path, with more affordable Powerwalls for the home, cheaper electricity prices thanks to grid-forming Megapacks, and cheaper LFP vehicles.

Tesla Grok App: First Look at Its Interface and Features

By Karan Singh
@greentheonly on X

The next major upgrade for Tesla’s in-car experience is pretty much already here - just hiding beneath the surface, awaiting the flick of a switch. According to new details uncovered by Tesla hacker Greentheonly, a fully functional version of the Grok conversational AI assistant is already present in recent firmware builds, just waiting for Tesla to activate it.

The feature, which is currently behind a server-side switch, could be enabled at any time by Tesla for vehicles running update 2025.20 and newer. The findings provide a better picture of what we already learned from Green’s breakdown on Grok last month.

Grok’s Requirements

@greentheonly on X

According to what Green determined from the latest software builds, the foundation for Grok was laid with update 2025.14, with more abilities and functionality added in 2025.20 to flesh it out. He also determined exactly which vehicles will be receiving Grok.

In terms of hardware, any vehicle with a Ryzen-based infotainment computer will receive Grok. This means that vehicles with the older Intel Atom processor will not be supported, at least initially. The underlying Autopilot hardware is not a factor, as Grok’s processing is not done in-vehicle.

Grok will also require Premium Connectivity or a Wi-Fi connection for the vehicle. Tesla will require you to sign in to your Grok account, but it’s not yet clear whether the free tier of Grok will work, or whether you’ll need SuperGrok, X Premium, or X Premium+.

Grok User Experience

@greentheonly on X

Green also revealed the user interface for Grok for the first time. You’ll find many of the same features from the Grok app, but surprisingly, it looks like it’ll have a dark UI, even if you’re using light mode in your vehicle.

It appears that there will be a Grok app, likely for settings. However, Grok will largely operate in a modal similar to voice commands, displayed near the bottom-left corner of the screen.

There’s an on-screen microphone button, as well as drop-down menus for the voice and type of assistant you’d like to use. 

Similar to the Grok app currently on mobile devices, you’ll be able to select from a set of voices and then define their personality. The available voices for now are the standard Ara (Upbeat Female), Rex (Calm Male), and Gork (Lazy Male).

There’s also a settings button, which, when expanded, allows you to enable or disable NSFW mode (including swearing and adult topics), as well as a Kids Mode, which will tone Grok down to be suitable for when kids are in the car.

@greentheonly on X

How Grok Will Work (Button / Wake Word)

Users will be able to activate Grok by pressing a button, likely the same one that activates voice commands today. Grok will then remain enabled for the duration of your conversation, allowing you to go back and forth, asking and answering questions. To end your conversation, you’ll press the mic button again.
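The press-to-start, press-to-end flow can be modeled as a tiny session state machine. This is purely illustrative, not Tesla's implementation - the class and method names are invented:

```python
class AssistantSession:
    """Toggle-to-talk session: one mic press opens the conversation, the
    session stays live across back-and-forth turns, a second press ends it."""

    def __init__(self):
        self.active = False
        self.turns = []

    def press_mic(self):
        # The same button both starts and ends the conversation.
        self.active = not self.active

    def say(self, utterance):
        if not self.active:
            raise RuntimeError("press the mic button first")
        self.turns.append(utterance)
        return f"(reply to: {utterance})"

s = AssistantSession()
s.press_mic()
s.say("what's the weather like?")
s.say("and tomorrow?")          # conversation continues without re-pressing
s.press_mic()                   # ends the session
print(s.active, len(s.turns))   # False 2
```

Contrast this with today's voice commands, where every request starts from scratch: here the session persists until the user explicitly closes it.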

While it doesn’t appear to use a wake word yet, Green says that some code refers to a wake word, so it’s possible that this could be an option Tesla plans to activate in the future.

Replacing Voice Commands

The most significant implication of Grok’s future integration is in its potential to fully replace the existing and relatively rigid voice command system. Green notes that internally, this feature is part of the car assist module, and that eventually, the plan is for Grok to take over car control functions.

Unlike the current system, which requires specific phrases, a true conversational AI like Grok can understand natural language. This will enable more intuitive requests, completely changing how drivers interact with their car.

Language Support

@greentheonly on X

Grok will also launch with multi-language support, similar to its current abilities in the Grok app. Green says it already appears to support English, Chinese, and one or two other languages.

Release Date

Grok appears ready to go from a vehicle standpoint, but Green wasn’t able to actually test it out. While development appears to be nearly complete in the vehicle, Tesla and xAI may still be working on some server-side changes to better integrate with the vehicle. If they plan for Grok to replace voice commands on day one, then it’ll need to be trained and be able to execute a variety of vehicle commands.

It’s possible Tesla is actively testing Grok or adding server-side changes to replace voice commands. However, it looks like vehicle development is nearly complete and Grok could launch as soon as the next major Tesla update, which is expected to be update 2025.24.
