Tesla's early FSD Betas included driving visualizations that used simple wireframe boxes to represent vehicles, while lane markings were made up of individual dots.
Tesla adds scalable vehicle models to the latest FSD Beta
The visualizations were a great look into some of the information that is provided to Autopilot, but even then only a fraction of the information Autopilot uses was actually displayed onscreen.
In reality, Autopilot is creating a 3D representation of every object it tracks. Each object detected then has various attributes. For example, a detected vehicle will have attributes for how fast it's going, how far away it is, the type of vehicle, its predicted path, and more.
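As a rough illustration of what this kind of per-object tracking might look like, here is a minimal sketch. The structure, field names, and values are illustrative assumptions, not Tesla's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the per-object state an autonomy stack might track.
# Field names and values are illustrative, not Tesla's real data model.
@dataclass
class TrackedObject:
    object_type: str          # e.g. "car", "truck", "pedestrian"
    distance_m: float         # distance from our vehicle, in meters
    speed_mps: float          # the object's speed, meters per second
    dimensions_m: tuple       # (length, width, height) in meters
    predicted_path: list = field(default_factory=list)  # future (x, y) points

# A detected bus, carrying its calculated size and a short predicted path
bus = TrackedObject("bus", distance_m=22.5, speed_mps=8.0,
                    dimensions_m=(12.1, 2.5, 3.2),
                    predicted_path=[(22.5, 0.0), (30.5, 0.1)])
```

Each tracked object carries its own attributes, which is far more than the onscreen visualization ever shows.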
Tesla's visualizations in early FSD Betas
The car visualizations are an important part of FSD because they help us better understand what the car is capable of seeing and reacting to. However, the information and visualization Autopilot needs are drastically different from what humans need.
In order for Tesla to achieve FSD, they essentially need to be able to build a highly accurate video game that represents the real world, in real-time.
The car wants access to as much information about each object as possible. Meanwhile, humans want a visualization that closely resembles the real world.
With the introduction of FSD Betas 9.x, Tesla released a more human-consumable visualization, one that included proper 3D models of general vehicle types, road pylons, and solid lane markers.
The road edges and lane markings are more distinguished lines, 3D models have working brake lights, and other objects such as speed bumps, bike lanes, and crosswalks are depicted using visualizations that match the real world.
However, something that has been missing from the visualizations is dynamic vehicle sizing. The 3D vehicle models Tesla has been using have a static size. When the vehicle sees a bus, it calculates its length, width, and height in addition to a host of other metrics. However, the 3D model shown onscreen is a predefined size, meaning it does not actually match what the vehicle saw.
This is why you may have seen a tractor-trailer shift forward and backward, or two vehicles rendered on top of each other: one model marks the front of the vehicle, and since the real vehicle is so much longer than the model, another model is added at the end to mark the rear.
Scalable Vehicle Models
However, in the latest 10.10.2 FSD update, we are now seeing Tesla scale individual vehicle models so that they represent the calculated size of surrounding vehicles. Contextually this could be helpful in better understanding our car’s situation in the world.
In 10.10.2, the car shrinks or stretches the 3D vehicle models in each dimension so that the 3D model matches the calculated dimensions for each vehicle. This is especially apparent in longer vehicles such as buses, trucks, and tractor-trailers, where the vehicle lengths are more likely to vary, but you can also see it scale other vehicle models such as very small cars.
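A minimal sketch of how per-axis scaling could work, assuming the renderer simply divides the detected dimensions by the base model's dimensions to get an independent scale factor for each axis (all numbers here are illustrative, not Tesla's actual model sizes):

```python
# Per-axis model scaling: detected dimensions / base model dimensions
# give independent scale factors for length, width, and height.
def scale_factors(detected_dims, model_dims):
    """Return (length, width, height) scale factors for the 3D model."""
    return tuple(d / m for d, m in zip(detected_dims, model_dims))

base_bus_model = (12.0, 2.5, 3.0)     # full-length bus model (L, W, H), meters
detected_short_bus = (9.0, 2.5, 3.0)  # calculated dimensions of a shorter bus

print(scale_factors(detected_short_bus, base_bus_model))  # (0.75, 1.0, 1.0)
```

In this sketch, a shorter bus yields a length factor of 0.75 while width and height stay at 1.0, so only the model's length shrinks onscreen.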
In the example below, you'll see that Tesla is now able to accurately represent buses of different sizes. Tesla only has a model for a full-length bus, but in this case, it detected that one of the buses is considerably shorter than the vehicle model, so it reduced the model's length to match what Autopilot had calculated. In the image below you can see how the same bus model is shown in two different sizes.
Tesla can now accurately render vehicles of different sizes
It's important to realize the difference between the visualizations and what Autopilot uses. The visualizations are there merely to help us better understand what Autopilot can see. The FSD computer itself has always been taking note of the size of surrounding objects and various other data points. Trajectory, approach velocity, proximity, and so forth have also been a part of this, but this update helps Tesla achieve visualizations that provide a more accurate representation of reality.
It's not only buses and trucks that are scaled up or down. Tesla resized a Bobcat down to a vehicle about half the length of its normal sedan model.
Tesla can now accurately render vehicles of different sizes
Models are adjusted in all three dimensions. We witnessed some truck models that were stretched to become taller while also having their length reduced. It's not perfect, since every part of the model is scaled at the same rate along each axis, but it produces a much more accurate representation of the vehicle and the amount of space it takes up.
Vehicles are resized in three dimensions to better match the vehicle's length, width and height
Tesla has come a long way in a short period with how many objects it's able to detect, but when you compare the onscreen environment to what the car actually sees today, there is still a lot missing.
In the short term, we'd like to see more commonly encountered objects visualized, such as trailers and gates.
We'd also like to see other common objects added, such as additional traffic light configurations, crosswalks, mailboxes, and maybe even a generic object that lets us know the vehicle sees something it needs to maneuver around, but it may not know exactly what it is.
In the future, I think we'll see Tesla display a rich, fuller 3D environment that will display static and moving objects that are important for the vehicle to avoid, objects such as barriers, buildings, trees, sidewalks, and more. Today Tesla is one step closer to achieving this goal.
Tesla appears to be preparing to expand its Robotaxi geofence in Austin, Texas, with numerous engineering vehicles taking to the road. One of the most interesting sights, between the short and tall LiDAR rigs, was a Cybertruck validation vehicle, which we don’t often see.
Tesla’s expansion is moving the Robotaxi Network into downtown Austin, a dense urban environment that is currently outside the geofence. It appears Tesla is content with the latest builds of Robotaxi FSD and is ready to take on urban traffic.
The inclusion of a Cybertruck in the validation fleet is noteworthy, as the rest of the vehicles are Model Ys. This suggests that Tesla may be tackling two challenges simultaneously: expanding its service area while also closing the FSD gap between the Cybertruck and other HW4 Tesla vehicles.
Tesla Validating Downtown Austin before expanding the Robotaxi geo-fence area. pic.twitter.com/ylFATtjcDi
Recent sightings have shown a fleet of Tesla vehicles, equipped with rooftop validation sensor rigs, running routes throughout downtown Austin and across the South Congress Bridge. While these rigs include LiDAR, it’s not a sign that Tesla is abandoning its vision-only approach.
Instead, Tesla uses the high-fidelity data from the LiDAR as a ground truth measurement to validate and improve the performance of its cameras. In short, it essentially uses the LiDAR measurements as the actual distances and then compares the distances determined in vision-only to the LiDAR measurements. This allows Tesla to tweak and improve its vision system without needing LiDAR.
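As a rough sketch of what this kind of ground-truth comparison looks like, assuming LiDAR ranges are treated as the true distances and vision-only estimates are scored against them (the distances and the error metric below are illustrative, not Tesla's actual pipeline):

```python
# Treat LiDAR ranges as ground truth and measure how far the vision-only
# distance estimates deviate from them, using mean absolute error.
def mean_abs_error(vision_m, lidar_m):
    """Average absolute difference between vision and LiDAR distances."""
    return sum(abs(v, ) if False else abs(v - l) for v, l in zip(vision_m, lidar_m)) / len(lidar_m)

lidar_distances  = [12.0, 25.4, 40.1]   # "ground truth" from the rooftop rig
vision_distances = [12.3, 24.9, 41.0]   # same objects, vision-only estimates

print(round(mean_abs_error(vision_distances, lidar_distances), 2))  # 0.57
```

A falling error like this across many drives would indicate the vision system is converging toward the LiDAR measurements, which is exactly the validation signal Tesla is after.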
This data collection in a new, complex environment right outside the Robotaxi geofence is an indicator that Tesla plans to expand the geofence. Tesla has previously indicated that it intends to roll out more vehicles and expand the geofence slowly. Given that its operational envelope includes the entire Austin Metro Area, we can expect more locations to open up gradually.
Once they expand the operational radius to include downtown Austin, they will likely also have to considerably increase the number of Robotaxis active in the fleet at any given time. Early-access riders are already saying that the wait time for a Robotaxi is too long, with them sometimes having to wait 15 minutes to be picked up.
With a larger service area, we expect Tesla to also increase the number of vehicles and the number of invited riders to try out the service.
After all, Tesla’s goal is to expand the Robotaxi Network to multiple cities within the United States by the end of 2025. Tesla has already been running an employees-only program in California, and we’ve seen validation vehicles as far away as Boston and New Jersey, on the other side of the country.
Cyber FSD Lagging Behind
One of the most significant details from these recent sightings is the presence of a Cybertruck. Cybertruck’s FSD builds have famously lagged behind the builds available on the rest of Tesla’s HW4 fleet. Key features that were expected never fully materialized for the Cybertruck, and the list of missing features is quite extensive.
Start FSD from Park
Improved Controller
Reverse on FSD
Actually Smart Summon
It may not look like a lot, but if you drive a Cybertruck on FSD and then hop in any of the rest of Tesla’s HW4 vehicles, you’ll notice a distinct difference. This is especially evident on highways, where the Cybertruck tends to drift out of the lane, often crossing over the lane markings.
Tesla was testing parts of downtown Austin, TX, with this Cybertruck, which had a massive roof rack and sensors.
We previously released an exclusive mentioning that a well-positioned internal source had confirmed a new FSD build for the Cybertruck was upcoming, but we never ended up receiving that particular build, only a point release to V13.2.9. The AI team's focus had clearly shifted to getting the latest Robotaxi builds running and validated, and while the Cybertruck is a flagship, its fleet was small and new, making it a secondary priority.
The Cybertruck’s larger size, steer-by-wire, rear-wheel steering, and different camera placements likely present a bigger set of challenges for FSD. Deploying it now as a validation vehicle in a complex environment like downtown Austin suggests that Tesla is finally gathering the specific data needed to bring the Cybertruck’s capabilities up to par. This focused effort is likely the necessary step to refine FSD’s handling of the Cybertruck before they begin rolling out new public builds.
When?
Once Tesla’s validation is complete, we can probably expect the Robotaxi Network to expand its borders for the first time in the coming days or weeks. However, we’ll likely see more signs of the expansion, such as Robotaxi vehicles driving themselves around the area, before the expansion actually happens.
Hopefully, the Cybertruck will also learn from its older siblings and receive the rest of its much-needed FSD features, alongside an FSD update for the entire fleet.
Tesla is rolling out a fairly big update for its early-access, iOS-only Robotaxi app, delivering a suite of improvements that address user feedback from the initial launch last month. The update improves the user experience with increased flexibility, more information, and overall design polish.
The most prominent feature in this update is that Tesla now allows you to adjust your pickup location. Once a Robotaxi arrives at your pickup location, you have 15 minutes to start the ride. The app will now display the remaining time your Robotaxi will wait for you, counting down from 15:00. The wait time is also shown in the iOS Live Activity if your phone is on the lock screen.
How Adjustable Pickups Work
We previously speculated that Tesla had predetermined pickup locations, as the pickup location wasn’t always where the user was. Now, with the ability to adjust the pickup location, we can clearly see that Tesla has specific locations where users can be picked up.
Rather than allowing users to drop a pin anywhere on the map, the new feature works by having the user drag the map to their desired area. The app then presents a list of nearby, predetermined locations to choose from. Once a user selects a spot from this curated list, they hit “Confirm.” The pickup site can also be changed while the vehicle is en route.
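A minimal sketch of how this selection might work under the hood, assuming the app simply ranks a curated list of approved spots by distance to the point the user dragged to. The spot names and coordinates below are made up for illustration, and straight-line distance on lat/lon is a simplification:

```python
import math

# Hypothetical curated pickup spots: (name, (lat, lon)). Made-up data.
CURATED_SPOTS = [
    ("South Congress & Elizabeth", (30.2489, -97.7501)),
    ("Congress Ave Bridge North", (30.2614, -97.7450)),
    ("Rainey St Entrance", (30.2592, -97.7389)),
]

def nearest_spots(user_point, spots, k=2):
    """Return the k curated spots closest to where the user dragged the map."""
    def dist(p, q):
        # Planar distance on degrees: fine for ranking nearby points
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return sorted(spots, key=lambda s: dist(user_point, s[1]))[:k]

# User drags the map near the north end of the Congress Avenue bridge
for name, _ in nearest_spots((30.2600, -97.7440), CURATED_SPOTS):
    print(name)
```

The app would then show this short ranked list for the user to confirm, rather than honoring an arbitrary dropped pin.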
This specific implementation raises an interesting question: Why limit users to predetermined spots? The answer likely lies in how Tesla utilizes fleet data to improve its service.
Here is the new Tesla Robotaxi pickup location adjustment feature.
While the app is still only available on iOS through Apple’s TestFlight program, invited users can download and update the app.
Tesla included these release notes in update 25.7.0 of the Robotaxi app:
You can now adjust pickup location
Display the remaining wait time at pickup in the app and Live Activity
Design improvements
Bug fixes and stability improvements
Why Predetermined Pick Up Spots?
The use of predetermined pickup points is less of a limitation and more of a feature. These curated locations are almost certainly spots that Tesla’s fleet data has identified as optimal and safe for an autonomous vehicle to perform a pickup or drop-off.
This suggests that Tesla is methodically "mapping" its service area not just for calibration and validation of FSD builds, but also to handle the first- and last-50-feet interactions that are critical to a safe and smooth ride-hailing experience.
An optimal pickup point likely has several key characteristics identified by the fleet, including:
A safe and clear pull-away area away from traffic
Good visibility for cameras, free of obstructions
Easy entry and exit paths for an autonomous vehicle
This change to pickup locations reveals how Tesla's Robotaxi Network is more than just Unsupervised FSD. There are a lot of moving parts, many of which Tesla recently implemented, and others that likely still need to be implemented, such as automated charging.
Frequent Updates
This latest update delivers a much-needed feature for adjusting pickup locations, but it also gives us a view into exactly what Tesla is doing with all the data it is collecting with its validation vehicles rolling around Austin, alongside its Robotaxi fleet.
Tesla is quickly iterating on its app and presumably the vehicle’s software to build a reliable and predictable network, using data to perfect every aspect of the experience, from the moment you hail the ride to the moment you step out of the car.