Sentry Mode is an invaluable tool for owners - capable of keeping the vehicle safe and secure even when you’re not around. This is especially true in recent times, with the misguided and unfortunate incidents surrounding Tesla ownership, including damage to Tesla vehicles, showrooms, and Superchargers.
B-pillar Camera Recording
With the 2025 Spring Update on 2025.14, Tesla is expanding Sentry Mode’s functionality for certain vehicles with some much-needed changes. Sentry Mode and Dashcam can now record footage from the vehicle’s B-pillar cameras. These cameras are located on the side pillars of the vehicle, between the front and rear doors.
This adds two critical new viewpoints, making Tesla’s Sentry Mode a truly 360-degree security system. These cameras also provide the best angles for capturing license plates when parked, so they’ll be greatly appreciated by owners in the event of an incident.
Updated Dashcam Viewer
These vehicles are also receiving an improved Dashcam Viewer, which displays the six camera feeds along the bottom of the screen and adds a new grid view that shows four cameras simultaneously. It also lets you jump backward or forward through the video in 15-second increments.
However, to the disappointment of many owners, not all vehicles are receiving these updates due to the additional processing power needed.
We have confirmed that Tesla is only adding the additional camera recording on hardware 4 (HW4 / AI4) vehicles. The newer hardware presumably has the additional processing power and bandwidth needed to handle recording and saving the two additional video streams during Sentry Mode and Dashcam.
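To put that in perspective, here’s a rough back-of-the-envelope estimate of the extra recording load. The per-camera bitrate below is our own assumption for illustration - Tesla hasn’t published these figures:

```python
# Back-of-the-envelope estimate of the load added by two more cameras.
# The per-camera bitrate is an assumed figure, not a published Tesla spec.

ASSUMED_MBPS_PER_CAMERA = 4.5   # assumed compressed video bitrate
EXTRA_CAMERAS = 2               # the two B-pillar cameras

extra_mbps = ASSUMED_MBPS_PER_CAMERA * EXTRA_CAMERAS
extra_mb_per_hour = extra_mbps / 8 * 3600   # megabits/s -> megabytes/hour

print(f"Extra write bandwidth: {extra_mbps:.0f} Mbps")
print(f"Extra storage per hour of Sentry Mode: ~{extra_mb_per_hour / 1000:.1f} GB")
```

Under those assumptions, the two B-pillar streams would add roughly 9 Mbps of sustained writes and about 4 GB per hour of footage - on top of everything the computer is already encoding from the other cameras.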
For the time being, owners of HW3 vehicles are not receiving this feature. This includes all vehicles with HW3, even those with AMD Ryzen infotainment systems. If you’re not sure whether your vehicle has HW3 or HW4, you can refer to our FSD hardware guide.
While there’s no doubt that recording two additional camera streams would be more computationally intensive, we hope that Tesla adds the improved Dashcam Viewer to HW3 vehicles in a future update.
Update: Tesla is including the new Dashcam Viewer on Ryzen-based HW3 vehicles.
New Dashcam Viewer is Available on HW3 / Ryzen Vehicles
Tesla doesn’t list the new Dashcam Viewer as a feature in the 2025.14.3 release notes, but owners of Ryzen-based HW3 vehicles are receiving the improved viewer. However, they’re only receiving the improved viewer, not the B-pillar camera recording. The new viewer is a worthy addition, so it’s great to see Tesla include it even if B-pillar recording isn’t possible.
The new viewer includes four improvements:
A new grid view that lets you watch four cameras at the same time
Camera views, including the grid view, arranged along the bottom of the screen instead of in the corners
The ability to jump backward or forward in 15-second increments
A button in the top-right corner that jumps to the next Sentry Mode event, so you no longer have to go back to the list of events
The existing functionality remains largely intact, including the ability to jump to a Sentry Mode event. However, the playback speed selection of 0.5x, 1x, and 2x has been removed.
Surprisingly, and most confusing to many owners, the Cybertruck is also not receiving the improved Dashcam Viewer or B-pillar camera recording with this update. This struck us as odd, especially since the Cybertruck is currently the only vehicle with the improved, more efficient version of Sentry Mode.
Every Cybertruck is equipped with HW4 and an AMD Ryzen infotainment unit, so this clearly isn’t a hardware restriction. It’s possible the more efficient Sentry Mode is playing a role here due to its infrastructure changes. However, we expect Tesla to address this in a future update and eventually release these features for the Cybertruck as well.
Given the Cybertruck’s high visibility and its status as a frequent target for both positive and negative attention, many owners hoped that the Cybertruck would be one of the vehicles to receive this feature.
Adaptive Headlights
Tesla has finally started rolling out its adaptive headlights in North America. While the new Model Y already came with the feature when it was released last month, other vehicles with matrix headlights are now receiving it in the Spring Update.
All vehicles with matrix headlights are receiving this feature, including the new and previous-generation Model 3, the first-generation Model Y, and the new Model S and Model X.
If you’re not sure whether your vehicle has matrix headlights, check out our guide. Interestingly, older vehicles that were retrofitted with matrix headlights, whether after an accident or an owner replacement, are also receiving the adaptive headlights feature.
Legacy Model S & Model X
As with most updates, the older legacy Model S and Model X are not receiving all the features included in this update. Unfortunately, several features, including the Blind Spot Camera on the instrument cluster, Save Trunk Height Based on Location, and Keep Accessory Power On, are limited to the newer Model S and Model X.
Legacy Model S and X vehicles will receive Alternative Trip Plans, Avoid Highways (requires an Intel MCU), and Keyboard Languages.
These vehicles are also receiving all the features in the Minor Updates section except for the visualization showing how far the doors are open, which is exclusive to the Cybertruck. These additions include improved music search results, contact photos in the phone app, automatic connection to hotspots, the ability to show third-party chargers and view Supercharger amenities, and various improvements to music services.
While many users will be disappointed not to receive the B-pillar camera recording and Dashcam Viewer improvements, it’s important to remember that Tesla typically does a great job of bringing features to older vehicles, at least with the Model 3 and Model Y. When a feature isn’t added, it’s usually due to a hardware limitation.
With Tesla’s first major expansion of the Robotaxi geofence now complete and operational, the company has been hard at work validating new locations - and some are quite a drive from the current Austin geofence.
Validation fleet vehicles have been spotted operating in a wider perimeter around the city, from rural roads in the west to the more complex area closer to the airport. Tesla mentioned during its earnings call that Robotaxi has already completed 7,000 miles in Austin and that the area of operation will expand to roughly 10 times its current size. This lines up with the validation vehicles we’ve been tracking around Austin.
Based on the spread of the new sightings, the potential next geofence could cover a staggering 450 square miles - a tenfold increase from the current service area of roughly 42 square miles. You can check this out in our map below with the sightings we’re tracking.
Expanding into these new areas would match Tesla’s stated tenfold increase and cover approximately 10% of the 4,500-square-mile Austin metropolitan area. If Tesla can offer Robotaxi service across that entire footprint, it would go a long way toward proving it can tackle just about any city in the United States.
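For those who want to check the math, here’s a quick sketch using the figures above (all rounded estimates based on the sightings, not official Tesla numbers):

```python
# Rough check of the geofence figures cited above (all approximate).

current_area_sq_mi = 42      # current Austin service area
potential_area_sq_mi = 450   # area suggested by validation sightings
metro_area_sq_mi = 4500      # Austin metropolitan area

print(f"Expansion factor: {potential_area_sq_mi / current_area_sq_mi:.1f}x")   # ~10.7x
print(f"Share of the Austin metro: {potential_area_sq_mi / metro_area_sq_mi:.0%}")  # 10%
```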
From Urban Core to Rural Roads
The locations of the validation vehicles show a clear intent to move beyond the initial urban and suburban core and prepare the Robotaxi service for a much wider range of uses.
In the west, validation fleet vehicles have been spotted as far as Marble Falls - a much more rural environment that features different road types, higher speed limits, and potentially different challenges.
In the south, Tesla has been expanding toward Kyle, which is part of the growing Austin-San Antonio suburban corridor along Interstate 35. San Antonio is only 80 miles (roughly a 90-minute drive) away and could easily become part of the existing Robotaxi area if Tesla obtains regulatory approval there.
In the east, we haven’t spotted any new validation vehicles. This is likely because Tesla’s validation vehicles originate from Giga Texas, which is located east of Austin. We won’t really know whether Tesla is expanding in this direction until its vehicles start pushing past Giga Texas toward Houston.
Finally, some validation vehicles have been spotted just north of the newly expanded boundaries, meaning Tesla isn’t done in that direction either. This area contains Austin’s largest suburbs, which have so far not been served by any form of autonomous vehicle.
Rapid Scaling
This new, widespread validation effort confirms what we already know: Tesla conducts an intensive period of public data gathering and system testing in a new area right before expanding the geofence. The sheer scale of this new validation zone tells us that Tesla isn’t taking this slowly - the next step is going to be a great leap, and the company essentially confirmed as much during the Q&A session on the recent earnings call. The goal is clearly to bring the entire Austin metropolitan area into the Robotaxi network.
While the previous expansion showed how Tesla can scale the network, this new phase of validation testing demonstrates how fast it can validate and expand. Validating across rural, suburban, and urban areas simultaneously shows Tesla’s confidence in these new Robotaxi FSD builds.
Eventually, all these improvements from Robotaxi will make their way to customer FSD builds sometime in Q3 2025, so there is a lot to look forward to.
For years, the progress of Tesla’s FSD has been measured by smoother turns, better lane centering, and more confident unprotected left turns. But as the system matures, a new, more subtle form of intelligence is emerging - one that shifts its attention to the human nuances of navigating roads. A new video posted to X shows the most recent FSD build, V13.2.9, demonstrating this in a remarkable real-world scenario.
Toll Booth Magic
In the video, a Model Y running FSD pulls up to a toll booth and smoothly comes to a stop, allowing the driver to handle payment. The car waits patiently as the driver interacts with the attendant. Then, at the precise moment the toll booth operator finishes the transaction and says “Have a great day”, the vehicle starts moving, proceeding through the booth - all without any input from the driver.
Notice that there’s no gate at this toll booth - the entire interaction happened naturally through FSD.
While the timing was perfect, FSD wasn’t listening to the conversation for clues (maybe one day, with Grok?). The reality, as explained by Ashok Elluswamy, Tesla’s VP of AI, is even more impressive:
It can see the transaction happening using the repeater & pillar cameras. Hence FSD proceeds on its own when the transaction is complete 😎
FSD is simply using the cameras on the side of the vehicle to watch the exchange between the driver and attendant. The neural network has been trained on enough data that it can visually recognize the conclusion of a transaction - the exchange of money or a card and the hands pulling away - and understands that this is the trigger to proceed.
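To make that idea concrete, here’s a purely hypothetical sketch of what a vision-gated “proceed” decision could look like. Tesla’s actual system is a single end-to-end neural network with no hand-written states or thresholds, so everything below - the states, the stand-in classifier, and the 0.95 confidence cutoff - is our own illustration, not Tesla’s implementation:

```python
# Illustrative sketch only: a vision-gated "proceed" decision at a toll
# booth. Tesla's real system is end-to-end learned; these states and the
# confidence threshold are hypothetical, for intuition.

from enum import Enum, auto

class TollBoothState(Enum):
    APPROACHING = auto()
    WAITING = auto()      # stopped at the booth, transaction in progress
    PROCEEDING = auto()

def transaction_complete_score(side_camera_frames) -> float:
    """Stand-in for a learned classifier that scores how likely it is the
    driver/attendant exchange has finished (card handed back, hands pulling
    away). Returns 0.0 here as a placeholder."""
    return 0.0

def step(state: TollBoothState, side_camera_frames, stopped: bool) -> TollBoothState:
    if state is TollBoothState.APPROACHING and stopped:
        return TollBoothState.WAITING
    if state is TollBoothState.WAITING:
        # Creep forward only once the model is confident the exchange is over.
        if transaction_complete_score(side_camera_frames) > 0.95:
            return TollBoothState.PROCEEDING
    return state
```

The point of the sketch is the gating logic: the car holds position until a visual signal - not a timer or a gate arm - indicates the interaction is finished.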
The Bigger Picture
This capability is far more significant than just a simple party trick. FSD is gaining the ability to perceive and navigate a world built for humans in the most human-like fashion possible.
If FSD can learn what a completed toll transaction looks like, that hints at the countless other complex scenarios it’ll be able to handle in the future. This same visual understanding could be applied to navigating a fast-food drive-thru, interacting with a parking garage attendant, passing through a security checkpoint, or boarding a ferry or vehicle train - all things we thought would come much later.
These human-focused interactions will become even more useful as FSD grows more confident in responding to humans on the road, such as when a police officer directs a vehicle in a certain direction or a construction worker flags traffic through a site. These are real-world events that happen every day, and it isn’t surprising to see FSD picking up on the subtleties and nuances of human interaction.
This isn’t a pre-programmed feature for a specific toll booth. It is an emergent capability of the end-to-end AI neural nets. By learning from millions of videos across billions of miles, FSD is beginning to build a true contextual understanding of the world. The best part - with a 10x context increase on its way, this understanding will grow rapidly and become far more powerful.
These small, subtle moments of intelligence are the necessary steps to a truly robust autonomous system that can handle the messy, unpredictable nature of human society.