Tesla's latest V12 user interface will change the look and feel of some of the vehicle’s operations. This new interface, announced on X, is already partially available on the Cybertruck but will now roll out to the Model 3 and Model Y equipped with AMD Ryzen processors, according to Tesla. The new Model S and Model X will likely receive it as well, although possibly not at the same time. It integrates several new features and aesthetics that set new standards in vehicle interface design.
The new interface will become available in Tesla update 2024.14, which started rolling out to employees yesterday.
New Parked Vehicle Visualization
A standout feature of the V12 UI update, not to be confused with FSD V12, is the centralized vehicle visualization, which dominates the display while parked (video below). This design choice enhances visual appeal and improves functionality by placing critical vehicle performance metrics and status updates front and center. Similar to the layout in the Cybertruck, this feature provides drivers with a clear and immediate view of their vehicle's status and shows off the gorgeous 3D model.
A nice look at Tesla’s new parked visualization in Tesla update 2024.14.
My favorite part is the nice transition when you put the car into drive.
It immediately shows surrounding objects in the same view for a split second (photo in comments). pic.twitter.com/CIHXQGY8tW
There’s a new media player that’s larger and easier to use. By increasing the size of the media player, Tesla is now able to fit additional options that were hidden before, such as EQ and audio settings, the search icon and shuffle and repeat options.
The new media player appears while the vehicle is parked, while driving, or while the visualizations are in full-screen mode.
The media player is available on the Model 3 and Model Y and, according to Tesla, it’ll be limited to vehicles with the Ryzen-based infotainment center.
Tesla adds a new media player in update 2024.14
Not a Tesla App
Improved Navigation
The navigation system will see several improvements. You’ll now see a little trip progress bar that lets you visually see how far along you are on your route.
If your vehicle has a rear screen, such as the new Model 3 or the redesigned Model S and Model X, then trip information such as the ETA will also be displayed on the rear screen.
Tesla already has the ability to update your route if a faster one becomes available, and you can change some of these settings under Controls > Navigate. Now, however, the vehicle will show you when a faster route becomes available and give you a chance to cancel the updated route if needed.
Expanded Autopilot Visualizations
Tesla is now bringing its full-screen visualizations outside of North America, with some improvements as well. In addition to the visualizations going full screen, a small map will now be displayed in the corner.
That’s one of the issues with the full-screen FSD visualizations right now. If you make them full-screen, you lose your navigation map completely and only have the next turn available.
V12 view (Spring Update) 👉No Autopilot, EAP, or FSD active. 👉Brake lights are displayed. 👉Turn signals are displayed. 👉Gorgeous visuals ❤️Thank you, dear Tesla team pic.twitter.com/lmzTjKIq1x
This will be the first time full-screen visualizations are available outside North America. It’s not clear whether all the FSD visualizations, such as traffic lights and curbs, will be displayed, but Tesla has slowly been adding additional visualizations for non-FSD users, so there’s a chance this feature will finally display all FSD visualizations to users outside of North America.
It’s not immediately clear whether this feature will require Enhanced Autopilot (EAP) or FSD.
Update: The full-screen visualizations do not require FSD or EAP, but unfortunately the visualizations displayed are still the same ones as in previous updates, so it won’t display the surrounding environment and curbs.
Full-Screen Browser Support
With this update, Tesla will finally let you manually make the browser full screen. While this makes Tesla’s Theater apps a little redundant, since they simply load the website of the selected streaming service, you’ll now be able to stream any video service full screen, as long as the service supports Tesla’s browser.
As expected, the full-screen button will only be available while the vehicle is parked.
Checking Compatibility
Owners can verify their vehicle’s compatibility with the new full-screen visualizations while parked and driving by navigating to Controls > Software > Additional Vehicle Info on their Tesla’s touchscreen. This update is tailored for Tesla vehicles equipped with the AMD Ryzen processor.
For years, Tesla owners have been intrigued by the promise of a truly hands-off parking experience, one that goes beyond simply letting your car park itself when you arrive at the parking lot. Banish, sometimes also known as Banish Autopark or Reverse Summon, was envisioned as the ultimate parking convenience. Your Tesla would drop you off at the entrance to your destination, full chauffeur style, and then leave to find a suitable parking spot nearby. Coupled with Park Seek, your Tesla would drive through a parking lot to locate an open space and then park itself, waiting on standby.
Then, when you were ready, you would be able to Summon it to the entrance, arriving just as you do, for the smoothest autonomous experience. However, despite the initial excitement and focus from Elon back when V12.5 was supposed to include it, we’ve heard very little about Banish. It has remained a relatively elusive feature - the last time we saw anything on it was all the way back in October 2024, when it was alluded to in some Tesla app code.
So, what happened to Banish?
The Original Promise: A Smarter Way to Park
The concept of Banish was a logical extension of Tesla’s existing Summon and Autopark capabilities. Instead of just parking when a spot is identified by the driver, Banish and Park Seek were meant to give your Tesla more agency. After dropping off the occupants, your Tesla would leverage FSD and its autonomy to seek out an open parking spot, park itself, and return when summoned.
This functionality was often discussed in conjunction with improvements to Autopark and was highlighted as a step towards Tesla’s vision of a Robotaxi future. Interestingly, while the October 2024 FSD Roadmap mentioned Park, Unpark, and Reverse, it did not mention Banish. The absence of Banish as a milestone in the FSD Roadmaps leads us to believe that Tesla has put this feature on the back burner while it works on other FSD-related priorities.
Today’s FSD & Autopark: Capable, But Not Quite Banish
Fast forward to Spring 2025, and FSD V13 does exhibit some self-parking capability. As noted by many on social media, FSD can identify and maneuver into parking spots when arriving at a destination. However, this is generally not the proactive Park Seek envisioned for Banish. The current system requires the driver to be present, even if hands-off. It often identifies spots only as it directly approaches them, and its seeking behavior in a larger parking lot is extremely limited.
Users have also observed that while Tesla’s vision-based Autopark is often impressively accurate even on the massive Cybertruck, letting FSD nose-in to a spot can sometimes result in the car being poorly aligned or missing the lines entirely. This suggests that while your Tesla can park itself, the nuanced understanding and precision required for a truly reliable and Unsupervised Banish experience are still under development.
V13’s upcoming features indicate that it is supposed to provide additional support for parking garages and personal driveways, which haven’t been added yet. In fact, none of V13’s upcoming features have been realized yet - and it has been a while since a proper FSD update has come from Tesla.
The Underlying Tech is Ready
Interestingly, the core AI capabilities required for Banish and Park Seek are detailed extensively in a recently published Tesla Patent covering Autonomous and User Controlled Vehicle Summon to a Target. This patent describes generating an occupancy grid of the parking lot, then conducting path planning to the spot, and making decisions to safely navigate the lot at low speeds while keeping in mind pedestrians and other road users.
This indicates that Tesla has been working on the foundational AI for low-speed maneuvering in tight locations for quite some time. However, the challenge likely lies in achieving the necessary reliability, safety, and real-world robustness across an almost infinite variety of parking lot designs and in dynamic conditions.
What’s Next? Robotaxi.
The impending launch of Tesla’s Robotaxi Network in Austin in June brings the need for Banish-like capabilities into sharp focus. For a fleet of autonomous vehicles to operate efficiently, they must be able to manage their parking autonomously. A Robotaxi will need to drop off its passenger at the entrance to a location and then proceed to either its next pickup or autonomously find a parking or staging spot to await its next ride or even go back to base to charge.
It is plausible that a functional, robust version of Park Seek and Banish is being developed and tested internally as a component for Tesla’s Robotaxi launch and presumably what will be FSD Unsupervised. The initial rollout in Austin may just be the first real-world deployment of this tech from Tesla.
While Banish has yet to launch, the key components are in place and just need to be improved. The issue likely lies in safety, as parking lots account for one in five accidents in North America.
In all likelihood, Banish isn’t canceled but is being integrated into the FSD Unsupervised and Robotaxi feature set. That means a public rollout will likely depend on achieving a higher level of safety and confidence before Tesla is willing to let vehicles park themselves autonomously, or even while being supervised through the Tesla app.
For now, you’ll have to keep parking yourself, or let FSD or Autopark do the job. A convenient curbside drop-off isn’t in the cards yet, but given the necessity for Robotaxi, it’ll need to arrive eventually.
Tesla’s Summon, Smart Summon, and Actually Smart Summon features have long been a source of fascination (and occasional frustration), offering FSD users a glimpse into a future where your vehicle picks you up.
While we await further improvements to Actually Smart Summon to increase reliability and range, a recently published Tesla patent (US20250068166A1) provides an inside look into the intricate AI and sensor technologies that make these complex, low-speed autonomous maneuvers possible.
Notably, the list of inventors on this patent reads like a "who's who" of Tesla's AI and Autopilot leadership, including Elon Musk and former Director of AI Andrej Karpathy, among many others.
Though the patent is a continuation of earlier work, with some dates stretching back to 2019, it lays out the core logic that powers Tesla's vision-based system.
Step-by-Step Navigation
Tesla’s patent details a sophisticated system designed to allow a vehicle to autonomously navigate from its current position to a target location specified by a remote user. The remote user can also designate themselves as the target, even while they’re moving, and have the vehicle meet them.
This process begins with destination and target acquisition. The system is designed to receive a target geographical location from a user, for example, by dropping a pin via the Tesla app. Alternatively, it can use a “Come to Me” feature, where the car navigates to the user’s dynamic GPS location. In this same section, the patent also mentions the ability to handle altitude, which is crucial for multi-story parking garages, and even handle final orientations at arrival.
Occupancy Grid
At the heart of the system is the use of sensor data to perceive the environment. This is done through Tesla Vision, which builds a representation of the surrounding environment, similar to how FSD maps and builds a 3D world in which to navigate. A neural network processes this environment to determine drivable space and generate an “occupancy grid.” This grid maps the area around the vehicle, detailing drivable paths versus obstacles.
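The patent doesn't publish code, but the occupancy-grid concept it describes can be sketched in a few lines. This is a minimal illustration under assumed parameters - the grid resolution, size, and function names are ours, not Tesla's - showing how detected obstacle points get rasterized into free versus occupied cells around the vehicle:

```python
import numpy as np

# Illustrative occupancy grid (resolution and size are assumptions):
# a 2D array centered on the vehicle, 0 = drivable, 1 = occupied.
CELL_SIZE_M = 0.25   # assumed grid resolution in meters
GRID_SIZE = 200      # 200 x 200 cells, roughly a 50 m x 50 m area

def build_occupancy_grid(obstacle_points_m):
    """Mark cells containing detected obstacle points as occupied.

    obstacle_points_m: iterable of (x, y) offsets from the vehicle,
    in meters, as a vision network might output.
    """
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
    origin = GRID_SIZE // 2  # vehicle sits at the grid center
    for x, y in obstacle_points_m:
        col = origin + int(round(x / CELL_SIZE_M))
        row = origin + int(round(y / CELL_SIZE_M))
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            grid[row, col] = 1  # occupied
    return grid

def is_drivable(grid, x_m, y_m):
    """Query whether the cell at a given offset is free."""
    origin = GRID_SIZE // 2
    col = origin + int(round(x_m / CELL_SIZE_M))
    row = origin + int(round(y_m / CELL_SIZE_M))
    return grid[row, col] == 0

# Two obstacles near the car: one ahead-right, one behind
grid = build_occupancy_grid([(3.0, 1.0), (-2.5, 0.0)])
```

A path planner can then query `is_drivable` over candidate routes; the patent's saved-grid idea would amount to serializing this array when the car parks and reloading it on the next Summon.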
The patent still references the use of alternative sensors, like ultrasonic sensors and radar, even though Tesla does not use them anymore. The system can also load saved occupancy grids from when the car was parked to improve initial accuracy.
Path Planner
Once the environment is understood, a Path Planner Module calculates an intelligent and optimal path to the target. This isn’t just the shortest route; the system uses cost functions to evaluate potential paths, penalizing options with sharp turns, frequent forward/reverse changes, or a higher likelihood of encountering obstacles. The path planning also considers the vehicle’s specific operating dynamics, like its turning radius. Interestingly, the Path Planner Module can also handle multi-part destinations with waypoints - a feature that isn’t available yet on today’s version of Actually Smart Summon.
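The cost-function idea above can be illustrated with a toy example. The weights, path representation, and penalties here are our assumptions in the spirit of the patent's description, not Tesla's actual planner: a path is a list of waypoints, and cheaper paths avoid distance, sharp turns, and forward/reverse changes.

```python
import math

# Assumed weights for the illustrative cost terms
TURN_WEIGHT = 2.0         # penalty per radian of turning
GEAR_CHANGE_WEIGHT = 5.0  # penalty per forward/reverse switch

def heading(a, b):
    """Heading angle of the segment from waypoint a to waypoint b."""
    return math.atan2(b[1] - a[1], b[0] - a[0])

def path_cost(path):
    """Score a path of (x, y, gear) waypoints; gear is +1 fwd, -1 rev."""
    cost = 0.0
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        cost += math.dist(prev[:2], cur[:2])          # distance traveled
        turn = abs(heading(cur, nxt) - heading(prev, cur))
        cost += TURN_WEIGHT * min(turn, 2 * math.pi - turn)  # sharp turns
        if cur[2] != nxt[2]:
            cost += GEAR_CHANGE_WEIGHT                # gear change penalty
    cost += math.dist(path[-2][:2], path[-1][:2])     # final segment
    return cost

# A straight approach beats a zigzag that also swaps into reverse
straight = [(0, 0, 1), (0, 5, 1), (0, 10, 1)]
zigzag = [(0, 0, 1), (5, 5, 1), (0, 10, -1)]
best = min([straight, zigzag], key=path_cost)
```

In a real planner, many candidate paths through the occupancy grid would be generated and the lowest-cost collision-free one selected, with the vehicle's turning radius constraining which candidates are feasible at all.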
Generating Commands
Once the path is determined, the Vehicle Controller takes the path and translates it into commands for the vehicle actuators, which control the steering, acceleration, and braking to navigate the vehicle along the planned route. As the vehicle moves, the Path Planner continues to recalculate and adjust the path as required.
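Translating a path into actuator commands can be sketched with a simple heading controller. This is a toy, pure-pursuit-style illustration - the gains, speed, and interface are assumptions, not the patent's controller - showing how a waypoint becomes normalized steering and speed commands:

```python
import math

STEER_GAIN = 1.0        # assumed proportional steering gain
CRUISE_SPEED_MPS = 2.0  # assumed low-speed maneuvering target

def next_command(vehicle_pose, waypoint):
    """vehicle_pose: (x, y, heading_rad); waypoint: (x, y).

    Returns (steer, speed): steer normalized to [-1, 1],
    speed in m/s, zero once the waypoint is nearly reached.
    """
    x, y, yaw = vehicle_pose
    desired = math.atan2(waypoint[1] - y, waypoint[0] - x)
    # wrap the heading error into (-pi, pi]
    error = (desired - yaw + math.pi) % (2 * math.pi) - math.pi
    steer = max(-1.0, min(1.0, STEER_GAIN * error))
    # slow down as heading error grows; stop near the waypoint
    speed = CRUISE_SPEED_MPS * max(0.0, 1 - abs(error) / math.pi)
    if math.dist((x, y), waypoint) < 0.5:
        speed = 0.0
    return steer, speed
```

The continuous replanning the patent describes would sit around this loop: each cycle, the Path Planner updates the waypoint list from the latest occupancy grid, and the controller chases the next waypoint.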
Since Actually Smart Summon is nearly autonomous, with the exception of the user having to hold the Summon button (an app update hints at not having to hold the button soon), continuous safety checks are integral. This includes using the Path Planner and the occupancy grid to judge whether there is a chance of a collision, and overriding navigation if necessary. The patent also mentions the possibility of users remotely controlling aspects like steering and speed, but with continuous safety overrides in place. This is another cool little feature that Tesla has yet to include with today’s Actually Smart Summon - being able to control your full-size car like an RC car. This feature could be used for robotaxis if the vehicles get stuck and need to be teleoperated.
Reaching the Target
Upon reaching the destination, or the closest safe approximation (like the other side of a road), the system can trigger various actions. These include sending a notification to the user, turning on the interior or exterior lights, adjusting climate control, and unlocking or opening the doors. Another yet-to-arrive feature: the destination triggers in the patent also include correctly orienting the vehicle for charging if the destination is a charger. This part of the patent doesn’t reference wireless charging, but we’re sure there’s more to this than it seems.
A Glimpse Into the Future
While this patent has dates stretching back to 2019, its recent publication as a continuation application tells us that Tesla is still actively iterating on its Summon functionality. It details a comprehensive system, well thought out for complex, confined spaces, that will be key both for today’s convenience features like Actually Smart Summon and for Tesla’s upcoming robotaxis.
The depth of engineering described, from neural network-based perception to sophisticated path planning and safety protocols, explains the impressive capabilities of Tesla's Summon features when they work well and the inherent challenges in making them robust across an infinite variety of real-world scenarios. As Tesla continues to refine its AI, the foundational principles laid out in this patent will undoubtedly continue to evolve, actually bringing "Actually Smart Summon" to reality.