Tesla packed a lot into the 2024 Holiday Update, including more than 15 undocumented improvements. One of these was a stealthy performance boost to the YouTube app.
Several people have mentioned they’ve seen improved performance on YouTube since this year’s Holiday Update - and there’s an interesting reason why.
The improved YouTube performance in Tesla vehicles comes from an unexpected source: Tesla actually rolled back support for YouTube’s newer AV1 video codec. Instead, vehicles now default to the older VP9 codec.
While AV1 is highly efficient in terms of bandwidth, it requires considerably more processing power to decode and display videos. VP9, on the other hand, is less computationally demanding but uses more bandwidth to achieve the same video quality. This trade-off means smoother playback and better overall performance, even if it comes at the cost of slightly higher data usage.
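To make the trade-off concrete, here is a minimal sketch in Python. The relative numbers are hypothetical illustrations of the bandwidth-versus-compute balance, not measured figures from Tesla or YouTube.

```python
# Illustrative sketch of the codec trade-off described above.
# The numbers are hypothetical, not Tesla or YouTube measurements.

CODECS = {
    # codec: (relative bitrate at equal quality, relative CPU decode cost)
    "AV1": (0.70, 2.5),   # ~30% less bandwidth, but far costlier to decode in software
    "VP9": (1.00, 1.0),   # baseline bandwidth and decode cost
}

def playback_feasible(codec: str, cpu_budget: float) -> bool:
    """Return True if software decoding fits within the CPU budget."""
    _, decode_cost = CODECS[codec]
    return decode_cost <= cpu_budget

# An older infotainment unit with a tight CPU budget handles VP9 but not AV1.
cpu_budget = 1.2
for name in CODECS:
    print(name, "smooth" if playback_feasible(name, cpu_budget) else "stutters")
```

With a fixed CPU budget, the only lever left is the codec's decode cost, which is why falling back to VP9 trades a little bandwidth for much smoother playback.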
Intel Inside
The VP9 video codec that the YouTube app is now using is much easier to decode, making it less taxing on the vehicle’s processor. This change is particularly beneficial for Tesla vehicles with Intel processors, which previously struggled to stream video at just 720p. When using AV1, these vehicles often experienced stuttering, sometimes forcing the YouTube app to automatically downgrade playback to 480p.
With this update, Intel-based Teslas should now be able to stream at 1080p smoothly. Streaming at 1440p is also possible, although occasional stutters still occur as the system struggles to keep up with decoding.
Intel-based vehicles are the big winners with this change, but this appears to affect AMD Ryzen-based infotainment units as well, providing even smoother playback.
Chromium Web App
Tesla’s Theater apps aren’t native applications; instead, they run as chromeless web apps inside Chromium, the open-source browser built into Tesla vehicles. Although this works quite well, there is a severe limitation: Chromium’s video hardware acceleration isn’t supported on Linux, the operating system Tesla’s software runs on.
As a result, Tesla vehicles rely on software decoding instead of hardware decoding, which would otherwise handle video playback far more efficiently. A potential solution could be for Tesla to transition away from Chromium-based web apps in favor of a Mozilla Firefox-based browser, as Firefox does support hardware acceleration on Linux. This switch could also open the door to better streaming performance and the possibility of expanding Tesla’s in-car entertainment options.
However, Tesla’s choice of Chromium likely stems from Digital Rights Management (DRM) requirements for streaming services like Disney+ and Netflix, which rely on DRM-enabled playback. Firefox on Linux has had inconsistent support for DRM due to codec availability and variations in operating system versions.
We’re hopeful that Tesla will either adopt Firefox or develop a fully native application to improve video streaming, rather than continuing with the current web-based Tesla Theater. This shift could also pave the way for additional in-car applications built on Tesla’s native Linux environment—perhaps even reviving the long-rumored Tesla App Store.
Regardless, this update is a welcome improvement, particularly for YouTube, which remains one of the most widely used Theater Mode apps due to its accessibility, free content, and mix of short and long-form videos. It remains to be seen whether similar improvements are made for Netflix, Disney+, or other streaming platforms.
If you’ve noticed improved performance in Theater Mode, now you know why.
Thanks to Brian Zheng for making us aware of these changes.
For years, Tesla owners have been intrigued by the promise of a truly hands-off parking experience, one that goes beyond simply letting your car park itself when you arrive at the parking lot. Banish, sometimes also known as Banish Autopark or Reverse Summon, was envisioned as the ultimate parking convenience. Your Tesla would drop you off at the entrance to your destination, full chauffeur style, and then leave to find a suitable parking spot nearby. Coupled with Park Seek, your Tesla would drive through a parking lot to locate an open space and then park itself, waiting on standby.
Then, when you were ready, you would be able to Summon it to the entrance, showing up right as you do, for the smoothest autonomous experience. However, despite the initial excitement and focus from Elon back when V12.5 was supposed to include it, we’ve heard very little about Banish. It has remained a relatively elusive feature - and the last time we saw anything on it was all the way back in October 2024, when it was alluded to in some Tesla app code.
So, what happened to Banish?
The Original Promise: A Smarter Way to Park
The concept of Banish was a logical extension of Tesla’s existing Summon and Autopark capabilities. Instead of just parking when a spot is identified by the driver, Banish and Park Seek were meant to give your Tesla more agency. After dropping off the occupants, your Tesla would leverage FSD to seek out an open parking spot, park itself, and wait on standby until summoned.
This functionality was often discussed in conjunction with improvements to Autopark and was highlighted as a step towards Tesla’s vision of a Robotaxi future. Interestingly, while the October 2024 FSD Roadmap mentioned Park, Unpark, and Reverse, it did not mention Banish. The absence of Banish as a milestone in the FSD Roadmaps leads us to believe that Tesla has put this feature on the back burner while it works on other FSD-related priorities.
Today’s FSD & Autopark: Capable, But Not Quite Banish
Fast forward to Spring 2025, and FSD V13 does exhibit some self-parking capability. As noted by many on social media, FSD can identify and maneuver into parking spots when arriving at a destination. However, this is generally not the proactive Park Seek envisioned for Banish. The current system requires the driver to be present, even if hands-off. It often identifies spots only as it directly approaches them, and its seeking behavior in a larger parking lot is extremely limited.
Users have also observed that while Tesla’s vision-based Autopark is often impressively accurate even on the massive Cybertruck, letting FSD nose-in to a spot can sometimes result in the car being poorly aligned or missing the lines entirely. This suggests that while your Tesla can park itself, the nuanced understanding and precision required for a truly reliable and Unsupervised Banish experience are still under development.
Tesla’s announced V13 features include additional support for personal garages, parking garages, and driveways, but these haven’t arrived yet. In fact, none of V13’s announced features have shipped, and it has been a while since a proper FSD update has come from Tesla.
The Underlying Tech is Ready
Interestingly, the core AI capabilities required for Banish and Park Seek are detailed extensively in a recently published Tesla Patent covering Autonomous and User Controlled Vehicle Summon to a Target. This patent describes generating an occupancy grid of the parking lot, then conducting path planning to the spot, and making decisions to safely navigate the lot at low speeds while keeping in mind pedestrians and other road users.
This indicates that Tesla has been working on the foundational AI for low-speed maneuvering in tight locations for quite some time. However, the challenge likely lies in achieving the necessary reliability, safety, and real-world robustness across an almost infinite variety of parking lot designs and in dynamic conditions.
What’s Next? Robotaxi.
The impending launch of Tesla’s Robotaxi Network in Austin in June brings the need for Banish-like capabilities into sharp focus. For a fleet of autonomous vehicles to operate efficiently, they must be able to manage their parking autonomously. A Robotaxi will need to drop off its passenger at the entrance to a location and then proceed to either its next pickup or autonomously find a parking or staging spot to await its next ride or even go back to base to charge.
It is plausible that a functional, robust version of Park Seek and Banish is being developed and tested internally as a component for Tesla’s Robotaxi launch and presumably what will be FSD Unsupervised. The initial rollout in Austin may just be the first real-world deployment of this tech from Tesla.
While Banish has yet to launch, the key components are in place and just need to be refined. The holdup likely lies in safety, as parking lots account for roughly one in five accidents in North America.
In all likelihood, Banish isn’t canceled but is being folded into FSD Unsupervised and the Robotaxi feature set. That means a public rollout will likely depend on achieving a higher level of safety and confidence before Tesla is willing to let vehicles park themselves autonomously, or even while being supervised through the Tesla app.
For now, you’ll have to keep parking yourself, or let FSD or Autopark do the job. A convenient curbside drop-off isn’t in the cards yet, but given its necessity for Robotaxi, it’ll need to arrive eventually.
Tesla’s Summon, Smart Summon, and Actually Smart Summon features have long been a source of fascination (and occasional frustration), offering FSD users a glimpse into a future where your vehicle picks you up.
While we await further improvements to Actually Smart Summon to increase reliability and range, a recently published Tesla patent (US20250068166A1) provides an inside look into the intricate AI and sensor technologies that make these complex, low-speed autonomous maneuvers possible.
Notably, the list of inventors on this patent reads like a "who's who" of Tesla's AI and Autopilot leadership, including Elon Musk and former Director of AI Andrej Karpathy, among many others.
Though the patent is a continuation of earlier work, with some dates stretching back to 2019, it lays out the core logic that powers Tesla's vision-based system.
Step-by-Step Navigation
Tesla’s patent details a sophisticated system designed to allow a vehicle to autonomously navigate from its current position to a target location specified by a remote user. The remote user can also designate themselves as the target, even while they’re moving, and have the vehicle meet them.
This process begins with destination and target acquisition. The system is designed to receive a target geographical location from a user, for example, by dropping a pin via the Tesla app. Alternatively, it can use a “Come to Me” feature, where the car navigates to the user’s dynamic GPS location. In this same section, the patent also mentions the ability to handle altitude, which is crucial for multi-story parking garages, and even handle final orientations at arrival.
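As a rough illustration, the target-acquisition inputs the patent describes (a fixed dropped pin or a moving user, plus altitude and a final orientation) could be modeled like this. The field names and structure are our own assumptions for the sketch, not Tesla’s actual interface.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of the patent's target-acquisition inputs:
# a dropped pin or a user's live GPS fix ("Come to Me"), optionally
# with altitude (multi-story garages) and a final arrival orientation.

@dataclass
class SummonTarget:
    lat: float
    lon: float
    altitude_m: Optional[float] = None         # garage level, if any
    final_heading_deg: Optional[float] = None  # desired arrival orientation
    follow_user: bool = False                  # True for "Come to Me" mode

def update_target(target: SummonTarget, user_fix: tuple) -> SummonTarget:
    """In Come-to-Me mode, re-aim at the user's latest GPS fix."""
    if target.follow_user:
        target.lat, target.lon = user_fix
    return target
```

In Come-to-Me mode the target is simply refreshed each time a new GPS fix arrives, which is what lets the car chase a moving user rather than a static pin.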
Occupancy Grid
At the heart of the system is the use of sensor data to perceive the environment. This is done through Tesla Vision, which builds a representation of the surrounding environment, similar to how FSD maps and builds a 3D world in which to navigate. A neural network processes this environment to determine drivable space and generate an “occupancy grid.” This grid maps the area around the vehicle, detailing drivable paths versus obstacles.
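A minimal sketch of such a grid, assuming a 2D bird’s-eye layout: in the patent, cell values come from neural-network perception of camera data, whereas here obstacles are hand-placed at hypothetical positions purely for illustration.

```python
# Minimal occupancy-grid sketch: 0.0 = drivable, 1.0 = blocked.
# Real systems fill these cells from neural-network perception;
# the obstacle positions below are hypothetical.

ROWS, COLS = 10, 10
occupancy = [[0.0] * COLS for _ in range(ROWS)]

# Hand-place a parked car and a curb as occupied cells.
for r in range(2, 4):
    for c in range(5, 7):
        occupancy[r][c] = 1.0   # parked car
for c in range(COLS):
    occupancy[9][c] = 1.0       # curb along one edge

def is_drivable(row: int, col: int, threshold: float = 0.5) -> bool:
    """A cell is drivable if its occupancy probability is below the threshold."""
    return occupancy[row][col] < threshold
```

The planner then only ever considers moves through cells that pass this drivability check.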
The patent still references the use of alternative sensors, like ultrasonic sensors and radar, even though Tesla does not use them anymore. The system can also load saved occupancy grids from when the car was parked to improve initial accuracy.
Path Planner
Once the environment is understood, a Path Planner Module calculates an intelligent and optimal path to the target. This isn’t just the shortest route; the system uses cost functions to evaluate potential paths, penalizing options with sharp turns, frequent forward/reverse changes, or a higher likelihood of encountering obstacles. The path planning also considers the vehicle’s specific operating dynamics, like its turning radius. Interestingly, the Path Planner Module can also handle multi-part destinations with waypoints - a feature that isn’t available yet on today’s version of Actually Smart Summon.
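A hedged sketch of that cost-based scoring: penalize sharp turns, forward/reverse changes, and obstacle risk, then pick the cheapest candidate. The weights and the step format are our own assumptions, not values from the patent.

```python
# Sketch of cost-based path scoring as described above. Weights and
# the (turn_angle, gear, obstacle_risk) step format are hypothetical.

def path_cost(path, w_turn=1.0, w_reverse=2.0, w_obstacle=5.0):
    """Score a candidate path; lower is better.

    `path` is a list of (turn_angle_deg, gear, obstacle_risk) steps,
    where gear is "D" or "R" and obstacle_risk is in [0, 1].
    """
    cost = 0.0
    prev_gear = "D"
    for turn_angle, gear, obstacle_risk in path:
        cost += w_turn * abs(turn_angle) / 90   # sharper turns cost more
        if gear != prev_gear:
            cost += w_reverse                   # penalize forward/reverse changes
        cost += w_obstacle * obstacle_risk      # penalize risky cells
        prev_gear = gear
    return cost

# A gentle straight-through path beats a three-point turn near an obstacle.
straight = [(0, "D", 0.0), (5, "D", 0.0)]
threepoint = [(80, "D", 0.2), (70, "R", 0.2), (60, "D", 0.1)]
assert path_cost(straight) < path_cost(threepoint)
```

Tuning the weights is what lets a planner prefer slightly longer but smoother routes, exactly the behavior the patent describes.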
Generating Commands
Once the path is determined, the Vehicle Controller takes the path and translates it into commands for the vehicle actuators, which control the steering, acceleration, and braking to navigate the vehicle along the planned route. As the vehicle moves, the Path Planner continues to recalculate and adjust the path as required.
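A simplified sketch of that translation step, turning the next waypoint into a steering and speed command. The command format, clamp limit, and gains are hypothetical, not Tesla’s actual actuator interface.

```python
import math

# Hypothetical controller sketch: steer toward the next waypoint and
# slow down as the vehicle approaches it.

def next_command(pose, waypoint, max_speed=2.0):
    """Compute a (steering, speed) command toward the next waypoint.

    `pose` is (x, y, heading_rad); `waypoint` is (x, y).
    """
    x, y, heading = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    target_heading = math.atan2(dy, dx)
    # Heading error, wrapped to [-pi, pi]
    error = (target_heading - heading + math.pi) % (2 * math.pi) - math.pi
    steering = max(-0.5, min(0.5, error))  # clamp to a hypothetical steering limit
    distance = math.hypot(dx, dy)
    speed = min(max_speed, distance)       # slow down near the waypoint
    return steering, speed
```

Running this in a loop, with the Path Planner feeding in fresh waypoints as the occupancy grid updates, gives the continuous recalculate-and-adjust behavior described above.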
Since Actually Smart Summon is nearly autonomous, with the exception of the user having to hold the Summon button (a recent app update hints that holding the button may soon no longer be required), continuous safety checks are integral. This includes using the Path Planner and the occupancy grid to judge whether a collision is likely, and overriding navigation if necessary. The patent also mentions the possibility of users remotely controlling aspects like steering and speed, but with continuous safety overrides in place. This is another cool little feature Tesla has yet to include in today’s Actually Smart Summon: being able to control your full-size car like an RC car. It could also prove useful for robotaxis that get stuck and need to be tele-operated.
Reaching the Target
Upon reaching the destination, or the closest safe approximation (like the other side of a road), the system can trigger various actions. These include sending a notification to the user, turning on the interior or exterior lights, adjusting climate control, and unlocking or opening the doors. Another yet-to-arrive feature: the patent’s destination triggers also include correctly orienting the vehicle for charging if the destination is a charger. This part of the patent doesn’t reference wireless charging, but we suspect there’s more to this than it seems.
A Glimpse Into the Future
While this patent has dates stretching back to 2019, its recent publication as a continued application tells us that Tesla is still actively iterating on its Summon functionality. It details a comprehensive, well-thought-out system for complex, confined spaces, which will be key both for today’s convenience features like Actually Smart Summon and for Tesla’s upcoming robotaxis.
The depth of engineering described, from neural network-based perception to sophisticated path planning and safety protocols, explains the impressive capabilities of Tesla's Summon features when they work well and the inherent challenges in making them robust across an infinite variety of real-world scenarios. As Tesla continues to refine its AI, the foundational principles laid out in this patent will undoubtedly continue to evolve, actually bringing "Actually Smart Summon" to reality.