Tesla FSD Beta 11.4.8 Release Notes Explained: Faster Decision Making, Improved Park Assist and More
Tesla's most recent FSD Beta, v11.4.7.3, was released almost a month ago on October 19th, but it now appears that Tesla is preparing another FSD Beta update.
Release notes for an alleged subsequent version, FSD Beta 11.4.8, have surfaced on Reddit. While their authenticity isn't confirmed, the release notes use the same syntax and language Tesla typically uses. Here's a breakdown of what may be included in Tesla's next FSD release.
Update: These release notes have since been confirmed, and the update, version 2023.27.11, is now rolling out to Tesla employees for further testing.
Simplified Autopilot Activation
Single-Tap Autopilot: The update reportedly allows drivers to activate Autopilot with just one press of the stalk, instead of the current two-press method. This could make engaging and disengaging Autopilot quicker and more straightforward.
This feature, along with separate audio for passengers using the rear display, recently made its way to production in update 2023.38.8, which adds some credibility to these leaked release notes. It could also mean that this version of FSD Beta is based on a more recent production branch than the current 2023.27 branch, which is starting to lag in terms of features.
Advanced Video Processing
New Video Module: A new video processing component has been introduced to improve vehicle detection, movement understanding (semantics), speed (velocity), and other attributes. According to the notes, it's a multi-layered, hierarchical module that caches intermediate computations, letting the system process visual information with less compute at any given moment and at lower latency.
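To illustrate the general idea (this is a minimal conceptual sketch, not Tesla's implementation, and all names below are hypothetical), here is what caching per-frame features for a temporal video stage might look like in Python:

```python
# Conceptual sketch only, not Tesla code: per-frame features are cached so the
# temporal "video" stage can reuse them instead of recomputing the whole window
# every timestep. All class and function names here are hypothetical.
from collections import OrderedDict

class CachedVideoModule:
    def __init__(self, window=8, cache_size=64):
        self.window = window            # number of frames aggregated per prediction
        self.cache_size = cache_size
        self.cache = OrderedDict()      # frame index -> cached per-frame feature

    def backbone(self, frame):
        # Stand-in for an expensive per-frame network forward pass.
        return sum(frame) / len(frame)

    def frame_feature(self, idx, frame):
        # Compute the per-frame feature once, then serve it from the cache.
        if idx not in self.cache:
            self.cache[idx] = self.backbone(frame)
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)   # evict the oldest entry
        return self.cache[idx]

    def predict(self, history):
        # Temporal stage: aggregate cached features over the last `window` frames,
        # so only the newest frame needs a fresh backbone pass.
        recent = history[-self.window:]
        feats = [self.frame_feature(idx, frame) for idx, frame in recent]
        return sum(feats) / len(feats)  # toy stand-in for the video-level output

# Toy usage: stream 30 fake "frames" through the module.
module = CachedVideoModule()
history = []
for idx in range(30):
    history.append((idx, [idx, idx + 1, idx + 2]))
    output = module.predict(history)
```

The point of the caching is that the heavy per-frame work is done once per frame rather than once per frame per prediction, which is one plausible way a video module could gain performance while lowering latency.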
Enhanced Object Detection
Better Object Detection: The system's ability to detect distant objects crossing its path is said to be improved by 6%. Additionally, vehicle detection has become more precise thanks to refreshed datasets and the new video module.
Improved Vehicle Interaction
Cut-In Vehicle Detection: The precision in detecting vehicles that cut into the Tesla's lane is reportedly improved by 15%. This is crucial for safer lane changes and merges.
Accuracy in Speed and Movement
Reduced Errors in Speed and Acceleration: The system reportedly makes fewer errors when judging other vehicles' speed (a 3% reduction) and acceleration (a 10% reduction), which should translate to more accurate responses in traffic.
Faster Decision-Making
Reduced Network Latency: The update claims to cut the latency of the vehicle semantics network by 15% thanks to the new video module architecture, allowing for quicker decisions without compromising performance.
Pedestrian and Cyclist Safety
Rotation Error Reduction: Errors in estimating how pedestrians and cyclists are turning or oriented are reportedly reduced by more than 8%, thanks to making better use of object kinematics when building Tesla's autolabeled datasets. This could improve interactions with these road users.
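As a very rough illustration of what "leveraging object kinematics" can mean (this is not Tesla's labeling pipeline, and every name below is hypothetical), here's a short Python sketch that derives a pedestrian's heading from the motion of its tracked positions instead of relying on noisy per-frame orientation measurements:

```python
# Illustrative sketch only: recover heading (rotation) from a tracked object's
# kinematics, i.e. its displacement between frames, smoothed over a small window.
import math

def headings_from_track(positions, smoothing=3):
    """Estimate per-step heading (radians) from a list of (x, y) positions."""
    headings = []
    for i in range(1, len(positions)):
        # Average displacement over a few frames to damp per-frame jitter.
        j = max(0, i - smoothing)
        dx = positions[i][0] - positions[j][0]
        dy = positions[i][1] - positions[j][1]
        headings.append(math.atan2(dy, dx))
    return headings

# Noisy track of a pedestrian walking roughly along +x (true heading ~0 rad).
track = [(t + 0.1 * (-1) ** t, 0.05 * (-1) ** t) for t in range(10)]
print([round(h, 2) for h in headings_from_track(track)])
```

Tying the heading to how the object actually moves is one way a labeling pipeline could produce more consistent rotation labels than frame-by-frame estimates alone.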
Enhanced Parking Assistance
Vision Park Assist Accuracy: The geometric accuracy of the Vision Park Assist system is improved by 16%, reportedly by leveraging 10x more data from Hardware 4 vehicles, tripling resolution, and improving the stability of measurements. It appears these improvements will apply to all vehicles without ultrasonic sensors, although the notes aren't explicit about that.
Smoother Lane Changes
Lane Change Accuracy: The accuracy of lane changes in response to path blockages is improved by 10%, likely leading to smoother and safer driving in complex traffic situations.
If accurate, these updates indicate a continued effort by Tesla to refine and improve FSD Beta, even as work continues on the next major release, FSD Beta v12. V12 is expected to use 'end-to-end' neural networks, which would be the first time neural networks are used to control the vehicle.
It's not clear when Tesla expects to release FSD v12, the version Musk has said will see FSD graduate from its beta status. Musk recently showed off v12 and its capabilities in a livestream on X.
The complete release notes that were shared on Reddit are below.
FSD Beta 11.4.8 Release Notes
-Added option to activate Autopilot with a single stalk depression, instead of two, to help simplify activation and disengagement.
-Introduced a new efficient video module to the vehicle detection, semantics, velocity, and attributes networks that allowed for increased performance at lower latency. This was achieved by creating a multi-layered, hierarchical video module that caches intermediate computations to dramatically reduce the amount of compute that happens at any particular time.
-Improved distant crossing object detections by an additional 6%, and improved the precision of vehicle detection by refreshing old datasets with better autolabeling and introducing the new video module.
-Improved the precision of cut-in vehicle detection by 15%, with additional data and the changes to the video architecture that improve performance and latency.
-Reduced vehicle velocity error by 3%, and reduced vehicle acceleration error by 10%, by improving autolabeled datasets, introducing the new video module, and aligning model training and inference more closely.
-Reduced the latency of the vehicle semantics network by 15% with the new video module architecture, at no cost to performance.
-Reduced the error of pedestrian and bicycle rotation by over 8% by leveraging object kinematics more extensively when jointly optimizing pedestrian and bicycle tracks in autolabeled datasets.
-Improved geometric accuracy of Vision Park Assist predictions by 16%, by leveraging 10x more HW4 data, tripling resolution, and increasing overall stability of measurements.
-Improved path blockage lane change accuracy by 10% due to updates to static object detection networks.