Full Self-Driving (Beta) Suspension
We have reset the "Forced Autopilot Disengagements" counter on your vehicle to 0.
For maximum safety and accountability, use of Full Self-Driving (Beta) will be suspended if improper usage is detected. Improper usage is when you, or another driver of your vehicle, receive five 'Forced Autopilot Disengagements'. A disengagement is when the Autopilot system disengages for the remainder of a trip after the driver receives several audio and visual warnings for inattentiveness. Driver-initiated disengagements do not count as improper usage; they are an expected part of supervising the system. Keep your hands on the wheel and remain attentive at all times. Use of any hand-held devices while using Autopilot is not allowed.
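The suspension policy above reduces to a small piece of counting logic. The sketch below is a hypothetical reconstruction for illustration only (the class and method names are invented; only the five-strike threshold and the driver-initiated exemption come from the notes):

```python
FORCED_DISENGAGEMENT_LIMIT = 5  # per the release notes


class FsdAccessPolicy:
    """Illustrative sketch of the FSD Beta suspension rule; not Tesla code."""

    def __init__(self):
        self.forced_disengagements = 0

    def record_forced_disengagement(self):
        # Autopilot disengaged for the rest of a trip after repeated
        # inattentiveness warnings: counts toward suspension.
        self.forced_disengagements += 1

    def record_driver_disengagement(self):
        # Driver-initiated disengagements are expected and do not count.
        pass

    @property
    def suspended(self):
        return self.forced_disengagements >= FORCED_DISENGAGEMENT_LIMIT
```

Note that only forced disengagements advance the counter; a driver taking over voluntarily never moves the vehicle toward suspension.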
FSD Beta v10.69.2 Release Notes
- Added a new "deep lane guidance" module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.
- Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
- Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high-speed cross traffic ("Chuck Cook style" unprotected left turns). This was done by allowing optimizable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high-speed objects. Also improved lateral profile approaching such safety regions to allow for a better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.
- Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
- Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.
- Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.
- Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.
- Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.
- Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.
- Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.
- Improved recall of animals by 34% by doubling the size of the auto-labeled training set.
- Enabled creeping for visibility at any intersection where objects might cross ego's path, regardless of presence of traffic controls.
- Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.
- Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.
- Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.
- Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.
- Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego's lane, especially in intersections or cut-in scenarios.
- Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.
- Reduced latency when starting from a stop by accounting for lead vehicle jerk.
- Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
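The red-light-runner item above can be illustrated with basic kinematics: under a constant-deceleration model, stopping from speed v within distance d requires a deceleration of v²/(2d); if that exceeds any plausible braking profile, the vehicle is unlikely to stop. A minimal sketch, assuming a constant-deceleration model and an invented comfort threshold (the function name and 4 m/s² limit are illustrative, not Tesla's values):

```python
def likely_red_light_runner(speed_mps, distance_to_line_m,
                            max_plausible_decel=4.0):
    """Flag a vehicle as a likely red-light runner when stopping before
    the line would require braking harder than a plausible braking
    profile. Constant-deceleration model: a_req = v^2 / (2 d)."""
    if distance_to_line_m <= 0:
        # Already at or past the stop line while still moving.
        return True
    required_decel = speed_mps ** 2 / (2.0 * distance_to_line_m)
    return required_decel > max_plausible_decel
```

For example, a car doing 20 m/s with 30 m to the line would need about 6.7 m/s² of braking, well above the illustrative threshold, so it is flagged early; the same car at 10 m/s needs only about 1.7 m/s² and is not.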
Press the "Video Record" button on the top bar UI to share your feedback. When pressed, your vehicle's external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.
Full Self-Driving (Beta)
Full Self-Driving is in early limited access Beta and must be used with additional caution. It may do the wrong thing at the worst time, so you must always keep your hands on the wheel and pay extra attention to the road. Do not become complacent. When Full Self-Driving is enabled, your vehicle will make lane changes off highway, select forks to follow your navigation route, navigate around other vehicles and objects, and make left and right turns. Use Full Self-Driving in limited Beta only if you pay constant attention to the road, and be prepared to act immediately, especially around blind corners, crossing intersections, and in narrow driving situations.
Your vehicle is running on Tesla Vision! Note that Tesla Vision also includes some temporary limitations: follow distance settings are limited to 2-7, and Autopilot's top speed is 85 mph.
The cabin camera above your rearview mirror can now determine driver inattentiveness and provide you with audible alerts, to remind you to keep your eyes on the road when Autopilot is engaged. Camera images do not leave the vehicle itself, which means the system cannot save or transmit information unless you enable data sharing. To change your data settings, tap Controls > Software > Data Sharing on your car's touchscreen. Cabin camera does not perform facial recognition or any other method of identity verification.
Seat Belt System Enhancement
This enhancement builds upon your vehicle's superior crash protection, based upon regulatory and industry-standard crash testing, by now using Tesla Vision to help offer some of the most cutting-edge seat belt pretensioner performance in the event of a frontal crash. Your seat belts will now begin to tighten and protect properly restrained occupants earlier in a wider array of frontal crashes.
Reset the learned tire settings directly after a tire rotation, swap, or replacement to improve your driving experience. To reset, tap Controls > Service > Wheel & Tire Configuration > Tires.
Tesla Adaptive Suspension
Tesla Adaptive Suspension will now adjust ride height for an upcoming rough road section. This adjustment may occur at various locations, subject to availability, as the vehicle downloads rough road map data generated by Tesla cars. The instrument cluster will continue to indicate when the suspension is raised for comfort. To enable this feature, tap Controls > Suspension > Adaptive Suspension Damping, and select the Comfort or Auto setting.
Customize how your car appears on the touchscreen and mobile app with the Car Colorizer. Change the color of your car's exterior by tapping Controls > Software > Colorizer icon, or using Colorizer in the ToyBox.
Improvements to Energy Prediction
Tesla is making further improvements to its navigation energy prediction. Tesla already took HVAC use, speed, outside temperature, and more into account when predicting its vehicles' range. More recently, in update 2022.16, Tesla added further parameters such as wind speed and direction, and humidity.
However, with this update, Tesla is bringing it to a whole new level. Your vehicle will now predict its range more accurately by also considering the number of passengers in the vehicle, the vehicle's tire pressures, the amount of current being drawn from the USB ports, and more.
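One plausible way to combine such factors is to multiply the base consumption by per-factor adjustments and add accessory loads spread over distance. The sketch below is purely illustrative: the parameter names follow the notes (passengers, tire pressure, USB draw), but every coefficient is invented for demonstration and is not Tesla's model:

```python
def predicted_energy_wh_per_km(base_wh_per_km, passengers,
                               tire_pressure_psi, usb_draw_w,
                               avg_speed_kmh, recommended_psi=42.0):
    """Hypothetical energy-consumption estimate; all coefficients invented."""
    # Extra occupant mass: illustrative ~1.5% more consumption per passenger.
    passenger_factor = 1.0 + 0.015 * passengers
    # Underinflated tires raise rolling resistance: illustrative 0.3% per
    # psi below the recommended pressure (overinflation ignored here).
    underinflation = max(0.0, recommended_psi - tire_pressure_psi)
    tire_factor = 1.0 + 0.003 * underinflation
    # Accessory loads (W) become Wh/km when spread over speed (km/h).
    accessory_wh_per_km = usb_draw_w / avg_speed_kmh
    return base_wh_per_km * passenger_factor * tire_factor + accessory_wh_per_km
```

The design point is that each factor perturbs a measured baseline rather than being predicted from scratch, which is why small inputs like USB current can still be folded into the range estimate cheaply.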
Automatic Supercharger Rerouting
If you're navigating to a Supercharger and it suddenly becomes more congested before you arrive, Tesla will now calculate whether there are any nearby Superchargers that may be less congested.
If Tesla believes that it can reduce your total travel time by navigating to a less congested charger, it will reroute you to a Supercharger that's less busy.
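The rerouting decision boils down to minimizing total travel time, i.e. drive time plus expected wait. A minimal sketch under assumed inputs (the dict fields and the `expected_wait_min` congestion figure are hypothetical, not a real Tesla data structure):

```python
def total_time_min(site, drive_time_min):
    # Cost of a charging stop: driving there plus the expected wait
    # from congestion data (assumed to be available per site).
    return drive_time_min[site["name"]] + site["expected_wait_min"]


def choose_supercharger(current, alternatives, drive_time_min):
    """Return the Supercharger minimizing drive time + expected wait.

    Illustrative only: keeps the current destination when no
    alternative actually reduces total travel time."""
    candidates = [current] + alternatives
    return min(candidates, key=lambda s: total_time_min(s, drive_time_min))
```

For example, with a 25-minute expected wait at the current site 10 minutes away, a wait-free site 20 minutes away wins (20 min total versus 35 min), so the navigation would reroute.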
The Bluetooth menu has been updated to make it more obvious which device is connected and new device icons have been added.
Previously, Tesla showed a Bluetooth logo to the right of the device name. The logo would be blue if the device was connected and gray if it wasn't.
If a device is connected, it will now display the text 'Connected' underneath the device name, along with a green dot.
With the introduction of v11, Tesla added HomeLink controls that appear when you're geographically close to one or more of your HomeLink devices.
The buttons feature a label underneath them that, up until this update, displayed whether tapping the button would cancel the auto-open feature or activate the HomeLink device.
With this update Tesla has swapped these labels. The Activate or Cancel (if auto-open is turned on) text will now appear inside the button, and the label given to the device, such as 'Left Garage', will now appear underneath the button.
Battery at Arrival
Tesla's navigation system will once again display your estimated battery level upon arrival at your destination, shown next to your estimated time of arrival.
Regeneration / Acceleration Line
This is an undocumented change in this release.
The line directly above the speedometer reading in a Model 3 and Model Y shows the amount of regenerative braking (green) or acceleration (black) that is occurring. The center of the line is neutral, where there is no acceleration or regenerative braking occurring.
The further the line grows to the left, the greater the amount of regenerative braking is taking place, and the more it goes to the right, the greater the acceleration.
With this update the regeneration line will now also show when the physical brakes are being applied. The amount of physical braking being used will appear as a gray line after the green regen line.
The physical brake line is only shown when the vehicle is in Autopilot.
The regen/acceleration line is now also thicker, making it easier to see.
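The display logic described above (neutral center, regen growing left, acceleration growing right) can be sketched as a simple text rendering. Everything here is invented for illustration: the power limits, glyphs, and bar width are hypothetical, and this is not the actual UI code:

```python
def power_bar(power_kw, max_regen_kw=60.0, max_accel_kw=300.0, width=20):
    """Render the speedometer line as text: 'g' (green regen) grows to
    the left of the '|' center, 'k' (black acceleration) to the right.
    Negative power = regenerative braking; all limits are illustrative."""
    half = width // 2
    if power_kw < 0:  # regenerative braking: fill leftward from center
        n = min(half, round(-power_kw / max_regen_kw * half))
        return " " * (half - n) + "g" * n + "|" + " " * half
    # Acceleration: fill rightward from center
    n = min(half, round(power_kw / max_accel_kw * half))
    return " " * half + "|" + "k" * n + " " * (half - n)
```

At zero power only the neutral center mark shows; at the assumed regen limit the left half is full, mirroring how the real line grows further left with stronger regenerative braking.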
Heater & Low Voltage Battery
You can now view additional information about your car by tapping Controls > Software > Additional vehicle information.
The list of information will now include the type of low-voltage battery installed and whether your vehicle has a heat pump.