With software update 2021.32.22, Tesla introduced Safety Score, a feature that assesses your driving behavior. To generate the score, Tesla uses a model similar to the one it uses to determine Tesla Insurance rates.
By opting in to Safety Score, you authorize Tesla to collect certain driving metrics, which it uses to assess your risk across five major categories:
Forward Collision Warnings
Hard Braking
Aggressive Turning
Unsafe Following
Forced Autopilot Disengagement
Your Safety Score then appears in the latest Tesla app (version 4.1), which is currently limited to iPhone users, with Android support coming soon.
Your Safety Score is rated from 0 to 100 and the app does an excellent job breaking down the score for each category and comparing it to the Tesla fleet median.
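Tesla documents the exact formula on its Safety Score page: the five factors are combined into a Predicted Collision Frequency (PCF), which is then mapped onto the 0 to 100 scale. The sketch below illustrates that general shape only; the baseline, multipliers, and clamping here are invented for demonstration and are not Tesla’s published coefficients.

```python
# Illustrative sketch only. The baseline and multipliers below are
# invented for demonstration; they are NOT Tesla's published
# Safety Score coefficients.

def illustrative_safety_score(
    collision_warnings: float,   # Forward Collision Warnings per 1,000 miles
    hard_braking: float,         # % of braking time above a g-force threshold
    aggressive_turning: float,   # % of turning time above a g-force threshold
    unsafe_following: float,     # % of time following too closely
    forced_disengagements: int,  # Forced Autopilot Disengagements
) -> float:
    # Hypothetical risk model: each factor scales a baseline collision risk.
    risk = 0.667  # made-up baseline for a flawless driving period
    risk *= 1.01 ** collision_warnings
    risk *= 1.10 ** hard_braking
    risk *= 1.02 ** aggressive_turning
    risk *= 1.001 ** unsafe_following
    risk *= 1.30 ** forced_disengagements
    # Map the risk estimate onto the 0-100 scale and clamp it.
    return max(0.0, min(100.0, 115.0 - 22.5 * risk))

print(illustrative_safety_score(0, 0, 0, 0, 0))      # ≈ 100 for a clean record
print(illustrative_safety_score(2, 1.5, 0.5, 3, 0))  # noticeably lower
```

In this sketch, a Forced Autopilot Disengagement carries the largest multiplier, reflecting that a forced disengagement is the most serious of the five events.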
You can also drill down into an individual day or even a specific trip to see which drives affected your score the most. Tesla will even show you which driver profile was used for each trip.
Tesla is currently using Safety Score to select the next batch of FSD Beta testers, but it’s an excellent tool that should be available to everyone.
Safety Score is useful if you’re looking to improve your driving. It can also help you monitor new drivers, or see how your vehicle was handled by others, such as a valet or renters on car-sharing services like Turo.
Safety Score is currently limited to vehicles running update 2021.32.22 or later and to owners in the US. You must opt in through the Request FSD Beta button in the Autopilot menu.
Expect Tesla to continue improving this feature and expanding it to other regions. I would also expect Tesla to use this data to further improve their FSD Beta.
Tesla has always embraced whimsy in its software, packing it with playful Easter eggs and surprises. From transforming the on-screen car into James Bond’s submarine to the ever-entertaining Emissions Testing Mode and the fan-favorite Rainbow Road, these hidden features have become a signature part of Tesla’s software.
Of course, launching a new product like Robotaxi wouldn’t be complete without a fun little Easter egg of its own. The end-of-ride screen in the Robotaxi app presents a familiar option: “Leave a tip.”
Anyone pleased with their Robotaxi ride may be tempted to leave a tip. However, instead of a payment screen, tapping the button brings up the familiar Tesla hedgehog alongside a message that simply states, “Just kidding.”
While it’s a fun prank, it’s also a nod to the economic advantage Tesla wants to reinforce with an autonomous Robotaxi Network: without a driver, there is simply no need to tip.
Over the last few days, we’ve seen some exceptionally smooth performance from the latest version of FSD on Tesla’s Robotaxi Network pilot. However, the entire purpose of an early access program with Safety Monitors is to identify and learn from edge cases.
This week, the public saw the first recorded instance of a Safety Monitor intervention, providing a first look at how they’re expected to stop the vehicle.
The event involved a complex, low-speed interaction with a reversing UPS truck. The Safety Monitor intervened to stop the Robotaxi immediately, potentially avoiding a collision with the delivery truck. Let’s break down this textbook case of real-world unpredictability.
The Intervention
In a video from a ride in Austin, a Robotaxi is preparing to pull over to its destination on the right side of the road, with its turn signal active. Ahead, a UPS truck comes to a stop. As the Model Y begins turning into the spot, the UPS truck, seemingly without signaling, starts to reverse. At this point, the Safety Monitor stepped in and pressed the In Lane Stop button on the main display, bringing the Robotaxi to an immediate halt.
This is precisely why Tesla has employed Safety Monitors in this initial pilot. They are there to proactively manage ambiguous situations where the intentions of other drivers are unclear. The system worked as designed, but it raises a key question: What would FSD have done on its own? It’s not clear whether the vehicle saw the truck backing up, or how it would have reacted once it detected it. It’s also unclear whether the UPS driver recognized that the Robotaxi was pulling into the same spot at the same time.
It’s possible this wouldn’t have resulted in a collision at all, but the Safety Monitor did the right thing by stepping in to prevent a potential one, even at low speed. Any collision just a few days after the Robotaxi Network’s launch could create complications for Tesla.
Who Would Be At Fault?
This scenario is a classic edge case. It involves unclear right-of-way and unpredictable human behavior. Even for human drivers, the right-of-way here is complicated. While a reversing vehicle often bears responsibility, a forward-moving vehicle must also take precautions to avoid a collision. This legal and practical gray area is what makes these scenarios so challenging for AI to navigate.
Would the Robotaxi have continued, assuming the reversing truck would stop?
Or would it have identified the potential conflict and used its own ability to stop and reverse?
Without the intervention, it’s impossible to say for sure. However, crucial context comes from a different clip involving, surprisingly, another UPS delivery truck.
A Tale of Two Trucks
In a separate video posted on X, a second Robotaxi encounters a remarkably similar situation. In that instance, with another UPS delivery truck obstructing the path forward, the Robotaxi comes to a stop to let its two passengers out just a few feet from their destination.
Once they depart, the Robotaxi correctly identifies the situation, reverses, and performs a three-point turn to extricate itself from the tight spot, all without human intervention.
This second clip is vital because it proves that the Robotaxi's FSD build has the underlying logic and capability to handle these scenarios. It can, and does, use reverse to safely navigate complex situations.
Far from being a failure, this first intervention should be seen as a success for Tesla’s safety methodology. It shows the safety system is working, allowing Safety Monitors to proactively mitigate ambiguous events.
More importantly, this incident provides Tesla’s FSD team with an invaluable real-world data point.
By comparing the intervened ride with the successful autonomous one, Tesla’s engineers can fine-tune FSD’s decision-making, which will likely have a positive impact on its edge case handling in the near future.
This is the purpose of a public pilot: to find the remaining edge cases and build a more robust system, one unpredictable reversing truck at a time.