October 10th was Tesla’s spectacular ‘We, Robot’ robotaxi event, and now we know a lot more about Tesla’s upcoming robotaxi – officially named the Cybercab – than ever before.
So, grab your Tesla-stamped BBQ burgers and put on your 10-gallon hat; we’re going to take a look at everything we know about Tesla’s Cybercab.
Exterior
The Cybercab showed up to ‘We, Robot’ with both a front and rear lightbar, similar to the Cybertruck. However, unlike the truck – it’s not stainless steel. Instead, the prototypes that were at the robotaxi event arrived with aluminum body panels painted silver.
While the exterior finish won’t be as tough-as-nails as the Cybertruck’s, the Cybercab is designed to be cheaply mass-produced, so this decision makes sense. While there was early talk about using a stainless steel “exoskeleton,” it appears Tesla decided that aluminum and steel body panels would be easier or cheaper to manufacture.
While many early concept renders suggested the Cybercab might have only three wheels, it does indeed have four, like a normal car.
And of those four, only the front two steer – so no rear-wheel steering here. Speaking of the wheels, they were mostly covered by disc-shaped plates, making them extremely aerodynamic. Tesla also painted the sidewalls of the tires silver, leaving them looking super slim in comparison to the size of the wheel.
Looking at the whole vehicle, the Cybercab doesn’t have Tesla’s iconic glass roof – but a simpler metal roof. The windows are not frameless either – they are framed (metal around the glass opening), which makes them easier to maintain and produce. All these changes are clearly aimed at reducing the overall cost of the vehicle, fitting its robotic taxi role.
The one oddball in the price-to-function equation is the butterfly doors. The Cybercab’s butterfly doors are super impressive and strike a pose just as iconic as the Model X’s. We’re interested to see what Tesla has planned for these automatic doors, as they may be difficult to maintain and service in colder climates, given snow and ice build-up.
Interior
On the inside, the Cybercab comfortably seats two adults in large, padded seats. In these prototype vehicles, the seats are heated but not ventilated. The seats themselves are fairly simple compared to Tesla’s other seat designs, even the fabric seats of the simpler Mexican-market Model 3.
Tesla has made the overall interior design very simplistic and easy to clean. They showed off a new automatic vacuum and scrubbing unit that was cleaning the robotaxi’s seats and screens – so these seats are likely intended to take some punishment. And the screen will likely need to be cleaned often. There were no other major controls in the vehicle to clean – no steering wheel, no pedals.
However, the interior is classic Tesla—super spartan, stylish, and clean, with an extremely large 20.5” center display intended to show trip progress and entertainment. For comparison, the Cybertruck currently has the largest display in any Tesla at 18.5”, while the Model 3 and Model Y use a 15” screen. Unsurprisingly, it looks like video games, movies, and TV shows will all be available in the Cybercab.
Two drink holders are also located just in front and below the center armrest. Just under the drink holders (towards the passengers) are the buttons to open and close the doors. The doors normally close automatically when the passenger(s) buckle up, but they can also be closed manually.
As expected, the controls for the windows are on the doors, so nothing too special there. Tesla has only shown the white interior so far, with black trim throughout, including the carpet floor and plastic headliner. We’re hoping Tesla also introduces a black interior; even with how resilient Tesla’s white interiors are, a black interior is likely to better withstand the day-to-day punishment a taxi goes through.
FSD Hardware/AI
At the event, Elon Musk confirmed that the Cybercab will ship with an “upsized” Hardware 5/AI5 computer. It looks like AI5 has mostly the same camera layout as AI4—with two cameras (plus one dummy) at the top of the windshield. The car also features a front bumper camera, the usual two B-pillar cameras, and one rear-facing camera.
The Cybercab's rear end has a fairly large amount of storage—the rear hatch opens upwards and reveals a sizeable cavity. From some rough estimates, it will be possible to comfortably throw 3-4 large suitcases back there, along with a few other items.
Internally, there’s less space, but as there is no center storage console, there is a large amount of legroom. If you need extra space, you can put a backpack on the floor of the Cybercab between your feet and still have plenty of room to stretch.
Release Date
Elon acknowledged he’s been overly optimistic with timelines and said that Cybercab production should begin no later than 2027, though he mentioned 2026 as a likely start date.
Now that the Cybercab has been unveiled, we’ll likely start seeing design and build prototypes on the roads in Texas and California – where Tesla plans to start Unsupervised FSD – sometime in late 2026. More vehicles will show up in 2027.
Price
In a somewhat surprising move, Tesla announced that they’ll also sell the Cybercab to anyone who wants to buy it, whether it’s for personal use or to operate their own fleet of autonomous taxis. Tesla announced that they plan to sell the Cybercab for under $30,000 USD. Given the lack of steering wheel and pedals, we’re not sure whether the US Federal EV Rebate or the Canadian iZEV rebates would be applicable to these Cybercabs, but we’ll see how that pans out in the future. Both of these rebate programs are set to expire before the Cybercab hits the road.
Cybercab Hubs – Cleaning & Charging
Elon also confirmed that the Cybercab has inductive charging – a first for a fleet-scale EV. It seems that Cybercabs will likely belong to “hubs” where they can be charged and get cleaned. Whether these hubs are Tesla-owned facilities or consumer-owned is yet to be determined.
Tesla also showed off a very short clip of the Cybercab getting cleaned with robotic arms. The cost and complexity of this are likely to drive a model where Tesla provides the facilities for charging and cleaning while owners simply let their vehicles be charged or cleaned as required.
We’re excited to hear more details about how exactly Tesla intends to build out these potential hubs and more details about the upcoming Cybercab. Now that the event has passed, we should start to see a steady flow of new information as Elon or other Tesla executives share new details.
If you’re ever involved in a collision with your Tesla, there’s a good chance you can retrieve dashcam footage of the incident—provided the USB drive and glovebox survive. However, sometimes they don’t, and even dashcam footage may not be sufficient.
In such cases, you can request a full recording from Tesla, along with a detailed data report showing comprehensive information about your vehicle’s performance during the collision. Shortly after any impact, your Tesla automatically uploads its crash log and related data to Tesla’s servers when possible, even using the low-voltage battery if the high-voltage pyro fuses are triggered.
Hopefully, you’ll never need this, but here’s how to request a Tesla Vehicle Data Report and what you can expect in the report.
How to Request a Report
Tesla has a simple, automated process for owners to request a Vehicle Data Report. To do so, simply go to Tesla’s Data Association Page and log into your Tesla Account.
From there, you’ll see a form that contains several options. Under “Regarding,” you’ll choose “Data Privacy Request,” and in the next selection, choose “Obtain a Copy of My Data.”
Tesla will then ask you to choose a vehicle that’s attached to your account and a range of dates for which you want data.
Tesla will provide data for the entire date range you specify, so it’s best to keep the range small. Once you hit Submit, Tesla will start processing the request. Collision data is retained for an extended period, so you can go back and retrieve data when needed.
The summary page for the Vehicle Data Report
@bilalsattar on X
Within 30 days, and often much sooner, Tesla should email you the report in PDF format along with a CSV file containing all the raw data related to the request. This information is available for any country in which Tesla sells cars. If your country is outside of Tesla’s regular sales zones, you can try reaching out anyway, but we’re not sure whether they retain your data.
Tesla can also send you footage and data even if the incident wasn’t recorded as a collision – they’ll send you whatever is available – for any specific time frame you request.
Tesla Vehicle Data Report
The report is several pages long and comes in a nicely formatted PDF package. It breaks down the incident into various sections that help highlight what happened during the event. The first page summarizes the incident by highlighting key events and metrics like Autopilot use and speed.
Summary & Event Information
The Summary and Event Information sections are on the left side of the first page. The summary section is a text version of what happened during the event, reporting the time of the incident, speed, and whether seat belts were used, among other details.
The event information section includes some high-level information, such as the location of the incident, whether dashcam recordings are available, and the date and time.
Driver Log Data Overview
The Driver Log Data Overview section focuses on a few Tesla features and shows whether they were enabled at the five-second mark before the incident, one second before the incident occurred, or at the time of the incident. Tesla will show whether Autosteer / FSD, Driver Monitoring, Cruise Control, and Manual Brakes were used at all of these points. They will also show the status of the driver’s seat belt. Later in the report, Tesla will show graphs for each of these features so that you can see if they changed over the course of the incident.
Speed and Collision
The Vehicle Data Report's formatting
@bilalsattar on X
The Speed and Collision section shows a timeline-based graph of the vehicle’s speed at the time of collision, four seconds before the collision, and four seconds after. The vertical line on the graph represents the collision, giving you a better understanding of what happened before and after the incident. However, in the following pages of the report, Tesla provides numerous time-based graphs that highlight many other metrics, including brake pedal use, accelerator, steering wheel torque and more.
Area of Detected Impact
This area of the report shows the vehicle from the top-down view and which areas detected an impact.
Time-Based Graphs
The rest of the report includes detailed graphs showing various vehicle metrics before, during, and after the incident.
Some of these graphs include the vehicle’s speed, steering wheel torque, steering wheel angle (how much the wheel was being turned and in which direction), accelerator and brake pedal usage, and pressure of the brake master cylinder. Tesla will even show whether any doors were opened during the seconds leading up to or following the incident.
If Tesla has video of the incident, it will also be provided as a download link. The raw data behind all the charts and graphs is also provided in a CSV file, which you can open in software like Microsoft Excel or Google Sheets.
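Since the raw data comes as a CSV, it’s also easy to analyze programmatically. Here’s a minimal Python sketch of loading a report and pulling out the speed at the moment of impact; note that the column names and sample values below are made up for illustration, and Tesla’s actual file may use different headers.

```python
import csv
import io

# Hypothetical sample of the report's raw CSV export.
# Column names are illustrative; Tesla's actual headers may differ.
SAMPLE = """time_s,speed_mph,brake_pedal,accel_pedal_pct
-4.0,42,0,18
-1.0,38,1,0
0.0,22,1,0
1.0,5,1,0
"""

def load_report(text):
    """Parse the CSV into a list of dicts with numeric fields."""
    return [{k: float(v) for k, v in row.items()}
            for row in csv.DictReader(io.StringIO(text))]

def speed_at(rows, t):
    """Speed at timestamp t (seconds relative to impact), or None."""
    for row in rows:
        if row["time_s"] == t:
            return row["speed_mph"]
    return None

rows = load_report(SAMPLE)
print(speed_at(rows, 0.0))  # → 22.0
```

The same pattern works for any of the other columns, like brake pedal or accelerator use, if you want to chart the data yourself.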
To wrap up, Tesla is currently the only car manufacturer in the world that can provide this information to its customers at the drop of a hat. This information is immensely valuable, and it could make all the difference in an insurance claim or even a criminal charge.
Tesla produces some of the safest vehicles on the planet, and their commitment to safety and reporting is spectacular. We’re happy to see Tesla continue to take steps to better help their customers.
Thanks to Nic Cruze Patane for sharing the report. We hope you never need to use this, but it’s good to know that it’s available.
Thanks to a Tesla patent published last year, we have a great look into how FSD operates and the various systems it uses. SETI Park, who examines and writes about patents, also highlighted this one on X.
This patent breaks down the core technology used in Tesla’s FSD and gives us a great understanding of how FSD processes and analyzes data.
To make this easily understandable, we’ll divide it up into sections and break down how each section impacts FSD.
Vision-Based
First, this patent describes a vision-only system—in line with Tesla’s stated approach—that enables vehicles to see, understand, and interact with the world around them. The system describes multiple cameras, some with overlapping coverage, that capture a 360-degree view around the vehicle, mimicking and improving on human vision.
What’s most interesting is that the system rapidly adapts to the various focal lengths and perspectives of the different cameras around the vehicle. It then combines all this to build a cohesive picture—but we’ll get to that part shortly.
Branching
The system is divided into two parts - one for Vulnerable Road Users, or VRUs, and the other for everything else that doesn’t fall into that category. That’s a pretty simple divide - VRUs are defined as pedestrians, cyclists, baby carriages, skateboarders, animals, essentially anything that can get hurt. The non-VRU branch focuses on everything else, so cars, emergency vehicles, traffic cones, debris, etc.
Splitting it into two branches enables FSD to look for, analyze, and then prioritize certain things. Essentially, VRUs are prioritized over other objects throughout the Virtual Camera system.
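As a rough mental model, the branch split is just a classification step that happens before everything else. Here’s a toy sketch in Python; the category lists are our own, not Tesla’s actual taxonomy.

```python
# Toy sketch of the VRU / non-VRU split described in the patent.
# The class lists are illustrative, not Tesla's actual taxonomy.
VRU_CLASSES = {"pedestrian", "cyclist", "skateboarder", "stroller", "animal"}

def route_detection(obj_class):
    """Send a detected object to the VRU branch or the non-VRU branch."""
    return "vru" if obj_class in VRU_CLASSES else "non_vru"

detections = ["pedestrian", "car", "traffic_cone", "cyclist"]
print({c: route_detection(c) for c in detections})
```

In the real system this split happens inside neural networks rather than a lookup table, but the effect is the same: anything that can get hurt goes down the prioritized path.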
The many data streams and how they're processed.
Not a Tesla App
Virtual Camera
Tesla processes all of that raw imagery, feeds it into the VRU and non-VRU branches, and picks out only the key and essential information, which is used for object detection and classification.
The system then draws these objects on a 3D plane and creates “virtual cameras” at varying heights. Think of a virtual camera as a real camera you’d use to shoot a movie. It allows you to see the scene from a certain perspective.
The VRU branch places its virtual camera at human height, which enables a better understanding of VRU behavior—likely because there’s far more data captured at human height than from above or any other angle. Meanwhile, the non-VRU branch raises its virtual camera above that height, letting it see over and around obstacles for a wider view of traffic.
This effectively provides two forms of input for FSD to analyze—one at the pedestrian level and one from a wider view of the road around it.
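The idea of viewing the same scene from cameras at different heights can be sketched with a basic pinhole projection. Everything here (the heights, the geometry, the `project` helper) is our own illustration, not anything taken from the patent.

```python
# Minimal sketch of a "virtual camera": project a 3D world point onto a
# 2D image plane from a chosen camera height. Geometry is illustrative.

def project(point, cam_height, focal=1.0):
    """Pinhole projection of (x, y, z), with the camera at (0, 0, cam_height)
    looking along +x. Returns (u, v) image-plane coordinates."""
    x, y, z = point
    return (focal * y / x, focal * (z - cam_height) / x)

pedestrian_head = (10.0, 0.0, 1.7)  # 10 m ahead, 1.7 m off the ground
# VRU branch: virtual camera at roughly human eye height
print(project(pedestrian_head, cam_height=1.6))
# Non-VRU branch: virtual camera raised to see over obstacles
print(project(pedestrian_head, cam_height=3.0))
```

Same pedestrian, two very different views: near the image center at human height, and well below center from the raised camera, which frees up the view for traffic beyond.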
3D Mapping
Now, all this data has to be combined. These two virtual cameras are synced - and all their information and understanding are fed back into the system to keep an accurate 3D map of what’s happening around the vehicle.
And it's not just the cameras. The Virtual Camera system and 3D mapping work together with the car’s other sensors to incorporate movement data—speed and acceleration—into the analysis and production of the 3D map.
This system is best understood by the FSD visualization displayed on the screen. It picks up and tracks many moving cars and pedestrians at once, but what we see is only a fraction of all the information it’s tracking. Think of each object as having a list of properties that isn’t displayed on the screen. For example, a pedestrian may have properties that can be accessed by the system that state how far away it is, which direction it’s moving, and how fast it’s going.
Other moving objects, such as vehicles, may have additional properties, such as their width, height, speed, direction, planned path, and more. Even non-VRU objects will contain properties, such as the road, which would have its width, speed limit, and more determined based on AI and map data.
The vehicle itself has its own set of properties, such as speed, width, length, planned path, etc. When you combine everything, you end up with a great understanding of the surrounding environment and how best to navigate it.
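One way to picture these per-object property lists is as a simple record for each tracked object. The field names below are our guesses for illustration, not Tesla’s internal schema.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """Illustrative property set for one tracked object; the fields are
    our guesses, not Tesla's internal schema."""
    kind: str               # "pedestrian", "vehicle", "cone", ...
    is_vru: bool
    distance_m: float       # distance from the ego vehicle
    heading_deg: float      # direction of travel
    speed_mps: float
    size_m: tuple = (0.0, 0.0)  # (width, length), mainly for vehicles

ped = TrackedObject("pedestrian", True, 12.5, 90.0, 1.4)
car = TrackedObject("vehicle", False, 30.0, 180.0, 13.0, size_m=(1.9, 4.7))
print(ped.is_vru, car.size_m)
```

The visualization on the screen only surfaces a few of these fields, but the planner can query all of them for every object in the 3D map.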
The Virtual Mapping of the VRU branch.
Not a Tesla App
Temporal Indexing
Tesla calls this feature Temporal Indexing. In layman’s terms, it’s how the vision system analyzes images over time and keeps track of them. Rather than working from a single snapshot, FSD works with a series of them, which allows it to understand how objects are moving. This enables object path prediction and lets FSD estimate where vehicles or objects might be, even when it doesn’t have a direct line of sight to them.
This temporal indexing is done through “Video Modules”, which are the actual “brains” that analyze the sequences of images, tracking them over time and estimating their velocities and future paths.
Once again, heavy traffic and the FSD visualization, which keeps track of many vehicles in lanes around you—even those not in your direct line of sight—are excellent examples.
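At its simplest, tracking an object across a series of snapshots lets you estimate its velocity and extrapolate its path. Here’s a toy version of that idea; the real video modules are learned neural networks, so this only illustrates the concept.

```python
# Toy version of "temporal indexing": estimate an object's velocity and
# predict its next position from a short history of (time, x, y) snapshots.

def estimate_velocity(track):
    """track: list of (t, x, y) samples. Returns (vx, vy) from the last two."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    return (x1 - x0) / dt, (y1 - y0) / dt

def predict(track, dt):
    """Linearly extrapolate the newest position dt seconds ahead."""
    vx, vy = estimate_velocity(track)
    t, x, y = track[-1]
    return x + vx * dt, y + vy * dt

# A cyclist moving steadily along x at 4 m/s.
history = [(0.0, 0.0, 2.0), (0.5, 2.0, 2.0), (1.0, 4.0, 2.0)]
print(predict(history, 1.0))  # → (8.0, 2.0)
```

Crucially, a prediction like this still works for a moment after the object disappears behind an obstruction, which is how FSD can keep placing vehicles it can no longer directly see.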
End-to-End
Finally, the patent also mentions that the entire system, from front to back, can be - and is - trained together. This training approach, which now includes end-to-end AI, optimizes overall system performance by letting each individual component learn how to interact with other components in the system.
How everything comes together.
Not a Tesla App
Summary
Essentially, Tesla sees FSD as a brain, and the cameras are its eyes. It has a memory, and that memory enables it to categorize and analyze what it sees. It can keep track of a wide array of objects and their properties to predict their movements and determine a path around them. This is a lot like how humans operate, except FSD can track far more objects at once and determine their properties, like speed and size, much more accurately. On top of that, it can do it faster than a human and in all directions at once.
FSD and its vision-based camera system essentially create a 3D live map of the road that is constantly and consistently updated and used to make decisions.