It’s now been over a year since Elon Musk started saying Full Self Driving (FSD) is coming “next month.” If you are wondering what’s taking so long, maybe check out this recent paper, “Why AI is Harder Than We Think,” by Melanie Mitchell. She is the Davis Professor of Complexity at the Santa Fe Institute and Professor of Computer Science (currently on leave) at Portland State University. Her research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Given that Tesla FSD relies on visual AI, she would seem well placed to explain the delay in the FSD rollout. Her paper does not mention Tesla or FSD at all, but I think her observations apply to the current status of FSD.
Mitchell points to four classic fallacies in the predictions made by AI developers:
1. Narrow intelligence is on a continuum with general intelligence
It’s easy to assume that if you make some incremental progress on an AI problem, it’s just a matter of time before you solve the whole thing, i.e., just a few more months to FSD. But Mitchell says that’s like claiming the first monkey that climbed a tree was making progress toward landing on the moon. Ain’t gonna happen. Plus there’s an unexpected obstacle on the assumed continuum of AI progress: “the problem of common sense,” she says, which humans carry around subconsciously but AI systems lack completely. Nobody knows how to code for common sense, which comes in handy when you’re driving a car.
2. Easy things are easy and hard things are hard
In fact, easy things for us are hard for computers. She quotes Hans Moravec, the computer scientist who came up with one of the first algorithms for computer vision. He once wrote, “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, yet difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Unfortunately, FSD is all about perception and mobility. We simply don’t appreciate the complexity of our own thought processes, and we overestimate how easy it is to give these abilities to a computer. Mitchell says Moravec put it this way: “Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it.” FSD doesn’t have any of this. Mitchell then quotes the grandfather of AI, Marvin Minsky, who said, “In general, we’re least aware of what our minds do best.”
3. The lure of wishful mnemonics or metaphors
The computational technique underlying FSD is a neural network, a metaphor loosely inspired by the brain but with major differences. Mitchell says, “Machine learning or deep learning methods do not really resemble learning in humans (or in non-human animals). Indeed, if a machine has learned something in the human sense of learn, we would expect that it would be able to use what it has learned in different contexts. However, it turns out that this is often not the case.” Computer scientist Drew McDermott first used the term “wishful mnemonics” in 1976, pointing out that by labeling some computer code “Full Self Driving,” for instance, we are imbuing it with the “wish” that it will actually do what it says. He said a better idea would be to label it “G0034” and then see if the programmers can convince themselves or anyone else that G0034 implements some part of self driving. We all seem to be caught in a sort of wishful FSD state at the moment. But wishing it won’t make it happen.
4. Intelligence is all in the brain
This fallacy assumes intelligence is disembodied and lives only in the brain, the so-called “information processing model of mind.” It’s the old idea that if you had enough computing power you could “upload” a mind into a machine. But a growing number of cognitive scientists now believe in a sort of “embodied cognition.” Mitchell says, “Nothing in our knowledge of psychology or neuroscience supports the possibility that ‘pure rationality’ is separable from the emotions and cultural biases that shape our cognition and our objectives. Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a common sense understanding of the world. It’s not at all clear that these attributes can be separated.” While one could argue that Tesla’s FSD is also embodied in the car via its sensors and cameras, it may take more than a few weeks before Tesla’s programmers pull off a trick that took Nature a billion years to integrate.
Ultimately, Mitchell uses these four fallacies to explain the cyclic nature of AI research since the field’s inception in the 1950s. It tends to blossom in an AI Spring of magnificent, overconfident predictions, but then, once the scale of the challenge is realized, a sort of AI Winter descends and progress can stall for up to a decade.
Obviously we all want Tesla to be successful and pull off Full Self Driving next week, next month, or even next year. And Elon Musk has pulled a rabbit out of a hat more than once. Who can forget when those two returning Falcon Heavy booster rockets landed perfectly and simultaneously in 2018? So fingers crossed. But if you’re ever craving an explanation for why FSD is taking so long, devote 20 minutes to Melanie Mitchell’s paper.
To show off its scalability, Tesla has officially launched its first major expansion of its Robotaxi service area in Austin, Texas. The expansion comes just 22 days after the program’s initial public launch.
That’s a stunningly quick pace, and it sets a benchmark for how fast we can expect Tesla to roll out additional expansions as it validates and safety-checks additional areas and cities. The new geofence not only adds a significant amount of new territory but also makes Tesla’s service area in Austin approximately four square miles larger than Waymo’s.
The expansion, which went live for users in the early access program earlier today, reshapes the map into what we can call an upside-down T. It connects more parts of the city and more than doubles the service area.
So far, the initial launch has been operating without any significant issues, which means Tesla is ready and willing to continue expanding the program.
Rapid Scaling
While the larger map is a clear win for early-access users, especially those who live in Austin, the most significant aspect here is just how fast Tesla is moving. Achieving a major expansion just over three weeks after the initial launch is a testament to Tesla’s generalized, vision-only approach to autonomy.
Unlike methods that require intensive, street-by-street HD mapping that can take months or even years just to expand to a few new streets, Tesla’s strategy is built for this type of speed.
This is Tesla’s key advantage - it can leverage its massive fleet and AI to build a generalized, easily applicable understanding of the world. Expanding to a new area becomes less about building a brand-new, high-definition map of every street light and obstacle and more about a targeted safety-validation process.
Tesla can deploy a fleet of validation vehicles to focus intensely on one zone, allowing the neural nets to learn the quirks of that area’s intersections and traffic flows. Once a high level of safety and reliability is demonstrated, Tesla can simply redraw the geofence.
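To make that idea concrete, here is a minimal, purely illustrative sketch of what “redrawing the geofence” can mean in software: the service area is just a polygon of coordinates, and expanding service is swapping in a larger polygon rather than rebuilding a map. The function, coordinates, and polygons below are hypothetical examples, not Tesla’s actual implementation.

```python
# Illustrative only: a geofence as a polygon of (lon, lat) vertices,
# checked with a standard ray-casting point-in-polygon test.
def point_in_polygon(lon, lat, polygon):
    """Return True if the point (lon, lat) falls inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast from the point
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical service areas: "expanding" is just replacing the polygon.
OLD_GEOFENCE = [(-97.78, 30.22), (-97.70, 30.22), (-97.70, 30.30), (-97.78, 30.30)]
NEW_GEOFENCE = [(-97.80, 30.15), (-97.68, 30.15), (-97.68, 30.40), (-97.80, 30.40)]

pickup = (-97.74, 30.35)  # a point outside the old area but inside the new one
print(point_in_polygon(*pickup, OLD_GEOFENCE))  # False
print(point_in_polygon(*pickup, NEW_GEOFENCE))  # True
```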
Geofence Size
Tesla went from approximately 19.7 sq mi (51 sq km) to 42.07 sq mi (109 sq km) in just 22 days, following the initial launch and safety validation. Within a few short days of launch, we began seeing the first Tesla engineering validation vehicles hitting Austin’s downtown core, preparing for the next phase.
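As a quick sanity check on those figures (using 1 sq mi ≈ 2.59 sq km), both the unit conversions and the “more than double” claim hold up:

```python
# Quick arithmetic check of the service-area figures quoted above.
SQ_KM_PER_SQ_MI = 2.59

old_sq_mi, new_sq_mi = 19.7, 42.07
print(round(old_sq_mi * SQ_KM_PER_SQ_MI))   # ~51 sq km
print(round(new_sq_mi * SQ_KM_PER_SQ_MI))   # ~109 sq km
print(round(new_sq_mi / old_sq_mi, 2))      # ~2.14x, i.e. more than double
```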
The larger footprint means more utility for riders, and that’s big, especially since the new service area is approximately four square miles larger than Waymo’s established operational zone in the city.
Highways and Fleet Size
The new territory enables longer and more practical trips, with the longest tip-to-tip trip taking about 42 minutes from the southern edge of the old geofence to the northern edge of the new one. For now, Tesla has limited its fleet to surface streets and does not use highways to complete routes.
We also don’t know whether Tesla has increased the vehicle fleet size yet - but if it intends to maintain or reduce wait times even for early-access riders, the fleet size will easily need to double to keep up with the new area.
This video clip shows the Robotaxi following the Interstate (I-35) but not taking the highway itself.
Perhaps the most telling bit about how fast Tesla is expanding is that they’re already laying the groundwork for the next expansion. Validation vehicles have been spotted operating in Kyle, Texas, approximately 20 miles south of the geofence’s southern border.
Robotaxi Validation vehicles operating in Kyle, Texas.
Financial_Weight_989 on Reddit
This means that while one expansion is being rolled out to the public, Tesla already has its engineering and validation teams working on the next one. At this relentless pace, Tesla will likely have a good portion of the Austin metropolitan area - the zone covered by its autonomy license application - serviceable by the end of 2025.
The pilot? A success. The first expansion? Done. The second expansion? Already in progress. Robotaxi is going to go places, and the next question won't be about whether the network is going to grow. Instead, the new questions are: How fast, and where next?
One of the most welcome features of the recently refreshed 2026 Model S and Model X is the addition of a front bumper camera. Now, thanks to some clever work by the Tesla community, it has been confirmed that this highly requested feature can be retrofitted onto older HW4-equipped (AI4) Model S and Model X vehicles.
The discovery and first installation were performed by Yaro on a Model X, and Tesla hacker Green helped provide some additional insight on the software side.
Unused Port and a Software Switch
The foundation for this retrofit has been in place for a long time, laid by Tesla itself. All HW4-equipped Model S and Model X vehicles, even those built before the recent refresh, have an empty, unused camera connector slot on the FSD computer, seemingly waiting for this exact purpose.
While the physical port is there, getting the car to recognize the camera requires a software change. According to Green, a simple configuration flag change is all that is needed to enable the front camera view on the vehicle’s main display once the hardware is connected and ready.
The Hardware: Parts & Costs
Yaro, who performed the installation on a Model X, provided a detailed breakdown of the parts and approximate costs involved.
Front Camera - $200 USD
Bumper Grill (with camera cutout) - $80 USD
Bumper Harness - $130 USD
Washer Pump - $15 USD
Washer Hoses - $30 USD
The total cost for the Model X hardware comes to around $455 USD, which isn’t too expensive if you’re doing the work yourself. Tesla’s Electronic Parts Catalog has some of these parts available for order, and some can be ordered through your local Service Center. Yaro did note that he had to jerry-rig the camera connector cable, salvaging it from a different camera harness.
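For reference, here is the same parts list as a quick tally (prices are the approximate figures Yaro reported, in USD):

```python
# Approximate Model X retrofit parts costs (USD), per the list above.
parts_usd = {
    "Front camera": 200,
    "Bumper grill (with camera cutout)": 80,
    "Bumper harness": 130,
    "Washer pump": 15,
    "Washer hoses": 30,
}
print(sum(parts_usd.values()))  # 455
```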
Model S vs. Model X
This is where the project varies significantly. For the Model X, the retrofit is relatively simple. Because the main bumper shape is the same, only the lower bumper grill needs to be swapped for the version with the camera opening, along with installing the camera itself and the washer hardware.
For the Model S, the process is a bit more complex and expensive. Due to the different shape of the pre-refresh bumper, the entire front fascia assembly must be replaced to accommodate the camera. This makes the project far more expensive and laborious.
DIY or Official Retrofit?
The official front bumper camera on the Model X
Not a Tesla App
Right now, this is only a DIY retrofit. Tesla hasn’t indicated that it intends to offer this as an official retrofit for older vehicles, but given that the work isn’t too complex, there’s a chance it may do so in the near future.
All in all, this is about 3-5 hours of labor for the Model X, and approximately 5-7 hours of labor for the Model S, based on the official Tesla Service Manuals, using the front fascia reinstall process as a guide.
That means if Tesla does offer this as a retrofit service, it will likely cost between $800 and $1,200 USD when factoring in Tesla’s labor rates, but the total cost will vary regionally.
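To see how a range like that comes together, here is a rough back-of-the-envelope estimate. The hourly labor rate below is purely an assumption for illustration, not an official figure, since actual Tesla Service rates vary by region:

```python
# Rough Model X retrofit estimate: parts total plus shop labor.
parts_total_usd = 455                    # from the parts list above
labor_hours = (3, 5)                     # Model X labor range per the service-manual estimate
assumed_rate_usd_per_hour = 130          # hypothetical labor rate, not an official figure

low = parts_total_usd + labor_hours[0] * assumed_rate_usd_per_hour   # 845
high = parts_total_usd + labor_hours[1] * assumed_rate_usd_per_hour  # 1105
print(f"${low} - ${high}")               # lands roughly within the $800-$1,200 range quoted above
```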
For those who own an AI4 Model S or Model X, it may be possible to request this installation through service, but as far as we’re aware, there is no official service notice for this retrofit at this time.
What About the Model 3?
For owners of the refreshed Highland Model 3, the only vehicle now left without a front bumper camera, the possibility of a retrofit is still uncertain. Green has noted that some, but not all, Model 3s built in late 2024 have an empty camera port on the FSD computer. This inconsistency means that while a retrofit may be possible for a subset of Model 3s, it isn’t a guaranteed upgrade path like it is for the Model S and Model X.