Tesla's FSD hardware 4.0 to use cameras with LED flicker mitigation

By Nuno Cristovao

At Tesla's AI Day event last summer, Elon answered questions during the Q&A portion that revealed Tesla's plans for FSD hardware.

Elon informally announced that Tesla is already researching the next major hardware revision for full self-driving.

Tesla's camera housing

Elon said that he expects the current hardware in today's vehicles to be capable of "super-human" driving, but Tesla will continue to update their cars to have the most powerful hardware available.

Elon guessed that Tesla may release FSD hardware 4.0 with the introduction of the Cybertruck. Since the Cybertruck won't be released until the end of this year at the earliest, we may even see new hardware before its arrival, although this is unlikely.

In another question related to new hardware, Elon also said that Tesla is looking into new cameras. He was clear that the current cameras aren't a hindrance, but if Tesla can do better, then they will.

Elon didn't go into details about whether the new cameras would be tied to the Cybertruck or hardware 4.0. He also didn't mention specifically whether the new cameras would be higher resolution, contain larger sensors, or possibly even be placed in different locations.

Tesla currently utilizes three separate front-facing cameras in the housing at the top of the windshield. The camera unit contains wide-angle, standard, and narrow view lenses with three separate CMOS sensors.

Each sensor has a resolution of 1280x960 or about 1.2 megapixels.

New Camera

According to Chris Zheng on Twitter, who has connections with some Tesla suppliers, Tesla is planning to use Sony's new IMX490 automotive sensor in their upcoming hardware 4.0 revision.

Wide-angle camera comparison

This new sensor offers many benefits over the current sensors that Tesla uses in their vehicles.

The new sensors support a much higher resolution of 2896x1876. That's 5.4 megapixels compared to the 1.2 megapixels that Tesla's current front-facing cameras support.

At more than four times the resolution of the current sensors, the new sensor will allow Tesla to more accurately detect the objects required for FSD.
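As a quick sanity check of those figures, here's a minimal sketch of the megapixel math using the resolutions quoted above:

```python
# Quick check of the resolution figures quoted above.
current = 1280 * 960        # ~1.2 MP per existing front-facing sensor
imx490  = 2896 * 1876       # ~5.4 MP for the reported Sony IMX490

print(f"Current sensor: {current / 1e6:.1f} MP")   # 1.2 MP
print(f"IMX490 sensor:  {imx490 / 1e6:.1f} MP")    # 5.4 MP
print(f"Ratio: {imx490 / current:.1f}x")           # ~4.4x more pixels
```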

Detecting road edges and lane markings that are farther away is especially difficult. Due to the angle of the road and the current image resolution, Tesla often has to make assumptions based on just a few pixels.

The increased resolution will offer a more detailed image and should improve the accuracy in these situations.
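To illustrate why distant lane markings only cover a few pixels, here's a rough back-of-the-envelope sketch. The ~50° horizontal field of view, the 10 cm lane-line width, and the 100 m distance are assumptions chosen for illustration, not Tesla specifications:

```python
import math

def pixels_across(object_width_m, distance_m, image_width_px, hfov_deg):
    """Approximate horizontal pixels an object spans, using a small-angle approximation."""
    angular_size_rad = object_width_m / distance_m          # angle subtended by the object
    px_per_rad = image_width_px / math.radians(hfov_deg)    # pixels per radian of view
    return angular_size_rad * px_per_rad

# Assumed values for illustration only: a 10 cm wide lane line seen 100 m ahead
# through a camera with a ~50 degree horizontal field of view.
print(pixels_across(0.10, 100, 1280, 50))   # ~1.5 px on the current 1280-wide sensor
print(pixels_across(0.10, 100, 2896, 50))   # ~3.3 px on the 2896-wide sensor
```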

The new wide-angle sensor may also allow Tesla to reduce the number of cameras in the front-facing module. With the higher resolution, Tesla may be able to use the wide-angle sensor to interpret objects that previously required the standard or telephoto lenses, cutting the number of front cameras from three down to two, or even a single camera.

The new camera sensor also supports 10-bit HDR capture at 40 FPS, offering better contrast and richer colors. It's not clear whether Tesla would take advantage of HDR, as it would require more processing power, but it's something Tesla may consider depending on the compute available.
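To get a sense of why HDR could demand more processing, here's a rough, uncompressed data-rate estimate for a single camera at the quoted resolution, bit depth, and frame rate. It ignores how the sensor combines multiple exposures for HDR and any compression, so it's only a ballpark:

```python
# Rough, uncompressed data-rate estimate for one camera (ballpark only).
width, height = 2896, 1876
bits_per_pixel = 10
fps = 40

bits_per_second = width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s")      # ~2.17 Gbit/s per camera
print(f"{bits_per_second / 8 / 1e6:.0f} MB/s")    # ~272 MB/s per camera
```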

LED flicker mitigation

More importantly, the new sensor also supports LED flicker mitigation. LEDs are typically pulsed rather than continuously lit, so camera sensors often capture them inconsistently, with the lights appearing to flicker or go dark in some frames. Since LEDs are widely used in electronic signs and traffic lights, this will be an important feature and should make it easier for Tesla to interpret such signs.
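To see why unmitigated LED flicker is a problem, consider a simplified model: an LED traffic light pulsed on and off by its driver, and a camera with a short exposure. If the exposure window lands entirely in the LED's off phase, the light looks dark in that frame. The 90 Hz pulse rate, 50% duty cycle, and 1 ms exposure below are assumed values for illustration, not figures from Tesla or Sony:

```python
# Simplified model of LED flicker: chance that a short exposure sees the LED as dark.
# All numbers are illustrative assumptions, not real specifications.
def dark_frame_probability(pwm_hz, duty_cycle, exposure_s):
    period = 1.0 / pwm_hz
    off_time = (1.0 - duty_cycle) * period
    # If the exposure is shorter than the off phase, a randomly timed exposure
    # can fall entirely inside it and capture no light at all.
    return max(0.0, (off_time - exposure_s) / period)

# Example: LED pulsed at 90 Hz with a 50% duty cycle, camera exposure of 1 ms.
print(dark_frame_probability(90, 0.5, 0.001))   # ~0.41, so roughly 4 in 10 frames could miss the light
```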

Tesla is working closely with Samsung to produce their next-generation chips that will be used in FSD hardware 4.0. Tesla will likely launch updated cameras at the same time they introduce a new FSD computer.

It makes sense for Tesla to introduce this new hardware in a new model first before retrofitting new camera modules into existing models.

Tesla Plans Massive 10x Robotaxi Expansion: A Look at the Potential New Area

By Karan Singh
Not a Tesla App

With Tesla’s first major expansion of the Robotaxi Geofence now complete and operational, they’ve been hard at work with validation in new locations - and some are quite the drive from the current Austin Geofence.

Validation fleet vehicles have been spotted operating in a wider perimeter around the city, from rural roads in the west end to the more complex area closer to the airport. Tesla mentioned during their earnings call that the Robotaxi has already completed 7,000 miles in Austin, and it will expand its area of operation to roughly 10 times what it is now. This lines up with the validation vehicles we’ve been tracking around Austin.

Based on the spread of the new sightings, the potential next geofence could cover a staggering 450 square miles - a tenfold increase from the current service area of roughly 42 square miles. You can check this out in our map below with the sightings we’re tracking.

Expanding into these new areas would represent the roughly tenfold increase Tesla described, covering approximately 10% of the 4,500-square-mile Austin metropolitan area. If Tesla can offer Robotaxi service across that entire area, it would show they can tackle just about any city in the United States.
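For reference, a quick sketch of how the figures quoted above work out:

```python
# Quick check of the geofence figures quoted above (all approximate).
current_area = 42      # square miles, current Austin service area
potential_area = 450   # square miles, potential expanded geofence
austin_metro = 4500    # square miles, Austin metropolitan area

print(f"Expansion factor: {potential_area / current_area:.1f}x")        # ~10.7x
print(f"Share of Austin metro: {potential_area / austin_metro:.0%}")    # 10%
```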

From Urban Core to Rural Roads

The locations of the validation vehicles show a clear intent to move beyond the initial urban and suburban core and prepare the Robotaxi service for a much wider range of uses.

In the west, validation fleet vehicles have been spotted as far as Marble Falls - a much more rural environment that features different road types, higher speed limits, and potentially different challenges. 

In the south, Tesla has been expanding toward Kyle, which is part of the growing Austin-San Antonio suburban corridor along Interstate 35. San Antonio is only about 80 miles (roughly a 90-minute drive) away and could easily become part of the existing Robotaxi area if Tesla obtains regulatory approval there.

In the east, we haven’t spotted any new validation vehicles. This is likely because Tesla’s validation vehicles originate from Giga Texas, which is located east of Austin. We won’t really know if Tesla is expanding in this direction until they start pushing past Giga Texas and toward Houston.

Finally, some validation vehicles have been spotted just north of the newly expanded boundaries, meaning that Tesla isn’t done in that direction either. This direction contains the largest suburban areas of Austin, which so far have not been serviced by any form of autonomous vehicle.

Rapid Scaling

This new, widespread validation effort confirms what we already know. Tesla is pushing for an intensive period of public data gathering and system testing in a new area, right before conducting geofence expansions. The sheer scale of this new validation zone tells us that Tesla isn’t taking this slowly - the next step is going to be a great leap instead, and they essentially confirmed this during the Q&A session on the recent call. The goal is clearly to bring the entire Austin metropolitan area into the Robotaxi Network.

While the previous expansion showed that Tesla can scale the network, this new phase of validation testing demonstrates just how fast they can validate and expand it. The move to validate across rural, suburban, and urban areas simultaneously shows their confidence in these new Robotaxi FSD builds.

Eventually, all these improvements from Robotaxi will make their way to customer FSD builds sometime in Q3 2025, so there is a lot to look forward to.

Caught on Video: Tesla FSD Tackles a Toll Booth — Here’s How It Pulled It Off

By Karan Singh
@DirtyTesLa on X

For years, the progress of Tesla’s FSD has been measured by smoother turns, better lane centering, and more confident unprotected left turns. But as the system matures, a new, more subtle form of intelligence is emerging - one that shifts its attention to the human nuances of navigating roads. A new video posted to X shows the most recent FSD build, V13.2.9, demonstrating this in a remarkable real-world scenario.

Toll Booth Magic

In the video, a Model Y running FSD pulls up to a toll booth and smoothly comes to a stop, allowing the driver to handle payment. The car waits patiently as the driver interacts with the attendant. Then, at the precise moment the toll booth operator finishes the transaction and says “Have a great day”, the vehicle starts moving, proceeding through the booth - all without any input from the driver.

Notably, there’s no gate at this toll booth. The interaction happened entirely naturally with FSD.

How It Really Works

While the timing was perfect, FSD wasn’t listening to the conversation for clues (maybe one day, with Grok?). The reality, as explained by Ashok Elluswamy, Tesla’s VP of AI, is even more impressive.

FSD is simply using the cameras on the side of the vehicle to watch the exchange between the driver and attendant. The neural network has been trained on enough data that it can visually recognize the conclusion of a transaction - the exchange of money or a card and the hands pulling away - and understands that this is the trigger to proceed.

The Bigger Picture

This capability is far more significant than just a simple party trick. FSD is gaining the ability to perceive and navigate a world built for humans in the most human-like fashion possible.

If FSD can learn what a completed toll transaction looks like, that points to the countless other complex scenarios it’ll be able to handle in the future. This same visual understanding could be applied to navigating a fast-food drive-thru, interacting with a parking garage attendant, passing through a security checkpoint, or boarding a ferry or vehicle train — all things we thought would come much later.

These human-focused interactions will eventually become even more useful, as FSD becomes ever more confident in responding to humans on the road, like when a police officer tells a vehicle to go a certain direction, or a construction worker flags you through a site. These are real-world events that happen every day, and it isn’t surprising to see FSD picking up on the subtleties and nuances of human interaction.

This isn’t a pre-programmed feature for a specific toll booth. It is an emergent capability of the end-to-end AI neural nets. By learning from millions of videos across billions of miles, FSD is beginning to build a true contextual understanding of the world. The best part - with a 10x context increase on its way, this understanding will grow rapidly and become far more powerful.

These small, subtle moments of intelligence are the necessary steps to a truly robust autonomous system that can handle the messy, unpredictable nature of human society.
