Neural Networks to Use Surround Video Soon
If you've used a Tesla, you have probably noticed that the car sometimes shows multiple vehicles or objects on the screen when in reality there is only one. This happens because the car uses several cameras whose fields of view overlap, and when each feed is analyzed separately, it's unclear whether an object seen in two feeds is one object or two.
Tesla will soon move away from analyzing individual camera feeds and instead stitch them together into one large surround video. The stitching process figures out where the images overlap and fits them together like a puzzle.
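The article doesn't describe Tesla's actual stitching pipeline, so the following is only a rough sketch of the core idea: slide two overlapping frames past each other, find the offset where their shared columns agree best, and join them there. The function names (`find_overlap`, `stitch`) and the simple sum-of-squared-differences matching are illustrative assumptions, not Tesla's method.

```python
import numpy as np

def find_overlap(left, right, max_overlap):
    """Find the overlap width (in columns) where the start of the right
    frame best matches the end of the left frame, by minimizing the
    sum of squared differences. Hypothetical helper for illustration."""
    best_w, best_err = 1, float("inf")
    for w in range(1, max_overlap + 1):
        err = np.sum((left[:, -w:] - right[:, :w]) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return best_w

def stitch(left, right, max_overlap=50):
    """Join two horizontally overlapping frames into one wider frame,
    averaging the shared columns so the seam blends smoothly."""
    w = find_overlap(left, right, max_overlap)
    blended = (left[:, -w:] + right[:, :w]) / 2.0
    return np.hstack([left[:, :-w], blended, right[:, w:]])
```

A real multi-camera system would also have to correct for lens distortion and differing viewpoints; this sketch assumes the frames are already aligned row-for-row.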
Elon Musk has said this will be a huge leap forward in the car's self-driving ability: the car will essentially go from analyzing still images to analyzing full surround video.
Elon has now said that Tesla is in the process of upgrading all of its neural networks to surround video, and we could start seeing some of these upgrades as early as next week.
Interestingly, he also mentioned that they will start using subnets on focal areas rather than treating all regions of the video equally.
For example, if they know certain areas always show the sky, or that certain events never occur in specific regions, they could avoid processing those areas as thoroughly. This should save compute power as they continue to push FSD technology forward.
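The article gives no detail on how such selective processing would work, but the basic idea can be sketched with a boolean mask: run the expensive per-pixel computation only where the mask allows it, and give skipped regions (say, a static sky band) a cheap default. The mask layout and the `process_with_mask` helper are assumptions for illustration only.

```python
import numpy as np

def process_with_mask(frame, skip_mask, expensive_fn):
    """Apply expensive_fn only to pixels where skip_mask is False.
    Skipped pixels (e.g. a region known to always be sky) are left
    at a cheap default of zero, saving the per-pixel compute."""
    out = np.zeros_like(frame)
    active = ~skip_mask          # pixels that still need full processing
    out[active] = expensive_fn(frame[active])
    return out
```

With a mask covering, say, the top quarter of the frame, the expensive function is called on 25% fewer pixels, which is the kind of saving the subnet-on-focal-areas idea seems aimed at.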