This is exactly what I have always said about having multiple sensor combinations. The particular comment I am referring to is the one where he said, “And what we found is that, when you have multiple sensors, they tend to get confused. So do you believe the camera or do you believe lidar?” That sentence alone is why I have always argued that multiple sensor types are not the way forward: they can lead to the machine equivalent of what humans call choice paralysis, where you have so many options or so much conflicting information that you get stuck making a decision.

I am not a programmer, designer, engineer, or anyone in a field relevant to autonomous vehicles, but I do know that I only use my eyes to drive. Maybe the solution for autonomous vehicles isn't a variety of sensor types, but more cameras in more positions, and perhaps even multiple cameras in approximately the same position for depth perception. The only other sensor I could see being genuinely useful, and only in extreme weather, is good old radar. Even radar has its limits, but I suppose it would be preferable to stop because of a sensor conflict than to slam into another vehicle, pedestrian, or object.

I am no expert, but it seems to me that the Tesla AI team behind FSD would know what they need to build a reliable and safe autonomous vehicle fleet. So far, cameras alone don't appear to be the limiting factor; rather, it is the number of cameras, where they are located, and the processing power on the vehicle to perform the task. Very exciting times ahead, especially once we see Robotaxi in action and find out whether it lives up to everything it has been rumored to be capable of.