OK, let me take this explanation of AI one step further.
When training an AI, the programmer usually uses a reward/penalty approach (this is what's called reinforcement learning). The AI doesn't know which actions lead to which outcomes until it tries them. In the case of an AI driving a car, it has to drive into things before it can learn that driving into things results in a penalty. Basically, the AI doesn't know that something is bad until it does it, and there's no way to encode every good or bad thing into the algorithm ahead of time because of the sheer number of things the AI could do.
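To make that reward/penalty loop concrete, here's a minimal sketch of tabular Q-learning, one common reinforcement-learning technique (not how any actual self-driving system works). The two-lane road, the obstacle, the reward values, and the hyperparameters are all made up for illustration:

```python
import random

# Toy two-lane road of length 5 with an obstacle in lane 0 at
# position 2 (all invented for this example). The agent starts at
# position 0 in lane 0 and either drives straight or swerves into
# the other lane each step. It has no built-in notion that crashing
# is bad: it only finds out by crashing and receiving the penalty.

ROAD_LENGTH = 5
OBSTACLE = (2, 0)          # (position, lane) of the obstacle
ACTIONS = ["straight", "swerve"]

def step(state, action):
    """Advance one step; return (new_state, reward, done)."""
    pos, lane = state
    lane = 1 - lane if action == "swerve" else lane
    pos += 1
    if (pos, lane) == OBSTACLE:
        return (pos, lane), -10.0, True   # crash: penalty
    if pos >= ROAD_LENGTH:
        return (pos, lane), +10.0, True   # end of road: reward
    return (pos, lane), -1.0, False       # small per-step cost

# Q-table of estimated action values, all zero at the start --
# i.e., the agent initially knows nothing about good or bad actions.
Q = {((p, l), a): 0.0
     for p in range(ROAD_LENGTH + 1) for l in (0, 1) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):
    state, done = (0, 0), False
    while not done:
        # Occasionally try a random action; otherwise pick the
        # action currently believed to be best.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        # Nudge the estimate toward the observed reward plus the
        # discounted value of the best follow-up action.
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt

# Print the learned action for each state: it swerves before position 2.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)])
       for s in {(p, l) for p in range(ROAD_LENGTH) for l in (0, 1)}})
```

Notice the value table starts at all zeros. Early episodes end in crashes, and only those penalties teach the agent to swerve before the obstacle. That's the sense in which the AI learns something is bad only by doing it.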
It's all theoretical anyway. The Three Laws of Robotics were an Isaac Asimov construct, written some 75 years ago for his fiction about robots. They aren't in use anywhere except fiction.