This blog post was inspired, of all places, by an observation from one of the hosts of BBC's Top Gear. The philosophical question put forward was: “if a self-driving car has no choice but to kill one of two people, who does it kill?” (very deep by Top Gear standards).
The scenario is this: a self-driving car is forced into a situation where an accident is unavoidable – for example, a tree falls in front of it with no time to stop safely – and the car has only two choices:
- Slam into the fallen tree and kill the driver
- Swerve onto the sidewalk and kill a pedestrian
Which does it choose?
As car manufacturers develop their self-driving vehicles, their decisioning algorithms will need to take this into account (arguably the first real-world implementation of Isaac Asimov's Three Laws of Robotics, formulated long ago – but we won’t go there). This would evolve to include multiple permutations that must be enumerated, weighed, judged, and executed in real time, for example:
- Spare innocent bystander over driver?
- Spare multiple passengers over single bystander?
- Spare children (as judged by height) over adults?
- Whichever action spares the most people?
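To make the idea concrete, the permutations above can be imagined as a weighted decision rule: each available action projects an outcome, and the car picks the action with the lowest "cost." The sketch below is purely illustrative – the outcome fields, the child weighting, and the scoring rule are all invented for this example, not anything an actual automaker has published.

```python
# Hypothetical sketch of a real-time decision rule for the scenarios above.
# All field names and weights are invented for illustration.

def choose_action(actions):
    """Pick the action whose projected outcome spares the most people,
    weighting children more heavily (one of the permutations listed above)."""
    def cost(outcome):
        # Fewer fatalities is better; a child counts triple in this sketch.
        return sum(3 if person["child"] else 1
                   for person in outcome["fatalities"])
    # min() is stable, so ties keep the first-listed (default) action.
    return min(actions, key=lambda a: cost(a["outcome"]))

# The fallen-tree scenario from the opening: both actions kill one adult.
actions = [
    {"name": "brake into tree",
     "outcome": {"fatalities": [{"child": False}]}},   # the driver
    {"name": "swerve onto sidewalk",
     "outcome": {"fatalities": [{"child": False}]}},   # a pedestrian
]
print(choose_action(actions)["name"])  # → brake into tree (tie, default wins)
```

Even this toy version shows the insurance problem: the tie-breaking order itself is a deliberate design choice someone has to make and defend.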
Which leads to the question for the insurance industry: who pays for the liability and damages?
In the beginning, automobile policies will continue to pay the liability for any event that causes a third-party death. As long as there is a manual override that allows a person to take control of the car, there will be a need for liability insurance as it exists in the market today. If a car becomes fully automated, with no human control other than destination input, this liability shifts to more of a product liability issue for automakers. It may be that insurance requirements will always, out of convention and perceived societal good, stay with the “driver,” but product liability will come into play in at least two scenarios: subrogation for third-party liability by auto insurers paying out claims, and direct claims for first-party injury to the passengers of the car.
The $64,000 question: will product liability cover losses where the car makes a deliberate choice? In general, deliberate acts are not covered by insurance. When an automaker programs a car to make specific choices about whom to save in a Sophie's Choice scenario (even if it’s matrixed, there is a decision pattern that must be followed), that is a deliberate act. Will the product liability respond? In the long term, probably, because coverage would be considered in society’s best interest, but no doubt there will be multiple court cases to settle the issue; it’s expected that both product liability insurers and automakers will initially deny liability. Regardless, this will add new wrinkles to the auto insurance industry, both for third-party liability and for new products around first-party protection – protecting passengers from their own automobiles and from their use of third-party autos. Either way, there will definitely be changes to both automobile and product liability policies.
How do insurers prepare? While there is no real place for insurers to “train” on this type of issue, Usage Based Insurance (UBI) offers insurers the opportunity to learn how to take the tremendous amount of data that automobiles will produce and turn it into pricing and product development opportunities. It’s likely that UBI products will morph into self-driving product models, and the data resulting from both – or better yet, the use made of that data – will separate the winners from the losers in the market. Once again, the ability to collect, analyze, and (most importantly) operationalize data is critical for insurers as these markets develop.
Download the 5 Steps to Optimizing Customer Value in Insurance eBook to discover how you can leverage predictive analytics to increase customer satisfaction, sales, retention, and revenues.