We are now more reliant than ever on digital systems to manage and control physical systems. We enjoy using systems like Ring to manage the front door and Nest to manage the HVAC system. Even my irrigation system has a digital interface and an app that makes turning on the sprinklers a breeze. Companies, of course, rely on digital systems and rules to make decisions in increasingly impactful ways. When digital systems find anomalies, like fraud, the recovery process kicks in automatically, often without a sanity or statistical check to examine whether that anomaly is even believable or should be treated as actionable. The costs of such errors can be staggering.
As we become increasingly reliant on AI and Generative AI systems to conduct investigations and then take action in such cases, we are losing a powerful set of skills in incorporating “confirming or conditional information.” Indeed, an AI rarely stops to ask whether the input information is correct, or whether conditional information might make the input information unbelievable (for some valid reason).
Consider the case of Hertz falsely reporting hundreds of its customers to law enforcement for automobile theft, with many customers being arrested at gunpoint. Check out this disturbing coverage at NPR. Public disclosures suggested that Hertz had one digital record that said a car was missing, while other internal systems were charging the same customer for rental agreement extensions, suggesting the car was on a valid rental extension. It is easy to see how a rules engine or an AI processing the missing-car data could conclude that a car was stolen when, in reality, the car was still being used by Hertz or its customer. The error comes from not leveraging confirming or conditional information. It cost more than 364 customers a great deal of suffering and pain and resulted in Hertz paying over $168 million in damages to those customers. At one point, Hertz even had lawsuits seeking half a billion dollars pending in these cases.
How can something so smart be so dumb?
Just consider the Hertz data: “Of the company’s 25 million rental transactions, 0.014% are reported stolen each year, or about 3,500, the company has said.” That suggests about 10 customers a day were committing automobile theft. It turns out Hertz had a huge false positive rate. Statisticians have long worried about false positives, and rightfully so: when the penalty of being wrong is high, the risk of a false positive is unacceptable. In the case of Hertz, simply looking at another internal system would have provided valuable information on the location and status of the car. Vehicles can also be located by GPS and even tracked via cameras and physical record locators. All of these are examples of conditional or confirming information that could explain a missing-car record.
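The arithmetic behind the quote, and the false-positive problem it hides, can be sketched in a few lines. The 25 million transactions and 3,500 reports come from the article; the true-theft base rate and the flag error rates below are purely hypothetical numbers chosen to illustrate how a rare event plus a small flagging error yields mostly false positives.

```python
# Back-of-the-envelope check on the figures quoted above.
transactions = 25_000_000
reported_stolen = 3_500

report_rate = reported_stolen / transactions
print(f"Reported-stolen rate: {report_rate:.3%}")       # ~0.014%
print(f"Reports per day: {reported_stolen / 365:.1f}")  # ~9.6

# Bayes' rule: even a tiny false-positive rate swamps a rare event.
p_theft = 1 / 1_000_000          # hypothetical base rate of real theft
p_flag_given_theft = 0.99        # hypothetical: flag catches real thefts
p_flag_given_no_theft = 0.00013  # hypothetical: small flagging error

p_flag = (p_flag_given_theft * p_theft
          + p_flag_given_no_theft * (1 - p_theft))
p_theft_given_flag = p_flag_given_theft * p_theft / p_flag
print(f"P(actual theft | flagged): {p_theft_given_flag:.2%}")
```

Under these illustrative numbers, fewer than 1% of flagged cars would be genuine thefts; the exact figure depends entirely on the assumed rates, but the qualitative lesson does not.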
In a world that focuses an AI on looking for discrepancies, it is critical to ask whether the input data can be believed. In statistics, we call this conditional probability assessment. Human minds are surprisingly good at assessing the impact of complex conditional information. For instance, we learn that travel on rainy days takes more time. Such knowledge for predicting travel times is built over years of experience, of course. When a loved one is late on a rainy day, we don’t immediately assume they were abducted; we explain their late arrival by the longer travel time due to the rain. This is a complex analysis that uses conditional information to explain the data with its most likely cause. With no information about the weather, of course, a long delay in someone coming home might look like a missing-person case or worse. The connection to the conditional information is critical. An AI or digital rules engine looking at one system sees a missing-car record at Hertz but might not consider many other conditional sources. Looking at conditional data might have found the cars and saved a lot of money, and it might also have been inexpensive.
So, as you build the next generation of AI systems in your company, consider some key points:
Take matters into your own hands. When renting a car, take a picture with a time stamp and geolocation. Confirming that a car has been turned in is now a critical customer step, and that photo provides exactly this kind of confirming information.
About Russell Walker, Ph.D.
Professor Russell Walker helps companies develop strategies to manage risk and harness value through analytics and Big Data. He is Associate Teaching Professor at the Foster School of Business at the University of Washington. He has worked with many professional sports teams and leading marketing organizations through the Analytics Consulting Lab, an experiential class that he founded and leads at the University of Washington's Foster School of Business.
His most recent and award-winning book, From Big Data to Big Profits: Success with Data and Analytics (Oxford University Press, 2015), explores how firms can best monetize Big Data through digital strategies. He is also the author of Winning with Risk Management (World Scientific Publishing, 2013), which examines the principles and practice of risk management through business case studies.