Today’s entry: two questions/observations. On Uber, what is to stop drivers from coding discriminatory practices into passenger ratings? On facial recognition, is the 10% error rate an incorrect ID or an inability to ID? There's a big difference in the implications.
Bottom Line: These two stories were teed up on yesterday's show. Uber’s new policy of banning passengers with especially low ratings from drivers does open itself up to potential pitfalls. First, let’s look at why they’re doing it: driver safety. Uber’s done a lot to improve the safety of passengers, but until now hadn’t done much to protect drivers. I’m not saying this idea is the magic bean, but what they’re trying to do makes some sense. After violent incidents, we often learn there were prior warning signs; it’s rare that someone goes from model citizen to violent attacker overnight. This might help Uber catch a potential problem passenger before they harm a driver. I’m willing to give them the benefit of the doubt for now, and I'd want some type of effort made if I were the one driving. But to the crux of your question: could biased drivers lead to unfair passenger ratings? Certainly possible. It’s not implausible to think a racist driver could rate all passengers of a race they’re prejudiced against poorly, as an example.
The upshot is this: similar risks exist in all aspects of our society, and there are two motivating factors for Uber to not let this get out of hand. The first is the need for passengers. Uber has never turned a profit and is a long way from turning one; they need all the passengers and revenue they can get. They’re not positioned to err on the side of banning passengers. I’d expect them to aggressively monitor low ratings of customers by drivers, especially at the point where they’d be banning someone from using the service. Given the data they already collect, it'd be straightforward for them to pick up on rating trends from individual drivers if they existed.
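The kind of monitoring described above can be sketched simply: compare each driver's average rating of passengers against the fleet as a whole and flag outliers for human review. This is a minimal illustration with made-up data and a made-up threshold, not Uber's actual system.

```python
# Hypothetical sketch: flag drivers whose average rating of passengers
# is unusually low compared to the rest of the fleet.
# All driver names, ratings, and the threshold are invented for illustration.
from statistics import mean, stdev

ratings_given = {  # driver -> ratings they gave to passengers
    "driver_a": [5, 4, 5, 5, 4],
    "driver_b": [5, 5, 4, 5, 5],
    "driver_c": [1, 2, 1, 2, 1],  # outlier worth a closer look
}

averages = {d: mean(r) for d, r in ratings_given.items()}
fleet_mean = mean(averages.values())
fleet_sd = stdev(averages.values())

# Flag anyone more than one standard deviation below the fleet mean.
flagged = [d for d, avg in averages.items() if avg < fleet_mean - fleet_sd]
print(flagged)  # with these example numbers: ['driver_c']
```

A real system would weigh far more signals (ride volume, passenger demographics, complaint history), but even this crude check shows how a consistently biased rater stands out from the pattern.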
As for the facial recognition question: official government research demonstrates that high-level AI is accurate 90%+ of the time in nationwide searches. So, what about the 10% of the time it’s not accurate? The answer is generally a false positive: the system matches an image to the wrong person, rather than simply failing to find a match. That is problematic for the obvious reasons, and it’s why it’s important to rely on more than the AI alone. But do I think law enforcement should be able to use it as a tool? Without a doubt.
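The scale of that 10% is worth a quick back-of-the-envelope calculation. The search volume below is a hypothetical number, not from the article; only the 90% accuracy figure comes from the text.

```python
# Hypothetical illustration of why a 10% false-positive rate matters at scale.
searches_per_year = 100_000  # assumed number of nationwide searches (invented)
accuracy = 0.90              # "accurate 90%+ of the time" per the article

false_positives = searches_per_year * (1 - accuracy)
print(f"Hypothetical wrong identifications per year: {false_positives:,.0f}")
```

With these assumed inputs that's 10,000 misidentifications a year, which is exactly why a human step beyond the AI match matters.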
Submit your questions by one of these methods.
Facebook: Brian Mudd https://www.facebook.com/brian.mudd1
Photo by: DON EMMERT/AFP/Getty Images