Can predictive analytics be made safe for humans?

Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law and consumer thinking about privacy.

As a technology, it may well save thousands of lives in applications like predictive medicine, but if it isn't used carefully, it may also prevent thousands of people from getting loans, for instance if an underwriting algorithm is biased against certain applicants.

In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.

It's based around the simplistic idea that a datum such as a customer's address shouldn't be shared with, say, a marketer without that user's knowledge.
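As a minimal sketch of that idea, the Python snippet below gates a hypothetical data-sharing call on recorded user consent. The record fields, purpose labels, and `share_datum` helper are invented for illustration; they don't come from any particular governance framework.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    customer_id: str
    address: str
    # Purposes the customer has explicitly consented to, e.g. {"billing"}.
    consented_purposes: set = field(default_factory=set)

def share_datum(record: CustomerRecord, datum: str, purpose: str) -> str:
    """Release a single field only if the customer consented to this purpose."""
    if purpose not in record.consented_purposes:
        raise PermissionError(
            f"Customer {record.customer_id} has not consented to '{purpose}' use"
        )
    return getattr(record, datum)

customer = CustomerRecord("c-123", "42 Main St", consented_purposes={"billing"})
share_datum(customer, "address", "billing")      # allowed
# share_datum(customer, "address", "marketing")  # raises PermissionError
```

The catch, as the next paragraphs show, is that this datum-by-datum view says nothing about conclusions a company infers from data it already legitimately holds.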

Predictive analytics is a fuzzy term, but it essentially means interpreting raw data and drawing new conclusions through inference.

Consider the famous Target incident, in which the retailer recommended pregnancy-related goods to women whose purchases fit certain patterns.

Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizer and washcloths, it signals that they could be getting close to their delivery date.
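To make that kind of inference concrete, here is a toy scoring sketch in the spirit of the reported Target approach. The product list, weights, and threshold are invented for illustration only; they are not Target's actual model, which was a far more sophisticated statistical system.

```python
# Toy illustration of pattern-based inference: weight a basket of purchases
# and flag shoppers whose recent buying resembles late-pregnancy stocking-up.
# Products, weights, and threshold are hypothetical, chosen for this example.
PREGNANCY_SIGNALS = {
    "scent-free soap": 0.3,
    "cotton balls (large)": 0.25,
    "hand sanitizer": 0.2,
    "washcloths": 0.15,
}
THRESHOLD = 0.6

def pregnancy_score(recent_purchases: list[str]) -> float:
    """Sum the weights of signal products seen in a shopper's recent basket."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in recent_purchases)

basket = ["scent-free soap", "cotton balls (large)", "hand sanitizer", "bread"]
score = pregnancy_score(basket)
if score >= THRESHOLD:
    print(f"score={score:.2f}: basket resembles the late-pregnancy pattern")
```

The point of the sketch is that none of these purchases is sensitive on its own; it is the inferred conclusion, derived from ordinary transaction data, that creates the privacy problem traditional data governance never anticipated.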

Even though the law hasn't caught up to this exponentially more challenging problem, companies themselves seem to be responding in the wake of Target's and Facebook's very public scandals.

