Five years ago, in The Data Revolution, Rob Kitchin defined “Big Data Ethics” as the construction of systems of right and wrong around the use of (in particular) personal data. The magnitude of data use, and its effects on things like elections and public policy, might have seemed exotic or quaint in 2014, but now we’re seeing companies like Google, Facebook, and others “setting up institutions to support ethical AI” around data use. Google recently created the “Advanced Technology External Advisory Council,” with a mission to steer the company toward the “responsible development and use” of artificial intelligence, including the ethics of facial recognition. The advisors are “academics” from the fields of ethics, public policy, and applied AI. Entrepreneur.com also reports that the council includes members from around the world.
It’s certainly a good time for companies to be conspicuously and conscientiously doing things like this. We’re learning that AI can often inadvertently (a strange word to use in this context) behave in ways that, if humans behaved so, we’d call “conspiratorial” or “collusive.” The Wall Street Journal recently reported (behind a paywall) on algorithms “colluding” to raise consumer prices unfairly. When competing algorithms were given price-maximization goals, they integrated consumer data to figure out where to raise prices, and out-competed one another in doing so.
But self-governance will always have limits, and those limits are not necessarily attributable to the bad intent of actors in the system. In the case of price-“colluding” algorithms, as Andrew White wrote, “[r]etailers have been using neural networks to optimize prices of baskets of good[s] for years, in order to exploit shopping habits.” Advances in AI simply allow the logic of price optimization to run its course without the intervention of retailers’ personal street wisdom about pricing.
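The mechanics of that logic are mundane. Here is a minimal sketch, assuming a hypothetical list of customers’ willingness-to-pay values inferred from shopping data (all numbers and names are invented for illustration, not any retailer’s actual method), of how a price optimizer “figures out where to raise prices” with no human judgment in the loop:

```python
# Toy price optimizer: given estimated willingness-to-pay (WTP) values
# inferred from consumer data, grid-search for the revenue-maximizing price.
# All figures are invented for illustration.

def optimal_price(wtp_estimates):
    """A customer buys if the price is at or below their estimated WTP."""
    best_price, best_revenue = None, 0.0
    for price in sorted(set(wtp_estimates)):
        revenue = price * sum(1 for w in wtp_estimates if w >= price)
        if revenue > best_revenue:
            best_price, best_revenue = price, revenue
    return best_price, best_revenue

# Six hypothetical customers' inferred willingness to pay:
wtp = [5, 7, 8, 10, 12, 15]
price, revenue = optimal_price(wtp)
print(price, revenue)  # settles on a price of 7, earning 35
```

Left to run, this objective happily raises prices on whichever segment the data says will bear them; nothing in it encodes fairness, which is White’s point about removing the human from the loop.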
And Facebook’s creation of similar advisory bodies seems not to have kept it from asking some new users to provide their email account passwords “as part of the sign-up process,” a pretty tremendous failure by that platform to read the room.
And finally, the teeth these advisory boards have will always be limited by the will of the companies that support them. In a world where people could realistically walk away from the monopoly-like holds of Google or Facebook, the findings of such groups would carry real weight, but walking away is proving practically impossible. In the end, such hopes also run into a basic tension: users want their preferences to matter, yet they are skittish about having their data mined, what some have called the “personalization and privacy paradox.”
The conversation among data scientists themselves may offer the best guide for ethical practice by corporations. Last year Lucy C. Erickson, Natalie Evans Harris, and Meredith M. Lee, three members of Bloomberg’s Data for Good Exchange (D4GX) community, published “It’s Time to Talk About Data Ethics,” in which they raise the “need for a ‘Hippocratic Oath’ for data scientists” and report on efforts to hold large conferences and symposia soliciting dozens of proposal papers on codes of ethics, from which working principles could be distilled. It’s scientists using something very much like the scientific method to develop ethics for their own methods. Not a bad model.