Ethics in the Absence of Artificial Intelligence: What don’t we talk about when we talk about the ethics of AI?
Author: Evie Fowler
When we talk about ethics in AI, we tend to talk about its failures. We repeat embarrassing stories and gotcha moments, evidence that the wonders of the technological age are just as blunder-prone as the ways of yore. We talk about the risks of putting too much power in the hands of machines and algorithms few people understand.
We don’t talk about the alternative: avoiding AI, clinging to traditional decision-making methods, and perpetuating direct human control over more of our built environment. We don’t talk about who designed our existing methods of resource allocation, who benefits from them, or what their motivations are.
In the following, I will discuss the ethics of leaning into or away from AI in terms of four key points in the AI life cycle: data collection, data storage, data use, and resource allocation.
Transactional Data Exchange
Much personal data collection happens with the expressly given consent of the surveilled parties. Perhaps few have ever actually read a Terms of Service agreement in full, but the tradeoff is generally understood: increased surveillance in exchange for the use of an app or service (or for personalized retail discounts, access to online media, or some other convenience). Sometimes increased surveillance is explicitly the goal, and sharing data is understood to be the tradeoff: we’d like to know exactly how active we are, even if it means our step counts will be shared with sneaker manufacturers across the world.
Consent to data collection is thus given expressly, but perhaps not freely. Some apps and services offer paid subscriptions to (partially) opt out of data collection, but most will only serve customers who agree to the full release. It’s difficult to imagine participating in modern life without making the tradeoff: it might be possible to live without social media and store loyalty cards, but who can really plan every trip with a paper atlas? Secure and maintain employment without a cell phone contract? Access medical care without signing a single digital release waiver? In many cases, there is no meaningful way to access these services without forgoing data rights.
So, the cost to the individual of rejecting surveillance and data collection can be severe. The cost to society of doing so is less clear. Collecting personal data allows the apps and services that keep the world turning to operate without directly charging fees. It creates space in the market for new ideas to operate before they’ve been able to attract a paying customer base. Prohibiting the collection of personal data in the course of business would likely eliminate many of these services, and create larger direct monetary costs to access the ones that survive. To accept the role of AI and data collection in our lives is to privilege the value of innovation over the full societal participation of those with heightened concerns about privacy and data collection. To reject it is to privilege those concerns over innovation, and over access to the fruits of that innovation for the economically disadvantaged.
More ambiguous is the collection of what is sometimes called data exhaust. Data exhaust is a byproduct of individuals’ online and real-life movements and actions. The term refers to things like the temporary log files generated by websites when they are visited, or cell phone tower ping logs that can be used to reconstruct an individual phone user’s movements. The Big Data Revolution promised to convert data exhaust into insights that would change the world but has largely not delivered. Instead, data exhaust is mostly monetized in the form of ad sales informed by demographic and user history information.
It’s difficult to argue against businesses keeping user logs, but the collection of data exhaust can still feel just as wrong as a desperate marketer sifting through household trash for insight into a lead. To embrace this aspect of AI is to abandon all hope of individual privacy, or at least the perception of it. To reject it is merely to abandon the bigger promises of the Big Data Revolution.
Having consented to the collection of their data, savvy customers will wonder how it is being stored. Much of the individual risk associated with an AI-driven world comes from data leaks that allow unauthorized parties access to personal information. A network of vendors and contractors who all need access to sensitive information to offer their services dramatically increases the risk of a leak, simply by increasing the number of entities with access to the data – even if they are all taking steps to store it securely.
It should be noted, however, that eliminating this type of data storage doesn’t completely eliminate the risk of personal data exposure. Forbidding the storage of personal information means that it needs to be re-entered each time it’s needed, creating additional security concerns at the point of use. It should also be noted that the digital nature of the data storage isn’t at the root of the problem here. Pen-and-paper systems aren’t inherently secure either – keeping them secure requires everyone’s participation via things like clean desk policies, printer pickup policies, appropriate file destruction, and more.
Assuming a user has agreed to having their data collected and stored, let’s consider how and why other entities might use that information.
For the Benefit of the Individual
Some altruistic organizations might use their collected data purely for the benefit of the individuals who generated it. Perhaps they are developing low-touch screening tools for various diseases, and are passing that risk information back to the individual. Perhaps they are using weather sensor data to develop early warning systems for natural disasters, or using email histories as fodder for text-to-speech systems that will improve accessibility for people with disabilities. This is an unambiguous ethical good (though it should be subject to the same fair use guidelines as any other research project).
For the Benefit of the Collector
More often, however, organizations collect and use personal data explicitly for their own benefit. Retailers evaluate shopping patterns to establish what prices and offerings will most increase sales. Airlines evaluate email histories to automate their customer service functions and operate at a lower cost. Here we return to the paradigm described above – if businesses use AI to provide goods and services at a lower cost than they otherwise could, does that constitute a public good? How should that public good be balanced against privacy and fair use concerns? If instead businesses use AI to boost their profits by lowering operational costs without dropping prices, are the individuals who supplied data to drive that innovation entitled to part of the benefit? How else does the calculus change?
Of course, the simple development of an AI model is rarely the ultimate goal of this type of pipeline. The goal is to use these models to replace some other methods of allocating resources – prioritizing patients for medical testing, selecting applicants for loan approval, setting insurance rates, and so on. The risk of embracing new technology in these use cases is in not fully understanding the models in use and in accidentally replicating and propagating biases. There is a growing awareness that AI-driven resource allocation systems must be examined for fairness and equity in order to mitigate this.
Interestingly, there has been very little push to examine the legacy systems being replaced in the same way. These systems are not immune to the sorts of biases that plague AI models – after all, they are human-generated as well. Furthermore, they are often rooted in eras before modern attitudes toward bias took root. It’s tempting to view these more established resource allocation methods as natural, or as a fair standard against which new systems can be judged. This could not be further from the truth. It is crucial to examine all such systems for bias and inequity, whether or not an alternative is on the horizon. To shy away from the development of AI in this way is to commit to maintaining the status quo with whatever biases it entails.
Conversations around the ethics of AI often overlook a fundamental question: what are the ethics of not using AI? Addressing this question in the context of data collection, data storage, data use, and resource allocation can help us not only better understand our existing systems but also improve them ethically as AI-based solutions take hold.