I used to volunteer at a community radio station (I had my own radio show for a while), where, from time to time, we had volunteers from a senior center nearby. I remember one particular gentleman, Bill, who was probably a septuagenarian then (this was the early 2010s). He used to greet me with the phrase “Working hard or hardly working?” That phrase and the humour behind it stayed with me, and it inspired the title of today’s post, which highlights a few issues in how we talk about ethics and AI.
A step back first. Before we talk about AI, we need to understand ethics and (information) technology in general. There are two prominent frameworks for ethical debates (a vastly complicated field of study, so this is a simplification). First, the impact-view approach: how technology creates new possibilities for humans, and how human beings face ethical questions they never faced before. Ethical issues here have to do with human beings and their actions and interactions. Deborah Johnson (author of one of the earliest books on computer ethics) points out that whereas in some cases actions “are simply instrumented through the human body,” in other cases such actions can be “instrumented through technology”. A somewhat different approach argues that because computer technologies, unlike previous technologies, are logically malleable, they give rise to new possibilities for human action. These new possibilities can, in turn, create vacuums in the normative rules and policies needed to guide the new choices for action that computers make possible.
Over the past few years, these philosophical discussions have been extended to software and data, and most recently to AI (for more in-depth reading, see this open access book by Bernd Stahl and this article published in AI & Society). There are also noteworthy works on bias and racism by Safiya Umoja Noble and Ruha Benjamin.
The trends in the AI ethics discussion follow a pattern, trudging between impact/harm and policy. Some of the proposals are techno-solution focused, some are social/policy oriented (here’s a 2020 article that discusses how the AI ethics field tends to be dominated by the “law conception of ethics”). There is even the idea of creating a “moral Turing test”:
One interesting example in the AI ethics literature is to postulate a “moral Turing test” where a system can be thought of as ethical if it can convince someone interacting with it that it is reasonably moral, such as how regular human interaction would accommodate ethical pluralism in human interactions.
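To make the idea concrete, here is a minimal, purely illustrative sketch of what such a test could look like. Everything in it is my own assumption rather than anything specified in the literature: the dilemmas, the judge callback, and the pass threshold are all placeholders.

```python
import random

# A toy "moral Turing test": a judge sees one answer per dilemma,
# drawn at random from a machine or a human, and guesses the source.
# If the judge cannot reliably tell them apart, the machine "passes".
# The 0.6 threshold and all names below are illustrative assumptions.

def moral_turing_test(dilemmas, machine_answer, human_answer, judge, threshold=0.6):
    correct = 0
    for dilemma in dilemmas:
        source = random.choice(["machine", "human"])
        answer = machine_answer(dilemma) if source == "machine" else human_answer(dilemma)
        if judge(dilemma, answer) == source:  # judge returns "machine" or "human"
            correct += 1
    accuracy = correct / len(dilemmas)
    return accuracy < threshold, accuracy

# Toy usage: a judge who guesses at random cannot distinguish the sources.
passed, accuracy = moral_turing_test(
    dilemmas=["Should you lie to protect a friend?", "May you break a promise to prevent harm?"],
    machine_answer=lambda d: "It depends on the harm involved.",
    human_answer=lambda d: "Usually not, but context matters.",
    judge=lambda d, a: random.choice(["machine", "human"]),
)
print(passed, accuracy)
```

Notice how much the sketch leaves unresolved: “passing” only means the judge was persuaded, not that the system is in any deeper sense moral.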
Even though there are vibrant debates in academic circles and the media, we are heading towards an unresolvable dilemma: are we making our AI systems more ethical (“artificially ethical”), or can AI handle ethics (“ethically artificial”)? In a way, we need both of these questions and much more (imagine a Möbius-strip type of thinking field that lets you slip into different terrains). A framework focusing on impact or policy does not provide the space for nuances. For example, think of software, whose operations hide under the logic of algorithms and code. How do we study software? The physical objects we deal with, such as cars or ATMs, are often open to scrutiny by the user (even though code is involved, the physical aspect makes such scrutiny easier). Software, on the other hand, is mostly not open to scrutiny by users, even when it is open source. Important assumptions and biases in software are obscured inside the proverbial ‘black box’, beyond the reach of ordinary users. Embedded in software code are complex rules of logic and categorization that may have material consequences for those using it, as the toy example below shows.
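As an illustration of how such rules stay hidden from the people they affect, consider a hypothetical eligibility screen. The field names and cutoffs below are invented for this example; a real system would be far more complex, which only deepens the opacity.

```python
# A hypothetical eligibility screen. The applicant sees only the final
# yes/no; the categorization rules below never surface to them.
# All field names and thresholds are invented for illustration.

def screen_applicant(applicant: dict) -> bool:
    # Embedded assumption: postcode works as a proxy for risk.
    high_risk_postcodes = {"12345", "67890"}
    if applicant["postcode"] in high_risk_postcodes:
        return False
    # Embedded assumption: a short employment history means unreliability.
    if applicant["years_employed"] < 2:
        return False
    return True

# The user experiences only the outcome, never the rules behind it.
print(screen_applicant({"postcode": "12345", "years_employed": 10}))  # False
```

Even with the source code in hand, spotting which lines encode value judgments takes training; scale that up to millions of lines and learned model weights, and the ‘black box’ problem becomes clear.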
We need a view of ethics that will open up the ‘black box’ and reveal the values and interests embedded not only in the final design but also in the process of development. For this, we need both technologists and humanists. Ed Finn talks about this in the conclusion of What Algorithms Want:
The humanities has long grappled with the story that started this book: the mythic power of language, the incantatory magic of words and codes. We desperately need more readers, more critics, to interpret the algorithms that now define the channels and horizons of our collective imaginations.
Others have echoed this sentiment: “reading” software, algorithms, and data requires special training (see the July 2021 article Reading datasets: Strategies for interpreting the politics of data signification by Lindsay Poirier, and the works of Ted Underwood on literary texts and machine learning). On the philosophical side, scholars like Luciano Floridi, Louise Amoore, and Carissa Véliz are doing amazing work that should be required reading for technologists. Historians like Mar Hicks are providing a valuable lens by examining our past. There are also people like Rumman Chowdhury, Director of Twitter’s META (ML Ethics, Transparency, and Accountability) team, who provide critical voices from within the industry.
I tend to favor the framework proposed by Amoore (in Cloud Ethics: Algorithms and the Attributes of Ourselves and Others), which suggests moving away from accountability, transparency, and legibility and focusing more on opacity, partiality, and illegibility:
In this book I propose a different way of thinking about the ethicopolitics of algorithms. What I call cloud ethics is concerned with the political formation of relations to oneself and to others that is taking place, increasingly, in and through algorithms. My use of the term cloud here is not confined to the redefined sovereignties and technologies of a “cloud computing era,” as understood by Benjamin Bratton and others, but refers to the apparatus through which cloud data and algorithms gather in new and emergent forms. The cloud in my cloud ethics is thus closer to that envisaged by John Durham Peters, for whom clouds are media in the sense that they are “containers of possibility that anchor our existence and make what we are doing possible”. […]
A cloud ethics acknowledges that algorithms contain, within their spatial arrangements, multiple potentials for cruelties, surprises, violences, joys, distillations of racism and prejudice, injustices, probabilities, discrimination, and chance. Indeed, many of the features that some would like to excise from the algorithm — bias, assumptions, weights — are routes into opening up their politics. Algorithms come to act in the world precisely in and through the relations of selves to selves, and selves to others, as these relations are manifest in the clusters and attributes of data. […] In a real sense, the algorithm must necessarily discriminate to have any traction in the world. The very essence of algorithms is that they afford greater degrees of recognition and value to some feature of a scene than they do to others.
I think what this framework captures is our ability to acknowledge, comprehend, and process our current situation, including everything that’s broken, from politics to supply chains to software. To get back to the phrase we started with, we might find some closure in asking a system, “Are you Ethically Artificial or Artificially Ethical?”, which might bring some lightness to our quest.