Welcome to another edition of the S.A.D newsletter — an exploration of our software, algorithms, and the data-driven world that surrounds us.
In recent times, the term “algorithm” has become part of our everyday conversations. The advent of AI-powered tools like ChatGPT has brought algorithms into the spotlight, shifting them from a niche technical and academic concept to a subject of broad public interest, particularly through social media. While this heightened awareness has improved our understanding of what algorithms can do, it has also given rise to misconceptions and concerns. Even though everyone seems to be an expert on AI these days, most of us still do not really have a clue about how these systems work.
One prevalent misunderstanding involves attributing agency to algorithms. The ability of Generative AI, like ChatGPT, to generate human-quality text has led some to believe in the existence of independent thoughts and feelings within algorithms. This anthropomorphisation overlooks the fundamental programming and training data, dismissing the socio-technical context in which algorithms exist.
The hype surrounding AI, fuelled by sensationalist headlines and futuristic predictions, has contributed to curiosity but also to misunderstandings. Concerns about existential risks and job displacement have raised awareness, yet there is a tendency to overestimate algorithm capabilities while underestimating human agency and the socio-technical context.
Despite these concerns, discussions about algorithms persist, especially in the realm of social media platforms. People are now more aware of tracking, privacy, and the surveillance economy. However, while users broadly understand how platforms function, their comprehension of the algorithms governing their experiences often remains limited, fostering a sense of helplessness and manipulation. But there is some hope, if we decide to engage with the details.
Anthropologist Nick Seaver offers a unique perspective on algorithms and culture. In his book “Computing Taste: Algorithms and the Makers of Music Recommendation,” Seaver explores how algorithms shape our musical preferences. Drawing on ethnographic fieldwork with engineers at companies like Spotify, he finds that these engineers are not neutral conduits of data but active participants in shaping our musical experiences: their algorithms reflect their own tastes and biases, as well as the broader cultural context in which they operate. Seaver also delves into the “captivation metrics” frequently employed by widely used recommender systems. Treating recommender systems as traps means delving into the users’ mindset; for a trap to be successful, its creator must comprehend and align with the target’s worldview and motivations. In this view, the autonomous agency of the user is not nullified but strategically leveraged.
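Seaver does not spell out a formula for captivation, but to give a flavour of the kind of signal such metrics track, here is a minimal sketch of a hypothetical retention-style measure (the field names, the weights, and the 30-minute target are my own illustrative assumptions, not anything from the book):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ListeningSession:
    tracks_played: int        # recommended tracks the listener started
    tracks_skipped: int       # how many of those were skipped early
    minutes_listened: float   # total time spent in the session

def captivation_score(sessions: List[ListeningSession]) -> float:
    """Hypothetical 'captivation' score: how reliably recommendations keep a
    listener engaged rather than skipping away. Illustrative sketch only."""
    if not sessions:
        return 0.0
    played = sum(s.tracks_played for s in sessions)
    skipped = sum(s.tracks_skipped for s in sessions)
    # Share of recommended tracks listened through rather than skipped.
    completion = (played - skipped) / played if played else 0.0
    # Average session length, normalised against an arbitrary 30-minute target.
    avg_minutes = sum(s.minutes_listened for s in sessions) / len(sessions)
    stickiness = min(avg_minutes / 30.0, 1.0)
    return 0.5 * completion + 0.5 * stickiness

# A listener who rarely skips and stays roughly 25 minutes per session.
history = [ListeningSession(12, 2, 26.0), ListeningSession(10, 1, 24.0)]
print(round(captivation_score(history), 2))  # ~0.85
```

The arithmetic is trivial; the framing is the point. A score like this measures how well the trap holds, which is precisely why building it requires understanding what the listener actually wants.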
Many anthropologists are currently exploring ethnographic perspectives to comprehend the role of algorithms in our world. They assert that algorithms operate within “algorithmic ecologies” across various modalities, including imaginaries, infrastructures, interfaces, identities, and investments. These modalities reveal the full complexity of algorithms, although this nuanced understanding has yet to permeate public discourse.
Similarly, Ed Finn’s “What Algorithms Want” (2018) delves into the philosophical implications of algorithms, questioning notions of agency and their impact on human values. Although Finn’s book predates the GenAI era, his exploration of algorithmic imagination remains pertinent. He also examines the concept of arbitrage, ranging from high-frequency trading (HFT) in the stock market and Bitcoin to social media engagement, investigating how “algorithmic arbitrage” builds upon existing social relationships but scales and shifts them. When you use a platform to view images, watch videos, or listen to a song, you are essentially trading your data for the free service provided. This becomes an arbitrage situation as the company exploits the difference in value between your data and the service you receive, all while concealing this behind abstraction to simplify the transaction. According to Finn, we are drawn to these simplifications through the “romance of clean interfaces and tidy ontologies.” This, in part, contributes to the success of ChatGPT:
These companies [Netflix, Uber, Google, Amazon] are engaged in a form of algorithmic arbitrage, handling the messy details for us and becoming middlemen in every transaction. The role begins as something like a personal assistant or a general contractor but gradually evolves into a position of grand vizier or dragoman of the Porte, exercising the power not merely to enact our decisions, but to control the decision pathways, the space of agency. The economies of abstraction depend on an aesthetics of competence, trust, and openness to build the kind of rapport that such intimate forms of algorithmic sharing require.
This openness is not about altruism. Abstraction conceals much more. Finn discusses how high-frequency trading (HFT) altered the notion of trading by removing humans from the trading floor:
…these systems operate in open competition with humans and one another, and they are gradually transforming the broader movement of capital. HFT arbitrageurs build their advantage through complex geographical manoeuvres, by locating their servers and fiber-optic lines a few feet closer to the exchanges’ central servers than their competitors, or leasing communication lines that shave a few miles off the best direct signal pathway between two points on the financial grid. While they are traders, they almost never lose money, since their form of arbitrage always depends on superior information and they do not arbitrage risk like more traditional brokers. They never hold stocks at the end of the day, only cash.
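To make that geographical advantage concrete, here is a rough back-of-the-envelope sketch. The route lengths are invented; the only real constant is that light in optical fibre travels at roughly 200,000 km per second, about two-thirds of its speed in vacuum:

```python
# Back-of-the-envelope: how much head start does a shorter fibre route buy?
# The route lengths below are invented for illustration.

SPEED_IN_FIBRE_KM_PER_S = 200_000  # roughly 2/3 of the speed of light in vacuum

def one_way_latency_ms(distance_km: float) -> float:
    """Pure propagation delay over fibre, ignoring switching overhead."""
    return distance_km / SPEED_IN_FIBRE_KM_PER_S * 1000

standard_route_km = 1200   # a competitor's path between two exchanges
shorter_route_km = 1190    # the same path with "a few miles" shaved off

advantage_us = (one_way_latency_ms(standard_route_km)
                - one_way_latency_ms(shorter_route_km)) * 1000
print(f"Head start from the shorter route: {advantage_us:.0f} microseconds")
```

Ten kilometres of fibre buys about fifty microseconds, an eternity at the timescale on which these systems race one another, which is why the passage above treats geography itself as the arbitrage.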
In essence, this is cultural arbitrage: nothing too unfamiliar, but now aided by the magic of algorithms. It shifts and replaces the structure of an ecosystem. In the case of stock trading, it’s no longer about trading shares; it’s about the process of time-space compression. Similar shifts occurred when Bitcoin arrived. It’s not the decentralisation or the public ledger that we should be paying attention to; it’s the “programmable money.”
The central tenets of Bitcoin’s system for creating value epitomize a similar shift in all corners of cultural production, where daily business may seem the same but the majority votes, central authorities, and validation structures are being subsumed by algorithmic processes.
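To give a flavour of what “programmable money” means in practice, here is a toy sketch of a conditional payment, the kind of rule that Bitcoin-style systems let you attach to value itself (the class, the time-lock condition, and the wallet name are my own illustrative inventions, not Bitcoin’s actual scripting language):

```python
from datetime import datetime
from typing import Callable

class ConditionalPayment:
    """Toy model of programmable money: value that only moves when an
    attached rule evaluates to true. Real systems express such rules in a
    dedicated script language; this is just an illustration of the idea."""

    def __init__(self, amount: float, recipient: str, condition: Callable[[], bool]):
        self.amount = amount
        self.recipient = recipient
        self.condition = condition
        self.settled = False

    def try_settle(self) -> bool:
        # The transfer happens only if the programmed rule is satisfied.
        if not self.settled and self.condition():
            print(f"Released {self.amount} to {self.recipient}")
            self.settled = True
        return self.settled

# Example rule: funds unlock only after a given date (a simple time-lock).
unlock_after = datetime(2030, 1, 1)
payment = ConditionalPayment(0.5, "supplier-wallet", lambda: datetime.now() >= unlock_after)
payment.try_settle()  # returns False until the date passes, then releases the funds
```

The day-to-day act of paying looks the same, but the validation has moved from a human authority into the rule itself, which is exactly the shift the quote above describes.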
A recent concept, algorithmic pluralism, aims to counteract monopolistic tendencies by promoting choice and diversity in AI systems. This approach empowers users to select the biases they interact with, fostering a more nuanced interaction with AI. However, challenges like reduced accountability and market consolidation must be addressed for algorithmic pluralism to realise its potential benefits.
In conclusion, algorithms have transitioned from a technical concept to a cultural force shaping our lives. Amidst misconceptions and concerns, algorithmic pluralism emerges as a promising solution, fostering an informed and equitable digital future. Whether we regard algorithms as traps or simply remain ignorant of their workings, navigating this landscape requires a balanced approach. As Ed Finn puts it, it’s the dance between familiarity and forgiveness:
As technical systems, algorithms have always embodied fragments of ourselves: our memories, our structure of knowledge and belief, our ethical and philosophical foundations. They are mirrors for human intention and progress, reflecting back the explicit and tacit knowledge that we embed in them. …The promised salvation of algorithmic theology stubbornly remains in the distant future: the clunky, disjointed implementation of the computation layer on cultural life leaves much yet to be desired. Algorithms still get it wrong far too often to make a believable case of transcendent truth.
While this overview doesn’t delve into bias and justice, “Unmasking AI” by Joy Buolamwini is a recommended read for those interested in exploring these aspects further.