
AI: The Ouroboros Problem

Updated: Mar 11

by Vic Valera

It hurt itself in its confusion!

Pattern recognition is the core function of artificial intelligence. To recognize particular patterns and produce the desired product, AI must first be provided with a database. The sourcing of these databases is questionable: they rely primarily on grossly underpaid and hyper-surveilled data labelers to sort through the sheer quantity of raw information, much of which is writing or artwork stolen from human laborers. Many databases also exacerbate real-life inequities in representations of race, class, gender, disability, and other axes of marginalization. However, we have moved past the point where humans are the only ones generating databases for AI.

AI-generated material has now oversaturated online spaces with artificially written news pieces and images. An emerging trend occurs when AI consumes its own product, consumes itself, and then exacerbates any microscopic error or assumption introduced by the previous generation. Each subsequent iteration carries these flaws forward, ultimately ruining the final product. In both its production and its consumption, artificial intelligence acts mindlessly.

While AI cannot be faulted for its mindless consumption, the individuals and organizations that design it are not exempt from criticism. Much like the ouroboros, the growth and development of large-scale computing requires constant consumption of resources. As nebulous as the term AI is, physical materials are still required to power, build the infrastructure for, and conduct research on these systems. Data acts as a commodity and a means of control, and despite all the potential benefits of thorough AI implementation, we more often see state investment in AI-powered surveillance technologies. We can argue that the consumption of these resources serves some greater good of innovation, but those resources include the labor of people.

It is people, not AI, who are displaced and exploited, mining the metals required to build our machines under the threat of violence. This is made clear by what is happening in the Democratic Republic of the Congo as the U.S. acquires cobalt for the production of lithium-ion batteries, never mind that it is “cobalt dug by women and children with their bare hands” (NPR). It is people who are surveilled by AI facial recognition technologies so that more of them can be funneled into the prison system, where they then perform, without compensation, the labor that makes these AI systems possible. AI may be new, but the systems it empowers and the technologies it is built on are not. If the improvements to quality of life come only for a chosen few, then we cannot responsibly call that innovation.

There is substantial historical precedent for justifying colonization and resource extraction by pointing to the potential benefits of the technology built from those materials. If ethical considerations are treated as an impediment to progress, that illuminates the true nature of the progress we've devoted ourselves to. Why can't our progress be defined by the evolution of our ethics and politics? We can have circular, redundant conversations about the ethical implementation of AI, but who is it being implemented for? When conversations around AI are oversaturated with its newest “groundbreaking” ability, we forget that we've been at these ethical crossroads before.

Sam Altman, the recently reinstated CEO of OpenAI, claims that its unorthodox power structure and establishment as a non-profit differentiate it from other organizations less concerned with ethical qualms. Altman was allegedly ousted from his position as CEO due to, among other factors, a letter from researchers regarding the power of a project known as Q* in the pursuit of artificial general intelligence (AGI). Despite this, our culture of consumption does little to rebut Altman. His claims let us use ChatGPT comfortably, without the guilt of being partially responsible for the exploitation. OpenAI presents the ethical pursuit of AI as a priority, especially its potential to improve society as a whole. However, Altman's governing philosophy of “effective altruism,” a belief in accumulating wealth and influence to achieve greater societal good, is still contingent on gaining money and power.

The faux-rad dogma that occupies spaces of technological innovation survives because the underlying way of thinking, one regarded as the only correct way of thinking, has remained the same. We repeat cycles of exploitation for the “greater good.” While we currently have an ouroboros problem, the same symbolism can be applied to our solution. We need to recognize our own patterns. During this cycle of change, destruction, and re-creation, we can build more egalitarian worlds. If you haven't already, ask yourself: Who is AI made for? Who do we imagine making AI? Whose futures are we building and changing, and how are we building them?


Prompt: Write a college newspaper article on the ethical implementation of AI.

ChatGPT’s response: 

Title: "Navigating the Ethical Frontier: Implementing AI Responsibly in Higher Education"

Subtitle: Striking the Balance Between Progress and Ethical Considerations

In the rapidly evolving landscape of technology, Artificial Intelligence (AI) has emerged as a powerful tool with the potential to revolutionize various aspects of our lives, including education. As universities embrace the benefits of AI to enhance learning experiences, it becomes imperative to scrutinize the ethical considerations surrounding its implementation.

AI, when integrated into educational systems, has the capacity to personalize learning, streamline administrative tasks, and even contribute to groundbreaking research. However, with great power comes great responsibility, and ethical concerns should be at the forefront of any AI implementation strategy.

One key ethical consideration is transparency. Students and faculty alike must be informed about the use of AI in educational settings, including how it will impact their learning experience and the data that will be collected. Clear communication ensures that individuals can make informed decisions about their participation in AI-driven initiatives and fosters a sense of trust in the institution.

Moreover, universities must prioritize data privacy. As AI systems collect and analyze vast amounts of student data, it is crucial to establish robust security measures to protect this sensitive information. Striking a balance between utilizing data for educational improvements and safeguarding privacy is essential in maintaining ethical standards.

Bias in AI algorithms is another ethical minefield. If not carefully designed and monitored, AI systems can inadvertently perpetuate and even exacerbate existing biases present in society. Institutions must actively work to eliminate biases from AI algorithms, ensuring that decisions made by these systems are fair and equitable for all students, regardless of their background.

To address these concerns, universities should establish ethical guidelines for AI implementation. These guidelines should be developed collaboratively, involving input from students, faculty, and other stakeholders. By incorporating diverse perspectives, institutions can create a framework that considers a wide range of ethical considerations and mitigates potential risks.


