TL;DR Using Generative AI is an ethical minefield that actively degrades societal growth and human creativity.

The Ethics of Generative AI

There is a lot to unpack when discussing Generative AI, and a lot of arguments that have already been made. For instance, Hank Green has a video that discusses the water use of AI better than I can. The power waste of Generative AI is likewise a discussion about the ethics of environmental protection and the conservation of resources for the future.

I am only well educated in the power argument and educated enough to be dangerous in the water and environment arguments.

I am, however, an expert in Information Science, so today we’re going to talk about that. The use of generative AI has problems with information sovereignty, societal control, bias, technological access disparity and the wealth gap, and economic wealth consolidation. It is, quite literally, problematic in so many ways that it is difficult to pick which ones to discuss.

A further note: this essay discusses AI solely in the context of Generative AI driven by Generative Pre-Trained Transformers as postulated between 2017 and 2018. It is not a critique of classical AI, Machine Learning, or Data Analytical models that can also be colloquially defined as Artificial Intelligences.

Information Sovereignty, or the responsibilities that come with Information Freedom

AI training is built on the works of hundreds of thousands of humans. That is not an exaggeration. The training that goes into these models is absolutely massive, and contains everything from memes on Reddit to full-on academic texts. Many of these humans have come out and expressed explicit discontent with the use of their information in this way. The information, knowledge, and wisdom these humans have created is then consolidated into a mathematical model designed to create an average of all of the inputs, an average tantamount to mediocrity. This output hopefully approximates what all of these humans would have put together.

This does not create free access to information; rather, it exploits the freedom of information for the profit of the few owners of the AI systems. (More detail on that to come in later sections.) When a human releases information to the world, ideally, that information should be freely accessible if it is to contribute to the betterment and growth of the societies we live in. Pretty much every sociological theorist from Machiavelli to Kant agrees that there is a baseline requirement of the members of a society to not actively damage the society they live in. AI’s wanton absorption of information to regurgitate approximations of thought is doing just that. It creates, among other things, a consumerist mentality toward the very information and knowledge we gain from the world around us.

Human beings, to paraphrase John Locke, are constantly molded by the world around us. We start as blank slates and learn from what we are exposed to. AI limits that exposure, and worse: by creating a mediocre approximation of human experience, designed to appeal to the bell curve of humanity, it drives everyone toward the same thought patterns. It has the outcome, intentional or unintentional, of limiting the way humans think by exposing them to an onslaught of identical thought patterns. This, in turn, encourages the human to adopt that thought pattern to communicate with the machine¹, and thus a vicious synergy begins, leading to humans thinking like the machine. This creates a “you” that is no longer unique, but part of a greater hive mind sharing the same thought patterns.

We are AI. Resistance is futile.

Societal Control and Bias

These manipulations of thought patterns create a society that is malleable to complete control. If we look at Hobbes and Machiavelli, we can see why that might be bad. The Machiavellian types could manipulate these machines to instill thought patterns that create a societal drive to perform actions that are to the detriment of everyone but themselves. This can be, and is, accomplished by creating bias in the AI models themselves. An example of this is the blatant manipulation of the Grok AI by X to ensure that its output conforms with certain political and cultural beliefs. These outputs are then fed to users who may have questions or thoughts they want answers to, and all of the information these human users learn from, provided by the Grok AI, is thereby tainted. It is not unbiased or proper learning. It is biased learning with political or cultural ends in mind. That is, practically, the definition of indoctrination. It encourages the user to adopt a set of beliefs while not only discouraging critical analysis, but actively preventing it.

In addition to these intentional biases, access to information is limited by happenstance. By training these models only on the accessible information of the internet, their builders limit the voice of those in cultures and societies around the world without strong technological access. If the information is not online, the information does not make it into the training models. This creates a further insidious bias: that the only cultures with a voice are those that believe in, and regularly use, the tools that build profit for a global technological elite.

The Technological Access Gap

That voice-limiting gap in access to participate in the model training is further compounded by the actual ability to use the models. These models exist, largely, as online-accessible tools made by giant corporations. If one does not have stable or regular access to the internet, use of the Generative AI models is a practical impossibility. Not only is a large part of the global community being left out of the training of the model; they are also left out of accessing the model. If one concedes a use for these linguistic and artistic slot machines (I do not concede that, but for the sake of argument I will entertain the logic), they are not accessible to a large part of the world population.

This means that a technological elite is attempting to create, and claiming in some ways to have created, a world-changing technology that is, by the nature of its very accessibility, restricted from a large part of the world itself. The sheer depth of the information and knowledge gap that exists in the world today is already profound; this technology kicks that into overdrive and increases the distance between the information haves and have-nots of the world to immeasurable lengths. So far, this has only considered technological access. When we also consider that most of these AIs are paid models, or have paid requirements for most features, it gets even worse.

The Growing Wealth Gap, and how AI consolidates the economy even more

The models also cannot be used by those who do not have the disposable income to waste on the tokens necessary to pay the companies that make these models. Make no mistake, this is definitely not an accident. The goal of these technologies is to create profit at massive scale. These companies openly claim that they will generate record profits for their investors, and that they will do so by monopolizing information. The goal is to consolidate economic power and wealth into the hands of the owners of these tools. I do not feel a need to elaborate too much on this, as it is, apparently, a feature and not a bug in the capital economies fawning over these information monstrosities.

One of the ways they claim they will do this is by eliminating the need for human workers in entry-level roles. By doing so, they are openly admitting to a goal of removing jobs from an economy, in what I, as an amateur in the study of economics, consider a blatant destruction of economic velocity. Removing jobs from an economy in a society that demands that all have jobs to survive is a morally reprehensible act. It intentionally creates a situation whereby tens to hundreds of thousands will be left in abject poverty.

The Measurable Damage to the Growth of Society

Additionally, by creating these shared thought patterns, destroying creative modalities of thought, and removing crucial experience-granting entry-level positions, these models and the owners thereof are complicit in a measurable injury to society. For societies and cultures to grow, new ideas need to be had and certain roles need to be fulfilled. For instance, an organization requires some level of planning. Successful planning of, say, an environmental project requires someone who has experience actually working on environmental projects. If the intellectual grunt work of, say, doing the research to determine which species need to be protected and how that protection is done is outsourced to a machine prone to hallucinations (which we did not discuss here, but which are also fascinating), there will be no one experienced enough to fulfill these higher-level planning roles and ensure that the planning is done properly. An artist who never experiences the frustration of not quite being able to get the right color match, or the right brush stroke, on a piece of art will never grow into a great artist, capable of creating the masterpieces that define humanity.

Folks who write everything from fiction to design and operation manuals for critical and dangerous machinery will never learn how to edit and properly refine their own copy. The myriad ways this can lead to actual physical injuries cannot be overstated.

Society requires experienced individuals. If AIs are filling the roles that grant the experience, then there will be no one to fill the more experienced roles. We are actively creating a time-bound experience gap that will, inevitably, lead to economic, industrial, intellectual, and academic failure in the not very distant future.

What do we do from here?

All I can do, from my position, is scream into the void about the risks and problems we are facing. Change of this scale requires societal and individual action at a coordinated level. For my part, I do not use Generative AI, and I discourage others from using it. I try to educate folks on the risks so that maybe, just maybe, we can stop the damage in its tracks.

That said, Ned Ludd went further and actually destroyed the knitting frames that were stealing jobs … but knitting still became an industrial act. I fear that we will only change our ways on this path when we have gone so far as to be, as humanity, damaged beyond repair. We are sowing seeds for the future; what hell will we reap from them?

Footnotes

¹ Theories of communication will be addressed at a later time, but I suggest reading some of Jean Piaget’s work on the absorption of knowledge in education for more on this.