Jiwon's Alcove

Is Technology or Morality Foundational?

Feb 27, 2023

Given recent technological breakthroughs in AI, everyone seems to be talking about technological advancement and its societal implications. A particularly thorny topic is the ethics of popular models, which were trained on copyrighted data and are capable of reproducing the style of human creators with far less effort.

Given this milieu, I've been thinking about how moral principles and technology interact with each other. My current attempt at describing the interaction, without any value judgment, can be summarized as follows.

Generally, the technology available at a given time informs moral frameworks, not the other way around.

In other words, technology is more foundational than moral frameworks.

My argument hinges on a few observations which, while certainly up for debate, seem like reasonable assumptions to me.

  1. People tend to make decisions via habits and emotions, then use logic to justify those decisions, rather than reasoning purely logically.
  2. If a piece of technology proves its utility, its adoption over time tends to follow the familiar S-shaped adoption curve.
  3. For as long as humans have existed, technology has advanced, since adopting it is an optimal strategy for an individual's or a group's fitness.
  4. People tend to view whatever technology exists when they're young as the normal state of the world.
  5. Society requires newborns to sustain itself.

The full argument is as follows.

Society requires newborns to sustain itself. As such, unless a catastrophic event occurs, human society has a tendency to continue. As long as human society continues to exist, technology will advance, since adopting new technologies that provide greater utility than their predecessors maximizes an individual's or a group's competitive edge. A new technology that provides genuine utility is therefore first used and developed by early adopters, then rapidly spreads to most of society beyond a certain inflection point. Thus, for every new piece of genuinely useful technology, there will be a future point in time at which a new generation is born with the technology already widespread. Since people tend to view whatever technology exists when they're young as the normal state of the world, and since human decision-making is based on emotion rather than logic, that generation's moral framework will tend to echo the world it perceives as normal, even if the technology is a net negative. Therefore, that generation's moral framework will deem the technology acceptable.
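As an aside, the adoption-curve claim in observation 2 is easy to make concrete. Here is a minimal sketch, assuming a standard logistic (S-curve) model of adoption; the midpoint and growth-rate parameters are arbitrary illustrative values, not measurements of any real technology.

```python
import math

def adoption_fraction(t, midpoint=10.0, growth_rate=0.8):
    """Logistic (S-curve) model of technology adoption.

    t: years since the technology's introduction.
    midpoint: year at which adoption reaches 50% (the inflection point).
    growth_rate: how quickly adoption accelerates around the midpoint.
    """
    return 1.0 / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# Adoption crawls among early adopters, then spreads rapidly past the inflection point.
for year in range(0, 21, 5):
    print(f"year {year:2d}: {adoption_fraction(year):.1%} of society")
```

The only point of the sketch is the shape: adoption is slow at first, then explodes past the inflection point, which is exactly the moment after which a generation can be born into a world where the technology is already everywhere.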

Let us also echo another common observation, Arthur C. Clarke's third law: any sufficiently advanced technology is indistinguishable from magic.

Ultimately, these observations mean that future generations will most likely deem acceptable technology that seems absurd to us. Put differently, some technologies that will exist at the consumer level in the future would be considered morally gray or outright unacceptable today.

Some imaginable examples include perpetual life, genetically removing harmful mutations from the population, machine governance, and advanced AGI that displaces almost all human jobs and creations. Heck, humanity as we know it may go extinct.

This leads to a difficult question.

How can we be empathetic toward future humans, whose moral framework dismisses everything we currently consider "human" as outdated nonsense?

Well, I don't know. If I knew, I would be a famous philosopher.