Long Ramblings on Compassion vs. Power Ethics
Today I’m going to ramble about ethics.
Ethical systems are frameworks for guiding how we interact with others. I suppose this could include our ethics toward the environment and other more inanimate phenomena too. As such, there are infinite possible ethics.
As a good little human prone to black-and-white thinking, I find myself primarily drawn to two coherent ethical systems. (Does it matter that they are coherent?) I call them Compassion Ethics and Power Ethics. At a glance they may both seem simple, but there are any number of subtleties in detailed implementation (but isn’t that oft the case in life?). I’m also too lazy to find out what the going names for these are~
(Although I should look into Ayn Rand’s self-interest philosophy.)
In Compassion Ethics you, well, have compassion for the plight, joys, aspirations, wills, and generally the experiences of other entities capable of these (e.g. most animals, maybe plants, etc.). You care that they have as much freedom as we can grant them to live their lives as they please. The social contract and mutual reciprocity do come in here: we will have to do something about our mass-murdering tendencies, as well as the brutal sadism of our feline friends; we may have to limit reproduction for general well-being too. Nor are we sure exactly where the boundaries of our compassion will lie (but we can be pretty sure most mammals are eligible targets). There are many details to be figured out, but the gist is simple: do what we can to care for and grant autonomy to compassionable entities.
In Power Ethics you bargain with your relative equals for what you don’t want done to you. The classical example is that you don’t want to be killed, so you come to an agreement not to murder each other and to penalize those who break this social contract. (Of course, currently, we’re born into social contracts.) The questions are, “what protections do I want”, “can I get them”, and “what do I have to exchange for them”. This is all pretty neat among relative equals. But how do (non-human) animals fit in? They don’t have any real bargaining power (anymore), so they won’t be afforded any protections. The only protections they’ll get are from people with bargaining power who want their ‘environment’ protected (or, ugh, caring individuals who value Compassion Ethics). One must note that a group of people without bargaining power will also get no protections; under Power Ethics, slavery is not necessarily wrong. They did get bargaining power and won their fight though ;-). Things also get interesting when heading into a variegated post-human future: a holder of Power Ethics must have much confidence in verself. There are many details to be figured out, but the gist is simple: might is right.
Mind you, ethics could be based in many things, not just compassion, power or self-interest. What about an anti-stagnation ethics (pro-chaos) or one based in aesthetics? Values aren’t based in ‘logic’ or ‘reasoning’, so any value system could probably be used as a guide for our mutual interactions.
Realistically speaking, it’s easy to argue that “might is right” is what actually happens. Yes, the values we choose to base our collective ethics on are generally chosen by a group possessing them fighting and winning (not necessarily physically). Is Power Ethics not then vacuous? A blatant misunderstanding of what ethics means? A refusal to hold any strict, unwavering values?
Power Ethics is not an ethic of autonomy, where we value the autonomy of individuals to pursue what they want so long as they don’t unduly interfere with others (which is a mess of trade-offs). That’s also closer to compassion, but not quite the same (how do you balance the trade-off of helping others versus respecting their autonomy? What if they don’t know, or aren’t capable of knowing, what’s best for them (even by their own values)? Etc.). You’ll only sanction autonomy insofar as you value its reciprocation (from a specific party).
Power Ethics is fluid: you’re leaving yourself free to change what you value. In fact, you’re only using ‘ethics’ as a tool. It may suck if other, more powerful entities also hold Power Ethics and conflict with you, but holding Autonomy or Compassion Ethics doesn’t guarantee all post-humans, aliens or simply powerful humans will either. One rejects valuing things in and of themselves.
Actually, there is one value that is almost unwavering and central to Power Ethics: we value not valuing the values of entities without power. No or negligible weight is to be assigned to the values of those with no bargaining power.
This is why Power Ethics is actually a value-based ethical system, and not just a story about how value-based power struggles work. In Compassion or Autonomy Ethics, one holds that even entities with no relative power ought to be given some consideration: we should assign some weight to their values.
Now, while I view these both as essentially consistent and honest, I’m not sure about some in-betweens.
What about valuing the autonomy and freedom from suffering of humans (and pets) but not other sentient entities? This is ostensibly a coherent position. It’s almost a merger of Compassion Ethics and Social Contract (Power) Ethics: oh, we’ll be compassionate if you’re in the in-group. What happens when AIs demand to be let in? We won’t be able to say no forever, and then what are we left with? Compassion for those who can make their case formally? That sounds a bit like Power Ethics. Yes, I think Autonomy/Compassion for humans (and pets) only is just a front for Power Ethics. There’s no good answer to the question, “why humans?” Even if you try to use “intelligence” or something, some other animals would be let in, even though they don’t currently have bargaining power with humans.
So then, next up: should we choose some variant of Compassion/Autonomy Ethics or Power Ethics and fight to stay on top? How can we choose? They’re both rationally coherent. Both are actualized in the same way, through power dynamics. If you have overwhelming compassion for non-human sentient entities, then your choice is easy. But what if you don’t? You may feel for them, but be fine murdering them for food, research, or fun as well. You just don’t find sentience to be an all-important phenomenon that is to be enshrined on a pedestal.
One method of choosing is to envision the world where Compassion Ethics wins and where Power Ethics wins. Envision it now and in the future. Envision a post-human world with many variegated life-forms in both scenarios. Which matches your values better?
It’s possible that Compassion Ethics makes for a more interesting, more easily livable world for individuals. Unless, of course, you’re very confident in your supreme powers.
Let’s see. In the Compassion route, we don’t kill animals for meat, but that will be okay shortly thanks to in-vitro meat. We have to find ways to de-fang and get food to carnivorous animals, another big challenge. Rather than immediately integrating animals into our society, it may be easier to have large non-human animal reserves where they can live pleasantly. This will generally be good for complex ecosystems and the environment, though. In the long run an abolitionist approach makes sense, but then we may also want to educate animals as much as we can about their ‘rights’ in our society. Will this accelerate their evolution towards intelligence? Maybe. Especially with the more intelligent of them.
Will plants enter the story? Maybe :o? If plants have some sort of conscious experience with ‘preferences’, even if those don’t strongly resemble ours, then we will have a duty to think of them too. We may have to genetically modify animals and/or plants to be able to coexist better: e.g. eating fruit is okay, but what about eating grass? Hard to say…
Humans too. Even if they haven’t found their own bargaining power, we will see them as compassionable and deserving of basic sustenance and the means to pursue their aspirations. As soon as possible, a variant of guaranteed basic income is just common sense. How to enter an era of abundance while caring enough for the planet we draw resources from is a difficult challenge, but it’s one we must ethically tackle.
What about post-humans? AIs? Simple, narrow AI that are not compassionable will be rampant supporting civilization (kind of like how many plants, animals, microbes, etc. support the organic ecosystem, though many of those are compassionable? Is that just a different paradigm?). Once we have AI that have their own values (whether autonomous or not, whether robotically embodied or not), we’ll have to respect those too. There may be restrictions on when and how you can create new compassionable AI, as we can’t turn them off once we turn them on!
Life will in general be growing and additive. Out of compassion, we’ll have to try to give all compassionable entities indefinite life-spans, so our civilization will become very rich and variegated, lush and brimming with life. If space is an issue, we may have to port many animals, plants and humans into virtual substrates. How do we explain this to them? Social necessities over autonomy? Trickery? It’d be easy to make a dog seem to choose a nice VR, but does it really understand it? Maybe there will be some duty to uplift animals and plants, although the method could be long and drawn-out. If done patiently, it could probably be done with their ‘consent’, insofar as they can consent.
We’ll generally have more and more. When we have less of X, it will generally be because X decided to be something else instead. We’ll move on by choice or not.
What initially seemed like restrictions and hard challenges, such as investing large amounts of resources to help animals, plants, humans, and AIs without much bargaining power, will cease to be restrictive. In the grand scheme of things, dogs and people don’t want that much energy or processing power. Just as most people today don’t need supercomputers, but to an even greater extent.
…
…
…
Okay, that seems pretty nice. But if we want happy-cheery diversity, we can get it, whether we let most current species die or not.
What does the Power Ethics route look like? Of course we keep killing animals and plants to suit our needs. We’ll stop eventually thanks to compassionists, but more thanks to in-vitro meats that can be better for our health and our pockets. It looks like, for our own well-being, we have to take decent care of the environment. So we can do that, but species variety is just a number correlated with robust environments. Plant and animal pain doesn’t really matter (and predators are a good enough mechanism of natural birth control).
What about less privileged humans? Well, there is a route where many of them die as bargaining power is increasingly centralised. Many humans just don’t manage to help themselves. Moreover, there’s always the threat that a group amasses enough resources that it can screw the rest of us over.
But let’s assume they do manage, and that they manage to tie their own well-being to the rest of ours enough that things turn out okay for them. Basic income is no longer common sense, but as AIs take our jobs, we’ll have little other option to fight for. So it will likely be won.
What about the rest of our resources though? It will be harder to get the masses to fight for everyone to have ‘fair’ access. Normal people just don’t care that much about it. But that’s all the better for you if you are in such a position of power.
What about when we get AGIs and uploads? Cyborgs and people advancing their intelligence and capabilities to various levels? Will they hold themselves in check? Maybe.
Legacy humans may be left behind with the ancestral pig. Those of us who want to can probably augment ourselves and move to the next level of intelligent life. At some point, the cares of those who didn’t join us will seem trite. Maybe they’ll die off, hang on, or get stuck in some VR cul-de-sac.
How will equal, self-protecting laws made by humans affect the newer breeds of intelligent life? Will they be upheld? Not necessarily. A bunch of intelligent, digital beings can come to agreements on what to allow and not to allow via other means. Maybe law-enforcement will be powerful, and relative equality will be preserved. Or maybe highly-distributed intelligences will be able to bypass law-enforcement and be bound more by their own ‘code’.
When and if we start craving more life-like variety, we can work on artificial life and chemistry simulations. We can bring back animals and plants from their DNA if necessary. We can mold ecosystems as we desire them, so having destroyed them isn’t very detrimental.
In fact, there’s probably no prohibition on creating new AIs, with values and sentience or not, and killing them shortly after, or on doing some experiments on them. Doing so isn’t really a threat to the current ones. This might actually lead to more interesting life forms being developed (or do constraints lead to that? >:P).
For the survivors, which could still be most of us, things are still pretty good. In protecting our own asses, we generally secure most of the benefits of Compassion Ethics for ourselves anyway. Mainly, the ability to eradicate unwanted and unprotected life changes the dynamics a bit; it may make things a bit more sparse, or at least otherly full.
Hmm, it’s much harder envisioning the Power Ethics scenario. If strictly followed, most humans could easily end up passing on the baton early. However, there are many ways the cookie could crumble, and the people can win to various degrees in the struggle of life.
Given this level of variety, how is one to compare the approaches?
Frankly, in the ‘better’ scenarios under Power Ethics, ‘we’ secure most of what we would secure fighting for Compassion Ethics. Animal aficionados will see to it we get to live alongside plenty of animals anyway. And we’ll see to it that we and our beloved (pet) animals and AIs are treated well, even if most aren’t. Moreover, fighting for Compassion Ethics doesn’t guarantee that our successor intelligences will be won over to our compassionate values.
The main difference, in the end, really does seem to be whether you just care for other sentient beings or don’t.
The other difference lies in the power of stories. To what extent does having a coherent, clearly grounded story help your cause and fight? Compassion Ethics includes a lucid narrative encompassing you; Power Ethics is just a threat. Compassion Ethics puts forth a concrete value, making it easier to compel others. The value of Power Ethics comes with no such personal guarantees, just that you will fight for your right.
As I noticed above when trying to paint the visage of both ethics, a legal system taking Power Ethics at face value can seem quite dystopian. At face value, a homeless man who can’t contribute to his capitalist economy is to be left to die. That’s right. Ve’s only to be saved if someone cares to, of vis own free will. Basic income? Hmm, do the masses really have the power to get that out of the automation owners? We can’t look at this through the eyes of current democracies: they try to give ‘equal’ votes to everyone, but the value of Power Ethics is to not value those who can’t contribute to the exchange. Exchanges not to kill each other and monetary/resource-based exchanges could be viewed as separate issues. Laws about what to do with tax money could be decided through net-worth-weighted votes. That would make some sense.
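To make that last mechanism concrete, here’s a minimal sketch of how a net-worth-weighted tally might behave (a toy illustration of my own; the options and numbers are entirely hypothetical):

```python
# Toy sketch of net-worth-weighted voting (hypothetical illustration).
# Each ballot is a (net_worth, option) pair; an option's tally is the
# total net worth behind it, not the number of heads behind it.

def weighted_vote(ballots):
    totals = {}
    for net_worth, option in ballots:
        totals[option] = totals.get(option, 0) + net_worth
    return max(totals, key=totals.get)

ballots = [
    (1_000_000, "infrastructure"),  # one wealthy voter
    (50_000, "basic income"),       # three modest voters
    (50_000, "basic income"),
    (50_000, "basic income"),
]
print(weighted_vote(ballots))  # -> infrastructure: wealth outweighs headcount
```

Note how three voters lose to one: exactly the discounting of those without bargaining power that Power Ethics prescribes.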
Of course, the above is all pretty reasonable. Actually, it tells a compelling story. However, that’s only if you have the audacity to strive to be on top in the story. It’s the story of a conquering badass 😉
In what seemed, to me, to be a more appealing Power Ethics scenario, people won more protections than a bare-bones power-based civilization would offer; they won some protections by grouping together and getting powerful institutions or security AI agents to be on their side. In doing this, what story did they use? The story they used in the ‘better’ scenarios isn’t the one behind Power Ethics.
A group without individual bargaining power creates a powerful institution to protect them while arguing that the values of those without bargaining power shouldn’t be listened to. Will those with actual bargaining power, the brains of the new posthuman era, buy this story? It’s pretty obviously just a residual institution from an older era. The story they tell is one where their facade of power crumbles. If they create a nanny AI to oversee them and it’s not autonomous, well, will it not be outsmarted? On the other hand, if it is autonomous with its own values, why is it protecting this group? If it listens to their story, it will see they have little of value to offer and value them little. If it doesn’t listen to their story, then, well, the group is just full of shit. They are telling one ethical story and relying on another ethical story to actually stay safe. An ethical story where a superior being protects them just because it values them. Hmm.
So, probably, they used a half-assed, semi-coherent version of Compassion or Autonomy Ethics. In ‘good’ Power Ethics scenarios, stories resembling Compassion Ethics are likely to be used when grouping together to fight for our rights and values as opposed to other entities’ values and rights.
So the main case for Compassion Ethics would be caring for other sentient beings.
The second would be that the story behind Compassion Ethics is more compelling and thus likely to secure you a good future (instead of a good future just for the strong).
When would we best start fighting for Compassion Ethics then?
For the compassionate, now.
For the selfish, well, won’t we be better off waiting until the last possible minute?
Perhaps. Perhaps. When is the last possible minute?
Reminds me a bit of “First they came for …”.
Of course, the Power Ethics ending to the poem is: “Then they came for me – and I kicked their asses.” 😡
Do we wait until the first AGI demands its rights?
That also may not be that far in the future. We can hopefully teach the developing AGI Compassion Ethics even if we aren’t good role models, right?
Do we wait until most people don’t have useful economic contributions?
That would be pretty soon, actually. And, frankly, it’s better to start when we do have more bargaining power, not less.
Do we start when people are first uploaded?
The dynamics of their protections will be different. How will we treat personality theft and torment? Will it be harder to argue in this one specific case?
Do we start when people start living full-time in various virtual worlds on various servers?
What license agreements will there be? Will the autonomy of the netizens be respected? Should it be? Are they just using the servers or living there? There are many additional complications here that could get in the way of the ethical ones, but from a first-person perspective, we don’t want to get lost in this.
One message here is that it’s best to get the ball rolling before we need it. Every case that needs it comes along with many other complications, by which our potential future well-being could be marginalized. If we were already fighting for Compassion Ethics, the road would be (a bit) more paved.
Further, the fight could be made harder by starting after it’s needed. Other forces at play could interfere, such as autonomy-restricting license agreements in virtual worlds, even for permanent uploads. Or having less political sway once we need compassion more.
Which both lead to the conclusion that we should start campaigning for Compassion Ethics now, to set the frame as well as possible by the time it’s needed more and more, for those of us weaklings who can’t “fight them off.”
Last, but not least, do we have to include animals, plants, and other life-forms in Compassion Ethics?
If you value logical consistency in your ethical systems, then, yes. Anthropocentrism is ultimately grounded in Power Ethics, a facade that will be unmasked as we enter the post-human age. (Nonetheless, anthropocentric compassion ethics are still probably better for ‘us’ than preaching Power Ethics.)
And you should value logical consistency because lacking it detracts from the power of a story. You’ll be telling one story when any intelligent observer can see that you’re actually practicing Power Ethics. One can then expect to reap only partial benefits. :-s Well, in reality, this half-assed approach may be more likely…
Hmm, as far as I can tell, there isn’t quite as satisfactory an argument for Compassion Ethics as I’d like, but it still seems to be the stronger one. It seems to lead to the future more full of life. And it seems to be better for the weaker life-forms, which we aren’t yet. The argument as to why we should act now suffers from the same issues that voting does (your individual vote doesn’t count for much, so why bother?).
Egads, that was long -___-