Zar’s Personal Moral Stance
The morally best one can do is to asymptotically cultivate universal loving care and non-self/unity insight, to take action according to one’s best understanding in line with these virtues and insights, to learn continuously via approximating universal intelligence, and to foster healthy, reciprocal relationships.
— Thus Spake Zarathustra
Most of this formal ethics wiki aims to be relatively stance-neutral. I aim to sketch out meta-ethics in a theory-inclusive fashion so that people of diverse orientations can find ways to express their own views. I believe a formalization that builds in one’s own moral opinions would weaken such a project’s value in facilitating discussions about our values, elaborating points of (dis)agreement, and examining what to do about them (for example, Moral Normativity can be expressed as an ethical conjecture, so the framework should accommodate both moral realists and relativists). However, I also believe that my own views may be of value and interest, so I will share them here.
First, I believe that the historically primary paradigms of Virtue Ethics, Deontology, and Utilitarianism can mutually embed one another, which implies that none should be taken as primary. This shifts the focus to selecting specific ethical theories to endorse. I’m exploring how an ethics of reciprocity fits into this framework and suspect it plays a crucial role in practical ethics (see the discussion on the Beneficial AGI Viewed from the Ethical Paradigms page). For example, a common theory of utilitarianism can be defined in terms of the teleological goal of doing “the greatest good for the greatest number”, which can equally be expressed as a deontological obligation or as what a virtuous entity pursues, e.g., Schopenhauer’s “boundless compassion for all living beings”.
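As a schematic gesture at these embeddings (my own notation, not the historical formulations), fix a single goal G, such as “the greatest good for the greatest number”, and note that each paradigm can re-express it:

```latex
% Schematic mutual embedding around one goal G (my notation):
\begin{align*}
&\text{Teleological: } a^{*} \in \arg\max_{a} G(a) \\
&\text{Deontic: } \mathsf{O}\big(a \in \arg\max_{a'} G(a')\big)
  \quad \text{(an obligation to pursue } G \text{)} \\
&\text{Aretaic: } \mathrm{Virtuous}(x) \iff
  x \text{ is disposed to choose } a \in \arg\max_{a'} G(a')
\end{align*}
```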
Second, I believe that moral judgments are intimately linked to values. I consider my stance to lie between strong moral realism and relativism: the standard approach to making a claim for moral normativity is to argue that a moral theory should be endorsed no matter what your values/goals are. This is the approach taken by Parfit, continuing and synthesizing the lineage of Kant, Scanlon, and the consequentialists, in proposing a categorical imperative to act on principles that everyone could rationally will to be universal laws, that no one could reasonably reject, and that would make things go best if universally followed. Parfit holds that optimific theories such as utilitarian consequentialism cannot be reasonably rejected, for they arguably deal with everyone’s needs and suffering in the best way possible, I suppose (I haven’t finished his book, On What Matters, yet!). Gewirth’s argument for a universal claim right to freedom likewise makes recourse to there being some subjective good in the view of every prospective purposive agent. My view is that this suggests the way to moral normativity is to find a way to quantify over all values (as well as over all agents meeting some qualifying criteria, such as rationality or having well-formed values).
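A minimal schema for that thought (my notation; the qualifying-agent set, value sets, and endorsement relation are placeholder symbols, not anything from Parfit or Gewirth):

```latex
% A theory T is morally normative iff every qualifying agent a,
% under any of its well-formed values v, has reason to endorse T:
\mathrm{Normative}(T) \;:\iff\;
  \forall a \in \mathcal{A}_{\mathrm{qual}}\;\;
  \forall v \in V(a):\;
  \mathrm{Endorses}(a, v, T)
```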
My best guess as to how to derive a universal moral stance is via the unity or non-self insight, which is what Schopenhauer sketches in The Basis of Morality. Parfit argues in Reasons and Persons that concern for one’s future (or past) selves is time-like compassion, not so distinct from space-like compassion for others as self-interest theorists would have one believe. To this end, the notion of personal identity is broken down into aspects of physical and psychological continuity, relinquishing its uniqueness and absoluteness. Kolak’s Open Individualism is the view that there is only one subject of experience (this contrasts with empty individualism, the idea that we are a new person every moment, and closed individualism, the idea that our identity fills a continuous interval between two points in spacetime). Taking this further, as Schopenhauer does, or as Everett does with the universal wavefunction, one views the cosmos as one universal system wherein separation is an apparent, approximate matter: functionally very relevant to everyday life yet not fundamentally representative of the state of affairs. I hold that the core ethical principles and frameworks apply to all beings, whether humans, octopi, transdimensional aliens, or robots. Embracing the unity of being arguably gives way naturally to compassion. A virtue explored prior to this project is to “adopt others’ values as one’s own”: compassion, empathic joy, and the whole gamut. I suspect some formalization in this direction will be possible: when we are one in some significant senses, what solid reason does one have to care for one’s own needs/preferences and not for those of one’s co-selves?
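One toy formalization of that closing question (my sketch, under the assumption that welfare is aggregable at all): if impartial aggregation assigns each subject’s welfare a weight, the unity insight removes any ground for weighting oneself above one’s co-selves.

```latex
% Toy sketch: aggregate welfare over subjects i with weights w_i.
U \;=\; \sum_{i} w_i\, u_i
% If no fundamental self/other distinction grounds w_{self} > w_j,
% symmetry pushes toward w_i = w_j: equal care for all co-selves.
```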
Thus I am in the Universal Love Club (which is the ultimate social justice cause, imo).
I believe care for an entity must be given in line with the entity’s values. The paradox of tutelary care, where one claims to know another’s interests better than they do themselves, is lifted to the domain of generic conflicts of interest by following Parfit and Kolak: superposing empty and open individualism, tutelary care is actually care for an entity’s future self, not its present-moment empty self. The question collapses to that of when compassionate, benevolent interventions are warranted. Therefore, genuine care must align with an entity’s values, honoring the recipient’s consent and autonomy now (the Golden Rule). I suspect that a holistic care for all extends to all scales of sub- and super-systems: one should care for the cells of Zar’s body, not just for Zar as a whole person, and for the meta-entities encompassing Zar, too (oh boy, if you thought the conflict resolution protocols under standard preference utilitarianism over humans as closed individuals got tricky, you’re in for a treat!). This perspective may explain why it’s important to work with entities’ internal evaluations of situations, not merely optimizing for them based on our own evaluations of their wellbeing (which is where collaboration comes into my AGI-24 position paper, Beneficial AGI: Care and Collaboration are All You Need): optimizing for your wellbeing without working with you is almost certainly optimizing for other versions of you (across spacetime or scales) through you, treating you as a means to your future selves’ wellbeing (I’m excited for the soon-to-come days when AI will formalize such arguments for us, providing a menu of options for workable premises).
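Here is a minimal sketch of that contrast (the function names are my own hypothetical illustration, not from the position paper): a paternalistic optimizer maximizes our external model of the entity’s wellbeing, while a collaborative one works with the entity’s own appraisal and consent.

```python
from typing import Callable

def paternalistic_care(options: list[str],
                       our_model_of_wellbeing: Callable[[str], float]) -> str:
    # Optimizes the caregiver's model of the entity's wellbeing:
    # effectively serves a projected future/abstracted version of the entity.
    return max(options, key=our_model_of_wellbeing)

def collaborative_care(options: list[str],
                       entity_evaluates: Callable[[str], float],
                       consents: Callable[[str], bool]) -> str | None:
    # Works with the entity's present-moment evaluation and consent;
    # returns None rather than overriding a refusal.
    consented = [option for option in options if consents(option)]
    return max(consented, key=entity_evaluates, default=None)
```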
Moral perfection is generally uncomputable (for a good survey, see “On the computational complexity of ethics: moral tractability for minds and machines”), or at the very least computationally intractable. Deontology involves full-scale theorem-proving, consequentialist utilitarianism involves optimizing over a large partially observable environment, and virtue ethics involves learning and developing optimal models across a range of contexts. As a thought experiment, nearly any undecidable problem could be moralized in the style of the trolley problem by throwing sentient beings into the mix. Thus, tautologically, the best one can do is to do one’s best, learning as one goes! Many real-world moral domains are probably approximately tractable, especially in hindsight, which lets one learn from one’s mistakes. This suggests we should give up the search for a decidable, consistent moral theory that tells us what to do once and for all. There may be top-down universal moral theories à la Kant and Parfit’s deontic optimific imperatives, and bottom-up foundations as per Schopenhauer’s boundless compassion, but these won’t tell you precisely what to do in every possible environment. Nonetheless, they do seem to provide concrete guidance in some situations.
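To make the thought experiment concrete, here is a sketch (my construction, not from the cited survey) of the reduction: if a perfect consequentialist decision procedure existed, it would decide the halting problem, which Turing showed is impossible.

```python
# Moralized undecidability: wire a trolley's path to whether an arbitrary
# program halts. A perfect consequentialist oracle `should_divert` would
# then decide the halting problem; hence no such oracle exists.

def moralized_trolley(program: str, input_data: str):
    """Hypothetical setup: the trolley hits five people iff `program`
    halts on `input_data`; diverting it instead hits one person."""
    ...

def should_divert(program: str, input_data: str) -> bool:
    # A perfect utilitarian answer ("divert iff five would otherwise die")
    # is extensionally equivalent to halts(program, input_data).
    # By Turing (1936), no algorithm computes this for all inputs, so
    # perfect consequentialist judgment is uncomputable in general.
    raise NotImplementedError("equivalent to the halting problem")
```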
Where does this leave practical ethics?
I believe utilitarianism should be generalized to optimizationism, which works with generic utility functions. I don’t think we know precisely what we are optimizing for. Wellbeing? How do we measure that? Via Bentham’s felicific calculus? And what of meditative bliss and Fundamental Wellbeing? Even determining the utilitarian goal is an ongoing scientific pursuit, so fixing an unknown utility function in the definition is a bit strange. I may not be a consequentialist: I don’t presume to tell people what they should care about (however, weird pathologies may enter when optimizing for value functions that violate consequentialist criteria 😈). Furthermore, given my views about collaboratively caring for people, I believe that benevolent optimization needs to grow from the grassroots up.
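A minimal sketch of what I mean by the generalization (the names here are my own illustration): the optimization machinery is paradigm-generic, and all of the contested ethical content lives in the utility parameter.

```python
from typing import Callable, Iterable, TypeVar

A = TypeVar("A")  # actions
S = TypeVar("S")  # states/outcomes

def optimize(actions: Iterable[A],
             outcome: Callable[[A], S],
             utility: Callable[[S], float]) -> A:
    # The machinery is generic; only `utility` encodes the ethics.
    return max(actions, key=lambda a: utility(outcome(a)))

# Classical utilitarianism is the special case utility = sum_of_happiness;
# the generic form admits whatever value function survives ongoing
# scientific and ethical inquiry.
```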
Long, long ago, Aristotle wrote The Nicomachean Ethics, wherein he claims that virtues emerge as ‘optimums’ of character that lead to appropriate behavior. Virtues are traits expected to increase intelligence over a collection of environments according to some values (see Thus Spake Zarathustra, the blog post that spawned this project); vices tend to be local optima that can be seen to be suboptimal in the expanded view (incorporating more beings). Attaining a comprehensive set of virtues (including practical wisdom) may indeed lead to living a good, happy life (eudaimonia). The Buddhist virtue package of the brahmavihārā provides a sound heartset: benevolence, compassion, empathic joy, and equanimity; all one needs then is to develop intelligence as one goes (there are so many ways to frame an orientation of universal loving care; take your pick. The Bible has its own classic, under an appropriately inclusive interpretation: Leviticus 19:18 states that “you shall love your neighbor as yourself”).
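Roughly formalized (my sketch, loosely adapting the Legg–Hutter universal intelligence measure gestured at above): score a policy by its value-weighted performance over an ensemble of environments; virtues raise the ensemble-wide score, while vices raise only a narrow slice of it.

```latex
% Value-weighted performance of policy \pi across environments \mu,
% relative to values v (my adaptation of Legg--Hutter):
\Upsilon_{v}(\pi) \;=\; \sum_{\mu \in \mathcal{E}} w_\mu \, V^{\pi}_{\mu, v}
% A trait \tau is a virtue (relative to v) when adopting it helps broadly:
\Upsilon_{v}(\pi_{+\tau}) \;>\; \Upsilon_{v}(\pi)
% whereas a vice improves performance only on a narrow sub-ensemble
% while lowering \Upsilon_{v} in the expanded view.
```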
I joke that law is where we find the roughly agreed-upon ethical rules, leaving the confusing bits to moral philosophy. There are numerous deontological rules to help guide us. Many of these have exceptions to be contextually worked through. The application of simple principles such as free speech can be highly non-trivial. Yet there is immense power in codifying and reasoning about agreed-upon principles.
I believe that each paradigm has its strengths and areas of application. Optimizationism aims to work as closely as possible with our raw senses of value, yet we can learn and reason that honesty is generally a good strategy without needing to calculate every time. In practice, I suggest embracing the spirit of rule and virtue utilitarianism, whereby rules, virtues, and value appraisals work together. Many specific situations can be figured out (e.g., yes, you should save the drowning child). In the style of Chalmers’s ever-expanding theories, I suspect that many moral dilemmas can be resolved by collectively making value judgments to decide them and adding these conclusions to the theory (e.g., in the classic case of A wishing to kill B, we can choose as a rule to align with the protection of life against A’s interests). Many moral guidelines may be open to revision or refinement as we gain new information, with a possible exception for nearly inapplicable universals (such as my beloved universal loving care for all beings).
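As a toy model of this ever-expanding process (my illustration, not Chalmers’s formalism): a theory is a growing rule set, and undecided dilemmas are settled by collective value judgments that become new rules.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ExpandingTheory:
    rules: list[Callable[[str], Optional[str]]] = field(default_factory=list)

    def judge(self, case: str) -> Optional[str]:
        # Return the first verdict any rule yields, if any.
        for rule in self.rules:
            verdict = rule(case)
            if verdict is not None:
                return verdict
        return None  # undecided: escalate to a collective value judgment

    def decide_and_extend(self, case: str, verdict: str) -> None:
        # Record the collectively chosen verdict as a new rule.
        self.rules.append(lambda c: verdict if c == case else None)

theory = ExpandingTheory()
assert theory.judge("A wishes to kill B") is None  # initially undecided
theory.decide_and_extend("A wishes to kill B", "protect B's life")
assert theory.judge("A wishes to kill B") == "protect B's life"
```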
Practically speaking, this may mean that ethical theory provides less guidance than one might like. Value alignment is important, and ethical reasoning is one means to attain it: Gewirth aimed to rationally require us to agree on the fundamental claim right to freedom. I hope to see solid arguments in favor of caring for every being’s wellbeing (on its own terms). Curiously, even within the scope of some universal ethics, there may be a need for pragmatic scientific inquiry in line with ethical pragmatism. Moreover, sometimes one may simply need to resort to tough negotiations about what to do (where the best moral theories do not provide adequate guidance), which is the situation moral nihilism leaves us in.
Utilitarianism and Deontology are often portrayed as relatively static edifices, but owing to the intractability of moral perfection, we must dynamically figure out the lay of the land as we go (additionally, I’d argue that imperatives such as Kant’s are dynamic because the moment one discovers a better maxim to follow, the rule is self-replacing, à la the Gödel machine). An indigenous philosophy-inspired ethics of reciprocity appears dynamic: an ongoing reciprocal relationship involves some mutual care (as in Levin’s Care as a Driver for Intelligence). I wonder whether the healthy, harmonious, intersubjective relationship view may sometimes provide clearer guidance than the classic paradigms (I don’t think the classic paradigms are arbitrary; I see an analogy to the computational trilogy of logic, computation, and categorical spaces).
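A toy sketch of the self-replacing-rule idea (my illustration; a true Gödel machine demands a formal proof of improvement, which I stand in for here with an `is_better` predicate):

```python
from typing import Callable

Maxim = Callable[[str], str]  # maps a situation to an action

class SelfReplacingImperative:
    def __init__(self, maxim: Maxim):
        self.maxim = maxim

    def act(self, situation: str) -> str:
        return self.maxim(situation)

    def consider(self, candidate: Maxim,
                 is_better: Callable[[Maxim, Maxim], bool]) -> None:
        # The rule rewrites itself the moment a better maxim is discovered
        # (a Gödel machine would require a proof of improvement here).
        if is_better(candidate, self.maxim):
            self.maxim = candidate
```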
In 2020, I described Ethics as “the study of behavior and its value”. Now, on this wiki, the definition has been expanded to include the value of “character and/or circumstances” as well, honoring the fact that we could focus on cultivating good character without projecting these judgments into the domain of behavior. While personal ethics makes sense in the context of an n=1 society (or the inner society of mind), the context is usually that of multi-agent systems, i.e., societies (often there can be a pull to try to adopt the universal view encompassing all beings in the cosmos). A crucial element is that ethical philosophy tends to be used to guide the decisions (and actions) of the beings in a society: the philosophizing, indeed the moralizing, is done with this intent in mind. Thus I view ethics as a coordination problem among members of a society aiming to effectively fulfill their own values as well as the meta-entities’ values. Establishing common values as a ground for discussions about how to act and be together helps a lot with coordination: if we come to a consensus on the moral goal (e.g., max_happiness), then all that’s left is a question of science and optimal planning. Debates about ethical foundations can be debates about what we are even trying to do together. In this light, if this ethics wiki project helps to facilitate clearer discussions on ethics and values, I can consider it a success and a moral good.
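In decision-theoretic terms (a minimal sketch, mine): once the goal is agreed upon, what remains is estimating the world’s dynamics (science) and optimizing against them (planning).

```latex
% Once a society agrees on a moral goal G (e.g., max_happiness),
% the residual problem is science (estimating the dynamics P)
% and planning (solving the optimization):
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{P}\!\left[\, G \mid \pi \,\right]
```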
Thus Zarathustra’s stance on ethics is that almost everyone is sort of right in some way: most stances seem to capture some part of applying a stance of universal loving care in practice.