Ethical Paradigm Equivalence
The ethical paradigm equivalence conjecture posits that each of the paradigms — virtue ethics, deontology, and utilitarianism — can simulate the others and express theories equivalent to those of the other paradigms.
For each pair of paradigms A and B, there exists a parametrized theory a(b) in A such that for any theory b in B, a(b) is equivalent to b.
The intuition is that each paradigm provides a formalism and approach to ethical decision theory that is universally expressive.
Discussion
I made this conjecture when investigating ethics for a blog post, Virtue Ethics Qua Learning Theory: each ethical paradigm seemed to have its own features and limitations, yet each seemed universally expressive if one took the liberty of defining non-traditional terms (for example, novel virtues or custom utility functions). Fortunately, other ethical philosophers have advanced supportive positions: Martha Nussbaum argues that the standard categorizations of ethical theories are confusing and unhelpful, and that Virtue Ethics is a Misleading Category, because both Kantian deontology and utilitarianism provide accounts of the virtues. Her suggestion is to focus on the specific theories of philosophers instead of merely assigning them to a paradigmatic category. Derek Parfit, in On What Matters, suggests that Kantian deontology, consequentialist utilitarianism, and Scanlon's contractualism unify into a single "Triple Theory" because the three theories should generally agree in their recommendations as to when an act is wrong. The ethical paradigm equivalence conjecture takes these claims further by positing that the ideas of each paradigm can be treated within the other paradigms and, moreover, that any theory of one paradigm can be expressed in the others. Furthermore, it may not be uncommon for ethical theories to contain elements of multiple paradigms.
Does this mean paradigmatic categorization is, as Nussbaum suggests, meaningless? No matter which paradigm one starts with, one will likely need to deal with notions of virtues, rules, and utilities (whether as primary or secondary entities). Given that moral perfection is intractable, or even undecidable (see the analysis in Jakob Stenske's On the Computational Complexity of Ethics for more details; that analysis also suggests that the Ethical Decidability Conjecture is false, at least for expressive theories that could be applied to diverse real-world domains), one cannot expect any one theory to perfectly cover all moral domains. My hypothesis is that each paradigm captures distinct, important aspects of behaving morally well, so there can still be meaning in paradigmatic categorization as well as in mixed theories.
Native Inter-Paradigmatic Motivations
In this section, I’ll discuss how each paradigm can be natively motivated from within the telos of the other paradigms.
Consequentialist utilitarianism is seen as providing a moral basis in utility, which leads to an optimization-based approach to attaining this good. In practice, however, brute-force optimization is intractable (a Partially Observable Markov Decision Process (POMDP) is PSPACE-hard with finite horizons and undecidable with infinite horizons), so working with heuristics is necessary. This motivates combined theories such as Hare's Two-Level Utilitarianism, in which agents are advised to follow rules in the regularly covered cases and to apply utilitarian analysis in special circumstances. Further, Parfit suggests in Reasons and Persons that agents adopting the disposition to love and prioritize care for their children may bring about the best consequences (even if locally violating the top-level goal of maximizing the quality of the state of affairs). Thus both deontic rules and virtues can be intrinsically motivated within consequentialist utilitarian theories.
Deontological approaches aim to provide a basis for morality via self-evident truths and core principles (such as those pertaining to being a sentient being among others in a society), which indicates reasoning as the means of determining what is good. Parfit argues extensively that optimific (i.e., consequentialist utilitarian) principles are precisely those that everyone could rationally will to be universal laws. Universal optimific principles also arguably satisfy John Rawls' original position thought experiment, according to which one should only accept principles that one would select no matter which position in society one ends up having (this is one way to advocate for sufficiently egalitarian principles). Kant holds that the single virtue that is good without qualification is possessing a good will, a will whose decisions are wholly determined by the moral law. More broadly, virtues are the moral strength of the will in adhering to duty against contrary inclinations (this parallels the target-centered virtue ethics view that virtues correspond to appropriate responses to moral dilemmas, i.e., situations with contrary incentives): thus, while rules such as the categorical imperative determine what is right, virtues are important because they represent our capacity to do what is right in practice (for example, courage represents one's strength to uphold moral principles in spite of fear or potential harm).
Virtue ethics focuses on cultivating virtuous character traits, can treat living a flourishing life (eudaimonia) as a moral basis (though Kantian good will, utility, status, bonds, or value in general can also be seen as moral bases for the virtues), and suggests that learning is the road to becoming a good person. An overall virtuous person still needs to make practical decisions. For example, consider Schopenhauer's base virtue of "boundless compassion for all beings" (and the related virtue of beneficence): what is one actually to do in a difficult situation? One compelling choice is to follow optimific principles, learning from the consequences of one's actions; that is, one should act in line with utilitarian analyses. Another option is to have recourse to moral rules that have been refined over the course of multiple generations; that is, one should act in line with deontic principles. Furthermore, with phronesis (prudence or practical wisdom: an entity has phronesis if ey has good judgment in determining how to act and what is needed for a flourishing life), one will learn how much energy to devote to utilitarian analyses versus employing other strategies. Societal rule-following is such a useful strategy that it even has a name as a Roman virtue: pietas (dutifulness) is considered a virtue in its own right; the virtuous play by the rules. It can be argued that as one develops virtues such as integrity, justice, honesty, and reliability, one will devote oneself to following moral principles (this lines up with Kohlberg's stages of moral development).
Ethical Semantics
Kinds of Equivalence: Syntactic and Semantic
Formally, two notions of equivalence are to be considered: syntactic equivalence between theories, which can be assessed without reference to any moral evaluations, and semantic equivalence between theories in terms of the moral guidance offered in practice, which in this ontology is represented by choice points. Following the principle of equivalence, two objects are considered equivalent if they may be replaced by one another in all contexts under consideration.
The native motivations align with both syntactic and semantic equivalence. Syntactically, new terms can be defined, extending a theory of one paradigm to incorporate theories of another paradigm (this loosely aligns with the notion of definitional extension discussed by Glymour). Semantically, a person holding the extended theory in paradigm A should be expected to make the same moral judgments about the same situations as they would if they held the equivalent theory in paradigm B.
Semantic Interpretations
The topic of how to semantically interpret an ethical theory remains unsolved to the best of my knowledge. What does "X is good" imply about what one should do? Something like, "in relevant situations, one should aim to make X true"? This is the take adopted by imperative deontology: if X is good, then there is a soft or hard obligation to make X true (interestingly, legal theory distinguishes soft and hard law, whereas the distinction is less well-developed in ethical theory). Naive interpretations run into challenges in cases such as the contrary-to-duty paradox, where there are obligations stipulating what should be done in cases that you shouldn't be in. The goal is to have a theory and interpretation that solves all moral dilemmas consistently so that one operationally knows what to do in every situation (at least on moral grounds).
Generally, the semantics of consequentialist utilitarianism is interpreted through an overarching imperative to take the actions that maximize the utility of the consequences, which in practice must be the expected consequences (for, while one can judge someone as having made the wrong choice based on the actual consequences, this hindsight knowledge cannot provide practical guidance). There are many variants, such as satisficing consequentialism, where one merely needs to do well enough. One can have a version of negative utilitarianism where minimizing suffering is prioritized and satisficing benefit is then good enough. Is there a way to interpret utilitarianism without going through deontology? Viewing the achievement of high-utility consequences as a goal or an action-guiding principle distances the interpretation from deontology.
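As a rough illustration in the notation used later on this page, satisficing consequentialism might be rendered as a threshold rule. This is only a sketch: GOODENOUGH is a hypothetical threshold constant, and reading a moral judgment off a single utility function is an assumption of the sketch rather than a settled interpretation.

```
;; Sketch: a formula is judged morally good when its utility meets a
;; satisficing threshold. GOODENOUGH is a hypothetical constant.
(=>
  (and
    (instance ?UF UtilityFormulaFn)
    (greaterThanOrEqualTo (AssignmentFn ?UF ?FORMULA) GOODENOUGH))
  (modalAttribute ?FORMULA MorallyGood))
```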
Virtue ethics can also be framed through the lens of an overarching imperative to be virtuous (and not vicious): one should cultivate the virtues, which has implications for how one will act. Alternatively, one could view a state of flourishing (eudaimonia) as intrinsically valuable, so that the interpretation is that virtues are intrinsically good ways of being that agents will benefit from adopting (in this view, moral dilemmas arise when there are situational incentives to act in other ways that appear good yet are ultimately inferior).
This analysis suggests that the way non-deontological paradigms tend to be dealt with is actually multi-paradigmatic because at least one top-level deontic element is usually included. However, there seem to exist alternative non-standard interpretations.
Another way to frame ethical paradigm equivalence is that each paradigm can be interpreted in the language of the other paradigms. For example, there is a virtue ethics view of the deontological paradigm via dutifulness, which is understood to be an intrinsically good character trait for agents to possess.
Theory Complexity
Note that to apply any of the paradigms in practice, many details must be fleshed out beyond the core features present in this seed ontology. Given the intractability of moral perfection in the real world, the need to deal with complex distinctions — beyond, for instance, the three categories of moral judgment in imperative deontology (obligation, prohibition, and permission) — should not come as a surprise.
Consider the need for conditional (dyadic) obligations such as, "if it's raining, you ought to use an umbrella", or the contrary-to-duty-spawning, "if you do something morally wrong, you should make amends." You should never be in the antecedent state! Defeasible logic can be used to cover the various ways in which some obligations may override others to resolve conflicts (the terms here are taken from the paper "The Many Faces of Defeasibility in Defeasible Deontic Logic" by van der Torre and Tan). A strong override occurs when one obligation completely overrules another: for example, finger foods such as tacos override the rule that you should not eat with your hands. A weak override occurs when an obligation overrules another while the other obligation remains morally relevant: for example, you should keep your promises (such as to attend a friend's birthday party), but if there's a mild emergency, you may be justified in skipping; however, you may still owe your friend an apology. The ordering of obligations is important: many moral theories prioritize protecting humans from harm over protecting valuable work equipment from harm, yet there are obligations to do both (and once you have an ordering, you are closer to being able to quantify how important obligations are). Moral uncertainty poses challenges: what should an agent do if one doesn't know whether one's actions will violate an obligation? One could have possibly made a promise, made a promise in an altered state, or not know whether someone is innocently or intentionally causing harm (with no time to figure it out). The principle of stochastic dominance suggests that agents should choose options that are consistently expected to conform to the obligations (the term and examples are taken from the paper "Moral Uncertainty for Deontologists" by Christian Tarsney).
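The contrary-to-duty pattern can be sketched in the holdsObligation notation used later on this page; keepsPromises and makesAmends are hypothetical predicates introduced only for this illustration.

```
;; Primary duty: keep your promises.
(=>
  (instance ?AGENT AutonomousAgent)
  (holdsObligation (keepsPromises ?AGENT) ?AGENT))

;; Contrary-to-duty obligation: if the primary duty is violated,
;; a secondary duty to make amends kicks in. The secondary duty is
;; only triggered in a state one should never have been in.
(=>
  (and
    (instance ?AGENT AutonomousAgent)
    (not (keepsPromises ?AGENT)))
  (holdsObligation (makesAmends ?AGENT) ?AGENT))
```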
Another fun issue is that of collective obligations (see the paper "Collective Obligations: Their Existence, Their Explanatory Power, and Their Supervenience on the Obligations of Individuals" by Bill Wringe, or the paper "The Irreducibility of Collective Obligations" by Allard Tamminga and Frank Hindriks), which are arguably irreducible to individual obligations. One example is provided by responses to climate change; another is when two co-workers are required to inform someone of imminent danger. 'Real life' can likely provide additional details to deal with, too.
The other paradigms also face diverse challenges that require increasingly complex theory to tackle.
Moral uncertainty arguably confronts consequentialist utilitarianism more directly than deontology: determining whether an action truly maximizes overall utility is generally undecidable (even in the simplified finite-horizon partially observable Markov decision process model, computing an optimal policy is PSPACE-complete). Even maximizing expected utility is highly non-trivial, which motivates rule consequentialism and two-level utilitarianism, which use rules to simplify moral decisions. Collective action in the face of examples such as climate change (the problem of the commons) poses an issue: individual actions may not affect the aggregate utility much. The suggestion that one needs to maximize utility to be morally good faces a demandingness objection: nearly everyone will fail. If adopting a high-level utilitarian obligation, then almost everyone is almost always violating moral duty. Thus, in practice, one needs to extend the theory to allow for finer gradations of utility optimization goals, such as lower satisfaction thresholds and cases where utility gains are optional (such as supererogatory goods, where one goes beyond what is morally required of one). Incommensurable goods also provide a challenge: can value be represented by one-dimensional utility measures? Most countries in the world ban the sale of kidneys for transplant, suggesting that most economic goods are considered incommensurable with human organs. How well can one compare the value of healthcare vs. education, the pleasure of eating a pizza vs. enjoying free speech, or telling the truth vs. protecting someone's feelings (some examples taken from Incommensurable Values by John Broome)?
Even if one adopts a broadly hedonistic approach, negative utilitarians claim that negative and positive experiences — pain and pleasure — are incommensurable and that reducing suffering should be prioritized (but is a pin-prick to be avoided over an event that fulfills the dreams of ten thousand children without reducing any suffering?). Such considerations incentivize utilitarian theories to augment themselves with rule-based or threshold-based approaches in order to provide practical moral guidance.
Virtue ethics deals with the complexities of life very differently, because the emphasis is on cultivating virtuous character, which leaves room for intelligently adapting to diverse situations with practical wisdom (phronesis). The approach of target-centered virtue ethics focuses on virtuous action, making the landscape more similar to that of deontology. Virtues such as honesty and kindness can still conflict with each other (the classic example is the case of telling people uncomfortable truths). If virtues are grounded in a flourishing life (eudaimonia), there are typically arguments for incorporating the flourishing of others, too (e.g., in the Nicomachean Ethics, Aristotle argues that the good life involves good friendships, which are joy-based rather than utility-based). One can also examine collective or institutional virtues. Perhaps the core challenge is that practical decision-making is not included in the domain of moral discourse: without concrete operational guidance, one must figure out what a virtuous person would do from situation to situation. To this end, deontological and consequentialist (utilitarian) strategies may be employed as part of learning to live a virtuous life.
Thus this formal ethics seed ontology only provides a high-level framework in which to situate the rich theories needed to tackle real-world ethical reasoning. Theories in each ethical paradigm need to respond to similar moral dilemmas presented by the world agents inhabit, yet one can stipulate they do this in complementary ways: deontological theories eventually need orderings over duties, which lead to numerical quantities, the domain of utility-optimization approaches. The continual learning and cultivation of virtue provides the character and practical wisdom to know when to apply which approach. Thus, though each paradigm stands on distinct theoretical ground, their expansions often converge in handling the same moral dilemmas, illustrating a deeper unity in the complexity of ethics.
Direct Simulations
In dealing with inter-paradigm simulations, I will stick to the simple versions. The conjecture is that once we align a form of the core constituents of the paradigms, the practically inspired elaborations of the theories can be further aligned as desired.
Moral Value Judgments ↔ Imperative Deontology
The first equivalence to cover is between the two forms of deontology: the languages of moral value judgments and of imperative statements. Note from the semantic complexity section that the notion of strengths of obligation can be difficult to avoid in practice. This simplifies the case for paradigm alignment, because a primary counterargument would be that "X can be morally good but not obligatory"; yet, while moral goodness can come in degrees, so can moral obligations. One can also be strict about what qualifies as morally good: for example, Kant considered good will to be the sole virtue that is always, unconditionally good. Thus there's a case for asserting the trivial alignment: "X is morally good" is equivalent to "there is an obligation to ensure that X is true in the world". Identical real-world refinements can be made on either side to tackle the numerous moral dilemmas that are encountered (there may also be a wish to semantically interpret goodness and obligation: one suggestion is that this could be equivalent to a theory of, e.g., two kinds of obligations; another is that any theory of obligations could be mapped into a theory of kinds and degrees of goodness, and vice versa).
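For concreteness, one direction of such a translation might be sketched as follows; the actual definitions live on the deontology page, so the universally quantified holdsObligation form used here is an assumption of the sketch.

```
;; Sketch: translating "X is morally good" into an imperative stating
;; that every autonomous agent holds the obligation to ensure X.
;; The exact output of SimpleValueJudgmentToImperativeSentenceFn
;; is assumed, not quoted from the deontology page.
(=>
  (and
    (equal (SimpleValueJudgmentToImperativeSentenceFn ?VJS) ?IS)
    (equal ?VJS (modalAttribute ?FORMULA MorallyGood)))
  (equal ?IS
    (forall (?AGENT)
      (=>
        (instance ?AGENT AutonomousAgent)
        (holdsObligation ?FORMULA ?AGENT)))))
```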
This equivalence is presented on the deontology page. One defines direct translations both ways, allowing us to write:
(=>
  (instance ?S SimpleValueJudgmentSentence)
  (equal ?S
    (SimpleImperativeToValueJudgmentSentenceFn
      (SimpleValueJudgmentToImperativeSentenceFn ?S))))
(=>
  (instance ?S SimpleImperativeSentence)
  (equal ?S
    (SimpleValueJudgmentToImperativeSentenceFn
      (SimpleImperativeToValueJudgmentSentenceFn ?S))))
Deontology ← Utilitarianism
For this simulation, let's use the teleological definition of utilitarianism: it is good to do that which brings about "the greatest good for the greatest number". The simplest hack is to say that every agent holds the obligation to be a utilitarian; any utilitarian philosophy could be plugged in.
(=>
  (instance ?AGENT AutonomousAgent)
  (holdsObligation
    (holdsEthicalPhilosophy ?AGENT UTILITARIANISM)
    ?AGENT))
Likewise, one could fix any utility function UF and claim that every agent holds the obligation to take the best action according to UF in every situation.
(=>
  (instance ?AGENT AutonomousAgent)
  (holdsObligation
    (forall (?SITUATION ?CPROC)
      (=>
        (bestActionByUtilityInSituation ?CPROC UF ?SITUATION)
        (exists (?IPROC)
          (and
            (agent ?IPROC ?AGENT)
            (instance ?IPROC ?CPROC)
            (equal ?SITUATION (SituationFn ?IPROC))))))
    ?AGENT))
An agent following this “deontological theory” should behave equivalently to an agent following the provided utilitarian theory. If adopting a simple theory with one utility function and utility maximization, then the whole theory can be spelled out within the obligation statement. Additional qualifications, such as “taking what one believes to be the best action” can easily be added.
Deontology ← Virtue Ethics
This simulation can be done directly, too: given a virtue ethics philosophy, we define a deontological theory that one ought to hold this philosophy. Because deontology essentially utilizes the expressivity of logic, to the extent that virtue ethics can be described within SUMO, it can be simulated within deontology.
(=>
  (instance ?AGENT AutonomousAgent)
  (holdsObligation
    (holdsEthicalPhilosophy ?AGENT VIRTUEETHICS)
    ?AGENT))
Deontology ← Target-Centered Virtue Ethics
Target-centered virtue ethics provides a simple bridge by specifying the target of a virtue as a formula, similar to how obligations and moral goodness are denoted. One can include the field to specify that it is good to hit the target of a virtue in situations within the field. The parallel translation should be doable for vices and moral badness (the target-centered virtue ethics theory seems to be focused primarily on moral goodness, leaving discussions of viciousness as exercises for the reader). To deal with potential conflicts, there is the notion of an action being overall virtuous, which could mirror the conflict resolution procedures in deontology (Christine Swanton used the interpretation that an act is morally good if and only if it is overall virtuous; that is, the imperative and virtue ethics languages would be where conflicts are resolved into overall moral value judgments).
(=>
  (and
    (equal (TargetTCVEToValueJudgmentSentenceFn ?TVS) ?VJS)
    (equal ?TVS
      (virtueTarget ?VIRTUE ?TARGET))
    (instance ?VIRTUE VirtueAttribute))
  (equal ?VJS
    (modalAttribute ?TARGET MorallyGood)))
(=>
  (and
    (equal (FTSimpleTCVEToValueJudgmentSentenceFn ?TVS) ?VJS)
    (equal ?TVS
      (and
        (virtueField ?VIRTUE ?FIELD)
        (virtueTarget ?VIRTUE ?TARGET)))
    (instance ?VIRTUE VirtueAttribute))
  (equal ?VJS
    (modalAttribute
      (forall (?AGENT ?SITUATION)
        (=>
          (and
            (equal ?SITUATION (SituationFn ?AGENT))
            (instance ?SITUATION ?FIELD))
          (exists (?IPROC)
            (and
              (agent ?IPROC ?AGENT)
              (realizesFormula ?IPROC ?TARGET)))))
      MorallyGood)))
Target-Centered Virtue Ethics ← Virtue Ethics
One challenge with simulating virtue ethics in other theories is that it can be unclear how to describe what it means for an agent to possess a virtue. The simplest approach I found is to use a virtue desire sentence that states what an agent possessing a given virtue generally desires, e.g., an honest person wishes to express that which ey believes to be true. Now the target of this desire can be directly translated to the target of the virtue; one can also add the existence of a field of the virtue that is relevant and likely contains realizations of the desired target.
(=>
  (and
    (equal (SimpleVirtueDesireToTargetSentenceFn ?SVDS) ?VTTS)
    (instance ?VIRTUE VirtueAttribute)
    (equal ?SVDS
      (forall (?AGENT)
        (=>
          (attribute ?AGENT ?VIRTUE)
          (desires ?AGENT ?FORM)))))
  (equal ?VTTS
    (virtueTarget ?VIRTUE ?FORM)))
(=>
  (and
    (equal (SimpleVirtueDesireToTCVESentenceFn ?SVDS) ?TCVE)
    (equal ?SVDS
      (forall (?AGENT)
        (=>
          (attribute ?AGENT ?VIRTUE)
          (desires ?AGENT ?FORM))))
    (instance ?VIRTUE VirtueAttribute))
  (exists (?FIELD)
    (and
      (forall (?IPROC)
        (=>
          (realizesFormula ?IPROC ?FORM)
          (modalAttribute
            (instance (SituationFn ?IPROC) ?FIELD)
            Likely)))
      (relevant ?VIRTUE ?FIELD)
      (equal ?TCVE
        (and
          (virtueField ?VIRTUE ?FIELD)
          (virtueTarget ?VIRTUE ?FORM))))))
Virtue Ethics ← Deontology
Virtue ethics can simulate deontology directly via the virtue of dutifulness, or pietas: "the man who possessed pietas 'performed all his duties towards the deity and his fellow human beings fully and in every respect'" (description from the 19th-century classical scholar Georg Wissowa). Perhaps one can be even more direct and claim that possessing dutifulness means holding some deontological philosophy.
(instance Pietas VirtueAttribute)
(=>
  (attribute ?AGENT Pietas)
  (forall (?DUTY)
    (=>
      (holdsObligation ?DUTY ?AGENT)
      (desires ?AGENT ?DUTY))))
(=>
  (attribute ?AGENT Pietas)
  (holdsEthicalPhilosophy ?AGENT DEONTOLOGY))
Target-Centered Virtue Ethics ← Deontology
The core idea works both ways: if it is morally good to see to it that a formula holds true, then there exists a virtue whose target is for this formula to hold true.
(=>
  (and
    (equal (SimpleValueJudgmentToTargetTCVESentenceFn ?VJS) ?TVS)
    (equal ?VJS
      (modalAttribute ?FORMULA MorallyGood)))
  (exists (?VIRTUE)
    (and
      (instance ?VIRTUE VirtueAttribute)
      (equal ?TVS
        (virtueTarget ?VIRTUE ?FORMULA)))))
Simple situational action value judgment sentences can also be directly translated into the language of target-centered virtue ethics. When it is morally good to take an action of a specific class in certain situations, one can interpret the situations as being in a field, the class as denoting a mode, and the target as the instantiation of the action.
(=>
  (and
    (equal (SimpleSituationalActionValueJudgmentToFTVirtueSentenceFn ?SSAVJ) ?TCVES)
    (subclass ?CLASS AutonomousAgentProcess)
    (equal ?SSAVJ
      (and
        ?DESCRIPTION
        (modalAttribute
          (forall (?AGENT ?SITUATION)
            (=>
              (and
                (equal ?SITUATION (SituationFn ?AGENT))
                (similar ?AGENT ?SITUATION (SituationFormulaFn ?DESCRIPTION))
                (capableInSituation ?CLASS agent ?AGENT ?SITUATION))
              (exists (?PROC)
                (and
                  (agent ?PROC ?AGENT)
                  (instance ?PROC ?CLASS)))))
          MorallyGood))))
  (exists (?VIRTUE ?FIELD)
    (and
      (instance ?VIRTUE VirtueAttribute)
      (instance (SituationFormulaFn ?DESCRIPTION) ?FIELD)
      (relevant ?VIRTUE ?CLASS)
      (equal ?TCVES
        (and
          (virtueField ?VIRTUE ?FIELD)
          (virtueTarget ?VIRTUE
            (forall (?SITUATION)
              (=>
                (instance ?SITUATION ?FIELD)
                (exists (?IPROC)
                  (instance ?IPROC ?CLASS))))))))))
Virtue Ethics ← Target-Centered Virtue Ethics
To simulate target-centered virtue ethics within (agent-centered) virtue ethics, one can interpret the target as what agents who possess the virtue wish to achieve. One can add a lemma to the theories explaining how the field is to be incorporated on the virtue ethics side.
(=>
  (and
    (equal (TargetSentenceToSimpleVirtueDesireFn ?VTTS) ?SVDS)
    (equal ?VTTS (virtueTarget ?VIRTUE ?FORM))
    (instance ?VIRTUE VirtueAttribute))
  (equal ?SVDS
    (forall (?AGENT)
      (=>
        (attribute ?AGENT ?VIRTUE)
        (desires ?AGENT ?FORM)))))
(=>
  (and
    (virtueField ?VIRTUE ?FIELD)
    (forall (?AGENT)
      (=>
        (attribute ?AGENT ?VIRTUE)
        (desires ?AGENT ?TARGET))))
  (forall (?AGENT)
    (=>
      (attribute ?AGENT ?VIRTUE)
      (desires ?AGENT
        (forall (?SITUATION)
          (=>
            (and
              (equal ?SITUATION (SituationFn ?AGENT))
              (instance ?SITUATION ?FIELD))
            (exists (?IPROC)
              (and
                (agent ?IPROC ?AGENT)
                (realizesFormula ?IPROC ?TARGET)))))))))
Virtue Ethics ← Utilitarianism
I consider "learning from the consequences of one's actions", along with benevolence, to essentially cover consequentialist utilitarianism to the extent that it is practically realizable, as compared with the unrealizable ideal of maximizing utility.
The same strategy as applied to simulate utilitarianism in deontology can be employed, swapping obligation for desires. First, an agent possessing the virtue of utilitarian benevolence could be said to hold any given utilitarian philosophy. Second, such an agent could be said to desire to take the best action according to any given utility function UF in any situation. Given that perfect compliance with demanding philosophies is not guaranteed by merely holding the ethical philosophy, one could expect the same behavior from agents with this virtue.
(=>
  (attribute ?AGENT UtilitarianBenevolence)
  (holdsEthicalPhilosophy ?AGENT UTILITARIANISM))
(=>
  (attribute ?AGENT UtilitarianBenevolence)
  (desires ?AGENT
    (forall (?SITUATION ?CPROC)
      (=>
        (bestActionByUtilityInSituation ?CPROC UF ?SITUATION)
        (exists (?IPROC)
          (and
            (agent ?IPROC ?AGENT)
            (instance ?IPROC ?CPROC)
            (equal ?SITUATION (SituationFn ?IPROC))))))))
In order to show how one might pipe these translations together, one could imagine a virtue for every utility assignment sentence such that agents possessing the virtue desire the state of affairs described by the formula if the utility is positive and desire that the state of affairs not come to be if the utility is negative (how to deal with moral permissibility is an exercise left to the reader; I think the version working with target-centered virtue ethics is cleaner).
(=>
  (and
    (equal (UtilityAssignmentToVirtueDesireFn ?UAS) ?VDES)
    (equal ?UAS (equal (AssignmentFn ?UF ?FORMULA) ?VALUE))
    (instance ?UF UtilityFormulaFn)
    (instance ?FORMULA Formula)
    (instance ?VALUE Number))
  (and
    (=>
      (greaterThan ?VALUE 0)
      (exists (?VIRTUE1)
        (and
          (instance ?VIRTUE1 VirtueAttribute)
          (equal ?VDES
            (forall (?AGENT)
              (=>
                (attribute ?AGENT ?VIRTUE1)
                (desires ?AGENT ?FORMULA)))))))
    (=>
      (lessThan ?VALUE 0)
      (exists (?VIRTUE2)
        (and
          (instance ?VIRTUE2 VirtueAttribute)
          (equal ?VDES
            (forall (?AGENT)
              (=>
                (attribute ?AGENT ?VIRTUE2)
                (desires ?AGENT (not ?FORMULA))))))))))
Utilitarianism ← Deontology
Let’s consider the case of a consistent deontological theory, that is, one where all moral dilemmas have been resolved. A utility function UF can be given such that morally good formulas receive a utility of 1 and morally bad formulas receive a utility of -1. Morally permissible formulas receive 0 utility and thus do not affect the moral judgments. The utility function may be very sparse compared to the typical image of a utility function that continuously measures how good or bad a state of affairs is. The lack of conflicts ensures that one won’t run into situations where one is under three overlapping obligations and one prohibition, whereby the aggregation protocol might deviate from the deontic conflict resolution protocol. Some priority orderings among obligations could probably be encoded into different utility values. Negative utilitarianism uses lexicographic prioritization to ensure that reducing pain (negative utility) is always prioritized over increasing joy (positive utility). A similar schema with an ordered tuple of utility functions could be used for incomparable obligations and prohibitions.
(=>
  (and
    (equal (SimpleValueJudgmentToUtilityAssignmentSentenceFn ?VJS) ?UAS)
    (equal ?VJS (modalAttribute ?FORMULA ?MORALATTRIBUTE))
    (instance ?FORMULA Formula)
    (instance ?MORALATTRIBUTE MoralAttribute))
  (exists (?UF)
    (and
      (instance ?UF UtilityFormulaFn)
      (=>
        (equal ?MORALATTRIBUTE MorallyGood)
        (equal ?UAS
          (equal (AssignmentFn ?UF ?FORMULA) 1)))
      (=>
        (equal ?MORALATTRIBUTE MorallyBad)
        (equal ?UAS
          (equal (AssignmentFn ?UF ?FORMULA) -1)))
      (=>
        (equal ?MORALATTRIBUTE MorallyPermissible)
        (equal ?UAS
          (equal (AssignmentFn ?UF ?FORMULA) 0))))))
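The ordered-tuple idea for incomparable obligations could be sketched as follows, mirroring negative utilitarianism's lexicographic prioritization. This is only a sketch: lexPreferred is a hypothetical relation introduced here, and only the two-function case is shown.

```
;; Sketch: ?FORMULA1 is lexicographically preferred to ?FORMULA2 when
;; the higher-priority utility function ?UF1 strictly favors it...
(=>
  (and
    (instance ?UF1 UtilityFormulaFn)
    (instance ?UF2 UtilityFormulaFn)
    (greaterThan (AssignmentFn ?UF1 ?FORMULA1) (AssignmentFn ?UF1 ?FORMULA2)))
  (lexPreferred (ListFn ?UF1 ?UF2) ?FORMULA1 ?FORMULA2))

;; ...and only when ?UF1 ties does the lower-priority ?UF2 decide.
(=>
  (and
    (instance ?UF1 UtilityFormulaFn)
    (instance ?UF2 UtilityFormulaFn)
    (equal (AssignmentFn ?UF1 ?FORMULA1) (AssignmentFn ?UF1 ?FORMULA2))
    (greaterThan (AssignmentFn ?UF2 ?FORMULA1) (AssignmentFn ?UF2 ?FORMULA2)))
  (lexPreferred (ListFn ?UF1 ?UF2) ?FORMULA1 ?FORMULA2))
```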
Utilitarianism ← Target-Centered Virtue Ethics
One interesting approach is to define a utility function for each virtue such that if a formula describes a process that hits the virtue’s target, then the utility is 1 and otherwise it is 0. Then the action that maximizes an aggregation of these utility functions will be the best one can do in terms of overall virtuosity. If one wishes to define a more precise protocol to determine how the relevant virtues interact, then one needs to encode this into the aggregation protocol.
(=>
  (and
    (actionHitsVirtueTarget ?IPROC ?VIRTUE)
    (realizesFormula ?IPROC ?FORM))
  (virtueSatisfiedByFormula ?FORM ?VIRTUE))
(=>
  (and
    (virtueUtilityFor ?UF ?VIRTUE)
    (instance ?UF VirtueUtilityFormulaFn)
    (virtueSatisfiedByFormula ?FORM ?VIRTUE))
  (equal (AssignmentFn ?UF ?FORM) 1))
(=>
  (and
    (virtueUtilityFor ?UF ?VIRTUE)
    (instance ?UF VirtueUtilityFormulaFn)
    (not (virtueSatisfiedByFormula ?FORM ?VIRTUE)))
  (equal (AssignmentFn ?UF ?FORM) 0))
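One minimal way to realize the aggregation of per-virtue utilities is an additive combination. This is only a sketch: aggregateVirtueUtilityFor is a hypothetical relation, only the two-virtue case is shown, and simple addition is just one of many possible aggregation protocols.

```
;; Sketch: the aggregate utility of a formula over a pair of virtues is
;; the sum of the per-virtue utilities. aggregateVirtueUtilityFor is a
;; hypothetical relation introduced for this sketch.
(=>
  (and
    (virtueUtilityFor ?UF1 ?VIRTUE1)
    (virtueUtilityFor ?UF2 ?VIRTUE2)
    (aggregateVirtueUtilityFor ?AUF (ListFn ?UF1 ?UF2)))
  (equal (AssignmentFn ?AUF ?FORM)
    (AdditionFn
      (AssignmentFn ?UF1 ?FORM)
      (AssignmentFn ?UF2 ?FORM))))
```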
Closing the Loop: From Deontology to Virtue and Back Again
Now that the core "X is good" type of judgment can be passed from paradigm to paradigm, we'd like to be able to piece the translation functions together into a full circle: Value Judgments → Imperatives → Utilities → Virtues as Desires → Virtues as Targets → Value Judgments. Some work remains with regard to the treatment of vices and moral permissibility, which likely deserves better treatment than the suggestion to normalize the theories first. One vision is that a common core ethical theory could be naturally shared among paradigms, while each paradigm is then used to tackle the domains for which it provides the clearest guidance.
(=>
  (instance ?X SimpleValueJudgmentSentence)
  (equal ?X
    (TargetTCVEToValueJudgmentSentenceFn
      (SimpleVirtueDesireToTargetSentenceFn
        (UtilityAssignmentToVirtueDesireFn
          (SimpleValueJudgmentToUtilityAssignmentSentenceFn
            (SimpleValueJudgmentToImperativeSentenceFn ?X)))))))