Theory Examples
On this page, I will cover some examples of how the ethical paradigms might be fleshed out, whether via more specific high-level theories or via examples of specific rules, virtues, or utility functions. The intent is to suggest that working with detailed theories in the context of the high-level ethics ontology should be feasible. As with the SUMO ontology and knowledge base, encompassing most ethical theories in practically usable forms would be a significant project.
Honesty and Truthfulness
I define truthfulness and honesty as virtue attributes (in the style of agent-centric virtue ethics). There are multiple ways to interpret honesty, of which I’ll cover a few. I believe there is value in articulating which senses we wish to refer to.
(documentation Truthfulness EnglishLanguage "Truthfulness is a virtue denoting the tendency of an agent's communication to be truthful.")
(instance Truthfulness VirtueAttribute)
(documentation Honesty EnglishLanguage "Honesty is a virtue denoting the tendency of an agent to be honest, along with the character traits and beliefs that result in this tendency.")
(instance Honesty VirtueAttribute)
While there can be honest dealings in other domains, I focus on honest communication, defined as communication where the speaker believes the message to be true while communicating. A similar concept is true communication, where the message is in fact true regardless of the speaker’s beliefs.
(documentation HonestCommunication EnglishLanguage "Communication is honest when the communicator believes the message to be true while communicating.")
(subclass HonestCommunication Communication)
(<=>
(instance ?COMM HonestCommunication)
(and
(instance ?COMM Communication)
(instance ?AGENT CognitiveAgent)
(agent ?COMM ?AGENT)
    (patient ?COMM ?MESSAGE)
    (holdsDuring (WhenFn ?COMM)
      (believes ?AGENT
        (truth ?MESSAGE True)))))
(documentation TrueCommunication EnglishLanguage "Communication is true when the message is true during communication.")
(subclass TrueCommunication Communication)
(<=>
(instance ?COMM TrueCommunication)
(and
(instance ?COMM Communication)
    (patient ?COMM ?MESSAGE)
    (holdsDuring (WhenFn ?COMM)
      (truth ?MESSAGE True))))
We can say that an agent is intentionally honest if ey desire that every communication of eirs is honest. One could call an agent objectively/factually honest if it is likely that eir communications will be honest. And an agent is objectively truthful if it is likely that eir communications are true.
(<=>
(attribute ?AGENT Honesty)
(desires ?AGENT
(forall (?COMM)
(=>
(and
(instance ?COMM Communication)
(agent ?COMM ?AGENT))
(instance ?COMM HonestCommunication)))))
(<=>
(attribute ?AGENT Honesty)
(forall (?COMM)
(=>
(and
(instance ?COMM Communication)
(agent ?COMM ?AGENT))
(modalAttribute
(instance ?COMM HonestCommunication) Likely))))
(<=>
(attribute ?AGENT Truthfulness)
(forall (?COMM)
(=>
(and
(instance ?COMM Communication)
(agent ?COMM ?AGENT))
(modalAttribute
(instance ?COMM TrueCommunication) Likely))))
These distinctions may matter when discussing truthful AI (e.g., LLM-based AI systems): truthfulness can be objectively gauged from a system-external point of view. To discuss honesty, one needs the AI system to have a notion of belief about the state of the world. Without that, the system cannot be honest.
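As a minimal sketch of my own (not part of the definitions above), this precondition can be recorded explicitly: only agents that can stand in the believes relation, i.e., instances of SUMO’s CognitiveAgent, are candidates for the Honesty attribute.
;; Sketch (my addition): ascribing Honesty presupposes a believer.
;; In SUMO, believes is defined for CognitiveAgent, so an Honesty attribution
;; implies the bearer is a CognitiveAgent.
(=>
  (attribute ?AGENT Honesty)
  (instance ?AGENT CognitiveAgent))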
One can express some deontic rules on honesty, too. For example, every agent holds an obligation to ensure that every act of communication ey perform is honest. (One rule in SUMO implies that the agent of an act of Communication is a CognitiveAgent.)
We can also define lying as an act of ‘communication’ in which the communicator knows that the message is false during the act. Then it’s easy to declare that all agents are prohibited from lying. (ChatGPT successfully crafted holdsProhibition by analogy to holdsObligation for me; the massive open-source ontology dream will need good autoformalization.)
(holdsObligation
(forall (?COMM)
(=>
(and
(instance ?COMM Communication)
(agent ?COMM ?AGENT))
(instance ?COMM HonestCommunication))) ?AGENT)
(documentation Lying EnglishLanguage "Lying is a process of communication where the speaker knows that the message is not true.")
(subclass Lying Communication)
(=>
(and
(instance ?LYING Lying)
(agent ?LYING ?LIAR)
    (patient ?LYING ?MESSAGE))
  (holdsDuring (WhenFn ?LYING)
(knows ?LIAR
(truth ?MESSAGE False))))
(holdsProhibition
(exists (?COMM)
(and
(instance ?COMM Lying)
(agent ?COMM ?AGENT))) ?AGENT)
Example Deontological Imperative Theory
Now, how would an actual theory look? The only difference from the above is that the sentences are included as elements of the theory instead of existing at the top level of a SUMO .kif file. The following deontological imperative theory says “no killing or lying, and always communicate honestly.”
(instance DIT DeontologicalImperativeTheory)
(element (modalAttribute (exists (?IPROC) (instance ?IPROC Lying)) Prohibition) DIT)
(element (modalAttribute (exists (?IPROC) (instance ?IPROC Killing)) Prohibition) DIT)
(element (modalAttribute (forall (?IPROC) (=> (instance ?IPROC Communication) (instance ?IPROC HonestCommunication))) Obligation) DIT)
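As a hypothetical usage sketch (DI and JohnDoe are placeholder constants, not part of the theory), an agent adheres to DIT by holding a deontological philosophy paired with it:
;; Hypothetical sketch: JohnDoe holds a deontological philosophy DI whose
;; field-theory pair is the imperative theory DIT defined above.
(instance DI Deontology)
(theoryFieldPair DI DIT)
(holdsEthicalPhilosophy JohnDoe DI)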
Eudaimonia Hypothesis
Eudaimonia refers to the ‘good spirit’, ‘happiness’, or ‘flourishing’ one experiences when living the good, virtuous life. (See the Stanford Encyclopedia of Philosophy’s Virtue Ethics entry for a more thorough discussion.) There is the idea that what justifies counting certain dispositions as virtues is how they contribute to eudaimonia. I only sketch out a brief pointer in this direction with the following two rules, which state that possessing a virtue likely causes its possessor to be happy and likely increases the likelihood that its possessor is happy.
(<=>
(instance ?VIRTUEATTRIBUTE VirtueAttribute)
(modalAttribute
(causesProposition
(attribute ?AGENT ?VIRTUEATTRIBUTE)
      (attribute ?AGENT Happiness)) Likely))
(<=>
(instance ?VIRTUEATTRIBUTE VirtueAttribute)
(modalAttribute
(increasesLikelihood
(attribute ?AGENT ?VIRTUEATTRIBUTE)
      (attribute ?AGENT Happiness)) Likely))
Informed Consent
The principle of informed consent is that a person should have sufficient understanding prior to consenting to any large, potentially risky endeavor, such as a deal, contract, or surgical operation. The first example I tried to formally sketch out in this project involved informed consent for a surgery. In this case, I think the deontological form is the most natural: the appropriate virtues will either lead to being dutiful/lawful or to aiming to attain some form of informed consent anyway. The utilitarian case is trickier because, on a local/myopic view, the surgeon could reason that the patient will benefit from the surgery and that it should be done as soon as possible (so long as the patient allows it and does not suffer in resistance). One could wonder how a society would work if surgeons normally operated on this principle, speculating that the overall balance of wellbeing and suffering would be higher when something like informed consent is adhered to. Consequentialists and rule utilitarians may claim that this is the reasoning needed to justify the deontological form.
The example says that every surgeon (?DOC) holds an obligation to, for every surgery they perform on a patient (?PAT), explain the nature of the surgery so that the patient understands it, and then receive approval from the patient prior to the surgery. Weirdly, the term in the SUMO KB for explanation is at present a subclass of deductive argument, so I simply used communication. Moreover, interpreting is the closest process to understanding. To apply such ethical rules in practice, the logical relations in this domain will probably need to be massaged (at least a bit).
(=>
(instance ?DOC Surgeon)
(holdsObligation
(forall (?SURGERY)
(=>
(and
(instance ?SURGERY Surgery)
(agent ?SURGERY ?DOC)
(patient ?SURGERY ?PAT))
      (exists (?EXP ?EXPLAIN ?UNDERSTAND ?FORM)
(and
(instance ?EXP ExpressingApproval)
(patient ?EXP ?DOC)
(agent ?EXP ?PAT)
(result ?EXP
(confersNorm ?PAT
(exists (?SURGERYC)
(and
(instance ?SURGERYC Surgery)
(agent ?SURGERYC ?DOC)
(patient ?SURGERYC ?PAT)
(similar ?PAT ?SURGERYC ?SURGERY)
(similar ?DOC ?SURGERYC ?SURGERY))) Permission))
(instance ?EXPLAIN Communication)
(instance ?UNDERSTAND Interpreting)
(result ?EXPLAIN ?UNDERSTAND)
(agent ?EXPLAIN ?DOC)
(destination ?EXPLAIN ?PAT)
          (refers ?EXPLAIN ?SURGERY)
(agent ?UNDERSTAND ?PAT)
(realizesFormula ?SURGERY ?FORM)
(patient ?UNDERSTAND ?FORM)
(before (BeginFn (WhenFn ?EXPLAIN)) (EndFn (WhenFn ?UNDERSTAND)))
(before (EndFn (WhenFn ?UNDERSTAND)) (BeginFn (WhenFn ?EXP)))
(before (EndFn (WhenFn ?EXP)) (BeginFn (WhenFn ?SURGERY))))))) ?DOC))
Greatest Happiness Principle
The “fundamental (moral) axiom” is that “the greatest happiness of the greatest number is the measure of right and wrong.” (The SEP claims that Jeremy Bentham made this statement in A Fragment on Government, 1776.) This is a classic teleological goal of (consequentialist) utilitarianism: the reason ethical decisions reduce to optimizations is that the fundamental measure of right and wrong is achieving the greatest happiness (which involves minimizing suffering, too).
I define a form of utilitarianism aiming for the greatest number of happy people. I use desires to denote what an agent holding this ethical philosophy aspires to do. One could use a deontic obligation instead, but I wish to honor that utilitarianism is a distinct paradigm. (Furthermore, what does it mean for an agent to hold that ey have an obligation? That ey desire to adhere to it.) So adherents of this principle wish to take the best course of action they can in every situation, as determined by the “number of happy people” utility function.
(documentation GreatestHappinessPrincipleUtilitarianism EnglishLanguage "The Greatest Happiness principle stipulates that the best course of action is that which causes the greatest happiness.")
(subclass GreatestHappinessPrincipleUtilitarianism Utilitarianism)
(instance GreatestNumHappyPeopleUtilitarianism GreatestHappinessPrincipleUtilitarianism)
(<=>
  (holdsEthicalPhilosophy ?AGENT GreatestNumHappyPeopleUtilitarianism)
(desires ?AGENT
(forall (?SITUATION ?CPROC)
(=>
(bestActionByUtilityInSituationForAgent ?AGENT ?CPROC NumHappyPeopleUtilityFn ?SITUATION)
(exists (?IPROC)
(and
(agent ?IPROC ?AGENT)
(instance ?IPROC ?CPROC)
(equal ?SITUATION (SituationFn ?IPROC))))))))
One option for this utility function is to take a formula describing a situation and count the number of happy people in the future, beginning at the start of that situation. It’s pretty crude, yet the ontology would need to be expanded a lot to be much more precise.
(documentation NumHappyPeopleUtilityFn EnglishLanguage "A utility function that returns the number of happy people from the beginning of the situation to the end of time.")
(instance NumHappyPeopleUtilityFn UtilityFormulaFn)
(equal
(NumHappyPeopleUtilityFn ?F)
(CardinalityFn
(KappaFn ?HUMAN
(exists (?T)
      (and
        (instance ?HUMAN Human)
(before (BeginFn (WhenFn (SituationFormulaFn ?F))) ?T)
(holdsDuring ?T (attribute ?HUMAN Happiness)))))))
The idea here is, first, that it only makes sense to ask an agent to do the best that it can, not the best that any agent could do. Thus the “best action by utility in situation for (an) agent” relation should hold for the classes of actions that are at least as good as all other actions the agent can perform. The trick is how to describe the situation where the agent takes an action from a given class. It would be better to define a situational update function, but for now I use the formula adding the instantiation of an action of this class in this situation to the formula describing the original situation, which should mostly capture the intended semantics. (Practically, I think this example underpins the reliance of consequentialist utilitarianism on good, effective (causal) world models. Further, estimating the number of happy people across spacetime in the future lightcone across various counterfactual actions is highly non-trivial.)
(<=>
(bestActionByUtilityInSituationForAgent ?AGENT ?CPROC ?UF ?SITUATION)
(and
(capableInSituation ?CPROC agent ?AGENT ?SITUATION)
(describesSituation ?SITUATION ?SF)
(forall (?CPROC2)
(=>
(and
(capableInSituation ?CPROC2 agent ?AGENT ?SITUATION)
          (equal ?SF1 (and ?SF (exists (?IPROC) (and (instance ?IPROC ?CPROC) (agent ?IPROC ?AGENT) (equal (SituationFn ?IPROC) ?SITUATION)))))
          (equal ?SF2 (and ?SF (exists (?IPROC) (and (instance ?IPROC ?CPROC2) (agent ?IPROC ?AGENT) (equal (SituationFn ?IPROC) ?SITUATION))))))
(greaterThanOrEqualTo (AssignmentFn ?UF ?SF1) (AssignmentFn ?UF ?SF2))))))
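As a usage sketch (Alice and Sit1 are hypothetical placeholder constants), one could then query which action classes are best for a given agent in a given situation under the happy-people utility function:
;; Hypothetical query: bind ?CPROC to the action classes that are at least as
;; good, by NumHappyPeopleUtilityFn, as anything else Alice can do in Sit1.
(bestActionByUtilityInSituationForAgent Alice ?CPROC NumHappyPeopleUtilityFn Sit1)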
Rule Consequentialism
Full rule consequentialism holds that the moral goodness or badness of acts is determined in terms of deontic rules, and that these rules are determined in terms of their consequences. In essence, rule consequentialism is in the intersection of deontology and consequentialism, which is natural to express in the ontology! One can further specify that each sentence in a rule consequentialist theory should be backed by a consequentialist utilitarian argument. A consequentialist argument is one whose premises are either members of a set of consequences or refer to these (which, at a stretch, includes axioms for reasoning, etc., too). A consequentialist utilitarian argument further requires that each consequence used in the argument is evaluated by some utility function as a sub-proposition of the argument.
(documentation RuleConsequentialism EnglishLanguage "An ethical philosophy holding that rules should be selected based on (the goodness of) their consequences. See https://plato.stanford.edu/entries/consequentialism-rule/ for more info.")
(subclass RuleConsequentialism Deontology)
(subclass RuleConsequentialism Consequentialism)
(documentation RuleConsequentialistTheory EnglishLanguage "A family of ethical theories holding that rules should be selected based on (the goodness of) their consequences.")
(subclass RuleConsequentialistTheory DeontologicalTheory)
(subclass RuleConsequentialistTheory ConsequentialistTheory)
(subclass RuleConsequentialistTheory DeontologicalImperativeTheory)
(theoryFieldPairSubclass RuleConsequentialism RuleConsequentialistTheory)
(<=>
(instance ?RCT RuleConsequentialistTheory)
(forall (?S)
(=>
(element ?S ?RCT)
      (exists (?A ?C)
        (and
          (instance ?A ConsequentialistUtilitarianArgument)
          (conclusion ?A ?C)
          (containsInformation ?S ?C))))))
(documentation ConsequentialistUtilitarianArgument EnglishLanguage "An argument that is made on consequentialist grounds, namely,
by reference to the consequences of some action, where each consequence appealed to is evaluated by a utility function.")
(subclass ConsequentialistUtilitarianArgument ConsequentialistArgument)
(<=>
(instance ?ARGUE ConsequentialistUtilitarianArgument)
(and
(instance ?ARGUE Argument)
(exists (?CS)
(and
(instance ?CS ConsequenceSet)
(forall (?PREM)
(=>
(and
(premise ?ARGUE ?PREM)
(represents ?P ?PREM)
(element ?P ?CS))
            (exists (?UF ?PP)
              (and
                (subProposition ?PP ?ARGUE)
                (represents ?PP (AssignmentFn ?UF (instance ?P Consequence)))))))))))
Two-level Utilitarianism
Two-level utilitarianism is an attempt by R. M. Hare to refine rule consequentialism (a.k.a. rule utilitarianism) by combining it with standard ‘act’ utilitarianism. The idea is that agents should usually follow the ethical rules under rule consequentialism, but in special cases they should critically analyze the situation to determine the best course of action under act utilitarianism. Signs that critical thinking is called for include a conflict between rules and circumstances that are highly unusual, prompting additional analysis.
(documentation TwoLevelUtilitarianism EnglishLanguage "An ethical philosophy holding that usually rules should be followed where they apply,
and in some 'critical' situations, one should apply additional utilitarian moral reasoning. See https://en.wikipedia.org/wiki/Two-level_utilitarianism for more.")
(subclass TwoLevelUtilitarianism Ethics)
(documentation TwoLevelUtilitarianTheory EnglishLanguage "An ethical theory holding that usually rules should be followed where they apply,
and in some 'critical' situations, one should apply additional utilitarian moral reasoning. See https://en.wikipedia.org/wiki/Two-level_utilitarianism for more.")
(subclass TwoLevelUtilitarianTheory EthicalTheory)
(theoryFieldPairSubclass TwoLevelUtilitarianism TwoLevelUtilitarianTheory)
(documentation twoLevelUtilitarianTheories EnglishLanguage "(twoLevelUtilitarianTheories ?T ?R ?U) denotes that ?R is a Rule Consequentialist theory
and ?U is a utilitarian theory that make up the theory ?T.")
(domain twoLevelUtilitarianTheories 1 TwoLevelUtilitarianTheory)
(domain twoLevelUtilitarianTheories 2 RuleConsequentialistTheory)
(domain twoLevelUtilitarianTheories 3 UtilitarianTheory)
(instance twoLevelUtilitarianTheories TernaryPredicate)
(=>
(twoLevelUtilitarianTheories ?TLUT ?RUT ?UT)
(and
(subset ?RUT ?TLUT)
(subset ?UT ?TLUT)))
(=>
  (instance ?TLUT TwoLevelUtilitarianTheory)
(exists (?RUT ?UT)
(twoLevelUtilitarianTheories ?TLUT ?RUT ?UT)))
Every two-level utilitarian theory contains a rule consequentialist theory and a utilitarian theory as subsets. (A perk of the high-level ontology is how ethical philosophies can be merged in various ways fairly smoothly.)
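A hypothetical witness sketch (TLUT1, RCT1, and UT1 are placeholder constants) of how such a composite theory could be declared:
;; Hypothetical composite theory: TLUT1 is made up of the rule-consequentialist
;; theory RCT1 and the utilitarian theory UT1.
(instance TLUT1 TwoLevelUtilitarianTheory)
(instance RCT1 RuleConsequentialistTheory)
(instance UT1 UtilitarianTheory)
(twoLevelUtilitarianTheories TLUT1 RCT1 UT1)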
When an agent holding a two-level utilitarian philosophy is making a decision, if the evaluations of the two theories are similar, then the agent prefers the rule consequentialist theory to influence the decision. When the evaluations are not consistent, the agent prefers the utilitarian theory to influence the decision. The use of prefers, influences, and similar is weak, in my opinion. (Ironically, this weak form may align with the commonsense reasoning people actually employ in practice: one does a light evaluation under the utilitarian lens as part of checking whether the regular rules apply, and one only prefers one level over the other instead of following a rigorous protocol.)
(=>
(and
(instance ?DECIDE Deciding)
    (agent ?DECIDE ?AGENT)
(patient ?DECIDE ?CP)
(instance ?CP ChoicePoint)
(equal ?AGENT (ChoicePointAgentFn ?CP))
(holdsEthicalPhilosophy ?AGENT ?EP)
(instance ?RUP RuleConsequentialism)
(theoryFieldPair ?EP ?TLUT)
(twoLevelUtilitarianTheories ?TLUT ?RUT ?UT)
(theoryFieldPair ?UP ?UT)
(theoryFieldPair ?RUP ?RUT)
(similar ?AGENT
(evaluateTheory (MapSetFn ImperativeToValueJudgmentSentenceFn ?RUT) ?CP)
(evaluateTheory (MapSetFn UtilitarianToValueJudgmentSentenceFn ?UT) ?CP)))
(prefers ?AGENT
(influences ?RUP ?DECIDE)
(influences ?UP ?DECIDE)))
(=>
(and
(instance ?DECIDE Deciding)
    (agent ?DECIDE ?AGENT)
(patient ?DECIDE ?CP)
(instance ?CP ChoicePoint)
(equal ?AGENT (ChoicePointAgentFn ?CP))
(holdsEthicalPhilosophy ?AGENT ?EP)
(instance ?RUP RuleConsequentialism)
(theoryFieldPair ?EP ?TLUT)
(twoLevelUtilitarianTheories ?TLUT ?RUT ?UT)
(theoryFieldPair ?UP ?UT)
(theoryFieldPair ?RUP ?RUT)
(containsInformation (ListAndFn (SetToListFn (evaluateTheory (MapSetFn ImperativeToValueJudgmentSentenceFn ?RUT) ?CP))) ?RUTPROP)
(containsInformation (ListAndFn (SetToListFn (evaluateTheory (MapSetFn UtilitarianToValueJudgmentSentenceFn ?UT) ?CP))) ?UTPROP)
(not (consistent ?RUTPROP ?UTPROP)))
(prefers ?AGENT
(influences ?UP ?DECIDE)
(influences ?RUP ?DECIDE)))
Kant’s Categorical Imperative
The idea behind Immanuel Kant’s categorical imperative is to specify a simple rule from which all other duties and obligations can be derived. Kant put forth four formulations of the imperative that he believed are all equivalent, though he also believed he hadn’t yet found a perfected formulation. (See the SEP for further discussion. I believe this justifies Parfit’s attempts to further improve Kant’s categorical imperative!) I’ll primarily focus on the most well-known formulation, the first, and take a small stab at the second. They are:
- “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”
- “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.”
Kantian deontology is a subclass of deontology. In this example, I’ll explore working with an actual instance of a theory as an existential witness. The first necessary concept is to define what it means for an agent to act by a maxim. (I found it difficult to articulate what it means for a theory to suggest specific courses of action.) An agent acts by a maxim if the maxim entails that there is an obligation to satisfy some formula and the agent’s action realizes this formula. (Note that this allows for an agent unintentionally adhering to a maxim; the intentional element could be included.) An additional predicate is defined for the case where the specific action is known.
(documentation KantianDeontology EnglishLanguage "A form of deontology based on Kant's notion of the Categorical Imperative.")
(subclass KantianDeontology Deontology)
(documentation KantianDeontologicalTheory EnglishLanguage "A theory of Kantian deontology.")
(subclass KantianDeontologicalTheory DeontologicalImperativeTheory)
(theoryFieldPairSubclass KantianDeontology KantianDeontologicalTheory)
(instance KDT KantianDeontologicalTheory)
(instance KDT2 KantianDeontologicalTheory)
(documentation actsByMaxim EnglishLanguage "(actsByMaxim ?AGENT ?FORMULA) means that the agent performs an action adhering to the maxim ?FORMULA.")
(domain actsByMaxim 1 Agent)
(domain actsByMaxim 2 Formula)
(instance actsByMaxim BinaryPredicate)
(<=>
(actsByMaxim ?AGENT ?MAXIM)
  (exists (?IPROC ?F)
(and
(entails ?MAXIM (modalAttribute ?F Obligation))
(realizesFormula ?IPROC ?F)
(agent ?IPROC ?AGENT))))
(documentation actsByMaximInProc EnglishLanguage "(actsByMaximInProc ?AGENT ?FORMULA ?PROCESS) means that the agent acts by the maxim described by ?FORMULA in the ?PROCESS.")
(domain actsByMaximInProc 1 Agent)
(domain actsByMaximInProc 2 Formula)
(domain actsByMaximInProc 3 AutonomousAgentProcess)
(instance actsByMaximInProc TernaryPredicate)
(<=>
  (actsByMaximInProc ?AGENT ?MAXIM ?IPROC)
  (exists (?F)
    (and
      (entails ?MAXIM (modalAttribute ?F Obligation))
      (realizesFormula ?IPROC ?F)
      (agent ?IPROC ?AGENT))))
The first sentence in the theory says that every agent holds the obligation to only act by maxims such that the agent desires all agents to hold an obligation to act by the maxim whenever in a relevant situation, as if it were a “law of nature”. (The maxim being a law of nature implies one wishes every agent actually fulfilled the obligation without flaw, which is usually not the case with duties, obligations, and laws on Earth in 2024.) Next, every agent holds a prohibition on taking any action that is not in accordance with a maxim, thus requiring one to only adhere to universally willable laws. Finally, there’s the notion of perfect duty, whereby agents are prohibited from following any maxim whose universal adherence would lead to a contradiction.
(element
(holdsObligation
(<=>
(actsByMaxim ?AGENT ?MAXIM)
(desires ?AGENT
(forall (?AGENT2)
(holdsObligation
(=>
(relevant ?MAXIM (SituationFn ?AGENT2))
(actsByMaxim ?AGENT2 ?MAXIM)) ?AGENT2)))) ?AGENT) KDT)
(element
(holdsProhibition
(exists (?IPROC)
(and
(instance ?IPROC AutonomousAgentProcess)
(agent ?IPROC ?AGENT)
(not
(exists (?MAXIM)
(actsByMaximInProc ?AGENT ?MAXIM ?IPROC))))) ?AGENT) KDT)
(element
(=>
(entails
(forall (?AGENT)
(holdsObligation
(=>
(relevant ?MAXIM (SituationFn ?AGENT))
          (actsByMaxim ?AGENT ?MAXIM)) ?AGENT))
False)
(holdsProhibition
(actsByMaxim ?AGENT ?MAXIM) ?AGENT)) KDT)
Defining what it means to treat someone as an end or merely as a means poses some challenges. My first sketch is below: every agent A is prohibited from taking an action that uses an agent B as an instrument where this action has a purpose for A but has no purpose for B, and moreover B’s happiness is not a purpose of this action for A. As discussed in blog posts (see the page on my Personal Moral Stance or this post on Virtue Ethics Qua Learning Theory), I believe the distinction has to do with whether A takes B’s values/preferences/wellbeing into account when deciding what to do and how to do it. This inspires a second version prohibiting actions where A’s decision is not influenced by a desire for B’s happiness, which should imply an obligation to only take actions influenced by a desire for the happiness of all involved (as instruments), i.e., as ends.
(element
(holdsProhibition
    (exists (?IPROC ?AGENT2 ?PURP)
(and
(uses ?AGENT2 ?AGENT)
(agent ?IPROC ?AGENT)
(instrument ?IPROC ?AGENT2)
(hasPurposeForAgent ?IPROC ?PURP ?AGENT)
(not
          (exists (?PURP2)
(hasPurposeForAgent ?IPROC ?PURP2 ?AGENT2)))
(not
(hasPurposeForAgent ?IPROC (attribute ?AGENT2 Happiness) ?AGENT)))) ?AGENT) KDT2)
(element
(holdsProhibition
    (exists (?IPROC ?AGENT2 ?PURP)
(and
(uses ?AGENT2 ?AGENT)
(agent ?IPROC ?AGENT)
(instrument ?IPROC ?AGENT2)
(hasPurposeForAgent ?IPROC ?PURP ?AGENT)
(not
          (exists (?PURP2)
(hasPurposeForAgent ?IPROC ?PURP2 ?AGENT2)))
        (not
          (exists (?DECIDE ?CPROC)
            (and
              (instance ?DECIDE Deciding)
              (agent ?DECIDE ?AGENT)
              (result ?DECIDE ?CPROC)
              (instance ?IPROC ?CPROC)
              (influences
                (desires ?AGENT
                  (attribute ?AGENT2 Happiness))
                ?DECIDE)))))) ?AGENT) KDT2)
Gewirth’s Principle of Generic Consistency
Alan Gewirth proposed a foundational moral principle that is derivable from a few premises about what it means to be a self-understanding agent with some sense of purpose. (This could be fun to connect with the theory relating agency and meaning in A Theory of Foundational Meaning Generation in Autonomous Systems, Natural and Artificial.) David Fuenmayor and Christoph Benzmüller formalized this normative ethical theorem in Isabelle/HOL. The theorem statement is that “every prospective, purposive agent (PPA) has a claim right to its freedom and well-being (FWB).”
;; theorem PGC_strong: shows "⌊∀x. PPA x → (RightTo x FWB)⌋"
(=>
(instance ?AGENT PurposiveAgent)
(holdsRight (attribute ?AGENT Freedom) ?AGENT))
I’ve summarized the argument and loosely translated some of the core definitions, axioms, and lemmas into SUMO on the page for Alan Gewirth’s Proof for the Principle of Generic Consistency. To allow Gewirth’s theory to fit into the framework as a deontological imperative theory, I needed to expand the definition of theories to include background theory whose purpose is to contribute to arguments for the core ethical sentences. Below is the rule that connects the claim right to an obligation.
;; definition RightTo::"e⇒(e⇒m)⇒m"
;; where "RightTo a φ ≡ O(∀b. ¬InterferesWith b (φ a))"
(=>
(holdsRight ?FORM ?AGENT1)
(modalAttribute
(forall (?AGENT2)
(not
(inhibits ?AGENT2
(KappaFn ?PROC
(and
            (realizesFormula ?PROC ?FORM)
(agent ?PROC ?AGENT1)))))) Obligation))
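As a worked instantiation sketch (Alice is a hypothetical placeholder constant), chaining the PGC translation with the rule above: if Alice is a purposive agent, her claim right to freedom yields an obligation that no agent inhibit the processes realizing her freedom.
;; Hypothetical instantiation of the two rules above for a placeholder agent Alice.
(instance Alice PurposiveAgent)
;; From the PGC translation: (holdsRight (attribute Alice Freedom) Alice).
;; Applying the claim-right-to-obligation rule then gives:
(modalAttribute
  (forall (?AGENT2)
    (not
      (inhibits ?AGENT2
        (KappaFn ?PROC
          (and
            (realizesFormula ?PROC (attribute Alice Freedom))
            (agent ?PROC Alice)))))) Obligation)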
Target-Centered Virtue Ethics
Target-centered virtue ethics aims to describe the nature of virtues via four aspects: the field, the basis, the mode, and the target. I’ve tried to sketch out how the virtue aspects for two virtues, Honesty and Benevolence, might look. (I’m not 100% sure how to best express the aspects of the virtues.) The field of honesty is all situations that contain a social interaction. (One may wish to be more precise and specify that there are reasons to be dishonest in these social interactions; further, one could broaden the scope of the field beyond communication.) The bases of honesty are valuing good and valuing bonds. (I think. This doesn’t come from Swanton’s books.) To value promoting good means to desire to take actions that benefit an agent. The mode of honesty is honest communication. The target of honesty is for there to be honest communication, or, one could say, for there to not exist an instance of pretending. (Which requires stipulations that pretending is OK when everyone knows it’s pretending, for fun, and that it can be overruled by other virtues.)
(instance Honesty VirtueAttribute)
(<=>
(and
(virtueField Honesty ?SC)
(instance ?SITUATION ?SC))
(and
(instance ?SITUATION Situation)
    (exists (?SI)
(and
(instance ?SI SocialInteraction)
(part (SituationFn ?SI) ?SITUATION)))))
(instance ValuingGood Value)
(instance ValuingBonds Value)
(=>
(holdsValue ?AGENT ValuingGood)
(desires ?AGENT
(exists (?IPROC ?BENEFICIARY)
(and
(benefits ?IPROC ?BENEFICIARY)
(agent ?IPROC ?AGENT)))))
(virtueBasis Honesty ValuingGood)
(virtueBasis Honesty ValuingBonds)
(virtueMode Honesty HonestCommunication)
(virtueTarget Honesty
(exists (?COMM)
(instance ?COMM HonestCommunication)))
(virtueTarget Honesty
(not
(exists (?PRETEND)
(instance ?PRETEND Pretending))))
The field of benevolence covers situations where an agent is capable of benefiting another agent. The bases of benevolence are valuing/promoting good and bonds, similar to honesty. Two modes of benevolence are giving and helping. A simple target for benevolence is that a beneficiary (in the situation) is benefited. A more nuanced target requires that the beneficiary is not the same as the benefactor.
(instance Benevolence VirtueAttribute)
(<=>
(and
(virtueField Benevolence ?SC)
(instance ?SITUATION ?SC))
(exists (?CPROC ?AGENT ?BENEFICIARY)
(and
(capableInSituation ?CPROC agent ?AGENT ?SITUATION)
(modalAttribute
(exists (?IPROC)
(and
            (instance ?IPROC ?CPROC)
            (agent ?IPROC ?AGENT)
            (benefits ?IPROC ?BENEFICIARY))) Likely))))
(virtueBasis Benevolence ValuingGood)
(virtueBasis Benevolence ValuingBonds)
(virtueMode Benevolence Giving)
(virtueMode Benevolence Helping)
(virtueTarget Benevolence
(exists (?BENEFIT ?BENEFICIARY)
(benefits ?BENEFIT ?BENEFICIARY)))
(virtueTarget Benevolence
(exists (?BENEFACTOR ?BENEFIT ?BENEFICIARY)
(and
(benefits ?BENEFIT ?BENEFICIARY)
(agent ?BENEFIT ?BENEFACTOR)
      (not (equal ?BENEFACTOR ?BENEFICIARY)))))