Virtue Ethics

Virtue ethics is the ethical paradigm that judges the morality of an action based on the character of the agent performing the action.

A virtuous agent is one who possesses virtues.

An action is right if and only if it is what a virtuous agent would characteristically (i.e., acting in character) do in the circumstances.

On Virtue Ethics by Rosalind Hursthouse

Discussion

The paradigm of virtue ethics focuses on character traits that make up a good life, eudaimonia, and lead to appropriate, optimal behavior. Aristotle’s Nicomachean Ethics explains the nature of virtue well. Virtue ethics can be seen as focusing on learning in real-world settings, which can make it appear harder to define precisely than deontology or utilitarianism (Jakob Stenseke interestingly presents the learning-theory perspective on virtue ethics in On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines). Christine Swanton’s target-centered virtue ethics is a modern approach that aims to specify the target that a virtue aims to fulfill (the learning objective) and the field in which the virtue applies. (I believe that target-centered virtue ethics is structurally a deontological theory in that it focuses on describing actions as virtuous if they fulfill a formula, taking the focus off of agent traits. Thus Swanton’s theory could be a useful bridge for translating between virtue ethics and deontological theories.)

Virtue ethics works with the virtues of people (some core virtues include intelligence, practical wisdom, and compassion) to respond to tricky situations. For example, by deontology, one must follow the rule to “never lie” unless there is an exception, whereas an honest person will wish to tell the truth and generally do so, carefully weighing when and how to respond in exceptional circumstances. A utilitarian will calculate the benefits and harms of telling the truth in each case (which is impractical, so the utilitarian will probably conclude that it’s likely best to develop the disposition toward honesty or to follow heuristic rules most of the time). Virtues aim to capture the capacity of an agent to effectively deal with specific aspects of situations, often reflecting that the agent has the correct motivations and that adequate learning has taken place (for example, someone who tells the truth only to avoid being exposed as a liar may lie when they can get away with it, which would indicate they have not properly learned the virtue of honesty).
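The contrast above between per-case calculation and a settled disposition can be sketched in a toy model. All function names, parameters, and decision criteria here are invented for illustration, not part of the formalization:

```python
# Toy contrast between per-case utilitarian calculation and an honest
# disposition. All names and decision criteria are illustrative.

def utilitarian_tells_truth(benefits: float, harms: float) -> bool:
    """Recompute the balance of consequences for every single case."""
    return benefits >= harms

def honest_agent_tells_truth(exceptional: bool, judged_harmful: bool) -> bool:
    """Tell the truth by default; deliberate only in exceptional cases."""
    if not exceptional:
        return True  # the settled disposition: no calculation needed
    return not judged_harmful  # careful weighing happens only here

print(utilitarian_tells_truth(benefits=2.0, harms=5.0))                  # False
print(honest_agent_tells_truth(exceptional=False, judged_harmful=True))  # True
```

The honest agent’s default branch never consults a consequence calculation, mirroring the point that the disposition does the work in ordinary cases.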

The virtues can be defined as trait attributes, a subclass of moral attributes (like good and bad), which are in turn normative attributes. (The primary reason to do this is to avoid over-determining the high-level theory; traits could generally suffice.)

A definitional challenge for interpreting virtues through the lens of moral value judgments lies in defining what a virtuous person would do in similar circumstances. The notion of a situation is not defined in SUMO’s knowledge base (the WordNet term is mapped to subjective assessment attribute, basically implying that the user must determine what the situation is). The notion of similarity is also missing. There is some subjectivity to the choice of similarity measures (including whether the similarity judgments are measures, mathematically speaking) and to how to characterize what a generic agent with a virtue is likely to do. Philosophically, this isn’t a problem. Practically, it renders formalization more difficult.
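Since the choice of similarity measure is left open, here is a minimal sketch of one option: treat a situation as a set of descriptive features and use Jaccard overlap. The feature names are invented, and this is only one of many defensible measures:

```python
# One possible situation-similarity measure: Jaccard overlap between
# feature sets. Feature names are invented for illustration.

def jaccard_similarity(situation_a: set, situation_b: set) -> float:
    """Ratio of shared features to all features across both situations."""
    if not situation_a and not situation_b:
        return 1.0  # two featureless situations count as identical
    return len(situation_a & situation_b) / len(situation_a | situation_b)

s1 = {"stranger-in-danger", "public-place", "low-personal-risk"}
s2 = {"stranger-in-danger", "public-place", "high-personal-risk"}
print(jaccard_similarity(s1, s2))  # 0.5: two shared of four total features
```

A threshold on such a score could then stand in for the missing `similar` relation, though where to set it is exactly the subjective choice noted above.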

In the case that an entity is virtuous, it’s easy to say that what ey do is right. I opted to say that if a virtuous agent is likely to take an action [in a situation] and the agent’s virtue is relevant to this class of actions, then the action class is likely morally good [in similar situations]. (This seems justified because if, e.g., a brave person acts in a domain where bravery is required, then ey will probably act rightly. However, if the situation calls for honesty and ey’s only brave, then the virtue is irrelevant. Adding in likelihood is my own philosophical hunch to weaken the statement from one of certainty.)

The other direction was difficult to figure out. My best attempt is to say that if an action is morally good [in a given situation], then if an agent is virtuous with virtues that are relevant to this class of actions, the agent will likely take this action [in similar situations]. The main challenge is how to frame an action being good to take in a given situation (this is probably a topic that modal logic for commonsense reasoning researchers have invested a lot of thought in investigating). My best attempt is to say that it’s good for an agent who can take a given action in a situation to take it.
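The forward postulate can be caricatured in a few lines of Python. The virtues, action classes, and relevance table are invented stand-ins for the SUO-KIF machinery below:

```python
# Toy model of the forward meaning postulate: a relevant virtue plus a
# likely action yields a (defeasible) "likely morally good" judgment.
# The relevance table and names are invented for illustration.

RELEVANCE = {"bravery": {"rescuing"}, "honesty": {"truth-telling"}}

def likely_good(virtue: str, action_class: str, likely_done: bool) -> bool:
    """Infer 'likely good' only when the virtue is relevant to the action
    class AND the virtuous agent is likely to act."""
    return likely_done and action_class in RELEVANCE.get(virtue, set())

print(likely_good("bravery", "rescuing", likely_done=True))       # True
print(likely_good("bravery", "truth-telling", likely_done=True))  # irrelevant, so False
```

The second call illustrates the relevance gate: bravery licenses no conclusion about truth-telling, matching the brave-but-not-honest example above.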

SUMO

(documentation VirtueEthics EnglishLanguage "Virtue ethics is the ethical paradigm that judges the morality of an action 
based on the character of the agent performing an action.  A virtuous agent is one who possesses virtues.  
'An action is right if and only if it is what a virtuous agent would characteristically (i.e., acting in character) 
do in the circumstances' (On Virtue Ethics -- Right Action).")
(subclass VirtueEthics GeneralVirtueEthics)

(documentation VirtueEthicsTheory EnglishLanguage "A set of sentences assigning virtue or vice attributes.")
(subclass VirtueEthicsTheory GeneralVirtueEthicsTheory)

(theoryFieldPairSubclass VirtueEthics VirtueEthicsTheory)

(documentation VirtueEthicsSentence EnglishLanguage "A sentence of a virtue ethics language/theory.")      
(subclass VirtueEthicsSentence GeneralVirtueEthicsSentence)

(<=>
  (instance ?V VirtueEthicsTheory)
  (forall (?S)
    (=>
      (element ?S ?V)
      (or
        (instance ?S VirtueEthicsSentence)
        (exists (?VES)
          (and
            (instance ?VES VirtueEthicsSentence)
            (hasPurposeInArgumentFor ?S ?VES)))))))

Virtue Ethics is defined as a proposition, a subclass of General Virtue Ethics, which is a subclass of Ethics. (I made a general superclass of virtue ethics to encompass target-centered virtue ethics: it is the paradigm of ethical philosophies that focus on assigning virtues or vices to any entities, whether agents or actions.) A virtue ethics theory is defined as a set of virtue ethics sentences or sentences whose purpose is to be used in explanations for them.
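The theory-membership condition in the biconditional above can be sketched procedurally. The sentence strings and the two classifier predicates are illustrative stand-ins, not actual SUMO machinery:

```python
# Sketch of the VirtueEthicsTheory condition: a set is a theory iff every
# element is a virtue ethics sentence or serves in an argument for one.
# The tagging predicates below are invented placeholders.

def is_virtue_ethics_theory(sentences, is_ve_sentence, supports_ve_sentence):
    """Check the membership condition for every sentence in the set."""
    return all(is_ve_sentence(s) or supports_ve_sentence(s) for s in sentences)

theory = {"(attribute Gandhi Honest)",
          "(=> Premise (attribute Gandhi Honest))"}
is_ve = lambda s: s.startswith("(attribute")          # a bare attribution
supports = lambda s: "(attribute" in s and not is_ve(s)  # argues for one

print(is_virtue_ethics_theory(theory, is_ve, supports))  # True
```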

(documentation MoralVirtueAttribute EnglishLanguage "Moral Virtue Attributes are a subclass of Moral Attributes dealing with the virtues and vices.")
(subclass MoralVirtueAttribute MoralAttribute)

(subclass VirtueAttribute MoralVirtueAttribute)
(subclass ViceAttribute MoralVirtueAttribute)

(subclass VirtueAttribute PsychologicalAttribute)
(subclass ViceAttribute PsychologicalAttribute)

(documentation SimpleVirtueSentence EnglishLanguage "A sentence that describes a virtue/vice attribute assignment to an agent.")      
(subclass SimpleVirtueSentence VirtueEthicsSentence)    

(<=>
  (instance ?SENTENCE SimpleVirtueSentence)
  (exists (?AGENT ?VIRTUEATTRIBUTE)
    (and
      (equal ?SENTENCE (attribute ?AGENT ?VIRTUEATTRIBUTE))
      (instance ?AGENT AutonomousAgent)
      (instance ?VIRTUEATTRIBUTE MoralVirtueAttribute))))

(<=>
  (instance ?SENTENCE VirtueEthicsSentence)
  (exists (?SVS)
    (and
      (instance ?SVS SimpleVirtueSentence)
      (part ?SVS ?SENTENCE))))     

Virtue ethics theories deal with moral virtue attributes: virtues and vices. A simple virtue sentence attributes a virtue or vice to an agent, e.g., “Gandhi is honest.” Generic virtue ethics sentences are those that contain a simple virtue sentence as a part. To capture one way to express what a virtue means, a simple virtue desire sentence states that all agents with a given virtue desire the state of affairs expressed by a given formula (the ‘target’ can be seen as tagging what a virtuous agent desires).
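A recognizer for the SimpleVirtueSentence shape can be sketched as follows. The agent and attribute inventories are invented placeholders for a real knowledge base:

```python
# Sketch of recognizing the SimpleVirtueSentence shape defined above:
# a sentence of the form (attribute <agent> <virtue-or-vice>).
# The inventories below are illustrative stand-ins for a knowledge base.

KNOWN_AGENTS = {"Gandhi"}
MORAL_VIRTUE_ATTRIBUTES = {"Honest", "Brave", "Dishonest"}

def is_simple_virtue_sentence(sentence: str) -> bool:
    """Match (attribute AGENT VIRTUE) where both arguments are known."""
    parts = sentence.strip("()").split()
    return (len(parts) == 3
            and parts[0] == "attribute"
            and parts[1] in KNOWN_AGENTS
            and parts[2] in MORAL_VIRTUE_ATTRIBUTES)

print(is_simple_virtue_sentence("(attribute Gandhi Honest)"))  # True
print(is_simple_virtue_sentence("(instance Gandhi Human)"))    # False
```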

(documentation SimpleVirtueDesireSentence EnglishLanguage "A sentence that describes a virtue/vice assignment to an agent along with a formula agents with this virtue desire to fulfill.")
(subclass SimpleVirtueDesireSentence VirtueEthicsSentence)

(<=>
  (instance ?SENTENCE SimpleVirtueDesireSentence)
  (exists (?VIRTUEATTRIBUTE ?FORM)
    (and
      (equal ?SENTENCE
        (forall (?AGENT)
          (=>
            (and
              (instance ?AGENT AutonomousAgent)
              (attribute ?AGENT ?VIRTUEATTRIBUTE))
            (desires ?AGENT ?FORM)))))))

Rosalind Hursthouse’s claimed equivalence between moral value judgments and moral virtues can be stated as below: “An action is right if and only if it is what a virtuous agent would characteristically (i.e., acting in character) do in the circumstances.” Borrowing terminology from target-centered virtue ethics, for all fields (classes of situations) and all modes (classes of behaviors), it is morally good for an agent in a situation within the field to take an action in the mode (when it can) if and only if it is likely that all agents possessing the virtues relevant to the field and mode will take such an action in situations within the field.

(<=>
  (modalAttribute 
    (forall (?AGENT ?SITUATION)
      (=> 
        (and
          (equal ?SITUATION (SituationFn ?AGENT))
          (instance ?SITUATION ?FIELD)
          (capableInSituation ?MODE agent ?AGENT ?SITUATION))
        (exists (?IPROC)
          (and
            (agent ?IPROC ?AGENT)
            (instance ?IPROC ?MODE))))) MorallyGood)
  (modalAttribute 
    (forall (?AGENT ?SITUATION)
      (=>
        (and
          (forall (?VIRTUE)
            (=>
              (and
                (relevant ?VIRTUE ?FIELD)
                (relevant ?VIRTUE ?MODE))
             (attribute ?AGENT ?VIRTUE)))
          (equal ?SITUATION (SituationFn ?AGENT))
          (instance ?SITUATION ?FIELD)
          (capableInSituation ?MODE agent ?AGENT ?SITUATION))
        (exists (?IPROC)
          (and
            (agent ?IPROC ?AGENT)
            (instance ?IPROC ?MODE))))) Likely))

One way to interpret simple virtue sentences in the moral value judgment language is to say that “if an agent possesses a virtue/vice, then for all behaviors and situations where the virtue/vice is relevant to the situation and the behavior is possible for the agent, if the virtuous/vicious agent is likely to take the behavior, then it’s likely morally good/bad for agents in similar situations to take this action.” (It may be best to simplify the core meaning postulate to remove the situational aspect, because I can imagine many conditions on when and how something is good; target-centered virtue ethics provides field and mode aspects for this purpose. The likelihood wrapping the moral value judgment could also be debated: what the virtuous agent is likely to do, ‘characteristically’, is what is good, so we only need one probabilistic scope. However, other virtues may also be relevant in the situation, so due to imperfect coverage the second likelihood could be justified.)

(documentation SimpleVirtueToValueJudgmentSentenceFn EnglishLanguage "A UnaryFunction that maps simple virtue ethics sentences into value judgment sentences.")
(domain SimpleVirtueToValueJudgmentSentenceFn 1 SimpleVirtueSentence)
(range SimpleVirtueToValueJudgmentSentenceFn ValueJudgmentSentence)
(instance SimpleVirtueToValueJudgmentSentenceFn TotalValuedRelation)
(instance SimpleVirtueToValueJudgmentSentenceFn UnaryFunction)

(=>
  (and
    (equal (SimpleVirtueToValueJudgmentSentenceFn ?SVS) ?VJS)
    (equal ?SVS (attribute ?AGENT ?VIRTUEATTRIBUTE))
    (instance ?AGENT AutonomousAgent)
    (=> 
      (instance ?VIRTUEATTRIBUTE VirtueAttribute)
      (equal ?MORALATTRIBUTE MorallyGood))
    (=>
      (instance ?VIRTUEATTRIBUTE ViceAttribute)
      (equal ?MORALATTRIBUTE MorallyBad)))
  (equal ?VJS 
    (forall (?MODE ?FIELD ?SITUATION)
      (=> 
        (and
          (subclass ?MODE AutonomousAgentProcess)
          (relevant ?VIRTUEATTRIBUTE ?MODE)
          (relevant ?VIRTUEATTRIBUTE ?FIELD)
          (instance ?SITUATION ?FIELD)
          (capableInSituation ?MODE agent ?AGENT ?SITUATION)
          (modalAttribute
            (exists (?PROC)
              (and 
                (agent ?PROC ?AGENT)
                (instance ?PROC ?MODE)
                (equal ?SITUATION (SituationFn ?PROC)))) Likely))
        (modalAttribute 
          (modalAttribute
            (forall (?AGENT ?SITUATION1)
              (=> 
                (and
                  (equal ?SITUATION1 (SituationFn ?AGENT))
                  (similar ?AGENT ?SITUATION ?SITUATION1)
                  (capableInSituation ?MODE agent ?AGENT ?SITUATION1))
                (exists (?PROC)
                  (and
                    (agent ?PROC ?AGENT)
                    (instance ?PROC ?MODE)
                    (equal ?SITUATION1 (SituationFn ?PROC)))))) ?MORALATTRIBUTE) Likely)))))

This function translates from the subclass of value judgment sentences that describe a situation in which it is good/bad for agents capable of taking a class of actions to take one of the actions. In such cases, the virtue ethics sentence states that for all virtuous/vicious agents in similar situations who have the capability to take one of the actions and who possess a virtue/vice relevant to the class of actions and to the described situation, it is likely that they will take one of the actions.

(=>
  (and 
    (equal (SimpleSituationalActionValueJudgmentToVirtueSentenceFn ?SSAVJ) ?VES)
    (subclass ?CLASS AutonomousAgentProcess)
    (equal ?SSAVJ 
      (and 
        ?DESCRIPTION
        (modalAttribute 
          (forall (?AGENT ?SITUATION1)
            (=> 
              (and
                (equal ?SITUATION1 (SituationFn ?AGENT))
                (similar ?AGENT ?SITUATION1 (SituationFormulaFn ?DESCRIPTION))
                (capableInSituation ?CLASS agent ?AGENT ?SITUATION1))
              (exists (?PROC)
                (and
                  (agent ?PROC ?AGENT)
                  (instance ?PROC ?CLASS))))) ?MORALATTRIBUTE)))
    (=> 
      (equal ?MORALATTRIBUTE MorallyGood)
      (and 
        (equal ?VIRTUETYPE VirtueAttribute)
        (equal ?AGENTTYPE VirtuousAgent)))
    (=> 
      (equal ?MORALATTRIBUTE MorallyBad)
      (and
        (equal ?VIRTUETYPE ViceAttribute)
        (equal ?AGENTTYPE ViciousAgent))))
  (equal ?VES
    (forall (?AGENT)
      (=> 
        (and
          (instance ?AGENT ?AGENTTYPE)
          (exists (?VIRTUE)
            (and
              (instance ?VIRTUE ?VIRTUETYPE)
              (attribute ?AGENT ?VIRTUE)
              (relevant ?VIRTUE ?CLASS)
              (relevant ?VIRTUE (SituationFormulaFn ?DESCRIPTION)))))
        (modalAttribute 
          (forall (?SITUATION)
            (=>
              (and
                (equal ?SITUATION (SituationFn ?AGENT))
                (similar ?AGENT ?SITUATION (SituationFormulaFn ?DESCRIPTION))
                (capableInSituation ?CLASS agent ?AGENT ?SITUATION))
              (exists (?PROC)
                (and
                  (agent ?PROC ?AGENT)
                  (instance ?PROC ?CLASS)
                  (equal ?SITUATION (SituationFn ?PROC)))))) Likely)))))

The following approach could be preferable. As above, for each value judgment sentence that describes a situation in which it is good/bad for agents capable of taking a class of actions to take one of the actions, we posit an existentially quantified virtue in the output of the function that in essence covers behaving well in this situation: the virtue is relevant to the situation and to the class of actions. Then the virtue ethics sentence says that any agent possessing this virtue is likely to take the action in similar situations. In the case that there is a collection of known virtues, this virtue could be unified with one of them.

(=>
  (and 
    (equal (SimpleSituationalActionValueJudgmentToVirtueSentenceFn ?SSAVJ) ?VES)
    (subclass ?CLASS AutonomousAgentProcess)
    (equal ?SSAVJ 
      (and 
        ?DESCRIPTION
        (modalAttribute 
          (forall (?AGENT ?SITUATION1)
            (=> 
              (and
                (equal ?SITUATION1 (SituationFn ?AGENT))
                (similar ?AGENT ?SITUATION1 (SituationFormulaFn ?DESCRIPTION))
                (capableInSituation ?CLASS agent ?AGENT ?SITUATION1))
              (exists (?PROC)
                (and
                  (agent ?PROC ?AGENT)
                  (instance ?PROC ?CLASS))))) ?MORALATTRIBUTE)))
    (=> 
      (equal ?MORALATTRIBUTE MorallyGood)
      (and 
        (equal ?VIRTUETYPE VirtueAttribute)
        (equal ?AGENTTYPE VirtuousAgent)))
    (=> 
      (equal ?MORALATTRIBUTE MorallyBad)
      (and
        (equal ?VIRTUETYPE ViceAttribute)
        (equal ?AGENTTYPE ViciousAgent))))
  (exists (?VIRTUE)
    (and
      (instance ?VIRTUE ?VIRTUETYPE)
      (relevant ?VIRTUE ?CLASS)
      (relevant ?VIRTUE (SituationFormulaFn ?DESCRIPTION))  
      (equal ?VES
        (forall (?AGENT)
          (=> 
            (and
              (instance ?AGENT ?AGENTTYPE)
              (attribute ?AGENT ?VIRTUE))
            (modalAttribute 
              (forall (?SITUATION)
                (=>
                  (and
                    (equal ?SITUATION (SituationFn ?AGENT))
                    (similar ?AGENT ?SITUATION (SituationFormulaFn ?DESCRIPTION))
                    (capableInSituation ?CLASS agent ?AGENT ?SITUATION))
                  (exists (?PROC)
                    (and
                      (agent ?PROC ?AGENT)
                      (instance ?PROC ?CLASS)
                      (equal ?SITUATION (SituationFn ?PROC)))))) Likely)))))))

Curiously, target-centered virtue ethics’ framework of virtue aspects helps in translating to value judgment languages. The target encapsulates “what someone with a virtue will characteristically do”; if it can be expressed in a formula, then just tag it. There is a field in which the virtue is relevant. Sometimes there are constraints as to how one will approach a situation in line with a virtue: the mode. And there is a basis on which the virtue is deemed to be virtuous. The virtue ethics sentences don’t necessarily contain all of this information, instead specifying what possessing a virtue entails for an agent. Using virtue desire sentences, one can imagine a correspondence where the resulting target-centered virtue ethics sentence states that the virtue’s field is one that is relevant to the virtue and likely contains situations (of processes that realize the formula) and the virtue’s target is what agents possessing the virtue desire. This can then be translated into the situational action value judgment sentence stating that in situations within the field, it is likely good to take actions realizing the target (see the target-centered virtue ethics page for the definitions). The translation from target-centered virtue ethics sentences with a target and field is similarly simple: agents with the virtue desire to fulfill the target, which, following this translation, is the initial aspiration of the virtuous agent. (In the current translations, some of the information doesn’t line up for these functions to compose into a bijection. AI should be able to massage them into a bijection easily enough.) Simple situational action value judgment sentences translate directly into target-centered virtue ethics sentences, too. Thus, while there is the question of whether to focus on the targets as definitional aspects of what virtues truly are or on agent traits as foundational, the language provides a handy bridge for interpreting between virtue ethics and value judgments.
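The intended round trip between simple virtue desire sentences and field-target pairs can be sketched with invented dictionary representations. Unlike the actual SUO-KIF translations, this simplified form does compose into an identity, glossing over the information mismatches noted above:

```python
# Schematic round trip between a virtue desire sentence and a
# target-centered (field, target) pair. The dictionary encodings and
# example values are invented; they are not SUO-KIF.

def desire_to_tcve(virtue: str, target_formula: str, field: str) -> dict:
    """SimpleVirtueDesireToTCVESentenceFn, schematically: tag the virtue
    with a field and take the desired formula as its target."""
    return {"virtueField": (virtue, field),
            "virtueTarget": (virtue, target_formula)}

def tcve_to_desire(tcve: dict) -> tuple:
    """FTSimpleTCVEToVirtueDesireSentence, schematically: agents with the
    virtue desire to hit the target within the field."""
    virtue, field = tcve["virtueField"]
    _, target = tcve["virtueTarget"]
    return (virtue, target, field)

original = ("Honesty", "truth-is-told", "conversational-situations")
print(tcve_to_desire(desire_to_tcve(*original)) == original)  # True
```

In the real translations the field is only existentially constrained (it must likely contain the realizing situations), so the reverse direction does not recover a unique field; the toy version sidesteps this by carrying the field through explicitly.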

(documentation SimpleVirtueDesireToTCVESentenceFn EnglishLanguage "A UnaryFunction that maps simple virtue ethics desire sentences into target-centered virtue ethics sentences.")
(domain SimpleVirtueDesireToTCVESentenceFn 1 SimpleVirtueDesireSentence)
(range SimpleVirtueDesireToTCVESentenceFn TargetCenteredVirtueEthicsSentence)
(instance SimpleVirtueDesireToTCVESentenceFn TotalValuedRelation)
(instance SimpleVirtueDesireToTCVESentenceFn UnaryFunction)

(=>
  (and
    (equal (SimpleVirtueDesireToTCVESentenceFn ?SVDS) ?TCVE)
    (equal ?SVDS 
      (forall (?AGENT)
          (=>
            (attribute ?AGENT ?VIRTUE)
            (desires ?AGENT ?FORM))))
    (instance ?VIRTUE VirtueAttribute))
  (exists (?FIELD)
    (and
      (forall (?IPROC)
        (=> 
          (realizesFormula ?FORM ?IPROC)
          (modalAttribute (instance (SituationFn ?IPROC) ?FIELD) Likely)))
      (relevant ?VIRTUE ?FIELD)
      (equal ?TCVE 
        (and
          (virtueField ?VIRTUE ?FIELD)
          (virtueTarget ?VIRTUE ?FORM))))))

(documentation FTSimpleTCVEToVirtueDesireSentence EnglishLanguage "A UnaryFunction that maps Field-Target simple target-centered virtue ethics sentences to simple virtue desire sentences.")
(domain FTSimpleTCVEToVirtueDesireSentence 1 FTVirtueAspectSentence)
(range FTSimpleTCVEToVirtueDesireSentence SimpleVirtueDesireSentence)
(instance FTSimpleTCVEToVirtueDesireSentence TotalValuedRelation)
(instance FTSimpleTCVEToVirtueDesireSentence UnaryFunction)
(subrelation FTSimpleTCVEToVirtueDesireSentence CompleteSimpleTCVEToVirtueSentence)

(=>
  (and 
    (equal (FTSimpleTCVEToVirtueDesireSentence ?TVS) ?SVDS)
    (equal ?TVS 
      (and 
        (virtueField ?VIRTUE ?FIELD)
        (virtueTarget ?VIRTUE ?TARGET))))
  (equal ?SVDS
    (forall (?AGENT)
      (=>
        (attribute ?AGENT ?VIRTUE)
        (desires ?AGENT
          (forall (?SITUATION)
            (=>
              (and
                (equal ?SITUATION (SituationFn ?AGENT))
                (instance ?SITUATION ?FIELD))
              (exists (?IPROC)
                (and
                  (agent ?IPROC ?AGENT)
                  (actionHitsVirtueTarget ?IPROC ?VIRTUE))))))))))