Formal Ethics Ontology Worklog 2

The first edition was quite long, so this time, I’m going to write as I go.

It turns out that writing as I go becomes very, very messy. Especially as the direction in which to proceed when dealing with syntax becomes confusing and requires much back-editing. So please forgive me if there is some already-outdated content. Maybe I’ll learn firsthand why people tend to only report the polished result. Indeed, dear reader, unless you’re curious about the nuanced thought processes, waiting may be advisable!

First off, I finally set up the SUMOjEdit plugin, which makes type-checking much easier (as it filters out a lot of superfluous ‘false flag errors’ from the output generated when translating SUMO to TFA (typed first-order formulas with arithmetic)). For posterity, I should mention the command used.

java -Xmx7g -classpath /home/sumo/workspace/sigmakee/build/classes:/home/sumo/workspace/sigmakee/build/lib/* com.articulate.sigma.trans.SUMOKBtoTFAKB

Fixing ‘Deciding’ Again

Ironically, after updating Deciding in the official repo, it came to Jennie Pease’s attention that I’d made an egregious type error!  For you see, all CaseRoles basically work over instances, not classes, so referring to a subclass of behaviors as the patient of a decision process is out.  Which is a bummer as I still think it’s weird to decide over concrete instances.  Her idea was to make a set of instances be the patient, which is precisely what a class is in Chad’s Higher Order (HO) Set Theory interpretation of SUMO.  Instances are members of the universe, \(U\), and classes are subsets of \(U\).  Class is the powerset, \(P(U)\).  Thus, as a hack, we can use case roles on instances of sets, which happen to correspond to the sets of instances of a subclass of behaviors we wish to work with 😈🤓.
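
To keep that reading straight in my own head, here’s my gloss of the interpretation (my paraphrase, not Chad’s official formulation), with \(x\) ranging over instances and \(C, D\) over classes:

\[
x \in U, \qquad C \subseteq U, \qquad \mathrm{Class} = \mathcal{P}(U)
\]
\[
(\mathrm{instance}\; x\; C) \iff x \in C, \qquad (\mathrm{subclass}\; C\; D) \iff C \subseteq D
\]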

My first thought when seeing the Set hack was to use a set containing the subclasses of behavior being decided upon.  I was already talking about decision sets.  So this could actually streamline things.

;; For every instance of deciding, there is a non-empty set over which one is deciding.
(=>
  (instance ?DECIDE Deciding)
  (exists (?S)
    (and
      (instance ?S NonNullSet)
      (patient ?DECIDE ?S))))

;; For every option in a decision set for an agent,
;; the agent believes itself capable of performing this option.   
(=>
  (and
    (instance ?DECIDE Deciding)
    (agent ?DECIDE ?AGENT)
    (instance ?S Set)
    (patient ?DECIDE ?S)
    (element ?O ?S))
  (believes ?AGENT
    (capability ?O agent ?AGENT)))

;;  For every resulting decision set element, the agent believes it will enact that behavior in the future.  
(=>
  (and 
    (instance ?DECIDE Deciding)
    (agent ?DECIDE ?AGENT)
    (result ?DECIDE ?DECISION)
    (instance ?DECISION Set)
    (element ?O ?DECISION))
  (believes ?AGENT
    (holdsDuring 
      (FutureFn
        (WhenFn ?DECIDE))
      (exists (?P)
        (and
          (instance ?P ?O)
          (agent ?P ?AGENT))))))

;; The resulting decision-set is a subset of some decision set being decided over.       
(=>
  (and 
    (instance ?DECIDE Deciding)
    (result ?DECIDE ?DECISION)
    (instance ?DECISION Set))
  (exists (?S)
    (and
      (patient ?DECIDE ?S)
      (subset ?DECISION ?S))))

This isn’t bad, actually, and could result in cleaner synergy with how Deciding is actually used down the road. The Set-hack version, thanks to Jennie, allows me to keep the definitions almost exactly as I philosophically wanted, however! First, we need to define a ClassToSet function, which should work so long as Class isn’t a subclass of Class.

(documentation ClassToSetFn EnglishLanguage "A UnaryFunction that maps a Class into the set of instances of the Class.")
(domainSubclass ClassToSetFn 1 Class)
(instance ClassToSetFn TotalValuedRelation)
(instance ClassToSetFn UnaryFunction)
(range ClassToSetFn Set)

(<=>
    (element ?INSTANCE (ClassToSetFn ?CLASS))
    (instance ?INSTANCE ?CLASS))

So we claim that all patients of Deciding are sets of options corresponding to some class (pretty much by definition in the HO Set interpretation). Next, we can simply wrap the “subclass” in the “ClassToSetFn” as if type casting from one view of classes to a more manageable one. And the class can be used as desired. Easy peasy.

(=>
  (and
    (instance ?DECIDE Deciding)
    (patient ?DECIDE ?OPTIONSET))
  (exists (?CLASS)
    (and
      (equal
        (ClassToSetFn ?CLASS) ?OPTIONSET)
      (subclass ?CLASS IntentionalProcess))))

(=>
    (and 
        (instance ?DECIDE Deciding)
        (agent ?DECIDE ?AGENT)
        (patient ?DECIDE (ClassToSetFn ?OPTION)))
    (believes ?AGENT 
        (capability ?OPTION agent ?AGENT)))

(=>
    (and 
        (instance ?DECIDE Deciding)
        (agent ?DECIDE ?AGENT)
        (result ?DECIDE (ClassToSetFn ?DECISION)))
    (believes ?AGENT
        (holdsDuring
            (FutureFn
                (WhenFn ?DECIDE))
            (exists (?P)
                (and
                    (instance ?P ?DECISION)
                    (agent ?P ?AGENT)
                    (earlier
                        (WhenFn ?DECIDE)
                        (WhenFn ?P)))))))

Hopefully, I can stop refining definitions of Deciding soon. This is helping me to understand more of the “compiler” type details underlying formal (logic) systems, 🤓. My approach will be to see which form of Deciding winds up integrating best with the rest of the formalizations.

Uplifting Consequentialism to Classism

The definition of Consequentialism still confounds me, to be honest. It’s almost defined via negativa, i.e., by claiming that the utility function should not depend on virtue or duty/rule-based features of the situation. Defining ‘influence’ and ‘dependence’ is tricky in SUMO, for me. If working with a particular structure of utility function, maybe there would be some clear way to say that it only depends on the consequences of an act. Perhaps defining the utility function as mapping physical situations to real numbers?
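
Just to have something concrete to poke at, here’s a hypothetical sketch of that last idea; the name SituationUtilityFn and its signature are my placeholders, not anything that exists in the drafts:

(documentation SituationUtilityFn EnglishLanguage "A UnaryFunction that maps a Physical state of affairs to a RealNumber expressing how good or bad it is.  A placeholder for exploring consequentialist utility.")
(domain SituationUtilityFn 1 Physical)
(instance SituationUtilityFn UnaryFunction)
(range SituationUtilityFn RealNumber)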

Anyway, my best attempt thus far is to say that when there is a moral judgment concluding that «it is morally good/bad for there to be an instance of behavior X», then there exists a valid deductive argument with this conclusion. Moreover, for every premise of this argument, if it refers to a physical entity, then this entity is an outcome possibly resulting from an instance of behavior X. And no premise of the argument may refer to attributes of the agent (who could possibly be the agent of an instance of behavior X), nor may any premise refer to a possible manner of an instance of the behavior.

I think this can probably be compressed and distilled a bit. A philosophically interesting point is that the introduction of Possibility unveils an aspect of consequentialism that is easy to overlook: one can only truly gauge the moral quality of an action in hindsight because, before the act, one doesn’t know what will happen. Thus, insofar as consequentialist considerations will guide one’s actions, one is working with estimations as to the likely consequences of one’s actions and their evaluations. And one is uncertain as to whether the right choice has been made.

(and
    (refers Consequentialism ?STATE)
    (instance ?STATE Statement)
    (equal ?STATE
        (=>
            (and
                (instance ?JUDGE MoralJudging)
                (result ?JUDGE ?CONCLUSION)
                (or (equal ?MORALATTRIBUTE MorallyGood) (equal ?MORALATTRIBUTE MorallyBad))
                (equal ?CONCLUSION
                    (modalAttribute (hasInstance ?CBEHAVE) ?MORALATTRIBUTE)))
            (exists (?ARGUMENT)
                (and
                    (instance ?ARGUMENT ValidDeductiveArgument)
                    (conclusion ?ARGUMENT ?CONCLUSION)
                    (forall (?PREMISE)
                        (=> 
                            (premise ?ARGUMENT ?PREMISE)
                            (and
                                (=> 
                                    (and   
                                        (refers ?PREMISE ?PHYS)
                                        (instance ?PHYS Physical))
                                    (and
                                        (instance ?PHYS Outcome)
                                        (modalAttribute 
                                            (exists (?IBEHAVE)
                                                (and
                                                    (instance ?IBEHAVE ?CBEHAVE)
                                                    (result ?IBEHAVE ?PHYS))) Possibility)))
                                (forall (?ATT ?AGENT)
                                    (=>
                                        (and
                                            (modalAttribute
                                                (exists (?IBEHAVE)
                                                    (and
                                                        (instance ?IBEHAVE ?CBEHAVE)
                                                        (agent ?IBEHAVE ?AGENT))) Possibility)
                                            (attribute ?AGENT ?ATT))
                                        (not (refers ?PREMISE ?ATT))))
                                (forall (?MAN)
                                    (=>
                                        (modalAttribute
                                            (exists (?IBEHAVE)
                                                (and
                                                    (instance ?IBEHAVE ?CBEHAVE)
                                                    (manner ?MAN ?IBEHAVE))) Possibility)
                                        (not (refers ?PREMISE ?MAN))))))))))))

Moral Theories

I also said I’d like to explore distinguishing the moral theories in terms of the syntactic forms of the moral statements they make. In math/logic, a (formal) theory is simply a set of sentences (in a formal language). This can easily be added to SUMO:

(documentation Theory EnglishLanguage "A set of sentences.")
(subclass Theory Set)

(<=> 
    (instance ?T Theory)
    (forall (?S)
        (=> 
            (element ?S ?T)
            (instance ?S Sentence))))

A deontological theory is one that consists of statements of the form «Bla is morally good/bad». I am not sure what the best way to constrain the formulas is, so the implication is, for now, one way.

(documentation DeontologicalTheory EnglishLanguage "A set of sentences assigning moral attributes.")
(subclass DeontologicalTheory Theory)

(=> 
    (instance ?D DeontologicalTheory)
    (forall (?S)
        (=> 
            (element ?S ?D)
            (exists (?F ?MORALATTRIBUTE)
                (and 
                    (equal ?S
                       (modalAttribute ?F ?MORALATTRIBUTE))
                    (instance ?F Formula)
                    (or (equal ?MORALATTRIBUTE MorallyGood) (equal ?MORALATTRIBUTE MorallyBad)))))))

I’m not sure what the best way in SUMO is to specify that every element of a set has a specific syntactic form. Virtue Ethics theories consist of statements claiming that «So-and-so is virtuous/vicious».

I haven’t formalized it yet, but I might wish to include, «In such-and-such a situation, a person of virtue X is likely to do Y», in the set of sentence forms. One of the challenges is that I don’t yet know how to cleanly talk about situations in SUMO. One idea, thinking of the ontological level, is to simply define situation in SUMO and to define a few relevant “necessary but not sufficient” rules, even if I don’t connect it to much else. Then the notion of a situation will begin to be defined based on my usage!
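
Just to see what such a stub could look like, here’s a hypothetical sketch (not in any draft yet), with deliberately weak constraints:

(documentation Situation EnglishLanguage "A bounded portion of the world against which conduct can be evaluated.  Deliberately underspecified.")
(subclass Situation Physical)

;; Necessary but not sufficient: something happens during every situation.
(=>
  (instance ?SIT Situation)
  (exists (?PROC)
    (and
      (instance ?PROC Process)
      (during (WhenFn ?PROC) (WhenFn ?SIT)))))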

(documentation VirtueEthicsTheory EnglishLanguage "A set of sentences assigning virtue or vice attributes.")
(subclass VirtueEthicsTheory Theory)
                    
(<=> 
    (instance ?V VirtueEthicsTheory)
    (forall (?S)
        (=> 
            (element ?S ?V)
            (exists (?AGENT ?VIRTUE)
                (and
                    (equal ?S 
                        (attribute ?AGENT ?VIRTUE))
                    (instance ?AGENT AutonomousAgent)
                    (or (instance ?VIRTUE VirtueAttribute) (instance ?VIRTUE ViceAttribute)))))))

The utilitarian sentence structure is a bit tricky. If one considers the utility function as outputting how good/bad a thing is, then it makes more sense. To start, I sketched out two schemas:

  1. «The utility of doing Bla is >/</≥/≤/= to some number»
  2. «The utility of doing Bla is >/</≥/≤/= to the utility of doing Yadda.»

That is, we may make absolute utility judgments as to how ‘good’ or ‘bad’ doing something is, or we may make comparative utility judgments as to whether doing B is better/worse than doing Y.

(<=>
  (instance ?U UtilitarianTheory)
  (forall (?S)
    (=>
      (element ?S ?U)
      (exists (?CBEHAVE ?CBEHAVE2 ?C ?N)
        (and
          (subclass ?CBEHAVE AutonomousAgentProcess)
          (subclass ?CBEHAVE2 AutonomousAgentProcess)
          (or
            (equal ?S
              (?C
                (UtilitySubclassFn ?CBEHAVE) ?N))
            (equal ?S
              (?C
                (UtilitySubclassFn ?CBEHAVE)
                (UtilitySubclassFn ?CBEHAVE2))))
          (instance ?N Number)
          (or
            (equal ?C greaterThan)
            (equal ?C lessThan)
            (equal ?C greaterThanOrEqualTo)
            (equal ?C lessThanOrEqualTo)
            (equal ?C equal)))))))
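
One loose end: UtilitySubclassFn is used above but never declared anywhere in this log. A minimal declaration sketch, assuming it maps a class of behaviors to a number (the exact signature is my guess):

(documentation UtilitySubclassFn EnglishLanguage "A UnaryFunction that maps a subclass of AutonomousAgentProcess to a RealNumber expressing the utility of instantiating that subclass.")
(domainSubclass UtilitySubclassFn 1 AutonomousAgentProcess)
(instance UtilitySubclassFn UnaryFunction)
(range UtilitySubclassFn RealNumber)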

The sentence structure for a moral nihilistic theory might be, pragmatically, «If you do X, I will do Y to you». Direct negotiations 😎. Maybe I should actually find some way to formalize this! Given there’s no morally true statement (of the deontological variety), we’re left with offering rewards and threats to each other when we wish to influence each others’ actions.
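
For fun, here’s a rough, hypothetical sketch of what such a ‘negotiation theory’ could look like, mirroring the theory definitions above; NegotiationTheory and the exact sentence shape are my inventions:

(documentation NegotiationTheory EnglishLanguage "A set of sentences of the form «if you do X, I will do Y to you».")
(subclass NegotiationTheory Theory)

(=>
  (instance ?N NegotiationTheory)
  (forall (?S)
    (=>
      (element ?S ?N)
      (exists (?XCLASS ?YCLASS ?YOU ?ME)
        (and
          (subclass ?XCLASS AutonomousAgentProcess)
          (subclass ?YCLASS AutonomousAgentProcess)
          (equal ?S
            (=>
              (exists (?X)
                (and
                  (instance ?X ?XCLASS)
                  (agent ?X ?YOU)))
              (exists (?Y)
                (and
                  (instance ?Y ?YCLASS)
                  (agent ?Y ?ME)
                  (patient ?Y ?YOU))))))))))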

An interesting observation when specifying the formal structure of moral sentences in the various paradigms is that the ways I specified the moral paradigms in drafts 1-3 were all framed in the deontological language. One idea is that each paradigm has an overarching (normative) moral imperative. In deontology, one should try to take the right actions, that is, those which are morally good. In utilitarianism, one should try to maximize the utility (measured via situations, surveys, etc.). In virtue ethics, one should try to be a virtuous, good person. One could be more concerned about what it means to be an honest, compassionate person than about whether every single one of one’s actions is ‘good’. In each paradigm, one guides one’s actions via a different basis.

The idea that an action is right if and only if it is what a virtuous person would do is an attempt to pigeonhole virtue-based ethics into the deontological framework (because the logic/duty-based paradigm seems the most ‘proper’). Yes, try to take the action that benefits the most people via the best measurements (utility functions) we have. What does it matter whether we call your attempt at doing this ‘good’ or ‘bad’, i.e., why project it into the deontological view? One could also project ethical rules into the utilitarian view: if you follow proper moral precepts, then your actions will probably bring about good utility scores. And if they don’t, ahaha, they’re probably wrong! Or, we can project into the virtue-ethics view: if your actions usually result in good utility scores for all involved, you’re probably a fairly virtuous person. I plan to show that each paradigm can encapsulate the others — and the standard presentation is simply how one does the embedding into the deontological language. Maybe the formalization is getting close to the point where I should make sure these inter-paradigm mappings can be cleanly defined ;- ).

Draft 4

For the first few drafts, I figured that proper syntax and type checking weren’t that important as it’s better to explore the ideas w/o worrying so much that everything is ‘just right’. Also, I was putting off the time-sink of figuring out how to use Sigma. It seems the main error in draft 3 is the class-instance stuff, which, indeed, is tricky to solve, and trying to do so early might just take more time as I’d have less experience. Now I’ll try to make sure everything ‘compiles‘ (down to TFA) as I go 🤓🤖.

The first big question is how to weave the moral theory languages into the foundations of the ethical paradigms and moral judgments, which I founded draft 3 on. While in worklog 1, I expressed some skepticism about the nature of the seemingly endless quagmire of meta-ethical debates, I think they are, in part, trying to shed light on the relation between moral judgments and moral statements. Does a moral judgment by \(J\) about a moral statement \(S\) indicate that \(J\) believes \(S\) to be factually true or that \(J\) believes \(S\) to represent the state-of-the-art understanding as to effective, prosocial human conduct in modern societies? There’s some wiggle room as to what one believes should be done about the moral judgment \(J\) that \(S\) holds, in a sense.

There’s also some tedious house-cleaning to do (whose details will be elided): Adam structures SUMO files so that the core concept definitions, as well as support definitions and their documentation class headers, all come together. I’ve been separating them out, finding it easier, personally. I’ll probably wish to merge my repo with the sumo repo (in part to make use of the continuous integration feature an open-source contributor is working on), so draft 4 should be done in a style closer to Adam’s 😉.

It’s fun how improvements come up while processing the drafts: with increasing experience writing SUMO expressions, I see obvious ways to improve the crude drafts. The basic gist of a virtuous agent is that it’s an agent possessing virtues. Yet the definition below implicitly quantifies over all virtues, so possessing any single virtue suffices to count as virtuous, which is very generic.

(=>
  (and
    (instance ?AGENT AutonomousAgent)
    (instance ?VIRTUE VirtueAttribute)
    (attribute ?AGENT ?VIRTUE))
  (instance ?AGENT VirtuousAgent))

Maybe it’s more precise to say that the existence of a virtue possessed by an agent increases the likelihood that the agent is virtuous. Ultimately, we might wish to store in some state monad a profile of an agent’s virtues that gets updated with each exhibition of virtuous behavior and then define some class of functions that map from a virtue profile to an overall assessment as to whether an agent is virtuous or not. — Oh, I’m totally getting into the implementation layer again 😋🤭. The reverse direction should also be included: if someone is virtuous, then there’s some virtue that they possess.

(increasesLikelihood
  (exists (?VIRTUE)
    (and
      (instance ?AGENT AutonomousAgent)
      (instance ?VIRTUE VirtueAttribute)
      (attribute ?AGENT ?VIRTUE)))
  (instance ?AGENT VirtuousAgent))

(=>
  (instance ?AGENT VirtuousAgent)
  (exists (?VIRTUE)
    (and
      (instance ?VIRTUE VirtueAttribute)
      (attribute ?AGENT ?VIRTUE))))

So it turns out that to define moral judging on top of moral theories, I should actually throw in moral theories! Go figure 😜. Given that I defined utilitarian theories without any reference to moral attributes, I’m not sure what else to add here. Maybe the next incarnation of Zar will get it 😉. What I really wished for is the moral sentence encapsulating all three paradigms, however.

(documentation MoralTheory EnglishLanguage "A set of sentences in a moral theory")
(subclass MoralTheory Theory)

(documentation MoralSentence EnglishLanguage "A sentence of a moral theory")
(subclass MoralSentence Sentence)

(<=>
  (instance ?SENTENCE MoralSentence)
  (exists (?THEORY)
    (and
      (instance ?THEORY MoralTheory)
      (element ?SENTENCE ?THEORY))))

Now I can simply say that moral judging is a subclass of judging where the result is a moral sentence. This is almost too clean, right?

(documentation MoralJudging EnglishLanguage "A subclass of Judging where the proposition believed is 
a moral sentence from a moral theory (in a given paradigm).")
(subclass MoralJudging Judging)

(=>
  (instance ?JUDGE MoralJudging)
  (exists (?SENTENCE)
    (and
      (instance ?SENTENCE MoralSentence)
      (result ?JUDGE ?SENTENCE))))

Aesthetically, I sort of like that the question of which aspect of physical situations is being judged is left open. Technically, from the behavior, one can extract “the situation in which the behavior takes place”, so it sort of “doesn’t matter”. It seems hard to capture every possible sentence type. However, maybe I only need the sentences that form the final judgments — the arguments leading up to these are distinct?

Allow me to take a moment to honor this beautiful helper function. May it rest in peace 🙏.

(instance hasInstance Predicate)
(valence hasInstance 1)

(<=>
    (hasInstance ?CLASS)
    (exists (?INSTANCE)
        (instance ?INSTANCE ?CLASS)))

As usual, starting with Moral Nihilism is just the easiest ;- D. Now that we have moral sentences, it’s even easier: there is simply no moral sentence that is true ;- ).

(documentation MoralNihilism EnglishLanguage "'Moral Nihilism is the view that nothing is morally wrong' (SEP - Moral Skepticism). 
Moral Nihilism can also be defined as 'the view that there are no moral facts' (Ethics: The Fundamentals).")
(subclass MoralNihilism Ethics)
  
(=>
  (instance ?MN MoralNihilism)
  (exists (?PROP ?STATE)
    (and
      (subProposition ?PROP ?MN)
      (containsInformation ?STATE ?PROP)
      (similar ?STATE
        (not
          (exists (?MORALSTATEMENT)
            (and
              (instance ?MORALSTATEMENT MoralSentence)
              (instance ?MORALSTATEMENT Fact))))))))

Abstracting out behavior, frankly, starts to feel a bit weird. Referencing the RL model, I want STATES and ACTIONS. Probably even with my current modular approach, I might wish to weave behavior into each form again. :- p. The structure is becoming such that it may be easier to import Formal Ethics into the mix! So I might as well explicitly state that there is no behavior class that it’s morally wrong to instantiate. This fits the form of a deontological moral sentence, so it’s implied by the above version; however, it’s probably good to see this version as well.

(=>
  (instance ?MN MoralNihilism)
  (exists (?PROP ?STATE)
    (and
      (subProposition ?PROP ?MN)
      (containsInformation ?STATE ?PROP)
      (similar ?STATE
        (not
          (exists (?BEHAVIORCLASS)
            (and
              (subclass ?BEHAVIORCLASS AutonomousAgentProcess)
              (modalAttribute
                (exists (?BEHAVIORINSTANCE)
                  (instance ?BEHAVIORINSTANCE ?BEHAVIORCLASS)) MorallyWrong))))))))

Deontology v4.0

In wondering how to link up the high-level moral sentences with the moral philosophy paradigms (and then the moral judging), one very simple attempt is to say that every instance of a deontological ethical philosophy will contain propositions that correspond to the sentences of some deontological theory. (Which excludes ineffable, intuitive deontological philosophies 🤓.) This is basically a claim linking the semantics (‘abstract’) with the syntax (‘physical sentences’)1.

(=>
  (instance ?D Deontology)
  (exists (?PROP ?SENT)
    (and
      (subProposition ?PROP ?D)
      (containsInformation ?SENT ?PROP)
      (instance ?SENT MoralSentence)
      (exists (?THEORY)
        (and 
          (instance ?THEORY DeontologicalTheory)
          (element ?SENT ?THEORY))))))    

This is probably not what we want, yet it’s fun to see. The scope of deontological sentences sketched above probably doesn’t actually cover all such philosophies. Maybe there are some with a basis other than evaluations of formulae into good or bad?

Experimenting with the high-level framing, I question whether I wish to quantify over all instances of deontology. Isn’t that overly strong? The field may be a bit fuzzy, tho, lol, I am trying to clarify it, which pressures people to conform or adopt a new name. How about I be generous and simply claim that there exists a deontological theory and its interpretation such that for every contained sentence, there exists a rule and a deontic operator such that, in the case of obligation, it’s good for the rule to be realized and bad for the rule to not be realized. When the rule specifies a behavior that autonomous agents are obliged to do, then the typical agential form appears.

(exists (?MT ?MP)
  (and
    (instance ?MP Deontology)
    (instance ?MT DeontologicalTheory)
    (forall (?S)
      (=>
        (element ?S ?MT)
        (and
          (exists (?PROP)
            (and
              (containsInformation ?S ?PROP)
              (subProposition ?PROP ?MP)))
          (exists (?RULE ?DEONTIC)
            (and
              (instance ?DEONTIC DeonticAttribute)
              (modalAttribute ?RULE ?DEONTIC)
              (=>
                (equal ?DEONTIC Obligation)
                (equal ?S
                  (and
                    (modalAttribute
                      (exists (?PROC)
                        (realizesFormula ?PROC ?RULE)) MorallyGood)
                    (modalAttribute
                      (not
                        (exists (?PROC)
                          (realizesFormula ?PROC ?RULE))) MorallyBad)))))))))))

Yet I might as well do the syntax → semantics direction, claiming that for every deontological theory, there exists a deontological philosophy containing the propositions expressed by the sentences of the theory.

(=>
  (instance ?MT DeontologicalTheory)
  (exists (?MP)
    (and
      (instance ?MP Deontology)
      (forall (?S)
        (=>
          (element ?S ?MT)
          (exists (?PROP)
            (and
              (containsInformation ?S ?PROP)
              (subProposition ?PROP ?MP))))))))

And once I have this, the existential claim about a standard deontological theory can be simplified:

(exists (?MT)
  (and
    (instance ?MT DeontologicalTheory)
    (forall (?S)
      (=>
        (element ?S ?MT)
        (exists (?RULE ?DEONTIC)
          (and
            (instance ?DEONTIC DeonticAttribute)
            (modalAttribute ?RULE ?DEONTIC)
            (=>
              (equal ?DEONTIC Obligation)
              (equal ?S
                (and
                  (modalAttribute
                    (exists (?PROC)
                      (realizesFormula ?PROC ?RULE)) MorallyGood)
                  (modalAttribute
                    (not
                      (exists (?PROC)
                        (realizesFormula ?PROC ?RULE))) MorallyBad))))))))))

MIT’s Intro to Ethics course defines rights as:

A right is a justified claim, entitlement or assertion of what a rights-holder is due.

I think it’s not such a stretch to hypothesize that most rights-based deontological theories can be translated into obligation, permission, and prohibition-based deontological theories. Thus let’s just claim that this holds for all deontological theories. This is pretty similar to draft v3.0 and a bit simpler due to leaving out (1) moral judgments and (2) the fact that the rules are autonomous agent processes. I’m not sure if, somewhere down the line, not mentioning that the “?PROC” variables are instances of processes will cause an issue; at least, requiring theorem provers to do this type inference will likely slow them down.

(=>
  (instance ?MT DeontologicalTheory)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (exists (?RULE ?DEONTIC)
        (and
          (instance ?DEONTIC DeonticAttribute)
          (modalAttribute ?RULE ?DEONTIC)
          (=>
            (equal ?DEONTIC Obligation)
            (equal ?S
              (and
                (modalAttribute
                  (exists (?PROC)
                    (realizesFormula ?PROC ?RULE)) MorallyGood)
                (modalAttribute
                  (not
                    (exists (?PROC)
                      (realizesFormula ?PROC ?RULE))) MorallyBad))))
          (=>
            (equal ?DEONTIC Prohibition)
            (equal ?S
              (modalAttribute
                (exists (?PROC)
                  (realizesFormula ?PROC ?RULE)) MorallyBad)))
          (=>
            (equal ?DEONTIC Permission)
            (equal ?S
              (forall (?CPROC)
                (=>
                  (realizesFormulaSubclass ?CPROC ?RULE)
                  (modalAttribute
                    (exists (?PREN)
                      (and
                        (instance ?PREN AutonomousAgentProcess)
                        (prevents ?PREN ?CPROC))) MorallyBad))))))))))
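
Circling back to the rights remark: as a hedged illustration (mine, and relying on my memory of SUMO’s holdsRight relation between a Formula and a CognitiveAgent), a rights claim can be pushed into the permission slot, after which the Permission clause above supplies the claim that preventing its realization is morally bad:

(=>
  (and
    (instance ?HOLDER CognitiveAgent)
    (holdsRight ?RULE ?HOLDER))
  (modalAttribute ?RULE Permission))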

The next hanging question is how to weave moral judgments into the picture. In the first attempt, it seems I wish to go back to the definition of ethics. Recall that I’m working with Lillie’s definition from his 1948 book, An Introduction to Ethics:

Ethics is the normative science of the conduct of human beings living in society, which judges this conduct to be right or wrong, to be good or bad, or in some similar way.

(and
  (refers Ethics
    (and ?MT ?GROUP))
  (instance ?MT MoralTheory)
  (instance ?GROUP Group)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (and
        (believes ?GROUP ?S)
        (refers ?S (ClassToSetFn ?B))
        (subclass ?B AutonomousAgentProcess)
        (believes ?GROUP
          (exists (?MEMB)
            (and
              (member ?MEMB ?GROUP)
              (capability ?B agent ?MEMB))))))))

So Ethics refers to a moral theory and a group where for each sentence in the moral theory, the group believes the sentence. Each sentence refers to a class of behaviors where the group believes that members of the group are capable of performing the behavior. Being a Moral Theory, the sentence should refer to assigning a moral attribute to the behavior. To integrate moral judgments, one approach might be to say that for each sentence, there exists a judgment by the group that the sentence holds.

(and
  (refers Ethics
    (and ?MT ?GROUP))
  (instance ?MT MoralTheory)
  (instance ?GROUP Group)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (exists (?JUDGE)
        (and
          (instance ?JUDGE MoralJudging)
          (agent ?JUDGE ?GROUP)
          (result ?JUDGE ?S)
          (refers ?S (ClassToSetFn ?B))
          (subclass ?B AutonomousAgentProcess)
          (believes ?GROUP
            (exists (?MEMB)
              (and
                (member ?MEMB ?GROUP)
                (capability ?B agent ?MEMB)))))))))

A subtle addition here is that this implies that for each ethical code endorsed by the group, there was some point in time when a judging process took place before which the sentence was not believed and after which the sentence is believed (to be true). There is still some question as to what is meant by believing that so-and-so is morally good. This could also be sketched out.

I also note that this is still a deontology-based definition, for virtue-based approaches focus on the character of the agent, and utility-based approaches focus on measuring situations. Nonetheless, these alternative lenses are still generally made in the scope of decisions and actions. A fun philosophical question is: what if one can effectively explore the virtue and utility-based analyses even where the connection to action is unclear? For example, consider the quality of peace of mind exhibited in awakened states (such as persistent non-symbolic experience and fundamental wellbeing): is this not a sort of virtue that is exhibited constantly regardless of the actions one takes externally? Can one make some counterargument in terms of ‘internal actions’? It’s not so obviously clear. Possibly, yet the action-centric view likely makes it harder to discuss.

Yet there is the notion that, as with decision theory, all of these moral considerations serve to guide our actions, and the evaluations play a role in these decision processes. Thus even if the moral sentences of some paradigms don’t explicitly deal with action, the judgment takes actions into account. The precise way in which virtues guide our actions may even vary, which Christine Swanton’s Target-Centered Virtue Ethics tries to account for. So while the sentence may not refer to behavior, perhaps it’s better to subtly shift things to say that the moral judgment does.

(and
  (refers Ethics
    (and ?MT ?GROUP))
  (instance ?MT MoralTheory)
  (instance ?GROUP Group)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (exists (?JUDGE)
        (and
          (instance ?JUDGE MoralJudging)
          (agent ?JUDGE ?GROUP)
          (result ?JUDGE ?S)
          (refers ?JUDGE (ClassToSetFn ?B))
          (subclass ?B AutonomousAgentProcess)
          (believes ?GROUP
            (exists (?MEMB)
              (and
                (member ?MEMB ?GROUP)
                (capability ?B agent ?MEMB)))))))))

. . . Maybe I am being a bit too pedantic here. And it’s time to move on for now. So the general schema seems to be that we separate and link the syntax and semantics of moral theories and shift the moral judging over to the level of a group of agents, such as a society.

Ah, an additional issue is that the actual sentences may describe something of the situation in which a morally good/bad element is pinpointed. At least, this is what I did above: I tried to say that what is ‘bad‘ is taking action that prevents something from being done when there is permission to do it, which violates the strict syntax of a deontological sentence I gave above. Let’s take a look at this version of the deontology rule at least:

(=>
  (instance ?MT DeontologicalTheory)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (exists (?RULE ?DEONTIC)
        (and
          (instance ?DEONTIC DeonticAttribute)
          (modalAttribute ?RULE ?DEONTIC)
          (=>
            (equal ?DEONTIC Obligation)
            (exists (?S2 ?G ?B)
              (and
                (element ?S2 ?MT)
                (equal ?G
                  (modalAttribute
                    (exists (?PROC)
                      (realizesFormula ?PROC ?RULE)) MorallyGood))
                (equal ?B
                  (modalAttribute
                    (not
                      (exists (?PROC)
                        (realizesFormula ?PROC ?RULE))) MorallyBad))
                (or 
                  (and (equal ?S ?G) (equal ?S2 ?B))
                  (and (equal ?S ?B) (equal ?S2 ?G))))))
          (=>
            (equal ?DEONTIC Prohibition)
            (equal ?S
              (modalAttribute
                (exists (?PROC)
                  (realizesFormula ?PROC ?RULE)) MorallyBad)))
          (=>
            (equal ?DEONTIC Permission)
            (exists (?S2 ?TEMP1 ?TEMP2)
              (and
                (element ?S2 ?MT)
                (equal ?TEMP1
                  (modalAttribute
                    (exists (?CPROC)
                      (realizesFormulaSubclass ?CPROC ?RULE)) MorallyNeutral))
                (equal ?TEMP2
                    (modalAttribute
                        (exists (?CPROC)
                          (and
                            (realizesFormulaSubclass ?CPROC ?RULE)
                            (exists (?PREN)
                              (and
                                (instance ?PREN AutonomousAgentProcess)
                                (prevents ?PREN ?CPROC))))) MorallyBad))
                (or 
                  (and (equal ?S ?TEMP1) (equal ?S2 ?TEMP2))
                  (and (equal ?S ?TEMP2) (equal ?S2 ?TEMP1)))))))))))

First, for obligation, I need to say that a second sentence exists so that both sides are covered: one says that doing the duty is good, and the other says that not doing the duty is bad. And for permission, the whole formula describing the situation of «something there is permission to do being prevented» is denoted as bad. We then need to add that it is not bad for there to be a process that realizes a permission rule. I added a morally neutral attribute for this purpose. So it seems possible to define things while only working with this very specific form of sentence structure. And maybe that’s good: I should have added in the neutral moral attribute 😋.
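
For the record, a hedged sketch of that declaration, assuming the MoralAttribute class that shows up later in the translation functions (the actual placement in my files may differ):

(documentation MorallyNeutral EnglishLanguage "A MoralAttribute indicating that a state of affairs is neither morally good nor morally bad.")
(instance MorallyNeutral MoralAttribute)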

It’s probably better, however, to simply expand the scope of deontological sentences to include, at least, compound sentences (such as in the case of obligation). Or instead of defining some special language, I can just use a simple hack (because SUMO doesn’t define ‘subFormula’, let alone the syntax for SUO-KIF formulas 😜):

(=>
  (instance ?D DeontologicalTheory)
  (forall (?S)
    (=>
      (element ?S ?D)
      (exists (?F ?MORALATTRIBUTE)
        (and
          (part (modalAttribute ?F ?MORALATTRIBUTE) ?S)
          (instance ?F Formula)
          (or
            (equal ?MORALATTRIBUTE MorallyGood)
            (equal ?MORALATTRIBUTE MorallyBad)))))))

Now I can just keep the original version of deontology 😉😎.

It seems that trying to introduce syntactic forms for precision adds a lot of interesting headaches. This helps me to sympathize with my forefathers, who worked out so many kinds of formal logic! Writing a work log where I need to go back and re-do things from earlier in the post is also interesting. This document is definitely not intended to be a clean summary of the current state of the work. Philosophically and sociopolitically, I am in favor of being open as to the nature of research: let’s show the struggles and confusions, not just the shiny results. Many open-source projects release incremental git commits, so why not stream-of-consciousness logs? Furthermore, writing about my progress actually seems to help clarify my thought — it doesn’t need to be public, but why not?

I also realized that I could perhaps define ImperativeTheory and ValueJudgmentTheory as two forms of deontological theories. The above is the value theory, and the imperative theory is full of sentences about obligations, permissions, and prohibitions.

(=> 
  (instance ?MT DeontologicalImperativeTheory)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (exists (?F ?DEONTICATTRIBUTE)
        (and
          (part (modalAttribute ?F ?DEONTICATTRIBUTE) ?S)
          (instance ?F Formula)
          (instance ?DEONTICATTRIBUTE DeonticAttribute))))))

Now the above definition could actually be framed as translating an Imperative Theory into a Value Judgment Theory. Yet first some additional house cleaning is needed. Gee, I thought I was going to just jump into Virtue Ethics, yet it seems that there are many syntactic games to play. As I sort of thought earlier, it seems there’s a need to define the moral sentences in more granularity to work with them:

(documentation DeontologicalSentence EnglishLanguage "A sentence of a deontological language/theory.")      
(subclass DeontologicalSentence MoralSentence)

(<=>
  (instance ?SENTENCE DeontologicalSentence)
  (exists (?THEORY)
    (and
      (instance ?THEORY DeontologicalTheory)
      (element ?SENTENCE ?THEORY))))

(documentation SimpleValueJudgmentSentence EnglishLanguage "A sentence that describes the attribution of a moral value judgment.")      
(subclass SimpleValueJudgmentSentence DeontologicalSentence)

(<=>
  (instance ?SENTENCE SimpleValueJudgmentSentence)
  (exists (?F ?MORALATTRIBUTE)
    (and
      (equal (modalAttribute ?F ?MORALATTRIBUTE) ?SENTENCE)
      (instance ?F Formula)
      (or
        (equal ?MORALATTRIBUTE MorallyGood)
        (equal ?MORALATTRIBUTE MorallyBad)
        (equal ?MORALATTRIBUTE MorallyNeutral)))))

(documentation ValueJudgmentSentence EnglishLanguage "A sentence that contains a simple value judgment sentence as a part.")
(subclass ValueJudgmentSentence DeontologicalSentence)

(<=>
  (instance ?SENTENCE ValueJudgmentSentence)
  (exists (?VJS)
    (and
      (instance ?VJS SimpleValueJudgmentSentence)
      (part ?VJS ?SENTENCE))))

(documentation SimpleImperativeSentence EnglishLanguage "A sentence that describes an imperative deontic operator.")      
(subclass SimpleImperativeSentence DeontologicalSentence)

(<=>
  (instance ?SENTENCE SimpleImperativeSentence)
  (exists (?F ?DEONTICATTRIBUTE)
        (and
          (equal (modalAttribute ?F ?DEONTICATTRIBUTE) ?SENTENCE)
          (instance ?F Formula)
          (instance ?DEONTICATTRIBUTE DeonticAttribute))))

(documentation ImperativeSentence EnglishLanguage "A sentence that contains a simple imperative sentence as a part.")
(subclass ImperativeSentence DeontologicalSentence)    

(<=>
  (instance ?SENTENCE ImperativeSentence)
  (exists (?IT)
    (and
      (instance ?IT SimpleImperativeSentence)
      (part ?IT ?SENTENCE))))

So the simple version specifies the «this formula is good/bad/neutral» syntax, and then generic value judgment sentences are those which contain this as a part. And the situation is similar for imperative sentences. This overly generic formulation means that a sentence discussing both imperatives and value judgments will be both. I note that this modularization makes the specification of the theories very simple, so it’s probably better.

(documentation ValueJudgmentTheory EnglishLanguage "A set of sentences assigning moral attributes.")
(subclass ValueJudgmentTheory DeontologicalTheory)

(=>
  (instance ?D ValueJudgmentTheory)
  (forall (?S)
    (=>
      (element ?S ?D)
      (instance ?S ValueJudgmentSentence))))

(documentation DeontologicalImperativeTheory EnglishLanguage "A set of sentences containing moral imperatives.")
(subclass DeontologicalImperativeTheory DeontologicalTheory)

(=> 
  (instance ?MT DeontologicalImperativeTheory)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (instance ?S ImperativeSentence))))

Subjectively it’s becoming clear that draft 4 will be a very chaotic place of brainstorming that needs to be refined for draft 5. Now there’s the idea to translate the deontic operator statements into value judgment sentences about judging an action as good, bad, or neutral:

(documentation ImperativeToValueJudgmentSentenceFn EnglishLanguage "A UnaryFunction that maps simple imperative sentences into value judgment sentences.")
(domain ImperativeToValueJudgmentSentenceFn 1 SimpleImperativeSentence)
(range ImperativeToValueJudgmentSentenceFn ValueJudgmentSentence)
(instance ImperativeToValueJudgmentSentenceFn TotalValuedRelation)
(instance ImperativeToValueJudgmentSentenceFn UnaryFunction)

(=> 
  (and 
    (equal (ImperativeToValueJudgmentSentenceFn ?ITS) ?VJS)
    (equal ?ITS (modalAttribute ?RULE ?DEONTIC))
    (instance ?RULE Formula)
    (instance ?DEONTIC DeonticAttribute))
  (and
    (=>
      (equal ?DEONTIC Obligation)
      (equal ?VJS
        (and
          (modalAttribute
            (exists (?PROC)
              (realizesFormula ?PROC ?RULE)) MorallyGood)
          (modalAttribute
            (not
              (exists (?PROC)
                (realizesFormula ?PROC ?RULE))) MorallyBad))))
    (=>
      (equal ?DEONTIC Prohibition)
      (equal ?VJS
        (modalAttribute
          (exists (?PROC)
            (realizesFormula ?PROC ?RULE)) MorallyBad)))
    (=>
      (equal ?DEONTIC Permission)
      (equal ?VJS
        (forall (?CPROC)
          (=>
            (realizesFormulaSubclass ?CPROC ?RULE)
            (modalAttribute
              (exists (?PREN)
                (and
                  (instance ?PREN AutonomousAgentProcess)
                  (prevents ?PREN ?CPROC))) MorallyBad)))))))

Now we can take this to simplify the statement about the nature of the actual value judgment theories (beyond the scope of what is possible to express given the syntactic restrictions):

(=>
  (instance ?MT ValueJudgmentTheory)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (exists (?ITS)
        (and
          (instance ?ITS SimpleImperativeSentence)
          (equal ?S (ImperativeToValueJudgmentSentenceFn ?ITS)))))))

All I was saying before was that for every sentence in a value judgment theory, its form is equal to the output of translating a simple imperative sentence into a value judgment sentence. The additional element present in the draft 3 versions is that each of these moral sentences is the result of a moral judgment, which in draft 4, I am aiming to express as part of the notion of ethics: namely that some group, such as society, has judged that each sentence in a moral theory is true.

An additional perk is that I was planning to map between each of the paradigms, so already getting practice at this by mapping among deontological theories is good. To be honest, I am uncertain about trying to separate out the paradigms in terms of which aspects of situations they judge. Moreover, perhaps it’s better to distinguish the value judgment theories from deontological theories. For example, consider: «It’s good to be honest». This is clearly in line with virtue ethics and is essentially stating that honesty is a virtue. Whereas the deontological version is: «It is good to tell the truth». So the value judgment language is paradigm agnostic.
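
As a hedged illustration of the contrast (Honesty and TellingTruth are hypothetical names here, not terms I’ve actually declared), the same value-judgment wrapper can take either a character trait or a behavior class:

;; Virtue-ethics flavored: «It's good to be honest.»
(modalAttribute
  (attribute ?AGENT Honesty) MorallyGood)

;; Deontology flavored: «It is good to tell the truth.»
(modalAttribute
  (exists (?ACT)
    (and
      (instance ?ACT TellingTruth)
      (agent ?ACT ?AGENT))) MorallyGood)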

And now you can witness my being convinced of this change. 😉. Making this change is easy enough. It also means that a lot of my work above is basically worthy of a scratchpad 🤣🤓. This is soooooooooo a worklog! In this view, mapping from the paradigm’s language of moral sentences into value judgments will be one way in which they are interpreted. Yet each paradigm can sort of be interpreted in the languages and points of view of the others. A fun example to think about could be:

(modalAttribute
  (exists (?PET)
    (and
        (instance ?PET DomesticAnimal)
        (located ?PET ?LOC))) Prohibition)

My current translation will say that «It is wrong to take an action that realizes a pet being at ?LOC». Which seems good and fine. Maybe I want to instead say, «It is wrong for a pet to be at ?LOC». And to then formulate some lemma, «It is wrong to take an action that causes the realization of a situation that is wrong». If one wishes to really focus on morally judging choices and actions, then the generic statement would not be used, and its use would be considered sloppy.

I probably wish to work on the basics without worrying too much about these details as it’s not hard to formulate either one. The generic version almost seems too easy and too broad — yet focusing precisely on the actions could be too specific.
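
For instance, a hedged sketch of the «it is wrong to realize a wrong situation» lemma, in the vocabulary already in play (realizesFormula is my own relation from above):

(=>
  (modalAttribute ?SITUATION MorallyBad)
  (modalAttribute
    (exists (?ACT)
      (and
        (instance ?ACT AutonomousAgentProcess)
        (realizesFormula ?ACT ?SITUATION))) MorallyBad))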

On this topic, there is a very simple translation from value judgments to imperatives:

(=> 
  (and 
    (equal (ValueJudgmentToImperativeSentenceFn ?VJS) ?ITS)
    (equal ?VJS (modalAttribute ?SITUATION ?MORALATTRIBUTE))
    (instance ?SITUATION Formula)
    (instance ?MORALATTRIBUTE MoralAttribute))
  (and
    (=>
      (equal ?MORALATTRIBUTE MorallyGood)
      (equal ?ITS 
        (modalAttribute ?SITUATION Obligation)))
    (=>
      (equal ?MORALATTRIBUTE MorallyBad)
      (equal ?ITS
        (modalAttribute ?SITUATION Prohibition)))
    (=>
      (equal ?MORALATTRIBUTE MorallyNeutral)
      (equal ?ITS
        (modalAttribute ?SITUATION Permission)))))

If it’s good, you’re obliged to do it. If it’s bad, don’t do it. If it’s neutral, you probably have permission. At least weakly speaking. We could do the same in the reverse direction: if you’re obliged to do it, then it’s good to do it and bad to not do it. If the formula describes a situation, then the situation is good. If the formula describes an action, then the action is good. This nearly synonymous treatment of value judgments and deontic operators is why I thought that the value judgments are basically the deontological point of view. There is the challenge that when describing things in SUMO, everything is rule-based. Perhaps my addition that it’s bad to prevent someone from doing something they’re permitted to do is extraneous: it should be an additional lemma. Translating back and forth leads to the conclusion that it is wrong to prevent someone from doing something morally neutral.

Strictly speaking, it may be debatable as to whether this correspondence holds: are we necessarily obliged to do that which is good? Surely there are optional goods, no? This would seem to weaken the strength of obligations quite a lot (or else to severely limit the scope of use for morally good, bad, and neutral). Maybe this just means weight parameters are needed 🤓😋.

I suppose I’ll say that for now, I have some new machinery for interpreting deontic operators in terms of value judgments and a claim that ethics involves a group passing judgments over a theory of moral sentences. What does it mean, practically speaking, that a group believes «theft is bad»? Will they do something about it? I’d like to at least sketch this out.
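
A first hedged sketch of the practical side (mine, and deliberately weak): if the group judges a class of behavior to be bad, then the group at least desires that its members not instantiate it.

(=>
  (and
    (instance ?GROUP Group)
    (subclass ?BEHAVIORCLASS AutonomousAgentProcess)
    (believes ?GROUP
      (modalAttribute
        (exists (?B)
          (instance ?B ?BEHAVIORCLASS)) MorallyBad)))
  (desires ?GROUP
    (not
      (exists (?B ?MEMB)
        (and
          (member ?MEMB ?GROUP)
          (instance ?B ?BEHAVIORCLASS)
          (agent ?B ?MEMB))))))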

Maybe the corresponding inverse can be simply implemented:

;; Maybe it should be simplified to the following:
(=> 
  (and 
    (equal (GenericImperativeToValueJudgmentSentenceFn ?ITS) ?VJS)
    (equal ?ITS (modalAttribute ?RULE ?DEONTIC))
    (instance ?RULE Formula)
    (instance ?DEONTIC DeonticAttribute))
  (and
    (=>
      (equal ?DEONTIC Obligation)
      (equal ?VJS
        (modalAttribute ?RULE MorallyGood)))
    (=>
      (equal ?DEONTIC Prohibition)
      (equal ?VJS
        (modalAttribute ?RULE MorallyBad)))
    (=>
      (equal ?DEONTIC Permission)
      (equal ?VJS 
        (modalAttribute ?RULE MorallyNeutral)))))

If there’s an obligation, it’s good for it to be true/realized. If there’s a prohibition, it’s bad. If there’s permission, then it’s neutral. Whether the rule is «instantiating an action» or not is left up to the user.

I realized that we don’t need to say that it’s bad not to do what is obliged in the translation. Why? Because an obligation to do ?F implies a prohibition on not doing ?F:

(=>
  (and
    (modalAttribute ?FORMULA Obligation)
    (=>
      (modalAttribute ?FORMULA Obligation)
      (not
        (modalAttribute
          (not ?FORMULA) Permission)))
    (=>
      (not
        (modalAttribute
          (not ?FORMULA) Permission))
      (modalAttribute
        (not ?FORMULA) Prohibition)))
  (modalAttribute
    (not ?FORMULA) Prohibition))

So the deontic logic already deals with this, and a prohibition on “(not ?F)” sort of implies that it’s wrong to prevent someone from doing ?F. For the case of permission, perhaps we need to specify an additional moral lemma:

(forall (?RULE ?CPROC)
  (=>
    (and
      (realizesFormulaSubclass ?CPROC ?RULE)
      (modalAttribute ?RULE Permission))
    (modalAttribute
      (exists (?PREV)
        (and
          (instance ?PREV AutonomousAgentProcess)
          (prevents ?PREV ?CPROC))) Prohibition)))

For all rules and classes of processes with permission, there’s a prohibition on preventing the instantiation of the process. Maybe we need to include agents because deciding not to do something permitted could be considered as preventing it? Either way, the moral is clear: we might be able to push through a simple one-to-one correspondence between the imperative and value judgment forms of deontological theories. The additional complications arose because of including certain lemmas of standard moral theories “in the definition” by mistake.
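
As an aside, one hedged way to address the ‘include agents’ worry just raised: restrict the preventer to someone other than the agent the permission is for (identified here via capability), so that merely declining to act doesn’t count as prevention.

(forall (?RULE ?CPROC ?AGENT)
  (=>
    (and
      (realizesFormulaSubclass ?CPROC ?RULE)
      (modalAttribute ?RULE Permission)
      (capability ?CPROC agent ?AGENT))
    (modalAttribute
      (exists (?PREV ?OTHER)
        (and
          (instance ?PREV AutonomousAgentProcess)
          (agent ?PREV ?OTHER)
          (not
            (equal ?OTHER ?AGENT))
          (prevents ?PREV ?CPROC))) Prohibition)))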

I think this allows us to make the cleaner claims:

(=>
  (instance ?MT ValueJudgmentTheory)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (forall (?P)
        (=> 
          (and
            (part ?P ?S)
            (instance ?P SimpleValueJudgmentSentence))
          (exists (?ITS)
            (and
              (instance ?ITS SimpleImperativeSentence)
              (equal ?P (GenericImperativeToValueJudgmentSentenceFn ?ITS)))))))))

(=>
  (instance ?MT DeontologicalImperativeTheory)
  (forall (?S)
    (=>
      (element ?S ?MT)
      (forall (?P)
        (=> 
          (and 
            (part ?P ?S)
            (instance ?P SimpleImperativeSentence))
          (exists (?VJS)
            (and
              (instance ?VJS SimpleValueJudgmentSentence)
              (equal ?P (ValueJudgmentToImperativeSentenceFn ?VJS)))))))))

For every value judgment theory, each part that is a simple value judgment sentence is equal to the translation of an imperative sentence. And the same applies to every imperative theory. Claiming that each theory of one variety has an equivalent theory of the other variety seems to require structural recursion over subformulas to properly state, which I don’t feel like specifying in SUMO at the moment (because Formulas aren’t even formally specified).

We also get that these translations are each others’ inverses:

(=> 
  (instance ?S SimpleValueJudgmentSentence)
  (equal ?S 
    (GenericImperativeToValueJudgmentSentenceFn (ValueJudgmentToImperativeSentenceFn ?S))))

(=> 
  (instance ?S SimpleImperativeSentence)
  (equal ?S
    (ValueJudgmentToImperativeSentenceFn (GenericImperativeToValueJudgmentSentenceFn ?S))))

So perhaps it’s finally time to get to virtue ethics and utilitarianism. What sorts of judgments and sentences do they make, and how are these mapped into value judgments?

Virtue Ethics v4.0

So now that I’ve done the next increment for deontology, will Virtue Ethics follow suit easily enough?

We’ll start in the same style as the simplified deontology version by defining a simple virtue ethics sentence and using this to define general virtue ethics sentences and virtue ethics theories.

(documentation VirtueEthicsTheory EnglishLanguage "A set of sentences assigning virtue or vice attributes.")
(subclass VirtueEthicsTheory MoralTheory)

(documentation VirtueEthicsSentence EnglishLanguage "A sentence of a virtue ethics language/theory.")      
(subclass VirtueEthicsSentence MoralSentence)

(<=>
  (instance ?SENTENCE VirtueEthicsSentence)
  (exists (?THEORY)
    (and
      (instance ?THEORY VirtueEthicsTheory)
      (element ?SENTENCE ?THEORY))))

(documentation SimpleVirtueSentence EnglishLanguage "A sentence that describes a virtue/vice attribute assignment to an agent.")
(subclass SimpleVirtueSentence VirtueEthicsSentence)    

(<=>
  (instance ?SENTENCE SimpleVirtueSentence)
  (exists (?AGENT ?VIRTUEATTRIBUTE)
    (and
      (equal ?SENTENCE (attribute ?AGENT ?VIRTUEATTRIBUTE))
      (instance ?AGENT AutonomousAgent)
      (instance ?VIRTUEATTRIBUTE MoralVirtueAttribute))))

(<=>
  (instance ?SENTENCE VirtueEthicsSentence)
  (exists (?SVS)
    (and
      (instance ?SVS SimpleVirtueSentence)
      (part ?SVS ?SENTENCE))))       

(<=>
  (instance ?V VirtueEthicsTheory)
  (forall (?S)
    (=>
      (element ?S ?V)
      (instance ?S VirtueEthicsSentence))))

And similar to deontology, let’s try to formulate the correspondence between virtue ethics as a proposition and virtue ethics as a theory of sentences. For every theory, there is some philosophy containing the propositions of the theory; and for every philosophy, there is some theory that is contained within it. This last bit is a bit trivializing, yet I worry that quantifying over subPropositions doesn’t work because \(P\) will be a subproposition of \(\neg P\). One trick might be to stipulate that a theory can be conjoined (by a series of ands) into a single sentence: then one can say that the virtue ethics philosophy instance is contained in this one sentence 😎. I’m not sure SUMO contains the standard technology to easily go into more detail (in a logical or functional programming fashion). I’m veering a bit far from the common sense reasoning domain . . ..

(=>
  (instance ?MP VirtueEthics)
  (exists (?MT)
    (and
      (instance ?MT VirtueEthicsTheory)
      (forall (?S)
        (=>
          (element ?S ?MT)
          (exists (?PROP)
            (and
              (containsInformation ?S ?PROP)
              (subProposition ?PROP ?MP))))))))

(=>
  (instance ?MP VirtueEthics)
  (exists (?MTS)
    (and
      (containsInformation ?MTS ?MP)
      (instance ?MTS VirtueEthicsSentence))))

(=>
  (instance ?MT VirtueEthicsTheory)
  (exists (?MP)
    (and
      (instance ?MP VirtueEthics)
      (forall (?S)
        (=>
          (element ?S ?MT)
          (exists (?PROP)
            (and
              (containsInformation ?S ?PROP)
              (subProposition ?PROP ?MP))))))))

I wound up deciding to deal with this by defining what it means for a theory to be concatenated into one sentence. I defined SentenceList to simplify type guards and then modeled ListAndFn based on ListSumFn. I figure because Formulas are Sentences, this works out fine. And then, we have some (non-deterministic) function that maps sets to lists of elements.

(documentation SentenceList EnglishLanguage "A list of Sentences.")
(subclass SentenceList List)

(=>
  (and
    (instance ?LIST SentenceList)
    (inList ?SENTENCE ?LIST))
  (instance ?SENTENCE Sentence))

(documentation ListAndFn EnglishLanguage "The and-concatenation of all the sentences in a SentenceList.")
(domain ListAndFn 1 SentenceList)
(instance ListAndFn UnaryFunction)
(range ListAndFn Sentence)

(=>
  (and
    (equal ?S
      (ListAndFn ?L))
    (equal 1
      (ListLengthFn ?L)))
  (equal ?S
    (ListOrderFn ?L 1)))

(=>
  (and
    (equal ?S
      (ListAndFn ?L))
    (greaterThan
      (ListLengthFn ?L) 1))
  (equal ?S
    (and
      (FirstFn ?L)
      (ListAndFn
        (SubListFn 2
          (ListLengthFn ?L) ?L)))))

(documentation SetToListFn EnglishLanguage "A function that converts a set into a list.  The order is unspecified.")
(instance SetToListFn UnaryFunction)
(domain SetToListFn 1 Set)
(range SetToListFn List)

(=>
  (equal
    (SetToListFn ?SET) ?LIST)
  (and
    (equal
      (ListLengthFn ?LIST)
      (CardinalityFn ?SET))
    (forall (?ELEMENT)
      (<=>
        (element ?ELEMENT ?SET)
        (inList ?ELEMENT ?LIST)))))

Now we can more precisely specify the connection between a moral philosophy and a moral theory. For a moral philosophy, there is a corresponding moral theory such that the concatenation of its sentences contains the information of the moral philosophy. Moreover, we could say that each sentence of the theory contains the information of some sub-proposition of the philosophy (which is probably implied anyway).

(=> 
  (instance ?MP VirtueEthics)
  (exists (?MT ?MTS)
    (and
      (instance ?MT VirtueEthicsTheory)
      (containsInformation ?MTS ?MP)
      (instance ?MTS VirtueEthicsSentence)
      (equal ?MTS (ListAndFn (SetToListFn ?MT))))))

(=> 
  (instance ?MP VirtueEthics)
  (exists (?MT ?MTS)
    (and
      (instance ?MT VirtueEthicsTheory)
      (containsInformation ?MTS ?MP)
      (instance ?MTS VirtueEthicsSentence)
      (equal ?MTS (ListAndFn (SetToListFn ?MT)))
      (forall (?S)
        (=> 
          (element ?S ?MT)
          (exists (?P)
            (and
              (containsInformation ?S ?P)
              (subProposition ?P ?MP))))))))

Now things get a bit tricky. What do we do with these? One is the old claim that if an agent is virtuous, then what it does is likely good. This can be seen as a translation from simple virtue sentences to complex value judgment sentences, I think? Let’s try!

I found this direction to be relatively simple and similar to the draft 3 approach. I’ll only show the virtuous <-> morally good side here (as the vicious <-> morally bad side is the same). Recall that the definition says that “an act is morally good if and only if it is what a virtuous person would characteristically do in the circumstances”, so if there’s a statement that someone is virtuous in some manner, then whatever they would characteristically do is good. One idea is to say that «if an agent possesses a virtue, then for all decisions the agent made that are in the scope of the virtue, it’s probably morally good to take such an action.» Of course, in SUMO, this “in the scope of” relation is yet to be defined, so I just use refers, which is underspecific.

(=>
  (and
    (equal
      (SimpleVirtueToValueJudgmentSentenceFn ?SVS) ?VJS)
    (equal ?SVS
      (attribute ?AGENT ?VIRTUEATTRIBUTE))
    (instance ?AGENT AutonomousAgent))
    (=>
      (instance ?VIRTUEATTRIBUTE VirtueAttribute)
      (equal ?VJS
        (forall (?DECIDE ?DECISION)
          (=>
            (and
              (instance ?DECIDE Deciding)
              (agent ?DECIDE ?AGENT)
              (refers ?VIRTUEATTRIBUTE ?DECIDE)
              (result ?DECIDE
                (ClassToSetFn ?DECISION)))
            (modalAttribute
              (modalAttribute
                (exists (?INSTANCE)
                  (and
                    (agent ?INSTANCE ?AGENT)
                    (instance ?INSTANCE ?DECISION))) MorallyGood) Likely))))))

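As a hedged aside on that missing «in the scope of» relation: a minimal sketch of such a predicate could look like the following, where the name virtueScope, its typing, and the linking axiom are my own placeholder assumptions rather than existing SUMO (or draft-four) vocabulary.

;; Hypothetical sketch of the missing «in the scope of» relation, relating a
;; virtue or vice attribute to the classes of behavior it governs.
;; The name virtueScope and its typing are placeholder assumptions.
(documentation virtueScope EnglishLanguage "(virtueScope ?VIRTUE ?PROC) means that the virtue or vice ?VIRTUE governs behaviors of the class ?PROC.")
(instance virtueScope BinaryPredicate)
(domain virtueScope 1 Attribute)
(domainSubclass virtueScope 2 Process)

;; Any virtueScope claim should at least entail the weaker refers used above.
(=>
  (virtueScope ?VIRTUE ?PROC)
  (refers ?VIRTUE (ClassToSetFn ?PROC)))
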
I would like to remove the decision middleman to say that, «if a virtuous person does something relevant to the virtue, then it’s likely morally good to do something of the same class.» A bit better is probably to say that, «if a virtuous person is likely to do something relevant to the virtue, then it’s likely morally good to do something of the same class (“similar”).» Here “is likely to do” is a somewhat crude gloss of “characteristically”.

(=>
  (and
    (equal (SimpleVirtueToValueJudgmentSentenceFn ?SVS) ?VJS)
    (equal ?SVS (attribute ?AGENT ?VIRTUEATTRIBUTE))
    (instance ?AGENT AutonomousAgent))
    (=> 
      (instance ?VIRTUEATTRIBUTE VirtueAttribute) 
      (equal ?VJS 
        (forall (?PROC)
          (=> 
            (and
              (subclass ?PROC AutonomousAgentProcess)
              (refers ?VIRTUEATTRIBUTE (ClassToSetFn ?PROC))
              (modalAttribute
                (exists (?INSTANCE)
                  (and 
                    (agent ?INSTANCE ?AGENT)
                    (instance ?INSTANCE (ClassToSetFn ?PROC)))) Likely))
            (modalAttribute 
              (modalAttribute 
                (exists (?INSTANCE)
                  (instance ?INSTANCE ?PROC)) MorallyGood) Likely))))))

I found it fun to try to implement an over-generalizing associative definition: if a virtuous person plays any role in an event, then it’s probably good for there to be an instance of that event. After all, a virtuous person will avoid morally bad events like the plague 🤣😎.

(=>
  (and
    (equal
      (SimpleVirtueToValueJudgmentSentenceFn ?SVS) ?VJS)
    (equal ?SVS
      (attribute ?AGENT ?VIRTUEATTRIBUTE))
    (instance ?AGENT AutonomousAgent))
    (=>
      (instance ?VIRTUEATTRIBUTE VirtueAttribute)
      (equal ?VJS
        (forall (?PROC ?ROLE ?TIME ?PLACE)
          (=>
            (and
              (subclass ?PROC Process)
              (playsRoleInEventOfType ?AGENT ?ROLE ?PROC ?TIME ?PLACE)
              (refers ?VIRTUEATTRIBUTE (ClassToSetFn ?PROC)))
            (modalAttribute
              (modalAttribute
                (exists (?INSTANCE)
                  (instance ?INSTANCE ?PROC)) MorallyGood) Likely))))))

All of these translate a simple virtue statement into a moral value judgment statement, which involves evaluations of what a virtuous agent does (or is likely to do). Another possibility would be to have a function that maps the simple virtue statement into a multiset of moral judgments for each class of behaviors the virtuous agent is likely to do. Sort of pre-computing the evaluation of the virtuous agent to produce a simple set of value judgments :-D.
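
A minimal, hedged sketch of that idea: the function name SimpleVirtueToValueJudgmentSetFn is my own placeholder, and all the sketch pins down is that the result is a set whose elements are simple value judgment sentences.

;; Hypothetical sketch: map a simple virtue sentence to a set of value judgment
;; sentences (one per class of behaviors the virtuous agent is likely to do).
;; The function name is a placeholder assumption.
(documentation SimpleVirtueToValueJudgmentSetFn EnglishLanguage "A UnaryFunction that maps a simple virtue sentence to a set of simple value judgment sentences.")
(domain SimpleVirtueToValueJudgmentSetFn 1 SimpleVirtueSentence)
(range SimpleVirtueToValueJudgmentSetFn Set)
(instance SimpleVirtueToValueJudgmentSetFn UnaryFunction)

(=>
  (and
    (equal (SimpleVirtueToValueJudgmentSetFn ?SVS) ?SET)
    (element ?VJS ?SET))
  (instance ?VJS SimpleValueJudgmentSentence))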

The other direction, from value judgment sentences to virtue attribution statements, is quite tricky (for me). Which is interesting since I am basically doing exactly what I did before. The challenge is that I’m working with a larger scope of syntactic forms and leaving off as much context as I can. Maybe one solution is to work with partial translations that only have defined values when the appropriate context is present? As in the deontology case, it could be that we wish to create new lemmas to facilitate simple translations, such as the lemma stating: «If some formula representing a process is good, then it’s good for there to be an instance of this process.»

(=>
  (and
    (instance ?SVJ SimpleValueJudgmentSentence)
    (equal ?SVJ (modalAttribute ?SITUATION ?MORALATTRIBUTE))
    (realizesFormulaSubclass ?PROC ?SITUATION))
  (modalAttribute
    (exists (?INSTANCE)
      (instance ?INSTANCE ?PROC)) ?MORALATTRIBUTE))

A more general variant is: «If some formula is good, is realized by a process, and some behavior causes this process, then it’s likely good for there to be an instance of this behavior.»

(=>
  (and 
    (instance ?SVJ SimpleValueJudgmentSentence)
    (equal ?SVJ (modalAttribute ?SITUATION ?MORALATTRIBUTE))
    (realizesFormulaSubclass ?PROC ?SITUATION)
    (causesSubclass ?CAUSE ?PROC)
    (subclass ?CAUSE AutonomousAgentProcess))
  (modalAttribute
    (modalAttribute
      (exists (?INSTANCE)
        (instance ?INSTANCE ?CAUSE)) ?MORALATTRIBUTE) Likely)) 

I suppose these are actually generic lemmas about the Value Judgment Formulations that start to tie the Formula representation back to Behaviors. This helps as I’m not sure we wish to talk about the moral quality of generic situations, let alone formulas. How about, «It is good for one plus one to equal two»? Maybe I wish to assert the following lemma in the moral theories I work with: «For all formulas with moral attributes, there exists a behavior such that either the situation refers to this behavior or the situation refers to a class of processes that is influenced by this behavior.»

(=>
  (and 
    (instance ?SVJ SimpleValueJudgmentSentence)
    (equal ?SVJ (modalAttribute ?SITUATION ?MORALATTRIBUTE))
    (instance ?SITUATION Formula))
  (exists (?PROC)
    (and 
      (subclass ?PROC AutonomousAgentProcess)
      (or
        (refers ?SITUATION (ClassToSetFn ?PROC))
        (exists (?PROC2)
          (and
            (refers ?SITUATION (ClassToSetFn ?PROC2))
            (subclass ?PROC2 Process)
            (influencs ?PROC ?PROC2)))))))

In essence, every moral value judgment has at least one relevant behavior and, tied with the above lemmas, it’s probably good (or bad) to enact this behavior. Maybe then I will wish to define a function or binary relation that allows us to refer to «the behavior relevant to this moral judgment». That’s probably a useful shorthand.
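
As a hedged sketch, such a shorthand could be a binary predicate between a judgment sentence and a behavior class; the name judgmentBehavior and the linking axiom are my own assumptions.

;; Hypothetical sketch of the shorthand: relate a moral value judgment sentence
;; to a class of behaviors relevant to it.  The predicate name is an assumption.
(documentation judgmentBehavior EnglishLanguage "(judgmentBehavior ?SVJ ?PROC) means that the behavior class ?PROC is relevant to the moral value judgment sentence ?SVJ.")
(instance judgmentBehavior BinaryPredicate)
(domain judgmentBehavior 1 SimpleValueJudgmentSentence)
(domainSubclass judgmentBehavior 2 AutonomousAgentProcess)

(=>
  (and
    (instance ?SVJ SimpleValueJudgmentSentence)
    (equal ?SVJ (modalAttribute ?SITUATION ?MORALATTRIBUTE))
    (subclass ?PROC AutonomousAgentProcess)
    (refers ?SITUATION (ClassToSetFn ?PROC)))
  (judgmentBehavior ?SVJ ?PROC))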

On to the tricky part: I am only defining a mapping from what I’ve called Simple Action Value Judgment Sentences to Virtue Ethics Sentences. These are sentences ascribing a moral value judgment to the instantiation of a behavior.

(documentation SimpleActionValueJudgmentSentence EnglishLanguage "A sentence that describes the attribution of a moral value judgment to an action.")      
(subclass SimpleActionValueJudgmentSentence SimpleValueJudgmentSentence)

(<=>
  (instance ?SENTENCE SimpleActionValueJudgmentSentence)
  (exists (?CLASS ?FORMULA ?MORALATTRIBUTE)
    (and 
      (equal ?SENTENCE (modalAttribute ?FORMULA ?MORALATTRIBUTE))
      (equal ?FORMULA 
        (exists (?PROC)
          (instance ?PROC ?CLASS)))
      (subclass ?CLASS AutonomousAgentProcess))))

The first version says that «If it’s good to do an action, then for all agents, if the agent is virtuous with a virtue that refers to this class of actions, then it’s likely that the agent will do such an action». Omitted is a description of «which situations are relevant». Perhaps this isn’t needed because any common sense reasoning agent will have some intuitive models of the relevant situations.

(documentation SimpleActionValueJudgmentToVirtueSentenceFn EnglishLanguage "A UnaryFunction that maps simple action value judgment sentences into simple virtue ethics sentences.")
(domain SimpleActionValueJudgmentToVirtueSentenceFn 1 SimpleActionValueJudgmentSentence)
(range SimpleActionValueJudgmentToVirtueSentenceFn VirtueEthicsSentence)
(instance SimpleActionValueJudgmentToVirtueSentenceFn TotalValuedRelation)
(instance SimpleActionValueJudgmentToVirtueSentenceFn UnaryFunction)

(=>
  (and 
    (equal (SimpleActionValueJudgmentToVirtueSentenceFn ?SAVJ) ?VES)
    (subclass ?CLASS AutonomousAgentProcess)
    (equal ?SAVJ (modalAttribute 
      (exists (?PROC)
        (instance ?PROC ?CLASS)) ?MORALATTRIBUTE)))
  (and
    (=>
      (equal ?MORALATTRIBUTE MorallyGood)
      (equal ?VES
        (forall (?AGENT)
          (=> 
            (and
              (instance ?AGENT VirtuousAgent)
              (exists (?VIRTUE)
                (and
                  (instance ?VIRTUE VirtueAttribute)
                  (attribute ?AGENT ?VIRTUE)
                  (refers ?VIRTUE (ClassToSetFn ?CLASS)))))
            (modalAttribute 
              (exists (?PROC)
                (and
                  (instance ?PROC ?CLASS)
                  (agent ?PROC ?AGENT))) Likely)))))
    (=>
      (equal ?MORALATTRIBUTE MorallyBad)
      (equal ?VES
        (forall (?AGENT)
          (=> 
            (and
              (instance ?AGENT ViciousAgent)
              (exists (?VICE)
                (and 
                  (instance ?VICE ViceAttribute)
                  (attribute ?AGENT ?VICE)
                  (refers ?VICE (ClassToSetFn ?CLASS)))))
            (modalAttribute 
              (exists (?PROC)
                (and
                  (instance ?PROC ?CLASS)
                  (agent ?PROC ?AGENT))) Likely)))))))

The next versions work with generic value judgment formulas again: SUMO has causesProposition and increasesLikelihood, both of which are highly generic relations over Formulas, similar to what I’ve been doing here. So if a formula is good and an agent takes an action that causes the formula’s proposition to hold, then the agent likely possesses all virtues that are relevant to the formula. And the same applies if the formula is bad for vices. This is a bit overly generic yet shows that we can start to say the right sorts of sentences.

(=>
  (and
    (equal
      (SimpleValueJudgmentToVirtueSentenceFn ?SAVJ) ?VES)
    (equal ?SAVJ
      (modalAttribute ?FORMULA ?MORALATTRIBUTE)))
  (equal ?VES
    (forall (?AGENT ?PROC)
      (=>
        (and
          (instance ?AGENT AutonomousAgent)
          (subclass ?PROC AutonomousAgentProcess)
          (exists (?INSTANCE)
            (causesProposition
              (and
                (agent ?INSTANCE ?AGENT)
                (instance ?INSTANCE ?PROC)) ?FORMULA)))
        (forall (?VIRTUE)
          (=>
            (and
              (or
                (and 
                  (equal ?MORALATTRIBUTE MorallyGood)
                  (instance ?VIRTUE VirtueAttribute))
                (and
                  (equal ?MORALATTRIBUTE MorallyBad)
                  (instance ?VIRTUE ViceAttribute)))
              (refers ?VIRTUE ?FORMULA))
            (modalAttribute
              (attribute ?AGENT ?VIRTUE) Likely)))))))

The next version is basically the same, except the agent merely increases the likelihood of the good/bad formula. Also, I tried specifying the good<->virtue and bad<->vice in the antecedent clause. This makes the definition partially valued, as I’m not sure how to match moral neutrality into the virtue ethics language. Is it necessary to provide such a translation? If we were to assign strengths to the likelihood that the agent possesses virtues, I think they’d be stronger in the causes proposition case than in the increases likelihood case.

(=>
  (and
    (equal
      (SimpleValueJudgmentToVirtueSentenceFn ?SAVJ) ?VES)
    (equal ?SAVJ
      (modalAttribute ?FORMULA ?MORALATTRIBUTE))
    (=> 
      (equal ?MORALATTRIBUTE MorallyGood)
      (subclass ?VIRTUETYPE VirtueAttribute))
    (=> 
      (equal ?MORALATTRIBUTE MorallyBad)
      (subclass ?VIRTUETYPE ViceAttribute)))
  (equal ?VES
    (forall (?AGENT ?PROC)
      (=>
        (and
          (instance ?AGENT AutonomousAgent)
          (subclass ?PROC AutonomousAgentProcess)
          (exists (?INSTANCE)
            (increasesLikelihood
              (and
                (agent ?INSTANCE ?AGENT)
                (instance ?INSTANCE ?PROC)) ?FORMULA)))
        (forall (?VIRTUE)
          (=>
            (and
              (instance ?VIRTUE ?VIRTUETYPE)
              (refers ?VIRTUE ?FORMULA))
            (modalAttribute
              (attribute ?AGENT ?VIRTUE) Likely)))))))
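
As a hedged sketch of those «strengths», one could use SUMO's ProbabilityFn with numeric thresholds; the particular numbers 0.9 and 0.6 below are arbitrary illustrative assumptions, as is the lemma-style (rather than translation-function) phrasing.

;; Hedged sketch: causesProposition licenses a stronger conclusion about the
;; agent's virtue than increasesLikelihood.  The thresholds are assumptions.
(=>
  (and
    (instance ?VIRTUE VirtueAttribute)
    (refers ?VIRTUE ?FORMULA)
    (modalAttribute ?FORMULA MorallyGood)
    (causesProposition
      (exists (?INSTANCE)
        (and
          (agent ?INSTANCE ?AGENT)
          (instance ?INSTANCE ?PROC))) ?FORMULA))
  (greaterThanOrEqualTo
    (ProbabilityFn (attribute ?AGENT ?VIRTUE)) 0.9))

(=>
  (and
    (instance ?VIRTUE VirtueAttribute)
    (refers ?VIRTUE ?FORMULA)
    (modalAttribute ?FORMULA MorallyGood)
    (increasesLikelihood
      (exists (?INSTANCE)
        (and
          (agent ?INSTANCE ?AGENT)
          (instance ?INSTANCE ?PROC))) ?FORMULA))
  (greaterThanOrEqualTo
    (ProbabilityFn (attribute ?AGENT ?VIRTUE)) 0.6))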

Utilitarianism v4.0

That’s all for Virtue Ethics for now. Frankly, I’m becoming a bit impatient to move on past this syntactic-upgrade phase. It’s rather messy, so my focus is to show that it can be done before exploring examples and applications and cleaning it up. The AITP presentation is coming at a propitious time for figuring out how to present the high-level overview now.

Utilitarianism begins the same way as Deontology and Virtue Ethics: for each utilitarian philosophy, there exists a utilitarian theory whose concatenation contains its information, and for each utilitarian theory, there exists a utilitarian philosophy whose information is contained in the theory’s concatenation.

(documentation Utilitarianism EnglishLanguage "Utilitarianism is the ethical paradigm that judges the morality of an action 
based on whether it maximizes the good over the bad, which is typically determined via a utility function.")
(subclass Utilitarianism Ethics)

(=>
  (instance ?MP Utilitarianism)
  (exists (?MT ?MTS)
    (and
      (instance ?MT UtilitarianTheory)
      (equal ?MTS (ListAndFn (SetToListFn ?MT)))
      (containsInformation ?MTS ?MP)
      (forall (?S)
        (=>
          (element ?S ?MT)
          (exists (?P)
            (and
              (containsInformation ?S ?P)
              (subProposition ?P ?MP))))))))

(=>
  (instance ?MT UtilitarianTheory)
  (exists (?MP)
    (and 
      (instance ?MP Utilitarianism)
      (containsInformation (ListAndFn (SetToListFn ?MT)) ?MP))))

Especially when doing the Utilitarian translations, I am realizing that how to interpret one paradigm within another is yet another degree of freedom that an ethical community can have. Is there necessarily one correct approach? Perhaps not. Maybe my community sees value in moral value judgments yet doesn’t wish to engage in strict obligations and prohibitions at all? This would render any translation difficult. Meanwhile, other ethical communities might see an obvious connection, as I expressed above. I think in the cosmic scheme of things, it’s important to be aware of where our differences of custom lie and which claims are axiomatic and which claims are proven theorems.

I started with two kinds of utilitarian sentences: utility assignment sentences and utility comparison sentences, which are different beasts. I also changed the utility function from an instance of a unary function to a class of unary functions because one can have many utility functions! And to make it consistent with the other paradigms, it now simply evaluates formulas.

(documentation UtilitarianTheory EnglishLanguage "A set of sentences dealing with the utility of behaviors.")
(subclass UtilitarianTheory MoralTheory)

(documentation UtilitarianSentence EnglishLanguage "A sentence of the variety of a utilitarian theory.")
(subclass UtilitarianSentence MoralSentence)

(documentation UtilityFormulaFn EnglishLanguage "A UnaryFunction that maps Formulas to the net utility of that 
which is described.  Typically, the formula should refer to an action.")
(subclass UtilityFormulaFn TotalValuedRelation)
(subclass UtilityFormulaFn UnaryFunction)

(=>
    (instance ?UF UtilityFormulaFn)
    (and
        (domain ?UF 1 Formula)
        (range ?UF RealNumber)))

(documentation SimpleUtilitarianSentence EnglishLanguage "A sentence that assigns or compares the value of situations described by formulas.")      
(subclass SimpleUtilitarianSentence UtilitarianSentence) 

(documentation UtilityAssignmentSentence EnglishLanguage "A Sentence that assigns a (real) number value 
to a situation described by a formula.")
(subclass UtilityAssignmentSentence SimpleUtilitarianSentence)

(<=>
  (instance ?SENTENCE UtilityAssignmentSentence)
  (exists (?FORMULA ?VALUE ?UF)
    (and 
      (equal ?SENTENCE (equal (AssignmentFn ?UF ?FORMULA) ?VALUE))
      (instance ?UF UtilityFormulaFn)
      (instance ?FORMULA Formula)
      (instance ?VALUE Number))))

(documentation UtilityComparisonSentence EnglishLanguage "A sentence that compares the value of 
two situations described by formulas.")
(subclass UtilityComparisonSentence SimpleUtilitarianSentence)

(<=> 
  (instance ?SENTENCE UtilityComparisonSentence)
  (exists (?FORMULA1 ?FORMULA2 ?COMPARATOR ?UF)
    (and
      (instance ?FORMULA1 Formula)
      (instance ?FORMULA2 Formula)
      (instance ?UF UtilityFormulaFn)
      (or
            (equal ?COMPARATOR greaterThan)
            (equal ?COMPARATOR lessThan)
            (equal ?COMPARATOR greaterThanOrEqualTo)
            (equal ?COMPARATOR lessThanOrEqualTo)
            (equal ?COMPARATOR equal))
      (equal ?SENTENCE (?COMPARATOR (AssignmentFn ?UF ?FORMULA1) (AssignmentFn ?UF ?FORMULA2))))))

(<=>
  (instance ?SENTENCE SimpleUtilitarianSentence)
  (or
    (instance ?SENTENCE UtilityComparisonSentence)
    (instance ?SENTENCE UtilityAssignmentSentence)))

(<=>
  (instance ?SENTENCE UtilitarianSentence)
  (exists (?SUS)
    (and 
      (instance ?SUS SimpleUtilitarianSentence)
      (part ?SUS ?SENTENCE))))

(<=> 
  (instance ?U UtilitarianTheory)
  (forall (?S)
    (=>
      (element ?S ?U)
      (instance ?S UtilitarianSentence))))

Providing an interpretation of utility assignment sentences in the language of value judgments is very easy: if the value is positive, it’s good; if the value is negative, it’s bad; and, if the value is zero, then it’s neutral.

(documentation UtilityAssignmentToValueJudgmentSentence EnglishLanguage "A UnaryFunction that maps utility assignment sentences into simple value judgment sentences.")
(domain UtilityAssignmentToValueJudgmentSentence 1 UtilityAssignmentSentence)
(range UtilityAssignmentToValueJudgmentSentence SimpleValueJudgmentSentence)
(instance UtilityAssignmentToValueJudgmentSentence TotalValuedRelation)
(instance UtilityAssignmentToValueJudgmentSentence UnaryFunction)

(=> 
  (and 
    (equal (UtilityAssignmentToValueJudgmentSentence ?UAS) ?VJS)
    (equal ?UAS (equal (AssignmentFn ?UF ?FORMULA) ?VALUE))
    (instance ?UF UtilityFormulaFn)
    (instance ?FORMULA Formula)
    (instance ?VALUE Number))
  (and
    (=>
      (greaterThan ?VALUE 0)
      (equal ?VJS
        (modalAttribute ?FORMULA MorallyGood)))
    (=>
      (lessThan ?VALUE 0)
      (equal ?VJS
        (modalAttribute ?FORMULA MorallyBad)))
    (=>
      (equal ?VALUE 0)
      (equal ?VJS 
        (modalAttribute ?FORMULA MorallyNeutral)))))

I’d like to note that this is somewhat arbitrary and depends on the utility function and how the ethical community uses the output. Maybe we have a range of utility values that are all basically neutral. It’s pretty much ok to do something that’s a little bit ‘negative’ and doing something mildly ‘positive’ isn’t soooo good. This would actually make the translation work better with the deontic operators: you only have an obligation to do things that are very good, rated above 10 by the utility function. You have permission to engage in mild evils (down to acts of score at least -10).

(=>
  (and
    (equal
      (UtilityAssignmentToValueJudgmentSentence ?UAS) ?VJS)
    (equal ?UAS
      (equal
        (AssignmentFn ?UF ?FORMULA) ?VALUE))
    (instance ?UF UtilityFormulaFn)
    (instance ?FORMULA Formula)
    (instance ?VALUE Number))
  (and
    (=>
      (greaterThanOrEqualTo ?VALUE 10)
      (equal ?VJS
        (modalAttribute ?FORMULA MorallyGood)))
    (=>
      (lessThanOrEqualTo ?VALUE -10)
      (equal ?VJS
        (modalAttribute ?FORMULA MorallyBad)))
    (=>
      (and
        (lessThan ?VALUE 10)
        (greaterThan ?VALUE -10))
      (equal ?VJS
        (modalAttribute ?FORMULA MorallyNeutral)))))

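To make that deontic reading concrete, here is a hedged, lemma-style sketch (rather than yet another translation function), assuming SUMO's Obligation and Permission attributes and keeping the arbitrary threshold of 10.

;; Hedged sketch: tie the arbitrary thresholds to deontic attributes directly.
;; Only formulas whose utility is at least 10 are obligatory; anything rated
;; at least -10 is at least permitted.
(=>
  (and
    (instance ?UF UtilityFormulaFn)
    (instance ?FORMULA Formula)
    (greaterThanOrEqualTo (AssignmentFn ?UF ?FORMULA) 10))
  (modalAttribute ?FORMULA Obligation))

(=>
  (and
    (instance ?UF UtilityFormulaFn)
    (instance ?FORMULA Formula)
    (greaterThanOrEqualTo (AssignmentFn ?UF ?FORMULA) -10))
  (modalAttribute ?FORMULA Permission))
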
While I think there is some choice in the translation, there will clearly be some desired properties that not all translations possess.

Translating value judgments to utility assignments is also pretty trivial: let’s just assign a score of 1 to all morally good formulas, -1 to morally bad ones, and 0 to neutral ones. I’m unsure where to place the instantiation of the utility function, actually. In this example, it’s being assumed, so if we already have it, why do we need to spell it out? If translating a whole value judgment theory at once, we could say that there exists a utility function that is defined by this theory; a sketch of that idea follows the next block.

(documentation ValueJudgmentToUtilityAssignmentSentence EnglishLanguage "A UnaryFunction that maps value judgment sentences to utility assignment sentences.")
(domain ValueJudgmentToUtilityAssignmentSentence 1 SimpleValueJudgmentSentence)
(range ValueJudgmentToUtilityAssignmentSentence UtilityAssignmentSentence)
(instance ValueJudgmentToUtilityAssignmentSentence TotalValuedRelation)
(instance ValueJudgmentToUtilityAssignmentSentence UnaryFunction)

(=> 
  (and 
    (equal (ValueJudgmentToUtilityAssignmentSentence ?VJS) ?UAS)
    (equal ?VJS (modalAttribute ?FORMULA ?MORALATTRIBUTE))
    (instance ?FORMULA Formula)
    (instance ?MORALATTRIBUTE MoralAttribute)
    (instance ?UF UtilityFormulaFn))
  (and
    (=>
      (equal ?MORALATTRIBUTE MorallyGood)
      (equal ?UAS 
        (equal (AssignmentFn ?UF ?FORMULA) 1)))
    (=>
      (equal ?MORALATTRIBUTE MorallyBad)
      (equal ?UAS
        (equal (AssignmentFn ?UF ?FORMULA) -1)))
    (=>
      (equal ?MORALATTRIBUTE MorallyNeutral)
      (equal ?UAS
        (equal (AssignmentFn ?UF ?FORMULA) 0)))))  
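
And a hedged sketch of the theory-level remark: if a whole moral theory consists of simple value judgment sentences, then there exists a single utility function whose scores agree in sign with all of its judgments. MoralTheory, SimpleValueJudgmentSentence, and UtilityFormulaFn are as above; the phrasing itself is my own assumption.

;; Hedged sketch: a single utility function «defined by» a whole theory of
;; simple value judgment sentences, agreeing in sign with its judgments.
(=>
  (and
    (instance ?MT MoralTheory)
    (forall (?S)
      (=>
        (element ?S ?MT)
        (instance ?S SimpleValueJudgmentSentence))))
  (exists (?UF)
    (and
      (instance ?UF UtilityFormulaFn)
      (forall (?FORMULA)
        (and
          (=>
            (element (modalAttribute ?FORMULA MorallyGood) ?MT)
            (greaterThan (AssignmentFn ?UF ?FORMULA) 0))
          (=>
            (element (modalAttribute ?FORMULA MorallyBad) ?MT)
            (lessThan (AssignmentFn ?UF ?FORMULA) 0))
          (=>
            (element (modalAttribute ?FORMULA MorallyNeutral) ?MT)
            (equal (AssignmentFn ?UF ?FORMULA) 0)))))))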

Another option is to say that if a formula is good, then it’s most likely that the formula will receive a positive utility. This is because a class of good actions is usually beneficial, yet in some circumstances particular instances will not be. Another reason could be that it is sometimes morally good to take the “least bad option”, which will nonetheless have a negative utility. So many fun nuances that should be codifiable!

(documentation ValueJudgmentToUtilityAssignmentLikelihoodSentence EnglishLanguage "A UnaryFunction that maps value judgment sentences to utility assignment likelihood sentences.")
(domain ValueJudgmentToUtilityAssignmentLikelihoodSentence 1 SimpleValueJudgmentSentence)
(range ValueJudgmentToUtilityAssignmentLikelihoodSentence UtilitarianSentence)
(instance ValueJudgmentToUtilityAssignmentLikelihoodSentence TotalValuedRelation)
(instance ValueJudgmentToUtilityAssignmentLikelihoodSentence UnaryFunction)

(=> 
  (and 
    (equal (ValueJudgmentToUtilityAssignmentLikelihoodSentence ?VJS) ?UAS)
    (equal ?VJS (modalAttribute ?FORMULA ?MORALATTRIBUTE))
    (instance ?FORMULA Formula)
    (instance ?MORALATTRIBUTE MoralAttribute)
    (instance ?UF UtilityFormulaFn))
  (and
    (=>
      (equal ?MORALATTRIBUTE MorallyGood)
      (equal ?UAS
        (modalAttribute 
          (greaterThan (AssignmentFn ?UF ?FORMULA) 0) Likely)))
    (=>
      (equal ?MORALATTRIBUTE MorallyBad)
      (equal ?UAS
        (modalAttribute 
          (lessThan (AssignmentFn ?UF ?FORMULA) 0) Likely)))
    (=>
      (equal ?MORALATTRIBUTE MorallyNeutral)
      (equal ?UAS
        (modalAttribute 
          (equal (AssignmentFn ?UF ?FORMULA) 0) Likely)))))

Now for the utility comparison sentences, I decided that I should begin to define relevant and situation in SUMO because it’s hard to translate a comparison into a value judgment without a reference to the context. Yet before that, I’ll present a translation of a simple utility comparison sentence into a value judgment sentence that doesn’t need to directly reference the context: if the utility of A is greater than B, then A is more likely to be morally good than B. We could also say that it’s less likely to be bad.

(documentation UtilityComparisonToValueJudgmentSentence EnglishLanguage "A UnaryFunction that maps utility comparison sentences to value judgment sentences.")
(domain UtilityComparisonToValueJudgmentSentence 1 UtilityComparisonSentence)
(range UtilityComparisonToValueJudgmentSentence ValueJudgmentSentence)
(instance UtilityComparisonToValueJudgmentSentence TotalValuedRelation)
(instance UtilityComparisonToValueJudgmentSentence UnaryFunction)

(=> 
  (and 
    (equal (UtilityComparisonToValueJudgmentSentence ?UCS) ?VJS)
    (equal ?UCS (?COMPARATOR (AssignmentFn ?UF ?FORMULA1) (AssignmentFn ?UF ?FORMULA2)))
    (instance ?FORMULA1 Formula)
    (instance ?FORMULA2 Formula)
    (instance ?UF UtilityFormulaFn))
  (equal ?VJS 
        (?COMPARATOR 
          (ProbabilityFn (modalAttribute ?FORMULA1 MorallyGood))
          (ProbabilityFn (modalAttribute ?FORMULA2 MorallyGood)))))
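
The «less likely to be bad» reading would flip the comparison on the MorallyBad probabilities. Here is a hedged sketch, spelled out only for the greaterThan comparator (to avoid inverse-comparator machinery) and meant as an alternative definition rather than a companion clause to the one above.

;; Hedged sketch of the «less likely to be bad» variant, only for greaterThan.
(=>
  (and
    (equal (UtilityComparisonToValueJudgmentSentence ?UCS) ?VJS)
    (equal ?UCS (greaterThan (AssignmentFn ?UF ?FORMULA1) (AssignmentFn ?UF ?FORMULA2)))
    (instance ?FORMULA1 Formula)
    (instance ?FORMULA2 Formula)
    (instance ?UF UtilityFormulaFn))
  (equal ?VJS
    (lessThan
      (ProbabilityFn (modalAttribute ?FORMULA1 MorallyBad))
      (ProbabilityFn (modalAttribute ?FORMULA2 MorallyBad)))))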

It’s clear that the utilitarian framework is very expressive. It can easily express everything that the value judgment framework can, with fine numeric gradations to denote how good or bad something is. From the formal view, there is probably little reason to use the value judgments aside from ease of use in everyday human speech. A counterargument may be that the arbitrariness of precise numeric values can throw people off more than simply using “good” with a few qualifiers for strength (e.g., “very”). Moreover, renormalization of the values may occur more often than changes to the simple goodness/badness of an action (but this should be solved by using a natural threshold, such as \(0\)).

On to relevant: I decided that it can be a BinaryPredicate. I think fully defining relevance may be hard (and there’s probably some prior literature that I’m unaware of), so I’ll try to be “necessary and not necessarily sufficient”, possibly being too strong. To start, if an object plays some role in a process, then it’s relevant to the process. For example, subjects and objects are generally both relevant, as are destinations. Maybe if a process is located somewhere (even partly), then the location is relevant to the process.

(documentation relevant EnglishLanguage "The predicate relevant attempts to ontologically represent the notion of 
an entity ?E1 being relevant to ?E2: (relevant ?E1 ?E2). Relevant: having a bearing on or connection with the 
subject at issue; 'the scientist corresponds with colleagues in order to learn about matters relevant to her 
own research'.")

(instance relevant BinaryPredicate)  
(domain relevant 1 Entity)
(domain relevant 2 Entity)

(=> 
  (and
    (instance ?E1 Object)
    (instance ?E2 Process)
    (exists (?ROLE)
      (playsRoleInEvent ?E1 ?ROLE ?E2)))
  (relevant ?E1 ?E2))

(=> 
  (and 
    (instance ?E1 Physical)
    (instance ?E2 Process)
    (partlyLocated ?E2 ?E1))
  (relevant ?E1 ?E2))

I haven’t actually used relevant at all yet 😅.

I decided to go with the notion of a spatiotemporal situation, which is perhaps synonymous with a wrapper for When and Where. I debated whether a Situation should be Physical or Abstract, and decided on physical because then I can apply WhereFn to situations, which seems to make things easier. There is the notion of the relevant context: part of my current situation is that I submitted my Ph.D. dissertation and am waiting for it to be reviewed. I have a two-year residence permit in the Czech Republic. This is perhaps the abstract situation and where relevance would come in.

(documentation Situation EnglishLanguage "A spatiotemporal situation in which something occurs or exists.")
(subclass Situation Physical)

(=>
  (instance ?S Situation)
  (exists (?T)
    (equal ?T (WhenFn ?S))))
    
(=> 
  (instance ?S Situation)
  (exists (?L)
    (equal ?L (WhereFn ?S (BeginFn (WhenFn ?S))))))

(=>
  (and
    (instance ?S Situation)
    (equal ?T
      (WhenFn ?S)))
  (equal
    (WhereFn ?S
      (BeginFn ?T))
    (WhereFn ?S
      (EndFn ?T))))

For every situation, there is a time and a place of the situation.  For convenience, I said that the location at the beginning and end of the situation’s time interval is the same.  This is probably technically too strong (as is using WhereFn here at all, since WhereFn technically refers to the precise region of spacetime in which an object is located: the precise region of space a pot of tea occupies, not that the pot is on the table).

Next, I define the parallel to capabilityAtLocation and capabilityDuring: capableInSituation.  The idea is to easily encapsulate what an agent can do in a particular situation to facilitate comparisons.  I do this by saying that something is capable in a situation if it is both capable at the time of the situation and capable at the place of the situation.

(documentation capableInSituation EnglishLanguage "(capableInSituation ?TYPE ?ROLE ?OBJECT ?SITUATION) 
means that ?OBJECT has the ability to play the CaseRole ?ROLE in Processes of ?TYPE in ?SITUATION.")
(domainSubclass capableInSituation 1 Process)
(domain capableInSituation 2 CaseRole)
(domain capableInSituation 3 Object)
(domain capableInSituation 4 Situation)
(instance capableInSituation QuaternaryPredicate)

(<=>
  (and
    (capableInSituation ?TYPE ?ROLE ?OBJECT ?SITUATION)
    (equal ?TIME
      (WhenFn ?SITUATION))
    (equal ?LOCATION
      (WhereFn ?SITUATION
        (BeginFn ?TIME))))
  (and
    (capabilityDuring ?TYPE ?ROLE ?OBJECT ?TIME)
    (capabilityAtLocation ?TYPE ?ROLE ?OBJECT ?LOCATION)))

Next, I need a SituationFn that maps physical entities into their situation(s).  Somewhat trivially, I’ll say that the situation of an entity physically and temporally contains the entity.  Thus each entity is a candidate situation for itself 🤓.  Maybe we can say that everything near an object is also in its situation — and I’m dropping transitivity so that the radius doesn’t encompass the universe as it would in Indra’s Net.  If a process has an agent, then everything near the agent is in the situation.  Maybe we could say something like, «the situation of everything relevant to a physical entity is part of its situation», again dropping transitivity.  I might wish to formalize these as GeneralizedSituationProperties and then to say that the SituationFn is the minimal spacetime region that satisfies these properties.

(documentation SituationFn EnglishLanguage "Maps a Physical Entity to its Situation.  May be non-deterministic 
because situations have one enduring region whereas a process could cover multiple regions.")
(domain SituationFn 1 Physical)
(range SituationFn Situation)
(instance SituationFn UnaryFunction)
(instance SituationFn TotalValuedRelation)
(relatedInternalConcept SituationFn WhereFn)
(relatedInternalConcept SituationFn WhenFn)

(=>
  (equal ?SITUATION (SituationFn ?PHYSICAL))
  (exists (?TS ?TP ?LS ?LP)
    (and
      (equal ?TS (WhenFn ?SITUATION))
      (equal ?TP (WhenFn ?PHYSICAL))
      (temporalPart ?TP ?TS)
      (equal ?LS (WhereFn ?SITUATION ?TS))
      (equal ?LP (WhereFn ?PHYSICAL ?TP))
      (part ?LP ?LS))))

(=>
  (and
    (equal ?SITUATION (SituationFn ?PHYSICAL))
    (instance ?PHYSICAL Object))
  (forall (?NEAR)
    (=>
      (orientation ?NEAR ?PHYSICAL Near)
      (exists (?TS ?TR ?LS ?LR)
        (and
          (equal ?TS (WhenFn ?SITUATION))
          (equal ?TR (WhenFn ?NEAR))
          (temporalPart ?TR ?TS)
          (equal ?LS (WhereFn ?SITUATION ?TS))
          (equal ?LR (WhereFn ?NEAR ?TR))
          (part ?LR ?LS))))))   

(=>
  (and
    (equal ?SITUATION (SituationFn ?PHYSICAL))
    (instance ?PHYSICAL Process)
    (agent ?PHYSICAL ?AGENT))
  (forall (?NEAR)
    (=>
      (orientation ?NEAR ?AGENT Near)
      (exists (?TS ?TR ?LS ?LR)
        (and
          (equal ?TS (WhenFn ?SITUATION))
          (equal ?TR (WhenFn ?NEAR))
          (temporalPart ?TR ?TS)
          (equal ?LS (WhereFn ?SITUATION ?TS))
          (equal ?LR (WhereFn ?NEAR ?TR))
          (part ?LR ?LS))))))

(=>
  (equal ?SITUATION (SituationFn ?PHYSICAL))
  (forall (?REL)
    (=>
      (relevant ?REL ?PHYSICAL)     
      (part (SituationFn ?REL) ?SITUATION))))

Next, as I’m working with Formulas, I defined the SituationFormulaFn that maps a formula to the situation of whatever it describes.  Maybe it should be a partially valued function, as some formulas just don’t describe things where spacetime situations make sense (except even stuff like “\(1+1=2\)” did actually begin at some point in our history 😉).  Because all physical entities are either objects or processes, I can cover the bases by saying that if a process realizes the formula or an object conforms to the formula, then the situation of the formula is that of the process or object.  And more generally, if some physical entity represents a formula, then their situations are the same.  (This is possibly technically too strong 🤷‍♂️.)

(documentation SituationFormulaFn EnglishLanguage "Maps a Formula to the Situation it describes or the Situation of what it describes.")  
(domain SituationFormulaFn 1 Formula)
(range SituationFormulaFn Situation)
(instance SituationFormulaFn UnaryFunction)
(instance SituationFormulaFn TotalValuedRelation)
(relatedInternalConcept SituationFormulaFn SituationFn)

(=>
  (and
    (equal ?SITUATION (SituationFormulaFn ?FORMULA))
    (realizesFormula ?PROCESS ?FORMULA))
  (equal ?SITUATION (SituationFn ?PROCESS)))

(=>
  (and
    (equal ?SITUATION (SituationFormulaFn ?FORMULA))
    (conformsFormula ?OBJECT ?FORMULA))
  (equal ?SITUATION (SituationFn ?OBJECT)))

(=>
  (and
    (equal ?SITUATION (SituationFormulaFn ?FORMULA))
    (represents ?PHYSICAL ?FORMULA)
    (instance ?PHYSICAL Physical))
  (equal ?SITUATION (SituationFn ?PHYSICAL)))

(=> 
  (and 
    (equal ?SITUATION (SituationFn ?PHYSICAL))
    (represents ?PHYSICAL ?FORMULA)
    (instance ?PHYSICAL Physical)
    (instance ?FORMULA Formula))
  (equal ?SITUATION (SituationFormulaFn ?FORMULA)))

It’s fun how I’m always learning better ways to logically frame things on this journey. I suppose it’d be cool to have some wise mentor to offer pointers here, too. For all formulas and processes, if the process realizes the formula, then their situations are the same 😉.

(forall (?PROCESS ?FORMULA)
  (=>
    (realizesFormula ?PROCESS ?FORMULA)
    (equal (SituationFn ?PROCESS) (SituationFormulaFn ?FORMULA))))

Now I can return to the mapping of value judgment sentences into utility comparison sentences.  One idea is that «If some formula is attributed as good with a given utility function and a situation, then it’s likely that the utility of this formula is greater than or equal to the utility of every formula realizing a process that is possible in the situation.»  Neutrality often seems confusing to translate.  Here I just say that it means the utility is likely 0, not using comparisons.  This encapsulates how an action being good doesn’t necessarily mean that it will have positive utility: it may simply be the least shitty thing one can do.

(documentation ValueJudgmentToUtilityComparisonSentence EnglishLanguage "A UnaryFunction that maps value judgment sentences to utility comparison sentences.")
(domain ValueJudgmentToUtilityComparisonSentence 1 SimpleValueJudgmentSentence)
(range ValueJudgmentToUtilityComparisonSentence UtilitarianSentence)
(instance ValueJudgmentToUtilityComparisonSentence TotalValuedRelation)
(instance ValueJudgmentToUtilityComparisonSentence UnaryFunction)

(=> 
  (and 
    (equal (ValueJudgmentToUtilityComparisonSentence ?VJS) ?UCS)
    (equal ?VJS (modalAttribute ?FORMULA ?MORALATTRIBUTE))
    (instance ?FORMULA Formula)
    (instance ?MORALATTRIBUTE MoralAttribute)
    (instance ?UF UtilityFormulaFn)
    (equal ?SITUATION (SituationFormulaFn ?FORMULA)))
  (and
    (=>
      (equal ?MORALATTRIBUTE MorallyGood)
      (equal ?UCS
        (modalAttribute
          (forall (?F)
            (=> 
              (exists (?AGENT ?CP)
                (and
                  (capableInSituation ?CP agent ?AGENT ?SITUATION)
                  (realizesFormulaSubclass ?CP ?F)))
              (greaterThanOrEqualTo (AssignmentFn ?UF ?FORMULA) (AssignmentFn ?UF ?F)))) Likely)))
    (=>
      (equal ?MORALATTRIBUTE MorallyBad)
      (equal ?UCS
        (modalAttribute 
          (exists (?F ?AGENT ?CP)
            (and 
              (capableInSituation ?CP agent ?AGENT ?SITUATION)
              (realizesFormulaSubclass ?CP ?F)
              (lessThan (AssignmentFn ?UF ?FORMULA) (AssignmentFn ?UF ?F)))) Likely)))
    (=>
      (equal ?MORALATTRIBUTE MorallyNeutral)
      (equal ?UCS
        (modalAttribute 
          (equal (AssignmentFn ?UF ?FORMULA) 0) Likely)))))

Next up, I decided to include the other direction of this definition (as is done in draft 3): «If the utility of a formula is greater than or equal to that of all other possible actions in a situation, then it is morally good.»  Moreover, I define the partially valued function UtilityToValueJudgmentSentence such that it uses specific functions for utility assignment and comparison sentences in addition to translating this specific form of sentence, without providing rules for generic utility sentences.  I think there may be a natural extension of translations for the base cases into translations for all utility sentences, but it seems annoying to express in SUMO without first formalizing the SUO-KIF grammar.

(documentation UtilityToValueJudgmentSentence EnglishLanguage "A UnaryFunction that maps some utility sentences to value judgment sentences.")
(domain UtilityToValueJudgmentSentence 1 UtilitarianSentence)
(range UtilityToValueJudgmentSentence ValueJudgmentSentence)
(instance UtilityToValueJudgmentSentence PartialValuedRelation)
(instance UtilityToValueJudgmentSentence UnaryFunction)

(=> 
  (instance ?UCS UtilityComparisonSentence)
  (equal (UtilityToValueJudgmentSentence ?UCS) (UtilityComparisonToValueJudgmentSentence ?UCS)))

(=>
  (instance ?UAS UtilityAssignmentSentence)
  (equal (UtilityToValueJudgmentSentence ?UAS) (UtilityAssignmentToValueJudgmentSentence ?UAS)))

(=> 
  (and 
    (equal (UtilityToValueJudgmentSentence ?US) ?VJS)
    (instance ?UF UtilityFormulaFn)
    (equal ?US 
      (and 
        (equal ?SITUATION (SituationFormulaFn ?FORMULA))
        (forall (?F)
          (=> 
            (exists (?AGENT ?CP)
              (and
                (capableInSituation ?CP agent ?AGENT ?SITUATION)
                (realizesFormulaSubclass ?CP ?F)))
            (greaterThanOrEqualTo (AssignmentFn ?UF ?FORMULA) (AssignmentFn ?UF ?F)))))))
  (equal ?VJS (modalAttribute ?FORMULA MorallyGood)))

(=> 
  (and 
    (equal (UtilityToValueJudgmentSentence ?US) ?VJS)
    (instance ?UF UtilityFormulaFn)
    (equal ?US 
      (and 
        (equal ?SITUATION (SituationFormulaFn ?FORMULA))
        (exists (?F ?AGENT ?CP)
          (and
            (capableInSituation ?CP agent ?AGENT ?SITUATION)
            (realizesFormulaSubclass ?CP ?F)
            (lessThan (AssignmentFn ?UF ?FORMULA) (AssignmentFn ?UF ?F)))))))
  (equal ?VJS (modalAttribute ?FORMULA MorallyBad)))

I decided that Hedonistic and Consequentialist varieties of utilitarianism should be defined in terms of constraints on the type of utility function used.  I find it challenging to figure out how to stipulate that the utility function measures pain and pleasure.  Should I reference the relations used for measurements of physical quantities?  And then how to specify that the output of a function only depends on physical consequences of the argument of the function?  If we’re actually implementing the function, then perhaps I could impose restrictions on the building blocks . . ..

(documentation HedonisticUtilitarianism EnglishLanguage "Hedonistic Utilitarianism is a form of utilitarianism 
that focuses on maximizing pleasure and minimizing pain in evaluating the moral value of an action.")
(subclass HedonisticUtilitarianism Utilitarianism)

(documentation HedonisticUtilitarianTheory EnglishLanguage "A set of hedonistic utilitarian sentences.")
(subclass HedonisticUtilitarianTheory UtilitarianTheory)

(=>
  (instance ?MP HedonisticUtilitarianism)
  (exists (?MT ?MTS)
    (and
      (instance ?MT HedonisticUtilitarianTheory)
      (equal ?MTS (ListAndFn (SetToListFn ?MT)))
      (containsInformation ?MTS ?MP))))
      
(=>
  (instance ?MT HedonisticUtilitarianTheory)
  (exists (?MP)
    (and 
      (instance ?MP HedonisticUtilitarianism)
      (containsInformation (ListAndFn (SetToListFn ?MT)) ?MP))))

(documentation Consequentialism EnglishLanguage "Consequentialism is a moral theory that holds 
that 'whether an act is morally right depends only on consequences (as opposed to the circumstances 
or the intrinsic nature of the act or anything that happens before the act)' (Stanford Encyclopedia of Philosophy).")
(subclass Consequentialism Utilitarianism)    

(documentation ConsequentialistTheory EnglishLanguage "A set of consequentialist sentences.")
(subclass ConsequentialistTheory UtilitarianTheory)

(=>
  (instance ?MP Consequentialism)
  (exists (?MT ?MTS)
    (and
      (instance ?MT ConsequentialistTheory)
      (equal ?MTS (ListAndFn (SetToListFn ?MT)))
      (containsInformation ?MTS ?MP))))
      
(=>
  (instance ?MT ConsequentialistTheory)
  (exists (?MP)
    (and 
      (instance ?MP Consequentialism)
      (containsInformation (ListAndFn (SetToListFn ?MT)) ?MP))))

Actually, I should just define the pairing of an ethical philosophy and its corresponding sentential theory. This way, the connection for each new variety of theory becomes trivially easy.

(documentation theoryPhilosophyPair EnglishLanguage "This predicate denotes that a (moral) philosophy as 
a proposition and a (moral) theory of sentences are paired in the natural manner.")
(domainSubclass theoryPhilosophyPair 1 Ethics)
(domainSubclass theoryPhilosophyPair 2 MoralTheory)
(relatedInternalConcept theoryPhilosophyPair abstractCounterpart)

(<=>
  (theoryPhilosophyPair ?MP ?MT)
  (and
    (forall (?IMP)
      (=>
        (instance ?IMP ?MP)
        (exists (?IMT)
          (and
            (instance ?IMT ?MT)
            (containsInformation (ListAndFn (SetToListFn ?IMT)) ?IMP)))))
    (forall (?IMT)
      (=>
        (instance ?IMT ?MT)
        (exists (?IMP)
          (and
            (instance ?IMP ?MP)
            (containsInformation (ListAndFn (SetToListFn ?IMT)) ?IMP)))))))
            
(theoryPhilosophyPair Consequentialism ConsequentialistTheory)
(theoryPhilosophyPair HedonisticUtilitarianism HedonisticUtilitarianTheory)
(theoryPhilosophyPair Utilitarianism UtilitarianTheory)
(theoryPhilosophyPair Deontology DeontologicalTheory)
(theoryPhilosophyPair VirtueEthics VirtueEthicsTheory)

Anyway, for hedonistic utility functions, I decided that if some formula is assigned a positive value, then it’s possible for there to exist an agent who is caused pleasure by the formula.  In the other direction, if there exists an agent who is pleased by the formula, then it’s possible for there to exist a hedonistic utility function such that the formula is assigned a positive value.  I think this definitely needs some work, and this is probably the theoretically right direction to explore.

(documentation HedonisticUtilityFormulaFn EnglishLanguage "A UnaryFunction that maps Formulas to the net utility 
of that which is described where utility measures the pain and pleasure exhibited in the situation.")
(subclass HedonisticUtilityFormulaFn UtilityFormulaFn)

(=>
  (and
    (instance ?UF HedonisticUtilityFormulaFn)
    (greaterThan (AssignmentFn ?UF ?FORMULA) 0))
  (modalAttribute 
    (exists (?AGENT)
      (causesProposition ?FORMULA (attribute ?AGENT Pleasure))) Possibility))

(=> 
  (exists (?AGENT)
      (causesProposition ?FORMULA (attribute ?AGENT Pleasure)))
  (modalAttribute 
    (exists (?UF)
      (and
        (instance ?UF HedonisticUtilityFormulaFn)
        (greaterThan (AssignmentFn ?UF ?FORMULA) 0))) Possibility))

(=>
  (and
    (instance ?UF HedonisticUtilityFormulaFn)
    (lessThan (AssignmentFn ?UF ?FORMULA) 0))
  (modalAttribute 
    (exists (?AGENT)
      (causesProposition ?FORMULA (attribute ?AGENT Pain))) Possibility))

(=> 
  (exists (?AGENT)
      (causesProposition ?FORMULA (attribute ?AGENT Pain)))
  (modalAttribute 
    (exists (?UF)
      (and
        (instance ?UF HedonisticUtilityFormulaFn)
        (lessThan (AssignmentFn ?UF ?FORMULA) 0))) Possibility))

And a consequentialist utility function is so far only defined in the case where the formula is realized by a class of behaviors.  Anything that influences the output of the utility function is a possible outcome of an instance of this behavior.  This is the gist of it, and defining precisely what this means is, well, a bitch.  But we’re doing a commonsense knowledge ontology 😉.

(documentation ConsequentialistUtilityFormulaFn EnglishLanguage "A UnaryFunction that maps Formulas to the net utility 
of that which is described where the utility measurement only depends on the consequences of an action.")
(subclass ConsequentialistUtilityFormulaFn UtilityFormulaFn)

(=>
  (and
    (instance ?UF ConsequentialistUtilityFormulaFn)
    (realizesFormulaSubclass ?CPROC ?FORMULA)
    (subclass ?CPROC AutonomousAgentProcess))
  (forall (?X)
    (=> 
      (influences ?X (AssignmentFn ?UF ?FORMULA))
      (and 
        (instance ?X Outcome)
        (modalAttribute
          (exists (?IPROC)
            (and 
              (instance ?IPROC ?CPROC)
              (result ?IPROC ?X))) Possibility)))))

And then, we specify that a hedonistic utilitarian theory is one where for each sentence part involving a utility function, it is a hedonistic utility function.  The same for a consequentialist utilitarian theory.

(=> 
  (instance ?HUT HedonisticUtilitarianTheory)
  (forall (?S)
    (=> 
      (element ?S ?HUT)
      (forall (?P ?UF ?FORMULA)
        (=> 
          (and 
            (part ?P ?S)
            (equal ?P (AssignmentFn ?UF ?FORMULA))
            (instance ?FORMULA Formula)
            (instance ?UF UtilityFormulaFn))
          (instance ?UF HedonisticUtilityFormulaFn))))))

(=> 
  (instance ?CUT ConsequentialistTheory)
  (forall (?S)
    (=> 
      (element ?S ?CUT)
      (forall (?P ?UF ?FORMULA)
        (=> 
          (and 
            (part ?P ?S)
            (equal ?P (AssignmentFn ?UF ?FORMULA))
            (instance ?FORMULA Formula)
            (instance ?UF UtilityFormulaFn))
          (instance ?UF ConsequentialistUtilityFormulaFn))))))

Concluding Remarks

And that’s it for the super rough draft four, where I made things highly syntactic.  I think the language is getting quite close to asymptoting now and can be cleaned up.  One change I wish to make is to use the top-level paradigm classes as placeholders: so I will not say much for a generic utilitarian theory.  I will define a subclass of that, “generic utilitarian theory” or something, and then everything I say will be about this.  I don’t wish to overly constrain the scope of utilitarian theories and to make it annoying for someone to define a different variety of them.

Draft four involved a lot of changes to the formalism, largely increasing the correctness and modularity. The trick to handle classes via a set-theoretic representation is pretty nifty (and primarily works around the quirks of SUMO). The upgrade of consequentialism v3.0 to work over classes of behavior ironically became outdated later in this post as I simplified it to simply saying that something influences the output of a utility function instead of saying that the judgment is the conclusion of an argument (which was obviously a meaningful hack).

The main change is the introduction of syntactic formalisms to characterize the different paradigms. In draft three, the moral judgment was mixed up with the statement types of each paradigm. Because I’m working in SUMO and SUO-KIF, the distinctions between the paradigms will be highly syntactic anyway. In formally defining theories, we can now modularly map moral judgments over the theories in a cleaner fashion. This switch led to a lot of confusion as to precisely how to do things: once I have a theory schema, then I wish to define notions for every possible sentence of this schema. Whereas in draft three, I’d work with very particular contexts and types of sentences. It has been confusing for the whole project how to refer to instances or classes of behaviors, and this culminated in just referring to formulas. Moreover, there are so many ways that one may wish to refer to a situated behavior that one is judging! I like the generality in theory, even as I may wish to define specialized, simple subclasses of them. Going forward, I will probably work with partial translations that only cover specific types of sentences in a given theory, which includes specifying the way that the formulas relate to behaviors.

Another interesting turn of events was to separate out the Imperative and Value Judgment forms of “deontological” theories. Technically, one could argue that anything defined within SUMO is “rule-based” and thus “deontological”. The whole approach of judging actions to be good, bad, or neutral is itself a sort of ethical paradigm. From the utilitarian point of view, there’s no need to label any specific action in this way: there can be one core obligation to maximize utility. Moreover, judging character traits to be good or bad leads to virtue ethics rather than deontology. There is a very simple correspondence between imperative and value judgment sentences, so people are justified in conflating the two. Moreover, there is a strong tendency to associate moral value judgments as being at the core of ethics itself (and utility functions are simply measurements of how good or how bad). It is the language in which the other paradigms tend to be interpreted.

Yet when defining structural differences in the moral theories, it becomes clear that they can each stand on their own ground. An ethical community doesn’t need to interpret its preferred paradigm and theory into the others to make sense of it and use it. We can operate on the ground of shared virtues and vices without emphasis on how to translate this into an interpretation of actions as good or bad. Or we can judge actions as good or bad, preferring the good, without thinking of duties to always do good. Or we can operate on obligations and permissions without worrying about how bad it is to break some prohibition. Or we can simply hill-climb our utility functions together without wondering which character traits bias one toward higher utility lifestyles. Or we can drop all notions of moral theory and pragmatically negotiate communal arrangements — Heil Nihilism!

It’s almost a bit insulting to straitjacket one paradigm into another. Different communities might prefer some cross-paradigm interpretations over others. There are benefits to ethical cross-fertilization: utility measurements allow one to quantify how good or bad an act is, which helps in conflict resolution; when all the options are bad with negative utility and the applicable rules conflict, it may help to focus on what autonomous agents with virtuous qualities would do. And different interpretations may have subtly different use cases.

And, of course, I wish to show that each paradigm is sufficiently powerful to simulate the other paradigms. The syntactic shift in draft four gets me much closer to this demonstration.

What’s missing are examples of how (fragments of) actual ethical theories fit into the ontology as well as how to hook theories up to practical situations. This post and .kif file wound up being so long and messy that these examples will be moved to draft 5, or maybe 4.5. I will use the progress report at AITP as inspiration for more compactly packaging this draft and figuring out which few fun examples to explore.