# Utilitarianism

Utilitarianism is the ethical paradigm that judges the morality of an action based on whether it maximizes the good over the bad, which is typically determined via a utility function.

> [Ethics is] the rules and precepts for human conduct, by the observance of which [a happy existence] might be, to the greatest extent possible, secured.
>
> John Stuart Mill, *Utilitarianism*

## Discussion

The progenitors of classic utilitarianism, Mill and Bentham, sought to free moral discourse, and its role in political policy, from strict adherence to tradition, which the paradigms of *deontology* and *virtue ethics* seemed to promote. They wished to provide scientific grounds for determining which courses of action and policies would do the most moral good. This shifts the focus from determining what a good person is, or what our duties are, to determining what is good in general terms. The *utility function* represents the beneficial quality of a situation or object.

The debate is then what the utility function should actually measure and how to aggregate personal and societal utility functions. For example, are ‘higher’ or ‘lower’ pleasures more significant for a happy existence? Does utility fundamentally boil down to *pleasures and pains*, as proponents of *hedonistic utilitarianism* hold? As an optimization problem, does minimizing pain strictly take precedence over maximizing pleasure, as *negative utilitarianism* holds? *Preference utilitarianism* holds that preference fulfillment is to be maximized rather than happiness itself. A practical challenge is how to *compare* utility functions across individuals and how to handle *contradictory* ones. So far, each choice seems to have its own pros and cons.

Utilitarian philosophies are usually consequentialist: *consequentialism* is the claim that the *moral rightness* of an act depends only on its consequences. This can appear to be an explicit rejection of any virtue- or duty-based grounding of moral good. Consequentialism thus limits the scope of possible utility functions.

A definitional difficulty is whether to consider the paradigm of utilitarianism as being defined by a central *teleological goal* of “the greatest happiness for all” or to consider “utility function-based optimization” as the defining feature of the paradigm. Given the aforementioned debates on what “happiness” means, I believe the more general, paradigmatic approach is to focus on utility function-based optimization, which could be called *optimizationism*^{1}David Weinberger presents an argument that we should prioritize optimized ethical performance over being able to explain why our choices are good in the case of applied AI systems..

In SUMO, I define a generic class of utility functions that map *formulas* into *real numbers*. I define two kinds of *utilitarian theories*: those that *assign* utility to formulas and those that *compare* the utility of two formulas. For a finite preference ordering, one can represent the preferences via a utility function if the ordering is *complete* and *transitive*^{2}See Wikipedia’s Utility entry. In the case of infinite options, the preference ordering must also be *continuous*.. Some (trivial) mappings between *utilitarian sentences* and *value judgment sentences* are presented. *Hedonic* and *Consequentialist* versions of utilitarianism are defined by additional *meaning postulates* about the utility functions involved.
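The representability claim for finite preference orderings can be sketched in Python (a minimal illustration with hypothetical option names, not part of the SUMO formalization):

```python
# A minimal sketch, not SUMO: a finite, complete, transitive preference
# ordering can be represented by a utility function that maps each
# option to its rank.

def utility_from_ranking(tiers):
    """tiers: options listed from least to most preferred, with
    indifference classes grouped as inner lists."""
    table = {}
    for rank, tier in enumerate(tiers):
        for option in tier:
            table[option] = float(rank)
    return lambda option: table[option]

# Hypothetical options: "idle" < "drive" < "walk".
u = utility_from_ranking([["idle"], ["drive"], ["walk"]])
assert u("walk") > u("drive") > u("idle")
```

Completeness and transitivity are what make the rank assignment well-defined; with infinitely many options, continuity would additionally be required.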

## SUMO

```
(documentation Utilitarianism EnglishLanguage "Utilitarianism is the ethical paradigm that judges the morality of an action
based on whether it maximizes the good over the bad, which is typically determined via a utility function.")
(subclass Utilitarianism Ethics)
(documentation UtilitarianTheory EnglishLanguage "A set of sentences dealing with the utility of behaviors.")
(subclass UtilitarianTheory MoralTheory)
(theoryPhilosophyPair Utilitarianism UtilitarianTheory)
(documentation UtilitarianSentence EnglishLanguage "A sentence of the variety of a utilitarian theory.")
(subclass UtilitarianSentence MoralSentence)
(<=>
  (instance ?U UtilitarianTheory)
  (forall (?S)
    (=>
      (element ?S ?U)
      (instance ?S UtilitarianSentence))))
```

*Utilitarianism* is defined as a proposition, a *subclass of Ethics*. *Utilitarian theory* is defined as a set of *utilitarian sentences*.

```
(documentation SimpleUtilitarianSentence EnglishLanguage "A sentence that assigns or compares the value of situations described by formulas.")
(subclass SimpleUtilitarianSentence UtilitarianSentence)
(documentation UtilityAssignmentSentence EnglishLanguage "A Sentence that assigns a (real) number value to a situation described by a formula.")
(subclass UtilityAssignmentSentence SimpleUtilitarianSentence)
(documentation UtilityComparisonSentence EnglishLanguage "A sentence that compares the value of two situations described by formulas.")
(subclass UtilityComparisonSentence SimpleUtilitarianSentence)
(<=>
  (instance ?SENTENCE UtilitarianSentence)
  (exists (?SUS)
    (and
      (instance ?SUS SimpleUtilitarianSentence)
      (part ?SUS ?SENTENCE))))
(<=>
  (instance ?SENTENCE SimpleUtilitarianSentence)
  (or
    (instance ?SENTENCE UtilityComparisonSentence)
    (instance ?SENTENCE UtilityAssignmentSentence)))
```

*Utilitarian sentences* are those that contain a *simple utilitarian sentence* as a *part*. Every simple utilitarian sentence is either a *utility comparison sentence* or a *utility assignment sentence*^{3}Maybe they should also be ‘simple’!.
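As a sketch of this taxonomy (in Python, with illustrative names and string formulas rather than SUMO's terms), the two kinds of simple sentences can be modeled as plain data types:

```python
# Illustrative sketch, not SUMO: the two kinds of simple utilitarian
# sentences as data types.
from dataclasses import dataclass

@dataclass
class UtilityAssignment:      # "utility(formula) = value"
    formula: str
    value: float

@dataclass
class UtilityComparison:      # "utility(f1) <comparator> utility(f2)"
    formula1: str
    comparator: str           # ">", "<", ">=", "<=", or "="
    formula2: str

def is_simple_utilitarian(sentence):
    return isinstance(sentence, (UtilityAssignment, UtilityComparison))

s = UtilityAssignment("(instance ?D Donating)", 1.0)
assert is_simple_utilitarian(s)
```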

```
(documentation UtilityFormulaFn EnglishLanguage "A UnaryFunction that maps Formulas to the net utility of that which is described. Typically, the formula should refer to an action.")
(subclass UtilityFormulaFn PartialValuedRelation)
(subclass UtilityFormulaFn UnaryFunction)
(=>
  (instance ?UF UtilityFormulaFn)
  (and
    (domain ?UF 1 Formula)
    (range ?UF RealNumber)))
```

Every instance of a *utility formula function* is a *partial-valued relation* and a *unary function* that maps a *formula* to a *real number*. The idea is that constraints on how to describe the evaluated object as a formula can vary by use case and theory, e.g., are actions or situations evaluated?
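A partial utility function of this kind might be sketched as follows (Python, with a hypothetical formula string; the function is simply undefined off its domain):

```python
# Sketch: a partial function from formulas (represented as strings)
# to real numbers; formulas outside the domain raise KeyError.

def make_utility_fn(table):
    def uf(formula):
        return table[formula]   # partial: KeyError off the domain
    return uf

uf = make_utility_fn({"(instance ?D Donating)": 2.5})
assert uf("(instance ?D Donating)") == 2.5
try:
    uf("(instance ?T Theft)")
    assert False, "expected KeyError"
except KeyError:
    pass
```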

```
(<=>
  (instance ?SENTENCE UtilityAssignmentSentence)
  (exists (?FORMULA ?VALUE ?UF)
    (and
      (equal ?SENTENCE (equal (AssignmentFn ?UF ?FORMULA) ?VALUE))
      (instance ?UF UtilityFormulaFn)
      (instance ?FORMULA Formula)
      (instance ?VALUE RealNumber))))
```

*Utility assignment sentences* are of the form, “utility(FORMULA) is VALUE”, that is, “the utility of that which is described by this formula is this value”^{4}Strictly speaking, the utility function is applied to the syntactic formula. The intended interpretation is probably that the utility function evaluates formulas in light of a semantic interpretation., where VALUE is a number.

```
(<=>
  (instance ?SENTENCE UtilityComparisonSentence)
  (exists (?FORMULA1 ?FORMULA2 ?COMPARATOR ?UF)
    (and
      (instance ?FORMULA1 Formula)
      (instance ?FORMULA2 Formula)
      (instance ?UF UtilityFormulaFn)
      (or
        (equal ?COMPARATOR greaterThan)
        (equal ?COMPARATOR lessThan)
        (equal ?COMPARATOR greaterThanOrEqualTo)
        (equal ?COMPARATOR lessThanOrEqualTo)
        (equal ?COMPARATOR equal))
      (equal ?SENTENCE (AssignmentFn ?COMPARATOR (AssignmentFn ?UF ?FORMULA1) (AssignmentFn ?UF ?FORMULA2))))))
```

*Utility comparison sentences* express the comparison of the utility values of two formulas: the comparator can be “greater than”, “less than”, “greater than or equal to”, “less than or equal to”, or “equals”^{5}One might also want to add “incomparable” to the list..

```
(documentation UtilityAssignmentToValueJudgmentSentenceFn EnglishLanguage "A UnaryFunction that maps utility assignment sentences into simple value judgment sentences.")
(domain UtilityAssignmentToValueJudgmentSentenceFn 1 UtilityAssignmentSentence)
(range UtilityAssignmentToValueJudgmentSentenceFn SimpleValueJudgmentSentence)
(instance UtilityAssignmentToValueJudgmentSentenceFn TotalValuedRelation)
(instance UtilityAssignmentToValueJudgmentSentenceFn UnaryFunction)
(=>
  (and
    (equal (UtilityAssignmentToValueJudgmentSentenceFn ?UAS) ?VJS)
    (equal ?UAS (equal (AssignmentFn ?UF ?FORMULA) ?VALUE))
    (instance ?UF UtilityFormulaFn)
    (instance ?FORMULA Formula)
    (instance ?VALUE Number))
  (and
    (=>
      (greaterThan ?VALUE 0)
      (equal ?VJS
        (modalAttribute ?FORMULA MorallyGood)))
    (=>
      (lessThan ?VALUE 0)
      (equal ?VJS
        (modalAttribute ?FORMULA MorallyBad)))
    (=>
      (equal ?VALUE 0)
      (equal ?VJS
        (modalAttribute ?FORMULA MorallyNeutral)))))
```

One way to *‘interpret’* *utility assignment sentences* in the language of *value judgment sentences* is to consider any value *greater than zero* as denoting *moral goodness*, any value *less than zero* as denoting *moral badness*, and a value of zero as denoting *moral neutrality*. One could similarly use any other partition of the real numbers into two or three contiguous subsets^{6}Two subsets in the case that one does not wish to involve the notion of moral neutrality..
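This sign-based interpretation can be sketched as follows (Python, illustrative names; the cutoffs generalize to any contiguous partition of the reals):

```python
# Sketch: interpret a utility assignment as a value judgment via the
# sign of the value. Any partition of the reals into two or three
# contiguous ranges would serve equally well.

def judge(value, positive_cutoff=0.0, negative_cutoff=0.0):
    """Map a utility value to a moral attribute by its sign."""
    if value > positive_cutoff:
        return "MorallyGood"
    if value < negative_cutoff:
        return "MorallyBad"
    return "MorallyNeutral"

assert judge(3.2) == "MorallyGood"
assert judge(-1.0) == "MorallyBad"
assert judge(0.0) == "MorallyNeutral"
```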

```
(documentation ValueJudgmentToUtilityAssignmentSentenceFn EnglishLanguage "A UnaryFunction that maps value judgment sentences to utility assignment sentences.")
(domain ValueJudgmentToUtilityAssignmentSentenceFn 1 SimpleValueJudgmentSentence)
(range ValueJudgmentToUtilityAssignmentSentenceFn UtilityAssignmentSentence)
(instance ValueJudgmentToUtilityAssignmentSentenceFn TotalValuedRelation)
(instance ValueJudgmentToUtilityAssignmentSentenceFn UnaryFunction)
(=>
  (and
    (equal (ValueJudgmentToUtilityAssignmentSentenceFn ?VJS) ?UAS)
    (equal ?VJS (modalAttribute ?FORMULA ?MORALATTRIBUTE))
    (instance ?FORMULA Formula)
    (instance ?MORALATTRIBUTE MoralAttribute)
    (instance ?UF UtilityFormulaFn))
  (and
    (=>
      (equal ?MORALATTRIBUTE MorallyGood)
      (equal ?UAS
        (equal (AssignmentFn ?UF ?FORMULA) 1)))
    (=>
      (equal ?MORALATTRIBUTE MorallyBad)
      (equal ?UAS
        (equal (AssignmentFn ?UF ?FORMULA) -1)))
    (=>
      (equal ?MORALATTRIBUTE MorallyNeutral)
      (equal ?UAS
        (equal (AssignmentFn ?UF ?FORMULA) 0)))))
```

A simple translation from *moral value judgments* into *utility assignment sentences* is to assign the value of *“1”* to every *morally good* *formula*, *“-1”* to every *morally bad formula*, and *“0”* to every *morally neutral formula*.
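A sketch of this translation (Python, illustrative names), composed with the sign-based interpretation described earlier; note that the mapping is lossy, since utility magnitudes are discarded:

```python
# Sketch: collapse each moral judgment to a canonical utility value,
# then round-trip through a sign-based interpretation. The
# judgment -> utility -> judgment direction is faithful, while
# utility -> judgment -> utility loses magnitude.
JUDGMENT_TO_UTILITY = {"MorallyGood": 1.0, "MorallyBad": -1.0, "MorallyNeutral": 0.0}

def judge(value):
    return ("MorallyGood" if value > 0
            else "MorallyBad" if value < 0
            else "MorallyNeutral")

for judgment, value in JUDGMENT_TO_UTILITY.items():
    assert judge(value) == judgment            # faithful round trip
assert JUDGMENT_TO_UTILITY[judge(7.0)] == 1.0  # the magnitude 7.0 is lost
```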

```
(documentation ValueJudgmentToUtilityAssignmentLikelihoodSentence EnglishLanguage "A UnaryFunction that maps value judgment sentences to utility assignment likelihood sentences.")
(domain ValueJudgmentToUtilityAssignmentLikelihoodSentence 1 SimpleValueJudgmentSentence)
(range ValueJudgmentToUtilityAssignmentLikelihoodSentence UtilitarianSentence)
(instance ValueJudgmentToUtilityAssignmentLikelihoodSentence TotalValuedRelation)
(instance ValueJudgmentToUtilityAssignmentLikelihoodSentence UnaryFunction)
(=>
  (and
    (equal (ValueJudgmentToUtilityAssignmentLikelihoodSentence ?VJS) ?UAS)
    (equal ?VJS (modalAttribute ?FORMULA ?MORALATTRIBUTE))
    (instance ?FORMULA Formula)
    (instance ?MORALATTRIBUTE MoralAttribute)
    (instance ?UF UtilityFormulaFn))
  (and
    (=>
      (equal ?MORALATTRIBUTE MorallyGood)
      (equal ?UAS
        (modalAttribute
          (greaterThan (AssignmentFn ?UF ?FORMULA) 0) Likely)))
    (=>
      (equal ?MORALATTRIBUTE MorallyBad)
      (equal ?UAS
        (modalAttribute
          (lessThan (AssignmentFn ?UF ?FORMULA) 0) Likely)))
    (=>
      (equal ?MORALATTRIBUTE MorallyNeutral)
      (equal ?UAS
        (modalAttribute
          (equal (AssignmentFn ?UF ?FORMULA) 0) Likely)))))
```

An alternative approach is to consider *moral value judgments* to be probabilistic in nature: that is, if a formula is morally good, then it is likely to have a utility greater than zero. This could be because a value judgment needs to be made about a whole class of actions even though there are edge cases where an instance of the class produces negative consequences.
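A sketch of this probabilistic reading (Python, with made-up instance utilities): a class judged morally good has mostly positive-utility instances, edge cases notwithstanding.

```python
# Sketch with hypothetical data: utilities of individual instances of
# an action class that is judged "morally good" as a whole.
instance_utilities = [1.0, 2.0, 0.5, -0.3, 1.5]   # one negative edge case

# Fraction of instances with utility greater than zero.
p_positive = sum(u > 0 for u in instance_utilities) / len(instance_utilities)
assert p_positive > 0.5   # "likely" of positive utility, despite the edge case
```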

```
(documentation ValueJudgmentToUtilityComparisonSentence EnglishLanguage "A UnaryFunction that maps value judgment sentences to utility comparison sentences.")
(domain ValueJudgmentToUtilityComparisonSentence 1 SimpleValueJudgmentSentence)
(range ValueJudgmentToUtilityComparisonSentence UtilitarianSentence)
(instance ValueJudgmentToUtilityComparisonSentence TotalValuedRelation)
(instance ValueJudgmentToUtilityComparisonSentence UnaryFunction)
(=>
  (and
    (equal (ValueJudgmentToUtilityComparisonSentence ?VJS) ?UCS)
    (equal ?VJS (modalAttribute ?FORMULA ?MORALATTRIBUTE))
    (instance ?FORMULA Formula)
    (instance ?MORALATTRIBUTE MoralAttribute)
    (instance ?UF UtilityFormulaFn)
    (equal ?SITUATION (SituationFormulaFn ?FORMULA)))
  (and
    (=>
      (equal ?MORALATTRIBUTE MorallyGood)
      (equal ?UCS
        (modalAttribute
          (forall (?F)
            (=>
              (exists (?AGENT ?CP)
                (and
                  (capableInSituation ?CP agent ?AGENT ?SITUATION)
                  (realizesFormulaSubclass ?CP ?F)))
              (greaterThanOrEqualTo (AssignmentFn ?UF ?FORMULA) (AssignmentFn ?UF ?F)))) Likely)))
    (=>
      (equal ?MORALATTRIBUTE MorallyBad)
      (equal ?UCS
        (modalAttribute
          (exists (?F ?AGENT ?CP)
            (and
              (capableInSituation ?CP agent ?AGENT ?SITUATION)
              (realizesFormulaSubclass ?CP ?F)
              (lessThan (AssignmentFn ?UF ?FORMULA) (AssignmentFn ?UF ?F)))) Likely)))
    (=>
      (equal ?MORALATTRIBUTE MorallyNeutral)
      (equal ?UCS
        (modalAttribute
          (equal (AssignmentFn ?UF ?FORMULA) 0) Likely)))))
```

I found it tricky to translate moral value judgments into *utility comparison sentences*: can one claim that a moral value judgment about one formula implies anything about other formulas?

One idea is to consider the *situation* referred to by the formula and to take every other *formula* realizing an *action* in this situation. To state that a *formula* is *morally good* in this *situation* then implies that its utility is *likely* *greater than or equal to* the utility of *any formula realizing an (other) action*. In simple terms, “an action is good if it is the best that one can do,” and “an action is bad if there exists a (significantly) better option.”
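In Python terms, this maximality reading looks like the following sketch (hypothetical actions and utilities, not SUMO; moral neutrality is left unmodeled here, as in the formalization):

```python
# A sketch: relative to the alternatives available in a situation, an
# action is judged good when no alternative has strictly higher
# utility, and bad otherwise.

def judge_in_situation(action, alternatives, uf):
    best = max(uf(a) for a in alternatives)
    return "MorallyGood" if uf(action) >= best else "MorallyBad"

# Hypothetical utilities for the actions available in one situation.
uf = {"help": 3.0, "ignore": 0.0, "harm": -2.0}.__getitem__
options = ["help", "ignore", "harm"]
assert judge_in_situation("help", options, uf) == "MorallyGood"
assert judge_in_situation("ignore", options, uf) == "MorallyBad"
```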

As can be seen in the case of *moral neutrality*, I lazily didn’t think of a translation. One approach would be to define “approximate equality” and to claim that all other options are approximately equal in value. Another option would be to define a *neutral situation* of *zero value* to compare with.

It’s noteworthy that the translation in essence defines the scope of a particular *moral theory*.

```
(documentation UtilityComparisonToValueJudgmentSentence EnglishLanguage "A UnaryFunction that maps utility comparison sentences to value judgment sentences.")
(domain UtilityComparisonToValueJudgmentSentence 1 UtilityComparisonSentence)
(range UtilityComparisonToValueJudgmentSentence ValueJudgmentSentence)
(instance UtilityComparisonToValueJudgmentSentence TotalValuedRelation)
(instance UtilityComparisonToValueJudgmentSentence UnaryFunction)
(=>
  (and
    (equal (UtilityComparisonToValueJudgmentSentence ?UCS) ?VJS)
    (equal ?UCS (?COMPARATOR (AssignmentFn ?UF ?FORMULA1) (AssignmentFn ?UF ?FORMULA2)))
    (instance ?FORMULA1 Formula)
    (instance ?FORMULA2 Formula)
    (instance ?UF UtilityFormulaFn))
  (equal ?VJS
    (?COMPARATOR
      (probabilityFn (modalAttribute ?FORMULA1 MorallyGood))
      (probabilityFn (modalAttribute ?FORMULA2 MorallyGood)))))
```

A weak translation of *utility comparison sentences* into *value judgment sentences* is to translate the *comparator* into a comparison of the *likelihoods* that the formulas are judged as *morally good*^{7}This raises the question of how the probabilistic judgments are grounded..

A simpler translation, applicable to the *comparators* *“greater than”*, *“greater than or equal to”*, and *“equal”*, is the following: if the utility of F1 is at least the utility of F2, then if F2 is *morally good*, F1 is *morally good*, too.

```
(documentation UtilityComparisonToValueJudgmentSentence2 EnglishLanguage "A UnaryFunction that maps utility comparison sentences to value judgment sentences.")
(domain UtilityComparisonToValueJudgmentSentence2 1 UtilityComparisonSentence)
(range UtilityComparisonToValueJudgmentSentence2 ValueJudgmentSentence)
(instance UtilityComparisonToValueJudgmentSentence2 PartialValuedRelation)
(instance UtilityComparisonToValueJudgmentSentence2 UnaryFunction)
(=>
  (and
    (equal (UtilityComparisonToValueJudgmentSentence2 ?UCS) ?VJS)
    (equal ?UCS (AssignmentFn ?COMPARATOR (AssignmentFn ?UF ?FORMULA1) (AssignmentFn ?UF ?FORMULA2)))
    (instance ?FORMULA1 Formula)
    (instance ?FORMULA2 Formula)
    (instance ?UF UtilityFormulaFn)
    (or
      (equal ?COMPARATOR greaterThan)
      (equal ?COMPARATOR greaterThanOrEqualTo)
      (equal ?COMPARATOR equal)))
  (equal ?VJS
    (=>
      (modalAttribute ?FORMULA2 MorallyGood)
      (modalAttribute ?FORMULA1 MorallyGood))))
```
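The simpler translation can be sketched as follows (Python, illustrative names): knowing that u(F1) is at least u(F2), a goodness judgment about F2 propagates upward to F1, and nothing follows otherwise.

```python
# Sketch of the monotone reading of a utility comparison.

def propagate_goodness(u1, u2, f2_is_good):
    """If u1 >= u2 and f2 is morally good, conclude f1 is morally
    good too; otherwise conclude nothing (None)."""
    if u1 >= u2 and f2_is_good:
        return True
    return None

assert propagate_goodness(2.0, 1.0, True) is True   # judgment propagates
assert propagate_goodness(0.5, 1.0, True) is None   # u1 < u2: no conclusion
```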