On Teleological Definitions

Teleological approaches put the goal first and derive the appropriate frameworks, formalisms, and means to achieve the goal.

The ethical paradigms have been defined by the top-level goals that motivate their approaches. The framework used can then be generalized and tweaked into different theories within the paradigm. I believe it’s important to note the teleological inspirations, which are often implicitly assumed when working with the frameworks. On this wiki, I prefer formalistic definitions, as they seem more general and better able to accommodate changes in goal formulations.

  • A goal of consequentialist utilitarianism is “the greatest good for the greatest number” (which is debated, with Bentham advocating “the less difference between the two unequal parts”).
    • Generally, “the best possible consequences” can be an overarching goal.
  • A goal of virtue ethics is eudaimonia: to live a good life of “good spirit”, the pursuit of which leads to developing virtues.
  • A goal of Kantian deontology is to attain the Kingdom of Ends, composed of rational beings who live by common, universal laws (this fits the normative ideal of codes of conduct that all rational people will put forth).
  • The goal of attaining social coordination for the benefit of all involved can fuel multiple paradigms.

SUMO Sketches and Discussion

The simplified version of the Greatest Happiness Principle is sketched out on the examples page. The core idea is that the ethical philosophy of wishing for the greatest good for the greatest number sets up an optimization problem: one needs to find ways to measure goodness and to optimally satisfy these measures. This naturally leads to the development of utility functions and felicific calculi. Thus the paradigm follows from the goal, even if the paradigm could be applied to optimize for values other than wellbeing/goodness.
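
To make this concrete, here is a rough guess at what such a measure could look like; the shape of NumHappyPeopleUtilityFn below is my own assumption (including the use of KappaFn and CardinalityFn, and treating a situation as a Physical to which WhenFn applies), not the definition from the examples page.

(instance NumHappyPeopleUtilityFn UnaryFunction)

; Hypothetical sketch: the utility of a situation is the number of people
; who are happy during it. KappaFn forms the class of things satisfying a
; formula; CardinalityFn counts its instances.
(equal (NumHappyPeopleUtilityFn ?SITUATION)
  (CardinalityFn
    (KappaFn ?PERSON
      (and
        (instance ?PERSON Human)
        (holdsDuring (WhenFn ?SITUATION)
          (attribute ?PERSON Happiness))))))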

As for formalization, it’s difficult to specify the fully abstract goal without the use of a specific measure, e.g., “the number of happy people”: thus what I offer is a version of the greatest happiness principle that aspires to take the actions that best maximize the number of happy people.

(documentation GreatestHappinessPrincipleUtilitarianism EnglishLanguage "The Greatest Happiness principle stipulates that the best course of action is that which causes the greatest happiness.")
(subclass GreatestHappinessPrincipleUtilitarianism Utilitarianism)
(instance GreatestNumHappyPeopleUtilitarianism GreatestHappinessPrincipleUtilitarianism)

(<=>
  (holdsEthicalPhilosophy ?AGENT GreatestNumHappyPeopleUtilitarianism)
  (desires ?AGENT
    (forall (?SITUATION ?CPROC)
      (=>
        (bestActionByUtilityInSituationForAgent ?AGENT ?CPROC NumHappyPeopleUtilityFn ?SITUATION)
        (exists (?IPROC)
          (and 
            (agent ?IPROC ?AGENT)
            (instance ?IPROC ?CPROC)
            (equal ?SITUATION (SituationFn ?IPROC))))))))
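
The predicate bestActionByUtilityInSituationForAgent does the heavy lifting above. As a hedged sketch, its signature might be declared as follows; these domain declarations are my assumptions (I use Agent, and Entity for the situation argument since SUMO has no dedicated Situation class), not the definition used on the examples page.

; Assumed signature: the best class of actions for an agent to take, as
; measured by a utility function, in a given situation.
(instance bestActionByUtilityInSituationForAgent QuaternaryPredicate)
(domain bestActionByUtilityInSituationForAgent 1 Agent)
(domainSubclass bestActionByUtilityInSituationForAgent 2 Process)
(domain bestActionByUtilityInSituationForAgent 3 UnaryFunction)
(domain bestActionByUtilityInSituationForAgent 4 Entity)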

Eudaimonia refers to the ‘good spirit’, ‘happiness’, or ‘flourishing’ one experiences when living the good, virtuous life. As with the goal of ‘happiness’, while arguably everyone desires happiness (thus making it an objective goal), how do we determine what happiness entails? Is the measurement self-evident, leaving only the practical question of what makes up a good life? Hedonistic pursuit of pleasure? A productive career contributing to economic development? A philosophical or meditative life? The virtues can be seen as character traits in virtue of their contribution to living the good life (eudaimonia). This could appear selfish, yet Aristotle claims in the Nicomachean Ethics: “The presence of friends, therefore, is desirable in all circumstances”. The example sketch I have defines virtues as character attributes likely to cause or increase happiness (one could use a utility function to measure happiness and increase the precision of the statement).

(<=>
  (instance ?VIRTUEATTRIBUTE VirtueAttribute)
  (modalAttribute
    (causesProposition
      (attribute ?AGENT ?VIRTUEATTRIBUTE)
      (attribute ?AGENT Happiness))
    Likely))

(<=>
  (instance ?VIRTUEATTRIBUTE VirtueAttribute)
  (increasesLikelihood
    (attribute ?AGENT ?VIRTUEATTRIBUTE)
    (attribute ?AGENT Happiness)))
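
Usage would then amount to situating the class and declaring particular virtues as instances. The following is hypothetical: Honesty is an assumed constant, and placing VirtueAttribute under SUMO’s TraitAttribute is my guess at a reasonable home for the class.

; Hypothetical usage: a particular virtue as an instance of the class.
(subclass VirtueAttribute TraitAttribute)
(instance Honesty VirtueAttribute)
(documentation Honesty EnglishLanguage "The virtue of being truthful and sincere in word and deed.")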

Kant’s Categorical Imperative aims to provide a single top-level duty from which the rest of morality can be derived. The idea is to work from self-evident principles to derive necessary duties. Gewirth’s Principle of Generic Consistency is a modern example in this lineage; however, the claim right to freedom it derives may not be as inferentially powerful as Kant intended the categorical imperative to be. Members of the Kingdom of Ends will treat all other members as ends in themselves and will act according to maxims that they hold to be universally legislating (i.e., all must follow them as if it were an absolute necessity to do so). It’s unclear what this imperative implies in practice. Parfit suggests that this maxim will be an optimific principle such as consequentialism (which leads to another telos from which further ethical theory can be derived).

The forms of Kant’s categorical imperative:

  1. “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”
  2. “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.”
  3. “Thus the third practical principle follows [from the first two] as the ultimate condition of their harmony with practical reason: the idea of the will of every rational being as a universally legislating will.”
  4. “Act according to maxims of a universally legislating member of a merely possible kingdom of ends.”

On the examples page, I sketch out versions of the first and second forms. Below is part of the first: every agent holds the obligation to only act by maxims such that the agent desires all agents to hold an obligation to act by the maxim whenever in a relevant situation.

(instance KDT KantianDeontologicalTheory)

(element 
  (holdsObligation 
    (<=>
      (actsByMaxim ?AGENT ?MAXIM)
      (desires ?AGENT
        (forall (?AGENT2)
          (holdsObligation
            (=>
              (relevant ?MAXIM (SituationFn ?AGENT2))
              (actsByMaxim ?AGENT2 ?MAXIM)) ?AGENT2)))) ?AGENT) KDT)
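
The auxiliary predicates above are left undeclared; here is a guess at their signatures (these declarations are my assumptions, treating maxims and obligations as Formulas, and are not part of SUMO or the examples page):

; Assumed signatures for the auxiliary predicates used in the KDT sketch.
(instance actsByMaxim BinaryPredicate)
(domain actsByMaxim 1 Agent)
(domain actsByMaxim 2 Formula)

; An obligation is a Formula an agent is bound to make true.
(instance holdsObligation BinaryPredicate)
(domain holdsObligation 1 Formula)
(domain holdsObligation 2 Agent)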

The frame of ethics as a coordination game in a society of agents with their own values can also lead to ethical and legal frameworks. In the spirit of mechanism design, laws can change the incentive structure of the game so that the agents will take pro-social paths in line with their values (toward their goals), that is, paths that minimally interfere with others’ pursuits of their values and goals. There are then psychological solutions that may not be appropriate to legislate as law (this runs counter to Kant’s stern view): Parfit argues that the most important of these are what we consider to be moral solutions. For example, sufficient altruism and “wishing to satisfy the Kantian Test” (namely, that one can will that others respond the same way in similar situations) bias one toward cooperative solutions to the Prisoner’s Dilemma. With limited computational resources, the number of iterations needed to support cooperative strategies (e.g., forgiving benevolent tit-for-tat) can be lower, too. Thus moral codes of conduct can be seen as stemming from the society as a meta-agent to enhance its harmonious productivity (in doing whatever its members value doing).

As for a formalization, one sketch could be to say: a society desires to have laws and hold a philosophy such that the likelihood of the members of the society achieving their purposes is increased (this could probably be sketched out in SUMO; instead of desires, a top-level obligation could be used).
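
Taking up that suggestion, a very rough first pass might be the following, where lawOf and holdsPhilosophy are hypothetical predicates and SUMO’s GroupOfPeople stands in for a society:

; Hypothetical sketch: a society desires that there be laws and a
; philosophy whose joint presence increases the likelihood that each
; member achieves what that member desires.
(=>
  (instance ?SOCIETY GroupOfPeople)
  (desires ?SOCIETY
    (exists (?LAW ?PHIL)
      (and
        (lawOf ?LAW ?SOCIETY)
        (holdsPhilosophy ?SOCIETY ?PHIL)
        (forall (?MEMBER ?PURPOSE)
          (=>
            (and
              (member ?MEMBER ?SOCIETY)
              (desires ?MEMBER ?PURPOSE))
            (increasesLikelihood
              (and
                (lawOf ?LAW ?SOCIETY)
                (holdsPhilosophy ?SOCIETY ?PHIL))
              ?PURPOSE)))))))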