Ethical Conjectures
Many debates in the field of meta-ethics can be formulated as conjectures. The same ontology and formalism can accommodate both morally relativist and normative theories.
- Normativity: There exist ethical codes that all rational agents will put forth (see SEP: The Definition of Morality): ethical theorems whose premises are self-evident (Alan Gewirth's Proof for the Principle of Generic Consistency, formalized in Isabelle/HOL, is one of the best attempts at establishing a normative ethical theorem).
- Decidability: There is a decision procedure to determine whether ethical judgments hold or not (under many standard theories, there should be an impossibility result in general, but many specific judgments can still be determined).
- Consistency: There exists an ethical theory that solves all moral dilemmas consistently, providing clear action guidance.
- Paraconsistency: Some moral dilemmas require reasoning in the presence of inconsistencies.
- Equivalence: The primary ethical paradigms are equally expressive. For each pair of paradigms A and B and each theory b in B, there exists a translation trans into A such that trans(b) is equivalent to b (I believe the wording should be something like this; see the tentative rendering below the list). In plain English, each paradigm can embed the other paradigms within it.
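One tentative way to render the equivalence conjecture, reading "equally expressive" as mutual embeddability (an assumed interpretation rather than anything fixed by the paradigms themselves):

  for all paradigms A, B and every theory b in B, there exists trans : B → A such that trans(b) ≡ b,

where ≡ means that the two theories license the same ethical judgments. Swapping A and B gives the embedding in the other direction.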
SUMO Sketches
Please note that the following are rough sketches in the direction of these domains of interest. They are likely in need of further work if someone intends to use them seriously.
(documentation isNormative EnglishLanguage
"(isNormative ?THEORY) means that the ethical theory ?THEORY is such that all rational agents will accept it upon consideration.
The theory is also a justified true theory.")
(domain isNormative 1 EthicalTheory)
(instance isNormative UnaryPredicate)
(<=>
  (isNormative ?THEORY)
  (and
    (instance ?THEORY JustifiedTrueEthicalTheory)
    (theoryFieldPair ?ETHICS ?THEORY)
    (equal ?FTHEORY (ListAndFn (SetToListFn ?THEORY)))
    (forall (?AGENT)
      (=>
        (and
          (instance ?AGENT CognitiveAgent)
          (considers ?AGENT ?FTHEORY))
        (and
          (holdsEthicalPhilosophy ?AGENT ?ETHICS)
          (believes ?AGENT ?FTHEORY))))))
An ethical theory is normative if and only if it is a justified true ethical theory and every cognitive (reasoning) agent who considers the theory will hold the corresponding ethical philosophy and believe the theory to be true.
(documentation isDecidable EnglishLanguage
"(isDecidable ?THEORY) means that there exists an effective method (program) that can determine the truth of any sentence within the ethical theory ?THEORY.")
(domain isDecidable 1 EthicalTheory)
(instance isDecidable UnaryPredicate)
(<=>
  (isDecidable ?THEORY)
  (exists (?PROG)
    (and
      (instance ?PROG ComputerProgram)
      (equal ?FTHEORY (ListAndFn (SetToListFn ?THEORY)))
      (forall (?SENT ?COMP ?TRUTH)
        (=>
          (and
            (instance ?SENT EthicalSentence)
            (truth (entails ?FTHEORY ?SENT) ?TRUTH)
            (programRunning ?COMP ?PROG)
            (patient ?COMP ?THEORY)
            (patient ?COMP ?SENT))
          (result ?COMP ?TRUTH))))))
An ethical theory is decidable if and only if there exists a program such that, for every ethical sentence, running the program on the theory and the sentence results in the truth value of whether the sentence logically follows from the theory (note that this is different from determining the objective truth of an arbitrary sentence).
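As an illustration only (not part of the SUMO sketch), here is a minimal decision procedure in Python under the strong assumption that the theory and the queried sentence can be encoded as propositional formulas; brute-force truth tables then decide entailment. Richer first-order ethical theories will generally not admit such a procedure, which is the impossibility worry noted above.

from itertools import product

# Formulas are either a propositional variable (a string) or a tuple:
# ("not", f), ("and", f, g), ("or", f, g), ("implies", f, g).

def evaluate(formula, assignment):
    """Evaluate a propositional formula under a truth assignment."""
    if isinstance(formula, str):
        return assignment[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if op == "implies":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError("unknown operator: %r" % (op,))

def variables(formula, acc=None):
    """Collect the propositional variables occurring in a formula."""
    acc = set() if acc is None else acc
    if isinstance(formula, str):
        acc.add(formula)
    else:
        for sub in formula[1:]:
            variables(sub, acc)
    return acc

def entails(theory, sentence):
    """Return True iff every assignment satisfying all theory axioms satisfies the sentence."""
    vars_ = set()
    for axiom in theory:
        variables(axiom, vars_)
    variables(sentence, vars_)
    names = sorted(vars_)
    for values in product([True, False], repeat=len(names)):
        assignment = dict(zip(names, values))
        if all(evaluate(a, assignment) for a in theory) and not evaluate(sentence, assignment):
            return False
    return True

# Toy theory: stealing is wrong, and this situation involves stealing.
theory = [("implies", "Stealing", "Wrong"), "Stealing"]
print(entails(theory, "Wrong"))           # True: the theory entails "Wrong"
print(entails(theory, ("not", "Wrong")))  # False: the negation does not follow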
(documentation isConsistent EnglishLanguage
"(isConsistent ?THEORY) means that the theory provides consistent guidance for all choice points.")
(domain isConsistent 1 EthicalTheory)
(instance isConsistent UnaryPredicate)
(<=>
  (isConsistent ?THEORY)
  (forall (?CP)
    (=>
      (instance ?CP ChoicePoint)
      (and
        (containsInformation (ListAndFn (SetToListFn (evaluateTheory ?THEORY ?CP))) ?GUIDANCE)
        (consistent ?GUIDANCE (equal True True))))))
An ethical theory is consistent if and only if, for every choice point, the guidance obtained by evaluating the theory at that choice point is consistent (because consistent only applies to two propositions, I used the hack of checking consistency with "True = True", a tautology that is consistent with everything).
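The intent of the hack can be spelled out as a one-line equivalence: a body of guidance G is consistent with a tautology exactly when G is itself satisfiable,

  consistent(G, ⊤) ⟺ Sat(G ∧ ⊤) ⟺ Sat(G).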
To assert the necessity of paraconsistent reasoning about ethical situations, one may wish to state that no consistent ethical theory exists. One probably also needs to express additional constraints on what makes an ethical theory non-trivially interesting (otherwise a trivial theory that offers no real guidance would count as consistent and refute the claim).
(not
  (exists (?THEORY)
    (isConsistent ?THEORY)))
(instance SatisfactoryEthicalTheory Conjecture)
(equal SatisfactoryEthicalTheory
  (exists (?THEORY)
    (and
      (instance ?THEORY EthicalTheory)
      (isNormative ?THEORY)
      (isDecidable ?THEORY)
      (isConsistent ?THEORY))))
We get the conjecture that there is a satisfactory ethical theory, which is normative, decidable, and consistent.
I find the example of epistemic universal love as a virtue attribute instructive. In SUMO's knowledge base, needs and wants reduce to desires. Thus a bodhisattva (agent) has universal love if, for every agent and every formula that agent desires, the bodhisattva desires that some process realize the formula. The notion of epistemic universal love grounds universal love by only requiring that the bodhisattva desire the fulfillment of the desires it knows about.
Then the following conjecture should be easy to prove: if a bodhisattva (agent) possesses the virtue of epistemic universal love, and there exist two agents and a formula such that the bodhisattva knows that one agent desires the formula and the other desires its negation, then the bodhisattva should desire the realization of all formulas (a sketch of why appears after the formalization below). Thus agents with epistemic universal love cannot classically reason about what to do (based on their desires) and need paraconsistent reasoning.
(<=>
  (attribute ?BODHISATTVA UniversalLove)
  (forall (?AGENT)
    (=>
      (desires ?AGENT ?FORM)
      (desires ?BODHISATTVA
        (exists (?FUL)
          (and
            (realizesFormula ?FUL ?FORM)
            (instance ?FUL Process)))))))
(<=>
  (attribute ?BODHISATTVA EpistemicUniversalLove)
  (forall (?AGENT)
    (=>
      (knows ?BODHISATTVA
        (desires ?AGENT ?FORM))
      (desires ?BODHISATTVA
        (exists (?FUL)
          (and
            (realizesFormula ?FUL ?FORM)
            (instance ?FUL Process)))))))
(instance UniversalLoversNeedParaconsistentReasoning Conjecture)
(equal UniversalLoversNeedParaconsistentReasoning
  (forall (?ULAGENT)
    (=>
      (and
        (attribute ?ULAGENT EpistemicUniversalLove)
        (exists (?A1 ?A2 ?FORM)
          (and
            (knows ?ULAGENT (desires ?A1 ?FORM))
            (knows ?ULAGENT (desires ?A2 (not ?FORM))))))
      (forall (?FORM)
        (desires ?ULAGENT
          (exists (?FUL)
            (and
              (realizesFormula ?FUL ?FORM)
              (instance ?FUL Process))))))))
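To spell out the intended reading, write D(φ) for "the agent desires that some process realize φ". Epistemic universal love plus the known conflicting desires give D(φ) and D(¬φ). Under the additional assumptions that desired contents agglomerate and are closed under classical consequence (assumptions not stated in the axioms above), explosion does the rest:

  D(φ), D(¬φ) ⟹ D(φ ∧ ¬φ) ⟹ D(ψ) for arbitrary ψ,

since φ ∧ ¬φ classically entails everything (ex falso quodlibet). A paraconsistent consequence relation blocks exactly this last step, which is the point of the conjecture.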