# Alan Gewirth’s Proof for the Principle of Generic Consistency

Every prospective, purposive agent (PPA) has a claim right to its freedom and well-being (FWB).

David Fuenmayor and Christoph Benzmüller have, in an epic historical first, formalized a normative ethical theorem in Isabelle/HOL: Alan Gewirth’s Proof for the Principle of Generic Consistency. The argument runs roughly as follows (paraphrased directly from their paper):

- I act for some purpose E; that is, I am a purposeful agent.
- E is subjectively good.
- My freedom is a necessary condition for my agency, i.e., for attaining any purpose.
- Freedom is a necessary good.
- I have a *claim right* to my freedom.
- Every purposeful agent has a claim right to its freedom.

Fuenmayor and Benzmüller found versions of the axioms regarding subjective goods, purpose, freedom and well-being, claim rights, etc., such that the proof *works in Isabelle/HOL*. An additional cool aspect is that they take state-of-the-art approaches from analytic philosophy to modeling perspectival judgments and formalize them. This bears on one of the weaknesses of working in SUMO that I discussed in the second worklog: how to model *situations*. To this end, they use a *dyadic deontic logic* with conditional *obligations* and a modal system for “dialectic necessity” based on Kaplan’s logical framework LD.

It seems generally acknowledged that to do *real-world reasoning* (commonly called “commonsense reasoning”), one will need to work with multiple modal systems interleaved together, which becomes quite subtle. Benzmüller is a big proponent of showing that *Shallow Semantic Embeddings* can serve in HOL as a *universal meta-logical reasoning system*, i.e., that the *domain-specific logics* are defined within HOL itself. Ben Goertzel et al. have also long been proponents of this approach (see Meta-MeTTa: an operational semantics for MeTTa, their recent programming language to this end).
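To give a flavor of the shallow-embedding idea (this is my own toy Python sketch, not the authors’ Isabelle/HOL code): a modal formula is embedded as a predicate over worlds, and the box/diamond operators become quantification over an accessibility relation that I invent here purely for illustration.

```python
# Toy shallow semantic embedding of propositional modal logic:
# a formula is a function from worlds to booleans, and the modal
# operators quantify over an (assumed, illustrative) accessibility
# relation. In Isabelle/HOL the same trick is done with HOL functions.
from typing import Callable

World = int
Form = Callable[[World], bool]

WORLDS = [0, 1, 2]
# Hypothetical accessibility relation, chosen only for this example.
ACCESS = {0: [1, 2], 1: [2], 2: [2]}

def box(phi: Form) -> Form:
    """Necessity: phi holds in every world accessible from w."""
    return lambda w: all(phi(v) for v in ACCESS[w])

def dia(phi: Form) -> Form:
    """Possibility: phi holds in some world accessible from w."""
    return lambda w: any(phi(v) for v in ACCESS[w])

def implies(p: Form, q: Form) -> Form:
    return lambda w: (not p(w)) or q(w)

def valid(phi: Form) -> bool:
    """Validity: phi holds at every world of the model."""
    return all(phi(w) for w in WORLDS)

# Atomic proposition: "p is true exactly in world 2".
p: Form = lambda w: w == 2

# An instance of axiom K, valid in every Kripke model:
# box(p -> p) -> (box p -> box p).
assert valid(implies(box(implies(p, p)), implies(box(p), box(p))))
```

The point is that the modal logic is not built into the host language; it is *defined* inside it, so the host’s reasoning machinery (here, plain evaluation; in HOL, theorem proving) applies directly. Interleaving several modal systems then amounts to defining several such operator families over the same worlds.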

What’s the significance of this?

If you disagree with Gewirth’s theorem, then there should be a particular property of the definitions or logical set-up that you disagree with.

Perhaps the perspectival modeling via Kaplan’s logical framework doesn’t capture the semantics properly? One *meaning postulate* I consider suspiciously strong is *explicationGoodness3*, of which they write: “The third axiom, however, has been added by us as an implicit premise in order to render Gewirth’s proof as correct. This axiom aims at representing the intuitive notion of ‘seeking the good’.” In essence, they tried to find the minimal axiom needed for the proof to go through. *explicationGoodness3* states that, in all possible contexts, for all purposes P and agents A, if P is possible, then P is obligatory whenever P is necessarily good for A. Are all necessary goods obligatory to fulfill? That certainly seems debatable. Picking apart precisely what the statement means already requires careful study. There is an open invitation for someone to find a more convincing set of premises for the proof 😉.
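Schematically, my own reading of that English statement (not the authors’ exact Isabelle/HOL notation; the modal and deontic operators here stand in for their dialectic-necessity and conditional-obligation machinery) would be something like:

```latex
% explicationGoodness3, roughly: in every context, a possible purpose
% that is necessarily good for an agent is obligatory.
\[
  \Box\, \forall P\, \forall A.\;
    \Diamond P \rightarrow
      \bigl( \Box\, \mathrm{Good}(P, A) \rightarrow \mathrm{O}(P) \bigr)
\]
```

Seeing it spelled out this way makes the worry concrete: the axiom turns every possible, necessarily-good purpose into an obligation, which is a much stronger claim than merely “agents seek the good.”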

The main point is that the debate about whether this *normative ethical theorem* holds can now be very precise, held at the level of definitions, logical frameworks, and how they map onto *reality*. Yay!

Below is a screenshot of the core proof: