Future Development Directions

This page documents directions for future development of the formal ethics seed ontology that I believe could be fruitful or interesting. Some of them I intended to pursue myself; others were clearly beyond the scope of my initial investment in this project.


Interesting Paper Summaries to Add

  1. GenEth: A General Ethical Dilemma Analyzer: an inductive logic programming (ILP) system that generates principles in dialogue with ethics experts, covering ethically recommended behavior by building rules out of core domain principles: minimizing harm, maximizing benefit, respecting autonomy, fairness, etc.
  2. On the computational complexity of ethics: moral tractability for minds and machines: a survey of the computational complexity of ethical decision-making under various paradigms, covering consequentialist utilitarianism, deontology, virtue ethics, and contractualism. It concludes that moral perfection is impossible or intractable for both humans and machines.

Small Notes to Incorporate

  1. Add a reference to and discussion of consequentialist virtue ethics, e.g., Julia Driver’s idea in The Virtues and Human Nature that “a trait is a virtue if and only if its exercise tends to bring about valuable states of affairs.”

Wiki Pages to Add

  1. Moral Pragmatism provides a non-normative explanation of how societies determine which ethical philosophies to endorse.
  2. A discussion of difficult terminological choices made in the project.
  3. A discussion of formalization challenges (e.g., instances vs classes in SUMO).
  4. A discussion of how to generalize translations between ethical paradigms from base cases to any theories.
  5. Ethics of Reciprocity: explore the idea, inspired by indigenous wisdom, of grounding ethics in the principle of maintaining reciprocal relationships.
  6. Projecting Ethical Paradigms into the RL Framework: one can view each paradigm as dealing with constraints over how reward should depend on (state, action, next_state) tuples. This is sketched out in the first worklog and could receive a dedicated page.
  7. Moral Relativism: the meta-ethical stance that there is no single, universally agreed-upon basis of morality. Distinguish it from moral nihilism: the relativist holds that multiple sufficiently viable options exist. This ontology’s formal framework should allow both moral relativists and moral objectivists to discuss theories in the same language.
  8. Meta-Ethics 101 Overview Page: a page that walks through the core definitions in the ontology in a sensible order, putting them all together.
  9. A worked example showcasing an ethical theory as held by an ethical group and applied to a specific situation. The Ethical Theory Examples page currently contains only minimal snippets.
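The RL projection in item 6 can be sketched concretely. The sketch below is a minimal illustration, not part of the ontology: Transition, consequentialist_ok, and deontological_ok are invented names, and each paradigm is modeled as a constraint (a predicate) on which reward functions over (state, action, next_state) tuples are admissible.

```python
from typing import Callable, NamedTuple

class Transition(NamedTuple):
    state: str
    action: str
    next_state: str

Reward = Callable[[Transition], float]

def consequentialist_ok(r: Reward, samples: list[Transition]) -> bool:
    """Consequentialism (sketch): reward may depend only on the outcome,
    so two transitions with the same next_state must get the same reward."""
    seen: dict[str, float] = {}
    for t in samples:
        v = r(t)
        if seen.setdefault(t.next_state, v) != v:
            return False
    return True

def deontological_ok(r: Reward, samples: list[Transition],
                     forbidden: set[str]) -> bool:
    """Deontology (sketch): forbidden actions must never receive
    positive reward, regardless of the outcome they produce."""
    return all(r(t) <= 0 for t in samples if t.action in forbidden)
```

For example, a reward that looks only at the resulting state passes the consequentialist check but can fail the deontological one if a forbidden action happens to produce a rewarded outcome, which is exactly the kind of inter-paradigm tension the proposed page would discuss.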

Technical Research and Development

  1. Port the ontology and the necessary SUMO concepts into Isabelle/HOL, which features a native proof kernel and a more mature interactive theorem proving (ITP) ecosystem.
  2. Find interesting prototypical examples involving some reasoning within ethical theories. Best done after porting the ontology to another logical framework.
  3. Review by professional moral philosophers.
  4. Crowdsource Ethical Situations of interest to people and see if multi-paradigmatic formalization can help to navigate the value judgments and dilemmas involved. E.g.,
    • Navigating no-win situations such as the conflict in Gaza. [Courtesy of Dirk Bruere]
    • The ethics of global disparities between the availability of technologies. Will transhuman cyborg technology benefit the rich first as the poor are left in the dust? How are these trade-offs to be balanced and analyzed? [Courtesy of Casey Armstrong]
  5. Examine a real ethical situation that benefits from multi-paradigmatic analysis where the formalization helps to isolate the core value judgments that need to be made to resolve the dilemma. Possible candidates:
    • Beneficial AGI and AGI Safety Questions
    • The use of Autonomous Weapons Systems
    • Medical Triage
  6. Take a simple ethics dataset, such as the ETHICS benchmark, and have an AI system provide justifications for its answers (checking that these align reasonably well with the human answers). Autoformalize these examples and justifications, making sure the arguments are sound proofs. Next, use ILP to search for a small set of core principles from which most of the examples in the dataset can be justified. This would yield a tentative commonsense ethical theory derived from surveys.