This note is currently in phase 1 -- braindump. Phases 2 and 3 will be better organization, then synthesis with other cards. These notes were taken 1/22/2020.

Ethics Notes

Ethics has 3 branches -- applied, normative, and meta. Meta = "do morals exist?", normative = "what are the standards for rightness/wrongness?", applied = "with respect to real situations, what's right/wrong?"

Meta-Ethics: Moral epistemology. Does morality exist independently of humans? Moral relativism (individual vs. cultural). Why be moral? Psychological egoism (my interests motivate all my actions). Psychological hedonism (I act to make myself happy).

Normative Ethics: How SHOULD people live? How can we rate an action?

• Virtue theories: hold values and live by them (wisdom, courage, temperance, justice).
• Duty theories: some actions are obligatory regardless of consequences. Rights-based (John Locke). Kant's categorical imperative: act only on maxims you could will to be universal law; (a) treat people as ends, never merely as means to an end.
• Consequentialist theories: ethical egoism (an action's value = its consequences for the agent performing it).

Deontology: Duty

My POV -- Meta: No, morals don't exist independently, but let's assume they do, since we want to live in a world like that.

Normative: pending

Normative Ethics

How SHOULD people live? How can we rate/recommend an action?

1. People have differing goals/abilities/backgrounds.
2. A person's ability to affect others should be proportional to their logical reasoning and their appeal to others' moral emotions, nothing else (i.e. no charisma).
3. Use a mix of intent and effect (assume an oracle for effects).

Do we have a "universal goal"? People have different goals and different abilities to reach them, so let's say the universal goal is to optimize people's ability to do so (first by minimizing harm/blockers, then by enabling people to do more).

Q1: How do we minimize blockers? Q2: How do we help people do more? Q3: How do we deal with changing goals? (A greedy approach is more realistic than an oracle.)

  1. Start with good intent. You're responsible for predicting the result; an oracle isn't feasible. (Socrates) Good virtues -- wisdom, courage, justice, etc. -- are all good. Kant considered candidate intrinsic goods (knowledge, perseverance, pleasure), though he argued only a good will is good without qualification. I agree with the first 2. Golden rule.
  2. The same applies. Rate an action by its change in progress toward the goal (as a % of a lifespan-scale goal), and help the "worst case": a society is good because of how mild its worst outcomes are, not because of its peaks.
  3. Prediction.
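The "help the worst case" idea in point 2 is essentially a maximin rule. A toy sketch, assuming (hypothetically) that each person's progress toward their goal can be scored as a fraction of a lifespan-scale goal; all names and numbers here are illustrative, not part of the original note:

```python
# Toy maximin model: rate a society by its worst-off member, not its peak.
# Scores are hypothetical "progress toward goal" fractions in [0, 1].

def rate_society(progress_scores):
    """A society is as good as its worst case."""
    return min(progress_scores)

society_a = [0.9, 0.8, 0.1]   # high peak, bad worst case
society_b = [0.5, 0.5, 0.4]   # modest peak, decent worst case

# Under maximin, society_b rates higher despite its lower peak.
assert rate_society(society_b) > rate_society(society_a)
```

The design choice here is that improving the peak does nothing for the rating; only raising the floor counts, which matches "a society is good because of the least bad things that happen."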

What is the above in technical terms? 1. Live virtuously (Socrates -- virtue ethics), which mixes consequentialist and deontological recommendations. 2. The golden rule allows psychological hedonism (making myself happy) while taking into account what the other person would want. There's a struggle here: I'm not someone else, so I'd still favor myself. What if I can hurt someone just a little to gain a lot? How do I compare the two? tl;dr: don't do it.

How do these become a rule-based system we can evaluate against? 1. Learn a lot, so you can predict the results of your actions.

How do we deal with "I now can't do anything that would ever hurt anyone"? • Put harm on a log scale. • Good/bad scales are just needs vs. wants.
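One way to read "put it on a log scale": harms span orders of magnitude (blocking a want vs. blocking a need), so rate them logarithmically and minor slights round toward zero instead of paralyzing you. A minimal sketch; the severity units and ranges are hypothetical assumptions, not from the note:

```python
import math

# Toy model: log-scale harm rating, so minor annoyances don't
# register like genuine harms. Hypothetical severity units:
# wants blocked ~ 1-10, needs blocked ~ 100-10000.

def harm_rating(severity):
    """Each order of magnitude of severity adds one rating point."""
    return math.log10(max(severity, 1))

assert harm_rating(1) == 0.0                    # trivial slight: ~0
assert abs(harm_rating(1000) - 3.0) < 1e-9      # blocking a need: ~3
```

The clamp at 1 encodes the "needs vs. wants" cutoff: anything below the smallest want rates as no harm at all.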

How do we rate how good/bad something is on a metric? • By how much it detracts from the person's ability to reach their goals in an ideal world.

So how do we solve "kill 1 person to save many"? 1. Killing gets a maximal rating, so the many would have to gain significantly increased benefits. 2. How would I feel if I were the one killed? Assuming the objective is to "make the world a better place", this is better (see lucky volcano).
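Point 1 can be sketched as a threshold rule: a maximal harm rating plus a significance margin, so merely exceeding the harm isn't enough. The cap and margin values below are hypothetical placeholders, chosen only to make the rule concrete:

```python
# Toy threshold rule for "kill 1 to save many": killing carries a
# maximal rating, so the benefit must SIGNIFICANTLY exceed it,
# not merely exceed it. All constants are hypothetical.

KILL_RATING = 10.0          # maximal harm rating (assumed cap)
SIGNIFICANCE_MARGIN = 2.0   # multiplier encoding "significantly increased"

def justified(lives_saved, benefit_per_life=1.0):
    benefit = lives_saved * benefit_per_life
    return benefit >= KILL_RATING * SIGNIFICANCE_MARGIN

assert not justified(5)     # 5 < 20: not enough benefit
assert justified(25)        # 25 >= 20: clears the margin
```

The margin is what distinguishes this from plain utilitarian counting: at exactly break-even the rule still says no.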


If this note resonated with you, positively or negatively, send me a direct message on Twitter or an email and we can talk. Also ping me if you'd like updates on this note.