In a blog post about intellectual property (which is its own can of worms), the author stated:


... moral questions come up when there is a choice over who gets harmed. If you're in a situation where no one gets harmed, then there should be no moral question.


This seems related to the mirror-image maxims: "That which is not prohibited is allowed" and "Only what is permitted is allowed."


Over at AlterNet, Greta Christina discusses the difference between liberal and conservative (both in the old senses of the words): Get a Brain, Morons: Why Being Liberal Really Is Better Than Being ....


So, is harm (or not) to others a good basis for founding morality? How far does the concept of "others" extend -- does it include "things"? Is morality a means of conservation or is it a means of liberation? 


Tags: conservative, ethics, liberal, morality


Replies to This Discussion

It reminds me of the 'negative vs. positive liberty' debate.

'Don't do harm to others' is the obvious basis for a negative view of morality, but it's hardly sufficient. Think 'duty to rescue': you're not causing any (direct, at least) harm to someone you just choose to ignore and thus fail to protect from imminent danger.
Why can't it be both? Let people do what they want (liberate them) up to the point where they harm others (directly or by harming their work or their environment--conserving that which matters to people).
ATHEIST ETHICS IN 500 WORDS. John B. Hodges, Dec. 21, 2007.

How can you have any ethics if you don't believe in God?

The question must BE questioned. How can you have any ethics if you DO believe in a god?

Religious folk misunderstand morality at its roots. Religion teaches a child's view of ethics, that "being good" means "obeying your parent". Just as religious faith is believing what you are told, so religious morality is doing what you are told. Religious morality consists of obeying the alleged will of God, an invisible "Cosmic Parent", as reported by your chosen authority. But obedience is not morality, and morality is not obedience. We can all think of famous people who did good things while rebelling against authority, and others who did evil things while obeying authority.

Religious folk may be Good Samaritans or suicide bombers; it depends entirely on what their chosen authority orders them to do. If a believer, or a community of same, wishes to make war or keep slaves or oppress women, all they have to do is persuade themselves that their god approves. This seems not to be hard, and no god has ever popped up to tell believers that they were wrong. They do not have a code of morality except by the convenience of the priesthood. What they have is a code of obedience, which is not the same thing.

Atheism means looking at ethical questions as an adult among other adults. Civic morality is a means of maintaining peace and cooperation among equals, so that all may pursue happiness within the limits that ethics defines. This civic morality is objective. If you want to maintain peaceful relations, don't kill, steal, lie, or break agreements. As Shakespeare wrote: "There needs no ghost, my lord, come from the grave, to tell us this."

Because we are biological beings evolved by natural selection, most of us value the health of our families, where "health" is the ABILITY to survive, and "family" is "all who share your genes, to the extent that they share your genes." This is also called "inclusive fitness" by biologists. Essentially all living beings are going to seek this, because their desires are shaped by natural selection, and inclusive fitness is what natural selection selects for.

Because humans are social animals, who survive by cooperating in groups, we have a "natural" standard of ethics: The Good is that which leads to health, The Right is that which leads to peace. A "good person" is a desirable neighbor, from the point of view of people who seek to live in peace and raise families. Most people understand this intuitively. Understanding the logic of it is better. "If you want peace, work for justice."

There is a long history of philosophical thinking about ethics. Morality is not based on authority, but on reason and compassion. If I had to recommend just one book on ethics, it would be GOOD AND EVIL: A NEW DIRECTION by Richard Taylor.

I have a longer essay at http://civic.bev.net/atheistsnrv/articles/definition.html
I haven't read the longer version, John, but this makes a fair amount of sense. Clearly, morality comes from our evolutionary background, and in particular, I believe, our ability to remember past actions and to predict future actions. It obviously doesn't come from the psycho skydaddy in the Bible. You're correct that simple obedience to authority is a stunted form of morality, if it's any kind of morality at all (which I doubt).

I just posted on another A|N thread where I go on at some length about morality as an outgrowth of game theory. The context there is about veganism and whether we owe moral considerations to other animals. I don't think we do unless they can meaningfully participate in our social contract, like dogs and dolphins appear to. Lions and tigers and bears, not so much, because they lack the inclination; fish and chickens, because they lack the capacity.
Excellent post, Jason.

This "game theory-based morality" of yours reminds me a little program I wrote many years ago (for my own amusement) to point out the benefits of altruist cooperation in society. The simulation had random samples of individuals make random deals with each other. All these deals were beneficial to the sample as a whole, but not necessarily to each of its individuals (and I made sure there was always at least a 'loser' in the deal - for reasons which I think will become obvious later).

E.g., with a sample of 4 random individuals a possible deal could be:

Ind./gain
i09 = +5
i13 = -2
i16 = -4
i23 = +3
(benefit to the society = +2)

Each individual was tagged 'altruist' or 'egoist'. The only difference was that an altruist would always accept the deal (because it's always beneficial to the group), while an egoist would refuse a deal that was detrimental to him. Of course, all individuals within a sample had to accept a deal for it to be validated.

I generated a batch of hundreds of thousands of deals, which I submitted to an ideal altruistic society to use as a reference. Then I gradually replaced some of these altruists with egoists and repeated the test (using the same batch). The egoists did better individually than the altruists, but of course the society performed worse as a whole. As was expected.
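For the curious, here is a minimal Python sketch of the basic setup as I read the description above. The population size, altruist fraction, gain range, and sample size are placeholder guesses, not values from the original program.

import random

POP_SIZE = 100            # number of individuals in the society
ALTRUIST_FRACTION = 0.8   # fraction tagged 'altruist'; the rest are 'egoist'
NUM_DEALS = 100_000       # deals offered to the society
SAMPLE_SIZE = 4           # individuals drawn into each deal

random.seed(42)

# Each individual is just a tag plus a running total of gains.
population = [
    {"id": i,
     "type": "altruist" if random.random() < ALTRUIST_FRACTION else "egoist",
     "wealth": 0}
    for i in range(POP_SIZE)
]

def make_deal(sample):
    """Random per-individual gains that sum to a positive total,
    with at least one loser, as in the description above."""
    while True:
        gains = [random.randint(-5, 5) for _ in sample]
        if sum(gains) > 0 and min(gains) < 0:
            return gains

def accepts(individual, gain):
    """Altruists always accept (the deal helps the group);
    egoists refuse any deal that costs them personally."""
    return individual["type"] == "altruist" or gain >= 0

society_total = 0
for _ in range(NUM_DEALS):
    sample = random.sample(population, SAMPLE_SIZE)
    gains = make_deal(sample)
    # The deal is validated only if every sampled individual accepts it.
    if all(accepts(ind, g) for ind, g in zip(sample, gains)):
        for ind, g in zip(sample, gains):
            ind["wealth"] += g
        society_total += sum(gains)

by_type = {"altruist": [], "egoist": []}
for ind in population:
    by_type[ind["type"]].append(ind["wealth"])

print("society total:", society_total)
for t, wealths in by_type.items():
    if wealths:
        print(t, "mean wealth:", sum(wealths) / len(wealths))

With an all-altruist population every deal is validated, which gives the reference figure; as the egoist fraction grows, more deals fail and the society total drops, matching the result described above.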

Then I changed the behavior of the altruists so they didn't consider egoists' gains as part of the deal: IOW, they would now refuse deals which didn't benefit the sub-group of altruists within the sample. Altruists now did about as well as egoists individually, but the effect on the society was detrimental (even fewer deals made), although it benefited the sub-society of altruists.

Then I introduced 'vindictiveness': altruists now would refuse a deal which was more beneficial to the sub-group of egoists than to the sub-group of altruists within the sample. The society did even worse than before, but individually the altruists were now more or less on par with the egoists (I assume that what they lost in negative gains, they recouped with altruist-only deals).
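A hedged sketch of how those later acceptance rules might look, building on the snippet above (the exact thresholds are guesses; "didn't benefit" is read here as "subgroup total of zero or less"):

# Subgroup-aware / vindictive acceptance rule.
def accepts_v2(individual, own_gain, sample, gains, vindictive=True):
    # Egoists behave as before: refuse any personally costly deal.
    if individual["type"] == "egoist":
        return own_gain >= 0
    # Altruists now only count gains to fellow altruists...
    altruist_total = sum(g for ind, g in zip(sample, gains)
                         if ind["type"] == "altruist")
    egoist_total = sum(g for ind, g in zip(sample, gains)
                       if ind["type"] == "egoist")
    if altruist_total <= 0:
        return False
    # ...and, if vindictive, also refuse deals that favor the egoist subgroup.
    if vindictive and egoist_total > altruist_total:
        return False
    return True

# Example: the four-individual deal from earlier, arbitrarily making i13 and
# i16 the egoists for illustration.
sample = [{"id": "i09", "type": "altruist"}, {"id": "i13", "type": "egoist"},
          {"id": "i16", "type": "egoist"}, {"id": "i23", "type": "altruist"}]
gains = [+5, -2, -4, +3]
print([accepts_v2(ind, g, sample, gains) for ind, g in zip(sample, gains)])
# The altruists accept (altruist total +8, egoist total -6); i13 and i16 refuse.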

Then I tweaked the egoists' behavior so they could compare their individual success to their expected gains "had they been altruists", and I allowed them to switch to altruism (I called this 'redemption'). This part was the least conclusive (too many parameters to tweak), and that's where I abandoned the experiment.

Anyway, my conclusions were: as long as egoists are identified as such, the simulation showed that altruism is as beneficial to you as egoism, provided you're a vindictive (!) individual, and that society as a whole performs better when egoism is less rampant. I.e., it's counterproductive to behave egoistically when dealing with someone who can remember your past bad deeds and retaliate. Which I assume is what the average human is.

Now, a good thing would be to tattoo a yellow star on every egoist's forehead as soon as we spot them :^)
Dang it, Jaume. Some of your egoist bots appear to have gotten loose and taken over the Republibaggertarian Party.

If I recall, Dawkins runs a similar simulation in The Selfish Gene, called Tit for Tat. He concludes that society can tolerate about a ten percent "cheater" rate and still come out ahead overall. But for that to work, society needs to scorn and punish cheaters when they are caught at it.

It's worth pointing out that in practice, morality is most often a win-win proposition. Most human interactions involve all parties gaining something, if only slightly. Certainly a free market economy is predicated on this idea. Altruism, in which some parties appear to lose something in the short term, actually usually includes the potential for them to get something back in the long term. Sometimes that doesn't pay off, but it's like randomly planting seeds and hoping you, or at least somebody you might care about, gets to enjoy the fruits. It's like playing the state lotto, but with better odds. Even the kind of altruism that gets you killed can pay off for your genes, as Dawkins explains.
Actually, I think Dawkins only discusses the Tit for Tat strategy. The research was done by somebody else on the whole Prisoner's Dilemma problem. Axelrod, I believe.
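For reference, here is a minimal Python sketch of Tit for Tat in the iterated Prisoner's Dilemma. The payoff values are the standard textbook ones, not anything specific to Dawkins' or Axelrod's write-ups.

# Payoffs: (my move, their move) -> my score; C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A simple 'cheater' for comparison."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    history_a = []  # moves made by A, as seen by B
    history_b = []  # moves made by B, as seen by A
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (9, 14)

Tit for Tat never beats its opponent in a single match, but because it rewards cooperators and punishes defectors from the second round on, it racks up high scores across a whole population, which is the point made above about society scorning and punishing cheaters.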
Hofstadter also discussed it at length. He also introduced the concept of superrationality. And now that I think about it, I realize that what I tried to do at the end of my experiment was to create a society of superrational bots. What I wasn't aware of at the time is that superrationality requires absolute confidence in others, or a form of telepathy.
Hmm. Superrationality strikes me as a bit of a grandiose term. But I don't think absolute confidence or telepathy is required. You only need players who are willing to take their lumps occasionally. You need optimistic players. And if the punishment is not too serious, as it usually isn't in real-life scenarios, people will think it's worth the risk.

For example, I occasionally bet $2 on my state lottery. The odds of my winning the multi-million dollar jackpot are vanishingly remote, but the penalty for losing is only two bucks. So, eh, why not?

The consequences of most prisoner's dilemma calculations are not quite so dire as in the game theory example, so people will take a shot at the bigger payoff. This is why millions of people play penny ante poker for fun, but would be terrified to play the same game with a thousand dollar ante. As with most things, size matters, which is generally not considered in discussions of the Prisoner's Dilemma that I've seen.

Society is, in fact, the Prisoner's Dilemma writ large. Fortunately, it's usually easy to go for the win-win scenario because that's what most people would want, especially when the stakes are not too high. That trains people to think in win-win terms, which makes them more likely to go for that scenario even when the stakes go up.
"You need optimistic players."

Actually, I don't think even optimism is required. I'd bet the main reason people choose to behave superrationally is that "it's the decent thing to do."

"As with most things, size matters, which is generally not considered in discussions of the Prisoner's Dilemma that I've seen."

I'm glad you said that, as I've made exactly the same point on another board. The replies I got could be summed up as "you can always extrapolate". Yet I have a feeling it's not as simple as that.
In a situation where nobody may be harmed (if one can exist) there should be no moral questions. Consistent with this, morality should be a means of liberation, not conservation. That is, the precept "What is not prohibited is allowed" should take precedence over "Only what is permitted is allowed". This is the perspective that is consistent with true freedom. It should apply to life in general, with higher life forms having priority over lower ones. A duty to rescue in the absence of unreasonable danger should apply because it involves a situation where somebody may be harmed. Great post, Glenn.
I'll toss in my thoughts:

Morality is rarely black and white: right or wrong. Immature morality is simple obedience. Mature morality is based on compassion and equality. Our actions fall somewhere on a moral spectrum.

Humanist morality is always about making choices based upon our domain of compassion (that group of sentient beings, human or other species, to whom we extend sympathy, empathy and compassionate action) and our domain of equality (that group of sentient beings to whom we extend equal rights). By this scale of morality, I will readily admit that the vegan has a higher level of moral compassion than I do, being that I still eat meat, albeit much less often than I used to, and I always avoid factory-farm meat.

Game theory and the classic moral dilemmas, such as whether you should switch the track to steer the runaway train into the woman with the baby rather than into the large crowd, are a bit absurd: they are purely calculations to determine which outcome would do less damage or the greater good. The moral aspect is that one is distressed that others will come to harm, and that one wants to do something to remedy the situation.

Being an atheist does not make you a moral humanist. Atheism only implies a lack of belief in God. Atheism is not a choice. It is a belief in a state of reality. Humanist morals are a value system, a belief in the way things ought to be, and this is a choice. I have known enough atheists to know that they can be just as immoral and non-humanist as the next guy.
