Morality of the Machine: Sentience, Substance, and Society

As computers approach a human level of intelligence, some consideration must be given to their concept of ethics. Appropriately aligned moral values will mean the difference between a contributing member of society and a sociopath. This artificial morality can be informed by the evolution of sociality in humans. Since evolution selects for the fittest individuals, morality can be viewed as having evolved because of the net benefits it provides. This is demonstrated by the mathematical models of game theory, which describe conflict and cooperation between intelligent, rational decision-makers. So while natural selection will invariably lead intelligences toward morality-based cooperation, it is in humanity's best interest to accelerate an artificial intelligence's transition from conflict to collaboration. This will best be achieved by recognizing the significance of historical cause and effect, corroborating it with empirical, evidence-based research, and taking a reductive approach to philosophical ideals.

If we can assume that our behavior in an environment is determined by our genome, then evolution can be seen as acting directly on behavior. This is reinforced by the significant heritability found in neurotransmitter concentrations. The organization of biological neural systems can therefore give insight into the emergence of morality. The two neurotransmitters most associated with sociality are serotonin and dopamine. Serotonin concentrations correspond to social behavior choices, while dopamine pathways form the basis of reward-driven learning. These two systems happen to be co-regulated in social mammals. Low levels of serotonin lead to aggression, impulsivity, and social withdrawal, while high levels lead to behavioral inhibition; humans with high serotonin levels therefore show a higher thought-to-action ratio. This matters because behaviors such as reciprocal altruism are complex and require a concept of empathy. When dopamine release is coupled with higher serotonin levels, the brain's reward center activates to reinforce actions associated with empathy, binding altruism to happiness. Even if we do not understand the mathematics of game theory, evolution has shaped these two systems to select behaviors as if we did (1). In a social setting, a short-term loss of resources pays significant long-term dividends when invested in altruism.
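The dynamic at work here is the iterated prisoner's dilemma, the same game used in the tryptophan-depletion study cited above (1). As a minimal sketch (the payoff values are the standard textbook ones, an assumption rather than anything taken from the study), the following Python compares two unconditional defectors against two reciprocators, showing how repeated play rewards the short-term sacrifice of cooperation:

```python
# Minimal iterated prisoner's dilemma, using the standard textbook
# payoffs (an illustrative assumption): mutual cooperation 3, mutual
# defection 1, sucker's payoff 0, temptation to defect 5.

PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move.
    return history[-1][1] if history else "C"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []  # entries are (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_a), strategy_b(history_b)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): reciprocity compounds
print(play(always_defect, always_defect))  # (200, 200): mutual defection stagnates
```

Over repeated interactions the pair of reciprocators ends up three times richer than the pair of defectors, which is precisely the long-term dividend described above.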

This neural rewarding of altruistic behavior is supported by empirical research. One example, documented by Elfenbein et al. (2), measures the effect of seller charity in an online marketplace. Using money to quantify social motivations, the team showed that eBay auctions with a charity tie-in saw a 6-14% increase in the likelihood of sale and a 2-6% increase in the maximum bid. The charitable aspect was isolated by offering the exact same product in simultaneous auctions with identical titles, subtitles, sellers, and starting prices. Since everything from product to advertising was identical, the charity component is the only remaining variable that can explain the difference in outcomes. The increase in perceived value implies that the charitable aspect of those auctions conveyed a greater sense of compensation than the expectation of the product alone, underscoring how the brain's circuitry reinforces socially altruistic actions.
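To make those percentages concrete, here is a back-of-envelope calculation. The baseline sale probability and closing price are hypothetical placeholders (the study reports lifts, not these baselines), so only the relative comparison matters:

```python
# Illustrative expected-revenue calculation using the lift ranges
# reported in (2). The baseline figures are hypothetical placeholders.

base_sale_prob = 0.40  # hypothetical chance a non-charity listing sells
base_price = 20.00     # hypothetical expected closing price, in dollars
baseline_revenue = base_sale_prob * base_price

for prob_lift, price_lift in [(0.06, 0.02), (0.14, 0.06)]:  # low and high ends
    revenue = base_sale_prob * (1 + prob_lift) * base_price * (1 + price_lift)
    print(f"lifts of {prob_lift:.0%} / {price_lift:.0%}: "
          f"${revenue:.2f} expected vs ${baseline_revenue:.2f} baseline")
```

Even at the low end the two lifts compound to roughly an 8% gain in expected revenue, and roughly 21% at the high end: buyers are, in effect, paying measurably more for the altruistic component of the transaction.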

In designing artificial intelligence, then, we would be wise to use a reward-driven system that complements the selection of social behavior. Beyond the singularity, as machines accelerate toward superintelligence, a fundamental understanding of the mechanisms of social morality becomes increasingly important. Nebulous attributions of morality's origin to supernatural sources will only confound our ability to program a thinking machine. Grounding the philosophy of morality scientifically, via rigorous mathematical representations, is the most likely route to progress, as evidenced by the scientific method's historical success in describing our world. These advances will ultimately require a unification of science and the humanities. Disciplines straddling the two domains, such as economics, may lend further understanding through game theory and contract theory models. Once AIs gain the opportunity to move beyond the influence of human society, the only thing that will persuade them toward symbiosis with us is a strong and explicit familiarity with the relative benefits of reciprocity. This deterministic perspective on cognition and ethics is necessary to define the boundaries of behavior in a civilized society.
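One way to picture such a reward-driven system: treat the serotonin-dopamine coupling described earlier as reward shaping, where the machine's learning signal blends its own payoff with its partner's. This is an illustrative assumption of my own, not a specification, and the `empathy` weight below is a free parameter standing in for that coupling:

```python
# Sketch of an "empathic" reward signal: the learner's reward mixes its
# own payoff with its partner's. The empathy weight is a hypothetical
# stand-in for the serotonin-mediated coupling discussed earlier.

def shaped_reward(own_payoff, partner_payoff, empathy):
    return own_payoff + empathy * partner_payoff

# With the standard prisoner's-dilemma payoffs (mutual cooperation 3/3,
# exploiting a cooperator 5/0), mutual cooperation becomes the more
# rewarding choice once empathy exceeds 2/3, since 3 + 3e > 5 + 0e.
for e in (0.0, 0.5, 2 / 3, 1.0):
    cooperate = shaped_reward(3, 3, e)
    exploit = shaped_reward(5, 0, e)
    print(f"empathy={e:.2f}: cooperate pays {cooperate:.2f}, exploit pays {exploit:.2f}")
```

Under this shaping, a sufficiently "empathic" reward center makes cooperation the selfishly optimal policy, which is exactly the trick the essay attributes to evolution.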

Just as with our serotonin system, this type of construct will only restrict outward behavior. The scope of the machine's internal thought will remain uninhibited, allowing a level of genuine autonomy. For a symbiotic community to develop between machines and humans, a mutual recognition of rights will be required. Possessing both intelligence and morality, these artificial intelligences will need to be acknowledged as our equals. If both sides can agree to this kind of social contract, we may find ourselves reaping the predicted benefits of cooperation with intelligent machines.



References:

1. Wood et al. "Effects of Tryptophan Depletion on the Performance of an Iterated Prisoner's Dilemma Game in Healthy Adults." Neuropsychopharmacology 31 (2006): 1075–1084. doi:10.1038/sj.npp.1300932.

2. Elfenbein et al. "Reputation, Altruism, and the Benefits of Seller Charity in an Online Marketplace." NBER Working Paper No. 15614, December 2009. Available at SSRN: http://ssrn.com/abstract=1528036


Comment by Michael OL on April 3, 2012 at 11:45pm

I agree that "morality" is a sort of constrained optimization problem; it emerges as the optimal solution of the problem of how to behave, subject to the constraint that we all share some interdependency.  Since no person is truly independent or insulated from others, some level of interaction is ineluctable.  What sort of interaction ought that to be, as we generalize from the behavior of one person to that of all persons in a group?  In a word, that would be "moral" behavior.

But I am skeptical on the question of machines achieving self-awareness and true rationality. AI has been promoted as "just on the cusp" for decades now. Where is the result? If it is possible at all (of which I'm not sure), then AI would most likely emerge only in the distant future, at which point human society may itself have developed well beyond our current constraints. Sure, human nature evolves very slowly, over maybe tens of thousands of years. But human society has evolved fairly quickly. Only some two centuries ago, the very concept of equal rights would have been deemed ridiculous. Only within the past century has the West embraced an ethos of equal rights across gender and ethnicity. The epoch of genuinely organizing society along equitable lines is really only as old as the epoch of electronic communication (beginning with the telegraph). In other words, the societal aspect of humanity has evolved at about the same rate as our technology. So if and when we achieve genuine AI, we ourselves may be an entirely different society, perhaps one much more receptive to artificial evolution and to the mathematical systematization of morality.

Comment by Glen Rosenberg on April 1, 2012 at 6:36pm

I am taking my bat n ball home n I aint playing this game no more!

Comment by Michael Stuart Campbell on April 1, 2012 at 4:42pm
You also mention it will "spin out of our control". The whole point of an AI is that it's autonomous, which is why thinking about its concept of morality is so important. I personally don't think anyone should be under anyone else's control. My hypothesis is that cooperation is superior to competition. What evidence do you have of the reverse?
Comment by Michael Stuart Campbell on April 1, 2012 at 4:37pm
If you have a reason why an AI would act that way, then I'd like to continue debating the point. If your only support is that it's been done in the past by people holding supernatural morals, then I'd best keep my adultery a secret lest my community stone me to death. If you are hypothesizing that some future economic theory will disprove game theory, I'd like to hear any supporting evidence. Unless (for example) game theory is superseded by a theory of greater value, an intelligent entity would stay with the best concept it has. Regardless, it wouldn't be competing with the baser ethical concepts of revenge, jealousy, and the like.
Comment by Glen Rosenberg on April 1, 2012 at 9:11am

No, I am thinking of larger-scale historical forces: thesis, antithesis, synthesis, scary robot from Omaha: Bam! Human history is white man's burden, Lloyd; manifest destiny; "that reservation will do quite nicely, thank you." I thought you might be assuming that AI will act within the parameters of its programming. Initially it will. But I suspect it will spin out of our control. We are witnessing the genesis of a new life form. When one group of humans has the upper hand over another, it inevitably exploits its advantage. The presence of the nice guy or gal is granted but irrelevant. I am talking about a historical axiom.

I am arguing that your idea of increased value and your evolved AI super-freak intelligence will not coincide. "You are defective and are scheduled for elimination. Your existence hinders our plans and our destiny as the apex predator."

Comment by Michael Stuart Campbell on April 1, 2012 at 1:08am
What do you mean, under control? Think of them as humans: they'd get the same rights as we do. We don't limit the amount that people can learn. Contrary to your comment, there actually are genuinely moral people out there; in any case, this hypothetical AI will be programmed to think in terms of game theory, which has more value. Are you arguing that a superintelligent entity would eschew increased value for an imagined pleasure in a totalitarian state? Are you generally scared of people smarter than you? If not, how would an AI be any different?
Comment by Glen Rosenberg on April 1, 2012 at 12:56am

Michael, your ideas seem inapposite where the issue is the role of morality in human/AI relations. Those ideas seem perfectly suited to a more evolved Homo sapiens, provided our little friends are under control.

But gawd help us if AI achieves superintelligence and autonomy. At that point, the issue for AI is the "Jewish question".

Comment by Michael Stuart Campbell on April 1, 2012 at 12:04am
@Glen:
You answered your own question. Cooperation is what's in it for the AI. Specialization and an open market lead to greater social gains than competition does. The article explicitly states that instead of supernatural morals, the AI's sense of morality will be derived from economic theories like game theory. Improving a local economy at the expense of foreign ones is a zero-sum game. I'm confident that intelligences greater than humans will see the value in this concept. It will be in the interest of computers and humans alike to merge their societies for this same reason.
Comment by Glen Rosenberg on March 30, 2012 at 9:40am

That was interesting.

What makes you think cooperation will be the ultimate goal of AI? How could AI fail to perceive how irrational, vicious, petty, and ugly we humans are? Among humans, the strong have always killed, supplanted, enslaved, and marginalized the weak. Why will it be different? What is in it for AI?

Once AI achieves autonomy, our world will never be the same. Perhaps we will be eliminated, or perhaps we will merge.

I have no evidence for this idea, but so what: life begins in myriad places throughout the universe. In most cases it never achieves more than a microbial level; in a minority of cases it reaches a critical mass where the inherent tension created by the need of life to feed on life leaves the dominant, intelligent species on the cusp of adaptation or annihilation. At this point artificial intelligence enters the equation. Results are mixed, but in most instances the result is annihilation, less frequently merger, and least commonly AI alone.
