When J. Robert Oppenheimer saw the first atomic bomb and the destructive capabilities it unleashed, he was quoted as saying, "Now I am become Death, the destroyer of worlds."
It got me thinking about the future I am helping to create. For the past two years, the bulk of my spare time has been spent studying neuroscience, the human brain, and artificial intelligence, or simply AI. I began to think about what it is I want. Most scientists and engineers are working on robotics or building computers that can play chess. These are not my goals. My goal is to create an artificial brain. This is different from simply creating a computer that can think. I'm trying to make one that is capable of human emotions. I'm sure one could go on and on about the moral and ethical questions this could create; one could even tell me it's impossible. I don't believe anything is impossible. Everything comes down to how much time and effort one is willing to apply to a project.
That being said, I can't help but examine the ramifications of my possible success. If a computer could think and could feel, what would make it inhuman? What would make it any different from us? Most philosophers say the only thing that separates humans from animals is our ability to reason. I don't think humans are as complex and unique as we'd like to believe. Our minds are just signals and electrical pulses.