Monday, January 14, 2008

Transhumanism and the so-called "future good" of humanity.

I hear lots of college-age people (generally early in their college careers) claim that moral values are relative, and that people in different cultures have the varying moral values they do because they were raised in different ways. This can be treated as a hypothesis: if it were correct, we would not expect to find a core of moral values shared across cultures. However, and contrary to this naive opinion, we do in fact find a core of moral values shared across cultures. Thus it is not the case that people in different cultures have significantly varying moral values merely because they were raised in different ways.

The anthropologist Donald E. Brown has identified many emotions and moral concepts that are universally noted among humans:

"a distinction between right and wrong; empathy; fairness; admiration of generosity; rights and obligations; proscription of murder, rape and other forms of violence; redress of wrongs; sanctions for wrongs against the community; shame; and taboos."[1]

Other anthropologists have cataloged similar universals. I think such studies are indeed enough to show that human morality isn't relative across cultures, but I don't think they are enough to show that the values themselves are not relative. If human values are formed by the reproduction-and-survival calculus of evolution, that is, if people have the values they do because such values contributed to their success as biological beings, then it's possible that in the future a different set of core, shared values could arise.

It might be that, as humans move out into space and radically modify their form through bioengineering and technology, those next-generation humans will likewise have next-generation shared values. I'm not sure what those values would be, and perhaps my brain, shaped by its particular biological past, actually cannot imagine them. If it is true that current humans cannot imagine the value systems of future humans or human-originated creatures, this would be bad news for transhumanists (sometimes called "post-humanists"). Transhumanists seem to say that the rational modification and enhancement of human capabilities is a good project. But good in what sense? It seems odd to say that what we perceive as good now would indeed be good later. Yet this is what transhumanists seem to be committed to when they say such modification is a good project.

Consider an investment analogy. Putting money into some stock now is good, since the stock has been going up recently (where 'good' is defined as 'profitable'). But that does not mean putting money into the stock now will be good, i.e. profitable, later: the circumstances of a company, and of the company's environment, can change radically. Typically, one would counter, "When the company is no longer good, stop putting money into it." But here is where the analogy points out the problem with transhumanism. In the stock case there is but one investor making the decision; suppose instead there were several in a historical line, a string of investors stretching into the future, each with a slightly different investment strategy. Mr. A's strategy is a bit like Mr. B's, B's a bit like C's, C's like D's, and so on to the far-future Mr. R. Each strategy bears only a family resemblance to its precursors, and given enough tiny changes over time, Mr. R could actually hold a strategy that contradicts Mr. A's, the initial investor's!
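
The structure of that argument is just a transitivity failure under iterated small changes, and a toy simulation can make it vivid. The sketch below (in Python) is purely illustrative: the "value vector," the step size, and the number of generations are invented assumptions, not anything measured or claimed in the essay.

    # Toy simulation of "value drift" along a chain of agents A -> B -> ... -> R.
    # All numbers here are invented for illustration.
    import random

    random.seed(7)  # make the toy run repeatable

    def drift(values, generations=2000, step=0.05):
        """Nudge each value slightly per generation, one small change at a time."""
        current = list(values)
        for _ in range(generations):
            current = [v + random.uniform(-step, step) for v in current]
        return current

    def similarity(a, b):
        """Dot product as a crude measure of agreement between value profiles:
        positive = broadly aligned, negative = broadly contradictory."""
        return sum(x * y for x, y in zip(a, b))

    mr_a = [1.0, 0.8, 0.6, 0.9]  # Mr. A's hypothetical values
    mr_r = drift(mr_a)           # Mr. R, many tiny steps later

    print(f"similarity(A, R) = {similarity(mr_a, mr_r):+.2f}")
    # Each single step preserves a close "family resemblance" between
    # neighbors, yet nothing guarantees the endpoint agrees with the
    # start: the similarity can come out negative, i.e. contradictory.

The point is only the transitivity failure: "each step approximately preserves the values" does not entail "the endpoint preserves them at all."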

To apply the analogy, then: the values of the present core of humanity (situation A) could be radically different from, even completely contradictory to, the values of future humanity (situation R). Therefore, it seems quite naive to agree with transhumanists that the rational modification and enhancement of human capabilities is a 'good' project for humanity's future. There might be other grounds for advocating the modification and enhancement of human capabilities, but some future 'good' to the next human (or human-like) situation likely isn't one of them.


REFERENCES


[1] Steven Pinker, "The Moral Instinct," New York Times, Jan. 13, 2008 (accessed Jan. 14, 2008)


O.


4 Comments:

At 1:51 PM, Blogger David Darmon said...

Interesting point. I never really thought to consider transhumanism from the perspective OF the transhumanists. That is, to consider if they would prefer to have stayed humans or not...

Though, I'm still going to have to side with the transhumanists. It just seems silly to me to consider that we have somehow reached the end of evolution. We've been 'evolving' culturally for the past 60,000 years. It only makes sense to take the next step.

Thanks for pointing me to this post! (I'm from nomrad.wordpress.com, but I had a blog here back in the day). Good luck with your future thinking!

 
At 11:53 PM, Blogger Robot Suit said...

Thanks for commenting on my blog. I responded to this post on my own blog; please check it out:

http://theeraofthecreativecollective.blogspot.com/2008/01/response-to-brint-montgomerys.html

 
At 12:52 AM, Anonymous Anonymous said...

I'd have to agree with the article in general; I was speaking mostly to the Otherkin side of things. I also must confess to a fascination with dualism, and I was thinking about the contrast between cybernetic/genetic enhancement and the magical evolution

 
At 8:45 PM, Blogger Matt Osborne said...

While the human race shares a certain set of core values, those values can differ radically in execution -- for instance, "honor killings" in one part of the world are considered evil in another part. That's not to say honor killings aren't universally wrong, but that they aren't universally *SEEN* as wrong.

Furthermore, our human values have changed over time. Incest has been a sanctioned practice in many societies, but is universally abhorred today; slavery has had a similar historical arc.

Our future may also be posthuman whether we like it or not. Technological advance is an inherently iterative process, meaning that it can change societies slowly enough that they don't realize the vast changes coming over them. Just look at the cell phone and its effects, both wonderful and pernicious, on human behavior: all within a single generation, but come at us piecemeal with each new "season" of phone models and pricing plans, so as to *SEEM* slower than it is.

IMHO, the reason science fiction matters is that it asks, and offers possible answers to, long-term questions like the "posthuman future." As I say in the blog post you commented on (which brought me here; you're bookmarked now), this is what science fiction has been doing since its earliest days.

Thanks for finding me!

 
