From: Wirt Atmar <[log in to unmask]>
Date: Mon, 2 May 2005 16:37:04 EDT

Denys writes:

> I notice that I got no response about the onset of awareness.  At what point
>  does the warm slime become aware of its surroundings?  When does it become
>  self-aware?  When does it start remembering things?  What is the exact
>  mechanism that enables descendants of pond scum to build a 777-200?

I see nothing particularly difficult about the evolution of intelligence or
self-awareness. Indeed, that was my opinion 30 years ago, and it remains my
opinion now. In 1976, I published my PhD dissertation:

     "Speculation on the evolution of intelligence and its possible
realization in machine form."

Almost immediately thereafter, in 1977, Carl Sagan published a book with a
very similar title:

     "The Dragons of Eden: Speculations on the Evolution of Human
Intelligence"

Sagan won the Pulitzer Prize for his book; mine was barely noticed, but we came
to the same conclusion: intelligence is inevitable, if for no other reason than
that it's built into the evolutionary process itself.

The most salient attribute of evolution is that it is a learning algorithm.
Given self-replication, Darwinian evolution is an inescapable thermodynamic
process, a mechanism that organizes itself to accumulate increasingly
appropriate behaviors within an evolving lineage of trials. Given self-reproduction,
the processes and consequences of Darwinian evolution cannot be avoided in a
finite, positively entropic universe. The biochemistries variously adopted by
life on other worlds may ultimately prove to be little more than responses to
local exigencies, but the Darwinian physics that underlies that life will
almost certainly be universal.
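
To make that concrete, here is a minimal sketch, in Python, of evolution
treated purely as a learning algorithm: replicate with variation, evaluate
against the environment, and retain what works. Everything in it (the function
names, the toy "hidden target" environment, the parameter values) is my own
illustrative assumption, not anyone's published code:

    import random

    def evolve(fitness, init_genome, mutate, generations=200, parents=10, offspring=40):
        """Minimal (mu + lambda) evolutionary loop: replicate with variation,
        evaluate against the environment, and retain what works."""
        population = [init_genome() for _ in range(parents)]
        for _ in range(generations):
            # Self-replication with variation: each child is a mutated copy of a parent.
            children = [mutate(random.choice(population)) for _ in range(offspring)]
            # Selection: the lineage accumulates increasingly appropriate behavior
            # simply by ranking the trials and keeping the best of them.
            population = sorted(population + children, key=fitness, reverse=True)[:parents]
        return max(population, key=fitness)

    # Toy environment (purely illustrative): fitness is how closely a 10-number
    # genome matches a hidden target vector it is never shown directly.
    target = [random.uniform(-1, 1) for _ in range(10)]
    best = evolve(
        fitness=lambda g: -sum((a - b) ** 2 for a, b in zip(g, target)),
        init_genome=lambda: [random.uniform(-1, 1) for _ in range(10)],
        mutate=lambda g: [x + random.gauss(0, 0.1) for x in g],
    )

The loop never sees the target; it only sees which trials did better than
others, and that is enough for the lineage to learn it.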

People often use the word "sentience" when they speak of consciousness, but
they misspeak when they do. Sentience means to feel, to be self-aware, but
that's all it means. Every earthworm is self-aware. It has to be. It could not
exist as a mobile animal otherwise. If you do not know where your body is in
space, and what you are contacting, you can neither move, avoid dangerous
situations, nor eat.

The word most people are looking for when they speak of themselves is
"sapience," but even then, that's only an exaggerated condition of sentience.

A number of biologists have come to the same conclusion in recent years, among
them Christian de Duve, in his 1995 book:

     "Vital Dust: Life as a Cosmic Imperative"

and Simon Conway Morris, in his 2003 book:

     "Life's Solution: Inevitable Humans in a Lonely Universe"

Because Darwinian evolution is at its core a learning algorithm, it can be
put into machine form. I uploaded a very recent paper (December 2004) by my
third doctoral student, David Fogel, on our webserver for you. It's at:

     http://aics-research.com/ieee-chess-fogel.pdf

Although I had absolutely nothing to do with this work, I'll take some credit
for it anyway. David and I talked every day for 10 years, and I consider this
paper to be a particularly important result of that conversation.
In this paper, an evolutionary program, Blondie25, learns to play chess at the
grandmaster level, but no rules of chess were programmed into the structure.
It learned the rules of chess on its own, through experience.

Even more important philosophically, the contending variants were selectively
retained solely on the basis of their outcomes: win, lose or draw. Optimization
methods commonly wrestle with the "credit assignment problem," in which every
intermediate step is assigned a credit or blame value for the final outcome,
and the best and worst intermediate steps are then retained or modified
accordingly. But evolution doesn't work this way, nor can any system once its
organizational structure becomes exceedingly complex. This program, Blondie25,
is unique in that it attempts no intermediate credit assignments.
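
To illustrate the distinction, here is a small, self-contained sketch, again in
Python, that evolves players for a trivial take-away game (21 stones, take 1 to
3 per turn, whoever takes the last stone wins) on nothing but game outcomes.
The game, the weight encoding, and all of the names are my own illustrative
assumptions and have nothing to do with Fogel's actual code; the point is only
that selection sees won or lost, never the merit of any individual move:

    import random

    STONES = 21  # take 1, 2 or 3 stones per turn; whoever takes the last stone wins

    def choose(weights, stones):
        """Score each legal move by the learned value of the residue (mod 4) it
        leaves for the opponent, and pick the highest-scoring one."""
        moves = [m for m in (1, 2, 3) if m <= stones]
        return max(moves, key=lambda m: weights[(stones - m) % 4])

    def play(w_a, w_b):
        """Play one game; return +1 if player A wins, -1 if player B wins."""
        stones, players, turn = STONES, [w_a, w_b], 0
        while stones > 0:
            stones -= choose(players[turn % 2], stones)
            turn += 1
        return +1 if turn % 2 == 1 else -1  # the player who moved last took the final stone

    def fitness(candidate, opponents):
        # Selection is driven *only* by the summed outcomes of a small
        # tournament. No move inside any game is ever assigned credit or blame.
        return sum(play(candidate, opp) - play(opp, candidate) for opp in opponents)

    population = [[random.gauss(0, 1) for _ in range(4)] for _ in range(12)]
    for generation in range(100):
        children = [[w + random.gauss(0, 0.3) for w in random.choice(population)]
                    for _ in range(24)]
        pool = population + children
        population = sorted(pool, key=lambda c: fitness(c, population), reverse=True)[:12]

The fitness function above never looks inside a game; it only counts the wins
and losses of whole games, which is precisely the sense in which Blondie25's
selection operates on outcomes alone.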

Blondie25 demonstrates not only that these intermediate credit assignments are
unnecessary, but also that evolution operates with amazing efficiency without
them. Perhaps even more interesting is that if the rules of chess were suddenly
changed, Blondie25 would do miserably for a while. Evolution operates as a
predictor of near-term future events based on behavior learned from the recent
past. Once the rules have changed, the past is no longer an accurate predictor
of the future, and the memory stored of those events is no longer valid. But
eventually Blondie25 would learn the new rules, whatever they might be, and
would again approach grandmaster status.

Chess has long been used as a measure of human intelligence, but Blondie25 can
easily outcompete 99.99% of all humans on the planet in this one task. Yet it's
just as clear that Blondie25 is a mechanical process, one in which no mystery
or mysticism need be injected into its analysis to understand the basis of its
intelligence. We can build sentient machines now in the same way, and we're not
far from building sapient machines as well, machines that will quite likely
eventually possess a "consciousness" equal to our own.

At that point, will the machine ask where the seat of its soul resides? In
which transistor, in which disc drive? And what will happen to its soul when it
is unplugged?

Wirt Atmar
