HP3000-L Archives

March 2001, Week 4

HP3000-L@RAVEN.UTC.EDU

From: Wirt Atmar <[log in to unmask]>
Date: Mon, 26 Mar 2001 23:27:08 EST

Ted asks exactly the right questions when he writes:

> Thus it was written in the epistle of Shahan, Ray,
>  > given enough time, any and all
>  > possible combinations/permutations will/can be resolved?
>
> I'm told there are two problems with that.  First, the number of seconds
> that the universe is estimated to have existed is minuscule compared to
> the possible arrangements of the elements necessary to make, say,
> hemoglobin, and second, the whole idea of having a system spontaneously
> *decrease* in entropy is counter to our understanding of thermodynamics.

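Ted's first point is easy to check with back-of-the-envelope arithmetic. The
figures below (a 141-residue hemoglobin alpha chain drawn from a 20-letter
amino-acid alphabet, and a universe roughly 13.8 billion years old) are my
own assumptions, not anything from his post:

```python
# Back-of-the-envelope check of the combinatorics argument quoted above.
# Assumed figures: the alpha chain of human hemoglobin is 141 amino acids
# long, chosen from 20 possible amino acids, and the universe is roughly
# 13.8 billion years old.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
age_of_universe_s = 13.8e9 * SECONDS_PER_YEAR   # on the order of 4e17 seconds

# Number of distinct 141-residue sequences over a 20-letter alphabet:
sequences = 20 ** 141

print(f"Seconds since the Big Bang:       {age_of_universe_s:.1e}")
print(f"Possible 141-residue sequences:   {sequences:.1e}")
```

Even testing a sequence every second since the Big Bang would sample only a
vanishingly small sliver of that space -- which is exactly why the question
of how evolutionary search avoids exhaustive enumeration matters.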

And Bruce provides exactly the right answer:

> There are analytical techniques for optimizing linear functions, but very
>  few for nonlinear functions, and then only for special cases. So we have
>  to turn to numerical experimentation. One such class of experimental
>  techniques is genetic algorithms, so named because they were inspired by
>  the process of evolution.
>
>  Suppose that you have a large nonlinear optimization problem. The problem
>  has a large vector of inputs, and a vector of outputs, not necessarily
>  large. There is some weighting function for the output vector, such that
>  it's possible to calculate a numerical score that expresses the
>  "goodness" of the output vector as a scalar value.
>
>  Now, generate a collection of semi-random input vectors. They're
>  semi-random because there are presumably some limits on the input
>  parameters, but other than fitting within the limits, there are no
>  restrictions. Evaluate the nonlinear function for each of these input
>  vectors, and select the input vector that results in the "best" output,
>  according to the output score.
>
>  Then generate another set of input vectors by random perturbations of
>  each element of the "best" input vector. Run these through the nonlinear
>  function and select the best. Repeat until good enough, or until no
>  further progress is made.
>
>  The result may not be the absolutely optimal solution, but rather a local
>  maximum. In fact, depending on the weighting function applied to the
>  output vector, there may be several maxima. But this provides a
>  probabilistic method for doing an optimization that's too complex to be
>  done deterministically.

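Bruce's recipe can be sketched in a few lines. Everything here -- the
quadratic test function, the population size, the perturbation step -- is an
illustrative choice of mine, not part of his description:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def objective(x):
    # A nonlinear "goodness" score for the output: higher is better,
    # with the global maximum of 0.0 at x_i = 1 for every element.
    return -sum((xi - 1.0) ** 2 for xi in x)

def evolve(dim=10, pop=50, gens=200, lo=-5.0, hi=5.0, step=0.5):
    # Semi-random initial input vectors: uniform within the stated limits.
    best = max(([random.uniform(lo, hi) for _ in range(dim)]
                for _ in range(pop)), key=objective)
    for _ in range(gens):
        # Perturb every element of the current best vector to form
        # a new set of trials, clamped back inside the limits.
        trials = [[min(hi, max(lo, xi + random.gauss(0.0, step)))
                   for xi in best] for _ in range(pop)]
        challenger = max(trials, key=objective)
        if objective(challenger) > objective(best):
            best = challenger
    return best

best = evolve()
print(objective(best))  # climbs toward the maximum, 0.0
```

The result is exactly the behavior Bruce describes: rapid early progress, then
stagnation near a maximum, with no guarantee that the maximum is global.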
This thread has all of a sudden become quite on-topic. While the original
discussion dealt with whether the complex designs of nature can be evolved
"by random", without a designer, the subject of interest has now become at
least tangentially whether computer programs can be evolved "by random",
quite similarly in the complete absence of a programmer.

Although perhaps this second topic is not as offensive to some, it is
nonetheless exactly the same question, and it has been a deep abiding
interest of mine for nearly 40 years now. Indeed, it was the subject of my
1976 doctoral dissertation, "Speculation on the evolution of intelligence and
its possible realization in machine form." Regardless of its philosophical
and religious implications, it is a subject that is amenable to hard-nosed
engineering approaches, especially if you fundamentally understand the
physics involved -- and we do.

Although it may seem odd on first hearing, Darwinian evolutionary biology,
thermodynamics, and information theory are the same subject, and they have
been philosophically intertwined from their very beginnings. The diagram I
draw below is one I often use in my lectures on the subject:

                                    S. Carnot 1824
                                    R.J.E. Clausius 1850-58
                                    Lord Kelvin 1848-52
       A.R. Wallace                 C. Maxwell 1859-60
       Ch.R. Darwin 1858               |
           |             \             |
           |              \            |
           |  J.G. Mendel  \           |
           |  1850-68       \          |
           |        |        \         |
       Ch.R. Darwin |         \        |
       1871         |       L. Boltzmann 1874
          /|        |                  | \
         / |        |                  |  \
        /  |   H. de Vries 1900        |   \
       /   |   K.E. Correns 1900       |    M. Planck
      /    |   E. von Tschermak 1900   |    P.A.M Dirac
     /     |       /                   |    A. Einstein
     |     |      /                    |    E. Schroedinger
     |     |     /                     |    1900-1940
     |    R.A. Fisher 1930-1960        |
     |     /\                          |
     |    /  \                         |
     |   /    \                        |
 Neo-Darwinian \                       |
 Paradigm       \                  C. Shannon 1948,49
 1940-1960       \                     |
     |            \                    |
     |          "Selfish               |
     |          Genetics"              |
     |          1970-1990              |
     |              |                  |
     |              |                  |
     |              |                  |

Carnot, Kelvin, Maxwell and Clausius can be used to represent that group of
people who first broke the knot of truly understanding thermodynamics in its
modern form, although they mathematically represented and continued to think
of heat as if it were a fluid.

Rudolf Clausius defined the term "entropy" (which literally means "in one
turn"). The mental model Clausius used in his thermodynamics descriptions was
that of the gearing system in a grist mill. Some fraction of the ordered
energy in every turn of the prime drive shaft was *not* returned to the
subsequent gears; rather, that ordered energy was irrecoverably lost to an
inaccessible pool of heat due to friction.

Ludwig Boltzmann relatively quickly redefined Clausius' entropy, not as a
bulk quality of heat, as Carnot, Kelvin, and Maxwell envisioned the process,
but as a population of particles inexorably moving from a state of some
specific degree of order to one of disorder. This is called the "microscopic
interpretation" in physics. The structure of this reinterpretation is due
wholly to Boltzmann's great enthusiasm and deep understanding of the
mechanistic explanation that Darwin put forward regarding selection operating
against the smallest of variations within a population of variants.
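
Boltzmann's microscopic reinterpretation is usually summarized by the
relation later engraved on his tombstone (the modern notation with k_B is my
addition, not part of the original post):

```latex
S = k_B \ln W
```

where S is the entropy of a macrostate, k_B is Boltzmann's constant, and W
is the number of microscopic arrangements consistent with that macrostate.
Disorder wins simply because there are vastly more disordered arrangements
than ordered ones.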

Intervening in that time between Carnot and Boltzmann were the ideas of
Darwin, Wallace, Clausius and Maxwell. The impact of Darwinism on physical
thought was immediate and extraordinarily profound, and almost wholly due to
one person, Ludwig Boltzmann. Boltzmann was so entranced by Darwin's ideas
that he wrote in his 1905 book, Populäre Schriften, "If someone asked me what
name we should give this century, I would answer without hesitation that this
is the century of Darwin." As Ilya Prigogine, Nobel Laureate in Chemistry,
wrote in his 1984 book, "Order out of Chaos: man's new dialogue with nature":
"Boltzmann was deeply attracted by the idea of evolution, and his ambition
was to become the 'Darwin' of matter".

Indeed, Prigogine himself is apparently quite taken by Boltzmann's attraction
to Darwin because he prominently mentions that philosophical association at
almost every opportunity, as he does again in "From Being to Becoming: time
and complexity in the physical sciences". In that text, Prigogine writes:

"Boltzmann's approach had astounding successes. It has left a deep imprint on
the history of physics. The discovery of the quantum by Planck was an outcome
of Boltzmann's approach. I fully share the enthusiasm with which Schrödinger
wrote in 1929 that '[Boltzmann's] line of thought [Darwinism] may be called
my first love in science. No other has ever thus enraptured me or will ever
do so again'" (the insertion of the word "Darwinism" into the text was my
own, but is clearly implied by Prigogine's preceding text).

Some eighty years later, Shannon reinterpreted entropy a third time,
directly emulating Boltzmann, but this time defining "information" as
unexpected variation within a channel of signalling symbols. Shannon's
information measure, I, was defined more as a metric of surprise than
disorder. In that, a philosophical circle was completed. Evolutionary
optimization itself is a rather simple thermodynamic process where the agent
of selection (competition in a bounded arena) operates to minimize the
behavioral inappropriateness (surprise) of the trial variants.
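
Shannon's measure can be illustrated in miniature (the symbol probabilities
below are an invented toy example, not anything from Shannon's papers):

```python
import math

def surprise(p):
    # Information content of one symbol, in bits.
    # Rare symbols carry more surprise than common ones.
    return -math.log2(p)

def entropy(dist):
    # Shannon's H: the expected surprise over the whole symbol distribution.
    return sum(p * surprise(p) for p in dist.values() if p > 0)

dist = {"a": 0.5, "b": 0.25, "c": 0.25}
print(surprise(dist["a"]))   # 1.0 bit: a coin-flip's worth of surprise
print(entropy(dist))         # 1.5 bits per symbol, on average
```

The parallel to Boltzmann is direct: both measures count how spread out a
distribution is, whether over microstates or over signalling symbols.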

The direct effects of the Darwinian/Boltzmannian reinterpretations of
thermodynamics on physics have been at least these:

     o it has become the philosophical underpinning of modern
       thermodynamics.

     o it provides a profound and measurable sense that time has
       an "arrow" (Sir Arthur Eddington's phrase), in sharp
       contradistinction to a Newtonian physics, where reversibility
       is not only possible, but assumed.

     o it is the derivative philosophical underpinning of the definition of
       the quantum, and thus all of quantum mechanics, and by further
       derivation, statistical mechanics.

     o it is also the philosophical underpinning, by derivation, of
       information theory, and by further derivation, coding theory.

While there is much more to talk about on this subject than this very brief
sketch, I might only add that this derivative philosophy of Darwinism occurs
nowadays in the most unlikely places. Physicists have, from the very earliest
days, preferred to reify, if not outright personify, the process of
Boltzmannian/Darwinian/Markovian selection by invoking the images of agents,
demons, and barbs, although no physicist takes any of this imagery seriously.
Nonetheless, this philosophical heritage that is characteristic of physics --
but not of biology -- now appears in (if not permeates) the terminology of
modern computer operating systems, such as UNIX, where agents and demons
populate a world of interconnected computers and peripheral devices.

It is considered poor form in scientific circles to reference your own
papers, but I'll nevertheless do that here. Work I did more than 20 years ago
was presented at an IEEE Asilomar conference in 1990, and the resulting short
paper is on the web at:

     http://aics-research.com/research/accel.html

The reason that I mention this paper is that it very directly answers Ted's
question, in exactly the same way that Steve Dirickson answered the question:
not all possibilities are evaluated, or even experienced, in an evolutionary
search. Indeed, only an infinitesimally small fraction of the state space is
explored, for exactly the reasons that the poet-physicist, Jacob Bronowski,
earlier called "the barb of selection." Further, natural mechanisms have been
evolved that dramatically accelerate the speed of evolution within enormously
large state spaces, and that is the true crux of the paper.

A second, much longer paper is also on the web:

     http://aics-research.com/research/notes.html

This second paper explores the attributes of creativity and purposivity
intrinsic to the evolutionary process. Although the paper was clearly meant
to be philosophical in nature, that philosophy was one of engineering design
and of a proper interpretation of the physics in force. I never gave any
consideration to its religious overtones when I wrote the paper, and as a
consequence I am somewhat bemused to see that it is referenced on a number of
religious websites now (Jehovah's Witness, Christian Science, and Roman
Catholic).

My interest in autonomously learning machines was greatly piqued when I was
in high school, in the late 1950s, when I read a "Mathematical Games"
article in Scientific American, and that interest was even further heightened
when I read a small book by Fogel, Owens and Walsh in 1966, entitled
"Artificial Intelligence through Simulated Evolution." Indeed, I used that
book as the basis of my 1976 dissertation.

But to demonstrate how much things have changed between now and 1976, there
was a fairly substantial level of disagreement among the large doctoral
committee that I reported to as to whether or not this work qualified as
being electrical engineering. The department head at the time, Dr. Frank
Carden, said that he wasn't sure what electrical engineering was any more;
when he went through school, the primary two areas of interest were only
radio and power engineering. There were no computers, no information theory,
no electromagnetics, no coding theory, no image processing -- and what I had
chosen to work on seemed perfectly fine to him. I have always held Frank in
very high regard since.

In 1986, one of the first AICS'ers, Jay Cunningham, happened to run into
Larry Fogel, the primary author of the book that I used as my dissertation
basis. He gave Larry some of my writings. As a result, Larry immediately
called and asked if I would come to La Jolla and talk with them for about a
week. It's there that I met his son, David, who was at the time a junior at
UC Santa Barbara.  Eventually, I came to supervise David's doctoral work,
with him finishing in 1996. David has become prolific and is now the author
of several books on the subject of evolutionary computation (go to Amazon.com
and search on the keywords: evolutionary computation fogel), as well as the
editor of an IEEE journal on the subject:

     http://www.ewh.ieee.org/tc/nnc/pubs/tec/

I do not want to leave you with the impression that my contributions have
been particularly key to the development of the field. All that I've
represented above is just one thin lineage in a subject area that has
exploded in the last few years. Nevertheless, because we do understand the
physics and processes of evolution so very well, you should inherently expect
engineering exploitation, and that exploitation is now beginning in earnest
and undoubtedly will only grow substantially over the next several decades.

As you might expect, the design of autonomous weapons and spacecraft will
likely be the first real implementations of self-evolving machinery:

     http://ic-www.arc.nasa.gov/ic/eh2000/index.html

A list of the "major" research laboratories in the world that are working on
evolving hardware can be found at the University of Sussex's Cognitive
Science Group's webpage (although the term "major" should certainly be taken
with a grain of salt):

     http://www.cogs.susx.ac.uk/users/adrianth/EHW_groups.html

Wirt Atmar
