HP3000-L Archives

December 2003, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Wirt Atmar <[log in to unmask]>
Date: Tue, 9 Dec 2003 11:58:06 EST

Mark remarks:

> There's been an insane move to centralization, IMHO, strictly for those
> who maintain systems, and the risk of a central point of failure is much
> greater (and service is worse). Certainly, having systems all over the
> world is difficult and probably more expensive to manage. A good virtual
> system would have the advantages of both a central and a decentralized
> system: fewer systems to manage but redundancy to protect against
> communication breakdowns, power failures or other nasty events.

There are some problems that are inherently parallelizable, but virtually
anything to do with an interactive database is not. Google is an especially
good example of the parallelizable kind. Caching in a Google-like
application is worthless simply because every query is independent of every
other, and its results represent only a small fraction of the universe of
information stored in the replicated databases. Moreover, the flow of data
is all outflow. There is no input into the databases during the great
majority of the time that they are up and running. Nor is there any
requirement that the multiply replicated databases be exactly in sync with
one another. Close enough is good enough.
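
A minimal Python sketch of why such a read-mostly workload scales out; the
replicas, index contents, and search function here are hypothetical, not
anything Google actually runs:

    import random
    from concurrent.futures import ThreadPoolExecutor

    # Three replicas of the same read-only index. They need not be
    # identical -- "close enough is good enough."
    REPLICAS = [
        {"hp3000": ["doc1", "doc7"], "mpe": ["doc2"]},
        {"hp3000": ["doc1", "doc7"], "mpe": ["doc2", "doc9"]},  # newer
        {"hp3000": ["doc1"], "mpe": ["doc2"]},                  # older
    ]

    def search(term):
        # Any replica may answer. No locking or coordination is needed,
        # because queries never write and never depend on one another.
        return random.choice(REPLICAS).get(term, [])

    # Each query is independent of the rest, so they fan out trivially.
    with ThreadPoolExecutor(max_workers=8) as pool:
        print(list(pool.map(search, ["hp3000", "mpe", "hp3000"])))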

Similarly, problems that fractionate into a large array of partial
differential equations, such as turbulent shock flow, weather patterns, or
photon migration, are all prime candidates for massively parallel
implementations.
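
The fractionability comes from locality: an explicit finite-difference
update touches only a point's immediate neighbors, so the grid can be cut
into slabs that trade a one-cell "halo" per step. A toy sketch in Python
(1-D heat equation, hypothetical two-worker split):

    # u_t = alpha * u_xx, explicit scheme: each interior point is
    # updated from its own value and its two neighbors only.
    def step(u, alpha=0.1):
        return [u[0]] + [
            u[i] + alpha * (u[i-1] - 2.0*u[i] + u[i+1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]

    u = [0.0] * 5 + [100.0] * 5          # 10-point grid, hot right half
    left, right = u[:5], u[5:]           # split across two "processors"
    for _ in range(100):
        l_ext = left + [right[0]]        # halo cell from the right worker
        r_ext = [left[-1]] + right       # halo cell from the left worker
        left, right = step(l_ext)[:-1], step(r_ext)[1:]
    print([round(x, 1) for x in left + right])

Each worker advances its own slab independently; the only communication is
the single boundary cell exchanged each step, which is why such problems
parallelize so well.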

But a patient record system is not partitionable. The only safe way to
maintain such a database is on one central server, as one database. Any
other architecture courts disaster -- for exactly the same reasons that a
partial database restore has always been a recipe for catastrophe on
HP3000s.
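
The reason, in miniature: one logical update to a patient record spans
several tables, and the database is only correct as a whole. A sketch with
SQLite (the tables and data are hypothetical):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY,
                             patient_id INTEGER REFERENCES patients(id),
                             drug TEXT);
    """)
    with db:  # one transaction spanning both tables, all or nothing
        db.execute("INSERT INTO patients VALUES (1, 'J. Doe')")
        db.execute("INSERT INTO orders VALUES (10, 1, 'warfarin')")

    # Restore (or replicate) the patients table from one point in time
    # and the orders table from another, and order 10 can end up
    # pointing at a patient who no longer exists. That dangling
    # reference is the partial-restore catastrophe in miniature.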


> I think it is evolution (Duane?). It is looking more like the human body:
> there are main functions, but none is served by a single cell. If you
> remove parts, at least not too many at a time, the system still runs. It
> seems, to me anyway, a natural conclusion.

Unfortunately, it's the wrong conclusion. Complex neural nets have evolved
at least three times independently on this planet: once in the vertebrates,
once in the molluscs, and once in the arthropods. In none of the three has
distributed processing ever evolved much beyond isolated ganglia, slightly
modularized processing nodes stationed at key points in the nervous system.
Rather, in all three architectures, the loud, clear, piercing cry of design
has been towards encephalization: a single processing site that receives
input and commands and controls the rest of the body.

If there were any merits to a distributed computing architecture, certainly
some species among the hundreds of millions that have existed would have toyed
with the design at some point, but to the best of our knowledge, none have.

Wirt Atmar

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
