HP3000-L Archives

December 1998, Week 1

HP3000-L@RAVEN.UTC.EDU

From:         Wirt Atmar <[log in to unmask]>
Date:         Tue, 1 Dec 1998 15:14:18 EST
Content-Type: text/plain

Jennifer writes:

> Wirt Atmar wrote in message ...
>  >There is another one of these articles on the web that essentially
>  >predicts the end of the PC -- at least as a standalone processor.

>  I think the PC will change substantially, but it's extremely premature, IMO,
>  to predict its demise. You could just as easily predict the demise of UNIX
>  or Cobol (been there, as I recall), since all things eventually come to
>  an end. The question is how LONG before we put little x's over the PC's
>  proverbial eyeballs.

Let me say that I wasn't predicting the end of the PC, regardless of the
provocative subject title of this thread. Indeed, like almost every other
successful evolutionary event, the persistence of the PC is now essentially
guaranteed. There is very little chance that anything will come along and
unseat the PC in the next 25 years. Inertia alone will carry Windows-based PCs
into the future for at least that period of time.

What is being predicted is the end of the "a mainframe on every desk" phase
of PC usage. Larry Ellison made the same point last week when he said, "The
internet changes everything."

The PC is rapidly becoming a gateway to information, rather than an isolated
repository of information in and of itself.

Not only is that reasonable, such usage also lowers support costs
dramatically and greatly increases both reliability and ease of use.



>  The PC must remain capable of being a standalone processor for as long as
>  the communications issues from host to client remain so poor. Modems are
>  still much too slow for much of the commercial web content currently being
>  published, and we are unlikely to see streamlining of code so that modem
>  users can see content faster. As it is, commercial web sites have to hold
>  back on functionality to accommodate the lowly modem user. Regardless of
>  the wonders of T1 lines, 60% of web surfers are getting there via modem
>  connection. Home users outnumber corporate users, and always will. Until it
>  is standard for home users to have fast, effective, reliable connection to
>  the internet, we will not see the death of the PC. IMO, anyway.

The web-based browser is not the only paradigm out there. We started with
terminals. There's an excellent chance we'll actually wind up with terminals.

A terminal-based connection has a lot of advantages over the stateless,
script-based communication protocol that a browser uses. Traditional
terminal-based applications are enormously simpler to program. They are also
enormously faster to execute. And the connection is persistent. There is
nothing written on any stone tablet anywhere that says that communications on
the internet have to be made so complicated or so slow.

A long-distance telnet session is intrinsically no more complex than any
other use of a phone line. If you program conservatively, and keep the
material you're transferring to the "extremely thin client" (a terminal) to a
minimum, it can be blazingly fast.
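
As a rough illustration of how thin such a client can be, here is a minimal
sketch in Python of a line-mode "terminal" session over a raw TCP socket. The
host name and port are hypothetical placeholders, and it skips telnet option
negotiation entirely, so treat it as a sketch of the traffic pattern rather
than a finished telnet client:

    # A minimal line-mode "terminal" over a raw TCP socket.
    # HOST and PORT are hypothetical placeholders; real telnet also
    # involves option negotiation, which this sketch ignores.
    import socket
    import sys

    HOST = "host.example.com"
    PORT = 23

    def main():
        with socket.create_connection((HOST, PORT), timeout=10) as conn:
            reader = conn.makefile("rb")        # line-at-a-time replies
            while True:
                line = sys.stdin.readline()
                if not line:
                    break                       # user closed the session
                conn.sendall(line.encode("ascii", "replace"))
                reply = reader.readline()       # one short reply per request
                if not reply:
                    break                       # host closed the connection
                sys.stdout.write(reply.decode("ascii", "replace"))
                sys.stdout.flush()

    if __name__ == "__main__":
        main()

Each exchange there is one line of text in each direction -- a few dozen
bytes -- which is why a conservatively written terminal application can feel
instantaneous even over a slow link.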




>  And on the 'too much power' theory...when voice recognition becomes more
>  reliable, those PC's will need every bit of power they can get. Again, this
>  is not from a work environment perspective...it's from a base user level.
>  Talk to people who can't type. They are DYING for this technology. We're in
>  a lull now (in need for more power -- insert Tim Allen grunt here), but I
>  don't expect it to last. Virtual reality is a fun idea, but voice
>  recognition is something that will help all computer users.

While I agree with almost everything else you say, Jennifer, I wouldn't bet
used doughnuts on voice recognition being a truly useful technology anytime
soon.

The problem with continuous-speech, speaker-independent voice recognition is
that it must work in an ordered hierarchy. At the lowest level, the system
must be constructed to quickly adapt to the formant frequencies of the speaker
so that it can properly hypothesize phoneme (or morpheme) breaks.
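
To give a feel for what that lowest level involves, here is a toy sketch of
my own (not how a production recognizer works -- those use LPC or cepstral
analysis): it synthesizes a vowel-like 30 ms frame from two sinusoids placed
roughly where the first two formants of /a/ sit, then picks the strongest
spectral peaks as crude formant estimates.

    # Toy formant estimation: synthesize a vowel-like frame, then pick
    # the strongest spectral peaks as crude formant-frequency estimates.
    import numpy as np

    RATE = 8000                              # samples per second
    t = np.arange(0, 0.03, 1.0 / RATE)       # a 30 ms analysis frame

    # Fake "vowel": energy concentrated near 700 Hz and 1200 Hz.
    frame = (np.sin(2 * np.pi * 700 * t) +
             0.6 * np.sin(2 * np.pi * 1200 * t)) * np.hanning(t.size)

    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(frame.size, 1.0 / RATE)

    # Local maxima of the spectrum, strongest first.
    peaks = [i for i in range(1, spectrum.size - 1)
             if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]]
    peaks.sort(key=lambda i: spectrum[i], reverse=True)

    formants = sorted(round(freqs[i]) for i in peaks[:2])
    print("estimated formants (Hz):", formants)   # -> [700, 1200]

A real system has to do this continuously, on noisy speech, while the formant
positions drift with every speaker and every vowel.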

At the next level up, it must then be able to properly hypothesize words from
this ongoing string of morphemes, realizing perfectly well that its initial
hypotheses may be incorrect, so it must have the capability of dropping down
a level or two (at every level) and reformulating its hypotheses. This
constant reformulation is something that every one of us has experienced,
especially in a noisy party environment, listening to someone speak who has a
dialect substantially different from our own. It can sometimes take a minute
or two before it "dawns on us" what the person said -- and then it can be
such a shock that we have an almost inevitable desire to say, "Aha!"
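
A toy way to picture that reformulation step (my own sketch, with a made-up
four-word lexicon, not anything a real recognizer contains): carve an
ambiguous phoneme-like string against the lexicon and keep every competing
segmentation alive, rather than committing to the first word that happens to
match.

    # Keep competing word hypotheses alive instead of committing to the
    # first word that matches the incoming phoneme string.
    LEXICON = {"AY": "I", "AYS": "ice", "SKRIYM": "scream", "KRIYM": "cream"}

    def segmentations(phonemes, words=()):
        """Yield every way to carve the phoneme string into lexicon words."""
        if not phonemes:
            yield words
            return
        for cut in range(1, len(phonemes) + 1):
            chunk = phonemes[:cut]
            if chunk in LEXICON:
                # Tentatively accept this word but keep the other cuts
                # alive; a dead end later on simply abandons the branch --
                # the "drop down a level and reformulate" step.
                yield from segmentations(phonemes[cut:],
                                         words + (LEXICON[chunk],))

    for hypothesis in segmentations("AYSKRIYM"):
        print(" ".join(hypothesis))   # "I scream" and "ice cream"

Choosing between the two survivors is exactly the job of the higher,
context-aware levels described next.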

At the next level up, an excellent voice recognition device must be able to
initially hypothesize context based on not much more than common word
associations, and thus be able to separate out homonyms and hypothesize
sentence and phrase breaks.

At the next level up, it must be able to resolve conflicting hypotheses and
firmly break sentences into punctuatable phrases and thoughts. But performing
this activity properly requires hypotheses and associations at yet another
level up: learned associative memory (basically a lifetime's worth of
experience). But even humans are fooled at this level -- and that's why we
find spoken phrases such as, "No pun in 10 did," funny (sometimes :-).

Without all of this capability (essentially requiring the intelligence of an
adult human locked in a box), voice recognition will always be constrained to
about 80-90% accuracy for general, unconstrained speech -- and that isn't
likely to be good enough.
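
To put a rough number on why that isn't good enough (a back-of-the-envelope
calculation of my own, not a figure from the recognition literature): even at
90% per-word accuracy, an ordinary 20-word sentence comes out with no errors
at all only about one time in eight.

    # Chance a sentence is transcribed with no word errors, assuming
    # (simplistically) independent per-word accuracy.
    for accuracy in (0.80, 0.90, 0.95):
        for words in (10, 20):
            print(f"{accuracy:.0%} per word, {words:2d}-word sentence: "
                  f"{accuracy ** words:5.1%} error-free")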

Rather, for the time being, and for a great while to come, it would probably
be far more cost-effective to pay for typing lessons.


Glenn Cole also writes:

> > 2.  They want to be connected to the company intranet and/or the
>  > Internet to get information from the enterprise's mini, mainframe
>  > or whatever.
>
>  THIS was the biggest reason for my last upgrade.  GIFs and JPEGs
>  draw SO much faster on the 180 MHz machine vs. the 25 MHz machine.
>  Otherwise, personal productivity may well be comparable.

That's absolutely correct, but even at 180 MHz you haven't reached the knee
of the curve, where perceived performance gains begin to diminish, or are
non-existent, with every additional 100 MHz you add to the processor. When
the $600, 600 MHz
machine arrives (perhaps as soon as 2000), local processing speeds won't be
the bottleneck, exactly as Jennifer says. It's going to be bandwidth, on the
backplane, from the local discs, and clearly from the internet -- and that's
not likely to improve all that quickly for the general population.
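
To put rough numbers behind that (illustrative 1998-ish figures of my own,
not anything from Glenn's message): even a 56k modem moves only a few
kilobytes a second, so the wire sets the pace for an image-heavy page long
before the processor does.

    # Back-of-the-envelope transfer times for a 50 KB image over
    # typical late-1990s links (the figures are illustrative assumptions).
    LINKS_KBPS = {"28.8k modem": 28.8, "56k modem": 56.0, "T1 line": 1544.0}
    IMAGE_BYTES = 50 * 1024          # one mid-sized GIF or JPEG

    for name, kbps in LINKS_KBPS.items():
        seconds = (IMAGE_BYTES * 8) / (kbps * 1000)
        print(f"{name:>12}: {seconds:5.1f} s")   # ~14 s, ~7 s, ~0.3 s

A 600 MHz processor decodes that image in a small fraction of a second either
way; the link, not the CPU, is what the user is waiting on.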

Wirt
