HP3000-L Archives

February 2000, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From: "James Clark,Florida" <[log in to unmask]>
Reply To: James Clark,Florida
Date: Fri, 11 Feb 2000 10:31:19 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (76 lines)
As functionality is added, and for that matter as a limitation is lifted, it
has been my experience that someone will, sooner rather than later, bump up
against the new limit. Huge amounts of data are a current problem. When I
started out in programming I was given a PC (for terminal emulation) with
dual floppies and 256K RAM (I did have an Apple II with 1 drive and 64K
RAM); now that is something to laugh at. Most personal PCs are coming with
8 - 10 GB disc drives and 64 - 128 MB RAM. Of course this has not yet
translated to higher data needs on the central computer, but time is still
ticking. I believe data warehousing, along with the price of disc storage,
has been the big push at the central CPU.

Also, the HP 3000 Series 70 we had had a room full of disc drives but only
about 5 - 10 GB; now we have over 180 GB in two small enclosures, and we are
a small shop. As for data filling up, it has been my experience that HP
comes through, sometimes in a timely manner, with hardware and software to
accomplish a given task for their users. And when HP has dragged its feet,
there are third-party partners that get the job done.

I am sorry if this is coming out as an argument; it is not meant to be. I am
just stating some figures, which could change rapidly if the system would
allow it. I hope HP is not waiting for a justifiable need before putting
effort into it (I am glad they don't, and I believe it is your SIG and
others that keep them always improving). I also appreciate that they
incrementally improve their product and build on strengths without
sacrificing previous work. (I have recently read that this is not so for
HP's printer division; maybe they should go over to CSY for some good
lessons.)

I mentioned the 4G record count only as a theoretical limit given the size
of the pointers. Even with a simple 80-byte record, one billion of them
would break the file limit currently in place. I know I talked about size,
but what I was trying to get across was this: let's not pick on one thing
that seemingly looks easy to fix or patch; pay attention to the big picture.
Fixing one part may break another, as can be seen with the example of JUMBO
dataset data needing to be sorted when file sizes are not big enough. And
you brought out that the machine would take too long to load data into
these tables; would that be a limitation of CPU and I/O bandwidth? HP has
done tests with 300 GB of data for their TPC-D benchmark. If loading takes
too long, then HP needs to look at the IMAGE logic and how it handles I/O
and records, probably with multiple processes hammering away at the task.
Again, as we look at the big picture, it is not so easy to change, but with
planning and awareness it can be accomplished.
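The back-of-the-envelope arithmetic above can be sketched out. This is a
minimal illustration, not anything from the post itself: it assumes the 4G
record ceiling comes from a 32-bit record pointer, and it assumes a 4 GB
classic file limit for comparison (the actual MPE/iX limit varied by
release and file type).

```python
# Rough arithmetic behind the limits discussed above.
# Assumptions (mine, not from the original post): a 32-bit record
# pointer gives the 4G record ceiling, and 4 GB stands in for the
# classic file size limit.

POINTER_BITS = 32
RECORD_BYTES = 80                # the simple 80-byte record in the text
FILE_LIMIT_BYTES = 4 * 1024**3   # assumed 4 GB classic file limit

max_records = 2 ** POINTER_BITS  # theoretical pointer-limited count
one_billion = 1_000_000_000
dataset_bytes = one_billion * RECORD_BYTES

print(f"pointer-limited record count: {max_records:,}")
print(f"1 billion 80-byte records: {dataset_bytes / 1024**3:.1f} GB")
print(f"exceeds assumed 4 GB file limit: {dataset_bytes > FILE_LIMIT_BYTES}")
```

The point the numbers make: one billion 80-byte records is roughly 74.5 GB,
well past a 4 GB file, even though it is only about a quarter of the 4G
record-count ceiling.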

James

> -----Original Message-----
> From: Sletten Kenneth W KPWA [mailto:[log in to unmask]]
> Sent: Friday, February 11, 2000 12:19 AM
> To: James Clark,Florida; [log in to unmask]
> Subject: RE: MPE iX Release 6.5
>
>
<snip>

> BUT....  CONSIDER:
> Jerry Fochtman recently posted some interesting real-world
> data on how long it took to load;  what was it;  70 GB, running
> on a high-end machine...  think he said about seven days (and
> that was with only two search items in a Detail)....  At that rate
> loading 1 TB would take something like 100 days....  if people
> are going to need to do that, guess HP will have a good market
> for new, higher-performance HP 3000's....  oops...:  HPe3000
> (I refuse to put a space between "HP" and "e")...
>
> NOW:
> In his first above James talks about 4G *records*, not bytes of
> capacity in a dataset....  I have to ask:  Are there really sites out
> there now (or projected any time soon) that think they will have a
> need to store more than two billion individual RECORDS in a
> single IMAGE dataset anytime soon ??...  Seriously, if there are
> the SIGIMAGE Executive Committee (SIEC) would very much
> like to hear from you....  Even better, attend SIGIMAGE meeting
> next week on Wed 16 Feb 00 at SIG3000 if you can....   ;-)
>
> Ken Sletten
> SIGIMAGE Chair
>
