HP3000-L Archives

February 2000, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From:
Jerry Fochtman <[log in to unmask]>
Reply To:
Jerry Fochtman <[log in to unmask]>
Date:
Fri, 11 Feb 2000 10:50:37 -0600
Content-Type:
text/plain
Parts/Attachments:
text/plain (78 lines)
At 10:31 AM 2/11/2000 -0500, James Clark,Florida wrote:
 >I mentioned the 4G record count only as a theoretical limit given the size
 >of the pointers. Given a simple 80 byte record, 1 billion of them would
 >break the file limit currently given.

Yes.  Although it would break the current file limit, the large file system
itself was architected to support 1TB files.  However, as you've indicated,
other limiting factors come into play that will also have to be addressed
as things grow toward this limit.  This includes things such as compiler
technology, hardware architecture, I/O architecture, disc storage technology,
backup technology, etc.  That is why HP's announced strategy to move
the HPe3000 toward IA-64 is such an important move on their part for the
future of the HPe3000.  Without moving in this direction, most of
the hardware/architecture issues would not be solvable...
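The arithmetic behind these limits is worth making concrete.  A minimal
sketch (the 80-byte record size and 4G pointer ceiling come from the quoted
message; the 1TB figure is the architectural ceiling mentioned above):

```python
# Back-of-the-envelope arithmetic for the limits under discussion.
RECORD_SIZE = 80          # bytes per record (example from the thread)
POINTER_LIMIT = 2 ** 32   # ~4G records addressable with 32-bit pointers
TB = 2 ** 40              # large-file architectural ceiling: 1 TB

def dataset_bytes(records, record_size=RECORD_SIZE):
    """Raw data volume for a given record count."""
    return records * record_size

one_billion = dataset_bytes(10 ** 9)        # ~80 GB, past older file limits
pointer_max = dataset_bytes(POINTER_LIMIT)  # ~343 GB at the pointer ceiling

print(one_billion / 2 ** 30)  # ~74.5 GiB
print(pointer_max / TB)       # 0.3125, under a third of the 1 TB ceiling
```

So even a dataset filled to the 32-bit pointer limit at 80 bytes per record
stays well inside the 1TB large-file architecture; the pointer width, not
the file system, is the first wall.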


 >And you brought out that the machine would take too long to load data
 >to these tables, would that be a limitation on the CPU and I/O bandwidth?

Yes... :)
I/O bandwidth includes disc technology, controller technology, back-plane
bandwidth, CPU design/pipeline, memory technology, and software drivers;
essentially the entire I/O path.  The CPU itself was idle or waiting
on I/O....
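To see why load time becomes the binding constraint, the lower bound is
simply data volume divided by sustained end-to-end throughput.  A quick
sketch (the 5 MB/s rate below is a hypothetical stand-in for a sustained
rate through the whole I/O path, not a measured figure):

```python
# Lower bound on bulk-load time: volume / sustained throughput.
# Assumes the I/O path (disc, controller, back-plane, drivers) is the
# bottleneck and the CPU mostly waits, as described above.

def load_hours(data_bytes, bytes_per_sec):
    """Hours needed to stream data_bytes at a sustained rate."""
    return data_bytes / bytes_per_sec / 3600.0

data = 80 * 10 ** 9  # the 1-billion x 80-byte record example (~80 GB)
rate = 5 * 10 ** 6   # hypothetical 5 MB/s sustained end-to-end rate

print(round(load_hours(data, rate), 1))  # ~4.4 hours
```

Scale the example to hundreds of gigabytes and the load window grows into
days unless every stage of the I/O path improves together.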


 >HP has done tests with 300GB data for their TPC-D test.

Hmmm... I find this interesting.  Apparently you've become aware of
some type of results.  Do you know of a pointer to this information?


 >If it takes too long then HP needs to look at the Image logic and how
 >it handles I/O and records.

IMAGE doesn't really do much I/O itself, but leaves the task to storage
management, as IMAGE uses mapped access to reference the contents of the
datasets.  And then don't forget XM's involvement to ensure data integrity.
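For readers unfamiliar with mapped access, the general idea is that records
are reached by address arithmetic on a file mapping, and the storage manager
performs the physical I/O behind the scenes.  A generic memory-mapping
illustration in Python (an analogy only, not IMAGE's actual internals):

```python
# Generic illustration of mapped file access: the process reads bytes
# through the mapping; the OS storage manager does the physical I/O.
import mmap
import os
import tempfile

RECORD_SIZE = 80  # fixed-length records, as in the thread's example

# Build a small scratch "dataset" of three fixed-length records.
path = os.path.join(tempfile.mkdtemp(), "dataset.bin")
with open(path, "wb") as f:
    for i in range(3):
        f.write(f"record-{i}".encode().ljust(RECORD_SIZE, b"\x00"))

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Random access by record number is just arithmetic on the
        # mapping; no explicit read() call is issued here.
        rec1 = m[1 * RECORD_SIZE:2 * RECORD_SIZE].rstrip(b"\x00")
        print(rec1.decode())  # prints record-1
```

The point of the analogy: the application never issues reads itself, which
is why the underlying storage management (and XM for integrity) carry the
real I/O burden.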


 >Again, as we look at the big picture it is not so easy to change, but
 >with planning and awareness it can be accomplished.

I agree.  However, one does not always have the luxury of being
able to hold off, redo the entire process, and then release something that
is all new, all at once.  There can be significant disruption, not to mention
the extended delay in completing the entire picture, as opposed to staging
and releasing the parts that are needed most early on and rolling out the
other components as the need approaches.  Indeed, having a tentative plan
is a good thing, even if it changes over time due to data that is
gathered along the way.

Having worked quite a bit with the HP engineers in the performance area,
I can say they are indeed sensitive to the things you've outlined, and then
some, when it comes to anticipating and working toward addressing these
issues.  Despite all their efforts, sometimes the timing of things doesn't
quite work out and things get tight for a while.  And in terms of hardware
planning, we're lucky to have the skill and talent of Dave Snow in
CSY!



/jf
                               _\\///_
                              (' o-o ')
___________________________ooOo_( )_OOoo____________________________________

                          Friday, February 11th

           Today in 1847 - Thomas Edison was born.

___________________________________Oooo_____________________________________
                             oooO  (    )
                            (    )  )  /
                             \  (   (_/
                              \_)
