HP3000-L Archives

March 1999, Week 1

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Leonard Berkowitz <[log in to unmask]>
Reply To: Leonard Berkowitz <[log in to unmask]>
Date: Wed, 3 Mar 1999 09:22:27 -0500
We are conducting a benchmarking effort to see what technical problems the company
will face as we project an increase in business volume (HMO members). We recently
encountered the maximum number of addressable TurboImage blocks (something above 8
million, I was told). With Bradmark's help, we were able to work around this
limitation by raising the blocking factor to the maximum possible for this particular
dataset, increasing the block size to 2560.

We are now wondering up which creek we will be when we have a dataset with such a
wide media record that we cannot make the blocking factor high enough to stay within
the limit on addressable blocks.
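
To make the arithmetic concrete, here is a rough sketch in Python of the capacity
calculation as I understand it. The ~8 million block ceiling and the 2560 block size
are the figures mentioned above; the record widths are made up purely for
illustration, so take the exact numbers with a grain of salt.

    # Back-of-the-envelope capacity check for a TurboImage dataset.
    # Assumed figures (from the discussion above, not official numbers):
    MAX_BLOCKS = 8_000_000      # approximate addressable-block ceiling
    BLOCK_SIZE = 2560           # block size after the Bradmark workaround

    def max_entries(media_record_width):
        """Approximate entry capacity: blocking factor * addressable blocks."""
        blocking_factor = BLOCK_SIZE // media_record_width
        return blocking_factor * MAX_BLOCKS

    # A narrow media record leaves plenty of headroom:
    print(max_entries(20))      # blocking factor 128 -> ~1.0 billion entries
    # A very wide media record pins the blocking factor at 1, so the
    # capacity collapses back toward the block ceiling itself:
    print(max_entries(2000))    # blocking factor 1 -> ~8 million entries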

Our speculation goes further:

What other undocumented or not widely known limitations might we encounter as we
scale our volumes upward? For example, there has been discussion on this list about
the maximum number of logging processes (~1100?) in write mode.

A further question: is HP planning to increase the various limits as customers attain
higher and higher volumes? The increase in dataset size to 80 GB (if I have that
right) is a comforting limit for now; I cite it as an example of HP trying to stay
ahead of the curve. However, I remember that several years ago, at my former
employer, we were bumping into the then 2 GB file limit when the advertised limit
was 4 GB.

Thanks.
========================
Leonard S. Berkowitz
mailto:[log in to unmask]
phone: (617) 972-9400 ext. 3250
fax:   (617) 923-5555
