HP3000-L Archives

August 1996, Week 5

HP3000-L@RAVEN.UTC.EDU

Subject:
From:
Steven Lastic <[log in to unmask]>
Reply To:
Steven Lastic <[log in to unmask]>
Date:
Thu, 29 Aug 1996 01:37:00 UT
Bill, while it may be true that they are growing at a tremendous rate, they
are still only adding approximately 1 million claims and 2 million service
lines (their most active datasets) per year.  In my opinion, this volume is
not that great.  I have worked at other client sites that add this many
records in a six-month period.

The largest volume comes from converting other databases into this database.
This is the project I was brought in for.  At the rate they are going,
they'll be lucky to convert two sites a year, which means that the volume is
not going to be as great as you anticipate.  I have been working here for a
year and a half and have not seen much growth in their databases apart from
these conversions.  Most of the data is historical, and not all of it will be
converted.  The other thing to consider is that they have an archiving
process, not yet run on their system, that removes records over two years
old.  This will reduce their database sizes considerably once they get on a
regular schedule.

I do feel that there are systems (HP included) that can handle their growth
without a problem.  I know of many clients with much larger databases than
this that have few problems with the capacity and capabilities of their HP
systems, and I have received many emails today supporting this.  I may be
totally wrong in my opinions, but I have seen it done elsewhere, and
something tells me that this client should be no different.  Since I am
concerned about all my clients, I like to test the waters to see what other
solutions are available so that I can help them make the best decisions.  No
ill will was intended towards you or your company.  Thanks for your input.
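To put the volumes above in perspective, here is a quick back-of-the-envelope
calculation (the working-day figure is my own assumption, not a number from
this account):

```python
# Rough arithmetic on the annual volumes quoted above.
claims_per_year = 1_000_000
service_lines_per_year = 2_000_000
working_days = 250  # assumed business days per year

adds_per_day = (claims_per_year + service_lines_per_year) / working_days
print(f"~{adds_per_day:,.0f} record adds per working day")  # ~12,000
```

Twelve thousand adds spread over a working day is a modest steady-state load
for a machine in this class, which is why the raw growth rate alone does not
alarm me.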
 
Steven N Lastic
SW Consulting, Inc.     800-494-4977
[log in to unmask]
 
----------
From:   Bill Lancaster
Sent:   Wednesday, August 28, 1996 12:57 PM
To:     Steven Lastic; [log in to unmask]
Subject:        Re: Questions about Large Image Databases
 
At 03:53 AM 8/28/96 UT, Steve Lastic wrote:
>I have several questions pertaining to large Image databases.  I am currently
>working at a client whose database is growing and they are concerned about
>capacity and performance problems they are suffering.  Currently, due to a
>consultant's recommendation, they are on a path to split their database into
>two identical databases, each having approximately half the records.  They
>will then use Netbase to shadow both databases into one large database for
>reporting purposes.  I strongly recommended against doing this, since I feel
>that IMAGE can easily handle the number of records they are working with.
>Their database is currently approx. 150 datasets with the largest being 18.5
>million records.  Most datasets are in the 100k-1m entry range.  The physical
>size of the database has grown to over 60 million sectors.  Below is a
>Suprtool FO SETS command showing the database:
>
<database stuff snipped>
>
>Another thing to consider is that this database is from a canned package, so
>the structure of the database cannot be changed dramatically.  The company
>that put this database together used a lot of automatic masters and many
>IMAGE sort paths.  Since then they have begun using OMNIDEX, but they still
>rely on many of the IMAGE paths and sort items.  I just wanted to know if
>anyone else has had experience with large IMAGE databases.  I would like to
>get some of the following questions answered if possible.
>
>1.  Does this database seem unusually large?
>2.  Does anyone see problems with the current capacities/blocking factors?
>3.  The DBA tends to use the old 80% fullness rule for masters.  Is this
>still true on datasets where there are 4 million entries and the 80% rule
>says to set the capacity to 5 million?
>4.  What can be done to improve database performance?
>5.  What database spreading techniques do you recommend on a database of
>this size?
>6.  Does anyone use DDE to expand the capacity of their databases, and what
>are the pros/cons?
>7.  Does anyone else besides me think that IMAGE can handle a database of
>this size?
>
>
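On the 80% fullness rule in question 3, the reasoning can be illustrated with
a small simulation (a sketch under an assumed uniform hash; IMAGE's actual
hashing of real key values, and its handling of secondary chains, will
differ):

```python
import random

def secondary_fraction(entries, capacity, seed=42):
    """Hash `entries` records uniformly into a master set of `capacity`
    slots and return the fraction that land on an already-occupied slot.
    Those become secondaries, chained off the primary entry and searched
    serially on lookups and puts."""
    rng = random.Random(seed)
    occupied = set()
    secondaries = 0
    for _ in range(entries):
        slot = rng.randrange(capacity)
        if slot in occupied:
            secondaries += 1
        else:
            occupied.add(slot)
    return secondaries / entries

capacity = 100_000  # scaled-down stand-in for a 5-million-slot master
for fill in (0.50, 0.80, 0.95):
    frac = secondary_fraction(int(capacity * fill), capacity)
    print(f"{fill:.0%} full: {frac:.1%} of entries are secondaries")
```

The secondary fraction climbs steadily as fullness rises past 80%, which is
the behavior the rule is guarding against; whether the extra headroom is
worth a million empty slots on a 4-million-entry master is the judgment call
being asked about.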
 
Being the consultant in question, I would like to add a couple of issues to
this.

First, the biggest concern is that this customer has explosive annual
growth.  They are fully expecting to increase their business by more than an
order of magnitude in the next few years.

Second, it isn't just a question of whether IMAGE can handle the workload.
Clearly it can, for now.  My concern is that as the account grows, they will
near the DBPUT/DBDELETE semaphore bottleneck point.  Frankly, they aren't
that far from it now.

Finally, I am encouraging this account to think vertically when it comes to
growth, instead of horizontally.  In other words, I don't think that the
high end of the HP 3000 line can keep pace with their growth.  Really, no
hardware vendor can.  So my recommendation is for them to begin clustering
systems, using NetBase, disk arrays, and mirrored disk, for maximum high
availability, growth potential, and minimized user impact in the event of
system outages (which sometimes happen daily at this account).
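The semaphore concern amounts to a throughput ceiling: puts against one
database serialize on one semaphore, so splitting into two databases roughly
doubles the ceiling.  A back-of-the-envelope sketch (the hold time and
working-hours figures are illustrative assumptions, not measurements from
this account):

```python
# All figures below are illustrative assumptions, not measurements.
sem_hold_s = 0.002               # assumed time one DBPUT holds the semaphore
ceiling_per_db = 1 / sem_hold_s  # serialized puts/second through one database

annual_adds = 3_000_000          # ~1M claims + 2M service lines per year
busy_seconds = 250 * 8 * 3600    # assumed working hours per year
avg_rate = annual_adds / busy_seconds

print(f"ceiling: {ceiling_per_db:.0f} puts/s per database")
print(f"current average: {avg_rate:.2f} puts/s")
```

Average rates like this understate the real exposure: batch conversions and
peak-period runs concentrate the puts, and it is at those peaks, multiplied
by tenfold growth, that the serialized semaphore starts to bite.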
 
So, let's not be confused by the recommendation.  The recommendation has a lot
more to do with other issues than simply Image performance.
 
Bill
 
>Any information on this subject would be greatly appreciated.  Thanks in
>advance for your help.
>
>Steven N. Lastic
>SW Consulting, Inc.
>Email: [log in to unmask]
>1-800-494-4977 or 941-656-1011
>
>
---
Bill Lancaster         Lancaster Consulting
(541)926-1542 (phone)  (541)917-0807 (fax)
[log in to unmask]       http://www.proaxis.com/~bill
