Subject: | |
From: | |
Reply To: | [log in to unmask] |
Date: | Fri, 15 Oct 1999 08:59:06 -0700 |
Content-Type: | text/plain |
Parts/Attachments: |
|
|
>Please could anybody share their experience with a DLT7000 as backup device
>used on a high-end 9x9KS with dedicated FWD interface?
With uncompressed data, we've achieved rates around 20GB/hour with
a prototype version of Backup+. We did a test at HP on a 989KS/800
using one of their benchmark databases with high compression, and
achieved an aggregate throughput of 60GB/hour to a single drive!
The rated throughput of a DLT7000 in streaming mode is 18GB/hour,
but burst rates are higher. The trick is to keep feeding the drive
data (large block sizes, a dedicated channel, and ready data) fast
enough to improve upon the 18GB/hour rate. Assuming only the
rated speeds, you should be able to obtain 18GB/hour with no
compression and 36GB/hour with 2:1 compression. You can expect
higher rates if your data is more compressible.
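For anyone who wants to run the arithmetic themselves, here's a quick
sketch. The 18GB/hour native rate is the figure quoted above; the
compression ratios in the loop are just illustrative assumptions:

```python
# Back-of-the-envelope estimate of DLT7000 backup throughput.
# 18 GB/hour native streaming rate is from the message above;
# the compression ratios below are illustrative, not measured.

DLT7000_NATIVE_GB_PER_HOUR = 18.0

def effective_throughput(compression_ratio: float) -> float:
    """Effective host-data throughput with on-the-fly compression.

    The tape still streams at its native rate, but each byte written
    represents compression_ratio bytes of host data.
    """
    return DLT7000_NATIVE_GB_PER_HOUR * compression_ratio

def hours_to_back_up(dataset_gb: float, compression_ratio: float) -> float:
    return dataset_gb / effective_throughput(compression_ratio)

for ratio in (1.0, 2.0, 3.0):
    print(f"{ratio}:1 compression -> "
          f"{effective_throughput(ratio):.0f} GB/hour")
```

So a 100GB database at 2:1 compression works out to a bit under three
hours, assuming the drive never falls out of streaming mode.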
As with all benchmarks, YMMV. In this case, the machine was highly
optimized for disk IO. They had a bunch (and I mean a BUNCH) of
4GB disks on which the database was stored, which meant a
tremendous amount of IO parallelism. The DLT7000 was on a FWD
channel all to itself, and the disks were balanced across quite
a number of channels. No one else was using the system, and
with 8 processors it had CPU to burn for doing the
compression.
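The point about parallelism can be put in numbers: the drive stays in
streaming mode only if the disks can collectively feed it at the tape's
compressed-input rate. Here's a rough check; the per-disk rate and disk
counts are my own illustrative assumptions, not figures from the actual
benchmark machine:

```python
# Rough check of whether spreading data across many spindles can keep
# a DLT7000 streaming. Per-disk rates and disk counts here are
# illustrative assumptions, not measurements from the benchmark.

TAPE_NATIVE_MB_PER_SEC = 5.0   # ~18 GB/hour native DLT7000 rate
COMPRESSION_RATIO = 2.0        # host bytes consumed per tape byte

def drive_kept_streaming(n_disks: int, mb_per_sec_per_disk: float) -> bool:
    """True if aggregate disk reads can feed the drive at its
    compressed-input rate (tape rate x compression ratio)."""
    required = TAPE_NATIVE_MB_PER_SEC * COMPRESSION_RATIO
    return n_disks * mb_per_sec_per_disk >= required

# e.g. 20 disks at 2 MB/s each comfortably exceed the ~10 MB/s needed,
# while 2 such disks would let the drive fall out of streaming mode.
print(drive_kept_streaming(20, 2.0))
print(drive_kept_streaming(2, 2.0))
```

That's why the "bunch of 4GB disks" mattered as much as the dedicated
FWD channel did.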