HP3000-L Archives (HP3000-L@RAVEN.UTC.EDU), December 1999, Week 2


Options: Use Monospaced Font
Show Text Part by Default
Show All Mail Headers

Message: [<< First] [< Prev] [Next >] [Last >>]
Topic: [<< First] [< Prev] [Next >] [Last >>]
Author: [<< First] [< Prev] [Next >] [Last >>]

Print Reply
Subject: Re: RAID5 Disc's
From: John Clogg <[log in to unmask]>
Reply-To: John Clogg <[log in to unmask]>
Date: Thu, 9 Dec 1999 09:34:40 -0800
Content-Type: text/plain

This is good (and exciting) news!  What kind of system were you using?

-----Original Message-----
From: Rick Winford [mailto:[log in to unmask]]
Sent: Thursday, December 09, 1999 9:15 AM
To: [log in to unmask]
Subject: Re: RAID5 Disc's


"Costantino, Rocky" wrote:

> I will offer a real life example that included an issue of "logical
volumes"
> as well as RAID-5 vs. RAID-1.
>
> The same account is now looking at data center consolidation and the XP256
> will be used in a RAID-5 configuration with 36.9GB drives yielding 7.2GB
> LUNs. The XP256 boasts that RAID-5 only suffers 3-4% less performance than
> RAID-1 (we'll see). The same software will be run. It is still early in
the
> migration, so I do not have any performance data yet but will post it when
> available (2nd quarter 2000 probably).
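
As an aside, the drive-to-LUN arithmetic works out if you assume a 3D+1P
parity group; that grouping is a guess on my part, not a confirmed detail
of this XP256 configuration:

    # Hedged sanity check on the quoted RAID-5 numbers; assumes a 3D+1P
    # parity group (3 data drives + 1 parity), which may not match the
    # actual XP256 geometry in use.
    drive_gb = 36.9                     # raw capacity per drive (quoted)
    data_drives_per_group = 3           # assumed 3D+1P layout
    usable_gb = data_drives_per_group * drive_gb  # ~110.7 GB per group
    print(usable_gb / 7.2)              # -> ~15 LUNs of 7.2 GB per group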

I consulted with a customer this fall to evaluate the performance
advantages of an XP256 over JBOD on their large production system.
Luckily, they had a test system with almost as many JBOD disks, so we had
a month to do some serious benchmarking and testing.  The test system had
64 disks, a mix of 4GB and 9GB, in HASS (Jamaica) cabinets, and we were
comparing that performance to 60 OPEN-9 (7.5GB) LUNs on the XP256 in a
RAID-5 configuration.  (The JBOD was mirrored.)  We were able to restore
all databases and data files to their original condition before each run,
and we ran 3 to 4 iterations of each test on both the JBOD and the XP256.
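
In case it's useful to anyone planning a similar evaluation, the protocol
was essentially the loop below.  This is a minimal sketch; the
restore_baseline.sh and run_test.sh commands are placeholders standing in
for our actual restore jobs and workloads, not real scripts:

    import subprocess, time

    ITERATIONS = 4
    TESTS = ["app_batch", "tpcb", "detpack", "integrity_scan"]

    def run_trial(test):
        # Restore databases and files to the identical starting state so
        # every iteration sees the same data layout.  (Placeholder command.)
        subprocess.run(["restore_baseline.sh"], check=True)
        start = time.time()
        subprocess.run(["run_test.sh", test], check=True)  # placeholder
        return time.time() - start

    for test in TESTS:
        times = [run_trial(test) for _ in range(ITERATIONS)]
        # Report best and median elapsed times across iterations.
        print(test, min(times), sorted(times)[len(times) // 2])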

The results were pretty impressive:  in all cases the XP256 outperformed
the JBOD, from a 20% improvement on the low end to 65% on the high end.

We benchmarked the client's main application processes, a TPC-B
implementation, and several Adager database functions: detpack
(write-intensive) and integrity scan (read-intensive).
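
For anyone unfamiliar with TPC-B, each transaction is a small bank update:
adjust one account, cascade the delta to its teller and branch, and log a
history record.  Our implementation ran against the client's own databases;
the sketch below is just an illustration of the transaction shape, using
sqlite for self-containedness, not our actual code:

    import random, sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE accounts(id INTEGER PRIMARY KEY, branch INT, balance INT);
        CREATE TABLE tellers (id INTEGER PRIMARY KEY, branch INT, balance INT);
        CREATE TABLE branches(id INTEGER PRIMARY KEY, balance INT);
        CREATE TABLE history (account INT, teller INT, branch INT, delta INT);
        INSERT INTO branches VALUES (1, 0);
        INSERT INTO tellers  VALUES (1, 1, 0);
        INSERT INTO accounts VALUES (1, 1, 0);
    """)

    def tpcb_txn(account, teller, branch, delta):
        # One atomic transaction: three updates plus a history insert,
        # which is what makes TPC-B so write-heavy.
        with con:
            con.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                        (delta, account))
            con.execute("UPDATE tellers  SET balance = balance + ? WHERE id = ?",
                        (delta, teller))
            con.execute("UPDATE branches SET balance = balance + ? WHERE id = ?",
                        (delta, branch))
            con.execute("INSERT INTO history VALUES (?, ?, ?, ?)",
                        (account, teller, branch, delta))

    tpcb_txn(1, 1, 1, random.randint(-99999, 99999))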

Because of the results of the testing, they decided to put the XP256 into
production, and the actual results were even better than we expected from
the benchmark.  The client manager commented, "we hit a home run with
this."

The reasons for the success:

1.  We were replacing similar numbers of logical MPE disks between the
JBOD and the XP256, not going from 100 disks to 20, for example.

2.  The XP256 screams.  The client also has experience with EMC disks,
and they chose the XP256.

3.  We saw a bigger improvement on write-intensive benchmarks than on
read-intensive ones, due to the way memory cache works on the XP256.
Since the cache is mirrored, the XP256 can acknowledge a write as soon as
the data are in memory, rather than waiting until they are actually on
the disk mech (there's a small sketch of this after the list).

4.  We were able to disable MPE disk mirroring.  While MPE mirroring is
not terribly expensive, it does require twice as many disk writes, so
there is some overhead on the channels, the IO controllers, the CPUs,
etc.  The sketch below contrasts both write paths.
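
To make points 3 and 4 concrete, here is a toy model of the two write
paths.  Pure illustration: the latency numbers are made up, and real array
firmware is vastly more involved:

    import time

    DISK_WRITE_S  = 0.010   # made-up service time for one physical disk write
    CACHE_WRITE_S = 0.0001  # made-up time to land data in mirrored array cache

    def host_mirrored_write(block):
        # MPE host-based mirroring (point 4): the host issues two physical
        # writes, one per mirror half, and pays for both on its channels.
        time.sleep(DISK_WRITE_S)  # write to primary disk
        time.sleep(DISK_WRITE_S)  # write to mirror disk
        return "ack after 2 disk writes"

    def array_writeback_write(block):
        # XP256-style write-back cache (point 3): the array acknowledges as
        # soon as the data sit in mirrored cache; the destage to the RAID-5
        # parity group happens later, off the host's critical path.
        time.sleep(CACHE_WRITE_S)
        return "ack from mirrored cache; destage happens asynchronously"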

Rick Winford
[log in to unmask]
usual disclaimer:  "my opinions are mine.....no one else would want them"
