HP3000-L Archives

September 2003, Week 4

HP3000-L@RAVEN.UTC.EDU

Subject:
From:     "Simpkins, Terry" <[log in to unmask]>
Reply-To: Simpkins, Terry
Date:     Wed, 24 Sep 2003 18:35:57 -0400

I was preparing to ask a very similar question, so I will.

We're not planning to upgrade the box, because we don't see significant CPU utilization,
but we have been seeing long disk queue lengths reported by SOS (Lund).

Our current configuration is:

959/200 (maximum memory installed)
system volume set (Model 10 array w/ 6 disks on F/W)
3 user volume sets 
  - total of 10 HASS enclosures
  - 32 2.0Gb disks (16 mirrored to 16) on 4 F/W channels
  - 48 4.3Gb disks (24 mirrored to 24) on 4 F/W channels

We intentionally stayed with smaller drives to keep the spindle count up.
The production data is all on the 32 2Gb drives, eight drives per F/W channel.
The 4.3Gb drives sit twelve per F/W channel, but those hold only the test and backup
sets.  (A rough back-of-the-envelope on the production channels follows below.)
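
For context, here is the arithmetic I keep doing in my head, written out as a
throwaway Python sketch (nothing HP-specific).  The 20 MB/s figure is the nominal
Fast/Wide SCSI rate per channel; the per-spindle rate is only an assumed ballpark
for drives of this vintage, not a measurement:

    # Back-of-the-envelope throughput for the production (2Gb) volume sets.
    # Assumptions, not measurements: ~20 MB/s nominal per F/W SCSI channel,
    # roughly 4 MB/s sustained per spindle for disks of this era.
    FW_CHANNEL_MB_S = 20.0    # nominal Fast/Wide SCSI bus rate (assumed)
    PER_SPINDLE_MB_S = 4.0    # assumed sustained rate per disk (ballpark)

    channels = 4
    drives_per_channel = 8    # 32 production drives spread over 4 channels

    bus_ceiling = channels * FW_CHANNEL_MB_S
    spindle_ceiling = channels * drives_per_channel * PER_SPINDLE_MB_S

    print("Bus ceiling:     %3.0f MB/s over %d F/W channels"
          % (bus_ceiling, channels))
    print("Spindle ceiling: %3.0f MB/s if all %d drives stream at once"
          % (spindle_ceiling, channels * drives_per_channel))

With those assumed numbers the spindles can already outrun the buses (roughly
128 MB/s of disk versus 80 MB/s of channel), which is why the queue lengths make
me suspect the channels rather than the CPU.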

What would others recommend to improve disk throughput? 
Should we replace the F/W SCSI with Fibre Channel?
Should we replace the HASS units with Arrays and eliminate the mirroring?

Any other ideas?

*****************************
Terry W. Simpkins
Director - ISIT
Measurement Specialties
757-766-4278
[log in to unmask]
*****************************

