HP3000-L Archives

April 2000, Week 2

HP3000-L@RAVEN.UTC.EDU


Subject:
From:
Carl McNamee <[log in to unmask]>
Reply To:
Carl McNamee <[log in to unmask]>
Date:
Mon, 10 Apr 2000 08:25:59 -0500
Content-Type:
text/plain
Parts/Attachments:
text/plain (92 lines)
Thanks for your reply.

What really bugs me, and I have not found a good answer for it yet, is why
the number of spindles is so important.  I would think, like the EMC
engineer who advised no more than 45 GB per channel, that the number of
spindles would not be as important as the amount of data on the channel.

For instance, if I put 10 4GB drives on a channel I would have ~40GB of
data.  However, if I put 8 9GB drives I would have ~72GB of data.  I would
think that in a low-i/o environment the latter would be OK, but if the
applications did lots of i/o then you would be creating a severe
bottleneck.
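That arithmetic can be restated as a quick sketch (the function name here is
just for illustration; it only multiplies drive count by drive size):

```python
def channel_capacity_gb(drive_count, drive_size_gb):
    """Total data hanging off one SCSI channel for a given drive layout."""
    return drive_count * drive_size_gb

# The two layouts from the paragraph above:
print(channel_capacity_gb(10, 4))  # 10 x 4GB drives -> 40 GB on the channel
print(channel_capacity_gb(8, 9))   # 8 x 9GB drives  -> 72 GB on the channel
```

So the second layout puts nearly twice the data behind the same channel with
fewer spindles, which is exactly why the i/o rate, not the drive count, would
seem to be what matters.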

As I told someone else on Saturday, I feel that HP needs to work through
the configuration of an EMC/XP256 and give their customers real-world
examples of how to configure the things for heavy i/o vs. low i/o.  There
are lots of things to factor in when using an EMC/XP256 that you don't need
to consider when using JBOD.

Carl

-----Original Message-----
From: Costantino, Rocky [mailto:[log in to unmask]]
Sent: Monday, April 10, 2000 8:09 AM
To: [log in to unmask]
Subject: Re: EMC on HP3k question


Carl,

The maximum number of LDEVs on a fast-wide controller is 15 (using target
mode addressing). The recommendation of 8-10 drives is one that I have heard
frequently and agree with. The "size" of the LUNs is the limiting factor. If
the LUNs were 11.5GB (2:1 splits of each physical), you would be able to
assign more "storage per interface". Of course, this means a LONG WEEKEND
reload :(
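To put numbers on that point (a sketch only, using the figures from this
thread: a 15-LDEV cap per fast-wide controller, and 4.6GB vs 11.5GB LUNs
carved from 23GB physicals):

```python
MAX_LDEVS = 15  # per fast-wide controller, target mode addressing

def per_controller_gb(ldev_count, lun_size_gb):
    """Storage reachable through one controller at a given LUN size."""
    return ldev_count * lun_size_gb

# Current 4.6GB split vs the suggested 11.5GB (2:1) split, at the LDEV cap:
print(per_controller_gb(MAX_LDEVS, 4.6))   # ~69 GB per interface
print(per_controller_gb(MAX_LDEVS, 11.5))  # 172.5 GB per interface
```

With the LDEV count capped, the only way to get more storage behind one
interface is bigger LUNs, hence the reload.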

The caveat - I have been told by EMC engineers more than once that they
recommend not exceeding 45GB per controller on MPE. I do not necessarily
agree with this metric, as I have worked with customers running twice that
per FWD controller without a performance impact. The config I refer to
consisted of 18GB physical drives split into 9GB LUNs.


Regards,

>       _________________________________________
>
>       Rocky J. Costantino
>       Vice President
>
>       Computer Design & Integration, LLC
>       696 Route 46 West
>       Teterboro, NJ 07608
>
> *     e-mail  [log in to unmask]
> *     Web     http://www.cdillc.com
> *     Phone   (201) 931-1420 x224
> *     Fax             (201) 931-0101
>
>


-----Original Message-----
From: Carl McNamee [mailto:[log in to unmask]]
Sent: Friday, April 07, 2000 10:01 AM
Subject: EMC on HP3k question


This is a cross post from the EMC list server:

A bit of background and a summary of our problem. We have a 3700 frame that
is about 60% full of 23GB drives. The drives are split into 4.6GB logical
partitions. We assigned drives to the controllers based on HP's
recommendation of no more than 8-10 drives for performance, since we have
some very i/o-intensive applications.
My problem is that all 32 controllers are "full", i.e. have 8-10 drives
assigned, but the EMC box is only half full of drives. What I am interested
in is how many drives you have assigned to each controller.
HP's theory on the 8-10 drives deals with an optimal number of spindles, if
I'm not mistaken. Since a logical EMC drive does not necessarily equate to a
spindle, we are thinking about stringing the drives out 15-20 per controller
so that we can fill the frame with drives. In our theory we would just need
to ensure that we did not exceed the capacity of the f/w SCSI channel during
peak processing. I think this can be done by arranging the drives so that we
have a good mix of low-use and high-use drives on each controller.
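One way to sketch that mixing idea is a simple greedy assignment: take each
logical drive, busiest first, and place it on the controller with the least
estimated load so far. The drive names and i/o figures below are invented
for illustration, not taken from any real config:

```python
def balance(drives, n_controllers):
    """Greedy spread: busiest drive first, always onto the least-loaded controller."""
    controllers = [{"load": 0, "drives": []} for _ in range(n_controllers)]
    for name, io_rate in sorted(drives, key=lambda d: -d[1]):
        target = min(controllers, key=lambda c: c["load"])
        target["load"] += io_rate
        target["drives"].append(name)
    return controllers

# Hypothetical logical drives with rough peak i/o estimates:
drives = [("ldev1", 900), ("ldev2", 850), ("ldev3", 120),
          ("ldev4", 100), ("ldev5", 80), ("ldev6", 60)]
for c in balance(drives, 2):
    print(c["load"], c["drives"])
```

Each channel ends up with one hot drive and a few quiet ones, which is the
"good mix" described above; the real question remains whether the summed peak
load stays under what the f/w SCSI channel can actually deliver.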
Any thoughts or comments? Feel free to poke holes in this!
Carl McNamee
Systems Administrator
Billing Concepts
