HP3000-L Archives

Subject: Re: [HP3000-L] Memory usage on HP3000
From: James Reynolds <[log in to unmask]>
Reply To: James Reynolds <[log in to unmask]>
Date: Wed, 15 Feb 2006 13:55:19 -0500
Content-Type: text/plain

ALRIGHT BILL!!!!    Now life is back to normal...the birds are
birding...the bees are beeing...and IT DEPENDS is in its proper place!!


James
 

-----Original Message-----
From: HP-3000 Systems Discussion [mailto:[log in to unmask]] On
Behalf Of Bill Lancaster
Sent: Wednesday, February 15, 2006 1:52 PM
To: [log in to unmask]
Subject: Re: [HP3000-L] Memory usage on HP3000

John wrote:

>I would add a couple of comments to Bill's advice.  First of all, a poor
>read hit percentage is not necessarily indicative of a memory shortage.  It
>could easily be the result of poor data locality or very random retrievals.
>Of course, these are measures of overall system activity, not a specific
>process, so Bill's rule of thumb is valid in most cases.  Other indicators
>not mentioned are Memory Manager I/O and page faults.

I agree.  I'm working with a customer right now who is experiencing a poor
read hit percentage despite having plenty of memory.  Their issue is data
locality, either from database performance problems (fixable with Adager)
or from disk fragmentation (fixable with De-Frag/X).

I have had customers in the past who experienced very poor read hit
percentages due to very random retrievals.  I once had a customer who had
many, many databases with many datasets.  (They were effectively a service
bureau.)  Because the I/O in that environment was by its very nature highly
randomized, the OS's ability to eliminate I/Os via the cache was highly
restricted.
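
For anyone who wants to see why locality matters so much, here is a rough
back-of-the-envelope sketch: a toy LRU page cache run against a clustered
access pattern and a uniformly random one.  The cache size, page counts,
and request counts are all invented round numbers, not MPE internals; the
gap between the two hit percentages is the point.

    import random
    from collections import OrderedDict

    def read_hit_pct(requests, cache_pages):
        """Simulate an LRU page cache and return the read hit percentage."""
        cache = OrderedDict()
        hits = 0
        for page in requests:
            if page in cache:
                hits += 1
                cache.move_to_end(page)        # mark most recently used
            else:
                cache[page] = True
                if len(cache) > cache_pages:
                    cache.popitem(last=False)  # evict least recently used
        return 100.0 * hits / len(requests)

    random.seed(1)
    total_pages = 200_000   # pages on disk (invented)
    cache_pages = 4_000     # pages that fit in the cache (invented)
    n_reads     = 200_000

    # Clustered workload: 90% of reads land in a small hot region.
    clustered = [random.randrange(2_000) if random.random() < 0.9
                 else random.randrange(total_pages) for _ in range(n_reads)]
    # "Service bureau" workload: reads scattered uniformly over everything.
    scattered = [random.randrange(total_pages) for _ in range(n_reads)]

    print("clustered read hit %:", round(read_hit_pct(clustered, cache_pages), 1))
    print("scattered read hit %:", round(read_hit_pct(scattered, cache_pages), 1))

With the same cache, the clustered workload hits around 90% of the time
while the scattered one hits only a few percent, which is roughly the
service-bureau situation described above.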

Generally, though, these are edge cases, and read hit percentage is a
highly reliable indicator of a memory shortage.

Dave Waroff wrote:

>MPE performance strikes me as almost the inverse of a traditional virtual
>storage system where plentiful I/O is substituted for scarce memory.

and John replied:

>Not at all!  In fact, that "traditional virtual storage system" is exactly
>the issue.  As more and more of the data structures that are "virtually" in
>memory fail to fit into physical memory, more swapping must take place, and
>processes must wait for those swaps to complete.  When you increase
>physical memory the overhead of managing virtual memory goes down.  I/O is
>typically the bottleneck in a memory-starved system, hence my comments
>about memory manager I/O and page faults above.
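
To see why those swaps dominate, here is a quick back-of-the-envelope
calculation.  The latency numbers are generic order-of-magnitude figures,
not HP 3000 measurements.

    # Back-of-the-envelope only: average cost of a memory reference when
    # some fraction of references fault and wait for disk.  Latencies are
    # generic order-of-magnitude figures, not HP 3000 measurements.
    MEM_NS   = 100          # memory reference, ~100 nanoseconds
    FAULT_NS = 8_000_000    # page fault serviced from disk, ~8 milliseconds

    def effective_ns(fault_rate):
        # Weighted average of the fast path and the fault path.
        return (1 - fault_rate) * MEM_NS + fault_rate * FAULT_NS

    for rate in (0.0, 0.0001, 0.001, 0.01):
        slowdown = effective_ns(rate) / MEM_NS
        print(f"fault rate {rate:.2%}: about {slowdown:,.0f}x an all-memory reference")

Even one fault per ten thousand references makes the average reference
roughly nine times slower, which is why the memory manager I/O and page
fault counters are such telling indicators.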

OK, I'm sorry, but I just have to say it.  "It depends."  (There you go,
James.)  

Sometimes when you increase physical memory, the overhead of managing
virtual memory goes down but the overhead of managing main memory goes
up.  This is typically true of a large system with a lot of memory.  For
example, it's not uncommon for a large multiprocessor N-class with 16 GB
of memory to spend 4-8% of its CPU on memory management.  (That's a
"yellow" zone for most systems.)  While that range is also common on a
smaller system, 4-8% of the described box is MUCH more expensive than
4-8% of a 9x7, 9x8 or 9x9.
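
To put that 4-8% in perspective, here is a purely illustrative
calculation.  The processor counts and relative per-CPU speeds are
invented round numbers, not actual N-class or 9x8 specifications.

    # Purely illustrative: the same overhead percentage costs far more
    # absolute CPU on a big box.  CPU counts and relative speeds are invented.
    def mm_cpu_seconds_per_hour(n_cpus, relative_speed, overhead_pct):
        # "Small-box equivalent" CPU-seconds spent on memory management per hour.
        return 3600 * n_cpus * relative_speed * (overhead_pct / 100.0)

    big   = mm_cpu_seconds_per_hour(n_cpus=8, relative_speed=10, overhead_pct=6)
    small = mm_cpu_seconds_per_hour(n_cpus=1, relative_speed=1,  overhead_pct=6)

    print(f"big multiprocessor box:  ~{big:,.0f} equivalent CPU-seconds/hour")
    print(f"small uniprocessor box:  ~{small:,.0f} equivalent CPU-seconds/hour")
    print(f"same 6% overhead, roughly {big / small:.0f}x the absolute cost")

Same percentage, wildly different amounts of real work being burned.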

That being said, there really isn't much you can do about it, so you just
have to go along for the ride, keeping up on disk and database
maintenance along the way.  The maintenance issues in this environment
become both more important and harder to schedule (due to shrinking
maintenance windows), so the high-end folks are pretty much screwed
anyway.

:-)

Bill

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
