Date: Tue, 15 Sep 1998 09:26:29 -0400
Content-Type: text/plain
Bill Lancaster wrote:
> But, to borrow from Paul Harvey, there is a "rest of the story"...
>
> Having more memory than a rule of thumb would calculate isn't a waste.
> Rather, it can be a significant performance enhancement. By having
> more memory, more data can be brought in and kept in (with an overall
> reduction in the amount of swapping). With apologies to Ken Stout (a
> quality performance guy) I haven't seen a real-world situation where
> more memory (and less swapping) hasn't resulted in at least some gain,
> though there is a point of diminishing return where the improvement
> gets smaller (but never zero).
But remember that MPE is also doing write caching, and more memory means an
ever-larger XM (Transaction Manager) log buffer to flush. You can reach a
point where XM checkpoint time becomes "excessive", and you get periods
where the system appears unresponsive for 5-10 seconds or more.
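As a back-of-envelope sketch of why the pause grows (this is not a model of
MPE/XM internals; the function name and the throughput figure are my own
illustrative assumptions):

```python
# Rough model: a checkpoint must flush everything dirtied since the
# last one, so the pause scales with accumulated dirty data, not with
# how fast the CPU is. Names and numbers here are hypothetical.
def checkpoint_pause_seconds(dirty_mb, disk_mb_per_s=5.0):
    """Estimate seconds to flush dirty_mb of buffered writes at a
    sequential disk throughput of disk_mb_per_s (assumed value)."""
    return dirty_mb / disk_mb_per_s

# At a 1990s-era ~5 MB/s of sustained disk throughput, 50 MB of
# dirty pages takes on the order of 10 seconds to flush:
print(checkpoint_pause_seconds(50))  # → 10.0
```

More memory simply lets more dirty data accumulate between checkpoints, so
the worst-case pause gets longer even though average throughput improves.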
This is a corner case on MPE, since it requires such extremes of both memory
and disk demand; but it is a standard concern in the HP 9000 world, where
sizing the "disk buffer cache" (our disk cache) and deciding how much memory
to dedicate to it are routine kernel-parameter questions. If you have
"large" memory (gigabytes), then "sync" can take considerable time to
execute. This is standard fodder for HP-UX sysadmin training (and preached
by Bill Hassell in his 'boot camps').
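You can see the effect directly on any Unix box by dirtying a chunk of the
buffer cache and timing the flush (a minimal sketch; the file name and size
are arbitrary, and on HP-UX the cache ceiling is governed by kernel tunables
such as dbc_max_pct):

```shell
# Dirty ~32 MB of buffer cache, then time how long sync takes to
# push it all to disk. The bigger the dirty cache, the longer the
# wall-clock time for sync.
dd if=/dev/zero of=/tmp/bigfile bs=1048576 count=32
time sync
rm -f /tmp/bigfile
```

The same write burst against a machine with a larger buffer cache ceiling
will show a proportionally longer sync.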
The two caches are conceptually the same thing (particularly if you are
using JFS under HP-UX), but it isn't an obvious area of concern under MPE,
thanks to the inherent efficiency of XM. Still, yes, I would agree with
Bill Lancaster that there are limits to the "more is better" paradigm.
Jeff Kell <[log in to unmask]>