HP3000-L Archives

July 2000, Week 4

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Jerry Fochtman <[log in to unmask]>
Reply To: Jerry Fochtman <[log in to unmask]>
Date: Fri, 28 Jul 2000 10:41:10 -0500
At 09:19 AM 7/28/00 -0400, Kevin Smeltzer wrote:
>Has anyone experienced system sluggishness during a run of DBGEN? Our
>997/500 would not even let users sign on for 10 minutes during part of the
>run.  I was running DBGEN to decrease dataset sizes to save disc space,
>against an archive DataBase that has not been actively used in 3 years.

Kevin,

<plug>

To better understand the situation you've encountered, more information
would be needed: for example, which version of DBGeneral, which version
of MPE/iX, what type of datasets were involved, and, for detail sets,
whether you were using a reorg to reduce capacity or a simple capacity
change.  There are also other possibilities that can have a similar
impact, such as a telecom issue.  Unfortunately, to determine more
accurately what is going on, the system would need to be examined while
it is happening; at this point any answer is speculation, as there are
a number of possible causes.

Assuming that it is not communication related, a couple of other
possibilities come to mind, provided you are using one of the recent
versions of DBG.  First, if you are reducing the capacity of a detail
set with a capacity change, this function simply performs an
FCLOSE/trim operation on the file once all the necessary
characteristics and requirements have been verified.  This is only
possible if your target capacity is at or above the set's existing
High Water Mark.  It is not likely that this alone would have the
significant impact you've described.
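To make that rule concrete, here is a tiny sketch of the check in C.
The function name and the numbers are made up for illustration; this
is not DBGeneral's code, just the rule stated above.

/* Illustrative only: an in-place capacity decrease is legal when the
 * requested capacity is at or above the set's High Water Mark, i.e.
 * no existing entries live beyond the new end of the file.
 */
#include <stdio.h>

static int can_trim_in_place(long target_capacity, long high_water_mark)
{
    return target_capacity >= high_water_mark;
}

int main(void)
{
    long hwm    = 750000;   /* highest entry number ever used in the set */
    long target = 800000;   /* requested (smaller) capacity              */

    if (can_trim_in_place(target, hwm))
        printf("OK: FCLOSE/trim the file down to %ld entries\n", target);
    else
        printf("Cannot trim to %ld: below the HWM of %ld, a reorg is needed\n",
               target, hwm);
    return 0;
}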

However, if you are lowering master set capacities or using a detail
reorg to lower detail capacities, these functions build entirely new
copies of the file and then exchange the existing dataset/jumbo chunks
for the new copies.  In this case, when the work is completed and the
file is closed, the storage manager needs to ensure that all changes
still in memory are posted to disc.  When writing large files on a
large-memory system, the volume of file changes not yet posted to disc
can become quite large, especially if there is little other activity
on the system.

When the file itself is closed, all of those modified file pages in
memory must be posted to disc.  It is this activity by the storage
manager that you may be experiencing.  Without actually examining the
system while this is occurring, it is difficult to confirm that this
is indeed the situation you encountered.
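As a rough illustration of the magnitudes involved (the figures below
are invented, not measurements from your 997/500), consider how long
it takes simply to post a mostly-unposted rebuilt dataset at a typical
sustained disc write rate:

/* Back-of-the-envelope sketch of why the close can stall a system. */
#include <stdio.h>

int main(void)
{
    double dirty_mb      = 2048.0;  /* say a 2 GB rebuilt dataset, largely unposted */
    double disc_mb_per_s = 5.0;     /* assumed sustained write rate of one disc     */

    double seconds = dirty_mb / disc_mb_per_s;
    printf("Posting %.0f MB at %.1f MB/s takes roughly %.0f seconds (%.1f minutes)\n",
           dirty_mb, disc_mb_per_s, seconds, seconds / 60.0);
    return 0;
}

A couple of gigabytes of unposted pages at a few MB/s works out to
several minutes of near-solid disc I/O, which is in line with the
sign-on stall you described.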

I believe this particular problem has surfaced before in discussions
with HP.  It tends to occur on systems with a decent amount of memory
running high-performance processes that modify large volumes of data
in fairly large files, with little other concurrent memory pressure.
I do not recall the status of those discussions; perhaps someone else
on the list remembers.

At Bradmark, we have seen this occur on a few occasions over the last
couple of years.  We've been looking at how we might detect that this
scenario is about to occur, and how best to minimize or prevent the
operating system's storage-management function from causing it when
DBGeneral closes the file.
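As an example of the general direction (a generic POSIX sketch, not
what DBGeneral actually does on MPE/iX, which has its own intrinsics
and storage-manager behaviour), one way a utility can keep the
close-time flush from ballooning is to force the new copy to disc in
bounded chunks as it is written, rather than leaving everything for
the final close:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

#define CHUNK      (64 * 1024)          /* size of each write                */
#define SYNC_EVERY (16 * 1024 * 1024)   /* force to disc after ~16 MB dirty  */
#define TOTAL      (64L * 1024 * 1024)  /* size of the (illustrative) copy   */

int main(void)
{
    int fd = open("dataset.new", O_WRONLY | O_CREAT | O_TRUNC, 0640);
    if (fd < 0) { perror("open"); return 1; }

    char buf[CHUNK];
    memset(buf, 0, sizeof buf);

    long dirty = 0;
    for (long written = 0; written < TOTAL; written += CHUNK) {
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("write");
            close(fd);
            return 1;
        }
        dirty += CHUNK;
        if (dirty >= SYNC_EVERY) {
            fsync(fd);      /* bound the backlog instead of saving it all up */
            dirty = 0;
        }
    }

    fsync(fd);              /* very little left to post at close time        */
    close(fd);
    return 0;
}

The trade-off is a little extra elapsed time during the rebuild itself
in exchange for never having several minutes' worth of dirty pages
queued up behind a single close.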

</plug>
