HP3000-L Archives

January 2001, Week 1

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Chris Goodey <[log in to unmask]>
Reply To: Chris Goodey <[log in to unmask]>
Date: Wed, 3 Jan 2001 14:29:30 -0800
Content-Type: text/plain
Parts/Attachments: text/plain (13 lines)

We don't have enough information from the original poster of
this question to know for sure, but it seems to me that a
dataset that grew from something under 100 megabytes to several
hundred megabytes or more could simply have gone from doing
memory-resident updates to having to really pound the disc, and
thus run slower and slower each time the capacity increased.

If the machine has several gigabytes of memory, this is
probably not the problem, but if it has more like 256 MB or
so, then I would expect batch updates to get slower and slower
as the database size went past available memory.
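
To put rough numbers on that, here is a minimal back-of-envelope
sketch in Python. Every figure in it is an assumption for
illustration (10 ms per disc access, 50 microseconds per cached
update); nothing is specific to IMAGE or MPE. It simply models a
random update as a cache hit while the dataset fits in memory,
and as a disc access with probability (1 - memory/dataset) once
it doesn't:

def batch_update_seconds(dataset_mb, memory_mb, updates,
                         disc_ms=10.0, cached_us=50.0):
    """Estimate wall time for 'updates' random record updates."""
    # While the dataset fits in memory, every update is a hit.
    hit_rate = min(1.0, memory_mb / dataset_mb)
    # Past that point, a random update misses the cache with
    # probability (1 - memory/dataset) and pays a disc access.
    misses = updates * (1.0 - hit_rate)
    hits = updates - misses
    return hits * cached_us / 1e6 + misses * disc_ms / 1e3

for size in (100, 256, 500, 1000):
    t = batch_update_seconds(size, memory_mb=256, updates=1_000_000)
    print(f"{size:5d} MB dataset: ~{t:,.0f} s for 1M random updates")

Under those assumptions, a million random updates finish in
under a minute while the dataset fits in 256 MB, but take well
over an hour at 500 MB, which is exactly the "slower and slower
each time the capacity increased" pattern described above.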
