HP3000-L Archives

March 2001, Week 4

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Gavin Scott <[log in to unmask]>
Reply-To: Gavin Scott <[log in to unmask]>
Date: Sun, 25 Mar 2001 15:46:27 -0800
Content-Type: text/plain
Michael suggests:
> This is just thinking off the top of my head here, but
> "what if" (old HP (tm) slogan :)
>
> .  A new BLOB datatype is added to image.
>    .  It is essentially a 1024 byte variable length string.
>    .  The value represents the HFS pathname.

Since Image doesn't do variable-sized entries, I don't think it would be
easy to support a variable-length filename data type.  1 KB per blob seems
like a lot of overhead, so I'd suggest instead something like a 64-bit
identifier that would be generated by Image internally and which could be
mapped onto a filename, plus perhaps a per-dataset "base" HFS path
specifying the directory (or the starting point of a two- or three-level
hierarchy, for efficiency) where the files live.  I'd suggest providing a
user-exit for people who want to override the identifier-to-filename
mapping function, and possibly a user-exit for the identifier-generation
function to allow things like automatically merging duplicate blobs, etc.
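
To make that concrete, here's a rough C sketch of what the
identifier-to-filename mapping might look like (everything below is
hypothetical, not an actual Image interface).  Fanning the top two bytes
of the id out into two directory levels keeps any single directory from
collecting millions of files:

#include <stdio.h>

/* Hypothetical default mapping: turn a 64-bit blob identifier into
 * an HFS pathname under a per-dataset base directory.  The top two
 * bytes of the id pick two intermediate directories. */
void blob_id_to_path(const char *base, unsigned long long id,
                     char *path, size_t len)
{
    snprintf(path, len, "%s/%02x/%02x/%016llx",
             base,
             (unsigned)((id >> 56) & 0xff),
             (unsigned)((id >> 48) & 0xff),
             id);
}

With a base of /DATA/blobs, an id of 0x0123456789ABCDEF would come out as
/DATA/blobs/01/23/0123456789abcdef; a site's user-exit could substitute
any scheme it liked.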

> .  The BLOB data is stored as a standard bytestream file except
>    it also has a privileged filecode.

If you made them non-PRIV (or somehow allowed non-PRIV *read* access),
then you could get Image to tell you the filename and then use Samba to
serve up the data directly, without having to extract it from Image, write
it to a temp file, serve the temp file, then delete the temp file.  There
could still be locking issues, but these could be mitigated by saying that
any change to a blob allocates a new identifier and a new bytestream file,
so a "handle" to a blob file, once given out, would always either get you
the data you expected or get you nothing if the old file had been deleted.
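
A minimal sketch of that update rule, again with hypothetical helpers
(blob_new_id, dataset_repoint, and the blob_id_to_path mapping above are
all assumed, not real Image calls):

#include <stdio.h>

extern void blob_id_to_path(const char *base, unsigned long long id,
                            char *path, size_t len);
extern unsigned long long blob_new_id(void);     /* assumed allocator */
extern void dataset_repoint(unsigned long long old_id,
                            unsigned long long new_id);

/* Updating a blob never rewrites the existing bytestream file: a new
 * id and file are created, the dataset entry is repointed, and the
 * old file is removed.  A stale handle therefore sees either the
 * complete old data or nothing, never a half-written file. */
unsigned long long blob_update(const char *base,
                               unsigned long long old_id,
                               const void *data, size_t nbytes)
{
    char path[256];
    FILE *f;
    unsigned long long new_id = blob_new_id();

    blob_id_to_path(base, new_id, path, sizeof path);
    if ((f = fopen(path, "wb")) == NULL)
        return 0;                        /* caller handles failure */
    fwrite(data, 1, nbytes, f);
    fclose(f);

    dataset_repoint(old_id, new_id);     /* swap the id in the entry */
    blob_id_to_path(base, old_id, path, sizeof path);
    remove(path);                        /* old file goes away */
    return new_id;
}

A real implementation would presumably defer the remove() until no reader
could still hold the old filename, but the invariant is the same.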

> .  A new flag could be added to the root file to determine whether
>    or not the BLOB HFS files should be associated with the database
>    (thus allowing you to store/restore the database without regard
>    to the potentially zillion BLOB files that are internally referenced).

One idea would be to have the blob file "open" function (possibly via a
user-exit) look first in a "newblobs" directory, then in the main "blobs"
location.  You could then implement a scheme where new blobs are created
in the "newblobs" location and later get moved into the "blobs" location
after they are deemed to be worth adding to the permanent archive, or
after the "newblobs" location is backed up, or something.  "newblobs"
might be an ordinary disk directory but "blobs" might be, say, a
write-once optical library, etc.
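
The "open" user-exit for that scheme could be as simple as trying the two
locations in order.  The directory names here are made up for
illustration:

#include <stdio.h>

extern void blob_id_to_path(const char *base, unsigned long long id,
                            char *path, size_t len);

/* Hypothetical open user-exit: check the staging area first, then
 * fall back to the permanent archive.  Migration from "newblobs"
 * to "blobs" happens out of band (after backup, review, etc.). */
FILE *blob_open(unsigned long long id)
{
    char path[256];
    FILE *f;

    blob_id_to_path("/DATA/newblobs", id, path, sizeof path);
    if ((f = fopen(path, "rb")) != NULL)
        return f;                        /* still in staging */

    blob_id_to_path("/DATA/blobs", id, path, sizeof path);
    return fopen(path, "rb");            /* archive, or NULL if gone */
}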

> While the above is "rough" to say the least, it gives Image the
> ability to handle BLOBs as a dataitem while not sacrificing Image
> internals.

Ditto.

There might also be hybrid schemes where blobs smaller than a certain size
are stored directly in Image datasets, while larger ones are stored in
external files, though this limits things like the Samba idea above.
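
A hybrid reference might look something like this (the threshold and the
field names are invented for the example):

#include <stddef.h>
#include <string.h>

extern unsigned long long blob_new_id(void);     /* assumed allocator */
extern int blob_write_external(unsigned long long id,
                               const void *data, size_t nbytes);

#define INLINE_MAX 4096   /* assumed cutoff between inline and external */

typedef struct {
    unsigned long long id;        /* 0 means the data is inline */
    unsigned short inline_len;    /* bytes used in inline_data */
    unsigned char inline_data[INLINE_MAX];
} blob_ref;

int blob_store(blob_ref *ref, const void *data, size_t nbytes)
{
    if (nbytes <= INLINE_MAX) {
        /* Small blob: keep it right in the dataset entry. */
        ref->id = 0;
        ref->inline_len = (unsigned short)nbytes;
        memcpy(ref->inline_data, data, nbytes);
        return 0;
    }
    /* Large blob: external bytestream file, only the id is stored. */
    ref->id = blob_new_id();
    ref->inline_len = 0;
    return blob_write_external(ref->id, data, nbytes);
}

Only the externally stored blobs would then be reachable through something
like the Samba trick; inline ones would still need extraction.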

G.
