Subject:
From:
Reply To:
Date: Mon, 13 Jan 1997 16:21:37 -0800
Content-Type: text/plain
Parts/Attachments:
I too would like to add my vote to the building chorus!
I haven't plowed deeply enough into their search engine to
tell, but if they are pulling the actual content of the
technical documents directly from a database (which I'd hope
they do) instead of from static web pages, most web crawlers
and robots wouldn't get the information. It's been a while since
I've looked at the pseudo-standards employed by the bots, but
at one time they ignored (for many technical reasons) dynamically
generated URLs. I know that at my own site I do not specifically restrict
them from those locations, but I've never seen one traverse those
(dynamic) pages either.
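A minimal sketch of the heuristic that crawlers of that era applied: treat any URL containing a CGI query string or a cgi-bin path as "dynamic" and skip it. The URL patterns below are illustrative assumptions, not HP's actual site layout:

```python
# Sketch of the "skip dynamic URLs" heuristic early web robots used.
# The example URLs are hypothetical, not taken from any real server.

def looks_dynamic(url: str) -> bool:
    """Return True if a crawler of that era would likely skip this URL."""
    # A '?' marks a CGI query string; '/cgi-bin/' marks a server-side script.
    return "?" in url or "/cgi-bin/" in url

urls = [
    "http://example.com/docs/manual.html",   # static page: crawled
    "http://example.com/cgi-bin/search",     # CGI script: skipped
    "http://example.com/doc?id=1234",        # query string: skipped
]

for u in urls:
    print(u, "-> skipped" if looks_dynamic(u) else "-> crawled")
```

If the technical documents are served only through URLs like the last two, they would be invisible to such robots even without an explicit restriction in robots.txt.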
----------
From: Stan Sieler[SMTP:[log in to unmask]]
Sent: Monday, January 13, 1997 11:53 AM
To: [log in to unmask]
Subject: [HP3000-L] HPSL: problems & suggested solutions
<snip>
2) The documents in the technical support database must be put
on a publicly available web server, searchable by all
web crawlers/robots/spiders ... so we have multiple public
search engines available for searching the documents, instead
of relying on a single server at HPSL.
(With many web servers, significant portions of the documents would
be available via the search engine even if HP's server is offline.)
This would have a secondary benefit of dramatically increasing
the performance of searching for bug reports, and the sophistication
of queries. (Performance increase because the searching is distributed
over multiple search engines, instead of one server (or set?) at HP.)
<snip>
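The redundancy argument in the quoted proposal can be sketched as a simple client-side fallback: try several public search engines that have indexed the same documents, and take the first one that answers. The engine URLs and the fetch function here are purely hypothetical placeholders:

```python
# Sketch of searching across multiple public engines with fallback.
# 'engines' and 'fetch' are illustrative assumptions, not real services.

def search(term, engines, fetch):
    """Try each engine in turn; return the first successful response,
    or None if every engine is offline."""
    for base in engines:
        try:
            return fetch(base + term)
        except OSError:
            continue  # this engine is down; fall through to the next one
    return None

# Simulated usage: the first engine is offline, the second responds.
def fetch(url):
    if "searcher-a" in url:
        raise OSError("host unreachable")
    return "results from " + url

hit = search("MPE", ["http://searcher-a.example/q=",
                     "http://searcher-b.example/q="], fetch)
print(hit)
```

Because each engine holds its own copy of the index, a single server being offline (HP's included) no longer makes the documents unsearchable.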
Regards,
Michael L Gueterman
Easy Does It Technologies
email: [log in to unmask]
http://www.editcorp.com
voice: (509) 943-5108
fax: (509) 946-1170