HP3000-L Archives

September 1997, Week 4

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Chris Bartram <[log in to unmask]>
Reply To:
Date: Mon, 22 Sep 1997 20:37:58 -0400
Content-Type: Text/Plain
Parts/Attachments: Text/Plain (86 lines)
 In <[log in to unmask]> [log in to unmask] writes:

> >The inbound buffer pool for an NI "Network Interface" can reach a full
> >condition due to a misbehaving application (i.e. one which opens a
> >socket on the 3000, but does not perform IPCRECV or BSD RECV in a
> >timely manner, or not at all).

[snip]

> One of my clients seems to be experiencing a device problem that
> is causing this error. If a tcp/ip device, such as a network
> printer interface, transmits packets to the hp3000 that contain
> user data, and such packets are not expected (hence no IPCRECV
> was posted), this would cause the buffer to fill up as well...
>
> Comments on this scenario from Jim?

I'm not Jim, but will I do? ;-)

If "not expected" means "there is no application waiting on data from that
TCP/IP socket" then the answer is no. Inbound traffic to a port on a 3000
that's not linked to a "listening" socket is ignored, causing connect
failures to the sender (if they're paying attention - and if they're TCP/IP;
UDP/IP are just ignored - assuming they're broadcast packets).
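
To make the sender's-eye view concrete, here's a minimal BSD-sockets sketch
(the host address and port below are made up) of what happens when nothing
is listening on the target port: the TCP connect fails in a way the sender
can see, while the UDP sendto reports success locally and the datagram just
disappears:

-----------------------------cut here---------------------------------
/*
 * Sketch only: sender's view when nothing is listening on the target
 * port.  HOST and PORT are hypothetical.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define HOST "192.0.2.10"   /* hypothetical 3000 address */
#define PORT 5001           /* hypothetical port, no listener */

int main(void)
{
    struct sockaddr_in addr;
    int tcp_fd, udp_fd;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    addr.sin_addr.s_addr = inet_addr(HOST);

    /* TCP: with no listener on the far end, the connect attempt fails
       (connection refused or a timeout, depending on the stack), so a
       sender who checks return codes will notice. */
    tcp_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(tcp_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("TCP connect failed: %s\n", strerror(errno));
    close(tcp_fd);

    /* UDP: sendto() succeeds locally; the datagram is simply discarded
       on the far end, so a careless sender never notices anything. */
    udp_fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sendto(udp_fd, "ping", 4, 0,
               (struct sockaddr *)&addr, sizeof(addr)) == 4)
        printf("UDP sendto reported success anyway\n");
    close(udp_fd);

    return 0;
}
-----------------------------cut here---------------------------------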

The scenario referred to above is one I can attest to personally; I had a
bug in an application I was testing a week or two ago that was accepting
tcp/ip socket connections but not properly disconnecting (ipcshutdown'ing)
them. It resulted in a process with >400 open sockets, which eventually
caused every other networked app on the system to gag and die for lack of
resources. Killing that rogue process freed everything up.

This particular situation was caused by a bug in *my* code, not in MPE, and
it was easy to spot by running SOCKINFO.NET.SYS and noting that one lil
process had >400 sockets linked to it.
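
For what it's worth, here's a rough sketch of that pattern in plain
BSD-sockets terms (the real code used NetIPC, where ipcshutdown plays
roughly the role that shutdown()/close() play below; the port number is
made up):

-----------------------------cut here---------------------------------
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    struct sockaddr_in addr;
    int listen_fd, conn_fd;
    char buf[256];

    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5001);          /* hypothetical port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 5);

    for (;;) {
        conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0)
            break;                        /* out of descriptors/resources */

        while (recv(conn_fd, buf, sizeof(buf), 0) > 0)
            ;                             /* ... handle the request ... */

        /* The bug was forgetting these two calls.  Every accepted socket
           then stays open, the per-process count in SOCKINFO.NET.SYS
           climbs, and the system eventually runs out of socket
           resources. */
        shutdown(conn_fd, 1);             /* 1 = no further sends */
        close(conn_fd);
    }
    close(listen_fd);
    return 0;
}
-----------------------------cut here---------------------------------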

There *is* another problem prevalent on 5.0 (one of our systems still
experiences this - we never got the correct patch) where available socket
resources disappear over time, eventually causing network errors and
requiring a system restart to recover (even stopping and restarting the
network doesn't help). The only way I know of to track this problem is with
the following debug command file (for MPE/iX 5.0 only - I don't have a 5.5
version):

-----------------------------cut here---------------------------------
symopen symnsxpt.xpt0407.telesup
var tcp_kso_ptr:lptr=[c0000000+#333*8].[c0000000+#333*8+4]
var max_cons = &
    symval(tcp_kso_ptr 'tcp_kso_block_type.TG_MAX_CONNECTIONS_ALLOWED')
var cur_cons = &
    symval(tcp_kso_ptr 'tcp_kso_block_type.TG_NUMBER_OF_OPEN_CONNECTIONS')
symclose symnsxpt
wl cisetvar ("tcp_cur_conns", !cur_cons)
wl cisetvar ("tcp_max_conns", !max_cons)
:showvar tcp@
c
-----------------------------cut here---------------------------------
We call ours tcpinfo.cmd

An example of the output indicating the problem:

Picard:/DEV3K/SOURCE>debug
DEBUG/iX B.79.06

HPDEBUG Intrinsic at: a.009d06a8 hxdebug+$e4
$1 ($18) nmdebug > use tcpinfo.cmd
TRUE
TRUE
TCP_CUR_CONNS = 1363
TCP_MAX_CONNS = 4096
Picard:/DEV3K/SOURCE>

This shows we have 1363 tcp connection "resources" used out of a total of
4096 available. As network applications accept and drop connections over
time, the first number eventually reaches the second, and the network stops
working. Running web servers aggravates the problem; since our system
handles a *lot* of network (http/smtp/pop/gopher/etc) requests, we end up
having to reboot about once every two weeks.

Again, I understand there's supposed to be a patch that fixes this (I don't
know the ID offhand), but if the command file shows your current connection
count increasing over time, then you've got the same problem we do. (I've
encountered several sites that have hit this; almost all of them were
running web servers.)

                Happy hunting,
                   Chris Bartram
