Date: Thu, 7 Jan 1999 16:06:31 -0800
John Korb writes:
> Now, if you wrote your own special-purpose web server to handle
> transactions from start to finish, performance could skyrocket. Or, give
> the web server a pool of application server processes active as son
> processes and pass the client form data directly to the proper son (the one
> that acts as the application server for that client's application), with
> the son process keeping the database open all the time and simply
> suspending when it has finished processing a transaction. Even that would
> eliminate a lot of overhead. Something like:
>
> 1) Webserver gets input from client.
>
> 2) Web server maps the target URL to a son process (an application server
> process running as a son process under the web server process).
>
> 3) Web server writes the client's input to the stdin of the proper son
> process.
>
> 4) Son process already has DB opened, processes transaction.
>
> 5) Son writes output page to its stdlist, then waits for next transaction.
>
> 6) Webserver transfers output page data from son's stdlist to the web
> client.
>
> Gee, wouldn't that cut the level of complexity down a few notches!
You just described FastCGI:
http://fastcgi.idle.com/
I don't presently build it into Apache/iX, but I played with it enough a long
time ago to appreciate the performance boost you get from having persistent
CGI server processes.
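The son-process flow described above can be sketched in a few lines: a parent
("web server") keeps one long-lived child alive and feeds it transactions over
stdin/stdout pipes, while the child opens its "database" once and reuses it.
This is a minimal illustration of the persistent-process idea, not the actual
FastCGI wire protocol (which adds a binary record format on top); the worker
code and its toy dictionary "database" are hypothetical stand-ins.

```python
import subprocess
import sys
import textwrap

# Son process: open the "database" once, then loop handling one
# transaction per line of stdin, writing the result to stdout.
WORKER = textwrap.dedent("""
    import sys
    db = {"fastcgi": "persistent CGI"}      # stand-in for the open database
    for line in sys.stdin:                  # one transaction per line
        key = line.strip()
        sys.stdout.write(db.get(key, "not found") + "\\n")
        sys.stdout.flush()                  # return the "output page"
""")

# Web-server side: start the son once, then reuse it for many transactions.
son = subprocess.Popen([sys.executable, "-c", WORKER],
                       stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                       text=True)

def transact(request: str) -> str:
    """Write client input to the son's stdin; read the page from its stdout."""
    son.stdin.write(request + "\n")
    son.stdin.flush()
    return son.stdout.readline().strip()

out1 = transact("fastcgi")   # handled with no fresh fork/exec or DB open
out2 = transact("fastcgi")   # same son, database still open
son.stdin.close()
rc = son.wait()              # son exits cleanly when its stdin closes
```

The saving is exactly the one described above: the fork/exec and database-open
costs are paid once at startup instead of once per hit.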
--
Mark Bixby E-mail: [log in to unmask]
Coast Community College Dist. Web: http://www.cccd.edu/~markb/
District Information Services 1370 Adams Ave, Costa Mesa, CA, USA 92626-5429
Technical Support Voice: +1 714 438-4647
"You can tune a file system, but you can't tune a fish." - tunefs(1M)