
Ticket #193 (closed enhancement: fixed)

Opened 14 years ago

Last modified 9 years ago

Explain why WSGI might be better with LightTPD

Reported by: Modjoe Owned by: anonymous
Priority: normal Milestone: 1.0.x bugfix
Component: Documentation Version:
Severity: minor Keywords:
Cc:

Description

Pardon my ignorance, but why would someone want to use the more complicated method? Does the second method produce far superior performance?

Change History

comment:1 Changed 14 years ago by SuperJared <jared.kuolt@…>

  • Summary changed from Comment on docs/deployment/lighttpd.html to Explain why WSGI might be better with LightTPD

comment:2 Changed 13 years ago by jorge.vargas

  • Severity changed from normal to minor
  • Milestone set to 1.0b1

Does anyone have this setup?

comment:3 Changed 13 years ago by claudio.martinez

I have some servers running lighttpd + wsgi.

I did some benchmarks some time ago (I don't have the results now) with Apache Bench.

There is almost no speed difference between the direct, proxy and wsgi methods as long as the concurrency is low. When you start raising it (around 10), wsgi is faster and more stable.

I ran Apache Bench from 2 computers with -c 10 on each one for 30 mins, and the lighttpd+wsgi server didn't drop a single request, zero. The proxy and direct servers just crashed (at least for me) under this stress test. Maybe tweaking the thread pools can help, but I didn't care because wsgi was already faster when the concurrency didn't exceed CherryPy's default number of threads.
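The stress test described above can be reproduced with a command along these lines (the URL is a placeholder; ab ships with the Apache httpd tools):

```shell
# Run Apache Bench with 10 concurrent clients (-c 10) for
# 30 minutes (-t 1800). -n sets a request cap high enough
# that the time limit is reached first.
# https://example.com/page stands in for the actual test URL.
ab -c 10 -t 1800 -n 1000000 https://example.com/page
```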

I'm using flup's threading WSGIServer (easy_install flup). The forking one has some problems with the database connections, just like the Paste server.

My start script looks like this (there are some commented-out lines for trying the forking method and the Paste server, but I already tried those and they didn't work):

#!/usr/bin/env python
import pkg_resources
pkg_resources.require("TurboGears")

import turbogears
import cherrypy
cherrypy.lowercase_api = True

from os.path import *
import sys

# First look on the command line for a desired config file.
# If it's not on the command line, look for setup.py in this
# directory; if that's not there either, this script is
# probably installed, so use the production config.
if len(sys.argv) > 1:
    turbogears.update_config(configfile=sys.argv[1], 
        modulename="gitextranet.config.app")
elif exists(join(dirname(__file__), "setup.py")):
    turbogears.update_config(configfile="dev.cfg",
        modulename="gitextranet.config.app")
else:
    turbogears.update_config(configfile="prod.cfg",
        modulename="gitextranet.config.app")

from gitextranet.controllers import Root
from cherrypy._cpwsgi import wsgiApp
#from paste.util.scgiserver import serve_application
from flup.server.scgi import WSGIServer
#from flup.server.scgi_fork import WSGIServer

port = 4000
if len(sys.argv) > 2:
    port = int(sys.argv[2])

cherrypy.config.update({
'global': {
    'autoreload.on': False,
}})
cherrypy.root = Root()
cherrypy.server.start(initOnly=True, serverClass=None)
#serve_application(application=wsgiApp, prefix='/', port=port)
WSGIServer(application=wsgiApp, bindAddress=('localhost', port)).run()

You can check how to configure lighttpd here.

comment:4 Changed 13 years ago by kevin

"Maybe tweaking the thread pools can help, but I didn't care because wsgi was already faster when the concurrency didn't exceed CherryPy's default number of threads."

IIRC, CherryPy defaults to *1* thread. I'd say the test is bogus if you didn't increase the server threads.

comment:5 Changed 13 years ago by kevin

By the way, I'm not saying that CP is definitely faster than flup... I'm just saying that we need to be careful about the test. Configuring a proxy is certainly a lot easier.

comment:6 Changed 13 years ago by claudio.martinez

"IIRC, CherryPy defaults to *1* thread. I'd say the test is bogus if you didn't increase the server threads."

I thought it was 5 when I did the tests, because that was the point where, as far as I can remember, the performance started dropping. cherrypy.config.get('server.thread_pool') shows that it defaults to 10.

I didn't try tweaking Apache either (I used Apache for proxying and lighttpd for wsgi). The test server had the FC4 default configuration. I don't think it was an Apache issue, since I got the same results testing against CherryPy directly.
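For anyone re-running the test, the pool size can be raised in the TurboGears deployment config before starting the server (30 here is an arbitrary example value, not a recommendation):

```ini
[global]
# default is 10; raise it so CherryPy isn't the bottleneck
# at higher benchmark concurrency
server.thread_pool = 30
```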

Setting up lighttpd + wsgi doesn't have to be hard; the lighttpd configuration I used for the tests was:

# default document-root
server.document-root = "/usr/local/lib/python2.4/site-packages/yourproject.egg"

# TCP port
server.port = 443

# selecting modules
server.modules = ( "mod_access",
                   "mod_scgi",
                   "mod_accesslog",
                   "mod_rewrite",
                   "mod_staticfile" )

ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/server.pem"

scgi.server = ( "/" =>
    ((  "host" => "127.0.0.1",
        "port" => 4000,
        "check-local" => "disable"
    ))
)

As can be seen from the config, I was serving static content from the wsgi server rather than directly from lighttpd; serving it from lighttpd would have sped things up a little.

Tests were run against a database-intensive SSL page.

When I have the chance (and the servers available) I'll run the tests again.

comment:7 Changed 13 years ago by claudio.martinez

There are some tips here if someone wants to do a stress test.

comment:8 Changed 13 years ago by kevin

OK, if it defaults to 10 server threads, that's good to know.

You should also test the proxy setup with lighttpd in front, because lighttpd could be faster than Apache in this case.
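A proxy setup with lighttpd in front can be sketched like this in lighttpd's own config (port 8080 is a placeholder for whatever port CherryPy listens on):

```
# load the proxy module and forward all requests to the
# CherryPy instance listening on localhost:8080
server.modules += ( "mod_proxy" )

proxy.server = ( "" =>
    ((  "host" => "127.0.0.1",
        "port" => 8080
    ))
)
```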

comment:9 Changed 13 years ago by khorn

  • Milestone changed from 1.0b1 to 1.0

milestone passed, changing to 1.0

This definitely needs to be ironed out in the docs before 1.0 is released (though if someone wants to fix it before then, that would be nice :)

comment:10 Changed 13 years ago by alberto

  • Milestone changed from 1.0 to 1.1

comment:11 Changed 12 years ago by alberto

  • Milestone changed from 1.1 to __unclassified__

Batch-moved from 1.1 into unclassified to properly track progress on the latter.

comment:12 Changed 12 years ago by Chris Arndt

  • Status changed from new to closed
  • Resolution set to fixed

I linked to this ticket from http://docs.turbogears.org/1.0/LightTPD#the-scgi-wsgi-method with a short explanation. Since this answers the original poster's question, I'm closing this ticket. Re-open it if you disagree.

comment:13 Changed 9 years ago by chrisz

  • Milestone changed from __unclassified__ to 1.0.x bugfix