My favorite Python web framework
is Tornado; I've been using it since it
was first announced, and think it does a lot of things right. Although Tornado
is capable of speaking HTTP directly,
the recommended way to
deploy Tornado in production is to put a web server like Nginx at the edge, with
Nginx proxying to backend Tornado instances. The suggestion is to run multiple
Tornado processes on each host, with each process binding on a different port.
The "frontend" Nginx process on each host then proxies HTTP requests to these
"backend" Tornado workers in a round-robin fashion, using the Nginx proxy_pass
directive.
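To make that concrete, the Nginx side of this setup might look something like the following sketch (the upstream name and ports here are hypothetical; adjust them to your own layout):

```nginx
upstream tornado_workers {
    # One entry per backend Tornado process.
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;

    location / {
        # Round-robin (the default balancing method) across the workers.
        proxy_pass http://tornado_workers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```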
This is all pretty standard, and applies to many other Python (and Ruby, etc.) frameworks as well. However, the docs don't explain how to actually manage the Tornado worker processes. This is because process management on Linux is a fragmented and divisive issue, and many solutions have appeared over the years.

I strongly feel that in 2017 the right way to manage system services is using Systemd. Regardless of your feelings about the project, it's the de facto standard for managing processes on all modern Linux distributions, and has a lot of really powerful features if used correctly. Ben Darnell (the effective project lead for Tornado) has an example repo on GitHub called tornado-production-skeleton that demonstrates how to deploy Tornado using Supervisor; in this post I'm going to explain how to achieve the same thing using Systemd.
Deploying A Parameterized Systemd Service
Every Tornado app should have a call to app.listen()
, which causes the Tornado
web process to actually bind and listen on a given port. You'll need to write
your app so that it can accept a listen port somehow; I prefer to do this using
a command line flag using the builtin
Python argparse module. You
can also use
the tornado.options module
or something else that you find convenient. However, I do recommend that you
make this a command line option rather than an environment variable, since it
makes debugging easier as you'll be able to tell which worker process is using
which port from ps
and top
. Here's a simple Tornado app putting this all
together:
import argparse

import tornado.ioloop
import tornado.web


class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('Hello, world')


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-p', '--port', type=int, default=8000)
    args = parser.parse_args()
    app = tornado.web.Application([(r'/', MainHandler)])
    app.listen(args.port, address='127.0.0.1')
    tornado.ioloop.IOLoop.current().start()
I would also package this using a setup.py
file that declares the app as a
"console script". Basically in your setup.py
file you'll have something that
looks like (here assuming the previous "Hello world" web server is available as
myapp.web
):
from setuptools import find_packages, setup

setup(
    name='myapp',
    version='1.0',
    author='Evan Klitzke',
    author_email='evan@eklitzke.org',
    packages=find_packages(),
    install_requires=['tornado'],
    entry_points={
        'console_scripts': [
            'myapp-web = myapp.web:main',
        ],
    })
In production, we'll install this into a virtualenv. Let's say we have a web
user, then the virtualenv might live at /home/web/.env
. Running setup.py install
or setup.py develop
would then create a file called
/home/web/.env/bin/myapp-web
which can be run as if it's a stand-alone script
to launch the Tornado worker. Under the hood, this script just activates the
virtualenv it's part of before running the specified module/function.
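Setting this up might look like the following sketch (the paths are assumptions based on the layout above, not something Tornado or setuptools mandates):

```shell
# VENV defaults to ~/.env, which matches /home/web/.env when run as the web user.
VENV="${VENV:-$HOME/.env}"
python3 -m venv "$VENV"

# From a checkout of the project, install it and its dependencies into the venv:
#   "$VENV/bin/pip" install .
# Afterwards the console script is available as "$VENV/bin/myapp-web".
```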
Now we need to create a service file. It should be named web@.service
and look
something like this:
[Unit]
Description=My HTTP service
[Service]
ExecStart=/home/web/.env/bin/myapp-web -p %I
User=web
Restart=on-failure
Type=simple
[Install]
WantedBy=multi-user.target
The @ symbol in the file name is important---it instructs Systemd to
make this service parameterized. The %I
directive in the service file is what
passes the parameter to the actual service instance.
You can either install this as a regular system service, or a user service. The
regular system service is a bit simpler; you'd copy this file to your Systemd
services directory (/etc/systemd/system/
for locally installed units) and then run sudo systemctl daemon-reload
to force Systemd to rescan available units. (At the end
of this post I'll cover how to run this as a user service.)
Once this is done, you can enable/start an instance for each port. For instance, if we want to have 8 worker processes listening on ports 8000--8007 you could do:
for port in {8000..8007}; do
    sudo systemctl enable web@"${port}".service
    sudo systemctl start web@"${port}".service
done
You can then check on the status of an individual worker with a command like
systemctl status web@8001
to check the worker using port 8001. Your deployment
script just needs to update the code and virtualenv for your service when
deploys happen, and then run sudo systemctl restart web@"${port}".service
for
each port you use.
This is pretty much it---you should be good to go at this point. If you want
tighter integration with Systemd you can configure your Python code as a
notify
service which will let Systemd manage the life cycle of the process
more closely, but what I have here should be good enough for most projects. You
may also want to set up a
Systemd
timer unit to
run a health check on your service, although in my experience most people in
production are using active Nagios health checks in which case you don't need to
bother.
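If you do want to experiment with the notify type, the protocol itself is tiny: the service writes READY=1 to the Unix datagram socket named by the NOTIFY_SOCKET environment variable, and the unit file uses Type=notify instead of Type=simple. Here's a minimal sketch; sd_notify is a hypothetical helper, not part of Tornado or the standard library:

```python
import os
import socket


def sd_notify(message: str) -> bool:
    """Send a notification message to systemd via NOTIFY_SOCKET.

    Returns True if the message was sent, False if systemd isn't listening.
    """
    addr = os.environ.get('NOTIFY_SOCKET')
    if not addr:
        return False
    # A leading '@' denotes a socket in the abstract namespace.
    if addr.startswith('@'):
        addr = '\0' + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.send(message.encode())
    return True


# In main(), you would call sd_notify('READY=1') right after app.listen(),
# so Systemd only considers the unit started once the port is bound.
```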
User Services
The other option is to deploy the code as a Systemd "user service", which means
that the code will be run by a special Systemd process that only manages
processes for the web
user. By default, such a process is scoped to a user's
login session. Since this isn't appropriate for a web service, you can enable
Systemd persistence using loginctl
to put the web
user into "linger" mode:
sudo loginctl enable-linger web
Once this is done, the web
user should create the directory
~/.config/systemd/user
if it doesn't already exist, and then copy the
web@.service
file to that directory. If you do this, you must remove the
User=
line from the service file, otherwise Systemd will fail when execing
your process.
From this point, the commands are the same as the previous example, except the
--user
flag has to be used with systemctl
. As the web
user you'd run
something like:
systemctl --user daemon-reload
for port in {8000..8007}; do
    systemctl --user enable web@"${port}".service
    systemctl --user start web@"${port}".service
done
That's it. There's not any real advantage to using a user service this way, so I would generally recommend using a regular system service.