Sunday, October 27, 2019

Python Diary: Deploying with Docker and uWSGI

While I won't claim to be an expert with Docker by any means, this is an analysis and example of how I plan on deploying uWSGI applications within Docker containers. I highly recommend reading my previous article from today for some additional context on my thoughts behind Docker. With that said, let's get started.

While Gunicorn seems to be much more popular than uWSGI for deploying Python web applications, I still prefer the extremely powerful uWSGI, as it integrates tightly with nginx, my web server of choice. uWSGI has a great plugin system, and a really powerful load balancer and application router built right in. Let's get down to business: here is my custom Dockerfile for deploying uWSGI applications:

FROM python:2.7.9

RUN apt-get update && apt-get -y install uwsgi-core uwsgi-plugin-python

RUN groupadd -g 1000 uwsgi && useradd -g uwsgi -u 1000 appuser

ADD entrypoint.py /

VOLUME /app

EXPOSE 8000

ENTRYPOINT ["/entrypoint.py"]
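
The image expects a uwsgi.ini at the root of the mounted /app volume. As a minimal sketch of what that file might contain (the module path myapp.wsgi is a hypothetical placeholder; the socket matches the port the Dockerfile exposes):

```ini
[uwsgi]
plugin = python
chdir = /app
module = myapp.wsgi:application
master = true
processes = 4
socket = 0.0.0.0:8000
```

On the nginx side, a uwsgi_pass directive pointed at this socket completes the integration mentioned above.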

The idea here is that your web application code is stored on the host operating system, rather than being fully contained within the Docker image itself. Originally, I was thinking of bundling the application code within the container, but that would break other deployment software I use. I currently use Fabric for pushing my Python application code to my cloud server, which is extremely quick.

You might say that the big advantage of baking your application code into the Docker image is that everything is fully versioned, making rollbacks trivial. While this is true, if you are using a proper SCM such as Git, versioning is a complete non-issue. If you do need to perform a rollback (although you really should have tested fully locally), you can roll your local Git repo back to an older revision and run the same deployment command to revert the production server very quickly.

Another huge advantage of keeping your application code on the host file system is that it can easily be shared read-only through a bind volume to as many containers as you need to scale up. Since this base uWSGI image rarely, if ever, changes, managing the application code across multiple cloud servers is just as easy, say with a read-only NFS share. There is little reason to bundle your entire application stack inside the image itself, as it only changes during deployments and never at runtime, and a read-only bind volume is also more efficient.
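
To make the bind-mount setup concrete, here is a hypothetical build-and-run sequence; the image tag uwsgi-base and the host path /srv/myapp are illustrative names, not anything the article prescribes:

```shell
# Build the base image once; it rarely changes between deployments.
docker build -t uwsgi-base .

# Start a container with the application code mounted read-only at /app.
# The code lives on the host, so redeploys never rebuild the image.
docker run -d --name myapp \
  -v /srv/myapp:/app:ro \
  -p 8000:8000 \
  uwsgi-base
```

Because /app is mounted read-only, scaling out is just a matter of starting more containers against the same host directory.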

Next order of business is of course the entrypoint script, so here it is:

#!/usr/bin/python

import sys, os

print "Kevin's uWSGI Python container image.\n\n"

app_dir = os.listdir('/app')

# Fail early, with distinct exit codes an external deployment tool can check.
if len(app_dir) == 0:
    print " *** App Volume is empty! ***"
    sys.exit(2)

if 'uwsgi.ini' not in app_dir:
    print " *** No uwsgi.ini file available! ***"
    sys.exit(3)

# Runs on every start, so newer packages are picked up after a redeploy.
if 'requirements.txt' in app_dir:
    print " ** Found Python requirements.txt file, installing packages..."
    os.system('pip install -r /app/requirements.txt')
    print " ** Package installation has completed!"

# Optional root-level setup; a marker file limits it to once per container.
if 'as-root.sh' in app_dir:
    try:
        os.stat('/var/run/root-config-done')
        print " ** Not running as-root.sh, has run once already."
    except OSError:
        print " ** Executing custom script as root..."
        os.system('/app/as-root.sh')
        open('/var/run/root-config-done', 'w').write('DONE')
        print " ** Root Script completed."

# Note whether runonce.sh still needs to run; the marker is written now,
# while we are still root, but the script itself runs later as the app user.
RUNONCE = False
if 'runonce.sh' in app_dir:
    try:
        os.stat('/var/run/runonce-done')
    except OSError:
        open('/var/run/runonce-done', 'w').write('DONE')
        RUNONCE = True

# Drop root privileges; the group must be set before the user.
print " ** Downgrading user permissions..."
os.setgid(1000)
os.setuid(1000)
print " ** We are now running as a regular user."

if RUNONCE:
    print " ** Running runonce.sh..."
    os.system('/app/runonce.sh')
    print " ** RunOnce Script completed."

# Runs on every start, e.g. for database migrations.
if 'prestart.sh' in app_dir:
    print " ** Executing prestart.sh Script..."
    os.system('/app/prestart.sh')
    print " ** PreStart Script completed."

print " **** SYSTEM INITIALIZATION COMPLETE! ****\n\n"
print "Starting uWSGI container service..."

# exec() replaces this process, so uWSGI receives Docker's signals directly.
os.execl('/usr/bin/uwsgi_python', 'uwsgi_python', '/app/uwsgi.ini')

While reading the Dockerfile, you may have noticed that a new group and user were created but never used there. The group and user are utilized in the entrypoint script instead, where they are more useful; there is no need to downgrade permissions until we are absolutely ready. Again, this script is written for Python 2.7.x, so update it as needed for Python 3 if you plan on using it with modern Python web applications.
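
If you do port this to Python 3, the marker-file logic that gates as-root.sh and runonce.sh can be factored into a small helper. This is a minimal sketch of the pattern, not part of the script above, and the function name and marker paths are my own choices:

```python
import os

def run_once(marker_path, task):
    """Run task() only if marker_path does not exist yet.

    Creates the marker afterwards, so later container starts skip the task.
    Returns True if the task ran, False if it was skipped.
    """
    if os.path.exists(marker_path):
        return False  # already ran in this container instance
    task()
    # Write the marker so subsequent starts see the work as done.
    with open(marker_path, 'w') as f:
        f.write('DONE')
    return True
```

Deleting the marker file from inside a running container re-arms the task, just as described below for as-root.sh.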

Let's quickly go through this script. There are a lot of checks early on so we can fail early and let the operator know that something is wrong; I even give the courtesy of exiting with different error codes, which could be checked by an external Docker deployment tool. We check that the application directory is not empty, and that a uwsgi.ini file exists in it.

If everything is in the clear, we check for a requirements.txt file. This happens on every start, so that when we restart the container after updating our application code, any newer Python packages are installed correctly.

We then check for a script called as-root.sh, which lets us run commands as root inside the container if needed, for example to install system libraries or binary Python packages from the Debian repository. This script runs only once per container instance, which makes redeployments of our application code much quicker. If we do need to run it again, we can either spin up a new container, or connect to the running container and remove the marker file to signal that these tasks should run again.

Next, we check for a runonce.sh, which will run as the application user. Inside this script we could run application-specific tasks, say from Django's manage.py command, or other tasks that belong to the user running the application code. It also writes a marker file to prevent itself from running again.

Next, we downgrade our permissions as a security precaution. Most web application code does not need to run as root, and in the event of an exploit, we do not want an attacker to get any farther than they could on a traditional deployment. If the runonce script did exist, this is where we run it. Once that has completed, we check for a prestart.sh script; this is where we can run our database migrations, for example, as they need to run every time we start the container after a new deployment.
You can also use a second bind mount to collect static files onto the host, where your web server or a CDN will serve them. Finally, we use the exec syscall to replace the running process with the uWSGI server, allowing signals from the Docker daemon to reach the uWSGI master process.
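
As a sketch of what the prestart.sh hook might contain for a Django project (the script itself is hypothetical; the manage.py subcommands are standard Django ones):

```shell
#!/bin/sh
# Hypothetical prestart.sh: per-deployment tasks, run as the app user
# on every container start, just before uWSGI takes over.
set -e
cd /app
python manage.py migrate --noinput
python manage.py collectstatic --noinput
```

With a second bind mount for STATIC_ROOT, the collected files land on the host where nginx or a CDN can pick them up.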

And well, that's that! It took some thought for me to decide on the best way to utilize Docker in my everyday development, and how I will use it for deployments of my application code. If you haven't read the previous article yet, I highly recommend reading it as well.



from Planet Python
