tags: python, venv, deployment
Enable roles to deploy OpenStack Python code inside a venv
The use of on-metal services, the change in release cycles and cadence, and the likelihood of projects having requirements that conflict with one another all call for more separation between installed projects, which lends itself to installing OpenStack Python code in virtual environments.
The alternatives are to leave things unchanged or to further pursue re-containerizing the services that have been moved to the host. If we decide to re-containerize the projects that have been moved into the host's namespace, we will need to invest in kernel development to fix the several issues we encountered that forced the move to running "is_metal" in the first place.
The use of venvs within an environment will not affect an existing deployment nor have any adverse effects on upgrades. Upgrading a service that had not previously used venvs will be handled automatically, as the init scripts, sudoers files, and rootwrap configs will be changed to support the new venv install.
The benefit of running a service in a venv is most apparent when dealing with a downgraded package requirement. This has happened several times, where an upstream OpenStack project downgraded a Python package requirement in the middle of a release. In the current deployment system an administrator has to intervene manually to resolve package downgrade issues. If the system were using venvs tagged for a given deployment, upgrading from one release to another would be as simple as re-running the role from the newly released version, which creates a new venv for the service at that version. This has the upgrade side effect of allowing Kilo to Liberty upgrades without an epoch wheel build or munging of the wheels repo, further simplifying what an upgrade requires of the end user.
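The following is a minimal sketch of the per-release venv idea only; the /openstack/venvs path, the service name, the release tag, and the helper function are illustrative assumptions rather than the role's actual implementation::

    # Illustrative sketch: paths, names, and the release tag are assumptions.
    import subprocess

    def build_service_venv(service, tag, packages):
        """Create a venv keyed on the release tag and install the service into it."""
        venv_path = "/openstack/venvs/{0}-{1}".format(service, tag)

        # A new tag yields a new directory, so the venv built for the previous
        # release is left untouched and remains available for rollback.
        subprocess.check_call(["virtualenv", venv_path])
        subprocess.check_call([venv_path + "/bin/pip", "install"] + packages)
        return venv_path

    # Re-running the role from a new release simply produces a second venv,
    # e.g. /openstack/venvs/nova-12.0.0 alongside /openstack/venvs/nova-11.0.2.
    build_service_venv("nova", "12.0.0", ["nova"])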
While not directly related to the implementation of this spec, the virtualenv implementation could be extended to allow building and redistributing pre-built virtualenvs as a means of speeding up deployments and maintaining reproducibility within an environment.
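As a rough sketch of that repackaging idea, a venv built once on a repo host could be archived and unpacked on target hosts. The paths, names, and the tarfile-based approach below are assumptions for illustration, not a committed design::

    # Illustrative only: targets would need the same venv path and Python
    # version for the unpacked venv to work as built.
    import tarfile

    def archive_venv(venv_path, archive_path):
        """Bundle a pre-built venv so it can be shipped to other hosts."""
        with tarfile.open(archive_path, "w:gz") as tar:
            tar.add(venv_path, arcname=venv_path.lstrip("/"))

    def unpack_venv(archive_path, root="/"):
        """Unpack a shipped venv archive onto a target host."""
        with tarfile.open(archive_path, "r:gz") as tar:
            tar.extractall(path=root)

    archive_venv("/openstack/venvs/nova-12.0.0",
                 "/var/www/repo/venvs/nova-12.0.0.tgz")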
When working within a container, a deployer or administrator who needs the service management utilities (nova-manage, cinder-manage, etc.) will have to source/activate the virtualenv before running the tools. While this is an extra step, no other changes to the typical deployer workflow are required.
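The sketch below illustrates what that activation amounts to, assuming a hypothetical /openstack/venvs/nova-12.0.0 layout: sourcing the venv's activate script effectively prepends its bin directory to PATH so tools such as nova-manage resolve to the venv::

    # Illustrative only: the venv path and the nova-manage call are assumptions.
    import os
    import subprocess

    venv_bin = "/openstack/venvs/nova-12.0.0/bin"

    # Equivalent in effect to `source /openstack/venvs/nova-12.0.0/bin/activate`:
    # the venv's bin directory is placed at the front of PATH.
    env = dict(os.environ, PATH=venv_bin + os.pathsep + os.environ.get("PATH", ""))

    # nova-manage is only installed inside the service venv, so without the
    # adjusted PATH (or an activated venv) the bare command would not be found.
    subprocess.check_call(["nova-manage", "db", "sync"], env=env)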