Replace OBS with another build system

https://blueprints.launchpad.net/fuel/+spec/replace-obs

https://blueprints.launchpad.net/fuel/+spec/enable-gpg-check-and-sign

Problem description

  • As a CI engineer, I need a package build system that does not introduce package management limitations compared to the upstream Linux distributions used in MOS.
  • As a Cloud Administrator, I want to validate the integrity of all Mirantis OpenStack packages with end-to-end signatures, on either individual RPM packages or whole DEB package repositories (see the verification sketch below).
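
As a sketch of what such validation looks like from the administrator's side (the package and file names are hypothetical):

  # Verify the signature on an individual RPM package
  rpm --checksig python-nova-2015.1-1.noarch.rpm

  # Verify the detached GPG signature on a DEB repository's Release file
  gpg --verify Release.gpg Release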

We have found the following fundamental limitations of OBS:

  • OBS builds packages in its own way, which differs from upstream. This causes issues when an upstream package cannot be rebuilt without changing the package sources.
  • OBS rebuilds a package whenever one of its build dependencies changes (and does not update the revision number of the rebuilt package).
  • OBS uses base upstream packages as the build target. Every change to the target triggers a rebuild of every package that was built against it.
  • OBS does not support publishing udeb binary packages, because it uses a plain Debian repository structure, while deb and udeb packages must not be published in the same repository.
  • Our current OBS version (2.4) does not support Debian python:any dependencies, which is why we decided to create a new OBS 2.6 instance. We cannot upgrade the current instance because doing so completely breaks support for previously shipped releases.
  • OBS does not support signing with a predefined key: only OBS-autogenerated keys can be used, every OBS project has its own key, and such keys cannot be exported from OBS.
  • OBS makes our CI hard to reproduce. Every MOS OBS project is based on a previously shipped project (e.g. 6.1 and 6.0.1 are based on the 6.0 release, 6.0 is based on 5.1, and so on), so to reproduce our CI for the 6.1 release you would have to rebuild all packages for every release shipped since 3.2.
  • The OBS server side is natively supported only on openSUSE and SUSE Linux Enterprise Server.
  • We can neither support OBS ourselves nor distribute it to our customers.

Proposed change

This specification introduces a replacement for the existing OBS infrastructure: a new build system called Perestroika.

The solution will use the standard tools of the upstream Linux distributions to build packages (sbuild/mock) and to publish and manage package repositories (reprepro/createrepo).

Every package will be built in a clean and up-to-date buildroot. Packages, their dependencies and build dependencies will be fully self-contained for each MOS release. Any package included in any release can be rebuilt at any point in time using only the packages from that release.
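
For illustration, building a package in such a self-contained buildroot could look like this; the mock config and source package names are hypothetical:

  # RPM: rebuild in a clean, release-specific mock buildroot
  mock -r mos-7.0-centos6-x86_64 --rebuild python-nova-2015.1.src.rpm

  # DEB: rebuild in a clean trusty sbuild chroot populated only from MOS repositories
  sbuild -d trusty python-nova_2015.1-1.dsc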

Package build CI will be reproducible and can be recreated from scratch in a repeatable way.

The new build system is based on Docker, which makes it easy to distribute. A Docker image with the necessary tools and scripts will be created for each supported Linux distribution.

Puppet will be used to configure these images.

The host-side interaction scripts can be wrapped in a package for easy deployment.

Alternatives

Data model impact

None

REST API impact

None

Upgrade impact

None

Security impact

None

Notifications impact

None

Other end user impact

  • The system will be able to sign packages and repositories.
  • The packaging CI infrastructure will be reproducible.

Performance Impact

  • Unnecessary rebuilds of packages and their dependencies will be avoided.

Other deployer impact

None

Developer impact

None

Infrastructure impact

  • The overall package build workflow will remain the same.
  • We should consider using Docker Hub as the main repository for the Docker images.

Implementation

The new build system will consist of the following parts:

  • Code storage. We use the Gerrit code review system as code storage.

    Gerrit projects structure:

    • MOS + master-node OpenStack packages code projects:

      [customer-name]/openstack/{package name}

      spec projects:

      [customer-name]/openstack-build/{package name}

    • MOS linux packages code+spec projects:

      [customer-name]/packages/{distribution}/{package name}

    • Master-node linux packages (separated from MOS linux in 7.0) code+spec projects:

      [customer-name]/packages/fuel/{distribution}/{package name}

    • The versioning scheme will be supported by project branches.

      openstack:

      openstack-ci/fuel-{fuel version}/{openstack version}

      MOS linux/master-node:

      {fuel version}
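
      For illustration, checking out a package project for a given release might look like this; the Gerrit host, customer name, and version values are hypothetical:

        git clone ssh://review.example.com:29418/mos/openstack/nova
        cd nova
        git checkout openstack-ci/fuel-7.0/2015.1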

  • Scheduler. This part is based on the Jenkins CI tool. All jobs will be configured via jenkins-job-builder. Jenkins has a separate set of jobs for each [customer-name]+[fuel version] combination. The Gerrit trigger is configured to track events on the {version} branch of all [customer-name] Gerrit projects (see the job definition sketch after the list below).

    Each set of jobs will contain:

    • Jobs for OpenStack packages for cluster (rpm/deb)
    • Jobs for MOS linux packages for cluster (rpm/deb)
    • Jobs for OpenStack packages for master-node (optional if cluster packages are used) (rpm)
    • Jobs for non-openstack master-node packages (rpm)
    • Jobs for fuel packages (rpm/deb)
    • Job for package publishing
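
    A minimal jenkins-job-builder sketch of such a job, assuming hypothetical job, label, project, and script names:

      - job:
          name: mos-7.0-build-deb-package
          node: deb-builder
          triggers:
            - gerrit:
                trigger-on:
                  - patchset-created-event
                projects:
                  - project-compare-type: ANT
                    project-pattern: 'mos/openstack/**'
                    branches:
                      - branch-compare-type: PLAIN
                        branch-pattern: 'openstack-ci/fuel-7.0/2015.1'
          builders:
            - shell: './build-deb-package.sh'
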
  • Build workers. Hardware nodes with preconfigured build tools for all supported distributions; each will be configured as a Jenkins slave.

    Each worker will contain:

    • preconfigured Docker images with native build tools for each distro type: mockbuild (builds packages with mock; centos6/7 target distributions) and sbuild (builds packages with sbuild; trusty target distribution)
    • prepared minimal build chroots for all supported distributions; these chroots will be updated on a daily basis to stay current with the upstream state (see the bootstrap sketch after this list)
    • a preconfigured package caching system (optional); all packages downloaded from upstream repositories should be cached on the build host so they can be reused across build stages, which reduces build time (this could be done with squid/polipo/approx)
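
    For illustration, bootstrapping and refreshing these chroots might look like this; the paths, mirror, and config names are hypothetical:

      # Create a minimal trusty chroot for sbuild
      sbuild-createchroot trusty /srv/chroots/trusty http://mirror.example.com/ubuntu

      # Initialize a centos7 buildroot for mock
      mock -r centos-7-x86_64 --init

      # Daily refresh against the upstream state
      sbuild-update --update --upgrade --clean trusty
      mock -r centos-7-x86_64 --update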

    The build system will use short-lived Docker containers to perform package builds. Docker images contain only the preconfigured build tools; no chroots are kept inside the images. Build chroots will be mounted into the Docker container read-only at start time, and a tmpfs partition will be mounted over the read-only chroot directory with an AUFS overlay inside the container. The Docker container will be destroyed after the build stage is done (see the sketch after the goals list below).

    Goals of this scheme:
    • several containers can share a single chroot simultaneously on the same build host
    • no cleanup is needed after a build (all changes exist only inside the container and are purged when the container is destroyed)
    • tmpfs works much faster than on-disk filesystems or LVM snapshots
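
    A minimal sketch of this container lifecycle, assuming hypothetical image, chroot, and script names (and AUFS support in the host kernel):

      # Start a throwaway container with the build chroot mounted read-only
      docker run --rm --privileged \
          -v /srv/chroots/centos7:/chroots/centos7:ro \
          mockbuild:centos7 /usr/local/bin/build-rpm.sh nova.src.rpm

      # Inside the container, the build script would lay a writable
      # tmpfs branch over the read-only chroot before building:
      mount -t tmpfs tmpfs /overlay
      mount -t aufs -o br=/overlay:/chroots/centos7 none /build/chroot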

    All worker nodes will be grouped under a common Jenkins slave label.

  • Publisher. The publisher node will contain all repositories for all customer projects and will be configured as a Jenkins slave. Repositories will be maintained with the native tools of the respective distribution (reprepro/createrepo). Because it holds the secret GPG key, the publisher node will be fully private and reachable only from the Jenkins master node. All packages and repositories will be signed, in the manner native to the respective distribution, with the GPG key stored on the publisher node, as sketched below.
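
    For illustration, the signing setup might look like this; the codename, key id, and paths are hypothetical. On the DEB side, reprepro signs the repository when SignWith is set in conf/distributions:

      Origin: MOS
      Codename: mos7.0
      Architectures: amd64 source
      Components: main restricted
      UDebComponents: main
      SignWith: 1A2B3C4D

    On the RPM side, individual packages and the repository metadata would be signed explicitly:

      rpm --define '_gpg_name 1A2B3C4D' --addsign *.rpm
      createrepo .
      gpg --default-key 1A2B3C4D --detach-sign --armor repodata/repomd.xml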

  • Mirror node. All repositories should be available via the HTTP and rsync protocols. All repositories will be synced by the publisher to the mirror host, as sketched below.
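
    A one-line illustration of such a sync; the source path and mirror host are hypothetical:

      rsync -a --delete /srv/repos/ rsync://mirror.example.com/mos-repos/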

Backward compatibility

Assignee(s)

Primary assignee:
  dburmistrov

Other contributors:
  dkaiharodsev
  dszeluga

Work Items

  • Write scripts for interacting with the native build tools inside the Docker images and pack them into a DEB package
  • Create Docker images with packaging tools (sbuild and mockbuild)
  • Create a Jenkins job for building packages with the Docker-based packaging system
  • Create Puppet manifests for configuring the build hosts
  • Create Puppet manifests for configuring the publisher host

Dependencies

None

Documentation Impact

With the new build system in place, the workflow documentation must be updated wherever OBS is mentioned.

Testing

All of the scripts and Jenkins jobs should be tested in a sandbox environment for building packages.