Jenkins

MPICH has a relatively recent (as of early 2013) Jenkins continuous integration server setup at https://jenkins.mpich.org/. This page describes how we use this service in MPICH, how it works, and how we might use it in the future.

Executive Summary

If you don't have time to read the lovely prose below, at least internalize this info:

  • we now have a continuous integration server at https://jenkins.mpich.org/
  • this system runs automated build and test runs at regular intervals or whenever a commit is pushed to the revision control system
  • most of the jobs are set up to run automatically whenever code is pushed to http://git.mpich.org/mpich.git
  • the MPICH jobs are unsurprisingly named "mpich" or "mpich-SOMETHING"
  • build+test results are sent to builds@mpich.org; sign up for this list if you want to receive build status emails
  • configuration is all done through the web interface (no more cron jobs!)

The Details

Goals

What are we even trying to accomplish by using Jenkins or any other continuous integration system?

  • reduce developer time and effort spent running tests by hand
  • reduce developer time spent fiddling with our existing automated testing systems
  • tighten the automated testing feedback loop from O(1 day) to O(1 hour) or better
  • improve accountability for "breaking the build" (low priority goal, given the current team)
  • improve software (MPICH) quality in several dimensions:
    • ensure correctness on multiple platforms (Linux, OS X, etc.)
    • ensure correctness with multiple compilers (GNU, Clang, Intel, PGI, etc.)
    • ensure correctness with multiple configure options and debugging levels
    • prevent performance regressions
  • track historical software quality information further back than just "last night"
  • reduce average build times, partly by tracking this information historically

Some of this is handled by the existing Nightly Tests infrastructure, though the "old nightlies" have a number of problems:

  • They are fragile. They are a cobbled-together collection of shell scripts run as cron jobs by several different users on the team. It can be very confusing to track the entire flow of a test run.
  • They are not flexible. Adding a new test suite or configuration can be difficult.
  • They only run nightly.
  • They require the MCS NFS system.
  • They provide no way to suppress known test issues without completely disabling tests or platforms.
  • State from one build or day of testing is not always correctly cleaned up, leading to false positives and false negatives in some cases.

The short term goal is to augment the "old nightlies". In the longer term it would be good to replace them with an all-Jenkins solution, provided that it remains stable and can provide all of the important features currently offered by the "old nightlies".

The MPICH Jenkins CI Server and General Jenkins Overview

Visit https://jenkins.mpich.org/ to access the Jenkins server. You should log in with your MCS username and password.

After logging in, on the home page you'll see a list of menu options on the left-hand side with an "executor status" table listed below that. In the main central/right-hand panel you'll see a list of jobs which you are able to view. I (goodell@) do not know how to filter this list automatically at this stage. There is a concept of "views", but that doesn't seem to quite solve the problem in general. Look at the list for jobs named "mpich" or "mpich-SOMETHING".

Helpful Jenkins Terminology
job (or sometimes "project") 
a logically related set of operations which should be executed in order to test a particular piece of software
build 
a particular execution of a job
workspace 
the working directory where a build executes
master (or sometimes "server") 
the Jenkins server which orchestrates builds, reports results, and manages configuration
slave (or "build executor") 
a host on which builds actually execute (that is, the job actions run on that host)
build status 
one of STABLE, UNSTABLE, or FAILED (colors are right for our server, STABLE is blue on stock Jenkins servers)

In order to control what happens in a build, you need to find your way to the "configure" panel for a given job. If you don't have the right permissions for the job, any link related to the job will probably yield an HTTP 404 for you.

Build Slave Details

Jenkins utilizes BreadBoard hardware for build testing. All nodes are in the .mcs.anl.gov domain. The platforms, Jenkins node names, and hostnames are:

Platform                              Jenkins nodes (hostnames)
Ubuntu 12.04 64-bit with IB and MXM   ib64-1 (bb93), ib64-2 (bb73), ib64-3 (bb72), ib64-4 (bb75),
                                      ib64-5 (bb66), ib64-6 (bb65), ib64-7 (bb76), ib64-8 (bb87),
                                      ib64-9 (bb88), ib64-10 (bb94), ib64-11 (bb85), ib64-12 (bb74),
                                      ib64-13 (bb67), ib64-14 (bb79), ib64-15 (bb80), ib64-16 (bb84),
                                      ib64-17 (bb82)
Ubuntu 12.04 32-bit                   ubuntu32-1 (bb90), ubuntu32-2 (bb63), ubuntu32-3 (bb62), ubuntu32-4 (bb64)
FreeBSD 9.1 64-bit                    freebsd64-1 (bb95), freebsd64-2 (bb91)
FreeBSD 9.1 32-bit                    freebsd32-1 (bb92), freebsd32-2 (bb86)
OSX 10.8.5 64-bit                     osx-1 (mpich-mac1), osx-2 (mpich-mac2), osx-3 (mpich-mac3)
Solaris x86 (OpenIndiana)             solaris-1 (bb69)

If for some reason you want to log into these machines, use the 'autotest' user. Under the /sandbox/jenkins-ci/workspace/ directory you will find a tree of directories mirroring the configurations that Jenkins displays. For example, /mpich-review-tcp/compiler/gnu/jenkins_configure/strict/label/solaris/ is the working directory for the gnu, strict, solaris configuration.
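For example, to inspect one of these workspaces on an Ubuntu 64-bit slave (the hostname bb72 is taken from the table above), something along these lines should work:

$ ssh autotest@bb72.mcs.anl.gov
$ cd /sandbox/jenkins-ci/workspace/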

Powercycling nodes

If for some reason a node has become unresponsive and does not return after a graceful reboot command, the pm command can be used from bblogin to hard power-cycle nodes. For example:

 pm -c bb72

SLURM Cluster

SLURM is an open-source workload manager designed for Linux clusters of all sizes. The objective is to let SLURM manage all build slaves and schedule the test jobs that are submitted by Jenkins. SLURM has different partitions (queues) for different sets of nodes. Currently, all ib64 and ubuntu32 nodes are available in SLURM. The SLURM partitions are:

Partition        Nodes
ib64 (default)   ib64-[1-17]
ubuntu32         ubuntu32-[1-4]
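To check the partitions and the state of their nodes from the login node, the standard SLURM sinfo command can be used, for example:

$ sinfo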

Running Jobs through SLURM

Login & Compiling

The login node is bblogin1.mcs.anl.gov. You need to use your MCS account to login. You can also use the login node to compile your codes. Do not run large, long, multi-threaded, parallel, or CPU-intensive jobs on the login node.
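For example, assuming your MCS username is jdoe (a placeholder), logging in looks like:

$ ssh jdoe@bblogin1.mcs.anl.gov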

Interactive Jobs

Interactive jobs run on the compute nodes. You can start an interactive job to get a shell on a compute node:

$  srun -N1 --pty bash

This will submit an interactive job that requires one dedicated node to the default partition (ib64). Once the resource is available, a bash shell will be started on the allocated node and made available for your use.

The default time limit of a job on SLURM is set to 2 hours. In case more time is needed for the interactive job, please set it using the -t option. The following example starts an interactive job with a time limit of 3 hours.

$ srun -t 3:00:00 -N1 --pty bash

To quit your interactive job:

$  exit

Batch Jobs

To run a batch job, you need to create a SLURM job script.

#!/bin/bash
#SBATCH -N 2
#SBATCH -n 20
#SBATCH -t 10:00
#SBATCH -p ib64

mpiexec -n 20 ./cpi
exit 0

Once the script is created, you can submit it:

$  sbatch <job_script_name>

For both interactive jobs and batch jobs, you can specify the number of nodes (-N) and the number of processes (-n). Your processes will be evenly distributed across the allocated nodes. Use the "-t" option to set the time limit of your job and "-p" to select the partition, as in the sketch below.
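For example, the following hypothetical command (the program name ./my_test is a placeholder) runs 8 processes across 2 nodes on the ubuntu32 partition with a 1-hour time limit:

$ srun -p ubuntu32 -N 2 -n 8 -t 1:00:00 ./my_test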

Jenkins nightly jobs

In the "nightly" view, there are some dependencies between jobs. The dependency means that some jobs are triggered when the dependent upstream job is successfully completed. The following illustrates dependencies between jobs:

mpich-tarball --> armci-mpi
              --> mpich-abi-prolog --> mpich-abi
              --> mpich-master-freebsd
              --> mpich-master-mxm
              --> mpich-master-ofi
              --> mpich-master-osx
              --> mpich-master-portals4
              --> mpich-master-solaris
              --> mpich-master-special-tests
              --> mpich-master-ubuntu

'A --> B' indicates that job B (right) depends on job A (left). For example, mpich-abi-prolog depends on mpich-tarball, and its build is triggered only when the mpich-tarball build completes successfully.

mpich-tarball creates a tarball of the MPICH master branch using release.pl, and all downstream jobs use that tarball. Therefore, the MPICH master repository is pulled only once, in mpich-tarball, and autogen.sh is executed only in mpich-tarball, not in the downstream jobs.

Jenkins mpich-review details

The mpich-review repository is used both for Jenkins testing and for human reviews/signoffs.

There are several Jenkins jobs that pull from mpich-review.

The following branches are tested by mpich-review-tcp:

mpich-review/jenkins/all/*
mpich-review/jenkins/most/*
mpich-review/jenkins/tcp/*

The following branches are tested by mpich-review-mxm:

mpich-review/jenkins/all/*
mpich-review/jenkins/most/*
mpich-review/jenkins/mxm/*

The following branches are tested by mpich-review-portals4:

mpich-review/jenkins/all/*
mpich-review/jenkins/most/*
mpich-review/jenkins/portals4/*

The following branches are tested by mpich-review-sock:

mpich-review/jenkins/all/*
mpich-review/jenkins/sock/*

The following branches are tested by mpich-review-ofi:

mpich-review/jenkins/all/*
mpich-review/jenkins/ofi/*

The following branches are tested by mpich-review-armci-mpi:

mpich-review/jenkins/all/*
mpich-review/jenkins/armci-mpi/*
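As a hypothetical example (the remote name mpich-review and the branch name my-feature are placeholders), pushing a branch under jenkins/tcp/ would be picked up by the mpich-review-tcp job:

$ git push mpich-review HEAD:jenkins/tcp/my-feature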

Scripts for Jenkins

In order to ensure the consistency of the scripts and the clarity of their history, the scripts for the Jenkins jobs are maintained under the maint/jenkins/ directory of the MPICH source tree. If you need to change these scripts, please start your commit message with "maint/jenkins:".
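For example:

$ git commit -m "maint/jenkins: describe your change here"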

Build Scripts

There is a set of build scripts for existing Jenkins jobs:

test-worker-abi-prolog.sh - mpich-master-abi-prolog
test-worker-abi.sh        - mpich-master-abi
test-worker-tarball.sh    - mpich-tarball
test-worker-armci.sh      - All armci jobs
test-worker.sh            - All other jobs

XFAIL Scripts

A set of scripts is provided for setting xfail at build time.

set-xfail.sh
xfail.conf

The build script invokes set-xfail.sh, which sets xfails based on the settings in xfail.conf. The xfail.conf file enables conditional xfails. The syntax of an xfail setting is:

[jobname] [compiler] [jenkins_configure] [netmod] [queue] [sed of XFAIL]

Currently, it supports five types of conditions:

  1. jobname, the name of the Jenkins job; partial matches are allowed. For example, mxm matches both mpich-master-mxm and mpich-review-mxm.
  2. compiler, the name of the compiler. For example, gnu, intel.
  3. jenkins_configure, the option for ./configure. For example, default, debug.
  4. netmod, the type of netmod. For example, mxm and portals4.
  5. queue, the type of machine for testing. Available types are ib64, ubuntu32, freebsd32, freebsd64, solaris, osx.

Example for xfail.conf:

# xfail alltoall tests for all portals4 jobs
portals4 * * * * sed -i "s+\(^alltoall .*\)+\1 xfail=ticket0+g" test/mpi/threads/pt2pt/testlist
# xfail when the job is "mpich-master-mxm" or "mpich-review-mxm", and the jenkins_configure is "debug".
mxm gnu debug * * sed -i "s+\(^alltoall .*\)+\1 xfail=ticket0+g" test/mpi/threads/pt2pt/testlist
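As a rough, hypothetical sketch (not the actual set-xfail.sh; the real logic lives under maint/jenkins/), a conf-driven script along these lines could skip comments, match the first five fields against the current build parameters (treating "*" as a wildcard and allowing partial matches), and then run the sed command from the rest of the line:

#!/bin/sh
# Hypothetical sketch only; assumed inputs: JOB_NAME (standard Jenkins variable)
# plus compiler, jenkins_configure, netmod, and queue describing the current build.

# succeed if the pattern is "*" or appears as a substring of the value
matches() {
    [ "$1" = "*" ] && return 0
    case "$2" in *"$1"*) return 0 ;; esac
    return 1
}

while read -r job comp config nm q cmd; do
    case "$job" in ""|\#*) continue ;; esac   # skip blank lines and comments
    matches "$job"    "$JOB_NAME"          || continue
    matches "$comp"   "$compiler"          || continue
    matches "$config" "$jenkins_configure" || continue
    matches "$nm"     "$netmod"            || continue
    matches "$q"      "$queue"             || continue
    eval "$cmd"    # apply the sed command from the matching line
done < maint/jenkins/xfail.conf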

Possible Future Uses of Jenkins in MPICH

  • run the other test suites as well (MPICH1, Intel, C++, LLNL I/O)
  • automated performance regression testing, including historical performance trend plotting
  • packaging our nightly snapshot tarballs
  • packaging our final release tarballs
  • write a script to filter TAP results for more sophisticated xfail criteria, possibly based on machine or test environment (e.g., exclude bcast2 failures due to MPIEXEC_TIMEOUT on shared machines)
  • gate pushes to origin on 100% clean tests
  • automated builds on platforms that are harder to integrate with the old nightlies (BG/Q, niagara machines, etc.)
  • multi-machine tests
  • build an extreme feedback device (google for more ideas) :)
  • email notification for mpich-review tests (with test results)