Jenkins Node Setup (manual)¶
DISCLAIMER: We are currently migrating the setup of build slaves to Ansible.
Not all Jenkins nodes do the same kind of work, so the exact packages that need to be installed
depend heavily on what your node is supposed to do. The list below covers some of the dependencies
needed to run builds of Osmocom Cellular software.
For docker-ce, add the following apt repository:

deb [arch=amd64] https://download.docker.com/linux/debian jessie stable
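Enabling this repository can be sketched as follows (a sketch assuming Debian jessie and the official Docker GPG key URL; adjust the release name for newer hosts):

```shell
# Fetch Docker's GPG key and register the repository (run as root).
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
echo "deb [arch=amd64] https://download.docker.com/linux/debian jessie stable" \
    > /etc/apt/sources.list.d/docker.list
apt-get update
```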
apt-get install openjdk-7-jre-headless doxygen g++ libtalloc-dev libpcsclite-dev make gcc \
  pkgconf libtool autoconf automake libortp-dev asciidoc mscgen virt-manager git libsctp-dev \
  libpcap-dev osc libc-ares-dev libgps-dev libsofia-sip-ua-glib-dev libssl-dev libsqlite3-dev \
  libusb-dev libffi-dev libfftw3-dev flex libdbi-dev libsnmp-dev libncurses5-dev libgsm1-dev \
  python-minimal python3 libdbd-sqlite3 cppcheck coccinelle htop docker-ce \
  libgmp-dev gawk texinfo flex bison bc libsigsegv-dev libffi-dev libusb-1.0-0-dev libreadline-dev \
  debhelper devscripts gcc-arm-none-eabi git-buildpackage dh-systemd dh-autoreconf libzmq3-dev \
  libgnutls28-dev libsystemd-dev sqlite3 stow libmnl-dev \
  python-setuptools python3-setuptools
For build hosts running Debian jessie, the following extra repository is required to get a libuhd recent enough to build osmo-trx:
echo "deb http://ftp.de.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/uhd.list
apt-get update
apt-get install -t jessie-backports libboost-dev libuhd-dev
To run contrib/jenkins.sh to test ARM optimized instructions, several extra dependencies are required. It is recommended to install at least:
apt-get install qemu qemu-user-static qemu-system-arm proot
Usually the slave nodes (Debian 9) have a Debian armhf rootfs pre-installed in /opt/qemu-img, which contrib/jenkins.sh uses by default. To build the image yourself, you can use debootstrap (instructions are available in the same jenkins.sh script). On top of the previous packages, you then need to install:
apt-get install debootstrap fakeroot
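Building the rootfs with debootstrap can be sketched as follows (a sketch, run as root; the target path /opt/qemu-img and the Debian release are assumptions here — the authoritative steps live in contrib/jenkins.sh):

```shell
# First stage: download and unpack the armhf base system.
debootstrap --arch=armhf --foreign stretch /opt/qemu-img http://deb.debian.org/debian
# Second stage runs inside the rootfs through qemu-user-static.
cp /usr/bin/qemu-arm-static /opt/qemu-img/usr/bin/
chroot /opt/qemu-img /debootstrap/debootstrap --second-stage
```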
Another possibility is downloading a base image from e.g. https://uk.images.linuxcontainers.org/images/debian/stretch/armhf and then using chroot/proot to install the missing packages in the rootfs (see again jenkins.sh).
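With that approach, unpacking the image and working inside it with proot might look like this (a sketch; the tarball name is a placeholder — check the image server for the current build):

```shell
# Unpack the downloaded rootfs tarball, then run apt inside it without
# needing root, via proot + qemu user-mode emulation.
mkdir -p /opt/qemu-img
tar -xf rootfs.tar.xz -C /opt/qemu-img
proot -R /opt/qemu-img -q qemu-arm-static apt-get update
```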
There is a jenkins job that installs the common python test code.
In order for this to work, the osmocom-build user must have write access to /usr/local/*:
chown -R root:osmocom-build /usr/local
chmod -R g+w /usr/local
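A quick way to verify that the permissions took effect (a sketch; run as root):

```shell
# Try creating and removing a file as the osmocom-build user.
sudo -u osmocom-build touch /usr/local/bin/.write-test \
    && rm /usr/local/bin/.write-test \
    && echo "write access OK"
```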
Then configure this jenkins job to also run on the new build slave and trigger a run.
If the jenkins job is not an option for some reason, the manual installation would be:
git clone git://git.osmocom.org/python/osmo-python-tests
cd osmo-python-tests/
python2 ./setup.py install
python3 ./setup.py install
(We want to install for both python2 and python3.)
Rationale for installing in /usr/local instead of user's home: the executable py scripts are installed in ~/.local/bin, which isn't usually in the $PATH. It is trivial to run scripts in /usr/local/bin from any jenkins job; manipulating $PATH seems not so trivial on jenkins slaves. Feel free to change this if you know a better way.
- Grab the coverity tools and set up the directory structure. A login is required to fetch them, so this is difficult to automate.
For the osmo-gsm-tester build of the sysmobts binaries, we require an SDK installed on the build slave.
It should match the root fs installed on the actual sysmobts hardware.
At the time of writing, this is poky-1.5.4 of a particular version <forgotten>, but we will probably upgrade to the 201705 image and SDK soon.
FIXME: Define which particular toolchains those are and how to install them.
There appear to be a Dockerfile + scripts that are not maintained in git. They are located in ~/docker of the osmocom-build user. Copy them from another node and execute ./update.sh.
Commands to remove unused docker images:
Delete all stopped containers (including data-only containers):
docker rm $(docker ps -a -q)
Delete all 'untagged/dangling' (<none>) images:
docker rmi $(docker images -q -f dangling=true)
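On recent Docker versions (an assumption about what is installed on the node), both cleanups are also available through a single subcommand:

```shell
# Removes all stopped containers, dangling images and unused networks.
docker system prune
# Additionally removes unused volumes.
docker system prune --volumes
```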
docker and overlay¶
- if the physical host hosting the lxc container doesn't have 'overlay.ko'
loaded, overlay/overlay2 storage drivers are not available to docker
- docker "silently" falls back to using "vfs" which is basically not copy-on-write
but copy-everything-all-the-time, and which consumes massive amounts of storage
How to test:
$ docker info | grep Storage
Storage Driver: overlay2
If it shows vfs, something is wrong.
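If it does show vfs, one possible fix is loading the module on the physical host and restarting docker so it re-detects the storage driver (a sketch; assumes a systemd-managed docker on the host):

```shell
# On the physical host (not inside the lxc container):
modprobe overlay
echo overlay > /etc/modules-load.d/overlay.conf   # make it persistent
systemctl restart docker
```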
su osmocom-build
ln -s osmo-ci/scripts ~/bin
Add your new node to the build matrix of
https://jenkins.osmocom.org/jenkins/job/update-osmo-ci-on-slaves/configure and execute it. This will git clone
osmo-ci to the $HOME directory and build a docker container used for future builds.
(btw, since setting up the jenkins job for osmo-python-tests as above, osmocom-build has write access to /usr/local/bin now, and we could also install osmo-ci/scripts to /usr/local/bin)
Install packages as indicated in https://git.osmocom.org/osmo-gsm-manuals/tree/INSTALL.txt
- Create ~/.oscrc with username/password for OBS nightly builds
SSH / ftp.osmocom.org upload¶
- Create ~/.ssh/id_rsa for access to ftp.osmocom.org upload
- Add the host key of ftp.osmocom.org (for the right port, "48"). This is currently for the
  apiuser in the ftp.osmocom.org jail
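Setting this up can be sketched as follows (run as the osmocom-build user; verify the scanned host key fingerprint out of band before trusting it):

```shell
# Generate the upload key (no passphrase) and record the host key
# for the non-standard port 48.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-keyscan -p 48 ftp.osmocom.org >> ~/.ssh/known_hosts
```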
Some jobs do a 'git commit' in a local clone (e.g. Osmocom_nightly_packages) and hence need some id set:
su osmocom-build
cat > ~/.gitconfig <<END
[user]
    email = firstname.lastname@example.org
    name = Your Name
END
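To sanity-check the result, git can read the identity back; the snippet below demonstrates this in a throwaway HOME with the placeholder values from above:

```shell
# Write the config into a temporary HOME and read it back with git.
export HOME=$(mktemp -d)
cat > "$HOME/.gitconfig" <<END
[user]
    email = firstname.lastname@example.org
    name = Your Name
END
git config --get user.email
git config --get user.name
```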