Building Puppet RPMs based on the official PE stack

How to (somewhat) easily build Puppet Enterprise RPMs for SLES (and RHEL).

If you want to use Puppet on the SuSE Linux Enterprise Server (SLES) distribution then you are basically heading for trouble. SLES really suffers from the lack of a broad community software stack to supplement the distro itself; there is no SLES equivalent of what Red Hat Enterprise Linux (RHEL) has in the form of the EPEL (https://fedoraproject.org/wiki/EPEL) repositories.

You have a few choices though:
(1) use the bundled Puppet packages which SuSE ships with SLES.
(2) use packages found on SuSE’s OpenBuild (http://openbuildservice.org/) service.
(3) compile the Puppet community stack with all dependencies yourself, located in a custom directory tree.
(4) use the source RPMs provided by Puppetlabs.

Option (1) will leave you with notoriously old versions of Puppet which will make a number of recent features unavailable to you.

Option (2) will give you a few decent Puppet stacks to choose from. Unfortunately all the ones I found had their binaries and libraries located inside the /usr directory tree. This will mean that they are bound to conflict with a number of SLES distro packages eventually.

Option (3) is probably the technically most reliable option and will let you run any recent version of Puppet. The downside is that you will need to compile the whole stack of Puppet dependencies such as Ruby, Augeas, Java and other libraries. This is because many of the software dependencies shipped with the distro are too old to satisfy the requirements of any half-way recent Puppet version. So this option (3) will work, but it will be a LOT of work, especially if you need to maintain a Puppet stack for any combination of SLES 10, SLES 11 and SLES 12.

Option (4) can be used if you can live with the feature set of Puppet Enterprise 3.1 (this corresponds to Puppet community version 3.3). After this version Puppetlabs stopped releasing the SRPMs for their Puppet Enterprise stack.

If however Puppet 3.3 is good enough for you, then it will offer a number of advantages. In a heterogeneous UNIX/Linux landscape you will be able to use the exact same Puppet stack for all servers. Pretend that you have a server landscape of SLES 10, SLES 11, RHEL 5 and RHEL 6. This option will let you build packages for all of these platforms with the exact same versions of Puppet and all its dependency libraries. This ensures that if a Puppet manifest works on SLES then it will work on RHEL too, or vice-versa (as it should with a declarative config management tool). There are even sources for Solaris, Ubuntu Linux and other operating systems. See the complete list here:

http://downloads.puppetlabs.com/enterprise/sources/3.1.3/
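
Rebuilding one of those source RPMs by hand boils down to a wget plus rpmbuild --rebuild. A rough sketch (the SRPM file name below is just a placeholder; pick a real one from the URL above):

# download one of the source RPMs (placeholder name)
wget http://downloads.puppetlabs.com/enterprise/sources/3.1.3/<some-package>.src.rpm
# rebuild it into binary RPMs for the local platform
rpmbuild --rebuild <some-package>.src.rpm
# the results end up in the rpmbuild topdir,
# e.g. ~/rpmbuild/RPMS/ on recent RHEL or /usr/src/packages/RPMS/ on SLES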

To help you quickly build the RPMs you can use a few wrapper scripts I have made. They currently work for and have been tested on SLES 10, SLES 11, RHEL 5 and RHEL 6. I have published them on Github here:

https://github.com/twillert/pe-puppet-build

I have had very nice results using this Puppet Enterprise stack for some time now. Though it's a shame that Puppetlabs has stopped making their SRPMs publicly available.

If you have any problems using the scripts then let me know.

Generic Logstash-Forwarder Installation

This is a short tutorial to show how you can install Logstash-Forwarder on a 32bit Linux system.

I have an age-old 32-bit server which runs a Linux distro (OpenSUSE 10.2) that has been obsolete for a few years now. I have been wanting to get rid of the server for some time, but I have never gotten around to moving my friend's little website to another server. (yeah, security – I know, don't ask)

Anyway, having just started to play with Logstash, I figured this could be a nice little exercise in learning the Logstash-Forwarder a little better. My old server only has 500 MB RAM and it's already running Apache, MySQL, Postfix etc., so I couldn't afford to run a regular Logstash agent due to its bigger memory footprint. The Logstash-Forwarder (from now on known as LSF) is, like the name implies, just a program to forward log data to a regular Logstash agent. It includes encryption and load balancing while having a smaller memory footprint compared to a native Logstash Java process acting as forwarder. My LSF process only uses around 4-5 MB of memory.

The installation instructions for Logstash-Forwarder are normally straightforward:

  1. clone LSF from Github
  2. go build
  3. build RPM with fpm
  4. install RPM
  5. configure LSF
  6. run LSF

On this server I had no Git binary, no Go and no Ruby installed, so I had to do this manually:

Building LSF

wget https://go.googlecode.com/files/go1.1.linux-386.tar.gz
tar xzf go1.1.linux-386.tar.gz
export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin

wget --no-check-certificate https://github.com/elasticsearch/logstash-forwarder/archive/master.zip -O lsf.zip
unzip lsf.zip
cd logstash-forwarder-master
go build
mkdir -p /opt/logstash-forwarder/bin
cp logstash-forwarder-master /opt/logstash-forwarder/bin/logstash-forwarder
cp logstash-forwarder.init /etc/init.d/logstash-forwarder

Configuring LSF

If you don't already have some SSL certificates ready then you can create them yourself. On ubuntu.com there is a nice and easy recipe for it -> https://help.ubuntu.com/community/OpenSSL

Don't forget to also put the SSL files on your indexer server.
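
If all you need is a quick test setup, a single self-signed certificate will do. A minimal sketch, assuming the same paths and hostname as in the config below (for a self-signed cert you can point the "ssl ca" option at the certificate itself):

openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=your.indexer.com" \
  -keyout /root/myCA/server_key.pem -out /root/myCA/server_crt.pem

The CN should match the hostname you put in the "servers" list, otherwise the TLS verification is likely to fail.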

Create the config file /etc/logstash-forwarder. Mine looked like the one below; it forwards /var/log/messages and /var/log/apache2/access_log to a Logstash indexer agent.

/etc/logstash-forwarder:
{
  "network": {
    "servers": [ "your.indexer.com:5000" ],
    "ssl certificate": "/root/myCA/server_crt.pem",
    "ssl key": "/root/myCA/server_key.pem",
    "ssl ca": "/root/myCA/cacert.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/messages" ],
      "fields": { "type": "syslog" }
    }, {
      "paths": [ "/var/log/apache2/access_log", "/srv/www/vhosts/uwe-carstens.de/statistics/logs/access_log" ],
      "fields": { "type": "apache" }
    }
  ]
}

# Now first test LSF from the command line
/opt/logstash-forwarder/bin/logstash-forwarder -config /etc/logstash-forwarder

Once you can see that both syslog and Apache log entries from your forwarder are showing up on the indexer, things should be working as intended. You can now go ahead and configure LSF to start automatically when the server boots:

chkconfig logstash-forwarder on

You might have to amend the init script a bit depending on which Linux distro you are using.

Potential Problems

Go Version 1.2

I had problems with LSF crashing with strange error messages about /usr/local/go … something. I never found out what caused these problems, other than that they went away when I switched to Go version 1.1. Initially I had downloaded and used Go version 1.2, which is more recent.

TLS handshake problem

I spent a long time researching another problem I had regarding the initial TLS/SSL handshake between the LSF and the indexer. In the end I had to give up and ask for help on the Logstash IRC channel. The helpful “whack” was able to identify that the Java version on the indexer side was using SSLv3 while my LSF was using TLSv1, which was the reason for the failing handshake. I encountered this problem on another node which has SUSE Linux Enterprise Server (SLES) installed. The default Java engine on SLES is the IBM variant.

2013/12/09 22:21:19.330290 Connecting to 127.0.0.1:5000 (127.0.0.1)
2013/12/09 22:21:19.334321 Failed to tls handshake with 127.0.0.1 local error: protocol version not supported

The problem is better described here:

https://logstash.jira.com/browse/LOGSTASH-1702

https://groups.google.com/forum/#!topic/logstash-users/rAJ5eKpNU94

A workaround for this problem was to use another Java engine with the indexer; Oracle Java 1.7.0_45 worked for me. I don't know how you can tell your Java to use another TLS/SSL version. If you do, then please let me know, thanks!

Xymon CGI Scripts not working

I recently installed Xymon again on a few private servers. The server daemon was running on a machine where I had not installed Apache myself, so it took a little while to figure out that suExec does not go well with Xymon.

If your Xymon CGI scripts are not working and you see "Premature end of script headers: svcstatus.sh" in your Apache error_log, then your Apache might be configured with "SuexecUserGroup" (http://httpd.apache.org/docs/current/mod/mod_suexec.html). The lazy way to get the Xymon CGI scripts working is to remove suExec from your Apache config. This will only work if nothing else in the same web server scope depends on suExec.
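
A rough sketch of the lazy approach (Apache config locations and the reload command vary between distros):

# find the scope where suExec is switched on
grep -ri SuexecUserGroup /etc/apache2 /etc/httpd 2>/dev/null
# comment out the SuexecUserGroup line covering the Xymon CGI directory, then reload Apache
/etc/init.d/apache2 reload    # or: service httpd reload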

The not-so-lazy way of solving this is of course to figure out which suExec config settings work with Xymon. This would help in situations where you are in control of Apache and/or where you need suExec for something else inside the same scope (virtual hosts).

Xymon http://www.xymon.org/

Linux: getting Virtualbox Guest Additions to work with CentOS 5

I installed CentOS 5 as a guest OS in Virtualbox 3.0.4 running on a Windows Vista host.

It's probably me, but I couldn't get the guest additions for Virtualbox 3.0.4 installed for CentOS within 5 minutes. After a lot of googling and reading about DKMS for Linux, I decided to cheat and try a simple hack.

Supposedly the correct way to install the guest additions is to have a package named "dkms" installed. Unfortunately this package does not come with the CentOS 5 distribution DVD (or at least, I couldn't find it). As I didn't want to install other dependency packages (yum-priorities) nor change my RPM repository settings, I went for the quick and dirty hack:

# ln -s /usr/src/kernels/2.6.18-128.4.1.el5-i686 /usr/src/linux
# sh /media/VBOXADDITIONS_3.0.4_50677/VBoxLinuxAdditions-x86.run
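
Note that the symlink has to point at the source tree of the kernel you are actually running; you can double-check that first with:

# uname -r
# ls /usr/src/kernels/

The directory name under /usr/src/kernels should match the output of uname -r (plus the architecture suffix).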

Reboot, and voila. You now have better graphics, mouse and copy&paste support.

Linux: Network alias and default gateway on CentOS

To add another IP number to an existing network interface you only need to create an additional file with the extra IP configuration in the /etc/sysconfig/network-scripts directory. As you are creating an additional interface "on top of" an existing one, there should already be a configuration file named ifcfg-XXXX, where XXXX is the device name, e.g. eth0.

To configure a new IP number on top of the eth0 network device, create your new file ifcfg-eth0:1 like this:

# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth0:1
BOOTPROTO=static
BROADCAST=192.168.0.255
IPADDR=192.168.0.29
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes

To change or add your default gateway, you can edit the file /etc/sysconfig/network:

NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=centos
GATEWAY=192.168.0.1

To apply the changes:

# /etc/init.d/network restart
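
Afterwards you can quickly check that the alias actually came up:

# ifconfig eth0:1
# ip addr show eth0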

Linux: Network alias and default gateway on OpenSUSE

To add another IP address to an existing network interface, you only need to specify it in the /etc/sysconfig/network/ifcfg-eth0 like this:


STARTMODE='auto'
BOOTPROTO='static'
BROADCAST=192.168.0.255
ETHTOOL_OPTIONS=''
IPADDR=192.168.0.26/24
IPADDR_A=192.168.0.27/24
LABEL_A=1
MTU=''
NAME='79c970 [PCnet32 LANCE]'
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
USERCONTROL='no'

This will plumb and configure eth0:1 with the IP number 192.168.0.27. The suffix _A can be almost anything you like, and you can configure further network interfaces using additional suffixes. The “LABEL_” directive is important to set, otherwise your interface will not be configured. The value of the “LABEL_” directive becomes the interface instance (in this example the :1 in eth0:1), but you can choose whatever number you want, in case it needs to be something else in your world.

If you want to set or change your default gateway too, you can just change it in the file /etc/sysconfig/network/routes like this:

default 192.168.0.1 - -

After making both changes you can apply them by rerunning the RC script:

/etc/init.d/network restart
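
You can then verify that the labelled address was plumbed, for example with:

ip addr show dev eth0      # the additional address shows up as a secondary inet entry labelled eth0:1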

Actually pretty simple and straightforward, but if you are used to virtual interfaces on Solaris then this will save you 20 minutes of looking through /etc to discover how it's done on OpenSUSE.

It lives, for now…

I managed to breathe some life into my age-old PC. It is an AMD 2600+ with 512 MB of RAM, an 80 GB drive and a Plextor CD writer. It's pretty noisy but will have to do as a test machine for various things I want to try out until I find a suitable replacement.

After installing Solaris 10 5/09 on it, I found out that the on-board network interface was not recognised. On sun.com there is an HCL database of components that are supported or reported to work with Solaris. However, I did not know which vendor or model to look for, and I could not be bothered to install another OS on it just to find out the type/model of the network interface. I was about to throw the machine back under the desk when I realised I had an OpenSolaris 2009.06 LiveCD laying on the same desk. The LiveCD has a Device Utility program on the desktop which tells you the vendor and often the model name of all your hardware components; my network interface was a SiS 900. Back to the HCL page, download the driver archive sfe-2.6.0a.tar.gz, unpack it, quickly glance into the README.txt and off we go. The process was very smooth and only took a few minutes. Thanks to Masayuki Murayama for still maintaining these drivers.

These were the commands I used:

# gunzip -cd sfe-2.6.0a.tar.gz | tar xf -
# cd sfe-2.6.0a
# /usr/ccs/bin/make install
# ./adddrv.sh
# /usr/ccs/bin/make uninstall
# modload obj/sfe
# modinfo | grep sfe
234 f9ea9000 be8c 227 1 sfe (sis900/dp83815 driver 2.6.0)
# devfsadm -i sfe
# ifconfig sfe0 plumb
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
sfe0: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 0.0.0.0 netmask 0
        ether 0:b:6a:17:b4:bf

After booting I noticed that ZFS was complaining about a file not being read correctly, so I thought it was a good idea to have all blocks mirrored even though it's on the same drive. Better than nothing, and it might prolong the life of this machine a bit.
The error message went away after doing this, followed by a ZFS scrub.


# zfs set copies=2 rpool
# zpool scrub rpool
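
Once the scrub has finished you can check whether it came back clean:

# zpool status rpool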

Hopefully the machine will live long enough for the replacement to arrive…