Tip
For more information about tuning Passenger, please see:
http://www.modrails.com/documentation/Users%20guide%20Apache.html
The second aspect of the Apache configuration is the Apache virtual host stanza. The virtual host configures Apache to listen on TCP port 8140 and to encrypt all traffic using SSL and the certificates generated for use with the Puppet master. The virtual host also configures Passenger to use the system’s Ruby interpreter and provides the path to the Rack configuration file named config.ru (Listing 4-6).
Listing 4-6. Apache Puppet master configuration file
# /etc/httpd/conf.d/20_puppetmaster.conf
# Apache handles the SSL encryption and decryption. It replaces webrick and listens by default on 8140
Listen 8140
<VirtualHost *:8140>
    SSLEngine on
    SSLProtocol -ALL +SSLv3 +TLSv1
    SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP
    # Puppet master should generate initial CA certificate.
    # Ensure certs are located in /var/lib/puppet/ssl
    # Change puppet.example.com to the fully qualified domain name of the Puppet master, i.e. $(facter fqdn).
    SSLCertificateFile /var/lib/puppet/ssl/certs/puppet.example.com.pem
    SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet.example.com.pem
    SSLCertificateChainFile /var/lib/puppet/ssl/certs/ca.pem
    SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem
    # CRL checking should be enabled
    # disable next line if Apache complains about CRL
    SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem
    # optional to allow CSR requests, required if certificates are distributed to clients during provisioning.
    SSLVerifyClient optional
    SSLVerifyDepth 1
    SSLOptions +StdEnvVars
    # The following client headers record authentication information for downstream workers.
    RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
    RackAutoDetect On
    DocumentRoot /etc/puppet/rack/puppetmaster/public/
    <Directory /etc/puppet/rack/puppetmaster/>
        Options None
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>
# EOF /etc/httpd/conf.d/20_puppetmaster.conf
This configuration file may appear a little overwhelming. In particular, the RequestHeader statements are the source of much confusion among Puppet newcomers and veterans alike. When using this configuration file example, make sure to replace puppet.example.com with the fully qualified domain name of your own Puppet master system. The fully qualified domain name is easily found with the command:
$ facter fqdn
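If you prefer to script the substitution, something along the following lines works; this is only a sketch that assumes the configuration file path used in Listing 4-6, so adjust it for your distribution.
# Substitute this system's FQDN for the placeholder host name
FQDN=$(facter fqdn)
sed -i "s/puppet\.example\.com/${FQDN}/g" /etc/httpd/conf.d/20_puppetmaster.conf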
The first section of the configuration file ensures Apache binds to and listens on TCP port 8140, the standard port for a Puppet master server.
Next, the virtual host stanza begins with <VirtualHost *:8140>. Please refer to the Apache version 2.2 configuration reference (http://httpd.apache.org/docs/2.2/) for more information about configuring Apache virtual hosts.
SSL is enabled for the Puppet master-specific virtual host using SSLEngine on and setting the SSLCipherSuite parameters. In addition to enabling SSL encryption of the traffic, certificates are provided to prove the identity of the Puppet master service. Next, revocation is enabled using the SSLCARevocationFile parameter. The puppet cert command will automatically keep the ca_crl.pem file updated as we issue and revoke new Puppet agent certificates.
Finally, Apache is configured to verify the authenticity of the Puppet agent certificate. The results of this verification are stored in the environment as a standard environment variable. The Puppet master process running inside Passenger will check the environment variables set by the SSLOptions +StdEnvVars configuration in order to authorize the Puppet agent.
In the section immediately following the SSL configuration, the results of verifying the Puppet agent’s certificate are stored as client request headers as well as in standard environment variables. Later in this chapter, you’ll see how Client Request Headers may be consulted by downstream workers in order to provide authentication using standard environment variables.
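As a rough preview of how that works, the Puppet master reads those verification results from environment variables named in its puppet.conf settings. The values shown here match the X-Client-DN and X-Client-Verify headers set in Listing 4-6; this is only a sketch, and the full load-balanced configuration is developed later in the chapter.
# /etc/puppet/puppet.conf (sketch only)
[master]
    # trust the client DN and verification result passed on by the front end
    ssl_client_header = HTTP_X_CLIENT_DN
    ssl_client_verify_header = HTTP_X_CLIENT_VERIFY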
The last section of the Puppet master virtual host is the Rack configuration. Rack provides a common API for web servers to exchange requests and responses with a Ruby HTTP service like Puppet. Rack is commonly used to allow web applications like the Puppet Dashboard to be hosted on multiple web servers. This stanza looks for a special file called config.ru in /etc/puppet/rack/puppetmaster/ (see Listing 4-7).
Listing 4-7. Puppet master Rack configuration file
# /etc/puppet/rack/puppetmaster/config.ru
# a config.ru, for use with every rack-compatible webserver.
$0 = "master"
# if you want debugging:
# ARGV << "--debug"
ARGV << "--rack"
require 'puppet/application/master'
run Puppet::Application[:master].run
# EOF /etc/puppet/rack/puppetmaster/config.ru
Tip
If you installed Puppet from packages, check your “share” directory structure for a config.ru example provided by the package maintainer, often located at /usr/share/puppet/ext/rack/files/config.ru. For up-to-date Rack configuration files, check the ext directory in the most recently released version of Puppet. This may be found online at https://github.com/puppetlabs/puppet/tree/master/ext/rack/files
Before creating this configuration file, you may need to create the skeleton directory structure for Rack and the Puppet master rack application instance. To do so, you could execute the command:
mkdir -p /etc/puppet/rack/puppetmaster/{public,tmp}
Note
The config.ru Rack configuration file should be owned by the puppet user and group. Passenger will inspect the owner of this file and switch from the root system account to this less privileged puppet service account when Apache is started.
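For example, assuming the paths used above and a config.ru shipped with your packages, the skeleton and ownership could be set up along these lines (a sketch only; adjust the paths to your system):
# Create the Rack application skeleton and hand config.ru to the puppet user
mkdir -p /etc/puppet/rack/puppetmaster/{public,tmp}
cp /usr/share/puppet/ext/rack/files/config.ru /etc/puppet/rack/puppetmaster/
chown puppet:puppet /etc/puppet/rack/puppetmaster/config.ru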
We’ve covered the steps required to install and configure Apache and Passenger. You’re now ready to test your changes by starting the Apache service. Before doing so, make sure to double-check the ownership of the config.ru file, and make sure the Puppet master is not already running as a standalone WEBrick process. If there is a certificate problem, confirm that the existing SSL certificates are referenced correctly in the Puppet master Apache virtual host configuration file, as shown in Listing 4-6.
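For example (the puppetmaster service name here is an assumption and varies by distribution):
# Stop any standalone WEBrick master and verify config.ru ownership
puppet resource service puppetmaster ensure=stopped
ls -l /etc/puppet/rack/puppetmaster/config.ru    # should show puppet:puppet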
In order to start Apache and the new Puppet master service, you can again use the puppet resource command:
# puppet resource service httpd ensure=running enable=true hasstatus=true
service { 'httpd':
ensure => 'running',
enable => 'true'
}
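A quick way to confirm that Apache is now listening on the Puppet master port, assuming netstat is installed, is:
# Apache (httpd) should now own TCP port 8140
netstat -lntp | grep ':8140'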
Running the Puppet agent against the Apache Puppet master virtual host will allow you to test the system:
# puppet agent --test
info: Caching catalog for puppet.example.lan
info: Applying configuration version '1290801236'
notice: Passenger is setup and serving catalogs.
notice: /Stage[main]//Node[default]/Notify[Passenger]/message: defined 'message' as 'Passenger is setup and serving catalogs.'
notice: Finished catalog run in 0.38 seconds
The Puppet agent does not provide any indication that the Puppet master service has switched from WEBrick to Apache. The best way to tell if everything is working is to use the Apache access logs (see Listing 4-8). The Puppet master virtual host will use the combined access logs to record incoming requests from the Puppet agent.
Listing 4-8. Puppet requests in the Apache access logs
# tail /var/log/httpd/access_log
127.0.0.1 - - [24/Nov/2010:20:48:11 -0800] "GET /production/catalog/puppet.example.com?facts=…&facts_format=b64_zlib_yaml HTTP/1.1" 200 1181 "-" "-"
127.0.0.1 - - [24/Nov/2010:20:48:12 -0800] "PUT /production/report/puppet.example.com HTTP/1.1" 200 14 "-" "-"
In the access_log file we can see that the Puppet agent issues an HTTP GET request using the URI /production/catalog/puppet.example.com. We can also see that the Puppet agent sends the list of facts about itself in the request URI. The Puppet master compiles the modules and manifests into a configuration catalog and provides this catalog in the HTTP response. The “200” status code indicates that this operation was successful. Following the catalog run, the Puppet agent submits a report using a PUT request to the URI /production/report/puppet.example.com. We cover more information about reports and reporting features in Puppet in Chapter 10.
In addition to the Apache access_log, the Puppet master process itself continues to log to the system log. This information is available in /var/log/messages on Enterprise Linux-based systems and in /var/log/daemon.log on Ubuntu/Debian systems.
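For example, you can watch the system log while an agent run is in progress, using the path appropriate to your platform:
# Enterprise Linux
tail -f /var/log/messages
# Ubuntu/Debian
tail -f /var/log/daemon.log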
And that’s it! You’ve added an Apache and Passenger front-end to your Puppet master that will allow you to scale to a much larger number of hosts.
You’ve replaced the WEBrick HTTP server with the Apache web server. Sometimes, though, you need more capacity than a single machine can provide alone. In this case, you can scale the Puppet master horizontally rather than vertically. Horizontal scaling uses the resources of multiple Puppet masters in a cluster to get more capacity than any one system can provide. This configuration can cater for environments with tens of thousands of managed nodes.
There are many options and strategies available to provide a front-end request handler. We’re going to use HTTP load balancing to direct client requests to available back-end services. Each Puppet master worker is configured independently, using different Apache virtual host configurations bound to different ports on the loopback interface 127.0.0.1. This allows multiple Puppet master workers to be configured and tested on the same operating system instance and easily redistributed to multiple hosts; all you have to do is change the listening IP address and port numbers in the load balancer and worker configuration files.
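To make the idea concrete, a front end of this kind might look roughly like the following Apache sketch. The worker ports and balancer directives here (mod_proxy and mod_proxy_balancer) are illustrative assumptions, not the finished configuration, which is developed step by step later in the chapter.
# Hypothetical front-end sketch: spread agent requests across two
# back-end Puppet master workers bound to the loopback interface.
<Proxy balancer://puppetmaster>
    BalancerMember http://127.0.0.1:18140
    BalancerMember http://127.0.0.1:18141
</Proxy>
ProxyPass / balancer://puppetmaster/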
Load Balancing
For an introduction to the general problem of load balancing and scalable web architectures, we recommend the Wikipedia article titled Load balancing (computing) at http://en.wikipedia.org/wiki/Load_balancing_(computing). In particular, the idea of horizontal and vertical scaling is an important one to consider. The Puppet master scales well both horizontally and vertically, either by adding more systems working in parallel or by increasing the amount of memory and processor resources.