Pro Puppet

Authors: Jeffrey McCune, James Turnbull

The X-SSL-Subject and X-Client-DN headers contain the same information: the common name from the verified SSL certificate presented by the Puppet agent. This information is provided in two headers to support back-end HTTP servers other than Apache. The X-Client-Verify header indicates to the back-end worker whether or not the load balancer was able to verify the authenticity of the client SSL certificate. Apache sets this value to SUCCESS if the client certificate is signed by a trusted Certificate Authority, is not listed in the Certificate Revocation List, and has not expired.
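Each of those three conditions corresponds to a mod_ssl directive in the front-end virtual host. The following is a condensed sketch, not a copy of Listing 4-11; the file paths shown are typical Puppet CA locations and are assumptions to check against your own configuration:

```apache
# Front-end mod_ssl settings behind SSL_CLIENT_VERIFY=SUCCESS (sketch; paths assumed)
SSLVerifyClient      optional                            # request, but do not demand, a client cert
SSLVerifyDepth       1
SSLCACertificateFile /var/lib/puppet/ssl/certs/ca.pem    # "signed by a trusted CA"
SSLCARevocationFile  /var/lib/puppet/ssl/ca/ca_crl.pem   # "not listed in the CRL"
```

Certificate expiry is checked as part of the same verification step; there is no separate directive for it.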

The information set in the client request headers directly matches the SetEnvIf configuration lines configured in the back-end Puppet master virtual host. We can see these lines in /etc/httpd/conf.d/40_puppetmaster_worker_18140.conf, as we configured them in Listing 4-9:

# Obtain Authentication Information from Client Request Headers
SetEnvIf X-Client-Verify "(.*)" SSL_CLIENT_VERIFY=$1
SetEnvIf X-SSL-Client-DN "(.*)" SSL_CLIENT_S_DN=$1

The authentication information in a load-balanced Puppet master configuration is passed from the load balancer to the back-end workers using client request headers. This design allows heterogeneous front-end and back-end HTTP systems to work together as long as the back-end HTTP server is able to read the Puppet agent certificate common name and determine whether or not the certificate is currently valid. Once read from the headers, the back-end HTTP server sets this information in two environment variables for Puppet to reference.
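Putting the two halves side by side may help. The RequestHeader lines below are a sketch of what the front-end load balancer in Listing 4-11 does (mod_headers syntax; treat the exact values as assumptions to verify against your Listing 4-11), while the SetEnvIf lines are the worker-side lines shown above:

```apache
# Front end (load balancer): export SSL results as client request headers (sketch)
RequestHeader set X-SSL-Subject   %{SSL_CLIENT_S_DN}e
RequestHeader set X-Client-DN     %{SSL_CLIENT_S_DN}e
RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

# Back end (worker): turn the headers back into environment variables
SetEnvIf X-Client-Verify "(.*)" SSL_CLIENT_VERIFY=$1
SetEnvIf X-SSL-Client-DN "(.*)" SSL_CLIENT_S_DN=$1
```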

The third important section of the front-end load balancer configuration in Listing 4-11 tells Apache to route all requests to the pool of Puppet master virtual hosts. This section is composed of three directives: ProxyPass, ProxyPassReverse, and ProxyPreserveHost. Together they tell Apache that the virtual host listening on port 8140 should forward all Puppet agent requests to the pool of Puppet master workers named balancer://puppetmaster.
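In outline, the routing section looks like this. This is a sketch, not a copy of Listing 4-11: the BalancerMember list lives in the Proxy section of 30_puppetmaster_frontend_8140.conf, and your member addresses and options may differ:

```apache
# Forward every request on the front end to the worker pool (sketch)
ProxyPass         / balancer://puppetmaster/
ProxyPassReverse  / balancer://puppetmaster/
ProxyPreserveHost On

# The pool itself: one BalancerMember per back-end worker virtual host
<Proxy balancer://puppetmaster>
    BalancerMember http://127.0.0.1:18140
    BalancerMember http://127.0.0.1:18141
</Proxy>
```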

Tip
Detailed information about mod_proxy and additional configuration options are available online at http://httpd.apache.org/docs/2.0/mod/mod_proxy.html.

Testing the Load Balancer Configuration

We’re now almost ready to test the new Puppet master configuration using the Puppet agent. Before doing so, you need to make sure each virtual host is logging information in a clearly defined location. This will allow you to trace the Puppet agent request as it passes through the front-end load balancer to the back-end worker virtual host.

To make it easier, let's separate out the logging events for each virtual host by adding ErrorLog and CustomLog configuration options to each configuration file, as shown in Listing 4-13.

Listing 4-13. Configuring front-end logging

ErrorLog /var/log/httpd/balancer_error.log
CustomLog /var/log/httpd/balancer_access.log combined
CustomLog /var/log/httpd/balancer_ssl_requests.log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

Only three lines need to be inserted into the VirtualHost stanza to enable logging on the front end. Every request coming into the Puppet master infrastructure will pass through the front-end virtual host and will be logged to the balancer_access.log file.

Worker virtual hosts do not handle SSL-encrypted traffic and only require two configuration lines to be inserted into the VirtualHost stanza. Every request routed to a specific worker will be logged into that worker's access log file. In Listing 4-14, we've included the TCP port number of the worker to uniquely identify the log file and the associated worker.

Listing 4-14. Configuring worker logging

ErrorLog /var/log/httpd/puppetmaster_worker_error_18140.log
CustomLog /var/log/httpd/puppetmaster_worker_access_18140.log combined

Once the front-end load balancer and back-end worker virtual hosts have been configured to log to their own log files, you need to restart Apache and make sure the log files were created properly.

# service httpd restart
# ls -l {balancer,puppetmaster}*.log
-rw-r--r-- 1 root root 0 Nov 26 15:36 balancer_access.log
-rw-r--r-- 1 root root 0 Nov 26 15:36 balancer_error.log
-rw-r--r-- 1 root root 0 Nov 26 15:36 balancer_ssl_requests.log
-rw-r--r-- 1 root root 0 Nov 26 15:36 puppetmaster_worker_access_18140.log
-rw-r--r-- 1 root root 0 Nov 26 15:36 puppetmaster_worker_error_18140.log

With the appropriate log files in place, you can now test the load balancer and the single back-end worker using puppet agent:

# puppet agent --test
info: Caching catalog for puppet.example.com
info: Applying configuration version '1290814852'
notice: Passenger is setup and serving catalogs.
notice: /Stage[main]//Node[default]/Notify[Passenger]/message: defined 'message' as 'Passenger is setup and serving catalogs.'
notice: Finished catalog run in 0.43 seconds

Here we've run the puppet agent command and obtained a catalog from the Puppet master. The Apache load-balancing virtual host listening on puppet.example.com port 8140 received the Puppet agent request, forwarded it along to the back-end Puppet master virtual host listening on port 18140, and then provided the response back to the Puppet agent.

We can check the Apache logs to verify that this is what actually happened, as shown in Listings 4-15 through 4-17.

Listing 4-15. Load balancer request log

# less balancer_access.log
127.0.0.1 - - [26/Nov/2010:15:40:51 -0800] "GET /production/catalog/puppet.example.com?facts=…
&facts_format=b64_zlib_yaml HTTP/1.1" 200 944 "-" "-"
127.0.0.1 - - [26/Nov/2010:15:40:53 -0800] "PUT /production/report/puppet.example.com HTTP/1.1" 200 14 "-" "-"

Listing 4-16. Load balancer error log

root:/var/log/httpd # less balancer_error.log
[Fri Nov 26 15:40:53 2010] [error] (111)Connection refused: proxy: HTTP: attempt to connect to
127.0.0.1:18141 (127.0.0.1) failed
[Fri Nov 26 15:40:53 2010] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)

Listing 4-17. First Puppet master worker request log

# less puppetmaster_worker_access_18140.log
127.0.0.1 - - [26/Nov/2010:15:40:51 -0800] "GET /production/catalog/puppet.example.lan?facts=…
&facts_format=b64_zlib_yaml HTTP/1.1" 200 944 "-" "-"
127.0.0.1 - - [26/Nov/2010:15:40:53 -0800] "PUT /production/report/puppet.example.lan HTTP/1.1" 200 14 "-" "-"

In Listing 4-15, you can see the incoming Puppet agent catalog request at 3:40:51 PM. The front-end load balancer receives the request and, according to the balancer_error.log shown in Listing 4-16, disables the worker virtual host on port 18141. This leaves one additional worker in the balancer://puppetmaster pool, which receives the request, as indicated in puppetmaster_worker_access_18140.log shown in Listing 4-17. Finally, the Puppet agent uploads the catalog run report a few seconds later.

What happens, however, if all the back-end workers are disabled? Let's see. To find out, disable the Puppet master worker virtual host by renaming its configuration file:

# mv 40_puppetmaster_worker_18140.conf{,.disabled}

Restarting Apache:

# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]

And then running the Puppet agent again:

# puppet agent --test
err: Could not retrieve catalog from remote server: Error 503 on SERVER …
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
err: Could not send report: Error 503 on SERVER …

We've discovered that the Puppet agent receives error 503 when no back-end Puppet master worker virtual hosts are available. The front-end load balancer runs through its list of back-end workers defined in the Proxy balancer://puppetmaster section of the 30_puppetmaster_frontend_8140.conf file. Finding no available back-end workers, the front end returns HTTP error code 503, Service Temporarily Unavailable, to the client. This HTTP error code also appears in the front-end load balancer's error log file (Listing 4-18).

Listing 4-18. Apache front-end load balancer error log

# less balancer_error.log
[Fri Nov 26 15:59:01 2010] [error] (111)Connection refused: proxy: HTTP: attempt to connect to
127.0.0.1:18140 (127.0.0.1) failed
[Fri Nov 26 15:59:01 2010] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)
[Fri Nov 26 15:59:01 2010] [error] (111)Connection refused: proxy: HTTP: attempt to connect to
127.0.0.1:18141 (127.0.0.1) failed
[Fri Nov 26 15:59:01 2010] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)
[Fri Nov 26 15:59:01 2010] [error] proxy: BALANCER: (balancer://puppetmaster). All workers
are in error state

Now that you've seen the infrastructure with one back-end master working and with none, let's bring both workers online. In doing so, you will configure the second Puppet master worker.

The second back-end worker running on TCP port 18141 is almost identical to the first worker virtual host configuration, except the port number is incremented by one. First re-enable the first back-end worker, and then define the second back-end worker:

# mv 40_puppetmaster_worker_18140.conf{.disabled,}

This command renamed the disabled configuration file back to its original name of 40_puppetmaster_worker_18140.conf, effectively re-enabling the worker virtual host listening on port 18140.

# sed s/18140/18141/ 40_puppetmaster_worker_18140.conf \
  > 41_puppetmaster_worker_18141.conf

This command reads the configuration file of the first worker and writes out a new configuration file for the second worker. As the original file is read and the new file written, the sed command performs a search-and-replace, replacing all instances of "18140" with "18141." The result is two nearly identical worker virtual hosts, differing only in port number and log files:

# rsync -axH /etc/puppet/rack/puppetmaster{,_18141}/

The Rack configuration for each worker process is identical and needs no modification when bringing additional workers online. Using the rsync command, we’re able to create an identical copy of the existing Puppet rack configuration for use with the new worker virtual host.
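The sed step generalizes to any number of additional workers. The following is a sketch of that idea, not a script from the book; it runs against a stand-in configuration file in a temporary directory rather than the real /etc/httpd/conf.d files:

```shell
#!/bin/sh
# Sketch (assumption, not from the book): generate additional worker vhost
# configs from the first worker's file by rewriting the port number.
set -e
cd "$(mktemp -d)"

# Stand-in for /etc/httpd/conf.d/40_puppetmaster_worker_18140.conf
cat > 40_puppetmaster_worker_18140.conf <<'EOF'
Listen 18140
ErrorLog /var/log/httpd/puppetmaster_worker_error_18140.log
CustomLog /var/log/httpd/puppetmaster_worker_access_18140.log combined
EOF

for port in 18141 18142; do
    prefix=$((port - 18100))   # 18141 -> 41, matching the NN_ file-name convention
    sed "s/18140/${port}/g" 40_puppetmaster_worker_18140.conf \
        > "${prefix}_puppetmaster_worker_${port}.conf"
done

ls *_puppetmaster_worker_*.conf
```

Each generated file would still need its own Rack directory (the rsync step above) before Passenger can serve it.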

Using the diff command, we're able to easily visualize the lines modified by the sed command. As you can see in Listing 4-19, the difference between the two worker configuration files is only a matter of the listening port and the log files.

Listing 4-19. Comparison of two Puppet master worker virtual host configurations

# diff -U2 4{0,1}*.conf
--- 40_puppetmaster_worker_18140.conf   2010-11-26 16:19:21.000000000 -0800
+++ 41_puppetmaster_worker_18141.conf   2010-11-26 16:19:31.000000000 -0800
@@ -1,4 +1,4 @@
-Listen 18140
-<VirtualHost *:18140>
+Listen 18141
+<VirtualHost *:18141>
 SSLEngine off
@@ -8,6 +8,6 @@
 RackAutoDetect On
-DocumentRoot /etc/puppet/rack/puppetmaster_18140/public/
-<Directory /etc/puppet/rack/puppetmaster_18140/public/>
+DocumentRoot /etc/puppet/rack/puppetmaster_18141/public/
+<Directory /etc/puppet/rack/puppetmaster_18141/public/>
    Options None
    AllowOverride None
@@ -16,6 +16,6 @@
-ErrorLog /var/log/httpd/puppetmaster_worker_error_18140.log
-CustomLog /var/log/httpd/puppetmaster_worker_access_18140.log combined
+ErrorLog /var/log/httpd/puppetmaster_worker_error_18141.log
+CustomLog /var/log/httpd/puppetmaster_worker_access_18141.log combined

As you can see, we configure a unique Rack DocumentRoot for each back-end Puppet master worker process. This is important to allow Passenger to track and identify each of the multiple Puppet masters. After configuring the second back-end worker virtual host, restart Apache:

# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]

And test the Puppet agent again:
