Listing 6-16.
Including exported file fragments in the load balancer configuration
Include /etc/httpd/conf.d.members/*.conf
The Example.com operator no longer needs to manually add each line once he configures Apache to include all files in the conf.d.members directory. Instead, he configures Puppet to manage the individual file fragments using exported resources.
The Puppet configuration to export each load balancer member is very similar to what we saw with the SSH host key example, and it is very simple. Each worker node needs to export a single balancermember resource for itself:
class worker {
  @@balancermember { "${fqdn}":
    url => "http://${fqdn}:18140",
  }
}
Notice that the Example.com operator uses the fully qualified domain name as the title of the resource. In doing so, he guarantees there will be no duplicate resource declarations, because each worker node should have a unique value for its fqdn fact. Declaring the defined resource in this manner exports two resources into the stored configuration database: the balancermember resource and the contained file resource shown in Listing 6-14. Neither of these resources will be collected on the worker nodes themselves.
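Although Listing 6-14 is not reproduced here, a minimal sketch of a balancermember defined type along those lines might look like the following; the fragment path and the BalancerMember content are assumptions for illustration:
# Sketch of a balancermember defined type (see Listing 6-14 for the
# book's actual definition); the file path and content are assumed.
define balancermember($url) {
  file { "/etc/httpd/conf.d.members/worker_${name}.conf":
    ensure  => file,
    content => "BalancerMember ${url}\n",
  }
}
Because the file resource is contained inside the defined type, exporting a balancermember resource exports the contained file fragment along with it.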
The last step in automating the configuration is for the Example.com operator to collect all of the exported resources on the load balancer node itself, as you can see in Listing 6-17.
Listing 6-17.
Collecting exported load balancer workers
class loadbalancer_members {
  Balancermember <<| |>> { notify => Service["apache"] }
}
The operator uses the double angle bracket syntax to collect all balancermember resources from the stored configuration database. In addition, he's using a parameter block to notify the Apache service of any changes Puppet makes to the balancermember resources. Just like with virtual resources, a parameter block may be specified to add additional parameters to collected resources. This syntax is new in Puppet 2.6.x; previous versions of Puppet could not create relationships to collected resources.
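For comparison, here is a sketch of the equivalent parameter block applied when collecting virtual (rather than exported) resources, which uses single angle brackets:
# Virtual resource collection uses single angle brackets, but accepts
# the same parameter block as exported resource collection:
Balancermember <| |> { notify => Service["apache"] }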
In this example, we've seen a simplified version of the file fragment pattern using Apache's Include configuration statement. Web server worker nodes can easily model their configuration in Puppet using a defined resource type, and the Example.com operator exports load balancer resources from that defined type to automatically reconfigure the front-end load balancer as new members come online.
In the next section, you'll see how exported resources are ideal for automatically reconfiguring a central Nagios monitoring system as new hosts are added to the network.
So far, you've seen how exported resources enable Puppet to automatically reconfigure the Example.com systems as new machines are brought online. You've seen how to automate the management of SSH known hosts keys to improve security, and how to automatically reconfigure Apache as additional capacity is added into a load balancer pool.
In this final example of exported resources, you'll see how the Example.com operator configures Puppet to automatically monitor new systems as they're brought online. The problem of monitoring service availability is something all sites share. Puppet helps solve this problem quickly and easily, and reduces the amount of time and effort required to manage the monitoring system itself.
This example specifically focuses on Nagios. Puppet has native types and providers for Nagios built into the software. The concepts in this section, however, apply to any software requiring a central system to be reconfigured when new hosts come online and need to be monitored.
In Nagios, the system performing the service checks is called the monitor system. The Nagios service running on the monitor system looks to the configuration files in /etc/nagios to sort out which target systems need to be monitored. The Example.com operator wants Puppet to automatically reconfigure the monitor system when a new target system comes online.
To accomplish this goal, the Example.com operator first configures two classes in Puppet. The first class, named nagios::monitor, manages the Nagios service and collects the service check resources exported by the nagios::target class. Let's take a look at these two classes now (see Listing 6-18).
Listing 6-18.
/etc/puppet/modules/nagios/manifests/monitor.pp
# Manage the Nagios monitoring service
class nagios::monitor {
  # Manage the packages
  package { [ "nagios", "nagios-plugins" ]: ensure => installed }
  # Manage the Nagios monitoring service
  service { "nagios":
    ensure    => running,
    hasstatus => true,
    enable    => true,
    subscribe => [ Package["nagios"], Package["nagios-plugins"] ],
  }
  # Collect resources and populate /etc/nagios/nagios_*.cfg
  Nagios_host <<| |>> { notify => Service["nagios"] }
  Nagios_service <<| |>> { notify => Service["nagios"] }
}
As you can see, the Example.com operator has configured Puppet to manage the Nagios packages and service. The class nagios::monitor should be included in the catalog for the monitor node. In addition to the packages and the service, two additional resource types are collected from the stored configuration database: all nagios_host and nagios_service resources. When collecting these host and service resources, the operator adds the notify metaparameter to ensure that the Nagios monitoring service automatically reloads its configuration if any new nodes have exported their information to the stored configuration database.
Note
Additional information about the nagios_host and nagios_service Puppet types is available online. There are a number of additional resource types related to Nagios management beyond these two basic service checks. If you need to make Nagios aware of the interdependencies between hosts to reduce the number of notifications generated during a service outage, or to manage custom Nagios service checks and commands, please see the comprehensive and up-to-date Puppet type reference at http://docs.puppetlabs.com/references/stable/type.html.
Let's see how the Example.com operator implements the nagios::monitor class in the Puppet configuration. With the nagios::monitor class added to the monitor node's classification in site.pp, the Example.com operator runs the Puppet agent on node monitor1, as you can see in Listing 6-19.
Listing 6-19.
The first Puppet agent run to configure Nagios
# puppet agent --test
info: Caching catalog for monitor1
info: Applying configuration version '1294374100'
notice: /Stage[main]/Nagios::Monitor/Package[nagios]/ensure: created
info: /Stage[main]/Nagios::Monitor/Package[nagios]: Scheduling refresh of Service[nagios]
notice: /Stage[main]/Nagios::Monitor/Package[nagios-plugins]/ensure: created
info: /Stage[main]/Nagios::Monitor/Package[nagios-plugins]: Scheduling refresh of Service[nagios]
notice: /Stage[main]/Nagios::Monitor/Service[nagios]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Nagios::Monitor/Service[nagios]: Triggered 'refresh' from 2 events
notice: Finished catalog run in 14.96 seconds
Notice that the first Puppet agent configuration run on monitor1 does not mention anything about managing Nagios_host or Nagios_service resources. This is because no nodes have yet been classified with the nagios::target class, and as a result there are no exported host or service resources in the stored configuration database.
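For context, the node classification in site.pp might look something like the following sketch; the book does not reproduce its site.pp, so these node statements are illustrative:
# Illustrative site.pp node classification (not shown in the book).
node "monitor1.example.com" {
  include nagios::monitor
}
node "target1.example.com" {
  include nagios::target
}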
The Example.com operator configures Puppet to export Nagios service and host resources using the class nagios::target. As you can see in Listing 6-20, the class contains only exported resources. The resources will not be managed on any nodes until they are collected, as the operator is doing in Listing 6-18.
Listing 6-20.
/etc/puppet/modules/nagios/manifests/target.pp
# This class exports nagios host and service check resources
class nagios::target {
  @@nagios_host { "${fqdn}":
    ensure  => present,
    alias   => $hostname,
    address => $ipaddress,
    use     => "generic-host",
  }
  @@nagios_service { "check_ping_${hostname}":
    check_command       => "check_ping!100.0,20%!500.0,60%",
    use                 => "generic-service",
    host_name           => "${fqdn}",
    notification_period => "24x7",
    service_description => "${hostname}_check_ping",
  }
}
In Listing 6-20, the Example.com operator has configured two exported resources, the first of which provides the monitor node with information about the target host itself. This resource defines a Nagios host in /etc/nagios/*.cfg on the nodes collecting these resources. The title of the nagios_host resource is set to the value of the $fqdn fact. Using the fully qualified domain name as the resource title ensures there will be no duplicate resources in the stored configuration database. In addition, the operator has added an alias for the target host using the short hostname in the $hostname fact. Finally, the address of the target node is set to the $ipaddress variable coming from Facter.
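When collected on the monitor node, a nagios_host resource like this renders into a standard Nagios host object. Assuming a target named target1.example.com, the generated definition would look roughly like the following; the address shown is illustrative:
# Approximate Nagios host object generated by the nagios_host type;
# the address value here is illustrative.
define host {
  host_name  target1.example.com
  alias      target1
  address    192.0.2.10
  use        generic-host
}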
Once a resource describing the target host is exported, the operator also exports a basic service check for the host. As we see, this service check performs a basic ICMP ping of the target node. The host_name parameter of the resource is also provided from Facter via the $fqdn fact. The check_command looks a bit confusing, and rightly so, as this parameter directly uses the Nagios configuration file syntax. Reading the check_ping line left to right, we interpret it to mean that Nagios will issue a warning when the ping takes longer than 100 milliseconds or experiences 20% packet loss, and a critical alert when the ping command takes longer than 500 milliseconds to complete or experiences more than 60% packet loss. The notification period is set to 24 hours a day, 7 days a week, a default notification period provided by the default Nagios configuration. Finally, the operator has configured a descriptive label for the service using the short name of the host set by Facter.
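Likewise, the exported nagios_service resource renders into a Nagios service object on the monitor node; for a target with the short hostname target1, the generated definition would look roughly like this sketch:
# Approximate Nagios service object generated by the nagios_service
# type for a host with the short name target1.
define service {
  use                  generic-service
  host_name            target1.example.com
  service_description  target1_check_ping
  check_command        check_ping!100.0,20%!500.0,60%
  notification_period  24x7
}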
Let's see how the monitor1 node is configured automatically when a target node is classified with this nagios::target class. First, the Example.com operator runs the Puppet agent on a new system named target1 (Listing 6-21).
Listing 6-21.
Puppet agent on target1 exporting Nagios checks
# puppet agent --test
info: Caching catalog for target1
info: Applying configuration version '1294374100'
notice: Finished catalog run in 0.02 seconds
It appears the Puppet agent run on target1 didn't actually manage any resources. This is true; the resources declared in the nagios::target class are exported to the stored configuration database rather than being managed on the node. They are not collected on the node target1, which is why the output of Listing 6-21 does not mention them.
We expect the Puppet agent on the node monitor1 to collect the resources exported by node target1. Let's see the results in Listing 6-22.
Listing 6-22.
Puppet agent collecting resources on monitor1
# puppet agent --test
info: Caching catalog for monitor1
info: Applying configuration version '1294374100'
notice: /Stage[main]/Nagios::Monitor/Nagios_service[check_ping_puppet]/ensure: created
info: /Stage[main]/Nagios::Monitor/Nagios_service[check_ping_puppet]: Scheduling refresh of Service[nagios]
notice: /Stage[main]/Nagios::Monitor/Nagios_host[target1.example.com]/ensure: created
info: /Stage[main]/Nagios::Monitor/Nagios_host[target1.example.com]: Scheduling refresh of Service[nagios]
notice: /Stage[main]/Nagios::Monitor/Service[nagios]: Triggered 'refresh' from 2 events
notice: monitor
notice: /Stage[main]//Node[monitor1]/Notify[monitor]/message: defined 'message' as 'monitor'
notice: Finished catalog run in 0.87 seconds
As we expect, running the Puppet agent on monitor1 after target1 has checked in causes the resources to be collected from the stored configuration database. Looking back at the nagios::monitor class in Listing 6-18, we also see the operator has added the notify parameter to ensure that the Nagios service automatically reloads the new configuration information after all of the resources are collected.