The Puppet agent runs without trouble. Looking back at the output of the Puppet master, we should see the following information noting the establishment of a database connection to the MySQL server:
info: Connecting to mysql database: puppet
info: Expiring the node cache of debian.example.com
info: Not using expired node for debian.example.com from cache; expired at Mon Dec 27 15:05:21 -0500 2010
info: Caching node for debian.example.com
notice: Compiled catalog for debian.example.com in environment production in 0.03 seconds
Once a Puppet agent has connected to a Puppet master with Stored Configurations enabled, the database tables should be created automatically. The automatic creation of the database tables may be verified using the mysql command line utility, as shown in Listing 6-7.
Listing 6-7. Verifying stored configuration tables
# mysql -u puppet -p \
-D puppet \
-e 'select name,last_compile from hosts;' \
--batch
name last_compile
mail.example.com 2010-12-14 15:06:21
This command may appear slightly complicated, so let's work through each of the options. The -u option specifies the MySQL account to use when connecting to the MySQL server. The -p option prompts for a password, and the -D option specifies the database to connect to. These three options may be different for you if you chose to change any of the default names or passwords when setting up the MySQL database. The -e option tells the mysql command to execute the given statement and exit after doing so. The select statement prints the name and last_compile fields from all rows in the hosts table. Finally, the --batch option tells the mysql command to output the information in a simplified format.
The results of the mysql command show the host named “mail” is successfully storing configuration information in the MySQL database.
With the MySQL tables created in the puppet database, we have the option to add an index to improve the access time of storing and retrieving configuration information. This index is optional, but we recommend it for sites with more than one hundred Puppet-managed hosts.
First, connect to the puppet database using the puppet MySQL account:
# mysql -u puppet -p -D puppet
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 54
Server version: 5.0.51a-24+lenny4 (Debian)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
Next, add the index for fields frequently accessed by Stored Configurations, as shown in Listing 6-8.
Listing 6-8. Adding an index on the resources table
mysql> create index exported_restype_title on resources (exported, restype, title(50));
Query OK, 5 rows affected (0.09 sec)
Records: 5 Duplicates: 0 Warnings: 0
This command creates an index on the exported, restype, and title fields in the resources table.
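To confirm that the index was created, the SHOW INDEX statement may be used (we omit its fairly verbose output here):

mysql> show index from resources;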
Note
Up-to-date information regarding the tuning of database settings and indices is available on the Puppet community wiki. Stored configurations are being increasingly used at large sites, and improvements to performance and settings are an evolving and ongoing process. For more information, please see: http://projects.puppetlabs.com/projects/1/wiki/Using_Stored_Configuration.
With stored configurations enabled in the Puppet master, we can now export resources from a node's catalog. These exported resources may then be collected on another node, allowing nodes to exchange configuration information dynamically and automatically. In this section we'll examine a number of common use cases for exported resources.
The first example will export the public SSH host identification key from each Puppet-managed node and store the resources centrally in the stored configuration database. Every node may then collect all of the public host keys from all other nodes. This configuration increases security and eliminates the “unknown host” warning commonly shown when logging in via SSH for the first time.
The second example we provide uses exported resources to dynamically re-configure a load balancer when additional Puppet master worker processes come online.
Finally, you'll see how to dynamically and automatically reconfigure the Nagios monitoring system to check the availability of new Puppet-managed systems.
When new systems are brought online in a large network, the known_hosts files of all other systems become stale, causing “unknown host” warnings when logging in using SSH. Puppet provides a simple and elegant solution to this problem using stored configurations and exported resources. When new systems are brought online, Puppet updates the known_hosts file on all other systems by adding the public host key of the new system. This automated management of the known_hosts file also increases security, by reducing the likelihood of a “man-in-the-middle” attack remaining unnoticed.
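For reference, each collected host key becomes one line in /etc/ssh/ssh_known_hosts, roughly of the following form (the hostname, aliases, address, and truncated key shown here are hypothetical):

mail.example.com,mail,192.168.100.50 ssh-rsa AAAAB3NzaC1yc2EAAA...

The first comma-separated field holds the host's names and addresses, followed by the key type and the public key itself.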
We learned in this chapter that any resource may be declared virtually using the @ symbol before the resource declaration. A similar syntax, @@, is used when resources should be declared virtually and exported to all other nodes using stored configurations. The use of @@ allows any node's catalog to collect the resource. Listing 6-9 shows how this looks for SSH public keys.
Listing 6-9. Exporting SSH key resources
class ssh::hostkeys {
  @@sshkey { "${fqdn}_dsa":
    host_aliases => [ "$fqdn", "$hostname", "$ipaddress" ],
    type         => dsa,
    key          => $sshdsakey,
  }
  @@sshkey { "${fqdn}_rsa":
    host_aliases => [ "$fqdn", "$hostname", "$ipaddress" ],
    type         => rsa,
    key          => $sshrsakey,
  }
}
This Puppet code snippet looks a little strange compared to what we've worked with so far.
The class ssh::hostkeys should be included in the catalog of all nodes in the network for their SSH public host keys to be exported and collectible. All of the resource parameters are set using variables coming from Facter fact values. In Listing 6-9, two sshkey resources have been declared as virtual resources and exported to the central stored configuration database, as indicated by the @@ symbols. The titles of the resources contain the suffix _dsa or _rsa, preventing these two resources from conflicting with each other. To make sure each resource has a unique title for the entire network, the title also contains the fully qualified domain name of the node exporting the public host keys.
The host_aliases parameter provides additional names and addresses by which the node may be reached. This information is important to prevent the “unknown host” warnings when connecting to the node from another system. In this example, we're providing the fully qualified domain name, short hostname, and IP address of the system. Each of these values comes from Facter and is automatically provided.
The type and key parameters provide the public key information itself. The values of $sshdsakey and $sshrsakey also come from Facter and are automatically available on each host.
Exporting these two sshkey resources is not sufficient to configure the known_hosts file on each node. We must also collect all exported sshkey resources for Puppet to fully manage the known_hosts file and keep it up to date, as shown in Listing 6-10.
Listing 6-10. Collecting exported sshkey resources
class ssh::knownhosts {
  Sshkey <<| |>> { ensure => present }
}
The ssh::knownhosts class should be included in the catalog for all nodes where Puppet should manage the SSH known_hosts file. Notice that we've used double angle braces to collect resources from the stored configuration database. This is similar to collecting virtual resources; however, virtual resources use only a single pair of angle braces. We're also specifying that the ensure parameter should take on the value “present” when collecting the exported sshkey resources.
Note
The ability to specify additional parameters when a resource is collected is available in Puppet 2.6.x and later. It will not work with Puppet 0.25.x and earlier.
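Collectors may also take a search expression to select just a subset of the exported resources. As a hypothetical variation (not part of the Example.com configuration), the following collector would gather only the exported RSA keys:

Sshkey <<| type == rsa |>> { ensure => present }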
With the two classes configured and added to the node classification for every host in the network, the operator verifies that host keys are collected on every node.
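As a minimal sketch of this classification, both classes could be included in the default node definition (the Node[default] paths in the log output below suggest Example.com classifies hosts this way; your node definitions may differ):

node default {
  include ssh::hostkeys
  include ssh::knownhosts
}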
First, our operator runs the Puppet agent on the mail.example.com host. Since this is the first host to run the Puppet agent, he expects only two SSH keys to be collected: the keys exported by the mail host itself, as you can see in Listing 6-11.
Listing 6-11. The first Puppet agent run on mail.example.com
# puppet agent --test
info: Caching catalog for mail.example.com
info: Applying configuration version '1293584061'
notice: /Stage[main]//Node[default]/Sshkey[mail.example.com_dsa]/ensure: created
notice: /Stage[main]//Node[default]/Sshkey[mail.example.com_rsa]/ensure: created
notice: Finished catalog run in 0.02 seconds
Note the two sshkey resources being collected from the stored configuration database: the DSA and RSA public keys exported from the mail.example.com host.
In Listing 6-12, the operator runs Puppet on the web server, expecting the public keys for both the web host and the mail host to be collected.
Listing 6-12. The second Puppet agent run on web.example.com
# puppet agent --test
info: Caching catalog for web.example.com
info: Applying configuration version '1293584061'
notice: /Stage[main]//Node[default]/Sshkey[mail.example.com_rsa]/ensure: created
notice: /Stage[main]//Node[default]/Sshkey[mail.example.com_dsa]/ensure: created
notice: /Stage[main]//Node[default]/Sshkey[web.example.com_rsa]/ensure: created
notice: /Stage[main]//Node[default]/Sshkey[web.example.com_dsa]/ensure: created
notice: Finished catalog run in 0.43 seconds
The Puppet agent on web.example.com manages a total of four SSH host key resources, as shown in Listing 6-12. The RSA and DSA keys from both the mail host and the web host are now being exported and stored in the configuration database.
Finally, running the Puppet agent once more on the mail.example.com host should result in the two public keys exported by the web host being collected and managed. Listing 6-13 shows how the operator verifies this.
Listing 6-13. The third Puppet agent run on mail.example.com
# puppet agent --test
info: Caching catalog for mail.example.com
info: Applying configuration version '1293584061'
notice: /Stage[main]//Node[default]/Sshkey[web.example.com_rsa]/ensure: created
notice: /Stage[main]//Node[default]/Sshkey[web.example.com_dsa]/ensure: created
info: FileBucket adding /etc/ssh/ssh_known_hosts as {md5}815e87b6880446e4eb20a8d0e7298658
notice: Hello World!
notice: /Stage[main]//Node[default]/Notify[hello]/message: defined 'message' as 'Hello World!'
notice: Finished catalog run in 0.04 seconds
As expected, the two SSH public key resources exported by the web host are correctly being collected on the mail host. By exporting and collecting two sshkey resources, the staff of Example.com can rely on all hosts automatically knowing the identity of all other hosts, even as new hosts are added to the network. So long as Puppet runs frequently, every system will have a known_hosts file containing the public key of every other system in the network.
In the next example, you'll see how this feature also allows the automatic addition of worker nodes to a load balancer pool.
In the previous example, SSH public key resources were exported and stored in the configuration database so that every host in the network is able to collect the public identification keys of every other host in the network. Along the same lines, but on a much smaller scale, you can also export resources to a single node on the network, such as a load balancer.
In this example, HTTP worker nodes will export configuration resources that only the load balancer will collect. This combination eliminates the need to manually reconfigure the load balancer every time a new worker node is added to the network.
Each load balancer worker will export a defined resource type representing its load balancer configuration. Let's see how the Example.com operator configures this system now. The load balancer software being used in this example is Apache. The Example.com operator models the configuration of an HTTP worker using a file fragment placed into the directory /etc/httpd/conf.d.members/. Let's first take a look at the defined resource type, shown in Listing 6-14.
Listing 6-14. Load balancer worker defined resource type
define balancermember($url) {
  file { "/etc/httpd/conf.d.members/worker_${name}.conf":
    ensure  => file,
    owner   => 0,
    group   => 0,
    mode    => "0644",
    content => " BalancerMember ${url}\n",
  }
}
This configuration file fragment contains a single line: the URL of a member of the load balancer pool. Using a defined resource type is recommended, since all resources declared within it will be exported when the defined type itself is exported.
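To sketch how this defined type ties the two sides together (a hypothetical outline; the operator's actual export declaration appears in Listing 6-16), each worker could export an instance of balancermember, and the load balancer alone would collect them all:

# On each Puppet master worker: export a balancermember describing this node.
# The port 18140 matches the front-end configuration in Listing 6-15.
@@balancermember { $fqdn:
  url => "http://${fqdn}:18140",
}

# On the load balancer only: collect every exported balancermember,
# creating one file fragment per worker in /etc/httpd/conf.d.members/.
Balancermember <<| |>>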
The load balancer configuration is similar to the Apache configuration presented in the Scaling Puppet chapter. Without using exported resources, the Example.com operator might define his load balancer configuration statically, as shown in Listing 6-15.
Listing 6-15. Load balancer front-end configuration
BalancerMember http://puppetmaster1.example.com:18140
BalancerMember http://puppetmaster2.example.com:18140
BalancerMember http://puppetmaster3.example.com:18140
In this example, three Puppet master workers have been statically defined. If the Example.com operator would like to add additional capacity, he would have to add a fourth line to this Apache configuration block. Exported resources allow him to skip this manual step and automatically add the configuration once a new worker node comes online and is configured by Puppet. To accomplish this, the Example.com operator replaces all of the BalancerMember statements with an Include statement that reads in all of the file fragments (a sketch of what this might look like follows below). In the Puppet manifest, these configuration statements are modeled using the balancermember defined type, shown in Listing 6-16.
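Such an Include statement might look like the following (a sketch only; the balancer name puppetmaster and the surrounding proxy block are assumptions based on the front-end configuration above, not the operator's exact file):

<Proxy balancer://puppetmaster>
  # Read in one BalancerMember fragment per Puppet-managed worker
  Include /etc/httpd/conf.d.members/*.conf
</Proxy>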