The Complete Magazine on Open Source

SaltStack: Grains, Pillars, Targeting and Render Systems

In this third and concluding article in the series on SaltStack, the software for configuration management, infrastructure automation and cloud orchestration, we look at its various components. (Read Parts 1 and 2 here.)

Let’s start with the first component of SaltStack, called grains.

Grains is an interface that provides information specific to a minion. The information available through the grains interface is static: it is loaded once, when the Salt minion starts, and does not change while the minion runs. Typical grains describe the running kernel or the operating system. Grain matching is case insensitive; the names FOO and foo target the same grain.

Listing grains
Available grains can be listed by using the grains.ls module:

salt '*' grains.ls

Grains data can be listed by using the grains.items module:

salt '*' grains.items

Using grains on the command line
1. To ping all machines with the Red Hat OS, use the following command:

salt -G 'os:redhat' test.ping

2. To pull the number of cores on all machines that have a 64-bit CPU, use the command given below:

salt -G 'cpuarch:x86_64' grains.item num_cpus

Types of grains
There are two types of grains—core grains (these come with the Salt installation) and custom grains. The latter are found in:

  • /etc/salt/grains
  • /etc/salt/minion
  • Grains directory, synced to minions

You can set up custom grains using either of the following methods.
Minion (the easiest and preferred option): The configuration file, by default, is /etc/salt/minion. Add custom static grains under the grains: section:

grains:
  datacenter: datacenter4
  environment: test
  department: HR
  role: webserver

In the /etc/salt/grains file, custom grains are configured in the same way as in the example above, but without the grains: line (and without the extra level of indentation).
Master: You can write grains using a Python function. In fact, you can write public functions in Python to implement custom grains on the respective minions. Salt will execute all the 'public' functions found in the modules located in the grains package or the custom grains directory. The function must return a Python dict, where the keys in the dict are the names of the grains and the values are the values of the grains.
Custom grains should be placed in a _grains directory located under the file_roots specified by the master config file. The default path is /srv/salt/_grains. Custom grains will be distributed to the minions when state.highstate is run, or by executing the saltutil.sync_grains or saltutil.sync_all functions.

#!/usr/bin/env python
def yourfunction():
    # initialize a grains dictionary
    grains = {}
    # some logic that sets grains, for example:
    # grains['role'] = 'webserver'
    return grains
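A fuller sketch of such a module follows; the function and grain names here are made-up examples, but the contract is the one described above: Salt calls every public function in a synced _grains module and merges the returned dict into the minion's grains.

```python
#!/usr/bin/env python
# Hypothetical custom grain module sketch. Salt calls every public
# function in a synced _grains module; each must return a dict mapping
# grain names to values. The grain names below are illustrative only.
import platform


def host_facts():
    grains = {}
    # Derive grains from information available on the minion itself.
    grains['py_implementation'] = platform.python_implementation()
    grains['host_shortname'] = platform.node().split('.')[0]
    return grains
```

Dropping this file into /srv/salt/_grains and running saltutil.sync_grains would make the returned keys available like any other grain.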

Core grains can be overridden by custom grains. As there are several ways of defining custom grains, there is an order of precedence which should be kept in mind when defining them. The order of evaluation is as follows:

  • Core grains
  • Custom grains in /etc/salt/grains
  • Custom grains in /etc/salt/minion
  • Custom grain modules in the _grains directory, synced to minions

Each successive evaluation overrides the previous ones, so any grains defined in /etc/salt/grains that have the same name as a core grain will override that core grain. Similarly, /etc/salt/minion overrides both core grains and grains set in /etc/salt/grains, and custom grain modules will override any grains of the same name.
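This precedence can be pictured as a series of dict updates, one per layer. The sketch below is illustrative only (the values are made up and this is not Salt's internal code); it simply mirrors the override order.

```python
# Illustrative sketch of grain precedence: later layers override earlier
# ones, mirroring the evaluation order described above. All values made up.
core_grains = {'os': 'RedHat', 'num_cpus': 4}         # shipped with Salt
etc_salt_grains = {'role': 'appserver', 'env': 'qa'}  # /etc/salt/grains
minion_config = {'role': 'webserver'}                 # grains: section of /etc/salt/minion
grain_modules = {'role': 'loadbalancer'}              # synced custom grain modules

grains = {}
for layer in (core_grains, etc_salt_grains, minion_config, grain_modules):
    grains.update(layer)

# 'role' ends up with the custom-module value, while grains that no
# higher layer touches ('os', 'num_cpus', 'env') survive from lower layers.
```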

Matching grains in a top file
Grains can be used in the .sls file and the top file. For example, a top file can match minions on a grain (here, the custom role grain set earlier) to assign them states:

base:
  'role:webserver':
    - match: grain
    - tomcat
  'role:database':
    - match: grain
    - database

Pillars

A pillar is an interface that generates and stores highly sensitive data specific to a particular minion, such as cryptographic keys and passwords. It stores data as key/value pairs, and the data is managed in a similar way to the Salt state tree. The Salt master maintains a pillar_roots setup that matches the structure of the file_roots used in the Salt file server.
The configuration for the pillar_roots in the master config file is identical in behaviour and function to file_roots. To start setting up the pillar, create the /srv/pillar directory and uncomment pillar_roots in the /etc/salt/master configuration file.

pillar_roots:
  base:
    - /srv/pillar

Top files
Similar to the state tree, the pillar comprises .sls files and has a top file.

base:
  '*':
    - pkgs

In the above top file, pillar data in the pkgs file is available to all the minions. Like Salt states, the pkgs pillar can be located at /srv/pillar/pkgs.sls or /srv/pillar/pkgs/init.sls and it follows the same namespace as the Salt state.

pkgs:
  - pam_ldap
  - vim

Grains within a pillar
Grains can be used in the pillar's top file as well as in pillar .sls files, and this is useful for delivering specific pillar data to minions with different properties. For example:

base:
  '*':
    - packages
  'role:webserver':
    - match: grain
    - webserver
  'os:RedHat':
    - match: grain
    - rhnpackages

Viewing pillar data
Once the pillar is set up, the data can be viewed on the minion via the pillar module, which comes with the functions pillar.items and pillar.raw. pillar.items will return a freshly reloaded pillar, while pillar.raw will return the current in-memory pillar without a refresh:

salt '*' pillar.items

To ensure that the minions have the new pillar data, issue a command to them asking that they fetch their pillars from the master:

salt '*' saltutil.refresh_pillar

Like grains, pillars can also be used on the command line. The syntax is as follows:

salt -I 'somekey:specialvalue' test.ping
salt -I 'master:id:myminion' pillar.items

Remote execution

Remote execution is one of the core features of Salt; most tasks can be achieved through it, and it saves a lot of time. The syntax of command line execution is as follows:

salt <target> <function> [arguments]

The target component allows you to filter which minions should run the given function. The default filter is a glob on the minion ID:

salt '*' test.ping
salt 'machine_name' test.ping
salt 'web*' test.ping

Arbitrary globs (and, with the -E option, regular expressions) can be used in targets.
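Glob targeting behaves like shell-style wildcard matching against minion IDs, which can be illustrated with Python's standard fnmatch module (the minion names below are made up for the example):

```python
from fnmatch import fnmatch

# Hypothetical minion IDs registered with a master. Salt's default
# target filter applies shell-style glob rules to IDs like these.
minions = ['web01.example.com', 'web02.example.com', 'db01.example.com']

# 'web*' selects the same minions that `salt 'web*' test.ping` would target.
matched = [m for m in minions if fnmatch(m, 'web*')]
```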

Grains and pillars can be used to specify targets:

salt -G 'os:redhat' test.ping
salt -I 'master:id:myminion' pillar.items

An example of multiple target types:

salt -C 'G@os:redhat or I@master:id:myminion' test.ping

Shell commands can be run on the minions with the cmd.run function:

salt '*' cmd.run 'uname -a'
salt '*' cmd.run 'rpm -qa | grep vim'

Examples of space-delimited arguments:

salt '*' cmd.exec_code python 'import sys; print sys.version'
salt '*' cmd.exec_code bash 'echo $(seq 1 5)'

The render/template system
SLS data doesn't need to be represented in YAML. Salt defaults to YAML because it is simple, straightforward and easy to learn. The default rendering system is the YAML + Jinja renderer, which first passes the template through the Jinja2 templating system and then through the YAML parser. The benefit here is that full programming constructs are available when creating SLS files.
The templating system is configured through the renderer value in the master config file. Managed files and dynamic content can also be rendered through templates; for managed files, you specify the rendering system with the template argument (for example, template: jinja).
The supported formats are:

  • Jinja + YAML
  • Mako + YAML
  • Wempy + YAML
  • Jinja + JSON
  • Mako + JSON
  • Wempy + JSON
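The two-pass pipeline can be demonstrated outside Salt, assuming the jinja2 and PyYAML packages are installed; the SLS fragment below is a made-up example, not Salt's own code.

```python
# Two-pass rendering sketch: Jinja first, then the YAML parser.
import yaml
from jinja2 import Template

sls_source = """
{% if grains['os'] == 'RedHat' %}
httpd:
  pkg.installed
{% else %}
apache2:
  pkg.installed
{% endif %}
"""

# Pass 1: Jinja renders the template using grain data.
rendered = Template(sls_source).render(grains={'os': 'RedHat'})
# Pass 2: the YAML parser turns the rendered text into data structures.
data = yaml.safe_load(rendered)
```

On a Red Hat grain the result is the single httpd state; on anything else, the apache2 branch is rendered instead.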

Here's an example of a state for the motd file:

/etc/motd:
  file.managed:
    {% if grains['os'] == 'RedHat' %}
    - source: salt://motd/files/motd.rhel
    {% elif grains['os'] == 'Ubuntu' %}
    - source: salt://base/motd/files/motd.ubuntu
    {% endif %}
    - force: True

In the above .sls file, the file.managed state uses the Jinja templating system to generate different messages for Red Hat and Ubuntu OSs, using an if statement.
Here’s how you can use iterations (for-loops):

{% for usr in ['john','henry','pal'] %}
{{ usr }}:
  user.present
{% endfor %}

An example of pillar data in templates:

users:
  john: 1000
  henry: 1001
  pal: 1002

Update the top.sls file of the pillar as follows:

base:
  '*':
    - pkgs
    - users

Refresh the pillar data to ensure users are available for all minions as follows:

salt '*' saltutil.refresh_pillar

salt '*' pillar.items users (you should see the users in the output)



You can create users by using the pillar data .sls file, as follows:

{% for user, uid in pillar.get('users', {}).items() %}
{{ user }}:
  user.present:
    - uid: {{ uid }}
{% endfor %}
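Assuming jinja2 and PyYAML are available, a loop like this can be previewed outside Salt; the pillar data is hard-coded below purely for illustration.

```python
# Preview a pillar-driven user loop: Jinja render, then YAML parse.
import yaml
from jinja2 import Template

sls_source = """
{% for user, uid in pillar.get('users', {}).items() %}
{{ user }}:
  user.present:
    - uid: {{ uid }}
{% endfor %}
"""

# Hard-coded stand-in for the pillar data a minion would receive.
pillar = {'users': {'john': 1000, 'henry': 1001, 'pal': 1002}}
rendered = Template(sls_source).render(pillar=pillar)
states = yaml.safe_load(rendered)
```

Each pillar entry becomes one user.present state carrying the matching uid.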

This is the last in the series of three articles on SaltStack. We hope you enjoyed reading these articles as much as we did putting them together for you.