SaltStack
Documentation | Components overview | A guide by yet.org | Useful cheatsheet
Installation
Usually, you need to add the SaltStack repo to your package manager's repo list and install via the package manager. See the repo page for more info.
For quick installation, use the salt-bootstrap script
curl -o bootstrap-salt.sh -L https://bootstrap.saltproject.io
# On the master
sudo sh bootstrap-salt.sh -M
# On the minion
sudo sh bootstrap-salt.sh
# Notable options
# -x python3 Use python3 for the install. Should be default on most distros but
# just in case
# -L Also install salt-cloud and required python-libcloud package
# -M Also install salt-master
# -S Also install salt-syndic
# -N Do not install salt-minion
# -X Do not start daemons after installation
# -A Pass the salt-master DNS name or IP. This will be stored under
Default ports to open on the master:
- 4505 : publish port
- 4506 : return port
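For example, if the master host uses ufw (an assumption; adjust for your firewall of choice):
sudo ufw allow 4505/tcp   # publish port
sudo ufw allow 4506/tcp   # return port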
Manage salt-minion and salt-master using the systemd units salt-master.service
and salt-minion.service
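For example:
sudo systemctl enable --now salt-master    # on the master
sudo systemctl enable --now salt-minion    # on the minion
sudo systemctl restart salt-minion         # e.g. after editing /etc/salt/minion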
Minion config (Reference)
master is where the minion connects to; id is the identifier of the minion. The FQDN of the host is used if id is not specified.
# /etc/salt/minion
master: saltmaster.your.domain
id: saltminion-01
Alternatively, the id can be specified in the file: /etc/salt/minion_id
echo 'saltminion-01' > /etc/salt/minion_id
The id from the config file takes precedence
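To check which id a minion will actually use, you can query its local config:
salt-call --local config.get id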
Accepting minions
The master needs to trust the minions connecting to it.
The salt-key
command is used to manage this trust
Use salt-call --local key.finger to view the fingerprint of the minion and cross-check it from the master before accepting the minion.
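For example, with the minion id used above:
# On the minion
salt-call --local key.finger
# On the master, list pending keys and compare the fingerprint
salt-key -l unaccepted
salt-key -f saltminion-01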
Accept the minion using salt-key -a saltminion-01
# Command quick reference
salt-key
# Options
# <none> : same as -L
# -L : lists all public keys (accepted, unaccepted, rejected)
# -l, --list ARG : ARG is one of (un[accepted], acc[epted], rej[ected], den[ied], all)
#
# -a <id> : accept the minion with the given id
# -A : accept all
#
# -r <id> : reject the minion with the given id
# -R : reject all
#
# -d <id> : delete the minion with the given id
# -D : delete all
#
# -p <id> : print the public key
# -P : print all public keys
#
# -f <id> : print the fingerprint
# -F : print all fingerprints
Salt commands
- salt-master : daemon used to control the Salt minions
- salt-minion : daemon which receives commands from a Salt master.
- salt-key : management of Salt server public keys used for authentication.
- salt : main CLI to execute commands across minions in parallel and query them too.
- salt-ssh : allows controlling minions using SSH as the transport
- salt-run : execute a salt runner
- salt-call : runs module.function locally on a minion; use --local if you don't want to contact your master
- salt-cloud : VM provisioning in the cloud
- salt-api : daemons which offer an API to interact with Salt
- salt-cp : copy a file to a set of systems
- salt-syndic : daemon running on a minion that passes through commands from a higher master
- salt-proxy : receives commands from a master and relays them to devices that are unable to run a full minion.
- spm : frontend command for managing salt packages.
Executing commands on the minion
The general structure to execute commands on the minion is
salt <target> <module.function> <arguments>
Test the connection to the minion using ping
salt '*' test.ping
Embedded documentation is available using
salt '*' sys.doc test.ping
View all commands on a module using
salt '*' sys.list_functions test
Useful commands
Note: mostly taken from this blog
List modules, functions etc
salt '*' sys.list_modules # List all the preloaded Salt modules
salt '*' sys.list_functions # List all the functions
salt '*' sys.list_state_modules # List all the state modules
salt '*' sys.list_state_functions # List all the state functions
Network related commands (reference)
salt '*' network.ip_addrs # Get IP of your minion
salt '*' network.ping <hostname> # Ping a host from your minion
salt '*' network.traceroute <host> # Traceroute a host from your minion
salt '*' network.get_hostname # Get hostname
salt '*' network.mod_hostname # Modify hostname
Minion Status
salt-run manage.status # What is the status of all my minions? (both up and down)
salt-run manage.up # Any minions that are up?
salt-run manage.down # Any minions that are down?
Jobs
salt-run jobs.active # get list of active jobs
salt-run jobs.list_jobs # get list of historic jobs
salt-run jobs.lookup_jid <job_id> # get details of this specific job
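Jobs can also be started asynchronously; salt then returns the job ID (jid) immediately, which can be looked up with the runners above:
salt '*' test.sleep 30 --async       # returns the jid without waiting for results
salt-run jobs.lookup_jid <job_id>    # fetch the results once the job has finished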
System
salt '*' system.reboot # Reboot all the targeted minions
salt '*' status.uptime # Get the uptime of all our minions
salt '*' status.diskusage
salt '*' status.loadavg
salt '*' status.meminfo
Managing packages
salt '*' pkg.list_upgrades # get a list of packages that need to be upgraded
salt '*' pkg.upgrade # Upgrades all packages via apt-get dist-upgrade (or similar)
salt '*' pkg.version htop # get current version of the htop package
salt '*' pkg.install htop # install or upgrade the htop package
salt '*' pkg.remove htop
Managing services on the minion
salt '*' service.status <service name>
salt '*' service.available <service name>
salt '*' service.stop <service name>
salt '*' service.start <service name>
salt '*' service.restart <service name>
salt '*' ps.grep <service name>
Running ad-hoc commands
salt '*' cmd.run 'echo Hello World' # Returns the output as a string
salt '*' cmd.run_all 'ls -la' # Returns more info like return code, pid
# etc as a dict
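cmd.run also accepts keyword arguments such as runas and cwd:
salt '*' cmd.run 'ls -la' runas=nobody cwd=/tmp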
Targeting minions (Reference)
Glob matching
salt '*web*' test.ping
salt 'minion-*' test.ping
salt 'minion-??' test.ping
salt 'minion-0[1-9]' test.ping
Perl Regular expression matching
salt -E 'minion' test.ping
salt -E 'minion-.*' test.ping
salt -E '^minion-01$' test.ping
salt -E 'minion-((01)|(02))' test.ping
List matching
salt -L 'minion-01,minion-02,minion-03' test.ping
Grain and Pillar matching
Grains are static information regarding a minion. This includes information about things like the OS, CPU architecture, kernel, network state, etc.
To view all the grains available for the minions, use
salt '*' grains.items
To get the value of a grain, use
salt '*' grains.get osfullname
Grains can be added and deleted using
salt '*' grains.setval web frontend
salt '*' grains.delval web
To target minions based on grains, use:
# Use --grain or -G to match on grains
salt -G 'os:Ubuntu' test.ping
# Use --grain-pcre or -P for perl style regex on grains
salt -P 'os:Arch.*' test.ping
Pillars are secure user-defined variables stored on the master and assigned to minions
Operations on pillars are similar to the ones for grains
salt '*' pillar.items
salt '*' pillar.get hostname
To target minions based on pillars, use:
# Use --pillar or -I to match pillars
salt -I 'branch:mas*' test.ping
# Use --pillar-pcre or -J for perl style matching on pillars
salt -J 'role:prod.*' test.ping
Matching using IP addresses.
# Use -S or --ipcidr to match using IP cidr notation
salt -S 192.168.40.20 test.ping
salt -S 10.0.0.0/24 test.ping
Compound matching. This combines all of the above types of matching
salt -C 'minion-* and G@os:Ubuntu and not L@minion-02' test.ping
# The different letters correspond to each matching type
# G Grains glob
# E Perl regexp on minion ID
# P Perl regexp on Grains
# L List of minions
# I Pillar glob
# S Subnet/IP address
# R Range cluster
In state or pillar files, matching looks like:
'192.168.1.0/24':
- match: ipcidr
- internal
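Similarly, a grain match in a top file looks like this (webserver is just a placeholder for whatever state you want to assign):
'os:Ubuntu':
  - match: grain
  - webserver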
Nodegroups are user-defined groupings of your minions. They are like aliases
for matching your nodes. Nodegroups can be defined in the /etc/salt/master
file using compound statements
nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'
  group3: 'G@os:Debian and N@group1'
  group4:
    - 'G@foo:bar'
    - 'or'
    - 'G@foo:baz'
The master needs to be restarted after defining the nodegroups. They can then be used as follows:
salt -N group1 test.ping
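Nodegroups can also be used inside compound matchers with N@ (as group3 above already does):
salt -C 'N@group1 and G@os:Debian' test.ping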
A batch size can be useful for rolling out updates
# syntax:
# -b BATCH, --batch=BATCH, --batch-size=BATCH
# where BATCH is a percentage or an absolute number
salt -G 'os:Debian' --batch-size 25% apache.signal restart
# --batch-wait=BATCH_WAIT Wait the specified time in seconds after each job
# done before freeing the slot in the batch for the next
# one.
Configuration management
State modules are declarative and idempotent, unlike the execution modules used so far, which are imperative. This makes state modules useful for configuration management.
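For example, pkg.install (an execution function) tells the minion to run an install action now, whereas the pkg.installed state function describes a desired end state and only makes changes when needed. States are normally written in .sls files (see below), but can also be applied ad hoc with state.single:
# Execution module: do this now
salt '*' pkg.install htop
# State module: make sure this is the case
salt '*' state.single pkg.installed name=htop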
As mentioned above, to list all available state modules, use
sys.list_state_modules
List the functions available on a state module
salt '*' sys.list_state_functions pkg
Get documentation on any of them
salt '*' sys.state_doc pkg.latest
We use state files (.sls) to describe the desired state of our minions.
States are stored in text files on the master and transferred to the minions on demand via the master's File Server. The collection of state files makes up the State Tree.
The file_roots property in /etc/salt/master specifies the directories used by this file server. Restart the salt-master after editing this.
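The default points the base environment at /srv/salt:
# /etc/salt/master
file_roots:
  base:
    - /srv/salt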
E.g. create the file /srv/salt/tools.sls (make the parent dir if necessary) to install the following tools on our minions:
tools:
pkg.latest:
- pkgs:
- iftop
- vnstat
- htop
- curl
- vim
- logwatch
- unattended-upgrades
- fail2ban
Apply the state using:
salt '*' state.sls tools
This will apply the state to each minion individually. This works but is not really efficient.
A top.sls file is placed at the top of the state tree. It is used to map groups of minions to their configuration roles.
Top files have three components:
- Environment: A state tree directory containing a set of state files to configure systems.
- Target: A grouping of machines which will have a set of states applied to them.
- State files: A list of state files to apply to a target. Each state file describes one or more states to be configured and enforced on the targeted machines.
The relationship between these is nested: environments contain targets, and targets contain state files.
Consider the following top file. It describes a scenario in which all minions with an ID that begins with web have an apache state applied to them.
base: # Apply SLS files from the directory root for the 'base' environment
'web*': # All minions with a minion_id that begins with 'web'
- apache # Apply the state file named 'apache.sls'
To apply all states configured in your top.sls file just run
salt '*' state.apply
# use test=True for a dry run
salt '*' state.apply test=True
The states that will be applied to a minion in a given environment can be viewed using the state.show_top function.
salt '*' state.show_top
Pillars
Pillars are tree-like structures of data defined on the Salt Master and passed through to minions. They allow confidential, targeted data to be securely sent only to the relevant minion.
Similar to the state tree, the pillar is made up of sls files and has a top file. The default location for the pillar is /srv/pillar. This location can be configured via the pillar_roots option in the master configuration file.
Note: It must not be in a subdirectory of the state tree or file_roots.
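A typical configuration looks like:
# /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar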
Example usage of pillars:
# /srv/pillar/top.sls:
base: # Environment
'*': # Target
- data # Apply the state file named data.sls
# /srv/pillar/data.sls
info: some data
Now, instruct the minions to fetch pillar data from the master
salt '*' saltutil.refresh_pillar
pillar.item retrieves the value of one or more keys from the in-memory pillar data. All pillar items can be retrieved using pillar.items. This compiles a fresh pillar dictionary and displays it, but leaves the in-memory data untouched. If pillar keys are passed to this function, it acts like pillar.item and returns their values from the in-memory data.
salt '*' pillar.items
pillar.raw is like pillar.items: it returns the entire pillar dictionary, but from the in-memory pillar data instead of compiling fresh pillar data.
Individual items may be fetched using
salt '*' pillar.get info
The data can be accessed from state files using the syntax:
# simple data
{{ pillar['info'] }}
# more complex/nested data
{{ pillar['users']['foo'] }}
# providing defaults
{{ salt['pillar.get']('pkgs:apache', 'httpd') }}
See the official docs for using more complicated data
Pillar data can be parameterised using grain data
# /srv/pillar/pkg/init.sls
pkgs:
{% if grains['os_family'] == 'RedHat' %}
apache: httpd
vim: vim-enhanced
{% elif grains['os_family'] == 'Debian' %}
apache: apache2
vim: vim
{% elif grains['os'] == 'Arch' %}
apache: apache
vim: vim
{% endif %}
Add pkg to /srv/pillar/top.sls.
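Assuming the top file from the earlier example, it now looks like:
# /srv/pillar/top.sls
base:
  '*':
    - data
    - pkg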
Now, this data can be referenced in state files:
# /srv/salt/apache/init.sls
apache:
pkg.installed:
- name: {{ pillar['pkgs']['apache'] }}
Read more about merging keys and namespace flattening here