table of contents
- NAME
- SALT PROJECT
- INTRODUCTION TO SALT
- SALT SYSTEM ARCHITECTURE
- CONTRIBUTING
- INSTALLATION
- CONFIGURING SALT
- USING SALT
- REMOTE EXECUTION
- CONFIGURATION MANAGEMENT
- RETURN CODES
- UTILITY MODULES - CODE REUSE IN CUSTOM MODULES
- EVENTS & REACTOR
- ORCHESTRATION
- SOLARIS
- SALT SSH
- THORIUM COMPLEX REACTOR
- SALT CLOUD
- SALT PROXY MINION
- NETWORK AUTOMATION
- SALT VIRT
- ONEDIR PACKAGING
- COMMAND LINE REFERENCE
- PILLARS
- MASTER TOPS
- SALT MODULE REFERENCE
- APIS
- ARCHITECTURE
- MINION DATA CACHE
- SLOTS
- WINDOWS
- DEVELOPING SALT
- RELEASE NOTES
- VENAFI TOOLS FOR SALT
- GLOSSARY
- AUTHOR
- COPYRIGHT
SALT(7) | Salt | SALT(7)
NAME¶
salt - Salt Documentation
SALT PROJECT¶
- Latest Salt Documentation
- Open an issue (bug report, feature request, etc.)
Salt is the world's fastest, most intelligent and scalable automation engine.
About Salt¶
Built on Python, Salt is an event-driven automation tool and framework to deploy, configure, and manage complex IT systems. Use Salt to automate common infrastructure administration tasks and ensure that all the components of your infrastructure are operating in a consistent desired state.
Salt has many possible uses, including configuration management, which involves:
- Managing operating system deployment and configuration.
- Installing and configuring software applications and services.
- Managing servers, virtual machines, containers, databases, web servers, network devices, and more.
- Ensuring consistent configuration and preventing configuration drift.
Salt is ideal for configuration management because it is pluggable, customizable, and plays well with many existing technologies. Salt enables you to deploy and manage applications that use any tech stack running on nearly any operating system, including different types of network devices such as switches and routers from a variety of vendors.
In addition to configuration management Salt can also:
- Automate and orchestrate routine IT processes, such as common required tasks for scheduled server downtimes or upgrading operating systems or applications.
- Create self-aware, self-healing systems that can automatically respond to outages, common administration problems, or other important events.
About our sponsors¶
Salt powers VMware's vRealize Automation SaltStack Config, and can be found under the hood of products from Juniper, Cisco, Cloudflare, Nutanix, SUSE, and Tieto, to name a few.
The original sponsor of our community, SaltStack, was acquired by VMware in 2020. The Salt Project remains an open source ecosystem that VMware supports and contributes to. VMware ensures the code integrity and quality of the Salt modules by acting as the official sponsor and manager of the Salt project. Many of the core Salt Project contributors are also VMware employees. This team carefully reviews and enhances the Salt modules to ensure speed, quality, and security.
Download and install Salt¶
Salt is tested and packaged to run on CentOS, Debian, RHEL, Ubuntu, MacOS, Windows, and more. Download Salt and get started now. See supported operating systems for more information.
To download and install Salt, see:
- The Salt install guide
- Salt Project repository
Technical support¶
Report bugs or problems using Salt by opening an issue: https://github.com/saltstack/salt/issues
To join our community forum where you can exchange ideas, best practices, discuss technical support questions, and talk to project maintainers, join our Slack workspace: Salt Project Community Slack
Salt Project documentation¶
Installation instructions, tutorials, in-depth API and module documentation:
- The Salt install guide
- The Salt user guide
- Latest Salt documentation
- Salt's contributing guide
Security advisories¶
Keep an eye on the Salt Project Security Announcements landing page. Salt Project recommends subscribing to the Salt Project Security RSS feed to receive notification when new information is available regarding security announcements.
Other channels to receive security announcements include the Salt Community mailing list and the Salt Project Community Slack.
Responsibly reporting security vulnerabilities¶
When reporting security vulnerabilities for Salt or other SaltStack projects, refer to the SECURITY.md file found in this repository.
Join our community¶
Salt is built by the Salt Project community, which includes more than 3,000 contributors working in roles just like yours. This well-known and trusted community works together to improve the underlying technology and extend Salt by creating a variety of execution and state modules to accomplish the most common tasks or solve the most important problems that people in your role are likely to face.
If you want to help extend Salt or solve a problem with Salt, you can join our community and contribute today.
Please be sure to review our Code of Conduct. Also, check out some of our community resources including:
- Salt Project Community Wiki
- Salt Project Community Slack
- Salt Project: IRC on LiberaChat
- Salt Project YouTube channel
- Salt Project Twitch channel
There are lots of ways to get involved in our community. Every month, there are around a dozen opportunities to meet with other contributors and the Salt Core team and collaborate in real time. The best way to keep track is by subscribing to the Salt Project Community Events Calendar on the main https://saltproject.io website.
If you have additional questions, email us at saltproject@vmware.com or reach out directly to the Community Manager, Jimmy Chunga via Slack. We'd be glad to have you join our community!
License¶
Salt is licensed under the Apache 2.0 license. Please see the LICENSE file for the full text of the Apache license, followed by a full summary of the licensing used by external modules.
A complete list of attributions and dependencies can be found here: salt/DEPENDENCIES.md
INTRODUCTION TO SALT¶
We’re not just talking about NaCl.
The 30 second summary¶
Salt is:
- A configuration management system. Salt is capable of maintaining remote nodes in defined states. For example, it can ensure that specific packages are installed and that specific services are running.
- A distributed remote execution system used to execute commands and query data on remote nodes. Salt can query and execute commands either on individual nodes or by using an arbitrary selection criteria.
It was developed to bring together the best solutions found in the world of remote execution and make them better, faster, and more malleable. Salt accomplishes this through its ability to handle large loads of information, managing not just dozens but hundreds or even thousands of individual servers quickly through a simple and manageable interface.
Simplicity¶
Providing versatility between massive scale deployments and smaller systems may seem daunting, but Salt is very simple to set up and maintain, regardless of the size of the project. The architecture of Salt is designed to work with any number of servers, from a handful of local network systems to international deployments across different data centers. The topology is a simple server/client model with the needed functionality built into a single set of daemons. While the default configuration will work with little to no modification, Salt can be fine tuned to meet specific needs.
Parallel execution¶
The core functions of Salt:
- enable commands to remote systems to be called in parallel rather than serially
- use a secure and encrypted protocol
- use the smallest and fastest network payloads possible
- provide a simple programming interface
Salt also introduces more granular controls to the realm of remote execution, allowing systems to be targeted not just by hostname, but also by system properties.
Builds on proven technology¶
Salt takes advantage of a number of technologies and techniques. The networking layer is built with the excellent ZeroMQ networking library, so the Salt daemon includes a viable and transparent AMQ broker. Salt uses public keys for authentication with the master daemon, then uses faster AES encryption for payload communication; authentication and encryption are integral to Salt. Salt takes advantage of communication via msgpack, enabling fast and light network traffic.
Python client interface¶
In order to allow for simple expansion, Salt execution routines can be written as plain Python modules. The data collected from Salt executions can be sent back to the master server, or to any arbitrary program. Salt can be called from a simple Python API, or from the command line, so that Salt can be used to execute one-off commands as well as operate as an integral part of a larger application.
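As a sketch of what calling Salt from Python can look like (this assumes the salt package is installed and a salt-master is running with accepted minions; the helper function name is illustrative):

```python
def summarize(returns):
    """Format the {minion_id: result} mapping that a client call returns."""
    return sorted(f"{mid}: {val}" for mid, val in returns.items())


def run_on_all(fun="test.version"):
    # Requires a running salt-master with accepted minions, run with
    # the master's privileges. LocalClient mirrors the `salt` CLI.
    import salt.client  # provided by the salt package

    local = salt.client.LocalClient()
    # Equivalent to: salt '*' test.version
    return summarize(local.cmd("*", fun))
```

The same `local.cmd()` pattern works for any execution module function, which is what lets Salt operate as part of a larger application rather than only from the command line.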
Fast, flexible, scalable¶
The result is a system that can execute commands at high speed on target server groups ranging from one to very many servers. Salt is very fast, easy to set up, amazingly malleable and provides a single remote execution architecture that can manage the diverse requirements of any number of servers. The Salt infrastructure brings together the best of the remote execution world, amplifies its capabilities and expands its range, resulting in a system that is as versatile as it is practical, suitable for any network.
Open¶
Salt is developed under the Apache 2.0 license, and can be used for open and proprietary projects. Please submit your expansions back to the Salt project so that we can all benefit together as Salt grows. Please feel free to sprinkle Salt around your systems and let the deliciousness come forth.
SALT SYSTEM ARCHITECTURE¶
Overview¶
This page provides a high-level overview of the Salt system architecture and its different components.
What is Salt?¶
Salt is a Python-based open-source remote execution framework used for:
- Configuration management
- Automation
- Provisioning
- Orchestration
The Salt system architecture¶
The following diagram shows the primary components of the basic Salt architecture: [image]
The following sections describe some of the core components of the Salt architecture.
Salt Masters and Salt Minions¶
Salt uses the master-client model in which a master issues commands to a client and the client executes the command. In the Salt ecosystem, the Salt Master is a server that is running the salt-master service. It issues commands to one or more Salt Minions, which are servers that are running the salt-minion service and that are registered with that particular Salt Master.
Another way to describe Salt is as a publisher-subscriber model. The master publishes jobs that need to be executed and Salt Minions subscribe to those jobs. When a specific job applies to that minion, it will execute the job.
When a minion finishes executing a job, it sends job return data back to the master. Salt has two ports used by default for the minions to communicate with their master(s). These ports work in concert to receive and deliver data to the Message Bus. Salt’s message bus is ZeroMQ, which creates an asynchronous network topology to provide the fastest communication possible.
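The publish-subscribe flow above can be sketched in miniature. This is a toy illustration only, not Salt's actual implementation; all class and method names here are invented:

```python
# Toy sketch of Salt's publish-subscribe model. The master publishes a
# job with a target pattern; every minion sees the job, but only
# matching minions run it and send return data back to the master.
import fnmatch


class ToyMaster:
    def __init__(self):
        self.minions = []
        self.returns = {}

    def publish(self, target, command):
        # Every subscribed minion receives the published job.
        for minion in self.minions:
            minion.handle(target, command)
        return self.returns


class ToyMinion:
    def __init__(self, minion_id, master):
        self.id = minion_id
        self.master = master
        master.minions.append(self)

    def handle(self, target, command):
        # Only minions whose ID matches the target execute the job.
        if fnmatch.fnmatch(self.id, target):
            # Send job return data back to the master.
            self.master.returns[self.id] = f"ran {command}"


master = ToyMaster()
ToyMinion("web01", master)
ToyMinion("db01", master)
print(master.publish("web*", "pkg.install vim"))
# -> {'web01': 'ran pkg.install vim'}
```

In real Salt the "bus" is ZeroMQ rather than a Python loop, and returns flow back over the second of the two master ports, but the shape of the interaction is the same.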
Targets and grains¶
The master indicates which minions should execute the job by defining a target. A target is the group of minions, across one or many masters, that a job's Salt command applies to.
NOTE:
The following is an example of one of the many kinds of commands that a master might issue to a minion. This command indicates that all minions should install the Vim application:
salt -v '*' pkg.install vim
In this case the glob '*' is the target, which indicates that all minions should execute this command. Many other targeting options are available, including targeting a specific minion by its ID or targeting minions by their shared traits or characteristics (called grains in Salt).
Salt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents Salt with grains of information. Grains are collected for the operating system, domain name, IP address, kernel, OS type, memory, and many other system properties. You can also create your own custom grain data.
Grain data is relatively static. However, grain data is refreshed when system information changes (such as network settings) or when a new value is assigned to a custom grain.
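Custom grains can be assigned statically on a minion, for example in /etc/salt/grains. The values below are purely illustrative:

```yaml
# /etc/salt/grains -- hypothetical custom grains for one minion
roles:
  - webserver
  - memcache
datacenter: us-east-1
```

Minions could then be targeted by grain with the -G flag, for example `salt -G 'roles:webserver' test.ping`.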
Open event system (event bus)¶
The event system is used for inter-process communication between the Salt Master and Salt Minions. In the event system:
- Events are seen by both the master and minions.
- Events can be monitored and evaluated by both.
The event bus lays the groundwork for orchestration and real-time monitoring.
All minions see jobs and results by subscribing to events published on the event system. Salt uses a pluggable event system with two layers:
- ZeroMQ (0MQ) - The current default socket-level library providing a flexible transport layer.
- Tornado - Full TCP-based transport layer event system.
One of the greatest strengths of Salt is the speed of execution. The event system's communication bus is more efficient than running a higher-level web service (http). The remote execution system is the foundation that all other components are built upon, allowing for decentralized remote execution to spread load across resources.
Salt states¶
In addition to remote execution, Salt provides another method for configuring minions by declaring which state a minion should be in, otherwise referred to as Salt states. Salt states make configuration management possible. You can use Salt states to deploy and manage infrastructure with simple YAML files. Using states, you can automate recursive and predictable tasks by queueing jobs for Salt to implement without needing user input. You can also add more complex conditional logic to state files with Jinja.
To illustrate the subtle differences between remote execution and configuration management, take the command referenced in the previous section about Targets and grains in which Salt installed the application Vim on all minions:
Methodology | Implementation | Result
--- | --- | ---
Remote execution | Run `salt -v '*' pkg.install vim` from the terminal | Remotely installs Vim on the targeted minions
Configuration management | Write a YAML state file that checks whether Vim is installed. This state file is then applied to the targeted minions. | Ensures that Vim is always installed on the targeted minions. Salt analyzes the state file and determines what actions need to be taken to ensure the minion complies with the state declarations. If Vim is not installed, it automates the processes to install Vim on the targeted minions.
The state file that verifies Vim is installed might look like the following example:
# File: /srv/salt/vim_install.sls
install_vim_now:
  pkg.installed:
    - pkgs:
      - vim
To apply this state to a minion, you would use the state.apply module, such as in the following example:
salt '*' state.apply vim_install
This command applies the vim_install state to all minions.
Formulas are collections of states that work in harmony to configure a minion or application. For example, one state might trigger another state.
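For example, a hypothetical formula-style fragment in which one state requires another (the file paths and state IDs here are illustrative):

```yaml
# File: /srv/salt/vim/init.sls -- hypothetical formula snippet
install_vim:
  pkg.installed:
    - name: vim

deploy_vimrc:
  file.managed:
    - name: /etc/vimrc
    - source: salt://vim/vimrc
    - require:
      - pkg: install_vim
```

The `require` requisite ensures that `deploy_vimrc` only runs after `install_vim` has succeeded, which is how states "work in harmony" within a formula.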
The Top file¶
It is not practical to manually run each state individually targeting specific minions each time. Some environments have hundreds of state files targeting thousands of minions.
Salt offers two features to help with this scaling problem:
- The top.sls file - Maps Salt states to their applicable minions.
- Highstate execution - Runs all Salt states outlined in top.sls in a single execution.
The top file maps which states should be applied to different minions in certain environments. The following is an example of a simple top file:
# File: /srv/salt/top.sls
base:
  '*':
    - all_server_setup
  '01webserver':
    - web_server_setup
In this example, base refers to the Salt environment, which is the default. You can specify more than one environment as needed, such as prod, dev, QA, etc.
Groups of minions are specified under the environment, and states are listed for each set of minions. This top file indicates that a state called all_server_setup should be applied to all minions '*' and the state called web_server_setup should be applied to the 01webserver minion.
To run the Salt command, you would use the state.highstate function:
salt \* state.highstate
This command applies the top file to the targeted minions.
Salt pillar¶
Salt’s pillar feature takes data defined on the Salt Master and distributes it to minions as needed. Pillar is primarily used to store secrets or other highly sensitive data, such as account credentials, cryptographic keys, or passwords. Pillar is also useful for storing non-secret data that you don't want to place directly in your state files, such as configuration data.
Salt pillar brings data into the cluster from the opposite direction as grains. While grains are data generated from the minion, the pillar is data generated from the master.
Pillars are organized similarly to states in a Pillar state tree, where top.sls acts to coordinate pillar data to environments and minions privy to the data. Information transferred using pillar has a dictionary generated for the targeted minion and encrypted with that minion’s key for secure data transfer. Pillar data is encrypted on a per-minion basis, which makes it useful for storing sensitive data specific to a particular minion.
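A minimal sketch of a pillar tree might look like the following (the file names and keys are illustrative):

```yaml
# File: /srv/pillar/top.sls
base:
  'web*':
    - credentials

# File: /srv/pillar/credentials.sls
app_user: deploy
app_password: s3cret   # stored on the master, delivered only to targeted minions
```

A state file could then reference the secret with Jinja, for example `{{ pillar['app_password'] }}`, and minions can refresh their pillar data with `salt '*' saltutil.refresh_pillar`.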
Beacons and reactors¶
The beacon system is a monitoring tool that can listen for a variety of system processes on Salt Minions. Beacons can trigger reactors which can then help implement a change or troubleshoot an issue. For example, if a service’s response times out, the reactor system can restart the service.
Beacons are used for a variety of purposes, including:
- Automated reporting
- Error log delivery
- Microservice monitoring
- User shell activity
- Resource monitoring
When coupled with reactors, beacons can create automated pre-written responses to infrastructure and application issues. Reactors expand Salt with automated responses using pre-written remediation states.
Reactors can be applied in a variety of scenarios:
- Infrastructure scaling
- Notifying administrators
- Restarting failed applications
- Automatic rollback
When both beacons and reactors are used together, you can create unique states customized to your specific needs.
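As an illustrative sketch of the pairing (the service name, file paths, and event tag pattern here are hypothetical examples, not a drop-in configuration), a minion-side beacon can watch a service while a master-side reactor restarts it:

```yaml
# Minion config: emit an event when the nginx service changes state
beacons:
  service:
    - services:
        nginx:
          onchangeonly: True

# Master config: map the beacon's event tag to a reactor state
reactor:
  - 'salt/beacon/*/service/':
    - /srv/reactor/restart_nginx.sls

# File: /srv/reactor/restart_nginx.sls -- restart nginx on the
# minion that fired the event
restart nginx:
  local.service.restart:
    - tgt: {{ data['id'] }}
    - arg:
      - nginx
```

The reactor state reads the minion ID out of the event payload (`data['id']`) and targets only that minion, so the remediation stays scoped to the machine that reported the problem.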
Salt runners and orchestration¶
Salt runners are convenience applications executed with the salt-run command. Salt runners work similarly to Salt execution modules. However, they execute on the Salt Master instead of the Salt Minions. A Salt runner can be a simple client call or a complex application.
Salt provides the ability to orchestrate system administrative tasks throughout the enterprise. Orchestration makes it possible to coordinate the activities of multiple machines from a central place. It has the added advantage of being able to control the sequence of when certain configuration events occur. Orchestration states execute on the master using the state runner module.
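A minimal orchestration state might look like the following (the targets and file names are illustrative):

```yaml
# File: /srv/salt/orch/deploy.sls -- run from the master with:
#   salt-run state.orchestrate orch.deploy
setup_all_servers:
  salt.state:
    - tgt: '*'
    - sls: all_server_setup

setup_webservers:
  salt.state:
    - tgt: 'web*'
    - sls: web_server_setup
    - require:
      - salt: setup_all_servers
```

Because orchestration runs on the master, the `require` requisite here sequences work across machines: the web servers are only configured after the baseline state has been applied everywhere.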
CONTRIBUTING¶
So you want to contribute to the Salt project? Excellent! You can help in a number of ways:
- Use Salt and open well-written bug reports.
- Join a working group.
- Answer questions on IRC, the community Slack, the salt-users mailing list, Server Fault, or r/saltstack on Reddit.
- Fix bugs.
- Improve the documentation.
- Provide workarounds, patches, or other code without tests.
- Tell other people about problems you solved using Salt.
If you'd like to update docs or fix an issue, you're going to need the Salt repo. The best way to contribute is using Git.
Environment setup¶
To hack on Salt or the docs you're going to need to set up your development environment. If you already have a workflow that you're comfortable with, you can use that, but otherwise this is an opinionated guide for setting up your dev environment. Follow these steps and you'll end up with a functioning dev environment and be able to submit your first PR.
This guide assumes at least a passing familiarity with Git, a common version control tool used across many open source projects, and is necessary for contributing to Salt. For an introduction to Git, watch Salt Docs Clinic - Git For the True Beginner. Because of its widespread use, there are many resources for learning more about Git. One popular resource is the free online book Learn Git in a Month of Lunches.
pyenv, Virtual Environments, and you¶
We recommend pyenv, since it allows installing multiple different Python versions, which is important for testing Salt across all the versions of Python that we support.
On Linux¶
Install pyenv:
git clone https://github.com/pyenv/pyenv.git ~/.pyenv
export PATH="$HOME/.pyenv/bin:$PATH"
git clone https://github.com/pyenv/pyenv-virtualenv.git $(pyenv root)/plugins/pyenv-virtualenv
On Mac¶
Install pyenv using brew:
brew update
brew install pyenv
brew install pyenv-virtualenv
----
Now add pyenv to your .bashrc:
echo 'export PATH="$HOME/.pyenv/bin:$PATH"' >> ~/.bashrc
pyenv init 2>> ~/.bashrc
pyenv virtualenv-init 2>> ~/.bashrc
For other shells, see the pyenv instructions.
Go ahead and restart your shell. Now you should be able to install a new version of Python:
pyenv install 3.7.0
If that fails, don't panic! You're probably just missing some build dependencies. Check out pyenv common build problems.
Now that you've got your version of Python installed, you can create a new virtual environment with this command:
pyenv virtualenv 3.7.0 salt
Then activate it:
pyenv activate salt
Sweet! Now you're ready to clone Salt so you can start hacking away! If you get stuck at any point, check out the resources at the beginning of this guide. IRC and Slack are particularly helpful places to go.
Get the source!¶
Salt uses the fork and clone workflow for Git contributions. See Using the Fork-and-Branch Git Workflow for how to implement it. But if you just want to hurry and get started you can go ahead and follow these steps:
Clones are so shallow. Well, this one is anyway:
git clone --depth=1 --origin salt https://github.com/saltstack/salt.git
This creates a shallow clone of Salt, which should be fast. Most of the time that's all you'll need, and you can start building out other commits as you go. If you really want all 108,300+ commits you can just run git fetch --unshallow. Then go make a sandwich because it's gonna be a while.
You're also going to want to head over to GitHub and create your own fork of Salt. Once you've got that set up you can add it as a remote:
git remote add yourname <YOUR SALT REMOTE>
If you use your name to refer to your fork, and salt to refer to the official Salt repo you'll never get upstream or origin confused.
Set up pre-commit and nox¶
Here at Salt we use pre-commit and nox to make it easier for contributors to get quick feedback, for quality control, and to increase the chance that your merge request will get reviewed and merged. Nox enables us to run multiple different test configurations, as well as other common tasks. You can think of it as Make with superpowers. Pre-commit does what it sounds like: it configures some Git pre-commit hooks to run black for formatting, isort for keeping our imports sorted, and pylint to catch issues like unused imports, among others. You can easily install them in your virtualenv with:
python -m pip install pre-commit nox
pre-commit install
Now before each commit, it will ensure that your code at least looks right before you open a pull request. And with that step, it's time to start hacking on Salt!
Set up imagemagick¶
One last prerequisite is to have imagemagick installed, as it is required by Sphinx for generating the HTML documentation.
# On Mac, via homebrew
brew install imagemagick

# Example Linux installation: Debian-based
sudo apt install imagemagick
Salt issues¶
Create your own¶
Perhaps you've come to this guide because you found a problem in Salt, and you've diagnosed the cause. Maybe you need some help figuring out the problem. In any case, creating quality bug reports is a great way to contribute to Salt even if you lack the skills, time, or inclination to fix it yourself. If that's the case, head on over to Salt's issue tracker on GitHub.
Creating a good report can take a little bit of time - but every minute you invest in making it easier for others to reproduce and understand your issue is time well spent. The faster someone can understand your issue, the faster it can be fixed correctly.
The thing that every issue needs goes by many names, but one at least as good as any other is MCVE - Minimum Complete Verifiable Example.
In a nutshell:
- Minimum: All of the extra information has been removed. Will 2 or 3 lines of master/minion config still exhibit the behavior?
- Complete: Minimum also means complete. If your example is missing information, then it's not complete. Salt, Python, and OS versions are all bits of information that make your example complete. Have you provided the commands that you ran?
- Verifiable: Can someone take your report and reproduce it?
Slow is smooth, and smooth is fast - it may feel like you're taking a long time to create your issue if you're creating a proper MCVE, but a MCVE eliminates back and forth required to reproduce/verify the issue so someone can actually create a fix.
Pick an issue¶
If you don't already have an issue in mind, you can search for help wanted issues. If you also search for good first issue then you should be able to find some issues that are good for getting started contributing to Salt. Documentation issues are also good starter issues. When you find an issue that catches your eye (or one of your own), it's a good idea to comment on the issue and mention that you're working on it. Good communication is key to collaboration - so if you don't have time to complete work on the issue, just leaving some information about when you expect to pick things up again is a great idea!
Hacking away¶
Salt, tests, documentation, and you¶
Before approving code contributions, Salt requires:
- documentation
- meaningful passing tests
- correct code
Documentation fixes just require correct documentation.
What if I don't write tests or docs?¶
If you aren't into writing documentation or tests, we still welcome your contributions! But your PR will be labeled Needs Testcase and Help Wanted until someone gets a chance to write the tests/documentation. Of course, if you have the desire but just lack the skill, we are more than happy to collaborate and help out! There's the documentation working group and the testing working group. We also regularly stream our test clinic live on Twitch every Tuesday afternoon and Thursday morning, Central Time. If you'd like specific help with tests, bring them to the clinic. If no community members need help, you can also just watch tests being written in real time.
Documentation¶
Salt uses both docstrings and normal reStructuredText files in the salt/doc folder for documentation. Sphinx is used to generate the documentation, and does require imagemagick. See Set up imagemagick for more information.
Before submitting a documentation PR, it helps to first build the Salt docs locally on your machine and preview them. A local preview helps you:
- Debug potential documentation output errors before submitting a PR.
- Save time by not needing to use the Salt CI/CD test suite to debug, which takes more than 30 minutes to run on a PR.
- Ensure the final output looks the way you intended it to look.
To set up your local environment to preview the core Salt and module documentation:
1. Install the documentation dependencies. For example, on Ubuntu:

   sudo apt-get update
   sudo apt-get install -y enchant-2 git gcc imagemagick make zlib1g-dev libc-dev libffi-dev g++ libxml2 libxml2-dev libxslt-dev libcurl4-openssl-dev libssl-dev libgnutls28-dev xz-utils inkscape

2. Navigate to the folder where you store your Salt repository and remove any .nox directories that might be in that folder:

   rm -rf .nox

3. Install pyenv for the version of Python needed to run the docs. As of the time of writing, the Salt docs theme is not compatible with Python 3.10, so you'll need to run 3.9 or earlier. For example:

   pyenv install 3.7.15
   pyenv virtualenv 3.7.15 salt-docs
   echo 'salt-docs' > .python-version

4. Activate pyenv if it's not auto-activated:

   pyenv exec pip install -U pip setuptools wheel

5. Install nox into your pyenv environment, which is the utility that will build the Salt documentation:

   pyenv exec pip install nox
Since we use nox, you can build your docs and view them in your browser with this one-liner:
python -m nox -e 'docs-html(compress=False, clean=False)'; cd doc/_build/html; python -m webbrowser http://localhost:8000/contents.html; python -m http.server
The first time you build the docs, it will take a while because there are a lot of modules. Maybe you should go grab some dessert if you already finished that sandwich. But once nox and Sphinx are done building the docs, python should launch your default browser with the URL http://localhost:8000/contents.html. Now you can navigate to your docs and ensure your changes exist. If you make changes, you can simply run this:
cd -; python -m nox -e 'docs-html(compress=False, clean=False)'; cd doc/_build/html; python -m http.server
And then refresh your browser to get your updated docs. This one should be quite a bit faster since Sphinx won't need to rebuild everything.
Alternatively, you could build the docs on your local machine and then preview the build output. To build the docs locally:
pyenv exec nox -e 'docs-html(compress=False, clean=True)'
The output from this command will put the preview files in: doc > _build > html.
If your change is a docs-only change, you can go ahead and commit/push your code and open a PR. You can indicate that it's a docs-only change by adding [Documentation] to the title of your PR. Otherwise, you'll want to write some tests and code.
Running development Salt¶
Note: If you run into any issues in this section, check the Troubleshooting section.
If you're going to hack on the Salt codebase you're going to want to be able to run Salt locally. The first thing you need to do is install Salt as an editable pip install:
python -m pip install -e .
This will let you make changes to Salt without having to re-install it.
After all of the dependencies and Salt are installed, it's time to set up the config for development. Typically Salt runs as root, but you can specify which user to run as. To configure that, just copy the master and minion configs. We have .gitignore set up to ignore the local/ directory, so we can put all of our personal files there.
mkdir -p local/etc/salt/
Create a master config file as local/etc/salt/master:
cat <<EOF >local/etc/salt/master
user: $(whoami)
root_dir: $PWD/local/
publish_port: 55505
ret_port: 55506
EOF
And a minion config as local/etc/salt/minion:
cat <<EOF >local/etc/salt/minion
user: $(whoami)
root_dir: $PWD/local/
master: localhost
id: saltdev
master_port: 55506
EOF
Now you can start your Salt master and minion, specifying the config dir.
salt-master --config-dir=local/etc/salt/ --log-level=debug --daemon
salt-minion --config-dir=local/etc/salt/ --log-level=debug --daemon
Now you should be able to accept the minion key:
salt-key -c local/etc/salt -Ay
And check that your master/minion are communicating:
salt -c local/etc/salt \* test.version
Rather than running test.version from your master, you can run it from the minion instead:
salt-call -c local/etc/salt test.version
Note that you're running salt-call instead of salt, and you're not specifying the minion (\*), but if you're running the dev version then you still will need to pass in the config dir. Now that you've got Salt running, you can hack away on the Salt codebase!
If you need to restart Salt for some reason (for example, you've made changes and they don't appear to be reflected), this is one option:
kill -INT $(pgrep salt-master)
kill -INT $(pgrep salt-minion)
If you'd rather not use kill, you can have a couple of terminals open with your salt virtualenv activated and omit the --daemon argument. Salt will run in the foreground, so you can just use ctrl+c to quit.
Test first? Test last? Test meaningfully!¶
You can write tests first or tests last, as long as your tests are meaningful and complete! Typically the best tests for Salt are going to be unit tests. Testing is a whole topic on its own, but you may also want to write functional or integration tests. You'll find those in the tests/ directory of the Salt repository.
When you're thinking about tests to write, the most important thing to keep in mind is, “What, exactly, am I testing?” When a test fails, you should know:
- What, specifically, failed?
- Why did it fail?
- As much as possible, what do I need to do to fix this failure?
If you can't answer those questions then you might need to refactor your tests.
When you're running tests locally, you should make sure that if you remove your code changes your tests are failing. If your tests aren't failing when you haven't yet made changes, then it's possible that you're testing the wrong thing.
But whether you adhere to TDD/BDD, or you write your code first and your tests last, ensure that your tests are meaningful.
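As a sketch of what "meaningful" looks like, consider a pair of tiny pytest-style unit tests (the normalize_minion_id helper here is hypothetical, not part of Salt): each test pins down exactly one behavior, so a failure answers the three questions above on its own.

```python
def normalize_minion_id(minion_id):
    # Hypothetical helper under test: trims surrounding whitespace and
    # lowercases the id so "  WEB01 " and "web01" compare equal.
    return minion_id.strip().lower()


def test_normalize_minion_id_strips_whitespace():
    # If this fails, only the whitespace handling is broken.
    assert normalize_minion_id("  web01  ") == "web01"


def test_normalize_minion_id_lowercases():
    # If this fails, only the case folding is broken.
    assert normalize_minion_id("WEB01") == "web01"
```

A single test that checked both behaviors at once would only tell you that something failed; two narrow tests tell you what and why.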
Running tests¶
As previously mentioned, we use nox, and that's how we run our tests. You should have it installed by this point but if not you can install it with this:
python -m pip install nox
Now you can run your tests:
python -m nox -e "test-3(coverage=False)" -- tests/unit/cli/test_batch.py
It's a good idea to install espeak or use say on Mac if you're running some long-running tests. You can do something like this:
python -m nox -e "test-3(coverage=False)" -- tests/unit/cli/test_batch.py; espeak "Tests done, woohoo!"
That way you don't have to keep monitoring the actual test run.
You can enable or disable test groups locally by passing their respective flag. For example, to run only the core tests:
python -m nox -e "test-3(coverage=False)" -- --core-tests
The test group flags are:
- --no-fast-tests - Tests that are ~10s or faster. Fast tests make up ~75% of tests and can run in 10 to 20 minutes.
- --slow-tests - Tests that are ~10s or slower.
- --core-tests - Tests of any speed that test the root parts of salt.
- --flaky-jail - Tests that need to be temporarily skipped.
In your PR, you can enable or disable test groups by setting one of the following labels. All fast, slow, and core tests specified in the change file will always run.
- test:no-fast
- test:core
- test:slow
- test:flaky-jail
Changelog and commit!¶
When you write your commit message you should use the imperative style. Do this:
Fix broken frobnosticate
Don't do this:
Fixed broken frobnosticate
But that advice is backwards for the changelog. We follow the keepachangelog approach for our changelog, and use towncrier to generate it for each release. As a contributor, all that means is that you need to add a file to the changelog directory, using the <issue #>.<type> format. For instance, if you fixed issue 123, you would do:
echo "Made sys.doc inform when no minions return" > changelog/123.fixed
And that's all that would go into your file. When it comes to your commit message, it's usually a good idea to add other information, such as
- What does a reviewer need to know about the change that you made?
- If someone isn't an expert in this area, what will they need to know?
This will also help you out, because when you go to create the PR it will automatically insert the body of your commit messages.
Pull request time!¶
Once you've done all your dev work and tested locally, you should check out our PR guidelines. After you read that page, it's time to open a new PR. Fill out the PR template - you should have updated or created any necessary docs, and written tests if you're providing a code change. When you submit your PR, we have a suite of tests that will run across different platforms to help ensure that no known bugs were introduced.
Now what?¶
You've made your changes, added documentation, opened your PR, and have passing tests… now what? When can you expect your code to be merged?
When you open your PR, a reviewer will be automatically assigned. If your PR is submitted during the week you should be able to expect some kind of communication within that business day. If your tests are passing and we're not in a code freeze, ideally your code will be merged that week or month. If you haven't heard from your assigned reviewer, ping them on GitHub, IRC, or Community Slack.
It's likely that your reviewer will leave some comments that need addressing - it may be a style change, or you forgot a changelog entry, or need to update the docs. Maybe it's something more fundamental - perhaps you encountered the rare case where your PR has a much larger scope than initially assumed.
Whatever the case, simply make the requested changes (or discuss why the requests are incorrect), and push up your new commits. If your PR is open for a significant period of time it may be worth rebasing your changes on the most recent changes to Salt. If you need help, the previously linked Git resources will be valuable.
But if, for whatever reason, you're not interested in driving your PR to completion then just note that in your PR. Something like, “I'm not interested in writing docs/tests, I just wanted to provide this fix - someone else will need to complete this PR.” If you do that then we'll add a “Help Wanted” label and someone will be able to pick up the PR, make the required changes, and it can eventually get merged in.
In any case, now that you have a PR open, congrats! You're a Salt developer! You rock!
Troubleshooting¶
zmq.core.error.ZMQError¶
Once the minion starts, you may see a zmq.core.error.ZMQError complaining that the ipc path is longer than 107 characters (sizeof(sockaddr_un.sun_path)).
This means that the path to the socket the minion is using is too long. This is a system limitation, so the only workaround is to reduce the length of this path. This can be done in a couple different ways:
1. Create your virtualenv in a path that is short enough.
2. Edit the sock_dir minion config variable and reduce its length. Remember that this path is relative to the value you set in root_dir.
NOTE: The socket path is limited to 107 characters on Solaris and Linux, and 103 characters on BSD-based systems.
No permissions to access ...¶
If you forget to pass your config path to any of the salt* commands, you might see
No permissions to access "/var/log/salt/master", are you running as the correct user?
Just pass -c local/etc/salt (or whatever you named it).
File descriptor limit¶
You might need to raise your file descriptor limit. You can check it with:
ulimit -n
If the value is less than 3072, you should increase it with:
ulimit -n 3072 # For c-shell: limit descriptors 3072
Pygit2 or other dependency install fails¶
You may see some failure messages when installing requirements. You can directly access your nox environment and install pygit2 (or another dependency) that way. When you run nox, you'll see a message like this:
nox > Re-using existing virtual environment at .nox/pytest-parametrized-3-crypto-none-transport-zeromq-coverage-false.
For this, you would be able to install with:
.nox/pytest-parametrized-3-crypto-none-transport-zeromq-coverage-false/bin/python -m pip install pygit2
INSTALLATION¶
See the Salt Install Guide for the current installation instructions.
CONFIGURING SALT¶
This section explains how to configure user access, view and store job results, secure and troubleshoot, and how to perform many other administrative tasks.
Configuring the Salt Master¶
The Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file: the salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.
The configuration file for the salt-master is located at /etc/salt/master by default. A notable exception is FreeBSD, where the configuration file is located at /usr/local/etc/salt. Additional included configuration files can be placed in /etc/salt/master.d/*.conf. Warning: files with suffixes other than .conf will not be included. The available options are as follows:
Primary Master Configuration¶
interface¶
Default: 0.0.0.0 (all interfaces)
The local interface to bind to, must be an IP address.
interface: 192.168.0.1
ipv6¶
Default: False
Whether the master should listen for IPv6 connections. If this is set to True, the interface option must be adjusted too (for example: interface: '::')
ipv6: True
publish_port¶
Default: 4505
The network port to set up the publication interface.
publish_port: 4505
master_id¶
Default: None
The id to be passed in the publish job to minions. This is used for MultiSyndics to return the job to the requesting master.
master_id: MasterOfMaster
user¶
Default: root
The user to run the Salt processes
user: root
enable_ssh_minions¶
Default: False
Tell the master to also use salt-ssh when running commands against minions.
enable_ssh_minions: True
ret_port¶
Default: 4506
The port used by the return server, this is the server used by Salt to receive execution returns and command executions.
ret_port: 4506
pidfile¶
Default: /var/run/salt-master.pid
Specify the location of the master pidfile.
pidfile: /var/run/salt-master.pid
root_dir¶
Default: /
The system root directory to operate from, change this to make Salt run from an alternative root.
root_dir: /
conf_file¶
Default: /etc/salt/master
The path to the master's configuration file.
conf_file: /etc/salt/master
pki_dir¶
Default: <LIB_STATE_DIR>/pki/master
The directory to store the pki authentication keys.
<LIB_STATE_DIR> is the pre-configured variable state directory set during installation via --salt-lib-state-dir. It defaults to /etc/salt. Systems following the Filesystem Hierarchy Standard (FHS) might set it to /var/lib/salt.
pki_dir: /etc/salt/pki/master
extension_modules¶
Changed in version 2016.3.0: The default location for this directory has been moved. Prior to this version, the location was a directory named extmods in the Salt cachedir (on most platforms, /var/cache/salt/extmods). It has been moved into the master cachedir (on most platforms, /var/cache/salt/master/extmods).
Directory where custom modules are synced to. This directory can contain subdirectories for each of Salt's module types such as runners, output, wheel, modules, states, returners, engines, utils, etc. This path is appended to root_dir.
Note, any directories or files not found in the module_dirs location will be removed from the extension_modules path.
extension_modules: /root/salt_extmods
extmod_whitelist/extmod_blacklist¶
New in version 2017.7.0.
By using this dictionary, the modules that are synced to the master's extmod cache using saltutil.sync_* can be limited. If nothing is set to a specific type, then all modules are accepted. To block all modules of a specific type, whitelist an empty list.
extmod_whitelist:
modules:
- custom_module
engines:
- custom_engine
pillars: []
extmod_blacklist:
modules:
- specific_module
Valid options:
- modules
- states
- grains
- renderers
- returners
- output
- proxy
- runners
- wheel
- engines
- queues
- pillar
- utils
- sdb
- cache
- clouds
- tops
- roster
- tokens
module_dirs¶
Default: []
Like extension_modules, but a list of extra directories to search for Salt modules.
module_dirs:
- /var/cache/salt/minion/extmods
cachedir¶
Default: /var/cache/salt/master
The location used to store cache information, particularly the job information for executed salt commands.
This directory may contain sensitive data and should be protected accordingly.
cachedir: /var/cache/salt/master
verify_env¶
Default: True
Verify and set permissions on configuration directories at startup.
verify_env: True
keep_jobs¶
Default: 24
Set the number of hours to keep old job information. Note that setting this option to 0 disables the cache cleaner.
Deprecated since version 3006: Replaced by keep_jobs_seconds
keep_jobs: 24
keep_jobs_seconds¶
Default: 86400
Set the number of seconds to keep old job information. Note that setting this option to 0 disables the cache cleaner.
keep_jobs_seconds: 86400
gather_job_timeout¶
New in version 2014.7.0.
Default: 10
The number of seconds to wait when the client is requesting information about running jobs.
gather_job_timeout: 10
timeout¶
Default: 5
Set the default timeout for the salt command and api.
timeout: 5
loop_interval¶
Default: 60
The loop_interval option controls the seconds for the master's Maintenance process check cycle. This process updates file server backends, cleans the job cache and executes the scheduler.
maintenance_interval¶
New in version 3006.0.
Default: 3600
Defines how often to restart the master's Maintenance process.
maintenance_interval: 9600
output¶
Default: nested
Set the default outputter used by the salt command.
outputter_dirs¶
Default: []
A list of additional directories to search for salt outputters in.
outputter_dirs: []
output_file¶
Default: None
Set the default output file used by the salt command. The default is to output to the CLI and not to a file. This functions the same way as the --out-file CLI option, except it sets a single output file for all salt commands.
output_file: /path/output/file
show_timeout¶
Default: True
Tell the client to show minions that have timed out.
show_timeout: True
show_jid¶
Default: False
Tell the client to display the jid when a job is published.
show_jid: False
color¶
Default: True
By default output is colored, to disable colored output set the color value to False.
color: False
color_theme¶
Default: ""
Specifies a path to the color theme to use for colored command line output.
color_theme: /etc/salt/color_theme
cli_summary¶
Default: False
When set to True, displays a summary of the number of minions targeted, the number of minions returned, and the number of minions that did not return.
cli_summary: False
sock_dir¶
Default: /var/run/salt/master
Set the location to use for creating Unix sockets for master process communication.
sock_dir: /var/run/salt/master
enable_gpu_grains¶
Default: False
Enable GPU hardware data for your master. Be aware that the master can take a while to start up when lspci and/or dmidecode is used to populate the grains for the master.
enable_gpu_grains: True
skip_grains¶
Default: False
MasterMinions should omit grains. A MasterMinion is "a minion function object for generic use on the master" that omits pillar. A RunnerClient creates a MasterMinion omitting states and renderer. Setting this to True can improve master performance.
skip_grains: True
job_cache¶
Default: True
The master maintains a temporary job cache. While this is a great addition, it can be a burden on the master for larger deployments (over 5000 minions). Disabling the job cache will make previously executed jobs unavailable to the jobs system and is not generally recommended. Normally it is wise to make sure the master has access to a faster IO system or a tmpfs is mounted to the jobs dir.
job_cache: True
NOTE:
Note that the keep_jobs_seconds option can be set to a lower value, such as 3600, to limit the number of seconds jobs are stored in the job cache. (The default is 86400 seconds.)
Please see the Managing the Job Cache documentation for more information.
minion_data_cache¶
Default: True
The minion data cache is a cache of information about the minions stored on the master, this information is primarily the pillar, grains and mine data. The data is cached via the cache subsystem in the Master cachedir under the name of the minion or in a supported database. The data is used to predetermine what minions are expected to reply from executions.
minion_data_cache: True
cache¶
Default: localfs
Cache subsystem module to use for minion data cache.
cache: consul
memcache_expire_seconds¶
Default: 0
Memcache is an additional cache layer that keeps a limited amount of data fetched from the minion data cache in memory for a limited period of time, making cache operations faster. It doesn't make much sense for the localfs cache driver but helps for more complex drivers like consul.
This option sets the memcache item expiration time. By default it is set to 0, which disables memcache.
memcache_expire_seconds: 30
memcache_max_items¶
Default: 1024
Set the memcache limit in items, where items are bank-key pairs; e.g., the list minion_0/data, minion_0/mine, minion_1/data contains 3 items. This value depends on the count of minions usually targeted in your environment. The best value can be found by analyzing the cache log with memcache_debug enabled.
memcache_max_items: 1024
memcache_full_cleanup¶
Default: False
If the cache storage gets full, i.e. the item count exceeds the memcache_max_items value, memcache cleans up its storage. If this option is set to False, memcache removes only the single oldest value from its storage. If it is set to True, memcache removes all expired items and also removes the oldest one if there are no expired items.
memcache_full_cleanup: True
memcache_debug¶
Default: False
Enable collecting memcache stats and logging them at the debug log level. If enabled, memcache collects information about how many fetch calls have been made and how many of them were memcache hits. It also outputs the rate value, which is the result of dividing the first two values. This should help in choosing the right values for the expiration time and the cache size.
memcache_debug: True
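Putting the memcache options together, a master using a remote cache driver might enable the in-memory layer like this (an illustrative sketch with made-up values, not recommendations; the consul driver additionally requires its own connection settings):

```yaml
# Illustrative sketch: memcache on top of a remote cache driver.
cache: consul
memcache_expire_seconds: 60   # keep fetched items in memory for 60 seconds
memcache_max_items: 1024      # hold at most 1024 bank-key pairs
memcache_full_cleanup: True   # evict all expired items when the cache is full
memcache_debug: True          # log hit/miss stats while tuning these values
```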
ext_job_cache¶
Default: ''
Used to specify a default returner for all minions. When this option is set, the specified returner needs to be properly configured and the minions will always default to sending returns to this returner. This will also disable the local job cache on the master.
ext_job_cache: redis
event_return¶
New in version 2015.5.0.
Default: ''
Specify the returner(s) to use to log events. Each returner may have installation and configuration requirements. Read the returner's documentation.
event_return:
- syslog
- splunk
event_return_queue¶
New in version 2015.5.0.
Default: 0
On busy systems, enabling event_returns can cause a considerable load on the storage system for returners. Events can be queued on the master and stored in a batched fashion using a single transaction for multiple events. By default, events are not queued.
event_return_queue: 0
event_return_whitelist¶
New in version 2015.5.0.
Default: []
Only return events matching tags in a whitelist.
Changed in version 2016.11.0: Supports glob matching patterns.
event_return_whitelist:
- salt/master/a_tag
- salt/run/*/ret
event_return_blacklist¶
New in version 2015.5.0.
Default: []
Store all event returns _except_ the tags in a blacklist.
Changed in version 2016.11.0: Supports glob matching patterns.
event_return_blacklist:
- salt/master/not_this_tag
- salt/wheel/*/ret
max_event_size¶
New in version 2014.7.0.
Default: 1048576
Passing very large events can cause the minion to consume large amounts of memory. This value tunes the maximum size of a message allowed onto the master event bus. The value is expressed in bytes.
max_event_size: 1048576
master_job_cache¶
New in version 2014.7.0.
Default: local_cache
Specify the returner to use for the job cache. The job cache will only be interacted with from the salt master and therefore does not need to be accessible from the minions.
master_job_cache: redis
job_cache_store_endtime¶
New in version 2015.8.0.
Default: False
Specify whether the Salt Master should store end times for jobs as returns come in.
job_cache_store_endtime: False
enforce_mine_cache¶
Default: False
By default, disabling the minion_data_cache will cause the mine to stop working, since it is based on cached data. Enabling this option explicitly enables the cache for the mine system only.
enforce_mine_cache: False
max_minions¶
Default: 0
The maximum number of minion connections allowed by the master. Use this to accommodate the number of minions per master if you have different types of hardware serving your minions. The default of 0 means unlimited connections. Please note that this can slow down the authentication process a bit in large setups.
max_minions: 100
con_cache¶
Default: False
If max_minions is used in large installations, the master might experience high-load situations because of having to check the number of connected minions for every authentication. This cache provides the minion-ids of all connected minions to all MWorker-processes and greatly improves the performance of max_minions.
con_cache: True
presence_events¶
Default: False
Causes the master to periodically look for actively connected minions. Presence events are fired on the event bus on a regular interval with a list of connected minions, as well as events with lists of newly connected or disconnected minions. This is a master-only operation that does not send executions to minions.
presence_events: False
detect_remote_minions¶
Default: False
When checking the minions connected to a master, also include the master's connections to minions on the port specified in the setting remote_minions_port. This is particularly useful when checking if the master is connected to any Heist-Salt minions. If this setting is set to True, the master will check all connections on port 22 by default unless a user also configures a different port with the setting remote_minions_port.
Changing this setting will check the remote minions the master is connected to when using presence events, the manage runner, and any other parts of the code that call the connected_ids method to check the status of connected minions.
detect_remote_minions: True
remote_minions_port¶
Default: 22
The port to use when checking for remote minions when detect_remote_minions is set to True.
remote_minions_port: 2222
ping_on_rotate¶
New in version 2014.7.0.
Default: False
By default, the master AES key rotates every 24 hours. The next command following a key rotation will trigger a key refresh from the minion which may result in minions which do not respond to the first command after a key refresh.
To tell the master to ping all minions immediately after an AES key refresh, set ping_on_rotate to True. This should mitigate the issue where a minion does not appear to initially respond after a key is rotated.
Note that enabling this may cause high load on the master immediately after the key rotation event as minions reconnect. Consider this carefully if this salt master is managing a large number of minions.
If disabled, it is recommended to handle this event by listening for the aes_key_rotate event with the key tag and acting appropriately.
ping_on_rotate: False
transport¶
Default: zeromq
Changes the underlying transport layer. ZeroMQ is the recommended transport while additional transport layers are under development. Supported values are zeromq and tcp (experimental). This setting has a significant impact on performance and should not be changed unless you know what you are doing!
transport: zeromq
transport_opts¶
Default: {}
(experimental) Starts multiple transports and overrides options for each transport with the provided dictionary. This setting has a significant impact on performance and should not be changed unless you know what you are doing! The following example shows how to start a TCP transport alongside a ZMQ transport.
transport_opts:
tcp:
publish_port: 4605
ret_port: 4606
zeromq: []
master_stats¶
Default: False
Turning on the master stats enables runtime throughput and statistics events to be fired from the master event bus. These events will report on what functions have been run on the master and how long these runs have, on average, taken over a given period of time.
master_stats_event_iter¶
Default: 60
The time in seconds to fire master_stats events. This will only fire in conjunction with receiving a request to the master, idle masters will not fire these events.
sock_pool_size¶
Default: 1
To avoid blocking while writing data to a socket, Salt supports a socket pool. For example, a job with a large target host list can cause a long period of blocking. The option is used by the ZMQ and TCP transports; the other transport methods don't need a socket pool by definition. For most Salt tools, including the CLI, a single socket pool bucket is enough. On the other hand, it is highly recommended to set the socket pool size larger than 1 for other Salt applications, especially Salt API, which must write data to sockets concurrently.
sock_pool_size: 15
ipc_mode¶
Default: ipc
The ipc strategy. (i.e., sockets versus tcp, etc.) Windows platforms lack POSIX IPC and must rely on TCP based inter-process communications. ipc_mode is set to tcp by default on Windows.
ipc_mode: ipc
ipc_write_buffer¶
Default: 0
The maximum size of a message sent via the IPC transport module can be limited dynamically or by setting an integer value lower than the total memory size. When the value dynamic is set, Salt will use 2.5% of the total memory as the ipc_write_buffer value (rounded to an integer). A value of 0 disables this option.
ipc_write_buffer: 10485760
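As a back-of-the-envelope illustration of the dynamic setting (this is not Salt code, just the arithmetic described above applied to a hypothetical host with 8 GiB of RAM):

```python
# What "dynamic" works out to on a host with 8 GiB of total memory.
total_memory = 8 * 1024**3                     # 8 GiB in bytes
ipc_write_buffer = int(total_memory * 0.025)   # 2.5% of total memory, rounded down
print(ipc_write_buffer)                        # 214748364 bytes, roughly 204 MiB
```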
tcp_master_pub_port¶
Default: 4512
The TCP port on which events for the master should be published if ipc_mode is TCP.
tcp_master_pub_port: 4512
tcp_master_pull_port¶
Default: 4513
The TCP port on which events for the master should be pulled if ipc_mode is TCP.
tcp_master_pull_port: 4513
tcp_master_publish_pull¶
Default: 4514
The TCP port on which events for the master should be pulled from and then republished onto the event bus on the master.
tcp_master_publish_pull: 4514
tcp_master_workers¶
Default: 4515
The TCP port for mworkers to connect to on the master.
tcp_master_workers: 4515
auth_events¶
New in version 2017.7.3.
Default: True
Determines whether the master will fire authentication events. Authentication events are fired when a minion performs an authentication check with the master.
auth_events: True
minion_data_cache_events¶
New in version 2017.7.3.
Default: True
Determines whether the master will fire minion data cache events. Minion data cache events are fired when a minion requests a minion data cache refresh.
minion_data_cache_events: True
http_connect_timeout¶
New in version 2019.2.0.
Default: 20
HTTP connection timeout in seconds. Applied when fetching files using tornado back-end. Should be greater than overall download time.
http_connect_timeout: 20
http_request_timeout¶
New in version 2015.8.0.
Default: 3600
HTTP request timeout in seconds. Applied when fetching files using tornado back-end. Should be greater than overall download time.
http_request_timeout: 3600
use_yamlloader_old¶
New in version 2019.2.1.
Default: False
Use the pre-2019.2 YAML renderer. Uses legacy YAML rendering to support some legacy inline data structures. See the 2019.2.1 release notes for more details.
use_yamlloader_old: False
req_server_niceness¶
New in version 3001.
Default: None
Process priority level of the ReqServer subprocess of the master. Supported on POSIX platforms only.
req_server_niceness: 9
pub_server_niceness¶
New in version 3001.
Default: None
Process priority level of the PubServer subprocess of the master. Supported on POSIX platforms only.
pub_server_niceness: 9
fileserver_update_niceness¶
New in version 3001.
Default: None
Process priority level of the FileServerUpdate subprocess of the master. Supported on POSIX platforms only.
fileserver_update_niceness: 9
maintenance_niceness¶
New in version 3001.
Default: None
Process priority level of the Maintenance subprocess of the master. Supported on POSIX platforms only.
maintenance_niceness: 9
mworker_niceness¶
New in version 3001.
Default: None
Process priority level of the MWorker subprocess of the master. Supported on POSIX platforms only.
mworker_niceness: 9
mworker_queue_niceness¶
New in version 3001.
Default: None
Process priority level of the MWorkerQueue subprocess of the master. Supported on POSIX platforms only.
mworker_queue_niceness: 9
event_return_niceness¶
New in version 3001.
Default: None
Process priority level of the EventReturn subprocess of the master. Supported on POSIX platforms only.
event_return_niceness: 9
event_publisher_niceness¶
New in version 3001.
Default: None
Process priority level of the EventPublisher subprocess of the master. Supported on POSIX platforms only.
event_publisher_niceness: 9
reactor_niceness¶
New in version 3001.
Default: None
Process priority level of the Reactor subprocess of the master. Supported on POSIX platforms only.
reactor_niceness: 9
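As an illustration of how these options combine (the values below are assumptions, not recommendations), a master could keep its request-handling processes at normal priority while deprioritizing background work:

```yaml
# Illustrative sketch: favor request handling over background subprocesses.
req_server_niceness: 0          # keep the request server responsive
pub_server_niceness: 0          # keep publishing responsive
mworker_niceness: 0
maintenance_niceness: 10        # background maintenance can wait
fileserver_update_niceness: 10  # fileserver refreshes can wait
```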
Salt-SSH Configuration¶
roster¶
Default: flat
Define the default salt-ssh roster module to use
roster: cache
roster_defaults¶
New in version 2017.7.0.
Default settings which will be inherited by all rosters.
roster_defaults:
user: daniel
sudo: True
priv: /root/.ssh/id_rsa
tty: True
roster_file¶
Default: /etc/salt/roster
Pass in an alternative location for the salt-ssh flat roster file.
roster_file: /root/roster
rosters¶
Default: None
Define locations for flat roster files so they can be chosen when using Salt API. An administrator can place roster files into these locations. Then, when calling Salt API, the roster_file parameter should contain a relative path to these locations. That is, roster_file=/foo/roster will be resolved as /etc/salt/roster.d/foo/roster etc. This feature prevents passing insecure custom rosters through the Salt API.
rosters:
- /etc/salt/roster.d
- /opt/salt/some/more/rosters
ssh_passwd¶
Default: ''
The ssh password to log in with.
ssh_passwd: ''
ssh_priv_passwd¶
Default: ''
Passphrase for ssh private key file.
ssh_priv_passwd: ''
ssh_port¶
Default: 22
The target system's ssh port number.
ssh_port: 22
ssh_scan_ports¶
Default: 22
Comma-separated list of ports to scan.
ssh_scan_ports: 22
ssh_scan_timeout¶
Default: 0.01
Scanning socket timeout for salt-ssh.
ssh_scan_timeout: 0.01
ssh_sudo¶
Default: False
Boolean to run command via sudo.
ssh_sudo: False
ssh_timeout¶
Default: 60
Number of seconds to wait for a response when establishing an SSH connection.
ssh_timeout: 60
ssh_user¶
Default: root
The user to log in as.
ssh_user: root
ssh_log_file¶
New in version 2016.3.5.
Default: /var/log/salt/ssh
Specify the log file of the salt-ssh command.
ssh_log_file: /var/log/salt/ssh
ssh_minion_opts¶
Default: None
Pass in minion option overrides that will be inserted into the SHIM for salt-ssh calls. The local minion config is not used for salt-ssh. Can be overridden on a per-minion basis in the roster (minion_opts)
ssh_minion_opts:
gpg_keydir: /root/gpg
ssh_use_home_key¶
Default: False
Set this to True to default to using ~/.ssh/id_rsa for salt-ssh authentication with minions
ssh_use_home_key: False
ssh_identities_only¶
Default: False
Set this to True to default salt-ssh to run with -o IdentitiesOnly=yes. This option is intended for situations where the ssh-agent offers many different identities and allows ssh to ignore those identities and use the only one specified in options.
ssh_identities_only: False
ssh_list_nodegroups¶
Default: {}
List-only nodegroups for salt-ssh. Each group must be formed as either a comma-separated list, or a YAML list. This option is useful to group minions into easy-to-target groups when using salt-ssh. These groups can then be targeted with the normal -N argument to salt-ssh.
ssh_list_nodegroups:
groupA: minion1,minion2
groupB: minion1,minion3
ssh_run_pre_flight¶
Default: False
Run the ssh_pre_flight script defined in the salt-ssh roster. By default the script will only run when the thin dir does not exist on the targeted minion. Set this option to True to force the script to run regardless of whether the thin dir exists.
ssh_run_pre_flight: True
thin_extra_mods¶
Default: None
A list of additional modules to include in the Salt Thin. Pass a list of importable Python modules that are typically located in the site-packages Python directory so that they will also always be included in the Salt Thin, once generated.
min_extra_mods¶
Default: None
Identical to thin_extra_mods, only applied to the Salt Minimal.
Master Security Settings¶
open_mode¶
Default: False
Open mode is a dangerous security feature. One problem encountered with pki authentication systems is that keys can become "mixed up" and authentication begins to fail. Open mode turns off authentication and tells the master to accept all authentication requests, which will clean up the pki keys received from the minions. Open mode should not be turned on for general use; it should only be used for a short period of time to clean up pki keys. To turn on open mode set this value to True.
open_mode: False
auto_accept¶
Default: False
Enable auto_accept. This setting will automatically accept all incoming public keys from minions.
auto_accept: False
keysize¶
Default: 2048
The size of key that should be generated when creating new keys.
keysize: 2048
autosign_timeout¶
New in version 2014.7.0.
Default: 120
Time in minutes that an incoming public key with a matching name found in pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys are removed when the master checks the minion_autosign directory. This method of auto-accepting minions can be safer than an autosign_file because the keyid record can expire and is limited to an exact name match. It should still be considered a less-than-secure option, because trust is based on just the requesting minion id.
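Following the convention of the other options, a config line for this setting (120 is the default):

```yaml
autosign_timeout: 120
```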
autosign_file¶
Default: not defined
If the autosign_file is specified, incoming keys listed in the autosign_file will be automatically accepted. Matches are searched for first by string comparison, then by globbing, then by full-string regex matching. This should still be considered a less-than-secure option, because trust is based on just the requesting minion id.
Changed in version 2018.3.0: For security reasons the file must be readonly except for its owner. If permissive_pki_access is True the owning group can also have write access, but if Salt is running as root it must be a member of that group. A less strict requirement also existed in previous versions.
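An example entry (the path below is illustrative; there is no fixed default):

```yaml
autosign_file: /etc/salt/autosign.conf
```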
autoreject_file¶
New in version 2014.1.0.
Default: not defined
Works like autosign_file, but instead allows you to specify minion IDs for which keys will automatically be rejected. Will override both membership in the autosign_file and the auto_accept setting.
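An example entry (the path below is illustrative; there is no fixed default):

```yaml
autoreject_file: /etc/salt/autoreject.conf
```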
autosign_grains_dir¶
New in version 2018.3.0.
Default: not defined
If the autosign_grains_dir is specified, incoming keys from minions with grain values that match those defined in files in the autosign_grains_dir will be accepted automatically. Grain values that should be accepted automatically can be defined by creating a file named like the corresponding grain in the autosign_grains_dir and writing the values into that file, one value per line. Lines starting with a # will be ignored. Minion must be configured to send the corresponding grains on authentication. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion.
Please see the Autoaccept Minions from Grains documentation for more information.
autosign_grains_dir: /etc/salt/autosign_grains
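As a sketch of the file format described above (the grain name and values are illustrative), a file named after the grain holds one acceptable value per line, with #-prefixed lines ignored:

```yaml
# /etc/salt/autosign_grains/environment
# minions whose 'environment' grain matches a line below are auto-accepted
staging
production
```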
permissive_pki_access¶
Default: False
Enable permissive access to the salt keys. This allows you to run the master or minion as root, but have a non-root group be given access to your pki_dir. To make the access explicit, root must belong to the group you've given access to. This is potentially quite insecure. If an autosign_file is specified, enabling permissive_pki_access will allow group access to that specific file.
permissive_pki_access: False
publisher_acl¶
Default: {}
Allow user accounts on the master to execute specific modules. The modules can be expressed as regular expressions.
publisher_acl:
fred:
- test.ping
- pkg.*
publisher_acl_blacklist¶
Default: {}
Blacklist users or modules. This is completely disabled by default.
The following example would blacklist all non-sudo users, including root, from running any commands. It would also blacklist any use of the cmd module and of test.echo.
publisher_acl_blacklist:
users:
- root
- '^(?!sudo_).*$' # all non sudo users
modules:
- cmd.*
- test.echo
sudo_acl¶
Default: False
Enforce publisher_acl and publisher_acl_blacklist when users have sudo access to the salt command.
sudo_acl: False
external_auth¶
Default: {}
The external auth system uses the Salt auth modules to authenticate and validate users to access areas of the Salt system.
external_auth:
pam:
fred:
- test.*
token_expire¶
Default: 43200 (12 hours)
Time (in seconds) for a newly generated token to live.
token_expire: 43200
token_expire_user_override¶
Default: False
Allow eauth users to specify the expiry time of the tokens they generate.
A boolean applies to all users or a dictionary of whitelisted eauth backends and usernames may be given:
token_expire_user_override:
pam:
- fred
- tom
ldap:
- gary
keep_acl_in_token¶
Default: False
Set to True to enable keeping the calculated user's auth list in the token file. This is disabled by default and the auth list is calculated or requested from the eauth driver each time.
Note: keep_acl_in_token will be forced to True when using external authentication for REST API (rest is present under external_auth). This is because the REST API does not store the password, and can therefore not retroactively fetch the ACL, so the ACL must be stored in the token.
keep_acl_in_token: False
eauth_acl_module¶
Default: ''
Auth subsystem module to use to get authorized access list for a user. By default it's the same module used for external authentication.
eauth_acl_module: django
file_recv¶
Default: False
Allow minions to push files to the master. This is disabled by default, for security purposes.
file_recv: False
file_recv_max_size¶
New in version 2014.7.0.
Default: 100
Set a hard limit, in megabytes, on the size of files that can be pushed to the master.
file_recv_max_size: 100
master_sign_pubkey¶
Default: False
Sign the master auth-replies with a cryptographic signature of the master's public key. Please see the Multimaster-PKI with Failover Tutorial for how to use these settings.
master_sign_pubkey: True
master_sign_key_name¶
Default: master_sign
The customizable name of the signing-key-pair without suffix.
master_sign_key_name: <filename_without_suffix>
master_pubkey_signature¶
Default: master_pubkey_signature
The name of the file in the master's pki-directory that holds the pre-calculated signature of the master's public-key.
master_pubkey_signature: <filename>
master_use_pubkey_signature¶
Default: False
Instead of computing the signature for each auth-reply, use a pre-calculated signature. The master_pubkey_signature must also be set for this.
master_use_pubkey_signature: True
rotate_aes_key¶
Default: True
Rotate the salt-master's AES key when a minion's public key is deleted with salt-key. This is a very important security setting. Disabling it allows deleted minions to continue listening in on messages published by the salt-master. Do not disable this unless it is absolutely clear what this does.
rotate_aes_key: True
publish_session¶
Default: 86400
The number of seconds between AES key rotations on the master.
publish_session: 86400
ssl¶
New in version 2016.11.0.
Default: None
TLS/SSL connection options. This can be set to a dictionary containing arguments corresponding to the Python ssl.wrap_socket method. For details see the Tornado and Python documentation.
Note: to set enum arguments values like cert_reqs and ssl_version use constant names without ssl module prefix: CERT_REQUIRED or PROTOCOL_SSLv23.
ssl:
keyfile: <path_to_keyfile>
certfile: <path_to_certfile>
ssl_version: PROTOCOL_TLSv1_2
preserve_minion_cache¶
Default: False
By default, the master deletes its cache of minion data when the key for that minion is removed. To preserve the cache after key deletion, set preserve_minion_cache to True.
WARNING: This may have security implications if compromised minions auth with a previously deleted minion ID.
preserve_minion_cache: False
allow_minion_key_revoke¶
Default: True
Controls whether a minion can request its own key revocation. When True the master will honor the minion's request and revoke its key. When False, the master will drop the request and the minion's key will remain accepted.
allow_minion_key_revoke: False
optimization_order¶
Default: [0, 1, 2]
In cases where Salt is distributed without .py files, this option determines the priority of optimization level(s) Salt's module loader should prefer.
NOTE:
optimization_order:
- 2
- 0
- 1
Master Large Scale Tuning Settings¶
max_open_files¶
Default: 100000
Each minion connecting to the master uses AT LEAST one file descriptor, the master subscription connection. If enough minions connect you might start seeing the following on the console (and then salt-master crashes):
Too many open files (tcp_listener.cpp:335)
Aborted (core dumped)
max_open_files: 100000
By default this value will be the value of ulimit -Hn, i.e., the hard limit for max open files.
To set a different value than the default one, uncomment and configure this setting. Remember that this value CANNOT be higher than the hard limit. Raising the hard limit depends on the OS and/or distribution; a good way to find the limit is to search the internet for something like this:
raise max open files hard limit debian
worker_threads¶
Default: 5
The number of threads to start for receiving commands and replies from minions. If minions are stalling on replies because you have many minions, raise the worker_threads value.
Worker threads should not be put below 3 when using the peer system, but can drop down to 1 worker otherwise.
Standards for busy environments:
- Use one worker thread per 200 minions.
- The value of worker_threads should not exceed 1½ times the available CPU cores.
NOTE:
worker_threads: 5
pub_hwm¶
Default: 1000
The zeromq high water mark on the publisher interface.
pub_hwm: 1000
zmq_backlog¶
Default: 1000
The listen queue size of the ZeroMQ backlog.
zmq_backlog: 1000
Master Module Management¶
runner_dirs¶
Default: []
Set additional directories to search for runner modules.
runner_dirs:
- /var/lib/salt/runners
utils_dirs¶
New in version 2018.3.0.
Default: []
Set additional directories to search for util modules.
utils_dirs:
- /var/lib/salt/utils
cython_enable¶
Default: False
Set to true to enable Cython modules (.pyx files) to be compiled on the fly on the Salt master.
cython_enable: False
Master State System Settings¶
state_top¶
Default: top.sls
The state system uses a "top" file to tell the minions what environment to use and what modules to use. The state_top file is defined relative to the root of the base environment. The value of state_top is also used for the pillar top file.
state_top: top.sls
state_top_saltenv¶
This option has no default value. Set it to an environment name to ensure that only the top file from that environment is considered during a highstate.
NOTE:
state_top_saltenv: dev
top_file_merging_strategy¶
Changed in version 2016.11.0: A merge_all strategy has been added.
Default: merge
When no specific fileserver environment (a.k.a. saltenv) has been specified for a highstate, all environments' top files are inspected. This config option determines how the SLS targets in those top files are handled.
When set to merge, the base environment's top file is evaluated first, followed by the other environments' top files. The first target expression (e.g. '*') for a given environment is kept, and when the same target expression is used in a different top file evaluated later, it is ignored. Because base is evaluated first, it is authoritative. For example, if there is a target for '*' for the foo environment in both the base and foo environment's top files, the one in the foo environment would be ignored. The environments will be evaluated in no specific order (aside from base coming first). For greater control over the order in which the environments are evaluated, use env_order. Note that, aside from the base environment's top file, any sections in top files that do not match that top file's environment will be ignored. So, for example, a section for the qa environment would be ignored if it appears in the dev environment's top file. To keep use cases like this from being ignored, use the merge_all strategy.
When set to same, then for each environment, only that environment's top file is processed, with the others being ignored. For example, only the dev environment's top file will be processed for the dev environment, and any SLS targets defined for dev in the base environment's (or any other environment's) top file will be ignored. If an environment does not have a top file, then the top file from the default_top config parameter will be used as a fallback.
When set to merge_all, then all states in all environments in all top files will be applied. The order in which individual SLS files will be executed will depend on the order in which the top files were evaluated, and the environments will be evaluated in no specific order. For greater control over the order in which the environments are evaluated, use env_order.
top_file_merging_strategy: same
env_order¶
Default: []
When top_file_merging_strategy is set to merge, and no environment is specified for a highstate, this config option allows for the order in which top files are evaluated to be explicitly defined.
env_order:
- base
- dev
- qa
master_tops¶
Default: {}
The master_tops option replaces the deprecated external_nodes option by creating a pluggable system for the generation of external top data. To gain the capabilities of the classic external_nodes system, use the following configuration:
master_tops:
ext_nodes: <Shell command which returns yaml>
renderer¶
Default: jinja|yaml
The renderer to use on the minions to render the state data.
renderer: jinja|json
userdata_template¶
New in version 2016.11.4.
Default: None
The renderer to use for templating userdata files in salt-cloud, if the userdata_template is not set in the cloud profile. If no value is set in the cloud profile or master config file, no templating will be performed.
userdata_template: jinja
jinja_env¶
New in version 2018.3.0.
Default: {}
jinja_env overrides the default Jinja environment options for all templates except sls templates. To set the options for sls templates use jinja_sls_env.
NOTE:
The default options are:
jinja_env:
block_start_string: '{%'
block_end_string: '%}'
variable_start_string: '{{'
variable_end_string: '}}'
comment_start_string: '{#'
comment_end_string: '#}'
line_statement_prefix:
line_comment_prefix:
trim_blocks: False
lstrip_blocks: False
newline_sequence: '\n'
keep_trailing_newline: False
jinja_sls_env¶
New in version 2018.3.0.
Default: {}
jinja_sls_env sets the Jinja environment options for sls templates. The defaults and accepted options are exactly the same as they are for jinja_env.
The default options are:
jinja_sls_env:
block_start_string: '{%'
block_end_string: '%}'
variable_start_string: '{{'
variable_end_string: '}}'
comment_start_string: '{#'
comment_end_string: '#}'
line_statement_prefix:
line_comment_prefix:
trim_blocks: False
lstrip_blocks: False
newline_sequence: '\n'
keep_trailing_newline: False
Example using line statements and line comments to increase ease of use:
If your configuration options are
jinja_sls_env:
line_statement_prefix: '%'
line_comment_prefix: '##'
With these options jinja will interpret anything after a % at the start of a line (ignoring whitespace) as a jinja statement and will interpret anything after a ## as a comment.
This allows the following more convenient syntax to be used:
## (this comment will not stay once rendered)
# (this comment remains in the rendered template)
## ensure all the formula services are running
% for service in formula_services:
enable_service_{{ service }}:
  service.running:
    - name: {{ service }}
% endfor
The following less convenient but equivalent syntax would have to be used if you had not set the line_statement and line_comment options:
{# (this comment will not stay once rendered) #}
# (this comment remains in the rendered template)
{# ensure all the formula services are running #}
{% for service in formula_services %}
enable_service_{{ service }}:
  service.running:
    - name: {{ service }}
{% endfor %}
jinja_trim_blocks¶
Deprecated since version 2018.3.0: Replaced by jinja_env and jinja_sls_env
New in version 2014.1.0.
Default: False
If this is set to True, the first newline after a Jinja block is removed (block, not variable tag!). Defaults to False and corresponds to the Jinja environment init variable trim_blocks.
jinja_trim_blocks: False
jinja_lstrip_blocks¶
Deprecated since version 2018.3.0: Replaced by jinja_env and jinja_sls_env
New in version 2014.1.0.
Default: False
If this is set to True, leading spaces and tabs are stripped from the start of a line to a block. Defaults to False and corresponds to the Jinja environment init variable lstrip_blocks.
jinja_lstrip_blocks: False
failhard¶
Default: False
Set the global failhard flag. This informs all states to stop running states at the moment a single state fails.
failhard: False
state_verbose¶
Default: True
Controls the verbosity of state runs. By default, the results of all states are returned, but setting this value to False will cause salt to only display output for states that failed or states that have changes.
state_verbose: False
state_output¶
Default: full
The state_output setting controls which state results are shown as full multi-line output:
- full, terse - each state will be full/terse
- mixed - only states with errors will be full
- changes - states with changes and errors will be full
full_id, mixed_id, changes_id and terse_id are also allowed; when set, the state ID will be used as name in the output.
state_output: full
state_output_diff¶
Default: False
The state_output_diff setting changes whether or not the output from successful states is returned. Useful when even the terse output of these states is cluttering the logs. Set it to True to ignore them.
state_output_diff: False
state_output_profile¶
Default: True
The state_output_profile setting changes whether profile information will be shown for each state run.
state_output_profile: True
state_output_pct¶
Default: False
The state_output_pct setting changes whether success and failure information as a percent of total actions will be shown for each state run.
state_output_pct: False
state_compress_ids¶
Default: False
The state_compress_ids setting aggregates information about states which have multiple "names" under the same state ID in the highstate output.
state_compress_ids: False
state_aggregate¶
Default: False
Set to True to automatically aggregate all states that have support for mod_aggregate.
state_aggregate: True
Or pass a list of state module names to automatically aggregate just those types.
state_aggregate:
- pkg
state_events¶
Default: False
Send progress events as each function in a state run completes execution by setting to True. Progress events are in the format salt/job/<JID>/prog/<MID>/<RUN NUM>.
state_events: True
yaml_utf8¶
Default: False
Enable extra routines in the YAML renderer for states containing UTF-8 characters.
yaml_utf8: False
runner_returns¶
Default: True
If set to False, runner jobs will not be saved to job cache (defined by master_job_cache).
runner_returns: False
Master File Server Settings¶
fileserver_backend¶
Default: ['roots']
Salt supports a modular fileserver backend system, this system allows the salt master to link directly to third party systems to gather and manage the files available to minions. Multiple backends can be configured and will be searched for the requested file in the order in which they are defined here. The default setting only enables the standard backend roots, which is configured using the file_roots option.
Example:
fileserver_backend:
- roots
- gitfs
NOTE:
fileserver_followsymlinks¶
New in version 2014.1.0.
Default: True
By default, the file server follows symlinks when walking the filesystem tree. Currently this only applies to the default roots fileserver_backend.
fileserver_followsymlinks: True
fileserver_ignoresymlinks¶
New in version 2014.1.0.
Default: False
If you do not want symlinks to be treated as the files they are pointing to, set fileserver_ignoresymlinks to True. By default this is set to False. When set to True, any detected symlink while listing files on the Master will not be returned to the Minion.
fileserver_ignoresymlinks: False
fileserver_list_cache_time¶
New in version 2014.1.0.
Changed in version 2016.11.0: The default was changed from 30 seconds to 20.
Default: 20
Salt caches the list of files/symlinks/directories for each fileserver backend and environment as they are requested, to guard against a performance bottleneck at scale when many minions all ask the fileserver which files are available simultaneously. This configuration parameter allows for the max age of that cache to be altered.
Set this value to 0 to disable use of this cache altogether, but keep in mind that this may increase the CPU load on the master when running a highstate on a large number of minions.
NOTE:
fileserver_list_cache_time: 5
fileserver_verify_config¶
New in version 2017.7.0.
Default: True
By default, as the master starts it performs some sanity checks on the configured fileserver backends. If any of these sanity checks fail (such as when an invalid configuration is used), the master daemon will abort.
To skip these sanity checks, set this option to False.
fileserver_verify_config: False
hash_type¶
Default: sha256
The hash_type determines the hash algorithm used when computing the hash of a file on the master server. The default is sha256, but md5, sha1, sha224, sha384, and sha512 are also supported.
hash_type: sha256
file_buffer_size¶
Default: 1048576
The buffer size in the file server in bytes.
file_buffer_size: 1048576
file_ignore_regex¶
Default: ''
A regular expression (or a list of expressions) that will be matched against the file path before syncing the modules and states to the minions. This includes files affected by the file.recurse state. For example, if you manage your custom modules and states in subversion and don't want all the '.svn' folders and content synced to your minions, you could set this to '/\.svn($|/)'. By default nothing is ignored.
file_ignore_regex:
- '/\.svn($|/)'
- '/\.git($|/)'
file_ignore_glob¶
Default: ''
A file glob (or list of file globs) that will be matched against the file path before syncing the modules and states to the minions. This is similar to file_ignore_regex above, but works on globs instead of regex. By default nothing is ignored.
file_ignore_glob:
- '*.pyc'
- '*/somefolder/*.bak'
- '*.swp'
NOTE:
master_roots¶
Default: ''
A master-only copy of the file_roots dictionary, used by the state compiler.
Example:
master_roots:
base:
- /srv/salt-master
roots: Master's Local File Server¶
file_roots¶
Changed in version 3005.
Default:
base:
- /srv/salt
Salt runs a lightweight file server written in ZeroMQ to deliver files to minions. This file server is built into the master daemon and does not require a dedicated port.
The file server works on environments passed to the master. Each environment can have multiple root directories. The subdirectories in the multiple file roots must not overlap, otherwise the downloaded files cannot be reliably verified. A base environment is required to house the top file.
As of 2018.3.5 and 2019.2.1, it is possible to have __env__ as a catch-all environment.
Example:
file_roots:
base:
- /srv/salt
dev:
- /srv/salt/dev/services
- /srv/salt/dev/states
prod:
- /srv/salt/prod/services
- /srv/salt/prod/states
__env__:
- /srv/salt/default
Taking dynamic environments one step further, __env__ can also be used in the file_roots filesystem path as of version 3005. It will be replaced with the actual saltenv and searched for states and data to provide to the minion. Note this substitution ONLY occurs for the __env__ environment. For instance, this configuration:
file_roots:
__env__:
- /srv/__env__/salt
is equivalent to this static configuration:
file_roots:
dev:
- /srv/dev/salt
test:
- /srv/test/salt
prod:
- /srv/prod/salt
NOTE:
roots_update_interval¶
New in version 2018.3.0.
Default: 60
This option defines the update interval (in seconds) for file_roots.
NOTE:
roots_update_interval: 120
gitfs: Git Remote File Server Backend¶
gitfs_remotes¶
Default: []
When using the git fileserver backend at least one git remote needs to be defined. The user running the salt master will need read access to the repo.
The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and tags are translated into salt environments.
gitfs_remotes:
- git://github.com/saltstack/salt-states.git
- file:///var/git/saltmaster
NOTE:
As of 2014.7.0, it is possible to have per-repo versions of several of the gitfs configuration parameters. For more information, see the GitFS Walkthrough.
gitfs_provider¶
New in version 2014.7.0.
Optional parameter used to specify the provider to be used for gitfs. More information can be found in the GitFS Walkthrough.
Must be either pygit2 or gitpython. If unset, then each will be tried in that same order, and the first one with a compatible version installed will be the provider that is used.
gitfs_provider: gitpython
gitfs_ssl_verify¶
Default: True
Specifies whether or not to ignore SSL certificate errors when fetching from the repositories configured in gitfs_remotes. The False setting is useful if you're using a git repo that uses a self-signed certificate. However, keep in mind that setting this to anything other than True is considered insecure, and using an SSH-based transport (if available) may be a better option.
gitfs_ssl_verify: False
NOTE:
Changed in version 2015.8.0: This option can now be configured on individual repositories as well. See here for more info.
Changed in version 2016.11.0: The default config value changed from False to True.
gitfs_mountpoint¶
New in version 2014.7.0.
Default: ''
Specifies a path on the salt fileserver which will be prepended to all files served by gitfs. This option can be used in conjunction with gitfs_root. It can also be configured for an individual repository, see here for more info.
gitfs_mountpoint: salt://foo/bar
NOTE:
gitfs_root¶
Default: ''
Relative path to a subdirectory within the repository from which Salt should begin to serve files. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with gitfs_mountpoint. If used, then from Salt's perspective the directories above the one specified will be ignored and the relative path will (for the purposes of gitfs) be considered as the root of the repo.
gitfs_root: somefolder/otherfolder
Changed in version 2014.7.0: This option can now be configured on individual repositories as well. See here for more info.
gitfs_base¶
Default: master
Defines which branch/tag should be used as the base environment.
gitfs_base: salt
Changed in version 2014.7.0: This option can now be configured on individual repositories as well. See here for more info.
gitfs_saltenv¶
New in version 2016.11.0.
Default: []
Global settings for per-saltenv configuration parameters. Though per-saltenv configuration parameters are typically one-off changes specific to a single gitfs remote, and thus more often configured on a per-remote basis, this parameter can be used to specify per-saltenv changes which should apply to all remotes. For example, the below configuration will map the develop branch to the dev saltenv for all gitfs remotes.
gitfs_saltenv:
- dev:
- ref: develop
gitfs_disable_saltenv_mapping¶
New in version 2018.3.0.
Default: False
When set to True, all saltenv mapping logic is disregarded (aside from which branch/tag is mapped to the base saltenv). To use any other environments, they must then be defined using per-saltenv configuration parameters.
gitfs_disable_saltenv_mapping: True
NOTE:
gitfs_ref_types¶
New in version 2018.3.0.
Default: ['branch', 'tag', 'sha']
This option defines what types of refs are mapped to fileserver environments (i.e. saltenvs). It also sets the order of preference when there are ambiguously-named refs (i.e. when a branch and tag both have the same name). The below example disables mapping of both tags and SHAs, so that only branches are mapped as saltenvs:
gitfs_ref_types:
- branch
NOTE:
NOTE:
gitfs_saltenv_whitelist¶
New in version 2014.7.0.
Changed in version 2018.3.0: Renamed from gitfs_env_whitelist to gitfs_saltenv_whitelist
Default: []
Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough.
gitfs_saltenv_whitelist:
- base
- v1.*
- 'mybranch\d+'
gitfs_saltenv_blacklist¶
New in version 2014.7.0.
Changed in version 2018.3.0: Renamed from gitfs_env_blacklist to gitfs_saltenv_blacklist
Default: []
Used to restrict which environments are made available. Can speed up state runs if the repos in gitfs_remotes contain many branches/tags. More information can be found in the GitFS Walkthrough.
gitfs_saltenv_blacklist:
- base
- v1.*
- 'mybranch\d+'
gitfs_global_lock¶
New in version 2015.8.9.
Default: True
When set to False, if there is an update lock for a gitfs remote and the pid written to it is not running on the master, the lock file will be automatically cleared and a new lock will be obtained. When set to True, Salt will simply log a warning when there is an update lock present.
On single-master deployments, disabling this option can help automatically deal with instances where the master was shutdown/restarted during the middle of a gitfs update, leaving an update lock in place.
However, on multi-master deployments with the gitfs cachedir shared via GlusterFS, nfs, or another network filesystem, it is strongly recommended not to disable this option as doing so will cause lock files to be removed if they were created by a different master.
# Disable global lock
gitfs_global_lock: False
gitfs_update_interval¶
New in version 2018.3.0.
Default: 60
This option defines the default update interval (in seconds) for gitfs remotes. The update interval can also be set for a single repository via a per-remote config option.
gitfs_update_interval: 120
GitFS Authentication Options¶
These parameters only currently apply to the pygit2 gitfs provider. Examples of how to use these can be found in the GitFS Walkthrough.
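Pulling together the SSH-related options described below, a minimal sketch for authenticating to an SSH remote (the repo URL and key paths are placeholders):

```yaml
gitfs_remotes:
  - ssh://git@example.com/path/to/repo.git
gitfs_pubkey: /root/.ssh/id_rsa.pub
gitfs_privkey: /root/.ssh/id_rsa
# only needed if the key is passphrase-protected
gitfs_passphrase: mypassphrase
```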
gitfs_user¶
New in version 2014.7.0.
Default: ''
Along with gitfs_password, is used to authenticate to HTTPS remotes.
gitfs_user: git
NOTE:
gitfs_password¶
New in version 2014.7.0.
Default: ''
Along with gitfs_user, is used to authenticate to HTTPS remotes. This parameter is not required if the repository does not use authentication.
gitfs_password: mypassword
NOTE:
gitfs_insecure_auth¶
New in version 2014.7.0.
Default: False
By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This parameter enables authentication over HTTP. Enable this at your own risk.
gitfs_insecure_auth: True
NOTE:
gitfs_pubkey¶
New in version 2014.7.0.
Default: ''
Along with gitfs_privkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. Required for SSH remotes.
gitfs_pubkey: /path/to/key.pub
NOTE:
gitfs_privkey¶
New in version 2014.7.0.
Default: ''
Along with gitfs_pubkey (and optionally gitfs_passphrase), is used to authenticate to SSH remotes. Required for SSH remotes.
gitfs_privkey: /path/to/key
NOTE:
gitfs_passphrase¶
New in version 2014.7.0.
Default: ''
This parameter is optional, required only when the SSH key being used to authenticate is protected by a passphrase.
gitfs_passphrase: mypassphrase
NOTE:
gitfs_refspecs¶
New in version 2017.7.0.
Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*']
When fetching from remote repositories, by default Salt will fetch branches and tags. This parameter can be used to override the default and specify alternate refspecs to be fetched. More information on how this feature works can be found in the GitFS Walkthrough.
gitfs_refspecs:
- '+refs/heads/*:refs/remotes/origin/*'
- '+refs/tags/*:refs/tags/*'
- '+refs/pull/*/head:refs/remotes/origin/pr/*'
- '+refs/pull/*/merge:refs/remotes/origin/merge/*'
hgfs: Mercurial Remote File Server Backend¶
hgfs_remotes¶
New in version 0.17.0.
Default: []
When using the hg fileserver backend at least one mercurial remote needs to be defined. The user running the salt master will need read access to the repo.
The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and/or bookmarks are translated into salt environments, as defined by the hgfs_branch_method parameter.
hgfs_remotes:
- https://username@bitbucket.org/username/reponame
NOTE:
hgfs_remotes:
  - https://username@bitbucket.org/username/repo1:
    - base: saltstates
  - https://username@bitbucket.org/username/repo2:
    - root: salt
    - mountpoint: salt://foo/bar/baz
  - https://username@bitbucket.org/username/repo3:
    - root: salt/states
    - branch_method: mixed
hgfs_branch_method¶
New in version 0.17.0.
Default: branches
Defines the objects that will be used as fileserver environments.
- branches - Only branches and tags will be used
- bookmarks - Only bookmarks and tags will be used
- mixed - Branches, bookmarks, and tags will be used
hgfs_branch_method: mixed
NOTE:
Prior to this release, the default branch was used as the base environment.
hgfs_mountpoint¶
New in version 2014.7.0.
Default: ''
Specifies a path on the salt fileserver which will be prepended to all files served by hgfs. This option can be used in conjunction with hgfs_root. It can also be configured on a per-remote basis, see here for more info.
hgfs_mountpoint: salt://foo/bar
NOTE:
hgfs_root¶
New in version 0.17.0.
Default: ''
Relative path to a subdirectory within the repository from which Salt should begin to serve files. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with hgfs_mountpoint. If used, then from Salt's perspective the directories above the one specified will be ignored and the relative path will (for the purposes of hgfs) be considered as the root of the repo.
hgfs_root: somefolder/otherfolder
Changed in version 2014.7.0: Ability to specify hgfs roots on a per-remote basis was added. See here for more info.
hgfs_base¶
New in version 2014.1.0.
Default: default
Defines which branch should be used as the base environment. Change this if hgfs_branch_method is set to bookmarks to specify which bookmark should be used as the base environment.
hgfs_base: salt
hgfs_saltenv_whitelist¶
New in version 2014.7.0.
Changed in version 2018.3.0: Renamed from hgfs_env_whitelist to hgfs_saltenv_whitelist
Default: []
Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.
If used, only branches/bookmarks/tags which match one of the specified expressions will be exposed as fileserver environments.
If used in conjunction with hgfs_saltenv_blacklist, then the subset of branches/bookmarks/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.
hgfs_saltenv_whitelist:
- base
- v1.*
- 'mybranch\d+'
hgfs_saltenv_blacklist¶
New in version 2014.7.0.
Changed in version 2018.3.0: Renamed from hgfs_env_blacklist to hgfs_saltenv_blacklist
Default: []
Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.
If used, branches/bookmarks/tags which match one of the specified expressions will not be exposed as fileserver environments.
If used in conjunction with hgfs_saltenv_whitelist, then the subset of branches/bookmarks/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.
hgfs_saltenv_blacklist:
- base
- v1.*
- 'mybranch\d+'
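The whitelist/blacklist interplay described above can be sketched in Python. This is a simplified illustration, not Salt's actual implementation; `env_allowed` is a hypothetical helper that treats each entry as a full name, a glob, or a regular expression that must match the entire name.

```python
import fnmatch
import re

def env_allowed(env, whitelist, blacklist):
    """Return True if an environment name passes the white/blacklist.

    Each entry may be a full name, a glob, or a regular expression;
    a regex must match the entire name (hence re.fullmatch).
    """
    def matches(name, patterns):
        for pat in patterns:
            if name == pat or fnmatch.fnmatch(name, pat):
                return True
            if re.fullmatch(pat, name):
                return True
        return False

    if whitelist and not matches(env, whitelist):
        return False
    if blacklist and matches(env, blacklist):
        return False
    return True

# Only names matching the whitelist but not the blacklist survive
print(env_allowed("v1.2", ["base", "v1.*"], []))        # True
print(env_allowed("mybranch42", [], ["mybranch\\d+"]))  # False
```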
hgfs_update_interval¶
New in version 2018.3.0.
Default: 60
This option defines the update interval (in seconds) for hgfs_remotes.
hgfs_update_interval: 120
svnfs: Subversion Remote File Server Backend¶
svnfs_remotes¶
New in version 0.17.0.
Default: []
When using the svn fileserver backend at least one subversion remote needs to be defined. The user running the salt master will need read access to the repo.
The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. The trunk, branches, and tags become environments, with the trunk being the base environment.
svnfs_remotes:
- svn://foo.com/svn/myproject
NOTE:
- svnfs_root
- svnfs_mountpoint
- svnfs_trunk
- svnfs_branches
- svnfs_tags
For example:
svnfs_remotes:
  - svn://foo.com/svn/project1
  - svn://foo.com/svn/project2:
    - root: salt
    - mountpoint: salt://foo/bar/baz
  - svn://foo.com/svn/project3:
    - root: salt/states
    - branches: branch
    - tags: tag
svnfs_mountpoint¶
New in version 2014.7.0.
Default: ''
Specifies a path on the salt fileserver which will be prepended to all files served by svnfs. This option can be used in conjunction with svnfs_root. It can also be configured on a per-remote basis, see here for more info.
svnfs_mountpoint: salt://foo/bar
NOTE:
svnfs_root¶
New in version 0.17.0.
Default: ''
Relative path to a subdirectory within the repository from which Salt should begin to serve files. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with svnfs_mountpoint. If used, then from Salt's perspective the directories above the one specified will be ignored and the relative path will (for the purposes of svnfs) be considered as the root of the repo.
svnfs_root: somefolder/otherfolder
Changed in version 2014.7.0: Ability to specify svnfs roots on a per-remote basis was added. See here for more info.
svnfs_trunk¶
New in version 2014.7.0.
Default: trunk
Path relative to the root of the repository where the trunk is located. Can also be configured on a per-remote basis, see here for more info.
svnfs_trunk: trunk
svnfs_branches¶
New in version 2014.7.0.
Default: branches
Path relative to the root of the repository where the branches are located. Can also be configured on a per-remote basis, see here for more info.
svnfs_branches: branches
svnfs_tags¶
New in version 2014.7.0.
Default: tags
Path relative to the root of the repository where the tags are located. Can also be configured on a per-remote basis, see here for more info.
svnfs_tags: tags
svnfs_saltenv_whitelist¶
New in version 2014.7.0.
Changed in version 2018.3.0: Renamed from svnfs_env_whitelist to svnfs_saltenv_whitelist
Default: []
Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.
If used, only branches/tags which match one of the specified expressions will be exposed as fileserver environments.
If used in conjunction with svnfs_saltenv_blacklist, then the subset of branches/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.
svnfs_saltenv_whitelist:
- base
- v1.*
- 'mybranch\d+'
svnfs_saltenv_blacklist¶
New in version 2014.7.0.
Changed in version 2018.3.0: Renamed from svnfs_env_blacklist to svnfs_saltenv_blacklist
Default: []
Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire environment name.
If used, branches/tags which match one of the specified expressions will not be exposed as fileserver environments.
If used in conjunction with svnfs_saltenv_whitelist, then the subset of branches/tags which match the whitelist but do not match the blacklist will be exposed as fileserver environments.
svnfs_saltenv_blacklist:
- base
- v1.*
- 'mybranch\d+'
svnfs_update_interval¶
New in version 2018.3.0.
Default: 60
This option defines the update interval (in seconds) for svnfs_remotes.
svnfs_update_interval: 120
minionfs: MinionFS Remote File Server Backend¶
minionfs_env¶
New in version 2014.7.0.
Default: base
Environment from which MinionFS files are made available.
minionfs_env: minionfs
minionfs_mountpoint¶
New in version 2014.7.0.
Default: ''
Specifies a path on the salt fileserver from which minionfs files are served.
minionfs_mountpoint: salt://foo/bar
NOTE:
minionfs_whitelist¶
New in version 2014.7.0.
Default: []
Used to restrict which minions' pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID.
If used, only the pushed files from minions which match one of the specified expressions will be exposed.
If used in conjunction with minionfs_blacklist, then the subset of hosts which match the whitelist but do not match the blacklist will be exposed.
minionfs_whitelist:
- server01
- dev*
- 'mail\d+.mydomain.tld'
minionfs_blacklist¶
New in version 2014.7.0.
Default: []
Used to restrict which minions' pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID.
If used, the pushed files from minions which match one of the specified expressions will not be exposed.
If used in conjunction with minionfs_whitelist, then the subset of hosts which match the whitelist but do not match the blacklist will be exposed.
minionfs_blacklist:
- server01
- dev*
- 'mail\d+.mydomain.tld'
minionfs_update_interval¶
New in version 2018.3.0.
Default: 60
This option defines the update interval (in seconds) for MinionFS.
NOTE:
minionfs_update_interval: 120
azurefs: Azure File Server Backend¶
New in version 2015.8.0.
See the azurefs documentation for usage examples.
azurefs_update_interval¶
New in version 2018.3.0.
Default: 60
This option defines the update interval (in seconds) for azurefs.
azurefs_update_interval: 120
s3fs: S3 File Server Backend¶
New in version 0.16.0.
See the s3fs documentation for usage examples.
s3fs_update_interval¶
New in version 2018.3.0.
Default: 60
This option defines the update interval (in seconds) for s3fs.
s3fs_update_interval: 120
fileserver_interval¶
New in version 3006.0.
Default: 3600
Defines how often to restart the master's FileserverUpdate process.
fileserver_interval: 9600
Pillar Configuration¶
pillar_roots¶
Changed in version 3005.
Default:
base:
- /srv/pillar
Set the environments and directories used to hold pillar sls data. This configuration is the same as file_roots:
As of 2017.7.5 and 2018.3.1, it is possible to have __env__ as a catch-all environment.
Example:
pillar_roots:
  base:
    - /srv/pillar
  dev:
    - /srv/pillar/dev
  prod:
    - /srv/pillar/prod
  __env__:
    - /srv/pillar/others
Taking dynamic environments one step further, __env__ can also be used in the pillar_roots filesystem path as of version 3005. It will be replaced with the actual pillarenv and searched for Pillar data to provide to the minion. Note this substitution ONLY occurs for the __env__ environment. For instance, this configuration:
pillar_roots:
  __env__:
    - /srv/__env__/pillar
is equivalent to this static configuration:
pillar_roots:
  dev:
    - /srv/dev/pillar
  test:
    - /srv/test/pillar
  prod:
    - /srv/prod/pillar
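The __env__ substitution described above can be pictured with a short Python sketch. This is a simplified illustration, not Salt's internal API; `expand_env_roots` is a hypothetical helper.

```python
def expand_env_roots(pillar_roots, pillarenv):
    """Resolve a ``__env__`` entry in pillar_roots for a concrete pillarenv.

    Static environments are preferred as-is; only the ``__env__``
    environment participates in the path substitution.
    """
    if pillarenv in pillar_roots:
        return {pillarenv: pillar_roots[pillarenv]}
    if "__env__" in pillar_roots:
        return {
            pillarenv: [
                path.replace("__env__", pillarenv)
                for path in pillar_roots["__env__"]
            ]
        }
    return {}

roots = {"__env__": ["/srv/__env__/pillar"]}
print(expand_env_roots(roots, "dev"))   # {'dev': ['/srv/dev/pillar']}
print(expand_env_roots(roots, "prod"))  # {'prod': ['/srv/prod/pillar']}
```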
on_demand_ext_pillar¶
New in version 2016.3.6,2016.11.3,2017.7.0.
Default: ['libvirt', 'virtkey']
The external pillars permitted to be used on-demand using pillar.ext.
on_demand_ext_pillar:
- libvirt
- virtkey
- git
WARNING:
decrypt_pillar¶
New in version 2017.7.0.
Default: []
A list of paths to be recursively decrypted during pillar compilation.
decrypt_pillar:
- 'foo:bar': gpg
- 'lorem:ipsum:dolor'
Entries in this list can be formatted either as a simple string, or as a key/value pair, with the key being the pillar location, and the value being the renderer to use for pillar decryption. If the former is used, the renderer specified by decrypt_pillar_default will be used.
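A delimited entry such as foo:bar simply names a nested location in the pillar dictionary. The following minimal sketch shows how such a path selects the value handed to the decryption renderer; `get_by_path` is a hypothetical helper, not part of Salt's API.

```python
def get_by_path(pillar, path, delimiter=":"):
    """Walk a nested dict using a delimited path such as 'foo:bar'."""
    node = pillar
    for key in path.split(delimiter):
        node = node[key]
    return node

# 'foo:bar' names the (encrypted) value that the configured renderer,
# e.g. gpg, would be asked to decrypt during pillar compilation
pillar = {"foo": {"bar": "-----BEGIN PGP MESSAGE-----..."}}
print(get_by_path(pillar, "foo:bar"))
```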
decrypt_pillar_delimiter¶
New in version 2017.7.0.
Default: :
The delimiter used to distinguish nested data structures in the decrypt_pillar option.
decrypt_pillar_delimiter: '|'
decrypt_pillar:
- 'foo|bar': gpg
- 'lorem|ipsum|dolor'
decrypt_pillar_default¶
New in version 2017.7.0.
Default: gpg
The default renderer used for decryption, if one is not specified for a given pillar key in decrypt_pillar.
decrypt_pillar_default: my_custom_renderer
decrypt_pillar_renderers¶
New in version 2017.7.0.
Default: ['gpg']
List of renderers which are permitted to be used for pillar decryption.
decrypt_pillar_renderers:
- gpg
- my_custom_renderer
gpg_decrypt_must_succeed¶
New in version 3005.
Default: False
If this is True and the ciphertext could not be decrypted, then an error is raised.
Passing the ciphertext through unchanged is almost never desired: for example, if a state sets a database password from pillar and gpg rendering fails, the state would set the password to the raw ciphertext rather than the decrypted secret.
WARNING:
gpg_decrypt_must_succeed: False
pillar_opts¶
Default: False
The pillar_opts option adds the master configuration file data to a dict in the pillar called master. This can be used to set simple configurations in the master config file that can then be used on minions.
Note that setting this option to True means the master config file will be included in all minion's pillars. While this makes global configuration of services and systems easy, it may not be desired if sensitive data is stored in the master configuration.
pillar_opts: False
pillar_safe_render_error¶
Default: True
The pillar_safe_render_error option prevents the master from passing pillar render errors to the minion. This is enabled by default because the error could contain templating data which would give that minion information it shouldn't have, like a password! When set to True, the error message will only show:
Rendering SLS 'my.sls' failed. Please see master log for details.
pillar_safe_render_error: True
ext_pillar¶
The ext_pillar option allows for any number of external pillar interfaces to be called when populating pillar data. The configuration is based on ext_pillar functions. The available ext_pillar functions can be found herein:
salt/pillar
By default, the ext_pillar interface is not configured to run.
Default: []
ext_pillar:
  - hiera: /etc/hiera.yaml
  - cmd_yaml: cat /etc/salt/yaml
  - reclass:
      inventory_base_uri: /etc/reclass
There are additional details at Pillars
ext_pillar_first¶
New in version 2015.5.0.
Default: False
This option allows for external pillar sources to be evaluated before pillar_roots. External pillar data is evaluated separately from pillar_roots pillar data, and then both sets of pillar data are merged into a single pillar dictionary, so the value of this config option will have an impact on which key "wins" when there is one of the same name in both the external pillar data and pillar_roots pillar data. By setting this option to True, ext_pillar keys will be overridden by pillar_roots, while leaving it as False will allow ext_pillar keys to override those from pillar_roots.
NOTE:
ext_pillar_first: False
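The effect of this option on merge precedence can be seen with plain dictionary merging. This is a simplified flat-dict sketch; real pillar merging is recursive and governed by pillar_source_merging_strategy.

```python
pillar_roots_data = {"db_password": "from_pillar_roots"}
ext_pillar_data = {"db_password": "from_ext_pillar"}

# ext_pillar_first: False (default): ext_pillar data is merged last, so it wins
merged_default = {**pillar_roots_data, **ext_pillar_data}
# ext_pillar_first: True: pillar_roots data is merged last, so it wins
merged_first = {**ext_pillar_data, **pillar_roots_data}

print(merged_default["db_password"])  # from_ext_pillar
print(merged_first["db_password"])    # from_pillar_roots
```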
pillarenv_from_saltenv¶
Default: False
When set to True, the pillarenv value will assume the value of the effective saltenv when running states. This essentially makes salt-run pillar.show_pillar saltenv=dev equivalent to salt-run pillar.show_pillar saltenv=dev pillarenv=dev. If pillarenv is set on the CLI, it will override this option.
pillarenv_from_saltenv: True
NOTE:
pillar_raise_on_missing¶
New in version 2015.5.0.
Default: False
Set this option to True to force a KeyError to be raised whenever an attempt to retrieve a named value from pillar fails. When this option is set to False, the failed attempt returns an empty string.
Git External Pillar (git_pillar) Configuration Options¶
git_pillar_provider¶
New in version 2015.8.0.
Specify the provider to be used for git_pillar. Must be either pygit2 or gitpython. If unset, then both will be tried in that same order, and the first one with a compatible version installed will be the provider that is used.
git_pillar_provider: gitpython
git_pillar_base¶
New in version 2015.8.0.
Default: master
If the desired branch matches this value, and the environment is omitted from the git_pillar configuration, then the environment for that git_pillar remote will be base. For example, in the configuration below, the foo branch/tag would be assigned to the base environment, while bar would be mapped to the bar environment.
git_pillar_base: foo
ext_pillar:
  - git:
    - foo https://mygitserver/git-pillar.git
    - bar https://mygitserver/git-pillar.git
git_pillar_branch¶
New in version 2015.8.0.
Default: master
If the branch is omitted from a git_pillar remote, then this branch will be used instead. For example, in the configuration below, the first two remotes would use the pillardata branch/tag, while the third would use the foo branch/tag.
git_pillar_branch: pillardata
ext_pillar:
  - git:
    - https://mygitserver/pillar1.git
    - https://mygitserver/pillar2.git:
      - root: pillar
    - foo https://mygitserver/pillar3.git
git_pillar_env¶
New in version 2015.8.0.
Default: '' (unset)
Environment to use for git_pillar remotes. This is normally derived from the branch/tag (or from a per-remote env parameter), but if set this will override the process of deriving the env from the branch/tag name. For example, in the configuration below the foo branch would be assigned to the base environment, while the bar branch would need to explicitly have bar configured as its environment to keep it from also being mapped to the base environment.
git_pillar_env: base
ext_pillar:
  - git:
    - foo https://mygitserver/git-pillar.git
    - bar https://mygitserver/git-pillar.git:
      - env: bar
For this reason, this option is recommended to be left unset, unless the use case calls for all (or almost all) of the git_pillar remotes to use the same environment irrespective of the branch/tag being used.
git_pillar_root¶
New in version 2015.8.0.
Default: ''
Path relative to the root of the repository where the git_pillar top file and SLS files are located. In the below configuration, the pillar top file and SLS files would be looked for in a subdirectory called pillar.
git_pillar_root: pillar
ext_pillar:
  - git:
    - master https://mygitserver/pillar1.git
    - master https://mygitserver/pillar2.git
NOTE:
ext_pillar:
  - git:
    - master https://mygitserver/pillar1.git
    - master https://mygitserver/pillar2.git:
      - root: pillar
In this example, for the first remote the top file and SLS files would be looked for in the root of the repository, while in the second remote the pillar data would be retrieved from the pillar subdirectory.
git_pillar_ssl_verify¶
New in version 2015.8.0.
Changed in version 2016.11.0.
Default: True
Specifies whether or not to ignore SSL certificate errors when contacting the remote repository. Setting this to False is useful if you're using a git repo with a self-signed certificate. However, keep in mind that setting this to anything other than True is considered insecure, and using an SSH-based transport (if available) may be a better option.
In the 2016.11.0 release, the default config value changed from False to True.
git_pillar_ssl_verify: True
NOTE:
git_pillar_global_lock¶
New in version 2015.8.9.
Default: True
When set to False, if there is an update/checkout lock for a git_pillar remote and the pid written to it is not running on the master, the lock file will be automatically cleared and a new lock will be obtained. When set to True, Salt will simply log a warning when there is a lock present.
On single-master deployments, disabling this option can help automatically deal with instances where the master was shutdown/restarted during the middle of a git_pillar update/checkout, leaving a lock in place.
However, on multi-master deployments with the git_pillar cachedir shared via GlusterFS, nfs, or another network filesystem, it is strongly recommended not to disable this option as doing so will cause lock files to be removed if they were created by a different master.
# Disable global lock
git_pillar_global_lock: False
git_pillar_includes¶
New in version 2017.7.0.
Default: True
Normally, when processing git_pillar remotes, if more than one repo under the same git section in the ext_pillar configuration refers to the same pillar environment, then each repo in a given environment will have access to the other repos' files to be referenced in their top files. However, it may be desirable to disable this behavior. If so, set this value to False.
For a more detailed examination of how includes work, see this explanation from the git_pillar documentation.
git_pillar_includes: False
git_pillar_update_interval¶
New in version 3000.
Default: 60
This option defines the default update interval (in seconds) for git_pillar remotes. The update is handled within the global loop, hence git_pillar_update_interval should be a multiple of loop_interval.
git_pillar_update_interval: 120
Git External Pillar Authentication Options¶
These parameters only currently apply to the pygit2 git_pillar_provider. Authentication works the same as it does in gitfs, as outlined in the GitFS Walkthrough, though the global configuration options are named differently to reflect that they are for git_pillar instead of gitfs.
git_pillar_user¶
New in version 2015.8.0.
Default: ''
Along with git_pillar_password, is used to authenticate to HTTPS remotes.
git_pillar_user: git
git_pillar_password¶
New in version 2015.8.0.
Default: ''
Along with git_pillar_user, is used to authenticate to HTTPS remotes. This parameter is not required if the repository does not use authentication.
git_pillar_password: mypassword
git_pillar_insecure_auth¶
New in version 2015.8.0.
Default: False
By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This parameter enables authentication over HTTP. Enable this at your own risk.
git_pillar_insecure_auth: True
git_pillar_pubkey¶
New in version 2015.8.0.
Default: ''
Along with git_pillar_privkey (and optionally git_pillar_passphrase), is used to authenticate to SSH remotes.
git_pillar_pubkey: /path/to/key.pub
git_pillar_privkey¶
New in version 2015.8.0.
Default: ''
Along with git_pillar_pubkey (and optionally git_pillar_passphrase), is used to authenticate to SSH remotes.
git_pillar_privkey: /path/to/key
git_pillar_passphrase¶
New in version 2015.8.0.
Default: ''
This parameter is optional, required only when the SSH key being used to authenticate is protected by a passphrase.
git_pillar_passphrase: mypassphrase
git_pillar_refspecs¶
New in version 2017.7.0.
Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*']
When fetching from remote repositories, by default Salt will fetch branches and tags. This parameter can be used to override the default and specify alternate refspecs to be fetched. This parameter works similarly to its GitFS counterpart, in that it can be configured both globally and for individual remotes.
git_pillar_refspecs:
- '+refs/heads/*:refs/remotes/origin/*'
- '+refs/tags/*:refs/tags/*'
- '+refs/pull/*/head:refs/remotes/origin/pr/*'
- '+refs/pull/*/merge:refs/remotes/origin/merge/*'
git_pillar_verify_config¶
New in version 2017.7.0.
Default: True
By default, as the master starts it performs some sanity checks on the configured git_pillar repositories. If any of these sanity checks fail (such as when an invalid configuration is used), the master daemon will abort.
To skip these sanity checks, set this option to False.
git_pillar_verify_config: False
Pillar Merging Options¶
pillar_source_merging_strategy¶
New in version 2014.7.0.
Default: smart
The pillar_source_merging_strategy option allows you to configure merging strategy between different sources. It accepts 5 values:
- none:
It will not do any merging at all and only parse the pillar data from the passed environment and 'base' if no environment was specified.
New in version 2016.3.4.
- recurse:
It will recursively merge data. For example, these 2 sources:
foo: 42
bar:
  element1: True

bar:
  element2: True
baz: quux
will be merged as:
foo: 42
bar:
  element1: True
  element2: True
baz: quux
- aggregate:
instructs aggregation of elements between sources that use the #!yamlex renderer.
For example, these two documents:
foo: 42
bar: !aggregate {
  element1: True
}
baz: !aggregate quux

bar: !aggregate {
  element2: True
}
baz: !aggregate quux2
will be merged as:
foo: 42
bar:
  element1: True
  element2: True
baz:
  - quux
  - quux2
NOTE:
- overwrite:
Will use the behaviour of the 2014.1 branch and earlier.
Overwrites elements according to the order in which they are processed.
First pillar processed:
A:
  first_key: blah
  second_key: blah
Second pillar processed:
A:
  third_key: blah
  fourth_key: blah
will be merged as:
A:
  third_key: blah
  fourth_key: blah
- smart (default):
Guesses the best strategy based on the "renderer" setting.
NOTE:
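As a rough illustration of the recurse strategy, the example above can be reproduced with a small recursive merge. This is a sketch, not Salt's implementation; `recurse_merge` is a hypothetical helper.

```python
def recurse_merge(dest, src):
    """The 'recurse' idea: nested dicts are merged key by key, while
    non-dict values from later sources replace earlier ones."""
    for key, val in src.items():
        if isinstance(val, dict) and isinstance(dest.get(key), dict):
            recurse_merge(dest[key], val)
        else:
            dest[key] = val
    return dest

source1 = {"foo": 42, "bar": {"element1": True}}
source2 = {"bar": {"element2": True}, "baz": "quux"}
print(recurse_merge(source1, source2))
# {'foo': 42, 'bar': {'element1': True, 'element2': True}, 'baz': 'quux'}
```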
pillar_merge_lists¶
New in version 2015.8.0.
Default: False
Recursively merge lists by aggregating them instead of replacing them.
pillar_merge_lists: False
pillar_includes_override_sls¶
New in version 2017.7.6,2018.3.1.
Default: False
Prior to version 2017.7.3, keys from pillar includes would be merged on top of the pillar SLS. Since 2017.7.3, the includes are merged together and then the pillar SLS is merged on top of that.
Set this option to True to return to the old behavior.
pillar_includes_override_sls: True
Pillar Cache Options¶
pillar_cache¶
New in version 2015.8.8.
Default: False
A master can cache pillars locally to bypass the expense of having to render them for each minion on every request. This feature should only be enabled in cases where pillar rendering time is known to be unsatisfactory and any attendant security concerns about storing pillars in a master cache have been addressed.
When enabling this feature, be certain to read through the additional pillar_cache_* configuration options to fully understand the tunable parameters and their implications.
pillar_cache: False
NOTE:
pillar_cache_ttl¶
New in version 2015.8.8.
Default: 3600
If and only if a master has set pillar_cache: True, the cache TTL controls the amount of time, in seconds, before the cache is considered invalid by a master and a fresh pillar is recompiled and stored.
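The TTL behaviour can be pictured as a simple timestamped cache. This is a toy sketch; `PillarCache` here is hypothetical and not how the master actually implements its cache.

```python
import time

class PillarCache:
    """Toy TTL cache: entries older than ttl seconds are recompiled
    on the next request, mirroring the pillar_cache_ttl idea."""

    def __init__(self, ttl, render):
        self.ttl = ttl
        self.render = render     # stand-in for expensive pillar compilation
        self.store = {}          # minion_id -> (timestamp, data)

    def fetch(self, minion_id):
        entry = self.store.get(minion_id)
        if entry and time.monotonic() - entry[0] < self.ttl:
            return entry[1]      # cache hit: skip rendering entirely
        data = self.render(minion_id)
        self.store[minion_id] = (time.monotonic(), data)
        return data

calls = []
cache = PillarCache(ttl=3600, render=lambda m: calls.append(m) or {"id": m})
cache.fetch("web01")
cache.fetch("web01")
print(len(calls))  # 1 -- the second fetch was served from the cache
```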
pillar_cache_backend¶
New in version 2015.8.8.
Default: disk
If and only if a master has set pillar_cache: True, one of several storage providers can be utilized:
- disk (default):
The default storage backend. This caches rendered pillars to the master cache. Rendered pillars are serialized and deserialized as msgpack structures for speed. Note that pillars are stored UNENCRYPTED. Ensure that the master cache has permissions set appropriately (sane defaults are provided).
- memory [EXPERIMENTAL]:
An optional backend for pillar caches which uses a pure-Python in-memory data structure for maximal performance. There are several caveats, however. First, because each master worker contains its own in-memory cache, there is no guarantee of cache consistency between minion requests. This works best in situations where the pillar rarely if ever changes. Secondly, and perhaps more importantly, this means that unencrypted pillars will be accessible to any process which can examine the memory of the salt-master! This may represent a substantial security risk.
pillar_cache_backend: disk
Master Reactor Settings¶
reactor¶
Default: []
Defines a salt reactor. See the Reactor documentation for more information.
reactor:
  - 'salt/minion/*/start':
    - salt://reactor/startup_tasks.sls
reactor_refresh_interval¶
Default: 60
The TTL for the cache of the reactor configuration.
reactor_refresh_interval: 60
reactor_worker_threads¶
Default: 10
The number of workers for the runner/wheel in the reactor.
reactor_worker_threads: 10
reactor_worker_hwm¶
Default: 10000
The queue size for workers in the reactor.
reactor_worker_hwm: 10000
Salt-API Master Settings¶
There are some settings for salt-api that can be configured on the Salt Master.
api_logfile¶
Default: /var/log/salt/api
The logfile location for salt-api.
api_logfile: /var/log/salt/api
api_pidfile¶
Default: /var/run/salt-api.pid
If this master will be running salt-api, specify the pidfile of the salt-api daemon.
api_pidfile: /var/run/salt-api.pid
rest_timeout¶
Default: 300
Used by salt-api for the master requests timeout.
rest_timeout: 300
netapi_enable_clients¶
New in version 3006.0.
Default: []
Used by salt-api to enable access to the listed clients. Unless a client is added to this list, requests will be rejected before authentication is attempted or processing of the low state occurs.
This can be used to only expose the required functionality via salt-api.
Configuration with all possible clients enabled:
netapi_enable_clients:
- local
- local_async
- local_batch
- local_subset
- runner
- runner_async
- ssh
- wheel
- wheel_async
NOTE:
Syndic Server Settings¶
A Salt syndic is a Salt master used to pass commands from a higher Salt master to minions below the syndic. Using the syndic is simple. If this is a master that will have syndic server(s) below it, set the order_masters setting to True.
If this is a master that will be running a syndic daemon for passthrough, the syndic_master setting needs to be set to the location of the master server.
Keep in mind that this means the syndic shares its ID and PKI directory with the local minion.
order_masters¶
Default: False
Extra data needs to be sent with publications if the master is controlling a lower level master via a syndic minion. If this is the case the order_masters value must be set to True
order_masters: False
syndic_master¶
Changed in version 2016.3.5,2016.11.1: Set default higher level master address.
Default: masterofmasters
If this master will be running the salt-syndic to connect to a higher level master, specify the higher level master with this configuration value.
syndic_master: masterofmasters
You can optionally connect a syndic to multiple higher level masters by setting the syndic_master value to a list:
syndic_master:
- masterofmasters1
- masterofmasters2
Each higher level master must be set up in a multi-master configuration.
syndic_master_port¶
Default: 4506
If this master will be running the salt-syndic to connect to a higher level master, specify the higher level master port with this configuration value.
syndic_master_port: 4506
syndic_pidfile¶
Default: /var/run/salt-syndic.pid
If this master will be running the salt-syndic to connect to a higher level master, specify the pidfile of the syndic daemon.
syndic_pidfile: /var/run/syndic.pid
syndic_log_file¶
Default: /var/log/salt/syndic
If this master will be running the salt-syndic to connect to a higher level master, specify the log file of the syndic daemon.
syndic_log_file: /var/log/salt-syndic.log
syndic_failover¶
New in version 2016.3.0.
Default: random
The behaviour of the multi-syndic when the connection to a master of masters fails. Can specify random (default) or ordered. If set to random, masters will be iterated in random order. If ordered is specified, the configured order will be used.
syndic_failover: random
syndic_wait¶
Default: 5
The number of seconds for the salt client to wait for additional syndics to check in with their lists of expected minions before giving up.
syndic_wait: 5
syndic_forward_all_events¶
New in version 2017.7.0.
Default: False
Allows a syndic, whether multi-syndic or single, that is connected to multiple masters to send events to all connected masters.
syndic_forward_all_events: False
Peer Publish Settings¶
Salt minions can send commands to other minions, but only if the minion is allowed to. By default "Peer Publication" is disabled, and when enabled it is enabled for specific minions and specific commands. This allows secure compartmentalization of commands based on individual minions.
peer¶
Default: {}
The configuration uses regular expressions to match minions and then a list of regular expressions to match functions. The following will allow the minion authenticated as foo.example.com to execute functions from the test and pkg modules.
peer:
foo.example.com:
- test.*
- pkg.*
This will allow all minions to execute all commands:
peer:
.*:
- .*
This is not recommended, since it would allow anyone who gets root on any single minion to instantly have root on all of the minions!
By adding an additional layer you can limit the target hosts in addition to the accessible commands:
peer:
foo.example.com:
'db*':
- test.*
- pkg.*
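The matching logic for the flat form can be sketched as follows. This is an illustration only, not Salt's actual implementation: the minion ID is matched against each configured regular expression, and the requested function against that entry's list of function patterns.

```python
import re

def peer_allowed(peer_conf, minion_id, fn):
    # Sketch: find an entry whose minion-ID regex matches, then check
    # the requested function against that entry's function regexes.
    for id_pattern, fn_patterns in peer_conf.items():
        if re.match(id_pattern, minion_id):
            if any(re.match(p, fn) for p in fn_patterns):
                return True
    return False

conf = {"foo.example.com": ["test.*", "pkg.*"]}
print(peer_allowed(conf, "foo.example.com", "test.ping"))  # True
print(peer_allowed(conf, "bar.example.com", "test.ping"))  # False
```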
peer_run¶
Default: {}
The peer_run option is used to open up runners on the master to access from the minions. The peer_run configuration matches the format of the peer configuration.
The following example would allow foo.example.com to execute the manage.up runner:
peer_run:
foo.example.com:
- manage.up
Master Logging Settings¶
log_file¶
Default: /var/log/salt/master
The master log can be sent to a regular file, local path name, or network location. See also log_file.
Examples:
log_file: /var/log/salt/master
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level¶
Default: warning
The level of messages to send to the console. See also log_level.
log_level: warning
log_level_logfile¶
Default: warning
The level of messages to send to the log file. See also log_level_logfile. When it is not set explicitly it will inherit the level set by log_level option.
log_level_logfile: warning
log_datefmt¶
Default: %H:%M:%S
The date and time format used in console log messages. See also log_datefmt.
log_datefmt: '%H:%M:%S'
log_datefmt_logfile¶
Default: %Y-%m-%d %H:%M:%S
The date and time format used in log file messages. See also log_datefmt_logfile.
log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console¶
Default: [%(levelname)-8s] %(message)s
The format of the console logging messages. See also log_fmt_console.
NOTE:
Console log colors are specified by these additional formatters:
%(colorlevel)s %(colorname)s %(colorprocess)s %(colormsg)s
Since it is desirable to include the surrounding brackets, '[' and ']', in the coloring of the messages, these color formatters also include padding as well. Color LogRecord attributes are only available for console logging.
log_fmt_console: '%(colorlevel)s %(colormsg)s'
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile¶
Default: %(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s
The format of the log file logging messages. See also log_fmt_logfile.
log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'
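The format strings above use the standard %-style logging attributes, so the default log file format can be reproduced with the Python standard library. A minimal sketch (logger name and message are examples):

```python
import logging
import sys

# Reproduce the default log file format with the stdlib logging module,
# using the same %-style attributes and date format shown above.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    "%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
))
log = logging.getLogger("salt.master")
log.setLevel(logging.WARNING)
log.addHandler(handler)
log.warning("example message")
```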
log_granular_levels¶
Default: {}
This can be used to control logging levels more specifically. See also log_granular_levels.
log_rotate_max_bytes¶
Default: 0
The maximum number of bytes a single log file may contain before it is rotated. A value of 0 disables this feature. Currently only supported on Windows. On other platforms, use an external tool such as 'logrotate' to manage log files. See also log_rotate_max_bytes.
log_rotate_backup_count¶
Default: 0
The number of backup files to keep when rotating log files. Only used if log_rotate_max_bytes is greater than 0. Currently only supported on Windows. On other platforms, use an external tool such as 'logrotate' to manage log files. See also log_rotate_backup_count.
Node Groups¶
nodegroups¶
Default: {}
Node groups allow for logical groupings of minion nodes. A group consists of a group name and a compound target.
nodegroups:
group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
group2: 'G@os:Debian and foo.domain.com'
group3: 'G@os:Debian and N@group1'
group4:
- 'G@foo:bar'
- 'or'
- 'G@foo:baz'
More information on using nodegroups can be found here.
Range Cluster Settings¶
range_server¶
Default: 'range:80'
The range server (and optional port) that serves your cluster information https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
range_server: range:80
Include Configuration¶
Configuration can be loaded from multiple files. The order in which this is done is:
1. The master config file itself
2. The files matching the glob in default_include
3. The files matching the glob in include (if defined)
Each successive step overrides any values defined in the previous steps. Therefore, any config options defined in one of the default_include files would override the same value in the master config file, and any options defined in include would override both.
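The override order can be sketched as a series of dictionary merges where later sources win. The option names below are real master options; the values are examples only:

```python
# Later configuration sources replace values from earlier ones.
base = {"interface": "0.0.0.0", "ret_port": 4506}   # master config file
default_include = {"ret_port": 4507}                # from master.d/*.conf
include = {"interface": "127.0.0.1"}                # from include

conf = {**base, **default_include, **include}
print(conf)  # {'interface': '127.0.0.1', 'ret_port': 4507}
```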
default_include¶
Default: master.d/*.conf
The master can include configuration from other files. Per default the master will automatically include all config files from master.d/*.conf where master.d is relative to the directory of the master configuration file.
include¶
Default: not defined
The master can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main minion configuration file lives in. Paths can make use of shell-style globbing. If no files are matched by a path passed to this option then the master will log a warning message.
# Include files from a master.d directory in the same
# directory as the master config file
include: master.d/*

# Include a single extra file into the configuration
include: /etc/roles/webserver

# Include several files and the master.d directory
include:
- extra_config
- master.d/*
- /etc/roles/webserver
Keepalive Settings¶
tcp_keepalive¶
Default: True
The tcp keepalive interval to set on TCP ports. This setting can be used to tune Salt connectivity issues in messy network environments with misbehaving firewalls.
tcp_keepalive: True
tcp_keepalive_cnt¶
Default: -1
Sets the ZeroMQ TCP keepalive count. May be used to tune issues with minion disconnects.
tcp_keepalive_cnt: -1
tcp_keepalive_idle¶
Default: 300
Sets ZeroMQ TCP keepalive idle. May be used to tune issues with minion disconnects.
tcp_keepalive_idle: 300
tcp_keepalive_intvl¶
Default: -1
Sets ZeroMQ TCP keepalive interval. May be used to tune issues with minion disconnects.
tcp_keepalive_intvl: -1
Windows Software Repo Settings¶
winrepo_provider¶
New in version 2015.8.0.
Specify the provider to be used for winrepo. Must be either pygit2 or gitpython. If unset, then both will be tried in that same order, and the first one with a compatible version installed will be the provider that is used.
winrepo_provider: gitpython
winrepo_dir¶
Changed in version 2015.8.0: Renamed from win_repo to winrepo_dir.
Default: /srv/salt/win/repo
Location on the master where the winrepo_remotes are checked out for pre-2015.8.0 minions. 2015.8.0 and later minions use winrepo_remotes_ng instead.
winrepo_dir: /srv/salt/win/repo
winrepo_dir_ng¶
New in version 2015.8.0: A new ng repo was added.
Default: /srv/salt/win/repo-ng
Location on the master where the winrepo_remotes_ng are checked out for 2015.8.0 and later minions.
winrepo_dir_ng: /srv/salt/win/repo-ng
winrepo_cachefile¶
Changed in version 2015.8.0: Renamed from win_repo_mastercachefile to winrepo_cachefile
NOTE:
Default: winrepo.p
Path relative to winrepo_dir where the winrepo cache should be created.
winrepo_cachefile: winrepo.p
winrepo_remotes¶
Changed in version 2015.8.0: Renamed from win_gitrepos to winrepo_remotes.
Default: ['https://github.com/saltstack/salt-winrepo.git']
List of git repositories to checkout and include in the winrepo for pre-2015.8.0 minions. 2015.8.0 and later minions use winrepo_remotes_ng instead.
winrepo_remotes:
- https://github.com/saltstack/salt-winrepo.git
To specify a specific revision of the repository, prepend a commit ID to the URL of the repository:
winrepo_remotes:
- '<commit_id> https://github.com/saltstack/salt-winrepo.git'
Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in that it allows one to revert back to a previous version in the event that an error is introduced in the latest revision of the repo.
winrepo_remotes_ng¶
New in version 2015.8.0: A new ng repo was added.
Default: ['https://github.com/saltstack/salt-winrepo-ng.git']
List of git repositories to checkout and include in the winrepo for 2015.8.0 and later minions.
winrepo_remotes_ng:
- https://github.com/saltstack/salt-winrepo-ng.git
To specify a specific revision of the repository, prepend a commit ID to the URL of the repository:
winrepo_remotes_ng:
- '<commit_id> https://github.com/saltstack/salt-winrepo-ng.git'
Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in that it allows one to revert back to a previous version in the event that an error is introduced in the latest revision of the repo.
winrepo_branch¶
New in version 2015.8.0.
Default: master
If the branch is omitted from a winrepo remote, then this branch will be used instead. For example, in the configuration below, the first two remotes would use the winrepo branch/tag, while the third would use the foo branch/tag.
winrepo_branch: winrepo

winrepo_remotes:
- https://mygitserver/winrepo1.git
- https://mygitserver/winrepo2.git
- foo https://mygitserver/winrepo3.git
winrepo_ssl_verify¶
New in version 2015.8.0.
Changed in version 2016.11.0.
Default: True
Specifies whether or not to ignore SSL certificate errors when contacting the remote repository. The False setting is useful if you're using a git repo that uses a self-signed certificate. However, keep in mind that setting this to anything other than True is considered insecure, and using an SSH-based transport (if available) may be a better option.
In the 2016.11.0 release, the default config value changed from False to True.
winrepo_ssl_verify: True
Winrepo Authentication Options¶
These parameters only currently apply to the pygit2 winrepo_provider. Authentication works the same as it does in gitfs, as outlined in the GitFS Walkthrough, though the global configuration options are named differently to reflect that they are for winrepo instead of gitfs.
winrepo_user¶
New in version 2015.8.0.
Default: ''
Along with winrepo_password, is used to authenticate to HTTPS remotes.
winrepo_user: git
winrepo_password¶
New in version 2015.8.0.
Default: ''
Along with winrepo_user, is used to authenticate to HTTPS remotes. This parameter is not required if the repository does not use authentication.
winrepo_password: mypassword
winrepo_insecure_auth¶
New in version 2015.8.0.
Default: False
By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This parameter enables authentication over HTTP. Enable this at your own risk.
winrepo_insecure_auth: True
winrepo_pubkey¶
New in version 2015.8.0.
Default: ''
Along with winrepo_privkey (and optionally winrepo_passphrase), is used to authenticate to SSH remotes.
winrepo_pubkey: /path/to/key.pub
winrepo_privkey¶
New in version 2015.8.0.
Default: ''
Along with winrepo_pubkey (and optionally winrepo_passphrase), is used to authenticate to SSH remotes.
winrepo_privkey: /path/to/key
winrepo_passphrase¶
New in version 2015.8.0.
Default: ''
This parameter is optional, required only when the SSH key being used to authenticate is protected by a passphrase.
winrepo_passphrase: mypassphrase
winrepo_refspecs¶
New in version 2017.7.0.
Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*']
When fetching from remote repositories, by default Salt will fetch branches and tags. This parameter can be used to override the default and specify alternate refspecs to be fetched. This parameter works similarly to its GitFS counterpart, in that it can be configured both globally and for individual remotes.
winrepo_refspecs:
- '+refs/heads/*:refs/remotes/origin/*'
- '+refs/tags/*:refs/tags/*'
- '+refs/pull/*/head:refs/remotes/origin/pr/*'
- '+refs/pull/*/merge:refs/remotes/origin/merge/*'
Configure Master on Windows¶
The master on Windows requires no additional configuration. You can modify the master configuration by creating/editing the master config file located at c:\salt\conf\master. The same configuration options available on Linux are available in Windows, as long as they apply. For example, SSH options wouldn't apply in Windows. The main differences are the file paths. If you are familiar with common salt paths, the following table may be useful:
Linux Paths | | Windows Paths |
/etc/salt | <---> | c:\salt\conf |
/ | <---> | c:\salt |
So, for example, the master config file in Linux is /etc/salt/master. In Windows the master config file is c:\salt\conf\master. The Linux path /etc/salt becomes c:\salt\conf in Windows.
Common File Locations¶
Linux Paths | Windows Paths |
conf_file: /etc/salt/master | conf_file: c:\salt\conf\master |
log_file: /var/log/salt/master | log_file: c:\salt\var\log\salt\master |
pidfile: /var/run/salt-master.pid | pidfile: c:\salt\var\run\salt-master.pid |
Common Directories¶
Linux Paths | Windows Paths |
cachedir: /var/cache/salt/master | cachedir: c:\salt\var\cache\salt\master |
extension_modules: /var/cache/salt/master/extmods | extension_modules: c:\salt\var\cache\salt\master\extmods |
pki_dir: /etc/salt/pki/master | pki_dir: c:\salt\conf\pki\master |
root_dir: / | root_dir: c:\salt |
sock_dir: /var/run/salt/master | sock_dir: c:\salt\var\run\salt\master |
Roots¶
file_roots
Linux Paths | Windows Paths |
/srv/salt | c:\salt\srv\salt |
/srv/spm/salt | c:\salt\srv\spm\salt |
pillar_roots
Linux Paths | Windows Paths |
/srv/pillar | c:\salt\srv\pillar |
/srv/spm/pillar | c:\salt\srv\spm\pillar |
Win Repo Settings¶
Linux Paths | Windows Paths |
winrepo_dir: /srv/salt/win/repo | winrepo_dir: c:\salt\srv\salt\win\repo |
winrepo_dir_ng: /srv/salt/win/repo-ng | winrepo_dir_ng: c:\salt\srv\salt\win\repo-ng |
Configuring the Salt Minion¶
The Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.
SEE ALSO:
The Salt Minion configuration is very simple. Typically, the only value that needs to be set is the master value so the minion knows where to locate its master.
By default, the salt-minion configuration will be in /etc/salt/minion. A notable exception is FreeBSD, where the configuration will be in /usr/local/etc/salt/minion.
Minion Primary Configuration¶
master¶
Default: salt
The hostname or IP address of the master. See ipv6 for IPv6 connections to the master.
master: salt
master:port Syntax¶
New in version 2015.8.0.
The master config option can also be set to use the master's IP in conjunction with a port number.
master: localhost:1234
For IPv6 formatting with a port, remember to add brackets around the IP address before adding the port and enclose the line in single quotes to make it a string:
master: '[2001:db8:85a3:8d3:1319:8a2e:370:7348]:1234'
List of Masters Syntax¶
The option can also be set to a list of masters, enabling multi-master mode.
master:
- address1
- address2
Changed in version 2014.7.0: The master can be dynamically configured. The master value can be set to a module function which will be executed, and the returned value will be used as the IP or hostname of the desired master. If a function is specified, then the master_type option must be set to func, to tell the minion that the value is a function to be run and not a fully-qualified domain name.
master: module.function
master_type: func
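A custom execution module used this way only needs to return the master's address. A minimal sketch, where the module path, function name, and return value are all hypothetical:

```python
# Hypothetical custom execution module, e.g. saved as
# /srv/salt/_modules/mymaster.py. When master_type is set to func and
# master is set to mymaster.get_master, the returned string becomes
# the master address. Any logic (service registry lookup, etc.) could
# go here instead of a constant.
def get_master():
    return "master1.example.com"
```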
In addition, instead of using multi-master mode, the minion can be configured to use the list of master addresses as a failover list, trying the first address, then the second, etc. until the minion successfully connects. To enable this behavior, set master_type to failover:
master:
- address1
- address2
master_type: failover
color¶
Default: True
By default output is colored. To disable colored output, set the color value to False.
ipv6¶
Default: None
Whether the master should be connected over IPv6. By default salt minion will try to automatically detect IPv6 connectivity to master.
ipv6: True
master_uri_format¶
New in version 2015.8.0.
Specify the format in which the master address will be evaluated. Valid options are default or ip_only. If ip_only is specified, then the master address will not be split into IP and PORT, so be sure that only an IP (or domain name) is set in the master configuration setting.
master_uri_format: ip_only
master_tops_first¶
New in version 2018.3.0.
Default: False
SLS targets defined using the Master Tops system are normally executed after any matches defined in the Top File. Set this option to True to have the minion execute the Master Tops states first.
master_tops_first: True
master_type¶
New in version 2014.7.0.
Default: str
The type of the master variable. Can be str, failover, func or disable.
master_type: str
If this option is str (the default), multiple hot masters can be configured. Minions can connect to multiple masters simultaneously (all masters are "hot").
master_type: failover
If this option is set to failover, master must be a list of master addresses. The minion will then try each master in the order specified in the list until it successfully connects. master_alive_interval must also be set, this determines how often the minion will verify the presence of the master.
master_type: func
If the master needs to be dynamically assigned by executing a function instead of reading in the static master value, set this to func. This can be used to manage the minion's master setting from an execution module. Simply change the module's logic to return a new master IP or FQDN, restart the minion, and it will connect to the new master.
As of version 2016.11.0 this option can be set to disable and the minion will never attempt to talk to the master. This is useful for running a masterless minion daemon.
master_type: disable
max_event_size¶
New in version 2014.7.0.
Default: 1048576
Passing very large events can cause the minion to consume large amounts of memory. This value tunes the maximum size of a message allowed onto the minion event bus. The value is expressed in bytes.
max_event_size: 1048576
enable_legacy_startup_events¶
New in version 2019.2.0.
Default: True
When a minion starts up it sends a notification on the event bus with a tag that looks like this: salt/minion/<minion_id>/start. For historical reasons the minion also sends a similar event with an event tag like this: minion_start. This duplication can cause a lot of clutter on the event bus when there are many minions. Set enable_legacy_startup_events: False in the minion config to ensure only the salt/minion/<minion_id>/start events are sent. Beginning with the 3001 Salt release this option will default to False.
enable_legacy_startup_events: True
master_failback¶
New in version 2016.3.0.
Default: False
If the minion is in multi-master mode and the master_type configuration option is set to failover, this setting can be set to True to force the minion to fail back to the first master in the list if the first master is back online.
master_failback: False
master_failback_interval¶
New in version 2016.3.0.
Default: 0
If the minion is in multi-master mode, the master_type configuration is set to failover, and the master_failback option is enabled, the master failback interval can be set to ping the top master with this interval, in seconds.
master_failback_interval: 0
master_alive_interval¶
Default: 0
Configures how often, in seconds, the minion will verify that the current master is alive and responding. The minion will try to establish a connection to the next master in the list if it finds the existing one is dead.
master_alive_interval: 30
master_shuffle¶
New in version 2014.7.0.
Deprecated since version 2019.2.0.
Default: False
WARNING:
This option has been deprecated in Salt 2019.2.0. Please use random_master instead.
master_shuffle: True
random_master¶
New in version 2014.7.0.
Changed in version 2019.2.0: The master_failback option can be used in conjunction with random_master to force the minion to fail back to the first master in the list if the first master is back online. Note that master_type must be set to failover in order for the master_failback setting to work.
Default: False
If master is a list of addresses, shuffle them before trying to connect to distribute the minions over all available masters. This uses Python's random.shuffle method.
If multiple masters are specified in the 'master' setting as a list, the default behavior is to always try to connect to them in the order they are listed. If random_master is set to True, the order will be randomized instead upon Minion startup. This can be helpful in distributing the load of many minions executing salt-call requests, for example, from a cron job. If only one master is listed, this setting is ignored and a warning is logged.
random_master: True
retry_dns¶
Default: 30
Set the number of seconds to wait before attempting to resolve the master hostname if name resolution fails. Defaults to 30 seconds. Set to zero if the minion should shut down and not retry.
retry_dns: 30
retry_dns_count¶
New in version 2018.3.4.
Default: None
Set the number of attempts to perform when resolving the master hostname if name resolution fails. By default the minion will retry indefinitely.
retry_dns_count: 3
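The interaction of retry_dns and retry_dns_count can be sketched as follows. This is an illustration of the behavior described above, not Salt's actual code:

```python
import socket
import time

def resolve_master(host, retry_dns=30, retry_dns_count=None):
    # Sketch: retry every retry_dns seconds, up to retry_dns_count
    # attempts (None means retry indefinitely); a retry_dns of 0 means
    # give up after the first failure.
    attempts = 0
    while True:
        try:
            return socket.gethostbyname(host)
        except socket.gaierror:
            attempts += 1
            if retry_dns == 0:
                raise
            if retry_dns_count is not None and attempts >= retry_dns_count:
                raise
            time.sleep(retry_dns)
```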
master_port¶
Default: 4506
The port of the master ret server. This needs to coincide with the ret_port option on the Salt master.
master_port: 4506
publish_port¶
Default: 4505
The port of the master publish server. This needs to coincide with the publish_port option on the Salt master.
publish_port: 4505
source_interface_name¶
New in version 2018.3.0.
The name of the interface to use when establishing the connection to the Master.
WARNING:
- zeromq requires pyzmq >= 16.0.1 and libzmq >= 4.1.6
- tcp requires tornado >= 4.5
Configuration example:
source_interface_name: bond0.1234
source_address¶
New in version 2018.3.0.
The source IP address or the domain name to be used when connecting the Minion to the Master. See ipv6 for IPv6 connections to the Master.
WARNING:
- zeromq requires pyzmq >= 16.0.1 and libzmq >= 4.1.6
- tcp requires tornado >= 4.5
Configuration example:
source_address: if-bond0-1234.sjc.us-west.internal
source_ret_port¶
New in version 2018.3.0.
The source port to be used when connecting the Minion to the Master ret server.
WARNING:
- zeromq requires pyzmq >= 16.0.1 and libzmq >= 4.1.6
- tcp requires tornado >= 4.5
Configuration example:
source_ret_port: 49017
source_publish_port¶
New in version 2018.3.0.
The source port to be used when connecting the Minion to the Master publish server.
WARNING:
- zeromq requires pyzmq >= 16.0.1 and libzmq >= 4.1.6
- tcp requires tornado >= 4.5
Configuration example:
source_publish_port: 49018
user¶
Default: root
The user to run the Salt processes as.
user: root
sudo_user¶
Default: ''
The user to run salt remote execution commands as via sudo. If this option is enabled, sudo will be used to change the active user executing the remote command. The user the salt minion runs as will need to be allowed access via the sudoers file; the most common option is the root user. If this option is set, the user option should also be set to a non-root user. When migrating from a root minion to a non-root minion, the minion cache should be cleared and the ownership of the minion PKI directory changed to the new user.
sudo_user: root
pidfile¶
Default: /var/run/salt-minion.pid
The location of the daemon's process ID file.
pidfile: /var/run/salt-minion.pid
root_dir¶
Default: /
This directory is prepended to the following options: pki_dir, cachedir, log_file, sock_dir, and pidfile.
root_dir: /
conf_file¶
Default: /etc/salt/minion
The path to the minion's configuration file.
conf_file: /etc/salt/minion
pki_dir¶
Default: <LIB_STATE_DIR>/pki/minion
The directory used to store the minion's public and private keys.
<LIB_STATE_DIR> is the pre-configured variable state directory set during installation via --salt-lib-state-dir. It defaults to /etc/salt. Systems following the Filesystem Hierarchy Standard (FHS) might set it to /var/lib/salt.
pki_dir: /etc/salt/pki/minion
id¶
Default: the system's hostname
SEE ALSO:
The Setting up a Salt Minion section contains detailed information on how the hostname is determined.
Explicitly declare the id for this minion to use. Since Salt uses detached ids it is possible to run multiple minions on the same machine but with different ids.
id: foo.bar.com
minion_id_caching¶
New in version 0.17.2.
Default: True
Caches the minion id to a file when the minion's id is not statically defined in the minion config. This setting prevents potential problems when automatic minion id resolution changes, which can cause the minion to lose connection with the master. To turn off minion id caching, set this config to False.
For more information, please see Issue #7558 and Pull Request #8488.
minion_id_caching: True
append_domain¶
Default: None
Append a domain to a hostname in the event that it does not exist. This is useful for systems where socket.getfqdn() does not actually result in a FQDN (for instance, Solaris).
append_domain: foo.org
minion_id_remove_domain¶
New in version 3000.
Default: False
Remove a domain when the minion id is generated as a fully qualified domain name (either by the user-provided id_function, or by Salt). This is useful when minions should be named after their short hostnames. Can be a single domain (to prevent name clashes), or True, to remove all domains.
- minion_id_remove_domain = foo.org
  - FQDN = king_bob.foo.org --> minion_id = king_bob
  - FQDN = king_bob.bar.org --> minion_id = king_bob.bar.org
- minion_id_remove_domain = True
  - FQDN = king_bob.foo.org --> minion_id = king_bob
  - FQDN = king_bob.bar.org --> minion_id = king_bob
For more information, please see issue 49212 and PR 49378.
minion_id_remove_domain: foo.org
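The behavior in the examples above can be sketched in a few lines. This is an illustration only, not Salt's actual implementation:

```python
def remove_domain(fqdn, minion_id_remove_domain):
    # Sketch: strip the domain only when it matches the configured
    # value, or always when the option is True.
    host, _, domain = fqdn.partition(".")
    if minion_id_remove_domain is True or domain == minion_id_remove_domain:
        return host
    return fqdn

print(remove_domain("king_bob.foo.org", "foo.org"))  # king_bob
print(remove_domain("king_bob.bar.org", "foo.org"))  # king_bob.bar.org
print(remove_domain("king_bob.bar.org", True))       # king_bob
```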
minion_id_lowercase¶
Default: False
Convert the minion id to lowercase when it is being generated. Helpful when some hosts get the minion id in uppercase. Cached ids will remain the same and will not be converted.
minion_id_lowercase: True
cachedir¶
Default: /var/cache/salt/minion
The location for minion cache data.
This directory may contain sensitive data and should be protected accordingly.
cachedir: /var/cache/salt/minion
color_theme¶
Default: ""
Specifies a path to the color theme to use for colored command line output.
color_theme: /etc/salt/color_theme
append_minionid_config_dirs¶
Default: [] (the empty list) for regular minions, ['cachedir'] for proxy minions.
Append minion_id to these configuration directories. Helps with multiple proxies and minions running on the same machine. Allowed elements in the list: pki_dir, cachedir, extension_modules. Normally not needed unless running several proxies and/or minions on the same machine.
append_minionid_config_dirs:
- pki_dir
- cachedir
verify_env¶
Default: True
Verify and set permissions on configuration directories at startup.
verify_env: True
cache_jobs¶
Default: False
The minion can locally cache the return data from jobs sent to it; this can be a good way to keep track of the minion-side results of the jobs the minion has executed. By default this feature is disabled; to enable it, set cache_jobs to True.
cache_jobs: False
grains¶
Default: (empty)
SEE ALSO:
Statically assigns grains to the minion.
grains:
roles:
- webserver
- memcache
deployment: datacenter4
cabinet: 13
cab_u: 14-15
grains_blacklist¶
Default: []
Each grains key will be compared against each of the expressions in this list. Any keys which match will be filtered from the grains. Exact matches, glob matches, and regular expressions are supported.
New in version 3000.
grains_blacklist:
- cpu_flags
- zmq*
- ipv[46]
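How a key can be filtered by an exact, glob, or regular-expression pattern can be sketched with the standard library. This is an illustration of the matching described above, not Salt's actual implementation:

```python
import fnmatch
import re

def blacklisted(key, grains_blacklist):
    # Sketch: an exact match, a glob match, or a full regex match on
    # any configured pattern filters the grain key.
    for pattern in grains_blacklist:
        if key == pattern:
            return True
        if fnmatch.fnmatch(key, pattern):
            return True
        if re.fullmatch(pattern, key):
            return True
    return False

patterns = ["cpu_flags", "zmq*", "ipv[46]"]
print(blacklisted("zmq_version", patterns))  # True
print(blacklisted("ipv4", patterns))         # True
print(blacklisted("os", patterns))           # False
```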
grains_cache¶
Default: False
The minion can locally cache grain data instead of refreshing the data each time the grain is referenced. By default this feature is disabled, to enable set grains_cache to True.
grains_cache: False
grains_cache_expiration¶
Default: 300
Grains cache expiration, in seconds. If the cache file is older than this number of seconds then the grains cache will be dumped and fully re-populated with fresh data. Defaults to 5 minutes. Will have no effect if grains_cache is not enabled.
grains_cache_expiration: 300
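The expiration check amounts to comparing the cache file's age against this setting. A sketch of the behavior described above (not Salt's actual code):

```python
import os
import time

def grains_cache_valid(path, grains_cache_expiration=300):
    # Sketch: the grains cache file is treated as stale (and would be
    # re-populated) once it is older than grains_cache_expiration
    # seconds, or if it is missing.
    try:
        age = time.time() - os.path.getmtime(path)
    except OSError:
        return False
    return age < grains_cache_expiration
```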
grains_deep_merge¶
New in version 2016.3.0.
Default: False
The grains can be merged, instead of overridden, using this option. This allows custom grains to define different subvalues of a dictionary grain. By default this feature is disabled; to enable it, set grains_deep_merge to True.
grains_deep_merge: False
For example, with these custom grains functions:
def custom1_k1():
    return {"custom1": {"k1": "v1"}}

def custom1_k2():
    return {"custom1": {"k2": "v2"}}
Without grains_deep_merge, the result would be:
custom1:
k1: v1
With grains_deep_merge, the result will be:
custom1:
k1: v1
k2: v2
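The merge behind this result can be sketched as a recursive dictionary merge. This is an illustration only, not Salt's implementation:

```python
def deep_merge(dest, src):
    # Sketch of what grains_deep_merge does: nested dictionaries are
    # merged key by key instead of being replaced wholesale.
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dest.get(key), dict):
            deep_merge(dest[key], value)
        else:
            dest[key] = value
    return dest

print(deep_merge({"custom1": {"k1": "v1"}}, {"custom1": {"k2": "v2"}}))
# {'custom1': {'k1': 'v1', 'k2': 'v2'}}
```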
grains_refresh_every¶
Default: 0
The grains_refresh_every setting allows for a minion to periodically check its grains to see if they have changed and, if so, to inform the master of the new grains. This operation is moderately expensive, therefore care should be taken not to set this value too low.
Note: This value is expressed in minutes.
A value of 10 minutes is a reasonable default.
grains_refresh_every: 0
grains_refresh_pre_exec¶
New in version 3005.
Default: False
The grains_refresh_pre_exec setting allows for a minion to check its grains prior to the execution of any operation to see if they have changed and, if so, to inform the master of the new grains. This operation is moderately expensive, therefore care should be taken before enabling this behavior.
grains_refresh_pre_exec: True
metadata_server_grains¶
New in version 2017.7.0.
Default: False
Set this option to enable gathering of cloud metadata from http://169.254.169.254/latest for use in grains (see here for more information).
metadata_server_grains: True
fibre_channel_grains¶
Default: False
The fibre_channel_grains setting will enable the fc_wwn grain for Fibre Channel WWNs on the minion. Since this grain is expensive, it is disabled by default.
fibre_channel_grains: True
iscsi_grains¶
Default: False
The iscsi_grains setting will enable the iscsi_iqn grain on the minion. Since this grain is expensive, it is disabled by default.
iscsi_grains: True
nvme_grains¶
Default: False
The nvme_grains setting will enable the nvme_nqn grain on the minion. Since this grain is expensive, it is disabled by default.
nvme_grains: True
mine_enabled¶
New in version 2015.8.10.
Default: True
Determines whether or not the salt minion should run scheduled mine updates. If this is set to False then the mine update function will not get added to the scheduler for the minion.
mine_enabled: True
mine_return_job¶
New in version 2015.8.10.
Default: False
Determines whether or not scheduled mine updates should be accompanied by a job return for the job cache.
mine_return_job: False
mine_functions¶
Default: Empty
Designate which functions should be executed at mine_interval intervals on each minion. See this documentation on the Salt Mine for more information. Note these can be defined in the pillar for a minion as well.
mine_functions:
test.ping: []
network.ip_addrs:
interface: eth0
cidr: '10.0.0.0/8'
mine_interval¶
Default: 60
The number of minutes between mine updates.
mine_interval: 60
sock_dir¶
Default: /var/run/salt/minion
The directory where Unix sockets will be kept.
sock_dir: /var/run/salt/minion
enable_fqdns_grains¶
Default: True
In order to calculate the fqdns grain, all the IP addresses from the minion are processed with underlying calls to socket.gethostbyaddr, each of which can take 5 seconds to be released (after reaching socket.timeout) when there is no FQDN for that IP. These calls to socket.gethostbyaddr are processed asynchronously; however, they still add 5 seconds every time grains are generated if an IP does not resolve. On Windows, grains are regenerated each time a new process is spawned, so the default for Windows is False. For proxy minions this value rarely makes sense to include, as it would be the FQDN of the host running the proxy minion process, so the default for proxy minions is False. On macOS, FQDN resolution can be very slow, so the default for macOS is False as well. All other OSes default to True.
enable_fqdns_grains: False
enable_gpu_grains¶
Default: True
Enable GPU hardware data for your master. Be aware that the minion can take a while to start up when lspci and/or dmidecode is used to populate the grains for the minion, so this can be set to False if you do not need these grains.
enable_gpu_grains: False
outputter_dirs¶
Default: []
A list of additional directories to search for salt outputters in.
outputter_dirs: []
backup_mode¶
Default: ''
Make backups of files replaced by file.managed and file.recurse state modules under cachedir in file_backup subdirectory preserving original paths. Refer to File State Backups documentation for more details.
backup_mode: minion
acceptance_wait_time¶
Default: 10
The number of seconds to wait until attempting to re-authenticate with the master.
acceptance_wait_time: 10
acceptance_wait_time_max¶
Default: 0
The maximum number of seconds to wait until attempting to re-authenticate with the master. If set, the wait will increase by acceptance_wait_time seconds each iteration.
acceptance_wait_time_max: 0
rejected_retry¶
Default: False
If the master rejects the minion's public key, retry instead of exiting. Rejected keys will be handled the same as waiting on acceptance.
rejected_retry: False
random_reauth_delay¶
Default: 10
When the master key changes, the minion will try to re-auth itself to receive the new master key. In larger environments this can cause a syn-flood on the master because all minions try to re-auth immediately. To prevent this and have a minion wait for a random amount of time, use this optional parameter. The wait-time will be a random number of seconds between 0 and the defined value.
random_reauth_delay: 60
master_tries¶
New in version 2016.3.0.
Default: 1
The number of attempts to connect to a master before giving up. Set this to -1 for unlimited attempts. This allows for a master to have downtime and the minion to reconnect to it later when it comes back up. In 'failover' mode, which is set in the master_type configuration, this value is the number of attempts for each set of masters. In this mode, it will cycle through the list of masters for each attempt.
master_tries is different from auth_tries: auth_tries retries authentication attempts with a single master, under the assumption that you can connect to the master but not gain authorization from it. master_tries will still cycle through all of the masters in a given try, so it is appropriate if you expect occasional downtime from the master(s).
master_tries: 1
auth_tries¶
New in version 2014.7.0.
Default: 7
The number of attempts to authenticate to a master before giving up. Or, more technically, the number of consecutive SaltReqTimeoutErrors that are acceptable when trying to authenticate to the master.
auth_tries: 7
auth_timeout¶
New in version 2014.7.0.
Default: 5
When waiting for a master to accept the minion's public key, salt will continuously attempt to reconnect until successful. This is the timeout value, in seconds, for each individual attempt. After this timeout expires, the minion will wait for acceptance_wait_time seconds before trying again. Unless your master is under unusually heavy load, this should be left at the default.
auth_timeout: 5
auth_safemode¶
New in version 2014.7.0.
Default: False
If authentication fails due to SaltReqTimeoutError during a ping_interval, this setting, when set to True, will cause a sub-minion process to restart.
auth_safemode: False
ping_interval¶
Default: 0
Instructs the minion to ping its master(s) every n number of minutes. Used primarily as a mitigation technique against minion disconnects.
ping_interval: 0
random_startup_delay¶
Default: 0
The maximum bound for an interval in which a minion will randomly sleep upon starting up prior to attempting to connect to a master. This can be used to splay connection attempts for cases where many minions starting up at once may place undue load on a master.
For example, setting this to 5 will tell a minion to sleep for a value between 0 and 5 seconds.
random_startup_delay: 5
recon_default¶
Default: 1000
The interval in milliseconds that the socket should wait before trying to reconnect to the master (1000ms = 1 second).
recon_default: 1000
recon_max¶
Default: 10000
The maximum time a socket should wait. Each interval the time to wait is calculated by doubling the previous time. If recon_max is reached, it starts again at the recon_default.
- reconnect 1: the socket will wait 'recon_default' milliseconds
- reconnect 2: 'recon_default' * 2
- reconnect 3: ('recon_default' * 2) * 2
- reconnect 4: value from previous interval * 2
- reconnect 5: value from previous interval * 2
- reconnect x: if value >= recon_max, it starts again with recon_default
recon_max: 10000
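The doubling schedule above can be sketched as follows (a minimal illustration of the described behavior, not Salt's actual implementation):

```python
def reconnect_waits(recon_default, recon_max, attempts):
    """Yield the wait (in milliseconds) before each reconnect
    attempt: double the previous wait each time, and wrap back
    to recon_default once recon_max is reached."""
    wait = recon_default
    for _ in range(attempts):
        yield wait
        wait *= 2
        if wait >= recon_max:
            wait = recon_default

# With the defaults (1000 ms / 10000 ms) the first attempts wait:
print(list(reconnect_waits(1000, 10000, 6)))
# [1000, 2000, 4000, 8000, 1000, 2000]
```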
recon_randomize¶
Default: True
Generate a random wait time on minion start. The wait time will be a random value between recon_default and recon_default + recon_max. Having all minions reconnect with the same recon_default and recon_max values would largely defeat the purpose of these settings: if all minions share the same values and the setup is quite large (several thousand minions), they will still flood the master. The desired behavior is to have a time frame within which all minions try to reconnect.
recon_randomize: True
loop_interval¶
Default: 1
The loop_interval sets how long in seconds the minion will wait between evaluating the scheduler and running cleanup tasks. This defaults to 1 second on the minion scheduler.
loop_interval: 1
pub_ret¶
Default: True
Some installations choose to store all job returns in a cache or a returner and forgo sending the results back to a master. In this workflow, jobs are most often executed with --async from the Salt CLI, and results are then evaluated by examining job caches on the minions or any configured returners. WARNING: Setting this to False will disable returns back to the master.
pub_ret: True
return_retry_timer¶
Default: 5
The default timeout for a minion return attempt.
return_retry_timer: 5
return_retry_timer_max¶
Default: 10
The maximum timeout for a minion return attempt. If non-zero the minion return retry timeout will be a random int between return_retry_timer and return_retry_timer_max
return_retry_timer_max: 10
return_retry_tries¶
Default: 3
The maximum number of retries for a minion return attempt.
return_retry_tries: 3
cache_sreqs¶
Default: True
The connection to the master ret_port is kept open. When set to False, the minion creates a new connection for every return to the master.
cache_sreqs: True
ipc_mode¶
Default: ipc
Windows platforms lack POSIX IPC and must rely on slower TCP-based inter-process communication. ipc_mode is set to tcp on such systems.
ipc_mode: ipc
ipc_write_buffer¶
Default: 0
The maximum size of a message sent via the IPC transport module. It can be limited by setting an integer value (in bytes) lower than the total memory size, or dynamically: when the value dynamic is set, salt will use 2.5% of the total memory as the ipc_write_buffer value (rounded to an integer). A value of 0 disables this option.
ipc_write_buffer: 10485760
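The dynamic value described above can be sketched as a simple calculation (an illustration of the stated 2.5% rule; the total-memory figure is supplied by the caller here rather than detected):

```python
def dynamic_ipc_write_buffer(total_memory_bytes):
    """Return the buffer size used for ipc_write_buffer: dynamic --
    2.5% of total system memory, rounded down to an integer."""
    return int(total_memory_bytes * 0.025)

# e.g. on a machine with 4 GiB of RAM:
print(dynamic_ipc_write_buffer(4 * 1024**3))
# 107374182
```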
tcp_pub_port¶
Default: 4510
Publish port used when ipc_mode is set to tcp.
tcp_pub_port: 4510
tcp_pull_port¶
Default: 4511
Pull port used when ipc_mode is set to tcp.
tcp_pull_port: 4511
transport¶
Default: zeromq
Changes the underlying transport layer. ZeroMQ is the recommended transport while additional transport layers are under development. Supported values are zeromq and tcp (experimental). This setting has a significant impact on performance and should not be changed unless you know what you are doing!
transport: zeromq
syndic_finger¶
Default: ''
The key fingerprint of the higher-level master for the syndic to verify it is talking to the intended master.
syndic_finger: 'ab:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:50:10'
http_connect_timeout¶
New in version 2019.2.0.
Default: 20
HTTP connection timeout in seconds. Applied when fetching files using tornado back-end. Should be greater than overall download time.
http_connect_timeout: 20
http_request_timeout¶
New in version 2015.8.0.
Default: 3600
HTTP request timeout in seconds. Applied when fetching files using tornado back-end. Should be greater than overall download time.
http_request_timeout: 3600
proxy_host¶
Default: ''
The hostname used for HTTP proxy access.
proxy_host: proxy.my-domain
proxy_port¶
Default: 0
The port number used for HTTP proxy access.
proxy_port: 31337
proxy_username¶
Default: ''
The username used for HTTP proxy access.
proxy_username: charon
proxy_password¶
Default: ''
The password used for HTTP proxy access.
proxy_password: obolus
no_proxy¶
New in version 2019.2.0.
Default: []
List of hosts to bypass HTTP proxy
no_proxy: [ '127.0.0.1', 'foo.tld' ]
use_yamlloader_old¶
New in version 2019.2.1.
Default: False
Use the pre-2019.2 YAML renderer. Uses legacy YAML rendering to support some legacy inline data structures. See the 2019.2.1 release notes for more details.
use_yamlloader_old: False
Docker Configuration¶
docker.update_mine¶
New in version 2017.7.8,2018.3.3.
Changed in version 2019.2.0: The default value is now False
Default: True
If enabled, when containers are added, removed, stopped, started, etc., the mine will be updated with the results of docker.ps verbose=True all=True host=True. This mine data is used by mine.get_docker. Set this option to False to keep Salt from updating the mine with this information.
docker.update_mine: False
docker.compare_container_networks¶
New in version 2018.3.0.
Default: {'static': ['Aliases', 'Links', 'IPAMConfig'], 'automatic': ['IPAddress', 'Gateway', 'GlobalIPv6Address', 'IPv6Gateway']}
Specifies which keys are examined by docker.compare_container_networks.
docker.compare_container_networks:
static:
- Aliases
- Links
- IPAMConfig
automatic:
- IPAddress
- Gateway
- GlobalIPv6Address
- IPv6Gateway
optimization_order¶
Default: [0, 1, 2]
In cases where Salt is distributed without .py files, this option determines the priority of optimization level(s) Salt's module loader should prefer.
optimization_order:
- 2
- 0
- 1
Minion Execution Module Management¶
disable_modules¶
Default: [] (all execution modules are enabled by default)
An administrator may wish to prevent a minion from executing certain modules.
Note, however, that the sys module is built into the minion and cannot be disabled.
This setting can also tune the minion. Because all modules are loaded into system memory, disabling modules will lower the minion's memory footprint.
Modules should be specified according to their file name on the system and not by their virtual name. For example, to disable cmd, use the string cmdmod which corresponds to salt.modules.cmdmod.
disable_modules:
- test
- solr
disable_returners¶
Default: [] (all returners are enabled by default)
If certain returners should be disabled, list them here.
disable_returners:
- mongo_return
whitelist_modules¶
Default: [] (Module whitelisting is disabled. Adding anything to the config option will cause only the listed modules to be enabled. Modules not in the list will not be loaded.)
This option is the reverse of disable_modules. If enabled, only execution modules in this list will be loaded and executed on the minion.
Note that this is a very large hammer and it can be quite difficult to keep the minion working the way you think it should since Salt uses many modules internally itself. At a bare minimum you need the following enabled or else the minion won't start.
whitelist_modules:
- cmdmod
- test
- config
module_dirs¶
Default: []
A list of extra directories to search for Salt modules
module_dirs:
- /var/lib/salt/modules
returner_dirs¶
Default: []
A list of extra directories to search for Salt returners
returner_dirs:
- /var/lib/salt/returners
states_dirs¶
Default: []
A list of extra directories to search for Salt states
states_dirs:
- /var/lib/salt/states
grains_dirs¶
Default: []
A list of extra directories to search for Salt grains
grains_dirs:
- /var/lib/salt/grains
render_dirs¶
Default: []
A list of extra directories to search for Salt renderers
render_dirs:
- /var/lib/salt/renderers
utils_dirs¶
Default: []
A list of extra directories to search for Salt utilities
utils_dirs:
- /var/lib/salt/utils
cython_enable¶
Default: False
Set this value to true to enable auto-loading and compiling of .pyx modules. This setting requires that gcc and Cython are installed on the minion.
cython_enable: False
enable_zip_modules¶
New in version 2015.8.0.
Default: False
Set this value to true to enable loading of zip archives as extension modules. This allows for packing module code with specific dependencies to avoid conflicts and/or having to install specific modules' dependencies in system libraries.
enable_zip_modules: False
providers¶
Default: (empty)
A module provider can be statically overwritten or extended for the minion via the providers option. This can be done on an individual basis in an SLS file, or globally here in the minion config, like below.
providers:
service: systemd
modules_max_memory¶
Default: -1
Specify a max size (in bytes) for modules on import. This feature is currently only supported on *NIX operating systems and requires psutil.
modules_max_memory: -1
extmod_whitelist/extmod_blacklist¶
New in version 2017.7.0.
By using this dictionary, the modules that are synced to the minion's extmod cache using saltutil.sync_* can be limited. If nothing is set for a specific type, then all modules are accepted. To block all modules of a specific type, whitelist an empty list.
extmod_whitelist:
modules:
- custom_module
engines:
- custom_engine
pillars: []
extmod_blacklist:
modules:
- specific_module
Valid options:
- beacons
- clouds
- sdb
- modules
- states
- grains
- renderers
- returners
- proxy
- engines
- output
- utils
- pillar
Top File Settings¶
These parameters only have an effect if running a masterless minion.
state_top¶
Default: top.sls
The state system uses a "top" file to tell the minions what environment to use and what modules to use. The state_top file is defined relative to the root of the base environment.
state_top: top.sls
state_top_saltenv¶
This option has no default value. Set it to an environment name to ensure that only the top file from that environment is considered during a highstate.
state_top_saltenv: dev
top_file_merging_strategy¶
Changed in version 2016.11.0: A merge_all strategy has been added.
Default: merge
When no specific fileserver environment (a.k.a. saltenv) has been specified for a highstate, all environments' top files are inspected. This config option determines how the SLS targets in those top files are handled.
When set to merge, the base environment's top file is evaluated first, followed by the other environments' top files. The first target expression (e.g. '*') for a given environment is kept, and when the same target expression is used in a different top file evaluated later, it is ignored. Because base is evaluated first, it is authoritative. For example, if there is a target for '*' for the foo environment in both the base and foo environment's top files, the one in the foo environment would be ignored. The environments will be evaluated in no specific order (aside from base coming first). For greater control over the order in which the environments are evaluated, use env_order.
Note that, aside from the base environment's top file, any sections in top files that do not match that top file's environment will be ignored. So, for example, a section for the qa environment would be ignored if it appears in the dev environment's top file. To keep use cases like this from being ignored, use the merge_all strategy.
When set to same, then for each environment, only that environment's top file is processed, with the others being ignored. For example, only the dev environment's top file will be processed for the dev environment, and any SLS targets defined for dev in the base environment's (or any other environment's) top file will be ignored. If an environment does not have a top file, then the top file from the default_top config parameter will be used as a fallback.
When set to merge_all, then all states in all environments in all top files will be applied. The order in which individual SLS files will be executed will depend on the order in which the top files were evaluated, and the environments will be evaluated in no specific order. For greater control over the order in which the environments are evaluated, use env_order.
top_file_merging_strategy: same
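The first-match-wins behavior of the merge strategy can be sketched as follows (a simplified illustration, not Salt's actual top-file compiler; per-environment section filtering is omitted):

```python
def merge_tops(tops, env_order):
    """Merge per-environment top-file data.

    ``tops`` maps an environment name to a dict of
    {target_expression: [sls, ...]}. The base environment is
    evaluated first, and the first occurrence of a given target
    expression wins; later duplicates are ignored."""
    merged = {}
    for env in ["base"] + [e for e in env_order if e != "base"]:
        for target, sls_list in tops.get(env, {}).items():
            merged.setdefault(target, sls_list)  # first match wins
    return merged

tops = {
    "base": {"*": ["core"]},
    "foo": {"*": ["foo_states"], "web*": ["apache"]},
}
print(merge_tops(tops, ["foo"]))
# {'*': ['core'], 'web*': ['apache']}  -- base's '*' is authoritative
```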
env_order¶
Default: []
When top_file_merging_strategy is set to merge, and no environment is specified for a highstate, this config option allows for the order in which top files are evaluated to be explicitly defined.
env_order:
- base
- dev
- qa
default_top¶
Default: base
When top_file_merging_strategy is set to same, and no environment is specified for a highstate (i.e. environment is not set for the minion), this config option specifies a fallback environment in which to look for a top file if an environment lacks one.
default_top: dev
startup_states¶
Default: ''
States to run when the minion daemon starts. To enable, set startup_states to:
- highstate: Execute state.highstate
- sls: Read in the sls_list option and execute the named sls files
- top: Read top_file option and execute based on that file on the Master
startup_states: ''
sls_list¶
Default: []
List of states to run when the minion starts up if startup_states is set to sls.
sls_list:
- edit.vim
- hyper
start_event_grains¶
Default: []
List of grains to pass in start event when minion starts up.
start_event_grains:
- machine_id
- uuid
top_file¶
Default: ''
Top file to execute if startup_states is set to top.
top_file: ''
State Management Settings¶
renderer¶
Default: jinja|yaml
The default renderer used for local state executions
renderer: jinja|json
test¶
Default: False
Set all state calls to only test if they are going to actually make changes or just post what changes are going to be made.
test: False
state_aggregate¶
Default: False
Automatically aggregate all states that have support for mod_aggregate by setting to True.
state_aggregate: True
Or pass a list of state module names to automatically aggregate just those types.
state_aggregate:
- pkg
state_queue¶
Default: False
Instead of failing immediately when another state run is in progress, a value of True will queue the new state run to begin running once the other has finished. This option starts a new thread for each queued state run, so use this option sparingly.
state_queue: True
Additionally, it can be set to an integer representing the maximum queue size which can be attained before the state runs will fail to be queued. This can prevent runaway conditions where new threads are started until system performance is hampered.
state_queue: 2
state_verbose¶
Default: True
Controls the verbosity of state runs. By default, the results of all states are returned, but setting this value to False will cause salt to only display output for states that failed or states that have changes.
state_verbose: True
state_output¶
Default: full
The state_output setting controls which results will be output as full multi-line output:
- full, terse - each state will be full/terse
- mixed - only states with errors will be full
- changes - states with changes and errors will be full
full_id, mixed_id, changes_id and terse_id are also allowed; when set, the state ID will be used as the name in the output.
state_output: full
state_output_diff¶
Default: False
The state_output_diff setting changes whether or not the output from successful states is returned. Useful when even the terse output of these states is cluttering the logs. Set it to True to ignore them.
state_output_diff: False
state_output_profile¶
Default: True
The state_output_profile setting changes whether profile information will be shown for each state run.
state_output_profile: True
state_output_pct¶
Default: False
The state_output_pct setting changes whether success and failure information as a percent of total actions will be shown for each state run.
state_output_pct: False
state_compress_ids¶
Default: False
The state_compress_ids setting aggregates information about states which have multiple "names" under the same state ID in the highstate output.
state_compress_ids: False
autoload_dynamic_modules¶
Default: True
autoload_dynamic_modules turns on automatic loading of modules found in the environments on the master. This is turned on by default. To turn off auto-loading modules when states run, set this value to False.
autoload_dynamic_modules: True
clean_dynamic_modules¶
Default: True
clean_dynamic_modules keeps the dynamic modules on the minion in sync with the dynamic modules on the master. This means that if a dynamic module is not on the master it will be deleted from the minion. By default this is enabled and can be disabled by changing this value to False.
clean_dynamic_modules: True
saltenv¶
Changed in version 2018.3.0: Renamed from environment to saltenv. If environment is used, saltenv will take its value. If both are used, environment will be ignored and saltenv will be used.
Normally the minion is not isolated to any single environment on the master when running states, but the environment can be isolated on the minion side by statically setting it. Remember that the recommended way to manage environments is to isolate via the top file.
saltenv: dev
lock_saltenv¶
New in version 2018.3.0.
Default: False
For purposes of running states, this option prevents using the saltenv argument to manually set the environment. This is useful to keep a minion which has the saltenv option set to dev from running states from an environment other than dev.
lock_saltenv: True
snapper_states¶
Default: False
The snapper_states value is used to enable taking snapper snapshots before and after salt state runs. This allows for state runs to be rolled back.
For snapper states to function properly snapper needs to be installed and enabled.
snapper_states: True
snapper_states_config¶
Default: root
Snapper can execute based on a snapper configuration. The configuration needs to be set up before snapper can use it. The default configuration is root, this default makes snapper run on SUSE systems using the default configuration set up at install time.
snapper_states_config: root
global_state_conditions¶
Default: None
If set, this parameter expects a dictionary of state module names as keys and a list of conditions which must be satisfied in order to run any functions in that state module.
global_state_conditions:
"*": ["G@global_noop:false"]
service: ["not G@virtual_subtype:chroot"]
File Directory Settings¶
file_client¶
Default: remote
The client defaults to looking on the master server for files, but can be directed to look on the minion by setting this parameter to local.
file_client: remote
use_master_when_local¶
Default: False
When using a local file_client, this parameter is used to allow the client to connect to a master for remote execution.
use_master_when_local: False
file_roots¶
Default:
base:
- /srv/salt
When using a local file_client, this parameter is used to setup the fileserver's environments. This parameter operates identically to the master config parameter of the same name.
file_roots:
base:
- /srv/salt
dev:
- /srv/salt/dev/services
- /srv/salt/dev/states
prod:
- /srv/salt/prod/services
- /srv/salt/prod/states
fileserver_followsymlinks¶
New in version 2014.1.0.
Default: True
By default, the file_server follows symlinks when walking the filesystem tree. Currently this only applies to the default roots fileserver_backend.
fileserver_followsymlinks: True
fileserver_ignoresymlinks¶
New in version 2014.1.0.
Default: False
If you do not want symlinks to be treated as the files they are pointing to, set fileserver_ignoresymlinks to True. By default this is set to False. When set to True, any detected symlink while listing files on the Master will not be returned to the Minion.
fileserver_ignoresymlinks: False
hash_type¶
Default: sha256
The hash_type is the hash to use when discovering the hash of a file on the local fileserver. The default is sha256, but md5, sha1, sha224, sha384, and sha512 are also supported.
hash_type: sha256
Pillar Configuration¶
pillar_roots¶
Default:
base:
- /srv/pillar
When using a local file_client, this parameter is used to setup the pillar environments.
pillar_roots:
base:
- /srv/pillar
dev:
- /srv/pillar/dev
prod:
- /srv/pillar/prod
on_demand_ext_pillar¶
New in version 2016.3.6,2016.11.3,2017.7.0.
Default: ['libvirt', 'virtkey']
When using a local file_client, this option controls which external pillars are permitted to be used on-demand using pillar.ext.
on_demand_ext_pillar:
- libvirt
- virtkey
- git
decrypt_pillar¶
New in version 2017.7.0.
Default: []
A list of paths to be recursively decrypted during pillar compilation.
decrypt_pillar:
- 'foo:bar': gpg
- 'lorem:ipsum:dolor'
Entries in this list can be formatted either as a simple string, or as a key/value pair, with the key being the pillar location, and the value being the renderer to use for pillar decryption. If the former is used, the renderer specified by decrypt_pillar_default will be used.
decrypt_pillar_delimiter¶
New in version 2017.7.0.
Default: :
The delimiter used to distinguish nested data structures in the decrypt_pillar option.
decrypt_pillar_delimiter: '|'
decrypt_pillar:
- 'foo|bar': gpg
- 'lorem|ipsum|dolor'
decrypt_pillar_default¶
New in version 2017.7.0.
Default: gpg
The default renderer used for decryption, if one is not specified for a given pillar key in decrypt_pillar.
decrypt_pillar_default: my_custom_renderer
decrypt_pillar_renderers¶
New in version 2017.7.0.
Default: ['gpg']
List of renderers which are permitted to be used for pillar decryption.
decrypt_pillar_renderers:
- gpg
- my_custom_renderer
gpg_decrypt_must_succeed¶
New in version 3005.
Default: False
If this is True and the ciphertext could not be decrypted, then an error is raised.
Passing the ciphertext through unmodified is almost never desired. For example, if a state sets a database password from pillar and GPG rendering fails, the state would update the password to the ciphertext, which by definition is not encrypted.
gpg_decrypt_must_succeed: False
pillarenv¶
Default: None
Isolates the pillar environment on the minion side. This functions the same as the environment setting, but for pillar instead of states.
pillarenv: dev
pillarenv_from_saltenv¶
New in version 2017.7.0.
Default: False
When set to True, the pillarenv value will assume the value of the effective saltenv when running states. This essentially makes salt '*' state.sls mysls saltenv=dev equivalent to salt '*' state.sls mysls saltenv=dev pillarenv=dev. If pillarenv is set, either in the minion config file or via the CLI, it will override this option.
pillarenv_from_saltenv: True
pillar_raise_on_missing¶
New in version 2015.5.0.
Default: False
Set this option to True to force a KeyError to be raised whenever an attempt to retrieve a named value from pillar fails. When this option is set to False, the failed attempt returns an empty string.
minion_pillar_cache¶
New in version 2016.3.0.
Default: False
The minion can locally cache rendered pillar data under cachedir/pillar. This allows a temporarily disconnected minion to access previously cached pillar data by invoking salt-call with the --local and --pillar_root=<cachedir>/pillar options. Before enabling this setting, consider that the rendered pillar may contain security-sensitive data, so appropriate access restrictions should be in place. By default the saved pillar data will be readable only by the user account running salt. This feature is disabled by default; to enable it, set minion_pillar_cache to True.
minion_pillar_cache: False
file_recv_max_size¶
New in version 2014.7.0.
Default: 100
Set a hard-limit on the size of the files that can be pushed to the master. It will be interpreted as megabytes.
file_recv_max_size: 100
pass_to_ext_pillars¶
Specify a list of configuration keys whose values are to be passed to external pillar functions.
Suboptions can be specified using the ':' notation (i.e. option:suboption)
The values are merged and included in the extra_minion_data optional parameter of the external pillar function. The extra_minion_data parameter is passed only to the external pillar functions that have it explicitly specified in their definition.
If the config contains
opt1: value1
opt2:
  subopt1: value2
  subopt2: value3
pass_to_ext_pillars:
  - opt1
  - opt2: subopt1
the extra_minion_data parameter will be
{"opt1": "value1", "opt2": {"subopt1": "value2"}}
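The gathering of options and suboptions into extra_minion_data can be sketched as follows (an illustrative model of the described merging, not Salt's actual code):

```python
def build_extra_minion_data(opts, pass_list):
    """Collect the options named in pass_to_ext_pillars from the
    minion config ``opts``. Entries are either plain option names
    or {option: suboption} pairs selecting a single nested key."""
    extra = {}
    for entry in pass_list:
        if isinstance(entry, str):
            if entry in opts:
                extra[entry] = opts[entry]
        else:  # {option: suboption} pair
            for opt, subopt in entry.items():
                if opt in opts and subopt in opts[opt]:
                    extra.setdefault(opt, {})[subopt] = opts[opt][subopt]
    return extra

opts = {"opt1": "value1", "opt2": {"subopt1": "value2", "subopt2": "value3"}}
print(build_extra_minion_data(opts, ["opt1", {"opt2": "subopt1"}]))
# {'opt1': 'value1', 'opt2': {'subopt1': 'value2'}}
```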
ssh_merge_pillar¶
New in version 2018.3.2.
Default: True
Merges the compiled pillar data with the pillar data already available globally. This is useful when using salt-ssh or salt-call --local and overriding the pillar data in a state file:
apply_showpillar:
module.run:
- name: state.apply
- mods:
- showpillar
- kwargs:
pillar:
test: "foo bar"
If set to True, the showpillar state will have access to the global pillar data.
If set to False, only the overriding pillar data will be available to the showpillar state.
Security Settings¶
open_mode¶
Default: False
Open mode can be used to clean out the PKI key received from the Salt master: turn on open mode, restart the minion, then turn off open mode and restart the minion again to clean the keys.
open_mode: False
master_finger¶
Default: ''
Fingerprint of the master public key to validate the identity of your Salt master before the initial key exchange. The master fingerprint can be found as master.pub by running "salt-key -F master" on the Salt master.
master_finger: 'ba:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:11:13'
keysize¶
Default: 2048
The size of key that should be generated when creating new keys.
keysize: 2048
permissive_pki_access¶
Default: False
Enable permissive access to the salt keys. This allows you to run the master or minion as root, but have a non-root group be given access to your pki_dir. To make the access explicit, root must belong to the group you've given access to. This is potentially quite insecure.
permissive_pki_access: False
verify_master_pubkey_sign¶
Default: False
Enables verification of the master-public-signature returned by the master in auth-replies. Please see the Multimaster-PKI with Failover tutorial on how to configure this properly.
New in version 2014.7.0.
verify_master_pubkey_sign: True
If this is set to True, master_sign_pubkey must be also set to True in the master configuration file.
master_sign_key_name¶
Default: master_sign
The filename without the .pub suffix of the public key that should be used for verifying the signature from the master. The file must be located in the minion's pki directory.
New in version 2014.7.0.
master_sign_key_name: <filename_without_suffix>
autosign_grains¶
New in version 2018.3.0.
Default: not defined
The grains that should be sent to the master on authentication to decide if the minion's key should be accepted automatically.
Please see the Autoaccept Minions from Grains documentation for more information.
autosign_grains:
- uuid
- server_id
always_verify_signature¶
Default: False
If verify_master_pubkey_sign is enabled, the signature is only verified if the public-key of the master changes. If the signature should always be verified, this can be set to True.
New in version 2014.7.0.
always_verify_signature: True
cmd_blacklist_glob¶
Default: []
If cmd_blacklist_glob is enabled then any shell command called over remote execution or via salt-call will be checked against the glob matches found in the cmd_blacklist_glob list and any matched shell command will be blocked.
NOTE:
New in version 2016.11.0.
cmd_blacklist_glob:
- 'rm *'
- 'cat /etc/*'
cmd_whitelist_glob¶
Default: []
If cmd_whitelist_glob is enabled then any shell command called over remote execution or via salt-call will be checked against the glob matches found in the cmd_whitelist_glob list and any shell command NOT found in the list will be blocked. If cmd_whitelist_glob is NOT SET, then all shell commands are permitted.
NOTE:
New in version 2016.11.0.
cmd_whitelist_glob:
- 'ls *'
- 'cat /etc/fstab'
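The semantics of the two options can be sketched with Python's fnmatch-style globbing, which is the style of matching these lists use. This is a minimal illustration only, not Salt's actual implementation; the helper name is hypothetical:

```python
from fnmatch import fnmatch


def command_allowed(cmd, blacklist=None, whitelist=None):
    """Hypothetical sketch of glob-based shell command filtering.

    A command is blocked if it matches any blacklist glob, or if a
    whitelist is set and the command matches none of its globs.
    """
    if blacklist and any(fnmatch(cmd, pat) for pat in blacklist):
        return False
    if whitelist is not None:
        return any(fnmatch(cmd, pat) for pat in whitelist)
    return True


# Blacklist blocks matching commands; everything else is permitted.
assert command_allowed("rm -rf /tmp/x", blacklist=["rm *"]) is False
assert command_allowed("ls /tmp", blacklist=["rm *"]) is True

# With a whitelist, only listed commands are permitted.
allowed = ["ls *", "cat /etc/fstab"]
assert command_allowed("cat /etc/fstab", whitelist=allowed) is True
assert command_allowed("reboot", whitelist=allowed) is False
```

Note how the default behavior differs: an unset whitelist permits everything, while an unset blacklist simply blocks nothing.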
ssl¶
New in version 2016.11.0.
Default: None
TLS/SSL connection options. This can be set to a dictionary containing arguments corresponding to the Python ssl.wrap_socket method. For details see the Tornado and Python documentation.
Note: to set enum argument values like cert_reqs and ssl_version, use the constant names without the ssl module prefix, e.g. CERT_REQUIRED or PROTOCOL_SSLv23.
ssl:
keyfile: <path_to_keyfile>
certfile: <path_to_certfile>
ssl_version: PROTOCOL_TLSv1_2
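Because the enum values are given as bare constant names, they resolve to attributes of Python's ssl module. A minimal sketch of that resolution (the function name is hypothetical, not Salt's code):

```python
import ssl


def resolve_ssl_option(name):
    """Resolve a bare constant name like 'PROTOCOL_TLSv1_2' or
    'CERT_REQUIRED' to the corresponding ssl module constant."""
    return getattr(ssl, name)


# 'CERT_REQUIRED' becomes ssl.CERT_REQUIRED, suitable for passing
# into an SSL context or wrap_socket-style call.
assert resolve_ssl_option("CERT_REQUIRED") == ssl.CERT_REQUIRED
assert resolve_ssl_option("PROTOCOL_TLSv1_2") == ssl.PROTOCOL_TLSv1_2
```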
Reactor Settings¶
reactor¶
Default: []
Defines a salt reactor. See the Reactor documentation for more information.
reactor: []
reactor_refresh_interval¶
Default: 60
The TTL for the cache of the reactor configuration.
reactor_refresh_interval: 60
reactor_worker_threads¶
Default: 10
The number of workers for the runner/wheel in the reactor.
reactor_worker_threads: 10
reactor_worker_hwm¶
Default: 10000
The queue size for workers in the reactor.
reactor_worker_hwm: 10000
Thread Settings¶
multiprocessing¶
Default: True
If multiprocessing is enabled, when a minion receives a publication a new process is spawned and the command is executed therein. Conversely, if multiprocessing is disabled, the new publication will be executed in a thread.
multiprocessing: True
process_count_max¶
New in version 2018.3.0.
Default: -1
Limit the maximum amount of processes or threads created by salt-minion. This is useful to avoid resource exhaustion in case the minion receives more publications than it is able to handle, as it limits the number of spawned processes or threads. -1 is the default and disables the limit.
process_count_max: -1
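The limiting behavior can be sketched as a simple gate: when the cap is reached, the minion waits for a slot instead of spawning another worker. This is an illustrative sketch only; the function name is hypothetical:

```python
def can_spawn(current_count, process_count_max=-1):
    """Return True if another job process/thread may be spawned.

    -1 (the default) disables the limit entirely; otherwise a new
    worker may only start while fewer than process_count_max exist.
    """
    if process_count_max == -1:
        return True
    return current_count < process_count_max


assert can_spawn(5000, -1) is True                   # limit disabled
assert can_spawn(9, process_count_max=10) is True    # below the cap
assert can_spawn(10, process_count_max=10) is False  # cap reached: wait
```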
Minion Logging Settings¶
log_file¶
Default: /var/log/salt/minion
The minion log can be sent to a regular file, local path name, or network location. See also log_file.
Examples:
log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level¶
Default: warning
The level of messages to send to the console. See also log_level.
log_level: warning
log_level_logfile¶
Default: warning
The level of messages to send to the log file. See also log_level_logfile. When it is not set explicitly it will inherit the level set by log_level option.
log_level_logfile: warning
log_datefmt¶
Default: %H:%M:%S
The date and time format used in console log messages. See also log_datefmt.
log_datefmt: '%H:%M:%S'
log_datefmt_logfile¶
Default: %Y-%m-%d %H:%M:%S
The date and time format used in log file messages. See also log_datefmt_logfile.
log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console¶
Default: [%(levelname)-8s] %(message)s
The format of the console logging messages. See also log_fmt_console.
NOTE:
Console log colors are specified by these additional formatters:
%(colorlevel)s %(colorname)s %(colorprocess)s %(colormsg)s
Since it is desirable to include the surrounding brackets, '[' and ']', in the coloring of the messages, these color formatters also include padding as well. Color LogRecord attributes are only available for console logging.
log_fmt_console: '%(colorlevel)s %(colormsg)s'
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile¶
Default: %(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s
The format of the log file logging messages. See also log_fmt_logfile.
log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'
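These format and date-format strings are standard Python logging format strings, so the defaults above can be reproduced directly with logging.Formatter. A minimal sketch, not Salt's actual logging setup:

```python
import logging

# The default log file format and date format from this section.
fmt = "%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s"
datefmt = "%Y-%m-%d %H:%M:%S"

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(fmt, datefmt=datefmt))

log = logging.getLogger("salt.minion")
log.addHandler(handler)
log.setLevel(logging.WARNING)

# Emits a line shaped like:
#   2024-01-01 12:00:00,123 [salt.minion      ][WARNING ] connection to master lost
log.warning("connection to master lost")
```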
log_granular_levels¶
Default: {}
This can be used to control logging levels more specifically. See also log_granular_levels.
log_rotate_max_bytes¶
Default: 0
The maximum number of bytes a single log file may contain before it is rotated. A value of 0 disables this feature. Currently only supported on Windows. On other platforms, use an external tool such as 'logrotate' to manage log files. See also log_rotate_max_bytes.
log_rotate_backup_count¶
Default: 0
The number of backup files to keep when rotating log files. Only used if log_rotate_max_bytes is greater than 0. Currently only supported on Windows. On other platforms, use an external tool such as 'logrotate' to manage log files. See also log_rotate_backup_count.
zmq_monitor¶
Default: False
To diagnose issues with minions disconnecting or missing returns, ZeroMQ supports the use of monitor sockets to log connection events. This feature requires ZeroMQ 4.0 or higher.
To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a debug level or higher.
A sample log event is as follows:
[DEBUG ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512, 'value': 27, 'description': 'EVENT_DISCONNECTED'}
All events logged will include the string ZeroMQ event. A connection event should be logged as the minion starts up and initially connects to the master. If not, check for debug log level and that the necessary version of ZeroMQ is installed.
tcp_authentication_retries¶
Default: 5
The number of times to retry authenticating with the salt master when it comes back online.
ZeroMQ does a lot of work to ensure that connections reauthenticate when they come back online. The TCP transport instead tries to establish a new connection if the old one times out while reauthenticating.
-1 for infinite tries.
tcp_reconnect_backoff¶
Default: 1
The time in seconds to wait before attempting another connection with salt master when the previous connection fails while on TCP transport.
failhard¶
Default: False
Set the global failhard flag. This instructs all states to stop running at the moment a single state fails.
failhard: False
Include Configuration¶
Configuration can be loaded from multiple files. The order in which this is done is:
1. The minion config file itself
2. The files matching the glob in default_include
3. The files matching the glob in include (if defined)
Each successive step overrides any values defined in the previous steps. Therefore, any config options defined in one of the default_include files would override the same value in the minion config file, and any options defined in include would override both.
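The three-step override order can be sketched as successive dictionary merges, with later sources winning. This is a simplified illustration of the precedence, not the actual config loader:

```python
# Simplified precedence sketch: later merges override earlier ones.
minion_config = {"master": "salt", "log_level": "warning"}
default_include = {"log_level": "info"}   # from minion.d/*.conf
include = {"log_level": "debug"}          # from the include option

opts = {}
opts.update(minion_config)    # 1. the minion config file itself
opts.update(default_include)  # 2. files matching default_include
opts.update(include)          # 3. files matching include (if defined)

assert opts["log_level"] == "debug"  # include wins over both earlier sources
assert opts["master"] == "salt"      # values set only once survive untouched
```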
default_include¶
Default: minion.d/*.conf
The minion can include configuration from other files. By default the minion will automatically include all config files from minion.d/*.conf, where minion.d is relative to the directory of the minion configuration file.
NOTE:
include¶
Default: not defined
The minion can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main minion configuration file lives in. Paths can make use of shell-style globbing. If no files are matched by a path passed to this option then the minion will log a warning message.
# Include files from a minion.d directory in the same
# directory as the minion config file
include: minion.d/*.conf

# Include a single extra file into the configuration
include: /etc/roles/webserver

# Include several files and the minion.d directory
include:
- extra_config
- minion.d/*
- /etc/roles/webserver
Keepalive Settings¶
tcp_keepalive¶
Default: True
The tcp keepalive interval to set on TCP ports. This setting can be used to tune Salt connectivity issues in messy network environments with misbehaving firewalls.
tcp_keepalive: True
tcp_keepalive_cnt¶
Default: -1
Sets the ZeroMQ TCP keepalive count. May be used to tune issues with minion disconnects.
tcp_keepalive_cnt: -1
tcp_keepalive_idle¶
Default: 300
Sets ZeroMQ TCP keepalive idle. May be used to tune issues with minion disconnects.
tcp_keepalive_idle: 300
tcp_keepalive_intvl¶
Default: -1
Sets ZeroMQ TCP keepalive interval. May be used to tune issues with minion disconnects.
tcp_keepalive_intvl: -1
Frozen Build Update Settings¶
These options control how salt.modules.saltutil.update() works with esky frozen apps. For more information look at https://github.com/cloudmatrix/esky/.
update_url¶
Default: False (Update feature is disabled)
The url to use when looking for application updates. Esky depends on directory listings to search for new versions. A webserver running on your Master is a good starting point for most setups.
update_url: 'http://salt.example.com/minion-updates'
update_restart_services¶
Default: [] (service restarting on update is disabled)
A list of services to restart when the minion software is updated. This would typically just be a list containing the minion's service name, but you may have other services that need to go with it.
update_restart_services: ['salt-minion']
Windows Software Repo Settings¶
These settings apply to all minions, whether running in masterless or master-minion mode.
winrepo_cache_expire_min¶
New in version 2016.11.0.
Default: 1800
If set to a nonzero integer, then passing refresh=True to functions in the windows pkg module will not refresh the windows repo metadata if the age of the metadata is less than this value. The exception to this is pkg.refresh_db, which will always refresh the metadata, regardless of age.
winrepo_cache_expire_min: 1800
winrepo_cache_expire_max¶
New in version 2016.11.0.
Default: 21600
If the windows repo metadata is older than this value, and the metadata is needed by a function in the windows pkg module, the metadata will be refreshed.
winrepo_cache_expire_max: 86400
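Together, the two options define a refresh window for the winrepo metadata. A hedged sketch of the decision (the function name is hypothetical, not Salt's code, and pkg.refresh_db bypasses this check entirely):

```python
def should_refresh(metadata_age, refresh_requested,
                   expire_min=1800, expire_max=21600):
    """Sketch of the winrepo metadata refresh decision.

    refresh=True is ignored while the metadata is younger than
    expire_min; metadata older than expire_max is refreshed even
    without an explicit request. Ages are in seconds.
    """
    if metadata_age > expire_max:
        return True
    if refresh_requested and metadata_age >= expire_min:
        return True
    return False


assert should_refresh(600, refresh_requested=True) is False    # too fresh
assert should_refresh(3600, refresh_requested=True) is True    # past expire_min
assert should_refresh(90000, refresh_requested=False) is True  # past expire_max
```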
winrepo_source_dir¶
Default: salt://win/repo-ng/
The source location for the winrepo sls files.
winrepo_source_dir: salt://win/repo-ng/
Standalone Minion Windows Software Repo Settings¶
The following settings are for configuring the Windows Software Repository (winrepo) on a masterless minion. To run in masterless minion mode, set the file_client to local or run salt-call with the --local option
IMPORTANT:
winrepo_dir¶
Changed in version 2015.8.0: Renamed from win_repo to winrepo_dir. This option did not have a default value until this version.
Default: C:\salt\srv\salt\win\repo
Location on the minion file_roots where winrepo files are kept. This is also where the winrepo_remotes are cloned to by winrepo.update_git_repos.
winrepo_dir: 'D:\winrepo'
winrepo_dir_ng¶
New in version 2015.8.0: A new ng repo was added.
Default: C:\salt\srv\salt\win\repo-ng
Location on the minion file_roots where winrepo files are kept for 2015.8.0 and later minions. This is also where the winrepo_remotes are cloned to by winrepo.update_git_repos.
winrepo_dir_ng: /srv/salt/win/repo-ng
winrepo_cachefile¶
Changed in version 2015.8.0: Renamed from win_repo_cachefile to winrepo_cachefile. Also, this option did not have a default value until this version.
Default: winrepo.p
The name of the winrepo cache file. The file will be created at the root of the directory specified by winrepo_dir_ng.
winrepo_cachefile: winrepo.p
winrepo_remotes¶
Changed in version 2015.8.0: Renamed from win_gitrepos to winrepo_remotes. Also, this option did not have a default value until this version.
New in version 2015.8.0.
Default: ['https://github.com/saltstack/salt-winrepo.git']
List of git repositories to checkout and include in the winrepo
winrepo_remotes:
- https://github.com/saltstack/salt-winrepo.git
To specify a specific revision of the repository, prepend a commit ID to the URL of the repository:
winrepo_remotes:
- '<commit_id> https://github.com/saltstack/salt-winrepo.git'
Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in that it allows one to revert back to a previous version in the event that an error is introduced in the latest revision of the repo.
winrepo_remotes_ng¶
New in version 2015.8.0: A new ng repo was added.
Default: ['https://github.com/saltstack/salt-winrepo-ng.git']
List of git repositories to checkout and include in the winrepo for 2015.8.0 and later minions.
winrepo_remotes_ng:
- https://github.com/saltstack/salt-winrepo-ng.git
To specify a specific revision of the repository, prepend a commit ID to the URL of the repository:
winrepo_remotes_ng:
- '<commit_id> https://github.com/saltstack/salt-winrepo-ng.git'
Replace <commit_id> with the SHA1 hash of a commit ID. Specifying a commit ID is useful in that it allows one to revert back to a previous version in the event that an error is introduced in the latest revision of the repo.
Configuring the Salt Proxy Minion¶
The Salt system is amazingly simple and easy to configure. The two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-proxy is configured via the proxy configuration file.
SEE ALSO:
The Salt Minion configuration is very simple. Typically, the only value that needs to be set is the master value so the proxy knows where to locate its master.
By default, the salt-proxy configuration will be in /etc/salt/proxy. A notable exception is FreeBSD, where the configuration will be in /usr/local/etc/salt/proxy.
With the Salt 3004 release, the ability to configure proxy minions using the delta proxy was introduced. The delta proxy provides the ability for a single control proxy minion to manage multiple proxy minions.
SEE ALSO:
Proxy-specific Configuration Options¶
add_proxymodule_to_opts¶
New in version 2015.8.2.
Changed in version 2016.3.0.
Default: False
Add the proxymodule LazyLoader object to opts.
add_proxymodule_to_opts: True
proxy_merge_grains_in_module¶
New in version 2016.3.0.
Changed in version 2017.7.0.
Default: True
If a proxymodule has a function called grains, then call it during regular grains loading and merge the results with the proxy's grains dictionary. Otherwise it is assumed that the module calls the grains function in a custom way and returns the data elsewhere.
proxy_merge_grains_in_module: False
proxy_keep_alive¶
New in version 2017.7.0.
Default: True
Whether the connection with the remote device should be restarted when dead. The proxy module must implement the alive function; if it does not, the connection is always considered alive.
proxy_keep_alive: False
proxy_keep_alive_interval¶
New in version 2017.7.0.
Default: 1
The frequency of keepalive checks, in minutes. It requires the proxy_keep_alive option to be enabled (and the proxy module to implement the alive function).
proxy_keep_alive_interval: 5
proxy_always_alive¶
New in version 2017.7.0.
Default: True
Whether the proxy should maintain the connection with the remote device. Similarly to proxy_keep_alive, this option is very specific to the design of the proxy module. When proxy_always_alive is set to False, the connection with the remote device is not maintained and has to be closed after every command.
proxy_always_alive: False
proxy_merge_pillar_in_opts¶
New in version 2017.7.3.
Default: False.
Whether the pillar data is to be merged into the proxy configuration options. As multiple proxies can run on the same server, we may need different configuration options for each, while there's one single configuration file. The solution is merging the pillar data of each proxy minion into the opts.
proxy_merge_pillar_in_opts: True
proxy_deep_merge_pillar_in_opts¶
New in version 2017.7.3.
Default: False.
Deep merge of pillar data into configuration opts. This option is evaluated only when proxy_merge_pillar_in_opts is enabled.
proxy_merge_pillar_in_opts_strategy¶
New in version 2017.7.3.
Default: smart.
The strategy used when merging pillar configuration into opts. This option is evaluated only when proxy_merge_pillar_in_opts is enabled.
proxy_mines_pillar¶
New in version 2017.7.3.
Default: True.
Allow enabling mine details using pillar data. This evaluates the mine configuration under the pillar, for the following regular minion options that are equally available on the proxy minion: mine_interval and mine_functions.
Delta proxy minions¶
Welcome to the delta proxy minion installation guide. This installation guide explains the process for installing and using the delta proxy minion, which is available beginning in version 3004.
This guide is intended for system and network administrators with the general knowledge and experience required in the field. This guide is also intended for users that have ideally already tested and used standard Salt proxy minions in their environment before deciding to move to a delta proxy minion environment. See Salt proxy minions for more information.
NOTE:
Proxy minions vs. delta proxy minions¶
Salt can target network devices through Salt proxy minions. Proxy minions allow you to control network devices that, for whatever reason, cannot run the standard Salt minion. Examples include:
- Network gear that has an API but runs a proprietary operating system
- Devices with limited CPU or memory
- Devices that could run a minion but will not for security reasons
A proxy minion acts as an intermediary between the Salt master and the device it represents. The proxy minion runs on the Salt master and then translates commands from the Salt master to the device as needed.
By acting as an intermediary for the actual minion, proxy minions eliminate the need to establish a constant connection from a Salt master to a minion. Proxy minions generally only open a connection to the actual minion when necessary.
Proxy minions also reduce the amount of CPU or memory the minion must spend checking for commands from the Salt master. Proxy minions use the Salt master's CPU or memory to check for commands. The actual minion only needs to use CPU or memory to run commands when needed.
NOTE:
- Salt proxy minions
- Salt proxy modules
When delta proxy minions are needed¶
Normally, you would create a separate instance of proxy minion for each device that needs to be managed. However, this doesn't always scale well if you have thousands of devices. Running several thousand proxy minions can require a lot of memory and CPU.
A delta proxy minion can solve this problem: it makes it possible to run one minion that acts as the intermediary between the Salt master and the many network devices it can represent. In this scenario, one device (the delta proxy minion on the Salt master) runs several proxies. This configuration boosts performance and improves the overall scalability of the network.
Key terms¶
The following lists some important terminology that is used throughout this guide:
Term | Definition |
Salt master | The Salt master is a central node running the Salt master server. The Salt master issues commands to minions. |
minion | Minions are nodes running the Salt minion service. Minions listen to commands from a Salt master and perform the requested tasks, then return data back to the Salt master as needed. |
proxy minion | A Salt master that is running the proxy-minion service. The proxy minion acts as an intermediary between the Salt master and the device it represents. The proxy minion runs on the Salt master and then translates commands from the Salt master to the device. A separate instance of proxy minion is needed for each device that is managed. |
delta proxy minion | A Salt master that is running the delta proxy-minion service. The delta proxy minion acts as the intermediary between the Salt master and the many network devices it can represent. Only one instance of the delta proxy service is needed to run several proxies. |
control proxy | The control proxy runs on the Salt master. It manages a list of devices and issues commands to the network devices it represents. The Salt master needs at least one control proxy, but it is possible to have more than one control proxy, each managing a different set of devices. |
managed device | A device (such as a network device managed via Netmiko) that is managed by proxy minions or by a control proxy minion. The proxy minion or control proxy only creates a connection to the actual minion when it needs to issue a command. |
pillar file | Pillars are structures of data (files) defined on the Salt master and passed through to one or more minions when the minion needs access to the pillar file. Pillars allow confidential, targeted data to be securely sent only to the relevant minion. Because all configurations for delta proxy minions are done on the Salt master (not on the minions), you use pillar files to configure the delta proxy-minion service. |
top file | The top file is a pillar file that maps which states should be applied to different minions in certain environments. |
Pre-installation¶
Before you start¶
Before installing the delta proxy minion, ensure that:
- Your network device and firmware are supported.
- The Salt master that is acting as the control proxy minion has network access to the devices it is managing.
- You have installed, configured, and tested standard Salt proxy minions in your environment before introducing delta proxy minions into your environment.
Install or upgrade Salt¶
Ensure your Salt masters are running at least Salt version 3004. For instructions on installing or upgrading Salt, see repo.saltproject.io. For RedHat systems, see Install or Upgrade Salt.
Installation¶
Before you begin the delta proxy minion installation process, ensure you have read and completed the Pre-installation steps.
Overview of the installation process¶
Similar to proxy minions, all the delta proxy minion configurations are done on the Salt master rather than on the minions that will be managed. The installation process has the following phases:
1. Configure the master to use delta proxy - Create a configuration file on the Salt master that defines its proxy settings.
2. Create a pillar file for each managed device - Create a pillar file for each device that will be managed by the delta proxy minion and reference these minions in the top file.
3. Create a control proxy configuration file - Create a control proxy file that lists the devices that it will manage. Then, reference this file in the top file.
4. Start the delta proxy minion - Start the delta proxy-minion service and validate that it has been set up correctly.
Configure the master to use delta proxy¶
In this step, you'll create a configuration file on the Salt master that defines its proxy settings. This is a general configuration file that tells the Salt master how to handle all proxy minions.
To create this configuration:
1. On the Salt master, navigate to the /etc/salt directory. In this directory, create a file named proxy if one doesn't already exist.
2. Open the file in your preferred editor and add the following configuration information:
# Use the delta proxy metaproxy
metaproxy: deltaproxy

# Disable the FQDNS grain
enable_fqdns_grains: False

# Enable multiprocessing
multiprocessing: True
NOTE:
3. Save the file.
Your Salt master is now configured to use delta proxy. Next, you need to Create a pillar file for each managed device.
Delta proxy configuration options¶
The following table describes the configuration options used in the delta proxy configuration file:
Field | Description |
metaproxy | Set this configuration option to deltaproxy. If this option is set to proxy or if this line is not included in the file, the Salt master will use the standard proxy service instead of the delta proxy service. |
enable_fqdns_grains | If your router does not have the ability to use Reverse DNS lookup to obtain the Fully Qualified Domain Name (fqdn) for an IP address, you'll need to change the enable_fqdns_grains setting in the pillar configuration file to False instead. |
multiprocessing | Multi-processing is the ability to run more than one task or process at the same time. A delta proxy minion has the ability to run with multi-processing turned off. If you plan to run with multi-processing enabled, you should also set skip_connect_on_init to True. |
skip_connect_on_init | This setting tells the control proxy whether or not it should make a connection to the managed device when it starts. When set to True, the delta proxy minion will only connect when it needs to issue commands to the managed devices. |
Create a pillar file for each managed device¶
Each device that needs to be managed by delta proxy needs a separate pillar file on the Salt master. To create this file:
1. Navigate to the /srv/pillar directory.
2. In this directory create a new pillar file for a minion. For example, my_managed_device_pillar_file_01.sls.
3. Open the new file in your preferred editor and add the necessary configuration information for that minion and your environment. The following is an example pillar file for a Netmiko device:
proxy:
proxytype: netmiko
device_type: arista_eos
host: 192.0.2.1
username: myusername
password: mypassword
always_alive: True
NOTE:
- Salt proxy modules
- Netmiko Salt proxy module
4. Save the file.
5. In an editor, open the top file: /srv/pillar/top.sls.
6. Add a section to the top file that indicates the minion ID of the device that will be managed. Then, list the name of the pillar file you created in the previous steps. For example:
my_managed_device_minion_ID:
- my_managed_device_pillar_file_01
7. Repeat the previous steps for each minion that needs to be managed.
You've now created the pillar file for the minions that will be managed by the delta proxy minion and you have referenced these files in the top file. Proceed to the next section.
Create a control proxy configuration file¶
On the Salt master, you'll need to create or edit a control proxy file for each control proxy. The control proxy manages several devices and issues commands to the network devices it represents. The Salt master needs at least one control proxy, but it is possible to have more than one control proxy, each managing a different set of devices.
To configure a control proxy, you'll create a file that lists the minion IDs of the minions that it will manage. Then you will reference this control proxy configuration file in the top file.
To create a control proxy configuration file:
1. On the Salt master, navigate to the /srv/pillar directory. In this directory, create a new proxy configuration file. Give this file a descriptive name, such as control_proxy_01_configuration.sls.
2. Open the file in your preferred editor and add a list of the minion IDs for each device that needs to be managed. For example:
proxy:
proxytype: deltaproxy
ids:
- my_managed_device_01
- my_managed_device_02
- my_managed_device_03
3. Save the file.
4. In an editor, open the top file: /srv/pillar/top.sls.
5. Add a section to the top file that references the delta proxy control proxy. For example:
base:
my_managed_device_minion_01:
- my_managed_device_pillar_file_01
my_managed_device_minion_02:
- my_managed_device_pillar_file_02
my_managed_device_minion_03:
- my_managed_device_pillar_file_03
delta_proxy_control:
- control_proxy_01_configuration
6. Repeat the previous steps for each control proxy if needed.
7. In an editor, open the proxy config file: /etc/salt/proxy. Add a section for metaproxy and set its value to deltaproxy.
metaproxy: deltaproxy
Now that you have created the necessary configurations, proceed to the next section.
Start the delta proxy minion¶
After you've successfully configured the delta proxy minion, you need to start the proxy minion service for each managed device and validate that it is working correctly.
NOTE:
To start a single instance of a delta proxy minion and test that it is configured correctly:
1. In the terminal for the Salt master, run the following command, replacing the placeholder text with the actual minion ID:
sudo salt-proxy --proxyid=<control_proxy_id>
2. To test the delta proxy minion, run the following test.version command on the Salt master and target a specific minion. For example:
salt my_managed_device_minion_ID test.version
This command returns an output similar to the following:
local:
3004
After you've successfully started the delta proxy minions and verified that they are working correctly, you can now use these minions the same as standard proxy minions.
Additional resources¶
This reference section includes additional resources for delta proxy minions.
For reference, see:
- Salt proxy minions
- Salt proxy modules
- Netmiko Salt proxy module
Configuration file examples¶
- Example master configuration file
- Example minion configuration file
- Example proxy minion configuration file
Example master configuration file¶
##### Primary configuration settings ##### ########################################## # This configuration file is used to manage the behavior of the Salt Master. # Values that are commented out but have an empty line after the comment are # defaults that do not need to be set in the config. If there is no blank line # after the comment then the value is presented as an example and is not the # default. # Per default, the master will automatically include all config files # from master.d/*.conf (master.d is a directory in the same directory # as the main master config file). #default_include: master.d/*.conf # The address of the interface to bind to: #interface: 0.0.0.0 # Whether the master should listen for IPv6 connections. If this is set to True, # the interface option must be adjusted, too. (For example: "interface: '::'") #ipv6: False # The tcp port used by the publisher: #publish_port: 4505 # The user under which the salt master will run. Salt will update all # permissions to allow the specified user to run the master. The exception is # the job cache, which must be deleted if this user is changed. If the # modified files cause conflicts, set verify_env to False. #user: root # Tell the master to also use salt-ssh when running commands against minions. #enable_ssh_minions: False # The port used by the communication interface. The ret (return) port is the # interface used for the file server, authentication, job returns, etc. #ret_port: 4506 # Specify the location of the daemon process ID file: #pidfile: /var/run/salt-master.pid # The root directory prepended to these options: pki_dir, cachedir, # sock_dir, log_file, autosign_file, autoreject_file, extension_modules, # key_logfile, pidfile, autosign_grains_dir: #root_dir: / # The path to the master's configuration file. #conf_file: /etc/salt/master # Directory used to store public key data: #pki_dir: /etc/salt/pki/master # Key cache. Increases master speed for large numbers of accepted # keys. 
Available options: 'sched'. (Updates on a fixed schedule.) # Note that enabling this feature means that minions will not be # available to target for up to the length of the maintenance loop # which by default is 60s. #key_cache: '' # Directory to store job and cache data: # This directory may contain sensitive data and should be protected accordingly. # #cachedir: /var/cache/salt/master # Directory where custom modules sync to. This directory can contain # subdirectories for each of Salt's module types such as "runners", # "output", "wheel", "modules", "states", "returners", "engines", # "utils", etc. # # Note, any directories or files not found in the `module_dirs` # location will be removed from the extension_modules path. #extension_modules: /var/cache/salt/master/extmods # Directory for custom modules. This directory can contain subdirectories for # each of Salt's module types such as "runners", "output", "wheel", "modules", # "states", "returners", "engines", "utils", etc. #module_dirs: [] # Verify and set permissions on configuration directories at startup: #verify_env: True # Set the number of hours to keep old job information in the job cache. # This option is deprecated by the keep_jobs_seconds option. #keep_jobs: 24 # Set the number of seconds to keep old job information in the job cache: #keep_jobs_seconds: 86400 # The number of seconds to wait when the client is requesting information # about running jobs. #gather_job_timeout: 10 # Set the default timeout for the salt command and api. The default is 5 # seconds. #timeout: 5 # The loop_interval option controls the seconds for the master's maintenance # process check cycle. This process updates file server backends, cleans the # job cache and executes the scheduler. #loop_interval: 60 # Set the default outputter used by the salt command. The default is "nested". #output: nested # To set a list of additional directories to search for salt outputters, set the # outputter_dirs option. 
#outputter_dirs: []

# Set the default output file used by the salt command. Default is to output
# to the CLI and not to a file. Functions the same way as the "--out-file"
# CLI option, only sets this to a single file for all salt commands.
#output_file: None

# Return minions that time out when running commands like test.ping
#show_timeout: True

# Tell the client to display the jid when a job is published.
#show_jid: False

# By default, output is colored. To disable colored output, set the color
# value to False.
#color: True

# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False

# To display a summary of the number of minions targeted, the number of
# minions returned, and the number of minions that did not return, set the
# cli_summary value to True. (False by default.)
#
#cli_summary: False

# Set the directory used to hold unix sockets:
#sock_dir: /var/run/salt/master

# The master can take a while to start up when lspci and/or dmidecode is used
# to populate the grains for the master. Enable if you want to see GPU
# hardware data for your master.
# enable_gpu_grains: False

# The master maintains a job cache. While this is a great addition, it can be
# a burden on the master for larger deployments (over 5000 minions).
# Disabling the job cache will make previously executed jobs unavailable to
# the jobs system and is not generally recommended.
#job_cache: True

# Cache minion grains, pillar and mine data via the cache subsystem in the
# cachedir or a database.
#minion_data_cache: True

# Cache subsystem module to use for minion data cache.
#cache: localfs

# Enables a fast in-memory cache booster and sets the expiration time.
#memcache_expire_seconds: 0

# Set a memcache limit in items (bank + key) per cache storage
# (driver + driver_opts).
#memcache_max_items: 1024

# Each time a cache storage gets full, clean up all the expired items, not
# just the oldest one.
#memcache_full_cleanup: False

# Enable collecting memcache stats and log them at the `debug` log level.
#memcache_debug: False

# Store all returns in the given returner.
# Setting this option requires that any returner-specific configuration also
# be set. See various returners in salt/returners for details on required
# configuration values. (See also event_return_queue and
# event_return_queue_max_seconds below.)
#
#event_return: mysql

# On busy systems, enabling event_returns can cause a considerable load on
# the storage system for returners. Events can be queued on the master and
# stored in a batched fashion using a single transaction for multiple events.
# By default, events are not queued.
#event_return_queue: 0

# In some cases enabling event return queueing can be very helpful, but the
# bus may not be busy enough to flush the queue consistently. Setting this to
# a reasonable value (1-30 seconds) will cause the queue to be flushed when
# the oldest event is older than `event_return_queue_max_seconds`, regardless
# of how many events are in the queue.
#event_return_queue_max_seconds: 0

# Only return events matching tags in a whitelist; supports glob matches.
#event_return_whitelist:
#  - salt/master/a_tag
#  - salt/run/*/ret

# Store all event returns **except** the tags in a blacklist; supports globs.
#event_return_blacklist:
#  - salt/master/not_this_tag
#  - salt/wheel/*/ret

# Passing very large events can cause the minion to consume large amounts of
# memory. This value tunes the maximum size of a message allowed onto the
# master event bus. The value is expressed in bytes.
#max_event_size: 1048576

# Windows platforms lack posix IPC and must rely on slower TCP based inter-
# process communications. Set ipc_mode to 'tcp' on such systems.
#ipc_mode: ipc

# Overwrite the default tcp ports used by the minion when ipc_mode is set
# to 'tcp'.
#tcp_master_pub_port: 4510
#tcp_master_pull_port: 4511

# By default, the master AES key rotates every 24 hours. The next command
# following a key rotation will trigger a key refresh from the minion,
# which may result in minions which do not respond to the first command
# after a key refresh.
#
# To tell the master to ping all minions immediately after an AES key
# refresh, set ping_on_rotate to True. This should mitigate the issue where
# a minion does not appear to initially respond after a key is rotated.
#
# Note that ping_on_rotate may cause high load on the master immediately
# after the key rotation event as minions reconnect. Consider this carefully
# if this salt master is managing a large number of minions.
#
# If disabled, it is recommended to handle this event by listening for the
# 'aes_key_rotate' event with the 'key' tag and acting appropriately.
# ping_on_rotate: False

# By default, the master deletes its cache of minion data when the key for
# that minion is removed. To preserve the cache after key deletion, set
# 'preserve_minion_cache' to True.
#
# WARNING: This may have security implications if compromised minions
# authenticate with a previously deleted minion ID.
#preserve_minion_cache: False

# Allow or deny minions from requesting their own key revocation.
#allow_minion_key_revoke: True

# If max_minions is used in large installations, the master might experience
# high-load situations because of having to check the number of connected
# minions for every authentication. This cache provides the minion-ids of
# all connected minions to all MWorker-processes and greatly improves the
# performance of max_minions.
# con_cache: False

# The master can include configuration from other files. To enable this,
# pass a list of paths to this option. The paths can be either relative or
# absolute; if relative, they are considered to be relative to the directory
# the main master configuration file lives in (this file). Paths can make use
# of shell-style globbing. If no files are matched by a path passed to this
# option, then the master will log a warning message.
#
# Include a config file from some other path:
# include: /etc/salt/extra_config
#
# Include config from several files and directories:
# include:
#   - /etc/salt/extra_config

#####   Large-scale tuning settings   #####
##########################################

# Max open files
#
# Each minion connecting to the master uses AT LEAST one file descriptor, the
# master subscription connection. If enough minions connect you might start
# seeing on the console (and then salt-master crashes):
#   Too many open files (tcp_listener.cpp:335)
#   Aborted (core dumped)
#
# By default this value will be the one of `ulimit -Hn`, i.e., the hard limit
# for max open files.
#
# If you wish to set a different value than the default one, uncomment and
# configure this setting. Remember that this value CANNOT be higher than the
# hard limit. Raising the hard limit depends on your OS and/or distribution;
# a good way to find the limit is to search the internet. For example:
#   raise max open files hard limit debian
#
#max_open_files: 100000

# The number of worker threads to start. These threads are used to manage
# return calls made from minions to the master. If the master seems to be
# running slowly, increase the number of threads. This setting cannot be
# set lower than 3.
#worker_threads: 5

# Set the ZeroMQ high water marks
# http://api.zeromq.org/3-2:zmq-setsockopt

# The listen queue size / backlog
#zmq_backlog: 1000

# The publisher interface ZeroMQPubServerChannel
#pub_hwm: 1000

# The master may allocate memory per-event and not reclaim it.
# To set a high-water mark for memory allocation, use ipc_write_buffer to
# set a high-water mark for message buffering.
# Value: In bytes. Set to 'dynamic' to have Salt select a value for you.
# Default is disabled.
# ipc_write_buffer: 'dynamic'

# These two batch settings, batch_safe_limit and batch_safe_size, are used to
# automatically switch to a batch mode execution. If a command would have
# been sent to more than <batch_safe_limit> minions, then run the command in
# batches of <batch_safe_size>. If no batch_safe_size is specified, a default
# of 8 will be used. If no batch_safe_limit is specified, then no automatic
# batching will occur.
#batch_safe_limit: 100
#batch_safe_size: 8

# Master stats enables stats events to be fired from the master at close
# to the defined interval.
#master_stats: False
#master_stats_event_iter: 60

#####       Security settings         #####
##########################################
# Enable passphrase protection of the Master private key. Although a string
# value is acceptable, passwords should be stored in an external vaulting
# mechanism and retrieved via sdb.
# See https://docs.saltproject.io/en/latest/topics/sdb/.
# Passphrase protection is off by default, but an example of an sdb profile
# and query is as follows.
# masterkeyring:
#   driver: keyring
#   service: system
#
# key_pass: sdb://masterkeyring/key_pass

# Enable passphrase protection of the Master signing_key. This only applies
# if master_sign_pubkey is set to True. This is disabled by default.
# master_sign_pubkey: True
# signing_key_pass: sdb://masterkeyring/signing_pass

# Enable "open mode". This mode still maintains encryption, but turns off
# authentication. This is only intended for highly secure environments or for
# the situation where your keys end up in a bad state. If you run in open
# mode you do so at your own risk!
#open_mode: False

# Enable auto_accept. This setting will automatically accept all incoming
# public keys from the minions. Note that this is insecure.
#auto_accept: False

# The size of key that should be generated when creating new keys.
#keysize: 2048

# Time in minutes that an incoming public key with a matching name found in
# pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign
# keys are removed when the master checks the minion_autosign directory.
# 0 equals no timeout
# autosign_timeout: 120

# If the autosign_file is specified, incoming keys specified in the
# autosign_file will be automatically accepted. This is insecure. Regular
# expressions as well as globbing lines are supported. The file must be
# read-only except for the owner. Use permissive_pki_access to allow the
# group write access.
#autosign_file: /etc/salt/autosign.conf

# Works like autosign_file, but instead allows you to specify minion IDs for
# which keys will automatically be rejected. Will override both membership in
# the autosign_file and the auto_accept setting.
#autoreject_file: /etc/salt/autoreject.conf

# If the autosign_grains_dir is specified, incoming keys from minions with
# grain values matching those defined in files in this directory will be
# accepted automatically. This is insecure. Minions need to be configured to
# send the grains.
#autosign_grains_dir: /etc/salt/autosign_grains

# Enable permissive access to the salt keys. This allows you to run the
# master or minion as root, but have a non-root group be given access to
# your pki_dir. To make the access explicit, root must belong to the group
# you've given access to. This is potentially quite insecure. If an
# autosign_file is specified, enabling permissive_pki_access will allow group
# access to that specific file.
#permissive_pki_access: False

# Allow users on the master access to execute specific commands on minions.
# This setting should be treated with care since it opens up execution
# capabilities to non root users. By default this capability is completely
# disabled.
#publisher_acl:
#  larry:
#    - test.ping
#    - network.*
#
# Blacklist any of the following users or modules
#
# This example would blacklist all non sudo users, including root, from
# running any commands. It would also blacklist any use of the "cmd"
# module. This is completely disabled by default.
#
# Check the list of configured users in client ACL against users on the
# system and throw errors if they do not exist.
#client_acl_verify: True
#
#publisher_acl_blacklist:
#  users:
#    - root
#    - '^(?!sudo_).*$'   #  all non sudo users
#  modules:
#    - cmd

# Enforce publisher_acl & publisher_acl_blacklist when users have sudo
# access to the salt command.
#
#sudo_acl: False

# The external auth system uses the Salt auth modules to authenticate and
# validate users to access areas of the Salt system.
#external_auth:
#  pam:
#    fred:
#      - test.*
#
# Time (in seconds) for a newly generated token to live. Default: 12 hours
#token_expire: 43200
#
# Allow eauth users to specify the expiry time of the tokens they generate.
# A boolean applies to all users, or a dictionary of whitelisted eauth
# backends and usernames may be given.
# token_expire_user_override:
#   pam:
#     - fred
#     - tom
#   ldap:
#     - gary
#
#token_expire_user_override: False

# Set to True to enable keeping the calculated user's auth list in the token
# file. This is disabled by default and the auth list is calculated or
# requested from the eauth driver each time.
#
# Note: `keep_acl_in_token` will be forced to True when using external
# authentication for the REST API (`rest` is present under `external_auth`).
# This is because the REST API does not store the password, and can therefore
# not retroactively fetch the ACL, so the ACL must be stored in the token.
#keep_acl_in_token: False

# Auth subsystem module to use to get authorized access list for a user. By
# default it's the same module used for external authentication.
#eauth_acl_module: django

# Allow minions to push files to the master. This is disabled by default, for
# security purposes.
#file_recv: False

# Set a hard limit on the size of the files that can be pushed to the master.
# It will be interpreted as megabytes. Default: 100
#file_recv_max_size: 100

# Signature verification on messages published from the master.
# This causes the master to cryptographically sign all messages published to
# its event bus, and minions then verify that signature before acting on the
# message.
#
# This is False by default.
#
# Note that to facilitate interoperability with masters and minions that are
# different versions, if sign_pub_messages is True but a message is received
# by a minion with no signature, it will still be accepted, and a warning
# message will be logged. Conversely, if sign_pub_messages is False, but a
# minion receives a signed message, it will be accepted, the signature will
# not be checked, and a warning message will be logged. This behavior went
# away in Salt 2014.1.0; these two situations will cause the minion to throw
# an exception and drop the message.
# sign_pub_messages: False

# Signature verification on messages published from minions
# This requires that minions cryptographically sign the messages they
# publish to the master. If minions are not signing, then log this
# information at loglevel 'INFO' and drop the message without acting on it.
# require_minion_sign_messages: False

# The below will drop messages when their signatures do not validate.
# Note that when this option is False but `require_minion_sign_messages` is
# True, minions MUST sign their messages, but the validity of their
# signatures is ignored.
# These two config options exist so a Salt infrastructure can be moved
# to signing minion messages gradually.
# drop_messages_signature_fail: False

# Use TLS/SSL encrypted connection between master and minion.
# Can be set to a dictionary containing keyword arguments corresponding to
# Python's 'ssl.wrap_socket' method.
# Default is None.
#ssl:
#    keyfile: <path_to_keyfile>
#    certfile: <path_to_certfile>
#    ssl_version: PROTOCOL_TLSv1_2

#####     Salt-SSH Configuration      #####
##########################################

# Define the default salt-ssh roster module to use
#roster: flat

# Pass in an alternative location for the salt-ssh `flat` roster file
#roster_file: /etc/salt/roster

# Define locations for `flat` roster files so they can be chosen when using
# Salt API. An administrator can place roster files into these locations.
# Then, when calling Salt API, the parameter 'roster_file' should contain a
# relative path to these locations. That is, "roster_file=/foo/roster" will
# be resolved as "/etc/salt/roster.d/foo/roster" etc. This feature prevents
# passing insecure custom rosters through the Salt API.
#
#rosters:
#  - /etc/salt/roster.d
#  - /opt/salt/some/more/rosters

# The ssh password to log in with.
#ssh_passwd: ''

# The target system's ssh port number.
#ssh_port: 22

# Comma-separated list of ports to scan.
#ssh_scan_ports: 22

# Scanning socket timeout for salt-ssh.
#ssh_scan_timeout: 0.01

# Boolean to run command via sudo.
#ssh_sudo: False

# Boolean to run the ssh_pre_flight script defined in the roster. By default
# the script will only run if the thin_dir does not exist on the targeted
# minion. This forces the script to run regardless of the thin dir existing
# or not.
#ssh_run_pre_flight: True

# Number of seconds to wait for a response when establishing an SSH
# connection.
#ssh_timeout: 60

# The user to log in as.
#ssh_user: root

# The log file of the salt-ssh command:
#ssh_log_file: /var/log/salt/ssh

# Pass in minion option overrides that will be inserted into the SHIM for
# salt-ssh calls. The local minion config is not used for salt-ssh. Can be
# overridden on a per-minion basis in the roster (`minion_opts`).
#ssh_minion_opts:
#  gpg_keydir: /root/gpg

# Set this to True to default to using ~/.ssh/id_rsa for salt-ssh
# authentication with minions
#ssh_use_home_key: False

# Set this to True to default salt-ssh to run with ``-o IdentitiesOnly=yes``.
# This option is intended for situations where the ssh-agent offers many
# different identities and allows ssh to ignore those identities and use the
# only one specified in options.
#ssh_identities_only: False

# List-only nodegroups for salt-ssh. Each group must be formed as either a
# comma-separated list, or a YAML list. This option is useful to group
# minions into easy-to-target groups when using salt-ssh. These groups can
# then be targeted with the normal -N argument to salt-ssh.
#ssh_list_nodegroups: {}

# salt-ssh has the ability to update the flat roster file if a minion is not
# found in the roster. Set this to True to enable it.
#ssh_update_roster: False

#####    Master Module Management     #####
##########################################
# Manage how master side modules are loaded.

# Add any additional locations to look for master runners:
#runner_dirs: []

# Add any additional locations to look for master utils:
#utils_dirs: []

# Enable Cython for master side modules:
#cython_enable: False

#####      State System settings      #####
##########################################
# The state system uses a "top" file to tell the minions what environment to
# use and what modules to use. The state_top file is defined relative to the
# root of the base environment as defined in "File Server settings" below.
#state_top: top.sls

# The master_tops option replaces the external_nodes option by creating
# a pluggable system for the generation of external top data. The
# external_nodes option is deprecated by the master_tops option.
#
# To gain the capabilities of the classic external_nodes system, use the
# following configuration:
# master_tops:
#   ext_nodes: <Shell command which returns yaml>
#
#master_tops: {}

# The renderer to use on the minions to render the state data
#renderer: jinja|yaml

# Default Jinja environment options for all templates except sls templates
#jinja_env:
#  block_start_string: '{%'
#  block_end_string: '%}'
#  variable_start_string: '{{'
#  variable_end_string: '}}'
#  comment_start_string: '{#'
#  comment_end_string: '#}'
#  line_statement_prefix:
#  line_comment_prefix:
#  trim_blocks: False
#  lstrip_blocks: False
#  newline_sequence: '\n'
#  keep_trailing_newline: False

# Jinja environment options for sls templates
#jinja_sls_env:
#  block_start_string: '{%'
#  block_end_string: '%}'
#  variable_start_string: '{{'
#  variable_end_string: '}}'
#  comment_start_string: '{#'
#  comment_end_string: '#}'
#  line_statement_prefix:
#  line_comment_prefix:
#  trim_blocks: False
#  lstrip_blocks: False
#  newline_sequence: '\n'
#  keep_trailing_newline: False

# The failhard option tells the minions to stop immediately after the first
# failure detected in the state execution. Defaults to False.
#failhard: False

# The state_verbose and state_output settings can be used to change the way
# state system data is printed to the display. By default all data is
# printed. The state_verbose setting can be set to True or False; when set
# to False, all data that has a result of True and no changes will be
# suppressed.
#state_verbose: True

# The state_output setting controls which results will be output full
# multi line:
#   full, terse - each state will be full/terse
#   mixed - only states with errors will be full
#   changes - states with changes and errors will be full
# full_id, mixed_id, changes_id and terse_id are also allowed;
# when set, the state ID will be used as name in the output.
#state_output: full

# The state_output_diff setting changes whether or not the output from
# successful states is returned. Useful when even the terse output of these
# states is cluttering the logs. Set it to True to ignore them.
#state_output_diff: False

# The state_output_profile setting changes whether profile information
# will be shown for each state run.
#state_output_profile: True

# The state_output_pct setting changes whether success and failure
# information, as a percent of total actions, will be shown for each state
# run.
#state_output_pct: False

# The state_compress_ids setting aggregates information about states which
# have multiple "names" under the same state ID in the highstate output.
#state_compress_ids: False

# Automatically aggregate all states that have support for mod_aggregate by
# setting to 'True'. Or pass a list of state module names to automatically
# aggregate just those types.
#
# state_aggregate:
#   - pkg
#
#state_aggregate: False

# Send progress events as each function in a state run completes execution
# by setting to 'True'. Progress events are in the format
# 'salt/job/<JID>/prog/<MID>/<RUN NUM>'.
#state_events: False

#####      File Server settings       #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.

# The file server works on environments passed to the master. Each
# environment can have multiple root directories. The subdirectories in the
# multiple file roots cannot match, otherwise the downloaded files will not
# be able to be reliably ensured. A base environment is required to house the
# top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
#
#file_roots:
#  base:
#    - /srv/salt
#

# The master_roots setting configures a master-only copy of the file_roots
# dictionary, used by the state compiler.
#master_roots:
#  base:
#    - /srv/salt-master

# When using multiple environments, each with their own top file, the
# default behaviour is an unordered merge. To prevent top files from
# being merged together and instead to only use the top file from the
# requested environment, set this value to 'same'.
#top_file_merging_strategy: merge

# To specify the order in which environments are merged, set the ordering
# in the env_order option. Given a conflict, the last matching value will
# win.
#env_order: ['base', 'dev', 'prod']

# If top_file_merging_strategy is set to 'same' and an environment does not
# contain a top file, the top file in the environment specified by
# default_top will be used instead.
#default_top: base

# The hash_type is the hash to use when discovering the hash of a file on
# the master server. The default is sha256, but md5, sha1, sha224, sha384
# and sha512 are also supported.
#
# WARNING: While md5 and sha1 are also supported, do not use them due to the
# high chance of possible collisions and thus security breach.
#
# Prior to changing this value, the master should be stopped and all Salt
# caches should be cleared.
#hash_type: sha256

# The buffer size in the file server can be adjusted here:
#file_buffer_size: 1048576

# A regular expression (or a list of expressions) that will be matched
# against the file path before syncing the modules and states to the minions.
# This includes files affected by the file.recurse state.
# For example, if you manage your custom modules and states in subversion
# and don't want all the '.svn' folders and content synced to your minions,
# you could set this to '/\.svn($|/)'. By default nothing is ignored.
#file_ignore_regex:
#  - '/\.svn($|/)'
#  - '/\.git($|/)'

# A file glob (or list of file globs) that will be matched against the file
# path before syncing the modules and states to the minions. This is similar
# to file_ignore_regex above, but works on globs instead of regex. By default
# nothing is ignored.
# file_ignore_glob:
#  - '*.pyc'
#  - '*/somefolder/*.bak'
#  - '*.swp'

# File Server Backend
#
# Salt supports a modular fileserver backend system. This system allows
# the salt master to link directly to third party systems to gather and
# manage the files available to minions. Multiple backends can be
# configured and will be searched for the requested file in the order in
# which they are defined here. The default setting only enables the standard
# backend "roots", which uses the "file_roots" option.
#fileserver_backend:
#  - roots
#
# To use multiple backends, list them in the order they are searched:
#fileserver_backend:
#  - git
#  - roots
#
# Uncomment the line below if you do not want the file_server to follow
# symlinks when walking the filesystem tree. This is set to True
# by default. Currently this only applies to the default roots
# fileserver_backend.
#fileserver_followsymlinks: False
#
# Uncomment the line below if you do not want symlinks to be
# treated as the files they are pointing to. By default this is set to
# False. By uncommenting the line below, any detected symlink while listing
# files on the Master will not be returned to the Minion.
#fileserver_ignoresymlinks: True
#
# The fileserver can fire events off every time the fileserver is updated.
# These are disabled by default, but can be easily turned on by setting this
# flag to True.
#fileserver_events: False

# Git File Server Backend Configuration
#
# Optional parameter used to specify the provider to be used for gitfs. Must
# be either pygit2 or gitpython. If unset, then both will be tried (in that
# order), and the first one with a compatible version installed will be the
# provider that is used.
#
#gitfs_provider: pygit2

# Along with gitfs_password, is used to authenticate to HTTPS remotes.
# gitfs_user: ''

# Along with gitfs_user, is used to authenticate to HTTPS remotes.
# This parameter is not required if the repository does not use
# authentication.
#gitfs_password: ''

# By default, Salt will not authenticate to an HTTP (non-HTTPS) remote.
# This parameter enables authentication over HTTP. Enable this at your own
# risk.
#gitfs_insecure_auth: False

# Along with gitfs_privkey (and optionally gitfs_passphrase), is used to
# authenticate to SSH remotes. This parameter (or its per-remote counterpart)
# is required for SSH remotes.
#gitfs_pubkey: ''

# Along with gitfs_pubkey (and optionally gitfs_passphrase), is used to
# authenticate to SSH remotes. This parameter (or its per-remote counterpart)
# is required for SSH remotes.
#gitfs_privkey: ''

# This parameter is optional, required only when the SSH key being used to
# authenticate is protected by a passphrase.
#gitfs_passphrase: ''

# When using the git fileserver backend, at least one git remote needs to be
# defined. The user running the salt master will need read access to the
# repo.
#
# The repos will be searched in order to find the file requested by a client,
# and the first repo to have the file will return it.
# When using the git backend, branches and tags are translated into salt
# environments.
# Note: file:// repos will be treated as a remote, so refs you want used must
# exist in that repo as *local* refs.
#gitfs_remotes:
#  - git://github.com/saltstack/salt-states.git
#  - file:///var/git/saltmaster
#
# The gitfs_ssl_verify option specifies whether to ignore ssl certificate
# errors when contacting the gitfs backend. You might want to set this to
# false if you're using a git backend that uses a self-signed certificate,
# but keep in mind that setting this flag to anything other than the default
# of True is a security concern; you may want to try using the ssh transport.
#gitfs_ssl_verify: True
#
# The gitfs_root option gives the ability to serve files from a subdirectory
# within the repository. The path is defined relative to the root of the
# repository and defaults to the repository root.
#gitfs_root: somefolder/otherfolder
#
# The refspecs fetched by gitfs remotes
#gitfs_refspecs:
#  - '+refs/heads/*:refs/remotes/origin/*'
#  - '+refs/tags/*:refs/tags/*'
#
#
#####        Pillar settings          #####
##########################################
# Salt Pillars allow for the building of global data that can be made
# selectively available to different minions based on minion grain filtering.
# The Salt Pillar is laid out in the same fashion as the file server, with
# environments, a top file and sls files. However, pillar data does not need
# to be in the highstate format, and is generally just key/value pairs.
#pillar_roots:
#  base:
#    - /srv/pillar
#
#ext_pillar:
#  - hiera: /etc/hiera.yaml
#  - cmd_yaml: cat /etc/salt/yaml

# A list of paths to be recursively decrypted during pillar compilation.
# Entries in this list can be formatted either as a simple string, or as a
# key/value pair, with the key being the pillar location, and the value being
# the renderer to use for pillar decryption. If the former is used, the
# renderer specified by decrypt_pillar_default will be used.
#decrypt_pillar:
#  - 'foo:bar': gpg
#  - 'lorem:ipsum:dolor'

# The delimiter used to distinguish nested data structures in the
# decrypt_pillar option.
#decrypt_pillar_delimiter: ':'

# The default renderer used for decryption, if one is not specified for a
# given pillar key in decrypt_pillar.
#decrypt_pillar_default: gpg

# List of renderers which are permitted to be used for pillar decryption.
#decrypt_pillar_renderers:
#  - gpg

# If this is `True` and the ciphertext could not be decrypted, then an error
# is raised.
#gpg_decrypt_must_succeed: False

# The ext_pillar_first option allows for external pillar sources to populate
# before file system pillar. This allows for targeting file system pillar
# from ext_pillar.
#ext_pillar_first: False

# The external pillars permitted to be used on-demand using pillar.ext
#on_demand_ext_pillar:
#  - libvirt
#  - virtkey

# The pillar_gitfs_ssl_verify option specifies whether to ignore ssl
# certificate errors when contacting the pillar gitfs backend. You might want
# to set this to false if you're using a git backend that uses a self-signed
# certificate, but keep in mind that setting this flag to anything other than
# the default of True is a security concern; you may want to try using the
# ssh transport.
#pillar_gitfs_ssl_verify: True

# The pillar_opts option adds the master configuration file data to a dict in
# the pillar called "master". This is used to set simple configurations in
# the master config file that can then be used on minions.
#pillar_opts: False

# The pillar_safe_render_error option prevents the master from passing pillar
# render errors to the minion. This is on by default because the error could
# contain templating data which would give that minion information it
# shouldn't have, like a password! When set to True the error message will
# only show:
#   Rendering SLS 'my.sls' failed. Please see master log for details.
#pillar_safe_render_error: True

# The pillar_source_merging_strategy option allows you to configure the
# merging strategy between different sources. It accepts five values: none,
# recurse, aggregate, overwrite, or smart. None will not do any merging at
# all. Recurse will merge mappings of data recursively. Aggregate instructs
# aggregation of elements between sources that use the #!yamlex renderer.
# Overwrite will overwrite elements according to the order in which they are
# processed. This is the behavior of the 2014.1 branch and earlier. Smart
# guesses the best strategy based on the "renderer" setting and is the
# default value.
#pillar_source_merging_strategy: smart

# Recursively merge lists by aggregating them instead of replacing them.
#pillar_merge_lists: False

# Set this option to True to force the pillarenv to be the same as the effective
# saltenv when running states. If pillarenv is specified this option will be
# ignored.
#pillarenv_from_saltenv: False

# Set this option to 'True' to force a 'KeyError' to be raised whenever an
# attempt to retrieve a named value from pillar fails. When this option is set
# to 'False', the failed attempt returns an empty string. Default is 'False'.
#pillar_raise_on_missing: False

# Git External Pillar (git_pillar) Configuration Options
#
# Specify the provider to be used for git_pillar. Must be either pygit2 or
# gitpython. If unset, then both will be tried in that same order, and the
# first one with a compatible version installed will be the provider that
# is used.
#git_pillar_provider: pygit2

# If the desired branch matches this value, and the environment is omitted
# from the git_pillar configuration, then the environment for that git_pillar
# remote will be base.
#git_pillar_base: master

# If the branch is omitted from a git_pillar remote, then this branch will
# be used instead.
#git_pillar_branch: master

# Environment to use for git_pillar remotes. This is normally derived from
# the branch/tag (or from a per-remote env parameter), but if set this will
# override the process of deriving the env from the branch/tag name.
#git_pillar_env: ''

# Path relative to the root of the repository where the git_pillar top file
# and SLS files are located.
#git_pillar_root: ''

# Specifies whether or not to ignore SSL certificate errors when contacting
# the remote repository.
#git_pillar_ssl_verify: False

# When set to False, if there is an update/checkout lock for a git_pillar
# remote and the pid written to it is not running on the master, the lock
# file will be automatically cleared and a new lock will be obtained.
#git_pillar_global_lock: True

# Git External Pillar Authentication Options
#
# Along with git_pillar_password, is used to authenticate to HTTPS remotes.
#git_pillar_user: ''

# Along with git_pillar_user, is used to authenticate to HTTPS remotes.
# This parameter is not required if the repository does not use authentication.
#git_pillar_password: ''

# By default, Salt will not authenticate to an HTTP (non-HTTPS) remote.
# This parameter enables authentication over HTTP.
#git_pillar_insecure_auth: False

# Along with git_pillar_privkey (and optionally git_pillar_passphrase),
# is used to authenticate to SSH remotes.
#git_pillar_pubkey: ''

# Along with git_pillar_pubkey (and optionally git_pillar_passphrase),
# is used to authenticate to SSH remotes.
#git_pillar_privkey: ''

# This parameter is optional, required only when the SSH key being used
# to authenticate is protected by a passphrase.
#git_pillar_passphrase: ''

# The refspecs fetched by git_pillar remotes
#git_pillar_refspecs:
#  - '+refs/heads/*:refs/remotes/origin/*'
#  - '+refs/tags/*:refs/tags/*'

# A master can cache pillars locally to bypass the expense of having to render them
# for each minion on every request. This feature should only be enabled in cases
# where pillar rendering time is known to be unsatisfactory and any attendant security
# concerns about storing pillars in a master cache have been addressed.
#
# When enabling this feature, be certain to read through the additional ``pillar_cache_*``
# configuration options to fully understand the tunable parameters and their implications.
#
# Note: setting ``pillar_cache: True`` has no effect on targeting Minions with Pillars.
# See https://docs.saltproject.io/en/latest/topics/targeting/pillar.html
#pillar_cache: False

# If and only if a master has set ``pillar_cache: True``, the cache TTL controls the amount
# of time, in seconds, before the cache is considered invalid by a master and a fresh
# pillar is recompiled and stored.
#pillar_cache_ttl: 3600

# If and only if a master has set `pillar_cache: True`, one of several storage providers
# can be utilized.
#
#   disk: The default storage backend. This caches rendered pillars to the master cache.
#         Rendered pillars are serialized and deserialized as msgpack structures for speed.
#         Note that pillars are stored UNENCRYPTED. Ensure that the master cache
#         has permissions set appropriately. (Sane defaults are provided.)
#
#   memory: [EXPERIMENTAL] An optional backend for pillar caches which uses a pure-Python
#           in-memory data structure for maximal performance. There are several caveats,
#           however. First, because each master worker contains its own in-memory cache,
#           there is no guarantee of cache consistency between minion requests. This
#           works best in situations where the pillar rarely if ever changes. Secondly,
#           and perhaps more importantly, this means that unencrypted pillars will
#           be accessible to any process which can examine the memory of the ``salt-master``!
#           This may represent a substantial security risk.
#
#pillar_cache_backend: disk

# A master can also cache GPG data locally to bypass the expense of having to render it
# for each minion on every request. This feature should only be enabled in cases
# where pillar rendering time is known to be unsatisfactory and any attendant security
# concerns about storing decrypted GPG data in a master cache have been addressed.
#
# When enabling this feature, be certain to read through the additional ``gpg_cache_*``
# configuration options to fully understand the tunable parameters and their implications.
#gpg_cache: False

# If and only if a master has set ``gpg_cache: True``, the cache TTL controls the amount
# of time, in seconds, before the cache is considered invalid by a master and a fresh
# pillar is recompiled and stored.
#gpg_cache_ttl: 86400

# If and only if a master has set `gpg_cache: True`, one of several storage providers
# can be utilized.
# Available options are the same as ``pillar_cache_backend``.
#gpg_cache_backend: disk


#####        Reactor Settings        #####
##########################################
# Define a salt reactor. See https://docs.saltproject.io/en/latest/topics/reactor/
#reactor: []

# Set the TTL for the cache of the reactor configuration.
#reactor_refresh_interval: 60

# Configure the number of workers for the runner/wheel in the reactor.
#reactor_worker_threads: 10

# Define the queue size for workers in the reactor.
#reactor_worker_hwm: 10000


#####         Syndic settings        #####
##########################################
# The Salt syndic is used to pass commands through a master from a higher
# master. Using the syndic is simple. If this is a master that will have
# syndic server(s) below it, then set the "order_masters" setting to True.
#
# If this is a master that will be running a syndic daemon for passthrough, then
# the "syndic_master" setting needs to be set to the location of the master server
# to receive commands from.

# Set the order_masters setting to True if this master will command lower
# masters' syndic interfaces.
#order_masters: False

# If this master will be running a salt syndic daemon, syndic_master tells
# this master where to receive commands from.
#syndic_master: masterofmasters

# This is the 'ret_port' of the MasterOfMaster:
#syndic_master_port: 4506

# PID file of the syndic daemon:
#syndic_pidfile: /var/run/salt-syndic.pid

# The log file of the salt-syndic daemon:
#syndic_log_file: /var/log/salt/syndic

# The behaviour of the multi-syndic when the connection to a master of masters
# fails. Can specify ``random`` (default) or ``ordered``. If set to ``random``,
# masters will be iterated in random order. If ``ordered`` is specified, the
# configured order will be used.
#syndic_failover: random

# The number of seconds for the salt client to wait for additional syndics to
# check in with their lists of expected minions before giving up.
#syndic_wait: 5


#####      Peer Publish settings     #####
##########################################
# Salt minions can send commands to other minions, but only if the minion is
# allowed to. By default "Peer Publication" is disabled, and when enabled it
# is enabled for specific minions and specific commands. This allows secure
# compartmentalization of commands based on individual minions.

# The configuration uses regular expressions to match minions and then a list
# of regular expressions to match functions. The following will allow the
# minion authenticated as foo.example.com to execute functions from the test
# and pkg modules.
#peer:
#  foo.example.com:
#    - test.*
#    - pkg.*
#
# This will allow all minions to execute all commands:
#peer:
#  .*:
#    - .*
#
# This is not recommended, since it would allow anyone who gets root on any
# single minion to instantly have root on all of the minions!

# Minions can also be allowed to execute runners from the salt master.
# Since executing a runner from the minion could be considered a security risk,
# it needs to be enabled. This setting functions just like the peer setting
# except that it opens up runners instead of module functions.
#
# All peer runner support is turned off by default and must be enabled before
# using. This will enable all peer runners for all minions:
#peer_run:
#  .*:
#    - .*
#
# To enable just the manage.up runner for the minion foo.example.com:
#peer_run:
#  foo.example.com:
#    - manage.up
#
#
#####         Mine settings      #####
#####################################
# Restrict mine.get access from minions. By default any minion has full access
# to get all mine data from the master cache. In the ACL definition below, only
# pcre matches are allowed.
# mine_get:
#   .*:
#     - .*
#
# The example below enables minion foo.example.com to get 'network.interfaces' mine
# data only, minions web* to get all network.* and disk.* mine data and all other
# minions won't get any mine data.
# mine_get:
#   foo.example.com:
#     - network.interfaces
#   web.*:
#     - network.*
#     - disk.*


#####        Logging settings        #####
##########################################
# The location of the master log file
# The master log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8) (e.g.:
# ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI
# format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/master
#log_file: file:///dev/log
#log_file: udp://loghost:10514

#log_file: /var/log/salt/master
#key_logfile: /var/log/salt/key

# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
#
# The following log levels are considered INSECURE and may log sensitive data:
# ['garbage', 'trace', 'debug']
#
#log_level: warning

# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
#log_level_logfile: warning

# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#
# Console log colors are specified by these additional formatters:
#
# %(colorlevel)s
# %(colorname)s
# %(colorprocess)s
# %(colormsg)s
#
# Since it is desirable to include the surrounding brackets, '[' and ']', in
# the coloring of the messages, these color formatters also include padding as
# well. Color LogRecord attributes are only available for console logging.
#
#log_fmt_console: '%(colorlevel)s %(colormsg)s'
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#
#log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'

# This can be used to control logging levels more specifically. This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
#   log_granular_levels:
#     'salt': 'warning'
#     'salt.modules': 'debug'
#
#log_granular_levels: {}


#####         Node Groups            #####
##########################################
# Node groups allow for logical groupings of minion nodes. A group consists of
# a group name and a compound target. Nodegroups can reference other nodegroups
# with the 'N@' classifier. Ensure that you do not have circular references.
#
#nodegroups:
#  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
#  group2: 'G@os:Debian and foo.domain.com'
#  group3: 'G@os:Debian and N@group1'
#  group4:
#    - 'G@foo:bar'
#    - 'or'
#    - 'G@foo:baz'


#####     Range Cluster settings     #####
##########################################
# The range server (and optional port) that serves your cluster information
# https://github.com/ytoolshed/range/wiki/%22yamlfile%22-module-file-spec
#
#range_server: range:80


##### Windows Software Repo settings #####
###########################################
# Location of the repo on the master:
#winrepo_dir_ng: '/srv/salt/win/repo-ng'
#
# List of git repositories to include with the local repo:
#winrepo_remotes_ng:
#  - 'https://github.com/saltstack/salt-winrepo-ng.git'


##### Windows Software Repo settings - Pre 2015.8 #####
########################################################
# Legacy repo settings for pre-2015.8 Windows minions.
#
# Location of the repo on the master:
#winrepo_dir: '/srv/salt/win/repo'
#
# Location of the master's repo cache file:
#winrepo_mastercachefile: '/srv/salt/win/repo/winrepo.p'
#
# List of git repositories to include with the local repo:
#winrepo_remotes:
#  - 'https://github.com/saltstack/salt-winrepo.git'

# The refspecs fetched by winrepo remotes
#winrepo_refspecs:
#  - '+refs/heads/*:refs/remotes/origin/*'
#  - '+refs/tags/*:refs/tags/*'
#
#####       Returner settings        ######
############################################
# Which returner(s) will be used for a minion's results:
#return: mysql


######   Miscellaneous settings      ######
############################################
# Default match type for filtering events tags: startswith, endswith, find, regex, fnmatch
#event_match_type: startswith

# Save runner returns to the job cache
#runner_returns: True

# Permanently include any available Python 3rd party modules into thin and minimal Salt
# when they are generated for Salt-SSH or other purposes.
# The modules should be named by the names used to import them in Python.
# The value of the parameters can be either one module or a comma-separated list of them.
#thin_extra_mods: foo,bar
#min_extra_mods: foo,bar,baz


######       Keepalive settings      ######
############################################
# Warning: Failure to set TCP keepalives on the salt-master can result in
# not detecting the loss of a minion when the connection is lost or when
# its host has been terminated without first closing the socket.
# Salt's Presence System depends on this connection status to know if a minion
# is "present".
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
# the OS. If connections between the minion and the master pass through
# a state tracking device such as a firewall or VPN gateway, there is
# the risk that it could tear down the connection between the master and minion
# without informing either party that their connection has been taken away.
# Enabling TCP Keepalives prevents this from happening.

# Overall state of TCP Keepalives: enable (1 or True), disable (0 or False)
# or leave at the OS defaults (-1), on Linux, typically disabled. Default True, enabled.
#tcp_keepalive: True

# How long before the first keepalive should be sent, in seconds. Default 300,
# to send the first keepalive after 5 minutes. The OS default (-1) is typically
# 7200 seconds on Linux; see /proc/sys/net/ipv4/tcp_keepalive_time.
#tcp_keepalive_idle: 300

# How many lost probes are needed to consider the connection lost. Default -1
# to use OS defaults, typically 9 on Linux; see /proc/sys/net/ipv4/tcp_keepalive_probes.
#tcp_keepalive_cnt: -1

# How often, in seconds, to send keepalives after the first one. Default -1 to
# use OS defaults, typically 75 seconds on Linux; see
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
#tcp_keepalive_intvl: -1


#####         NetAPI settings        #####
############################################
# Allow the raw_shell parameter to be used when calling Salt SSH client via API
#netapi_allow_raw_shell: True

# Set a list of clients to enable in the API
#netapi_enable_clients: []
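As a quick illustration, a master that turns on a handful of the options documented above might uncomment them together like this; the values shown are illustrative, not recommendations:

```yaml
# /etc/salt/master (fragment) -- illustrative values only
pillar_roots:
  base:
    - /srv/pillar

# Cache rendered pillars on disk for an hour. Read the pillar_cache_*
# security notes above before enabling this on a real master.
pillar_cache: True
pillar_cache_ttl: 3600
pillar_cache_backend: disk

# Allow only foo.example.com to publish test.* commands to its peers.
peer:
  foo.example.com:
    - test.*
```

Note that enabling `peer` for `.*`/`.*`, as warned above, would give root on any one minion effective root everywhere, so peer grants should stay as narrow as this sketch.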
Example minion configuration file¶
#####    Primary configuration settings    #####
##########################################
# This configuration file is used to manage the behavior of the Salt Minion.
# With the exception of the location of the Salt Master Server, values that are
# commented out but have an empty line after the comment are defaults that need
# not be set in the config. If there is no blank line after the comment, the
# value is presented as an example and is not the default.

# By default the minion will automatically include all config files
# from minion.d/*.conf (minion.d is a directory in the same directory
# as the main minion config file).
#default_include: minion.d/*.conf

# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
#master: salt

# Set http proxy information for the minion when doing requests
#proxy_host:
#proxy_port:
#proxy_username:
#proxy_password:

# List of hosts to bypass the HTTP proxy. This key does nothing unless
# proxy_host etc. is configured; it does not support any kind of wildcards.
#no_proxy: []

# If multiple masters are specified in the 'master' setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master
# is set to True, the order will be randomized upon Minion startup instead. This can
# be helpful in distributing the load of many minions executing salt-call requests,
# for example, from a cron job. If only one master is listed, this setting is ignored
# and a warning will be logged.
#random_master: False

# NOTE: Deprecated in Salt 2019.2.0. Use 'random_master' instead.
#master_shuffle: False

# Minions can connect to multiple masters simultaneously (all masters
# are "hot"), or can be configured to failover if a master becomes
# unavailable. Multiple hot masters are configured by setting this
# value to "str". Failover masters can be requested by setting
# to "failover".
# MAKE SURE TO SET master_alive_interval if you are
# using failover.
# Setting master_type to 'disable' lets you have a running minion (with engines and
# beacons) without a master connection
# master_type: str

# Poll interval in seconds for checking if the master is still there. Only
# respected if master_type above is "failover". To disable the interval entirely,
# set the value to -1. (This may be necessary on machines which have high numbers
# of TCP connections, such as load balancers.)
# master_alive_interval: 30

# If the minion is in multi-master mode and the master_type configuration option
# is set to "failover", this setting can be set to "True" to force the minion
# to fail back to the first master in the list if the first master is back online.
#master_failback: False

# If the minion is in multi-master mode, the "master_type" configuration is set to
# "failover", and the "master_failback" option is enabled, the master failback
# interval can be set to ping the top master with this interval, in seconds.
#master_failback_interval: 0

# Set whether the minion should connect to the master via IPv6:
#ipv6: False

# Set the number of seconds to wait before attempting to resolve
# the master hostname if name resolution fails. Defaults to 30 seconds.
# Set to zero if the minion should shut down and not retry.
# retry_dns: 30

# Set the number of times to attempt to resolve
# the master hostname if name resolution fails. Defaults to None,
# which will attempt the resolution indefinitely.
# retry_dns_count: 3

# Set the port used by the master reply and authentication server.
#master_port: 4506

# The user to run salt.
#user: root

# The user to run salt remote execution commands as via sudo. If this option is
# enabled then sudo will be used to change the active user executing the remote
# command. If enabled, the user will need to be allowed access via the sudoers
# file for the user that the salt minion is configured to run as.
# The most common option would be to use the root user. If this option is set,
# the user option should also be set to a non-root user. If migrating from a
# root minion to a non-root minion, the minion cache should be cleared and the
# ownership of the minion pki directory will need to be changed to the new user.
#sudo_user: root

# Specify the location of the daemon process ID file.
#pidfile: /var/run/salt-minion.pid

# The root directory prepended to these options: pki_dir, cachedir, log_file,
# sock_dir, pidfile.
#root_dir: /

# The path to the minion's configuration file.
#conf_file: /etc/salt/minion

# The directory to store the pki information in
#pki_dir: /etc/salt/pki/minion

# Explicitly declare the id for this minion to use. If left commented, the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids; this can be useful for salt compute
# clusters.
#id:

# Cache the minion id to a file when the minion's id is not statically defined
# in the minion config. Defaults to "True". This setting prevents potential
# problems when automatic minion id resolution changes, which can cause the
# minion to lose connection with the master. To turn off minion id caching,
# set this config to ``False``.
#minion_id_caching: True

# Convert the minion id to lowercase when it is being generated. Helpful when
# some hosts get the minion id in uppercase. Cached ids will remain the same
# and not be converted. For example, Windows minions often have uppercase
# minion names when they are set up but not always. To turn on, set this
# config to ``True``.
#minion_id_lowercase: False

# Append a domain to a hostname in the event that it does not exist. This is
# useful for systems where socket.getfqdn() does not actually result in a
# FQDN (for instance, Solaris).
#append_domain:

# Custom static grains for this minion can be specified here and used in SLS
# files just like all other grains. This example sets 4 custom grains, with
# the 'roles' grain having two values that can be matched against.
#grains:
#  roles:
#    - webserver
#    - memcache
#  deployment: datacenter4
#  cabinet: 13
#  cab_u: 14-15
#
# Where cache data goes.
# This data may contain sensitive data and should be protected accordingly.
#cachedir: /var/cache/salt/minion

# Append minion_id to these directories. Helps with
# multiple proxies and minions running on the same machine.
# Allowed elements in the list: pki_dir, cachedir, extension_modules
# Normally not needed unless running several proxies and/or minions on the same machine.
# Defaults to ['cachedir'] for proxies, [] (empty list) for regular minions
#append_minionid_config_dirs:

# Verify and set permissions on configuration directories at startup.
#verify_env: True

# The minion can locally cache the return data from jobs sent to it; this
# can be a good way to keep track of jobs the minion has executed
# (on the minion side). By default this feature is disabled; to enable, set
# cache_jobs to True.
#cache_jobs: False

# Set the directory used to hold unix sockets.
#sock_dir: /var/run/salt/minion

# In order to calculate the fqdns grain, all the IP addresses from the minion
# are processed with underlying calls to `socket.gethostbyaddr` which can take
# 5 seconds to be released (after reaching `socket.timeout`) when there is no
# fqdn for that IP. These calls to `socket.gethostbyaddr` are processed
# asynchronously; however, it still adds 5 seconds every time grains are
# generated if an IP does not resolve. On Windows, grains are regenerated each
# time a new process is spawned. Therefore, the default for Windows is `False`.
# On macOS, FQDN resolution can be very slow, therefore the default for macOS is
# `False` as well.
# All other OSes default to `True`.
# enable_fqdns_grains: True

# The minion can take a while to start up when lspci and/or dmidecode is used
# to populate the grains for the minion. Set this to False if you do not need
# GPU hardware grains for your minion.
# enable_gpu_grains: True

# Set the default outputter used by the salt-call command. The default is
# "nested".
#output: nested

# To set a list of additional directories to search for salt outputters, set the
# outputter_dirs option.
#outputter_dirs: []

# By default output is colored. To disable colored output, set the color value
# to False.
#color: True

# Do not strip off the colored output from nested results and state outputs
# (true by default).
# strip_colors: False

# Backup files that are replaced by file.managed and file.recurse under
# 'cachedir'/file_backup relative to their original location and appended
# with a timestamp. The only valid setting is "minion". Disabled by default.
#
# Alternatively this can be specified for each file in state files:
#   /etc/ssh/sshd_config:
#     file.managed:
#       - source: salt://ssh/sshd_config
#       - backup: minion
#
#backup_mode: minion

# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the time, in
# seconds, between those reconnection attempts.
#acceptance_wait_time: 10

# If this is nonzero, the time between reconnection attempts will increase by
# acceptance_wait_time seconds per iteration, up to this maximum. If this is
# set to zero, the time between reconnection attempts will stay constant.
#acceptance_wait_time_max: 0

# If the master rejects the minion's public key, retry instead of exiting.
# Rejected keys will be handled the same as waiting on acceptance.
#rejected_retry: False

# When the master key changes, the minion will try to re-auth itself to receive
# the new master key.
# In larger environments this can cause a SYN flood on the
# master because all minions try to re-auth immediately. To prevent this and
# have a minion wait for a random amount of time, use this optional parameter.
# The wait-time will be a random number of seconds between 0 and the defined value.
#random_reauth_delay: 60

# To avoid overloading a master when many minions start up at once, a randomized
# delay may be set to tell the minions to wait before connecting to the master.
# This value is the number of seconds to choose from for a random number. For
# example, setting this value to 60 will choose a random number of seconds to delay
# on startup between zero seconds and sixty seconds. Setting to '0' will disable
# this feature.
#random_startup_delay: 0

# When waiting for a master to accept the minion's public key, salt will
# continuously attempt to reconnect until successful. This is the timeout value,
# in seconds, for each individual attempt. After this timeout expires, the minion
# will wait for acceptance_wait_time seconds before trying again. Unless your master
# is under unusually heavy load, this should be left at the default.
#auth_timeout: 60

# Number of consecutive SaltReqTimeoutErrors that are acceptable when trying to
# authenticate.
#auth_tries: 7

# The number of attempts to connect to a master before giving up.
# Set this to -1 for unlimited attempts. This allows for a master to have
# downtime and the minion to reconnect to it later when it comes back up.
# In 'failover' mode, it is the number of attempts for each set of masters.
# In this mode, it will cycle through the list of masters for each attempt.
#
# This is different than auth_tries because auth_tries attempts to
# retry auth attempts with a single master. auth_tries is under the
# assumption that you can connect to the master but not gain
# authorization from it.
# master_tries will still cycle through all
# the masters in a given try, so it is appropriate if you expect
# occasional downtime from the master(s).
#master_tries: 1

# If authentication fails due to SaltReqTimeoutError during a ping_interval,
# cause the sub-minion process to restart.
#auth_safemode: False

# Ping the master to ensure the connection is alive (minutes).
#ping_interval: 0

# To auto recover minions if the master changes IP address (DDNS):
#   auth_tries: 10
#   auth_safemode: True
#   ping_interval: 2
#
# Minions won't know the master is missing until a ping fails. After the ping
# fails, the minion will attempt authentication, likely fail, and cause a restart.
# When the minion restarts it will resolve the master's IP and attempt to reconnect.

# If you don't have any problems with syn-floods, don't bother with the
# three recon_* settings described below, just leave the defaults!
#
# The ZeroMQ pull-socket that binds to the master's publishing interface tries
# to reconnect immediately if the socket is disconnected (for example if
# the master processes are restarted). In large setups this will have all
# minions reconnect immediately which might flood the master (the ZeroMQ default
# is usually a 100ms delay). To prevent this, these three recon_* settings
# can be used.
# recon_default: the interval in milliseconds that the socket should wait before
#                trying to reconnect to the master (1000ms = 1 second)
#
# recon_max: the maximum time a socket should wait. Each interval the time to wait
#            is calculated by doubling the previous time. If recon_max is reached,
#            it starts again at recon_default.
# Short example:
#
#   reconnect 1: the socket will wait 'recon_default' milliseconds
#   reconnect 2: 'recon_default' * 2
#   reconnect 3: ('recon_default' * 2) * 2
#   reconnect 4: value from previous interval * 2
#   reconnect 5: value from previous interval * 2
#   reconnect x: if value >= recon_max, it starts again with recon_default
#
# recon_randomize: generate a random wait time on minion start. The wait time will
#                  be a random value between recon_default and recon_default +
#                  recon_max. Having all minions reconnect with the same recon_default
#                  and recon_max value kind of defeats the purpose of being able to
#                  change these settings. If all minions have the same values and your
#                  setup is quite large (several thousand minions), they will still
#                  flood the master. The desired behavior is to have a timeframe within
#                  which all minions try to reconnect.
#
# Example of how to use these settings. The goal: have all minions reconnect within a
# 60-second timeframe on a disconnect.
#   recon_default: 1000
#   recon_max: 59000
#   recon_randomize: True
#
# Each minion will have a randomized reconnect value between 'recon_default'
# and 'recon_default + recon_max', which in this example means between 1000ms
# and 60000ms (or between 1 and 60 seconds). The generated random value will be
# doubled after each attempt to reconnect. Let's say the generated random
# value is 11 seconds (or 11000ms).
#   reconnect 1: wait 11 seconds
#   reconnect 2: wait 22 seconds
#   reconnect 3: wait 33 seconds
#   reconnect 4: wait 44 seconds
#   reconnect 5: wait 55 seconds
#   reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
#   reconnect 7: wait 11 seconds
#   reconnect 8: wait 22 seconds
#   reconnect 9: wait 33 seconds
#   reconnect x: etc.
#
# In a setup with ~6000 hosts these settings would average the reconnects
# to about 100 per second and all hosts would be reconnected within 60 seconds.
#   recon_default: 100
#   recon_max: 5000
#   recon_randomize: False
#
#
# The loop_interval sets how long in seconds the minion will wait between
# evaluating the scheduler and running cleanup tasks. This defaults to 1
# second on the minion scheduler.
#loop_interval: 1

# Some installations choose to start all job returns in a cache or a returner
# and forgo sending the results back to a master. In this workflow, jobs
# are most often executed with --async from the Salt CLI and then results
# are evaluated by examining job caches on the minions or any configured returners.
# WARNING: Setting this to False will **disable** returns back to the master.
#pub_ret: True

# The grains can be merged, instead of overridden, using this option.
# This allows custom grains to define different subvalues of a dictionary
# grain. By default this feature is disabled; to enable, set grains_deep_merge
# to ``True``.
#grains_deep_merge: False

# The grains_refresh_every setting allows for a minion to periodically check
# its grains to see if they have changed and, if so, to inform the master
# of the new grains. This operation is moderately expensive, therefore
# care should be taken not to set this value too low.
#
# Note: This value is expressed in __minutes__!
#
# A value of 10 minutes is a reasonable default.
#
# If the value is set to zero, this check is disabled.
#grains_refresh_every: 1

# The grains_refresh_pre_exec setting allows for a minion to check its grains
# prior to the execution of any operation to see if they have changed and, if
# so, to inform the master of the new grains. This operation is moderately
# expensive, therefore care should be taken before enabling this behavior.
#grains_refresh_pre_exec: False

# Cache grains on the minion. Default is False.
#grains_cache: False

# Cache rendered pillar data on the minion. Default is False.
# This may cause 'cachedir'/pillar to contain sensitive data that should be
# protected accordingly.
#minion_pillar_cache: False # Grains cache expiration, in seconds. If the cache file is older than this # number of seconds then the grains cache will be dumped and fully re-populated # with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache' # is not enabled. # grains_cache_expiration: 300 # Determines whether or not the salt minion should run scheduled mine updates. # Defaults to "True". Set to "False" to disable the scheduled mine updates # (this essentially just does not add the mine update function to the minion's # scheduler). #mine_enabled: True # Determines whether or not scheduled mine updates should be accompanied by a job # return for the job cache. Defaults to "False". Set to "True" to include job # returns in the job cache for mine updates. #mine_return_job: False # Example functions that can be run via the mine facility # NO mine functions are established by default. # Note these can be defined in the minion's pillar as well. #mine_functions: # test.ping: [] # network.ip_addrs: # interface: eth0 # cidr: '10.0.0.0/8' # The number of minutes between mine updates. #mine_interval: 60 # Windows platforms lack posix IPC and must rely on slower TCP based inter- # process communications. ipc_mode is set to 'tcp' on such systems. #ipc_mode: ipc # Overwrite the default tcp ports used by the minion when ipc_mode is set to 'tcp' #tcp_pub_port: 4510 #tcp_pull_port: 4511 # Passing very large events can cause the minion to consume large amounts of # memory. This value tunes the maximum size of a message allowed onto the # minion event bus. The value is expressed in bytes. #max_event_size: 1048576 # When a minion starts up it sends a notification on the event bus with a tag # that looks like this: `salt/minion/<minion_id>/start`. For historical reasons # the minion also sends a similar event with an event tag like this: # `minion_start`. This duplication can cause a lot of clutter on the event bus # when there are many minions. 
Set `enable_legacy_startup_events: False` in the # minion config to ensure only the `salt/minion/<minion_id>/start` events are # sent. Beginning with the `Sodium` Salt release this option will default to # `False` #enable_legacy_startup_events: True # To detect failed master(s) and fire events on connect/disconnect, set # master_alive_interval to the number of seconds to poll the masters for # connection events. # #master_alive_interval: 30 # The minion can include configuration from other files. To enable this, # pass a list of paths to this option. The paths can be either relative or # absolute; if relative, they are considered to be relative to the directory # the main minion configuration file lives in (this file). Paths can make use # of shell-style globbing. If no files are matched by a path passed to this # option then the minion will log a warning message. # # Include a config file from some other path: # include: /etc/salt/extra_config # # Include config from several files and directories: #include: # - /etc/salt/extra_config # - /etc/roles/webserver # The syndic minion can verify that it is talking to the correct master via the # key fingerprint of the higher-level master with the "syndic_finger" config. #syndic_finger: '' # # # ##### Minion module management ##### ########################################## # Disable specific modules. This allows the admin to limit the level of # access the master has to the minion. The default here is the empty list, # below is an example of how this needs to be formatted in the config file #disable_modules: # - cmdmod # - test #disable_returners: [] # This is the reverse of disable_modules. The default, like disable_modules, is the empty list, # but if this option is set to *anything* then *only* those modules will load. # Note that this is a very large hammer and it can be quite difficult to keep the minion working # the way you think it should since Salt uses many modules internally itself. 
At a bare minimum # you need the following enabled or else the minion won't start. #whitelist_modules: # - cmdmod # - test # - config # Modules can be loaded from arbitrary paths. This enables the easy deployment # of third party modules. Modules for returners and minions can be loaded. # Specify a list of extra directories to search for minion modules and # returners. These paths must be fully qualified! #module_dirs: [] #returner_dirs: [] #states_dirs: [] #render_dirs: [] #utils_dirs: [] # # A module provider can be statically overwritten or extended for the minion # via the providers option, in this case the default module will be # overwritten by the specified module. In this example the pkg module will # be provided by the yumpkg5 module instead of the system default. #providers: # pkg: yumpkg5 # # Enable Cython modules searching and loading. (Default: False) #cython_enable: False # # Specify a max size (in bytes) for modules on import. This feature is currently # only supported on *nix operating systems and requires psutil. # modules_max_memory: -1 ##### State Management Settings ##### ########################################### # The default renderer to use in SLS files. This is configured as a # pipe-delimited expression. For example, jinja|yaml will first run jinja # templating on the SLS file, and then load the result as YAML. This syntax is # documented in further depth at the following URL: # # https://docs.saltproject.io/en/latest/ref/renderers/#composing-renderers # # NOTE: The "shebang" prefix (e.g. "#!jinja|yaml") described in the # documentation linked above is for use in an SLS file to override the default # renderer, it should not be used when configuring the renderer here. # #renderer: jinja|yaml # # The failhard option tells the minions to stop immediately after the first # failure detected in the state execution. Defaults to False. #failhard: False # # Reload the modules prior to a highstate run. 
#autoload_dynamic_modules: True # # clean_dynamic_modules keeps the dynamic modules on the minion in sync with # the dynamic modules on the master, this means that if a dynamic module is # not on the master it will be deleted from the minion. By default, this is # enabled and can be disabled by changing this value to False. #clean_dynamic_modules: True # # Renamed from ``environment`` to ``saltenv``. If ``environment`` is used, # ``saltenv`` will take its value. If both are used, ``environment`` will be # ignored and ``saltenv`` will be used. # Normally the minion is not isolated to any single environment on the master # when running states, but the environment can be isolated on the minion side # by statically setting it. Remember that the recommended way to manage # environments is to isolate via the top file. #saltenv: None # # Isolates the pillar environment on the minion side. This functions the same # as the environment setting, but for pillar instead of states. #pillarenv: None # # Set this option to True to force the pillarenv to be the same as the # effective saltenv when running states. Note that if pillarenv is specified, # this option will be ignored. #pillarenv_from_saltenv: False # # Set this option to 'True' to force a 'KeyError' to be raised whenever an # attempt to retrieve a named value from pillar fails. When this option is set # to 'False', the failed attempt returns an empty string. Default is 'False'. #pillar_raise_on_missing: False # # If using the local file directory, then the state top file name needs to be # defined, by default this is top.sls. #state_top: top.sls # # Run states when the minion daemon starts. 
To enable, set startup_states to: # 'highstate' -- Execute state.highstate # 'sls' -- Read in the sls_list option and execute the named sls files # 'top' -- Read top_file option and execute based on that file on the Master #startup_states: '' # # List of states to run when the minion starts up if startup_states is 'sls': #sls_list: # - edit.vim # - hyper # # List of grains to pass in start event when minion starts up: #start_event_grains: # - machine_id # - uuid # # Top file to execute if startup_states is 'top': #top_file: '' # Automatically aggregate all states that have support for mod_aggregate by # setting to True. Or pass a list of state module names to automatically # aggregate just those types. # # state_aggregate: # - pkg # #state_aggregate: False # Instead of failing immediately when another state run is in progress, a value # of True will queue the new state run to begin running once the other has # finished. This option starts a new thread for each queued state run, so use # this option sparingly. Additionally, it can be set to an integer representing # the maximum queue size which can be attained before the state runs will fail # to be queued. This can prevent runaway conditions where new threads are # started until system performance is hampered. # #state_queue: False # Disable requisites during state runs by specifying a single requisite # or a list of requisites to disable. # # disabled_requisites: require_in # # disabled_requisites: # - require # - require_in # If set, this parameter expects a dictionary of state module names as keys # and list of conditions which must be satisfied in order to run any functions # in that state module. 
# #global_state_conditions: # "*": ["G@global_noop:false"] # service: ["not G@virtual_subtype:chroot"] ##### File Directory Settings ##### ########################################## # The Salt Minion can redirect all file server operations to a local directory; # this allows for the same state tree that is on the master to be used if # copied completely onto the minion. This is a literal copy of the settings on # the master but used to reference a local directory on the minion. # Set the file client. The client defaults to looking on the master server for # files, but can be directed to look at the local file directory setting # defined below by setting it to "local". Setting a local file_client runs the # minion in masterless mode. #file_client: remote # The file directory works on environments passed to the minion. Each environment # can have multiple root directories, but the subdirectories in the multiple file # roots must not match; otherwise the downloaded files will not be able to be # reliably verified. A base environment is required to house the top file. # Example: # file_roots: # base: # - /srv/salt/ # dev: # - /srv/salt/dev/services # - /srv/salt/dev/states # prod: # - /srv/salt/prod/services # - /srv/salt/prod/states # #file_roots: # base: # - /srv/salt # Uncomment the line below if you do not want the file_server to follow # symlinks when walking the filesystem tree. This is set to True # by default. Currently this only applies to the default roots # fileserver_backend. #fileserver_followsymlinks: False # # Uncomment the line below if you do not want symlinks to be # treated as the files they are pointing to. By default this is set to # False. By uncommenting the line below, any detected symlink while listing # files on the Master will not be returned to the Minion. #fileserver_ignoresymlinks: True # # The hash_type is the hash to use when discovering the hash of a file on # the local fileserver. 
The default is sha256, but md5, sha1, sha224, sha384 # and sha512 are also supported. # # WARNING: While md5 and sha1 are also supported, do not use them due to the # high chance of possible collisions and thus a security breach. # # Warning: Prior to changing this value, the minion should be stopped and all # Salt caches should be cleared. #hash_type: sha256 # The Salt pillar is searched for locally if file_client is set to local. If # this is the case, and pillar data is defined, then the pillar_roots need to # also be configured on the minion: #pillar_roots: # base: # - /srv/pillar # If this is `True` and the ciphertext could not be decrypted, then an error is # raised. #gpg_decrypt_must_succeed: False # Set a hard-limit on the size of the files that can be pushed to the master. # It will be interpreted as megabytes. Default: 100 #file_recv_max_size: 100 # # ###### Security settings ##### ########################################### # Enable "open mode", this mode still maintains encryption, but turns off # authentication, this is only intended for highly secure environments or for # the situation where your keys end up in a bad state. If you run in open mode # you do so at your own risk! #open_mode: False # The size of key that should be generated when creating new keys. #keysize: 2048 # Enable permissive access to the salt keys. This allows you to run the # master or minion as root, but have a non-root group be given access to # your pki_dir. To make the access explicit, root must belong to the group # you've given access to. This is potentially quite insecure. #permissive_pki_access: False # The state_verbose and state_output settings can be used to change the way # state system data is printed to the display. By default all data is printed. # The state_verbose setting can be set to True or False; when set to False # all data that has a result of True and no changes will be suppressed. 
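The file-hash discovery that hash_type controls amounts to hashing file contents in chunks; a minimal stdlib sketch (the helper name is illustrative, not Salt's API):

```python
import hashlib

def file_hash(path, hash_type="sha256"):
    """Hash a file's contents in chunks, as a fileserver would when
    discovering file hashes (sketch; hash_type mirrors the config
    option and accepts any algorithm hashlib supports)."""
    h = hashlib.new(hash_type)
    with open(path, "rb") as f:
        # Read in fixed-size chunks so large files don't load into memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Chunked reading keeps memory use flat regardless of file size, which matters for the large artifacts a fileserver may distribute.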
#state_verbose: True # The state_output setting controls which results will be output in full multi-line form: # full, terse - each state will be full/terse # mixed - only states with errors will be full # changes - states with changes and errors will be full # full_id, mixed_id, changes_id and terse_id are also allowed; # when set, the state ID will be used as the name in the output #state_output: full # The state_output_diff setting changes whether or not the output from # successful states is returned. Useful when even the terse output of these # states is cluttering the logs. Set it to True to ignore them. #state_output_diff: False # The state_output_profile setting changes whether profile information # will be shown for each state run. #state_output_profile: True # The state_output_pct setting changes whether success and failure information # as a percent of total actions will be shown for each state run. #state_output_pct: False # The state_compress_ids setting aggregates information about states which have # multiple "names" under the same state ID in the highstate output. #state_compress_ids: False # Fingerprint of the master public key to validate the identity of your Salt master # before the initial key exchange. The master fingerprint can be found by running # "salt-key -f master.pub" on the Salt master. #master_finger: '' # Use TLS/SSL encrypted connection between master and minion. # Can be set to a dictionary containing keyword arguments corresponding to Python's # 'ssl.wrap_socket' method. # Default is None. #ssl: # keyfile: <path_to_keyfile> # certfile: <path_to_certfile> # ssl_version: PROTOCOL_TLSv1_2 # Grains to be sent to the master on authentication to check if the minion's key # will be accepted automatically. Needs to be configured on the master. #autosign_grains: # - uuid # - server_id ###### Reactor Settings ##### ########################################### # Define a salt reactor. 
See https://docs.saltproject.io/en/latest/topics/reactor/ #reactor: [] #Set the TTL for the cache of the reactor configuration. #reactor_refresh_interval: 60 #Configure the number of workers for the runner/wheel in the reactor. #reactor_worker_threads: 10 #Define the queue size for workers in the reactor. #reactor_worker_hwm: 10000 ###### Thread settings ##### ########################################### # Disable multiprocessing support, by default when a minion receives a # publication a new process is spawned and the command is executed therein. # # WARNING: Disabling multiprocessing may result in substantial slowdowns # when processing large pillars. See https://github.com/saltstack/salt/issues/38758 # for a full explanation. #multiprocessing: True # Limit the maximum amount of processes or threads created by salt-minion. # This is useful to avoid resource exhaustion in case the minion receives more # publications than it is able to handle, as it limits the number of spawned # processes or threads. -1 is the default and disables the limit. #process_count_max: -1 ##### Logging settings ##### ########################################## # The location of the minion log file # The minion log can be sent to a regular file, local path name, or network # location. Remote logging works best when configured to use rsyslogd(8) (e.g.: # ``file:///dev/log``), with rsyslogd(8) configured for network logging. The URI # format is: <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility> #log_file: /var/log/salt/minion #log_file: file:///dev/log #log_file: udp://loghost:10514 # #log_file: /var/log/salt/minion #key_logfile: /var/log/salt/key # The level of messages to send to the console. # One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'. # # The following log levels are considered INSECURE and may log sensitive data: # ['garbage', 'trace', 'debug'] # # Default: 'warning' #log_level: warning # The level of messages to send to the log file. 
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'. # If using 'log_granular_levels' this must be set to the highest desired level. # Default: 'warning' #log_level_logfile: # The date and time format used in log messages. Allowed date/time formatting # can be seen here: http://docs.python.org/library/time.html#time.strftime #log_datefmt: '%H:%M:%S' #log_datefmt_logfile: '%Y-%m-%d %H:%M:%S' # The format of the console logging messages. Allowed formatting options can # be seen here: http://docs.python.org/library/logging.html#logrecord-attributes # # Console log colors are specified by these additional formatters: # # %(colorlevel)s # %(colorname)s # %(colorprocess)s # %(colormsg)s # # Since it is desirable to include the surrounding brackets, '[' and ']', in # the coloring of the messages, these color formatters also include padding as # well. Color LogRecord attributes are only available for console logging. # #log_fmt_console: '%(colorlevel)s %(colormsg)s' #log_fmt_console: '[%(levelname)-8s] %(message)s' # #log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s' # This can be used to control logging levels more specifically. This # example sets the main salt library at the 'warning' level, but sets # 'salt.modules' to log at the 'debug' level: # log_granular_levels: # 'salt': 'warning' # 'salt.modules': 'debug' # #log_granular_levels: {} # To diagnose issues with minions disconnecting or missing returns, ZeroMQ # supports the use of monitor sockets to log connection events. This # feature requires ZeroMQ 4.0 or higher. # # To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a # debug level or higher. # # A sample log event is as follows: # # [DEBUG ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512, # 'value': 27, 'description': 'EVENT_DISCONNECTED'} # # All events logged will include the string 'ZeroMQ event'. 
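The log_granular_levels mapping above corresponds naturally to per-logger levels in Python's own logging hierarchy; a rough stdlib analogy (not Salt's actual logging setup code):

```python
import logging

# Rough analogy for log_granular_levels: each key names a logger in the
# hierarchy, each value the minimum level for that subtree.
log_granular_levels = {"salt": "warning", "salt.modules": "debug"}

for name, level in log_granular_levels.items():
    logging.getLogger(name).setLevel(getattr(logging, level.upper()))

# 'salt.modules' now logs at DEBUG, while other children of 'salt'
# (e.g. 'salt.states') inherit the WARNING level from their parent.
assert logging.getLogger("salt.modules").isEnabledFor(logging.DEBUG)
assert not logging.getLogger("salt.states").isEnabledFor(logging.DEBUG)
```

This is why the comment above says log_level_logfile must be set to the highest desired level when granular levels are used: a handler's own level still filters everything the individual loggers emit.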
A connection event # should be logged as the minion starts up and initially connects to the # master. If not, check for debug log level and that the necessary version of # ZeroMQ is installed. # #zmq_monitor: False # Number of times to try to authenticate with the salt master when reconnecting # to the master. #tcp_authentication_retries: 5 ###### Module configuration ##### ########################################### # Salt allows for modules to be passed arbitrary configuration data, any data # passed here in valid yaml format will be passed on to the salt minion modules # for use. It is STRONGLY recommended that a naming convention be used in which # the module name is followed by a . and then the value. Also, all top level # data must be applied via the yaml dict construct, some examples: # # You can specify that all modules should run in test mode: #test: True # # A simple value for the test module: #test.foo: foo # # A list for the test module: #test.bar: [baz,quo] # # A dict for the test module: #test.baz: {spam: sausage, cheese: bread} # # ###### Update settings ###### ########################################### # Using the features in Esky, a salt minion can both run as a frozen app and # be updated on the fly. These options control how the update process # (saltutil.update()) behaves. # # The url for finding and downloading updates. Disabled by default. #update_url: False # # The list of services to restart after a successful update. Empty by default. #update_restart_services: [] ###### Keepalive settings ###### ############################################ # ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by # the OS. If connections between the minion and the master pass through # a state tracking device such as a firewall or VPN gateway, there is # the risk that it could tear down the connection between the master and minion # without informing either party that their connection has been taken away. 
# Enabling TCP Keepalives prevents this from happening. # Overall state of TCP Keepalives, enable (1 or True), disable (0 or False) # or leave to the OS defaults (-1), on Linux, typically disabled. Default True, enabled. #tcp_keepalive: True # How long before the first keepalive should be sent in seconds. Default 300 # to send the first keepalive after 5 minutes, OS default (-1) is typically 7200 seconds # on Linux see /proc/sys/net/ipv4/tcp_keepalive_time. #tcp_keepalive_idle: 300 # How many lost probes are needed to consider the connection lost. Default -1 # to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes. #tcp_keepalive_cnt: -1 # How often, in seconds, to send keepalives after the first one. Default -1 to # use OS defaults, typically 75 seconds on Linux, see # /proc/sys/net/ipv4/tcp_keepalive_intvl. #tcp_keepalive_intvl: -1 ###### Windows Software settings ###### ############################################ # Location of the repository cache file on the master: #win_repo_cachefile: 'salt://win/repo/winrepo.p' ###### Returner settings ###### ############################################ # Default Minion returners. Can be a comma delimited string or a list: # #return: mysql # #return: mysql,slack,redis # #return: # - mysql # - hipchat # - slack ###### Miscellaneous settings ###### ############################################ # Default match type for filtering events tags: startswith, endswith, find, regex, fnmatch #event_match_type: startswith
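The tcp_keepalive* options above correspond to standard TCP socket options. A sketch of how such settings map onto a socket on Linux (the helper is illustrative; constant availability varies by platform, and -1 mirrors the "leave the OS default" semantics described above):

```python
import socket

def apply_keepalive(sock, idle=300, cnt=-1, intvl=-1):
    """Apply tcp_keepalive-style settings to a socket (sketch).
    A value of -1 leaves the OS default in place, mirroring the
    config comments above."""
    # tcp_keepalive: enable SO_KEEPALIVE on the connection
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # tcp_keepalive_idle: seconds before the first probe
    if idle > 0 and hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    # tcp_keepalive_cnt: lost probes before the connection is dropped
    if cnt > 0 and hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, cnt)
    # tcp_keepalive_intvl: seconds between subsequent probes
    if intvl > 0 and hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, intvl)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
apply_keepalive(s)  # defaults: keepalive on, first probe after 300 s
s.close()
```

The hasattr guards matter because TCP_KEEPIDLE and friends are Linux-specific; on other platforms only SO_KEEPALIVE itself is portable.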
Example proxy minion configuration file¶
##### Primary configuration settings ##### ########################################## # This configuration file is used to manage the behavior of all Salt Proxy # Minions on this host. # With the exception of the location of the Salt Master Server, values that are # commented out but have an empty line after the comment are defaults that need # not be set in the config. If there is no blank line after the comment, the # value is presented as an example and is not the default. # By default the proxy minion will automatically include all config files # from proxy.d/*.conf (proxy.d is a directory in the same directory # as the main minion config file). #default_include: proxy.d/*.conf # Backwards compatibility option for proxymodules created before 2015.8.2 # This setting will default to 'False' in the 2016.3.0 release # Setting this to True adds proxymodules to the __opts__ dictionary. # This breaks several Salt features (basically anything that serializes # __opts__ over the wire) but retains backwards compatibility. #add_proxymodule_to_opts: True # Set the location of the salt master server. If the master server cannot be # resolved, then the minion will fail to start. #master: salt # If a proxymodule has a function called 'grains', then call it during # regular grains loading and merge the results with the proxy's grains # dictionary. Otherwise it is assumed that the module calls the grains # function in a custom way and returns the data elsewhere. # # Default to False for 2016.3 and 2016.11. Switch to True for 2017.7.0. # proxy_merge_grains_in_module: True # If a proxymodule has a function called 'alive' returning a boolean # flag reflecting the state of the connection with the remote device, # when this option is set to True, a scheduled job on the proxy will # try restarting the connection. The polling frequency depends on the # next option, 'proxy_keep_alive_interval'. Added in 2017.7.0. 
# proxy_keep_alive: True # The polling interval (in minutes) to check if the underlying connection # with the remote device is still alive. This option requires # 'proxy_keep_alive' to be configured as True and the proxymodule to # implement the 'alive' function. Added in 2017.7.0. # proxy_keep_alive_interval: 1 # By default, any proxy opens the connection with the remote device when # initialized. Some proxymodules allow, through this option, opening and closing # the session per command. This requires the proxymodule to have this # capability. Please consult the documentation to see if the proxy type # used can be that flexible. Added in 2017.7.0. # proxy_always_alive: True # If multiple masters are specified in the 'master' setting, the default behavior # is to always try to connect to them in the order they are listed. If random_master is # set to True, the order will be randomized instead. This can be helpful in distributing # the load of many minions executing salt-call requests, for example, from a cron job. # If only one master is listed, this setting is ignored and a warning will be logged. #random_master: False # Minions can connect to multiple masters simultaneously (all masters # are "hot"), or can be configured to failover if a master becomes # unavailable. Multiple hot masters are configured by setting this # value to "str". Failover masters can be requested by setting # to "failover". MAKE SURE TO SET master_alive_interval if you are # using failover. # master_type: str # Poll interval in seconds for checking if the master is still there. Only # respected if master_type above is "failover". # master_alive_interval: 30 # Set whether the minion should connect to the master via IPv6: #ipv6: False # Set the number of seconds to wait before attempting to resolve # the master hostname if name resolution fails. Defaults to 30 seconds. # Set to zero if the minion should shut down and not retry. 
# retry_dns: 30 # Set the port used by the master reply and authentication server. #master_port: 4506 # The user to run salt. #user: root # Setting sudo_user will cause salt to run all execution modules under sudo # as the user given in sudo_user. The user under which the salt minion process # itself runs will still be that provided in the user config above, but all # execution modules run by the minion will be rerouted through sudo. #sudo_user: saltdev # Specify the location of the daemon process ID file. #pidfile: /var/run/salt-minion.pid # The root directory prepended to these options: pki_dir, cachedir, log_file, # sock_dir, pidfile. #root_dir: / # The directory to store the pki information in. #pki_dir: /etc/salt/pki/minion # Where cache data goes. # This data may contain sensitive data and should be protected accordingly. #cachedir: /var/cache/salt/minion # Append minion_id to these directories. Helps with # multiple proxies and minions running on the same machine. # Allowed elements in the list: pki_dir, cachedir, extension_modules # Normally not needed unless running several proxies and/or minions on the same machine # Defaults to ['cachedir'] for proxies, [] (empty list) for regular minions # append_minionid_config_dirs: # - cachedir # Verify and set permissions on configuration directories at startup. #verify_env: True # The minion can locally cache the return data from jobs sent to it; this # can be a good way to keep track of jobs the minion has executed # (on the minion side). By default this feature is disabled; to enable, set # cache_jobs to True. #cache_jobs: False # Set the directory used to hold unix sockets. #sock_dir: /var/run/salt/minion # Set the default outputter used by the salt-call command. The default is # "nested". #output: nested # # By default output is colored. To disable colored output, set the color value # to False. #color: True # Do not strip off the colored output from nested results and state outputs # (true by default). 
# strip_colors: False # Backup files that are replaced by file.managed and file.recurse under # 'cachedir'/file_backup relative to their original location and appended # with a timestamp. The only valid setting is "minion". Disabled by default. # # Alternatively this can be specified for each file in state files: # /etc/ssh/sshd_config: # file.managed: # - source: salt://ssh/sshd_config # - backup: minion # #backup_mode: minion # When waiting for a master to accept the minion's public key, salt will # continuously attempt to reconnect until successful. This is the time, in # seconds, between those reconnection attempts. #acceptance_wait_time: 10 # If this is nonzero, the time between reconnection attempts will increase by # acceptance_wait_time seconds per iteration, up to this maximum. If this is # set to zero, the time between reconnection attempts will stay constant. #acceptance_wait_time_max: 0 # If the master rejects the minion's public key, retry instead of exiting. # Rejected keys will be handled the same as waiting on acceptance. #rejected_retry: False # When the master key changes, the minion will try to re-auth itself to receive # the new master key. In larger environments this can cause a SYN flood on the # master because all minions try to re-auth immediately. To prevent this and # have a minion wait for a random amount of time, use this optional parameter. # The wait-time will be a random number of seconds between 0 and the defined value. #random_reauth_delay: 60 # When waiting for a master to accept the minion's public key, salt will # continuously attempt to reconnect until successful. This is the timeout value, # in seconds, for each individual attempt. After this timeout expires, the minion # will wait for acceptance_wait_time seconds before trying again. Unless your master # is under unusually heavy load, this should be left at the default. 
#auth_timeout: 60 # Number of consecutive SaltReqTimeoutError that are acceptable when trying to # authenticate. #auth_tries: 7 # If authentication fails due to SaltReqTimeoutError during a ping_interval, # cause the sub minion process to restart. #auth_safemode: False # Ping Master to ensure connection is alive (minutes). #ping_interval: 0 # To auto recover minions if master changes IP address (DDNS) # auth_tries: 10 # auth_safemode: False # ping_interval: 90 # # Minions won't know the master is missing until a ping fails. After a ping fails, # the minion will attempt authentication and will likely fail, causing a restart. # When the minion restarts it will resolve the master's IP and attempt to reconnect. # If you don't have any problems with syn-floods, don't bother with the # three recon_* settings described below, just leave the defaults! # # The ZeroMQ pull-socket that binds to the master's publishing interface tries # to reconnect immediately, if the socket is disconnected (for example if # the master processes are restarted). In large setups this will have all # minions reconnect immediately which might flood the master (the ZeroMQ-default # is usually a 100ms delay). To prevent this, these three recon_* settings # can be used. # recon_default: the interval in milliseconds that the socket should wait before # trying to reconnect to the master (1000ms = 1 second) # # recon_max: the maximum time a socket should wait. Each interval the time to wait # is calculated by doubling the previous time. If recon_max is reached, # it starts again at recon_default. Short example: # # reconnect 1: the socket will wait 'recon_default' milliseconds # reconnect 2: 'recon_default' * 2 # reconnect 3: ('recon_default' * 2) * 2 # reconnect 4: value from previous interval * 2 # reconnect 5: value from previous interval * 2 # reconnect x: if value >= recon_max, it starts again with recon_default # # recon_randomize: generate a random wait time on minion start. 
#                  The wait time will be a random value between recon_default
#                  and recon_default + recon_max. Having all minions reconnect
#                  with the same recon_default and recon_max values defeats
#                  the purpose of being able to change these settings. If all
#                  minions have the same values and your setup is quite large
#                  (several thousand minions), they will still flood the
#                  master. The desired behavior is to have a timeframe within
#                  which all minions try to reconnect.
#
# Example of how to use these settings. The goal: have all minions reconnect
# within a 60-second timeframe on a disconnect.
# recon_default: 1000
# recon_max: 59000
# recon_randomize: True
#
# Each minion will have a randomized reconnect value between 'recon_default'
# and 'recon_default + recon_max', which in this example means between 1000ms
# and 60000ms (or between 1 and 60 seconds). The generated random value will
# be doubled after each attempt to reconnect. Let's say the generated random
# value is 11 seconds (or 11000ms).
# reconnect 1: wait 11 seconds
# reconnect 2: wait 22 seconds
# reconnect 3: wait 33 seconds
# reconnect 4: wait 44 seconds
# reconnect 5: wait 55 seconds
# reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max)
# reconnect 7: wait 11 seconds
# reconnect 8: wait 22 seconds
# reconnect 9: wait 33 seconds
# reconnect x: etc.
#
# In a setup with ~6000 hosts these settings would average the reconnects
# to about 100 per second and all hosts would be reconnected within 60 seconds.
# recon_default: 100
# recon_max: 5000
# recon_randomize: False
#
#
# The loop_interval sets how long in seconds the minion will wait between
# evaluating the scheduler and running cleanup tasks.
This defaults to a # sane 60 seconds, but if the minion scheduler needs to be evaluated more # often lower this value #loop_interval: 60 # The grains_refresh_every setting allows for a minion to periodically check # its grains to see if they have changed and, if so, to inform the master # of the new grains. This operation is moderately expensive, therefore # care should be taken not to set this value too low. # # Note: This value is expressed in __minutes__! # # A value of 10 minutes is a reasonable default. # # If the value is set to zero, this check is disabled. #grains_refresh_every: 1 # Cache grains on the minion. Default is False. #grains_cache: False # Grains cache expiration, in seconds. If the cache file is older than this # number of seconds then the grains cache will be dumped and fully re-populated # with fresh data. Defaults to 5 minutes. Will have no effect if 'grains_cache' # is not enabled. # grains_cache_expiration: 300 # Windows platforms lack posix IPC and must rely on slower TCP based inter- # process communications. Set ipc_mode to 'tcp' on such systems #ipc_mode: ipc # Overwrite the default tcp ports used by the minion when in tcp mode #tcp_pub_port: 4510 #tcp_pull_port: 4511 # Passing very large events can cause the minion to consume large amounts of # memory. This value tunes the maximum size of a message allowed onto the # minion event bus. The value is expressed in bytes. #max_event_size: 1048576 # To detect failed master(s) and fire events on connect/disconnect, set # master_alive_interval to the number of seconds to poll the masters for # connection events. # #master_alive_interval: 30 # The minion can include configuration from other files. To enable this, # pass a list of paths to this option. The paths can be either relative or # absolute; if relative, they are considered to be relative to the directory # the main minion configuration file lives in (this file). Paths can make use # of shell-style globbing. 
If no files are matched by a path passed to this # option then the minion will log a warning message. # # Include a config file from some other path: # include: /etc/salt/extra_config # # Include config from several files and directories: #include: # - /etc/salt/extra_config # - /etc/roles/webserver # # # ##### Minion module management ##### ########################################## # Disable specific modules. This allows the admin to limit the level of # access the master has to the minion. #disable_modules: [cmd,test] #disable_returners: [] # # Modules can be loaded from arbitrary paths. This enables the easy deployment # of third party modules. Modules for returners and minions can be loaded. # Specify a list of extra directories to search for minion modules and # returners. These paths must be fully qualified! #module_dirs: [] #returner_dirs: [] #states_dirs: [] #render_dirs: [] #utils_dirs: [] # # A module provider can be statically overwritten or extended for the minion # via the providers option, in this case the default module will be # overwritten by the specified module. In this example the pkg module will # be provided by the yumpkg5 module instead of the system default. #providers: # pkg: yumpkg5 # # Enable Cython modules searching and loading. (Default: False) #cython_enable: False # # Specify a max size (in bytes) for modules on import. This feature is currently # only supported on *nix operating systems and requires psutil. # modules_max_memory: -1 ##### State Management Settings ##### ########################################### # The default renderer to use in SLS files. This is configured as a # pipe-delimited expression. For example, jinja|yaml will first run jinja # templating on the SLS file, and then load the result as YAML. This syntax is # documented in further depth at the following URL: # # https://docs.saltproject.io/en/latest/ref/renderers/#composing-renderers # # NOTE: The "shebang" prefix (e.g. 
"#!jinja|yaml") described in the # documentation linked above is for use in an SLS file to override the default # renderer, it should not be used when configuring the renderer here. # #renderer: jinja|yaml # # The failhard option tells the minions to stop immediately after the first # failure detected in the state execution. Defaults to False. #failhard: False # # Reload the modules prior to a highstate run. #autoload_dynamic_modules: True # # clean_dynamic_modules keeps the dynamic modules on the minion in sync with # the dynamic modules on the master, this means that if a dynamic module is # not on the master it will be deleted from the minion. By default, this is # enabled and can be disabled by changing this value to False. #clean_dynamic_modules: True # # Normally, the minion is not isolated to any single environment on the master # when running states, but the environment can be isolated on the minion side # by statically setting it. Remember that the recommended way to manage # environments is to isolate via the top file. #environment: None # # If using the local file directory, then the state top file name needs to be # defined, by default this is top.sls. #state_top: top.sls # # Run states when the minion daemon starts. To enable, set startup_states to: # 'highstate' -- Execute state.highstate # 'sls' -- Read in the sls_list option and execute the named sls files # 'top' -- Read top_file option and execute based on that file on the Master #startup_states: '' # # List of states to run when the minion starts up if startup_states is 'sls': #sls_list: # - edit.vim # - hyper # # Top file to execute if startup_states is 'top': #top_file: '' # Automatically aggregate all states that have support for mod_aggregate by # setting to True. Or pass a list of state module names to automatically # aggregate just those types. 
#
# state_aggregate:
#   - pkg
#
#state_aggregate: False

#####     File Directory Settings    #####
##########################################

# The Salt Minion can redirect all file server operations to a local directory;
# this allows the same state tree that is on the master to be used if it is
# copied completely onto the minion. This is a literal copy of the settings on
# the master, but used to reference a local directory on the minion.

# Set the file client. The client defaults to looking on the master server for
# files, but can be directed to look at the local file directory setting
# defined below by setting it to "local". Setting a local file_client runs the
# minion in masterless mode.
#file_client: remote

# The file directory works on environments passed to the minion. Each
# environment can have multiple root directories, but the subdirectories in
# the multiple file roots must not overlap, otherwise the downloaded files
# cannot be reliably verified. A base environment is required to house the
# top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
#
#file_roots:
#  base:
#    - /srv/salt

# The hash_type is the hash to use when discovering the hash of a file in
# the local fileserver. The default is sha256, but sha224, sha384 and sha512
# are also supported.
#
# WARNING: While md5 and sha1 are also supported, do not use them due to the
# high chance of collisions and the resulting security risk.
#
# Warning: Prior to changing this value, the minion should be stopped and all
# Salt caches should be cleared.
#hash_type: sha256

# The Salt pillar is searched for locally if file_client is set to local.
If # this is the case, and pillar data is defined, then the pillar_roots need to # also be configured on the minion: #pillar_roots: # base: # - /srv/pillar # # ###### Security settings ##### ########################################### # Enable "open mode", this mode still maintains encryption, but turns off # authentication, this is only intended for highly secure environments or for # the situation where your keys end up in a bad state. If you run in open mode # you do so at your own risk! #open_mode: False # Enable permissive access to the salt keys. This allows you to run the # master or minion as root, but have a non-root group be given access to # your pki_dir. To make the access explicit, root must belong to the group # you've given access to. This is potentially quite insecure. #permissive_pki_access: False # The state_verbose and state_output settings can be used to change the way # state system data is printed to the display. By default all data is printed. # The state_verbose setting can be set to True or False, when set to False # all data that has a result of True and no changes will be suppressed. #state_verbose: True # The state_output setting controls which results will be output full multi line # full, terse - each state will be full/terse # mixed - only states with errors will be full # changes - states with changes and errors will be full # full_id, mixed_id, changes_id and terse_id are also allowed; # when set, the state ID will be used as name in the output #state_output: full # The state_output_diff setting changes whether or not the output from # successful states is returned. Useful when even the terse output of these # states is cluttering the logs. Set it to True to ignore them. #state_output_diff: False # The state_output_profile setting changes whether profile information # will be shown for each state run. 
#state_output_profile: True

# The state_output_pct setting changes whether success and failure information
# as a percent of total actions will be shown for each state run.
#state_output_pct: False

# The state_compress_ids setting aggregates information about states which
# have multiple "names" under the same state ID in the highstate output.
#state_compress_ids: False

# Fingerprint of the master public key to validate the identity of your Salt
# master before the initial key exchange. The master fingerprint can be found
# by running "salt-key -F master" on the Salt master.
#master_finger: ''

######        Thread settings        #####
###########################################
# Disable multiprocessing support; by default when a minion receives a
# publication a new process is spawned and the command is executed therein.
#multiprocessing: True

#####         Logging settings       #####
##########################################

# The location of the minion log file
# The minion log can be sent to a regular file, local path name, or network
# location. Remote logging works best when configured to use rsyslogd(8)
# (e.g.: ``file:///dev/log``), with rsyslogd(8) configured for network
# logging. The URI format is:
# <file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
#log_file: /var/log/salt/minion
#log_file: file:///dev/log
#log_file: udp://loghost:10514
#
#log_file: /var/log/salt/minion
#key_logfile: /var/log/salt/key

# The level of messages to send to the console.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
#
# The following log levels are considered INSECURE and may log sensitive data:
# ['garbage', 'trace', 'debug']
#
# Default: 'warning'
#log_level: warning

# The level of messages to send to the log file.
# One of 'garbage', 'trace', 'debug', 'info', 'warning', 'error', 'critical'.
# If using 'log_granular_levels' this must be set to the highest desired level.
# Default: 'warning'
#log_level_logfile:

# The date and time format used in log messages. Allowed date/time formatting
# can be seen here: http://docs.python.org/library/time.html#time.strftime
#log_datefmt: '%H:%M:%S'
#log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'

# The format of the console logging messages. Allowed formatting options can
# be seen here: http://docs.python.org/library/logging.html#logrecord-attributes
#
# Console log colors are specified by these additional formatters:
#
# %(colorlevel)s
# %(colorname)s
# %(colorprocess)s
# %(colormsg)s
#
# Since it is desirable to include the surrounding brackets, '[' and ']', in
# the coloring of the messages, these color formatters also include padding
# as well. Color LogRecord attributes are only available for console logging.
#
#log_fmt_console: '%(colorlevel)s %(colormsg)s'
#log_fmt_console: '[%(levelname)-8s] %(message)s'
#
#log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'

# This can be used to control logging levels more specifically. This
# example sets the main salt library at the 'warning' level, but sets
# 'salt.modules' to log at the 'debug' level:
#   log_granular_levels:
#     'salt': 'warning'
#     'salt.modules': 'debug'
#
#log_granular_levels: {}

# To diagnose issues with minions disconnecting or missing returns, ZeroMQ
# supports the use of monitor sockets to log connection events. This
# feature requires ZeroMQ 4.0 or higher.
#
# To enable ZeroMQ monitor sockets, set 'zmq_monitor' to 'True' and log at a
# debug level or higher.
#
# A sample log event is as follows:
#
# [DEBUG   ] ZeroMQ event: {'endpoint': 'tcp://127.0.0.1:4505', 'event': 512,
# 'value': 27, 'description': 'EVENT_DISCONNECTED'}
#
# All events logged will include the string 'ZeroMQ event'. A connection event
# should be logged as the minion starts up and initially connects to the
# master. If not, check for debug log level and that the necessary version of
# ZeroMQ is installed.
#
#zmq_monitor: False

######      Module configuration      #####
###########################################
# Salt allows for modules to be passed arbitrary configuration data. Any data
# passed here in valid YAML format will be passed on to the salt minion
# modules for use. It is STRONGLY recommended that a naming convention be used
# in which the module name is followed by a . and then the value. Also, all
# top-level data must be applied via the YAML dict construct. Some examples:
#
# You can specify that all modules should run in test mode:
#test: True
#
# A simple value for the test module:
#test.foo: foo
#
# A list for the test module:
#test.bar: [baz,quo]
#
# A dict for the test module:
#test.baz: {spam: sausage, cheese: bread}
#
#
######      Update settings          ######
###########################################
# Using the features in Esky, a salt minion can both run as a frozen app and
# be updated on the fly. These options control how the update process
# (saltutil.update()) behaves.
#
# The url for finding and downloading updates. Disabled by default.
#update_url: False
#
# The list of services to restart after a successful update. Empty by default.
#update_restart_services: []

######      Keepalive settings        ######
############################################
# ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by
# the OS. If connections between the minion and the master pass through
# a state tracking device such as a firewall or VPN gateway, there is the
# risk that it could tear down the connection between the master and minion
# without informing either party that their connection has been taken away.
# Enabling TCP Keepalives prevents this from happening.

# Overall state of TCP Keepalives: enable (1 or True), disable (0 or False)
# or leave to the OS defaults (-1), on Linux typically disabled.
# Default True, enabled.
#tcp_keepalive: True

# How long before the first keepalive should be sent in seconds.
Default 300 # to send the first keepalive after 5 minutes, OS default (-1) is typically 7200 seconds # on Linux see /proc/sys/net/ipv4/tcp_keepalive_time. #tcp_keepalive_idle: 300 # How many lost probes are needed to consider the connection lost. Default -1 # to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes. #tcp_keepalive_cnt: -1 # How often, in seconds, to send keepalives after the first one. Default -1 to # use OS defaults, typically 75 seconds on Linux, see # /proc/sys/net/ipv4/tcp_keepalive_intvl. #tcp_keepalive_intvl: -1 ###### Windows Software settings ###### ############################################ # Location of the repository cache file on the master: #win_repo_cachefile: 'salt://win/repo/winrepo.p' ###### Returner settings ###### ############################################ # Which returner(s) will be used for minion's result: #return: mysql
Minion Blackout Configuration¶
New in version 2016.3.0.
Salt supports minion blackouts. When a minion is in blackout mode, all remote execution commands are disabled. This allows production minions to be put "on hold", eliminating the risk of an untimely configuration change.
Minion blackouts are configured via a special pillar key, minion_blackout. If this key is set to True, the minion will reject all incoming commands except saltutil.refresh_pillar. (The exception is important so that minions can be brought out of blackout mode.)
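For example, a pillar file targeted at the minions to be blacked out (the file path below is illustrative) needs only the one key:

```yaml
# e.g. /srv/pillar/blackout.sls (illustrative path)
minion_blackout: True
```

Setting the key back to False (or removing it) and running saltutil.refresh_pillar on the affected minions lifts the blackout.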
Salt also supports an explicit whitelist of additional functions that will be allowed during blackout. This is configured with the special pillar key minion_blackout_whitelist, which is formed as a list:
minion_blackout_whitelist:
- test.version
- pillar.get
Access Control System¶
New in version 0.10.4.
Salt maintains a standard system for granting non-administrative users granular control to execute Salt commands. The access control system is applied to all systems used to configure access to non-administrative control interfaces in Salt.
These interfaces include the peer system, the external auth system, and the publisher ACL system.
The access control system mandates a standard configuration syntax used in all three of the aforementioned systems. While this added functionality to the configuration in 0.10.4, it did not negate the old configuration.
Now specific functions can be opened up to specific minions from specific users in the case of external auth and publisher ACLs, and for specific minions in the case of the peer system.
Publisher ACL system¶
The Salt publisher ACL system allows system users other than root to execute select Salt commands on minions from the master.
NOTE:
external_auth is useful for salt-api or for making your own scripts that use Salt's Python API. It can be used at the CLI (with the -a flag) but it is more cumbersome as there are more steps involved. The only time it is useful at the CLI is when the local system is not configured to authenticate against an external service but you still want Salt to authenticate against an external service.
For more information and examples, see this Access Control System section.
The publisher ACL system is configured in the master configuration file via the publisher_acl configuration option. Under the publisher_acl configuration option, the users allowed to send commands are specified, along with a list of the minion functions that will be made available to each specified user. Both users and functions can be specified by exact match, shell glob, or regular expression. This configuration is much like the external_auth configuration:
publisher_acl:
# Allow thatch to execute anything.
thatch:
- .*
# Allow fred to use test and pkg, but only on "web*" minions.
fred:
- web*:
- test.*
- pkg.*
# Allow admin and managers to use saltutil module functions
admin|manager_.*:
- saltutil.*
# Allow users to use only my_mod functions on "web*" minions with specific arguments.
user_.*:
- web*:
- 'my_mod.*':
args:
- 'a.*'
- 'b.*'
kwargs:
'kwa': 'kwa.*'
'kwb': 'kwb'
Permission Issues¶
Directories required for publisher_acl must be modified to be readable by the users specified:
chmod 755 /var/cache/salt /var/cache/salt/master /var/cache/salt/master/jobs /var/run/salt /var/run/salt/master
NOTE:
If you are upgrading from earlier versions of salt you must also remove any existing user keys and re-start the Salt master:
rm /var/cache/salt/.*key
service salt-master restart
Whitelist and Blacklist¶
Salt's authentication systems can be configured by specifying what is allowed using a whitelist, or by specifying what is disallowed using a blacklist. If you specify a whitelist, only specified operations are allowed. If you specify a blacklist, all operations are allowed except those that are blacklisted.
See publisher_acl and publisher_acl_blacklist.
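As a sketch, a blacklist in the master configuration might look like the following (the user and module patterns here are illustrative):

```yaml
publisher_acl_blacklist:
  users:
    - root
    - '^(?!sudo_).*$'  # all non sudo users
  modules:
    - cmd.*
    - test.echo
```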
External Authentication System¶
Salt's External Authentication System (eAuth) allows for Salt to pass through command authorization to any external authentication system, such as PAM or LDAP.
NOTE:
external_auth is useful for salt-api or for making your own scripts that use Salt's Python API. It can be used at the CLI (with the -a flag) but it is more cumbersome as there are more steps involved. The only time it is useful at the CLI is when the local system is not configured to authenticate against an external service but you still want Salt to authenticate against an external service.
For more information and examples, see this Access Control System section.
External Authentication System Configuration¶
The external authentication system allows for specific users to be granted access to execute specific functions on specific minions. Access is configured in the master configuration file and uses the access control system:
external_auth:
pam:
thatch:
- 'web*':
- test.*
- network.*
steve|admin.*:
- .*
The above configuration allows the user thatch to execute functions in the test and network modules on the minions that match the web* target. User steve and the users whose logins start with admin, are granted unrestricted access to minion commands.
Salt respects the current PAM configuration in place, and uses the 'login' service to authenticate.
NOTE:
To allow access to wheel modules or runner modules the following @ syntax must be used:
external_auth:
pam:
thatch:
- '@wheel' # to allow access to all wheel modules
- '@runner' # to allow access to all runner modules
- '@jobs' # to allow access to the jobs runner and/or wheel module
Matching syntax¶
The structure of the external_auth dictionary can take the following shapes. User and function matches are exact matches, shell glob patterns or regular expressions; minion matches are compound targets.
By user:
external_auth:
<eauth backend>:
<user or group%>:
- <regex to match function>
By user, by minion:
external_auth:
<eauth backend>:
<user or group%>:
<minion compound target>:
- <regex to match function>
By user, by runner/wheel:
external_auth:
<eauth backend>:
<user or group%>:
<@runner or @wheel>:
- <regex to match function>
By user, by runner+wheel module:
external_auth:
<eauth backend>:
<user or group%>:
<@module_name>:
- <regex to match function without module_name>
Groups¶
To apply permissions to a group of users in an external authentication system, append a % to the ID:
external_auth:
pam:
admins%:
- '*':
- 'pkg.*'
Limiting by function arguments¶
Positional arguments or keyword arguments to functions can also be whitelisted.
New in version 2016.3.0.
external_auth:
pam:
my_user:
- '*':
- 'my_mod.*':
args:
- 'a.*'
- 'b.*'
kwargs:
'kwa': 'kwa.*'
'kwb': 'kwb'
- '@runner':
- 'runner_mod.*':
args:
- 'a.*'
- 'b.*'
kwargs:
'kwa': 'kwa.*'
'kwb': 'kwb'
The rules:
- 1.
- The argument values are matched as regular expressions.
- 2.
- If argument restrictions are specified, only matching calls are allowed.
- 3.
- If an argument isn't specified, any value is allowed.
- 4.
- To skip an arg, use the "match everything" regexp .*. For example, if arg0 and arg2 should be limited but arg1 and other arguments may have any value, use:
args:
- 'value0'
- '.*'
- 'value2'
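To illustrate rules 1-4, here is a minimal sketch of how positional arguments could be checked against such regex specs (illustrative logic only, not Salt's actual implementation):

```python
import re

def args_allowed(arg_specs, call_args):
    """Illustrative check of positional args against regex specs.

    Args beyond the spec list are unrestricted (rule 3), and a spec
    of '.*' effectively skips an argument (rule 4).
    """
    for spec, value in zip(arg_specs, call_args):
        # Each value must fully match its regex spec (rule 1).
        if not re.fullmatch(spec, str(value)):
            return False
    return True
```

With the specs ['value0', '.*', 'value2'], a call passes when its first and third arguments match value0 and value2, regardless of the second argument.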
Usage¶
The external authentication system can then be used from the command-line by any user on the same system as the master with the -a option:
$ salt -a pam web\* test.version
The system will ask the user for the credentials required by the authentication system and then publish the command.
Tokens¶
With external authentication alone, the authentication credentials will be required with every call to Salt. This can be alleviated with Salt tokens.
Tokens are short-term authorizations and can be created by adding a -T option when authenticating:
$ salt -T -a pam web\* test.version
Now a token will be created that has an expiration of 12 hours (by default). This token is stored in a file named salt_token in the active user's home directory.
Once the token is created, it is sent with all subsequent communications. User authentication does not need to be entered again until the token expires.
Token expiration time can be set in the Salt master config file.
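For example, to extend tokens from the 12-hour default to 24 hours, set the master config's token_expire option (the value is in seconds):

```yaml
token_expire: 86400
```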
LDAP and Active Directory¶
NOTE:
Salt supports both user and group authentication for LDAP (and for Active Directory accessed via its LDAP interface).
OpenLDAP and similar systems¶
LDAP configuration happens in the Salt master configuration file.
Server configuration values and their defaults:
# Server to auth against
auth.ldap.server: localhost

# Port to connect via
auth.ldap.port: 389

# Use TLS when connecting
auth.ldap.tls: False

# Use STARTTLS when connecting
auth.ldap.starttls: False

# LDAP scope level, almost always 2
auth.ldap.scope: 2

# Server specified in URI format
auth.ldap.uri: ''    # Overrides .ldap.server, .ldap.port, .ldap.tls above

# Verify server's TLS certificate
auth.ldap.no_verify: False

# Bind to LDAP anonymously to determine group membership
# Active Directory does not allow anonymous binds without special configuration
# In addition, if auth.ldap.anonymous is True, empty bind passwords are not permitted.
auth.ldap.anonymous: False

# FOR TESTING ONLY, this is a VERY insecure setting.
# If this is True, the LDAP bind password will be ignored and
# access will be determined by group membership alone with
# the group memberships being retrieved via anonymous bind
auth.ldap.auth_by_group_membership_only: False

# Require authenticating user to be part of this Organizational Unit
# This can be blank if your LDAP schema does not use this kind of OU
auth.ldap.groupou: 'Groups'

# Object Class for groups. An LDAP search will be done to find all groups of this
# class to which the authenticating user belongs.
auth.ldap.groupclass: 'posixGroup'

# Unique ID attribute name for the user
auth.ldap.accountattributename: 'memberUid'

# These are only for Active Directory
auth.ldap.activedirectory: False
auth.ldap.persontype: 'person'

auth.ldap.minion_stripdomains: []

# Redhat Identity Policy Audit
auth.ldap.freeipa: False
Authenticating to the LDAP Server¶
There are two phases to LDAP authentication. First, Salt authenticates to search for a user's Distinguished Name and group membership. The user it authenticates as in this phase is often a special LDAP system user with read-only access to the LDAP directory. After Salt searches the directory to determine the actual user's DN and groups, it re-authenticates as the user running the Salt commands.
If you are already aware of the structure of your DNs and permissions in your LDAP store are set such that users can look up their own group memberships, then the first and second users can be the same. To tell Salt this is the case, omit the auth.ldap.bindpw parameter. Note this is not the same thing as using an anonymous bind. Most LDAP servers will not permit anonymous bind, and as mentioned above, if auth.ldap.anonymous is False you cannot use an empty password.
You can template the binddn like this:
auth.ldap.basedn: dc=saltstack,dc=com
auth.ldap.binddn: uid={{ username }},cn=users,cn=accounts,dc=saltstack,dc=com
Salt will use the password entered on the salt command line in place of the bindpw.
To use two separate users, specify the LDAP lookup user in the binddn directive, and include a bindpw like so:
auth.ldap.binddn: uid=ldaplookup,cn=sysaccounts,cn=etc,dc=saltstack,dc=com
auth.ldap.bindpw: mypassword
As mentioned before, Salt uses a filter to find the DN associated with a user. Salt substitutes the {{ username }} value for the username when querying LDAP:
auth.ldap.filter: uid={{ username }}
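The substitution is a simple template expansion; a sketch of the idea (illustrative, not Salt's actual code):

```python
def expand_ldap_template(template, username):
    # Replace the {{ username }} placeholder with the authenticating user.
    return template.replace("{{ username }}", username)
```

So with the filter above and a (hypothetical) user jdoe, the search filter becomes uid=jdoe.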
Determining Group Memberships (OpenLDAP / non-Active Directory)¶
For OpenLDAP, to determine group membership, one can specify an OU that contains group data. This is prepended to the basedn to create a search path. Then the results are filtered against auth.ldap.groupclass, default posixGroup, and the account's 'name' attribute, memberUid by default.
auth.ldap.groupou: Groups
Note that as of 2017.7, auth.ldap.groupclass can refer to either a groupclass or an objectClass. For some LDAP servers (notably OpenLDAP without the memberOf overlay enabled), determining group membership requires knowing both the objectClass and the memberUid attributes. Usually for these servers you will want an auth.ldap.groupclass of posixGroup and an auth.ldap.groupattribute of memberUid.
LDAP servers with the memberOf overlay will have entries similar to auth.ldap.groupclass: person and auth.ldap.groupattribute: memberOf.
When using the ldap('DC=domain,DC=com') eauth operator, sometimes the records returned from LDAP or Active Directory have fully-qualified domain names attached, while minion IDs instead are simple hostnames. The parameter below allows the administrator to strip off a certain set of domain names so the hostnames looked up in the directory service can match the minion IDs.
auth.ldap.minion_stripdomains: ['.external.bigcorp.com', '.internal.bigcorp.com']
Determining Group Memberships (Active Directory)¶
Active Directory handles group membership differently, and does not utilize the groupou configuration variable. AD needs the following options in the master config:
auth.ldap.activedirectory: True
auth.ldap.filter: sAMAccountName={{username}}
auth.ldap.accountattributename: sAMAccountName
auth.ldap.groupclass: group
auth.ldap.persontype: person
To determine group membership in AD, the username and password that are entered when LDAP is requested as the eAuth mechanism on the command line are used to bind to AD's LDAP interface. If this bind fails, the user is denied access regardless of group membership. Next, the distinguishedName of the user is looked up with the following LDAP search:
(&(<value of auth.ldap.accountattributename>={{username}})
(objectClass=<value of auth.ldap.persontype>) )
This should return a distinguishedName that we can use to filter for group membership. Then the following LDAP query is executed:
(&(member=<distinguishedName from search above>)
(objectClass=<value of auth.ldap.groupclass>) )
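Putting the two queries together, here is a sketch of how the filters are assembled from the config values shown earlier (illustrative code, not Salt's implementation; the example DN is hypothetical):

```python
def user_dn_filter(account_attr, username, person_type):
    # First search: locate the user's distinguishedName.
    return f"(&({account_attr}={username})(objectClass={person_type}))"

def group_filter(user_dn, group_class):
    # Second search: find groups that list the DN as a member.
    return f"(&(member={user_dn})(objectClass={group_class}))"
```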
external_auth:
ldap:
test_ldap_user:
- '*':
- test.ping
To configure an LDAP group, append a % to the ID:
external_auth:
ldap:
test_ldap_group%:
- '*':
- test.echo
In addition, if there are a set of computers in the directory service that should be part of the eAuth definition, they can be specified like this:
external_auth:
ldap:
test_ldap_group%:
- ldap('DC=corp,DC=example,DC=com'):
- test.echo
The string inside ldap() above is any valid LDAP/AD tree limiter. OU= in particular is permitted as long as it would return a list of computer objects.
Peer Communication¶
Salt 0.9.0 introduced the capability for Salt minions to publish commands. The intent of this feature is not for Salt minions to act as independent brokers for one another, but to allow Salt minions to pass commands to each other.
In Salt 0.10.0 the ability to execute runners from the master was added. This allows for the master to return collective data from runners back to the minions via the peer interface.
The peer interface is configured through two options in the master configuration file. To allow minions to send commands to other minions, the peer configuration is used. To allow minions to execute runners from the master, the peer_run configuration is used.
Since this presents a security risk by allowing minions access to the master publisher, the capability is turned off by default. Minions can be allowed access to the master publisher on a per-minion basis using regular expressions, and minions with specific IDs can be allowed access to certain Salt modules and functions.
Peer Configuration¶
The configuration is done under the peer setting in the Salt master configuration file; here are a number of configuration possibilities.
The simplest approach is to enable all communication for all minions; this is only recommended for very secure environments.
peer:
.*:
- .*
This configuration will allow minions with IDs ending in example.com access to the test, ps, and pkg module functions.
peer:
.*example.com:
- test.*
- ps.*
- pkg.*
The configuration logic is simple: a regular expression is passed for matching minion IDs, and then a list of expressions matching minion functions is associated with that pattern. For instance, this configuration will also allow minions ending with foo.org access to the publisher.
peer:
.*example.com:
- test.*
- ps.*
- pkg.*
.*foo.org:
- test.*
- ps.*
- pkg.*
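The matching logic just described can be sketched in a few lines (an illustration, not Salt's actual implementation):

```python
import re

# Illustration of the peer matching logic: a minion-id regex maps to a
# list of regexes for the functions that minion may publish.
PEER = {
    r".*example.com": ["test.*", "ps.*", "pkg.*"],
    r".*foo.org": ["test.*", "ps.*", "pkg.*"],
}

def is_allowed(minion_id, function):
    """Return True when the minion's id and requested function both match."""
    for id_pattern, fun_patterns in PEER.items():
        if re.match(id_pattern, minion_id):
            if any(re.match(p, function) for p in fun_patterns):
                return True
    return False
```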
Peer Runner Communication¶
Configuration to allow minions to execute runners from the master is done via the peer_run option on the master. The peer_run configuration follows the same logic as the peer option. The only difference is that access is granted to runner modules.
To open up access to all minions to all runners:
peer_run:
.*:
- .*
This configuration will allow minions with IDs ending in example.com access to the manage and jobs runner functions.
peer_run:
.*example.com:
- manage.*
- jobs.*
Using Peer Communication¶
The publish module was created to manage peer communication. The publish module comes with a number of functions to execute peer communication in different ways. Currently there are three functions in the publish module. These examples will show how to test the peer system via the salt-call command.
To execute test.version on all minions:
# salt-call publish.publish \* test.version
To execute the manage.up runner:
# salt-call publish.runner manage.up
To match minions using other matchers, use tgt_type:
# salt-call publish.publish 'webserv* and not G@os:Ubuntu' test.version tgt_type='compound'
When to Use Each Authentication System¶
publisher_acl is useful for allowing local system users to run Salt commands without giving them root access. If you can log into the Salt master directly, then publisher_acl allows you to use Salt without root privileges. If the local system is configured to authenticate against a remote system, like LDAP or Active Directory, then publisher_acl will interact with the remote system transparently.
external_auth is useful for salt-api or for making your own scripts that use Salt's Python API. It can be used at the CLI (with the -a flag) but it is more cumbersome as there are more steps involved. The only time it is useful at the CLI is when the local system is not configured to authenticate against an external service but you still want Salt to authenticate against an external service.
Examples¶
The access controls are manifested using matchers in these configurations:
publisher_acl:
fred:
- web\*:
- pkg.list_pkgs
- test.*
- apache.*
In the above example, fred is able to send commands only to minions which match the specified glob target. This can be expanded to include other functions for other minions based on standard targets (all matchers are supported except the compound one).
external_auth:
pam:
dave:
- test.version
- mongo\*:
- network.*
- log\*:
- network.*
- pkg.*
- 'G@os:RedHat':
- kmod.*
steve:
- .*
The above allows for all minions to be hit by test.version by dave, and adds a few functions that dave can execute on other minions. It also allows steve unrestricted access to salt commands.
Job Management¶
New in version 0.9.7.
Since Salt executes jobs on many systems at once, it needs a way to manage the jobs running on all of those systems.
The Minion proc System¶
Salt Minions maintain a proc directory in the Salt cachedir. The proc directory holds files named after the executed job IDs. These files contain information about the currently running jobs on the minion and allow jobs to be looked up. With a default configuration, this directory is /var/cache/salt/{master|minion}/proc.
Functions in the saltutil Module¶
Salt 0.9.7 introduced a few new functions to the saltutil module for managing jobs. These functions are:
1. running - Returns the data of all running jobs that are found in the proc directory.
2. find_job - Returns specific data about a certain job based on job id.
3. signal_job - Allows for a given jid to be sent a signal.
4. term_job - Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.
5. kill_job - Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.
These functions make up the core of the back end used to manage jobs at the minion level.
The jobs Runner¶
A convenience runner front end and reporting system has been added as well. The jobs runner contains functions to make viewing data easier and cleaner.
active¶
The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned and jobs that are still running, making it easier to see what systems have completed a job and what systems are still being waited on.
# salt-run jobs.active
lookup_jid¶
When jobs are executed the return data is sent back to the master and cached. By default it is cached for 86400 seconds, but this can be configured via the keep_jobs_seconds option in the master configuration. Using the lookup_jid runner will display the same return data that the initial job invocation with the salt command would display.
# salt-run jobs.lookup_jid <job id number>
list_jobs¶
Before finding a historic job, it may be necessary to find the job ID. list_jobs will parse the cached execution data and display all of the job data for jobs that have already returned, fully or partially.
# salt-run jobs.list_jobs
Scheduling Jobs¶
Salt's scheduling system allows incremental executions on minions or the master. The schedule system exposes the execution of any execution function on minions or any runner on the master.
Scheduling can be enabled by multiple methods:
- schedule option in either the master or minion config files. These require the master or minion application to be restarted in order for the schedule to be implemented.
- Minion pillar data. Schedule is implemented by refreshing the minion's pillar data, for example by using saltutil.refresh_pillar.
- The schedule state or schedule module
NOTE:
A scheduled run has no output on the minion unless the config is set to info level or higher. Refer to minion-logging-settings.
States are executed on the minion, as all states are. You can pass positional arguments and provide a YAML dict of named arguments.
schedule:
job1:
function: state.sls
seconds: 3600
args:
- httpd
kwargs:
test: True
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour).
schedule:
job1:
function: state.sls
seconds: 3600
args:
- httpd
kwargs:
test: True
splay: 15
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 0 and 15 seconds.
schedule:
job1:
function: state.sls
seconds: 3600
args:
- httpd
kwargs:
test: True
splay:
start: 10
end: 15
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 10 and 15 seconds.
Schedule by Date and Time¶
New in version 2014.7.0.
The frequency of jobs can also be specified using date strings supported by the Python dateutil library, which must be installed for this feature to work.
schedule:
job1:
function: state.sls
args:
- httpd
kwargs:
test: True
when: 5:00pm
This will schedule the command: state.sls httpd test=True at 5:00 PM minion localtime.
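Because when strings are parsed with python-dateutil, you can check how a given string will be interpreted by parsing it directly (assuming dateutil is installed):

```python
from dateutil import parser

# The date portion defaults to today; only the time is taken from the string.
dt = parser.parse("5:00pm")
print(dt.hour, dt.minute)  # 17 0
```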
schedule:
job1:
function: state.sls
args:
- httpd
kwargs:
test: True
when:
- Monday 5:00pm
- Tuesday 3:00pm
- Wednesday 5:00pm
- Thursday 3:00pm
- Friday 5:00pm
This will schedule the command: state.sls httpd test=True at 5:00 PM on Monday, Wednesday and Friday, and 3:00 PM on Tuesday and Thursday.
schedule:
job1:
function: state.sls
args:
- httpd
kwargs:
test: True
when:
- 'tea time'
whens:
tea time: 1:40pm
deployment time: Friday 5:00pm
The Salt scheduler also allows custom phrases to be used for the when parameter. These whens can be stored as either pillar values or grain values.
schedule:
job1:
function: state.sls
seconds: 3600
args:
- httpd
kwargs:
test: True
range:
start: 8:00am
end: 5:00pm
This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) between the hours of 8:00 AM and 5:00 PM. The range parameter must be a dictionary with the date strings using the dateutil format.
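The in-range check amounts to a simple time-of-day comparison; a sketch using the standard library for illustration (the real scheduler parses these strings with dateutil):

```python
from datetime import datetime

def within_range(now_str, start_str, end_str, fmt="%I:%M%p"):
    """Return True when now falls between start and end (same-day range)."""
    now = datetime.strptime(now_str, fmt).time()
    start = datetime.strptime(start_str, fmt).time()
    end = datetime.strptime(end_str, fmt).time()
    return start <= now <= end

print(within_range("12:30pm", "8:00am", "5:00pm"))  # True
```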
schedule:
job1:
function: state.sls
seconds: 3600
args:
- httpd
kwargs:
test: True
range:
invert: True
start: 8:00am
end: 5:00pm
Using the invert option for range, this will schedule the command state.sls httpd test=True every 3600 seconds (every hour) until the current time is between the hours of 8:00 AM and 5:00 PM. The range parameter must be a dictionary with the date strings using the dateutil format.
schedule:
job1:
function: pkg.install
kwargs:
pkgs: [{'bar': '>1.2.3'}]
refresh: true
once: '2016-01-07T14:30:00'
This will schedule the function pkg.install to be executed once at the specified time. The schedule entry job1 will not be removed after the job completes, therefore use schedule.delete to manually remove it afterwards.
The default date format is ISO 8601 but can be overridden by also specifying the once_fmt option, like this:
schedule:
job1:
function: test.ping
once: 2015-04-22T20:21:00
once_fmt: '%Y-%m-%dT%H:%M:%S'
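The once_fmt string uses Python's strptime directives, so you can verify a format against a timestamp before scheduling it:

```python
from datetime import datetime

# Check that the once value parses with the once_fmt directives.
dt = datetime.strptime("2015-04-22T20:21:00", "%Y-%m-%dT%H:%M:%S")
print(dt.year, dt.hour)  # 2015 20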
Maximum Parallel Jobs Running¶
New in version 2014.7.0.
The scheduler also supports ensuring that there are no more than N copies of a particular routine running. Use this for jobs that may be long-running and could step on each other or pile up in case of infrastructure outage.
The default for maxrunning is 1.
schedule:
long_running_job:
function: big_file_transfer
jid_include: True
maxrunning: 1
Cron-like Schedule¶
New in version 2014.7.0.
schedule:
job1:
function: state.sls
cron: '*/15 * * * *'
args:
- httpd
kwargs:
test: True
The scheduler also supports scheduling jobs using a cron like format. This requires the Python croniter library.
Job Data Return¶
New in version 2015.5.0.
By default, data about job runs from the Salt scheduler is returned to the master. Setting the return_job parameter to False will prevent the data from being sent back to the Salt master.
schedule:
job1:
function: scheduled_job_function
return_job: False
Job Metadata¶
New in version 2015.5.0.
It can be useful to include specific data to differentiate a job from other jobs. Using the metadata parameter, special values can be associated with a scheduled job. These values are not used in the execution of the job, but can be used to search for specific jobs later if combined with the return_job parameter. The metadata parameter must be specified as a dictionary, otherwise it will be ignored.
schedule:
job1:
function: scheduled_job_function
metadata:
foo: bar
Run on Start¶
New in version 2015.5.0.
By default, any job scheduled relative to the startup time of the minion will run when the minion starts up. Sometimes this is not the desired behavior. Setting the run_on_start parameter to False will cause the scheduler to skip this first run and wait until the next scheduled run:
schedule:
job1:
function: state.sls
seconds: 3600
run_on_start: False
args:
- httpd
kwargs:
test: True
Until and After¶
New in version 2015.8.0.
schedule:
job1:
function: state.sls
seconds: 15
until: '12/31/2015 11:59pm'
args:
- httpd
kwargs:
test: True
Using the until argument, the Salt scheduler allows you to specify an end time for a scheduled job. If this argument is specified, jobs will not run once the specified time has passed. The time should be specified in a format supported by the Python dateutil library, which must be installed.
New in version 2015.8.0.
schedule:
job1:
function: state.sls
seconds: 15
after: '12/31/2015 11:59pm'
args:
- httpd
kwargs:
test: True
Using the after argument, the Salt scheduler allows you to specify a start time for a scheduled job. If this argument is specified, jobs will not run until the specified time has passed. The time should be specified in a format supported by the Python dateutil library, which must be installed.
Scheduling States¶
schedule:
log-loadavg:
function: cmd.run
seconds: 3660
args:
- 'logger -t salt < /proc/loadavg'
kwargs:
stateful: False
shell: /bin/sh
Scheduling Highstates¶
To set up a highstate to run on a minion every 60 minutes set this in the minion config or pillar:
schedule:
highstate:
function: state.highstate
minutes: 60
Time intervals can be specified as seconds, minutes, hours, or days.
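For example, the same highstate could be scheduled daily using the days keyword (the job name below is illustrative):

```yaml
schedule:
  daily_highstate:
    function: state.highstate
    days: 1
```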
Scheduling Runners¶
Runner executions can also be specified on the master within the master configuration file:
schedule:
run_my_orch:
function: state.orchestrate
hours: 6
splay: 600
args:
- orchestration.my_orch
The above configuration is analogous to running salt-run state.orch orchestration.my_orch every 6 hours.
Scheduler With Returner¶
The scheduler is also useful for tasks like gathering monitoring data about a minion; this schedule option will gather status data and send it to a MySQL returner database:
schedule:
uptime:
function: status.uptime
seconds: 60
returner: mysql
meminfo:
function: status.meminfo
minutes: 5
returner: mysql
Since specifying the returner repeatedly can be tiresome, the schedule_returner option is available to specify one or a list of global returners to be used by the minions when scheduling.
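For example, instead of repeating returner: mysql on each job, the minion configuration could set a single global returner:

```yaml
schedule_returner: mysql
```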
Managing the Job Cache¶
The Salt Master maintains a job cache of all job executions which can be queried via the jobs runner. This job cache is called the Default Job Cache.
Default Job Cache¶
A number of options are available when configuring the job cache. The default caching system uses local storage on the Salt Master and can be found in the job cache directory (on Linux systems this is typically /var/cache/salt/master/jobs). The default caching system is suitable for most deployments as it does not typically require any further configuration or management.
The default job cache is a temporary cache and jobs will be stored for 86400 seconds. If the default cache needs to store jobs for a different period the time can be easily adjusted by changing the keep_jobs_seconds parameter in the Salt Master configuration file. The value passed in is measured in seconds:
keep_jobs_seconds: 86400
Reducing the Size of the Default Job Cache¶
The Default Job Cache can sometimes be a burden on larger deployments (over 5000 minions). Disabling the job cache will make previously executed jobs unavailable to the jobs system and is not generally recommended. Normally it is wise to make sure the master has access to a fast I/O system or that a tmpfs is mounted on the jobs directory.
However, you can disable the job_cache by setting it to False in the Salt Master configuration file. Setting this value to False means that the Salt Master will no longer cache minion returns, but a JID directory and jid file for each job will still be created. This JID directory is necessary for checking for and preventing JID collisions.
The default location for the job cache is in the /var/cache/salt/master/jobs/ directory.
Setting the job_cache to False in addition to setting the keep_jobs_seconds option to a smaller value, such as 3600, in the Salt Master configuration file will reduce the size of the Default Job Cache, and thus the burden on the Salt Master.
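Combining the two settings described above, the master configuration file would contain:

```yaml
# Reduce the Default Job Cache: stop caching minion returns and keep
# JID directories for only one hour.
job_cache: False
keep_jobs_seconds: 3600
```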
Additional Job Cache Options¶
Many deployments may wish to use an external database to maintain a long term register of executed jobs. Salt comes with two main mechanisms to do this, the master job cache and the external job cache.
See Storing Job Results in an External System.
Storing Job Results in an External System¶
After a job executes, job results are returned to the Salt Master by each Salt Minion. These results are stored in the Default Job Cache.
In addition to the Default Job Cache, Salt provides two additional mechanisms to send job results to other systems (databases, local syslog, and others):
- External Job Cache
- Master Job Cache
The major difference between these two mechanisms is where results are returned from (the Salt Master or the Salt Minion). Configuring either of these options will also cause the Jobs Runner functions to automatically query the remote stores for information.
External Job Cache - Minion-Side Returner¶
When an External Job Cache is configured, data is returned to the Default Job Cache on the Salt Master like usual, and then results are also sent to an External Job Cache using a Salt returner module running on the Salt Minion. [image]
- Advantages: Data is stored without placing additional load on the Salt Master.
- Disadvantages: Each Salt Minion connects to the external job cache, which can result in a large number of connections. Also requires additional configuration to get returner module settings on all Salt Minions.
Master Job Cache - Master-Side Returner¶
New in version 2014.7.0.
Instead of configuring an External Job Cache on each Salt Minion, you can configure the Master Job Cache to send job results from the Salt Master instead. In this configuration, Salt Minions send data to the Default Job Cache as usual, and then the Salt Master sends the data to the external system using a Salt returner module running on the Salt Master. [image]
- Advantages: A single connection is required to the external system. This is preferred for databases and similar systems.
- Disadvantages: Places additional load on your Salt Master.
Configure an External or Master Job Cache¶
Step 1: Understand Salt Returners¶
Before you configure a job cache, it is essential to understand Salt returner modules ("returners"). Returners are pluggable Salt Modules that take the data returned by jobs, and then perform any necessary steps to send the data to an external system. For example, a returner might establish a connection, authenticate, and then format and transfer data.
The Salt Returner system provides the core functionality used by the External and Master Job Cache systems, and the same returners are used by both systems.
Salt currently provides many different returners that let you connect to a wide variety of systems. A complete list is available at all Salt returners. Each returner is configured differently, so make sure you read and follow the instructions linked from that page.
For example, the MySQL returner requires:
- A database created using provided schema (structure is available at MySQL returner)
- A user created with privileges to the database
- Optional SSL configuration
A simpler returner, such as Slack or HipChat, requires:
- An API key/version
- The target channel/room
- The username that should be used to send the message
Step 2: Configure the Returner¶
After you understand the configuration and have the external system ready, the configuration requirements must be declared.
External Job Cache¶
The returner configuration settings can be declared in the Salt Minion configuration file, the Minion's pillar data, or the Minion's grains.
If external_job_cache configuration settings are specified in more than one place, the options are retrieved in the following order. The first configuration location that is found is the one that will be used.
- Minion configuration file
- Minion's grains
- Minion's pillar data
Master Job Cache¶
The returner configuration settings for the Master Job Cache should be declared in the Salt Master's configuration file.
Configuration File Examples¶
MySQL requires:
mysql.host: 'salt'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
Slack requires:
slack.channel: 'channel'
slack.api_key: 'key'
slack.from_name: 'name'
After you have configured the returner and added settings to the configuration file, you can enable the External or Master Job Cache.
Step 3: Enable the External or Master Job Cache¶
Configuration is a single line that specifies an already-configured returner to use to send all job data to an external system.
External Job Cache¶
To enable a returner as the External Job Cache (Minion-side), add the following line to the Salt Master configuration file:
ext_job_cache: <returner>
For example:
ext_job_cache: mysql
Master Job Cache¶
To enable a returner as a Master Job Cache (Master-side), add the following line to the Salt Master configuration file:
master_job_cache: <returner>
For example:
master_job_cache: mysql
Verify that the returner configuration settings are in the Master configuration file, and be sure to restart the salt-master service after you make configuration changes (service salt-master restart).
Logging¶
The Salt Project tries to make logging work for you and to help us solve any issues you might find along the way.
If you want more information on the nitty-gritty of Salt's logging system, please head over to the logging development document; if all you're after is Salt's logging configuration, please continue reading.
Log Levels¶
The log levels are ordered numerically such that setting the log level to a specific level will record all log statements at that level and higher. For example, setting log_level: error will log statements at error, critical, and quiet levels, although nothing should be logged at quiet level.
Most of the logging levels are defined by default in Python's logging library and can be found in the official Python documentation. Salt uses some more levels in addition to the standard levels. All levels available in salt are shown in the table below.
Level | Numeric value | Description |
quiet | 1000 | Nothing should be logged at this level |
critical | 50 | Critical errors |
error | 40 | Errors |
warning | 30 | Warnings |
info | 20 | Normal log information |
profile | 15 | Profiling information on salt performance |
debug | 10 | Information useful for debugging both salt implementations and salt code |
trace | 5 | More detailed code debugging information |
garbage | 1 | Even more debugging information |
all | 0 | Everything |
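This ordering mirrors Python's standard logging module, on which Salt's levels are built; a logger only emits records at or above its configured level:

```python
import logging

# Register Salt's custom "profile" level, for illustration.
logging.addLevelName(15, "PROFILE")

logger = logging.getLogger("demo")
logger.setLevel(logging.ERROR)  # numeric value 40

print(logger.isEnabledFor(logging.WARNING))   # False: 30 < 40
print(logger.isEnabledFor(logging.CRITICAL))  # True: 50 >= 40
```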
Available Configuration Settings¶
log_file¶
The log records can be sent to a regular file, local path name, or network location. Remote logging works best when configured to use rsyslogd(8) (e.g.: file:///dev/log), with rsyslogd(8) configured for network logging. The format for remote addresses is:
<file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
Where log-facility is the symbolic name of a syslog facility as defined in the SysLogHandler documentation. It defaults to LOG_USER.
Default: Dependent on the binary being executed; for example, for salt-master it is /var/log/salt/master.
Examples:
log_file: /var/log/salt/master
log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: file:///dev/log/LOG_DAEMON
log_file: udp://loghost:10514
log_level¶
Default: warning
The level of log record messages to send to the console. One of all, garbage, trace, debug, profile, info, warning, error, critical, quiet.
log_level: warning
log_level_logfile¶
Default: info
The level of messages to send to the log file. One of all, garbage, trace, debug, profile, info, warning, error, critical, quiet.
log_level_logfile: warning
log_datefmt¶
Default: %H:%M:%S
The date and time format used in console log messages. Allowed date/time formatting matches those used in time.strftime().
log_datefmt: '%H:%M:%S'
log_datefmt_logfile¶
Default: %Y-%m-%d %H:%M:%S
The date and time format used in log file messages. Allowed date/time formatting matches those used in time.strftime().
log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console¶
Default: [%(levelname)-8s] %(message)s
The format of the console logging messages. All standard python logging LogRecord attributes can be used. Salt also provides these custom LogRecord attributes to colorize console log output:
"%(colorlevel)s" # log level name colorized by level "%(colorname)s" # colorized module name "%(colorprocess)s" # colorized process number "%(colormsg)s" # log message colorized by level
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile¶
Default: %(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s
The format of the log file logging messages. All standard python logging LogRecord attributes can be used. Salt also provides these custom LogRecord attributes that include padding and enclosing brackets [ and ]:
"%(bracketlevel)s" # equivalent to [%(levelname)-8s] "%(bracketname)s" # equivalent to [%(name)-17s] "%(bracketprocess)s" # equivalent to [%(process)5s]
log_fmt_logfile: '%(asctime)s,%(msecs)03d [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels¶
Default: {}
This can be used to control logging levels more specifically, based on log call name. The example sets the main salt library at the 'warning' level, sets salt.modules to log at the debug level, and sets a custom module to the all level:
log_granular_levels:
'salt': 'warning'
'salt.modules': 'debug'
'salt.loader.saltmaster.ext.module.custom_module': 'all'
You can determine what log call name to use here by adding %(module)s to the log format. Typically, it is the path of the file which generates the log, without the trailing .py and with path separators replaced by periods.
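The same per-logger behavior can be observed directly with Python's standard logging module, which log_granular_levels configures under the hood (the names below match the example above):

```python
import logging

# Mirror the log_granular_levels example with plain Python logging.
logging.getLogger("salt").setLevel(logging.WARNING)
logging.getLogger("salt.modules").setLevel(logging.DEBUG)

# Child loggers inherit their nearest ancestor's level unless given their own.
print(logging.getLogger("salt.modules").getEffectiveLevel())  # 10 (debug)
print(logging.getLogger("salt.states").getEffectiveLevel())   # 30 (inherited warning)
```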
log_fmt_jid¶
Default: [JID: %(jid)s]
The format of the JID when added to logging messages.
log_fmt_jid: '[JID: %(jid)s]'
External Logging Handlers¶
Besides the internal logging handlers used by Salt, there are some external handlers which can be used; see the external logging handlers document.
External Logging Handlers¶
fluent_mod | Fluent Logging Handler |
log4mongo_mod | Log4Mongo Logging Handler |
logstash_mod | Logstash Logging Handler |
sentry_mod | Sentry Logging Handler |
salt.log_handlers.fluent_mod¶
Fluent Logging Handler¶
New in version 2015.8.0.
This module provides some fluentd logging handlers.
Fluent Logging Handler¶
In the fluent configuration file:
<source>
  type forward
  bind localhost
  port 24224
</source>
Then, to send logs via fluent in Logstash format, add the following to the salt (master and/or minion) configuration file:
fluent_handler:
host: localhost
port: 24224
To send logs via fluent in the Graylog raw json format, add the following to the salt (master and/or minion) configuration file:
fluent_handler:
host: localhost
port: 24224
payload_type: graylog
tags:
- salt_master.SALT
The above also illustrates the tags option, which allows one to set descriptive (or useful) tags on records being sent. If not provided, this defaults to the single tag: 'salt'. Also note that, via Graylog "magic", the 'facility' of the logged message is set to 'SALT' (the portion of the tag after the first period), while the tag itself will be set to simply 'salt_master'. This is a feature, not a bug :)
Note: There is a third emitter, for the GELF format, but it is largely untested, and I don't currently have a setup supporting this config, so while it runs cleanly and outputs what LOOKS to be valid GELF, any real-world feedback on its usefulness, and correctness, will be appreciated.
Log Level¶
The fluent_handler configuration section accepts an additional setting log_level. If not set, the logging level used will be the one defined for log_level in the global configuration file section.
Inspiration: This work was inspired by fluent-logger-python.
salt.log_handlers.log4mongo_mod¶
Log4Mongo Logging Handler¶
This module provides a logging handler for sending salt logs to MongoDB
Configuration¶
In the salt configuration file (e.g. /etc/salt/{master,minion}):
log4mongo_handler:
host: mongodb_host
port: 27017
database_name: logs
collection: salt_logs
username: logging
password: reindeerflotilla
write_concern: 0
log_level: warning
Log Level¶
If not set, the log_level will be set to the level defined in the global configuration file setting.
Inspiration: This work was inspired by the Salt logging handlers for LogStash and Sentry and by the log4mongo Python implementation.
salt.log_handlers.logstash_mod¶
Logstash Logging Handler¶
New in version 0.17.0.
This module provides some Logstash logging handlers.
UDP Logging Handler¶
For versions of Logstash before 1.2.0:
In the salt configuration file:
logstash_udp_handler:
host: 127.0.0.1
port: 9999
version: 0
msg_type: logstash
In the Logstash configuration file:
input {
  udp {
    type => "udp-type"
    format => "json_event"
  }
}
For version 1.2.0 of Logstash and newer:
In the salt configuration file:
logstash_udp_handler:
host: 127.0.0.1
port: 9999
version: 1
msg_type: logstash
In the Logstash configuration file:
input {
  udp {
    port => 9999
    codec => json
  }
}
Please read the UDP input configuration page for additional information.
ZeroMQ Logging Handler¶
For versions of Logstash before 1.2.0:
In the salt configuration file:
logstash_zmq_handler:
address: tcp://127.0.0.1:2021
version: 0
In the Logstash configuration file:
input {
  zeromq {
    type => "zeromq-type"
    mode => "server"
    topology => "pubsub"
    address => "tcp://0.0.0.0:2021"
    charset => "UTF-8"
    format => "json_event"
  }
}
For version 1.2.0 of Logstash and newer:
In the salt configuration file:
logstash_zmq_handler:
address: tcp://127.0.0.1:2021
version: 1
In the Logstash configuration file:
input {
  zeromq {
    topology => "pubsub"
    address => "tcp://0.0.0.0:2021"
    codec => json
  }
}
Please read the ZeroMQ input configuration page for additional information.
Important Logstash Setting: One setting you should not forget in your Logstash configuration file regarding these logging handlers is format. Both the UDP and ZeroMQ inputs need format set to json_event, which is what we send over the wire.
Log Level¶
Both the logstash_udp_handler and the logstash_zmq_handler configuration sections accept an additional setting log_level. If not set, the logging level used will be the one defined for log_level in the global configuration file section.
HWM¶
The high water mark for the ZMQ socket setting. Only applicable for the logstash_zmq_handler.
Inspiration: This work was inspired by pylogstash, python-logstash, canary, and the PyZMQ logging handler.
salt.log_handlers.sentry_mod¶
Sentry Logging Handler¶
New in version 0.17.0.
This module provides a Sentry logging handler. Sentry is an open source error tracking platform that provides deep context about exceptions that happen in production. Details about stack traces along with the context variables available at the time of the exception are easily browsable and filterable from the online interface. For more details please see Sentry.
NOTE:
The Raven library needs to be installed on the system for this logging handler to be available.
Configuring the Python Sentry client, Raven, should be done under the sentry_handler configuration key. Additional context may be provided for corresponding grain item(s). At the bare minimum, you need to define the DSN. As an example:
sentry_handler:
dsn: https://pub-key:secret-key@app.getsentry.com/app-id
More complex configurations can be achieved, for example:
sentry_handler:
servers:
- https://sentry.example.com
- http://192.168.1.1
project: app-id
public_key: deadbeefdeadbeefdeadbeefdeadbeef
secret_key: beefdeadbeefdeadbeefdeadbeefdead
context:
- os
- master
- saltversion
- cpuarch
- ec2.tags.environment
NOTE:
The public_key and secret_key variables are not supported with Sentry > 3.0. The DSN key should be used instead.
All the client configuration keys are supported; please see the Raven client documentation.
The default logging level for the sentry handler is ERROR. If you wish to define a different one, define log_level under the sentry_handler configuration key:
sentry_handler:
dsn: https://pub-key:secret-key@app.getsentry.com/app-id
log_level: warning
The available log levels are those also available for the salt cli tools and configuration; salt --help should give you the required information.
Threaded Transports¶
Raven's documentation rightly suggests using its threaded transport for critical applications. However, if you start having trouble with Salt after enabling the threaded transport, try switching to a non-threaded transport to see if that fixes the problem.
Salt File Server¶
Salt comes with a simple file server suitable for distributing files to the Salt minions. The file server is a stateless ZeroMQ server that is built into the Salt master.
The main intent of the Salt file server is to present files for use in the Salt state system. With this said, the Salt file server can be used for any general file transfer from the master to the minions.
File Server Backends¶
In Salt 0.12.0, the modular fileserver was introduced. This feature added the ability for the Salt Master to integrate different file server backends. File server backends allow the Salt file server to act as a transparent bridge to external resources. A good example of this is the git backend, which allows Salt to serve files sourced from one or more git repositories, but there are several others as well. See the fileserver backend reference for a full list of Salt's fileserver backends.
Enabling a Fileserver Backend¶
Fileserver backends can be enabled with the fileserver_backend option.
fileserver_backend:
- git
See the documentation for each backend to find the correct value to add to fileserver_backend in order to enable them.
Using Multiple Backends¶
If fileserver_backend is not defined in the Master config file, Salt will use the roots backend, but the fileserver_backend option supports multiple backends. When more than one backend is in use, the files from the enabled backends are merged into a single virtual filesystem. When a file is requested, the backends will be searched in order for that file, and the first backend to match will be the one which returns the file.
fileserver_backend:
- roots
- git
With this configuration, the environments and files defined in the file_roots parameter will be searched first, and if the file is not found then the git repositories defined in gitfs_remotes will be searched.
Defining Environments¶
Just as the order of the values in fileserver_backend matters, so too does the order in which different sources are defined within a fileserver environment. For example, given the below file_roots configuration, if both /srv/salt/dev/foo.txt and /srv/salt/prod/foo.txt exist on the Master, then salt://foo.txt would point to /srv/salt/dev/foo.txt in the dev environment, but it would point to /srv/salt/prod/foo.txt in the base environment.
file_roots:
base:
- /srv/salt/prod
qa:
- /srv/salt/qa
- /srv/salt/prod
dev:
- /srv/salt/dev
- /srv/salt/qa
- /srv/salt/prod
Similarly, when using the git backend, if both repositories defined below have a hotfix23 branch/tag, and both of them also contain the file bar.txt in the root of the repository at that branch/tag, then salt://bar.txt in the hotfix23 environment would be served from the first repository.
gitfs_remotes:
- https://mydomain.tld/repos/first.git
- https://mydomain.tld/repos/second.git
NOTE:
See the documentation for each backend for a more detailed explanation of how environments are mapped.
Requesting Files from Specific Environments¶
The Salt fileserver supports multiple environments, allowing for SLS files and other files to be isolated for better organization.
For the default backend (called roots), environments are defined using the roots option. Other backends (such as gitfs) define environments in their own ways. For a list of available fileserver backends, see the fileserver backend reference.
Querystring Syntax¶
Any salt:// file URL can specify its fileserver environment using a querystring syntax, like so:
salt://path/to/file?saltenv=dev
In Reactor configurations, this method must be used to pull files from an environment other than base.
In States¶
Minions can be instructed which environment to use, both globally and on a per-state basis; multiple methods are available for each:
Globally¶
A minion can be pinned to an environment using the environment option in the minion config file.
Additionally, the environment can be set for a single call to the following functions:
- state.apply
- state.highstate
- state.sls
- state.top
On a Per-State Basis¶
Within an individual state, there are two ways of specifying the environment. The first is to add a saltenv argument to the state. This example will pull the file from the config environment:
/etc/foo/bar.conf:
file.managed:
- source: salt://foo/bar.conf
- user: foo
- mode: 600
- saltenv: config
Another way of doing the same thing is to use the querystring syntax described above:
/etc/foo/bar.conf:
file.managed:
- source: salt://foo/bar.conf?saltenv=config
- user: foo
- mode: 600
File Server Configuration¶
The Salt file server is a high performance file server written in ZeroMQ. It manages large files quickly and with little overhead, and has been optimized to handle small files in an extremely efficient manner.
The Salt file server is an environment aware file server. This means that files can be allocated within many root directories and accessed by specifying both the file path and the environment to search. The individual environments can span across multiple directory roots to create overlays and to allow for files to be organized in many flexible ways.
Periodic Restarts¶
The file server will restart periodically. The reason for this is to prevent any fileserver backends which may not properly handle resources from endlessly consuming memory. A notable example of this is using a git backend with the pygit2 library. How often the file server restarts can be controlled with the fileserver_interval option in your master's config file.
Environments¶
The Salt file server defaults to the mandatory base environment. This environment MUST be defined and is used to download files when no environment is specified.
Environments allow files and sls data to be logically separated, but environments are not isolated from each other. This leaves logical isolation of environments up to the engineer using Salt, while still allowing information to be shared across multiple environments.
Directory Overlay¶
The environment setting is a list of directories to publish files from. These directories are searched in order to find the specified file and the first file found is returned.
This means that directory data is prioritized based on the order in which they are listed. In the case of this file_roots configuration:
file_roots:
base:
- /srv/salt/base
- /srv/salt/failover
If a file's URI is salt://httpd/httpd.conf, it will first search for the file at /srv/salt/base/httpd/httpd.conf. If the file is found there it will be returned. If the file is not found there, then /srv/salt/failover/httpd/httpd.conf will be used for the source.
This allows for directories to be overlaid and prioritized based on the order they are defined in the configuration.
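The overlay search described above can be sketched in a few lines of Python (a simplified model; Salt's real implementation lives in the roots fileserver backend):

```python
import os
import tempfile


def find_in_roots(rel_path, roots):
    """Return the first match for rel_path across an ordered list of root
    directories -- a simplified model of the file_roots overlay search."""
    for root in roots:
        candidate = os.path.join(root, rel_path)
        if os.path.isfile(candidate):
            return candidate
    return None


# Demonstrate with throwaway directories standing in for /srv/salt/base
# and /srv/salt/failover:
with tempfile.TemporaryDirectory() as tmp:
    base = os.path.join(tmp, "base")
    failover = os.path.join(tmp, "failover")
    for d in (base, failover):
        os.makedirs(os.path.join(d, "httpd"))
    # Only the failover root actually contains the file:
    open(os.path.join(failover, "httpd", "httpd.conf"), "w").close()
    found = find_in_roots(os.path.join("httpd", "httpd.conf"), [base, failover])
    assert found == os.path.join(failover, "httpd", "httpd.conf")
```

Had the file existed in both roots, the base copy would win because it is listed first.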
It is also possible to have file_roots which supports multiple environments:
file_roots:
base:
- /srv/salt/base
dev:
- /srv/salt/dev
- /srv/salt/base
prod:
- /srv/salt/prod
- /srv/salt/base
This example ensures that each environment will check the associated environment directory for files first. If a file is not found in the appropriate directory, the system will default to using the base directory.
Local File Server¶
New in version 0.9.8.
The file server can be rerouted to run from the minion. This is primarily to enable running Salt states without a Salt master. To use the local file server interface, copy the file server data to the minion and set the file_roots option on the minion to point to the directories copied from the master. Once the minion file_roots option has been set, change the file_client option to local to make sure that the local file server interface is used.
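A minimal masterless configuration along these lines might look like the following minion config (assuming the fileserver data was copied to /srv/salt on the minion):

```yaml
# /etc/salt/minion
file_client: local
file_roots:
  base:
    - /srv/salt
```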
The cp Module¶
The cp module is the home of minion side file server operations. The cp module is used by the Salt state system, salt-cp, and can be used to distribute files presented by the Salt file server.
Escaping Special Characters¶
The salt:// url format can potentially contain a query string, for example salt://dir/file.txt?saltenv=base. You can prevent the fileclient/fileserver from interpreting ? as the initial token of a query string by referencing the file with salt://| rather than salt://.
/etc/marathon/conf/?checkpoint:
file.managed:
- source: salt://|hw/config/?checkpoint
- makedirs: True
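The effect of the pipe escape can be modeled with a few lines of Python (a simplified sketch of the querystring handling, not Salt's actual parsing code):

```python
def split_salt_source(source):
    """Simplified sketch: split a salt:// URL into (path, saltenv),
    honoring the salt://| escape that suppresses querystring parsing."""
    body = source[len("salt://"):]
    if body.startswith("|"):
        # Escaped: everything after the pipe is taken as a literal path
        return body[1:], "base"
    path, sep, env = body.partition("?saltenv=")
    return path, (env if sep else "base")


print(split_salt_source("salt://dir/file.txt?saltenv=dev"))
# -> ('dir/file.txt', 'dev')
print(split_salt_source("salt://|hw/config/?checkpoint"))
# -> ('hw/config/?checkpoint', 'base')
```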
Environments¶
Since the file server is made to work with the Salt state system, it supports environments. The environments are defined in the master config file and when referencing an environment the file specified will be based on the root directory of the environment.
get_file¶
The cp.get_file function can be used on the minion to download a file from the master, the syntax looks like this:
salt '*' cp.get_file salt://vimrc /etc/vimrc
This will instruct all Salt minions to download the vimrc file and copy it to /etc/vimrc
Template rendering can be enabled on both the source and destination file names like so:
salt '*' cp.get_file "salt://{{grains.os}}/vimrc" /etc/vimrc template=jinja
This example would instruct all Salt minions to download the vimrc from a directory with the same name as their OS grain and copy it to /etc/vimrc
For larger files, the cp.get_file function also supports gzip compression. Because gzip is CPU-intensive, this should only be used in scenarios where the compression ratio is very high (e.g. pretty-printed JSON or YAML files).
To use compression, use the gzip named argument. Valid values are integers from 1 to 9, where 1 is the lightest compression and 9 the heaviest. In other words, 1 uses the least CPU on the master (and minion), while 9 uses the most.
salt '*' cp.get_file salt://vimrc /etc/vimrc gzip=5
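The CPU-versus-size tradeoff behaves like the standard gzip compression levels, which can be seen directly with Python's gzip module (this is just an illustration of the levels, not Salt code):

```python
import gzip

# Highly repetitive, pretty-printed-style data compresses very well:
data = b'{\n    "key": "value",\n    "enabled": true\n}\n' * 200

for level in (1, 5, 9):
    compressed = gzip.compress(data, compresslevel=level)
    print(f"level {level}: {len(data)} -> {len(compressed)} bytes")
```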
Finally, note that by default cp.get_file does not create new destination directories if they do not exist. To change this, use the makedirs argument:
salt '*' cp.get_file salt://vimrc /etc/vim/vimrc makedirs=True
In this example, /etc/vim/ would be created if it didn't already exist.
get_dir¶
The cp.get_dir function can be used on the minion to download an entire directory from the master. The syntax is very similar to get_file:
salt '*' cp.get_dir salt://etc/apache2 /etc
cp.get_dir supports template rendering and gzip compression arguments just like get_file:
salt '*' cp.get_dir salt://etc/{{pillar.webserver}} /etc gzip=5 template=jinja
File Server Client Instance¶
A client instance is available which allows for modules and applications to be written which make use of the Salt file server.
The file server uses the same authentication and encryption used by the rest of the Salt system for network communication.
fileclient Module¶
The salt/fileclient.py module is used to set up the communication from the minion to the master. When creating a client instance using the fileclient module, the minion configuration needs to be passed in. When using the fileclient module from within a minion module, the built-in __opts__ data can be passed:
import salt.minion
import salt.fileclient


def get_file(path, dest, saltenv="base"):
    """
    Used to get a single file from the Salt master

    CLI Example:

        salt '*' cp.get_file salt://vimrc /etc/vimrc
    """
    # Get the fileclient object
    client = salt.fileclient.get_file_client(__opts__)
    # Call get_file
    return client.get_file(path, dest, False, saltenv)
When creating a fileclient instance outside of a minion module, where the __opts__ data is not available, the configuration data needs to be generated:
import salt.fileclient
import salt.config


def get_file(path, dest, saltenv="base"):
    """
    Used to get a single file from the Salt master
    """
    # Get the configuration data
    opts = salt.config.minion_config("/etc/salt/minion")
    # Get the fileclient object
    client = salt.fileclient.get_file_client(opts)
    # Call get_file
    return client.get_file(path, dest, False, saltenv)
Git Fileserver Backend Walkthrough¶
The gitfs backend allows Salt to serve files from git repositories. It can be enabled by adding git to the fileserver_backend list, and configuring one or more repositories in gitfs_remotes.
Branches and tags become Salt fileserver environments.
Installing Dependencies¶
Both pygit2 and GitPython are supported Python interfaces to git. If compatible versions of both are installed, pygit2 will be preferred. In that case, GitPython can still be forced using the gitfs_provider parameter in the master config file.
pygit2¶
The minimum supported version of pygit2 is 0.20.3. Availability for this version of pygit2 is still limited, though the SaltStack team is working to get compatible versions available for as many platforms as possible.
For the Fedora/EPEL versions which have a new enough version packaged, the following command would be used to install pygit2:
# yum install python-pygit2
Provided a valid version is packaged for Debian/Ubuntu (which is not currently the case), the package name would be the same, and the following command would be used to install it:
# apt-get install python-pygit2
If pygit2 is not packaged for the platform on which the Master is running, the pygit2 website has installation instructions here. Keep in mind however that following these instructions will install libgit2 and pygit2 without system packages. Additionally, keep in mind that SSH authentication in pygit2 requires libssh2 (not libssh) development libraries to be present before libgit2 is built. On some Debian-based distros pkg-config is also required to link libgit2 with libssh2.
NOTE:
Additionally, version 0.21.0 of pygit2 introduced a dependency on python-cffi, which in turn depends on newer releases of libffi. Upgrading libffi is not advisable as several other applications depend on it, so on older LTS linux releases pygit2 0.20.3 and libgit2 0.20.0 is the recommended combination.
RedHat Pygit2 Issues¶
The release of RedHat/CentOS 7.3 upgraded both python-cffi and http-parser, both of which are dependencies for pygit2/libgit2. Both pygit2 and libgit2 packages (which are from the EPEL repository) should be upgraded to the most recent versions, at least to 0.24.2.
The below errors will show up in the master log if an incompatible python-pygit2 package is installed:
2017-02-10 09:07:34,892 [salt.utils.gitfs ][ERROR   ][11211] Import pygit2 failed: CompileError: command 'gcc' failed with exit status 1
2017-02-10 09:07:34,907 [salt.utils.gitfs ][ERROR   ][11211] gitfs is configured but could not be loaded, are pygit2 and libgit2 installed?
2017-02-10 09:07:34,907 [salt.utils.gitfs ][CRITICAL][11211] No suitable gitfs provider module is installed.
2017-02-10 09:07:34,912 [salt.master ][CRITICAL][11211] Master failed pre flight checks, exiting
The below errors will show up in the master log if an incompatible libgit2 package is installed:
2017-02-15 18:04:45,211 [salt.utils.gitfs ][ERROR ][6211] Error occurred fetching gitfs remote 'https://foo.com/bar.git': No Content-Type header in response
A restart of the salt-master daemon and gitfs cache directory clean up may be required to allow http(s) repositories to continue to be fetched.
Debian Pygit2 Issues¶
The Debian repos currently have older versions of pygit2 (package python3-pygit2). These older versions may have issues using newer SSH keys (see https://github.com/saltstack/salt/issues/61790). Instead, pygit2 can be installed from PyPI, but you will need a version that matches the libgit2 version from Debian, which is version 1.6.1.
# apt-get install libgit2
# salt-pip install pygit2==1.6.1 --no-deps
Note that the above instructions assume a onedir installation. The --no-deps flag is needed to prevent the CFFI package from mismatching with Salt.
GitPython¶
GitPython 0.3.0 or newer is required to use GitPython for gitfs. For RHEL-based Linux distros, a compatible version is available in EPEL, and can be easily installed on the master using yum:
# yum install GitPython
Ubuntu 14.04 LTS and Debian Wheezy (7.x) also have a compatible version packaged:
# apt-get install python-git
GitPython requires the git CLI utility to work. If installed from a system package, then git should already be installed, but if installed via pip then it may still be necessary to install git separately. For MacOS users, GitPython comes bundled in with the Salt installer, but git must still be installed for it to work properly. Git can be installed in several ways, including by installing XCode.
WARNING:
GitPython:
pip.installed:
- name: 'GitPython < 2.0.9'
Simple Configuration¶
To use the gitfs backend, only two configuration changes are required on the master:
1. Include gitfs in the fileserver_backend list in the master config file:
fileserver_backend:
- gitfs
2. Specify one or more git://, https://, file://, or ssh:// URLs in gitfs_remotes to configure which repositories to cache and search for requested files:
gitfs_remotes:
- https://github.com/saltstack-formulas/salt-formula.git
SSH remotes can also be configured using scp-like syntax:
gitfs_remotes:
- git@github.com:user/repo.git
- ssh://user@domain.tld/path/to/repo.git
Information on how to authenticate to SSH remotes can be found here.
3. Restart the master to load the new configuration.
Multiple Remotes¶
The gitfs_remotes option accepts an ordered list of git remotes to cache and search, in listed order, for requested files.
A simple scenario illustrates this cascading lookup behavior:
If the gitfs_remotes option specifies three remotes:
gitfs_remotes:
- git://github.com/example/first.git
- https://github.com/example/second.git
- file:///root/third
And each repository contains some files:
first.git:
top.sls
edit/vim.sls
edit/vimrc
nginx/init.sls
second.git:
edit/dev_vimrc
haproxy/init.sls
third:
haproxy/haproxy.conf
edit/dev_vimrc
Salt will attempt to look up the requested file from each gitfs remote repository in the order in which they are defined in the configuration. The git://github.com/example/first.git remote will be searched first. If the requested file is found, then it is served and no further searching is executed. For example:
- A request for the file salt://haproxy/init.sls will be served from the https://github.com/example/second.git git repo.
- A request for the file salt://haproxy/haproxy.conf will be served from the file:///root/third repo.
NOTE:
The file:// prefix denotes a git repository in a local directory. However, it will still use the given file:// URL as a remote, rather than copying the git repo to the salt cache. This means that any refs you want accessible must exist as local refs in the specified repo.
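The cascading lookup over the three remotes above can be sketched as follows (illustrative only; the file lists are hardcoded from the example):

```python
# File lists from the example above, in configured order:
remotes = [
    ("git://github.com/example/first.git",
     {"top.sls", "edit/vim.sls", "edit/vimrc", "nginx/init.sls"}),
    ("https://github.com/example/second.git",
     {"edit/dev_vimrc", "haproxy/init.sls"}),
    ("file:///root/third",
     {"haproxy/haproxy.conf", "edit/dev_vimrc"}),
]


def which_remote(path):
    """Return the first remote (in configured order) containing path."""
    for url, files in remotes:
        if path in files:
            return url
    return None


print(which_remote("haproxy/init.sls"))      # second.git
print(which_remote("haproxy/haproxy.conf"))  # third
print(which_remote("edit/dev_vimrc"))        # second.git wins over third
```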
Per-remote Configuration Parameters¶
New in version 2014.7.0.
The following master config parameters are global (that is, they apply to all configured gitfs remotes):
- gitfs_base
- gitfs_root
- gitfs_ssl_verify
- gitfs_mountpoint (new in 2014.7.0)
- gitfs_user (pygit2 only, new in 2014.7.0)
- gitfs_password (pygit2 only, new in 2014.7.0)
- gitfs_insecure_auth (pygit2 only, new in 2014.7.0)
- gitfs_pubkey (pygit2 only, new in 2014.7.0)
- gitfs_privkey (pygit2 only, new in 2014.7.0)
- gitfs_passphrase (pygit2 only, new in 2014.7.0)
- gitfs_refspecs (new in 2017.7.0)
- gitfs_disable_saltenv_mapping (new in 2018.3.0)
- gitfs_ref_types (new in 2018.3.0)
- gitfs_update_interval (new in 2018.3.0)
These parameters can now be overridden on a per-remote basis. This allows for a tremendous amount of customization. Here's some example usage:
gitfs_provider: pygit2
gitfs_base: develop
gitfs_remotes:
- https://foo.com/foo.git
- https://foo.com/bar.git:
- root: salt
- mountpoint: salt://bar
- base: salt-base
- ssl_verify: False
- update_interval: 120
- https://foo.com/bar.git:
- name: second_bar_repo
- root: other/salt
- mountpoint: salt://other/bar
- base: salt-base
- ref_types:
- branch
- http://foo.com/baz.git:
- root: salt/states
- user: joe
- password: mysupersecretpassword
- insecure_auth: True
- disable_saltenv_mapping: True
- saltenv:
- foo:
- ref: foo
- http://foo.com/quux.git:
- all_saltenvs: master
IMPORTANT:
1. The URL of a remote which has per-remote configuration must be suffixed with a colon.
2. Per-remote configuration parameters are named like the global versions, with the gitfs_ removed from the beginning. The exceptions are the name, saltenv, and all_saltenvs parameters, which are only available to per-remote configurations.
The all_saltenvs parameter is new in the 2018.3.0 release.
In the example configuration above, the following is true:
1. The first and fourth gitfs remotes will use the develop branch/tag as the base environment, while the second and third will use the salt-base branch/tag as the base environment.
2. The first remote will serve all files in the repository. The second remote will only serve files from the salt directory (and its subdirectories). The third remote will only serve files from the other/salt directory (and its subdirectories), while the fourth remote will only serve files from the salt/states directory (and its subdirectories).
3. The third remote will only serve files from branches, and not from tags or SHAs.
4. The fourth remote will only have two saltenvs available: base (pointed at develop), and foo (pointed at foo).
5. The first and fourth remotes will have files located under the root of the Salt fileserver namespace (salt://). The files from the second remote will be located under salt://bar, while the files from the third remote will be located under salt://other/bar.
6. The second and third remotes reference the same repository, so unique names need to be declared for duplicate gitfs remotes.
7. The fourth remote overrides the default behavior of not authenticating to insecure (non-HTTPS) remotes.
8. Because all_saltenvs is configured for the fifth remote, files from the branch/tag master will appear in every fileserver environment.
9. The second remote will wait 120 seconds between updates instead of 60.
Per-Saltenv Configuration Parameters¶
New in version 2016.11.0.
For more granular control, Salt allows the following three things to be overridden for individual saltenvs within a given repo:
- The mountpoint
- The root
- The branch/tag to be used for a given saltenv
Here is an example:
gitfs_root: salt
gitfs_saltenv:
- dev:
- mountpoint: salt://gitfs-dev
- ref: develop
gitfs_remotes:
- https://foo.com/bar.git:
- saltenv:
- staging:
- ref: qa
- mountpoint: salt://bar-staging
- dev:
- ref: development
- https://foo.com/baz.git:
- saltenv:
- staging:
- mountpoint: salt://baz-staging
Given the above configuration, the following is true:
1. For all gitfs remotes, files for the dev saltenv will be located under salt://gitfs-dev.
2. For the dev saltenv, files from the first remote will be sourced from the development branch, while files from the second remote will be sourced from the develop branch.
3. For the staging saltenv, files from the first remote will be located under salt://bar-staging, while files from the second remote will be located under salt://baz-staging.
4. For all gitfs remotes, and in all saltenvs, files will be served from the salt directory (and its subdirectories).
Custom Refspecs¶
New in version 2017.7.0.
GitFS will by default fetch remote branches and tags. However, sometimes it can be useful to fetch custom refs (such as those created for GitHub pull requests). To change the refspecs GitFS fetches, use the gitfs_refspecs config option:
gitfs_refspecs:
- '+refs/heads/*:refs/remotes/origin/*'
- '+refs/tags/*:refs/tags/*'
- '+refs/pull/*/head:refs/remotes/origin/pr/*'
- '+refs/pull/*/merge:refs/remotes/origin/merge/*'
In the above example, in addition to fetching remote branches and tags, GitHub's custom refs for pull requests and merged pull requests will also be fetched. These special head refs represent the head of the branch which is requesting to be merged, and the merge refs represent the result of the base branch after the merge.
IMPORTANT:
Refspecs can be configured on a per-remote basis. For example, the below configuration would only alter the default refspecs for the second GitFS remote. The first remote would only fetch branches and tags (the default).
gitfs_remotes:
- https://domain.tld/foo.git
- https://domain.tld/bar.git:
- refspecs:
- '+refs/heads/*:refs/remotes/origin/*'
- '+refs/tags/*:refs/tags/*'
- '+refs/pull/*/head:refs/remotes/origin/pr/*'
- '+refs/pull/*/merge:refs/remotes/origin/merge/*'
Global Remotes¶
New in version 2018.3.0 (all_saltenvs); fallback was added in version 3001.
The all_saltenvs per-remote configuration parameter overrides the logic Salt uses to map branches/tags to fileserver environments (i.e. saltenvs). This allows a single branch/tag to appear in all GitFS saltenvs.
NOTE:
The fallback global or per-remote configuration can also be used.
This is very useful in particular when working with salt formulas. Prior to the addition of this feature, it was necessary to push a branch/tag to the remote repo for each saltenv in which that formula was to be used. If the formula needed to be updated, this update would need to be reflected in all of the other branches/tags. This is both inconvenient and not scalable.
With all_saltenvs, it is now possible to define your formula once, in a single branch.
gitfs_remotes:
- http://foo.com/quux.git:
- all_saltenvs: anything
If you want to also test working branches of the formula repository, use fallback:
gitfs_remotes:
- http://foo.com/quux.git:
- fallback: anything
Update Intervals¶
Prior to the 2018.3.0 release, GitFS would update its fileserver backends as part of a dedicated "maintenance" process, in which various routine maintenance tasks were performed. This tied the update interval to the loop_interval config option, and also forced all fileservers to update at the same interval.
Now it is possible to make GitFS update at its own interval, using gitfs_update_interval:
gitfs_update_interval: 180
gitfs_remotes:
- https://foo.com/foo.git
- https://foo.com/bar.git:
- update_interval: 120
Using the above configuration, the first remote would update every three minutes, while the second remote would update every two minutes.
Configuration Order of Precedence¶
The order of precedence for GitFS configuration is as follows (each level overrides all levels below it):
1. Per-saltenv configuration (defined under a per-remote saltenv param):
gitfs_remotes:
- https://foo.com/bar.git:
- saltenv:
- dev:
- mountpoint: salt://bar
2. Global per-saltenv configuration (defined in gitfs_saltenv):
gitfs_saltenv:
- dev:
- mountpoint: salt://bar
3. Per-remote configuration parameter:
gitfs_remotes:
- https://foo.com/bar.git:
- mountpoint: salt://bar
4. Global configuration parameter:
gitfs_mountpoint: salt://bar
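The four-level resolution can be sketched in Python (a simplified model of the precedence rules above, not Salt's actual resolution code):

```python
def resolve_param(param, saltenv, per_remote_saltenv, gitfs_saltenv,
                  per_remote, global_opts):
    """Walk the four precedence levels, highest first, and return the
    first value found for the requested parameter."""
    for scope in (per_remote_saltenv.get(saltenv, {}),  # 1. per-remote saltenv
                  gitfs_saltenv.get(saltenv, {}),       # 2. global per-saltenv
                  per_remote,                           # 3. per-remote param
                  global_opts):                         # 4. global param
        if param in scope:
            return scope[param]
    return None


# A per-remote saltenv setting beats the global gitfs_mountpoint:
print(resolve_param(
    "mountpoint", "dev",
    per_remote_saltenv={"dev": {"mountpoint": "salt://bar"}},
    gitfs_saltenv={},
    per_remote={},
    global_opts={"mountpoint": "salt://global"},
))  # -> salt://bar
```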
NOTE:
It's important to note however that any root and mountpoint values configured in gitfs_saltenv (or per-saltenv configuration) would be unaffected by this.
Serving from a Subdirectory¶
The gitfs_root parameter allows files to be served from a subdirectory within the repository. This allows for only part of a repository to be exposed to the Salt fileserver.
Assume the below layout:
.gitignore
README.txt
foo/
foo/bar/
foo/bar/one.txt
foo/bar/two.txt
foo/bar/three.txt
foo/baz/
foo/baz/top.sls
foo/baz/edit/vim.sls
foo/baz/edit/vimrc
foo/baz/nginx/init.sls
The below configuration would serve only the files under foo/baz, ignoring the other files in the repository:
gitfs_remotes:
- git://mydomain.com/stuff.git
gitfs_root: foo/baz
The root can also be configured on a per-remote basis.
Mountpoints¶
New in version 2014.7.0.
The gitfs_mountpoint parameter will prepend the specified path to the files served from gitfs. This allows an existing repository to be used, rather than needing to reorganize a repository or design it around the layout of the Salt fileserver.
Before the addition of this feature, if a file being served up via gitfs was deeply nested within the root directory (for example, salt://webapps/foo/files/foo.conf), it would be necessary to ensure that the file was properly located in the remote repository, and that all of the parent directories were present (for example, the directories webapps/foo/files/ would need to exist at the root of the repository).
The below example would allow for a file foo.conf at the root of the repository to be served up from the Salt fileserver path salt://webapps/foo/files/foo.conf.
gitfs_remotes:
- https://mydomain.com/stuff.git
gitfs_mountpoint: salt://webapps/foo/files
Mountpoints can also be configured on a per-remote basis.
Using gitfs in Masterless Mode¶
Since 2014.7.0, gitfs can be used in masterless mode. To do so, simply add the gitfs configuration parameters (and set fileserver_backend) in the _minion_ config file instead of the master config file.
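For example, a masterless minion could serve formula files straight from git with a minion config along these lines (the repository URL is reused from the earlier example):

```yaml
# /etc/salt/minion
file_client: local
fileserver_backend:
  - roots
  - gitfs
gitfs_remotes:
  - https://github.com/saltstack-formulas/salt-formula.git
```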
Using gitfs Alongside Other Backends¶
Sometimes it may make sense to use multiple backends; for instance, if sls files are stored in git but larger files are stored directly on the master.
The cascading lookup logic used for multiple remotes is also used with multiple backends. If the fileserver_backend option contains multiple backends:
fileserver_backend:
- roots
- git
Then the roots backend (the default backend of files in /srv/salt) will be searched first for the requested file; then, if it is not found on the master, each configured git remote will be searched.
NOTE:
file_roots:
base:
- /srv/salt
__env__:
- /srv/salt
Branches, Environments, and Top Files¶
When using the GitFS backend, branches and tags will be mapped to environments, using the branch/tag name as an identifier.
There is one exception to this rule: the master branch is implicitly mapped to the base environment.
So, for a typical base, qa, dev setup, the following branches could be used:
master
qa
dev
To map a branch other than master as the base environment, use the gitfs_base parameter.
gitfs_base: salt-base
The base can also be configured on a per-remote basis.
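The default mapping rule can be summarized in a couple of lines of Python (a simplified sketch of the behavior described above):

```python
def branch_to_saltenv(ref, base="master"):
    """Map a git branch/tag name to a saltenv: the ref configured as
    gitfs_base (default 'master') becomes 'base'; every other ref maps
    to a saltenv of the same name."""
    return "base" if ref == base else ref


print(branch_to_saltenv("master"))                      # -> base
print(branch_to_saltenv("qa"))                          # -> qa
print(branch_to_saltenv("salt-base", base="salt-base")) # -> base
```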
Use Case: Code Promotion (dev -> qa -> base)¶
When running a highstate, the top.sls files from all of the different branches and tags will be merged into one. This does not work well when changes are tested in development branches before being merged upstream towards production: if the same SLS file from multiple environments is part of the highstate, it can result in non-unique state IDs, which will cause an error in the state compiler and prevent the highstate from proceeding.
To accomplish this use case, you should do three things:
- 1.
- Use {{ saltenv }} in place of your environment in your top.sls. This will let you use the same top file in all branches, because {{ saltenv }} gets replaced with the effective saltenv of the environment being processed.
- 2.
- Set top_file_merging_strategy to same in the minion configuration. This will keep the base environment from looking at the top.sls from the dev or qa branches, etc.
- 3.
- Explicitly define your saltenv. (More on this below.)
Consider the following example top file and SLS file:
top.sls
{{ saltenv }}:
'*':
- mystuff
mystuff.sls
manage_mystuff:
pkg.installed:
- name: mystuff
file.managed:
- name: /etc/mystuff.conf
- source: salt://mystuff/files/mystuff.conf
service.running:
- name: mystuffd
- enable: True
- watch:
- file: /etc/mystuff.conf
Imagine for a moment that you need to change your mystuff.conf. So, you go to your dev branch, edit mystuff/files/mystuff.conf, and commit and push.
If you have only done the first two steps recommended above, and you run your highstate, you will end up with conflicting IDs:
myminion:
Data failed to compile: ----------
Detected conflicting IDs, SLS IDs need to be globally unique.
The conflicting ID is 'manage_mystuff' and is found in SLS 'base:mystuff' and SLS 'dev:mystuff' ----------
Detected conflicting IDs, SLS IDs need to be globally unique.
The conflicting ID is 'manage_mystuff' and is found in SLS 'dev:mystuff' and SLS 'qa:mystuff'
This is because, in the absence of an explicit saltenv, all environments' top files are considered. Each environment looks at only its own top.sls, but because the mystuff.sls exists in each branch, they all get pulled into the highstate, resulting in these conflicting IDs. This is why explicitly setting your saltenv is important for this use case.
There are two ways of explicitly defining the saltenv:
- 1.
- Set the saltenv in your minion configuration file. This allows you to isolate which states are run to a specific branch/tag on a given minion. This also works nicely if you have different salt deployments for dev, qa, and prod. Boxes in dev can have saltenv set to dev, boxes in qa can have the saltenv set to qa, and boxes in prod can have the saltenv set to base.
- 2.
- At runtime, you can set the saltenv like so:
salt myminion state.apply saltenv=dev
A couple notes about setting the saltenv at runtime:
- It will take precedence over the saltenv setting from the minion config file, and pairs nicely with cases where you do not have separate salt deployments for dev/qa/prod. You can have a box with saltenv set to base, which you can test your dev changes on by running your state.apply with saltenv=dev.
- If you don't set saltenv in the minion config file, you must specify it at runtime to avoid conflicting IDs.
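For the config-file approach described above, a box in the dev deployment might carry settings like the following in its minion config (a sketch combining the earlier recommendations):

```yaml
saltenv: dev
top_file_merging_strategy: same
```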
If you branched qa off of master, and dev off of qa, you can merge changes from dev into qa, and then merge qa into master, to promote your changes from dev to qa to prod.
Environment Whitelist/Blacklist¶
New in version 2014.7.0.
The gitfs_saltenv_whitelist and gitfs_saltenv_blacklist parameters allow for greater control over which branches/tags are exposed as fileserver environments. Exact matches, globs, and regular expressions are supported, and are evaluated in that order. If using a regular expression, ^ and $ must be omitted, and the expression must match the entire branch/tag.
gitfs_saltenv_whitelist:
- base
- v1.*
- 'mybranch\d+'
NOTE:
The behavior of the blacklist/whitelist will differ depending on which combination of the two options is used:
- If only gitfs_saltenv_whitelist is used, then only branches/tags which match the whitelist will be available as environments
- If only gitfs_saltenv_blacklist is used, then the branches/tags which match the blacklist will not be available as environments
- If both are used, then the branches/tags which match the whitelist, but do not match the blacklist, will be available as environments.
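For example, to expose the base environment and all v1.* branches while hiding beta builds, the two options could be combined like so (branch names illustrative):

```yaml
gitfs_saltenv_whitelist:
  - base
  - v1.*

gitfs_saltenv_blacklist:
  - 'v1.*beta\d*'
```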
Authentication¶
pygit2¶
New in version 2014.7.0.
Both HTTPS and SSH authentication are supported as of version 0.20.3, which is the earliest version of pygit2 supported by Salt for gitfs.
HTTPS¶
For HTTPS repositories which require authentication, the username and password can be provided like so:
gitfs_remotes:
- https://domain.tld/myrepo.git:
- user: git
- password: mypassword
If the repository is served over HTTP instead of HTTPS, then Salt will by default refuse to authenticate to it. This behavior can be overridden by adding an insecure_auth parameter:
gitfs_remotes:
- http://domain.tld/insecure_repo.git:
- user: git
- password: mypassword
- insecure_auth: True
SSH¶
SSH repositories can be configured using the ssh:// protocol designation, or using scp-like syntax. So, the following two configurations are equivalent:
- ssh://git@github.com/user/repo.git
- git@github.com:user/repo.git
Both gitfs_pubkey and gitfs_privkey (or their per-remote counterparts) must be configured in order to authenticate to SSH-based repos. If the private key is protected with a passphrase, it can be configured using gitfs_passphrase (or simply passphrase if being configured per-remote). For example:
gitfs_remotes:
- git@github.com:user/repo.git:
- pubkey: /root/.ssh/id_rsa.pub
- privkey: /root/.ssh/id_rsa
- passphrase: myawesomepassphrase
Finally, the SSH host key must be added to the known_hosts file.
NOTE:
Since upgrading libssh2 would require rebuilding many other packages (curl, etc.), followed by a rebuild of libgit2 and a reinstall of pygit2, an easier workaround for systems with older libssh2 is to use GitPython with a passphraseless key for authentication.
GitPython¶
HTTPS¶
For HTTPS repositories which require authentication, the username and password can be configured in one of two ways. The first way is to include them in the URL using the format https://<user>:<password>@<url>, like so:
gitfs_remotes:
- https://git:mypassword@domain.tld/myrepo.git
The other way would be to configure the authentication in /var/lib/salt/.netrc:
machine domain.tld
login git
password mypassword
If the repository is served over HTTP instead of HTTPS, then Salt will by default refuse to authenticate to it. This behavior can be overridden by adding an insecure_auth parameter:
gitfs_remotes:
- http://git:mypassword@domain.tld/insecure_repo.git:
- insecure_auth: True
SSH¶
Only passphrase-less SSH public key authentication is supported using GitPython. The auth parameters (pubkey, privkey, etc.) shown in the pygit2 authentication examples above do not work with GitPython.
gitfs_remotes:
- ssh://git@github.com/example/salt-states.git
Since GitPython wraps the git CLI, the private key must be located in ~/.ssh/id_rsa for the user under which the Master is running, and should have permissions of 0600. Also, in the absence of a user in the repo URL, GitPython will (just as SSH does) attempt to log in as the current user (in other words, the user under which the Master is running, usually root).
If a key needs to be used, then ~/.ssh/config can be configured to use the desired key. Information on how to do this can be found by viewing the manpage for ssh_config. Here's an example entry which can be added to the ~/.ssh/config to use an alternate key for gitfs:
Host github.com
IdentityFile /root/.ssh/id_rsa_gitfs
The Host parameter should be a hostname (or hostname glob) that matches the domain name of the git repository.
It is also necessary to add the SSH host key to the known_hosts file. The exception to this would be if strict host key checking is disabled, which can be done by adding StrictHostKeyChecking no to the entry in ~/.ssh/config
Host github.com
IdentityFile /root/.ssh/id_rsa_gitfs
StrictHostKeyChecking no
However, this is generally regarded as insecure, and is not recommended.
Adding the SSH Host Key to the known_hosts File¶
To use SSH authentication, it is necessary to have the remote repository's SSH host key in the ~/.ssh/known_hosts file. If the master is also a minion, this can be done using the ssh.set_known_host function:
# salt mymaster ssh.set_known_host user=root hostname=github.com
mymaster:
----------
new:
----------
enc:
ssh-rsa
fingerprint:
16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
hostname:
|1|OiefWWqOD4kwO3BhoIGa0loR5AA=|BIXVtmcTbPER+68HvXmceodDcfI=
key:
AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
old:
None
status:
updated
If not, then the easiest way to add the key is to su to the user (usually root) under which the salt-master runs and attempt to login to the server via SSH:
$ su -
Password:
# ssh github.com
The authenticity of host 'github.com (192.30.252.128)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.252.128' (RSA) to the list of known hosts.
Permission denied (publickey).
It doesn't matter if the login was successful, as answering yes will write the fingerprint to the known_hosts file.
Verifying the Fingerprint¶
To verify that the correct fingerprint was added, it is a good idea to look it up. One way to do this is to use nmap:
$ nmap -p 22 github.com --script ssh-hostkey
Starting Nmap 5.51 ( http://nmap.org ) at 2014-08-18 17:47 CDT
Nmap scan report for github.com (192.30.252.129)
Host is up (0.17s latency).
Not shown: 996 filtered ports
PORT     STATE SERVICE
22/tcp   open  ssh
| ssh-hostkey: 1024 ad:1c:08:a4:40:e3:6f:9c:f5:66:26:5d:4b:33:5d:8c (DSA)
|_2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 (RSA)
80/tcp   open  http
443/tcp  open  https
9418/tcp open  git

Nmap done: 1 IP address (1 host up) scanned in 28.78 seconds
Another way is to check one's own known_hosts file, using this one-liner:
$ ssh-keygen -l -f /dev/stdin <<<`ssh-keyscan github.com 2>/dev/null` | awk '{print $2}'
16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
Refreshing gitfs Upon Push¶
By default, Salt updates the remote fileserver backends every 60 seconds. However, if it is desirable to refresh more quickly than that, the Reactor System can be used to signal the master to update the fileserver on each push, provided that the git server is also a Salt minion. There are four steps to this process:
- 1.
- On the master, create a file /srv/reactor/update_fileserver.sls, with the following contents:
update_fileserver:
runner.fileserver.update
- 2.
- Add the following reactor configuration to the master config file:
reactor:
- 'salt/fileserver/gitfs/update':
- /srv/reactor/update_fileserver.sls
- 3.
- On the git server, add a post-receive hook
- a.
- If the user executing git push is the same as the minion user, use the following hook:
#!/usr/bin/env sh
salt-call event.fire_master update salt/fileserver/gitfs/update
- b.
- To enable other git users to run the hook after a push, use sudo in the hook script:
#!/usr/bin/env sh
sudo -u root salt-call event.fire_master update salt/fileserver/gitfs/update
- 4.
- If using sudo in the git hook (above), the policy must be changed to permit all users to fire the event. Add the following policy to the sudoers file on the git server.
Cmnd_Alias SALT_GIT_HOOK = /bin/salt-call event.fire_master update salt/fileserver/gitfs/update
Defaults!SALT_GIT_HOOK !requiretty
ALL ALL=(root) NOPASSWD: SALT_GIT_HOOK
The update argument right after event.fire_master in this example can really be anything, as it represents the data being passed in the event, and the passed data is ignored by this reactor.
Similarly, the tag name salt/fileserver/gitfs/update can be replaced by anything, so long as the usage is consistent.
The root user name in the hook script and sudo policy should be changed to match the user under which the minion is running.
Using Git as an External Pillar Source¶
The git external pillar (a.k.a. git_pillar) has been rewritten for the 2015.8.0 release. This rewrite brings with it pygit2 support (allowing for access to authenticated repositories), as well as more granular support for per-remote configuration. This configuration schema is detailed here.
Why aren't my custom modules/states/etc. syncing to my Minions?¶
In versions 0.16.3 and older, when using the git fileserver backend, certain versions of GitPython may generate errors when fetching, which Salt fails to catch. While not fatal to the fetch process, these interrupt the fileserver update that takes place before custom types are synced, and thus interrupt the sync itself. Try disabling the git fileserver backend in the master config, restarting the master, and attempting the sync again.
This issue is worked around in Salt 0.16.4 and newer.
MinionFS Backend Walkthrough¶
New in version 2014.1.0.
Sometimes it is desirable to deploy a file located on one minion to one or more other minions. This is supported in Salt, and can be accomplished in two parts:
- 1.
- Minion support for pushing files to the master (using cp.push)
- 2.
- The minionfs fileserver backend
This walkthrough will show how to use both of these features.
Enabling File Push¶
To set the master to accept files pushed from minions, the file_recv option in the master config file must be set to True (the default is False).
file_recv: True
Pushing Files¶
Once this has been done, files can be pushed to the master using the cp.push function:
salt 'minion-id' cp.push /path/to/the/file
This command will store the file in a subdirectory named minions under the master's cachedir. On most masters, this path will be /var/cache/salt/master/minions. Within this directory will be one directory for each minion which has pushed a file to the master, and underneath that the full path to the file on the minion. So, for example, if a minion with an ID of dev1 pushed a file /var/log/myapp.log to the master, it would be saved to /var/cache/salt/master/minions/dev1/var/log/myapp.log.
Serving Pushed Files Using MinionFS¶
While it is certainly possible to add /var/cache/salt/master/minions to the master's file_roots and serve these files, it may only be desirable to expose files pushed from certain minions. Adding /var/cache/salt/master/minions/<minion-id> for each minion that needs to be exposed can be cumbersome and prone to errors.
Enter minionfs. This fileserver backend will make files pushed using cp.push available to the Salt fileserver, and provides an easy mechanism to restrict which minions' pushed files are made available.
Simple Configuration¶
To use the minionfs backend, add minionfs to the list of backends in the fileserver_backend configuration option on the master:
file_recv: True

fileserver_backend:
- roots
- minionfs
Also, as described earlier, file_recv: True is needed to enable the master to receive files pushed from minions. As always, changes to the master configuration require a restart of the salt-master service.
Files made available via minionfs are by default located at salt://<minion-id>/path/to/file. Think back to the earlier example, in which dev1 pushed a file /var/log/myapp.log to the master. With minionfs enabled, this file would be addressable in Salt at salt://dev1/var/log/myapp.log.
If many minions have pushed to the master, this will result in many directories in the root of the Salt fileserver. For this reason, it is recommended to use the minionfs_mountpoint config option to organize these files underneath a subdirectory:
minionfs_mountpoint: salt://minionfs
Using the above mountpoint, the file in the example would be located at salt://minionfs/dev1/var/log/myapp.log.
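A state on another minion could then consume the pushed file; for example (source path from the running example, destination illustrative):

```yaml
/root/myapp.log.from-dev1:
  file.managed:
    - source: salt://minionfs/dev1/var/log/myapp.log
```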
Restricting Certain Minions' Files from Being Available Via MinionFS¶
A whitelist and blacklist can be used to restrict the minions whose pushed files are available via minionfs. These lists can be managed using the minionfs_whitelist and minionfs_blacklist config options. Click the links for both of them for a detailed explanation of how to use them.
A more complex configuration example, which uses both a whitelist and blacklist, can be found below:
file_recv: True

fileserver_backend:
- roots
- minionfs

minionfs_mountpoint: salt://minionfs

minionfs_whitelist:
- host04
- web*
- 'mail\d+\.domain\.tld'

minionfs_blacklist:
- web21
Potential Concerns¶
- There is no access control in place to restrict which minions have access to files served up by minionfs. All minions will have access to these files.
- Unless the minionfs_whitelist and/or minionfs_blacklist config options are used, all minions which push files to the master will have their files made available via minionfs.
Salt Package Manager¶
The Salt Package Manager, or SPM, enables Salt formulas to be packaged to simplify distribution to Salt masters. The design of SPM was influenced by other existing packaging systems including RPM, Yum, and Pacman. [image]
Packaging System
The packaging system is used to package the state, pillar, file templates, and other files used by your formula into a single file. After a formula package is created, it is copied to the Repository System where it is made available to Salt masters.
See Building SPM Packages
Repo System
The Repo system stores the SPM package and metadata files and makes them available to Salt masters via http(s), ftp, or file URLs. SPM repositories can be hosted on a Salt Master, a Salt Minion, or on another system.
See Distributing SPM Packages
Salt Master
SPM provides Salt master settings that let you configure the URL of one or more SPM repos. You can then quickly install packages that contain entire formulas to your Salt masters using SPM.
See Installing SPM Packages
Contents
Building SPM Packages¶
The first step when using Salt Package Manager is to build packages for each of the formulas that you want to distribute. Packages can be built on any system where you can install Salt.
Package Build Overview¶
To build a package, all state, pillar, jinja, and file templates used by your formula are assembled into a folder on the build system. These files can be cloned from a Git repository, such as those found at the saltstack-formulas organization on GitHub, or copied directly to the folder.
The following diagram demonstrates a typical formula layout on the build system: [image]
In this example, all formula files are placed in a myapp-formula folder. This is the folder that is targeted by the spm build command when this package is built.
Within this folder, pillar data is placed in a pillar.example file at the root, and all state, jinja, and template files are placed within a subfolder that is named after the application being packaged. State files are typically contained within a subfolder, similar to how state files are organized in the state tree. Any non-pillar files in your package that are not contained in a subfolder are placed at the root of the spm state tree.
Additionally, a FORMULA file is created and placed in the root of the folder. This file contains package metadata that is used by SPM.
Package Installation Overview¶
When building packages, it is useful to know where files are installed on the Salt master. During installation, all files except pillar.example and FORMULA are copied directly to the spm state tree on the Salt master (located at /srv/spm/salt).
If a pillar.example file is present in the root, it is renamed to <formula name>.sls.orig and placed in the pillar_path. [image]
Building an SPM Formula Package¶
- 1.
- Assemble formula files in a folder on the build system.
- 2.
- Create a FORMULA file and place it in the root of the package folder.
- 3.
- Run spm build <folder name>. The package is built and placed in the /srv/spm_build folder.
spm build /path/to/salt-packages-source/myapp-formula
- 4.
- Copy the .spm file to a folder on the repository system.
Types of Packages¶
SPM supports different types of packages. The function of each package is denoted by its name. For instance, packages which end in -formula are considered to be Salt States (the most common type of formula). Packages which end in -conf contain configuration which is to be placed in the /etc/salt/ directory. Packages which do not contain one of these names are treated as if they have a -formula name.
formula¶
By default, most files from this type of package live in the /srv/spm/salt/ directory. The exception is the pillar.example file, which will be renamed to <package_name>.sls and placed in the pillar directory (/srv/spm/pillar/ by default).
reactor¶
By default, files from this type of package live in the /srv/spm/reactor/ directory.
conf¶
The files in this type of package are configuration files for Salt, which normally live in the /etc/salt/ directory. Configuration files for packages other than Salt can and should be handled with a Salt State (using a formula type of package).
Technical Information¶
Packages are built as BZ2-compressed tarballs. By default, the package database is stored using the sqlite3 driver (see Loader Modules below).
Support for both of these is built into Python, so no external dependencies are needed.
All other files belonging to SPM use YAML, for portability and ease of use and maintainability.
SPM-Specific Loader Modules¶
SPM was designed to behave like traditional package managers, which apply files to the filesystem and store package metadata in a local database. However, because modern infrastructures often extend beyond those use cases, certain parts of SPM have been broken out into their own set of modules.
Package Database¶
By default, the package database is stored using the sqlite3 module. This module was chosen because support for SQLite3 is built into Python itself.
Please see the SPM Development Guide for information on creating new modules for package database management.
Package Files¶
By default, package files are installed using the local module. This module applies files to the local filesystem, on the machine that the package is installed on.
Please see the SPM Development Guide for information on creating new modules for package file management.
Distributing SPM Packages¶
SPM packages can be distributed to Salt masters over HTTP(S), FTP, or through the file system. The SPM repo can be hosted on any system where you can install Salt; Salt is needed only so that you can run the spm create_repo command when you update or add a package to the repo. SPM repos do not require the salt-master, salt-minion, or any other process running on the system.
Setting up a Package Repository¶
After packages are built, the generated SPM files are placed in the /srv/spm_build folder.
Where you place the built SPM files on your repository server depends on how you plan to make them available to your Salt masters.
You can share the /srv/spm_build folder on the network, or copy the files to your FTP or Web server.
Adding a Package to the repository¶
New packages are added by simply copying the SPM file to the repo folder, and then generating repo metadata.
Generate Repo Metadata¶
Each time you update or add an SPM package to your repository, issue an spm create_repo command:
spm create_repo /srv/spm_build
SPM generates the repository metadata for all of the packages in that directory and places it in an SPM-METADATA file at the folder root. This command is used even if repository metadata already exists in that directory.
Installing SPM Packages¶
SPM packages are installed to your Salt master, where they are available to Salt minions using all of Salt's package management functions.
Configuring Remote Repositories¶
Before SPM can use a repository, two things need to happen. First, the Salt master needs to be told where the repository is; this is done in a configuration file. Then, the repository metadata needs to be pulled down.
Repository Configuration Files¶
Repositories are configured by adding each of them to the /etc/salt/spm.repos.d/spm.repo file on each Salt master. This file contains the name of the repository, and the link to the repository:
my_repo:
url: https://spm.example.com/
For HTTP/HTTPS Basic authorization you can define credentials:
my_repo:
url: https://spm.example.com/
username: user
password: pass
Beware of unauthorized access to this file; set permissions of 0640 or stricter on this configuration file.
The URL can use http, https, ftp, or file.
my_repo:
url: file:///srv/spm_build
Updating Local Repository Metadata¶
After the repository is configured on the Salt master, repository metadata is downloaded using the spm update_repo command:
spm update_repo
Update File Roots¶
SPM packages are installed to the /srv/spm/salt folder on your Salt master. This path needs to be added to the file roots on your Salt master manually.
file_roots:
base:
- /srv/salt
- /srv/spm/salt
Restart the salt-master service after updating the file_roots setting.
Installing Packages¶
To install a package, use the spm install command:
spm install apache
Installing directly from an SPM file¶
You can also install SPM packages using a local SPM file using the spm local install command:
spm local install /srv/spm/apache-201506-1.spm
An SPM repository is not required when using spm local install.
Pillars¶
If an installed package includes Pillar data, be sure to target the installed pillar to the necessary systems using the pillar Top file.
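For example, if an installed apache package provided pillar data (and its installed <formula name>.sls.orig file has been renamed to apache.sls), the pillar top file might target it like so (package and target names illustrative):

```yaml
base:
  'web*':
    - apache
```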
Removing Packages¶
Packages may be removed after they are installed using the spm remove command.
spm remove apache
If files have been modified, they will not be removed. Empty directories will also be removed.
SPM Configuration¶
There are a number of options that are specific to SPM. They may be configured in the master configuration file, or in SPM's own spm configuration file (normally located at /etc/salt/spm). If configured in both places, the spm file takes precedence. In general, these values will not need to be changed from the defaults.
spm_logfile¶
Default: /var/log/salt/spm
Where SPM logs messages.
spm_repos_config¶
Default: /etc/salt/spm.repos
SPM repositories are configured with this file. There is also a directory which corresponds to it, which ends in .d. For instance, if the filename is /etc/salt/spm.repos, the directory will be /etc/salt/spm.repos.d/.
spm_cache_dir¶
Default: /var/cache/salt/spm
When SPM updates package repository metadata and downloads packages, they will be placed in this directory. The package database, normally called packages.db, also lives in this directory.
spm_db¶
Default: /var/cache/salt/spm/packages.db
The location and name of the package database. This database stores the names of all of the SPM packages installed on the system, the files that belong to them, and the metadata for those files.
spm_build_dir¶
Default: /srv/spm_build
When packages are built, they will be placed in this directory.
spm_build_exclude¶
Default: ['.git']
When SPM builds a package, it normally adds all files in the formula directory to the package. Files listed here will be excluded from that package. This option requires a list to be specified.
spm_build_exclude:
- .git
- .svn
FORMULA File¶
In addition to the formula itself, a FORMULA file must exist which describes the package. An example of this file is:
name: apache
os: RedHat, Debian, Ubuntu, SUSE, FreeBSD
os_family: RedHat, Debian, Suse, FreeBSD
version: 201506
release: 2
summary: Formula for installing Apache
description: Formula for installing Apache
Required Fields¶
This file must contain at least the following fields:
name¶
The name of the package, as it will appear in the package filename, in the repository metadata, and the package database. Even if the source formula has -formula in its name, this name should probably not include that. For instance, when packaging the apache-formula, the name should be set to apache.
os¶
The value of the os grain that this formula supports. This is used to help users know which operating systems can support this package.
os_family¶
The value of the os_family grain that this formula supports. This is used to help users know which operating system families can support this package.
version¶
The version of the package. While it is up to the organization that manages this package, it is suggested that this version is specified in a YYYYMM format. For instance, if this version was released in June 2015, the package version should be 201506. If multiple releases are made in a month, the release field should be used.
minimum_version¶
Minimum recommended version of Salt to use this formula. Not currently enforced.
release¶
This field refers primarily to a release of a version, but also to multiple versions within a month. In general, if a version has been made public, and immediate updates need to be made to it, this field should also be updated.
summary¶
A one-line description of the package.
description¶
A more detailed description of the package which can contain more than one line.
Optional Fields¶
The following fields may also be present.
top_level_dir¶
This field is optional, but highly recommended. If it is not specified, the package name will be used.
Formula repositories typically do not store .sls files in the root of the repository; instead they are stored in a subdirectory. For instance, an apache-formula repository would contain a directory called apache, which would contain an init.sls, plus a number of other related files. In this instance, the top_level_dir should be set to apache.
Files outside the top_level_dir, such as README.rst, FORMULA, and LICENSE will not be installed. The exceptions to this rule are files that are already treated specially, such as pillar.example and _modules/.
dependencies¶
A comma-separated list of packages that must be installed along with this package. When this package is installed, SPM will attempt to discover and install these packages as well. If it is unable to, then it will refuse to install this package.
This is useful for creating packages which tie together other packages. For instance, a package called wordpress-mariadb-apache would depend upon wordpress, mariadb, and apache.
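A FORMULA file for such a meta package might look like the following (all values illustrative):

```yaml
name: wordpress-mariadb-apache
os: RedHat, Debian
os_family: RedHat, Debian
version: 201506
release: 1
summary: Meta formula tying together wordpress, mariadb, and apache
description: Meta formula tying together wordpress, mariadb, and apache
dependencies: wordpress, mariadb, apache
```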
optional¶
A comma-separated list of packages which are related to this package, but are neither required nor necessarily recommended. This list is displayed in an informational message when the package is installed to SPM.
recommended¶
A comma-separated list of optional packages that are recommended to be installed with the package. This list is displayed in an informational message when the package is installed to SPM.
files¶
A files section can be added, to specify a list of files to add to the SPM. Such a section might look like:
files:
- _pillar
- FORMULA
- _runners
- d|mymodule/index.rst
- r|README.rst
When files are specified, then only those files will be added to the SPM, regardless of what other files exist in the directory. They will also be added in the order specified, which is useful if you have a need to lay down files in a specific order.
As can be seen in the example above, you may also tag files as being a specific type. This is done by prepending a filename with its type, followed by a pipe (|) character. The above example contains a documentation file and a readme. The available file types are:
- c: config file
- d: documentation file
- g: ghost file (i.e. the file contents are not included in the package payload)
- l: license file
- r: readme file
- s: SLS file
- m: Salt module
The first five of these types (c, d, g, l, r) will be placed in /usr/share/salt/spm/ by default. This can be changed by setting an spm_share_dir value in your /etc/salt/spm configuration file.
The last two types (s and m) are currently ignored, but they are reserved for future use.
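The type-prefix syntax is simple to parse. The following sketch shows how such entries could be split into a type tag and a path (a hypothetical helper for illustration, not SPM's actual code):

```python
# Hypothetical parser for SPM-style file entries such as "d|mymodule/index.rst".
# A single-letter type tag, when present, is separated from the path by a pipe.
FILE_TYPES = {"c", "d", "g", "l", "r", "s", "m"}

def parse_file_entry(entry):
    """Return a (type, path) tuple; type is None when no tag is given."""
    if len(entry) > 1 and entry[1] == "|" and entry[0] in FILE_TYPES:
        return entry[0], entry[2:]
    return None, entry

print(parse_file_entry("d|mymodule/index.rst"))  # ('d', 'mymodule/index.rst')
print(parse_file_entry("README.rst"))            # (None, 'README.rst')
```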
Pre and Post States¶
It is possible to run Salt states before and after installing a package by using pre and post states. The following sections may be declared in a FORMULA:
- pre_local_state
- pre_tgt_state
- post_local_state
- post_tgt_state
Sections with pre in their name are evaluated before a package is installed and sections with post are evaluated after a package is installed. local states are evaluated before tgt states.
Each of these sections needs to be evaluated as text, rather than as YAML. Consider the following block:
pre_local_state: >
echo test > /tmp/spmtest:
cmd:
- run
Note that this declaration uses > after pre_local_state. This is a YAML marker that marks the next multi-line block as text, including newlines. It is important to use this marker whenever declaring pre or post states, so that the text following it can be evaluated properly.
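To see why the > marker matters, consider what the YAML parser actually produces. The following sketch uses PyYAML directly (a dependency of Salt itself) to show that the state arrives as a single string, ready to be handed to the renderer rather than pre-parsed into a data structure:

```python
# With ">" the state is loaded as one text blob; without it, YAML would
# parse the state into a dict before the renderer could process it.
import yaml  # PyYAML

formula = """
pre_local_state: >
  echo test > /tmp/spmtest:
    cmd:
      - run
"""

data = yaml.safe_load(formula)
print(type(data["pre_local_state"]))  # <class 'str'>
```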
local States¶
local states are evaluated locally; this is analogous to issuing a state run using a salt-call --local command. These commands will be issued on the local machine running the spm command, whether that machine is a master or a minion.
local states do not require any special arguments, but they must still use the > marker to denote that the state is evaluated as text, not a data structure.
pre_local_state: >
echo test > /tmp/spmtest:
cmd:
- run
tgt States¶
tgt states are issued against a remote target. This is analogous to issuing a state using the salt command. As such it requires that the machine that the spm command is running on is a master.
Because tgt states require that a target be specified, their code blocks are a little different. Consider the following state:
pre_tgt_state:
tgt: '*'
data: >
echo test > /tmp/spmtest:
cmd:
- run
With tgt states, the state data is placed under a data section, inside the *_tgt_state code block. The target is of course specified as a tgt and you may also optionally specify a tgt_type (the default is glob).
You still need to use the > marker, but this time it follows the data line, rather than the *_tgt_state line.
Templating States¶
The reason that state data must be evaluated as text rather than a data structure is because that state data is first processed through the rendering engine, as it would be with a standard state run.
This means that you can use Jinja or any other supported renderer inside of Salt. All formula variables are available to the renderer, so you can reference FORMULA data inside your state if you need to:
pre_tgt_state:
tgt: '*'
data: >
echo {{ name }} > /tmp/spmtest:
cmd:
- run
You may also declare your own variables inside the FORMULA. If SPM doesn't recognize them then it will ignore them, so there are no restrictions on variable names, outside of avoiding reserved words.
By default the renderer is set to jinja|yaml. You may change this by changing the renderer setting in the FORMULA itself.
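To make the two-stage processing concrete, the following sketch renders the state text with a FORMULA variable first, and only then parses the result as YAML. It uses jinja2 and PyYAML directly; SPM's real pipeline goes through Salt's renderer system:

```python
import jinja2  # stand-in for Salt's jinja renderer
import yaml

# Raw state text, as it would appear under a data section in a FORMULA.
state_text = "echo {{ name }} > /tmp/spmtest:\n  cmd:\n    - run\n"
formula_vars = {"name": "apache"}  # variables taken from the FORMULA file

# Stage 1: render as text. Stage 2: parse the rendered result as YAML.
rendered = jinja2.Template(state_text).render(**formula_vars)
state = yaml.safe_load(rendered)
print(state)  # {'echo apache > /tmp/spmtest': {'cmd': ['run']}}
```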
Building a Package¶
Once a FORMULA file has been created, it is placed into the root of the formula that is to be turned into a package. The spm build command is used to turn that formula into a package:
spm build /path/to/saltstack-formulas/apache-formula
The resulting file will be placed in the build directory. By default this directory is located at /srv/spm/.
Loader Modules¶
When an execution module is placed in <file_roots>/_modules/ on the master, it will automatically be synced to minions the next time a sync operation takes place. Other modules are also propagated this way: state modules can be placed in _states/, and so on.
When SPM detects a file in a package which resides in one of these directories, that directory will be placed in <file_roots> instead of in the formula directory with the rest of the files.
Removing Packages¶
Packages may be removed once they are installed using the spm remove command.
spm remove apache
If files have been modified, they will not be removed. Empty directories will also be removed.
Technical Information¶
Packages are built using BZ2-compressed tarballs. By default, the package database is stored using the sqlite3 driver (see Loader Modules below).
Support for both is built into Python, so no external dependencies are needed.
All other files belonging to SPM use YAML, for portability and ease of use and maintainability.
SPM-Specific Loader Modules¶
SPM was designed to behave like traditional package managers, which apply files to the filesystem and store package metadata in a local database. However, because modern infrastructures often extend beyond those use cases, certain parts of SPM have been broken out into their own set of modules.
Package Database¶
By default, the package database is stored using the sqlite3 module. This module was chosen because support for SQLite3 is built into Python itself.
Please see the SPM Development Guide for information on creating new modules for package database management.
Package Files¶
By default, package files are installed using the local module. This module applies files to the local filesystem, on the machine that the package is installed on.
Please see the SPM Development Guide for information on creating new modules for package file management.
Types of Packages¶
SPM supports different types of formula packages. The function of each package is denoted by its name. For instance, packages which end in -formula are considered to be Salt States (the most common type of formula). Packages which end in -reactor contain reactor files, and packages which end in -conf contain configuration which is to be placed in the /etc/salt/ directory. Packages which do not contain one of these names are treated as if they have a -formula name.
formula¶
By default, most files from this type of package live in the /srv/spm/salt/ directory. The exception is the pillar.example file, which will be renamed to <package_name>.sls and placed in the pillar directory (/srv/spm/pillar/ by default).
reactor¶
By default, files from this type of package live in the /srv/spm/reactor/ directory.
conf¶
The files in this type of package are configuration files for Salt, which normally live in the /etc/salt/ directory. Configuration files for packages other than Salt can and should be handled with a Salt State (using a formula type of package).
SPM Development Guide¶
This document discusses developing additional code for SPM.
SPM-Specific Loader Modules¶
SPM was designed to behave like traditional package managers, which apply files to the filesystem and store package metadata in a local database. However, because modern infrastructures often extend beyond those use cases, certain parts of SPM have been broken out into their own set of modules.
Each function that accepts arguments has a set of required and optional arguments. Take note that SPM will pass all arguments in, and therefore each function must accept each of those arguments. However, arguments that are marked as required are crucial to SPM's core functionality, while arguments that are marked as optional are provided as a benefit to the module, if it needs to use them.
Package Database¶
By default, the package database is stored using the sqlite3 module. This module was chosen because support for SQLite3 is built into Python itself.
Modules for managing the package database are stored in the salt/spm/pkgdb/ directory. A number of functions must exist to support database management.
init()¶
Get a database connection, and initialize the package database if necessary.
This function accepts no arguments. If a database is used which supports a connection object, then that connection object is returned. For instance, the sqlite3 module returns a connection object from the sqlite3 library's connect() function:
import sqlite3

def myfunc():
    # __opts__ is injected into loader modules by Salt
    conn = sqlite3.connect(__opts__["spm_db"], isolation_level=None)
    ...
    return conn
SPM itself will not use this connection object; it will be passed in as-is to the other functions in the module. Therefore, when you set up this object, make sure to do so in a way that is easily usable throughout the module.
info()¶
Return information for a package. This generally consists of the information that is stored in the FORMULA file in the package.
The arguments that are passed in, in order, are package (required) and conn (optional).
package is the name of the package, as specified in the FORMULA. conn is the connection object returned from init().
list_files()¶
Return a list of files for an installed package. Only the filename should be returned, and no other information.
The arguments that are passed in, in order, are package (required) and conn (optional).
package is the name of the package, as specified in the FORMULA. conn is the connection object returned from init().
register_pkg()¶
Register a package in the package database. Nothing is expected to be returned from this function.
The arguments that are passed in, in order, are name (required), formula_def (required), and conn (optional).
name is the name of the package, as specified in the FORMULA. formula_def is the contents of the FORMULA file, as a dict. conn is the connection object returned from init().
register_file()¶
Register a file in the package database. Nothing is expected to be returned from this function.
The arguments that are passed in are name (required), member (required), path (required), digest (optional), and conn (optional).
name is the name of the package.
member is a tarfile object for the package file. It is included, because it contains most of the information for the file.
path is the location of the file on the local filesystem.
digest is the SHA1 checksum of the file.
conn is the connection object returned from init().
unregister_pkg()¶
Unregister a package from the package database. This usually only involves removing the package's record from the database. Nothing is expected to be returned from this function.
The arguments that are passed in, in order, are name (required) and conn (optional).
name is the name of the package, as specified in the FORMULA. conn is the connection object returned from init().
unregister_file()¶
Unregister a file from the package database. This usually only involves removing the file's record from the database. Nothing is expected to be returned from this function.
The arguments that are passed in, in order, are name (required), pkg (optional) and conn (optional).
name is the path of the file, as it was installed on the filesystem.
pkg is the name of the package that the file belongs to.
conn is the connection object returned from init().
db_exists()¶
Check to see whether the package database already exists. This function will return True or False.
The only argument that is expected is db_, which is the path to the package database file.
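A toy pkgdb-style module backed by sqlite3 might look like the following. The function names and argument order mirror the interface above, but the schema, and the db_path argument to init(), are illustrative; SPM's real sqlite3 module takes its path from configuration:

```python
import sqlite3

def init(db_path=":memory:"):
    # Get a connection and create the (illustrative) schema if needed.
    conn = sqlite3.connect(db_path, isolation_level=None)
    conn.execute("CREATE TABLE IF NOT EXISTS packages (package TEXT, data TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS files (package TEXT, path TEXT)")
    return conn

def register_pkg(name, formula_def, conn=None):
    conn.execute("INSERT INTO packages VALUES (?, ?)", (name, repr(formula_def)))

def register_file(name, member, path, digest="", conn=None):
    conn.execute("INSERT INTO files VALUES (?, ?)", (name, path))

def list_files(package, conn=None):
    # Only filenames are returned, per the interface above.
    return [row[0] for row in
            conn.execute("SELECT path FROM files WHERE package = ?", (package,))]

def unregister_pkg(name, conn=None):
    conn.execute("DELETE FROM packages WHERE package = ?", (name,))

conn = init()
register_pkg("apache", {"name": "apache"}, conn=conn)
register_file("apache", None, "/srv/spm/salt/apache/init.sls", conn=conn)
print(list_files("apache", conn=conn))  # ['/srv/spm/salt/apache/init.sls']
```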
Package Files¶
By default, package files are installed using the local module. This module applies files to the local filesystem, on the machine that the package is installed on.
Modules for managing package files are stored in the salt/spm/pkgfiles/ directory. A number of functions must exist to support file management.
init()¶
Initialize the installation location for the package files. Normally these will be directory paths, but other external destinations such as databases can be used. For this reason, this function will return a connection object, which can be a database object. However, in the default local module, this object is a dict containing the paths. This object will be passed into all other functions.
Three directories are used for the destinations: formula_path, pillar_path, and reactor_path.
formula_path is the location of most of the files that will be installed. The default is specific to the operating system, but is normally /srv/salt/.
pillar_path is the location that the pillar.example file will be installed to. The default is specific to the operating system, but is normally /srv/pillar/.
reactor_path is the location that reactor files will be installed to. The default is specific to the operating system, but is normally /srv/reactor/.
check_existing()¶
Check the filesystem for existing files. All files for the package will be checked, and if any already exist, SPM will normally refuse to install the package.
This function returns a list of the files that exist on the system.
The arguments that are passed into this function are, in order: package (required), pkg_files (required), formula_def (required), and conn (optional).
package is the name of the package that is to be installed.
pkg_files is a list of the files to be checked.
formula_def is a copy of the information that is stored in the FORMULA file.
conn is the file connection object.
install_file()¶
Install a single file to the destination (normally on the filesystem). This function returns the final location that the file was installed to.
The arguments that are passed into this function are, in order, package (required), formula_tar (required), member (required), formula_def (required), and conn (optional).
package is the name of the package that is to be installed.
formula_tar is the tarfile object for the package. This is passed in so that the function can call formula_tar.extract() for the file.
member is the tarfile object which represents the individual file. This may be modified as necessary, before being passed into formula_tar.extract().
formula_def is a copy of the information from the FORMULA file.
conn is the file connection object.
remove_file()¶
Remove a single file from file system. Normally this will be little more than an os.remove(). Nothing is expected to be returned from this function.
The arguments that are passed into this function are, in order, path (required) and conn (optional).
path is the absolute path to the file to be removed.
conn is the file connection object.
hash_file()¶
Returns the hexdigest hash value of a file.
The arguments that are passed into this function are, in order, path (required), hashobj (required), and conn (optional).
path is the absolute path to the file.
hashobj is a reference to hashlib.sha1(), which is used to pull the hexdigest() for the file.
conn is the file connection object.
This function will not generally be more complex than:
import salt.utils.files

def hash_file(path, hashobj, conn=None):
    # Open in binary mode: hash objects operate on bytes, not text
    with salt.utils.files.fopen(path, "rb") as f:
        hashobj.update(f.read())
    return hashobj.hexdigest()
path_exists()¶
Check to see whether the file already exists on the filesystem. Returns True or False.
This function expects a path argument, which is the absolute path to the file to be checked.
path_isdir()¶
Check to see whether the path specified is a directory. Returns True or False.
This function expects a path argument, which is the absolute path to be checked.
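The simplest pkgfiles-style helpers can be sketched with the standard library alone. The function names and argument order mirror the interface above; a real module would route through Salt's utilities:

```python
import hashlib
import os
import tempfile

def path_exists(path):
    return os.path.exists(path)

def path_isdir(path):
    return os.path.isdir(path)

def hash_file(path, hashobj, conn=None):
    # Binary mode: hash objects operate on bytes, not text.
    with open(path, "rb") as f:
        hashobj.update(f.read())
    return hashobj.hexdigest()

def remove_file(path, conn=None):
    os.remove(path)

# Quick demonstration against a temporary file:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello")
tmp.close()
print(path_exists(tmp.name), path_isdir(tmp.name))  # True False
print(hash_file(tmp.name, hashlib.sha1()))
remove_file(tmp.name)
print(path_exists(tmp.name))  # False
```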
Storing Data in Other Databases¶
The SDB interface is designed to store and retrieve data that, unlike pillars and grains, is not necessarily minion-specific. The initial design goal was to allow passwords to be stored in a secure database, such as one managed by the keyring package, rather than as plain-text files. However, as a generic database interface, it could conceptually be used for a number of other purposes.
SDB was added to Salt in version 2014.7.0.
SDB Configuration¶
In order to use the SDB interface, a configuration profile must be set up. To be available for master commands, such as runners, it needs to be configured in the master configuration. For modules executed on a minion, it can be set either in the minion configuration file, or as a pillar. The configuration stanza includes the name/ID that the profile will be referred to as, a driver setting, and any other arguments that are necessary for the SDB module that will be used. For instance, a profile called mykeyring, which uses the system service in the keyring module would look like:
mykeyring:
driver: keyring
service: system
It is recommended to keep the name of the profile simple, as it is used in the SDB URI as well.
SDB URIs¶
SDB is designed to make small database queries (hence the name, SDB) using a compact URL. This allows users to reference a database value quickly inside a number of Salt configuration areas, without a lot of overhead. The basic format of an SDB URI is:
sdb://<profile>/<args>
The profile refers to the configuration profile defined in either the master or the minion configuration file. The args are specific to the module referred to in the profile, but will typically only need to refer to the key of a key/value pair inside the database. This is because the profile itself should define as many other parameters as possible.
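The split between profile and arguments can be illustrated with a small parser (a hypothetical helper for illustration; Salt's own SDB utility code differs in detail):

```python
def parse_sdb_uri(uri):
    """Split an sdb:// URI into (profile, args)."""
    if not uri.startswith("sdb://"):
        raise ValueError("not an SDB URI: %s" % uri)
    # Everything up to the first "/" names the profile; the remainder is
    # handed to the driver configured under that profile.
    profile, _, args = uri[len("sdb://"):].partition("/")
    return profile, args

print(parse_sdb_uri("sdb://mykeyring/mypassword"))  # ('mykeyring', 'mypassword')
```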
For example, a profile might be set up to reference credentials for a specific OpenStack account. The profile might look like:
kevinopenstack:
driver: keyring
service: salt.cloud.openstack.kevin
And the URI used to reference the password might look like:
sdb://kevinopenstack/password
Getting, Setting and Deleting SDB Values¶
Once an SDB driver is configured, you can use the sdb execution module to get, set and delete values from it. There are three functions that may appear in most SDB modules: get, set and delete.
Getting a value requires only the SDB URI to be specified. To retrieve a value from the kevinopenstack profile above, you would use:
salt-call sdb.get sdb://kevinopenstack/password
WARNING:
Some drivers previously used a question mark to separate the final key from the rest of the path:
salt-call sdb.get 'sdb://myvault/secret/salt?saltstack'
This syntax is deprecated. Instead of the above, please use the preferred URI using / instead:
salt-call sdb.get 'sdb://myvault/secret/salt/saltstack'
Setting a value uses the same URI as would be used to retrieve it, followed by the value as another argument.
salt-call sdb.set 'sdb://myvault/secret/salt/saltstack' 'super awesome'
Deleting values (if supported by the driver) is done in much the same way as getting them. Provided that you have a profile called mykvstore that uses a driver which allows deleting values, you would delete a value as shown below:
salt-call sdb.delete 'sdb://mykvstore/foobar'
The sdb.get, sdb.set and sdb.delete functions are also available in the runner system:
salt-run sdb.get 'sdb://myvault/secret/salt/saltstack'
salt-run sdb.set 'sdb://myvault/secret/salt/saltstack' 'super awesome'
salt-run sdb.delete 'sdb://mykvstore/foobar'
Using SDB URIs in Files¶
SDB URIs can be used in both configuration files, and files that are processed by the renderer system (jinja, mako, etc.). In a configuration file (such as /etc/salt/master, /etc/salt/minion, /etc/salt/cloud, etc.), make an entry as usual, and set the value to the SDB URI. For instance:
mykey: sdb://myetcd/mykey
To retrieve this value using a module, the module in question must use the config.get function to retrieve configuration values. This would look something like:
mykey = __salt__["config.get"]("mykey")
Templating renderers use a similar construct. To get the mykey value from above in Jinja, you would use:
{{ salt['config.get']('mykey') }}
When retrieving data from configuration files using config.get, the SDB URI need only appear in the configuration file itself.
If you would like to retrieve a key directly from SDB, you would call the sdb.get function directly, using the SDB URI. For instance, in Jinja:
{{ salt['sdb.get']('sdb://myetcd/mykey') }}
When writing Salt modules, it is not recommended to call sdb.get directly, as it requires the user to provide values in SDB, using a specific URI. Use config.get instead.
Writing SDB Modules¶
There is currently one function that MUST exist in any SDB module (get()), one that SHOULD exist (set_()) and one that MAY exist (delete()). If a set_() function is used, a __func_alias__ dictionary MUST be declared in the module as well:
__func_alias__ = {
    "set_": "set",
}
This is because set is a Python built-in, and therefore functions should not be created which are called set(). The __func_alias__ functionality is provided via Salt's loader interfaces, and allows legally-named functions to be referred to using names that would otherwise be unwise to use.
The get() function is required, as it will be called via functions in other areas of the code which make use of the sdb:// URI. For example, the config.get function in the config execution module uses this function.
The set_() function may be provided, but is not required, as some sources may be read-only, or may be otherwise unwise to access via a URI (for instance, because of SQL injection attacks).
The delete() function may be provided as well, but is not required, as many sources may be read-only or restrict such operations.
A simple example of an SDB module is salt/sdb/keyring_db.py, as it provides basic examples of most, if not all, of the types of functionality that are available not only for SDB modules, but for Salt modules in general.
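Putting the rules above together, a minimal in-memory SDB module could be sketched as follows (a toy stand-in; a real driver would read connection details from its profile):

```python
# get() is required; set_() is exposed as "set" via __func_alias__;
# delete() is optional.
__func_alias__ = {
    "set_": "set",
}

_store = {}  # toy stand-in for an external database

def get(key, profile=None):
    return _store.get(key)

def set_(key, value, profile=None):
    _store[key] = value
    return value

def delete(key, profile=None):
    return _store.pop(key, None) is not None

set_("password", "super awesome")
print(get("password"))     # super awesome
print(delete("password"))  # True
```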
Running the Salt Master/Minion as an Unprivileged User¶
While the default setup runs the master and minion as the root user, some consider it an extra measure of security to run the master as a non-root user. Keep in mind that doing so does not change the master's capability to access minions as the user they are running as. Because of this, many feel that running the master as a non-root user does not grant any real security advantage, which is why the master runs as root by default.
NOTE:
As of Salt 0.9.10 it is possible to run Salt as a non-root user. This can be done by setting the user parameter in the master configuration file and restarting the salt-master service.
The minion has its own user parameter as well, but running the minion as an unprivileged user will keep it from making changes to things like users, installed packages, etc. unless access controls (sudo, etc.) are setup on the minion to permit the non-root user to make the needed changes.
In order to allow Salt to successfully run as a non-root user, ownership and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable):
- /etc/salt
- /var/cache/salt
- /var/log/salt
- /var/run/salt
Ownership can be easily changed with chown, like so:
# chown -R user /etc/salt /var/cache/salt /var/log/salt /var/run/salt
Using cron with Salt¶
The Salt Minion can initiate its own highstate using the salt-call command.
$ salt-call state.apply
This will cause the minion to check in with the master and ensure it is in the correct "state".
Use cron to initiate a highstate¶
If you would like the Salt Minion to regularly check in with the master you can use cron to run the salt-call command:
0 0 * * * salt-call state.apply
The above cron entry will run a highstate every day at midnight.
NOTE:
Be aware that you may need to ensure the PATH for cron includes any scripts or commands that need to be executed, for example:
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/opt/bin
0 0 * * * salt-call state.apply
Hardening Salt¶
This topic contains tips you can use to secure and harden your Salt environment. How you best secure and harden your Salt environment depends heavily on how you use Salt, where you use Salt, how your team is structured, where you get data from, and what kinds of access (internal and external) you require.
General hardening tips¶
- Restrict who can directly log into your Salt master system.
- Use SSH keys secured with a passphrase to gain access to the Salt master system.
- Track and secure SSH keys and any other login credentials you and your team need to gain access to the Salt master system.
- Use a hardened bastion server or a VPN to restrict direct access to the Salt master from the internet.
- Don't expose the Salt master any more than what is required.
- Harden the system as you would with any high-priority target.
- Keep the system patched and up-to-date.
- Use tight firewall rules. Pay particular attention to TCP/4505 and TCP/4506 on the salt master and avoid exposing these ports unnecessarily.
Salt hardening tips¶
- Subscribe to salt-users or salt-announce so you know when new Salt releases are available.
- Keep your systems up-to-date with the latest patches.
- Use Salt's Client ACL system to avoid having to give out root access in order to run Salt commands.
- Use Salt's Client ACL system to restrict which users can run what commands.
- Use external Pillar to pull data into Salt from external sources so that non-sysadmins (other teams, junior admins, developers, etc) can provide configuration data without needing access to the Salt master.
- Make heavy use of SLS files that are version-controlled and go through a peer-review/code-review process before they're deployed and run in production. This is good advice even for "one-off" CLI commands because it helps mitigate typos and mistakes.
- Use salt-api, SSL, and restrict authentication with the external auth system if you need to expose your Salt master to external services.
- Make use of Salt's event system and reactor to allow minions to signal the Salt master without requiring direct access.
- Run the salt-master daemon as non-root.
- Disable unneeded modules on minions with the disable_modules setting (for example, disable the cmd module if it makes sense in your environment).
- Look through the fully-commented sample master and minion config files. There are many options for securing an installation.
- Run masterless-mode minions on particularly sensitive minions. Salt SSH or the sudo execution module are also options if you need to further restrict a minion.
- Monitor specific security-related log messages. The salt-master logs attempts to access methods which are not exposed to network clients. These messages are logged at the error log level and start with Requested method not exposed.
Rotating keys¶
There are several reasons to rotate keys. One example is exposure or a compromised key. An easy way to rotate a key is to remove the existing keys and let the salt-master or salt-minion process generate new keys on restart.
Rotate a minion key¶
Run the following on the Salt minion:
salt-call saltutil.regen_keys
systemctl stop salt-minion
Run the following on the Salt master:
salt-key -d <minion-id>
Run the following on the Salt minion:
systemctl start salt-minion
Run the following on the Salt master:
salt-key -a <minion-id>
Rotate a master key¶
Run the following on the Salt master:
systemctl stop salt-master
rm <pki_dir>/master.{pem,pub}
systemctl start salt-master
Run the following on the Salt minion:
systemctl stop salt-minion
rm <pki_dir>/minion_master.pub
systemctl start salt-minion
Hardening of syndic setups¶
Syndics must be run as the same user as their syndic master process. The master of masters will include publisher ACL information in jobs sent to downstream masters via syndics. This means that any minions connected directly to a master of masters will also receive ACL information in jobs being published. For the most secure setup, only connect syndics directly to the master of masters.
Security disclosure policy¶
- security@saltstack.com
- gpg key ID: 4EA0793D
- gpg key fingerprint: 8ABE 4EFC F0F4 B24B FF2A AF90 D570 F2D3 4EA0 793D
gpg public key:
-----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFO15mMBEADa3CfQwk5ED9wAQ8fFDku277CegG3U1hVGdcxqKNvucblwoKCb hRK6u9ihgaO9V9duV2glwgjytiBI/z6lyWqdaD37YXG/gTL+9Md+qdSDeaOa/9eg 7y+g4P+FvU9HWUlujRVlofUn5Dj/IZgUywbxwEybutuzvvFVTzsn+DFVwTH34Qoh QIuNzQCSEz3Lhh8zq9LqkNy91ZZQO1ZIUrypafspH6GBHHcE8msBFgYiNBnVcUFH u0r4j1Rav+621EtD5GZsOt05+NJI8pkaC/dDKjURcuiV6bhmeSpNzLaXUhwx6f29 Vhag5JhVGGNQxlRTxNEM86HEFp+4zJQ8m/wRDrGX5IAHsdESdhP+ljDVlAAX/ttP /Ucl2fgpTnDKVHOA00E515Q87ZHv6awJ3GL1veqi8zfsLaag7rw1TuuHyGLOPkDt t5PAjsS9R3KI7pGnhqI6bTOi591odUdgzUhZChWUUX1VStiIDi2jCvyoOOLMOGS5 AEYXuWYP7KgujZCDRaTNqRDdgPd93Mh9JI8UmkzXDUgijdzVpzPjYgFaWtyK8lsc Fizqe3/Yzf9RCVX/lmRbiEH+ql/zSxcWlBQd17PKaL+TisQFXcmQzccYgAxFbj2r QHp5ABEu9YjFme2Jzun7Mv9V4qo3JF5dmnUk31yupZeAOGZkirIsaWC3hwARAQAB tDBTYWx0U3RhY2sgU2VjdXJpdHkgVGVhbSA8c2VjdXJpdHlAc2FsdHN0YWNrLmNv bT6JAj4EEwECACgFAlO15mMCGwMFCQeGH4AGCwkIBwMCBhUIAgkKCwQWAgMBAh4B AheAAAoJENVw8tNOoHk9z/MP/2vzY27fmVxU5X8joiiturjlgEqQw41IYEmWv1Bw 4WVXYCHP1yu/1MC1uuvOmOd5BlI8YO2C2oyW7d1B0NorguPtz55b7jabCElekVCh h/H4ZVThiwqgPpthRv/2npXjIm7SLSs/kuaXo6Qy2JpszwDVFw+xCRVL0tH9KJxz HuNBeVq7abWD5fzIWkmGM9hicG/R2D0RIlco1Q0VNKy8klG+pOFOW886KnwkSPc7 JUYp1oUlHsSlhTmkLEG54cyVzrTP/XuZuyMTdtyTc3mfgW0adneAL6MARtC5UB/h q+v9dqMf4iD3wY6ctu8KWE8Vo5MUEsNNO9EA2dUR88LwFZ3ZnnXdQkizgR/Aa515 dm17vlNkSoomYCo84eN7GOTfxWcq+iXYSWcKWT4X+h/ra+LmNndQWQBRebVUtbKE ZDwKmiQz/5LY5EhlWcuU4lVmMSFpWXt5FR/PtzgTdZAo9QKkBjcv97LYbXvsPI69 El1BLAg+m+1UpE1L7zJT1il6PqVyEFAWBxW46wXCCkGssFsvz2yRp0PDX8A6u4yq rTkt09uYht1is61joLDJ/kq3+6k8gJWkDOW+2NMrmf+/qcdYCMYXmrtOpg/wF27W GMNAkbdyzgeX/MbUBCGCMdzhevRuivOI5bu4vT5s3KdshG+yhzV45bapKRd5VN+1 mZRqiQJVBBMBAgA/AhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgBYhBIq+Tvzw 9LJL/yqvkNVw8tNOoHk9BQJb0e5rBQkL3m8IAAoJENVw8tNOoHk9fzMP/ApQtkQD BmoYEBTF6BH1bywzDw5OHpnBSLbuoYtA3gkhnm/83MzFDcGn22pgo2Fv0MuHltWI G2oExzje7szmcM6Xg3ZTKapJ3/p2J+P33tkJA1LWpg+DdgdQlqrjlXKwEnikszuB 9IMhbjoPeBzwiUtsBQmcwbVgwMzbscwoV5DJ/gLDCkgF4rp2uKEYAcBi8s9NGX6p zQsb9Sb0/bKdCrszAcvUn4WYB6WbAPttvutYHtg/nZfXEeX/SgBueXo3lO9vzFlO 
r3Zgk7WeucsEqa9Qo0VLOq28HykixM5mEJKsAQrNIqM1DqXgfDch8RJAHzgMBHFH Qi9hJXk1/6OA2FPXQGcA9Td5Dt0i1Z7wMrAUMj3s9gNMVCD0hQqEKfUtpyV7KBAj AO5j8Wr8KafnRm6czBCkcV0SRzHQSHdYyncozWwPgWOaRC9AY9fEDz8lBaSoB/C+ dyO/xZMTWoaWqkHozVoHIrCc4CAtZTye/5mxFhq15Q1Iy/NjelrMTCD1kql1dNIP oOgfOYl1xLMQIBwrrCrgeRIvxEgKRf9KOLbSrS7+3vOKoxf+LD4AQfLci8dFyH+I t0Z43nk93yTOI82RTdz5GwUXIKcvGhsJ8bgNlGTxM1R/Sl8Sg8diE2PRAp/fk7+g CwOM8VkeyrDM2k1cy64d8USkbR7YtT3otyFQiQJVBBMBCAA/AhsDBgsJCAcDAgYV CAIJCgsEFgIDAQIeAQIXgBYhBIq+Tvzw9LJL/yqvkNVw8tNOoHk9BQJeapbNBQkN v4KKAAoJENVw8tNOoHk9BFQP/04a1yQb3aOYbNgx+ER9l54wZbUUlReU+ujmlW03 12ZW8fFZ0SN2q7xKtE/I9nNl1gjJ7NHTP3FhZ0eNyG+mJeGyrscVKxaAkTV+71e3 7n94/qC2bM753X+2160eR7Md+R/itoljStwmib1583rSTTUld1i4FnUTrEhF7MBt I/+5l7vUK4Hj1RPovHVeHXYfdbrS6wCBi6GsdOfYGfGacZIfM4XLXTkyjVt4Zg0j rwZ36P1amHky1QyvQ2stkXjCEtP04h3o3EfC1yupNXarO1VXj10/wWYhoGAz6AT2 Usk6DiaiJqHPy2RwPfKzv7ZrUlMxKrqjPUHcoBf++EjzFtR3LJ0pY2fLwp6Pk4s4 18Xwi7r16HnCH/BZgqZVyXAhDV6+U9rAHab/n4b0hcWWaT2SIhsyZKtEMiTMJeq5 aAMcRSWX+dHO+MzMIBzNu7BO3b+zODD0+XSMsPqeHp3cqfZ3EHobKQPPFucdfjug Hx2+dbPD3IwJVIilc9Otfz/+JYG4im5p4N6UCwXHbtiuuREC1SQpU9BqEjQAyIiL gXlE5MSVqXijkrIpYB+K8cR+44nQ4K2kc4ievNqXR6D7XQ3AE76QN84Lby2b5W86 bbboIy0Bgy+9jgCx0CS7fk1P8zx1dw2FNDVfxZ+s473ZvwP1wdSRZICjZUvM8hx4 4kPCiQJVBBMBCAA/AhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgBYhBIq+Tvzw 9LJL/yqvkNVw8tNOoHk9BQJiOkMeBQkUJ/c7AAoJENVw8tNOoHk9Xx8P/26W8v/v Exmttzcqh7MlihddXfr2lughSuUBQ8aLsffGHSGIgyqSPlq0Fl5qOCoJ8hYZSBqV yEfo7iRY7E3K1LGXKDkpup9hC1wMjR0A25eoXwEnD2vEQ/upXXueH05vkcMc165B cK0kNxas+2amCc3nHJOlfWILXQk4OS+nB0lBWe8H96ppfAaX/G0JiYsa0hjNycZq 0ftEdCkAJRvSFuu6d3gXH69KLxoNcJOE+99f3wMOuOcX3Xf1k/cwqdJRdEiW8oz8 Gf5ZRzWcpsXXg6nB2mkahLoRDMM2U+1C6fHbUg4yTvU1AB+F/OYqe1d0hedho0o5 +WWoTuM/U79+m3NM14qvr0iJP7ytABiEE96nNAz+Q0NDZqA6JoUd7obo8KVjGHEt 9bRl/8K/zWkdNLoF84tWjEiBCzCKXGEay7lgiIx5f3OvP91CfGL+ILHrk/AZR1eE M+KI7wB8sJEFF95UoKVua3YzLIFScB4bUEOg6bz8xSSP4a0BWktSm5ws8iCWqOE6 S9haCppZ7a6k5czQNPJV2bp2eTS4ykFAQLv/mHMS5awIvb8b630Rufn1vZHKCrMf 
WdSbBZD7oojxYo1psPlfzN2KUrNXgl7vAUNagJEogMoiYAZ2ML7rTVAC1qnbxQb+ DeC+r0I98AIY6igIgRbcybH3ccfXYNtcxLUJuQINBFO15mMBEAC5UuLii9ZLz6qH fIJp35IOW9U8SOf7QFhzXR7NZ3DmJsd3f6Nb/habQFIHjm3K9wbpj+FvaW2oWRlF VvYdzjUq6c82GUUjW1dnqgUvFwdmM8351n0YQ2TonmyaF882RvsRZrbJ65uvy7SQ xlouXaAYOdqwLsPxBEOyOnMPSktW5V2UIWyxsNP3sADchWIGq9p5D3Y/loyIMsS1 dj+TjoQZOKSj7CuRT98+8yhGAY8YBEXu9r3I9o6mDkuPpAljuMc8r09Im6az2egt K/szKt4Hy1bpSSBZU4W/XR7XwQNywmb3wxjmYT6Od3Mwj0jtzc3gQiH8hcEy3+BO +NNmyzFVyIwOLziwjmEcw62S57wYKUVnHD2nglMsQa8Ve0e6ABBMEY7zGEGStva5 9rfgeh0jUMJiccGiUDTMs0tdkC6knYKbu/fdRqNYFoNuDcSeLEw4DdCuP01l2W4y Y+fiK6hAcL25amjzc+yYo9eaaqTn6RATbzdhHQZdpAMxY+vNT0+NhP1Zo5gYBMR6 5Zp/VhFsf67ijb03FUtdw9N8dHwiR2m8vVA8kO/gCD6wS2p9RdXqrJ9JhnHYWjiV uXR+f755ZAndyQfRtowMdQIoiXuJEXYw6XN+/BX81gJaynJYc0uw0MnxWQX+A5m8 HqEsbIFUXBYXPgbwXTm7c4IHGgXXdwARAQABiQI8BBgBAgAmAhsMFiEEir5O/PD0 skv/Kq+Q1XDy006geT0FAlvR7oMFCQvebyAACgkQ1XDy006geT2Hxw//Zha8j8Uc 4B+DmHhZIvPmHp9aFI4DWhC7CBDrYKztBz42H6eX+UsBu4p+uBDKdW9xJH+Qt/zF nf/zB5Bhc/wFceVRCAkWxPdiIQeo5XQGjZeORjle7E9iunTko+5q1q9I7IgqWYrn jRmulDvRhO7AoUrqGACDrV6t0F1/XPB8seR2i6axFmFlt1qBHasRq11yksdgNYiD KXaovf7csDGPGOCWEKMX7BFGpdK/dWdNYfH0Arfom0U5TqNfvGtP4yRPx2bcs7/1 VXPj7IqhBgOtA9pwtMjFki8HGkqj7bB2ErFBOnSwqqNnNcbnhiO6D74SHVGAHhKZ whaMPDg76EvjAezoLHg7KWYOyUkWJSLa+YoM9r4+PJuEuW/XuaZCNbrAhek+p3pD ywhElvZe/2UFk619qKzwSbTzk7a90rxLQ2wwtd0vxAW/GyjWl4/kOMZhI5+LAk1l REucE0fSQxzCTeXu2ObvFR9ic02IYGH3Koz8CrGReEI1J05041Y5IhKxdsvGOD2W e7ymcblYW4Gz8eYFlLeNJkj/38R7qmNZ028XHzAZDCAWDiTFrnCoglyk+U0JRHfg HTsdvoc8mBdT/s24LhnfAbpLizlrZZquuOF6NLQSkbuLtmIwf+h9ynEEJxEkGGWg 7JqB1tMjNHLkRpveO/DTYB+iffpba1nCgumJAjwEGAEIACYCGwwWIQSKvk788PSy S/8qr5DVcPLTTqB5PQUCYjpDOQUJFCf3VgAKCRDVcPLTTqB5PYDiEADaj1aAdXDb +XrlhzlGCT3e16RDiE4BjSD1KHZX8ZDABI79JDG0iMN2PpWuViXq7AvWuwgNYdac WjHsZGgHW82UoPVGKnfEVjjf0lQQIIcgdS5dEV8LamkeIo4vKUX/MZY+Mivk6luP vCec9Euj/XU1nY6gGq6inpwDtZkNoJlCBune/IIGS82dU8RrSGAHNRZoaDJfdfQm j7YAOWCUqyzn747yMyuMUOc15iJIgOz1dKN5YwDmFkzjlw+616Aswcp8UA0OfOQ+ 
e4THli32BgKTSNeOGhGgx1xCDkt+0gP1L0L2Sqhlr6BnqNF65mQ4j2v6UGY1noCo jYxFchoa1zEdEiZRr/sRO91XlJtK7HyIAI0cUHKVU+Cayoh//OBQBJnbeZlfh9Qn 4ead1pTz9bcKIeZleAjlzNG249bGY+82WsFghb4/7U9MYJVePz0m1zJKPkdABZ+R lSDvhf4ImesfH5UuofZFv1UXmQL4yV7PDXXdy2xhma7YLznyZTUobDoJiZbuO72O g5HJCpYoNfvGx++Z9naomUWufqi9PWigEMxU8lUtiGaLQrDW3inTOZTTmTnsJiAI Lhku0Jr4SjCqxoEFydXOGvNV5XB4WXvf+A6JhcZI+/S72ai1CeSgMFiJLAEb2MZ+ fwPKmQ2cKnCBs5ASj1DkgUcz2c8DTUPVqg== =i1Tf -----END PGP PUBLIC KEY BLOCK-----
The SaltStack Security Team is available at security@saltstack.com for security-related bug reports or questions.
We request the disclosure of any security-related bugs or issues be reported non-publicly until such time as the issue can be resolved and a security-fix release can be prepared. At that time we will release the fix and make a public announcement with upgrade instructions and download locations.
Security response procedure¶
SaltStack takes security and the trust of our customers and users very seriously. Our disclosure policy is intended to resolve security issues as quickly and safely as is possible.
1. A security report sent to security@saltstack.com is assigned to a team member. This person is the primary contact for questions and will coordinate the fix, release, and announcement.
2. The reported issue is reproduced and confirmed. A list of affected projects and releases is made.
3. Fixes are implemented for all affected projects and releases that are actively supported. Back-ports of the fix are made to any older releases that are actively supported.
4. Packagers are notified via the salt-packagers mailing list that an issue was reported and resolved, and that an announcement is incoming.
5. A pre-announcement is sent to the salt-announce mailing list approximately a week before the CVE release. This announcement does not include details of the vulnerability, but does include the date the release will occur and the vulnerability rating.
6. A new release is created and pushed to all affected repositories. The release documentation provides a full description of the issue, plus any upgrade instructions or other relevant details.
7. An announcement is made to the salt-users and salt-announce mailing lists. The announcement contains a description of the issue and a link to the full release documentation and download locations.
Receiving security announcements¶
Per the response procedure above, the following mailing lists receive security-relevant notifications:
- salt-packagers
- salt-users
- salt-announce
In addition to the mailing lists, SaltStack also provides the following resources:
- SaltStack Security Announcements landing page
- SaltStack Security RSS Feed
- SaltStack Community Slack Workspace
Salt Channels¶
One of the fundamental features of Salt is remote execution. Salt has two basic "channels" for communicating with minions. Each channel requires a client (minion) and a server (master) implementation to work within Salt. These pairs of channels will work together to implement the specific message passing required by the channel interface. Channels use Transports for sending and receiving messages.
Pub Channel¶
The pub (or publish) channel is how a master sends a job (payload) to a minion. This is a basic pub/sub paradigm with specific targeting semantics. All data that goes across the publish system should be encrypted such that only members of the Salt cluster can decrypt the published payloads.
Req Channel¶
The req channel is how minions send data to the master. This interface is primarily used for fetching files and returning job results. The req channel has two basic interfaces when talking to the master: send is the basic method, which guarantees only that the message is encrypted so that just minions attached to the same master can read it (with no guarantee of minion-master confidentiality), whereas the crypted_transfer_decode_dictentry method does guarantee minion-master confidentiality. The req channel is also used by the salt CLI to publish jobs to the master.
Salt Transport¶
Transports in Salt are used by Channels to send messages between Masters, Minions, and the Salt CLI. Transports can be brokerless or brokered. There are two types of server / client implementations needed to implement a channel.
Publish Server¶
The publish server implements a publish / subscribe paradigm and is used by Minions to receive jobs from Masters.
Publish Client¶
The publish client subscribes to, and receives messages from a Publish Server.
Request Server¶
The request server implements a request / reply paradigm. Every request sent by the client must receive exactly one reply.
Request Client¶
The request client sends requests to a Request Server and receives a reply message.
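The four roles above can be sketched with a minimal in-memory pair; the class names here are illustrative, not Salt's actual implementation:

```python
class RequestServer:
    """Request/reply: every request received gets exactly one reply."""

    def __init__(self, handler):
        self.handler = handler  # called once per incoming request

    def serve_one(self, request):
        # The channel contract: exactly one reply per request.
        return self.handler(request)


class RequestClient:
    """Sends a request and waits for the single reply."""

    def __init__(self, server):
        self.server = server

    def send(self, request):
        return self.server.serve_one(request)


server = RequestServer(handler=lambda req: {"echo": req})
client = RequestClient(server)
reply = client.send({"fun": "test.version"})
```

A real transport puts a socket between the two halves, but the one-request/one-reply contract is the same.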
ZeroMQ Transport¶
NOTE:
ZeroMQ is a messaging library with bindings into many languages. ZeroMQ implements a socket interface for message passing, with specific semantics for the socket type.
Publish Server and Client¶
The publish server and client are implemented using ZeroMQ's pub/sub sockets. By default we don't use ZeroMQ's filtering, which means that all publish jobs are sent to all minions and filtered on the minion side. ZeroMQ does have publisher-side filtering, which can be enabled in Salt using zmq_filtering.
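Minion-side filtering under the default (non-zmq_filtering) setup can be sketched as follows; the function name is illustrative, and glob matching stands in for Salt's full set of target types:

```python
import fnmatch


def minion_accepts(minion_id, target, tgt_type="glob"):
    """Decide on the minion whether a broadcast publish applies to it."""
    if tgt_type == "glob":
        return fnmatch.fnmatch(minion_id, target)
    if tgt_type == "list":
        return minion_id in target
    # Other target types (grains, pillar, pcre, ...) would be handled here.
    raise NotImplementedError(tgt_type)


# Every minion receives the publish; only matching minions run the job.
accepted = minion_accepts("web01.example.com", "web*")   # True
rejected = minion_accepts("db01.example.com", "web*")    # False
```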
Request Server and Client¶
The request server and client are implemented using ZeroMQ's req/rep sockets. These sockets enforce a send/recv pattern, which forces salt to serialize messages through these socket pairs. This means that although the interface is asynchronous on the minion, we cannot send a second message until we have received the reply to the first message.
TCP Transport¶
The tcp transport is an implementation of Salt's transport using raw tcp sockets. Since this isn't using a pre-defined messaging library we will describe the wire protocol, message semantics, etc. in this document.
The tcp transport is enabled by changing the transport setting to tcp on each Salt minion and Salt master.
transport: tcp
Wire Protocol¶
This implementation over TCP focuses on flexibility over absolute efficiency. This means we are okay to spend a couple of bytes of wire space for flexibility in the future. That being said, the wire framing is quite efficient and looks like:
msgpack({'head': SOMEHEADER, 'body': SOMEBODY})
Since msgpack is an iterably parsed serialization, we can simply write the serialized payload to the wire. Within that payload we have two items "head" and "body". Head contains header information (such as "message id"). The Body contains the actual message that we are sending. With this flexible wire protocol we can implement any message semantics that we'd like-- including multiplexed message passing on a single socket.
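A sketch of this framing; msgpack-python offers packb/unpackb with the same shape, but json stands in here so the example has no third-party dependency:

```python
import json


def frame(msg_id, body):
    """Serialize a head/body message for the wire (json stands in for msgpack)."""
    return json.dumps({"head": {"mid": msg_id}, "body": body}).encode()


def unframe(wire_bytes):
    """Parse a framed message back into its message id and body."""
    payload = json.loads(wire_bytes.decode())
    return payload["head"]["mid"], payload["body"]


wire = frame(1, {"fun": "test.version"})
mid, body = unframe(wire)
```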
TLS Support¶
New in version 2016.11.1.
The TCP transport allows the master/minion communication to be optionally wrapped in a TLS connection. Enabling this is simple: both the master and minion must use the tcp transport, and the ssl option must be enabled. The ssl option is passed as a dict and corresponds to the options passed to the Python ssl.wrap_socket function.
A simple setup looks like this, on the Salt Master add the ssl option to the master configuration file:
ssl:
keyfile: <path_to_keyfile>
certfile: <path_to_certfile>
ssl_version: PROTOCOL_TLSv1_2
ciphers: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
The minimal ssl option in the minion configuration file looks like this:
ssl: True # Versions below 2016.11.4: ssl: {}
Specific options can be sent to the minion also, as defined in the Python ssl.wrap_socket function.
Crypto¶
The current implementation uses the same crypto as the zeromq transport.
Publish Server and Client¶
For the publish server and client we send messages without "message ids" which the remote end interprets as a one-way send.
NOTE:
As of Salt 3005, publishes using pcre and glob targeting are also sent only to relevant minions and not broadcasted. Other targeting types are always sent to all minions and rely on minion-side filtering.
Request Server and Client¶
For the request server and client we send messages with a "message id". This "message id" allows us to multiplex messages across the socket.
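Multiplexing can be sketched as a table of pending requests keyed by message id; class and method names here are illustrative:

```python
import itertools


class Multiplexer:
    """Match replies to requests on a shared socket by message id."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}  # message id -> reply body (None until delivered)

    def send(self, send_func, body):
        mid = next(self._ids)
        self._pending[mid] = None
        send_func({"head": {"mid": mid}, "body": body})
        return mid

    def deliver(self, message):
        # Replies may arrive in any order; the id routes each to its request.
        self._pending[message["head"]["mid"]] = message["body"]

    def reply_for(self, mid):
        return self._pending[mid]


sent = []
mux = Multiplexer()
mid = mux.send(sent.append, {"fun": "test.version"})
mux.deliver({"head": {"mid": mid}, "body": "2018.3.4"})
```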
Master Tops System¶
In 0.10.4 the external_nodes system was upgraded to allow for modular subsystems to be used to generate the top file data for a highstate run on the master.
The old external_nodes option has been removed. The master tops system provides a pluggable and extendable replacement for it, allowing for multiple different subsystems to provide top file data.
Using the new master_tops option is simple:
master_tops:
ext_nodes: cobbler-external-nodes
for Cobbler or:
master_tops:
reclass:
inventory_base_uri: /etc/reclass
classes_uri: roles
for Reclass.
master_tops:
varstack: /path/to/the/config/file/varstack.yaml
for Varstack.
It's also possible to create custom master_tops modules. Simply place them into salt://_tops in the Salt fileserver and use the saltutil.sync_tops runner to sync them. If this runner function is not available, they can manually be placed into extmods/tops, relative to the master cachedir (in most cases the full path will be /var/cache/salt/master/extmods/tops).
Custom tops modules are written like any other execution module, see the source for the two modules above for examples of fully functional ones. Below is a bare-bones example:
/etc/salt/master:
master_tops:
customtop: True
customtop.py: (custom master_tops module)
import logging
import sys

# Define the module's virtual name
__virtualname__ = "customtop"

log = logging.getLogger(__name__)


def __virtual__():
    return __virtualname__


def top(**kwargs):
    log.debug("Calling top in customtop")
    return {"base": ["test"]}
salt minion state.show_top should then display something like:
$ salt minion state.show_top
minion:
    ----------
    base:
        - test
Returners¶
By default the return values of the commands sent to the Salt minions are returned to the Salt master; however, anything at all can be done with the results data.
By using a Salt returner, results data can be redirected to external data-stores for analysis and archival.
Returners pull their configuration values from the Salt minions. Returners are only configured once, which is generally at load time.
The returner interface allows the return data to be sent to any system that can receive data. This means that return data can be sent to a Redis server, a MongoDB server, a MySQL server, or any system.
Using Returners¶
All Salt commands will return the command data back to the master. Specifying returners will ensure that the data is _also_ sent to the specified returner interfaces.
Specifying what returners to use is done when the command is invoked:
salt '*' test.version --return redis_return
This command will ensure that the redis_return returner is used.
It is also possible to specify multiple returners:
salt '*' test.version --return mongo_return,redis_return,cassandra_return
In this scenario all three returners will be called and the data from the test.version command will be sent out to the three named returners.
Writing a Returner¶
Returners are Salt modules that allow the redirection of results data to targets other than the Salt Master.
Returners Are Easy To Write!¶
Writing a Salt returner is straightforward.
A returner is a Python module containing at minimum a returner function. Other optional functions can be included to add support for master_job_cache, Storing Job Results in an External System, and Event Returners.
- returner
- The returner function must accept a single argument. The argument contains return data from the called minion function. If the minion function test.version is called, the value of the argument will be a dictionary. Run the following command from a Salt master to get a sample of the dictionary:
salt-call --local --metadata test.version --out=pprint
import redis

import salt.utils.json


def returner(ret):
    """
    Return information to a redis server
    """
    # Get a redis connection
    serv = redis.Redis(host="redis-serv.example.com", port=6379, db="0")
    serv.sadd("%(id)s:jobs" % ret, ret["jid"])
    serv.set("%(jid)s:%(id)s" % ret, salt.utils.json.dumps(ret["return"]))
    serv.sadd("jobs", ret["jid"])
    serv.sadd(ret["jid"], ret["id"])
The above example of a returner set to send the data to a Redis server serializes the data as JSON and sets it in redis.
Using Custom Returner Modules¶
Place custom returners in a _returners/ directory within the file_roots specified by the master config file.
Custom returners are distributed when any of the following are called:
- state.apply
- saltutil.sync_returners
- saltutil.sync_all
Any custom returners which have been synced to a minion that are named the same as one of Salt's default set of returners will take the place of the default returner with the same name.
Naming the Returner¶
Note that a returner's default name is its filename (i.e. foo.py becomes returner foo), but that its name can be overridden by using a __virtual__ function. A good example of this can be found in the redis returner, which is named redis_return.py but is loaded as simply redis:
try:
    import redis

    HAS_REDIS = True
except ImportError:
    HAS_REDIS = False

__virtualname__ = "redis"


def __virtual__():
    if not HAS_REDIS:
        return False
    return __virtualname__
Master Job Cache Support¶
Salt's master_job_cache allows returners to be used as a pluggable replacement for the Default Job Cache. In order to do so, a returner must implement the following functions:
- prep_jid
- Ensures that job ids (jid) don't collide, unless passed_jid is provided.
nocache is an optional boolean that indicates if return data should be cached. passed_jid is a caller provided jid which should be returned unconditionally.
def prep_jid(nocache, passed_jid=None):  # pylint: disable=unused-argument
    """
    Do any work necessary to prepare a JID, including sending a custom id
    """
    return passed_jid if passed_jid is not None else salt.utils.jid.gen_jid()
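salt.utils.jid.gen_jid derives the jid from the current timestamp; a stdlib approximation of that 20-digit format (as in samples like 20150330121011408195) is:

```python
from datetime import datetime


def gen_jid_like(now=None):
    """Build a jid-style string: YYYYMMDDhhmmssffffff (20 digits)."""
    now = now or datetime.now()
    return now.strftime("%Y%m%d%H%M%S%f")


jid = gen_jid_like(datetime(2015, 3, 30, 12, 10, 11, 408195))
# jid == "20150330121011408195"
```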
- save_load
- Save job information. The jid is generated by prep_jid and should be considered a unique identifier for the job. The jid, for example, could be used as the primary/unique key in a database. The load is what is returned to a Salt master by a minion. minions is a list of minions that the job was run against. The following code example stores the load as a JSON string in the salt.jids table.
import salt.utils.json


def save_load(jid, load, minions=None):
    """
    Save the load to the specified jid id
    """
    query = """INSERT INTO salt.jids (
        jid, load
    ) VALUES (
        '{0}', '{1}'
    );""".format(
        jid, salt.utils.json.dumps(load)
    )
    # cassandra_cql.cql_query may raise a CommandExecutionError
    try:
        __salt__["cassandra_cql.cql_query"](query)
    except CommandExecutionError:
        log.critical("Could not save load in jids table.")
        raise
    except Exception as e:
        log.critical("Unexpected error while inserting into jids: {0}".format(e))
        raise
- get_load
- Must accept a job id (jid) and return the job load stored by save_load, or an empty dictionary when not found.
def get_load(jid):
    """
    Return the load data that marks a specified jid
    """
    query = """SELECT load FROM salt.jids WHERE jid = '{0}';""".format(jid)
    ret = {}
    # cassandra_cql.cql_query may raise a CommandExecutionError
    try:
        data = __salt__["cassandra_cql.cql_query"](query)
        if data:
            load = data[0].get("load")
            if load:
                ret = salt.utils.json.loads(load)
    except CommandExecutionError:
        log.critical("Could not get load from jids table.")
        raise
    except Exception as e:
        log.critical(
            "Unexpected error while getting load from jids: {0}".format(str(e))
        )
        raise
    return ret
External Job Cache Support¶
Salt's Storing Job Results in an External System extends the master_job_cache. External Job Cache support requires the following functions in addition to what is required for Master Job Cache support:
- get_jid
- Return a dictionary containing the information (load) returned by each minion when the specified job id was executed.
Sample:
{
    "local": {
        "master_minion": {
            "fun_args": [],
            "jid": "20150330121011408195",
            "return": "2018.3.4",
            "retcode": 0,
            "success": true,
            "cmd": "_return",
            "_stamp": "2015-03-30T12:10:12.708663",
            "fun": "test.version",
            "id": "master_minion"
        }
    }
}
- get_fun
- Return a dictionary of minions that called a given Salt function as their last function call.
Sample:
{
    "local": {
        "minion1": "test.version",
        "minion3": "test.version",
        "minion2": "test.version"
    }
}
- get_jids
- Return a list of all job ids.
Sample:
{
    "local": [
        "20150330121011408195",
        "20150330195922139916"
    ]
}
- get_minions
- Returns a list of minions
Sample:
{
    "local": [
        "minion3",
        "minion2",
        "minion1",
        "master_minion"
    ]
}
Please refer to one or more of the existing returners (i.e. mysql, cassandra_cql) if you need further clarification.
Event Support¶
An event_return function must be added to the returner module to allow events to be logged from a master via the returner. A list of events is passed to the function by the master.
The following example was taken from the MySQL returner. In this example, each event is inserted into the salt_events table keyed on the event tag. The tag contains the jid and therefore is guaranteed to be unique.
import salt.utils.json


def event_return(events):
    """
    Return event to mysql server

    Requires that configuration be enabled via 'event_return'
    option in master config.
    """
    with _get_serv(events, commit=True) as cur:
        for event in events:
            tag = event.get("tag", "")
            data = event.get("data", "")
            sql = """INSERT INTO `salt_events` (`tag`, `data`, `master_id`)
                     VALUES (%s, %s, %s)"""
            cur.execute(sql, (tag, salt.utils.json.dumps(data), __opts__["id"]))
Testing the Returner¶
The returner, prep_jid, save_load, get_load, and event_return functions can be tested by configuring the master_job_cache and Event Returners in the master config file and submitting a job that runs test.version on each minion from the master.
Once you have successfully exercised the Master Job Cache functions, test the External Job Cache functions using the ret execution module.
salt-call ret.get_jids cassandra_cql --output=json
salt-call ret.get_fun cassandra_cql test.version --output=json
salt-call ret.get_minions cassandra_cql --output=json
salt-call ret.get_jid cassandra_cql 20150330121011408195 --output=json
Event Returners¶
For maximum visibility into the history of events across a Salt infrastructure, all events seen by a salt master may be logged to one or more returners.
To enable event logging, set the event_return configuration option in the master config to the returner(s) which should be designated as the handler for event returns.
Full List of Returners¶
returner modules¶
appoptics_return | Salt returner to return highstate stats to AppOptics Metrics |
carbon_return | Take data from salt and "return" it into a carbon receiver |
cassandra_cql_return | Return data to a cassandra server |
cassandra_return | |
couchbase_return | Simple returner for Couchbase. |
couchdb_return | Simple returner for CouchDB. |
django_return | Deprecated since version 3006.0. |
elasticsearch_return | Return data to an elasticsearch server for indexing. |
etcd_return | Return data to an etcd server or cluster |
highstate_return | Return the results of a highstate (or any other state function that returns data in a compatible format) via an HTML email or HTML file. |
influxdb_return | Return data to an influxdb server. |
kafka_return | Return data to a Kafka topic |
librato_return | Salt returner to return highstate stats to Librato |
local | The local returner is used to test the returner interface, it just prints the return data to the console to verify that it is being passed properly |
local_cache | Return data to local job cache |
mattermost_returner | Return salt data via mattermost |
memcache_return | Return data to a memcache server |
mongo_future_return | Return data to a mongodb server |
mongo_return | Return data to a mongodb server |
multi_returner | Read/Write multiple returners |
mysql | Return data to a mysql server |
nagios_nrdp_return | Return salt data to Nagios |
odbc | Return data to an ODBC compliant server. |
pgjsonb | Return data to a PostgreSQL server with json data stored in Pg's jsonb data type |
postgres | Return data to a postgresql server |
postgres_local_cache | Use a postgresql server for the master job cache. |
pushover_returner | Return salt data via pushover (http://www.pushover.net) |
rawfile_json | Take data from salt and "return" it into a raw file containing the json, with one line per event. |
redis_return | Return data to a redis server |
sentry_return | Salt returner that reports execution results back to sentry. |
slack_returner | Return salt data via slack |
slack_webhook_return | Return salt data via Slack using Incoming Webhooks |
sms_return | Return data by SMS. |
smtp_return | Return salt data via email |
splunk | Send json response data to Splunk via the HTTP Event Collector Requires the following config values to be specified in config or pillar: |
sqlite3_return | Insert minion return data into a sqlite3 database |
syslog_return | Return data to the host operating system's syslog facility |
telegram_return | Return salt data via Telegram. |
xmpp_return | Return salt data via xmpp |
zabbix_return | Return salt data to Zabbix |
salt.returners.appoptics_return¶
Salt returner to return highstate stats to AppOptics Metrics
To enable this returner the minion will need the AppOptics Metrics client importable on the Python path and the following values configured in the minion or master config.
The AppOptics python client can be found at:
https://github.com/appoptics/python-appoptics-metrics
appoptics.api_token: abc12345def
An example configuration that returns the total number of successes and failures for your salt highstate runs (the default) would look like this:
return: appoptics
appoptics.api_token: <token string here>
The returner publishes the following metrics to AppOptics:
- saltstack.failed
- saltstack.passed
- saltstack.retcode
- saltstack.runtime
- saltstack.total
You can add a tags section to specify which tags should be attached to all metrics created by the returner.
appoptics.tags:
host_hostname_alias: <the minion ID - matches @host>
tier: <the tier/etc. of this node>
cluster: <the cluster name, etc.>
If no tags are explicitly configured, then the tag key host_hostname_alias will be set, with the minion's id grain being the value.
In addition to the requested tags, for a highstate run each of these will be tagged with the key:value of state_type: highstate.
In order to return metrics for state.sls runs (distinct from highstates), you can specify a list of state names to the key appoptics.sls_states like so:
appoptics.sls_states:
- role_salt_master.netapi
- role_redis.config
- role_smarty.dummy
This will report success and failure counts on runs of the role_salt_master.netapi, role_redis.config, and role_smarty.dummy states in addition to highstates.
This will report the same metrics as above, but for these runs the metrics will be tagged with state_type: sls and state_name set to the name of the state that was invoked, e.g. role_salt_master.netapi.
- salt.returners.appoptics_return.returner(ret)
- Parse the return data and return metrics to AppOptics.
For each state that's provided in the configuration, return tagged metrics for the result of that state if it's present.
salt.returners.carbon_return¶
Take data from salt and "return" it into a carbon receiver
Add the following configuration to the minion configuration file:
carbon.host: <server ip address>
carbon.port: 2003
Errors when trying to convert data to numbers may be ignored by setting carbon.skip_on_error to True:
carbon.skip_on_error: True
By default, data will be sent to carbon using the plaintext protocol. To use the pickle protocol, set carbon.mode to pickle:
carbon.mode: pickle
You can also specify the pattern used for the metric base path (except for virt modules metrics):

carbon.metric_base_pattern: carbon.[minion_id].[module].[function]

These tokens can be used:
- [module]: salt module
- [function]: salt function
- [minion_id]: minion id

The default is:

carbon.metric_base_pattern: [module].[function].[minion_id]
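Token expansion can be sketched in a few lines; the helper name is illustrative, not part of the returner:

```python
def expand_metric_base(pattern, module, function, minion_id):
    """Substitute the carbon metric-path tokens into the configured pattern."""
    return (
        pattern.replace("[module]", module)
        .replace("[function]", function)
        .replace("[minion_id]", minion_id)
    )


base = expand_metric_base("[module].[function].[minion_id]", "test", "ping", "web01")
# base == "test.ping.web01"
```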
Carbon settings may also be configured as:
carbon:
host: <server IP or hostname>
port: <carbon port>
skip_on_error: True
mode: (pickle|text)
metric_base_pattern: <pattern> | [module].[function].[minion_id]
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:
alternative.carbon:
host: <server IP or hostname>
port: <carbon port>
skip_on_error: True
mode: (pickle|text)
To use the carbon returner, append '--return carbon' to the salt command.
salt '*' test.ping --return carbon
To use the alternative configuration, append '--return_config alternative' to the salt command.
New in version 2015.5.0.
salt '*' test.ping --return carbon --return_config alternative
To override individual configuration items, append --return_kwargs '{"key:": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return carbon --return_kwargs '{"skip_on_error": False}'
- salt.returners.carbon_return.event_return(events)
- Return event data to remote carbon server
Provide a list of events to be stored in carbon
- salt.returners.carbon_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.carbon_return.returner(ret)
- Return data to a remote carbon server using the text metric protocol
Each metric will look like:
[module].[function].[minion_id].[metric path [...]].[metric name]
salt.returners.cassandra_cql_return¶
Return data to a cassandra server
New in version 2015.5.0.
- maintainer
- Corin Kochenower<ckochenower@saltstack.com>
- maturity
- new as of 2015.2
- depends
- salt.modules.cassandra_cql
- depends
- DataStax Python Driver for Apache Cassandra https://github.com/datastax/python-driver pip install cassandra-driver
- platform
- all
- configuration
- To enable this returner, the minion will need the DataStax Python Driver for Apache Cassandra ( https://github.com/datastax/python-driver ) installed and the following values configured in the minion or master config. The list of cluster IPs must include at least one cassandra node IP address. No assumption or default will be used for the cluster IPs. The cluster IPs will be tried in the order listed. The port, username, and password values shown below will be the assumed defaults if you do not provide values:
cassandra:
cluster:
- 192.168.50.11
- 192.168.50.12
- 192.168.50.13
port: 9042
username: salt
password: salt
Use the following cassandra database schema:
CREATE KEYSPACE IF NOT EXISTS salt
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 1};

CREATE USER IF NOT EXISTS salt WITH PASSWORD 'salt' NOSUPERUSER;
GRANT ALL ON KEYSPACE salt TO salt;

USE salt;

CREATE TABLE IF NOT EXISTS salt.salt_returns (
    jid text,
    minion_id text,
    fun text,
    alter_time timestamp,
    full_ret text,
    return text,
    success boolean,
    PRIMARY KEY (jid, minion_id, fun)
) WITH CLUSTERING ORDER BY (minion_id ASC, fun ASC);

CREATE INDEX IF NOT EXISTS salt_returns_minion_id ON salt.salt_returns (minion_id);
CREATE INDEX IF NOT EXISTS salt_returns_fun ON salt.salt_returns (fun);

CREATE TABLE IF NOT EXISTS salt.jids (
    jid text PRIMARY KEY,
    load text
);

CREATE TABLE IF NOT EXISTS salt.minions (
    minion_id text PRIMARY KEY,
    last_fun text
);

CREATE INDEX IF NOT EXISTS minions_last_fun ON salt.minions (last_fun);

CREATE TABLE IF NOT EXISTS salt.salt_events (
    id timeuuid,
    tag text,
    alter_time timestamp,
    data text,
    master_id text,
    PRIMARY KEY (id, tag)
) WITH CLUSTERING ORDER BY (tag ASC);

CREATE INDEX tag ON salt.salt_events (tag);
Required python modules: cassandra-driver
To use the cassandra_cql returner, append '--return cassandra_cql' to the salt command. ex:

salt '*' test.ping --return cassandra_cql
Note: if your Cassandra instance has not been tuned much you may benefit from altering some timeouts in cassandra.yaml like so:
# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 5000
# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 20000
# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 20000
# How long the coordinator should wait for counter writes to complete
counter_write_request_timeout_in_ms: 10000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
cas_contention_timeout_in_ms: 5000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
request_timeout_in_ms: 20000
As always, your mileage may vary and your Cassandra cluster may have different needs. SaltStack has seen situations where these timeouts can resolve some stacktraces that appear to come from the Datastax Python driver.
- salt.returners.cassandra_cql_return.event_return(events)
- Return event to one of potentially many clustered cassandra nodes
Requires that configuration be enabled via 'event_return' option in master config.
Cassandra does not support an auto-increment feature due to the highly inefficient nature of creating a monotonically increasing number across all nodes in a distributed database. Each event will be assigned a uuid by the connecting client.
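A client-generated id of that kind can be sketched with the stdlib; uuid1 is time-based and therefore fits a timeuuid column (this mirrors the idea, not the returner's exact code):

```python
import uuid

# uuid1 embeds a timestamp and node id, so each connecting client can
# mint ids independently that still sort roughly by creation time.
event_id = uuid.uuid1()
```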
- salt.returners.cassandra_cql_return.get_fun(fun)
- Return a dict of the last function called for all minions
- salt.returners.cassandra_cql_return.get_jid(jid)
- Return the information returned when the specified job id was executed
- salt.returners.cassandra_cql_return.get_jids()
- Return a list of all job ids
- salt.returners.cassandra_cql_return.get_load(jid)
- Return the load data that marks a specified jid
- salt.returners.cassandra_cql_return.get_minions()
- Return a list of minions
- salt.returners.cassandra_cql_return.prep_jid(nocache, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.cassandra_cql_return.returner(ret)
- Return data to one of potentially many clustered cassandra nodes
- salt.returners.cassandra_cql_return.save_load(jid, load, minions=None)
- Save the load to the specified jid id
- salt.returners.cassandra_cql_return.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.cassandra_return¶
Return data to a Cassandra ColumnFamily
Here's an example Keyspace / ColumnFamily setup that works with this returner:
create keyspace salt;
use salt;
create column family returns
    with key_validation_class='UTF8Type'
    and comparator='UTF8Type'
    and default_validation_class='UTF8Type';
Required python modules: pycassa
- salt.returners.cassandra_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.cassandra_return.returner(ret)
- Return data to a Cassandra ColumnFamily
salt.returners.couchbase_return¶
Simple returner for Couchbase. Optional configuration settings are listed below, along with sane defaults.
couchbase.host: 'salt'
couchbase.port: 8091
couchbase.bucket: 'salt'
couchbase.ttl: 86400
couchbase.password: 'password'
couchbase.skip_verify_views: False
To use the couchbase returner, append '--return couchbase' to the salt command. ex:
salt '*' test.ping --return couchbase
To use the alternative configuration, append '--return_config alternative' to the salt command.
New in version 2015.5.0.
salt '*' test.ping --return couchbase --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return couchbase --return_kwargs '{"bucket": "another-salt"}'
All of the return data will be stored in documents as follows:
JID¶
load: load obj
tgt_minions: list of minions targeted
nocache: should we not cache the return data
JID/MINION_ID¶
return: return_data
full_ret: full load of job return
- salt.returners.couchbase_return.get_jid(jid)
- Return the information returned when the specified job id was executed
- salt.returners.couchbase_return.get_jids()
- Return a list of all job ids
- salt.returners.couchbase_return.get_load(jid)
- Return the load data that marks a specified jid
- salt.returners.couchbase_return.prep_jid(nocache=False, passed_jid=None)
- Return a job id and prepare the job id directory. This is the function responsible for making sure jids don't collide (unless it is passed a jid), so do what you have to do to make sure that stays the case
- salt.returners.couchbase_return.returner(load)
- Return data to couchbase bucket
- salt.returners.couchbase_return.save_load(jid, clear_load, minion=None)
- Save the load to the specified jid
- salt.returners.couchbase_return.save_minions(jid, minions, syndic_id=None)
- Save/update the minion list for a given jid. The syndic_id argument is included for API compatibility only.
salt.returners.couchdb_return¶
Simple returner for CouchDB. Optional configuration settings are listed below, along with sane defaults:
couchdb.db: 'salt'
couchdb.url: 'http://salt:5984/'
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:
alternative.couchdb.db: 'salt'
alternative.couchdb.url: 'http://salt:5984/'
To use the couchdb returner, append --return couchdb to the salt command. Example:
salt '*' test.ping --return couchdb
To use the alternative configuration, append --return_config alternative to the salt command.
New in version 2015.5.0.
salt '*' test.ping --return couchdb --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return couchdb --return_kwargs '{"db": "another-salt"}'
On concurrent database access¶
As this returner creates a couchdb document with the salt job id as the document id, and only one document with a given id can exist in a given couchdb database, it is advised for most setups that every minion be configured to write to its own database (the value of couchdb.db may be suffixed with the minion id). Otherwise, multi-minion targeting can lead to lost output:
- the first returning minion is able to create a document in the database
- other minions fail with {'error': 'HTTP Error 409: Conflict'}
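The per-minion naming suggested above can be sketched as follows. This is an illustrative helper, not part of the returner's API; the function name and the sanitization rule are assumptions:

```python
# Sketch: derive a per-minion CouchDB database name by suffixing the
# configured couchdb.db value with the minion id, so concurrent returns
# from different minions never collide on the same document id.
# The helper name and the sanitization rule are illustrative only.
def per_minion_db(base_db, minion_id):
    # CouchDB database names must be lowercase and restrict the allowed
    # character set, so replace dots with dashes for illustration.
    safe_id = minion_id.lower().replace(".", "-")
    return "{}_{}".format(base_db, safe_id)
```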
- salt.returners.couchdb_return.ensure_views()
- This function makes sure that all the views that should exist in the design document do exist.
- salt.returners.couchdb_return.get_fun(fun)
- Return a dict with key being minion and value being the job details of the last run of function 'fun'.
- salt.returners.couchdb_return.get_jid(jid)
- Get the document with a given JID.
- salt.returners.couchdb_return.get_jids()
- List all the jobs that we have.
- salt.returners.couchdb_return.get_minions()
- Return a list of minion identifiers from a request of the view.
- salt.returners.couchdb_return.get_valid_salt_views()
- Returns a dict object of views that should be part of the salt design document.
- salt.returners.couchdb_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.couchdb_return.returner(ret)
- Take in the return and shove it into the couchdb database.
- salt.returners.couchdb_return.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
- salt.returners.couchdb_return.set_salt_view()
- Helper function that sets the salt design document. Uses get_valid_salt_views and some hardcoded values.
salt.returners.django_return¶
Deprecated since version 3006.0.
A returner that will inform a Django system that returns are available using Django's signal system.
https://docs.djangoproject.com/en/dev/topics/signals/
It is up to the Django developer to register necessary handlers with the signals provided by this returner and process returns as necessary.
The easiest way to use signals is to import them from this returner directly and then use a decorator to register them.
An example Django module that registers a function called 'returner_callback' with this module's 'returner' function:
import salt.returners.django_return
from django.dispatch import receiver

@receiver(salt.returners.django_return, sender=returner)
def returner_callback(sender, ret):
    print('I received {0} from {1}'.format(ret, sender))
- salt.returners.django_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom ID
- salt.returners.django_return.returner(ret)
- Signal a Django server that a return is available
- salt.returners.django_return.save_load(jid, load, minions=None)
- Save the load to the specified jid
salt.returners.elasticsearch_return¶
Return data to an elasticsearch server for indexing.
- maintainer
- Jurnell Cockhren <jurnell.cockhren@sophicware.com>, Arnold Bechtoldt <mail@arnoldbechtoldt.com>
- maturity
- New
- depends
- elasticsearch-py
- platform
- all
To enable this returner the elasticsearch python client must be installed on the desired minions (all or some subset).
Please see the documentation of the elasticsearch execution module for a valid connection configuration.
To use the returner per salt call:
salt '*' test.ping --return elasticsearch
In order to have the returner apply to all minions:
ext_job_cache: elasticsearch
- Minion configuration:
- debug_returner_payload: False
- Output the payload being posted to the log file in debug mode
- doc_type: 'default'
- Document type to use for normal return messages
- functions_blacklist
- Optional list of functions that should not be returned to elasticsearch
- index_date: False
- Use a dated index (e.g. <index>-2016.11.29)
- master_event_index: 'salt-master-event-cache'
- Index to use when returning master events
- master_event_doc_type: 'default'
- Document type to use for master events
- master_job_cache_index: 'salt-master-job-cache'
- Index to use for master job cache
- master_job_cache_doc_type: 'default'
- Document type to use for master job cache
- number_of_shards: 1
- Number of shards to use for the indexes
- number_of_replicas: 0
- Number of replicas to use for the indexes
NOTE: The following options are valid for 'state.apply', 'state.sls' and 'state.highstate' functions only.
- states_count: False
- Count the number of states which succeeded or failed and return it in top-level item called 'counts'. States reporting None (i.e. changes would be made but it ran in test mode) are counted as successes.
- states_order_output: False
- Prefix the state UID (e.g. file_|-yum_configured_|-/etc/yum.conf_|-managed) with a zero-padded version of the '__run_num__' value to allow for easier sorting. Also store the state function (i.e. file.managed) into a new key '_func'. Change the index to be '<index>-ordered' (e.g. salt-state_apply-ordered).
- states_single_index: False
- Store results for state.apply, state.sls and state.highstate in the salt-state_apply index (or -ordered/-<date>) indexes if enabled
elasticsearch:
hosts:
- "10.10.10.10:9200"
- "10.10.10.11:9200"
- "10.10.10.12:9200"
index_date: True
number_of_shards: 5
number_of_replicas: 1
debug_returner_payload: True
states_count: True
states_order_output: True
states_single_index: True
functions_blacklist:
- test.ping
- saltutil.find_job
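The index_date option shown above appends the current date to the index name (e.g. <index>-2016.11.29). A minimal sketch of that naming scheme, with an illustrative helper name (the real returner builds its index names internally):

```python
from datetime import date

# Sketch: compute a dated Elasticsearch index name as implied by the
# index_date option, e.g. "salt-2016.11.29". The helper name is an
# assumption for illustration only.
def dated_index(base, when=None):
    when = when or date.today()
    return "{}-{}".format(base, when.strftime("%Y.%m.%d"))
```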
- salt.returners.elasticsearch_return.event_return(events)
- Return events to Elasticsearch
Requires that the event_return configuration be set in master config.
- salt.returners.elasticsearch_return.get_load(jid)
- Return the load data that marks a specified jid
New in version 2015.8.1.
- salt.returners.elasticsearch_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.elasticsearch_return.returner(ret)
- Process the return from Salt
- salt.returners.elasticsearch_return.save_load(jid, load, minions=None)
- Save the load to the specified jid
New in version 2015.8.1.
salt.returners.etcd_return¶
Return data to an etcd server or cluster
In order to return to an etcd server, a profile should be created in the master configuration file:
my_etcd_config:
etcd.host: 127.0.0.1
etcd.port: 2379
It is technically possible to configure etcd without using a profile, but this is not considered to be a best practice, especially when multiple etcd servers or clusters are available.
etcd.host: 127.0.0.1
etcd.port: 2379
In order to choose whether to use etcd API v2 or v3, you can put the following configuration option in the same place as your etcd configuration. This option defaults to true, meaning you will use v2 unless you specify otherwise.
etcd.require_v2: True
When using API v3, there are some specific options available to be configured within your etcd profile. They default to the following:
etcd.encode_keys: False
etcd.encode_values: True
etcd.raw_keys: False
etcd.raw_values: False
etcd.unicode_errors: "surrogateescape"
etcd.encode_keys indicates whether you want to pre-encode keys using msgpack before adding them to etcd.
NOTE:
etcd.encode_values indicates whether you want to pre-encode values using msgpack before adding them to etcd. This defaults to True to avoid data loss on non-string values wherever possible.
etcd.raw_keys determines whether you want the raw key or a string returned.
etcd.raw_values determines whether you want the raw value or a string returned.
etcd.unicode_errors determines what policy to follow when there are encoding/decoding errors.
Additionally, two more options must be specified in the top-level configuration in order to use the etcd returner:
etcd.returner: my_etcd_config
etcd.returner_root: /salt/return
The etcd.returner option specifies which configuration profile to use. The etcd.returner_root option specifies the path inside etcd to use as the root of the returner system.
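To illustrate how returner_root acts as a key prefix, here is a hypothetical sketch of joining it with a job id and minion id. The layout shown (a jobs/<jid>/<minion> hierarchy) and the helper name are assumptions for illustration; the actual key schema is internal to the returner:

```python
# Hypothetical sketch: keys written by the etcd returner live under
# etcd.returner_root. A per-job, per-minion key might look like
# <returner_root>/jobs/<jid>/<minion_id>. This layout is illustrative
# only; the real returner defines its own schema.
def job_key(returner_root, jid, minion_id):
    return "/".join([returner_root.rstrip("/"), "jobs", jid, minion_id])
```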
Once the etcd options are configured, the returner may be used:
CLI Example:
salt '*' test.ping --return etcd
A username and password can be set:
etcd.username: larry  # Optional; requires etcd.password to be set
etcd.password: 123pass  # Optional; requires etcd.username to be set
You can also set a TTL (time to live) value for the returner:
etcd.ttl: 5
Authentication with username and password, and ttl, currently requires the master branch of python-etcd.
You may also specify different roles for read and write operations. First, create the profiles as specified above. Then add:
etcd.returner_read_profile: my_etcd_read
etcd.returner_write_profile: my_etcd_write
- salt.returners.etcd_return.clean_old_jobs()
- Included for API consistency
- salt.returners.etcd_return.get_fun(fun)
- Return a dict of the last function called for all minions
- salt.returners.etcd_return.get_jid(jid)
- Return the information returned when the specified job id was executed
- salt.returners.etcd_return.get_jids()
- Return a list of all job ids
- salt.returners.etcd_return.get_load(jid)
- Return the load data that marks a specified jid
- salt.returners.etcd_return.get_minions()
- Return a list of minions
- salt.returners.etcd_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.etcd_return.returner(ret)
- Return data to an etcd server or cluster
- salt.returners.etcd_return.save_load(jid, load, minions=None)
- Save the load to the specified jid
- salt.returners.etcd_return.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.highstate_return¶
Return the results of a highstate (or any other state function that returns data in a compatible format) via an HTML email or HTML file.
New in version 2017.7.0.
Similar results can be achieved by using the smtp returner with a custom template, but writing such a template for the complex data structure returned by the highstate function has proven to be a challenge; moreover, the smtp module doesn't support sending HTML mail at the moment.
The main goal of this returner was to produce an easy to read email similar to the output of highstate outputter used by the CLI.
This returner could be very useful during scheduled executions, but could also be useful for communicating the results of a manual execution.
Returner configuration is controlled in a standard fashion either via highstate group or an alternatively named group.
salt '*' state.highstate --return highstate
To use the alternative configuration, append '--return_config config-name'
salt '*' state.highstate --return highstate --return_config simple
Here is an example of what the configuration might look like:
simple.highstate:
report_failures: True
report_changes: True
report_everything: False
failure_function: pillar.items
success_function: pillar.items
report_format: html
report_delivery: smtp
smtp_success_subject: 'success minion {id} on host {host}'
smtp_failure_subject: 'failure minion {id} on host {host}'
smtp_server: smtp.example.com
smtp_recipients: saltusers@example.com, devops@example.com
smtp_sender: salt@example.com
The report_failures, report_changes, and report_everything flags provide filtering of the results. If you want an email to be sent every time, then report_everything is your choice. If you want to be notified only when changes were successfully made, use report_changes. report_failures will generate an email if there were failures.
The configuration allows you to run a salt module function in case of success (success_function) or failure (failure_function).
Any salt function, including ones defined in the _module folder of your salt repo, could be used here and its output will be displayed under the 'extra' heading of the email.
Supported values for report_format are html, json, and yaml. The latter two are typically used for debugging purposes, but could be used for applying a template at some later stage.
The values for report_delivery are smtp or file. In case of file delivery the only other applicable option is file_output.
In case of smtp delivery, smtp_* options demonstrated by the example above could be used to customize the email.
As you might have noticed, the success and failure subjects contain {id} and {host} values. Any other grain name could be used. As opposed to using {{grains['id']}}, which will be rendered by the master and contain master's values at the time of pillar generation, these will contain minion values at the time of execution.
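The substitution described above is plain Python string formatting over the minion's grains. A sketch of the mechanism, with example grain values:

```python
# Sketch: the smtp_*_subject templates are filled from minion grains at
# execution time, so any grain name may appear in braces. The grain
# values below are examples, not real data.
grains = {"id": "web01", "host": "web01.example.com"}
subject = "success minion {id} on host {host}".format(**grains)
```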
- salt.returners.highstate_return.returner(ret)
- Check highstate return information and possibly fire off an email or save a file.
salt.returners.influxdb_return¶
Return data to an influxdb server.
New in version 2015.8.0.
To enable this returner the minion will need the python client for influxdb installed, and the following values configured in the minion or master config; these are the defaults:
influxdb.db: 'salt'
influxdb.user: 'salt'
influxdb.password: 'salt'
influxdb.host: 'localhost'
influxdb.port: 8086
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:
alternative.influxdb.db: 'salt'
alternative.influxdb.user: 'salt'
alternative.influxdb.password: 'salt'
alternative.influxdb.host: 'localhost'
alternative.influxdb.port: 8086
To use the influxdb returner, append '--return influxdb' to the salt command.
salt '*' test.ping --return influxdb
To use the alternative configuration, append '--return_config alternative' to the salt command.
salt '*' test.ping --return influxdb --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return influxdb --return_kwargs '{"db": "another-salt"}'
- salt.returners.influxdb_return.get_fun(fun)
- Return a dict of the last function called for all minions
- salt.returners.influxdb_return.get_jid(jid)
- Return the information returned when the specified job id was executed
- salt.returners.influxdb_return.get_jids()
- Return a list of all job ids
- salt.returners.influxdb_return.get_load(jid)
- Return the load data that marks a specified jid
- salt.returners.influxdb_return.get_minions()
- Return a list of minions
- salt.returners.influxdb_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.influxdb_return.returner(ret)
- Return data to a influxdb data store
- salt.returners.influxdb_return.save_load(jid, load, minions=None)
- Save the load to the specified jid
- salt.returners.influxdb_return.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.kafka_return¶
Return data to a Kafka topic
- maintainer
- Justin Desilets (justin.desilets@gmail.com)
- maturity
- 20181119
- depends
- confluent-kafka
- platform
- all
To enable this returner install confluent-kafka and enable the following settings in the minion config:
returner.kafka.topic: 'topic'
To use the kafka returner, append --return kafka to the Salt command, e.g.:
salt '*' test.ping --return kafka
- salt.returners.kafka_return.returner(ret)
- Return information to a Kafka server
salt.returners.librato_return¶
Salt returner to return highstate stats to Librato
To enable this returner the minion will need the Librato client importable on the Python path and the following values configured in the minion or master config.
The Librato python client can be found at: https://github.com/librato/python-librato
librato.email: example@librato.com
librato.api_token: abc12345def
This returner supports multi-dimension metrics for Librato. To enable support for more metrics, the tags JSON object can be modified to include other tags.
Adding EC2 tags example: if ec2_tags:region were desired within the tags for multi-dimension metrics, the tags could be modified to include the EC2 tags. Multiple dimensions are added simply by adding more tags to the submission.
pillar_data = __salt__['pillar.raw']()
q.add(metric.name, value, tags={'Name': ret['id'], 'Region': pillar_data['ec2_tags']['Name']})
- salt.returners.librato_return.returner(ret)
- Parse the return data and return metrics to Librato.
salt.returners.local¶
The local returner is used to test the returner interface; it just prints the return data to the console to verify that it is being passed properly.
To use the local returner, append '--return local' to the salt command. ex:
salt '*' test.ping --return local
- salt.returners.local.event_return(event)
- Print event return data to the terminal to verify functionality
- salt.returners.local.returner(ret)
- Print the return data to the terminal to verify functionality
salt.returners.local_cache¶
Return data to local job cache
- salt.returners.local_cache.clean_old_jobs()
- Clean out the old jobs from the job cache
- salt.returners.local_cache.get_endtime(jid)
- Retrieve the stored endtime for a given job
Returns False if no endtime is present
- salt.returners.local_cache.get_jid(jid)
- Return the information returned when the specified job id was executed
- salt.returners.local_cache.get_jids()
- Return a dict mapping all job ids to job information
- salt.returners.local_cache.get_jids_filter(count, filter_find_job=True)
- Return a list of all jobs information filtered by the given criteria. :param int count: show not more than the count of most recent jobs :param bool filter_find_job: filter out 'saltutil.find_job' jobs
- salt.returners.local_cache.get_load(jid)
- Return the load data that marks a specified jid
- salt.returners.local_cache.load_reg()
- Load the register from msgpack files
- salt.returners.local_cache.prep_jid(nocache=False, passed_jid=None, recurse_count=0)
- Return a job id and prepare the job id directory.
This is the function responsible for making sure jids don't collide (unless it is passed a jid). So do what you have to do to make sure that stays the case
- salt.returners.local_cache.returner(load)
- Return data to the local job cache
- salt.returners.local_cache.save_load(jid, clear_load, minions=None, recurse_count=0)
- Save the load to the specified jid
minions argument is to provide a pre-computed list of matched minions for the job, for cases when this function can't compute that list itself (such as for salt-ssh)
- salt.returners.local_cache.save_minions(jid, minions, syndic_id=None)
- Save/update the serialized list of minions for a given job
- salt.returners.local_cache.save_reg(data)
- Save the register to msgpack files
- salt.returners.local_cache.update_endtime(jid, time)
- Update (or store) the end time for a given job
Endtime is stored as a plain text string
salt.returners.mattermost_returner¶
Return salt data via mattermost
New in version 2017.7.0.
The following fields can be set in the minion conf file:
mattermost.hook (required)
mattermost.username (optional)
mattermost.channel (optional)
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:
mattermost.channel
mattermost.hook
mattermost.username
mattermost settings may also be configured as:
mattermost:
channel: RoomName
hook: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
username: user
To use the mattermost returner, append '--return mattermost' to the salt command.
salt '*' test.ping --return mattermost
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
salt '*' test.ping --return mattermost --return_kwargs '{"channel": "#random"}'
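For context on what such a returner posts, a Mattermost incoming webhook accepts a JSON payload with a "text" key plus optional "channel" and "username" keys. The sketch below mirrors that payload shape; the helper name and example values are assumptions, not the returner's actual code:

```python
import json

# Sketch: build the JSON payload a Mattermost incoming webhook expects.
# "text" is required; "channel" and "username" are optional overrides.
# Helper name and values are illustrative only.
def webhook_payload(message, channel=None, username=None):
    payload = {"text": message}
    if channel:
        payload["channel"] = channel
    if username:
        payload["username"] = username
    return json.dumps(payload, sort_keys=True)
```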
- salt.returners.mattermost_returner.event_return(events)
- Send the events to a mattermost room.
- Parameters
- events -- List of events
- Returns
- Boolean if messages were sent successfully.
- salt.returners.mattermost_returner.post_message(channel, message, username, api_url, hook)
- Send a message to a mattermost room.
- channel -- The room name.
- message -- The message to send to the mattermost room.
- username -- Specify who the message is from.
- api_url -- The mattermost api url, if not specified in the configuration.
- hook -- The mattermost hook, if not specified in the configuration.
- Returns
- Boolean if message was sent successfully.
- salt.returners.mattermost_returner.returner(ret)
- Send a mattermost message with the data
salt.returners.memcache_return¶
Return data to a memcache server
To enable this returner the minion will need the python client for memcache installed, and the following values configured in the minion or master config; these are the defaults:
memcache.host: 'localhost'
memcache.port: '11211'
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location.
alternative.memcache.host: 'localhost'
alternative.memcache.port: '11211'
python2-memcache uses 'localhost' and '11211' by default when connecting.
To use the memcache returner, append '--return memcache' to the salt command.
salt '*' test.ping --return memcache
To use the alternative configuration, append '--return_config alternative' to the salt command.
New in version 2015.5.0.
salt '*' test.ping --return memcache --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return memcache --return_kwargs '{"host": "hostname.domain.com"}'
- salt.returners.memcache_return.get_fun(fun)
- Return a dict of the last function called for all minions
- salt.returners.memcache_return.get_jid(jid)
- Return the information returned when the specified job id was executed
- salt.returners.memcache_return.get_jids()
- Return a list of all job ids
- salt.returners.memcache_return.get_load(jid)
- Return the load data that marks a specified jid
- salt.returners.memcache_return.get_minions()
- Return a list of minions
- salt.returners.memcache_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.memcache_return.returner(ret)
- Return data to a memcache data store
- salt.returners.memcache_return.save_load(jid, load, minions=None)
- Save the load to the specified jid
- salt.returners.memcache_return.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.mongo_future_return¶
Return data to a mongodb server
Required python modules: pymongo
This returner will send data from the minions to a MongoDB server. The MongoDB server can be configured by using host, port, db, user and password settings, or by a connection string URI (for pymongo > 2.3). To configure the settings for your MongoDB server, add the following lines to the minion config files:
mongo.db: <database name>
mongo.host: <server ip address>
mongo.user: <MongoDB username>
mongo.password: <MongoDB user password>
mongo.port: 27017
Or single URI:
mongo.uri: URI
where uri is in the format:
mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]
Example:
mongodb://db1.example.net:27017/mydatabase
mongodb://db1.example.net:27017,db2.example.net:2500/?replicaSet=test
mongodb://db1.example.net:27017,db2.example.net:2500/?replicaSet=test&connectTimeoutMS=300000
More information on URI format can be found in https://docs.mongodb.com/manual/reference/connection-string/
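The individual mongo.* settings and the single-URI form carry the same information; assembling a URI from the parts is straightforward. A sketch, with an illustrative helper name (pymongo accepts the URI directly via mongo.uri):

```python
# Sketch: assemble a MongoDB connection-string URI from the individual
# mongo.* settings shown above. Illustrative only; covers a single host
# without the optional ?options query string.
def build_mongo_uri(host, port, db, user=None, password=None):
    auth = "{}:{}@".format(user, password) if user and password else ""
    return "mongodb://{}{}:{}/{}".format(auth, host, port, db)
```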
You can also ask for index creation on the most commonly used fields, which should greatly improve performance. Indexes are not created by default.
mongo.indexes: true
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:
alternative.mongo.db: <database name>
alternative.mongo.host: <server ip address>
alternative.mongo.user: <MongoDB username>
alternative.mongo.password: <MongoDB user password>
alternative.mongo.port: 27017
Or single URI:
alternative.mongo.uri: URI
This mongo returner is being developed to replace the default mongodb returner in the future and should not be considered API stable yet.
To use the mongo returner, append '--return mongo' to the salt command.
salt '*' test.ping --return mongo
To use the alternative configuration, append '--return_config alternative' to the salt command.
New in version 2015.5.0.
salt '*' test.ping --return mongo --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return mongo --return_kwargs '{"db": "another-salt"}'
- salt.returners.mongo_future_return.event_return(events)
- Return events to Mongodb server
- salt.returners.mongo_future_return.get_fun(fun)
- Return the most recent jobs that have executed the named function
- salt.returners.mongo_future_return.get_jid(jid)
- Return the return information associated with a jid
- salt.returners.mongo_future_return.get_jids()
- Return a list of job ids
- salt.returners.mongo_future_return.get_load(jid)
- Return the load associated with a given job id
- salt.returners.mongo_future_return.get_minions()
- Return a list of minions
- salt.returners.mongo_future_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.mongo_future_return.returner(ret)
- Return data to a mongodb server
- salt.returners.mongo_future_return.save_load(jid, load, minions=None)
- Save the load for a given job id
- salt.returners.mongo_future_return.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.mongo_return¶
Return data to a mongodb server
Required python modules: pymongo
This returner will send data from the minions to a MongoDB server. To configure the settings for your MongoDB server, add the following lines to the minion config files.
mongo.db: <database name>
mongo.host: <server ip address>
mongo.user: <MongoDB username>
mongo.password: <MongoDB user password>
mongo.port: 27017
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location.
alternative.mongo.db: <database name>
alternative.mongo.host: <server ip address>
alternative.mongo.user: <MongoDB username>
alternative.mongo.password: <MongoDB user password>
alternative.mongo.port: 27017
To use the mongo returner, append '--return mongo_return' to the salt command.
salt '*' test.ping --return mongo_return
To use the alternative configuration, append '--return_config alternative' to the salt command.
New in version 2015.5.0.
salt '*' test.ping --return mongo_return --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return mongo_return --return_kwargs '{"db": "another-salt"}'
- salt.returners.mongo_return.get_fun(fun)
- Return the most recent jobs that have executed the named function
- salt.returners.mongo_return.get_jid(jid)
- Return the return information associated with a jid
- salt.returners.mongo_return.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.mongo_return.returner(ret)
- Return data to a mongodb server
- salt.returners.mongo_return.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.multi_returner¶
Read/Write multiple returners
- salt.returners.multi_returner.clean_old_jobs()
- Clean out the old jobs from all returners (where supported)
- salt.returners.multi_returner.get_jid(jid)
- Merge the return data from all returners
- salt.returners.multi_returner.get_jids()
- Return all job data from all returners
- salt.returners.multi_returner.get_load(jid)
- Merge the load data from all returners
- salt.returners.multi_returner.prep_jid(nocache=False, passed_jid=None)
- Call both with prep_jid on all returners in multi_returner
TODO: finish this. What to do when you get different jids from two returners? Since our jids are time-based they aren't unique, which makes this problem hard: we have to make sure that no one else got the jid, and if they did, spin to get a new one, which means "locking" the jid across two returners is non-trivial.
- salt.returners.multi_returner.returner(load)
- Write return to all returners in multi_returner
- salt.returners.multi_returner.save_load(jid, clear_load, minions=None)
- Write load to all returners in multi_returner
- salt.returners.multi_returner.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.mysql¶
Return data to a mysql server
- maintainer
- Dave Boucha <dave@saltstack.com>, Seth House <shouse@saltstack.com>
- maturity
- mature
- depends
- python-mysqldb
- platform
- all
To enable this returner, the minion will need the python client for mysql installed and the following values configured in the minion or master config. These are the defaults:
mysql.host: 'salt'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
SSL is optional. The defaults are set to None. If you do not want to use SSL, either exclude these options or set them to None.
mysql.ssl_ca: None
mysql.ssl_cert: None
mysql.ssl_key: None
Alternative configuration values can be used by prefacing the configuration with alternative.. Any values not found in the alternative configuration will be pulled from the default location. As stated above, SSL configuration is optional. The following ssl options are simply for illustration purposes:
alternative.mysql.host: 'salt'
alternative.mysql.user: 'salt'
alternative.mysql.pass: 'salt'
alternative.mysql.db: 'salt'
alternative.mysql.port: 3306
alternative.mysql.ssl_ca: '/etc/pki/mysql/certs/localhost.pem'
alternative.mysql.ssl_cert: '/etc/pki/mysql/certs/localhost.crt'
alternative.mysql.ssl_key: '/etc/pki/mysql/certs/localhost.key'
Should you wish the returner data to be cleaned out every so often, set keep_jobs_seconds to the number of seconds for the jobs to live in the tables. Setting it to 0 will cause the data to stay in the tables. The default is 86400 seconds (24 hours).
Should you wish to archive jobs in a different table for later processing, set archive_jobs to True. Salt will create three archive tables:
- jids_archive
- salt_returns_archive
- salt_events_archive
and move the contents of jids, salt_returns, and salt_events that are more than keep_jobs_seconds seconds old to these tables.
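Put together, a minion- or master-config sketch enabling archiving might look like the following (the connection values are illustrative; keep_jobs_seconds and archive_jobs are the options described above):

```yaml
mysql.host: 'salt'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
keep_jobs_seconds: 86400   # rows older than 24 hours are cleaned up
archive_jobs: True         # move old rows to the *_archive tables instead of deleting
```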
Use the following mysql database schema:
CREATE DATABASE `salt`
  DEFAULT CHARACTER SET utf8
  DEFAULT COLLATE utf8_general_ci;

USE `salt`;

--
-- Table structure for table `jids`
--
DROP TABLE IF EXISTS `jids`;
CREATE TABLE `jids` (
  `jid` varchar(255) NOT NULL,
  `load` mediumtext NOT NULL,
  UNIQUE KEY `jid` (`jid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

--
-- Table structure for table `salt_returns`
--
DROP TABLE IF EXISTS `salt_returns`;
CREATE TABLE `salt_returns` (
  `fun` varchar(50) NOT NULL,
  `jid` varchar(255) NOT NULL,
  `return` mediumtext NOT NULL,
  `id` varchar(255) NOT NULL,
  `success` varchar(10) NOT NULL,
  `full_ret` mediumtext NOT NULL,
  `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  KEY `id` (`id`),
  KEY `jid` (`jid`),
  KEY `fun` (`fun`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

--
-- Table structure for table `salt_events`
--
DROP TABLE IF EXISTS `salt_events`;
CREATE TABLE `salt_events` (
  `id` BIGINT NOT NULL AUTO_INCREMENT,
  `tag` varchar(255) NOT NULL,
  `data` mediumtext NOT NULL,
  `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  `master_id` varchar(255) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Required python modules: MySQLdb
To use the mysql returner, append '--return mysql' to the salt command.
salt '*' test.ping --return mysql
To use the alternative configuration, append '--return_config alternative' to the salt command.
New in version 2015.5.0.
salt '*' test.ping --return mysql --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return mysql --return_kwargs '{"db": "another-salt"}'
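For illustration, the shape of the row this returner writes into salt_returns can be sketched in Python. salt_returns_row is a hypothetical helper, not part of the module; it mirrors the schema above, serializing the return payload to JSON text the way the stored data looks in practice:

```python
import json

# Hypothetical helper: map a Salt job-return dict onto the
# salt_returns columns from the schema above.
def salt_returns_row(ret):
    return (
        ret["fun"],                      # fun
        ret["jid"],                      # jid
        json.dumps(ret["return"]),       # return, stored as JSON text
        ret["id"],                       # id
        str(ret.get("success", False)),  # success is varchar(10) in the schema
        json.dumps(ret),                 # full_ret
    )

# Parameterized statement a MySQLdb cursor could execute with that tuple:
INSERT_SQL = (
    "INSERT INTO salt_returns (fun, jid, `return`, id, success, full_ret) "
    "VALUES (%s, %s, %s, %s, %s, %s)"
)
```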
- salt.returners.mysql.clean_old_jobs()
- Called in the master's event loop every loop_interval. Archives and/or deletes the events and job details from the database.
- salt.returners.mysql.event_return(events)
- Return event to mysql server
Requires that configuration be enabled via 'event_return' option in master config.
- salt.returners.mysql.get_fun(fun)
- Return a dict of the last function called for all minions
- salt.returners.mysql.get_jid(jid)
- Return the information returned when the specified job id was executed
- salt.returners.mysql.get_jids()
- Return a list of all job ids
- salt.returners.mysql.get_jids_filter(count, filter_find_job=True)
- Return a list of all job ids.
:param int count: show not more than the count of most recent jobs
:param bool filter_find_job: filter out 'saltutil.find_job' jobs
- salt.returners.mysql.get_load(jid)
- Return the load data that marks a specified jid
- salt.returners.mysql.get_minions()
- Return a list of minions
- salt.returners.mysql.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.mysql.returner(ret)
- Return data to a mysql server
- salt.returners.mysql.save_load(jid, load, minions=None)
- Save the load to the specified jid id
- salt.returners.mysql.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.nagios_nrdp_return¶
Return salt data to Nagios
The following fields can be set in the minion conf file:
nagios.url (required)
nagios.token (required)
nagios.service (optional)
nagios.check_type (optional)
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:
nagios.url
nagios.token
nagios.service
Nagios settings may also be configured as:
nagios:
  url: http://localhost/nrdp
  token: r4nd0mt0k3n
  service: service-check

alternative.nagios:
  url: http://localhost/nrdp
  token: r4nd0mt0k3n
  service: another-service-check

To use the Nagios returner, append '--return nagios' to the salt command.

salt '*' test.ping --return nagios

To use the alternative configuration, append '--return_config alternative' to the salt command.
salt '*' test.ping --return nagios --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return nagios --return_kwargs '{"service": "service-name"}'
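What the returner submits to the nagios.url endpoint is an NRDP check result. As a rough sketch, the XML payload could be built as follows; the element names follow NRDP's check-result format, while nrdp_payload itself is our own illustrative helper, not part of the module:

```python
import xml.etree.ElementTree as ET

# Hypothetical helper: build an NRDP-style XML check result for one
# host/service pair; state 0 is OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN.
def nrdp_payload(host, service, state, output):
    root = ET.Element("checkresults")
    cr = ET.SubElement(root, "checkresult", type="service")
    ET.SubElement(cr, "hostname").text = host
    ET.SubElement(cr, "servicename").text = service
    ET.SubElement(cr, "state").text = str(state)
    ET.SubElement(cr, "output").text = output
    return ET.tostring(root, encoding="unicode")
```

The real returner POSTs something of this shape to nagios.url along with the nagios.token value.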
- salt.returners.nagios_nrdp_return.returner(ret)
- Send a message to Nagios with the data
salt.returners.odbc¶
Return data to an ODBC compliant server. This driver was developed with Microsoft SQL Server in mind, but theoretically could be used to return data to any compliant ODBC database as long as there is a working ODBC driver for it on your minion platform.
To enable this returner the minion will need:

On Linux:

- unixODBC (http://www.unixodbc.org)
- pyodbc (pip install pyodbc)
- The FreeTDS ODBC driver for SQL Server (http://www.freetds.org)

On Windows:

- TBD
unixODBC and FreeTDS need to be configured via /etc/odbcinst.ini and /etc/odbc.ini.
/etc/odbcinst.ini:
[TDS]
Description=TDS
Driver=/usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
(Note the above Driver line needs to point to the location of the FreeTDS shared library. This example is for Ubuntu 14.04.)
/etc/odbc.ini:
[TS]
Description = "Salt Returner"
Driver = TDS
Server = <your server ip or fqdn>
Port = 1433
Database = salt
Trace = No
You also need the following values configured in the minion or master config. Configure as you see fit:
returner.odbc.dsn: 'TS'
returner.odbc.user: 'salt'
returner.odbc.passwd: 'salt'
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:
alternative.returner.odbc.dsn: 'TS'
alternative.returner.odbc.user: 'salt'
alternative.returner.odbc.passwd: 'salt'
Running the following commands against Microsoft SQL Server in the desired database as the appropriate user should create the database tables correctly. Replace with equivalent SQL for other ODBC-compliant servers.
--
-- Table structure for table 'jids'
--
if OBJECT_ID('dbo.jids', 'U') is not null
DROP TABLE dbo.jids
CREATE TABLE dbo.jids (
jid varchar(255) PRIMARY KEY,
load varchar(MAX) NOT NULL
);
--
-- Table structure for table 'salt_returns'
--
IF OBJECT_ID('dbo.salt_returns', 'U') IS NOT NULL
DROP TABLE dbo.salt_returns;
CREATE TABLE dbo.salt_returns (
added datetime not null default (getdate()),
fun varchar(100) NOT NULL,
jid varchar(255) NOT NULL,
retval varchar(MAX) NOT NULL,
id varchar(255) NOT NULL,
success bit default(0) NOT NULL,
full_ret varchar(MAX)
);
CREATE INDEX salt_returns_added on dbo.salt_returns(added);
CREATE INDEX salt_returns_id on dbo.salt_returns(id);
CREATE INDEX salt_returns_jid on dbo.salt_returns(jid);
CREATE INDEX salt_returns_fun on dbo.salt_returns(fun);

To use this returner, append '--return odbc' to the salt command.

salt '*' status.diskusage --return odbc

To use the alternative configuration, append '--return_config alternative' to the salt command.

New in version 2015.5.0.
salt '*' test.ping --return odbc --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return odbc --return_kwargs '{"dsn": "dsn-name"}'
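As a sketch of how those options come together, pyodbc accepts DSN-style connection strings of the form DSN=...;UID=...;PWD=...; odbc_connection_string below is a hypothetical helper assembling one from the returner.odbc.* values above, not code from the module itself:

```python
# Hypothetical helper: build a pyodbc connection string from the
# returner.odbc.* config values documented above.
def odbc_connection_string(options):
    return "DSN={dsn};UID={user};PWD={passwd}".format(
        dsn=options["returner.odbc.dsn"],
        user=options["returner.odbc.user"],
        passwd=options["returner.odbc.passwd"],
    )

# The returner would then connect roughly like:
#   import pyodbc
#   conn = pyodbc.connect(odbc_connection_string(options))
```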
- salt.returners.odbc.get_fun(fun)
- Return a dict of the last function called for all minions
- salt.returners.odbc.get_jid(jid)
- Return the information returned when the specified job id was executed
- salt.returners.odbc.get_jids()
- Return a list of all job ids
- salt.returners.odbc.get_load(jid)
- Return the load data that marks a specified jid
- salt.returners.odbc.get_minions()
- Return a list of minions
- salt.returners.odbc.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.odbc.returner(ret)
- Return data to an odbc server
- salt.returners.odbc.save_load(jid, load, minions=None)
- Save the load to the specified jid id
- salt.returners.odbc.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.pgjsonb¶
Return data to a PostgreSQL server with json data stored in Pg's jsonb data type
- maintainer
- Dave Boucha <dave@saltstack.com>, Seth House <shouse@saltstack.com>, C. R. Oldham <cr@saltstack.com>
- maturity
- Stable
- depends
- python-psycopg2
- platform
- all
To enable this returner, the minion will need the python client for PostgreSQL installed and the following values configured in the minion or master config. These are the defaults:
returner.pgjsonb.host: 'salt'
returner.pgjsonb.user: 'salt'
returner.pgjsonb.pass: 'salt'
returner.pgjsonb.db: 'salt'
returner.pgjsonb.port: 5432
SSL is optional. The defaults are set to None. If you do not want to use SSL, either exclude these options or set them to None.
returner.pgjsonb.sslmode: None
returner.pgjsonb.sslcert: None
returner.pgjsonb.sslkey: None
returner.pgjsonb.sslrootcert: None
returner.pgjsonb.sslcrl: None
New in version 2017.5.0.
Alternative configuration values can be used by prefacing the configuration with alternative.. Any values not found in the alternative configuration will be pulled from the default location. As stated above, SSL configuration is optional. The following ssl options are simply for illustration purposes:
alternative.pgjsonb.host: 'salt'
alternative.pgjsonb.user: 'salt'
alternative.pgjsonb.pass: 'salt'
alternative.pgjsonb.db: 'salt'
alternative.pgjsonb.port: 5432
alternative.pgjsonb.ssl_ca: '/etc/pki/mysql/certs/localhost.pem'
alternative.pgjsonb.ssl_cert: '/etc/pki/mysql/certs/localhost.crt'
alternative.pgjsonb.ssl_key: '/etc/pki/mysql/certs/localhost.key'
Should you wish the returner data to be cleaned out every so often, set keep_jobs_seconds to the number of seconds for the jobs to live in the tables. Setting it to 0 or leaving it unset will cause the data to stay in the tables.
Should you wish to archive jobs in a different table for later processing, set archive_jobs to True. Salt will create three archive tables:
- jids_archive
- salt_returns_archive
- salt_events_archive
and move the contents of jids, salt_returns, and salt_events that are more than keep_jobs_seconds seconds old to these tables.
New in version 2019.2.0.
Use the following Pg database schema:
CREATE DATABASE salt
  WITH ENCODING 'utf-8';

--
-- Table structure for table `jids`
--
DROP TABLE IF EXISTS jids;
CREATE TABLE jids (
  jid varchar(255) NOT NULL primary key,
  load jsonb NOT NULL
);
CREATE INDEX idx_jids_jsonb on jids
    USING gin (load)
    WITH (fastupdate=on);

--
-- Table structure for table `salt_returns`
--
DROP TABLE IF EXISTS salt_returns;
CREATE TABLE salt_returns (
  fun varchar(50) NOT NULL,
  jid varchar(255) NOT NULL,
  return jsonb NOT NULL,
  id varchar(255) NOT NULL,
  success varchar(10) NOT NULL,
  full_ret jsonb NOT NULL,
  alter_time TIMESTAMP WITH TIME ZONE DEFAULT NOW());

CREATE INDEX idx_salt_returns_id ON salt_returns (id);
CREATE INDEX idx_salt_returns_jid ON salt_returns (jid);
CREATE INDEX idx_salt_returns_fun ON salt_returns (fun);
CREATE INDEX idx_salt_returns_return ON salt_returns
    USING gin (return) with (fastupdate=on);
CREATE INDEX idx_salt_returns_full_ret ON salt_returns
    USING gin (full_ret) with (fastupdate=on);

--
-- Table structure for table `salt_events`
--
DROP TABLE IF EXISTS salt_events;
DROP SEQUENCE IF EXISTS seq_salt_events_id;
CREATE SEQUENCE seq_salt_events_id;
CREATE TABLE salt_events (
  id BIGINT NOT NULL UNIQUE DEFAULT nextval('seq_salt_events_id'),
  tag varchar(255) NOT NULL,
  data jsonb NOT NULL,
  alter_time TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
  master_id varchar(255) NOT NULL);

CREATE INDEX idx_salt_events_tag on salt_events (tag);
CREATE INDEX idx_salt_events_data ON salt_events
    USING gin (data) with (fastupdate=on);
Required python modules: Psycopg2
To use this returner, append '--return pgjsonb' to the salt command.
salt '*' test.ping --return pgjsonb
To use the alternative configuration, append '--return_config alternative' to the salt command.
New in version 2015.5.0.
salt '*' test.ping --return pgjsonb --return_config alternative
To override individual configuration items, append --return_kwargs '{"key": "value"}' to the salt command.
New in version 2016.3.0.
salt '*' test.ping --return pgjsonb --return_kwargs '{"db": "another-salt"}'
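Because return and full_ret are stored as jsonb, PostgreSQL's JSON operators can query inside the payload directly. A couple of illustrative queries against the schema above (the jid value is a placeholder):

```sql
-- All minion returns for one job:
SELECT id, return
FROM salt_returns
WHERE jid = '20990101000000000000';

-- Filter on a field inside the full return document:
SELECT jid, full_ret->>'fun' AS fun
FROM salt_returns
WHERE full_ret->>'success' = 'true';
```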
- salt.returners.pgjsonb.clean_old_jobs()
- Called in the master's event loop every loop_interval. Archives and/or deletes the events and job details from the database.
- salt.returners.pgjsonb.event_return(events)
- Return event to Pg server
Requires that configuration be enabled via 'event_return' option in master config.
- salt.returners.pgjsonb.get_fun(fun)
- Return a dict of the last function called for all minions
- salt.returners.pgjsonb.get_jid(jid)
- Return the information returned when the specified job id was executed
- salt.returners.pgjsonb.get_jids()
- Return a list of all job ids
- salt.returners.pgjsonb.get_load(jid)
- Return the load data that marks a specified jid
- salt.returners.pgjsonb.get_minions()
- Return a list of minions
- salt.returners.pgjsonb.prep_jid(nocache=False, passed_jid=None)
- Do any work necessary to prepare a JID, including sending a custom id
- salt.returners.pgjsonb.returner(ret)
- Return data to a Pg server
- salt.returners.pgjsonb.save_load(jid, load, minions=None)
- Save the load to the specified jid id
- salt.returners.pgjsonb.save_minions(jid, minions, syndic_id=None)
- Included for API consistency
salt.returners.postgres¶
Return data to a postgresql server
- maintainer
- None
- maturity
- New
- depends
- psycopg2
- platform
- all
To enable this returner the minion will need psycopg2 installed and the following values configured in the minion or master config:
returner.postgres.host: 'salt'
returner.postgres.user: 'salt'
returner.postgres.passwd: 'salt'
returner.postgres.db: 'salt'
returner.postgres.port: 5432
Alternative configuration values can be used by prefacing the configuration. Any values not found in the alternative configuration will be pulled from the default location:
alternative.returner.postgres.host: 'salt'
alternative.returner.postgres.user: 'salt'
alternative.returner.postgres.passwd: 'salt'
alternative.returner.postgres.db: 'salt'
alternative.returner.postgres.port: 5432
Running the following commands as the postgres user should create the database correctly:
psql << EOF
CREATE ROLE salt WITH PASSWORD 'salt';
CREATE DATABASE salt WITH OWNER salt;
EOF

psql -h localhost -U salt << EOF
--
-- Table structure for table 'jids'
--
DROP TABLE IF EXISTS jids;
CREATE TABLE jids (
  jid varchar(20) PRIMARY KEY,
  load text NOT NULL
);

--
-- Table structure for table 'salt_returns'
--
DROP TABLE IF EXISTS salt_returns;
CREATE TABLE salt_returns (
  fun varchar(50) NOT NULL,
  jid varchar(255) NOT NULL,
  return text NOT NULL,
  full_ret text,
  id varchar(255) NOT NULL,
  success varchar(10) NOT NULL,
  alter_time TIMESTAMP WITH TIME ZONE DEFAULT now()
);

CREATE INDEX idx_salt_returns_id ON salt_returns (id);
CREATE INDEX idx_salt_returns_jid ON salt_returns (jid);
CREATE INDEX idx_salt_returns_fun ON salt_returns (fun);
CREATE INDEX idx_salt_returns_updated ON salt_returns (alter_time);

--
-- Table structure for table `salt_events`
--
DROP TABLE IF EXISTS salt_events;
DROP SEQUENCE IF EXISTS seq_salt_events_id;
CREATE SEQUENCE seq_salt_events_id;
CREATE TABLE salt_events (
id BIGINT NOT NULL UNIQUE DEFAULT nextval('seq_salt_events