
SPACK(1) Spack SPACK(1)

NAME

spack - Spack Documentation

These are docs for the Spack package manager. For sphere packing, see pyspack.


Spack is a package management tool designed to support multiple versions and configurations of software on a wide variety of platforms and environments. It was designed for large supercomputing centers, where many users and application teams share common installations of software on clusters with exotic architectures, using libraries that do not have a standard ABI. Spack is non-destructive: installing a new version does not break existing installations, so many configurations can coexist on the same system.

Most importantly, Spack is simple. It offers a simple spec syntax so that users can specify versions and configuration options concisely. Spack is also simple for package authors: package files are written in pure Python, and specs allow package authors to maintain a single file for many different builds of the same package.

See the Feature Overview for examples and highlights.

Get Spack from the GitHub repository and install your first package:

$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git
$ cd spack/bin
$ ./spack install libelf


If you're new to Spack and want to start using it, see Getting Started, or refer to the full manual below.

FEATURE OVERVIEW

This is a high-level overview of features that make Spack different from other package managers and port systems.

Simple package installation

Installing the default version of a package is simple. This will install the latest version of the mpileaks package and all of its dependencies:

$ spack install mpileaks


Custom versions & configurations

Spack allows installation to be customized. Users can specify the version, build compiler, compile-time options, and cross-compile platform, all on the command line.

# Install a particular version by appending @
$ spack install mpileaks@1.1.2
# Specify a compiler (and its version), with %
$ spack install mpileaks@1.1.2 %gcc@4.7.3
# Add special compile-time options by name
$ spack install mpileaks@1.1.2 %gcc@4.7.3 debug=True
# Add special boolean compile-time options with +
$ spack install mpileaks@1.1.2 %gcc@4.7.3 +debug
# Add compiler flags using the conventional names
$ spack install mpileaks@1.1.2 %gcc@4.7.3 cppflags="-O3 -floop-block"
# Cross-compile for a different micro-architecture with target=
$ spack install mpileaks@1.1.2 target=icelake


Users can specify as many or as few options as they care about. Spack will fill in the unspecified values with sensible defaults. The two listed syntaxes for variants are identical when the value is boolean.
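
For example, assuming (as in the examples above) that mpileaks has a boolean debug variant, these two commands request the same build:

$ spack install mpileaks +debug
$ spack install mpileaks debug=True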

Customize dependencies

Spack allows dependencies of a particular installation to be customized extensively. Suppose that hdf5 depends on openmpi and indirectly on hwloc. Using ^, users can add custom configurations for the dependencies:

# Install hdf5 and link it with specific versions of openmpi and hwloc
$ spack install hdf5@1.10.1 %gcc@4.7.3 +debug ^openmpi+cuda fabrics=auto ^hwloc+gl


Non-destructive installs

Spack installs every unique package/dependency configuration into its own prefix, so new installs will not break existing ones.

Packages can peacefully coexist

Spack avoids library misconfiguration by using RPATH to link dependencies. When a user links a library or runs a program, it is tied to the dependencies it was built with, so there is no need to manipulate LD_LIBRARY_PATH at runtime.
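
As a quick check (a sketch, assuming zlib is installed and readelf is available on the system), you can inspect the RPATH entries Spack recorded in an installed library:

# List the dynamic-section RPATH/RUNPATH entries of Spack's zlib
$ readelf -d $(spack location -i zlib)/lib/libz.so | grep -i -E 'rpath|runpath'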

Creating packages is easy

To create a new package, all Spack needs is a URL for the source archive. The spack create command will create a boilerplate package file, and the package authors can fill in specific build steps in pure Python.

For example, this command:
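
$ spack create https://ftp.osuosl.org/pub/blfs/conglomeration/libelf/libelf-0.8.13.tar.gz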


creates a simple python file:

from spack.package import *


class Libelf(AutotoolsPackage):
    """FIXME: Put a proper description of your package here."""

    # FIXME: Add a proper url for your package's homepage here.
    homepage = "https://www.example.com"
    url = "https://ftp.osuosl.org/pub/blfs/conglomeration/libelf/libelf-0.8.13.tar.gz"

    # FIXME: Add a list of GitHub accounts to
    # notify when the package is updated.
    # maintainers("github_user1", "github_user2")

    version("0.8.13", sha256="591a9b4ec81c1f2042a97aa60564e0cb79d041c52faa7416acb38bc95bd2c76d")

    # FIXME: Add dependencies if required.
    # depends_on("foo")

    def configure_args(self):
        # FIXME: Add arguments other than --prefix
        # FIXME: If not needed delete this function
        args = []
        return args


It doesn't take much Python coding to get from there to a working package:

from spack.package import *


class Libelf(AutotoolsPackage):
    """libelf lets you read, modify or create ELF object files in an
    architecture-independent way. The library takes care of size
    and endian issues, e.g. you can process a file for SPARC
    processors on an Intel-based system. Note: libelf is no longer
    maintained and packages that depend on libelf should migrate to
    elfutils."""

    # The original homepage no longer exists, but the tar file is
    # archived at fossies.org.
    # homepage = "http://www.mr511.de/software/english.html"
    homepage = "https://directory.fsf.org/wiki/Libelf"
    urls = [
        "https://fossies.org/linux/misc/old/libelf-0.8.13.tar.gz",
        "https://ftp.osuosl.org/pub/blfs/conglomeration/libelf/libelf-0.8.13.tar.gz",
    ]

    version("0.8.13", sha256="591a9b4ec81c1f2042a97aa60564e0cb79d041c52faa7416acb38bc95bd2c76d")

    provides("elf@0")

    # configure: error: neither int nor long is 32-bit
    depends_on("automake", when="platform=darwin", type="build")
    depends_on("autoconf", when="platform=darwin", type="build")
    depends_on("libtool", when="platform=darwin", type="build")
    depends_on("m4", when="platform=darwin", type="build")

    @property
    def force_autoreconf(self):
        return self.spec.satisfies("platform=darwin")

    def configure_args(self):
        args = ["--enable-shared", "--disable-debug"]
        # config.sub: invalid option -apple-darwin21.6.0
        if self.spec.satisfies("platform=darwin target=aarch64:"):
            args.append("--build=aarch64-apple-darwin")
        return args

    def install(self, spec, prefix):
        make("install", parallel=False)

    def flag_handler(self, name, flags):
        if name == "cflags":
            if self.spec.satisfies("%clang@16:"):
                flags.append("-Wno-error=implicit-int")
                flags.append("-Wno-error=implicit-function-declaration")
        return (flags, None, None)


Spack also provides wrapper functions around common commands like configure, make, and cmake to make writing packages simple.

GETTING STARTED

System Prerequisites

Spack has the following minimum system requirements, which are assumed to be present on the machine where Spack is run:

System prerequisites for Spack

Name                 Supported Versions  Notes                                   Requirement Reason
Python               3.6-3.12                                                    Interpreter for Spack
C/C++ Compilers                                                                  Building software
patch                                                                            Build software
tar                                                                              Extract/create archives
gzip                                                                             Compress/Decompress archives
unzip                                                                            Compress/Decompress archives
bzip2                                                                            Compress/Decompress archives
xz                                                                               Compress/Decompress archives
zstd                                     Optional                                Compress/Decompress archives
file                                                                             Create/Use Buildcaches
lsb-release                                                                      Linux: identify operating system version
gnupg2                                                                           Sign/Verify Buildcaches
git                                                                              Manage Software Repositories
svn                                      Optional                                Manage Software Repositories
hg                                       Optional                                Manage Software Repositories
Python header files                      Optional (e.g. python3-dev on Debian)   Bootstrapping from sources

These requirements can be easily installed on most modern Linux systems; on macOS, the Command Line Tools package is required, and a full Xcode suite may be necessary for some packages such as Qt and apple-gl. Spack is designed to run on HPC platforms like Cray. Not all packages should be expected to work on all platforms.

A build matrix showing which packages are working on which systems is shown below.

Installation

Getting Spack is easy. You can clone it from the GitHub repository using this command:

$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git


This will create a directory called spack.

Shell support

Once you have cloned Spack, we recommend sourcing the appropriate script for your shell:

# For bash/zsh/sh
$ . spack/share/spack/setup-env.sh
# For tcsh/csh
$ source spack/share/spack/setup-env.csh
# For fish
$ . spack/share/spack/setup-env.fish


That's it! You're ready to use Spack.

Sourcing these files will put the spack command in your PATH, set up your MODULEPATH to use Spack's packages, and add other useful shell integration for certain commands, environments, and modules. For bash and zsh, it also sets up tab completion.

In order to know which directory to add to your MODULEPATH, these scripts query the spack command. On shared filesystems, this can be a bit slow, especially if you log in frequently. If you don't use modules, or want to set MODULEPATH manually instead, you can set the SPACK_SKIP_MODULES environment variable to skip this step and speed up sourcing the file.
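
For example, assuming a bash-family shell (setting the variable to 1 is illustrative):

$ export SPACK_SKIP_MODULES=1
$ . spack/share/spack/setup-env.sh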

If you do not want to use Spack's shell support, you can always just run the spack command directly from spack/bin/spack.

When the spack command is executed it searches for an appropriate Python interpreter to use, which can be explicitly overridden by setting the SPACK_PYTHON environment variable. When sourcing the appropriate shell setup script, SPACK_PYTHON will be set to the interpreter found at sourcing time, ensuring future invocations of the spack command will continue to use the same consistent python version regardless of changes in the environment.
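
For example, to pin Spack to a particular interpreter before sourcing the setup script (the path here is illustrative):

$ export SPACK_PYTHON=/usr/bin/python3.11
$ . spack/share/spack/setup-env.sh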

Bootstrapping clingo

Spack uses clingo under the hood to resolve optimal versions and variants of dependencies when installing a package. Since clingo itself is a binary, Spack has to install it on initial use, which is called bootstrapping.

Spack provides two ways of bootstrapping clingo: from pre-built binaries (default), or from sources. The fastest way to get started is to bootstrap from pre-built binaries.

The first time you concretize a spec, Spack will bootstrap automatically:

$ spack spec zlib
==> Bootstrapping clingo from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-ba5ijauisd3uuixtmactc36vps7yfsrl.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64/gcc-10.2.1/clingo-bootstrap-spack/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-ba5ijauisd3uuixtmactc36vps7yfsrl.spack
==> Installing "clingo-bootstrap@spack%gcc@10.2.1~docs~ipo+python+static_libstdcpp build_type=Release arch=linux-centos7-x86_64" from a buildcache
==> Bootstrapping patchelf from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.16.1-p72zyan5wrzuabtmzq7isa5mzyh6ahdp.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64/gcc-10.2.1/patchelf-0.16.1/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.16.1-p72zyan5wrzuabtmzq7isa5mzyh6ahdp.spack
==> Installing "patchelf@0.16.1%gcc@10.2.1 ldflags="-static-libstdc++ -static-libgcc"  build_system=autotools arch=linux-centos7-x86_64" from a buildcache
Input spec
--------------------------------
zlib
Concretized
--------------------------------
zlib@1.2.13%gcc@9.4.0+optimize+pic+shared build_system=makefile arch=linux-ubuntu20.04-icelake


If, for security concerns, you cannot bootstrap clingo from pre-built binaries, you have to disable fetching the binaries we generated with GitHub Actions.

$ spack bootstrap disable github-actions-v0.4
==> "github-actions-v0.4" is now disabled and will not be used for bootstrapping
$ spack bootstrap disable github-actions-v0.3
==> "github-actions-v0.3" is now disabled and will not be used for bootstrapping


You can verify that the new settings are effective with:

$ spack bootstrap list
Name: github-actions-v0.5 ENABLED

  Type: buildcache
  Info:
    url: https://mirror.spack.io/bootstrap/github-actions/v0.5
    homepage: https://github.com/spack/spack-bootstrap-mirrors
    releases: https://github.com/spack/spack-bootstrap-mirrors/releases

  Description:
    Buildcache generated from a public workflow using Github Actions.
    The sha256 checksum of binaries is checked before installation.

Name: github-actions-v0.4 ENABLED

  Type: buildcache
  Info:
    url: https://mirror.spack.io/bootstrap/github-actions/v0.4
    homepage: https://github.com/spack/spack-bootstrap-mirrors
    releases: https://github.com/spack/spack-bootstrap-mirrors/releases

  Description:
    Buildcache generated from a public workflow using Github Actions.
    The sha256 checksum of binaries is checked before installation.

Name: spack-install ENABLED

  Type: install
  Info:
    url: https://mirror.spack.io

  Description:
    Specs built from sources downloaded from the Spack public mirror.


NOTE:

When bootstrapping from sources, Spack requires a full install of Python including header files (e.g. python3-dev on Debian), and a compiler with support for C++14 (GCC on Linux, Apple Clang on macOS) and static C++ standard libraries on Linux.


Spack will build the required software on the first request to concretize a spec:

$ spack spec zlib
[+] /usr (external bison-3.0.4-wu5pgjchxzemk5ya2l3ddqug2d7jv6eb)
[+] /usr (external cmake-3.19.4-a4kmcfzxxy45mzku4ipmj5kdiiz5a57b)
[+] /usr (external python-3.6.9-x4fou4iqqlh5ydwddx3pvfcwznfrqztv)
==> Installing re2c-1.2.1-e3x6nxtk3ahgd63ykgy44mpuva6jhtdt
[ ... ]
zlib@1.2.11%gcc@10.1.0+optimize+pic+shared arch=linux-ubuntu18.04-broadwell


The Bootstrap Store

All the tools Spack needs for its own functioning are installed in a separate store, which lives under the ${HOME}/.spack directory. The software installed there can be queried with:

$ spack -b find
-- linux-ubuntu18.04-x86_64 / gcc@10.1.0 ------------------------
clingo-bootstrap@spack  python@3.6.9  re2c@1.2.1


In case it's needed, the bootstrap store can also be cleaned with:

$ spack clean -b
==> Removing bootstrapped software and configuration in "/home/spack/.spack/bootstrap"


Check Installation

With Spack installed, you should be able to run some basic Spack commands. For example:

$ spack spec netcdf-c target=x86_64 os=SUSE


In theory, Spack doesn't need any additional installation; just download and run! But in real life, additional steps are usually required before Spack can work in a practical sense. Read on...

Clean Environment

Many packages' installs can be broken by changing environment variables. For example, a package might pick up the wrong build-time dependencies (most of them not specified) depending on the setting of PATH. GCC seems to be particularly vulnerable to these issues.

Therefore, it is recommended that Spack users run with a clean environment, especially for PATH. Only software that comes with the system, or that you know you wish to use with Spack, should be included. This procedure will avoid many strange build errors.
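
One way to get such a clean environment (a sketch, not the only approach) is to start a fresh shell with a minimal environment and a minimal PATH before sourcing Spack:

$ env -i HOME=$HOME TERM=$TERM bash --noprofile --norc
$ export PATH=/usr/bin:/bin
$ . spack/share/spack/setup-env.sh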

Optional: Alternate Prefix

You may want to run Spack out of a prefix other than the git repository you cloned. The spack clone command provides this functionality. To install spack in a new directory, simply type:

$ spack clone /my/favorite/prefix


This will install a new spack script in /my/favorite/prefix/bin, which you can use just like you would the regular spack script. Each copy of spack installs packages into its own $PREFIX/opt directory.

Compiler configuration

Spack has the ability to build packages with multiple compilers and compiler versions. Compilers can be made available to Spack by specifying them manually in compilers.yaml, or automatically by running spack compiler find, but for convenience Spack will automatically detect compilers the first time it needs them.

spack compilers

You can see which compilers are available to Spack by running spack compilers or spack compiler list:

$ spack compilers
==> Available compilers
-- gcc ---------------------------------------------------------

gcc@4.9.0  gcc@4.8.0  gcc@4.7.0  gcc@4.6.2  gcc@4.4.7
gcc@4.8.2  gcc@4.7.1  gcc@4.6.3  gcc@4.6.1  gcc@4.1.2

-- intel -------------------------------------------------------
intel@15.0.0  intel@14.0.0  intel@13.0.0  intel@12.1.0  intel@10.0
intel@14.0.3  intel@13.1.1  intel@12.1.5  intel@12.0.4  intel@9.1
intel@14.0.2  intel@13.1.0  intel@12.1.3  intel@11.1
intel@14.0.1  intel@13.0.1  intel@12.1.2  intel@10.1

-- clang -------------------------------------------------------
clang@3.4  clang@3.3  clang@3.2  clang@3.1

-- pgi ---------------------------------------------------------
pgi@14.3-0   pgi@13.2-0   pgi@12.1-0   pgi@10.9-0  pgi@8.0-1
pgi@13.10-0  pgi@13.1-1   pgi@11.10-0  pgi@10.2-0  pgi@7.1-3
pgi@13.6-0   pgi@12.8-0   pgi@11.1-0   pgi@9.0-4   pgi@7.0-6


Any of these compilers can be used to build Spack packages. More on how this is done is in Specs & dependencies.

spack compiler add

An alias for spack compiler find.

spack compiler find

Adds compilers to Spack's configuration. If you do not see a compiler in the list shown by spack compilers, but you want to use it with Spack, you can simply run spack compiler find with the path to where the compiler is installed. For example:

$ spack compiler find /usr/local/tools/ic-13.0.079
==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
    intel@13.0.079


Or you can run spack compiler find with no arguments to force auto-detection. This is useful if you do not know where compilers are installed, but you know that new compilers have been added to your PATH. For example, you might load a module, like this:

$ module load gcc/4.9.0
$ spack compiler find
==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
    gcc@4.9.0


This loads the environment module for gcc-4.9.0 to add it to PATH, and then it adds the compiler to Spack.

NOTE:

By default, spack does not fill in the modules: field in the compilers.yaml file. If you are using a compiler from a module, then you should add this field manually. See the section on Compilers Requiring Modules.


spack compiler info

If you want to see specifics on a particular compiler, you can run spack compiler info on it:

$ spack compiler info intel@15
intel@15.0.0:
    paths:
        cc  = /usr/local/bin/icc-15.0.090
        cxx = /usr/local/bin/icpc-15.0.090
        f77 = /usr/local/bin/ifort-15.0.090
        fc  = /usr/local/bin/ifort-15.0.090
    modules = []
    operating_system = centos6
    ...


This shows which C, C++, and Fortran compilers were detected by Spack. Notice also that we didn't have to be too specific about the version. We just said intel@15, and information about the only matching Intel compiler was displayed.

Manual compiler configuration

If auto-detection fails, you can manually configure a compiler by editing your ~/.spack/<platform>/compilers.yaml file. You can do this by running spack config edit compilers, which will open the file in your favorite editor.

Each compiler configuration in the file looks like this:

compilers:
- compiler:
    modules: []
    operating_system: centos6
    paths:
      cc: /usr/local/bin/icc-15.0.024-beta
      cxx: /usr/local/bin/icpc-15.0.024-beta
      f77: /usr/local/bin/ifort-15.0.024-beta
      fc: /usr/local/bin/ifort-15.0.024-beta
    spec: intel@15.0.0


For compilers that do not support Fortran (like clang), put None for f77 and fc:

compilers:
- compiler:
    modules: []
    operating_system: centos6
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: None
      fc: None
    spec: clang@3.3svn


Once you save the file, the configured compilers will show up in the list displayed by spack compilers.

You can also add compiler flags to manually configured compilers. These flags should be specified in the flags section of the compiler specification. The valid flags are cflags, cxxflags, fflags, cppflags, ldflags, and ldlibs. For example:

compilers:
- compiler:
    modules: []
    operating_system: centos6
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags:
      cflags: -O3 -fPIC
      cxxflags: -O3 -fPIC
      cppflags: -O3 -fPIC
    spec: gcc@4.7.2


These flags will be treated by Spack as if they were entered from the command line each time this compiler is used. The compiler wrappers then inject those flags into the compiler command. Compiler flags entered from the command line are discussed in more detail in the following section.
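
For instance, the cflags entry above has the same effect as requesting the flags for a single install on the command line:

$ spack install zlib %gcc@4.7.2 cflags="-O3 -fPIC"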

Some compilers also require additional environment configuration. Examples include Intel's oneAPI and AMD's AOCC compiler suites, which have custom scripts for loading environment variables and setting paths. These variables should be specified in the environment section of the compiler specification. The operations available to modify the environment are set, unset, prepend_path, append_path, and remove_path. For example:

compilers:
- compiler:
    modules: []
    operating_system: centos6
    paths:
      cc: /opt/intel/oneapi/compiler/latest/linux/bin/icx
      cxx: /opt/intel/oneapi/compiler/latest/linux/bin/icpx
      f77: /opt/intel/oneapi/compiler/latest/linux/bin/ifx
      fc: /opt/intel/oneapi/compiler/latest/linux/bin/ifx
    spec: oneapi@latest
    environment:
      set:
        MKL_ROOT: "/path/to/mkl/root"
      unset: # A list of environment variables to unset
      - CC
      prepend_path: # Similar for append|remove_path
        LD_LIBRARY_PATH: /ld/paths/added/by/setvars/sh


Build Your Own Compiler

If you are particular about which compiler/version you use, you might wish to have Spack build it for you. For example:

$ spack install gcc@4.9.3


Once that has finished, you will need to add it to your compilers.yaml file. You can then set Spack to use it by default by adding the following to your packages.yaml file:

packages:
  all:
    compiler: [gcc@4.9.3]


Compilers Requiring Modules

Many installed compilers will work regardless of the environment they are called with. However, some installed compilers require $LD_LIBRARY_PATH or other environment variables to be set in order to run; this is typical for Intel and other proprietary compilers.

In such a case, you should tell Spack which module(s) to load in order to run the chosen compiler (if the compiler does not come with a module file, you might consider making one by hand). Spack will load this module into the environment ONLY when the compiler is run, and NOT in general for a package's install() method. See, for example, this compilers.yaml file:

compilers:
- compiler:
    modules: [other/comp/gcc-5.3-sp3]
    operating_system: SuSE11
    paths:
      cc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gcc
      cxx: /usr/local/other/SLES11.3/gcc/5.3.0/bin/g++
      f77: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
      fc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
    spec: gcc@5.3.0


Some compilers require special environment settings to be loaded not just to run, but also to execute the code they build, breaking packages that need to execute code they just compiled. If it's not possible or practical to use a better compiler, you'll need to ensure that environment settings are preserved for compilers like this (i.e., you'll need to load the module or source the compiler's shell script).

By default, Spack tries to ensure that builds are reproducible by cleaning the environment before building. If this interferes with your compiler settings, you CAN use spack install --dirty as a workaround. Note that this MAY interfere with package builds.

Licensed Compilers

Some proprietary compilers require licensing to use. If you need to use a licensed compiler (e.g., PGI), the process is similar to a mix of Build Your Own Compiler, plus modules:

1. Create a Spack package (if it doesn't exist already) to install your compiler. Follow the instructions on installing Licensed software.
2. Once the compiler is installed, you should be able to test it by using Spack to load the module it just created, and running simple builds (e.g., cc helloWorld.c && ./a.out).
3. Add the newly-installed compiler to compilers.yaml as shown above.

Mixed Toolchains

Modern compilers typically come with related compilers for C, C++ and Fortran bundled together. When possible, results are best if the same compiler is used for all languages.

In some cases, this is not possible. For example, starting with macOS El Capitan (10.11), many packages no longer build with GCC, but Xcode provides no Fortran compilers. The user is therefore forced to use a mixed toolchain: Xcode-provided Clang for C/C++ and GNU gfortran for Fortran.

1. You need to make sure that Xcode is installed. Run the following command:

$ xcode-select --install


If the Xcode command-line tools are already installed, you will see an error message:

xcode-select: error: command line tools are already installed, use "Software Update" to install updates


2. For most packages, the Xcode command-line tools are sufficient. However, some packages like qt require the full Xcode suite. You can check to see which you have installed by running:

$ xcode-select -p


If the output is:

/Applications/Xcode.app/Contents/Developer


you already have the full Xcode suite installed. If the output is:

/Library/Developer/CommandLineTools


you only have the command-line tools installed. The full Xcode suite can be installed through the App Store. Make sure you launch the Xcode application and accept the license agreement before using Spack. It may ask you to install additional components. Alternatively, the license can be accepted through the command line:

$ sudo xcodebuild -license accept


Note: the flag is -license, not --license.

3. Run spack compiler find to locate Clang.

4. There are different ways to get gfortran on macOS. For example, you can install GCC with Spack (spack install gcc), with Homebrew (brew install gcc), or from a DMG installer.

5. The only thing left to do is to edit ~/.spack/darwin/compilers.yaml to provide the path to gfortran:

compilers:
- compiler:
    ...
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /path/to/bin/gfortran
      fc: /path/to/bin/gfortran
    spec: apple-clang@11.0.0


If you used Spack to install GCC, you can get the installation prefix with spack location -i gcc (this will only work if you have a single version of GCC installed). With Homebrew, GCC is installed in /usr/local/Cellar/gcc/x.y.z; with the DMG installer, the correct path will be /usr/local/gfortran.


Compiler Verification

You can verify that your compilers are configured properly by installing a simple package. For example:

$ spack install zlib%gcc@5.3.0


Vendor-Specific Compiler Configuration

With Spack, things usually "just work" with GCC. Not so for other compilers. This section provides details on how to get specific compilers working.

Intel Compilers

Intel compilers are unusual because a single Intel compiler version can emulate multiple GCC versions. In order to provide this functionality, the Intel compiler needs GCC to be installed. Therefore, the following steps are necessary to successfully use Intel compilers:

1. Install a version of GCC that implements the desired language features (spack install gcc).

2. Tell the Intel compiler how to find that desired GCC. This may be done in one of two ways:

   "By default, the compiler determines which version of gcc or g++ you have installed from the PATH environment variable.

   If you want to use a version of gcc or g++ other than the default version on your system, you need to use either the -gcc-name or -gxx-name compiler option to specify the path to the version of gcc or g++ that you want to use." — Intel Reference Guide




Intel compilers may therefore be configured in one of two ways with Spack: using modules, or using compiler flags.

Configuration with Modules

One can control which GCC is seen by the Intel compiler with modules. A module must be loaded both for the Intel Compiler (so it will run) and GCC (so the compiler can find the intended GCC). The following configuration in compilers.yaml illustrates this technique:

compilers:
- compiler:
    modules: [gcc-4.9.3, intel-15.0.24]
    operating_system: centos7
    paths:
      cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
      cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
      f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
      fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
    spec: intel@15.0.24.4.9.3


NOTE:

The version number on the Intel compiler is a combination of the "native" Intel version number and the GNU compiler it is targeting.


Command Line Configuration

One can also control which GCC is seen by the Intel compiler by adding flags to the icc command:

1. Identify the location of the compiler you just installed:

$ spack location --install-dir gcc
~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw...


2. Set up compilers.yaml, for example:

compilers:
- compiler:
    modules: [intel-15.0.24]
    operating_system: centos7
    paths:
      cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
      cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
      f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
      fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
    flags:
      cflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
      cxxflags: -gxx-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/g++
      fflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
    spec: intel@15.0.24.4.9.3



PGI

PGI comes with two sets of compilers for C++ and Fortran, distinguishable by their names. "Old" compilers:

cc:  /soft/pgi/15.10/linux86-64/15.10/bin/pgcc
cxx: /soft/pgi/15.10/linux86-64/15.10/bin/pgCC
f77: /soft/pgi/15.10/linux86-64/15.10/bin/pgf77
fc:  /soft/pgi/15.10/linux86-64/15.10/bin/pgf90


"New" compilers:

cc:  /soft/pgi/15.10/linux86-64/15.10/bin/pgcc
cxx: /soft/pgi/15.10/linux86-64/15.10/bin/pgc++
f77: /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran
fc:  /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran


Older installations of PGI contain just the old compilers; newer installations contain both the old and the new. The new compilers are considered preferable, as some packages (hdf) will not build with the old compilers.

When auto-detecting a PGI compiler, there are cases where Spack will find the old compilers when you really want it to find the new ones. It is best to check compilers.yaml, and if the old compilers are being used, change pgf77 and pgf90 to pgfortran.

Other issues:

There are reports that some packages will not build with PGI, including libpciaccess and openssl. A workaround is to build these packages with another compiler and then use them as dependencies for PGI-built packages. For example:

$ spack install openmpi%pgi ^libpciaccess%gcc


PGI requires a license to use; see Licensed Compilers for more information on installation.

NOTE:

It is believed the problem with HDF 4 is that everything is compiled with the F77 compiler, but at some point some Fortran 90 code slipped in there. So compilers that can handle both FORTRAN 77 and Fortran 90 (gfortran, pgfortran, etc) are fine. But compilers specific to one or the other (pgf77, pgf90) won't work.


NAG

The Numerical Algorithms Group provides a licensed Fortran compiler. Like Clang, it requires you to set up a mixed toolchain (see Mixed Toolchains); it is recommended to use GCC for your C/C++ compilers.

The NAG Fortran compilers are a bit more strict than other compilers, and many packages will fail to install with error messages like:

Error: mpi_comm_spawn_multiple_f90.f90: Argument 3 to MPI_COMM_SPAWN_MULTIPLE has data type DOUBLE PRECISION in reference from MPI_COMM_SPAWN_MULTIPLEN and CHARACTER in reference from MPI_COMM_SPAWN_MULTIPLEA


In order to convince the NAG compiler not to be too picky about calling conventions, you can use FFLAGS=-mismatch and FCFLAGS=-mismatch. This can be done through the command line:

$ spack install openmpi fflags="-mismatch"


Or it can be set permanently in your compilers.yaml:

- compiler:
    modules: []
    operating_system: centos6
    paths:
      cc: /soft/spack/opt/spack/linux-x86_64/gcc-5.3.0/gcc-6.1.0-q2zosj3igepi3pjnqt74bwazmptr5gpj/bin/gcc
      cxx: /soft/spack/opt/spack/linux-x86_64/gcc-5.3.0/gcc-6.1.0-q2zosj3igepi3pjnqt74bwazmptr5gpj/bin/g++
      f77: /soft/spack/opt/spack/linux-x86_64/gcc-4.4.7/nag-6.1-jt3h5hwt5myezgqguhfsan52zcskqene/bin/nagfor
      fc: /soft/spack/opt/spack/linux-x86_64/gcc-4.4.7/nag-6.1-jt3h5hwt5myezgqguhfsan52zcskqene/bin/nagfor
    flags:
      fflags: -mismatch
    spec: nag@6.1


System Packages

Once compilers are configured, one needs to determine which pre-installed system packages, if any, to use in builds. This is configured in the file ~/.spack/packages.yaml. For example, to use an OpenMPI installed in /opt/local, one would use:

packages:
  openmpi:
    externals:
    - spec: openmpi@1.10.1
      prefix: /opt/local
    buildable: False


In general, Spack is easier to use and more reliable if it builds all of its own dependencies. However, there are several packages for which one commonly needs to use system versions:

MPI

On supercomputers, sysadmins have already built MPI versions that take into account the specifics of that computer's hardware. Unless you know how they were built and can choose the correct Spack variants, you are unlikely to get a working MPI from Spack. Instead, use an appropriate pre-installed MPI.

If you choose a pre-installed MPI, you should consider using the pre-installed compiler used to build that MPI; see above on compilers.yaml.

OpenSSL

The openssl package underlies much of modern security in a modern OS; an attacker can easily "pwn" any computer on which they can modify SSL. Therefore, any openssl used on a system should be created in a "trusted environment" --- for example, that of the OS vendor.

OpenSSL is also updated by the OS vendor from time to time, in response to security problems discovered in the wider community. It is in everyone's best interest to use any newly updated versions as soon as they come out. Modern Linux installations have standard procedures for security updates without user involvement.

Spack running at user-level is not a trusted environment, nor do Spack users generally keep up-to-date on the latest security holes in SSL. For these reasons, a Spack-installed OpenSSL should likely not be trusted.

As long as the system-provided SSL works, you can use it instead. One can check if it works by trying to download an https:// URL. For example:
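
# Any https:// URL will do; this one is illustrative
$ curl -LO https://raw.githubusercontent.com/spack/spack/develop/README.md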


To tell Spack to use the system-supplied OpenSSL, first determine what version you have:

$ openssl version
OpenSSL 1.0.2g  1 Mar 2016


Then add the following to ~/.spack/packages.yaml:

packages:
  openssl:
    externals:
    - spec: openssl@1.0.2g
      prefix: /usr
    buildable: False


BLAS / LAPACK

The recommended way to use system-supplied BLAS / LAPACK packages is to add the following to packages.yaml:

packages:
  netlib-lapack:
    externals:
    - spec: netlib-lapack@3.6.1
      prefix: /usr
    buildable: False
  all:
    providers:
      blas: [netlib-lapack]
      lapack: [netlib-lapack]


NOTE:

Above we pretend that the system-provided BLAS / LAPACK is netlib-lapack only because it is the only BLAS / LAPACK provider which uses standard names for libraries (as opposed to, for example, libopenblas.so).

Although we specify the external package in /usr, Spack is smart enough not to add /usr/lib to RPATHs, where it could cause unrelated system libraries to be used instead of their Spack equivalents. /usr/bin will be present in PATH, but it will have lower precedence than paths from other dependencies. This ensures that binaries from Spack dependencies are preferred over system binaries.



Git

Some Spack packages use git to download, which might not work on some computers. For example, the following error was encountered on a Macintosh during spack install julia@master:

==> Cloning git repository:
      https://github.com/JuliaLang/julia.git
      on branch master
Cloning into 'julia'...
fatal: unable to access 'https://github.com/JuliaLang/julia.git/':
SSL certificate problem: unable to get local issuer certificate


This problem is related to OpenSSL, and in some cases might be solved by installing a new version of git and openssl:

1. Run spack install git.
2. Add the output of spack module tcl loads git to your .bashrc.

If this doesn't work, it is also possible to disable checking of SSL certificates by using:

$ spack --insecure install


Using --insecure makes Spack disable SSL checking when fetching from websites and from git.

WARNING:

This workaround should be used ONLY as a last resort! Without SSL certificate verification, spack and git will download from sites you wouldn't normally trust. The code you download and run may then be compromised! While this is not a major issue for archives that will be checksummed, it is especially problematic when downloading from named Git branches or tags, which relies entirely on trusting a certificate for security (no verification).




Utilities Configuration

Although Spack does not need installation per se, it does rely on other packages being available on its host system. If those packages are out of date or missing, Spack will not work. Sometimes an appeal to the system's package manager can fix such problems. If not, the solution is to have Spack install the required packages, and then have Spack use them.

For example, if curl doesn't work, one could use the following steps to provide Spack a working curl:

$ spack install curl
$ spack load curl


or alternately:

$ spack module tcl loads curl >> ~/.bashrc


or if environment modules don't work:

$ export PATH=`spack location --install-dir curl`/bin:$PATH


External commands are used by Spack in two places: within core Spack, and in the package recipes. The bootstrapping procedure for these two cases is somewhat different, and is treated separately below.

Core Spack Utilities

Core Spack uses the following packages, mainly to download and unpack source code: curl, env, git, go, hg, svn, tar, unzip, patch

As long as the user's environment is set up to successfully run these programs from outside of Spack, they should work inside of Spack as well. They can generally be activated as in the curl example above; or some systems might already have an appropriate hand-built environment module that may be loaded. Either way works.

A few notes on specific programs in this list:

cURL, git, Mercurial, etc.

Spack depends on cURL to download tarballs, the format that most Spack-installed packages come in. Your system's cURL should always be able to download unencrypted http://. However, the cURL on some systems has problems with SSL-enabled https:// URLs, due to outdated / insecure versions of OpenSSL on those systems. This will prevent Spack from installing any software requiring https:// until a new cURL has been installed, using the technique above.

WARNING:

Remember that if you install curl via Spack, it may rely on a user-space OpenSSL that is not upgraded regularly. It may fall out of date faster than your system OpenSSL.


Some packages use source code control systems as their download method: git, hg, svn and occasionally go. If you had to install a new curl, then chances are the system-supplied version of these other programs will also not work, because they also rely on OpenSSL. Once curl has been installed, you can similarly install the others.

Package Utilities

Spack may also encounter bootstrapping problems inside a package's install() method. In this case, Spack will normally be running inside a sanitized build environment. This includes all of the package's dependencies, but none of the environment Spack inherited from the user: if you load a module or modify $PATH before launching Spack, it will have no effect.

In this case, you will likely need to use the --dirty flag when running spack install, causing Spack to not sanitize the build environment. You are now responsible for making sure that environment does not do strange things to Spack or its installs.

Another way to get Spack to use its own version of something is to add that something to a package that needs it. For example:

depends_on('binutils', type='build')


This is considered best practice for some common build dependencies, such as autotools (if the autoreconf command is needed) and cmake --- cmake especially, because different packages require different versions of CMake.

binutils

Sometimes, strange error messages can happen while building a package. For example, ld might crash. Or one receives a message like:

ld: final link failed: Nonrepresentable section on output


or:

ld: .../_fftpackmodule.o: unrecognized relocation (0x2a) in section `.text'


These problems are often caused by an outdated binutils on your system. Unlike CMake or Autotools, adding depends_on('binutils') to every package is not considered best practice, because every package written in C/C++/Fortran would need it. A potential workaround is to load a recent binutils into your environment and use the --dirty flag, as sketched below.
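
A minimal sketch of that workaround (the final package name is a placeholder):

$ spack install binutils
$ spack load binutils
$ spack install --dirty <package>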

GPG Signing

spack gpg

Spack has support for signing and verifying packages using GPG keys. A separate keyring is used for Spack, so any keys available in the user's home directory are not used.

spack gpg init

When Spack is first installed, its keyring is empty. Keys stored in var/spack/gpg are the default keys for a Spack installation. These keys may be imported by running spack gpg init. This will import the default keys into the keyring as trusted keys.

Trusting keys

Additional keys may be added to the keyring using spack gpg trust <keyfile>. Once a key is trusted, packages signed by the owner of that key may be installed.

Creating keys

You may also create your own key so that you may sign your own packages using spack gpg create <name> <email>. By default, the key has no expiration, but it may be set with the --expires <date> flag (see the gnupg2 documentation for accepted date formats). It is also recommended to add a comment as to the use of the key using the --comment <comment> flag. The public half of the key can also be exported for sharing with others so that they may use packages you have signed using the --export <keyfile> flag. Secret keys may also be later exported using the spack gpg export <location> [<key>...] command.

NOTE:

The creation of a new GPG key requires generating a lot of random numbers. Depending on the entropy produced on your system, the entire process may take a long time (even appearing to hang). Virtual machines and cloud instances are particularly likely to display this behavior.

To speed it up you may install tools like rngd, which is usually available as a package in the host OS. On an Ubuntu machine, for example, you need to give the following commands:

$ sudo apt-get install rng-tools
$ sudo rngd -r /dev/urandom


before generating the keys.

Another alternative is haveged, which can be installed on RHEL/CentOS machines as follows:

$ sudo yum install haveged
$ sudo chkconfig haveged on


This Digital Ocean tutorial provides a good overview of sources of randomness.




Here is an example of creating a key. Note that we provide a name for the key first (which we can use to reference the key later) and an email address:

$ spack gpg create dinosaur dinosaur@thedinosaurthings.com


If you want to export the key as you create it:

$ spack gpg create --export key.pub dinosaur dinosaur@thedinosaurthings.com


Or the private key:

$ spack gpg create --export-secret key.priv dinosaur dinosaur@thedinosaurthings.com


You can include both --export and --export-secret, each with an output file of choice, to export both.

Listing keys

The spack gpg list command lists the keys available in the keyring: trusted keys with the --trusted flag, and keys available for signing with --signing. If you would like to remove keys from your keyring, use spack gpg untrust <keyid>. Key IDs can be email addresses, names, or (best) fingerprints. Here is an example of listing the key that we just created:

$ spack gpg list
gpgconf: socketdir is '/run/user/1000/gnupg'
/home/spackuser/spack/opt/spack/gpg/pubring.kbx
----------------------------------------------------------
pub   rsa4096 2021-03-25 [SC]
      60D2685DAB647AD4DB54125961E09BB6F2A0ADCB
uid           [ultimate] dinosaur (GPG created for Spack) <dinosaur@thedinosaurthings.com>


Note that the name "dinosaur" can be seen under the uid, which is the unique id. We might need this reference if we want to export or otherwise reference the key.

Signing and Verifying Packages

In order to sign a package, spack gpg sign <file> should be used. By default, the signature will be written to <file>.asc, but that may be changed by using the --output <file> flag. If there is only one signing key available, it will be used, but if there is more than one, the key to use must be specified using the --key <keyid> flag. The --clearsign flag may also be used to create a signed file which contains the contents, but it is not recommended. Signed packages may be verified by using spack gpg verify <file>.
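
For example (the file name is illustrative; --key is only needed when more than one signing key is available):

$ spack gpg sign --key dinosaur archive.tar.gz   # writes archive.tar.gz.asc
$ spack gpg verify archive.tar.gz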

Exporting Keys

You will likely want to export a public key at some point, and that looks like this. Let's use the previous example and ask Spack to export the key with uid "dinosaur". We provide an output location (typically a *.pub file) and the name of the key.

$ spack gpg export dinosaur.pub dinosaur


You can then look at the created file, dinosaur.pub, to see the exported key. If you want to include the private key, then just add --secret:

$ spack gpg export --secret dinosaur.priv dinosaur


This will write the private key to the file dinosaur.priv.

WARNING:

You should be very careful about exporting private keys. You likely would only want to do this in the context of moving your Spack installation to a different server and wanting to preserve keys for a buildcache. If you are unsure about exporting, you can ask your local system administrator, or ask for help in a GitHub issue or on the Spack Slack.


Spack on Cray

Spack differs slightly when used on a Cray system. The architecture spec can differentiate between the front-end and back-end processor and operating system. For example, on Edison at NERSC, the back-end target processor is "Ivy Bridge", so you can specify to use the back-end this way:

$ spack install zlib target=ivybridge


You can also use the operating system to build against the back-end:

$ spack install zlib os=CNL10


Notice that the name includes both the operating system name and the major version number concatenated together.

Alternatively, if you want to build something for the front-end, you can specify the front-end target processor. The processor for a login node on Edison is "Sandy Bridge", so we specify it on the command line like so:

$ spack install zlib target=sandybridge


And the front-end operating system is:

$ spack install zlib os=SuSE11


Cray compiler detection

Spack can detect compilers using two methods. For the front-end, we treat everything the same. The difference lies in back-end compiler detection, which is done via the Tcl module avail command. Once Spack detects the compiler, it writes the appropriate PrgEnv and compiler module name to compilers.yaml and sets the paths to each compiler with Cray's compiler wrapper names (i.e. cc, CC, ftn). During build time, Spack will load the correct PrgEnv and compiler module and will call the appropriate wrapper.

The compilers.yaml config file will also differ. There is a modules section that is filled with the compiler's Programming Environment and module name. On other systems, this field is empty []:

- compiler:
    modules:
    - PrgEnv-intel
    - intel/15.0.109


As mentioned earlier, the compiler paths will look different on a Cray system. Since most compilers are invoked using cc, CC and ftn, the paths for each compiler are replaced with their respective Cray compiler wrapper names:

paths:
  cc: cc
  cxx: CC
  f77: ftn
  fc: ftn


These wrapper names are used in place of an explicit path to the compiler executable. This allows Spack to call the Cray compiler wrappers during build time.

For more on compiler configuration, check out Compiler configuration.

Spack sets the default Cray link type to dynamic, to better match other platforms. Individual packages can enable static linking (which is the default outside of Spack on Cray systems) using the -static flag.

Setting defaults and using Cray modules

If you want to use the default compilers for each PrgEnv and also be able to load Cray external modules, you will need to set up a packages.yaml.

Here's an example of an external configuration for cray modules:

packages:
  mpich:
    externals:
    - spec: "mpich@7.3.1%gcc@5.2.0 arch=cray_xc-haswell-CNL10"
      modules:
      - cray-mpich
    - spec: "mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-haswell-CNL10"
      modules:
      - cray-mpich
  all:
    providers:
      mpi: [mpich]


This tells Spack to load the cray-mpich module into the environment for any package that depends on mpi. You can then use whatever environment variables, libraries, etc., that the module load brings into the environment.

NOTE:

For Cray-provided packages, it is best to use modules: instead of prefix: in packages.yaml, because the Cray Programming Environment heavily relies on modules (e.g., loading the cray-mpich module adds MPI libraries to the compiler wrapper link line).


You can set the default compiler that Spack can use for each compiler type. If you want to use the Cray defaults, then set them under all: in packages.yaml. In the compiler field, set the compiler specs in your order of preference. Whenever you build with that compiler type, Spack will concretize to that version.

Here is an example of a full packages.yaml used at NERSC:

packages:
  mpich:
    externals:
    - spec: "mpich@7.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
      modules:
      - cray-mpich
    - spec: "mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-SuSE11-ivybridge"
      modules:
      - cray-mpich
    buildable: False
  netcdf:
    externals:
    - spec: "netcdf@4.3.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
      modules:
      - cray-netcdf
    - spec: "netcdf@4.3.3.1%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge"
      modules:
      - cray-netcdf
    buildable: False
  hdf5:
    externals:
    - spec: "hdf5@1.8.14%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
      modules:
      - cray-hdf5
    - spec: "hdf5@1.8.14%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge"
      modules:
      - cray-hdf5
    buildable: False
  all:
    compiler: [gcc@5.2.0, intel@16.0.0.109]
    providers:
      mpi: [mpich]


Here we tell Spack that whenever we want to build with gcc, use version 5.2.0, and whenever we want to build with Intel compilers, use version 16.0.0.109. We add a spec for each compiler type for each of the Cray modules. This ensures that for each compiler on our system we can use the corresponding external module.

For more on external packages check out the section External Packages.

Using Linux containers on Cray machines

Spack uses environment variables particular to the Cray programming environment to determine which systems are Cray platforms. These environment variables may be propagated into containers that are not using the Cray programming environment.

To ensure that Spack does not autodetect the Cray programming environment, unset the environment variable MODULEPATH. This will cause Spack to treat a linux container on a Cray system as a base linux distro.
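
For example, inside the container, before invoking Spack:

$ unset MODULEPATH
$ spack spec zlib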

Spack On Windows

Windows support for Spack is currently under development. While this work is still in an early stage, it is currently possible to set up Spack and perform a few operations on Windows. This section will guide you through the steps needed to install Spack and start running it on a fresh Windows machine.

Step 1: Install prerequisites

To use Spack on Windows, you will need the following packages:

Required:

* Microsoft Visual Studio
* Python
* Git

Optional:

* Intel Fortran (needed for some packages)

NOTE:

Currently MSVC is the only compiler tested for C/C++ projects. Intel OneAPI provides Fortran support.


Microsoft Visual Studio

Microsoft Visual Studio provides the only Windows C/C++ compiler that is currently supported by Spack.

We require several specific components to be included in the Visual Studio installation. One is the C/C++ toolset, which can be selected as "Desktop development with C++" or "C++ build tools," depending on installation type (Professional, Build Tools, etc.) The other required component is "C++ CMake tools for Windows," which can be selected from among the optional packages. This provides CMake and Ninja for use during Spack configuration.

If you already have Visual Studio installed, you can make sure these components are installed by rerunning the installer. Next to your installation, select "Modify" and look at the "Installation details" pane on the right.

Intel Fortran

For Fortran-based packages on Windows, we strongly recommend Intel's oneAPI Fortran compilers. The suite is free to download from Intel's website, located at https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/fortran-compiler.html. The executable of choice for Spack will be Intel's Beta Compiler, ifx, which supports the classic compiler's (ifort's) frontend and runtime libraries by using LLVM.

Python

As Spack is a Python-based package, an installation of Python will be needed to run it. Python 3 can be downloaded and installed from the Windows Store, and will be automatically added to your PATH in this case.

NOTE:

Spack currently supports Python versions 3.2 and later.


Git

A bash console and GUI can be downloaded from https://git-scm.com/downloads. If you are unfamiliar with Git, there are a myriad of resources online to help guide you through checking out repositories and switching development branches.

When given the option of adjusting your PATH, choose the "Git from the command line and also from 3rd-party software" option. This will automatically update your PATH variable to include the git command.

Spack support on Windows currently depends on the Git for Windows project as the provider of Git on Windows. This is also the recommended method for installing Git on Windows (see the link above). Spack requires the utilities vendored by this project.


Step 2: Install and setup Spack

We are now ready to get the Spack environment set up on our machine. We begin by using Git to clone the Spack repo, hosted at https://github.com/spack/spack.git, into a desired directory; for our purposes today, we call it spack_install.

In order to install Spack with Windows support, run the following one-liner in a Windows CMD prompt:
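
git clone https://github.com/spack/spack.git spack_install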


NOTE:

If you chose to install Spack into a directory on Windows that requires administrative privileges, Spack will require elevated privileges to run. Such directories include those that require administrative privileges by default, such as C:\Program Files, and those on which an administrator has applied restrictions, such as C:\Users.


Step 3: Run and configure Spack

To use Spack, run bin\spack_cmd.bat (you may need to Run as Administrator) from the top-level spack directory. This will provide a Windows command prompt with an environment properly set up with Spack and its prerequisites. If you receive a warning message that Python is not in your PATH (which may happen if you installed Python from the website and not the Windows Store) add the location of the Python executable to your PATH now. You can permanently add Python to your PATH variable by using the Edit the system environment variables utility in Windows Control Panel.

NOTE:

Alternatively, Powershell can be used in place of CMD


To configure Spack, first run the following command inside the Spack console:

spack compiler find


This creates a .staging directory in our Spack prefix, along with a windows subdirectory containing a compilers.yaml file. On a fresh Windows install with the above packages installed, this command should detect only Microsoft Visual Studio; the Intel Fortran compiler will be integrated within the first version of MSVC present in the compilers.yaml output.

Spack provides a default config.yaml file for Windows that it will use unless overridden. This file is located at etc\spack\defaults\windows\config.yaml. You can read more on how to do this and write your own configuration files in the Configuration Files section of our documentation. If you do this, pay particular attention to the build_stage block of the file as this specifies the directory that will temporarily hold the source code for the packages to be installed. This path name must be sufficiently short for compliance with cmd, otherwise you will see build errors during installation (particularly with CMake) tied to long path names.

To allow Spack to use external tools and dependencies already on your system, the external pieces of software must be described in the packages.yaml file. There are two methods to populate this file:

The first and easiest choice is to use Spack to find installations on your system. In the Spack terminal, run the following commands:

spack external find cmake
spack external find ninja


The spack external find <name> command finds executables on your system with the given name. The command stores the items found in a packages.yaml file in the .staging\ directory.

Assuming that the command found CMake and Ninja executables in the previous step, continue to Step 4. If no executables were found, we may need to manually direct spack towards the CMake and Ninja installations we set up with Visual Studio. Therefore, your packages.yaml file will look something like this, with possibly slight variants in the paths to CMake and Ninja:

packages:
  cmake:
    externals:
    - spec: cmake@3.19
      prefix: 'c:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake'
    buildable: False
  ninja:
    externals:
    - spec: ninja@1.8.2
      prefix: 'c:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\Common7\IDE\CommonExtensions\Microsoft\CMake\Ninja'
    buildable: False


You can also use a separate installation of CMake if you have one and prefer to use it. If you don't have a path to Ninja analogous to the above, you can obtain it by running the Visual Studio Installer and following the instructions at the start of this section. Also note that .yaml files use spaces for indentation, not tabs, so ensure that this is the case when editing one directly.

NOTE:

Cygwin: The use of Cygwin is not officially supported by Spack and is not tested. However, Spack will not throw an error, so if you choose to use Spack with Cygwin, know that no functionality is guaranteed.


Step 4: Use Spack

Once the configuration is complete, it is time to give the installation a test. Install a basic package through the Spack console via:

spack install cpuinfo


If you did not have CMake or Ninja installed in the previous step, running the command above should bootstrap both packages.

Windows Compatible Packages

Not all Spack packages currently have Windows support. Some are inherently incompatible with the platform, and others simply have yet to be ported. To view the current set of packages with Windows support, use the list command: spack list -t windows. If there's a package you'd like to install on Windows but it is not in that list, feel free to reach out to request the port or contribute the port yourself.

NOTE:

This is by no means a comprehensive list; some packages may have ports that were not tagged, while others may just work out of the box on Windows and have not been tagged as such.


For developers

The intent is to provide a Windows installer that will automatically set up Python, Git, and Spack, instead of requiring the user to do so manually. Instructions for creating the installer are at https://github.com/spack/spack/blob/develop/lib/spack/spack/cmd/installer/README.md

Alternatively, a pre-built copy of the Windows installer is available as an artifact of Spack's Windows CI, produced by each run of the CI on develop or any PR.

BASIC USAGE

The spack command has many subcommands. You'll only need a small subset of them for typical usage.

Note that Spack colorizes output. less -R should be used with Spack to maintain this colorization. E.g.:

$ spack find | less -R


It is recommended that the following be put in your .bashrc file:

alias less='less -R'


If you do not see colorized output when using less -R, it is because color is being disabled in the piped output. In this case, tell Spack to force colorized output with a flag:

$ spack --color always find | less -R


or an environment variable

$ SPACK_COLOR=always spack find | less -R


Listing available packages

To install software with Spack, you need to know what software is available. You can see a list of available package names at the packages.spack.io website, or using the spack list command.

spack list

The spack list command prints out a list of all of the packages Spack can install:

$ spack list
3dtk                                      py-googleapis-common-protos
3proxy                                    py-googledrivedownloader
7zip                                      py-gosam
abacus                                    py-gpaw
abduco                                    py-gpustat
abi-compliance-checker                    py-gputil
abi-dumper                                py-gpy
abinit                                    py-gpyopt
abseil-cpp                                py-gpytorch
abyss                                     py-gql
...


There are thousands of them, so we've truncated the output above, but you can find a full list here. Packages are listed by name in alphabetical order. A pattern with no wildcards (* or ?) is treated as though it started and ended with *, so util is equivalent to *util*. All patterns are case-insensitive. You can also add the -d flag to search package descriptions in addition to names. Some examples:

All packages whose names contain "sql":

$ spack list sql
mysql              py-agate-sql                     py-mysqlclient  py-sqlalchemy-migrate  r-rpostgresql  sqlitebrowser
mysql-connector-c  py-aiosqlite                     py-mysqldb1     py-sqlalchemy-stubs    r-rsqlite
mysqlpp            py-azure-mgmt-sql                py-pygresql     py-sqlalchemy-utils    r-sqldf
perl-dbd-mysql     py-azure-mgmt-sqlvirtualmachine  py-pymysql      py-sqlitedict          sqlcipher
perl-dbd-sqlite    py-flask-sqlalchemy              py-pysqlite3    py-sqlparse            sqlite
postgresql         py-mysql-connector-python        py-sqlalchemy   r-rmysql               sqlite-jdbc


All packages whose names or descriptions contain documentation:

$ spack list --search-description documentation
asciidoc-py3       perl-bioperl          py-interface-meta       py-sphinx-tabs               r-spam
byacc              perl-db-file          py-markdown             py-sphinxautomodapi          r-stanheaders
compositeproto     perl-io-prompt        py-mkdocs               py-sphinxcontrib-websupport  r-units
damageproto        py-alabaster          py-mkdocs-material      r-downlit                    r-uwot
double-conversion  py-astropy-helpers    py-mkdocstrings         r-lifecycle                  sowing
doxygen            py-dask-sphinx-theme  py-myst-parser          r-modeltools                 texinfo
gflags             py-docutils           py-param                r-pkgdown                    totalview
gtk-doc            py-epydoc             py-pdoc3                r-quadprog                   xorg-docs
libxfixes          py-exhale             py-python-docs-theme    r-rcpp                       xorg-sgml-doctools
libxpresent        py-fastai             py-recommonmark         r-rdpack
man-db             py-ford               py-sphinx               r-rinside
ntl                py-furo               py-sphinx-immaterial    r-roxygen2
oommf              py-griffe             py-sphinx-multiversion  r-satellite


spack info

To get more information on a particular package from spack list, use spack info. Just supply the name of a package:

$ spack info --all mpich
AutotoolsPackage:   mpich
Description:
    MPICH is a high performance and widely portable implementation of the
    Message Passing Interface (MPI) standard.
Homepage: https://www.mpich.org
Maintainers: @raffenet @yfguo
Externally Detectable:
    True (version, variants)
Tags:
    detectable  e4s
Preferred version:
    4.1.2    https://www.mpich.org/static/downloads/4.1.2/mpich-4.1.2.tar.gz
Safe versions:
develop [git] https://github.com/pmodels/mpich.git
4.1.2 https://www.mpich.org/static/downloads/4.1.2/mpich-4.1.2.tar.gz
4.1.1 https://www.mpich.org/static/downloads/4.1.1/mpich-4.1.1.tar.gz
4.1 https://www.mpich.org/static/downloads/4.1/mpich-4.1.tar.gz
4.0.3 https://www.mpich.org/static/downloads/4.0.3/mpich-4.0.3.tar.gz
4.0.2 https://www.mpich.org/static/downloads/4.0.2/mpich-4.0.2.tar.gz
4.0.1 https://www.mpich.org/static/downloads/4.0.1/mpich-4.0.1.tar.gz
4.0 https://www.mpich.org/static/downloads/4.0/mpich-4.0.tar.gz
3.4.3 https://www.mpich.org/static/downloads/3.4.3/mpich-3.4.3.tar.gz
3.4.2 https://www.mpich.org/static/downloads/3.4.2/mpich-3.4.2.tar.gz
3.4.1 https://www.mpich.org/static/downloads/3.4.1/mpich-3.4.1.tar.gz
3.4 https://www.mpich.org/static/downloads/3.4/mpich-3.4.tar.gz
3.3.2 https://www.mpich.org/static/downloads/3.3.2/mpich-3.3.2.tar.gz
3.3.1 https://www.mpich.org/static/downloads/3.3.1/mpich-3.3.1.tar.gz
3.3 https://www.mpich.org/static/downloads/3.3/mpich-3.3.tar.gz
3.2.1 https://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz
3.2 https://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz
3.1.4 https://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz
3.1.3 https://www.mpich.org/static/downloads/3.1.3/mpich-3.1.3.tar.gz
3.1.2 https://www.mpich.org/static/downloads/3.1.2/mpich-3.1.2.tar.gz
3.1.1 https://www.mpich.org/static/downloads/3.1.1/mpich-3.1.1.tar.gz
3.1 https://www.mpich.org/static/downloads/3.1/mpich-3.1.tar.gz
3.0.4 https://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz
Deprecated versions:
    None
Variants:
argobots [false] false, true
Enable Argobots support
build_system [autotools] autotools
Build systems supported by the package
cuda [false] false, true
Build with CUDA
device [ch4] ch3, ch4
Abstract Device Interface (ADI)
implementation. The ch4 device is in experimental state for versions
before 3.4.
fortran [true] false, true
Enable Fortran support
hwloc [true] false, true
Use external hwloc package
hydra [true] false, true
Build the hydra process manager
libxml2 [true] false, true
Use libxml2 for XML support instead of the custom minimalistic implementation
netmod [ofi] mxm, ofi, tcp, ucx
Network module. Only single netmod builds are
supported. For ch3 device configurations, this presumes the
ch3:nemesis communication channel. ch3:sock is not supported by this
spack package at this time.
pci [true] false, true
Support analyzing devices on PCI bus
pmi [pmi] cray, off, pmi, pmi2, pmix
PMI interface.
rocm [false] false, true
Enable ROCm support
romio [true] false, true
Enable ROMIO MPI I/O implementation
slurm [false] false, true
Enable SLURM support
verbs [false] false, true
Build support for OpenFabrics verbs.
wrapperrpath [true] false, true
Enable wrapper rpath
when +cuda
cuda_arch [none] none, 10, 11, 12, 13, 20, 21, 30, 32, 35, 37, 50, 52, 53, 60, 61, 62, 70, 72, 75,
80, 86, 87, 89, 90
CUDA architecture
when +rocm
amdgpu_target [none] none, gfx1010, gfx1011, gfx1012, gfx1013, gfx1030, gfx1031, gfx1032, gfx1033,
gfx1034, gfx1035, gfx1036, gfx1100, gfx1101, gfx1102, gfx1103, gfx701, gfx801,
gfx802, gfx803, gfx900, gfx900:xnack-, gfx902, gfx904, gfx906, gfx906:xnack-,
gfx908, gfx908:xnack-, gfx909, gfx90a, gfx90a:xnack+, gfx90a:xnack-, gfx90c,
gfx940
AMD GPU architecture
when @3.3: device=ch4 netmod=ucx
hcoll [false] false, true
Enable support for Mellanox HCOLL accelerated collective operations library
when @3.4:
datatype-engine [auto] auto, dataloop, yaksa
controls the datatype engine to use
when @4: device=ch4
vci [false] false, true
Enable multiple VCI (virtual communication interface) critical sections to improve performance of
applications that do heavy concurrent MPI communications. Set MPIR_CVAR_CH4_NUM_VCIS=<N> to enable
multiple VCIs at runtime.
Installation Phases:
    autoreconf  configure  build  install
Build Dependencies:
    argobots  autoconf  automake  cray-pmi  cuda  findutils  gmake  gnuconfig  hcoll
    hip  hsa-rocr-dev  hwloc  libfabric  libpciaccess  libtool  libxml2  llvm-amdgpu
    m4  mxm  pkgconfig  pmix  python  slurm  ucx  yaksa
Link Dependencies:
    argobots  cray-pmi  cuda  hcoll  hip  hsa-rocr-dev  hwloc  libfabric  libpciaccess
    libxml2  llvm-amdgpu  mxm  pmix  slurm  ucx  yaksa
Run Dependencies:
    None
Virtual Packages:
    mpich provides mpi@:4.0
    mpich@:3.2 provides mpi@:3.1
    mpich@:3.1 provides mpi@:3.0
    mpich@:1.2 provides mpi@:2.2
    mpich@:1.1 provides mpi@:2.1
    mpich@:1.0 provides mpi@:2.0
Available Build Phase Test Methods:
    None
Available Install Phase Test Methods:
    None
Stand-Alone/Smoke Test Methods:
    Mpi.test_mpi_hello  Mpich.test_cpi  Mpich.test_finalized  Mpich.test_manyrma  Mpich.test_sendrecv
Licenses:
    None


Most of the information is self-explanatory. The safe versions are versions for which Spack knows the checksum, which it uses to verify that these versions download without errors or tampering.

Dependencies and virtual dependencies are described in more detail later.

spack versions

To see more available versions of a package, run spack versions. For example:

$ spack versions libelf

0.8.13


There are two sections in the output. Safe versions are versions for which Spack has a checksum on file. It can verify that these versions are downloaded correctly.

In many cases, Spack can also show you what versions are available out on the web---these are remote versions. Spack gets this information by scraping it directly from package web pages. Depending on the package and how its releases are organized, Spack may or may not be able to find remote versions.

Installing and uninstalling

spack install

spack install will install any package shown by spack list. For example, to install the latest version of the mpileaks package, you might type this:

$ spack install mpileaks


If mpileaks depends on other packages, Spack will install the dependencies first. It then fetches the mpileaks tarball, expands it, verifies that it was downloaded without errors, builds it, and installs it in its own directory under $SPACK_ROOT/opt. You'll see a number of messages from Spack, a lot of build output, and a message that the package is installed.

$ spack install mpileaks
... dependency build output ...
==> Installing mpileaks-1.0-ph7pbnhl334wuhogmugriohcwempqry2
==> No binary for mpileaks-1.0-ph7pbnhl334wuhogmugriohcwempqry2 found: installing from source
==> mpileaks: Executing phase: 'autoreconf'
==> mpileaks: Executing phase: 'configure'
==> mpileaks: Executing phase: 'build'
==> mpileaks: Executing phase: 'install'
[+] ~/spack/opt/linux-rhel7-broadwell/gcc-8.1.0/mpileaks-1.0-ph7pbnhl334wuhogmugriohcwempqry2


The last line, with the [+], indicates where the package is installed.

Add the Spack debug option (one or more times) -- spack -d install mpileaks -- to get additional (and even more verbose) output.

Building a specific version

Spack can also build specific versions of a package. To do this, just add @ after the package name, followed by a version:

$ spack install mpich@3.0.4


Any number of versions of the same package can be installed at once without interfering with each other. This is good for multi-user sites, as installing a version that one user needs will not disrupt existing installations for other users.
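For example, a sketch using versions from the mpich listing above; each installation lands in its own prefix:

$ spack install mpich@3.0.4
$ spack install mpich@3.3.2
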

In addition to different versions, Spack can customize the compiler, compile-time options (variants), compiler flags, and platform (for cross compiles) of an installation. Spack is unique in that it can also configure the dependencies a package is built with. For example, two configurations of the same version of a package, one built with boost 1.39.0 and the other built with boost 1.43.0, can coexist.

This can all be done on the command line using the spec syntax. Spack calls the descriptor used to refer to a particular package configuration a spec. In the commands above, mpileaks and mpich@3.0.4 are both valid specs. We'll talk more about how you can use them to customize an installation in Specs & dependencies.

Reusing installed dependencies

By default, when you run spack install, Spack tries hard to reuse existing installations as dependencies, either from a local store or from remote buildcaches if configured. This minimizes unwanted rebuilds of common dependencies, in particular if you update Spack frequently.

In case you want the latest versions and configurations to be installed instead, you can add the --fresh option:

$ spack install --fresh mpich


Reusing installations in this mode is "accidental", happening only if there's a match between existing installations and what Spack would have installed anyhow.

You can use the spack spec -I mpich command to see what will be reused and what will be built before you install.

You can configure Spack to use the --fresh behavior by default in concretizer.yaml:

concretizer:
  reuse: false
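
Equivalently, a sketch using the spack config command to set the same key in your user scope:

$ spack config add concretizer:reuse:false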


spack uninstall

To uninstall a package, type spack uninstall <package>. This will ask the user for confirmation before completely removing the directory in which the package was installed.

$ spack uninstall mpich


If there are still installed packages that depend on the package to be uninstalled, spack will refuse to uninstall it.

To uninstall a package and every package that depends on it, you may give the --dependents option.

$ spack uninstall --dependents mpich


This will display a list of all the packages that depend on mpich and, upon confirmation, will uninstall them in the right order.

A command like

$ spack uninstall mpich


may be ambiguous if multiple mpich configurations are installed. For example, if both mpich@3.0.2 and mpich@3.1 are installed, mpich could refer to either one. Because it cannot determine which one to uninstall, Spack will ask you either to provide a version number to remove the ambiguity or use the --all option to uninstall all of the matching packages.
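For example, either of the following resolves the ambiguity (a sketch using the versions named above):

$ spack uninstall mpich@3.1      # name the version explicitly
$ spack uninstall --all mpich    # or uninstall every matching package
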

You may force uninstall a package with the --force option

$ spack uninstall --force mpich


but you risk breaking other installed packages. In general, it is safer to remove dependent packages before removing their dependencies or use the --dependents option.

Garbage collection

When Spack builds software from sources, it often installs tools that are needed just to build or test other software. These are not necessary at runtime. To support cases where removing these tools is a benefit, Spack provides the spack gc ("garbage collector") command, which will uninstall all unneeded packages:

$ spack find
==> 24 installed packages
-- linux-ubuntu18.04-broadwell / gcc@9.0.1 ----------------------
autoconf@2.69    findutils@4.6.0  libiconv@1.16        libszip@2.1.1  m4@1.4.18    openjpeg@2.3.1  pkgconf@1.6.3  util-macros@1.19.1
automake@1.16.1  gdbm@1.18.1      libpciaccess@0.13.5  libtool@2.4.6  mpich@3.3.2  openssl@1.1.1d  readline@8.0   xz@5.2.4
cmake@3.16.1     hdf5@1.10.5      libsigsegv@2.12      libxml2@2.9.9  ncurses@6.1  perl@5.30.0     texinfo@6.5    zlib@1.2.11
$ spack gc
==> The following packages will be uninstalled:

-- linux-ubuntu18.04-broadwell / gcc@9.0.1 ----------------------
vn47edz autoconf@2.69    6m3f2qn findutils@4.6.0  ubl6bgk libtool@2.4.6  pksawhz openssl@1.1.1d  urdw22a readline@8.0
ki6nfw5 automake@1.16.1  fklde6b gdbm@1.18.1      b6pswuo m4@1.4.18      k3s2csy perl@5.30.0     lp5ya3t texinfo@6.5
ylvgsov cmake@3.16.1     5omotir libsigsegv@2.12  leuzbbh ncurses@6.1    5vmfbrq pkgconf@1.6.3   5bmv4tg util-macros@1.19.1

==> Do you want to proceed? [y/N] y
[ ... ]

$ spack find
==> 9 installed packages
-- linux-ubuntu18.04-broadwell / gcc@9.0.1 ----------------------
hdf5@1.10.5  libiconv@1.16  libpciaccess@0.13.5  libszip@2.1.1  libxml2@2.9.9  mpich@3.3.2  openjpeg@2.3.1  xz@5.2.4  zlib@1.2.11


In the example above Spack went through all the packages in the package database and removed everything that is not either:

1.
A package installed upon explicit request of the user
2.
A link or run dependency, even transitive, of one of the packages at point 1.

You can check Viewing more metadata to see how to query for explicitly installed packages or Dependency types for a more thorough treatment of dependency types.

Marking packages explicit or implicit

By default, Spack will mark packages a user installs as explicitly installed, while all of their dependencies will be marked as implicitly installed. Packages can be marked manually as explicitly or implicitly installed by using spack mark. This can be used in combination with spack gc to clean up packages that are no longer required.

$ spack install m4
==> 29005: Installing libsigsegv
[...]
==> 29005: Installing m4
[...]
$ spack install m4 ^libsigsegv@2.11
==> 39798: Installing libsigsegv
[...]
==> 39798: Installing m4
[...]
$ spack find -d
==> 4 installed packages
-- linux-fedora32-haswell / gcc@10.1.1 --------------------------
libsigsegv@2.11
libsigsegv@2.12
m4@1.4.18
    libsigsegv@2.12
m4@1.4.18
    libsigsegv@2.11

$ spack gc
==> There are no unused specs. Spack's store is clean.

$ spack mark -i m4 ^libsigsegv@2.11
==> m4@1.4.18 : marking the package implicit

$ spack gc
==> The following packages will be uninstalled:

-- linux-fedora32-haswell / gcc@10.1.1 --------------------------
5fj7p2o libsigsegv@2.11  c6ensc6 m4@1.4.18

==> Do you want to proceed? [y/N]


In the example above, we ended up with two versions of m4 since they depend on different versions of libsigsegv. spack gc will not remove any of the packages since both versions of m4 have been installed explicitly and both versions of libsigsegv are required by the m4 packages.

spack mark can also be used to implement upgrade workflows. The following example demonstrates how spack mark and spack gc can be used to keep only the current version of a package installed.

When updating Spack via git pull, new versions for either libsigsegv or m4 might be introduced. This will cause Spack to install duplicates. Since we only want to keep one version, we mark everything as implicitly installed before updating Spack. If there is no new version for either of the packages, spack install will simply mark them as explicitly installed and spack gc will not remove them.

$ spack install m4
==> 62843: Installing libsigsegv
[...]
==> 62843: Installing m4
[...]
$ spack mark -i -a
==> m4@1.4.18 : marking the package implicit
$ git pull
[...]
$ spack install m4
[...]
==> m4@1.4.18 : marking the package explicit
[...]
$ spack gc
==> There are no unused specs. Spack's store is clean.


When using this workflow for installations that contain more packages, care has to be taken to either only mark selected packages or issue spack install for all packages that should be kept.

You can check Viewing more metadata to see how to query for explicitly or implicitly installed packages.

Non-Downloadable Tarballs

The tarballs for some packages cannot be automatically downloaded by Spack. This could be for a number of reasons:

1.
The author requires users to manually accept a license agreement before downloading (jdk and galahad).
2.
The software is proprietary and cannot be downloaded on the open Internet.

To install these packages, one must create a mirror and manually add the tarballs in question to it (see Mirrors (mirrors.yaml)):

1.
Create a directory for the mirror. You can create this directory anywhere you like; it does not have to be inside ~/.spack:

$ mkdir ~/.spack/manual_mirror


2.
Register the mirror with Spack by creating ~/.spack/mirrors.yaml:
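A minimal sketch of such a file, assuming the mirror directory created above (the mirror name manual is arbitrary, and the path should be adjusted to where you actually created the directory):

mirrors:
  manual: file://~/.spack/manual_mirror
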

3.
Put your tarballs in it. Tarballs should be named <package>/<package>-<version>.tar.gz. For example:

$ ls -l manual_mirror/galahad
-rw-------. 1 me me 11657206 Jun 21 19:25 galahad-2.60003.tar.gz


4.
Install as usual:

$ spack install galahad



Seeing installed packages

We know that spack list shows you the names of available packages, but how do you figure out which are already installed?

spack find

spack find shows the specs of installed packages. A spec is like a name, but it has a version, compiler, architecture, and build options associated with it. In spack, you can have many installations of the same package with different specs.

Running spack find with no arguments lists installed packages:

$ spack find
==> 74 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
ImageMagick@6.8.9-10  libdwarf@20130729  py-dateutil@2.4.0
adept-utils@1.0       libdwarf@20130729  py-ipython@2.3.1
atk@2.14.0            libelf@0.8.12      py-matplotlib@1.4.2
boost@1.55.0          libelf@0.8.13      py-nose@1.3.4
bzip2@1.0.6           libffi@3.1         py-numpy@1.9.1
cairo@1.14.0          libmng@2.0.2       py-pygments@2.0.1
callpath@1.0.2        libpng@1.6.16      py-pyparsing@2.0.3
cmake@3.0.2           libtiff@4.0.3      py-pyside@1.2.2
dbus@1.8.6            libtool@2.4.2      py-pytz@2014.10
dbus@1.9.0            libxcb@1.11        py-setuptools@11.3.1
dyninst@8.1.2         libxml2@2.9.2      py-six@1.9.0
fontconfig@2.11.1     libxml2@2.9.2      python@2.7.8
freetype@2.5.3        llvm@3.0           qhull@1.0
gdk-pixbuf@2.31.2     memaxes@0.5        qt@4.8.6
glib@2.42.1           mesa@8.0.5         qt@5.4.0
graphlib@2.0.0        mpich@3.0.4        readline@6.3
gtkplus@2.24.25       mpileaks@1.0       sqlite@3.8.5
harfbuzz@0.9.37       mrnet@4.1.0        stat@2.1.0
hdf5@1.8.13           ncurses@5.9        tcl@8.6.3
icu@54.1              netcdf@4.3.3       tk@src
jpeg@9a               openssl@1.0.1h     vtk@6.1.0
launchmon@1.0.1       pango@1.36.8       xcb-proto@1.11
lcms@2.6              pixman@0.32.6      xz@5.2.0
libdrm@2.4.33         py-dateutil@2.4.0  zlib@1.2.8
-- linux-debian7-x86_64 / gcc@4.9.2 --------------------------------
libelf@0.8.10  mpich@3.0.4


Packages are divided into groups according to their architecture and compiler. Within each group, Spack tries to keep the view simple, and only shows the version of installed packages.

Viewing more metadata

spack find can filter the package list based on the package name, spec, or a number of properties of their installation status. For example, missing dependencies of a spec can be shown with --missing, deprecated packages can be included with --deprecated, packages which were explicitly installed with spack install <package> can be singled out with --explicit and those which have been pulled in only as dependencies with --implicit.
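For example, a sketch using the filters just described:

$ spack find --explicit    # only packages installed upon explicit request
$ spack find --implicit    # only packages pulled in as dependencies
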

In some cases, there may be different configurations of the same version of a package installed. For example, there are two installations of libdwarf@20130729 above. We can look at them in more detail using spack find --deps, and by asking only to show libdwarf packages:

$ spack find --deps libdwarf
==> 2 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------

libdwarf@20130729-d9b90962
    ^libelf@0.8.12

libdwarf@20130729-b52fac98
    ^libelf@0.8.13


Now we see that the two instances of libdwarf depend on different versions of libelf: 0.8.12 and 0.8.13. This view can become complicated for packages with many dependencies. If you just want to know whether two packages' dependencies differ, you can use spack find --long:

$ spack find --long libdwarf
==> 2 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
libdwarf@20130729-d9b90962  libdwarf@20130729-b52fac98


Now the libdwarf installs have hashes after their names. These are hashes over all of the dependencies of each package. If the hashes are the same, then the packages have the same dependency configuration.

If you want to know the path where each package is installed, you can use spack find --paths:

$ spack find --paths
==> 74 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------

ImageMagick@6.8.9-10 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/ImageMagick@6.8.9-10-4df950dd
adept-utils@1.0 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/adept-utils@1.0-5adef8da
atk@2.14.0 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/atk@2.14.0-3d09ac09
boost@1.55.0 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/boost@1.55.0
bzip2@1.0.6 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/bzip2@1.0.6
cairo@1.14.0 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/cairo@1.14.0-fcc2ab44
callpath@1.0.2 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/callpath@1.0.2-5dce4318 ...


You can restrict your search to a particular package by supplying its name:

$ spack find --paths libelf
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------

libelf@0.8.11 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/libelf@0.8.11
libelf@0.8.12 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/libelf@0.8.12
libelf@0.8.13 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/libelf@0.8.13


Spec queries

spack find actually does a lot more than this. You can use specs to query for specific configurations and builds of each package. If you want to find only libelf versions 0.8.12 or higher (ranges are inclusive), you could say:

$ spack find libelf@0.8.12:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------

libelf@0.8.12 libelf@0.8.13


Finding just the versions of libdwarf built with a particular version of libelf would look like this:

$ spack find --long libdwarf ^libelf@0.8.12
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
libdwarf@20130729-d9b90962


We can also search for packages that have a certain attribute. For example, spack find libdwarf +debug will show only installations of libdwarf with the 'debug' compile-time option enabled.

The full spec syntax is discussed in detail in Specs & dependencies.

Machine-readable output

If you only want to see very specific things about installed packages, Spack has some options for you. spack find --format can be used to output only specific fields:

$ spack find --format "{name}-{version}-{hash}"
autoconf-2.69-icynozk7ti6h4ezzgonqe6jgw5f3ulx4
automake-1.16.1-o5v3tc77kesgonxjbmeqlwfmb5qzj7zy
bzip2-1.0.6-syohzw57v2jfag5du2x4bowziw3m5p67
bzip2-1.0.8-zjny4jwfyvzbx6vii3uuekoxmtu6eyuj
cmake-3.15.1-7cf6onn52gywnddbmgp7qkil4hdoxpcb
...


or:

$ spack find --format "{hash:7}"
icynozk
o5v3tc7
syohzw5
zjny4jw
7cf6onn
...


This uses the same syntax as described in documentation for format() -- you can use any of the options there. This is useful for passing metadata about packages to other command-line tools.
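For example, a sketch that hands install prefixes to another tool; {prefix} is one of the fields accepted by format():

$ spack find --format "{prefix}" zlib | xargs -I{} ls {}/lib
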

Alternately, if you want something even more machine-readable, you can output each spec as a JSON record using spack find --json. This will output metadata on specs and all dependencies as JSON:

$ spack find --json sqlite@3.28.0
[
  {
    "name": "sqlite",
    "hash": "3ws7bsihwbn44ghf6ep4s6h4y2o6eznv",
    "version": "3.28.0",
    "arch": {
      "platform": "darwin",
      "platform_os": "mojave",
      "target": "x86_64"
    },
    "compiler": {
      "name": "apple-clang",
      "version": "10.0.0"
    },
    "namespace": "builtin",
    "parameters": {
      "fts": true,
      "functions": false,
      "cflags": [],
      "cppflags": [],
      "cxxflags": [],
      "fflags": [],
      "ldflags": [],
      "ldlibs": []
    },
    "dependencies": {
      "readline": {
        "hash": "722dzmgymxyxd6ovjvh4742kcetkqtfs",
        "type": [
          "build",
          "link"
        ]
      }
    }
  },
  ...
]


You can use this with tools like jq to quickly create JSON records structured the way you want:

$ spack find --json sqlite@3.28.0 | jq -C '.[] | { name, version, hash }'
{
  "name": "sqlite",
  "version": "3.28.0",
  "hash": "3ws7bsihwbn44ghf6ep4s6h4y2o6eznv"
}
{
  "name": "readline",
  "version": "7.0",
  "hash": "722dzmgymxyxd6ovjvh4742kcetkqtfs"
}
{
  "name": "ncurses",
  "version": "6.1",
  "hash": "zvaa4lhlhilypw5quj3akyd3apbq5gap"
}


spack diff

It's often the case that you have two versions of a spec that you need to disambiguate. Let's say that we've installed two variants of zlib, one with and one without the optimize variant:

$ spack install zlib
$ spack install zlib -optimize


When we do spack find we see the two installations.

$ spack find zlib
==> 2 installed packages
-- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------
zlib@1.2.11  zlib@1.2.11


Let's now say that we want to uninstall zlib. We run the command and quickly hit a problem, since we have two!

$ spack uninstall zlib
==> Error: zlib matches multiple packages:

-- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------
efzjziy zlib@1.2.11  sl7m27m zlib@1.2.11

==> Error: You can either:
    a) use a more specific spec, or
    b) specify the spec by its hash (e.g. `spack uninstall /hash`), or
    c) use `spack uninstall --all` to uninstall ALL matching specs.


Oh no! We can see from the above that we have two different installations of zlib, and the only visible difference between the two is the hash. This is a good use case for spack diff, which can easily show us the "diff" or set difference between properties for two packages. Let's try it out. Since the only difference we see in the spack find view is the hash, let's use spack diff to look for more detail. We will provide the two hashes:

$ spack diff /efzjziy /sl7m27m
==> Warning: This interface is subject to change.
--- zlib@1.2.11/efzjziyc3dmb5h5u5azsthgbgog5mj7g
+++ zlib@1.2.11/sl7m27mzkbejtkrajigj3a3m37ygv4u2
@@ variant_value @@
-  zlib optimize False
+  zlib optimize True


The output is colored, and written in the style of a git diff. This means that you can copy and paste it into a GitHub markdown as a code block with language "diff" and it will render nicely! Here is an example:

```diff
--- zlib@1.2.11/efzjziyc3dmb5h5u5azsthgbgog5mj7g
+++ zlib@1.2.11/sl7m27mzkbejtkrajigj3a3m37ygv4u2
@@ variant_value @@
-  zlib optimize False
+  zlib optimize True
```


Awesome! Now let's read the diff. It tells us that our first zlib was built with ~optimize (False) and the second was built with +optimize (True). You can't see it in the docs here, but the output above is also colored based on the content being an addition (+) or subtraction (-).

This is a small example, but you will be able to see differences for any attributes on the installation spec. Running spack diff A B means we'll see which spec attributes are on B but not on A (green) and which are on A but not on B (red). Here is another example with an additional difference type, version:

$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
-  python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+  python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
-  openssl 1.0.2u
+  openssl 1.1.1k
-  python 2.7.8
+  python 3.8.11


Let's say that we were only interested in one kind of attribute above, version. We can ask the command to output only this attribute by adding the --attribute <attribute> parameter, which defaults to all. Here is how you would filter to show just versions:

$ spack diff --attribute version python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ version @@
-  openssl 1.0.2u
+  openssl 1.1.1k
-  python 2.7.8
+  python 3.8.11


And you can add as many attributes as you'd like with multiple --attribute arguments (for lots of attributes, you can use -a for short). Finally, if you want to view the data as json (and possibly pipe into an output file) just add --json:

$ spack diff --json python@2.7.8 python@3.8.11


This data will be much longer because, along with the differences for A vs. B and B vs. A, the JSON output also shows the intersection.

Using installed packages

There are several different ways to use Spack packages once you have installed them. As you've seen, spack packages are installed into long paths with hashes, and you need a way to get them into your path. The easiest way is to use spack load, which is described in the next section.

Some more advanced ways to use Spack packages include:

  • environments, which you can use to bundle a number of related packages to "activate" all at once, and
  • environment modules, which are commonly used on supercomputing clusters. Spack generates module files for every installation automatically, and you can customize how this is done.

spack load / unload

If you have shell support enabled you can use the spack load command to quickly get a package on your PATH.

For example this will add the mpich package built with gcc to your path:

$ spack install mpich %gcc@4.4.7
# ... wait for install ...
$ spack load mpich %gcc@4.4.7
$ which mpicc
~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpich@3.0.4/bin/mpicc


These commands will add appropriate directories to your PATH and MANPATH according to the prefix inspections defined in your modules configuration. When you no longer want to use a package, you can type unload or unuse similarly:

$ spack unload mpich %gcc@4.4.7


Ambiguous specs

If a spec used with load/unload is ambiguous (i.e. more than one installed package matches it), then Spack will warn you:

$ spack load libelf
==> Error: libelf matches multiple packages.
Matching packages:

qmm4kso libelf@0.8.13%gcc@4.4.7 arch=linux-debian7-x86_64
cd2u6jt libelf@0.8.13%intel@15.0.0 arch=linux-debian7-x86_64

Use a more specific spec.


You can either type the spack load command again with a fully qualified argument, or you can add just enough extra constraints to identify one package. For example, above, the key differentiator is that one libelf is built with the Intel compiler, while the other used gcc. You could therefore just type:

$ spack load libelf %intel


to identify just the one built with the Intel compiler. If you want to be very specific, you can load a package by its hash. For example, to load the first libelf above, you would run:

$ spack load /qmm4kso


To see which packages you have loaded into your environment, use spack find --loaded:

$ spack find --loaded
==> 2 installed packages
-- linux-debian7 / gcc@4.4.7 ------------------------------------
libelf@0.8.13
-- linux-debian7 / intel@15.0.0 ---------------------------------
libelf@0.8.13


You can also use spack load --list to get the same output, but it does not have the full set of query options that spack find offers.

We'll learn more about Spack's spec syntax in the next section.

Specs & dependencies

We know that spack install, spack uninstall, and other commands take a package name with an optional version specifier. In Spack, that descriptor is called a spec. Spack uses specs to refer to a particular build configuration (or configurations) of a package. Specs are more than a package name and a version; you can use them to specify the compiler, compiler version, architecture, compile options, and dependency options for a build. In this section, we'll go over the full syntax of specs.

Here is an example of a much longer spec than we've seen thus far:

mpileaks @1.2:1.4 %gcc@4.7.5 +debug -qt target=x86_64 ^callpath @1.1 %gcc@4.7.2


If provided to spack install, this will install the mpileaks library at some version between 1.2 and 1.4 (inclusive), built using gcc at version 4.7.5 for a generic x86_64 architecture, with debug options enabled, and without Qt support. Additionally, it says to link it with the callpath library (which it depends on), and to build callpath with gcc 4.7.2. Most specs will not be as complicated as this one, but this is a good example of what is possible with specs.

More formally, a spec consists of the following pieces:

  • Package name identifier (mpileaks above)
  • @ Optional version specifier (@1.2:1.4)
  • % Optional compiler specifier, with an optional compiler version (gcc or gcc@4.7.3)
  • + or - or ~ Optional variant specifiers (+debug, -qt, or ~qt) for boolean variants. Use ++ or -- or ~~ to propagate variants through the dependencies (++debug, --qt, or ~~qt).
  • name=<value> Optional variant specifiers that are not restricted to boolean variants. Use name==<value> to propagate variant through the dependencies.
  • name=<value> Optional compiler flag specifiers. Valid flag names are cflags, cxxflags, fflags, cppflags, ldflags, and ldlibs. Use name==<value> to propagate compiler flags through the dependencies.
  • target=<value> os=<value> Optional architecture specifier (target=haswell os=CNL10)
  • ^ Dependency specs (^callpath@1.1)

There are two things to notice here. The first is that specs are recursively defined. That is, each dependency after ^ is a spec itself. The second is that everything is optional except for the initial package name identifier. Users can be as vague or as specific as they want about the details of building packages, and this makes spack good for beginners and experts alike.

To really understand what's going on above, we need to think about how software is structured. An executable or a library (these are generally the artifacts produced by building software) depends on other libraries in order to run. We can represent the relationship between a package and its dependencies as a graph. Here is the full dependency graph for mpileaks: [graph]

Each box above is a package and each arrow represents a dependency on some other package. For example, we say that the package mpileaks depends on callpath and mpich. mpileaks also depends indirectly on dyninst, libdwarf, and libelf, in that these libraries are dependencies of callpath. To install mpileaks, Spack has to build all of these packages. Dependency graphs in Spack have to be acyclic, and the depends on relationship is directional, so this is a directed, acyclic graph or DAG.

The package name identifier in the spec is the root of some dependency DAG, and the DAG itself is implicit. Spack knows the precise dependencies among packages, but users do not need to know the full DAG structure. Each ^ in the full spec refers to some dependency of the root package. Spack will raise an error if you supply a name after ^ that the root does not actually depend on (e.g. mpileaks ^emacs@23.3).

Spack further simplifies things by only allowing one configuration of each package within any single build. Above, both mpileaks and callpath depend on mpich, but mpich appears only once in the DAG. You cannot build an mpileaks version that depends on one version of mpich and on a callpath version that depends on some other version of mpich. In general, such a configuration would likely behave unexpectedly at runtime, and Spack enforces this to ensure a consistent runtime environment.

The point of specs is to abstract this full DAG from Spack users. If a user does not care about the DAG at all, she can refer to mpileaks by simply writing mpileaks. If she knows that mpileaks indirectly uses dyninst and she wants a particular version of dyninst, then she can refer to mpileaks ^dyninst@8.1. Spack will fill in the rest when it parses the spec; the user only needs to know package names and minimal details about their relationship.

When spack prints out specs, it sorts package names alphabetically to normalize the way they are displayed, but users do not need to worry about this when they write specs. The only restriction on the order of dependencies within a spec is that they appear after the root package. For example, these two specs represent exactly the same configuration:

mpileaks ^callpath@1.0 ^libelf@0.8.3
mpileaks ^libelf@0.8.3 ^callpath@1.0


You can put all the same modifiers on dependency specs that you would put on the root spec. That is, you can specify their versions, compilers, variants, and architectures just like any other spec. Specifiers are associated with the nearest package name to their left. For example, above, @1.1 and %gcc@4.7.2 associate with the callpath package, while @1.2:1.4, %gcc@4.7.5, +debug, -qt, and target=x86_64 all associate with the mpileaks package.

In the diagram above, mpileaks depends on mpich with an unspecified version, but packages can depend on other packages with constraints by adding more specifiers. For example, mpileaks could depend on mpich@1.2: if it can only build with version 1.2 or higher of mpich.

Below are more details about the specifiers that you can add to specs.

Version specifier

A version specifier pkg@<specifier> comes after a package name and starts with @. It can be something abstract that matches multiple known versions, or a specific version. During concretization, Spack will pick the optimal version within the spec's constraints according to policies set for the particular Spack installation.

The version specifier can be a specific version, such as @=1.0.0 or @=1.2a7. Or, it can be a range of versions, such as @1.0:1.5. Version ranges are inclusive, so this example includes both 1.0 and any 1.5.x version. Version ranges can be unbounded, e.g. @:3 means any version up to and including 3. This would include 3.4 and 3.4.2. Similarly, @4.2: means any version above and including 4.2. As a short-hand, @3 is equivalent to the range @3:3 and includes any version with major version 3.

Notice that you can distinguish between the specific version @=3.2 and the range @3.2. This is useful for packages that follow a versioning scheme that omits the zero patch version number: 3.2, 3.2.1, 3.2.2, etc. In general it is preferable to use the range syntax @3.2, since ranges also match versions with one-off suffixes, such as 3.2-custom.

A version specifier can also be a list of ranges and specific versions, separated by commas. For example, @1.0:1.5,=1.7.1 matches any version in the range 1.0:1.5 and the specific version 1.7.1.
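For example, a sketch reusing versions from the mpich listing earlier; this spec matches any version in the range 3.1:3.4 as well as exactly 4.0.2:

$ spack install mpich@3.1:3.4,=4.0.2
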

For packages with a git attribute, git references may be specified instead of a numerical version, i.e. branches, tags, and commits. Spack will stage and build based on the git reference provided. Acceptable syntaxes for this are:


# commit hashes
foo@abcdef1234abcdef1234abcdef1234abcdef1234        # 40 character hashes are automatically treated as git commits
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234

# branches and tags
foo@git.develop    # use the develop branch
foo@git.0.19       # use the 0.19 tag


Spack always needs to associate a Spack version with the git reference, which is used for version comparison. This Spack version is heuristically taken from the closest valid git tag among ancestors of the git ref.

Once a Spack version is associated with a git ref, it is always printed with the git ref. For example, if the commit @git.abcdefg is tagged 0.19, then the spec will be shown as @git.abcdefg=0.19.

If the git ref is not exactly a tag, then the distance to the nearest tag is also part of the resolved version. @git.abcdefg=0.19.git.8 means that the commit is 8 commits away from the 0.19 tag.

In cases where Spack cannot resolve a sensible version from a git ref, users can specify the Spack version to use for the git ref. This is done by appending = and the Spack version to the git ref. For example:

foo@git.my_ref=3.2 # use the my_ref tag or branch, but treat it as version 3.2 for version comparisons
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop # use the given commit, but treat it as develop for version comparisons


Details about how versions are compared and how Spack determines if one version is less than another are discussed in the developer guide.

Compiler specifier

A compiler specifier comes somewhere after a package name and starts with %. It tells Spack what compiler(s) a particular package should be built with. After the % should come the name of some registered Spack compiler. This might include gcc, or intel, but the specific compilers available depend on the site. You can run spack compilers to get a list; more on this below.

The compiler spec can be followed by an optional compiler version. A compiler version specifier looks exactly like a package version specifier. Version specifiers will associate with the nearest package name or compiler specifier to their left in the spec.

If the compiler spec is omitted, Spack will choose a default compiler based on site policies.

Variants

Variants are named options associated with a particular package. They are optional, as each package must provide default values for each variant it makes available. Variants can be specified using a flexible parameter syntax name=<value>. For example, spack install mercury debug=True will install mercury built with debug flags. The names of particular variants available for a package depend on what was provided by the package author. spack info <package> will provide information on what build variants are available.
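For example, a sketch using the mercury package and its debug variant from this section:

$ spack info mercury                  # list the variants the package provides
$ spack install mercury debug=True    # install with the debug variant enabled
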

For compatibility with earlier versions, variants which happen to be boolean in nature can be specified by a syntax that represents turning options on and off. For example, in the previous spec we could have supplied mercury +debug with the same effect of enabling the debug compile-time option for the mercury package.

Depending on the package, a variant may have any default value. For mercury here, debug is False by default, and we turned it on with debug=True or +debug. If a variant is True by default, you can turn it off by adding either -name or ~name to the spec.

There are two syntaxes here because, depending on context, ~ and - may mean different things. In most shells, the following will result in the shell performing home directory substitution:

mpileaks ~debug   # shell may try to substitute this!
mpileaks~debug    # use this instead


If there is a user called debug, the ~ will be incorrectly expanded. In this situation, you would want to write mpileaks -debug. However, - can be ambiguous when included after a package name without spaces:

mpileaks-debug     # wrong!
mpileaks -debug    # right


Spack allows the - character to be part of package names, so the above will be interpreted as a request for the mpileaks-debug package, not a request for mpileaks built without debug options. In this scenario, you should write mpileaks~debug to avoid ambiguity.

When spack normalizes specs, it prints them out with no spaces and uses only ~ for disabled boolean variants. The - and spaces on the command line are provided for convenience and legibility.

Spack allows variants to propagate their value to the package's dependency by using ++, --, and ~~ for boolean variants. For example, for a debug variant:

mpileaks ++debug   # enabled debug will be propagated to dependencies
mpileaks +debug    # only mpileaks will have debug enabled


To propagate the value of non-boolean variants Spack uses name==value. For example, for the stackstart variant:

mpileaks stackstart==4   # variant will be propagated to dependencies
mpileaks stackstart=4    # only mpileaks will have this variant value


Compiler Flags

Compiler flags are specified using the same syntax as non-boolean variants, but fulfill a different purpose. While the function of a variant is set by the package, compiler flags are used by the compiler wrappers to inject flags into the compile line of the build. Additionally, compiler flags can be inherited by dependencies by using ==. spack install libdwarf cppflags=="-g" will install both libdwarf and libelf with the -g flag injected into their compile line.

NOTE:

Versions of Spack prior to 0.19.0 propagate compiler flags using the = syntax.


Notice that the value of the compiler flags must be quoted if it contains any spaces. Any of cppflags=-O3, cppflags="-O3", cppflags='-O3', and cppflags="-O3 -fPIC" are acceptable, but cppflags=-O3 -fPIC is not. Additionally, if the value of the compiler flags is not the last thing on the line, it must be followed by a space. The command spack install libelf cppflags="-O3"%intel will be interpreted as an attempt to set cppflags="-O3%intel".
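For example, a sketch restating the quoting rules above:

$ spack install libdwarf cppflags="-O3 -fPIC"    # quoted value: both flags go to libdwarf
$ spack install libelf cppflags="-O3" %intel     # space after the value keeps -O3 separate from %intel
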

The six compiler flags are injected in the order of implicit make commands in GNU Autotools. If all flags are set, the order is $cppflags $cflags|$cxxflags $ldflags <command> $ldlibs for C and C++ and $fflags $cppflags $ldflags <command> $ldlibs for Fortran.

Compiler environment variables and additional RPATHs

Sometimes compilers require setting special environment variables to operate correctly. Spack handles these cases by allowing custom environment modifications in the environment attribute of the compiler configuration section. See also the Environment Modifications section of the configuration files docs for more information.

It is also possible to specify additional RPATHs that the compiler will add to all executables generated by that compiler. This is useful for forcing certain compilers to RPATH their own runtime libraries, so that executables will run without the need to set LD_LIBRARY_PATH.

compilers:
- compiler:
    spec: gcc@4.9.3
    paths:
      cc: /opt/gcc/bin/gcc
      cxx: /opt/gcc/bin/g++
      f77: /opt/gcc/bin/gfortran
      fc: /opt/gcc/bin/gfortran
    environment:
      unset:
      - BAD_VARIABLE
      set:
        GOOD_VARIABLE_NUM: 1
        GOOD_VARIABLE_STR: good
      prepend_path:
        PATH: /path/to/binutils
      append_path:
        LD_LIBRARY_PATH: /opt/gcc/lib
    extra_rpaths:
    - /path/to/some/compiler/runtime/directory
    - /path/to/some/other/compiler/runtime/directory


Architecture specifiers

Each node in the dependency graph of a spec has an architecture attribute. This attribute is a triplet of platform, operating system and processor. You can specify the elements either separately, by using the reserved keywords platform, os and target:

$ spack install libelf platform=linux
$ spack install libelf os=ubuntu18.04
$ spack install libelf target=broadwell


or together by using the reserved keyword arch:

$ spack install libelf arch=cray-CNL10-haswell


Normally users don't have to bother specifying the architecture if they are installing software for their current host, as in that case the values will be detected automatically. If you need fine-grained control over which packages use which targets (or over all packages' default target), see Package Preferences.

The situation is a little bit different for Cray machines; a detailed explanation of how the architecture can be set on them can be found at Spack on Cray.



Support for specific microarchitectures

Spack knows how to detect and optimize for many specific microarchitectures (including recent Intel, AMD and IBM chips) and encodes this information in the target portion of the architecture specification. A complete list of the microarchitectures known to Spack can be obtained in the following way:

$ spack arch --known-targets
Generic architectures (families)
    aarch64  arm  armv8.1a  armv8.2a  armv8.3a  armv8.4a  armv8.5a  armv9.0a  ppc  ppc64
    ppc64le  ppcle  riscv64  sparc  sparc64  x86  x86_64  x86_64_v2  x86_64_v3  x86_64_v4
GenuineIntel - x86
    i686  pentium2  pentium3  pentium4  prescott
GenuineIntel - x86_64
    nocona  core2  nehalem  westmere  sandybridge  ivybridge  haswell  broadwell  mic_knl
    skylake  skylake_avx512  cannonlake  cascadelake  icelake  sapphirerapids
AuthenticAMD - x86_64
    k10  bulldozer  piledriver  steamroller  excavator  zen  zen2  zen3  zen4
IBM - ppc64
    power7  power8  power9  power10
IBM - ppc64le
    power8le  power9le  power10le
Cavium - aarch64
    thunderx2
Fujitsu - aarch64
    a64fx
ARM - aarch64
    cortex_a72  neoverse_n1  neoverse_v1  neoverse_v2
Apple - aarch64
    m1  m2
SiFive - riscv64
    u74mc


When a spec is installed, Spack matches the compiler being used with the targeted microarchitecture to inject appropriate optimization flags at compile time. Giving a command such as the following:

$ spack install zlib%gcc@9.0.1 target=icelake


will produce compilation lines similar to:

$ /usr/bin/gcc-9 -march=icelake-client -mtune=icelake-client -c ztest10532.c
$ /usr/bin/gcc-9 -march=icelake-client -mtune=icelake-client -c -fPIC -O2 ztest10532.
...


where the flags -march=icelake-client -mtune=icelake-client are injected by Spack based on the requested target and compiler.

If Spack knows that the requested compiler can't optimize for the current target or can't build binaries for that target at all, it will exit with a meaningful error message:

$ spack install zlib%gcc@5.5.0 target=icelake
==> Error: cannot produce optimized binary for micro-architecture "icelake" with gcc@5.5.0 [supported compiler versions are 8:]


When instead an old compiler is selected on a recent enough microarchitecture but there is no explicit target specification, Spack will optimize for the best match it can find instead of failing:

$ spack arch
linux-ubuntu18.04-broadwell
$ spack spec zlib%gcc@4.8
Input spec
--------------------------------
zlib%gcc@4.8
Concretized
--------------------------------
zlib@1.2.11%gcc@4.8+optimize+pic+shared arch=linux-ubuntu18.04-haswell
$ spack spec zlib%gcc@9.0.1
Input spec
--------------------------------
zlib%gcc@9.0.1
Concretized
--------------------------------
zlib@1.2.11%gcc@9.0.1+optimize+pic+shared arch=linux-ubuntu18.04-broadwell


In the snippet above, for instance, the microarchitecture was demoted to haswell when compiling with gcc@4.8 since support to optimize for broadwell starts from gcc@4.9:.

Finally, if Spack has no information to match compiler and target, it will proceed with the installation but avoid injecting any microarchitecture specific flags.

WARNING:

Currently, Spack doesn't print any warning to the user if it has no information on which optimization flags should be used for a given compiler. This behavior might change in the future.


Virtual dependencies

The dependency graph for mpileaks we saw above wasn't quite accurate. mpileaks uses MPI, which is an interface that has many different implementations. Above, we showed mpileaks and callpath depending on mpich, which is one particular implementation of MPI. However, we could build either with another implementation, such as openmpi or mvapich.

Spack represents interfaces like this using virtual dependencies. The real dependency DAG for mpileaks looks like this: [graph]

Notice that mpich has now been replaced with mpi. There is no real MPI package, but some packages provide the MPI interface, and these packages can be substituted in for mpi when mpileaks is built.
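For example, a sketch using the implementations named above; any package that provides mpi can be requested explicitly as the dependency:

$ spack install mpileaks ^mpich
$ spack install mpileaks ^openmpi
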

You can see what virtual packages a particular package provides by getting info on it:

$ spack info --virtuals mpich
AutotoolsPackage:   mpich
Description:
    MPICH is a high performance and widely portable implementation of the
    Message Passing Interface (MPI) standard.
Homepage: https://www.mpich.org
Preferred version:
    4.1.2    https://www.mpich.org/static/downloads/4.1.2/mpich-4.1.2.tar.gz
Safe versions:
develop [git] https://github.com/pmodels/mpich.git
4.1.2 https://www.mpich.org/static/downloads/4.1.2/mpich-4.1.2.tar.gz
4.1.1 https://www.mpich.org/static/downloads/4.1.1/mpich-4.1.1.tar.gz
4.1 https://www.mpich.org/static/downloads/4.1/mpich-4.1.tar.gz
4.0.3 https://www.mpich.org/static/downloads/4.0.3/mpich-4.0.3.tar.gz
4.0.2 https://www.mpich.org/static/downloads/4.0.2/mpich-4.0.2.tar.gz
4.0.1 https://www.mpich.org/static/downloads/4.0.1/mpich-4.0.1.tar.gz
4.0 https://www.mpich.org/static/downloads/4.0/mpich-4.0.tar.gz
3.4.3 https://www.mpich.org/static/downloads/3.4.3/mpich-3.4.3.tar.gz
3.4.2 https://www.mpich.org/static/downloads/3.4.2/mpich-3.4.2.tar.gz
3.4.1 https://www.mpich.org/static/downloads/3.4.1/mpich-3.4.1.tar.gz
3.4 https://www.mpich.org/static/downloads/3.4/mpich-3.4.tar.gz
3.3.2 https://www.mpich.org/static/downloads/3.3.2/mpich-3.3.2.tar.gz
3.3.1 https://www.mpich.org/static/downloads/3.3.1/mpich-3.3.1.tar.gz
3.3 https://www.mpich.org/static/downloads/3.3/mpich-3.3.tar.gz
3.2.1 https://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz
3.2 https://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz
3.1.4 https://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz
3.1.3 https://www.mpich.org/static/downloads/3.1.3/mpich-3.1.3.tar.gz
3.1.2 https://www.mpich.org/static/downloads/3.1.2/mpich-3.1.2.tar.gz
3.1.1 https://www.mpich.org/static/downloads/3.1.1/mpich-3.1.1.tar.gz
3.1 https://www.mpich.org/static/downloads/3.1/mpich-3.1.tar.gz
3.0.4 https://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz Deprecated versions:
None Variants:
argobots [false] false, true
Enable Argobots support
build_system [autotools] autotools
Build systems supported by the package
cuda [false] false, true
Build with CUDA
device [ch4] ch3, ch4
Abstract Device Interface (ADI)
implementation. The ch4 device is in experimental state for versions
before 3.4.
fortran [true] false, true
Enable Fortran support
hwloc [true] false, true
Use external hwloc package
hydra [true] false, true
Build the hydra process manager
libxml2 [true] false, true
Use libxml2 for XML support instead of the custom minimalistic implementation
netmod [ofi] mxm, ofi, tcp, ucx
Network module. Only single netmod builds are
supported. For ch3 device configurations, this presumes the
ch3:nemesis communication channel. ch3:sock is not supported by this
spack package at this time.
pci [true] false, true
Support analyzing devices on PCI bus
pmi [pmi] cray, off, pmi, pmi2, pmix
PMI interface.
rocm [false] false, true
Enable ROCm support
romio [true] false, true
Enable ROMIO MPI I/O implementation
slurm [false] false, true
Enable SLURM support
verbs [false] false, true
Build support for OpenFabrics verbs.
wrapperrpath [true] false, true
Enable wrapper rpath
when +cuda
cuda_arch [none] none, 10, 11, 12, 13, 20, 21, 30, 32, 35, 37, 50, 52, 53, 60, 61, 62, 70, 72, 75,
80, 86, 87, 89, 90
CUDA architecture
when +rocm
amdgpu_target [none] none, gfx1010, gfx1011, gfx1012, gfx1013, gfx1030, gfx1031, gfx1032, gfx1033,
gfx1034, gfx1035, gfx1036, gfx1100, gfx1101, gfx1102, gfx1103, gfx701, gfx801,
gfx802, gfx803, gfx900, gfx900:xnack-, gfx902, gfx904, gfx906, gfx906:xnack-,
gfx908, gfx908:xnack-, gfx909, gfx90a, gfx90a:xnack+, gfx90a:xnack-, gfx90c,
gfx940
AMD GPU architecture
when @3.3: device=ch4 netmod=ucx
hcoll [false] false, true
Enable support for Mellanox HCOLL accelerated collective operations library
when @3.4:
datatype-engine [auto] auto, dataloop, yaksa
controls the datatype engine to use
when @4: device=ch4
vci [false] false, true
Enable multiple VCI (virtual communication interface) critical sections to improve performance of
applications that do heavy concurrent MPIcommunications. Set MPIR_CVAR_CH4_NUM_VCIS=<N> to enable multiple
vcis at runtime. Build Dependencies:
argobots cray-pmi gmake hip libfabric libxml2 mxm python yaksa
autoconf cuda gnuconfig hsa-rocr-dev libpciaccess llvm-amdgpu pkgconfig slurm
automake findutils hcoll hwloc libtool m4 pmix ucx Link Dependencies:
argobots cuda hip hwloc libpciaccess llvm-amdgpu pmix ucx
cray-pmi hcoll hsa-rocr-dev libfabric libxml2 mxm slurm yaksa Run Dependencies:
None Virtual Packages:
mpich provides mpi@:4.0
mpich@:3.2 provides mpi@:3.1
mpich@:3.1 provides mpi@:3.0
mpich@:1.2 provides mpi@:2.2
mpich@:1.1 provides mpi@:2.1
mpich@:1.0 provides mpi@:2.0 Licenses:
None


Spack is unique in that its virtual packages can be versioned, just like regular packages. A particular version of a package may provide a particular version of a virtual package. We can see above, for example, that mpich@:1.0 provides mpi versions up to 2.0, mpich@:3.1 provides mpi versions up to 3.0, and the newest mpich provides mpi versions up to 4.0. A package can depend on a particular version of a virtual package, e.g. if an application needs MPI-2 functions, it can depend on mpi@2: to indicate that it needs some implementation that provides MPI-2 functions.

Constraining virtual packages

When installing a package that depends on a virtual package, you can opt to specify the particular provider you want to use, or you can let Spack pick. For example, if you just type this:

$ spack install mpileaks


Then spack will pick a provider for you according to site policies. If you really want a particular implementation, say mpich, then you could run this instead:

$ spack install mpileaks ^mpich


This forces spack to use some version of mpich for its implementation. As always, you can be even more specific and require a particular mpich version:

$ spack install mpileaks ^mpich@3


The mpileaks package in particular only needs MPI-1 commands, so any MPI implementation will do. If another package depends on mpi@2 and you try to give it an insufficient MPI implementation (e.g., one that provides only mpi@:1), then Spack will raise an error. Likewise, if you try to plug in some package that doesn't provide MPI, Spack will raise an error.

Explicit binding of virtual dependencies

There are packages that provide more than one virtual dependency. When interacting with them, users might want to use them for only a subset of the virtuals they could provide, and rely on other providers for the rest.

It is possible to be more explicit and tell Spack which dependency should provide which virtual, using a special syntax:

$ spack spec strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas


Concretizing the spec above produces the following DAG:

[image]


where intel-parallel-studio could provide mpi, lapack, and blas, but is used here only as the mpi provider. The lapack and blas dependencies are satisfied by openblas.

Specifying Specs by Hash

Complicated specs can become cumbersome to enter on the command line, especially when many qualifications are necessary to distinguish between similar installs. To avoid this, when referencing an existing spec, Spack allows you to reference specs by their hash. We previously discussed the spec hash that Spack computes. In place of a spec in any command, substitute /<hash>, where <hash> is any prefix of a spec hash.

For example, let's say that you accidentally installed two different mvapich2 installations. If you want to uninstall one of them but don't know what the difference is, you can run:

$ spack find --long mvapich2
==> 2 installed packages.
-- linux-centos7-x86_64 / gcc@6.3.0 ----------
qmt35td mvapich2@2.2%gcc
er3die3 mvapich2@2.2%gcc


You can then uninstall the latter installation using:

$ spack uninstall /er3die3


Or, if you want to build with a specific installation as a dependency, you can use:

$ spack install trilinos ^/er3die3


If the given spec hash is sufficiently long as to be unique, Spack will replace the reference with the spec to which it refers. Otherwise, it will prompt for a more qualified hash.
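
Because a hash reference stands in for a spec in any command, you can also inspect either installation directly; for example:

# Show the full dependency tree of one of the two installs
$ spack find --deps /er3die3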

Note that this will not work to reinstall a dependency uninstalled by spack uninstall --force.

spack providers

You can see what packages provide a particular virtual package using spack providers. If you wanted to see what packages provide mpi, you would just run:

$ spack providers mpi
cray-mpich     intel-mpi              mpich@:1.0  mpich@:3.2     mpt     mvapich        mvapich2-gdr  openmpi@1.6.5
cray-mvapich2  intel-oneapi-mpi       mpich@:1.1  mpich          mpt@1:  mvapich2       mvapich2x     openmpi@1.7.5:
fujitsu-mpi    intel-parallel-studio  mpich@:1.2  mpilander      mpt@3:  mvapich2@2.1:  nvhpc         openmpi@2.0.0:
hpcx-mpi       mpi-serial             mpich@:3.1  mpitrampoline  msmpi   mvapich2@2.3:  openmpi       spectrum-mpi


And if you only wanted to see packages that provide MPI-2, you would add a version specifier to the spec:

$ spack providers mpi@2
hpcx-mpi               mpi-serial  mpich@:3.1  mpt      mvapich2       mvapich2x      openmpi@1.7.5:
intel-mpi              mpich@:1.0  mpich@:3.2  mpt@3:   mvapich2@2.1:  nvhpc          openmpi@2.0.0:
intel-oneapi-mpi       mpich@:1.1  mpich       msmpi    mvapich2@2.3:  openmpi        spectrum-mpi
intel-parallel-studio  mpich@:1.2  mpilander   mvapich  mvapich2-gdr   openmpi@1.6.5


Notice that the package versions that provide insufficient MPI versions are now filtered out.

Deprecating insecure packages

spack deprecate allows for the removal of insecure packages with minimal impact to their dependents.

WARNING:

The spack deprecate command is designed for use only in extraordinary circumstances. This is a VERY big hammer to be used with care.


The spack deprecate command will remove one package and replace it with another by replacing the deprecated package's prefix with a link to the deprecator package's prefix.

WARNING:

The spack deprecate command makes no promises about binary compatibility. It is up to the user to ensure the deprecator is suitable for the deprecated package.


Spack tracks concrete deprecated specs and ensures that no future packages concretize to a deprecated spec.

The first spec given to the spack deprecate command is the package to deprecate. It is an abstract spec that must describe a single installed package. The second spec argument is the deprecator spec. By default it must be an abstract spec that describes a single installed package, but with the -i/--install-deprecator option it can be any abstract spec that Spack will install and then use as the deprecator. The -I/--no-install-deprecator option will ensure the default behavior.

By default, spack deprecate will deprecate all dependencies of the deprecated spec, replacing each by the dependency of the same name in the deprecator spec. The -d/--dependencies option will ensure the default, while the -D/--no-dependencies option will deprecate only the root of the deprecate spec in favor of the root of the deprecator spec.

spack deprecate can use symbolic links or hard links. The default behavior is symbolic links, but the -l/--link-type flag can take options hard or soft.
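
For example, a minimal sketch (the package and versions here are hypothetical, chosen only to illustrate the syntax):

# Replace a vulnerable zlib with a patched build; -i installs the
# deprecator first if it is not already present.
$ spack deprecate -i zlib@1.2.11 zlib@1.2.13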

Verifying installations

The spack verify command can be used to verify the validity of Spack-installed packages any time after installation.

At installation time, Spack creates a manifest of every file in the installation prefix. For links, Spack tracks the mode, ownership, and destination. For directories, Spack tracks the mode and ownership. For files, Spack tracks the mode, ownership, modification time, hash, and size. The spack verify command will check, for every file in each package, whether any of those attributes have changed. It will also check for files newly added to or deleted from the installation prefix. Spack can either check all installed packages using the -a,--all option, or accept specs listed on the command line to verify.

The spack verify command can also verify, for individual files, that they haven't been altered since installation time. If a given file is not in a Spack installation prefix, Spack will report that it is not owned by any package. To check individual files instead of specs, use the -f,--files option.

Spack installation manifests are part of the tarball signed by Spack for binary package distribution. When installed from a binary package, Spack uses the packaged installation manifest instead of creating one at install time.

The spack verify command also accepts the -l,--local option to check only local packages (as opposed to those used transparently from upstream spack instances) and the -j,--json option to output machine-readable json data for any errors.
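
For example, a minimal sketch using only the options described above (the file path is a placeholder):

# Check all locally installed packages
$ spack verify --local --all
# Check individual files instead of specs
$ spack verify --files /path/to/prefix/bin/example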

Extensions & Python support

Spack's installation model assumes that each package will live in its own install prefix. However, certain packages are typically installed within the directory hierarchy of other packages. For example, Python packages are typically installed in the $prefix/lib/python-2.7/site-packages directory.

In Spack, installation prefixes are immutable, so this type of installation is not directly supported. However, it is possible to create views that allow you to merge install prefixes of multiple packages into a single new prefix. Views are a convenient way to get a more traditional filesystem structure. Using extensions, you can ensure that Python packages always share the same prefix in the view as Python itself. Suppose you have Python installed like so:

$ spack find python
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
python@2.7.8


spack extensions

You can find extensions for your Python installation like this:

$ spack extensions python
==> python@2.7.8%gcc@4.4.7 arch=linux-debian7-x86_64-703c7a96
==> 36 extensions:
geos          py-ipython     py-pexpect    py-pyside            py-sip
py-basemap    py-libxml2     py-pil        py-pytz              py-six
py-biopython  py-mako        py-pmw        py-rpy2              py-sympy
py-cython     py-matplotlib  py-pychecker  py-scientificpython  py-virtualenv
py-dateutil   py-mpi4py      py-pygments   py-scikit-learn
py-epydoc     py-mx          py-pylint     py-scipy
py-gnuplot    py-nose        py-pyparsing  py-setuptools
py-h5py       py-numpy       py-pyqt       py-shiboken
==> 12 installed:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-dateutil@2.4.0    py-nose@1.3.4       py-pyside@1.2.2
py-dateutil@2.4.0    py-numpy@1.9.1      py-pytz@2014.10
py-ipython@2.3.1     py-pygments@2.0.1   py-setuptools@11.3.1
py-matplotlib@1.4.2  py-pyparsing@2.0.3  py-six@1.9.0


The extensions are a subset of what's returned by spack list, and they are packages like any other. They are installed into their own prefixes, and you can see this with spack find --paths:

$ spack find --paths py-numpy
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------

py-numpy@1.9.1 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/py-numpy@1.9.1-66733244


However, even though this package is installed, you cannot use it directly when you run python:

$ spack load python
$ python
Python 2.7.8 (default, Feb 17 2015, 01:35:25)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named numpy
>>>


Using Extensions in Environments

The recommended way of working with extensions such as py-numpy above is through Environments. For example, the following creates an environment in the current working directory with a filesystem view in the ./view directory:

$ spack env create --with-view view --dir .
$ spack -e . add py-numpy
$ spack -e . concretize
$ spack -e . install


We recommend environments for two reasons. Firstly, environments can be activated (requires Shell support):

$ spack env activate .


which sets all the right environment variables such as PATH and PYTHONPATH. This ensures that

$ python
>>> import numpy


works. Secondly, even without shell support, the view ensures that Python can locate its extensions:

$ ./view/bin/python
>>> import numpy


See Environments (spack.yaml) for a more in-depth description of Spack environments and customizations to views.

Using spack load

A more traditional way of using Spack and extensions is spack load (requires Shell support). This will add the extension to PYTHONPATH in your current shell, and Python itself will be available in the PATH:

$ spack load py-numpy
$ python
>>> import numpy


The loaded packages can be checked using spack find --loaded.
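
For example, after loading py-numpy as above:

$ spack find --loaded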

Loading Extensions via Modules

Apart from spack env activate and spack load, you can load numpy through your environment modules (using environment-modules or lmod). This will also add the extension to the PYTHONPATH in your current shell.

$ module load <name of numpy module>


If you do not know the name of the specific numpy module you wish to load, you can use the spack module tcl|lmod loads command to get the name of the module from the Spack spec.
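
For instance, with tcl modules (a sketch; the exact module name printed depends on your module configuration and the concretized spec):

$ spack module tcl loads py-numpy
# prints something like the following, which you can then run:
module load py-numpy-1.9.1-gcc-4.4.7-abcdefg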

Filesystem requirements

By default, Spack needs to be run from a filesystem that supports flock locking semantics. Nearly all local filesystems and recent versions of NFS support this, but parallel filesystems or NFS volumes may be configured without flock support enabled. You can determine how your filesystems are mounted with mount. The output for a Lustre filesystem might look like this:

$ mount | grep lscratch
mds1-lnet0@o2ib100:/lsd on /p/lscratchd type lustre (rw,nosuid,lazystatfs,flock)
mds2-lnet0@o2ib100:/lse on /p/lscratche type lustre (rw,nosuid,lazystatfs,flock)


Note the flock option on both Lustre mounts.

If you do not see this or a similar option for your filesystem, you have a few options. First, you can move your Spack installation to a filesystem that supports locking. Second, you could ask your system administrator to enable flock for your filesystem.

If none of those work, you can disable locking in one of two ways:

1.
Run Spack with the -L or --disable-locks option to disable locks on a call-by-call basis.
2.
Edit config.yaml and set the locks option to false to always disable locking (see the sketch below).
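
Both can be exercised from the command line; a minimal sketch (the second command assumes a Spack recent enough to have spack config add):

# Disable locks for a single invocation
$ spack -L find
# Persistently set locks: false in config.yaml
$ spack config add config:locks:false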



WARNING:

If you disable locking, concurrent instances of Spack will have no way to avoid stepping on each other. You must ensure that there is only one instance of Spack running at a time. Otherwise, Spack may end up with a corrupted database file, or you may not be able to see all installed packages in commands like spack find.

If you are unfortunate enough to run into this situation, you may be able to fix it by running spack reindex.



This issue typically manifests with the error below:

$ ./spack find
Traceback (most recent call last):
  File "./spack", line 176, in <module>
    main()
  File "./spack", line 154, in main
    return_val = command(parser, args)
  File "./spack/lib/spack/spack/cmd/find.py", line 170, in find
    specs = set(spack.installed_db.query(**q_args))
  File "./spack/lib/spack/spack/database.py", line 551, in query
    with self.read_transaction():
  File "./spack/lib/spack/spack/database.py", line 598, in __enter__
    if self._enter() and self._acquire_fn:
  File "./spack/lib/spack/spack/database.py", line 608, in _enter
    return self._db.lock.acquire_read(self._timeout)
  File "./spack/lib/spack/llnl/util/lock.py", line 103, in acquire_read
    self._lock(fcntl.LOCK_SH, timeout)  # can raise LockError.
  File "./spack/lib/spack/llnl/util/lock.py", line 64, in _lock
    fcntl.lockf(self._fd, op | fcntl.LOCK_NB)
IOError: [Errno 38] Function not implemented


A nicer error message is TBD in future versions of Spack.

Troubleshooting

The spack audit command:

$ spack audit -h
usage: spack audit [-h] SUBCOMMAND ...

audit configuration files, packages, etc.

positional arguments:
  SUBCOMMAND
    configs         audit configuration files
    externals       check external detection in packages
    packages-https  check https in packages
    packages        audit package recipes
    list            list available checks and exits

options:
  -h, --help        show this help message and exit


can be used to detect a number of configuration issues. This command detects configuration settings which might not be strictly wrong but are not likely to be useful outside of special cases.

It can also be used to detect dependency issues with packages, for example cases where a package constrains a dependency with a variant that doesn't exist. (In such cases Spack could report the problem ahead of time, but performing the check automatically would slow down most runs of Spack.)

A detailed list of the checks currently implemented for each subcommand can be printed with:

$ spack -v audit list
generic:
    Generic checks relying on global variables

configs:
    Sanity checks on compilers.yaml
      1. Report compilers with the same spec and two different definitions
    Sanity checks on packages.yaml
      1. Search for duplicate specs declared as externals
      2. Search package preferences deprecated in v0.21 (and slated for removal in v0.22)
      3. Warns if variant preferences have mismatched types or names.

packages:
    Sanity checks on specs used in directives
      1. Ensure stand-alone test method is not included in build-time callbacks
      2. Ensure that patches fetched from GitHub and GitLab have stable sha256
         hashes.
      3. Report unknown or wrong variants in directives for this package
      4. Report unknown dependencies and wrong variants for dependencies
      5. Ensures that variant defaults are present and parsable from cli
      6. Ensures that all variants have a description.
      7. Report if version constraints used in directives are not satisfiable
      8. Reports named specs in the 'when=' attribute of a directive.
         Note that 'conflicts' is the only directive allowing that.
    Sanity checks on reserved attributes of packages
      1. Ensure that packages don't override reserved names
    Sanity checks on properties a package should maintain
      1. Ensure package names are lowercase and consistent
      2. Ensure that package objects are pickleable
      3. Ensure that all packages can unparse and that unparsed code is valid Python
      4. Ensure all versions in a package can produce a fetcher
      5. Ensure the package has a docstring and no fixmes
      6. Ensure no packages use md5 checksums
      7. Ensure that methods modifying the build environment are ported to builder classes.

packages-https:
    Sanity checks on https checks of package urls, etc.
      1. Check for correctness of links

externals:
    Sanity checks for external software detection
      1. Test drive external detection for packages


Depending on the use case, users might run the appropriate subcommands to obtain diagnostics. Issues, if found, are reported to stdout:

% spack audit packages lammps
PKG-DIRECTIVES: 1 issue found
1. lammps: wrong variant in "conflicts" directive

the variant 'adios' does not exist
in /home/spack/spack/var/spack/repos/builtin/packages/lammps/package.py


Getting Help

spack help

If you don't find what you need here, the help subcommand will print out a list of all of spack's options and subcommands:

$ spack help
usage: spack [-hkV] [--color {always,never,auto}] COMMAND ...

A flexible package manager that supports multiple versions,
configurations, platforms, and compilers.

These are common spack commands:

query packages:
  list          list and search available packages
  info          get detailed information on a particular package
  find          list and search installed packages

build packages:
  install       build and install packages
  uninstall     remove installed packages
  gc            remove specs that are now no longer needed
  spec          show what would be installed, given a spec

configuration:
  external      manage external packages in Spack configuration

environments:
  env           manage virtual environments
  view          project packages to a compact naming scheme on the filesystem

create packages:
  create        create a new package file
  edit          open package files in $EDITOR

system:
  arch          print architecture information about this machine
  audit         audit configuration files, packages, etc.
  compilers     list available compilers

user environment:
  load          add package to the user environment
  module        generate/manage module files
  unload        remove package from the user environment

options:
  --color {always,never,auto}
                when to colorize output (default: auto)
  -V, --version show version number and exit
  -h, --help    show this help message and exit
  -k, --insecure
                do not check ssl certificates when downloading

more help:
  spack help --all       list all commands and options
  spack help <command>   help on a specific command
  spack help --spec      help on the package specification syntax
  spack docs             open https://spack.rtfd.io/ in a browser


Adding an argument, e.g. spack help <subcommand>, will print out usage information for a particular subcommand:

$ spack help install
usage: spack install [-hnvyU] [--only {package,dependencies}] [-u UNTIL] [-j JOBS] [--overwrite] [--fail-fast]
                     [--keep-prefix] [--keep-stage] [--dont-restage]
                     [--use-cache | --no-cache | --cache-only | --use-buildcache [{auto,only,never},][package:{auto,only,never},][dependencies:{auto,only,never}]]
                     [--include-build-deps] [--no-check-signature] [--show-log-on-error] [--source] [--deprecated]
                     [--fake] [--only-concrete] [--add | --no-add] [-f SPEC_YAML_FILE] [--clean | --dirty]
                     [--test {root,all}] [--log-format {junit,cdash}] [--log-file LOG_FILE] [--help-cdash] [--reuse]
                     [--reuse-deps]
                     ...

build and install packages

positional arguments:
  spec                  package spec

options:
  --add                 (with environment) add spec to the environment as a root
  --cache-only          only install package from binary mirrors
  --clean               unset harmful variables in the build environment (default)
  --deprecated          fetch deprecated versions without warning
  --dirty               preserve user environment in spack's build environment (danger!)
  --dont-restage        if a partial install is detected, don't delete prior state
  --fail-fast           stop all builds if any build fails (default is best effort)
  --fake                fake install for debug purposes
  --help-cdash          show usage instructions for CDash reporting
  --include-build-deps  include build deps when installing from cache, useful for CI pipeline troubleshooting
  --keep-prefix         don't remove the install prefix if installation fails
  --keep-stage          don't remove the build stage if installation succeeds
  --log-file LOG_FILE   filename for the log file
  --log-format {junit,cdash}
                        format to be used for log files
  --no-add              (with environment) do not add spec to the environment as a root
  --no-cache            do not check for pre-built Spack packages in mirrors
  --no-check-signature  do not check signatures of binary packages
  --only {package,dependencies}
                        select the mode of installation
                        default is to install the package along with all its
                        dependencies. alternatively, one can decide to install
                        only the package or only the dependencies
  --only-concrete       (with environment) only install already concretized specs
  --overwrite           reinstall an existing spec, even if it has dependents
  --show-log-on-error   print full build log to stderr if build fails
  --source              install source files in prefix
  --test {root,all}     run tests on only root packages or all packages
  --use-buildcache [{auto,only,never},][package:{auto,only,never},][dependencies:{auto,only,never}]
                        select the mode of buildcache for the 'package' and 'dependencies'
                        default: package:auto,dependencies:auto
                        - `auto` behaves like --use-cache
                        - `only` behaves like --cache-only
                        - `never` behaves like --no-cache
  --use-cache           check for pre-built Spack packages in mirrors (default)
  -f SPEC_YAML_FILE, --file SPEC_YAML_FILE
                        read specs to install from .yaml files
  -h, --help            show this help message and exit
  -j JOBS, --jobs JOBS  explicitly set number of parallel jobs
  -n, --no-checksum     do not use checksums to verify downloaded files (unsafe)
  -u UNTIL, --until UNTIL
                        phase to stop after when installing (default None)
  -v, --verbose         display verbose build output while installing
  -y, --yes-to-all      assume "yes" is the answer to every confirmation request

concretizer arguments:
  --reuse               reuse installed packages/buildcaches when possible
  --reuse-deps          reuse installed dependencies only
  -U, --fresh           do not reuse installed deps; build newest configuration


Alternately, you can use spack --help in place of spack help, or spack <subcommand> --help to get help on a particular subcommand.

SPACK FOR HOMEBREW/CONDA USERS

Spack is an incredibly powerful package manager, designed for supercomputers where users have diverse installation needs. But Spack can also be used to handle simple single-user installations on your laptop. Most macOS users are already familiar with package managers like Homebrew and Conda, where all installed packages are symlinked to a single central location like /usr/local. In this section, we will show you how to emulate the behavior of Homebrew/Conda using Environments (spack.yaml)!

Setup

First, let's create a new environment. We'll assume that Spack is already set up correctly, and that you've already sourced the setup script for your shell. To create a new environment, simply run:

$ spack env create myenv


Here, myenv can be anything you want to name your environment. Next, we can add a list of packages we would like to install into our environment. Let's say we want a newer version of Bash than the one that comes with macOS, and we want a few Python libraries. We can run:

$ spack -e myenv add bash@5 python py-numpy py-scipy py-matplotlib


Each package can be listed on a separate line, or combined into a single line like we did above. Notice that we're explicitly asking for Bash 5 here. You can use any spec you would normally use on the command line with other Spack commands.

Next, we want to manually configure a couple of things:

$ spack -e myenv config edit


# This is a Spack Environment file.
#
# It describes a set of packages to be installed, along with
# configuration settings.
spack:
  # add package specs to the `specs` list
  specs: [bash@5, python, py-numpy, py-scipy, py-matplotlib]
  view: true


You can see the packages we added earlier in the specs: section. If you ever want to add more packages, you can either use spack add or manually edit this file.

We also need to change the concretizer:unify option. By default, Spack concretizes each spec separately, allowing multiple versions of the same package to coexist. Since we want a single consistent environment, we want to concretize all of the specs together.
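
If you prefer not to edit the file by hand, the same setting can be added from the command line; a sketch using spack config add, which accepts colon-separated YAML paths:

$ spack -e myenv config add concretizer:unify:true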

Here is what your spack.yaml looks like with this new setting:

# This is a Spack Environment file.
#
# It describes a set of packages to be installed, along with
# configuration settings.
spack:
  # add package specs to the `specs` list
  specs: [bash@5, python, py-numpy, py-scipy, py-matplotlib]
  view: true
  concretizer:
    unify: true


Spack symlinks all installations to /Users/me/spack/var/spack/environments/myenv/.spack-env/view, which is the default when view: true. You can actually change this to any directory you want. For example, Homebrew uses /usr/local, while Conda uses /Users/me/anaconda. In order to access files in these locations, you need to update PATH and other environment variables to point to them. Activating the Spack environment does this automatically, but you can also manually set them in your .bashrc.

WARNING:

There are several reasons why you shouldn't use /usr/local:
1.
If you are on macOS 10.11+ (El Capitan and newer), Apple makes it hard for you. You may notice permissions issues on /usr/local due to their System Integrity Protection. By default, users don't have permissions to install anything in /usr/local, and you can't even change this using sudo chown or sudo chmod.
2.
Other package managers like Homebrew will try to install things to the same directory. If you plan on using Homebrew in conjunction with Spack, don't symlink things to /usr/local.
3.
If you are on a shared workstation, or don't have sudo privileges, you can't do this.

If you still want to do this anyway, there are several ways around SIP. You could disable SIP by booting into recovery mode and running csrutil disable, but this is not recommended, as it can open up your OS to security vulnerabilities. Another technique is to run spack concretize and spack install using sudo. This is also not recommended.

The safest way I've found is to create your installation directories using sudo, then change ownership back to the user like so:

for directory in .spack bin contrib include lib man share
do
  sudo mkdir -p /usr/local/$directory
  sudo chown $(id -un):$(id -gn) /usr/local/$directory
done


Depending on the packages you install in your environment, the exact list of directories you need to create may vary. You may also find some packages like Java libraries that install a single file to the installation prefix instead of in a subdirectory. In this case, the action is the same, just replace mkdir -p with touch in the for-loop above.

But again, it's safer just to use the default symlink location.



Installation

To actually concretize the environment, run:

$ spack -e myenv concretize


This will tell you which packages, if any, are already installed, and alert you to any conflicting specs.

To actually install these packages and symlink them to your view: directory, simply run:

$ spack -e myenv install
$ spack env activate myenv


Now, when you type which python3, it should find the one you just installed.

In order to change the default shell to our newer Bash installation, we first need to add it to this list of acceptable shells. Run:

$ sudo vim /etc/shells


and add the absolute path to your bash executable. Then run:

$ chsh -s /path/to/bash


Now, when you log out and log back in, echo $SHELL should point to the newer version of Bash.

Updating Installed Packages

Let's say you upgraded to a new version of macOS, or a new version of Python was released, and you want to rebuild your entire software stack. To do this, simply run the following commands:

$ spack env activate myenv
$ spack concretize --fresh --force
$ spack install


The --fresh flag tells Spack to use the latest version of every package where possible instead of trying to optimize for reuse of existing installed packages.

The --force flag in addition tells Spack to overwrite its previous concretization decisions, allowing you to choose a new version of Python. If any of the new packages like Bash are already installed, spack install won't re-install them, it will keep the symlinks in place.

Updating & Cleaning Up Old Packages

If you're looking to mimic the behavior of Homebrew, you may also want to clean up out-of-date packages from your environment after an upgrade. To upgrade your entire software stack within an environment and clean up old package versions, simply run the following commands:

$ spack env activate myenv
$ spack mark -i --all
$ spack concretize --fresh --force
$ spack install
$ spack gc


Running spack mark -i --all marks all of the existing packages within an environment as "implicitly" installed, which tells Spack's garbage collection system that these packages may be cleaned up.

Don't worry, however: this will not remove your entire environment. Running spack install will reexamine your Spack environment after a fresh concretization and will re-mark any packages that should remain installed as "explicitly" installed.

Note: if you use multiple Spack environments, you should re-run spack install in each of your environments prior to running spack gc, to prevent Spack from uninstalling packages that are no longer required by the environment you just upgraded but are still shared with your other environments.

Uninstallation

If you decide that Spack isn't right for you, uninstallation is simple. Just run:

$ spack env activate myenv
$ spack uninstall --all


This will uninstall all packages in your environment and remove the symlinks.

FREQUENTLY ASKED QUESTIONS

This page contains answers to frequently asked questions about Spack. If you have questions that are not answered here, feel free to ask on Slack or GitHub Discussions. If you've learned the answer to a question that you think should be here, please consider contributing to this page.

Why does Spack pick particular versions and variants?

This question comes up in a variety of forms:

1.
Why does Spack seem to ignore my package preferences from packages.yaml config?
2.
Why does Spack toggle a variant instead of using the default from the package.py file?



The short answer is that Spack always picks an optimal configuration based on a complex set of criteria[1]. These criteria are more nuanced than always choosing the latest versions or default variants.

NOTE:

As a rule of thumb: requirements + constraints > reuse > preferences > defaults.


The following set of criteria (from lowest to highest precedence) explain common cases where concretization output may seem surprising at first.

1.
Package preferences configured in packages.yaml override variant defaults from package.py files, and influence the optimal ordering of versions. Preferences are specified as follows:

packages:
  foo:
    version: [1.0, 1.1]
    variants: ~mpi


2.
Reuse concretization configured in concretizer.yaml overrides preferences, since it's typically faster to reuse an existing spec than to build a preferred one from sources. When build caches are enabled, specs may be reused from a remote location too. Reuse concretization is configured as follows:

concretizer:
  reuse: dependencies  # other options are 'true' and 'false'


3.
Package requirements configured in packages.yaml, and constraints from the command line as well as package.py files override all of the above. Requirements are specified as follows:

packages:
  foo:
    require:
    - "@1.2: +mpi"



Requirements and constraints restrict the set of possible solutions, while reuse behavior and preferences influence what an optimal solution looks like.

FOOTNOTES

[1]
The exact list of criteria can be retrieved with the spack solve command.
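
For example, the following shows the optimization criteria considered alongside the concretized spec:

$ spack solve zlib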

CONFIGURATION FILES

Spack has many configuration files. Here is a quick list of them, in case you want to skip directly to specific docs:

  • compilers.yaml
  • concretizer.yaml
  • config.yaml
  • mirrors.yaml
  • modules.yaml
  • packages.yaml
  • repos.yaml

You can also add any of these as inline configuration in the YAML manifest file (spack.yaml) describing an environment.

YAML Format

Spack configuration files are written in YAML. We chose YAML because it's human readable, but also versatile in that it supports dictionaries, lists, and nested sections. For more details on the format, see yaml.org and libyaml. Here is an example config.yaml file:

config:
  install_tree: $spack/opt/spack
  build_stage:
    - $tempdir/$user/spack-stage
    - ~/.spack/stage


Each Spack configuration file is nested under a top-level section corresponding to its name. So, config.yaml starts with config:, mirrors.yaml starts with mirrors:, etc.

Configuration Scopes

Spack pulls configuration data from files in several directories. There are seven configuration scopes. From lowest to highest:

1.
defaults: Stored in $(prefix)/etc/spack/defaults/. These are the "factory" settings. Users should generally not modify the settings here, but should override them in other configuration scopes. The defaults here will change from version to version of Spack.
2.
system: Stored in /etc/spack/. These are settings for this machine, or for all machines on which this file system is mounted. The system scope can be used for settings idiosyncratic to a particular machine, such as the locations of compilers or external packages. These settings are presumably controlled by someone with root access on the machine. They override the defaults scope.
3.
site: Stored in $(prefix)/etc/spack/. Settings here affect only this instance of Spack, and they override the defaults and system scopes. The site scope can be used for per-project settings (one Spack instance per project) or for site-wide settings on a multi-user machine (e.g., for a common Spack instance).
4.
user: Stored in the home directory: ~/.spack/. These settings affect all instances of Spack and take higher precedence than site, system, or defaults scopes.
5.
custom: Stored in a custom directory specified by --config-scope. If multiple scopes are listed on the command line, they are ordered from lowest to highest precedence.
6.
environment: When using Spack Environments (spack.yaml), Spack reads additional configuration from the environment file. See Configuring Environments for further details on these scopes. Environment scopes can be referenced from the command line as env:name (to reference environment foo, use env:foo).
7.
command line: Build settings specified on the command line take precedence over all other scopes.

Each configuration directory may contain several configuration files, such as config.yaml, compilers.yaml, or mirrors.yaml. When configurations conflict, settings from higher-precedence scopes override lower-precedence settings.

Commands that modify scopes (e.g., spack compilers, spack repo, etc.) take a --scope=<name> parameter that you can use to control which scope is modified. By default, they modify the highest-precedence scope.
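
For example, the following registers newly detected compilers in the site scope rather than the default user scope:

$ spack compiler find --scope site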

Custom scopes

In addition to the defaults, system, site, and user scopes, you may add configuration scopes directly on the command line with the --config-scope argument, or -C for short.

For example, the following adds two configuration scopes, named scopea and scopeb, to a spack spec command:

$ spack -C ~/myscopes/scopea -C ~/myscopes/scopeb spec ncurses


Custom scopes come after the spack command and before the subcommand, and they specify a single path to a directory full of configuration files. You can add the same configuration files to that directory that you can add to any other scope (config.yaml, packages.yaml, etc.).

If multiple scopes are provided:

1.
Each must be preceded with the --config-scope or -C flag.
2.
They must be ordered from lowest to highest precedence.

Example: scopes for release and development

Suppose that you need to support simultaneous building of release and development versions of mypackage, where mypackage -> A -> B. You could create the following files:

~/myscopes/release/packages.yaml

packages:
  mypackage:
    version: [1.7]
  A:
    version: [2.3]
  B:
    version: [0.8]


~/myscopes/develop/packages.yaml

packages:
  mypackage:
    version: [develop]
  A:
    version: [develop]
  B:
    version: [develop]


You can switch between release and develop configurations using configuration arguments. You would type spack -C ~/myscopes/release when you want to build the designated release versions of mypackage, A, and B, and you would type spack -C ~/myscopes/develop when you want to build all of these packages at the develop version.

Example: swapping MPI providers

Suppose that you need to build two software packages, packagea and packageb. packagea is Python 2-based and packageb is Python 3-based. packagea only builds with OpenMPI and packageb only builds with MPICH. You can create different configuration scopes for use with packagea and packageb:

~/myscopes/packagea/packages.yaml

packages:
  python:
    version: [2.7.11]
  all:
    providers:
      mpi: [openmpi]


~/myscopes/packageb/packages.yaml

packages:
  python:
    version: [3.5.2]
  all:
    providers:
      mpi: [mpich]


Platform-specific Scopes

For each scope above (excluding environment scopes), there can also be platform-specific settings. For example, on most platforms, GCC is the preferred compiler. However, on macOS (darwin), Clang often works for more packages, and is set as the default compiler. This configuration is set in $(prefix)/etc/spack/defaults/darwin/packages.yaml. It will take precedence over settings in the defaults scope, but can still be overridden by settings in system, system/darwin, site, site/darwin, user, user/darwin, custom, or custom/darwin. So, the full scope precedence is:

1.
defaults
2.
defaults/<platform>
3.
system
4.
system/<platform>
5.
site
6.
site/<platform>
7.
user
8.
user/<platform>
9.
custom
10.
custom/<platform>

You can get the name to use for <platform> by running spack arch --platform. The system config scope has a <platform> section for sites at which /etc is mounted on multiple heterogeneous machines.
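
For example, on a typical Linux machine:

$ spack arch --platform
linux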

Scope Precedence

When spack queries for configuration parameters, it searches in higher-precedence scopes first. So, settings in a higher-precedence file can override those with the same key in a lower-precedence one. For list-valued settings, Spack prepends higher-precedence settings to lower-precedence settings. Completely ignoring higher-level configuration options is supported with the :: notation for keys (see Overriding entire sections below).

There are also special notations for string concatenation and precedence override. The +: notation can be used to force prepending strings or lists; for lists, this is identical to the default behavior. The -: notation works similarly, but appends values instead (see String Concatenation below).

Simple keys

Let's look at an example of overriding a single key in a Spack file. If your configurations look like this:

$(prefix)/etc/spack/defaults/config.yaml

config:
  install_tree: $spack/opt/spack
  build_stage:
    - $tempdir/$user/spack-stage
    - ~/.spack/stage


~/.spack/config.yaml

config:
  install_tree: /some/other/directory


Spack will only override install_tree in the config section, and will take the site preferences for other settings. You can see the final, combined configuration with the spack config get <configtype> command:

$ spack config get config
config:
  install_tree: /some/other/directory
  build_stage:
    - $tempdir/$user/spack-stage
    - ~/.spack/stage


String Concatenation

Above, the user config.yaml completely overrides specific settings in the default config.yaml. Sometimes, it is useful to add a suffix/prefix to a path or name instead. To do this, you can add -: to the end of a key name in a configuration file to append to the value from lower-precedence scopes. For example:

~/.spack/config.yaml

config:
  install_tree-: /my/custom/suffix/


Spack will then append the value given under install_tree-: to the lower-precedence install_tree setting:

$ spack config get config
config:
  install_tree: /some/other/directory/my/custom/suffix
  build_stage:
    - $tempdir/$user/spack-stage
    - ~/.spack/stage


Similarly, +: can be used to prepend to a path or name:

~/.spack/config.yaml

config:
  install_tree+: /my/custom/suffix/


Overriding entire sections

Above, the user config.yaml only overrides specific settings in the default config.yaml. Sometimes, it is useful to completely override lower-precedence settings. To do this, you can use two colons at the end of a key in a configuration file. For example:

~/.spack/config.yaml

config::
  install_tree: /some/other/directory


Spack will ignore all lower-precedence configuration under the config:: section:

$ spack config get config
config:
  install_tree: /some/other/directory


List-valued settings

Let's revisit the config.yaml example one more time. The build_stage setting's value is an ordered list of directories:

$(prefix)/etc/spack/defaults/config.yaml

build_stage:
  - $tempdir/$user/spack-stage
  - ~/.spack/stage


Suppose the user configuration adds its own list of build_stage paths:

~/.spack/config.yaml

build_stage:
  - /lustre-scratch/$user/spack
  - ~/mystage


Spack will first look at the paths in the defaults config.yaml, then the paths in the user's ~/.spack/config.yaml. The list in the higher-precedence scope is prepended to the defaults. spack config get config shows the result:

$ spack config get config
config:
  install_tree: /some/other/directory
  build_stage:
    - /lustre-scratch/$user/spack
    - ~/mystage
    - $tempdir/$user/spack-stage
    - ~/.spack/stage


As in Overriding entire sections, the higher-precedence scope can completely override the lower-precedence scope using ::. So if the user config looked like this:

~/.spack/config.yaml

build_stage::
  - /lustre-scratch/$user/spack
  - ~/mystage


The merged configuration would look like this:

$ spack config get config
config:
  install_tree: /some/other/directory
  build_stage:
    - /lustre-scratch/$user/spack
    - ~/mystage


Config File Variables

Spack understands several variables which can be used in config file paths wherever they appear. There are three sets of these variables: Spack-specific variables, environment variables, and user path variables. Spack-specific variables and environment variables are both indicated by prefixing the variable name with $. User path variables are indicated at the start of the path with ~ or ~user.

Spack-specific variables

Spack understands over a dozen special variables. These are:

  • $env: name of the currently active environment
  • $spack: path to the prefix of this Spack installation
  • $tempdir: default system temporary directory (as specified in Python's tempfile.tempdir variable).
  • $user: name of the current user
  • $user_cache_path: user cache directory (~/.spack unless overridden)
  • $architecture: the architecture triple of the current host, as detected by Spack.
  • $arch: alias for $architecture.
  • $platform: the platform of the current host, as detected by Spack.
  • $operating_system: the operating system of the current host, as detected by the distro python module.
  • $os: alias for $operating_system.
  • $target: the ISA target for the current host, as detected by ArchSpec. E.g. skylake or neoverse-n1.
  • $target_family: the target family for the current host, as detected by ArchSpec. E.g. x86_64 or aarch64.
  • $date: the current date in the format YYYY-MM-DD

Note that, as with shell variables, you can write these as $varname or with braces to distinguish the variable from surrounding characters: ${varname}. Their names are also case insensitive, meaning that $SPACK works just as well as $spack. These special variables are substituted first, so any environment variables with the same name will not be used.

Environment variables

After Spack-specific variables are evaluated, environment variables are expanded. These are formatted like Spack-specific variables, e.g., ${varname}. You can use this to insert environment variables in your Spack configuration.

User home directories

Spack performs Unix-style tilde expansion on paths in configuration files. This means that tilde (~) will expand to the current user's home directory, and ~user will expand to a specified user's home directory. The ~ must appear at the beginning of the path, or Spack will not expand it.

Environment Modifications

Spack allows you to prescribe custom environment modifications in a few places within its configuration files. Wherever these modifications are allowed, they are specified as a dictionary, as in the following example:

environment:
  set:
    LICENSE_FILE: '/path/to/license'
  unset:
    - CPATH
    - LIBRARY_PATH
  append_path:
    PATH: '/new/bin/dir'


The permitted actions are set, unset, append_path, prepend_path, and remove_path. All of them require a dictionary mapping variable names to the values used for the modification, except unset, which requires just a list of variable names. No particular order is guaranteed for the execution of these modifications.

Seeing Spack's Configuration

With so many scopes overriding each other, it can sometimes be difficult to understand what Spack's final configuration looks like.

Spack provides two useful ways to view the final "merged" version of any configuration file: spack config get and spack config blame.

spack config get

spack config get shows a fully merged configuration file, taking into account all scopes. For example, to see the fully merged config.yaml, you can type:

$ spack config get config
config:
  debug: false
  checksum: true
  verify_ssl: true
  dirty: false
  build_jobs: 8
  install_tree: $spack/opt/spack
  template_dirs:
    - $spack/templates
  directory_layout: {architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}
  build_stage:
    - $tempdir/$user/spack-stage
    - ~/.spack/stage
    - $spack/var/spack/stage
  source_cache: $spack/var/spack/cache
  misc_cache: ~/.spack/cache
  locks: true


Likewise, this will show the fully merged packages.yaml:

$ spack config get packages


You can use this in conjunction with the -C / --config-scope argument to see how your scope will affect Spack's configuration:

$ spack -C /path/to/my/scope config get packages


spack config blame

spack config blame functions much like spack config get, but it shows exactly which configuration file each preference came from. If you do not know why Spack is behaving a certain way, this can help you track down the problem:

$ spack --insecure -C ./my-scope -C ./my-scope-2 config blame config
==> Warning: You asked for --insecure. Will NOT check SSL certificates.
---                                                   config:
_builtin                                                debug: False
/home/myuser/spack/etc/spack/defaults/config.yaml:72    checksum: True
command_line                                            verify_ssl: False
./my-scope-2/config.yaml:2                              dirty: False
_builtin                                                build_jobs: 8
./my-scope/config.yaml:2                                install_tree: /path/to/some/tree
/home/myuser/spack/etc/spack/defaults/config.yaml:23    template_dirs:
/home/myuser/spack/etc/spack/defaults/config.yaml:24    - $spack/templates
/home/myuser/spack/etc/spack/defaults/config.yaml:28    directory_layout: {architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}
/home/myuser/spack/etc/spack/defaults/config.yaml:49    build_stage:
/home/myuser/spack/etc/spack/defaults/config.yaml:50    - $tempdir/$user/spack-stage
/home/myuser/spack/etc/spack/defaults/config.yaml:51    - ~/.spack/stage
/home/myuser/spack/etc/spack/defaults/config.yaml:52    - $spack/var/spack/stage
/home/myuser/spack/etc/spack/defaults/config.yaml:57    source_cache: $spack/var/spack/cache
/home/myuser/spack/etc/spack/defaults/config.yaml:62    misc_cache: ~/.spack/cache
/home/myuser/spack/etc/spack/defaults/config.yaml:86    locks: True


You can see above that the build_jobs and debug settings are built in and are not overridden by a configuration file. The verify_ssl setting comes from the --insecure option on the command line. dirty and install_tree come from the custom scopes ./my-scope and ./my-scope-2, and all other configuration options come from the default configuration files that ship with Spack.

Overriding Local Configuration

Spack's system and user scopes provide ways for administrators and users to set global defaults for all Spack instances, but for use cases where one wants a clean Spack installation, these scopes can be undesirable. For example, users may want to opt out of global system configuration, or they may want to ignore their own home directory settings when running in a continuous integration environment.

Spack also, by default, keeps various caches and user data in ~/.spack, but users may want to override these locations.

Spack provides three environment variables that allow you to override or opt out of configuration locations:

  • SPACK_USER_CONFIG_PATH: Override the path to use for the user scope (~/.spack by default).
  • SPACK_SYSTEM_CONFIG_PATH: Override the path to use for the system scope (/etc/spack by default).
  • SPACK_DISABLE_LOCAL_CONFIG: set this environment variable to completely disable both the system and user configuration directories. Spack will only consider its own defaults and site configuration locations.

And one that allows you to move the default cache location:

SPACK_USER_CACHE_PATH: Override the default path to use for user data (misc_cache, tests, reports, etc.)

With these settings, if you want to isolate Spack in a CI environment, you can do this:

export SPACK_DISABLE_LOCAL_CONFIG=true
export SPACK_USER_CACHE_PATH=/tmp/spack


SPACK SETTINGS (CONFIG.YAML)

Spack's basic configuration options are set in config.yaml. You can see the default settings by looking at etc/spack/defaults/config.yaml:

# -------------------------------------------------------------------------
# This is the default spack configuration file.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
#   $SPACK_ROOT/etc/spack/config.yaml
#
# Per-user settings (overrides default and site settings):
#   ~/.spack/config.yaml
# -------------------------------------------------------------------------
config:
  # This is the path to the root of the Spack install tree.
  # You can use $spack here to refer to the root of the spack instance.
  install_tree:
    root: $spack/opt/spack
    projections:
      all: "{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}"
    # install_tree can include an optional padded length (int or boolean)
    # default is False (do not pad)
    # if padded_length is True, Spack will pad as close to the system max path
    # length as possible
    # if padded_length is an integer, Spack will pad to that many characters,
    # assuming it is higher than the length of the install_tree root.
    # padded_length: 128
  # Locations where templates should be found
  template_dirs:
    - $spack/share/spack/templates
  # Directory where licenses should be located
  license_dir: $spack/etc/spack/licenses
  # Temporary locations Spack can try to use for builds.
  #
  # Recommended options are given below.
  #
  # Builds can be faster in temporary directories on some (e.g., HPC) systems.
  # Specifying `$tempdir` will ensure use of the default temporary directory
  # (i.e., `$TMP` or `$TMPDIR`).
  #
  # Another option that prevents conflicts and potential permission issues is
  # to specify `$user_cache_path/stage`, which ensures each user builds in their
  # home directory.
  #
  # A more traditional path uses the value of `$spack/var/spack/stage`, which
  # builds directly inside Spack's instance without staging them in a
  # temporary space. Problems with specifying a path inside a Spack instance
  # are that it precludes its use as a system package and its ability to be
  # pip installable.
  #
  # In Spack environment files, chaining onto existing system Spack
  # installations, the $env variable can be used to download, cache and build
  # into user-writable paths that are relative to the currently active
  # environment.
  #
  # In any case, if the username is not already in the path, Spack will append
  # the value of `$user` in an attempt to avoid potential conflicts between
  # users in shared temporary spaces.
  #
  # The build stage can be purged with `spack clean --stage` and
  # `spack clean -a`, so it is important that the specified directory uniquely
  # identifies Spack staging to avoid accidentally wiping out non-Spack work.
  build_stage:
    - $tempdir/$user/spack-stage
    - $user_cache_path/stage
    # - $spack/var/spack/stage
  # Directory in which to run tests and store test results.
  # Tests will be stored in directories named by date/time and package
  # name/hash.
  test_stage: $user_cache_path/test
  # Cache directory for already downloaded source tarballs and archived
  # repositories. This can be purged with `spack clean --downloads`.
  source_cache: $spack/var/spack/cache
  ## Directory where spack managed environments are created and stored
  # environments_root: $spack/var/spack/environments
  # Cache directory for miscellaneous files, like the package index.
  # This can be purged with `spack clean --misc-cache`
  misc_cache: $user_cache_path/cache
  # Timeout in seconds used for downloading sources etc. This only applies
  # to the connection phase and can be increased for slow connections or
  # servers. 0 means no timeout.
  connect_timeout: 10
  # If this is false, tools like curl that use SSL will not verify
  # certificates. (e.g., curl will use the -k option)
  verify_ssl: true
  # Suppress gpg warnings from binary package verification
  # Only suppresses warnings, gpg failure will still fail the install
  # Potential rationale to set True: users have already explicitly trusted the
  # gpg key they are using, and may not want to see repeated warnings that it
  # is self-signed or something of the sort.
  suppress_gpg_warnings: false
  # If set to true, Spack will attempt to build any compiler on the spec
  # that is not already available. If set to false, Spack will only use
  # compilers already configured in compilers.yaml
  install_missing_compilers: false
  # If set to true, Spack will always check checksums after downloading
  # archives. If false, Spack skips the checksum step.
  checksum: true
  # If set to true, Spack will fetch deprecated versions without warning.
  # If false, Spack will raise an error when trying to install a deprecated version.
  deprecated: false
  # If set to true, `spack install` and friends will NOT clean
  # potentially harmful variables from the build environment. Use wisely.
  dirty: false
  # The language the build environment will use. This will produce English
  # compiler messages by default, so the log parser can highlight errors.
  # If set to C, it will use English (see man locale).
  # If set to the empty string (''), it will use the language from the
  # user's environment.
  build_language: C
  # When set to true, concurrent instances of Spack will use locks to
  # avoid modifying the install tree, database file, etc. If false, Spack
  # will disable all locking, but you must NOT run concurrent instances
  # of Spack. For filesystems that don't support locking, you should set
  # this to false and run one Spack at a time, but otherwise we recommend
  # enabling locks.
  locks: true
  # The default url fetch method to use.
  # If set to 'curl', Spack will require curl on the user's system
  # If set to 'urllib', Spack will use python built-in libs to fetch
  url_fetch_method: urllib
  # The maximum number of jobs to use for the build system (e.g. `make`), when
  # the -j flag is not given on the command line. Defaults to 16 when not set.
  # Note that the maximum number of jobs is limited by the number of cores
  # available, taking thread affinity into account when supported. For instance:
  # - With `build_jobs: 16` and 4 cores available `spack install` will run `make -j4`
  # - With `build_jobs: 16` and 32 cores available `spack install` will run `make -j16`
  # - With `build_jobs: 2` and 4 cores available `spack install -j6` will run `make -j6`
  # build_jobs: 16
  # If set to true, Spack will use ccache to cache C compiles.
  ccache: false
  # The concretization algorithm to use in Spack. Options are:
  #
  # 'clingo': Uses a logic solver under the hood to solve DAGs with full
  #           backtracking and optimization for user preferences. Spack will
  #           try to bootstrap the logic solver, if not already available.
  #
  # 'original': Spack's original greedy, fixed-point concretizer. This
  #             algorithm can make decisions too early and will not backtrack
  #             sufficiently for many specs. This will soon be deprecated in
  #             favor of clingo.
  #
  # See `concretizer.yaml` for more settings you can fine-tune when
  # using clingo.
  concretizer: clingo
  # How long to wait to lock the Spack installation database. This lock is used
  # when Spack needs to manage its own package metadata and all operations are
  # expected to complete within the default time limit. The timeout should
  # therefore generally be left untouched.
  db_lock_timeout: 60
  # How long to wait when attempting to modify a package (e.g. to install it).
  # This value should typically be 'null' (never time out) unless the Spack
  # instance only ever has a single user at a time, and only if the user
  # anticipates that a significant delay indicates that the lock attempt will
  # never succeed.
  package_lock_timeout: null
  # Control how shared libraries are located at runtime on Linux. See the
  # Spack documentation for details.
  shared_linking:
    # Spack automatically embeds runtime search paths in ELF binaries for their
    # dependencies. Their type can either be "rpath" or "runpath". For glibc, rpath is
    # inherited and has precedence over LD_LIBRARY_PATH; runpath is not inherited
    # and of lower precedence. DO NOT MIX these within the same install tree.
    type: rpath
    # (Experimental) Embed absolute paths of dependent libraries directly in ELF
    # binaries to avoid runtime search. This can improve startup time of
    # executables with many dependencies, in particular on slow filesystems.
    bind: false
  # Set to 'false' to allow installation on filesystems that don't allow setgid bit
  # manipulation by unprivileged users (e.g. AFS)
  allow_sgid: true
  # Whether to show status information during building and installing packages.
  # This gives information about Spack's current progress as well as the current
  # and total number of packages. Information is shown both in the terminal
  # title and inline.
  install_status: true
  # Number of seconds a buildcache's index.json is cached locally before probing
  # for updates, within a single Spack invocation. Defaults to 10 minutes.
  binary_index_ttl: 600
  flags:
    # Whether to keep -Werror flags active in package builds.
    keep_werror: 'none'
  # A mapping of aliases that can be used to define new commands. For instance,
  # `sp: spec -I` will define a new command `sp` that will execute `spec` with
  # the `-I` argument. Aliases cannot override existing commands.
  aliases:
    concretise: concretize
    containerise: containerize
    rm: remove


These settings can be overridden in etc/spack/config.yaml or ~/.spack/config.yaml. See Configuration Scopes for details.

install_tree:root

The location where Spack will install packages and their dependencies. Default is $spack/opt/spack.

install_hash_length and install_path_scheme

The default Spack installation path can be very long and can create problems for scripts with hardcoded shebangs. Additionally, when using the Intel compiler, and if there is also a long list of dependencies, the compiler may segfault. If you see the following:

: internal error: ** The compiler has encountered an unexpected problem.
** Segmentation violation signal raised. **
Access violation or stack overflow. Please contact Intel Support for assistance.




it may be because the environment variables containing dependency specs are too long. There are two parameters that help with long path names. First, the install_hash_length parameter can set the length of the hash in the installation path to anywhere from 1 to 32 characters. The default path uses the full 32 characters.
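
For example, a minimal config.yaml sketch that shortens the hash to seven characters:

config:
  install_hash_length: 7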

Second, it is also possible to modify the entire installation scheme. By default Spack uses {architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}, where the tokens available for use in this directive are the same as those understood by the format() method. Using this parameter it is possible to use a different package layout or reduce the depth of the installation paths. For example:

config:
  install_path_scheme: '{name}/{version}/{hash:7}'




would install packages into sub-directories using only the package name, version and a hash length of 7 characters.

Either parameter affects only how the hash is represented in the installation directory. Be aware that the shorter the hash, the more likely naming conflicts become. These parameters are independent of those used to configure module names.

WARNING:

Modifying the installation hash length or path scheme after packages have been installed will prevent Spack from being able to find the old installation directories.


build_stage

Spack is designed to run out of a user home directory, and on many systems the home directory is a (slow) network file system. On most systems, building in a temporary file system is faster. Usually, there is also more space available in the temporary location than in the home directory. If the username is not already in the path, Spack will append the value of $user to the selected build_stage path.

WARNING:

We highly recommend specifying build_stage paths that distinguish between staging and other activities to ensure spack clean does not inadvertently remove unrelated files. Spack prepends spack-stage- to temporary staging directory names to reduce this risk. Using a combination of spack and/or stage in each specified path, as shown in the default settings and documented examples, adds another layer of protection.


By default, Spack's build_stage is configured like this:

build_stage:
  - $tempdir/$user/spack-stage
  - ~/.spack/stage


This can be an ordered list of paths that Spack should search when trying to find a temporary directory for the build stage. The list is searched in order, and Spack will use the first directory to which it has write access.

Specifying ~/.spack/stage first will ensure each user builds in their home directory. The historic Spack stage path $spack/var/spack/stage will build directly inside the Spack instance. See Config File Variables for more on $tempdir and $spack.

When Spack builds a package, it creates a temporary directory within the build_stage. After the package is successfully installed, Spack deletes the temporary directory it used to build. Unsuccessful builds are not deleted, but you can manually purge them with spack clean --stage.

NOTE:

The build will fail if there is no writable directory in the build_stage list. Note that any user- and site-specific build_stage settings are searched before the defaults.


source_cache

Location to cache downloaded tarballs and repositories. By default these are stored in $spack/var/spack/cache. These are stored indefinitely by default. Can be purged with spack clean --downloads.

misc_cache

Temporary directory to store long-lived cache files, such as indices of packages available in repositories. Defaults to ~/.spack/cache. Can be purged with spack clean --misc-cache.

verify_ssl

When set to true (default) Spack will verify certificates of remote hosts when making SSL connections. Set to false to disable verification; tools like curl will then use their --insecure options. Disabling this can expose you to attacks. Use at your own risk.

checksum

When set to true, Spack verifies downloaded source code using a checksum, and will refuse to build packages that it cannot verify. Set to false to disable these checks. Disabling this can expose you to attacks. Use at your own risk.

locks

When set to true, concurrent instances of Spack will use locks to avoid modifying the install tree, database file, etc. If false, Spack will disable all locking, but you must not run concurrent instances of Spack. For file systems that don't support locking, you should set this to false and run one Spack at a time, but otherwise we recommend enabling locks.

dirty

By default, Spack unsets variables in your environment that can change the way packages build. This includes LD_LIBRARY_PATH, CPATH, LIBRARY_PATH, DYLD_LIBRARY_PATH, and others.

By default, builds are clean, but on some machines, compilers and other tools may need custom LD_LIBRARY_PATH settings to run. You can set dirty to true to skip the cleaning step and make all builds "dirty" by default. Be aware that this will reduce the reproducibility of builds.

build_jobs

Unless overridden in a package or on the command line, Spack builds all packages in parallel. The default parallelism is equal to the number of cores available to the process, up to 16 (the default of build_jobs). For a build system that uses Makefiles, this means spack install runs:

  • make -j<build_jobs>, when build_jobs is less than the number of cores available
  • make -j<ncores>, when build_jobs is greater than or equal to the number of cores available

If you work on a shared login node or have a strict ulimit, it may be necessary to set the default to a lower value. By setting build_jobs to 4, for example, commands like spack install will run make -j4 instead of hogging every core. To build all software in serial, set build_jobs to 1.
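
For example, the equivalent config.yaml entry (a minimal sketch):

config:
  build_jobs: 4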

Note that specifying the number of jobs on the command line always takes priority, so that spack install -j<n> always runs make -j<n>, even when that exceeds the number of cores available.

ccache

When set to true, Spack will use ccache to cache compiles. This is useful specifically in two cases: (1) when using spack dev-build, and (2) when building the same package with many different variants. The default is false.

When enabled, Spack will look inside your PATH for a ccache executable and stop if it is not found. Some systems come with ccache, but it can also be installed using spack install ccache. ccache comes with reasonable defaults for cache size and location. (See the Configuration settings section of man ccache to learn more about the default settings and how to change them). Please note that we currently disable ccache's hash_dir feature to avoid an issue with the stage directory (see https://github.com/spack/spack/pull/3761#issuecomment-294352232).
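
Enabling ccache is then a one-line change in config.yaml (a minimal sketch):

config:
  ccache: true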

shared_linking:type

Control whether Spack embeds RPATH or RUNPATH attributes in ELF binaries so that they can find their dependencies. Has no effect on macOS. Two options are allowed:

1.
rpath uses RPATH and forces the --disable-new-dtags flag to be passed to the linker
2.
runpath uses RUNPATH and forces the --enable-new-dtags flag to be passed to the linker



RPATH search paths have higher precedence than LD_LIBRARY_PATH and ld.so will search for libraries in transitive RPATHs of parent objects.

RUNPATH search paths have lower precedence than LD_LIBRARY_PATH, and ld.so will ONLY search for dependencies in the RUNPATH of the loading object.

DO NOT MIX the two options within the same install tree.
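
For example, a config.yaml sketch that opts a fresh install tree into RUNPATH (again, do not mix types within one tree):

config:
  shared_linking:
    type: runpath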

shared_linking:bind

This is an experimental option that controls whether Spack embeds absolute paths to needed shared libraries in ELF executables and shared libraries on Linux. Setting this option to true has two advantages:

1.
Improved startup time: when running an executable, the dynamic loader does not have to perform a search for needed libraries, they are loaded directly.
2.
Reliability: libraries loaded at runtime are those that were linked to. This minimizes the risk of accidentally picking up system libraries.

In the current implementation, Spack sets the soname (shared object name) of libraries to their install path upon installation. This has two implications:

1.
binding does not apply to libraries installed before the option was enabled;
2.
toggling the option off does not prevent binding of libraries installed when the option was still enabled.

It is also worth noting that:

1.
Applications relying on dlopen(3) will continue to work, even when they open a library by name. This is because RPATHs are still retained in binaries even when bind is enabled.
2.
LD_PRELOAD continues to work for the typical use case of overriding symbols, such as preloading a library with a more efficient malloc. However, the preloaded library will be loaded in addition to, rather than in place of, another library with the same name; this can be problematic in the very rare cases where libraries rely on a particular init or fini order.

NOTE:

In some cases packages provide stub libraries that only contain an interface for linking, but lack an implementation for runtime. An example of this is libcuda.so, provided by the CUDA toolkit; it can be used to link against, but the library needed at runtime is the one installed with the CUDA driver. To avoid binding those libraries, they can be marked as non-bindable using a property in the package:

class Example(Package):
    non_bindable_shared_objects = ["libinterface.so"]




install_status

When set to true, Spack will show information about its current progress as well as the current and total package numbers. Progress is shown both in the terminal title and inline. Setting it to false will not show any progress information.

To work properly, this requires your terminal to reset its title after Spack has finished its work, otherwise Spack's status information will remain in the terminal's title indefinitely. Most terminals should already be set up this way and clear Spack's status information.

aliases

Aliases can be used to define new Spack commands. They can be either shortcuts for longer commands or include specific arguments for convenience. For instance, if users want to use spack install's -v argument all the time, they can create a new alias called inst that will always call install -v:

aliases:
  inst: install -v


PACKAGE SETTINGS (PACKAGES.YAML)

Spack allows you to customize how your software is built through the packages.yaml file. Using it, you can make Spack prefer particular implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK), or you can make it prefer to build with particular compilers. You can also tell Spack to use external software installations already present on your system.

At a high level, the packages.yaml file is structured like this:

packages:
  package1:
    # settings for package1
  package2:
    # settings for package2
  # ...
  all:
    # settings that apply to all packages.


So you can either set build preferences specifically for one package, or you can specify that certain settings should apply to all packages. The types of settings you can customize are described in detail below.

Spack's build defaults are in the default etc/spack/defaults/packages.yaml file. You can override them in ~/.spack/packages.yaml or etc/spack/packages.yaml. For more details on how this works, see Configuration Scopes.

External Packages

Spack can be configured to use externally-installed packages rather than building its own packages. This may be desirable if machines ship with system packages, such as a customized MPI that should be used instead of Spack building its own MPI.

External packages are configured through the packages.yaml file. Here's an example of an external configuration:

packages:
  openmpi:
    externals:
    - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
      prefix: /opt/openmpi-1.4.3
    - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
      prefix: /opt/openmpi-1.4.3-debug
    - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
      prefix: /opt/openmpi-1.6.5-intel


This example lists three installations of OpenMPI, one built with GCC, one built with GCC and debug information, and another built with Intel. If Spack is asked to build a package that uses one of these MPIs as a dependency, it will use the pre-installed OpenMPI in the given directory. Note that the specified path is the top-level install prefix, not the bin subdirectory.

packages.yaml can also be used to specify modules to load instead of the installation prefixes. The following example says that module CMake/3.7.2 provides cmake version 3.7.2.

cmake:
  externals:
  - spec: cmake@3.7.2
    modules:
    - CMake/3.7.2


Each packages.yaml begins with a packages: attribute, followed by a list of package names. To specify externals, add an externals: attribute under the package name that lists the external installations. Each external should specify a spec: string that is as well-defined as reasonably possible. If a package lacks a spec component, such as a missing compiler or package version, then Spack will guess the missing component based on its most-favored packages, and it may guess incorrectly.

Each package version and compiler listed in an external should have entries in Spack's packages and compiler configuration, even though the package and compiler may not ever be built.

Prevent packages from being built from sources

Adding an external spec in packages.yaml allows Spack to use an external location, but it does not prevent Spack from building packages from sources. In the above example, Spack might choose for many valid reasons to start building and linking with the latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.

To prevent this, the packages.yaml configuration also allows packages to be flagged as non-buildable. The previous example could be modified to be:

packages:
  openmpi:
    externals:
    - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
      prefix: /opt/openmpi-1.4.3
    - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
      prefix: /opt/openmpi-1.4.3-debug
    - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
      prefix: /opt/openmpi-1.6.5-intel
    buildable: False


The addition of the buildable flag tells Spack that it should never build its own version of OpenMPI from sources, and it will instead always rely on a pre-built OpenMPI.

NOTE:

If concretizer:reuse is on (see Concretization Settings (concretizer.yaml) for more information on that flag) pre-built specs include specs already available from a local store, an upstream store, a registered buildcache or specs marked as externals in packages.yaml. If concretizer:reuse is off, only external specs in packages.yaml are included in the list of pre-built specs.


If an external module is specified as not buildable, then Spack will load the external module into the build environment so it can be used for linking.

The buildable property does not need to be paired with external packages. It can also be used alone to forbid packages that may be buggy or otherwise undesirable.

Non-buildable virtual packages

Virtual packages in Spack can also be specified as not buildable, and external implementations can be provided. In the example above, OpenMPI is configured as not buildable, but Spack will often prefer other MPI implementations over the externally available OpenMPI. Every MPI provider could be marked as not buildable individually, but it is more convenient to mark the virtual package itself:

packages:
  mpi:
    buildable: False
  openmpi:
    externals:
    - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
      prefix: /opt/openmpi-1.4.3
    - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
      prefix: /opt/openmpi-1.4.3-debug
    - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
      prefix: /opt/openmpi-1.6.5-intel


Spack can then use any of the listed external implementations of MPI to satisfy a dependency, and will choose depending on the compiler and architecture.

In cases where the concretizer is configured to reuse specs, and other mpi providers (available via stores or buildcaches) are not wanted, Spack can be configured to require specs matching only the available externals:

packages:
  mpi:
    buildable: False
    require:
    - one_of: [
        "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
        "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
        "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
      ]
  openmpi:
    externals:
    - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
      prefix: /opt/openmpi-1.4.3
    - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
      prefix: /opt/openmpi-1.4.3-debug
    - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
      prefix: /opt/openmpi-1.6.5-intel


This configuration prevents any spec that uses MPI and originates from stores or buildcaches from being reused, unless it matches the requirements under packages:mpi:require. For more information on requirements see Package Requirements.

Automatically Find External Packages

You can run the spack external find command to search for system-provided packages and add them to packages.yaml. After running this command your packages.yaml may include new entries:

packages:
  cmake:
    externals:
    - spec: cmake@3.17.2
      prefix: /usr
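
The search can also be restricted to specific packages; cmake here is just an illustration:

$ spack external find cmake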


Generally this is useful for detecting a small set of commonly-used packages; for now it is mostly limited to finding build-only dependencies. Specific limitations include:

  • Packages are not discoverable by default: For a package to be discoverable with spack external find, special detection logic needs to be added to its package recipe; see the Spack documentation on making packages discoverable for more details.
  • The logic does not search through module files; it can only detect packages with executables defined in PATH. You can help Spack locate externals that use module files by loading any associated modules for packages that you want Spack to know about before running spack external find.
  • Spack does not overwrite existing entries in the package configuration: If there is an external defined for a spec at any configuration scope, then Spack will not add a new external entry (spack config blame packages can help locate all external entries).

Package Requirements

Spack can be configured to always use certain compilers, package versions, and variants during concretization through package requirements.

Package requirements are useful when you find yourself repeatedly specifying the same constraints on the command line, and wish that Spack respects these constraints whether you mention them explicitly or not. Another use case is specifying constraints that should apply to all root specs in an environment, without having to repeat the constraint everywhere.

Apart from that, requirements config is more flexible than constraints on the command line, because it can specify constraints on packages when they occur as a dependency. In contrast, on the command line it is not possible to specify constraints on dependencies while also keeping those dependencies optional.

SEE ALSO:

FAQ: Why does Spack pick particular versions and variants?


Requirements syntax

The package requirements configuration is specified in packages.yaml, keyed by package name and expressed using the Spec syntax. In the simplest case you can specify attributes that you always want the package to have by providing a single spec string to require:

packages:
  libfabric:
    require: "@1.13.2"


In the above example, libfabric will always build with version 1.13.2. If you need to compose multiple constraints, require accepts a list of strings:

packages:
  libfabric:
    require:
    - "@1.13.2"
    - "%gcc"


In this case libfabric will always build with version 1.13.2 and using GCC as a compiler.

For more complex use cases, require also accepts a list of objects. These objects must have either an any_of or a one_of field containing a list of spec strings, and they can optionally have a when and a message attribute:

packages:
  openmpi:
    require:
    - any_of: ["@4.1.5", "%gcc"]
      message: "in this example only 4.1.5 can build with other compilers"


any_of is a list of specs. At least one of those specs must be satisfied, and the concretized spec may match more than one. In the above example, that means you could build openmpi@4.1.5%gcc, openmpi@4.1.5%clang or openmpi@3.9%gcc, but not openmpi@3.9%clang.

If a custom message is provided, and the requirement is not satisfiable, Spack will print the custom error message:

$ spack spec openmpi@3.9%clang
==> Error: in this example only 4.1.5 can build with other compilers


We could express a similar requirement using the when attribute:

packages:
  openmpi:
    require:
    - any_of: ["%gcc"]
      when: "@:4.1.4"
      message: "in this example only 4.1.5 can build with other compilers"


In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC. For readability, Spack also allows a spec key accepting a string when there is only a single constraint:

packages:
  openmpi:
    require:
    - spec: "%gcc"
      when: "@:4.1.4"
      message: "in this example only 4.1.5 can build with other compilers"


This code snippet and the one before it are semantically equivalent.

Finally, instead of any_of you can use one_of, which also takes a list of specs. The final concretized spec must match exactly one of them:

packages:
  mpich:
    require:
    - one_of: ["+cuda", "+rocm"]


In the example above, that means you could build mpich+cuda or mpich+rocm but not mpich+cuda+rocm.

NOTE:

For any_of and one_of, the order of specs indicates a preference: items that appear earlier in the list are preferred (note that these preferences can be ignored in favor of others).


NOTE:

When using a conditional requirement, Spack is allowed to actively avoid the triggering condition (the when=... spec) if that leads to a concrete spec with better scores in the optimization criteria. To check the current optimization criteria and their priorities you can run spack solve zlib.


Setting default requirements

You can also set default requirements for all packages under all like this:

packages:
  all:
    require: '%clang'


which means every spec will be required to use clang as a compiler.

Note that in this case all represents a default set of requirements - if there are specific package requirements, then the default requirements under all are disregarded. For example, with a configuration like this:

packages:
  all:
    require: '%clang'
  cmake:
    require: '%gcc'


Spack requires cmake to use gcc and all other nodes (including cmake dependencies) to use clang.

Setting requirements on virtual specs

A requirement on a virtual spec applies whenever that virtual is present in the DAG. This can be useful for fixing which virtual provider you want to use:

packages:
  mpi:
    require: 'mvapich2 %gcc'


With the configuration above the only allowed mpi provider is mvapich2 %gcc.

Requirements on the virtual spec and on the specific provider are both applied, if present. For instance with a configuration like:

packages:
  mpi:
    require: 'mvapich2 %gcc'
  mvapich2:
    require: '~cuda'


you will use mvapich2~cuda %gcc as an mpi provider.

Package Preferences

In some cases package requirements can be too strong, and package preferences are the better option. Package preferences do not impose constraints on particular versions or variant values; they only set defaults. The concretizer is free to change them if it must, due to other constraints, and it also prefers reusing installed packages over building new ones that are a better match for preferences.

SEE ALSO:

FAQ: Why does Spack pick particular versions and variants?


Most package preferences (compilers, targets and providers) can only be set globally under the all section of packages.yaml:

packages:
  all:
    compiler: [gcc@12.2.0, clang@12:, oneapi@2023:]
    target: [x86_64_v3]
    providers:
      mpi: [mvapich2, mpich, openmpi]


These preferences override Spack's default and effectively reorder priorities when looking for the best compiler, target or virtual package provider. Each preference takes an ordered list of spec constraints, with earlier entries in the list being preferred over later entries.

In the example above all packages prefer to be compiled with gcc@12.2.0, to target the x86_64_v3 microarchitecture and to use mvapich2 if they depend on mpi.

Variant and version preferences can be set under package-specific sections of the packages.yaml file:

packages:
  opencv:
    variants: +debug
  gperftools:
    version: [2.2, 2.4, 2.3]


In this case, the preference for opencv is to build with debug options, while gperftools prefers version 2.2 over 2.4 or 2.3.

Any preference can be overwritten on the command line if explicitly requested.
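
For example, explicitly requesting a version that is not the most preferred one overrides the preference above:

$ spack install gperftools@2.4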

Preferences cannot overcome explicit constraints, as they only set a preferred ordering among homogeneous attribute values. Going back to the example, if gperftools@2.3: was requested, then Spack will install version 2.4 since the most preferred version 2.2 is prohibited by the version constraint.

Package Permissions

Spack can be configured to assign permissions to the files installed by a package.

In the packages.yaml file under permissions, the attributes read, write, and group control the package permissions. These attributes can be set per-package, or for all packages under all. If permissions are set under all and for a specific package, the package-specific settings take precedence.

The read and write attributes take one of user, group, or world.

packages:
  all:
    permissions:
      write: group
      group: spack
  my_app:
    permissions:
      read: group
      group: my_team


The permissions settings describe the broadest level of access to installations of the specified packages. The execute permissions of the file are set to the same level as read permissions for those files that are executable. The default setting for read is world, and for write is user. In the example above, installations of my_app will be installed with user and group permissions but no world permissions, and owned by the group my_team. All other packages will be installed with user and group write privileges, and world read privileges. Those packages will be owned by the group spack.

The group attribute assigns a Unix-style group to a package. All files installed by the package will be owned by the assigned group, and the sticky group bit will be set on the install prefix and all directories inside the install prefix. This will ensure that even manually placed files within the install prefix are owned by the assigned group. If no group is assigned, Spack will allow the OS default behavior to go as expected.

Assigning Package Attributes

You can assign class-level attributes in the configuration:

packages:
  mpileaks:
    package_attributes:
      # Override existing attributes
      url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
      # ... or add new ones
      x: 1


Attributes set this way will be accessible to any method executed in the package.py file (e.g. the install() method). Values for these attributes may be any value parseable by YAML.

These can only be applied to specific packages, not "all" or virtual packages.

CONCRETIZATION SETTINGS (CONCRETIZER.YAML)

The concretizer.yaml configuration file allows you to customize aspects of the algorithm used to select the dependencies you install. The default configuration is the following:

# -------------------------------------------------------------------------
# This is the default spack configuration file.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing
# `$SPACK_ROOT/etc/spack/concretizer.yaml`, `~/.spack/concretizer.yaml`,
# or by adding a `concretizer:` section to an environment.
# -------------------------------------------------------------------------
concretizer:
  # Whether to consider installed packages or packages from buildcaches when
  # concretizing specs. If `true`, we'll try to use as many installs/binaries
  # as possible, rather than building. If `false`, we'll always give you a fresh
  # concretization. If `dependencies`, we'll only reuse dependencies but
  # give you a fresh concretization for your root specs.
  reuse: dependencies
  # Options that tune which targets are considered for concretization. The
  # concretization process is very sensitive to the number of targets, and the time
  # needed to reach a solution increases noticeably with the number of targets
  # considered.
  targets:
    # Determine whether we want to target specific or generic
    # microarchitectures. Valid values are: "microarchitectures" or "generic".
    # An example of "microarchitectures" would be "skylake" or "bulldozer",
    # while an example of "generic" would be "aarch64" or "x86_64_v4".
    granularity: microarchitectures
    # If "false" allow targets that are incompatible with the current host (for
    # instance concretize with target "icelake" while running on "haswell").
    # If "true" only allow targets that are compatible with the host.
    host_compatible: true
  # When "true" concretize root specs of environments together, so that each unique
  # package in an environment corresponds to one concrete spec. This ensures
  # environments can always be activated. When "false" perform concretization separately
  # on each root spec, allowing different versions and variants of the same package in
  # an environment.
  unify: true
  # Option to deal with possible duplicate nodes (i.e. different nodes from the same package) in the DAG.
  duplicates:
    # "none": allows a single node for any package in the DAG.
    # "minimal": allows the duplication of 'build-tools' nodes only (e.g. py-setuptools, cmake etc.)
    # "full" (experimental): allows separation of the entire build-tool stack (e.g. the entire "cmake" subDAG)
    strategy: minimal


Reuse already installed packages

The reuse attribute controls whether Spack will prefer to use installed packages (true), or whether it will do a "fresh" installation and prefer the latest settings from package.py files and packages.yaml (false). You can use:

$ spack install --reuse <spec>


to enable reuse for a single installation, and you can use:

$ spack install --fresh <spec>


to do a fresh install if reuse is enabled by default. reuse: dependencies is the default.
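
To change the default persistently rather than per command, set the reuse attribute in concretizer.yaml (a minimal sketch):

concretizer:
  reuse: true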

SEE ALSO:

FAQ: Why does Spack pick particular versions and variants?


Selection of the target microarchitectures

The options under the targets attribute control which targets are considered during a solve. Currently the options in this section are only configurable from the concretizer.yaml file and there are no corresponding command line arguments to enable them for a single solve.

The granularity option can take two possible values: microarchitectures and generic. If set to:

concretizer:
  targets:
    granularity: microarchitectures


Spack will consider all the microarchitectures known to archspec to label nodes for compatibility. If instead the option is set to:

concretizer:
  targets:
    granularity: generic


Spack will consider only generic microarchitectures. For instance, when running on a Haswell node, Spack will consider haswell as the best target in the former case and x86_64_v3 as the best target in the latter case.

The host_compatible option is a Boolean option that determines whether the microarchitectures considered during the solve are constrained to be compatible with the host Spack is currently running on. For instance, if this option is set to true, a user cannot concretize for target=icelake while running on a Haswell node.

Duplicate nodes

The duplicates attribute controls whether the DAG can contain multiple configurations of the same package. This is mainly relevant for build dependencies, which may have their version pinned by some nodes, and thus be required at different versions by different nodes in the same DAG.

The strategy option controls how the solver deals with duplicates. If the value is none, then a single configuration per package is allowed in the DAG. This means, for instance, that only a single cmake or a single py-setuptools version is allowed. The result would be a slightly faster concretization, at the expense of making a few specs unsolvable.

If the value is minimal, Spack will allow packages tagged as build-tools to have duplicates. This makes it possible, for instance, to concretize specs whose nodes require different, and incompatible, ranges of some build tool. For instance, in the figure below the latest py-shapely requires a newer py-setuptools, while py-numpy still needs an older version:

[image]


Up to Spack v0.20 duplicates:strategy:none was the default (and only) behavior. From Spack v0.21 the default behavior is duplicates:strategy:minimal.
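
To revert to the pre-v0.21 behavior, a minimal concretizer.yaml sketch:

concretizer:
  duplicates:
    strategy: none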

ENVIRONMENTS (SPACK.YAML)

An environment is used to group together a set of specs for the purpose of building, rebuilding and deploying in a coherent fashion. Environments provide a number of advantages over the à la carte approach of building and loading individual Spack modules:

1.
Environments separate the steps of (a) choosing what to install, (b) concretizing, and (c) installing. This allows Environments to remain stable and repeatable, even if Spack packages are upgraded: specs are only re-concretized when the user explicitly asks for it. It is even possible to reliably transport environments between different computers running different versions of Spack!
2.
Environments allow several specs to be built at once; a more robust solution than ad-hoc scripts making multiple calls to spack install.
3.
An Environment that is built as a whole can be loaded as a whole into the user environment. An Environment can be built to maintain a filesystem view of its packages, and the environment can load that view into the user environment at activation time. Spack can also generate a script to load all modules related to an environment.

Other packaging systems also provide environments that are similar in some ways to Spack environments; for example, Conda environments or Python Virtual Environments. Spack environments provide some distinctive features:

1.
A spec installed "in" an environment is no different from the same spec installed anywhere else in Spack. Environments are assembled simply by collecting together a set of specs.
2.
Spack Environments may contain more than one spec of the same package.

Spack uses a "manifest and lock" model similar to Bundler gemfiles and other package managers. The user input file is named spack.yaml and the lock file is named spack.lock

Using Environments

Here we follow a typical use case of creating, concretizing, installing and loading an environment.

Creating a managed Environment

An environment is created by:

$ spack env create myenv


Spack then creates the directory var/spack/environments/myenv.

NOTE:

All managed environments by default are stored in the var/spack/environments folder. This location can be changed by setting the environments_root variable in config.yaml.


In the var/spack/environments/myenv directory, Spack creates the file spack.yaml and the hidden directory .spack-env.

Spack stores metadata in the .spack-env directory. User interaction will occur through the spack.yaml file and the Spack commands that affect it. When the environment is concretized, Spack will create a file spack.lock with the concrete information for the environment.

In addition to being the default location for the view associated with an Environment, the .spack-env directory also contains:

  • repo/: A repo consisting of the Spack packages used in this environment. This allows the environment to build the same, in theory, even on different versions of Spack with different packages!
  • logs/: A directory containing the build logs for the packages in this Environment.



Spack Environments can also be created from either a manifest file (usually, but not necessarily, named spack.yaml) or a lockfile. To create an Environment from a manifest:

$ spack env create myenv spack.yaml


To create an Environment from a spack.lock lockfile:

$ spack env create myenv spack.lock


Either of these commands can also take a full path to the initialization file.

A Spack Environment created from a spack.yaml manifest is guaranteed to have the same root specs as the original Environment, but may concretize differently. A Spack Environment created from a spack.lock lockfile is guaranteed to have the same concrete specs as the original Environment. Either may obviously then differ as the user modifies it.

Activating an Environment

To activate an environment, use the following command:

$ spack env activate myenv


By default, spack env activate will load the view associated with the Environment into the user environment. The -v, --with-view argument ensures this behavior, and the -V, --without-view argument activates the environment without changing the user environment variables.

The -p option to the spack env activate command modifies the user's prompt to begin with the environment name in brackets.

$ spack env activate -p myenv
[myenv] $ ...


To deactivate an environment, use the command:

$ spack env deactivate


or the shortcut alias

$ despacktivate


If the environment was activated with its view, deactivating the environment will remove the view from the user environment.

Anonymous Environments

Any directory can be treated as an environment if it contains a file spack.yaml. To load an anonymous environment, use:

$ spack env activate -d /path/to/directory


An anonymous environment can be created in place using the command:

$ spack env create -d .


In this case Spack simply creates a spack.yaml file in the requested directory.

Environment Sensitive Commands

Spack commands are environment sensitive. For example, the find command shows only the specs in the active Environment if an Environment has been activated. Similarly, the install and uninstall commands act on the active environment.

$ spack find
==> 0 installed packages
$ spack install zlib@1.2.11
==> Installing zlib-1.2.11-q6cqrdto4iktfg6qyqcc5u4vmfmwb7iv
==> No binary for zlib-1.2.11-q6cqrdto4iktfg6qyqcc5u4vmfmwb7iv found: installing from source
==> zlib: Executing phase: 'install'
[+] ~/spack/opt/spack/linux-rhel7-broadwell/gcc-8.1.0/zlib-1.2.11-q6cqrdto4iktfg6qyqcc5u4vmfmwb7iv
$ spack env activate myenv
$ spack find
==> In environment myenv
==> No root specs
==> 0 installed packages
$ spack install zlib@1.2.8
==> Installing zlib-1.2.8-yfc7epf57nsfn2gn4notccaiyxha6z7x
==> No binary for zlib-1.2.8-yfc7epf57nsfn2gn4notccaiyxha6z7x found: installing from source
==> zlib: Executing phase: 'install'
[+] ~/spack/opt/spack/linux-rhel7-broadwell/gcc-8.1.0/zlib-1.2.8-yfc7epf57nsfn2gn4notccaiyxha6z7x
==> Updating view at ~/spack/var/spack/environments/myenv/.spack-env/view
$ spack find
==> In environment myenv
==> Root specs
zlib@1.2.8
==> 1 installed package
-- linux-rhel7-broadwell / gcc@8.1.0 ----------------------------
zlib@1.2.8
$ despacktivate
$ spack find
==> 2 installed packages
-- linux-rhel7-broadwell / gcc@8.1.0 ----------------------------
zlib@1.2.8  zlib@1.2.11


Note that when we installed the abstract spec zlib@1.2.8, it was presented as a root of the Environment. All explicitly installed packages will be listed as roots of the Environment.

All of the Spack commands that act on the list of installed specs are Environment-sensitive in this way, including install, uninstall, find, extensions, and more. In the Configuring Environments section we will discuss Environment-sensitive commands further.

Adding Abstract Specs

An abstract spec is the user-specified spec before Spack has applied any defaults or dependency information.

Users can add abstract specs to an Environment using the spack add command. The most important component of an Environment is a list of abstract specs.

Adding a spec adds to the manifest (the spack.yaml file), which is used to define the roots of the Environment, but does not affect the concrete specs in the lockfile, nor does it install the spec.

The spack add command is environment aware. It adds to the currently active environment. All environment aware commands can also be called using the spack -e flag to specify the environment.

$ spack env activate myenv
$ spack add mpileaks


or

$ spack -e myenv add python


Concretizing

Once some user specs have been added to an environment, they can be concretized. There are at the moment three different modes of operation to concretize an environment, which are explained in detail in Spec concretization. Regardless of which mode of operation has been chosen, the following command will ensure all the root specs are concretized according to the constraints that are prescribed in the configuration:

[myenv]$ spack concretize


In the case of specs that are not concretized together, the command above will concretize only the specs that were added and not yet concretized. Forcing a re-concretization of all the specs can be done instead with this command:

[myenv]$ spack concretize -f


When the -f flag is not used to reconcretize all specs, Spack guarantees that already concretized specs are unchanged in the environment.

The concretize command does not install any packages. For packages that have already been installed outside of the environment, the process of adding the spec and concretizing is identical to installing the spec, assuming it concretizes to the exact spec that was installed outside of the environment.

The spack find command can show concretized specs separately from installed specs using the -c (--concretized) flag.

[myenv]$ spack add zlib
[myenv]$ spack concretize
[myenv]$ spack find -c
==> In environment myenv
==> Root specs
zlib
==> Concretized roots
-- linux-rhel7-x86_64 / gcc@4.9.3 -------------------------------
zlib@1.2.11
==> 0 installed packages


Installing an Environment

In addition to installing individual specs into an Environment, one can install the entire Environment at once using the command

[myenv]$ spack install


If the Environment has been concretized, Spack will install the concretized specs. Otherwise, spack install will first concretize the Environment and then install the concretized specs.

NOTE:

Every spack install process builds one package at a time with multiple build jobs, controlled by the -j flag and the config:build_jobs option (see build_jobs). To speed up environment builds further, independent packages can be installed in parallel by launching more Spack instances. For example, the following will build at most four packages in parallel using three background jobs:

[myenv]$ spack install & spack install & spack install & spack install


Another option is to generate a Makefile and run make -j<N> to control the number of parallel install processes. See Generating Depfiles from Environments for details.



As it installs, spack install creates symbolic links in the logs/ directory in the Environment, allowing for easy inspection of build logs related to that environment. The spack install command also stores a Spack repo containing the package.py file used at install time for each package in the repos/ directory in the Environment.

The --no-add option can be used in a concrete environment to tell Spack to install specs that are already present in the environment without adding any new root specs. For root specs provided to spack install on the command line, --no-add is the default; for dependency specs it is optional. In other words, if a root spec provided on the command line has an unambiguous match in the active concrete environment, Spack does not require the --no-add option to prevent the spec from being added again. A spec that already exists in the environment only as a dependency, however, will be added as a root spec unless --no-add is given.

Developing Packages in a Spack Environment

The spack develop command allows one to develop Spack packages in an environment. It requires a spec containing a concrete version, and will configure Spack to install the package from local source. By default, it will also clone the package to a subdirectory in the environment. This package will have a special variant dev_path set, and Spack will ensure the package and its dependents are rebuilt any time the environment is installed if the package's local source code has been modified. Spack ensures that all instances of a developed package in the environment are concretized to match the version (and other constraints) passed as the spec argument to the spack develop command.

For packages with git attributes, git branches, tags, and commits can also be used as valid concrete versions (see Version specifier). This means that for a package foo, spack develop foo@git.main will clone the main branch of the package, and spack install will install from that git clone if foo is in the environment. Further development on foo can be tested by reinstalling the environment, and eventually committed and pushed to the upstream git repo.
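
Putting this together, a typical session might look like the following; foo is the illustrative package from above:

$ spack env activate myenv
[myenv]$ spack develop foo@git.main
[myenv]$ spack install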

Loading

Once an environment has been installed, the following creates a load script for it:

$ spack env loads -r


This creates a file called loads in the environment directory. Sourcing that file in Bash will make the environment available to the user, and it can be included in .bashrc files, etc. The loads file may also be copied out of the environment, renamed, etc.
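
For a managed environment named myenv, and assuming the default environments location, sourcing the file might look like this:

$ source ~/spack/var/spack/environments/myenv/loads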

Configuring Environments

A variety of Spack behaviors are changed through Spack configuration files, covered in more detail in the Configuration Files section.

Spack Environments provide an additional level of configuration scope between the custom scope and the user scope discussed in the configuration documentation.

There are two ways to include configuration information in a Spack Environment:

1.
Inline in the spack.yaml file
2.
Included in the spack.yaml file from another file.

Many Spack commands also affect configuration information in files automatically. Those commands take a --scope argument, and the environment can be specified by env:NAME (to affect environment foo, set --scope env:foo). These commands will automatically manipulate configuration inline in the spack.yaml file.

Inline configurations

Inline Environment-scope configuration is done using the same yaml format as standard Spack configuration scopes, covered in the Configuration Files section. Each section is contained under a top-level yaml object with its name. For example, a spack.yaml manifest file containing some package preference configuration (as in a packages.yaml file) could contain:

spack:
  # ...
  packages:
    all:
      compiler: [intel]
  # ...


This configuration sets the default compiler for all packages to intel.

Included configurations

Spack environments allow an include heading in their yaml schema. This heading pulls in external configuration files and applies them to the Environment.

spack:
  include:
  - relative/path/to/config.yaml
  - https://github.com/path/to/raw/config/compilers.yaml
  - /absolute/path/to/packages.yaml


Environments can include files or URLs. File paths can be relative or absolute. URLs may point to the raw text of individual files or to a directory containing configuration files.

Configuration precedence

Inline configurations take precedence over included configurations, so you don't have to change shared configuration files to make small changes to an individual environment. Included configurations listed earlier will have higher precedence, as the included configs are applied in reverse order.

Manually Editing the Specs List

The list of abstract/root specs in the Environment is maintained in the spack.yaml manifest under the heading specs.

spack:
  specs:
  - ncview
  - netcdf
  - nco
  - py-sphinx


Appending to this list in the yaml is identical to using the spack add command from the command line. However, there is more power available from the yaml file.

Spec concretization

An environment can be concretized in three different modes, and the behavior active under any environment is determined by the concretizer:unify configuration option.

The default mode is to unify all specs:

spack:
  specs:
  - hdf5+mpi
  - zlib@1.2.8
  concretizer:
    unify: true


This means that any package in the environment corresponds to a single concrete spec. In the above example, when hdf5 depends, directly or transitively, on zlib, it is required to take zlib@1.2.8 instead of a newer version. This mode of concretization is particularly useful when environment views are used: if every package occurs in only one flavor, it is usually possible to merge all install directories into a view.

A downside of unified concretization is that it can be overly strict. For example, a concretization error would happen when both hdf5+mpi and hdf5~mpi are specified in an environment.

The second mode is to unify when possible: this makes concretization of root specs more independent. Instead of requiring reuse of dependencies across different root specs, reuse is only maximized:

spack:
  specs:
  - hdf5~mpi
  - hdf5+mpi
  - zlib@1.2.8
  concretizer:
    unify: when_possible


This means that both hdf5 installations will use zlib@1.2.8 as a dependency even if newer versions of that library are available.

The third mode of operation is to concretize root specs entirely independently by disabling unified concretization:

spack:
  specs:
  - hdf5~mpi
  - hdf5+mpi
  - zlib@1.2.8
  concretizer:
    unify: false


In this example hdf5 is concretized separately, and does not consider zlib@1.2.8 as a constraint or preference. Instead, it will take the latest possible version.

The last two concretization options are typically useful for system administrators and user support groups providing a large software stack for their HPC center.

NOTE:

The concretizer:unify config option was introduced in Spack 0.18 to replace the concretization property. For reference, concretization: together is replaced by concretizer:unify:true, and concretization: separately is replaced by concretizer:unify:false.


The spack concretize command without additional arguments will not change any previously concretized specs. This may prevent it from finding a solution when using unify: true, and it may prevent it from finding a minimal solution when using unify: when_possible. You can force Spack to ignore the existing concrete environment with spack concretize -f.



Spec Matrices

Entries in the specs list can be individual abstract specs or a spec matrix.

A spec matrix is a yaml object containing multiple lists of specs, and it evaluates to the cross-product of those specs. Spec matrices can also contain an exclude directive, which eliminates certain combinations from the evaluated result.

The following two Environment manifests are identical:

spack:
  specs:
  - zlib %gcc@7.1.0
  - zlib %gcc@4.9.3
  - libelf %gcc@7.1.0
  - libelf %gcc@4.9.3
  - libdwarf %gcc@7.1.0
  - cmake

spack:
  specs:
  - matrix:
    - [zlib, libelf, libdwarf]
    - ['%gcc@7.1.0', '%gcc@4.9.3']
    exclude:
    - libdwarf%gcc@4.9.3
  - cmake


Spec matrices can be used to install swaths of software across various toolchains.

Spec List References

The last type of possible entry in the specs list is a reference.

The Spack Environment manifest yaml schema contains an additional heading definitions. Under definitions is an array of yaml objects. Each object has one or two fields. The one required field is a name, and the optional field is a when clause.

The named field is a spec list. The spec list uses the same syntax as the specs entry. Each entry in the spec list can be a spec, a spec matrix, or a reference to an earlier named list. References are specified using the $ sigil, and are "splatted" into place (i.e. the elements of the referent are at the same level as the elements listed separately). As an example, the following two manifest files are identical.

spack:
  definitions:
  - first: [libelf, libdwarf]
  - compilers: ['%gcc', '%intel']
  - second:
    - $first
    - matrix:
      - [zlib]
      - [$compilers]
  specs:
  - $second
  - cmake

spack:
  specs:
  - libelf
  - libdwarf
  - zlib%gcc
  - zlib%intel
  - cmake


NOTE:

Named spec lists in the definitions section may only refer to named lists defined earlier in the file. Order matters.


In short files like the example, it may be easier to simply list the included specs. However, for more complicated examples involving many packages across many toolchains, separately factored lists make Environments substantially more manageable.

Additionally, the -l option to the spack add command allows one to add to named lists in the definitions section of the manifest file directly from the command line.
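
For instance, assuming the manifest defines a named list compilers like the examples above, an entry might be appended with:

$ spack add -l compilers '%gcc@12'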

The when directive can be used to conditionally add specs to a named list. The when directive takes a string of Python code referring to a restricted set of variables, and evaluates to a boolean. The specs listed are appended to the named list if the when string evaluates to True. In the following snippet, the named list compilers is ['%gcc', '%clang', '%intel'] on x86_64 systems and ['%gcc', '%clang'] on all other systems.

spack:
  definitions:
  - compilers: ['%gcc', '%clang']
  - when: arch.satisfies('x86_64:')
    compilers: ['%intel']


NOTE:

All definitions for the same named list whose when clauses evaluate to True (or that have no when clause) will be appended together.


The valid variables for a when clause are:

1.
platform. The platform string of the default Spack architecture on the system.
2.
os. The os string of the default Spack architecture on the system.
3.
target. The target string of the default Spack architecture on the system.
4.
architecture or arch. A Spack spec satisfying the default Spack architecture on the system. This supports querying via the satisfies method, as shown above.
5.
arch_str. The architecture string of the default Spack architecture on the system.
6.
re. The standard regex module in Python.
7.
env. The user environment (usually os.environ in Python).
8.
hostname. The hostname of the system (if hostname is an executable in the user's PATH).

SpecLists as Constraints

Dependencies and compilers in Spack can be both packages in an environment and constraints on other packages. References to SpecLists provide a shorthand for treating the packages in a list as compilers or as dependencies, using the $% and $^ syntax respectively.

For example, the following environment has three root packages: gcc@8.1.0, mvapich2@2.3.1 %gcc@8.1.0, and hdf5+mpi %gcc@8.1.0 ^mvapich2@2.3.1.

spack:
  definitions:
  - compilers: [gcc@8.1.0]
  - mpis: [mvapich2@2.3.1]
  - packages: [hdf5+mpi]
  specs:
  - $compilers
  - matrix:
    - [$mpis]
    - [$%compilers]
  - matrix:
    - [$packages]
    - [$^mpis]
    - [$%compilers]


This allows for a much-needed reduction in redundancy between packages and constraints.

Filesystem Views

Spack Environments can define filesystem views, which provide a direct access point for software similar to the directory hierarchy that might exist under /usr/local. Filesystem views are updated every time the environment is written out to the lock file spack.lock, so the concrete environment and the view are always compatible. The files of the view's installed packages are brought into the view by symbolic or hard links, referencing the original Spack installation, or by copy.

Configuration in spack.yaml

The Spack Environment manifest file has a top-level keyword view. Each entry under that heading is a view descriptor, headed by a name. Any number of views may be defined under the view heading. The view descriptor contains the root of the view, and optionally the projections for the view, select and exclude lists for the view and link information via link and link_type.

For example, in the following manifest file snippet we define a view named mpis, rooted at /path/to/view in which all projections use the package name, version, and compiler name to determine the path for a given package. This view selects all packages that depend on MPI, and excludes those built with the PGI compiler at version 18.5. The root specs with their (transitive) link and run type dependencies will be put in the view due to the link: all option, and the files in the view will be symlinks to the spack install directories.

spack:
  ...
  view:
    mpis:
      root: /path/to/view
      select: [^mpi]
      exclude: ['%pgi@18.5']
      projections:
        all: '{name}/{version}-{compiler.name}'
      link: all
      link_type: symlink


The default for the select and exclude values is to select everything and exclude nothing. The default projection is the default view projection ({}). The link attribute allows the following values:

1.
link: all include root specs with their transitive run and link type dependencies (default);
2.
link: run include root specs with their transitive run type dependencies;
3.
link: roots include root specs without their dependencies.

The link_type defaults to symlink but can also take the value of hardlink or copy.

TIP:

The option link: run can be used to create small environment views for Python packages. Python will be able to import packages inside of the view even when the environment is not activated, and linked libraries will be located outside of the view thanks to rpaths.


There are two shorthands for environments with a single view. If the environment at /path/to/env has a single view, with a root at /path/to/env/.spack-env/view, with default selection and exclusion and the default projection, we can put view: True in the environment manifest. Similarly, if the environment has a view with a different root, but default selection, exclusion, and projections, the manifest can say view: /path/to/view. These views are automatically named default, so that

spack:
  ...
  view: True


is equivalent to

spack:
  ...
  view:
    default:
      root: .spack-env/view


and

spack:
  ...
  view: /path/to/view


is equivalent to

spack:
  ...
  view:
    default:
      root: /path/to/view


By default, Spack environments are configured with view: True in the manifest. Environments can be configured without views using view: False. For backwards compatibility reasons, environments with no view key are treated the same as view: True.

From the command line, the spack env create command takes an argument --with-view [PATH] that sets the path for a single, default view. If no path is specified, the default path is used (view: True). The argument --without-view can be used to create an environment without any view configured.
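
For example (environment names and paths are illustrative):

$ spack env create --with-view /path/to/view myenv
$ spack env create --without-view myenv2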

The spack env view command can be used to manage the views of an Environment. The subcommand spack env view enable will add a view named default to an environment. It takes an optional argument to specify the path for the new default view. The subcommand spack env view disable will remove the view named default from an environment if one exists. The subcommand spack env view regenerate will regenerate the views for the environment. This will apply any updates in the environment configuration that have not yet been applied.
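
For example, for a hypothetical environment myenv:

$ spack -e myenv env view enable /path/to/view
$ spack -e myenv env view regenerate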

View Projections

The default projection into a view is to link every package into the root of the view. The projections attribute is a mapping of partial specs to spec format strings, defined by the format() function, as shown in the example below:

projections:
  zlib: "{name}-{version}"
  ^mpi: "{name}-{version}/{^mpi.name}-{^mpi.version}-{compiler.name}-{compiler.version}"
  all: "{name}-{version}/{compiler.name}-{compiler.version}"


The entries in the projections configuration file must all be either specs or the keyword all. For each spec, the projection used will be the first non-all entry that the spec satisfies, or all if there is an entry for all and no other entry is satisfied by the spec. Where the keyword all appears in the file does not matter.

Given the example above, the spec zlib@1.2.8 will be linked into /my/view/zlib-1.2.8/, the spec hdf5@1.8.10+mpi %gcc@4.9.3 ^mvapich2@2.2 will be linked into /my/view/hdf5-1.8.10/mvapich2-2.2-gcc-4.9.3, and the spec hdf5@1.8.10~mpi %gcc@4.9.3 will be linked into /my/view/hdf5-1.8.10/gcc-4.9.3.

If the keyword all does not appear in the projections configuration file, any spec that does not satisfy any entry in the file will be linked into the root of the view as in a single-prefix view. Any entries that appear below the keyword all in the projections configuration file will not be used, as all specs will use the projection under all before reaching those entries.

Activating environment views

The spack env activate command will put the default view for the environment into the user's path, in addition to activating the environment for Spack commands. The arguments -v,--with-view and -V,--without-view can be used to tune this behavior. The default behavior is to activate with the environment view if there is one.
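
For example, for a hypothetical environment myenv:

$ spack env activate -v myenv    # activate myenv together with its view
$ spack env activate -V myenv    # activate myenv without its view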

The environment variables affected by the spack env activate command and the paths that are used to update them are determined by the prefix inspections defined in your modules configuration; the defaults are summarized in the following table.

Variable            Paths
PATH                bin
MANPATH             man, share/man
ACLOCAL_PATH        share/aclocal
PKG_CONFIG_PATH     lib/pkgconfig, lib64/pkgconfig, share/pkgconfig
CMAKE_PREFIX_PATH   .

Each of these paths is appended to the view root and added to the relevant variable if the path exists. For this reason, it is not recommended to use non-default projections with the default view of an environment.

The spack env deactivate command will remove the default view of the environment from the user's path.

Generating Depfiles from Environments

Spack can generate Makefiles to make it easier to build multiple packages in an environment in parallel. Generated Makefiles expose targets that can be included in existing Makefiles, to allow other targets to depend on the environment installation.

A typical workflow is as follows:

$ spack env create -d .
$ spack -e . add perl
$ spack -e . concretize
$ spack -e . env depfile -o Makefile
$ make -j64


This generates a Makefile from a concretized environment in the current working directory, and make -j64 installs the environment, exploiting parallelism across packages as much as possible. Spack respects the Make jobserver and forwards it to the build environment of packages, meaning that a single -j flag is enough to control the load, even when packages are built in parallel.

By default the following phony convenience targets are available:

  • make all: installs the environment (default target);
  • make clean: cleans files used by make, but does not uninstall packages.

TIP:

GNU Make version 4.3 and above have great support for output synchronization through the -O and --output-sync flags, which ensure that output is printed orderly per package install. To get synchronized output with colors, use make -j<N> SPACK_COLOR=always --output-sync=recurse.


Specifying dependencies on generated make targets

An interesting question is how to include generated Makefiles in your own Makefiles. This comes up when you want to install an environment that provides executables required in a command for a make target of your own.

The example below shows how to accomplish this: the env target specifies the generated spack/env target as a prerequisite, meaning that the environment gets installed and is available for use in the env target.

SPACK ?= spack
.PHONY: all clean env
all: env
spack.lock: spack.yaml
	$(SPACK) -e . concretize -f
env.mk: spack.lock
	$(SPACK) -e . env depfile -o $@ --make-prefix spack
env: spack/env
	$(info Environment installed!)
clean:
	rm -rf spack.lock env.mk spack/
ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif


This works as follows: when make is invoked, it first "remakes" the missing include env.mk as there is a target for it. This triggers concretization of the environment and makes spack output env.mk. At that point the generated target spack/env becomes available through include env.mk.

As it is typically undesirable to remake env.mk as part of make clean, the include is conditional.

NOTE:

When including generated Makefiles, it is important to use the --make-prefix flag and use the non-phony target <prefix>/env as prerequisite, instead of the phony target <prefix>/all.


Building a subset of the environment

The generated Makefiles contain install targets for each spec, identified by <name>-<version>-<hash>. This allows you to install only a subset of the packages in the environment. When packages are unique in the environment, it's enough to know the name and let tab-completion fill out the version and hash.

The following phony targets are available: install/<spec> to install the spec with its dependencies, and install-deps/<spec> to only install its dependencies. This can be useful when certain flags should only apply to dependencies. Below we show a use case where a spec is installed with verbose output (spack install --verbose) while its dependencies are installed silently:

$ spack env depfile -o Makefile
# Install dependencies in parallel, only show a log on error.
$ make -j16 install-deps/python-3.11.0-<hash> SPACK_INSTALL_FLAGS=--show-log-on-error
# Install the root spec with verbose output.
$ make -j16 install/python-3.11.0-<hash> SPACK_INSTALL_FLAGS=--verbose


Adding post-install hooks

Another advanced use-case of generated Makefiles is running a post-install command for each package. These "hooks" could be anything from printing a post-install message to running tests or pushing just-built binaries to a buildcache.

This can be accomplished through the generated [<prefix>/]SPACK_PACKAGE_IDS variable. Assuming we have an active and concrete environment, we generate the associated Makefile with a prefix example:

$ spack env depfile -o env.mk --make-prefix example


And we now include it in a different Makefile, in which we create a target example/push/% with % referring to a package identifier. This target depends on the particular package installation. In this target we automatically have the target-specific HASH and SPEC variables at our disposal. They are respectively the spec hash (excluding leading /), and a human-readable spec. Finally, we have an entrypoint target push that will update the buildcache index once every package is pushed. Note how this target uses the generated example/SPACK_PACKAGE_IDS variable to define its prerequisites.

SPACK ?= spack
BUILDCACHE_DIR = $(CURDIR)/tarballs
.PHONY: all
all: push
include env.mk
example/push/%: example/install/%
	@mkdir -p $(dir $@)
	$(info About to push $(SPEC) to a buildcache)
	$(SPACK) -e . buildcache push --allow-root --only=package $(BUILDCACHE_DIR) /$(HASH)
	@touch $@
push: $(addprefix example/push/,$(example/SPACK_PACKAGE_IDS))
	$(info Updating the buildcache index)
	$(SPACK) -e . buildcache update-index $(BUILDCACHE_DIR)
	$(info Done!)
	@touch $@


CONTAINER IMAGES

Spack Environments (spack.yaml) are a great tool to create container images, but preparing one that is suitable for production requires some more boilerplate than just:

COPY spack.yaml /environment
RUN spack -e /environment install


Additional actions may be needed to minimize the size of the container, or to update the system software that is installed in the base image, or to set up a proper entrypoint to run the image. These tasks are usually both necessary and repetitive, so Spack comes with a command to generate recipes for container images starting from a spack.yaml.

A Quick Introduction

Consider having a Spack environment like the following:

spack:
  specs:
  - gromacs+mpi
  - mpich


Producing a Dockerfile from it is as simple as moving to the directory where the spack.yaml file is stored and giving the following command:

$ spack containerize > Dockerfile


The Dockerfile that gets created uses multi-stage builds and other techniques to minimize the size of the final image:

# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:latest as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&&  (echo "spack:" \
&&   echo "  specs:" \
&&   echo "  - gromacs+mpi" \
&&   echo "  - mpich" \
&&   echo "  concretizer:" \
&&   echo "    unify: true" \
&&   echo "  config:" \
&&   echo "    install_tree: /opt/software" \
&&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack env activate . && spack install --fail-fast && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
    xargs file -i | \
    grep 'charset=binary' | \
    grep 'x-executable\|x-archive\|x-sharedlib' | \
    awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
    spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:18.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]


The image itself can then be built and run in the usual way, with any of the tools suitable for the task. For instance, if we decided to use docker:

$ spack containerize > Dockerfile
$ docker build -t myimage .
[ ... ]
$ docker run -it myimage


The various components involved in the generation of the recipe and their configuration are discussed in detail in the sections below.

Spack Images on Docker Hub

Docker images with Spack preinstalled and ready to be used are built when a release is tagged, or nightly on develop. The images are then pushed both to Docker Hub and to the GitHub Container Registry. The operating systems that are currently supported are summarized in the table below:

Supported operating systems

Operating System   Base Image                     Spack Image
Ubuntu 18.04       ubuntu:18.04                   spack/ubuntu-bionic
Ubuntu 20.04       ubuntu:20.04                   spack/ubuntu-focal
Ubuntu 22.04       ubuntu:22.04                   spack/ubuntu-jammy
CentOS 7           centos:7                       spack/centos7
CentOS Stream      quay.io/centos/centos:stream   spack/centos-stream
openSUSE Leap      opensuse/leap                  spack/leap15
Amazon Linux 2     amazonlinux:2                  spack/amazon-linux
AlmaLinux 8        almalinux:8                    spack/almalinux8
AlmaLinux 9        almalinux:9                    spack/almalinux9
Rocky Linux 8      rockylinux:8                   spack/rockylinux8
Rocky Linux 9      rockylinux:9                   spack/rockylinux9
Fedora Linux 37    fedora:37                      spack/fedora37
Fedora Linux 38    fedora:38                      spack/fedora38

All the images are tagged with the corresponding release of Spack, with the exception of the latest tag, which points to the HEAD of the develop branch. These images are available for anyone to use and take care of all the repetitive tasks that are necessary to set up Spack within a container. The container recipes generated by Spack use them as default base images for their build stage, even though handles to use custom base images provided by users are available to accommodate complex use cases.

Creating Images From Environments

Any Spack Environment can be used for the automatic generation of container recipes. Sensible defaults are provided for things like the base image or the version of Spack used in the image. If finer tuning is needed, it can be obtained by adding the relevant metadata under the container attribute of environments:

spack:
  specs:
  - gromacs+mpi
  - mpich
  container:
    # Select the format of the recipe e.g. docker,
    # singularity or anything else that is currently supported
    format: docker
    # Sets the base images for the stages where Spack builds the
    # software or where the software gets installed after being built.
    images:
      os: "centos:7"
      spack: develop
    # Whether or not to strip binaries
    strip: true
    # Additional system packages that are needed at runtime
    os_packages:
      final:
      - libgomp
    # Labels for the image
    labels:
      app: "gromacs"
      mpi: "mpich"


A detailed description of the options available can be found in the Configuration Reference section.

Setting Base Images

The images subsection is used to select both the image where Spack builds the software and the image where the built software is installed. This attribute can be set in different ways and which one to use depends on the use case at hand.

Use Official Spack Images From Dockerhub

To generate a recipe that uses an official Docker image from the Spack organization to build the software and the corresponding official OS image to install the built software, all the user has to do is specify:

1.
An operating system under images:os
2.
A Spack version under images:spack

Any combination of these two values that can be mapped to one of the images discussed in Spack Images on Docker Hub is allowed. For instance, the following spack.yaml:

spack:
  specs:
  - gromacs+mpi
  - mpich
  container:
    images:
      os: centos:7
      spack: 0.15.4


uses spack/centos7:0.15.4 and centos:7 for the stages where the software is respectively built and installed:

# Build stage with Spack pre-installed and ready to be used
FROM spack/centos7:0.15.4 as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&&  (echo "spack:" \
&&   echo "  specs:" \
&&   echo "  - gromacs+mpi" \
&&   echo "  - mpich" \
&&   echo "  concretizer:" \
&&   echo "    unify: true" \
&&   echo "  config:" \
&&   echo "    install_tree: /opt/software" \
&&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml
[ ... ]
# Bare OS image to run the installed executables
FROM centos:7
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]


This is the simplest available method of selecting base images, and we advise using it whenever possible. There are cases, though, where using Spack's official images is not enough to fit production needs. In these situations users can extend the recipe to start with the bootstrapping of Spack at a certain pinned version, or manually select which base image to start from in the recipe, as we'll see next.

Use a Bootstrap Stage for Spack

In some cases users may want to pin the commit sha that is used for Spack, to ensure later reproducibility, or start from a fork of the official Spack repository to try a bugfix or a feature in the early stage of development. This is possible by being just a little more verbose when specifying information about Spack in the spack.yaml file:

images:
  os: amazonlinux:2
  spack:
    # URL of the Spack repository to be used in the container image
    url: <to-use-a-fork>
    # Either a commit sha, a branch name or a tag
    ref: <sha/tag/branch>
    # If true turn a branch name or a tag into the corresponding commit
    # sha at the time of recipe generation
    resolve_sha: <true/false>


url specifies the URL from which to clone Spack and defaults to https://github.com/spack/spack. The ref attribute can be either a commit sha, a branch name or a tag. The default value in this case is to use the develop branch, but it may change in the future to point to the latest stable release. Finally, resolve_sha transforms branch names or tags into the corresponding commit shas at the time of recipe generation, to allow for greater reproducibility of the results at a later time.

The list of operating systems that can be used to bootstrap Spack can be obtained with:

$ spack containerize --list-os
==> The following operating systems can be used to bootstrap Spack:
alpine:3 amazonlinux:2 fedora:38 fedora:37 rockylinux:9 rockylinux:8 almalinux:9 almalinux:8 centos:stream centos:7 opensuse/leap:15 suse/sle:15 nvidia/cuda:11.2.1 ubuntu:22.04 ubuntu:20.04 ubuntu:18.04


NOTE:

The resolve_sha option uses git rev-parse under the hood and thus requires checking out the corresponding Spack repository in a temporary folder before generating the recipe. Recipe generation may take longer when this option is set to true because of this additional step.


Use Custom Images Provided by Users

Consider, as an example, building a production grade image for a CUDA application. The best strategy would probably be to build on top of images provided by the vendor and regard CUDA as an external package.

Spack doesn't currently provide an official image with CUDA configured this way, but users can build it on their own and then configure the environment to explicitly pull it. This requires users to:

1.
Specify the image used to build the software under images:build
2.
Specify the image used to install the built software under images:final

A spack.yaml like the following:

spack:
  specs:
  - gromacs@2019.4+cuda build_type=Release
  - mpich
  - fftw precision=float
  packages:
    cuda:
      buildable: False
      externals:
      - spec: cuda%gcc
        prefix: /usr/local/cuda
  container:
    images:
      build: custom/cuda-10.1-ubuntu18.04:latest
      final: nvidia/cuda:10.1-base-ubuntu18.04


produces, for instance, the following Dockerfile:

# Build stage with Spack pre-installed and ready to be used
FROM custom/cuda-10.1-ubuntu18.04:latest as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&&  (echo "spack:" \
&&   echo "  specs:" \
&&   echo "  - gromacs@2019.4+cuda build_type=Release" \
&&   echo "  - mpich" \
&&   echo "  - fftw precision=float" \
&&   echo "  packages:" \
&&   echo "    cuda:" \
&&   echo "      buildable: false" \
&&   echo "      externals:" \
&&   echo "      - spec: cuda%gcc" \
&&   echo "        prefix: /usr/local/cuda" \
&&   echo "  concretizer:" \
&&   echo "    unify: true" \
&&   echo "  config:" \
&&   echo "    install_tree: /opt/software" \
&&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack env activate . && spack install --fail-fast && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
    xargs file -i | \
    grep 'charset=binary' | \
    grep 'x-executable\|x-archive\|x-sharedlib' | \
    awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
    spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM nvidia/cuda:10.1-base-ubuntu18.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]


where the base images for both stages are completely custom.

This second mode of selecting base images is more flexible than just choosing an operating system and a Spack version, but it is also more demanding. Users may need to generate their base images themselves, and it is also their responsibility to ensure that:

1.
Spack is available in the build stage and set up correctly to install the required software
2.
The artifacts produced in the build stage can be executed in the final stage

Therefore we don't recommend its use in cases that can be otherwise covered by the simplified mode shown first.

Singularity Definition Files

In addition to producing recipes in Dockerfile format, Spack can produce Singularity Definition Files by just changing the value of the format attribute:

$ cat spack.yaml
spack:
  specs:
  - hdf5~mpi
  container:
    format: singularity

$ spack containerize > hdf5.def
$ sudo singularity build hdf5.sif hdf5.def


The minimum version of Singularity required to build a SIF (Singularity Image Format) image from the recipes generated by Spack is 3.5.3.

Extending the Jinja2 Templates

The Dockerfile and the Singularity definition file that Spack can generate are based on a few Jinja2 templates that are rendered according to the environment being containerized. Even though Spack allows a great deal of customization by just setting appropriate values for the configuration options, sometimes that is not enough.

In those cases, a user can directly extend the template that Spack uses to render the image to e.g. set additional environment variables or perform specific operations either before or after a given stage of the build. Let's consider as an example the following structure:

$ tree /opt/environment
/opt/environment
├── data
│   └── data.csv
├── spack.yaml
└── templates
    └── container
        └── CustomDockerfile


containing both the custom template extension and the environment manifest file. To use a custom template, the environment must register the directory containing it, and declare its use under the container configuration:

spack:
  specs:
  - hdf5~mpi
  concretizer:
    unify: true
  config:
    template_dirs:
    - /opt/environment/templates
  container:
    format: docker
    depfile: true
    template: container/CustomDockerfile


The template extension can override two blocks, named build_stage and final_stage, similarly to the example below:

{% extends "container/Dockerfile" %}
{% block build_stage %}
RUN echo "Start building"
{{ super() }}
{% endblock %}
{% block final_stage %}
{{ super() }}
COPY data /share/myapp/data
{% endblock %}


The Dockerfile is generated by running:

$ spack -e /opt/environment containerize


Note that the environment must be active for Spack to read the template. The recipe that gets generated contains the two extra instructions that we added in our template extension:

# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-jammy:latest as builder
RUN echo "Start building"
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&&  (echo "spack:" \
&&   echo "  specs:" \
&&   echo "  - hdf5~mpi" \
&&   echo "  concretizer:" \
&&   echo "    unify: true" \
&&   echo "  config:" \
&&   echo "    template_dirs:" \
&&   echo "    - /tmp/environment/templates" \
&&   echo "    install_tree: /opt/software" \
&&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack env activate . && spack concretize && spack env depfile -o Makefile && make -j $(nproc) && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
    xargs file -i | \
    grep 'charset=binary' | \
    grep 'x-executable\|x-archive\|x-sharedlib' | \
    awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
    spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:22.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/._view /opt/._view
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
COPY data /share/myapp/data
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l", "-c", "$*", "--" ]
CMD [ "/bin/bash" ]


Configuration Reference

The tables below describe all the configuration options that are currently supported to customize the generation of container recipes:

General configuration options for the container section of spack.yaml

Option Name Description Allowed Values Required
format The format of the recipe docker or singularity Yes
depfile Whether to use a depfile for installation, or not True or False (default) No
images:os Operating system used as a base for the image See Supported operating systems Yes, if using constrained selection of base images
images:spack Version of Spack use in the build stage Valid tags for base:image Yes, if using constrained selection of base images
images:spack:url Repository from which Spack is cloned Any fork of Spack No
images:spack:ref Reference for the checkout of Spack Either a commit sha, a branch name or a tag No
images:spack:resolve_sha Resolve branches and tags in spack.yaml to commits in the generated recipe True or False (default: False) No
images:build Image to be used in the build stage Any valid container image Yes, if using custom selection of base images
images:final Image to be used in the build stage Any valid container image Yes, if using custom selection of base images
strip Whether to strip binaries true (default) or false No
os_packages:command Tool used to manage system packages apt, yum, dnf, dnf_epel, zypper, apk, yum_amazon Only with custom base images
os_packages:update Whether or not to update the list of available packages True or False (default: True) No
os_packages:build System packages needed at build-time Valid packages for the current OS No
os_packages:final System packages needed at run-time Valid packages for the current OS No
labels Labels to tag the image Pairs of key-value strings No

Configuration options specific to Singularity

Option Name                Description                Allowed Values        Required
singularity:runscript      Content of %runscript      Any valid script      No
singularity:startscript    Content of %startscript    Any valid script      No
singularity:test           Content of %test           Any valid script      No
singularity:help           Description of the image   Description string    No

Best Practices

MPI

OpenMPI, which is the default MPI implementation in Spack, depends on Fortran, so consider adding gfortran to the apt-get install list.

Recent versions of OpenMPI will require you to pass --allow-run-as-root to your mpirun calls if started as root user inside Docker.

For execution on HPC clusters, it can be helpful to import the docker image into Singularity in order to start a program with an external MPI. Otherwise, also add openssh-server to the apt-get install list.
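
When the recipe is generated by Spack, one way to request such extra system packages is the os_packages attribute described earlier; the snippet below is a sketch (package availability and names depend on the distribution):

container:
  os_packages:
    build:
    - gfortran
    final:
    - openssh-server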

CUDA

Starting from CUDA 9.0, Nvidia provides minimal CUDA images based on Ubuntu. Please see their instructions. Avoid double-installing CUDA by adding, e.g.

packages:
  cuda:
    externals:
    - spec: "cuda@9.0.176%gcc@5.4.0 arch=linux-ubuntu16-x86_64"
      prefix: /usr/local/cuda
    buildable: False


to your spack.yaml.

Users will either need nvidia-docker or e.g. Singularity to execute device kernels.

Docker on Windows and OSX

On Mac OS and Windows, docker runs on a hypervisor that is not allocated much memory by default, and some spack packages may fail to build due to lack of memory. To work around this issue, consider configuring your docker installation to use more of your host memory. In some cases, you can also ease the memory pressure on parallel builds by limiting the parallelism in your config.yaml.

config:
  build_jobs: 2


MIRRORS (MIRRORS.YAML)

Some sites may not have access to the internet for fetching packages. These sites will need a local repository of tarballs from which they can get their files. Spack has support for this with mirrors. A mirror is a URL that points to a directory, either on the local filesystem or on some server, containing tarballs for all of Spack's packages.

Here's an example of a mirror's directory structure:

mirror/
    cmake/
        cmake-2.8.10.2.tar.gz
    dyninst/
        dyninst-8.1.1.tgz
        dyninst-8.1.2.tgz
    libdwarf/
        libdwarf-20130126.tar.gz
        libdwarf-20130207.tar.gz
        libdwarf-20130729.tar.gz
    libelf/
        libelf-0.8.12.tar.gz
        libelf-0.8.13.tar.gz
    libunwind/
        libunwind-1.1.tar.gz
    mpich/
        mpich-3.0.4.tar.gz
    mvapich2/
        mvapich2-1.9.tgz


The structure is very simple. There is a top-level directory. The second-level directories are named after packages, and the third level contains tarballs for each version of each package.

NOTE:

Archives are not named exactly the way they were in the package's fetch URL. They have the form <name>-<version>.<extension>, where <name> is Spack's name for the package, <version> is the version of the tarball, and <extension> is whatever format the package's fetch URL contains.

In order to make mirror creation reasonably fast, we copy the tarball in its original format to the mirror directory, but we do not standardize on a particular compression algorithm, because this would potentially require expanding and re-compressing each archive.



spack mirror

Mirrors are managed with the spack mirror command. The help for spack mirror looks like this:

$ spack help mirror
usage: spack mirror [-hn] [--deprecated] SUBCOMMAND ...

manage mirrors (source and binary)

positional arguments:
  SUBCOMMAND
    create       create a directory to be used as a spack mirror, and fill it with package archives
    destroy      given a url, recursively delete everything under it
    add          add a mirror to Spack
    remove (rm)  remove a mirror by name
    set-url      change the URL of a mirror
    set          configure the connection details of a mirror
    list         print out available mirrors to the console

options:
  --deprecated       fetch deprecated versions without warning
  -h, --help         show this help message and exit
  -n, --no-checksum  do not use checksums to verify downloaded files (unsafe)


The create command actually builds a mirror by fetching all of its packages from the internet and checksumming them.

The remaining commands are for managing mirror configuration. They control the URL(s) from which Spack downloads its packages.

spack mirror create

You can create a mirror using the spack mirror create command, assuming you're on a machine where you can access the internet.

The command will iterate through all of Spack's packages and download the safe ones into a directory structure like the one above. Here is what it looks like:

$ spack mirror create libelf libdwarf
==> Created new mirror in spack-mirror-2014-06-24
==> Trying to fetch from http://www.mr511.de/software/libelf-0.8.13.tar.gz
##########################################################                81.6%
==> Checksum passed for libelf@0.8.13
==> Added libelf@0.8.13
==> Trying to fetch from http://www.mr511.de/software/libelf-0.8.12.tar.gz
######################################################################    98.6%
==> Checksum passed for libelf@0.8.12
==> Added libelf@0.8.12
==> Trying to fetch from http://www.prevanders.net/libdwarf-20130207.tar.gz
######################################################################    97.3%
==> Checksum passed for libdwarf@20130207
==> Added libdwarf@20130207
==> Trying to fetch from http://www.prevanders.net/libdwarf-20130126.tar.gz
########################################################                  78.9%
==> Checksum passed for libdwarf@20130126
==> Added libdwarf@20130126
==> Trying to fetch from http://www.prevanders.net/libdwarf-20130729.tar.gz
#############################################################             84.7%
==> Added libdwarf@20130729
==> Added spack-mirror-2014-06-24/libdwarf/libdwarf-20130729.tar.gz to mirror
==> Successfully updated mirror in spack-mirror-2014-06-24.

Archive stats:
0 already present
5 added
0 failed to fetch.


Once this is done, you can tar up the spack-mirror-2014-06-24 directory and copy it over to the machine you want it hosted on.

Custom package sets

Normally, spack mirror create downloads all the archives it has checksums for. If you want to only create a mirror for a subset of packages, you can do that by supplying a list of package specs on the command line after spack mirror create. For example, this command:

$ spack mirror create libelf@0.8.12: boost@1.44:


will create a mirror for libelf versions greater than or equal to 0.8.12 and boost versions greater than or equal to 1.44.

Mirror files

If you have a very large number of packages you want to mirror, you can supply a file with specs in it, one per line:

$ cat specs.txt
libdwarf
libelf@0.8.12:
boost@1.44:
boost@1.39.0
...
$ spack mirror create --file specs.txt
...


This is useful if there is a specific suite of software managed by your site.

Mirror environment

To create a mirror of all packages required by a concrete environment, activate the environment and call spack mirror create -a. This is especially useful to create a mirror of an environment concretized on another machine.

[remote] $ spack env create myenv
[remote] $ spack env activate myenv
[remote] $ spack add ...
[remote] $ spack concretize
$ sftp remote:/spack/var/environment/myenv/spack.lock
$ spack env create myenv spack.lock
$ spack env activate myenv
$ spack mirror create -a


spack mirror add

Once you have a mirror, you need to let spack know about it. This is relatively simple. First, figure out the URL for the mirror. If it's a directory, you can use a file URL like this one:

file://$HOME/spack-mirror-2014-06-24

That points to the directory on the local filesystem. If it were on a web server, you could use a URL like this one:

https://example.com/some/web-hosted/directory/spack-mirror-2014-06-24

Spack will use the URL as the root for all of the packages it fetches. You can tell your Spack installation to use that mirror like this:

$ spack mirror add local_filesystem file://$HOME/spack-mirror-2014-06-24


Each mirror has a name so that you can refer to it again later.

spack mirror list

To see all the mirrors Spack knows about, run spack mirror list:

$ spack mirror list
local_filesystem    file:///home/username/spack-mirror-2014-06-24


spack mirror remove

To remove a mirror by name, run:

$ spack mirror remove local_filesystem
$ spack mirror list
==> No mirrors configured.


Mirror precedence

Adding a mirror really adds an entry in ~/.spack/mirrors.yaml:

mirrors:
  local_filesystem: file://$HOME/spack-mirror-2014-06-24

If you want to change the order in which mirrors are searched for packages, you can edit this file and reorder the sections. Spack will search the topmost mirror first and the bottom-most mirror last.

Local Default Cache

Spack caches resources that are downloaded as part of installs. The cache is a valid spack mirror: it uses the same directory structure and naming scheme as other Spack mirrors (so it can be copied anywhere and referenced with a URL like other mirrors). The mirror is maintained locally (within the Spack installation directory) at var/spack/cache/. It is always enabled (and is always searched first when attempting to retrieve files for an installation) but can be cleared with spack clean; the cache directory can also be deleted manually without issue.
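
For example, cached downloads can be removed with:

$ spack clean --downloads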

Caching includes retrieved tarball archives and source control repositories, but only resources with an associated digest or commit ID (e.g. a revision number for SVN) will be cached.

MODULES (MODULES.YAML)

The use of module systems to manage the user environment in a controlled way is a common practice at HPC centers, and one often embraced also by individual programmers on their development machines. To support this common practice, Spack integrates with Environment Modules and Lmod by providing post-install hooks that generate module files, as well as commands to manipulate them.

Modules are one of several ways you can use Spack packages. For other options that may fit your use case better, you should also look at spack load and environments.
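
For example, instead of using module files, a single package can be loaded with:

$ spack load cmake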

Using module files via Spack

If you have installed a supported module system you should be able to run module avail to see what module files have been installed. Here is sample output of those programs, showing lots of installed packages:

$ module avail
--------------------------------------------------------------- ~/spack/share/spack/modules/linux-ubuntu14-x86_64 ---------------------------------------------------------------
autoconf/2.69-gcc-4.8-qextxkq       hwloc/1.11.6-gcc-6.3.0-akcisez             m4/1.4.18-gcc-4.8-ev2znoc                   openblas/0.2.19-gcc-6.3.0-dhkmed6        py-setuptools/34.2.0-gcc-6.3.0-fadur4s
automake/1.15-gcc-4.8-maqvukj       isl/0.18-gcc-4.8-afi6taq                   m4/1.4.18-gcc-6.3.0-uppywnz                 openmpi/2.1.0-gcc-6.3.0-go2s4z5          py-six/1.10.0-gcc-6.3.0-p4dhkaw
binutils/2.28-gcc-4.8-5s7c6rs       libiconv/1.15-gcc-4.8-at46wg3              mawk/1.3.4-gcc-4.8-acjez57                  openssl/1.0.2k-gcc-4.8-dkls5tk           python/2.7.13-gcc-6.3.0-tyehea7
bison/3.0.4-gcc-4.8-ek4luo5         libpciaccess/0.13.4-gcc-6.3.0-gmufnvh      mawk/1.3.4-gcc-6.3.0-ostdoms                openssl/1.0.2k-gcc-6.3.0-gxgr5or         readline/7.0-gcc-4.8-xhufqhn
bzip2/1.0.6-gcc-4.8-iffrxzn         libsigsegv/2.11-gcc-4.8-pp2cvte            mpc/1.0.3-gcc-4.8-g5mztc5                   pcre/8.40-gcc-4.8-r5pbrxb                readline/7.0-gcc-6.3.0-zzcyicg
bzip2/1.0.6-gcc-6.3.0-bequudr       libsigsegv/2.11-gcc-6.3.0-7enifnh          mpfr/3.1.5-gcc-4.8-o7xm7az                  perl/5.24.1-gcc-4.8-dg5j65u              sqlite/3.8.5-gcc-6.3.0-6zoruzj
cmake/3.7.2-gcc-6.3.0-fowuuby       libtool/2.4.6-gcc-4.8-7a523za              mpich/3.2-gcc-6.3.0-dmvd3aw                 perl/5.24.1-gcc-6.3.0-6uzkpt6            tar/1.29-gcc-4.8-wse2ass
curl/7.53.1-gcc-4.8-3fz46n6         libtool/2.4.6-gcc-6.3.0-n7zmbzt            ncurses/6.0-gcc-4.8-dcpe7ia                 pkg-config/0.29.2-gcc-4.8-ib33t75        tcl/8.6.6-gcc-4.8-tfxzqbr
expat/2.2.0-gcc-4.8-mrv6bd4         libxml2/2.9.4-gcc-4.8-ryzxnsu              ncurses/6.0-gcc-6.3.0-ucbhcdy               pkg-config/0.29.2-gcc-6.3.0-jpgubk3      util-macros/1.19.1-gcc-6.3.0-xorz2x2
flex/2.6.3-gcc-4.8-yf345oo          libxml2/2.9.4-gcc-6.3.0-rltzsdh            netlib-lapack/3.6.1-gcc-6.3.0-js33dog       py-appdirs/1.4.0-gcc-6.3.0-jxawmw7       xz/5.2.3-gcc-4.8-mew4log
gcc/6.3.0-gcc-4.8-24puqve           lmod/7.4.1-gcc-4.8-je4srhr                 netlib-scalapack/2.0.2-gcc-6.3.0-5aidk4l    py-numpy/1.12.0-gcc-6.3.0-oemmoeu        xz/5.2.3-gcc-6.3.0-3vqeuvb
gettext/0.19.8.1-gcc-4.8-yymghlh    lua/5.3.4-gcc-4.8-im75yaz                  netlib-scalapack/2.0.2-gcc-6.3.0-hjsemcn    py-packaging/16.8-gcc-6.3.0-i2n3dtl      zip/3.0-gcc-4.8-rwar22d
gmp/6.1.2-gcc-4.8-5ub2wu5           lua-luafilesystem/1_6_3-gcc-4.8-wkey3nl    netlib-scalapack/2.0.2-gcc-6.3.0-jva724b    py-pyparsing/2.1.10-gcc-6.3.0-tbo6gmw    zlib/1.2.11-gcc-4.8-pgxsxv7
help2man/1.47.4-gcc-4.8-kcnqmau     lua-luaposix/33.4.0-gcc-4.8-mdod2ry        netlib-scalapack/2.0.2-gcc-6.3.0-rgqfr6d    py-scipy/0.19.0-gcc-6.3.0-kr7nat4        zlib/1.2.11-gcc-6.3.0-7cqp6cj


The names should look familiar, as they resemble the output from spack find. For example, you could type the following command to load the cmake module:

$ module load cmake/3.7.2-gcc-6.3.0-fowuuby


Such long names are not particularly pretty, easy to remember, or easy to type. Luckily, Spack offers many facilities for customizing the module scheme used at your site.

Module file customization

Module files are generated by post-install hooks after the successful installation of a package.

NOTE:

Spack only generates modulefiles when a package is installed. If you attempt to install a package and it is already installed, Spack will not regenerate modulefiles for the package. This may lead to inconsistent modulefiles if the Spack module configuration has changed since the package was installed, either by editing a file or changing scopes or environments.

Later in this section there is a subsection on regenerating modules that will allow you to bring your modules to a consistent state.



The table below summarizes the essential information associated with the different file formats that can be generated by Spack:

                         Hook name   Default root directory   Default template file                           Compatible tools
Tcl - Non-Hierarchical   tcl         share/spack/modules      share/spack/templates/modules/modulefile.tcl    Env. Modules/Lmod
Lua - Hierarchical       lmod        share/spack/lmod         share/spack/templates/modules/modulefile.lua    Lmod


Spack ships with sensible defaults for the generation of module files, but you can customize many aspects of it to accommodate package or site specific needs. In general you can override or extend the default behavior by:

1.
overriding certain callback APIs in the Python packages
2.
writing specific rules in the modules.yaml configuration file
3.
writing your own templates to override or extend the defaults



The first method lets you express changes to the run-time environment that are needed to use the installed software properly, e.g. injecting variables from language interpreters into their extensions. The latter two instead permit fine-tuning of the filesystem layout, content, and creation of module files to meet site-specific conventions.

Override API calls in package.py

There are two methods that you can override in any package.py to affect the content of the module files generated by Spack. The first one:

def setup_run_environment(self, env):
    pass


can alter the content of the module file associated with the same package where it is overridden. The second method:

def setup_dependent_run_environment(self, env, dependent_spec):
    pass


can instead inject run-time environment modifications in the module files of packages that depend on it. In both cases you need to fill env with the desired list of environment modifications.

An example in which it is crucial to override both methods is given by the r package. This package installs libraries and headers in non-standard locations and it is possible to prepend the appropriate directory to the corresponding environment variables:

LD_LIBRARY_PATH    self.prefix/rlib/R/lib
PKG_CONFIG_PATH    self.prefix/rlib/pkgconfig

with the following snippet:


def setup_run_environment(self, env):
    env.prepend_path("LD_LIBRARY_PATH", join_path(self.prefix, "rlib", "R", "lib"))
    env.prepend_path("PKG_CONFIG_PATH", join_path(self.prefix, "rlib", "pkgconfig"))
    env.set("R_HOME", join_path(self.prefix, "rlib", "R"))
    if "+rmath" in self.spec:
        env.prepend_path("LD_LIBRARY_PATH", join_path(self.prefix, "rlib"))


The r package also knows which environment variable should be modified to make language extensions provided by other packages available, and modifies it appropriately in the override of the second method:


def setup_dependent_run_environment(self, env, dependent_spec):
    # For run time environment set only the path for dependent_spec and
    # prepend it to R_LIBS
    env.set("R_HOME", join_path(self.prefix, "rlib", "R"))
    if dependent_spec.package.extends(self.spec):
        env.prepend_path("R_LIBS", join_path(dependent_spec.prefix, self.r_lib_dir))




Write a configuration file

The configuration files that control module generation behavior are named modules.yaml. The default configuration:

# -------------------------------------------------------------------------
# This is the default configuration for Spack's module file generation.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
#   $SPACK_ROOT/etc/spack/modules.yaml
#
# Per-user settings (overrides default and site settings):
#   ~/.spack/modules.yaml
# -------------------------------------------------------------------------
modules:
  # This maps paths in the package install prefix to environment variables
  # they should be added to. For example, <prefix>/bin should be in PATH.
  prefix_inspections:
    ./bin:
    - PATH
    ./man:
    - MANPATH
    ./share/man:
    - MANPATH
    ./share/aclocal:
    - ACLOCAL_PATH
    ./lib/pkgconfig:
    - PKG_CONFIG_PATH
    ./lib64/pkgconfig:
    - PKG_CONFIG_PATH
    ./share/pkgconfig:
    - PKG_CONFIG_PATH
    ./:
    - CMAKE_PREFIX_PATH

  # These are configurations for the module set named "default"
  default:
    # Where to install modules
    roots:
      tcl: $spack/share/spack/modules
      lmod: $spack/share/spack/lmod
    # What type of modules to use ("tcl" and/or "lmod")
    enable:
    - lmod
    tcl:
      all:
        autoload: direct
    # Default configurations if lmod is enabled
    lmod:
      all:
        autoload: direct
      hierarchy:
      - mpi


activates the hooks to generate tcl module files and inspects the installation folder of each package for the presence of a set of subdirectories (bin, man, share/man, etc.). If any is found, its full path is prepended to the environment variables listed below the folder name.

Spack modules can be configured for multiple module sets. The default module set is named default. All Spack commands that operate on modules apply to the default module set by default, but can be applied to any module set in the configuration.

Changing the modules root

As shown in the table above, the default module root for lmod is $spack/share/spack/lmod and the default root for tcl is $spack/share/spack/modules. This can be overridden for any module set by changing the roots key of the configuration.

modules:
  default:
    roots:
      tcl: /path/to/install/tcl/modules
  my_custom_lmod_modules:
    roots:
      lmod: /path/to/install/custom/lmod/modules
  ...


This configuration will create two module sets. The default module set will install its tcl modules to /path/to/install/tcl/modules (and still install its lmod modules, if any, to the default location). The set my_custom_lmod_modules will install its lmod modules to /path/to/install/custom/lmod/modules (and still install its tcl modules, if any, to the default location).

By default, an architecture-specific directory is added to the root directory. A module set may override that behavior by setting the arch_folder config value to False.

modules:

  default:
    roots:
      tcl: /path/to/install/tcl/modules
    arch_folder: false


Having multiple module sets install modules to the default location could be confusing to users of your modules. In the next section, we discuss enabling and disabling module types (module file generators) for each module set.

Activate other hooks

Any other module file generator shipped with Spack can be activated by adding it to the list under the enable key in modules.yaml. Currently the only generator that is not active by default is lmod, which produces hierarchical Lua module files.

Each module system can then be configured separately. Configuration options that affect a particular type of module file should be listed under a top-level key corresponding to the generator being customized:

modules:

  default:
    enable:
      - tcl
      - lmod
    tcl:
      # contains environment modules specific customizations
    lmod:
      # contains lmod specific customizations


In general, the configuration options in modules.yaml either change the layout of the module files on the filesystem or affect their content. For the latter, it is possible to use anonymous specs to fine-tune the set of packages to which the modifications apply.

Selection by anonymous specs

In the configuration file you can use anonymous specs (i.e. specs that are not required to have a root package and are thus used just to express constraints) to apply certain modifications on a selected set of the installed software. For instance, in the snippet below:

modules:

  default:
    tcl:
      # The keyword `all` selects every package
      all:
        environment:
          set:
            BAR: 'bar'
      # This anonymous spec selects any package that
      # depends on mpi. The double colon at the
      # end clears the set of rules that matched so far.
      ^mpi::
        environment:
          prepend_path:
            PATH: '{^mpi.prefix}/bin'
          set:
            BAR: 'baz'
      # Selects any zlib package
      zlib:
        environment:
          prepend_path:
            LD_LIBRARY_PATH: 'foo'
      # Selects zlib compiled with gcc@4.8
      zlib%gcc@4.8:
        environment:
          unset:
            - FOOBAR


you are instructing Spack to set the environment variable BAR=bar for every module, unless the associated spec satisfies the abstract dependency ^mpi, in which case BAR=baz and the directory containing the respective MPI executables is prepended to PATH. In addition, for any spec that satisfies zlib the value foo will be prepended to LD_LIBRARY_PATH, and for any spec that satisfies zlib%gcc@4.8 the variable FOOBAR will be unset.

NOTE:

The modifications associated with the all keyword are always evaluated first, no matter where they appear in the configuration file. All the other spec constraints are instead evaluated top to bottom.



Exclude or include specific module files

You can use anonymous specs also to prevent module files from being written or to force them to be written. Consider the case where you want to hide from users all the boilerplate software that you had to build in order to bootstrap a new compiler. Suppose for instance that gcc@4.4.7 is the compiler provided by your system. If you write a configuration file like:

modules:

  default:
    tcl:
      include: ['gcc', 'llvm']  # include will have precedence over exclude
      exclude: ['%gcc@4.4.7']   # Assuming gcc@4.4.7 is the system compiler


you will prevent the generation of module files for any package compiled with gcc@4.4.7, with the sole exception of gcc and llvm installations.

Customize the naming of modules

The names of environment modules generated by Spack are not always easy to comprehend, due to the long hash in the name. Three module configuration options help with that. The first is a global setting to adjust the hash length. It can be set anywhere from 0 to 32 and defaults to 7. This controls only the representation of the hash in the module file name; it does not affect the package hash itself. Be aware that the shorter the hash length, the more likely naming conflicts become. The following snippet shows how to set the hash length in module file names:

modules:

  default:
    tcl:
      hash_length: 7


To help make module names more readable, and to help alleviate name conflicts with a short hash, one can use the suffixes option in the modules configuration file. This option will add strings to modules that match a spec. For instance, the following config options,

modules:

  default:
    tcl:
      all:
        suffixes:
          ^python@2.7.12: 'python-2.7.12'
          ^openblas: 'openblas'


will add a python-2.7.12 suffix to any package built with python matching the spec python@2.7.12. This makes it easy to see which version of python a set of python extensions is associated with. Likewise, the openblas string is attached to any program that has openblas in its spec, most likely via the +blas variant specification.

The most heavyweight solution to module naming is to change the entire naming convention for module files. This uses the projections format covered in View Projections.

modules:

  default:
    tcl:
      projections:
        all: '{name}/{version}-{compiler.name}-{compiler.version}-module'
        ^mpi: '{name}/{version}-{^mpi.name}-{^mpi.version}-{compiler.name}-{compiler.version}-module'


will create module files that are nested in directories by package name, contain the version and compiler name and version, and end with the word module, for all specs that do not depend on mpi; specs that do depend on mpi get the same information plus the MPI implementation name and version.

When specifying module names by projection for Lmod modules, we recommend NOT including names of dependencies (e.g., MPI, compilers) that are already in the Lmod hierarchy.

NOTE:

Tcl and Lua modules also allow for explicit conflicts between modulefiles.

modules:

  default:
    enable:
      - tcl
    tcl:
      projections:
        all: '{name}/{version}-{compiler.name}-{compiler.version}'
      all:
        conflict:
          - '{name}'
          - 'intel/14.0.1'


will create module files that conflict with intel/14.0.1 and with the base directory of the same module, effectively preventing two or more versions of the same software from being loaded at the same time. The tokens available for use in this directive are the same ones understood by the format() method.

For Lmod and Environment Modules versions prior to 4.2, it is important to express the conflict on both of the conflicting modulefiles.



NOTE:

When lmod is activated, Spack will generate a set of hierarchical Lua module files that are understood by Lmod. The hierarchy will always contain the two layers Core / Compiler, but can be further extended to any of the virtual dependencies present in Spack. A configuration that could be useful in practice is, for instance:

modules:

  default:
    enable:
      - lmod
    lmod:
      core_compilers:
        - 'gcc@4.8'
      core_specs:
        - 'python'
      hierarchy:
        - 'mpi'
        - 'lapack'


This will generate a hierarchy in which the lapack and mpi layers can be switched independently. It allows a site to build the same libraries or applications against different implementations of mpi and lapack, and lets Lmod switch safely from one to the other.

All packages built with a compiler in core_compilers and all packages that satisfy a spec in core_specs will be put in the Core hierarchy of the lua modules.
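As a sketch of what the hierarchy enables (the module names below are hypothetical), a user could swap one MPI implementation for another and rely on Lmod to reload the modules in the dependent layers consistently:

% module load gcc/4.8 openmpi hdf5
% module swap openmpi mpich   # Lmod reactivates hdf5 (and the rest of the mpi layer) against mpich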




WARNING:

The user is responsible for maintaining consistency among core packages, as core_specs bypasses the hierarchy that allows Lmod to safely switch between coherent software stacks.



WARNING:

For hierarchies deeper than three layers, lmod spider may have some issues. See this discussion on the Lmod project.



Select default modules

By default, when multiple modules of the same name share a directory, the highest version number will be the default module. This behavior of the module command can be overridden with a symlink named default to the desired default module. If you wish to configure default modules with Spack, add a defaults key to your modules configuration:

modules:

  my-module-set:
    tcl:
      defaults:
        - gcc@10.2.1
        - hdf5@1.2.10+mpi+hl%gcc


These defaults may be arbitrarily specific. For any package that satisfies a default, Spack will generate the module file in the appropriate path, and will generate a default symlink to the module file as well.

WARNING:

If Spack is configured to generate multiple default modules in the same directory, the last modulefile generated will be the default.


Customize environment modifications

You can control which prefixes in a Spack package are added to environment variables with the prefix_inspections section; this section maps relative prefixes to the list of environment variables which should be updated with those prefixes.

The prefix_inspections configuration is different from other settings in that a prefix_inspections configuration at the modules level of the configuration file applies to all module sets. This allows users to make general overrides to the default inspections and customize them per-module-set.

modules:

  prefix_inspections:
    ./bin:
      - PATH
    ./man:
      - MANPATH
    ./:
      - CMAKE_PREFIX_PATH


Prefix inspections are only applied if the relative path inside the installation prefix exists. In this case, for a Spack package foo installed to /spack/prefix/foo, if foo installs executables to bin but no manpages in man, the generated module file for foo would update PATH to contain /spack/prefix/foo/bin and CMAKE_PREFIX_PATH to contain /spack/prefix/foo, but would not update MANPATH.
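Since the modules-level inspections apply to all module sets, a per-module-set entry can refine them. A minimal sketch (the MY_PROJECT_LIBS variable name is hypothetical):

modules:

  prefix_inspections:
    ./bin:
      - PATH
  default:
    prefix_inspections:
      ./lib:
        - MY_PROJECT_LIBS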

The default list of environment variables in this config section includes PATH, MANPATH, ACLOCAL_PATH, PKG_CONFIG_PATH and CMAKE_PREFIX_PATH, as well as DYLD_FALLBACK_LIBRARY_PATH on macOS. On Linux however, the corresponding LD_LIBRARY_PATH variable is not set, because it affects the behavior of system executables too.

NOTE:

In general, the LD_LIBRARY_PATH variable is not required when using packages built with Spack, thanks to the use of RPATH. Some packages may still need the variable, which is best handled on a per-package basis instead of globally, as explained in Override API calls in package.py.


There is a special case for prefix inspections relative to environment views. If all of the following conditions hold for a module set configuration:

1. The configuration is for an environment and will never be applied outside the environment,
2. The environment in question is configured to use a view,
3. The environment view is configured with a projection that ensures every package is linked to a unique directory,

then the module set may be configured to create modules relative to the environment view. This is specified by the use_view configuration option in the module set. If true, the module set is constructed relative to the default view of the environment. Otherwise, the value must be the name of the environment view relative to which to construct modules, or a false value to disable the feature explicitly (the default is false).

If the use_view value is set in the config, then the prefix inspections for the package are done relative to the package's path in the view.

spack:

  modules:
    view_relative_modules:
      use_view: my_view
      prefix_inspections:
        ./bin:
          - PATH
  view:
    my_view:
      root: /path/to/my/view
      projections:
        all: '{name}-{hash}'


The spack key is relevant to environment configuration, and the view key is discussed in detail in the section on Configuring environment views. With this configuration the generated module for package foo would set PATH to include /path/to/my/view/foo-<hash>/bin instead of /spack/prefix/foo/bin.

The use_view option is useful when deploying a large software stack to users who are likely to inspect the modules to find full paths to software, and when it is desirable to present users with a simpler set of paths than those generated by the Spack install tree.

Filter out environment modifications

Some modifications to environment variables in module files are there by default, for instance because they are generated by prefix inspections. If you want to prevent modifications to some environment variables, you can do so with the exclude_env_vars option:

modules:

  default:
    tcl:
      all:
        filter:
          # Exclude changes to any of these variables
          exclude_env_vars: ['CPATH', 'LIBRARY_PATH']


The configuration above will generate module files that will not contain modifications to either CPATH or LIBRARY_PATH.

Autoload dependencies

Often a module needs to have its (transitive) dependencies loaded as well. One example where this is useful is when one package needs to use executables provided by its dependency; when the dependency is autoloaded, the executable will be in the PATH. Similarly, for scripting languages such as Python, packages and their dependencies have to be loaded together.

Autoloading is enabled by default for Lmod and Environment Modules. The former has built-in support for it through the depends_on function. The latter uses module load statements to load and track dependencies.

Autoloading can also be enabled conditionally:

modules:

  default:
    tcl:
      all:
        autoload: none
      ^python:
        autoload: direct


The configuration file above will produce module files that load their direct dependencies if the installed package depends on python. The allowed values for the autoload statement are none, direct, or all.

NOTE:

In the tcl section of the configuration file it is possible to use the prerequisites directive, which accepts the same values as autoload. It produces module files that have a prereq statement, which autoloads dependencies on Environment Modules when its auto_handling configuration option is enabled. If Environment Modules is installed with Spack, auto_handling is enabled by default starting with version 4.2; otherwise it is enabled by default as of version 5.0.
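A minimal sketch of the directive:

modules:

  default:
    tcl:
      all:
        prerequisites: direct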



Maintaining Module Files

Each type of module file has a command with the same name associated with it. The actions these commands permit are usually associated with the maintenance of a production environment. Here's, for instance, a sample of the features of the spack module tcl command:

$ spack module tcl --help
usage: spack module tcl [-h] [-n MODULE_SET_NAME] SUBCOMMAND ...
positional arguments:
  SUBCOMMAND
    refresh     regenerate module files
    find        find module files for packages
    rm          remove module files
    loads       prompt the list of modules associated with a constraint
    setdefault  set the default module file for a package

options:
  -h, --help            show this help message and exit
  -n MODULE_SET_NAME, --name MODULE_SET_NAME
                        named module set to use from modules configuration


Refresh the set of modules

The subcommand that regenerates module files to update their content or their layout is refresh:

$ spack module tcl refresh --help
usage: spack module tcl refresh [-hy] [--delete-tree] [--upstream-modules] ...
positional arguments:
  installed_specs       constraint to select a subset of installed packages

options:
  --delete-tree         delete the module file tree before refresh
  --upstream-modules    generate modules for packages installed upstream
  -h, --help            show this help message and exit
  -y, --yes-to-all      assume "yes" is the answer to every confirmation request


A set of packages can be selected using anonymous specs for the optional constraint positional argument. Optionally, the entire tree can be deleted before regeneration if the change in layout is radical.
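For instance, a sketch that regenerates, without prompting, only the module files of packages built with gcc@9.3.0, deleting the existing tree first:

$ spack module tcl refresh --delete-tree -y %gcc@9.3.0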

Delete module files

If instead what you need is just to delete a few module files, then the right subcommand is rm:

$ spack module tcl rm --help
usage: spack module tcl rm [-hy] ...
positional arguments:
  installed_specs       constraint to select a subset of installed packages

options:
  -h, --help            show this help message and exit
  -y, --yes-to-all      assume "yes" is the answer to every confirmation request


NOTE:

Every modification of existing module files asks for confirmation by default. If the command is used in a script, you can pass the -y argument to skip this safety measure.



Using Spack modules in shell scripts

To enable additional Spack commands for loading and unloading module files, and to add the correct path to MODULEPATH, you need to source the appropriate setup file. Assuming Spack is installed in $SPACK_ROOT, run the appropriate command for your shell:

# For bash/zsh/sh
$ . $SPACK_ROOT/share/spack/setup-env.sh
# For tcsh/csh
$ source $SPACK_ROOT/share/spack/setup-env.csh
# For fish
$ . $SPACK_ROOT/share/spack/setup-env.fish


If you want to have Spack's shell support available on the command line at any login you can put this source line in one of the files that are sourced at startup (like .profile, .bashrc or .cshrc). Be aware that the shell startup time may increase slightly as a result.

spack module tcl loads

In some cases, it is desirable to use a Spack-generated module, rather than relying on Spack's built-in user-environment modification capabilities. To translate a spec into a module name, use spack module tcl loads or spack module lmod loads depending on the module system desired.

To load not just a module, but also all the modules it depends on, use the --dependencies option. This is not required for most modules because Spack builds binaries with RPATH support. However, not all packages use RPATH to find their dependencies: this can be true in particular for Python extensions, which are currently not built with RPATH.

Scripts to load modules recursively may be made with the command:

$ spack module tcl loads --dependencies <spec>


An equivalent alternative using process substitution is:

$ source <( spack module tcl loads --dependencies <spec> )


Module Commands for Shell Scripts

Although Spack is flexible, the module command is much faster. This could become an issue when emitting a series of spack load commands inside a shell script. By adding the --dependencies flag, spack module tcl loads may also be used to generate code that can be cut-and-pasted into a shell script. For example:

$ spack module tcl loads --dependencies py-numpy git
# bzip2@1.0.6%gcc@4.9.3=linux-x86_64
module load bzip2/1.0.6-gcc-4.9.3-ktnrhkrmbbtlvnagfatrarzjojmkvzsx
# ncurses@6.0%gcc@4.9.3=linux-x86_64
module load ncurses/6.0-gcc-4.9.3-kaazyneh3bjkfnalunchyqtygoe2mncv
# zlib@1.2.8%gcc@4.9.3=linux-x86_64
module load zlib/1.2.8-gcc-4.9.3-v3ufwaahjnviyvgjcelo36nywx2ufj7z
# sqlite@3.8.5%gcc@4.9.3=linux-x86_64
module load sqlite/3.8.5-gcc-4.9.3-a3eediswgd5f3rmto7g3szoew5nhehbr
# readline@6.3%gcc@4.9.3=linux-x86_64
module load readline/6.3-gcc-4.9.3-se6r3lsycrwxyhreg4lqirp6xixxejh3
# python@3.5.1%gcc@4.9.3=linux-x86_64
module load python/3.5.1-gcc-4.9.3-5q5rsrtjld4u6jiicuvtnx52m7tfhegi
# py-setuptools@20.5%gcc@4.9.3=linux-x86_64
module load py-setuptools/20.5-gcc-4.9.3-4qr2suj6p6glepnedmwhl4f62x64wxw2
# py-nose@1.3.7%gcc@4.9.3=linux-x86_64
module load py-nose/1.3.7-gcc-4.9.3-pwhtjw2dvdvfzjwuuztkzr7b4l6zepli
# openblas@0.2.17%gcc@4.9.3+shared=linux-x86_64
module load openblas/0.2.17-gcc-4.9.3-pw6rmlom7apfsnjtzfttyayzc7nx5e7y
# py-numpy@1.11.0%gcc@4.9.3+blas+lapack=linux-x86_64
module load py-numpy/1.11.0-gcc-4.9.3-mulodttw5pcyjufva4htsktwty4qd52r
# curl@7.47.1%gcc@4.9.3=linux-x86_64
module load curl/7.47.1-gcc-4.9.3-ohz3fwsepm3b462p5lnaquv7op7naqbi
# autoconf@2.69%gcc@4.9.3=linux-x86_64
module load autoconf/2.69-gcc-4.9.3-bkibjqhgqm5e3o423ogfv2y3o6h2uoq4
# cmake@3.5.0%gcc@4.9.3~doc+ncurses+openssl~qt=linux-x86_64
module load cmake/3.5.0-gcc-4.9.3-x7xnsklmgwla3ubfgzppamtbqk5rwn7t
# expat@2.1.0%gcc@4.9.3=linux-x86_64
module load expat/2.1.0-gcc-4.9.3-6pkz2ucnk2e62imwakejjvbv6egncppd
# git@2.8.0-rc2%gcc@4.9.3+curl+expat=linux-x86_64
module load git/2.8.0-rc2-gcc-4.9.3-3bib4hqtnv5xjjoq5ugt3inblt4xrgkd


The script may be further edited by removing unnecessary modules.

Module Prefixes

On some systems, modules are automatically prefixed with a certain string; spack module tcl loads needs to know about that prefix when it issues module load commands. Add the --prefix option to your spack module tcl loads commands if this is necessary.

For example, consider the following on one system:

$ module avail
linux-SuSE11-x86_64/antlr/2.7.7-gcc-5.3.0-bdpl46y
$ spack module tcl loads antlr    # WRONG!
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load antlr/2.7.7-gcc-5.3.0-bdpl46y
$ spack module tcl loads --prefix linux-SuSE11-x86_64/ antlr
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/antlr/2.7.7-gcc-5.3.0-bdpl46y


PACKAGE REPOSITORIES (REPOS.YAML)

Spack comes with thousands of built-in package recipes in var/spack/repos/builtin/. This is a package repository -- a directory that Spack searches when it needs to find a package by name. You may need to maintain packages for restricted, proprietary or experimental software separately from the built-in repository. Spack allows you to configure local repositories using either the repos.yaml configuration file or the spack repo command.

A package repository is a directory structured like this:

repo/
  repo.yaml
  packages/
    hdf5/
      package.py
    mpich/
      package.py
      mpich-1.9-bugfix.patch
    trilinos/
      package.py
    ...


The top-level repo.yaml file contains configuration metadata for the repository. The packages subdirectory contains a subdirectory for each package in the repository. Each package directory contains a package.py file and any patches or other files needed to build the package.

The repo.yaml file may also contain a subdirectory key, which can modify the name of the subdirectory used for packages. As seen above, the default value is packages. An empty string (subdirectory: '') requires a flattened repo structure in which the package names are top-level subdirectories.
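For instance, a minimal repo.yaml for a flattened repository might look like this (the namespace is hypothetical):

repo:
  namespace: myorg
  subdirectory: ''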

Package repositories allow you to:

1. Maintain your own packages separately from Spack;
2. Share your packages (e.g., by hosting them in a shared file system), without committing them to the built-in Spack package repository; and
3. Override built-in Spack packages with your own implementation.

Packages in a separate repository can also depend on built-in Spack packages. So, you can leverage existing recipes without re-implementing them in your own repository.
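As a sketch, a package.py in your own repository (all names below are hypothetical) can declare dependencies that resolve to built-in packages:

# packages/mycode/package.py in your repository
from spack.package import *


class Mycode(CMakePackage):
    """Hypothetical application that reuses built-in recipes."""

    homepage = "https://example.com/mycode"
    url = "https://example.com/mycode-1.0.tar.gz"

    version("1.0", sha256="...")  # placeholder checksum

    # Both of these resolve to built-in Spack packages unless overridden
    depends_on("mpi")
    depends_on("hdf5+mpi")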

repos.yaml

Spack uses the repos.yaml file in ~/.spack (and elsewhere) to find repositories. Note that the repos.yaml configuration file is distinct from the repo.yaml file in each repository. For more on the YAML format, and on how configuration file precedence works in Spack, see configuration.

The default etc/spack/defaults/repos.yaml file looks like this:

repos:
- $spack/var/spack/repos/builtin


The file starts with repos: and contains a single ordered list of paths to repositories. Each path is on a separate line starting with -. You can add a repository by inserting another path into the list:

repos:
- /opt/local-repo
- $spack/var/spack/repos/builtin


When Spack interprets a spec, e.g., mpich in spack install mpich, it searches these repositories in order (first to last) to resolve each package name. In this example, Spack will look for the following packages and use the first valid file:

1. /opt/local-repo/packages/mpich/package.py
2. $spack/var/spack/repos/builtin/packages/mpich/package.py

NOTE:

Currently, Spack can only use repositories in the file system. We plan to eventually support URLs in repos.yaml, so that you can easily point to remote package repositories, but that is not yet implemented.


Namespaces

Every repository in Spack has an associated namespace defined in its top-level repo.yaml file. If you look at var/spack/repos/builtin/repo.yaml in the built-in repository, you'll see that its namespace is builtin:

$ cat var/spack/repos/builtin/repo.yaml
repo:
  namespace: builtin


Spack records the repository namespace of each installed package. For example, if you install the mpich package from the builtin repo, Spack records its fully qualified name as builtin.mpich. This accomplishes two things:

1. You can have packages with the same name from different namespaces installed at once.
2. You can easily determine which repository a package came from after it is installed (more below).

NOTE:

It may seem redundant for a repository to have both a namespace and a path, but repository paths may change over time, or, as mentioned above, a locally hosted repository path may eventually be hosted at some remote URL.

Namespaces are designed to allow package authors to associate a unique identifier with their packages, so that the package can be identified even if the repository moves. This is why the namespace is determined by the repo.yaml file in the repository rather than the local repos.yaml configuration: the repository maintainer sets the name.



Uniqueness

You should choose a namespace that uniquely identifies your package repository. For example, if you make a repository for packages written by your organization, you could use your organization's name. You can also nest namespaces using periods, so you could identify a repository by a sub-organization. For example, LLNL might use a namespace for its internal repositories like llnl. Packages from the Physical & Life Sciences directorate (PLS) might use the llnl.pls namespace, and packages created by the Computation directorate might use llnl.comp.

Spack cannot ensure that every repository is named uniquely, but it will prevent you from registering two repositories with the same namespace at the same time. If you try to add a repository that has the same namespace as an existing one, e.g., builtin, Spack will print a warning message.

Namespace example

Suppose that LLNL maintains its own version of mpich, separate from Spack's built-in mpich package, and suppose you've installed both LLNL's and Spack's mpich packages. If you just use spack find, you won't see a difference between these two packages:

$ spack find
==> 2 installed packages.
-- linux-rhel6-x86_64 / gcc@4.4.7 -------------
mpich@3.2  mpich@3.2


However, if you use spack find -N, Spack will display the packages with their namespaces:

$ spack find -N
==> 2 installed packages.
-- linux-rhel6-x86_64 / gcc@4.4.7 -------------
builtin.mpich@3.2  llnl.comp.mpich@3.2


Now you know which one is LLNL's special version, and which one is the built-in Spack package. As you might guess, packages that are identical except for their namespace will still have different hashes:

$ spack find -lN
==> 2 installed packages.
-- linux-rhel6-x86_64 / gcc@4.4.7 -------------
c35p3gc builtin.mpich@3.2  itoqmox llnl.comp.mpich@3.2


All Spack commands that take a package spec can also accept a fully qualified spec with a namespace. This means you can use the namespace to be more specific when designating, e.g., which package you want to uninstall:

spack uninstall llnl.comp.mpich


Overriding built-in packages

Spack's search semantics mean that you can make your own implementation of a built-in Spack package (like mpich), put it in a repository, and use it to override the built-in package. As long as the repository containing your mpich is earlier than any other in repos.yaml, any built-in package that depends on mpich will use the one in your repository.

Suppose you have three repositories: the builtin Spack repo (builtin), a shared repo for your institution (e.g., llnl), and a repo containing your own prototype packages (proto). Suppose they contain packages as follows:

Namespace   Path to repo                     Packages
proto       ~/proto                          mpich
llnl        /usr/local/llnl                  hdf5
builtin     $spack/var/spack/repos/builtin   mpich, hdf5, others


Suppose that hdf5 depends on mpich. You can override the built-in hdf5 by adding the llnl repo to repos.yaml:

repos:
- /usr/local/llnl
- $spack/var/spack/repos/builtin


spack install hdf5 will install llnl.hdf5 ^builtin.mpich.

If, instead, repos.yaml looks like this:

repos:
- ~/proto
- /usr/local/llnl
- $spack/var/spack/repos/builtin


spack install hdf5 will install llnl.hdf5 ^proto.mpich.

Any unqualified package name will be resolved by searching repos.yaml from the first entry to the last. You can force a particular repository's package by using a fully qualified name. For example, if your repos.yaml is as above, and you want builtin.mpich instead of proto.mpich, you can write:

spack install hdf5 ^builtin.mpich


which will install llnl.hdf5 ^builtin.mpich.

Similarly, you can force builtin.hdf5 like this:

spack install builtin.hdf5 ^builtin.mpich


This will not search repos.yaml at all, as the builtin repo is specified in both cases. It will install builtin.hdf5 ^builtin.mpich.

If you want to see which repositories will be used in a build before you install it, you can use spack spec -N:

$ spack spec -N hdf5
Input spec
--------------------------------
hdf5

Normalized
--------------------------------
hdf5
    ^zlib@1.1.2:

Concretized
--------------------------------
builtin.hdf5@1.10.0-patch1%apple-clang@7.0.2+cxx~debug+fortran+mpi+shared~szip~threadsafe arch=darwin-elcapitan-x86_64
    ^builtin.openmpi@2.0.1%apple-clang@7.0.2~mxm~pmi~psm~psm2~slurm~sqlite3~thread_multiple~tm~verbs+vt arch=darwin-elcapitan-x86_64
        ^builtin.hwloc@1.11.4%apple-clang@7.0.2 arch=darwin-elcapitan-x86_64
            ^builtin.libpciaccess@0.13.4%apple-clang@7.0.2 arch=darwin-elcapitan-x86_64
                ^builtin.libtool@2.4.6%apple-clang@7.0.2 arch=darwin-elcapitan-x86_64
                    ^builtin.m4@1.4.17%apple-clang@7.0.2+sigsegv arch=darwin-elcapitan-x86_64
                        ^builtin.libsigsegv@2.10%apple-clang@7.0.2 arch=darwin-elcapitan-x86_64
                ^builtin.pkg-config@0.29.1%apple-clang@7.0.2+internal_glib arch=darwin-elcapitan-x86_64
                ^builtin.util-macros@1.19.0%apple-clang@7.0.2 arch=darwin-elcapitan-x86_64
    ^builtin.zlib@1.2.8%apple-clang@7.0.2+pic arch=darwin-elcapitan-x86_64


WARNING:

You can use a fully qualified package name in a depends_on directive in a package.py file, like so:

depends_on('proto.hdf5')


This is not recommended, as it makes it very difficult for multiple repos to be composed and shared. A package.py like this will fail if the proto repository is not registered in repos.yaml.



spack repo

Spack's configuration system allows repository settings to come from repos.yaml files in many locations. If you want to see the repositories registered as a result of all configuration files, use spack repo list.

spack repo list

$ spack repo list
==> 2 package repositories.
myrepo     ~/myrepo
builtin    ~/spack/var/spack/repos/builtin


Each repository is listed with its associated namespace. To get the raw, merged YAML from all configuration files, use spack config get repos:

$ spack config get repos
repos:
- ~/myrepo
- $spack/var/spack/repos/builtin


Note that, unlike spack repo list, this does not include the namespace, which is read from each repo's repo.yaml.

spack repo create

To make your own repository, you don't need to construct a directory yourself; you can use the spack repo create command.

$ spack repo create myrepo
==> Created repo with namespace 'myrepo'.
==> To register it with spack, run this command:

  spack repo add ~/myrepo

$ ls myrepo
packages/  repo.yaml

$ cat myrepo/repo.yaml
repo:
  namespace: 'myrepo'


By default, the namespace of a new repo matches its directory's name. You can supply a custom namespace with a second argument, e.g.:

$ spack repo create myrepo llnl.comp
==> Created repo with namespace 'llnl.comp'.
==> To register it with spack, run this command:

  spack repo add ~/myrepo

$ cat myrepo/repo.yaml
repo:
  namespace: 'llnl.comp'


You can also create repositories with custom structure with the -d/--subdirectory argument, e.g.:

$ spack repo create -d applications myrepo apps
==> Created repo with namespace 'apps'.
==> To register it with Spack, run this command:

  spack repo add ~/myrepo

$ ls myrepo
applications/  repo.yaml

$ cat myrepo/repo.yaml
repo:
  namespace: apps
  subdirectory: applications


spack repo add

Once your repository is created, you can register it with Spack with spack repo add:

$ spack repo add ./myrepo
==> Added repo with namespace 'llnl.comp'.
$ spack repo list
==> 2 package repositories.
llnl.comp    ~/myrepo
builtin      ~/spack/var/spack/repos/builtin


This simply adds the repo to your repos.yaml file.

Once a repository is registered like this, you should be able to see its packages' names in the output of spack list, and you should be able to build them using spack install <name> as you would with any built-in package.
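For example, with a hypothetical package mycode provided by the newly added repository:

$ spack list mycode
$ spack install mycode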

spack repo remove

You can remove an already-registered repository with spack repo rm. This will work whether you pass the repository's namespace or its path.

By namespace:

$ spack repo rm llnl.comp
==> Removed repository ~/myrepo with namespace 'llnl.comp'.
$ spack repo list
==> 1 package repository.
builtin    ~/spack/var/spack/repos/builtin


By path:

$ spack repo rm ~/myrepo
==> Removed repository ~/myrepo
$ spack repo list
==> 1 package repository.
builtin    ~/spack/var/spack/repos/builtin


Repo namespaces and Python

You may have noticed that namespace notation for repositories is similar to the notation for namespaces in Python. As it turns out, you can treat Spack repositories like Python packages; this is how they are implemented.

You could, for example, extend a builtin package in your own repository:

from spack.pkg.builtin.mpich import Mpich

class MyPackage(Mpich):
    ...


Spack repo namespaces are actually Python namespaces tacked on under spack.pkg. The search semantics of repos.yaml are actually implemented using Python's built-in sys.path search. The spack.repo module implements a custom Python importer.

WARNING:

The mechanism for extending packages is not yet extensively tested, and extending packages across repositories imposes inter-repo dependencies, which may be hard to manage. Use this feature at your own risk, but let us know if you have a use case for it.


BUILD CACHES

Some sites may encourage users to set up their own test environments before carrying out central installations, or some users may prefer to set up these environments on their own initiative. To reduce the load of recompiling otherwise identical package specs in different installations, installed packages can be put into build cache tarballs, pushed to your Spack mirror, and then downloaded and installed by others.

Whenever a mirror provides prebuilt packages, Spack will take these packages into account during concretization and installation, making spack install significantly faster.

NOTE:

We use the terms "build cache" and "mirror" often interchangeably. Mirrors are used during installation both for sources and prebuilt packages. Build caches refer to mirrors that provide prebuilt packages.


Creating a build cache

Build caches are created via:

$ spack buildcache push <path/url/mirror name> <spec>


This command takes the locally installed spec and its dependencies, and creates tarballs of their install prefixes. It also generates metadata files, signed with GPG. These tarballs and metadata files are then pushed to the provided binary cache, which can be a local directory or a remote URL.

Here is an example where a build cache is created in a local directory named "spack-cache", to which we push the "ninja" spec:

$ spack buildcache push ./spack-cache ninja
==> Pushing binary packages to file:///home/spackuser/spack/spack-cache/build_cache


Note that ninja must be installed locally for this to work.

Once you have a build cache, you can add it as a mirror, discussed next.

Finding or installing build cache files

To find build caches or install build caches, a Spack mirror must be configured with:

$ spack mirror add <name> <url or path>


Both web URLs and local paths on the filesystem can be specified. In the previous example, you might add the directory "spack-cache" and call it mymirror:

$ spack mirror add mymirror ./spack-cache


You can confirm that the mirror was added with spack mirror list.

At this point, you've created a buildcache, but Spack hasn't indexed it, so if you run spack buildcache list you won't see any results. You need to index this new build cache as follows:

$ spack buildcache update-index ./spack-cache


Now you can use list:

$  spack buildcache list
==> 1 cached build.
-- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------
ninja@1.10.2


With mymirror configured and an index available, Spack will automatically use it during concretization and installation. That means that you can expect spack install ninja to fetch prebuilt packages from the mirror. Let's verify by re-installing ninja:

$ spack uninstall ninja
$ spack install ninja
==> Installing ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz
==> Fetching file:///home/spackuser/spack/spack-cache/build_cache/linux-ubuntu20.04-skylake-gcc-9.3.0-ninja-1.10.2-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spec.json.sig
gpg: Signature made Do 12 Jan 2023 16:01:04 CET
gpg:                using RSA key 61B82B2B2350E171BD17A1744E3A689061D57BF6
gpg: Good signature from "example (GPG created for Spack) <example@example.com>" [ultimate]
==> Fetching file:///home/spackuser/spack/spack-cache/build_cache/linux-ubuntu20.04-skylake/gcc-9.3.0/ninja-1.10.2/linux-ubuntu20.04-skylake-gcc-9.3.0-ninja-1.10.2-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
==> Extracting ninja-1.10.2-yxferyhmrjkosgta5ei6b4lqf6bxbscz from binary cache
==> ninja: Successfully installed ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz
Search: 0.00s.  Fetch: 0.17s.  Install: 0.12s.  Total: 0.29s
[+] /home/harmen/spack/opt/spack/linux-ubuntu20.04-skylake/gcc-9.3.0/ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz


It worked! You've just completed a full example of creating a build cache with a spec of interest, adding it as a mirror, updating its index, listing the contents, and finally, installing from it.

By default, Spack falls back to building from sources when the mirror is not available or when the package is simply not available in the cache. To force Spack to only install prebuilt packages, you can use

$ spack install --use-buildcache only <package>


For example, to combine all of the commands above to add the E4S build cache and then install from it exclusively, you would do:

$ spack mirror add E4S https://cache.e4s.io
$ spack buildcache keys --install --trust
$ spack install --use-buildcache only <package>


We use --install and --trust to say that we are installing keys to our keyring, and trusting all downloaded keys.

Extreme-scale Scientific Software Stack (E4S): build cache

Relocation

When using buildcaches across different machines, it is likely that the install root will be different from the one used to build the binaries.

To address this issue, Spack automatically relocates all paths encoded in binaries and scripts to their new location upon install.

Note that there are some cases where this is not possible: if binaries are built in a relatively short path, and then installed to a longer path, there may not be enough space in the binary to encode the new path. In this case, Spack will fail to install the package from the build cache, and a source build is required.

To reduce the likelihood of this happening, it is highly recommended to add padding to the install root during the build, as specified in the config section of the configuration:

config:

install_tree:
root: /opt/spack
padded_length: 128


OCI / Docker V2 registries as build cache

Spack can also use OCI or Docker V2 registries such as Dockerhub, Quay.io, Github Packages, GitLab Container Registry, JFrog Artifactory, and others as build caches. This is a convenient way to share binaries using public infrastructure, or to cache Spack built binaries in Github Actions and GitLab CI.

To get started, configure an OCI mirror using oci:// as the scheme, and optionally specify a username and password (or personal access token):

$ spack mirror add --oci-username username --oci-password password my_registry oci://example.com/my_image


Spack follows the naming conventions of Docker, with Dockerhub as the default registry. To use Dockerhub, you can omit the registry domain:

$ spack mirror add --oci-username username --oci-password password my_registry oci://username/my_image


From here, you can use the mirror as any other build cache:

$ spack buildcache push my_registry <specs...>  # push to the registry
$ spack install <specs...> # install from the registry


A unique feature of buildcaches on top of OCI registries is that it's incredibly easy to get a runnable container image with the binaries installed. This is a great way to make applications available to users without requiring them to install Spack -- all you need is Docker, Podman or any other OCI-compatible container runtime.

To produce container images, all you need to do is add the --base-image flag when pushing to the build cache:

$ spack buildcache push --base-image ubuntu:20.04 my_registry ninja
Pushed to example.com/my_image:ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
$ docker run -it example.com/my_image:ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
root@e4c2b6f6b3f4:/# ninja --version
1.11.1


If --base-image is not specified, distroless images are produced. In practice, you won't be able to run these as containers, since they don't come with libc and other system dependencies. However, they are still compatible with tools like skopeo, podman, and docker for pulling and pushing.
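For example, a sketch using skopeo to copy such an image to another registry without a container runtime (the registry names and TAG are hypothetical):

$ skopeo copy docker://example.com/my_image:TAG docker://registry.example.org/my_image:TAG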

NOTE:

The Docker overlay2 storage driver is limited to 128 layers, above which a max depth exceeded error may be produced when pulling the image. There are alternative drivers.


Spack build cache for GitHub Actions

To significantly speed up Spack in GitHub Actions, binaries can be cached in GitHub Packages. This service is an OCI registry that can be linked to a GitHub repository.

A typical workflow is to include a spack.yaml environment in your repository that specifies the packages to install, the target architecture, and the build cache to use under mirrors:

spack:

  specs:
    - python@3.11
  config:
    install_tree:
      root: /opt/spack
      padded_length: 128
  packages:
    all:
      require: target=x86_64_v2
  mirrors:
    local-buildcache: oci://ghcr.io/<organization>/<repository>


A GitHub action can then be used to install the packages and push them to the build cache:

name: Install Spack packages

on: push

env:
  SPACK_COLOR: always

jobs:
  example:
    runs-on: ubuntu-22.04
    permissions:
      packages: write
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Checkout Spack
        uses: actions/checkout@v3
        with:
          repository: spack/spack
          path: spack
      - name: Setup Spack
        run: echo "$PWD/spack/bin" >> "$GITHUB_PATH"
      - name: Concretize
        run: spack -e . concretize
      - name: Install
        run: spack -e . install --no-check-signature
      - name: Run tests
        run: ./my_view/bin/python3 -c 'print("hello world")'
      - name: Push to buildcache
        run: |
          spack -e . mirror set --oci-username ${{ github.actor }} --oci-password "${{ secrets.GITHUB_TOKEN }}" local-buildcache
          spack -e . buildcache push --base-image ubuntu:22.04 --unsigned --update-index local-buildcache
        if: ${{ !cancelled() }}


The first time this action runs, it will build the packages from source and push them to the build cache. Subsequent runs will pull the binaries from the build cache. The concretizer will ensure that prebuilt binaries are favored over source builds.

The build cache entries appear in the GitHub Packages section of your repository, and contain instructions for pulling and running them with docker or podman.

Using Spack's public build cache for GitHub Actions

Spack offers a public build cache for GitHub Actions with a set of common packages, which lets you get started quickly. See the following resources for more information:

spack/github-actions-buildcache

spack buildcache

spack buildcache push

Creates a tarball of an installed Spack package and all its dependencies. Tarballs are checksummed, signed if gpg2 is available, and placed in a build_cache directory that can be copied to a mirror. Commands like spack buildcache install will search Spack mirrors for build_cache to get the list of build caches.

Arguments    Description
<specs>      list of partial specs or hashes with a leading / to match from installed packages and used for creating build caches
-d <path>    directory in which the build_cache directory is created, defaults to .
-f           overwrite .spack file in build_cache directory if it exists
-k <key>     the key to sign the package with; where multiple keys exist, the package will be unsigned unless -k is used
-r           make paths in binaries relative before creating the tarball
-y           answer yes to all 'create unsigned build_cache' questions

spack buildcache list

Retrieves all specs for build caches available on a Spack mirror.

Arguments Description
<specs> list of partial package specs to be matched against specs downloaded for build caches

E.g. spack buildcache list gcc will print only commands to install gcc package(s)

spack buildcache install

Retrieves all specs for build caches available on a Spack mirror and installs build caches with specs matching the specs input.

Arguments    Description
<specs>      list of partial package specs or hashes with a leading / to be installed from build caches
-f           remove install directory if it exists before unpacking the tarball
-y           answer yes to all 'don't verify package with gpg' questions

spack buildcache keys

List public keys available on Spack mirror.

Arguments    Description
-i           trust the downloaded keys, prompting for each
-y           answer yes to all, trusting all downloaded keys

BOOTSTRAPPING

In the Getting Started section we already mentioned that Spack can bootstrap some of its dependencies, including clingo. In fact, there is an entire command dedicated to the management of every aspect of bootstrapping:

$ spack bootstrap --help
usage: spack bootstrap [-h] SUBCOMMAND ...
manage bootstrap configuration
positional arguments:
  SUBCOMMAND
    now       Spack ready, right now!
    status    get the status of Spack
    enable    enable bootstrapping
    disable   disable bootstrapping
    reset     reset bootstrapping configuration to Spack defaults
    root      get/set the root bootstrap directory
    list      list all the sources of software to bootstrap Spack
    add       add a new source for bootstrapping
    remove    remove a bootstrapping source
    mirror    create a local mirror to bootstrap Spack

options:
  -h, --help  show this help message and exit


Spack is configured to bootstrap its dependencies lazily by default; i.e. the first time they are needed and can't be found. You can readily check if any prerequisite for using Spack is missing by running:

% spack bootstrap status
Spack v0.19.0 - python@3.8

[FAIL] Core Functionalities
  [B] MISSING "clingo": required to concretize specs

[FAIL] Binary packages
  [B] MISSING "gpg2": required to sign/verify buildcaches

Spack will take care of bootstrapping any missing dependency marked as [B].
Dependencies marked as [-] are instead required to be found on the system.

% echo $?
1


In the case of the output shown above, Spack detected that both clingo and gnupg are missing, and it gives detailed information on why they are needed and whether they can be bootstrapped. The return code of this command summarizes the results: if any dependencies are missing, the return code is 1, otherwise 0. Running a command that concretizes a spec, like:

% spack solve zlib
==> Bootstrapping clingo from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.1/build_cache/darwin-catalina-x86_64/apple-clang-12.0.0/clingo-bootstrap-spack/darwin-catalina-x86_64-apple-clang-12.0.0-clingo-bootstrap-spack-p5on7i4hejl775ezndzfdkhvwra3hatn.spack
==> Installing "clingo-bootstrap@spack%apple-clang@12.0.0~docs~ipo+python build_type=Release arch=darwin-catalina-x86_64" from a buildcache
[ ... ]


automatically triggers the bootstrapping of clingo from pre-built binaries as expected.

Users can also bootstrap all the dependencies needed by Spack in a single command, which might be useful for setting up containers or other similar environments:

% spack bootstrap now


The Bootstrapping store

The software installed for bootstrapping purposes is deployed in a separate store. Its location can be checked with the following command:

% spack bootstrap root


It can also be changed with the same command by specifying the desired new path:

% spack bootstrap root /opt/spack/bootstrap


You can check what is installed in the bootstrapping store at any time using:

% spack find -b
==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store"
==> 11 installed packages
-- darwin-catalina-x86_64 / apple-clang@12.0.0 ------------------
clingo-bootstrap@spack  libassuan@2.5.5  libgpg-error@1.42  libksba@1.5.1  pinentry@1.1.1  zlib@1.2.11
gnupg@2.3.1             libgcrypt@1.9.3  libiconv@1.16      npth@1.6       python@3.8


If needed, you can remove all the software in the current bootstrapping store with:

% spack clean -b
==> Removing bootstrapped software and configuration in "/Users/spack/.spack/bootstrap"
% spack find -b
==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store"
==> 0 installed packages


Enabling and disabling bootstrapping methods

Bootstrapping is always performed by trying the methods listed by:

$ spack bootstrap list
Name: github-actions-v0.5 ENABLED
  Type: buildcache
  Info:
    url: https://mirror.spack.io/bootstrap/github-actions/v0.5
    homepage: https://github.com/spack/spack-bootstrap-mirrors
    releases: https://github.com/spack/spack-bootstrap-mirrors/releases
  Description:
    Buildcache generated from a public workflow using Github Actions.
    The sha256 checksum of binaries is checked before installation.

Name: github-actions-v0.4 ENABLED
  Type: buildcache
  Info:
    url: https://mirror.spack.io/bootstrap/github-actions/v0.4
    homepage: https://github.com/spack/spack-bootstrap-mirrors
    releases: https://github.com/spack/spack-bootstrap-mirrors/releases
  Description:
    Buildcache generated from a public workflow using Github Actions.
    The sha256 checksum of binaries is checked before installation.

Name: spack-install ENABLED
  Type: install
  Info:
    url: https://mirror.spack.io
  Description:
    Specs built from sources downloaded from the Spack public mirror.


in the order they appear, from top to bottom. By default, Spack is configured to try bootstrapping from pre-built binaries first, and to fall back to bootstrapping from sources if that fails.

If need be, you can disable bootstrapping altogether by running:

% spack bootstrap disable


in which case it's your responsibility to ensure Spack runs in an environment where all its prerequisites are installed. You can also configure Spack to skip certain bootstrapping methods by disabling them specifically:

% spack bootstrap disable github-actions
==> "github-actions" is now disabled and will not be used for bootstrapping


tells Spack to skip trying to bootstrap from binaries. To add the "github-actions" method back, run:

% spack bootstrap enable github-actions


There is also an option to reset the bootstrapping configuration to Spack's defaults:

% spack bootstrap reset
==> Bootstrapping configuration is being reset to Spack's defaults. Current configuration will be lost.
Do you want to continue? [Y/n]
%


Creating a mirror for air-gapped systems

Spack's default configuration for bootstrapping relies on the user having access to the internet, either to fetch pre-compiled binaries or source tarballs. Sometimes though Spack is deployed on air-gapped systems where such access is denied.

To help with such situations, Spack has a command that recreates, in a local folder of your choice, a mirror containing the source tarballs and/or binary packages needed for bootstrapping.

% spack bootstrap mirror --binary-packages /opt/bootstrap
==> Adding "clingo-bootstrap@spack+python %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "gnupg@2.3: %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "patchelf@0.13.1:0.13.99 %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding binary packages from "https://github.com/alalazo/spack-bootstrap-mirrors/releases/download/v0.1-rc.2/bootstrap-buildcache.tar.gz" to the mirror at /opt/bootstrap/local-mirror
To register the mirror on the platform where it's supposed to be used run the following command(s):

% spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
% spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries


This command needs to be run on a machine with internet access and the resulting folder has to be moved over to the air-gapped system. Once the local sources are added using the commands suggested at the prompt, they can be used to bootstrap Spack.

COMMAND REFERENCE

This is a reference for all commands in the Spack command line interface. The same information is available through spack help.

Commands that also have sections in the main documentation have a link to "More documentation".

Category Commands
Administration clone, deprecate, make-installer, mark, reindex, test, test-env, verify

Query packages dependencies, dependents, diff, find, graph, info, list, location, providers, resource, tags

Build packages build-env, ci, clean, dev-build, fetch, gc, install, log-parse, patch, restage, spec, stage, uninstall

Configuration config, external, mirror, repo, tutorial

Container containerize

Developer blame, cd, commands, debug, license, maintainers, pkg, pydoc, python, solve, style, unit-test, url

Environments add, change, concretize, deconcretize, develop, env, remove, undevelop, view

Extensions extensions

More help docs, help

Create packages buildcache, checksum, create, edit, gpg, versions

System arch, audit, bootstrap, compiler, compilers

User environment load, module, unload


----



spack

A flexible package manager that supports multiple versions, configurations, platforms, and compilers.

spack [-hHdklLmbpvV] [--color {always,never,auto}] [-c CONFIG_VARS] [-C DIR] [--timestamp] [--pdb]
      [-e ENV | -D DIR | -E] [--use-env-repo] [--sorted-profile STAT] [--lines LINES] [--stacktrace] [--backtrace]
      [--print-shell-vars PRINT_SHELL_VARS]
      COMMAND ...


Optional arguments

-h, --help              show this help message and exit
-H, --all-help          show help for all commands (same as spack help --all)
--color {always,never,auto}
                        when to colorize output (default: auto)
-c CONFIG_VARS, --config CONFIG_VARS
                        add one or more custom, one off config settings
-C DIR, --config-scope DIR
                        add a custom configuration scope
-d, --debug             write out debug messages
--timestamp             add a timestamp to tty output
--pdb                   run spack under the pdb debugger
-e ENV, --env ENV       run with a specific environment (see spack env)
-D DIR, --env-dir DIR   run with an environment directory (ignore managed environments)
-E, --no-env            run without any environments activated (see spack env)
--use-env-repo          when running in an environment, use its package repository
-k, --insecure          do not check ssl certificates when downloading
-l, --enable-locks      use filesystem locking (default)
-L, --disable-locks     do not use filesystem locking (unsafe)
-m, --mock              use mock packages instead of real ones
-b, --bootstrap         use bootstrap configuration (bootstrap store, config, externals)
-p, --profile           profile execution using cProfile
--sorted-profile STAT   profile and sort
--lines LINES           lines of profile output or 'all' (default: 20)
-v, --verbose           print additional output during builds
--stacktrace            add stacktraces to all printed statements
--backtrace             always show backtraces for exceptions
-V, --version           show version number and exit
--print-shell-vars PRINT_SHELL_VARS
                        print info needed by setup-env.*sh

Subcommands

  • add
  • arch
  • audit
  • blame
  • bootstrap
  • build-env
  • buildcache
  • cd
  • change
  • checksum
  • ci
  • clean
  • clone
  • commands
  • compiler
  • compilers
  • concretize
  • config
  • containerize
  • create

  • debug
  • deconcretize
  • dependencies
  • dependents
  • deprecate
  • dev-build
  • develop
  • diff
  • docs
  • edit
  • env
  • extensions
  • external
  • fetch
  • find
  • gc
  • gpg
  • graph
  • help

  • info
  • install
  • license
  • list
  • load
  • location
  • log-parse
  • maintainers
  • make-installer
  • mark
  • mirror
  • module
  • patch
  • pkg
  • providers
  • pydoc
  • python
  • reindex
  • remove

  • repo
  • resource
  • restage
  • solve
  • spec
  • stage
  • style
  • tags
  • test
  • test-env
  • tutorial
  • undevelop
  • uninstall
  • unit-test
  • unload
  • url
  • verify
  • versions
  • view



----



spack add

add a spec to an environment

spack add [-h] [-l LIST_NAME] ...


Positional arguments

one or more package specs

Optional arguments

-h, --help            show this help message and exit
-l LIST_NAME, --list-name LIST_NAME
                      name of the list to add specs to


----



spack arch

print architecture information about this machine

spack arch [-hg] [--known-targets] [-p | -o | -t] [-f | -b]


Optional arguments

-h, --help            show this help message and exit
-g, --generic-target  show the best generic target
--known-targets       show a list of all known targets and exit
-p, --platform        print only the platform
-o, --operating-system
                      print only the operating system
-t, --target          print only the target
-f, --frontend        print frontend
-b, --backend         print backend


----



spack audit

audit configuration files, packages, etc.

spack audit [-h] SUBCOMMAND ...


Optional arguments

-h, --help            show this help message and exit

Subcommands

  • audit configs
  • audit externals
  • audit packages-https
  • audit packages
  • audit list



----



spack audit configs

spack audit configs [-h]


Optional arguments

show this help message and exit


----



spack audit externals

spack audit externals [-h] [--list] [PKG ...]


Positional arguments

package to be analyzed (if none all packages will be processed)

Optional arguments

show this help message and exit
if passed, list which packages have detection tests


----



spack audit packages-https

spack audit packages-https [-h] [--all] [PKG ...]


Positional arguments

package to be analyzed (if none all packages will be processed)

Optional arguments

show this help message and exit
audit all packages


----



spack audit packages

spack audit packages [-h] [PKG ...]


Positional arguments

package to be analyzed (if none all packages will be processed)

Optional arguments

show this help message and exit


----



spack audit list

spack audit list [-h]


Optional arguments

show this help message and exit


----



spack blame

show contributors to packages

spack blame [-h] [-t | -p | -g] [--json] package_or_file


Positional arguments

name of package to show contributions for, or path to a file in the spack repo

Optional arguments

show this help message and exit
sort by last modification date (default)
sort by percent of code
show git blame output instead of summary
output blame as machine-readable json records
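
For example, to emit machine-readable contribution records for a package (package name is illustrative):

$ spack blame --json zlib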


----



spack bootstrap

manage bootstrap configuration

spack bootstrap [-h] SUBCOMMAND ...


Optional arguments

show this help message and exit

Subcommands

  • bootstrap now
  • bootstrap status
  • bootstrap enable
  • bootstrap disable
  • bootstrap reset
  • bootstrap root
  • bootstrap list
  • bootstrap add
  • bootstrap remove
  • bootstrap mirror



----



spack bootstrap now

spack bootstrap now [-h] [--dev]


Optional arguments

show this help message and exit
bootstrap dev dependencies too


----



spack bootstrap status

spack bootstrap status [-h] [--optional] [--dev]


Optional arguments

show this help message and exit
show the status of rarely used optional dependencies
show the status of dependencies needed to develop Spack


----



spack bootstrap enable

spack bootstrap enable [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] [name]


Positional arguments

name of the source to be enabled

Optional arguments

show this help message and exit
configuration scope to read/modify


----



spack bootstrap disable

spack bootstrap disable [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] [name]


Positional arguments

name of the source to be disabled

Optional arguments

show this help message and exit
configuration scope to read/modify


----



spack bootstrap reset

spack bootstrap reset [-hy]


Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack bootstrap root

spack bootstrap root [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] [path]


Positional arguments

set the bootstrap directory to this value

Optional arguments

show this help message and exit
configuration scope to read/modify


----



spack bootstrap list

spack bootstrap list [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT]


Optional arguments

show this help message and exit
configuration scope to read/modify


----



spack bootstrap add

spack bootstrap add [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] [--trust]

name metadata_dir


Positional arguments

name of the new source of software
directory where to find metadata files

Optional arguments

show this help message and exit
configuration scope to read/modify
enable the source immediately upon addition


----



spack bootstrap remove

spack bootstrap remove [-h] name


Positional arguments

name of the source to be removed

Optional arguments

show this help message and exit


----



spack bootstrap mirror

spack bootstrap mirror [-h] [--binary-packages] [--dev] DIRECTORY


Positional arguments

root directory in which to create the mirror and metadata

Optional arguments

show this help message and exit
download public binaries in the mirror
download dev dependencies too
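
For example, to create a local mirror of bootstrapping sources along with public binaries (directory is illustrative):

$ spack bootstrap mirror --binary-packages /tmp/bootstrap-mirror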


----



spack build-env

run a command in a spec's install environment, or dump its environment to screen or file

spack build-env [-hU] [--clean] [--dirty] [--reuse] [--reuse-deps] [--dump FILE] [--pickle FILE] ...


More documentation

Positional arguments

spec [--] [cmd]...
specs of package environment to emulate

Optional arguments

show this help message and exit
unset harmful variables in the build environment (default)
preserve user environment in spack's build environment (danger!)
do not reuse installed deps; build newest configuration
reuse installed packages/buildcaches when possible
reuse installed dependencies only
dump a source-able environment to FILE
dump a pickled source-able environment to FILE
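
For example (spec and command are illustrative):

# Run make inside zlib's build environment
$ spack build-env zlib -- make
# Dump a source-able version of the same environment to a file
$ spack build-env --dump build.env zlib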


----



spack buildcache

create, download and install binary packages

spack buildcache [-h] SUBCOMMAND ...


More documentation

Optional arguments

show this help message and exit

Subcommands

  • buildcache push
  • buildcache install
  • buildcache list
  • buildcache keys
  • buildcache preview
  • buildcache check
  • buildcache download
  • buildcache get-buildcache-name
  • buildcache save-specfile
  • buildcache sync
  • buildcache update-index



----



spack buildcache push

spack buildcache push [-hf] [--allow-root] [--unsigned | --key key] [--update-index] [--spec-file SPEC_FILE]

[--only {package,dependencies}] [--fail-fast] [--base-image BASE_IMAGE] [-j JOBS]
mirror ...


Positional arguments

mirror name, path, or URL
one or more package specs

Optional arguments

show this help message and exit
overwrite tarball if it exists
allow install root string in binary files after RPATH substitution
push unsigned buildcache tarballs
key for signing
regenerate buildcache index after building package(s)
--spec-file SPEC_FILE  create buildcache entry for spec from json or yaml file
select the buildcache mode. The default is to build a cache for the package along with all its dependencies. Alternatively, one can decide to build a cache for only the package or only the dependencies
stop pushing on first failure (default is best effort)
specify the base image for the buildcache
explicitly set number of parallel jobs
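
For example, to push unsigned tarballs to a configured mirror (mirror and spec names are illustrative):

$ spack buildcache push --unsigned my-mirror hdf5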


----



spack buildcache install

spack buildcache install [-hfmuo] ...


Positional arguments

one or more package specs

Optional arguments

show this help message and exit
overwrite install directory if it exists
allow all matching packages
install unsigned buildcache tarballs for testing
install specs from other architectures instead of default platform and OS


----



spack buildcache list

spack buildcache list [-hlLNva] ...


Positional arguments

one or more package specs

Optional arguments

show this help message and exit
show dependency hashes as well as versions
show full dependency hashes as well as versions
show fully qualified package names
show variants in output (can be long)
list specs for all available architectures instead of default platform and OS


----



spack buildcache keys

spack buildcache keys [-hitf]


Optional arguments

show this help message and exit
install keys pulled from the mirror
trust all downloaded keys
force new download of keys


----



spack buildcache preview

spack buildcache preview [-h] ...


Positional arguments

one or more installed package specs

Optional arguments

show this help message and exit


----



spack buildcache check

spack buildcache check [-h] [-m MIRROR_URL] [-o OUTPUT_FILE] [--scope {defaults,system,site,user}[/PLATFORM] or

env:ENVIRONMENT] (-s SPEC | --spec-file SPEC_FILE)


Optional arguments

show this help message and exit
override any configured mirrors with this mirror URL
file where rebuild info should be written
configuration scope containing mirrors to check
check single spec instead of release specs file
--spec-file SPEC_FILE  check single spec from json or yaml file instead of release specs file


----



spack buildcache download

spack buildcache download [-h] (-s SPEC | --spec-file SPEC_FILE) -p PATH


Optional arguments

show this help message and exit
download built tarball for spec from mirror
--spec-file SPEC_FILE  download built tarball for spec (from json or yaml file) from mirror
path to directory where tarball should be downloaded


----



spack buildcache get-buildcache-name

spack buildcache get-buildcache-name [-h] (-s SPEC | --spec-file SPEC_FILE)


Optional arguments

show this help message and exit
spec string for which buildcache name is desired
--spec-file SPEC_FILE  path to spec json or yaml file for which buildcache name is desired


----



spack buildcache save-specfile

spack buildcache save-specfile [-h] (--root-spec ROOT_SPEC | --root-specfile ROOT_SPECFILE) -s SPECS --specfile-dir

SPECFILE_DIR


Optional arguments

show this help message and exit
root spec of dependent spec
path to json or yaml file containing root spec of dependent spec
list of dependent specs for which saved yaml is desired
path to directory where spec yamls should be saved


----



spack buildcache sync

spack buildcache sync [-h] [--manifest-glob MANIFEST_GLOB] [source mirror] [destination mirror]


Positional arguments

source mirror name, path, or URL
destination mirror name, path, or URL

Optional arguments

show this help message and exit
a quoted glob pattern identifying copy manifest files


----



spack buildcache update-index

spack buildcache update-index [-hk] mirror


Positional arguments

destination mirror name, path, or URL

Optional arguments

show this help message and exit
if provided, key index will be updated as well as package index


----



spack cd

cd to spack directories in the shell

spack cd [-h] [-m | -r | -i | -p | -P | -s | -S | --source-dir | -b | -e [name]] [--first] ...


More documentation

Positional arguments

package spec

Optional arguments

show this help message and exit
spack python module directory
spack installation root
install prefix for spec (spec need not be installed)
directory enclosing a spec's package.py file
top-level packages directory for Spack
stage directory for a spec
top level stage directory
source directory for a spec (requires it to be staged first)
build directory for a spec (requires it to be staged first)
location of the named or current environment
use the first match if multiple packages match the spec


----



spack change

change an existing spec in an environment

spack change [-ha] [-l LIST_NAME] [--match-spec MATCH_SPEC] ...


Positional arguments

one or more package specs

Optional arguments

show this help message and exit
name of the list to remove specs from
if name is ambiguous, supply a spec to match
change all matching specs (allow changing more than one spec)


----



spack checksum

checksum available versions of a package

spack checksum [-h] [--keep-stage] [--batch] [--latest] [--preferred] [--add-to-package | --verify] [-j JOBS]

package [versions ...]


More documentation

Positional arguments

name or spec (e.g. cmake or cmake@3.18)
checksum these specific versions (if omitted, Spack searches for remote versions)

Optional arguments

show this help message and exit
don't clean up staging area when command completes
don't ask which versions to checksum
checksum the latest available version
checksum the known Spack preferred version
add new versions to package
verify known package checksums
explicitly set number of parallel jobs
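
For example, to checksum only the newest available version of a package (package name is illustrative):

$ spack checksum --latest cmake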


----



spack ci

manage continuous integration pipelines

spack ci [-h] SUBCOMMAND ...


More documentation

Optional arguments

show this help message and exit

Subcommands

  • ci generate
  • ci rebuild-index
  • ci rebuild
  • ci reproduce-build



----



spack ci generate

generate jobs file from a CI-aware spack file

If you want to report the results on CDash, you will need to set the SPACK_CDASH_AUTH_TOKEN environment variable before invoking this command; its value must be the CDash authorization token needed to create a build group and register all generated jobs under it.

spack ci generate [-h] [--output-file OUTPUT_FILE] [--copy-to COPY_TO] [--optimize] [--dependencies]

[--buildcache-destination BUILDCACHE_DESTINATION] [--prune-dag | --no-prune-dag]
[--check-index-only] [--artifacts-root ARTIFACTS_ROOT]


More documentation

Optional arguments

show this help message and exit
pathname for the generated gitlab ci yaml file
path to additional directory for job files
(experimental) optimize the gitlab yaml file for size
(experimental) disable DAG scheduling (use 'plain' dependencies)
override the mirror configured in the environment
skip up-to-date specs
process up-to-date specs
only check spec state from buildcache indices
path to the root of the artifacts directory


----



spack ci rebuild-index

rebuild the buildcache index for the remote mirror

use the active, gitlab-enabled environment to rebuild the buildcache index for the associated mirror

spack ci rebuild-index [-h]


More documentation

Optional arguments

show this help message and exit


----



spack ci rebuild

rebuild a spec if it is not on the remote mirror

check a single spec against the remote mirror, and rebuild it from source if the mirror does not contain the hash

spack ci rebuild [-ht] [--fail-fast]


More documentation

Optional arguments

show this help message and exit
run stand-alone tests after the build
stop stand-alone tests after the first failure


----



spack ci reproduce-build

generate instructions for reproducing the spec rebuild job

artifacts of the provided gitlab pipeline rebuild job's URL will be used to derive instructions for reproducing the build locally

spack ci reproduce-build [-hs] [--runtime {docker,podman}] [--working-dir WORKING_DIR]

[--gpg-file GPG_FILE | --gpg-url GPG_URL]
job_url


More documentation

Positional arguments

URL of job artifacts bundle

Optional arguments

show this help message and exit
container runtime to use
where to unpack artifacts
run docker reproducer automatically
path to public GPG key for validating binary cache installs
URL to public GPG key for validating binary cache installs


----



spack clean

remove temporary build files and/or downloaded archives

spack clean [-hsdfmpba] ...


More documentation

Positional arguments

one or more package specs

Optional arguments

show this help message and exit
remove all temporary build stages (default)
remove cached downloads
force removal of all install failure tracking markers
remove long-lived caches, like the virtual package index
remove .pyc, .pyo files and __pycache__ folders
remove software and configuration needed to bootstrap Spack
equivalent to -sdfmp (does not include --bootstrap)


----



spack clone

create a new installation of spack in another prefix

spack clone [-h] [-r REMOTE] prefix


Positional arguments

name of prefix where we should install spack

Optional arguments

show this help message and exit
name of the remote to clone from


----



spack commands

list available spack commands

spack commands [-ha] [--update-completion] [--format {subcommands,rst,names,bash,fish}] [--header FILE]

[--update FILE]
...


Positional arguments

list of rst files to search for _cmd-spack-<cmd> cross-refs

Optional arguments

show this help message and exit
regenerate spack's tab completion scripts
include command aliases
format to be used to print the output (default: names)
prepend contents of FILE to the output (useful for rst format)
write output to the specified file, if any command is newer


----



spack compiler

manage compilers

spack compiler [-h] SUBCOMMAND ...


Optional arguments

show this help message and exit

Subcommands

  • compiler find
  • compiler remove
  • compiler list
  • compiler info



----



spack compiler find

spack compiler find [-h] [--mixed-toolchain | --no-mixed-toolchain] [--scope {defaults,system,site,user}[/PLATFORM] or

env:ENVIRONMENT]
...


More documentation

Positional arguments

add_paths

Optional arguments

show this help message and exit
Allow mixed toolchains (for example: clang, clang++, gfortran)
Do not allow mixed toolchains (for example: clang, clang++, gfortran)
configuration scope to modify


----



spack compiler remove

spack compiler remove [-ha] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] compiler_spec


Positional arguments

compiler_spec

Optional arguments

show this help message and exit
remove ALL compilers that match spec
configuration scope to modify


----



spack compiler list

spack compiler list [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT]


Optional arguments

show this help message and exit
configuration scope to read from


----



spack compiler info

spack compiler info [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] compiler_spec


More documentation

Positional arguments

compiler_spec

Optional arguments

show this help message and exit
configuration scope to read from


----



spack compilers

list available compilers

spack compilers [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT]


More documentation

Optional arguments

show this help message and exit
configuration scope to read/modify


----



spack concretize

concretize an environment and write a lockfile

spack concretize [-hfqU] [--test {root,all}] [--reuse] [--reuse-deps] [-j JOBS]


Optional arguments

show this help message and exit
re-concretize even if already concretized
concretize with test dependencies of only root packages or all packages
don't print concretized specs
do not reuse installed deps; build newest configuration
reuse installed packages/buildcaches when possible
reuse installed dependencies only
explicitly set number of parallel jobs
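
For example, to force re-concretization of the active environment while reusing already-installed packages:

$ spack concretize -f --reuse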


----



spack config

get and set configuration options

spack config [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] SUBCOMMAND ...


Optional arguments

show this help message and exit
configuration scope to read/modify

Subcommands

  • config get
  • config blame
  • config edit
  • config list
  • config add
  • config prefer-upstream
  • config remove
  • config update
  • config revert



----



spack config get

spack config get [-h] [section]


More documentation

Positional arguments

configuration section to print

Optional arguments

show this help message and exit


----



spack config blame

spack config blame [-h] section


More documentation

Positional arguments

configuration section to print

Optional arguments

show this help message and exit


----



spack config edit

spack config edit [-h] [--print-file] [section]


Positional arguments

configuration section to edit

Optional arguments

show this help message and exit
print the file name that would be edited


----



spack config list

spack config list [-h]


Optional arguments

show this help message and exit


----



spack config add

spack config add [-h] [-f FILE] [path]


Positional arguments

colon-separated path to config that should be added, e.g. 'config:default:true'

Optional arguments

show this help message and exit
file from which to set all config values
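
For example, using the path syntax shown above:

$ spack config add config:default:true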


----



spack config prefer-upstream

spack config prefer-upstream [-h] [--local]


Optional arguments

show this help message and exit
set packages preferences based on local installs, rather than upstream


----



spack config remove

spack config remove [-h] path


Positional arguments

colon-separated path to config that should be removed, e.g. 'config:default:true'

Optional arguments

show this help message and exit


----



spack config update

spack config update [-hy] section


Positional arguments

section to update

Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack config revert

spack config revert [-hy] section


Positional arguments

section to update

Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack containerize

create recipes to build images for different container runtimes

spack containerize [-h] [--list-os] [--last-stage {bootstrap,build,final}]


Optional arguments

show this help message and exit
list all the OS that can be used in the bootstrap phase and exit
last stage in the container recipe


----



spack create

create a new package file

spack create [-hfb] [--keep-stage] [-n NAME] [-t TEMPLATE] [-r REPO] [-N NAMESPACE] [--skip-editor] [url]


More documentation

Positional arguments

url of package archive

Optional arguments

show this help message and exit
don't clean up staging area when command completes
name of the package to create
build system template to use
path to a repository where the package should be created
specify a namespace for the package
overwrite any existing package file with the same name
skip the edit session for the package (e.g., automation)
don't ask which versions to checksum
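
For example (URL and package name are illustrative):

$ spack create -n mypkg https://example.com/mypkg-1.0.tar.gz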


----



spack debug

debugging commands for troubleshooting Spack

spack debug [-h] SUBCOMMAND ...


Optional arguments

show this help message and exit

Subcommands

  • debug create-db-tarball
  • debug report





----



spack debug create-db-tarball

spack debug create-db-tarball [-h]


Optional arguments

show this help message and exit


----



spack debug report

spack debug report [-h]


Optional arguments

show this help message and exit


----



spack deconcretize

remove specs from the concretized lockfile of an environment

spack deconcretize [-hya] [--root] ...


Positional arguments

one or more package specs

Optional arguments

show this help message and exit
deconcretize only specific environment roots
assume "yes" is the answer to every confirmation request
deconcretize ALL specs that match each supplied spec


----



spack dependencies

show dependencies of a package

spack dependencies [-hitV] [--deptype DEPTYPE] ...


Positional arguments

package spec

Optional arguments

show this help message and exit
list installed dependencies of an installed spec instead of possible dependencies of a package
show all transitive dependencies
comma-separated list of deptypes to traverse (default=build,link,run,test)
do not expand virtual dependencies


----



spack dependents

show packages that depend on another

spack dependents [-hit] ...


Positional arguments

package spec

Optional arguments

show this help message and exit
list installed dependents of an installed spec instead of possible dependents of a package
show all transitive dependents


----



spack deprecate

replace one package with another via symlinks

spack deprecate [-hy] [-d | -D] [-i | -I] [-l {soft,hard}] ...


Positional arguments

spec to deprecate and spec to use as deprecator

Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request
deprecate dependencies (default)
do not deprecate dependencies
concretize and install deprecator spec
deprecator spec must already be installed (default)
type of filesystem link to use for deprecation (default soft)


----



spack dev-build

developer build: build from code in current working directory

spack dev-build [-hinqU] [-j JOBS] [-d SOURCE_PATH] [--deprecated] [--keep-prefix] [--skip-patch] [--drop-in SHELL]

[--test {root,all}] [-b BEFORE | -u UNTIL] [--clean | --dirty] [--reuse] [--reuse-deps]
...


Positional arguments

package spec

Optional arguments

show this help message and exit
explicitly set number of parallel jobs
path to source directory (defaults to the current directory)
do not try to install dependencies of requested packages
do not use checksums to verify downloaded files (unsafe)
fetch deprecated versions without warning
do not remove the install prefix if installation fails
skip patching for the developer build
do not display verbose build output while installing
drop into a build environment in a new shell, e.g., bash
run tests on only root packages or all packages
phase to stop before when installing (default None)
phase to stop after when installing (default None)
unset harmful variables in the build environment (default)
preserve user environment in spack's build environment (danger!)
do not reuse installed deps; build newest configuration
reuse installed packages/buildcaches when possible
reuse installed dependencies only


----



spack develop

add a spec to an environment's dev-build information

spack develop [-h] [-p PATH] [--no-clone | --clone] [-f FORCE] ...


Positional arguments

package spec

Optional arguments

show this help message and exit
source location of package
do not clone; the package already exists at the source path
clone the package even if the path already exists
remove any files or directories that block cloning source code


----



spack diff

compare two specs

spack diff [-h] [--json] [--first] [-a ATTRIBUTE] ...


Positional arguments

one or more package specs

Optional arguments

show this help message and exit
dump json output instead of pretty printing
load the first match if multiple packages match the spec
select the attributes to show (defaults to all)


----



spack docs

open spack documentation in a web browser

spack docs [-h]


Optional arguments

show this help message and exit


----



spack edit

open package files in $EDITOR

spack edit [-h] [-b | -c | -d | -t | -m | -r REPO | -N NAMESPACE] [package]


More documentation

Positional arguments

package name

Optional arguments

show this help message and exit
edit the build system with the supplied name
edit the command with the supplied name
edit the docs with the supplied name
edit the test with the supplied name
edit the main spack module with the supplied name
path to repo to edit package in
namespace of package to edit


----



spack env

manage virtual environments

spack env [-h] SUBCOMMAND ...


Optional arguments

show this help message and exit

Subcommands

  • env activate
  • env deactivate
  • env create
  • env remove
  • env list
  • env status
  • env loads
  • env view
  • env update
  • env revert
  • env depfile



----



spack env activate

spack env activate [-hp] [--sh | --csh | --fish | --bat | --pwsh] [--with-view name | --without-view] [--temp]

[-d DIR]
[env]


Positional arguments

name of environment to activate

Optional arguments

show this help message and exit
print sh commands to activate the environment
print csh commands to activate the environment
print fish commands to activate the environment
print bat commands to activate the environment
print pwsh commands to activate the environment
set runtime environment variables for specific view
do not set runtime environment variables for any view
decorate the command line prompt when activating
create and activate an environment in a temporary directory
activate the environment in this directory
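
For example, to activate a named environment and decorate the shell prompt (environment name is illustrative):

$ spack env activate -p myenv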


----



spack env deactivate

spack env deactivate [-h] [--sh | --csh | --fish | --bat | --pwsh]


Optional arguments

show this help message and exit
print sh commands to deactivate the environment
print csh commands to deactivate the environment
print fish commands to deactivate the environment
print bat commands to deactivate the environment
print pwsh commands to deactivate the environment


----



spack env create

spack env create [-hd] [--keep-relative] [--without-view | --with-view WITH_VIEW] env [envfile]


Positional arguments

name of environment to create
either a lockfile (must end with '.json' or '.lock') or a manifest file

Optional arguments

show this help message and exit
create an environment in a specific directory
copy relative develop paths verbatim into the new environment when initializing from envfile
do not maintain a view for this environment
specify that this environment should maintain a view at the specified path (by default the view is maintained in the environment directory)
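
For example (environment name and lockfile are illustrative):

# Create an empty named environment
$ spack env create myenv
# Create an environment from an existing lockfile
$ spack env create myenv2 spack.lock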


----



spack env remove

spack env remove [-hy] env [env ...]


Positional arguments

environment(s) to remove

Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack env list

spack env list [-h]


Optional arguments

show this help message and exit


----



spack env status

spack env status [-h]


Optional arguments

show this help message and exit


----



spack env loads

spack env loads [-hr] [-n MODULE_SET_NAME] [-m {tcl,lmod}] [--input-only] [-p PREFIX] [-x EXCLUDE]


Optional arguments

show this help message and exit
module set for which to generate load operations
type of module system to generate loads for
generate input for module command (instead of a shell script)
prepend to module names when issuing module load commands
exclude package from output; may be specified multiple times
recursively traverse spec dependencies


----



spack env view

spack env view [-h] {regenerate,enable,disable} [view_path]


Positional arguments

{regenerate,enable,disable}
action to take for the environment's view
when enabling a view, optionally set the path manually

Optional arguments

show this help message and exit


----



spack env update

spack env update [-hy] env


Positional arguments

name or directory of the environment to update

Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack env revert

spack env revert [-hy] env


Positional arguments

name or directory of the environment to revert

Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack env depfile

spack env depfile [-h] [--make-prefix TARGET] [--make-disable-jobserver]

[--use-buildcache [{auto,only,never},][package:{auto,only,never},][dependencies:{auto,only,never}]]
[-o FILE] [-G {make}]
...


Positional arguments

generate a depfile only for matching specs in the environment

Optional arguments

show this help message and exit
prefix Makefile targets (and variables) with <TARGET>/<name>
disable POSIX jobserver support
when using 'only', redundant build dependencies are pruned from the DAG
write the depfile to FILE rather than to stdout
specify the depfile type


----



spack extensions

list extensions for package

spack extensions [-hlLdp] [-s {packages,installed,all}] ...


More documentation

Positional arguments

spec of package to list extensions for

Optional arguments

show this help message and exit
show dependency hashes as well as versions
show full dependency hashes as well as versions
output dependencies along with found specs
show paths to package install directories
show only part of output


----



spack external

manage external packages in Spack configuration

spack external [-h] SUBCOMMAND ...


Optional arguments

show this help message and exit

Subcommands

  • external find
  • external list
  • external read-cray-manifest




----



spack external find

spack external find [-h] [--not-buildable] [--exclude EXCLUDE] [-p PATH]

[--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] [--all] [-t TAG] [-j JOBS]
...


More documentation

Positional arguments

packages

Optional arguments

show this help message and exit
packages with detected externals won't be built with Spack
packages to exclude from search
one or more alternative search paths for finding externals
configuration scope to modify
search for all packages that Spack knows about
filter a package query by tag (multiple use allowed)
explicitly set number of parallel jobs


----



spack external list

spack external list [-h]


Optional arguments

show this help message and exit


----



spack external read-cray-manifest

spack external read-cray-manifest [-h] [--file FILE] [--directory DIRECTORY] [--ignore-default-dir] [--dry-run]

[--fail-on-error]


Optional arguments

show this help message and exit
--file FILE  specify a location other than the default
--directory DIRECTORY  specify a directory storing a group of manifest files
ignore the default directory of manifest files
don't modify DB with files that are read
if a manifest file cannot be parsed, fail and report the full stack trace


----



spack fetch

fetch archives for packages

spack fetch [-hnmD] [--deprecated] ...


More documentation

Positional arguments

one or more package specs

Optional arguments

show this help message and exit
do not use checksums to verify downloaded files (unsafe)
fetch deprecated versions without warning
fetch only missing (not yet installed) dependencies
also fetch all dependencies


----



spack find

list and search installed packages

spack find [-hdplLNcfumvM] [--format FORMAT | -H | --json] [--groups] [--no-groups] [-t TAG] [--show-full-compiler]

[-x | -X] [--loaded] [--deprecated] [--only-deprecated] [--start-date START_DATE] [--end-date END_DATE]
...


More documentation

Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
output specs with the specified format string
same as '--format {/hash}'; use with xargs or $()
output specs as machine-readable json records
output dependencies along with found specs
show paths to package install directories
display specs in arch/compiler groups (default on)
do not group specs by arch/compiler
show dependency hashes as well as versions
show full dependency hashes as well as versions
filter a package query by tag (multiple use allowed)
show fully qualified package names
show concretized specs in an environment
show spec compiler flags
show full compiler specs
show only specs that were installed explicitly
show only specs that were installed as dependencies
show only specs Spack does not have a package for
show missing dependencies as well as installed specs
show variants in output (can be long)
show only packages loaded in the user environment
show only missing dependencies
show deprecated packages as well as installed specs
show only deprecated packages
--start-date START_DATE  earliest date of installation [YYYY-MM-DD]
--end-date END_DATE  latest date of installation [YYYY-MM-DD]
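
For example:

# Show hashes, install paths, and variants
$ spack find -lpv
# Show only explicitly installed specs
$ spack find -x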


----



spack gc

remove specs that are no longer needed

spack gc [-hy]


Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack gpg

handle GPG actions for spack

spack gpg [-h] SUBCOMMAND ...


More documentation

Optional arguments

show this help message and exit

Subcommands

  • gpg verify
  • gpg trust
  • gpg untrust
  • gpg sign
  • gpg create
  • gpg list
  • gpg init
  • gpg export
  • gpg publish



----



spack gpg verify

spack gpg verify [-h] ... [signature]


Positional arguments

installed package spec
the signature file

Optional arguments

show this help message and exit


----



spack gpg trust

spack gpg trust [-h] keyfile


Positional arguments

add a key to the trust store

Optional arguments

show this help message and exit


----



spack gpg untrust

spack gpg untrust [-h] [--signing] keys [keys ...]


Positional arguments

remove keys from the trust store

Optional arguments

show this help message and exit
allow untrusting signing keys


----



spack gpg sign

spack gpg sign [-h] [--output DEST] [--key KEY] [--clearsign] ...


Positional arguments

installed package spec

Optional arguments

show this help message and exit
the directory to place signatures
the key to use for signing
if specified, create a clearsign signature


----



spack gpg create

spack gpg create [-h] [--comment COMMENT] [--expires EXPIRATION] [--export DEST] [--export-secret DEST] name email


Positional arguments

the name to use for the new key
the email address to use for the new key

Optional arguments

show this help message and exit
a description for the intended use of the key
when the key should expire
export the public key to a file
export the private key to a file
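
For example (name, email, and output file are illustrative):

$ spack gpg create --export mykey.pub "My Name" me@example.com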


----



spack gpg list

spack gpg list [-h] [--trusted] [--signing]


Optional arguments

show this help message and exit
list trusted keys
list keys which may be used for signing


----



spack gpg init

spack gpg init [-h]


Optional arguments

show this help message and exit

--from DIR


----



spack gpg export

spack gpg export [-h] [--secret] location [keys ...]


Positional arguments

where to export keys
the keys to export (all public keys if unspecified)

Optional arguments

show this help message and exit
export secret keys


----



spack gpg publish

spack gpg publish [-h] (-d directory | -m mirror-name | --mirror-url mirror-url) [--rebuild-index] [keys ...]


Positional arguments

keys to publish (all public keys if unspecified)

Optional arguments

show this help message and exit
local directory where keys will be published
name of the mirror where keys will be published
--mirror-url mirror-url  URL of the mirror where keys will be published
regenerate buildcache key index after publishing key(s)


----



spack graph

generate graphs of package dependency relationships

spack graph [-hsci] [-a | -d] [--deptype DEPTYPE] ...


More documentation

Positional arguments

one or more package specs

Optional arguments

show this help message and exit
draw graph as ascii to stdout (default)
generate graph in dot format and print to stdout
graph static (possible) deps, don't concretize (implies --dot)
use different colors for different dependency types
graph installed specs, or specs in the active env (implies --dot)
comma-separated list of deptypes to traverse (default=build,link,run,test)


----



spack help

get help on spack and its commands

spack help [-ha] [--spec help_command]


More documentation

Positional arguments

command to get help on

Optional arguments

show this help message and exit
list all available commands and options
--spec  help on the package specification syntax


----



spack info

get detailed information on a particular package

spack info [-ha] [--detectable] [--maintainers] [--no-dependencies] [--no-variants] [--no-versions] [--phases]

[--tags] [--tests] [--virtuals] [--variants-by-name]
package


More documentation

Positional arguments

package name

Optional arguments

show this help message and exit
output all package information
output information on external detection
output package maintainers
do not output build, link, and run package dependencies
do not output variants
do not output versions
output installation phases
output package tags
output relevant build-time and stand-alone tests
output virtual packages
list variants in strict name order; don't group by condition


----



spack install

build and install packages

spack install [-hnvyU] [--only {package,dependencies}] [-u UNTIL] [-j JOBS] [--overwrite] [--fail-fast]

[--keep-prefix] [--keep-stage] [--dont-restage]
[--use-cache | --no-cache | --cache-only | --use-buildcache [{auto,only,never},][package:{auto,only,never},][dependencies:{auto,only,never}]]
[--include-build-deps] [--no-check-signature] [--show-log-on-error] [--source] [--deprecated] [--fake]
[--only-concrete] [--add | --no-add] [-f SPEC_YAML_FILE] [--clean | --dirty] [--test {root,all}]
[--log-format {junit,cdash}] [--log-file LOG_FILE] [--help-cdash] [--reuse] [--reuse-deps]
...


More documentation

Positional arguments

package spec

Optional arguments

show this help message and exit
select the mode of installation
phase to stop after when installing (default None)
explicitly set number of parallel jobs
reinstall an existing spec, even if it has dependents
stop all builds if any build fails (default is best effort)
don't remove the install prefix if installation fails
don't remove the build stage if installation succeeds
if a partial install is detected, don't delete prior state
check for pre-built Spack packages in mirrors (default)
do not check for pre-built Spack packages in mirrors
only install package from binary mirrors
select the mode of buildcache for the 'package' and 'dependencies'
include build deps when installing from cache, useful for CI pipeline troubleshooting
do not check signatures of binary packages
print full build log to stderr if build fails
install source files in prefix
do not use checksums to verify downloaded files (unsafe)
fetch deprecated versions without warning
display verbose build output while installing
fake install for debug purposes
(with environment) only install already concretized specs
(with environment) add spec to the environment as a root
(with environment) do not add spec to the environment as a root
read specs to install from .yaml files
unset harmful variables in the build environment (default)
preserve user environment in spack's build environment (danger!)
run tests on only root packages or all packages
format to be used for log files
filename for the log file
show usage instructions for CDash reporting

--cdash-upload-url CDASH_UPLOAD_URL
--cdash-build CDASH_BUILD
--cdash-site CDASH_SITE
--cdash-track CDASH_TRACK
--cdash-buildstamp CDASH_BUILDSTAMP

assume "yes" is the answer to every confirmation request
do not reuse installed deps; build newest configuration
reuse installed packages/buildcaches when possible
reuse installed dependencies only
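
For example, to stop on the first failure and run tests on the root package (spec is illustrative):

$ spack install --fail-fast --test root hdf5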


----



spack license

list and check license headers on files in spack

spack license [-h] [--root ROOT] SUBCOMMAND ...


Optional arguments

show this help message and exit
scan a different prefix for license issues

Subcommands

  • license list-files
  • license verify
  • license update-copyright-year




----



spack license list-files

spack license list-files [-h]


Optional arguments

show this help message and exit


----



spack license verify

spack license verify [-h]


Optional arguments

show this help message and exit


----



spack license update-copyright-year

spack license update-copyright-year [-h]


Optional arguments

show this help message and exit


----



spack list

list and search available packages

spack list [-hdv] [--format {name_only,version_json,html}] [-t TAG] [--count | --update FILE] ...


More documentation

Positional arguments

optional case-insensitive glob patterns to filter results

Optional arguments

show this help message and exit
filtering will also search the description for a match
format to be used to print the output [default: name_only]
include virtual packages in list
filter a package query by tag (multiple use allowed)
--count  display the number of packages that would be listed
write output to the specified file, if any package is newer


----



spack load

add package to the user environment

spack load [-h] [--sh | --csh | --fish | --bat | --pwsh] [--first] [--only {package,dependencies}] [--list] ...


More documentation

Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
print sh commands to load the package
print csh commands to load the package
print fish commands to load the package
print bat commands to load the package
print pwsh commands to load the package
load the first match if multiple packages match the spec
select whether to load the package and its dependencies
show loaded packages: same as spack find --loaded
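
For example (package name is illustrative):

# Load a package, taking the first match if the spec is ambiguous
$ spack load --first python
# Show currently loaded packages
$ spack load --list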


----



spack location

print out locations of packages and spack directories

spack location [-h] [-m | -r | -i | -p | -P | -s | -S | --source-dir | -b | -e [name]] [--first] ...


More documentation

Positional arguments

package spec

Optional arguments

show this help message and exit
spack python module directory
spack installation root
install prefix for spec (spec need not be installed)
directory enclosing a spec's package.py file
top-level packages directory for Spack
stage directory for a spec
top level stage directory
source directory for a spec (requires it to be staged first)
build directory for a spec (requires it to be staged first)
location of the named or current environment
use the first match if multiple packages match the spec


----



spack log-parse

filter errors and warnings from build logs

spack log-parse [-hp] [--show SHOW] [-c CONTEXT] [-w WIDTH] [-j JOBS] file


Positional arguments

a log file containing build output, or - for stdin

Optional arguments

show this help message and exit
comma-separated list of what to show; options: errors, warnings
lines of context to show around lines of interest
print out a profile of time spent in regexes during parse
wrap width: auto-size to terminal by default; 0 for no wrap
number of jobs to parse log file (default: 1 for short logs, ncpus for long logs)


----



spack maintainers

get information about package maintainers

spack maintainers [-ha] [--maintained | --unmaintained] [--by-user] ...


Positional arguments

names of packages or users to get info for

Optional arguments

show this help message and exit
show names of maintained packages
show names of unmaintained packages
show maintainers for all packages
show packages for users instead of users for packages


----



spack make-installer

generate Windows installer

spack make-installer [-h] (-v SPACK_VERSION | -s SPACK_SOURCE) [-g {SILENT,VERYSILENT}] output_dir


Positional arguments

output directory

Optional arguments

show this help message and exit
download given spack version
full path to spack source
level of verbosity provided by bundled git installer (default is fully verbose)


----



spack mark

mark packages as explicitly or implicitly installed

spack mark [-ha] (-e | -i) ...


Positional arguments

one or more installed package specs

Optional arguments

show this help message and exit
mark ALL installed packages that match each supplied spec
mark packages as explicitly installed
mark packages as implicitly installed


----



spack mirror

manage mirrors (source and binary)

spack mirror [-hn] [--deprecated] SUBCOMMAND ...


More documentation

Optional arguments

show this help message and exit
do not use checksums to verify downloaded files (unsafe)
fetch deprecated versions without warning

Subcommands

  • mirror create
  • mirror destroy
  • mirror add
  • mirror remove
  • mirror set-url
  • mirror set
  • mirror list



----



spack mirror create

spack mirror create [-haD] [-d DIRECTORY] [-f FILE] [--exclude-file EXCLUDE_FILE] [--exclude-specs EXCLUDE_SPECS]

[--skip-unstable-versions] [-n VERSIONS_PER_SPEC]
...


More documentation

Positional arguments

one or more package specs

Optional arguments

show this help message and exit
directory in which to create mirror
mirror all versions of all packages in Spack, or all packages in the current environment if there is an active environment (this requires significant time and space)
file with specs of packages to put in mirror
specs which Spack should not try to add to a mirror (listed in a file, one per line)
specs which Spack should not try to add to a mirror (specified on command line)
don't cache versions unless they identify a stable (unchanging) source code
also fetch all dependencies
the number of versions to fetch for each spec, choose 'all' to retrieve all versions of each package
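
For example, to mirror two versions of a spec into a local directory (path and spec are illustrative):

$ spack mirror create -d ./my-mirror -n 2 hdf5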


----



spack mirror destroy

spack mirror destroy [-h] (-m mirror_name | --mirror-url mirror_url)


Optional arguments

show this help message and exit
find mirror to destroy by name
--mirror-url mirror_url  find mirror to destroy by url


----



spack mirror add

spack mirror add [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] [--type {binary,source}]

[--s3-access-key-id S3_ACCESS_KEY_ID] [--s3-access-key-secret S3_ACCESS_KEY_SECRET]
[--s3-access-token S3_ACCESS_TOKEN] [--s3-profile S3_PROFILE] [--s3-endpoint-url S3_ENDPOINT_URL]
[--oci-username OCI_USERNAME] [--oci-password OCI_PASSWORD]
mirror url


More documentation

Positional arguments

mnemonic name for mirror
url of mirror directory from 'spack mirror create'

Optional arguments

show this help message and exit
configuration scope to modify
specify the mirror type: to use the mirror for both binary and source, pass both --type binary and --type source (default)
ID string to use to connect to this S3 mirror
secret string to use to connect to this S3 mirror
access token to use to connect to this S3 mirror
S3 profile name to use to connect to this S3 mirror
endpoint URL to use to connect to this S3 mirror
username to use to connect to this OCI mirror
password to use to connect to this OCI mirror


----



spack mirror remove

spack mirror remove [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] mirror


More documentation

Positional arguments

mnemonic name for mirror

Optional arguments

show this help message and exit
configuration scope to modify


----



spack mirror set-url

spack mirror set-url [-h] [--push | --fetch] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT]

[--s3-access-key-id S3_ACCESS_KEY_ID] [--s3-access-key-secret S3_ACCESS_KEY_SECRET]
[--s3-access-token S3_ACCESS_TOKEN] [--s3-profile S3_PROFILE] [--s3-endpoint-url S3_ENDPOINT_URL]
[--oci-username OCI_USERNAME] [--oci-password OCI_PASSWORD]
mirror url


Positional arguments

mnemonic name for mirror
url of mirror directory from 'spack mirror create'

Optional arguments

show this help message and exit
set only the URL used for uploading
set only the URL used for downloading
configuration scope to modify
ID string to use to connect to this S3 mirror
secret string to use to connect to this S3 mirror
access token to use to connect to this S3 mirror
S3 profile name to use to connect to this S3 mirror
endpoint URL to use to connect to this S3 mirror
username to use to connect to this OCI mirror
password to use to connect to this OCI mirror


----



spack mirror set

spack mirror set [-h] [--push | --fetch] [--type {binary,source}] [--url URL]

[--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT]
[--s3-access-key-id S3_ACCESS_KEY_ID] [--s3-access-key-secret S3_ACCESS_KEY_SECRET]
[--s3-access-token S3_ACCESS_TOKEN] [--s3-profile S3_PROFILE] [--s3-endpoint-url S3_ENDPOINT_URL]
[--oci-username OCI_USERNAME] [--oci-password OCI_PASSWORD]
mirror


Positional arguments

mnemonic name for mirror

Optional arguments

show this help message and exit
modify just the push connection details
modify just the fetch connection details
specify the mirror type: to use the mirror for both binary and source, pass both --type binary and --type source
--url URL  url of mirror directory from 'spack mirror create'
configuration scope to modify
ID string to use to connect to this S3 mirror
secret string to use to connect to this S3 mirror
access token to use to connect to this S3 mirror
S3 profile name to use to connect to this S3 mirror
endpoint URL to use to connect to this S3 mirror
username to use to connect to this OCI mirror
password to use to connect to this OCI mirror


----



spack mirror list

spack mirror list [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT]


More documentation

Optional arguments

show this help message and exit
configuration scope to read from


----



spack module

generate/manage module files

spack module [-h] SUBCOMMAND ...


Optional arguments

show this help message and exit

Subcommands

  • module lmod
  • module tcl





----



spack module lmod

spack module lmod [-h] [-n MODULE_SET_NAME] SUBCOMMAND ...


Optional arguments

show this help message and exit
named module set to use from modules configuration

Subcommands

  • module lmod refresh
  • module lmod find
  • module lmod rm
  • module lmod loads
  • module lmod setdefault



----



spack module lmod refresh

spack module lmod refresh [-hy] [--delete-tree] [--upstream-modules] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
delete the module file tree before refresh
generate modules for packages installed upstream
assume "yes" is the answer to every confirmation request


----



spack module lmod find

spack module lmod find [-hr] [--full-path] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
display full path to module file
recursively traverse spec dependencies


----



spack module lmod rm

spack module lmod rm [-hy] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack module lmod loads

spack module lmod loads [-hr] [--input-only] [-p PREFIX] [-x EXCLUDE] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
generate input for module command (instead of a shell script)
prepend to module names when issuing module load commands
exclude package from output; may be specified multiple times
recursively traverse spec dependencies


----



spack module lmod setdefault

spack module lmod setdefault [-h] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit


----



spack module tcl

spack module tcl [-h] [-n MODULE_SET_NAME] SUBCOMMAND ...


Optional arguments

show this help message and exit
named module set to use from modules configuration

Subcommands

  • module tcl refresh
  • module tcl find
  • module tcl rm
  • module tcl loads
  • module tcl setdefault



----



spack module tcl refresh

spack module tcl refresh [-hy] [--delete-tree] [--upstream-modules] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
delete the module file tree before refresh
generate modules for packages installed upstream
assume "yes" is the answer to every confirmation request


----



spack module tcl find

spack module tcl find [-hr] [--full-path] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
display full path to module file
recursively traverse spec dependencies


----



spack module tcl rm

spack module tcl rm [-hy] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack module tcl loads

spack module tcl loads [-hr] [--input-only] [-p PREFIX] [-x EXCLUDE] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit
generate input for module command (instead of a shell script)
prepend to module names when issuing module load commands
exclude package from output; may be specified multiple times
recursively traverse spec dependencies


----



spack module tcl setdefault

spack module tcl setdefault [-h] ...


Positional arguments

constraint to select a subset of installed packages

Optional arguments

show this help message and exit


----



spack patch

patch expanded archive sources in preparation for install

spack patch [-hnU] [--deprecated] [--reuse] [--reuse-deps] ...


More documentation

Positional arguments

one or more package specs

Optional arguments

show this help message and exit
do not use checksums to verify downloaded files (unsafe)
fetch deprecated versions without warning
do not reuse installed deps; build newest configuration
reuse installed packages/buildcaches when possible
reuse installed dependencies only


----



spack pkg

query packages associated with particular git revisions

spack pkg [-h] SUBCOMMAND ...


Optional arguments

show this help message and exit

Subcommands

  • pkg add
  • pkg list
  • pkg diff
  • pkg added
  • pkg changed
  • pkg removed
  • pkg grep
  • pkg source
  • pkg hash



----



spack pkg add

spack pkg add [-h] package [package ...]


Positional arguments

one or more package names

Optional arguments

show this help message and exit


----



spack pkg list

spack pkg list [-h] [rev]


Positional arguments

revision to list packages for

Optional arguments

show this help message and exit


----



spack pkg diff

spack pkg diff [-h] [rev1] [rev2]


Positional arguments

revision to compare against
revision to compare to rev1 (default is HEAD)

Optional arguments

show this help message and exit


----



spack pkg added

spack pkg added [-h] [rev1] [rev2]


Positional arguments

revision to compare against
revision to compare to rev1 (default is HEAD)

Optional arguments

show this help message and exit


----



spack pkg changed

spack pkg changed [-h] [-t TYPE] [rev1] [rev2]


Positional arguments

revision to compare against
revision to compare to rev1 (default is HEAD)

Optional arguments

show this help message and exit
types of changes to show (A: added, R: removed, C: changed); default is 'C'


----



spack pkg removed

spack pkg removed [-h] [rev1] [rev2]


Positional arguments

revision to compare against
revision to compare to rev1 (default is HEAD)

Optional arguments

show this help message and exit


----



spack pkg grep

spack pkg grep [--help] ...


Positional arguments

arguments for grep

Optional arguments

show this help message and exit


----



spack pkg source

spack pkg source [-hc] ...


Positional arguments

package spec

Optional arguments

show this help message and exit
dump canonical source as used by package hash


----



spack pkg hash

spack pkg hash [-h] ...


Positional arguments

package spec

Optional arguments

show this help message and exit


----



spack providers

list packages that provide a particular virtual package

spack providers [-h] [virtual_package ...]


More documentation

Positional arguments

find packages that provide this virtual package

Optional arguments

show this help message and exit
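
For example, to list every package that provides the mpi virtual package:

$ spack providers mpi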


----



spack pydoc

run pydoc from within spack

spack pydoc [-h] entity


Positional arguments

run pydoc help on entity

Optional arguments

show this help message and exit


----



spack python

launch an interpreter as spack would launch a command

spack python [-hVu] [-c PYTHON_COMMAND] [-i {python,ipython}] [-m MODULE] [--path] ...


More documentation

Positional arguments

file to run plus arguments

Optional arguments

show this help message and exit
print the Python version number and exit
command to execute
for compatibility with xdist, do not use without adding -u to the interpreter
python interpreter
run library module as a script
--path: show path to python interpreter that spack uses
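
For example, a quick one-liner that runs inside Spack's own interpreter (spack.spack_version is the version attribute exposed by the spack module):

$ spack python -c 'import spack; print(spack.spack_version)'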


----



spack reindex

rebuild Spack's package database

spack reindex [-h]


Optional arguments

show this help message and exit


----



spack remove

remove specs from an environment

spack remove [-haf] [-l LIST_NAME] ...


Positional arguments

one or more package specs

Optional arguments

show this help message and exit
remove all specs from (clear) the environment
name of the list to remove specs from
remove concretized spec (if any) immediately


----



spack repo

manage package source repositories

spack repo [-h] SUBCOMMAND ...


More documentation

Optional arguments

show this help message and exit

Subcommands

  • repo create
  • repo list
  • repo add
  • repo remove



----



spack repo create

spack repo create [-h] [-d SUBDIR] directory [namespace]


Positional arguments

directory to create the repo in
namespace to identify packages in the repository (defaults to the directory name)

Optional arguments

show this help message and exit
subdirectory to store packages in the repository
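
A minimal sketch of creating a repository and registering it with Spack (the path and namespace are illustrative):

$ spack repo create ~/my-spack-repo mynamespace
$ spack repo add ~/my-spack-repo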


----



spack repo list

spack repo list [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT]


Optional arguments

show this help message and exit
configuration scope to read from


----



spack repo add

spack repo add [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] path


Positional arguments

path to a Spack package repository directory

Optional arguments

show this help message and exit
configuration scope to modify


----



spack repo remove

spack repo remove [-h] [--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT] namespace_or_path


Positional arguments

namespace or path of a Spack package repository

Optional arguments

show this help message and exit
configuration scope to modify


----



spack resource

list downloadable resources (tarballs, repos, patches, etc.)

spack resource [-h] SUBCOMMAND ...


More documentation

Optional arguments

show this help message and exit

Subcommands

  • resource list
  • resource show





----



spack resource list

spack resource list [-h] [--only-hashes]


Optional arguments

show this help message and exit
only print sha256 hashes of resources


----



spack resource show

spack resource show [-h] hash


Positional arguments

hash

Optional arguments

show this help message and exit


----



spack restage

revert checked out package source code

spack restage [-h] ...


More documentation

Positional arguments

one or more package specs

Optional arguments

show this help message and exit


----



spack solve

concretize specs using an ASP solver

spack solve [-hlLNyjtU] [--show SHOW] [-I | --no-install-status] [-c {nodes,edges,paths}] [--timers] [--stats]
            [--reuse] [--reuse-deps] ...


Positional arguments

specs of packages

Optional arguments

show this help message and exit
select outputs
show dependency hashes as well as versions
show full dependency hashes as well as versions
show fully qualified package names
show install status of packages
do not show install status annotations
print concrete spec as yaml
print concrete spec as json
how extensively to traverse the DAG (default: nodes)
show dependency types
--timers: print out timers for different solve phases
--stats: print out statistics from clingo
do not reuse installed deps; build newest configuration
reuse installed packages/buildcaches when possible
reuse installed dependencies only


----



spack spec

show what would be installed, given a spec

spack spec [-hlLNtU] [-I | --no-install-status] [-y | -j | --format FORMAT] [-c {nodes,edges,paths}] [--reuse]
           [--reuse-deps] ...


More documentation

Positional arguments

one or more package specs

Optional arguments

show this help message and exit
show dependency hashes as well as versions
show full dependency hashes as well as versions
show fully qualified package names
show install status of packages
do not show install status annotations
print concrete spec as YAML
print concrete spec as JSON
print concrete spec with the specified format string
how extensively to traverse the DAG (default: nodes)
show dependency types
do not reuse installed deps; build newest configuration
reuse installed packages/buildcaches when possible
reuse installed dependencies only
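
For example, to print the fully concretized DAG for an hdf5 spec, with install status annotations and dependency hashes:

$ spack spec -Il hdf5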


----



spack stage

expand downloaded archive in preparation for install

spack stage [-hnU] [--deprecated] [-p PATH] [--reuse] [--reuse-deps] ...


More documentation

Positional arguments

one or more package specs

Optional arguments

show this help message and exit
do not use checksums to verify downloaded files (unsafe)
fetch deprecated versions without warning
path to stage package, does not add to spack tree
do not reuse installed deps; build newest configuration
reuse installed packages/buildcaches when possible
reuse installed dependencies only
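
For example, to fetch and expand a package's source in a custom location (the path is illustrative):

$ spack stage -p /tmp/libelf-stage libelf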


----



spack style

runs source code style checks on spack

spack style [-harUf] [-b BASE] [--root ROOT] [-t TOOL | -s TOOL] ...


More documentation

Positional arguments

specific files to check

Optional arguments

show this help message and exit
branch to compare against to determine changed files (default: develop)
check all files, not just changed files
print root-relative paths (default: cwd-relative)
exclude untracked files from checks
format automatically if possible (e.g., with isort, black)
style check a different spack instance
specify which tools to run (default: isort,black,flake8,mypy)
specify tools to skip (choose from isort,black,flake8,mypy)


----



spack tags

show package tags and associated packages

spack tags [-hia] [tag ...]


Positional arguments

show packages with the specified tag

Optional arguments

show this help message and exit
show information for installed packages only
show packages for all available tags


----



spack test

run spack's tests for an install

spack test [-h] SUBCOMMAND ...


More documentation

Optional arguments

show this help message and exit

Subcommands

  • test run
  • test list
  • test find
  • test status
  • test results
  • test remove


----



spack test run

run tests for the specified installed packages

if no specs are listed, run tests for all packages in the current environment or all installed packages if there is no active environment


spack test run [-hx] [--alias ALIAS] [--fail-fast] [--fail-first] [--externals] [--keep-stage]
               [--log-format {junit,cdash}] [--log-file LOG_FILE] [--help-cdash] [--clean | --dirty] ...


More documentation

Positional arguments

one or more installed package specs

Optional arguments

show this help message and exit
provide an alias for this test-suite for subsequent access
stop tests for each package after the first failure
stop after the first failed package
test packages that are externally installed
only test packages that are explicitly installed
keep testing directory for debugging
format to be used for log files
filename for the log file

--cdash-upload-url CDASH_UPLOAD_URL
--cdash-build CDASH_BUILD
--cdash-site CDASH_SITE
--cdash-track CDASH_TRACK
--cdash-buildstamp CDASH_BUILDSTAMP

show usage instructions for CDash reporting
unset harmful variables in the build environment (default)
preserve user environment in spack's build environment (danger!)
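
For example, to run the stand-alone tests of an installed package and alias the suite for later status and results queries (mypackage is a placeholder):

$ spack test run --alias mysuite mypackage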


----



spack test list

list installed packages with available tests

spack test list [-ha] [tag ...]


More documentation

Positional arguments

limit packages to those with all listed tags

Optional arguments

show this help message and exit
list all packages with tests (not just installed)


----



spack test find

find tests that are running or have available results

displays aliases for tests that have them, otherwise test suite content hashes


spack test find [-h] ...


More documentation

Positional arguments

optional case-insensitive glob patterns to filter results

Optional arguments

show this help message and exit


----



spack test status

get the current status for the specified Spack test suite(s)

spack test status [-h] ...


Positional arguments

test suites for which to print status

Optional arguments

show this help message and exit


----



spack test results

get the results from Spack test suite(s) (default all)

spack test results [-hlf] ...


More documentation

Positional arguments

[name(s)] [-- installed_specs]...
suite names and installed package constraints

Optional arguments

show this help message and exit
print the test log for each matching package
only show results for failed tests of matching packages


----



spack test remove

remove results from Spack test suite(s) (default all)

If no test suite is listed, results are removed for all suites.

Removed tests can no longer be accessed for results or status, and will not appear in spack test list results.



spack test remove [-hy] ...


More documentation

Positional arguments

test suites to remove from test stage

Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack test-env

run a command in a spec's test environment, or dump its environment to screen or file

spack test-env [-hU] [--clean] [--dirty] [--reuse] [--reuse-deps] [--dump FILE] [--pickle FILE] ...


Positional arguments

spec [--] [cmd]...
specs of package environment to emulate

Optional arguments

show this help message and exit
unset harmful variables in the build environment (default)
preserve user environment in spack's build environment (danger!)
do not reuse installed deps; build newest configuration
reuse installed packages/buildcaches when possible
reuse installed dependencies only
dump a source-able environment to FILE
dump a pickled source-able environment to FILE


----



spack tutorial

set up spack for our tutorial (WARNING: modifies config!)

spack tutorial [-hy]


Optional arguments

show this help message and exit
assume "yes" is the answer to every confirmation request


----



spack undevelop

remove specs from an environment

spack undevelop [-ha] ...


Positional arguments

one or more package specs

Optional arguments

show this help message and exit
remove all specs from (clear) the environment


----



spack uninstall

remove installed packages

spack uninstall [-hfRya] [--remove] [--origin ORIGIN] ...


More documentation

Positional arguments

one or more installed package specs

Optional arguments

show this help message and exit
remove regardless of whether other packages or environments depend on this one
if in an environment, then the spec should also be removed from the environment description
also uninstall any packages that depend on the ones given via command line
assume "yes" is the answer to every confirmation request
remove ALL installed packages that match each supplied spec
only remove DB records with the specified origin
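
For example, to remove every installed configuration matching a spec, along with everything that depends on it, without prompting (mpileaks is a placeholder):

$ spack uninstall -aRy mpileaks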


----



spack unit-test

run spack's unit tests (wrapper around pytest)

spack unit-test [-hHs] [-l | -L | -N] [--extension EXTENSION] [-k EXPRESSION] [--showlocals] ...


More documentation

Positional arguments

arguments for pytest

Optional arguments

show this help message and exit
show full pytest help, with advanced options
list test filenames
list all test functions
list full names of all tests
run test for a given spack extension
print output while tests run (disable capture)
filter tests by keyword (can also use w/list options)
show local variable values in tracebacks


----



spack unload

remove package from the user environment

spack unload [-ha] [--sh | --csh | --fish | --bat | --pwsh] ...


Positional arguments

one or more installed package specs

Optional arguments

show this help message and exit
print sh commands to activate the environment
print csh commands to activate the environment
print fish commands to load the package
print bat commands to load the package
print pwsh commands to load the package
unload all loaded Spack packages


----



spack url

debugging tool for url parsing

spack url [-h] SUBCOMMAND ...


More documentation

Optional arguments

show this help message and exit

Subcommands

  • url parse
  • url list
  • url summary
  • url stats



----



spack url parse

spack url parse [-hs] url


Positional arguments

url to parse

Optional arguments

show this help message and exit
spider the source page for versions
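
For example, to check how Spack parses a name and version out of a download URL (the URL is illustrative):

$ spack url parse https://ftp.gnu.org/gnu/libtool/libtool-2.4.6.tar.gz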


----



spack url list

spack url list [-hce] [-n | -N | -v | -V]


Optional arguments

show this help message and exit
color the parsed version and name in the urls shown (versions will be cyan, name red)
color the versions used for extrapolation as well (additional versions will be green, names magenta)
only list urls for which the name was incorrectly parsed
only list urls for which the name was correctly parsed
only list urls for which the version was incorrectly parsed
only list urls for which the version was correctly parsed


----



spack url summary

spack url summary [-h]


Optional arguments

show this help message and exit


----



spack url stats

spack url stats [-h] [--show-issues]


Optional arguments

show this help message and exit
show packages with issues (md5 hashes, http urls)


----



spack verify

check that all spack packages are on disk as installed

spack verify [-hlja] [-s | -f] ...


Positional arguments

specs or files to verify

Optional arguments

show this help message and exit
verify only locally installed packages
output JSON-formatted errors
verify all packages
treat entries as specs (default)
treat entries as absolute filenames


----



spack versions

list available versions of a package

spack versions [-h] [-s | --safe-only | -r | -n] [-j JOBS] package


More documentation

Positional arguments

package name

Optional arguments

show this help message and exit
only list safe versions of the package
[deprecated] only list safe versions of the package
only list remote versions of the package
only list remote versions newer than the latest checksummed version
explicitly set number of parallel jobs
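
For example, to list only the checksummed (safe) versions of a package:

$ spack versions -s zlib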


----



spack view

project packages to a compact naming scheme on the filesystem

spack view [-hv] [-e EXCLUDE] [-d {true,false,yes,no}] ACTION ...


Optional arguments

show this help message and exit
if not verbose, only warnings and errors will be printed
exclude packages with names matching the given regex pattern
link/remove/list dependencies

Subcommands

  • view symlink
  • view hardlink
  • view copy
  • view remove
  • view statlink



----



spack view symlink

spack view symlink [-hi] [--projection-file PROJECTION_FILE] path spec [spec ...]


Positional arguments

path to file system view directory
seed specs of the packages to view

Optional arguments

show this help message and exit
initialize view using projections from file

-i, --ignore-conflicts
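
For example, a sketch of projecting a package into a directory of symlinks (the view path is illustrative):

$ spack view symlink ~/views/myview hdf5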


----



spack view hardlink

spack view hardlink [-hi] [--projection-file PROJECTION_FILE] path spec [spec ...]


Positional arguments

path to file system view directory
seed specs of the packages to view

Optional arguments

show this help message and exit
initialize view using projections from file

-i, --ignore-conflicts


----



spack view copy

spack view copy [-hi] [--projection-file PROJECTION_FILE] path spec [spec ...]


Positional arguments

path to file system view directory
seed specs of the packages to view

Optional arguments

show this help message and exit
initialize view using projections from file

-i, --ignore-conflicts


----



spack view remove

spack view remove [-ha] [--no-remove-dependents] path [spec ...]


Positional arguments

path to file system view directory
seed specs of the packages to view

Optional arguments

show this help message and exit
do not remove dependents of specified specs
act on all specs in view


----



spack view statlink

spack view statlink [-h] path [spec ...]


Positional arguments

path to file system view directory
seed specs of the packages to view

Optional arguments

show this help message and exit

CHAINING SPACK INSTALLATIONS

You can point your Spack installation to another installation to use any packages that are installed there. To register the other Spack instance, you can add it as an entry to upstreams.yaml:

upstreams:
  spack-instance-1:
    install_tree: /path/to/other/spack/opt/spack
  spack-instance-2:
    install_tree: /path/to/another/spack/opt/spack


install_tree must point to the opt/spack directory inside of the Spack base directory.

Once the upstream Spack instance has been added, spack find will automatically check the upstream instance when querying installed packages, and new package installations for the local Spack install will use any dependencies that are installed in the upstream instance.

This other instance of Spack has no knowledge of the local Spack instance and may not have the same permissions or ownership as the local Spack instance. This has the following consequences:

1. Upstream Spack instances are not locked, so it is up to users to make sure that the local instance is not using an upstream instance while that upstream is being modified.
2. Users should not uninstall packages from the upstream instance. Since the upstream instance doesn't know about the local instance, it cannot prevent the uninstallation of packages on which the local instance depends.

Other details about upstream installations:

1. If a package is installed both locally and upstream, the local installation will always be used as a dependency. This can occur if the local Spack installs a package that is not present in the upstream, and the upstream Spack instance later installs that package as well.
2. If an upstream Spack instance registers and installs an external package, the local Spack instance will treat it the same as a Spack-installed package. This only works if the upstream Spack instance includes the upstream functionality (i.e., if its commit is after March 27, 2019).

Using Multiple Upstream Spack Instances

A single Spack instance can use multiple upstream Spack installations. Spack will search upstream instances in the order you list them in your configuration. If your installation refers to instances X and Y, in that order, then instance X must list Y as an upstream in its own upstreams.yaml.

Using Modules for Upstream Packages

The local Spack instance does not generate modules for packages which are installed upstream. The local Spack instance can be configured to use the modules generated by the upstream Spack instance.

There are two requirements to use the modules created by an upstream Spack instance: firstly the upstream instance must do a spack module tcl refresh, which generates an index file that maps installed packages to their modules; secondly, the local Spack instance must add a modules entry to the configuration:

upstreams:
  spack-instance-1:
    install_tree: /path/to/other/spack/opt/spack
    modules:
      tcl: /path/to/other/spack/share/spack/modules


Each time new packages are installed in the upstream Spack instance, the upstream Spack maintainer should run spack module tcl refresh (or the corresponding command for the type of module they intend to use).

NOTE:

Spack can generate modules that automatically load the modules of dependency packages. Spack cannot currently do this for modules in upstream packages.


CUSTOM EXTENSIONS

Spack extensions permit you to extend Spack's capabilities by deploying your own custom commands or logic in an arbitrary location on your filesystem. This can be extremely useful, e.g., to develop and maintain a command whose purpose is too specific to be considered for reintegration into the mainline, or to evolve a command through its early stages before starting a discussion to merge it upstream. From Spack's point of view, an extension is any path in your filesystem that respects a prescribed naming and layout for files:

spack-scripting/ # The top level directory must match the format 'spack-{extension_name}'
├── pytest.ini # Optional file if the extension ships its own tests
├── scripting # Folder that may contain modules that are needed for the extension commands
│   └── cmd # Folder containing extension commands
│       └── filter.py # A new command that will be available
├── tests # Tests for this extension
│   ├── conftest.py
│   └── test_filter.py
└── templates # Templates that may be needed by the extension


In the example above the extension named scripting adds an additional command (filter) and unit tests to verify its behavior. The code for this example can be obtained by cloning the corresponding git repository:

$ cd ~/
$ mkdir tmp && cd tmp
$ git clone https://github.com/alalazo/spack-scripting.git
Cloning into 'spack-scripting'...
remote: Counting objects: 11, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 11 (delta 0), reused 11 (delta 0), pack-reused 0
Receiving objects: 100% (11/11), done.


As you can see by inspecting the sources, Python modules that are part of the extension can import any core Spack module.

Configure Spack to Use Extensions

To make your current Spack instance aware of extensions you should add their root paths to config.yaml. In the case of our example this means ensuring that:

config:
  extensions:
  - ~/tmp/spack-scripting


is part of your configuration file. Once this is set up, any command that the extension provides will be available from the command line:

$ spack filter --help
usage: spack filter [-h] [--installed | --not-installed]
                    [--explicit | --implicit] [--output OUTPUT]
                    ...

filter specs based on their properties

positional arguments:
  specs                 specs to be filtered

optional arguments:
  -h, --help            show this help message and exit
  --installed           select installed specs
  --not-installed       select specs that are not yet installed
  --explicit            select specs that were installed explicitly
  --implicit            select specs that are not installed or were installed implicitly
  --output OUTPUT       where to dump the result


The corresponding unit tests can be run giving the appropriate options to spack unit-test:

$ spack unit-test --extension=scripting
============================================================== test session starts ===============================================================
platform linux2 -- Python 2.7.15rc1, pytest-3.2.5, py-1.4.34, pluggy-0.4.0
rootdir: /home/mculpo/tmp/spack-scripting, inifile: pytest.ini
collected 5 items
tests/test_filter.py ...XX
============================================================ short test summary info =============================================================
XPASS tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3]
XPASS tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4]
=========================================================== slowest 20 test durations ============================================================
3.74s setup    tests/test_filter.py::test_filtering_specs[flags0-specs0-expected0]
0.17s call     tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3]
0.16s call     tests/test_filter.py::test_filtering_specs[flags2-specs2-expected2]
0.15s call     tests/test_filter.py::test_filtering_specs[flags1-specs1-expected1]
0.13s call     tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4]
0.08s call     tests/test_filter.py::test_filtering_specs[flags0-specs0-expected0]
0.04s teardown tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4]
0.00s setup    tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4]
0.00s setup    tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3]
0.00s setup    tests/test_filter.py::test_filtering_specs[flags1-specs1-expected1]
0.00s setup    tests/test_filter.py::test_filtering_specs[flags2-specs2-expected2]
0.00s teardown tests/test_filter.py::test_filtering_specs[flags2-specs2-expected2]
0.00s teardown tests/test_filter.py::test_filtering_specs[flags1-specs1-expected1]
0.00s teardown tests/test_filter.py::test_filtering_specs[flags0-specs0-expected0]
0.00s teardown tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3]
====================================================== 3 passed, 2 xpassed in 4.51 seconds =======================================================


CI PIPELINES

Spack provides commands that support generating and running automated build pipelines in CI instances. At the highest level it works like this: provide a spack environment describing the set of packages you care about, and include a description of how those packages should be mapped to Gitlab runners. Spack can then generate a .gitlab-ci.yml file containing job descriptions for all your packages that can be run by a properly configured CI instance. When run, the generated pipeline will build and deploy binaries, and it can optionally report to a CDash instance regarding the health of the builds as they evolve over time.

Getting started with pipelines

To get started with automated build pipelines, you need a Gitlab instance running version >= 12.9, with at least one runner configured. This can be done quickly by setting up a local Gitlab instance.

It is possible to set up pipelines on gitlab.com, but the builds there are limited to 60 minutes and generic hardware. It is possible to hook up Gitlab to Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS), though those topics are outside the scope of this document.

After setting up a Gitlab instance for running CI, the basic steps for setting up a build pipeline are as follows:

1. Create a repository in the Gitlab instance with CI and a runner enabled.
2. Add a spack.yaml at the root containing your pipeline environment.
3. Add a .gitlab-ci.yml at the root containing two jobs (one to generate the pipeline dynamically, and one to run the generated jobs).
4. Push a commit containing the spack.yaml and .gitlab-ci.yml mentioned above to the gitlab repository.

See the Functional Example section for a minimal working example. See also the Custom Workflow section for a link to an example of a custom workflow based on spack pipelines.

Spack's pipelines are now making use of the trigger syntax to run dynamically generated child pipelines. Note that the use of dynamic child pipelines requires running Gitlab version >= 12.9.

Functional Example

The simplest fully functional standalone example of a working pipeline can be examined live at this example project on gitlab.com.

Here's the .gitlab-ci.yml file from that example that builds and runs the pipeline:

stages: [generate, build]

variables:
  SPACK_REPO: https://github.com/scottwittenburg/spack.git
  SPACK_REF: pipelines-reproducible-builds

generate-pipeline:
  stage: generate
  tags:
    - docker
  image:
    name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
    entrypoint: [""]
  before_script:
    - git clone ${SPACK_REPO}
    - pushd spack && git checkout ${SPACK_REF} && popd
    - . "./spack/share/spack/setup-env.sh"
  script:
    - spack env activate --without-view .
    - spack -d ci generate
        --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
        --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
  artifacts:
    paths:
      - "${CI_PROJECT_DIR}/jobs_scratch_dir"

build-jobs:
  stage: build
  trigger:
    include:
      - artifact: "jobs_scratch_dir/pipeline.yml"
        job: generate-pipeline
    strategy: depend


The key thing to note above is that there are two jobs: The first job to run, generate-pipeline, runs the spack ci generate command to generate a dynamic child pipeline and write it to a yaml file, which is then picked up by the second job, build-jobs, and used to trigger the downstream pipeline.

And here's the spack environment built by the pipeline represented as a spack.yaml file:

spack:
  view: false
  concretizer:
    unify: false
  definitions:
  - pkgs:
    - zlib
    - bzip2
  - arch:
    - '%gcc@7.5.0 arch=linux-ubuntu18.04-x86_64'
  specs:
  - matrix:
    - - $pkgs
    - - $arch
  mirrors: { "mirror": "s3://spack-public/mirror" }
  ci:
    enable-artifacts-buildcache: True
    rebuild-index: False
    pipeline-gen:
    - any-job:
        before_script:
        - git clone ${SPACK_REPO}
        - pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
        - . "./spack/share/spack/setup-env.sh"
    - build-job:
        tags: [docker]
        image:
          name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
          entrypoint: [""]


The elements of this file important to spack ci pipelines are described in more detail below, but there are a couple of things to note about the above working example:

NOTE:

There is no script attribute specified here. The reason is that Spack CI will automatically generate reasonable default scripts. More detail on what is in these scripts can be found below.

Also notice the before_script section. It is required when using any of the default scripts to source the setup-env.sh script in order to inform the default scripts where to find the spack executable.



Normally enable-artifacts-buildcache is not recommended in production as it results in large binary artifacts getting transferred back and forth between gitlab and the runners. But in this example on gitlab.com, where there is no shared, persistent file system and where no secrets are stored for giving permission to write to an S3 bucket, enable-artifacts-buildcache is the only way to propagate binaries from jobs to their dependents.

Also, it is usually a good idea to let the pipeline generate a final "rebuild the buildcache index" job, so that subsequent pipeline generation can quickly determine which specs are up to date and which need to be rebuilt (it's a good idea for other reasons as well, but those are out of scope for this discussion). In this case we have disabled it (using rebuild-index: False) because the index would only be generated in the artifacts mirror anyway, and consequently would not be available during subsequent pipeline runs.

NOTE:

With the addition of reproducible builds (#22887) a previously working pipeline will require some changes:
  • In the build-jobs, the environment location changed. This will typically show as a KeyError in the failing job. Be sure to point to ${SPACK_CONCRETE_ENV_DIR}.
  • When using include in your environment, be sure to make the included files available in the build jobs. This means adding those files to the artifact directory. Those files will also be missing in the reproducibility artifact.
  • Because the location of the environment changed, including files with relative path may have to be adapted to work both in the project context (generation job) and in the concrete env dir context (build job).



Spack commands supporting pipelines

Spack provides a ci command with a few sub-commands supporting spack ci pipelines. These commands are covered in more detail in this section.

spack ci

Super-command for functionality related to generating pipelines and executing pipeline jobs.

spack ci generate

Throughout this documentation, references to the "mirror" mean the target mirror which is checked for the presence of up-to-date specs, and where any scheduled jobs should push built binary packages. In the past, this defaulted to the mirror at index 0 in the mirror configs, and could be overridden using the --buildcache-destination argument. Starting with Spack 0.23, spack ci generate will require you to identify this mirror by the name "buildcache-destination". While you can configure any number of mirrors as sources for your pipelines, you will need to identify the destination mirror by name.

Concretizes the specs in the active environment, stages them (as described in Summary of .gitlab-ci.yml generation algorithm), and writes the resulting .gitlab-ci.yml to disk. During concretization of the environment, spack ci generate also writes a spack.lock file which is then provided to generated child jobs and made available in all generated job artifacts to aid in reproducing failed builds in a local environment. This means there are two artifacts that need to be exported in your pipeline generation job (defined in your .gitlab-ci.yml). The first is the output yaml file of spack ci generate, and the other is the directory containing the concrete environment files. In the Functional Example section, we only mentioned one path in the artifacts paths list because we used --artifacts-root as the top level directory containing both the generated pipeline yaml and the concrete environment.

Using --prune-dag or --no-prune-dag configures whether or not jobs are generated for specs that are already up to date on the mirror. If enabling DAG pruning using --prune-dag, more information may be required in your spack.yaml file, see the No Op (noop) section below regarding noop-job.

The optional --check-index-only argument can be used to speed up pipeline generation by telling spack to consider only remote buildcache indices when checking the remote mirror to determine if each spec in the DAG is up to date or not. The default behavior is for spack to fetch the index and check it, but if the spec is not found in the index, to also perform a direct check for the spec on the mirror. If the remote buildcache index is out of date, which can easily happen if it is not updated frequently, this behavior ensures that spack has a way to know for certain about the status of any concrete spec on the remote mirror, but can slow down pipeline generation significantly.

The --optimize argument is experimental and runs the generated pipeline document through a series of optimization passes designed to reduce the size of the generated file.

The --dependencies argument is also experimental and disables what in Gitlab is referred to as DAG scheduling, internally using the dependencies keyword rather than needs to list dependency jobs. The drawback of using this option is that before any job can begin, all jobs in previous stages must first complete. The benefit is that Gitlab allows more dependencies to be listed when using dependencies instead of needs.

The optional --output-file argument should be an absolute path (including file name) to the generated pipeline, and if not given, the default is ./.gitlab-ci.yml.

While optional, the --artifacts-root argument is used to determine where the concretized environment directory should be located. This directory will be created by spack ci generate and will contain the spack.yaml and generated spack.lock which are then passed to all child jobs as an artifact. This directory will also be the root directory for all artifacts generated by jobs in the pipeline.
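
Putting these options together, a generation job might run something like the following (the paths mirror the functional example above):

$ spack ci generate --check-index-only \
    --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir" \
    --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"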

spack ci rebuild

The purpose of spack ci rebuild is to take an assigned spec and ensure a binary of a successful build exists on the target mirror. If the binary does not already exist, it is built from source and pushed to the mirror. The associated stand-alone tests are optionally run against the new build. Additionally, files for reproducing the build outside of the CI environment are created to facilitate debugging.

If a binary for the spec does not exist on the target mirror, an install shell script, install.sh, is created and saved in the current working directory. The script is run in a job to install the spec from source. The resulting binary package is pushed to the mirror. If cdash is configured for the environment, then the build results will be uploaded to the site.

Environment variables and values in the ci::pipeline-gen section of the spack.yaml environment file provide inputs to this process. The two main sources of environment variables are variables written into .gitlab-ci.yml by spack ci generate and the GitLab CI runtime. Several key CI pipeline variables are described in Environment variables affecting pipeline operation.

If the --tests option is provided, stand-alone tests are performed but only if the build was successful and the package does not appear in the list of broken-tests-packages. A shell script, test.sh, is created and run to perform the tests. On completion, test logs are exported as job artifacts for review and to facilitate debugging. If cdash is configured, test results are also uploaded to the site.

A snippet from an example spack.yaml file illustrating use of this option and specification of a package with broken tests is given below. The inclusion of a spec for building gptune is not shown here. Note that --tests is passed to spack ci rebuild as part of the build-job script.

ci:
  pipeline-gen:
  - build-job:
      script:
      - . "./share/spack/setup-env.sh"
      - spack --version
      - cd ${SPACK_CONCRETE_ENV_DIR}
      - spack env activate --without-view .
      - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
      - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
      - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
      - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
      - spack -d ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
  broken-tests-packages:
  - gptune


In this case, even if gptune is successfully built from source, the pipeline will not run its stand-alone tests since the package is listed under broken-tests-packages.

Spack's cloud pipelines provide actual, up-to-date examples of the CI/CD configuration and environment files used by Spack. You can find them under Spack's stacks repository directory.

spack ci rebuild-index

This is a convenience command to rebuild the buildcache index associated with the mirror in the active, gitlab-enabled environment (specifying the mirror url or name is not required).

spack ci reproduce-build

Given the url to a gitlab pipeline rebuild job, downloads and unzips the artifacts into a local directory (which can be specified with the optional --working-dir argument), then finds the target job in the generated pipeline to extract details about how it was run. Assuming the job used a docker image, the command prints a docker run command line and some basic instructions on how to reproduce the build locally.

Note that jobs failing in the pipeline will print messages giving the arguments you can pass to spack ci reproduce-build in order to reproduce a particular build locally.
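
A sketch of such an invocation (the job URL and working directory are illustrative):

$ spack ci reproduce-build https://gitlab.example.com/mygroup/myproject/-/jobs/123456 --working-dir /tmp/repro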

Job Types

Rebuild (build)

Rebuild jobs, denoted as build-job entries in the pipeline-gen list, are jobs associated with concrete specs that have been marked for rebuild. By default a simple script for doing the rebuild is generated, but it may be modified as needed.

The default script performs three main steps: change directory to the pipeline's concrete environment, activate the concrete environment, and run the spack ci rebuild command:

cd ${concrete_environment_dir}
spack env activate --without-view .
spack ci rebuild


Update Index (reindex)

By default, while a pipeline job may rebuild a package, create a buildcache entry, and push it to the mirror, it does not automatically re-generate the mirror's buildcache index afterward. Because the index is not needed by the default rebuild jobs in the pipeline, not updating the index at the end of each job avoids possible race conditions between simultaneous jobs, and it avoids the computational expense of regenerating the index. This potentially saves minutes per job, depending on the number of binary packages in the mirror. As a result, the default is that the mirror's buildcache index may not correctly reflect the mirror's contents at the end of a pipeline.

To make sure the buildcache index is up to date at the end of your pipeline, spack generates a job to update the buildcache index of the target mirror at the end of each pipeline by default. You can disable this behavior by adding rebuild-index: False inside the ci section of your spack environment.
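
A minimal sketch of disabling that final index-update job in a spack.yaml:

spack:
  ci:
    rebuild-index: False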

Reindex jobs do not allow modifying the script attribute since it is automatically generated using the target mirror listed in the mirrors::mirror configuration.

Signing (signing)

This job is run after all of the rebuild jobs are completed and is intended to be used to sign the package binaries built by a protected CI run. Signing jobs are generated only if a signing job script is specified and the spack CI job type is protected. Note that if an any-job section contains a script, this will not implicitly create a signing job; a signing job may only exist if it is explicitly specified in the configuration with a script attribute. Specifying a signing job without a script does not create a signing job, and the job configuration attributes will be ignored. Signing jobs are always assigned the runner tags aws, protected, and notary.

Cleanup (cleanup)

When using temporary-storage-url-prefix the cleanup job will destroy the mirror created for the associated Gitlab pipeline. Cleanup jobs do not allow modifying the script, but do expect that the spack command is in the path and require a before_script to be specified that sources the setup-env.sh script.

No Op (noop)

If no specs in an environment need to be rebuilt during a given pipeline run (meaning all are already up to date on the mirror), a single successful job (a NO-OP) is still generated to avoid an empty pipeline (which GitLab considers to be an error). The noop-job* sections can be added to your spack.yaml to provide tags, image, or variables for the generated NO-OP job. This section also supports providing before_script, script, and after_script, in case you want to take some custom actions in the case of an empty pipeline.

Following is an example of this section added to a spack.yaml:

spack:
  ci:
    pipeline-gen:
    - noop-job:
        tags: ['custom', 'tag']
        image:
          name: 'some.image.registry/custom-image:latest'
          entrypoint: ['/bin/bash']
        script::
        - echo "Custom message in a custom script"


The example above illustrates how you can provide the attributes used to run the NO-OP job in the case of an empty pipeline. The only field for the NO-OP job that might be generated for you is script, but that will only happen if you do not provide one yourself. Notice in this example the script uses the :: notation to prescribe override behavior. Without this, the echo command would have been prepended to the automatically generated script rather than replacing it.

ci.yaml

Here's an example of a spack configuration file describing a build pipeline:

ci:
  target: gitlab
  rebuild_index: True
  broken-specs-url: https://broken.specs.url
  broken-tests-packages:
  - gptune
  pipeline-gen:
  - submapping:
    - match:
      - os=ubuntu18.04
      build-job:
        tags:
        - spack-kube
        image: spack/ubuntu-bionic
    - match:
      - os=centos7
      build-job:
        tags:
        - spack-kube
        image: spack/centos7

cdash:
  build-group: Release Testing
  url: https://cdash.spack.io
  project: Spack
  site: Spack AWS Gitlab Instance


The ci config section is used to configure how the pipeline workload should be generated, mainly how the jobs for building specs should be assigned to the configured runners on your instance. The main section for configuring pipelines is pipeline-gen, which is a list of job attribute sections that are merged, using the same rules as Spack configs (Scope Precedence), from the bottom up. Sections are applied in an order consistent with how Spack orders scope precedence when merging lists. There are two main section types: <type>-job sections and submapping sections.

Job Attribute Sections

Each type of job may have attributes added or removed via sections in the pipeline-gen list. Job type specific attributes may be specified using the keys <type>-job to add attributes to all jobs of type <type>, or <type>-job-remove to remove attributes of type <type>. Each section may only contain one type of job attribute specification, i.e., build-job and noop-job may not coexist, but build-job and build-job-remove may.

NOTE:

The *-remove specifications are applied before the additive attribute specifications. For example, in the case where both build-job and build-job-remove are listed in the same pipeline-gen section, the value specified in build-job will still exist in the merged build-job after the section is applied.


All of the attributes specified are forwarded to the generated CI jobs, however special treatment is applied to the attributes tags, image, variables, script, before_script, and after_script as they are components recognized explicitly by the Spack CI generator. For the tags attribute, Spack will remove reserved tags (Reserved Tags) from all jobs specified in the config. In some cases, such as for signing jobs, reserved tags will be added back based on the type of CI that is being run.

Once a runner has been chosen to build a release spec, the build-job* sections provide information determining details of the job in the context of the runner. At least one of the build-job* sections must contain a tags key, which is a list containing at least one tag used to select the runner from among the runners known to the gitlab instance. For Docker executor type runners, the image key is used to specify the Docker image used to build the release spec (and could also appear as a dictionary with a name specifying the image name, as well as an entrypoint to override whatever the default for that image is). For other types of runners, the variables key will be useful to pass any information on to the runner that it needs to do its work (e.g. scheduler parameters, etc.). Any variables provided here will be added, verbatim, to each job.

The build-job section also allows users to supply custom script, before_script, and after_script sections to be applied to every job scheduled on that runner. This allows users to do any custom preparation or cleanup tasks that fit their particular workflow, as well as completely customize the rebuilding of a spec if they so choose. Spack will not generate a before_script or after_script for jobs, but if you do not provide a custom script, spack will generate one for you that assumes the concrete environment directory is located within your --artifacts-root (or if not provided, within your $CI_PROJECT_DIR), activates that environment for you, and invokes spack ci rebuild.

Sections that specify scripts (script, before_script, after_script) are all read as lists of commands or lists of lists of commands. It is recommended to write scripts as lists of lists if scripts will be composed via merging. The default behavior of merging lists will remove duplicate commands and potentially apply unwanted reordering, whereas merging lists of lists will preserve the local ordering and never removes duplicate commands. When writing commands to the CI target script, all lists are expanded and flattened into a single list.

Submapping Sections

A special case of attribute specification is the submapping section, which may be used to apply job attributes to build jobs based on the package spec associated with the rebuild job. Submapping is specified as a list of spec match lists associated with build-job/build-job-remove sections. There are two options for match_behavior: either first or merge may be specified. In either case, the submapping list is processed from the bottom up, and each match list is searched for a string that satisfies the check spec.satisfies({match_item}) for each concrete spec.

In the case of match_behavior: first, the first match section in the list of submappings that contains a string satisfying the spec will apply its build-job* attributes to the rebuild job associated with that spec. This is the default behavior, and the method used if no match_behavior is specified.

In the case of match_behavior: merge, all of the match sections in the list of submappings that contain a string satisfying the spec will have their associated build-job* attributes applied to the rebuild job associated with that spec. Again, the attributes will be merged starting from the bottom match going up to the top match.

In the case that no match is found in a submapping section, no additional attributes will be applied.

Bootstrapping

The bootstrap section allows you to specify lists of specs from your definitions that should be staged ahead of the environment's specs. At the moment the only viable use-case for bootstrapping is to install compilers.

Here's an example of what bootstrapping some compilers might look like:

spack:
  definitions:
  - compiler-pkgs:
    - 'llvm+clang@6.0.1 os=centos7'
    - 'gcc@6.5.0 os=centos7'
    - 'llvm+clang@6.0.1 os=ubuntu18.04'
    - 'gcc@6.5.0 os=ubuntu18.04'
  - pkgs:
    - readline@7.0
  - compilers:
    - '%gcc@5.5.0'
    - '%gcc@6.5.0'
    - '%gcc@7.3.0'
    - '%clang@6.0.0'
    - '%clang@6.0.1'
  - oses:
    - os=ubuntu18.04
    - os=centos7
  specs:
  - matrix:
    - [$pkgs]
    - [$compilers]
    - [$oses]
    exclude:
    - '%gcc@7.3.0 os=centos7'
    - '%gcc@5.5.0 os=ubuntu18.04'
  ci:
    bootstrap:
    - name: compiler-pkgs
      compiler-agnostic: true
    pipeline-gen:
    # similar to the example higher up in this description
    ...


The example above adds a list to the definitions called compiler-pkgs (you can add any number of these), which lists compiler packages that should be staged ahead of the full matrix of release specs (in this example, only readline). Then within the ci section, note the addition of a bootstrap section, which can contain a list of items, each referring to a list in the definitions section. These items can either be a dictionary or a string. If you supply a dictionary, it must have a name key whose value must match one of the lists in definitions and it can have a compiler-agnostic key whose value is a boolean. If you supply a string, then it needs to match one of the lists provided in definitions. You can think of the bootstrap list as an ordered list of pipeline "phases" that will be staged before your actual release specs. While this introduces another layer of bottleneck in the pipeline (all jobs in all stages of one phase must complete before any jobs in the next phase can begin), it also means you are guaranteed your bootstrapped compilers will be available when you need them.

The compiler-agnostic key can be provided with each item in the bootstrap list. It tells the spack ci generate command that any jobs staged from that particular list should have the compiler removed from the spec, so that any compiler available on the runner where the job is run can be used to build the package.

When including a bootstrapping phase as in the example above, the result is that the bootstrapped compiler packages will be pushed to the binary mirror (and the local artifacts mirror) before the actual release specs are built. In this case, the jobs corresponding to subsequent release specs are configured to install_missing_compilers, so that if spack is asked to install a package with a compiler it doesn't know about, it can be quickly installed from the binary mirror first.

Since bootstrapping compilers is optional, those items can be left out of the environment/stack file, and in that case no bootstrapping will be done (only the specs will be staged for building) and the runners will be expected to already have all needed compilers installed and configured for spack to use.

Pipeline Buildcache

The enable-artifacts-buildcache key takes a boolean and determines whether the pipeline uses artifacts to store and pass along the buildcaches from one stage to the next (the default if you don't provide this option is False).

Broken Specs URL

The optional broken-specs-url key tells Spack to check against a list of specs that are known to be currently broken in develop. If any such specs are found, the spack ci generate command will fail with an error message informing the user what broken specs were encountered. This allows the pipeline to fail early and avoid wasting compute resources attempting to build packages that will not succeed.

CDash

The optional cdash section provides information that will be used by the spack ci generate command (invoked by spack ci start) for reporting to CDash. All the jobs generated from this environment will belong to a "build group" within CDash that can be tracked over time. As the release progresses, this build group may have jobs added or removed. The url, project, and site are used to specify the CDash instance to which build results should be reported.

Take a look at the schema for the ci section of the spack environment file, to see precisely what syntax is allowed there.

Reserved Tags

Spack has a subset of tags (public, protected, and notary) that it reserves for classifying runners that may require special permissions or access. The tags public and protected are used to distinguish between runners that use public permissions and runners with protected permissions. The notary tag is a special tag that is used to indicate runners that have access to the highly protected information used for signing binaries using the signing job.

Summary of .gitlab-ci.yml generation algorithm

All specs yielded by the matrix (or all the specs in the environment) have their dependencies computed, and the entire resulting set of specs are staged together before being run through the ci/pipeline-gen entries, where each staged spec is assigned a runner. "Staging" is the name given to the process of figuring out in what order the specs should be built, taking into consideration Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize the number of jobs in any stage of the pipeline, while ensuring that the jobs in any stage only depend on jobs in previous stages (since those jobs are guaranteed to have completed already). As a runner is determined for a job, the information in the merged any-job* and build-job* sections is used to populate various parts of the job description that will be used by the target CI pipelines. Once all the jobs have been assigned a runner, the .gitlab-ci.yml is written to disk.

The short example provided above would result in the readline, ncurses, and pkgconf packages getting staged and built on the runner chosen by the spack-kube tag. In this example, spack assumes the runner is a Docker executor type runner, and thus certain jobs will be run in the centos7 container, and others in the ubuntu-18.04 container. The resulting .gitlab-ci.yml will contain six jobs in three stages. Once the jobs have been generated, the presence of a SPACK_CDASH_AUTH_TOKEN environment variable during the spack ci generate command would result in all of the jobs being put in a build group on CDash called "Release Testing" (that group will be created if it didn't already exist).

Using a custom spack in your pipeline

If your runners will not have a version of spack ready to invoke, or if for some other reason you want to use a custom version of spack to run your pipelines, this section provides an example of how you could take advantage of user-provided pipeline scripts to accomplish this fairly simply. First, consider specifying the source and version of spack you want to use with variables, either written directly into your .gitlab-ci.yml, or provided by CI variables defined in the gitlab UI or from some upstream pipeline. Let's say you choose the variable names SPACK_REPO and SPACK_REF to refer to the particular fork of spack and branch you want for running your pipeline. You can then refer to those in a custom shell script invoked both from your pipeline generation job and your rebuild jobs. Here's the generate-pipeline job from the top of this document, updated to clone and source a custom spack:

generate-pipeline:
  tags:
    - <some-other-tag>
  before_script:
    - git clone ${SPACK_REPO}
    - pushd spack && git checkout ${SPACK_REF} && popd
    - . "./spack/share/spack/setup-env.sh"
  script:
    - spack env activate --without-view .
    - spack ci generate --check-index-only
        --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
        --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
  after_script:
    - rm -rf ./spack
  artifacts:
    paths:
      - "${CI_PROJECT_DIR}/jobs_scratch_dir"


That takes care of getting the desired version of spack when your pipeline is generated by spack ci generate. You also want your generated rebuild jobs (all of them) to clone that version of spack, so next you would update your spack.yaml from above as follows:

spack:
  ...
  ci:
    pipeline-gen:
      - build-job:
          tags:
            - spack-kube
          image: spack/ubuntu-bionic
          before_script:
            - git clone ${SPACK_REPO}
            - pushd spack && git checkout ${SPACK_REF} && popd
            - . "./spack/share/spack/setup-env.sh"
          script:
            - spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
            - spack -d ci rebuild
          after_script:
            - rm -rf ./spack


Now all of the generated rebuild jobs will use the same shell script to clone spack before running their actual workload.

Now imagine you have long pipelines with many specs to be built, and you are pointing to a spack repository and branch that has a tendency to change frequently, such as the main repo and its develop branch. If each child job checks out the develop branch, that could result in some jobs running with one SHA of spack, while later jobs run with another. To help avoid this issue, the pipeline generation process saves global variables called SPACK_VERSION and SPACK_CHECKOUT_VERSION that capture the version of spack used to generate the pipeline. While the SPACK_VERSION variable simply contains the human-readable value produced by spack -V at pipeline generation time, the SPACK_CHECKOUT_VERSION variable can be used in a git checkout command to make sure all child jobs check out the same version of spack used to generate the pipeline. To take advantage of this, you could simply replace git checkout ${SPACK_REF} in the example spack.yaml above with git checkout ${SPACK_CHECKOUT_VERSION}.

On the other hand, if you're pointing to a spack repository and branch under your control, there may be no benefit in using the captured SPACK_CHECKOUT_VERSION, and you can instead just clone using the variables you define (SPACK_REPO and SPACK_REF in the example above).

Custom Workflow

There are many ways to take advantage of spack CI pipelines to achieve custom workflows for building packages or other resources. One example of a custom pipeline workflow is the spack tutorial container repo. This project uses GitHub (for source control), GitLab (for automated spack ci pipelines), and DockerHub automated builds to build Docker images (complete with a fully populated binary mirror) used by instructors and participants of a spack tutorial.

Take a look at the repo to see how it is accomplished using spack CI pipelines, and see the following markdown files at the root of the repository for descriptions and documentation describing the workflow: DESCRIPTION.md, DOCKERHUB_SETUP.md, GITLAB_SETUP.md, and UPDATING.md.

Environment variables affecting pipeline operation

Certain secrets and some other information should be provided to the pipeline infrastructure via environment variables, usually for reasons of security, but in some cases to support other pipeline use cases such as PR testing. The environment variables used by the pipeline infrastructure are described here.

AWS_ACCESS_KEY_ID

Optional. Only needed when the binary mirror is an S3 bucket.

AWS_SECRET_ACCESS_KEY

Optional. Only needed when the binary mirror is an S3 bucket.

S3_ENDPOINT_URL

Optional. Only needed when the binary mirror is an S3 bucket that is not on AWS.

CDASH_AUTH_TOKEN

Optional. Only needed in order to report build groups to CDash.

SPACK_SIGNING_KEY

Optional. Only needed if you want spack ci rebuild to trust the key you store in this variable, in which case it will subsequently be used to sign and verify binary packages (when installing or creating buildcaches). You could also have already trusted a key spack knows about, or if no key is present anywhere, spack will install specs using --no-check-signature and create buildcaches using -u (for unsigned binaries).

SPACK PACKAGE SIGNING

The goal of package signing in Spack is to provide data integrity assurances around official packages produced by the automated Spack CI pipelines. These assurances directly address the security of Spack’s software supply chain by explaining why a security-conscious user can be reasonably justified in the belief that packages installed via Spack have an uninterrupted auditable trail back to change management decisions judged to be appropriate by the Spack maintainers. This is achieved through cryptographic signing of packages built by Spack CI pipelines based on code that has been transparently reviewed and approved on GitHub. This document describes the signing process for interested users.

Risks, Impact and Threat Model

This document addresses the approach taken to safeguard Spack’s reputation with regard to the integrity of the package data produced by Spack’s CI pipelines. It does not address issues of data confidentiality (Spack is intended to be largely open source) or availability (efforts are described elsewhere). With that said, the main reputational risk can be broadly categorized as a loss of faith in the data integrity due to a breach of the private key used to sign packages. Remediation of a private key breach would require republishing the public key with a revocation certificate, generating a new signing key, an assessment and potential rebuild/re-signing of all packages since the key was breached, and finally direct intervention by every spack user to update their copy of Spack’s public keys used for local verification.

The primary threat model used in mitigating the risks of these stated impacts is one of individual error, not malicious intent or insider threat. The primary objective is to avoid the above impacts by making a private key breach nearly impossible due to oversight or configuration error. Obvious and straightforward measures are taken to mitigate issues of malicious interference in data integrity and insider threats, but these attack vectors are not systematically addressed. It should be hard to exfiltrate the private key intentionally, and almost impossible to leak the key by accident.

Pipeline Overview

Spack pipelines build software through progressive stages where packages in later stages nominally depend on packages built in earlier stages. For both technical and design reasons these dependencies are not implemented through the default GitLab artifacts mechanism; instead built packages are uploaded to AWS S3 mirrors (buckets) where they are retrieved by subsequent stages in the pipeline. Two broad categories of pipelines exist: Pull Request (PR) pipelines and Develop/Release pipelines.

  • PR pipelines are launched in response to pull requests made by trusted and untrusted users. Packages built on these pipelines upload code to quarantined AWS S3 locations which cache the built packages for the purposes of review and iteration on the changes proposed in the pull request. Packages built on PR pipelines can come from untrusted users so signing of these pipelines is not implemented. Jobs in these pipelines are executed via normal GitLab runners both within the AWS GitLab infrastructure and at affiliated institutions.
  • Develop and Release pipelines sign the packages they produce and carry strong integrity assurances that trace back to auditable change management decisions. These pipelines only run after members from a trusted group of reviewers verify that the proposed changes in a pull request are appropriate. Once the PR is merged, or a release is cut, a pipeline is run on protected GitLab runners which provide access to the required signing keys within the job. Intermediate keys are used to sign packages in each stage of the pipeline as they are built, and a final job officially signs each package external to any specific package’s build environment. An intermediate key exists in the AWS infrastructure and for each affiliated institution that maintains protected runners. The runners that execute these pipelines exclusively accept jobs from protected branches, meaning the intermediate keys are never exposed to unreviewed code and the official keys are never exposed to any specific build environment.

Key Architecture

Spack’s CI process uses public-key infrastructure (PKI) based on GNU Privacy Guard (gpg) keypairs to sign public releases of spack package metadata, also called specs. Two classes of GPG keys are involved in the process to reduce the impact of an individual private key compromise: the Intermediate CI Key and the Reputational Key. Each of these keys has signing sub-keys that are used exclusively for signing packages. This can be confusing, so for the purpose of this explanation we’ll refer to Root and Signing keys. Each key has a private and a public component, as well as one or more identities and zero or more signatures.

Intermediate CI Key

The Intermediate key class is used to sign and verify packages between stages within a develop or release pipeline. An intermediate key exists for the AWS infrastructure as well as each affiliated institution that maintains protected runners. These intermediate keys are made available to the GitLab execution environment building the package so that the package’s dependencies may be verified by the Signing Intermediate CI Public Key and the final package may be signed by the Signing Intermediate CI Private Key.

Intermediate CI Key (GPG)
   Root Intermediate CI Private Key (RSA 4096)      Root Intermediate CI Public Key (RSA 4096)
   Signing Intermediate CI Private Key (RSA 4096)   Signing Intermediate CI Public Key (RSA 4096)
   Identity: “Intermediate CI Key <maintainers@spack.io>”
   Signatures: None

The Root Intermediate CI Private Key is stripped out of the GPG key and stored offline, completely separate from Spack’s infrastructure. This allows the core development team to append revocation certificates to the GPG key and issue new sub-keys for use in the pipeline. It is our expectation that this will happen on a semi-regular basis. A corollary of this is that this key should not be used to verify package integrity outside the internal CI process.

Reputational Key

The Reputational Key is the public-facing key used to sign complete groups of development and release packages. Only one key pair exists in this class of keys. In contrast to the Intermediate CI Key, the Reputational Key should be used to verify package integrity. At the end of each develop or release pipeline, a final pipeline job pulls down all signed package metadata built by the pipeline, verifies it was signed with an Intermediate CI Key, then strips the Intermediate CI Key signature from each package and re-signs it with the Signing Reputational Private Key. The officially signed packages are then uploaded back to the AWS S3 mirror. Please note that separating use of the reputational key into this final job is done to prevent leakage of the key in a spack package. Because the Signing Reputational Private Key is never exposed to a build job, it cannot accidentally end up in any built package.

Reputational Key (GPG)
   Root Reputational Private Key (RSA 4096)      Root Reputational Public Key (RSA 4096)
   Signing Reputational Private Key (RSA 4096)   Signing Reputational Public Key (RSA 4096)
   Identity: “Spack Project <maintainers@spack.io>”
   Signatures: Signed by core development team [1]

The Root Reputational Private Key is stripped out of the GPG key and stored offline, completely separate from Spack’s infrastructure. This allows the core development team to append revocation certificates to the GPG key in the unlikely event that the Signing Reputational Private Key is compromised. In general it is the expectation that rotating this key will happen infrequently, if at all. This should allow relatively transparent verification for the end-user community without needing deep familiarity with GnuPG or Public Key Infrastructure.

Build Cache Format

A binary package consists of a metadata file unambiguously defining the built package (and including other details such as how to relocate it) and the installation directory of the package stored as a compressed archive file. The metadata files can either be unsigned, in which case the contents are simply the json-serialized concrete spec plus metadata, or they can be signed, in which case the json-serialized concrete spec plus metadata is wrapped in a gpg cleartext signature. Built package metadata files are named to indicate the operating system and architecture for which the package was built, as well as the compiler used to build it and the package’s name and version. For example:

linux-ubuntu18.04-haswell-gcc-7.5.0-zlib-1.2.12-llv2ysfdxnppzjrt5ldybb5c52qbmoow.spec.json.sig


would contain the concrete spec and binary metadata for a binary package of zlib@1.2.12, built for the ubuntu operating system and haswell architecture. The id of the built package exists in the name of the file as well (after the package name and version) and in this case begins with llv2ys. The id distinguishes a particular built package from all other built packages with the same os/arch, compiler, name, and version. Below is an example of a signed binary package metadata file. Such a file would live in the build_cache directory of a binary mirror:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

{
  "spec": {
    <concrete-spec-contents-omitted>
  },
  "buildcache_layout_version": 1,
  "binary_cache_checksum": {
    "hash_algorithm": "sha256",
    "hash": "4f1e46452c35a5e61bcacca205bae1bfcd60a83a399af201a29c95b7cc3e1423"
  }
}
-----BEGIN PGP SIGNATURE-----

iQGzBAEBCgAdFiEETZn0sLle8jIrdAPLx/P+voVcifMFAmKAGvwACgkQx/P+voVc
ifNoVgv/VrhA+wurVs5GB9PhmMA1m5U/AfXZb4BElDRwpT8ZcTPIv5X8xtv60eyn
4EOneGVbZoMThVxgev/NKARorGmhFXRqhWf+jknJZ1dicpqn/qpv34rELKUpgXU+
QDQ4d1P64AIdTczXe2GI9ZvhOo6+bPvK7LIsTkBbtWmopkomVxF0LcMuxAVIbA6b
887yBvVO0VGlqRnkDW7nXx49r3AG2+wDcoU1f8ep8QtjOcMNaPTPJ0UnjD0VQGW6
4ZFaGZWzdo45MY6tF3o5mqM7zJkVobpoW3iUz6J5tjz7H/nMlGgMkUwY9Kxp2PVH
qoj6Zip3LWplnl2OZyAY+vflPFdFh12Xpk4FG7Sxm/ux0r+l8tCAPvtw+G38a5P7
QEk2JBr8qMGKASmnRlJUkm1vwz0a95IF3S9YDfTAA2vz6HH3PtsNLFhtorfx8eBi
Wn5aPJAGEPOawEOvXGGbsH4cDEKPeN0n6cy1k92uPEmBLDVsdnur8q42jk5c2Qyx
j3DXty57
=3gvm
-----END PGP SIGNATURE-----


If a user has trusted the public key associated with the private key used to sign the above spec file, the signature can be verified with gpg, as follows:

$ gpg --verify linux-ubuntu18.04-haswell-gcc-7.5.0-zlib-1.2.12-llv2ysfdxnppzjrt5ldybb5c52qbmoow.spec.json.sig


The metadata (regardless of whether it is signed or unsigned) contains the checksum of the .spack file containing the actual installation. The checksum should be compared to a checksum computed locally on the .spack file to ensure the contents have not changed since the binary spec plus metadata were signed. The .spack files are actually tarballs containing the compressed archive of the install tree. These files, along with the metadata files, live within the build_cache directory of the mirror, and together are organized as follows:

build_cache/
  # unsigned metadata (for indexing, contains sha256 of .spack file)
  <arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json
  # clearsigned metadata (same as above, but signed)
  <arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spec.json.sig
  <arch>/
    <compiler>/
      <name>-<ver>/
        # tar.gz-compressed prefix (may support more compression formats later)
        <arch>-<compiler>-<name>-<ver>-24zvipcqgg2wyjpvdq2ajy5jnm564hen.spack


Uncompressing and extracting the .spack file results in the install tree. This is in contrast to previous versions of spack, where the .spack file contained a (duplicated) metadata file, a signature file and a nested tarball containing the install tree.
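As a minimal sketch of the local checksum comparison described above (the file names are hypothetical; the metadata keys are those shown in the example metadata file):

import hashlib
import json

# Hypothetical local copies of a metadata file and its .spack archive.
meta_path = "zlib.spec.json"
archive_path = "zlib.spack"

with open(meta_path) as f:
    meta = json.load(f)
expected = meta["binary_cache_checksum"]["hash"]

# Hash the archive in chunks and compare against the recorded checksum.
h = hashlib.sha256()
with open(archive_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == expected, "archive changed since the metadata was created!"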

Internal Implementation

The technical implementation of the pipeline signing process includes components defined in Amazon Web Services, the Kubernetes cluster, at affiliated institutions, and the GitLab/GitLab Runner deployment. We present the technical implementation in two interdependent sections. The first addresses how secrets are managed through the lifecycle of a develop or release pipeline. The second describes how GitLab Runner and pipelines are configured and managed to support secure automated signing.

Secrets Management

As stated above the Root Private Keys (intermediate and reputational) are stripped from the GPG keys and stored outside Spack’s infrastructure.

WARNING:

  • Explanation here about where and how access is handled for these keys.
  • Both Root private keys are protected with strong passwords
  • Who has access to these and how?




Intermediate CI Key

Multiple intermediate CI signing keys exist, one Intermediate CI Key for jobs run in AWS, and one key for each affiliated institution (e.g. University of Oregon). Here we describe how the Intermediate CI Key is managed in AWS:

The Intermediate CI Key (including the Signing Intermediate CI Private Key) is exported as an ASCII armored file and stored in a Kubernetes secret called spack-intermediate-ci-signing-key. For convenience, this same secret contains an ASCII-armored export of just the public components of the Reputational Key. This secret also contains the public components of each of the affiliated institutions' Intermediate CI Keys. These are potentially needed to verify dependent packages which may have been found in the public mirror or built by a protected job running on an affiliated institution's infrastructure in an earlier stage of the pipeline.

Procedurally the spack-intermediate-ci-signing-key secret is used in the following way:

1.
A large-arm-prot or large-x86-prot protected runner picks up a job tagged protected from a protected GitLab branch. (See Protected Runners and Reserved Tags.)
2.
Based on its configuration, the runner creates a job Pod in the pipeline namespace and mounts the spack-intermediate-ci-signing-key Kubernetes secret into the build container.
3.
The Intermediate CI Key, the affiliated institutions' public keys, and the Reputational Public Key are imported into a keyring by the spack gpg … sub-command. This is initiated by the job’s build script, which is created by the generate job at the beginning of the pipeline.
4.
Assuming the package has dependencies, those specs are verified using the keyring.
5.
The package is built and the spec.json is generated.
6.
The spec.json is signed by the keyring and uploaded to the mirror’s build cache.

Reputational Key

Because of the increased impact to end users in the case of a private key breach, the Reputational Key is managed separately from the Intermediate CI Keys and has additional controls. First, the Reputational Key was generated outside of Spack’s infrastructure and has been signed by the core development team. The Reputational Key (along with the Signing Reputational Private Key) was then ASCII armor exported to a file. Unlike the Intermediate CI Key, this exported file is not stored as a base64 encoded secret in Kubernetes. Instead, the key file itself is encrypted and stored in Kubernetes as the spack-signing-key-encrypted secret in the pipeline namespace.

The encryption of the exported Reputational Key (including the Signing Reputational Private Key) is handled by AWS Key Management Service (KMS) data keys. The private key material is decrypted and imported at the time of signing into a memory-mounted temporary directory holding the keychain. The signing job uses the AWS Encryption SDK (i.e. aws-encryption-cli) to decrypt the Reputational Key. Permission to decrypt the key is granted to the job Pod through a Kubernetes service account specifically used for this, and only this, function. Finally, for convenience, this same secret contains an ASCII-armored export of the public components of the Intermediate CI Keys and the Reputational Key. This allows the signing script to verify that packages were built by the pipeline (both on AWS or at affiliated institutions), or signed previously as a part of a different pipeline. This is done before decrypting and importing the Signing Reputational Private Key material and officially signing the packages.

Procedurally the spack-signing-key-encrypted secret is used in the following way:

1.
The spack-package-signing-gitlab-runner protected runner picks up a job tagged notary from a protected GitLab branch. (See Protected Runners and Reserved Tags.)
2.
Based on its configuration, the runner creates a job Pod in the pipeline namespace. The job is run in a stripped-down, purpose-built Docker image, ghcr.io/spack/notary:latest. The runner is configured to only allow running jobs with this image.
3.
The runner also mounts the spack-signing-key-encrypted secret to a path on disk. Note that this becomes several files on disk: the public components of the Intermediate CI Keys, the public components of the Reputational Key, and an AWS KMS encrypted file containing the Signing Reputational Private Key.
4.
In addition to the secret, the runner creates a tmpfs memory-mounted directory where the GnuPG keyring will be created to verify, and then re-sign, the package specs.
5.
The job script syncs all spec.json.sig files from the build cache to a working directory in the job’s execution environment.
6.
The job script then runs the sign.sh script built into the notary Docker image.
7.
The sign.sh script imports the public components of the Reputational and Intermediate CI Keys and uses them to verify good signatures on the spec.json.sig files. If any signed spec does not verify, the job immediately fails.
8.
Assuming all specs are verified, the sign.sh script then unpacks the spec json data from the signed file in preparation for being re-signed with the Reputational Key.
9.
The private components of the Reputational Key are decrypted to standard out using aws-encryption-cli directly into a gpg --import … statement, which imports the key into the keyring mounted in memory.
10.
The private key is then used to sign each of the json specs and the keyring is removed from disk.
11.
The re-signed json specs are resynced to the AWS S3 mirror, and the public signing of the packages for the develop or release pipeline that created them is complete.

Non-service-account access to the private components of the Reputational Key is managed through access to the symmetric secret in KMS used to encrypt the data key (which in turn is used to encrypt the GnuPG key; see the AWS Encryption SDK documentation). A small, trusted subset of the core development team are the only individuals with access to this symmetric key.

Protected Runners and Reserved Tags

Spack has a large number of GitLab Runners operating in its build farm. These include runners deployed in the AWS Kubernetes cluster as well as runners deployed at affiliated institutions. The majority of runners are shared runners that operate across projects in gitlab.spack.io. These runners pick up jobs primarily from the spack/spack project and execute them in PR pipelines.

A small number of runners operating on AWS and at affiliated institutions are registered as specific protected runners on the spack/spack project. In addition to protected runners there are protected branches on the spack/spack project. These are the develop branch, any release branch (i.e. managed with the releases/v* wildcard), and any tag branch (managed with the v* wildcard). Finally, Spack’s pipeline generation code reserves certain tags to make sure jobs are routed to the correct runners; these tags are public, protected, and notary. Understanding how all this works together to protect secrets and provide integrity assurances can be a little confusing, so let’s break it down:

  • Protected Branches - Protected branches in Spack prevent anyone other than Maintainers in GitLab from pushing code. In the case of Spack, the only Maintainer-level entity pushing code to protected branches is Spack bot. Protecting branches also marks them in such a way that Protected Runners will only run jobs from those branches. Because protected runners have access to secrets, it's critical that they not run jobs from untrusted code (i.e. PR branches). If they did, it would be possible for a PR branch to tag a job in such a way that a protected runner executed that job and mounted secrets into a code execution environment that had not been reviewed by Spack maintainers. Note, however, that in the absence of tagging used to route jobs, public runners could run jobs from protected branches. No secrets would be at risk of being breached because non-protected runners do not have access to those secrets; lack of secrets would, however, cause the jobs to fail.

  • Reserved Tags - Spack uses a small set of “reserved” job tags (note that these are job tags, not git tags): “public”, “protected”, and “notary”. The majority of jobs executed in Spack’s GitLab instance are executed via a generate job. The generate job code systematically ensures that no user-defined configuration sets these tags. Instead, the generate job sets these tags based on rules related to the branch where the pipeline originated, as sketched below. If the job is part of a pipeline on a PR branch, it sets the public tag. If the job is part of a pipeline on a protected branch, it sets the protected tag. Finally, if the job is the package signing job and it is running on a pipeline that is part of a protected branch, then it sets the notary tag.
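The rule set just described is small enough to sketch in a few lines; the following is illustrative only, not Spack's actual generate-job code:

def reserved_tag(on_protected_branch, is_signing_job):
    # User configuration can never set these tags; the generate job does.
    if not on_protected_branch:
        return "public"       # PR pipelines
    if is_signing_job:
        return "notary"       # the package signing job only
    return "protected"        # all other jobs on develop/release/tag branches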


Protected Runners are configured to only run jobs from protected branches. Only jobs running in pipelines on protected branches are tagged with protected or notary tags. This tightly couples jobs on protected branches to protected runners that provide access to the secrets required to sign the built packages. The secrets can only be accessed via:

1.
Runners under direct control of the core development team.
2.
Runners under direct control of trusted maintainers at affiliated institutions.
3.
By code running the automated pipeline that has been reviewed by the Spack maintainers and judged to be appropriate.

Other attempts (either through malicious intent or incompetence) can at worst grab jobs intended for protected runners, which will cause those jobs to fail, alerting both Spack maintainers and the core development team.

[1]
The Reputational Key has also cross signed core development team keys.

USING EXTERNAL GPU SUPPORT

Many packages come with a +cuda or +rocm variant. With no added configuration, Spack will download and install the needed components. It may be preferable to use existing system support instead: the following sections help with using a system installation of GPU libraries.

Using an External ROCm Installation

Spack breaks down ROCm into many separate component packages. The following is an example packages.yaml that organizes a consistent set of ROCm components for use by dependent packages:

packages:
  all:
    compiler: [rocmcc@=5.3.0]
    variants: amdgpu_target=gfx90a
  hip:
    buildable: false
    externals:
    - spec: hip@5.3.0
      prefix: /opt/rocm-5.3.0/hip
  hsa-rocr-dev:
    buildable: false
    externals:
    - spec: hsa-rocr-dev@5.3.0
      prefix: /opt/rocm-5.3.0/
  llvm-amdgpu:
    buildable: false
    externals:
    - spec: llvm-amdgpu@5.3.0
      prefix: /opt/rocm-5.3.0/llvm/
  comgr:
    buildable: false
    externals:
    - spec: comgr@5.3.0
      prefix: /opt/rocm-5.3.0/
  hipsparse:
    buildable: false
    externals:
    - spec: hipsparse@5.3.0
      prefix: /opt/rocm-5.3.0/
  hipblas:
    buildable: false
    externals:
    - spec: hipblas@5.3.0
      prefix: /opt/rocm-5.3.0/
  rocblas:
    buildable: false
    externals:
    - spec: rocblas@5.3.0
      prefix: /opt/rocm-5.3.0/
  rocprim:
    buildable: false
    externals:
    - spec: rocprim@5.3.0
      prefix: /opt/rocm-5.3.0/rocprim/


This is in combination with the following compiler definition:

compilers:
- compiler:
    spec: rocmcc@=5.3.0
    paths:
      cc: /opt/rocm-5.3.0/bin/amdclang
      cxx: /opt/rocm-5.3.0/bin/amdclang++
      f77: null
      fc: /opt/rocm-5.3.0/bin/amdflang
    operating_system: rhel8
    target: x86_64


This includes the following considerations:

  • Each of the listed externals specifies buildable: false to force Spack to use only the externals we defined.
  • spack external find can automatically locate some of the hip/rocm packages, but not all of them, and furthermore not in a manner that guarantees a complementary set if multiple ROCm installations are available.
  • The prefix is the same for several components, but note that others require listing one of the subdirectories as a prefix.

Using an External CUDA Installation

CUDA is split into fewer components and is simpler to specify:

packages:
  all:
    variants:
    - cuda_arch=70
  cuda:
    buildable: false
    externals:
    - spec: cuda@11.0.2
      prefix: /opt/cuda/cuda-11.0.2/


where /opt/cuda/cuda-11.0.2/lib/ contains libcudart.so.

CONTRIBUTION GUIDE

This guide is intended for developers or administrators who want to contribute a new package, feature, or bugfix to Spack. It assumes that you have at least some familiarity with Git and GitHub. The guide will show a few examples of contributing workflows and discuss the granularity of pull requests (PRs). It will also discuss the tests your PR must pass in order to be accepted into Spack.

First, what is a PR? Quoting Bitbucket's tutorials:

Pull requests are a mechanism for a developer to notify team members that they have completed a feature. The pull request is more than just a notification—it’s a dedicated forum for discussing the proposed feature.


The key phrase is completed feature. The changes one proposes in a PR should correspond to one feature/bugfix/extension/etc. One can create PRs with changes relevant to different ideas, but reviewing such PRs becomes tedious and error-prone. If possible, try to follow the one-PR-one-package/feature rule.

Branches

Spack's develop branch has the latest contributions. Nearly all pull requests should start from develop and target develop.

There is a branch for each major release series. Release branches originate from develop and have tags for each point release in the series. For example, releases/v0.14 has tags for 0.14.0, 0.14.1, 0.14.2, etc. versions of Spack. We backport important bug fixes to these branches, but we do not advance the package versions or make other changes that would change the way Spack concretizes dependencies. Currently, the maintainers manage these branches by cherry-picking from develop. See Releases for more information.

Continuous Integration

Spack uses GitHub Actions for Continuous Integration testing. This means that every time you submit a pull request, a series of tests will be run to make sure you didn't accidentally introduce any bugs into Spack. Your PR will not be accepted until it passes all of these tests. While you can certainly wait for the results of these tests after submitting a PR, we recommend that you run them locally to speed up the review process.

NOTE:

Oftentimes, CI will fail for reasons other than a problem with your PR. For example, apt-get, pip, or homebrew will fail to download one of the dependencies for the test suite, or a transient bug will cause the unit tests to time out. If any job fails, click the "Details" link and click on the test(s) that are failing. If it doesn't look like the failure is related to your PR, you have two options. If you have write permissions for the Spack repository, you should see a "Restart workflow" button on the right-hand side. If not, you can close and reopen your PR to rerun all of the tests. If the same test keeps failing, there may be a problem with your PR. If you notice that every recent PR is failing with the same error message, it may be that an issue occurred with the CI infrastructure, or that one of Spack's dependencies put out a new release that is causing problems. If this is the case, please file an issue.


We currently test against Python 2.7 and 3.6-3.10 on both macOS and Linux and perform three types of tests:

Unit Tests

Unit tests ensure that core Spack features like fetching or spec resolution are working as expected. If your PR only adds new packages or modifies existing ones, there's very little chance that your changes could cause the unit tests to fail. However, if you make changes to Spack's core libraries, you should run the unit tests to make sure you didn't break anything.

Since they test things like fetching from VCS repos, the unit tests require git, mercurial, and subversion to run. Make sure these are installed on your system and can be found in your PATH. All of these can be installed with Spack or with your system package manager.

To run all of the unit tests, use:

$ spack unit-test


These tests may take several minutes to complete. If you know you are only modifying a single Spack feature, you can run subsets of tests at a time. For example, this would run all the tests in lib/spack/spack/test/architecture.py:

$ spack unit-test lib/spack/spack/test/architecture.py


And this would run the test_platform test from that file:

$ spack unit-test lib/spack/spack/test/architecture.py::test_platform


This allows you to develop iteratively: make a change, test that change, make another change, test that change, etc. We use pytest as our test framework, and these types of arguments are just passed to the pytest command underneath. See the pytest docs for more details on test selection syntax.

spack unit-test has a few special options that can help you understand what tests are available. To get a list of all available unit test files, run:

$ spack unit-test --list
...


To see a more detailed list of available unit tests, use spack unit-test --list-long:

$ spack unit-test --list-long
...


And to see the fully qualified names of all tests, use --list-names:

$ spack unit-test --list-names
...


You can combine these with pytest arguments to restrict which tests you want to know about. For example, to see just the tests in architecture.py:

$ spack unit-test --list-long lib/spack/spack/test/architecture.py


You can also combine any of these options with a pytest keyword search. See the pytest usage docs for more details on test selection syntax. For example, to see the names of all tests that have "spec" or "concretize" somewhere in their names:

$ spack unit-test --list-names -k "spec and concretize"


By default, pytest captures the output of all unit tests, and it will print any captured output for failed tests. Sometimes it's helpful to see your output interactively, while the tests run (e.g., if you add print statements to a unit test). To see the output live, use the -s argument to pytest:

$ spack unit-test -s --list-long lib/spack/spack/test/architecture.py::test_platform


Unit tests are crucial to making sure bugs aren't introduced into Spack. If you are modifying core Spack libraries or adding new functionality, please add new unit tests for your feature, and consider strengthening existing tests. You will likely be asked to do this if you submit a pull request to the Spack project on GitHub. Check out the pytest docs and feel free to ask for guidance on how to write tests!

NOTE:

You may notice the share/spack/qa/run-unit-tests script in the repository. This script is designed for CI. It runs the unit tests and reports coverage statistics back to Codecov. If you want to run the unit tests yourself, we suggest you use spack unit-test.


Style Tests

Spack uses Flake8 to test for PEP 8 conformance and mypy for type checking. PEP 8 is a series of style guides for Python that provide suggestions for everything from variable naming to indentation. In order to limit the number of PRs that were mostly style changes, we decided to enforce PEP 8 conformance. Your PR needs to comply with PEP 8 in order to be accepted, and if it modifies the spack library it needs to successfully type-check with mypy as well.

Testing for compliance with spack's style is easy. Simply run the spack style command:

$ spack style


spack style has a couple of advantages over running the tools by hand:

1.
It only tests files that you have modified since branching off of develop.
2.
It works regardless of what directory you are in.
3.
It automatically adds approved exemptions from the flake8 checks. For example, URLs are often longer than 80 characters, so we exempt them from line length checks. We also exempt lines that start with "homepage", "url", "version", "variant", "depends_on", and "extends" in package.py files. This is also possible when running flake8 directly if you use the spack formatter plugin included with spack.

More approved flake8 exemptions can be found here.

If all is well, you'll see something like this:

$ run-flake8-tests
Dependencies found.
=======================================================
flake8: running flake8 code checks on spack.

Modified files:
  var/spack/repos/builtin/packages/hdf5/package.py
  var/spack/repos/builtin/packages/hdf/package.py
  var/spack/repos/builtin/packages/netcdf/package.py
=======================================================
Flake8 checks were clean.


However, if you aren't compliant with PEP 8, flake8 will complain:

var/spack/repos/builtin/packages/netcdf/package.py:26: [F401] 'os' imported but unused
var/spack/repos/builtin/packages/netcdf/package.py:61: [E303] too many blank lines (2)
var/spack/repos/builtin/packages/netcdf/package.py:106: [E501] line too long (92 > 79 characters)
Flake8 found errors.


Most of the error messages are straightforward, but if you don't understand what they mean, just ask questions about them when you submit your PR. The line numbers will change if you add or delete lines, so simply run spack style again to update them.

TIP:

Try fixing flake8 errors in reverse order. This eliminates the need for multiple runs of spack style just to re-compute line numbers and makes it much easier to fix errors directly off of the CI output.


Documentation Tests

Spack uses Sphinx to build its documentation. In order to prevent things like broken links and missing imports, we added documentation tests that build the documentation and fail if there are any warning or error messages.

Building the documentation requires several dependencies:

  • sphinx
  • sphinxcontrib-programoutput
  • sphinx-rtd-theme
  • graphviz
  • git
  • mercurial
  • subversion

All of these can be installed with Spack, e.g.

$ spack install py-sphinx py-sphinxcontrib-programoutput py-sphinx-rtd-theme graphviz git mercurial subversion


WARNING:

Sphinx has several required dependencies. If you're using a python from Spack and you installed py-sphinx and friends, you need to make them available to your python. The easiest way to do this is to run:

$ spack load py-sphinx py-sphinx-rtd-theme py-sphinxcontrib-programoutput


so that all of the dependencies are added to PYTHONPATH. If you see an error message like:

Extension error:
Could not import extension sphinxcontrib.programoutput (exception: No module named sphinxcontrib.programoutput)
make: *** [html] Error 1


that means Sphinx couldn't find py-sphinxcontrib-programoutput in your PYTHONPATH.



Once all of the dependencies are installed, you can try building the documentation:

$ cd path/to/spack/lib/spack/docs/
$ make clean
$ make


If you see any warning or error messages, you will have to correct those before your PR is accepted. If you are editing the documentation, you should be running the documentation tests to make sure there are no errors. Documentation changes can result in some obfuscated warning messages. If you don't understand what they mean, feel free to ask when you submit your PR.

Coverage

Spack uses Codecov to generate and report unit test coverage. This helps us tell what percentage of lines of code in Spack are covered by unit tests. Although code covered by unit tests can still contain bugs, it is much less error prone than code that is not covered by unit tests.

Codecov provides browser extensions for Google Chrome and Firefox. These extensions integrate with GitHub and allow you to see coverage line-by-line when viewing the Spack repository. If you are new to Spack, a great way to get started is to write unit tests to increase coverage!

Unlike the CI tests on GitHub Actions, Codecov tests are not required to pass in order for your PR to be merged. If you modify core Spack libraries, we would greatly appreciate unit tests that cover these changed lines. Otherwise, we have no way of knowing whether or not your changes introduce a bug. If you make substantial changes to the core, we may request unit tests to increase coverage.

NOTE:

If the only files you modified are package files, we do not care about coverage on your PR. You may notice that the Codecov tests fail even though you didn't modify any core files. This means that Spack's overall coverage has increased since you branched off of develop. This is a good thing! If you really want to get the Codecov tests to pass, you can rebase off of the latest develop, but again, this is not required.


Git Workflows

Spack is still in the beta stages of development. Most of our users run off of the develop branch, and fixes and new features are constantly being merged. So how do you keep up-to-date with upstream while maintaining your own local differences and contributing PRs to Spack?

Branching

The easiest way to contribute a pull request is to make all of your changes on new branches. Make sure your develop is up-to-date and create a new branch off of it:

$ git checkout develop
$ git pull upstream develop
$ git branch <descriptive_branch_name>
$ git checkout <descriptive_branch_name>


Here we assume that the local develop branch tracks the upstream develop branch of Spack. This is not a requirement and you could also do the same with remote branches. But for some it is more convenient to have a local branch that tracks upstream.

Normally we prefer that commits pertaining to a package <package-name> have a message <package-name>: descriptive message. It is important to add a descriptive message so that others, who might be looking at your changes later (in a year or maybe two), will understand the rationale behind them.

Now, you can make your changes while keeping the develop branch pure. Edit a few files and commit them by running:

$ git add <files_to_be_part_of_the_commit>
$ git commit --message <descriptive_message_of_this_particular_commit>


Next, push it to your remote fork and create a PR:

$ git push origin <descriptive_branch_name> --set-upstream


GitHub provides a tutorial on how to file a pull request. When you send the request, make develop the destination branch.

If you need this change immediately and don't have time to wait for your PR to be merged, you can always work on this branch. But if you have multiple PRs, another option is to maintain a Frankenstein branch that combines all of your other branches:

$ git checkout develop
$ git branch <your_modified_develop_branch>
$ git checkout <your_modified_develop_branch>
$ git merge <descriptive_branch_name>


This can be done with each new PR you submit. Just make sure to keep this local branch up-to-date with upstream develop too.

Cherry-Picking

What if you made some changes to your local modified develop branch and already committed them, but later decided to contribute them to Spack? You can use cherry-picking to create a new branch with only these commits.

First, check out your local modified develop branch:

$ git checkout <your_modified_develop_branch>


Now, get the hashes of the commits you want from the output of:

$ git log


Next, create a new branch off of upstream develop and copy the commits that you want in your PR:

$ git checkout develop
$ git pull upstream develop
$ git branch <descriptive_branch_name>
$ git checkout <descriptive_branch_name>
$ git cherry-pick <hash>
$ git push origin <descriptive_branch_name> --set-upstream


Now you can create a PR from the web-interface of GitHub. The net result is as follows:

1.
You patched your local version of Spack and can use it further.
2.
You "cherry-picked" these changes in a stand-alone branch and submitted it as a PR upstream.

Should you have several commits to contribute, you could follow the same procedure by getting hashes of all of them and cherry-picking to the PR branch.

NOTE:

It is important that whenever you change something that might be of importance upstream, create a pull request as soon as possible. Do not wait for weeks/months to do this, because:
1.
you might forget why you modified certain files
2.
it could get difficult to isolate this change into a stand-alone clean PR.



Rebasing

Other developers are constantly making contributions to Spack, possibly on the same files that your PR changed. If their PR is merged before yours, it can create a merge conflict. This means that your PR can no longer be automatically merged without a chance of breaking your changes. In this case, you will be asked to rebase on top of the latest upstream develop.

First, make sure your develop branch is up-to-date:

$ git checkout develop
$ git pull upstream develop


Now, we need to switch to the branch you submitted for your PR and rebase it on top of develop:

$ git checkout <descriptive_branch_name>
$ git rebase develop


Git will likely ask you to resolve conflicts. Edit the file that it says can't be merged automatically and resolve the conflict. Then, run:

$ git add <file_that_could_not_be_merged>
$ git rebase --continue


You may have to repeat this process multiple times until all conflicts are resolved. Once this is done, simply force push your rebased branch to your remote fork:

$ git push --force origin <descriptive_branch_name>


Rebasing with cherry-pick

You can also perform a rebase using cherry-pick. First, create a temporary backup branch:

$ git checkout <descriptive_branch_name>
$ git branch tmp


If anything goes wrong, you can always go back to your tmp branch. Now, look at the logs and save the hashes of any commits you would like to keep:

$ git log


Next, go back to the original branch and reset it to develop. Before doing so, make sure that your local develop branch is up-to-date with upstream:

$ git checkout develop
$ git pull upstream develop
$ git checkout <descriptive_branch_name>
$ git reset --hard develop


Now you can cherry-pick relevant commits:

$ git cherry-pick <hash1>
$ git cherry-pick <hash2>


Push the modified branch to your fork:

$ git push --force origin <descriptive_branch_name>


If everything looks good, delete the backup branch:

$ git branch --delete --force tmp


Re-writing History

Sometimes you may end up on a branch that has diverged so much from develop that it cannot easily be rebased. If the current commit history is more of an experimental nature and only the net result is important, you may rewrite the history.

First, merge upstream develop and reset your branch to it. On the branch in question, run:

$ git merge develop
$ git reset develop


At this point your branch will point to the same commit as develop, and the two are thereby indistinguishable. However, all the files that were previously modified will stay as such. In other words, you do not lose the changes you made. Changes can be reviewed by looking at diffs:

$ git status
$ git diff


The next step is to rewrite the history by adding files and creating commits:

$ git add <files_to_be_part_of_commit>
$ git commit --message <descriptive_message>


After all changed files are committed, you can push the branch to your fork and create a PR:

$ git push origin --set-upstream


PACKAGING GUIDE

This guide is intended for developers or administrators who want to package software so that Spack can install it. It assumes that you have at least some familiarity with Python, and that you've read the basic usage guide, especially the part about specs.

There are two key parts of Spack:

1.
Specs: expressions for describing builds of software, and
2.
Packages: Python modules that describe how to build and test software according to a spec.

Specs allow a user to describe a particular build in a way that a package author can understand. Packages allow the packager to encapsulate the build logic for different versions, compilers, options, platforms, and dependency combinations in one place. Essentially, a package translates a spec into build logic. It also allows the packager to write spec-specific tests of the installed software.
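For instance, a recipe can branch on the spec it was asked to build. The following is a minimal sketch (hypothetical package, standard Spack idioms):

from spack.package import *


class Mypackage(CMakePackage):
    """Hypothetical package showing how a spec drives build logic."""

    variant("logging", default=False, description="Enable verbose logging")

    def cmake_args(self):
        # The same recipe yields different builds depending on the spec:
        # `spack install mypackage +logging` turns the flag on.
        return [self.define_from_variant("ENABLE_LOGGING", "logging")]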

Packages in Spack are written in pure Python, so you can do anything in Spack that you can do in Python. Python was chosen as the implementation language for two reasons. First, Python is becoming ubiquitous in the scientific software community. Second, it's a modern language and has many powerful features to help make package writing easy.

WARNING:

As a general rule, packages should install the software from source. The only exception is for proprietary software (e.g., vendor compilers).

If a special build system needs to be added in order to support building a package from source, then the associated code and recipe should be added first.



Overview of the installation procedure

Whenever Spack installs software, it goes through a series of predefined steps: [image]

All these steps are influenced by the metadata in each package.py and by the current Spack configuration. Since build systems are different from one another, the execution of the last block in the figure is further expanded in a build system specific way. An example for CMake is, for instance: [image]

The predefined steps for each build system are called "phases". In general, the name and order in which the phases will be executed can be obtained by either reading the API docs at build_systems, or using the spack info command:

$ spack info --phases m4
AutotoolsPackage:    m4

Homepage:            https://www.gnu.org/software/m4/m4.html

Safe versions:
    1.4.17    ftp://ftp.gnu.org/gnu/m4/m4-1.4.17.tar.gz

Variants:
    Name       Default    Description
    sigsegv    on         Build the libsigsegv dependency

Installation Phases:
    autoreconf    configure    build    install

Build Dependencies:
    libsigsegv

...


An extensive list of available build systems and phases is provided in Overriding build system defaults.

Writing a package recipe

Since v0.19, Spack supports two ways of writing a package recipe. The most commonly used is to encode both the metadata (directives, etc.) and the build behavior in a single class, as shown in the following example:

class Openjpeg(CMakePackage):
    """OpenJPEG is an open-source JPEG 2000 codec written in C language"""

    homepage = "https://github.com/uclouvain/openjpeg"
    url = "https://github.com/uclouvain/openjpeg/archive/v2.3.1.tar.gz"

    version("2.4.0", sha256="8702ba68b442657f11aaeb2b338443ca8d5fb95b0d845757968a7be31ef7f16d")

    variant("codec", default=False, description="Build the CODEC executables")

    depends_on("libpng", when="+codec")

    def url_for_version(self, version):
        if version >= Version("2.1.1"):
            return super().url_for_version(version)
        url_fmt = "https://github.com/uclouvain/openjpeg/archive/version.{0}.tar.gz"
        return url_fmt.format(version)

    def cmake_args(self):
        args = [
            self.define_from_variant("BUILD_CODEC", "codec"),
            self.define("BUILD_MJ2", False),
            self.define("BUILD_THIRDPARTY", False),
        ]
        return args


A package encoded with a single class is backward compatible with versions of Spack lower than v0.19, and so are custom repositories containing only recipes of this kind. The downside is that this format doesn't allow packagers to use more than one build system in a single recipe.

To do that, we have to resort to the second way Spack has of writing packages, which involves writing a builder class explicitly. Using the same example as above, this reads:

class Openjpeg(CMakePackage):
    """OpenJPEG is an open-source JPEG 2000 codec written in C language"""

    homepage = "https://github.com/uclouvain/openjpeg"
    url = "https://github.com/uclouvain/openjpeg/archive/v2.3.1.tar.gz"

    version("2.4.0", sha256="8702ba68b442657f11aaeb2b338443ca8d5fb95b0d845757968a7be31ef7f16d")

    variant("codec", default=False, description="Build the CODEC executables")

    depends_on("libpng", when="+codec")

    def url_for_version(self, version):
        if version >= Version("2.1.1"):
            return super().url_for_version(version)
        url_fmt = "https://github.com/uclouvain/openjpeg/archive/version.{0}.tar.gz"
        return url_fmt.format(version)


class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
    def cmake_args(self):
        args = [
            self.define_from_variant("BUILD_CODEC", "codec"),
            self.define("BUILD_MJ2", False),
            self.define("BUILD_THIRDPARTY", False),
        ]
        return args


This way of writing packages allows extending the recipe to support multiple build systems; see Multiple build systems for more details. The downside is that recipes of this kind are only understood by Spack v0.19 and later. More information on the internal architecture of Spack can be found at Package class architecture.
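For orientation, a recipe supporting two build systems might be laid out roughly as in the following sketch (hypothetical package; see Multiple build systems for the authoritative details):

import spack.build_systems.autotools
import spack.build_systems.cmake
from spack.package import *


class Example(CMakePackage, AutotoolsPackage):
    """Hypothetical package buildable with either CMake or Autotools."""

    # Declare the supported build systems and which one to prefer.
    build_system("cmake", "autotools", default="cmake")


class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
    def cmake_args(self):
        return [self.define("BUILD_SHARED_LIBS", True)]


class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
    def configure_args(self):
        return ["--enable-shared"]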

NOTE:

If a builder is implemented in package.py, all build-specific methods must be moved to the builder. This means that if you have a package like

class Foo(CMakePackage):
    def cmake_args(self):
        ...


and you add a builder to the package.py, you must move cmake_args to the builder.



Creating new packages

To help create a new package, Spack provides a command that generates a package.py file in an existing repository, with a boilerplate package template. Here's an example:

$ spack create https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
Spack examines the tarball URL and tries to figure out the name of the package to be created. If the name contains uppercase letters, these are automatically converted to lowercase. If the name contains underscores or periods, these are automatically converted to dashes.
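That normalization is simple enough to capture in a couple of lines; an illustrative sketch:

import re

def simplify_name(name):
    # Lowercase the name, then turn underscores and periods into dashes.
    return re.sub(r"[_.]", "-", name.lower())

print(simplify_name("My_Package.Name"))  # -> my-package-name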

Spack also searches for additional versions located in the same directory of the website. It tells you how many versions it found and asks you how many you would like to download and checksum:

$ spack create https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
==> This looks like a URL for gmp
==> Found 16 versions of gmp:

  6.1.2  https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
  6.1.1  https://gmplib.org/download/gmp/gmp-6.1.1.tar.bz2
  6.1.0  https://gmplib.org/download/gmp/gmp-6.1.0.tar.bz2
  ...
  5.0.0  https://gmplib.org/download/gmp/gmp-5.0.0.tar.bz2

How many would you like to checksum? (default is 1, q to abort)


Spack will automatically download the number of tarballs you specify (starting with the most recent) and checksum each of them.

You do not have to download all of the versions up front. You can always choose to download just one tarball initially, and run spack checksum later if you need more versions.

Spack automatically creates a directory in the appropriate repository, generates a boilerplate template for your package, and opens up the new package.py in your favorite $EDITOR (see Controlling the editor for details):

# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# ----------------------------------------------------------------------------
# If you submit this package back to Spack as a pull request,
# please first remove this boilerplate and all FIXME comments.
#
# This is a template package file for Spack.  We've put "FIXME"
# next to all the things you'll want to change. Once you've handled
# them, you can save this file and test your package like this:
#
#     spack install gmp
#
# You can edit this file again by typing:
#
#     spack edit gmp
#
# See the Spack documentation for more information on packaging.
# ----------------------------------------------------------------------------
import spack.build_systems.autotools
from spack.package import *


class Gmp(AutotoolsPackage):
    """FIXME: Put a proper description of your package here."""

    # FIXME: Add a proper url for your package's homepage here.
    homepage = "https://www.example.com"
    url = "https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2"

    # FIXME: Add a list of GitHub accounts to
    # notify when the package is updated.
    # maintainers("github_user1", "github_user2")

    version("6.2.1", sha256="eae9326beb4158c386e39a356818031bd28f3124cf915f8c5b1dc4c7a36b4d7c")

    # FIXME: Add dependencies if required.
    # depends_on("foo")

    def configure_args(self):
        # FIXME: Add arguments other than --prefix
        # FIXME: If not needed delete the function
        args = []
        return args


The tedious stuff (creating the class, checksumming archives) has been done for you. Spack correctly detected that gmp uses the autotools build system, so it created a new Gmp package that subclasses the AutotoolsPackage base class.

The default installation procedure for a package subclassing the AutotoolsPackage is to go through the typical process of:

./configure --prefix=/path/to/installation/directory
make
make check
make install


For most Autotools packages, this is sufficient. If you need to add additional arguments to the ./configure call, add them via the configure_args function.

In the generated package, the download url attribute is already set. All the things you still need to change are marked with FIXME labels. You can delete the commented instructions between the license and the first import statement after reading them. The rest of the tasks you need to do are as follows:

1.
Add a description.

Immediately inside the package class is a docstring in triple-quotes ("""). It is used to generate the description shown when users run spack info.

2.
Change the homepage to a useful URL.

The homepage is displayed when users run spack info so that they can learn more about your package.

3.
Add a comma-separated list of maintainers.

Add a list of GitHub accounts of people who want to be notified any time the package is modified. See Maintainers.

4.
Add depends_on() calls for the package's dependencies.

depends_on tells Spack that other packages need to be built and installed before this one. See Dependencies.

5.
Get the installation working.

Your new package may require specific flags during configure. These can be added via configure_args (see the sketch just below this list). Specifics will differ depending on the package and its build system. Overriding build system defaults is covered in detail later.
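
For example, a filled-in configure_args might look like the following sketch (the flag and variant names here are hypothetical, not gmp's actual options):

def configure_args(self):
    # Hypothetical example flags; replace with the package's real options.
    args = ["--enable-cxx"]
    if self.spec.satisfies("+static"):  # assumes a "static" variant is defined
        args.append("--enable-static")
    return args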


Controlling the editor

When Spack needs to open an editor for you (e.g., for commands like Creating new packages or Editing existing packages), it looks at several environment variables to figure out what to use. The order of precedence is:

  • SPACK_EDITOR: highest precedence, in case you want something specific for Spack;
  • VISUAL: standard environment variable for full-screen editors like vim or emacs;
  • EDITOR: older environment variable for your editor.

You can set any of these to the command you want to run, e.g., in bash you might run one of these:

export VISUAL=vim
export EDITOR="emacs -nw"
export SPACK_EDITOR=nano


If Spack finds none of these variables set, it will look for vim, vi, emacs, nano, and notepad, in that order.

Bundling software

If you have a collection of software expected to work well together, with no source code of its own, you can create a BundlePackage. Examples where bundle packages can be useful include defining suites of applications (e.g., EcpProxyApps), commonly used libraries (e.g., AmdAocl), and software development kits (e.g., EcpDataVisSdk).

These versioned packages primarily consist of dependencies on the associated software packages. They can include variants to ensure common build options are consistently applied to dependencies. Known build failures, such as not building on a platform or when certain compilers or variants are used, can be flagged with conflicts. Build requirements, such as only building with specific compilers, can similarly be flagged with requires.

The spack create --template bundle command will create a skeleton BundlePackage package.py for you:

$ spack create --template bundle --name coolsdk


Now you can fill in the basic package documentation, version(s), and software package dependencies along with any other relevant customizations.
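
A filled-in bundle might look something like this sketch (the package names and the variant are hypothetical):

class Coolsdk(BundlePackage):
    """Hypothetical suite of related packages."""

    version("1.0")

    variant("viz", default=False, description="Include visualization support")

    depends_on("pkg-a")                   # hypothetical package
    depends_on("pkg-b+viz", when="+viz")  # hypothetical package and variant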

NOTE:

Remember that bundle packages have no software of their own so there is nothing to download.


Non-downloadable software

If your software cannot be downloaded from a URL, you can still create a boilerplate package.py by telling spack create what name you want to use:

$ spack create --name intel


This will create a simple intel package with an install() method that you can craft to install your package. Likewise, you can force the build system to be used with --template and, in case it's needed, you can overwrite a package already in the repository with --force:

$ spack create --name gmp https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
$ spack create --force --template autotools https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2


A complete list of available build system templates can be found by running spack create --help.
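
Returning to the intel example above, the generated install() stub might be fleshed out along these lines (a sketch; it assumes the manually downloaded source tree has already been staged into the working directory):

def install(self, spec, prefix):
    # Sketch: copy the staged files into the installation prefix.
    install_tree(".", prefix)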

Editing existing packages

One of the easiest ways to learn how to write packages is to look at existing ones. You can edit a package file by name with the spack edit command:

$ spack edit gmp


If you used spack create to create a package, you can get back to it later with spack edit. For instance, the gmp package actually lives in:

$ spack location -p gmp
${SPACK_ROOT}/var/spack/repos/builtin/packages/gmp/package.py


but spack edit provides a much simpler shortcut and saves you the trouble of typing the full path.

Naming & directory structure

This section describes how packages need to be named, and where they live in Spack's directory structure. In general, Creating new packages handles creating package files for you, so you can skip most of the details here.

var/spack/repos/builtin/packages

A Spack installation directory is structured like a standard UNIX install prefix (bin, lib, include, var, opt, etc.). Most of the code for Spack lives in $SPACK_ROOT/lib/spack. Packages themselves live in $SPACK_ROOT/var/spack/repos/builtin/packages.

If you cd to that directory, you will see directories for each package:

$ cd $SPACK_ROOT/var/spack/repos/builtin/packages && ls
3dtk
3proxy
7zip
abacus
abduco
abi-compliance-checker
abi-dumper
abinit
abseil-cpp
abyss
...


Each directory contains a file called package.py, which is where all the Python code for the package goes. For example, the libelf package lives in:

$SPACK_ROOT/var/spack/repos/builtin/packages/libelf/package.py


Alongside the package.py file, a package may contain extra directories or files (like patches) that it needs to build.

Package Names

Packages are named after the directory containing package.py. So, libelf's package.py lives in a directory called libelf. The package.py file defines a class called Libelf, which extends Spack's Package class. For example, here is $SPACK_ROOT/var/spack/repos/builtin/packages/libelf/package.py:

from spack import *


class Libelf(Package):
    """ ... description ... """

    homepage = ...
    url = ...

    version(...)
    depends_on(...)

    def install():
        ...


The directory name (libelf) determines the package name that users should provide on the command line, e.g., if you type any of these:

$ spack info libelf
$ spack versions libelf
$ spack install libelf@0.8.13


Spack sees the package name in the spec and looks for libelf/package.py in var/spack/repos/builtin/packages. Likewise, if you run spack install py-numpy, Spack looks for py-numpy/package.py.

Spack uses the directory name as the package name in order to give packagers more freedom in naming their packages. Package names can contain letters, numbers, and dashes. Using a Python identifier (e.g., a class name or a module name) would make it difficult to support these options. So, you can name a package 3proxy or foo-bar and Spack won't care. It just needs to see that name in the packages directory.

Package class names

Spack loads package.py files dynamically, and it needs to find a special class name in the file for the load to succeed. The class name (Libelf in our example) is formed by converting words separated by - in the file name to CamelCase. If the name starts with a number, we prefix the class name with _. Here are some examples:

Module Name    Class Name
foo-bar        FooBar
3proxy         _3proxy
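
As a rough sketch of the convention (not Spack's actual implementation, which handles more cases):

def mod_to_class(name):
    # "foo-bar" -> "FooBar"; "3proxy" -> "_3proxy"
    cls = "".join(part.capitalize() for part in name.split("-"))
    if cls[0].isdigit():
        cls = "_" + cls
    return cls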

In general, you won't have to remember this naming convention because Creating new packages and Editing existing packages handle the details for you.

Maintainers

Each package in Spack may have one or more maintainers, i.e. one or more GitHub accounts of people who want to be notified any time the package is modified.

When a pull request is submitted that updates the package, these people will be requested to review the PR. This is useful for developers who maintain a Spack package for their own software, as well as users who rely on a piece of software and want to ensure that the package doesn't break. It also gives users a list of people to contact for help when someone reports a build error with the package.

To add maintainers to a package, simply declare them with the maintainers directive:

maintainers("user1", "user2")


The list of maintainers is additive: it includes any accounts declared in base classes as well.

Trusted Downloads

Spack verifies that the source code it downloads is not corrupted or compromised; or at least, that it is the same version the author of the Spack package saw when the package was created. If Spack uses a download method it can verify, we say the download method is trusted. Trust is important for all downloads: Spack has no control over the security of the various sites from which it downloads source code, and can never assume that any particular site hasn't been compromised.

Trust is established in different ways for different download methods. For the most common download method --- a single-file tarball --- the tarball is checksummed. Git downloads using commit= are trusted implicitly, as long as a hash is specified.

Spack also provides untrusted download methods: tarball URLs may be supplied without a checksum, or Git downloads may specify a branch or tag instead of a hash. If the user does not control or trust the source of an untrusted download, it is a security risk. Unless otherwise specified by the user for special cases, Spack should by default use only trusted download methods.

Unfortunately, Spack does not currently provide that guarantee. It does provide the following mechanisms for safety:

1.
By default, Spack will only install a tarball package if it has a checksum and that checksum matches. You can override this with spack install --no-checksum.
2.
Numeric versions are almost always tarball downloads, whereas non-numeric versions not named develop frequently download untrusted branches or tags from a version control system. As long as a package has at least one numeric version, and no non-numeric version named develop, Spack will prefer it over any non-numeric versions.

Checksums

For tarball downloads, Spack can currently support checksums using the MD5, SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 algorithms. It determines the algorithm to use based on the hash length.

Versions and fetching

The most straightforward way to add new versions to your package is to add a line like this in the package class:

class Foo(Package):

    url = "http://example.com/foo-1.0.tar.gz"

    version("8.2.1", md5="4136d7b4c04df68b686570afa26988ac")
    version("8.2.0", md5="1c9f62f0778697a09d36121ead88e08e")
    version("8.1.2", md5="d47dd09ed7ae6e7fd6f9a816d7f5fdf6")


NOTE:

By convention, we list versions in descending order, from newest to oldest.


NOTE:

Bundle packages do not have source code so there is nothing to fetch. Consequently, their version directives consist solely of the version name (e.g., version("202309")).


Date Versions

If you wish to use dates as versions, it is best to use the format @yyyy-mm-dd. This will ensure they sort in the correct order.

Alternately, you might use a hybrid release-version / date scheme. For example, @1.3_2016-08-31 would mean the version from the 1.3 branch, as of August 31, 2016.

Version URLs

By default, each version's URL is extrapolated from the url field in the package. For example, Spack is smart enough to download version 8.2.1 of the Foo package above from http://example.com/foo-8.2.1.tar.gz.

If the URL is particularly complicated or changes based on the release, you can override the default URL generation algorithm by defining your own url_for_version() function. For example, the download URL for OpenMPI contains the major.minor version in one spot and the major.minor.patch version in another:

https://www.open-mpi.org/software/ompi/v2.1/downloads/openmpi-2.1.1.tar.bz2

In order to handle this, you can define a url_for_version() function like so:


def url_for_version(self, version):
    url = "https://download.open-mpi.org/release/open-mpi/v{0}/openmpi-{1}.tar.bz2"
    return url.format(version.up_to(2), version)


With this url_for_version() in place, Spack knows to download OpenMPI 2.1.1 from https://download.open-mpi.org/release/open-mpi/v2.1/openmpi-2.1.1.tar.bz2 but to download OpenMPI 1.10.7 from https://download.open-mpi.org/release/open-mpi/v1.10/openmpi-1.10.7.tar.bz2.

You'll notice that OpenMPI's url_for_version() function makes use of a special Version function called up_to(). When you call version.up_to(2) on a version like 1.10.0, it returns 1.10. version.up_to(1) would return 1. This can be very useful for packages that place all X.Y.* versions in a single directory and all X.Y.Z versions in a sub-directory.

There are a few Version properties you should be aware of. We generally prefer numeric versions to be separated by dots for uniformity, but not all tarballs are named that way. For example, icu4c separates its major and minor versions with underscores, like icu4c-57_1-src.tgz. The value 57_1 can be obtained with the use of the version.underscored property. Note that Python properties don't need parentheses. There are other separator properties as well:

Property               Result
version.dotted         1.2.3
version.dashed         1-2-3
version.underscored    1_2_3
version.joined         123

NOTE:

Python properties don't need parentheses. version.dashed is correct. version.dashed() is incorrect.


In addition, these version properties can be combined with up_to(). For example:

>>> version = Version("1.2.3")
>>> version.up_to(2).dashed
Version("1-2")
>>> version.underscored.up_to(2)
Version("1_2")


As you can see, order is not important. Just keep in mind that up_to() and the other version properties return Version objects, not strings.

If a URL cannot be derived systematically, or there is a special URL for one of its versions, you can add an explicit URL for a particular version:

version("8.2.1", md5="4136d7b4c04df68b686570afa26988ac",

url="http://example.com/foo-8.2.1-special-version.tar.gz")


When you supply a custom URL for a version, Spack uses that URL verbatim and does not perform extrapolation. The order of precedence of these methods, from lowest to highest, is:

1.
package-level url
2.
url_for_version()
3.
version-specific url

so if your package contains a url_for_version(), it can be overridden by a version-specific url.

If your package does not contain a package-level url or url_for_version(), Spack can still determine which URL to download from even if only some of the versions specify their own url; Spack will use the nearest URL before the requested version. This is useful for packages that have an easy-to-extrapolate URL, but keep changing their URL format every few releases. With this method, you only need to specify the url when the URL changes.

Mirrors of the main URL

Spack supports listing mirrors of the main URL in a package by defining the urls attribute instead of just a single url. This attribute is a list of possible URLs that will be tried in order when fetching packages. Note that only one of url or urls may be present in a package, not both at the same time.
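
For example (a sketch; the mirror URLs are hypothetical):

class Foo(Package):

    urls = [
        "http://example.com/foo-1.0.tar.gz",
        "http://mirror.example.org/foo-1.0.tar.gz",
    ]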

A well-known case of packages that can be fetched from multiple mirrors is that of GNU. For that, Spack goes a step further and defines a mixin class that takes care of all of the plumbing and requires packagers to just define a proper gnu_mirror_path attribute:

class Autoconf(AutotoolsPackage, GNUMirrorPackage):
    """Autoconf -- system configuration part of autotools"""

    homepage = "https://www.gnu.org/software/autoconf/"
    gnu_mirror_path = "autoconf/autoconf-2.69.tar.gz"

    version("2.71", sha256="431075ad0bf529ef13cb41e9042c542381103e80015686222b8a9d4abef42a1c")
    version("2.70", sha256="f05f410fda74323ada4bdc4610db37f8dbd556602ba65bc843edb4d4d4a1b2b7")


Skipping the expand step

Spack normally expands archives (e.g. *.tar.gz and *.zip) automatically into a standard stage source directory (self.stage.source_path) after downloading them. If you want to skip this step (e.g., for self-extracting executables and other custom archive types), you can add expand=False to a version directive.

version("8.2.1", md5="4136d7b4c04df68b686570afa26988ac",

url="http://example.com/foo-8.2.1-special-version.sh", expand=False)


When expand is set to False, Spack sets the current working directory to the directory containing the downloaded archive before it calls your install method. Within install, the path to the downloaded archive is available as self.stage.archive_file.

Here is an example snippet for packages distributed as self-extracting archives. The example sets permissions on the downloaded file to make it executable, then runs it with some arguments.

def install(self, spec, prefix):
    set_executable(self.stage.archive_file)
    installer = Executable(self.stage.archive_file)
    installer("--prefix=%s" % prefix, "arg1", "arg2", "etc.")


Deprecating old versions

There are many reasons to remove old versions of software:

1.
Security vulnerabilities (most serious reason)
2.
Changing build systems that increase package complexity
3.
Changing dependencies/patches/resources/flags that increase package complexity
4.
Maintainer/developer inability/unwillingness to support old versions
5.
No longer available for download (right to be forgotten)
6.
Package or version rename

At the same time, there are many reasons to keep old versions of software:

1.
Reproducibility
2.
Requirements for older packages (e.g. some packages still rely on Qt 3)

In general, you should not remove old versions from a package.py. Instead, you should first deprecate them using the following syntax:

version("1.2.3", sha256="...", deprecated=True)


This has two effects. First, spack info will no longer advertise that version. Second, commands like spack install that fetch the package will require user approval:

$ spack install openssl@1.0.1e
==> Warning: openssl@1.0.1e is deprecated and may be removed in a future Spack release.
==>   Fetch anyway? [y/N]


If you use spack install --deprecated, this check can be skipped.

This also applies to package recipes that are renamed or removed. You should first deprecate all versions before removing a package. If you need to rename it, you can deprecate the old package and create a new package at the same time.

Version deprecations should always last at least one Spack minor release cycle before the version is completely removed. For example, if a version is deprecated in Spack 0.16.0, it should not be removed until Spack 0.17.0. No version should be removed without such a deprecation process. This gives users a chance to complain about the deprecation in case the old version is needed for some application. If you require a deprecated version of a package, simply submit a PR to remove deprecated=True from the package. However, you may be asked to help maintain this version of the package if the current maintainers are unwilling to support this older version.

Download caching

Spack maintains a cache (described here) which saves files retrieved during package installations to avoid re-downloading in the case that a package is installed with a different specification (but the same version) or reinstalled on account of a change in the hashing scheme. It may (rarely) be necessary to avoid caching for a particular version by adding no_cache=True as an option to the version() directive. Example situations would be a "snapshot"-like Version Control System (VCS) tag, a VCS branch such as v6-16-00-patches, or a URL specifying a regularly updated snapshot tarball.
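
For instance, a branch-based version that opts out of caching might be declared like this (a sketch; the version and branch names are hypothetical):

version("6-16-00-patches", branch="v6-16-00-patches", no_cache=True)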

Version comparison

Most Spack versions are numeric, a tuple of integers; for example, 0.1, 6.96 or 1.2.3.1. Spack knows how to compare and sort numeric versions.

Some Spack versions involve slight extensions of numeric syntax; for example, py-sphinx-rtd-theme@=0.1.10a0. In this case, numbers are always considered to be "newer" than letters. This is for consistency with RPM.

Spack versions may also be arbitrary non-numeric strings, for example develop, master, local.

The order on versions is defined as follows. A version string is split into a list of components based on delimiters such as . and -. Lists are then ordered lexicographically, with components ordered as follows:

1.
The following special strings are considered larger than any other numeric or non-numeric version component, and satisfy the following order between themselves: develop > main > master > head > trunk > stable.
2.
Numbers are ordered numerically, are less than special strings, and larger than other non-numeric components.
3.
All other non-numeric components are less than numeric components, and are ordered alphabetically.

The logic behind this sort order is two-fold:

1.
Non-numeric versions are usually used for special cases while developing or debugging a piece of software. Keeping most of them less than numeric versions ensures that Spack chooses numeric versions by default whenever possible.
2.
The most-recent development version of a package will usually be newer than any released numeric versions. This allows the @develop version to satisfy dependencies like depends_on("abc", when="@x.y.z:").
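
As a quick illustration of these rules (a sketch; it assumes Spack's own spack.version module is importable):

from spack.version import Version

# Special strings like "develop" beat any numeric version.
assert Version("develop") > Version("1.2.3")
# Numeric versions beat ordinary non-numeric strings.
assert Version("1.2.3") > Version("local")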

Version selection

When concretizing, many versions might match a user-supplied spec. For example, the spec python matches all available versions of the package python. Similarly, python@3: matches all versions of Python 3 and above. Given a set of versions that match a spec, Spack concretization uses the following priorities to decide which one to use:

1.
If the user provided a list of versions in packages.yaml, the first matching version in that list will be used.
2.
If one or more versions is specified as preferred=True, in either packages.yaml or package.py, the largest matching version will be used. ("Latest" is defined by the sort order above).
3.
If no preferences in particular are specified in the package or in packages.yaml, then the largest matching non-develop version will be used. By avoiding @develop, this prevents users from accidentally installing a @develop version.
4.
If all else fails and @develop is the only matching version, it will be used.

Ranges versus specific versions

When specifying versions in Spack using the pkg@<specifier> syntax, you can use either ranges or specific versions. It is generally recommended to use ranges instead of specific versions when packaging to avoid overly constraining dependencies, patches, and conflicts.

For example, depends_on("python@3") denotes a range of versions, allowing Spack to pick any 3.x.y version for Python, while depends_on("python@=3.10.1") restricts it to a specific version.

Specific @= versions should only be used in exceptional cases, such as when the package has a versioning scheme that omits the zero in the first patch release: 3.1, 3.1.1, 3.1.2. In this example, the specifier @=3.1 is the correct way to select only the 3.1 version, whereas @3.1 would match all those versions.

Ranges are preferred even if they would only match a single version defined in the package. This is because users can define custom versions in packages.yaml that typically include a custom suffix. For example, if the package defines the version 1.2.3, the specifier @1.2.3 will also match a user-defined version 1.2.3-custom.
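
To make the distinction concrete, here are a few illustrative dependency specifiers (the package and versions are only examples):

# Ranges (preferred): they leave room for patch releases and custom suffixes.
depends_on("python@3:")        # any Python 3.x or newer
depends_on("python@3.8:3.11")  # anything from 3.8 up to and including 3.11.x

# Exact pin (exceptional cases only).
depends_on("python@=3.10.1")   # precisely 3.10.1, nothing else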

spack checksum

If you want to add new versions to a package you've already created, this is automated with the spack checksum command. Here's an example for libelf:

$ spack checksum libelf

This does the same thing that spack create does, but it allows you to go back and add new versions easily as you need them (e.g., as they're released). It fetches the tarballs you ask for and prints out a list of version commands ready to copy/paste into your package file:

==> Checksummed new versions of libelf:

version("0.8.13", md5="4136d7b4c04df68b686570afa26988ac")
version("0.8.12", md5="e21f8273d9f5f6d43a59878dc274fec7")
version("0.8.11", md5="e931910b6d100f6caa32239849947fbf")
version("0.8.10", md5="9db4d36c283d9790d8fa7df1f4d7b4d9")


By default, Spack will search for new tarball downloads by scraping the parent directory of the tarball you gave it. So, if your tarball is at http://example.com/downloads/foo-1.0.tar.gz, Spack will look in http://example.com/downloads/ for links to additional versions. If you need to search another path for download links, you can supply some extra attributes that control how your package finds new versions. See the documentation on list_url and list_depth.

NOTE:

  • This command assumes that Spack can extrapolate new URLs from an existing URL in the package, and that Spack can find similar URLs on a webpage. If that's not possible, e.g. if the package's developers don't name their tarballs consistently, you'll need to manually add version calls yourself.
  • For spack checksum to work, Spack needs to be able to import your package in Python. That means it can't have any syntax errors, or the import will fail. Use this once you've got your package in working order.



Finding new versions

You've already seen the homepage and url package attributes:

from spack import *


class Mpich(Package):
    """MPICH is a high performance and widely portable implementation of
    the Message Passing Interface (MPI) standard."""

    homepage = "http://www.mpich.org"
    url = "http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz"


These are class-level attributes used by Spack to show users information about the package, and to determine where to download its source code.

Spack uses the tarball URL to extrapolate where to find other tarballs of the same package (e.g. in spack checksum), but this does not always work. This section covers ways you can tell Spack to find tarballs elsewhere.

list_url

When spack tries to find available versions of packages (e.g. with spack checksum), it spiders the parent directory of the tarball in the url attribute. For example, for libelf, the url is:

http://www.mr511.de/software/libelf-0.8.13.tar.gz

Here, Spack spiders http://www.mr511.de/software/ to find similar tarball links and ultimately to make a list of available versions of libelf.

For many packages, the tarball's parent directory may be unlistable, or it may not contain any links to source code archives. In fact, many times additional package downloads aren't even available in the same directory as the download URL.

For these, you can specify a separate list_url indicating the page to search for tarballs. For example, libdwarf has the homepage as the list_url, because that is where links to old versions are:

class Libdwarf(Package):

    homepage = "http://www.prevanders.net/dwarf.html"
    url = "http://www.prevanders.net/libdwarf-20130729.tar.gz"
    list_url = homepage


list_depth

libdwarf and many other packages have a listing of available versions on a single webpage, but not all do. For example, mpich has a tarball URL that looks like this:

url = "http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz"

But its downloads are in many different subdirectories of http://www.mpich.org/static/downloads/. So, we need to add a list_url and a list_depth attribute:

list_url = "http://www.mpich.org/static/downloads/"
list_depth = 1

By default, Spack only looks at the top-level page available at list_url. list_depth = 1 tells it to follow up to 1 level of links from the top-level page. Note that here, this implies 1 level of subdirectories, as the mpich website is structured much like a filesystem. But list_depth really refers to link depth when spidering the page.

Fetching from code repositories

For some packages, source code is provided in a Version Control System (VCS) repository rather than in a tarball. Spack can fetch packages from VCS repositories. Currently, Spack supports fetching with Git, Mercurial (hg), Subversion (svn), CVS (cvs), and Go. In all cases, the destination is the standard stage source path.

To fetch a package from a source repository, Spack needs to know which VCS to use and where to download from. Much like with url, package authors can specify a class-level git, hg, svn, cvs, or go attribute containing the correct download location.

Many packages developed with Git have both a Git repository and release tarballs available for download. Packages can define both a class-level tarball URL and VCS. For example:

class Trilinos(CMakePackage):

    homepage = "https://trilinos.org/"
    url = "https://github.com/trilinos/Trilinos/archive/trilinos-release-12-12-1.tar.gz"
    git = "https://github.com/trilinos/Trilinos.git"

    version("develop", branch="develop")
    version("master", branch="master")
    version("12.12.1", md5="ecd4606fa332212433c98bf950a69cc7")
    version("12.10.1", md5="667333dbd7c0f031d47d7c5511fd0810")
    version("12.8.1", md5="9f37f683ee2b427b5540db8a20ed6b15")


If a package contains both a url and git class-level attribute, Spack decides which to use based on the arguments to the version() directive. Versions containing a specific branch, tag, or revision are assumed to be for VCS download methods, while versions containing a checksum are assumed to be for URL download methods.

Like url, if a specific version downloads from a different repository than the default repo, it can be overridden with a version-specific argument.

NOTE:

In order to reduce ambiguity, each package can only have a single VCS top-level attribute in addition to url. In the rare case that a package uses multiple VCS, a fetch strategy can be specified for each version. For example, the rockstar package contains:

class Rockstar(MakefilePackage):

    homepage = "https://bitbucket.org/gfcstanford/rockstar"

    version("develop", git="https://bitbucket.org/gfcstanford/rockstar.git")
    version("yt", hg="https://bitbucket.org/MatthewTurk/rockstar")




Git

Git fetching supports the following parameters to version:

  • git: URL of the git repository, if different than the class-level git.
  • branch: Name of a branch to fetch.
  • tag: Name of a tag to fetch.
  • commit: SHA hash (or prefix) of a commit to fetch.
  • submodules: Also fetch submodules recursively when checking out this repository.
  • submodules_delete: A list of submodules to forcibly delete from the repository after fetching. Useful if a version in the repository has submodules that have disappeared/are no longer accessible.
  • get_full_repo: Ensure the full git history is checked out with all remote branch information. Normally (get_full_repo=False, the default), the git option --depth 1 will be used if the version of git and the specified transport protocol support it, and --single-branch will be used if the version of git supports it.

Only one of tag, branch, or commit can be used at a time.

The destination directory for the clone is the standard stage source path.

To fetch a repository's default branch:

class Example(Package):

    git = "https://github.com/example-project/example.git"

    version("develop")


This download method is untrusted, and is not recommended. Aside from HTTPS, there is no way to verify that the repository has not been compromised, and the commit you get when you install the package likely won't be the same commit that was used when the package was first written. Additionally, the default branch may change. It is best to at least specify a branch name.

To fetch a particular branch, use the branch parameter:

version("experimental", branch="experimental")


This download method is untrusted, and is not recommended. Branches are moving targets, so the commit you get when you install the package likely won't be the same commit that was used when the package was first written.

To fetch from a particular tag, use tag instead:

version("1.0.1", tag="v1.0.1")


This download method is untrusted, and is not recommended. Although tags are generally more stable than branches, Git allows tags to be moved. Many developers use tags to denote rolling releases, and may move the tag when a bug is patched.

Finally, to fetch a particular commit, use commit:

version("2014-10-08", commit="9d38cd4e2c94c3cea97d0e2924814acc")


This doesn't have to be a full hash; you can abbreviate it as you'd expect with git:

version("2014-10-08", commit="9d38cd")


This download method is trusted. It is the recommended way to securely download from a Git repository.

It may be useful to provide a saner version for commits like this, e.g. you might use the date as the version, as done above. Or, if you know the commit at which a release was cut, you can use the release version. It's up to the package author to decide what makes the most sense. Although you can use the commit hash as the version number, this is not recommended, as it won't sort properly.

You can supply submodules=True to cause Spack to fetch submodules recursively along with the repository at fetch time.

version("1.0.1", tag="v1.0.1", submodules=True)


If a package needs more fine-grained control over submodules, define submodules to be a callable function that takes the package instance as its only argument. The function should return a list of submodules to be fetched.

def submodules(package):
    submodules = []
    if "+variant-1" in package.spec:
        submodules.append("submodule_for_variant_1")
    if "+variant-2" in package.spec:
        submodules.append("submodule_for_variant_2")
    return submodules


class MyPackage(Package):
    version("0.1.0", submodules=submodules)


For more information about git submodules see the manpage of git: man git-submodule.


GitHub

If a project is hosted on GitHub, any valid Git branch, tag, or hash may be downloaded as a tarball. This is accomplished simply by constructing an appropriate URL. Spack can checksum any package downloaded this way, thereby producing a trusted download. For example, the following downloads a particular hash, and then applies a checksum.

version("1.9.5.1.1", md5="d035e4bc704d136db79b43ab371b27d2",

url="https://www.github.com/jswhit/pyproj/tarball/0be612cc9f972e38b50a90c946a9b353e2ab140f")


Mercurial

Fetching with Mercurial works much like Git, but you use the hg parameter. The destination directory is still the standard stage source path.

Add the hg attribute with no revision passed to version:

class Example(Package):

    hg = "https://bitbucket.org/example-project/example"

    version("develop")


This download method is untrusted, and is not recommended. As with Git's default fetching strategy, there is no way to verify the integrity of the download.

To fetch a particular revision, use the revision parameter:

version("1.0", revision="v1.0")


Unlike git, which has special parameters for different types of revisions, you can use revision for branches, tags, and commits when you fetch with Mercurial. Like Git, fetching specific branches or tags is an untrusted download method, and is not recommended. The recommended fetch strategy is to specify a particular commit hash as the revision.


Subversion

To fetch with subversion, use the svn and revision parameters. The destination directory will be the standard stage source path.

Simply add an svn parameter to the package:

class Example(Package):

    svn = "https://outreach.scidac.gov/svn/example/trunk"

    version("develop")


This download method is untrusted, and is not recommended for the same reasons as mentioned above.

To fetch a particular revision, add a revision argument to the version directive:

version("develop", revision=128)


This download method is untrusted, and is not recommended.

Unfortunately, Subversion has no commit hashing scheme like Git and Mercurial do, so there is no way to guarantee that the download you get is the same as the download used when the package was created. Use at your own risk.


Subversion branches are handled as part of the directory structure, so you can check out a branch or tag by changing the URL. If you want to package multiple branches, simply add a svn argument to each version directive.

CVS

CVS (Concurrent Versions System) is an old centralized version control system. It is a predecessor of Subversion.

To fetch with CVS, use the cvs, branch, and date parameters. The destination directory will be the standard stage source path.

Simply add a cvs parameter to the package:

class Example(Package):

    cvs = ":pserver:outreach.scidac.gov/cvsroot%module=modulename"

    version("1.1.2.4")


CVS repository locations are described using an older syntax that is different from today's ubiquitous URL syntax. :pserver: denotes the transport method. CVS servers can host multiple repositories (called "modules") at the same location, and one needs to specify both the server location and the module name to access it. Spack combines both into one string using the %module=modulename suffix shown above.

This download method is untrusted.

Versions in CVS are commonly specified by date. To fetch a particular branch or date, add a branch and/or date argument to the version directive:

version("2021.4.22", branch="branchname", date="2021-04-22")


Unfortunately, CVS does not identify repository-wide commits via a revision or hash like Subversion, Git, or Mercurial do. This makes it impossible to specify an exact commit to check out.


CVS has more features, but since CVS is rarely used these days, Spack does not support all of them.

Go

Go isn't a VCS; it is a programming language with a built-in command, go get, that fetches packages and their dependencies automatically. The destination directory will be the standard stage source path.

This strategy can clone a Git repository, or download from another source location. For example:

class ThePlatinumSearcher(Package):

    homepage = "https://github.com/monochromegane/the_platinum_searcher"
    go = "github.com/monochromegane/the_platinum_searcher/..."

    version("head")


Go cannot be used to fetch a particular commit or branch; it always downloads the head of the repository. This download method is untrusted, and is not recommended. Use another fetch strategy whenever possible.


Variants

Many software packages can be configured to enable optional features, which often come at the expense of additional dependencies or longer build times. To support a wide variety of use cases, Spack lets you expose to the end user the choice of which features should be activated in a package at the time it is installed. The mechanism for this is the spack.directives.variant() directive.

Boolean variants

In their simplest form, variants are boolean options specified at the package level:

class Hdf5(AutotoolsPackage):
    ...

    variant(
        "shared", default=True, description="Builds a shared version of the library"
    )




with a default value and a description of their meaning / use in the package. Variants can be tested in any context where a spec constraint is expected. In the example above, the shared variant is tied to the build of shared dynamic libraries. To pass the right option at configure time, we can branch depending on its value:

def configure_args(self):
    ...
    if self.spec.satisfies("+shared"):
        extra_args.append("--enable-shared")
    else:
        extra_args.append("--disable-shared")
        extra_args.append("--enable-static-exec")




As explained in Variants, the constraint +shared means that the boolean variant is set to True, while ~shared means it is set to False. Another common example is the optional activation of an extra dependency, which requires using the variant in the when argument of spack.directives.depends_on():

class Hdf5(AutotoolsPackage):
    ...

    variant("szip", default=False, description="Enable szip support")
    depends_on("szip", when="+szip")



as shown in the snippet above where szip is modeled to be an optional dependency of hdf5.

Multi-valued variants

If need be, Spack can go beyond Boolean variants and permit an arbitrary number of allowed values. This might be useful when modeling options that are tightly related to each other. The values in this case are passed to the spack.directives.variant() directive as a tuple:

class Blis(Package):
    ...

    variant(
        "threads", default="none", description="Multithreading support",
        values=("pthreads", "openmp", "none"), multi=False
    )




In the example above the argument multi is set to False to indicate that only one among all the variant values can be active at any time. This constraint is enforced by the parser and an error is emitted if a user specifies two or more values at the same time:

$ spack spec blis threads=openmp,pthreads
Input spec
--------------------------------
blis threads=openmp,pthreads
Concretized
--------------------------------
==> Error: multiple values are not allowed for variant "threads"




Note also that Python's None is not allowed as a default value, so it cannot be used to denote that no feature was selected. Instead, select another value, like "none", and handle it explicitly within the package recipe if need be:

if self.spec.variants["threads"].value == "none":
    options.append("--no-threads")




In cases where multiple values can be selected at the same time, multi should be set to True:

class Gcc(AutotoolsPackage):
    ...

    variant(
        "languages", default="c,c++,fortran",
        values=("ada", "brig", "c", "c++", "fortran",
                "go", "java", "jit", "lto", "objc", "obj-c++"),
        multi=True,
        description="Compilers and runtime libraries to build"
    )




Within a package recipe a multi-valued variant is tested using a key=value syntax:

if spec.satisfies("languages=jit"):
    options.append("--enable-host-shared")




Complex validation logic for variant values

To cover complex use cases, the spack.directives.variant() directive can accept, as its values argument, a full-fledged object that embeds the default and the other arguments of the directive as attributes.

An example, already implemented in Spack's core, is spack.variant.DisjointSetsOfValues. This class is used to implement a few convenience functions, like spack.variant.any_combination_of():

class Adios(AutotoolsPackage):
    ...

    variant(
        "staging",
        values=any_combination_of("flexpath", "dataspaces"),
        description="Enable dataspaces and/or flexpath staging transports"
    )




that allows any combination of the specified values, and also allows the user to specify "none" (as a string) to choose none of them. The objects returned by these functions can be modified at will by chaining method calls to change the default value, customize the error message or other similar operations:

class Mvapich2(AutotoolsPackage):
    ...

    variant(
        "process_managers",
        description="List of the process managers to activate",
        values=disjoint_sets(
            ("auto",), ("slurm",), ("hydra", "gforker", "remshell")
        ).prohibit_empty_set().with_error(
            "'slurm' or 'auto' cannot be activated along with "
            "other process managers"
        ).with_default("auto").with_non_feature_values("auto"),
    )




Conditional Possible Values

There are cases where a variant may take multiple values, and the list of allowed values expands over time. Consider, for instance, the C++ standard with which we might compile Boost: it can take one of several possible values, with the latest standards only available from a certain version onward.

To model a similar situation we can use conditional possible values in the variant declaration:

variant(
    "cxxstd", default="98",
    values=(
        "98", "11", "14",
        # C++17 is not supported by Boost < 1.63.0.
        conditional("17", when="@1.63.0:"),
        # C++20/2a is not supported by Boost < 1.73.0.
        conditional("2a", "2b", when="@1.73.0:")
    ),
    multi=False,
    description="Use the specified C++ standard when building.",
)


The snippet above allows 98, 11 and 14 as unconditional possible values for the cxxstd variant, while 17 requires a version greater than or equal to 1.63.0 and both 2a and 2b require a version greater than or equal to 1.73.0.

Conditional Variants

The variant directive accepts a when clause. The variant will only be present on specs that otherwise satisfy the spec listed as the when clause. For example, the following class has a variant bar when it is at version 2.0 or higher.

class Foo(Package):
    ...

    variant("bar", default=False, when="@2.0:", description="help message")


The when clause follows the same syntax and accepts the same values as the when argument of spack.directives.depends_on().

Sticky Variants

The variant directive can be marked as sticky by setting the corresponding argument to True:

variant("bar", default=False, sticky=True)


A sticky variant differs from a regular one in that it is always set to either:

1.
An explicit value appearing in a spec literal or
2.
Its default value

The concretizer thus is not free to pick an alternate value to work around conflicts, but will error out instead. Setting this property on a variant is useful in cases where the variant allows some dangerous or controversial options (e.g. using unsupported versions of a compiler for a library) and the packager wants to ensure that allowing these options is done on purpose by the user, rather than automatically by the solver.

Overriding Variants

Packages may override variants for several reasons, most often to change the default from a variant defined in a parent class or to change the conditions under which a variant is present on the spec.

When a variant is defined multiple times, whether in the same package file or in a subclass and a superclass, the last definition is used for all attributes except for the when clauses. The when clauses are accumulated through all invocations, and the variant is present on the spec if any of the accumulated conditions are satisfied.

For example, consider the following package:

class Foo(Package):
    ...

    variant("bar", default=False, when="@1.0", description="help1")
    variant("bar", default=True, when="platform=darwin", description="help2")
    ...


This package foo has a variant bar when the spec satisfies either @1.0 or platform=darwin, but not for other platforms at other versions. The default for this variant, when it is present, is always True, regardless of which condition of the variant is satisfied. This allows packages to override variants in packages or build system classes from which they inherit, by modifying the variant values without modifying the when clause. It also allows a package to implement "or" semantics for a variant when clause by duplicating the variant definition.

Resources (expanding extra tarballs)

Some packages (most notably compilers) provide optional features if additional resources are expanded within their source tree before building. In Spack it is possible to describe such a need with the resource directive:

resource(
    name="cargo",
    git="https://github.com/rust-lang/cargo.git",
    tag="0.10.0",
    destination="cargo",
)




Based on the keywords present among the arguments the appropriate FetchStrategy will be used for the resource. The keyword destination is relative to the source root of the package and should point to where the resource is to be expanded.

Licensed software

In order to install licensed software, Spack needs to know a few more details about a package. The following class attributes should be defined.

license_required

Boolean. If set to True, this software requires a license. If set to False, all of the following attributes will be ignored. Defaults to False.

license_comment

String. Contains the symbol used by the license manager to denote a comment. Defaults to #.

license_files

List of strings. These are files that the software searches for when looking for a license. All file paths must be relative to the installation directory. More complex packages like Intel may require multiple licenses for individual components. Defaults to the empty list.

license_vars

List of strings. Environment variables that can be set to tell the software where to look for a license if it is not in the usual location. Defaults to the empty list.

license_url

String. A URL pointing to license setup instructions for the software. Defaults to the empty string.

For example, let's take a look at the package for the PGI compilers.

# Licensing
license_required = True
license_comment  = "#"
license_files    = ["license.dat"]
license_vars     = ["PGROUPD_LICENSE_FILE", "LM_LICENSE_FILE"]
license_url      = "http://www.pgroup.com/doc/pgiinstall.pdf"


As you can see, PGI requires a license. Its license manager, FlexNet, uses the # symbol to denote a comment. It expects the license file to be named license.dat and to be located directly in the installation prefix. If you would like the installation file to be located elsewhere, simply set PGROUPD_LICENSE_FILE or LM_LICENSE_FILE after installation. For further instructions on installation and licensing, see the URL provided.

Let's walk through a sample PGI installation to see exactly what Spack is and isn't capable of. Since PGI does not provide a download URL, it must be downloaded manually. It can either be added to a mirror or located in the current directory when spack install pgi is run. See Mirrors (mirrors.yaml) for instructions on setting up a mirror.

After running spack install pgi, the first thing that will happen is Spack will create a global license file located at $SPACK_ROOT/etc/spack/licenses/pgi/license.dat. It will then open up the file using your favorite editor. It will look like this:

# A license is required to use pgi.
#
# The recommended solution is to store your license key in this global
# license file. After installation, the following symlink(s) will be
# added to point to this file (relative to the installation prefix):
#
#   license.dat
#
# Alternatively, use one of the following environment variable(s):
#
#   PGROUPD_LICENSE_FILE
#   LM_LICENSE_FILE
#
# If you choose to store your license in a non-standard location, you may
# set one of these variable(s) to the full pathname to the license file, or
# port@host if you store your license keys on a dedicated license server.
# You will likely want to set this variable in a module file so that it
# gets loaded every time someone tries to use pgi.
#
# For further information on how to acquire a license, please refer to:
#
#   http://www.pgroup.com/doc/pgiinstall.pdf
#
# You may enter your license below.


You can add your license directly to this file, or tell FlexNet to use a license stored on a separate license server. Here is an example that points to a license server called licman1:

SERVER licman1.mcs.anl.gov 00163eb7fba5 27200
USE_SERVER


If your package requires the license to install, you can reference the location of this global license using self.global_license_file. After installation, symlinks for all of the files given in license_files will be created, pointing to this global license. If you install a different version or variant of the package, Spack will automatically detect and reuse the already existing global license.

If the software you are trying to package doesn't rely on license files, Spack will print a warning message, letting the user know that they need to set an environment variable or pointing them to installation documentation.

Patches

Depending on the host architecture, package version, known bugs, or other issues, you may need to patch your software to get it to build correctly. Like many other package systems, spack allows you to store patches alongside your package files and apply them to source code after it's downloaded.

patch

You can specify patches in your package file with the patch() directive. patch looks like this:

class Mvapich2(Package):
    ...

    patch("ad_lustre_rwcontig_open_source.patch", when="@1.9:")


The first argument can be either a URL or a filename. It specifies a patch file that should be applied to your source. If the patch you supply is a filename, then the patch needs to live within the spack source tree. For example, the patch above lives in a directory structure like this:

$SPACK_ROOT/var/spack/repos/builtin/packages/
    mvapich2/
        package.py
        ad_lustre_rwcontig_open_source.patch


If you supply a URL instead of a filename, you need to supply a sha256 checksum, like this:

patch("http://www.nwchem-sw.org/images/Tddft_mxvec20.patch",

sha256="252c0af58be3d90e5dc5e0d16658434c9efa5d20a5df6c10bf72c2d77f780866")


Spack includes the hashes of patches in its versioning information, so that the same package with different patches applied will have different hash identifiers. To ensure that the hashing scheme is consistent, you must use a sha256 checksum for the patch. Patches will be fetched from their URLs, checked, and applied to your source code. You can use the GNU utils sha256sum or the macOS shasum -a 256 commands to generate a checksum for a patch file.

Spack can also handle compressed patches. If you use these, Spack needs a little more help. Specifically, it needs two checksums: the sha256 of the patch and archive_sha256 for the compressed archive. archive_sha256 helps Spack ensure that the downloaded file is not corrupted or malicious, before running it through a tool like tar or zip. The sha256 of the patch is still required so that it can be included in specs. Providing it in the package file ensures that Spack won't have to download and decompress patches it won't end up using at install time. Both the archive and patch checksum are checked when patch archives are downloaded.

patch("http://www.nwchem-sw.org/images/Tddft_mxvec20.patch.gz",

sha256="252c0af58be3d90e5dc5e0d16658434c9efa5d20a5df6c10bf72c2d77f780866",
archive_sha256="4e8092a161ec6c3a1b5253176fcf33ce7ba23ee2ff27c75dbced589dabacd06e")


patch keyword arguments are described below.

sha256, archive_sha256

Hashes of downloaded patch and compressed archive, respectively. Only needed for patches fetched from URLs.

when

If supplied, this is a spec that tells spack when to apply the patch. If the installed package spec matches this spec, the patch will be applied. In our example above, the patch is applied when mvapich2 is at version 1.9 or higher.

level

This tells spack how to run the patch command. By default, the level is 1 and spack runs patch -p 1. If level is 2, spack will run patch -p 2, and so on.

A lot of people are confused by level, so here's a primer. If you look in your patch file, you may see something like this:

--- a/src/mpi/romio/adio/ad_lustre/ad_lustre_rwcontig.c 2013-12-10 12:05:44.806417000 -0800
+++ b/src/mpi/romio/adio/ad_lustre/ad_lustre_rwcontig.c 2013-12-10 11:53:03.295622000 -0800
@@ -8,7 +8,7 @@
  * Copyright (C) 2008 Sun Microsystems, Lustre group
 \*/
-#define _XOPEN_SOURCE 600
+//#define _XOPEN_SOURCE 600
 #include <stdlib.h>
 #include <malloc.h>
 #include "ad_lustre.h"


Lines 1-2 show paths with synthetic a/ and b/ prefixes. These are placeholders for the two mvapich2 source directories that diff compared when it created the patch file. This is git's default behavior when creating patch files, but other programs may behave differently.

-p1 strips off the first level of the prefix in both paths, allowing the patch to be applied from the root of an expanded mvapich2 archive. If you set level to 2, it would strip off src, and so on.

It's generally easier to just structure your patch file so that it applies cleanly with -p1, but if you're using a patch you didn't create yourself, level can be handy.
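
For example, to apply a patch whose paths carry one extra directory level (a sketch; the patch file name is hypothetical):

patch("fix-includes.patch", level=2)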

working_dir

This tells spack where to run the patch command. By default, the working directory is the source path of the stage (.). However, sometimes patches are made with respect to a subdirectory and this is where the working directory comes in handy. Internally, the working directory is given to patch via the -d option. Let's take the example patch from above and assume for some reason, it can only be downloaded in the following form:

--- a/romio/adio/ad_lustre/ad_lustre_rwcontig.c 2013-12-10 12:05:44.806417000 -0800
+++ b/romio/adio/ad_lustre/ad_lustre_rwcontig.c 2013-12-10 11:53:03.295622000 -0800
@@ -8,7 +8,7 @@
  * Copyright (C) 2008 Sun Microsystems, Lustre group
 \*/
-#define _XOPEN_SOURCE 600
+//#define _XOPEN_SOURCE 600
 #include <stdlib.h>
 #include <malloc.h>
 #include "ad_lustre.h"


Hence, the patch needs to be applied in the src/mpi subdirectory, and the working_dir="src/mpi" option does exactly that.
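
Concretely, the directive for this relocated patch might look like this sketch (same URL and checksum as the example above, with the hypothetical working_dir added):

patch("http://www.nwchem-sw.org/images/Tddft_mxvec20.patch",
      sha256="252c0af58be3d90e5dc5e0d16658434c9efa5d20a5df6c10bf72c2d77f780866",
      working_dir="src/mpi")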

Patch functions

In addition to supplying patch files, you can write a custom function to patch a package's source. For example, the py-pyside package contains some custom code for tweaking the way the PySide build handles RPATH:


def patch(self):
    """Undo PySide RPATH handling and add Spack RPATH."""
    # Figure out the special RPATH
    rpath = self.rpath
    rpath.append(os.path.join(python_platlib, "PySide"))

    # Fix subprocess.mswindows check for Python 3.5
    # https://github.com/pyside/pyside-setup/pull/55
    filter_file(
        "^if subprocess.mswindows:",
        'mswindows = (sys.platform == "win32")\r\nif mswindows:',
        "popenasync.py",
    )
    filter_file("^ if subprocess.mswindows:", " if mswindows:", "popenasync.py")

    # Remove check for python version because the above patch adds support for newer versions
    filter_file("^check_allowed_python_version()", "", "setup.py")

    # Add Spack's standard CMake args to the sub-builds.
    # They're called BY setup.py so we have to patch it.
    filter_file(
        r"OPTION_CMAKE,",
        r"OPTION_CMAKE, "
        + (
            '"-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE", '
            '"-DCMAKE_INSTALL_RPATH=%s",' % ":".join(rpath)
        ),
        "setup.py",
    )

    # PySide tries to patch ELF files to remove RPATHs
    # Disable this and go with the one we set.
    if self.spec.satisfies("@1.2.4:"):
        rpath_file = "setup.py"
    else:
        rpath_file = "pyside_postinstall.py"

    filter_file(r"(^\s*)(rpath_cmd\(.*\))", r"\1#\2", rpath_file)


A patch function, if present, will be run after patch files are applied and before install() is run.

You could put this logic in install(), but putting it in a patch function gives you some benefits. First, spack ensures that the patch() function is run once per code checkout. That means that if you run install, hit ctrl-C, and run install again, the code in the patch function is only run once. Also, you can tell Spack to run only the patching part of the build using the spack patch command.

Dependency patching

So far we've covered how the patch directive can be used by a package to patch its own source code. Packages can also specify patches to be applied to their dependencies, if they require special modifications. As with all packages in Spack, a patched dependency library can coexist with other versions of that library. See the section on depends_on for more details.

Inspecting patches

If you want to better understand the patches that Spack applies to your packages, you can do that using spack spec, spack find, and other query commands. Let's look at m4. If you run spack spec m4, you can see the patches that would be applied to m4:

$ spack spec m4
Input spec
--------------------------------
m4
Concretized
--------------------------------
m4@1.4.18%apple-clang@9.0.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=darwin-highsierra-x86_64
    ^libsigsegv@2.11%apple-clang@9.0.0 arch=darwin-highsierra-x86_64


You can also see patches that have been applied to installed packages with spack find -v:

$ spack find -v m4
==> 1 installed package
-- darwin-highsierra-x86_64 / apple-clang@9.0.0 -----------------
m4@1.4.18 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv


In both cases above, you can see that the patches' sha256 hashes are stored on the spec as a variant. As mentioned above, this means that you can have multiple, differently-patched versions of a package installed at once.

You can look up a patch by its sha256 hash (or a short version of it) using the spack resource show command:

$ spack resource show 3877ab54
3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00

path: /home/spackuser/src/spack/var/spack/repos/builtin/packages/m4/gnulib-pgi.patch
applies to: builtin.m4


spack resource show looks up downloadable resources from package files by hash and prints out information about them. Above, we see that the 3877ab54 patch applies to the m4 package. The output also tells us where to find the patch.

Things get more interesting if you want to know about dependency patches. For example, when dealii is built with boost@1.68.0, it has to patch boost to work correctly. If you didn't know this, you might wonder where the extra boost patches are coming from:

$ spack spec dealii ^boost@1.68.0 ^hdf5+fortran | grep "\^boost"
    ^boost@1.68.0
        ^boost@1.68.0%apple-clang@9.0.0+atomic+chrono~clanglibcpp cxxstd=default +date_time~debug+exception+filesystem+graph~icu+iostreams+locale+log+math~mpi+multithreaded~numpy patches=2ab6c72d03dec6a4ae20220a9dfd5c8c572c5294252155b85c6874d97c323199,b37164268f34f7133cbc9a4066ae98fda08adf51e1172223f6a969909216870f ~pic+program_options~python+random+regex+serialization+shared+signals~singlethreaded+system~taggedlayout+test+thread+timer~versionedlayout+wave arch=darwin-highsierra-x86_64

$ spack resource show b37164268
b37164268f34f7133cbc9a4066ae98fda08adf51e1172223f6a969909216870f
    path: /home/spackuser/src/spack/var/spack/repos/builtin/packages/dealii/boost_1.68.0.patch
    applies to: builtin.boost
    patched by: builtin.dealii


Here you can see that the patch is applied to boost by dealii, and that it lives in dealii's directory in Spack's builtin package repository.

Handling RPATHs

Spack installs each package in a way that ensures that all of its dependencies are found when it runs. It does this using RPATHs. An RPATH is a search path, stored in a binary (an executable or library), that tells the dynamic loader where to find its dependencies at runtime. You may be familiar with LD_LIBRARY_PATH on Linux or DYLD_LIBRARY_PATH on Mac OS X. RPATH is similar to these paths, in that it tells the loader where to find libraries. Unlike them, it is embedded in the binary and not set in each user's environment.

RPATHs in Spack are handled in one of three ways:

1. For most packages, RPATHs are handled automatically using Spack's compiler wrappers. These wrappers are set in standard variables like CC, CXX, F77, and FC, so most build systems (autotools and many gmake systems) pick them up and use them.

2. CMake also respects Spack's compiler wrappers, but many CMake builds have logic to overwrite RPATHs when binaries are installed. Spack provides the std_cmake_args variable, which includes the parameters necessary for a CMake build to use the right installation RPATH. It can be used like this when cmake is invoked:

class MyPackage(Package):
    ...

    def install(self, spec, prefix):
        cmake("..", *std_cmake_args)
        make()
        make("install")


3. If you need to modify the build to add your own RPATHs, you can use the self.rpath property of your package, which will return a list of all the RPATHs that Spack will use when it links. You can see how this is used in the PySide example above and in the sketch below.
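As a minimal sketch, a package could hand these RPATHs to a build system that ignores the wrappers, here via conventional linker flags passed to configure (the recipe itself is illustrative, not a drop-in implementation):

def install(self, spec, prefix):
    # self.rpath holds every RPATH Spack would use when linking;
    # pass it explicitly so the installed binaries keep it
    rpath_flags = " ".join("-Wl,-rpath,{0}".format(p) for p in self.rpath)
    configure("--prefix={0}".format(prefix), "LDFLAGS={0}".format(rpath_flags))
    make()
    make("install")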

Parallel builds

Spack supports parallel builds both within an individual package and across packages at the installation level. Package-level parallelism is established by the --jobs option and its configuration and package recipe equivalents. Installation-level parallelism is driven by the DAG(s) of the requested package or packages.

Package-level build parallelism

By default, Spack will invoke make(), or any other similar tool, with a -j <njobs> argument, so those builds run in parallel. The parallelism is determined by the value of the build_jobs entry in config.yaml (see the documentation on config.yaml for more details on how this value is computed).

If a package does not build properly in parallel, you can override this setting by adding parallel = False to your package. For example, OpenSSL's build does not work in parallel, so its package looks like this:

class Openssl(Package):
    homepage = "http://www.openssl.org"
    url = "http://www.openssl.org/source/openssl-1.0.1h.tar.gz"

    version("1.0.1h", md5="8d6d684a9430d5cc98a62a5d8fbda8cf")
    depends_on("zlib-api")

    parallel = False


Similarly, you can disable parallel builds only for specific make commands, as libelf does:

class Libelf(Package):
    ...

    def install(self, spec, prefix):
        configure("--prefix=" + prefix,
                  "--enable-shared",
                  "--disable-dependency-tracking",
                  "--disable-debug")
        make()

        # The mkdir commands in libelf's install can fail in parallel
        make("install", parallel=False)


The first make will run in parallel here, but the second will not. If you set parallel to False at the package level, then each call to make() will be sequential by default, but packagers can call make(parallel=True) to override it.
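Put together, the flipped default looks like this minimal sketch (hypothetical package):

class Example(Package):
    # Every make() in this package is sequential unless told otherwise
    parallel = False

    def install(self, spec, prefix):
        make(parallel=True)  # explicitly opt back in for the build
        make("install")      # sequential, following the package default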

Install-level build parallelism

Spack supports the concurrent installation of packages within a Spack instance across multiple processes using file system locks. This parallelism is separate from the package-level parallelism achieved through build systems' use of the -j <njobs> option. With install-level parallelism, processes coordinate the installation of the dependencies of specs provided on the command line and as part of an environment build, with only one process being allowed to install a given package at a time. Refer to Dependencies for more information on dependencies and Installing an Environment for how to install an environment.

Concurrent processes may be any combination of interactive sessions and batch jobs. This means that a spack install can be running in a terminal window while a batch job runs spack install on the same or overlapping dependencies, without either process re-doing the work of the other.

For example, if you are using SLURM, you could launch an installation of mpich using the following command:

$ srun -N 2 -n 8 spack install -j 4 mpich@3.3.2


This will create eight concurrent, four-job installs on two different nodes.

Alternatively, you could run the same installs on one node by entering the following at the command line of a bash shell:

$ for i in {1..12}; do nohup spack install -j 4 mpich@3.3.2 >> mpich_install.txt 2>&1 & done


NOTE:

The effective parallelism is based on the maximum number of packages that can be installed at the same time, which is limited by the number of packages with no (remaining) uninstalled dependencies.


Dependencies

We've covered how to build a simple package, but what if one package relies on another package to build? How do you express that in a package file? And how do you refer to the other package in the build script for your own package?

Spack makes this relatively easy. Let's take a look at the libdwarf package to see how it's done:

class Libdwarf(Package):
    homepage = "http://www.prevanders.net/dwarf.html"
    url = "http://www.prevanders.net/libdwarf-20130729.tar.gz"
    list_url = homepage

    version("20130729", md5="4cc5e48693f7b93b7aa0261e63c0e21d")
    ...
    depends_on("libelf")

    def install(self, spec, prefix):
        ...


depends_on()

The depends_on("libelf") call tells Spack that it needs to build and install the libelf package before it builds libdwarf. This means that in your install() method, you are guaranteed that libelf has been built and installed successfully, so you can rely on it for your libdwarf build.
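For example, an install() method can look up the dependency's spec to locate its prefix (a sketch; the --with-libelf flag is hypothetical):

def install(self, spec, prefix):
    # libelf is guaranteed to be installed before this runs, so we
    # can point the build at its prefix
    configure("--prefix=" + prefix,
              "--with-libelf=" + spec["libelf"].prefix)
    make()
    make("install")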

Dependency specs

depends_on doesn't just take the name of another package. It can take a full spec as well. This means that you can restrict the versions or other configuration options of libelf that libdwarf will build with. For example, suppose that in the libdwarf package you write:

depends_on("libelf@0.8")


Now libdwarf will require libelf at version 0.8 (i.e., any 0.8.x release). You can also specify a requirement for a particular variant or for specific compiler flags:

depends_on("libelf@0.8+debug")
depends_on("libelf debug=True")
depends_on("libelf cppflags='-fPIC'")


Both users and package authors can use the same spec syntax to refer to different package configurations. Users use the spec syntax on the command line to find installed packages or to install packages with particular constraints, and package authors can use specs to describe relationships between packages.

Version ranges

Although some packages require a specific version for their dependencies, most can be built with a range of versions. For example, if you are writing a package for a legacy Python module that only works with Python 2.4 through 2.6, this would look like:

depends_on("python@2.4:2.6")


Version ranges in Spack are inclusive, so 2.4:2.6 means any version greater than or equal to 2.4 and up to and including any 2.6.x. If you want to specify that a package works with any version of Python 3 (or higher), this would look like:

depends_on("python@3:")


Here we leave out the upper bound. If you want to say that a package requires Python 2, you can similarly leave out the lower bound:

depends_on("python@:2")


Notice that we didn't use @:3. Version ranges are inclusive, so @:3 means "up to and including any 3.x version".

You can also simply write

depends_on("python@2.7")


to tell Spack that the package needs Python 2.7.x. This is equivalent to @2.7:2.7.

In very rare cases, you may need to specify an exact version, for example if you need to distinguish between 3.2 and 3.2.1:

depends_on("pkg@=3.2")


But in general, you should try to use version ranges as much as possible, so that custom suffixes are included too. The above example can be rewritten in terms of ranges as follows:

depends_on("pkg@3.2:3.2.0")


A spec can contain a version list of ranges and individual versions separated by commas. For example, if you need Boost 1.59.0 or newer, but there are known issues with 1.64.0, 1.65.0, and 1.66.0, you can say:

depends_on("boost@1.59.0:1.63,1.65.1,1.67.0:")


Dependency types

Not all dependencies are created equal, and Spack allows you to specify exactly what kind of a dependency you need. For example:

depends_on("cmake", type="build")
depends_on("py-numpy", type=("build", "run"))
depends_on("libelf", type=("build", "link"))
depends_on("py-pytest", type="test")


The following dependency types are available:

  • "build": the dependency will be added to the PATH and PYTHONPATH at build-time.
  • "link": the dependency will be added to Spack's compiler wrappers, automatically injecting the appropriate linker flags, including -L, -l, and RPATH/RUNPATH handling.
  • "run": the dependency will be added to the PATH and PYTHONPATH at run-time. This is true for both spack load and the module files Spack writes.
  • "test": the dependency will be added to the PATH and PYTHONPATH at build-time. The only difference between "build" and "test" is that test dependencies are only built if the user requests unit tests with spack install --test.

One of the advantages of the build dependency type is that although the dependency needs to be installed in order for the package to be built, it can be uninstalled without concern afterwards. link and run disallow this because uninstalling the dependency would break the package.

build, link, and run dependencies all affect the hash of Spack packages (along with sha256 sums of patches and archives used to build the package, and a canonical hash of the package.py recipes). test dependencies do not affect the package hash, as they are only used to construct a test environment after building and installing a given package installation. Older versions of Spack did not include build dependencies in the hash, but this has been fixed as of Spack v0.18.

If the dependency type is not specified, Spack uses a default of ("build", "link"). This is the common case for compiled languages. Non-compiled packages like Python modules commonly use ("build", "run"). This means that the compiler wrappers don't need to inject the dependency's prefix/lib directory, but the package needs to be in PATH and PYTHONPATH during the build process and later when a user wants to run the package.
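For instance (a sketch; the package names are only illustrative):

# A compiled library: the default type=("build", "link") is what we want
depends_on("zlib")

# A pure-Python tool: needed on PATH and PYTHONPATH at build time and
# run time, but never linked against
depends_on("py-requests", type=("build", "run"))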

Conditional dependencies

You may have a package that only requires a dependency under certain conditions. For example, you may have a package with optional MPI support. You would then provide a variant to reflect that the feature is optional and specify the MPI dependency only applies when MPI support is enabled. In that case, you could say something like:

variant("mpi", default=False, description="Enable MPI support")
depends_on("mpi", when="+mpi")


Suppose that, since version 3, the package above also has optional Trilinos support, and you want both features to build either with or without MPI. Further suppose you require a version of Trilinos no older than 12.6. In that case, the trilinos variant and dependency directives would be:

variant("trilinos", default=False, description="Enable Trilinos support")
depends_on("trilinos@12.6:", when="@3: +trilinos")
depends_on("trilinos@12.6: +mpi", when="@3: +trilinos +mpi")


Alternatively, you could use the when context manager to equivalently specify the trilinos variant dependencies as follows:

with when("@3: +trilinos"):
    depends_on("trilinos@12.6:")
    depends_on("trilinos +mpi", when="+mpi")


The argument to when in either case can include any Spec constraints that are supported on the command line using the same syntax.

NOTE:

If a dependency isn't typically used, you can save time by making it conditional since Spack will not build the dependency unless it is required for the Spec.


Dependency patching

Some packages maintain special patches on their dependencies, either to add new features or to fix bugs. This typically makes a package harder to maintain, and we encourage developers to upstream (contribute back) their changes rather than maintaining patches. However, in some cases it's not possible to upstream. Maybe the dependency's developers don't accept changes, or maybe they just haven't had time to integrate them.

For times like these, Spack's depends_on directive can optionally take a patch or list of patches:

class SpecialTool(Package):
    ...
    depends_on("binutils", patches="special-binutils-feature.patch")
    ...


Here, the special-tool package requires a special feature in binutils, so it provides an extra patches=<filename> keyword argument. This is similar to the patch directive, with one small difference. Here, special-tool is responsible for the patch, so it should live in special-tool's directory in the package repository, not the binutils directory.

If you need something more sophisticated than this, you can simply nest a patch() directive inside of depends_on:

class SpecialTool(Package):
    ...
    depends_on(
        "binutils",
        patches=patch("special-binutils-feature.patch",
                      level=3,
                      when="@:1.3"),  # condition on binutils
        when="@2.0:")                 # condition on special-tool
    ...


Note that there are two optional when conditions here -- one on the patch directive and the other on depends_on. The condition in the patch directive applies to binutils (the package being patched), while the condition in depends_on applies to special-tool. See patch directive for details on all the arguments the patch directive can take.

Finally, if you need multiple patches on a dependency, you can provide a list for patches, e.g.:

class SpecialTool(Package):
    ...
    depends_on(
        "binutils",
        patches=[
            "binutils-bugfix1.patch",
            "binutils-bugfix2.patch",
            patch("https://example.com/special-binutils-feature.patch",
                  sha256="252c0af58be3d90e5dc5e0d16658434c9efa5d20a5df6c10bf72c2d77f780866",
                  when="@:1.3")],
        when="@2.0:")
    ...


As with patch directives, patches are applied in the order they appear in the package file (or in this case, in the list).

NOTE:

You may wonder whether dependency patching will interfere with other packages that depend on binutils. It won't.

As described in the patching section above, patching a package adds the sha256 of the patch to the package's spec, which means it will have a different unique hash than other versions without the patch. The patched version coexists with unpatched versions, and Spack's RPATH handling guarantees that each installation finds the right version. If two packages depend on binutils patched the same way, they can both use a single installation of binutils.



Conflicts and requirements

Sometimes packages have known bugs or limitations that prevent them from building, e.g., against certain dependencies or with certain compilers. Spack makes it possible to express such constraints with the conflicts directive.

Adding the following to a package:

conflicts(
    "%intel",
    when="@:1.2",
    msg="<myNicePackage> <= v1.2 cannot be built with Intel ICC, "
        "please use a newer release.")


we express the fact that the current package cannot be built with the Intel compiler when we are trying to install a version "<=1.2".

The when argument can be omitted, in which case the conflict will always be active.

An optional custom error message can be added via the msg= parameter, and will be printed by Spack in case the conflict cannot be avoided and leads to a concretization error.

Sometimes, packages work only with very specific choices and cannot use any others. In those cases the requires directive can be used:

requires(
    "%apple-clang",
    when="platform=darwin",
    msg="<myNicePackage> builds only with Apple-Clang on Darwin")


In the example above, our package can only be built with Apple-Clang on Darwin. The requires directive is effectively the opposite of the conflicts directive, and takes the same optional when and msg arguments.

If a package needs to express more complex requirements, involving more than a single spec, that can also be done using the requires directive. To express that a package can be built either with GCC or with Clang we can write:

requires(
    "%gcc", "%clang",
    policy="one_of",
    msg="<myNicePackage> builds only with GCC or Clang")


When using multiple specs in a requires directive, it is advised to set the policy= argument explicitly. That argument can take either the value any_of or the value one_of, and the semantics are the same as for Package Requirements.

Extensions

Spack's support for package extensions is documented extensively in Extensions & Python support. This section documents how to make your own extendable packages and extensions.

To support extensions, a package needs to set its extendable property to True, e.g.:

class Python(Package):
    ...
    extendable = True
    ...


To make a package into an extension, simply add an extends call in the package definition, and pass it the name of an extendable package:

class PyNumpy(Package):
    ...
    extends("python")
    ...


This accomplishes a few things. Firstly, the Python package can set special variables such as PYTHONPATH for all extensions when the run or build environment is set up. Secondly, filesystem views can ensure that extensions are put in the same prefix as their extendee. This ensures that Python in a view can always locate its Python packages, even without environment variables set.

A package can only extend one other package at a time. To support packages that may extend one of a list of other packages, Spack supports multiple extends directives as long as at most one of them is selected as a dependency during concretization. For example, a Lua package could extend either lua or lua-luajit, but not both:

class LuaLpeg(Package):
    ...
    variant("use_lua", default=True)
    extends("lua", when="+use_lua")
    extends("lua-luajit", when="~use_lua")
    ...


Now, a user can install, and activate, the lua-lpeg package for either lua or luajit.

Adding additional constraints

Some packages produce a Python extension, but are only compatible with Python 3, or with Python 2. In those cases, a depends_on() declaration should be made in addition to the extends() declaration:

class Icebin(Package):
    extends("python", when="+python")
    depends_on("python@3:", when="+python")


Many packages produce Python extensions for some variants, but not others: they should extend python only if the appropriate variant(s) are selected. This may be accomplished with conditional extends() declarations:

class FooLib(Package):
    variant("python", default=True, description="Build the Python extension Module")
    extends("python", when="+python")
    ...


Runtime and build time environment variables

Spack provides a few methods to help package authors set up the required environment variables for their package. Environment variables typically depend on how the package is used: variables that make sense during the build phase may not be needed at runtime, and vice versa. Further, sometimes it makes sense to let a dependency set the environment variables for its dependents. To allow all this, Spack provides four different methods that can be overridden in a package:

1. setup_build_environment
2. setup_run_environment
3. setup_dependent_build_environment
4. setup_dependent_run_environment

The Qt package, for instance, uses this call:


def setup_dependent_build_environment(self, env, dependent_spec):
    env.set("QTDIR", self.prefix)
    env.set("QTINC", self.prefix.inc)
    env.set("QTLIB", self.prefix.lib)
    env.prepend_path("QT_PLUGIN_PATH", self.prefix.plugins)


to set QTDIR and related environment variables so that packages that depend on a particular Qt installation will find it.

The following diagram will give you an idea when each of these methods is called in a build context: [image]

Notice that setup_dependent_run_environment can be called multiple times, once for each dependent package, whereas setup_run_environment is called only once for the package itself. This means that the former should only be used if the environment variables depend on the dependent package, whereas the latter should be used if the environment variables depend only on the package itself.
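A minimal sketch of the distinction (the FOO_* variables are hypothetical):

def setup_run_environment(self, env):
    # Depends only on this package: called once
    env.set("FOO_HOME", self.prefix)

def setup_dependent_run_environment(self, env, dependent_spec):
    # May vary with the dependent: called once per dependent package
    env.prepend_path("FOO_PLUGIN_PATH", dependent_spec.prefix.plugins)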

Setting package module variables

Apart from modifying environment variables of the dependent package, you can also define Python variables to be used by the dependent. This is done by implementing setup_dependent_package. An example of this can be found in the Python package:


def setup_dependent_package(self, module, dependent_spec):
    """Called before python modules' install() methods."""
    module.python = self.command
    module.python_include = join_path(dependent_spec.prefix, self.include)
    module.python_platlib = join_path(dependent_spec.prefix, self.platlib)
    module.python_purelib = join_path(dependent_spec.prefix, self.purelib)

    # Make the site packages directory for extensions
    if dependent_spec.package.is_extension:
        mkdirp(module.python_platlib)
        mkdirp(module.python_purelib)


This allows Python packages to directly use these variables:

def install(self, spec, prefix):
    ...
    install("script.py", python_platlib)


NOTE:

We recommend using setup_dependent_package sparingly, as it is not always clear where global variables are coming from when editing a package.py file.


Views

The spack view command can be used to symlink a number of packages into a merged prefix. The methods of PackageViewMixin can be overridden to customize how packages are added to views. Generally this can be used to create copies of specific files rather than symlinking them, when symlinking does not work. For example, Python overrides add_files_to_view in order to create a copy of the python binary, since the real path of the Python executable is used to detect extensions; as a consequence, Python extension packages (those inheriting from PythonPackage) likewise override add_files_to_view in order to rewrite shebang lines which point to the Python interpreter.

Virtual dependencies

In some cases, more than one package can satisfy another package's dependency. One way this can happen is if a package depends on a particular interface, but there are multiple implementations of the interface, and the package could be built with any of them. A very common interface in HPC is the Message Passing Interface (MPI), which is used in many large-scale parallel applications.

MPI has several different implementations (e.g., MPICH, OpenMPI, and MVAPICH) and scientific applications can be built with any one of them. Complicating matters, MPI does not have a standardized ABI, so a package built with one implementation cannot simply be relinked with another implementation. Many package managers handle interfaces like this by requiring many similar package files, e.g., foo, foo-mvapich, foo-mpich, but Spack avoids this explosion of package files by providing support for virtual dependencies.

provides

In Spack, mpi is handled as a virtual package. A package like mpileaks can depend on it just like any other package, by supplying a depends_on call in the package definition. For example:

class Mpileaks(Package):
    homepage = "https://github.com/hpc/mpileaks"
    url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"

    version("1.0", md5="8838c574b39202a57d7c2d68692718aa")

    depends_on("mpi")
    depends_on("adept-utils")
    depends_on("callpath")


Here, callpath and adept-utils are concrete packages, but there is no actual package file for mpi, so we say it is a virtual package. The syntax of depends_on is the same for both. If we look inside the package file of an MPI implementation, say MPICH, we'll see something like this:

class Mpich(Package):
    provides("mpi")
    ...


The provides("mpi") call tells Spack that the mpich package can be used to satisfy the dependency of any package that depends_on("mpi").

Providing multiple virtuals simultaneously

Packages can provide more than one virtual dependency. Sometimes, due to implementation details, there are subsets of those virtuals that need to be provided together by the same package.

A well-known example is openblas, which provides both the lapack and blas API in a single libopenblas library. A package that needs lapack and blas must either use openblas to provide both, or not use openblas at all. It cannot pick one or the other.

To express this constraint in a package, the two virtual dependencies must be listed in the same provides directive:

provides("blas", "lapack")


This makes it impossible to select openblas as a provider for one of the two virtual dependencies and not for the other. If you try to, Spack will report an error:

$ spack spec netlib-scalapack  ^[virtuals=lapack] openblas ^[virtuals=blas] atlas
==> Error: concretization failed for the following reasons:

1. Package 'openblas' needs to provide both 'lapack' and 'blas' together, but provides only 'lapack'


Versioned Interfaces

Just as you can pass a spec to depends_on, so can you pass a spec to provides to add constraints. This allows Spack to support the notion of versioned interfaces. The MPI standard has gone through many revisions, each with new functions added, and each revision of the standard has a version number. Some packages may require a recent implementation that supports MPI-3 functions, but some MPI versions may only provide up to MPI-2. Others may need MPI 2.1 or higher. You can indicate this by adding a version constraint to the spec passed to provides:

provides("mpi@:2")


Suppose that the above provides call is in the mpich2 package. This says that mpich2 provides MPI support up to version 2, but if a package depends_on("mpi@3"), then Spack will not build that package with mpich2.

provides when

The same package may provide different versions of an interface depending on its version. Above, we simplified the provides call in mpich to make the explanation easier. In reality, this is how mpich calls provides:

provides("mpi@:3", when="@3:")
provides("mpi@:1", when="@1:")


The when argument to provides allows you to specify optional constraints on the providing package, or the provider. The provider only provides the declared virtual spec when it matches the constraints in the when clause. Here, when mpich is at version 3 or higher, it provides MPI up to version 3. When mpich is at version 1 or higher, it provides MPI up to version 1.

The when qualifier ensures that Spack selects a suitably high version of mpich to satisfy some other package that depends_on a particular version of MPI. It will also prevent a user from building with too low a version of mpich. For example, suppose the package foo declares this:

class Foo(Package):
    ...
    depends_on("mpi@2")


Suppose a user invokes spack install like this:

$ spack install foo ^mpich@1.0


Spack will fail with a constraint violation, because the version of MPICH requested is too low for the mpi requirement in foo.

Custom attributes

Often a package will need to provide attributes for dependents to query various details about what it provides. A package can implement any number of custom attributes, but the four attributes described below are always available on every package. Each has a default implementation, which can be replaced with an alternate implementation, including separate implementations for the virtual packages a package provides:

Attribute  Purpose                                      Default
home       The installation path for the package        spec.prefix
command    An executable command for the package        spec.name, found in .home.bin
headers    A list of headers provided by the package    All headers searched recursively in .home.include
libs       A list of libraries provided by the package  lib{spec.name} searched recursively in .home, starting with lib and lib64, then the rest of .home

Each of these can be customized by implementing the relevant attribute as a @property in the package's class:

class Foo(Package):
    ...
    @property
    def libs(self):
        # The library provided by Foo is libMyFoo.so
        return find_libraries("libMyFoo", root=self.home, recursive=True)


A package may also provide a custom implementation of each attribute for the virtual packages it provides by implementing the virtualpackagename_attributename property in the package's class. The implementation used is the first one found from:

1. Specialized virtual: Package.virtualpackagename_attributename
2. Generic package: Package.attributename
3. Default

The use of customized attributes is demonstrated in the next example.

Example: Customized attributes for virtual packages

Consider a package foo that can optionally provide two virtual packages bar and baz. When both are enabled the installation tree appears as follows:

include/foo.h
include/bar/bar.h
lib64/libFoo.so
lib64/libFooBar.so
baz/include/baz/baz.h
baz/lib/libFooBaz.so


The install tree shows that foo is providing the header include/foo.h and library lib64/libFoo.so in its install prefix. The virtual package bar is providing include/bar/bar.h and library lib64/libFooBar.so, also in foo's install prefix. The baz package, however, is provided in the baz subdirectory of foo's prefix with the include/baz/baz.h header and lib/libFooBaz.so library. Such a package could implement the optional attributes as follows:

class Foo(Package):
    ...
    variant("bar", default=False, description="Enable the Foo implementation of bar")
    variant("baz", default=False, description="Enable the Foo implementation of baz")
    ...
    provides("bar", when="+bar")
    provides("baz", when="+baz")
    ...

    # Just the foo headers
    @property
    def headers(self):
        return find_headers("foo", root=self.home.include, recursive=False)

    # Just the foo libraries
    @property
    def libs(self):
        return find_libraries("libFoo", root=self.home, recursive=True)

    # The header provided by the bar virtual package
    @property
    def bar_headers(self):
        return find_headers("bar/bar.h", root=self.home.include, recursive=False)

    # The library provided by the bar virtual package
    @property
    def bar_libs(self):
        return find_libraries("libFooBar", root=self.home, recursive=True)

    # The baz virtual package home
    @property
    def baz_home(self):
        return self.prefix.baz

    # The header provided by the baz virtual package
    @property
    def baz_headers(self):
        return find_headers("baz/baz", root=self.baz_home.include, recursive=False)

    # The library provided by the baz virtual package
    @property
    def baz_libs(self):
        return find_libraries("libFooBaz", root=self.baz_home, recursive=True)


Now consider another package, foo-app, depending on all three:

class FooApp(CMakePackage):
    ...
    depends_on("foo")
    depends_on("bar")
    depends_on("baz")


The resulting spec objects for its dependencies show the result of the above attribute implementations:

# The core headers and libraries of the foo package
>>> spec["foo"]
foo@1.0%gcc@11.3.1+bar+baz arch=linux-fedora35-haswell
>>> spec["foo"].prefix
"/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6"

# home defaults to the package install prefix without an explicit implementation
>>> spec["foo"].home
"/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6"

# foo headers from the foo prefix
>>> spec["foo"].headers
HeaderList([
    "/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/include/foo.h",
])

# foo include directories from the foo prefix
>>> spec["foo"].headers.directories
["/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/include"]

# foo libraries from the foo prefix
>>> spec["foo"].libs
LibraryList([
    "/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/lib64/libFoo.so",
])

# foo library directories from the foo prefix
>>> spec["foo"].libs.directories
["/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/lib64"]


# The virtual bar package in the same prefix as foo
# bar resolves to the foo package
>>> spec["bar"]
foo@1.0%gcc@11.3.1+bar+baz arch=linux-fedora35-haswell
>>> spec["bar"].prefix
"/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6"

# home defaults to the foo prefix without either a Foo.bar_home
# or Foo.home implementation
>>> spec["bar"].home
"/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6"

# bar header in the foo prefix
>>> spec["bar"].headers
HeaderList([
    "/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/include/bar/bar.h",
])

# bar include dirs from the foo prefix
>>> spec["bar"].headers.directories
["/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/include"]

# bar library from the foo prefix
>>> spec["bar"].libs
LibraryList([
    "/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/lib64/libFooBar.so",
])

# bar library directories from the foo prefix
>>> spec["bar"].libs.directories
["/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/lib64"]


# The virtual baz package in a subdirectory of foo's prefix
# baz resolves to the foo package
>>> spec["baz"]
foo@1.0%gcc@11.3.1+bar+baz arch=linux-fedora35-haswell
>>> spec["baz"].prefix
"/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6"

# baz_home implementation provides the subdirectory inside the foo prefix
>>> spec["baz"].home
"/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/baz"

# baz headers in the baz subdirectory of the foo prefix
>>> spec["baz"].headers
HeaderList([
    "/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/baz/include/baz/baz.h",
])

# baz include directories in the baz subdirectory of the foo prefix
>>> spec["baz"].headers.directories
["/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/baz/include"]

# baz libraries in the baz subdirectory of the foo prefix
>>> spec["baz"].libs
LibraryList([
    "/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/baz/lib/libFooBaz.so",
])

# baz library directories in the baz subdirectory of the foo prefix
>>> spec["baz"].libs.directories
["/opt/spack/linux-fedora35-haswell/gcc-11.3.1/foo-1.0-ca3rczp5omy7dfzoqw4p7oc2yh3u7lt6/baz/lib"]


Abstract & concrete specs

Now that we've seen how spec constraints can be specified on the command line and within package definitions, we can talk about how Spack puts all of this information together. When you run this:

$ spack install mpileaks ^callpath@1.0+debug ^libelf@0.8.11


Spack parses the command line and builds a spec from the description. The spec says that mpileaks should be built with the callpath library at 1.0 and with the debug option enabled, and with libelf version 0.8.11. Spack will also look at the depends_on calls in all of these packages, and it will build a spec from that. The specs from the command line and the specs built from package descriptions are then combined, and the constraints are checked against each other to make sure they're satisfiable.

What we have after this is done is called an abstract spec. An abstract spec is partially specified. In other words, it could describe more than one build of a package. Spack does this to make things easier on the user: they should only have to specify as much of the package spec as they care about. Here's an example partial spec DAG, based on the constraints above:

mpileaks
    ^callpath@1.0+debug
        ^dyninst
            ^libdwarf
                ^libelf@0.8.11
        ^mpi


[graph]

This diagram shows a spec DAG output as a tree, where successive levels of indentation represent a depends-on relationship. In the above DAG, we can see some packages annotated with their constraints, and some packages with no annotations at all. When there are no annotations, it means the user doesn't care what configuration of that package is built, just so long as it works.

Concretization

An abstract spec is useful for the user, but you can't install an abstract spec. Spack has to take the abstract spec and "fill in" the remaining unspecified parts in order to install. This process is called concretization. Concretization happens in between the time the user runs spack install and the time the install() method is called. The concretized version of the spec above might look like this:

mpileaks@2.3%gcc@4.7.3 arch=linux-debian7-x86_64
    ^callpath@1.0%gcc@4.7.3+debug arch=linux-debian7-x86_64
        ^dyninst@8.1.2%gcc@4.7.3 arch=linux-debian7-x86_64
            ^libdwarf@20130729%gcc@4.7.3 arch=linux-debian7-x86_64
                ^libelf@0.8.11%gcc@4.7.3 arch=linux-debian7-x86_64
        ^mpich@3.0.4%gcc@4.7.3 arch=linux-debian7-x86_64


[graph]

Here, all versions, compilers, and platforms are filled in, and there is a single version (no version ranges) for each package. All decisions about configuration have been made, and only after this point will Spack call the install() method for your package.

Concretization in Spack is based on certain selection policies that tell Spack how to select, e.g., a version, when one is not specified explicitly. Concretization policies are discussed in more detail in Configuration Files. Sites using Spack can customize them to match the preferences of their own users.

spack spec

For an arbitrary spec, you can see the result of concretization by running spack spec. For example:

$ spack spec dyninst@8.0.1
Input spec
--------------------------------
dyninst@8.0.1
    ^libdwarf
        ^libelf

Concretized
--------------------------------
dyninst@8.0.1%gcc@4.7.3 arch=linux-debian7-x86_64
    ^libdwarf@20130729%gcc@4.7.3 arch=linux-debian7-x86_64
        ^libelf@0.8.13%gcc@4.7.3 arch=linux-debian7-x86_64


This is useful when you want to know exactly what Spack will do when you ask for a particular spec.

Concretization Policies

A user may have certain preferences for how packages should be concretized on their system. For example, one user may prefer packages built with OpenMPI and the Intel compiler. Another user may prefer packages be built with MVAPICH and GCC.

See the Package Preferences section for more details.

Common when= constraints

In case a package needs many directives to share the whole when= argument, or just part of it, Spack allows you to group the common part under a context manager:

class Gcc(AutotoolsPackage):
    with when("+nvptx"):
        depends_on("cuda")
        conflicts("@:6", msg="NVPTX only supported in gcc 7 and above")
        conflicts("languages=ada")
        conflicts("languages=brig")
        conflicts("languages=go")


The snippet above is equivalent to the more verbose:

class Gcc(AutotoolsPackage):
    depends_on("cuda", when="+nvptx")
    conflicts("@:6", when="+nvptx", msg="NVPTX only supported in gcc 7 and above")
    conflicts("languages=ada", when="+nvptx")
    conflicts("languages=brig", when="+nvptx")
    conflicts("languages=go", when="+nvptx")


Constraints stemming from the context are added to what is explicitly present in the when= argument of a directive, so:

with when("+elpa"):
    depends_on("elpa+openmp", when="+openmp")


is equivalent to:

depends_on("elpa+openmp", when="+openmp+elpa")


Constraints from nested context managers are also combined together, but they are rarely needed or recommended.
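For completeness, here is how nesting composes, reusing the elpa example (a sketch):

with when("+elpa"):
    with when("+openmp"):
        # equivalent to depends_on("elpa+openmp", when="+elpa+openmp")
        depends_on("elpa+openmp")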

Common default arguments

Similarly, if directives have a common set of default arguments, you can group them together in a with default_args() block:

class PyExample(PythonPackage):
    with default_args(type=("build", "run")):
        depends_on("py-foo")
        depends_on("py-foo@2:", when="@2:")
        depends_on("py-bar")
        depends_on("py-bz")


The above is short for:

class PyExample(PythonPackage):
    depends_on("py-foo", type=("build", "run"))
    depends_on("py-foo@2:", when="@2:", type=("build", "run"))
    depends_on("py-bar", type=("build", "run"))
    depends_on("py-bz", type=("build", "run"))


NOTE:

The with when() context manager is composable, while with default_args() merely overrides the default. For example:

with default_args(when="+feature"):
    depends_on("foo")
    depends_on("bar")
    depends_on("baz", when="+baz")


is equivalent to:

depends_on("foo", when="+feature")
depends_on("bar", when="+feature")
depends_on("baz", when="+baz")  # Note: not when="+feature+baz"




Conflicting Specs

Suppose a user needs to install package C, which depends on packages A and B. Package A builds a library with a Python2 extension, and package B builds a library with a Python3 extension. Packages A and B cannot be loaded together in the same Python runtime:

class A(Package):
    variant("python", default=True, description="enable python bindings")
    depends_on("python@2.7", when="+python")

    def install(self, spec, prefix):
        # do whatever is necessary to enable/disable python
        # bindings according to variant
        ...


class B(Package):
    variant("python", default=True, description="enable python bindings")
    depends_on("python@3.2:", when="+python")

    def install(self, spec, prefix):
        # do whatever is necessary to enable/disable python
        # bindings according to variant
        ...


Package C needs to use the libraries from packages A and B, but does not need either of the Python extensions. In this case, package C should simply depend on the ~python variant of A and B:

class C(Package):
    depends_on("A~python")
    depends_on("B~python")


This may require that A or B be built twice, if the user wishes to use the Python extensions provided by them: once for +python and once for ~python. Other than using a little extra disk space, that solution has no serious problems.

Overriding build system defaults

NOTE:

If you code a single class in package.py, all the functions shown in the table below can be implemented with the same signature on the *Package class instead of on the corresponding builder.


Most of the time the default implementation of methods or attributes in build system base classes is what a packager needs, and only a very few entities need to be overridden. Typically we just need to override methods like configure_args:

def configure_args(self):
    args = ["--enable-cxx"] + self.enable_or_disable("libs")
    if self.spec.satisfies("libs=static"):
        args.append("--with-pic")
    return args


The actual set of entities available for overriding in package.py depends on the build system. The build systems currently supported by Spack are:

API docs Description
generic Generic build system without any base implementation
makefile Specialized build system for software built invoking hand-written Makefiles
autotools Specialized build system for software built using GNU Autotools
cmake Specialized build system for software built using CMake
maven Specialized build system for software built using Maven
meson Specialized build system for software built using Meson
nmake Specialized build system for software built using NMake
qmake Specialized build system for software built using QMake
scons Specialized build system for software built using SCons
waf Specialized build system for software built using Waf
r Specialized build system for R extensions
octave Specialized build system for Octave packages
python Specialized build system for Python extensions
perl Specialized build system for Perl extensions
ruby Specialized build system for Ruby extensions
intel Specialized build system for licensed Intel software
oneapi Specialized build system for Intel oneAPI software
aspell_dict Specialized build system for Aspell dictionaries

NOTE:

In most cases packagers don't have to worry about the selection of the right base class for a package, as spack create will make the appropriate choice on their behalf. In those rare cases where manual intervention is needed we need to stress that a package base class depends on the build system being used, not the language of the package. For example, a Python extension installed with CMake would extends("python") and subclass from CMakePackage.
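As a sketch of that last point (hypothetical package):

class PyExampleExt(CMakePackage):
    """Hypothetical Python extension whose upstream build uses CMake:
    the base class follows the build system, extends() follows the language."""

    extends("python")
    depends_on("python@3:", type=("build", "link", "run"))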



Overriding builder methods

Build-system "phases" have default implementations that fit most of the common cases:


def configure(self, pkg, spec, prefix):
    """Run "configure", with the arguments specified by the builder and an
    appropriately set prefix.
    """
    options = getattr(self.pkg, "configure_flag_args", [])
    options += ["--prefix={0}".format(prefix)]
    options += self.configure_args()

    with fs.working_dir(self.build_directory, create=True):
        inspect.getmodule(self.pkg).configure(*options)


It is usually sufficient for a packager to override a few build system specific helper methods or attributes to provide, for instance, configure arguments:


def configure_args(self):
    spec = self.spec
    args = ["--enable-c++"]

    if spec.satisfies("%cce@9:"):
        args.append("LDFLAGS=-rtlib=compiler-rt")

    if (
        spec.satisfies("%clang")
        or spec.satisfies("%aocc")
        or spec.satisfies("%arm")
        or spec.satisfies("%fj")
    ) and not spec.satisfies("platform=darwin"):
        args.append("LDFLAGS=-rtlib=compiler-rt")

    if spec.satisfies("%intel@:18"):
        args.append("CFLAGS=-no-gcc")

    if "+sigsegv" in spec:
        args.append("--with-libsigsegv-prefix={0}".format(spec["libsigsegv"].prefix))
    else:
        args.append("--without-libsigsegv-prefix")

    # https://lists.gnu.org/archive/html/bug-m4/2016-09/msg00002.html
    arch = spec.architecture
    if arch.platform == "darwin" and arch.os == "sierra" and "%gcc" in spec:
        args.append("ac_cv_type_struct_sched_param=yes")

    return args


Each specific build system has a list of attributes and methods that can be overridden to fine-tune the installation of a package without overriding an entire phase. To have more information on them the place to go is the API docs of the build_systems module.

Overriding an entire phase

Sometimes it is necessary to override an entire phase. If the package.py contains a single class recipe, see Package class architecture, then the signature for a phase is:

class Openjpeg(CMakePackage):
    def install(self, spec, prefix):
        ...


regardless of the build system. The arguments for the phase are:

self
This is the package object, which extends CMakePackage. For API docs on Package objects, see Package.
spec
This is the concrete spec object created by Spack from an abstract spec supplied by the user. It describes what should be installed. It will be of type Spec.
prefix
This is the path that your install method should copy build targets into. It acts like a string, but it's actually its own special type, Prefix.

The arguments spec and prefix are passed only for convenience, as they always correspond to self.spec and self.spec.prefix respectively.

If the package.py has build instructions in a separate builder class, the signature for a phase changes slightly:

class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
    def install(self, pkg, spec, prefix):
        ...


In this case the package is passed as the second argument, and self is the builder instance.

Mixin base classes

Besides build systems, there are other cases where common metadata and behavior can be extracted and reused by many packages. For instance, packages that depend on Cuda or Rocm, share common dependencies and constraints. To factor these attributes into a single place, Spack provides a few mixin classes in the spack.build_systems module:

API docs Description
CudaPackage A helper class for packages that use CUDA
ROCmPackage A helper class for packages that use ROCm
GNUMirrorPackage A helper class for GNU packages
PythonExtension A helper class for Python extensions
SourceforgePackage A helper class for packages from sourceforge.org
SourcewarePackage A helper class for packages from sourceware.org
XorgPackage A helper class for x.org packages

These classes should be used by adding them to the inheritance tree of the package that needs them, for instance:

class Cp2k(MakefilePackage, CudaPackage):
    """CP2K is a quantum chemistry and solid state physics software package
    that can perform atomistic simulations of solid state, liquid, molecular,
    periodic, material, crystal, and biological systems
    """


In the example above Cp2k inherits all the conflicts and variants that CudaPackage defines.
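For instance, the +cuda variant contributed by CudaPackage can then be used in the package's own directives, just like a locally declared variant (a sketch; the constraint itself is hypothetical):

class Cp2k(MakefilePackage, CudaPackage):
    # "+cuda" and "cuda_arch" come from CudaPackage
    conflicts("+cuda", when="@:6", msg="CUDA support requires version 7 or newer")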

Multiple build systems

There are cases where a package actively supports two build systems, or changes build systems as it evolves, or needs different build systems on different platforms. Spack allows dealing with these cases by splitting the build instructions into separate builder classes.

For instance, software that supports two build systems unconditionally should derive from both *Package base classes, and declare the possible use of multiple build systems using a directive:

class Example(CMakePackage, AutotoolsPackage):
    variant("my_feature", default=True)

    build_system("cmake", "autotools", default="cmake")


In this case the software can be built with both autotools and cmake. Since the package supports multiple build systems, it is necessary to declare which one is the default.

Additional build instructions are split into separate builder classes:

class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
    def cmake_args(self):
        return [self.define_from_variant("MY_FEATURE", "my_feature")]


class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
    def configure_args(self):
        return self.with_or_without("my-feature", variant="my_feature")


In this example, spack install example +my_feature build_system=cmake will pick the CMakeBuilder and invoke cmake -DMY_FEATURE:BOOL=ON.

Similarly, spack install example +my_feature build_system=autotools will pick the AutotoolsBuilder and invoke ./configure --with-my-feature.

Dependencies are always specified in the package class. When some dependencies depend on the choice of the build system, it is possible to use when conditions as usual:

class Example(CMakePackage, AutotoolsPackage):
    build_system("cmake", "autotools", default="cmake")

    # Runtime dependencies
    depends_on("ncurses")
    depends_on("libxml2")

    # Lowerbounds for cmake only apply when using cmake as the build system
    with when("build_system=cmake"):
        depends_on("cmake@3.18:", when="@2.0:", type="build")
        depends_on("cmake@3:", type="build")

    # Specify extra build dependencies used only in the configure script
    with when("build_system=autotools"):
        depends_on("perl", type="build")
        depends_on("pkgconfig", type="build")


Very often projects switch from one build system to another, or add support for a new build system from a certain version, which means that the choice of the build system typically depends on a version range. Those situations can be handled by using conditional values in the build_system directive:

class Example(CMakePackage, AutotoolsPackage):
    build_system(
        conditional("cmake", when="@0.64:"),
        conditional("autotools", when="@:0.63"),
        default="cmake",
    )


In the example, the directive imposes a change from Autotools to CMake going from v0.63 to v0.64.

The build_system can be used as an ordinary variant, which also means that it can be used in depends_on statements. This can be useful when a package requires that its dependency has a CMake config file, meaning that the dependent can only build when the dependency is built with CMake, and not Autotools. In that case, you can force the choice of the build system in the dependent:

class Dependent(CMakePackage):
    depends_on("example build_system=cmake")


The build environment

In general, you should not have to do much differently in your install method than you would when installing a package on the command line. In fact, you may need to do less than you would on the command line.

Spack tries to set environment variables and modify compiler calls so that it appears to the build system that you're building with a standard system install of everything. Obviously that's not going to cover all build systems, but it should make it easy to port packages to Spack if they use a standard build system. Usually with autotools or cmake, building and installing is easy. With builds that use custom Makefiles, you may need to add logic to modify the makefiles.

The remainder of the section covers the way Spack's build environment works.

Forking install()

To give packagers free rein over their install environment, Spack forks a new process each time it invokes a package's install() method. This allows packages to have a sandboxed build environment, without impacting the environments of other jobs that the main Spack process runs. Packages are free to change the environment or to modify Spack internals, because each install() call has its own dedicated process.

Environment variables

Spack sets a number of standard environment variables that serve two purposes:

1. Make build systems use Spack's compiler wrappers for their builds.
2. Allow build systems to find dependencies more easily.

The compiler environment variables that Spack sets are:

Variable Purpose
CC C compiler
CXX C++ compiler
F77 Fortran 77 compiler
FC Fortran 90 and above compiler


Spack sets these variables so that they point to compiler wrappers. These are covered in their own section below.

All of these are standard variables respected by most build systems. If your project uses Autotools or CMake, then it should pick them up automatically when you run configure or cmake in the install() function. Many traditional builds using GNU Make and BSD make also respect these variables, so they may work with these systems.

If your build system does not automatically pick these variables up from the environment, then you can simply pass them on the command line or use a patch as part of your build process to get the correct compilers into the project's build system. There are also some file editing commands you can use -- these are described later in the section on file manipulation.
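
For example, a minimal sketch of passing the wrapper compilers explicitly to a Makefile-only build that ignores the environment (the Makefile variable names here are illustrative):

def install(self, spec, prefix):
    # CC and CXX already point at Spack's compiler wrappers here, so
    # passing them as make variables overrides compilers hard-coded
    # in the project's Makefile
    make("CC={0}".format(env["CC"]), "CXX={0}".format(env["CXX"]))
    make("install", "PREFIX={0}".format(prefix))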

In addition to the compiler variables, these variables are set before entering install() so that packages can locate dependencies easily:

PATH               Set to point to /bin directories of dependencies
CMAKE_PREFIX_PATH  Path to dependency prefixes for CMake
PKG_CONFIG_PATH    Path to any pkgconfig directories for dependencies
PYTHONPATH         Path to site-packages dir of any python dependencies

PATH is set up to point to the /bin directories of dependencies so that you can use tools installed by dependency packages at build time. For example, $MPICH_ROOT/bin/mpicc is frequently used by dependencies of mpich.

CMAKE_PREFIX_PATH contains a colon-separated list of prefixes where cmake will search for dependency libraries and headers. This causes all standard CMake find commands to look in the paths of your dependencies, so you do not have to manually specify arguments like -DDEPENDENCY_DIR=/path/to/dependency to cmake. More on this is in the CMake documentation.

PKG_CONFIG_PATH is for packages that attempt to discover dependencies using the GNU pkg-config tool. It is similar to CMAKE_PREFIX_PATH in that it allows a build to automatically discover its dependencies.

If you want to see the environment that a package will build with, or if you want to run commands in that environment to test them out, you can use the spack build-env command, documented below.
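
For example, to inspect the build environment for a spec or open a shell inside it (zlib is just an illustrative spec):

# Print the compiler variables Spack would set for the build
$ spack build-env zlib -- env | grep ^CC=
# Or run an interactive shell in that environment
$ spack build-env zlib -- bash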

Failing the build

Sometimes you don't want a package to successfully install unless some condition is true. You can explicitly cause the build to fail from install() by raising an InstallError, for example:

if spec.satisfies("platform=darwin"):
    raise InstallError("This package does not build on Mac OS X!")


Shell command functions

Recall the install method from libelf:


def install(self, spec, prefix):
    make("install", parallel=False)


Normally in Python, you'd have to write something like this in order to execute shell commands:

import subprocess
subprocess.check_call(["./configure", "--prefix={0}".format(prefix)])


We've tried to make this a bit easier by providing callable wrapper objects for some shell commands. By default, configure, cmake, and make wrappers are provided, so you can call them more naturally in your package files.

If you need other commands, you can use which to get them:

sed = which("sed")
sed("s/foo/bar/", filename)


The which function will search the PATH for the application.

Callable wrappers also allow spack to provide some special features. For example, in Spack, make is parallel by default, and Spack figures out the number of cores on your machine and passes an appropriate value for -j<numjobs> when it calls make (see the parallel package attribute <attribute_parallel>). In a package file, you can supply a keyword argument, parallel=False, to the make wrapper to disable parallel make. In the libelf package, this allows us to avoid race conditions in the library's build system.

Compiler flags

Compiler flags set by the user through the Spec object can be passed to the build in one of three ways. By default, the build environment injects these flags directly into the compiler commands using Spack's compiler wrappers. In cases where the build system requires knowledge of the compiler flags, they can be registered with the build system by alternatively passing them through environment variables or as build system arguments. The flag_handler method can be used to change this behavior.

Packages can override the flag_handler method with one of three built-in flag_handlers. The built-in flag_handlers are named inject_flags, env_flags, and build_system_flags. The inject_flags method is the default. The env_flags method puts all of the flags into the environment variables that make uses as implicit variables ("CFLAGS", "CXXFLAGS", etc.). The build_system_flags method adds the flags as arguments to the invocation of configure or cmake, respectively.

WARNING:

Passing compiler flags using build system arguments is only supported for CMake and Autotools packages. Individual packages may also differ in whether they properly respect these arguments.


Individual packages may also define their own flag_handler methods. The flag_handler method takes the package instance (self), the name of the flag, and a list of the values of the flag. It will be called on each of the six compiler flags supported in Spack. It should return a triple of (injf, envf, bsf) where injf is a list of flags to inject via the Spack compiler wrappers, envf is a list of flags to set in the appropriate environment variables, and bsf is a list of flags to pass to the build system as arguments.

WARNING:

Passing a non-empty list of flags to bsf for a build system that does not support build system arguments will result in an error.


Here are the definitions of the three built-in flag handlers:

def inject_flags(pkg, name, flags):
    return (flags, None, None)

def env_flags(pkg, name, flags):
    return (None, flags, None)

def build_system_flags(pkg, name, flags):
    return (None, None, flags)


NOTE:

Returning [] and None are equivalent in a flag_handler method.


Packages can override the default behavior either by specifying one of the built-in flag handlers,

flag_handler = env_flags


or by implementing the flag_handler method. Suppose for a package Foo we need to pass cflags, cxxflags, and cppflags through the environment, the rest of the flags through compiler wrapper injection, and we need to add -lbar to ldlibs. The following flag handler method accomplishes that.

def flag_handler(self, name, flags):
    if name in ["cflags", "cxxflags", "cppflags"]:
        return (None, flags, None)
    elif name == "ldlibs":
        flags.append("-lbar")
    return (flags, None, None)


Because these methods can pass values through environment variables, it is important not to override these variables unnecessarily (e.g., by setting env["CFLAGS"]) in other package methods when using non-default flag handlers. In the setup_environment and setup_dependent_environment methods, use the append_flags method of the EnvironmentModifications class to append values to a list of flags whenever the flag handler is env_flags. If the package passes flags through the environment or the build system manually (in the install method, for example), we recommend either using the default flag handler, or removing the manual references and implementing a custom flag_handler method that adds the desired flags to export as environment variables or pass to the build system. Manual flag passing is likely to interfere with the env_flags and build_system_flags handlers.
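
For example, a minimal sketch of appending a flag without clobbering what the env_flags handler placed in the environment (the -fPIC flag is illustrative, and the older setup_environment signature referenced above is assumed):

def setup_environment(self, spack_env, run_env):
    # Appends to CFLAGS instead of overwriting it, so flags placed
    # there by the env_flags handler are preserved
    spack_env.append_flags("CFLAGS", "-fPIC")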

In rare circumstances, such as compiling and running small unit tests, a package developer may need to know the appropriate compiler flags to enable features like OpenMP, C++11, C++14, and the like. To that end, the compiler classes in Spack implement the following properties: openmp_flag, cxx98_flag, cxx11_flag, cxx14_flag, and cxx17_flag, which can be accessed in a package as self.compiler.cxx11_flag and so on. Note that if a given compiler version does not support the requested feature, an error will be produced; package developers can therefore also use these properties to assert that a compiler supports a feature. This is handy when a package supports additional variants like

variant("openmp", default=True, description="Enable OpenMP support.")


Blas, Lapack and ScaLapack libraries

Multiple packages provide implementations of Blas, Lapack, and ScaLapack routines. The names of the resulting static and/or shared libraries differ from package to package. To make the install() method independent of the choice of Blas implementation, each package that provides it implements @property def blas_libs(self) to return an object of LibraryList type, which simplifies usage of a set of libraries. The same applies to packages that provide Lapack and ScaLapack. Package developers are requested to use this interface. Common usage cases are:

1.
Space separated list of full paths

lapack_blas = spec["lapack"].libs + spec["blas"].libs
options.append(
    "--with-blas-lapack-lib={0}".format(lapack_blas.joined())
)


2.
Names of libraries and directories which contain them

blas = spec["blas"].libs
options.extend([

"-DBLAS_LIBRARY_NAMES={0}".format(";".join(blas.names)),
"-DBLAS_LIBRARY_DIRS={0}".format(";".join(blas.directories)) ])


3.
Search and link flags

math_libs = spec["scalapack"].libs + spec["lapack"].libs + spec["blas"].libs
options.append(
    "-DMATH_LIBS:STRING={0}".format(math_libs.ld_flags)
)


For more information, see documentation of LibraryList class.

Prefix objects

Spack passes the prefix parameter to the install method so that you can pass it to configure, cmake, or some other installer, e.g.:

configure("--prefix={0}".format(prefix))


For the most part, prefix objects behave exactly like strings. For packages that do not have their own install target, or for those that implement it poorly (like libdwarf), you may need to manually copy things into particular directories under the prefix. For this, you can refer to standard subdirectories without having to construct paths yourself, e.g.:

def install(self, spec, prefix):
    mkdirp(prefix.bin)
    install("foo-tool", prefix.bin)
    mkdirp(prefix.include)
    install("foo.h", prefix.include)
    mkdirp(prefix.lib)
    install("libfoo.a", prefix.lib)


Attributes of this object are created on the fly when you request them, so any of the following will work:

Prefix Attribute     Location
prefix.bin           $prefix/bin
prefix.lib64         $prefix/lib64
prefix.share.man     $prefix/share/man
prefix.foo.bar.baz   $prefix/foo/bar/baz

Of course, this only works if your file or directory is a valid Python variable name. If your file or directory contains dashes or dots, use join instead:

prefix.lib.join("libz.a")


Spec objects

When install is called, most parts of the build process are set up for you. The correct version's tarball has been downloaded and expanded. Environment variables like CC and CXX are set to point to the correct compiler and version. An install prefix has already been selected and passed in as prefix. In most cases this is all you need to get configure, cmake, or another install working correctly.

There will be times when you need to know more about the build configuration. For example, some software requires that you pass special parameters to configure, like --with-libelf=/path/to/libelf or --with-mpich. You might also need to supply special compiler flags depending on the compiler. All of this information is available in the spec.

Testing spec constraints

You can test whether your spec is configured a certain way by using the satisfies method. For example, if you want to check whether the package's version is in a particular range, you can use specs to do that, e.g.:

configure_args = [
    "--prefix={0}".format(prefix)
]

if spec.satisfies("@1.2:1.4"):
    configure_args.append("CXXFLAGS='-DWITH_FEATURE'")

configure(*configure_args)


This works for compilers, too:

if spec.satisfies("%gcc"):

configure_args.append("CXXFLAGS='-g3 -O3'") if spec.satisfies("%intel"):
configure_args.append("CXXFLAGS='-xSSE2 -fast'")


Or for combinations of spec constraints:

if spec.satisfies("@1.2%intel"):

tty.error("Version 1.2 breaks when using Intel compiler!")


You can also do similar satisfaction tests for dependencies:

if spec.satisfies("^dyninst@8.0"):

configure_args.append("CXXFLAGS=-DSPECIAL_DYNINST_FEATURE")


This could allow you to easily work around a bug in a particular dependency version.

You can use satisfies() to test for particular dependencies, e.g. foo.satisfies("^openmpi@1.2") or foo.satisfies("^mpich"), or you can use Python's built-in in operator:

if "libelf" in spec:

print "this package depends on libelf"


This is useful for virtual dependencies, as you can easily see what implementation was selected for this build:

if "openmpi" in spec:

configure_args.append("--with-openmpi") elif "mpich" in spec:
configure_args.append("--with-mpich") elif "mvapich" in spec:
configure_args.append("--with-mvapich")


It's also a bit more concise than satisfies. The difference between the two functions is that satisfies() tests whether spec constraints overlap at all, while in tests whether a spec or any of its dependencies satisfy the provided spec.

Architecture specifiers

As mentioned in Support for specific microarchitectures, each node in a concretized spec object has an architecture attribute, which is a triplet of platform, os, and target. Each of these three items can be queried to make decisions when configuring, building, or installing a package.

Querying the platform and the operating system

Sometimes the actions to be taken to install a package might differ depending on the platform we are installing for. If that is the case we can use conditionals:

if spec.platform == "darwin":
    # Actions that are specific to Darwin
    args.append("--darwin-specific-flag")


and branch based on the current spec platform. If we need to make a package directive conditional on the platform we can instead employ the usual spec syntax and pass the corresponding constraint to the appropriate argument of that directive:

class Libnl(AutotoolsPackage):

    conflicts("platform=darwin", msg="libnl requires FreeBSD or Linux")


Similar considerations are also valid for the os part of a spec's architecture. For instance:

class Glib(AutotoolsPackage):

    patch("old-kernels.patch", when="os=centos6")


will apply the patch only when the operating system is CentOS 6.

NOTE:

Even though experienced Python programmers might recognize that there are other ways to retrieve information on the platform:

if sys.platform == "darwin":
    # Actions that are specific to Darwin
    args.append("--darwin-specific-flag")


querying the spec architecture's platform should be considered the preferred approach. The key difference is that a query on sys.platform, or anything similar, is always bound to the host on which the interpreter running Spack is located, and as such it won't work correctly in environments where cross-compilation is required.



Querying the target microarchitecture

The third item of the architecture tuple is the target which abstracts the information on the CPU microarchitecture. A list of all the targets known to Spack can be obtained via the command line:

$ spack arch --known-targets
Generic architectures (families)
    aarch64  arm  armv8.1a  armv8.2a  armv8.3a  armv8.4a  armv8.5a  armv9.0a
    ppc  ppc64  ppc64le  ppcle  riscv64  sparc  sparc64  x86  x86_64
    x86_64_v2  x86_64_v3  x86_64_v4

GenuineIntel - x86
    i686  pentium2  pentium3  pentium4  prescott

GenuineIntel - x86_64
    nocona  core2  nehalem  westmere  sandybridge  ivybridge  haswell
    broadwell  skylake  mic_knl  skylake_avx512  cannonlake  cascadelake
    icelake  sapphirerapids

AuthenticAMD - x86_64
    k10  bulldozer  piledriver  zen  steamroller  zen2  zen3  excavator  zen4

IBM - ppc64
    power7  power8  power9  power10

IBM - ppc64le
    power8le  power9le  power10le

Cavium - aarch64
    thunderx2

Fujitsu - aarch64
    a64fx

ARM - aarch64
    cortex_a72  neoverse_n1  neoverse_v1  neoverse_v2

Apple - aarch64
    m1  m2

SiFive - riscv64
    u74mc


Within directives each of the names above can be used to match a particular target:

class Julia(Package):

    # This patch is only applied on icelake microarchitectures
    patch("icelake.patch", when="target=icelake")


It's also possible to select all the architectures belonging to the same family using an open range:

class Julia(Package):

    # This patch is applied on all x86_64 microarchitectures.
    # The trailing colon denotes an open range of targets.
    patch("generic_x86_64.patch", when="target=x86_64:")


in a way that resembles what was shown in Versions and fetching for versions. Where target objects really shine though is when they are used in methods called at configure, build or install time. In that case we can test targets for supported features, for instance:

if spec.satisfies("target=avx512"):

args.append("--with-avx512")


The snippet above will append the --with-avx512 item to a list of arguments only if the corresponding feature is supported by the current target. Sometimes we need to take different actions based on the architecture family and not on the specific microarchitecture. In those cases we can check the family attribute:

if spec.target.family == "ppc64le":
    args.append("--enable-power")


Possible values for the family attribute are displayed by spack arch --known-targets under the "Generic architectures (families)" header. Finally it's possible to perform actions based on whether the current microarchitecture is compatible with a known one:

if spec.target > "haswell":

args.append("--needs-at-least-haswell")


The snippet above will add an item to a list of configure options only if the current architecture is a superset of haswell or, said otherwise, only if the current architecture is a later microarchitecture still compatible with haswell.

If Spack is used on an unknown microarchitecture, it will try to perform a best match against the features it detects and will select the closest microarchitecture it has information for. In case nothing matches, it will create a new generic architecture on the fly. This is done so that users can still use Spack for their work. The software built this way will probably not be as optimized as it could be, but just as you need a newer compiler to build for newer architectures, you may need a newer version of Spack for new architectures to be labeled correctly.



Accessing Dependencies

You may need to get at some file or binary that's in the installation prefix of one of your dependencies. You can do that by subscripting the spec:

spec["mpi"]


The value in the brackets needs to be some package name, and spec needs to depend on that package, or the operation will fail. For example, the above code will fail if the spec doesn't depend on mpi. The value returned is itself just another Spec object, so you can do all the same things you would do with the package's own spec:

spec["mpi"].prefix.bin
spec["mpi"].version


Multimethods and @when

Spack allows you to make multiple versions of instance functions in packages, based on whether the package's spec satisfies particular criteria.

The @when annotation lets packages declare multiple versions of methods like install() that depend on the package's spec. For example:

class SomePackage(Package):
    ...

    def install(self, prefix):
        # Do default install
        pass

    @when("arch=chaos_5_x86_64_ib")
    def install(self, prefix):
        # This will be executed instead of the default install if
        # the package's sys_type() is chaos_5_x86_64_ib.
        pass

    @when("arch=linux-debian7-x86_64")
    def install(self, prefix):
        # This will be executed if the package's sys_type() is
        # linux-debian7-x86_64.
        pass


In the above code there are three versions of install(), two of which are specialized for particular platforms. The version that is called depends on the architecture of the package spec.

Note that this works for methods other than install, as well. So, if you only have part of the install that is platform specific, you could do something more like this:

class SomePackage(Package):
    ...

    # virtual dependence on MPI.
    # could resolve to mpich, mpich2, OpenMPI
    depends_on("mpi")

    def setup(self):
        # do nothing in the default case
        pass

    @when("^openmpi")
    def setup(self):
        # do something special when this is built with OpenMPI for
        # its MPI implementations.
        pass

    def install(self, prefix):
        # Do common install stuff
        self.setup()
        # Do more common install stuff


You can write multiple @when specs that satisfy the package's spec, for example:

class SomePackage(Package):
    ...

    depends_on("mpi")

    def setup_mpi(self):
        # the default, called when no @when specs match
        pass

    @when("^mpi@3:")
    def setup_mpi(self):
        # this will be called when mpi is version 3 or higher
        pass

    @when("^mpi@2:")
    def setup_mpi(self):
        # this will be called when mpi is version 2 or higher
        pass

    @when("^mpi@1:")
    def setup_mpi(self):
        # this will be called when mpi is version 1 or higher
        pass


In situations like this, the first matching spec, in declaration order, will be called. As before, if no @when spec matches, the default method (the one without the @when decorator) will be called.

WARNING:

The default version of decorated methods must always come first. Otherwise it will override all of the platform-specific versions. There's not much we can do to get around this because of the way decorators work.


Compiler wrappers

As mentioned, CC, CXX, F77, and FC are set to point to Spack's compiler wrappers. These are simply called cc, c++, f77, and f90, and they live in $SPACK_ROOT/lib/spack/env.

$SPACK_ROOT/lib/spack/env is added first in the PATH environment variable when install() runs so that system compilers are not picked up instead.

All of these compiler wrappers point to a single compiler wrapper script that figures out which real compiler it should be building with. This comes either from spec concretization or from a user explicitly asking for a particular compiler using, e.g., %intel on the command line.

In addition to invoking the right compiler, the compiler wrappers add flags to the compile line so that dependencies can be easily found. These flags are added for each dependency, if they exist:

  • Compile-time library search paths: -L$dep_prefix/lib, -L$dep_prefix/lib64
  • Runtime library search paths (RPATHs): $rpath_flag$dep_prefix/lib, $rpath_flag$dep_prefix/lib64
  • Include search paths: -I$dep_prefix/include

An example of this would be the libdwarf build, which has one dependency: libelf. Every call to cc in the libdwarf build will have -I$LIBELF_PREFIX/include, -L$LIBELF_PREFIX/lib, and $rpath_flag$LIBELF_PREFIX/lib inserted on the command line. This is done transparently to the project's build system, which will just think it's using a system where libelf is readily available. Because of this, you do not have to insert extra -I, -L, etc. on the command line.

Another useful consequence of this is that you often do not have to add extra parameters on the configure line to get autotools to find dependencies. The libdwarf install method just calls configure like this:

configure("--prefix=" + prefix)


Because of the -L and -I arguments, configure will successfully find libelf.h and libelf.so, without the packager having to provide --with-libelf=/path/to/libelf on the command line.

NOTE:

For most compilers, $rpath_flag is -Wl,-rpath,. However, NAG passes its flags to GCC instead of passing them directly to the linker. Therefore, its $rpath_flag is doubly wrapped: -Wl,-Wl,,-rpath,. $rpath_flag can be overridden on a compiler specific basis in lib/spack/spack/compilers/$compiler.py.


The compiler wrappers also pass the compiler flags specified by the user from the command line (cflags, cxxflags, fflags, cppflags, ldflags, and/or ldlibs). They do not override the canonical autotools flags with the same names (but in ALL-CAPS) that may be passed into the build by particularly challenging package scripts.

MPI support in Spack

It is common for high performance computing software to use the Message Passing Interface (MPI). As a result of concretization, a given package can be built using different implementations of MPI, such as OpenMPI, MPICH, or Intel MPI. That is, when your package declares that it depends_on("mpi"), it can be built with any of these MPI implementations. In some scenarios, configuring a package requires providing it with appropriate MPI compiler wrappers, such as mpicc or mpic++. However, different implementations of MPI may have different names for those wrappers.

Spack provides an idiomatic way to use MPI compilers in your package. To use MPI wrappers to compile your whole build, do this in your install() method:

env["CC"] = spec["mpi"].mpicc
env["CXX"] = spec["mpi"].mpicxx
env["F77"] = spec["mpi"].mpif77
env["FC"] = spec["mpi"].mpifc


That's all. A longer explanation of why this works is below.

We don't try to force any particular build method on packagers. The decision to use MPI wrappers depends on the way the package is written, on common practice, and on "what works". Loosely, there are three types of MPI builds:

1.
Some build systems work well without the wrappers and can treat MPI as an external library, where the person doing the build has to supply includes/libs/etc. This is fairly uncommon.
2.
Others really want the wrappers and assume you're using an MPI "compiler", i.e., they have no mechanism to add MPI includes/libraries/etc.
3.
CMake's FindMPI needs the compiler wrappers, but it uses them to extract -I / -L / -D arguments, then treats MPI like a regular library.



Note that some CMake builds fall into case 2 because they either don't know about or don't like CMake's FindMPI support: they just assume an MPI compiler. Also, some autotools builds fall into case 3 (e.g., there is an autotools version of CMake's FindMPI).

Given all of this, we leave the use of the wrappers up to the packager. Spack will support all three ways of building MPI packages.

Packaging Conventions

As mentioned above, in the install() method, CC, CXX, F77, and FC point to Spack's wrappers around the chosen compiler. Spack's wrappers are not the MPI compiler wrappers, though they do automatically add -I, -L, and -Wl,-rpath arguments for dependencies in a similar way. The MPI wrappers are a bit different in that they also add -l arguments for the MPI libraries, and some add special -D arguments to trigger build options in MPI programs.

For case 1 above, you generally don't need to do more than patch your Makefile or add configure args as you normally would.

For case 3, you don't need to do much of anything, as Spack puts the MPI compiler wrappers in the PATH, and the build will find them and interrogate them.

For case 2, things are a bit more complicated, as you'll need to tell the build to use the MPI compiler wrappers instead of Spack's compiler wrappers. All it takes is a few lines like this:

env["CC"] = spec["mpi"].mpicc
env["CXX"] = spec["mpi"].mpicxx
env["F77"] = spec["mpi"].mpif77
env["FC"] = spec["mpi"].mpifc


Or, if you pass CC, CXX, etc. directly to your build with, e.g., --with-cc=<path>, you'll want to substitute spec["mpi"].mpicc in there instead, e.g.:

configure("—prefix=%s" % prefix,

"—with-cc=%s" % spec["mpi"].mpicc)


Now, you may think that doing this will lose the includes, library paths, and RPATHs that Spack's compiler wrappers get you, but we've actually set things up so that the MPI compiler wrappers use Spack's compiler wrappers when run from within Spack. So using the MPI wrappers should really be as simple as the code above.

spec["mpi"]

Ok, so how does all this work?

If your package has a virtual dependency like mpi, then referring to spec["mpi"] within install() will get you the concrete mpi implementation in your dependency DAG. That is a spec object just like the one passed to install, only the MPI implementations all set some additional properties on it to help you out. E.g., in mvapich2, you'll find this:


def setup_dependent_package(self, module, dependent_spec):
    # For Cray MPIs, the regular compiler wrappers *are* the MPI wrappers.
    # Cray MPIs always have cray in the module name, e.g. "cray-mvapich"
    if self.spec.satisfies("platform=cray"):
        self.spec.mpicc = spack_cc
        self.spec.mpicxx = spack_cxx
        self.spec.mpifc = spack_fc
        self.spec.mpif77 = spack_f77
    else:
        self.spec.mpicc = join_path(self.prefix.bin, "mpicc")
        self.spec.mpicxx = join_path(self.prefix.bin, "mpicxx")
        self.spec.mpifc = join_path(self.prefix.bin, "mpif90")
        self.spec.mpif77 = join_path(self.prefix.bin, "mpif77")
        self.spec.mpicxx_shared_libs = [
            os.path.join(self.prefix.lib, "libmpicxx.{0}".format(dso_suffix)),
            os.path.join(self.prefix.lib, "libmpi.{0}".format(dso_suffix)),
        ]


That code allows the mvapich2 package to associate an mpicc property with the mvapich2 node in the DAG, so that dependents can access it. openmpi and mpich do similar things. So, no matter what MPI you're using, spec["mpi"].mpicc gets you the location of the MPI compilers. This allows us to have a fairly simple polymorphic interface for information about virtual dependencies like MPI.

Wrapping wrappers

Spack likes to use its own compiler wrappers to make it easy to add RPATHs to builds, and to try hard to ensure that your builds use the right dependencies. This doesn't play nicely by default with MPI, so we have to do a couple tricks.

1.
If we build MPI with Spack's wrappers, mpicc and friends will be installed with hard-coded paths to Spack's wrappers, and using them from outside of Spack will fail because they only work within Spack. To fix this, we patch mpicc and friends to use the regular compilers. Look at the filter_compilers method in mpich, openmpi, or mvapich2 for details.
2.
We still want to use the Spack compiler wrappers when Spack is calling mpicc. Luckily, the wrappers in all mainstream MPI implementations provide environment variables that allow us to dynamically set the compiler to be used by mpicc, mpicxx, etc. Spack's build environment sets MPICC, MPICXX, etc. for mpich derivatives and OMPI_CC, OMPI_CXX, etc. for OpenMPI, as sketched below. This makes the MPI compiler wrappers use the Spack compiler wrappers so that your dependencies still get proper RPATHs even if you use the MPI wrappers.
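
Conceptually, the build environment does something like the following (a simplified sketch, not the actual implementation):

# MPICH and derivatives consult MPICC, MPICXX, etc.
env["MPICC"] = spack_cc
env["MPICXX"] = spack_cxx
# OpenMPI consults OMPI_CC, OMPI_CXX, etc.
env["OMPI_CC"] = spack_cc
env["OMPI_CXX"] = spack_cxx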



MPI on Cray machines

The Cray programming environment notably uses its own compiler wrappers, which function like MPI wrappers. On Cray systems, the CC, cc, and ftn wrappers ARE the MPI compiler wrappers, and it's assumed that you'll use them for all of your builds. So on Cray we don't bother with mpicc, mpicxx, etc.; Spack MPI implementations set spec["mpi"].mpicc to point to Spack's wrappers, which wrap the Cray wrappers, which wrap the regular compilers and include MPI flags. That may seem complicated, but for packagers, it means the same code for using MPI wrappers will work, even on a Cray:

env["CC"] = spec["mpi"].mpicc


This is because on Cray, spec["mpi"].mpicc is just spack_cc.

Checking an installation

A package that appears to install successfully is not necessarily installed correctly, nor guaranteed to keep working indefinitely. There are a number of possible points of failure, so Spack provides features for checking the software along the way.

Failures can occur during and after the installation process. The build may start but the software not end up fully installed. The installed software may not work at all or as expected. The software may work after being installed but, due to changes on the system, may stop working days, weeks, or months after being installed.

This section describes Spack's support for checks that can be performed during and after its installation. The former checks are referred to as build-time tests and the latter as stand-alone (or smoke) tests.

Build-time tests

Spack infers the status of a build based on the contents of the install prefix. Success is assumed if anything (e.g., a file, directory) is written after install() completes. Otherwise, the build is assumed to have failed. However, the presence of install prefix contents is not a sufficient indicator of success, so Spack supports the addition of tests that can be performed during spack install processing.

Consider a simple autotools build using the following commands:

$ ./configure --prefix=/path/to/installation/prefix
$ make
$ make install


Standard Autotools and CMake builds do not write anything to the prefix from the configure and make commands. Files are only written by make install, after the build completes.

NOTE:

If you want to learn more about Autotools and CMake packages in Spack, refer to AutotoolsPackage and CMakePackage, respectively.


What can you do to check that the build is progressing satisfactorily? If there are specific files and/or directories expected of a successful installation, you can add basic, fast sanity checks. You can also add checks to be performed after one or more installation phases.

NOTE:

Build-time tests are performed when the --test option is passed to spack install.


WARNING:

Build-time test failures result in a failed installation of the software.


Adding sanity checks

Unfortunately, many builds of scientific software modify the installation prefix before make install. Builds like this can falsely report success when an error occurs before the installation is complete. Simple sanity checks can be used to identify files and/or directories that are required of a successful installation. Spack checks for the presence of the files and directories after install() runs.

If any of the listed files or directories are missing, then the build will fail and the install prefix will be removed. If they all exist, then Spack considers the build successful from a sanity check perspective and keeps the prefix in place.

For example, the sanity checks for the reframe package below specify that eight paths must exist within the installation prefix after the install method completes.

class Reframe(Package):
    ...

    # sanity check
    sanity_check_is_file = [join_path("bin", "reframe")]
    sanity_check_is_dir = ["bin", "config", "docs", "reframe", "tutorials",
                           "unittests", "cscs-checks"]


When you run spack install with tests enabled, Spack will ensure that a successfully installed package has the required files and/or directories.

For example, running:

$ spack install --test=root reframe


results in Spack checking that the installation created the following file:

self.prefix.bin.reframe

and the following directories:

  • self.prefix.bin
  • self.prefix.config
  • self.prefix.docs
  • self.prefix.reframe
  • self.prefix.tutorials
  • self.prefix.unittests
  • self.prefix.cscs-checks

If any of these paths are missing, then Spack considers the installation to have failed.

NOTE:

You MUST use sanity_check_is_file to specify required files and sanity_check_is_dir for required directories.


Adding installation phase tests

Sometimes packages appear to build "correctly" only to have run-time behavior issues discovered at a later stage, such as after a full software stack relying on them has been built. Checks can be performed at different phases of the package installation to possibly avoid these types of problems. Some checks are built-in to different build systems, while others will need to be added to the package.

Built-in installation phase tests are provided by packages inheriting from select build systems, where naming conventions are used to identify typical test identifiers for those systems. In general, you won't need to add anything to your package to take advantage of these tests if your software's build system complies with the convention; otherwise, you'll want or need to override the post-phase method to perform other checks.

Built-in installation phase tests

Build System Class Post-Build Phase Method (Runs) Post-Install Phase Method (Runs)
AutotoolsPackage check (make test, make check) installcheck (make installcheck)
CachedCMakePackage check (make check, make test) Not applicable
CMakePackage check (make check, make test) Not applicable
MakefilePackage check (make test, make check) installcheck (make installcheck)
MesonPackage check (make test, make check) Not applicable
PerlPackage check (make test) Not applicable
PythonPackage Not applicable test (module imports)
QMakePackage check (make check) Not applicable
SConsPackage build_test (must be overridden) Not applicable
SIPPackage Not applicable test (module imports)
WafPackage build_test (must be overridden) install_test (must be overridden)

For example, the Libelf package inherits from AutotoolsPackage and its Makefile has a standard check target. So Spack will automatically run make check after the build phase when it is installed using the --test option, such as:

$ spack install --test=root libelf


In addition to overriding any built-in build system installation phase tests, you can write your own install phase tests. You will need to use two decorators for each phase test method:

  • run_after
  • on_package_attributes

The first decorator tells Spack when in the installation process to run your test method, namely after the provided installation phase. The second decorator tells Spack to only run the checks when the --test option is provided on the command line.

NOTE:

Be sure to place the directives above your test method in the order run_after then on_package_attributes.


NOTE:

You also want to be sure the package supports the phase you use in the run_after directive. For example, PackageBase only supports the install phase while the AutotoolsPackage and MakefilePackage support both install and build phases.


Assuming both build and install phases are available to you, you could add additional checks to be performed after each of those phases based on the skeleton provided below.

class YourMakefilePackage(MakefilePackage):
    ...

    @run_after("build")
    @on_package_attributes(run_tests=True)
    def check_build(self):
        # Add your custom post-build phase tests
        pass

    @run_after("install")
    @on_package_attributes(run_tests=True)
    def check_install(self):
        # Add your custom post-install phase tests
        pass


NOTE:

You could also schedule work to be done before a given phase using the run_before decorator.


By way of a concrete example, the reframe package mentioned previously has a simple installation phase check that runs the installed executable. The check is implemented as follows:

class Reframe(Package):
    ...

    # check if we can run reframe
    @run_after("install")
    @on_package_attributes(run_tests=True)
    def check_list(self):
        with working_dir(self.stage.source_path):
            reframe = Executable(self.prefix.bin.reframe)
            reframe("-l")


WARNING:

The API for adding tests is not yet considered stable and may change in future releases.


Checking build-time test results

Checking the results of these tests after running spack install --test can be done by viewing the spec's install-time-test-log.txt file whose location will depend on whether the spec installed successfully.

A successful installation results in the build and stage logs being copied to the .spack subdirectory of the spec's prefix. For example,

$ spack install --test=root zlib@1.2.13
...
[+] /home/user/spack/opt/spack/linux-rhel8-broadwell/gcc-10.3.1/zlib-1.2.13-tehu6cbsujufa2tb6pu3xvc6echjstv6
$ cat /home/user/spack/opt/spack/linux-rhel8-broadwell/gcc-10.3.1/zlib-1.2.13-tehu6cbsujufa2tb6pu3xvc6echjstv6/.spack/install-time-test-log.txt


If the installation fails due to build-time test failures, then both logs will be left in the build stage directory as illustrated below:

$ spack install --test=root zlib@1.2.13
...
See build log for details:

/var/tmp/user/spack-stage/spack-stage-zlib-1.2.13-lxfsivs4htfdewxe7hbi2b3tekj4make/spack-build-out.txt

$ cat /var/tmp/user/spack-stage/spack-stage-zlib-1.2.13-lxfsivs4htfdewxe7hbi2b3tekj4make/install-time-test-log.txt


Stand-alone tests

While build-time tests are integrated with the build process, stand-alone tests are expected to run days, weeks, even months after the software is installed. The goal is to provide a mechanism for gaining confidence that packages work as installed and continue to work as the underlying software evolves. Packages can add and inherit stand-alone tests. The spack test command is used to manage stand-alone testing.

NOTE:

Execution speed is important since these tests are intended to quickly assess whether installed specs work on the system. Consequently, they should run relatively quickly -- as in on the order of at most a few minutes -- while ideally exercising all, or at least key, aspects of the installed software.


NOTE:

Failing stand-alone tests indicate problems with the installation and, therefore, there is no reason to proceed with more resource-intensive tests until those have been investigated.

Passing stand-alone tests indicates that more thorough testing, such as running extensive unit or regression tests or tests that run at scale, can proceed without wasting resources on a problematic installation.



Tests are defined in the package using methods with names beginning test_. This allows Spack to support multiple independent checks, or parts. Files needed for testing, such as source, data, and expected outputs, may be saved from the build and/or stored with the package in the repository. Regardless of origin, these files are automatically copied to the spec's test stage directory prior to execution of the test method(s). Spack also provides some helper functions to facilitate processing.

Configuring the test stage directory

Stand-alone tests utilize a test stage directory for building, running, and tracking results in the same way Spack uses a build stage directory. The default test stage root directory, ~/.spack/test, is defined in etc/spack/defaults/config.yaml. This location is customizable by adding or changing the test_stage path in the high-level config of the appropriate config.yaml file such that:

config:
  test_stage: /path/to/test/stage


Packages can use the self.test_suite.stage property to access this setting. Other package properties that provide access to spec-specific subdirectories and files are described in accessing staged files.
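
For example, a test method could run from the suite-level stage directory (a minimal sketch; the method and file name are illustrative):

def test_log_something(self):
    """hypothetical check that writes a file to the suite's test stage"""
    with working_dir(self.test_suite.stage):
        touch("results.txt")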

NOTE:

The test stage path is the root directory for the entire suite. In other words, it is the root directory for all specs being tested by the spack test run command. Each spec gets its own stage subdirectory. Use self.test_suite.test_dir_for_spec(self.spec) to access the spec-specific test stage directory.


Adding stand-alone tests

Test recipes are defined in the package using methods with names beginning test_. This allows for the implementation of multiple independent tests. Each method has access to the information Spack tracks on the package, such as options, compilers, and dependencies, supporting the customization of tests to the build. Standard python assert statements and other error reporting mechanisms are available. Such exceptions are automatically caught and reported as test failures.

Each test method is an implicit test part named by the method and whose purpose is the method's docstring. Providing a purpose gives context for aiding debugging. A test method may contain embedded test parts. Spack outputs the test name and purpose prior to running each test method and any embedded test parts. For example, MyPackage below provides two basic examples of installation tests: test_always_fails and test_example. As the name indicates, the first always fails. The second simply runs the installed example.

class MyPackage(Package):
    ...

    def test_always_fails(self):
        """use assert to always fail"""
        assert False

    def test_example(self):
        """run installed example"""
        example = which(self.prefix.bin.example)
        example()


Output showing the identification of each test part after running the tests is illustrated below.

$ spack test run --alias mypackage mypackage@1.0
==> Spack test mypackage
...
$ spack test results -l mypackage
==> Results for test suite 'mypackage':
...
==> [2023-03-10-16:03:56.625204] test: test_always_fails: use assert to always fail
...
FAILED
==> [2023-03-10-16:03:56.625439] test: test_example: run installed example
...
PASSED


NOTE:

If MyPackage were a recipe for a library, the tests should build an example or test program that is then executed.


A test method can include test parts using the test_part context manager. Each part is treated as an independent check to allow subsequent test parts to execute even after a test part fails.

The signature for test_part is:

def test_part(pkg, test_name, purpose, work_dir=".", verbose=False):


where each argument has the following meaning:

  • pkg is an instance of the package for the spec under test.
  • test_name is the name of the test part, which must start with test_.
  • purpose is a brief description used as a heading for the test part.

    Output from the test is written to a test log file allowing the test name and purpose to be searched for test part confirmation and debugging.

  • work_dir is the path to the directory in which the test will run.

    The default of None, or ".", corresponds to the spec's test stage (i.e., self.test_suite.test_dir_for_spec(self.spec)).


Use of the package spec's installation directory for building and running tests is strongly discouraged. Doing so causes permission errors for shared spack instances and facilities that install the software in read-only file systems or directories.



Suppose MyPackage actually installs two examples we want to use for tests. These checks can be implemented as separate checks or, as illustrated below, embedded test parts.

class MyPackage(Package):
    ...

    def test_example(self):
        """run installed examples"""
        for example in ["ex1", "ex2"]:
            with test_part(
                self,
                f"test_example_{example}",
                purpose=f"run installed {example}",
            ):
                exe = which(join_path(self.prefix.bin, example))
                exe()


In this case, there will be an implicit test part for test_example and separate sub-parts for ex1 and ex2. The second sub-part will be executed regardless of whether the first passes. The test log for a run where the first executable fails and the second passes is illustrated below.

$ spack test run --alias mypackage mypackage@1.0
==> Spack test mypackage
...
$ spack test results -l mypackage
==> Results for test suite 'mypackage':
...
==> [2023-03-10-16:03:56.625204] test: test_example: run installed examples
==> [2023-03-10-16:03:56.625439] test: test_example_ex1: run installed ex1
...
FAILED
==> [2023-03-10-16:03:56.625555] test: test_example_ex2: run installed ex2
...
PASSED
...


WARNING:

Test results reporting requires that each test method and embedded test part for a package have a unique name.


Stand-alone tests run in an environment that provides access to information Spack has on how the software was built, such as build options, dependencies, and compilers. Build options and dependencies are accessed with the normal spec checks. Examples of checking variant settings and spec constraints can be found at the provided links. Accessing compilers in stand-alone tests that are used by the build requires setting a package property as described below.

Enabling test compilation

If you want to build and run binaries in tests, then you'll need to tell Spack to load the package's compiler configuration. This is accomplished by setting the package's test_requires_compiler property to True.

Setting the property to True ensures access to the compiler through canonical environment variables (e.g., CC, CXX, FC, F77). It also gives access to build dependencies like cmake through their spec objects (e.g., self.spec["cmake"].prefix.bin.cmake).

NOTE:

The test_requires_compiler property should be added at the top of the package near other attributes, such as the homepage and url.


Below illustrates using this feature to compile an example.

class MyLibrary(Package):
    ...
    test_requires_compiler = True
    ...

    def test_cxx_example(self):
        """build and run cxx-example"""
        exe = "cxx-example"
        ...
        cxx = which(os.environ["CXX"])
        cxx(
            f"-L{self.prefix.lib}",
            f"-I{self.prefix.include}",
            f"{exe}.cpp",
            "-o", exe,
        )
        cxx_example = which(exe)
        cxx_example()


Saving build-time files

NOTE:

We highly recommend re-using build-time test sources and pared down input files for testing installed software. These files are easier to keep synchronized with software capabilities since they reside within the software's repository.

If that is not possible, you can add test-related files to the package repository (see adding custom files). It will be important to maintain them so they work across listed or supported versions of the package.



You can use the cache_extra_test_sources helper to copy directories and/or files from the source build stage directory to the package's installation directory.

The signature for cache_extra_test_sources is:

def cache_extra_test_sources(pkg, srcs):


where each argument has the following meaning:

  • pkg is an instance of the package for the spec under test.
  • srcs is a string or a list of strings corresponding to the paths of subdirectories and/or files needed for stand-alone testing.

The paths must be relative to the staged source directory. Contents of subdirectories and files are copied to a special test cache subdirectory of the installation prefix. They are automatically copied to the appropriate relative paths under the test stage directory prior to executing stand-alone tests.

For example, a package method for copying everything in the tests subdirectory plus the foo.c and bar.c files from examples and using foo.c in a test method is illustrated below.

class MyLibPackage(Package):
    ...

    @run_after("install")
    def copy_test_files(self):
        srcs = ["tests",
                join_path("examples", "foo.c"),
                join_path("examples", "bar.c")]
        cache_extra_test_sources(self, srcs)

    def test_foo(self):
        exe = "foo"
        src_dir = self.test_suite.current_test_cache_dir.examples
        with working_dir(src_dir):
            cc = which(os.environ["CC"])
            cc(
                f"-L{self.prefix.lib}",
                f"-I{self.prefix.include}",
                f"{exe}.c",
                "-o", exe,
            )
            foo = which(exe)
            foo()


In this case, the method copies the associated files from the build stage, after the software is installed, to the package's test cache directory. Then test_foo builds foo using foo.c before running the program.

NOTE:

The method name copy_test_files here is for illustration purposes. You are free to use a name that is more suited to your package.

The key to copying files for stand-alone testing at build time is use of the run_after directive, which ensures the associated files are copied after the provided build stage where the files and installation prefix are available.



These paths are automatically copied from cache to the test stage directory prior to the execution of any stand-alone tests. Tests access the files using the self.test_suite.current_test_cache_dir property. In our example above, test methods can use the following paths to reference the copy of each entry listed in srcs, respectively:

  • self.test_suite.current_test_cache_dir.tests
  • join_path(self.test_suite.current_test_cache_dir.examples, "foo.c")
  • join_path(self.test_suite.current_test_cache_dir.examples, "bar.c")

Library developers will want to build the associated tests against their installed libraries before running them.



NOTE:

While source and input files are generally recommended, binaries may also be cached by the build process. Only you, as the package writer or maintainer, know whether these files would be appropriate for testing the installed software weeks to months later.


NOTE:

If one or more of the copied files needs to be modified to reference the installed software, it is recommended that those changes be made to the cached files once in the copy_test_files method, after the call to cache_extra_test_sources(). This will reduce the amount of unnecessary work in the test method and avoid problems testing in shared instances and facility deployments.

The filter_file function can be quite useful for such changes. See file manipulation.
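
For example (a sketch; makefile_path stands for wherever the cached copy landed, and the PREFIX variable is illustrative):

# Point the cached Makefile at the installed prefix, once, at build time
filter_file(r"^PREFIX\s*=.*",
            "PREFIX = {0}".format(self.prefix),
            makefile_path)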



Adding custom files

In some cases it can be useful to have files that can be used to build or check the results of tests. Examples include:

  • test source files
  • test input files
  • test build scripts
  • expected test outputs

While obtaining such files from the software repository is preferred (see adding build-time files), there are circumstances where that is not feasible (e.g., the software is not being actively maintained). When test files can't be obtained from the repository or as a supplement to files that can, Spack supports the inclusion of additional files under the test subdirectory of the package in the Spack repository.

Spack automatically copies the contents of that directory to the test staging directory prior to running stand-alone tests. Test methods access those files using the self.test_suite.current_test_data_dir property as shown below.

class MyLibrary(Package):
    ...
    test_requires_compiler = True
    ...

    def test_example(self):
        """build and run custom-example"""
        data_dir = self.test_suite.current_test_data_dir
        exe = "custom-example"
        src = data_dir.join(f"{exe}.cpp")
        ...
        # TODO: Build custom-example using src and exe
        ...
        custom_example = which(exe)
        custom_example()


Reading expected output from a file

The helper function get_escaped_text_output is available for packages to retrieve and properly format the text from a file containing the expected output of an executable, where that output may include special characters.

The signature for get_escaped_text_output is:

def get_escaped_text_output(filename):


where filename is the path to the file containing the expected output.

The filename for a custom file can be accessed by tests using the self.test_suite.current_test_data_dir property. The example below illustrates how to read a file that was added to the package's test subdirectory.

import re

class Sqlite(AutotoolsPackage):
    ...

    def test_example(self):
        """check example table dump"""
        test_data_dir = self.test_suite.current_test_data_dir
        db_filename = test_data_dir.join("packages.db")
        ...
        expected = get_escaped_text_output(test_data_dir.join("dump.out"))
        sqlite3 = which(self.prefix.bin.sqlite3)
        out = sqlite3(
            db_filename, ".dump", output=str.split, error=str.split
        )
        for exp in expected:
            assert re.search(exp, out), f"Expected '{exp}' in output"


If the file was instead copied from the tests subdirectory of the staged source code, the path would be obtained as shown below.

def test_example(self):
    """check example table dump"""
    test_cache_dir = self.test_suite.current_test_cache_dir
    db_filename = test_cache_dir.join("packages.db")


Alternatively, if the file was copied to the share/tests subdirectory as part of the installation process, the test could access the path as follows:

def test_example(self):
    """check example table dump"""
    db_filename = join_path(self.prefix.share.tests, "packages.db")


Comparing expected to actual outputs

The helper function check_outputs is available for packages to ensure the expected outputs from running an executable are contained within the actual outputs.

The signature for check_outputs is:

def check_outputs(expected, actual):


where each argument has the expected type and meaning:

  • expected is a string or list of strings containing the expected (raw) output.
  • actual is a string containing the actual output from executing the command.

Invoking the method is the equivalent of:

errors = []
for check in expected:
    if not re.search(check, actual):
        errors.append(f"Expected '{check}' in output '{actual}'")

if errors:
    raise RuntimeError("\n ".join(errors))
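
A usage sketch (the example executable and the expected pattern are hypothetical):

exe = which(self.prefix.bin.example)
actual = exe("--version", output=str.split, error=str.split)
check_outputs([r"example version \d+\.\d+"], actual)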


You may need to access files from one or more locations when writing stand-alone tests. This can happen if the software's repository does not include test source files, or includes them but provides no way to build the executables using the installed headers and libraries. In these cases you may need to reference the files relative to one or more root directories. The table below lists relevant path properties and provides additional examples of their use. Reading expected output provides examples of accessing files saved from the software repository, package repository, and installation.

Directory-to-property mapping

Root Directory Package Property Example(s)
Package (Spec) Installation self.prefix self.prefix.include, self.prefix.lib
Dependency Installation self.spec["<dependency-package>"].prefix self.spec["trilinos"].prefix.include
Test Suite Stage self.test_suite.stage join_path(self.test_suite.stage, "results.txt")
Spec's Test Stage self.test_suite.test_dir_for_spec(<spec>) self.test_suite.test_dir_for_spec(self.spec)
Current Spec's Build-time Files self.test_suite.current_test_cache_dir join_path(self.test_suite.current_test_cache_dir.examples, "foo.c")
Current Spec's Custom Test Files self.test_suite.current_test_data_dir join_path(self.test_suite.current_test_data_dir, "hello.f90")

Inheriting stand-alone tests

Stand-alone tests defined in parent (e.g., Build Systems) and virtual (e.g., Virtual dependencies) packages are executed by packages that inherit from or provide interface implementations for those packages, respectively.

The table below summarizes the stand-alone tests that will be executed along with those implemented in the package itself.

Inherited/provided stand-alone tests

Parent/Provider Package | Stand-alone Tests
C                       | Compiles hello.c and runs it
Cxx                     | Compiles and runs several hello programs
Fortran                 | Compiles and runs hello programs (F and f90)
Mpi                     | Compiles and runs mpi_hello (c, fortran)
PythonPackage           | Imports modules listed in the self.import_modules property with defaults derived from the tarball
SipPackage              | Imports modules listed in the self.import_modules property with defaults derived from the tarball

These tests are very basic, so it is important that package developers and maintainers provide additional stand-alone tests customized to the package.

WARNING:

Any package that implements a test method with the same name as an inherited method overrides the inherited method. If that is not your intent, and you are not explicitly calling the inherited method to extend it, make sure that all test methods and embedded test parts have unique names.


One example of a package that adds its own stand-alone tests to those "inherited" by the virtual package it provides an implementation for is the Openmpi package.

Below are snippets from running and viewing the stand-alone test results for openmpi:

$ spack test run --alias openmpi openmpi@4.1.4
==> Spack test openmpi
==> Testing package openmpi-4.1.4-ubmrigj
============================== 1 passed of 1 spec ==============================
$ spack test results -l openmpi
==> Results for test suite 'openmpi':
==> test specs:
==>   openmpi-4.1.4-ubmrigj PASSED
==> Testing package openmpi-4.1.4-ubmrigj
==> [2023-03-10-16:03:56.160361] Installing $spack/opt/spack/linux-rhel7-broadwell/gcc-8.3.1/openmpi-4.1.4-ubmrigjrqcafh3hffqcx7yz2nc5jstra/.spack/test to $test_stage/xez37ekynfbi4e7h4zdndfemzufftnym/openmpi-4.1.4-ubmrigj/cache/openmpi
==> [2023-03-10-16:03:56.625204] test: test_bin: test installed binaries
==> [2023-03-10-16:03:56.625439] test: test_bin_mpirun: run and check output of mpirun
==> [2023-03-10-16:03:56.629807] '$spack/opt/spack/linux-rhel7-broadwell/gcc-8.3.1/openmpi-4.1.4-ubmrigjrqcafh3hffqcx7yz2nc5jstra/bin/mpirun' '-n' '1' 'ls' '..'
openmpi-4.1.4-ubmrigj            repo
openmpi-4.1.4-ubmrigj-test-out.txt  test_suite.lock
PASSED: test_bin_mpirun
...
==> [2023-03-10-16:04:01.486977] test: test_version_oshcc: ensure version of oshcc is 8.3.1
SKIPPED: test_version_oshcc: oshcc is not installed
...
==> [2023-03-10-16:04:02.215227] Completed testing
==> [2023-03-10-16:04:02.215597]
======================== SUMMARY: openmpi-4.1.4-ubmrigj ========================
Openmpi::test_bin_mpirun .. PASSED
Openmpi::test_bin_ompi_info .. PASSED
Openmpi::test_bin_oshmem_info .. SKIPPED
Openmpi::test_bin_oshrun .. SKIPPED
Openmpi::test_bin_shmemrun .. SKIPPED
Openmpi::test_bin .. PASSED
...
============================== 1 passed of 1 spec ==============================


spack test list

Packages available for install testing can be found using the spack test list command. The command outputs all installed packages that have defined stand-alone test methods.

Alternatively you can use the --all option to get a list of all packages that have stand-alone test methods even if the packages are not installed.

For more information, refer to spack test list.

spack test run

Install tests can be run for one or more installed packages using the spack test run command. A test suite is created for all of the provided specs. The command accepts the same arguments provided to spack install (see Specs & dependencies). If no specs are provided the command tests all specs in the active environment or all specs installed in the Spack instance if no environment is active.

Test suites can be named using the --alias option. Unaliased test suites use the content hash of their specs as their name.

Some of the more commonly used debugging options are:

  • --fail-fast stops testing each package after the first failure
  • --fail-first stops testing packages after the first failure

Test output is written to a text log file by default; junit and cdash are also available as output formats through the --log-format option.

For more information, refer to spack test run.

spack test results

The spack test results command shows results for all completed test suites by default. The alias or content hash can be provided to limit reporting to the corresponding test suite.

The --logs option includes the output generated by the associated test(s) to facilitate debugging.

The --failed option limits the results shown to those of the failed tests, if any, of matching packages.
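
For example, to review the output of just the failed tests in a suite (the suite alias here is illustrative):

$ spack test results --logs --failed mysuite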

For more information, refer to spack test results.

spack test find

The spack test find command lists the aliases or content hashes of all test suites whose results are available.

For more information, refer to spack test find.

spack test remove

The spack test remove command removes test suites to declutter the test stage directory. You are prompted to confirm the removal of each test suite unless you use the --yes-to-all option.

For more information, refer to spack test remove.

File manipulation functions

Many builds are not perfect. If a build lacks an install target, or if it does not use systems like CMake or autotools, which have standard ways of setting compilers and options, you may need to edit files or install some files yourself to get them working with Spack.

You can do this with standard Python code, and Python has rich libraries with functions for file manipulation and filtering. Spack also provides a number of convenience functions of its own to make your life even easier. These functions are described in this section.

All of the functions in this section can be included by simply running:

from spack import *


This is already part of the boilerplate for packages created with spack create.

Filtering functions

filter_file(regex, repl, *filenames, **kwargs)

Works like sed, but with Python regular expression syntax. Takes a regular expression, a replacement, and a set of files. repl can be a raw string or a callable function. If it is a raw string, it can contain \1, \2, etc. to refer to capture groups in the regular expression. If it is a callable, it is passed the Python MatchObject and should return a suitable replacement string for the particular match.

Examples:

1.
Filtering a Makefile to force it to use Spack's compiler wrappers:

filter_file(r"^\s*CC\s*=.*",  "CC = "  + spack_cc,  "Makefile")
filter_file(r"^\s*CXX\s*=.*", "CXX = " + spack_cxx, "Makefile")
filter_file(r"^\s*F77\s*=.*", "F77 = " + spack_f77, "Makefile")
filter_file(r"^\s*FC\s*=.*",  "FC = "  + spack_fc,  "Makefile")


2.
Replacing #!/usr/bin/perl with #!/usr/bin/env perl in bib2xhtml:

filter_file(r"#!/usr/bin/perl",

"#!/usr/bin/env perl", prefix.bin.bib2xhtml)


3.
Switching the compilers used by mpich's MPI wrapper scripts from cc, etc. to the compilers used by the Spack build:

filter_file("CC='cc'", "CC='%s'" % self.compiler.cc,

prefix.bin.mpicc) filter_file("CXX='c++'", "CXX='%s'" % self.compiler.cxx,
prefix.bin.mpicxx)



change_sed_delimiter(old_delim, new_delim, *filenames)

Some packages, like TAU, have a build system that can't install into directories with, e.g., "@" in the name, because they use hard-coded sed commands in their build.

change_sed_delimiter finds all sed search/replace commands and changes the delimiter. For example, if the file contains commands that look like s///, you can use this to change them to s@@@.

Example of changing s/// to s@@@ in TAU:

change_sed_delimiter("@", ";", "configure")
change_sed_delimiter("@", ";", "utils/FixMakefile")
change_sed_delimiter("@", ";", "utils/FixMakefile.sed.default")



File functions

ancestor(dir, n=1)

Get the nth ancestor of the directory dir.

can_access(file_name)

True if we can read and write to the file at file_name. Same as native Python os.access(file_name, os.R_OK|os.W_OK).

install(src, dest)

Install a file to a particular location. For example, install a header into the include directory under the install prefix:

install("my-header.h", prefix.include)


join_path(prefix, *args)

An alias for os.path.join. This joins paths using the OS path separator.

mkdirp(*paths)

Create each of the directories in paths, creating any parent directories if they do not exist.

working_dir(dirname, **kwargs)

A Python context manager that makes it easier to work with subdirectories in builds. Use it with the Python with statement to change into a working directory, and when the with block is done, you change back to the original directory. Think of it as a safe pushd/popd combination, where popd is guaranteed to be called at the end, even if exceptions are thrown.

Example usage:

1.
The libdwarf build first runs configure and make in a subdirectory called libdwarf. It then implements the installation code itself. This is natural with working_dir:

with working_dir("libdwarf"):

configure("--prefix=" + prefix, "--enable-shared")
make()
install("libdwarf.a", prefix.lib)


2.
Many CMake builds require that you build "out of source", that is, in a subdirectory. You can handle creating and cd'ing to the subdirectory like the LLVM package does:

with working_dir("spack-build", create=True):

cmake("..",
"-DLLVM_REQUIRES_RTTI=1",
"-DPYTHON_EXECUTABLE=/usr/bin/python",
"-DPYTHON_INCLUDE_DIR=/usr/include/python2.6",
"-DPYTHON_LIBRARY=/usr/lib64/libpython2.6.so",
*std_cmake_args)
make()
make("install")


The create=True keyword argument causes the command to create the directory if it does not exist.


touch(path)

Create an empty file at path.
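
To illustrate, several of these helpers often appear together in a manual install method (a minimal sketch; the file names are illustrative):

def install(self, spec, prefix):
    # Create the documentation directory, including missing parents
    mkdirp(prefix.share.doc)
    # Copy a file from the build directory into the prefix
    install("README", prefix.share.doc)
    # Leave an empty placeholder file
    touch(join_path(prefix.share.doc, "INSTALL_NOTES"))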

Making a package discoverable with spack external find

The simplest way to make a package discoverable with spack external find is to:

1.
Define the executables associated with the package
2.
Implement a method to determine the versions of these executables

Minimal detection

The first step is fairly simple, as it only requires specifying a package-level executables attribute:

class Foo(Package):
    # Each string provided here is treated as a regular expression, and
    # would match for example "foo", "foobar", and "bazfoo".
    executables = ["foo"]


This attribute must be a list of strings. Each string is a regular expression (e.g. "gcc" would match "gcc", "gcc-8.3", "my-weird-gcc", etc.) used to determine a set of system executables that might be part of this package. Note that to match only executables named "gcc" the regular expression "^gcc$" must be used.

Finally, to determine the version of each executable, the determine_version method must be implemented:

@classmethod
def determine_version(cls, exe):
    """Return either the version of the executable passed as argument
    or ``None`` if the version cannot be determined.

    Args:
        exe (str): absolute path to the executable being examined
    """


This method receives as input the path to a single executable and must return its version as a string; if the version cannot be determined, or the executable turns out not to be an instance of the package, the method can return None and the executable will be discarded as a candidate. Implementing the two steps above is mandatory, and gives the package the basic ability to detect whether a spec is present on the system at a given version.

NOTE:

Any executable for which the determine_version method returns None will be discarded and won't appear in later stages of the workflow described below.
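
As a concrete sketch, a package might parse a version flag as follows (the --version banner format and the regular expression are illustrative assumptions, and re is assumed to be imported at the top of package.py):

@classmethod
def determine_version(cls, exe):
    # Run the candidate executable and parse its version banner,
    # assuming it prints something like "foo version 1.2.3".
    output = Executable(exe)("--version", output=str, error=str)
    match = re.search(r"version\s+(\S+)", output)
    return match.group(1) if match else None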


Additional functionality

Besides the two mandatory steps described above, there are also optional methods that can be implemented to either increase the amount of details being detected or improve the robustness of the detection logic in a package.

Variants and custom attributes

The determine_variants method can be optionally implemented in a package to detect additional details of the spec:

@classmethod
def determine_variants(cls, exes, version_str):
    """Return either a variant string, a tuple of a variant string
    and a dictionary of extra attributes that will be recorded in
    packages.yaml, or a list of those items.

    Args:
        exes (list of str): list of executables (absolute paths) that
            live in the same prefix and share the same version
        version_str (str): version associated with the list of
            executables, as detected by ``determine_version``
    """


This method takes as input a list of executables that live in the same prefix and share the same version string, and returns either:

1.
A variant string
2.
A tuple of a variant string and a dictionary of extra attributes
3.
A list of items matching either 1 or 2 (if multiple specs are detected from the set of executables)

If extra attributes are returned, they will be recorded in packages.yaml and be available for later reuse. As an example, the gcc package will record by default the different compilers found and an entry in packages.yaml would look like:

packages:
  gcc:
    externals:
    - spec: "gcc@9.0.1 languages=c,c++,fortran"
      prefix: /usr
      extra_attributes:
        compilers:
          c: /usr/bin/x86_64-linux-gnu-gcc-9
          c++: /usr/bin/x86_64-linux-gnu-g++-9
          fortran: /usr/bin/x86_64-linux-gnu-gfortran-9


This allows us, for instance, to keep track of executables that would be named differently if built by Spack (e.g. x86_64-linux-gnu-gcc-9 instead of just gcc).
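
A sketch of such a method (the detection logic below is illustrative, not the actual gcc implementation; os is assumed to be imported at the top of package.py):

@classmethod
def determine_variants(cls, exes, version_str):
    compilers = {}
    for exe in exes:
        name = os.path.basename(exe)
        if "g++" in name:
            compilers["c++"] = exe
        elif "gcc" in name:
            compilers["c"] = exe
    # Return a variant string plus extra attributes for packages.yaml
    return "languages=c,c++", {"compilers": compilers}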

Filter matching executables

Sometimes defining the appropriate regex for the executables attribute might prove difficult, especially if one has to deal with corner cases or exclude "red herrings". To help keep the regular expressions as simple as possible, each package can optionally implement a filter_detected_exes method:

@classmethod
def filter_detected_exes(cls, prefix, exes_in_prefix):
    """Return a filtered list of the executables in prefix"""


which takes as input a prefix and a list of matching executables and returns a filtered list of said executables.

Using this method has the advantage of allowing custom logic for filtering, and does not restrict the user to regular expressions only. Consider the case of detecting the GNU C++ compiler. If we try to search for executables that match g++, that would have the unwanted side effect of also selecting clang++, a C++ compiler provided by another package, if present on the system. Trying to select executables that contain g++ but not clang would be quite complicated to do using regexes only. Employing the filter_detected_exes method, it becomes:

class Gcc(Package):
    executables = ["g++"]

    @classmethod
    def filter_detected_exes(cls, prefix, exes_in_prefix):
        return [x for x in exes_in_prefix if "clang" not in x]


Another possibility that this method opens is to apply certain filtering logic only when specific conditions are met (e.g., making certain decisions on one OS but not on another).

Validate detection

To increase detection robustness, packagers may also implement a method to validate the detected Spec objects:

@classmethod
def validate_detected_spec(cls, spec, extra_attributes):
    """Validate a detected spec. Raise an exception if validation fails."""


This method receives a detected spec along with its extra attributes and can be used to check that certain conditions are met by the spec. Packagers can either use assertions or raise an InvalidSpecDetected exception when the check fails. In case the conditions are not honored the spec will be discarded and any message associated with the assertion or the exception will be logged as the reason for discarding it.

As an example, a package that wants to check that the compilers attribute is in the extra attributes can implement this method like this:

@classmethod
def validate_detected_spec(cls, spec, extra_attributes):
    """Check that "compilers" is in the extra attributes."""
    msg = ("the extra attribute 'compilers' must be set for "
           "the detected spec '{0}'".format(spec))
    assert "compilers" in extra_attributes, msg


or like this:

@classmethod
def validate_detected_spec(cls, spec, extra_attributes):
    """Check that "compilers" is in the extra attributes."""
    if "compilers" not in extra_attributes:
        msg = ("the extra attribute 'compilers' must be set for "
               "the detected spec '{0}'".format(spec))
        raise InvalidSpecDetected(msg)


Custom detection workflow

In the rare case when the mechanisms described so far don't fit the detection of a package, the implementation of all the methods above can be disregarded and instead a custom determine_spec_details method can be implemented directly in the package class (note that the definition of the executables attribute is still required):

@classmethod
def determine_spec_details(cls, prefix, exes_in_prefix):
    # exes_in_prefix = a set of paths, each path is an executable
    # prefix = a prefix that is common to each path in exes_in_prefix
    # return None or [] if none of the exes represent an instance of
    # the package. Return one or more Specs for each instance of the
    # package which is thought to be installed in the provided prefix


This method takes as input a set of discovered executables (which match those specified by the user) as well as a common prefix shared by all of those executables. The function must return one or more spack.spec.Spec associated with the executables (it can also return None to indicate that no provided executables are associated with the package).

As an example, consider a made-up package called foo-package which builds an executable called foo. FooPackage would appear as follows:

class FooPackage(Package):
    homepage = "..."
    url = "..."

    version(...)

    # Each string provided here is treated as a regular expression, and
    # would match for example "foo", "foobar", and "bazfoo".
    executables = ["foo"]

    @classmethod
    def determine_spec_details(cls, prefix, exes_in_prefix):
        candidates = list(x for x in exes_in_prefix
                          if os.path.basename(x) == "foo")
        if not candidates:
            return
        # This implementation is lazy and only checks the first candidate
        exe_path = candidates[0]
        exe = Executable(exe_path)
        output = exe("--version", output=str, error=str)
        version_str = ...  # parse output for version string
        return Spec.from_detection(
            "foo-package@{0}".format(version_str)
        )


Add detection tests to packages

To ensure that software is detected correctly for multiple configurations and on different systems users can write a detection_test.yaml file and put it in the package directory alongside the package.py file. This YAML file contains enough information for Spack to mock an environment and try to check if the detection logic yields the results that are expected.

As a general rule, attributes at the top-level of detection_test.yaml represent search mechanisms and they each map to a list of tests that should confirm the validity of the package's detection logic.

The detection tests can be run with the following command:

$ spack audit externals


Any detected errors are reported to the screen.

Tests for PATH inspections

Detection tests that rely on PATH inspections are listed under the paths attribute:

paths:
- layout:
  - executables:
    - "bin/clang-3.9"
    - "bin/clang++-3.9"
    script: |
      echo "clang version 3.9.1-19ubuntu1 (tags/RELEASE_391/rc2)"
      echo "Target: x86_64-pc-linux-gnu"
      echo "Thread model: posix"
      echo "InstalledDir: /usr/bin"
  results:
  - spec: 'llvm@3.9.1 +clang~lld~lldb'


Each test is performed by first creating a temporary directory structure as specified in the corresponding layout and by then running package detection and checking that the outcome matches the expected results. The exact details on how to specify both the layout and the results are reported in the table below:

Test based on PATH inspections

Option Name            | Description                                           | Allowed Values                                    | Required Field
layout                 | Specifies the filesystem tree used for the test       | List of objects                                   | Yes
layout:[0]:executables | Relative paths for the mock executables to be created | List of strings                                   | Yes
layout:[0]:script      | Mock logic for the executable                         | Any valid shell script                            | Yes
results                | List of expected results                              | List of objects (empty if no result is expected)  | Yes
results:[0]:spec       | A spec that is expected from detection                | Any valid spec                                    | Yes

Reuse tests from other packages

When using a custom repository, it is possible to customize a package that already exists in builtin and reuse its external tests. To do so, just write a detection_test.yaml alongside the customized package.py with an includes attribute. For instance, the detection_test.yaml for myrepo.llvm might look like:

includes:
- "builtin.llvm"


This YAML file instructs Spack to run the detection tests defined in builtin.llvm in addition to those locally defined in the file.

Style guidelines for packages

The following guidelines are provided in the interest of making Spack packages work in a consistent manner:

Variant Names

Spack packages with variants similar to already-existing Spack packages should use the same name for their variants. Standard variant names are:

Name Default Description
shared True Build shared libraries
mpi True Use MPI
python False Build Python extension


If specified in this table, the corresponding default should be used when declaring a variant.

The semantics of the shared variant are important. When a package is built ~shared, the package guarantees that no shared libraries are built. When a package is built +shared, the package guarantees that shared libraries are built, but it makes no guarantee about whether static libraries are built.
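
For example, a package declaring the shared variant with its standard default:

variant("shared", default=True, description="Build shared libraries")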

Version Lists

Spack packages should list supported versions with the newest first.
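
For example, with checksums elided:

version("2.1.0", sha256="...")
version("2.0.0", sha256="...")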

Using home vs prefix

home and prefix are both attributes that can be queried on a package's dependencies, often when passing configure arguments pointing to the location of a dependency. The difference is that while prefix is the location on disk where a concrete package resides, home is the logical location in which a package resides, which may differ from prefix in the case of virtual packages or other special circumstances. For most use cases inside a package, dependency locations can be accessed via either self.spec["foo"].home or self.spec["foo"].prefix. Specific packages that should be consumed by dependents via .home instead of .prefix should note this in their respective documentation.
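
For instance, a package might point its configure script at a dependency location via home (a minimal sketch; the java dependency and flag name are illustrative):

def configure_args(self):
    return [f"--with-java={self.spec['java'].home}"]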

See Custom attributes for more details and an example implementing a custom home attribute.

Packaging workflow commands

When you are building packages, you will likely not get things completely right the first time.

The spack install command performs a number of tasks before it finally installs each package. It downloads an archive, expands it in a temporary directory, and only then gives control to the package's install() method. If the build doesn't go as planned, you may want to clean up the temporary directory, or if the package isn't downloading properly, you might want to run only the fetch stage of the build.

Spack performs best-effort installation of package dependencies by default, which means it will continue to install as many dependencies as possible after detecting failures. If you are trying to install a package with a lot of dependencies where one or more may fail to build, you might want to try the --fail-fast option to stop the installation process on the first failure.

A typical package workflow might look like this:

$ spack edit mypackage
$ spack install --fail-fast mypackage
... build breaks! ...
$ spack clean mypackage
$ spack edit mypackage
$ spack install --fail-fast mypackage
... repeat clean/install until install works ...


Below are some commands that will allow you some finer-grained control over the install process.

spack fetch

The first step of spack install. Takes a spec and determines the correct download URL for the requested package version, then downloads the archive, checks it against a checksum, and stores it in a staging directory if the check was successful. The staging directory will be located under the first writable directory in the build_stage configuration setting.

When run after the archive has already been downloaded, spack fetch is idempotent and will not download the archive again.

spack stage

The second step in spack install after spack fetch. Expands the downloaded archive in its temporary directory, where it will be built by spack install. Similar to fetch, if the archive has already been expanded, stage is idempotent.

spack patch

After staging, Spack applies patches to downloaded packages, if any have been specified in the package file. This command will run the install process through the fetch, stage, and patch phases. Spack keeps track of whether patches have already been applied and skips this step if they have been. If Spack discovers that patches didn't apply cleanly on some previous run, then it will restage the entire package before patching.

spack restage

Restores the source code to pristine state, as it was before building.

Does this in one of two ways:

1.
If the source was fetched as a tarball, deletes the entire build directory and re-expands the tarball.
2.
If the source was checked out from a repository, this deletes the build directory and checks it out again.

spack clean

Cleans up Spack's temporary and cached files. This command can be used to recover disk space if temporary files from interrupted or failed installs accumulate.

When called with --stage or without arguments this removes all staged files.

The --downloads option removes cached downloads.

You can force the removal of all install failure tracking markers using the --failures option. Note that spack install will automatically clear relevant failure markings prior to performing the requested installation(s).

Long-lived caches, like the virtual package index, are removed using the --misc-cache option.

The --python-cache option removes .pyc, .pyo, and __pycache__ folders.

To remove all of the above, the command can be called with --all.

When called with positional arguments, this command cleans up temporary files only for a particular package. If fetch, stage, or install are run again after this, Spack's build process will start from scratch.

Keeping the stage directory on success

By default, spack install will delete the staging area once a package has been successfully built and installed. Use --keep-stage to leave the build directory intact:

$ spack install --keep-stage <spec>


This allows you to inspect the build directory and potentially debug the build. You can use clean later to get rid of the unwanted temporary files.

Keeping the install prefix on failure

By default, spack install will delete any partially constructed install prefix if anything fails during install(). If you want to keep the prefix anyway (e.g. to diagnose a bug), you can use --keep-prefix:

$ spack install --keep-prefix <spec>


Note that this may confuse Spack into thinking that the package has been installed properly, so you may need to use spack uninstall --force to get rid of the install prefix before you build again:

$ spack uninstall --force <spec>


Graphing dependencies

spack graph

Spack provides the spack graph command for graphing dependencies. The command by default generates an ASCII rendering of a spec's dependency graph. For example:

$ spack graph hdf5 target=x86_64 os=SUSE


At the top is the root package in the DAG, with dependency edges emerging from it. On a color terminal, the edges are colored by which dependency they lead to.

$ spack graph --deptype=link hdf5 target=x86_64 os=SUSE


The deptype argument tells Spack what types of dependencies to graph. By default it includes link and run dependencies but not build dependencies. Supplying --deptype=link will show only link dependencies, while --deptype=all (equivalent to --deptype=build,link,run,test) shows build dependencies as well. Options for deptype include:

  • Any combination of build, link, run, and test separated by commas.
  • all for all types of dependencies.
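
For example, to include build dependencies alongside link dependencies (the spec is illustrative):

$ spack graph --deptype=build,link hdf5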

You can also use spack graph to generate graphs in the widely used Dot format. For example:

$ spack graph --dot hdf5 target=x86_64 os=SUSE


This graph can be provided as input to other graphing tools, such as those in Graphviz. If you have graphviz installed, you can write straight to PDF like this:

$ spack graph --dot hdf5 | dot -Tpdf > hdf5.pdf


Interactive shell support

Spack provides some limited shell support to make life easier for packagers. You can enable these commands by sourcing a setup file in the share/spack directory. For bash or ksh, run:

export SPACK_ROOT=/path/to/spack
. $SPACK_ROOT/share/spack/setup-env.sh


For csh and tcsh run:

setenv SPACK_ROOT /path/to/spack
source $SPACK_ROOT/share/spack/setup-env.csh


spack cd will then be available.

spack cd

spack cd allows you to quickly cd to pertinent directories in Spack. Suppose you've staged a package but you want to modify it before you build it:

$ spack stage libelf
==> Trying to fetch from http://www.mr511.de/software/libelf-0.8.13.tar.gz
######################################################################## 100.0%
==> Staging archive: ~/spack/var/spack/stage/libelf@0.8.13%gcc@4.8.3 arch=linux-debian7-x86_64/libelf-0.8.13.tar.gz
==> Created stage in ~/spack/var/spack/stage/libelf@0.8.13%gcc@4.8.3 arch=linux-debian7-x86_64.
$ spack cd libelf
$ pwd
~/spack/var/spack/stage/libelf@0.8.13%gcc@4.8.3 arch=linux-debian7-x86_64/libelf-0.8.13


spack cd here changed the current working directory to the directory containing the expanded libelf source code. There are a number of other places you can cd to in the spack directory hierarchy:

$ spack cd --help
usage: spack cd [-h] [-m | -r | -i | -p | -P | -s | -S | --source-dir | -b | -e [name]] [--first] ...
cd to spack directories in the shell
positional arguments:
  spec                  package spec

options:
  --first               use the first match if multiple packages match the spec
  --source-dir          source directory for a spec (requires it to be staged first)
  -P, --packages        top-level packages directory for Spack
  -S, --stages          top level stage directory
  -b, --build-dir       build directory for a spec (requires it to be staged first)
  -e [name], --env [name]
                        location of the named or current environment
  -h, --help            show this help message and exit
  -i, --install-dir     install prefix for spec (spec need not be installed)
  -m, --module-dir      spack python module directory
  -p, --package-dir     directory enclosing a spec's package.py file
  -r, --spack-root      spack installation root
  -s, --stage-dir       stage directory for a spec


Some of these change directory into package-specific locations (stage directory, install directory, package directory) and others change to core spack locations. For example, spack cd --module-dir will take you to the main python source directory of your spack install.

spack build-env

spack build-env functions much like the standard Unix env command, but it takes a spec as an argument. You can use it to see the environment variables that will be set when a particular build runs, for example:

$ spack build-env mpileaks@1.1%intel


This will display the entire environment that will be set when the mpileaks@1.1%intel build runs.

To run commands in a package's build environment, you can simply provide them after the spec argument to spack build-env:

$ spack cd mpileaks@1.1%intel
$ spack build-env mpileaks@1.1%intel ./configure


This will cd to the build directory and then run configure in the package's build environment.

spack location

spack location is the same as spack cd but it does not require shell support. It simply prints out the path you ask for, rather than cd'ing to it. In bash, this:

$ cd $(spack location --build-dir <spec>)


is the same as:

$ spack cd --build-dir <spec>


spack location is intended for use in scripts or makefiles that need to know where packages are installed. e.g., in a makefile you might write:

DWARF_PREFIX = $(shell spack location --install-dir libdwarf)
CXXFLAGS += -I$(DWARF_PREFIX)/include
CXXFLAGS += -L$(DWARF_PREFIX)/lib


Package class architecture

NOTE:

This section aims to provide a high-level knowledge of how the package class architecture evolved in Spack, and provides some insights on the current design.


Packages in Spack were originally designed to support only a single build system. The overall class structure for a package looked like: [image]

In this architecture the base class AutotoolsPackage was responsible for both the metadata related to the autotools build system (e.g. dependencies or variants common to all packages using it), and for encoding the default installation procedure.

In reality, a non-negligible number of packages either change their build system as the project evolves or use different build systems on different platforms. An architecture based on a single class requires hacks or other workarounds to deal with these cases.

To support a model more adherent to reality, Spack v0.19 changed its internal design by extracting the attributes and methods related to building a software into a separate hierarchy: [image]

In this new format each package.py contains one *Package class that gathers all the metadata, and one or more *Builder classes that encode the installation procedure. A specific builder object is created just before the software is built, at a time when Spack knows which build system needs to be used for the current installation, and it receives a package object during initialization.

build_system variant

To allow imposing conditions based on the build system, each package must have a build_system variant, which is usually inherited from base classes. This variant allows for writing metadata that is conditional on the build system:

with when("build_system=cmake"):

depends_on("cmake", type="build")


and also for selecting a specific build system from a spec literal, like in the following command:

$ spack install arpack-ng build_system=autotools


Compatibility with single-class format

Internally, Spack always uses builders to perform operations related to the installation of a specific piece of software. The builders are created by the spack.builder.create function:

def create(pkg):
    """Given a package object with an associated concrete spec,
    return the builder object that can install it.

    Args:
        pkg (spack.package_base.PackageBase): package for which we want the builder
    """
    if id(pkg) not in _BUILDERS:
        _BUILDERS[id(pkg)] = _create(pkg)
    return _BUILDERS[id(pkg)]


To achieve backward compatibility with the single-class format Spack creates in this function a special "adapter builder", if no custom builder is detected in the recipe: [image]

Overall, the role of the adapter is to route access to attributes and methods first through the *Package hierarchy, and then back to the base class builder. This is shown schematically in the diagram above, where the adapter's role is to "emulate" a method resolution order like the one represented by the red arrows.

Specifying License Information

Most of the software in Spack is open source, and most open source software is released under one or more common open source licenses. Specifying the license that a package is released under in a project's package.py is good practice. To specify a license, find the SPDX identifier for a project and then add it using the license directive:

license("<SPDX Identifier HERE>")


For example, the SPDX ID for the Apache Software License, version 2.0 is Apache-2.0, so you'd write:

license("Apache-2.0")


Or, for a dual-licensed package like Spack, you would use an SPDX Expression with both of its licenses:

license("Apache-2.0 OR MIT")


Note that specifying a license without a when clause makes it apply to all versions and variants of the package, which might not actually be the case. For example, a project might have switched licenses at some point or have certain build configurations that include files that are licensed differently. Spack itself used to be under the LGPL-2.1 license, until it was relicensed in version 0.12 in 2018.

You can specify when a license() directive applies using a when= clause, just like other directives. For example, to specify that one license identifier applies to versions up to 0.11, but another license applies to later versions, you could write:

license("LGPL-2.1", when="@:0.11")
license("Apache-2.0 OR MIT", when="@0.12:")


Note that unlike for most other directives, the when= constraints in the license() directive can't intersect. Spack needs to be able to resolve exactly one license identifier expression for any given version. To specify multiple licenses, use SPDX expressions and operators as above. The operators you probably care most about are:

  • OR: user chooses one license to adhere to; and
  • AND: user has to adhere to all the licenses.

You may also care about license exceptions that use the WITH operator, e.g. Apache-2.0 WITH LLVM-exception.

BUILD SYSTEMS

Spack defines a number of classes which understand how to use common build systems (Makefiles, CMake, etc.). Spack package definitions can inherit these classes in order to streamline their builds.

This guide provides information specific to each particular build system. It assumes that you've read the Packaging Guide and expands on these ideas for each distinct build system that Spack supports:

Makefile

The most primitive build system a package can use is a plain Makefile. Makefiles are simple to write for small projects, but they usually require you to edit the Makefile to set platform and compiler-specific variables.

Phases

The MakefileBuilder and MakefilePackage base classes come with 3 phases:

1.
edit - edit the Makefile
2.
build - build the project
3.
install - install the project

By default, edit does nothing, but you can override it to replace hard-coded Makefile variables. The build and install phases run:

$ make
$ make install


Important files

The main file that matters for a MakefilePackage is the Makefile. This file will be named one of the following ways:

  • GNUmakefile (only works with GNU Make)
  • Makefile (most common)
  • makefile

Some Makefiles also include other configuration files. Check for an include directive in the Makefile.

Build system dependencies

Spack assumes that the operating system will have a valid make utility installed already, so you don't need to add a dependency on make. However, if the package uses a GNUmakefile or the developers recommend using GNU Make, you should add a dependency on gmake:

depends_on("gmake", type="build")


Types of Makefile packages

Most of the work involved in packaging software that uses Makefiles involves overriding or replacing hard-coded variables. Many packages make the mistake of hard-coding compilers, usually for GCC or Intel. This is fine if you happen to be using that particular compiler, but Spack is designed to work with any compiler, and you need to ensure that this is the case.

Depending on how the Makefile is designed, there are 4 common strategies that can be used to set or override the appropriate variables:

Environment variables

Make has multiple types of assignment operators. Some Makefiles use = to assign variables. The only way to override these variables is to edit the Makefile or override them on the command-line. However, Makefiles that use ?= for assignment honor environment variables. Since Spack already sets CC, CXX, F77, and FC, you won't need to worry about setting these variables. If there are any other variables you need to set, you can do this in the edit method:

def edit(self, spec, prefix):
    env["PREFIX"] = prefix
    env["BLASLIB"] = spec["blas"].libs.ld_flags


cbench is a good example of a simple package that does this, while esmf is a good example of a more complex package.

Command-line arguments

If the Makefile ignores environment variables, the next thing to try is command-line arguments. You can do this by overriding the build_targets attribute. If you don't need access to the spec, you can do this like so:

build_targets = ["CC=cc"]


If you do need access to the spec, you can create a property like so:

@property
def build_targets(self):
    spec = self.spec
    return [
        "CC=cc",
        f"BLASLIB={spec['blas'].libs.ld_flags}",
    ]


cloverleaf is a good example of a package that uses this strategy.

Edit Makefile

Some Makefiles are just plain stubborn and will ignore command-line variables. The only way to ensure that these packages build correctly is to directly edit the Makefile. Spack provides a FileFilter class and a filter_file method to help with this. For example:

def edit(self, spec, prefix):
    makefile = FileFilter("Makefile")
    makefile.filter(r"^\s*CC\s*=.*", f"CC = {spack_cc}")
    makefile.filter(r"^\s*CXX\s*=.*", f"CXX = {spack_cxx}")
    makefile.filter(r"^\s*F77\s*=.*", f"F77 = {spack_f77}")
    makefile.filter(r"^\s*FC\s*=.*", f"FC = {spack_fc}")


stream is a good example of a package that involves editing a Makefile to set the appropriate variables.

Config file

More complex packages often involve Makefiles that include a configuration file. These configuration files are primarily composed of variables relating to the compiler, platform, and the location of dependencies or names of libraries. Since these config files are dependent on the compiler and platform, you will often see entire directories of examples for common compilers and architectures. Use these examples to help determine what possible values to use.

If the config file is long and only contains one or two variables that need to be modified, you can use the technique above to edit the config file. However, if you end up needing to modify most of the variables, it may be easier to write a new file from scratch.

If each variable is independent of each other, a dictionary works well for storing variables:

def edit(self, spec, prefix):
    config = {
        "CC": "cc",
        "MAKE": "make",
    }

    if spec.satisfies("+blas"):
        config["BLAS_LIBS"] = spec["blas"].libs.joined()

    with open("make.inc", "w") as inc:
        for key in config:
            inc.write(f"{key} = {config[key]}\n")


elk is a good example of a package that uses a dictionary to store configuration variables.

If the order of variables is important, it may be easier to store them in a list:

def edit(self, spec, prefix):
    config = [
        f"INSTALL_DIR = {prefix}",
        "INCLUDE_DIR = $(INSTALL_DIR)/include",
        "LIBRARY_DIR = $(INSTALL_DIR)/lib",
    ]

    with open("make.inc", "w") as inc:
        for var in config:
            inc.write(f"{var}\n")


hpl is a good example of a package that uses a list to store configuration variables.

Variables to watch out for

The following is a list of common variables to watch out for. The first two sections are implicit variables defined by Make and will always use the same name, while the rest are user-defined variables and may vary from package to package.

  • Compilers

    This includes variables such as CC, CXX, F77, F90, and FC, as well as variables related to MPI compiler wrappers, like MPICC and friends.

  • Compiler flags

    This includes variables for specific compilers, like CFLAGS, CXXFLAGS, F77FLAGS, F90FLAGS, FCFLAGS, and CPPFLAGS. These variables are often hard-coded to contain flags specific to a certain compiler. If these flags don't work for every compiler, you may want to consider filtering them.

  • Variables that enable or disable features

    This includes variables like MPI, OPENMP, PIC, and DEBUG. These flags often require you to create a variant so that you can either build with or without MPI support, for example. These flags are often compiler-dependent. You should replace them with the appropriate compiler flags, such as self.compiler.openmp_flag or self.compiler.pic_flag.

  • Platform flags

    These flags control the type of architecture that the executable is compiled for. Watch out for variables like PLAT or ARCH.

  • Dependencies

    Look out for variables that sound like they could be used to locate dependencies, such as JAVA_HOME, JPEG_ROOT, or ZLIBDIR. Also watch out for variables that control linking, such as LIBS, LDFLAGS, and INCLUDES. These variables need to be set to the installation prefix of a dependency, or to the correct linker flags to link to that dependency.

  • Installation prefix

    If your Makefile has an install target, it needs some way of knowing where to install. By default, many packages install to /usr or /usr/local. Since many Spack users won't have sudo privileges, it is imperative that each package is installed to the proper prefix. Look for variables like PREFIX or INSTALL.


Makefiles in a sub-directory

Not every package places its Makefile in the root of the package tarball. If the Makefile is in a sub-directory like src, you can tell Spack where to locate it like so:

build_directory = "src"


Manual installation

Not every Makefile includes an install target. If this is the case, you can override the default install method to manually install the package:

def install(self, spec, prefix):
    mkdir(prefix.bin)
    install("foo", prefix.bin)
    install_tree("lib", prefix.lib)


External documentation

For more information on reading and writing Makefiles, see: https://www.gnu.org/software/make/manual/make.html

Maven

Apache Maven is a general-purpose build system that does not rely on Makefiles to build software. It is designed for building and managing Java-based projects.

Phases

The MavenBuilder and MavenPackage base classes come with the following phases:

1.
build - compile code and package into a JAR file
2.
install - copy to installation prefix

By default, these phases run:

$ mvn package
$ install . <prefix>


Important files

Maven packages can be identified by the presence of a pom.xml file. This file lists dependencies and other metadata about the project. There may also be configuration files in the .mvn directory.

Build system dependencies

Maven requires the mvn executable to build the project. It also requires Java at both build- and run-time. Because of this, the base class automatically adds the following dependencies:

depends_on('java', type=('build', 'run'))
depends_on('maven', type='build')


In the pom.xml file, you may see sections like:

<requireJavaVersion>
  <version>[1.7,)</version>
</requireJavaVersion>
<requireMavenVersion>
  <version>[3.5.4,)</version>
</requireMavenVersion>


This specifies the versions of Java and Maven that are required to build the package. See https://docs.oracle.com/middleware/1212/core/MAVEN/maven_version.htm#MAVEN402 for a description of this version range syntax. In this case, you should add:

depends_on('java@7:', type='build')
depends_on('maven@3.5.4:', type='build')


Passing arguments to the build phase

The default build and install phases should be sufficient to install most packages. However, you may want to pass additional flags to the build phase. For example:

def build_args(self):
    return [
        '-Pdist,native',
        '-Dtar',
        '-Dmaven.javadoc.skip=true'
    ]


External documentation

For more information on the Maven build system, see: https://maven.apache.org/index.html

SCons

SCons is a general-purpose build system that does not rely on Makefiles to build software. SCons is written in Python, and handles all building and linking itself.

As far as build systems go, SCons is very non-uniform. It provides a common framework for developers to write build scripts, but the build scripts themselves can vary drastically. Some developers add subcommands like:

$ scons clean
$ scons build
$ scons test
$ scons install


Others don't add any subcommands. Some have configuration options that can be specified through variables on the command line. Others don't.

Phases

As previously mentioned, SCons allows developers to add subcommands like build and install, but by default, installation usually looks like:

$ scons
$ scons install


To facilitate this, the SConsBuilder and SconsPackage base classes provide the following phases:

1.
build - build the package
2.
install - install the package

Package developers often add unit tests that can be invoked with scons test or scons check. Spack provides a test method to handle this. Since we don't know which one the package developer chose, the test method does nothing by default, but can be easily overridden like so:

def test(self):

scons("check")


Important files

SCons packages can be identified by their SConstruct files. These files handle everything from setting up subcommands and command-line options to linking and compiling.

One thing to look for is the EnsureSConsVersion function:

EnsureSConsVersion(2, 3, 0)


This means that SCons 2.3.0 is the earliest release that will work. You should specify this in a depends_on statement.

Build system dependencies

At the bare minimum, packages that use the SCons build system need a scons dependency. Since this is always the case, the SConsPackage base class already contains:

depends_on("scons", type="build")


If you want to specify a particular version requirement, you can override this in your package:

depends_on("scons@2.3.0:", type="build")


Finding available options

The first place to start when looking for a list of valid options to build a package is scons --help. Some packages like kahip don't bother overwriting the default SCons help message, so this isn't very useful, but other packages like serf print a list of valid command-line variables:

$ scons --help
scons: Reading SConscript files ...
Checking for GNU-compatible C compiler...yes
scons: done reading SConscript files.

PREFIX: Directory to install under ( /path/to/PREFIX )
    default: /usr/local
    actual: /usr/local

LIBDIR: Directory to install architecture dependent libraries under ( /path/to/LIBDIR )
    default: $PREFIX/lib
    actual: /usr/local/lib

APR: Path to apr-1-config, or to APR's install area ( /path/to/APR )
    default: /usr
    actual: /usr

APU: Path to apu-1-config, or to APR's install area ( /path/to/APU )
    default: /usr
    actual: /usr

OPENSSL: Path to OpenSSL's install area ( /path/to/OPENSSL )
    default: /usr
    actual: /usr

ZLIB: Path to zlib's install area ( /path/to/ZLIB )
    default: /usr
    actual: /usr

GSSAPI: Path to GSSAPI's install area ( /path/to/GSSAPI )
    default: None
    actual: None

DEBUG: Enable debugging info and strict compile warnings (yes|no)
    default: False
    actual: False

APR_STATIC: Enable using a static compiled APR (yes|no)
    default: False
    actual: False

CC: Command name or path of the C compiler
    default: None
    actual: gcc

CFLAGS: Extra flags for the C compiler (space-separated)
    default: None
    actual:

LIBS: Extra libraries passed to the linker, e.g. "-l<library1> -l<library2>" (space separated)
    default: None
    actual: None

LINKFLAGS: Extra flags for the linker (space-separated)
    default: None
    actual:

CPPFLAGS: Extra flags for the C preprocessor (space separated)
    default: None
    actual: None

Use scons -H for help about command-line options.


More advanced packages like cantera use scons --help to print a list of subcommands:

$ scons --help
scons: Reading SConscript files ...
SCons build script for Cantera
Basic usage:

'scons help' - print a description of user-specifiable options.
'scons build' - Compile Cantera and the language interfaces using
default options.
'scons clean' - Delete files created while building Cantera.
'[sudo] scons install' - Install Cantera.
'[sudo] scons uninstall' - Uninstall Cantera.
'scons test' - Run all tests which did not previously pass or for which the
results may have changed.
'scons test-reset' - Reset the passing status of all tests.
'scons test-clean' - Delete files created while running the tests.
'scons test-help' - List available tests.
'scons test-NAME' - Run the test named "NAME".
'scons <command> dump' - Dump the state of the SCons environment to the
screen instead of doing <command>, e.g.
'scons build dump'. For debugging purposes.
'scons samples' - Compile the C++ and Fortran samples.
'scons msi' - Build a Windows installer (.msi) for Cantera.
'scons sphinx' - Build the Sphinx documentation
'scons doxygen' - Build the Doxygen documentation


You'll notice that cantera provides a scons help subcommand. Running scons help prints a list of valid command-line variables.

Passing arguments to scons

Now that you know what arguments the project accepts, you can add them to the package build phase. This is done by overriding build_args like so:

def build_args(self, spec, prefix):
    args = [
        f"PREFIX={prefix}",
        f"ZLIB={spec['zlib'].prefix}",
    ]

    if spec.satisfies("+debug"):
        args.append("DEBUG=yes")
    else:
        args.append("DEBUG=no")

    return args


SConsPackage also provides an install_args function that you can override to pass additional arguments to scons install.

Compiler wrappers

By default, SCons builds all packages in a separate execution environment, and doesn't pass any environment variables from the user environment. Even changes to PATH are not propagated unless the package developer does so.

This is particularly troublesome for Spack's compiler wrappers, which depend on environment variables to manage dependencies and linking flags. In many cases, SCons packages are not compatible with Spack's compiler wrappers, and linking must be done manually.

First of all, check the list of valid options for anything relating to environment variables. For example, cantera has the following option:

* env_vars: [ string ]
    Environment variables to propagate through to SCons. Either the
    string "all" or a comma separated list of variable names, e.g.
    "LD_LIBRARY_PATH,HOME".
    - default: "LD_LIBRARY_PATH,PYTHONPATH"


In the case of cantera, using env_vars=all allows us to use Spack's compiler wrappers. If you don't see an option related to environment variables, try using Spack's compiler wrappers by passing spack_cc, spack_cxx, and spack_fc via the CC, CXX, and FC arguments, respectively. If you pass them to the build and you see an error message like:

Spack compiler must be run from Spack! Input 'SPACK_PREFIX' is missing.


you'll know that the package isn't compatible with Spack's compiler wrappers. In this case, you'll have to use the path to the actual compilers, which are stored in self.compiler.cc and friends. Note that this may involve passing additional flags to the build to locate dependencies, a task normally done by the compiler wrappers. serf is an example of a package with this limitation.
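
For instance, passing the wrapper paths on the command line might look like the following sketch (whether the SConstruct honors CC, CXX, and FC variables varies from package to package):

def build_args(self, spec, prefix):
    # Pass Spack's compiler wrappers explicitly; fall back to the real
    # compilers in self.compiler.cc and friends if the wrappers fail.
    return [
        f"CC={spack_cc}",
        f"CXX={spack_cxx}",
        f"FC={spack_fc}",
    ]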

External documentation

For more information on the SCons build system, see: http://scons.org/documentation.html

Waf

Like SCons, Waf is a general-purpose build system that does not rely on Makefiles to build software.

Phases

The WafBuilder and WafPackage base classes come with the following phases:

1.
configure - configure the project
2.
build - build the project
3.
install - install the project

By default, these phases run:

$ python waf configure --prefix=/path/to/installation/prefix
$ python waf build
$ python waf install


Each of these are standard Waf commands and can be found by running:

$ python waf --help


Each phase provides a <phase> function that runs:

$ python waf -j<jobs> <phase>


where <jobs> is the number of parallel jobs to build with. Each phase also has a <phase_args> function that can pass arguments to this call. All of these functions are empty. The configure phase automatically adds --prefix=/path/to/installation/prefix, so you don't need to add that in the configure_args.

Testing

WafPackage also provides test and installtest methods, which are run after the build and install phases, respectively. By default, these phases do nothing, but you can override them to run package-specific unit tests.

def installtest(self):
    with working_dir('test'):
        pytest = which('py.test')
        pytest()


Important files

Each Waf package comes with a custom waf build script, written in Python. This script contains instructions to build the project.

The package also comes with a wscript file. This file is used to override the default configure, build, and install phases to customize the Waf project. It also allows developers to override the default ./waf --help message. Check this file to find useful information about dependencies and the minimum versions that are supported.

Build system dependencies

WafPackage does not require waf as a dependency; the waf tool is only needed by a project's developers to create the ./waf script that ships with the source. Since ./waf is a Python script, Python is needed to build the project. WafPackage adds the following dependency automatically:

depends_on('python@2.5:', type='build')


Waf only supports Python 2.5 and up.

Passing arguments to waf

As previously mentioned, each phase comes with a <phase_args> function that can be used to pass arguments to that particular phase. For example, if you need to pass arguments to the build phase, you can use:

def build_args(self, spec, prefix):
    args = []

    if self.run_tests:
        args.append('--test')

    return args


A list of valid options can be found by running ./waf --help.

External documentation

For more information on the Waf build system, see: https://waf.io/book/

Autotools

Autotools is a GNU build system that provides a build-script generator. By running the platform-independent ./configure script that comes with the package, you can generate a platform-dependent Makefile.

Phases

The AutotoolsBuilder and AutotoolsPackage base classes come with the following phases:

1.
autoreconf - generate the configure script
2.
configure - generate the Makefiles
3.
build - build the package
4.
install - install the package

Most of the time, the autoreconf phase will do nothing, but if the package is missing a configure script, autoreconf will generate one for you.

The other phases run:

$ ./configure --prefix=/path/to/installation/prefix
$ make
$ make check  # optional
$ make install
$ make installcheck  # optional


Of course, you may need to add a few arguments to the ./configure line.

Important files

The most important file for an Autotools-based package is the configure script. This script is automatically generated by Autotools and generates the appropriate Makefile when run.

WARNING:

Watch out for fake Autotools packages!

Autotools is a very popular build system, and many people are used to the classic steps to install a package:

$ ./configure
$ make
$ make install


For this reason, some developers will write their own configure scripts that have nothing to do with Autotools. These packages may not accept the same flags as other Autotools packages, so it is better to use the Package base class and create a custom build system. You can tell if a package uses Autotools by running ./configure --help and comparing the output to other known Autotools packages. You should also look for files like:

  • configure.ac
  • configure.in
  • Makefile.am

Packages that don't use Autotools aren't likely to have these files.



Build system dependencies

Whether or not your package requires Autotools to install depends on how the source code is distributed. Most of the time, when developers distribute tarballs, they will already contain the configure script necessary for installation. If this is the case, your package does not require any Autotools dependencies.

However, a basic rule of version control systems is to never commit code that can be generated. The source code repository itself likely does not have a configure script. Developers typically write (or auto-generate) a configure.ac script that contains configuration preferences and a Makefile.am script that contains build instructions. Then, autoconf is used to convert configure.ac into configure, while automake is used to convert Makefile.am into Makefile.in. Makefile.in is used by configure to generate a platform-dependent Makefile for you. The following diagram provides a high-level overview of the process:

[Figure: GNU autoconf and automake process for generating makefiles, by Jdthood, CC BY-SA 3.0]


If a configure script is not present in your tarball, you will need to generate one yourself. Luckily, Spack already has an autoreconf phase to do most of the work for you. By default, the autoreconf phase runs:

$ autoreconf --install --verbose --force -I <aclocal-prefix>/share/aclocal


In case you need to add more arguments, override autoreconf_extra_args in your package.py on class scope like this:

autoreconf_extra_args = ["-Im4"]


All you need to do is add a few Autotools dependencies to the package. Most stable releases will come with a configure script, but if you check out a commit from the master branch, you would want to add:

depends_on("autoconf", type="build", when="@master")
depends_on("automake", type="build", when="@master")
depends_on("libtool",  type="build", when="@master")


It is typically redundant to list the m4 macro processor package as a dependency, since autoconf already depends on it.

Using a custom autoreconf phase

In some cases, it might be needed to replace the default implementation of the autoreconf phase with one running a script interpreter. In this example, the bash shell is used to run the autogen.sh script.

def autoreconf(self, spec, prefix):
    which("bash")("autogen.sh")


Patching configure or Makefile.in files

In some cases, developers might need to distribute a patch that modifies one of the files used to generate configure or Makefile.in. In this case, these scripts will need to be regenerated. It is preferable to regenerate these manually using the patch, and then create a new patch that directly modifies configure. That way, Spack can use the secondary patch and additional build system dependencies aren't necessary.

Old Autotools helper scripts

Autotools based tarballs come with helper scripts such as config.sub and config.guess. It is the responsibility of the developers to keep these files up to date so that they run on every platform, but for very old software releases this is impossible. In these cases Spack can help to replace these files with newer ones, without having to add the heavy dependency on automake.

Automatic helper script replacement is currently enabled by default on ppc64le and aarch64, as these are the known cases where old scripts fail. On these targets, AutotoolsPackage adds a build dependency on gnuconfig, which is a very light-weight package with newer versions of the helper files. Spack then tries to run all the helper scripts it can find in the release, and replaces them on failure with the helper scripts from gnuconfig.

To opt out of this feature, use the following setting:

patch_config_files = False


To enable it conditionally on different architectures, define a property and make the package depend on gnuconfig as a build dependency:

depends_on("gnuconfig", when="@1.0:")
@property
def patch_config_files(self):

return self.spec.satisfies("@1.0:")


NOTE:

On some exotic architectures it is necessary to use system provided config.sub and config.guess files. In this case, the most transparent solution is to mark the gnuconfig package as external and non-buildable, with a prefix set to the directory containing the files:


gnuconfig:
  buildable: false
  externals:
  - spec: gnuconfig@master
    prefix: /usr/share/configure_files/




force_autoreconf

If for whatever reason you really want to add the original patch and tell Spack to regenerate configure, you can do so using the following setting:

force_autoreconf = True


This line tells Spack to wipe away the existing configure script and generate a new one. If you only need to do this for a single version, this can be done like so:

@property
def force_autoreconf(self):
    return self.version == Version("1.2.3")


Finding configure flags

Once you have a configure script present, the next step is to determine what option flags are available. These flags can be found by running:

$ ./configure --help


configure will display a list of valid flags separated into some or all of the following sections:

  • Configuration
  • Installation directories
  • Fine tuning of the installation directories
  • Program names
  • X features
  • System types
  • Optional Features
  • Optional Packages
  • Some influential environment variables

For the most part, you can ignore all but the last 3 sections. The "Optional Features" section lists flags that enable/disable features you may be interested in. The "Optional Packages" section often lists dependencies and the flags needed to locate them. The "environment variables" section lists environment variables that the build system uses to pass flags to the compiler and linker.

Adding flags to configure

For most of the flags you encounter, you will want a variant to optionally enable/disable them. You can then optionally pass these flags to the configure call by overriding the configure_args function like so:

def configure_args(self):
    args = []
    if self.spec.satisfies("+mpi"):
        args.append("--enable-mpi")
    else:
        args.append("--disable-mpi")
    return args


Alternatively, you can use the enable_or_disable helper:

def configure_args(self):
    # enable_or_disable already returns a list of flags
    return self.enable_or_disable("mpi")


Note that we are explicitly disabling MPI support if it is not requested. This is important, as many Autotools packages will enable options by default if the dependencies are found, and disable them otherwise. We want Spack installations to be as deterministic as possible. If two users install a package with the same variants, the goal is that both installations work the same way. See here and here for a rationale as to why these so-called "automagic" dependencies are a problem.

NOTE:

By default, Autotools installs packages to /usr. We don't want this, so Spack automatically adds --prefix=/path/to/installation/prefix to your list of configure_args. You don't need to add this yourself.


Helper functions

You may have noticed that most of the Autotools flags are of the form --enable-foo, --disable-bar, --with-baz=<prefix>, or --without-baz. Since these flags are so common, Spack provides a couple of helper functions to make your life easier.

enable_or_disable

Autotools flags for simple boolean variants can be automatically generated by calling the enable_or_disable method. This is typically used to enable or disable some feature within the package.

variant(
    "memchecker",
    default=False,
    description="Memchecker support for debugging [degrades performance]",
)
config_args.extend(self.enable_or_disable("memchecker"))


In this example, specifying the variant +memchecker will generate the following configuration options:

--enable-memchecker


with_or_without

Autotools flags for more complex variants, including boolean variants and multi-valued variants, can be automatically generated by calling the with_or_without method.

variant(
    "schedulers",
    values=disjoint_sets(
        ("auto",), ("alps", "lsf", "tm", "slurm", "sge", "loadleveler")
    ).with_non_feature_values("auto", "none"),
    description="List of schedulers for which support is enabled; "
                "'auto' lets openmpi determine",
)

if not spec.satisfies("schedulers=auto"):
    config_args.extend(self.with_or_without("schedulers"))


In this example, specifying the variant schedulers=slurm,sge will generate the following configuration options:

--with-slurm --with-sge


enable_or_disable is functionally equivalent to with_or_without, and accepts the same arguments and variant types; but idiomatic Autotools packages often follow these naming conventions.
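
For instance, a minimal sketch with a hypothetical static variant; the same boolean variant works with either helper, and only the flag spelling changes:

variant("static", default=False, description="Build static libraries")

def configure_args(self):
    # emits --enable-static or --disable-static; with_or_without("static")
    # would emit --with-static or --without-static instead
    return self.enable_or_disable("static")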

activation_value

Autotools parameters that require an option can still be automatically generated, using the activation_value argument to with_or_without (or, rarely, enable_or_disable).

variant(
    "fabrics",
    values=disjoint_sets(
        ("auto",), ("psm", "psm2", "verbs", "mxm", "ucx", "libfabric")
    ).with_non_feature_values("auto", "none"),
    description="List of fabrics that are enabled; "
                "'auto' lets openmpi determine",
)

if not spec.satisfies("fabrics=auto"):
    config_args.extend(self.with_or_without("fabrics",
                                            activation_value="prefix"))


activation_value accepts a callable that generates the configure parameter value given the variant value; the special value prefix tells Spack to automatically use the dependency's installation prefix, which is the most common use for such parameters. In this example, specifying the variant fabrics=libfabric will generate the following configuration options:

--with-libfabric=</path/to/libfabric>


The variant keyword

When Spack variants and configure flags do not correspond one-to-one, the variant keyword can be passed to with_or_without and enable_or_disable. For example:

variant("debug_tools", default=False)
config_args += self.enable_or_disable("debug-tools", variant="debug_tools")


Or when one variant controls multiple flags:

variant("debug_tools", default=False)
config_args += self.with_or_without("memchecker", variant="debug_tools")
config_args += self.with_or_without("profiler", variant="debug_tools")


Conditional variants

When a variant is conditional and its condition is not met on the concrete spec, the with_or_without and enable_or_disable methods will simply return an empty list.

For example:

variant("profiler", when="@2.0:")
config_args += self.with_or_without("profiler")


will neither add --with-profiler nor --without-profiler when the version is below 2.0.

Activation overrides

Finally, the behavior of either with_or_without or enable_or_disable can be overridden for specific variant values. This is most useful for multi-valued variants where some of the variant values require atypical behavior.

def with_or_without_verbs(self, activated):
    # Up through version 1.6, this option was named --with-openib.
    # In version 1.7, it was renamed to be --with-verbs.
    opt = "verbs" if self.spec.satisfies("@1.7:") else "openib"
    if not activated:
        return f"--without-{opt}"
    return f"--with-{opt}={self.spec['rdma-core'].prefix}"


Defining with_or_without_verbs overrides the behavior of a fabrics=verbs variant, changing the configure-time option to --with-openib for older versions of the package and specifying an alternative dependency name:

--with-openib=</path/to/rdma-core>


Configure script in a sub-directory

Occasionally, developers will hide their source code and configure script in a subdirectory like src. If this happens, Spack won't be able to automatically detect the build system properly when running spack create. You will have to manually change the package base class and tell Spack where the configure script resides. You can do this like so:

configure_directory = "src"


Building out of source

Some packages like gcc recommend building their software in a different directory than the source code to prevent build pollution. This can be done using the build_directory variable:

build_directory = "spack-build"


By default, Spack will build the package in the same directory that contains the configure script.

Build and install targets

For most Autotools packages, the usual:

$ ./configure
$ make
$ make install


is sufficient to install the package. However, if you need to run make with any other targets, for example, to build an optional library or build the documentation, you can add these like so:

build_targets = ["all", "docs"]
install_targets = ["install", "docs"]


Testing

Autotools-based packages typically provide unit testing via the check and installcheck targets. If you build your software with spack install --test=root, Spack will check for the presence of a check or test target in the Makefile and run make check for you. After installation, it will check for an installcheck target and run make installcheck if it finds one.

External documentation

For more information on the Autotools build system, see: https://www.gnu.org/software/automake/manual/html_node/Autotools-Introduction.html

CMake

Like Autotools, CMake is a widely-used build-script generator. Designed by Kitware, CMake is the most popular build system for new C, C++, and Fortran projects, and many older projects are switching to it as well.

Unlike Autotools, CMake can generate build scripts for builders other than Make: Ninja, Visual Studio, etc. It is therefore cross-platform, whereas Autotools is Unix-only.

Phases

The CMakeBuilder and CMakePackage base classes come with the following phases:

1. cmake - generate the Makefile
2. build - build the package
3. install - install the package

By default, these phases run:

$ mkdir spack-build
$ cd spack-build
$ cmake .. -DCMAKE_INSTALL_PREFIX=/path/to/installation/prefix
$ make
$ make test  # optional
$ make install


A few more flags are passed to cmake by default, including flags for setting the build type and flags for locating dependencies. Of course, you may need to add a few arguments yourself.

Important files

A CMake-based package can be identified by the presence of a CMakeLists.txt file. This file defines the build flags that can be passed to the cmake invocation, as well as linking instructions. If you are familiar with CMake, it can prove very useful for determining dependencies and dependency version requirements.

One thing to look for is the cmake_minimum_required function:

cmake_minimum_required(VERSION 2.8.12)


This means that CMake 2.8.12 is the earliest release that will work. You should specify this in a depends_on statement.

CMake-based packages may also contain CMakeLists.txt in subdirectories. This modularization helps to manage complex builds in a hierarchical fashion. Sometimes these nested CMakeLists.txt require additional dependencies not mentioned in the top-level file.

There's also usually a cmake or CMake directory containing additional macros, find scripts, etc. These may prove useful in determining dependency version requirements.

Build system dependencies

Every package that uses the CMake build system requires a cmake dependency. Since this is always the case, the CMakePackage base class already contains:

depends_on('cmake', type='build')


If you need to specify a particular version requirement, you can override this in your package:

depends_on('cmake@2.8.12:', type='build')


Finding cmake flags

To get a list of valid flags that can be passed to cmake, run the following command in the directory that contains CMakeLists.txt:

$ cmake . -LAH


CMake will start by checking for compilers and dependencies. Eventually it will begin to list build options. You'll notice that most of the build options at the top are prefixed with CMAKE_. You can safely ignore most of these options as Spack already sets them for you. This includes flags needed to locate dependencies, RPATH libraries, set the installation directory, and set the build type.

The rest of the flags are the ones you should consider adding to your package. They often include flags to enable/disable support for certain features and locate specific dependencies. One thing you'll notice that makes CMake different from Autotools is that CMake has an understanding of build flag hierarchy. That is, certain flags will not display unless their parent flag has been selected. For example, flags to specify the lib and include directories for a package might not appear unless CMake found the dependency it was looking for. You may need to manually specify certain flags to explore the full depth of supported build flags, or check the CMakeLists.txt yourself.

Adding flags to cmake

To add additional flags to the cmake call, simply override the cmake_args function. The following example defines values for the flags WHATEVER, ENABLE_BROKEN_FEATURE, DETECT_HDF5, and THREADS with and without the define() and define_from_variant() helper functions:

def cmake_args(self):
    args = [
        '-DWHATEVER:STRING=somevalue',
        self.define('ENABLE_BROKEN_FEATURE', False),
        self.define_from_variant('DETECT_HDF5', 'hdf5'),
        self.define_from_variant('THREADS'),  # True if +threads
    ]
    return args


Spack supports CMake defines from conditional variants too. Whenever the condition on the variant is not met, define_from_variant() will simply return an empty string, and CMake simply ignores the empty command line argument. For example, the following

variant('example', default=True, when='@2.0:')

def cmake_args(self):
    return [self.define_from_variant('EXAMPLE', 'example')]


will generate 'cmake' '-DEXAMPLE=ON' ... when @2.0: +example is met, but will result in 'cmake' '' ... when the spec version is below 2.0.

CMake arguments provided by Spack

The following default arguments are controlled by Spack:

CMAKE_INSTALL_PREFIX

Is set to the package's install directory.

CMAKE_PREFIX_PATH

CMake finds dependencies through calls to find_package(), find_program(), find_library(), find_file(), and find_path(), which use a list of search paths from CMAKE_PREFIX_PATH. Spack sets this variable to a list of prefixes of the spec's transitive dependencies.

For troubleshooting cases where CMake fails to find a dependency, add the --debug-find flag to cmake_args.
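
For example, a minimal troubleshooting sketch (--debug-find is a standard CMake command-line option; remove it once the problem is resolved):

def cmake_args(self):
    # make CMake print the details of every find_* search
    return ["--debug-find"]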

CMAKE_BUILD_TYPE

Every CMake-based package accepts a -DCMAKE_BUILD_TYPE flag to dictate which level of optimization to use. In order to ensure uniformity across packages, the CMakePackage base class adds a variant to control this:

variant('build_type', default='RelWithDebInfo',
        description='CMake build type',
        values=('Debug', 'Release', 'RelWithDebInfo', 'MinSizeRel'))


However, not every CMake package accepts all four of these options. Grep the CMakeLists.txt file to see if the default values are missing or replaced. For example, the dealii package overrides the default variant with:

variant('build_type', default='DebugRelease',
        description='The build type to build',
        values=('Debug', 'Release', 'DebugRelease'))


For more information on CMAKE_BUILD_TYPE, see: https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html

CMake uses different RPATHs during the build and after installation, so that executables can locate the libraries they're linked to during the build, and installed executables do not have RPATHs to build directories. In Spack, we have to make sure that RPATHs are set properly after installation.

Spack sets CMAKE_INSTALL_RPATH to a list of <prefix>/lib or <prefix>/lib64 directories of the spec's link-type dependencies. Apart from that, it sets -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON, which should add RPATHs for directories of linked libraries not in the directories covered by CMAKE_INSTALL_RPATH.

Usually it's enough to set only -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON, but the reason to provide both options is that packages may dynamically open shared libraries, which CMake cannot detect. In those cases, the RPATHs from CMAKE_INSTALL_RPATH are used as search paths.

NOTE:

Some packages provide stub libraries, which contain an interface for linking without an implementation. When using such libraries, it's best to override the option -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=OFF in cmake_args, so that stub libraries are not used at runtime.
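
A minimal sketch of that override:

def cmake_args(self):
    # keep directories of stub libraries out of the install-time RPATH
    return [self.define("CMAKE_INSTALL_RPATH_USE_LINK_PATH", False)]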


Generators

CMake and Autotools are build-script generation tools; they "generate" the Makefiles that are used to build a software package. CMake actually supports multiple generators, not just Makefiles. Another common generator is Ninja. To switch to the Ninja generator, simply add:

generator = 'Ninja'


CMakePackage defaults to "Unix Makefiles". If you switch to the Ninja generator, make sure to add:

depends_on('ninja', type='build')


to the package as well. Aside from that, you shouldn't need to do anything else. Spack will automatically detect that you are using Ninja and run:

$ cmake .. -G Ninja
$ ninja
$ ninja install


Spack currently only supports "Unix Makefiles" and "Ninja" as valid generators, but it should be simple to add support for alternative generators. For more information on CMake generators, see: https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html

CMakeLists.txt in a sub-directory

Occasionally, developers will hide their source code and CMakeLists.txt in a subdirectory like src. If this happens, Spack won't be able to automatically detect the build system properly when running spack create. You will have to manually change the package base class and tell Spack where CMakeLists.txt resides. You can do this like so:

root_cmakelists_dir = 'src'


Note that this path is relative to the root of the extracted tarball, not to the build_directory. It defaults to the current directory.

Building out of source

By default, Spack builds every CMakePackage in a spack-build sub-directory. If, for whatever reason, you would like to build in a different sub-directory, simply override build_directory like so:

build_directory = 'my-build'


Build and install targets

For most CMake packages, the usual:

$ cmake
$ make
$ make install


is sufficient to install the package. However, if you need to run make with any other targets, for example, to build an optional library or build the documentation, you can add these like so:

build_targets = ['all', 'docs']
install_targets = ['install', 'docs']


Testing

CMake-based packages typically provide unit testing via the test target. If you build your software with spack install --test=root, Spack will check for the presence of a test target in the Makefile and run make test for you. If you want to run a different test instead, simply override the check method.
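
For instance, a sketch of such an override, assuming the project ships CTest-based tests (the ctest invocation here is an assumption, not the default behavior):

def check(self):
    # run ctest in the build directory instead of ``make test``
    with working_dir(self.build_directory):
        ctest = which("ctest")
        ctest("--output-on-failure")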

External documentation

For more information on the CMake build system, see: https://cmake.org/cmake/help/latest/

CachedCMake

The CachedCMakePackage base class is used for CMake-based workflows that create a CMake cache file prior to running cmake. This is useful for packages with arguments longer than the system limit, and for reproducibility.

The documentation for this class assumes that the user is familiar with the CMakePackage class from which it inherits. See the documentation for CMakePackage.

Phases

The CachedCMakePackage base class comes with the following phases:

1. initconfig - generate the CMake cache file
2. cmake - generate the Makefile
3. build - build the package
4. install - install the package

By default, these phases run:

$ mkdir spack-build
$ cd spack-build
$ cat << EOF > name-arch-compiler@version.cmake
# Write information on compilers and dependencies
# includes information on mpi and cuda if applicable
EOF
$ cmake .. -DCMAKE_INSTALL_PREFIX=/path/to/installation/prefix -C name-arch-compiler@version.cmake
$ make
$ make test  # optional
$ make install


The CachedCMakePackage class inherits from the CMakePackage class, accepts all of the same options, and adds all of the same flags to the cmake command. Similar to the CMakePackage class, you may need to add a few arguments yourself, and CachedCMakePackage provides the same interface to add those flags.

Adding entries to the CMake cache

In addition to adding flags to the cmake command, you may need to add entries to the CMake cache in the initconfig phase. This can be done by overriding one of four methods:

1. CachedCMakePackage.initconfig_compiler_entries
2. CachedCMakePackage.initconfig_mpi_entries
3. CachedCMakePackage.initconfig_hardware_entries
4. CachedCMakePackage.initconfig_package_entries

Each of these methods returns a list of CMake cache strings. The distinction between these methods is merely to provide a well-structured and legible cmake cache file -- otherwise, entries from each of these methods are handled identically.

Spack also provides convenience methods for generating CMake cache entries. These methods are available at module scope in every Spack package. Because CMake parses boolean options, strings, and paths differently, there are three such methods:

1. cmake_cache_option
2. cmake_cache_string
3. cmake_cache_path

These methods each accept three parameters -- the name of the CMake variable associated with the entry, the value of the entry, and an optional comment -- and return strings in the appropriate format to be returned from any of the initconfig* methods. Additionally, these methods may return comments beginning with the # character.

A typical usage of these methods may look something like this:

def initconfig_mpi_entries(self):
    # Get existing MPI configurations
    entries = super(Foo, self).initconfig_mpi_entries()

    # The existing MPI configurations key on whether ``mpi`` is in the spec.
    # This spec has an MPI variant, and we need to enable MPI when it is on.
    # This hypothetical package controls MPI with the ``FOO_MPI`` option to
    # cmake.
    if self.spec.satisfies("+mpi"):
        entries.append(cmake_cache_option("FOO_MPI", True, "enable mpi"))
    else:
        entries.append(cmake_cache_option("FOO_MPI", False, "disable mpi"))
    return entries

def initconfig_package_entries(self):
    # Package specific options
    entries = []

    entries.append("#Entries for build options")

    bar_on = self.spec.satisfies("+bar")
    entries.append(cmake_cache_option("FOO_BAR", bar_on, "toggle bar"))

    entries.append("#Entries for dependencies")

    if self.spec["blas"].name == "baz":  # baz is our blas provider
        entries.append(cmake_cache_string("FOO_BLAS", "baz", "Use baz"))
        entries.append(cmake_cache_path("BAZ_PREFIX", self.spec["baz"].prefix))
    return entries


External documentation

For more information on CMake cache files, see: https://cmake.org/cmake/help/latest/manual/cmake.1.html

Meson

Much like Autotools and CMake, Meson is a build system, but it is meant to be both fast and as user-friendly as possible. GNOME's goal is to port its modules to use the Meson build system.

Phases

The MesonBuilder and MesonPackage base classes come with the following phases:

1. meson - generate ninja files
2. build - build the project
3. install - install the project

By default, these phases run:

$ mkdir spack-build
$ cd spack-build
$ meson .. --prefix=/path/to/installation/prefix
$ ninja
$ ninja test  # optional
$ ninja install


Any of these phases can be overridden in your package as necessary. There is also a check method that looks for a test target in the build file. If a test target exists and the user runs:

$ spack install --test=root <meson-package>


Spack will run ninja test after the build phase.

Important files

Packages that use the Meson build system can be identified by the presence of a meson.build file. This file declares things like build instructions and dependencies.

One thing to look for is the meson_version key that gets passed to the project function:

project('gtk+', 'c',
        version: '3.94.0',
        default_options: [
          'buildtype=debugoptimized',
          'warning_level=1',
          # We only need c99, but glib needs GNU-specific features
          # https://github.com/mesonbuild/meson/issues/2289
          'c_std=gnu99',
        ],
        meson_version: '>= 0.43.0',
        license: 'LGPLv2.1+')


This means that Meson 0.43.0 is the earliest release that will work. You should specify this in a depends_on statement.

Build system dependencies

At the bare minimum, packages that use the Meson build system need meson and ninja dependencies. Since this is always the case, the MesonPackage base class already contains:

depends_on('meson', type='build')
depends_on('ninja', type='build')


If you need to specify a particular version requirement, you can override this in your package:

depends_on('meson@0.43.0:', type='build')
depends_on('ninja', type='build')


Finding meson flags

To get a list of valid flags that can be passed to meson, run the following command in the directory that contains meson.build:

$ meson setup --help


Passing arguments to meson

If you need to pass any arguments to the meson call, you can override the meson_args method like so:

def meson_args(self):
    return ['--warnlevel=3']


This method can be used to pass flags as well as variables.

Note that the MesonPackage base class already defines variants for buildtype, default_library and strip, which are mapped to default Meson arguments, meaning that you don't have to specify these.

External documentation

For more information on the Meson build system, see: https://mesonbuild.com/index.html

QMake

Much like Autotools and CMake, QMake is a build-script generator designed by the developers of Qt. In its simplest form, Spack's QMakePackage runs the following steps:

$ qmake
$ make
$ make check  # optional
$ make install


QMake does not appear to have a standardized way of specifying the installation directory, so you may have to set environment variables or edit *.pro files to get things working properly.
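
For example, if the project's .pro files happen to honor a PREFIX variable (an assumption that must be verified per project), a sketch might look like:

def qmake_args(self):
    # PREFIX is project-specific, not a QMake standard
    return [f"PREFIX={self.prefix}"]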

Phases

The QMakeBuilder and QMakePackage base classes come with the following phases:

1. qmake - generate Makefiles
2. build - build the project
3. install - install the project

By default, these phases run:

$ qmake
$ make
$ make install


Any of these phases can be overridden in your package as necessary. There is also a check method that looks for a check target in the Makefile. If a check target exists and the user runs:

$ spack install --test=root <qmake-package>


Spack will run make check after the build phase.

Important files

Packages that use the QMake build system can be identified by the presence of a <project-name>.pro file. This file declares things like build instructions and dependencies.

One thing to look for is the minQtVersion function:

minQtVersion(5, 6, 0)


This means that Qt 5.6.0 is the earliest release that will work. You should specify this in a depends_on statement.

Build system dependencies

At the bare minimum, packages that use the QMake build system need a qt dependency. Since this is always the case, the QMakePackage base class already contains:

depends_on('qt', type='build')


If you want to specify a particular version requirement, or need to link to the qt libraries, you can override this in your package:

depends_on('qt@5.6.0:')


Passing arguments to qmake

If you need to pass any arguments to the qmake call, you can override the qmake_args method like so:

def qmake_args(self):
    return ['-recursive']


This method can be used to pass flags as well as variables.

*.pro file in a sub-directory

If the *.pro file used to tell QMake how to build the package is found in a sub-directory, you can tell Spack to run all phases in this sub-directory by adding the following to the package:

build_directory = 'src'


External documentation

For more information on the QMake build system, see: http://doc.qt.io/qt-5/qmake-manual.html

SIP

SIP is a tool that makes it very easy to create Python bindings for C and C++ libraries. It was originally developed to create PyQt, the Python bindings for the Qt toolkit, but can be used to create bindings for any C or C++ library.

SIP comprises a code generator and a Python module. The code generator processes a set of specification files and generates C or C++ code which is then compiled to create the bindings extension module. The SIP Python module provides support functions to the automatically generated code.

Phases

The SIPBuilder and SIPPackage base classes come with the following phases:

1. configure - configure the package
2. build - build the package
3. install - install the package

By default, these phases run:

$ sip-build --verbose --target-dir ...
$ make
$ make install


Important files

Each SIP package comes with a custom configuration file written in Python. For newer packages, this is called project.py, while in older packages, it may be called configure.py. This script contains instructions to build the project.

Build system dependencies

SIPPackage requires several dependencies. Python and SIP are needed at build-time to run the aforementioned configure script. Python is also needed at run-time to actually use the installed Python library. And as we are building Python bindings for C/C++ libraries, Python is also needed as a link dependency. All of these dependencies are automatically added via the base class.

extends("python", type=("build", "link", "run"))
depends_on("py-sip", type="build")


Passing arguments to sip-build

Each phase comes with a <phase_args> function that can be used to pass arguments to that particular phase. For example, if you need to pass arguments to the configure phase, you can use:

def configure_args(self):
    return ["--no-python-dbus"]


A list of valid options can be found by running sip-build --help.

Testing

Just because a package successfully built does not mean that it built correctly. The most reliable test of whether or not the package was correctly installed is to attempt to import all of the modules that get installed. To get a list of modules, run the following command in the site-packages directory:

$ python
>>> import setuptools
>>> setuptools.find_packages()
['PyQt5', 'PyQt5.QtCore', 'PyQt5.QtGui', 'PyQt5.QtHelp',
 'PyQt5.QtMultimedia', 'PyQt5.QtMultimediaWidgets', 'PyQt5.QtNetwork',
 'PyQt5.QtOpenGL', 'PyQt5.QtPrintSupport', 'PyQt5.QtQml',
 'PyQt5.QtQuick', 'PyQt5.QtSvg', 'PyQt5.QtTest', 'PyQt5.QtWebChannel',
 'PyQt5.QtWebSockets', 'PyQt5.QtWidgets', 'PyQt5.QtXml',
 'PyQt5.QtXmlPatterns']


Large, complex packages like py-pyqt5 will return a long list of packages, while other packages may return an empty list; such packages install only a single foo.py file. In Python packaging lingo, a "package" is a directory containing files like:

foo/__init__.py
foo/bar.py
foo/baz.py


whereas a "module" is a single Python file.

The SIPPackage base class automatically detects these module names for you. If, for whatever reason, the module names detected are wrong, you can provide the names yourself by overriding import_modules like so:

import_modules = ['PyQt5']


These tests often catch missing dependencies and non-RPATHed libraries. Make sure not to add modules/packages containing the word "test", as these likely won't end up in the installation directory, or may require test dependencies like pytest to be installed.

These tests can be triggered by running spack install --test=root or by running spack test run after the installation has finished.

External documentation

For more information on the SIP build system, see:


Lua

The Lua build system is a helper for the common case of Lua packages that provide a rockspec file. It is not meant to take a rock archive, but to build a source archive or repository that provides a rockspec, which should cover most Lua packages. In the case that a Lua package builds with Make rather than LuaRocks, prefer MakefilePackage.

Phases

The LuaBuilder and LuaPackage base classes come with the following phases:

1. unpack - if using a rock, unpacks the rock and moves into the source directory
2. preprocess - adjust sources or rockspec to fix build
3. install - install the project

By default, these phases run:

# If the archive is a source rock
$ luarocks unpack <archive>.src.rock
$ # preprocess is a noop by default
$ luarocks make <name>.rockspec


Any of these phases can be overridden in your package as necessary.

Important files

Packages that use the Lua/LuaRocks build system can be identified by the presence of a *.rockspec file in their source tree, or can be fetched as a source rock archive (.src.rock). The rockspec file declares things like build instructions and dependencies; the .src.rock archive additionally contains all of the code.

It is common for the rockspec file to list the lua version required in a dependency. The LuaPackage class adds appropriate dependencies on a Lua implementation, but it is a good idea to specify the version required with a depends_on statement. The block normally will be a table definition like this:

dependencies = {
    "lua >= 5.1",
}


The LuaPackage class supports source repositories and archives containing a rockspec and directly downloading source rock files. It does not support downloading dependencies listed inside a rockspec, and thus does not support directly downloading a rockspec as an archive.

Build system dependencies

All base dependencies are added by the build system, but LuaRocks is run to avoid downloading extra Lua dependencies during build. If the package needs Lua libraries outside the standard set, they should be added as dependencies.

To specify a Lua version constraint but allow all Lua implementations, prefer to use depends_on("lua-lang@5.1:5.1.99") to express any 5.1-compatible version. If the package requires LuaJIT rather than Lua, depends_on("luajit") should be used to ensure a LuaJIT distribution is used instead of the Lua interpreter. Alternately, if only interpreted Lua will work, depends_on("lua") will express that.

Passing arguments to luarocks make

If you need to pass any arguments to the luarocks make call, you can override the luarocks_args method like so:

def luarocks_args(self):
    return ['flag1', 'flag2']


One common use of this is to override warnings or flags for newer compilers, as in:

def luarocks_args(self):
    return ["CFLAGS='-Wno-error=implicit-function-declaration'"]


External documentation

For more information on the LuaRocks build system, see: https://luarocks.org/

Octave

Octave has its own build system for installing packages.

Phases

The OctaveBuilder and OctavePackage base classes have a single phase:

1. install - install the package

By default, this phase runs the following command:

$ octave '--eval' 'pkg prefix <prefix>; pkg install <archive_file>'


Beware that uninstallation is not implemented at the moment. After uninstalling a package via Spack, you also need to manually uninstall it from Octave via pkg uninstall <package_name>.

Finding Octave packages

Most Octave packages are listed at https://octave.sourceforge.io/packages.php.

Dependencies

Usually, the homepage of a package will list dependencies, e.g. Dependencies: Octave >= 3.6.0, struct >= 1.0.12. The same information should be available in the DESCRIPTION file in the root of each archive.
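
Translated into a recipe, those constraints might look like the following sketch (octave-struct follows Spack's usual naming convention for Octave packages; the versions come from the example above):

depends_on("octave@3.6.0:")
depends_on("octave-struct@1.0.12:", type=("build", "run"))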

External Documentation

For more information on the Octave build system, see: https://octave.org/doc/v4.4.0/Installing-and-Removing-Packages.html

Perl

Much like Octave, Perl has its own language-specific build system.

Phases

The PerlBuilder and PerlPackage base classes come with 3 phases that can be overridden:

1. configure - configure the package
2. build - build the package
3. install - install the package

Perl packages have two common modules used for module installation:

ExtUtils::MakeMaker

The ExtUtils::MakeMaker module is just what it sounds like, a module designed to generate Makefiles. It can be identified by the presence of a Makefile.PL file, and has the following installation steps:

$ perl Makefile.PL INSTALL_BASE=/path/to/installation/prefix
$ make
$ make test  # optional
$ make install


Module::Build

The Module::Build module is a pure-Perl build system, and can be identified by the presence of a Build.PL file. It has the following installation steps:

$ perl Build.PL --install_base /path/to/installation/prefix
$ ./Build
$ ./Build test  # optional
$ ./Build install


If both Makefile.PL and Build.PL files exist in the package, Spack will use Makefile.PL by default. If your package uses a different module, PerlPackage will need to be extended to support it.

PerlPackage automatically detects which build steps to use, so there shouldn't be much work on the package developer's side to get things working.

Finding Perl packages

Most Perl modules are hosted on CPAN - The Comprehensive Perl Archive Network. If you need to find a package for XML::Parser, for example, you should search for "CPAN XML::Parser".

Some CPAN pages are versioned. Check for a link to the "Latest Release" to make sure you have the latest version.

Package name

When you use spack create to create a new Perl package, Spack will automatically prepend perl- to the front of the package name. This helps to keep Perl modules separate from other packages. The same naming scheme is used for other language extensions, like Python and R.

Description

Most CPAN pages have a short description under "NAME" and a longer description under "DESCRIPTION". Use whichever you think is more useful while still being succinct.

Homepage

In the top-right corner of the CPAN page, you'll find a "permalink" for the package. This should be used instead of the current URL, as it doesn't contain the version number and will always link to the latest release.

URL

If you haven't found it already, the download URL is on the right side of the page below the permalink. Search for "Download".

Build system dependencies

Every PerlPackage obviously depends on Perl at build and run-time, so PerlPackage contains:

extends('perl')


If your package requires a specific version of Perl, you should specify this.
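
For example, a sketch with a hypothetical version bound:

depends_on("perl@5.10:", type=("build", "run"))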

Although newer versions of Perl include ExtUtils::MakeMaker and Module::Build as "core" modules, you may want to add dependencies on perl-extutils-makemaker and perl-module-build anyway. Many people add Perl as an external package, and we want the build to work properly. If your package uses Makefile.PL to build, add:

depends_on('perl-extutils-makemaker', type='build')


If your package uses Build.PL to build, add:

depends_on('perl-module-build', type='build')


Perl dependencies

Below the download URL, you will find a "Dependencies" link, which takes you to a page listing all of the dependencies of the package. Packages listed as "Core module" don't need to be added as dependencies, but all direct dependencies should be added. Don't add dependencies of dependencies. These should be added as dependencies to the dependency, not to your package.

Passing arguments to configure

Packages that have non-Perl dependencies often use command-line variables to specify their installation directory. You can pass arguments to Makefile.PL or Build.PL by overriding configure_args like so:

def configure_args(self):
    expat = self.spec['expat'].prefix
    return [
        'EXPATLIBPATH={0}'.format(expat.lib),
        'EXPATINCPATH={0}'.format(expat.include),
    ]


Alternatives to Spack

If you need to maintain a stack of Perl modules for a user and don't want to add all of them to Spack, a good alternative is cpanm. If Perl is already installed on your system, it should come with a cpan executable. To install cpanm, run the following command:

$ cpan App::cpanminus


Now, you can install any Perl module you want by running:

$ cpanm Module::Name


Obviously, these commands can only be run if you have root privileges. Furthermore, cpanm is not capable of installing non-Perl dependencies. If you need to install to your home directory or need to install a module with non-Perl dependencies, Spack is a better option.

External documentation

You can find more information on installing Perl modules from source at: http://www.perlmonks.org/?node_id=128077

More generic Perl module installation instructions can be found at: http://www.cpan.org/modules/INSTALL.html

Python

Python packages and modules have their own special build system. This documentation covers everything you'll need to know in order to write a Spack build recipe for a Python library.

Terminology

In the Python ecosystem, there are a number of terms that are important to understand.

PyPI
The Python Package Index, where most Python libraries are hosted.
sdist
Source distributions, distributed as tarballs (.tar.gz) and zip files (.zip). Contain the source code of the package.
bdist
Built distributions, distributed as wheels (.whl). Contain the pre-built library.
wheel
A binary distribution format common in the Python ecosystem. This file is actually just a zip file containing specific metadata and code. See the documentation for more details.
build frontend
Command-line tools used to build and install wheels. Examples include pip, build, and installer.
build backend
Libraries used to define how to build a wheel. Examples include setuptools, flit, poetry, hatchling, meson, and pdm.

Downloading

The first step in packaging a Python library is to figure out where to download it from. The vast majority of Python packages are hosted on PyPI, which is preferred over GitHub for downloading packages. Search for the package name on PyPI to find the project page. The project page is usually located at:

https://pypi.org/project/<package-name>

On the project page, there is a "Download files" tab containing download URLs. Whenever possible, we prefer to build Spack packages from source. If PyPI only has wheels, check to see if the project is hosted on GitHub and see if GitHub has source distributions. The project page usually has a "Homepage" and/or "Source code" link for this. If the project is closed-source, it may only have wheels available. For example, py-azureml-sdk is closed-source and can only be downloaded as a wheel from PyPI.


Once you've found a URL to download the package from, run:

$ spack create <url>


to create a new package template.

PyPI vs. GitHub

Many packages are hosted on PyPI, but are developed on GitHub or another version control system hosting service. The source code can be downloaded from either location, but PyPI is preferred for the following reasons:

1. PyPI contains the bare minimum number of files needed to install the package.

You may notice that the tarball you download from PyPI does not have the same checksum as the tarball you download from GitHub. When a developer uploads a new release to PyPI, it doesn't contain every file in the repository, only the files necessary to install the package. PyPI tarballs are therefore smaller.

2. PyPI is the official source for package managers like pip.

Let's be honest, pip is much more popular than Spack. If the GitHub tarball contains a file not present in the PyPI tarball that causes a bug, the developers may not realize this for quite some time. If the bug was in a file contained in the PyPI tarball, users would notice the bug much more quickly.

3. A GitHub release may be a beta version.

When a developer releases a new version of a package on GitHub, it may not be intended for most users. Until that release also makes its way to PyPI, it should be assumed that the release is not yet ready for general use.

4. The checksum for a GitHub release may change.

Unfortunately, some developers have a habit of patching releases without incrementing the version number. This results in a change in tarball checksum. Package managers like Spack that use checksums to verify the integrity of a download tarball grind to a halt when the checksum for a known version changes. Most of the time, the change is intentional, and contains a needed bug fix. However, sometimes the change indicates a download source that has been compromised, and a tarball that contains a virus. If this happens, you must contact the developers to determine which is the case. PyPI is nice because it makes it physically impossible to re-release the same version of a package with a different checksum.


The only reason to use GitHub instead of PyPI is if PyPI only has wheels or if the PyPI sdist is missing a file needed to build the package. If this is the case, please add a comment above the url explaining this.
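
For example, a sketch with a hypothetical GitHub-hosted project:

# PyPI only distributes wheels for this package, so build from the
# GitHub source tarball instead
url = "https://github.com/example/example/archive/refs/tags/v1.0.0.tar.gz"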

PyPI

Since PyPI is so commonly used to host Python libraries, the PythonPackage base class has a pypi attribute that can be set. Once set, pypi will be used to define the homepage, url, and list_url. For example, the following:

pypi = "setuptools/setuptools-49.2.0.zip"


is equivalent to:

homepage = "https://pypi.org/project/setuptools/"
url = "https://pypi.org/packages/source/s/setuptools/setuptools-49.2.0.zip"
list_url = "https://pypi.org/packages/source/s/setuptools/"


If a package has a different homepage listed on PyPI, you can override it by setting your own homepage.

Description

The top of the PyPI project page contains a short description of the package. The "Project description" tab may also contain a longer description of the package. Either of these can be used to populate the package docstring.

Dependencies

Once you've determined the basic metadata for a package, the next step is to determine the build backend. PythonPackage uses pip to install the package, but pip requires a backend to actually build the package.

To determine the build backend, look for a pyproject.toml file. If there is no pyproject.toml file and only a setup.py or setup.cfg file, you can assume that the project uses setuptools. If there is a pyproject.toml file, see if it contains a [build-system] section. For example:

[build-system]
requires = [
    "setuptools>=42",
    "wheel",
]
build-backend = "setuptools.build_meta"


This section does two things: the requires key lists build dependencies of the project, and the build-backend key defines the build backend. All of these build dependencies should be added as dependencies to your package:

depends_on("py-setuptools@42:", type="build")


Note that py-wheel is already listed as a build dependency in the PythonPackage base class, so you don't need to add it unless you need to specify a specific version requirement or change the dependency type.

See PEP 517 and PEP 518 for more information on the design of pyproject.toml.

Depending on which build backend a project uses, there are various places that run-time dependencies can be listed. Most modern build backends support listing dependencies directly in pyproject.toml. Look for dependencies under the following keys:

  • requires-python under [project]

    This specifies the version of Python that is required

  • dependencies under [project]

    These packages are required for building and installation. You can add them with type=("build", "run").

  • [project.optional-dependencies]

    This section includes keys with lists of optional dependencies needed to enable those features. You should add a variant that optionally adds these dependencies; this variant should be False by default (see the sketch after this list).


Some build backends may have additional locations where dependencies can be found.
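
As a sketch of the variant pattern described in the list above (the docs feature and the py-sphinx dependency are hypothetical):

variant("docs", default=False, description="Build the documentation")
depends_on("py-sphinx", type=("build", "run"), when="+docs")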

distutils

Before the introduction of setuptools and other build backends, Python packages had to rely on the built-in distutils library. Distutils is missing many of the features that setuptools and other build backends offer, and users are encouraged to use setuptools instead. In fact, distutils was deprecated in Python 3.10 and will be removed in Python 3.12. Because of this, pip actually replaces all imports of distutils with setuptools. If a package uses distutils, you should instead add a build dependency on setuptools. Check for a requirements.txt file that may list dependencies of the project.
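
In that case, the dependency is simply:

depends_on("py-setuptools", type="build")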

setuptools

If the pyproject.toml lists setuptools.build_meta as a build-backend, or if the package has a setup.py that imports setuptools, or if the package has a setup.cfg file, then it uses setuptools to build. Setuptools is a replacement for the distutils library, and has almost the exact same API. In addition to pyproject.toml, dependencies can be listed in the setup.py or setup.cfg file. Look for the following arguments:

  • python_requires

    This specifies the version of Python that is required.

  • setup_requires

    These packages are usually only needed at build-time, so you can add them with type="build".

  • install_requires

    These packages are required for building and installation. You can add them with type=("build", "run").

  • extras_require

    These packages are optional dependencies that enable additional functionality. You should add a variant that optionally adds these dependencies. This variant should be False by default.

  • tests_require

    These are packages that are required to run the unit tests for the package. These dependencies can be specified using the type="test" dependency type. However, the PyPI tarballs rarely contain unit tests, so there is usually no reason to add these.


See https://setuptools.pypa.io/en/latest/userguide/dependency_management.html for more information on how setuptools handles dependency management. See PEP 440 for documentation on version specifiers in setuptools.

flit

There are actually two possible build backends for flit: flit and flit_core. If you see either of these in the pyproject.toml, add a build dependency to your package. With flit, all dependencies are listed directly in the pyproject.toml file. Older versions of flit used to store this info in a flit.ini file, so check for this too.

In addition to the default pyproject.toml keys listed above, older versions of flit may use the following keys:

  • requires under [tool.flit.metadata]

    These packages are required for building and installation. You can add them with type=("build", "run").

  • [tool.flit.metadata.requires-extra]

    This section includes keys with lists of optional dependencies needed to enable those features. You should add a variant that optionally adds these dependencies. This variant should be False by default.


See https://flit.pypa.io/en/latest/pyproject_toml.html for more information.

poetry

Like flit, poetry also has two possible build backends: poetry and poetry_core. If you see either of these in the pyproject.toml, add a build dependency to your package. With poetry, all dependencies are listed directly in the pyproject.toml file. Dependencies are listed in a [tool.poetry.dependencies] section, and use a custom syntax for specifying the version requirements. Note that ~= works differently in poetry than in setuptools and flit for versions that start with a zero.

hatchling

If the pyproject.toml lists hatchling.build as the build-backend, it uses the hatchling build system. Hatchling uses the default pyproject.toml keys to list dependencies.

See https://hatch.pypa.io/latest/config/dependency/ for more information.

meson

If the pyproject.toml lists mesonpy as the build-backend, it uses the meson build system. Meson uses the default pyproject.toml keys to list dependencies.

See https://meson-python.readthedocs.io/en/latest/tutorials/introduction.html for more information.

pdm

If the pyproject.toml lists pdm.pep517.api as the build-backend, it uses the PDM build system. PDM uses the default pyproject.toml keys to list dependencies.

See https://pdm.fming.dev/latest/ for more information.

wheels

Some Python packages are closed-source and are distributed as Python wheels. For example, py-azureml-sdk downloads a .whl file. This file is simply a zip file, and can be extracted using:

$ unzip *.whl


The zip file will not contain a setup.py, but it will contain a METADATA file which contains all the information you need to write a package.py build recipe. Check for lines like:

Requires-Python: >=3.5,<4
Requires-Dist: azureml-core (~=1.11.0)
Requires-Dist: azureml-dataset-runtime[fuse] (~=1.11.0)
Requires-Dist: azureml-train (~=1.11.0)
Requires-Dist: azureml-train-automl-client (~=1.11.0)
Requires-Dist: azureml-pipeline (~=1.11.0)
Provides-Extra: accel-models
Requires-Dist: azureml-accel-models (~=1.11.0); extra == 'accel-models'
Provides-Extra: automl
Requires-Dist: azureml-train-automl (~=1.11.0); extra == 'automl'


Requires-Python is equivalent to python_requires and Requires-Dist is equivalent to install_requires. Provides-Extra is used to name optional features (variants) and a Requires-Dist with extra == 'foo' will list any dependencies needed for that feature.
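
As a sketch, the metadata above might translate into recipe lines like these (the automl variant mirrors the Provides-Extra entry; ~=1.11.0 means >=1.11.0,<1.12.0, which Spack writes as @1.11.0:1.11):

depends_on("python@3.5:3", type=("build", "run"))
depends_on("py-azureml-core@1.11.0:1.11", type=("build", "run"))

variant("automl", default=False, description="Enable AutoML support")
depends_on("py-azureml-train-automl@1.11.0:1.11", type=("build", "run"), when="+automl")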

Passing arguments to setup.py

The default install phase should be sufficient to install most packages. However, the installation instructions for a package may suggest passing certain flags to the setup.py call. The PythonPackage class has three techniques for doing this.

Config settings

These settings are passed to PEP 517 build backends. For example, the py-scipy package allows you to specify the name of the BLAS/LAPACK library you want pkg-config to search for:

depends_on("py-pip@22.1:", type="build")
def config_settings(self, spec, prefix):

return {
"blas": spec["blas"].libs.names[0],
"lapack": spec["lapack"].libs.names[0],
}


NOTE:

This flag only works for packages that define a build-backend in pyproject.toml. Also, it is only supported by pip 22.1+, which requires Python 3.7+. For packages that still support Python 3.6 and older, install_options should be used instead.


Global options

These flags are added directly after setup.py when pip runs python setup.py install. For example, the py-pyyaml package has an optional dependency on libyaml that can be enabled like so:

def global_options(self, spec, prefix):
    options = []
    if spec.satisfies("+libyaml"):
        options.append("--with-libyaml")
    else:
        options.append("--without-libyaml")
    return options


NOTE:

Direct invocation of setup.py is deprecated. This flag forces pip to use a deprecated installation procedure. It should only be used in packages that don't define a build-backend in pyproject.toml or packages that still support Python 3.6 and older.


Install options

These flags are added directly after install when pip runs python setup.py install. For example, the py-pyyaml package allows you to specify the directories to search for libyaml:

def install_options(self, spec, prefix):
    options = []
    if spec.satisfies("+libyaml"):
        options.extend([
            spec["libyaml"].libs.search_flags,
            spec["libyaml"].headers.include_flags,
        ])
    return options


NOTE:

Direct invocation of setup.py is deprecated. This flag forces pip to use a deprecated installation procedure. It should only be used in packages that don't define a build-backend in pyproject.toml or packages that still support Python 3.6 and older.


Testing

PythonPackage provides a couple of options for testing packages both during and after the installation process.

Import tests

Just because a package successfully built does not mean that it built correctly. The most reliable test of whether or not the package was correctly installed is to attempt to import all of the modules that get installed. To get a list of modules, run the following command in the source directory:

$ python
>>> import setuptools
>>> setuptools.find_packages()
['numpy', 'numpy._build_utils', 'numpy.compat', 'numpy.core', 'numpy.distutils', 'numpy.doc', 'numpy.f2py', 'numpy.fft', 'numpy.lib', 'numpy.linalg', 'numpy.ma', 'numpy.matrixlib', 'numpy.polynomial', 'numpy.random', 'numpy.testing', 'numpy.core.code_generators', 'numpy.distutils.command', 'numpy.distutils.fcompiler']


Large, complex packages like numpy will return a long list of packages, while other packages like six will return an empty list. py-six installs a single six.py file. In Python packaging lingo, a "package" is a directory containing files like:

foo/__init__.py
foo/bar.py
foo/baz.py


whereas a "module" is a single Python file.

The PythonPackage base class automatically detects these package and module names for you. If, for whatever reason, the module names detected are wrong, you can provide the names yourself by overriding import_modules like so:

import_modules = ["six"]


Sometimes the list of module names to import depends on how the package was built. For example, the py-pyyaml package has a +libyaml variant that enables the build of a faster optimized version of the library. If the user chooses ~libyaml, only the yaml library will be importable. If the user chooses +libyaml, both the yaml and yaml.cyaml libraries will be available. This can be expressed like so:

@property
def import_modules(self):
    modules = ["yaml"]
    if self.spec.satisfies("+libyaml"):
        modules.append("yaml.cyaml")
    return modules


These tests often catch missing dependencies and non-RPATHed libraries. Make sure not to add modules/packages containing the word "test", as these likely won't end up in the installation directory, or may require test dependencies like pytest to be installed.

Instead of defining import_modules explicitly, you can list only the module names to be skipped by using skip_modules. If a skipped module has submodules, they are skipped as well. For example, to exclude the plotting modules from the automatically detected set ["nilearn", "nilearn.surface", "nilearn.plotting", "nilearn.plotting.data"], use:

skip_modules = ["nilearn.plotting"]


This will set import_modules to ["nilearn", "nilearn.surface"].

Import tests can be run during the installation using spack install --test=root or at any time after the installation using spack test run.

Unit tests

The package may have its own unit or regression tests. Spack can run these tests during the installation by adding test methods that run after the install phase.

For example, py-numpy adds the following as a check to run after the install phase:

@run_after("install")
@on_package_attributes(run_tests=True)
def install_test(self):

with working_dir("spack-test", create=True):
python("-c", "import numpy; numpy.test('full', verbose=2)")


This check runs only when testing is enabled during the installation (i.e., spack install --test=root).

NOTE:

Additional information is available on install phase tests.


Setup file in a sub-directory

Many C/C++ libraries provide optional Python bindings in a subdirectory. To tell pip which directory to build from, you can override the build_directory attribute. For example, if a package provides Python bindings in a python directory, you can use:

build_directory = "python"


PythonPackage vs. packages that use Python

There are many packages that make use of Python, but packages that depend on Python are not necessarily PythonPackages.

Choosing a build system

First of all, you need to select a build system. spack create usually does this for you, but if for whatever reason you need to do this manually, choose PythonPackage if and only if the package contains one of the following files:

  • pyproject.toml
  • setup.py
  • setup.cfg

Choosing a package name

Selecting the appropriate package name is a little more complicated than choosing the build system. By default, spack create will prepend py- to the beginning of the package name if it detects that the package uses the PythonPackage build system. However, there are occasionally packages that use PythonPackage that shouldn't start with py-. For example:

  • awscli
  • aws-parallelcluster
  • busco
  • easybuild
  • httpie
  • mercurial
  • scons
  • snakemake

The thing these packages have in common is that they are command-line tools that just so happen to be written in Python. Someone who wants to install mercurial with Spack isn't going to realize that it is written in Python, and they certainly aren't going to assume the package is called py-mercurial. For this reason, we manually renamed the package to mercurial.

Likewise, there are occasionally packages that don't use the PythonPackage build system but should still be prepended with py-. For example:

  • py-genders
  • py-py2cairo
  • py-pygobject
  • py-pygtk
  • py-pyqt
  • py-pyserial
  • py-sip
  • py-xpyb

These packages are primarily used as Python libraries, not as command-line tools. You may see C/C++ packages that have optional Python language-bindings, such as:

  • antlr
  • cantera
  • conduit
  • pagmo
  • vtk

Don't prepend these kinds of packages with py-. When in doubt, think about how this package will be used. Is it primarily a Python library that will be imported in other Python scripts? Or is it a command-line tool, or a C/C++/Fortran program with optional Python modules? The former should be prepended with py-, while the latter should not.

extends vs. depends_on

This is very similar to the naming dilemma above, with a slight twist. As mentioned in the Packaging Guide, extends and depends_on are very similar, but extends ensures that the extension and extendee share the same prefix in views. This allows the user to import a Python module without having to add that module to PYTHONPATH.

When deciding between extends and depends_on, the best rule of thumb is to check the installation prefix. If Python libraries are installed to <prefix>/lib/pythonX.Y/site-packages, then you should use extends. If Python libraries are installed elsewhere or the only files that get installed reside in <prefix>/bin, then don't use extends.
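
As a sketch, the two cases might look like:

# Libraries land in <prefix>/lib/pythonX.Y/site-packages
extends("python")

# Only a Python interpreter is needed, e.g., to run build scripts
depends_on("python", type="build")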

Alternatives to Spack

PyPI has hundreds of thousands of packages that are not yet in Spack, and pip may be a perfectly valid alternative to using Spack. The main advantage of Spack over pip is its ability to compile non-Python dependencies. It can also build cythonized versions of a package or link to an optimized BLAS/LAPACK library like MKL, resulting in calculations that run orders of magnitude faster. Spack does not offer a significant advantage over other Python package management systems for installing and using tools like flake8 and sphinx. But if you need packages with non-Python dependencies like numpy and scipy, Spack will be very valuable to you.

Anaconda is another great alternative to Spack, and comes with its own conda package manager. Like Spack, Anaconda is capable of compiling non-Python dependencies. Anaconda contains many Python packages that are not yet in Spack, and Spack contains many Python packages that are not yet in Anaconda. The main advantage of Spack over Anaconda is its ability to choose a specific compiler and BLAS/LAPACK or MPI library. Spack also has better platform support for supercomputers, and can build optimized binaries for your specific microarchitecture.

External documentation

For more information on Python packaging, see:


For more information on build and installation frontend tools, see:


For more information on build backend tools, see:


R

Like Python, R has its own built-in build system.

The R build system is remarkably uniform and well-tested. This makes it one of the easiest build systems to create new Spack packages for.

Phases

The RBuilder and RPackage base classes have a single phase:

1. install - install the package

By default, this phase runs the following command:

$ R CMD INSTALL --library=/path/to/installation/prefix/rlib/R/library .


Finding R packages

The vast majority of R packages are hosted on CRAN - The Comprehensive R Archive Network. If you are looking for a particular R package, search for "CRAN <package-name>" and you should quickly find what you want. If it isn't on CRAN, try Bioconductor, another common R repository.

For the purposes of this tutorial, we will be walking through r-caret as an example. If you search for "CRAN caret", you will quickly find what you are looking for at https://cran.r-project.org/package=caret. https://cran.r-project.org is the main CRAN website. However, CRAN also has a https://cloud.r-project.org site that automatically redirects to mirrors around the world. For stability and performance reasons, we will use https://cloud.r-project.org/package=caret. If you search for "Package source", you will find the download URL for the latest release. Use this URL with spack create to create a new package.

Package name

The first thing you'll notice is that Spack prepends r- to the front of the package name. This is how Spack separates R package extensions from the rest of the packages in Spack. Without this, we would end up with package name collisions more frequently than we would like. For instance, there are already packages for both:

  • ape and r-ape
  • curl and r-curl
  • gmp and r-gmp
  • jpeg and r-jpeg
  • openssl and r-openssl
  • uuid and r-uuid
  • xts and r-xts

Many popular programs written in C/C++ are later ported to R as a separate project.

Description

The first thing you'll need to add to your new package is a description. The top of the homepage for caret lists the following description:

Classification and Regression Training

Misc functions for training and plotting classification and regression models.



The first line is a short description (title) and the second line is a long description. In this case the long description is only a single line, but often it spans several lines. Spack makes use of both, and the convention is to include both when creating an R package.

Homepage

If you look at the bottom of the page, you'll see:

Linking:

Please use the canonical form https://CRAN.R-project.org/package=caret to link to this page.



Please uphold the wishes of the CRAN admins and use https://cloud.r-project.org/package=caret as the homepage instead of https://cloud.r-project.org/web/packages/caret/index.html. The latter may change without notice.

URL

As previously mentioned, the download URL for the latest release can be found by searching "Package source" on the homepage.

List URL

CRAN maintains a single webpage containing the latest release of every single package: https://cloud.r-project.org/src/contrib/

Of course, as soon as a new release comes out, the version you were using in your package is no longer available at that URL. It is moved to an archive directory. If you search for "Old sources", you will find: https://cloud.r-project.org/src/contrib/Archive/caret

If you only specify the URL for the latest release, your package will no longer be able to fetch that version as soon as a new release comes out. To get around this, add the archive directory as a list_url.
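
For caret, a sketch might look like the following (the version in url is illustrative):

url = 'https://cloud.r-project.org/src/contrib/caret_6.0-86.tar.gz'
list_url = 'https://cloud.r-project.org/src/contrib/Archive/caret'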

Bioconductor packages

Bioconductor packages are set up in a similar way to CRAN packages, but there are some very important distinctions. Bioconductor packages can be found at: https://bioconductor.org/. Bioconductor packages are R packages and so follow the same packaging scheme as CRAN packages. What is different is that Bioconductor itself is versioned and released. This scheme, using the Bioconductor package installer, allows further specification of the minimum version of R, as well as further restrictions on the dependencies between packages, than what is possible with the native R packaging system. Spack cannot replicate these extra features, so Bioconductor packages in Spack need to be managed as a group during updates in order to maintain package consistency with Bioconductor itself.

Another key difference is that, while previous versions of packages are available, they are not available from a site that can be set programmatically, so a list_url attribute cannot be used. However, each package is also available in a git repository, with branches corresponding to each Bioconductor release. Thus, it is always possible to retrieve the version of any package corresponding to a Bioconductor release simply by fetching the branch that corresponds to that Bioconductor release of the package repository. For this reason, Spack's Bioconductor R packages use the git repository, with the commit of the respective branch used in the version() attribute of the package.

cran and bioc attributes

Much like the pypi attribute for Python packages, and because R packages are obtained from specific repositories, shortcut attributes are available that set homepage, url, list_url, and git for you. For example, the following cran attribute:

cran = 'caret'


is equivalent to:
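
# Reconstructed from the conventions above; the url version is illustrative
homepage = 'https://cloud.r-project.org/package=caret'
url = 'https://cloud.r-project.org/src/contrib/caret_6.0-86.tar.gz'
list_url = 'https://cloud.r-project.org/src/contrib/Archive/caret'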


Likewise, the following bioc attribute:

bioc = 'BiocVersion'


is equivalent to:
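
# Reconstructed sketch; the repository URL follows the Bioconductor git convention
homepage = 'https://bioconductor.org/packages/BiocVersion'
git = 'https://git.bioconductor.org/packages/BiocVersion'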


Build system dependencies

As an extension of the R ecosystem, your package will obviously depend on R to build and run. Normally, we would use depends_on to express this, but for R packages, we use extends. This implies a special dependency on R, which is used to set environment variables such as R_LIBS uniformly. Since every R package needs this, the RPackage base class contains:

extends('r')


Take a close look at the homepage for caret. If you look at the "Depends" section, you'll notice that caret depends on "R (≥ 3.2.0)". You should add this to your package like so:

depends_on('r@3.2.0:', type=('build', 'run'))


R dependencies

R packages are often small and follow the classic Unix philosophy of doing one thing well. They are modular and usually depend on several other packages. You may find a single package with over a hundred dependencies. Luckily, R packages are well-documented and list all of their dependencies in the following sections:

  • Depends
  • Imports
  • LinkingTo

As far as Spack is concerned, all 3 of these dependency types correspond to type=('build', 'run'), so you don't have to worry about the details. If you are curious what they mean, https://github.com/spack/spack/issues/2951 has a pretty good summary:

Depends is required and will cause those R packages to be attached, that is, their APIs are exposed to the user. Imports loads packages so that the package importing these packages can access their APIs, while not being exposed to the user. When a user calls library(foo) s/he attaches package foo and all of the packages under Depends. Any function in one of these package can be called directly as bar(). If there are conflicts, user can also specify pkgA::bar() and pkgB::bar() to distinguish between them. Historically, there was only Depends and Suggests, hence the confusing names. Today, maybe Depends would have been named Attaches.

The LinkingTo is not perfect and there was recently an extensive discussion about API/ABI among other things on the R-devel mailing list among very skilled R developers:




Some packages also have a fourth section:

Suggests

These are optional, rarely-used dependencies that a user might find useful. You should NOT add these dependencies to your package. R packages already have enough dependencies as it is, and adding optional dependencies can really slow down the concretization process. They can also introduce circular dependencies.

A fifth rarely used section is:

Enhances

This means that the package can be used as an optional dependency for another package. Again, these packages should NOT be listed as dependencies.

Core, recommended, and non-core packages

If you look at "Depends", "Imports", and "LinkingTo", you will notice 3 different types of packages:

Core packages

If you look at the caret homepage, you'll notice a few dependencies that don't have a link to the package, like methods, stats, and utils. These packages are part of the core R distribution and are tied to the R version installed. You can basically consider these to be "R itself". These are so essential to R that it would not make sense for them to be updated via CRAN. If you did, you would basically get a different version of R. Thus, they're updated when R is updated.

You can find a list of these core libraries at: https://github.com/wch/r-source/tree/trunk/src/library

When you install R, there is an option called --with-recommended-packages. This flag causes the R installation to include a few "Recommended" packages (a legacy term). For historical reasons they are quite tied to the core R distribution, developed by the R core team or people closely related to it. The R core distribution "knows" about these packages, but they are indeed distributed via CRAN. Because they're distributed via CRAN, they can also be updated between R version releases.

Spack explicitly adds the --without-recommended-packages flag to prevent the installation of these packages. Due to the way Spack handles package activation (symlinking packages to the R installation directory), pre-existing recommended packages will cause conflicts for already-existing files. We could either not include these recommended packages in Spack and require them to be installed through --with-recommended-packages, or we could not install them with R and let users choose the version of the package they want to install. We chose the latter.

Since these packages are so commonly distributed with the R system, many developers may assume these packages exist and fail to list them as dependencies. Watch out for this.

You can find a list of these recommended packages at: https://github.com/wch/r-source/blob/trunk/share/make/vars.mk

Non-core packages

These are packages that are neither "core" nor "recommended". There are more than 10,000 of these packages hosted on CRAN alone.

For each of these package types, if you see that a specific version is required, for example, "lattice (≥ 0.20)", please add this information to the dependency:

depends_on('r-lattice@0.20:', type=('build', 'run'))


Non-R dependencies

Some packages depend on non-R libraries for linking. Check out the r-stringi package for an example: https://cloud.r-project.org/package=stringi. If you search for the text "SystemRequirements", you will see:

ICU4C (>= 52, optional)


This is how non-R dependencies are listed. Make sure to add these dependencies. The default dependency type should suffice.
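
For example, the requirement above might be expressed as follows (a sketch, assuming Spack's icu4c package provides the library):

depends_on('icu4c@52:')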

Passing arguments to the installation

Some R packages provide additional flags that can be passed to R CMD INSTALL, often to locate non-R dependencies. r-rmpi is an example of this, and provides flags for linking to an MPI library. To pass these to the installation command, you can override configure_args like so:

def configure_args(self):
    spec = self.spec
    mpi_name = spec['mpi'].name
    # The type of MPI. Supported values are:
    # OPENMPI, LAM, MPICH, MPICH2, or CRAY
    if mpi_name == 'openmpi':
        Rmpi_type = 'OPENMPI'
    elif mpi_name == 'mpich':
        Rmpi_type = 'MPICH2'
    else:
        raise InstallError('Unsupported MPI type')
    return [
        '--with-Rmpi-type={0}'.format(Rmpi_type),
        '--with-mpi={0}'.format(spec['mpi'].prefix),
    ]


There is a similar configure_vars function that can be overridden to pass variables to the build.

Alternatives to Spack

CRAN hosts over 10,000 R packages, most of which are not in Spack. Many users may not need the advanced features of Spack, and may prefer to install R packages the normal way:

$ R
> install.packages("ggplot2")


R will search CRAN for the ggplot2 package and install all necessary dependencies for you. If you want to update all installed R packages to the latest release, you can use:

> update.packages(ask = FALSE)


This works great for users who have internet access, but those on an air-gapped cluster will find it easier to let Spack build a download mirror and install these packages for them.

Where Spack really shines is its ability to install non-R dependencies and link to them properly, something the R installation mechanism cannot handle.

External documentation

For more information on installing R packages, see: https://stat.ethz.ch/R-manual/R-devel/library/utils/html/INSTALL.html

For more information on writing R packages, see: https://cloud.r-project.org/doc/manuals/r-release/R-exts.html

In particular, https://cloud.r-project.org/doc/manuals/r-release/R-exts.html#Package-Dependencies has a great explanation of the difference between Depends, Imports, and LinkingTo.

Racket

Much like Python, Racket packages and modules have their own special build system. To learn more about the specifics of the Racket package system, please refer to the Racket Docs.

Phases

The RacketBuilder and RacketPackage base classes provide an install phase that can be overridden, corresponding to the use of:

$ raco pkg install


Caveats

In principle, raco supports a second, setup phase; however, we have not implemented this separately, as in normal circumstances, install also handles running setup automatically.

Unlike Python, Racket currently only supports two installation scopes for packages, user or system, and keeps a registry of installed packages at each scope in its configuration files. This means we can't simply compose a "RACKET_PATH" environment variable listing all of the places packages are installed, and update it at will.

Unfortunately this means that all currently installed packages which extend Racket via raco pkg install are accessible whenever Racket is accessible.

Additionally, because Spack does not implement uninstall hooks, uninstalling a Spack rkt- package will have no effect on the raco installed packages visible to your Racket installation. Instead, you must manually run raco pkg remove to keep the two package managers in a mutually consistent state.

Ruby

Like Perl, Python, and R, Ruby has its own build system for installing Ruby gems.

Phases

The RubyBuilder and RubyPackage base classes provide the following phases that can be overridden:

1. build - build everything needed to install
2. install - install everything from build directory

For packages that come with a *.gemspec file, these phases run:

$ gem build *.gemspec
$ gem install *.gem


For packages that come with a Rakefile, these phases run:

$ rake package
$ gem install *.gem


For packages that come pre-packaged as a *.gem file, the build phase is skipped and the install phase runs:

$ gem install *.gem


These are all standard gem commands and can be found by running:

$ gem help commands


For packages that only distribute *.gem files, these files can be downloaded with the expand=False option in the version directive. The build phase will be automatically skipped.
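
A sketch of such a version directive (the version number is hypothetical):

version('1.2.3', expand=False)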

Important files

When building from source, Ruby packages can be identified by the presence of any of the following files:

  • *.gemspec
  • Rakefile
  • setup.rb (not yet supported)

However, not all Ruby packages are released as source code. Some are only released as *.gem files. These files can be extracted using:

$ gem unpack *.gem


Description

The *.gemspec file may contain something like:

summary = 'An implementation of the AsciiDoc text processor and publishing toolchain'
description = 'A fast, open source text processor and publishing toolchain for converting AsciiDoc content to HTML 5, DocBook 5, and other formats.'


Either of these can be used for the description of the Spack package.

Homepage

The *.gemspec file may contain something like:
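
# Continuing the asciidoctor example above; URL assumed
homepage = 'https://asciidoctor.org'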


This should be used as the official homepage of the Spack package.

Build system dependencies

All Ruby packages require Ruby at build and run-time. For this reason, the base class contains:

extends('ruby')


The *.gemspec file may contain something like:

required_ruby_version = '>= 2.3.0'


This can be added to the Spack package using:

depends_on('ruby@2.3.0:', type=('build', 'run'))


Ruby dependencies

When you install a package with gem, it reads the *.gemspec file in order to determine the dependencies of the package. If the dependencies are not yet installed, gem downloads them and installs them for you. This may sound convenient, but Spack cannot rely on this behavior for two reasons:

1. Spack needs to be able to install packages on air-gapped networks.

If there is no internet connection, gem can't download the package dependencies. By explicitly listing every dependency in the package.py, Spack knows what to download ahead of time.

2. Duplicate installations of the same dependency may occur.

Spack supports activation of Ruby extensions, which involves symlinking the package installation prefix to the Ruby installation prefix. If your package is missing a dependency, that dependency will be installed to the installation directory of the same package. If you try to activate the package + dependency, it may cause a problem if that package has already been activated.


For these reasons, you must always explicitly list all dependencies. Although the documentation may list the package's dependencies, often the developers assume people will use gem and won't have to worry about it. Always check the *.gemspec file to find the true dependencies.

Check for the following clues in the *.gemspec file:

  • add_runtime_dependency

    These packages are required for installation.

  • add_dependency

    This is an alias for add_runtime_dependency.

  • add_development_dependency

    These packages are optional dependencies used for development. They should not be added as dependencies of the package.
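
For example, a gemspec clue like add_runtime_dependency 'asciidoctor', '~> 2.0' might translate to the following directive (a sketch; the Spack package name and version range are assumptions):

depends_on('ruby-asciidoctor@2.0:2', type=('build', 'run'))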


External documentation

For more information on Ruby packaging, see: https://guides.rubygems.org/

Bundle

BundlePackage represents a set of packages that are expected to work well together, such as a collection of commonly used software libraries. The associated software is specified as dependencies.

If it makes sense, variants, conflicts, and requirements can be added to the package. Variants ensure that common build options are consistent across the packages supporting them. Conflicts and requirements prevent attempts to build with known bugs or limitations.

For example, if MyBundlePackage is known to only build on linux, it could use the require directive as follows:

require("platform=linux", msg="MyBundlePackage only builds on linux")


Spack has a number of built-in bundle packages, such as:

  • AmdAocl
  • EcpProxyApps
  • Libc
  • Xsdk

where Xsdk also inherits from CudaPackage and RocmPackage and Libc is a virtual bundle package for the C standard library.

Creation

Be sure to specify the bundle template if you are using spack create to generate a package from the template. For example, use the following command to create a bundle package whose class name will be Mybundle:

$ spack create --template bundle --name mybundle


Phases

The BundlePackage base class does not provide any phases by default since the bundle does not represent a build system.

URL

The url property does not have meaning since there is no package-specific code to fetch.

Version

At least one version must be specified in order for the package to build.
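
A minimal sketch of a bundle package (name, version, and dependencies are hypothetical):

class Mytools(BundlePackage):
    """Collection of commonly used tools."""

    version("1.0")

    depends_on("zlib")
    depends_on("cmake", type="build")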

Cuda

Unlike other packages, CudaPackage does not represent a build system. Instead, its goal is to simplify and unify usage of CUDA in other packages by providing a mixin class.

You can find source for the package at https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/cuda.py.

Variants

This package provides the following variants:

  • cuda

    This variant is used to enable/disable building with CUDA. The default is disabled (or False).

  • cuda_arch

    This variant supports the optional specification of one or multiple architectures. Valid values are maintained in the cuda_arch_values property and are the numeric character equivalent of the compute capability version (e.g., '10' for version 1.0). Each provided value affects associated CUDA dependencies and compiler conflicts.

    The variant builds both PTX code for the _virtual_ architecture (e.g. compute_10) and binary code for the _real_ architecture (e.g. sm_10).

    GPUs and their compute capability versions are listed at https://developer.nvidia.com/cuda-gpus .


Conflicts

Conflicts are used to prevent builds with known bugs or issues. While base CUDA conflicts have been included with this package, you may want to add more for your software.

For example, if your package requires cuda_arch to be specified when cuda is enabled, you can add the following conflict to your package to terminate such build attempts with a suitable message:

conflicts("cuda_arch=none", when="+cuda",

msg="CUDA architecture is required")


Similarly, if your software does not support all cuda_arch values, you could add conflicts to your package for those versions. For example, suppose your software does not work with CUDA compute capability versions prior to SM 5.0 (50). You can add the following code to display a custom message should a user attempt such a build:

unsupported_cuda_archs = [
    "10", "11", "12", "13",
    "20", "21",
    "30", "32", "35", "37",
]
for value in unsupported_cuda_archs:
    conflicts(f"cuda_arch={value}", when="+cuda",
              msg=f"CUDA architecture {value} is not supported")


Methods

This package provides one custom helper method, which is used to build standard CUDA compiler flags.

cuda_flags

This built-in static method returns a list of command line flags for the chosen cuda_arch value(s). The flags are intended to be passed to the CUDA compiler driver (i.e., nvcc).

This method must be explicitly called when you are creating the arguments for your build in order to use the values.



Usage

This helper package can be added to your package by adding it as a base class of your package. For example, you can add it to your CMakePackage-based package as follows:


class MyCudaPackage(CMakePackage, CudaPackage):
    ...
    def cmake_args(self):
        spec = self.spec
        args = []
        ...
        if spec.satisfies("+cuda"):
            # Set up the cuda macros needed by the build
            args.append("-DWITH_CUDA=ON")
            cuda_arch_list = spec.variants["cuda_arch"].value
            cuda_arch = cuda_arch_list[0]
            if cuda_arch != "none":
                args.append(f"-DCUDA_FLAGS=-arch=sm_{cuda_arch}")
        else:
            # Ensure build with cuda is disabled
            args.append("-DWITH_CUDA=OFF")
        ...
        return args


assuming only the WITH_CUDA and CUDA_FLAGS flags are required. You will need to customize options as needed for your build.

This example also illustrates how to check for the cuda variant using self.spec and how to retrieve the cuda_arch variant's value, which is a list, using self.spec.variants["cuda_arch"].value.

With over 70 packages using CudaPackage as of January 2021, there are lots of examples to choose from to get more ideas for using this package.

Custom Build Systems

While the built-in build systems should meet your needs for the vast majority of packages, some packages provide custom build scripts. This guide is intended for the following use cases:

  • Packaging software with its own custom build system
  • Adding support for new build systems

If you want to add support for a new build system, a good place to start is to look at the definitions of other build systems. This guide focuses mostly on how Spack's build systems work.

In this guide, we will be using the perl and cmake packages as examples. perl's build system is a hand-written Configure shell script, while cmake bootstraps itself during installation. Both of these packages require custom build systems.

Base class

If your package does not belong to any of the built-in build systems that Spack already supports, you should inherit from the Package base class. Package is a simple base class with a single phase: install. If your package is simple, you may be able to simply write an install method that gets the job done. However, if your package is more complex and installation involves multiple steps, you should add separate phases as mentioned in the next section.
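
For instance, a minimal sketch of such a package (the build script name and flag are hypothetical):

class Example(Package):
    """Software with a hand-written build script."""

    def install(self, spec, prefix):
        # Run the project's own build script, pointing it at the prefix
        build_script = Executable("./build.sh")
        build_script("--prefix={0}".format(prefix))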

If you are creating a new build system base class, you should inherit from PackageBase. This is the superclass for all build systems in Spack.

Phases

The most important concept in Spack's build system support is the idea of phases. Each build system defines a set of phases that are necessary to install the package. They usually follow some sort of "configure", "build", "install" guideline, but any of those phases may be missing or combined with another phase.

If you look at the perl package, you'll see:

phases = ["configure", "build", "install"]


Similarly, cmake defines:

phases = ["bootstrap", "build", "install"]


If we look at the cmake example, this tells Spack's PackageBase class to run the bootstrap, build, and install functions in that order. It is now up to you to define these methods.

Phase and phase_args functions

If we look at perl, we see that it defines a configure method:

def configure(self, spec, prefix):
    configure = Executable("./Configure")
    configure(*self.configure_args())


There is also a corresponding configure_args function that handles all of the arguments to pass to Configure, just like in AutotoolsPackage. Comparatively, the build and install phases are pretty simple:

def build(self, spec, prefix):
    make()

def install(self, spec, prefix):
    make("install")


The cmake package looks very similar, but with a bootstrap function instead of configure:

def bootstrap(self, spec, prefix):
    bootstrap = Executable("./bootstrap")
    bootstrap(*self.bootstrap_args())

def build(self, spec, prefix):
    make()

def install(self, spec, prefix):
    make("install")


Again, there is a bootstrap_args function that determines the correct bootstrap flags to use.

run_before/run_after

Occasionally, you may want to run extra steps either before or after a given phase. This applies not just to custom build systems, but to existing build systems as well. You may need to patch a file that is generated by configure, or install extra files in addition to what make install copies to the installation prefix. This is where @run_before and @run_after come in.

These Python decorators allow you to write functions that are called before or after a particular phase. For example, in perl, we see:

@run_after("install")
def install_cpanm(self):

spec = self.spec
if spec.satisfies("+cpanm"):
with working_dir(join_path("cpanm", "cpanm")):
perl = spec["perl"].command
perl("Makefile.PL")
make()
make("install")


This extra step automatically installs cpanm in addition to the base Perl installation.

on_package_attributes

The run_before/run_after logic discussed above becomes particularly powerful when combined with the @on_package_attributes decorator. This decorator allows you to conditionally run certain functions depending on the attributes of that package. The most common example is conditional testing. Many unit tests are prone to failure, even when there is nothing wrong with the installation. Unfortunately, non-portable unit tests and tests that are "supposed to fail" are more common than we would like. Instead of always running unit tests on installation, Spack lets users conditionally run tests with the --test=root flag.

If we wanted to define a function that would conditionally run if and only if this flag is set, we would use the following line:

@on_package_attributes(run_tests=True)


Testing

Let's put everything together and add unit tests to be optionally run during the installation of our package. In the perl package, we can see:

@run_after("build")
@on_package_attributes(run_tests=True)
def test(self):

make("test")


As you can guess, this runs make test after building the package, if and only if testing is requested. Again, this is not specific to custom build systems, it can be added to existing build systems as well.

WARNING:

The order of decorators matters. The following ordering:

@run_after("install")
@on_package_attributes(run_tests=True)


works as expected. However, if you reverse the ordering:

@on_package_attributes(run_tests=True)
@run_after("install")


the tests will always be run regardless of whether or not --test=root is requested. See https://github.com/spack/spack/issues/3833 for more information.



Ideally, every package in Spack will have some sort of test to ensure that it was built correctly. It is up to the package authors to make sure this happens. If you are adding a package for some software and the developers list commands to test the installation, please add these tests to your package.py.

For more information on other forms of package testing, refer to Checking an installation.

IntelOneapi

Contents

IntelOneapi
  • oneAPI packages in Spack
  • Examples
  • Building a Package With icx
  • Using oneAPI Spack environment
  • Using oneAPI MPI to Satisfy a Virtual Dependence

Using Externally Installed oneAPI Tools
  • Compilers
  • Libraries

  • Using oneAPI Tools Installed by Spack
  • More information


oneAPI packages in Spack

Spack can install and use the Intel oneAPI products. You may either use spack to install the oneAPI tools or use the Intel installers. After installation, you may use the tools directly, or use Spack to build packages with the tools.

The Spack Python class IntelOneapiPackage is a base class that is used by IntelOneapiCompilers, IntelOneapiMkl, IntelOneapiTbb and other classes to implement the oneAPI packages. Search for oneAPI at packages.spack.io for the full list of available oneAPI packages, or use:

spack list -d oneAPI


For more information on a specific package, do:

spack info --all <package-name>


Intel no longer releases new versions of Parallel Studio, which can be used in Spack via the Intel section below. All of its components can now be found in oneAPI.

Examples

Building a Package With icx

In this example, we build patchelf with icc and icx. The compilers are installed with spack.

Install the oneAPI compilers:

spack install intel-oneapi-compilers


Add the compilers to your compilers.yaml so spack can use them:

spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin/intel64
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin


Verify that the compilers are available:

spack compiler list


The intel-oneapi-compilers package includes 2 families of compilers:

  • intel: icc, icpc, ifort. Intel's classic compilers.
  • oneapi: icx, icpx, ifx. Intel's new generation of compilers based on LLVM.

To build the patchelf Spack package with icc, do:

spack install patchelf%intel


To build with icx, do:

spack install patchelf%oneapi


Using oneAPI Spack environment

In this example, we build lammps with icx using a Spack environment for oneAPI packages created by Intel. The compilers are installed with Spack as in the example above.

Install the oneAPI compilers:

spack install intel-oneapi-compilers


Add the compilers to your compilers.yaml so Spack can use them:

spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin/intel64
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin


Verify that the compilers are available:

spack compiler list


Clone the spack-configs repo and activate the Intel oneAPI CPU environment:

git clone https://github.com/spack/spack-configs
spack env activate spack-configs/INTEL/CPU
spack concretize -f


The Intel oneAPI CPU environment contains applications tested and validated by Intel; this list is constantly extended. It currently supports:

  • Devito
  • GROMACS
  • HPCG
  • HPL
  • LAMMPS
  • OpenFOAM
  • Quantum Espresso
  • STREAM
  • WRF

To build lammps with the oneAPI compiler from this environment, just run:

spack install lammps


Compiled binaries can be found using:

spack cd -i lammps


You can do the same for all other applications from this environment.

Using oneAPI MPI to Satisfy a Virtual Dependence

The hdf5 package works with any compatible MPI implementation. To build hdf5 with Intel oneAPI MPI do:

spack install hdf5 +mpi ^intel-oneapi-mpi


Using Externally Installed oneAPI Tools

Spack can also use oneAPI tools that are manually installed with Intel Installers. The procedures for configuring Spack to use external compilers and libraries are different.

Compilers

To use the compilers, add some information about the installation to compilers.yaml. For most users, it is sufficient to do:

spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin/intel64
spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin


Adapt the paths above if you did not install the tools in the default location. After adding the compilers, using them is the same as if you had installed the intel-oneapi-compilers package. Another option is to manually add the configuration to compilers.yaml as described in Compiler configuration.

Libraries

If you want Spack to use oneMKL that you have installed without Spack in the default location, then add the following to ~/.spack/packages.yaml, adjusting the version as appropriate:

intel-oneapi-mkl:
  externals:
  - spec: intel-oneapi-mkl@2021.1.1
    prefix: /opt/intel/oneapi/


Using oneAPI Tools Installed by Spack

Spack can be a convenient way to install and configure compilers and libraries, even if you do not intend to build a Spack package. If you want to build a Makefile project using Spack-installed oneAPI compilers, then use spack to configure your environment:

spack load intel-oneapi-compilers


And then you can build with:

CXX=icpx make


You can also use Spack-installed libraries. For example:

spack load intel-oneapi-mkl


This will update CPATH, LIBRARY_PATH, and other environment variables for building an application with oneMKL.

More information

This section describes basic use of oneAPI, especially if it has changed compared to Parallel Studio. See Intel for more information on Selecting libraries to satisfy virtual packages, Unrelated packages, Integrating external libraries, and Tips for configuring client packages to use MKL.

Intel

Contents

Intel
Intel packages in Spack
  • Packages under no-cost license
  • Licensed packages
  • Unrelated packages
  • Configuring Spack to use Intel licenses
  • Pointing to an existing license server
  • Installing a standalone license file


Integration of Intel tools installed external to Spack
  • Integrating external compilers
  • Integrating external libraries

Installing Intel tools within Spack
  • Install steps for packages with compilers and libraries
  • Install steps for library-only packages
  • Debug notes

Using Intel tools in Spack to install client packages
  • Selecting Intel compilers
  • Selecting libraries to satisfy virtual packages
  • Using Intel tools as explicit dependency
  • Tips for configuring client packages to use MKL

Footnotes


Intel packages in Spack

This is an earlier version of Intel software development tools and has now been replaced by Intel oneAPI Toolkits.

Spack can install and use several software development products offered by Intel. Some of these are available under no-cost terms, others require a paid license. All share the same basic steps for configuration, installation, and, where applicable, license management. The Spack Python class IntelPackage implements these steps.

Spack interacts with Intel tools through several routes, as it does for any other package:

1. Accept system-provided tools after you declare them to Spack as external packages.

2. Install the products for you as internal packages in Spack.

3. Use the packages, regardless of installation route, to install what we'll call client packages for you, this being Spack's primary purpose.

An auxiliary route follows from route 2, as it would for most Spack packages, namely:

4. Make Spack-installed Intel tools available outside of Spack for ad-hoc use, typically through Spack-managed modulefiles.

This document covers routes 1 through 3.

Packages under no-cost license

Intel's standalone performance library products, notably MKL and MPI, are available for use under a simplified license since 2017 [fn1]. They are packaged in Spack as:

  • intel-mkl -- Math Kernel Library (linear algebra and FFT),
  • intel-mpi -- The Intel-MPI implementation (derived from MPICH),
  • intel-ipp -- Primitives for image-, signal-, and data-processing,
  • intel-daal -- Machine learning and data analytics.

Some earlier versions of these libraries were released under a paid license. For these older versions, the license must be available at installation time of the products and during compilation of client packages.

The library packages work well with the Intel compilers but do not require them -- those packages can just as well be used with other compilers. The Intel compiler invocation commands offer custom options to simplify linking Intel libraries (sometimes considerably), but Spack always uses fairly explicit linkage anyway.

Licensed packages

Intel's core software development products that provide compilers, analyzers, and optimizers do require a paid license. In Spack, they are packaged as:

  • intel-parallel-studio -- the entire suite of compilers and libraries,
  • intel -- a subset containing just the compilers and the Intel-MPI runtime [fn2].

The license is needed at installation time and to compile client packages, but never to merely run any resulting binaries. The license status for a given Spack package is normally specified in the package code through directives like license_required (see Licensed software). For the Intel packages, however, the class code provides these directives (in exchange for forfeiting a measure of OOP purity) and takes care of idiosyncrasies like historic version dependence.

The libraries that are provided in the standalone packages are also included in the all-encompassing intel-parallel-studio. To complicate matters a bit, that package is sold in 3 "editions", of which only the upper-tier cluster edition supports compiling MPI applications, and hence only that edition can provide the mpi virtual package. (As mentioned [fn2], all editions provide support for running MPI applications.)

The edition forms the leading part of the version number for Spack's intel* packages discussed here. This differs from the primarily numeric version numbers seen with most other Spack packages. For example, we have:

$ spack info intel-parallel-studio
...
Preferred version:
    professional.2018.3    http:...

Safe versions:
    professional.2018.3    http:...
    ...
    composer.2018.3        http:...
    ...
    cluster.2018.3         http:...
    ...


The full studio suite, capable of compiling MPI applications, currently requires about 12 GB of disk space when installed (see section Install steps for packages with compilers and libraries for detailed instructions). If you need to save disk space or installation time, you could install the intel compilers-only subset (0.6 GB) and just the library packages you need, for example intel-mpi (0.5 GB) and intel-mkl (2.5 GB).

Unrelated packages

The following packages do not use the Intel installer and are not in class IntelPackage that is discussed here:

  • intel-gpu-tools -- Test suite and low-level tools for the Linux Direct Rendering Manager
  • intel-mkl-dnn -- Math Kernel Library for Deep Neural Networks (CMakePackage)
  • intel-xed -- X86 machine instructions encoder/decoder
  • intel-tbb -- Standalone version of Intel Threading Building Blocks. Note that a TBB runtime version is included with intel-mkl, and development versions are provided by the packages intel-parallel-studio (all editions) and its intel subset.

Configuring Spack to use Intel licenses

If you wish to integrate licensed Intel products into Spack as external packages (route 1 above) we assume that their license configuration is in place and is working [fn3]. In this case, skip to section Integration of Intel tools installed external to Spack.

If you plan to have Spack install licensed products for you (route 2 above), the Intel product installer that Spack will run underneath must have access to a license that is either provided by a license server or as a license file. The installer may be able to locate a license that is already configured on your system. If it cannot, you must configure Spack to provide either the server location or the license file.

For authoritative information on Intel licensing, see:


Pointing to an existing license server

Installing and configuring a license server is outside the scope of Spack. We assume that:

  • Your system administrator has a license server running.
  • The license server offers valid licenses for the Intel packages of interest.
  • You can access these licenses under the user id running Spack.

Be aware of the difference between (a) installing and configuring a license server, and (b) configuring client software to use a server's so-called floating licenses. We are concerned here with (b) only. The process of obtaining a license from a server for temporary use is called "checking out a license". For that, a client application such as the Intel package installer or a compiler needs to know the host name and port number of one or more license servers that it may query [fn4].

Follow one of three methods to point client software to a floating license server. Ideally, your license administrator will already have implemented one that can be used unchanged in Spack: Look for the environment variable INTEL_LICENSE_FILE or for files /opt/intel/licenses/*.lic that contain:

SERVER  hostname  hostid_or_ANY  portnum
USE_SERVER


The relevant tokens, among possibly others, are the USE_SERVER line, intended specifically for clients, and one or more SERVER lines above it which give the network address.

If you cannot find pre-existing /opt/intel/licenses/*.lic files and the INTEL_LICENSE_FILE environment variable is not set (even after you loaded any relevant modulefiles), ask your license administrator for the server address(es) and place them in a "global" license file within your Spack directory tree (as shown below).

Installing a standalone license file

If you purchased a user-specific license, follow Intel's instructions to "activate" it for your serial number, then download the resulting license file. If needed, request to have the file re-sent to you.

Intel's license files are text files that contain tokens in the proprietary "FLEXlm" format and whose name ends in .lic. Intel installers and compilers look for license files in several locations when they run. Place your license by one of the following means, in order of decreasing preference:

  • Default directory

    Install your license file in the directory /opt/intel/licenses/ if you have write permission to it. This directory is inspected by all Intel tools and is therefore preferred, as no further configuration will be needed. Create the directory if it does not yet exist. For the file name, either keep the downloaded name or use another suitably plain yet descriptive name that ends in .lic. Adjust file permissions for access by licensed users.

  • Directory given in environment variable

    If you cannot use the default directory, but your system already has set the environment variable INTEL_LICENSE_FILE independent from Spack [fn5], then, if you have the necessary write permissions, place your license file in one of the directories mentioned in this environment variable. Adjust file permissions to match licensed users.

    TIP:

If your system has not yet set and used the environment variable INTEL_LICENSE_FILE, you could start using it with the spack install stage of licensed tools and subsequent client packages. You would, however, be in a bind to always set that variable in the same manner, across updates and re-installations, and perhaps accommodate additions to it. As this may be difficult in the long run, we recommend that you do not attempt to start using the variable solely for Spack.



  • Spack-managed file

The first time Spack encounters an Intel package that requires a license, it will initialize a Spack-global Intel-specific license file for you, as a template with instructional comments, and bring up an editor [fn6]. Spack will do this even if you have a working license elsewhere on the system.

  • To proceed with an externally configured license, leave the newly templated file as is (containing comments only) and close the editor. You do not need to touch the file again.
  • To configure your own standalone license, copy the contents of your downloaded license file into the opened file, save it, and close the editor.
  • To use a license server (i.e., a floating network license) that is not already configured elsewhere on the system, supply your license server address(es) in the form of SERVER and USE_SERVER lines at the beginning of the file [fn7], in the format shown in section Pointing to an existing license server. Save the file and close the editor.

To revisit and manually edit this file, such as prior to a subsequent installation attempt, find it at $SPACK_ROOT/etc/spack/licenses/intel/intel.lic .

Spack will place symbolic links to this file in each directory where licensed Intel binaries were installed. If you kept the template unchanged, Intel tools will simply ignore it.


Integration of Intel tools installed external to Spack

This section discusses route 1 from the introduction.

A site that already uses Intel tools, especially licensed ones, will likely have some versions already installed on the system, especially at a time when Spack is just being introduced. It will be useful to make such previously installed tools available for use by Spack as they are. How to do this varies depending on the type of the tools:

Integrating external compilers

For Spack to use external Intel compilers, you must tell it both where to find them and when to use them. The present section documents the "where" aspect, involving compilers.yaml and, in most cases, long absolute paths. The "when" aspect actually relates to route 3 and requires explicitly stating the compiler as a spec component (in the form foo %intel or foo %intel@compilerversion) when installing client packages or altering Spack's compiler default in packages.yaml. See section Selecting Intel compilers for details.

To integrate a new set of externally installed Intel compilers into Spack follow section Compiler configuration. Briefly, prepare your shell environment like you would if you were to use these compilers normally, i.e., typically by a module load ... or a shell source ... command, then use spack compiler find to make Spack aware of these compilers. This will create a new entry in a suitably scoped and possibly new compilers.yaml file. You could certainly create such a compiler entry manually, but this is error-prone due to the indentation and different data types involved.

The Intel compilers need and use the system's native GCC compiler (gcc on most systems, clang on macOS) to provide certain functionality, notably to support C++. To provide a different GCC compiler for the Intel tools, or more generally set persistent flags for all invocations of the Intel compilers, locate the compilers.yaml entry that defines your Intel compiler, and, using a text editor, change one or both of the following:

1.
At the modules: tag, add a gcc module to the list.
2.
At the flags: tag, add cflags:, cxxflags:, and fflags: key-value entries.

Consult the examples under Compiler configuration and Vendor-Specific Compiler Configuration in the Spack documentation. When done, validate your compiler definition by running spack compiler info intel@compilerversion (replacing compilerversion by the version that you defined).
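
For orientation, a modified compilers.yaml entry might look like the following sketch (the paths, the gcc module name, and the flag value are placeholders, not recommendations):

compilers:
- compiler:
    spec: intel@18.0.3
    operating_system: centos6
    target: x86_64
    paths:
      cc: /opt/intel/bin/icc
      cxx: /opt/intel/bin/icpc
      f77: /opt/intel/bin/ifort
      fc: /opt/intel/bin/ifort
    # 1. A gcc module makes a newer libstdc++ visible to the Intel compilers.
    modules:
    - gcc/7.3.0
    # 2. Persistent flags are passed to every invocation.
    flags:
      cxxflags: -std=c++14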

Be aware that both the GCC integration and persistent compiler flags can also be affected by an advanced third method:

3.
A modulefile that provides the Intel compilers for you could, for the benefit of users outside of Spack, implicitly integrate a specific gcc version via compiler flag environment variables or (hopefully not) via a sneaky extra PATH addition.

Next, visit section Selecting Intel compilers to learn how to tell Spack to use the newly configured compilers.

Integrating external libraries

Configure external library-type packages (as opposed to compilers) in the files $SPACK_ROOT/etc/spack/packages.yaml or ~/.spack/packages.yaml, following the Spack documentation under External Packages.

Similar to compilers.yaml, the packages.yaml files define a package external to Spack in terms of a Spack spec and resolve each such spec via either the paths or modules tokens to a specific pre-installed package version on the system. Since Intel tools generally need environment variables to interoperate, which cannot be conveyed in a mere paths specification, the modules token will be more sensible to use. It resolves the Spack-side spec to a modulefile generated and managed outside of Spack's purview, which Spack will load internally and transiently when the corresponding spec is called upon to compile client packages.

Unlike for compilers, where spack compiler find generates an entry in an existing or new compilers.yaml file, Spack does not offer a command to generate an entirely new packages.yaml entry. You must create new entries yourself in a text editor, though the command spack config [--scope=...] edit packages can help with selecting the proper file. See section Configuration Scopes for an explanation of the different files and section Build customization for specifics and examples of packages.yaml files.

The following example integrates packages embodied by hypothetical external modulefiles intel-mkl/18/... into Spack as packages intel-mkl@...:

$ spack config edit packages


Make sure the file begins with:

packages:


Adapt the following example. Be sure to maintain the indentation:

# other content ...

  intel-mkl:
    externals:
    - spec: "intel-mkl@2018.2.199 arch=linux-centos6-x86_64"
      modules:
      - intel-mkl/18/18.0.2
    - spec: "intel-mkl@2018.3.222 arch=linux-centos6-x86_64"
      modules:
      - intel-mkl/18/18.0.3


The version numbers for the intel-mkl specs defined here correspond to the file and directory names that Intel uses for its products, because those names were adopted and declared as such within Spack's package repository. You can inspect the versions known to your current Spack installation by:

$ spack info intel-mkl


Using the same version numbers for external packages as for packages known internally is useful for clarity, but not strictly necessary. Moreover, with a packages.yaml entry, you can go beyond internally known versions.

Note that the Spack spec in the example does not contain a compiler specification. This is intentional, as the Intel library packages can be used unmodified with different compilers.

A slightly more advanced example illustrates how to provide variants and how to use the buildable: False directive to prevent Spack from installing other versions or variants of the named package through its normal internal mechanism.

packages:
  intel-parallel-studio:
    externals:
    - spec: "intel-parallel-studio@cluster.2018.2.199 +mkl+mpi+ipp+tbb+daal arch=linux-centos6-x86_64"
      modules:
      - intel/18/18.0.2
    - spec: "intel-parallel-studio@cluster.2018.3.222 +mkl+mpi+ipp+tbb+daal arch=linux-centos6-x86_64"
      modules:
      - intel/18/18.0.3
    buildable: False


One additional example illustrates the use of prefix: instead of modules:, useful when external modulefiles are not available or not suitable:

packages:
  intel-parallel-studio:
    externals:
    - spec: "intel-parallel-studio@cluster.2018.2.199 +mkl+mpi+ipp+tbb+daal"
      prefix: /opt/intel
    - spec: "intel-parallel-studio@cluster.2018.3.222 +mkl+mpi+ipp+tbb+daal"
      prefix: /opt/intel
    buildable: False


Note that for the Intel packages discussed here, the directory values in the prefix: entries must be the high-level and typically version-less "installation directory" that has been used by Intel's product installer. Such a directory will typically accumulate various product versions. Amongst them, Spack will select the correct version-specific product directory based on the @version spec component that each path is being defined for.
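
For illustration, such a version-less installation directory typically accumulates version-specific product directories along these lines (names shown are typical of 2018-era installers, not guaranteed):

/opt/intel/                              <- use this directory as the prefix: value
    compilers_and_libraries_2018.2.199/
    compilers_and_libraries_2018.3.222/
    ...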

For further background and details, see External Packages.

Installing Intel tools within Spack

This section discusses route 2 from the introduction.

When a system does not yet have Intel tools installed, or the installed versions are undesirable, Spack can install these tools for you like any regular Spack package and, with appropriate pre- and post-install configuration, use their compilers and/or libraries to install client packages.

Install steps for packages with compilers and libraries

The packages intel-parallel-studio and intel (which is a subset of the former) are many-in-one products that contain both compilers and a set of library packages whose scope depends on the edition. Because they are general products geared toward shell environments, integrating these packages into Spack at their full extent can be somewhat involved.

Note: To install library-only packages like intel-mkl, intel-mpi, and intel-daal follow the next section instead.

1.
Review the section Configuring Spack to use Intel licenses.

2.
To install a version of intel-parallel-studio that provides Intel compilers at a version that you have not yet declared in Spack, the following preparatory steps are recommended:
Determine the compiler spec that the new intel-parallel-studio package will provide, as follows: From the package version, combine the last two digits of the version year, a literal "0" (zero), and the version component that immediately follows the year.
Package version                       Compiler spec provided
intel-parallel-studio@edition.YYyy.u  intel@yy.0.u

Example: The package intel-parallel-studio@cluster.2018.3 will provide the compiler with spec intel@18.0.3.


Add a new compiler section with the newly anticipated version at the end of a compilers.yaml file in a suitable scope. For example, run:

$ spack config --scope=user/linux edit compilers


and append a stub entry:

- compiler:
    target: x86_64
    operating_system: centos6
    modules: []
    spec: intel@18.0.3
    paths:
      cc: /usr/bin/true
      cxx: /usr/bin/true
      f77: /usr/bin/true
      fc: /usr/bin/true


Replace 18.0.3 with the version that you determined in the preceding step. The exact contents under paths: do not matter yet, but the paths must exist.


This temporary stub is required so that the intel-parallel-studio package can be installed for the intel compiler (which the package itself is going to provide after the installation) rather than an arbitrary system compiler. The paths given in cc, cxx, f77, fc must exist, but will never be used to build anything during the installation of intel-parallel-studio.

The reason for this stub is that intel-parallel-studio also provides the mpi and mkl packages and, when concretizing a spec, Spack ensures strong consistency of the used compiler across all dependencies [fn8]. Installing a package foo +mkl %intel will make Spack look for a package mkl %intel, which can be provided by intel-parallel-studio+mkl %intel, but not by intel-parallel-studio+mkl %gcc.

Failure to create the stub may result in additional installations of mkl, intel-mpi, or even intel-parallel-studio as dependencies of other packages.

3.
Verify that the compiler version provided by the new studio version would be used as expected if you were to compile a client package:

$ spack spec zlib %intel


If the version does not match, explicitly state the anticipated compiler version, e.g.:

$ spack spec zlib %intel@18.0.3


If there are problems, review and correct the compiler's compilers.yaml entry, be it still in stub form or already complete (as it would be for a re-installation).

4.
Install the new studio package using Spack's regular install command. It may be wise to provide the anticipated compiler (see above) as an explicit concretization element:

$ spack install intel-parallel-studio@cluster.2018.3  %intel@18.0.3


5.
Follow the same steps as under Integrating external compilers to tell Spack the minutiae for actually using those compilers with client packages. If you placed a stub entry in a compilers.yaml file, now is the time to edit it and fill in the particulars.
Under paths:, give the full paths to the actual compiler binaries (icc, ifort, etc.) located within the Spack installation tree, in all their unsightly length [fn9].

To determine the full path to the C compiler, adapt and run:

$ find `spack location -i intel-parallel-studio@cluster.2018.3` \
    -name icc -type f -ls


If you get hits for both intel64 and ia32, you almost certainly will want to use the intel64 variant. The icpc and ifort compilers will be located in the same directory as icc.

  • Make sure to specify modules: ['intel-parallel-studio-cluster2018.3-intel-18.0.3-HASH'] (with HASH being the short hash as displayed when running spack find -l intel-parallel-studio@cluster.2018.3, and the versions adapted accordingly) to ensure that the correct and complete environment for the Intel compilers gets loaded when running them. With modern versions of the Intel compiler, you may otherwise see issues about missing libraries. Please also note that the module name must exactly match the name as returned by module avail (and shown in the example above).
  • Use the modules: and/or cflags: tokens to further specify a suitable accompanying gcc version to help pacify picky client packages that ask for C++ standards more recent than supported by your system-provided gcc and its libstdc++.so.
  • If you specified a custom variant (for example +vtune), you may want to add this as your preferred variant in the packages configuration for the intel-parallel-studio package, as described in Package Preferences and sketched after this list. Otherwise, you will have to specify the variant every time intel-parallel-studio is being used as an mkl, fftw, or mpi implementation, to avoid pulling in a different variant.
  • To set the Intel compilers for default use in Spack, instead of the usual %gcc, follow section Selecting Intel compilers.
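
As a sketch of the variant-preference entry mentioned above (assuming the hypothetical +vtune variant from the example; adapt the variant string to what you actually installed):

packages:
  intel-parallel-studio:
    variants: +mkl+mpi+vtune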


TIP:

Compiler packages like intel-parallel-studio can easily be above 10 GB in size, which can tax the disk space available for temporary files on small, busy, or restricted systems (like virtual machines). The Intel installer will stop and report insufficient space as:

==> './install.sh' '--silent' 'silent.cfg'
...
Missing critical prerequisite
-- Not enough disk space


As a first remedy, clean Spack's existing staging area:

$ spack clean --stage


then retry installing the large package. Spack normally cleans staging directories, but certain failures may prevent it from doing so.

If the error persists, tell Spack to use an alternative location for temporary files:

1.
Run df -h to identify an alternative location on your system.
2.
Tell Spack to use that location for staging. Do one of the following:
Run Spack with the environment variable TMPDIR altered for just a single command. For example, to use your $HOME directory:

$ TMPDIR="$HOME/spack-stage"  spack install ....


This example uses Bourne shell syntax. Adapt for other shells as needed.

Alternatively, customize Spack's build_stage configuration setting.

$ spack config edit config


Append:

config:
  build_stage:
  - /home/$user/spack-stage


Do not duplicate the config: line if it is already present. Adapt the location, which here is the same as in the preceding example.


3.
Retry installing the large package.



Install steps for library-only packages

To install library-only packages like intel-mkl, intel-mpi, and intel-daal follow the steps given here. For packages that contain a compiler, follow the previous section instead.

1.
For pre-2017 product releases, review the section Configuring Spack to use Intel licenses.
2.
Inspect the package spec. Specify an explicit compiler if necessary, e.g.:

$ spack spec intel-mpi@2018.3.199
$ spack spec intel-mpi@2018.3.199  %intel


Check that the package will use the compiler flavor and version that you expect.

3.
Install the package normally within Spack. Use the same spec as in the previous command, i.e., as general or as specific as needed:

$ spack install intel-mpi@2018.3.199
$ spack install intel-mpi@2018.3.199  %intel@18


4.
To prepare the new packages for use with client packages, follow Selecting libraries to satisfy virtual packages.

Debug notes

You can trigger a wall of additional diagnostics using Spack options, e.g.:

$ spack --debug -v install intel-mpi


The --debug option can also be useful while installing client packages (see below) to confirm the integration of the Intel tools in Spack, notably MKL and MPI.

The .spack/ subdirectory of an installed IntelPackage will contain, besides Spack's usual archival items, a copy of the silent.cfg file that was passed to the Intel installer:

$ grep COMPONENTS ...intel-mpi...<hash>/.spack/silent.cfg
COMPONENTS=ALL


If an installation error occurs, Spack will normally clean up and remove a partially installed target directory. You can direct Spack to keep it using --keep-prefix, e.g.:

$ spack install --keep-prefix  intel-mpi


You must, however, remove such partial installations prior to subsequent installation attempts. Otherwise, the Intel installer will behave incorrectly.


Using Intel tools in Spack to install client packages

Finally, this section pertains to route 3 from the introduction.

Once Intel tools are installed within Spack as external or internal packages they can be used as intended for installing client packages.

Selecting Intel compilers

Select Intel compilers to compile client packages, like any compiler in Spack, by one of the following means:

Request the Intel compilers explicitly in the client spec, e.g.:

$ spack install libxc@3.0.0%intel


Alternatively, request Intel compilers implicitly by package preferences. Configure the order of compilers in the appropriate packages.yaml file, under either an all: or client-package-specific entry, in a compiler: list. Consult the Spack documentation for Configuring Package Preferences and Package Preferences.

Example: etc/spack/packages.yaml might simply contain:

packages:
  all:
    compiler: [ intel, gcc, ]


To be more specific, you can state partial or full compiler version numbers, for example:

packages:
  all:
    compiler: [ intel@18, intel@17, gcc@4.4.7, gcc@4.9.3, gcc@7.3.0, ]


Selecting libraries to satisfy virtual packages

Intel packages, whether integrated into Spack as external packages or installed within Spack, can be called upon to satisfy the requirement of a client package for a library that is available from different providers. The relevant virtual packages for Intel are blas, lapack, scalapack, and mpi.

In both integration routes, Intel packages can have optional variants which alter the list of virtual packages they can satisfy. For Spack-external packages, the active variants are a combination of the defaults declared in Spack's package repository and the spec with which the package is declared in packages.yaml. Needless to say, those should match the components that are actually present in the external product installation. Likewise, for Spack-internal packages, the active variants are determined, persistently at installation time, from the defaults in the repository and the spec selected to be installed.

To have Intel packages satisfy virtual package requests for all or selected client packages, edit the packages.yaml file. Customize, either in the all: or a more specific entry, a providers: dictionary whose keys are the virtual packages and whose values are the Spack specs that satisfy the virtual package, in order of decreasing preference. To learn more about the providers: settings, see the Spack tutorial for Configuring Package Preferences and the section Package Preferences.

Example: The following fairly minimal example for packages.yaml shows how to exclusively use the standalone intel-mkl package for all the linear algebra virtual packages in Spack, and intel-mpi as the preferred MPI implementation. Other providers can still be chosen on a per-package basis.

packages:
  all:
    providers:
      mpi: [intel-mpi]
      blas: [intel-mkl]
      lapack: [intel-mkl]
      scalapack: [intel-mkl]


If you have access to the intel-parallel-studio@cluster edition, you can use instead:

all:
  providers:
    mpi: [intel-parallel-studio+mpi]
    # Note: +mpi vs. +mkl
    blas: [intel-parallel-studio+mkl]
    lapack: [intel-parallel-studio+mkl]
    scalapack: [intel-parallel-studio+mkl]


If you installed intel-parallel-studio within Spack ("route 2"), make sure you followed the special installation step to ensure that its virtual packages match the compilers it provides.

Using Intel tools as explicit dependency

With the proper installation as detailed above, no special steps should be required when a client package specifically (and thus deliberately) requests an Intel package as a dependency, this being one of the target use cases for Spack.

Tips for configuring client packages to use MKL

The Math Kernel Library (MKL) is provided by several Intel packages, currently intel-parallel-studio when variant +mkl is active (it is by default) and the standalone intel-mkl. Because of these different provider packages, a virtual mkl package is declared in Spack.

To use MKL-specific APIs in a client package:

Declare a dependency on mkl, rather than a specific provider like intel-mkl. Declare the dependency either absolutely or conditionally based on variants that your package might have declared:

# Examples for absolute and conditional dependencies:
depends_on('mkl')
depends_on('mkl', when='+mkl')
depends_on('mkl', when='fftw=mkl')


The MKLROOT environment variable (part of the documented API) will be set during all stages of client package installation, and is available to both the Spack packaging code and the client code.
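
As a sketch of the packaging-code side, MKLROOT can be read like any other environment variable; here a hypothetical MakefilePackage-style client patches a Makefile variable (the MKL_DIR name and the edit() phase are assumptions for illustration):

import os

from llnl.util.filesystem import FileFilter


# Inside a hypothetical MakefilePackage subclass:
def edit(self, spec, prefix):
    # Point the build at the MKL installation that the mkl provider
    # advertises via the documented MKLROOT environment variable.
    makefile = FileFilter('Makefile')
    makefile.filter(r'^MKL_DIR\s*=.*', 'MKL_DIR = %s' % os.environ['MKLROOT'])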

To use MKL as provider for BLAS, LAPACK, or ScaLAPACK:

The packages that provide mkl also provide the narrower virtual blas, lapack, and scalapack packages. See the relevant Packaging Guide section for an introduction. To portably use these virtual packages, construct preprocessor and linker option strings in your package configuration code using the package functions .headers and .libs in conjunction with utility functions from the following classes:

  • llnl.util.filesystem.FileList,
  • llnl.util.filesystem.HeaderList,
  • llnl.util.filesystem.LibraryList.

TIP:

Do not use constructs like .prefix.include or .prefix.lib, with Intel or any other implementation of blas, lapack, and scalapack.


For example, for an AutotoolsPackage use .libs.ld_flags to transform the library file list into linker options passed to ./configure:

def configure_args(self):
    args = []
    ...
    args.append('--with-blas=%s' % self.spec['blas'].libs.ld_flags)
    args.append('--with-lapack=%s' % self.spec['lapack'].libs.ld_flags)
    ...


TIP:

Even though .ld_flags will return a string of multiple words, do not use quotes for options like --with-blas=... because Spack passes them to ./configure without invoking a shell.


Likewise, in a MakefilePackage or similar package that does not use AutoTools you may need to provide include and link options for use on command lines or in environment variables. For example, to generate an option string of the form -I<dir>, use:

self.spec['blas'].headers.include_flags


and to generate linker options (-L<dir> -llibname ...), use the same as above,

self.spec['blas'].libs.ld_flags
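
For instance, a build() override might hand both strings to make as variables (a minimal sketch; the CPPFLAGS and LDLIBS variable names are assumptions about the client's Makefile):

def build(self, spec, prefix):
    # make() is available in the package's build environment.
    make('CPPFLAGS=%s' % spec['blas'].headers.include_flags,
         'LDLIBS=%s' % spec['blas'].libs.ld_flags)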


See MakefilePackage and more generally the Packaging Guide for background and further examples.


Footnotes

[fn1]
Strictly speaking, versions from 2017.2 onward.
[fn2]
The package intel intentionally does not have a +mpi variant since it is meant to be small. The native installer will always add MPI runtime components because it follows defaults defined in the download package, even when intel-parallel-studio ~mpi has been requested.

For intel-parallel-studio +mpi, the class function IntelPackage.pset_components will include "intel-mpi intel-imb" in a list of component patterns passed to the Intel installer. The installer will extend each pattern word with an implied glob-like * to resolve it to package names that are actually present in the product BOM. As a side effect, this pattern approach accommodates occasional package name changes, e.g., capturing both intel-mpirt and intel-mpi-rt.

[fn3]
How could the external installation have succeeded otherwise?
[fn4]
According to Intel's documentation, there is supposedly a way to install a product using a network license even when a FLEXlm server is not running: Specify the license in the form port@serverhost in the INTEL_LICENSE_FILE environment variable. All other means of specifying a network license require that the license server be up.
[fn5]
Despite the name, INTEL_LICENSE_FILE can hold several and diverse entries. They can be either directories (presumed to contain *.lic files), file names, or network locations in the form port@host (on Linux and Mac), with all items separated by ":" (on Linux and Mac).
[fn6]
Should said editor turn out to be vi, you had better be in a position to know how to use it.
[fn7]
Comment lines in FLEXlm files, indicated by # as the first non-whitespace character on the line, are generally allowed anywhere in the file. There have been reports, however, that as of 2018, SERVER and USE_SERVER lines must precede any comment lines.
[fn8]
Spack's close coupling of installed packages to compilers, which both necessitates the detour for installing intel-parallel-studio and largely limits any of its provided virtual packages to a single compiler, heavily favors installing Intel Parallel Studio outside of Spack and declaring it to Spack in packages.yaml with a compiler-less spec.
[fn9]
With some effort, you can convince Spack to use shorter paths.

WARNING:

Altering the naming scheme means that Spack will lose track of all packages it has installed for you so far. That said, the time is right for this kind of customization when you are defining a new set of compilers.


The relevant tunables are:

1.
Set the install_tree location in config.yaml (see doc).
2.
Set the hash length in install_path_scheme, also in config.yaml (q.v.).
3.
You will want to set the same hash length for module files if you have Spack produce them for you, under projections in modules.yaml.

ROCm

The ROCmPackage is not a build system but a helper package. Like CudaPackage, it provides standard variants, dependencies, and conflicts to facilitate building packages that use GPUs, in this case AMD GPUs.

You can find the source for this package (and suggestions for setting up your compilers.yaml and packages.yaml files) at https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/rocm.py.

Variants

This package provides the following variants:

  • rocm

    This variant is used to enable/disable building with rocm. The default is disabled (or False).

  • amdgpu_target

    This variant supports the optional specification of the AMD GPU architecture. Valid values are the names of the GPUs (e.g., gfx701), which are maintained in the amdgpu_targets property.


Dependencies

This package defines basic rocm dependencies, including llvm and hip.

Conflicts

Conflicts are used to prevent builds with known bugs or issues. This package already requires that the amdgpu_target always be specified for rocm builds. It also defines a conflict that prevents builds with an amdgpu_target when rocm is disabled.

Refer to Conflicts for more information on package conflicts.

Methods

This package provides one custom helper method, which is used to build standard AMD hip compiler flags.

hip_flags

This built-in static method returns the appropriately formatted --amdgpu-target build option for hipcc.

This method must be explicitly called when you are creating the arguments for your build in order to use the values.
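
A minimal sketch of such a call from within cmake_args(), assuming the variant value is the usual collection of target names:

# Translate the amdgpu_target variant value into the hipcc option string.
rocm_archs = self.spec.variants["amdgpu_target"].value
args.append("-DHIP_HIPCC_FLAGS=" + ROCmPackage.hip_flags(rocm_archs))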



Usage

This helper can be added by declaring it as an additional base class of your package. For example, you can add it to your CMakePackage-based package as follows:


class MyRocmPackage(CMakePackage, ROCmPackage):
    ...
    # Ensure +rocm and amdgpu_targets are passed to dependencies
    depends_on("mydeppackage", when="+rocm")
    for val in ROCmPackage.amdgpu_targets:
        depends_on(f"mydeppackage amdgpu_target={val}",
                   when=f"amdgpu_target={val}")
    ...

    def cmake_args(self):
        spec = self.spec
        args = []
        ...
        if spec.satisfies("+rocm"):
            # Set up the hip macros needed by the build
            args.extend([
                "-DENABLE_HIP=ON",
                f"-DHIP_ROOT_DIR={spec['hip'].prefix}"])
            rocm_archs = spec.variants["amdgpu_target"].value
            if "none" not in rocm_archs:
                args.append(f"-DHIP_HIPCC_FLAGS=--amdgpu-target={','.join(rocm_archs)}")
        else:
            # Ensure build with hip is disabled
            args.append("-DENABLE_HIP=OFF")
        ...
        return args
    ...


assuming only the ENABLE_HIP, HIP_ROOT_DIR, and HIP_HIPCC_FLAGS macros need to be set and the only dependency needing rocm options is mydeppackage. You will need to customize the flags as needed for your build.

This example also illustrates how to check for the rocm variant using self.spec and how to retrieve the amdgpu_target variant's value using self.spec.variants["amdgpu_target"].value.

All five packages using ROCmPackage as of January 2021 also use the CudaPackage. So it is worth looking at those packages to get ideas for creating a package that can support both cuda and rocm.

Sourceforge

SourceforgePackage is a mix-in class. It automatically sets the URL based on a list of about half a dozen known Sourceforge mirrors combined with the package's sourceforge_mirror_path. Refer to the package source (https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/sourceforge.py) for the current list of mirrors used by Spack.

Methods

This package provides a method for populating mirror URLs.

urls

This method returns a list of possible URLs for package source. It is decorated with @property, so its result is treated as a package attribute.

Refer to https://spack.readthedocs.io/en/latest/packaging_guide.html#mirrors-of-the-main-url for information on how Spack uses the urls attribute during fetching.



Usage

This helper can be added by declaring it as an additional base class of your package and defining the relative location of an archive file for one version of your software.


class MyPackage(AutotoolsPackage, SourceforgePackage):
    ...
    sourceforge_mirror_path = "my-package/mypackage.1.0.0.tar.gz"
    ...


Over 40 packages use this mix-in as of July 2022, so there are multiple packages to choose from if you want to see a real example.

For reference, the Build System API docs provide a list of build systems and methods/attributes that can be overridden. If you are curious about the implementation of a particular build system, you can view the source code by running:

$ spack edit --build-system autotools


This will open up the AutotoolsPackage definition in your favorite editor. In addition, if you are working with a less common build system like QMake, SCons, or Waf, it may be useful to see examples of other packages. You can quickly find examples by running:

$ cd var/spack/repos/builtin/packages
$ grep -l QMakePackage */package.py


You can then view these packages with spack edit.

This guide is intended to supplement the Build System API docs with examples of how to override commonly used methods. It also provides rules of thumb and suggestions for package developers who are unfamiliar with a particular build system.

DEVELOPER GUIDE

This guide is intended for people who want to work on Spack itself. If you just want to develop packages, see the Packaging Guide.

It is assumed that you've read the Basic Usage and Packaging Guide sections, and that you're familiar with the concepts discussed there. If you're not, we recommend reading those first.

Overview

Spack is designed with three separate roles in mind:

1.
Users, who need to install software without knowing all the details about how it is built.
2.
Packagers who know how a particular software package is built and encode this information in package files.
3.
Developers who work on Spack, add new features, and try to make the jobs of packagers and users easier.

Users could be end users installing software in their home directory, or administrators installing software to a shared directory on a shared machine. Packagers could be administrators who want to automate software builds, or application developers who want to make their software more accessible to users.

As you might expect, there are many types of users with different levels of sophistication, and Spack is designed to accommodate both simple and complex use cases for packages. A user who only knows that he needs a certain package should be able to type something simple, like spack install <package name>, and get the package that he wants. If a user wants to ask for a specific version, use particular compilers, or build several versions with different configurations, then that should be possible with a minimal amount of additional specification.

This gets us to the two key concepts in Spack's software design:

1.
Specs: expressions for describing builds of software, and
2.
Packages: Python modules that build software according to a spec.

A package is a template for building particular software, and a spec is a descriptor for one or more instances of that template. Users express the configuration they want using a spec, and a package turns the spec into a complete build.

The obvious difficulty with this design is that users under-specify what they want. To build a software package, the package object needs a complete specification. In Spack, if a spec describes only one instance of a package, then we say it is concrete. If a spec could describe many instances (i.e., it is under-specified in one way or another), then we say it is abstract.

Spack's job is to take an abstract spec from the user, find a concrete spec that satisfies the constraints, and hand the task of building the software off to the package object. The rest of this document describes all the pieces that come together to make that happen.
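
For example, the spec mpileaks is abstract; spack spec prints the concrete spec that concretization derives from it, with every version, compiler, variant, and dependency pinned down:

$ spack spec mpileaks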

Directory Structure

So that you can familiarize yourself with the project, we'll start with a high level view of Spack's directory structure:

spack/                         <- installation root
    bin/
        spack                  <- main spack executable
    etc/
        spack/                 <- Spack config files.
                                  Can be overridden by files in ~/.spack.
    var/
        spack/                 <- build & stage directories
            repos/             <- contains package repositories
                builtin/       <- pkg repository that comes with Spack
                    repo.yaml  <- descriptor for the builtin repository
                    packages/  <- directories under here contain packages
            cache/             <- saves resources downloaded during installs
    opt/
        spack/                 <- packages are installed here
    lib/
        spack/
            docs/              <- source for this documentation
            env/               <- compiler wrappers for build environment
            external/          <- external libs included in Spack distro
            llnl/              <- some general-use libraries
            spack/             <- spack module; contains Python code
                build_systems/     <- modules for different build systems
                cmd/               <- each file in here is a spack subcommand
                compilers/         <- compiler description files
                container/         <- module for spack containerize
                hooks/             <- hook modules to run at different points
                modules/           <- modules for lmod, tcl, etc.
                operating_systems/ <- operating system modules
                platforms/         <- different spack platforms
                reporters/         <- reporters like cdash, junit
                schema/            <- schemas to validate data structures
                solver/            <- the spack solver
                test/              <- unit test modules
                util/              <- common code

Spack is designed so that it could live within a standard UNIX directory hierarchy, so lib, var, and opt all contain a spack subdirectory in case Spack is installed alongside other software. Most of the interesting parts of Spack live in lib/spack.

Spack has one directory layout and there is no install process. Most Python programs don't look like this (they use distutils, setup.py, etc.) but we wanted to make Spack very easy to use. The simple layout spares users from the need to install Spack into a Python environment. Many users don't have write access to a Python installation, and installing an entire new instance of Python to bootstrap Spack would be very complicated. Users should not have to install a big, complicated package to use the thing that's supposed to spare them from the details of big, complicated packages. The end result is that Spack works out of the box: clone it and add bin to your PATH and you're ready to go.

Code Structure

This section gives an overview of the various Python modules in Spack, grouped by functionality.

spack.package
    Contains the PackageBase class, which is the superclass for all packages in Spack.
spack.util.naming
    Contains functions for mapping between Spack package names, Python module names, and Python class names. Functions like mod_to_class() handle mapping package module names to class names.
spack.directives
    Directives are functions that can be called inside a package definition to modify the package, like depends_on() and provides(). See Dependencies and Virtual dependencies.
spack.multimethod
    Implementation of the @when decorator, which allows multimethods in packages.

spack.spec
    Contains Spec. Also implements most of the logic for concretization of specs.
spack.parser
    Contains SpecParser and functions related to parsing specs.
spack.concretize
    Contains the Concretizer implementation, which allows site administrators to change Spack's Concretization Policies.
spack.version
    Implements a simple Version class with simple comparison semantics. Also implements VersionRange and VersionList. All three are comparable with each other and offer union and intersection operations. Spack uses these classes to compare versions and to manage version constraints on specs. Comparison semantics are similar to the LooseVersion class in distutils and to the way RPM compares version strings.
spack.compilers
    Submodules contain descriptors for all valid compilers in Spack. This is used by the build system to set up the build environment.

WARNING:

Not yet implemented. Currently has two compiler descriptions, but compilers aren't fully integrated with the build process yet.



Build environment

spack.stage
    Handles creating temporary directories for builds.
spack.build_environment
    This contains utility functions used by the compiler wrapper script, cc.
spack.directory_layout
    Classes that control the way an installation directory is laid out. Create more implementations of this to change the hierarchy and naming scheme in $spack_prefix/opt.

Spack Subcommands

spack.cmd
    Each module in this package implements a Spack subcommand. See writing commands for details.

Unit tests

spack.test
    Implements Spack's test suite. Add a module and put its name in the test suite in __init__.py to add more unit tests.

Other Modules

spack.url
    URL parsing, for deducing names and versions of packages from tarball URLs.
spack.error
    SpackError, the base class for Spack's exception hierarchy.
llnl.util.tty
    Basic output functions for all of the messages Spack writes to the terminal.
llnl.util.tty.color
    Implements a color formatting syntax used by spack.tty.
spack.util
    In this package are a number of utility modules for the rest of Spack.

Spec objects

Package objects

Most spack commands look something like this:

1.
Parse an abstract spec (or specs) from the command line,
2.
Normalize the spec based on information in package files,
3.
Concretize the spec according to some customizable policies,
4.
Instantiate a package based on the spec, and
5.
Call methods (e.g., install()) on the package object.

The information in Package files is used at all stages in this process.

Writing commands

Adding a new command to Spack is easy. Simply add a <name>.py file to lib/spack/spack/cmd/, where <name> is the name of the subcommand. At the bare minimum, two functions are required in this file:

setup_parser()

Unless your command doesn't accept any arguments, a setup_parser() function is required to define what arguments and flags your command takes. See the Argparse documentation for more details on how to add arguments.

Some commands have a set of subcommands, like spack compiler find or spack module lmod refresh. You can add subparsers to your parser to handle this. Check out spack edit --command compiler for an example of this.

A lot of commands take the same arguments and flags. These arguments should be defined in lib/spack/spack/cmd/common/arguments.py so that they don't need to be redefined in multiple commands.

<name>()

In order to run your command, Spack searches for a function with the same name as your command in <name>.py. This is the main method for your command, and can call other helper methods to handle common tasks.
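
Putting the two functions together, a minimal sketch of a hypothetical lib/spack/spack/cmd/hello.py could look like this (the description, section, and level strings are the metadata Spack's help system expects):

description = "say hello to someone"
section = "developer"
level = "long"


def setup_parser(subparser):
    subparser.add_argument('--name', default='world', help='whom to greet')


def hello(parser, args):
    print('Hello, %s!' % args.name)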

Remember, before adding a new command, think to yourself whether or not this new command is actually necessary. Sometimes, the functionality you desire can be added to an existing command. Also remember to add unit tests for your command: if it isn't used very frequently, changes to the rest of Spack can break it without sufficient unit tests to catch the regression.

Whenever you add/remove/rename a command or flags for an existing command, make sure to update Spack's Bash tab completion script.

Writing Hooks

A hook is a callback that makes it easy to design functions that run for different events. We do this by way of defining hook types, and then inserting them at different places in the spack code base. Whenever a hook type triggers by way of a function call, we find all the hooks of that type, and run them.

Spack defines hooks by way of a module at lib/spack/spack/hooks where we can define types of hooks in the __init__.py, and then Python files in that folder can use hook functions. The files are automatically parsed, so if you write a new file for some integration (e.g., lib/spack/spack/hooks/myintegration.py), you can then write hook functions in that file that will be automatically detected and run whenever your hook is called. This section will cover the basic kinds of hooks and how to write them.

Types of Hooks

The following hooks are currently implemented to make it easy for you, the developer, to add hooks at different stages of a spack install or similar. If there is a hook that you would like and is missing, you can propose to add a new one.

pre_install(spec)

A pre_install hook is run within an install subprocess, directly before the install starts. It expects a single argument of a spec, and is run in a multiprocessing subprocess. Note that if you see pre_install functions associated with packages these are not hooks as we have defined them here, but rather callback functions associated with a package install.

post_install(spec)

A post_install hook is run within an install subprocess, directly after the install finishes, but before the build stage is removed. If you write one of these hooks, you should expect it to accept a spec as the only argument. This is run in a multiprocessing subprocess. This post_install is also seen in packages, but in this context not related to the hooks described here.

on_install_start(spec)

This hook is run at the beginning of lib/spack/spack/installer.py, in the install function of a PackageInstaller, and importantly is not part of a build process, but before it. This is when we have just newly grabbed the task, and are preparing to install. If you write a hook of this type, you should provide the spec to it.

def on_install_start(spec):
    """On start of an install, we want to...
    """
    print('on_install_start')


on_install_success(spec)

This hook is run on a successful install, and is also run inside the build process, akin to post_install. The main difference is that this hook is run outside of the context of the stage directory, meaning after the build stage has been removed and the user is alerted that the install was successful. If you need to write a hook that is run on success of a particular phase, you should use on_phase_success.

on_install_failure(spec)

This hook is run given an install failure that happens outside of the build subprocess, but somewhere in installer.py when something else goes wrong. If you need to write a hook that is relevant to a failure within a build process, you would want to instead use on_phase_failure.

on_install_cancel(spec)

The same, but triggered if a spec install is cancelled for any reason.

on_phase_success(pkg, phase_name, log_file)

This hook is run within the install subprocess, and specifically when a phase successfully finishes. Since we are interested in the package, the name of the phase, and any output from it, we require:

  • pkg: the package variable, which also has the attached spec at pkg.spec
  • phase_name: the name of the phase that was successful (e.g., configure)
  • log_file: the path to the file with output, in case you need to inspect or otherwise interact with it.



on_phase_error(pkg, phase_name, log_file)

In the case of an error during a phase, we might want to trigger some event with a hook, and this is the purpose of this particular hook. Akin to on_phase_success, we require the same variables: the package that failed, the name of the phase, and the log file where we might find errors.

Adding a New Hook Type

Adding a new hook type is very simple! In lib/spack/spack/hooks/__init__.py you can simply create a new HookRunner that is named to match your new hook. For example, let's say you want to add a new hook called post_log_write to trigger after anything is written to a logger. You would add it as follows:

# pre/post install and run by the install subprocess
pre_install = HookRunner('pre_install')
post_install = HookRunner('post_install')
# hooks related to logging
post_log_write = HookRunner('post_log_write') # <- here is my new hook!


You then need to decide what arguments your hook will expect. Since this one is related to logging, let's say that you want a message and a level. You would then add a Python file to the lib/spack/spack/hooks folder with one or more callbacks intended to be triggered by this hook. You might write the new hook as follows:

def post_log_write(message, level):
    """Do something custom with the message and level every time we write
    to the log
    """
    print('running post_log_write!')


To use the hook, we would call it as follows somewhere in the logic to do logging. In this example, we use it outside of a logger that is already defined:

import spack.hooks
# We do something here to generate a logger and message
spack.hooks.post_log_write(message, logger.level)


This is not to say that this would be the best way to implement an integration with the logger (you'd probably want to write a custom logger, or you could have the hook defined within the logger) but serves as an example of writing a hook.

Unit tests

Unit testing

Developer environment

WARNING:

This is an experimental feature. It is expected to change and you should not use it in a production environment.


When installing a package, Spack currently supports exporting environment variables that add debug flags to the build. By default, a package install will build without any debug flag. However, if you want to add them, you can export:

export SPACK_ADD_DEBUG_FLAGS=true
spack install zlib


If you want to add custom flags, you should export an additional variable:

export SPACK_ADD_DEBUG_FLAGS=true
export SPACK_DEBUG_FLAGS="-g"
spack install zlib


These environment variables will eventually be integrated into spack so they are set from the command line.

Developer commands

spack doc

spack style

spack style exists to help developers check imports and style with mypy, flake8, isort, and (soon) black. To run all style checks, simply do:

$ spack style


To run automatic fixes for isort you can do:

$ spack style --fix


You do not need any of these Python packages installed on your system for the checks to work! Spack will bootstrap install them from packages for your use.

spack unit-test

See the contributor guide section on spack unit-test.

spack python

spack python is a command that lets you import and debug things as if you were in a Spack interactive shell. Without any arguments, it is similar to a normal interactive Python shell, except you can import spack and any other Spack modules:

$ spack python
Spack version 0.10.0
Python 2.7.13, Linux x86_64
>>> from spack.version import Version
>>> a = Version('1.2.3')
>>> b = Version('1_2_3')
>>> a == b
True
>>> c = Version('1.2.3b')
>>> c > a
True
>>>


If you prefer using an IPython interpreter, and provided that IPython is installed, you can specify the interpreter with -i:

$ spack python -i ipython
Python 3.8.3 (default, May 19 2020, 18:47:26)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.17.0 -- An enhanced Interactive Python. Type '?' for help.
Spack version 0.16.0
Python 3.8.3, Linux x86_64
In [1]:


With either interpreter you can run a single command:

$ spack python -c 'import distro; distro.linux_distribution()'
('Ubuntu', '18.04', 'Bionic Beaver')
$ spack python -i ipython -c 'import distro; distro.linux_distribution()'
Out[1]: ('Ubuntu', '18.04', 'Bionic Beaver')


or a file:

$ spack python ~/test_fetching.py
$ spack python -i ipython ~/test_fetching.py


just like you would with the normal python command.

spack blame

spack blame is a way to quickly see contributors to packages or files in the spack repository. You should provide a target package name or file name to the command. Here is an example asking to see contributions for the package "python":

$ spack blame python
LAST_COMMIT  LINES  %      AUTHOR            EMAIL
2 weeks ago  3      0.3    Mickey Mouse   <cheddar@gmouse.org>
a month ago  927    99.7   Minnie Mouse   <swiss@mouse.org>
2 weeks ago  930    100.0


By default, you will get a table view (shown above) sorted by date of contribution, with the most recent contribution at the top. If you want to sort instead by percentage of code contribution, then add -p:

$ spack blame -p python


And to see the git blame view, add -g instead:

$ spack blame -g python


Finally, to get a json export of the data, add --json:

$ spack blame --json python


spack url

A package containing a single URL can be used to download several different versions of the package. If you've ever wondered how this works, all of the magic is in spack.url. This module contains methods for extracting the name and version of a package from its URL. The name is used by spack create to guess the name of the package. By determining the version from the URL, Spack can replace it with other versions to determine where to download them from.

The regular expressions in parse_name_offset and parse_version_offset are used to extract the name and version, but they aren't perfect. In order to debug Spack's URL parsing support, the spack url command can be used.

spack url parse

If you need to debug a single URL, you can use the following command:

$ spack url parse http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.0.tar.gz
==> Parsing URL: http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.0.tar.gz
==> Matched version regex  0: r'^[a-zA-Z+._-]+[._-]v?(\d[\d._-]*)$'
==> Matched  name   regex 10: r'^([A-Za-z\d+\._-]+)$'
==> Detected:
    http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.0.tar.gz
                                            ----  ~~~~~
    name:    ruby
    version: 2.2.0

==> Substituting version 9.9.9b:
    http://cache.ruby-lang.org/pub/ruby/2.2/ruby-9.9.9b.tar.gz
                                            ----  ~~~~~~


You'll notice that the name and version of this URL are correctly detected, and you can even see which regular expressions it was matched to. However, you'll notice that when it substitutes the version number in, it doesn't replace the 2.2 with 9.9 where we would expect 9.9.9b to live. This particular package may require a list_url or url_for_version function.

This command also accepts a --spider flag. If provided, Spack searches for other versions of the package and prints the matching URLs.

spack url list

This command lists every URL in every package in Spack. If given the --color and --extrapolation flags, it also colors the part of the string that it detected to be the name and version. The --incorrect-name and --incorrect-version flags can be used to print URLs that were not being parsed correctly.

spack url summary

This command attempts to parse every URL for every package in Spack and prints a summary of how many of them are being correctly parsed. It also prints a histogram showing which regular expressions are being matched and how frequently:

$ spack url summary
==> Generating a summary of URL parsing in Spack...

    Total URLs found:          7308
    Names correctly parsed:    6366/7308 (87.11%)
    Versions correctly parsed: 6466/7308 (88.48%)

==> Statistics on name regular expressions:

    Index  Right  Wrong  Total  Regular Expression
        0   1771    352   2123  r'github\.com/[^/]+/([^/]+)'
        1      6      1      7  r'gitlab[^/]+/api/v4/projects/[^/]+%2F([^/]+)'
        2     58     25     83  r'gitlab[^/]+/(?!api/v4/projects)[^/]+/([^/]+)'
        3     16      6     22  r'bitbucket\.org/[^/]+/([^/]+)'
        4      4      0      4  r'pypi\.(?:python\.org|io)/packages/source/[A-Za-z\d]/([^/]+)'
        6     13      1     14  r'\?f=([A-Za-z\d+-]+)$'
        7     19      0     19  r'\?package=([A-Za-z\d+-]+)'
        9      2      1      3  r'([^/]+)/download.php$'
       10   4477    535   5012  r'^([A-Za-z\d+\._-]+)$'

==> Statistics on version regular expressions:

    Index  Right  Wrong  Total  Regular Expression
        0   4540    176   4716  r'^[a-zA-Z+._-]+[._-]v?(\d[\d._-]*)$'
        1   1375     53   1428  r'^v?(\d[\d._-]*)$'
        2     13     24     37  r'^[a-zA-Z+]*(\d[\da-zA-Z]*)$'
        3      9     22     31  r'^[a-zA-Z+-]*(\d[\da-zA-Z-]*)$'
        4      7    125    132  r'^[a-zA-Z+_]*(\d[\da-zA-Z_]*)$'
        5     61     28     89  r'^[a-zA-Z+.]*(\d[\da-zA-Z.]*)$'
        6    270      9    279  r'^[a-zA-Z\d+-]+-v?(\d[\da-zA-Z.]*)$'
        7      1      0      1  r'^[a-zA-Z\d+-]+-v?(\d[\da-zA-Z_]*)$'
        8     33      1     34  r'^[a-zA-Z\d+_]+_v?(\d[\da-zA-Z.]*)$'
        9      0      2      2  r'^[a-zA-Z\d+_]+\.v?(\d[\da-zA-Z.]*)$'
       10      0      1      1  r'^[a-zA-Z\d+]+_r?(\d[\da-zA-Z-]*)$'
       11     30     71    101  r'^(?:[a-zA-Z\d+-]+-)?v?(\d[\da-zA-Z.-]*)$'
       12      3      0      3  r'^[a-zA-Z+]+v?(\d[\da-zA-Z.-]*)$'
       13     12      2     14  r'^[a-zA-Z\d+_]+-v?(\d[\da-zA-Z.]*)$'
       14     28      7     35  r'^[a-zA-Z\d+.]+_v?(\d[\da-zA-Z.-]*)$'
       15      1      0      1  r'^[a-zA-Z\d+-]+-v?(\d[\da-zA-Z._]*)$'
       16      3      1      4  r'^[a-zA-Z\d+._]+-v?(\d[\da-zA-Z.]*)$'
       17      5      1      6  r'^[a-zA-Z+-]+(\d[\da-zA-Z._]*)$'
       18      1      2      3  r'^[a-zA-Z\d+_-]+-v?(\d[\da-zA-Z.]*)$'
       19      0      1      1  r'bzr(\d[\da-zA-Z._-]*)$'
       20     10      0     10  r'[?&](?:sha|ref|version)=[a-zA-Z\d+-]*[_-]?v?(\d[\da-zA-Z._-]*)$'
       21     33      0     33  r'[?&](?:filename|f|get)=[a-zA-Z\d+-]+[_-]v?(\d[\da-zA-Z.]*)'
       22     14      1     15  r'github\.com/[^/]+/[^/]+/releases/download/[a-zA-Z+._-]*v?(\d[\da-zA-Z._-]*)/'
       23     17    210    227  r'(\d[\da-zA-Z._-]*)/[^/]+$'

This command is essential for anyone adding or changing the regular expressions that parse names and versions. By running this command before and after the change, you can make sure that your regular expression fixes more packages than it breaks.

Profiling

Spack has some limited built-in support for profiling, and can report statistics using standard Python timing tools. To use this feature, supply --profile to Spack on the command line, before any subcommands.

spack --profile

spack --profile output looks like this:

$ spack --profile graph hdf5 os=SUSE target=x86_64
...


The bottom of the output shows the most time-consuming functions, slowest on top. The profiling support comes from Python's built-in tool, cProfile.

Releases

This section documents Spack's release process. It is intended for project maintainers, as the tasks described here require maintainer privileges on the Spack repository. For others, we hope this section at least provides some insight into how the Spack project works.

Release branches

There are currently two types of Spack releases: major releases (0.17.0, 0.18.0, etc.) and point releases (0.17.1, 0.17.2, 0.17.3, etc.). Here is a diagram of how Spack release branches work:

o    branch: develop  (latest version, v0.19.0.dev0)
|
o
| o  branch: releases/v0.18, tag: v0.18.1
o |
| o  tag: v0.18.0
o |
| o
|/
o
|
o
| o  branch: releases/v0.17, tag: v0.17.2
o |
| o  tag: v0.17.1
o |
| o  tag: v0.17.0
o |
| o
|/
o


The develop branch has the latest contributions, and nearly all pull requests target develop. The develop branch will report that its version is that of the next major release with a .dev0 suffix.

Each Spack release series also has a corresponding branch, e.g. releases/v0.18 has 0.18.x versions of Spack, and releases/v0.17 has 0.17.x versions. A major release is the first tagged version on a release branch. Point releases are back-ported from develop onto release branches. This is typically done by cherry-picking bugfix commits off of develop.

To avoid version churn for users of a release series, point releases should not make changes that would change the concretization of packages. They should generally only contain fixes to the Spack core. However, sometimes priorities are such that new functionality needs to be added to a point release.

Both major and point releases are tagged. As a convenience, we also tag the latest release as releases/latest, so that users can easily check it out to get the latest stable version. See Updating releases/latest for more details.

NOTE:

Older spack releases were merged back into develop so that we could do fancy things with tags, but since tarballs and many git checkouts do not have tags, this proved overly complex and confusing.

We have since converted to using PEP 440 compliant versions. See here for details.



Scheduling work for releases

We schedule work for releases by creating GitHub projects. At any time, there may be several open release projects. For example, below are two releases (from some past version of the page linked above): [image]

This image shows one release in progress for 0.15.1 and another for 0.16.0. Each of these releases has a project board containing issues and pull requests. GitHub shows a status bar with completed work in green, work in progress in purple, and work not started yet in gray, so it's fairly easy to see progress.

Spack's project boards are not firm commitments, so we move work between releases frequently. If we need to make a release and some tasks are not yet done, we will simply move them to the next point or major release, rather than delaying the release to complete them.

For more on using GitHub project boards, see GitHub's documentation.

Making major releases

Assuming a project board has already been created and all required work completed, the steps to make the major release are:

1.
Create two new project boards:
  • One for the next major release
  • One for the next point release

2.
Move any optional tasks that are not done to one of the new project boards.

In general, small bugfixes should go to the next point release. Major features, refactors, and changes that could affect concretization should go in the next major release.

3.
Create a branch for the release, based on develop:

$ git checkout -b releases/v0.15 develop


For a version vX.Y.Z, the branch's name should be releases/vX.Y. That is, you should create a releases/vX.Y branch if you are preparing the X.Y.0 release.

4.
Remove the dev0 development release segment from the version tuple in lib/spack/spack/__init__.py.

The version number itself should already be correct and should not be modified.

5.
Update CHANGELOG.md with major highlights in bullet form.

Use proper markdown formatting, like this example from 0.15.0.

6.
Push the release branch to GitHub.
7.
Make sure CI passes on the release branch, including:
  • Regular unit tests
  • Build tests
  • The E4S pipeline at gitlab.spack.io

If CI is not passing, submit pull requests to develop as normal and keep rebasing the release branch on develop until CI passes.

8.
Make sure the entire documentation is up to date. If documentation is outdated submit pull requests to develop as normal and keep rebasing the release branch on develop.
9.
Bump the major version in the develop branch.

Create a pull request targeting the develop branch, bumping the major version in lib/spack/spack/__init__.py with a dev0 release segment. For instance, when you have just released v0.15.0, set the version to (0, 16, 0, 'dev0') on develop.

10.
Follow the steps in Publishing a release on GitHub.
11.
Follow the steps in Updating releases/latest.
12.
Follow the steps in Announcing a release.

Making point releases

Assuming a project board has already been created and all required work completed, the steps to make the point release are:

1.
Create a new project board for the next point release.
2.
Move any optional tasks that are not done to the next project board.
3.
Check out the release branch (it should already exist).
For the X.Y.Z release, the release branch is called releases/vX.Y. For v0.15.1, you would check out releases/v0.15:


$ git checkout releases/v0.15


4.
If a pull request to the release branch named Backports vX.Y.Z is not already in the project, create it. This pull request ought to be created as early as possible when working on a release project, so that we can build the release commits incrementally, and identify potential conflicts at an early stage.
5.
Cherry-pick each pull request in the Done column of the release project board onto the Backports vX.Y.Z pull request.

This is usually fairly simple since we squash the commits from the vast majority of pull requests. That means there is only one commit per pull request to cherry-pick. For example, this pull request has three commits, but they were squashed into a single commit on merge. You can see the commit that was created here: [image]

You can easily cherry-pick it like this (assuming you already have the release branch checked out):

$ git cherry-pick 7e46da7


For pull requests that were rebased (or not squashed), you'll need to cherry-pick each associated commit individually.
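
If you do not have the commit hash at hand, one way to locate the squashed commit for a pull request is to search the branch history for the PR number, since GitHub's squash merges include it in the commit subject by default (a sketch; 12345 is a hypothetical PR number):

$ git log --oneline develop | grep '#12345'

Then cherry-pick the hash this reports, as shown above.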

WARNING:

It is important to cherry-pick commits in the order they happened; otherwise you can get conflicts while cherry-picking. When cherry-picking, look at the merge date, not the number of the pull request or the date it was opened.

Sometimes you may still get merge conflicts even if you have cherry-picked all the commits in order. This generally means there is some other intervening pull request that the one you're trying to pick depends on. In these cases, you'll need to make a judgment call regarding those pull requests. Consider the number of affected files and/or the resulting differences.

1.
If the dependency changes are small, you might just cherry-pick it, too. If you do this, add the task to the release board.
2.
If the changes are large, then you may decide that this fix is not worth including in a point release, in which case you should remove the task from the release project.
3.
You can always decide to manually back-port the fix to the release branch if neither of the above options makes sense, but this can require a lot of work. It's seldom the right choice.



6.
When all the commits from the project board are cherry-picked into the Backports vX.Y.Z pull request, you can push a commit to:
1.
Bump the version in lib/spack/spack/__init__.py.
2.
Update CHANGELOG.md with a list of the changes.

This is typically a summary of the commits you cherry-picked onto the release branch. See the changelog from 0.14.1.

7.
Merge the Backports vX.Y.Z PR with the Rebase and merge strategy. This is needed so that the release branch keeps a record of all the commits that were cherry-picked.
8.
Make sure CI passes on the release branch, including:
  • Regular unit tests
  • Build tests
  • The E4S pipeline at gitlab.spack.io

If CI does not pass, you'll need to figure out why, and make changes to the release branch until it does. You can make more commits, modify or remove cherry-picked commits, or cherry-pick more from develop to make this happen.

9.
Follow the steps in Publishing a release on GitHub.
10.
Follow the steps in Updating releases/latest.
11.
Follow the steps in Announcing a release.
12.
Submit a PR to update the CHANGELOG in the develop branch with the addition of this point release.

Publishing a release on GitHub

1.
Create the release in GitHub.
  • Go to github.com/spack/spack/releases and click Draft a new release.
  • Set Tag version to the name of the tag that will be created.

    The name should start with v and contain all three parts of the version (e.g. v0.15.0 or v0.15.1).

  • Set Target to the releases/vX.Y branch (e.g., releases/v0.15).
  • Set Release title to vX.Y.Z to match the tag (e.g., v0.15.1).
  • Paste the latest release markdown from your CHANGELOG.md file as the text.
  • Save the draft so you can keep coming back to it as you prepare the release.

2.
When you are ready to finalize the release, click Publish release.
3.
Immediately after publishing, go back to github.com/spack/spack/releases and download the auto-generated .tar.gz file for the release. It's the Source code (tar.gz) link.
4.
Click Edit on the release you just made and attach the downloaded release tarball as a binary. This does two things:
1.
Makes sure that the hash of our releases does not change over time.

GitHub sometimes (annoyingly) changes the way it generates tarballs, which can change the hashes if you rely on the auto-generated tarball links.

2.
Gets download counts on releases visible through the GitHub API.

GitHub tracks downloads of artifacts, but not the source links. See the releases page and search for download_count to see this.


5.
Go to readthedocs.org and activate the release tag.

This builds the documentation and makes the released version selectable in the versions menu.


Updating releases/latest

If the new release is the highest Spack release yet, you should also tag it as releases/latest. For example, suppose the highest release is currently 0.15.3:

  • If you are releasing 0.15.4 or 0.16.0, then you should tag it with releases/latest, as these are higher than 0.15.3.
  • If you are making a new release of an older major version of Spack, e.g. 0.14.4, then you should not tag it as releases/latest (as there are newer major versions).

To tag releases/latest, do this:

$ git checkout releases/vX.Y     # vX.Y is the new release's branch
$ git tag --force releases/latest
$ git push --force --tags


The --force argument to git tag makes git overwrite the existing releases/latest tag with the new one.

Announcing a release

We announce releases in all of the major Spack communication channels. Publishing the release takes care of GitHub. The remaining channels are Twitter, Slack, and the mailing list. Here are the steps:

1.
Announce the release on Twitter.
  • Compose the tweet on the @spackpm account per the spack-twitter slack channel.
  • Be sure to include a link to the release's page on GitHub.

    You can base the tweet on this example.


2.
Announce the release on Slack.
  • Compose a message in the #general Slack channel (spackpm.slack.com).
  • Preface the message with @channel to notify even those people not currently logged in.
  • Be sure to include a link to the tweet above.

The tweet will be shown inline so that you do not have to retype your release announcement.

3.
Announce the release on the Spack mailing list.
  • Compose an email to the Spack mailing list.
  • Be sure to include a link to the release's page on GitHub.
  • It is also helpful to include some information directly in the email.

You can base your announcement on this example email.


Once you've completed the above steps, congratulations, you're done! You've finished making the release!

SPACK PACKAGE

(major, minor, micro, dev release) tuple

Subpackages

spack.bootstrap package

Functions and classes needed to bootstrap Spack itself.

Bases: Environment

Environment to install dependencies of Spack for a given interpreter and architecture

Paths to be added to PATH

Environment root directory

Paths to be added to sys.path or PYTHONPATH

Spack development requirements

Environment spack.yaml file

Update the installations of this environment.

The update is done using a depfile on Linux and macOS, and using the install_all method of environments on Windows.


Update sys.path and the PATH, PYTHONPATH environment variables to point to the environment view.

Location of the view


Return a list of all the core root specs that may be used to bootstrap Spack

Swap the current configuration for the one used to bootstrap Spack.

The context manager is reference counted to ensure we don't swap multiple times if there's nested use of it in the stack. One compelling use case is bootstrapping patchelf during the bootstrap of clingo.


Ensure the presence of all the core dependencies.

Ensure Spack dependencies from the bootstrap environment are installed and ready to use

Ensure patchelf is in the PATH or raise.

Return True if we are in a bootstrapping context, False otherwise.

Return a status message to be printed to screen, referring to the section passed as argument, and a bool that is True if there are missing dependencies.
section (str) -- either 'core' or 'buildcache' or 'optional' or 'develop'


Path to the store used for bootstrapped software

Submodules

spack.bootstrap.config module

Manage configuration swapping for bootstrapping purposes

Swap the current configuration for the one used to bootstrap Spack.

The context manager is reference counted to ensure we don't swap multiple times if there's nested use of it in the stack. One compelling use case is bootstrapping patchelf during the bootstrap of clingo.


Return True if we are in a bootstrapping context, False otherwise.

Root of all the bootstrap related folders

Override the current configuration to set the interpreter under which Spack is currently running as the only Python external spec available.

For bootstrapping purposes we are just interested in the Python minor version (all patches are ABI compatible with the same minor).


Path to the store used for bootstrapped software

spack.bootstrap.core module

Bootstrap Spack core dependencies from binaries.

This module contains logic to bootstrap software required by Spack from binaries served in the bootstrapping mirrors. The logic is quite different from an installation done by a Spack user, for the following reasons:

1.
The binaries are all compiled on the same OS for a given platform (e.g. they are compiled on centos7 on linux), but they will be installed and used on the host OS. They are also targeted at the most generic architecture possible. That makes the binaries difficult to reuse with other specs in an environment without ad-hoc logic.
2.
Bootstrapping has a fallback procedure where we try to install software by default from the most recent binaries, and proceed to older versions of the mirror, until we try building from sources as a last resort. This allows us not to be blocked on architectures where we don't have binaries readily available, but is also not compatible with the working of environments (they don't have fallback procedures).
3.
Among the binaries we have clingo, so we can't concretize that with clingo :-)
4.
clingo, GnuPG and patchelf binaries need to be verified by sha256 sum (all the other binaries we might add on top of that in principle can be verified with GPG signatures).



Bases: object

Interface for "core" software bootstrappers


Mirror scope to be pushed onto the bootstrapping configuration when using this bootstrapper.

Try to import a Python module from a spec satisfying the abstract spec passed as argument.
  • module -- Python module name to try importing
  • abstract_spec_str -- abstract spec that can provide the Python module

True if the Python module could be imported, False otherwise


Try to search some executables in the prefix of specs satisfying the abstract spec passed as argument.
  • executables -- executables to be found
  • abstract_spec_str -- abstract spec that can provide the executables

True if the executables are found, False otherwise



Bases: Bootstrapper

Install the software needed during bootstrapping from a buildcache.

Try to import a Python module from a spec satisfying the abstract spec passed as argument.
  • module -- Python module name to try importing
  • abstract_spec_str -- abstract spec that can provide the Python module

True if the Python module could be imported, False otherwise


Try to search some executables in the prefix of specs satisfying the abstract spec passed as argument.
  • executables -- executables to be found
  • abstract_spec_str -- abstract spec that can provide the executables

True if the executables are found, False otherwise



Whether the current platform is Windows

Name of the file containing metadata about the bootstrapping source

Bases: Bootstrapper

Install the software needed during bootstrapping from sources.

Try to import a Python module from a spec satisfying the abstract spec passed as argument.
  • module -- Python module name to try importing
  • abstract_spec_str -- abstract spec that can provide the Python module

True if the Python module could be imported, False otherwise


Try to search some executables in the prefix of specs satisfying the abstract spec passed as argument.
  • executables -- executables to be found
  • abstract_spec_str -- abstract spec that can provide the executables

True if the executables are found, False otherwise



Return a list of all the core root specs that may be used to bootstrap Spack

Decorator to register classes implementing bootstrapping methods.
bootstrapper_type -- string identifying the class


Return the list of configured sources of software for bootstrapping Spack
scope -- if a valid configuration scope is given, return the list only from that scope


Return the root spec used to bootstrap clingo

Return a bootstrap object built according to the configuration argument

Ensure that the clingo module is available for import.

Ensure the presence of all the core dependencies.

Ensure that some executables are in path or raise.
  • executables (list) -- list of executables to be searched in the PATH, in order. The function exits on the first one found.
  • abstract_spec (str) -- abstract spec that provides the executables
  • cmd_check (object) -- callable predicate that takes a spack.util.executable.Executable command and validates it. Should return True if the executable is acceptable, False otherwise. Can be used to, e.g., ensure a suitable version of the command before accepting it for bootstrapping.

RuntimeError -- if the executables cannot be ensured to be in PATH
Executable object


Ensure gpg or gpg2 are in the PATH or raise.

Make the requested module available for import, or raise.

This function tries to import a Python module in the current interpreter using, in order, the methods configured in bootstrap.yaml.

If none of the methods succeed, an exception is raised. The function exits on first success.

  • module -- module to be imported in the current interpreter
  • abstract_spec -- abstract spec that might provide the module. If not given it defaults to "module"

ImportError -- if the module couldn't be imported


Ensure patchelf is in the PATH or raise.

Return the root spec used to bootstrap GnuPG

Return the root spec used to bootstrap patchelf

Raise ValueError if the source is not enabled for bootstrapping

Older patchelf versions can produce broken binaries, so we verify the version here.
patchelf -- patchelf executable


spack.bootstrap.environment module

Bootstrap non-core Spack dependencies from an environment.

Bases: Environment

Environment to install dependencies of Spack for a given interpreter and architecture

Paths to be added to PATH

Environment root directory

Paths to be added to sys.path or PYTHONPATH

Spack development requirements

Environment spack.yaml file

Update the installations of this environment.

The update is done using a depfile on Linux and macOS, and using the install_all method of environments on Windows.


Update sys.path and the PATH, PYTHONPATH environment variables to point to the environment view.

Location of the view


Return the root spec used to bootstrap black

Ensure Spack dependencies from the bootstrap environment are installed and ready to use

Return the root spec used to bootstrap flake8

Return the root spec used to bootstrap isort

Return the root spec used to bootstrap mypy

Return the root spec used to bootstrap pytest

spack.bootstrap.status module

Query the status of bootstrapping on this machine

Return a status message to be printed to screen, referring to the section passed as argument, and a bool that is True if there are missing dependencies.
section (str) -- either 'core' or 'buildcache' or 'optional' or 'develop'


spack.build_systems package

Submodules

spack.build_systems.aspell_dict module

Bases: AutotoolsBuilder

The Aspell builder is close enough to an autotools builder to allow specializing the builder class, so that variables specific to the Aspell extensions can be used.

Run "configure", with the arguments specified by the builder and an appropriately set prefix.


Bases: AutotoolsPackage

Specialized class for building aspell dictionaries.

Override the default autotools builder

alias of AspellBuilder






The target root directory: each file is added relative to this directory.

The source root directory that will be added to the view: files are added such that their path relative to the view destination matches their path relative to the view source.


spack.build_systems.autotools module

Bases: BaseBuilder

The autotools builder encodes the default way of installing software built with autotools. It has four phases that can be overridden, if need be:

1.
autoreconf()
2.
configure()
3.
build()
4.
install()



They all have sensible defaults and for many packages the only thing necessary is to override the helper method configure_args().

For a finer tuning you may also override:

Method Purpose
build_targets Specify make targets for the build phase
install_targets Specify make targets for the install phase
check() Run build time tests if required
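
As a concrete illustration, a minimal package overriding configure_args() might look like this (a sketch: the package name, URL, and variant are hypothetical, and the import follows the convention of recent Spack versions):

from spack.package import *

class Libfoo(AutotoolsPackage):
    """Hypothetical example library."""

    homepage = "https://example.com/libfoo"
    url = "https://example.com/libfoo-1.0.tar.gz"
    # version() directives and checksums omitted in this sketch

    variant("shared", default=True, description="Build shared libraries")

    def configure_args(self):
        # --prefix is handled by Spack; add only package-specific flags
        args = ["--with-pic"]
        # expands to --enable-shared or --disable-shared based on the spec
        args += self.enable_or_disable("shared")
        return args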


Files to archive for packages based on autotools

Not usually needed; configure should already be present.

Options to be passed to autoreconf when using the default implementation

Search path includes for autoreconf. Adds an -I flag for all aclocal directories of build dependencies, skips the default path of automake, and moves external include flags to the back, since they might pull in unrelated m4 files that shadow Spack dependencies.

Run "make" on the build targets specified by the builder.

Override to provide another place to build the package

Build system name. Must also be defined in derived classes.

Targets for make during the build() phase

Callback names for build-time test

Run "make" on the test and check targets, if found.

Run "configure", with the arguments specified by the builder and an appropriately set prefix.


Return the list of all the arguments that must be passed to configure, except --prefix, which will be prepended to the list.

Return the directory where 'configure' resides.


Same as with_or_without(), but substitutes "with" with "enable" and "without" with "disable".
  • name (str) -- name of a valid multi-valued variant
  • activation_value (Callable) --

    if present accepts a single value and returns the parameter to be used leading to an entry of the type --enable-{name}={parameter}

    The special value 'prefix' can also be assigned and will return spec[name].prefix as activation parameter.


list of arguments to configure


force_autoreconf = False
Set to true to force the autoreconf step even if configure is present

Run "make" on the install targets specified by the builder.

If False, deletes all the .la files in the prefix folder after installation. If True, installs them instead.

Targets for make during the install() phase

Callback names for install-time test

Run "make" on the installcheck target, if found.


Names associated with package methods in the old build-system format

Whether to update old config.guess and config.sub files distributed with the tarball.

This currently only applies to ppc64le, aarch64, and riscv64 target architectures.

The substitutes are taken from the gnuconfig package, which is automatically added as a build dependency for these architectures. In case system versions of these config files are required, the gnuconfig package can be marked external, with a prefix pointing to the directory containing the system config.guess and config.sub files.


Whether to update libtool (e.g. for Arm/Clang/Fujitsu/NVHPC compilers)


Remove all .la files in prefix sub-folders if the package sets install_libtool_archives to be False.



Ensure the presence of a "configure" script, or raise. If the "configure" is found, a module level attribute is set.
RuntimeError -- if the "configure" script is not found


Sets up the build environment for a package.

This method will be called before the current package prefix exists in Spack's store.

env (spack.util.environment.EnvironmentModifications) -- environment modifications to be applied when the package is built. Package authors can call methods on it to alter the build environment.


Inspects a variant and returns the arguments that activate or deactivate the selected feature(s) for the configure options.

This function works on all types of variants. For bool-valued variants it will return by default --with-{name} or --without-{name}. For other kinds of variants it will cycle over the allowed values and return either --with-{value} or --without-{value}.

If activation_value is given, then for each possible value of the variant, the option --with-{value}=activation_value(value) or --without-{value} will be added depending on whether or not variant=value is in the spec.

  • name (str) -- name of a valid multi-valued variant
  • activation_value (Callable) --

    callable that accepts a single value and returns the parameter to be used leading to an entry of the type --with-{name}={parameter}.

    The special value 'prefix' can also be assigned and will return spec[name].prefix as activation parameter.


list of arguments to configure
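
For instance, given a hypothetical multi-valued variant, with_or_without() could be used like this (a sketch):

variant("fabrics", values=any_combination_of("psm", "ucx"), description="")

def configure_args(self):
    # fabrics=psm yields ['--with-psm', '--without-ucx'];
    # activation_value='prefix' would yield --with-psm=<psm prefix> instead
    return self.with_or_without("fabrics")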



Bases: PackageBase

Specialized class for packages built using GNU Autotools.

This attribute is used in UI queries that need to know the build system base class



Produces a list of all command line arguments to pass specified compiler flags to configure.

Legacy buildsystem attribute used to deserialize and install old specs





spack.build_systems.bundle module


Bases: PackageBase

General purpose bundle, or no-code, package class.

This attribute is used in UI queries that need to know which build-system class we are using


Bundle packages do not have associated source or binary code.

Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.cached_cmake module

Bases: CMakeBuilder


Return a Cached CMake field from the given variant's value. See define_from_variant() in lib/spack/spack/build_systems/cmake.py.





This method is to be overridden by the package




Phases of a Cached CMake package. Note: the initconfig phase is used for developer builds as a final phase to stop on.


Standard cmake arguments provided as a property for convenience of package writers



Bases: CMakePackage

Specialized class for packages built using CMake initial cache.

This feature of CMake allows packages to increase reproducibility, especially between Spack- and manual builds. It also allows packages to sidestep certain parsing bugs in extremely long cmake commands, and to avoid system limits on the length of the command line.

alias of CachedCMakeBuilder









spack.build_systems.cmake module

Bases: BaseBuilder

The cmake builder encodes the default way of building software with CMake. It has three phases that can be overridden:

1.
cmake()
2.
build()
3.
install()



They all have sensible defaults and for many packages the only thing necessary will be to override cmake_args().

For a finer tuning you may also override:

Method Purpose
root_cmakelists_dir() Location of the root CMakeLists.txt
build_directory() Directory where to build the package


Files to archive for packages based on CMake

Make the build targets

Full path to the directory to use when building the package.

Directory name to use when building the package.

Build system name. Must also be defined in derived classes.

Targets to be used during the build phase

Callback names for build-time test

Search the CMake-generated files for the targets test and check, and run them if found.

Runs cmake in the build directory

List of all the arguments that must be passed to cmake, except:
  • CMAKE_INSTALL_PREFIX
  • CMAKE_BUILD_TYPE



which will be set automatically.


Return a CMake command line argument that defines a variable.

The resulting argument will convert boolean values to OFF/ON and lists/tuples to CMake semicolon-separated string lists. All other values will be interpreted as strings.

Examples

[define('BUILD_SHARED_LIBS', True),
 define('CMAKE_CXX_STANDARD', 14),
 define('swr', ['avx', 'avx2'])]


will generate the following configuration options:

["-DBUILD_SHARED_LIBS:BOOL=ON",
 "-DCMAKE_CXX_STANDARD:STRING=14",
 "-DSWR:STRING=avx;avx2"]



Returns the str -DCMAKE_CUDA_ARCHITECTURES:STRING=(expanded cuda_arch).

cuda_arch is a variant composed of a list of target CUDA architectures; it is declared in the cuda package.

This method is a no-op for cmake<3.18 and when the cuda_arch variant is not set.


Return a CMake command line argument from the given variant's value.

The optional variant argument defaults to the lower-case transform of cmake_var.

This utility function is similar to with_or_without().

Examples

Given a package with:

variant('cxxstd', default='11', values=('11', '14'),
        multi=False, description='')
variant('shared', default=True, description='')
variant('swr', values=any_combination_of('avx', 'avx2'),
        description='')


calling this function like:

[self.define_from_variant('BUILD_SHARED_LIBS', 'shared'),
 self.define_from_variant('CMAKE_CXX_STANDARD', 'cxxstd'),
 self.define_from_variant('SWR')]


will generate the following configuration options:

["-DBUILD_SHARED_LIBS:BOOL=ON",
 "-DCMAKE_CXX_STANDARD:STRING=14",
 "-DSWR:STRING=avx;avx2"]


for <spec-name> cxxstd=14 +shared swr=avx,avx2.

If the variant is not present in the spec, this function returns an empty string. CMake discards empty strings provided on the command line.


Returns the str -DCMAKE_HIP_ARCHITECTURES:STRING=(expanded amdgpu_target).

amdgpu_target is a variant composed of a list of the target HIP architectures; it is declared in the rocm package.

This method is a no-op for cmake<3.18 and when the amdgpu_target variant is not set.



Make the install targets

Targets to be used during the install phase


Names associated with package methods in the old build-system format


The relative path to the directory containing CMakeLists.txt

This path is relative to the root of the extracted tarball, not to the build_directory. Defaults to the current directory.



Computes the standard cmake arguments for a generic package

Standard cmake arguments provided as a property for convenience of package writers


Bases: PackageBase

Specialized class for packages built using CMake

For more information on the CMake build system, see: https://cmake.org/cmake/help/latest/

This attribute is used in UI queries that need to know the build system base class




Return a list of all command line arguments to pass the specified compiler flags to cmake. Note that CMake does not have a cppflags option, so cppflags will be added to cflags, cxxflags, and fflags to mimic the behavior in other tools.

Legacy buildsystem attribute used to deserialize and install old specs




The build system generator to use.

See cmake --help for a list of valid generators. Currently, "Unix Makefiles" and "Ninja" are the only generators that Spack supports. Defaults to "Unix Makefiles".

See https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html for more information.

  • names -- allowed generators for this package
  • default -- default generator



spack.build_systems.cuda module


spack.build_systems.generic module

Bases: BaseBuilder

A builder for a generic build system that requires packagers to implement an "install" phase.

Build system name. Must also be defined in derived classes.

Callback names for post-install phase tests

Names associated with package attributes in the old build-system format

Names associated with package methods in the old build-system format

A generic package has only the "install" phase



Bases: PackageBase

General purpose class with a single install phase that needs to be coded by packagers.

This attribute is used in UI queries that need to know which build-system class we are using


Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.gnu module

Bases: PackageBase

Mixin that takes care of setting url and mirrors for GNU packages.


Path of the package in a GNU mirror



spack.build_systems.intel module

Bases: Package

Specialized class for licensed Intel software.

This class provides two phases that can be overridden:

1.
configure()
2.
install()

They both have sensible defaults and for many packages the only thing necessary will be to override setup_run_environment to set the appropriate environment variables.


Provide the library directory located in the base of Intel installation.


This attribute is used in UI queries that need to know the build system base class



Provide directory suitable for find_libraries() and SPACK_COMPILER_EXTRA_RPATHS.

Generates the silent.cfg file to pass to installer.sh.

See https://software.intel.com/en-us/articles/configuration-file-format





Full path of file to source for initializing an Intel package. A client package could override as follows:

@property
def file_to_source(self):
    return self.normalize_path("apsvars.sh", "vtune_amplifier")


Returns the path where a Spack-global license file should be stored.

All Intel software shares the same license, so we store it in a common 'intel' directory.



Runs Intel's install.sh installation script. Afterwards, save the installer config and logs to <prefix>/.spack

Provide the suffix for Intel library names to match a client application's desired int size, conveyed by the active spec variant. The possible suffixes and their meanings are:

  • ilp64 -- all of int, long, and pointer are 64 bit
  • lp64 -- only long and pointer are 64 bit; int will be 32 bit





Comment symbol used in the license.lic file



https://software.intel.com/en-us/articles/intel-license-manager-faq
URL providing information on how to acquire a license key

Environment variables that Intel searches for a license file

Add libimf.so and other required libraries to the RUNPATH of LLVMgold.so.

These are needed explicitly at dependent link time when ld -plugin LLVMgold.so is called by the compiler.


Return paths to compiler wrappers as a dict of env-like names

Unified back-end for setup_dependent_build_environment() of Intel packages that provide 'mpi'.
  • env -- same as in setup_dependent_build_environment().
  • dependent_spec -- same as in setup_dependent_build_environment().
  • compilers_of_client (dict) -- Conveys spack_cc, spack_cxx, etc., from the scope of dependent packages; constructed in caller.



Returns the absolute or relative path to a component or file under a component suite directory.

Intel's product names, scope, and directory layout changed over the years. This function provides a unified interface to their directory names.

  • component_path (str) -- a component name like 'mkl', or 'mpi', or a deeper relative path.
  • component_suite_dir (str) --

    Unversioned name of the expected parent directory of component_path. When absent or None, an appropriate default will be used. A present but empty string "" requests that component_path refer to self.prefix directly.

    Typical values: compilers_and_libraries, composer_xe, parallel_studio_xe.

    Also supported: advisor, inspector, vtune. The actual directory name for these suites varies by release year. The name will be corrected as needed for use in the return value.

  • relative (bool) -- When True, return path relative to self.prefix, otherwise, return an absolute path (the default).



Returns the version-specific and absolute path to the directory of an Intel product or a suite of product components.
suite_dir_name (str) --

Name of the product directory, without numeric version.

Examples:

composer_xe, parallel_studio_xe, compilers_and_libraries



The following will work as well, even though they are not directly targets for Spack installation:

advisor_xe, inspector_xe, vtune_amplifier_xe,
performance_snapshots (new name for vtune as of 2018)


These are single-component products without subordinate components and are normally made available to users by a toplevel psxevars.sh or equivalent file to source (and thus by the modulefiles that Spack produces).

version_globs (list) -- Suffix glob patterns (most specific first) expected to qualify suite_dir_name to its fully version-specific install directory (as opposed to a compatibility directory or symlink).



Supply LibraryList for linking OpenMP






Set up Python module-scope variables for dependent packages.

Called before the install() method of dependents.

Default implementation does nothing, but this can be overridden by an extendable package to set up the module of its extensions. This is useful if there are some common steps to installing all extensions for a certain package.

Examples:

1.
Extensions often need to invoke the python interpreter from the Python installation being extended. This routine can put a python() Executable object in the module scope for the extension package to simplify extension installs.
2.
MPI compilers could set some variables in the dependent's scope that point to mpicc, mpicxx, etc., allowing them to be called by common name regardless of which MPI is used.
3.
BLAS/LAPACK implementations can set some variables indicating the path to their libraries, since these paths differ by BLAS/LAPACK implementation.

  • module (spack.package_base.PackageBase.module) -- The Python module object of the dependent package. Packages can use this to set module-scope variables for the dependent to use.
  • dependent_spec (spack.spec.Spec) -- The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent's state. Note that this package's spec is available as self.spec.



Adds environment variables to the generated module file.

These environment variables come from running:

$ source parallel_studio_xe_2017/bin/psxevars.sh intel64
[and likewise for MKL, MPI, and other components]





Supply LibraryList for linking TBB



Return the version in a unified style, suitable for Version class conditionals.




Prints a message (usually a variable) and the callers' names for a couple of stack frames.

Bails out with an error message. Arguments after the first are shown one per line, tab-indented, which helps long paths line up and stand out.

spack.build_systems.lua module

Bases: Builder
Build system name. Must also be defined in derived classes.





Names associated with package attributes in the old build-system format

Names associated with package methods in the old build-system format



Override this to preprocess source before building with luarocks

Sets up the build environment for a package.

This method will be called before the current package prefix exists in Spack's store.

env (spack.util.environment.EnvironmentModifications) -- environment modifications to be applied when the package is built. Package authors can call methods on it to alter the build environment.




Bases: PackageBase

Specialized class for lua packages

This attribute is used in UI queries that need to know the build system base class


Legacy buildsystem attribute used to deserialize and install old specs

Link depth to which list_url should be searched for new versions






spack.build_systems.makefile module

Bases: BaseBuilder

The Makefile builder encodes the most common way of building software with Makefiles. It has three phases that can be overridden, if need be:

1.
edit()
2.
build()
3.
install()



It is usually necessary to override the edit() phase (which is by default a no-op), while the other two have sensible defaults.

For a finer tuning you may override:

Method Purpose
build_targets Specify make targets for the build phase
install_targets Specify make targets for the install phase
build_directory() Directory where the Makefile is located
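
A typical edit() override patches hard-coded values in the Makefile. The following sketch assumes a hypothetical package and Makefile variables; FileFilter and spack_cc are made available in the package's module scope at build time:

class Libbar(MakefilePackage):
    def edit(self, spec, prefix):
        makefile = FileFilter("Makefile")
        # point the build at Spack's compiler wrapper and install prefix
        makefile.filter(r"^CC\s*=.*", "CC = " + spack_cc)
        makefile.filter(r"^PREFIX\s*=.*", "PREFIX = " + prefix)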


Run "make" on the build targets specified by the builder.

Return the directory containing the main Makefile.

Build system name. Must also be defined in derived classes.

Targets for make during the build() phase

Callback names for build-time test

Run "make" on the test and check targets, if found.

Edit the Makefile before calling make. The default is a no-op.

Run "make" on the install targets specified by the builder.

Targets for make during the install() phase

Callback names for install-time test

Searches the Makefile for an installcheck target and runs it if found.


Names associated with package methods in the old build-system format

Sequence of phases. Must be defined in derived classes



Bases: PackageBase

Specialized class for packages built using Makefiles.

This attribute is used in UI queries that need to know the build system base class


Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.maven module

Bases: BaseBuilder

The Maven builder encodes the default way to build software with Maven. It has two phases that can be overridden, if need be:

1.
build()
2.
install()



Compile code and package into a JAR file.

List of args to pass to build phase.

The directory containing the pom.xml file.

Build system name. Must also be defined in derived classes.


Copy to installation prefix.


Names associated with package attributes in the old build-system format

Names associated with package methods in the old build-system format

Sequence of phases. Must be defined in derived classes


Bases: PackageBase

Specialized class for packages that are built using the Maven build system. See https://maven.apache.org/index.html for more information.



Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.meson module

Bases: BaseBuilder

The Meson builder encodes the default way to build software with Meson. The builder has three phases that can be overridden, if need be:

1.
meson()
2.
build()
3.
install()



They all have sensible defaults and for many packages the only thing necessary will be to override meson_args().

For a finer tuning you may also override:

Method Purpose
root_mesonlists_dir() Location of the root MesonLists.txt
build_directory() Directory where to build the package


Files to archive for packages based on Meson

Make the build targets

Directory to use when building the package.

Returns the directory name to use when building the package.

Build system name. Must also be defined in derived classes.



Search Meson-generated files for the target test and run it if found.

Make the install targets



Names associated with package methods in the old build-system format

Run meson in the build directory

List of arguments that must be passed to meson, except:
  • --prefix
  • --libdir
  • --buildtype
  • --strip
  • --default_library

which will be set automatically.
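
For example, a package might supply Meson options like this (a sketch; the option name is hypothetical and package-specific):

def meson_args(self):
    # only package-specific options go here; --prefix, --libdir,
    # --buildtype, --strip, and --default_library are set by Spack
    return ["-Dtests=disabled"]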


Sequence of phases. Must be defined in derived classes

Relative path to the directory containing meson.build

This path is relative to the root of the extracted tarball, not to the build_directory. Defaults to the current directory.



Standard meson arguments for a generic package.

Standard meson arguments provided as a property for convenience of package writers.


Bases: PackageBase

Specialized class for packages built using Meson. For more information on the Meson build system, see https://mesonbuild.com/

This attribute is used in UI queries that need to know the build system base class


Produces a list of all command line arguments to pass the specified compiler flags to meson.

Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.msbuild module

Bases: BaseBuilder

The MSBuild builder encodes the most common way of building software with Microsoft's MSBuild tool. It has two phases that can be overridden, if need be:

1.
build()
2.
install()



It is usually necessary to override the install() phase, as many packages with MSBuild systems neglect to provide an install target. The default install phase will attempt to invoke an install target from MSBuild; if none exists, this will result in a build failure.

For a finer tuning you may override:

Method Purpose
build_targets Specify msbuild targets for the build phase
install_targets Specify msbuild targets for the install phase
build_directory() Directory where the project sln/vcxproj is located


Run "msbuild" on the build targets specified by the builder.

Return the directory containing the MSBuild solution or vcxproj.

Build system name. Must also be defined in derived classes.

Targets for make during the build() phase



Run "msbuild" on the install targets specified by the builder. This is INSTALL by default

Targets for msbuild during the install() phase

Define build arguments to MSBuild. This is an empty list by default. Individual packages should override this to pass MSBuild args on the command line. PlatformToolset is already defined and can be controlled via the toolchain_version property.

Define install arguments to MSBuild outside of the INSTALL target. This is the same as msbuild_args by default.

Sequence of phases. Must be defined in derived classes

Return common msbuild command-line arguments; for now, just the toolchain.

Return the currently targeted version of the MSVC toolchain. Override this method to select a specific version of the toolchain or to change selection heuristics. The default is whatever version of msvc has been selected by concretization.


Bases: PackageBase

Specialized class for packages built using Visual Studio project files or solutions.

This attribute is used in UI queries that need to know the build system base class





spack.build_systems.nmake module

Bases: BaseBuilder

The NMake builder encodes the most common way of building software with Microsoft's NMake tool. It has two phases that can be overridden, if need be:

1.
build()
2.
install()



It is usually necessary to override the install() phase, as many packages with NMake systems neglect to provide an install target. The default install phase will attempt to invoke an install target from NMake; if none exists, this will result in a build failure.

For a finer tuning you may override:

Method Purpose
build_targets Specify nmake targets for the build phase
install_targets Specify nmake targets for the install phase
build_directory() Directory where the project makefile is located


Run "nmake" on the build targets specified by the builder.

Return the directory containing the makefile.

Build system name. Must also be defined in derived classes.

Targets for make during the build() phase

Helper method to format arguments to nmake command line

Control whether or not Spack warns about quoted arguments passed to build utilities. If this is True, Spack will not warn about quotes. This is useful in cases with a space in the path or when build scripts require quoted arguments.

Run "nmake" on the install targets specified by the builder. This is INSTALL by default

Targets for make during the install() phase

Name of the current makefile. This is currently an empty value. If a project defines this value, it will be used with the /f argument to provide nmake an explicit makefile. This is useful in scenarios where there are multiple nmake files in the same directory.

The relative path to the directory containing nmake makefile

This path is relative to the root of the extracted tarball, not to the build_directory. Defaults to the current directory.


Define build arguments to NMake. This is an empty list by default. Individual packages should override this to pass NMake args on the command line.

Define arguments appropriate only for the install phase to NMake. This is an empty list by default. Individual packages should override this to pass NMake args on the command line.

Helper method to format arguments for overriding env variables on the nmake command line. Returns a properly formatted argument.

Sequence of phases. Must be defined in derived classes

Returns the list of standard arguments provided to NMake. Currently this is only /NOLOGO.


Bases: PackageBase

Specialized class for packages built using NMake Makefiles.

This attribute is used in UI queries that need to know the build system base class





spack.build_systems.octave module

Bases: BaseBuilder

The octave builder provides the following phases that can be overridden:

1.
install()

Build system name. Must also be defined in derived classes.


Install the package from the archive file


Names associated with package attributes in the old build-system format

Names associated with package methods in the old build-system format

Sequence of phases. Must be defined in derived classes

Sets up the build environment for a package.

This method will be called before the current package prefix exists in Spack's store.

env (spack.util.environment.EnvironmentModifications) -- environment modifications to be applied when the package is built. Package authors can call methods on it to alter the build environment.




spack.build_systems.oneapi module

Common utilities for managing Intel oneAPI packages.


Bases: IntelOneApiPackage

Base class for Intel oneAPI library packages.

Contains some convenient default implementations for libraries. Implement the method directly in the package if something different is needed.







Bases: Package

Base class for Intel oneAPI packages.


Subdirectory for this component in the install prefix.

Path to component <prefix>/<component>/<version>.


Additional arguments to pass to vars.sh script.

https://software.intel.com/oneapi
Package homepage where users can find more information about the package


Shared install method for all oneapi packages.


Adds environment variables to the generated module file.

These environment variables come from running:

$ source {prefix}/{component}/{version}/env/vars.sh





Updates oneapi package descriptions with common text.



Bases: object

Provides ld_flags when static linking is needed

Oneapi puts static and dynamic libraries in the same directory, so -l will default to finding the dynamic library. Use absolute paths, as recommended by oneapi documentation.

Allow both static and dynamic libraries to be supplied by the package.






spack.build_systems.perl module

Bases: BaseBuilder

The perl builder provides four phases that can be overridden, if required:

1.
configure()
2.
build()
3.
check()
4.
install()




Some packages may need to override configure_args(), which produces a list of arguments for configure().

Arguments should not include the installation base directory.

Builds a Perl package.

Build system name. Must also be defined in derived classes.

Callback names for build-time test

Runs built-in tests of a Perl package.

Run Makefile.PL or Build.PL with arguments consisting of an appropriate installation base directory followed by the list returned by configure_args().
RuntimeError -- if neither Makefile.PL nor Build.PL exist


List of arguments passed to configure().

Arguments should not include the installation base directory, which is prepended automatically.



Installs a Perl package.


Names associated with package attributes in the old build-system format

Names associated with package methods in the old build-system format




Bases: PackageBase

Specialized class for packages that are built using Perl.

This attribute is used in UI queries that need to know the build system base class


Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.python module

Bases: PackageBase
Given a map of package files to destination paths in the view, add the files to the view. By default this adds all files. Alternative implementations may skip some files, for example if other packages linked into the view already include the file.
  • view (spack.filesystem_view.FilesystemView) -- the view that's updated
  • merge_map (dict) -- maps absolute source paths to absolute dest paths for all files in this package.
  • skip_if_exists (bool) -- when True, don't link files in view when they already exist. When False, always link files, without checking if they already exist.




For an external package that extends python, find the most likely spec for the python it depends on.

  • First search: an "installed" external that shares a prefix with this package
  • Second search: a configured external that shares a prefix with this package
  • Third search: search this prefix for a python package

The external Spec for python most likely to be compatible with self.spec
Return type
spack.spec.Spec


Names of modules that the Python package provides.

These are used to test whether or not the installation succeeded. These names generally come from running:

>>> import setuptools
>>> setuptools.find_packages()


in the source tarball directory. If the module names are incorrectly detected, this property can be overridden by the package.

list of strings of module names
Return type
list


Given a map of package files to files currently linked in the view, remove the files from the view. The default implementation removes all files. Alternative implementations may not remove all files. For example if two packages include the same file, it should only be removed when both packages are removed.

Names of modules that should be skipped when running tests.

These are a subset of import_modules. If a module has submodules, they are skipped as well (meaning a.b is skipped if a is contained).

list of strings of module names
Return type
list



Attempts to import modules of the installed package.

Ensure all external python packages have a python dependency

If another package in the DAG depends on python, we use that python for the dependency of the external. If not, we assume that the external PythonPackage is installed into the same directory as the python it depends on.



Report all file conflicts, excepting special cases for python. Specifically, this does not report errors for duplicate __init__.py files for packages in the same namespace.


Bases: PythonExtension

Specialized class for packages that are built using pip.


Discover header files in platlib.

Package homepage where users can find more information about the package

Callback names for install-time test

Legacy buildsystem attribute used to deserialize and install old specs

Discover libraries in platlib.

Default list URL (place to find available versions)


Package name, version, and extension on PyPI

url = None


Bases: BaseBuilder
The root directory of the Python package.

This is usually the directory containing one of the following files:

  • pyproject.toml
  • setup.cfg
  • setup.py


Build system name. Must also be defined in derived classes.


Configuration settings to be passed to the PEP 517 build backend.

Requires pip 22.1 or newer for keys that appear only a single time, or pip 23.1 or newer if the same key appears multiple times.

  • spec (spack.spec.Spec) -- build spec
  • prefix (spack.util.prefix.Prefix) -- installation prefix

Possibly nested dictionary of KEY, VALUE settings
Return type
dict
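
For example, a package whose build backend honors a build-directory key might override this like so (a sketch; the key and value are hypothetical and backend-specific):

def config_settings(self, spec, prefix):
    # forwarded to the PEP 517 backend via pip's --config-settings
    return {"builddir": "build"}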


Extra global options to be supplied to the setup.py call before the install or bdist_wheel command.

Deprecated in pip 23.1.

  • spec (spack.spec.Spec) -- build spec
  • prefix (spack.util.prefix.Prefix) -- installation prefix

list of options
Return type
list


Install everything from build directory.

Extra arguments to be supplied to the setup.py install command.

Requires pip 23.0 or older.

  • spec (spack.spec.Spec) -- build spec
  • prefix (spack.util.prefix.Prefix) -- installation prefix

list of options
Return type
list


Callback names for install-time test

Names associated with package attributes in the old build-system format

Same as legacy_methods, but the signature is different

Names associated with package methods in the old build-system format

Sequence of phases. Must be defined in derived classes




spack.build_systems.qmake module

Bases: BaseBuilder

The qmake builder provides three phases that can be overridden:

1.
qmake()
2.
build()
3.
install()

They all have sensible defaults and for many packages the only thing necessary will be to override qmake_args().

Make the build targets

The directory containing the *.pro file.

Build system name. Must also be defined in derived classes.

Callback names for build-time test

Search the Makefile for a check: target and run it if found.

Make the install targets


Names associated with package attributes in the old build-system format

Names associated with package methods in the old build-system format

Sequence of phases. Must be defined in derived classes

Run qmake to configure the project and generate a Makefile.

List of arguments passed to qmake.
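
For instance (a sketch; the option is hypothetical and package-specific):

def qmake_args(self):
    # options passed to qmake when it generates the Makefile
    return ["CONFIG+=release"]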



Bases: PackageBase

Specialized class for packages built using qmake.

For more information on the qmake build system, see: http://doc.qt.io/qt-5/qmake-manual.html

This attribute is used in UI queries that need to know the build system base class


Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.r module

Bases: GenericBuilder

The R builder provides a single phase that can be overridden:

1.
install()



It has sensible defaults, and for many packages the only thing necessary will be to add dependencies.

Arguments to pass to install via --configure-args.

Arguments to pass to install via --configure-vars.

Installs an R package.

Names associated with package methods in the old build-system format


Bases: Package

Specialized class for packages that are built using R.

For more information on the R build system, see: https://stat.ethz.ch/R-manual/R-devel/library/utils/html/INSTALL.html

alias of RBuilder


This attribute is used in UI queries that need to know the build system base class



Package homepage where users can find more information about the package

Default list URL (place to find available versions)

url = None


spack.build_systems.racket module

Bases: Builder

The Racket builder provides an install phase that can be overridden.


Build system name. Must also be defined in derived classes.

Callback names for build-time test

Install everything from build directory.

Names associated with package attributes in the old build-system format

Names associated with package methods in the old build-system format

Sequence of phases. Must be defined in derived classes




Bases: PackageBase

Specialized class for packages that are built using Racket's raco pkg install and raco setup commands.


Package homepage where users can find more information about the package

Legacy buildsystem attribute used to deserialize and install old specs

By default we build in parallel. Subclasses can override this.



spack.build_systems.rocm module


spack.build_systems.ruby module

Bases: BaseBuilder

The Ruby builder provides two phases that can be overridden if required:

1.
build()
2.
install()

Build a Ruby gem.

Build system name. Must also be defined in derived classes.


Install a Ruby gem.

The ruby package sets GEM_HOME to tell gem where to install to.



Names associated with package attributes in the old build-system format

Names associated with package methods in the old build-system format

Sequence of phases. Must be defined in derived classes


Bases: PackageBase

Specialized class for building Ruby gems.

This attribute is used in UI queries that need to know the build system base class


Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.scons module

Bases: BaseBuilder

The Scons builder provides the following phases that can be overridden:

1.
build()
2.
install()

Packages that use SCons as a build system are less uniform than packages that use other build systems. Developers can add custom subcommands or variables that control the build. You will likely need to override build_args() to pass the appropriate variables.
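
For example, a hedged sketch of a package passing custom SCons variables (the variable names are hypothetical and depend on the project's SConstruct):

from spack.package import *

class Foo(SConsPackage):
    """Hypothetical SCons-based package."""

    def build_args(self, spec, prefix):
        # variables consumed by this project's SConstruct
        return [f"PREFIX={prefix}", "BUILD=release"]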

Build the package.

Arguments to pass to build.

Build system name. Must also be defined in derived classes.

Run unit tests after build.

By default, does nothing. Override this if you want to add package-specific tests.


Callback names for build-time test

Install the package.

Arguments to pass to install.


Names associated with package attributes in the old build-system format

Same as legacy_methods, but the signature is different

Names associated with package methods in the old build-system format




Bases: PackageBase

Specialized class for packages built using SCons.

See http://scons.org/documentation.html for more information.

Used in UI queries that need to know which build-system class we are using


Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.sip module

Bases: BaseBuilder

The SIP builder provides the following phases that can be overridden:

  • configure
  • build
  • install

The configure phase already adds a set of default flags. To see more options, run sip-build --help.

Build the package.

Arguments to pass to build.


Build system name. Must also be defined in derived classes.


Configure the package.

Arguments to pass to configure.

Install the package.

Arguments to pass to install.



Names associated with package methods in the old build-system format

Sequence of phases. Must be defined in derived classes



Bases: PackageBase

Specialized class for packages that are built using the SIP build system. See https://www.riverbankcomputing.com/software/sip/intro for more information.



Names of modules that the Python package provides.

These are used to test whether or not the installation succeeded. These names generally come from running:

>>> import setuptools
>>> setuptools.find_packages()


in the source tarball directory. If the module names are incorrectly detected, this property can be overridden by the package.

list of strings of module names
Return type
list


Callback names for install-time testing

Legacy buildsystem attribute used to deserialize and install old specs

The python Executable.

Name of private sip module to install alongside package


Attempts to import modules of the installed package.



spack.build_systems.sourceforge module


spack.build_systems.sourceware module

Bases: PackageBase

Mixin that takes care of setting url and mirrors for Sourceware.org packages.


Path of the package in a Sourceware mirror



spack.build_systems.waf module

Bases: BaseBuilder

The WAF builder provides the following phases that can be overridden:

  • configure
  • build
  • install

These are all standard Waf commands and can be found by running:

$ python waf --help


Each phase provides a function <phase> that runs:

$ python waf -j<jobs> <phase>


where <jobs> is the number of parallel jobs to build with. Each phase also has a <phase_args> function that can pass arguments to this call. All of these functions are empty except for the configure_args function, which passes --prefix=/path/to/installation/prefix.
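
As a minimal sketch, a package can extend the configure arguments like so (the flag is hypothetical):

from spack.package import *

class Foo(WafPackage):
    """Hypothetical Waf-based package."""

    def configure_args(self):
        # appended after the default --prefix=... argument
        return ["--enable-shared"]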

Executes the build.

Arguments to pass to build.

The directory containing the waf file.

Build system name. Must also be defined in derived classes.

Run unit tests after build.

By default, does nothing. Override this if you want to add package-specific tests.



Configures the project.

Arguments to pass to configure.

Installs the targets on the system.

Arguments to pass to install.

Run unit tests after install.

By default, does nothing. Override this if you want to add package-specific tests.





Sequence of phases. Must be defined in derived classes

The python Executable.


Runs the waf Executable.


Bases: PackageBase

Specialized class for packages that are built using the Waf build system. See https://waf.io/book/ for more information.



Legacy buildsystem attribute used to deserialize and install old specs




spack.build_systems.xorg module


spack.cmd package

Bases: SpackError

Exception class thrown for impermissible command names


Bases: SpackError

Exception class thrown for impermissible python names


Get a sorted list of all spack commands.

This will list the lib/spack/spack/cmd directory and find the commands there to construct the list. It does not actually import the python files -- just gets the names.


Convert module name (with _) to command name (with -).

Given a spec, figure out which installed package it refers to.
  • spec (spack.spec.Spec) -- a spec to disambiguate
  • env (spack.environment.Environment) -- a spack environment, if one is active, or None if no environment is active
  • local (bool) -- do not search chained spack instances
  • installed (bool or spack.database.InstallStatus or Iterable) -- install status argument passed to database query. See spack.database.Database._query for details.



Given a spec and a list of hashes, get the concrete spec the spec refers to.
  • spec (spack.spec.Spec) -- a spec to disambiguate
  • hashes (Iterable) -- a set of hashes of specs among which to disambiguate
  • local (bool) -- do not search chained spack instances
  • installed (bool or spack.database.InstallStatus or Iterable) -- install status argument passed to database query. See spack.database.Database._query for details.



Display human-readable specs with customizable formatting.

Prints the supplied specs to the screen, formatted according to the arguments provided.

Specs are grouped by architecture and compiler, and columnized if possible.

Options can add more information to the default display. Options can be provided either as keyword arguments or as an argparse namespace. Keyword arguments take precedence over settings in the argparse namespace.

  • specs (list) -- the specs to display
  • args (argparse.Namespace or None) -- namespace containing formatting arguments

  • paths (bool) -- Show paths with each displayed spec
  • deps (bool) -- Display dependencies with specs
  • long (bool) -- Display short hashes with specs
  • very_long (bool) -- Display full hashes with specs (supersedes long)
  • namespaces (bool) -- Print namespaces along with names
  • show_flags (bool) -- Show compiler flags with specs
  • variants (bool) -- Show variants with specs
  • indent (int) -- indent each line this much
  • groups (bool) -- display specs grouped by arch/compiler (default True)
  • decorators (dict) -- dictionary mapping specs to decorators
  • header_callback (Callable) -- called at start of arch/compiler groups
  • all_headers (bool) -- show headers even when arch/compiler aren't defined
  • output (IO) -- A file object to write to. Default is sys.stdout



Convert specs to a list of json records.


Argparse type for files that exist.

Filter a list of specs returning only those that are currently loaded.

Find active environment from args or environment variable.
1.
via spack -e ENV or spack -D DIR (arguments)
2.
via a path in the spack.environment.spack_env_var environment variable.


If an environment is found, read it in. If not, return None.

args (argparse.Namespace) -- argparse namespace with command arguments
a found environment, or None
Return type
(spack.environment.Environment)


Return the first line of the docstring.

Imports the command function associated with cmd_name.

The function's name is derived from cmd_name using python_name().

cmd_name (str) -- name of the command (contains -, not _).


Imports the module for a particular command name and returns it.
cmd_name (str) -- name of the command for which to get a module (contains -, not _).




Break a list of specs into groups indexed by arch/compiler.

Returns a concrete spec, matching what is available in the environment. If no matching spec is found in the environment (or if no environment is active), this will return the given spec but concretized.

Convenience function for parsing arguments from specs. Handles common exceptions and dies if there are errors.

Given a list of specs, this will print a message about how many specs are in that list.
  • specs (list) -- depending on how many items are in this list, choose the plural or singular form of the word "package"
  • pkg_type (str) -- the output string will mention this provided category, e.g. if pkg_type is "installed" then the message would be "3 installed packages"



Convert - to _ in command name, to make a valid identifier.

Remove some options from a parser.

Used by commands to get the active environment

If an environment is not found, print an error message that says the calling command needs an active environment.

cmd_name (str) -- name of calling command
the active environment
Return type
(spack.environment.Environment)


Require that the provided name is a valid command name (per cmd_name()). Useful for checking parameters for function prerequisites.

Require that the provided name is a valid python name (per python_name()). Useful for checking parameters for function prerequisites.

Ensure that this instance of Spack is a git clone.

Subpackages

spack.cmd.common package

Print out instructions for users to initialize shell support.
  • cmd (str) -- the command the user tried to run that requires shell support in order to work
  • equivalent (str) -- a command they can run instead, without enabling shell support



Submodules

spack.cmd.common.arguments module

Extend a parser with extra arguments
  • parser -- parser to be extended
  • list_of_arguments -- arguments to be added to the parser



spack.cmd.common.confirmation module

Display the list of specs to be acted on and ask for confirmation.
  • specs -- specs to be removed
  • participle -- action expressed as a participle, e.g. "uninstalled"
  • noun -- action expressed as a noun, e.g. "uninstallation"



spack.cmd.common.env_utility module




spack.cmd.modules package

Implementation details of the spack module command.

Bases: Exception

Raised when multiple specs match a constraint, in a context where this is not allowed.


Bases: Exception

Raised when no spec matches a constraint, in a context where this is not allowed.



Dictionary populated with the list of sub-commands. Each sub-command must be callable and accept 3 arguments:
  • module_type: the type of module it refers to
  • specs : the list of specs to be processed
  • args : namespace containing the parsed command line arguments





Retrieve paths or use names of module files

Prompt the list of modules associated with a list of specs


Ensures exactly one spec has been selected, or raises the appropriate exception.

Regenerates the module files for every spec in specs and every module type in module types.

Deletes the module files associated with every spec in specs, for every module type in module types.


Submodules

spack.cmd.modules.lmod module


set the default module file, when multiple are present

spack.cmd.modules.tcl module


set the default module file, when multiple are present

Submodules

spack.cmd.add module



spack.cmd.arch module


Prints a human-readable list of the targets passed as arguments.


spack.cmd.audit module








spack.cmd.blame module


Dump the blame as a json object to the terminal.

Given a set of rows with authors and lines, print a table.


spack.cmd.bootstrap module

Subdirectory where to create the mirror



spack.cmd.build_env module


spack.cmd.buildcache module



check specs against remote binary mirror(s) to see if any need to be rebuilt

this command uses the process exit code to indicate its result: if the exit code is non-zero, then at least one of the indicated specs needs to be rebuilt



download buildcache entry from a remote mirror to local folder

this command uses the process exit code to indicate its result: a non-zero exit code indicates that the command failed to download at least one of the required buildcache components


get name (prefix) of buildcache entries for this spec

install from a binary package

get public keys available on mirrors

list binary packages available from mirrors

Read manifest files containing information about specific specs to copy from source to destination, remove duplicates (since any binary package for a given hash should be the same as any other), and copy all files specified in the manifest files.

analyze an installed spec and report whether executables and libraries are relocatable

create a binary package and push it to a mirror

get full spec for dependencies and write them to files in the specified output directory

uses exit code to signal success or failure. an exit code of zero means the command was likely successful. if any errors or exceptions are encountered, or if expected command-line arguments are not provided, then the exit code will be non-zero



sync binaries (and associated metadata) from one mirror to another

requires an active environment in order to know which specs to sync



update a buildcache index
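
For example, typical invocations of the buildcache subcommands described above (the mirror name is hypothetical):

$ spack buildcache list
$ spack buildcache install hdf5
$ spack buildcache update-index my-mirror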

spack.cmd.cd module


This is for decoration -- spack cd is used through spack's shell support. This allows spack cd to print a descriptive help message when called with -h.

spack.cmd.change module



spack.cmd.checksum module

Add checksummed versions to a package's instructions and open a user's editor so they may double-check the work of the function.
  • pkg (spack.package_base.PackageBase) -- A package class for a given package in Spack.
  • version_lines (str) -- A string of rendered version lines.




Verify checksums present in version_hashes against those present in the package's instructions.
  • pkg (spack.package_base.PackageBase) -- A package class for a given package in Spack.
  • version_hashes (dict) -- A dictionary of the form: version -> checksum.




spack.cmd.ci module


generate jobs file from a CI-aware spack file

if you want to report the results on CDash, you will need to set the SPACK_CDASH_AUTH_TOKEN before invoking this command. the value must be the CDash authorization token needed to create a build group and register all generated jobs under it


rebuild a spec if it is not on the remote mirror

check a single spec against the remote mirror, and rebuild it from source if the mirror does not contain the hash


rebuild the buildcache index for the remote mirror

use the active, gitlab-enabled environment to rebuild the buildcache index for the associated mirror


generate instructions for reproducing the spec rebuild job

artifacts of the provided gitlab pipeline rebuild job's URL will be used to derive instructions for reproducing the build locally




spack.cmd.clean module





spack.cmd.clone module




spack.cmd.commands module

Bases: ArgparseWriter

Write argparse output as bash programmable tab completion.

Return the body of the function.
  • positionals -- List of positional arguments.
  • optionals -- List of optional arguments.
  • subcommands -- List of subcommand parsers.

Function body.


Return the syntax needed to end a function definition.
prog -- Program name
Function definition ending.


Return the string representation of a single node in the parser tree.
cmd -- Parsed information about a command or subcommand.
String representation of this subcommand.


Return the syntax for reporting optional flags.
optionals -- List of optional arguments.
Syntax for optional flags.


Return the syntax for reporting positional arguments.
positionals -- List of positional arguments.
Syntax for positional arguments.


Return the syntax needed to begin a function definition.
prog -- Program name.
Function definition beginning.


Return the syntax for reporting subcommands.
subcommands -- List of subcommand parsers.
Syntax for subcommand parsers



Bases: ArgparseWriter

Write argparse output as fish programmable tab completion.

Return all the completion commands.
  • prog -- Program name.
  • positionals -- List of positional arguments.
  • optionals -- List of optional arguments.
  • subcommands -- List of subcommand parsers.

Completion command.


Return the head of the completion command.
  • prog -- Program name.
  • index -- Index of positional argument.
  • nargs -- Number of arguments.

Head of the completion command.


Return the string representation of a single node in the parser tree.
cmd -- Parsed information about a command or subcommand.
String representation of a node.


Return the completion for optional arguments.
  • prog -- Program name.
  • optionals -- List of optional arguments.

Completion command.


Read the optionals and return the command to set optspec.
  • prog -- Program name.
  • optionals -- List of optional arguments.

Command to set optspec variable.


Return the completion for positional arguments.
  • prog -- Program name.
  • positionals -- List of positional arguments.

Completion command.


Return a comment line for the command.
prog -- Program name.
Comment line.


Return the completion for subcommands.
  • prog -- Program name.
  • subcommands -- List of subcommand parsers.

Completion command.




Bases: ArgparseWriter

Write argparse output as a list of subcommands.

Return the string representation of a single node in the parser tree.
cmd -- Parsed information about a command or subcommand.
String representation of this subcommand.



Bash tab-completion script.
  • args -- Command-line arguments.
  • out -- File object to write to.



Main function that calls formatter functions.
  • parser -- Argument parser.
  • args -- Command-line arguments.




Decorator used to register formatters.
func -- Formatting function.
The same function.



Simple list of top-level commands.
  • args -- Command-line arguments.
  • out -- File object to write to.



Prepend header text at the beginning of a file.
  • args -- Command-line arguments.
  • out -- File object to write to.



ReStructuredText documentation of subcommands.
  • args -- Command-line arguments.
  • out -- File object to write to.



Generate an index of all commands.
out -- File object to write to.


Set up the argument parser.
subparser -- Preliminary argument parser.


Hierarchical tree of subcommands.
  • args -- Command-line arguments.
  • out -- File object to write to.



Iterate through the shells and update the standard completion files.

This is a convenience method to avoid calling this command many times, and to simplify completion update for developers.

  • parser -- Argument parser.
  • args -- Command-line arguments.




spack.cmd.compiler module


Search $PATH, a given list of paths, or modules for compilers and add them to Spack's configuration.

Print info about all compilers matching a spec.




spack.cmd.compilers module



spack.cmd.concretize module



spack.cmd.config module


Add the given configuration to the specified config scope

This is a stateful operation that edits the config files.


Print out line-by-line blame of merged YAML.

Edit the configuration file for a specific scope and config section.

With no arguments and an active environment, edit the spack.yaml for the active environment.


Dump merged YAML configuration for a specific section.

With no arguments and an active environment, print the contents of the environment's manifest file (spack.yaml).


List the possible configuration sections.

Used primarily for shell tab completion scripts.


Generate a packages config based on the configuration of all upstream installs.

Remove the given configuration from the specified config scope

This is a stateful operation that edits the config files.
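
For example, the read-only and stateful operations above map to invocations like:

$ spack config get config
$ spack config blame packages
$ spack config edit repos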





spack.cmd.containerize module



spack.cmd.create module




Bases: object

An instance of BuildSystemGuesser provides a callable object to be used during spack create. By passing this object to spack checksum, we can take a peek at the fetched tarball and discern the build system it uses.






















Determine the build system template.

If a template is specified, always use that. Otherwise, if a URL is provided, download the tarball and peek inside to guess what build system it uses. Otherwise, use a generic template by default.

  • template (str) -- --template argument given to spack create
  • url (str) -- url argument given to spack create
  • args (argparse.Namespace) -- The arguments given to spack create
  • guesser (BuildSystemGuesser) -- The first_stage_function given to spack checksum which records the build system it detects

The name of the build system template to use
Return type
str


Get the name of the package based on the supplied arguments.

If a name was provided, always use that. Otherwise, if a URL was provided, extract the name from that. Otherwise, use a default.

  • name (str) -- explicit --name argument given to spack create
  • url (str) -- url argument given to spack create

The name of the package
Return type
str


Returns a Repo object that will allow us to determine the path where the new package file should be created.
  • args (argparse.Namespace) -- The arguments given to spack create
  • name (str) -- The name of the package to create


Return type
spack.repo.Repo


Get the URL to use.

Use a default URL if none is provided.

url (str) -- url argument to spack create
The URL of the package
Return type
str


Returns a list of versions and hashes for a package.

Also returns a BuildSystemGuesser object.

Returns default values if no URL is provided.

  • args (argparse.Namespace) -- The arguments given to spack create
  • name (str) -- The name of the package

versions and hashes, and a BuildSystemGuesser object
Return type
tuple



spack.cmd.debug module





spack.cmd.deconcretize module





spack.cmd.dependencies module



spack.cmd.dependents module


Get all dependents for a package.
  • pkg_name (str) -- name of the package whose dependents should be returned
  • ideps (dict) -- dictionary of dependents, from inverted_dependencies()
  • transitive (bool or None) -- return transitive dependents when True



Iterate through all packages and return a dictionary mapping package names to possible dependencies.

Virtual packages are included as sources, so that you can query dependents of, e.g., mpi, but virtuals are not included as actual dependents.



spack.cmd.deprecate module

Deprecate one Spack install in favor of another

Spack packages of different configurations cannot be installed to the same location. However, in some circumstances (e.g. security patches) old installations should never be used again. In these cases, we will mark the old installation as deprecated, remove it, and link another installation into its place.

It is up to the user to ensure binary compatibility between the deprecated installation and its deprecator.

Deprecate one spec in favor of another


spack.cmd.dev_build module



spack.cmd.develop module



spack.cmd.diff module

Generate a comparison, including diffs (for each side) and an intersection.

We can either print the result to the console, or parse into a json object for the user to save. We return an object that shows the differences, intersection, and names for a pair of specs a and b.

  • a (spack.spec.Spec) -- the first spec to compare
  • b (spack.spec.Spec) -- the second spec to compare
  • a_name (str) -- the name of spec a
  • b_name (str) -- the name of spec b
  • to_string (bool) -- return an object that can be json dumped
  • color (bool) -- whether to format the names for the console




Given a list of ASP functions, convert into a list of key: value tuples.

We are squashing whatever is after the first index into one string for easier parsing in the interface


Print the difference.

Given a diffset for A and a diffset for B, print red/green diffs to show the differences.



Transforms attr("foo", "bar") into foo("bar").

spack.cmd.docs module


spack.cmd.edit module


Opens the requested package file in your favorite $EDITOR.
  • name (str) -- The name of the package
  • repo_path (str) -- The path to the repository containing this package
  • namespace (str) -- A valid namespace registered with Spack




spack.cmd.env module


Returns the path of a temporary directory in which to create an environment

Look for a function called environment_<name> and call it.






deactivate any active environment in the shell


generate a depfile from the concrete environment specs




list modules for an installed environment (see spack module loads)

Remove a named environment.

This removes an environment managed by Spack. Directory environments and manifests embedded in repositories should be removed manually.


remove an existing environment


restore environments to their state before update


print whether there is an active environment


update environments to the latest format


manage a view associated with the environment


Dictionary mapping subcommand names and aliases to functions


spack.cmd.extensions module



spack.cmd.external module








spack.cmd.fetch module



spack.cmd.find module

Display extra find output when running in an environment.

Find in an environment outputs 2 or 3 sections:

1.
Root specs
2.
Concretized roots (if asked for with -c)
3.
Installed specs




Create a function for decorating specs when in an environment.


spack.cmd.gc module



spack.cmd.gpg module



export a gpg key, optionally including secret key

add the default keys to the keyring

list keys available in the keyring

publish public keys to a build cache


add a key to the keyring

remove a key from the keyring

verify a signed package


spack.cmd.graph module



spack.cmd.help module



spack.cmd.info module




Return a function to pad elements of a list.

output build, link, and run package dependencies

output information on external detection

Output the licenses of the project.

output package maintainers

output installation phases

output package tags

output relevant build-time and stand-alone tests





output virtual packages





spack.cmd.install module


Translate the test cli argument into the proper install argument

Return abstract and concrete spec parsed from the command line.

Return the list of concrete specs read from files.

Computes the default filename for the log file and creates the corresponding directory if not present


Translate command line arguments into a dictionary that will be passed to the package installer.



Return the filename to be used for reporting to JUnit or CDash format.



spack.cmd.license module



Latest year that copyright applies. UPDATE THIS when bumping copyright.

licensed files that can have LGPL language in them; so far, just this command -- so it can find LGPL things elsewhere



SPDX license id must appear in the first <license_lines> lines of a file


list files in spack that should have license headers


update copyright for the current year in all licensed files

verify that files in spack have the right license header

spack.cmd.list module

Filters the sequence of packages according to user prescriptions
  • pkgs -- sequence of packages
  • args -- parsed command line arguments

filtered and sorted list of packages


Decorator used to register formatters


Link to a package file on github.

Print out information on all packages in Sphinx HTML.

This is intended to be inlined directly into Sphinx documentation. We write HTML instead of RST for speed; generating RST from all packages causes the Sphinx build to take forever. Including this as raw HTML is much faster.




Print out rows in a table with ncols of elts laid out vertically.


Print all packages with their latest versions.

spack.cmd.load module


Parser is only constructed so that this prints a nice help message with -h.

spack.cmd.location module



spack.cmd.log_parse module



spack.cmd.maintainers module






Given a dictionary with values that are Collections, return their union.
dictionary (dict) -- dictionary whose values are all collections.
the union of all collections in the dictionary's values.
Return type
(set)


spack.cmd.make_installer module

Use CMake to generate a WIX installer in a newly created build directory



spack.cmd.mark module

Marks all the specs in a list.
  • specs (list) -- list of specs to be marked
  • explicit (bool) -- whether to mark specs as explicitly installed



Returns a list of specs matching the (not necessarily concretized) specs given on the command line

  • specs (list) -- list of specs to be matched against installed packages
  • allow_multiple_matches (bool) -- if True multiple matches are admitted

list of specs





spack.cmd.mirror module




Return the list of concrete specs that the user wants to mirror. The list is passed either from command line or from a text file.





Extend the input list by adding all the dependencies explicitly.



add a mirror to Spack

create a directory to be used as a spack mirror, and fill it with package archives

given a url, recursively delete everything under it

print out available mirrors to the console

remove a mirror by name

configure the connection details of a mirror

change the URL of a mirror
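
For example (the mirror name, URL, and path are hypothetical):

$ spack mirror add my-mirror https://example.com/mirror
$ spack mirror create -d /path/to/mirror zlib
$ spack mirror list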

Return a predicate that evaluates to True if a spec was not explicitly excluded by the user.



Return a list of specs read from a text file.

The file should contain one spec per line.

  • filename (str) -- name of the file containing the abstract specs.
  • concretize (bool) -- if True concretize the specs before returning the list.



Return how many versions should be mirrored per spec.

spack.cmd.module module



spack.cmd.patch module



spack.cmd.pkg module

Get a grep command to use with spack pkg grep.


add a package to the git stage with git add

show packages added since a commit

show packages changed since a commit

compare packages available in two different git revisions

grep for strings in package.py files from all repositories

dump canonical source code hash for a package spec

list packages associated with a particular spack git revision

show packages removed since a commit

dump source code for a package
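
For example:

$ spack pkg grep cuda_arch
$ spack pkg source zlib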


spack.cmd.providers module



spack.cmd.pydoc module



spack.cmd.python module

An ipython interpreter is intended to be interactive, so it doesn't support running a script or arguments

Set sys.excepthook to let uncaught exceptions return 1 to the shell.
console (code.InteractiveConsole) -- the console that needs a change in sys.excepthook



A python interpreter is the default interpreter


spack.cmd.reindex module


spack.cmd.remove module



spack.cmd.repo module


add a package source to Spack's configuration

create a new package repository

show registered repositories and their namespaces

remove a repository from Spack's configuration


spack.cmd.resource module


list all resources known to spack (currently just patches)

show a resource, identified by its checksum


spack.cmd.restage module



spack.cmd.solve module




spack.cmd.spec module



spack.cmd.stage module



spack.cmd.style module

Get list of changed files in the Spack repository.
  • base (str) -- name of base branch to evaluate differences with.
  • untracked (bool) -- include untracked files in the list.
  • all_files (bool) -- list all files in the repository.
  • root (str) -- use this directory instead of the Spack prefix.



Translate prefix-relative path to current working directory-relative.

List of directories to exclude from checks -- relative to spack root

Collect data into fixed-length chunks or blocks

Whether flake8 should consider a file as a core file or a package.

We run flake8 with different exceptions for the core and for packages, since we allow from spack import * and poking globals into packages.















Order in which tools should be run. flake8 is last so that it can double-check the results of other tools (if, e.g., --fix was provided). The list maps an executable name to a method to ensure the tool is bootstrapped or present in the environment.


Validate --tool and --skip arguments (sets of optionally comma-separated tools).

spack.cmd.tags module




spack.cmd.test module





find tests that are running or have available results

displays aliases for tests that have them, otherwise test suite content hashes


list installed packages with available tests

remove results from Spack test suite(s) (default all)

if no test suite is listed, remove results for all suites.

removed tests can no longer be accessed for results or status, and will not appear in spack test list results


get the results from Spack test suite(s) (default all)

run tests for the specified installed packages

if no specs are listed, run tests for all packages in the current environment or all installed packages if there is no active environment


get the current status for the specified Spack test suite(s)
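
For example:

$ spack test run libsigsegv
$ spack test results
$ spack test list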

spack.cmd.test_env module


spack.cmd.tutorial module



spack.cmd.undevelop module



spack.cmd.uninstall module

Given a list of specs, return those that are in the env



Returns a list of specs matching the (not necessarily concretized) specs given on the command line

  • env -- optional active environment
  • specs -- list of specs to be matched against installed packages
  • allow_multiple_matches -- if True multiple matches are admitted

list of specs
Return type
list


Returns unordered uninstall_list and remove_list: these may overlap (some things may be both uninstalled and removed from the current environment).

It is assumed we are in an environment if --remove is specified (this method raises an exception otherwise).






spack.cmd.unit_test module

Add parsed pytest args, unknown args, and remainder together.

We add some basic pytest arguments to the Spack parser to ensure that they show up in the short help, so we have to reassemble things here.


Print a nicer list of tests than what pytest offers.



spack.cmd.unload module

Parser is only constructed so that this prints a nice help message with -h.

unload spack packages from the user environment

spack.cmd.url module

Determine if the name of a package was correctly parsed.
  • pkg (spack.package_base.PackageBase) -- The Spack package
  • name (str) -- The name that was extracted from the URL

True if the name was correctly parsed, else False
Return type
bool


Prints a URL. Underlines the detected name with dashes and the detected version with tildes.
url (str) -- The url to parse


Remove build system prefix ('py-', 'perl-', etc.) from a package name.

After determining a name, spack create determines a build system. Some build systems prepend a special string to the front of the name. Since this can't be guessed from the URL, it would be unfair to say that these names are incorrectly parsed, so we remove them.

pkg_name (str) -- the name of the package
the name of the package with any build system prefix removed
Return type
str


Remove separator characters ('.', '_', and '-') from a version.

A version like 1.2.3 may be displayed as 1_2_3 in the URL. Make sure 1.2.3, 1-2-3, 1_2_3, and 123 are considered equal. Unfortunately, this also means that 1.23 and 12.3 are equal.

version (str or spack.version.Version) -- A version
The version with all separator characters removed
Return type
str
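
As a behaviorally equivalent sketch of this normalization (a hypothetical re-implementation, not the actual source):

import re

def remove_separators(version):
    # strip '.', '_' and '-' so that 1.2.3, 1-2-3, 1_2_3 and 123 compare equal
    return re.sub(r"[._-]", "", str(version))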





Helper function for url_list().
  • args (argparse.Namespace) -- The arguments given to spack url list
  • urls (set) -- List of URLs that have already been added
  • url (str or None) -- A URL to potentially add to urls depending on args
  • pkg (spack.package_base.PackageBase) -- The Spack package

The updated set of urls
Return type
set





Determine if the version of a package was correctly parsed.
  • pkg (spack.package_base.PackageBase) -- The Spack package
  • version (str) -- The version that was extracted from the URL

True if the version was correctly parsed, else False
Return type
bool


spack.cmd.verify module



spack.cmd.versions module



spack.cmd.view module

Produce a "view" of a Spack DAG.

A "view" is file hierarchy representing the union of a number of Spack-installed package file hierarchies. The union is formed from:

  • specs resolved from the package names given by the user (the seeds)
  • all dependencies of the seeds unless user specifies --no-dependencies
  • less any specs with names matching the regular expressions given by --exclude

The view can be built and torn down via a number of methods (the "actions"):

  • symlink :: a file system view which is a directory hierarchy that is the union of the hierarchies of the installed packages in the DAG where installed files are referenced via symlinks.
  • hardlink :: like the symlink view but hardlinks are used.
  • statlink :: a view producing a status report of a symlink or hardlink view.

The file system view concept is inspired by Nix, implemented by brett.viren@gmail.com ca. 2016.

All operations on views are performed via proxy objects such as YamlFilesystemView.

When dealing with querying actions (remove/status) we only need to disambiguate among specs in the view


Produce a view of a set of packages.

spack.compilers package

This module contains functions related to finding compilers on the system and configuring Spack to use multiple compilers.

Bases: object

This acts as a hashable reference to any object (regardless of whether the object itself is hashable) and also prevents the object from being garbage-collected (so if two CacheReference objects are equal, they will refer to the same object, since it will not have been gc'ed since the creation of the first CacheReference).



Bases: tuple

Gathers the attribute values by which a detected compiler is considered unique in Spack.

  • os: the operating system
  • compiler_name: the name of the compiler (e.g. 'gcc', 'clang', etc.)
  • version: the version of the compiler



Alias for field number 1

Alias for field number 0

Alias for field number 2



Bases: tuple

Groups together the arguments needed by detect_version. The four entries in the tuple are:

  • id: a CompilerID tuple identifying the compiler (its version may be None, as it will be detected later)
  • variation: a NameVariation for the file being tested
  • language: compiler language being tested (one of 'cc', 'cxx', 'fc', 'f77')
  • path: full path to the executable being tested

Alias for field number 0

Alias for field number 2

Alias for field number 3

Alias for field number 1



Bases: tuple

Variations on a matched compiler name

Alias for field number 0

Alias for field number 1





Add compilers to the config for the specified architecture.
  • compilers -- a list of Compiler objects.
  • scope -- configuration scope to modify.








Return a set of specs for all the compiler versions currently available to build with. These are instances of CompilerSpec.

Return the list of classes for all operating systems available on this platform

Returns a list of DetectVersionArgs tuples to be used in a corresponding function to detect compiler versions.

The operating_system instance can customize the behavior of this function by providing a method called with the same name.

  • operating_system -- the operating system on which we are looking for compilers
  • paths -- paths to search for compilers

List of DetectVersionArgs tuples. Each item in the list will be later mapped to the corresponding function call to detect the version of the compilers in this OS.


Given a compiler module name, get the corresponding Compiler class.







Computes the version of a compiler and adds it to the information passed as input.

As this function is meant to be executed by worker processes it won't raise any exception but instead will return a (value, error) tuple that needs to be checked by the code dispatching the calls.

detect_version_args -- information on the compiler for which we should detect the version.
A (DetectVersionArgs, error) tuple. If error is None the version of the compiler was computed correctly and the first argument of the tuple will contain it. Otherwise error is a string containing an explanation on why the version couldn't be computed.



Return the list of compilers found in the paths given as arguments.
  • path_hints -- list of paths to search. A sensible default based on the PATH environment variable will be used if the value is None
  • mixed_toolchain -- allow mixing compilers from different toolchains if otherwise missing for a certain language
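
These functions back the spack compiler command; for example:

$ spack compiler find
$ spack compiler list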



Same as find_compilers but return only the compilers that are not already in compilers.yaml.
  • path_hints -- list of paths to search. A sensible default based on the PATH environment variable will be used if the value is None
  • scope -- scope to look for a compiler. If None consider the merged configuration.
  • mixed_toolchain -- allow mixing compilers from different toolchains if otherwise missing for a certain language




Return the compiler configuration for the specified architecture.



Returns True if the current compiler is a mixed toolchain, False otherwise.
compiler (spack.compiler.Compiler) -- a valid compiler object


Process a list of detected versions and turn them into a list of compiler specs.
  • detected_versions -- list of DetectVersionArgs containing a valid version
  • mixed_toolchain -- allow mixing compilers from different toolchains if a language is missing

list of Compiler objects
Return type
list


Add missing compilers across toolchains when they are missing for a particular language. This currently only adds the most sensible gfortran to (apple)-clang if it doesn't have a fortran compiler (no flang).

Return the spec of the package that provides the compiler.


Given a list of compilers, remove those that are already defined in the configuration.


Return a set of names of compilers supported by Spack.

See available_compilers() to get a list of all the available versions of supported compilers.


Return a set of compiler class objects supported by Spack that are also supported by the current host platform

Return a set of compiler class objects supported by Spack that are also supported by the provided platform
platform (str) -- string representation of platform for which compiler compatibility should be determined


Submodules

spack.compilers.aocc module

Bases: Compiler





Returns the flag used by the C compiler to produce Position Independent Code (PIC).






Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).



Extracts the version from compiler's output.


Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).


Returns the flag used by the FC compiler to produce Position Independent Code (PIC).







This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information


spack.compilers.apple_clang module


spack.compilers.arm module

Bases: Compiler



Returns the flag used by the C compiler to produce Position Independent Code (PIC).





Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).


Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).



Returns the flag used by the FC compiler to produce Position Independent Code (PIC).






This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information

Regex used to extract version from compiler's output


spack.compilers.cce module

Bases: Compiler

Cray compiler environment compiler.






Returns the flag used by the C compiler to produce Position Independent Code (PIC).





Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).



Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).


Returns the flag used by the FC compiler to produce Position Independent Code (PIC).






This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information



spack.compilers.clang module

Bases: Compiler





Returns the flag used by the C compiler to produce Position Independent Code (PIC).





Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).


Extracts the version from compiler's output.


Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).


Returns the flag used by the FC compiler to produce Position Independent Code (PIC).





This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information




spack.compilers.dpcpp module

Bases: Oneapi

This is the same as the oneAPI compiler but uses dpcpp instead of icpx (for DPC++ source files). It explicitly refers to dpcpp, so that CMake test files which check the compiler name (e.g. CMAKE_CXX_COMPILER) detect it as dpcpp.

Ideally we could switch out icpx for dpcpp where needed in the oneAPI compiler definition, but two things are needed for that: (a) a way to tell the compiler that it should be using dpcpp and (b) a way to customize the link_paths

See also: https://www.intel.com/content/www/us/en/develop/documentation/oneapi-dpcpp-cpp-compiler-dev-guide-and-reference/top/compiler-setup/using-the-command-line/invoking-the-compiler.html




spack.compilers.fj module

Bases: Compiler



Returns the flag used by the C compiler to produce Position Independent Code (PIC).






Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).



Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).


Returns the flag used by the FC compiler to produce Position Independent Code (PIC).





This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information

Regex used to extract version from compiler's output


spack.compilers.gcc module

Bases: Compiler





Returns the flag used by the C compiler to produce Position Independent Code (PIC).








Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).


Older versions of gcc use the -dumpversion option. Output looks like this:

4.4.7


In GCC 7, this option was changed to only return the major version of the compiler:

7


A new -dumpfullversion option was added that gives us what we want:

7.2.0
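
For example, on a hypothetical GCC 7.2.0 installation:

$ gcc -dumpversion
7
$ gcc -dumpfullversion
7.2.0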




Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).



Returns the flag used by the FC compiler to produce Position Independent Code (PIC).

Older versions of gfortran use the -dumpversion option. Output looks like this:

GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
Copyright (C) 2010 Free Software Foundation, Inc.


or:

4.8.5


In GCC 7, this option was changed to only return the major version of the compiler:

7


A new -dumpfullversion option was added that gives us what we want:

7.2.0






Query the compiler for its install prefix. This is the install path as reported by the compiler. Note that the paths for cc, cxx, etc. are not enough to find the install prefix of the compiler, since they can be symlinks, wrappers, or filenames instead of absolute paths.




This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.



spack.compilers.intel module

Bases: Compiler





Returns the flag used by the C compiler to produce Position Independent Code (PIC).




Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).



Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).


Returns the flag used by the FC compiler to produce Position Independent Code (PIC).






This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information

Regex used to extract version from compiler's output


spack.compilers.msvc module

Bases: object

Compose a call to cmd for an ordered series of cmd commands/scripts



Bases: Compiler

Cl toolset version






Ifx compiler version associated with this version of MSVC

Return values to ignore when invoking the compiler to get its version



This is the VCToolset version, NOT the actual version of the cl compiler. For the cl version, query Msvc.cl_version.

This is the platform toolset version of the current MSVC compiler, e.g. 142. This is different from the VC toolset version as established by short_msvc_version.

Set environment variables for MSVC using the Microsoft-provided script.

This is the shorthand VCToolset version, of the form MSVC<short-ver>, NOT the full version. For the full version, see Msvc.msvc_version; for the raw platform toolset version, see Msvc.platform_toolset_ver.


Compiler argument that produces version information

Regex used to extract version from compiler's output



Bases: VarsInvocation


Accessor for Windows SDK version property

Note: this property may not be set by the calling context; in that case it will return an empty string.

This property will ONLY be set if the SDK package is a dependency somewhere in the Spack DAG of the package for which we are constructing an MSVC compiler environment. Otherwise it should be left unset, to allow the VCVARS script to use its internal heuristics to determine the appropriate SDK version.






spack.compilers.nag module

Bases: Compiler






Extracts the version from compiler's output.


Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).



Returns the flag used by the FC compiler to produce Position Independent Code (PIC).



Flag that needs to be used to pass an argument to the linker.



This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information


spack.compilers.nvhpc module

Bases: Compiler





Returns the flag used by the C compiler to produce Position Independent Code (PIC).





Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).



Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).


Returns the flag used by the FC compiler to produce Position Independent Code (PIC).






This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information

Regex used to extract version from compiler's output


spack.compilers.oneapi module

Bases: Compiler





Returns the flag used by the C compiler to produce Position Independent Code (PIC).






Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).



Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).


Returns the flag used by the FC compiler to produce Position Independent Code (PIC).





Set any environment variables necessary to use the compiler.


This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information



spack.compilers.pgi module

Bases: Compiler





Returns the flag used by the C compiler to produce Position Independent Code (PIC).



Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).



Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).


Returns the flag used by the FC compiler to produce Position Independent Code (PIC).

Return values to ignore when invoking the compiler to get its version






This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information

Regex used to extract version from compiler's output


spack.compilers.rocmcc module


spack.compilers.xl module

Bases: Compiler



Returns the flag used by the C compiler to produce Position Independent Code (PIC).




Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).



Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).



Returns the flag used by the FC compiler to produce Position Independent Code (PIC).






This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Compiler argument that produces version information

Regex used to extract version from compiler's output


spack.compilers.xl_r module


spack.container package

Package that provides functions and classes to generate container recipes from a Spack environment
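
This package backs the spack containerize command; a typical invocation, run from a directory containing a spack.yaml environment manifest, is:

$ spack containerize > Dockerfile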

Returns a recipe that conforms to the configuration passed as input.
  • configuration (dict) -- how to generate the current recipe
  • last_phase (str) -- last phase to be printed or None to print them all



Validate a Spack environment YAML file that is being used to generate a recipe for a container.

Since a few attributes of the configuration must have specific values for the container recipe, this function returns a sanitized copy of the configuration in the input file. If any modification is needed, a warning will be issued.

configuration_file (str) -- path to the Spack environment YAML file
A sanitized copy of the configuration stored in the input file


Subpackages

spack.container.writers package

Writers for different kind of recipes and related convenience functions.

Bases: Context

Generic context used to instantiate templates of recipes that install software in a common location and make it available directly via PATH.

Information related to the build image.

Information related to the build image.




The spack.yaml file that should be used in the image

Whether or not to update the OS package manager cache.

Additional system packages that are needed at build-time.

Additional system packages that are needed at run-time.

Important paths in the image


Information related to the run image.

Whether or not to strip binaries in the image



Returns a writer that conforms to the configuration passed as input.
  • configuration (dict) -- how to generate the current recipe
  • last_phase (str) -- last phase to be printed or None to print them all



Returns a recipe that conforms to the configuration passed as input.
  • configuration (dict) -- how to generate the current recipe
  • last_phase (str) -- last phase to be printed or None to print them all



Decorator to register a factory for a recipe writer.

Each factory should take a configuration dictionary and return a properly configured writer that, when called, prints the corresponding recipe.


Submodules

spack.container.writers.docker module


spack.container.writers.singularity module


Submodules

spack.container.images module

Manages the details on the images used in the various stages.

Return a list of all the OS that can be used to bootstrap Spack


Returns the name of the build image and its tag.
  • image (str) -- image to be used at run-time. Should be of the form <image_name>:<image_tag> e.g. "ubuntu:18.04"
  • spack_version (str) -- version of Spack that we want to use to build

A tuple with (image_name, image_tag) for the build image


Return the checkout command to be used in the bootstrap phase.
  • url (str) -- url of the Spack repository
  • ref (str) -- either a branch name, a tag or a commit sha
  • enforce_sha (bool) -- if true, turn every branch or tag into the corresponding commit sha
  • verify (bool) -- if true, verify the commit



Returns the commands used to update system repositories, install system packages and clean afterwards.
package_manager (str) -- package manager to be used
A tuple of (update, install, clean) commands.


Returns a dictionary with the static data on the images.

The dictionary is read from a JSON file lazily the first time this function is called.


Returns the name of the OS package manager for the image passed as argument.
image (str) -- image to be used at run-time. Should be of the form <image_name>:<image_tag> e.g. "ubuntu:18.04"
Name of the package manager, e.g. "apt" or "yum"


spack.detection package


Return the list of packages that have been detected on the system, keyed by unqualified package name.
  • packages_to_search -- list of packages to be detected. Each package can be either unqualified or fully qualified
  • path_hints -- initial list of paths to be searched
  • max_workers -- maximum number of workers to search for packages in parallel



Returns a list of test runners for a given package.

Currently, detection tests are specified in a YAML file, called detection_test.yaml, alongside the package.py file.

This function reads that file to create a bunch of Runner objects.

  • pkg_name -- name of the package to test
  • repository -- repository where the package lives



Given a directory where an executable is found, guess the prefix (i.e. the "root" directory of that installation) and return it.
executable_dir -- directory where an executable is found


Get the paths of all executables available from the current PATH.

For convenience, this is constructed as a dictionary where the keys are the executable paths and the values are the names of the executables (i.e. the basename of the executable path).

There may be multiple paths with the same basename. In this case it is assumed there are two different instances of the executable.

path_hints -- list of paths to be searched. If None, the list will be constructed based on the PATH environment variable.


Add the packages passed as arguments to packages.yaml
  • detected_packages -- list of DetectedPackage objects to be added
  • scope -- configuration scope where to add the detected packages
  • buildable -- whether the detected packages are buildable or not
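
The functions in this package combine into a short detection workflow. A hedged sketch, assuming both functions are importable from spack.detection and that the detection result is keyed by unqualified package name as described above:

import spack.detection

# Search the system for external cmake installations...
detected = spack.detection.by_path(["cmake"])

# ...and record them in packages.yaml (scope and buildable are illustrative).
spack.detection.update_configuration(detected, scope="site", buildable=True)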



Submodules

spack.detection.common module

Define a common data structure to represent external packages and a function to update packages.yaml given a list of detected packages.

Ideally, each detection method should be placed in a specific subpackage and implement at least a function that returns a list of DetectedPackage objects. The update in packages.yaml can then be done using the function provided here.

The module also contains other functions that might be useful across different detection mechanisms.


Bases: object
Return all MSVC compiler bundled packages

Semi hard-coded search path for cmake bundled with MSVC

Semi hard-coded search heuristic for locating ninja bundled with MSVC

Helper for Windows compiler installation root discovery

At the moment this simply returns the location of VS install paths from VSWhere, but it should be extended to include more information as relevant




Given a package, attempts to compute its Windows program files location, and returns the list of best guesses.
pkg -- package for which Program Files location is to be computed


Given a package, attempt to compute its user-scoped install location and return a list of potential locations based on common heuristics. For more info on Windows user specific installs see: https://learn.microsoft.com/en-us/dotnet/api/system.environment.specialfolder?view=netframework-4.8

Given a directory where an executable is found, guess the prefix (i.e. the "root" directory of that installation) and return it.
executable_dir -- directory where an executable is found


Not all programs on Windows live on the PATH. Return a list of other potential install locations.

Return True if the path passed as argument is that of an executable

Given a directory where a library is found, guess the prefix (i.e. the "root" directory of that installation) and return it.
library_dir -- directory where a library is found


Return dictionary[fullpath]: basename from list of paths

Add the packages passed as arguments to packages.yaml
  • detected_packages -- list of DetectedPackage objects to be added
  • scope -- configuration scope where to add the detected packages
  • buildable -- whether the detected packages are buildable or not



spack.detection.path module

Detection of software installed in the system, based on path inspections and running executables.

Timeout used for package detection (seconds)

Bases: Finder
Returns a list of candidate files found on the system.
  • patterns -- search patterns to be used for matching files
  • paths -- paths where to search for files




Given a path where a file was found, returns the corresponding prefix.
path -- path of a detected file


Returns the list of patterns used to match candidate files.
pkg -- package being detected



Bases: object

Inspects the file-system looking for packages. Guesses places where to look using PATH.

Returns a list of candidate files found on the system.
  • patterns -- search patterns to be used for matching files
  • paths -- paths where to search for files




Given a list of files matching the search patterns, returns a list of detected specs.
  • pkg -- package being detected
  • paths -- files matching the package search patterns



For a given package, returns a list of detected specs.
  • pkg_name -- package being detected
  • initial_guess -- initial list of paths to search from the caller. If None, default paths are searched. If this is an empty list, nothing will be searched.



Given a path where a file was found, returns the corresponding prefix.
path -- path of a detected file


Returns the list of patterns used to match candidate files.
pkg -- package being detected



Bases: Finder

Finds libraries on the system, searching by LD_LIBRARY_PATH, LIBRARY_PATH, DYLD_LIBRARY_PATH, DYLD_FALLBACK_LIBRARY_PATH, and standard system library paths

Returns a list of candidate files found on the system.
  • patterns -- search patterns to be used for matching files
  • paths -- paths where to search for files



Given a path where a file was found, returns the corresponding prefix.
path -- path of a detected file


Returns the list of patterns used to match candidate files.
pkg -- package being detected



Accept an ELF file if the header matches the given compat triplet, obtained with get_elf_compat(). In case it's not an ELF file (e.g. a static library, or some arbitrary file), fall back to is_readable_file.

Return the list of packages that have been detected on the system, keyed by unqualified package name.
  • packages_to_search -- list of packages to be detected. Each package can be either unqualified or fully qualified
  • path_hints -- initial list of paths to be searched
  • max_workers -- maximum number of workers to search for packages in parallel



Get the paths for common package installation locations on Windows that are outside the PATH. Returns [] on Unix.

Get the paths of all executables available from the current PATH.

For convenience, this is constructed as a dictionary where the keys are the executable paths and the values are the names of the executables (i.e. the basename of the executable path).

There may be multiple paths with the same basename. In this case it is assumed there are two different instances of the executable.

path_hints -- list of paths to be searched. If None, the list will be constructed based on the PATH environment variable.



For ELF files, get a triplet (EI_CLASS, EI_DATA, e_machine) and see if it is host-compatible.

Get the paths of all libraries available from path_hints or the following defaults:
  • Environment variables (Linux: LD_LIBRARY_PATH, Darwin: DYLD_LIBRARY_PATH, and DYLD_FALLBACK_LIBRARY_PATH)
  • Dynamic linker default paths (glibc: ld.so.conf, musl: ld-musl-<arch>.path)
  • Default system library paths.

For convenience, this is constructed as a dictionary where the keys are the library paths and the values are the names of the libraries (i.e. the basename of the library path).

There may be multiple paths with the same basename. In this case it is assumed there are two different instances of the library.

  • path_hints (list) -- list of paths to be searched. If None, the list will be constructed based on the LD_LIBRARY_PATH, LIBRARY_PATH, DYLD_LIBRARY_PATH, and DYLD_FALLBACK_LIBRARY_PATH environment variables as well as the standard system library paths.



Get the paths of all libraries available from the directories on the system PATH.

For more details, see libraries_in_ld_and_system_library_path regarding return type and contents.

path_hints -- list of paths to be searched. If None, the list will be constructed based on the PATH environment variable as well as the standard system library paths.
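
A hedged sketch of the return shape shared by the search helpers above (a dictionary mapping full paths to basenames):

from spack.detection.path import executables_in_path

exes = executables_in_path(path_hints=["/usr/bin", "/usr/local/bin"])
# e.g. {"/usr/bin/gcc": "gcc", "/usr/bin/g++": "g++", ...}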


spack.detection.test module

Create and run mock e2e tests for package detection.

Bases: NamedTuple

Data structure to construct detection tests by PATH inspection.

Packages may have a YAML file containing the description of one or more detection tests to be performed. Each test creates a few mock executable scripts in a temporary folder, and checks that detection by PATH gives the expected results.

Alias for field number 1

Alias for field number 0

Alias for field number 2


Bases: NamedTuple

Data structure to model assertions on detection tests

Spec to be detected


Bases: NamedTuple

Mock executables to be used in detection tests

Relative paths for mock executables to be created

Shell script for the mock executable


Bases: object

Runs an external detection test

execute() -> List[Spec]
Executes a test and returns the specs that have been detected.

This function sets up a test in a temporary directory, according to the prescriptions in the test layout, then performs a detection by executables and returns the specs that have been detected.




Returns a list of test runners for a given package.

Currently, detection tests are specified in a YAML file, called detection_test.yaml, alongside the package.py file.

This function reads that file to create a bunch of Runner objects.

  • pkg_name -- name of the package to test
  • repository -- repository where the package lives
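
A hedged sketch of driving these runners programmatically; the repository handle spack.repo.PATH is an assumption (it has been spelled differently across Spack versions), as is the package name:

import spack.repo
from spack.detection.test import detection_tests

# One Runner per test described in the package's detection_test.yaml;
# execute() returns the specs detected in the mock layout.
for runner in detection_tests("llvm", spack.repo.PATH):
    specs = runner.execute()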



Returns the normalized content of the detection_tests.yaml associated with the package passed in input.

The content is merged with that of any package that is transitively included using the "includes" attribute.

  • pkg_name -- name of the package to test
  • repository -- repository in which to search for packages



spack.environment package

This package implements Spack environments.

spack.lock format

Spack environments have existed since Spack v0.12.0, and there have been 4 different spack.lock formats since then. The formats are documented here.

The high-level format of a Spack lockfile hasn't changed much between versions, but the contents have. Lockfiles are JSON-formatted and their top-level sections are:

1.
_meta (object): this contains details about the file format, including:
  • file-type: always "spack-lockfile"
  • lockfile-version: an integer representing the lockfile format version
  • specfile-version: an integer representing the spec format version (since v0.17)


2.
spack (object): optional, this identifies information about Spack used to concretize the environment:
  • type: required, identifies the form the Spack version took (e.g., git, release)
  • commit: the commit, if the version is from git
  • version: the Spack version

3.
roots (list): the roots of the environment. Each has two fields:
  • hash: a Spack spec hash uniquely identifying the concrete root spec
  • spec: a string representation of the abstract spec that was concretized

4.
concrete_specs: a dictionary containing the specs in the environment.



Compatibility

New versions of Spack can (so far) read all old lockfile formats -- they are backward-compatible. Old versions cannot read new lockfile formats, and you'll need to upgrade Spack to use them.

Lockfile version compatibility across Spack versions

Spack version    v1    v2    v3    v4
v0.12:0.14       ✓
v0.15:0.16       ✓     ✓
v0.17            ✓     ✓     ✓
v0.18:           ✓     ✓     ✓     ✓

Version 1

When lockfiles were first created, there was only one hash in Spack: the DAG hash. This DAG hash (we'll call it the old DAG hash) did not include build dependencies -- it only included transitive link and run dependencies.

The spec format at this time was keyed by name. Each spec started with a key for its name, whose value was a dictionary of other spec attributes. The lockfile put these name-keyed specs into dictionaries keyed by their DAG hash, and the spec records did not actually have a "hash" field in the lockfile -- you have to associate the hash from the key with the spec record after the fact.

Dependencies in original lockfiles were keyed by "hash", i.e. the old DAG hash.

{
  "_meta": {
    "file-type": "spack-lockfile",
    "lockfile-version": 1
  },
  "roots": [
    {
      "hash": "<old_dag_hash 1>",
      "spec": "<abstract spec 1>"
    },
    {
      "hash": "<old_dag_hash 2>",
      "spec": "<abstract spec 2>"
    }
  ],
  "concrete_specs": {
    "<old_dag_hash 1>": {
      "... <spec dict attributes> ...": {},
      "dependencies": {
        "depname_1": {
          "hash": "<old_dag_hash for depname_1>",
          "type": ["build", "link"]
        },
        "depname_2": {
          "hash": "<old_dag_hash for depname_2>",
          "type": ["build", "link"]
        }
      },
      "hash": "<old_dag_hash 1>"
    },
    "<old_dag_hash 2>": {
      "... <spec dict attributes> ...": {},
      "dependencies": {
        "depname_3": {
          "hash": "<old_dag_hash for depname_3>",
          "type": ["build", "link"]
        },
        "depname_4": {
          "hash": "<old_dag_hash for depname_4>",
          "type": ["build", "link"]
        }
      },
      "hash": "<old_dag_hash 2>"
    }
  }
}


Version 2

Version 2 changes one thing: specs in the lockfile are now keyed by build_hash instead of the old dag_hash. Specs have a hash attribute with their real DAG hash, so you can't go by the dictionary key anymore to identify a spec -- you have to read it in and look at "hash". Dependencies are still keyed by old DAG hash.

Even though we key lockfiles by build_hash, specs in Spack were still deployed with the old, coarser DAG hash. This means that in v2 and v3 lockfiles (which are keyed by build hash), there may be multiple versions of the same spec with different build dependencies, which means they will have different build hashes but the same DAG hash. Spack would only have been able to actually install one of these.

{
  "_meta": {
    "file-type": "spack-lockfile",
    "lockfile-version": 2
  },
  "roots": [
    {
      "hash": "<build_hash 1>",
      "spec": "<abstract spec 1>"
    },
    {
      "hash": "<build_hash 2>",
      "spec": "<abstract spec 2>"
    }
  ],
  "concrete_specs": {
    "<build_hash 1>": {
      "... <spec dict attributes> ...": {},
      "dependencies": {
        "depname_1": {
          "hash": "<old_dag_hash for depname_1>",
          "type": ["build", "link"]
        },
        "depname_2": {
          "hash": "<old_dag_hash for depname_2>",
          "type": ["build", "link"]
        }
      },
      "hash": "<old_dag_hash 1>"
    },
    "<build_hash 2>": {
      "... <spec dict attributes> ...": {},
      "dependencies": {
        "depname_3": {
          "hash": "<old_dag_hash for depname_3>",
          "type": ["build", "link"]
        },
        "depname_4": {
          "hash": "<old_dag_hash for depname_4>",
          "type": ["build", "link"]
        }
      },
      "hash": "<old_dag_hash 2>"
    }
  }
}


Version 3

Version 3 doesn't change the top-level lockfile format, but this was when we changed the specfile format. Specs in concrete_specs are now keyed by the build hash, with no inner dictionary keyed by their package name. The package name is in a name field inside each spec dictionary. The dependencies field in the specs is a list instead of a dictionary, and each element of the list is a record with the name, dependency types, and hash of the dependency. Instead of a key called hash, dependencies are keyed by build_hash. Each spec still has a hash attribute.

Version 3 adds the specfile-version field to _meta and uses the new JSON spec format.

{
  "_meta": {
    "file-type": "spack-lockfile",
    "lockfile-version": 3,
    "specfile-version": 2
  },
  "roots": [
    {
      "hash": "<build_hash 1>",
      "spec": "<abstract spec 1>"
    },
    {
      "hash": "<build_hash 2>",
      "spec": "<abstract spec 2>"
    }
  ],
  "concrete_specs": {
    "<build_hash 1>": {
      "... <spec dict attributes> ...": {},
      "dependencies": [
        {
          "name": "depname_1",
          "build_hash": "<build_hash for depname_1>",
          "type": ["build", "link"]
        },
        {
          "name": "depname_2",
          "build_hash": "<build_hash for depname_2>",
          "type": ["build", "link"]
        }
      ],
      "hash": "<old_dag_hash 1>"
    },
    "<build_hash 2>": {
      "... <spec dict attributes> ...": {},
      "dependencies": [
        {
          "name": "depname_3",
          "build_hash": "<build_hash for depname_3>",
          "type": ["build", "link"]
        },
        {
          "name": "depname_4",
          "build_hash": "<build_hash for depname_4>",
          "type": ["build", "link"]
        }
      ],
      "hash": "<old_dag_hash 2>"
    }
  }
}


Version 4

Version 4 removes build hashes and is keyed by the new DAG hash (hash). The hash now includes build dependencies and a canonical hash of the package.py file. Dependencies are keyed by hash (DAG hash) as well. There are no more build_hash fields in the specs, and there are no more issues with lockfiles being able to store multiple specs with the same DAG hash (because the DAG hash is now finer-grained). An optional spack property may be included to track version information, such as the commit or version.

{
  "_meta": {
    "file-type": "spack-lockfile",
    "lockfile-version": 4,
    "specfile-version": 3
  },
  "roots": [
    {
      "hash": "<dag_hash 1>",
      "spec": "<abstract spec 1>"
    },
    {
      "hash": "<dag_hash 2>",
      "spec": "<abstract spec 2>"
    }
  ],
  "concrete_specs": {
    "<dag_hash 1>": {
      "... <spec dict attributes> ...": {},
      "dependencies": [
        {
          "name": "depname_1",
          "hash": "<dag_hash for depname_1>",
          "type": ["build", "link"]
        },
        {
          "name": "depname_2",
          "hash": "<dag_hash for depname_2>",
          "type": ["build", "link"]
        }
      ],
      "hash": "<dag_hash 1>"
    },
    "<dag_hash 2>": {
      "... <spec dict attributes> ...": {},
      "dependencies": [
        {
          "name": "depname_3",
          "hash": "<dag_hash for depname_3>",
          "type": ["build", "link"]
        },
        {
          "name": "depname_4",
          "hash": "<dag_hash for depname_4>",
          "type": ["build", "link"]
        }
      ],
      "hash": "<dag_hash 2>"
    }
  }
}
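
Since the v4 layout above is plain JSON, a few lines of Python suffice to walk it. This sketch relies only on the keys documented in this section:

import json

with open("spack.lock") as f:
    lock = json.load(f)

assert lock["_meta"]["lockfile-version"] == 4
for root in lock["roots"]:
    # root["spec"] is the abstract spec; the full record lives under the hash
    concrete = lock["concrete_specs"][root["hash"]]
    print(root["spec"], "->", root["hash"])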


Bases: object

A Spack environment, which bundles together configuration and a list of specs.

True if this environment is currently active.

Add a single user_spec (non-concretized) to the Environment
True if the spec was added, False if it was already present and did not need to be added

Return type
(bool)


Collect the environment modifications to activate an environment using the provided view. Removes duplicate paths.
  • env_mod -- the environment modifications object that is modified.
  • view -- the name of the view to activate.



Specs that are not yet installed.

Yields the user spec for non-concretized specs, and the concrete spec for already concretized but not yet installed specs.


Return hashes of all specs.

Returns all concretized specs in the environment satisfying any of the input specs

all_specs() -> List[Spec]
Returns a list of all concrete specs

Returns a generator for all concrete specs

Find the spec identified by match_spec and change it to change_spec.
  • change_spec -- defines the spec properties that need to be changed. This will not change attributes of the matched spec unless they conflict with change_spec.
  • list_name -- identifies the spec list in the environment that should be modified
  • match_spec -- if set, this identifies the spec that should be changed. If not set, it is assumed we are looking for a spec with the same name as change_spec.



Checks if the environment's default view can be activated.

Clear the contents of the environment
re_read (bool) -- If True, do not clear the new_specs or new_installs values. These values cannot be read from YAML, and need to be maintained when re-reading an existing environment.


Same as concretized_specs, except it returns the list of concrete roots without associated user spec

Concretize user_specs in this environment.

Only concretizes specs that haven't been concretized yet unless force is True.

This only modifies the environment in memory. write() will write out a lockfile containing concretized specs.

  • force (bool) -- re-concretize ALL specs, even those that were already concretized
  • tests (bool or list or set) -- False to run no tests, True to test all packages, or a list of package names to run tests for some

List of specs that have been concretized. Each entry is a tuple of the user spec and the corresponding concretized spec.


Concretize and add a single spec to the environment.

Concretize the provided user_spec and add it along with the concretized result to the environment. If the given user_spec was already present in the environment, this does not add a duplicate. The concretized spec will be added unless the user_spec was already present and an associated concrete spec was already present.

concrete_spec -- if provided, then it is assumed that it is the result of concretizing the provided user_spec


Roots associated with the last concretization, in order

Tuples of (user spec, concrete spec) for all concrete specs.

User specs from the last concretization

A list of all configuration scopes for this environment.

Directory for any staged configuration file(s).

Remove specified spec from environment concretization
  • spec -- Spec to deconcretize. This must be a root of the environment
  • concrete -- If True, find all instances of spec as concrete in the environment. If False, find a single instance of the abstract spec as a root of the environment.




Deletes the default view associated with this environment.

Remove this environment from Spack entirely.

Dev-build specs from "spack.yaml"

Add dev-build info for package
  • spec -- Set constraints on development specs. Must include a concrete version.
  • path -- Path to find code for developer builds. Relative paths will be resolved relative to the environment.
  • clone -- Clone the package code to the path. If clone is False Spack will assume the code is already present at path.

True iff the environment was changed.
Return type
(bool)


Ensure that the root directory of the environment exists
dot_env -- if True also ensures that the <root>/.env directory exists


Get the configuration scope for the environment's manifest file.

Name of the config scope of this environment's manifest file.

Path to directory where the env stores repos, logs, views.


Returns the single spec from the environment which matches the provided hash. Raises an AssertionError if no specs match or if more than one spec matches.


List of included configuration scopes from the environment.

Scopes are listed in the YAML file in order from highest to lowest precedence, so configuration from earlier scopes will take precedence over later ones.

This routine returns them in the order they should be pushed onto the internal scope stack (so, in reverse, from lowest to highest).


Install all concretized specs in an environment.

Note: this does not regenerate the views for the environment; that needs to be done separately with a call to write().

install_args (dict) -- keyword install arguments



Whether this environment is managed by Spack.


Returns true when the spec is built from local sources

Path to spack.lock file in this environment.


Path to spack.yaml file in this environment.

Emits a warning if the manifest file is not up-to-date.

Given a spec (likely not concretized), find a matching concretized spec in the environment.

The matching spec does not have to be installed in the environment, but must be concrete (specs added with spack add without an intervening spack concretize will not be matched).

If there is a single root spec that matches the provided spec or a single dependency spec that matches the provided spec, then the concretized instance of that spec will be returned.

If multiple root specs match the provided spec, or no root specs match and multiple dependency specs match, then this raises an error and reports all matching specs.


Human-readable representation of the environment.

This is the path for directory environments, and just the name for managed environments.



Remove specs from an environment that match a query_spec

Tuples of (user spec, concrete spec) for all specs that will be removed on next concretize.



Collect the environment modifications to deactivate an environment using the provided view. Reverses the action of add_view_to_env.
  • env_mod -- the environment modifications object that is modified.
  • view -- the name of the view to deactivate.



Specs explicitly requested by the user in this environment.

Yields both added and installed specs that have user specs in spack.yaml.


Specs from "spack.yaml"

Concretized specs by hash

Remove develop info for abstract spec spec.

Returns True on success, False if no entry existed.



Return root specs that are not installed, or that are installed but are development specs themselves or have development specs among their dependencies.

Updates the path of the default view.

If the argument passed as input is False, the default view is deleted if present. The manifest will have an entry "view: false".

If the argument passed as input is True, a default view is created if not already present. The manifest will have an entry "view: true". If a default view is already declared, it will be left untouched.

If the argument passed as input is a path, a default view pointing to that path is created if not already present. If a default view is already declared, only its "root" will be changed.

path_or_bool -- either True, False, or a path


Updates the repository associated with the environment.


Iterate over spec lists updating references.



Writes an in-memory environment to its location on disk.

Write out package files for each newly concretized spec. Also regenerate any views associated with the environment and run post-write hooks, if regenerate is True.

regenerate -- regenerate views and run post-write hooks as well as writing if True.


Get a write lock context manager for use in a with block.
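
A hedged end-to-end sketch of the methods above, assuming Spack's Python modules are importable (e.g. from a Spack checkout) and using a hypothetical environment name:

import spack.environment as ev

env = ev.create("demo")          # managed environment named "demo"
with env.write_transaction():
    env.add("zlib")              # add an abstract user spec
    env.concretize()             # concretize in memory
    env.write()                  # write spack.lock and regenerate views
env.install_all()                # install all concretized specs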


Bases: SpackEnvironmentError

Class for Spack environment-specific configuration errors.


Bases: SpackError

Superclass for all errors to do with Spack environments.


Bases: SpackEnvironmentError

Class for errors regarding view generation.


Activate an environment.

To activate an environment, we add its configuration scope to the existing Spack configuration, and we set active to the current environment.

  • env (Environment) -- the environment to activate
  • use_env_repo (bool) -- use the packages exactly as they appear in the environment's repository



True if the named environment is active.

Returns the active environment when there is any

List the names of environments that currently exist.

Generator for all managed Environments.

Create a managed environment in Spack and returns it.

A managed environment is created in a root directory managed by this Spack instance, so that Spack can keep track of them.

Files with suffix .json or .lock are considered lockfiles. Files with any other name are considered manifest files.

  • name -- name of the managed environment
  • init_file -- either a lockfile, a manifest file, or None
  • with_view -- whether a view should be maintained for the environment. If the value is a string, it specifies the path to the view
  • keep_relative -- if True, develop paths are copied verbatim into the new environment file, otherwise they are made absolute



Create an environment in the directory passed as input and returns it.

Files with suffix .json or .lock are considered lockfiles. Files with any other name are considered manifest files.

  • manifest_dir -- directory where to create the environment.
  • init_file -- either a lockfile, a manifest file, or None
  • with_view -- whether a view should be maintained for the environment. If the value is a string, it specifies the path to the view
  • keep_relative -- if True, develop paths are copied verbatim into the new environment file, otherwise they are made absolute



Undo any configuration or repo settings modified by activate().

default spack.yaml file to put in new environments

Displays the list of specs returned by Environment.concretize().
concretized_specs (list) -- list of specs returned by Environment.concretize()


Returns the directory associated with a named environment.
  • name -- name of the environment
  • exists_ok -- if False, raise an error if the environment exists already

SpackEnvironmentError -- if exists_ok is False and the environment exists already


Whether an environment with this name exists or not.

Initialize an environment directory starting from an envfile.

Files with suffix .json or .lock are considered lockfiles. Files with any other name are considered manifest files.

  • environment_dir -- directory where the environment should be placed
  • envfile -- manifest file or lockfile used to initialize the environment

SpackEnvironmentError -- if the directory can't be initialized


Returns the specs of packages installed in the active environment or None if no packages are installed.

Whether a directory contains a spack environment.

Return False if the manifest file exists and is not in the latest schema format.
manifest (str) -- manifest file to be analyzed


Return the absolute path to a manifest file given the environment name or directory.
env_name_or_dir (str) -- either the name of a valid environment or a directory where a manifest file resides
AssertionError -- if the environment is not found


Deactivate the active environment for the duration of the context. Has no effect when there is no active environment.

Get an environment with the supplied name.

Get the root directory for an environment by name.

Update a manifest file from an old format to the current one.
  • manifest (str) -- path to a manifest file
  • backup_file (str) -- file where to copy the original manifest

True if the manifest was updated, False otherwise.
AssertionError -- in case anything goes wrong during the update


Submodules

spack.environment.depfile module

This module contains the traversal logic and models that can be used to generate depfiles from an environment.

Bases: object

Contains a spec, a subset of its dependencies, and a flag indicating whether it should be buildcache only/never/auto.


Bases: object

This visitor produces an adjacency list of a (reduced) DAG, which is used to generate depfile targets with their prerequisites. Currently it only drops build deps when using buildcache only mode.

Note that the DAG could be reduced even more by dropping build edges of specs installed at the moment the depfile is generated, but that would produce stateful depfiles that would not fail when the database is wiped later.


Produce a list of specs to follow from node


Bases: object

This class produces all data to render a makefile for specs of an environment.


Produces a MakefileModel from an environment and a list of specs.
  • env -- the environment to use
  • filter_specs -- if provided, only these specs will be built from the environment, otherwise the environment roots are used.
  • pkg_buildcache -- whether to only use the buildcache for top-level specs.
  • dep_buildcache -- whether to only use the buildcache for non-top-level specs.
  • make_prefix -- the prefix for the makefile targets
  • jobserver -- when enabled, make will invoke Spack with jobserver support. For dry-run this should be disabled.





Bases: object

Limited interface to spec to help generate targets etc. without introducing unwanted special characters.







spack.environment.environment module

Bases: object

A Spack environment, which bundles together configuration and a list of specs.

True if this environment is currently active.

Add a single user_spec (non-concretized) to the Environment
True if the spec was added, False if it was already present and did not need to be added

Return type
(bool)


Collect the environment modifications to activate an environment using the provided view. Removes duplicate paths.
  • env_mod -- the environment modifications object that is modified.
  • view -- the name of the view to activate.



Specs that are not yet installed.

Yields the user spec for non-concretized specs, and the concrete spec for already concretized but not yet installed specs.


Return hashes of all specs.

Returns all concretized specs in the environment satisfying any of the input specs

all_specs() -> List[Spec]
Returns a list of all concrete specs

Returns a generator for all concrete specs

Find the spec identified by match_spec and change it to change_spec.
  • change_spec -- defines the spec properties that need to be changed. This will not change attributes of the matched spec unless they conflict with change_spec.
  • list_name -- identifies the spec list in the environment that should be modified
  • match_spec -- if set, this identifies the spec that should be changed. If not set, it is assumed we are looking for a spec with the same name as change_spec.



Checks if the environment's default view can be activated.

Clear the contents of the environment
re_read (bool) -- If True, do not clear the new_specs or new_installs values. These values cannot be read from YAML, and need to be maintained when re-reading an existing environment.


Same as concretized_specs, except it returns the list of concrete roots without associated user spec

Concretize user_specs in this environment.

Only concretizes specs that haven't been concretized yet unless force is True.

This only modifies the environment in memory. write() will write out a lockfile containing concretized specs.

  • force (bool) -- re-concretize ALL specs, even those that were already concretized
  • tests (bool or list or set) -- False to run no tests, True to test all packages, or a list of package names to run tests for some

List of specs that have been concretized. Each entry is a tuple of the user spec and the corresponding concretized spec.


Concretize and add a single spec to the environment.

Concretize the provided user_spec and add it along with the concretized result to the environment. If the given user_spec was already present in the environment, this does not add a duplicate. The concretized spec will be added unless the user_spec was already present and an associated concrete spec was already present.

concrete_spec -- if provided, then it is assumed that it is the result of concretizing the provided user_spec


Roots associated with the last concretization, in order

Tuples of (user spec, concrete spec) for all concrete specs.

User specs from the last concretization

A list of all configuration scopes for this environment.

Directory for any staged configuration file(s).

Remove specified spec from environment concretization
  • spec -- Spec to deconcretize. This must be a root of the environment
  • concrete -- If True, find all instances of spec as concrete in the environment. If False, find a single instance of the abstract spec as a root of the environment.




Deletes the default view associated with this environment.

Remove this environment from Spack entirely.

Dev-build specs from "spack.yaml"

Add dev-build info for package
  • spec -- Set constraints on development specs. Must include a concrete version.
  • path -- Path to find code for developer builds. Relative paths will be resolved relative to the environment.
  • clone -- Clone the package code to the path. If clone is False Spack will assume the code is already present at path.

True iff the environment was changed.
Return type
(bool)


Ensure that the root directory of the environment exists
dot_env -- if True also ensures that the <root>/.env directory exists


Get the configuration scope for the environment's manifest file.

Name of the config scope of this environment's manifest file.

Path to directory where the env stores repos, logs, views.


Returns the single spec from the environment which matches the provided hash. Raises an AssertionError if no specs match or if more than one spec matches.


List of included configuration scopes from the environment.

Scopes are listed in the YAML file in order from highest to lowest precedence, so configuration from earlier scopes will take precedence over later ones.

This routine returns them in the order they should be pushed onto the internal scope stack (so, in reverse, from lowest to highest).


Install all concretized specs in an environment.

Note: this does not regenerate the views for the environment; that needs to be done separately with a call to write().

install_args (dict) -- keyword install arguments



Whether this environment is managed by Spack.


Returns true when the spec is built from local sources

Path to spack.lock file in this environment.


Path to spack.yaml file in this environment.

Emits a warning if the manifest file is not up-to-date.

Given a spec (likely not concretized), find a matching concretized spec in the environment.

The matching spec does not have to be installed in the environment, but must be concrete (specs added with spack add without an intervening spack concretize will not be matched).

If there is a single root spec that matches the provided spec or a single dependency spec that matches the provided spec, then the concretized instance of that spec will be returned.

If multiple root specs match the provided spec, or no root specs match and multiple dependency specs match, then this raises an error and reports all matching specs.


Human-readable representation of the environment.

This is the path for directory environments, and just the name for managed environments.





Remove specs from an environment that match a query_spec

Tuples of (user spec, concrete spec) for all specs that will be removed on next concretize.



Collect the environment modifications to deactivate an environment using the provided view. Reverses the action of add_view_to_env.
  • env_mod -- the environment modifications object that is modified.
  • view -- the name of the view to deactivate.



Specs explicitly requested by the user in this environment.

Yields both added and installed specs that have user specs in spack.yaml.


Specs from "spack.yaml"

Concretized specs by hash

Remove develop info for abstract spec spec.

Returns True on success, False if no entry existed.



Return root specs that are not installed, or that are installed but are development specs themselves or have development specs among their dependencies.

Updates the path of the default view.

If the argument passed as input is False, the default view is deleted if present. The manifest will have an entry "view: false".

If the argument passed as input is True, a default view is created if not already present. The manifest will have an entry "view: true". If a default view is already declared, it will be left untouched.

If the argument passed as input is a path, a default view pointing to that path is created if not already present. If a default view is already declared, only its "root" will be changed.

path_or_bool -- either True, False, or a path


Updates the repository associated with the environment.


Iterate over spec lists updating references.




Writes an in-memory environment to its location on disk.

Write out package files for each newly concretized spec. Also regenerate any views associated with the environment and run post-write hooks, if regenerate is True.

regenerate -- regenerate views and run post-write hooks as well as writing if True.


Get a write lock context manager for use in a with block.


Bases: Mapping

Manages the in-memory representation of a manifest file, and its synchronization with the actual manifest on disk.

Normalizes the dev paths in the environment with respect to the directory where the initialization file resides.
init_file_dir -- directory with the "spack.yaml" used to initialize the environment.


Appends a user spec to the first active definition matching the name passed as argument.
  • user_spec -- user spec to be appended
  • list_name -- name of the definition where to append

SpackEnvironmentError -- if no valid definition exists already


Adds a develop spec to the manifest file
  • pkg_name -- name of the package to be developed
  • entry -- spec and path of the developed package



Appends the user spec passed as input to the list of root specs.
user_spec -- user spec to be appended


Return the dictionaries in the YAML, without the top level attribute

flush() -> None
Synchronizes the object with the manifest file on disk.

Returns an environment manifest file compatible with the lockfile already present in the environment directory.

This function also writes a spack.yaml file that is consistent with the spack.lock already existing in the directory.

manifest_dir -- directory where the lockfile is


Overrides a user spec from an active definition that matches the name passed as argument.
  • user_spec -- user spec to be overridden
  • override -- new spec to be used
  • list_name -- name of the definition where to override the spec

SpackEnvironmentError -- if the user spec cannot be overridden


Overrides the user spec at index idx with the one passed as input.
  • user_spec -- new user spec
  • idx -- index of the spec to be overridden

SpackEnvironmentError -- when the user spec cannot be overridden


Return the dictionaries in the pristine YAML, without the top level attribute

Pristine YAML content, without defaults being added

Removes the default view from the manifest file

Removes a user spec from an active definition that matches the name passed as argument.
  • user_spec -- user spec to be removed
  • list_name -- name of the definition where to remove the spec from

SpackEnvironmentError -- if the user spec cannot be removed from the list,
or the list does not exist


Removes a develop spec from the manifest file
pkg_name -- package to be removed from development
SpackEnvironmentError -- if there is nothing to remove


Removes the user spec passed as input from the list of root specs
user_spec -- user spec to be removed
SpackEnvironmentError -- when the user spec is not in the list


Sets the default view root in the manifest to the value passed as input.
view -- If the value is a string or a path, it specifies the path to the view. If True, the default view is used for the environment; if False, there's no view.


YAML content with defaults added by Spack, if they're missing


Bases: SpackEnvironmentError

Class for Spack environment-specific configuration errors.


Bases: SpackError

Superclass for all errors to do with Spack environments.


Bases: SpackEnvironmentError

Class for errors regarding view generation.


Bases: object



Get projection for spec relative to view root

Getting the projection from the underlying root will get the temporary projection. This gives the permanent projection relative to the root symlink.




From the list of concretized user specs in the environment, flatten the DAGs, filter selected, installed specs, and remove duplicates by DAG hash.



Generate the FilesystemView object for this ViewDescriptor

By default, this method returns a FilesystemView object rooted at the current underlying root of this ViewDescriptor (self._current_root)

Raise if new is None and there is no current view

new (str or None) -- If a string, create a FilesystemView rooted at that path. Default None. This should only be used to regenerate the view, and cannot be used to access specs.



Activate an environment.

To activate an environment, we add its configuration scope to the existing Spack configuration, and we set active to the current environment.

  • env (Environment) -- the environment to activate
  • use_env_repo (bool) -- use the packages exactly as they appear in the environment's repository



True if the named environment is active.

Returns the active environment when there is any

List the names of environments that currently exist.

Generator for all managed Environments.


Create a managed environment in Spack and returns it.

A managed environment is created in a root directory managed by this Spack instance, so that Spack can keep track of them.

Files with suffix .json or .lock are considered lockfiles. Files with any other name are considered manifest files.

  • name -- name of the managed environment
  • init_file -- either a lockfile, a manifest file, or None
  • with_view -- whether a view should be maintained for the environment. If the value is a string, it specifies the path to the view
  • keep_relative -- if True, develop paths are copied verbatim into the new environment file, otherwise they are made absolute



Create an environment in the directory passed as input and returns it.

Files with suffix .json or .lock are considered lockfiles. Files with any other name are considered manifest files.

  • manifest_dir -- directory where to create the environment.
  • init_file -- either a lockfile, a manifest file, or None
  • with_view -- whether a view should be maintained for the environment. If the value is a string, it specifies the path to the view
  • keep_relative -- if True, develop paths are copied verbatim into the new environment file, otherwise they are made absolute



Undo any configuration or repo settings modified by activate().

Remove any scopes from env from the global config path.

default path where environments are stored in the spack tree

default spack.yaml file to put in new environments

Displays the list of specs returned by Environment.concretize().
concretized_specs (list) -- list of specs returned by Environment.concretize()



Override default root path if the user specified it

Name of the directory where environments store repos, logs, views

Returns the directory associated with a named environment.
  • name -- name of the environment
  • exists_ok -- if False, raise an error if the environment exists already

SpackEnvironmentError -- if exists_ok is False and the environment exists already


Whether an environment with this name exists or not.

Initialize an environment directory starting from an envfile.

Files with suffix .json or .lock are considered lockfiles. Files with any other name are considered manifest files.

  • environment_dir -- directory where the environment should be placed
  • envfile -- manifest file or lockfile used to initialize the environment

SpackEnvironmentError -- if the directory can't be initialized


Returns the specs of packages installed in the active environment or None if no packages are installed.

Whether a directory contains a spack environment.

Return False if the manifest file exists and is not in the latest schema format.
manifest (str) -- manifest file to be analyzed


version of the lockfile format. Must increase monotonically.

Name of the input yaml file for an environment

Make a RepoPath from the repo subdirectories in an environment.

Return the absolute path to a manifest file given the environment name or directory.
env_name_or_dir (str) -- either the name of a valid environment or a directory where a manifest file resides
AssertionError -- if the environment is not found


Name of the input yaml file for an environment

Deactivate the active environment for the duration of the context. Has no effect when there is no active environment.

Add env's scope to the global configuration search path.

Get an environment with the supplied name.

Get the root directory for an environment by name.

environment variable used to indicate the active environment

environment variable used to indicate the active environment view

Update a manifest file from an old format to the current one.
  • manifest (str) -- path to a manifest file
  • backup_file (str) -- file where to copy the original manifest

True if the manifest was updated, False otherwise.
AssertionError -- in case anything goes wrong during the update





Returns whether two spack yaml items are equivalent, including overrides

spack.environment.shell module

Activate an environment and append environment modifications

To activate an environment, we add its configuration scope to the existing Spack configuration, and we set active to the current environment.

  • env -- the environment to activate
  • use_env_repo -- use the packages exactly as they appear in the environment's repository
  • view -- generate commands to add runtime environment variables for named view

Environment variable modifications to activate the environment.
Return type
spack.util.environment.EnvironmentModifications



Deactivate an environment and collect corresponding environment modifications.

Note: unloads the environment in its current state, not in the state it was loaded in, meaning that specs that were removed from the spack environment after activation are not unloaded.

Environment variable modifications to deactivate the environment.
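
A hedged sketch tying this module together: read a previously created environment, collect the activation modifications, and apply them to the current process (apply_modifications is assumed from spack.util.environment.EnvironmentModifications):

import spack.environment as ev
import spack.environment.shell as ev_shell

env = ev.read("demo")          # hypothetical managed environment
mods = ev_shell.activate(env)  # collect activation modifications
mods.apply_modifications()     # apply them to this process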



spack.hooks package

This package contains modules with hooks for various stages in the Spack install process. You can add modules here and they'll be executed for each package at various times during the package lifecycle.

Each hook is just a function that takes a package as a parameter. Hooks are not executed in any particular order.

Currently the following hooks are supported:

  • pre_install(spec)
  • post_install(spec, explicit)
  • pre_uninstall(spec)
  • post_uninstall(spec)
  • on_install_start(spec)
  • on_install_success(spec)
  • on_install_failure(spec)
  • on_phase_success(pkg, phase_name, log_file)
  • on_phase_error(pkg, phase_name, log_file)
  • post_env_write(env)



This can be used to implement support for things like module systems (e.g. modules, lmod, etc.) or to add other custom features.
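
For example, a hook module is just a file defining functions with the names listed above. A minimal sketch that logs installs and failures (llnl.util.tty is Spack's logging helper):

import llnl.util.tty as tty

def post_install(spec, explicit):
    # Called after a package is installed; 'explicit' says whether the
    # user requested it directly.
    tty.msg(f"post-install: {spec.name} (explicit={explicit})")

def on_install_failure(spec):
    tty.warn(f"install failed: {spec.name}")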

Submodules

spack.hooks.absolutify_elf_sonames module

Bases: BaseDirectoryVisitor

Visitor that collects all shared libraries in a prefix, with the exception of an exclude list.

Return True from this function to recurse into the directory at os.path.join(root, rel_path). Return False in order not to recurse further.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current directory from root
  • depth (int) -- depth of current directory from the root directory

True when the directory should be recursed into. False when not
Return type
bool


Return True to recurse into the symlinked directory and False in order not to. Note: rel_path is the path to the symlink itself. Following symlinked directories blindly can cause infinite recursion due to cycles.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current symlink from root
  • depth (int) -- depth of current symlink from the root directory

True when the directory should be recursed into. False when not
Return type
bool


Get the libraries that should be patched, with the excluded libraries removed.

Handle the non-symlink file at os.path.join(root, rel_path)
  • root (str) -- root directory
  • rel_path (str) -- relative path to current file from root
  • depth (int) -- depth of current file from the root directory



Handle the symlink to a file at os.path.join(root, rel_path). Note: rel_path is the location of the symlink itself, not what it points to. The symlink may be dangling.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current symlink from root
  • depth (int) -- depth of current symlink from the root directory





Return true if filepath is most likely a shared library. Our definition of a shared library for ELF requires: 1. a dynamic section, 2. a soname OR lack of an interpreter. The problem is that PIE objects (default on Ubuntu) are ET_DYN too, and not all shared libraries have a soname... no interpreter is typically the best indicator then.

Set the soname to the file's own path for a list of given shared libraries.


spack.hooks.drop_redundant_rpaths module

Bases: BaseDirectoryVisitor

Visitor that collects all elf files that have an rpath

Return True from this function to recurse into the directory at os.path.join(root, rel_path). Return False in order not to recurse further.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current directory from root
  • depth (int) -- depth of current directory from the root directory

True when the directory should be recursed into. False when not
Return type
bool


Return True to recurse into the symlinked directory and False in order not to. Note: rel_path is the path to the symlink itself. Following symlinked directories blindly can cause infinite recursion due to cycles.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current symlink from root
  • depth (int) -- depth of current symlink from the root directory

True when the directory should be recursed into. False when not
Return type
bool


Handle the non-symlink file at os.path.join(root, rel_path)
  • root (str) -- root directory
  • rel_path (str) -- relative path to current file from root
  • depth (int) -- depth of current file from the root directory



Handle the symlink to a file at os.path.join(root, rel_path). Note: rel_path is the location of the symlink itself, not what it points to. The symlink may be dangling.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current symlink from root
  • depth (int) -- depth of current symlink from the root directory




Drop redundant entries from rpath.
path -- Path to a potential ELF file to patch.
A tuple of the old and new rpath if the rpath was patched, None otherwise.



Return True iff path starts with $ (typically for $ORIGIN/${ORIGIN}) or is absolute and exists.

spack.hooks.licensing module

This hook symlinks local licenses to the global license for licensed software.

This hook handles global license setup for licensed software.

Prompt the user, letting them know that a license is required.

For packages that rely on license files, a global license file is created and opened for editing.

For packages that rely on environment variables to point to a license, a warning message is printed.

For all other packages, documentation on how to set up a license is printed.


Create local symlinks that point to the global license file.

Writes an empty license file.

Comments give suggestions on alternative methods of installing a license.


spack.hooks.module_file_generation module




spack.hooks.permissions_setters module


spack.hooks.sbang module

Bases: SpackError

Raised when the install tree root is too long for sbang to work.


Adds a second shebang line, using sbang, at the beginning of a file, if necessary. Note: Spack imposes a relaxed shebang line limit, meaning that a newline or end of file must occur before spack_shebang_limit bytes. If not, the file is not patched.



Ensure that sbang is installed in the root of Spack's install_tree.

This is the shortest known publicly accessible path, and installing sbang here ensures that users can access the script and that sbang itself is in a short path.


This hook edits scripts so that they call /bin/bash $spack_prefix/bin/sbang instead of something longer than the shebang limit.

Location sbang should be installed within Spack's install_tree.

Full shebang line that should be prepended to files to use sbang.

The line returned does not have a final newline (caller should add it if needed).

This should be the only place in Spack that knows about what interpreter we use for sbang.


Spack itself also limits the shebang line to at most 4KB, which should be plenty.

spack.hooks.write_install_manifest module


spack.modules package

This package contains code for creating environment modules, which can include Tcl non-hierarchical modules, Lua hierarchical modules, and others.



Disable the generation of modulefiles within the context manager.

Submodules

spack.modules.common module

Here we consolidate the logic for creating an abstract description of the information that module systems need.

This information maps a single spec to:

  • a unique module filename
  • the module file content



and is divided among four classes:

  • a configuration class that provides a convenient interface to query details about the configuration for the spec under consideration.
  • a layout class that provides the information associated with module file names and directories
  • a context class that provides the dictionary used by the template engine to generate the module file
  • a writer that collects and uses the information above to either write or remove the module file



Each of the four classes needs to be sub-classed when implementing a new module type.

Bases: object

Manipulates the information needed to generate a module file to make querying easier. It needs to be sub-classed for specific module types.

Conflicts for this module file



Returns the specs configured as defaults or [].

List of environment modifications that should be done in the module.

List of variables that should be left unmodified.

Returns True if the module has been excluded, False otherwise.

Hash tag for the module or None

Returns True if the module has been hidden, False otherwise.

List of literal modules to be loaded.

Projection from specs to module names

List of specs that should be loaded in the module file.

List of specs that should be prerequisite of the module file.

List of suffixes that should be appended to the module file name.

Returns the name of the template to use for the module file or None if not specified in the configuration.

Returns True if the module file needs to be verbose, False otherwise


Bases: Context

Provides the base context needed for template rendering.

This class needs to be sub-classed for specific module types. The following attributes need to be implemented:

fields

List of modules that needs to be loaded automatically.



List of conflicts for the module file.


List of environment modifications to be processed.

True if MANPATH environment variable is modified.


Returns True if environment modification entry needs to be formatted.




Verbosity level.


Bases: object

Provides information on the layout of module files. Needs to be sub-classed for specific module types.

Root folder for module files of this type.

This needs to be redefined

Name of the module file for the current spec.

Spec under consideration

Returns the 'use' name of the module, i.e. the name you have to type in the console to use it. This implementation fits the needs of most non-hierarchical layouts.


Bases: object



Deletes the module file.



Update the modulerc file corresponding to the module, adding or removing the command that hides the module depending on its hidden state.
remove (bool) -- if True, hiddenness information for the module is removed from modulerc.


Writes the module file.
overwrite (bool) -- if True it is fine to overwrite an already existing file. If False the operation is skipped and we print a warning to the user.



Bases: AttributeError, ModulesError

Raised if the attribute 'default_template' has not been specified in the derived classes.


Bases: AttributeError, ModulesError

Raised if the attribute 'hide_cmd_format' has not been specified in the derived classes.


Bases: tuple
Alias for field number 0

Alias for field number 1


Bases: ModulesError

Raised when a module cannot be found for a spec


Bases: AttributeError, ModulesError

Raised if the attribute 'modulerc_header' has not been specified in the derived classes.



Bases: ModulesError, RuntimeError

Raised if the template for a module file was not found.


Bases: object

This is responsible for taking the individual module indices of all upstream Spack installations and locating the module for a given spec based on which upstream install it is located in.




Returns the list of dependent specs for a given spec, according to the request passed as parameter.
  • spec -- spec to be analyzed
  • request -- either 'none', 'direct' or 'all'

list of dependencies

The return list will be empty if request is 'none', will contain the direct dependencies if request is 'direct', or the entire DAG if request is 'all'.



Disable the generation of modulefiles within the context manager.
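A minimal usage sketch; the attribute name spack.modules.disable_modules is an assumption about where the context manager is exposed:

import spack.modules

with spack.modules.disable_modules():
    # module files are not generated for work performed here
    ...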


Retrieve the module file for a given spec and module type.

Retrieve the module file for the given spec if it is available. If the module is not available, this will raise an exception unless the module is excluded or the spec is installed upstream.

  • module_type -- the type of module we want to retrieve (e.g. lmod)
  • spec -- refers to the installed package that we want to retrieve a module for
  • required -- if the module is required but excluded, this function will print a debug message. If a module is missing but not excluded, then an exception is raised (regardless of whether it is required)
  • get_full_path -- if True, this returns the full path to the module. Otherwise, this returns the module name.
  • module_set_name -- the named module configuration set from modules.yaml for which to retrieve the module.

The module name or path. May return None if the module is not available.


Parses the module-specific part of a configuration and returns a dictionary containing the actions to be performed on the spec passed as an argument.
  • configuration -- module-specific configuration (e.g. entries under the top-level 'tcl' key)
  • spec -- spec for which we need to generate a module file

actions to be taken on the spec passed as an argument
Return type: dict




Returns the root folder for module file installation.
  • name -- name of the module system to be used (e.g. 'tcl')
  • module_set_name -- name of the set of module configs to use

root folder for module file installation


Updates a dictionary, but extends lists instead of overriding them.
  • target -- dictionary to be updated
  • update -- update to be applied
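The merge semantics can be sketched as follows; this is an illustration of the described behavior, not Spack's actual implementation:

def update_extending_lists(target: dict, update: dict) -> None:
    # list values are extended in place; everything else is overridden
    for key, value in update.items():
        if key in target and isinstance(target[key], list) and isinstance(value, list):
            target[key].extend(value)
        else:
            target[key] = value

target = {"cflags": ["-O2"], "verbose": False}
update_extending_lists(target, {"cflags": ["-g"], "verbose": True})
# target is now {"cflags": ["-O2", "-g"], "verbose": True}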



spack.modules.lmod module

Bases: SpackError, KeyError

Error raised if the key 'core_compilers' has not been specified in the configuration file.


Bases: BaseConfiguration

Configuration class for lmod module files.

Returns a dictionary of the services that are currently available.

Returns the list of "Core" compilers
CoreCompilersNotFoundError -- if the key was not
specified in the configuration file or the sequence
is empty


Returns the list of "Core" specs


Returns the dict of specs with modified hierarchies

Returns True if the module has been hidden, False otherwise.

Returns the list of tokens that are part of the modulefile hierarchy. 'compiler' is always present.

Returns the list of tokens that are not available.

Returns a dictionary mapping all the services provided by this spec to the spec itself.

Returns a dictionary mapping all the requirements of this spec to the actual provider. 'compiler' is always present among the requirements.
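For reference, a modules.yaml fragment that drives an lmod hierarchy might look like the following (the compiler version and hierarchy entries are illustrative):

modules:
  default:
    lmod:
      core_compilers:
      - gcc@12.3.0
      hierarchy:
      - mpi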


Bases: BaseContext

Context class for lmod module files.

Returns the list of paths that are unlocked conditionally. Each item in the list is a tuple with the structure (condition, path).


True if this module modifies MODULEPATH conditionally to the presence of other services in the environment, False otherwise.

True if this module modifies MODULEPATH, False otherwise.

Returns a list of missing services.

Name of this provider.

Returns the dictionary of provided services.

Returns the list of paths that are unlocked unconditionally.

Version of this provider.


Bases: BaseFileLayout

File layout for lmod module files.

Returns the root folder for THIS architecture

List of path parts that are currently available. Needed to construct the file name.

file extension of lua module files

Returns the filename for the current module file

Returns the modulerc file associated with current module file

Transforms a hierarchy token into the corresponding path part.
  • name (str) -- name of the service in the hierarchy
  • value -- actual provider of the service

part of the path associated with the service
Return type: str


Returns a dictionary mapping conditions to a list of unlocked paths.

The paths that are unconditionally unlocked are under the key 'None'. The other keys represent the list of services you need loaded to unlock the corresponding paths.




Bases: SpackError, TypeError

Error raised if non-virtual specs are used as hierarchy tokens in the lmod section of 'modules.yaml'.



Guesses the list of core compilers installed in the system.
store (bool) -- if True writes the core compilers to the modules.yaml configuration file
List of found core compilers





spack.modules.tcl module

This module implements the classes necessary to generate Tcl non-hierarchical modules.

Bases: BaseConfiguration

Configuration class for tcl module files.



Bases: BaseFileLayout

File layout for tcl module files.

Returns the modulerc file associated with current module file







spack.oci package

Submodules

spack.oci.image module

Bases: object

Represents a digest in the format <algorithm>:<digest>. Currently only supports sha256 digests.










Return a valid, default image tag for a spec.

Validate that the reference is of the format sha256:<checksum> Return the checksum if valid, raise ValueError otherwise.

Check if a tag is likely a Spec
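A rough sketch of the sha256 validation described above (an illustration, not the actual implementation):

import re

def validate_digest(ref: str) -> str:
    # expect "<algorithm>:<checksum>"; only sha256 is supported
    algorithm, _, checksum = ref.partition(":")
    if algorithm != "sha256" or not re.fullmatch(r"[0-9a-f]{64}", checksum):
        raise ValueError(f"expected sha256:<checksum>, got {ref!r}")
    return checksum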

spack.oci.oci module

Bases: NamedTuple
Alias for field number 0

Alias for field number 2

Alias for field number 1




Copy image layers from src to dst for given architecture.
  • src -- The source image reference.
  • dst -- The destination image reference.
  • architecture -- The architecture (when referencing an index)

Tuple of manifest and config of the base image.


Same as copy_missing_layers, but with retry wrapper


Recursively fetch manifest and config for a given image reference with a given architecture.
  • ref -- The image reference.
  • architecture -- The architecture (when referencing an index)
  • recurse -- How many levels of index to recurse into.

A tuple of (manifest, config)


Same as get_manifest_and_config, but with retry wrapper

Given an OCI-based mirror, extract the URL and image name from it




Uploads a blob to an OCI registry

We only do monolithic uploads, even though it's very simple to do chunked. Observed problems with chunked uploads: (1) it's slow, due to many sequential requests; (2) some registries set an unknown max chunk size, and the spec doesn't say how to obtain it.

  • ref -- The image reference.
  • file -- The file to upload.
  • digest -- The digest of the file.
  • force -- Whether to force upload the blob, even if it already exists.
  • small_file_size -- For files at most this size, attempt to do a single POST request instead of POST + PUT. Some registries do not support single requests, and others do not specify what size they support in a single POST. For now this feature is disabled by default (0 KB).

True if the blob was uploaded, False if it already existed.


Same as upload_blob, but with retry wrapper

Uploads a manifest/index to a registry
  • ref -- The image reference.
  • oci_manifest -- The OCI manifest or index.
  • tag -- When true, use the tag; otherwise use the digest. This is relevant for multi-arch images, where the tag is an index referencing the manifests by digest.

The digest and size of the uploaded manifest.


Same as upload_manifest, but with retry wrapper

Add a query parameter to a URL
  • url -- The URL to add the parameter to.
  • param -- The parameter name.
  • value -- The parameter value.

The URL with the parameter added.
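A minimal sketch of such a query-parameter helper using only the standard library (an illustration, not Spack's code):

from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_query_param(url: str, param: str, value: str) -> str:
    # re-encode the query string with the extra parameter appended
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append((param, value))
    return urlunparse(parts._replace(query=urlencode(query)))

# with_query_param("https://registry.example.com/v2/token", "service", "registry")
# -> "https://registry.example.com/v2/token?service=registry"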


spack.oci.opener module

All the logic for OCI fetching and authentication



Bases: NamedTuple
Alias for field number 0

Alias for field number 2

Alias for field number 1




Bases: NamedTuple
Alias for field number 1

Alias for field number 0


Create an opener that can handle OCI authentication.



Raise an error if the response status is not the expected one.


Very basic parsing of www-authenticate headers (RFC 7235 section 4.1). Notice: this omits token68 support.
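For reference, a typical challenge such a parser has to handle looks like this (hostnames are illustrative):

WWW-Authenticate: Bearer realm="https://auth.example.com/token",service="registry.example.com",scope="repository:myimage:pull"

The parser extracts the scheme (Bearer) and the key="value" parameters; token68-style credentials are not handled.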


Opener that automatically uses OCI authentication based on mirror config

spack.operating_systems package

Bases: LinuxDistro

Compute Node Linux (CNL) is the operating system used for the Cray XC series supercomputers. It is a very stripped-down version of GNU/Linux. Any compilers found through this operating system will be used with modules. If updated, the user must make sure that the version and name are updated to indicate that the OS has been upgraded (or downgraded).





Bases: LinuxDistro

Represents OS that runs on login and service nodes of the Cray platform. It acts as a regular Linux without Cray-specific modules and compiler wrappers.

Calls the default function but unloads Cray's programming environments first.

This prevents the detection of Cray compiler wrappers and avoids possible false detections.



Bases: OperatingSystem

This class will represent the autodetected operating system for a Linux System. Since there are many different flavors of Linux, this class will attempt to encompass them all through autodetection using the python module platform and the method platform.dist()


Bases: OperatingSystem

This class represents the macOS operating system. This will be auto detected using the python platform.mac_ver. The macOS platform will be represented using the major version operating system name, e.g. el capitan, yosemite, etc.


Bases: object

Base class for all the Operating Systems.

On a multiple architecture machine, the architecture spec field can be set to build a package against any target and operating system that is present on the platform. On Cray platforms or any other architecture that has different front and back end environments, the operating system will determine the method of compiler detection.

There are two different types of compiler detection:

1. Through the $PATH env variable (front-end detection)
2. Through the module system (back-end detection)



Depending on which operating system is specified, the compiler will be detected using one of those methods.

For platforms such as linux and darwin, the operating system is autodetected.



Bases: OperatingSystem

This class represents the Windows operating system. This will be auto detected using the python platform.win32_ver() once we have a python setup that runs natively. The Windows platform will be represented using the major version operating system number, e.g. 10.





Submodules

spack.operating_systems.cray_backend module

Bases: LinuxDistro

Compute Node Linux (CNL) is the operating system used for the Cray XC series supercomputers. It is a very stripped-down version of GNU/Linux. Any compilers found through this operating system will be used with modules. If updated, the user must make sure that the version and name are updated to indicate that the OS has been upgraded (or downgraded).





Read the CLE release file and return a dict with its attributes.

This file is present on newer versions of Cray.

The release file looks something like this:

RELEASE=6.0.UP07
BUILD=6.0.7424
...


The dictionary we produce looks like this:

{
    "RELEASE": "6.0.UP07",
    "BUILD": "6.0.7424",
    ...
}

dictionary of release attributes
Return type: dict
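A sketch of the parsing step (illustrative; the real function also locates and reads the file):

def parse_cle_release(text: str) -> dict:
    # turn KEY=VALUE lines into a dictionary; other lines are skipped
    attrs = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip():
            attrs[key.strip()] = value.strip()
    return attrs

# parse_cle_release("RELEASE=6.0.UP07\nBUILD=6.0.7424")
# -> {"RELEASE": "6.0.UP07", "BUILD": "6.0.7424"}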


Read the CLE release file and return the Cray OS version.

This file is present on older versions of Cray.

The release file looks something like this:

5.2.UP04


the Cray OS version
Return type: str


spack.operating_systems.cray_frontend module

Bases: LinuxDistro

Represents OS that runs on login and service nodes of the Cray platform. It acts as a regular Linux without Cray-specific modules and compiler wrappers.

Calls the default function but unloads Cray's programming environments first.

This prevents the detection of Cray compiler wrappers and avoids possible false detections.



Context manager that unloads Cray Programming Environments.

spack.operating_systems.linux_distro module

Bases: OperatingSystem

This class will represent the autodetected operating system for a Linux System. Since there are many different flavors of Linux, this class will attempt to encompass them all through autodetection using the python module platform and the method platform.dist()


Return the kernel version as a Version object. Note that the kernel version is distinct from OS and/or distribution versions. For instance:

>>> distro.id()
'centos'
>>> distro.version()
'7'
>>> platform.release()
'5.10.84+'

spack.operating_systems.mac_os module

Bases: OperatingSystem

This class represents the macOS operating system. This will be auto detected using the python platform.mac_ver. The macOS platform will be represented using the major version operating system name, e.g. el capitan, yosemite, etc.


Find the last installed version of the CommandLineTools.

The CLT version might only affect the build if it's selected as the macOS SDK path.


Return path to the active macOS SDK.

Return the version of the active macOS SDK.

The SDK version usually corresponds to the installed Xcode version and can affect whether some packages (especially those that use the GUI) build or fail. This information should somehow be embedded into the future "compilers are dependencies" feature.

The macOS deployment target cannot be greater than the SDK version, but usually it can be at least a few versions less.


Get the current macOS version as a version object.

This has three mechanisms for determining the macOS version, which is used for spack identification (the os in the spec's arch) and indirectly for setting the value of MACOSX_DEPLOYMENT_TARGET, which affects the minos value of the LC_BUILD_VERSION macho header. Mixing minos values can lead to lots of linker warnings, and using a consistent version (pinned to the major OS version) allows distribution across clients that might be slightly behind.

The version determination is made with three mechanisms in decreasing priority:

1. The MACOSX_DEPLOYMENT_TARGET variable overrides the actual operating system version, just like the value can be used to build for older macOS targets on newer systems. Spack currently will truncate this value when building packages, but at least the major version will be the same.
2. The system sw_vers command reports the actual operating system version.
3. The Python platform.mac_ver function is a fallback if the operating system identification fails, because some Python versions and/or installations report the OS on which Python was built rather than the one on which it is running.
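The priority order above can be sketched as follows (a simplification of the actual logic, which also normalizes the result into a Version object):

import os
import platform
import subprocess

def detect_macos_version() -> str:
    # 1. an explicit deployment target takes priority
    target = os.environ.get("MACOSX_DEPLOYMENT_TARGET")
    if target:
        return target
    # 2. ask the operating system directly
    try:
        result = subprocess.run(
            ["sw_vers", "-productVersion"], check=True, capture_output=True, text=True
        )
        return result.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        # 3. fall back to Python's view of the OS version
        return platform.mac_ver()[0]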


spack.operating_systems.windows_os module

Bases: OperatingSystem

This class represents the Windows operating system. This will be auto detected using the python platform.win32_ver() once we have a python setup that runs natively. The Windows platform will be represented using the major version operating system number, e.g. 10.





Windows version as a Version object

spack.platforms package

Bases: Platform

Detect whether this system requires CrayPE module support.

Systems with newer CrayPE (21.10 for EX systems, future work for CS and XC systems) have compilers and MPI wrappers that can be used directly by path. These systems are considered linux platforms.

For systems running an older CrayPE, we detect the Cray platform based on the availability through module of the Cray programming environment. If this environment is available, we can use it to find compilers, target modules, etc. If the Cray programming environment is not available via modules, then we will treat it as a standard linux system, as the Cray compiler wrappers and other components of the Cray programming environment are irrelevant without module support.



Change the linker default to dynamic, to be more similar to linux/standard linker behavior.


Bases: Platform
binary formats used on this platform; used by relocation logic

Return True if the host platform is detected to be the current Platform class, False otherwise.

Derived classes are responsible for implementing this method.



Specify deployment target based on target OS version.

The MACOSX_DEPLOYMENT_TARGET environment variable provides a default -mmacosx-version-min argument for GCC and Clang compilers, as well as the default value of CMAKE_OSX_DEPLOYMENT_TARGET for CMake-based build systems. The default value for the deployment target is usually the major version (11, 10.16, ...) for CMake and Clang, but some versions of GCC specify a minor component as well (11.3), leading to numerous link warnings about inconsistent or incompatible target versions. Setting the environment variable ensures consistent versions for an install toolchain target, even when the host macOS version changes.

TODO: it may be necessary to add SYSTEM_VERSION_COMPAT for older versions of the macosx developer tools; see https://github.com/spack/spack/pull/26290 for discussion.



Bases: Platform
Return True if the host platform is detected to be the current Platform class, False otherwise.

Derived classes are responsible for implementing this method.




Bases: object

Platform is an abstract class extended by subclasses.

To add a new type of platform (such as cray_xe), create a subclass and set all the class attributes such as priority, front_target, back_target, front_os, back_os.

Platform also contains a priority class attribute. A lower number signifies higher priority. These numbers are arbitrarily set and can be changed, though often there isn't much need unless a new platform is added and the user wants it to be detected first.

Targets are created inside the platform subclasses. Most architectures (like linux and darwin) will have only one target family (x86_64), but in the case of Cray machines, there are both frontend and backend processors. The user can specify which targets are present on the front-end and back-end architecture.

Depending on the platform, operating systems are either autodetected or are set. The user can set the frontend and backend operating systems via the class attributes front_os and back_os. The operating system will be responsible for compiler detection.

Add the operating_system class object into the platform.operating_sys dictionary.

Used by the platform-specific subclass to list available targets. Raises an error if the platform specifies a name that is reserved by spack as an alias.



binary formats used on this platform; used by relocation logic



Return True if the host platform is detected to be the current Platform class, False otherwise.

Derived classes are responsible for implementing this method.








A subclass can override this method if it requires any platform-specific build environment modifications.

This is a getter method for the target dictionary that handles defaulting based on the values provided by default, front-end, and back-end. This can be overridden by a subclass that wants to provide further aliasing options.


Bases: Platform




Return True if the host platform is detected to be the current Platform class, False otherwise.

Derived classes are responsible for implementing this method.






Bases: Platform
Return True if the host platform is detected to be the current Platform class, False otherwise.

Derived classes are responsible for implementing this method.




Return a platform object that corresponds to the given name or None if there is no match.
name (str) -- name of the platform


The current platform used by Spack. May be swapped by the use_platform context manager.

Context manager that prevents the detection of the Cray platform

The result of the host search is memoized. In case it needs to be recomputed we must clear the cache, which is what this function does.

Submodules

spack.platforms.cray module

Bases: Platform

Detect whether this system requires CrayPE module support.

Systems with newer CrayPE (21.10 for EX systems, future work for CS and XC systems) have compilers and MPI wrappers that can be used directly by path. These systems are considered linux platforms.

For systems running an older CrayPE, we detect the Cray platform based on the availability through module of the Cray programming environment. If this environment is available, we can use it to find compilers, target modules, etc. If the Cray programming environment is not available via modules, then we will treat it as a standard linux system, as the Cray compiler wrappers and other components of the Cray programming environment are irrelevant without module support.



Change the linker default to dynamic, to be more similar to linux/standard linker behavior.



spack.platforms.darwin module

Bases: Platform
binary formats used on this platform; used by relocation logic

Return True if the host platform is detected to be the current Platform class, False otherwise.

Derived classes are responsible for implementing this method.



Specify deployment target based on target OS version.

The MACOSX_DEPLOYMENT_TARGET environment variable provides a default -mmacosx-version-min argument for GCC and Clang compilers, as well as the default value of CMAKE_OSX_DEPLOYMENT_TARGET for CMake-based build systems. The default value for the deployment target is usually the major version (11, 10.16, ...) for CMake and Clang, but some versions of GCC specify a minor component as well (11.3), leading to numerous link warnings about inconsistent or incompatible target versions. Setting the environment variable ensures consistent versions for an install toolchain target, even when the host macOS version changes.

TODO: it may be necessary to add SYSTEM_VERSION_COMPAT for older versions of the macosx developer tools; see https://github.com/spack/spack/pull/26290 for discussion.



spack.platforms.linux module

Bases: Platform
Return True if the host platform is detected to be the current Platform class, False otherwise.

Derived classes are responsible for implementing this method.




spack.platforms.test module

Bases: Platform




Return True if the host platform is detected to be the current Platform class, False otherwise.

Derived classes are responsible for implementing this method.






spack.platforms.windows module

Bases: Platform
Return True if the host platform is detected to be the current Platform class, False otherwise.

Derived classes are responsible for implementing this method.




spack.reporters package

Bases: Reporter

Generate reports of spec installations for CDash.

To use this reporter, pass the --cdash-upload-url argument to spack install:

spack install --cdash-upload-url=\
    https://mydomain.com/cdash/submit.php?project=Spack <spec>


In this example, results will be uploaded to the Spack project on the CDash instance hosted at https://mydomain.com/cdash.




Extract stand-alone test outputs for the package.




Generate and upload the test report(s) for the package.

Set to False if any error occurs when building the CDash report

Generate reports for each package in each spec.


Explicitly report a spec as being skipped (e.g., in CI).

Examples include a failed installation or a package known to have broken tests.

  • report_dir -- directory where the report is to be written
  • spec -- spec being tested
  • reason -- optional reason the test is being skipped





Bases: tuple
Alias for field number 2

Alias for field number 4

Alias for field number 1

Alias for field number 3

Alias for field number 5

Alias for field number 0




Submodules

spack.reporters.base module


spack.reporters.cdash module

Bases: Reporter

Generate reports of spec installations for CDash.

To use this reporter, pass the --cdash-upload-url argument to spack install:

spack install --cdash-upload-url=\
    https://mydomain.com/cdash/submit.php?project=Spack <spec>


In this example, results will be uploaded to the Spack project on the CDash instance hosted at https://mydomain.com/cdash.





Extract stand-alone test outputs for the package.




Generate and upload the test report(s) for the package.

Set to False if any error occurs when building the CDash report

Generate reports for each package in each spec.


Explicitly report a spec as being skipped (e.g., in CI).

Examples include a failed installation or a package known to have broken tests.

  • report_dir -- directory where the report is to be written
  • spec -- spec being tested
  • reason -- optional reason the test is being skipped





Bases: tuple
Alias for field number 2

Alias for field number 4

Alias for field number 1

Alias for field number 3

Alias for field number 5

Alias for field number 0



spack.reporters.extract module











spack.reporters.junit module


spack.schema package

This module contains jsonschema files for all of Spack's YAML formats.

Submodules

spack.schema.bootstrap module

Schema for bootstrap.yaml configuration file.

schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack bootstrap configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": {
        "bootstrap": {
            "type": "object",
            "properties": {
                "enable": {"type": "boolean"},
                "root": {"type": "string"},
                "sources": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "additionalProperties": False,
                        "required": ["name", "metadata"],
                        "properties": {
                            "name": {"type": "string"},
                            "metadata": {"type": "string"},
                        },
                    },
                },
                "trusted": {
                    "type": "object",
                    "patternProperties": {r"\w[\w-]*": {"type": "boolean"}},
                },
            },
        }
    },
}

Full schema with metadata
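A bootstrap.yaml that validates against this schema might look like the following (the root path and source entries are illustrative):

bootstrap:
  enable: true
  root: ~/.spack/bootstrap
  sources:
  - name: github-actions
    metadata: $spack/share/spack/bootstrap/github-actions-v0.4
  trusted:
    github-actions: true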

spack.schema.buildcache_spec module

Schema for a buildcache spec.yaml file

import spack.schema.spec

schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack buildcache specfile schema",
    "type": "object",
    "additionalProperties": False,
    "properties": {
        # `buildinfo` is no longer needed as of Spack 0.21
        "buildinfo": {"type": "object"},
        "spec": {
            "type": "object",
            "additionalProperties": True,
            "items": spack.schema.spec.properties,
        },
        "binary_cache_checksum": {
            "type": "object",
            "properties": {"hash_algorithm": {"type": "string"}, "hash": {"type": "string"}},
        },
        "buildcache_layout_version": {"type": "number"},
    },
}


spack.schema.cdash module

Schema for cdash.yaml configuration file.

#: Properties for inclusion in other schemas
properties = {
    "cdash": {
        "type": "object",
        "additionalProperties": False,
        # "required": ["build-group", "url", "project", "site"],
        "required": ["build-group"],
        "patternProperties": {
            r"build-group": {"type": "string"},
            r"url": {"type": "string"},
            r"project": {"type": "string"},
            r"site": {"type": "string"},
        },
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack cdash configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}



Full schema with metadata (the rendered value duplicates the schema definition above).
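A cdash.yaml accepted by this schema could look like this (only build-group is required; the other values are illustrative):

cdash:
  build-group: Experimental
  url: https://mydomain.com/cdash
  project: Spack
  site: my-cluster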

spack.schema.ci module

Schema for gitlab-ci.yaml configuration file.

from llnl.util.lang import union_dicts

import spack.schema.gitlab_ci

# Schema for script fields: list of lists and/or strings.
# This is similar to what is allowed in the gitlab schema.
script_schema = {
    "type": "array",
    "items": {"anyOf": [{"type": "string"}, {"type": "array", "items": {"type": "string"}}]},
}

# Schema for CI image
image_schema = {
    "oneOf": [
        {"type": "string"},
        {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "entrypoint": {"type": "array", "items": {"type": "string"}},
            },
        },
    ]
}

# Additional attributes are allowed and will be forwarded directly
# to the CI target YAML for each job.
attributes_schema = {
    "type": "object",
    "additionalProperties": True,
    "properties": {
        "image": image_schema,
        "tags": {"type": "array", "items": {"type": "string"}},
        "variables": {
            "type": "object",
            "patternProperties": {r"[\w\d\-_\.]+": {"type": "string"}},
        },
        "before_script": script_schema,
        "script": script_schema,
        "after_script": script_schema,
    },
}

submapping_schema = {
    "type": "object",
    "additionalProperties": False,
    "required": ["submapping"],
    "properties": {
        "match_behavior": {"type": "string", "enum": ["first", "merge"], "default": "first"},
        "submapping": {
            "type": "array",
            "items": {
                "type": "object",
                "additionalProperties": False,
                "required": ["match"],
                "properties": {
                    "match": {"type": "array", "items": {"type": "string"}},
                    "build-job": attributes_schema,
                    "build-job-remove": attributes_schema,
                },
            },
        },
    },
}

named_attributes_schema = {
    "oneOf": [
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {"noop-job": attributes_schema, "noop-job-remove": attributes_schema},
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {"build-job": attributes_schema, "build-job-remove": attributes_schema},
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {"copy-job": attributes_schema, "copy-job-remove": attributes_schema},
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {
                "reindex-job": attributes_schema,
                "reindex-job-remove": attributes_schema,
            },
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {
                "signing-job": attributes_schema,
                "signing-job-remove": attributes_schema,
            },
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {
                "cleanup-job": attributes_schema,
                "cleanup-job-remove": attributes_schema,
            },
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {"any-job": attributes_schema, "any-job-remove": attributes_schema},
        },
    ]
}

pipeline_gen_schema = {
    "type": "array",
    "items": {"oneOf": [submapping_schema, named_attributes_schema]},
}

core_shared_properties = union_dicts(
    {
        "pipeline-gen": pipeline_gen_schema,
        "rebuild-index": {"type": "boolean"},
        "broken-specs-url": {"type": "string"},
        "broken-tests-packages": {"type": "array", "items": {"type": "string"}},
        "target": {"type": "string", "enum": ["gitlab"], "default": "gitlab"},
    }
)

# TODO: Remove in Spack 0.23
ci_properties = {
    "anyOf": [
        {
            "type": "object",
            "additionalProperties": False,
            # "required": ["mappings"],
            "properties": union_dicts(
                core_shared_properties, {"enable-artifacts-buildcache": {"type": "boolean"}}
            ),
        },
        {
            "type": "object",
            "additionalProperties": False,
            # "required": ["mappings"],
            "properties": union_dicts(
                core_shared_properties, {"temporary-storage-url-prefix": {"type": "string"}}
            ),
        },
    ]
}

#: Properties for inclusion in other schemas
properties = {
    "ci": {
        "oneOf": [
            # TODO: Replace with core-shared-properties in Spack 0.23
            ci_properties,
            # Allow legacy format under `ci` for `config update ci`
            spack.schema.gitlab_ci.gitlab_ci_properties,
        ]
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack CI configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}


def update(data):
    import llnl.util.tty as tty

    import spack.ci
    import spack.environment as ev

    # Warn if deprecated section is still in the environment
    ci_env = ev.active_environment()
    if ci_env:
        env_config = ci_env.manifest[ev.TOP_LEVEL_KEY]
        if "gitlab-ci" in env_config:
            tty.die("Error: `gitlab-ci` section detected with `ci`, these are not compatible")

    # Detect if the ci section is using the new pipeline-gen.
    # If it is, assume it has already been converted.
    return spack.ci.translate_deprecated_config(data)
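A ci section using the pipeline-gen format above might look like this (the runner tag and image name are illustrative):

ci:
  target: gitlab
  rebuild-index: true
  pipeline-gen:
  - submapping:
    - match:
      - os=ubuntu22.04
      build-job:
        tags:
        - spack-builder
        image: ghcr.io/example/spack-builder:latest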


Properties for inclusion in other schemas

Full schema with metadata (the flattened rendering duplicates the schema definition above and is omitted).
'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}}, 'required': ['match'], 'type': 'object'}, 'type': 'array'}, 'match_behavior': {'default': 'first', 'enum': ['first', 'merge'], 'type': 'string'}, 'rebuild-index': {'type': 'boolean'}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'service-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'signing-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'temporary-storage-url-prefix': {'type': 'string'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['mappings'], 'type': 'object'}]}]}}, 'title': 'Spack CI configuration file schema', 'type': 'object'}
Full schema with metadata


spack.schema.compilers module

Schema for compilers.yaml configuration file.

import spack.schema.environment

#: Properties for inclusion in other schemas
properties = {
    "compilers": {
        "type": "array",
        "items": {
            "type": "object",
            "additionalProperties": False,
            "properties": {
                "compiler": {
                    "type": "object",
                    "additionalProperties": False,
                    "required": ["paths", "spec", "modules", "operating_system"],
                    "properties": {
                        "paths": {
                            "type": "object",
                            "required": ["cc", "cxx", "f77", "fc"],
                            "additionalProperties": False,
                            "properties": {
                                "cc": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                                "cxx": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                                "f77": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                                "fc": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                            },
                        },
                        "flags": {
                            "type": "object",
                            "additionalProperties": False,
                            "properties": {
                                "cflags": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                                "cxxflags": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                                "fflags": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                                "cppflags": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                                "ldflags": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                                "ldlibs": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                            },
                        },
                        "spec": {"type": "string"},
                        "operating_system": {"type": "string"},
                        "target": {"type": "string"},
                        "alias": {"anyOf": [{"type": "string"}, {"type": "null"}]},
                        "modules": {
                            "anyOf": [{"type": "string"}, {"type": "null"}, {"type": "array"}]
                        },
                        "implicit_rpaths": {
                            "anyOf": [
                                {"type": "array", "items": {"type": "string"}},
                                {"type": "boolean"},
                            ]
                        },
                        "environment": spack.schema.environment.definition,
                        "extra_rpaths": {
                            "type": "array",
                            "default": [],
                            "items": {"type": "string"},
                        },
                    },
                }
            },
        },
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack compiler configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}


Properties for inclusion in other schemas

Full schema with metadata

spack.schema.concretizer module

Schema for concretizer.yaml configuration file.


"concretizer": {
"type": "object",
"additionalProperties": False,
"properties": {
"reuse": {
"oneOf": [{"type": "boolean"}, {"type": "string", "enum": ["dependencies"]}]
},
"enable_node_namespace": {"type": "boolean"},
"targets": {
"type": "object",
"properties": {
"host_compatible": {"type": "boolean"},
"granularity": {"type": "string", "enum": ["generic", "microarchitectures"]},
},
},
"unify": {
"oneOf": [{"type": "boolean"}, {"type": "string", "enum": ["when_possible"]}]
},
"duplicates": {
"type": "object",
"properties": {
"strategy": {"type": "string", "enum": ["none", "minimal", "full"]}
},
},
},
} } #: Full schema with metadata schema = {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Spack concretizer configuration file schema",
"type": "object",
"additionalProperties": False,
"properties": properties, }


Full schema with metadata

spack.schema.config module

Schema for config.yaml configuration file.

from llnl.util.lang import union_dicts

import spack.config
import spack.schema.projections

#: Properties for inclusion in other schemas
properties = {
    "config": {
        "type": "object",
        "default": {},
        "properties": {
            "flags": {
                "type": "object",
                "properties": {
                    "keep_werror": {"type": "string", "enum": ["all", "specific", "none"]}
                },
            },
            "shared_linking": {
                "anyOf": [
                    {"type": "string", "enum": ["rpath", "runpath"]},
                    {
                        "type": "object",
                        "properties": {
                            "type": {"type": "string", "enum": ["rpath", "runpath"]},
                            "bind": {"type": "boolean"},
                        },
                    },
                ]
            },
            "install_tree": {
                "anyOf": [
                    {
                        "type": "object",
                        "properties": union_dicts(
                            {"root": {"type": "string"}},
                            {
                                "padded_length": {
                                    "oneOf": [
                                        {"type": "integer", "minimum": 0},
                                        {"type": "boolean"},
                                    ]
                                }
                            },
                            spack.schema.projections.properties,
                        ),
                    },
                    {"type": "string"},  # deprecated
                ]
            },
            "install_hash_length": {"type": "integer", "minimum": 1},
            "install_path_scheme": {"type": "string"},  # deprecated
            "build_stage": {
                "oneOf": [
                    {"type": "string"},
                    {"type": "array", "items": {"type": "string"}},
                ]
            },
            "stage_name": {"type": "string"},
            "test_stage": {"type": "string"},
            "extensions": {"type": "array", "items": {"type": "string"}},
            "template_dirs": {"type": "array", "items": {"type": "string"}},
            "license_dir": {"type": "string"},
            "source_cache": {"type": "string"},
            "misc_cache": {"type": "string"},
            "environments_root": {"type": "string"},
            "connect_timeout": {"type": "integer", "minimum": 0},
            "verify_ssl": {"type": "boolean"},
            "suppress_gpg_warnings": {"type": "boolean"},
            "install_missing_compilers": {"type": "boolean"},
            "debug": {"type": "boolean"},
            "checksum": {"type": "boolean"},
            "deprecated": {"type": "boolean"},
            "locks": {"type": "boolean"},
            "dirty": {"type": "boolean"},
            "build_language": {"type": "string"},
            "build_jobs": {"type": "integer", "minimum": 1},
            "ccache": {"type": "boolean"},
            "concretizer": {"type": "string", "enum": ["original", "clingo"]},
            "db_lock_timeout": {"type": "integer", "minimum": 1},
            "package_lock_timeout": {
                "anyOf": [{"type": "integer", "minimum": 1}, {"type": "null"}]
            },
            "allow_sgid": {"type": "boolean"},
            "install_status": {"type": "boolean"},
            "binary_index_root": {"type": "string"},
            "url_fetch_method": {"type": "string", "enum": ["urllib", "curl"]},
            "additional_external_search_paths": {
                "type": "array",
                "items": {"type": "string"},
            },
            "binary_index_ttl": {"type": "integer", "minimum": 0},
            "aliases": {
                "type": "object",
                "patternProperties": {r"\w[\w-]*": {"type": "string"}},
            },
        },
        "deprecatedProperties": {
            "properties": ["terminal_title"],
            "message": "config:terminal_title has been replaced by "
            "install_status and is ignored",
            "error": False,
        },
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack core configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}


def update(data):
    """Update the data in place to remove deprecated properties.

    Args:
        data (dict): dictionary to be updated

    Returns:
        True if data was changed, False otherwise
    """
    # Currently deprecated properties are:
    #
    #   install_tree: <string>
    #   install_path_scheme: <string>
    #
    # updated: install_tree: {root: <string>, projections: <projections_dict>}
    # root replaces install_tree, projections replace install_path_scheme
    changed = False

    install_tree = data.get("install_tree", None)
    if isinstance(install_tree, str):
        # deprecated short-form install tree
        # add value as `root` in updated install_tree
        data["install_tree"] = {"root": install_tree}
        changed = True

    install_path_scheme = data.pop("install_path_scheme", None)
    if install_path_scheme:
        projections_data = {"projections": {"all": install_path_scheme}}
        # update projections with install_scheme
        # whether install_tree was updated or not
        # we merge the yaml to ensure we don't invalidate other projections
        update_data = data.get("install_tree", {})
        update_data = spack.config.merge_yaml(update_data, projections_data)
        data["install_tree"] = update_data
        changed = True

    use_curl = data.pop("use_curl", None)
    if use_curl is not None:
        data["url_fetch_method"] = "curl" if use_curl else "urllib"
        changed = True

    shared_linking = data.get("shared_linking", None)
    if isinstance(shared_linking, str):
        # deprecated short-form shared_linking: rpath/runpath
        # add value as `type` in updated shared_linking
        data["shared_linking"] = {"type": shared_linking, "bind": False}
        changed = True

    return changed
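
The update() function rewrites the deprecated short forms in place. A minimal sketch of its effect, assuming the function above is importable from spack.schema.config (the input values are made up for illustration):

from spack.schema.config import update

data = {"install_tree": "/opt/spack", "use_curl": True, "shared_linking": "rpath"}

changed = update(data)

assert changed
assert data["install_tree"] == {"root": "/opt/spack"}  # root replaces the bare string
assert data["url_fetch_method"] == "curl"  # use_curl becomes url_fetch_method
assert data["shared_linking"] == {"type": "rpath", "bind": False}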


Properties for inclusion in other schemas

Full schema with metadata

Update the data in place to remove deprecated properties.
data (dict) -- dictionary to be updated
True if data was changed, False otherwise


spack.schema.container module

Schema for the 'container' subsection of Spack environments.

Schema for the container attribute included in Spack environments.

spack.schema.cray_manifest module

Schema for the Cray descriptive manifest: this describes a set of installed packages on the system and also specifies the dependency relationships between them (so it provides more information than external entries in the packages configuration do).

This does not specify a configuration: it is an input format that is consumed and transformed into Spack DB records.

spack.schema.database_index module

Schema for the database index.json file.


"installed": {"type": "boolean"},
"ref_count": {"type": "integer", "minimum": 0},
"explicit": {"type": "boolean"},
"installation_time": {"type": "number"},
},
}
},
},
"version": {"type": "string"},
},
}
}, }
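
For illustration, here is a stub index entry that conforms to this schema. The "spec" field is not required at the entry level, so it is omitted; the 32-character hash key, the path, and the version string are invented. The sketch assumes the jsonschema library and an importable Spack checkout:

import jsonschema

import spack.schema.database_index

index = {
    "database": {
        "version": "6",
        "installs": {
            "a" * 32: {  # keys are 32-character hashes
                "path": "/opt/spack/opt/zlib-1.2.13",
                "installed": True,
                "ref_count": 0,
                "explicit": True,
                "installation_time": 1700000000.0,
            }
        },
    }
}

jsonschema.validate(index, spack.schema.database_index.schema)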


Full schema with metadata

spack.schema.definitions module

Schema for definitions.

import spack.schema

#: Properties for inclusion in other schemas
properties = {
    "definitions": {
        "type": "array",
        "default": [],
        "items": {
            "type": "object",
            "properties": {"when": {"type": "string"}},
            "patternProperties": {r"^(?!when$)\w*": spack.schema.spec_list_schema},
        },
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack definitions configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}



Full schema with metadata

spack.schema.env module

Schema for env.yaml configuration file.


                spack.schema.merged.properties,
                # extra environment schema properties
                {
                    "include": {"type": "array", "default": [], "items": {"type": "string"}},
                    "develop": {
                        "type": "object",
                        "default": {},
                        "additionalProperties": False,
                        "patternProperties": {
                            r"\w[\w-]*": {
                                "type": "object",
                                "additionalProperties": False,
                                "properties": {
                                    "spec": {"type": "string"},
                                    "path": {"type": "string"},
                                },
                            }
                        },
                    },
                    "specs": spack.schema.spec_list_schema,
                    "view": {
                        "anyOf": [
                            {"type": "boolean"},
                            {"type": "string"},
                            {
                                "type": "object",
                                "patternProperties": {
                                    r"\w+": {
                                        "required": ["root"],
                                        "additionalProperties": False,
                                        "properties": {
                                            "root": {"type": "string"},
                                            "link": {
                                                "type": "string",
                                                "pattern": "(roots|all|run)",
                                            },
                                            "link_type": {"type": "string"},
                                            "select": {
                                                "type": "array",
                                                "items": {"type": "string"},
                                            },
                                            "exclude": {
                                                "type": "array",
                                                "items": {"type": "string"},
                                            },
                                            "projections": projections_scheme,
                                        },
                                    }
                                },
                            },
                        ]
                    },
                },
            ),
        }
    },
}


def update(data):
    """Update the data in place to remove deprecated properties.

    Args:
        data (dict): dictionary to be updated

    Returns:
        True if data was changed, False otherwise
    """
    import spack.ci

    if "gitlab-ci" in data:
        data["ci"] = data.pop("gitlab-ci")
    if "ci" in data:
        return spack.ci.translate_deprecated_config(data["ci"])

    # There are not currently any deprecated attributes in this section
    # that have not been removed
    return False
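
A sketch of the deprecation handling above: a top-level "gitlab-ci" section is renamed to "ci" and then handed to spack.ci.translate_deprecated_config, whose exact translation is version dependent. The input is invented and an importable Spack checkout is assumed:

from spack.schema.env import update

data = {"gitlab-ci": {"mappings": []}}

update(data)  # further translation is delegated to spack.ci

assert "ci" in data and "gitlab-ci" not in data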


Top level key in a manifest file

Update the data in place to remove deprecated properties.
data (dict) -- dictionary to be updated
True if data was changed, False otherwise


spack.schema.environment module

Schema for environment modifications. Meant for inclusion in other schemas.

Returns an EnvironmentModifications object containing the modifications parsed from input.
config_obj -- a configuration dictionary conforming to the schema definition for environment modifications
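
The reusable schema fragment is exposed as spack.schema.environment.definition (it is referenced, for example, by the compilers schema above). A hedged sketch validating a typical modifications block against it, assuming jsonschema and an importable Spack checkout; the variable names and paths are invented:

import jsonschema

import spack.schema.environment

env_mods = {
    "set": {"CC": "/usr/bin/gcc"},
    "prepend_path": {"PATH": "/opt/tools/bin"},
    "unset": ["CPATH"],
}

jsonschema.validate(env_mods, spack.schema.environment.definition)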


spack.schema.gitlab_ci module

Schema for gitlab-ci.yaml configuration file.

from llnl.util.lang import union_dicts

image_schema = {
    "oneOf": [
        {"type": "string"},
        {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "entrypoint": {"type": "array", "items": {"type": "string"}},
            },
        },
    ]
}

runner_attributes_schema_items = {
    "image": image_schema,
    "tags": {"type": "array", "items": {"type": "string"}},
    "variables": {
        "type": "object",
        "patternProperties": {r"[\w\d\-_\.]+": {"type": "string"}},
    },
    "before_script": {"type": "array", "items": {"type": "string"}},
    "script": {"type": "array", "items": {"type": "string"}},
    "after_script": {"type": "array", "items": {"type": "string"}},
}

runner_selector_schema = {
    "type": "object",
    "additionalProperties": False,
    "required": ["tags"],
    "properties": runner_attributes_schema_items,
}

remove_attributes_schema = {
    "type": "object",
    "additionalProperties": False,
    "required": ["tags"],
    "properties": {"tags": {"type": "array", "items": {"type": "string"}}},
}

core_shared_properties = union_dicts(
    runner_attributes_schema_items,
    {
        "bootstrap": {
            "type": "array",
            "items": {
                "anyOf": [
                    {"type": "string"},
                    {
                        "type": "object",
                        "additionalProperties": False,
                        "required": ["name"],
                        "properties": {
                            "name": {"type": "string"},
                            "compiler-agnostic": {"type": "boolean", "default": False},
                        },
                    },
                ]
            },
        },
        "match_behavior": {"type": "string", "enum": ["first", "merge"], "default": "first"},
        "mappings": {
            "type": "array",
            "items": {
                "type": "object",
                "additionalProperties": False,
                "required": ["match"],
                "properties": {
                    "match": {"type": "array", "items": {"type": "string"}},
                    "remove-attributes": remove_attributes_schema,
                    "runner-attributes": runner_selector_schema,
                },
            },
        },
        "service-job-attributes": runner_selector_schema,
        "signing-job-attributes": runner_selector_schema,
        "rebuild-index": {"type": "boolean"},
        "broken-specs-url": {"type": "string"},
        "broken-tests-packages": {"type": "array", "items": {"type": "string"}},
    },
)

gitlab_ci_properties = {
    "anyOf": [
        {
            "type": "object",
            "additionalProperties": False,
            "required": ["mappings"],
            "properties": union_dicts(
                core_shared_properties, {"enable-artifacts-buildcache": {"type": "boolean"}}
            ),
        },
        {
            "type": "object",
            "additionalProperties": False,
            "required": ["mappings"],
            "properties": union_dicts(
                core_shared_properties, {"temporary-storage-url-prefix": {"type": "string"}}
            ),
        },
    ]
}

#: Properties for inclusion in other schemas
properties = {"gitlab-ci": gitlab_ci_properties}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack gitlab-ci configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}


Properties for inclusion in other schemas

Full schema with metadata

spack.schema.merged module

Schema for configuration merged into one file.


    spack.schema.packages.properties,
    spack.schema.repos.properties,
    spack.schema.upstreams.properties,
)

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack merged configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}


Properties for inclusion in other schemas

'object'}, {'additionalProperties': False, 'properties': {'reindex-job': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}, 'reindex-job-remove': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}, {'additionalProperties': False, 'properties': {'signing-job': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}, 'signing-job-remove': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}, {'additionalProperties': False, 'properties': {'cleanup-job': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 
'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}, 'cleanup-job-remove': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}, {'additionalProperties': False, 'properties': {'any-job': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}, 'any-job-remove': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}]}]}, 'type': 'array'}, 'rebuild-index': {'type': 'boolean'}, 'target': {'default': 'gitlab', 'enum': ['gitlab'], 'type': 'string'}, 'temporary-storage-url-prefix': {'type': 'string'}}, 'type': 'object'}]}, {'anyOf': [{'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'bootstrap': {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'properties': {'compiler-agnostic': {'default': False, 'type': 'boolean'}, 'name': {'type': 'string'}}, 'required': ['name'], 'type': 'object'}]}, 'type': 
'array'}, 'broken-specs-url': {'type': 'string'}, 'broken-tests-packages': {'items': {'type': 'string'}, 'type': 'array'}, 'enable-artifacts-buildcache': {'type': 'boolean'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'mappings': {'items': {'additionalProperties': False, 'properties': {'match': {'items': {'type': 'string'}, 'type': 'array'}, 'remove-attributes': {'additionalProperties': False, 'properties': {'tags': {'items': {'type': 'string'}, 'type': 'array'}}, 'required': ['tags'], 'type': 'object'}, 'runner-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}}, 'required': ['match'], 'type': 'object'}, 'type': 'array'}, 'match_behavior': {'default': 'first', 'enum': ['first', 'merge'], 'type': 'string'}, 'rebuild-index': {'type': 'boolean'}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'service-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'signing-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['mappings'], 'type': 'object'}, {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'bootstrap': {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'properties': {'compiler-agnostic': {'default': False, 'type': 'boolean'}, 'name': {'type': 'string'}}, 'required': ['name'], 'type': 'object'}]}, 'type': 'array'}, 'broken-specs-url': {'type': 'string'}, 'broken-tests-packages': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': 
{'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'mappings': {'items': {'additionalProperties': False, 'properties': {'match': {'items': {'type': 'string'}, 'type': 'array'}, 'remove-attributes': {'additionalProperties': False, 'properties': {'tags': {'items': {'type': 'string'}, 'type': 'array'}}, 'required': ['tags'], 'type': 'object'}, 'runner-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}}, 'required': ['match'], 'type': 'object'}, 'type': 'array'}, 'match_behavior': {'default': 'first', 'enum': ['first', 'merge'], 'type': 'string'}, 'rebuild-index': {'type': 'boolean'}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'service-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'signing-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'temporary-storage-url-prefix': {'type': 'string'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['mappings'], 'type': 'object'}]}]}, 'compilers': {'items': {'additionalProperties': False, 'properties': {'compiler': {'additionalProperties': False, 'properties': {'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': 
[{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'extra_rpaths': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'flags': {'additionalProperties': False, 'properties': {'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'type': 'object'}, 'implicit_rpaths': {'anyOf': [{'items': {'type': 'string'}, 'type': 'array'}, {'type': 'boolean'}]}, 'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 'operating_system': {'type': 'string'}, 'paths': {'additionalProperties': False, 'properties': {'cc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['cc', 'cxx', 'f77', 'fc'], 'type': 'object'}, 'spec': {'type': 'string'}, 'target': {'type': 'string'}}, 'required': ['paths', 'spec', 'modules', 'operating_system'], 'type': 'object'}}, 'type': 'object'}, 'type': 'array'}, 'concretizer': {'additionalProperties': False, 'properties': {'duplicates': {'properties': {'strategy': {'enum': ['none', 'minimal', 'full'], 'type': 'string'}}, 'type': 'object'}, 'enable_node_namespace': {'type': 'boolean'}, 'reuse': {'oneOf': [{'type': 'boolean'}, {'enum': ['dependencies'], 'type': 'string'}]}, 'targets': {'properties': {'granularity': {'enum': ['generic', 'microarchitectures'], 'type': 'string'}, 'host_compatible': {'type': 'boolean'}}, 'type': 'object'}, 'unify': {'oneOf': [{'type': 'boolean'}, {'enum': ['when_possible'], 'type': 'string'}]}}, 'type': 'object'}, 'config': {'default': {}, 'deprecatedProperties': {'error': False, 'message': 'config:terminal_title has been replaced by install_status and is ignored', 'properties': ['terminal_title']}, 'properties': {'additional_external_search_paths': {'items': {'type': 'string'}, 'type': 'array'}, 'aliases': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'allow_sgid': {'type': 'boolean'}, 'binary_index_root': {'type': 'string'}, 'binary_index_ttl': {'minimum': 0, 'type': 'integer'}, 'build_jobs': {'minimum': 1, 'type': 'integer'}, 'build_language': {'type': 'string'}, 'build_stage': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'ccache': {'type': 'boolean'}, 'checksum': {'type': 'boolean'}, 'concretizer': {'enum': ['original', 'clingo'], 'type': 'string'}, 'connect_timeout': {'minimum': 0, 'type': 'integer'}, 'db_lock_timeout': {'minimum': 1, 'type': 'integer'}, 'debug': {'type': 'boolean'}, 'deprecated': {'type': 'boolean'}, 'dirty': {'type': 'boolean'}, 'environments_root': {'type': 'string'}, 'extensions': {'items': {'type': 'string'}, 'type': 'array'}, 'flags': {'properties': {'keep_werror': {'enum': ['all', 'specific', 'none'], 'type': 'string'}}, 'type': 'object'}, 'install_hash_length': {'minimum': 1, 'type': 'integer'}, 'install_missing_compilers': {'type': 'boolean'}, 'install_path_scheme': {'type': 'string'}, 'install_status': {'type': 'boolean'}, 'install_tree': {'anyOf': [{'properties': {'padded_length': {'oneOf': [{'minimum': 0, 'type': 'integer'}, {'type': 'boolean'}]}, 'projections': {'patternProperties': 
{'all|\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'root': {'type': 'string'}}, 'type': 'object'}, {'type': 'string'}]}, 'license_dir': {'type': 'string'}, 'locks': {'type': 'boolean'}, 'misc_cache': {'type': 'string'}, 'package_lock_timeout': {'anyOf': [{'minimum': 1, 'type': 'integer'}, {'type': 'null'}]}, 'shared_linking': {'anyOf': [{'enum': ['rpath', 'runpath'], 'type': 'string'}, {'properties': {'bind': {'type': 'boolean'}, 'type': {'enum': ['rpath', 'runpath'], 'type': 'string'}}, 'type': 'object'}]}, 'source_cache': {'type': 'string'}, 'stage_name': {'type': 'string'}, 'suppress_gpg_warnings': {'type': 'boolean'}, 'template_dirs': {'items': {'type': 'string'}, 'type': 'array'}, 'test_stage': {'type': 'string'}, 'url_fetch_method': {'enum': ['urllib', 'curl'], 'type': 'string'}, 'verify_ssl': {'type': 'boolean'}}, 'type': 'object'}, 'container': {'additionalProperties': False, 'properties': {'depfile': {'default': False, 'type': 'boolean'}, 'docker': {'additionalProperties': False, 'default': {}, 'type': 'object'}, 'format': {'enum': ['docker', 'singularity'], 'type': 'string'}, 'images': {'anyOf': [{'additionalProperties': False, 'properties': {'os': {'type': 'string'}, 'spack': {'anyOf': [{'type': 'string'}, {'additional_properties': False, 'properties': {'ref': {'type': 'string'}, 'resolve_sha': {'default': False, 'type': 'boolean'}, 'url': {'type': 'string'}, 'verify': {'default': False, 'type': 'boolean'}}, 'type': 'object'}]}}, 'required': ['os', 'spack'], 'type': 'object'}, {'additionalProperties': False, 'properties': {'build': {'type': 'string'}, 'final': {'type': 'string'}}, 'required': ['build', 'final'], 'type': 'object'}]}, 'labels': {'type': 'object'}, 'os_packages': {'additionalProperties': False, 'properties': {'build': {'items': {'type': 'string'}, 'type': 'array'}, 'command': {'enum': ['apt', 'yum', 'zypper', 'apk', 'yum_amazon'], 'type': 'string'}, 'final': {'items': {'type': 'string'}, 'type': 'array'}, 'update': {'type': 'boolean'}}, 'type': 'object'}, 'singularity': {'additionalProperties': False, 'default': {}, 'properties': {'help': {'type': 'string'}, 'runscript': {'type': 'string'}, 'startscript': {'type': 'string'}, 'test': {'type': 'string'}}, 'type': 'object'}, 'strip': {'default': True, 'type': 'boolean'}, 'template': {'default': None, 'type': 'string'}}, 'type': 'object'}, 'definitions': {'default': [], 'items': {'patternProperties': {'^(?!when$)\\w*': {'default': [], 'items': {'anyOf': [{'additionalProperties': False, 'properties': {'exclude': {'items': {'type': 'string'}, 'type': 'array'}, 'matrix': {'items': {'items': {'type': 'string'}, 'type': 'array'}, 'type': 'array'}}, 'type': 'object'}, {'type': 'string'}, {'type': 'null'}]}, 'type': 'array'}}, 'properties': {'when': {'type': 'string'}}, 'type': 'object'}, 'type': 'array'}, 'mirrors': {'additionalProperties': False, 'default': {}, 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'anyOf': [{'required': ['url']}, {'required': ['fetch']}, {'required': ['pull']}], 'properties': {'access_pair': {'items': {'maxItems': 2, 'minItems': 2, 'type': ['string', 'null']}, 'type': 'array'}, 'access_token': {'type': ['string', 'null']}, 'binary': {'type': 'boolean'}, 'endpoint_url': {'type': ['string', 'null']}, 'fetch': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'properties': {'access_pair': {'items': {'maxItems': 2, 'minItems': 2, 'type': ['string', 'null']}, 'type': 'array'}, 'access_token': {'type': ['string', 'null']}, 
'endpoint_url': {'type': ['string', 'null']}, 'profile': {'type': ['string', 'null']}, 'url': {'type': 'string'}}, 'type': 'object'}]}, 'profile': {'type': ['string', 'null']}, 'push': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'properties': {'access_pair': {'items': {'maxItems': 2, 'minItems': 2, 'type': ['string', 'null']}, 'type': 'array'}, 'access_token': {'type': ['string', 'null']}, 'endpoint_url': {'type': ['string', 'null']}, 'profile': {'type': ['string', 'null']}, 'url': {'type': 'string'}}, 'type': 'object'}]}, 'source': {'type': 'boolean'}, 'url': {'type': 'string'}}, 'type': 'object'}]}}, 'type': 'object'}, 'modules': {'additionalProperties': False, 'patternProperties': {'^(?!prefix_inspections$)\\w[\\w-]*$': {'additionalProperties': False, 'default': {}, 'properties': {'arch_folder': {'type': 'boolean'}, 'enable': {'default': [], 'items': {'enum': ['tcl', 'lmod'], 'type': 'string'}, 'type': 'array'}, 'lmod': {'allOf': [{'allOf': [{'properties': {'all': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}, 'defaults': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'exclude': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'exclude_implicits': {'default': False, 'type': 'boolean'}, 'hash_length': {'default': 7, 'minimum': 0, 'type': 'integer'}, 'hide_implicits': {'default': False, 'type': 'boolean'}, 'include': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'naming_scheme': {'type': 'string'}, 'projections': {'patternProperties': {'all|\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'verbose': {'default': False, 'type': 'boolean'}}}, {'patternProperties': {'(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|include|exclude|projections|naming_scheme|core_compilers|all)(^\\w[\\w-]*)': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 
'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}, '^[\\^@%+~]': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}}, 'validate_spec': True}], 'default': {}, 'type': 'object'}, {'properties': {'core_compilers': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'core_specs': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'filter_hierarchy_specs': {'patternProperties': {'(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|include|exclude|projections|naming_scheme|core_compilers|all)(^\\w[\\w-]*)': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'hierarchy': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}]}, 'prefix_inspections': {'additionalProperties': False, 'patternProperties': {'^[\\w-]*': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'roots': {'properties': {'lmod': {'type': 'string'}, 'tcl': {'type': 'string'}}, 'type': 'object'}, 'tcl': {'allOf': [{'allOf': [{'properties': {'all': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': 
{'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}, 'defaults': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'exclude': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'exclude_implicits': {'default': False, 'type': 'boolean'}, 'hash_length': {'default': 7, 'minimum': 0, 'type': 'integer'}, 'hide_implicits': {'default': False, 'type': 'boolean'}, 'include': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'naming_scheme': {'type': 'string'}, 'projections': {'patternProperties': {'all|\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'verbose': {'default': False, 'type': 'boolean'}}}, {'patternProperties': {'(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|include|exclude|projections|naming_scheme|core_compilers|all)(^\\w[\\w-]*)': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}, '^[\\^@%+~]': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': 
{'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}}, 'validate_spec': True}], 'default': {}, 'type': 'object'}, {}]}, 'use_view': {'anyOf': [{'type': 'string'}, {'type': 'boolean'}]}}, 'type': 'object'}}, 'properties': {'prefix_inspections': {'additionalProperties': False, 'patternProperties': {'^[\\w-]*': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}}, 'type': 'object'}, 'packages': {'additionalProperties': False, 'default': {}, 'patternProperties': {'(?!^all$)(^\\w[\\w-]*)': {'additionalProperties': False, 'default': {}, 'deprecatedProperties': {'error': False, 'message': "setting 'compiler:', 'target:' or 'provider:' preferences in a package-specific section of packages.yaml is deprecated, and will be removed in v0.22.\n\n\tThese preferences will be ignored by Spack, and can be set only in the 'all' section of the same file. 
You can run:\n\n\t\t$ spack audit configs\n\n\tto get better diagnostics, including files:lines where the deprecated attributes are used.\n\n\tUse requirements to enforce conditions on specific packages: https://spack.readthedocs.io/en/latest/packages_yaml.html#package-requirements\n", 'properties': ['target', 'compiler', 'providers']}, 'properties': {'buildable': {'default': True, 'type': 'boolean'}, 'compiler': {}, 'externals': {'items': {'additionalProperties': True, 'properties': {'extra_attributes': {'type': 'object'}, 'modules': {'items': {'type': 'string'}, 'type': 'array'}, 'prefix': {'type': 'string'}, 'spec': {'type': 'string'}}, 'required': ['spec'], 'type': 'object'}, 'type': 'array'}, 'package_attributes': {'additionalProperties': False, 'patternProperties': {'\\w+': {}}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'providers': {}, 'require': {'oneOf': [{'items': {'oneOf': [{'additionalProperties': False, 'properties': {'any_of': {'items': {'type': 'string'}, 'type': 'array'}, 'message': {'type': 'string'}, 'one_of': {'items': {'type': 'string'}, 'type': 'array'}, 'spec': {'type': 'string'}, 'when': {'type': 'string'}}, 'type': 'object'}, {'type': 'string'}]}, 'type': 'array'}, {'type': 'string'}]}, 'target': {}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'version': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}}, 'properties': {'all': {'additionalProperties': False, 'default': {}, 'deprecatedProperties': {'error': False, 'message': "setting version preferences in the 'all' section of packages.yaml is deprecated and will be removed in v0.22\n\n\tThese preferences will be ignored by Spack. 
You can set them only in package-specific sections of the same file.\n", 'properties': ['version']}, 'properties': {'buildable': {'default': True, 'type': 'boolean'}, 'compiler': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'package_attributes': {'additionalProperties': False, 'patternProperties': {'\\w+': {}}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'providers': {'additionalProperties': False, 'default': {}, 'patternProperties': {'\\w[\\w-]*': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'require': {'oneOf': [{'items': {'oneOf': [{'additionalProperties': False, 'properties': {'any_of': {'items': {'type': 'string'}, 'type': 'array'}, 'message': {'type': 'string'}, 'one_of': {'items': {'type': 'string'}, 'type': 'array'}, 'spec': {'type': 'string'}, 'when': {'type': 'string'}}, 'type': 'object'}, {'type': 'string'}]}, 'type': 'array'}, {'type': 'string'}]}, 'target': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'version': {}}, 'type': 'object'}}, 'type': 'object'}, 'repos': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'upstreams': {'default': {}, 'patternProperties': {'\\w[\\w-]*': {'additionalProperties': False, 'default': {}, 'properties': {'install_tree': {'type': 'string'}, 'modules': {'properties': {'lmod': {'type': 'string'}, 'tcl': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack merged configuration file schema', 'type': 'object'}
Full schema with metadata
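
To sanity-check a combined configuration document against this merged schema, it can be validated directly with the third-party jsonschema library. This is a minimal sketch only: the section values below are illustrative, and the imports assume Spack's libraries are on the Python path (for instance inside "spack python"); Spack performs equivalent validation itself when reading configuration files.

import jsonschema

import spack.schema.merged

jsonschema.validate(
    {
        "config": {"build_jobs": 8, "ccache": True},
        "repos": ["$spack/var/spack/repos/builtin"],
    },
    spack.schema.merged.schema,
)  # raises jsonschema.ValidationError if a section is malformed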

spack.schema.mirrors module

Schema for mirrors.yaml configuration file.

#: Common properties for connection specification
connection = {
    "url": {"type": "string"},
    # todo: replace this with named keys "username" / "password" or "id" / "secret"
    "access_pair": {
        "type": "array",
        "items": {"type": ["string", "null"], "minItems": 2, "maxItems": 2},
    },
    "access_token": {"type": ["string", "null"]},
    "profile": {"type": ["string", "null"]},
    "endpoint_url": {"type": ["string", "null"]},
}

#: Mirror connection inside pull/push keys
fetch_and_push = {
    "anyOf": [
        {"type": "string"},
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {**connection},  # type: ignore
        },
    ]
}

#: Mirror connection when no pull/push keys are set
mirror_entry = {
    "type": "object",
    "additionalProperties": False,
    "anyOf": [{"required": ["url"]}, {"required": ["fetch"]}, {"required": ["pull"]}],
    "properties": {
        "source": {"type": "boolean"},
        "binary": {"type": "boolean"},
        "fetch": fetch_and_push,
        "push": fetch_and_push,
        **connection,  # type: ignore
    },
}

#: Properties for inclusion in other schemas
properties = {
    "mirrors": {
        "type": "object",
        "default": {},
        "additionalProperties": False,
        "patternProperties": {r"\w[\w-]*": {"anyOf": [{"type": "string"}, mirror_entry]}},
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack mirror configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}

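As a usage illustration, a hypothetical mirrors.yaml payload can be checked against this schema with the third-party jsonschema library. This is a sketch only: the mirror names and URLs are made up, and Spack performs the same validation internally when it reads the file.

import jsonschema

import spack.schema.mirrors

config = {
    "mirrors": {
        # Short form: a mirror name mapped directly to a URL string.
        "local": "file:///opt/spack-mirror",
        # Long form: an object that must carry at least one of "url",
        # "fetch" or "pull"; unknown keys are rejected.
        "remote": {
            "url": "s3://example-bucket/mirror",
            "source": False,
            "binary": True,
        },
    }
}

jsonschema.validate(config, spack.schema.mirrors.schema)  # ValidationError on bad input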

{'$schema': '
http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'mirrors': {'additionalProperties': False, 'default': {}, 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'anyOf': [{'required': ['url']}, {'required': ['fetch']}, {'required': ['pull']}], 'properties': {'access_pair': {'items': {'maxItems': 2, 'minItems': 2, 'type': ['string', 'null']}, 'type': 'array'}, 'access_token': {'type': ['string', 'null']}, 'binary': {'type': 'boolean'}, 'endpoint_url': {'type': ['string', 'null']}, 'fetch': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'properties': {'access_pair': {'items': {'maxItems': 2, 'minItems': 2, 'type': ['string', 'null']}, 'type': 'array'}, 'access_token': {'type': ['string', 'null']}, 'endpoint_url': {'type': ['string', 'null']}, 'profile': {'type': ['string', 'null']}, 'url': {'type': 'string'}}, 'type': 'object'}]}, 'profile': {'type': ['string', 'null']}, 'push': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'properties': {'access_pair': {'items': {'maxItems': 2, 'minItems': 2, 'type': ['string', 'null']}, 'type': 'array'}, 'access_token': {'type': ['string', 'null']}, 'endpoint_url': {'type': ['string', 'null']}, 'profile': {'type': ['string', 'null']}, 'url': {'type': 'string'}}, 'type': 'object'}]}, 'source': {'type': 'boolean'}, 'url': {'type': 'string'}}, 'type': 'object'}]}}, 'type': 'object'}}, 'title': 'Spack mirror configuration file schema', 'type': 'object'}
Full schema with metadata

spack.schema.modules module

Schema for modules.yaml configuration file.

#: Matches a spec or a multi-valued variant but not another
#: valid keyword.
#:
#: THIS NEEDS TO BE UPDATED FOR EVERY NEW KEYWORD THAT
#: IS ADDED IMMEDIATELY BELOW THE MODULE TYPE ATTRIBUTE
spec_regex = (
    r"(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|"
    r"include|exclude|projections|naming_scheme|core_compilers|all)(^\w[\w-]*)"
)

#: Matches a valid name for a module set
valid_module_set_name = r"^(?!prefix_inspections$)\w[\w-]*$"

#: Matches an anonymous spec, i.e. a spec without a root name
anonymous_spec_regex = r"^[\^@%+~]"

#: Definitions for parts of module schema
array_of_strings = {"type": "array", "default": [], "items": {"type": "string"}}

dictionary_of_strings = {"type": "object", "patternProperties": {r"\w[\w-]*": {"type": "string"}}}

dependency_selection = {"type": "string", "enum": ["none", "direct", "all"]}

module_file_configuration = {
    "type": "object",
    "default": {},
    "additionalProperties": False,
    "properties": {
        "filter": {
            "type": "object",
            "default": {},
            "additionalProperties": False,
            "properties": {
                "exclude_env_vars": {"type": "array", "default": [], "items": {"type": "string"}}
            },
        },
        "template": {"type": "string"},
        "autoload": dependency_selection,
        "prerequisites": dependency_selection,
        "conflict": array_of_strings,
        "load": array_of_strings,
        "suffixes": {
            "type": "object",
            "validate_spec": True,
            "patternProperties": {r"\w[\w-]*": {"type": "string"}},  # key
        },
        "environment": spack.schema.environment.definition,
    },
}

projections_scheme = spack.schema.projections.properties["projections"]

module_type_configuration = {
    "type": "object",
    "default": {},
    "allOf": [
        {
            "properties": {
                "verbose": {"type": "boolean", "default": False},
                "hash_length": {"type": "integer", "minimum": 0, "default": 7},
                "include": array_of_strings,
                "exclude": array_of_strings,
                "exclude_implicits": {"type": "boolean", "default": False},
                "defaults": array_of_strings,
                "hide_implicits": {"type": "boolean", "default": False},
                "naming_scheme": {"type": "string"},  # Can we be more specific here?
                "projections": projections_scheme,
                "all": module_file_configuration,
            }
        },
        {
            "validate_spec": True,
            "patternProperties": {
                spec_regex: module_file_configuration,
                anonymous_spec_regex: module_file_configuration,
            },
        },
    ],
}

module_config_properties = {
    "use_view": {"anyOf": [{"type": "string"}, {"type": "boolean"}]},
    "arch_folder": {"type": "boolean"},
    "roots": {
        "type": "object",
        "properties": {"tcl": {"type": "string"}, "lmod": {"type": "string"}},
    },
    "enable": {
        "type": "array",
        "default": [],
        "items": {"type": "string", "enum": ["tcl", "lmod"]},
    },
    "lmod": {
        "allOf": [
            # Base configuration
            module_type_configuration,
            {
                "type": "object",
                "properties": {
                    "core_compilers": array_of_strings,
                    "hierarchy": array_of_strings,
                    "core_specs": array_of_strings,
                    "filter_hierarchy_specs": {
                        "type": "object",
                        "patternProperties": {spec_regex: array_of_strings},
                    },
                },
            },  # Specific lmod extensions
        ]
    },
    "tcl": {
        "allOf": [
            # Base configuration
            module_type_configuration,
            {},  # Specific tcl extensions
        ]
    },
    "prefix_inspections": {
        "type": "object",
        "additionalProperties": False,
        "patternProperties": {
            # prefix-relative path to be inspected for existence
            r"^[\w-]*": array_of_strings
        },
    },
}

# Properties for inclusion into other schemas (requires definitions)
properties = {
    "modules": {
        "type": "object",
        "additionalProperties": False,
        "properties": {
            "prefix_inspections": {
                "type": "object",
                "additionalProperties": False,
                "patternProperties": {
                    # prefix-relative path to be inspected for existence
                    r"^[\w-]*": array_of_strings
                },
            }
        },
        "patternProperties": {
            valid_module_set_name: {
                "type": "object",
                "default": {},
                "additionalProperties": False,
                "properties": module_config_properties,
            }
        },
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack module file configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}

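The moving parts above compose as follows: any key other than "prefix_inspections" names a module set (valid_module_set_name), recognized keywords configure a module type, and remaining keys are matched as spec patterns by spec_regex. Below is a hedged sketch (package, compiler, and path names are hypothetical) validating such a document with the third-party jsonschema library, which simply ignores Spack's custom "validate_spec" keyword:

import jsonschema

import spack.schema.modules

config = {
    "modules": {
        "default": {  # any name except "prefix_inspections" defines a module set
            "enable": ["tcl", "lmod"],
            "tcl": {
                "hash_length": 0,                  # a recognized keyword ...
                "all": {"autoload": "direct"},
                "mpich": {"load": ["cuda"]},       # ... while this key matches spec_regex
            },
            "lmod": {
                "core_compilers": ["gcc@12.3.0"],  # lmod-specific extension
                "hierarchy": ["mpi"],
            },
        },
        "prefix_inspections": {"bin": ["PATH"], "lib": ["LD_LIBRARY_PATH"]},
    }
}

jsonschema.validate(config, spack.schema.modules.schema)  # ValidationError on bad input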

{'$schema': '
Full schema with metadata

Matches a spec or a multi-valued variant but not another valid keyword.

This needs to be updated for every new keyword that is added immediately below the module type attribute.



spack.schema.packages module

Schema for packages.yaml configuration files.


variants = {"oneOf": [{"type": "string"}, {"type": "array", "items": {"type": "string"}}]}

requirements = {
    "oneOf": [
        # 'require' can be a list of requirement_groups.
        # each requirement group is a list of one or more
        # specs. Either at least one or exactly one spec
        # in the group must be satisfied (depending on
        # whether you use "any_of" or "one_of",
        # respectively)
        {
            "type": "array",
            "items": {
                "oneOf": [
                    {
                        "type": "object",
                        "additionalProperties": False,
                        "properties": {
                            "one_of": {"type": "array", "items": {"type": "string"}},
                            "any_of": {"type": "array", "items": {"type": "string"}},
                            "spec": {"type": "string"},
                            "message": {"type": "string"},
                            "when": {"type": "string"},
                        },
                    },
                    {"type": "string"},
                ]
            },
        },
        # Shorthand for a single requirement group with
        # one member
        {"type": "string"},
    ]
}

permissions = {
    "type": "object",
    "additionalProperties": False,
    "properties": {
        "read": {"type": "string", "enum": ["user", "group", "world"]},
        "write": {"type": "string", "enum": ["user", "group", "world"]},
        "group": {"type": "string"},
    },
}

package_attributes = {
    "type": "object",
    "additionalProperties": False,
    "patternProperties": {r"\w+": {}},
}

REQUIREMENT_URL = "https://spack.readthedocs.io/en/latest/packages_yaml.html#package-requirements"

#: Properties for inclusion in other schemas
properties = {
    "packages": {
        "type": "object",
        "default": {},
        "additionalProperties": False,
        "properties": {
            "all": {  # package name
                "type": "object",
                "default": {},
                "additionalProperties": False,
                "properties": {
                    "require": requirements,
                    "version": {},  # Here only to warn users on ignored properties
                    "target": {
                        "type": "array",
                        "default": [],
                        # target names
                        "items": {"type": "string"},
                    },
                    "compiler": {
                        "type": "array",
                        "default": [],
                        "items": {"type": "string"},
                    },  # compiler specs
                    "buildable": {"type": "boolean", "default": True},
                    "permissions": permissions,
                    # If 'get_full_repo' is promoted to a Package-level
                    # attribute, it could be useful to set it here
                    "package_attributes": package_attributes,
                    "providers": {
                        "type": "object",
                        "default": {},
                        "additionalProperties": False,
                        "patternProperties": {
                            r"\w[\w-]*": {
                                "type": "array",
                                "default": [],
                                "items": {"type": "string"},
                            }
                        },
                    },
                    "variants": variants,
                },
                "deprecatedProperties": {
                    "properties": ["version"],
                    "message": "setting version preferences in the 'all' section of packages.yaml "
                    "is deprecated and will be removed in v0.22\n\n\tThese preferences "
                    "will be ignored by Spack. You can set them only in package-specific sections "
                    "of the same file.\n",
                    "error": False,
                },
            }
        },
        "patternProperties": {
            r"(?!^all$)(^\w[\w-]*)": {  # package name
                "type": "object",
                "default": {},
                "additionalProperties": False,
                "properties": {
                    "require": requirements,
                    "version": {
                        "type": "array",
                        "default": [],
                        # version strings
                        "items": {"anyOf": [{"type": "string"}, {"type": "number"}]},
                    },
                    "target": {},  # Here only to warn users on ignored properties
                    "compiler": {},  # Here only to warn users on ignored properties
                    "buildable": {"type": "boolean", "default": True},
                    "permissions": permissions,
                    # If 'get_full_repo' is promoted to a Package-level
                    # attribute, it could be useful to set it here
                    "package_attributes": package_attributes,
                    "providers": {},  # Here only to warn users on ignored properties
                    "variants": variants,
                    "externals": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "spec": {"type": "string"},
                                "prefix": {"type": "string"},
                                "modules": {"type": "array", "items": {"type": "string"}},
                                "extra_attributes": {"type": "object"},
                            },
                            "additionalProperties": True,
                            "required": ["spec"],
                        },
                    },
                },
                "deprecatedProperties": {
                    "properties": ["target", "compiler", "providers"],
                    "message": "setting 'compiler:', 'target:' or 'provider:' preferences in "
                    "a package-specific section of packages.yaml is deprecated, and will be "
                    "removed in v0.22.\n\n\tThese preferences will be ignored by Spack, and "
                    "can be set only in the 'all' section of the same file. "
                    "You can run:\n\n\t\t$ spack audit configs\n\n\tto get better diagnostics, "
                    "including files:lines where the deprecated attributes are used.\n\n"
                    "\tUse requirements to enforce conditions on specific packages: "
                    f"{REQUIREMENT_URL}\n",
                    "error": False,
                },
            }
        },
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack package configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}


def update(data):
    # Normalize per-package 'version' preference lists: any entries that YAML
    # parsed as numbers (e.g. 3.10) are converted to strings.
    changed = False
    for key in data:
        version = data[key].get("version")
        if not version or all(isinstance(v, str) for v in version):
            continue
        data[key]["version"] = [str(v) for v in version]
        changed = True
    return changed
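
As a concrete illustration of the requirements rules above, the following sketch (not part of Spack) validates two accepted forms of a require: value with the third-party jsonschema package: the single-string shorthand, and a requirement group using one_of.

import jsonschema  # third-party package; assumed available

# The 'requirements' dictionary from the listing above, inlined so this
# sketch is self-contained.
requirements = {
    "oneOf": [
        {
            "type": "array",
            "items": {
                "oneOf": [
                    {
                        "type": "object",
                        "additionalProperties": False,
                        "properties": {
                            "one_of": {"type": "array", "items": {"type": "string"}},
                            "any_of": {"type": "array", "items": {"type": "string"}},
                            "spec": {"type": "string"},
                            "message": {"type": "string"},
                            "when": {"type": "string"},
                        },
                    },
                    {"type": "string"},
                ]
            },
        },
        {"type": "string"},
    ]
}

# Shorthand: a single requirement group with one member.
jsonschema.validate("%gcc@12.3.0", requirements)

# A requirement group where exactly one of the listed specs must hold.
jsonschema.validate(
    [{"one_of": ["+mpi", "~mpi"], "message": "mpi must be either on or off"}],
    requirements,
)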


Properties for inclusion in other schemas

Full schema with metadata


spack.schema.projections module

Schema for projections.yaml configuration file.

#: Properties for inclusion in other schemas
properties = {
    "projections": {"type": "object", "patternProperties": {r"all|\w[\w-]*": {"type": "string"}}}
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack view projection configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}



Full schema with metadata

spack.schema.repos module

Schema for repos.yaml configuration file.

#: Properties for inclusion in other schemas
properties = {"repos": {"type": "array", "default": [], "items": {"type": "string"}}}
#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack repository configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}



Full schema with metadata

spack.schema.spec module

Schema for a spec found in spec descriptor or database index.json files

TODO: This needs to be updated? Especially the hashes under properties.

target = {
    "oneOf": [
        {"type": "string"},
        {
            "type": "object",
            "additionalProperties": False,
            "required": ["name", "vendor", "features", "generation", "parents"],
            "properties": {
                "name": {"type": "string"},
                "vendor": {"type": "string"},
                "features": {"type": "array", "items": {"type": "string"}},
                "generation": {"type": "integer"},
                "parents": {"type": "array", "items": {"type": "string"}},
            },
        },
    ]
}

arch = {
    "type": "object",
    "additionalProperties": False,
    "properties": {"platform": {}, "platform_os": {}, "target": target},
}

dependencies = {
    "type": "object",
    "patternProperties": {
        r"\w[\w-]*": {  # package name
            "type": "object",
            "properties": {
                "hash": {"type": "string"},
                "type": {"type": "array", "items": {"type": "string"}},
            },
        }
    },
}

build_spec = {
    "type": "object",
    "additionalProperties": False,
    "required": ["name", "hash"],
    "properties": {"name": {"type": "string"}, "hash": {"type": "string"}},
}

#: Properties for inclusion in other schemas
properties = {
    "spec": {
        "type": "object",
        "additionalProperties": False,
        "required": ["_meta", "nodes"],
        "properties": {
            "_meta": {"type": "object", "properties": {"version": {"type": "number"}}},
            "nodes": {
                "type": "array",
                "items": {
                    "type": "object",
                    "additionalProperties": False,
                    "required": ["version", "arch", "compiler", "namespace", "parameters"],
                    "properties": {
                        "name": {"type": "string"},
                        "hash": {"type": "string"},
                        "package_hash": {"type": "string"},
                        # these hashes were used on some specs prior to 0.18
                        "full_hash": {"type": "string"},
                        "build_hash": {"type": "string"},
                        "version": {"oneOf": [{"type": "string"}, {"type": "number"}]},
                        "arch": arch,
                        "compiler": {
                            "type": "object",
                            "additionalProperties": False,
                            "properties": {
                                "name": {"type": "string"},
                                "version": {"type": "string"},
                            },
                        },
                        "develop": {"anyOf": [{"type": "boolean"}, {"type": "string"}]},
                        "namespace": {"type": "string"},
                        "parameters": {
                            "type": "object",
                            "required": [
                                "cflags",
                                "cppflags",
                                "cxxflags",
                                "fflags",
                                "ldflags",
                                "ldlibs",
                            ],
                            "additionalProperties": True,
                            "properties": {
                                "patches": {"type": "array", "items": {"type": "string"}},
                                "cflags": {"type": "array", "items": {"type": "string"}},
                                "cppflags": {"type": "array", "items": {"type": "string"}},
                                "cxxflags": {"type": "array", "items": {"type": "string"}},
                                "fflags": {"type": "array", "items": {"type": "string"}},
                                "ldflags": {"type": "array", "items": {"type": "string"}},
                                "ldlib": {"type": "array", "items": {"type": "string"}},
                            },
                        },
                        "patches": {"type": "array", "items": {}},
                        "dependencies": dependencies,
                        "build_spec": build_spec,
                    },
                },
            },
        },
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack spec schema",
    "type": "object",
    "additionalProperties": False,
    "patternProperties": properties,
}


Properties for inclusion in other schemas

Full schema with metadata

spack.schema.upstreams module


{
    '$schema': 'http://json-schema.org/draft-07/schema#',
    'title': 'Spack core configuration file schema',
    'type': 'object',
    'additionalProperties': False,
    'properties': {
        'upstreams': {
            'type': 'object',
            'default': {},
            'patternProperties': {
                '\\w[\\w-]*': {
                    'type': 'object',
                    'default': {},
                    'additionalProperties': False,
                    'properties': {
                        'install_tree': {'type': 'string'},
                        'modules': {
                            'type': 'object',
                            'properties': {
                                'lmod': {'type': 'string'},
                                'tcl': {'type': 'string'},
                            },
                        },
                    },
                },
            },
        },
    },
}
Full schema with metadata

spack.solver package

Submodules

spack.solver.asp module



Bases: object

Object representing a piece of ASP code.


Bases: Mapping

Mapping containing concrete specs keyed by DAG hash.

The mapping is ensured to be consistent, i.e. if a spec in the mapping has a dependency with hash X, it is ensured to be the same object in memory as the spec keyed by X.

Adds a new concrete spec to the mapping. Returns True if the spec was just added, False if the spec was already in the mapping.
spec -- spec to be added
ValueError -- if the spec is not concrete




Bases: NamedTuple

Data class to contain information on declared versions used in the solve

Unique index assigned to this version

Provenance of the version

String representation of the version


Bases: object
Get the cause tree associated with the given cause.
cause -- The root cause of the tree (final condition)
A list of strings describing the causes, formatted to display tree structure.


Handle an error state derived by the solver.






Bases: UnsatisfiableSpecError

Subclass for new constructor signature for new concretizer


Bases: NamedTuple
Alias for field number 0

Alias for field number 1


Bases: tuple

Data class that contains configuration for what a clingo solve should output.

  • timers (bool) -- Print out coarse timers for different solve phases.
  • stats (bool) -- Whether to output Clingo's internal solver statistics.
  • out -- Optional output stream for the generated ASP program.
  • setup_only (bool) -- if True, stop after setup and don't solve (default False).


Alias for field number 2

Alias for field number 3

Alias for field number 1

Alias for field number 0



Bases: object
ASP fact (a rule without a body).
head (AspFunction) -- ASP function to generate as fact





Set up the input and solve for dependencies of specs.
  • setup (SpackSolverSetup) -- An object to set up the ASP problem.
  • specs (list) -- List of Spec objects to solve for.
  • reuse (None or list) -- list of concrete specs that can be reused
  • output (None or OutputConfiguration) -- configuration object to set the output of this solve.
  • control (clingo.Control) -- configuration for the solver. If None, default values will be used
  • allow_deprecated -- if True, allow deprecated versions in the solve

A tuple of the solve result, the timer for the different phases of the solve, and the internal statistics from clingo.




Bases: Enum

Purpose / provenance of a requirement

Default requirement expressed under the 'all' attribute of packages.yaml

Requirement expressed on a specific package

Requirement expressed on a virtual package


Bases: tuple

Data class to collect information on a requirement

Alias for field number 3

Alias for field number 4

Alias for field number 5

Alias for field number 0

Alias for field number 1

Alias for field number 2


Bases: object

Result of an ASP solve.

Format an unsatisfiable core for human readability

Returns a list of strings, where each string is the human readable representation of a single fact in the core, including a newline.

Modeled after traceback.format_stack.


List of facts for each core

Cores are separated by an empty line. Cores are not minimized.


List of facts for each core

Cores are separated by an empty line.


Return a list of subset-minimal unsatisfiable cores.

Return a subset-minimal subset of the core.

Clingo cores may be thousands of lines when two facts are sufficient to ensure unsatisfiability. This algorithm reduces the core to only those essential facts.
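
A minimal sketch of one way to perform such a reduction (not Spack's actual implementation): drop one fact at a time and keep the smaller core whenever the solver still reports unsatisfiability. Here is_unsatisfiable is a hypothetical callback that re-runs the solver on a candidate core.

def minimize_core(core, is_unsatisfiable):
    # Try to drop each fact; keep the reduction if the core stays unsat.
    minimal = list(core)
    i = 0
    while i < len(minimal):
        candidate = minimal[:i] + minimal[i + 1:]
        if is_unsatisfiable(candidate):
            minimal = candidate  # the i-th fact was not essential
        else:
            i += 1  # essential fact: keep it and move on
    return minimal

Because removing facts can only make a problem easier to satisfy, every fact that survives this pass is essential, so the result is subset-minimal.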


Raise an appropriate error if the result is unsatisfiable.

The error is an InternalConcretizerError, and includes the minimized cores resulting from the solve, formatted to be human readable.


List of concretized specs satisfying the initial abstract request.


List of abstract input specs that were not solved.


Bases: object

This is the main external interface class for solving.

It manages solver configuration and preferences in one place. It sets up the solve and passes the setup method to the driver, as well.

Properties of interest:

Whether to try to reuse existing installs/binaries



  • specs (list) -- List of Spec objects to solve for.
  • out -- Optionally write the generated ASP program to a file-like object.
  • timers (bool) -- Print out coarse timers for different solve phases.
  • stats (bool) -- Print out detailed stats from clingo.
  • tests (bool or tuple) -- If True, concretize test dependencies for all packages. If a tuple of package names, concretize test dependencies for named packages (defaults to False: do not concretize test dependencies).
  • setup_only (bool) -- if True, stop after setup and don't solve (default False).
  • allow_deprecated (bool) -- allow deprecated versions in the solve



Solve for a stable model of specs in multiple rounds.

This relaxes the assumption of solve that everything must be consistent and solvable in a single round. Each round tries to maximize the reuse of specs from previous rounds.

The function is a generator that yields the result of each round; a minimal sketch of this loop follows the parameter list below.

  • specs (list) -- list of Specs to solve.
  • out -- Optionally write the generated ASP program to a file-like object.
  • timers (bool) -- print timing if set to True
  • stats (bool) -- print internal statistics if set to True
  • tests (bool) -- add test dependencies to the solve
  • allow_deprecated (bool) -- allow deprecated versions in the solve
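
A minimal sketch of the round-based loop (hypothetical names; not Spack's actual implementation), assuming a single-round solve_once callback whose result exposes the concretized specs and the still-unsolved input specs, as the result class documented below does:

def solve_in_rounds(specs, solve_once):
    reuse = []  # concrete specs accumulated from previous rounds
    unsolved = list(specs)
    while unsolved:
        result = solve_once(unsolved, reuse=reuse)
        if not result.specs:
            break  # no progress was made in this round
        yield result
        reuse.extend(result.specs)  # maximize reuse in the next round
        unsolved = result.unsolved_specs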




Bases: object

Class to set up and run a Spack concretization solve.

Define versions for constraints on virtuals.

Must be called before define_version_constraints().


Facts about available compilers.

Emit facts for reusable specs

Generate facts for a dependency or virtual provider condition.
  • required_spec -- the constraints that triggers this condition
  • imposed_spec -- the constraints that are imposed when this condition is triggered
  • name -- name for required_spec (required if required_spec is anonymous, ignored if not)
  • msg -- description of the condition
  • transform_required -- transformation applied to facts from the required spec. Defaults to leave facts as they are.
  • transform_imposed -- transformation applied to facts from the imposed spec. Defaults to removing "node" and "virtual_node" facts.

id of the condition created by this function
Return type
int



Add concrete versions to possible versions from lists of CLI/dev specs.





Validate variant values from the command line.

Also add valid variant values from the command line to the possible values for a variant.


Define what version_satisfies(...) means in ASP logic.

Flushes all the effect rules collected so far, and clears the cache.

Generate facts to enforce requirements.
rules -- rules for which we want facts to be emitted


Facts on external packages, as read from packages.yaml






Translate 'depends_on' directives into ASP logic.





Output declared versions of a package.

This uses self.declared_versions so that we include any versions that arise from a spec.



Facts on concretization preferences, as read from packages.yaml






Generate an ASP program with relevant constraints for specs.

This calls methods on the solve driver to set up the problem with facts and rules from all possible dependencies of the input specs, as well as constraints from the specs themselves.

  • driver -- driver instance of this solve
  • specs -- list of Specs to solve
  • reuse -- list of concrete specs that can be reused
  • allow_deprecated -- if True adds deprecated versions into the solve



Wrap a call to _spec_clauses() into a try/except block that raises a comprehensible error message in case of failure.

Return list of clauses expressing spec's version constraints.

Add facts about targets and target compatibility.



Flushes all the trigger rules collected so far, and clears the cache.

If package requirements mention concrete versions that are not mentioned elsewhere, then we need to collect those to mark them as possible versions. If they are abstract and statically have no match, then we need to throw an error. This function assumes all possible versions are already registered in self.possible_versions.


Call func(vspec, provider, i) for each of pkg's provider prefs.



Bases: object

Class with actions to rebuild a spec from ASP results.




This means that the external spec at index idx has been selected for this package.



Given a package name, returns the string representation of the "min_dupe_id" node in the ASP encoding.
pkg -- name of a package












Order compiler flags on specs in predefined order.

We order flags so that any node's flags will take priority over those of its dependents. That is, the flags of the deepest node in the DAG will appear last on the compile line, in the order they were specified.

The solver determines which flags are on nodes; this routine imposes order afterwards.


Ensure attributes are evaluated in the correct order.

Hash attributes are handled first, since they imply entire concrete specs. Node attributes are handled next, since they instantiate nodes. external_spec_selected attributes are handled last, so that external extensions can find the concrete specs on which they depend, because all nodes are fully constructed before we consider which ones are external.






Bases: UnsatisfiableSpecError

Subclass for new constructor signature for new concretizer







Construct an ordered mapping from criteria names to costs.

Priority offset for "build" criteria (regular criteria shifted to higher priority for specs we have to build)

Ensure all packages mentioned in specs exist.

Return a control object with the default settings used in Spack

Extend a list of flags, preserving order and precedence.

Add new_flags at the end of flag_list. If any flags in new_flags are already in flag_list, they are moved to the end so that they take higher precedence on the compile line.
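
A minimal sketch of that behavior (assuming plain Python lists of flag strings):

def extend_flag_list(flag_list, new_flags):
    # Append new_flags, moving any duplicates to the end so that they
    # take higher precedence on the compile line.
    for flag in new_flags:
        if flag in flag_list:
            flag_list.remove(flag)
        flag_list.append(flag)

# extend_flag_list(["-O2", "-g"], ["-g", "-Wall"]) leaves ["-O2", "-g", "-Wall"]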


Extract the arguments to predicates with the provided name from a model.

Pull out all the predicates with name predicate_name from the model, and return their intermediate representation.


Priority offset of "fixed" criteria (those w/o build criteria)

High fixed priority offset for criteria that supersede all build criteria

Returns an intermediate representation of clingo models for Spack's spec builder.

Currently, transforms symbols from clingo models either to strings or to NodeArgument objects.

This will turn a clingo.Symbol into a string or NodeArgument, or a sequence of clingo.Symbol objects into a tuple of those objects.





Transformation that removes all "node" and "virtual_node" from the input list of facts.


spack.solver.counter module

Bases: object

Computes the possible packages and the maximum number of duplicates allowed for each of them.

  • specs -- abstract specs to concretize
  • tests -- if True, add test dependencies to the list of possible packages


Ensure the cache values have been computed

Returns the list of possible dependencies

Emit facts associated with the possible packages

Returns the list of possible virtuals


Bases: MinimalDuplicatesCounter
Emit facts associated with the possible packages


Bases: NoDuplicatesCounter
Emit facts associated with the possible packages



spack.util package

Subpackages

spack.util.unparse package


Submodules

spack.util.unparse.unparser module

Usage: unparse.py <path to source file>

Bases: object

Methods in this class recursively traverse an AST and output source code for the abstract syntax; original formatting is disregarded.








A context manager for preparing the source for expressions. It adds start to the buffer on entry and adds end on exit.


Dispatcher function, dispatching tree type T to method _T.

Indent a piece of text, according to the current indentation level


Traverse the given items, separating them with commas, and append them to the buffer. If items is a single-item sequence, a trailing comma is added.

Shortcut to adding precedence related parens




Traverse tree and write source code to output_file.




Append a piece of text to the current line.


Call f on each item in seq, calling inter() in between.
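
A minimal sketch of this helper (the pattern used by Python's own AST unparsers; names assumed):

def interleave(inter, f, seq):
    # Call f on each item in seq, calling inter() between items.
    seq = iter(seq)
    try:
        f(next(seq))
    except StopIteration:
        return
    for item in seq:
        inter()
        f(item)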




Submodules

spack.util.classes module

Given a parent path (e.g., spack.platforms or spack.analyzers), use list_modules to derive the module names, and then mod_to_class to derive the class names. Import the classes and return them in a list.

spack.util.compression module

Bases: CompressedFileTypeInterface
This method decompresses and loads the first 200 or so bytes of a compressed file to check for compressed archives. This does not decompress the entire file and should not be used for direct expansion of archives/compressed files




Bases: FileTypeInterface

Interface class for FileTypes that include compression information


This method decompresses and loads the first 200 or so bytes of a compressed file to check for compressed archives. This does not decompress the entire file and should not be used for direct expansion of archives/compressed files


Bases: object

Base interface class for describing and querying file type information. FileType describes information about a single file type, such as its extension and byte header properties, and provides an interface to check a given file against said type based on magic number.

This class should be subclassed each time a new type is to be described.

Note: This class should not be used directly as it does not define any specific file. Attempts to directly use this class will fail, as it does not define a magic number or extension string.

Subclasses should each describe a different type of file. In order to do so, they must define the extension string, magic number, and header offset (if non-zero). If a class has multiple magic numbers, it will need to override the method describing that file type's magic numbers and the method that checks a type's magic numbers against a given file's.



Return size of largest magic number associated with file type

Query byte stream for appropriate magic number
iostream -- file byte stream
A bool denoting whether the file is of this class's file type, based on its magic number.


Return a list of all potential magic numbers for a filetype



Bases: CompressedFileTypeInterface
This method decompresses and loads the first 200 or so bytes of a compressed file to check for compressed archives. This does not decompress the entire file and should not be used for direct expansion of archives/compressed files




Bases: CompressedFileTypeInterface
This method decompresses and loads the first 200 or so bytes of a compressed file to check for compressed archives. This does not decompress the entire file and should not be used for direct expansion of archives/compressed files





Bases: CompressedFileTypeInterface
This method decompresses and loads the first 200 or so bytes of a compressed file to check for compressed archives. This does not decompress the entire file and should not be used for direct expansion of archives/compressed files





Returns the appropriate decompression/extraction algorithm function pointer for the provided extension. If extension is None, it is computed from the path, and the decompression function is derived from that information.

Returns a function pointer to the appropriate decompression algorithm based on extension type and Unix-specific considerations, i.e., a reasonable expectation that system utilities like gzip, bzip2, and xz are available.
path (str) -- path of the archive file requiring decompression


Returns a function pointer to appropriate decompression algorithm based on extension type and Windows specific considerations

Windows natively vendors only tar and no other archive/compression utilities, so we must rely exclusively on Python module support for all compression operations: tar for tarballs and zip files, and 7zip for Z-compressed archives and files, as Python does not provide support for the UNIX compress algorithm.

  • path (str) -- path of the archive file requiring decompression
  • extension (str) -- extension



Return the extension from an archive file path. The extension is derived based on magic number parsing, similar to the file utility. Attempts to return abbreviated file extensions whenever a file has an abbreviated extension such as .tgz or .txz. This distinction in abbreviated extension names is accomplished by string parsing.
  • file (os.PathLike) -- path describing the file on the system for which the extension will be determined.
  • decompress (bool) -- If True, method will peek into compressed files to check for archive file types. default is False. If false, method will be unable to distinguish .tar.gz from .gz or similar.

The extension as determined from the file name. If the file is not on the system, or is of a type not recognized by Spack as an archive or compression type, None is returned.



Return the extension represented by a stream corresponding to an archive file. If the stream does not represent an archive type recognized by Spack (see spack.util.compression.ALLOWED_ARCHIVE_TYPES), the method will return None.

Extension type is derived by searching for identifying bytes in file stream.

  • stream -- stream representing a file on system
  • decompress (bool) -- if True, compressed files are checked for archive types beneath the compression, i.e. tar.gz (default is False); otherwise, return the top-level type, i.e. gz







spack.util.cpus module

Returns the number of CPUs available to the current process, or the number of physical CPUs when that information cannot be retrieved. The number of available CPUs might differ from the number of physical CPUs when using Spack through Slurm or container runtimes.

Packages that require sequential builds need one job. Otherwise we use the number of jobs set on the command line. If not set, we use the config defaults (usually set through the built-in config scope), but we cap to the number of CPUs available to avoid oversubscription; see the sketch after this list.
  • parallel -- true when package supports parallel builds
  • max_cpus -- maximum number of CPUs to use (defaults to cpus_available())
  • config -- configuration object (defaults to global config)
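
A minimal sketch of that policy (hypothetical parameter names; config_default stands in for the value read from Spack's configuration):

import os

def cpus_available():
    # CPUs usable by this process; falls back to the total count where
    # affinity information is unavailable (e.g. on macOS or Windows).
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:
        return os.cpu_count() or 1

def determine_number_of_jobs(parallel, command_line_jobs=None, config_default=16, max_cpus=None):
    if not parallel:
        return 1  # sequential builds get exactly one job
    if command_line_jobs is not None:
        return command_line_jobs  # an explicit setting wins
    if max_cpus is None:
        max_cpus = cpus_available()
    return min(config_default, max_cpus)  # cap to avoid oversubscription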



spack.util.crypto module

Bases: object

A checker checks files against one particular hex digest. It will automatically determine which hashing algorithm to use based on the length of the digest it's initialized with; e.g., if the digest is 32 hex characters long, it will use md5.

Example: you know your tarball should hash to 'abc123' and you want to check files against this. You would use this class like so:

hexdigest = 'abc123'
checker = Checker(hexdigest)
success = checker.check('downloaded.tar.gz')


After the call to check, the actual checksum is available in checker.sum, in case it's needed for error output.

You can trade off read performance against memory usage by adjusting the optional block_size argument. By default it's a 1MB (2**20 bytes) buffer.

Read the file with the specified name and check its checksum against self.hexdigest. Return True if they match, False otherwise. Actual checksum is stored in self.sum.

Get the name of the hash function this Checker is using.
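
A minimal sketch of the digest-length lookup described above (the length table is standard for hashlib's algorithms; the function name is assumed):

import hashlib

# Hex digest length -> hashlib algorithm name
_LENGTH_TO_ALGO = {32: "md5", 40: "sha1", 56: "sha224", 64: "sha256", 96: "sha384", 128: "sha512"}

def hash_fun_for_digest(hexdigest):
    # e.g. a 64-character digest selects hashlib.sha256
    return getattr(hashlib, _LENGTH_TO_ALGO[len(hexdigest)])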



Number of bits required to represent an integer in binary.

Returns a hex digest of the filename generated using an algorithm from hashlib.


Gets name of the hash algorithm for a hex digest.

Get a function that can perform the specified hash algorithm.


Set of hash algorithms that Spack can use, mapped to digest size in bytes

Return the first <bits> bits of a byte array as an integer.

spack.util.debug module

Debug signal handler: prints a stack trace and enters interpreter.

register_interrupt_handler() enables a ctrl-C handler that prints a stack trace and drops the user into an interpreter.

Bases: Pdb

This class allows the python debugger to follow forked processes and can set tracepoints allowing the Python Debugger Pdb to be used from a python multiprocessing child process.

This is used the same way one would normally use Pdb: simply import this class and use it as a drop-in replacement for Pdb, although the syntax here is slightly different, requiring instantiation of this class, i.e. ForkablePdb().set_trace().

This should be used when attempting to call a debugger from a child process spawned by Python multiprocessing, such as during the run of Spack.install, or anywhere else Spack spawns a child process.


Interrupt running process, and provide a python prompt for interactive debugging.

Print traceback and enter an interpreter on Ctrl-C

spack.util.editor module

Module for finding the user's preferred text editor.

Defines one function, editor(), which invokes the editor defined by the user's VISUAL environment variable if set. We fall back to the editor defined by the EDITOR environment variable if VISUAL is not set or the specified editor fails (e.g. no DISPLAY for a graphical editor). If neither variable is set, we fall back to one of several common editors, raising an EnvironmentError if we are unable to find one.

Invoke the user's editor.

This will try to execute the following, in order:

1.
$VISUAL <args> # the "visual" editor (per POSIX)
2.
$EDITOR <args> # the regular editor (per POSIX)
3.
some default editor (see _default_editors) with <args>



If an environment variable isn't defined, it is skipped. If it points to something that can't be executed, we'll print a warning. And if we can't find anything that can be executed after searching the full list above, we'll raise an error. A sketch of this fallback order follows the argument list below.

args -- args to pass to editor

exec_fn -- function used to run the editor; use executable() if you want something that returns, instead of the default os.execv().
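
A minimal sketch of the fallback order (not Spack's actual code): try_exec below is a hypothetical callback that attempts to launch one editor and reports success.

import os

_default_editors = ["vim", "vi", "emacs", "nano"]

def editor(*args, try_exec):
    candidates = [os.environ.get("VISUAL"), os.environ.get("EDITOR"), *_default_editors]
    for candidate in candidates:
        if candidate and try_exec(candidate, *args):  # unset variables are skipped
            return
    raise EnvironmentError("No text editor found! Set VISUAL or EDITOR.")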



Wrapper that makes spack.util.executable.Executable look like os.execv().

Use this with editor() if you want it to return instead of running execv.


spack.util.elf module




Bases: tuple
Alias for field number 7

Alias for field number 3

Alias for field number 6

Alias for field number 1

Alias for field number 8

Alias for field number 9

Alias for field number 4

Alias for field number 10

Alias for field number 11

Alias for field number 5

Alias for field number 12

Alias for field number 0

Alias for field number 2



Bases: tuple
Alias for field number 7

Alias for field number 4

Alias for field number 6

Alias for field number 5

Alias for field number 1

Alias for field number 3

Alias for field number 0

Alias for field number 2


Bases: tuple
Alias for field number 7

Alias for field number 5

Alias for field number 1

Alias for field number 6

Alias for field number 2

Alias for field number 4

Alias for field number 0

Alias for field number 3


Bases: tuple
Alias for field number 3

Alias for field number 8

Alias for field number 9

Alias for field number 2

Alias for field number 7

Alias for field number 6

Alias for field number 0

Alias for field number 4

Alias for field number 5

Alias for field number 1


Modifies a binary to remove the rpath. It zeros out the rpath string and also drops the DT_R(UN)PATH entry from the dynamic section, so it doesn't show up in 'readelf -d file', nor in 'strings file'.

Retrieve the size of a string table section at a particular known offset
  • f -- file handle
  • elf (ElfFile) -- ELF file parser data
  • offset (int) -- offset of the section in the file (i.e. sh_offset)

the size of the string table in bytes
Return type
int


Returns list of rpaths of the given file as UTF-8 strings, or None if the file does not have any rpaths.

Retrieve a C-string at a given offset in a byte string
  • byte_string (bytes) -- String
  • start (int) -- Offset into the string

A copy of the C-string excluding the terminating null byte
Return type
bytes
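
A minimal sketch of this helper:

def parse_c_string(byte_string: bytes, start: int = 0) -> bytes:
    # Find the terminating NUL and return everything before it; raises
    # ValueError if the string is not NUL-terminated.
    end = byte_string.index(b"\x00", start)
    return byte_string[start:end]

# parse_c_string(b"/usr/lib\x00/opt/lib\x00", 9) == b"/opt/lib"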


Given a file handle f for an ELF file opened in binary mode, return an ElfFile object that stores data about its rpaths.


Parse program headers
  • f -- file handle
  • elf (ElfFile) -- ELF file parser data



Parse the dynamic section of an ELF file
  • f -- file handle
  • elf (ElfFile) -- ELF file parse data



Parse the interpreter (i.e. absolute path to the dynamic linker)
  • f -- file handle
  • elf (ElfFile) -- ELF file parser data



Read exactly num_bytes at the current offset, otherwise raise a parsing error with the given error message.
  • f -- file handle
  • num_bytes (int) -- Number of bytes to read
  • msg (str) -- Error to show when bytes cannot be read

the num_bytes bytes that were read.
Return type
bytes



Read a full string table at the given offset, which requires looking it up in the section headers.
  • elf (ElfFile) -- ELF file parser data
  • vaddr (int) -- virtual address

file offset
Return type
bytes


Given a virtual address, find the corresponding offset in the ELF file itself.
  • elf (ElfFile) -- ELF file parser data
  • vaddr (int) -- virtual address



spack.util.environment module

Set, unset or modify environment variables.




Bases: object

Keeps track of requests to modify the current environment.

Stores a request to append 'flags' to an environment variable.
  • name -- name of the environment variable
  • value -- flags to be appended
  • sep -- separator for the flags (default: " ")



Stores a request to append a path to list of paths.
  • name -- name of the environment variable
  • path -- path to be appended
  • separator -- separator for the paths (default: os.pathsep)



Applies the modifications and clears the list.
env -- environment to be modified. If None, os.environ will be used.


Clears the current list of modifications.

Stores a request to deprioritize system paths in a path list, otherwise preserving the order.
  • name -- name of the environment variable
  • separator -- separator for the paths (default: os.pathsep)



Drop all modifications to the variable with the given name.


Constructs the environment modifications from the diff of two environments.
  • before -- environment before the modifications are applied
  • after -- environment after the modifications are applied
  • clean -- in addition to removing empty entries, also remove duplicate entries



Returns the environment modifications that have the same effect as sourcing the input file.
  • filename -- the file to be sourced
  • *arguments -- arguments to pass on the command line

  • shell (str) -- the shell to use (default: bash)
  • shell_options (str) -- options passed to the shell (default: -c)
  • source_command (str) -- the command to run (default: source)
  • suppress_output (str) -- redirect used to suppress output of command (default: &> /dev/null)
  • concatenate_on_success (str) -- operator used to execute a command only when the previous command succeeds (default: &&)
  • exclude ([str or re]) -- ignore any modifications of these variables (default: [])
  • include ([str or re]) -- always respect modifications of these variables (default: []). Supersedes any excluded variables.
  • clean (bool) -- in addition to removing empty entries, also remove duplicate entries (default: False).



Returns a dict of the current modifications keyed by variable name.

Returns True if the last modification to a variable is to unset it, False otherwise.

Stores a request to prepend a path to list of paths.
  • name -- name of the environment variable
  • path -- path to be prepended
  • separator -- separator for the paths (default: os.pathsep)



Stores a request to remove duplicates from a path list, otherwise preserving the order.
  • name -- name of the environment variable
  • separator -- separator for the paths (default: os.pathsep)



Stores a request to remove flags from an environment variable
  • name -- name of the environment variable
  • value -- flags to be removed
  • sep -- separator for the flags (default: " ")



Stores a request to remove a path from a list of paths.
  • name -- name of the environment variable
  • path -- path to be removed
  • separator -- separator for the paths (default: os.pathsep)



Returns the EnvironmentModifications object that will reverse self

Only creates reversals for additions to the environment, as reversing unset and remove_path modifications is impossible.

Reversible operations are set(), prepend_path(), append_path(), set_path(), and append_flags().


Stores a request to set an environment variable.
  • name -- name of the environment variable
  • value -- value of the environment variable
  • force -- if True, audit will not consider this modification a warning
  • raw -- if True, format of value string is skipped



Stores a request to set an environment variable to a list of paths, separated by a character defined in input.
  • name -- name of the environment variable
  • elements -- ordered list paths
  • separator -- separator for the paths (default: os.pathsep)




Stores a request to unset an environment variable.
name -- name of the environment variable

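A hypothetical usage sketch of the modification-tracking API documented above (assuming Spack's libraries are importable):

from spack.util.environment import EnvironmentModifications

env = EnvironmentModifications()
env.set("CC", "/usr/bin/gcc")              # request: set a variable
env.append_path("PATH", "/opt/tools/bin")  # request: extend a path list
env.append_flags("CFLAGS", "-O2")          # request: append a flag
env.apply_modifications()                  # apply to os.environ by default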


Bases: object

Base class for modifiers that act on the environment variable as a whole, and thus store just its name

Apply the modification to the mapping passed as input





Bases: object

Base class for modifiers that modify the value of an environment variable.

Apply the modification to the mapping passed as input














Reorders input paths by putting system paths at the end of the list, otherwise preserving order.

Return a shell-escaped version of the string s.

This is similar to how shlex.quote works, but it escapes with double quotes instead of single quotes, to allow environment variable expansion within quoted strings.


Dump an environment dictionary to a source-able file.
  • path -- path of the file to write
  • environment -- environment to be written. If None, os.environ is used.



Given the name of an environment variable, returns True if it is set to 'true' or to '1', False otherwise.

Returns a dictionary with the environment that one would have after sourcing the files passed as argument.
*files -- each item can either be a string containing the path of the file to be sourced or a sequence, where the first element is the file to be sourced and the remaining are arguments to be passed to the command line
  • env (dict) -- the initial environment (default: current environment)
  • shell (str) -- the shell to use (default: /bin/bash or cmd.exe (Windows))
  • shell_options (str) -- options passed to the shell (default: -c or /C (Windows))
  • source_command (str) -- the command to run (default: source)
  • suppress_output (str) -- redirect used to suppress output of command (default: &> /dev/null)
  • concatenate_on_success (str) -- operator used to execute a command only when the previous command succeeds (default: &&)



Returns a copy of the input where system paths are filtered out.

Given the name of an environment variable containing multiple paths separated by 'os.pathsep', returns a list of the paths.

Inspects root to search for the subdirectories in inspections. Adds every path found to a list of prepend-path commands and returns it.
  • root -- absolute path where to search for subdirectories
  • inspections -- maps relative paths to a list of environment variables that will be modified if the path exists. The modifications are not performed immediately, but stored in a command object that is returned to client
  • exclude -- optional callable. If present it must accept an absolute path and return True if it should be excluded from the inspection


Examples:

The following lines execute an inspection in /usr to search for /usr/include and /usr/lib64. If found, we want to prepend /usr/include to CPATH and /usr/lib64 to MY_LIB64_PATH.

# Set up the dictionary containing the inspection
inspections = {
    'include': ['CPATH'],
    'lib64': ['MY_LIB64_PATH'],
}

# Get back the list of commands needed to modify the environment
env = inspect_path('/usr', inspections)

# Eventually execute the commands
env.apply_modifications()





Returns True if the argument is a system path, False otherwise.

Puts the provided directories first in the path, adding them if they're not already there.

Sets the variable passed as input to the os.pathsep joined list of directories.


Ensures that the value of the environment variables passed as arguments is the same before entering the context manager and after exiting it.

Variables that are unset before entering the context manager will be explicitly unset on exit.

variables -- list of environment variables to be preserved


Returns the input list with duplicates removed, otherwise preserving order.
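
A minimal sketch of order-preserving deduplication as described:

def dedupe(sequence):
    seen = set()
    result = []
    for item in sequence:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# dedupe(["/usr/bin", "/bin", "/usr/bin"]) -> ["/usr/bin", "/bin"]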

Returns a copy of the input dictionary where all the keys that match an excluded pattern and don't match an included pattern are removed.
  • environment (dict) -- input dictionary
  • exclude (list) -- literals or regex patterns to be excluded
  • include (list) -- literals or regex patterns to be included



Temporarily sets and restores environment variables.

Variables can be set as keyword arguments to this function.


Decorator wrapping calls to system environment modifications: on Windows, it converts all environment variable names to upper case before calling the wrapped env-modification method; on other platforms it is a no-op.

Windows, due to a DOS holdover, treats all env variable names case-insensitively; Spack's env modification class does not. Setting Path and PATH would therefore be distinct env operations for Spack, but would collide when the modifications are actually applied to the environment. Normalizing all env names to upper case prevents this collision on the Spack side.
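A minimal sketch of the idea (names are illustrative, not Spack's actual code):

import sys
from functools import wraps

def normalize_env_name(method):
    @wraps(method)
    def wrapper(self, name, *args, **kwargs):
        if sys.platform == "win32":
            # Path and PATH become the same operation on Windows
            name = name.upper()
        return method(self, name, *args, **kwargs)
    return wrapper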


Validates the environment modifications to check for the presence of suspicious patterns. Emits a warning for everything that was found.

Current checks:
  • set or unset variables after other changes on the same variable

  • env -- list of environment modifications
  • errstream -- callable to log error messages



spack.util.executable module

Bases: object

Class representing a program that can be run on the command line.

Add default argument(s) to the command.

Set an environment variable when the command is run.
  • key -- The environment variable to set
  • value -- The value to set it to



Set an EnvironmentModifications to use when the command is run.

The command-line string.
The executable and default arguments
Return type
str


The executable name.
The basename of the executable
Return type
str


The path to the executable.
The path to the executable
Return type
str



Bases: SpackError

ProcessErrors are raised when Executables exit with an error code.


Finds an executable in the path, like the command-line which.

If given multiple executables, returns the first one that is found. If no executables are found, returns None.

*args (str) -- One or more executables to search for
  • path (list or str) -- The path to search. Defaults to PATH
  • required (bool) -- If set to True, raise an error if executable not found

The first executable that is found in the path
Return type
Executable
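A hedged usage sketch (assumes a POSIX system with echo on PATH):

from spack.util.executable import which

echo = which("echo", required=True)          # raises if echo cannot be found
output = echo("hello", "world", output=str)  # capture stdout as a string
print(output.strip())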


spack.util.file_cache module


Bases: object

This class manages cached data in the filesystem.

  • Cache files are fetched and stored by unique keys. Keys can be relative paths, so that there can be some hierarchy in the cache.
  • The FileCache handles locking cache files for reading and writing, so client code need not manage locks for cache entries.

Path to the file in the cache for a particular key.

Remove all files under the cache root.

Ensure we can access a cache file. Create a lock for it if needed.

Return whether the cache file exists yet or not.


Return modification time of cache file, or -inf if it does not exist.

Time is in units returned by os.stat in the mtime field, which is platform-dependent.


Get a read transaction on a file cache item.

Returns a ReadTransaction context manager and opens the cache file for reading. You can use it like this:
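with file_cache_object.read_transaction(key) as cache_file:
    cache_file.read()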




Get a write transaction on a file cache item.

Returns a WriteTransaction context manager that opens a temporary file for writing. Once the context manager finishes, if nothing went wrong, moves the file into place on top of the old file atomically.
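A hedged usage sketch of the two transactions together (the cache root and key are illustrative; the write transaction is assumed to yield the old and new file objects):

from spack.util.file_cache import FileCache

cache = FileCache("/tmp/demo-cache")

with cache.write_transaction("group/data.txt") as (old, new):
    new.write("hello\n")  # written to a temp file, moved into place on success

with cache.read_transaction("group/data.txt") as f:
    print(f.read())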



spack.util.file_permissions module

Bases: SpackError

Error class for invalid permission setters




spack.util.format module

Renders out a set of versions like those found in a package's package.py file for a given set of versions and hashes.
  • version_hashes_dict (dict) -- A dictionary of the form: version -> checksum.
  • url_dict (dict) -- A dictionary of the form: version -> URL.

Rendered version lines.
Return type
(str)


spack.util.gcs module

This file contains the definition of the GCS Blob storage class used to integrate GCS Blob storage with the Spack buildcache.


Bases: object

GCS Bucket Object. Creates a wrapper object for a GCS Bucket. Provides methods to wrap Spack-related tasks, such as destroy.



Bucket destruction method

Deletes all blobs within the bucket, and then deletes the bucket itself.

Uses GCS Batch operations to bundle several delete operations together.



Get a list of all blobs. Returns a list of all blobs within this bucket.
relative -- if True (default), return blob paths relative to the 'build_cache' directory; if False, return absolute blob paths (useful for destruction of the bucket)







Create a GCS client. Creates an authenticated GCS client to access GCS buckets and blobs.

Open a reader stream to a blob object on GCS

spack.util.git module

Single util module where Spack should get a git executable.

Get a git executable.
required -- if True, fail if git is not found. By default return None.


spack.util.gpg module

GNUPGHOME environment variable in the context of this Python module

Executable instance for "gpg", initialized lazily

Executable instance for "gpgconf", initialized lazily

Socket directory required if a non-default home directory is used

Bases: SpackError

Class raised when GPG errors are detected.


Reset the global state to uninitialized.

Create a new key pair.

Export public keys to a location passed as argument.
  • location (str) -- where to export the keys
  • keys (list) -- keys to be exported
  • secret (bool) -- whether to export secret keys or not



Set the GNUPGHOME to a new location for this context.
dir (str) -- new value for GNUPGHOME


Initialize the global objects in the module, if not set.

When calling any gpg executable, the GNUPGHOME environment variable is set to:

1. The value of the gnupghome argument, if not None
2. The value of the "SPACK_GNUPGHOME" environment variable, if set
3. The default gpg path for Spack otherwise

  • gnupghome (str) -- value to be used for GNUPGHOME when calling GnuPG executables
  • force (bool) -- if True forces the re-initialization even if the global objects are set already



List known keys.
  • trusted (bool) -- if True list public keys
  • signing (bool) -- if True list private keys



Return a list of fingerprints

Return the keys that can be used to verify binaries.

Sign a file with a key.
  • key -- key to be used to sign
  • file (str) -- file to be signed
  • output (str) -- output file (either the clearsigned file or the detached signature)
  • clearsign (bool) -- if True wraps the document in an ASCII-armored signature, if False creates a detached signature



Return the keys that can be used to sign binaries.

Import a public key from a file and trust it.
keyfile (str) -- file with the public key


Delete known keys.
  • signing (bool) -- if True deletes the secret keys
  • *keys -- keys to be deleted



Verify the signature on a file.
  • signature (str) -- signature of the file (or clearsigned file)
  • file (str) -- file to be verified. If None, then signature is assumed to be a clearsigned file.
  • suppress_warnings (bool) -- whether or not to suppress warnings from GnuPG
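A hedged usage sketch of this module (the key id and file names are illustrative):

import spack.util.gpg as gpg

gpg.init()  # set up the gpg executables and GNUPGHOME
gpg.sign("MY_KEY_ID", "artifact.tar.gz", "artifact.tar.gz.asc", clearsign=False)
gpg.verify("artifact.tar.gz.asc", "artifact.tar.gz")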



spack.util.hash module

Return the b32 encoded sha1 hash of the input string as a string.

Return the first <bits> bits of a base32 string as an integer.
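A sketch of the likely computation for the first function (assuming the conventional lower-cased base32 form used in Spack hashes):

import base64
import hashlib

def b32_sha1(content: str) -> str:
    digest = hashlib.sha1(content.encode("utf-8")).digest()
    return base64.b32encode(digest).lower().decode("utf-8")

print(b32_sha1("example"))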

spack.util.ld_so_conf module


Retrieve the current host runtime search paths for shared libraries; for GNU and musl Linux we try to retrieve the dynamic linker from the current Python interpreter and then find the corresponding config file (e.g. ld.so.conf or ld-musl-<arch>.path). Similar can be done for BSD and others, but this is not implemented yet. The default paths are always returned. We don't check if the listed directories exist.

Parse glibc style ld.so.conf file, which specifies default search paths for the dynamic linker. This can in principle also be used for musl libc.
conf_file (str or bytes) -- Path to config file
List of absolute search paths
Return type
list


spack.util.lock module

Wrapper for llnl.util.lock that allows locking to be enabled/disabled.

Bases: Lock

Lock that can be disabled.

This overrides the _lock() and _unlock() methods from llnl.util.lock so that all the lock API calls will succeed, but the actual locking mechanism can be disabled via _enable_locks.



Do some extra checks to ensure disabling locks is safe.

This will raise an error if the path is group- or world-writable AND the current user can write to the directory (i.e., if this user AND others could write to the path).

This is intended to run on the Spack prefix, but can be run on any path for testing.


spack.util.log_parse module

Get error context from a log file.
  • log_events (list) -- list of events created by ctest_log_parser.parse()
  • width (int or None) -- wrap width; 0 for no limit; None to auto-size for terminal

context from the build log with errors highlighted
Return type
str

Parses the log file for lines containing errors, and prints them out with line numbers and context. Errors are highlighted with '>>' and in red (if color is enabled).

Events are sorted by line number before they are displayed.


Extract interesting events from a log file as a list of LogEvent.
  • stream (str or IO) -- build log name or file object
  • context (int) -- lines of context to extract around each log event
  • jobs (int) -- number of jobs to parse with; default ncpus
  • profile (bool) -- print out profile information for parsing

Two lists containing BuildError and BuildWarning objects.

Return type
(tuple)

This is a wrapper around ctest_log_parser.CTestLogParser that lazily constructs a single CTestLogParser object. This ensures that all the regex compilation is only done once.


spack.util.module_cmd module

This module contains routines related to the module command for accessing and parsing environment modules.



Takes a module name and removes modules until it is possible to load that module. It then loads the provided module. Depends on the modulecmd implementation of modules used in Cray and Lmod.


Inspect a list of Tcl modules for entries that indicate the absolute path at which the library supported by said module can be found.
modules (list) -- module files to be loaded to get an external package
Guess of the prefix path where the package was installed


spack.util.naming module

Bases: object
Bases: object

True if there is a value set for the given namespace.

True if this namespace has no children in the trie.

True if the namespace has a value, or if it's the prefix of one that does.


Convert a name from module style to class name style. Spack mostly follows PEP-8:
  • Module and package names use lowercase_with_underscores.
  • Class names use the CapWords convention.



Regular source code follows these conventions. Spack is a bit more liberal with its Package names and Compiler names:

  • They can contain '-' as well as '_', but cannot start with '-'.
  • They can start with numbers, e.g. "3proxy".



This function converts from the module convention to the class convention by removing _ and - and converting surrounding lowercase text to CapWords. If mod_name starts with a number, the class name returned will be prepended with '_' to make a valid Python identifier.
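For example (a hedged sketch, assuming the converter is spack.util.naming.mod_to_class):

from spack.util.naming import mod_to_class

print(mod_to_class("py-numpy"))  # -> PyNumpy
print(mod_to_class("3proxy"))    # -> _3proxy (leading digit gets a '_' prefix)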


Given a Python module name, return a list of all possible spack module names that could correspond to it.

Simplify package name to only lowercase, digits, and dashes.

Simplifies a name which may include uppercase letters, periods, underscores, and pluses. In general, we want our package names to only contain lowercase letters, digits, and dashes.

name (str) -- The original name of the package
The new name of the package
Return type
str


Given a Spack module name, returns the name by which it can be imported in Python.

Return whether mod_name is a valid namespaced module name.

Return whether mod_name is valid for use in Spack.

Raise an exception if mod_name is not a valid namespaced module name.

Raise an exception if mod_name is not valid.

spack.util.package_hash module

Bases: SpackError

Raised for all errors encountered during package hashing.


Bases: NodeTransformer

Remove Spack directives from a package AST.

This removes Spack directives (e.g., depends_on, conflicts, etc.) and metadata attributes (e.g., tags, homepage, url) in a top-level class definition within a package.py, but it does not modify nested classes or functions.

If removing directives causes a for, with, or while statement to have an empty body, we remove the entire statement. Similarly, if removing directives causes an if statement to have an empty body or else block, we'll remove the block (or replace the body with pass if there is an else block but no body).










Bases: NodeTransformer

Transformer that removes docstrings from a Python AST.

This removes all strings that aren't on the RHS of an assignment statement from the body of functions, classes, and modules -- even if they're not directly after the declaration.






Bases: NodeTransformer

Remove multi-methods when we know statically that they won't be used.

Say we have multi-methods like this:

class SomePackage:
    def foo(self): print("implementation 1")

    @when("@1.0")
    def foo(self): print("implementation 2")

    @when("@2.0")
    @when(sys.platform == "darwin")
    def foo(self): print("implementation 3")

    @when("@3.0")
    def foo(self): print("implementation 4")


The multimethod that will be chosen at runtime depends on the package spec and on whether we're on the darwin platform at build time (the darwin condition for implementation 3 is dynamic). We know the package spec statically; we don't know statically what the runtime environment will be. We need to include things that can possibly affect package behavior in the package hash, and we want to exclude things when we know that they will not affect package behavior.

If we're at version 4.0, we know that implementation 1 will win, because some @when for 2, 3, and 4 will be False. We should only include implementation 1.

If we're at version 1.0, we know that implementation 2 will win, because it overrides implementation 1. We should only include implementation 2.

If we're at version 3.0, we know that implementation 4 will win, because it overrides implementation 1 (the default), and some @when on all others will be False.

If we're at version 2.0, it's a bit more complicated. We know we can remove implementations 2 and 4, because their @when's will never be satisfied. But, the choice between implementations 1 and 3 will happen at runtime (this is a bad example because the spec itself has platform information, and we should prefer to use that, but we allow arbitrary boolean expressions in @when's, so this example suffices). For this case, we end up needing to include both implementation 1 and 3 in the package hash, because either could be chosen.

Given a list of nodes and conditions, figure out which node will be chosen.



Bases: NodeVisitor

Tag @when-decorated methods in a package AST.



Get canonical source for a spec's package.py by unparsing its AST.
  • filter_multimethods (bool) -- By default, filter multimethods out of the AST if they are known statically to be unused. Supply False to disable.
  • source (str) -- Optionally provide a string to read python code from.



Get the AST for the package.py file corresponding to spec.
  • filter_multimethods (bool) -- By default, filter multimethods out of the AST if they are known statically to be unused. Supply False to disable.
  • source (str) -- Optionally provide a string to read python code from.



Get a hash of a package's canonical source code.

This function is used to determine whether a spec needs a rebuild when a package's source code changes.

source (str) -- Optionally provide a string to read python code from.


spack.util.parallel module

Bases: object

Wrapper class to report an error from a worker process



Bases: object

Wrapped task that traps every Exception and returns it as an ErrorFromWorker object.

We are using a wrapper class instead of a decorator since the class is pickleable, while a decorator with an inner closure is not.


Wrapper around multiprocessing.Pool.imap_unordered.
  • f -- function to apply
  • list_of_args -- list of tuples of args for the task
  • processes -- maximum number of processes allowed
  • debug -- if False, raise an exception containing just the error messages from workers, if True an exception with complete stacktraces
  • maxtaskperchild -- number of tasks to be executed by a child before being killed and replaced

RuntimeError -- if any error occurred in the worker processes
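A hedged usage sketch (assuming the wrapper is spack.util.parallel.imap_unordered and that each tuple in the list is unpacked as the task's arguments):

from spack.util.parallel import imap_unordered

def square(x):
    return x * x

if __name__ == "__main__":
    for result in imap_unordered(square, [(1,), (2,), (3,)], processes=2, debug=False):
        print(result)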


spack.util.path module

Utilities for managing paths in Spack.

TODO: this is really part of spack.config. Consolidate it.

Same as substitute_path_variables, but also take absolute path.

If the string is a yaml object with file annotations, make absolute paths relative to that file's directory. Otherwise, use default_wd if specified, or os.getcwd() if not.

path (str) -- path being converted as needed
An absolute path with path variable substitution
Return type
(str)


Substitute placeholders into paths.

Spack allows paths in configs to have some placeholders, as follows:

  • $env The active Spack environment.
  • $spack The Spack instance's prefix
  • $tempdir Default temporary directory returned by tempfile.gettempdir()
  • $user The current user's username
  • $user_cache_path The user cache directory (~/.spack, unless overridden)
  • $architecture The spack architecture triple for the current system
  • $arch The spack architecture triple for the current system
  • $platform The spack platform for the current system
  • $os The OS of the current system
  • $operating_system The OS of the current system
  • $target The ISA target detected for the system
  • $target_family The family of the target detected for the system
  • $date The current date (YYYY-MM-DD)

These are substituted case-insensitively into the path, and users can use either $var or ${var} syntax for the variables. $env is only replaced if there is an active environment, and should only be used in environment yaml files.


Substitute config vars, expand environment vars, expand user home.
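For example (a hedged sketch; actual output depends on the local configuration):

from spack.util.path import canonicalize_path

print(canonicalize_path("$spack/var/spack/cache"))  # Spack prefix substituted
print(canonicalize_path("$tempdir/$user/stage"))    # e.g. /tmp/<username>/stage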

spack.util.pattern module

Bases: Bunch

Subclass of Bunch to write argparse args more naturally.


Bases: object

Carries a bunch of named attributes (from Alex Martelli's Bunch)




Decorator implementing the GoF composite pattern.
  • interface (type) -- class exposing the interface to which the composite object must conform. Only non-private and non-special methods will be taken into account
  • method_list (list) -- names of methods that should be part of the composite
  • container (collections.abc.MutableSequence) -- container for the composite object (default = list). Must fulfill the MutableSequence contract. The composite class will expose the container API to manage object composition

a class decorator that patches a class adding all the methods it needs to be a composite for a given interface.
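A hedged usage sketch (assuming the decorator is spack.util.pattern.composite and that the composite class exposes the container API, e.g. append):

from spack.util.pattern import composite

class Runner:
    def run(self):
        pass

@composite(interface=Runner)
class RunnerGroup:
    pass

group = RunnerGroup()
group.append(Runner())  # container API from the underlying list
group.run()             # forwards run() to every member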


spack.util.prefix module

This file contains utilities for managing the installation prefix of a package.

Bases: str

This class represents an installation prefix, but provides useful attributes for referring to directories inside the prefix.

Attributes of this object are created on the fly when you request them, so any of the following are valid:

>>> prefix = Prefix("/usr")
>>> prefix.bin
/usr/bin
>>> prefix.lib64
/usr/lib64
>>> prefix.share.man
/usr/share/man
>>> prefix.foo.bar.baz
/usr/foo/bar/baz
>>> prefix.join("dashed-directory").bin64
/usr/dashed-directory/bin64
    

Prefix objects behave identically to strings. In fact, they subclass str, so operators like + are legal:

print("foobar " + prefix)


This prints foobar /usr. All of this is meant to make custom installs easy.

Concatenate a string to a prefix.

Useful for strings that are not valid variable names. This includes strings containing characters like '-' and '.'.

string -- the string to append to the prefix
the newly created installation prefix



spack.util.s3 module


Bases: BufferedReader
Disconnect this buffer from its underlying raw stream and return it.

After the raw stream has been detached, the buffer is in an unusable state.


Read and return up to n bytes.

If the argument is omitted, None, or negative, reads and returns all data until EOF.

If the argument is positive, and the underlying raw stream is not 'interactive', multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent.

Returns an empty bytes object on EOF.

Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment.



Create s3 config for session/client from a Mirror instance (or just set defaults when no mirror is given).


Map (mirror name, method) tuples to s3 client instances.

spack.util.spack_json module

Simple wrapper around JSON to guarantee consistent use of load/dump.

Bases: SpackError

Raised when there are issues with JSON parsing.


Dump JSON with a reasonable amount of indentation and separation.

Spack JSON needs to be ordered to support specs.
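A hedged usage sketch (assuming dump returns a string when no stream is given):

import spack.util.spack_json as sjson

text = sjson.dump({"name": "zlib", "version": "1.3"})
data = sjson.load(text)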

spack.util.spack_yaml module

Enhanced YAML parsing for Spack.

  • load() preserves YAML Marks on returned objects -- this allows us to access file and line information later.
  • Our load methods use an OrderedDict class instead of YAML's default unordered dict.

Bases: SpackError

Raised when there are issues with YAML parsing.




spack.util.timer module

Debug signal handler: prints a stack trace and enters interpreter.

register_interrupt_handler() enables a ctrl-C handler that prints a stack trace and drops the user into an interpreter.



Bases: BaseTimer

Timer interface that does nothing, useful for "tell don't ask" style code when timers are optional.


Bases: tuple
Alias for field number 2

Alias for field number 3

Alias for field number 1

Alias for field number 0


Bases: BaseTimer

Simple interval timer

Get the time in seconds of a named timer, or the total time if no name is passed. The duration is always 0 for timers that have not been started; no error is raised.
name (str) -- (Optional) name of the timer
duration of timer.
Return type
float


Context manager that allows you to time a block of code.
name (str) -- Name of the timer


Get all named timers (excluding the global/total timer)

Start or restart a named timer, or the global timer when no name is given.
name (str) -- Optional name of the timer. When no name is passed, the global timer is started.


Stop a named timer, or all timers when no name is given. Stopping a timer that has not started has no effect.
name (str) -- Optional name of the timer. When no name is passed, all timers are stopped.



Write a human-readable summary of timings (depth is 1)
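A hedged usage sketch of the interval timer (the timer names are illustrative):

from spack.util.timer import Timer

t = Timer()
t.start("download")
# ... timed work ...
t.stop("download")

with t.measure("build"):  # context-manager form
    pass

print(t.duration("download"), t.duration())  # named phase and total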


Bases: tuple
Alias for field number 2

Alias for field number 1

Alias for field number 0


name for the global timer (used in start(), stop(), duration() without arguments)

spack.util.url module

Utility functions for parsing, formatting, and manipulating URLs.

This method computes a default file name for a given URL. Note that it makes no request, so this is not the same as the option curl -O, which uses the remote file name from the response header.


Format a URL string

Returns a canonicalized format of the given URL as a string.


Historically some config files and spack commands used paths where urls should be used. This utility can be used to validate and promote paths to urls.

Joins a base URL with one or more local URL path components

If resolve_href is True, treat the base URL as though it were the locator of a web page, and the remaining URL path components as though they formed a relative URL to be resolved against it (i.e.: as in posixpath.join(...)). The result is an absolute URL to the resource to which a user's browser would navigate if they clicked on a link with an "href" attribute equal to the relative URL.

If resolve_href is False (default), then the URL path components are joined as in posixpath.join().

Note: file:// URL path components are not canonicalized as part of this operation. To canonicalize, pass the joined url to format().

Examples

base_url = 's3://bucket/index.html'
body = fetch_body(prefix)
link = get_href(body)  # link == '../other-bucket/document.txt'

# wrong - link is a local URL that needs to be resolved against base_url
spack.util.url.join(base_url, link)
's3://bucket/other_bucket/document.txt'

# correct - resolve local URL against base_url
spack.util.url.join(base_url, link, resolve_href=True)
's3://other_bucket/document.txt'

prefix = 'https://mirror.spack.io/build_cache'

# wrong - prefix is just a URL prefix
spack.util.url.join(prefix, 'my-package', resolve_href=True)
'https://mirror.spack.io/my-package'

# correct - simply append additional URL path components
spack.util.url.join(prefix, 'my-package', resolve_href=False)  # default
'https://mirror.spack.io/build_cache/my-package'

# For canonicalizing file:// URLs, take care to explicitly differentiate
# between absolute and relative join components.


Get a local file path from a url.

If url is a file:// URL, return the absolute path to the local file or directory referenced by it. Otherwise, return None.



Returns true if the URL scheme is generally known to Spack. This function helps mostly in validation of paths vs urls, as Windows paths such as C:\x\y\z (with backward, not forward, slashes) may parse as a URL with scheme C and path /x/y/z.

spack.util.web module


Bases: HTMLParser

This parser takes an HTML page and selects the include-fragment elements used on GitHub (https://github.github.io/include-fragment-element), as well as a possible base url.




Bases: HTMLParser

This parser just takes an HTML page and strips out the hrefs on the links. Good enough for a really simple spider.



Bases: SpackWebError

Raised when an operation can't get an internet connection.


User-Agent used in Request objects


Bases: SpackError

Superclass for Spack web spidering errors.


Return the basic fetch arguments typically used in calls to curl.

The arguments include those for ensuring behaviors such as failing on errors for codes over 400, printing HTML headers, resolving 3xx redirects, status or failure handling, and connection timeouts.

It also uses the following configuration option to set an additional argument as needed:

  • config:connect_timeout (int): connection timeout
  • config:verify_ssl (str): Perform SSL verification



  • url (str) -- URL whose contents will be fetched
  • timeout (int) -- Connection timeout, which is only used if higher than config:connect_timeout


Returns (list): list of argument strings


Check standard return code failures for provided arguments.
returncode (int) -- curl return code

Raises FetchError if the curl returncode indicates failure


Retrieves text-only URL content using the configured fetch method. It determines the fetch method from:
config:url_fetch_method (str): fetch method to use (e.g., 'curl')



If the method is curl, it also uses the following configuration options:

  • config:connect_timeout (int): connection time out
  • config:verify_ssl (str): Perform SSL verification



  • url (str) -- URL whose contents are to be fetched
  • curl (spack.util.executable.Executable or None) -- (optional) curl executable if curl is the configured fetch method
  • dest_dir (str) -- (optional) destination directory for fetched text file


Returns (str or None): path to the fetched file

Raises FetchError if the curl returncode indicates failure


Looks up a dict of headers for the given header value.

Looks up a dict of headers, [headers], for a header value given by [header_name]. Returns headers[header_name] if header_name is in headers. Otherwise, the first fuzzy match is returned, if any.

This fuzzy matching is performed by discarding word separators and capitalization, so that for example, "Content-length", "content_length", "conTENtLength", etc., all match. In the case of multiple fuzzy-matches, the returned value is the "first" such match given the underlying mapping's ordering, or unspecified if no such ordering is defined.

If header_name is not in headers, and no such fuzzy match exists, then a KeyError is raised.
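A minimal sketch of the fuzzy matching described above (not Spack's exact code):

import re

def get_header(headers, header_name):
    if header_name in headers:
        return headers[header_name]

    def normalize(s):
        # discard word separators and capitalization
        return re.sub(r"[ _-]", "", s).lower()

    wanted = normalize(header_name)
    for key, value in headers.items():
        if normalize(key) == wanted:
            return value  # first fuzzy match wins
    raise KeyError(header_name)

print(get_header({"Content-length": "42"}, "conTENtLength"))  # -> 42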



Parse a strong etag from an ETag: <value> header value. We don't allow for weakness indicators because it's unclear what that means for cache invalidation.




Get web pages from root URLs.

If depth is specified (e.g., depth=2), then this will also follow up to <depth> levels of links from each root.

  • root_urls -- root urls used as a starting point for spidering
  • depth -- level of recursion into links
  • concurrency -- number of simultaneous requests that can be sent

A dict of pages visited (URL) mapped to their full text and the set of visited links.
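A hedged usage sketch (the URL is illustrative; assumes spider returns the pages dict and the set of links as a pair):

from spack.util.web import spider

pages, links = spider(["https://example.com/downloads"], depth=1)
for url, text in pages.items():
    print(url, len(text))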


Determines whether url exists.

A scheme-specific process is used for Google Storage (gs) and Amazon Simple Storage Service (s3) URLs; otherwise, the configured fetch method defined by config:url_fetch_method is used.

  • url (str) -- URL whose existence is being checked
  • curl (spack.util.executable.Executable or None) -- (optional) curl executable if curl is the configured fetch method


Returns (bool): True if it exists; False otherwise.


Dispatches to the correct OpenerDirector.open, based on Spack configuration.

spack.util.windows_registry module

Utility module for dealing with Windows Registry.

Bases: object

Predefined, open registry HKEYs. From the Microsoft docs: An application must open a key before it can read data from the registry. To open a key, an application must supply a handle to another key in the registry that is already open. The system defines predefined keys that are always open. Predefined keys help an application navigate in the registry.








Bases: RuntimeError

Runtime Error describing issue with invalid key access to Windows registry


Bases: object

Class wrapping a Windows registry key

Returns the subkey named sub_key as a RegistryKey object

Returns the value associated with this key as a RegistryValue object


Returns list of all subkeys of this key as RegistryKey objects

Returns all subvalues of this key as RegistryValue objects, in a dictionary mapping value name to RegistryValue object


Bases: object

Class defining a Windows registry entry


Bases: object

Interface to provide access, querying, and searching to Windows registry entries. This class represents a single key entrypoint into the Windows registry and provides an interface to this key's values, its subkeys, and those subkey's values. This class cannot be used to move freely about the registry, only subkeys/values of the root key used to instantiate this class.


Perform a BFS of subkeys until a key matching the subkey name regex is found. Returns None or the first RegistryKey object corresponding to the requested key name.
subkey_name (str) --
the desired subkey as a RegistryKey object, or None

For more details, see the WindowsRegistryView._find_subkey_s method docstring


Perform a BFS of subkeys until the desired key is found. Returns None or the RegistryKey object corresponding to the requested key name.
subkey_name (str) --
the desired subkey as a RegistryKey object, or None

For more details, see the WindowsRegistryView._find_subkey_s method docstring


Exactly the same as find_subkey, except this function tries to match a regex to multiple keys
subkey_name (str) --
the desired subkeys as a list of RegistryKey objects, or None

For more details, see the WindowsRegistryView._find_subkey_s method docstring


If not recursive, return the RegistryValue object corresponding to name
  • val_name (str) -- name of value desired from registry
  • recursive (bool) -- optional argument, if True, the registry is searched recursively for the value of name val_name, else only the current key is searched

The desired registry value as a RegistryValue object if it exists, otherwise, None


Returns all subkeys regex matching subkey name

Note: this method obtains only direct subkeys of the given key and does not descend to transitive subkeys. For that behavior, see find_matching_subkeys




Return registry value corresponding to provided argument (if it exists)





spack.version package

This module implements Version and version-ish objects. These are:

StandardVersion: A single version of a package.
ClosedOpenRange: A range of versions of a package.
VersionList: An ordered list of Version and VersionRange elements.

The set of Version and ClosedOpenRange is totally ordered with < defined as Version(x) < VersionRange(Version(y), Version(x)) if Version(x) <= Version(y).


Bases: VersionError

Raised when constructing an empty version range.


Bases: ConcreteVersion

Class to represent versions interpreted from git refs.

There are two distinct categories of git versions:

1. GitVersions instantiated with an associated reference version (e.g. 'git.foo=1.2')
2. GitVersions requiring commit lookups

Git ref versions that are not paired with a known version are handled separately from all other version comparisons. When Spack identifies a git ref version, it associates a CommitLookup object with the version. This object handles caching of information from the git repo. When executing comparisons with a git ref version, Spack queries the CommitLookup for the most recent version previous to this git ref, as well as the distance between them expressed as a number of commits. If the previous version is X.Y.Z and the distance is D, the git commit version is represented by the tuple (X, Y, Z, '', D). The component '' cannot be parsed as part of any valid version, but is a valid component. This allows a git ref version to be less than (older than) every Version newer than its previous version, but still newer than its previous version.

To find the previous version from a git ref version, Spack queries the git repo for its tags. Any tag that matches a version known to Spack is associated with that version, as is any tag that is a known version prepended with the character v (i.e., a tag v1.0 is associated with the known version 1.0). Additionally, any tag that represents a semver version (X.Y.Z with X, Y, Z all integers) is associated with the version it represents, even if that version is not known to Spack. Each tag is then queried in git to see whether it is an ancestor of the git ref in question, and if so the distance between the two. The previous version is the version that is an ancestor with the least distance from the git ref in question.

This procedure can be circumvented if the user supplies a known version to associate with the GitVersion (e.g. [hash]=develop). If the user prescribes the version then there is no need to do a lookup and the standard version comparison operations are sufficient.

Use the git fetcher to look up a version for a commit.

Since we want to optimize the clone and lookup, we do the clone once and store it in the user specified git repository cache. We also need context of the package to get known versions, which could be tags if they are linked to Git Releases. If we are unable to determine the context of the version, we cannot continue. This implementation is alongside the GitFetcher because eventually the git repos cache will be one and the same with the source cache.
















up_to(index) -> StandardVersion


Bases: ConcreteVersion

Class to represent versions

The dashed representation of the version.

Example:
>>> version = Version('1.2.3b')
>>> version.dashed
Version('1-2-3b')

The version with separator characters replaced by dashes
Return type
Version


The dotted representation of the version.

Example:
>>> version = Version('1-2-3b')
>>> version.dotted
Version('1.2.3b')

The version with separator characters replaced by dots
Return type
Version





Triggers on the special case of the @develop-like version.

The joined representation of the version.

Example:
>>> version = Version('1.2.3b')
>>> version.joined
Version('123b')

The version with separator characters removed
Return type
Version








The underscored representation of the version.

Example:
>>> version = Version('1.2.3b')
>>> version.underscored
Version('1_2_3b')




The version up to the specified component.

Examples:
>>> version = Version('1.23-4b')
>>> version.up_to(1)
Version('1')
>>> version.up_to(2)
Version('1.23')
>>> version.up_to(3)
Version('1.23-4')
>>> version.up_to(4)
Version('1.23-4b')
>>> version.up_to(-1)
Version('1.23-4')
>>> version.up_to(-2)
Version('1.23')
>>> version.up_to(-3)
Version('1')

The first index components of the version
Return type
Version





Bases: VersionError

Raised for version checksum errors.


Bases: SpackError

This is raised when something is wrong with a version.


Bases: object

Sorted, non-redundant list of Version and ClosedOpenRange elements.



Like concrete, but collapses VersionRange(x, x) to Version(x). This is just for compatibility with old Spack.


Parse dict from to_dict.

Get the highest version in the list.

Get the highest numeric version in the list.

Intersect this spec's list with other.

Return True if the spec changed as a result; False otherwise




Get the lowest version in the list.


Get the preferred (latest) version in the list.

satisfies(other) -> bool

Generate human-readable dict for YAML.




Bases: VersionError

Raised for errors looking up git commits as versions.



This version contains all possible versions.

Converts a string to a version object. This is private. Client code should use ver().




Parses a Version, VersionRange, or VersionList from a string or list of strings.
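A hedged sketch of the kinds of inputs it accepts:

from spack.version import ver

v = ver("1.5")                # a single Version
r = ver("1.4:1.6")            # a version range
vl = ver(["1.2", "1.4:1.6"])  # a VersionList

print(v.satisfies(r))         # True: 1.5 falls within 1.4:1.6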

Submodules

spack.version.common module

Bases: VersionError

Raised when constructing an empty version range.


Bases: VersionError

Raised for version checksum errors.


Bases: SpackError

This is raised when something is wrong with a version.


Bases: VersionError

Raised for errors looking up git commits as versions.



spack.version.git_ref_lookup module

Bases: AbstractRefLookup

An object for cached lookups of git refs

GitRefLookup objects delegate to the MISC_CACHE for locking. GitRefLookup objects may be attached to a GitVersion to allow for comparisons between git refs and versions as represented by tags in the git repository.




Get the version string and distance for a given git ref.
ref (str) -- git ref to lookup

Returns: optional version string and distance


Load data if the path already exists.

Lookup the previous version and distance for a given commit.

We use git to compare the known versions from package to the git tags, as well as any git tags that are SEMVER versions, and find the latest known version prior to the commit, as well as the distance from that version to the commit in the git repo. Those values are used to compare Version objects.



Identifier for git repos used within the repo and metadata caches.

Save the data to file


spack.version.lookup module

Bases: object
Get the version string and distance for a given git ref.
ref (str) -- git ref to lookup

Returns: optional version string and distance



spack.version.version_types module



Bases: ConcreteVersion

Class to represent versions interpreted from git refs.

There are two distinct categories of git versions:

1. GitVersions instantiated with an associated reference version (e.g. 'git.foo=1.2')
2. GitVersions requiring commit lookups

Git ref versions that are not paired with a known version are handled separately from all other version comparisons. When Spack identifies a git ref version, it associates a CommitLookup object with the version. This object handles caching of information from the git repo. When executing comparisons with a git ref version, Spack queries the CommitLookup for the most recent version previous to this git ref, as well as the distance between them expressed as a number of commits. If the previous version is X.Y.Z and the distance is D, the git commit version is represented by the tuple (X, Y, Z, '', D). The component '' cannot be parsed as part of any valid version, but is a valid component. This allows a git ref version to be less than (older than) every Version newer than its previous version, but still newer than its previous version.

To find the previous version from a git ref version, Spack queries the git repo for its tags. Any tag that matches a version known to Spack is associated with that version, as is any tag that is a known version prepended with the character v (i.e., a tag v1.0 is associated with the known version 1.0). Additionally, any tag that represents a semver version (X.Y.Z with X, Y, Z all integers) is associated with the version it represents, even if that version is not known to Spack. Each tag is then queried in git to see whether it is an ancestor of the git ref in question, and if so the distance between the two. The previous version is the version that is an ancestor with the least distance from the git ref in question.

This procedure can be circumvented if the user supplies a known version to associate with the GitVersion (e.g. [hash]=develop). If the user prescribes the version then there is no need to do a lookup and the standard version comparison operations are sufficient.

Use the git fetcher to look up a version for a commit.

Since we want to optimize the clone and lookup, we do the clone once and store it in the user specified git repository cache. We also need context of the package to get known versions, which could be tags if they are linked to Git Releases. If we are unable to determine the context of the version, we cannot continue. This implementation is alongside the GitFetcher because eventually the git repos cache will be one and the same with the source cache.
















up_to(index) -> StandardVersion


Bases: ConcreteVersion

Class to represent versions

The dashed representation of the version.

Example:
>>> version = Version('1.2.3b')
>>> version.dashed
Version('1-2-3b')

The version with separator characters replaced by dashes
Return type
Version


The dotted representation of the version.

Example:
>>> version = Version('1-2-3b')
>>> version.dotted
Version('1.2.3b')

The version with separator characters replaced by dots
Return type
Version





Triggers on the special case of the @develop-like version.

The joined representation of the version.

Example:
>>> version = Version('1.2.3b')
>>> version.joined
Version('123b')

The version with separator characters removed
Return type
Version








The underscored representation of the version.

Example:
>>> version = Version('1.2.3b')
>>> version.underscored
Version('1_2_3b')




The version up to the specified component.

Examples:
>>> version = Version('1.23-4b')
>>> version.up_to(1)
Version('1')
>>> version.up_to(2)
Version('1.23')
>>> version.up_to(3)
Version('1.23-4')
>>> version.up_to(4)
Version('1.23-4b')
>>> version.up_to(-1)
Version('1.23-4')
>>> version.up_to(-2)
Version('1.23')
>>> version.up_to(-3)
Version('1')

The first index components of the version
Return type
Version





Bases: object

Sorted, non-redundant list of Version and ClosedOpenRange elements.



Like concrete, but collapses VersionRange(x, x) to Version(x). This is just for compatibility with old Spack.


Parse dict from to_dict.

Get the highest version in the list.

Get the highest numeric version in the list.

Intersect this spec's list with other.

Return True if the spec changed as a result; False otherwise




Get the lowest version in the list.


Get the preferred (latest) version in the list.

satisfies(other) -> bool

Generate human-readable dict for YAML.






Converts a string to a version object. This is private. Client code should use ver().

Produce the next string of A-Z and a-z characters


Produce the next VersionStrComponent, where masteq -> mastes and master -> main


Produce the previous string of A-Z and a-z characters


Produce the previous VersionStrComponent, where mastes -> masteq and master -> head

Parses a Version, VersionRange, or VersionList from a string or list of strings.

Submodules

spack.abi module

Bases: object

This class provides methods to test ABI compatibility between specs. The current implementation is rather rough and could be improved.

Return true if the architecture of the target spec is ABI-compatible with the architecture of the constraint spec. If either the target or constraint spec has no architecture, the target is also considered ABI-compatible with the constraint.

Returns true if target spec is ABI compatible with constraint spec

Return true if compilers for parent and child are ABI compatible.


spack.audit module

Classes and functions to register audit checks for various parts of Spack and run them on-demand.

To register a new class of sanity checks (e.g. sanity checks for compilers.yaml), the first action required is to create a new AuditClass object:

audit_cfgcmp = AuditClass(
    tag='CFG-COMPILER',
    description='Sanity checks on compilers.yaml',
    kwargs=()
)


This object is to be used as a decorator to register functions that will each perform a single check:

@audit_cfgcmp
def _search_duplicate_compilers(error_cls):
    pass


These functions need to take as arguments the keywords declared when creating the decorator object, plus an error_cls argument at the end that acts as a factory to create Error objects. They should return a (possibly empty) list of errors.

Calls to each of these functions are triggered by the run method of the decorator object, that will forward the keyword arguments passed as input.



Bases: object

Information on an error reported in a test.






Generic checks relying on global state


Return the list of packages with a corresponding detection_test.yaml file.

Run the checks associated with a single tag.
  • tag (str) -- tag of the check
  • **kwargs -- keyword arguments forwarded to the checks

Errors that occurred during the checks


Run the checks that are part of the group passed as argument.
  • group (str) -- group of checks to be run
  • **kwargs -- keyword arguments forwarded to the checks

List of (tag, errors) that failed.


spack.binary_distribution module


Bases: object

The BinaryCacheIndex tracks what specs are available on (usually remote) binary caches.

This index is "best effort", in the sense that whenever we don't find what we're looking for here, we will attempt to fetch it directly from configured mirrors anyway. Thus, it has the potential to speed things up, but cache misses shouldn't break any spack functionality.

At the moment, everything in this class is initialized as lazily as possible, so that it avoids slowing anything in spack down until absolutely necessary.

TODO: What's the cost if, e.g., we realize in the middle of a spack install that the cache is out of date, and we fetch directly? Does it mean we should have paid the price to update the cache earlier?

For testing purposes we need to be able to empty the cache and clear associated data structures.

Look in our cache for the built spec corresponding to spec.

If the spec can be found among the configured binary mirrors, a list is returned that contains the concrete spec and the mirror url of each mirror where it can be found. Otherwise, None is returned.

This method does not trigger reading anything from remote mirrors, but rather just checks if the concrete spec is found within the cache.

The cache can be updated by calling update() on the cache.

  • spec (spack.spec.Spec) -- Concrete spec to find
  • mirrors_to_check -- Optional mapping containing mirrors to check. If None, just assumes all configured mirrors.

A list of objects containing the concrete spec and the mirror url of each mirror where it can be found, e.g.:

[
    {
        "spec": <concrete-spec>,
        "mirror_url": <mirror-root-url>
    }
]





Same as find_built_spec but uses the hash of a spec.
  • find_hash (str) -- hash of the spec to search
  • mirrors_to_check -- Optional mapping containing mirrors to check. If None, just assumes all configured mirrors.




Populate the local cache of concrete specs (_mirrors_for_spec) from the locally cached buildcache index files. This is essentially a no-op if it has already been done, as we keep track of the index hashes for which we have already associated the built specs.

Make sure local cache of buildcache index files is up to date. If the same mirrors are configured as the last time this was called and none of the remote buildcache indices have changed, calling this method will only result in fetching the index hash from each mirror to confirm it is the same as what is stored locally. Otherwise, the buildcache index.json and index.json.hash files are retrieved from each configured mirror and stored locally (both in memory and on disk under _index_cache_root).

Take list of {'mirror_url': m, 'spec': s} objects and update the local built_spec_cache


Bases: object

Callable object to query if a spec is in a binary cache


Bases: Database

A database for binary buildcaches.

A database supports writing buildcache index files, in which case certain fields are not needed in each install record, and no locking is required. To use this feature, provide lock_cfg=NO_LOCK and override the list of record_fields.



Bases: BaseDirectoryVisitor

Visitor that collects a list of files and symlinks that can be checked for need of relocation. It knows how to dedupe hardlinks and deal with symlinks to files and directories.

Return True from this function to recurse into the directory at os.path.join(root, rel_path). Return False in order not to recurse further.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current directory from root
  • depth (int) -- depth of current directory from the root directory

True when the directory should be recursed into. False when not
Return type
bool


Return True to recurse into the symlinked directory and False in order not to. Note: rel_path is the path to the symlink itself. Following symlinked directories blindly can cause infinite recursion due to cycles.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current symlink from root
  • depth (int) -- depth of current symlink from the root directory

True when the directory should be recursed into. False when not
Return type
bool



Handle the non-symlink file at os.path.join(root, rel_path)
  • root (str) -- root directory
  • rel_path (str) -- relative path to current file from root
  • depth (int) -- depth of current file from the root directory



Handle the symlink to a file at os.path.join(root, rel_path). Note: rel_path is the location of the symlink, not to what it is pointing to. The symlink may be dangling.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current symlink from root
  • depth (int) -- depth of current symlink from the root directory




Bases: SpackError

Raised when a buildcache cannot be read for any reason


Bases: BufferedIOBase

Checksum writer computes a checksum while writing to a file.

Flush and close the IO object.

This method has no effect if the file is already closed.



Returns underlying file descriptor if one exists.

OSError is raised if the IO object does not use a file descriptor.


Flush write buffers, if applicable.

This is not implemented for read-only and non-blocking streams.





Read and return up to n bytes.

If the argument is omitted, None, or negative, reads and returns all data until EOF.

If the argument is positive, and the underlying raw stream is not 'interactive', multiple raw reads may be issued to satisfy the byte count (unless EOF is reached first). But for interactive raw streams (as well as sockets and pipes), at most one raw read will be issued, and a short result does not imply that EOF is imminent.

Returns an empty bytes object on EOF.

Returns None if the underlying raw stream was open in non-blocking mode and no data is available at the moment.


Read and return up to n bytes, with at most one read() call to the underlying raw stream. A short result does not imply that EOF is imminent.

Returns an empty bytes object on EOF.


Return whether object was opened for reading.

If False, read() will raise OSError.


Read and return a line from the stream.

If size is specified, at most size bytes will be read.

The line terminator is always b'\n' for binary files; for text files, the newlines argument to open can be used to select the line terminator(s) recognized.



Change the stream position to the given byte offset.
The stream position, relative to 'whence'.
The relative position to seek from.



The offset is interpreted relative to the position indicated by whence. Values for whence are:

  • os.SEEK_SET or 0 -- start of stream (the default); offset should be zero or positive
  • os.SEEK_CUR or 1 -- current stream position; offset may be negative
  • os.SEEK_END or 2 -- end of stream; offset is usually negative

Return the new absolute position.


Return whether object supports random access.

If False, seek(), tell() and truncate() will raise OSError. This method may need to do a test seek().


Return current stream position.

Return whether object was opened for writing.

If False, write() will raise OSError.


Write the given buffer to the IO stream.

Returns the number of bytes written, which is always the length of b in bytes.

Raises BlockingIOError if the buffer is full and the underlying raw stream cannot accept more data at the moment.





Bases: Exception

Error thrown when fetching the cache failed, usually a composite error list.



Bases: tuple
Alias for field number 2

Alias for field number 0

Alias for field number 3

Alias for field number 1



Bases: SpackError

Raised when unable to retrieve list of specs from the mirror


Bases: SpackError

Raised if directory layout is different from buildcache.



Bases: SpackError

Raised when gpg2 is not in PATH


Bases: SpackError

Raised when gpg has no default key added.


Bases: SpackError

Raised when a file would be overwritten


Bases: SpackError

Raised if file fails signature verification.


Bases: SpackError

Raised when a spec is not installed but picked to be packaged.



Bases: SpackError

Raised when multiple keys can be used to sign.


Bases: NamedTuple
Overwrite existing tarball/metadata files in buildcache

What key to use for signing

Regenerate indices after pushing

Whether to sign or not.


Bases: SpackError

Raised if installation of an unsigned package is attempted without the use of --no-check-signature.


Set up a BinaryCacheIndex for remote buildcache dbs in the user's homedir.




Filename of the binary package meta-data file

Check all the given specs against buildcaches on the given mirrors and determine if any of the specs need to be rebuilt. Specs need to be rebuilt when their hash doesn't exist in the mirror.
  • mirrors (dict) -- Mirrors to check against
  • specs (Iterable) -- Specs to check against mirrors
  • output_file (str) -- Path to output file to be written. If provided, mirrors with missing or out-of-date specs will be formatted as a JSON object and written to this file.


Returns: 1 if any spec was out-of-date on any mirror, 0 otherwise.




Updates a buildinfo dict for old archives that did not dedupe hardlinks. De-duping hardlinks is necessary when relocating files in parallel and in-place. This means we must preserve inodes when relocating.


Download the buildcache files for a single concrete spec.
  • concrete_spec -- concrete spec to be downloaded
  • destination (str) -- path where to put the downloaded buildcache
  • mirror_url (str) -- url of the mirror from which to download



Download binary tarball for given package into stage area, returning path to downloaded tarball if successful, None otherwise.
  • spec (spack.spec.Spec) -- Concrete spec
  • unsigned (bool) -- Whether or not to require signed binaries
  • mirrors_for_spec (list) -- Optional list of concrete specs and mirrors obtained by calling binary_distribution.get_mirrors_for_spec(). These will be checked in order first before looking in other configured mirrors.

None if the tarball could not be downloaded (maybe also verified, depending on whether new-style signed binary packages were found). Otherwise, return an object indicating the path to the downloaded tarball, the path to the downloaded specfile (in the case of new-style buildcache), and whether or not the tarball is already verified.

{
    "tarball_path": "path-to-locally-saved-tarfile",
    "specfile_path": "none-or-path-to-locally-saved-specfile",
    "signature_verified": "true-if-binary-pkg-was-already-verified"
}





Create the key index page.

Creates (or replaces) the "index.json" page at the location given in key_prefix. This page contains an entry for each key (.pub) under key_prefix.


Create or replace the build cache index on the given mirror. The buildcache index contains an entry for each binary package under the cache_prefix.
  • cache_prefix (str) -- Base url of binary mirror.
  • concurrency -- (int): The desired threading concurrency to use when fetching the spec files from the mirror.

None
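The same operation is available from the command line; a sketch, assuming a mirror named my-mirror is configured in mirrors.yaml (option spelling varies across Spack versions):

$ spack buildcache update-index my-mirror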


Return a data structure with information about a build, including text_to_relocate, binary_to_relocate, binary_to_relocate_fullpath, link_to_relocate, and other, which holds files that don't fit any of the previous checks (and should not be relocated). We exclude docs (man) and metadata (.spack). This can be used to find a particular kind of file in spack, or to generate the build metadata.



Check if concrete spec exists on mirrors and return a list indicating the mirrors on which it can be found
  • spec (spack.spec.Spec) -- The spec to look for in binary mirrors
  • mirrors_to_check (dict) -- Optionally override the configured mirrors with the mirrors in this dictionary.
  • index_only (bool) -- When index_only is set to True, only the local cache is checked, no requests are made.

Returns: a list indicating all mirrors where the spec can be found.



Create a reproducible, compressed tarfile

Return a dictionary of hashes to prefixes for a spec and its deps, excluding externals

Install the root node of a concrete spec from a buildcache.

Checking the sha256 sum of a node before installation is usually needed only for software installed during Spack's bootstrapping (since we might not have a proper signature verification mechanism available).

  • spec -- spec to be installed (note that only the root node will be installed)
  • unsigned (bool) -- if True allows installing unsigned binaries
  • force (bool) -- force installation if the spec is already present in the local store
  • sha256 (str) -- optional sha256 of the binary package, to be checked before installation
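A minimal sketch of a call, assuming the function is spack.binary_distribution.install_root_node and that the parameter names match the list above:

import spack.binary_distribution as bindist
import spack.spec

spec = spack.spec.Spec("patchelf").concretized()
# install only the root node from a buildcache; no sha256 check here
bindist.install_root_node(spec, unsigned=False, force=False, sha256=None)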



Install a single concrete spec from a buildcache.
  • spec (spack.spec.Spec) -- spec to be installed
  • unsigned (bool) -- if True allows installing unsigned binaries
  • force (bool) -- force installation if the spec is already present in the local store




Create and push binary package for a single spec to the specified mirror url.
  • spec -- Spec to package and push
  • mirror_url -- Desired destination url for binary package
  • options --

True if package was pushed, False otherwise.
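From the command line, the equivalent workflow is roughly as follows (my-mirror is assumed to be configured in mirrors.yaml; the subcommand is spelled "create" in older Spack versions):

$ spack buildcache push my-mirror zlib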


Upload pgp public keys to the given mirrors

Build a tarball from given spec and put it into the directory structure used at the mirror (following <tarball_directory_name>).

This method raises NoOverwriteException when force=False and the tarball or spec.json file already exist in the buildcache.






Return the list of nodes to be packaged, given a list of specs. Raises NotInstalledError if a spec is not installed but picked to be packaged.
  • specs -- list of root specs to be processed
  • root -- include the root of each spec in the nodes
  • dependencies -- include the dependencies of each spec in the nodes



Return name of the tarball directory according to the convention <os>-<architecture>/<compiler>/<package>-<version>/

Return the name of the tarfile according to the convention <os>-<architecture>-<package>-<dag_hash><ext>

Return the full path+name for a given spec according to the convention <tarball_directory_name>/<tarball_name>
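Putting the three conventions together, an illustrative layout for a zlib binary package (all values are hypothetical) would be:

linux-ubuntu20.04-x86_64/gcc-9.3.0/zlib-1.2.13/linux-ubuntu20.04-x86_64-zlib-<dag_hash>.spack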

Create a tarfile of an install prefix of a spec. Skips existing buildinfo file. Only adds regular files, symlinks and dirs. Skips devices, fifos. Preserves hardlinks. Normalizes permissions like git. Tar entries are added in depth-first pre-order, with dir entries partitioned by file | dir, and sorted alphabetically, for reproducibility. Partitioning ensures only one dir is in memory at a time, and sorting improves compression.
  • tar -- tarfile object to add files to
  • prefix -- absolute install prefix of spec



Try to find the spec directly on the configured mirrors

Utility function to try and fetch a file from a url, stage it locally, and return the path to the staged file.
url_to_fetch (str) -- Url pointing to remote resource to fetch
Path to locally staged resource or None if it could not be fetched.


Utility function to attempt to verify a local file. Assumes the file is a clearsigned signature file.
specfile_path (str) -- Path to file to be verified.
True if the signature could be verified, False otherwise.


Get all concrete specs for build caches available on configured mirrors. Initialization of internal cache data structures is done as lazily as possible, so this method will also attempt to initialize and update the local index cache (essentially a no-op if it has been done already and nothing has changed on the configured mirrors.)
Raises: FetchCacheError


spack.build_environment module

This module contains all routines related to setting up the package build environment. All of this is set up by package.py just before install() is called.

There are two parts to the build environment:

1.
Python build environment (i.e. install() method)

This is how things are set up when install() is called. Spack takes advantage of each package being in its own module by adding a bunch of command-like functions (like configure(), make(), etc.) in the package's module scope. This allows package writers to call them all directly in Package.install() without writing 'self.' everywhere. No, this isn't Pythonic. Yes, it makes the code more readable and more like the shell script from which someone is likely porting.

2.
Build execution environment

This is the set of environment variables, like PATH, CC, CXX, etc. that control the build. There are also a number of environment variables used to pass information (like RPATHs and other information about dependencies) to Spack's compiler wrappers. All of these env vars are also set up here.


Skimming this module is a nice way to get acquainted with the types of calls you can make from within the install() function.
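As a sketch, a hypothetical package.py using those injected commands directly might look like this (the class, homepage, and url are illustrative, not a real package):

from spack.package import *

class Example(Package):
    """Hypothetical package illustrating the injected build commands."""

    homepage = "https://example.com"
    url = "https://example.com/example-1.0.tar.gz"

    def install(self, spec, prefix):
        # configure and make are module-scope functions injected by
        # spack.build_environment, so no self. prefix is needed
        configure("--prefix={0}".format(prefix))
        make()
        make("install")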

Bases: InstallError

The main features of a ChildError are:

1.
They're serializable, so when a child build fails, we can send one of these to the parent and let the parent report what happened.
2.
They have a traceback field containing a traceback generated on the child immediately after failure. Spack will print this on failure in lieu of trying to run sys.excepthook on the parent process, so users will see the correct stack trace from a child.
3.
They also contain context, which shows context in the Package implementation where the error happened. This helps people debug Python code in their packages. To get it, Spack searches the stack trace for the deepest frame where self is in scope and is an instance of PackageBase. This will generally find a useful spot in the package.py file.

The long_message of a ChildError displays one of two things:

1.
If the original error was a ProcessError, indicating a command died during the build, we'll show context from the build log.
2.
If the original error was any other type of error, we'll show context from the Python code.



SpackError handles displaying the special traceback if we're in debug mode with spack -d.





Bases: Executable

Special callable executable object for make so the user can specify parallelism options on a per-invocation basis. Specifying 'parallel' to the call will override whatever the package's global setting is, so you can either default to true or false and override particular calls. Specifying 'jobs_env' to a particular call will name an environment variable which will be set to the parallelism level (without affecting the normal invocation with -j).
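A sketch of per-invocation overrides inside a package's build phase, based on the description above (the environment variable name is illustrative):

make("install", parallel=False)  # force this invocation to run serially
make("check", jobs_env="CTEST_PARALLEL_LEVEL")  # expose the job count via the named variable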


Bases: object

Wrapper class to accept changes to a package.py Python module, and propagate them in the MRO of the package.

It is mainly used as a substitute of the package.py module, when calling the "setup_dependent_package" function during build environment setup.



Bases: object

This class encapsulates the logic to determine environment modifications, and is used as well to set globals in modules of package.py.

Returns the environment variable modifications for the given input specs and context. Environment modifications include:

  • Updating PATH for packages that are required at runtime
  • Updating CMAKE_PREFIX_PATH and PKG_CONFIG_PATH so that their respective tools can find Spack-built dependencies (when context=build)
  • Running custom package environment modifications: setup_run_environment, setup_dependent_run_environment, setup_build_environment, setup_dependent_build_environment.

The (partial) order imposed on the specs is externals first, then topological from leaf to root. That way externals cannot contribute search paths that would shadow Spack's prefixes, and dependents override variables set by dependencies.


Set the globals in modules of package.py files.


Bases: SpackError

Pickle-able exception to control stopped builds.


Bases: Flag
Flag is set when the (node, mode) is finalized

A spec that should be visible in search paths in a build env.

A spec that's a direct build or test dep

Entrypoint spec (a spec to be built; an env root, etc)

A spec used at runtime, but no executables in PATH

A spec used at runtime, with executables in PATH



Given a list of input specs and a context, return a list of tuples of all specs that contribute to (environment) modifications, together with a flag specifying in what way they do so. The list is ordered topologically from root to leaf, meaning that environment modifications should be applied in reverse so that dependents override dependencies, not the other way around.


Return the number of jobs, or None if supports_jobserver and a jobserver is detected.

Return some context for an error message when the build fails.
  • traceback -- A traceback from some exception raised during install
  • context (int) -- Lines of context to show before and after the line where the error happened


This function inspects the stack to find where we failed in the package file, and it adds detailed context to the long_message from there.


Return immediate or transitive RPATHs depending on the package.

Get a list of all the rpaths for a package.

Returns true if a posix jobserver (make) is detected.

Traverse a package's spec DAG and load any external modules.

Traverse a package's dependencies and load any external modules associated with them.

pkg (spack.package_base.PackageBase) -- package to load deps for



Populate the Python module of a package with some useful global names. This makes things easier for package writers.

Set environment variables used by the Spack compiler wrapper (which have the prefix SPACK_) and also add the compiler wrappers to PATH.

This determines the injected -L/-I/-rpath options; each of these specifies a search order and this function computes these options in a manner that is intended to match the DAG traversal order in SetupContext. TODO: this is not the case yet, we're using post order, SetupContext is using topo order.



Create a child process to do part of a spack build.
  • pkg (spack.package_base.PackageBase) -- package whose environment we should set up the child process for.
  • function (Callable) -- argless function to run in the child process.


Usage:

def child_fun():
    # do stuff
    ...

build_env.start_build_process(pkg, child_fun)


The child process is run with the build environment set up by spack.build_environment. This allows package authors to have full control over the environment, etc. without affecting other builds that might be executed in the same spack call.

If something goes wrong, the child process catches the error and passes it to the parent wrapped in a ChildError. The parent is expected to handle (or re-raise) the ChildError.

This uses multiprocessing.Process to create the child process. The mechanism used to create the process differs on different operating systems and for different versions of Python. In some cases "fork" is used (i.e. the "fork" system call) and some cases it starts an entirely new Python interpreter process (in the docs this is referred to as the "spawn" start method). Breaking it down by OS:

  • Linux always uses fork.
  • Mac OS uses fork before Python 3.8 and "spawn" for 3.8 and after.
  • Windows always uses the "spawn" start method.

For more information on multiprocessing child process creation mechanisms, see https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods



spack.builder module


Bases: Sequence

A builder is a class that, given a package object (i.e., one associated with a concrete spec), knows how to install it.

The builder behaves like a sequence; when iterated over, it returns the "phases" of the installation in the correct order.

pkg (spack.package_base.PackageBase) -- package object to be built

List of glob expressions. Each expression must either be absolute or relative to the package source path. Matching artifacts found at the end of the build process will be copied in the same directory tree as _spack_build_logfile and _spack_build_envfile.

Build system name. Must also be defined in derived classes.





Sequence of phases. Must be defined in derived classes


Sets up the build environment for a package.

This method will be called before the current package prefix exists in Spack's store.

env (spack.util.environment.EnvironmentModifications) -- environment modifications to be applied when the package is built. Package authors can call methods on it to alter the build environment.


Sets up the build environment of packages that depend on this one.

This is similar to setup_build_environment, but it is used to modify the build environments of packages that depend on this one.

This gives packages like Python and others that follow the extension model a way to implement common environment or compile-time settings for dependencies.

This method will be called before the dependent package prefix exists in Spack's store.

Examples

1. Installing python modules generally requires PYTHONPATH to point to the lib/pythonX.Y/site-packages directory in the module's install prefix. This method could be used to set that variable.

  • env (spack.util.environment.EnvironmentModifications) -- environment modifications to be applied when the dependent package is built. Package authors can call methods on it to alter the build environment.
  • dependent_spec (spack.spec.Spec) -- the spec of the dependent package about to be built. This allows the extendee (self) to query the dependent's state. Note that this package's spec is available as self.spec







Bases: PhaseCallbacksMeta, ABCMeta

Bases: tuple

An object of this kind is a shared global state used to collect callbacks during class definition time, and is flushed when the class object is created at the end of the class definition

  • attribute_name (str) -- name of the attribute that will be attached to the builder
  • callbacks (list) -- container used to temporarily aggregate the callbacks


Alias for field number 0

Alias for field number 1


Bases: object

Manages a single phase of the installation.

This descriptor stores at creation time the name of the method it should search for execution. The method is retrieved at __get__ time, so that it can be overridden by subclasses of whatever class declared the phases.

It also provides hooks to execute arbitrary callbacks before and after the phase.




Bases: type

Permits registering arbitrary functions during class definition and running them later, before or after a given install phase.

Each method decorated with run_before or run_after gets temporarily stored in a global shared state when a class being defined is parsed by the Python interpreter. At class definition time that temporary storage gets flushed and a list of callbacks is attached to the class being defined.

Decorator to register a function for running after a given phase.
  • phase (str) -- phase after which the function must run.
  • when (str) -- condition under which the function is run (if None, it is always run).



Decorator to register a function for running before a given phase.
  • phase (str) -- phase before which the function must run.
  • when (str) -- condition under which the function is run (if None, it is always run).




Class decorator used to register the default builder for a given build-system.
build_system_name (str) -- name of the build-system


Given a package object with an associated concrete spec, return the name of its build system.
pkg (spack.package_base.PackageBase) -- package for which we want the build system name


Given a package object with an associated concrete spec, return the builder object that can install it.
pkg (spack.package_base.PackageBase) -- package for which we want the builder


Decorator to register a function for running after a given phase.
  • phase (str) -- phase after which the function must run.
  • when (str) -- condition under which the function is run (if None, it is always run).



Decorator to register a function for running before a given phase.
  • phase (str) -- phase before which the function must run.
  • when (str) -- condition under which the function is run (if None, it is always run).



spack.caches module

Caches used by Spack to store data



Bases: object
Fetch and relocate the fetcher's target into our mirror cache.

Symlink a human readable path in our mirror to the actual storage location.


Filesystem cache of downloaded archives.

This prevents Spack from repeatedly fetching the same files when building the same package different ways or multiple times.


The MISC_CACHE is Spack's cache for small data.

Currently the MISC_CACHE stores indexes for virtual dependency providers and for which packages provide which tags.


spack.ci module

Bases: object

Class for managing CDash data and processing.


Returns the CDash build name.

A name will be generated if the current_spec property is set; otherwise, the value will be retrieved from the environment through the SPACK_CDASH_BUILD_NAME variable.

Returns: (str) current spec's CDash build name.


Returns the CDash build stamp.

The one defined by SPACK_CDASH_BUILD_STAMP environment variable is preferred due to the representation of timestamps; otherwise, one will be built.

Returns: (str) current CDash build stamp


Copy test results to artifacts directory.




Explicitly report skipping testing of a spec (e.g., its CI configuration identifies it as known to have broken tests or the CI installation failed).
  • spec -- spec being tested
  • report_dir -- directory where the report will be written
  • reason -- reason the test is being skipped





Bases: tuple
Alias for field number 0

Alias for field number 1



Bases: object

Spack CI object used to generate the intermediate representation consumed by the CI generator(s).

Generate the IR from the Spack CI configurations.



Utility method to determine if this spack instance is capable of signing binary packages. This is currently only possible if the spack gpg keystore contains exactly one secret key.

Utility method to determine if this spack instance is capable (at least in theory) of verifying signed binaries.

Determine which packages were added, removed or changed between rev1 and rev2, and return the names as a set

Copy file(s) to the given artifacts directory
  • src (str) -- the glob-friendly path expression for the file(s) to copy
  • artifacts_dir (str) -- the destination directory



Copy selected build stage file(s) to the given artifacts directory

Looks for build logs in the stage directory of the given job_spec, and attempts to copy the files into the directory given by job_log_dir.

  • job_spec -- spec associated with spack install log
  • job_log_dir -- path into which build log should be copied



Copy test log file(s) to the given artifacts directory
  • test_stage (str) -- test stage path
  • job_test_dir (str) -- the destination artifacts test directory



Create the buildcache at the provided mirror(s).
  • input_spec -- Installed spec to package and push
  • destination_mirror_urls -- List of urls to push to
  • sign_binaries -- Whether or not to sign buildcache entry


Returns: A list of PushResults, indicating success or failure.


Fetch the broken spec file for each of the hashes under the base_url and print a message with some details about each one.

Download the artifacts.zip file at the given url and extract the contents into the given work_dir.

  • url (str) -- Complete url to artifacts.zip file
  • work_dir (str) -- Path to destination where artifacts should be extracted



Generate a gitlab yaml file to run a dynamic child pipeline from the spec matrix in the active environment.

  • env (spack.environment.Environment) -- Activated environment object which must contain a gitlab-ci section describing how to map specs to runners
  • print_summary (bool) -- Should we print a summary of all the jobs in the stages in which they were placed.
  • output_file (str) -- File path where generated file should be written
  • prune_dag (bool) -- If True, do not generate jobs for specs that are already built on the mirror.
  • check_index_only (bool) -- If True, attempt to fetch the mirror index and only use that to determine whether built specs on the mirror are up to date (this mode results in faster yaml generation time). Otherwise, also check each spec directly by url (useful if there is no index or it might be out of date).
  • run_optimizer (bool) -- If True, post-process the generated yaml to try to reduce its size (attempts to collect repeated configuration and replace it with definitions).
  • use_dependencies (bool) -- If true, use "dependencies" rather than "needs" ("needs" allows DAG scheduling). Useful if the gitlab instance cannot be configured to handle more than a few "needs" per job.
  • artifacts_root (str) -- Path where artifacts like logs, environment files (spack.yaml, spack.lock), etc should be written. GitLab requires this to be within the project directory.
  • remote_mirror_override (str) -- Typically only needed when one spack.yaml is used to populate several mirrors with binaries, based on some criteria. Spack protected pipelines populate different mirrors based on branch name, facilitated by this option. DEPRECATED
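From the command line, inside an activated environment, this corresponds to something like (flags vary across Spack versions):

$ spack ci generate --output-file .gitlab-ci.yml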



If this is a git repo get the revisions to use when checking for changed packages and spack core modules.

Given a spec and possibly a build group, return the job name. If the resulting name is longer than 255 characters, it will be truncated.
  • spec (spack.spec.Spec) -- Spec the job will build
  • build_group (str) -- Name of the build group this job belongs to (a CDash notion)


Returns: The job name


If spack is running from a git repo, return the most recent git log entry, otherwise, return a string containing the spack version.

Given a list of affected package names and an active/concretized environment, return the set of all concrete specs from the environment that could have been affected by changing the list of packages.

If a dependent_traverse_depth is given, it is used to limit upward (in the parent direction) traversal of specs of touched packages. E.g. if 1 is provided, then only direct dependents of touched package specs are traversed to produce specs that could have been affected by changing the package, while if 0 is provided, only the changed specs themselves are traversed. If None is given, upward traversal of touched package specs is done all the way to the environment roots. Providing a negative number results in no traversals at all, yielding an empty set.


  • env (spack.environment.Environment) -- Active concrete environment
  • affected_pkgs (List[str]) -- Affected package names
  • dependent_traverse_depth -- Optional integer to limit dependent traversal, or None to disable the limit.

A set of concrete specs from the active environment including those associated with affected packages, their dependencies and dependents, as well as their dependents' dependencies.


Given an environment manifest path and two revisions to compare, return whether or not the stack was changed. Returns True if the environment manifest changed between the provided revisions (or additionally if the .gitlab-ci.yml file itself changed). Returns False otherwise.


  • base64_signing_key (str) -- A gpg key including the secret key, armor-exported and base64 encoded, so it can be stored in a gitlab CI variable. For an example of how to generate such a key, see: https://github.com/spack/spack-infrastructure/blob/main/gitlab-docker/files/gen-key



Create a script for the command and run it. Copy the script to the reproducibility directory.
  • name (str) -- name of the command being processed
  • commands (list) -- list of arguments for single command or list of lists of arguments for multiple commands. No shell escape is performed.
  • repro_dir (str) -- Job reproducibility directory
  • run (bool) -- Run the script and return the exit code if True


Returns: the exit code from processing the command


Push one or more binary packages to the mirror.
  • input_spec (spack.spec.Spec) -- Installed spec to push
  • mirror_url (str) -- Base url of target mirror
  • sign_binaries (bool) -- If True, spack will attempt to sign binary package before pushing.



Read data from broken specs file located at the url, return as a yaml object.

Remove all mirrors from the given config scope, the exceptions being any listed in mirrors_to_keep, which is a list of mirror urls.

Given a url to gitlab artifacts.zip from a failed 'spack ci rebuild' job, attempt to setup an environment in which the failure can be reproduced locally. This entails the following:

First download and extract artifacts. Then look through those artifacts to glean some information needed for the reproducer (e.g. one of the artifacts contains information about the version of spack tested by gitlab, another is the generated pipeline yaml containing details of the job like the docker image used to run it). The output of this function is a set of printed instructions for running docker and then commands to run to reproduce the build once inside the container.


Run stand-alone tests on the current spec.
kwargs (dict) -- dictionary of arguments used to run the tests

List of recognized keys:

  • "cdash" (CDashHandler): (optional) cdash handler instance
  • "fail_fast" (bool): (optional) terminate tests after the first failure
  • "log_file" (str): (optional) test log file name if NOT CDash reporting
  • "job_spec" (Spec): spec that was built
  • "repro_dir" (str): reproduction directory


Look in the local spack clone to find the checkout_commit and, if provided, the merge_commit given as arguments. If those commits can be found locally, then clone spack and attempt to recreate a merge commit with the same parent commits as tested in gitlab. This looks something like: 1) git clone repo && cd repo, 2) git checkout <checkout_commit>, 3) git merge <merge_commit>. If there is no merge_commit provided, then skip step (3).

  • repro_dir (str) -- Location where spack should be cloned
  • checkout_commit (str) -- SHA of PR branch commit
  • merge_commit (str) -- SHA of target branch parent




Take a set of release specs and generate a list of "stages", where the jobs in any stage are dependent only on jobs in previous stages. This allows us to maximize build parallelism within the gitlab-ci framework.

specs (Iterable) -- Specs to build

Returns: A tuple of information objects describing the specs, dependencies and stages:

  • spec_labels: A dictionary mapping the spec labels (which are formatted as pkg-name/hash-prefix) to concrete specs.
  • deps: A dictionary where the keys should also be the keys of the spec_labels dictionary, and the values are the set of dependencies for that spec.
  • stages: An ordered list of sets, each of which contains all the jobs to be built in that stage. The jobs are expressed in the same format as the keys in the spec_labels and deps objects.




Given a url to write to and the details of the failed job, write an entry in the broken specs list.

spack.ci_needs_workaround module




spack.ci_optimization module

Modifies the given object "yaml" so that it includes an "extends" key whose value features "key".

If "extends" is not in yaml, then yaml is modified such that yaml["extends"] == key.

If yaml["extends"] is a str, then yaml is modified such that yaml["extends"] == [yaml["extends"], key]

If yaml["extends"] is a list that does not include key, then key is appended to the list.

Otherwise, yaml is left unchanged.
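A minimal sketch of the behavior described above (not the actual implementation):

def add_extends(yaml, key):
    # no "extends" key yet: set it to the given key
    if "extends" not in yaml:
        yaml["extends"] = key
    # a single string: turn it into a two-element list
    elif isinstance(yaml["extends"], str):
        yaml["extends"] = [yaml["extends"], key]
    # a list missing the key: append it
    elif isinstance(yaml["extends"], list) and key not in yaml["extends"]:
        yaml["extends"].append(key)
    # anything else: leave yaml unchanged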


Builds a histogram of values given an iterable of mappings and a key.

For each mapping "m" with key "key" in iterator, the value m[key] is considered.

Returns a list of tuples (hash, count, proportion, value), where

  • "hash" is a sha1sum hash of the value.
  • "count" is the number of occurences of values that hash to "hash".
  • "proportion" is the proportion of all values considered above that hash to "hash".
  • "value" is one of the values considered above that hash to "hash". Which value is chosen when multiple values hash to the same "hash" is undefined.



The list is sorted in descending order by count, yielding the most frequently occurring hashes first.
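A sketch of the described computation, assuming values are JSON-serializable so they can be hashed consistently (not the actual implementation):

import hashlib
import json

def build_histogram(iterator, key):
    counts, examples, total = {}, {}, 0
    for m in iterator:
        if key not in m:
            continue
        value = m[key]
        h = hashlib.sha1(json.dumps(value, sort_keys=True).encode()).hexdigest()
        counts[h] = counts.get(h, 0) + 1
        examples[h] = value
        total += 1
    results = [(h, n, n / total, examples[h]) for h, n in counts.items()]
    results.sort(key=lambda t: t[1], reverse=True)
    return results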


Factor prototype object "sub" out of the values of mapping "yaml".

Consider a modified copy of yaml, "new", where for each key, "key" in yaml:

  • If yaml[key] matches sub, then new[key] = subkeys(yaml[key], sub).
  • Otherwise, new[key] = yaml[key].



If the above match criterion is not satisfied for any such key, then (yaml, None) is returned and the yaml object is left unchanged.

Otherwise, each matching value in new is modified as in add_extends(new[key], common_key), and then new[common_key] is set to sub. The common_key value is chosen such that it does not match any preexisting key in new. In this case, (new, common_key) is returned.


Returns True if the test object "obj" matches the prototype object "proto".

If obj and proto are mappings, obj matches proto if (key in obj) and (obj[key] matches proto[key]) for every key in proto.

If obj and proto are sequences, obj matches proto if they are of the same length and (a matches b) for every (a,b) in zip(obj, proto).

Otherwise, obj matches proto if obj == proto.

Precondition: proto must not have any reference cycles
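A sketch implementing these matching rules (treating strings as scalars is an assumption not spelled out above):

from collections.abc import Mapping, Sequence

def matches(obj, proto):
    if isinstance(obj, Mapping) and isinstance(proto, Mapping):
        return all(key in obj and matches(obj[key], proto[key]) for key in proto)
    if (isinstance(obj, Sequence) and not isinstance(obj, str)
            and isinstance(proto, Sequence) and not isinstance(proto, str)):
        return len(obj) == len(proto) and all(
            matches(a, b) for a, b in zip(obj, proto))
    return obj == proto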





Returns the test mapping "obj" after factoring out the items it has in common with the prototype mapping "proto".

Consider a recursive merge operation, merge(a, b) on mappings a and b, that returns a mapping, m, whose keys are the union of the keys of a and b, and for every such key, "key", its corresponding value is:

  • merge(a[key], b[key]) if a[key] and b[key] are mappings, or
  • a[key] otherwise



If obj and proto are mappings, the returned object is the smallest object, "a", such that merge(a, proto) matches obj.

Otherwise, obj is returned.


Try applying an optimization pass and return information about the result

"name" is a string describing the nature of the pass. If it is a non-empty string, summary statistics are also printed to stdout.

"yaml" is the object to apply the pass to.

"optimization_pass" is the function implementing the pass to be applied.

"args" and "kwargs" are the additional arguments to pass to optimization pass. The pass is applied as

>>> (new_yaml, *other_results) = optimization_pass(yaml, *args, **kwargs)
    

The pass's results are greedily rejected if it does not modify the original yaml document, or if it produces a yaml document that serializes to a larger string.

Returns (new_yaml, yaml, applied, other_results) if applied, or (yaml, new_yaml, applied, other_results) otherwise.


spack.compiler module

Bases: object

This class encapsulates a Spack "compiler", which includes C, C++, and Fortran compilers. Subclasses should implement support for specific compilers, their possible names, arguments, and how to identify the particular type of compiler.






Returns the flag used by the C compiler to produce Position Independent Code (PIC).









Returns the flag used by the C++ compiler to produce Position Independent Code (PIC).




Override just this to override all compiler version functions.



Extracts the version from the compiler's output.


Returns the flag used by the F77 compiler to produce Position Independent Code (PIC).




Returns the flag used by the FC compiler to produce Position Independent Code (PIC).



Query the compiler for its version.

This is the "real" compiler version, regardless of what is in the compilers.yaml file, which the user can change to name their compiler.

Use the runtime environment of the compiler (modules and environment modifications) to enable the compiler to run properly on any platform.


Return values to ignore when invoking the compiler to get its version


Platform matcher for Platform objects supported by compiler

Flag that needs to be used to pass an argument to the linker.



Query the compiler for its install prefix. This is the install path as reported by the compiler. Note that paths for cc, cxx, etc are not enough to find the install prefix of the compiler, since they can be symlinks, wrappers, or filenames instead of absolute paths.


Executable reported compiler version used for API-determinations

E.g. C++11 flag checks.


For executables created with this compiler, the compiler libraries that would be generally required to run it.


Set any environment variables necessary to use the compiler.


This property should be overridden in the compiler subclass if a verbose flag is available.

If it is not overridden, it is assumed to not be supported.


Raise an error if any of the compiler executables is not valid.

This method confirms that for all of the compilers (cc, cxx, f77, fc) that have paths, those paths exist and are executable by the current user. Raises a CompilerAccessError if any of the non-null paths for the compiler are not accessible.



Compiler argument that produces version information

Regex used to extract version from compiler's output


spack.concretize module

Functions here are used to take abstract specs and make them concrete. For example, if a spec asks for a version between 1.8 and 1.9, these functions might take the most recent 1.9 version of the package available. Or, if the user didn't specify a compiler for a spec, then this will assign a compiler to the spec based on defaults or user preferences.
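For orientation, concretization is usually driven through the Spec API; a minimal sketch:

import spack.spec

abstract = spack.spec.Spec("mpileaks@1.1:1.2")  # abstract: a version range
concrete = abstract.concretized()  # returns a fully concrete copy
print(concrete.version, concrete.compiler)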


Bases: object

You can subclass this class to override some of the default concretization strategies, or you can override all of them.

Adjusts the target microarchitecture if the compiler is too old to support the default one.
spec -- spec to be concretized
True if spec was modified, False otherwise


Controls whether we check that compiler versions actually exist during concretization. Used for testing and for mirror creation

Given a list of candidate virtual and external packages, try to find one that is most ABI compatible.

If the spec is empty provide the defaults of the platform. If the architecture is not a string type, then check if either the platform, target or operating system are concretized. If any of the fields are changed then return True. If everything is concretized (i.e the architecture attribute is a namedtuple of classes) then return False. If the target is a string type, then convert the string into a concretized architecture. If it has no architecture and the root of the DAG has an architecture, then use the root otherwise use the defaults on the platform.

If the spec already has a compiler, we're done. If not, then take the compiler used for the nearest ancestor with a compiler spec and use that. If the ancestor's compiler is not concrete, then use the preferred compiler as specified in spackconfig.

Intuition: Use the spackconfig default if no package that depends on this one has a strict compiler requirement. Otherwise, try to build with the compiler that will be used by libraries that link to this one, to maximize compatibility.


The compiler flags are updated to match those of the spec whose compiler is used, defaulting to no compiler flags in the spec. Default specs set at the compiler level will still be added later.

Add dev_path=* variant to packages built from local source.

If the spec already has variants filled in, return. Otherwise, add the user preferences from packages.yaml or the default variants from the package specification.

If the spec is already concrete, return. Otherwise take the preferred version from spackconfig, and default to the package's version if there are no available versions.

TODO: In many cases we probably want to look for installed versions of each package and use an installed version if we can link to it. The policy implemented here will tend to rebuild a lot of stuff because it will prefer a compiler in the spec to any compiler already-installed things were built with. There is likely some better policy that finds some middle ground between these two extremes.


Returns the preferred target from the package preferences if there's any.
spec -- abstract spec to be concretized



Bases: SpackError

Raised when details on architecture cannot be collected from the system


Bases: SpecError

Raised when a package is configured with the buildable option False, but no satisfactory external versions can be found



Bases: SpackError

Raised when there is no way to have a concrete version for a particular spec.


Bases: SpackError

Raised when there is no available compiler that satisfies a compiler spec.


Given a number of specs as input, tries to concretize them together.
  • tests (bool or list or set) -- False to run no tests, True to test all packages, or a list of package names to run tests for some
  • *abstract_specs -- abstract specs to be concretized, given either as Specs or strings

List of concretized specs




Searches the dag from spec in an intelligent order and looks for a spec that matches a condition

Bases: object

Helper for creating key functions.

This is a wrapper that inverts the sense of the natural comparisons on the object.


spack.config module

This module implements Spack's configuration file handling.

This implements Spack's configuration system, which handles merging multiple scopes with different levels of precedence. See the documentation on Configuration Scopes for details on how Spack's configuration system behaves. The scopes are:

1.
default
2.
system
3.
site
4.
user



And corresponding per-platform scopes. Important functions in this module are:

  • get_config()
  • update_config()

get_config reads in YAML data for a particular scope and returns it. Callers can then modify the data and write it back with update_config.

When read in, Spack validates configurations with jsonschemas. The schemas are in submodules of spack.schema.

Configuration scopes added on the command line; set by spack.main.main().

This is the singleton configuration instance for Spack.



Bases: SpackError

Superclass for all Spack config related errors.


Bases: ConfigError

Issue reading or accessing a configuration file.


Bases: ConfigError

Raised when a configuration format does not match its schema.


Bases: object

This class represents a configuration scope.

A scope is one directory containing named configuration files. Each file is a config "section" (e.g., mirrors, compilers, etc).

Empty cached config information.





Bases: ConfigError

Error for referring to a bad config section name in a configuration.


Bases: object

A full Spack configuration, from a hierarchy of config files.

This class makes it easy to add a new scope on top of an existing one.

Clears the caches for configuration files.

This will cause files to be re-read upon the next request.


List of writable scopes with an associated file.

Get a config section or a single value from one.

Accepts a path syntax that allows us to grab nested config map entries. Getting the 'config' section would look like:

spack.config.get('config')


and the dirty section in the config scope would be:

spack.config.get('config:dirty')


We use : as the separator, like YAML objects.
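For example (config:dirty is a real config entry, used here only for illustration):

import spack.config

config_section = spack.config.get("config")
dirty = spack.config.get("config:dirty", default=False)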


Get configuration settings for a section.

If scope is None or not provided, return the merged contents of all of Spack's configuration scopes. If scope is provided, return only the configuration as specified in that scope.

This strips off the top-level name from the YAML section. That is, for a YAML config file that looks like this:

config:
    install_tree:
        root: $spack/opt/spack
    build_stage:
        - $tmpdir/$user/spack-stage


get_config('config') will return:

{
    'install_tree': {
        'root': '$spack/opt/spack',
    },
    'build_stage': ['$tmpdir/$user/spack-stage'],
}



For some scope and section, get the name of the configuration file.

Non-internal non-platform scope with highest precedence

Platform-specific scopes are of the form scope/platform


Non-internal scope with highest precedence.

List of all scopes whose names match the provided regular expression.

For example, matching_scopes(r'^command') will return all scopes whose names begin with command.


Remove the highest precedence scope and return it.

Print a configuration to stdout.

Add a higher precedence scope to the Configuration.

Remove scope by name; has no effect when scope_name does not exist


Convenience function for setting single values in config files.

Accepts the path syntax described in get().
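A minimal sketch of setting a single value in the user scope:

import spack.config

spack.config.set("config:dirty", True, scope="user")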


Update the configuration file for a particular scope.

Overwrites contents of a section in a scope with update_data, then writes out the config file.

update_data should have the top-level section name stripped off (it will be re-added). Data itself can be a list, dict, or any other yaml-ish structure.

Configuration scopes that are still written in an old schema format will fail to update unless force is True.

  • section (str) -- section of the configuration to be updated
  • update_data (dict) -- data to be used for the update
  • scope (str) -- scope to be updated
  • force (bool) -- force the update




Bases: ConfigScope

A configuration scope that cannot be written to.

This is used for ConfigScopes passed on the command line.


Bases: ConfigScope

An internal configuration scope that is not persisted to a file.

This is for spack internal use so that command-line options and config file settings are accessed the same way, and Spack can easily override settings from files.

Empty cached config information.

Just reads from an internal dictionary.



Metavar to use for commands that accept scopes; this is shorter and more readable than listing all choices.

Mapping of configuration section names (bootstrap, cdash, ci, and so on) to the jsonschema objects used to validate each section; the schemas themselves live in submodules of spack.schema.
'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}, {'additionalProperties': False, 'properties': {'reindex-job': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}, 'reindex-job-remove': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}, {'additionalProperties': False, 'properties': {'signing-job': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}, 'signing-job-remove': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 
'object'}}, 'type': 'object'}}, 'type': 'object'}, {'additionalProperties': False, 'properties': {'cleanup-job': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}, 'cleanup-job-remove': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}, {'additionalProperties': False, 'properties': {'any-job': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}, 'any-job-remove': {'additionalProperties': True, 'properties': {'after_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'before_script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'anyOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}]}]}, 'type': 'array'}, 'rebuild-index': {'type': 'boolean'}, 'target': {'default': 'gitlab', 'enum': ['gitlab'], 'type': 'string'}, 'temporary-storage-url-prefix': {'type': 'string'}}, 'type': 'object'}]}, {'anyOf': [{'additionalProperties': False, 'properties': {'after_script': 
{'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'bootstrap': {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'properties': {'compiler-agnostic': {'default': False, 'type': 'boolean'}, 'name': {'type': 'string'}}, 'required': ['name'], 'type': 'object'}]}, 'type': 'array'}, 'broken-specs-url': {'type': 'string'}, 'broken-tests-packages': {'items': {'type': 'string'}, 'type': 'array'}, 'enable-artifacts-buildcache': {'type': 'boolean'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'mappings': {'items': {'additionalProperties': False, 'properties': {'match': {'items': {'type': 'string'}, 'type': 'array'}, 'remove-attributes': {'additionalProperties': False, 'properties': {'tags': {'items': {'type': 'string'}, 'type': 'array'}}, 'required': ['tags'], 'type': 'object'}, 'runner-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}}, 'required': ['match'], 'type': 'object'}, 'type': 'array'}, 'match_behavior': {'default': 'first', 'enum': ['first', 'merge'], 'type': 'string'}, 'rebuild-index': {'type': 'boolean'}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'service-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'signing-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['mappings'], 'type': 'object'}, {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'bootstrap': {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 
'properties': {'compiler-agnostic': {'default': False, 'type': 'boolean'}, 'name': {'type': 'string'}}, 'required': ['name'], 'type': 'object'}]}, 'type': 'array'}, 'broken-specs-url': {'type': 'string'}, 'broken-tests-packages': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'mappings': {'items': {'additionalProperties': False, 'properties': {'match': {'items': {'type': 'string'}, 'type': 'array'}, 'remove-attributes': {'additionalProperties': False, 'properties': {'tags': {'items': {'type': 'string'}, 'type': 'array'}}, 'required': ['tags'], 'type': 'object'}, 'runner-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}}, 'required': ['match'], 'type': 'object'}, 'type': 'array'}, 'match_behavior': {'default': 'first', 'enum': ['first', 'merge'], 'type': 'string'}, 'rebuild-index': {'type': 'boolean'}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'service-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'signing-job-attributes': {'additionalProperties': False, 'properties': {'after_script': {'items': {'type': 'string'}, 'type': 'array'}, 'before_script': {'items': {'type': 'string'}, 'type': 'array'}, 'image': {'oneOf': [{'type': 'string'}, {'properties': {'entrypoint': {'items': {'type': 'string'}, 'type': 'array'}, 'name': {'type': 'string'}}, 'type': 'object'}]}, 'script': {'items': {'type': 'string'}, 'type': 'array'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['tags'], 'type': 'object'}, 'tags': {'items': {'type': 'string'}, 'type': 'array'}, 'temporary-storage-url-prefix': {'type': 'string'}, 'variables': {'patternProperties': {'[\\w\\d\\-_\\.]+': {'type': 'string'}}, 'type': 'object'}}, 'required': ['mappings'], 'type': 'object'}]}]}}, 'title': 'Spack CI configuration file schema', 'type': 'object'}, 'compilers': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'compilers': {'items': {'additionalProperties': False, 'properties': {'compiler': {'additionalProperties': False, 'properties': {'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': 
{'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'extra_rpaths': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'flags': {'additionalProperties': False, 'properties': {'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'type': 'object'}, 'implicit_rpaths': {'anyOf': [{'items': {'type': 'string'}, 'type': 'array'}, {'type': 'boolean'}]}, 'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 'operating_system': {'type': 'string'}, 'paths': {'additionalProperties': False, 'properties': {'cc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['cc', 'cxx', 'f77', 'fc'], 'type': 'object'}, 'spec': {'type': 'string'}, 'target': {'type': 'string'}}, 'required': ['paths', 'spec', 'modules', 'operating_system'], 'type': 'object'}}, 'type': 'object'}, 'type': 'array'}}, 'title': 'Spack compiler configuration file schema', 'type': 'object'}, 'concretizer': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'concretizer': {'additionalProperties': False, 'properties': {'duplicates': {'properties': {'strategy': {'enum': ['none', 'minimal', 'full'], 'type': 'string'}}, 'type': 'object'}, 'enable_node_namespace': {'type': 'boolean'}, 'reuse': {'oneOf': [{'type': 'boolean'}, {'enum': ['dependencies'], 'type': 'string'}]}, 'targets': {'properties': {'granularity': {'enum': ['generic', 'microarchitectures'], 'type': 'string'}, 'host_compatible': {'type': 'boolean'}}, 'type': 'object'}, 'unify': {'oneOf': [{'type': 'boolean'}, {'enum': ['when_possible'], 'type': 'string'}]}}, 'type': 'object'}}, 'title': 'Spack concretizer configuration file schema', 'type': 'object'}, 'config': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'config': {'default': {}, 'deprecatedProperties': {'error': False, 'message': 'config:terminal_title has been replaced by install_status and is ignored', 'properties': ['terminal_title']}, 'properties': {'additional_external_search_paths': {'items': {'type': 'string'}, 'type': 'array'}, 'aliases': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'allow_sgid': {'type': 'boolean'}, 'binary_index_root': {'type': 'string'}, 'binary_index_ttl': {'minimum': 0, 'type': 'integer'}, 'build_jobs': {'minimum': 1, 'type': 'integer'}, 'build_language': {'type': 'string'}, 'build_stage': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 
'ccache': {'type': 'boolean'}, 'checksum': {'type': 'boolean'}, 'concretizer': {'enum': ['original', 'clingo'], 'type': 'string'}, 'connect_timeout': {'minimum': 0, 'type': 'integer'}, 'db_lock_timeout': {'minimum': 1, 'type': 'integer'}, 'debug': {'type': 'boolean'}, 'deprecated': {'type': 'boolean'}, 'dirty': {'type': 'boolean'}, 'environments_root': {'type': 'string'}, 'extensions': {'items': {'type': 'string'}, 'type': 'array'}, 'flags': {'properties': {'keep_werror': {'enum': ['all', 'specific', 'none'], 'type': 'string'}}, 'type': 'object'}, 'install_hash_length': {'minimum': 1, 'type': 'integer'}, 'install_missing_compilers': {'type': 'boolean'}, 'install_path_scheme': {'type': 'string'}, 'install_status': {'type': 'boolean'}, 'install_tree': {'anyOf': [{'properties': {'padded_length': {'oneOf': [{'minimum': 0, 'type': 'integer'}, {'type': 'boolean'}]}, 'projections': {'patternProperties': {'all|\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'root': {'type': 'string'}}, 'type': 'object'}, {'type': 'string'}]}, 'license_dir': {'type': 'string'}, 'locks': {'type': 'boolean'}, 'misc_cache': {'type': 'string'}, 'package_lock_timeout': {'anyOf': [{'minimum': 1, 'type': 'integer'}, {'type': 'null'}]}, 'shared_linking': {'anyOf': [{'enum': ['rpath', 'runpath'], 'type': 'string'}, {'properties': {'bind': {'type': 'boolean'}, 'type': {'enum': ['rpath', 'runpath'], 'type': 'string'}}, 'type': 'object'}]}, 'source_cache': {'type': 'string'}, 'stage_name': {'type': 'string'}, 'suppress_gpg_warnings': {'type': 'boolean'}, 'template_dirs': {'items': {'type': 'string'}, 'type': 'array'}, 'test_stage': {'type': 'string'}, 'url_fetch_method': {'enum': ['urllib', 'curl'], 'type': 'string'}, 'verify_ssl': {'type': 'boolean'}}, 'type': 'object'}}, 'title': 'Spack core configuration file schema', 'type': 'object'}, 'definitions': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'definitions': {'default': [], 'items': {'patternProperties': {'^(?!when$)\\w*': {'default': [], 'items': {'anyOf': [{'additionalProperties': False, 'properties': {'exclude': {'items': {'type': 'string'}, 'type': 'array'}, 'matrix': {'items': {'items': {'type': 'string'}, 'type': 'array'}, 'type': 'array'}}, 'type': 'object'}, {'type': 'string'}, {'type': 'null'}]}, 'type': 'array'}}, 'properties': {'when': {'type': 'string'}}, 'type': 'object'}, 'type': 'array'}}, 'title': 'Spack definitions configuration file schema', 'type': 'object'}, 'mirrors': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'mirrors': {'additionalProperties': False, 'default': {}, 'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'anyOf': [{'required': ['url']}, {'required': ['fetch']}, {'required': ['pull']}], 'properties': {'access_pair': {'items': {'maxItems': 2, 'minItems': 2, 'type': ['string', 'null']}, 'type': 'array'}, 'access_token': {'type': ['string', 'null']}, 'binary': {'type': 'boolean'}, 'endpoint_url': {'type': ['string', 'null']}, 'fetch': {'anyOf': [{'type': 'string'}, {'additionalProperties': False, 'properties': {'access_pair': {'items': {'maxItems': 2, 'minItems': 2, 'type': ['string', 'null']}, 'type': 'array'}, 'access_token': {'type': ['string', 'null']}, 'endpoint_url': {'type': ['string', 'null']}, 'profile': {'type': ['string', 'null']}, 'url': {'type': 'string'}}, 'type': 'object'}]}, 'profile': {'type': ['string', 'null']}, 'push': {'anyOf': [{'type': 'string'}, 
{'additionalProperties': False, 'properties': {'access_pair': {'items': {'maxItems': 2, 'minItems': 2, 'type': ['string', 'null']}, 'type': 'array'}, 'access_token': {'type': ['string', 'null']}, 'endpoint_url': {'type': ['string', 'null']}, 'profile': {'type': ['string', 'null']}, 'url': {'type': 'string'}}, 'type': 'object'}]}, 'source': {'type': 'boolean'}, 'url': {'type': 'string'}}, 'type': 'object'}]}}, 'type': 'object'}}, 'title': 'Spack mirror configuration file schema', 'type': 'object'}, 'modules': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'modules': {'additionalProperties': False, 'patternProperties': {'^(?!prefix_inspections$)\\w[\\w-]*$': {'additionalProperties': False, 'default': {}, 'properties': {'arch_folder': {'type': 'boolean'}, 'enable': {'default': [], 'items': {'enum': ['tcl', 'lmod'], 'type': 'string'}, 'type': 'array'}, 'lmod': {'allOf': [{'allOf': [{'properties': {'all': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}, 'defaults': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'exclude': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'exclude_implicits': {'default': False, 'type': 'boolean'}, 'hash_length': {'default': 7, 'minimum': 0, 'type': 'integer'}, 'hide_implicits': {'default': False, 'type': 'boolean'}, 'include': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'naming_scheme': {'type': 'string'}, 'projections': {'patternProperties': {'all|\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'verbose': {'default': False, 'type': 'boolean'}}}, {'patternProperties': {'(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|include|exclude|projections|naming_scheme|core_compilers|all)(^\\w[\\w-]*)': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': 
{'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}, '^[\\^@%+~]': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}}, 'validate_spec': True}], 'default': {}, 'type': 'object'}, {'properties': {'core_compilers': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'core_specs': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'filter_hierarchy_specs': {'patternProperties': {'(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|include|exclude|projections|naming_scheme|core_compilers|all)(^\\w[\\w-]*)': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'hierarchy': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}]}, 'prefix_inspections': {'additionalProperties': False, 'patternProperties': {'^[\\w-]*': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'roots': {'properties': {'lmod': {'type': 'string'}, 'tcl': {'type': 'string'}}, 'type': 'object'}, 'tcl': {'allOf': [{'allOf': [{'properties': {'all': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 
'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}, 'defaults': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'exclude': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'exclude_implicits': {'default': False, 'type': 'boolean'}, 'hash_length': {'default': 7, 'minimum': 0, 'type': 'integer'}, 'hide_implicits': {'default': False, 'type': 'boolean'}, 'include': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'naming_scheme': {'type': 'string'}, 'projections': {'patternProperties': {'all|\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'verbose': {'default': False, 'type': 'boolean'}}}, {'patternProperties': {'(?!hierarchy|core_specs|verbose|hash_length|defaults|filter_hierarchy_specs|hide|include|exclude|projections|naming_scheme|core_compilers|all)(^\\w[\\w-]*)': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}, '^[\\^@%+~]': {'additionalProperties': False, 'default': {}, 'properties': {'autoload': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'conflict': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'environment': {'additionalProperties': False, 
'default': {}, 'properties': {'append_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'remove_path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}, 'filter': {'additionalProperties': False, 'default': {}, 'properties': {'exclude_env_vars': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'load': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'prerequisites': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'suffixes': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object', 'validate_spec': True}, 'template': {'type': 'string'}}, 'type': 'object'}}, 'validate_spec': True}], 'default': {}, 'type': 'object'}, {}]}, 'use_view': {'anyOf': [{'type': 'string'}, {'type': 'boolean'}]}}, 'type': 'object'}}, 'properties': {'prefix_inspections': {'additionalProperties': False, 'patternProperties': {'^[\\w-]*': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack module file configuration file schema', 'type': 'object'}, 'packages': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'packages': {'additionalProperties': False, 'default': {}, 'patternProperties': {'(?!^all$)(^\\w[\\w-]*)': {'additionalProperties': False, 'default': {}, 'deprecatedProperties': {'error': False, 'message': "setting 'compiler:', 'target:' or 'provider:' preferences in a package-specific section of packages.yaml is deprecated, and will be removed in v0.22.\n\n\tThese preferences will be ignored by Spack, and can be set only in the 'all' section of the same file. 
You can run:\n\n\t\t$ spack audit configs\n\n\tto get better diagnostics, including files:lines where the deprecated attributes are used.\n\n\tUse requirements to enforce conditions on specific packages: https://spack.readthedocs.io/en/latest/packages_yaml.html#package-requirements\n", 'properties': ['target', 'compiler', 'providers']}, 'properties': {'buildable': {'default': True, 'type': 'boolean'}, 'compiler': {}, 'externals': {'items': {'additionalProperties': True, 'properties': {'extra_attributes': {'type': 'object'}, 'modules': {'items': {'type': 'string'}, 'type': 'array'}, 'prefix': {'type': 'string'}, 'spec': {'type': 'string'}}, 'required': ['spec'], 'type': 'object'}, 'type': 'array'}, 'package_attributes': {'additionalProperties': False, 'patternProperties': {'\\w+': {}}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'providers': {}, 'require': {'oneOf': [{'items': {'oneOf': [{'additionalProperties': False, 'properties': {'any_of': {'items': {'type': 'string'}, 'type': 'array'}, 'message': {'type': 'string'}, 'one_of': {'items': {'type': 'string'}, 'type': 'array'}, 'spec': {'type': 'string'}, 'when': {'type': 'string'}}, 'type': 'object'}, {'type': 'string'}]}, 'type': 'array'}, {'type': 'string'}]}, 'target': {}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'version': {'default': [], 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'type': 'array'}}, 'type': 'object'}}, 'properties': {'all': {'additionalProperties': False, 'default': {}, 'deprecatedProperties': {'error': False, 'message': "setting version preferences in the 'all' section of packages.yaml is deprecated and will be removed in v0.22\n\n\tThese preferences will be ignored by Spack. 
You can set them only in package-specific sections of the same file.\n", 'properties': ['version']}, 'properties': {'buildable': {'default': True, 'type': 'boolean'}, 'compiler': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'package_attributes': {'additionalProperties': False, 'patternProperties': {'\\w+': {}}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'providers': {'additionalProperties': False, 'default': {}, 'patternProperties': {'\\w[\\w-]*': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'type': 'object'}, 'require': {'oneOf': [{'items': {'oneOf': [{'additionalProperties': False, 'properties': {'any_of': {'items': {'type': 'string'}, 'type': 'array'}, 'message': {'type': 'string'}, 'one_of': {'items': {'type': 'string'}, 'type': 'array'}, 'spec': {'type': 'string'}, 'when': {'type': 'string'}}, 'type': 'object'}, {'type': 'string'}]}, 'type': 'array'}, {'type': 'string'}]}, 'target': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'version': {}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack package configuration file schema', 'type': 'object'}, 'repos': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'repos': {'default': [], 'items': {'type': 'string'}, 'type': 'array'}}, 'title': 'Spack repository configuration file schema', 'type': 'object'}, 'upstreams': {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'upstreams': {'default': {}, 'patternProperties': {'\\w[\\w-]*': {'additionalProperties': False, 'default': {}, 'properties': {'install_tree': {'type': 'string'}, 'modules': {'properties': {'lmod': {'type': 'string'}, 'tcl': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'}}, 'type': 'object'}}, 'title': 'Spack core configuration file schema', 'type': 'object'}}
Dict from section names -> schema for that section


Add the given configuration to the specified config scope. Add accepts a path; to add from a filename, use add_from_file().
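For example, a single value can be added to a scope with a colon-separated path (a minimal sketch; the scope name is illustrative):

import spack.config

# the trailing path component is parsed as the value being added
spack.config.add("config:debug:true", scope="user")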


Add updates to a config from a filename

Return a list of configuration URLs.
base_url -- URL for a configuration (yaml) file or a directory containing yaml file(s)
List of configuration file(s) or empty list if none


Singleton Configuration instance.

This constructs one instance associated with this module and returns it. It is bundled inside a function so that configuration can be initialized lazily.

object for accessing spack configuration
Return type
(Configuration)


Return the config scope that is listed by default.

Commands that list configuration list all scopes (merged) by default.


Return the config scope that commands should modify by default.

Commands that modify configuration by default modify the highest priority scope.

section (str) -- Section for which to get the default scope. If this is not 'compilers', a general (non-platform) scope is used.


Return a function that takes as input a dictionary read from a configuration file and updates it to the latest format.

The function returns True if there was any update, False otherwise.

section (str) -- section of the configuration e.g. "packages", "config", etc.
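A hypothetical usage sketch, assuming the ensure_latest_format_fn name documented here and an illustrative data layout:

import spack.config

# obtain the updater for the "packages" section, then normalize a dict
# read from a packages.yaml file (the shape of `data` is illustrative)
updater = spack.config.ensure_latest_format_fn("packages")
data = {"packages": {"all": {"compiler": ["gcc"]}}}
changed = updater(data)  # True if `data` was rewritten to the latest format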


Retrieve configuration file(s) at the specified URL.
  • url -- URL for a configuration (yaml) file or a directory containing yaml file(s)
  • dest_dir -- destination directory
  • skip_existing -- Skip files that already exist in dest_dir if True; otherwise, replace those files

Returns the path to the corresponding file if the URL is (or contains) a single file and it is the only file in the destination directory; otherwise, returns the root (dest_dir) directory when multiple configuration files exist or are retrieved.


Module-level wrapper for Configuration.get().

Returns an instance of a type that will pass validation for path.

The instance is created by calling the constructor with no arguments. If multiple types will satisfy validation for data at the configuration path given, the priority order is list, dict, str, bool, int, float.


Merges source into dest; entries in source take precedence over dest.

This routine may modify dest, and its return value should be assigned back to dest in case dest was None to begin with, e.g.:

dest = merge_yaml(dest, source)


In the result, elements from lists from source will appear before elements of lists from dest. Likewise, when iterating over keys or items in merged OrderedDict objects, keys from source will appear before keys from dest.

Config file authors can optionally end any attribute in a dict with :: instead of :, and the key will override that of the parent instead of merging.

+: will extend the default prepend merge strategy to include string concatenation; -: will change the merge strategy to append, and it also includes string concatenation.
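A minimal sketch of the precedence rules, using plain dicts in place of data read from YAML files (values are illustrative):

from spack.config import merge_yaml

dest = {"config": {"build_jobs": 4, "ccache": False}}
source = {"config": {"build_jobs": 16}}
dest = merge_yaml(dest, source)
# entries from source take precedence:
# dest == {"config": {"build_jobs": 16, "ccache": False}}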


Simple way to override config settings within a context.
  • path_or_scope (ConfigScope or str) -- scope or single option to override
  • value (object or None) -- value for the single option


Temporarily push a scope on the current configuration, then remove it after the context completes. If a single option is provided, create an internal config scope for it and push/pop that scope.
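For example, the single-option form can pin a value for the duration of a with block (a minimal sketch; the option and value are illustrative):

import spack.config

with spack.config.override("config:build_jobs", 2):
    assert spack.config.get("config:build_jobs") == 2
# the previous value is restored once the context exits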


Process a path argument to config.set() that may contain overrides ('::' or trailing ':'). Quoted path components outside of the value are considered ill-formed and will raise an error. E.g., this:is:a:path:'value:with:colon' will yield:
[this, is, a, path, value:with:colon]




Transform a GitHub URL to its raw form to avoid undesirable HTML.
url -- URL to be converted to raw form

Returns: (str) raw GitHub/GitLab URL, or the original URL


Read a YAML configuration file.

User can provide a schema for validation. If no schema is provided, we will infer the schema from the top-level key.


Unmerges source from dest; entries in source take precedence over dest.

This routine may modify dest, and its return value should be assigned back to dest in case dest was None to begin with, e.g.:

dest = remove_yaml(dest, source)


In the result, elements from lists from source will not appear as elements of lists from dest. Likewise, when iterating over keys or items in merged OrderedDict objects, keys from source will not appear as keys in dest.

Config file authors can optionally end any attribute in a dict with :: instead of :, and the key will remove the entire section from dest.
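A minimal sketch mirroring the merge_yaml example above (values are illustrative; the exact unmerge semantics follow the description here):

from spack.config import remove_yaml

dest = {"config": {"build_jobs": 16, "ccache": False}}
source = {"config": {"build_jobs": 16}}
dest = remove_yaml(dest, source)
# the entry present in source is unmerged from dest:
# dest == {"config": {"ccache": False}}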


Convenience function to get list of configuration scopes.

Convenience function for setting single values in config files.

Accepts the path syntax described in get().
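For example (a minimal sketch; the scope name is illustrative):

import spack.config

# persist a single value into the user scope's config.yaml
spack.config.set("config:build_jobs", 8, scope="user")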


Use the configuration scopes passed as arguments within the context manager.
*scopes_or_paths -- scope objects or paths to be used
Configuration object associated with the scopes passed as arguments
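A hypothetical sketch, assuming a directory of configuration files is passed (the path is illustrative):

import spack.config

with spack.config.use_configuration("/tmp/extra-config") as config:
    # only the scopes passed above are consulted inside the block
    print(config.get("config:install_tree"))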


Validate data read in from a Spack YAML file.
  • data (dict or list) -- data read from a Spack YAML file
  • schema (dict or list) -- jsonschema to validate data


This leverages the line information (start_mark, end_mark) stored on Spack YAML structures.


spack.context module

This module provides classes used in user and build environments.

Bases: Enum

Enum used to indicate the context in which an environment has to be set up: build, run, or test.






spack.cray_manifest module



Cray systems can store a Spack-compatible description of system packages here.




When creating a Compiler object, Spack expects a name matching one of the classes in spack.compilers. Names in the Cray manifest may differ; for cases where we know the name refers to a compiler in Spack, this function translates it automatically.

This function will raise an error if there is no recorded translation and the name doesn't match a known compiler name.


spack.database module

Spack's installation tracking database.

The database serves two purposes:

1.
It implements a cache on top of a potentially very large Spack directory hierarchy, speeding up many operations that would otherwise require filesystem access.
2.
It will allow us to track external installations as well as lost packages and their dependencies.



Prior to the implementation of this store, a directory layout served as the authoritative database of packages in Spack. This module provides a cache and a sanity checking mechanism for what is in the filesystem.

Bases: SpackError

Raised when errors are found while reading the database.




Bases: object



Return the spec that the given spec is deprecated for, or None.

Look up a spec by DAG hash, or by a DAG hash prefix.
  • dag_hash (str) -- hash (or hash prefix) to look up
  • default (object or None) -- default value to return if dag_hash is not in the DB (default: None)
  • installed (bool or InstallStatus or Iterable or None) -- if True, includes only installed specs in the search; if False, only missing specs; and if any, all specs in the database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: any)


installed defaults to any so that we can refer to any known hash. Note that query() and query_one() differ in that they only return installed specs by default.

a list of specs matching the hash or hash prefix
Return type
(list)


Look up a spec in this DB by DAG hash, or by a DAG hash prefix.
  • dag_hash (str) -- hash (or hash prefix) to look up
  • default (object or None) -- default value to return if dag_hash is not in the DB (default: None)
  • installed (bool or InstallStatus or Iterable or None) -- if True, includes only installed specs in the search; if False, only missing specs; and if any, all specs in the database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: any)


installed defaults to any so that we can refer to any known hash. Note that query() and query_one() differ in that they only return installed specs by default.

a list of specs matching the hash or hash prefix
Return type
(list)








Query the Spack database including all upstream databases.
  • query_spec -- queries iterate through specs in the database and return those that satisfy the supplied query_spec. If query_spec is any, this will match all specs in the database. If it is a spec, we'll evaluate spec.satisfies(query_spec).
  • known (bool or None) -- Specs that are "known" are those for which Spack can locate a package.py file -- i.e., Spack "knows" how to install them. Specs that are unknown may represent packages that existed in a previous version of Spack, but have since either changed their name or been removed
  • installed (bool or InstallStatus or Iterable or None) -- if True, includes only installed specs in the search; if False, only missing specs; and if any, all specs in the database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: True)
  • explicit (bool or None) -- A spec that was installed following a specific user request is marked as explicit. If instead it was pulled in as a dependency of a user-requested spec, it's considered implicit.
  • start_date (datetime.datetime or None) -- filters the query discarding specs that have been installed before start_date.
  • end_date (datetime.datetime or None) -- filters the query discarding specs that have been installed after end_date.
  • hashes (Container) -- list or set of hashes that we can use to restrict the search
  • in_buildcache (bool or None) -- Specs that are marked in this database as part of an associated binary cache are in_buildcache. All other specs are not. This field is used for querying mirror indices. Default is any.

list of specs that match the query
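A hypothetical sketch of a query against the full database; the handle spack.store.STORE.db is an assumption (older Spack versions expose spack.store.db instead):

import spack.store

db = spack.store.STORE.db
# all explicitly installed zlib specs, local or upstream
for spec in db.query("zlib", installed=True, explicit=True):
    print(spec.dag_hash(7), spec.short_spec)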


Get a spec for a hash, and whether it's installed upstream.
(bool, optional InstallRecord): the bool tells us whether the spec is installed upstream; its InstallRecord is also returned if it's installed at all, otherwise None.

Return type
(tuple)


Query only the local Spack database.

This function doesn't guarantee any sorting of the returned data for performance reasons, since comparing specs for __lt__ may be an expensive operation.

  • query_spec -- queries iterate through specs in the database and return those that satisfy the supplied query_spec. If query_spec is any, this will match all specs in the database. If it is a spec, we'll evaluate spec.satisfies(query_spec).
  • known (bool or None) -- Specs that are "known" are those for which Spack can locate a package.py file -- i.e., Spack "knows" how to install them. Specs that are unknown may represent packages that existed in a previous version of Spack, but have since either changed their name or been removed
  • installed (bool or InstallStatus or Iterable or None) -- if True, includes only installed specs in the search; if False, only missing specs; and if any, all specs in the database. If an InstallStatus or iterable of InstallStatus, returns specs whose install status (installed, deprecated, or missing) matches (one of) the InstallStatus. (default: True)
  • explicit (bool or None) -- A spec that was installed following a specific user request is marked as explicit. If instead it was pulled in as a dependency of a user-requested spec, it's considered implicit.
  • start_date (datetime.datetime or None) -- filters the query discarding specs that have been installed before start_date.
  • end_date (datetime.datetime or None) -- filters the query discarding specs that have been installed after end_date.
  • hashes (Container) -- list or set of hashes that we can use to restrict the search
  • in_buildcache (bool or None) -- Specs that are marked in this database as part of an associated binary cache are in_buildcache. All other specs are not. This field is used for querying mirror indices. Default is any.

list of specs that match the query


Get a spec by hash in the local database. Returns the InstallRecord if the spec is installed locally, otherwise None.

Return type
(InstallRecord or None)


Query for exactly one spec that matches the query spec.

Raises an assertion error if more than one spec matches the query. Returns None if no installed package matches.


Get a read lock context manager for use in a with block.


Build database index from scratch based on a directory layout.

Locks the DB if it isn't locked already.



Return all specs deprecated in favor of the given spec.

Return all the specs that are currently installed but not needed at runtime to satisfy users' requests. Specs are needed if they are either:
1.
Installed on an explicit user request
2.
Installed as a "run" or "link" dependency (even transitive) of a spec at point 1.



Update the spec's explicit state in the database.
  • spec (spack.spec.Spec) -- the spec whose install record is being updated
  • explicit (bool) -- True if the package was requested explicitly by the user, False if it was pulled in as a dependency of an explicit package.



Get a write lock context manager for use in a with block.
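A hypothetical sketch: the names write_transaction() and update_explicit() follow the docstrings above, and db and spec are assumed to be a Database instance and a concrete spec already in the database:

# hold the write lock while mutating the install record
with db.write_transaction():
    db.update_explicit(spec, explicit=True)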


Bases: object

Tracks installation failures.

Prefix failure marking takes two forms: a byte-range lock on the nth byte of a file, used to coordinate between concurrent parallel build processes; and a persistent file, named with the full hash and containing the spec, stored in a subdirectory of the database to enable persistence across overlapping but separate related build processes.

The failure lock file lives alongside the install DB.

n is the sys.maxsize-bit prefix of the associated DAG hash, which makes the likelihood of collision very low with no cleanup required.
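The byte-range mechanism itself can be illustrated with plain POSIX locks; this generic sketch is not Spack's API (Spack wraps locking in llnl.util.lock), and all values are made up:

import fcntl
import os

fd = os.open("/tmp/example.lock", os.O_RDWR | os.O_CREAT, 0o644)
n = 123456  # byte offset derived from a hash prefix (illustrative)
fcntl.lockf(fd, fcntl.LOCK_EX, 1, n)  # lock one byte starting at offset n
try:
    pass  # critical section guarded by this single byte
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN, 1, n)
    os.close(fd)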

Removes any persistent and cached failure tracking for the spec.

See mark().

  • spec -- the spec whose failure indicators are being removed
  • force -- True if the failure information should be cleared when a failure lock exists for the file, or False if the failure should not be cleared (e.g., it may be associated with a concurrent build)



Force remove install failure tracking files.

Ensure a persistent location for dealing with parallel installation failures (e.g., across near-concurrent processes).

Return True if the spec is marked as failed.

Return True if another process has a failure lock on the spec.

Marks a spec as failing to install.
spec -- spec that failed to install


Determine if the spec has a persistent failure marking.



Bases: SpackError

Raised when an upstream DB attempts to acquire a lock


Bases: object

A record represents one installation in the DB.

The record keeps track of the spec for the installation, its install path, AND whether or not it is installed. We need the installed flag in case a user either:

blew away a directory, or
used spack uninstall -f to get rid of it



If, in either case, the package was removed but others still depend on it, we still need to track its spec, so we don't actually remove from the database until a spec has no installed dependents left.

  • spec -- spec tracked by the install record
  • path -- path where the spec has been installed
  • installed -- whether or not the spec is currently installed
  • ref_count (int) -- number of specs that depend on this one
  • explicit (bool or None) -- whether or not this spec was explicitly installed, or pulled-in as a dependency of something else
  • installation_time (datetime.datetime or None) -- time of the installation








Bases: SpackError

Exception raised when the database metadata is newer than current Spack.



Bases: NamedTuple

Data class to configure locks in Database objects

  • enable -- whether to enable locks or not.
  • database_timeout -- timeout for the database lock
  • package_timeout -- timeout for the package lock


Alias for field number 1 (database_timeout)

Alias for field number 0 (enable)

Alias for field number 2 (package_timeout)


Bases: SpackError

Raised when DB cannot find records for dependencies


Configure a database to avoid using locks

Configure the database to use locks without a timeout

Bases: KeyError

Raised when a spec is not found in the database.


Bases: SpackError

Raised when attempting to add non-concrete spec to DB.


Bases: object

Manages acquiring and releasing read or write locks on concrete specs.



Returns True if the spec is already managed by this spec locker

Returns a lock on a concrete spec.

The lock is a byte range lock on the nth byte of a file.

The lock file is self.lock_path.

n is the sys.maxsize-bit prefix of the DAG hash. This makes the likelihood of collision very low AND it gives us readers-writer lock semantics with just a single lockfile, so no cleanup is required.
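
A conceptual sketch of that byte-range trick (not Spack's exact code): a lock offset is derived from the spec's DAG hash, so each spec locks a distinct byte of one shared lock file:

import sys

def lock_offset(dag_hash: str) -> int:
    # Interpret the hex hash as an integer and fold it into the range of
    # valid byte offsets; collisions are possible but extremely unlikely.
    return int(dag_hash, 16) % sys.maxsize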


Returns a raw lock for a Spec, but doesn't keep track of it.



Bases: SpackError

Raised when an operation would need to lock an upstream database


Returns the path of the failures lock file, given the root directory.
root_dir -- root directory containing the database directory


Return a LockConfiguration from a spack.config.Configuration object.

Returns the path of the prefix lock file, given the root directory.
root_dir -- root directory containing the database directory



spack.dependency module

Data structures that represent Spack's dependency relationships.

Bases: object

Class representing metadata for a dependency on a package.

This class differs from spack.spec.DependencySpec because it represents metadata at the Package level. spack.spec.DependencySpec is a descriptor for an actual package configuration, while Dependency is a descriptor for a package's dependency requirements.

A dependency is a requirement for a configuration of another package that satisfies a particular spec. The dependency can have types, which determine how that package configuration is required, e.g. whether it is required for building the package, whether it needs to be linked to, or whether it is needed at runtime so that Spack can call commands from it.

A package can also depend on another package with patches. This is for cases where the maintainers of one package also maintain special patches for their dependencies. If one package depends on another with patches, a special version of that dependency with patches applied will be built for use by the dependent package. The patches are included in the new version's spec hash to differentiate it from unpatched versions of the same package, so that unpatched versions of the dependency package can coexist with the patched version.

Merge constraints, deptypes, and patches of other into self.

Get the name of the dependency package.


spack.deptypes module

Data structures that represent Spack's edge types.

A flag with all dependency types set


The types of dependency relationships that Spack understands.

Default dependency type if none is specified

Default dependency type if none is specified

Type hint for the low-level dependency input (enum.Flag is too slow)

Individual dependency types

Type hint for the high-level dependency input

alias of Union[str, List[str], Tuple[str, ...]]


Convert deptype user input to a DepFlag, or raise ValueError.
deptype -- string representing dependency type, or a list/tuple of such strings. Can also be the builtin function all or the string 'all', which result in a tuple of all dependency types known to Spack.



Transform an iterable of deptype strings into a flag.

Create a string representing deptypes for many dependencies.

The string will be some subset of 'blrt', like 'bl ', 'b t', or ' lr ' where each letter in 'blrt' stands for 'build', 'link', 'run', and 'test' (the dependency types).

For a single dependency, this just indicates that the dependency has the indicated deptypes. For a list of dependencies, this shows whether ANY dependency in the list has the deptypes (so the deptypes are merged).
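
For example (a sketch; the helper names canonicalize and flag_to_chars are assumptions based on the descriptions above):

import spack.deptypes as dt

flag = dt.canonicalize(("build", "link"))  # user input -> DepFlag
chars = dt.flag_to_chars(flag)             # some subset of 'blrt', e.g. 'bl  '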




spack.directives module

This package contains directives that can be used within a package.

Directives are functions that can be called inside a package definition to modify the package, for example:

depends_on("hwloc") provides("mpi") ...



provides and depends_on are spack directives.

The available directives are:

  • build_system
  • conflicts
  • depends_on
  • extends
  • patch
  • provides
  • resource
  • variant
  • version
  • requires



Bases: SpackError

This is raised when something is wrong with a package directive.


Bases: type

Flushes the directives that were temporarily stored in the staging area into the package.

Decorator for Spack directives.

Spack directives allow you to modify a package while it is being defined, e.g. to add version or dependency information. Directives are one of the key pieces of Spack's package "language", which is embedded in python.

Here's an example directive:

@directive(dicts='versions')
def version(pkg, ...):
    ...


This directive allows you to write:

class Foo(Package):
    version(...)


The @directive decorator handles a couple things for you:

1.
Adds the class scope (pkg) as an initial parameter when called, like a class method would. This allows you to modify a package from within a directive, while the package is still being defined.
2.
It automatically adds a dictionary called "versions" to the package so that you can refer to pkg.versions.



The (dicts='versions') part ensures that ALL packages in Spack will have a versions attribute after they're constructed, and that if no directive actually modified it, it will just be an empty dict.

This is just a modular way to add storage attributes to the Package class, and it's how Spack gets information from the packages to the core.


Pop default arguments

Pop the last constraint from the context


Add a spec to the context constraints.



Allows a package to define a conflict.

Currently, a "conflict" is a concretized configuration that is known to be non-valid. For example, a package that is known not to be buildable with intel compilers can declare:

conflicts('%intel')


To express the same constraint only when the 'foo' variant is activated:

conflicts('%intel', when='+foo')


  • conflict_spec (spack.spec.Spec) -- constraint defining the known conflict
  • when (spack.spec.Spec) -- optional constraint that triggers the conflict
  • msg (str) -- optional user defined message



Creates a dict of deps with specs defining when they apply.
  • spec (spack.spec.Spec or str) -- the package and constraints depended on
  • when (spack.spec.Spec or str) -- when the dependent satisfies this, it has the dependency represented by spec
  • type (str or tuple) -- str or tuple of legal Spack deptypes
  • patches (Callable or list) -- single result of patch() directive, a str to be passed to patch, or a list of these


This directive is to be used inside a Package definition to declare that the package requires other packages to be built first. @see The section "Dependency specs" in the Spack Packaging Guide.
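
For example, typical depends_on calls inside a package definition might look like the following sketch (package names chosen for illustration):

depends_on("mpi")                           # any provider of the mpi virtual
depends_on("hdf5@1.10:+mpi", when="+hdf5")  # versioned, conditional dependency
depends_on("cmake", type="build")           # needed only at build time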


Same as depends_on, but also adds this package to the extendee list.

keyword arguments can be passed to extends() so that extension packages can pass parameters to the extendee's extension mechanism.


Add a new license directive, to specify the SPDX identifier the software is distributed under.
  • license_identifiers -- A list of SPDX identifiers specifying the licenses the software is distributed under.
  • when -- A spec specifying when the license applies.



Add a new maintainer directive, to specify maintainers in a declarative way.
names -- GitHub username for the maintainer


Packages can declare patches to apply to source. You can optionally provide a when spec to indicate that a particular patch should only be applied when the package's spec meets certain conditions (e.g. a particular version).
  • url_or_filename (str) -- url or relative filename of the patch
  • level (int) -- patch level (as in the patch shell command)
  • when (spack.spec.Spec) -- optional anonymous spec that specifies when to apply the patch
  • working_dir (str) -- dir to change to before applying

  • sha256 (str) -- sha256 sum of the patch, used to verify the patch (only required for URL patches)
  • archive_sha256 (str) -- sha256 sum of the archive, if the patch is compressed (only required for compressed URL patches)



Allows packages to provide a virtual dependency.

If a package provides "mpi", other packages can declare that they depend on "mpi", and spack can use the providing package to satisfy the dependency.

  • *specs -- virtual specs provided by this package
  • when -- condition when this provides clause needs to be considered
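
For instance, an MPI implementation might declare (a sketch, not taken from a real package):

provides("mpi")
provides("mpi@:3.1", when="@2:")  # hypothetical: releases 2+ provide MPI up to 3.1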



Allows a package to request a configuration to be present in all valid solutions.

For instance, a package that is known to compile only with GCC can declare:

requires("%gcc")


A package that requires Apple-Clang on Darwin can declare instead:

requires("%apple-clang", when="platform=darwin", msg="Apple Clang is required on Darwin")


  • requirement_specs -- spec expressing the requirement
  • when -- optional constraint that triggers the requirement. If None the requirement is applied unconditionally.
  • msg -- optional user defined message



Define an external resource to be fetched and staged when building the package. Based on the keywords present in the dictionary the appropriate FetchStrategy will be used for the resource. Resources are fetched and staged in their own folder inside the Spack stage area, and then moved into the stage area of the package that needs them.

List of recognized keywords:

  • 'when' : (optional) represents the condition upon which the resource is needed
  • 'destination' : (optional) path where to move the resource. This path must be relative to the main package stage area.
  • 'placement' : (optional) gives the possibility to fine tune how the resource is moved into the main package stage area.
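
A sketch of a resource declaration using these keywords (the name, URL, and digest below are hypothetical placeholders):

resource(
    name="extra-data",
    url="https://example.com/extra-data.tar.gz",
    sha256="0000000000000000000000000000000000000000000000000000000000000000",
    destination="data",
    when="+data",
)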


Define a variant for the package.

Packager can specify a default value as well as a text description.

  • name -- Name of the variant
  • default -- Default value for the variant, if not specified otherwise the default will be False for a boolean variant and 'nothing' for a multi-valued variant
  • description -- Description of the purpose of the variant
  • values -- Either a tuple of strings containing the allowed values, or a callable accepting one value and returning True if it is valid
  • multi -- If False only one value per spec is allowed for this variant
  • validator -- Optional group validator to enforce additional logic. It receives the package name, the variant name and a tuple of values and should raise an instance of SpackError if the group doesn't meet the additional constraints
  • when -- Optional condition on which the variant applies
  • sticky -- The variant should not be changed by the concretizer to find a valid concrete spec

DirectiveError -- If arguments passed to the directive are invalid
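
For example (a sketch):

variant("shared", default=True, description="Build shared libraries")
variant(
    "precision",
    default="double",
    values=("single", "double"),   # hypothetical allowed values
    multi=False,
    description="Floating-point precision to build",
)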



spack.directory_layout module

Bases: object

A directory layout is used to associate unique paths with specs. Different installations are going to want different layouts for their install, and they can use this to customize the nesting structure of spack installs. The default layout is:

<install root>/
<platform-os-target>/
<compiler>-<compiler version>/
<name>-<version>-<hash>




The hash here is a SHA-1 hash for the full DAG plus the build spec.

The installation directory projections can be modified with the projections argument.





Gets full path to spec file for deprecated spec

If the deprecator_spec is provided, use that. Otherwise, assume deprecated_spec is already deprecated and its prefix links to the prefix of its deprecator.



Throws InconsistentInstallDirectoryError if:
1.
spec prefix does not exist
2.
spec prefix does not contain a spec file, or
3.
We read a spec with the wrong DAG hash out of an existing install directory.




Return absolute path from the root to a directory for the spec.

Read the contents of a file and parse them as a spec


Removes a prefix and any empty parent directories from the root. Raises RemoveFailedError if something goes wrong.

Gets full path to spec file


The host environment is a json file with os, kernel, and spack versioning. We use it in the case that an analysis later needs to easily access this information.

Write a spec out to a file.


Bases: SpackError

Superclass for directory layout errors.


Bases: DirectoryLayoutError

Raised when an extension is added to a package that already has it.


Bases: DirectoryLayoutError

Raised when an extension is added to a package that already has it.


Bases: DirectoryLayoutError

Raised when a package seems to be installed to the wrong place.


Bases: DirectoryLayoutError

Raised when invalid directory layout parameters are supplied


Bases: DirectoryLayoutError

Raised when an extension file has a bad spec in it.


Bases: DirectoryLayoutError

Raised when a DirectoryLayout cannot remove an install prefix.


Bases: DirectoryLayoutError

Raised when directory layout can't read a spec.


spack.error module

Bases: SpackError

Superclass for fetch-related errors.


Bases: SpackError

Raised when package headers are requested but cannot be found


Bases: SpackError

Raised when package libraries are requested but cannot be found


Bases: Exception

This is the superclass for all Spack errors. Subclasses can be found in the modules they have to do with.



Print extended debug information about this exception.

This is usually printed when the top-level Spack error handler calls die(), but it can be called separately beforehand if a lower-level error handler needs to print error context and continue without raising the exception to the top level.



Bases: SpackError

Superclass for all errors that occur while constructing specs.


Bases: SpecError

Raised when a spec conflicts with package constraints.

For original concretizer, provide the requirement that was violated when raising.


Bases: SpackError

Raised by packages when a platform is not supported


At what level we should write stack traces or short error messages. This is module-scoped because it needs to be set very early.

spack.extensions module

Service functions and classes to implement the hooks for Spack's command extensions.

Bases: SpackError

Exception class thrown when a requested command is not recognized as such.


Bases: SpackError

Exception class thrown when a configured extension does not follow the expected naming convention.


Returns the name of the extension in the path passed as argument.
path (str) -- path where the extension resides
The extension name.
ExtensionNamingError -- if path does not match the expected format for a Spack command extension.


Return the list of paths where to search for command files.

Return the list of canonicalized extension paths from config:extensions.

Imports the extension module for a particular command name and returns it.
cmd_name (str) -- name of the command for which to get a module (contains -, not _).


Returns the list of directories where to search for templates in extensions.

Loads a command extension from the path passed as argument.
  • command (str) -- name of the command (contains -, not _).
  • path (str) -- base path of the command extension

A valid module if found and loadable; None if not found. Module loading exceptions are passed through.


Return the test root dir for a given extension.
  • target_name (str) -- name of the extension to test
  • *paths -- paths where the extensions reside

Root directory where tests should reside or None


spack.fetch_strategy module

Fetch strategies are used to download source code into a staging area in order to build it. They need to define the following methods:

This should attempt to download/check out source from somewhere.

Apply a checksum to the downloaded source code, e.g. for an archive. May not do anything if the fetch method was safe to begin with.

Expand (e.g., an archive) downloaded file to source, with the standard stage source path as the destination directory.

Restore original state of downloaded code. Used by clean commands. This may just remove the expanded source and re-expand an archive, or it may run something like git reset --hard.

Archive a source directory, e.g. for creating a mirror.




Bases: FetchStrategy

Fetch strategy associated with bundle, or no-code, packages.

Having a basic fetch strategy is a requirement for executing post-install hooks. Consequently, this class provides the API but does little more than log messages.

TODO: Remove this class by refactoring resource handling and the link between composite stages and composite fetch strategies (see #11981).

Report False as there is no code to cache.

Simply report success -- there is no code to fetch.

BundlePackages don't have a mirror id.

BundlePackages don't have a source id.

There is no associated URL keyword in version() for no-code packages but this property is required for some strategy-related functions (e.g., check_pkg_attributes).


Bases: URLFetchStrategy

The resource associated with a cache URL may be out of date.

Fetch source code archive or repo.
True on success, False on failure.
Return type
bool



Bases: FetchError

Raised when archive fails to checksum.


Bases: VCSFetchStrategy
Fetch strategy that gets source code from a CVS repository. Use like this in a package:

version('name', cvs=':pserver:anonymous@www.example.com:/cvsroot%module=modulename')



Optionally, you can provide a branch and/or a date for the URL:

version('name', cvs=':pserver:anonymous@www.example.com:/cvsroot%module=modulename', branch='branchname', date='date')




Repositories are checked out into the standard stage source path directory.

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.


Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

True if can cache, False otherwise.
Return type
bool



Fetch source code archive or repo.
True on success, False on failure.
Return type
bool


This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.



Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.


A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().


The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.


Bases: FetchError

Raised when we can't extrapolate a version for a package.


Bases: FetchError

Raised when a download fails.


Bases: URLFetchStrategy

Fetch strategy that verifies the content digest during fetching, as well as after expanding it.

Verify checksum after expanding the archive.


Bases: object

Superclass of all fetch strategies.

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.


Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

True if can cache, False otherwise.
Return type
bool


Checksum the archive fetched by this FetchStrategy.

Expand the downloaded archive into the stage source path.

Fetch source code archive or repo.
True on success, False on failure.
Return type
bool


Predicate that matches fetch strategies to arguments of the version directive.
args -- arguments of the version directive


This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.



Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.



A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().


The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.


Bases: FetchError

Raised for packages with invalid fetch attributes.



Bases: URLFetchStrategy

FetchStrategy that pulls from a GCS bucket.

Fetch source code archive or repo.
True on success, False on failure.
Return type
bool


The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.


Bases: VCSFetchStrategy

Fetch strategy that gets source code from a git repository. Use like this in a package:

version('name', git='https://github.com/project/repo.git')

Optionally, you can provide a branch, or commit to check out, e.g.:

version('1.1', git='https://github.com/project/repo.git', tag='v1.1')


You can use these three optional attributes in addition to git:

  • branch: Particular branch to build from (anything allowed by git checkout -b)
  • tag: Particular tag to check out
  • commit: Particular commit hash in the repo



Repositories are cloned into the standard stage source path directory.

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.


Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

True if can cache, False otherwise.
Return type
bool


Clone a repository to a path.

This method handles cloning from git, but does not require a stage.

  • dest (str or None) -- The path into which the code is cloned. If None, requires a stage and uses the stage's source path.
  • commit (str or None) -- A commit to fetch from the remote. Only one of commit, branch, and tag may be non-None.
  • branch (str or None) -- A branch to fetch from the remote.
  • tag (str or None) -- A tag to fetch from the remote.
  • bare (bool) -- Execute a "bare" git clone (--bare option to git)



Fetch source code archive or repo.
True on success, False on failure.
Return type
bool





This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.



Shallow clone operations (--depth #) are not supported by the basic HTTP protocol or by no-protocol file specifications. Use (e.g.) https:// or file:// instead.

Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.


A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().


The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.

Given a git executable, return the Version (this will fail if the output cannot be parsed into a valid Version).


Bases: VCSFetchStrategy

Fetch strategy that employs the go get infrastructure.

Use like this in a package:

version('name', go='github.com/monochromegane/the_platinum_searcher/...')



Go get does not natively support versions, they can be faked with git.

The fetched source will be moved to the standard stage source path directory during the expand step.

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.


Expand the downloaded archive into the stage source path.

Fetch source code archive or repo.
True on success, False on failure.
Return type
bool




Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.


The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.


Bases: VCSFetchStrategy

Fetch strategy that gets source code from a Mercurial repository. Use like this in a package:

version('name', hg='https://jay.grs.rwth-aachen.de/hg/lwm2')

Optionally, you can provide a branch, or revision to check out, e.g.:

version('torus', hg='https://jay.grs.rwth-aachen.de/hg/lwm2', branch='torus')
You can use the optional 'revision' attribute to check out a branch, tag, or particular revision in hg. To prevent non-reproducible builds, using a moving target like a branch is discouraged.

revision: Particular revision, branch, or tag.



Repositories are cloned into the standard stage source path directory.

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.


Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

True if can cache, False otherwise.
Return type
bool


Fetch source code archive or repo.
True on success, False on failure.
Return type
bool


Returns the hg executable (Executable).

This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.



Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.


A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().


The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.


Bases: FetchError

Raised when a version can't be deduced from a set of arguments.


Bases: FetchError

Raised when an archive file is expected but none exists.


Bases: FetchError

Raised when there is no cached archive for a package.


Bases: FetchError

Raised after attempt to checksum when URL has no digest.


Bases: FetchError

Raised when fetch operations are called before set_stage().


Bases: URLFetchStrategy
Fetch source code archive or repo.
True on success, False on failure.
Return type
bool



Bases: URLFetchStrategy

FetchStrategy that pulls from an S3 bucket.

Fetch source code archive or repo.
True on success, False on failure.
Return type
bool


The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.


Bases: VCSFetchStrategy
Use like this in a package:
version('name', svn='http://www.example.com/svn/trunk')


Optionally, you can provide a revision for the URL:

version('name', svn='http://www.example.com/svn/trunk', revision='1641')

Repositories are checked out into the standard stage source path directory.

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.


Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

True if can cache, False otherwise.
Return type
bool


Fetch source code archive or repo.
True on success, False on failure.
Return type
bool


This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.



Revert to freshly downloaded state.

For archive files, this may just re-expand the archive.


A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().



The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.


Bases: FetchStrategy

URLFetchStrategy pulls source code from a URL for an archive, checks the archive against a checksum, and decompresses the archive.

The destination for the resulting file(s) is the standard stage path.

Just moves this archive to the destination.

Path to the source archive within this stage directory.

Whether fetcher is capable of caching the resource it retrieves.

This generally is determined by whether the resource is identifiably associated with a specific package version.

True if can cache, False otherwise.
Return type
bool



Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.


Expand the downloaded archive into the stage source path.

Fetch source code archive or repo.
True on success, False on failure.
Return type
bool


This is a unique ID for a source that is intended to help identify reuse of resources across packages.

It is unique like source-id, but it does not include the package name and is not necessarily easy for a human to create themselves.



Removes the source path if it exists, then re-expands the archive.

A unique ID for the source.

It is intended that a human could easily generate this themselves using the information available to them in the Spack package.

The returned value is added to the content which determines the full hash for a package using str().


The URL attribute must be specified either at the package class level, or as a keyword argument to version(). It is used to distinguish fetchers for different versions in the package DSL.


Bases: FetchStrategy

Superclass for version control system fetch strategies.

Like all fetchers, VCS fetchers are identified by the attributes passed to the version directive. The optional_attrs for a VCS fetch strategy represent types of revisions, e.g. tags, branches, commits, etc.

The required attributes (git, svn, etc.) are used to specify the URL and to distinguish a VCS fetch strategy from a URL fetch strategy.

Create an archive of the downloaded data for a mirror.

For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository.


Checksum the archive fetched by this FetchStrategy.

Expand the downloaded archive into the stage source path.



Find ambiguous top-level fetch attributes in a package.

Currently this only ensures that two or more VCS fetch strategies are not specified at once.


Decorator used to register fetch strategies.

Determine a fetch strategy based on the arguments supplied to version() in the package description.

Construct an appropriate FetchStrategy from the given keyword arguments.
**kwargs -- dictionary of keyword arguments, e.g. from a version() directive in a package.
The fetch strategy that matches the args, based on attribute names (e.g., git, hg, etc.)

Return type
Callable
spack.error.FetchError -- If no fetch_strategy matches the args.


If a package provides a URL which lists URLs for resources by version, this can create a fetcher for a URL discovered for the specified package's version.

Given a URL, find an appropriate fetch strategy for it. Currently just gives you a URLFetchStrategy that uses curl.
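
For example (a sketch; the helper name from_url is an assumption based on this description):

import spack.fetch_strategy as fs

fetcher = fs.from_url("https://example.com/foo-1.0.tar.gz")
# -> a URLFetchStrategy that fetches the archive with curl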


Finds a suitable FetchStrategy by matching its url_attr with the scheme in the given url.

Returns whether the fetcher target is expected to have a stable checksum. This is only true if the target is a preexisting archive file.



spack.filesystem_view module

Bases: object

Governs a filesystem view that is located at certain root-directory.

Packages are linked from their install directories into a common file hierarchy.

In distributed filesystems, loading each installed package separately can lead to slow-downs due to too many directories being traversed. This can be circumvented by loading all needed modules into a common directory structure.
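
From the command line, such a view can be produced with spack view, e.g.:

$ spack view symlink ./myview mpileaks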

Add given specs to view.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be activated as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of activate_standalone.


Add (link) a standalone package into this view.

Check if the given concrete spec is active in this view.

Get all specs currently active in this view.

Get the projection in this view for a spec.

Return the actual spec linked in this view (i.e. do not look it up in the database by name).

spec can be a name or a spec from which the name is extracted.

As there can only be a single version active for any spec the name is enough to identify the spec in the view.

If no spec is present, returns None.


  • ..they are active in the view.
  • ..they are active but the activated version differs.
  • ..they are not active in the view.


Takes with_dependencies keyword argument so that the status of dependencies is printed as well.


Removes given specs from view.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be deactivated as well.

Should accept with_dependents as keyword argument (default True) to indicate whether or not dependents on the deactivated specs should be removed as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of deactivate_standalone.


Remove (unlink) a standalone package from this view.


Bases: FilesystemView

Filesystem view to work with a yaml based directory layout.

Add given specs to view.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be activated as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of activate_standalone.


Add (link) a standalone package into this view.

Check if the given concrete spec is active in this view.


Get all specs currently active in this view.

Return list of tuples (<spec>, <spec in view>) where the spec active in the view differs from the one to be activated.

Get path to meta folder for either spec or spec name.

Return the projection for a spec in this view.

Relies on the ordering of projections to avoid ambiguity.


Return the actual spec linked in this view (i.e. do not look it up in the database by name).

spec can be a name or a spec from which the name is extracted.

As there can only be a single version active for any spec the name is enough to identify the spec in the view.

If no spec is present, returns None.




Singular print function for spec conflicts.

  • ..they are active in the view.
  • ..they are active but the activated version differs.
  • ..they are not active in the view.


Takes with_dependencies keyword argument so that the status of dependencies is printed as well.




Removes given specs from view.

Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be deactivated as well.

Should accept with_dependents as keyword argument (default True) to indicate whether or not dependents on the deactivated specs should be removed as well.

Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names.

This method should make use of deactivate_standalone.


Remove (unlink) a standalone package from this view.





spack.graph module

Functions for graphing DAGs of dependencies.

This file contains code for graphing DAGs of software packages (i.e. Spack specs). There are two main functions you probably care about:

graph_ascii() will output a colored graph of a spec in ascii format, kind of like the graph git shows with "git log --graph", e.g.:

o  mpileaks
|\
| |\
| o |  callpath
|/| |
| |\|
| |\ \
| | |\ \
| | | | o  adept-utils
| |_|_|/|
|/| | | |
o | | | |  mpi

 / / / /
| | o |  dyninst
| |/| |
|/|/| |
| | |/
| o |  libdwarf
|/ /
o |  libelf
 /
o  boost


graph_dot() will output a graph of a spec (or multiple specs) in dot format.
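
Both renderings are also available from the command line, e.g.:

$ spack graph --ascii hdf5
$ spack graph --dot hdf5 | dot -Tpdf > hdf5.pdf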

Bases: object
Write out an ascii graph of the provided spec.

Arguments: spec -- spec to graph. This only handles one spec at a time.

Optional arguments:

out -- file object to write out to (default is sys.stdout)




Bases: DotGraphBuilder

DOT graph with link,run nodes grouped together and edges colored according to the dependency types.

Return a tuple of (parent_id, child_id, edge_options)

Return a tuple of (node_id, node_options)

Visit an edge and build up entries to render the graph


Bases: object

Visit edges of a graph and build DOT options for nodes and edges

Return the context to be used to render the DOT graph template

Return a tuple of (parent_id, child_id, edge_options)

Return a tuple of (node_id, node_options)

Return a string with the output in DOT format

Visit an edge and build up entries to render the graph


Bases: DotGraphBuilder

Simple DOT graph, with nodes colored uniformly and edges without properties

Return a tuple of (parent_id, child_id, edge_options)

Return a tuple of (node_id, node_options)


Bases: DotGraphBuilder

DOT graph for possible dependencies

Return a tuple of (parent_id, child_id, edge_options)

Return a tuple of (node_id, node_options)


Find index in seq for which predicate is True.

Searches the sequence and returns the index of the element for which the predicate evaluates to True. Returns -1 if the predicate does not evaluate to True for any element in seq.



DOT graph of the concrete specs passed as input.
  • specs -- specs to be represented
  • builder -- builder to use to render the graph
  • depflag -- dependency types to consider
  • out -- optional output stream. If None sys.stdout is used



Static DOT graph with edges to all possible dependencies.
  • specs -- abstract specs to be represented
  • depflag -- dependency types to consider
  • out -- optional output stream. If None sys.stdout is used



spack.hash_types module

Definitions that control how Spack creates Spec hashes.

Bases: object

This class defines how hashes are generated on Spec objects.

Spec hashes in Spack are generated from a serialized (e.g., with YAML) representation of the Spec graph. The representation may only include certain dependency types, and it may optionally include a canonicalized hash of the package.py for each node in the graph.

We currently use different hashes for different use cases.
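
For example, the default DAG hash is available directly on concrete Spec objects (sketch; spec is assumed to be a concrete spec):

full = spec.dag_hash()     # the full hash string
short = spec.dag_hash(7)   # a truncated prefix of the same hash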

Private attribute stored on spec


Spack's deployment hash. Includes all inputs that can affect how a package is built.


Hash descriptor used only to transfer a DAG, as is, across processes

spack.install_test module

Bases: object

The class that manages stand-alone (post-install) package tests.

Add the failure details to the current list.



The current logger; if none is set, one is created.

The total number of (checked) test parts.

Execute the builder's package phase-time tests.
  • builder -- builder for package being tested
  • phase_name -- the name of the build-time phase (e.g., build, install)
  • method_names -- phase-specific callback method names



Print the test log file path.

True if tests were run, False otherwise.

Run the package's stand-alone tests.
kwargs (dict) -- arguments to be used by the test process
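
From the command line, stand-alone tests are driven by spack test, e.g.:

$ spack test run --alias mytests mpileaks
$ spack test results mytests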


Track and print the test status for the test part name.

Collect test results summary lines for this spec.

Context manager for setting up the test logger
  • verbose -- Display verbose output, including echoing to stdout, otherwise suppress it
  • externals -- True for performing tests if external package, False to skip them



Write the overall status to the tested file.

If there are any test part failures, then the tests failed. If all test parts are skipped, then the tests were skipped. If any tests passed, then the tests passed; otherwise, no tests were executed.



Bases: Exception

Raised when a test (part) is being skipped.


Bases: SpackError

Raised when package tests have failed for an installation.


Stand-alone test failure info type

alias of Tuple[BaseException, str]



Bases: object

The class that manages specs for spack test run execution.

The hash used to uniquely identify the test suite.

Path to the test stage directory where the current spec's cached build-time files were automatically copied.
path to the current spec's staged, cached build-time files.
Return type
str
TestSuiteSpecError -- If there is no spec being tested


Path to the test stage directory where the current spec's custom package (data) files were automatically copied.
path to the current spec's staged, custom package (data) files
Return type
str
TestSuiteSpecError -- If there is no spec being tested


Ensure the test suite stage directory exists.

Instantiates a TestSuite based on a dictionary of specs and an optional alias:
  • specs: list of the test suite's specs in dictionary form
  • alias: the test suite alias


Instance created from the specs
Return type
TestSuite


Instantiate a TestSuite using the specs and optional alias provided in the given file.
filename (str) -- The path to the JSON file containing the test suite specs and optional alias.
BaseException -- sjson.SpackJSONError if problem parsing the file


The test log file path for the provided spec.
spec (spack.spec.Spec) -- instance of the spec under test
the path to the spec's log file
Return type
str


The name (alias or, if none, hash) of the test suite.

The path to the results summary file.

The root test suite stage directory.
the spec's test stage directory path
Return type
str


The path to the test stage directory for the provided spec.
spec (spack.spec.Spec) -- instance of the spec under test
the spec's test stage directory path
Return type
str


The standard log filename for a spec.
spec (spack.spec.Spec) -- instance of the spec under test
the spec's log filename
Return type
str


The standard install test package identifier.
spec -- instance of the spec under test
the install test package identifier
Return type
str


Determine the overall test results status for the spec.
  • spec -- instance of the spec under test
  • externals -- True if externals are to be tested, else False

the spec's test status if available or None


The test status file path for the spec.
spec (spack.spec.Spec) -- instance of the spec under test
the spec's test status file path
Return type
str


The standard test status filename for the spec.
spec (spack.spec.Spec) -- instance of the spec under test
the spec's test status filename
Return type
str


Build a dictionary for the test suite.
The dictionary contains entries for up to two keys:
  • specs: list of the test suite's specs in dictionary form
  • alias: the alias, or name, given to the test suite if provided


Return type
dict



Write the spec's test result to the test suite results file.
  • spec (spack.spec.Spec) -- instance of the spec under test
  • result (str) -- result from the spec's test execution (e.g, PASSED)




Bases: SpackError

Raised when there is an error with the test suite.


Bases: SpackError

Raised when one or more tests in a suite have failed.


Bases: SpackError

Raised when there is an issue with the naming of the test suite.


Bases: SpackError

Raised when there is an issue associated with the spec being tested.


Copy relative source paths to the corresponding install test subdir

This routine is intended as an optional install test setup helper for grabbing source files/directories during the installation process and copying them to the installation test subdirectory for subsequent use during install testing.

  • pkg -- package being tested
  • srcs -- relative path for file(s) and or subdirectory(ies) located in the staged source path that are to be copied to the corresponding location(s) under the install testing directory.

spack.installer.InstallError -- if any of the source paths are absolute or do not exist under the build stage


Ensure the expected outputs are contained in the actual outputs.
  • expected -- expected raw output string(s)
  • actual -- actual output string

RuntimeError -- the expected output is not found in the actual output


Copy the spec's cached and custom test files to the test stage directory.
  • pkg -- package being tested
  • test_spec -- spec being tested, where the spec may be virtual

TestSuiteError -- package must be part of an active test suite


Find the required file(s) under the root directory.
  • root -- root directory for the search
  • filename -- name of the file being located
  • expected -- expected number of files to be found under the directory (default is 1)
  • recursive -- True if subdirectories are to be recursively searched, else False (default is True)


Returns: the path(s), relative to root, to the required file(s)

Exception -- SkipTest when number of files detected does not match expected


Retrieves all validly staged TestSuites
a list of TestSuite objects, which may be empty if there are none
Return type
list


Retrieve and escape the expected text output from the file
filename -- path to the file
escaped text lines read from the file


Retrieves test suites with the provided name.
a list of matching TestSuite instances, which may be empty if none
Return type
list
Exception -- TestSuiteNameError if no name is provided


Retrieves the config:test_stage path to the configured test stage root directory

Return type
str


Ensure there is only one matching test suite with the provided name.
the name if there is exactly one matching test suite, else None
TestSuiteNameError -- If there are more than one matching TestSuites


The install test root directory.
pkg -- package being tested


Determine the overall status based on the current and associated sub status values.
  • current_status -- current overall status, assumed to default to PASSED
  • substatuses -- status of each test part or overall status of each test spec

test status encompassing the main test and all subtests


Print the message to the log, optionally echoing.
  • logger -- instance of the output logger (e.g. nixlog or winlog)
  • msg -- message being output
  • verbose -- True displays verbose output, False suppresses it (False is default)



Process test parts associated with the package.
  • pkg -- package being tested
  • test_specs -- list of test specs
  • verbose -- Display verbose output (suppress by default)

TestSuiteError -- package must be part of an active test suite


Name of the test suite results (summary) file

Name of the Spack install phase-time test log file

Grab the names of all non-empty test functions.
  • pkg -- package or package class of interest
  • add_virtuals -- True adds test methods of provided package virtual, False only returns test functions of the package

names of non-empty test functions
ValueError -- occurs if pkg is not a package class


Grab all non-empty test functions.
  • pkg -- package or package class of interest
  • add_virtuals -- True adds test methods of provided package virtual, False only returns test functions of the package

list of non-empty test functions' (name, function)
ValueError -- occurs if pkg is not a package class




Name of the test suite's (JSON) lock file

Return a list of unique virtuals for the package.
pkg -- package of interest

Returns: names of unique virtual packages


Write the test suite to its (JSON) lock file.

Write summary of the totals for each relevant status category.
counts -- counts of the occurrences of relevant test status types


spack.installer module

This module encapsulates package installation functionality.

The PackageInstaller coordinates concurrent builds of packages for the same Spack instance by leveraging the dependency DAG and file system locks. It also proceeds with the installation of non-dependent packages of failed dependencies in order to install as many dependencies of a package as possible.

Bottom-up traversal of the dependency DAG while prioritizing packages with no uninstalled dependencies allows multiple processes to perform concurrent builds of separate packages associated with a spec.

File system locks enable coordination such that no two processes attempt to build the same or a failed dependency package.

Failures to install dependency packages result in removal of their dependents' build tasks from the current process. A failure file is also written (and locked) so that other processes can detect the failure and adjust their build tasks accordingly.

This module supports the coordination of local and distributed concurrent installations of packages in a Spack instance.

Bases: InstallError

Raised when an install phase option is not allowed for a package.


Bases: object

This class implements the part installation that happens in the child process.

Main entry point from build_process to kick off install in child.


Bases: object

Class for representing an installation request.

Determine the required dependency types for the associated package.
pkg -- explicit or implicit package being installed
required dependency type(s) for the package
Return type
tuple


Returns True if the package id represents a known dependency of the requested package, False otherwise.

Determine if the tests should be run for the provided packages
pkg -- explicit or implicit package being installed
True if they should be run; False otherwise
Return type
bool


The specification associated with the package.

Yield any dependencies of the appropriate type(s)


Bases: object

Class for representing the build task for a package.

Ensure the dependent package id is in the task's list so it will be properly updated when this package is installed.
pkg_id -- package identifier of the dependent package



The package was explicitly requested by the user.

Ensure the dependency is not considered to still be uninstalled.
installed -- the identifiers of packages that have been installed so far


The package was requested directly, but may or may not be explicit in an environment.

The key is the tuple (# uninstalled dependencies, sequence).

Create a new, updated task for the next installation attempt.

The priority is based on the remaining uninstalled dependencies.



Bases: InstallError

Raised by install() when a package is only for external use.


Bases: object
Do a standard install

Don't perform an install

Do an overwrite install


Bases: SpackError

Raised when something goes wrong during install or uninstall.

The error can be annotated with a pkg attribute to allow the caller to get the package for which the exception was raised.


Bases: InstallError

Raised during install when something goes wrong with package locking.



Bases: object
Try to run the install task overwriting the package prefix. If this fails, try to recover the original install prefix. If that fails too, mark the spec as uninstalled. This function always re-raises the original install error if installation fails.


Bases: object

Class for managing the install process for a Spack instance based on a bottom-up DAG approach.

This installer can coordinate concurrent batch and interactive, local and distributed (on a shared file system) builds for the same Spack instance.

install() -> None
Install the requested package(s) and/or associated dependencies.


Build status indicating task has been added.

Build status indicating the task has been popped from the queue

Build status indicating the spec failed to install

Build status indicating the spec was successfully installed

Build status indicating the spec is being installed (possibly by another process)

Build status indicating task has been removed (to maintain priority queue invariants).

Bases: object

This class is used in distributed builds to inform the user that other packages are being installed by another process.

Add a package to the waiting list, and if it is new, update the status line.

Clear the status line.


Bases: InstallError

Raised during install when something goes wrong with an upstream package.


Copy install logs to their destination directory(ies)
  • pkg -- the package that was built and installed
  • phase_log_dir -- path to the archive directory

Perform the installation/build of the package.

This runs in a separate child process, and has its own process and python module space set up by build_environment.start_build_process().

This essentially wraps an instance of BuildProcessInstaller so that we can more easily create one in a subprocess.

This function's return value is returned to the parent process.

  • pkg -- the package being installed.
  • install_args -- arguments to do_install() from parent process.



Read set or list of logs and combine them into one file.

Each phase will produce its own log, so this function aims to cat all the separate phase log output files into the pkg.log_path. It is written generally to accept a list of files and a log path to combine them to.

  • phase_log_files -- a list or iterator of logs to combine
  • log_path -- the path to combine them to



Dump all package information for a spec and its dependencies.

This creates a package repository within path for every namespace in the spec DAG, and fills the repos with package files and patch files for every node in the DAG.

  • spec -- the Spack spec whose package information is to be dumped
  • path -- the path to the build packages directory



Return a list of package ids for the spec's dependents
spec -- Concretized spec

Returns: list of package ids


Colorize the name/id of the package being installed
  • name -- Name/id of the package being installed
  • pid -- id of the installer process


Return: Colorized installing message


Copy provenance into the install directory on success
pkg -- the package that was built and installed


A "unique" package identifier for installation purposes

The identifier is used to track build tasks, locks, install, and failure statuses.

The identifier needs to distinguish between combinations of compilers and packages for combinatorial environments.

pkg -- the package from which the identifier is derived


Output install test log file path but only if have test failures.
pkg -- instance of the package under test


spack.main module

This is the implementation of the Spack command line executable.

In a normal Spack installation, this is invoked from the bin/spack script after the system path is set up.

Whether to print backtraces on error


Bases: object

Callable object that invokes a spack command (for testing).

Example usage:

install = SpackCommand('install')
install('-v', 'mpich')


Use this to invoke Spack commands directly from Python and check their output.


Bases: Exception

Raised when SpackCommand execution fails.



Add all spack subcommands to the parser.

Implements really simple argument injection for unknown arguments.

Commands may add an optional argument called "unknown args" to indicate they can handle unknown args, and we'll pass the unknown args in.



Get the Spack git commit sha.
(str or None) the commit sha if available, otherwise None


Get a descriptive version of this instance of Spack.

Outputs '<PEP440 version> (<git commit sha>)'.

The commit sha is only added when available.


Create an index of commands by section for this help level.


Help levels in order of detail (i.e., number of commands shown)

This is the entry point for the Spack command.

main() itself is just an error handler -- it handles errors for everything in Spack that makes it to the top level.

The logic is all in _main().

argv (list or None) -- command line arguments, NOT including the executable name. If None, parses from sys.argv.


Create a basic argument parser without any subcommands added.

control top-level spack options shown in basic vs. advanced help

Print basic information needed by setup-env.[c]sh.
info (list) -- list of things to print: comma-separated list of 'csh', 'sh', or 'modules'

This is in main.py to make it fast; the setup scripts need to invoke spack in login scripts, and it needs to be quick.



Resolves aliases in the given command.
  • cmd_name -- command name.
  • cmd -- command line arguments.

new command name and arguments.


Spack mutates DYLD_* variables in spack load and spack env activate. Unlike Linux, macOS SIP clears these variables in new processes, meaning that os.environ["DYLD_*"] in our Python process is not the same as the user's shell. Therefore, we store the user's DYLD_* variables in SPACK_DYLD_* and restore them here.



Redirects messages to tty.warn.

Change the working directory to getcwd, or spack prefix if no cwd.

Configure spack globals based on the basic options.

Recorded directory where spack command was originally invoked


spack.mirror module

This file contains code for creating spack mirror directories. A mirror is an organized hierarchy containing specially named archive files. This enables Spack to know where to find files in a mirror if the main server for a particular package is down. Or, if the computer where Spack is run is not connected to the internet, it allows Spack to download packages directly from a mirror (e.g., on an intranet).

Bases: object

Represents a named location for storing source tarballs and binary packages.

Mirrors have a fetch_url that indicates where and how artifacts are fetched from them, and a push_url that indicates where and how artifacts are pushed to them. These two URLs are usually the same.
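For example, a named mirror with distinct fetch and push locations can be configured in mirrors.yaml (a sketch; the URLs are hypothetical):

mirrors:
  my-mirror:
    fetch: https://mirror.example.com/spack
    push: s3://my-bucket/spack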



Get the valid, canonicalized fetch URL



Create an anonymous mirror by URL. This method validates the URL.








Get the valid, canonicalized fetch URL





Modify the mirror with the given data. This takes care of expanding trivial mirror definitions given only by URL into richer dict-based definitions if necessary.
  • data (dict) -- The data to update the mirror with.
  • direction (str) -- The direction to update the mirror in (fetch or push or None for top-level update)

True if the mirror was updated, False otherwise.
Return type
bool



Bases: Mapping

A mapping of mirror names to mirrors.





Looks up and returns a Mirror.

If this MirrorCollection contains a named Mirror under the name [name_or_url], then that mirror is returned. Otherwise, [name_or_url] is assumed to be a mirror URL, and an anonymous mirror with the given URL is returned.
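A brief usage sketch (assuming a mirror named my-mirror is configured; the names are illustrative):

import spack.mirror

mirrors = spack.mirror.MirrorCollection()
named = mirrors.lookup("my-mirror")                        # named mirror from config
anon = mirrors.lookup("https://mirror.example.com/spack")  # anonymous mirror by URL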






Bases: SpackError

Superclass of all mirror-creation related errors.


Bases: object

A MirrorReference stores the relative paths where you can store a package/resource in a mirror directory.

The appropriate storage location is given by storage_path. The cosmetic_path property provides a reference that a human could generate themselves based on reading the details of the package.

A user can iterate over a MirrorReference object to get all the possible names that might be used to refer to the resource in a mirror; this includes names generated by previous naming schemes that are no longer reported by storage_path or cosmetic_path.




Bases: object

Follow the OCI Image Layout Specification to archive blobs

Paths are of the form blobs/<algorithm>/<digest>


Add a named mirror in the given scope

Create a directory to be used as a spack mirror, and fill it with package archives.
  • path -- Path to create a mirror directory hierarchy in.
  • specs -- Any package versions matching these specs will be added to the mirror.
  • skip_unstable_versions -- if true, this skips adding resources when they do not have a stable archive checksum (as determined by fetch_strategy.stable_target)


Return Value:
Returns a tuple of lists: (present, mirrored, error)
  • present: Package specs that were already present.
  • mirrored: Package specs that were successfully mirrored.
  • error: Package specs that failed to mirror due to some error.
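A hedged usage sketch (the path and spec are illustrative, and API details may vary between Spack versions):

import spack.mirror
import spack.spec

specs = [spack.spec.Spec("libelf").concretized()]
present, mirrored, error = spack.mirror.create(
    "/tmp/my-mirror", specs, skip_unstable_versions=True
)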



Add a single package object to a mirror.

The package object is only required to have an associated spec with a concrete version.

  • pkg_obj (spack.package_base.PackageBase) -- package object to be added.
  • mirror_cache (spack.caches.MirrorCache) -- mirror where to add the spec.
  • mirror_stats (spack.mirror.MirrorStats) -- statistics on the current mirror

True if the spec was added successfully, False otherwise


Given a set of initial specs, return a new set of specs that includes each version of each package in the original set.

Note that if any spec in the original set specifies properties other than version, this information will be omitted in the new set; for example, the new set of specs will not include variant settings.


Get a spec for EACH known version matching any spec in the list. For concrete specs, this retrieves the concrete version and, if more than one version per spec is requested, retrieves the latest versions of the package.

Returns a MirrorReference object which keeps track of the relative storage path of the resource associated with the specified fetcher.

Return both a mirror cache and a mirror stats, starting from the path where a mirror ought to be created.
  • path (str) -- path to create a mirror directory hierarchy in.
  • skip_unstable_versions -- if true, this skips adding resources when they do not have a stable archive checksum (as determined by fetch_strategy.stable_target)



Remove the named mirror in the given scope

Find a mirror by name and raise if it does not exist


spack.mixins module

This module contains additional behavior that can be attached to any given package.

Substitutes any path referring to a Spack compiler wrapper with the path of the underlying compiler that has been used.

If this isn't done, the files will have CC, CXX, F77, and FC set to Spack's generic cc, c++, f77, and f90. We want them to be bound to whatever compiler they were built with.

  • *files -- files to be filtered relative to the search root (which is, by default, the installation prefix)
  • **kwargs --

    allowed keyword arguments

specifies after which phase the files should be filtered (defaults to 'install')
path relative to prefix where to start searching for the files to be filtered. If not set, the install prefix will be used as the search root. It is highly recommended to set this, as searching from the installation prefix may severely affect performance in some cases.
these two keyword arguments, if present, will be forwarded to filter_file (see its documentation for more information on their behavior)
this keyword argument, if present, will be forwarded to find (see its documentation for more information on the behavior)
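For example, an MPI package might scrub its compiler wrappers after install (a sketch modeled on common usage; the wrapper names and relative_root are illustrative):

class Mympi(AutotoolsPackage):
    # replace Spack wrapper paths embedded in the installed wrapper scripts
    filter_compiler_wrappers("mpicc", "mpicxx", "mpif77", "mpif90", relative_root="bin")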




spack.multimethod module

This module contains utilities for using multi-methods in spack. You can think of multi-methods like overloaded methods -- they're methods with the same name, and we need to select a version of the method based on some criteria. e.g., for overloaded methods, you would select a version of the method to call based on the types of its arguments.

In spack, multi-methods are used to ease the life of package authors. They allow methods like install() (or other methods called by install()) to declare multiple versions to be called when the package is instantiated with different specs. E.g., if the package is built with OpenMPI on x86_64, you might want to call a different install method than if it was built for mpich2 on BlueGene/Q. Likewise, you might want to do a different type of install for different versions of the package.

Multi-methods provide a simple decorator-based syntax for this that avoids overly complicated rats' nests of if statements. Obviously, depending on the scenario, regular old conditionals might be clearer, so package authors should use their judgement.
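A minimal sketch of the decorator-based syntax (the @when decorator and spec constraint shown here illustrate common usage; the package and spec names are hypothetical):

from spack.package import *

class Foo(Package):
    def install(self, spec, prefix):
        # default install(), used when no specialized version matches
        ...

    @when("^openmpi")
    def install(self, spec, prefix):
        # selected instead of the default when Foo is built with OpenMPI
        ...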

Bases: SpackError

Superclass for multimethod dispatch errors


Bases: type

This allows us to track the class's dict during instantiation.


Bases: SpackError

Raised when we can't find a version of a multi-method.


Bases: object

This implements a multi-method for Spack specs. Packages are instantiated with a particular spec, and you may want to execute different versions of methods based on what the spec looks like. For example, you might want to call a different version of install() for one platform than you call on another.

The SpecMultiMethod class implements a callable object that handles method dispatch. When it is called, it looks through registered methods and their associated specs, and it tries to find one that matches the package's spec. If it finds one (and only one), it will call that method.

This is intended for use with decorators (see below). The decorator (see docs below) creates SpecMultiMethods and registers method versions with them.

mm = SpecMultiMethod()
mm.register("^chaos_5_x86_64_ib", some_method)

The object registered needs to be a Spec or some string that will parse to be a valid spec.

When the mm is actually called, it selects a version of the method to call based on the sys_type of the object it is called on.

See the docs for decorators below for more details.

Register a version of a method for a particular spec.




spack.package module

spack.util.package is a set of useful build tools and directives for packages.

Everything in this module is automatically imported into Spack package files.

spack.package_base module

This is where most of the action happens in Spack.

The spack package class structure is based strongly on Homebrew (http://brew.sh/), mainly because Homebrew makes it very easy to create packages.

Bases: ExtensionError

Raised when there are problems activating an extension.


Bases: SpackError

Raised when the dependencies cannot be flattened as asked for.


Bases: type

Check if a package is detectable and add default implementations for the detection function.



Bases: PackageError

Superclass for all errors having to do with extension packages.


Allowed URL schemes for spack packages.

alias of Callable[[str, Iterable[str]], Tuple[Optional[Iterable[str]], Optional[Iterable[str]], Optional[Iterable[str]]]]


Bases: PackageError

Raised when someone tries to perform an invalid operation on a package.


Bases: PackageError

Raised when someone tries to build a URL for a package with no URLs.


Bases: WindowsRPath, PackageViewMixin

This is the superclass for all spack packages.

*The Package class*

At its core, a package consists of a set of software to be installed. A package may focus on a piece of software and its associated software dependencies or it may simply be a set, or bundle, of software. The former requires defining how to fetch, verify (via, e.g., sha256), build, and install that software and the packages it depends on, so that dependencies can be installed along with the package itself. The latter, sometimes referred to as a no-source package, requires only defining the packages to be built.

Packages are written in pure Python.

There are two main parts of a Spack package:

1.
The package class. Classes contain directives, which are special functions, that add metadata (versions, patches, dependencies, and other information) to packages (see directives.py). Directives provide the constraints that are used as input to the concretizer.
2.
Package instances. Once instantiated, a package is essentially a software installer. Spack calls methods like do_install() on the Package object, and it uses those to drive user-implemented methods like patch(), install(), and other build steps. To install software, an instantiated package needs a concrete spec, which guides the behavior of the various install methods.
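To make the two parts concrete, here is a minimal hypothetical package (the name, URL, and checksum are placeholders):

from spack.package import *

class Mypackage(Package):
    """Hypothetical example package."""

    homepage = "https://example.com/mypackage"
    url = "https://example.com/mypackage-1.0.tar.gz"

    # Part 1: directives add metadata consumed by the concretizer.
    version("1.0", sha256="0" * 64)  # placeholder checksum
    depends_on("mpi")

    # Part 2: instance methods drive the build for a concrete spec.
    def install(self, spec, prefix):
        make()
        make("install")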



Packages are imported from repos (see repo.py).

Package DSL

Look in lib/spack/docs or check https://spack.readthedocs.io for the full documentation of the package domain-specific language. That used to be partially documented here, but as it grew, the docs here became increasingly out of date.

Package Lifecycle

A package's lifecycle over a run of Spack looks something like this:

p = Package()             # Done for you by spack
p.do_fetch()              # downloads tarball from a URL (or VCS)
p.do_stage()              # expands tarball in a temp directory
p.do_patch()              # applies patches to expanded source
p.do_install()            # calls package's install() function
p.do_uninstall()          # removes install directory


although packages that do not have code have nothing to fetch, so they omit p.do_fetch().

There are also some other commands that clean the build area:

p.do_clean()              # removes the stage directory entirely
p.do_restage()            # removes the build directory and
                          # re-expands the archive.


The convention used here is that a do_* function is intended to be called internally by Spack commands (in spack.cmd). These aren't for package writers to override, and doing so may break the functionality of the Package class.

Package creators have a lot of freedom, and they could technically override anything in this class. That is not usually required.

For most use cases, package creators typically just add attributes like homepage and, for a code-based package, url, or functions such as install(). There are many custom Package subclasses in the spack.build_systems package that make things even easier for specific build systems.

Retrieve all patches associated with the package.

Retrieves patches on the package itself as well as patches on the dependencies of the package.


A list of all URLs in a package.

Check both class-level and version-specific URLs.

a list of URLs
Return type
list


Return all URLs derived from version_urls(), url, urls, and list_url (if it contains a version) in a package in that order.
version (spack.version.Version) -- the version for which a URL is sought


Archive the install-phase test log, if present.

Return the expected (or current) build log file path. The path points to the staging build file until the software is successfully installed, after which it points to the file in the installation directory.

flag_handler that passes flags to the build system arguments. Any package using build_system_flags must also implement flags_to_build_system_args, or derive from a class that implements it. Currently, AutotoolsPackage and CMakePackage implement it.


Copy relative source paths to the corresponding install test subdir

This method is intended as an optional install test setup helper for grabbing source files/directories during the installation process and copying them to the installation test subdirectory for subsequent use during install testing.

srcs (str or list) -- relative path for files and/or subdirectories located in the staged source path that are to be copied to the corresponding location(s) under the install testing directory.



Get the spack.compiler.Compiler object used to build this package

Return the configure args file path associated with staging.

Create a hash based on the artifacts and patches used to build this package.
  • source artifacts (tarballs, repositories) used to build;
  • content hashes (sha256's) of all patches applied by Spack; and
  • canonicalized contents of the package.py recipe used to build.


This hash is only included in Spack's DAG hash for concrete specs, but if it happens to be called on a package with an abstract spec, only applicable (i.e., determinable) portions of the hash will be included.



Get dependencies that can possibly have these deptypes.

This analyzes the package and determines which dependencies can be a certain kind of dependency. Note that they may not always be this kind of dependency, since dependencies can be optional, so something may be a build dependency in one configuration and a run dependency in another.


Removes the package's build stage and source tarball.

Deprecate this package in favor of deprecator spec

Creates a stage directory and downloads the tarball for this package. Working directory will be set to the stage directory.

Called by commands to install a package and or its dependencies.

Package implementations should override install() to describe their build process.

  • cache_only (bool) -- Fail if binary package unavailable.
  • dirty (bool) -- Don't clean the build environment before installing.
  • explicit (bool) -- True if package was explicitly installed, False if package was implicitly installed (as a dependency).
  • fail_fast (bool) -- Fail if any dependency fails to install; otherwise, the default is to install as many dependencies as possible (i.e., best effort installation).
  • fake (bool) -- Don't really build; install fake stub files instead.
  • force (bool) -- Install again, even if already installed.
  • install_deps (bool) -- Install dependencies before installing this package
  • install_source (bool) -- By default, source is not installed, but for debugging it might be useful to keep it around.
  • keep_prefix (bool) -- Keep install prefix on failure. By default, destroys it.
  • keep_stage (bool) -- By default, stage is destroyed only if there are no exceptions during build. Set to True to keep the stage even with exceptions.
  • restage (bool) -- Force spack to restage the package source.
  • skip_patch (bool) -- Skip patch stage of build if True.
  • stop_before (str) -- stop execution before this installation phase (or None)
  • stop_at (str) -- last installation phase to be executed (or None)
  • tests (bool or list or set) -- False to run no tests, True to test all packages, or a list of package names to run tests for some
  • use_cache (bool) -- Install from binary package, if available.
  • verbose (bool) -- Display verbose build output (by default, suppresses it)
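Putting the arguments above together, a command might drive an install like this (a hedged sketch; the spec is illustrative and API details may vary between Spack versions):

import spack.spec

spec = spack.spec.Spec("libelf").concretized()
spec.package.do_install(explicit=True, verbose=True)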



Applies patches if they haven't been applied already.

Reverts expanded/checked out source to a pristine state.

Unpacks and expands the fetched tarball.


Uninstall this package by spec.

Defines the default manual download instructions. Packages can override the property to provide more information.
default manual download instructions
Return type
(str)



Return the build environment modifications file path associated with staging.

Return the build environment file path associated with staging.

extendable = False
Most packages are NOT extendable. Set to True if you want extensions.

Spec of the extendee of this package, or None if it is not an extension

Spec of the extendee of this package, or None if it is not an extension

Returns True if this package extends the given spec.

If self.spec is concrete, this returns whether this package extends the given spec.

If self.spec is not concrete, this returns whether this package may extend the given spec.


Set of additional options used when fetching package versions.

Find remote versions of this package.

Uses list_url and any other URLs listed in the package file.

a dictionary mapping versions to URLs
Return type
dict



Returns a URL from which the specified version of this package may be downloaded after testing whether the url is valid. Will try url, urls, and list_url before failing.
The version for which a URL is sought.

See Class Version (version.py)




Wrap doc string at 72 characters and format nicely




Returns the path where a global license file for this particular package should be stored.

Most Spack packages install source or binary code, while those that do not can instead install a set of other Spack packages.


Package homepage where users can find more information about the package


Return the configure args file path on successful installation.

Return the build environment file path on successful installation.

Return the build log file path on successful installation.

Return the install test root directory.





String. Contains the symbol used by the license manager to denote a comment. Defaults to #.

List of strings. These are files that the software searches for when looking for a license. All file paths must be relative to the installation directory. More complex packages like Intel may require multiple licenses for individual components. Defaults to the empty list.

Boolean. If set to True, this software requires a license. If set to False, all of the license_* attributes will be ignored. Defaults to False.

String. A URL pointing to license setup instructions for the software. Defaults to the empty string.

List of strings. Environment variables that can be set to tell the software where to look for a license if it is not in the usual location. Defaults to the empty list.
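Taken together, a package that needs a license might declare the following (a sketch; all values are illustrative):

class Mytool(Package):
    license_required = True
    license_comment = "#"
    license_files = ["licenses/license.lic"]
    license_vars = ["MYTOOL_LICENSE_FILE"]
    license_url = "https://example.com/mytool/licensing"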

Link depth to which list_url should be searched for new versions

Default list URL (place to find available versions)

Return the build log file path associated with staging.

List of strings containing GitHub usernames of package maintainers. Do not include @ here, to avoid unnecessarily pinging the users.

Boolean. Set to True for packages that require a manual download. This is currently used by package sanity tests and generation of a more meaningful fetch failure error.


Return the install metadata directory.


name = 'package_base'

namespace = 'spack'

Finds the URL with the "closest" version to version.

This uses the following precedence order:

1.
Find the next lowest or equal version with a URL.
2.
If no lower URL, return the next higher URL.
3.
If no higher URL, return None.




List of shared objects that should be replaced with a different library at runtime. Typically includes stub libraries like libcuda.so. When linking against a library listed here, the dependent will only record its soname or filename, not its absolute path, so that the dynamic linker will search for it. Note: accepts both file names and directory names, for example ["libcuda.so", "stubs"] will ensure libcuda.so and all libraries in the stubs directory are not bound by path.


By default we build in parallel. Subclasses can override this.

Find sorted phase log files written to the staging directory

Return dict of possible dependencies of this package.
  • transitive (bool or None) -- return all transitive dependencies if True, only direct dependencies if False (default True).
  • expand_virtuals (bool or None) -- expand virtual dependencies into all possible implementations (default True)
  • depflag -- dependency types to consider
  • visited (dict or None) -- dict of names of dependencies visited so far, mapped to their immediate dependencies' names.
  • missing (dict or None) -- dict to populate with packages and their missing dependencies.
  • virtuals (set) -- if provided, populate with virtuals seen so far.


Return type
(dict)

Each item in the returned dictionary maps a (potentially transitive) dependency of this package to its possible immediate dependencies. If expand_virtuals is False, virtual package names will be inserted as keys mapped to empty sets of dependencies. Virtuals, if not expanded, are treated as though they have no immediate dependencies.

Missing dependencies by default are ignored, but if a missing dict is provided, it will be populated with package names mapped to any dependencies they have that are in no repositories. This is only populated if transitive is True.

Note: the returned dict includes the package itself.


Get the prefix into which this package should be installed.

True if this package provides a virtual package with the specified name

Removes the prefix for a package along with any empty parent directories

Get the rpath this package links with, as a list of paths.

Get the rpath args as a string, with -Wl,-rpath, for each element

Run the test and confirm the expected results are obtained

Log any failures and continue, they will be re-raised later

  • exe (str) -- the name of the executable
  • options (str or list) -- list of options to pass to the runner
  • expected (str or list) -- list of expected output strings. Each string is a regex expected to match part of the output.
  • status (int or list) -- possible passing status values with 0 meaning the test is expected to succeed
  • installed (bool) -- if True, the executable must be in the install prefix
  • purpose (str) -- message to display before running test
  • skip_missing (bool) -- skip the test if the executable is not in the install prefix bin directory or the provided work_dir
  • work_dir (str or None) -- path to the smoke test directory
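For example, a package might define a stand-alone test along these lines (a sketch; the executable and expected output are illustrative):

def test(self):
    self.run_test(
        "mpirun",
        options=["--version"],
        expected=[r"mpirun .*"],
        status=0,
        installed=True,
        purpose="check that mpirun reports a version",
    )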



By default do not run tests within package's install()

List of prefix-relative directory paths (or a single path). If these do not exist after install, or if they exist but are not directories, sanity checks will fail.

List of prefix-relative file paths (or a single path). If these do not exist after install, or if they exist but are not files, sanity checks fail.

Set up Python module-scope variables for dependent packages.

Called before the install() method of dependents.

Default implementation does nothing, but this can be overridden by an extendable package to set up the module of its extensions. This is useful if there are some common steps to installing all extensions for a certain package.

Examples:

1.
Extensions often need to invoke the python interpreter from the Python installation being extended. This routine can put a python() Executable object in the module scope for the extension package to simplify extension installs.
2.
MPI compilers could set some variables in the dependent's scope that point to mpicc, mpicxx, etc., allowing them to be called by common name regardless of which MPI is used.
3.
BLAS/LAPACK implementations can set some variables indicating the path to their libraries, since these paths differ by BLAS/LAPACK implementation.

  • module (spack.package_base.PackageBase.module) -- The Python module object of the dependent package. Packages can use this to set module-scope variables for the dependent to use.
  • dependent_spec (spack.spec.Spec) -- The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent's state. Note that this package's spec is available as self.spec.
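Following example 1 above, an extendable package could expose its interpreter to extensions with something like this (a sketch; the attribute name and path layout are assumptions):

def setup_dependent_package(self, module, dependent_spec):
    # make a `python` Executable available in the dependent's module scope
    module.python = Executable(self.prefix.bin.python)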



Sets up the run environment of packages that depend on this one.

This is similar to setup_run_environment, but it is used to modify the run environments of packages that depend on this one.

This gives packages like Python and others that follow the extension model a way to implement common environment or run-time settings for dependencies.

  • env (spack.util.environment.EnvironmentModifications) -- environment modifications to be applied when the dependent package is run. Package authors can call methods on it to alter the build environment.
  • dependent_spec (spack.spec.Spec) -- The spec of the dependent package about to be run. This allows the extendee (self) to query the dependent's state. Note that this package's spec is available as self.spec



Sets up the run environment for a package.
env (spack.util.environment.EnvironmentModifications) -- environment modifications to be applied when the package is run. Package authors can call methods on it to alter the run environment.


Get the build staging area for this package.

This automatically instantiates a Stage object if the package doesn't have one yet, but it does not create the Stage directory on the filesystem.



Set to True to indicate that the stand-alone test requires a compiler. It is used to ensure a compiler and build dependencies like 'cmake' are available to build custom test code.

TestSuite instance used to manage stand-alone tests for 1+ specs.


Return the times log json file.

When True, add RPATHs for the entire DAG. When False, add RPATHs only for immediate dependencies.


Hook for unit tests to assert things about package internals.

Unit tests can override this function to perform checks after Package.install and all post-install hooks run, but before the database is updated.

The overridden function may indicate that the install procedure should terminate early (before updating the database) by returning False (or any value such that bool(result) is False).

True to continue, False to skip install()
Return type
(bool)


Method to override in package classes to handle external dependencies

Returns a URL from which the specified version of this package may be downloaded.
The version for which a URL is sought.

See Class Version (version.py)


Given a version, this returns a string that should be substituted into the package's URL to download that version.

By default, this just returns the version string. Subclasses may need to override this, e.g. for boost versions where you need to ensure that there are _'s in the download URL.



OrderedDict of explicitly defined URLs for versions of this package.
An OrderedDict (version -> URL) of the different versions of this package, sorted by version.

A version's URL only appears in the result if it has an explicitly defined url argument. So, this list may be empty if a package only defines url at the top level.



Create a view with the prefix of this package as the root. Extensions added to this view will modify the installation prefix of this package.

By default, packages are not virtual. Virtual packages override this attribute.

virtual packages provided by this package with its spec


Bases: SpackError

Raised when something is wrong with a package definition.


Bases: PhaseCallbacksMeta, DetectablePackageMeta, DirectiveMeta, MultiMethodMeta

Package metaclass for supporting directives (e.g., depends_on) and phases


Bases: InstallError

Raised when package is still needed by another on uninstall.


Bases: object

This collects all functionality related to adding installed Spack packages to views. Packages can customize how they are added to views by overriding these functions.

Given a map of package files to destination paths in the view, add the files to the view. By default this adds all files. Alternative implementations may skip some files, for example if other packages linked into the view already include the file.
  • view (spack.filesystem_view.FilesystemView) -- the view that's updated
  • merge_map (dict) -- maps absolute source paths to absolute dest paths for all files in this package.
  • skip_if_exists (bool) -- when True, don't link files in view when they already exist. When False, always link files, without checking if they already exist.



Given a map of package files to files currently linked in the view, remove the files from the view. The default implementation removes all files. Alternative implementations may not remove all files. For example if two packages include the same file, it should only be removed when both packages are removed.

The target root directory: each file is added relative to this directory.

Report any files which prevent adding this package to the view. The default implementation looks for any files which already exist. Alternative implementations may allow some of the files to exist in the view (in this case they would be omitted from the results).

The source root directory that will be added to the view: files are added such that their path relative to the view destination matches their path relative to the view source.


Bases: object

Collection of functionality surrounding Windows RPATH specific features

This is essentially meaningless for all other platforms due to their use of RPATH. All methods within this class are no-ops on non-Windows platforms. Packages can customize and manipulate this class as they would a genuine RPATH, i.e. adding directories that contain runtime library dependencies.

Return extra set of directories that require linking for package

This method should be overridden by packages that produce binaries/libraries/python extension modules/etc. that are installed into directories outside a package's bin, lib, and lib64 directories, but still require linking against one of the package's dependencies, or other components of the package itself. No-op otherwise.

List of additional directories that require linking


Return extra set of rpaths for package

This method should be overridden by packages needing to include additional paths to be searched by rpath. No-op otherwise

List of additional rpaths


Establish RPATH on Windows

Performs symlinking to incorporate rpath dependencies to Windows runtime search paths



flag_handler that passes flags to the build system arguments. Any package using build_system_flags must also implement flags_to_build_system_args, or derive from a class that implements it. Currently, AutotoolsPackage and CMakePackage implement it.

Return True if the version is deprecated, False otherwise.
  • pkg (PackageBase) -- The package whose version is to be checked.
  • version (str or spack.version.StandardVersion) -- The version being checked



Registers which packages are detectable, by repo and package name. Needs a pass over the package repositories to be filled.


Make each dependency of spec present in dir via symlink.


Execute a dummy install and flatten dependencies.

This routine can be used in a package.py definition by setting install = install_dependency_symlinks.

This feature comes in handy for creating a common location for the installation of third-party libraries.


Decorator: executes an instance function only if the object has the required attribute values.

Executes the decorated method only if at the moment of calling the instance has attributes that are equal to certain values.

attr_dict (dict) -- dictionary mapping attribute names to their required values
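A sketch of typical usage (the method name and build step are illustrative):

@on_package_attributes(run_tests=True)
def check_build(self):
    # runs only when the package instance has run_tests set to True
    make("check")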



Returns a sorted list of the preferred versions of the package.
pkg (PackageBase) -- The package whose versions are to be assessed.


Filename of json with total build and phase times (seconds)

Compiler names for builds that rely on cray compiler names.

spack.package_prefs module

Bases: object

Defines the sort order for a set of specs.

Spack's package preference implementation uses PackagePrefs objects to define sort order. The PackagePrefs class looks at Spack's packages.yaml configuration and, when called on a spec, returns a key that can be used to sort that spec in order of the user's preferences.

You can use it like this:

# key function sorts CompilerSpecs for mpich in order of preference
kf = PackagePrefs('mpich', 'compiler')
compiler_list.sort(key=kf)


Or like this:

# key function to sort VersionLists for OpenMPI in order of preference.
kf = PackagePrefs('openmpi', 'version')
version_list.sort(key=kf)


Optionally, you can sort in order of preferred virtual dependency providers. To do that, provide 'providers' and a third argument denoting the virtual package (e.g., mpi):

kf = PackagePrefs('trilinos', 'providers', 'mpi')
provider_spec_list.sort(key=kf)


Whether a specific package has a preferred vpkg provider.

Whether a specific package has a preferred vpkg provider.

Given a package name, sort component (e.g., version, compiler, ...), and an optional vpkg, return the list from the packages config.

Return a VariantMap of preferred variants/values for a spec.


Bases: SpackError

Raised when a disallowed virtual is found in packages.yaml


Return the permissions configured for the spec.

Include the GID bit if group permissions are on. This makes the group attribute sticky for the directory. Package-specific settings take precedence over settings for all packages.


Return the unix group associated with the spec.

Package-specific settings take precedence over settings for all packages.


Return the permissions configured for the spec.

Package-specific settings take precedence over settings for all packages.


Return True if the spec is configured as buildable.

Return a list of external specs (w/external directory path filled in), one for each known external installation.

spack.package_test module

Compare blessed and current output of executables.

Same as above, but when the blessed output is given as a file.

Compile C source_file with include_flags and link_flags, run the result, and return its output.

spack.parser module

Parser for spec literals

Here is the EBNF grammar for a spec:

spec            = [name] [node_options] { ^[edge_properties] node } |
                  [name] [node_options] hash |
                  filename

node            = name [node_options] |
                  [name] [node_options] hash |
                  filename

node_options    = [@(version_list|version_pair)] [%compiler] { variant }
edge_properties = [ { bool_variant | key_value } ]

hash            = / id
filename        = (.|/|[a-zA-Z0-9-_]*/)([a-zA-Z0-9-_./]*)(.json|.yaml)

name            = id | namespace id
namespace       = { id . }

variant         = bool_variant | key_value | propagated_bv | propagated_kv
bool_variant    = +id | ~id | -id
propagated_bv   = ++id | ~~id | --id
key_value       = id=id | id=quoted_id
propagated_kv   = id==id | id==quoted_id

compiler        = id [@version_list]

version_pair    = git_version=vid
version_list    = (version|version_range) [ { , (version|version_range)} ]
version_range   = vid:vid | vid: | :vid | :
version         = vid
git_version     = git.(vid) | git_hash
git_hash        = [A-Fa-f0-9]{40}

quoted_id       = " id_with_ws " | ' id_with_ws '
id_with_ws      = [a-zA-Z0-9_][a-zA-Z_0-9-.\s]*
vid             = [a-zA-Z0-9_][a-zA-Z_0-9-.]*
id              = [a-zA-Z0-9_][a-zA-Z_0-9-]*


Identifiers using the <name>=<value> syntax, such as architectures and compiler flags, require a space before the name.

There is one context-sensitive part: ids in versions may contain '.', while other ids may not.

There is one ambiguity: since '-' is allowed in an id, you need to put a space before -variant for it to be tokenized properly. You can either use whitespace, or you can just use ~variant since it means the same thing. Spack uses ~variant in directory names and in the canonical form of specs to avoid ambiguity. Both are provided because ~ can cause shell expansion when it is the first character in an id typed on the command line.



List of all valid regexes followed by error analysis regexes



Bases: object

Parse a single spec from a JSON or YAML file


Parse a spec tree from a specfile.
initial_spec -- object into which the spec is parsed

The initial_spec passed as argument, once constructed



Git refs include branch names, and can contain "." and "/"

Valid name for specs and variants. Here we are not using the previous "\w[\w.-]*" since that would match most characters that can be part of a word in any language

Bases: object

Parse a single spec node from a stream of tokens




Parse a single spec node from a stream of tokens
initial_spec -- object to be constructed

The object passed as argument



Bases: object

Parse text into specs

all_specs() -> List[Spec]
Return all the specs that remain to be parsed



Return the next spec parsed from text.
initial_spec -- object into which the spec is parsed. If None, a new one will be created.

The spec that was parsed


Return the entire list of tokens from the initial text. Whitespace is filtered out.


Bases: SpecSyntaxError

Error when parsing tokens


Bases: Exception

Base class for Spec syntax errors


Bases: SpecSyntaxError

Syntax error in a spec string


List of all the regexes used to match spec parts, in order of precedence

Bases: object

Represents tokens; generated from input by lexer and fed to parse().







Bases: object

Token context passed around by parsers

If the next token is of the specified kind, advance the stream and return True. Otherwise return False.

Advance one token







Parse text into a list of specs.
text (str) -- text to be parsed
List of specs
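A brief usage sketch (assuming the module-level parse function shown here; the spec string is illustrative):

import spack.parser

specs = spack.parser.parse("hdf5@1.10.1 +debug ^openmpi")
print([str(s) for s in specs])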


Parse exactly one spec from text and return it, or raise an error.
  • text (str) -- text to be parsed
  • initial_spec -- buffer into which the spec is parsed. If None, a new one will be created.



Return a token generator from the text passed as input.
SpecTokenizationError -- if we can't tokenize anymore, but didn't reach the end of the input text.


spack.patch module

Bases: Patch

Describes a patch that is retrieved from a file in the repository.

  • pkg (str) -- the class object for the package that owns the patch
  • relative_path (str) -- path to patch, relative to the repository directory for a package.
  • level (int) -- level to pass to patch command
  • working_dir (str) -- path within the source directory where patch should be applied



Partial dictionary -- subclasses should add to this.


Bases: SpackError

Raised when a patch file doesn't exist.


Bases: object

Base class for patches.

pkg (str) -- the package that owns the patch

The owning package is not necessarily the package to apply the patch to -- in the case where a dependent package patches its dependency, it is the dependent's fullname.

Apply a patch to source in a stage.
stage (spack.stage.Stage) -- stage where source code lives



Partial dictionary -- subclasses should add to this.


Bases: object

Index of patches used in a repository, by sha256 hash.

This allows us to look up patches without loading all packages. It's also needed to properly implement dependency patching, as we need a way to look up patches that come from packages not in the Spec sub-DAG.

The patch index is structured like this in a file (this is YAML, but we write JSON):

patches:
    sha256:
        namespace1.package1:
            <patch json>
        namespace2.package2:
            <patch json>
        ... etc. ...



Look up a patch in the index and build a patch object for it.
  • sha256 -- sha256 hash to look up
  • pkg (spack.package_base.PackageBase) -- Package object to get patch for.


We build patch objects lazily because building them requires that we have information about the package's location in its repo.



Update this cache with the contents of another.



Bases: SpackError

Raised when the wrong arguments are supplied to the patch directive.


Bases: NoSuchPatchError

Raised when a patch file cannot be located from sha256.


Bases: Patch

Describes a patch that is retrieved from a URL.

  • pkg (str) -- the package that owns the patch
  • url (str) -- URL where the patch can be fetched
  • level (int) -- level to pass to patch command
  • working_dir (str) -- path within the source directory where patch should be applied


Apply a patch to source in a stage.
stage (spack.stage.Stage) -- stage where source code lives



Partial dictionary -- subclasses should add to this.


Apply the patch at patch_path to code in the stage.
  • stage (spack.stage.Stage) -- stage with code that will be patched
  • patch_path (str) -- filesystem location for the patch to apply
  • level (int or None) -- patch level (default 1)
  • working_dir (str) -- relative path within the stage to change to (default '.')



Create a patch from json dictionary.

spack.paths module

Defines paths that are part of Spack's directory structure.

Do not import other spack modules here. This module is used throughout Spack and should bring in a minimal number of external dependencies.


transient caches for Spack data (virtual cache, patch sha256 lookup, etc.)


installation test (spack test) output

bootstrap store for bootstrapping clingo and other tools

This file lives in $prefix/lib/spack/spack/__file__

junit, cdash, etc. reports about builds




System configuration location


git repositories fetched to compare commits to versions

spack.projections module

Get the projection for a spec from a projections dict.

spack.provider_index module

Classes and functions to manage providers of virtual dependencies

Bases: _IndexBase
Return a deep copy of this index.

Construct a provider index from its JSON representation.
stream -- stream to read the JSON data from


Merge another provider index into this one.
other (ProviderIndex) -- provider index to be merged


Remove a provider from the ProviderIndex.

Dump a JSON representation of this object.
stream -- stream to dump to


Update the provider index with additional virtual specs.
spec -- spec potentially providing additional virtual specs



Bases: SpackError

Raised when there is a problem with a ProviderIndex.


spack.relocate module


Apply rpath fixups to the given file.
  • root -- absolute path to the parent directory
  • filename -- relative path to the library or binary

True if fixups were applied, else False


Remove duplicate and nonexistent rpaths.

Some autotools packages write their own -rpath entries in addition to those implicitly added by the Spack compiler wrappers. On Linux these duplicate rpaths are eliminated, but on macOS they result in multiple entries which makes it harder to adjust with install_name_tool -delete_rpath.


Returns True if a file is binary, False otherwise.
filename -- file to be tested
True or False


Inputs:
  • original rpaths from Mach-O binaries
  • dependency libraries for Mach-O binaries
  • id path of Mach-O libraries
  • old install directory layout root
  • prefix_to_prefix -- dictionary that maps prefixes in the old directory layout to directories in the new directory layout

Output:
  • paths_to_paths -- dictionary that maps all of the old paths to new paths

Return a dictionary mapping the relativized rpaths to the original rpaths. This dictionary is used to replace paths in mach-o binaries. Replace '@loader_path' with the dirname of the origname path name in rpaths and deps; idpath is replaced with the original path name

Return a dictionary mapping the original rpaths to the relativized rpaths. This dictionary is used to replace paths in mach-o binaries. Replace old_dir with relative path from dirname of path name in rpaths and deps; idpath is replaced with @rpath/libname.

Get rpaths, dependent libraries, and library id of mach-o objects.

Replace the original RPATHs in the new binaries making them relative to the original layout root.
  • new_binaries (list) -- new binaries whose RPATHs is to be made relative
  • orig_binaries (list) -- original binaries
  • orig_layout_root (str) -- path to be used as a base for making RPATHs relative



Compute the relative target from the original link and make the new link relative.
  • new_links (list) -- new links to be made relative
  • orig_links (list) -- original links



Replace old RPATHs with paths relative to old_dir in binary files

This function is used to make Mach-O buildcaches on macOS by replacing old paths with new paths using install_name_tool.

Inputs:
  • Mach-O binary to be modified
  • original rpaths
  • original dependency paths
  • original id path, if a Mach-O library
  • dictionary mapping paths in the old install layout to the new install layout

This function is used when installing Mach-O buildcaches on Linux by rewriting Mach-O loader commands for dependency library paths of Mach-O binaries and the id path for Mach-O libraries. Rewriting of rpaths is handled by replace_prefix_bin.

Inputs:
  • Mach-O binary to be modified
  • dictionary mapping paths in the old install layout to the new install layout

Returns True if the file with MIME type/subtype passed as arguments needs binary relocation, False otherwise.
  • m_type (str) -- MIME type of the file
  • m_subtype (str) -- MIME subtype of the file



Returns True if the file with MIME type/subtype passed as arguments needs text relocation, False otherwise.
  • m_type (str) -- MIME type of the file
  • m_subtype (str) -- MIME subtype of the file



Take a list of binaries, and an ordered dictionary of prefix to prefix mapping, and update the rpaths accordingly.

Relocate the binaries passed as arguments by changing their RPATHs.

Use patchelf to get the original RPATHs and then replace them with rpaths in the new directory layout.

New RPATHs are determined from a dictionary mapping the prefixes in the old directory layout to the prefixes in the new directory layout if the rpath was in the old layout root, i.e. system paths are not replaced.

  • binaries (list) -- list of binaries that might need relocation, located in the new prefix
  • orig_root (str) -- original root to be substituted
  • new_root (str) -- new root to be used, only relevant for relative RPATHs
  • new_prefixes (dict) -- dictionary that maps the original prefixes to where they should be relocated
  • rel (bool) -- True if the RPATHs are relative, False if they are absolute
  • orig_prefix (str) -- prefix where the executable was originally located
  • new_prefix (str) -- prefix where we want to relocate the executable



Relocate links to a new install prefix.

Use the macholib python package to get the rpaths, dependent libraries, and library identity for libraries from the Mach-O object. Modify them with the replacement paths queried from the dictionary mapping old layout prefixes to hashes and the dictionary mapping hashes to the new layout prefixes.

Relocate text file from the original installation prefix to the new prefix.

Relocation also affects the path in Spack's sbang script.

  • files (list) -- Text files to be relocated
  • prefixes (OrderedDict) -- String prefixes which need to be changed



Replace null terminated path strings hard-coded into binaries.

The new install prefix must be shorter than the original one.

  • binaries (list) -- binaries to be relocated
  • prefixes (OrderedDict) -- String prefixes which need to be changed.

spack.relocate_text.BinaryTextReplaceError -- when the new path is longer than the old path



spack.relocate_text module

This module contains pure-Python classes and functions for replacing paths inside text files and binaries.

Bases: PrefixReplacer
Create a regex that looks for exact matches of prefixes, and also tries to match a C-string type null terminator in a small lookahead window.
  • binary_prefixes (list) -- List of byte strings of prefixes to match
  • suffix_safety_size (int) -- Size of the lookahead for null-terminated strings.


Returns: compiled regex


Create a BinaryFilePrefixReplacer from an ordered prefix to prefix map.
  • prefix_to_prefix (OrderedDict) -- Ordered mapping of prefix to prefix.
  • suffix_safety_size (int) -- Number of bytes to retain at the end of a C-string to avoid binary string-aliasing issues.








Bases: object

Base class for applying a prefix to prefix map to a list of binaries or text files. Child classes implement _apply_to_file to do the actual work, which is different when it comes to binaries and text files.

Returns a list of files that were modified



Returns true when the prefix to prefix map is mapping everything to the same location (identity) or there are no prefixes to replace.


Bases: PrefixReplacer

This class applies prefix to prefix mappings for relocation on text files.

Note that UTF-8 encoding is assumed.





Create a binary regex that matches the input path in utf8

Create a (binary) regex that matches any input path in utf8

spack.repo module

Bases: RepoError

Raised when repo layout is invalid.


Bases: RepoError

Raised when a package's class constructor fails.


Bases: Mapping

Cache that maps package names to the stats obtained on the 'package.py' files associated with them.

For each repository a cache is maintained at class level, and shared among all instances referring to it. Update of the global cache is done lazily during instance initialization.

Regenerate cache for this checker.




Bases: object

Bases: object

Adaptor for indexes that need to be generated when repos are updated.


Whether an update is needed when the package file hasn't changed.
True if this package needs its index updated, False otherwise.

Return type
(bool)

We already automatically update indexes when package files change, but other files (like patches) may change underneath the package file. This method can be used to check additional package-specific files whenever they're loaded, to tell the RepoIndex to update the index just for that package.


Read this index from a provided file object.

Update the index in memory with information about a package.

Write the index to a file object.


Bases: RepoError

Raised when an invalid namespace is encountered.


Bases: object

Build a mock repository in a directory

Create a mock package in the repository, using a Jinja2 template.
  • name (str) -- name of the new package
  • dependencies (list) -- list of ("dep_spec", "dep_type", "condition") tuples. Both "dep_type" and "condition" can default to None in which case spack.dependency.default_deptype and spack.spec.Spec() are used.






Guaranteed unused default value for some functions.

Bases: RepoError

Raised when there are no repositories configured.



Bases: Indexer

Lifecycle methods for patch cache.

Whether an update is needed when the package file hasn't changed.
True if this package needs its index updated, False otherwise.

Return type
(bool)

We already automatically update indexes when package files change, but other files (like patches) may change underneath the package file. This method can be used to check additional package-specific files whenever they're loaded, to tell the RepoIndex to update the index just for that package.


Read this index from a provided file object.

Update the index in memory with information about a package.

Write the index to a file object.


Bases: Indexer

Lifecycle methods for virtual package providers.

Read this index from a provided file object.

Update the index in memory with information about a package.

Write the index to a file object.


Package modules are imported as spack.pkg.<repo-namespace>.<pkg-name>

Bases: object

Class representing a package repository in the filesystem.

Each package repository must have a top-level configuration file called repo.yaml.

Currently, repo.yaml must define:

namespace:
A Python namespace where the repository's packages should live.
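A minimal repo.yaml therefore looks like this (the namespace is illustrative):

repo:
  namespace: my_custom_repo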

Iterator over all package classes in the repository.

Use this with care, because loading packages is slow.


Returns a sorted list of all package names in the Repo.


Get the directory name for a particular package. This is the directory that contains its package.py file.

Dump provenance information for a spec to a particular path.

This dumps the package file and any associated patch files. Raises UnknownPackageError if not found.


Whether a package with the supplied name exists.


Get the filename for the module we should load for a particular package. Packages for a Repo live in $root/<package_name>/package.py

This will return a proper package.py path even if the package doesn't exist yet, so callers will need to ensure the package exists before importing.


Returns the package associated with the supplied spec.

Get the class for the package out of its module.

First loads (or fetches from cache) a module for the package. Then extracts the package class from the module according to Spack's naming convention.


Construct the index for this repo lazily.

True if fullname is a prefix of this Repo's namespace.

Return True if the package with this name is virtual, False otherwise.

This function uses the provider index. If calling from a code block that is used to construct the provider index, use the is_virtual_safe function instead.

pkg_name (str) -- name of the package we want to check


Return True if the package with this name is virtual, False otherwise.

This function doesn't use the provider index.

pkg_name (str) -- name of the package we want to check


Time a package file in this repo was last updated.

Get path to package.py file for this repo.



Index of patches and packages they're defined on.

A provider index with names specific to this repo.


Clear entire package instance cache.

Allow users to import Spack packages using Python identifiers.

A python identifier might map to many different Spack package names due to hyphen/underscore ambiguity.

num3proxy -> 3proxy
foo_bar -> foo_bar, foo-bar
foo_bar_baz -> foo_bar_baz, foo-bar-baz, foo_bar-baz, foo-bar_baz


Index of tags and which packages they're defined on.


Bases: SpackError

Superclass for repository-related errors.


Bases: object

Container class that manages a set of Indexers for a Repo.

This class is responsible for checking packages in a repository for updates (using FastPackageChecker) and for regenerating indexes when they're needed.

Indexers should be added to the RepoIndex using add_indexer(name, indexer), and they should support the interface defined by Indexer, so that the RepoIndex can read, generate, and update stored indices.

Generated indexes are accessed by name via __getitem__().

Add an indexer to the repo index.
  • name -- name of this indexer
  • indexer -- object implementing the Indexer interface




Bases: _PrependFileLoader

Loads a Python module associated with a package in specific repository


Bases: object

A RepoPath is a list of repos that function as one.

It functions exactly like a Repo, but it operates on the combined results of the Repos in its list instead of on a single package repository.

repos (list) -- list of Repo objects or paths to put in this RepoPath





Dump provenance information for a spec to a particular path.

This dumps the package file and any associated patch files. Raises UnknownPackageError if not found.


Whether a package with the given name exists in the path's repos.

Note that virtual packages do not "exist".




Get the first repo in precedence order.

Returns the package associated with the supplied spec.

Find a class for the spec's package and return the class object.

Get a repository by namespace.
namespace -- Look up this namespace in the RepoPath, and return it if found.

Optional Arguments:

default:
If default is provided, return it when the namespace isn't found. If not, raise an UnknownNamespaceError.





Return True if the package with this name is virtual, False otherwise.

This function uses the provider index. If calling from a code block that is used to construct the provider index, use the is_virtual_safe function.

pkg_name (str) -- name of the package we want to check


Return True if the package with this name is virtual, False otherwise.

This function doesn't use the provider index.

pkg_name (str) -- name of the package we want to check


Time a package file in this repo was last updated.

Get path to package.py file for this repo.

Returns a list of packages matching any of the tags in input.
full -- if True the package names in the output are fully-qualified


Merged PatchIndex from all Repos in the RepoPath.

Merged ProviderIndex from all Repos in the RepoPath.


Add repo first in the search path.

Add repo last in the search path.

Remove a repo from the search path.

Given a spec, get the repository for its package.

Merged TagIndex from all Repos in the RepoPath.


Bases: object

MetaPathFinder class that loads a Python module corresponding to a Spack package

Return a loader based on the inspection of the current global repository list.




Bases: module

Allow lazy loading of modules.



Bases: Indexer

Lifecycle methods for a TagIndex on a Repo.

Read this index from a provided file object.

Update the index in memory with information about a package.

Write the index to a file object.


Bases: RepoError

Raised when we encounter a package spack doesn't have.


Bases: UnknownEntityError

Raised when we encounter an unknown namespace


Bases: UnknownEntityError

Raised when we encounter a package spack doesn't have.


Add a package to the git stage with git add.

Convenience wrapper around spack.repo.all_package_names().

Decorator that automatically converts the first argument of a function to a Spec.

Create a RepoPath from a configuration object.
configuration (spack.config.Configuration) -- configuration object


Create a repository, or just return a Repo if it already exists.

Create a new repository in root with the specified namespace.

If the namespace is not provided, use basename of root. Return the canonicalized path and namespace of the created repository.


Compute package lists for the two revisions and return a tuple containing all the packages in rev1 but not in rev2 and all the packages in rev2 but not in rev1.


  • type (str) -- String containing one or more of 'A', 'B', 'C'
  • rev1 (str) -- Revision to compare against, default is 'HEAD^'
  • rev2 (str) -- Revision to compare to rev1, default is 'HEAD'

A set containing the names of affected packages.


Determine whether we are in a package file from a repo.

List all packages associated with the given revision

Return the repository namespace only for the full module name.

For instance:

namespace_from_fullname('spack.pkg.builtin.hdf5') == 'builtin'


fullname (str) -- full name for the Python module


Get the test repo if it is active, otherwise the builtin repo.

Given a package name that might be fully-qualified, returns the namespace part, if present and the unqualified package name.

If the package name is unqualified, the namespace is an empty string.

pkg_name -- a package name, either unqualified like "llvm", or fully-qualified, like "builtin.llvm"


Returns the full namespace of a repository, given its relative one

For instance:

python_package_for_repo('builtin') == 'spack.pkg.builtin'


namespace (str) -- repo namespace


Use the repositories passed as arguments within the context manager.
  • *paths_and_repos -- paths to the repositories to be used, or already constructed Repo objects
  • override (bool) -- if True use only the repositories passed as input, if False add them to the top of the list of current repositories.

Corresponding RepoPath object
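A hedged usage sketch (the repository path is illustrative):

import spack.repo

# Sketch: temporarily add a custom repository on top of the current list.
with spack.repo.use_repositories("/path/to/my/repo", override=False) as repo_path:
    repo_path.exists("zlib")  # queries the combined repositories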


spack.report module

Tools to produce reports of spec installations

Bases: InfoCollector

Collect information for the PackageInstaller._install_task method.

specs -- specs whose install information will be recorded

Return the package instance, given the signature of the wrapped function.

Return the stdout log associated with the function being monitored
pkg -- package under consideration


Add additional entries to a spec record when entering the collection context.

Add additional properties on function call success.


Bases: object

Base class for context manager objects that collect information during the execution of certain package functions.

The data collected is available through the specs attribute once exited, and it's organized as a list where each item represents the installation of one spec.

Action to be reported on

Return the package instance, given the signature of the wrapped function.

Return the stdout log associated with the function being monitored
pkg -- package under consideration


Add additional entries to a spec record when entering the collection context.

Specs that will be acted on

Add additional properties on function call success.

This is where we record the data that will be included in our report

Class for which to wrap a function


Bases: InfoCollector

Collect information for the PackageBase.do_test method.

  • specs -- specs whose install information will be recorded
  • record_directory -- record directory for test log paths



Return the package instance, given the signature of the wrapped function.

Return the stdout log associated with the function being monitored
pkg -- package under consideration


Add additional properties on function call success.


Decorate a package to generate a report after the installation function is executed.
  • reporter -- object that generates the report
  • filename -- filename for the report
  • specs -- specs that need reporting



Decorate a package to generate a report after the test function is executed.
  • reporter -- object that generates the report
  • filename -- filename for the report
  • specs -- specs that need reporting
  • raw_logs_dir -- record directory for test log paths



spack.resource module

Describes an optional resource needed for a build.

Typically a bunch of sources that can be built in-tree within another package to enable optional features.

Bases: object

Represents an optional resource to be fetched by a package.

Aggregates a name, a fetcher, a destination and a placement.


spack.rewiring module

Bases: RewireError

Raised when the build_spec for a splice was not installed.


Bases: SpackError

Raised when something goes wrong with rewiring.


Given a spliced spec, this function conducts all the rewiring on all nodes in the DAG of that spec.

This function rewires a single node, worrying only about references to its subgraph. Binaries, text, and links are all changed in accordance with the splice. The resulting package is then 'installed.'

spack.spec module

Spack allows very fine-grained control over how packages are installed and over how they are built and configured. To make this easy, it has its own syntax for declaring a dependence. We call a descriptor of a particular package configuration a "spec".

The syntax looks like this:

$ spack install mpileaks ^openmpi @1.2:1.4 +debug %intel @12.1 target=zen

                0        1        2        3      4      5     6


The first part of this is the command, 'spack install'. The rest of the line is a spec for a particular installation of the mpileaks package.

0. The package to install.
1. A dependency of the package, prefixed by ^.
2. A version descriptor for the package. This can either be a specific version, like "1.2", or it can be a range of versions, e.g. "1.2:1.4". If multiple specific versions or multiple ranges are acceptable, they can be separated by commas, e.g. if a package will only build with versions 1.0, 1.2-1.4, and 1.6-1.8 of mvapich, you could say:

   depends_on("mvapich@1.0,1.2:1.4,1.6:1.8")

3. A compile-time variant of the package. If you need openmpi to be built in debug mode for your package to work, you can require it by adding +debug to the openmpi spec when you depend on it. If you do NOT want the debug option to be enabled, then replace this with -debug. If you would like the variant to be propagated through all your package's dependencies, use "++" for enabling and "--" or "~~" for disabling.
4. The name of the compiler to build with.
5. The versions of the compiler to build with. Note that the identifier for a compiler version is the same '@' that is used for a package version. A version list denoted by '@' is associated with the compiler only if it comes immediately after the compiler name. Otherwise it will be associated with the current package spec.
6. The architecture to build with. This is needed on machines where cross-compilation is required (a parsing sketch follows this list).
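A minimal sketch of parsing such a spec programmatically (hedged; assumes a working Spack checkout on the Python path):

from spack.spec import Spec

# Sketch: the string form described above parses into a Spec object.
s = Spec("mpileaks ^openmpi@1.2:1.4+debug %intel@12.1 target=zen")
print(s.name)            # 'mpileaks'
print(s.dependencies())  # the explicitly listed openmpi dependency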


Bases: SpecError

Raised when the double equal symbols are used to assign the spec's architecture.


Bases: object

The CompilerSpec field represents the compiler or range of compiler versions that a package should be built with. CompilerSpecs have a name and a version list.

A CompilerSpec is concrete if its versions are concrete and there is an available compiler with the right version.

Intersect self's versions with other.

Return whether the CompilerSpec changed.



Equivalent to {compiler.name}{@compiler.version} for Specs, without extra @= for readability.


Return True if all concrete specs matching self also match other, otherwise False.

For compiler specs this means that the name of the compiler must be the same for self and other, and that the versions ranges should intersect.

other -- spec to be satisfied



Return True if all concrete specs matching self also match other, otherwise False.

For compiler specs this means that the name of the compiler must be the same for self and other, and that the version range of self is a subset of that of other.

other -- spec to be satisfied






Bases: SpecError

Raised when the same architecture occurs in a spec twice.


Bases: SpecError

Raised when the same compiler occurs in a spec twice.


Bases: SpecError

Raised when the same dependency occurs in a spec twice.


Bases: SpecError

Raised when two nodes in the same spec DAG have inconsistent constraints.


Bases: SpecError

Raised when a dependency in a spec is not actually a dependency of the package.



Bases: SpecError

Raised when there is no package that provides a particular virtual dependency.


Bases: SpecError

Raised when there is no package that provides a particular virtual dependency.


Bases: object

Add a dependency edge to this spec.
  • dependency_spec -- spec of the dependency
  • deptypes -- dependency types for this edge
  • virtuals -- virtuals provided by this edge






Same as format, but color defaults to auto instead of False.

Clears all cached hashes in a Spec, while preserving other properties.

Trim the dependencies of this spec.

Trim the dependencies and dependents of this spec.



Return names of dependencies that self and other have in common.

A spec is concrete if it describes a single build of a package.

More formally, a spec is concrete if concretize() has been called on it and it has been marked _concrete.

Concrete specs either can be or have been built. All constraints have been resolved, optional dependencies have been added or removed, a compiler has been chosen, and all variants have values.


Concretize the current spec.
tests (bool or list) -- if False disregard 'test' dependencies, if a list of names activate them for the packages in the list, if True activate 'test' dependencies for all packages.


This is a non-destructive version of concretize().

First clones, then returns a concrete version of this package without modifying this package.

tests (bool or list) -- if False disregard 'test' dependencies, if a list of names activate them for the packages in the list, if True activate 'test' dependencies for all packages.


Intersect self with other in-place. Return True if self changed, False otherwise.
  • other -- constraint to be added to self
  • deps -- if False, constrain only the root node, otherwise constrain dependencies as well.

spack.error.UnsatisfiableSpecError -- when self cannot be constrained


Return a constrained copy without modifying this spec.

Make a copy of this spec.
  • deps -- Defaults to True. If boolean, controls whether dependencies are copied (copied if True). If a DepTypes or DepFlag is provided, only matching dependencies are copied.
  • kwargs -- additional arguments for internal use (passed to _dup).

A copy of this spec.

Examples

Deep copy with dependencies:

spec.copy()
spec.copy(deps=True)


Shallow copy (no dependencies):

spec.copy(deps=False)


Only build and run dependencies:

spec.copy(deps=('build', 'run'))



Returns an auto-colorized version of self.short_spec.

This is Spack's default hash, used to identify installations.

Same as the full hash (includes package hash and build/link/run deps). Tells us when package files and any dependencies have changed.

NOTE: Versions of Spack prior to 0.18 only included link and run deps.


Get the first <bits> bits of the DAG hash as an integer type.

Return an anonymous spec for the default architecture

Return a list of direct dependencies (nodes in the DAG).
  • name (str) -- filter dependencies by package name
  • deptype -- allowed dependency types



Return a list of direct dependents (nodes in the DAG).
  • name (str) -- filter dependents by package name
  • deptype -- allowed dependency types



Remove any reference that dependencies have of this node.
deptype (str or tuple) -- dependency types tracked by the current spec


Returns dependencies in self that are not in other.

Helper method to print edge attributes in spec literals

Return a list of edges connecting this node in the DAG to parents.
  • name (str) -- filter dependents by package name
  • depflag -- allowed dependency types



Return a list of edges connecting this node in the DAG to children.
  • name (str) -- filter dependencies by package name
  • depflag -- allowed dependency types




Raise if a deprecated spec is in the dag.
root (Spec) -- root spec to be analyzed
SpecDeprecatedError -- if any deprecated spec is found


Ensures that the variants attached to a spec are valid.
spec (Spec) -- spec to be analyzed
spack.variant.UnknownVariantError -- on the first unknown variant found


True if the full dependency DAGs of specs are equal.

Equality with another spec, not including dependencies.




Return a DependencyMap containing all of this spec's dependencies with their constraints merged.

If copy is True, returns merged copies of its dependencies without modifying the spec it's called on.

If copy is False, clears this spec's dependencies and returns them. This disconnects all dependency links including transitive dependencies, except for concrete specs: if a spec is concrete it will not be disconnected from its dependencies (although a non-concrete spec with concrete dependencies will be disconnected from those dependencies).


Prints out particular pieces of a spec, depending on what is in the format string.

Using the {attribute} syntax, any field of the spec can be selected. Those attributes can be recursive. For example, s.format('{compiler.version}') will print the version of the compiler.

Commonly used attributes of the Spec for format strings include:

name
version
compiler
compiler.name
compiler.version
compiler_flags
variants
architecture
architecture.platform
architecture.os
architecture.target
prefix


Some additional special-case properties can be added:

hash[:len]    The DAG hash with optional length argument
spack_root    The spack root directory
spack_install The spack install directory


The ^ sigil can be used to access dependencies by name. s.format('{^mpi.name}') will print the name of the MPI implementation in the spec.

The @, %, arch=, and / sigils can be used to include the sigil with the printed string. These sigils may only be used with the appropriate attributes, listed below:

@        ``{@version}``, ``{@compiler.version}``
%        ``{%compiler}``, ``{%compiler.name}``
arch=    ``{arch=architecture}``
/        ``{/hash}``, ``{/hash:7}``, etc


The @ sigil may also be used for any other property named version. Sigils printed with the attribute string are only printed if the attribute string is non-empty, and are colored according to the color of the attribute.

Sigils are not used for printing variants. Variants listed by name naturally print with their sigil. For example, spec.format('{variants.debug}') would print either +debug or ~debug depending on the name of the variant. Non-boolean variants print as name=value. To print variant names or values independently, use spec.format('{variants.<name>.name}') or spec.format('{variants.<name>.value}').

Spec format strings use \ as the escape character. Use \{ and \} for literal braces, and \\ for the literal \ character.

format_string (str) -- string containing the format to be expanded
  • color (bool) -- True if returned string is colored
  • transform (dict) -- maps full-string formats to a callable that accepts a string and returns another one
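A hedged sketch on an abstract spec (only attributes the spec actually defines will be non-empty):

from spack.spec import Spec

# Sketch of format(): attributes in braces are expanded; sigils such as
# @ are printed only when the attribute string is non-empty.
s = Spec("hdf5@1.10.1+debug")
s.format("{name}{@version}")  # -> 'hdf5@1.10.1'
s.format("{name}-{version}")  # -> 'hdf5-1.10.1'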



Given a format_string that is intended as a path, generate a string like from Spec.format, but eliminate extra path separators introduced by formatting of Spec properties.

Path separators explicitly added to the string are preserved, so for example "{name}/{version}" would generate a directory based on the Spec's name and a subdirectory based on its version. This function guarantees, though, that the resulting string has only those two path components (i.e., even if str(Spec.version) would normally contain a path separator, it will not here).


Construct a spec from a spec string determined during external detection and attach extra attributes to it.
  • spec_str (str) -- spec string
  • extra_attributes (dict) -- dictionary containing extra attributes

external spec
Return type
spack.spec.Spec


Construct a spec from JSON/YAML.
data -- a nested dict/list data structure read from YAML or JSON.


Construct a spec from JSON.
stream -- string or file object to read from.


Builds a Spec from a dictionary containing the spec literal.

The dictionary must have a single top level key, representing the root, and as many secondary level keys as needed in the spec.

The keys can be either a string or a Spec or a tuple containing the Spec and the dependency types.

  • spec_dict (dict) -- the dictionary containing the spec literal
  • normal (bool) -- if True the same key appearing at different levels of the spec_dict will map to the same object in memory.


Examples

A simple spec foo with no dependencies:

{'foo': None}


A spec foo with a (build, link) dependency bar:

{'foo': {'bar:build,link': None}}


A spec with a diamond dependency and various build types:

{'dt-diamond': {
    'dt-diamond-left:build,link': {
        'dt-diamond-bottom:build': None
    },
    'dt-diamond-right:build,link': {
        'dt-diamond-bottom:build,link,run': None
    }
}}


The same spec with a double copy of dt-diamond-bottom and no diamond structure (normal=False is the second argument to from_literal):

Spec.from_literal({
    'dt-diamond': {
        'dt-diamond-left:build,link': {
            'dt-diamond-bottom:build': None
        },
        'dt-diamond-right:build,link': {
            'dt-diamond-bottom:build,link,run': None
        }
    }
}, normal=False)


Constructing a spec using a Spec object as key:

mpich = Spec('mpich')
libelf = Spec('libelf@1.8.11')
expected_normalized = Spec.from_literal({
    'mpileaks': {
        'callpath': {
            'dyninst': {
                'libdwarf': {libelf: None},
                libelf: None
            },
            mpich: None
        },
        mpich: None
    },
})



Construct a spec from clearsigned json spec file.
stream -- string or file object to read from.


Construct a spec from a JSON or YAML spec file path

Construct a spec from YAML.
stream -- string or file object to read from.



Return a dictionary that points to all the dependencies in this spec.


Helper for tree to print DB install status.

Installation status of a package.
True if the package has been installed, False otherwise.


Whether the spec is installed in an upstream repository.
True if the package is installed in an upstream, False otherwise.


Return True if there exists at least one concrete spec that matches both self and other, otherwise False.

This operation is commutative, and if two specs intersect it means that one can constrain the other.

  • other -- spec to be checked for compatibility
  • deps -- if True check compatibility of dependency nodes too, if False only check root
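A hedged sketch of the distinction between intersects() and satisfies():

from spack.spec import Spec

a = Spec("zlib@1.2:1.3")
a.intersects(Spec("zlib@1.3:1.4"))  # True: zlib@1.3 matches both specs
Spec("zlib@1.3").satisfies(a)       # True: every zlib@1.3 also matches a
a.satisfies(Spec("zlib@1.3"))       # False: a also admits e.g. zlib@1.2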



Given a spec with an abstract hash, return a copy of the spec with all properties and dependencies by looking up the hash in the environment, store, or finally, binary caches. This is non-destructive.

Returns a node_dict of this spec with the dag hash added. If this spec is concrete, the full hash is added as well. If 'build' is in the hash_type, the build hash is also added.

When specs are parsed, any dependencies specified are hanging off the root, and ONLY the ones that were explicitly provided are there. Normalization turns a partial flat spec into a DAG, where:

1. Known dependencies of the root package are in the DAG.
2. Each node's dependencies dict only contains its known direct deps.
3. There is only ONE unique spec for each package in the DAG. This includes virtual packages. If there is a non-virtual package that provides a virtual package that is in the spec, then we replace the virtual package with the non-virtual one.

TODO: normalize should probably implement some form of cycle detection, to ensure that the spec is actually a DAG.


Return a normalized copy of this spec without modifying this spec.




Internal package call gets only the class object for a package. Use this to just get package metadata.

Compute the hash of the contents of the package for this node

Return patch objects for any patch sha256 sums on this Spec.

This is for use after concretization to iterate over any patches associated with this spec.

TODO: this only checks in the package; it doesn't resurrect old patches from install directories, but it probably should.




Hash used to transfer specs among processes.

This hash includes build and test dependencies and is only used to serialize a spec and pass it around among processes.


Get the first <bits> bits of the DAG hash as an integer type.

Given a spec with an abstract hash, attempt to populate all properties and dependencies by looking up the hash in the environment, store, or finally, binary caches. This is destructive.

Follow dependent links and find the root of this spec's DAG.

Spack specs have a single root (the package being installed).


Return True if all concrete specs matching self also match other, otherwise False.
  • other -- spec to be satisfied
  • deps -- if True descend to dependencies, otherwise only check root node



Returns a version of the spec with the dependencies hashed instead of completely enumerated.

Utility method for computing different types of Spec hashes.
hash (spack.hash_types.SpecHashDescriptor) -- type of hash to generate.


Splices dependency "other" into this ("target") Spec, and return the result as a concrete Spec. If transitive, then other and its dependencies will be extrapolated to a list of Specs and spliced in accordingly. For example, let there exist a dependency graph as follows: T | Z<-H In this example, Spec T depends on H and Z, and H also depends on Z. Suppose, however, that we wish to use a different H, known as H'. This function will splice in the new H' in one of two ways: 1. transitively, where H' depends on the Z' it was built with, and the new T* also directly depends on this new Z', or 2. intransitively, where the new T* and H' both depend on the original Z. Since the Spec returned by this splicing function is no longer deployed the same way it was built, any such changes are tracked by setting the build_spec to point to the corresponding dependency from the original Spec. TODO: Extend this for non-concrete Specs.

Returns whether or not this Spec is deployed as built, i.e., whether or not this Spec has ever been spliced.


Create a dictionary suitable for writing this spec to YAML or JSON.

This dictionary is like the one that is ultimately written to a spec.json file in each Spack installation directory. For example, for sqlite:

{
  "spec": {
    "_meta": {
      "version": 2
    },
    "nodes": [
      {
        "name": "sqlite",
        "version": "3.34.0",
        "arch": {
          "platform": "darwin",
          "platform_os": "catalina",
          "target": "x86_64"
        },
        "compiler": {
          "name": "apple-clang",
          "version": "11.0.0"
        },
        "namespace": "builtin",
        "parameters": {
          "column_metadata": true,
          "fts": true,
          "functions": false,
          "rtree": false,
          "cflags": [],
          "cppflags": [],
          "cxxflags": [],
          "fflags": [],
          "ldflags": [],
          "ldlibs": []
        },
        "dependencies": [
          {
            "name": "readline",
            "hash": "4f47cggum7p4qmp3xna4hi547o66unva",
            "type": [
              "build",
              "link"
            ]
          },
          {
            "name": "zlib",
            "hash": "uvgh6p7rhll4kexqnr47bvqxb3t33jtq",
            "type": [
              "build",
              "link"
            ]
          }
        ],
        "hash": "tve45xfqkfgmzwcyfetze2z6syrg7eaf"
      },
      # ... more node dicts for readline and its dependencies ...
    ]
  }
}


Note that this dictionary starts with the 'spec' key, and what follows is a list starting with the root spec, followed by its dependencies in preorder. Each node in the list also has a 'hash' key that contains the hash of the node without the hash field included.

In the example, the package content hash is not included in the spec, but if package_hash were true there would be an additional field on each node called package_hash.

from_dict() can be used to read back in a spec that has been converted to a dictionary, serialized, and read back in.

  • deptype (tuple or str) -- dependency types to include when traversing the spec.
  • package_hash (bool) -- whether to include package content hashes in the dictionary.
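A hedged round-trip sketch (spec is assumed to be an existing concrete Spec):

import spack.spec

data = spec.to_dict()  # 'spec' assumed concrete, e.g. from the local store
restored = spack.spec.Spec.from_dict(data)
assert restored.dag_hash() == spec.dag_hash()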




Create a dictionary representing the state of this Spec.

to_node_dict creates the content that is eventually hashed by Spack to create identifiers like the DAG hash (see dag_hash()). Example result of to_node_dict for the sqlite package:

{
    'sqlite': {
        'version': '3.28.0',
        'arch': {
            'platform': 'darwin',
            'platform_os': 'mojave',
            'target': 'x86_64',
        },
        'compiler': {
            'name': 'apple-clang',
            'version': '10.0.0',
        },
        'namespace': 'builtin',
        'parameters': {
            'fts': 'true',
            'functions': 'false',
            'cflags': [],
            'cppflags': [],
            'cxxflags': [],
            'fflags': [],
            'ldflags': [],
            'ldlibs': [],
        },
        'dependencies': {
            'readline': {
                'hash': 'zvaa4lhlhilypw5quj3akyd3apbq5gap',
                'type': ['build', 'link'],
            }
        },
    }
}


Note that the dictionary returned does not include the hash of the root of the spec, though it does include hashes for each dependency, and (optionally) the package file corresponding to each node.

See to_dict() for a "complete" spec hash, with hashes for each node and nodes for each dependency (instead of just their hashes).

hash (spack.hash_types.SpecHashDescriptor) --



Shorthand for traverse_nodes()

Shorthand for traverse_edges()

Prints out this spec and its dependencies, tree-formatted with indentation.

Status function may either output a boolean or an InstallStatus

  • color -- if True, always colorize the tree. If False, don't colorize the tree. If None, use the default from llnl.tty.color
  • depth -- print the depth from the root
  • hashes -- if True, print the hash of each node
  • hashlen -- length of the hash to be printed
  • cover -- either "nodes" or "edges"
  • indent -- extra indentation for the tree being printed
  • format -- format to be used to print each node
  • deptypes -- dependency types to be represented in the tree
  • show_types -- if True, show the (merged) dependency type of a node
  • depth_first -- if True, traverse the DAG depth first when representing it as a tree
  • recurse_dependencies -- if True, recurse on dependencies
  • status_fn -- optional callable that takes a node as an argument and return its installation status
  • prefix -- optional callable that takes a node as an argument and return its installation prefix



If it is not already there, adds the variant named variant_name to spec, based on the definition contained in the package metadata. Validates the variant and values before returning.

Used to add values to a variant without being sensitive to the variant being single or multi-valued. If the variant already exists on the spec it is assumed to be multi-valued and the values are appended.

  • variant_name -- the name of the variant to add or append to
  • values -- the value or values (as a tuple) to add/append to the variant



Validate the detection of an external spec.

This method is used as part of Spack's detection protocol, and is not meant for client code use.


Checks that names and values in this spec are real. If they're not, it will raise an appropriate exception.



Return list of any virtual deps in this spec.


Bases: SpecError

Raised when a spec concretizes to a deprecated spec or dependency.


Bases: SpecError

Wrapper for ParseError for when we're parsing specs.



Bases: UnsatisfiableSpecError

Raised when a spec architecture conflicts with package constraints.


Bases: UnsatisfiableSpecError

Raised when a spec variant conflicts with package constraints.


Bases: UnsatisfiableSpecError

Raised when a spec compiler conflicts with package constraints.


Bases: UnsatisfiableSpecError

Raised when dependencies of constrained specs are incompatible.


Bases: UnsatisfiableSpecError

Raised when a provider is supplied but constraints don't match a vpkg requirement


Bases: UnsatisfiableSpecError

Raised when two specs aren't even for the same package.


Bases: UnsatisfiableSpecError

Raised when a spec version conflicts with package constraints.


Bases: SpecError

Raised when the user asks for a compiler spack doesn't know about.


spack.spec_list module

Bases: SpecListError

Error class for invalid spec constraints at concretize time.



Bases: SpackError

Error class for all errors related to SpecList objects.


Bases: SpecListError

Error class for undefined references in Spack stacks.


spack.stage module

Bases: object

Simple class that allows any directory to be a spack stage. Consequently, it does not expect or require that the source path adhere to the standard directory naming convention.






Returns True since the source_path must exist.





Bases: Stage
Changes to the stage directory and attempts to expand the downloaded archive. Fails if the stage is not set up or if the archive is not yet downloaded.

Removes the expanded archive path if it exists, then re-expands the archive.


Bases: StageError

"Error encountered during restaging.


Bases: object

Manages a temporary stage directory for building.

A Stage object is a context manager that handles a directory where some source code is downloaded and built before being installed. It handles fetching the source code, either as an archive to be expanded or by checking it out of a repository. A stage's lifecycle looks like this:

with Stage() as stage:          # Context manager creates and destroys the
                                # stage directory
    stage.fetch()               # Fetch a source archive into the stage.
    stage.expand_archive()      # Expand the archive into source_path.
    <install>                   # Build and install the archive.
                                # (handled by user of Stage)


When used as a context manager, the stage is automatically destroyed if no exception is raised by the context. If an exception is raised, the stage is left in the filesystem and NOT destroyed, for potential reuse later.

You can also use the stage's create/destroy functions manually, like this:

stage = Stage()
try:
    stage.create()              # Explicitly create the stage directory.
    stage.fetch()               # Fetch a source archive into the stage.
    stage.expand_archive()      # Expand the archive into source_path.
    <install>                   # Build and install the archive.
                                # (handled by user of Stage)
finally:
    stage.destroy()             # Explicitly destroy the stage directory.


There are two kinds of stages: named and unnamed. Named stages can persist between runs of spack, e.g. if you fetched a tarball but didn't finish building it, you won't have to fetch it again.

Unnamed stages are created using standard mkdtemp mechanisms or similar, and are intended to persist for only one run of spack.

Path to the source archive within this stage directory.


Perform a fetch if the resource is not already cached
  • mirror (spack.caches.MirrorCache) -- the mirror to cache this Stage's resource in
  • stats (spack.mirror.MirrorStats) -- this is updated depending on whether the caching operation succeeded or failed



Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.

Ensures the top-level (config:build_stage) directory exists.

Removes this stage directory.

The Stage will not attempt to look for the associated fetcher target in any of Spack's mirrors (including the local download cache).

Changes to the stage directory and attempts to expand the downloaded archive. Fails if the stage is not set up or if the archive is not yet downloaded.

Returns True if source path expanded; else False.

Possible archive file paths.

Retrieves the code or archive
  • mirror_only (bool) -- only fetch from a mirror
  • err_msg (str or None) -- the error message to display if all fetchers fail or None for the default fetch failure message



Most staging is managed by Spack. DIYStage is one exception.

Removes the expanded archive path if it exists, then re-expands the archive.


Returns the well-known source directory path.

Copy the source_path directory in its entirety to directory dest

This operation creates/fetches/expands the stage if that has not already been done, and destroys the stage when it is done.



Bases: Composite

Composite for Stage type objects. The first item in this composite is considered to be the root package, and operations that return a value are forwarded to it.



Create a new composite from an iterable of stages.





Bases: SpackError

"Superclass for all errors encountered during staging.


Bases: StageError

"Error encountered with stage path.


Bases: StageError

Raised when we can't determine a URL to fetch a package.


Determine stage name given a spec

Create the stage root directory and ensure appropriate access perms.

Ensure we can access a directory and die with an error if we can't.

Computes the checksums for each version passed in input, and returns the results.

Archives are fetched according to the url dictionary passed as input.

The first_stage_function argument allows the caller to inspect the first downloaded archive, e.g., to determine the build system.

  • url_by_version -- URL keyed by version
  • package_name -- name of the package
  • first_stage_function -- function that takes a Stage and a URL; this is run on the stage of the first URL downloaded
  • keep_stage -- whether to keep staging area when command completes
  • batch -- whether to ask user how many versions to fetch (false) or fetch all versions (true)
  • fetch_options -- options used for the fetcher (such as timeout or cookies)
  • concurrency -- maximum number of workers to use for retrieving archives

A dictionary mapping each version to the corresponding checksum




Remove all build directories in the top-level stage path.

spack.store module

Components that manage Spack's installation tree.

An install tree, or "build store" consists of two parts:

1.
A package database that tracks what is installed.
2.
A directory layout that determines how the installations are laid out.



The store contains all the install prefixes for packages installed by Spack. The simplest store could just contain prefixes named by DAG hash, but we use a fancier directory layout to make browsing the store and debugging easier.

default installation root, relative to the Spack install path

Bases: SpackError

Error occurring when trying to match specs in store against a constraint



Bases: object

A store is a path full of installed Spack packages.

Stores consist of packages installed according to a DirectoryLayout, along with a database of their contents.

The directory layout controls what paths look like and how Spack ensures that each unique spec gets its own unique directory (or not, though we don't recommend that).

The database is a single file that caches metadata for the entire Spack installation. It prevents us from having to spider the install tree to figure out what's there.

The store is also able to lock installation prefixes, and to mark installation failures.

  • root -- path to the root of the install tree
  • unpadded_root -- path to the root of the install tree without padding. The sbang script has to be installed here to work with padded roots
  • projections -- expression according to guidelines that describes how to construct a path to a package prefix in this store
  • hash_length -- length of the hashes used in the directory layout. Spec hash suffixes will be truncated to this length
  • upstreams -- optional list of upstream databases
  • lock_cfg -- lock configuration for the database


Convenience function to reindex the store DB with its own layout.


Create a store from the configuration passed as input.
configuration -- configuration to create a store.


Ensures the lazily evaluated singleton is created

Returns a list of specs matching the constraints passed as inputs.

At least one spec per constraint must match, otherwise the function will error with an appropriate message.

By default, this function queries the current store, but a custom query function can be passed to hit any other source of concretized specs (e.g. a binary cache).

The query function must accept a spec as its first argument.

  • constraints -- spec(s) to be matched against installed packages
  • multiple -- if True multiple matches per constraint are admitted
  • query_fn (Callable) -- query function to get matching specs. By default, spack.store.STORE.db.query
  • **kwargs -- keyword arguments forwarded to the query function
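A hedged usage sketch, assuming an initialized Spack store and that this is the module-level find function documented above:

import spack.store
from spack.spec import Spec

# Sketch: query the store for installed specs matching a constraint,
# admitting multiple matches per constraint.
matches = spack.store.find([Spec("zlib")], multiple=True)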



Parse config settings and return values relevant to the store object.
config_dict (dict) -- dictionary of config values, as returned from spack.config.get('config')
the install tree root (before padding was applied), and the projections for the install tree

Return type
(tuple)

Encapsulate backwards compatibility capabilities for install_tree and deprecated values that are now parsed as part of install_tree.


Restore globals to the same state they would have at start-up. Return a token containing the state of the store before reinitialization.

Restore the environment from a token returned by reinitialize

Same as find but reads the query from a spec file.
  • filename -- YAML or JSON file from which to read the query.
  • **kwargs -- keyword arguments forwarded to "find"



Use the store passed as argument within the context manager.
  • path -- path to the store.
  • extra_data -- extra configuration under "config:install_tree" to be taken into account.

Store object associated with the context manager's store


spack.subprocess_context module

This module handles transmission of Spack state to child processes started using the 'spawn' start method. Notably, installations are performed in a subprocess and require transmitting the Package object (in such a way that the repository is available for importing when it is deserialized); installations performed in Spack unit tests may include additional modifications to global state in memory that must be replicated in the child process.

Bases: object

Captures the in-memory process state of a package installation that needs to be transmitted to a child process.





Bases: object

Spack tests may modify, in memory, state that is normally read from disk; this object is responsible for properly serializing that state so it can be applied to a subprocess. This isn't needed outside of a testing environment, but the logic is designed to behave the same inside or outside of tests.







spack.tag module

Classes and functions to manage package tags

Bases: Mapping

Maps tags to list of packages.

Return a deep copy of this index.


Returns all packages associated with the tag.

Merge another tag index into this one.
other (TagIndex) -- tag index to be merged




Updates a package in the tag index.
pkg_name (str) -- name of the package to be updated in the index



Bases: SpackError

Raised when there is a problem with a TagIndex.


Returns a dict, indexed by tag, containing lists of names of packages with that tag, or, if no tags are given, entries for all available tags.
  • tags (list or None) -- list of tags of interest, or None for all
  • installed (bool) -- True to include only names of installed packages; False for all packages with the tag
  • skip_empty (bool) -- True to exclude tags with no associated packages; False to include entries for all tags, even those with no tagged packages



spack.target module

Bases: object


Returns the flags needed to optimize for this target using the compiler passed as argument.
compiler (spack.spec.CompilerSpec or spack.compiler.Compiler) -- object that contains both the name and the version of the compiler we want to use


Returns a dict or a value representing the current target.

String values are used to keep backward compatibility with generic targets, like e.g. x86_64 or ppc64. More specific micro-architectures will return a dictionary which contains information on the name, features, vendor, generation and parents of the current target.



spack.tengine module

Bases: object

Base class for context classes that are used with the template engine.


Returns a dictionary containing all the context properties.


Bases: type

Meta class for Context. It helps reduce the boilerplate in client code.

Decorator that adds a function name to the list of new context properties, and then returns a property.


A saner way to use the decorator
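A hedged sketch of a Context subclass using this decorator (names are illustrative):

import spack.tengine as tengine

class GreetingContext(tengine.Context):
    # context_property exposes the method as a template variable
    @tengine.context_property
    def greeting(self):
        return "hello world"

env = tengine.make_environment()  # configured Jinja2 environment
env.from_string("{{ greeting }}").render(**GreetingContext().to_dict())
# -> 'hello world'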

Encloses each line of text in curly braces

Returns a configured environment for template rendering.

Prepends a token to each line in text

Quotes each line in text

spack.traverse module

Generator that yields edges from the DAG, starting from a list of root specs.
  • specs (list) -- List of root specs (considered to be depth 0)
  • root (bool) -- Yield the root nodes themselves
  • order (str) -- What order of traversal to use in the DAG. For depth-first search this can be pre or post. For BFS this should be breadth. For topological order use topo
  • cover (str) -- Determines how extensively to cover the dag. Possible values: nodes -- Visit each unique node in the dag only once. edges -- If a node has been visited once but is reached along a new path, it's accepted, but not recursively followed. This traverses each 'edge' in the DAG once. paths -- Explore every unique path reachable from the root. This descends into visited subtrees and will accept nodes multiple times if they're reachable by multiple paths.
  • direction (str) -- children or parents. If children, does a traversal of this spec's children. If parents, traverses upwards in the DAG towards the root.
  • deptype -- allowed dependency types
  • depth (bool) -- When False, yield just edges. When True yield the tuple (depth, edge), where depth corresponds to the depth at which edge.spec was discovered.
  • key -- function that takes a spec and outputs a key for uniqueness test.
  • visited (set or None) -- a set of nodes not to follow

A generator that yields DependencySpec if depth is False or a tuple of (depth, DependencySpec) if depth is True.
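A hedged usage sketch (roots is assumed to be a list of Spec objects):

import spack.traverse as traverse

# Sketch: breadth-first traversal over the unique nodes reachable from
# the roots, yielding one edge per node.
for edge in traverse.traverse_edges(roots, order="breadth", cover="nodes"):
    print(edge.parent, "->", edge.spec)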


Generator that yields specs from the DAG, starting from a list of root specs.
  • specs (list) -- List of root specs (considered to be depth 0)
  • root (bool) -- Yield the root nodes themselves
  • order (str) -- What order of traversal to use in the DAG. For depth-first search this can be pre or post. For BFS this should be breadth.
  • cover (str) -- Determines how extensively to cover the dag. Possible values: nodes -- Visit each unique node in the dag only once. edges -- If a node has been visited once but is reached along a new path, it's accepted, but not recursively followed. This traverses each 'edge' in the DAG once. paths -- Explore every unique path reachable from the root. This descends into visited subtrees and will accept nodes multiple times if they're reachable by multiple paths.
  • direction (str) -- children or parents. If children, does a traversal of this spec's children. If parents, traverses upwards in the DAG towards the root.
  • deptype -- allowed dependency types
  • depth (bool) -- When False, yield just edges. When True yield the tuple (depth, edge), where depth corresponds to the depth at which edge.spec was discovered.
  • key -- function that takes a spec and outputs a key for uniqueness test.
  • visited (set or None) -- a set of nodes not to follow

By default Spec, or a tuple (depth, Spec) if depth is set to True.


Generator that yields (depth, DependencySpec) tuples in the depth-first pre-order, so that a tree can be printed from it.
  • specs (list) -- List of root specs (considered to be depth 0)
  • cover (str) -- Determines how extensively to cover the dag. Possible values: nodes -- Visit each unique node in the dag only once. edges -- If a node has been visited once but is reached along a new path, it's accepted, but not recursively followed. This traverses each 'edge' in the DAG once. paths -- Explore every unique path reachable from the root. This descends into visited subtrees and will accept nodes multiple times if they're reachable by multiple paths.
  • deptype -- allowed dependency types
  • key -- function that takes a spec and outputs a key for uniqueness test.
  • depth_first (bool) -- Explore the tree in depth-first or breadth-first order. When setting depth_first=True and cover=nodes, each spec only occurs once at the shallowest level, which is useful when rendering the tree in a terminal.

A generator that yields (depth, DependencySpec) tuples in such an order that a tree can be printed.


spack.url module

This module has methods for parsing names and versions of packages from URLs. The idea is to allow package creators to supply nothing more than the download location of the package, and figure out version and name information from there.

Example: when spack is given the following URL:


It can figure out that the package name is hdf, and that it is at version 4.2.12. This is useful for making the creation of packages simple: a user just supplies a URL and skeleton code is generated automatically.

Spack can also figure out that it can most likely download 4.2.6 at this URL:


This is useful if a user asks for a package at a particular version number; spack doesn't need anyone to tell it where to get the tarball even though it's never been told about that version before.
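For instance, a hedged sketch (the URL is illustrative, not a real download location):

import spack.url

name, version = spack.url.parse_name_and_version(
    "https://example.com/downloads/hdf-4.2.12.tar.gz"
)
# name == 'hdf'; version compares equal to '4.2.12'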

Bases: UrlParseError

Raised when we can't parse a package name from a string.


Bases: UrlParseError

Raised when we can't parse a version from a string.


Bases: SpackError

Raised when the URL module can't parse something correctly.


Color the parts of the url according to Spack's parsing.
Cyan: The version found by parse_version_offset().
Red:  The name found by parse_name_offset().

Green:   Instances of version string from substitute_version().
Magenta: Instances of the name (protected from substitution).


  • path (str) -- The filename or URL for the package
  • errors (bool) -- Append parse errors at end of string.
  • subs (bool) -- Color substitutions as well as parsed name/version.



Returns a list containing the indices of every occurrence of substring in string.

Scrape web pages for new versions of a tarball. This function prefers URLs in the following order: links found on the scraped page that match a URL generated by the reference package; links found that are in the archive_urls list; links found that are derived from those in the archive_urls list. If none are found for a version, the item in the archive_urls list is included for that version.
  • archive_urls (str or list or tuple) -- URL or sequence of URLs for different versions of a package. Typically these are just the tarballs from the package file itself. By default, this searches the parent directories of archives.
  • list_url (str or None) -- URL for a listing of archives. Spack will scrape these pages for download links that look like the archive URL.
  • list_depth (int) -- max depth to follow links on list_url pages. Defaults to 0.
  • concurrency (int) -- maximum number of concurrent requests
  • reference_package (spack.package_base.PackageBase or None) -- a spack package used as a reference for url detection. Uses the url_for_version method on the package to produce reference urls which, if found, are preferred.



Try to determine the name of a package from its filename or URL.
  • path (str) -- The filename or URL for the package
  • ver (str) -- The version of the package

The name of the package
Return type
str
UndetectableNameError -- If the URL does not match any regexes


Try to determine the name of a package and extract its version from its filename or URL.
path (str) -- The filename or URL for the package
a tuple containing the package (name, version)
Return type
tuple
  • UndetectableVersionError -- If the URL does not match any regexes
  • UndetectableNameError -- If the URL does not match any regexes



Try to determine the name of a package from its filename or URL.
  • path (str) -- The filename or URL for the package
  • v (str) -- The version of the package

name of the package, first index of name, length of name, the index of the matching regex, the matching regex

Return type
tuple
UndetectableNameError -- If the URL does not match any regexes


Try to extract a version string from a filename or URL.
path (str) -- The filename or URL for the package
The version of the package
Return type
spack.version.Version
UndetectableVersionError -- If the URL does not match any regexes


Try to extract a version string from a filename or URL.
path (str) -- The filename or URL for the package
version of the package, first index of version, length of version string, the index of the matching regex, the matching regex

Return type
tuple
UndetectableVersionError -- If the URL does not match any regexes


Most tarballs contain a package name followed by a version number. However, some also contain extraneous information in-between the name and version:
  • rgb-1.0.6
  • converge_install_2.3.16
  • jpegsrc.v9b

These strings are not part of the package name and should be ignored. This function strips the version number and any extraneous suffixes off and returns the remaining string. The goal is that the name is always the last thing in path:

  • rgb
  • converge
  • jpeg

  • path (str) -- The filename or URL for the package
  • version (str) -- The version detected for this URL

The path with any extraneous suffixes removed
Return type
str


Given a URL or archive name, find the version in the path and substitute the new version for it. Replace all occurrences of the version if they don't overlap with the package name.
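For instance, a hedged sketch with an illustrative URL:

import spack.url

spack.url.substitute_version(
    "http://example.com/libelf-0.8.9.tar.gz", "0.8.13"
)
# -> 'http://example.com/libelf-0.8.13.tar.gz'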


This returns offsets for substituting versions and names in the provided path. It is a helper for substitute_version().

Find the version in the supplied path, and return a regular expression that will match this path with any version in its place.

spack.user_environment module

List of environment (shell) modifications to be processed for spec.

This list is specific to the location of the spec or its projection in the view.

  • specs -- spec(s) for which to list the environment modifications
  • view -- view associated with the spec passed as first argument
  • set_package_py_globals -- whether or not to set the global variables in the package.py files (this may be problematic when using buildcaches that have been built on a different but compatible OS)



Get list of prefix inspections for platform
platform (str) -- the name of the platform to consider. The platform determines what environment variables Spack will use for some inspections.
A dictionary mapping subdirectory names to the environment variables to modify with that directory, if it exists.



Temporarily replace every Spec's prefix with projection(s)

Environment variable name Spack uses to track individually loaded packages

List of environment (shell) modifications to be processed for view.

This list does not depend on the specs in this environment


spack.variant module

The variant module contains data structures that are needed to manage variants both in packages and in specs.

Bases: object

A variant that has not yet decided who it wants to be. It behaves like a multi-valued variant which could do things.

This kind of variant is generated during parsing of expressions like foo=bar and differs from multi-valued variants because it will satisfy any other variant with the same name. This is because it could do it if it grows up to be a multi-valued variant with the right set of values.

Returns True if self and other are compatible, False otherwise.

As there is no semantic check, two VariantSpecs are compatible if either they contain the same value or they are both multi-valued.

other -- instance against which we test compatibility
True or False
Return type
bool


Modify self to match all the constraints for other if both instances are multi-valued. Returns True if self changed, False otherwise.
other -- instance against which we constrain self
True or False
Return type
bool


Returns an instance of a variant equivalent to self
a copy of self
Return type
AbstractVariant

>>> a = MultiValuedVariant('foo', True)
>>> b = a.copy()
>>> assert a == b
>>> assert a is not b

Reconstruct a variant from a node dict.

Returns True if there are variants matching both self and other, False otherwise.

Returns true if other.name == self.name, because any value that other holds and is not in self yet could be added.
other -- constraint to be met for the method to return True
True or False
Return type
bool


Returns a tuple of strings containing the values stored in the variant.
values stored in the variant
Return type
tuple


Returns a key, value tuple suitable to be an entry in a yaml dict.
(name, value_representation)
Return type
tuple



Bases: SingleValuedVariant

A variant that can hold either True or False.

BoolValuedVariant can also hold the value '*', for coerced comparisons between foo=* and +foo or ~foo.


Bases: Sequence

Allows combinations from one of many mutually exclusive sets.

The value ('none',) is reserved to denote the empty set and therefore no other set can contain the item 'none'.

*sets (list) -- mutually exclusive sets of values

Adds the empty set to the current list of disjoint sets.

Attribute used to track values which correspond to features which can be enabled or disabled as understood by the package's build system.

Removes the empty set from the current list of disjoint sets.


Sets the default value and returns self.

Sets the error message format and returns self.

Marks a few values as not being tied to a feature.


Bases: SpecError

Raised when the same variant occurs in a spec twice.


Bases: SpecError

Raised if the wrong validator is used to validate a variant.


Bases: SpecError

Raised when an invalid conditional variant is specified.


Bases: SpecError

Raised when a variant has values '*' or 'none' with other values.


Bases: SpecError

Raised when a valid variant has at least an invalid value.


Bases: AbstractVariant

A variant that can hold multiple values at once.

Add another value to this multi-valued variant.

Returns true if other.name == self.name and other.value is a strict subset of self. Does not try to validate.
other -- constraint to be met for the method to return True
True or False
Return type
bool



Bases: SpecError, ValueError

Raised when multiple values are present in a variant that wants only one.


Bases: AbstractVariant

A variant that can hold multiple values, but one at a time.

Returns True if self and other are compatible, False otherwise.

As there is no semantic check, two VariantSpec instances are compatible if either they contain the same value or they are both multi-valued.

other -- instance against which we test compatibility
True or False
Return type
bool


Modify self to match all the constraints for other if both instances are multi-valued. Returns True if self changed, False otherwise.
other -- instance against which we constrain self
True or False
Return type
bool


Returns True if there are variants matching both self and other, False otherwise.

Returns True if other.name == self.name, because any value that other holds and is not in self yet could be added.
other -- constraint to be met for the method to return True
True or False
Return type
bool


Returns a key, value tuple suitable to be an entry in a yaml dict.
(name, value_representation)
Return type
tuple



Bases: SpecError

Raised when an unknown variant occurs in a spec.


Bases: UnsatisfiableSpecError

Raised when a spec variant conflicts with package constraints.


Bases: object

Conditional value that might be used in variants.


Bases: object

Represents a variant in a package, as declared in the variant directive.

Returns a string representation of the allowed values for printing purposes
representation of the allowed values
Return type
str


Factory that creates a variant holding the default value.
instance of the proper variant
Return type
MultiValuedVariant or SingleValuedVariant or BoolValuedVariant


Factory that creates a variant holding the value passed as a parameter.
value -- value that will be held by the variant
instance of the proper variant
Return type
MultiValuedVariant or SingleValuedVariant or BoolValuedVariant


Validate a variant spec against this package variant. Raises an exception if any error is found.
  • vspec (Variant) -- instance to be validated
  • pkg_cls (spack.package_base.PackageBase) -- the package class that required the validation, if available

  • InconsistentValidationError -- if vspec.name != self.name
  • MultipleValuesInExclusiveVariantError -- if vspec has
    multiple values but self.multi == False
  • InvalidVariantValueError -- if vspec.value contains
    invalid values



Proper variant class to be used for this configuration.


Bases: HashableMap

Map containing variant instances. New values can be added only if the key is not already present.

Returns True if the spec is concrete in terms of variants.
True or False
Return type
bool


Add all variants in other that aren't in self to self. Also constrain all multi-valued variants that are already present. Return True if self changed, False otherwise.
other (VariantMap) -- instance against which we constrain self
True or False
Return type
bool


Return an instance of VariantMap equivalent to self.
a copy of self
Return type
VariantMap





Substitutes the entry under vspec.name with vspec.
vspec -- variant spec to be substituted



Multi-valued variant that allows any combination of the specified values, and also allows the user to specify 'none' (as a string) to choose none of them.

It is up to the package implementation to handle the value 'none' specially, if at all.

*values -- allowed variant values
a properly initialized instance of DisjointSetsOfValues


Multi-valued variant that allows any combination of a set of values (but not the empty set) or 'auto'.
*values -- allowed variant values
a properly initialized instance of DisjointSetsOfValues


Conditional values that can be used in variant declarations.

Multi-valued variant that allows any combination picking from one of multiple disjoint sets of values, and also allows the user to specify 'none' (as a string) to choose none of them.

It is up to the package implementation to handle the value 'none' specially, if at all.

*sets --
a properly initialized instance of DisjointSetsOfValues


Converts other to type(self) and calls method(self, other)
method -- any predicate method that takes another variant as an argument

Returns: decorated method


Uses the information in spec.package to turn any variant that needs it into a SingleValuedVariant.

This method is best effort. All variants that can be substituted will be substituted before any error is raised.

spec -- spec on which to operate the substitution


spack.verify module








LLNL PACKAGE

Subpackages

llnl.util package

Subpackages

llnl.util.tty package

Bases: object

Class for disabling output in a scope using the 'with' keyword










Draw a labeled horizontal line.
  • char (str) -- Char to draw the line with. Default '-'
  • max_width (int) -- Maximum width of the line. Default is 64 chars.









Context manager that applies a filter to all output.

Gives the file and line of the frame 'countback' frames from the bottom









Gets the dimensions of the console: (rows, cols).




Submodules

llnl.util.tty.colify module

Routines for printing columnar output. See colify() for more information.



Takes a list of elements as input and finds a good columnization of them, similar to how gnu ls does. This supports both uniform-width and variable-width (tighter) columns.

If elts is not a list of strings, each element is first converted using str().

  • output -- A file object to write to. Default is sys.stdout
  • indent -- Optionally indent all columns by some number of spaces
  • padding -- Spaces between columns. Default is 2
  • width -- Width of the output. Default is 80 if tty not detected
  • cols -- Force number of columns. Default is to size to terminal, or single-column if no tty
  • tty -- Whether to attempt to write to a tty. Default is to autodetect a tty. Set to False to force single-column output
  • method -- Method to use to fit columns. Options are variable or uniform. Variable-width columns are tighter, uniform columns are all the same width and fit less data on the screen
  • console_cols -- number of columns on this console (default: autodetect)



Version of colify() for data expressed in rows, (list of lists).

Same as regular colify but:

1. This takes a list of lists, where each sub-list must be the same length, and each is interpreted as a row in a table. Regular colify displays a sequential list of values in columns.
2. Regular colify will always print with 1 column when the output is not a tty. This will always print with the same dimensions as the table argument.


Uniform-width column fitting algorithm.

Determines the longest element in the list, and determines how many columns of that width will fit on screen. Returns a corresponding column config.
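
As a rough sketch (assuming a hypothetical uniform_fit helper, not the actual function in this module), uniform fitting reduces to integer division on the longest element:

def uniform_fit(elts, console_width=80, padding=2):
    # every column is as wide as the longest element plus padding
    col_width = max(len(e) for e in elts) + padding
    cols = max(1, console_width // col_width)
    rows = (len(elts) + cols - 1) // cols  # ceiling division
    return cols, rows, col_width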


Variable-width column fitting algorithm.

This function determines the most columns that can fit in the screen width. Unlike uniform fitting, where all columns take the width of the longest element in the list, each column takes the width of its own longest element. This packs elements more efficiently on screen.

If cols is nonzero, force the output to use exactly that many columns.


llnl.util.tty.color module

This file implements an expression syntax, similar to printf, for adding ANSI colors to text.

See colorize(), cwrite(), and cprint() for routines that can generate colored output.

colorize will take a string and replace all color expressions with ANSI control codes. If the isatty keyword arg is set to False, then the color expressions will be converted to null strings, and the returned string will have no color.

cwrite and cprint are equivalent to write() and print() calls in python, but they colorize their output. If the stream argument is not supplied, they write to sys.stdout.

Here are some example color expressions:

Expression Meaning
@r Turn on red coloring
@R Turn on bright red coloring
@*{foo} Bold foo, but don't change text color
@_{bar} Underline bar, but don't change text color
@*b Turn on bold, blue text
@_B Turn on bright blue text with an underline
@. Revert to plain formatting
@*g{green} Print out 'green' in bold, green text, then reset to plain.
@*ggreen@. Print out 'green' in bold, green text, then reset to plain.

The syntax consists of:

color-expr  = '@' [style] color-code '{' text '}' | '@.' | '@@'
style       = '*' | '_'
color-code  = [krgybmcwKRGYBMCW]
text        = .*

'@' indicates the start of a color expression. It can be followed by an optional * or _ that indicates whether the font should be bold or underlined. If * or _ is not provided, the text will be plain. Then an optional color code is supplied. This can be [krgybmcw] or [KRGYBMCW], where the letters map to black(k), red(r), green(g), yellow(y), blue(b), magenta(m), cyan(c), and white(w). Lowercase letters denote normal ANSI colors and capital letters denote bright ANSI colors.

Finally, the color expression can be followed by text enclosed in {}. If braces are present, only the text in braces is colored. If the braces are NOT present, then just the control codes to enable the color will be output. The console can be reset later to plain text with '@.'.

To output an @, use '@@'. To output a } inside braces, use '}}'.
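
To make the syntax concrete, here is a simplified, regex-based translator for a subset of the expression language (lowercase colors only, no '}}' handling). It is an illustration of the mapping to ANSI codes, not the colorize() implementation:

import re

COLORS = {'k': 30, 'r': 31, 'g': 32, 'y': 33,
          'b': 34, 'm': 35, 'c': 36, 'w': 37}

def tiny_colorize(string):
    string = string.replace('@@', '\0')       # escaped literal '@'
    string = string.replace('@.', '\033[0m')  # reset to plain

    def repl(match):
        style, color, text = match.groups()
        codes = []
        if style == '*':
            codes.append('1')   # bold
        elif style == '_':
            codes.append('4')   # underline
        if color:
            codes.append(str(COLORS[color]))
        seq = '\033[%sm' % ';'.join(codes)
        if text is not None:
            return seq + text + '\033[0m'  # braces: color only the text
        return seq                         # no braces: leave color on

    colored = re.sub(r'@(?=[*_krgybmcw])([*_])?([krgybmcw])?(?:{([^}]*)})?',
                     repl, string)
    return colored.replace('\0', '@')

print(tiny_colorize('@*g{green} and @rred@. plain'))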

Bases: Exception

Raised when a color format fails to parse.



Escapes special characters needed for color codes.

Replaces the following symbols with their equivalent literal forms:

@ @@
} }}
string (str) -- the string to escape
the string with color codes escaped
Return type
(str)


Length of extra color characters in a string

Return the length of a string, excluding ansi color sequences.
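
A minimal way to compute such a length is to strip ANSI escape sequences before calling len(); the helper name below is hypothetical:

import re

def visible_len(string):
    # length of the string once ANSI color sequences are removed
    return len(re.sub(r'\033\[[0-9;]*m', '', string))

assert visible_len('\033[1;32mgreen\033[0m') == len('green')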

Context manager to temporarily use a particular color setting.

Replace all color expressions in a string with ANSI control codes.
  • string (str) -- The string to replace
  • color (bool) -- If False, output will be plain text without control codes, for output to non-console devices.
  • enclose (bool) -- If True, enclose ansi color sequences with square brackets to prevent misestimation of terminal width.

The filtered string
Return type
str



Same as cwrite, but writes a trailing newline to the stream.

Replace all color expressions in string with ANSI control codes and write the result to the stream. If color is False, this will write plain text with no color. If True, then it will always write colored output. If not supplied, then it will be set based on stream.isatty().

Return whether commands should print color or not.

Bases: object
Returns a TTY escape sequence for a color


Set when color should be applied. Options are:
  • True or 'always': always print color
  • False or 'never': never print color
  • None or 'auto': only print color if sys.stdout is a tty.


Turns on coloring in Windows terminals by enabling virtual terminal processing (VTP) in Windows consoles (CMD/PWSH/CONHOST). Method based on https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#example-of-enabling-virtual-terminal-processing

Note: No-op on non-Windows platforms


llnl.util.tty.log module

Utility classes for logging the output of blocks of code.

Bases: object

Represents a file. Can be an open stream, a path to a file (not opened yet), or neither. When unwrapped, it returns an open file (or file-like) object.




Bases: object

Return an object which stores a file descriptor and can be passed as an argument to a function run with multiprocessing.Process, such that the file descriptor is available in the subprocess.




Bases: object

Wrapper class to handle redirection of io streams

Redirect back to the original system stream, and close stream


Redirect stdout to the given file descriptor.


Bases: object

Wrapper for Python streams that forces them to be unbuffered.

This is implemented by forcing a flush after each write.
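
The flush-after-write idea can be sketched with a small delegating wrapper (hypothetical name, not the exact class in this module):

class UnbufferedStream:
    def __init__(self, stream):
        self.stream = stream

    def write(self, data):
        self.stream.write(data)
        self.stream.flush()  # force the flush after every write

    def writelines(self, lines):
        self.stream.writelines(lines)
        self.stream.flush()

    def __getattr__(self, attr):
        # delegate everything else to the wrapped stream
        return getattr(self.stream, attr)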





Context manager to temporarily ignore a signal.

Bases: object

Context manager to disable line editing and echoing.

Use this with sys.stdin for keyboard input, e.g.:

with keyboard_input(sys.stdin) as kb:
    while True:
        kb.check_fg_bg()
        r, w, x = select.select([sys.stdin], [], [])
        # ... do something with keypresses ...


The keyboard_input context manager disables canonical (line-based) input and echoing, so that keypresses are available on the stream immediately, and they are not printed to the terminal. Typically, standard input is line-buffered, which means keypresses won't be sent until the user hits return. In this mode, a user can hit, e.g., 'v', and it will be read on the other end of the pipe immediately but not printed.

The handler takes care to ensure that terminal changes only take effect when the calling process is in the foreground. If the process is backgrounded, canonical mode and echo are re-enabled. They are disabled again when the calling process comes back to the foreground.

This context manager works through a single signal handler for SIGTSTP, along with a polling routine called check_fg_bg(). Here are the relevant states, transitions, and POSIX signals:

[Running] -------- Ctrl-Z sends SIGTSTP ------------.
[ in FG ] <------- fg sends SIGCONT --------------. |
   ^                                              | |
   | fg (no signal)                               | |
   |                                              | v
[Running] <------- bg sends SIGCONT ---------- [Stopped]
[ in BG ]                                      [ in BG ]


We handle all transitions except for SIGTSTP generated by Ctrl-Z by periodically calling check_fg_bg(). This routine notices if we are in the background with canonical mode or echo disabled, or if we are in the foreground with canonical mode or echo still enabled, and it fixes the terminal settings in response.

check_fg_bg() works in all cases except when the process is stopped with SIGTSTP. We cannot rely on a periodic timer in this case, as it may not run before the process stops. We therefore restore terminal settings in the SIGTSTP handler.

Additional notes:

We mostly use polling here instead of a SIGALRM timer or a thread. This is to avoid the complexities of many interrupts, which seem to make system calls (like I/O) unreliable in older Python versions (2.6 and 2.7); see the upstream issue reports for details.

There are essentially too many ways for asynchronous signals to go wrong if we also have to support older Python versions, so we opt not to use them.

  • SIGSTOP can stop a process (in the foreground or background), but it can't be caught. Because of this, we can't fix any terminal settings on SIGSTOP, and the terminal will be left with ICANON and ECHO disabled until it resumes execution.
  • Technically, a process could be sent SIGTSTP while running in the foreground, without the shell backgrounding that process. This doesn't happen in practice, and we assume that SIGTSTP always means that defaults should be restored.
  • We rely on termios support. Without it, or if the stream isn't a TTY, keyboard_input has no effect.



Context manager that logs its output to a file.

In the simplest case, the usage looks like this:

with log_output('logfile.txt'):
    # do things ... output will be logged


Any output from the with block will be redirected to logfile.txt. If you also want the output to be echoed to stdout, use the echo parameter:

with log_output('logfile.txt', echo=True):
    # do things ... output will be logged and printed out


The following is available on Unix only (it is a no-op on Windows). If you just want to echo some output from the parent, use force_echo:

with log_output('logfile.txt', echo=False) as logger:
    # do things ... output will be logged
    with logger.force_echo():
        # things here will be echoed *and* logged


See individual log classes for more information.

This method is actually a factory that serves the appropriate per-platform (Unix vs. Windows) log_output class.


Bases: object

Under the hood, we spawn a daemon and set up a pipe between this process and the daemon. The daemon writes our output to both the file and to stdout (if echoing). The parent process can communicate with the daemon to tell it when and when not to echo; this is what force_echo does. You can also enable/disable echoing by typing 'v'.

We try to use OS-level file descriptors to do the redirection, but if stdout or stderr has been set to some Python-level file object, we use Python-level redirection instead. This allows the redirection to work within test frameworks like nose and pytest.

Context manager to force local echo, even if echo is off.


Replace the current environment (os.environ) with env.

If env is empty (or None), this unsets all current environment variables.


Bases: object

Similar to nixlog, with underlying functionality ported to support Windows.

Does not support the use of 'v' toggling as nixlog does.

Context manager to force local echo, even if echo is off.


llnl.util.tty.pty module

The pty module handles pseudo-terminals.

Currently, the infrastructure here is only used to test llnl.util.tty.log.

If this is used outside a testing environment, we will want to reconsider things like timeouts in ProcessController.wait(), which are set to get tests done quickly, not to avoid high CPU usage.

Note: The functionality in this module is unsupported on Windows

Bases: object

Wrapper around some fundamental process control operations.

This allows one process (the controller) to drive another (the minion) similar to the way a shell would, by sending signals and I/O.

True if pgid is in a background pgroup of controller_fd's tty.




Get echo and canon attributes of the terminal of controller_fd.

Labeled horizontal line for debugging.

True if keyboard input is enabled on the controller_fd pty.


Print debug message with status info for the minion.

Send SIGTSTP to the controlled process.









Bases: object

Sets up controller and minion processes with a PTY.

You can create a PseudoShell if you want to test how some function responds to terminal input. This is a pseudo-shell from a job control perspective; controller_function and minion_function are set up with a pseudoterminal (pty) so that the controller can drive the minion through process control signals and I/O.

The two functions should have signatures like this:

def controller_function(proc, ctl, **kwargs)
def minion_function(**kwargs)


controller_function is spawned in its own process and passed three arguments:
  • the multiprocessing.Process object representing the minion
  • a ProcessController object tied to the minion
  • keyword arguments passed from PseudoShell.start()

minion_function is only passed kwargs delegated from PseudoShell.start().

The ProcessController's controller_fd will be connected to sys.stdin in the minion process. Both processes will share the same sys.stdout and sys.stderr as the process instantiating PseudoShell.

Here are the relationships between processes created:

._________________________________________________________.
| Minion Process                                          | pid     2
| - runs minion_function                                  | pgroup  2
|_________________________________________________________| session 1
    ^
    | create process with controller_fd connected to stdin
    | stdout, stderr are the same as caller
._________________________________________________________.
| Controller Process                                      | pid     1
| - runs controller_function                              | pgroup  1
| - uses ProcessController and controller_fd to           | session 1
|   control minion                                        |
|_________________________________________________________|
    ^
    | create process
    | stdin, stdout, stderr are the same as caller
._________________________________________________________.
| Caller                                                  | pid     0
| - Constructs, starts, joins PseudoShell                 | pgroup  0
| - provides controller_function, minion_function         | session 0
|_________________________________________________________|


Wait for the minion process to finish, and return its exit code.

Start the controller and minion processes.
kwargs (dict) -- arbitrary keyword arguments that will be passed to controller and minion functions

The controller process will create the minion, then call controller_function. The minion process will call minion_function.



Submodules

llnl.util.argparsewriter module

Bases: ArgparseWriter

Write argparse output as rst sections.

Text to print before a command.
prog -- Program name.
Text before a command.


Text to print before optional arguments.
Optional arguments header.


Text to print before positional arguments.
Positional arguments header.


Table with links to other subcommands.
subcommands -- List of subcommands.
Subcommand linking text.


Description of a command.
description -- Command description.
Description of a command.


Text to print after optional arguments.
Optional arguments footer.


Text to print after positional arguments.
Positional arguments footer.


Return the string representation of a single node in the parser tree.
cmd -- Parsed information about a command or subcommand.
String representation of a node.


Description of an optional argument.
  • opts -- Optional argument.
  • help -- Help text.

Optional argument description.


Description of a positional argument.
  • name -- Argument name.
  • help -- Help text.

Positional argument description.


Example usage of a command.
usage -- Command usage.
Usage of a command.



Bases: HelpFormatter, ABC

Analyze an argparse ArgumentParser for easy generation of help.

Return the string representation of a single node in the parser tree.

Override this in subclasses to define how each subcommand should be displayed.

cmd -- Parsed information about a command or subcommand.
String representation of this subcommand.


Parse the parser object and return the relevant components.
  • parser -- Command parser.
  • prog -- Program name.

Information about the command from the parser.


Write out details about an ArgumentParser.
parser -- Command parser.



Bases: object

Parsed representation of a command from argparse.

This is a single command from an argparse parser. ArgparseWriter creates these and returns them from parse(), and it passes one of these to each call to format() so that we can take an action for a single command.


llnl.util.filesystem module

Bases: object

Base class and interface for visit_directory_tree().

Called after recursion into rel_path has finished. This function is not called when rel_path was not recursed into.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current directory from root
  • depth (int) -- depth of current directory from the root directory



Called after recursion into rel_path has finished. This function is not called when rel_path was not recursed into.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current symlink from root
  • depth (int) -- depth of current symlink from the root directory



Return True from this function to recurse into the directory at os.path.join(root, rel_path). Return False in order not to recurse further.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current directory from root
  • depth (int) -- depth of current directory from the root directory

True when the directory should be recursed into. False when not
Return type
bool


Return True to recurse into the symlinked directory and False in order not to. Note: rel_path is the path to the symlink itself. Following symlinked directories blindly can cause infinite recursion due to cycles.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current symlink from root
  • depth (int) -- depth of current symlink from the root directory

True when the directory should be recursed into. False when not
Return type
bool


Handle the non-symlink file at os.path.join(root, rel_path)
  • root (str) -- root directory
  • rel_path (str) -- relative path to current file from root
  • depth (int) -- depth of current file from the root directory



Handle the symlink to a file at os.path.join(root, rel_path). Note: rel_path is the location of the symlink, not of the file it points to. The symlink may be dangling.
  • root (str) -- root directory
  • rel_path (str) -- relative path to current symlink from root
  • depth (int) -- depth of current symlink from the root directory





Bases: Sequence

Sequence of absolute paths to files.

Provides a few convenience methods to manipulate file paths.

Stable de-duplication of the base-names in the list

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir3/liba.a'])
>>> l.basenames
['liba.a', 'libb.a']
>>> h = HeaderList(['/dir1/a.h', '/dir2/b.h', '/dir3/a.h'])
>>> h.basenames
['a.h', 'b.h']
    
A list of base-names
Return type
list


Stable de-duplication of the directories where the files reside.

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/libc.a'])
>>> l.directories
['/dir1', '/dir2']
>>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h'])
>>> h.directories
['/dir1', '/dir2']
    
A list of directories
Return type
list




Bases: FileList

Sequence of absolute paths to headers.

Provides a few convenience methods to manipulate header paths and get commonly used compiler flags or names.

Add a macro definition
macro (str) -- The macro to add


Include flags + macro definitions

>>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h'])
>>> h.cpp_flags
'-I/dir1 -I/dir2'
>>> h.add_macro('-DBOOST_DYN_LINK')
>>> h.cpp_flags
'-I/dir1 -I/dir2 -DBOOST_DYN_LINK'
    
A joined list of include flags and macro definitions
Return type
str


Directories to be searched for header files.

Stable de-duplication of the headers.
A list of header files
Return type
list


Include flags

>>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h'])
>>> h.include_flags
'-I/dir1 -I/dir2'
    
A joined list of include flags
Return type
str



Macro definitions

>>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h'])
>>> h.add_macro('-DBOOST_LIB_NAME=boost_regex')
>>> h.add_macro('-DBOOST_DYN_LINK')
>>> h.macro_definitions
'-DBOOST_LIB_NAME=boost_regex -DBOOST_DYN_LINK'
    
A joined list of macro definitions
Return type
str


Stable de-duplication of header names in the list without extensions

>>> h = HeaderList(['/dir1/a.h', '/dir2/b.h', '/dir3/a.h'])
>>> h.names
['a', 'b']
    
A list of files without extensions
Return type
list



Bases: FileList

Sequence of absolute paths to libraries

Provides a few convenience methods to manipulate library paths and get commonly used compiler flags or names

Search flags + link flags

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/liba.so'])
>>> l.ld_flags
'-L/dir1 -L/dir2 -la -lb'
    
A joined list of search flags and link flags
Return type
str


Stable de-duplication of library files.
A list of library files
Return type
list


Link flags for the libraries

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/liba.so'])
>>> l.link_flags
'-la -lb'
    
A joined list of link flags
Return type
str


Stable de-duplication of library names in the list

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir3/liba.so'])
>>> l.names
['a', 'b']
    
A list of library names
Return type
list


Search flags for the libraries

>>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/liba.so'])
>>> l.search_flags
'-L/dir1 -L/dir2'
    
A joined list of search flags
Return type
str



Get the nth ancestor of a directory.

True if we have read/write access to the file.

Find all sed search/replace commands and change the delimiter.

e.g., if the file contains seds that look like 's///', you can call change_sed_delimiter('/', '@', file) to change the delimiter to '@'.

Note that this routine will fail if the delimiter is ' or ". Handling those is left for future work.

  • old_delim (str) -- The delimiter to search for
  • new_delim (str) -- The delimiter to replace with
  • *filenames -- One or more files to search and replace



Implement the bash chgrp function on a single path

Implements chmod, treating all executable bits as set using the chmod utility's +X option.

Copy the file(s) src to the file or directory dest.

If dest specifies a directory, the file will be copied into dest using the base filename from src.

src may contain glob characters.

  • src (str) -- the file(s) to copy
  • dest (str) -- the destination file or directory
  • _permissions (bool) -- for internal use only

  • IOError -- if src does not match any files or directories
  • ValueError -- if src matches multiple files but dest is
    not a directory



Set the mode of dest to that of src unless it is a link.

Recursively copy an entire directory tree rooted at src.

If the destination directory dest does not already exist, it will be created as well as missing parent directories.

src may contain glob characters.

If symlinks is true, symbolic links in the source tree are represented as symbolic links in the new tree and the metadata of the original links will be copied as far as the platform allows; if false, the contents and metadata of the linked files are copied to the new tree.

If ignore is set, then each path relative to src will be passed to this function; the function returns whether that path should be skipped.

  • src (str) -- the directory to copy
  • dest (str) -- the destination directory
  • symlinks (bool) -- whether or not to preserve symlinks
  • allow_broken_symlinks (bool) -- whether or not to allow broken (dangling) symlinks. On Windows, setting this to True will raise an exception. Defaults to True on Unix.
  • ignore (Callable) -- function indicating which files to ignore
  • _permissions (bool) -- for internal use only

  • IOError -- if src does not match any files or directories
  • ValueError -- if src is a parent directory of dest



Like sed, but uses python regular expressions.

Filters every line of each file through regex and replaces the file with a filtered version. Preserves mode of filtered files.

As with re.sub, repl can be either a string or a callable. If it is a callable, it is passed the match object and should return a suitable replacement string. If it is a string, it can contain \1, \2, etc. to represent back-substitution as sed would allow.

  • regex (str) -- The regular expression to search for
  • repl (str) -- The string to replace matches with
  • *filenames -- One or more files to search and replace
  • string (bool) -- Treat regex as a plain string. Default is False
  • backup (bool) -- Make backup file(s) suffixed with ~. Default is False
  • ignore_absent (bool) -- Ignore any files that don't exist. Default is False
  • start_at (str) -- Marker used to start applying the replacements. If a text line matches this marker filtering is started at the next line. All contents before the marker and the marker itself are copied verbatim. Default is to start filtering from the first line of the file.
  • stop_at (str) -- Marker used to stop scanning the file further. If a text line matches this marker filtering is stopped and the rest of the file is copied verbatim. Default is to filter until the end of the file.



Search for files starting from the root directory.

Like GNU/BSD find but written entirely in Python.

Examples:

$ find /usr -name python


is equivalent to:

>>> find('/usr', 'python')

$ find /usr/local/bin -maxdepth 1 -name python


is equivalent to:

>>> find('/usr/local/bin', 'python', recursive=False)

Accepts any glob characters accepted by fnmatch:

Pattern Meaning
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any character not in seq
  • root (str) -- The root directory to start searching from
  • files (str or collections.abc.Sequence) -- Library name(s) to search for
  • recursive (bool) -- if False search only root folder, if True descends top-down from the root. Defaults to True.

The files that have been found
Return type
list
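
A rough sketch of this behavior using only the standard library (find_files is a hypothetical name; the real function also accepts a sequence of patterns):

import fnmatch
import os

def find_files(root, pattern, recursive=True):
    matches = []
    if recursive:
        # descend top-down from the root, like GNU/BSD find
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if fnmatch.fnmatch(name, pattern):
                    matches.append(os.path.join(dirpath, name))
    else:
        # search only the root folder itself
        for name in os.listdir(root):
            path = os.path.join(root, name)
            if fnmatch.fnmatch(name, pattern) and os.path.isfile(path):
                matches.append(path)
    return matches

find_files('/usr/local/bin', 'python', recursive=False)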


Convenience function that returns the list of all headers found in the directory passed as argument.
root (str) -- directory where to look recursively for header files
List of all headers found in root and subdirectories.


Returns an iterable object containing a list of full paths to headers if found.

Accepts any glob characters accepted by fnmatch:

Pattern Meaning
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any character not in seq
  • headers (str or list) -- Header name(s) to search for
  • root (str) -- The root directory to start searching from
  • recursive (bool) -- if False search only root folder, if True descends top-down from the root. Defaults to False.

The headers that have been found
Return type
HeaderList


Returns an iterable of full paths to libraries found in a root dir.

Accepts any glob characters accepted by fnmatch:

Pattern Meaning
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any character not in seq
  • libraries (str or list) -- Library name(s) to search for
  • root (str) -- The root directory to start searching from
  • shared (bool) -- if True searches for shared libraries, otherwise for static. Defaults to True.
  • recursive (bool) -- if False search only root folder, if True descends top-down from the root. Defaults to False.
  • runtime (bool) -- Windows only option, no-op elsewhere. If true, search for runtime shared libs (.DLL), otherwise, search for .Lib files. If shared is false, this has no meaning. Defaults to True.

The libraries that have been found
Return type
LibraryList


Searches the usual system library locations for libraries.

Search order is as follows:

1. /lib64
2. /lib
3. /usr/lib64
4. /usr/lib
5. /usr/local/lib64
6. /usr/local/lib

Accepts any glob characters accepted by fnmatch:

Pattern Meaning
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any character not in seq
  • libraries (str or list) -- Library name(s) to search for
  • shared (bool) -- if True searches for shared libraries, otherwise for static. Defaults to True.

The libraries that have been found
Return type
LibraryList


Fix install name of dynamic libraries on Darwin to have full path.

There are two parts of this task:

1. Use install_name('-id', ...) to change the install name of a single lib
2. Use install_name('-change', ...) to change the cross linking between libs. The function assumes that all libraries are in one folder and currently won't follow subfolders.

path (str) -- directory in which .dylib files are located


Remove files without printing errors. Like rm -f, does NOT remove directories.



Install the file(s) src to the file or directory dest.

Same as copy() with the addition of setting proper permissions on the installed file.

  • src (str) -- the file(s) to install
  • dest (str) -- the destination file or directory

  • IOError -- if src does not match any files or directories
  • ValueError -- if src matches multiple files but dest is
    not a directory



Recursively install an entire directory tree rooted at src.

Same as copy_tree() with the addition of setting proper permissions on the installed files and directories.

  • src (str) -- the directory to install
  • dest (str) -- the destination directory
  • symlinks (bool) -- whether or not to preserve symlinks
  • ignore (Callable) -- function indicating which files to ignore
  • allow_broken_symlinks (bool) -- whether or not to allow broken (dangling) symlinks, On Windows, setting this to True will raise an exception.

  • IOError -- if src does not match any files or directories
  • ValueError -- if src is a parent directory of dest



True if path is an executable file.


Context manager to keep the modification timestamps of the input files. Tolerates and has no effect on non-existent files and files that are deleted by the nested code.
*filenames -- one or more files that must have their modification timestamps unchanged



This generates the library filenames that may appear on any OS.

Creates a directory, as well as parent directories if needed.
  • paths -- paths to create with mkdirp
  • mode -- optional permissions to set on the created directory -- use OS default if not provided
  • group -- optional group for permissions of final created directory -- use OS default if not provided. Only used if world write permissions are not set
  • default_perms -- one of 'parents' or 'args'. The default permissions that are set for directories that are not themselves an argument for mkdirp. 'parents' means intermediate directories get the permissions of their direct parent directory, 'args' means intermediate get the same permissions specified in the arguments to mkdirp -- default value is 'args'



Split the prefixes of the path at the first occurrence of entry and return a 3-tuple containing a list of the prefixes before the entry, a string of the prefix ending with the entry, and a list of the prefixes after the entry.

If the entry is not a node in the path, the result will be the prefix list followed by an empty string and an empty list.


Returns a list containing the path and its ancestors, top-to-bottom.

The list for an absolute path will not include an os.sep entry. For example, assuming os.sep is /, given path /ab/cd/efg the resulting paths will be, in order: /ab, /ab/cd, and /ab/cd/efg

The list for a relative path starting with ./ will not include '.'. For example, path ./hi/jkl/mn results in a list with the following paths, in order: ./hi, ./hi/jkl, and ./hi/jkl/mn.

On Windows, paths will be normalized to use / and / will always be used as the separator instead of os.sep.

path (str) -- the string used to derive ancestor paths
A list containing ancestor paths in order and ending with the path
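
The examples above can be reproduced with a short sketch (hypothetical helper; it assumes the input is absolute or starts with ./, and it does not handle Windows normalization):

def ancestor_paths(path):
    parts = path.split('/')
    # fold the leading '' (absolute) or '.' (relative) component into
    # the first ancestor instead of listing it on its own
    return ['/'.join(parts[:i]) for i in range(2, len(parts) + 1)]

ancestor_paths('/ab/cd/efg')   # ['/ab', '/ab/cd', '/ab/cd/efg']
ancestor_paths('./hi/jkl/mn')  # ['./hi', './hi/jkl', './hi/jkl/mn']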


Recursively removes any dead link that is present in root.
root (str) -- path where to search for dead links


Remove all contents of a directory.

Removes the argument if it is a dead link.
path (str) -- The potential dead link


Removes a directory and its contents.

If the directory is a symlink, follows the link and removes the real directory before removing the link.

This method will force-delete files on Windows

path (str) -- Directory to be removed




Set appropriate permissions on the installed file.

Creates an empty file at the specified path.

Like touch, but creates any parent directories needed for the file.

Traverse two filesystem trees simultaneously.

Walks the LinkTree directory in pre or post order. Yields each file in the source directory with a matching path from the dest directory, along with whether the file is a directory. e.g., for this tree:

root/
    a/
        file1
        file2
    b/
        file3


When called on dest, this yields:

('root',         'dest')
('root/a',       'dest/a')
('root/a/file1', 'dest/a/file1')
('root/a/file2', 'dest/a/file2')
('root/b',       'dest/b')
('root/b/file3', 'dest/b/file3')


  • order (str) -- Whether to do pre- or post-order traversal. Accepted values are 'pre' and 'post'
  • ignore (Callable) -- function indicating which files to ignore. This will also ignore symlinks if they point to an ignored file (regardless of whether the symlink is explicitly ignored); note this only supports one layer of indirection (i.e. if you have x -> y -> z, and z is ignored but x/y are not, then y would be ignored but not x). To avoid this, make sure the ignore function also ignores the symlink paths too.
  • follow_nonexisting (bool) -- Whether to descend into directories in src that do not exist in dest. Default is True
  • follow_links (bool) -- Whether to descend into symlinks in src




Recurses the directory root depth-first through a visitor pattern using the interface from BaseDirectoryVisitor
  • root (str) -- path of directory to recurse into
  • visitor (BaseDirectoryVisitor) -- what visitor to use
  • rel_path (str) -- current relative path from the root
  • depth (int) -- current depth from the root




llnl.util.lang module

Bases: object

Null stream with less overhead than os.devnull.

See https://stackoverflow.com/a/2929954.



Bases: object

A contextmanager to capture exceptions and forward them to a GroupedExceptionHandler.


Bases: object

A generic mechanism to coalesce multiple exceptions and preserve tracebacks.

Return a contextmanager which extracts tracebacks and prefixes a message.

Print out an error message coalescing all the forwarded errors.


Bases: MutableMapping

This is a hashable, comparable dictionary. Hash is performed on a tuple of the values in the dictionary.

Type-agnostic clone method. Preserves subclass type.



Bases: object

Base class that wraps an object. Derived classes can add new behavior while staying undercover.

This class is modeled after this Stack Overflow answer: http://stackoverflow.com/a/1445289/771663



Bases: object

Simple wrapper for lazily initialized singleton objects.



Bases: MutableSequence

Base class that behaves like a list, just with a different type.

Client code can inherit from this base class:

class Foo(TypedMutableSequence):
    pass

and later perform checks based on types:

if isinstance(l, Foo):
    # do something



S.insert(index, value) -- insert value before index


Bases: TypeError

Raise when an @memoized function receives unhashable arg or kwarg values.


Ensure that a class has a required attribute.

Like dict.setdefault, but for objects.

This will return the locals of the parent of the caller. This allows a function to insert variables into its caller's scope. Yes, this is some black magic, and yes it's useful for implementing things like depends_on and provides.

Helper for making functions with kwargs. Checks whether the kwargs are empty after all of them have been popped off. If they're not, raises an error describing which kwargs are invalid.

Example:

def foo(self, **kwargs):
    x = kwargs.pop('x', None)
    y = kwargs.pop('y', None)
    z = kwargs.pop('z', None)
    check_kwargs(kwargs, self.foo)

# This raises a TypeError:
foo(w='bad kwarg')



Bases: object

Non-data descriptor to evaluate a class-level property. The function that performs the evaluation is injected at creation time and takes an instance (could be None) and an owner (i.e. the class that originated the instance).


Allows a decorator to be used with or without arguments, e.g.:

# Calls the decorator function with some args
@decorator(with, arguments, and=kwargs)


or:

# Calls the decorator function with zero arguments
@decorator



Yields a stable de-duplication of a hashable sequence by key
  • sequence -- hashable sequence to be de-duplicated
  • key -- callable applied on values before uniqueness test; identity by default.

stable de-duplication of the sequence

Examples

Dedupe a list of integers:

[x for x in dedupe([1, 2, 1, 3, 2])] == [1, 2, 3]

[x for x in llnl.util.lang.dedupe([1,-2,1,3,2], key=abs)] == [1, -2, 3]
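
The behavior above can be sketched with a generator that remembers the keys it has already yielded (a sketch of the idea, not necessarily the exact implementation):

def dedupe(sequence, key=None):
    seen = set()
    for item in sequence:
        token = item if key is None else key(item)
        if token not in seen:
            seen.add(token)
            yield item  # first occurrence wins, so the order is stable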




sentinel for testing that iterators are done in lazy_lexicographic_ordering

Takes a long list and limits it to a smaller number of elements, replacing intervening elements with '...'. For example:

elide_list([1,2,3,4,5,6], 4)


gives:

[1, 2, 3, '...', 6]
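
Consistent with the example above, the elision keeps the head of the list and its final element (a sketch; the max_num default is an assumption):

def elide_list(line_list, max_num=10):
    if len(line_list) > max_num:
        # keep the first max_num - 1 elements, then '...', then the last one
        return line_list[:max_num - 1] + ['...'] + line_list[-1:]
    return line_list

elide_list([1, 2, 3, 4, 5, 6], 4)  # [1, 2, 3, '...', 6]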



Performs a stable partition of lst, ensuring that elements occur at the end of lst in specified order. Mutates lst. Raises ValueError if any elements are not already in lst.

Return an enum-like class.
**kwargs -- explicit dictionary of enums


Make sure that the caller is a class definition, and return the enclosing module's name.


True if the caller was called from some function with the supplied name, False otherwise.

Create a hierarchy of dictionaries by splitting the supplied set of objects on unique values of the supplied functions.

Values are used as keys. For example, suppose you have four objects with attributes that look like this:

a = Spec("boost %gcc target=skylake")
b = Spec("mrnet %intel target=zen2")
c = Spec("libelf %xlc target=skylake")
d = Spec("libdwarf %intel target=zen2")
list_of_specs = [a, b, c, d]
index1 = index_by(list_of_specs, lambda s: str(s.target),
                  lambda s: s.compiler)
index2 = index_by(list_of_specs, lambda s: s.compiler)


index1 now has two levels of dicts, with lists at the leaves, like this:

{ 'skylake' : { 'gcc' : [a], 'xlc' : [c] },
  'zen2'    : { 'intel' : [b, d] } }


And index2 is a single level dictionary of lists that looks like this:

{ 'gcc'   : [a],
  'intel' : [b, d],
  'xlc'   : [c] }


If any element in funcs is a string, it is treated as the name of an attribute, and acts like getattr(object, name). So shorthand for the above two indexes would be:

index1 = index_by(list_of_specs, 'arch', 'compiler')
index2 = index_by(list_of_specs, 'compiler')


You can also index by tuples by passing tuples:

index1 = index_by(list_of_specs, ('target', 'compiler'))


Keys in the resulting dict will look like ('gcc', 'skylake').
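
The core of the recursion can be sketched as follows (an illustration only; tuple-of-functions keys and other details are omitted):

def index_by(objects, *funcs):
    if not funcs:
        return objects

    f = funcs[0]
    if isinstance(f, str):
        # strings act like attribute getters
        f = lambda obj, name=f: getattr(obj, name)

    index = {}
    for obj in objects:
        index.setdefault(f(obj), []).append(obj)

    # recurse to build one level of dict per remaining function
    if len(funcs) > 1:
        index = {k: index_by(v, *funcs[1:]) for k, v in index.items()}
    return index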


Decorates a class with extra methods that implement rich comparison operations and __hash__. The decorator assumes that the class implements a function called _cmp_key(). The rich comparison operations will compare objects using this key, and the __hash__ function will return the hash of this key.

If a class already has __eq__, __ne__, __lt__, __le__, __gt__, or __ge__ defined, this decorator will overwrite them.

TypeError -- If the class does not have a _cmp_key method


Equality comparison for two lazily generated sequences.

See lazy_lexicographic_ordering.


Decorates a class with extra methods that implement rich comparison.

This is a lazy version of the tuple comparison used frequently to implement comparison in Python. Given some objects with fields, you might use tuple keys to implement comparison, e.g.:

class Widget:
    def _cmp_key(self):
        return (
            self.a,
            self.b,
            (self.c, self.d),
            self.e,
        )

    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self, other):
        return self._cmp_key() < other._cmp_key()

    # etc.


Python would compare Widgets lexicographically based on their tuples. The issue there for simple comparators is that we have to build the tuples and we have to generate all the values in them up front. When implementing comparisons for large data structures, this can be costly.

Lazy lexicographic comparison maps the tuple comparison shown above to generator functions. Instead of comparing based on pre-constructed tuple keys, users of this decorator can compare using elements from a generator. So, you'd write:

@lazy_lexicographic_ordering
class Widget:
    def _cmp_iter(self):
        yield self.a
        yield self.b

        def cd_fun():
            yield self.c
            yield self.d

        yield cd_fun
        yield self.e

    # operators are added by decorator


There are no tuples preconstructed, and the generator does not have to complete. Instead of tuples, we simply make functions that lazily yield what would've been in the tuple. The @lazy_lexicographic_ordering decorator handles the details of implementing comparison operators, and the Widget implementor only has to worry about writing _cmp_iter, and making sure the elements in it are also comparable.

Some things to note:

  • If a class already has __eq__, __ne__, __lt__, __le__, __gt__, __ge__, or __hash__ defined, this decorator will overwrite them.
  • If set_hash is False, this will not overwrite __hash__.
  • This class uses Python 2 None-comparison semantics. If you yield None and it is compared to a non-None type, None will always be less than the other object.



TypeError -- If the class does not have a _cmp_iter method


Less-than comparison for two lazily generated sequences.

See lazy_lexicographic_ordering.
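
The element-by-element comparison can be sketched like this (a simplified illustration; the real helpers also recurse into yielded functions such as cd_fun above):

def lazy_lt(lseq, rseq):
    # lseq and rseq are iterators; stop at the first differing element
    done = object()
    while True:
        left = next(lseq, done)
        right = next(rseq, done)
        if left is done:
            return right is not done  # a strict prefix is "less"
        if right is done:
            return False
        if left != right:
            return left < right

lazy_lt(iter([1, 2]), iter([1, 3]))  # True
lazy_lt(iter([1, 2]), iter([1, 2]))  # False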


Lists all of the modules, excluding __init__.py, in a particular directory. Listed packages have no particular order.

Loads a python module from the path of the corresponding file.

If the module is already in sys.modules it will be returned as is and not reloaded.

  • module_name (str) -- namespace where the python module will be loaded, e.g. foo.bar
  • module_path (str) -- path of the python file containing the module

A valid module object
  • ImportError -- when the module can't be loaded
  • FileNotFoundError -- when module_path doesn't exist



Utility function for making string matching predicates.

Each arg can be:
  • a regex
  • a list or tuple of regexes
  • a predicate that takes a string

This returns a predicate that is true if:
  • any arg regex matches
  • any regex in a list or tuple of regexes matches
  • any predicate in args matches


Decorator that caches the results of a function, storing them in an attribute of that function.

Empty context manager. TODO: replace with contextlib.nullcontext() if we ever require python 3.7.

Convert a datetime or timestamp to a pretty, relative date.
  • time (datetime.datetime or int) -- date to print prettily
  • now (datetime.datetime) -- datetime for 'now', i.e. the date the pretty date is relative to (default is datetime.now())

'3 months ago', 'just now', etc.

Return type
(str)

Adapted from https://stackoverflow.com/questions/1551382.


Seconds to string with appropriate units
seconds (float) -- Number of seconds
Time string with units
Return type
str



Parses a string representing a date and returns a datetime object.
date_str (str) -- string representing a date. This string might be in different format (like YYYY, YYYY-MM, YYYY-MM-DD, YYYY-MM-DD HH:MM, YYYY-MM-DD HH:MM:SS) or be a pretty date (like yesterday or two months ago)
datetime object corresponding to date_str
Return type
(datetime.datetime)


A key factory that performs a stable sort of the parameters.

Partition the input iterable according to a custom predicate.
  • input_iterable -- input iterable to be partitioned.
  • predicate_fn -- predicate function accepting an iterable item as argument.

Tuple of the list of elements evaluating to True, and list of elements evaluating to False.
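
A minimal sketch of such a partition (hypothetical name, matching the description above):

def stable_partition(input_iterable, predicate_fn):
    true_items, false_items = [], []
    for item in input_iterable:
        # route each element by the predicate, preserving input order
        (true_items if predicate_fn(item) else false_items).append(item)
    return true_items, false_items

stable_partition(range(6), lambda x: x % 2 == 0)  # ([0, 2, 4], [1, 3, 5])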


Unpacks arguments for use with multiprocessing mapping functions

Helper for lazy_lexicographic_ordering().

Use update() to combine all dicts into one.

This builds a new dictionary, into which we update() each element of dicts in order. Items from later dictionaries will override items from earlier dictionaries.

dicts (list) -- list of dictionaries



Remove strings of duplicate elements from a list.

This works like the command-line uniq tool. It filters strings of duplicate elements in a list. Adjacent matching elements are merged into the first occurrence.

For example:

uniq([1, 1, 1, 1, 2, 2, 2, 3, 3]) == [1, 2, 3]
uniq([1, 1, 1, 1, 2, 2, 2, 1, 1]) == [1, 2, 1]
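
Since adjacent duplicates are merged into their first occurrence, itertools.groupby gives a compact sketch:

from itertools import groupby

def uniq(sequence):
    # groupby yields one key per run of equal adjacent elements
    return [key for key, _group in groupby(sequence)]

uniq([1, 1, 1, 1, 2, 2, 2, 1, 1])  # [1, 2, 1]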



LinkTree class for setting up trees of symbolic links.

Bases: object

Class to create trees of symbolic links from a source directory.

LinkTree objects are constructed with a source root. Their methods allow you to create and delete trees of symbolic links back to the source tree in specific destination directories. Trees comprise symlinks only to files; directories are never symlinked to, to prevent the source directory from ever being modified.

Returns the first file in dest that conflicts with src





Unlink all files in dest that exist in src.

Unlinks directories in dest if they are empty.




llnl.util.lock module

Bases: LockPermissionError

Attempt to create a lock in an unwritable location.


Bases: object

This is an implementation of a filesystem lock using Python's lockf.

In Python, lockf actually calls fcntl, so this should work with any filesystem implementation that supports locking through the fcntl calls. This includes distributed filesystems like Lustre (when flock is enabled) and recent NFS versions.

Note that this is for managing contention over resources between processes and not for managing contention between threads in a process: the functions of this object are not thread-safe. A process also must not maintain multiple locks on the same file (or, more specifically, on overlapping byte ranges in the same file).

Acquires a recursive, shared lock for reading.

Read and write locks can be acquired and released in arbitrary order, but the POSIX lock is held until all local read and write locks are released.

Returns True if it is the first acquire and actually acquires the POSIX lock, False if it is a nested transaction.


Acquires a recursive, exclusive lock for writing.

Read and write locks can be acquired and released in arbitrary order, but the POSIX lock is held until all local read and write locks are released.

Returns True if it is the first acquire and actually acquires the POSIX lock, False if it is a nested transaction.



Downgrade from an exclusive write lock to a shared read.
LockDowngradeError -- if this is an attempt at a nested transaction


Check if the file is write locked
True if the path is write locked, otherwise, False
Return type
(bool)


Releases a read lock.
release_fn (Callable) -- function to call before the last recursive lock (read or write) is released.

If the last recursive lock will be released, then this will call release_fn and return its result (if provided), or return True (if release_fn was not provided).

Otherwise, we are still nested inside some other lock, so do not call the release_fn, and return False.

Does limited correctness checking: if a read lock is released when none are held, this will raise an assertion error.


Releases a write lock.
release_fn (Callable) -- function to call before the last recursive write is released.

If the last recursive write lock will be released, then this will call release_fn and return its result (if provided), or return True (if release_fn was not provided). Otherwise, we are still nested inside some other write lock, so do not call the release_fn, and return False.

Does limited correctness checking: if a write lock is released when none are held, this will raise an assertion error.


Attempts to upgrade from a shared read lock to an exclusive write.
LockUpgradeError -- if this is an attempt at a nested transaction



Bases: LockError

Raised when unable to downgrade from a write to a read lock.


Bases: Exception

Raised for any errors related to locks.


Bases: LockError

Raised when there are permission issues with a lock.


Bases: LockPermissionError

Tried to take an exclusive lock on a read-only file.


Bases: LockError

Raised when an attempt to acquire a lock times out.


Bases: object

Simple nested transaction context manager that uses a file lock.

  • lock (Lock) -- underlying lock for this transaction to be acquired on enter and released on exit
  • acquire (Callable or contextlib.contextmanager) -- function to be called after lock is acquired, or contextmanager to enter after acquire and leave before release.
  • release (Callable) -- function to be called before release. If acquire is a contextmanager, this will be called after exiting the nested context and before the lock is released.
  • timeout (float) -- number of seconds to set for the timeout when acquiring the lock (default no timeout)


If the acquire_fn returns a value, it is used as the return value for __enter__, allowing it to be passed as the as argument of a with statement.

If acquire_fn returns a context manager, its __enter__ function will be called after the lock is acquired, and its __exit__ function will be called before release_fn in __exit__, allowing you to nest a context manager inside this one.

Timeout for lock is customizable.


Bases: LockError

Raised when unable to upgrade from a read to a write lock.




llnl.util.multiproc module

This implements a parallel map operation but it can accept more values than multiprocessing.Pool.apply() can. For example, apply() will fail to pickle functions if they're passed indirectly as parameters.

Bases: object

Simple reusable semaphore barrier.

Python 2 doesn't have multiprocessing barriers so we implement this.

See https://greenteapress.com/semaphores/LittleBookOfSemaphores.pdf, p. 41.



Bases: SymlinkError

Link path already exists.


Bases: RuntimeError

Exception class for errors raised while creating symlinks, junctions and hard links


Override os.islink to give correct answer for spack logic.

For non-Windows: a link can be determined with the os.path.islink method. Windows-only methods will return False for other operating systems.

For Windows: spack considers symlinks, hard links, and junctions to all be links, so if any of those are True, return True.

path (str) -- path to check if it is a link.
bool - whether the path is any kind of link or not.


Spack utility that overrides os.readlink to work cross-platform.

os.path.isdir uses os.path.exists, which for links will check the existence of the link target. If the link target is relative to the link, we need to construct a pathname that is valid from our cwd (which may not be the same as the link's directory)

Create a link.

On non-Windows, and on Windows with System Administrator privileges, this will be a normal symbolic link via os.symlink.

On Windows without privileges the link will be a junction for a directory and a hardlink for a file. On Windows the various link types are:

Symbolic Link: A link to a file or directory on the same or different volume (drive letter) or even to a remote file or directory (using UNC in its path). Need System Administrator privileges to make these.

Hard Link: A link to a file on the same volume (drive letter) only. Every file (file's data) has at least 1 hard link (file's name). But when this method creates a new hard link there will be 2. Deleting all hard links effectively deletes the file. Don't need System Administrator privileges.

Junction: A link to a directory on the same or different volume (drive letter) but not to a remote directory. Don't need System Administrator privileges.

  • source_path (str) -- The real file or directory that the link points to. Must be absolute OR relative to the link.
  • link_path (str) -- The path where the link will exist.
  • allow_broken_symlinks (bool) -- On Linux or Mac, don't raise an exception if the source_path doesn't exist. This will still raise an exception on Windows.
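
A hedged usage sketch, assuming these helpers live in llnl.util.symlink as in current Spack:

import os
from llnl.util.symlink import islink, symlink

# Create a link whose target is relative to the link itself. On Windows
# without Administrator privileges this transparently falls back to a
# junction (for directories) or a hard link (for files).
symlink("data/config.yaml", "config.yaml")

assert islink("config.yaml")  # True for symlinks, junctions, and hard links
print(os.path.realpath("config.yaml"))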



Submodules

llnl.path module

Path primitives that require only the Python standard library.

Bases: object

Enum to identify the path-style.





Converts the input path to the current platform's native style.

Converts the input path to POSIX style.

Converts the input path to Windows style.

Formats the input path to use consistent, platform-specific separators.

Absolute paths are converted between drive letters and a prepended '/' as per platform requirement.

  • path -- the path to be normalized; must be a string or expose a replace method.
  • mode -- the path separator style to normalize the passed path to. Default is unix style, i.e. '/'.



Takes an arbitrary number of positional parameters, converts each argument of type string to use a normalized filepath separator, and returns a list of all values.
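
A hedged sketch of these primitives (function names as they appear in current llnl.path; the output depends on the host platform):

from llnl.path import convert_to_posix_path, path_to_os_path

print(convert_to_posix_path("C:\\spack\\opt"))  # backslashes become '/'

# Strings are normalized to the current platform's separator style;
# non-string arguments pass through untouched.
print(path_to_os_path("/usr/local", 42, None))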

Filters function arguments to account for platform path separators. An optional slicing range can be specified to select specific arguments; see the sketch below.

This decorator takes all (or a slice) of a method's positional arguments and normalizes usage of filepath separators on a per-platform basis.

Note: **kwargs, URLs, and any type that is not a string are ignored, so in cases where path normalization is required for those, it should be handled by calling path_to_os_path directly as needed.

arg_slice -- a slice object specifying the slice of arguments in the decorated method over which filepath separators are normalized
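
For example, a hedged sketch of decorating a function so that only its first positional argument is normalized (assuming the decorator accepts the arg_slice keyword described above):

from llnl.path import system_path_filter

@system_path_filter(arg_slice=slice(1))
def stage(prefix, url):
    # Only `prefix` has its separators normalized; `url` is skipped,
    # since URLs must keep forward slashes on every platform.
    return prefix, url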


llnl.string module

String manipulation functions that have no dependencies other than the Python standard library.

Return a string with all the elements of the input joined by commas, except the last one, which is joined by 'and'.


Return a string with all the elements of the input joined by commas, except the last one, which is joined by 'or'.

Pluralize <singular> word by adding an s if n != 1.
  • n -- number of things there are
  • singular -- singular form of word
  • plural -- optional plural form, for when it's not just singular + 's'
  • show_n -- whether to include n in the result string (default True)

"1 thing" if n == 1 or "n things" if n != 1


Quotes each item in the input list with the quote character passed as the second argument.
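
Taken together, a hedged sketch of these helpers (exact output formatting, such as comma placement, may differ between versions):

from llnl.string import comma_and, comma_or, plural, quote

print(plural(1, "package"))           # "1 package"
print(plural(3, "package"))           # "3 packages"
print(plural(2, "index", "indices"))  # "2 indices"
print(comma_and(["a", "b", "c"]))     # joined by commas, last element by "and"
print(comma_or(["x", "y"]))           # joined by commas, last element by "or"
print(quote(["a", "b"]))              # each item wrapped in quote characters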

llnl.url module

URL primitives that require only the Python standard library.

Returns True if the input is a valid archive, False otherwise.

Returns the input path with the extension removed, if the extension is present in path. Otherwise, returns the input unchanged.

Returns compression extension for a compressed archive

This returns the type of archive a URL refers to. This is sometimes confusing for URLs where the path doesn't actually contain the filename. We need to know what type of archive it is so that we can appropriately name files in mirrors.


Returns the expanded version of a known contracted extension.

This function maps extensions like ".tgz" to ".tar.gz". On unknown extensions, return the input unmodified.
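
A hedged sketch (function name as in current llnl.url; exact handling of leading dots may differ):

from llnl.url import expand_contracted_extension

print(expand_contracted_extension(".tgz"))  # ".tar.gz", per the mapping above
print(expand_contracted_extension(".xyz"))  # unknown extension, returned unchanged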


Returns the input path or URL with any contraction extension expanded.
  • path_or_url -- path or URL to be expanded
  • extension -- if specified, only attempt to expand that extension



Tries to match an allowed archive extension to the input. Returns the first match, or None if no match was found.
ValueError -- if the input is None


Find good list URLs for the supplied URL.

By default, returns the dirname of the archive path.

Provides special treatment for the following websites, which have a unique list URL different from the dirname of the download URL:

GitHub https://github.com/<repo>/<name>/releases
GitLab https://gitlab.*/<repo>/<name>/tags
BitBucket https://bitbucket.org/<repo>/<name>/downloads/?tab=tags
CRAN https://*.r-project.org/src/contrib/Archive/<name>
PyPI https://pypi.org/simple/<name>/
LuaRocks https://luarocks.org/modules/<repo>/<name>

Note: this function is called by spack versions, spack checksum, and spack create, but not by spack fetch or spack install.

url (str) -- The download URL for the package
set -- one or more list URLs for the package
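
For instance, a hedged sketch using the GitHub rule from the table above:

from llnl.url import find_list_urls

urls = find_list_urls("https://github.com/spack/spack/archive/v0.21.0.tar.gz")
print(urls)  # a set including "https://github.com/spack/spack/releases"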


Returns True if the given extension is present in path, False otherwise.

Returns the input with the extension removed

Some URLs have a query string, e.g.:

In (1), the query string needs to be stripped to get at the extension, but in (2) & (3), the filename is IN a single final query argument.

This strips the URL into three pieces: prefix, ext, and suffix. The suffix contains anything that was stripped off the URL to get at the file extension. In (1), it will be '?raw=true', but in (2), it will be empty. In (3) the suffix is a parameter that follows after the file extension, e.g.:



If the input is a sourceforge URL, returns base URL and "/download" suffix. Otherwise, returns the input URL and an empty string.

Strips the compression extension from the input and returns the result; for instance, "foo.tgz" becomes "foo.tar".

If no extension is given, try a default list of extensions.

  • path_or_url -- input to be stripped
  • ext -- if given, extension to be stripped
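
A hedged sketch, following the "foo.tgz" example above:

from llnl.url import strip_compression_extension

print(strip_compression_extension("foo.tgz"))           # "foo.tar"
print(strip_compression_extension("foo.tar.gz"))        # "foo.tar"
print(strip_compression_extension("foo.tar.gz", "gz"))  # "foo.tar"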



If a path contains the extension in input, returns the path stripped of the extension. Otherwise, returns the input path.

If extension is None, attempts to strip any allowed extension from path.


Strips query and fragment from a URL, then returns the base URL and the suffix.
url -- URL to be stripped
ValueError -- when there is any error parsing the URL


Some tarballs contain extraneous information after the version:
  • bowtie2-2.2.5-source
  • libevent-2.0.21-stable
  • cuda_8.0.44_linux.run

These strings are not part of the version number and should be ignored. This function strips those suffixes off and returns the remaining string. The goal is that the version is always the last thing in path:

  • bowtie2-2.2.5
  • libevent-2.0.21
  • cuda_8.0.44

path_or_url -- The filename or URL for the package
The path with any extraneous suffixes removed
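
A hedged sketch using the examples above (assuming the function is named strip_version_suffixes, as in current Spack):

from llnl.url import strip_version_suffixes

print(strip_version_suffixes("bowtie2-2.2.5-source"))    # "bowtie2-2.2.5"
print(strip_version_suffixes("libevent-2.0.21-stable"))  # "libevent-2.0.21"
print(strip_version_suffixes("cuda_8.0.44_linux.run"))   # "cuda_8.0.44"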



AUTHOR

Todd Gamblin

COPYRIGHT

2013-2024, Lawrence Livermore National Laboratory.

April 18, 2024 0.21