
A batchtools Slurm backend resolves futures in parallel via the Slurm job scheduler.

Usage

batchtools_slurm(
  ...,
  template = "slurm",
  scheduler.latency = 1,
  fs.latency = 65,
  resources = list(),
  delete = getOption("future.batchtools.delete", "on-success"),
  workers = getOption("future.batchtools.workers", default = 100L)
)

Arguments

template

(optional) Name of the job-script template to be searched for by batchtools::findTemplateFile(). If not found, it defaults to the templates/slurm.tmpl file that is part of this package (see below).
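For example, a hypothetical custom template named "my-slurm" could be selected as sketched below (batchtools::findTemplateFile() searches, among other places, the current working directory and the user's configuration directory for a matching *.tmpl file):

```r
library(future)

## Hypothetical: use a custom job-script template instead of the bundled one.
## batchtools::findTemplateFile("my-slurm") must be able to locate it, e.g. as
## "batchtools.my-slurm.tmpl" in the current working directory.
plan(future.batchtools::batchtools_slurm, template = "my-slurm")
```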

scheduler.latency

[numeric(1)]
Time to sleep after important interactions with the scheduler to ensure a sane state. Currently only triggered after calling submitJobs.

fs.latency

[numeric(1)]
Expected maximum latency of the file system, in seconds. Set it to a positive number for network file systems like NFS; this enables more robust (but also more expensive) mechanisms to access files and directories. It is usually safe to set it to 0 to disable the heuristic, e.g. if you are working on a local file system.

resources

(optional) A named list passed to the batchtools job-script template as variable resources. This is based on how batchtools::submitJobs() works, with the exception of a few specially reserved names defined by the future.batchtools package:

  • resources[["asis"]] is a character vector of options that are passed as-is to the job script, where they are injected as job resource declarations.

  • resources[["modules"]] is a character vector of Linux environment modules to be loaded.

  • resources[["startup"]] and resources[["shutdown"]] are character vectors of shell code to be injected into the job script as-is.

  • resources[["details"]], if TRUE, results in the job script outputting job details and job summaries at the beginning and at the end.

  • All remaining named elements of resources are injected as named resource specifications for the scheduler.
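This translation of resources into job declarations can be sketched in plain R. The snippet below mirrors the logic of the bundled templates/slurm.tmpl shown under Details; the input list is a made-up example:

```r
## Example 'resources' list (made-up values)
resources <- list(
  time = "00:10:00", mem = "400M",
  asis = c("--nodes=1", "--partition=freecycle"),
  modules = "r", details = TRUE
)

## Reserved names are handled separately by the template ...
job_declarations <- resources[["asis"]]
for (name in c("asis", "details", "startup", "shutdown", "modules")) {
  resources[[name]] <- NULL
}

## ... and all remaining elements become '--<key>=<value>' declarations
opts <- unlist(resources, use.names = TRUE)
opts <- sprintf("--%s=%s", names(opts), opts)
writeLines(sprintf("#SBATCH %s", c(job_declarations, opts)))
#> #SBATCH --nodes=1
#> #SBATCH --partition=freecycle
#> #SBATCH --time=00:10:00
#> #SBATCH --mem=400M
```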

delete

Controls if and when the batchtools job registry folder is deleted. If "on-success" (default), it is deleted if the future was resolved successfully and the expression did not produce an error. If "never", then it is never deleted. If "always", then it is always deleted.

workers

The maximum number of workers the batchtools backend may use at any time, which for HPC schedulers corresponds to the maximum number of queued jobs. The default is getOption("future.batchtools.workers", 100).
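Both delete and workers can also be set globally via R options, as this minimal sketch shows (the values are illustrative, not recommendations):

```r
library(future)

## Keep registry folders for post-mortem debugging, and allow up to
## 200 queued jobs at a time (illustrative values; adjust to your site)
options(
  future.batchtools.delete  = "never",
  future.batchtools.workers = 200L
)
plan(future.batchtools::batchtools_slurm)
```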

...

Not used.

Details

Batchtools slurm futures use batchtools cluster functions created by batchtools::makeClusterFunctionsSlurm(), which requires that Slurm commands sbatch, squeue, and scancel are installed on the current machine.
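Whether these commands are available can be checked from R before setting up the backend; Sys.which() returns an empty string for a missing command:

```r
## Locate the required Slurm client tools on the PATH
paths <- Sys.which(c("sbatch", "squeue", "scancel"))
print(paths)

## TRUE on a properly configured Slurm login or submit node
all(nzchar(paths))
```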

The default template script templates/slurm.tmpl can be found in:

system.file("templates", "slurm.tmpl", package = "future.batchtools")

and comprises:

#!/bin/bash
######################################################################
# A batchtools launch script template for Slurm
#
# Author: Henrik Bengtsson
######################################################################

## Job name
#SBATCH --job-name=<%= job.name %>
## Direct streams to logfile
#SBATCH --output=<%= log.file %>

## Resources needed:
<%
  ## As-is resource specifications
  job_declarations <- resources[["asis"]]
  resources[["asis"]] <- NULL

  ## Shell "details" code to evaluate
  details <- isTRUE(resources[["details"]])
  resources[["details"]] <- NULL

  ## Shell "startup" code to evaluate
  startup <- resources[["startup"]]
  resources[["startup"]] <- NULL

  ## Shell "shutdown" code to evaluate
  shutdown <- resources[["shutdown"]]
  resources[["shutdown"]] <- NULL

  ## Environment modules specifications
  modules <- resources[["modules"]]
  resources[["modules"]] <- NULL

  ## Remaining resources are assumed to be of type '--<key>=<value>'
  opts <- unlist(resources, use.names = TRUE)
  opts <- sprintf("--%s=%s", names(opts), opts)
  job_declarations <- sprintf("#SBATCH %s", c(job_declarations, opts))
  writeLines(job_declarations)
%>

## Bash settings
set -e          # exit on error
set -u          # error on unset variables
set -o pipefail # fail a pipeline if any command fails
trap 'echo "ERROR: future.batchtools job script failed on line $LINENO" >&2; exit 1' ERR

echo "Batchtools information:"
echo "- job name: '<%= job.name %>'"
echo "- job log file: '<%= log.file %>'"
echo

<% if (length(startup) > 0) {
  writeLines(startup)
} %>

echo "Load environment modules:"
<% if (length(modules) > 0) {
  writeLines(c(sprintf("module load %s", modules), "module list"))
} %>

echo "Session information:"
echo "- timestamp: $(date)"
echo "- hostname: $(hostname)"
echo "- Rscript path: $(which Rscript)"
echo "- Rscript version: $(Rscript --version)"
echo "- Rscript library paths: $(Rscript -e "cat(shQuote(.libPaths()), sep = ' ')")"
echo

echo "Job submission declarations:"
<%
  writeLines(sprintf("echo '%s'", job_declarations))
%>

<% if (details) { %>
if command -v scontrol > /dev/null; then
  echo "Slurm job information:"
  scontrol show job "${SLURM_JOB_ID}"
  echo
fi
<% } %>

## Launch R and evaluate the batchtools R job
echo "Command: Rscript -e 'batchtools::doJobCollection("<%= uri %>")' ..."
Rscript -e 'batchtools::doJobCollection("<%= uri %>")'
res=$?
echo " - exit code: ${res}"
echo "Command: Rscript -e 'batchtools::doJobCollection("<%= uri %>")' ... done"

<% if (details) { %>
if command -v sstat > /dev/null; then
  echo "Job summary:"
  sstat --format="JobID,AveCPU,MaxRSS,MaxPages,MaxDiskRead,MaxDiskWrite" --allsteps --jobs="${SLURM_JOB_ID}"
fi
<% } %>

<% if (length(shutdown) > 0) {
  writeLines(shutdown)
} %>

## Relay the exit code from Rscript
exit "${res}"

This template and the built-in batchtools::makeClusterFunctionsSlurm() have been verified to work on a few different Slurm HPC clusters:

  1. Slurm 21.08.4, Rocky 8 Linux, NFS global filesystem (August 2025)

  2. Slurm 22.05.11, Rocky 8 Linux, NFS global filesystem (August 2025)

  3. Slurm 23.02.6, Ubuntu 24.04 LTS, NFS global filesystem (August 2025)

Examples

if (FALSE) { # interactive()
library(future)

# Limit runtime to 10 minutes and memory to 400 MiB per future,
# request a parallel environment with four slots on a single host.
# Submit to the 'freecycle' partition. Load environment modules 'r' and
# 'jags'. Report on job details at startup and at the end of the job.
plan(future.batchtools::batchtools_slurm, resources = list(
  time = "00:10:00", mem = "400M",
  asis = c("--nodes=1", "--ntasks=4", "--partition=freecycle"),
  modules = c("r", "jags"),
  details = TRUE
))

f <- future({
  data.frame(
    hostname = Sys.info()[["nodename"]],
          os = Sys.info()[["sysname"]],
       cores = unname(parallelly::availableCores()),
     modules = Sys.getenv("LOADEDMODULES")
  )
})
info <- value(f)
print(info)
}