
Slurm Workload Manager

The Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management (SLURM), or simply Slurm, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters.

Slurm

  • Developer(s): SchedMD
  • Stable release: www.schedmd.com/downloads.php
  • Repository: github.com/SchedMD/slurm
  • Written in: C
  • Operating system: Linux, BSDs
  • Type: Job scheduler for clusters and supercomputers
  • License: GNU General Public License
  • Website: slurm.schedmd.com

It provides three key functions:

  • allocating exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work,
  • providing a framework for starting, executing, and monitoring work, typically a parallel job such as Message Passing Interface (MPI) on a set of allocated nodes, and
  • arbitrating contention for resources by managing a queue of pending jobs.

Slurm is the workload manager on about 60% of the TOP500 supercomputers.[1]

Slurm uses a best fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.[2]

History

Slurm began development as a collaborative effort primarily by Lawrence Livermore National Laboratory, SchedMD,[3] Linux NetworX, Hewlett-Packard, and Groupe Bull as a Free Software resource manager. It was inspired by the closed source Quadrics RMS and shares a similar syntax. The name is a reference to the soda in Futurama.[4] Over 100 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers.

As of November 2021, the TOP500 list of the most powerful computers in the world indicates that Slurm is the workload manager on more than half of the top ten systems.

Structure

Slurm's design is very modular with about 100 optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits and workload prioritization.
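Plugins are selected through parameters in the `slurm.conf` configuration file. The following is a minimal sketch of what such a configuration might look like; the specific values and the database host name are illustrative placeholders rather than recommended defaults:

```
# Illustrative plugin selection in slurm.conf (values are examples, not defaults)
SelectType=select/cons_tres                        # resource-selection plugin (cores, memory, GPUs)
PriorityType=priority/multifactor                  # multifactor job prioritization
AccountingStorageType=accounting_storage/slurmdbd  # accounting via the slurmdbd database daemon
AccountingStorageHost=dbhost                       # placeholder host running slurmdbd
```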

Features

Slurm features include:[citation needed]

  • No single point of failure, backup daemons, fault-tolerant job options
  • Highly scalable (schedules up to 100,000 independent jobs on the 100,000 sockets of IBM Sequoia)
  • High performance (up to 1000 job submissions per second and 600 job executions per second)
  • Free and open-source software (GNU General Public License)
  • Highly configurable with about 100 plugins
  • Fair-share scheduling with hierarchical bank accounts
  • Preemptive and gang scheduling (time-slicing of parallel jobs)
  • Integrated with database for accounting and configuration
  • Resource allocations optimized for network topology and on-node topology (sockets, cores and hyperthreads)
  • Advanced reservation
  • Idle nodes can be powered down
  • Different operating systems can be booted for each job
  • Scheduling for generic resources (e.g. Graphics processing unit)
  • Real-time accounting down to the task level (identify specific tasks with high CPU or memory usage)
  • Resource limits by user or bank account
  • Accounting for power consumption by job
  • Support of IBM Parallel Environment (PE/POE)
  • Support for job arrays
  • Job profiling (periodic sampling of each task's CPU use, memory use, power consumption, network and file system use)
  • Sophisticated multifactor job prioritization algorithms
  • Support for MapReduce+
  • Support for burst buffer that accelerates scientific data movement

The following features were announced for version 14.11 of Slurm, which was released in November 2014:[5]

  • Improved job array data structure and scalability
  • Support for heterogeneous generic resources
  • Add user options to set the CPU governor
  • Automatic job requeue policy based on exit value
  • Report API use by user, type, count and time consumed
  • Communication gateway nodes improve scalability

Supported platforms

Slurm is primarily developed to work alongside Linux distributions, although there is also support for a few other POSIX-based operating systems, including BSDs (FreeBSD, NetBSD and OpenBSD).[6] Slurm also supports several unique computer architectures, including:

  • IBM BlueGene/Q models, including the 20 petaflop IBM Sequoia
  • Cray XT, XE and Cascade
  • Tianhe-2, a 33.9-petaflop system with 32,000 Intel Ivy Bridge chips and 48,000 Intel Xeon Phi chips, for a total of 3.1 million cores
  • IBM Parallel Environment
  • Anton

License

Slurm is available under the GNU General Public License v2.

Commercial support

In 2010, the developers of Slurm founded SchedMD, which maintains the canonical source, provides development, level 3 commercial support and training services. Commercial support is also available from Bull, Cray, and Science + Computing.

Usage

The `slurm` system has three main parts:

  • a central `slurmctld` (slurm control) daemon running on a single control node (optionally with failover backups);
  • many compute nodes, each running one or more `slurmd` daemons;
  • clients that connect to the control node, often via ssh.

Clients issue commands to the control daemon, which accepts them and distributes the work among the compute daemons.
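This layout of daemons is described in `slurm.conf`, a configuration file shared by the control node and the compute nodes. A minimal sketch is shown below; the cluster name, host names, node counts, and partition name are placeholders for illustration:

```
# Illustrative slurm.conf topology (host names and sizes are placeholders)
ClusterName=example
SlurmctldHost=ctl01                # node running slurmctld (control daemon)
SlurmctldHost=ctl02                # optional failover backup controller

# Compute nodes, each running slurmd
NodeName=node[01-04] CPUs=16 RealMemory=64000 State=UNKNOWN

# A partition (job queue) grouping the compute nodes
PartitionName=batch Nodes=node[01-04] Default=YES MaxTime=24:00:00 State=UP
```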

For clients, the main commands are `srun` (run an interactive or parallel job), `sbatch` (submit a batch job), `squeue` (display the job queue), and `scancel` (remove a job from the queue).
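For illustration, a typical command-line session might look like the following; the script name `job.sh` and the job ID are placeholders:

```
$ sbatch job.sh                # submit a batch script; prints the assigned job ID
Submitted batch job 12345
$ squeue                       # show pending and running jobs
$ scancel 12345                # cancel the job by its ID
$ srun --ntasks=1 --pty bash   # request an interactive shell on an allocated node
```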

Jobs can be run in batch mode or interactive mode. In interactive mode, a compute node starts a shell, connects the client to it, and runs the job there, so the user can observe and interact with the job while it is running. Interactive jobs are typically used for initial debugging; once debugged, the same job is usually submitted with `sbatch`. For a batch-mode job, its `stdout` and `stderr` output is typically directed to text files for later inspection.
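As an illustration of a batch submission, the sketch below shows a small `sbatch` script; the job name, resource requests, file names, and program name are placeholders. The `--output` and `--error` options direct `stdout` and `stderr` to text files, with `%j` expanding to the job ID:

```bash
#!/bin/bash
# Illustrative batch script; names and resource values are placeholders.
#SBATCH --job-name=example
#SBATCH --ntasks=4                # number of tasks (e.g. MPI ranks)
#SBATCH --time=00:30:00           # wall-clock time limit
#SBATCH --output=example_%j.out   # file receiving stdout (%j = job ID)
#SBATCH --error=example_%j.err    # file receiving stderr

srun ./my_program                 # launch the tasks on the allocated nodes
```

The script would be submitted with `sbatch example.sh` and its progress monitored with `squeue`.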

See also

  • Beowulf cluster
  • Maui Cluster Scheduler
  • Open Source Cluster Application Resources (OSCAR)
  • TORQUE
  • Univa Grid Engine
  • Platform LSF

References

  1. ^ "Running a Job on HPC using Slurm". hpcc.usc.edu. Archived from the original on 2019-03-06. Retrieved 2019-03-05.
  2. ^ Pascual, Jose Antonio; Navaridas, Javier; Miguel-Alonso, Jose (2009). Effects of Topology-Aware Allocation Policies on Scheduling Performance. Job Scheduling Strategies for Parallel Processing. Lecture Notes in Computer Science. Vol. 5798. pp. 138–144. doi:10.1007/978-3-642-04633-9_8. ISBN 978-3-642-04632-2.
  3. ^ "Slurm Commercial Support, Development, and Installation". SchedMD. Retrieved 2014-02-23.
  4. ^ "SLURM: Simple Linux Utility for Resource Management" (PDF). 23 June 2003. Retrieved 11 January 2016.
  5. ^ "Slurm - What's New". SchedMD. Retrieved 2014-08-29.
  6. ^ Slurm Platforms

Further reading

  • Balle, Susanne M.; Palermo, Daniel J. (2008). Enhancing an Open Source Resource Manager with Multi-core/Multi-threaded Support. Job Scheduling Strategies for Parallel Processing. Lecture Notes in Computer Science. Vol. 4942. p. 37. doi:10.1007/978-3-540-78699-3_3. ISBN 978-3-540-78698-6.
  • Jette, M.; Grondona, M. (June 2003). "SLURM: Simple Linux Utility for Resource Management" (PDF). Proceedings of ClusterWorld Conference and Expo. San Jose, California.
  • Layton, Jeffrey B. (5 February 2009). "Caos NSA and Perceus: All-in-one Cluster Software Stack". Linux Magazine. Archived from the original on February 11, 2009.
  • Yoo, Andy B.; Jette, Morris A.; Grondona, Mark (2003). SLURM: Simple Linux Utility for Resource Management. Job Scheduling Strategies for Parallel Processing. Lecture Notes in Computer Science. Vol. 2862. p. 44. CiteSeerX 10.1.1.10.6834. doi:10.1007/10968987_3. ISBN 978-3-540-20405-3.

External links

  • Slurm Documentation
  • SchedMD
  • Slurm Workload Manager Architecture Configuration and Use
  • Caltech HPC Center: Job Script Generator
