
Run multiple jobs at a time per node through SLURM


I have a cluster with 3 nodes, each with 110 GB of RAM and 16 cores. I want to keep submitting jobs to the nodes as long as the memory they request is available.

I am using this bash script called test_slurm.sh:

#!/bin/sh
#SBATCH --nodes=1           # run on a single node
#SBATCH --ntasks=1          # one task per job
#SBATCH --cpus-per-task=1   # one CPU core for that task
#SBATCH --mem=10G           # request 10 GB of memory on the node
python test.py

So if I have 33 jobs that each request 10 GB, and I have 3 nodes with 110 GB of RAM each, I want to be able to run all 33 at once (11 per node) if possible, instead of only 3 at once, which is what my current setup does.
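
For reference, a minimal sketch of how the 33 submissions could be made; the script name is taken from the question, but the loop and the job-array variant are only assumptions about how the jobs are launched:

    # Submit test_slurm.sh 33 times; each submission requests 1 CPU and 10 GB.
    for i in $(seq 1 33); do
        sbatch test_slurm.sh
    done

    # Or as a single job array; each array task uses the resources declared
    # inside test_slurm.sh.
    sbatch --array=1-33 test_slurm.sh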

Here is what my squeue output looks like (screenshot not reproduced here).

So only three jobs run at once even though I have plenty of memory for more.
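
Whether a node will accept more than one job at a time depends on how the cluster's scheduler is configured (in particular the resource-selection plugin and whether CPUs and memory are treated as consumable resources), and that configuration is not shown here. Purely as a diagnostic sketch, these read-only commands would reveal the relevant settings; for example, select/linear allocates whole nodes to a single job, while select/cons_res lets jobs share a node:

    # Resource-selection plugin and its parameters, as seen by the controller
    scontrol show config | grep -i -E 'SelectType|SelectTypeParameters'

    # Configuration of the partition the jobs run in; the partition name
    # myNodes is taken from the sinfo output below
    scontrol show partition myNodes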

sinfo -o "%all" returns:

    AVAIL|CPUS|TMP_DISK|FEATURES|GROUPS|SHARE|TIMELIMIT|MEMORY|HOSTNAMES|NODE_ADDR|PRIORITY|ROOT|JOB_SIZE|STATE|USER|VERSION|WEIGHT|S:C:T|NODES(A/I) |MAX_CPUS_PER_NODE |CPUS(A/I/O/T) |NODES |REASON |NODES(A/I/O/T) |GRES |TIMESTAMP |DEFAULTTIME |PREEMPT_MODE |NODELIST |CPU_LOAD |PARTITION |PARTITION |ALLOCNODES |STATE |USER |SOCKETS |CORES |THREADS
    up|16|0|(null)|all|NO|infinite|115328|parrot101|parrot101|1|no|1-infinite|alloc|Unknown|14.03|1|16:1:1|1/0 |UNLIMITED |16/0/0/16 |1 |none |1/0/0/1 |(null) |Unknown |n/a |OFF |parrot101 |0.01 |myNodes* |myNodes |all |allocated |Unknown |16 |1 |1
    up|16|0|(null)|all|NO|infinite|115328|parrot102|parrot102|1|no|1-infinite|alloc|Unknown|14.03|1|16:1:1|1/0 |UNLIMITED |16/0/0/16 |1 |none |1/0/0/1 |(null) |Unknown |n/a |OFF |parrot102 |0.14 |myNodes* |myNodes |all |allocated |Unknown |16 |1 |1
    up|16|0|(null)|all|NO|infinite|115328|parrot103|parrot103|1|no|1-infinite|alloc|Unknown|14.03|1|16:1:1|1/0 |UNLIMITED |16/0/0/16 |1 |none |1/0/0/1 |(null) |Unknown |n/a |OFF |parrot103 |0.26 |myNodes* |myNodes |all |allocated |Unknown |16 |1 |1
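
Note that CPUS(A/I/O/T) reads 16/0/0/16 on every node even though each running job asked for a single CPU, which is the pattern whole-node (exclusive) allocation produces; only the cluster's configuration can confirm whether that is the cause. As a sketch using standard sinfo format specifiers (nothing here comes from the original post), a narrower query makes the per-node picture easier to read:

    # Node-oriented view: node name, state, CPUs allocated/idle/other/total,
    # configured memory (MB), and free memory (MB)
    sinfo -N -o "%N %t %C %m %e"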

squeue -o "%all" returns:

ACCOUNT|GRES|MIN_CPUS|MIN_TMP_DISK|END_TIME|FEATURES|GROUP|SHARED|JOBID|NAME|COMMENT|TIMELIMIT|MIN_MEMORY|REQ_NODES|COMMAND|PRIORITY|QOS|REASON||ST|USER|RESERVATION|WCKEY|EXC_NODES|NICE|S:C:T|JOBID |EXEC_HOST |CPUS |NODES |DEPENDENCY |ARRAY_JOB_ID |GROUP |SOCKETS_PER_NODE |CORES_PER_SOCKET |THREADS_PER_CORE |ARRAY_TASK_ID |TIME_LEFT |TIME |NODELIST |CONTIGUOUS |PARTITION |PRIORITY |NODELIST(REASON) |START_TIME |STATE |USER |SUBMIT_TIME |LICENSES |CORE_SPECWORK_DIR
    (null)|(null)|1|0|N/A|(null)|j1101|no|26609|slurm_py_submit.sh|(null)|UNLIMITED|40K||/att/gpfsfs/home/spotter5/python/slurm_py_submit.sh 1 rcp85 26|0.99998411652632|(null)|Resources||PD|spotter5|(null)|(null)||0|*:*:*|26609 |n/a |1 |1 | |26609 |61101 |* |* |* |N/A |UNLIMITED |0:00 | |0 |myNodes |4294899076 |(Resources) |2019-03-19T13:03:57 |PENDING |474609391 |2018-03-19T11:57:39 |(null) |0/att/gpfsfs/home/spotter5/python
    (null)|(null)|1|0|N/A|(null)|j1101|no|26610|slurm_py_submit.sh|(null)|UNLIMITED|40K||/att/gpfsfs/home/spotter5/python/slurm_py_submit.sh 1 rcp85 27|0.99998411629349|(null)|Resources||PD|spotter5|(null)|(null)||0|*:*:*|26610 |n/a |1 |1 | |26610 |61101 |* |* |* |N/A |UNLIMITED |0:00 | |0 |myNodes |4294899075 |(Resources) |2019-03-19T13:03:57 |PENDING |474609391 |2018-03-19T11:57:39 |(null) |0/att/gpfsfs/home/spotter5/python
    (null)|(null)|1|0|N/A|(null)|j1101|no|26611|slurm_py_submit.sh|(null)|UNLIMITED|40K||/att/gpfsfs/home/spotter5/python/slurm_py_submit.sh 1 rcp85 28|0.99998411606066|(null)|Resources||PD|spotter5|(null)|(null)||0|*:*:*|26611 |n/a |1 |1 | |26611 |61101 |* |* |* |N/A |UNLIMITED |0:00 | |0 |myNodes |4294899074 |(Resources) |2019-03-19T13:03:57 |PENDING |474609391 |2018-03-19T11:57:39 |(null) |0/att/gpfsfs/home/spotter5/python
    (null)|(null)|1|0|N/A|(null)|j1101|no|26612|slurm_py_submit.sh|(null)|UNLIMITED|40K||/att/gpfsfs/home/spotter5/python/slurm_py_submit.sh 1 rcp85 29|0.99998411582782|(null)|Resources||PD|spotter5|(null)|(null)||0|*:*:*|26612 |n/a |1 |1 | |26612 |61101 |* |* |* |N/A |UNLIMITED |0:00 | |0 |myNodes |4294899073 |(Resources) |2019-03-19T13:03:57 |PENDING |474609391 |2018-03-19T11:57:39 |(null) |0/att/gpfsfs/home/spotter5/python
    (null)|(null)|1|0|N/A|(null)|j1101|no|26613|slurm_py_submit.sh|(null)|UNLIMITED|40K||/att/gpfsfs/home/spotter5/python/slurm_py_submit.sh 1 rcp85 30|0.99998411559499|(null)|Resources||PD|spotter5|(null)|(null)||0|*:*:*|26613 |n/a |1 |1 | |26613 |61101 |* |* |* |N/A |UNLIMITED |0:00 | |0 |myNodes |4294899072 |(Resources) |2019-03-19T13:03:57 |PENDING |474609391 |2018-03-19T11:57:39 |(null) |0/att/gpfsfs/home/spotter5/python
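
Again only as a sketch built from standard squeue format options (none of it comes from the original post), a trimmed query shows just the fields that matter here, i.e. job state, requested memory and CPUs, and the reason a job is pending:

    # Job id, state, requested memory, requested CPUs, and node list or pending
    # reason, restricted to the current user's jobs
    squeue -u "$USER" -o "%.10i %.10T %.10m %.5C %R"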

Source: https://stackoverflow.com/questions/49310200
Last updated: 2023-09-07 15:09

Accepted answer


If you store it as valid JSON, you can parse it and get its content.

<p class='message' data-dependencies='{"#first":{"equal":"Yes"}}'>
  Relevant Content
</p>

var json = $(".message").first().attr("data-dependencies");

// HTML5 browsers
// var json = document.querySelector(".message").dataset.dependencies;

var parsed = $.parseJSON(json);

alert(parsed["#first"].equal); // "Yes"

Or if you use jQuery's .data(), it will parse it automatically.

var parsed = $(".message").first().data("dependencies");

alert(parsed["#first"].equal); // "Yes"
