{"id":42,"date":"2022-06-14T13:55:24","date_gmt":"2022-06-14T08:25:24","guid":{"rendered":"https:\/\/varuna.aero.iitb.ac.in\/ace\/?page_id=42"},"modified":"2022-09-02T17:07:25","modified_gmt":"2022-09-02T11:37:25","slug":"openpbs-manual","status":"publish","type":"page","link":"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/openpbs-manual\/","title":{"rendered":"OpenPbs-manual"},"content":{"rendered":"\n<p style=\"font-size:16px\">OpenPBS software optimizes job scheduling and workload management in high-performance computing (HPC) environments \u2013 clusters, clouds, and supercomputers \u2013 improving system efficiency and people\u2019s productivity.<\/p>\n\n\n\n<p style=\"font-size:16px\">PBS Cheatsheet &#8211;<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\" style=\"font-size:16px\"><table><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\">pbsnodes -a<\/td><td class=\"has-text-align-center\" data-align=\"center\">Shows you the list of nodes with all details<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">pbsnodes -aSj<\/td><td class=\"has-text-align-center\" data-align=\"center\">Shows you detail overview of resources<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">qsub <\/td><td class=\"has-text-align-center\" data-align=\"center\">use to submit a job<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">qdel jobid<\/td><td class=\"has-text-align-center\" data-align=\"center\">delete a submitted job<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">qstat -a<\/td><td class=\"has-text-align-center\" data-align=\"center\">shows you the status of all jobs<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">tracejob jobid<\/td><td class=\"has-text-align-center\" data-align=\"center\">shows you the details status of a job<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p style=\"font-size:16px\">Sample OpenPBS script 
&#8211;<\/p>\n\n\n\n<pre class=\"wp-block-code\" style=\"font-size:15px\"><code>#!\/bin\/bash\n#PBS -N Jobname\n#PBS -q Queue_name\n#PBS -l select=16:ncpus=1:mpiprocs=1\n#PBS -l walltime=hh:mm:ss\n#PBS -j oe\n#PBS -V\n#PBS -o log.out\n\ncd $PBS_O_WORKDIR\ncat $PBS_NODEFILE &gt; .\/pbsnodes\n# Count the allocated cores listed in the node file\nPROCS1=$(wc -l &lt; $PBS_NODEFILE)\n\nmpirun -machinefile $PBS_NODEFILE -np $PROCS1 .\/filename\n\/bin\/hostname<\/code><\/pre>\n\n\n\n<p style=\"font-size:15px\"><em>The line \u201c#PBS -l select=16:ncpus=1:mpiprocs=1\u201d specifies the processors required for the job. \u2018select\u2019 specifies the number of chunks (units of resources, typically mapped to nodes) required. <\/em><\/p>\n\n\n\n<p style=\"font-size:15px\"><em>\u2018ncpus\u2019 indicates the number of CPUs required per chunk. Each CPU core is itself a resource, so if you do not mind how your job is split across nodes and want PBS to decide the placement, simply set \u2018select\u2019 to the total number of cores you want. For a 64-core job, the value will be <\/em><\/p>\n\n\n\n<pre class=\"wp-block-code\" style=\"font-size:15px\"><code>#PBS -l select=64:ncpus=1\n\n<em>(In this case, PBS starts filling the available cores from node 1 and continues until the 64-core request is satisfied.)<\/em><\/code><\/pre>\n\n\n\n<p style=\"font-size:15px\"><em>If you want the 64-core job to run on 4 nodes with 16 cores on each node (i.e. 16 cores x 4 nodes = 64 cores),<\/em><\/p>\n\n\n\n<p style=\"font-size:15px\">your PBS request should be \u2013<\/p>\n\n\n\n<pre class=\"wp-block-code\" style=\"font-size:14px\"><code>#PBS -l select=4:ncpus=16\n#PBS -l place=scatter\n\n<em>(In this case, PBS looks for 4 nodes with 16 free cores each and fills them to satisfy the 64-core request. 
The place value is important here; by default, place is free.)<\/em><\/code><\/pre>\n\n\n\n<p style=\"font-size:15px\"><em>Suppose you want your job to run on a single node and do not want PBS to split it across nodes according to core availability. (For example, if node1 has 6 free cores and node2 has 32 free cores, a 32-core job will be distributed between node1 and node2: PBS first fills the 6 free cores on node1 and then places the remaining 26 cores on node2.<\/em>)<\/p>\n\n\n\n<p style=\"font-size:15px\"><em>To avoid this, use place=pack, so your request should look like \u2013<\/em><\/p>\n\n\n\n<pre class=\"wp-block-code\" style=\"font-size:14px\"><code>#PBS -l select=32:ncpus=1\n#PBS -l place=pack<\/code><\/pre>\n\n\n\n<p style=\"font-size:15px\"><em>Our cluster has 32 cores on each node (via hyper-threading), so PBS cannot satisfy a pack request for a 64-core job<\/em>.<\/p>\n\n\n\n<p style=\"font-size:15px\"><em>\u2018mpiprocs\u2019 should be given only if your code runs with MPI, and its value should always equal \u2018ncpus\u2019.<\/em><\/p>\n\n\n\n<p class=\"has-vivid-red-color has-text-color has-small-font-size\"><em>Use the place statement to specify how the job\u2019s chunks are placed.<br>The place statement can take one of the following values: free (default) \/ pack \/ scatter<\/em>.<\/p>\n\n\n\n<p style=\"font-size:14.5px\">The sample script above submits a 16-core job and places it according to the availability of resources. Suppose you want your job to run on a single node and do not want it to be split. 
You can use the parameters below<\/p>\n\n\n\n<pre class=\"wp-block-code\" style=\"font-size:14px\"><code>#PBS -l select=16:ncpus=1:mpiprocs=1\n#PBS -l place=pack<\/code><\/pre>\n\n\n\n<p style=\"font-size:14.5px\">Suppose you want your job split equally across two nodes; you can use scatter.<\/p>\n\n\n\n<pre class=\"wp-block-code\" style=\"font-size:14px\"><code>#PBS -l select=2:ncpus=8:mpiprocs=8\n#PBS -l place=scatter\n.\n.\n.\nmpirun -machinefile $PBS_NODEFILE -np 16  .\/filename<\/code><\/pre>\n\n\n\n<p class=\"has-vivid-red-color has-text-color\" style=\"font-size:15px\"><strong><em>**In case you want to submit GPU code, you can do so as below &#8211;<\/em><\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\" style=\"font-size:14.5px\"><code>#PBS -l select=1:ncpus=1:ngpus=1<\/code><\/pre>\n\n\n\n<p class=\"has-vivid-purple-color has-text-color\" style=\"font-size:15px\"><strong>The above will submit your CUDA code to one of the GPU nodes available in the cluster.<\/strong><\/p>\n\n\n\n<p style=\"font-size:14px\"><span style=\"color: #800080;\"><em>**Sample OpenPBS scripts, which you can download and modify as per your requirements<\/em><\/span><\/p>\n\n\n\n<ul class=\"wp-block-list\" style=\"font-size:14px\"><li>Click to see the code &#8211; <a rel=\"noreferrer noopener\" href=\"https:\/\/varuna.aero.iitb.ac.in\/ace\/docs\/of.txt\" target=\"_blank\"><strong>OpenFOAM<\/strong><\/a> <\/li><\/ul>\n\n\n\n<ul class=\"has-black-color has-text-color wp-block-list\" style=\"font-size:14px\"><li>Click to see the code &#8211; <a rel=\"noreferrer noopener\" href=\"https:\/\/varuna.aero.iitb.ac.in\/ace\/docs\/matlab.txt\" target=\"_blank\"><strong>Matlab<\/strong><\/a> &#8211;<em> use the sample Matlab code<\/em> <strong><em>TestCode.m<\/em><\/strong>:<\/li><\/ul>\n\n\n\n<pre class=\"wp-block-code has-small-font-size\"><code>% TEST CODE\na = 2;\nb = 3;\nc = a+b\ndisplay('Simulation completed successfully!')\n<\/code><\/pre>\n\n\n\n<p style=\"font-size:15px\">OpenPBS job 
submission &#8211;<\/p>\n\n\n\n<iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/9xdQ5nQUBAM\" title=\"YouTube video player\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen=\"\" width=\"560\" height=\"315\" frameborder=\"0\"><\/iframe>\n","protected":false},"excerpt":{"rendered":"<p>OpenPBS software optimizes job scheduling and workload management in high-performance computing (HPC) environments \u2013 clusters, clouds, and supercomputers \u2013 improving system efficiency and people\u2019s productivity. PBS Cheatsheet &#8211; pbsnodes -a Shows you the list of nodes with all details pbsnodes -aSj Shows you detail overview of resources qsub use to submit a job qdel jobid <a class=\"read-more\" href=\"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/openpbs-manual\/\">&hellip;&nbsp;<span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"ngg_post_thumbnail":0,"footnotes":""},"class_list":["post-42","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/wp-json\/wp\/v2\/pages\/42","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/wp-json\/wp\/v2\/comments?post=42"}],"version-history":[{"count":42,"href":"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/wp-json\/wp\/v2\/pages\/42\/revisions"}],"predecessor-version":[{"id":254,"href":"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/wp-json\/wp\/v2\/
pages\/42\/revisions\/254"}],"wp:attachment":[{"href":"https:\/\/varuna.aero.iitb.ac.in\/ace\/index.php\/wp-json\/wp\/v2\/media?parent=42"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}