...
- Use the sample script available from https://jira.hpdd.intel.com/secure/attachment/31378/script_backup.sh
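  For example, to fetch it onto the system (saving it as scripts/main.sh is an assumption here, chosen to match the sbatch step later; the JIRA attachment must be reachable from the login node):
  wget -O scripts/main.sh https://jira.hpdd.intel.com/secure/attachment/31378/script_backup.sh
  chmod +x scripts/main.sh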
- Change the parameters below based on your requirements.
Slurm headers and their descriptions:
- #SBATCH -p skx-dev
  Partition name where you want to queue the job. Each partition has its own limits on the number of nodes, the number of hours a node can be used, and how many jobs can be queued; see https://portal.tacc.utexas.edu/user-guides/stampede2#running-queues
- #SBATCH -N 3
  Total number of nodes, in this case 3. You need NO_OF_SERVERS + NO_OF_CLIENTS + 1: one extra system must be reserved, which is used for initiating the tests. So if you need 1 server and 1 client for testing, you must reserve 3 systems; if you want 126 servers and 1 client node, you must reserve 128.
- #SBATCH -n 144
  Total number of MPI tasks (48 x total number of nodes).
- #SBATCH -t 02:00:00
  Run time. Keep it close to the expected duration so that, in the worst case, a stuck job will not keep consuming node hours.
- #SBATCH --mail-user=samir.raval@intel.com
  Your email ID. Once the script is launched you will be notified when the job starts and when it finishes, along with its return code.
For Example:
Slurm Job_id=4546499 Name=test_daos1 Began, Queued time 04:30:54 (04:30:54 is the time the job waited in the queue before starting)
Slurm Job_id=4546499 Name=test_daos1 Ended, Run time 00:01:33, COMPLETED, ExitCode 0 (00:01:33 is the time the job took to complete)
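Putting the headers together, the top of the sbatch script for this 1-server/1-client run would look roughly like the sketch below (the -J job name line is added here to match the notification example above; it is not part of the table):
  #!/bin/bash
  #SBATCH -p skx-dev
  #SBATCH -N 3
  #SBATCH -n 144
  #SBATCH -t 02:00:00
  #SBATCH --mail-user=samir.raval@intel.com
  #SBATCH -J test_daos1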
- Change the DAOS server/client counts and paths used in the script (a sketch of these assignments follows below):
  - DAOS_SERVERS: 1
  - DAOS_CLIENTS: 1
  - URI_FILE: /<LOCAL_PATH>/uri.txt
  - DAOS_SERVER_YAML: /<LOCAL_PATH>/daos_server_psm2.yml
  - In start_agent(): /<LOCAL_PATH>/daos_agent
  - In start_server(), --attach_info: /<LOCAL_PATH>/tmp
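  In the sample script these appear roughly as shell variable assignments (the exact layout may differ; replace /<LOCAL_PATH> with your actual path):
  DAOS_SERVERS=1
  DAOS_CLIENTS=1
  URI_FILE=/<LOCAL_PATH>/uri.txt
  DAOS_SERVER_YAML=/<LOCAL_PATH>/daos_server_psm2.yml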
- Create the log directory, for example /scratch/12345/samirrav/Log, and make sure it matches the path used in the sbatch script.
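  For example:
  mkdir -p /scratch/12345/samirrav/Log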
- Now run the sbatch script.
sbatch scripts/main.sh IOR
-----------------------------------------------------------------
Welcome to the Stampede2 Supercomputer
-----------------------------------------------------------------
No reservation for this job
--> Verifying valid submit host (login1)...OK
--> Verifying valid jobname...OK
--> Enforcing max jobs per user...OK
--> Verifying availability of your home dir (/home1/12345/samirrav)...OK
--> Verifying availability of your work dir (/work/12345/samirrav/stampede2)...OK
--> Verifying availability of your scratch dir (/scratch/12345/samirrav)...OK
--> Verifying valid ssh keys...OK
--> Verifying access to desired queue (skx-dev)...OK
--> Verifying job request is within current queue limits...OK
--> Checking available allocation (STAR-Intel)...OK
Submitted batch job 4551152
- The job will now be queued, and you will see "OK" printed for each check above. If something goes wrong at any stage, the job will not be queued and the user needs to debug the sbatch script.
- Check the status of the job using the command below. The output updates as the job gets resources and runs.
  login1(1038)$ squeue | grep samir
       4551152   skx-dev test_dao samirrav PD       0:00      3 (Resources)
- Once the job is finished, logs will be copied to the Log/4551152/ folder. All the server/client/agent logs from every system that was part of the job are copied there.
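  The logs can then be inspected there, e.g. (the directory below is assembled from the log-directory example above; the layout inside it is defined by the script):
  ls /scratch/12345/samirrav/Log/4551152/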
- The user can cancel the job at any time using scancel JOB_ID (for example, scancel 4551152).
Avocado setup on TACC (with Python 2):
Packages that need to be installed:
- pip install --user avocado-framework==57.0
- pip install --user avocado_framework_plugin_loader_yaml==57.0
- pip install --user avocado_framework_plugin_result_html==57.0
- pip install --user avocado_framework_plugin_varianter_yaml_to_mux==57.0
- pip install --user clustershell
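pip --user installs the avocado entry points under ~/.local/bin, so make sure that directory is on your PATH before running avocado (a standard pip user-install detail, not TACC-specific):
  export PATH=$HOME/.local/bin:$PATH
  avocado --version    # should report 57.0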
Avocado Sanity test:
login2(1221)$ avocado variants --tree -m daos/src/tests/ftest/pool/attribute.yaml
Multiplex tree representation:
 ┗━━ run
      ┣━━ hosts
      ┣━━ server_config
      ┗━━ attrtests
           ┣━━ createmode
           ┣━━ createset
           ┣━━ createsize
           ┣━━ name_handles
           ┃    ╚══ validlongname
           ┗━━ value_handles
                ╚══ validvalue
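If the variants resolve as above, the matching test can be launched with the same yaml file, along these lines (the test file path pool/attribute.py is an assumption based on the yaml name; adjust to the actual ftest layout):
  avocado run daos/src/tests/ftest/pool/attribute.py -m daos/src/tests/ftest/pool/attribute.yaml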