9.5 Configuring And Running Individual Workload Managers
In the case of an existing failover setup, the same is run on the
passive head node.
• To verify that installation is proceeding as intended so far, the
status can be checked with:
[root@bright60 ~]# /etc/init.d/lsf status
Show status of the LSF subsystem
lim (pid 21138) is running...
res (pid 17381) is running...
sbatchd (pid 17383) is running...
while default queues can be seen by running (continuing in the
same shell):
[root@bright60 ~]# . /cm/shared/apps/lsf/var/conf/profile.lsf
[root@bright60 ~]# bqueues
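Illustrative output, with a placeholder queue entry (actual queue
names and values depend on the LSF configuration), might look like:
QUEUE_NAME      PRIO STATUS          MAX JL/U JL/P JL/H NJOBS  PEND   RUN  SUSP
normal           30  Open:Active       -    -    -    -     0     0     0     0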
5. The LSF startup script is installed onto the software images. It
should not be added via hostsetup or chkconfig. One way to
copy over the required files is:
for image in $(find /cm/images/ -mindepth 1 -maxdepth 1 -type d)
do cp /etc/init.d/lsf $image/etc/init.d/; done
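Whether the copy succeeded can be confirmed by checking for the
script in each image with a similar loop:
for image in $(find /cm/images/ -mindepth 1 -maxdepth 1 -type d)
do ls -l $image/etc/init.d/lsf; done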
6. Optionally, the LSF environment can be added to .bashrc with:
. /cm/shared/apps/lsf/var/conf/profile.lsf
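For example, to make the LSF environment available to all users on a
system that reads /etc/bashrc (a Red Hat-style path, assumed here for
illustration), the line can be appended with:
echo '. /cm/shared/apps/lsf/var/conf/profile.lsf' >> /etc/bashrc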
7. The LSF master server role is added to the head node (bright60 in
the following) with:
cmsh -c "device roles bright60; assign lsfserver; commit"
In case of an existing failover setup, role assignment should be repeated on the passive head node, say head2.
Example
cmsh -c "device roles head2; assign lsfserver; commit"
8. The LSF client role is added to each node category containing LFS
nodes. If the only category is default, then the command run is:
cmsh -c "category roles default; assign lsfclient; set allqueues\
yes; commit"
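If LSF nodes exist in further categories, the role assignment is
repeated for each such category. For example, for a hypothetical
category named gpu:
cmsh -c "category roles gpu; assign lsfclient; set allqueues yes; commit"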
9. Additional NFS entries for the export path of LSF are configured
for the head node and for LSF node categories. If the path is already NFS exported, for example by using the shared directory
/cm/shared, then this step is unnecessary. In cmgui the NFS
path can be set in the LSF client role from the “FS Exports” tab
for a “Head Node”, or “Node Category”, or “Nodes” item (section 4.10.1).
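In cmsh, an export can be added from the fsexports submode of device
mode. A minimal sketch, assuming the LSF directory is exported from
the head node under a path such as /cm/shared/apps/lsf, with node001
as an illustrative client (the export path and host here are
assumptions):
cmsh -c "device fsexports bright60; add /cm/shared/apps/lsf; set hosts node001; commit"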
10. The nodes are then rebooted, after which the LSF command bhosts
shows a display of the hosts and their status.
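A minimal illustration, with placeholder host names and slot counts
(the columns are the standard bhosts headers):
HOST_NAME          STATUS       JL/U    MAX  NJOBS    RUN  SSUSP  USUSP    RSV
bright60           ok              -      8      0      0      0      0      0
node001            ok              -      8      0      0      0      0      0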