...

  • The login node is not meant for computation.  Computationally intensive programs should be run as jobs using the queueing system. 
    • Users caught running computationally intensive programs (including containers) on the login node without permission from the system administrators may have their privileges on Grace suspended temporarily until they have a conversation with a system administrator. 
    • The system administrators may kill computationally intensive programs running on the login node without warning. 
    • Compiling programs on the login node is allowed if your personal computer lacks the facilities to compile them.
  • Users may not login (e.g. use ssh) directly into any of the computational nodes, unless the user has an active job running on that node.
    • Users caught logging into computational nodes may have their Grace accounts temporarily suspended until they have a conversation with a system administrator. 
    • The one exception: if you have a non-interactive job running on a particular computational node, you may ssh into that node to check the job's status, and nothing else.
    • Users should log in only to the login node (or use the Open OnDemand portal), and should access the computational nodes only through a scheduled, running job.
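As a sketch of how to stay within this rule, the scheduler's own tools can confirm where your jobs are running before you ssh anywhere (Slurm commands are assumed here; the node name is a hypothetical example — substitute the equivalents for Grace's actual queueing system):

```shell
# List your own active jobs and the nodes they occupy (Slurm assumed).
squeue -u "$USER" -o "%.10i %.9P %.20j %.8T %R"

# Only if one of your jobs is RUNNING on a node -- say, hypothetical node
# c0042 -- may you ssh in, and then only to check on that job, e.g.:
# ssh c0042 top -b -n 1 -u "$USER"
```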
  • If you need an interactive shell or similar on a computational node, please run an interactive job. 
    • Launching a non-interactive job to do interactive work is strongly discouraged and may lead to a conversation with a system administrator.  Interactive work (for example, manually typing in shell commands) should be done from an interactive job, not a regular non-interactive job.
    • Grace's Web Portal has a number of options for graphical interactive jobs, such as Jupyter Lab/Notebook or a Linux desktop. 
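Requesting an interactive job from the command line might look like the following sketch (Slurm syntax is assumed, and the partition name "interactive" is a hypothetical placeholder — check Grace's documentation for the real names):

```shell
# Request an interactive login shell on a compute node instead of working
# on the login node (Slurm assumed; partition name is hypothetical).
srun --partition=interactive --ntasks=1 --mem=4G --time=01:00:00 --pty bash -l
```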
  • Users shall accurately estimate the resources needed for a job, such as number of cores, amount of memory per core, wall clock time needed to run the job, etc.
    • Failure to accurately estimate these resource requests, especially underestimating them, could cause problems in running jobs, both for you and for other users. 
    • Inaccuracies in resource requests also can lead to inefficiencies in the scheduler, which potentially impacts everyone.
    • Users who consistently overestimate their resource requests, essentially reserving large blocks of resources with a job but then not using them, could be penalized, for example by having the priorities of all of their jobs lowered.
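A batch script that states its resource estimates explicitly might look like this sketch (Slurm directives are assumed; the job name and program are hypothetical placeholders):

```shell
#!/bin/bash
# Sketch of a batch script with explicit resource estimates (Slurm assumed).
# Request what the job actually needs -- no less, and not wildly more.
#SBATCH --job-name=example        # hypothetical job name
#SBATCH --ntasks=4                # number of cores
#SBATCH --mem-per-cpu=2G          # memory per core
#SBATCH --time=02:00:00           # wall clock time estimate
srun ./my_program                 # hypothetical program
```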
  • Users may not log into any of the management nodes by any means for any purpose without permission from a system administrator. 
    • Users caught doing so may have their Grace accounts temporarily suspended until they have a conversation with the system administrator. 
  • Grace's storage system (e.g. /home, /scratch, /storage) is not an appropriate place to archive or permanently store data or programs. 
    • Data or programs that are not in routine use by running jobs should be offloaded by the user to other storage, such as ROSS (the Research Object Storage System), the Research NAS, a user's own workstation, lab or departmental storage, or even to cloud storage, such as Box.
    • Keep in mind that Grace's storage system, though highly redundant and quite reliable, is not backed up.  The system operators take no responsibility for any data left on Grace that hasn't been backed up by the user.  
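Offloading idle data to other storage might look like the following sketch (the destination host and paths are hypothetical placeholders, not real systems named by this policy):

```shell
# Copy a finished project off Grace to external storage, then remove the
# local copy only after verifying the transfer (host/paths are hypothetical).
rsync -av --progress ~/projects/finished/ user@storage.example.edu:/archive/finished/

# After confirming the transfer completed without errors:
# rm -rf ~/projects/finished
```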
  • The system administrators have the option to remove data in any directory on Grace's storage systems that has not been accessed for 4 weeks or more (14 days if in /scratch, /local-scratch, or /tmp). 
    • The data or programs in a user's or a group's home directory, up to 1 TB, are exempt from this rule and may be kept indefinitely. 
    • Users who maintain more than 1 TB in their home directory may be asked to remove data to come within the 1 TB exemption limit.  If the user does not reduce usage to less than 1 TB after the request from the system administrators, then the 4 week rule will apply to that user's or group's home directory.
  • If you have a special project that requires that data be held on Grace's internal storage systems for more than 4 weeks, or that requires more than 1 TB of space in your home directory, or that requires a group shared directory, please submit a proposal in writing (e-mail to hpcadmin@uams.edu is sufficient) to the system administrators, detailing:  
      1. Succinctly, what the project or group is and why the space needs to be held on Grace's internal storage for more than 4 weeks.  
      2. A short name for the project or group that will be used as the name of the top level directory where this data will reside.  
      3. How much space the project or group anticipates needing.  
      4. For how long the project or group anticipates keeping the space before archiving it elsewhere.  
      5. What the backup or archiving plan for that data is. (The plan could be, 'this data requires neither backup nor archiving.')
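Files that have aged past the retention windows above can be spotted with `find`'s access-time test, as in this sketch (the `/scratch/$USER` and `/storage/$USER` paths are assumed layouts — adjust to where your data actually lives):

```shell
# Find files in scratch not accessed in the last 14 days (the policy's
# scratch retention window); these are candidates for removal or offloading.
find "/scratch/$USER" -type f -atime +14

# Likewise for data outside home not accessed in 4 weeks (28 days):
# find "/storage/$USER" -type f -atime +28
```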

...