- Microsoft HPC Pack SDK
- Purpose
- Developer audience
- Run-time requirements
- In this section
- Overview of Microsoft HPC Pack 2019
- Overview of Microsoft HPC Pack 2016
- Windows HPC Server SDK
- NVIDIA HPC SDK Installation Guide
- 1. Installations on Linux
- 1.1. Prepare to Install on Linux
- To Prepare for the Installation:
- 1.2. Installation Steps for Linux
- 1.3. End-user Environment Settings
- Notices
- Notice
- Trademarks
- Copyright
Microsoft HPC Pack SDK
Purpose
Microsoft HPC Pack (Windows HPC Server) provides secure, scalable cluster resource management, a job scheduler, and a Message Passing Interface (MPI) stack for parallel programming. This document provides details for writing client applications that interact with the job scheduler. For information about MPI, see Microsoft MPI.
You can use the Microsoft HPC Pack SDKs to schedule jobs on clusters that are created by using Microsoft HPC Pack 2008, Microsoft HPC Pack 2008 R2, Microsoft HPC Pack 2012, or Microsoft HPC Pack 2012 R2. You cannot use these SDKs to schedule jobs on Microsoft Compute Cluster Server 2003 (CCS). For information on the SDK for CCS, see Microsoft Compute Cluster Pack.
The following HPC Pack 2012 R2 and HPC Pack 2012 SDK downloads are available:
The following HPC Pack 2008 R2 SDK downloads are available:
The following HPC Pack 2008 SDK downloads are available:
For information about IT pro documentation for Windows HPC Server, see Windows HPC Server. For information about IT pro Technical Reference documentation for Microsoft HPC Pack, see the Microsoft HPC Pack Technical Reference.
Developer audience
The HPC Pack SDK is designed for C, C++, Fortran, and .NET developers, as well as for those writing scripts.
Run-time requirements
To run an application that uses the HPC SDK, the computer must have the HPC Pack Client Utilities installed.
For information about run-time requirements for a particular programming element, see the Requirements section of the reference page for that element.
In this section
Using HPC
Procedural guide for developing HPC client applications.
HPC Reference
Reference information for the native HPC API.
HPC .NET Reference
Reference information for the .NET HPC API.
Overview of Microsoft HPC Pack 2019
Learn how to evaluate, set up, deploy, maintain, and submit jobs to a high-performance computing (HPC) cluster that is created by using Microsoft HPC Pack 2019. HPC Pack allows you to create and manage HPC clusters consisting of dedicated on-premises Windows or Linux compute nodes, part-time servers, workstation computers, and dedicated or on-demand compute resources that are deployed in Microsoft Azure.
Based on where the compute resources are located, Microsoft HPC Pack can be deployed in three cluster modes:
Cluster mode | Highlights | Topology
---|---|---
HPC Pack on-premises (Get started with HPC Pack On-premises) | Supports Windows and Linux compute nodes; advanced job scheduling and resource management; proven and scale-tested capabilities; free of charge; easy to extend to hybrid | (topology diagram)
HPC Pack hybrid (Get started with HPC Pack Hybrid) | Burst to the cloud to handle peaks in demand or special projects; automate the deployment of Windows and Linux Azure VMs; use your current HPC scheduler or HPC Pack; pay only for what you use | (topology diagram)
HPC Pack IaaS (Get started with HPC Pack IaaS) | Deploy a cluster entirely in the cloud, on demand; use your current scheduler or HPC Pack; readily shift existing applications to the cloud; use templates, scripts, and gallery images to deploy on demand | (topology diagram)
Microsoft also offers a cloud-native HPC scheduler service called Azure Batch. You can either use Azure Batch directly, or use HPC Pack as your scheduler and burst jobs to Azure Batch.
Follow the links below to get started with HPC Pack:
Overview of Microsoft HPC Pack 2016
Learn how to evaluate, set up, deploy, maintain, and submit jobs to a high-performance computing (HPC) cluster that is created by using Microsoft HPC Pack 2016. HPC Pack allows you to create and manage HPC clusters consisting of dedicated on-premises Windows or Linux compute nodes, part-time servers, workstation computers, and dedicated or on-demand compute resources that are deployed in Microsoft Azure.
Based on where the compute resources are located, Microsoft HPC Pack can be deployed in three cluster modes:
Cluster mode | Highlights | Topology
---|---|---
HPC Pack on-premises (Get started with HPC Pack On-premises) | Supports Windows and Linux compute nodes; advanced job scheduling and resource management; proven and scale-tested capabilities; free of charge; easy to extend to hybrid | (topology diagram)
HPC Pack hybrid (Get started with HPC Pack Hybrid) | Burst to the cloud to handle peaks in demand or special projects; automate the deployment of Windows and Linux Azure VMs; use your current HPC scheduler or HPC Pack; pay only for what you use | (topology diagram)
HPC Pack IaaS (Get started with HPC Pack IaaS) | Deploy a cluster entirely in the cloud, on demand; use your current scheduler or HPC Pack; readily shift existing applications to the cloud; use templates, scripts, and gallery images to deploy on demand | (topology diagram)
Microsoft also offers a cloud-native HPC scheduler service called Azure Batch. You can either use Azure Batch directly, or use HPC Pack as your scheduler and burst jobs to Azure Batch.
Follow the links below to get started with HPC Pack:
Windows HPC Server SDK
NVIDIA HPC SDK Installation Guide
1. Installations on Linux
This section describes how to install the HPC SDK in a generic manner on Linux x86_64, OpenPOWER, or Arm Server systems with NVIDIA GPUs. It covers both local and network installations.
For a complete description of supported processors, Linux distributions, and CUDA versions please see the HPC SDK Release Notes.
1.1. Prepare to Install on Linux
Linux installations require some version of the GNU Compiler Collection (gcc) to be installed and in your $PATH prior to installing the HPC SDK software. For the HPC compilers to produce 64-bit executables, a 64-bit gcc compiler must be present. For C++ compiling and linking, the same must be true for g++. To determine whether such a compiler is installed on your system, do the following:
- Create a hello.c program.
- Compile with the -m64 option to create a 64-bit executable.
Run the file command on the produced executable. The output should look similar to the following:
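For example, a sketch of the check assuming gcc on a Linux x86_64 system (the binary name hello_64 is illustrative, and the exact file output varies by distribution and architecture):
```
$ cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
EOF
$ gcc -m64 -o hello_64 hello.c    # 64-bit compile; confirms a 64-bit-capable gcc is on $PATH
$ ./hello_64
hello
$ file ./hello_64                 # expect an ELF 64-bit executable for your target architecture
hello_64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked ...
```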
For C++ compilation, g++ version 4.4 or later is required. Create a hello.cpp program and invoke g++ with the -m64 argument. Make sure you are able to compile, link, and run the simple hello.cpp program before proceeding.
The file command on the hello_64_cpp binary should produce similar results as the C example.
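A corresponding sketch for the C++ check, under the same assumptions:
```
$ g++ -m64 -o hello_64_cpp hello.cpp   # requires g++ 4.4 or later
$ ./hello_64_cpp
hello
$ file ./hello_64_cpp                  # should again report a 64-bit ELF executable
```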
For cluster installations, access to all the nodes is required. In addition, you should be able to connect between nodes using rsh or ssh, including to/from the same node you are on. The hostnames for each node should be the same as those in the cluster machine list for the system (the machines.LINUX file).
In a typical local installation, the default installation base directory is /opt/nvidia/hpc_sdk.
If you choose to perform a network installation, you should specify:
- A shared file system for the installation base directory. All systems using the compilers should use a common pathname.
- A second directory name that is local to each of the systems where the HPC compilers and tools are used. This local directory contains the libraries to use when compiling and running on that machine. Use the same pathname on every system, and point to a private (i.e. non-shared) directory location.
This directory selection approach allows a network installation to support a network of machines running different versions of Linux. If all the platforms are identical, the shared installation location can perform a standard installation that all can use.
To Prepare for the Installation:
After downloading the HPC SDK installation package, bring up a shell command window on your system.
The installation instructions assume you are using csh, sh, ksh, bash, or some compatible shell. If you are using a shell that is not compatible with one of these shells, appropriate modifications are necessary when setting environment variables.
Verify you have enough free disk space for the HPC SDK installation.
- The uncompressed installation package requires 8 GB of total free disk space.
1.2. Installation Steps for Linux
Follow these instructions to install the software:
- Unpack the HPC SDK software. In the instructions that follow, replace the placeholder with the name of the file that you downloaded. Use the command sequence sketched below to unpack the tar file before installation. The tar file extracts an install script and an install_components folder into a directory with the same name as the tar file.
- Run the installation script(s). Install the compilers by running [sudo] ./install from the extracted directory, as shown in the sketch after this list.
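A minimal sketch of the sequence, using nvhpc_<version>.tar.gz as a placeholder for the package you actually downloaded:
```
# Placeholder file name; substitute the tar file you downloaded.
$ tar xpzf nvhpc_<version>.tar.gz
$ cd nvhpc_<version>
$ sudo ./install    # follow the interactive prompts, or use the silent-install variables below
```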
To run the installer non-interactively, set the following environment variables before invoking the install script:

Variable | Description
---|---
NVHPC_SILENT | (required) Set this variable to "true" to enable silent installation.
NVHPC_INSTALL_DIR | (required) Set this variable to a string containing the desired installation location, for example /opt/nvidia/hpc_sdk.
NVHPC_INSTALL_TYPE | (required) Set this variable to select the type of install. The accepted values are "single" for a single-system install or "network" for a network install.
NVHPC_INSTALL_LOCAL_DIR | (required for network install) Set this variable to a string containing the path to a local file system when choosing a network install.
NVHPC_DEFAULT_CUDA | (optional) Set this variable to the desired CUDA version, in the form XX.Y, for example 10.1 or 11.0.
NVHPC_STDPAR_CUDACC | (optional) Set this variable to force C++ stdpar GPU compilation to target a specific compute capability by default, for example 60, 70, or 75.
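For example, a non-interactive single-system install might be driven as follows (a sketch: the installation directory shown is the default, and the command assumes you are in the unpacked package directory):
```
# Silent single-system install; adjust NVHPC_INSTALL_DIR for your site.
$ sudo NVHPC_SILENT=true \
       NVHPC_INSTALL_DIR=/opt/nvidia/hpc_sdk \
       NVHPC_INSTALL_TYPE=single \
       ./install
```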
NVIDIA HPC Compiler documentation is available online in both HTML and PDF formats.
Complete network installation tasks.
For a network installation, you must run the local installation script on each system on the network where the compilers and tools will be available for use.
Running the local installation script creates a system-dependent file localrc.machinename in the /opt/nvidia/hpc_sdk/$NVARCH/21.3/compilers/bin directory. It also creates the following three directories containing libraries and shared objects specific to the operating system and system libraries on that machine:
- /usr/nvidia/shared/21.3/lib
- /usr/nvidia/shared/21.3/liblf
- /usr/nvidia/shared/21.3/lib64
Installation of the HPC SDK for Linux is now complete. For assistance with difficulties related to the installation, please reach out on the NVIDIA Developer Forums.
The following sections contain information detailing the directory structure of the HPC SDK installation, and instructions for end-users to initialize environment and path settings to use the compilers and tools.
1.3. End-user Environment Settings
After the software installation is complete, each user’s shell environment must be initialized to use the HPC SDK.
Each user must issue the following sequence of commands to initialize the shell environment before using the HPC SDK.
The HPC SDK keeps version numbers under an architecture-type directory, for example Linux_x86_64/21.3. The name of the architecture directory is of the form `uname -s`_`uname -m`. On OpenPOWER and Arm Server platforms the expected architecture names are "Linux_ppc64le" and "Linux_aarch64", respectively. The examples below store the output of these uname commands in the NVARCH environment variable, but you can specify the architecture name explicitly if desired.
To make the HPC SDK available:
In csh, use these commands:
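A sketch for csh, assuming the default installation directory /opt/nvidia/hpc_sdk and the 21.3 version directory referenced above:
```
% setenv NVARCH `uname -s`_`uname -m`
% setenv NVCOMPILERS /opt/nvidia/hpc_sdk
% setenv MANPATH "$MANPATH":$NVCOMPILERS/$NVARCH/21.3/compilers/man   # assumes MANPATH is already set
% set path = ($NVCOMPILERS/$NVARCH/21.3/compilers/bin $path)
```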
In bash, sh, or ksh, use these commands:
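The equivalent sketch for bash-compatible shells, under the same assumptions:
```
$ NVARCH=`uname -s`_`uname -m`; export NVARCH
$ NVCOMPILERS=/opt/nvidia/hpc_sdk; export NVCOMPILERS
$ MANPATH=$MANPATH:$NVCOMPILERS/$NVARCH/21.3/compilers/man; export MANPATH
$ PATH=$NVCOMPILERS/$NVARCH/21.3/compilers/bin:$PATH; export PATH
```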
Once the 64-bit compilers are available, you can make the OpenMPI commands and man pages accessible. In csh, use these commands:
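A sketch for csh, assuming the bundled OpenMPI lives under comm_libs/mpi inside the HPC SDK installation tree:
```
% set path = ($NVCOMPILERS/$NVARCH/21.3/comm_libs/mpi/bin $path)
% setenv MANPATH "$MANPATH":$NVCOMPILERS/$NVARCH/21.3/comm_libs/mpi/man
```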
And the equivalent in bash, sh, and ksh:
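Again a sketch, under the same path assumption:
```
$ export PATH=$NVCOMPILERS/$NVARCH/21.3/comm_libs/mpi/bin:$PATH
$ export MANPATH=$MANPATH:$NVCOMPILERS/$NVARCH/21.3/comm_libs/mpi/man
```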
Notices
Notice
ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.
Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA Corporation products are not authorized as critical components in life support devices or systems without express written approval of NVIDIA Corporation.
Trademarks
NVIDIA, the NVIDIA logo, CUDA, CUDA-X, GPUDirect, HPC SDK, NGC, NVIDIA Volta, NVIDIA DGX, NVIDIA Nsight, NVLink, NVSwitch, and Tesla are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
Copyright
© 2013–2021 NVIDIA Corporation. All rights reserved.