---
title: Public Images
description: OneAI Documentation
tags: EN
---

[OneAI Documentation](/s/user-guide-en)

# Public Images

The system's built-in public images include various NGC-optimized AI container images and AI training frameworks, divided into three categories. When using the Container Service, Job Scheduling Service, or AI Maker, you can choose these public images and use the system resources to quickly deploy a GPU working environment and improve work efficiency!

* [**NVIDIA Official Images**](#NVIDIA-Official-Images): NGC-based optimized images.
* [**Notebook Images**](#Notebook-Images): development environments provided exclusively for the [**Notebook Service**](/s/notebook-en).
* [**AI Maker Case Study Images**](#AI-Maker-Case-Study-Image): development environments for the [**AI Maker Case Studies**](/s/user-guide-en#case-study).

## NVIDIA Official Images

NGC-based optimized images, pre-loaded with SSH and JupyterLab services and AI Maker related packages to provide a ready-to-use deep learning development environment.

| Image Version | Description | Image Source |
|-|-|-|
| Cheminformatics | Cheminformatics visualizes chemical compounds and shows the corresponding chemical structures and physical properties through a web UI. This helps users demonstrate real-time exploration and analysis of a database of chemical compounds. Users can also generate new molecules, either by exploring the latent space between two molecules or by sampling around a molecule. | [nvcr.io/nvidia/clara/<br>cheminformatics_<br>demo:0.1.2](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/containers/cheminformatics_demo) |
| Kaldi-21.02-py3 | Kaldi is an open-source software framework for speech processing. | [nvcr.io/nvidia/kaldi:21.02-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/kaldi) |
| Kaldi-21.08-py3 | Kaldi is an open-source software framework for speech processing. | [nvcr.io/nvidia/kaldi:21.08-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/kaldi) |
| MegamolBart | MegaMolBART is a drug discovery model trained on SMILES chemical notation; this image includes inferencing for MegaMolBART models. | [nvcr.io/nvidia/clara/<br>megamolbart:0.1.2](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/containers/megamolbart) |
| MXNet-21.02-py3 | MXNet is an open-source deep learning framework that allows you to define, train, and deploy deep neural networks on a wide array of devices, from cloud infrastructure to mobile devices. It is highly scalable, allowing for fast model training, and supports a flexible programming model and multiple languages. | [nvcr.io/nvidia/mxnet:21.02-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/mxnet) |
| PyTorch-21.02-py3 | PyTorch is a GPU-accelerated tensor computation framework. Functionality can be extended with common Python libraries such as NumPy and SciPy. | [nvcr.io/nvidia/pytorch:21.02-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) |
| TensorFlow-21.02-tf1-py3<br><sup style="color:black">This public image is no longer maintained since 2023/01/01</sup> | TensorFlow is an open-source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture, allowing easy deployment across a variety of platforms and devices. | [nvcr.io/nvidia/tensorflow:21.02-tf1-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow) |
| TensorFlow-21.02-tf2-py3 | TensorFlow is an open-source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture, allowing easy deployment across a variety of platforms and devices. | [nvcr.io/nvidia/tensorflow:21.02-tf2-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow) |
| TensorRT-21.02-py3 | NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network and produces a highly optimized runtime engine that performs inference for that network. | [nvcr.io/nvidia/tensorrt:21.02-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrt) |
| TensorRT-22.08-py3 | NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network and produces a highly optimized runtime engine that performs inference for that network. | [nvcr.io/nvidia/tensorrt:22.08-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrt) |
| TritonServer-21.02-py3 | Triton Inference Server is open-source software that lets teams deploy trained AI models from any framework, from local or cloud storage, on any GPU- or CPU-based infrastructure in the cloud, data center, or embedded devices. | [nvcr.io/nvidia/tritonserver:21.02-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver) |
| TritonServer-22.08-py3 | Triton Inference Server is open-source software that lets teams deploy trained AI models from any framework, from local or cloud storage, on any GPU- or CPU-based infrastructure in the cloud, data center, or embedded devices. | [nvcr.io/nvidia/tritonserver:22.08-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver) |

## Notebook Images

The [**Notebook Service**](/s/notebook-en) images integrate mainstream development environments, including the JupyterLab IDE, deep learning frameworks (TensorFlow, PyTorch, MXNet) and their packages, with support for data science languages (Julia, R) and data analysis engines (Spark).
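The image sources listed in the tables are ordinary container image references, so they can also be pulled and tried outside the platform. A minimal sketch, assuming a local machine with Docker and the NVIDIA Container Toolkit installed (the `--gpus` flag requires the toolkit; port 8888 is shown only as a typical JupyterLab port, not a platform default):

```shell
# Pull one of the public images listed above from the NGC registry.
docker pull nvcr.io/nvidia/pytorch:21.02-py3

# Start an interactive container from it with all local GPUs attached,
# publishing a port for a JupyterLab server if you launch one inside.
docker run --rm -it --gpus all -p 8888:8888 nvcr.io/nvidia/pytorch:21.02-py3
```

On the platform itself, these images are selected through the service UI instead of the Docker CLI.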
| Image Version | Description | Image Source |
|-|-|-|
| AutoDock | AutoDock is an open-source molecular simulation application. It is mainly used in molecular docking and virtual screening, and predicts how small molecules bind to a receptor of known 3D structure. This tool is applicable to structure-based drug discovery and exploration of the basic mechanisms of biomolecular structure and function. | [nvcr.io/hpc/<br>autodock:2020.06](https://catalog.ngc.nvidia.com/orgs/hpc/containers/autodock/tags) |
| AutoDock Vina | AutoDock Vina is an open-source molecular simulation application. It is mainly used in molecular docking and virtual screening, and predicts how small molecules bind to a receptor of known 3D structure. This tool is applicable to structure-based drug discovery and exploration of the basic mechanisms of biomolecular structure and function. | [nvidia/cuda:<br>11.2.0-devel-ubuntu20.04](https://catalog.ngc.nvidia.com/orgs/hpc/containers/autodock/tags) |
| CUDA-11.2.0-cudnn8-devel-ubuntu20.04 | CUDA is a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of NVIDIA GPUs. | [nvcr.io/nvidia/cuda:11.2.0-cudnn8-devel-ubuntu20.04](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda) |
| DataScience-lab-3.2.1<br><sup style="color:red">This public image will no longer be provided after 2023/03/30</sup> | Data science work environment with the Julia, Python, and R programming languages and packages. | [jupyter/datascience-notebook:lab-3.2.1](https://hub.docker.com/r/jupyter/datascience-notebook) |
| DataScience-lab | Data science work environment with the Julia, Python, and R programming languages and packages. | [jupyter/datascience-notebook:lab-3.2.1](https://hub.docker.com/r/jupyter/datascience-notebook) |
| Gromacs | GROMACS is a popular molecular dynamics application mainly designed for simulations of biochemical molecules like proteins, lipids, and nucleic acids that have many complicated bonded interactions. This release is optimized by NVIDIA for GPU usage. | [nvcr.io/hpc/<br>gromacs:2021.3](https://catalog.ngc.nvidia.com/orgs/hpc/containers/gromacs/tags) |
| Monai-1.0.0 | MONAI is a community-supported, PyTorch-based framework for deep learning in healthcare imaging. It provides domain-optimized foundational capabilities for developing healthcare imaging training workflows in a native PyTorch paradigm. | [nvcr.io/nvidia/pytorch:22.08-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) |
| MXNet-21.02-py3 | MXNet is an open-source deep learning framework that allows you to define, train, and deploy deep neural networks on a wide array of devices, from cloud infrastructure to mobile devices. It is highly scalable, allowing for fast model training, and supports a flexible programming model and multiple languages. | [nvcr.io/nvidia/mxnet:21.02-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/mxnet) |
| PySpark-spark-3.2.0 | PySpark is an interface for Apache Spark in Python. It not only allows you to write Spark applications using Python APIs, but also provides the PySpark shell for interactively analyzing your data in a distributed environment. | [jupyter/pyspark-notebook:spark-3.2.0](https://hub.docker.com/r/jupyter/pyspark-notebook) |
| PyTorch-21.02-py3 | PyTorch is a GPU-accelerated tensor computation framework. Functionality can be extended with common Python libraries such as NumPy and SciPy. | [nvcr.io/nvidia/pytorch:21.02-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) |
| PyTorch-22.08-py3 | PyTorch is a GPU-accelerated tensor computation framework. Functionality can be extended with common Python libraries such as NumPy and SciPy. | [nvcr.io/nvidia/pytorch:22.08-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) |
| RAPIDS-22.04 | The RAPIDS suite of software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. | [nvcr.io/nvidia/rapidsai/rapidsai:22.04-cuda11.2-runtime-ubuntu20.04](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/rapidsai/containers/rapidsai) |
| RAPIDS-22.08 | The RAPIDS suite of software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. | [nvcr.io/nvidia/rapidsai/rapidsai:22.08-cuda11.2-runtime-ubuntu20.04-py3.9](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/rapidsai/containers/rapidsai) |
| TensorFlow-21.02-tf1-py3<br><sup style="color:black">This public image is no longer maintained since 2023/01/01</sup> | TensorFlow is an open-source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture, allowing easy deployment across a variety of platforms and devices. | [nvcr.io/nvidia/tensorflow:21.02-tf1-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow) |
| TensorFlow-21.02-tf2-py3 | TensorFlow is an open-source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture, allowing easy deployment across a variety of platforms and devices. | [nvcr.io/nvidia/tensorflow:21.02-tf2-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow) |
| TensorFlow-22.08-tf2-py3 | TensorFlow is an open-source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture, allowing easy deployment across a variety of platforms and devices. | [nvcr.io/nvidia/tensorflow:22.08-tf2-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow) |
| Parabricks-4.0.0 | NVIDIA Clara Parabricks is an accelerated compute framework that supports applications across the genomics industry, primarily supporting analytical workflows for DNA, RNA, and somatic mutation detection applications. | [nvcr.io/nvidia/clara/clara-parabricks:4.0.0-1](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/containers/clara-parabricks) |

:::warning
:warning: **Note:** If you need to use the Elyra package, it is recommended to run it through the CLI. For details, please refer to the [**Elyra Running Pipelines documentation**](https://elyra.readthedocs.io/en/latest/user_guide/pipelines.html#running-pipelines).
:::

## AI Maker Case Study Images

These images are development environments specially prepared for the [**AI Maker Case Studies**](/s/user-guide-en#Case-Study).

| Image | Image Version | Description | Image Source |
|-|-|-|-|
| yolo | v3<br><sup style="color:black">This public image is no longer maintained since 2023/01/01</sup> | Provides a deep learning development environment for object detection based on the YOLO neural network architecture. | [nvidia/cuda:11.0.3-cudnn8-devel-ubuntu18.04](https://hub.docker.com/layers/cuda/nvidia/cuda/11.0.3-cudnn8-devel-ubuntu18.04/images/sha256-808c17333cebdc65e914bd1211903695652bc52dd14fe109ac505955f744f465?context=explore) |
| yolo | v4 | Provides a deep learning development environment for object detection based on the YOLO neural network architecture. | [nvidia/cuda:11.0.3-cudnn8-devel-ubuntu18.04](https://hub.docker.com/layers/cuda/nvidia/cuda/11.0.3-cudnn8-devel-ubuntu18.04/images/sha256-808c17333cebdc65e914bd1211903695652bc52dd14fe109ac505955f744f465?context=explore) |
| yolo | v7 | Provides a deep learning development environment for object detection based on the YOLO neural network architecture. | [nvcr.io/nvidia/pytorch:21.08-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) |
| clara | v3.0<br><sup style="color:red">This public image will no longer be provided after 2023/03/30</sup> | Provides a deep learning development environment for medical image recognition using the NVIDIA Clara Train SDK. | [nvcr.io/nvidia/clara-train-sdk:v3.0](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/clara-train-sdk/tags) |
| clara | v4.0 | Provides a deep learning development environment for medical image recognition using the NVIDIA Clara Train SDK. | [nvcr.io/nvidia/clara-train-sdk:v4.0](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/clara-train-sdk/tags) |
| clara-nginx | v1 | Provides an auxiliary inference service for medical image recognition by using an NGINX reverse proxy to connect with the Clara AIAA Server. | [debian:buster-slim](https://hub.docker.com/layers/debian/library/debian/buster-slim/images/sha256-22a36d295282f4cfc4faaf40819177884ea0b2942591ee47b118af367e4c5152?context=explore) |
| cvat-tritonserver | v21.03 | Uses the NVIDIA Triton Inference Server as the base image; it can be used to optimize the model deployment process and inference efficiency, and to connect with the Computer Vision Annotation Tool (CVAT) to develop applications related to automatic image annotation. | [nvcr.io/nvidia/tensorrtserver:19.10-py3](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrtserver/tags) |
| image-classification | v1 | Provides a development environment for image classifiers based on the TensorFlow machine learning framework, with seven built-in deep learning pre-trained models. | [tensorflow/tensorflow:2.5.0-gpu](https://hub.docker.com/layers/tensorflow/tensorflow/2.5.0-gpu/images/sha256-0cb24474909c8ef0a3772c64a0fd1cf4e5ff2b806d39fd36abf716d6ea7eefb3?context=explore) |
| ml-sklearn | v1<br><sup style="color:black">This public image is no longer maintained since 2023/01/01</sup> | Provides various classification and regression tools based on the scikit-learn machine learning framework. | [ubuntu:18.04](https://hub.docker.com/layers/ubuntu/library/ubuntu/18.04/images/sha256-8da4e9509bfe5e09df6502e7a8e93c63e4d0d9dbaa9f92d7d767f96d6c20a78a?context=explore) |
| ml-sklearn | v2 | Provides various classification and regression tools based on the scikit-learn machine learning framework. | [nvidia/cuda:11.3.0-devel-ubuntu20.04](https://hub.docker.com/layers/nvidia/cuda/11.3.0-cudnn8-devel-ubuntu20.04/images/sha256-d2526bbffdfe9db308edce3ab1a1d8e009581c5417466705fd61a0ccc39c041b?context=explore) |
| pedestrian-attribute-recognition | v1 | Provides a development environment for pedestrian attribute recognition based on the Keras and TensorFlow machine learning frameworks, using ResNet50 pre-trained models. | [nvidia/cuda:11.2.0-cudnn8-devel-ubuntu18.04](https://gitlab.com/nvidia/container-images/cuda/-/tree/master) |
| huggingface | v1 | Fine-tune or train models based on the Hugging Face machine learning framework, use custom datasets, and build related applications using the Hub's tens of thousands of pre-trained models. | [nvidia/cuda:11.3.0-base-ubuntu18.04](https://hub.docker.com/layers/cuda/nvidia/cuda/11.3.0-base-ubuntu18.04/images/sha256-8c14110b0366db71e9d0dfb461834e4ab4c68a5ec1896d3c13011777637c3ced?context=explore) |

:::info
:bulb: **Tips:** The images and versions used by each [**Case Study**](/s/user-guide-en#Case-Study) are as follows.
| Case Study | Image and version |
|-|-|
| [AI Maker Case Study - YOLOv7 Image Recognition](/s/casestudy-yolov7-en) | yolo:v7<br>cvat-tritonserver:v21.03 |
| [AI Maker Case Study - YOLOv4 Image Recognition](/s/JyKyKQe1ce) | yolo:v4<br>cvat-tritonserver:v21.03 |
| [AI Maker Case Study - MONAI 1.0 Tutorial: Train 3D Segmentation Model Using Spleen CT Data](/s/casestudy-monai-en) | notebook:Monai-1.0.0<br>clara-nginx:v1 |
| [AI Maker Case Study - Clara 4.0 Tutorial: Train 3D Segmentation Model Using Spleen CT Data](/s/vbbC8y7kDe) | clara:v4<br>clara-nginx:v1 |
| [AI Maker Case Study - Image Classification](/s/6FCAc5sdIe) | image-classification:v1 |
| [AI Maker Case Study - Machine Learning with Tabular Data: Classification](/s/WBhgQVp5ge) | ml-sklearn:v2 |
| [AI Maker Case Study - Machine Learning with Tabular Data: Regression](/s/kgygJZQNxe) | ml-sklearn:v2 |
| [AI Maker Case Study - Hugging Face Text Classification](/s/hf-text-classification-en) | huggingface:v1 |
| [AI Maker Case Study - Hugging Face Audio Classification](/s/hf-audio-classification-en) | huggingface:v1 |
| [AI Maker Case Study - Hugging Face Image Classification](/s/hf-image-classification-en) | huggingface:v1 |
| [AI Maker Case Study - Hugging Face Token Classification](/s/hf-token-classification-en) | huggingface:v1 |
| [AI Maker Case Study - Hugging Face Question Answering](/s/hf-question-answering-en) | huggingface:v1 |
| [AI Maker Case Study - Hugging Face Translation](/s/hf-translation-en) | huggingface:v1 |
| [AI Maker Case Study - Hugging Face Summarization](/s/hf-summarization-en) | huggingface:v1 |
| [AI Maker Case Study - Hugging Face Speech Recognition](/s/hf-speech-recognition-en) | huggingface:v1 |
| [AI Maker Case Study - Hugging Face Object Detection](/s/hf-object-detection-en) | huggingface:v1 |
| [AI Maker Case Study - Pedestrian Attribute Recognition](/s/Uj5yw5Qu_e) | pedestrian-attribute-recognition:v1 |
| [AI Maker Case Study - Implement Assisted Inference Module to CVAT](/s/P5WlQlSmce) | clara:v4 |
| [Case Study - Deploy NVIDIA Clara Federated Learning in OneAI](/s/UIzgOPv5Re) | clara:v4 |
| [Case Study - Deploy NVIDIA Federated Learning Application Runtime Environment (NVIDIA FLARE) in OneAI](/s/CPWng_lxZke) | clara:v4<br>nvidia-official-images:PyTorch-21.02-py3 |
:::