jp6/cu128/: ai-dynamo-0.2.0 metadata and description


Distributed Inference Framework

author_email "NVIDIA Inc." <sw-dl-dynamo@nvidia.com>
classifiers
  • Development Status :: 4 - Beta
  • Intended Audience :: Developers
  • Intended Audience :: Information Technology
  • Intended Audience :: Science/Research
  • License :: OSI Approved :: Apache Software License
  • Operating System :: POSIX :: Linux
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.10
  • Programming Language :: Python :: 3.11
  • Programming Language :: Python :: 3.12
  • Topic :: Scientific/Engineering
  • Topic :: Scientific/Engineering :: Artificial Intelligence
description_content_type text/markdown
keywords distributed, dynamo, genai, inference, llm, nvidia
license Apache-2.0
project_urls
  • Repository, https://github.com/ai-dynamo/dynamo.git
requires_dist
  • ai-dynamo-runtime==0.2.0
  • bentoml==1.4.8
  • circus>=0.17.0
  • distro
  • fastapi==0.115.6
  • kubernetes==32.0.1
  • pytest>=8.3.4
  • typer
  • types-psutil==7.0.0.20250218
  • ai-dynamo-vllm~=0.8.4; extra == 'all'
  • nixl; extra == 'all'
  • ai-dynamo-vllm~=0.8.4; extra == 'vllm'
requires_python >=3.10

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.

File: ai_dynamo-0.2.0-py3-none-any.whl
Size: 31 MB
Type: Python Wheel
Python: 3

NVIDIA Dynamo


| Roadmap | Support Matrix | Guides | Architecture and Features | APIs | SDK |

NVIDIA Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and provides LLM-specific capabilities such as disaggregated prefill and decode, dynamic GPU scheduling, LLM-aware request routing, accelerated data transfer, and KV cache offloading.

Built in Rust for performance and in Python for extensibility, Dynamo is fully open-source and driven by a transparent, OSS (Open Source Software) first development approach.

Installation

The following examples require a few system-level packages. We recommend Ubuntu 24.04 with an x86_64 CPU; see support_matrix.md for details.

apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev python3-pip python3-venv libucx0
python3 -m venv venv
source venv/bin/activate

pip install ai-dynamo[all]
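
After installation, a quick sanity check is to confirm the dynamo CLI is available (this assumes the virtual environment created above is still active):

dynamo --help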

[!NOTE] To ensure compatibility, please refer to the examples in the release branch or tag that matches the version you installed.

Building the Dynamo Base Image

Although not needed for local development, deploying your Dynamo pipelines to Kubernetes requires building and pushing a Dynamo base image to a container registry. You can use any container registry of your choice, such as Docker Hub or the NVIDIA NGC Container Registry.

Here's how to build it:

./container/build.sh
docker tag dynamo:latest-vllm <your-registry>/dynamo-base:latest-vllm
docker login <your-registry>
docker push <your-registry>/dynamo-base:latest-vllm

Notes about builds for specific frameworks:

After building, you can use this image by setting the DYNAMO_IMAGE environment variable to point to your built image:

export DYNAMO_IMAGE=<your-registry>/dynamo-base:latest-vllm

[!NOTE] We are working on leaner base images that can be built using the targets in the top-level Earthfile.

Running and Interacting with an LLM Locally

To run a model and interact with it locally, you can call dynamo run with a Hugging Face model. dynamo run supports several backends, including mistralrs, sglang, vllm, and tensorrtllm.

Example Command

dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
? User › Hello, how are you?
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
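
The out= argument selects the backend engine. As a sketch (assuming the corresponding engine is installed; sglang is used purely as an illustration), the same model can be served through another supported backend:

dynamo run out=sglang deepseek-ai/DeepSeek-R1-Distill-Llama-8B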

LLM Serving

Dynamo provides a simple way to spin up a local set of inference components, including an OpenAI-compatible HTTP frontend, a router (basic round-robin or KV-cache-aware), and one or more workers running the model.

To run a minimal configuration, you can use a pre-configured example.

Start Dynamo Distributed Runtime Services

First start the Dynamo Distributed Runtime services:

docker compose -f deploy/docker-compose.yml up -d
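
This starts the supporting runtime services in the background; you can confirm they are up before continuing:

docker compose -f deploy/docker-compose.yml ps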

Start Dynamo LLM Serving Components

Next, serve a minimal configuration with an HTTP server, a basic round-robin router, and a single worker:

cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
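
If the frontend exposes the standard OpenAI model-listing route (assumed here, not confirmed by this README), you can check that it is serving with:

curl localhost:8000/v1/models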

Send a Request

curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
    {
        "role": "user",
        "content": "Hello, how are you?"
    }
    ],
    "stream":false,
    "max_tokens": 300
  }' | jq
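
Because the frontend is OpenAI-compatible, the same endpoint should also accept a streamed request; this is a sketch assuming the usual OpenAI "stream": true server-sent-events behavior:

curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": true,
    "max_tokens": 300
  }'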

Local Development

If you use VS Code or Cursor, we provide a .devcontainer folder built on Microsoft's dev container extension. See its README for setup instructions.

Otherwise, to develop locally, we recommend working inside the container:

./container/build.sh
./container/run.sh -it --mount-workspace

cargo build --release
mkdir -p /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/http /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/llmctl /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/dynamo-run /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin

uv pip install -e .

Conda Environment

Alternatively, you can use a conda environment:

conda activate <ENV_NAME>

pip install nixl # Or install https://github.com/ai-dynamo/nixl from source

cargo build --release

# To install ai-dynamo-runtime from source
cd lib/bindings/python
pip install .

cd ../../../
pip install .[all]

# To test
docker compose -f deploy/docker-compose.yml up -d
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml