jp6/cu128/: ai-dynamo-runtime-0.2.1 metadata and description


Dynamo Inference Framework Runtime

author NVIDIA
author_email "NVIDIA Inc." <sw-dl-dynamo@nvidia.com>
classifiers
  • Development Status :: 4 - Beta
  • Intended Audience :: Developers
  • Intended Audience :: Science/Research
  • Intended Audience :: Information Technology
  • License :: OSI Approved :: Apache Software License
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.10
  • Programming Language :: Python :: 3.11
  • Programming Language :: Python :: 3.12
  • Topic :: Scientific/Engineering
  • Topic :: Scientific/Engineering :: Artificial Intelligence
  • Operating System :: POSIX :: Linux
description_content_type text/markdown; charset=UTF-8; variant=GFM
keywords llm, genai, inference, nvidia, distributed, dynamo
license Apache-2.0
project_urls
  • Source Code, https://github.com/ai-dynamo/dynamo.git
requires_dist
  • pydantic>=2.10.6,<2.11.0
  • uvloop>=0.21.0
  • nats-py>=2.6.0
requires_python >=3.10

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.

File: ai_dynamo_runtime-0.2.1-cp312-cp312-manylinux_2_38_aarch64.whl
  • Size: 15 MB
  • Type: Python Wheel
  • Python: 3.12
  • Replaced 1 time(s)
  • Uploaded to jp6/cu128 by jp6 2025-05-11 22:20:40

<!-- SPDX-FileCopyrightText: Copyright (c) 2024-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved. SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

# Dynamo Python Bindings

Python bindings for the Dynamo runtime system, enabling distributed computing capabilities for machine learning workloads.

## 🚀 Quick Start

1. Install uv: https://docs.astral.sh/uv/#getting-started

   ```
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

2. Install the protoc protobuf compiler: https://grpc.io/docs/protoc-installation/. For example, on an Ubuntu/Debian system:

   ```
   apt install protobuf-compiler
   ```

3. Set up a virtualenv:

   ```
   uv venv
   source .venv/bin/activate
   uv pip install maturin
   ```

4. Build and install the dynamo wheel:

   ```
   maturin develop --uv
   ```

# Run Examples

## Pre-requisite

See [README.md](../../runtime/README.md#️-prerequisites).

## Hello World Example

1. Start 3 separate shells, and activate the virtual environment in each: `source .venv/bin/activate`

2. In one shell (shell 1), run the example server instance-1: `python3 ./examples/hello_world/server.py`

3. (Optional) In another shell (shell 2), run the example server instance-2: `python3 ./examples/hello_world/server.py`

4. In the last shell (shell 3), run the example client: `python3 ./examples/hello_world/client.py`

If you run the example client in rapid succession, and you started more than one server instance above, you should see the requests from the client being distributed across the server instances in each server’s output. If only one server instance is started, you should see the requests go to that server each time.
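The distribution behavior described above can be pictured with a small pure-Python sketch. This is not the Dynamo API, and round-robin is shown purely for illustration; the actual routing policy may differ:

```python
import itertools

# Hypothetical sketch: simulate client requests being spread across server
# instances. The instance names mirror the shells started above.
instances = ["instance-1", "instance-2"]
router = itertools.cycle(instances)

# Six client requests in rapid succession, each handled by the next instance.
handled = [next(router) for _ in range(6)]
print(handled)
# prints: ['instance-1', 'instance-2', 'instance-1', 'instance-2', 'instance-1', 'instance-2']
```

With a single instance registered, the cycle degenerates to the one-server case: every request lands on the same instance.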

# Performance

The performance impact of synchronizing the Python and Rust async runtimes is a critical consideration when optimizing a highly concurrent, parallel distributed system.

The Python GIL is a global critical section and is ultimately the death of parallelism. Compounding this, when Rust async futures become ready, acquiring the GIL from those async event loop threads must be handled carefully. Under high load, acquiring the GIL or performing CPU-intensive tasks on the event loop threads can starve other async tasks of CPU resources. However, offloading with `tokio::task::spawn_blocking` is not without overhead either.
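The same trade-off exists on the Python side, where asyncio's `run_in_executor` plays a role analogous to `tokio::task::spawn_blocking`. This is a conceptual sketch, not Dynamo code; note that pure-Python work still holds the GIL, so in practice the offloaded function would typically be native code that releases it:

```python
import asyncio

def cpu_bound(n: int) -> int:
    # Stand-in for CPU-intensive work that would otherwise block the event loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

async def main() -> int:
    loop = asyncio.get_running_loop()
    # Offload the blocking work to a worker thread so the event loop thread
    # stays free to service other tasks, the asyncio analogue of
    # tokio::task::spawn_blocking. Like spawn_blocking, the handoff itself
    # has a cost, so it only pays off for sufficiently large units of work.
    return await loop.run_in_executor(None, cpu_bound, 10_000)

print(asyncio.run(main()))
```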

If you are bouncing many small messages back and forth between the Python and Rust event loops, where Rust requires GIL access for each message, this is a pattern where moving the code from Python to Rust will give you significant gains.
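To make the cost concrete, here is a hypothetical illustration: the `BoundaryCounter` class below is a stand-in for the Python/Rust boundary, not a real Dynamo API. Batching, or moving the inner loop to Rust, turns N crossings into one:

```python
class BoundaryCounter:
    """Hypothetical stand-in for the Python/Rust boundary: every call models
    one runtime crossing (event-loop handoff plus GIL acquisition)."""

    def __init__(self) -> None:
        self.crossings = 0

    def call(self, payload):
        self.crossings += 1
        # Model the work done on the far side of the boundary.
        if isinstance(payload, list):
            return [x * 2 for x in payload]
        return payload * 2

counter = BoundaryCounter()
items = list(range(1000))

# Chatty pattern: one crossing per message.
chatty = [counter.call(x) for x in items]
chatty_crossings = counter.crossings

# Batched pattern: one crossing for the whole batch.
counter.crossings = 0
batched = counter.call(items)
batched_crossings = counter.crossings

print(chatty_crossings, batched_crossings)  # prints: 1000 1
```

Both patterns produce the same results, but the chatty version pays the per-crossing overhead a thousand times.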