jp6/cu128/: xgrammar-0.1.15 metadata and description


Efficient, Flexible and Portable Structured Generation

author MLC Team
classifiers
  • License :: OSI Approved :: Apache Software License
  • Development Status :: 4 - Beta
  • Intended Audience :: Developers
  • Intended Audience :: Education
  • Intended Audience :: Science/Research
description_content_type text/markdown
keywords machine learning, inference
license Apache 2.0
project_urls
  • Homepage, https://xgrammar.mlc.ai/
  • GitHub, https://github.com/mlc-ai/xgrammar
provides_extras test
requires_dist
  • pydantic
  • sentencepiece
  • tiktoken
  • torch>=1.10.0
  • transformers>=4.38.0
  • triton; platform_system == "linux" and platform_machine == "x86_64"
  • pytest; extra == "test"
  • protobuf; extra == "test"
  • huggingface-hub[cli]; extra == "test"
requires_python <4,>=3.9
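The triton entry above is gated by a PEP 508 environment marker, so it is only installed on matching platforms. As a minimal sketch (not the installer's actual code), marker evaluation fills in variables like `platform_system` and `platform_machine` from Python's `platform` module and tests the condition; note that `platform.system()` returns "Linux" capitalized, so the comparison below is done case-insensitively for illustration:

```python
import platform

# Sketch of evaluating the PEP 508 marker on the triton requirement:
#   triton; platform_system == "linux" and platform_machine == "x86_64"
# PEP 508 defines platform_system as platform.system() and platform_machine
# as platform.machine(). Case-insensitive matching here is an illustrative
# simplification, not what a real installer necessarily does.
def needs_triton() -> bool:
    env = {
        "platform_system": platform.system(),
        "platform_machine": platform.machine(),
    }
    return (env["platform_system"].lower() == "linux"
            and env["platform_machine"] == "x86_64")

print(needs_triton())
```

On an aarch64 Jetson host such as the jp6/cu128 index targets, this evaluates to False and triton is skipped.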

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.

Files
  • xgrammar-0.1.15-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (5 MB, Python Wheel, Python 3.10)
  • xgrammar-0.1.15-cp312-cp312-linux_aarch64.whl (5 MB, Python Wheel, Python 3.12); replaced 2 time(s); uploaded to jp6/cu128 by jp6 2025-03-15 05:18:07
  • xgrammar-0.1.15-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (5 MB, Python Wheel, Python 3.12); replaced 1 time(s); uploaded to jp6/cu128 by jp6 2025-03-06 18:04:33

XGrammar


Efficient, Flexible and Portable Structured Generation

Get Started | Documentation | Blogpost | Technical Report


Overview

XGrammar is an open-source library for efficient, flexible, and portable structured generation. It supports general context-free grammars to enable a broad range of structures, while applying careful system optimizations for fast execution. XGrammar features a minimal, portable C++ backend that can be easily integrated into multiple environments and frameworks. It is co-designed with the LLM inference engine, enabling zero-overhead structured generation in LLM inference.
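The core technique can be illustrated with a toy sketch (this is not the xgrammar API): before each decoding step, the engine computes a mask over the vocabulary so that only tokens that keep the output valid under the grammar can be sampled. The vocabulary, scores, and the `VALID` set below are all hypothetical stand-ins; a real engine compiles a context-free grammar instead of enumerating valid strings.

```python
from typing import List

# Toy vocabulary of multi-character tokens, as an LLM tokenizer might produce.
VOCAB: List[str] = ['{', '}', '"name"', ':', '"x"', '42', 'oops']

# Stand-in "grammar": an explicit set of valid complete outputs.
VALID = {'{"name":"x"}', '{"name":42}'}

def token_mask(prefix: str) -> List[bool]:
    """True for each vocab token that keeps the output a valid prefix."""
    return [any(s.startswith(prefix + t) for s in VALID) for t in VOCAB]

def constrained_greedy_decode(scores: List[float]) -> str:
    """Greedy decoding with the grammar mask applied to the token scores."""
    out = ''
    while out not in VALID:
        mask = token_mask(out)
        # Pick the highest-scoring token that is still grammatically legal.
        best = max((i for i, ok in enumerate(mask) if ok),
                   key=lambda i: scores[i])
        out += VOCAB[best]
    return out

# Fixed per-token scores standing in for model logits; 'oops' scores highest
# but is always masked out, so the result is guaranteed to be valid.
scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 9.9]
print(constrained_greedy_decode(scores))  # → {"name":42}
```

The system work in XGrammar is largely about making this masking step effectively free: precompiling the grammar against the tokenizer's vocabulary so the per-step mask costs next to nothing during inference.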

Get Started

Please visit our documentation to get started with XGrammar.