Open Machine Learning Compiler Framework
A GPU cluster manager that configures and orchestrates inference engines like vLLM and SGLang for high-performance AI model deployment.
Stable Diffusion web UI
A deep learning package for many-body potential energy representation and molecular dynamics
Self-host the powerful Chatterbox TTS model. This server offers a user-friendly Web UI, flexible API endpoints (incl. OpenAI compatible), predefined voices, voice cloning, and large audiobook-scale text processing. Runs accelerated on NVIDIA (CUDA), AMD (ROCm), and CPU.
AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming
A CLI, a web UI, and an MCP server for the Z-Image-Turbo text-to-image generation model (supports the Tongyi-MAI/Z-Image-Turbo base model as well as quantized models)
vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
Windows-only version of ComfyUI that uses AMD's official ROCm and PyTorch libraries for better performance on AMD GPUs (auto-installation included, along with popular performance-enhancing packages such as triton, sage-attention, flash-attention, and bitsandbytes)
Horizon chart for CPU/GPU/Neural Engine utilization monitoring. Supports Apple M1-M4, NVIDIA, and AMD GPUs
A "standard library" of Triton kernels.
Experimental support for many TTS/STT models wrapped in a Wyoming API for consumption via Home Assistant