VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model
What it does
> Paper: https://arxiv.org/abs/2509.09372
> Project page: https://vla-adapter.github.io/
> HuggingFace: https://huggingface.co/VLA-Adapter
> GitHub: https://github.com/OpenHelix-Team/VLA-Adapter

- [2025/09/22] We released our code! An enhanced Pro version is also released (this version conforms to the pipeline in the original paper, but is optimized in implementation). Everyone is
Getting Started
```bash
git clone https://github.com/OpenHelix-Team/VLA-Adapter
```
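After cloning, a typical next step is to enter the repository and install it into the current Python environment. The editable install below is a sketch under the assumption that the repository ships a standard `setup.py`/`pyproject.toml`; consult the repository's own README for the authoritative instructions.

```shell
# Enter the cloned repository (directory name assumed from the repo URL).
cd VLA-Adapter

# Editable install is an assumption — the project may instead provide a
# requirements file or a conda environment specification.
pip install -e .
```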