From Observation to Action: Latent Action-based Primitive Segmentation for VLA Pre-training in Industrial Settings

ShanghaiTech University, Hangzhou Dianzi University
Teaser

Our framework automatically segments continuous industrial videos into semantically coherent action primitives for VLA pre-training.

Abstract

We present a novel unsupervised framework that unlocks the vast amount of unlabeled human demonstration data in continuous industrial video streams for Vision-Language-Action (VLA) model pre-training.

Our method first trains a lightweight motion tokenizer to encode motion dynamics, then employs an unsupervised action segmenter leveraging a novel "Latent Action Energy" metric to discover and segment semantically coherent action primitives. The pipeline outputs both segmented video clips and their corresponding latent action sequences, providing structured data directly suitable for VLA pre-training.

Evaluations on public benchmarks and a proprietary electric motor assembly dataset demonstrate effective segmentation of key tasks performed by humans at workstations. Further clustering and quantitative assessment via a Vision-Language Model confirm the semantic coherence of the discovered action primitives. To our knowledge, this is the first fully automated end-to-end system for extracting and organizing VLA pre-training data from unstructured industrial videos, offering a scalable solution for embodied AI integration in manufacturing.

Video

Method Pipeline

Overview of the LAPS (Latent Action-based Primitive Segmentation) pipeline:

(1) Motion Tracking: Extracts motion keypoints from raw video using a point tracker.

(2) Action Detection & Segmentation: Generates a latent vector stream with a motion tokenizer and detects action boundaries, yielding segmented latent vectors, video clips, and action codes (a minimal sketch of this step follows the list).

(3) Semantic Action Clustering: Groups the segmented latent vectors into semantically meaningful action clusters.
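
Step (2) above can be illustrated with a minimal sketch. It assumes the motion tokenizer has already produced one latent vector per frame window, and it approximates the "Latent Action Energy" by the norm of consecutive latent differences; the paper's exact definition may differ, and the threshold tau and minimum segment length are illustrative placeholders, not values from the paper.

import numpy as np

def latent_action_energy(latents: np.ndarray) -> np.ndarray:
    """Per-step energy of a latent stream of shape (T, D).

    Illustrative proxy: L2 norm of consecutive latent differences.
    The paper's exact "Latent Action Energy" definition may differ.
    """
    deltas = np.diff(latents, axis=0)      # (T-1, D)
    return np.linalg.norm(deltas, axis=1)  # (T-1,)

def segment_by_energy(latents: np.ndarray, tau: float = 0.5,
                      min_len: int = 8) -> list[tuple[int, int]]:
    """Split a latent stream into candidate action primitives.

    Steps whose energy stays below tau are treated as idle or boundary
    regions; contiguous high-energy runs of at least min_len steps become
    candidate primitives. tau and min_len are illustrative hyperparameters.
    """
    energy = latent_action_energy(latents)
    active = energy > tau
    segments, start = [], None
    for t, is_active in enumerate(active):
        if is_active and start is None:
            start = t
        elif not is_active and start is not None:
            if t - start >= min_len:
                segments.append((start, t))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments

Each (start, end) pair indexes back into the original video, so the corresponding clip, latent sub-sequence, and action codes can be cut out together as one pre-training sample.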


Clustering Results

Visualization of discovered action primitives through clustering. Each cluster represents a semantically coherent action type.

Exocentric View Dataset

Top-down View Dataset
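
The clustering stage can be sketched minimally as follows, assuming each segment is summarized by mean-pooling its latent vectors into a single descriptor. K-means from scikit-learn stands in for whichever clustering method the paper actually uses, and the number of clusters is an illustrative choice.

import numpy as np
from sklearn.cluster import KMeans

def cluster_primitives(segment_latents: list[np.ndarray],
                       n_clusters: int = 8) -> np.ndarray:
    """Assign each segmented primitive to a semantic cluster.

    Each element of segment_latents is a (T_i, D) latent sequence for one
    segment; it is mean-pooled into a single D-dimensional descriptor.
    K-means is a stand-in for the paper's clustering method, and n_clusters
    is an illustrative choice.
    """
    feats = np.stack([seg.mean(axis=0) for seg in segment_latents])  # (N, D)
    kmeans = KMeans(n_clusters=n_clusters, n_init="auto", random_state=0)
    return kmeans.fit_predict(feats)  # one cluster id per segment

Segments that share a cluster label form a candidate primitive class, whose semantic coherence can then be checked, for example with a Vision-Language Model as described in the abstract.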

Related Links

Foundations: Latent Action Representations

AMPLIFY: Actionless Motion Priors for Robot Learning Collins et al., 2025
The architectural foundation for our Motion Tokenizer. We adapt their quantized autoencoder to define "Action Energy" for temporal segmentation rather than direct policy learning (a generic tokenizer sketch is given after this list).

LAPA: Latent Action Pretraining from Videos Ye et al., 2024
A key precedent in latent pre-training. While LAPA focuses on reconstruction objectives, our work addresses the upstream challenge of automatically segmenting primitives from continuous, unstructured streams.

LAPO: Learning to Act without Actions Schmidt & Jiang, 2023
Demonstrates the power of latent spaces. We advance this by moving away from pixel-level prediction (which captures noise) to a semantic behavioral intent metric for boundary detection.
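
For readers unfamiliar with quantized motion tokenizers, the sketch below (referenced in the AMPLIFY entry) shows the general idea under simplifying assumptions of our own: a small encoder maps a window of keypoint displacements to a latent vector that is snapped to its nearest codebook entry with a straight-through estimator. It is not AMPLIFY's or the paper's architecture; the layer sizes and input layout are illustrative.

import torch
import torch.nn as nn

class TinyMotionTokenizer(nn.Module):
    """Minimal VQ-style motion tokenizer (illustrative, not the paper's model).

    Input:  a window of keypoint displacements of shape (B, W, N, 2)
            (W frames, N tracked points, x/y deltas).
    Output: quantized latent (B, D), discrete code index (B,), reconstruction.
    """
    def __init__(self, n_points: int = 64, window: int = 8,
                 dim: int = 128, codebook_size: int = 512):
        super().__init__()
        in_dim = window * n_points * 2
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, dim))
        self.codebook = nn.Embedding(codebook_size, dim)
        self.decoder = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, motion: torch.Tensor):
        z = self.encoder(motion)                      # (B, D)
        dists = torch.cdist(z, self.codebook.weight)  # (B, K)
        codes = dists.argmin(dim=1)                   # (B,)
        z_q = self.codebook(codes)                    # nearest codebook entries
        z_q = z + (z_q - z).detach()                  # straight-through gradient
        recon = self.decoder(z_q).view(motion.shape)  # reconstruct displacements
        return z_q, codes, recon

Training would add a reconstruction loss plus the usual codebook and commitment terms; the per-window latents and codes are what the energy-based segmentation sketch above consumes.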

Context & Tools

OpenVLA: An Open-Source Vision-Language-Action Model Kim et al., 2024
A representative generalist VLA. Our pipeline aims to solve the critical "data sourcing bottleneck" for training such models in industrial domains.

CoTracker3: Point Tracking by Pseudo-labelling Karaev et al., 2025
The state-of-the-art point tracker utilized in our pipeline to extract dense motion dynamics from raw industrial footage.
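
As a usage sketch, dense tracks can be obtained through CoTracker's torch.hub interface. The entry-point name "cotracker3_offline" and the call signature below follow the co-tracker repository's README at the time of writing; check the repository for the current API, and the grid size is an illustrative choice.

import torch

# Load CoTracker via torch.hub (entry point as documented in the
# facebookresearch/co-tracker README; verify against the current version).
device = "cuda" if torch.cuda.is_available() else "cpu"
cotracker = torch.hub.load("facebookresearch/co-tracker",
                           "cotracker3_offline").to(device)

# frames: uint8 video of shape (T, H, W, 3), e.g. decoded with imageio.
# A random dummy clip is used here only to show the expected tensor layout.
frames = torch.randint(0, 255, (48, 384, 512, 3), dtype=torch.uint8)
video = frames.permute(0, 3, 1, 2)[None].float().to(device)  # (B, T, C, H, W)

# Track a regular grid of points across the clip; the returned tracks of
# shape (B, T, N, 2) are the raw motion keypoints fed to the motion tokenizer.
pred_tracks, pred_visibility = cotracker(video, grid_size=10)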

BibTeX

@misc{zhang2025observationactionlatentactionbased,
  title={From Observation to Action: Latent Action-based Primitive Segmentation for VLA Pre-training in Industrial Settings}, 
  author={Jiajie Zhang and Sören Schwertfeger and Alexander Kleiner},
  year={2025},
  eprint={2511.21428},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2511.21428}, 
}