Hugging Face's New Robotics AI Model Can Run Locally on a MacBook

Hugging Face on Tuesday released SmolVLA, an open-source vision language action (VLA) artificial intelligence (AI) model. The model is aimed at robotics workflows and training-related tasks. The company claims that the AI model is small and efficient enough to run locally on a computer with a single consumer GPU, or even on a MacBook. The New York, US-based AI model repository also claimed that SmolVLA can outperform models that are much larger than it. The AI model is currently available to download.

Hugging Face's SmolVLA AI Model Can Run Locally on a MacBook

According to Hugging Face, advancements in robotics have been slow despite the growth in the AI space. The company says this is due to a lack of high-quality, diverse data and of large language models (LLMs) designed for robotics workflows.

VLAs have emerged as a solution to one of these problems, but most of the leading models from companies such as Google and Nvidia are proprietary and trained on private datasets. As a result, the larger robotics research community, which relies on open-source data, faces major bottlenecks in reproducing or building on these AI models, the post highlighted.

These VLA models can take in images, videos, or a live camera feed, interpret real-world conditions, and then carry out a prompted task using robotics hardware.

Hugging Face says SmolVLA addresses both of the pain points currently faced by the robotics research community: it is an open-source, robotics-focused model trained on an open dataset from the LeRobot community. SmolVLA is a 450-million-parameter AI model that can run on a desktop computer with a single compatible GPU, or even one of the newer MacBook devices.
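For a rough sense of what running locally means in practice, the short PyTorch snippet below simply picks whichever accelerator is available: a single consumer GPU via CUDA, Apple silicon via the MPS backend on a MacBook, or the CPU as a fallback. It is an illustrative sketch, not anything shipped with SmolVLA.

```python
import torch

# Pick the best locally available accelerator: a consumer GPU (CUDA),
# Apple silicon on a MacBook (MPS), or the CPU as a last resort.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Local inference would run on: {device}")
```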

As for the architecture, SmolVLA is built on the company's vision language models (VLMs). It consists of a SigLIP vision encoder and a SmolLM2 language decoder. Visual information is captured and extracted via the vision encoder, while natural language prompts are tokenised and fed into the decoder.
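The toy PyTorch sketch below illustrates that flow with tiny placeholder modules. The encoder and decoder here merely stand in for SigLIP and SmolLM2, and none of it is the actual SmolVLA code.

```python
import torch
import torch.nn as nn

DIM = 64  # toy hidden size, far smaller than the real model

vision_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(DIM))   # stand-in for the SigLIP encoder
prompt_embedder = nn.Embedding(1000, DIM)                           # stand-in for tokenisation + embedding
decoder_layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
decoder = nn.TransformerEncoder(decoder_layer, num_layers=2)         # stand-in for the SmolLM2 decoder

image = torch.rand(1, 3, 32, 32)              # one camera frame
prompt_ids = torch.randint(0, 1000, (1, 8))   # a tokenised instruction, e.g. "pick up the cube"

vision_tokens = vision_encoder(image).unsqueeze(1)   # (batch, 1, DIM)
text_tokens = prompt_embedder(prompt_ids)            # (batch, 8, DIM)

# Both streams end up as tokens in one context that the decoder attends over.
context = decoder(torch.cat([vision_tokens, text_tokens], dim=1))
print(context.shape)  # torch.Size([1, 9, 64])
```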

When dealing with movements or physical actions (executing the task via robotic hardware), sensorimotor signals are condensed into a single token. The decoder then combines all of this information into a single stream and processes it together. This enables the model to understand the real-world data and the task at hand contextually, rather than as separate entities.

SmolVLA then passes everything it has learned to another component called the action expert, which figures out what action to take. The action expert is a transformer-based module with roughly 100 million parameters. It predicts a short sequence of future moves for the robot (arm movements, walking steps, and so on), known as an action chunk.
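Continuing the toy sketch above, the snippet below shows the general idea: the robot's sensorimotor readings are projected into one extra token, fused with the vision-and-language context, and a small placeholder "action expert" decodes a whole chunk of future actions at once. The chunk length and action dimensions are made up for illustration.

```python
import torch
import torch.nn as nn

DIM = 64          # toy hidden size, matching the sketch above
CHUNK_LEN = 10    # number of future steps in one action chunk (illustrative)
ACTION_DIM = 6    # e.g. joint targets for a 6-DoF arm (illustrative)

# Project raw sensorimotor readings (joint angles, gripper state, ...) into one token.
state_projector = nn.Linear(12, DIM)

# Placeholder "action expert": a small transformer plus a head that emits the chunk.
expert_layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
action_expert = nn.TransformerEncoder(expert_layer, num_layers=2)
action_head = nn.Linear(DIM, CHUNK_LEN * ACTION_DIM)

context = torch.rand(1, 9, DIM)       # fused vision + language tokens from the previous sketch
robot_state = torch.rand(1, 12)       # current sensorimotor readings

state_token = state_projector(robot_state).unsqueeze(1)    # (1, 1, DIM)
stream = torch.cat([context, state_token], dim=1)          # one combined stream

# Pool the expert's output and decode a whole chunk of future actions in one go.
features = action_expert(stream).mean(dim=1)
action_chunk = action_head(features).view(1, CHUNK_LEN, ACTION_DIM)
print(action_chunk.shape)  # torch.Size([1, 10, 6])
```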

While it caters to a niche audience, those working with robotics can download the open weights, datasets, and training recipes to either reproduce or build on the SmolVLA model. Additionally, robotics enthusiasts who have access to a robotic arm or similar hardware can download these to run the model and try out real-time robotics workflows.
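Fetching those open weights is a one-liner with the huggingface_hub library, as in the sketch below. The repo id shown is an assumption based on how LeRobot checkpoints are usually named on the Hub, and should be checked against the official release.

```python
# Minimal sketch: download the published checkpoint files from the Hugging Face Hub.
# "lerobot/smolvla_base" is an assumed repo id, used here purely for illustration.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="lerobot/smolvla_base")
print(f"SmolVLA weights downloaded to: {local_dir}")
```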
