…B, an open-weight multimodal vision AI model designed to deliver strong math, science, document, and UI reasoning with far ...
Meta’s Llama 3.2 has been developed to redefine how large language models (LLMs) interact with visual data. By introducing a groundbreaking architecture that seamlessly integrates image understanding ...
Explore how vision-language-action models like Helix, GR00T N1, and RT-1 are enabling robots to understand instructions and act autonomously.
DeepSeek-VL2 is a sophisticated vision-language model designed to address complex multimodal tasks with remarkable efficiency and precision. Built on a new mixture-of-experts (MoE) architecture, this ...
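For readers unfamiliar with the mixture-of-experts idea mentioned above, here is a minimal PyTorch sketch of an MoE feed-forward layer: a router scores each token and dispatches it to a single expert, so only a fraction of the parameters is active per token. The layer sizes and the top-1 routing choice are illustrative assumptions, not DeepSeek-VL2's actual design.

```python
# Minimal sketch of a mixture-of-experts (MoE) layer.
# Sizes and top-1 routing are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # scores each token per expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Each token goes to its single best-scoring
        # expert (top-1), so most experts stay inactive for any given token.
        logits = self.router(x)                # (tokens, num_experts)
        weights = F.softmax(logits, dim=-1)
        top_w, top_idx = weights.max(dim=-1)   # top-1 gate value and expert index
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoELayer(d_model=64, d_hidden=256, num_experts=4)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```

The efficiency claim follows directly from this structure: with top-1 routing over four experts, each token activates roughly a quarter of the layer's expert parameters.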
Microsoft's Phi-4-reasoning-vision-15B uses careful data curation and selective reasoning to compete with models trained on ...
Microsoft’s Phi-4-reasoning-vision-15B model shows how compact AI systems can combine vision and reasoning, signalling a broader industry move towards efficiency rather than simply building ever ...
February brought new coding models, and vision-language models impressed with OCR. Open Responses aims to establish itself as a ...
Hugging Face Inc. today open-sourced SmolVLM-256M, a new vision-language model with the lowest parameter count in its category. The algorithm’s small footprint allows it to run on devices such as ...
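Because SmolVLM-256M is published on the Hugging Face Hub, it can be tried locally with the transformers library. The sketch below follows the published SmolVLM usage pattern; the repo id "HuggingFaceTB/SmolVLM-256M-Instruct" and the file name "photo.jpg" should be treated as assumptions to verify against the model card.

```python
# Minimal sketch of running SmolVLM-256M with Hugging Face transformers.
# Repo id and input file are assumptions; check the official model card.
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.float32)

image = Image.open("photo.jpg")  # any local image
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=100)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```

At roughly 256M parameters the weights fit comfortably in CPU memory, which is what makes the on-device deployment described above plausible.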
Baidu Inc., China's largest search engine company, released a new artificial intelligence model on Monday that its developers claim outperforms competitors from Google and OpenAI on several ...
Ambient.ai has introduced Pulsar, a new vision-language model that brings agentic monitoring, investigation, and real-time decision support to enterprise physical security. Ambient.ai’s Pulsar model ...
Safely achieving end-to-end autonomous driving is the cornerstone of Level 4 autonomy, and the difficulty of doing so is the primary reason Level 4 hasn’t been widely adopted. The main difference between Level 3 and Level 4 is the ...