Chinese researchers unveil LLaVA-o1 to challenge OpenAI’s o1 model




OpenAI's o1 model has shown that inference-time scaling (using more compute during inference) can significantly boost a language model's reasoning abilities. LLaVA-o1, a new model developed by researchers from multiple universities in China, brings this paradigm to open-source vision language models (VLMs).

Early open-source VLMs typically use a direct prediction approach, generating answers without reasoning about the prompt or the steps required to solve it. Without a structured reasoning process, they are less effective at tasks that require logical reasoning. Advanced prompting techniques such as chain-of-thought (CoT) prompting, in which the model is encouraged to generate intermediate reasoning steps, produce some marginal improvements, but VLMs still often make errors or hallucinate.

The researchers observed that a key issue is that the reasoning process in existing VLMs is not sufficiently systematic and structured. The models do not generate structured reasoning chains and often get stuck in reasoning processes where they lose track of which stage they have reached and which specific problem they must solve.

“We observe that VLMs often initiate responses without adequately organizing the problem and the available information,” the researchers write. “Moreover, they frequently deviate from a logical reasoning toward conclusions, instead presenting a conclusion prematurely and subsequently attempting to justify it. Given that language models generate responses token-by-token, once an erroneous conclusion is introduced, the model typically continues along a flawed reasoning path.”

Multistage reasoning

OpenAI o1 uses inference-time scaling to address this lack of systematic, structured reasoning, allowing the model to pause and review its results as it gradually solves the problem. While OpenAI has not released much detail about o1's underlying mechanism, its results show promising directions for improving the reasoning abilities of foundation models.

Inspired by o1, the researchers designed LLaVA-o1 to perform stage-by-stage reasoning. Instead of generating a direct reasoning chain, LLaVA-o1 breaks down the reasoning process into four distinct stages:

Summary: The model first provides a high-level summary of the question, outlining the core problem it needs to address.

Caption: If an image is present, the model describes the relevant parts, focusing on elements related to the question.

Reasoning: Building on the summary, the model performs structured, logical reasoning to derive a preliminary answer.

Conclusion: Finally, the model presents a concise summary of the answer based on the preceding reasoning.

Only the conclusion stage is visible to the user; the other three stages represent the model's internal reasoning process, similar to o1's hidden reasoning trace.

“This structured approach enables the model to independently manage its reasoning process, improving its adaptability and performance on complex reasoning tasks,” the researchers write.
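In the paper's released examples, each stage is wrapped in an XML-style tag, which makes the staged output easy to split apart. A minimal sketch of that parsing step (the tag names follow the four stages described above; the example response text is illustrative):

```python
import re

# The four stages described above, in generation order.
STAGE_TAGS = ["SUMMARY", "CAPTION", "REASONING", "CONCLUSION"]

def split_stages(text: str) -> dict:
    """Split a staged response into its four sections by tag."""
    stages = {}
    for tag in STAGE_TAGS:
        m = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        if m:
            stages[tag] = m.group(1).strip()
    return stages

# A hypothetical staged response for a simple visual question.
raw = (
    "<SUMMARY>Count the apples in the image.</SUMMARY>"
    "<CAPTION>The image shows a bowl with three red apples.</CAPTION>"
    "<REASONING>Each visible apple is counted once: 1, 2, 3.</REASONING>"
    "<CONCLUSION>There are three apples.</CONCLUSION>"
)

stages = split_stages(raw)
# Only the conclusion stage is shown to the user.
print(stages["CONCLUSION"])  # → There are three apples.
```

Keeping the stages machine-separable like this is also what lets the model's output be verified stage by stage during inference.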

Stage-level beam search (right) vs. other inference-time scaling techniques (source: arXiv)

LLaVA-o1 also introduces a novel inference-time scaling technique called “stage-level beam search.” Stage-level beam search generates multiple candidate outputs at each reasoning stage. It then selects the best candidate at each stage to continue the generation process. This is in contrast to the classic best-of-N approach, in which the model is prompted to generate multiple complete responses before selecting one.

“Notably, it is the structured output design of LLaVA-o1 that makes this approach feasible, enabling efficient and accurate verification at each stage,” the researchers write. “This validates the effectiveness of structured output in improving inference time scaling.”
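The control flow of stage-level beam search can be sketched in a few lines. Everything model-specific below is a stand-in: `generate_stage` and `score` are toy placeholders (in the paper, the model itself judges candidates), and `best_of_n` is included only to show the contrast with scoring complete responses:

```python
STAGES = ["summary", "caption", "reasoning", "conclusion"]

def generate_stage(context: str, stage: str, i: int) -> str:
    """Stand-in for sampling one candidate for a single reasoning stage."""
    return f"[{stage} candidate {i}]"

def score(candidate: str) -> int:
    """Toy deterministic score; in the paper the model picks the best itself."""
    return sum(ord(c) for c in candidate) % 7

def stage_level_beam_search(question: str, n_candidates: int = 2) -> str:
    """Generate N candidates per stage, commit the best, then continue."""
    context = question
    for stage in STAGES:
        candidates = [generate_stage(context, stage, i) for i in range(n_candidates)]
        best = max(candidates, key=score)
        context += "\n" + best  # the winning stage becomes part of the context
    return context

def best_of_n(question: str, n: int = 2) -> str:
    """Classic best-of-N for contrast: only complete responses are scored."""
    full = [
        "\n".join([question] + [generate_stage(question, s, i) for s in STAGES])
        for i in range(n)
    ]
    return max(full, key=score)

print(stage_level_beam_search("How many apples are in the image?"))
```

The key difference is where selection happens: stage-level beam search prunes bad candidates after every stage, so an early mistake cannot propagate through the whole response, whereas best-of-N can only accept or reject finished answers.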

Training LLaVA-o1

LLaVA-o1 training data is annotated with GPT-4o (source: arXiv)

To train LLaVA-o1, the researchers compiled a new dataset of around 100,000 image-question-answer pairs obtained from several widely used VQA datasets. The dataset covers a variety of tasks, from multi-turn question answering to chart interpretation and geometric reasoning.

The researchers used GPT-4o to generate the detailed four-stage reasoning processes for each example, including the summary, caption, reasoning and conclusion stages. 

The researchers then fine-tuned Llama-3.2-11B-Vision-Instruct on this dataset to obtain the final LLaVA-o1 model. They have not released the model itself but plan to release the dataset, called LLaVA-o1-100k.
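The annotation step boils down to prompting GPT-4o to rewrite each plain question-answer pair as a four-stage trace. A sketch of building such a prompt (the wording and tag format here are assumptions for illustration, not the paper's actual prompt):

```python
STAGES = ("SUMMARY", "CAPTION", "REASONING", "CONCLUSION")

def build_annotation_prompt(question: str, answer: str) -> str:
    """Ask an annotator model to rewrite a plain QA pair as a staged trace.

    The instruction wording is illustrative, not the paper's actual prompt.
    """
    stage_spec = "\n".join(f"- <{s}>...</{s}>" for s in STAGES)
    return (
        "Rewrite the answer to the question below as a four-stage "
        "reasoning trace, wrapping each stage in its tag:\n"
        f"{stage_spec}\n\n"
        f"Question: {question}\n"
        f"Reference answer: {answer}\n"
    )

prompt = build_annotation_prompt("What does the chart show?", "Sales rose 12%.")
print(prompt)
```

Running a prompt like this over each of the ~100,000 QA pairs, then keeping the staged traces as training targets, yields a dataset in the same stage-tagged format the model is expected to produce.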

LLaVA-o1 in action

The researchers evaluated LLaVA-o1 on several multimodal reasoning benchmarks. Despite being trained on only 100,000 examples, LLaVA-o1 showed significant performance improvements over the base Llama model, with an average benchmark score increase of 6.9%.

LLaVA-o1 vs. other open and closed models (source: arXiv)

Furthermore, stage-level beam search led to additional performance gains, demonstrating the effectiveness of inference-time scaling. Due to computational resource constraints, the researchers were only able to test the technique with a beam size of 2. They expect even greater improvements with larger beam sizes.

Impressively, LLaVA-o1 outperformed not only other open-source models of the same size or larger, but also some closed-source models, such as GPT-4o-mini and Gemini 1.5 Pro.

“LLaVA-o1 establishes a new standard for multimodal reasoning in VLMs, offering robust performance and scalability, especially in inference time,” the researchers write. “Our work paves the way for future research on structured reasoning in VLMs, including potential expansions with external verifiers and the use of reinforcement learning to further enhance complex multimodal reasoning capabilities.”
