LLaVA-OneVision-1.5 Instruction Data
Paper | Code

📌 Introduction

This dataset, LLaVA-OneVision-1.5-Instruct, was collected and integrated during the development of LLaVA-OneVision-1.5, a novel family of Large Multimodal Models (LMMs) that achieve state-of-the-art performance at significantly reduced computational and financial cost. This meticulously curated 22M-sample instruction dataset (LLaVA-OneVision-1.5-Instruct) is part of a comprehensive and… See the full description on the dataset page: https://huggingface.co/datasets/mvp-lab/LLaVA-OneVision-1.5-Instruct-Data.
Use this dataset

Pull with QuantumShield:
quantumshield pull mvp-lab/LLaVA-OneVision-1.5-Instruct-Data

Verify integrity:
quantumshield verify mvp-lab/LLaVA-OneVision-1.5-Instruct-Data

pip install:
pip install quantumshield && quantumshield pull mvp-lab/LLaVA-OneVision-1.5-Instruct-Data

Unverified
This dataset has not been PQC-verified. File integrity cannot be guaranteed against quantum threats.
README.md
LLaVA-OneVision-1.5-Instruct-Data
Intended Uses
This dataset is registered on the QuantaMrkt quantum-safe registry. It has not yet been PQC-verified.
Quick Start
# Install the CLI
pip install quantumshield

# Pull the dataset
quantumshield pull mvp-lab/LLaVA-OneVision-1.5-Instruct-Data

# Verify file integrity
quantumshield verify mvp-lab/LLaVA-OneVision-1.5-Instruct-Data