Model Hub
Browse PQC-verified AI models, datasets, and tools
One of Meta's most capable open-weight models. 70B parameters with a 128K-token context window, multilingual support, and tool use.
Meta's Llama 3.1 8B parameter instruction-tuned model. Optimized for dialogue and instruction following with 128K context.
Subset of LAION-5B filtered for aesthetic quality: 600M image-text pairs scored above 5.0 by an aesthetic predictor. A standard corpus for image-generation training.
OpenAI's first open-weight model release since GPT-2. A 20B-parameter GPT-style model trained on diverse web data.
Multilingual speech dataset with 30K+ hours across 120+ languages. Crowdsourced and validated. De facto standard for ASR training.
Open corpus of 3T tokens for language model pretraining. Sourced from web pages, academic papers, code, encyclopedias, and books.
DeepSeek's reasoning model. Trained for chain-of-thought reasoning on a 671B-parameter mixture-of-experts architecture, rivaling frontier closed models.
Instruction-following dataset of 52K examples generated with OpenAI's text-davinci-003. A foundational resource for instruction tuning.