Model Hub

Browse PQC-verified AI models, datasets, and tools

cross-encoder/ms-marco-MiniLM-L6-v2 HF PQC Verified

Text Ranking · Sentence Transformers · PyTorch · JAX · ONNX · Safetensors · MEDIUM
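These MS MARCO cross-encoders are rerankers: they score a (query, passage) pair directly rather than embedding each side separately. A minimal usage sketch with the sentence-transformers CrossEncoder class follows; the query/passage pairs are made up for illustration.

```python
# Sketch: score query/passage pairs with a MS MARCO cross-encoder reranker.
# The example pairs are illustrative only.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")
scores = model.predict([
    ("how do solar panels work", "Photovoltaic cells convert sunlight into electricity."),
    ("how do solar panels work", "The stock market closed higher on Tuesday."),
])
print(scores)  # higher score = more relevant passage for the query
```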
bigcode/The Stack v2 HF PQC Verified

Largest open code dataset. 67.5TB of permissively licensed source code across 600+ programming languages from Software Heritage.

Dataset · Code · 600+ Languages · 67.5TB · CRITICAL
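A minimal sketch for sampling The Stack v2 with the Hugging Face datasets library. The repository id bigcode/the-stack-v2, the per-language config name "Python", and gated access via an HF token are assumptions here; the records expose file metadata (e.g. Software Heritage blob ids) rather than necessarily inlining file contents.

```python
# Sketch: stream a slice of The Stack v2 via the `datasets` library.
# Assumptions: repo id "bigcode/the-stack-v2", a per-language config named
# "Python", and an authenticated Hugging Face token (the dataset is gated).
from datasets import load_dataset

ds = load_dataset(
    "bigcode/the-stack-v2",   # assumed repository id
    "Python",                 # assumed per-language config name
    split="train",
    streaming=True,           # avoid materializing 67.5 TB locally
)

for i, record in enumerate(ds):
    print(record.keys())      # inspect the per-file metadata fields
    if i >= 2:
        break
```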
cross-encoder/ms-marco-MiniLM-L4-v2 HF PQC Verified

Text Ranking · Sentence Transformers · PyTorch · JAX · ONNX · Safetensors · MEDIUM
cross-encoder/ms-marco-MiniLM-L12-v2 HF PQC Verified

Text Ranking · Sentence Transformers · PyTorch · JAX · ONNX · Safetensors · HIGH
Qwen/Qwen3-Coder-30B-A3B-Instruct HF PQC Verified

Text Generation · Transformers · Safetensors · Qwen3_moe · Conversational · CRITICAL
Qwen/Qwen2.5-Coder-32B-Instruct HF Ollama PQC Verified

Specialized code generation model. 32B parameters trained on 5.5T tokens of code data across 90+ languages.

Transformer · Code Generation · 32B · Instruct · HIGH
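A minimal generation sketch for the instruct-tuned coder with transformers, using the usual chat-template flow. The prompt, dtype, and device settings are illustrative assumptions, and a 32B model needs substantial GPU memory or quantization in practice.

```python
# Sketch: code generation with Qwen2.5-Coder-32B-Instruct via transformers.
# device_map/dtype choices and the prompt are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```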
Qwen/Qwen2.5-Coder-7B-Instruct HF PQC Verified

Text Generation · Transformers · Safetensors · Qwen2 · Code · CodeQwen · CRITICAL
Qwen/Qwen2.5-Coder-7B HF PQC Verified

Text Generation · Transformers · Safetensors · Qwen2 · Code · Qwen · CRITICAL
bigcode/starcoder2-15b HF Ollama PQC Verified

Code LLM trained on The Stack v2, covering 600+ programming languages, with 4x the training data of StarCoder1.

Transformer · Code Generation · 15B · HIGH
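StarCoder2-15B is a base code model rather than a chat model, so plain prefix completion with transformers is the natural usage; the dtype/device settings below are illustrative assumptions.

```python
# Sketch: prefix completion with StarCoder2-15B (a base code model, not a chat model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```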
NTU-NLP-sg/xCodeEval HF PQC Verified

The ability to solve problems is a hallmark of intelligence and has been an enduring goal in AI. AI systems that can create programs as solutions to problems, or assist developers in writing them, can increase productivity and make programming more accessible. Recently, pre-trained large language models have shown impressive abilities in generating new code from natural language descriptions, repairing buggy code, translating code between languages, and retrieving relevant code segments. However, these models have often been evaluated in a scattered way, on only one or two specific tasks, in a few languages, at partial granularity (e.g., function level), and in many cases without proper training data. Even more concerning, generated code is in most cases scored by mere lexical overlap rather than actual execution, whereas the semantic similarity (or equivalence) of two code segments depends only on their "execution similarity", i.e., producing the same output for a given input.

Task categories: Translation · Token Classification · Text Retrieval · Text Generation · Text Classification · Feature Extraction
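A loading sketch for xCodeEval with the datasets library; the config name "program_synthesis" and the split are assumptions for illustration, since the benchmark ships multiple task-specific subsets (consult the dataset card for the exact names).

```python
# Sketch: load one xCodeEval task subset with the `datasets` library.
# The config name "program_synthesis" and the split "validation" are
# assumptions for illustration; check the dataset card for actual names.
from datasets import load_dataset

xce = load_dataset("NTU-NLP-sg/xCodeEval", "program_synthesis", split="validation")
print(xce[0])  # inspect one example from the chosen task subset
```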
cross-encoder/nli-MiniLM2-L6-H768 HF Unverified

Zero-Shot Classification · Sentence Transformers · PyTorch · ONNX · Safetensors · OpenVINO · HIGH
cross-encoder/nli-deberta-v3-small HF Unverified

Zero-Shot Classification · Sentence Transformers · PyTorch · ONNX · Safetensors · DeBERTa-v2 · HIGH
cross-encoder/nli-deberta-v3-base HF Unverified

Zero-Shot Classification · Sentence Transformers · PyTorch · ONNX · Safetensors · DeBERTa-v2 · HIGH
0xgr3y/Qwen2.5-Coder-0.5B-Instruct-Gensyn-Swarm-tall_tame_panther HF Unverified

Text Generation · Transformers · Safetensors · GGUF · Qwen2 · Qwen2.5-Coder · HIGH
Showing 14 of 14 items (page 1 of 1)