Model Hub
Browse PQC-verified AI models, datasets, and tools
Largest open code dataset. 67.5TB of permissively licensed source code across 600+ programming languages from Software Heritage.
Specialized code generation model. 32B parameters trained on 5.5T tokens of code data across 90+ languages.
Code LLM trained on The Stack v2 with 600+ programming languages. 4x the training data of StarCoder1.
The ability to solve problems is a hallmark of intelligence and has been an enduring goal in AI. AI systems that can create programs as solutions to problems, or assist developers in writing them, can increase productivity and make programming more accessible. Recently, pre-trained large language models have shown impressive abilities in generating new code from natural language descriptions, repairing buggy code, translating code between languages, and retrieving relevant code segments. However, these models have often been evaluated in a scattered way: on only one or two specific tasks, in a few languages, at a partial granularity (e.g., the function level), and in many cases without proper training data. More concerning still, generated code has mostly been judged by mere lexical overlap rather than actual execution, whereas the semantic similarity (or equivalence) of two code segments depends only on their "execution similarity", i.e., whether they produce the same output for a given input.
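The gap between lexical overlap and execution similarity can be sketched in a few lines of Python. This is an illustrative toy, not any benchmark's actual metric: the function names and the Jaccard-overlap proxy below are assumptions chosen for clarity.

```python
import inspect

def lexical_overlap(a: str, b: str) -> float:
    """Crude lexical similarity proxy: token-level Jaccard overlap."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def execution_equivalent(f, g, inputs) -> bool:
    """Execution similarity: the two functions agree on every test input."""
    return all(f(x) == g(x) for x in inputs)

# Two lexically dissimilar implementations of the same task:
# the sum of the first n positive integers.
def sum_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    return n * (n + 1) // 2

tests = [0, 1, 5, 100]
print(execution_equivalent(sum_loop, sum_formula, tests))   # True
print(lexical_overlap(inspect.getsource(sum_loop),
                      inspect.getsource(sum_formula)))       # low, despite equivalence
```

The two implementations share almost no tokens, so a lexical metric scores them as dissimilar, yet they agree on every input; an execution-based metric captures exactly this.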