LongBench is a comprehensive multilingual, multi-task benchmark designed to measure how well pre-trained language models understand long text. The dataset consists of twenty tasks covering key long-text application scenarios: single-document QA, multi-document QA, summarization, few-shot learning, synthetic tasks, and code completion.
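Benchmark prompts built from long documents often exceed a model's context window. One common workaround in long-context evaluation (sketched here as an illustration, not as LongBench's official harness) is middle truncation: keep the head and tail of the input and drop tokens from the middle, so both the task instruction and the trailing question survive.

```python
def truncate_middle(tokens, max_len):
    """Keep the head and tail of an over-long token sequence,
    dropping the middle, so the prompt's framing at both ends survives."""
    if len(tokens) <= max_len:
        return tokens
    half = max_len // 2
    # Take `half` tokens from the front and the remainder from the back.
    return tokens[:half] + tokens[len(tokens) - (max_len - half):]

# Example: a 10-token sequence truncated to 6 keeps 3 from each end.
seq = list(range(10))
print(truncate_middle(seq, 6))  # [0, 1, 2, 7, 8, 9]
```

The same idea applies at the string or token-ID level; real harnesses apply it after tokenization so `max_len` matches the model's context limit.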
LongBench
Intended Uses
This model is registered on the QuantaMrkt quantum-safe registry but has not yet been PQC-verified.
Quick Start
# Install the CLI
pip install quantumshield

# Pull the model
quantumshield pull zai-org/LongBench

# Verify file integrity
quantumshield verify zai-org/LongBench