---
license: mit
language:
- zh
- en
- fr
- es
- ru
- de
- ja
- ko
pipeline_tag: image-to-text
library_name: transformers
---

# GLM-OCR

<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/logo.svg" width="40%"/>
</div>
<p align="center">
👋 Join our <a href="https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/wechat.jpg" target="_blank">WeChat</a> and <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> communities
<br>
📍 Use the GLM-OCR <a href="https://docs.z.ai/guides/vlm/glm-ocr" target="_blank">API</a>
<br>
👉 <a href="https://github.com/zai-org/GLM-OCR" target="_blank">GLM-OCR SDK</a> (recommended)
<br>
📖 <a href="https://arxiv.org/abs/2603.10910" target="_blank">Technical Report</a>
</p>

## Introduction

GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. It introduces Multi-Token Prediction (MTP) loss and stable full-task reinforcement learning to improve training efficiency, recognition accuracy, and generalization. The model integrates the CogViT visual encoder pre-trained on large-scale image–text data, a lightweight cross-modal connector with efficient token downsampling, and a GLM-0.5B language decoder. Combined with a two-stage pipeline of layout analysis and parallel recognition based on PP-DocLayout-V3, GLM-OCR delivers robust and high-quality OCR performance across diverse document layouts.

**Key Features**

- **State-of-the-Art Performance**: Achieves a score of 94.62 on OmniDocBench V1.5, ranking #1 overall, and delivers state-of-the-art results across major document understanding benchmarks, including formula recognition, table recognition, and information extraction.

- **Optimized for Real-World Scenarios**: Designed and optimized for practical business use cases, maintaining robust performance on complex tables, code-heavy documents, seals, and other challenging real-world layouts.

- **Efficient Inference**: With only 0.9B parameters, GLM-OCR supports deployment via vLLM, SGLang, and Ollama, significantly reducing inference latency and compute cost, making it ideal for high-concurrency services and edge deployments.

- **Easy to Use**: Fully open-sourced and equipped with a comprehensive [SDK](https://github.com/zai-org/GLM-OCR) and inference toolchain, offering simple installation, one-line invocation, and smooth integration into existing production pipelines.

## Performance

- Document Parsing & Information Extraction

![image](https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/docparse.png)

- Real-World Scenarios Performance

![image](https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/realworld.png)

- Speed Test

For speed, we compared different OCR methods under identical hardware and testing conditions (single replica, single concurrency), evaluating end-to-end parsing and Markdown export for both image and PDF inputs. GLM-OCR achieves a throughput of 1.86 pages/second on PDF documents and 0.67 images/second on single images, significantly outperforming comparable models.

![image](https://raw.githubusercontent.com/zai-org/GLM-OCR/refs/heads/main/resources/speed.png)

## Usage

### Official SDK

For document parsing tasks, we strongly recommend using our [official SDK](https://github.com/zai-org/GLM-OCR).
Compared with model-only inference, the SDK integrates PP-DocLayoutV3 and provides a complete, easy-to-use pipeline for document parsing, including layout analysis and structured output generation. This significantly reduces the engineering overhead required to build end-to-end document intelligence systems.

Note that the SDK currently supports document parsing tasks only. For information extraction, refer to the following sections and run inference directly with the model.

### vLLM

1. Install vLLM:

```bash
pip install -U vllm --extra-index-url https://wheels.vllm.ai/nightly
```

or pull the Docker image:

```bash
docker pull vllm/vllm-openai:nightly
```

2. Install the latest Transformers from source and launch the server:

```bash
pip install git+https://github.com/huggingface/transformers.git
vllm serve zai-org/GLM-OCR --allowed-local-media-path / --port 8080
```

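Once the server is running, it exposes an OpenAI-compatible endpoint. Below is a minimal client sketch, assuming the server from step 2 is listening on `localhost:8080` and that `test_image.png` is a placeholder local image; it sends the image as a base64 data URL together with the `Text Recognition:` prompt.

```python
import base64
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (no real API key needed).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

# "test_image.png" is a placeholder path; encode it as a base64 data URL.
with open("test_image.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="zai-org/GLM-OCR",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text", "text": "Text Recognition:"},
        ],
    }],
    max_tokens=8192,
)
print(response.choices[0].message.content)
```
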
### SGLang

1. Pull the Docker image:

```bash
docker pull lmsysorg/sglang:dev
```

or build it from source:

```bash
pip install git+https://github.com/sgl-project/sglang.git#subdirectory=python
```

2. Install the latest Transformers from source and launch the server:

```bash
pip install git+https://github.com/huggingface/transformers.git
python -m sglang.launch_server --model zai-org/GLM-OCR --port 8080
```

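The SGLang server also serves an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal request sketch, assuming the server above is running on `localhost:8080`; the image URL below is a placeholder (a base64 data URL also works):

```python
import requests

payload = {
    "model": "zai-org/GLM-OCR",
    "messages": [{
        "role": "user",
        "content": [
            # Placeholder image URL for illustration only.
            {"type": "image_url", "image_url": {"url": "https://example.com/test_image.png"}},
            {"type": "text", "text": "Text Recognition:"},
        ],
    }],
    "max_tokens": 8192,
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=300)
print(resp.json()["choices"][0]["message"]["content"])
```
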
### Ollama

1. Download [Ollama](https://ollama.com/download).
2. Run the model:

```bash
ollama run glm-ocr
```

When an image is dragged into the terminal, Ollama automatically appends its file path to the prompt:

```bash
ollama run glm-ocr Text Recognition: ./image.png
```

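For programmatic access, the `ollama` Python package (an assumption here; installable via `pip install ollama`) can call the same locally served model. A minimal sketch with a placeholder image path:

```python
import ollama

# Chat with the locally served glm-ocr model; "images" takes local file paths.
response = ollama.chat(
    model="glm-ocr",
    messages=[{
        "role": "user",
        "content": "Text Recognition:",
        "images": ["./image.png"],  # placeholder path
    }],
)
print(response["message"]["content"])
```
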
### Transformers

```bash
pip install git+https://github.com/huggingface/transformers.git
```

```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

MODEL_PATH = "zai-org/GLM-OCR"
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "test_image.png"
            },
            {
                "type": "text",
                "text": "Text Recognition:"
            }
        ],
    }
]
processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = AutoModelForImageTextToText.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype="auto",
    device_map="auto",
)
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)
inputs.pop("token_type_ids", None)
generated_ids = model.generate(**inputs, max_new_tokens=8192)
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(output_text)
```

### Supported Prompts

GLM-OCR currently supports two types of prompt scenarios:

1. **Document Parsing** – extract raw content from documents. Supported tasks include:

```python
{
    "text": "Text Recognition:",
    "formula": "Formula Recognition:",
    "table": "Table Recognition:"
}
```

2. **Information Extraction** – extract structured information from documents. Prompts must follow a strict JSON schema. For example, to extract personal ID information, the prompt below asks (in Chinese) for the information in the image to be output in the given JSON format:

```
请按下列JSON格式输出图中信息:
{
    "id_number": "",
    "last_name": "",
    "first_name": "",
    "date_of_birth": "",
    "address": {
        "street": "",
        "city": "",
        "state": "",
        "zip_code": ""
    },
    "dates": {
        "issue_date": "",
        "expiration_date": ""
    },
    "sex": ""
}
```

⚠️ Note: When using information extraction, the output must strictly adhere to the defined JSON schema to ensure downstream processing compatibility.

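Because the model is expected to return the filled-in schema as JSON, downstream code can parse it directly. A minimal sketch, assuming `raw_output` holds the decoded model output from the Transformers example above; the values shown are made-up placeholders:

```python
import json

# Hypothetical decoded output for the ID-card prompt above (placeholder values).
raw_output = (
    '{"id_number": "D1234567", "last_name": "DOE", "first_name": "JANE", '
    '"date_of_birth": "1990-01-01", '
    '"address": {"street": "1 Main St", "city": "Springfield", "state": "IL", "zip_code": "62701"}, '
    '"dates": {"issue_date": "2020-01-01", "expiration_date": "2028-01-01"}, "sex": "F"}'
)

try:
    record = json.loads(raw_output)  # dict matching the prompt's schema
except json.JSONDecodeError:
    record = None  # flag for review or retry if the output drifts from the schema

if record is not None:
    print(record["id_number"], record["address"]["city"])
```
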
## Acknowledgement

This project is inspired by the excellent work of the following projects and communities:

- [PP-DocLayout-V3](https://huggingface.co/PaddlePaddle/PP-DocLayoutV3)
- [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
- [MinerU](https://github.com/opendatalab/MinerU)

## License

The GLM-OCR model is released under the MIT License.

The complete OCR pipeline integrates [PP-DocLayoutV3](https://huggingface.co/PaddlePaddle/PP-DocLayoutV3) for document layout analysis, which is licensed under the Apache License 2.0. Users should comply with both licenses when using this project.

## Citation

If you find GLM-OCR useful in your research, please cite our technical report:

```bibtex
@misc{duan2026glmocrtechnicalreport,
  title={GLM-OCR Technical Report},
  author={Shuaiqi Duan and Yadong Xue and Weihan Wang and Zhe Su and Huan Liu and Sheng Yang and Guobing Gan and Guo Wang and Zihan Wang and Shengdong Yan and Dexin Jin and Yuxuan Zhang and Guohong Wen and Yanfeng Wang and Yutao Zhang and Xiaohan Zhang and Wenyi Hong and Yukuo Cen and Da Yin and Bin Chen and Wenmeng Yu and Xiaotao Gu and Jie Tang},
  year={2026},
  eprint={2603.10910},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2603.10910},
}
```