
Update inference-with-fastertransformer.md

papersnake 2 years ago
parent
commit
62db1c9119
1 changed file with 2 additions and 0 deletions
1. docs/inference-with-fastertransformer.md (+2 -0)

docs/inference-with-fastertransformer.md (+2 -0)

@@ -8,6 +8,8 @@ We adapted the GLM-130B based on Fastertransformer for fast inference, with deta
 
 See [Get Model](/README.md#environment-setup).
 
+To run in int4 or int8 mode, please first run [convert_tp.py](/tools/convert_tp.py) to generate a quantized checkpoint.
+
 ## Recommend: Run With Docker
 
 Use Docker to quickly build a Flask API application for GLM-130B.
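
For reference, a minimal sketch of the conversion step described in the added line. The flag names (`--input-folder`, `--output-folder`, `--target-tp`, `--quantization-bit-width`) are assumptions about the GLM-130B tooling and are not confirmed by this diff; check `python tools/convert_tp.py --help` for the actual interface.

```bash
# Sketch only: flag names below are assumed, not taken from this commit.
# Convert the original checkpoint to the desired tensor-parallel size and
# emit a quantized copy for the FasterTransformer inference path.
python tools/convert_tp.py \
    --input-folder  /path/to/glm-130b-ckpt \
    --output-folder /path/to/glm-130b-ckpt-int4 \
    --target-tp 4 \
    --quantization-bit-width 4    # use 8 here for int8 mode
```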