Update inference-with-fastertransformer.md

papersnake 2 years ago
parent
commit
62db1c9119
1 file changed, 2 additions and 0 deletions
  1. +2 −0
      docs/inference-with-fastertransformer.md

+ 2 - 0
docs/inference-with-fastertransformer.md

@@ -8,6 +8,8 @@ We adapted the GLM-130B based on Fastertransformer for fast inference, with deta
 
 See [Get Model](/README.md#environment-setup).
 
+To run in int4 or int8 mode, please run [convert_tp.py](/tools/convert_tp.py) to generate the quantized checkpoint.
+
 ## Recommend: Run With Docker
 
 Use Docker to quickly build a Flask API application for GLM-130B.
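
For reference, a minimal sketch of what the quantized-checkpoint conversion step added in this commit might look like on the command line. The flag names and paths below are assumptions, not taken from this diff; check `python tools/convert_tp.py --help` in the repository for the actual interface.

```bash
# Hypothetical invocation (flag names and paths are assumptions, verify against the script's --help):
# converts the original GLM-130B checkpoint into an int8 checkpoint for FasterTransformer inference.
python tools/convert_tp.py \
    --input-folder /path/to/glm-130b-checkpoint \
    --output-folder /path/to/glm-130b-checkpoint-int8 \
    --target-tp 8 \
    --quantization-bit-width 8   # use 4 here for int4 mode
```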