Use convert.py to transform ChatGLM-6B into quantized GGML format. For example, to convert the original fp16 model to a q4_0 (4-bit quantized) GGML model, run: python3 ...
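A typical invocation might look like the sketch below. The flag names (`-i`, `-t`, `-o`) and the output path are assumptions based on common GGML conversion-script conventions, so verify them against `python3 convert.py --help` before running:

```shell
# Sketch of a conversion run (flags and paths are assumptions; check --help):
#   -i  input model, e.g. a Hugging Face model id or local checkpoint directory
#   -t  target quantization type (here q4_0, 4-bit)
#   -o  output GGML file
python3 convert.py -i THUDM/chatglm-6b -t q4_0 -o chatglm-ggml.bin
```

Other quantization types (such as q4_1, q5_0, or q8_0, if the script supports them) trade file size against accuracy; q4_0 is the smallest and fastest of the common int4 variants.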