AutoAWQ Quantization

2025-02-21 badb0y
Category: System Operations

Installation
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip install -vvv --no-build-isolation -e .

Code:


from datasets import load_dataset
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Specify paths and hyperparameters for quantization
model_path = "/data/qwen3b/Qwen/Qwen2___5-3B-Instruct/"
quant_path = "/data/qwen3b/Qwen/Qwen2___5-3B-Instruct-AWQ/"
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Define calibration-data loaders
def load_dolly():
    data = load_dataset('databricks/databricks-dolly-15k', split="train")

    # Concatenate instruction, context, and response into a single text field
    def concatenate_data(x):
        return {"text": x['instruction'] + '\n' + x['context'] + '\n' + x['response']}

    concatenated = data.map(concatenate_data)
    return [text for text in concatenated["text"]]

def load_wikitext():
    data = load_dataset('wikitext', 'wikitext-2-raw-v1', split="train")
    # Keep only non-empty passages longer than 20 words
    return [text for text in data["text"] if text.strip() != '' and len(text.split(' ')) > 20]

# Quantize
model.quantize(tokenizer, quant_config=quant_config, calib_data=load_wikitext())

# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)

print(f'Model is quantized and saved at "{quant_path}"')
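To see why `quant_config` above shrinks the model, note that `w_bit=4` stores each weight in 4 bits, while `q_group_size=128` adds a scale and zero-point per group of 128 weights. The following back-of-envelope sketch estimates the resulting memory footprint; the per-group overhead sizes (fp16 scale, 4-bit zero-point) are illustrative assumptions, not AutoAWQ's exact on-disk layout:

```python
# Rough estimate of weight memory under 4-bit group quantization vs fp16.
def estimate_bytes(n_params, w_bit=4, group_size=128,
                   scale_bytes=2, zero_bytes=0.5):
    """Approximate bytes for packed weights plus per-group
    scale/zero-point overhead (assumed fp16 scale, 4-bit zero-point)."""
    weight = n_params * w_bit / 8          # packed quantized weights
    n_groups = n_params / group_size       # one scale + zero per group
    overhead = n_groups * (scale_bytes + zero_bytes)
    return weight + overhead

n_params = 3_000_000_000  # roughly Qwen2.5-3B
fp16 = n_params * 2
q4 = estimate_bytes(n_params)
print(f"fp16 : {fp16 / 1e9:.2f} GB")
print(f"4-bit: {q4 / 1e9:.2f} GB (~{fp16 / q4:.1f}x smaller)")
```

The group-wise overhead is small (about 2% here), which is why 4-bit AWQ lands near a quarter of the fp16 size.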
Run:
export HF_ENDPOINT=https://hf-mirror.com
python 1.py
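After the run finishes, the model in `quant_path` can be loaded back for inference with AutoAWQ's `from_quantized` loader. A minimal sketch, wrapped in a hypothetical helper (the imports are placed inside the function so it can be defined without a GPU build present):

```python
def load_quantized(quant_path):
    """Load a model saved by save_quantized (hypothetical helper).

    Usage on a CUDA machine:
        model, tokenizer = load_quantized(quant_path)
        ids = tokenizer("Hello", return_tensors="pt").input_ids.cuda()
        print(tokenizer.decode(model.generate(ids, max_new_tokens=32)[0]))
    """
    from awq import AutoAWQForCausalLM      # lazy import: needs the CUDA build
    from transformers import AutoTokenizer

    # from_quantized reads the config and packed weights written above;
    # fuse_layers=True enables AutoAWQ's fused kernels for faster decoding
    model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
    tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
    return model, tokenizer
```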




