Fine-Tuning GLM-4-9B-Chat with Python: Code and Steps; Hands-On GLM-4-9B-Chat Fine-Tuning with LoRA, P-Tuning V2, and SFT

I. Model Introduction

GLM-4-9B is the open-source model in GLM-4, the latest generation of pretrained models released by Zhipu AI. On benchmarks covering semantics, mathematics, reasoning, code, and knowledge, both GLM-4-9B and its human-preference-aligned variant GLM-4-9B-Chat outperform Llama-3-8B. Besides multi-turn dialogue, GLM-4-9B-Chat also provides advanced capabilities such as web browsing, code execution, custom tool calling (Function Call), and long-context reasoning (up to 128K context). This generation adds multilingual support for 26 languages, including Japanese, Korean, and German. Zhipu AI has also released GLM-4-9B-Chat-1M, which supports a 1M context length (roughly 2 million Chinese characters), and GLM-4V-9B, a multimodal model based on GLM-4-9B. GLM-4V-9B supports Chinese-English bilingual multi-turn dialogue at 1120 × 1120 resolution and, on multimodal benchmarks covering overall Chinese and English ability, perceptual reasoning, text recognition, and chart understanding, outperforms GPT-4-turbo-2024-04-09, Gemini 1.0 Pro, Qwen-VL-Max, and Claude 3 Opus.

Model List

| Model | Type | Seq Length | Download | Online Demo |
|---|---|---|---|---|
| GLM-4-9B | Base | 8K | 🤗 Huggingface / 🤖 ModelScope | / |
| GLM-4-9B-Chat | Chat | 128K | 🤗 Huggingface / 🤖 ModelScope | 🤖 ModelScope CPU / 🤖 ModelScope vLLM |
| GLM-4-9B-Chat-1M | Chat | 1M | 🤗 Huggingface / 🤖 ModelScope | / |
| GLM-4V-9B | Chat | 8K | 🤗 Huggingface / 🤖 ModelScope | 🤖 ModelScope |
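
Before fine-tuning, it is worth a quick check that the chat model loads and generates. A minimal sketch, assuming the dependencies from Section III and the Hugging Face model id THUDM/glm-4-9b-chat (the same loading pattern as the inference code in Section IV):

# Minimal load-and-generate sanity check for GLM-4-9B-Chat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "THUDM/glm-4-9b-chat"  # or a local download directory
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    trust_remote_code=True,       # GLM-4 ships its own modeling code
    torch_dtype=torch.bfloat16,
    device_map="auto",
).eval()

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "你好"}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
).to(model.device)
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))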

Evaluation Results

Typical Tasks of the Chat Model

| Model | AlignBench-v2 | MT-Bench | IFEval | MMLU | C-Eval | GSM8K | MATH | HumanEval | NCB |
|---|---|---|---|---|---|---|---|---|---|
| Llama-3-8B-Instruct | 5.12 | 8.00 | 68.58 | 68.4 | 51.3 | 79.6 | 30.0 | 62.2 | 24.7 |
| ChatGLM3-6B | 3.97 | 5.50 | 28.1 | 66.4 | 69.0 | 72.3 | 25.7 | 58.5 | 11.3 |
| GLM-4-9B-Chat | 6.61 | 8.35 | 69.0 | 72.4 | 75.6 | 79.6 | 50.6 | 71.8 | 32.2 |

Long Context

A needle-in-a-haystack test was run at a context length of 1M; the resulting retrieval chart from the original report is not reproduced here.

Multilingual Capability

GLM-4-9B-Chat and Llama-3-8B-Instruct were tested on six multilingual datasets; the results, together with the languages selected from each dataset, are shown in the table below.

| Dataset | Llama-3-8B-Instruct | GLM-4-9B-Chat | Languages |
|---|---|---|---|
| M-MMLU | 49.6 | 56.6 | all |
| FLORES | 25.0 | 28.8 | ru, es, de, fr, it, pt, pl, ja, nl, ar, tr, cs, vi, fa, hu, el, ro, sv, uk, fi, ko, da, bg, no |
| MGSM | 54.0 | 65.3 | zh, en, bn, de, es, fr, ja, ru, sw, te, th |
| XWinograd | 61.7 | 73.1 | zh, en, fr, jp, ru, pt |
| XStoryCloze | 84.7 | 90.7 | zh, en, ar, es, eu, hi, id, my, ru, sw, te |
| XCOPA | 73.3 | 80.1 | zh, et, ht, id, it, qu, sw, ta, th, tr, vi |

II. Hands-On Fine-Tuning Code

Core code:

# -*- coding: utf-8 -*-
import json
import os
import jieba
import dataclasses as dc
import functools
from collections.abc import Callable, Mapping, Sequence
from pathlib import Path
from typing import Annotated, Any, Optional, Union
import numpy as np
import ruamel.yaml as yaml
import torch
import typer
from datasets import Dataset, DatasetDict, NamedSplit, Split, load_dataset
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from peft import PeftConfig, get_peft_config, get_peft_model
from rouge_chinese import Rouge
from torch import nn
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    EvalPrediction,
    GenerationConfig,
    PreTrainedTokenizer,
    Seq2SeqTrainingArguments,
)
from transformers import DataCollatorForSeq2Seq as _DataCollatorForSeq2Seq
from transformers import Seq2SeqTrainer as _Seq2SeqTrainer

app = typer.Typer(pretty_exceptions_show_locals=False)


class DataCollatorForSeq2Seq(_DataCollatorForSeq2Seq):
    def __call__(self, features, return_tensors=None):
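        # 'output_ids' only exists in evaluation batches; pad it to a common length here,
        # because the parent collator only knows how to pad input_ids / labels.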
        output_ids = ([feature['output_ids'] for feature in features] if 'output_ids' in features[0].keys() else None)
        if output_ids is not None:
            max_output_length = max(len(out) for out in output_ids)
            if self.pad_to_multiple_of is not None:
                max_output_length = (
                        (
                                max_output_length + self.pad_to_multiple_of - 1) //
                        self.pad_to_multiple_of * self.pad_to_multiple_of
                )
            for feature in features:
                remainder = [self.tokenizer.pad_token_id] * (
                        max_output_length - len(feature['output_ids'])
                )
                if isinstance(feature['output_ids'], list):
                    feature['output_ids'] = feature['output_ids'] + remainder
                else:
                    feature['output_ids'] = np.concatenate(
                        [feature['output_ids'], remainder]
                    ).astype(np.int64)
        return super().__call__(features, return_tensors)


class Seq2SeqTrainer(_Seq2SeqTrainer):
    def prediction_step(
            self,
            model: nn.Module,
            inputs: dict[str, Any],
            prediction_loss_only: bool,
            ignore_keys=None,
            **gen_kwargs,
    ) -> tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]:
        if self.args.predict_with_generate:
            output_ids = inputs.pop('output_ids')
        input_ids = inputs['input_ids']
        loss, generated_tokens, labels = super().prediction_step(
            model, inputs, prediction_loss_only, ignore_keys, **gen_kwargs
        )
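        # Keep only the newly generated tokens (drop the echoed prompt) and use the
        # reference 'output_ids' as labels for metric computation.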
        generated_tokens = generated_tokens[:, input_ids.size()[1]:]
        labels = output_ids
        return loss, generated_tokens, labels


@dc.dataclass
class DataConfig(object):
    train_file: Optional[str] = None
    val_file: Optional[str] = None
    test_file: Optional[str] = None
    num_proc: Optional[int] = None

    @property
    def data_format(self) -> str:
        return Path(self.train_file).suffix

    @property
    def data_files(self) -> dict[NamedSplit, str]:
        return {
            split: data_file
            for split, data_file in zip(
                [Split.TRAIN, Split.VALIDATION, Split.TEST],
                [self.train_file, self.val_file, self.test_file],
            )
            if data_file is not None
        }


@dc.dataclass
class FinetuningConfig(object):
    data_config: DataConfig

    max_input_length: int
    max_output_length: int

    training_args: Seq2SeqTrainingArguments = dc.field(
        default_factory=lambda: Seq2SeqTrainingArguments(output_dir='./output')
    )
    peft_config: Optional[PeftConfig] = None

    def __post_init__(self):
        if not self.training_args.do_eval or self.data_config.val_file is None:
            self.training_args.do_eval = False
            self.training_args.evaluation_strategy = 'no'
            self.data_config.val_file = None
        else:
            self.training_args.per_device_eval_batch_size = (
                    self.training_args.per_device_eval_batch_size
                    or self.training_args.per_device_train_batch_size
            )

    @classmethod
    def from_dict(cls, **kwargs) -> 'FinetuningConfig':
        training_args = kwargs.get('training_args', None)
        if training_args is not None and not isinstance(
                training_args, Seq2SeqTrainingArguments
        ):
            gen_config = training_args.get('generation_config')
            # TODO: a bit hacky
            if not isinstance(gen_config, GenerationConfig):
                training_args['generation_config'] = GenerationConfig(
                    **gen_config
                )
            kwargs['training_args'] = Seq2SeqTrainingArguments(**training_args)

        data_config = kwargs.get('data_config')
        if not isinstance(data_config, DataConfig):
            kwargs['data_config'] = DataConfig(**data_config)

        peft_config = kwargs.get('peft_config', None)
        if peft_config is not None and not isinstance(peft_config, PeftConfig):
            kwargs['peft_config'] = get_peft_config(config_dict=peft_config)
        return cls(**kwargs)

    @classmethod
    def from_file(cls, path: Union[str, Path]) -> 'FinetuningConfig':
        path = Path(path)
        parser = yaml.YAML(typ='safe', pure=True)
        parser.indent(mapping=2, offset=2, sequence=4)
        parser.default_flow_style = False
        kwargs = parser.load(path)
        return cls.from_dict(**kwargs)



def _load_datasets(
        data_dir: str,
        data_format: str,
        data_files: dict[NamedSplit, str],
        num_proc: Optional[int],
) -> DatasetDict:
    if data_format == '.jsonl':
        dataset_dct = load_dataset(
            data_dir,
            data_files=data_files,
            split=None,
            num_proc=num_proc,
        )
    else:
        raise NotImplementedError(f"Cannot load dataset in the '{data_format}' format.")
    return dataset_dct


class DataManager(object):
    def __init__(self, data_dir: str, data_config: DataConfig):
        self._num_proc = data_config.num_proc

        self._dataset_dct = _load_datasets(
            data_dir,
            data_config.data_format,
            data_config.data_files,
            self._num_proc,
        )

    def _get_dataset(self, split: NamedSplit) -> Optional[Dataset]:
        return self._dataset_dct.get(split, None)

    def get_dataset(
            self,
            split: NamedSplit,
            process_fn: Callable[[dict[str, Any]], dict[str, Any]],
            batched: bool = True,
            remove_orig_columns: bool = True,
    ) -> Optional[Dataset]:
        orig_dataset = self._get_dataset(split)
        if orig_dataset is None:
            return

        if remove_orig_columns:
            remove_columns = orig_dataset.column_names
        else:
            remove_columns = None
        return orig_dataset.map(
            process_fn,
            batched=batched,
            remove_columns=remove_columns,
            num_proc=self._num_proc,
        )


def process_message(message):
    if 'tools' in message and message['role'] == 'system':
        for tool in message['tools']:
            parameters = tool['function']['parameters']['properties']
            tool['function']['parameters']['properties'] = \
                {k: v for k, v in parameters.items() if
                 v is not None}
    elif 'tools' in message:
        del message['tools']
    return message


def process_batch(
        batch: Mapping[str, Sequence],
        tokenizer: PreTrainedTokenizer,
        max_input_length: int,
        max_output_length: int,
) -> dict[str, list]:
    batched_conv = batch['messages']
    batched_input_ids = []
    batched_labels = []

    for conv in batched_conv:
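        # Hard-coded GLM-4 prompt prefix token ids (presumably the [gMASK] and <sop>
        # special tokens added by the chat template); they are masked out of the loss.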
        input_ids = [151331, 151333]
        loss_masks = [False, False]
        for message in conv:
            message = process_message(message)
            loss_mask_val = False if message['role'] in ('system', 'user', 'observation') else True
            new_input_ids = tokenizer.apply_chat_template([message], tokenize=True, return_dict=False)[0][2:]
            new_loss_masks = [loss_mask_val] * len(new_input_ids)
            input_ids += new_input_ids
            loss_masks += new_loss_masks
        input_ids.append(tokenizer.eos_token_id)
        loss_masks = [False, *loss_masks]
        labels = []
        for input_id, mask in zip(input_ids, loss_masks):
            if mask:
                labels.append(input_id)
            else:
                labels.append(-100)
        max_length = max_input_length + max_output_length + 1
        batched_input_ids.append(input_ids[:max_length])
        batched_labels.append(labels[:max_length])
    return {'input_ids': batched_input_ids, 'labels': batched_labels}


def process_batch_eval(
        batch: Mapping[str, Sequence],
        tokenizer: PreTrainedTokenizer,
        max_input_length: int,
        max_output_length: int,
) -> dict[str, list]:
    batched_conv = batch['messages']
    batched_input_ids = []
    batched_output_ids = []

    for conv in batched_conv:

        input_ids = [151331, 151333]
        for message in conv:
            if len(input_ids) >= max_input_length:
                break
            else:
                message = process_message(message)
                new_input_ids = tokenizer.apply_chat_template([message], tokenize=True, return_dict=False)[0][2:]
                if message['role'] == 'assistant':
                    output_prompt, output_ids = (
                        new_input_ids[:1],
                        new_input_ids[1:],
                    )
                    output_ids.append(tokenizer.eos_token_id)
                    batched_input_ids.append(
                        input_ids[:max_input_length] + output_prompt[:1]
                    )
                    batched_output_ids.append(output_ids[:max_output_length])
                input_ids += new_input_ids
    return {'input_ids': batched_input_ids, 'output_ids': batched_output_ids}


def load_tokenizer_and_model(
        model_dir: str,
        peft_config: Optional[PeftConfig] = None,
):
    tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
    if peft_config is not None:
        model = AutoModelForCausalLM.from_pretrained(
            model_dir,
            trust_remote_code=True,
            empty_init=False,
            use_cache=False,
            torch_dtype=torch.bfloat16  # Must use BFloat 16
        )
        model = get_peft_model(model, peft_config)
        model.print_trainable_parameters()
    else:
        model = AutoModelForCausalLM.from_pretrained(
            model_dir,
            trust_remote_code=True,
            empty_init=False,
            use_cache=False,
            torch_dtype=torch.bfloat16
        )
    return tokenizer, model


def compute_metrics(eval_preds: EvalPrediction, tokenizer):
    batched_pred_ids, batched_label_ids = eval_preds
    metrics_dct = {'rouge-1': [], 'rouge-2': [], 'rouge-l': [], 'bleu-4': []}
    for pred_ids, label_ids in zip(batched_pred_ids, batched_label_ids):
        pred_txt = tokenizer.decode(pred_ids).strip()
        label_txt = tokenizer.decode(label_ids).strip()
        pred_tokens = list(jieba.cut(pred_txt))
        label_tokens = list(jieba.cut(label_txt))
        rouge = Rouge()
        scores = rouge.get_scores(' '.join(pred_tokens), ' '.join(label_tokens))
        for k, v in scores[0].items():
            metrics_dct[k].append(round(v['f'] * 100, 4))
        metrics_dct['bleu-4'].append(
            sentence_bleu([label_tokens], pred_tokens, smoothing_function=SmoothingFunction().method3))
    return {k: np.mean(v) for k, v in metrics_dct.items()}


@app.command()
def main(
        data_dir: Annotated[str, typer.Argument(help='Directory containing the train/val/test .jsonl files')],
        model_dir: Annotated[
            str,
            typer.Argument(
                help='A string that specifies the model id of a pretrained model configuration hosted on huggingface.co, or a path to a directory containing a model configuration file.'
            ),
        ],
        config_file: Annotated[str, typer.Argument(help='Path to the YAML fine-tuning config, e.g. configs/lora.yaml')],
        auto_resume_from_checkpoint: str = typer.Argument(
            default='',
            help='If "yes", automatically resume from the latest saved checkpoint. If a number (e.g. 12), resume from checkpoint-<number>. If "no" or empty, start training from scratch.'
        ),

):
    ft_config = FinetuningConfig.from_file(config_file)
    tokenizer, model = load_tokenizer_and_model(model_dir, peft_config=ft_config.peft_config)
    data_manager = DataManager(data_dir, ft_config.data_config)

    train_dataset = data_manager.get_dataset(
        Split.TRAIN,
        functools.partial(
            process_batch,
            tokenizer=tokenizer,
            max_input_length=ft_config.max_input_length,
            max_output_length=ft_config.max_output_length,
        ),
        batched=True,
    )
    print('train_dataset:', train_dataset)
    val_dataset = data_manager.get_dataset(
        Split.VALIDATION,
        functools.partial(
            process_batch_eval,
            tokenizer=tokenizer,
            max_input_length=ft_config.max_input_length,
            max_output_length=ft_config.max_output_length,
        ),
        batched=True,
    )
    if val_dataset is not None:
        print('val_dataset:', val_dataset)
    test_dataset = data_manager.get_dataset(
        Split.TEST,
        functools.partial(
            process_batch_eval,
            tokenizer=tokenizer,
            max_input_length=ft_config.max_input_length,
            max_output_length=ft_config.max_output_length,
        ),
        batched=True,
    )
    if test_dataset is not None:
        print('test_dataset:', test_dataset)

    model.gradient_checkpointing_enable()
    model.enable_input_require_grads()

    trainer = Seq2SeqTrainer(
        model=model,
        args=ft_config.training_args,
        data_collator=DataCollatorForSeq2Seq(
            tokenizer=tokenizer,
            padding='longest',
            return_tensors='pt',
        ),
        train_dataset=train_dataset,
        eval_dataset=val_dataset.select(list(range(50))) if val_dataset is not None else None,
        compute_metrics=functools.partial(compute_metrics, tokenizer=tokenizer),
    )

    if auto_resume_from_checkpoint is None or auto_resume_from_checkpoint.upper() == "":
        trainer.train()
    else:
        output_dir = ft_config.training_args.output_dir
        dirlist = os.listdir(output_dir)
        checkpoint_sn = 0
        for checkpoint_str in dirlist:
            if checkpoint_str.find("eckpoint") > 0 and checkpoint_str.find("tmp") == -1:
                checkpoint = int(checkpoint_str.replace("checkpoint-", ""))
                if checkpoint > checkpoint_sn:
                    checkpoint_sn = checkpoint
        if auto_resume_from_checkpoint.upper() == "YES":
            if checkpoint_sn > 0:
                model.gradient_checkpointing_enable()
                model.enable_input_require_grads()
                checkpoint_directory = os.path.join(output_dir, "checkpoint-" + str(checkpoint_sn))
                print("resume checkpoint from  checkpoint-" + str(checkpoint_sn))
                trainer.train(resume_from_checkpoint=checkpoint_directory)
            else:
                trainer.train()
        else:
            if auto_resume_from_checkpoint.isdigit():
                if int(auto_resume_from_checkpoint) > 0:
                    checkpoint_sn = int(auto_resume_from_checkpoint)
                    model.gradient_checkpointing_enable()
                    model.enable_input_require_grads()
                    checkpoint_directory = os.path.join(output_dir, "checkpoint-" + str(checkpoint_sn))
                    print("resume checkpoint from  checkpoint-" + str(checkpoint_sn))
                    trainer.train(resume_from_checkpoint=checkpoint_directory)
            else:
                print(auto_resume_from_checkpoint,
                      "The specified checkpoint (" + auto_resume_from_checkpoint + ") was not found. Please check the available checkpoints in the output directory.")

    if test_dataset is not None:
        trainer.predict(test_dataset)

if __name__ == '__main__':
    app()
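
The training script above expects each line of train.jsonl / val.jsonl / test.jsonl to be a JSON object with a messages list, using the roles handled in process_batch (system, user, assistant, plus observation for tool results, and an optional tools field on the system message). A minimal sketch for writing one such record (the conversation text is an invented example):

import json

# One training sample in the conversation format read by process_batch (batch['messages']).
sample = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a one-sentence slogan for a smart watch."},
        {"role": "assistant", "content": "Time, health and messages - all on one wrist."},
    ]
}

# Append as one JSON object per line, matching the '.jsonl' format checked in _load_datasets.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(sample, ensure_ascii=False) + "\n")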

Configuration file description:

The fine-tuning configuration files live in the configs directory and include the following:

  1. ds_zero_2.json / ds_zero_3.json: DeepSpeed configuration files.
  2. `lora.yaml / ptuning_v2.yaml / sft.yaml`: configuration files for the different fine-tuning methods, covering model, optimizer and training parameters. The most important entries are explained below (see the sketch after this list for how peft_config is consumed):
     • data_config section
       • train_file: path to the training dataset.
       • val_file: path to the validation dataset.
       • test_file: path to the test dataset.
       • num_proc: number of processes used when loading data.
     • max_input_length: maximum length of the input sequence.
     • max_output_length: maximum length of the output sequence.
     • training_args section
       • output_dir: directory for saving the model and other outputs.
       • max_steps: maximum number of training steps.
       • per_device_train_batch_size: training batch size per device (e.g. per GPU).
       • dataloader_num_workers: number of workers used for data loading.
       • remove_unused_columns: whether to drop unused columns from the data.
       • save_strategy: checkpoint saving strategy (e.g. every N steps).
       • save_steps: save a checkpoint every N steps.
       • log_level: logging level (e.g. info).
       • logging_strategy: logging strategy.
       • logging_steps: log every N steps.
       • per_device_eval_batch_size: evaluation batch size per device.
       • evaluation_strategy: evaluation strategy (e.g. every N steps).
       • eval_steps: evaluate every N steps.
       • predict_with_generate: whether to use generation for prediction.
     • generation_config section
       • max_new_tokens: maximum number of new tokens to generate.
     • peft_config section
       • peft_type: the parameter-efficient tuning method to use (LORA and PREFIX_TUNING are supported).
       • task_type: the task type, here causal language modeling (do not change).
       • LoRA parameters:
         • r: the LoRA rank.
         • lora_alpha: the LoRA scaling factor.
         • lora_dropout: dropout probability applied in the LoRA layers.
       • P-Tuning V2 parameters:
         • num_virtual_tokens: number of virtual tokens.
         • num_attention_heads: 2: number of attention heads used by P-Tuning V2 (do not change).
         • token_dim: 256: token dimension used by P-Tuning V2 (do not change).
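
The peft_config block is passed straight to peft.get_peft_config inside FinetuningConfig.from_dict in the core code, so it resolves to an ordinary PEFT config object. A minimal sketch of that mapping, using the LoRA values from lora.yaml below:

from peft import get_peft_config

# This dict mirrors the peft_config section of lora.yaml.
peft_config = get_peft_config(config_dict={
    "peft_type": "LORA",
    "task_type": "CAUSAL_LM",
    "r": 4,
    "lora_alpha": 4,
    "lora_dropout": 0.1,
})
print(type(peft_config).__name__)  # LoraConfig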

lora.yaml 

data_config:
  train_file: train.jsonl
  val_file: val.jsonl
  test_file: test.jsonl
  num_proc: 1
max_input_length: 512
max_output_length: 512
training_args:
  # see `transformers.Seq2SeqTrainingArguments`
  output_dir: ./output
  max_steps: 1
  # needs to be tuned for your dataset
  learning_rate: 5e-4
  # settings for data loading
  per_device_train_batch_size: 1
  dataloader_num_workers: 4
  remove_unused_columns: false
  # settings for saving checkpoints
  save_strategy: steps
  save_steps: 1
  # settings for logging
  log_level: info
  logging_strategy: steps
  logging_steps: 10
  # settings for evaluation
  per_device_eval_batch_size: 1
  evaluation_strategy: steps
  eval_steps: 500
  # settings for optimizer
  # adam_epsilon: 1e-6
  # uncomment the following line to detect nan or inf values
  # debug: underflow_overflow
  predict_with_generate: true
  # see `transformers.GenerationConfig`
  generation_config:
    max_new_tokens: 512
  # set your absolute deepspeed path here
  #deepspeed: ds_zero_2.json
peft_config:
  peft_type: LORA
  task_type: CAUSAL_LM
  r: 4
  lora_alpha: 4
  lora_dropout: 0.1
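
With configs/lora.yaml and the three .jsonl files in place, a LoRA run is launched through the script's positional arguments defined in main() above, e.g. python finetune.py <data_dir> <model_dir or Hugging Face model id> configs/lora.yaml, optionally followed by yes to resume from the newest checkpoint in output_dir (the file name finetune.py is assumed here; use whatever name the core code was saved under).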

ptuning_v2.yaml

data_config:
  train_file: train.jsonl
  val_file: val.jsonl
  test_file: test.jsonl
  num_proc: 1
max_input_length: 128
max_output_length: 128
training_args:
  # see `transformers.Seq2SeqTrainingArguments`
  output_dir: ./output
  max_steps: 1
  # needs to be tuned for your dataset
  learning_rate: 5e-4
  # settings for data loading
  per_device_train_batch_size: 1
  dataloader_num_workers: 4
  remove_unused_columns: false
  # settings for saving checkpoints
  save_strategy: steps
  save_steps: 1
  # settings for logging
  log_level: info
  logging_strategy: steps
  logging_steps: 500
  # settings for evaluation
  per_device_eval_batch_size: 1
  evaluation_strategy: steps
  eval_steps: 500
  # settings for optimizer
  # adam_epsilon: 1e-6
  # uncomment the following line to detect nan or inf values
  # debug: underflow_overflow
  predict_with_generate: true
  # see `transformers.GenerationConfig`
  generation_config:
    max_new_tokens: 512
  # set your absolute deepspeed path here
  #deepspeed: ds_zero_3.json
peft_config:
  peft_type: PREFIX_TUNING
  task_type: CAUSAL_LM
  num_virtual_tokens: 512
  num_attention_heads: 2
  token_dim: 256

sft.yaml

data_config:
  train_file: train.jsonl
  val_file: val.jsonl
  test_file: test.jsonl
  num_proc: 1
max_input_length: 256
max_output_length: 512
training_args:
  # see `transformers.Seq2SeqTrainingArguments`
  output_dir: ./output
  max_steps: 1
  # needs to be tuned for your dataset
  learning_rate: 5e-5
  # settings for data loading
  per_device_train_batch_size: 1
  dataloader_num_workers: 4
  remove_unused_columns: false
  # settings for saving checkpoints
  save_strategy: steps
  save_steps: 1
  # settings for logging
  log_level: info
  logging_strategy: steps
  logging_steps: 10
  # settings for evaluation
  per_device_eval_batch_size: 1
  evaluation_strategy: steps
  eval_steps: 500
  # settings for optimizer
  # adam_epsilon: 1e-6
  # uncomment the following line to detect nan or inf values
  # debug: underflow_overflow
  predict_with_generate: true
  generation_config:
    max_new_tokens: 512
  # set your absolute deepspeed path here
  deepspeed: configs/ds_zero_3.json
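
Since sft.yaml enables deepspeed: configs/ds_zero_3.json, full-parameter SFT is intended to run data-parallel across all eight GPUs; with the Hugging Face Trainer this is normally started through a distributed launcher such as torchrun (for example torchrun --nproc_per_node=8 finetune.py <data_dir> <model_dir> configs/sft.yaml, again assuming the script is saved as finetune.py), so that DeepSpeed ZeRO-3 can shard parameters and optimizer states across the processes.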

ds_zero_2.json

{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "bf16": {
        "enabled": "auto"
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": true,
        "allgather_bucket_size": 5e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 5e8,
        "contiguous_gradients": true
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}

ds_zero_3.json

{
  "train_micro_batch_size_per_gpu": "auto",
  "zero_allow_untested_optimizer": true,
  "bf16": {
    "enabled": "auto"
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "zero_optimization": {
    "stage": 3,
    "allgather_partitions": true,
    "allgather_bucket_size": 5e8,
    "reduce_scatter": true,
    "contiguous_gradients": true,
    "overlap_comm": true,
    "sub_group_size": 1e9,
    "reduce_bucket_size": "auto",
    "stage3_prefetch_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_max_live_parameters": 1e6, 
    "stage3_max_reuse_distance": 1e6, 
    "stage3_gather_16bit_weights_on_model_save": true
  }
}

III. Hardware and Environment Setup

All figures in this document were measured on the hardware below; the actual requirements and GPU memory usage of your environment may differ slightly, so treat your own runs as authoritative. Test hardware:

  • OS: Ubuntu 22.04
  • Memory: 512GB
  • Python: 3.10.12 / 3.12.3 (with Python 3.12.3, nltk currently has to be installed from its git source)
  • CUDA Version: 12.3
  • GPU Driver: 535.104.05
  • GPU: NVIDIA A100-SXM4-80GB * 8
  • Fine-tuning method vs. GPU memory usage and checkpoint size:

| Fine-tuning method | GPU memory usage | Checkpoint size |
|---|---|---|
| LoRA (PEFT) | 21531 MiB | 17 MB |
| P-Tuning V2 (PEFT) | 21381 MiB | 121 MB |
| SFT (ZeRO-3) | 80935 MiB per GPU (8 GPUs required) | 20 GB |
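
The list below is the repository's Python dependency file; assuming it is saved as requirements.txt, a plain pip install -r requirements.txt covers both the fine-tuning and the inference scripts in this article.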

    torch>=2.3.0
    torchvision>=0.18.0
    transformers==4.40.0
    huggingface-hub>=0.23.1
    sentencepiece>=0.2.0
    pydantic>=2.7.1
    timm>=0.9.16
    tiktoken>=0.7.0
    accelerate>=0.30.1
    sentence_transformers>=2.7.0

    # web demo
    gradio>=4.33.0

    # openai demo
    openai>=1.31.1
    einops>=0.7.0
    sse-starlette>=2.1.0

    # INT4
    bitsandbytes>=0.43.1

    # PEFT model, not needed if you don't use a PEFT fine-tuned model.
    # peft>=0.11.0

    jieba>=0.42.1
    datasets>=2.19.1
    peft>=0.11.0
    deepspeed>=0.13.3
    nltk==3.8.1 

IV. Inference Code

    from pathlib import Path
    from typing import Annotated, Union
    
    import typer
    from peft import AutoPeftModelForCausalLM, PeftModelForCausalLM
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        PreTrainedModel,
        PreTrainedTokenizer,
        PreTrainedTokenizerFast
    )
    
    ModelType = Union[PreTrainedModel, PeftModelForCausalLM]
    TokenizerType = Union[PreTrainedTokenizer, PreTrainedTokenizerFast]
    
    app = typer.Typer(pretty_exceptions_show_locals=False)
    
    
    def load_model_and_tokenizer(
            model_dir: Union[str, Path], trust_remote_code: bool = True
    ) -> tuple[ModelType, TokenizerType]:
        model_dir = Path(model_dir).expanduser().resolve()
        if (model_dir / 'adapter_config.json').exists():
            model = AutoPeftModelForCausalLM.from_pretrained(
                model_dir, trust_remote_code=trust_remote_code, device_map='auto'
            )
            tokenizer_dir = model.peft_config['default'].base_model_name_or_path
        else:
            model = AutoModelForCausalLM.from_pretrained(
                model_dir, trust_remote_code=trust_remote_code, device_map='auto'
            )
            tokenizer_dir = model_dir
        tokenizer = AutoTokenizer.from_pretrained(
            tokenizer_dir, trust_remote_code=trust_remote_code, encode_special_tokens=True, use_fast=False
        )
        return model, tokenizer
    
    
    @app.command()
    def main(
            model_dir: Annotated[str, typer.Argument(help='Path to the fine-tuned checkpoint directory or to the base model')],
    ):
        # messages = [
        #     {
        #         "role": "system", "content": "",
        #         "tools":
        #             [
        #                 {
        #                     "type": "function",
        #                     "function": {
        #                         "name": "create_calendar_event",
        #                         "description": "Create a new calendar event",
        #                         "parameters": {
        #                             "type": "object",
        #                             "properties": {
        #                                 "title": {
        #                                     "type": "string",
        #                                     "description": "The title of the event"
        #                                 },
        #                                 "start_time": {
        #                                     "type": "string",
        #                                     "description": "The start time of the event in the format YYYY-MM-DD HH:MM"
        #                                 },
        #                                 "end_time": {
        #                                     "type": "string",
        #                                     "description": "The end time of the event in the format YYYY-MM-DD HH:MM"
        #                                 }
        #                             },
        #                             "required": [
        #                                 "title",
        #                                 "start_time",
        #                                 "end_time"
        #                             ]
        #                         }
        #                     }
        #                 }
        #             ]
    
        #     },
        #     {
        #         "role": "user",
        #         "content": "Can you help me create a calendar event for my meeting tomorrow? The title is \"Team Meeting\". It starts at 10:00 AM and ends at 11:00 AM."
        #     },
        # ]
        messages = [
            {
                "role": "user",
                "content": "창의적인 로그라인 하나 만들어",
            },
        ]
        model, tokenizer = load_model_and_tokenizer(model_dir)
        inputs = tokenizer.apply_chat_template(
            messages,
            add_generation_prompt=True,
            tokenize=True,
            return_tensors="pt"
        ).to(model.device)
        generate_kwargs = {
            "input_ids": inputs,
            "max_new_tokens": 1024,
            "do_sample": True,
            "top_p": 0.8,
            "temperature": 0.8,
            "repetition_penalty": 1.2,
            "eos_token_id": model.config.eos_token_id,
        }
        outputs = model.generate(**generate_kwargs)
        response = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True).strip()
        print("=========")
        print(response)
    
    
    if __name__ == '__main__':
        app()
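
The inference script takes a single positional argument, e.g. python inference.py <model_dir> (the file name inference.py is assumed here). model_dir may point either at a fine-tuned checkpoint directory, which contains adapter_config.json for LoRA / P-Tuning V2 runs and is loaded through AutoPeftModelForCausalLM, or directly at the base GLM-4-9B-Chat weights.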

     

Author: 医学小达人 (compiled and shared via 物联沃-IOTWORD)