一、Overview
1. The Whisper processing pipeline
Having run the example model last time, we now have a rough picture of how the Whisper model is invoked and what the concrete stages of its speech recognition are. The recognition stages and the model call can be broken down as in the figure below:
First, the grey dialog box on the left: it lists the kinds of audio Whisper accepts and the tasks it can perform on them, e.g. speech-to-text transcription, translation between languages, and noise/non-speech detection.
Next, the flow chart on the right. It has two main parts: the encoder and the decoder.
The encoder and the decoder each take their own input, and the output sits at the end of the decoder. The encoder's main input is the audio converted into a Mel-scale (log-Mel) spectrogram together with positional encodings; the decoder's main input is the token sequence of the multitask training format. (This is covered in detail in the tokenizer section.)
Both the encoder and the decoder are stacks of processing blocks; each block is a combination of attention and an MLP. The MLP (multi-layer perceptron) receives the features and integrates the information handed over by the previous layer; attention dynamically weighs how strongly the different inputs relate to each other and adjusts the weighting accordingly.
The final output is the prediction of the next token.
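To make that data flow concrete, here is a minimal sketch of one encoder/decoder step through the Hugging Face transformers API. The checkpoint name "openai/whisper-tiny" and the silent 1-second clip are placeholders for illustration only; any local Whisper checkpoint works the same way.
- # A minimal sketch of the encoder/decoder data flow described above.
- import torch
- from transformers import WhisperProcessor, WhisperForConditionalGeneration
-
- processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
- model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
-
- audio = torch.zeros(16000).numpy()  # 1 s of silence at 16 kHz stands in for a real recording
-
- # Encoder input: the feature extractor turns the waveform into a log-Mel spectrogram.
- inputs = processor.feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
-
- # Decoder input: the multitask prompt tokens (<|startoftranscript|>, language, task, ...).
- decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
-
- # One forward pass; logits[:, -1, :] is the distribution over the next token,
- # i.e. the "prediction of the next token" mentioned above.
- with torch.no_grad():
-     logits = model(input_features=inputs.input_features, decoder_input_ids=decoder_input_ids).logits
- print(logits.shape)  # (batch, decoder length, vocab size)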
While running the Whisper example we mainly worked with three files: the training script (train.py), the launch script with the run-time arguments (run.sh), and the Whisper source code (src/transformers/models/whisper). They are explained in detail below.
二、Detailed code walkthrough
(一) run.sh argument breakdown
- python train.py \
- --model_name_or_path="/mnt/e/王嘟嘟/wsl/asr_large_model/whisper_model/whisper-tiny" \ # path of the model to load
- --dataset_name="mozilla-foundation/common_voice_11_0" \ # name of the dataset to load
- --dataset_config_name="hi" \ # dataset config: the named configuration of the dataset.
- It usually describes the parameters and settings of the dataset used for training or processing: where the data lives, its format, how it is split (train/validation/test ratios), the preprocessing steps (cleaning, conversion), and so on.
- Making the dataset config explicit ensures the model or algorithm reads, processes and uses the given dataset correctly.
- --language="hindi" \ # language to fine-tune on
- --train_split_name="test" \ # which split to use for training (here the dataset's "test" split)
- --eval_split_name="test" \ # which split to use for evaluation
- --max_steps="5000" \ # maximum number of training steps
- --output_dir="./whisper-small-hi" \ # output directory
- --per_device_train_batch_size="16" \ # training batch size per device, i.e. how many samples per step
- --gradient_accumulation_steps="2" \ # number of gradient accumulation steps
- --per_device_eval_batch_size="16" \ # evaluation batch size per device
- --logging_steps="25" \ # log every N steps
- --learning_rate="1e-5" \ # learning rate
- --warmup_steps="500" \ # number of warm-up steps for the learning-rate schedule
- --evaluation_strategy="steps" \ # evaluation strategy and interval
- --eval_steps="1000" \
- --save_strategy="steps" \ # checkpoint-saving strategy and interval
- --save_steps="1000" \
- --generation_max_length="225" \ # maximum length generated during evaluation
- --preprocessing_num_workers="16" \ # number of preprocessing worker processes
- --length_column_name="input_length" \
- --max_duration_in_seconds="30" \ # maximum audio length in seconds (longer clips are filtered out)
- --text_column_name="sentence" \ # name of the text column
- --freeze_feature_encoder="False" \ # whether to freeze the feature encoder (False = keep it trainable)
- --gradient_checkpointing \
- --group_by_length \
- --overwrite_output_dir \
- --do_train \
- --do_eval \
- --predict_with_generate \
- --num_train_epochs="1" # an epoch means one full pass over the training data
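A detail worth spelling out about the flags above: the batch size the optimizer effectively sees is per_device_train_batch_size × gradient_accumulation_steps × number of devices. A tiny sketch of that arithmetic (the single-GPU count is an assumption for illustration):
- # Effective batch size implied by the flags above (assuming a single GPU).
- per_device_train_batch_size = 16
- gradient_accumulation_steps = 2
- num_devices = 1  # assumption for illustration
- effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
- print(effective_batch_size)  # 32 samples contribute to each optimizer update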
(二) train.py overall flow: parse arguments ➡️ set up logging ➡️ detect the last checkpoint ➡️ load the dataset ➡️ load the pretrained model, feature extractor and tokenizer ➡️ resample the audio (unify the audio format) ➡️ preprocess the dataset (read audio files as arrays and tokenize the targets) ➡️ load the evaluation metric (a single scoring standard) ➡️ define the data collator (unify the batch input format) ➡️ create a single speech processor ➡️ initialize the trainer ➡️ train ➡️ evaluate ➡️ write the training stats
- #!/usr/bin/env python
- # coding=utf-8
- # Copyright 2021 The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """
- Fine-tuning the library models for sequence to sequence speech recognition.
- """
- # You can also adapt this script on your own sequence to sequence speech
- # recognition task. Pointers for this are left as comments.
- import logging
- import os
- import sys
- import warnings
- from dataclasses import dataclass, field
- from typing import Any, Dict, List, Optional, Union
- import datasets
- import evaluate
- import torch
- from datasets import DatasetDict, load_dataset
- import transformers
- from transformers import (
- AutoConfig,
- AutoFeatureExtractor,
- AutoModelForSpeechSeq2Seq,
- AutoProcessor,
- AutoTokenizer,
- HfArgumentParser,
- Seq2SeqTrainer,
- Seq2SeqTrainingArguments,
- set_seed,
- )
- from transformers.trainer_utils import get_last_checkpoint, is_main_process
- from transformers.utils import check_min_version, send_example_telemetry
- from transformers.utils.versions import require_version
- # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
- check_min_version("4.32.0")
- require_version("datasets>=1.18.0", "To fix: pip install -r examples/pytorch/speech-recognition/requirements.txt")
- logger = logging.getLogger(__name__)
- @dataclass
- class ModelArguments:
- """
- Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
- """
- model_name_or_path: str = field(
- metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
- )
- config_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
- )
- tokenizer_name: Optional[str] = field(
- default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
- )
- feature_extractor_name: Optional[str] = field(
- default=None, metadata={"help": "feature extractor name or path if not the same as model_name"}
- )
- cache_dir: Optional[str] = field(
- default=None,
- metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"},
- )
- use_fast_tokenizer: bool = field(
- default=True,
- metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
- )
- model_revision: str = field(
- default="main",
- metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
- )
- token: str = field(
- default=None,
- metadata={
- "help": (
- "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
- )
- },
- )
- use_auth_token: bool = field(
- default=None,
- metadata={
- "help": "The `use_auth_token` argument is deprecated and will be removed in v4.34. Please use `token`."
- },
- )
- trust_remote_code: bool = field(
- default=False,
- metadata={
- "help": (
- "Whether or not to allow for custom models defined on the Hub in their own modeling files. This option"
- "should only be set to `True` for repositories you trust and in which you have read the code, as it will"
- "execute code present on the Hub on your local machine."
- )
- },
- )
- freeze_feature_encoder: bool = field(
- default=True, metadata={"help": "Whether to freeze the feature encoder layers of the model."}
- )
- freeze_encoder: bool = field(
- default=False, metadata={"help": "Whether to freeze the entire encoder of the seq2seq model."}
- )
- forced_decoder_ids: List[List[int]] = field(
- default=None,
- metadata={
- "help": (
- "A list of pairs of integers which indicates a mapping from generation indices to token indices "
- "that will be forced before sampling. For example, [[0, 123]] means the first generated token "
- "will always be a token of index 123."
- " This mechanism guarantees that specific positions in the generated sequence are fixed in advance, shaping the output."
- )
- },
- )
- suppress_tokens: List[int] = field(
- default=None, metadata={"help": "A list of tokens that will be suppressed at generation."}
- )
- apply_spec_augment: bool = field(
- default=False,
- metadata={
- "help": "Whether to apply *SpecAugment* data augmentation to the input features. This is currently only relevant for Wav2Vec2, HuBERT, WavLM and Whisper models."
- },
- )
- @dataclass
- class DataTrainingArguments:
- """
- Arguments pertaining to what data we are going to input our model for training and eval.
- """
- dataset_name: str = field(
- default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
- )
- dataset_config_name: Optional[str] = field(
- default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
- )
- overwrite_cache: bool = field(
- default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
- )
- preprocessing_num_workers: Optional[int] = field(
- default=None,
- metadata={"help": "The number of processes to use for the preprocessing."},
- )
- max_train_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of training examples to this "
- "value if set."
- )
- # Truncating the number of training examples keeps the run small: it lowers the compute needed,
- # speeds up training and is handy for debugging. Typical ways to do it are taking the first N
- # examples (what `.select(range(N))` does further down), random sampling, or filtering on some
- # condition; whichever is used, the truncated subset should stay representative of the full data.
- },
- )
- max_eval_samples: Optional[int] = field(
- default=None,
- metadata={
- "help": (
- "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
- "value if set."
- )
- },
- )
- audio_column_name: str = field(
- default="audio",
- metadata={"help": "The name of the dataset column containing the audio data. Defaults to 'audio'"},
- )
- text_column_name: str = field(
- default="text",
- metadata={"help": "The name of the dataset column containing the text data. Defaults to 'text'"},
- )
- max_duration_in_seconds: float = field(
- default=20.0,
- metadata={
- "help": (
- "Truncate audio files that are longer than `max_duration_in_seconds` seconds to"
- " 'max_duration_in_seconds`"
- )
- },
- )
- min_duration_in_seconds: float = field(
- default=0.0, metadata={"help": "Filter audio files that are shorter than `min_duration_in_seconds` seconds"}
- )
- preprocessing_only: bool = field(
- default=False,
- metadata={
- "help": (
- "Whether to only do data preprocessing and skip training. This is especially useful when data"
- " preprocessing errors out in distributed training due to timeout. In this case, one should run the"
- " preprocessing in a non-distributed setup with `preprocessing_only=True` so that the cached datasets"
- " can consequently be loaded in distributed training"  # work-around for preprocessing timeouts in distributed training
- )
- },
- )
- train_split_name: str = field(
- default="train",
- metadata={
- "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'"
- },
- )
- eval_split_name: str = field(
- default="test",
- metadata={
- "help": "The name of the training data set split to use (via the datasets library). Defaults to 'train'"
- },
- )
- do_lower_case: bool = field(
- default=True,
- metadata={"help": "Whether the target text should be lower cased."},
- )
- language: str = field(
- default=None,
- metadata={
- "help": (
- "Language for multilingual fine-tuning. This argument should be set for multilingual fine-tuning "
- "only. For English speech recognition, it should be set to `None`."
- )
- },
- )
- task: str = field(
- default="transcribe",
- metadata={"help": "Task, either `transcribe` for speech recognition or `translate` for speech translation."},
- )
- @dataclass
- class DataCollatorSpeechSeq2SeqWithPadding:  # data collator: dynamically pads a batch of incoming samples
- """
- Data collator that will dynamically pad the inputs received.
- Args:
- processor ([`WhisperProcessor`])
- The processor used for processing the data.
- decoder_start_token_id (`int`)  # the token id the decoder starts generating from
- The begin-of-sentence of the decoder.
- forward_attention_mask (`bool`)  # whether the collator should also return an attention mask
- Whether to return attention_mask.
- """
- processor: Any
- decoder_start_token_id: int
- forward_attention_mask: bool
- def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
- # split inputs and labels since they have to be of different lengths and need
- # different padding methods
- model_input_name = self.processor.model_input_names[0]
- input_features = [{model_input_name: feature[model_input_name]} for feature in features]
- label_features = [{"input_ids": feature["labels"]} for feature in features]
- batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
- if self.forward_attention_mask:
- batch["attention_mask"] = torch.LongTensor([feature["attention_mask"] for feature in features])
- labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
- # replace padding with -100 to ignore loss correctly
- labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
- # if bos token is appended in previous tokenization step,
- # cut bos token here as it's append later anyways
- if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item():
- labels = labels[:, 1:]
- batch["labels"] = labels
- return batch
- def main():
- # 1. Parse input arguments
- # See all possible arguments in src/transformers/training_args.py
- # or by passing the --help flag to this script.
- # We now keep distinct sets of args, for a cleaner separation of concerns.
- parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
- # arguments can be read from a single JSON file or parsed from the command line
- if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
- # If we pass only one argument to the script and it's the path to a json file,
- # let's parse it to get our arguments.
- model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
- else:
- model_args, data_args, training_args = parser.parse_args_into_dataclasses()
- if model_args.use_auth_token is not None:
- warnings.warn("The `use_auth_token` argument is deprecated and will be removed in v4.34.", FutureWarning)
- if model_args.token is not None:
- raise ValueError("`token` and `use_auth_token` are both specified. Please set only the argument `token`.")
- model_args.token = model_args.use_auth_token
- # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
- # information sent is the one passed as arguments along with your Python/PyTorch versions.
- # send_example_telemetry("run_speech_recognition_seq2seq", model_args, data_args): tracking example usage lets the maintainers see which features are used most and allocate maintenance resources accordingly
- send_example_telemetry("train", model_args, data_args)
- # 2. Setup logging
- logging.basicConfig(
- format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
- datefmt="%m/%d/%Y %H:%M:%S",
- handlers=[logging.StreamHandler(sys.stdout)],
- )
- log_level = training_args.get_process_log_level()
- logger.setLevel(log_level)
- datasets.utils.logging.set_verbosity(log_level)
- transformers.utils.logging.set_verbosity(log_level)
- transformers.utils.logging.enable_default_handler()
- transformers.utils.logging.enable_explicit_format()
- logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN)
- # Log on each process the small summary:
- logger.warning(
- f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
- f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
- )
- logger.info(f"Training/evaluation parameters {training_args}")
- # Set the verbosity to info of the Transformers logger (on main process only):
- if is_main_process(training_args.local_rank):
- transformers.utils.logging.set_verbosity_info()
- logger.info("Training/evaluation parameters %s", training_args)
- # 3. Detecting last checkpoint and eventually continue from last checkpoint
- last_checkpoint = None
- if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:  # the output dir exists, we are training, and we were not asked to overwrite it
- last_checkpoint = get_last_checkpoint(training_args.output_dir)
- if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
- raise ValueError(
- f"Output directory ({training_args.output_dir}) already exists and is not empty. "
- "Use --overwrite_output_dir to overcome."
- )
- elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
- logger.info(
- f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
- "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
- )
- #### "Detecting last checkpoint and eventually continue from last checkpoint" means the script automatically finds the most
- #### recent checkpoint saved in the output directory and resumes training from it. If a run is interrupted (planned stop,
- #### crash, power failure, ...), training does not have to restart from scratch, which saves a lot of time and compute.
- #### In practice this amounts to loading the previously saved model parameters and trainer state and continuing the training loop.
- # Set seed before initializing model.
- set_seed(training_args.seed)
- ### Setting the random seed before the model is initialized makes the run reproducible: the same seed gives the same initial
- ### weights and the same random-number sequence throughout preprocessing and training, so experiments can be compared,
- ### debugged and reproduced. set_seed() seeds Python, NumPy and torch (similar to calling torch.manual_seed()), which also
- ### keeps distributed runs and re-runs at different times consistent.
-
- # 4. Load dataset
- raw_datasets = DatasetDict()
- if training_args.do_train:
- raw_datasets["train"] = load_dataset(
- data_args.dataset_name,
- data_args.dataset_config_name,
- split=data_args.train_split_name,
- cache_dir=model_args.cache_dir,
- token=model_args.token,
- )
- if training_args.do_eval:
- raw_datasets["eval"] = load_dataset(
- data_args.dataset_name,
- data_args.dataset_config_name,
- split=data_args.eval_split_name,
- cache_dir=model_args.cache_dir,
- token=model_args.token,
- )
- # each example in the loaded dataset contains the audio path (path), the waveform samples (array), the transcript (sentence) and some metadata - it can be thought of as a table
- if data_args.audio_column_name not in next(iter(raw_datasets.values())).column_names:
- raise ValueError(
- f"--audio_column_name '{data_args.audio_column_name}' not found in dataset '{data_args.dataset_name}'. "
- "Make sure to set `--audio_column_name` to the correct audio column - one of "
- f"{', '.join(next(iter(raw_datasets.values())).column_names)}."
- )
- if data_args.text_column_name not in next(iter(raw_datasets.values())).column_names:
- raise ValueError(
- f"--text_column_name {data_args.text_column_name} not found in dataset '{data_args.dataset_name}'. "
- "Make sure to set `--text_column_name` to the correct text column - one of "
- f"{', '.join(next(iter(raw_datasets.values())).column_names)}."
- )
- # 5. Load pretrained model, tokenizer, and feature extractor
- # all three are downloaded through the corresponding Auto* classes
- # Distributed training:
- # The .from_pretrained methods guarantee that only one local process can concurrently
- config = AutoConfig.from_pretrained(
- model_args.config_name if model_args.config_name else model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- token=model_args.token,
- trust_remote_code=model_args.trust_remote_code,
- )
- config.update({"forced_decoder_ids": model_args.forced_decoder_ids, "suppress_tokens": model_args.suppress_tokens})
- # SpecAugment for whisper models
- if getattr(config, "model_type", None) == "whisper":
- config.update({"apply_spec_augment": model_args.apply_spec_augment})
- feature_extractor = AutoFeatureExtractor.from_pretrained(
- model_args.feature_extractor_name if model_args.feature_extractor_name else model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- token=model_args.token,
- trust_remote_code=model_args.trust_remote_code,
- )
- tokenizer = AutoTokenizer.from_pretrained(
- model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
- cache_dir=model_args.cache_dir,
- use_fast=model_args.use_fast_tokenizer,
- revision=model_args.model_revision,
- token=model_args.token,
- trust_remote_code=model_args.trust_remote_code,
- )
- ### How does the tokenizer support 96 languages? (discussed in the tokenizer section below)
- model = AutoModelForSpeechSeq2Seq.from_pretrained(
- model_args.model_name_or_path,
- config=config,
- cache_dir=model_args.cache_dir,
- revision=model_args.model_revision,
- token=model_args.token,
- trust_remote_code=model_args.trust_remote_code,
- )
- if model.config.decoder_start_token_id is None:
- raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined")
- if model_args.freeze_feature_encoder:
- model.freeze_feature_encoder()
- if model_args.freeze_encoder:
- model.freeze_encoder()
- model.model.encoder.gradient_checkpointing = False
- if data_args.language is not None:
- # We only need to set the task id when the language is specified (i.e. in a multilingual setting)
- tokenizer.set_prefix_tokens(language=data_args.language, task=data_args.task)
- # 6. Resample speech dataset if necessary: resampling means converting the audio to the specific sampling rate the model's feature extractor expects
- dataset_sampling_rate = next(iter(raw_datasets.values())).features[data_args.audio_column_name].sampling_rate
- if dataset_sampling_rate != feature_extractor.sampling_rate:
- raw_datasets = raw_datasets.cast_column(
- data_args.audio_column_name, datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate)
- )
- # 7. Preprocessing the datasets.
- # We need to read the audio files as arrays and tokenize the targets.
- max_input_length = data_args.max_duration_in_seconds * feature_extractor.sampling_rate
- min_input_length = data_args.min_duration_in_seconds * feature_extractor.sampling_rate
- audio_column_name = data_args.audio_column_name
- num_workers = data_args.preprocessing_num_workers
- text_column_name = data_args.text_column_name
- model_input_name = feature_extractor.model_input_names[0]
- do_lower_case = data_args.do_lower_case
- # if SpecAugment is used for whisper models, return attention_mask to guide the mask along time axis
- forward_attention_mask = (
- getattr(config, "model_type", None) == "whisper"
- and getattr(config, "apply_spec_augment", False)
- and getattr(config, "mask_time_prob", 0) > 0
- )
- if data_args.max_train_samples is not None:
- raw_datasets["train"] = raw_datasets["train"].select(range(data_args.max_train_samples))
- if data_args.max_eval_samples is not None:
- raw_datasets["eval"] = raw_datasets["eval"].select(range(data_args.max_eval_samples))
- def prepare_dataset(batch):
- # process audio
- sample = batch[audio_column_name]
- inputs = feature_extractor(
- sample["array"], sampling_rate=sample["sampling_rate"], return_attention_mask=forward_attention_mask
- )
- # process audio length
- batch[model_input_name] = inputs.get(model_input_name)[0]
- batch["input_length"] = len(sample["array"])
- if forward_attention_mask:
- batch["attention_mask"] = inputs.get("attention_mask")[0]
- # process targets
- input_str = batch[text_column_name].lower() if do_lower_case else batch[text_column_name]
- batch["labels"] = tokenizer(input_str).input_ids
- return batch
- with training_args.main_process_first(desc="dataset map pre-processing"):
- vectorized_datasets = raw_datasets.map(
- prepare_dataset,
- remove_columns=next(iter(raw_datasets.values())).column_names,
- num_proc=data_args.preprocessing_num_workers,
- desc="preprocess train dataset",
- )
- # filter data that is shorter than min_input_length or longer than
- # max_input_length
- def is_audio_in_length_range(length):
- return length > min_input_length and length < max_input_length
- vectorized_datasets = vectorized_datasets.filter(
- is_audio_in_length_range,
- num_proc=num_workers,
- input_columns=["input_length"],
- )
- # for large datasets it is advised to run the preprocessing on a
- # single machine first with `args.preprocessing_only` since there will mostly likely
- # be a timeout when running the script in distributed mode.
- # In a second step `args.preprocessing_only` can then be set to `False` to load the
- # cached dataset
- if data_args.preprocessing_only:
- cache = {k: v.cache_files for k, v in vectorized_datasets.items()}
- logger.info(f"Data preprocessing finished. Files cached at {cache}.")
- return
-
- # 8. Load metric: load the evaluation metric so everything is scored the same way
- metric = evaluate.load("wer")
- # word error rate (WER)
- def compute_metrics(pred):
- pred_ids = pred.predictions
- pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id
- pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
- # we do not want to group tokens when computing the metrics
- label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)
- wer = metric.compute(predictions=pred_str, references=label_str)
- return {"wer": wer}
- # 9. Create a single speech processor
- # make sure all processes wait until data is saved
- with training_args.main_process_first():
- # only the main process saves them
- if is_main_process(training_args.local_rank):
- # save feature extractor, tokenizer and config
- feature_extractor.save_pretrained(training_args.output_dir)
- tokenizer.save_pretrained(training_args.output_dir)
- config.save_pretrained(training_args.output_dir)
- processor = AutoProcessor.from_pretrained(training_args.output_dir)
- # the processor bundles the feature extractor and the tokenizer: it is later used to preprocess input audio or text into
- # the representation the model expects (e.g. turning a waveform into input features, or tokenizing/encoding text).
- # 10. Define data collator: gathers separate, individual samples into a model-ready batch and pads them so that all samples in one batch have the same dimensions.
- data_collator = DataCollatorSpeechSeq2SeqWithPadding(
- processor=processor,
- decoder_start_token_id=model.config.decoder_start_token_id,
- forward_attention_mask=forward_attention_mask,
- )
- # 11. Initialize Trainer
- trainer = Seq2SeqTrainer(
- model=model,
- args=training_args,
- train_dataset=vectorized_datasets["train"] if training_args.do_train else None,
- eval_dataset=vectorized_datasets["eval"] if training_args.do_eval else None,
- tokenizer=feature_extractor,
- data_collator=data_collator,
- compute_metrics=compute_metrics if training_args.predict_with_generate else None,
- )
- # 12. Training
- if training_args.do_train:
- checkpoint = None
- if training_args.resume_from_checkpoint is not None:
- checkpoint = training_args.resume_from_checkpoint
- elif last_checkpoint is not None:
- checkpoint = last_checkpoint
- train_result = trainer.train(resume_from_checkpoint=checkpoint)
- trainer.save_model() # Saves the feature extractor too for easy upload
- metrics = train_result.metrics
- max_train_samples = (
- data_args.max_train_samples
- if data_args.max_train_samples is not None
- else len(vectorized_datasets["train"])
- )
- metrics["train_samples"] = min(max_train_samples, len(vectorized_datasets["train"]))
- trainer.log_metrics("train", metrics)
- trainer.save_metrics("train", metrics)
- trainer.save_state()
- # 13. Evaluation
- results = {}
- if training_args.do_eval:
- logger.info("*** Evaluate ***")
- metrics = trainer.evaluate(
- metric_key_prefix="eval",
- max_length=training_args.generation_max_length,
- num_beams=training_args.generation_num_beams,
- )
- max_eval_samples = (
- data_args.max_eval_samples if data_args.max_eval_samples is not None else len(vectorized_datasets["eval"])
- )
- metrics["eval_samples"] = min(max_eval_samples, len(vectorized_datasets["eval"]))
- trainer.log_metrics("eval", metrics)
- trainer.save_metrics("eval", metrics)
- # 14. Write Training Stats
- kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "automatic-speech-recognition"}
- if data_args.dataset_name is not None:
- kwargs["dataset_tags"] = data_args.dataset_name
- if data_args.dataset_config_name is not None:
- kwargs["dataset_args"] = data_args.dataset_config_name
- kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}"
- else:
- kwargs["dataset"] = data_args.dataset_name
- if training_args.push_to_hub:
- trainer.push_to_hub(**kwargs)
- else:
- trainer.create_model_card(**kwargs)
- return results
- if __name__ == "__main__":
- main()
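One detail of the script above deserves a small illustration: in the data collator and in compute_metrics, padded label positions are set to -100 because that is the index PyTorch's cross-entropy loss ignores, and they are mapped back to the pad token before decoding. A minimal sketch (not part of train.py; the tensor values are made up):
- import torch
-
- # Two label sequences padded to the same length; the padded position is -100.
- labels = torch.tensor([[50258, 12074, 50257],
-                        [50258, 50257,  -100]])
- logits = torch.randn(2, 3, 51865)  # (batch, seq_len, vocab_size); 51865 is whisper-tiny's vocabulary size
- loss = torch.nn.functional.cross_entropy(
-     logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100
- )  # the -100 position contributes nothing to the loss
- # before decoding for WER, compute_metrics swaps -100 back to tokenizer.pad_token_id
- print(loss)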
The tokenizer inside Whisper is the "whisper tokenizer" obtained through AutoTokenizer.
So what is special about it, and why is it loaded through AutoTokenizer?
Let's answer the second question first: what is AutoTokenizer?
It can be understood as a transformers class that hands you the right tokenizer implementation; the tokenizers behind it are all collections of token-handling methods that simply target different tasks, and WhisperTokenizer is the one dedicated to speech.
【Background】transformers and "pre-trained language models"
In current NLP research, to cope with language data poverty, researchers began to explore whether NLP is feasible with only small-scale language resources and proposed "pre-trained language models": a model is first "pre-trained" on a large-scale text corpus to build the pre-trained language model, and is then "fine-tuned" on a small task-specific dataset following the principle of transfer learning to obtain a "downstream-task" model. ————quoted from: 冯志伟,丁晓梅.人工智能的发展与大语言模型的对齐[J].语言治理学刊,2024,(01):108-12
1. How does the Whisper tokenizer support 96 languages?
Earlier recognizers such as WeNet build their vocabulary with BPE over characters or phones (character-level or phone-level modelling), whereas Whisper uses byte-level BPE (BBPE, as in tiktoken): everything is reduced to bytes first, so to the computer every language looks the same.
For example: "今天天气真好" ➡️ "今, 天, 天, 气, 真, 好" (character-level modelling), versus "今天天气真好" ➡️ byte sequence ➡️ "今天天气真好" (byte-level modelling).
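To make "byte-level" concrete, a tiny sketch: each of these Chinese characters is three UTF-8 bytes, and byte-level BPE operates on those bytes rather than on the characters themselves.
- # Byte-level view of the example sentence: BBPE sees these bytes, not the characters.
- text = "今天天气真好"
- raw_bytes = text.encode("utf-8")
- print(len(text), "characters ->", len(raw_bytes), "bytes")  # 6 characters -> 18 bytes
- print(list(raw_bytes[:3]))                                   # the three bytes of "今": [228, 187, 138]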
Below is a small program to inspect the tokenizer:
- from transformers import (
- AutoConfig,
- AutoFeatureExtractor,
- AutoModelForSpeechSeq2Seq,
- AutoProcessor,
- AutoTokenizer,
- HfArgumentParser,
- Seq2SeqTrainer,
- Seq2SeqTrainingArguments,
- set_seed,
- )
- tokenizer = AutoTokenizer.from_pretrained("/mnt/e/王嘟嘟/wsl/asr_large_model/whisper_model/whisper-tiny") # path to your model
- print(tokenizer)
In the printed output, the red box holds Whisper's special tokens, i.e. the function/task types (translate, transcribe, and so on); the yellow box holds the language tokens.
2. How the tokenizer is used: load the tokenizer ➡️ encode text in a given language ➡️ decode it back
(1) How are tokens encoded? Both the task and the text are converted into the corresponding sequence of ids.
- from transformers import (
- AutoConfig,
- AutoFeatureExtractor,
- AutoModelForSpeechSeq2Seq,
- AutoProcessor,
- AutoTokenizer,
- HfArgumentParser,
- Seq2SeqTrainer,
- Seq2SeqTrainingArguments,
- set_seed,
- )
- # prepare the text
- input_str = "今天天气真好"
- # load the tokenizer
- tokenizer = AutoTokenizer.from_pretrained("/mnt/e/王嘟嘟/wsl/asr_large_model/whisper_model/whisper-tiny") # path to your model
- # in train.py the prefix tokens are set from the parsed arguments, e.g.:
- # tokenizer.set_prefix_tokens(language="chinese", task="transcribe")
- # encode
- res = tokenizer(input_str).input_ids
Running this gives: 今天天气真好 --> [50258, 50363, 12074, 6135, 42204, 6303, 2131, 50257]
You can see that the sentence has been turned into a series of ids that stand for byte-level tokens.
1️⃣ What does each input_id actually correspond to inside the tokenizer?
The tokenizer has a method `_convert_id_to_token`:
- def _convert_id_to_token(self, index):
-     """
-     Converts an index (integer) in a token (str) using the vocab. Whisper's base tokenizer always decodes OOV
-     tokens as "", thus we do not use the `unk_token` here.
-     """
-     return self.decoder.get(index, "")
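From user code you do not need this private method; convert_ids_to_tokens exposes the same mapping. A short sketch, continuing with the `res` list obtained above:
- # Map each id in `res` back to its (byte-level) token string.
- tokens = tokenizer.convert_ids_to_tokens(res)
- for i, t in zip(res, tokens):
-     print(i, "->", t)
- # the first and last ids map to special tokens such as <|startoftranscript|> and <|endoftext|>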
2️⃣ How should we make sense of the many special tokens?
They can be understood as hints for the model: they tell it what it is currently dealing with (language, task, timestamps, and so on), for example:
def set_prefix_tokens(self, language: str = None, task: str = None, predict_timestamps: bool = None)
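A small sketch of what these prefix tokens do: once a language and task are set, extra special tokens are prepended whenever text is encoded (the exact ids and token strings below are assumptions that depend on the checkpoint's vocabulary):
- # Encode the same text with and without an explicit language/task prefix.
- plain_ids = tokenizer("今天天气真好").input_ids
- tokenizer.set_prefix_tokens(language="chinese", task="transcribe")
- prefixed_ids = tokenizer("今天天气真好").input_ids
- print(tokenizer.convert_ids_to_tokens(plain_ids[:3]))
- print(tokenizer.convert_ids_to_tokens(prefixed_ids[:4]))  # now includes <|zh|> and <|transcribe|>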
(2) Decoding with the tokenizer
- def decode(
- self,
- token_ids,
- skip_special_tokens: bool = False, # whether to drop the special tokens when decoding (False keeps them)
- clean_up_tokenization_spaces: bool = None,
- output_offsets: bool = False,
- time_precision: float = 0.02,
- decode_with_timestamps: bool = False,
- normalize: bool = False,
- basic_normalize: bool = False,
- remove_diacritics: bool = False,
- **kwargs,
- ) -> str:
- res_jie= tokenizer.decode(res)
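A minimal decoding sketch, reusing the `res` ids from the encoding step above; the printed text assumes the same checkpoint:
- # Decode back to text, with and without the special tokens.
- print(tokenizer.decode(res))                            # includes <|startoftranscript|> ... <|endoftext|>
- print(tokenizer.decode(res, skip_special_tokens=True))  # plain text: 今天天气真好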
四、Hand-written code
Our own hand-written version is much simpler than the official training script; it boils down to the following steps: load the data ➡️ initialize the trainer ➡️ train ➡️ evaluate.
(一) Loading the data
Loading the data can be split into loading the dataset, the feature extractor, the tokenizer and the processor. Unlike train.py we do not keep these steps separate; instead we inherit from IterableDataset and prepare all of them in one place.
- import transformers
- from torch.utils.data import IterableDataset
- from tqdm import tqdm
- from transformers import WhisperFeatureExtractor
- import torchaudio
- import torch
- class IterWhisperDataset(IterableDataset):
-     def __init__(self, wave_scp, text, whisper_feature_extractor, whisper_tokenizer):
-         pass
-     def __len__(self):
-         pass
-     def __iter__(self):  # iterated over when the data is fed to the model
-         pass
This class has three main methods. __init__ receives the objects to be loaded as arguments and does the first round of processing, turning them into the dictionary form Whisper finds easy to consume. __len__ returns the number of examples. __iter__ is where the prepared data and helpers come into play: it walks over every example and applies the processing functions to it.
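To see how such a class is eventually consumed, here is a tiny sketch; `dataset` stands for an IterWhisperDataset instance, which is only constructed later in this post:
- # `dataset` stands for an IterWhisperDataset instance (constructed further down, once the
- # feature extractor and tokenizer have been loaded).
- first_example = next(iter(dataset))   # calls __iter__ and pulls one yielded dict
- print(first_example.keys())           # dict_keys(['input_features', 'labels'])
- print(len(dataset))                   # calls __len__, i.e. the number of utterances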
(二) Initializing the trainer
1. The __init__ method: here we initialize the arguments and helper objects the class receives.
(1) The wav.scp file and the matching text file passed in are reorganised into a dictionary, the form Whisper accepts most easily: {id: [wav_path, text], id: [wav_path, text], id: [wav_path, text]}.
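For concreteness, this is the usual Kaldi-style layout of the two input files and the dictionary built from them (the ids and paths are made-up examples):
- # wav.scp - one "<utterance-id> <wav-path>" pair per line, e.g.
- #   utt001 /path/to/audio/utt001.wav
- #   utt002 /path/to/audio/utt002.wav
- # text    - one "<utterance-id> <transcript>" pair per line, e.g.
- #   utt001 今天天气真好
- #   utt002 ...
- # resulting dictionary:
- #   {"utt001": ["/path/to/audio/utt001.wav", "今天天气真好"], "utt002": [...], ...}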
【Tips】Some readers only have .wav audio files and no wav.scp; here is a small script written by a junior labmate that generates one:
- import os
- def save_data(data, filename):
-     # helper that writes the list to a file
-     with open(os.path.join(save_path, filename), 'w', encoding='utf-8') as f:
-         for i in data:
-             f.writelines(i[0] + ' ' + i[1] + '\n')
-     print("%s Saving succeeded!" % filename)
-
- def get_wav_scp():
-     # build the wav.scp list
-     wav_scp = []
-     # walk over the audio files
-     for file_name in os.listdir(data_path):
-         # keep only .wav files
-         if file_name[-3:] == 'wav':
-             wav_scp.append([file_name.split(".")[0], os.path.join(data_path, file_name)])
-
-     save_data(wav_scp, "wav.scp")
- if __name__ == "__main__":
-     # directory containing the audio files
-     data_path = "/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject/data/audio"
-     # directory to save wav.scp into
-     save_path = "/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject/data"
-     # generate wav.scp
-     get_wav_scp()
【Building the dictionary】+【passing in the helpers】: every helper passed in is bound to the instance with the self. prefix, so that it can be reached from every method:
- class IterWhisperDataset(IterableDataset):
-     def __init__(self, wave_scp, text, whisper_feature_extractor, whisper_tokenizer):
-         # build the id -> [wav_path, text] dictionary
-         self.data_list = {}
-         # read the utterance id and audio path from wav.scp
-         with open(wave_scp, "r", encoding="utf-8") as file:
-             for line in tqdm(file.readlines()):
-                 line = line.strip()
-                 idx = line.split(" ")[0]
-                 wav_path = " ".join(line.split(" ")[1:])
-                 self.data_list[idx] = []
-                 self.data_list[idx].append(wav_path)
-         # read the transcripts
-         with open(text, "r", encoding="utf-8") as file:
-             for line in tqdm(file.readlines()):
-                 line = line.strip()
-                 idx = line.split(" ")[0]
-                 text_line = " ".join(line.split(" ")[1:])
-                 self.data_list[idx].append(text_line)
-         self.whisper_feature_extractor = whisper_feature_extractor  # feature extractor passed in
-         self.whisper_tokenizer = whisper_tokenizer                  # tokenizer passed in (used later in __iter__)
-         print("Number of utterances:", len(self.data_list))
2. The __len__ method
- def __len__(self):
- return len(self.data_list)
3. The __iter__ method: in this method we iterate over the objects held by the class and start operating on each one, e.g. preprocessing:
-     def __iter__(self):
-         # walk over all the prepared examples
-         for idx in self.data_list:
-             # audio path
-             wav_path = self.data_list[idx][0]
-             # transcript
-             text = self.data_list[idx][1]
-
-             example = {}
-             # extract the log-Mel features
-             data_audio = torchaudio.load(wav_path)
-             example['input_features'] = self.whisper_feature_extractor(data_audio[0].numpy(), sampling_rate=16000).input_features[0]
-             # tokenize the transcript
-             example['labels'] = self.whisper_tokenizer(text).input_ids[1:]
-             # res_jie = self.whisper_tokenizer.decode(example['labels'])
-             # print("----decoded---->", res_jie)
-             yield example
【Background】How the feature extractor works: load the audio and its sampling rate (torchaudio.load) ➡️ a one-dimensional array of samples ➡️ call the feature extractor
- whisper_feature_extractor=WhisperFeatureExtractor()
The feature extractor passed in here is the one we obtained earlier via the Auto classes: it is an already-defined class, so we simply pass in an instance and use it.
Loading the audio and getting the sampling rate
- data_audio=torchaudio.load("/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject/data/audio/BAC009S0150W0009.wav")#load(path)
Output:
- (whisper) root@小徐的板子:/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject# python3 audio_train_whisper_small.py
- (tensor([[-2.7466e-04, -4.2725e-04, -3.6621e-04, ..., 3.0518e-05,
- 3.0518e-05, 2.1362e-04]]), 16000)
Extracting the features
- print(whisper_feature_extractor(data_audio[0].numpy(),sampling_rate=16000))
Output:
(whisper) root@小徐的板子:/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject# python3 audio_train_whisper_small.py
{'input_features': [array([[-0.11046922, 0.2726825 , 0.1670869 , ..., -1.0561461 ,
-1.0561461 , -1.0561461 ],
[-0.17119169, 0.14174527, -0.10604203, ..., -1.0561461 ,
-1.0561461 , -1.0561461 ],
[-0.37666786, 0.0177812 , -0.22535133, ..., -1.0561461 ,
-1.0561461 , -1.0561461 ],
...,
[-0.7373694 , -0.7566987 , -0.7817011 , ..., -1.0561461 ,
-1.0561461 , -1.0561461 ],
[-0.7835511 , -0.8563596 , -0.7267041 , ..., -1.0561461 ,
-1.0561461 , -1.0561461 ],
[-0.87479997, -1.0272436 , -0.9189577 , ..., -1.0561461 ,
-1.0561461 , -1.0561461 ]], dtype=float32)]}
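A quick sanity check that can be added at this point (a small sketch; the 80×3000 shape is what the default Whisper feature extractor produces: 80 log-Mel bins over 30 seconds of padded audio):
- # Shape check: the extractor pads/truncates every clip to 30 s and returns 80 log-Mel bins x 3000 frames.
- features = whisper_feature_extractor(data_audio[0].numpy(), sampling_rate=16000)
- print(features.input_features[0].shape)  # expected: (80, 3000)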
(三) Training the model
- import transformers
- from torch.utils.data import IterableDataset
- from tqdm import tqdm
- from transformers import (
-     WhisperFeatureExtractor,
-     WhisperTokenizer,
-     AutoProcessor,
-     Seq2SeqTrainer,
-     Seq2SeqTrainingArguments,
-     WhisperForConditionalGeneration,
- )
- import torchaudio
- import torch
- from dataclasses import dataclass, field
- from typing import Any, Dict, List, Optional, Union
-
- @dataclass
- class DataCollatorSpeechSeq2SeqWithPadding:
-     """
-     Data collator that will dynamically pad the inputs received.
-     Args:
-         processor ([`WhisperProcessor`])
-             The processor used for processing the data.
-         decoder_start_token_id (`int`)
-             The begin-of-sentence of the decoder.
-         forward_attention_mask (`bool`)
-             Whether to return attention_mask.
-     """
-     processor: Any
-     decoder_start_token_id: int
-     forward_attention_mask: bool
-     def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
-         # split inputs and labels since they have to be of different lengths and need
-         # different padding methods
-         model_input_name = self.processor.model_input_names[0]
-         input_features = [{model_input_name: feature[model_input_name]} for feature in features]
-         label_features = [{"input_ids": feature["labels"]} for feature in features]
-         batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
-         if self.forward_attention_mask:
-             batch["attention_mask"] = torch.LongTensor([feature["attention_mask"] for feature in features])
-         labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
-         # replace padding with -100 to ignore loss correctly
-         labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
-         # if bos token is appended in previous tokenization step,
-         # cut bos token here as it's append later anyways
-         if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item():
-             labels = labels[:, 1:]
-         batch["labels"] = labels
-         return batch
-
- # prepare the data
- class IterWhisperDataset(IterableDataset):
-     def __init__(self, wav_scp, text, whisper_feature_extractor, whisper_tokenizer):
-         # build the id -> [wav_path, text] dictionary
-         self.data_list = {}
-         # audio paths
-         with open(wav_scp, "r", encoding="utf-8") as file:
-             for line in tqdm(file.readlines()):
-                 line = line.strip()
-                 idx = line.split(" ")[0]
-                 wav_path = " ".join(line.split(" ")[1:])
-                 self.data_list[idx] = []
-                 self.data_list[idx].append(wav_path)
-         # transcripts
-         with open(text, "r", encoding="utf-8") as file:
-             for line in tqdm(file.readlines()):
-                 line = line.strip()
-                 idx = line.split(" ")[0]
-                 text_line = " ".join(line.split(" ")[1:])
-                 self.data_list[idx].append(text_line)
-         self.whisper_feature_extractor = whisper_feature_extractor
-         self.whisper_tokenizer = whisper_tokenizer
-         print("Number of utterances:", len(self.data_list))
-
-     # number of utterances
-     def __len__(self):
-         return len(self.data_list)
-
-     # iterated over when the data is fed to the model
-     def __iter__(self):
-         # walk over all the prepared examples
-         for idx in self.data_list:
-             # audio path
-             wav_path = self.data_list[idx][0]
-             # transcript
-             text = self.data_list[idx][1]
-
-             example = {}
-             # extract the log-Mel features
-             data_audio = torchaudio.load(wav_path)
-             example['input_features'] = self.whisper_feature_extractor(data_audio[0].numpy(), sampling_rate=16000).input_features[0]
-             # tokenize the transcript
-             example['labels'] = self.whisper_tokenizer(text).input_ids[1:]
-             # res_jie = self.whisper_tokenizer.decode(example['labels'])
-             # print("----decoded---->", res_jie)
-             yield example
-
- # base paths
- whisper_model = "/mnt/e/王嘟嘟/wsl/asr_large_model/whisper_model/whisper-tiny"
- train_wav_scp = "/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject/data/wav.scp"
- train_text = "/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject/data/text.txt"
- # feature extractor
- whisper_feature_extractor = WhisperFeatureExtractor.from_pretrained(whisper_model)
- # tokenizer
- whisper_tokenizer = WhisperTokenizer.from_pretrained(whisper_model)
- whisper_tokenizer.set_prefix_tokens(language="chinese", task="transcribe")
- # the data is now ready
- train_data_list = IterWhisperDataset(
-     train_wav_scp,
-     train_text,
-     whisper_feature_extractor,
-     whisper_tokenizer,
- )
- # load the model and processor
- model = WhisperForConditionalGeneration.from_pretrained(whisper_model)
- processor = AutoProcessor.from_pretrained(whisper_model)
- # initialize the trainer
- data_collator = DataCollatorSpeechSeq2SeqWithPadding(
-     processor=processor,
-     decoder_start_token_id=model.config.decoder_start_token_id,
-     forward_attention_mask=False,
- )
- # def compute_metrics(pred):
- #     pred_ids = pred.predictions
- #     pred.label_ids[pred.label_ids == -100] = whisper_tokenizer.pad_token_id
- #     pred_str = whisper_tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
- #     # we do not want to group tokens when computing the metrics
- #     label_str = whisper_tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)
- #     wer = metric.compute(predictions=pred_str, references=label_str)
- #     return {"wer": wer}
- training_args = Seq2SeqTrainingArguments(
-     output_dir="model/v1",  # change to a repo name of your choice
-     per_device_train_batch_size=1,
-     gradient_accumulation_steps=1,  # increase by 2x for every 2x decrease in batch size
-     learning_rate=0.001,
-     warmup_steps=50,
-     num_train_epochs=1,
-     evaluation_strategy="epoch",
-     fp16=False,
-     per_device_eval_batch_size=2,
-     generation_max_length=128,
-     logging_steps=4,  # log every 4 steps
-     remove_unused_columns=False,  # required as the PeftModel forward doesn't have the signature of the wrapped model's forward
-     label_names=["labels"],  # same reason as above
- )
- trainer = Seq2SeqTrainer(
-     model=model,
-     args=training_args,
-     train_dataset=train_data_list,
-     eval_dataset=train_data_list,
-     tokenizer=whisper_feature_extractor,
-     data_collator=data_collator,
-     # compute_metrics=compute_metrics if training_args.predict_with_generate else None,
- )
- # train
- train_result = trainer.train()
- trainer.save_model()  # save the fine-tuned model
- # evaluation follows in the next section
(四) Evaluating the model
- import numpy as np
- from torch.utils.data import IterableDataset
- from tqdm import tqdm
- # imports needed for evaluating the model
- from transformers import (
-     WhisperFeatureExtractor,
-     WhisperTokenizer,
-     WhisperForConditionalGeneration,
- )
- from dataclasses import dataclass, field
- from typing import Any, Dict, List, Optional, Union
- import torch
- import torchaudio
-
- class IterWhisperDataset(IterableDataset):
-     def __init__(self, wav_scp, text, whisper_feature_extractor, whisper_tokenizer):
-         # build the id -> [wav_path, text] dictionary
-         self.data_list = {}
-         # audio paths
-         with open(wav_scp, "r", encoding="utf-8") as file:
-             for line in tqdm(file.readlines()):
-                 line = line.strip()
-                 idx = line.split(" ")[0]
-                 wav_path = " ".join(line.split(" ")[1:])
-                 self.data_list[idx] = []
-                 self.data_list[idx].append(wav_path)
-         # transcripts
-         with open(text, "r", encoding="utf-8") as file:
-             for line in tqdm(file.readlines()):
-                 line = line.strip()
-                 idx = line.split(" ")[0]
-                 text_line = " ".join(line.split(" ")[1:])
-                 self.data_list[idx].append(text_line)
-         self.whisper_feature_extractor = whisper_feature_extractor
-         self.whisper_tokenizer = whisper_tokenizer
-         print("Number of utterances:", len(self.data_list))
-
-     # number of utterances
-     def __len__(self):
-         return len(self.data_list)
-
-     # iterated over when the data is fed to the model
-     def __iter__(self):
-         # walk over all the prepared examples
-         for idx in self.data_list:
-             # audio path
-             wav_path = self.data_list[idx][0]
-             # transcript
-             text = self.data_list[idx][1]
-             example = {}
-             example['idx'] = idx  # keep the utterance id so the decoded output can be matched back later
-             # extract the log-Mel features
-             data_audio = torchaudio.load(wav_path)
-             example['input_features'] = self.whisper_feature_extractor(data_audio[0].numpy(), sampling_rate=16000).input_features[0]
-             # tokenize the transcript
-             example['labels'] = self.whisper_tokenizer(text).input_ids
-             # res_jie = self.whisper_tokenizer.decode(example["labels"], skip_special_tokens=False)
-             # print('decoded----->', res_jie)
-             # print(example["labels"])
-             # print(self.whisper_tokenizer.batch_decode(example["labels"]))
-             yield example
-
- whisper_model = 'model/v1'  # evaluate the model we just fine-tuned
- out_file = open("./result", 'w', encoding='utf-8')  # output file for the transcriptions
- train_wav_scp = "/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject/data/wav.scp"
- train_text = "/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject/data/text.txt"
- # feature extractor
- whisper_feature_extractor = WhisperFeatureExtractor.from_pretrained(whisper_model)
- # tokenizer
- # the fine-tuned folder has no tokenizer files, so we load the tokenizer from the official model
- whisper_tokenizer = WhisperTokenizer.from_pretrained("/mnt/e/王嘟嘟/wsl/asr_large_model/whisper_model/whisper-tiny", language='chinese', task='transcribe')
- # prepare the data
- train_data_list = IterWhisperDataset(
-     train_wav_scp,
-     train_text,
-     whisper_feature_extractor,
-     whisper_tokenizer,
- )
- # load the fine-tuned model
- model = WhisperForConditionalGeneration.from_pretrained(whisper_model)
- # eval_dataloader = DataLoader(common_voice["test"], batch_size=8, collate_fn=data_collator)
- model.eval()  # switch the model to evaluation mode
- for step, batch in enumerate(tqdm(train_data_list)):
-     # print(step, batch)
-     with torch.cuda.amp.autocast():
-         with torch.no_grad():
-             generated_tokens = (
-                 model.generate(
-                     # feed the features of one utterance
-                     input_features=torch.from_numpy(batch["input_features"][np.newaxis, :, :]),
-                     # prompt the decoder with the first few label ids (the special prefix tokens)
-                     decoder_input_ids=torch.from_numpy(np.array([batch["labels"][:4]])),
-                     # limit the generation length
-                     max_new_tokens=255,
-                 )
-                 .cpu()
-                 .numpy()
-             )
-             labels = batch["labels"]
-             labels = np.where(labels != -100, labels, whisper_tokenizer.pad_token_id)
-             # decode hypotheses and references back to text
-             decoded_preds = whisper_tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
-             decoded_labels = whisper_tokenizer.batch_decode(labels, skip_special_tokens=True)
-             out_file.write(batch['idx'] + ' ' + decoded_preds[0] + '\n')
-     del generated_tokens, labels, batch
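The script above only writes the hypotheses to ./result; to score them you can compare that file against the reference text, for example with the same evaluate library train.py uses. A hedged sketch, assuming the file paths used above (for Chinese, character error rate would arguably be the more meaningful metric):
- # Score ./result against the reference text file with WER.
- import evaluate
-
- def read_kaldi_text(path):
-     # "<utterance id> <text>" per line -> {id: text}
-     table = {}
-     with open(path, encoding="utf-8") as f:
-         for line in f:
-             idx, *words = line.strip().split(" ")
-             table[idx] = " ".join(words)
-     return table
-
- hyps = read_kaldi_text("./result")
- refs = read_kaldi_text("/mnt/e/王嘟嘟/wsl/asr_large_model/train_model/diyproject/data/text.txt")
- common = sorted(set(hyps) & set(refs))
- metric = evaluate.load("wer")
- print("WER:", metric.compute(predictions=[hyps[i] for i in common], references=[refs[i] for i in common]))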