import contextlib
import io
import json
import math
import os
import warnings
from dataclasses import asdict, dataclass, field, fields
from datetime import timedelta
from enum import Enum
from pathlib import Path
from typing import Any, Dict, List, Optional, Union

from huggingface_hub import get_full_repo_name
from packaging import version

from .debug_utils import DebugOption
from .trainer_utils import (
    EvaluationStrategy,
    FSDPOption,
    HubStrategy,
    IntervalStrategy,
    SaveStrategy,
    SchedulerType,
)
from .utils import (
    ACCELERATE_MIN_VERSION,
    ExplicitEnum,
    cached_property,
    is_accelerate_available,
    is_ipex_available,
    is_safetensors_available,
    is_sagemaker_dp_enabled,
    is_sagemaker_mp_enabled,
    is_torch_available,
    is_torch_bf16_cpu_available,
    is_torch_bf16_gpu_available,
    is_torch_mlu_available,
    is_torch_mps_available,
    is_torch_musa_available,
    is_torch_neuroncore_available,
    is_torch_npu_available,
    is_torch_tf32_available,
    is_torch_xla_available,
    is_torch_xpu_available,
    logging,
    requires_backends,
)
from .utils.generic import strtobool
from .utils.import_utils import is_optimum_neuron_available
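
# The `is_*_available` / `is_*_enabled` predicates imported above gate the
# optional backend setup below: each block runs only when the corresponding
# dependency (torch, accelerate, torch_xla, AWS Neuron, SageMaker model
# parallelism) is usable in the current environment.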

logger = logging.get_logger(__name__)
log_levels = logging.get_log_levels_dict().copy()
# "passive" (-1) is a sentinel level meaning "leave the current transformers verbosity unchanged".
trainer_log_levels = dict(**log_levels, passive=-1)

if is_torch_available():
    import torch
    import torch.distributed as dist

    from .pytorch_utils import is_torch_greater_or_equal_than_2_0

if is_accelerate_available():
    from accelerate.state import AcceleratorState, PartialState
    from accelerate.utils import DistributedType

    from .trainer_pt_utils import AcceleratorConfig

if is_torch_xla_available():
    import torch_xla.core.xla_model as xm

if is_torch_neuroncore_available(check_device=False):
    # torchrun support on AWS Trainium: when launched by torch elastic
    # (TORCHELASTIC_RUN_ID is set), torch.distributed must go through the XLA backend.
    if os.environ.get("TORCHELASTIC_RUN_ID"):
        if is_optimum_neuron_available():
            logger.info(
                "Make sure that you are performing the training with the NeuronTrainer from optimum[neuron], this "
                "will fail otherwise."
            )
        else:
            logger.warning(
                "Please use the NeuronTrainer from optimum[neuron] instead of the Transformers library to perform "
                "training on AWS Trainium instances. More information here: "
                "https://github.com/huggingface/optimum-neuron"
            )
            import torch_xla.distributed.xla_backend as xbn

            if not isinstance(dist.group.WORLD, xbn.ProcessGroupXla):
                dist.init_process_group(backend="xla")
                if not isinstance(dist.group.WORLD, xbn.ProcessGroupXla):
                    raise AssertionError("Failed to initialize torch.distributed process group using XLA backend.")

if is_sagemaker_mp_enabled():
    import smdistributed.modelparallel.torch as smp

    smp.init()
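
# A minimal sketch (hypothetical, not part of the upstream module) of how the
# `trainer_log_levels` mapping above is meant to be consumed: the Trainer's
# `log_level` argument accepts any of its keys, and the "passive" sentinel (-1)
# leaves the transformers verbosity untouched.
def _example_apply_log_level(name: str) -> None:
    level = trainer_log_levels[name]
    if level != -1:  # -1 == "passive": do not touch the global verbosity
        logging.set_verbosity(level)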

# The compiled dump truncates here: only the definition skeleton (signatures,
# decorators, and base classes) survives in the bytecode. Names and the short
# helper bodies are restored from the matching upstream
# `transformers/training_args.py` source; the larger definitions are left as stubs.


def default_logdir() -> str:
    """
    Same default as PyTorch
    """
    import socket
    from datetime import datetime

    current_time = datetime.now().strftime("%b%d_%H-%M-%S")
    return os.path.join("runs", current_time + "_" + socket.gethostname())


def get_int_from_env(env_keys, default):
    """Returns the first positive env value found in the `env_keys` list or the default."""
    for e in env_keys:
        val = int(os.environ.get(e, -1))
        if val >= 0:
            return val
    return default


def get_xla_device_type(device: "torch.device") -> Optional[str]:
    """
    Returns the xla device type (CPU|GPU|TPU) or None if the device is a non-xla device.
    """
    if is_torch_xla_available():
        if device.type == "cpu":
            return "CPU"
        return xm.xla_real_devices([device])[0].split(":")[0]
    return None


class OptimizerNames(ExplicitEnum):
    """
    Stores the acceptable string identifiers for optimizers.
    """

    ...  # enum members truncated in this dump


# `TrainingArguments` fields that may be passed as JSON strings and are
# converted to dicts before use; contents truncated in this dump (upstream
# lists fields such as "accelerator_config", "fsdp_config", and "deepspeed").
_VALID_DICT_FIELDS = [...]


def _convert_str_dict(passed_value: dict):
    ...  # body truncated in this dump


@dataclass
class TrainingArguments:
    ...  # the full dataclass definition is truncated in this dump


class ParallelMode(Enum):
    ...  # members truncated in this dump
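
# --- Illustrative usage (hypothetical; not part of the upstream module) ---
# Exercises only the helpers whose bodies are fully restored above; the env
# variable names passed to `get_int_from_env` are arbitrary examples.
if __name__ == "__main__":
    # e.g. "runs/Jan01_12-00-00_myhost", the TensorBoard-style default.
    print(default_logdir())
    # First non-negative integer among the listed env vars, else the fallback 1.
    print(get_int_from_env(["WORLD_SIZE", "SLURM_NTASKS"], 1))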