import itertools
import operator
from typing import Any, List, Optional, Sequence, Tuple, TYPE_CHECKING, Union

import torch
import torch.nn.functional as F
from torch import _VF, Tensor
from torch._C import _add_docstr
from torch._jit_internal import _overload as overload, boolean_dispatch
from torch._lowrank import pca_lowrank, svd_lowrank
from torch.overrides import (
    handle_torch_function,
    has_torch_function,
    has_torch_function_unary,
    has_torch_function_variadic,
)

__all__ = [
    'atleast_1d', 'atleast_2d', 'atleast_3d', 'align_tensors', 'broadcast_shapes',
    'broadcast_tensors', 'cartesian_prod', 'block_diag', 'cdist', 'chain_matmul',
    'einsum', 'istft', 'lu', 'norm', 'meshgrid', 'pca_lowrank', 'split', 'stft',
    'svd_lowrank', 'tensordot', 'unique', 'unique_consecutive',
    'unravel_index',
]


def broadcast_tensors(*tensors):
    r"""broadcast_tensors(*tensors) -> List of Tensors

    Broadcasts the given tensors according to :ref:`broadcasting-semantics`.

    Args:
        *tensors: any number of tensors of the same type

    .. warning::

        More than one element of a broadcasted tensor may refer to a single
        memory location. As a result, in-place operations (especially ones that
        are vectorized) may result in incorrect behavior. If you need to write
        to the tensors, please clone them first.

    Example::

        >>> x = torch.arange(3).view(1, 3)
        >>> y = torch.arange(2).view(2, 1)
        >>> a, b = torch.broadcast_tensors(x, y)
        >>> a.size()
        torch.Size([2, 3])
        >>> a
        tensor([[0, 1, 2],
                [0, 1, 2]])
    """
    if has_torch_function(tensors):
        return handle_torch_function(broadcast_tensors, tensors, *tensors)
    return _VF.broadcast_tensors(tensors)  # type: ignore[attr-defined]
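
# Illustrative usage sketch (added alongside the recovered source; not part of
# torch.functional itself). It demonstrates the aliasing warning in the
# docstring above: broadcast views can map many output elements onto a single
# storage location, so clone them before any in-place write. The helper name
# `_broadcast_aliasing_demo` is hypothetical.
def _broadcast_aliasing_demo() -> None:
    x = torch.arange(3).view(1, 3)
    y = torch.arange(2).view(2, 1)
    a, _ = torch.broadcast_tensors(x, y)
    # `a` is an expanded view of `x`: both of its rows share x's storage.
    writable = a.clone()  # detach from the shared storage before writing
    writable[0, 0] = 99  # safe: mutates only the copy
    assert x[0, 0].item() == 0  # the original tensor is untouched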
r c s� t j�� s�d}| D ]"}t|tt jf�r|dk rd}q t|ttf�r+t|�}||k r+|}q dg| }ddl m
} | D ]^}t|tt jf�rF|f}t|ttf�r�tddt|� d�D ]:}|| dk rntd|| � d|| � d���||| dk�s�||| || k�r�qW|| dkr�td��|| ||<
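
# Illustrative usage sketch (added alongside the recovered source; not part of
# torch.functional itself). It shows that torch.broadcast_shapes computes the
# same result shape as broadcast_tensors without allocating any tensors. The
# helper name `_broadcast_shapes_demo` is hypothetical.
def _broadcast_shapes_demo() -> None:
    assert torch.broadcast_shapes((2, 1), (3,)) == torch.Size([2, 3])
    # Consistent with the tensor-level API defined earlier in this module:
    a, _ = torch.broadcast_tensors(torch.zeros(2, 1), torch.zeros(3))
    assert a.shape == torch.Size([2, 3])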