import contextlib
import warnings
from typing import Generator

import torch
from torch._C import default_generator

__all__ = [
    "set_rng_state",
    "get_rng_state",
    "manual_seed",
    "seed",
    "initial_seed",
    "fork_rng",
]


def set_rng_state(new_state: torch.Tensor) -> None:
    r"""Sets the random number generator state.

    .. note:: This function only works for CPU. For CUDA, please use
        :func:`torch.manual_seed`, which works for both CPU and CUDA.

    Args:
        new_state (torch.ByteTensor): The desired state
    """
    default_generator.set_state(new_state)


def get_rng_state() -> torch.Tensor:
    r"""Returns the random number generator state as a `torch.ByteTensor`.

    .. note:: The returned state is for the default generator on CPU only.

    See also: :func:`torch.random.fork_rng`.
    """
    return default_generator.get_state()


def manual_seed(seed) -> torch._C.Generator:
    r"""Sets the seed for generating random numbers on all devices. Returns a
    `torch.Generator` object.

    Args:
        seed (int): The desired seed. Value must be within the inclusive range
            `[-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]`. Otherwise, a RuntimeError
            is raised. Negative inputs are remapped to positive values with the formula
            `0xffff_ffff_ffff_ffff + seed`.
    """
    seed = int(seed)

    import torch.cuda

    if not torch.cuda._is_in_bad_fork():
        torch.cuda.manual_seed_all(seed)

    import torch.mps

    if not torch.mps._is_in_bad_fork():
        torch.mps.manual_seed(seed)

    import torch.xpu

    if not torch.xpu._is_in_bad_fork():
        torch.xpu.manual_seed_all(seed)

    _seed_custom_device(seed)

    return default_generator.manual_seed(seed)


def seed() -> int:
    r"""Sets the seed for generating random numbers to a non-deterministic
    random number on all devices. Returns a 64 bit number used to seed the RNG.
    """
    seed = default_generator.seed()

    import torch.cuda

    if not torch.cuda._is_in_bad_fork():
        torch.cuda.manual_seed_all(seed)

    import torch.mps

    if not torch.mps._is_in_bad_fork():
        torch.mps.manual_seed(seed)

    import torch.xpu

    if not torch.xpu._is_in_bad_fork():
        torch.xpu.manual_seed_all(seed)

    _seed_custom_device(seed)

    return seed


def _seed_custom_device(seed) -> None:
    r"""Sets the seed to generate random numbers for custom device.

    Args:
        seed (int): The desired seed.

    See [Note: support the custom device with privateuse1]
    """
    seed = int(seed)
    custom_backend_name = torch._C._get_privateuse1_backend_name()
    if hasattr(torch, custom_backend_name):
        custom_device_mod = getattr(torch, custom_backend_name)
        _bad_fork_name = "_is_in_bad_fork"
        _seed_all_name = "manual_seed_all"
        if hasattr(custom_device_mod, _bad_fork_name) and hasattr(
            custom_device_mod, _seed_all_name
        ):
            if not getattr(custom_device_mod, _bad_fork_name)():
                getattr(custom_device_mod, _seed_all_name)(seed)
        else:
            message = f"Set seed for `{custom_backend_name}` device does not take effect, please add API's "
            message += f"`{_bad_fork_name}` and `{_seed_all_name}` to `{custom_backend_name}` device module."
            warnings.warn(message, UserWarning, stacklevel=3)


def initial_seed() -> int:
    r"""Returns the initial seed for generating random numbers as a
    Python `long`.

    .. note:: The returned seed is for the default generator on CPU only.
    """
    return default_generator.initial_seed()


_fork_rng_warned_already = False


@contextlib.contextmanager
def fork_rng(
    devices=None,
    enabled=True,
    _caller="fork_rng",
    _devices_kw="devices",
    device_type="cuda",
) -> Generator:
    """
    Forks the RNG, so that when you return, the RNG is reset
    to the state that it was previously in.

    Args:
        devices (iterable of Device IDs): devices for which to fork
            the RNG. CPU RNG state is always forked. By default, :meth:`fork_rng` operates
            on all devices, but will emit a warning if your machine has a lot
            of devices, since this function will run very slowly in that case.
            If you explicitly specify devices, this warning will be suppressed

        enabled (bool): if ``False``, the RNG is not forked. This is a convenience
            argument for easily disabling the context manager without having
            to delete it and unindent your Python code under it.

        device_type (str): device type str, default is `cuda`. As for custom device,
            see details in [Note: support the custom device with privateuse1]
    """
    device_type = torch.device(device_type).type
    device_mod = getattr(torch, device_type, None)
    if device_mod is None:
        raise RuntimeError(
            f"torch has no module of `{device_type}`, you should register "
            "a module by `torch._register_device_module`."
        )

    global _fork_rng_warned_already

    if not enabled:
        yield
        return

    if devices is None:
        num_devices = device_mod.device_count()
        if num_devices > 1 and not _fork_rng_warned_already:
            message = (
                f"{device_type.upper()} reports that you have {num_devices} available devices, and "
                f"you have used {_caller} without explicitly specifying which devices are being used. "
                f"For safety, we initialize *every* {device_type.upper()} device by default, which can "
                f"be quite slow if you have a lot of {device_type.upper()}s. If you know that you are only "
                f"making use of a few {device_type.upper()} devices, set the environment variable "
                f"{device_type.upper()}_VISIBLE_DEVICES or the '{_devices_kw}' keyword argument of {_caller} "
                "with the set of devices you are actually using. For example, if you are using CPU only, "
                f"set {device_type.upper()}_VISIBLE_DEVICES= or devices=[]; if you are using device 0 only, "
                f"set {device_type.upper()}_VISIBLE_DEVICES=0 or devices=[0]. To initialize all devices "
                f"and suppress this warning, set the '{_devices_kw}' keyword argument to "
                f"`range(torch.{device_type}.device_count())`."
            )
            warnings.warn(message)
            _fork_rng_warned_already = True
        devices = list(range(num_devices))
    else:
        # Protect against the user passing us a generator; we need to traverse
        # this multiple times, but a generator is exhausted after one pass.
        devices = list(devices)

    cpu_rng_state = get_rng_state()
    device_rng_states = [device_mod.get_rng_state(device) for device in devices]

    try:
        yield
    finally:
        set_rng_state(cpu_rng_state)
        for device, device_rng_state in zip(devices, device_rng_states):
            device_mod.set_rng_state(device_rng_state, device)