"""Implement various linear algebra algorithms for low rank matrices."""

__all__ = ["svd_lowrank", "pca_lowrank"]

from typing import Optional, Tuple

import torch
from torch import _linalg_utils as _utils, Tensor
from torch.overrides import handle_torch_function, has_torch_function


def get_approximate_basis(
    A: Tensor,
    q: int,
    niter: Optional[int] = 2,
    M: Optional[Tensor] = None,
) -> Tensor:
    """Return tensor :math:`Q` with :math:`q` orthonormal columns such
    that :math:`Q Q^H A` approximates :math:`A`. If :math:`M` is
    specified, then :math:`Q` is such that :math:`Q Q^H (A - M)`
    approximates :math:`A - M`, without instantiating any tensors of
    the size of :math:`A` or :math:`M`.

    .. note:: The implementation is based on Algorithm 4.4 from
              Halko et al., 2009.

    .. note:: For an adequate approximation of a k-rank matrix
              :math:`A`, where k is not known in advance but could be
              estimated, the number of :math:`Q` columns, q, can be
              chosen according to the following criteria: in general,
              :math:`k <= q <= min(2*k, m, n)`. For large low-rank
              matrices, take :math:`q = k + 5..10`. If k is
              relatively small compared to :math:`min(m, n)`, choosing
              :math:`q = k + 0..2` may be sufficient.

    .. note:: To obtain repeatable results, reset the seed for the
              pseudorandom number generator.

    Args:
        A (Tensor): the input tensor of size :math:`(*, m, n)`.

        q (int): the dimension of the subspace spanned by the columns
                 of :math:`Q`.

        niter (int, optional): the number of subspace iterations to
                               conduct; ``niter`` must be a
                               nonnegative integer. In most cases, the
                               default value of 2 is more than enough.

        M (Tensor, optional): the input tensor's mean of size
                              :math:`(*, m, n)`.

    References:
        - Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding
          structure with randomness: probabilistic algorithms for
          constructing approximate matrix decompositions,
          arXiv:0909.4061 [math.NA; math.PR], 2009 (available at
          `arXiv <http://arxiv.org/abs/0909.4061>`_).
    """
    niter = 2 if niter is None else niter
    dtype = _utils.get_floating_dtype(A) if not A.is_complex() else A.dtype
    matmul = _utils.matmul

    # Sample a Gaussian test matrix R and sketch the (optionally mean-shifted)
    # input: X = (A - M) R, then orthonormalize the sketch.
    R = torch.randn(A.shape[-1], q, dtype=dtype, device=A.device)
    X = matmul(A, R)
    if M is not None:
        X = X - matmul(M, R)
    Q = torch.linalg.qr(X).Q

    # Subspace (power) iterations: alternately project through (A - M)^H and
    # (A - M), re-orthonormalizing with QR at every step for stability.
    for i in range(niter):
        X = matmul(A.mH, Q)
        if M is not None:
            X = X - matmul(M.mH, Q)
        Q = torch.linalg.qr(X).Q
        X = matmul(A, Q)
        if M is not None:
            X = X - matmul(M, Q)
        Q = torch.linalg.qr(X).Q
    return Q
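# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the original module): a minimal,
# hypothetical check that ``get_approximate_basis`` returns a matrix Q with
# orthonormal columns such that Q Q^H A reproduces a low-rank A. The matrix
# sizes, tolerances, and the helper name ``_demo_get_approximate_basis`` are
# assumptions made for illustration only.
# ---------------------------------------------------------------------------
def _demo_get_approximate_basis() -> None:
    torch.manual_seed(0)  # reset the PRNG seed for repeatable results
    # Build a rank-5 matrix of size 100 x 40.
    A = torch.randn(100, 5) @ torch.randn(5, 40)
    Q = get_approximate_basis(A, q=10, niter=2)
    # The columns of Q are orthonormal: Q^H Q is close to the identity.
    eye = torch.eye(10, dtype=Q.dtype)
    assert torch.allclose(Q.mH @ Q, eye, atol=1e-4)
    # Since rank(A) = 5 <= q, the projection Q Q^H A reproduces A.
    assert torch.allclose(Q @ (Q.mH @ A), A, atol=1e-3)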
def svd_lowrank(
    A: Tensor,
    q: Optional[int] = 6,
    niter: Optional[int] = 2,
    M: Optional[Tensor] = None,
) -> Tuple[Tensor, Tensor, Tensor]:
    r"""Return the singular value decomposition ``(U, S, V)`` of a matrix,
    batches of matrices, or a sparse matrix :math:`A` such that
    :math:`A \approx U \operatorname{diag}(S) V^{\text{H}}`. In case
    :math:`M` is given, then SVD is computed for the matrix :math:`A - M`.

    .. note:: The implementation is based on Algorithm 5.1 from
              Halko et al., 2009.

    .. note:: For an adequate approximation of a k-rank matrix
              :math:`A`, where k is not known in advance but could be
              estimated, the number of :math:`Q` columns, q, can be
              chosen according to the following criteria: in general,
              :math:`k <= q <= min(2*k, m, n)`. For large low-rank
              matrices, take :math:`q = k + 5..10`. If k is
              relatively small compared to :math:`min(m, n)`, choosing
              :math:`q = k + 0..2` may be sufficient.

    .. note:: This is a randomized method. To obtain repeatable results,
              set the seed for the pseudorandom number generator.

    .. note:: In general, use the full-rank SVD implementation
              :func:`torch.linalg.svd` for dense matrices due to its 10x
              higher performance characteristics. The low-rank SVD
              will be useful for huge sparse matrices that
              :func:`torch.linalg.svd` cannot handle.

    Args:
        A (Tensor): the input tensor of size :math:`(*, m, n)`.

        q (int, optional): a slightly overestimated rank of :math:`A`.

        niter (int, optional): the number of subspace iterations to
                               conduct; ``niter`` must be a nonnegative
                               integer, and defaults to 2.

        M (Tensor, optional): the input tensor's mean of size
                              :math:`(*, m, n)`, which will be broadcast
                              to the size of :math:`A` in this function.

    References:
        - Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding
          structure with randomness: probabilistic algorithms for
          constructing approximate matrix decompositions,
          arXiv:0909.4061 [math.NA; math.PR], 2009 (available at
          `arXiv <https://arxiv.org/abs/0909.4061>`_).
    """
    if not torch.jit.is_scripting():
        tensor_ops = (A, M)
        if not set(map(type, tensor_ops)).issubset(
            (torch.Tensor, type(None))
        ) and has_torch_function(tensor_ops):
            return handle_torch_function(
                svd_lowrank, tensor_ops, A, q=q, niter=niter, M=M
            )
    return _svd_lowrank(A, q=q, niter=niter, M=M)


def _svd_lowrank(
    A: Tensor,
    q: Optional[int] = 6,
    niter: Optional[int] = 2,
    M: Optional[Tensor] = None,
) -> Tuple[Tensor, Tensor, Tensor]:
    # Algorithm 5.1 in Halko et al., 2009.
    q = 6 if q is None else q
    m, n = A.shape[-2:]
    matmul = _utils.matmul
    if M is not None:
        M = M.broadcast_to(A.size())

    # Assume that A is tall; otherwise work with its conjugate transpose and
    # swap U and V at the end.
    if m < n:
        A = A.mH
        if M is not None:
            M = M.mH

    # Project (A - M) onto the approximate range Q and run a dense SVD on the
    # small matrix B = Q^H (A - M).
    Q = get_approximate_basis(A, q, niter=niter, M=M)
    B = matmul(Q.mH, A)
    if M is not None:
        B = B - matmul(Q.mH, M)
    U, S, Vh = torch.linalg.svd(B, full_matrices=False)
    V = Vh.mH
    U = Q.matmul(U)

    if m < n:
        U, V = V, U

    return U, S, V


def pca_lowrank(
    A: Tensor,
    q: Optional[int] = None,
    center: bool = True,
    niter: int = 2,
) -> Tuple[Tensor, Tensor, Tensor]:
    r"""Performs linear Principal Component Analysis (PCA) on a low-rank
    matrix, batches of such matrices, or a sparse matrix.

    This function returns a namedtuple ``(U, S, V)`` which is the
    nearly optimal approximation of a singular value decomposition of
    a centered matrix :math:`A` such that
    :math:`A \approx U \operatorname{diag}(S) V^{\text{H}}`.

    .. note:: The relation of ``(U, S, V)`` to PCA is as follows:

                - :math:`A` is a data matrix with ``m`` samples and
                  ``n`` features;

                - the :math:`V` columns represent the principal directions;

                - :math:`S ** 2 / (m - 1)` contains the eigenvalues of
                  :math:`A^T A / (m - 1)`, which is the covariance of
                  ``A`` when ``center=True`` is provided;

                - ``matmul(A, V[:, :k])`` projects the data onto the first
                  k principal components.

    .. note:: Different from the standard SVD, the sizes of the returned
              matrices depend on the specified rank ``q`` as follows:

                - :math:`U` is an m x q matrix;

                - :math:`S` is a q-vector;

                - :math:`V` is an n x q matrix.

    .. note:: To obtain repeatable results, reset the seed for the
              pseudorandom number generator.

    Args:
        A (Tensor): the input tensor of size :math:`(*, m, n)`.

        q (int, optional): a slightly overestimated rank of
                           :math:`A`. By default, ``q = min(6, m, n)``.

        center (bool, optional): if True, center the input tensor;
                                 otherwise, assume that the input is
                                 already centered.

        niter (int, optional): the number of subspace iterations to
                               conduct; ``niter`` must be a nonnegative
                               integer, and defaults to 2.

    References:
        - Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding
          structure with randomness: probabilistic algorithms for
          constructing approximate matrix decompositions,
          arXiv:0909.4061 [math.NA; math.PR], 2009 (available at
          `arXiv <http://arxiv.org/abs/0909.4061>`_).
    """
    if not torch.jit.is_scripting():
        if type(A) is not torch.Tensor and has_torch_function((A,)):
            return handle_torch_function(
                pca_lowrank, (A,), A, q=q, center=center, niter=niter
            )

    (m, n) = A.shape[-2:]

    if q is None:
        q = min(6, m, n)
    elif not (q >= 0 and q <= min(m, n)):
        raise ValueError(
            f"q(={q}) must be non-negative integer and not greater than min(m, n)={min(m, n)}"
        )
    if not (niter >= 0):
        raise ValueError(f"niter(={niter}) must be non-negative integer")

    dtype = _utils.get_floating_dtype(A)

    if not center:
        return _svd_lowrank(A, q, niter=niter, M=None)

    if _utils.is_sparse(A):
        if len(A.shape) != 2:
            raise ValueError("pca_lowrank input is expected to be 2-dimensional tensor")
        # Build the sparse column-means vector c and form the rank-1 mean
        # matrix M = ones(m, 1) @ c^T without densifying A.
        c = torch.sparse.sum(A, dim=(-2,)) / m
        column_indices = c.indices()[0]
        indices = torch.zeros(
            2,
            len(column_indices),
            dtype=column_indices.dtype,
            device=column_indices.device,
        )
        indices[0] = column_indices
        C_t = torch.sparse_coo_tensor(
            indices, c.values(), (n, 1), dtype=dtype, device=A.device
        )
        ones_m1_t = torch.ones(A.shape[:-2] + (1, m), dtype=dtype, device=A.device)
        M = torch.sparse.mm(C_t, ones_m1_t).mT
        return _svd_lowrank(A, q, niter=niter, M=M)
    else:
        # Dense input: subtract the per-column mean directly.
        C = A.mean(dim=(-2,), keepdim=True)
        return _svd_lowrank(A - C, q, niter=niter, M=None)
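# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the original module): a hypothetical
# check that the factors returned by ``svd_lowrank`` reconstruct a low-rank
# input as A ~ U diag(S) V^H. The matrix sizes, the tolerance, and the helper
# name ``_demo_svd_lowrank`` are assumptions for illustration only.
# ---------------------------------------------------------------------------
def _demo_svd_lowrank() -> None:
    torch.manual_seed(0)  # set the PRNG seed for repeatable results
    # Build a rank-4 matrix of size 200 x 50.
    A = torch.randn(200, 4) @ torch.randn(4, 50)
    U, S, V = svd_lowrank(A, q=6, niter=2)  # U: 200 x 6, S: 6-vector, V: 50 x 6
    approx = U @ torch.diag(S) @ V.mH
    # rank(A) = 4 <= q, so the reconstruction error is close to float32 precision.
    assert torch.allclose(approx, A, atol=1e-3)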
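# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the original module): a hypothetical
# dense example of the PCA interpretation documented in ``pca_lowrank``:
# S**2 / (m - 1) approximates the leading eigenvalues of the sample covariance,
# and A @ V[:, :k] projects the data onto the first k principal components.
# The sizes, tolerances, and the helper name ``_demo_pca_lowrank`` are
# assumptions for illustration only.
# ---------------------------------------------------------------------------
def _demo_pca_lowrank() -> None:
    torch.manual_seed(0)  # reset the PRNG seed for repeatable results
    m, n, k = 500, 20, 3
    A = torch.randn(m, k) @ torch.randn(k, n)  # data with k dominant directions
    U, S, V = pca_lowrank(A, q=6, center=True, niter=2)
    assert U.shape == (m, 6) and S.shape == (6,) and V.shape == (n, 6)
    # Project the data onto the first k principal components.
    projected = A @ V[:, :k]
    assert projected.shape == (m, k)
    # Compare explained variances with a dense eigendecomposition of the
    # sample covariance (loose tolerance for float32 randomized computation).
    A_centered = A - A.mean(dim=0, keepdim=True)
    cov = A_centered.mT @ A_centered / (m - 1)
    eigvals = torch.linalg.eigvalsh(cov).flip(0)  # sort in descending order
    assert torch.allclose(S[:k] ** 2 / (m - 1), eigvals[:k], rtol=1e-2)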