# fsspec/caching.py — readable source reconstructed from a compiled-bytecode
# (.pyc) dump. Docstrings, class/attribute names, and signatures are taken
# directly from strings visible in the dump; method bodies follow the visible
# structure but may differ in minor details from the released fsspec source.
from __future__ import annotations

import collections
import functools
import logging
import math
import os
import threading
import warnings
from concurrent.futures import Future, ThreadPoolExecutor
from typing import (
    TYPE_CHECKING,
    Any,
    Callable,
    ClassVar,
    Generic,
    NamedTuple,
    Optional,
    OrderedDict,
    TypeVar,
)

if TYPE_CHECKING:
    import mmap

    from typing_extensions import ParamSpec

    P = ParamSpec("P")
else:
    P = TypeVar("P")

T = TypeVar("T")

logger = logging.getLogger("fsspec")

Fetcher = Callable[[int, int], bytes]  # Maps (start, end) to bytes


class BaseCache:
    """Pass-through cache: doesn't keep anything, calls every time

    Acts as base class for other cachers

    Parameters
    ----------
    blocksize: int
        How far to read ahead in numbers of bytes
    fetcher: func
        Function of the form f(start, end) which gets bytes from remote as
        specified
    size: int
        How big this file is
    """

    name: ClassVar[str] = "none"

    def __init__(self, blocksize: int, fetcher: Fetcher, size: int) -> None:
        self.blocksize = blocksize
        self.nblocks = 0
        self.fetcher = fetcher
        self.size = size
        self.hit_count = 0
        self.miss_count = 0
        self.total_requested_bytes = 0

    def _fetch(self, start: int | None, stop: int | None) -> bytes:
        if start is None:
            start = 0
        if stop is None:
            stop = self.size
        if start >= self.size or start >= stop:
            return b""
        return self.fetcher(start, stop)

    def _reset_stats(self) -> None:
        """Reset hit and miss counts for a more granular report, e.g. by file."""
        self.hit_count = 0
        self.miss_count = 0
        self.total_requested_bytes = 0

    def _log_stats(self) -> str:
        """Return a formatted string of the cache statistics."""
        if self.hit_count == 0 and self.miss_count == 0:
            # a cache that does nothing; this is for logs only
            return ""
        return " , %s: %d hits, %d misses, %d total requested bytes" % (
            self.name,
            self.hit_count,
            self.miss_count,
            self.total_requested_bytes,
        )

    def __repr__(self) -> str:
        return f"""
        <{self.__class__.__name__}:
            block size  :   {self.blocksize}
            block count :   {self.nblocks}
            file size   :   {self.size}
            cache hits  :   {self.hit_count}
            cache misses:   {self.miss_count}
            total requested bytes: {self.total_requested_bytes}>
        """


class MMapCache(BaseCache):
    """memory-mapped sparse file cache

    Opens temporary file, which is filled block-wise when data is requested.
    Ensure there is enough disk space in the temporary location.

    This cache method might only work on posix
    """

    name: ClassVar[str] = "mmap"

    def __init__(
        self,
        blocksize: int,
        fetcher: Fetcher,
        size: int,
        location: str | None = None,
        blocks: set[int] | None = None,
    ) -> None:
        super().__init__(blocksize, fetcher, size)
        self.blocks = set() if blocks is None else blocks
        self.location = location
        self.cache = self._makefile()

    def _makefile(self) -> mmap.mmap | bytearray:
        import mmap
        import tempfile

        if self.size == 0:
            return bytearray()

        if self.location is None or not os.path.exists(self.location):
            if self.location is None:
                fd = tempfile.TemporaryFile()
                self.blocks = set()
            else:
                fd = open(self.location, "wb+")
            # extend the file to the required size by writing the final byte
            fd.seek(self.size - 1)
            fd.write(b"1")
            fd.flush()
        else:
            fd = open(self.location, "r+b")

        return mmap.mmap(fd.fileno(), self.size)

    def _fetch(self, start: int | None, end: int | None) -> bytes:
        logger.debug(f"MMap cache fetching {start}-{end}")
        if start is None:
            start = 0
        if end is None:
            end = self.size
        if start >= self.size or start >= end:
            return b""
        start_block = start // self.blocksize
        end_block = end // self.blocksize
        need = [i for i in range(start_block, end_block + 1) if i not in self.blocks]
        hits = [i for i in range(start_block, end_block + 1) if i in self.blocks]
        self.miss_count += len(need)
        self.hit_count += len(hits)
        while need:
            # fetch missing blocks one at a time and mark them as present
            i = need.pop(0)
            sstart = i * self.blocksize
            send = min(sstart + self.blocksize, self.size)
            self.total_requested_bytes += send - sstart
            logger.debug(f"MMap get block #{i} ({sstart}-{send})")
            self.cache[sstart:send] = self.fetcher(sstart, send)
            self.blocks.add(i)

        return self.cache[start:end]

    def __getstate__(self) -> dict[str, Any]:
        state = self.__dict__.copy()
        # the mmap object cannot be pickled
        del state["cache"]
        return state

    def __setstate__(self, state: dict[str, Any]) -> None:
        self.__dict__.update(state)
        self.cache = self._makefile()


class ReadAheadCache(BaseCache):
    """Cache which reads only when we get beyond a block of data

    This is a much simpler version of BytesCache, and does not attempt to
    fill holes in the cache or keep fragments alive. It is best suited to
    many small reads in a sequential order (e.g., reading lines from a file).
    """

    name: ClassVar[str] = "readahead"

    def __init__(self, blocksize: int, fetcher: Fetcher, size: int) -> None:
        super().__init__(blocksize, fetcher, size)
        self.cache = b""
        self.start = 0
        self.end = 0

    def _fetch(self, start: int | None, end: int | None) -> bytes:
        if start is None:
            start = 0
        if end is None or end > self.size:
            end = self.size
        if start >= self.size or start >= end:
            return b""
        l = end - start
        if start >= self.start and end <= self.end:
            # cache hit
            self.hit_count += 1
            return self.cache[start - self.start : end - self.start]
        elif self.start <= start < self.end:
            # partial hit
            self.miss_count += 1
            part = self.cache[start - self.start :]
            l -= len(part)
            start = self.end
        else:
            # miss
            self.miss_count += 1
            part = b""
        end = min(self.size, end + self.blocksize)
        self.total_requested_bytes += end - start
        self.cache = self.fetcher(start, end)  # new block replaces old
        self.start = start
        self.end = self.start + len(self.cache)
        return part + self.cache[:l]


class FirstChunkCache(BaseCache):
    """Caches the first block of a file only

    This may be useful for file types where the metadata is stored in the
    header, but is randomly accessed.
    """

    name: ClassVar[str] = "first"

    def __init__(self, blocksize: int, fetcher: Fetcher, size: int) -> None:
        if blocksize > size:
            # this will buffer the whole file
            blocksize = size
        super().__init__(blocksize, fetcher, size)
        self.cache: bytes | None = None

    def _fetch(self, start: int | None, end: int | None) -> bytes:
        start = start or 0
        if start > self.size:
            logger.debug("FirstChunkCache: requested start > file size")
            return b""
        end = min(end, self.size)
        if start < self.blocksize:
            if self.cache is None:
                self.miss_count += 1
                if end > self.blocksize:
                    self.total_requested_bytes += end
                    data = self.fetcher(0, end)
                    self.cache = data[: self.blocksize]
                    return data[start:]
                self.cache = self.fetcher(0, self.blocksize)
                self.total_requested_bytes += self.blocksize
            part = self.cache[start:end]
            if end > self.blocksize:
                self.total_requested_bytes += end - self.blocksize
                part += self.fetcher(self.blocksize, end)
            self.hit_count += 1
            return part
        else:
            self.miss_count += 1
            self.total_requested_bytes += end - start
            return self.fetcher(start, end)


class BlockCache(BaseCache):
    """
    Cache holding memory as a set of blocks.

    Requests are only ever made ``blocksize`` at a time, and are
    stored in an LRU cache. The least recently accessed block is
    discarded when more than ``maxblocks`` are stored.

    Parameters
    ----------
    blocksize : int
        The number of bytes to store in each block.
        Requests are only ever made for ``blocksize``, so this
        should balance the overhead of making a request against
        the granularity of the blocks.
    fetcher : Callable
    size : int
        The total size of the file being cached.
    maxblocks : int
        The maximum number of blocks to cache for. The maximum memory
        use for this cache is then ``blocksize * maxblocks``.
    """

    name: ClassVar[str] = "blockcache"

    def __init__(
        self, blocksize: int, fetcher: Fetcher, size: int, maxblocks: int = 32
    ) -> None:
        super().__init__(blocksize, fetcher, size)
        self.nblocks = math.ceil(size / blocksize)
        self.maxblocks = maxblocks
        self._fetch_block_cached = functools.lru_cache(maxblocks)(self._fetch_block)

    def cache_info(self):
        """
        The statistics on the block cache.

        Returns
        -------
        NamedTuple
            Returned directly from the LRU Cache used internally.
        """
        return self._fetch_block_cached.cache_info()

    def __getstate__(self) -> dict[str, Any]:
        state = self.__dict__
        del state["_fetch_block_cached"]
        return state

    def __setstate__(self, state: dict[str, Any]) -> None:
        self.__dict__.update(state)
        self._fetch_block_cached = functools.lru_cache(state["maxblocks"])(
            self._fetch_block
        )

    def _fetch(self, start: int | None, end: int | None) -> bytes:
        if start is None:
            start = 0
        if end is None:
            end = self.size
        if start >= self.size or start >= end:
            return b""

        # byte position -> block numbers
        start_block_number = start // self.blocksize
        end_block_number = end // self.blocksize

        # ensure all required blocks are cached
        for block_number in range(start_block_number, end_block_number + 1):
            self._fetch_block_cached(block_number)

        return self._read_cache(
            start,
            end,
            start_block_number=start_block_number,
            end_block_number=end_block_number,
        )

    def _fetch_block(self, block_number: int) -> bytes:
        """
        Fetch the block of data for `block_number`.
        """
        if block_number > self.nblocks:
            raise ValueError(
                f"'block_number={block_number}' is greater than "
                f"the number of blocks ({self.nblocks})"
            )

        start = block_number * self.blocksize
        end = start + self.blocksize
        self.total_requested_bytes += end - start
        self.miss_count += 1
        logger.info("BlockCache fetching block %d", block_number)
        block_contents = super()._fetch(start, end)
        return block_contents

    def _read_cache(
        self, start: int, end: int, start_block_number: int, end_block_number: int
    ) -> bytes:
        """
        Read from our block cache.

        Parameters
        ----------
        start, end : int
            The start and end byte positions.
        start_block_number, end_block_number : int
            The start and end block numbers.
        """
        start_pos = start % self.blocksize
        end_pos = end % self.blocksize

        self.hit_count += 1
        if start_block_number == end_block_number:
            block = self._fetch_block_cached(start_block_number)
            return block[start_pos:end_pos]
        else:
            # read from the initial block
            out = [self._fetch_block_cached(start_block_number)[start_pos:]]
            # read the middle blocks
            out.extend(
                map(
                    self._fetch_block_cached,
                    range(start_block_number + 1, end_block_number),
                )
            )
            # read from the final block
            out.append(self._fetch_block_cached(end_block_number)[:end_pos])
            return b"".join(out)


class BytesCache(BaseCache):
    """Cache which holds data in an in-memory bytes object

    Implements read-ahead by the block size, for semi-random reads progressing
    through the file.

    Parameters
    ----------
    trim: bool
        As we read more data, whether to discard the start of the buffer when
        we are more than a blocksize ahead of it.
    """

    name: ClassVar[str] = "bytes"

    def __init__(
        self, blocksize: int, fetcher: Fetcher, size: int, trim: bool = True
    ) -> None:
        super().__init__(blocksize, fetcher, size)
        self.cache = b""
        self.start: int | None = None
        self.end: int | None = None
        self.trim = trim

    def _fetch(self, start: int | None, end: int | None) -> bytes:
        if start is None:
            start = 0
        if end is None:
            end = self.size
        if start >= self.size or start >= end:
            return b""
        if (
            self.start is not None
            and start >= self.start
            and self.end is not None
            and end < self.end
        ):
            # cache hit: we have all the required data
            offset = start - self.start
            self.hit_count += 1
            return self.cache[offset : offset + end - start]

        if self.blocksize:
            bend = min(self.size, end + self.blocksize)
        else:
            bend = end

        if bend == start or start > self.size:
            return b""

        if (self.start is None or start < self.start) and (
            self.end is None or end > self.end
        ):
            # First read, or extending both before and after
            self.total_requested_bytes += bend - start
            self.miss_count += 1
            self.cache = self.fetcher(start, bend)
            self.start = start
        else:
            assert self.start is not None
            assert self.end is not None
            self.miss_count += 1

            if start < self.start:
                if self.end is None or self.end - end > self.blocksize:
                    self.total_requested_bytes += bend - start
                    self.cache = self.fetcher(start, bend)
                    self.start = start
                else:
                    self.total_requested_bytes += self.start - start
                    new = self.fetcher(start, self.start)
                    self.start = start
                    self.cache = new + self.cache
            elif self.end is not None and bend > self.end:
                if self.end > self.size:
                    pass
                elif end - self.end > self.blocksize:
                    self.total_requested_bytes += bend - start
                    self.cache = self.fetcher(start, bend)
                    self.start = start
                else:
                    self.total_requested_bytes += bend - self.end
                    new = self.fetcher(self.end, bend)
                    self.cache = self.cache + new

        self.end = self.start + len(self.cache)
        offset = start - self.start
        out = self.cache[offset : offset + end - start]
        if self.trim:
            num = (self.end - self.start) // (self.blocksize + 1)
            if num > 1:
                self.start += self.blocksize * num
                self.cache = self.cache[self.blocksize * num :]
        return out

    def __len__(self) -> int:
        return len(self.cache)


class AllBytes(BaseCache):
    """Cache entire contents of the file"""

    name: ClassVar[str] = "all"

    def __init__(
        self,
        blocksize: int | None = None,
        fetcher: Fetcher | None = None,
        size: int | None = None,
        data: bytes | None = None,
    ) -> None:
        super().__init__(blocksize, fetcher, size)  # type: ignore[arg-type]
        if data is None:
            self.miss_count += 1
            self.total_requested_bytes += self.size
            data = self.fetcher(0, self.size)
        self.data = data

    def _fetch(self, start: int | None, stop: int | None) -> bytes:
        self.hit_count += 1
        return self.data[start:stop]


class KnownPartsOfAFile(BaseCache):
    """
    Cache holding known file parts.

    Parameters
    ----------
    blocksize: int
        How far to read ahead in numbers of bytes
    fetcher: func
        Function of the form f(start, end) which gets bytes from remote as
        specified
    size: int
        How big this file is
    data: dict
        A dictionary mapping explicit `(start, stop)` file-offset tuples
        with known bytes.
    strict: bool, default True
        Whether to fetch reads that go beyond a known byte-range boundary.
        If `False`, any read that ends outside a known part will be zero
        padded. Note that zero padding will not be used for reads that
        begin outside a known byte-range.
    """

    name: ClassVar[str] = "parts"

    def __init__(
        self,
        blocksize: int,
        fetcher: Fetcher,
        size: int,
        data: Optional[dict[tuple[int, int], bytes]] = None,
        strict: bool = True,
        **_: Any,
    ) -> None:
        super().__init__(blocksize, fetcher, size)
        self.strict = strict

        # simple consolidation of contiguous blocks
        if data:
            old_offsets = sorted(data.keys())
            offsets = [old_offsets[0]]
            blocks = [data.pop(old_offsets[0])]
            for start, stop in old_offsets[1:]:
                start0, stop0 = offsets[-1]
                if start == stop0:
                    offsets[-1] = (start0, stop)
                    blocks[-1] += data.pop((start, stop))
                else:
                    offsets.append((start, stop))
                    blocks.append(data.pop((start, stop)))

            self.data = dict(zip(offsets, blocks))
        else:
            self.data = {}

    def _fetch(self, start: int | None, stop: int | None) -> bytes:
        if start is None:
            start = 0
        if stop is None:
            stop = self.size

        out = b""
        for (loc0, loc1), data in self.data.items():
            if loc0 <= start < loc1:
                off = start - loc0
                out = data[off : off + stop - start]
                if not self.strict or loc0 <= stop <= loc1:
                    # either strict is False, or the read falls entirely
                    # within a known part; zero-pad any shortfall
                    out += b"\x00" * (stop - start - len(out))
                    self.hit_count += 1
                    return out
                else:
                    # have the head of the range; fetch the rest remotely
                    start = loc1
                    break

        # We only get here for a request outside the known parts of the file
        if self.fetcher is None:
            raise ValueError(
                f"Read is outside the known file parts: {(start, stop)}. "
            )
        warnings.warn(
            f"Read is outside the known file parts: {(start, stop)}. "
            f"IO/caching performance may be poor!"
        )
        logger.debug(f"KnownPartsOfAFile cache fetching {start}-{stop}")
        self.total_requested_bytes += stop - start
        self.miss_count += 1
        return out + super()._fetch(start, stop)


class UpdatableLRU(Generic[P, T]):
    """
    Custom implementation of LRU cache that allows updating keys

    Used by BackgroundBlockCache
    """

    class CacheInfo(NamedTuple):
        hits: int
        misses: int
        maxsize: int
        currsize: int

    def __init__(self, func: Callable[P, T], max_size: int = 128) -> None:
        self._cache: OrderedDict[Any, T] = collections.OrderedDict()
        self._func = func
        self._max_size = max_size
        self._hits = 0
        self._misses = 0
        self._lock = threading.Lock()

    def __call__(self, *args: P.args, **kwargs: P.kwargs) -> T:
        if kwargs:
            raise TypeError(f"Got unexpected keyword argument {kwargs.keys()}")
        with self._lock:
            if args in self._cache:
                self._cache.move_to_end(args)
                self._hits += 1
                return self._cache[args]

        result = self._func(*args, **kwargs)

        with self._lock:
            self._cache[args] = result
            self._misses += 1
            if len(self._cache) > self._max_size:
                self._cache.popitem(last=False)

        return result

    def is_key_cached(self, *args: Any) -> bool:
        with self._lock:
            return args in self._cache

    def add_key(self, result: T, *args: Any) -> None:
        with self._lock:
            self._cache[args] = result
            if len(self._cache) > self._max_size:
                self._cache.popitem(last=False)

    def cache_info(self) -> UpdatableLRU.CacheInfo:
        with self._lock:
            return self.CacheInfo(
                maxsize=self._max_size,
                currsize=len(self._cache),
                hits=self._hits,
                misses=self._misses,
            )


class BackgroundBlockCache(BaseCache):
    """
    Cache holding memory as a set of blocks with pre-loading of
    the next block in the background.

    Requests are only ever made ``blocksize`` at a time, and are
    stored in an LRU cache. The least recently accessed block is
    discarded when more than ``maxblocks`` are stored. If the
    next block is not in cache, it is loaded in a separate thread
    in non-blocking way.

    Parameters
    ----------
    blocksize : int
        The number of bytes to store in each block.
        Requests are only ever made for ``blocksize``, so this
        should balance the overhead of making a request against
        the granularity of the blocks.
    fetcher : Callable
    size : int
        The total size of the file being cached.
    maxblocks : int
        The maximum number of blocks to cache for. The maximum memory
        use for this cache is then ``blocksize * maxblocks``.
    """

    name: ClassVar[str] = "background"

    def __init__(
        self, blocksize: int, fetcher: Fetcher, size: int, maxblocks: int = 32
    ) -> None:
        super().__init__(blocksize, fetcher, size)
        self.nblocks = math.ceil(size / blocksize)
        self.maxblocks = maxblocks
        self._fetch_block_cached = UpdatableLRU(self._fetch_block, maxblocks)

        self._thread_executor = ThreadPoolExecutor(max_workers=1)
        self._fetch_future_block_number: int | None = None
        self._fetch_future: Future[bytes] | None = None
        self._fetch_future_lock = threading.Lock()

    def cache_info(self) -> UpdatableLRU.CacheInfo:
        """
        The statistics on the block cache.

        Returns
        -------
        NamedTuple
            Returned directly from the LRU Cache used internally.
        """
        return self._fetch_block_cached.cache_info()

    def __getstate__(self) -> dict[str, Any]:
        state = self.__dict__
        del state["_fetch_block_cached"]
        del state["_thread_executor"]
        del state["_fetch_future_block_number"]
        del state["_fetch_future"]
        del state["_fetch_future_lock"]
        return state

    def __setstate__(self, state: dict[str, Any]) -> None:
        self.__dict__.update(state)
        self._fetch_block_cached = UpdatableLRU(self._fetch_block, state["maxblocks"])
        self._thread_executor = ThreadPoolExecutor(max_workers=1)
        self._fetch_future_block_number = None
        self._fetch_future = None
        self._fetch_future_lock = threading.Lock()

    def _fetch(self, start: int | None, end: int | None) -> bytes:
        if start is None:
            start = 0
        if end is None:
            end = self.size
        if start >= self.size or start >= end:
            return b""

        # byte position -> block numbers
        start_block_number = start // self.blocksize
        end_block_number = end // self.blocksize

        fetch_future_block_number = None
        fetch_future = None
        with self._fetch_future_lock:
            # Background thread is running. Check whether we can or must join it.
            if self._fetch_future is not None:
                assert self._fetch_future_block_number is not None
                if self._fetch_future.done():
                    logger.info(
                        "BlockCache joined background fetch without waiting."
                    )
                    self._fetch_block_cached.add_key(
                        self._fetch_future.result(),
                        self._fetch_future_block_number,
                    )
                    # Done with fetching the block; clean up.
                    self._fetch_future_block_number = None
                    self._fetch_future = None
                else:
                    # Must join if we need the block for the current fetch
                    must_join = bool(
                        start_block_number
                        <= self._fetch_future_block_number
                        <= end_block_number
                    )
                    if must_join:
                        # Take local copies so the lock can be released
                        # before waiting for the result
                        fetch_future_block_number = self._fetch_future_block_number
                        fetch_future = self._fetch_future
                        self._fetch_future_block_number = None
                        self._fetch_future = None

        if fetch_future is not None:
            logger.info("BlockCache waiting for background fetch.")
            # Wait for the result and put it in the cache
            self._fetch_block_cached.add_key(
                fetch_future.result(), fetch_future_block_number
            )

        # these are cached, so safe to do multiple calls for the same start/end
        for block_number in range(start_block_number, end_block_number + 1):
            self._fetch_block_cached(block_number)

        # fetch the next block in the background if nothing is already running,
        # the block is within the file, and it is not already cached
        end_block_plus_1 = end_block_number + 1
        with self._fetch_future_lock:
            if (
                self._fetch_future is None
                and end_block_plus_1 <= self.nblocks
                and not self._fetch_block_cached.is_key_cached(end_block_plus_1)
            ):
                self._fetch_future_block_number = end_block_plus_1
                self._fetch_future = self._thread_executor.submit(
                    self._fetch_block, end_block_plus_1, "async"
                )

        return self._read_cache(
            start,
            end,
            start_block_number=start_block_number,
            end_block_number=end_block_number,
        )

    def _fetch_block(self, block_number: int, log_info: str = "sync") -> bytes:
        """
        Fetch the block of data for `block_number`.
        """
        if block_number > self.nblocks:
            raise ValueError(
                f"'block_number={block_number}' is greater than "
                f"the number of blocks ({self.nblocks})"
            )

        start = block_number * self.blocksize
        end = start + self.blocksize
        logger.info("BlockCache fetching block (%s) %d", log_info, block_number)
        self.total_requested_bytes += end - start
        self.miss_count += 1
        block_contents = super()._fetch(start, end)
        return block_contents

    def _read_cache(
        self, start: int, end: int, start_block_number: int, end_block_number: int
    ) -> bytes:
        """
        Read from our block cache.

        Parameters
        ----------
        start, end : int
            The start and end byte positions.
        start_block_number, end_block_number : int
            The start and end block numbers.
        """
        start_pos = start % self.blocksize
        end_pos = end % self.blocksize

        self.hit_count += 1
        if start_block_number == end_block_number:
            block = self._fetch_block_cached(start_block_number)
            return block[start_pos:end_pos]
        else:
            # read from the initial block
            out = [self._fetch_block_cached(start_block_number)[start_pos:]]
            # read the middle blocks
            out.extend(
                map(
                    self._fetch_block_cached,
                    range(start_block_number + 1, end_block_number),
                )
            )
            # read from the final block
            out.append(self._fetch_block_cached(end_block_number)[:end_pos])
            return b"".join(out)


caches: dict[str | None, type[BaseCache]] = {
    # one custom implementation may be used in place of the default one
    None: BaseCache,
}


def register_cache(cls: type[BaseCache], clobber: bool = False) -> None:
    """'Register' cache implementation.

    Parameters
    ----------
    clobber: bool, optional
        If set to True (default is False) - allow to overwrite existing
        entry.

    Raises
    ------
    ValueError
    """
    name = cls.name
    if not clobber and name in caches:
        raise ValueError(f"Cache with name {name} is already known: {caches[name]}")
    caches[name] = cls


for c in (
    BaseCache,
    MMapCache,
    BytesCache,
    ReadAheadCache,
    BlockCache,
    FirstChunkCache,
    AllBytes,
    KnownPartsOfAFile,
    BackgroundBlockCache,
):
    register_cache(c)
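The core idea behind `BlockCache` above — round every read out to whole fixed-size blocks, fetch each block at most once via an LRU, and assemble the answer from cached blocks — can be sketched stand-alone. This is a minimal illustration of the pattern, not fsspec's API; `SimpleBlockCache` and `demo_fetcher` are hypothetical names, and bookkeeping (hit/miss counters, pickling support) is omitted.

```python
# Minimal sketch of the block-LRU pattern used by BlockCache (illustrative
# names, not fsspec API): reads are expanded to whole blocks, each block is
# fetched at most once until evicted, and results are sliced from the blocks.
import functools
import math


class SimpleBlockCache:
    def __init__(self, blocksize, fetcher, size, maxblocks=32):
        self.blocksize = blocksize
        self.fetcher = fetcher  # callable f(start, end) -> bytes
        self.size = size
        self.nblocks = math.ceil(size / blocksize)
        # lru_cache gives us eviction of least-recently-used blocks for free
        self._cached_block = functools.lru_cache(maxblocks)(self._fetch_block)

    def _fetch_block(self, block_number):
        start = block_number * self.blocksize
        return self.fetcher(start, min(start + self.blocksize, self.size))

    def read(self, start, end):
        first, last = start // self.blocksize, (end - 1) // self.blocksize
        blocks = b"".join(self._cached_block(i) for i in range(first, last + 1))
        offset = first * self.blocksize
        return blocks[start - offset : end - offset]


data = bytes(range(256))
calls = []  # record every remote byte-range request


def demo_fetcher(start, end):  # stands in for a remote byte-range request
    calls.append((start, end))
    return data[start:end]


cache = SimpleBlockCache(blocksize=64, fetcher=demo_fetcher, size=len(data))
```

A second read inside an already-fetched block hits the LRU and triggers no new `demo_fetcher` call, which is the whole point of the scheme.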
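`UpdatableLRU` exists because a plain `functools.lru_cache` cannot have entries inserted from outside the cached function, while the background prefetch thread in `BackgroundBlockCache` needs exactly that (`add_key`). The sketch below illustrates that idea with stdlib pieces only; `TinyUpdatableLRU` is a hypothetical name and the real class additionally tracks hit/miss statistics.

```python
# Illustrative sketch of the "updatable LRU" idea: an OrderedDict keyed by the
# call arguments, with move_to_end marking recency and popitem(last=False)
# evicting the least recently used entry. add_key lets another thread insert
# a value it computed itself.
import collections
import threading


class TinyUpdatableLRU:
    def __init__(self, func, max_size=128):
        self._cache = collections.OrderedDict()
        self._func = func
        self._max_size = max_size
        self._lock = threading.Lock()

    def __call__(self, *args):
        with self._lock:
            if args in self._cache:
                self._cache.move_to_end(args)  # mark as most recently used
                return self._cache[args]
        result = self._func(*args)  # compute outside the lock
        self.add_key(result, *args)
        return result

    def add_key(self, result, *args):
        # insert a value computed elsewhere (e.g. by a background thread)
        with self._lock:
            self._cache[args] = result
            if len(self._cache) > self._max_size:
                self._cache.popitem(last=False)  # evict least recently used


square = TinyUpdatableLRU(lambda x: x * x, max_size=2)
```

Computing the value outside the lock mirrors the real class: a slow fetch must not block other readers that are being served from cache.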