# modules/whisper/whisper_Inference.py

import whisper
import gradio as gr
import time
from typing import BinaryIO, Union, Tuple, List, Callable, Optional
import numpy as np
import torch
import os
from argparse import Namespace

from modules.utils.paths import (WHISPER_MODELS_DIR, DIARIZATION_MODELS_DIR,
                                 OUTPUT_DIR, UVR_MODELS_DIR)
from modules.whisper.base_transcription_pipeline import BaseTranscriptionPipeline
from modules.whisper.data_classes import *


class WhisperInference(BaseTranscriptionPipeline):
    def __init__(self,
                 model_dir: str = WHISPER_MODELS_DIR,
                 diarization_model_dir: str = DIARIZATION_MODELS_DIR,
                 uvr_model_dir: str = UVR_MODELS_DIR,
                 output_dir: str = OUTPUT_DIR,
                 ):
        super().__init__(
            model_dir=model_dir,
            output_dir=output_dir,
            diarization_model_dir=diarization_model_dir,
            uvr_model_dir=uvr_model_dir
        )

    def transcribe(self,
                   audio: Union[str, BinaryIO, np.ndarray],
                   progress: gr.Progress = gr.Progress(),
                   progress_callback: Optional[Callable] = None,
                   *whisper_params,
                   ) -> Tuple[List[Segment], float]:
        """
        transcribe method for the openai/whisper backend.

        Parameters
        ----------
        audio: Union[str, BinaryIO, np.ndarray]
            Audio path, file binary, or audio numpy array
        progress: gr.Progress
            Indicator to show progress directly in gradio.
        progress_callback: Optional[Callable]
            Callback function to report progress. Can be used to update progress in the backend.
        *whisper_params: tuple
            Parameters related to whisper. These are handled by the "WhisperParams" data class.

        Returns
        ----------
        segments_result: List[Segment]
            List of Segment objects with start/end timestamps and transcribed text
        elapsed_time: float
            Elapsed time for the transcription
        """
        start_time = time.time()
        params = WhisperParams.from_list(list(whisper_params))

        # Reload the model if the requested size or compute type differs from the loaded one.
        if (params.model_size != self.current_model_size
                or self.model is None
                or self.current_compute_type != params.compute_type):
            self.update_model(params.model_size, params.compute_type, progress)

        # Local progress reporter handed to the model; it forwards progress to the Gradio bar.
        def progress_callback(progress_value):
            progress(progress_value, desc="Transcribing..")

        result = self.model.transcribe(
            audio=audio,
            language=params.lang,
            verbose=False,
            beam_size=params.beam_size,
            logprob_threshold=params.log_prob_threshold,
            no_speech_threshold=params.no_speech_threshold,
            task="translate" if params.is_translate else "transcribe",
            fp16=True if params.compute_type == "float16" else False,
            best_of=params.best_of,
            patience=params.patience,
            temperature=params.temperature,
            compression_ratio_threshold=params.compression_ratio_threshold,
            progress_callback=progress_callback,
        )["segments"]

        # Convert raw whisper segments into the project's Segment data class.
        segments_result = []
        for segment in result:
            segments_result.append(Segment(
                start=segment["start"],
                end=segment["end"],
                text=segment["text"]
            ))

        elapsed_time = time.time() - start_time
        return segments_result, elapsed_time

    def update_model(self,
                     model_size: str,
                     compute_type: str,
                     progress: gr.Progress = gr.Progress(),
                     ):
        """
        Update the current model setting.

        Parameters
        ----------
        model_size: str
            Size of the whisper model
        compute_type: str
            Compute type for transcription.
            See more info: https://opennmt.net/CTranslate2/quantization.html
        progress: gr.Progress
            Indicator to show progress directly in gradio.
        """
        progress(0, desc="Initializing Model..")
        self.current_compute_type = compute_type
        self.current_model_size = model_size
        self.model = whisper.load_model(
            name=model_size,
            device=self.device,
            download_root=self.model_dir
        )
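
# ---------------------------------------------------------------------------
# Hypothetical usage sketch (not part of the original module), showing how the
# pipeline above might be driven directly from Python. The WhisperParams field
# name "model_size", the to_list() serializer, and "sample.wav" are assumptions
# for illustration only; the Web UI presumably drives this class through the
# BaseTranscriptionPipeline entry points rather than calling transcribe() by hand.
#
# if __name__ == "__main__":
#     pipeline = WhisperInference()
#     params = WhisperParams(model_size="base")      # assumed field name
#     segments, elapsed = pipeline.transcribe(
#         "sample.wav",                              # path to an audio file (assumed)
#         gr.Progress(),                             # progress stub outside a Gradio app
#         None,                                      # no backend progress callback
#         *params.to_list(),                         # assumed counterpart to from_list()
#     )
#     for seg in segments:
#         print(f"[{seg.start:.2f} -> {seg.end:.2f}] {seg.text}")
#     print(f"done in {elapsed:.1f}s")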