DadmaTools is a Persian NLP toolkit developed by Dadmatech Co.

This repository contains an implementation of a Persian Tacotron model in PyTorch, along with a dataset preprocessor for the Common Voice dataset. Visit this demo page to listen to some audio samples.

For generating better-quality audio, the acoustic features (mel-spectrograms) are fed to a WaveRNN model. I've included the WaveRNN model in the code for inference purposes only (no trainer included).

## Model details

- Encoder: CNN layers with batch norm and a bi-directional LSTM on top.
- Decoder: 2 LSTMs for the recurrent part and a post-net on top.
- Attention type: GMM v2 with k=25.

The source code in this repository is highly inspired by, and partially copied (and also modified) from, the following repositories:

## Dataset

The model is trained on audio files from one of the speakers in Common Voice Persian, which can be downloaded from the link below:

Unfortunately, only a small number of speakers in the dataset have enough utterances for training a Tacotron model, and most of the audio files are low quality and noisy. I found the audio files from one of the speakers more appropriate for training; that speaker's id is hard-coded in the commonvoice_fa preprocessor.

## Data preprocessing

After downloading the dataset, first set the DATASET_PATH and DATASET_PATH variables in the file scripts/preprocess_commonvoice_fa/preprocess_commonvoice_fa.

## Related Work

- Generic text cleaning packages
- Full-blown NLP libraries with some text cleaning

clean-text is built upon the work by Burton DeWilde for Textacy.
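The GMM attention mentioned above forms its alignment weights as a mixture of k Gaussians whose means can only move forward along the encoder timesteps, which encourages monotonic alignment. Below is a minimal NumPy sketch of one decoder step; the function name, the exact parameterization, and the toy numbers are illustrative assumptions, not code from this repository.

```python
import numpy as np

def gmm_attention_step(w, delta, sigma, prev_mu, T):
    """One decoder step of GMM-style attention over T encoder frames.

    w:       (K,) positive mixture weights
    delta:   (K,) positive mean increments (forward movement per step)
    sigma:   (K,) positive mixture widths
    prev_mu: (K,) component means from the previous decoder step
    """
    mu = prev_mu + delta                       # means only move forward
    j = np.arange(T)[None, :]                  # encoder positions, shape (1, T)
    # evaluate each Gaussian component at every encoder position
    phi = w[:, None] * np.exp(-((j - mu[:, None]) ** 2) / (2.0 * sigma[:, None] ** 2))
    alpha = phi.sum(axis=0)                    # mix the K components -> (T,)
    return alpha / (alpha.sum() + 1e-8), mu    # normalized weights, new means

# toy example: k=2 components attending over 10 encoder frames
alpha, mu = gmm_attention_step(
    w=np.array([0.5, 0.5]), delta=np.array([1.0, 2.0]),
    sigma=np.array([1.0, 1.5]), prev_mu=np.zeros(2), T=10)
```

Because each `delta` is constrained positive, the attention window slides forward at every decoder step, which is why this family of attention works well for long-form TTS.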
The `clean` function is the main entry point:

```python
from cleantext import clean

clean(
    "some input",
    fix_unicode=True,               # fix various unicode errors
    to_ascii=True,                  # transliterate to closest ASCII representation
    lower=True,                     # lowercase text
    no_line_breaks=False,           # fully strip line breaks as opposed to only normalizing them
    no_urls=False,                  # replace all URLs with a special token
    no_emails=False,                # replace all email addresses with a special token
    no_phone_numbers=False,         # replace all phone numbers with a special token
    no_numbers=False,               # replace all numbers with a special token
    no_digits=False,                # replace all digits with a special token
    no_currency_symbols=False,      # replace all currency symbols with a special token
    no_punct=False,                 # remove punctuations
    replace_with_punct="",          # instead of removing punctuations you may replace them
    replace_with_url="",
    lang="en",                      # set to 'de' for German special handling
)
```

Carefully choose the arguments that fit your task. So far, only English and German are fully supported; it should work for the majority of western languages, though. If you need some special handling for your language, feel free to contribute.

You may also use only specific functions for cleaning. For this, take a look at the source code.

There is also a scikit-learn compatible API to use in your pipelines. All of the parameters above work here as well:

```python
from cleantext.sklearn import CleanTransformer

cleaner = CleanTransformer(no_punct=False, lower=False)
cleaner.transform(...)
```

If you have a question, found a bug, or want to propose a new feature, have a look at the issues page. Pull requests are especially welcome when they fix bugs or improve the code quality. If you don't like the output of clean-text, consider adding a test with your specific input and desired output.
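The scikit-learn compatible wrapper described above is, conceptually, a stateless transformer whose `transform` maps a cleaning function over an iterable of documents. The stand-alone sketch below illustrates that pattern with a trivial whitespace-and-case cleaner; `SimpleCleanTransformer` and its parameters are made up for illustration and are not clean-text's actual implementation.

```python
class SimpleCleanTransformer:
    """Sketch of a scikit-learn style transformer: fit is a no-op,
    transform maps a cleaning function over an iterable of documents."""

    def __init__(self, lower=True, strip=True):
        self.lower = lower
        self.strip = strip

    def fit(self, X, y=None):
        return self  # stateless: nothing to learn

    def transform(self, X):
        out = []
        for doc in X:
            if self.strip:
                doc = " ".join(doc.split())  # collapse runs of whitespace
            if self.lower:
                doc = doc.lower()
            out.append(doc)
        return out

cleaner = SimpleCleanTransformer(lower=True, strip=True)
print(cleaner.transform(["  Hello   WORLD "]))  # -> ['hello world']
```

Because `fit` returns `self` and `transform` is side-effect free, an object like this can be dropped into a scikit-learn `Pipeline` ahead of a vectorizer.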