Speech_Signal_Processing_and_Classification
Front-end speech processing aims at extracting proper features from short-term segments of a speech utterance, known as frames. It is a prerequisite step for any pattern recognition problem employing speech or audio (e.g., music). Here, we are interested in voice disorder classification: that is, developing two-class classifiers that can discriminate between utterances of a subject suffering from, say, vocal fold paralysis and utterances of a healthy subject.

The mathematical modeling of the human speech production system suggests that an all-pole system function is justified [1-3]. As a consequence, linear prediction coefficients (LPCs) are a natural first choice for modeling the magnitude of the short-term spectrum of speech. LPC-derived cepstral coefficients help separate the contribution of the system (e.g., the vocal tract) from that of the excitation. Taking into account the characteristics of the human ear, the mel-frequency cepstral coefficients (MFCCs) emerged as descriptive features of the speech spectral envelope. Similarly to MFCCs, perceptual linear prediction coefficients (PLPs) can also be derived. These traditional, so to speak, features will be tested against agnostic features extracted by convolutional neural networks (CNNs) (e.g., auto-encoders) [4].

The pattern recognition step will be based on Gaussian Mixture Model (GMM) based classifiers, K-nearest neighbor classifiers, Bayes classifiers, as well as deep neural networks. The Massachusetts Eye and Ear Infirmary Dataset (MEEI-Dataset) [5] will be exploited. At the application level, a library for feature extraction and classification in Python will be developed. Credible publicly available resources, such as KALDI, will be used toward achieving our goal. Comparisons will be made against [6-8].
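To make the LPC idea above concrete, here is a minimal numpy-only sketch of LPC estimation for a single frame via the Levinson-Durbin recursion. The function names (`autocorrelation`, `lpc`) are illustrative, not part of the repository's API, and production code would typically rely on an established toolkit (e.g., KALDI or librosa) instead.

```python
import numpy as np

def autocorrelation(x, order):
    """Biased autocorrelation lags r[0..order] of a frame x."""
    x = np.asarray(x, dtype=float)
    return np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(order + 1)])

def lpc(x, order):
    """Estimate all-pole (LPC) coefficients a, with a[0] = 1, via the
    Levinson-Durbin recursion; returns (a, residual_energy)."""
    r = autocorrelation(x, order)
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # reflection coefficient from the current prediction error
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        # symmetric (order-increasing) update of the coefficient vector
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= 1.0 - k * k
    return a, err
```

For voiced speech frames at 8 kHz sampling, a prediction order around 10-14 is common; the resulting coefficients (or cepstral coefficients derived from them) would then be fed to the classifiers described above.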
How to download and setup Speech_Signal_Processing_and_Classification
Open a terminal and run:
git clone https://github.com/gionanide/Speech_Signal_Processing_and_Classification.git
git clone creates a local copy of the Speech_Signal_Processing_and_Classification repository.
You pass git clone a repository URL; it supports several network protocols and corresponding URL formats.
Alternatively, you may download Speech_Signal_Processing_and_Classification as a ZIP archive: https://github.com/gionanide/Speech_Signal_Processing_and_Classification/archive/master.zip
Or simply clone Speech_Signal_Processing_and_Classification over SSH:
git clone [email protected]:gionanide/Speech_Signal_Processing_and_Classification.git
If you have problems with Speech_Signal_Processing_and_Classification
You may open an issue on the Speech_Signal_Processing_and_Classification issue tracker here: https://github.com/gionanide/Speech_Signal_Processing_and_Classification/issues

Similar to Speech_Signal_Processing_and_Classification repositories
Here you may see Speech_Signal_Processing_and_Classification alternatives and analogs
natural-language-processing lectures spaCy HanLP gensim tensorflow_cookbook MatchZoo tensorflow-nlp Awesome-pytorch-list spacy-models TagUI Repo-2017 stanford-tensorflow-tutorials awesome-nlp franc nlp_tasks nltk pattern TextBlob CoreNLP allennlp mycroft-core practical-pytorch textract languagetool MITIE machine_learning_examples prose arXivTimes ltp