bert_for_longer_texts
BERT classification model for processing texts longer than 512 tokens. The text is first split into smaller chunks; each chunk is fed to BERT, and the intermediate results are then pooled. The implementation supports fine-tuning.
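The chunk-and-pool idea can be sketched in plain Python. This is a hypothetical illustration, not the library's actual API: `chunk_tokens` and `classify_long_text` are names invented here, and `score_chunk` stands in for a real tokenizer-plus-BERT forward pass.

```python
# Sketch of chunk-and-pool classification for long texts (illustrative only).
# Overlapping windows of at most 510 tokens leave room for BERT's
# [CLS] and [SEP] special tokens within the 512-token limit.

from typing import Callable, List

def chunk_tokens(tokens: List[int], size: int = 510, stride: int = 255) -> List[List[int]]:
    """Split `tokens` into overlapping windows of at most `size` tokens."""
    if len(tokens) <= size:
        return [tokens]
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # the last window already covers the tail of the text
    return chunks

def classify_long_text(tokens: List[int],
                       score_chunk: Callable[[List[int]], float]) -> float:
    """Score each chunk with a BERT-like model, then mean-pool the scores."""
    scores = [score_chunk(chunk) for chunk in chunk_tokens(tokens)]
    return sum(scores) / len(scores)
```

With a real model, `score_chunk` would wrap tokenization and a BERT forward pass returning a class probability; max-pooling is an equally common alternative to the mean used here.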
How to download and set up bert_for_longer_texts
Open a terminal and run:
git clone https://github.com/mim-solutions/bert_for_longer_texts.git
git clone creates a local copy (clone) of the bert_for_longer_texts repository.
You pass git clone a repository URL; it supports several network protocols and corresponding URL formats.
Alternatively, you can download the repository as a ZIP file: https://github.com/mim-solutions/bert_for_longer_texts/archive/master.zip
Or clone bert_for_longer_texts over SSH:
[email protected]:mim-solutions/bert_for_longer_texts.git
If you have problems with bert_for_longer_texts
You can open an issue on the bert_for_longer_texts issue tracker: https://github.com/mim-solutions/bert_for_longer_texts/issues
Similar to bert_for_longer_texts repositories
Here are some bert_for_longer_texts alternatives and analogs:
natural-language-processing, lectures, spaCy, HanLP, gensim, MatchZoo, tensorflow-nlp, Awesome-pytorch-list, spacy-models, Repo-2017, stanford-tensorflow-tutorials, awesome-nlp, nlp_tasks, nltk, pattern, TextBlob, CoreNLP, allennlp, mycroft-core, practical-pytorch, textract, languagetool, MITIE, machine_learning_examples, prose, arXivTimes, ltp, libpostal, sling, DeepNLP-models-Pytorch