A list of pre-trained BERT models for Japanese with word/subword tokenization + vocabulary construction algorithm information
What is the himkt/awesome-bert-japanese GitHub project? Description: "A list of pre-trained BERT models for Japanese with word/subword tokenization + vocabulary construction algorithm information". Explain what it does, its main use cases, key features, and who would benefit from using it.