🤗 Pretrained BERT model & WordPiece tokenizer trained on Korean Comments
What is the Beomi/KcBERT GitHub project? Description: "🤗 Pretrained BERT model & WordPiece tokenizer trained on Korean Comments". Explain what it does, its main use cases, key features, and who would benefit from using it.
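Since KcBERT is distributed as a Hugging Face checkpoint, it can typically be loaded with the `transformers` library. Below is a minimal sketch, assuming the published model id `beomi/kcbert-base` and the standard `AutoTokenizer`/`AutoModelForMaskedLM` API; the exact usage shown here is illustrative rather than taken from the project's README.

```python
# Minimal sketch of loading KcBERT via Hugging Face transformers.
# Assumption: the checkpoint is published under "beomi/kcbert-base".
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "beomi/kcbert-base"  # assumed model id on the Hugging Face Hub


def load_kcbert(model_name: str = MODEL_NAME):
    """Download (or load from local cache) the KcBERT tokenizer and model."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_kcbert()
    # Split an informal Korean comment with the WordPiece tokenizer;
    # KcBERT was trained on comment-style text, so slang and emoticons
    # are part of its expected input domain.
    print(tokenizer.tokenize("이 영화 진짜 재밌다 ㅋㅋ"))
```

The masked-LM head shown here suits fill-in-the-blank probing; for downstream tasks such as comment sentiment classification, one would instead load the same checkpoint with `AutoModelForSequenceClassification` and fine-tune it.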
Report bugs or request features on the KcBERT GitHub issue tracker.