flash-tokenizer


NLPOptimize

Efficient and optimized tokenizer engine for LLM inference serving

459 Stars
9 Forks
459 Watchers
C++ Language
100 SrcLog Score
Cost to Build
$17.12M
Market Value
$63.02M

Growth over time

4 data points recorded between 2025-08-05 and 2026-04-17, tracking stars, forks, and watchers.

Ask AI about flash-tokenizer


What is the NLPOptimize/flash-tokenizer GitHub project? Description: "EFFICIENT AND OPTIMIZED TOKENIZER ENGINE FOR LLM INFERENCE SERVING". Written in C++. Explain what it does, its main use cases, key features, and who would benefit from using it.

The question is copied to your clipboard; paste it into the chat once the AI opens.

How to clone flash-tokenizer

Clone via HTTPS

git clone https://github.com/NLPOptimize/flash-tokenizer.git

Clone via SSH

git clone git@github.com:NLPOptimize/flash-tokenizer.git
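Both URL forms point at the same repository, so if you already cloned over HTTPS and later want to fetch and push over SSH, you can repoint the remote instead of recloning. A minimal sketch, using git's default remote name `origin` and a throwaway scratch repo so the commands run anywhere:

```shell
# Create a scratch repo to stand in for an existing HTTPS clone.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git remote add origin https://github.com/NLPOptimize/flash-tokenizer.git

# Repoint the same remote at the SSH form; future fetch/push use your SSH key.
git remote set-url origin git@github.com:NLPOptimize/flash-tokenizer.git
git remote get-url origin   # prints git@github.com:NLPOptimize/flash-tokenizer.git
```

In a real checkout you would run only the `git remote set-url` line from inside the flash-tokenizer working directory.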

Download ZIP

Download master.zip

Found an issue?

Report bugs or request features on the flash-tokenizer issue tracker:

Open GitHub Issues