Understanding Tokenization: From Text to Integers
Introduction

Language models are mathematical functions; they operate on numbers, not raw text. Tokenization is the crucial first step that converts human-readable text into a sequence of integers (tokens) the model can process. These tokens are then mapped to embedding vectors.

1. Naive Approaches and Their Flaws

Word-Level Tokenization

The most intuitive approach: split text on spaces and punctuation (a small sketch of this approach follows at the end of this section).

Problems:

- Vocabulary Explosion: English alone has hundreds of thousands of words. The model's vocabulary would be enormous, making the final embedding and output layers computationally massive.
- Out-of-Vocabulary (OOV) Words: If the model encounters a word not seen during training (e.g., a new slang term, a typo, or a technical name), it has no token for it. The word is typically mapped to an <UNK> (unknown) token, losing all semantic meaning.
- Poor Generalization: The model treats "eat", "eating", and "eaten" as three completely separate, unrelated tokens. It fails to capture the shared root "eat", making morphological relationships harder to learn.

Character-Level Tokenization

The opposite extreme: split text into individual characters. ...
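As a concrete illustration of the word-level approach and its OOV problem, here is a minimal Python sketch. The regex split, the hand-rolled vocabulary, and the <UNK> fallback are illustrative assumptions for this toy example, not the scheme any particular model or library uses.

```python
import re

# Illustrative word-level tokenizer: split on word characters and punctuation.
TOKEN_RE = re.compile(r"\w+|[^\w\s]")

def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign an integer ID to every distinct word seen in the training corpus."""
    vocab = {"<UNK>": 0}
    for text in corpus:
        for word in TOKEN_RE.findall(text.lower()):
            vocab.setdefault(word, len(vocab))
    return vocab

def word_tokenize(text: str, vocab: dict[str, int]) -> list[int]:
    """Map each word to its ID; unseen words collapse to <UNK>, losing meaning."""
    return [vocab.get(w, vocab["<UNK>"]) for w in TOKEN_RE.findall(text.lower())]

vocab = build_vocab(["I eat apples.", "She is eating an apple."])
print(word_tokenize("They have eaten the apples.", vocab))
# "eaten" was never seen, so it maps to <UNK> (ID 0) even though "eat" and
# "eating" are in the vocabulary -- the shared root is invisible to the model.

# Character-level tokenization at the other extreme: a tiny vocabulary, since
# every string is just its characters (here, Unicode code points as crude IDs).
print([ord(c) for c in "eaten"])
```

Running the example shows the trade-off directly: the word-level tokenizer has a per-word vocabulary but throws away "eaten" as unknown, while the character-level split never hits an OOV case because its vocabulary is just the character set.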