why llama-3-8B is 8 billion parameters instead of 7?
Chris Hay

Published on Apr 21, 2024

llama-3 has ditched its tokenizer and instead opted to use the same tokenizer as GPT-4 (tiktoken, created by OpenAI); it even uses the same first 100K tokens of the vocabulary.

In this video Chris walks through why Meta has switched tokenizers and the implications for model size, the embeddings layer, and multilingual tokenization.
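
As a rough back-of-the-envelope check (not Meta's own accounting), the short Python sketch below shows where most of the extra billion parameters comes from. The numbers are assumptions: a 32,000-token vocabulary for llama-2-7B, a 128,256-token vocabulary for llama-3-8B, a hidden size of 4096, and untied input/output embeddings.

HIDDEN_DIM = 4096
LLAMA2_VOCAB = 32_000   # assumed llama-2 vocabulary size
LLAMA3_VOCAB = 128_256  # assumed llama-3 vocabulary size

def embedding_params(vocab_size: int, hidden_dim: int) -> int:
    # input embedding matrix plus the output (lm_head) projection
    return 2 * vocab_size * hidden_dim

extra = embedding_params(LLAMA3_VOCAB, HIDDEN_DIM) - embedding_params(LLAMA2_VOCAB, HIDDEN_DIM)
print(f"extra parameters from the bigger vocabulary: ~{extra / 1e9:.2f}B")
# prints ~0.79B, which accounts for most of the jump from "7B" to "8B"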

He also runs his tokenizer benchmark and shows how the new tokenizer is more efficient in languages such as Japanese.
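
The snippet below is not the benchmark from the repos linked further down, just a minimal sketch of that kind of comparison using OpenAI's tiktoken library; the older r50k_base encoding stands in here for a smaller, less multilingual vocabulary.

import tiktoken

text = "東京は日本の首都です。"  # "Tokyo is the capital of Japan."

for name in ("r50k_base", "cl100k_base"):
    enc = tiktoken.get_encoding(name)
    tokens = enc.encode(text)
    print(f"{name}: {len(tokens)} tokens")

# fewer tokens for the same sentence means cheaper inference and a longer
# effective context window in that language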

repos
------
https://github.com/chrishayuk/embeddings
https://github.com/chrishayuk/tokeniz...

