Harnessing Local
Chakkaradeep Chandran

Published on Dec 7, 2023

The quickest way to run #LLMs locally on your laptop!

#llamafile - A single file that contains both the model weights and the executable code to run them! How cool is that! There is no need for extra tools; this one file does the whole job!

Here is my video walkthrough of downloading and running a #llamafile with #LLaVA - the Large Language and Vision Assistant #LLaVALLM, a multimodal model that can generate text and interpret images. We put this multimodal model to the test in this video with some examples.
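The download-and-run steps from the walkthrough can be sketched as a few shell commands. This is a hedged sketch: the release asset name below is an example, so check the Mozilla-Ocho/llamafile releases page for the current LLaVA llamafile before running it.

```shell
#!/bin/sh
# Sketch of the llamafile workflow: one file is both the model and the runner.
# The filename below is an example release asset, not a guaranteed name.
FILE="llava-v1.5-7b-q4.llamafile"

if [ ! -f "$FILE" ]; then
    # The actual download (several GB) is left to you; grab it from the releases page.
    echo "Download $FILE from https://github.com/Mozilla-Ocho/llamafile/releases"
else
    chmod +x "$FILE"   # mark the single file as executable (macOS/Linux)
    ./"$FILE"          # launches a local chat UI, typically at http://localhost:8080
fi
```

On Windows, the same file is run by renaming it with an `.exe` extension instead of using `chmod`.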

00:09 Intro
00:42 What is llamafile?
02:25 Download llamafile
03:46 LLaVA Model
04:07 Run LLaVA llamafile
05:08 LLaVA Demo 1
06:24 LLaVA Demo 2
06:58 LLaVA Demo 3
08:12 Ending and Outro
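Beyond the browser UI used in the demos above, a running llamafile can also be queried from the command line. This sketch assumes the llama.cpp-style `/completion` endpoint on the default port 8080; the prompt and `n_predict` value are just illustrative.

```shell
#!/bin/sh
# Hedged sketch: query a locally running llamafile server over HTTP.
# Assumes the llama.cpp-style /completion endpoint on port 8080.
URL="http://localhost:8080/completion"
BODY='{"prompt": "Describe this assistant in one sentence.", "n_predict": 64}'

# Only attempt the request if something is actually listening on port 8080.
if curl -s -o /dev/null --max-time 2 "http://localhost:8080"; then
    curl -s -X POST "$URL" -H 'Content-Type: application/json' -d "$BODY"
else
    echo "No llamafile server detected on port 8080; start one first."
fi
```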

Having such models run locally and offline seems like magic, but the truth is that these open-source local #LLMs are getting so much better that it soon wouldn't be a surprise if we all use local LLMs more than #ChatGPT! #AI #generativeai

Links:
1. Introducing Llamafile: https://hacks.mozilla.org/2023/11/int...
2. Download and try llamafile: https://github.com/Mozilla-Ocho/llama...
3. LLaVA: Large Language and Vision Assistant: https://llava-vl.github.io/

Hope you enjoyed my video!

🔔 SUBSCRIBE https://www.youtube.com/chaks?sub_con...

Thanks for watching. See you next video!

