How To Run Ollama On F5 AppStack With An NVIDIA GPU In AWS - Getting Started With AI
F5 DevCentral

Published on Apr 23, 2024

If you're just getting started with AI, you'll want to watch this one: Michael Coleman shows Aubrey King, from DevCentral, how to run Ollama on F5 AppStack on an AWS instance with an NVIDIA Tesla T4 GPU.

You'll get to see the install, what it looks like when a WAF flags a suspicious conversation, and even a quick peek at how Mistral handles a challenge differently than Gemma.
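If you want to follow along on your own GPU instance, the basic Ollama steps look roughly like this. This is a hedged sketch, not the exact commands from the video: the install script URL is Ollama's published one-liner, and the `mistral` and `gemma` model tags are assumed to match the models demoed.

```shell
# Install Ollama (official install script; assumes a Linux instance
# with NVIDIA drivers already set up, e.g. a Tesla T4 on AWS)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the GPU is visible before pulling large models
nvidia-smi

# Pull the two models compared in the video (tags assumed)
ollama pull mistral
ollama pull gemma

# Chat with a model interactively from the terminal
ollama run mistral
```

Ollama also exposes a local REST API on port 11434, which is what front ends like Open WebUI talk to.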

Give it a try yourself:
https://github.com/open-webui/open-we...
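For the web front end shown in the video, the Open WebUI project documents a one-container Docker quick start along these lines. Port mappings and volume names are the defaults from the project's README; adjust them for your environment.

```shell
# Run Open WebUI in Docker, pointing it at an Ollama server on the host
# (host.docker.internal mapping lets the container reach the host's port 11434)
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

# Then browse to http://localhost:3000 and connect to your Ollama instance
```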

DevCentral Article:
https://community.f5.com/kb/technical...

00:00 Introduction
04:23 Introducing Ollama
05:37 Selecting a model
08:27 Customizing and Installing Ollama
11:59 MultiCloud walkthrough
12:43 Adding WAAP
14:27 Using Ollama with Mistral and Gemma
21:15 WAAP alerts on a security threat in conversation

⬇️⬇️⬇️ JOIN THE COMMUNITY! ⬇️⬇️⬇️

DevCentral is an online community of technical peers dedicated to learning, exchanging ideas, and solving problems - together.

Find all our platform links ⬇️ and follow our Community Evangelists! 👋

➡️ DEVCENTRAL: https://community.f5.com
➡️ YOUTUBE:    / devcentral  
➡️ LINKEDIN:   / f5-devcentral  
➡️ TWITTER:   / devcentral  

Your Community Evangelists:
👋 Jason Rahm:   / jrahm   |   / jasonrahm  
👋 Buu Lam:   / buulam   |   / buulam  
👋 Aubrey King:   / aubreyking   |   / aubreykingf5  
