Why Large Language Models Hallucinate
IBM Technology

Published on Apr 20, 2023

Learn about watsonx: https://ibm.biz/BdvxRD

Large language models (LLMs) like ChatGPT can generate authoritative-sounding prose on many topics and domains, but they are also prone to simply "make stuff up" — producing plausible-sounding nonsense. In this video, Martin Keen explains the different types of LLM hallucinations, why they happen, and concludes by recommending steps that you, as an LLM user, can take to minimize their occurrence.

#AI #Software #Dev #lightboard #IBM #MartinKeen #llm
