
25 Jan Large Language Model (LLM)

A large language model (LLM) is a deep learning model used to perform natural language processing (NLP) tasks. Large language models are built on transformer architectures such as GPT and are trained on massive datasets, which in turn require large amounts of computing resources to build. This training enables them to interpret prompts and carry out tasks such as answering questions, translating, predicting, or generating text and other content.
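At their core, these models learn to predict the next token given a prompt. As a rough, hedged illustration of that idea (not a transformer, and nothing like the scale of a real LLM), here is a toy bigram model that counts which word follows which in a tiny made-up corpus and then generates text from a prompt:

```python
import random
from collections import defaultdict

# Toy illustration of next-token prediction, the core task an LLM
# learns at vastly larger scale. This is a simple bigram table,
# not a transformer; the corpus is invented for the example.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)  # record each observed successor

def generate(prompt, length=5, seed=0):
    """Sample a continuation one word at a time."""
    random.seed(seed)
    word = prompt
    out = [word]
    for _ in range(length):
        options = counts.get(word)
        if not options:  # no known successor: stop
            break
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

A real LLM replaces the lookup table with billions of learned parameters and conditions on the whole prompt rather than just the previous word, but the generate-one-token-at-a-time loop is the same shape.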

“Large language models are also referred to as neural networks (NNs), which are computing systems inspired by the human brain. These neural networks work using a network of nodes that are layered, much like neurons.”  (Elastic)
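To make the layered-nodes analogy concrete, here is a minimal sketch of a feed-forward network: each layer maps its inputs to outputs through weighted sums passed through a nonlinearity, loosely like layers of neurons. The weights and inputs below are made up for illustration, not trained values:

```python
import math

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # one output node per row of weights: weighted sum + bias,
    # then a nonlinearity
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                      # input features
h = layer(x, [[0.4, 0.3], [-0.2, 0.6]], [0.1, 0.0])  # hidden layer, 2 nodes
y = layer(h, [[0.7, -0.5]], [0.2])                   # output layer, 1 node
print(y)
```

Real neural networks stack many such layers and learn the weights from data via backpropagation; transformers add attention mechanisms on top of this basic layered structure.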

See also: TechTarget
