25 Jan Large Language Model (LLM)
A large language model (LLM) is a deep learning model used to perform natural language processing (NLP) tasks. Large language models are built on transformer architectures such as GPT and are trained on massive datasets, a process that also requires very large amounts of computing resources. This training enables them to interpret prompts and carry out tasks such as answering questions, translating, predicting, or generating text and other content.
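The core idea of "predicting text" can be illustrated with a deliberately tiny sketch: a bigram model that counts which word tends to follow which. This is a toy illustration only, not how a real LLM works (LLMs use transformer networks trained on billions of tokens), but it shows the same predict-the-next-token framing in miniature. All names (`train_bigram`, `predict_next`, the sample corpus) are invented for this example.

```python
from collections import defaultdict, Counter

def train_bigram(text):
    # Count how often each word follows each other word.
    words = text.split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    # Return the most frequent follower of `word`, or None if unseen.
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (follows "the" twice, vs "mat" once)
```

A real LLM replaces these raw counts with a learned probability distribution over a vocabulary of tens of thousands of tokens, conditioned on long contexts rather than a single preceding word.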
“Large language models are also referred to as neural networks (NNs), which are computing systems inspired by the human brain. These neural networks work using a network of nodes that are layered, much like neurons.” (Elastic)
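The "network of nodes that are layered" in the quote above can be sketched as a single fully connected layer: each output node computes a weighted sum of its inputs plus a bias, passed through a nonlinearity. The weights and inputs below are arbitrary example values, and real networks stack many such layers and learn the weights from data.

```python
def dense_layer(inputs, weights, biases):
    # One fully connected layer: each output node takes a weighted
    # sum of all inputs, adds a bias, then applies ReLU.
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(max(0.0, z))  # ReLU: negative sums become 0
    return outputs

x = [1.0, 2.0]                      # two input values
W = [[0.5, -0.25], [1.0, 1.0]]      # one weight row per output node
b = [0.0, -1.0]                     # one bias per output node
print(dense_layer(x, W, b))         # [0.0, 2.0]
```

Stacking layers like this, with learned weights, is what the quote means by nodes "layered, much like neurons."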
See also: Tech Target