
History, Highlights and Patterns for LLM-powered Products

Agent Issue
4 min read · Aug 18, 2023


We’re back with a fresh batch of expert insights on the hottest topics: large language models, computer vision, and multimodal models!

Hungry for knowledge? Dig into this edition!

1. Cameron R. Wolfe’s 3-part series, The History of Open-Source LLMs

  1. Part 1 (Early Days): Cameron explains several initial attempts at creating open-source language models (e.g., GPT-NeoX-20B, OPT, BLOOM), covering their architectures (i.e., the transformer and its variants), training, fine-tuning, alignment, and performance characteristics.
  2. Part 2 (Better Base Models): The series continues with an overview of the most popular open-source base models, i.e., language models that have been pre-trained but not fine-tuned or aligned (e.g., Llama, MPT, Falcon, Llama-2).
  3. Part 3 (Imitation and Alignment): The final part explains how fine-tuning and alignment narrow the quality and performance gap between open-source and proprietary LLMs.

2. Eugene Yan’s Patterns for Building LLM-based Systems & Products

In this post, Eugene explains 7 practical patterns for integrating LLMs into systems & products:

  1. Evals: To measure…

Written by Agent Issue

Your front-row seat to the future of Agents.
