
When models generate false information or misleading outcomes that do not accurately reflect the facts, patterns, or associations grounded in their training data, they are said to be “hallucinating”—that is, they’re “seeing” something that isn’t actually there.*
Superagency
by Reid Hoffman and Greg Beato