Andrew Trask
@openminedorg, @GoogleDeepMind Ethics Team, @OxfordUni PhD Candidate, @UN PET Lab, @GovAI_
Sep 22, 2023 29 tweets 6 min read
This is the 1st rigorous treatment (and 3rd verification) I've seen

IMO - this is great for AI safety!

It means that LLMs are doing *exactly* what they're trained to do — estimate next-word probability based on data.

Missing data?

P(word)==0
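
A minimal sketch of that claim, assuming a toy count-based estimator (purely illustrative; a real LLM generalizes across contexts rather than counting exact matches): a continuation that never appears after the context in the training data gets probability exactly 0.

```python
from collections import Counter

def next_word_probability(corpus, context, word):
    """Empirical P(word | context): count how often `word` follows
    `context` across the corpus, normalized over all observed
    continuations. A continuation never seen in the data gets
    probability exactly 0."""
    ctx = context.split()
    continuations = Counter()
    for doc in corpus:
        tokens = doc.split()
        for i in range(len(tokens) - len(ctx)):
            if tokens[i:i + len(ctx)] == ctx:
                continuations[tokens[i + len(ctx)]] += 1
    total = sum(continuations.values())
    return continuations[word] / total if total else 0.0

corpus = ["the cat and the dog sat", "the cat and the mouse ran"]
print(next_word_probability(corpus, "the cat and the", "dog"))    # 0.5
print(next_word_probability(corpus, "the cat and the", "zebra"))  # 0.0: missing data
```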

So where is the AI logic?

1/🧵 Current hypothesis: LLMs are a lot like surveys.

When they see a context ("The cat and the"), they basically conduct a *survey* over every datapoint in the training dataset.

It's like asking every datapoint, "What do YOU think the next word might be?"

And then...
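
A toy sketch of that survey picture, under the assumption that "asking a datapoint" means weighting its vote by a crude context-similarity score (the similarity function here is a hypothetical stand-in for whatever the trained model's weights actually encode):

```python
from collections import defaultdict

def survey_next_word(training_pairs, query_context, k=4):
    """Toy 'survey': every (context, next_word) datapoint votes for its
    own next word, weighted by how similar its context is to the query.
    Normalizing the tally gives a next-word probability distribution."""
    def similarity(a, b):
        # crude overlap of the last k words; a hypothetical stand-in for
        # the similarity a trained model actually computes
        wa, wb = a.split()[-k:], b.split()[-k:]
        return sum(x == y for x, y in zip(wa, wb)) / k

    votes = defaultdict(float)
    for context, next_word in training_pairs:
        votes[next_word] += similarity(context, query_context)
    total = sum(votes.values())
    return {w: v / total for w, v in votes.items()} if total else {}

pairs = [("the cat and the", "dog"),
         ("the cat and the", "mouse"),
         ("a boy and the", "girl")]
print(survey_next_word(pairs, "the cat and the"))
# {'dog': 0.4, 'mouse': 0.4, 'girl': 0.2} -> every datapoint got a weighted vote
```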
Sep 13, 2022 7 tweets 2 min read
Wow - in 8 tweets I just learned and un-learned more about the mysteries of deep neural networks than I have in the last two years.

This is the start of something really, really big... and it opens a huge door for federated learning. This technique seems to get a real foothold on managing the intelligence inside an AI model. Imagine training 10,000 small models on 10,000 different topic areas and being able to decide exactly which collection of specialties a model should have.
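
A sketch of what that modular picture could look like (all names hypothetical; this illustrates only the composition idea, not the technique from the quoted thread): train one small model per topic, then assemble a system from an explicit list of specialties, so a capability you leave out simply is not in the model.

```python
from typing import Callable, Dict

Model = Callable[[str], str]  # a trained specialist: prompt -> completion

def train_specialist(topic: str) -> Model:
    # hypothetical stand-in for actually training a small model on one topic
    return lambda prompt: f"[{topic} model] answer to: {prompt}"

def assemble(specialties) -> Dict[str, Model]:
    """Build a system containing exactly the chosen specialties."""
    return {topic: train_specialist(topic) for topic in specialties}

def route(system: Dict[str, Model], topic: str, prompt: str) -> str:
    if topic not in system:
        raise KeyError(f"capability '{topic}' was never included in this model")
    return system[topic](prompt)

system = assemble(["chemistry", "history"])  # deliberately omits other topics
print(route(system, "chemistry", "What is a mole?"))
```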

Heads up #AISafety community!