Here’s a fun spreadsheet that implements word2vec. Use it as a jumping-off point.
It has:
- A single small vocabulary of 9 words
- A single example: *mary had a little lamb*
- We move positive vectors closer together (*mary* is IN the context window of *had*); we move negative vectors farther apart (*toenail* is NOT in the context window of *mary*)
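The closer/farther mechanics above can be sketched with a couple of SGD steps on toy vectors. This is a minimal numpy sketch, not the spreadsheet's exact formulas: the learning rate and the random vectors are made up, and it uses a single vector per word for simplicity (word2vec proper keeps two per word, as described next).

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
# Toy random vectors for three words (hypothetical values, not the spreadsheet's)
vec = {w: rng.normal(scale=0.1, size=dim) for w in ["mary", "had", "toenail"]}
lr = 0.5  # assumed learning rate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(center, other, label):
    """One logistic-loss SGD step pushing sigmoid(center . other) toward label (1 or 0)."""
    err = sigmoid(center @ other) - label
    c_old = center.copy()
    center -= lr * err * other   # mutates the array in place: move the center word
    other  -= lr * err * c_old   # ...and the other word

before_pos = vec["mary"] @ vec["had"]
sgd_step(vec["mary"], vec["had"], 1)        # positive pair: pull together
assert vec["mary"] @ vec["had"] > before_pos

before_neg = vec["mary"] @ vec["toenail"]
sgd_step(vec["mary"], vec["toenail"], 0)    # negative pair: push apart
assert vec["mary"] @ vec["toenail"] < before_neg
```

A positive label makes the gradient step add a little of each word's vector to the other (dot product goes up); a zero label subtracts instead (dot product goes down).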
In word2vec we maintain two embeddings per vocabulary entry: an input vector and an output vector.
They mean subtly different things:
- Let’s say two inputs are similar, *mary* and *jane*. This says: when *mary* and *jane* act as centers, they co-occur with similar in-context words. Maybe *poppins*?
- Let’s say two outputs are similar: *little* and *lamb*. This says: they often appear near the same center words, i.e. *mary*
In the end, they’ll be highly correlated. But most people take the input vectors after training.
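In code terms, the two-embeddings idea is just two tables. A hypothetical numpy layout (the 9-word vocabulary list and embedding size here are assumptions, not the spreadsheet's actual values):

```python
import numpy as np

# A 9-word vocabulary like the spreadsheet's; this exact word list is an assumption
vocab = ["mary", "had", "a", "little", "lamb", "poppins", "jane", "monkeys", "toenail"]
idx = {w: i for i, w in enumerate(vocab)}
dim = 4  # embedding size chosen arbitrarily
rng = np.random.default_rng(42)

# Two embeddings per word: one for when it acts as the center (input),
# one for when it acts as context (output)
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))
W_out = rng.normal(scale=0.1, size=(len(vocab), dim))

# "poppins appears near center mary" is scored across the two tables
score = W_in[idx["mary"]] @ W_out[idx["poppins"]]

# After training, most people keep just the input table as "the" word vectors
embeddings = {w: W_in[idx[w]] for w in vocab}
```

The asymmetry matters during training: a center word is always looked up in `W_in` and its neighbors in `W_out`, which is why the two tables end up capturing the two subtly different kinds of similarity described above.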
The spreadsheet models two word2vec variants:
- Softmax: a model that predicts the probability of an adjacent word directly, given a center word. E.g. *poppins* near *mary* would have a reasonably high probability compared to *monkeys* near *mary*. Then backprop to the embeddings so the model predicts these probabilities more accurately. Sadly, with real vocabularies of 100k–1M+ words, doing this during training becomes infeasible
- Skip-gram with negative sampling: learn on samples: nudge the dot product of one positive word toward the center word (i.e. pull *poppins* closer to *mary*), but push out-of-context words farther away, like *mary* from *toenail*. This method scales better and is more common
Further reading
-Doug
PS - 3 days left to sign up for Cheat at Search with Agents!
This is part of Doug’s Daily Search tips - subscribe here
Enjoy softwaredoug in training course form!
Starting June 22!
I hope you join me at Cheat at Search with LLMs to learn how to apply LLMs to search applications. Check out this post for a sneak preview.