I'm working on an embedding procedure for placing things into vector spaces. Things which might not normally live in a vector space (although in a sense we all live in a Lorentzian manifold). The basic idea: take a model, show it a bunch of examples of these objects relating to each other (outside of any notion of a metric), and let it figure out where to put them. (In the event that my model ever does anything actually useful, I will provide excessive technical detail, but that's not the objective here.) This evening I remembered that two dimensions are very easy to visualise, so I made an animation of how the objects move in the space as training progresses (the number of training examples is shown in the plot title).
The example here is exceedingly trivial: five objects, with two colour-coded pairs designed (by way of engineering the training data) so that one pair ends up together and the other apart, plus one loner who goes wherever. So they're not so much learning to agree as learning to jealously cling to their partner.
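For the curious, the skeleton of this kind of setup looks roughly like the following. This is a toy sketch, not the actual model: the attract/repel loss, the margin, the learning rate, and the pair indices are all stand-ins I've picked for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five objects embedded in 2D, randomly initialised.
points = rng.normal(size=(5, 2))

# Training pairs: (i, j, target), where target=+1 means "end up together"
# and target=-1 means "end up apart". Object 4 is the loner, never trained.
pairs = [(0, 1, +1), (2, 3, -1)]

lr = 0.1  # learning rate (illustrative)


def sgd_step(points, i, j, target, lr):
    """One stochastic gradient step on a simple attract/repel loss.

    Attract: loss = ||p_i - p_j||^2            (pull the pair together)
    Repel:   loss = max(0, m - ||p_i - p_j||)^2 (push apart up to margin m)
    """
    diff = points[i] - points[j]
    dist = np.linalg.norm(diff)
    if target > 0:
        grad = 2 * diff                        # gradient of ||diff||^2 w.r.t. p_i
    else:
        m = 2.0                                # repulsion margin (arbitrary)
        if dist >= m or dist == 0:
            return                             # already far enough apart
        grad = -2 * (m - dist) * diff / dist   # pushes the pair apart
    points[i] -= lr * grad
    points[j] += lr * grad


# The jittery training loop: one randomly sampled pair per step,
# which is where the twitching in the animation comes from.
for step in range(500):
    i, j, target = pairs[rng.integers(len(pairs))]
    sgd_step(points, i, j, target, lr)
```

Snapshotting `points` every few steps and scattering them gives exactly the kind of animation described above: the attract pair collapses together, the repel pair settles near the margin, and the loner sits wherever it was initialised.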
Look at them twitch! Stochastic gradient descent in action. I'm tempted to make more of these with different learning rates and see how badly I can get it to break, but I foolishly started doing this too late, so I'll save it for next time. (Also next time: evolving energy surfaces).