Knowledge Base
Compiled, cross-referenced.
ML/AI concepts written once and kept current — each entry synthesises primary papers, not summaries of summaries.
Concept Map
4 entries · 26 papers · 11 posts · 37 connections
All entries
Contrastive Learning
A self-supervised framework that learns representations by pulling semantically similar samples together and pushing dissimilar ones apart in embedding space.
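The pull-together/push-apart objective is usually formalised as the InfoNCE loss. A minimal numpy sketch, assuming each anchor's positive is the matching row of a second batch and all other rows serve as in-batch negatives (`info_nce` and the `temperature` default are illustrative, not from any specific paper's code):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE: for each anchor, the matching positive row should score
    higher than every other (negative) row in the batch."""
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature          # (N, N) similarity matrix
    # Row-wise log-softmax; the diagonal holds each anchor's positive pair
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When anchors and positives agree, the diagonal dominates each row and the loss approaches zero; mismatched pairs drive it up, which is exactly the pull/push behaviour described above.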
Scaling Laws
Empirical power-law relationships between model performance and the three axes of scale — parameters, training tokens, and compute budget.
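Along a single axis the relationship takes the form L(N) = (N_c / N)^alpha, so it appears as a straight line on log-log axes and the exponent can be recovered by linear regression. A toy sketch in numpy, using illustrative constants in the rough range reported by Kaplan et al. (2020) for the parameter axis:

```python
import numpy as np

# Synthetic loss values following L(N) = (N_c / N)**alpha
# (alpha and N_c are illustrative constants, not fitted to real runs)
alpha, N_c = 0.076, 8.8e13
params = np.array([1e6, 1e7, 1e8, 1e9])
loss = (N_c / params) ** alpha

# On log-log axes: log L = alpha*log N_c - alpha*log N, a straight line
slope, intercept = np.polyfit(np.log(params), np.log(loss), 1)
alpha_hat = -slope  # recovered exponent
```

The same fit applies independently to training tokens and compute; real data deviates from the clean line once another axis becomes the bottleneck.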
Transformer Architecture
A sequence-to-sequence architecture built entirely on attention mechanisms, replacing recurrence and convolutions with parallelizable self-attention layers.
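The core self-attention layer reduces to scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, computed for all positions at once rather than step by step. A minimal single-head numpy sketch (batching and multi-head projection omitted):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention for one head.
    Q, K, V: (seq_len, d_k) arrays; returns outputs and attention weights."""
    d_k = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d_k)          # (seq, seq) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V, weights
```

Because every position attends to every other in one matrix product, the whole sequence is processed in parallel, which is what lets the architecture drop recurrence.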
Vision-Language Models
Models that jointly encode images and text, enabling zero-shot transfer, visual question answering, and image-conditioned generation.
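Zero-shot transfer in this family typically works by embedding the image and a set of text prompts into a shared space and picking the most similar prompt. A CLIP-style sketch assuming the embeddings are already computed (the encoders themselves are omitted; `zero_shot_classify` and the toy vectors are illustrative):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Return the index of the text prompt whose embedding has the
    highest cosine similarity with the image embedding."""
    i = image_emb / np.linalg.norm(image_emb)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = t @ i                 # cosine similarity per prompt
    return int(np.argmax(sims))
```

Swapping in a new set of prompts changes the label space with no retraining, which is what makes the zero-shot setting practical.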