2026-04-13
Knowledge Base

Compiled,
cross-referenced.

ML/AI concepts written once and kept current — each entry synthesises primary papers, not summaries of summaries.

Concept Map
4 entries · 26 papers · 11 posts · 37 connections

All entries (4)
01

Contrastive Learning

A self-supervised framework that learns representations by pulling semantically similar samples together and pushing dissimilar ones apart in embedding space.
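The pull/push objective above is commonly realised as an InfoNCE-style loss, where each anchor's matching row is its positive and every other row in the batch serves as a negative. A minimal NumPy sketch (illustrative only; function name and temperature value are my own choices, not from any specific paper):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss over a batch: row i of `positives` is the positive
    for row i of `anchors`; all other rows act as in-batch negatives."""
    # L2-normalise so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                   # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # diagonal = positive pairs
```

Feeding it two augmented views of the same samples should yield a much lower loss than pairing anchors with unrelated vectors, which is exactly the pull-together/push-apart behaviour described above.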

02

Scaling Laws

Empirical power-law relationships between model performance and the three axes of scale — parameters, training tokens, and compute budget.
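A power law of the form L(N) = (N_c / N)^alpha is a straight line in log-log space, so the exponent can be recovered by a linear fit. A small sketch with made-up constants (alpha and N_c here are illustrative placeholders, not claims about any particular model family):

```python
import numpy as np

# Illustrative constants for a parameter-count scaling law.
alpha, N_c = 0.076, 8.8e13

def loss(N):
    """L(N) = (N_c / N) ** alpha — loss falls as a power law in parameters."""
    return (N_c / N) ** alpha

# Recover the exponent from (N, loss) pairs via a log-log linear fit:
# log L = -alpha * log N + alpha * log N_c, so the slope is -alpha.
Ns = np.logspace(6, 10, 20)
slope, intercept = np.polyfit(np.log(Ns), np.log(loss(Ns)), 1)
# slope ≈ -alpha
```

The same log-log fit applies along the other two axes (training tokens, compute), each with its own exponent.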

03

Transformer Architecture

A sequence-to-sequence architecture built entirely on attention mechanisms, replacing recurrence and convolutions with parallelizable self-attention layers.
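The core operation those layers repeat is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal single-head NumPy sketch (batching and masking omitted for brevity):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for 2-D Q, K, V arrays."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True) # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                           # weighted average of values
```

Because every query attends to every key in one matrix product, the whole sequence is processed in parallel, which is the property that lets this replace step-by-step recurrence.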

04

Vision-Language Models

Models that jointly encode images and text, enabling zero-shot transfer, visual question answering, and image-conditioned generation.
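Zero-shot transfer with such a model typically reduces to nearest-neighbour search in the shared embedding space: embed the image, embed one text prompt per class, and pick the most cosine-similar prompt. A sketch assuming the embeddings already exist (the function name is my own; producing the embeddings requires an actual trained encoder):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Return the index of the class prompt whose text embedding is
    most cosine-similar to the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                 # cosine similarity per class prompt
    return int(np.argmax(sims))
```

No classifier head is trained: swapping in a new list of class prompts changes the label set for free, which is what "zero-shot" buys.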