-
The Mismanaged Geniuses Hypothesis
We propose the mismanaged geniuses hypothesis, which posits that existing frontier language models are severely underutilized due to sub-optimal use of individual language model calls.
-
Language Models will be Scaffolds
The language models we interact with in the near future will be what we call scaffolds today.
-
Recursive Language Models
We propose Recursive Language Models (RLMs), an inference strategy where language models can decompose and recursively interact with input context of unbounded length through REPL environments.
-
A Meticulous Guide to Advances in Deep Learning Efficiency over the Years
A very long and thorough guide to how deep learning algorithms, hardware, libraries, compilers, and more have become more efficient.
-
The Annotated Kolmogorov-Arnold Network (KAN)
An annotated guide to the Kolmogorov-Arnold Network.