On Abstraction and Long-Time Horizon Prediction
Organizing my thoughts on abstraction.
Proposition: The ability to make predictions over extended time horizons \(\iff\) abstraction.
We live in an era of machines endowed with ever-increasing long-term predictive ability. ChatGPT exhibits linguistic intelligence through a chat interface, driven by language prediction (next-token prediction). AlphaFold and ESM have largely solved protein folding, driven by missing-residue prediction and the PDB. Vision transformers have achieved SoTA on most image-processing tasks, driven by missing-patch prediction.
To what extent do these models form “rational” abstractions to achieve these feats of prediction? To what extent do they truly understand the data they process and the world in which they act? Is there a fundamental difference between the mode of intelligence we humans use to understand the world and the mode of intelligence used by these machines?
The construction of abstract models of the world (“world models”) has emerged as a proxy for asking whether a system “understands” its world. The question of whether LLMs are learning a world model is of great interest to AI researchers. People like Ilya Sutskever, co-founder and Chief Scientist of OpenAI, believe that predicting language implies a world model. People like Yann LeCun beg to differ.
The idea that “The ability to make predictions over extended time horizons \(\iff\) abstraction” keeps coming up as I try to formalize what abstraction is and why it’s interesting. In this article, I attempt to formalize the notion of abstraction and show its relationship with long-time horizon prediction.
Defining Abstraction
Abstraction involves mapping complex input information to an (often compressed) representation that preserves the relevant features for a given task. Formally, we define an abstraction function \(A\) that operates on input information \(I\) in feature space \(\mathcal F\).
Definition: An abstraction function \(A: \mathcal F \to \mathcal F'\) maps the input information \(I \in \mathcal F\) to an abstract representation \(I' \in \mathcal F'\), where the abstract feature space \(\mathcal F' \subseteq \mathcal F^*\) and \(\mathcal F^*\) is the set of all computable functions on the input feature space \(\mathcal F\).
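To make this concrete, here is a toy sketch in Python (my own illustration, not part of the formal definition): the input \(I\) is a raw sampled sinusoid, and \(A\) maps it to the pair (amplitude, frequency), a computable function of the input features.

```python
import numpy as np

def abstraction(signal: np.ndarray, sample_rate: float):
    """Toy A: maps a raw sampled sinusoid I (thousands of floats) to
    I' = (amplitude, frequency), two numbers that suffice to regenerate
    the whole signal."""
    amplitude = np.sqrt(2.0) * signal.std()       # RMS-based amplitude estimate
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate)
    frequency = freqs[spectrum.argmax()]          # dominant frequency
    return amplitude, frequency

t = np.arange(10_000) / 1_000.0                   # 10 s sampled at 1 kHz
I = 3.0 * np.sin(2 * np.pi * 5.0 * t)             # I in F: 10,000 floats
print(abstraction(I, sample_rate=1_000.0))        # I' in F': ~(3.0, 5.0)
```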
An effective abstraction should satisfy four key properties:
- Information preservation: \(MI(I; I')\), the mutual information between input \(I\) and abstract representation \(I'\), must be high.
- Complexity reduction: \(K(I')\), the Kolmogorov complexity of the abstract representation \(I'\), must be low.
- Task-relevance: \(P(A, t; k)\), the performance of abstraction \(A\) on task \(t\) using an optimal program of Kolmogorov complexity \(\leq k\), must be maximized.
- Generalization: \(G(A) = \frac{1}{N} \sum_{i=1}^{N} P(A, t_i; k)\), the average performance of optimal machines of Kolmogorov complexity \(\leq k\) using abstraction \(A\) across a related task family \(t_1, \dots, t_N\), must be high.
I suspect some of these properties will turn out to be redundant (e.g., 3-4 are implied by 1-2). This list covers the main attributes of abstraction that excite me. The more I thought about it, the more it seemed that 1-2 (info preservation and complexity reduction) are equivalent to “abstract representations that enable long-term prediction at minimal computational cost”.
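Neither \(K\) nor \(MI\) is computable in general, but we can sanity-check properties 1-2 on the sine-wave toy above with standard proxies: compressed size as an upper bound on Kolmogorov complexity, and lossless reconstruction as a stand-in for \(MI(I; I') \to H(I)\). A sketch, reusing the signal from before:

```python
import zlib
import numpy as np

def complexity_proxy(x: np.ndarray) -> int:
    """K(x) is uncomputable; compressed byte length is a crude upper-bound proxy."""
    return len(zlib.compress(x.astype(np.float32).tobytes(), level=9))

t = np.arange(10_000) / 1_000.0
I = 3.0 * np.sin(2 * np.pi * 5.0 * t)
I_abstract = np.array([3.0, 5.0])                 # I' = (amplitude, frequency)

# Property 2 (complexity reduction): K(I') << K(I).
print(complexity_proxy(I), complexity_proxy(I_abstract))

# Property 1 (information preservation): I' suffices to reconstruct I,
# i.e., MI(I; I') ~ H(I); checked here via reconstruction error.
amp, freq = I_abstract
print(np.abs(I - amp * np.sin(2 * np.pi * freq * t)).max())  # ~ 0
```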
Defining Long-Term Prediction
A system \(S\) can predict a process \(\{I_j\}_{j=1}^N\) if it can generate all of \(I_2, \dots, I_N\) from \(I_1\) alone. We are generally interested in computationally bounded systems \(S\) that predict the process \(\{I_j\}_{j=1}^N\) with high fidelity (e.g., that assign maximal likelihood to sequences generated by the process).
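In code, the interface might look like this (a minimal sketch with a toy deterministic process; names like `system_step` are my own):

```python
import numpy as np

def predict(system_step, I_1, N):
    """S predicts the process {I_j} if, from I_1 alone, it can roll out I_2..I_N."""
    trajectory = [I_1]
    for _ in range(N - 1):
        trajectory.append(system_step(trajectory[-1]))
    return trajectory

# Toy process: rotate a 2-D state by theta at every step.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rollout = predict(lambda I: R @ I, np.array([1.0, 0.0]), N=100)
```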
Long-Term Prediction \(\implies\) Complexity-reducing representation + info preservation
Consider a bounded computational system \(S\) with Kolmogorov complexity \(\|S\| \leq k\) capable of predicting the future states of an input signal \(I_1, \dots, I_N\) over a long time horizon \(N\) from starting state \(I_1\). This implies the existence of a program \(p\) of complexity \(\|p\| \leq k + H(I_1)\) that can generate \(I\). If system \(S\) has a memory smaller than \(\|I_i\|\), an information-preserving, reduced-complexity representation of \(I\), denoted \(I'\), is implied within system \(S\).
Proof sketch: Given a system \(S\), \(\|S\| \leq k\), capable of predicting all of \(I_1, \dots, I_N\) from starting state \(I_1\), create the program \(p = \{S, I_1\}\) with complexity \(\|p\| \leq k + H(I_1)\) that generates \(I_1, \dots, I_N\) by running \(S\) on \(I_1\). If the memory state of program \(p\) has complexity less than \(\|I_i\|\), that memory state can be taken as \(I'_i\), the reduced-complexity representation of the input signal \(I\).
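A toy version of this construction, continuing the sinusoid example (all names are mine, and I assume the signal starts at phase zero): each frame \(I_i\) is a 1,000-sample snippet, while the memory state standing in for \(I'_i\) is just an amplitude-phase pair.

```python
import numpy as np

OMEGA, FRAME = 2 * np.pi * 5.0, 1_000   # 5 Hz signal, 1,000 samples/frame at 1 kHz

def program_p(S_init, S_step, S_readout, I_1, N):
    """p = {S, I_1}: regenerate I_1, ..., I_N by running S forward.
    S's memory h is far smaller than any frame I_i, so h_i serves as
    I'_i, the reduced-complexity representation."""
    h = S_init(I_1)                      # compress the start state into S's memory
    frames = [S_readout(h)]
    for _ in range(N - 1):
        h = S_step(h)                    # evolve the low-complexity memory I'_i
        frames.append(S_readout(h))      # decode the full frame I_i from I'_i
    return frames

S_init = lambda I: (np.sqrt(2.0) * I.std(), 0.0)            # h = (amplitude, phase)
S_step = lambda h: (h[0], h[1] + OMEGA * FRAME / 1_000.0)   # advance phase one frame
S_readout = lambda h: h[0] * np.sin(OMEGA * np.arange(FRAME) / 1_000.0 + h[1])

I_1 = 3.0 * np.sin(OMEGA * np.arange(FRAME) / 1_000.0)      # first second of signal
frames = program_p(S_init, S_step, S_readout, I_1, N=10)    # ten seconds regenerated
```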
Complexity-reducing representation and info preservation \(\implies\) long-term prediction
Suppose we have a program \(p\) that can generate the representation \(I' = I'_1, \dots, I'_N\), where \(I'\) is the abstract representation \(A(I)\) of ground-truth data \(I = I_1, \dots, I_N\). Assume that \(MI(I; I') \to H(I)\). This implies the existence of a bounded computational system \(S\) with Kolmogorov complexity \(\|S\| \leq \|p\|\) capable of predicting the future states of input signal \(I_1, \dots, I_N\) from starting state \(I_1\).
Proof sketch: Same constructive argument as above. Since \(MI(I; I') \to H(I)\), each \(I_i\) is recoverable from \(I'_i\) by a decoder of bounded complexity, so we can always create \(S = \{p, \text{decoder}\}\): a system that ignores the first input \(I_1\), uses \(p\) to generate \(I'\), and decodes it back into the signal \(I\). In general, we would expect a more efficient solution to exist where \(\|S\| < \|p\|\).
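The reverse construction in code, again continuing the toy example (`decode` is my stand-in for the bounded-complexity map from \(I'\) back to \(I\) implied by \(MI(I; I') \to H(I)\)):

```python
import numpy as np

OMEGA, FRAME = 2 * np.pi * 5.0, 1_000

def predictor_from_generator(p, decode):
    """Construct S = {p, decode}: a predictor that ignores its starting
    input I_1 and simply replays the decoded output of p."""
    def S(I_1, N):                                # I_1 is accepted but never used
        return [decode(h) for h in p(N)]
    return S

p = lambda N: [(3.0, OMEGA * i) for i in range(N)]            # generates I'_1..I'_N
decode = lambda h: h[0] * np.sin(OMEGA * np.arange(FRAME) / 1_000.0 + h[1])
S = predictor_from_generator(p, decode)
frames = S(I_1=None, N=10)                        # same ten seconds, I_1 ignored
```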
Task Relevance and Generalization
Exercise for the reader.
Connection to Cortical Neuroscience and Thinking Machines
In cortical neuroscience, it is thought that higher-abstraction representations are localized in layers of the cortex closer to the surface, and lower-abstraction representations are localized in the more internal layers of the cortex. It is believed that the cortex is largely a prediction machine, optimizing its representations and activity to predict subsequent sensory-motor input and other information projected from non-cortical brain regions (see Jeff Hawkins's work at Numenta for a more comprehensive description of these ideas). Since sensory-motor information (and myriad projections from non-cortical brain regions) is received at the innermost layers of the cortex, outer regions are forced to work with “stale” information, and can only propagate the results of their computation back to the inner layers with a transmission delay. The idea that long-time-horizon prediction necessitates the formation of abstract representations – and vice versa – may help us understand this “design decision” in the structure of the cortex. It’s a helpful idea if you’re interested in building computronium and thinking machines, too.