https://twitter.com/doesdatmaksense

Sept 21, 2024

<aside> 🏹

We will be diving deep into the paper: [Arxiv] Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process

</aside>

Are language models just memorizing, or is there something deeper going on?

You know that feeling when you’re solving a math problem and everything just clicks? You start connecting the dots, working through each step until you land on the answer. Well, my gut says language models might be doing something similar—or, in some cases, something way more complex.

We’re used to seeing large language models (LLMs) churn out math solutions or generate code, but what’s really happening behind the scenes? Are these models actually reasoning like we do, or are they simply remixing patterns from their training? And, more intriguingly, when these models make mistakes, what’s going wrong in their "thought process"?

The plan is to dig deep into these questions using controlled experiments. Here’s what we will uncover:

  1. Do language models truly develop reasoning skills—or is it all memorization?
  2. What does the model’s internal reasoning process look like, and how is it different from human reasoning?
  3. Can models trained on specific datasets like GSM8K generalize their skills to harder, unseen problems?
  4. What causes models to make mistakes during reasoning?
  5. Does the depth of the model (number of layers) matter more than its width (neurons per layer) for solving complex reasoning problems?

This research takes a principled approach to understanding the model's internal processes. The team designed synthetic math datasets and probing techniques (we'll get into that later) to see how well models tackle reasoning tasks. And here’s what they found:

Result 1: These models can solve out-of-distribution problems, including those requiring longer reasoning chains than seen in training.

Result 2: The models don’t just solve problems; they’re efficient about it, often generating the shortest possible solutions, skipping unnecessary steps—very much the opposite of memorization.
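To make these two results concrete, here’s a minimal toy sketch of what a GSM-style synthetic problem could look like (my own simplified construction, not the paper’s actual generator): each problem is a small dependency graph of parameters, some of which are irrelevant to the query, and the “shortest possible solution” computes exactly the necessary parameters and nothing else.

```python
# Toy sketch of a GSM-style synthetic problem (illustrative only; the paper's
# real generator is far more elaborate). Each parameter is either a given
# constant or a product of other parameters it depends on.

problem = {
    "backpacks_per_classroom": 4,                        # given constant
    "pencils_per_backpack": 5,                           # given constant
    "erasers_per_backpack": 7,                           # given, but irrelevant to the query
    "pencils_per_classroom": ["backpacks_per_classroom",
                              "pencils_per_backpack"],   # derived from its dependencies
}
query = "pencils_per_classroom"

def necessary(param, prob, seen=None):
    """Collect only the parameters the query actually depends on."""
    seen = set() if seen is None else seen
    if param in seen:
        return seen
    seen.add(param)
    definition = prob[param]
    if isinstance(definition, list):          # derived parameter
        for dep in definition:
            necessary(dep, prob, seen)
    return seen

def shortest_solution(prob, query):
    """Evaluate the query, touching only necessary parameters (a 'shortest' chain of thought)."""
    values, steps = {}, []
    def evaluate(param):
        if param in values:
            return values[param]
        definition = prob[param]
        if isinstance(definition, list):
            result = 1
            for dep in definition:
                result *= evaluate(dep)
            steps.append(f"{param} = {' * '.join(definition)} = {result}")
        else:
            result = definition
            steps.append(f"{param} = {result} (given)")
        values[param] = result
        return result
    evaluate(query)
    return steps

print(necessary(query, problem))                  # erasers_per_backpack never appears
print("\n".join(shortest_solution(problem, query)))
```

A memorizing model might dump every given quantity into its answer; a model that produces the three-step trace above, and generalizes to dependency graphs deeper than any it saw in training, is doing something closer to reasoning.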

Discovering the Model’s “Mental Process”:

What’s fascinating is how the models seem to have their own internal mental process. It’s like watching someone figure something out—there are moments of reasoning that feel eerily human, and then there are completely unexpected behaviors that might hint at something deeper, possibly the early sparks of AGI.

The most significant finding lies in uncovering the model's internal "mental process", which mirrors human reasoning but also introduces new, unexpected skills:

  1. Preprocessing: Before generating a single token of the solution, the model internally identifies the full set of parameters it will need, much like a human would. Picture how you might jot down the relevant numbers or formulas on a scrap of paper before diving into a math problem; these models do the same, but mentally, without explicit instructions.
  2. All-Pair Dependency: Here’s where things get wild: models compute the relationships between all pairs of variables while solving a given problem, even when they don’t need to. This ability to work out dependencies between objects "mentally" goes beyond how humans typically reason. In fact, this skill might be one of the first glimpses of AGI, since humans usually only consider what’s necessary to reach the goal.
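Both of these claims come from probing the model’s hidden states: freeze the trained model, read out its internal activations at the end of the problem statement, and train a tiny linear classifier to predict a property the model was never asked to output, such as “is this parameter necessary?” or “does A depend on B?”. Below is a generic linear-probe sketch along those lines; it illustrates the idea rather than reproducing the paper’s exact probing setup, and the model name, example texts, and labels are placeholders.

```python
# Generic linear-probing sketch (not the paper's exact probing method):
# freeze the LM, take the hidden state at the last token of the problem
# statement, and fit a small linear classifier to predict a hidden property.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper trains its own GPT-2-style models from scratch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

def hidden_state(problem_text: str) -> torch.Tensor:
    """Last-layer hidden state at the final token of the problem statement."""
    inputs = tokenizer(problem_text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[0, -1]   # shape: (hidden_dim,)

# Hypothetical probing dataset: (problem text, 1 if some chosen parameter is
# necessary for the query, else 0). In practice you would generate thousands
# of these pairs from the synthetic problem generator.
probe_data = [
    ("Each classroom has 4 backpacks... How many pencils per classroom?", 1),
    ("Each classroom has 4 backpacks... How many erasers per backpack?", 0),
]

X = torch.stack([hidden_state(text) for text, _ in probe_data]).numpy()
y = [label for _, label in probe_data]

probe = LogisticRegression(max_iter=1000).fit(X, y)
# High probe accuracy on held-out problems would suggest the model has already
# worked out this property "mentally", before generating any solution text.
```

If such a probe can reliably recover “A depends on B” for every pair of variables, not just the ones on the path to the answer, that is exactly the all-pair dependency behavior described above.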