nakamoto_damacy 3 hours ago

Perpetual Motion Machines were a thing at some point, too.

  • YeGoblynQueenne 3 hours ago

    Don't laugh. PMMs work! I built mine ten years ago when I realised I could improve the SOTA by a huge 20%. I've been improving it for the last 10 years and I get an average performance boost of ~0.25 every year. We will have Free Energy in the next 10 years.

    • ojo-rojo an hour ago

      I find your comment interesting, even though I'm not sure if I really get what you're saying. You built a perpetual motion machine? You then made improvements? Can you share details?

      • suprfsat 44 minutes ago

        Good news everyone, you've passed the Turing test.

  • api 3 hours ago

    I refer to the endless self-improving runaway AI as an “information-theoretic perpetual motion machine.”

    This will work in a sense. It will do… something… and learn… something. It will be unrelated to the physical universe in any way. See also: procedural landscape generators, etc.

    • K0balt 2 hours ago

      Might kinda work if you gave it tools to do its research on the open internet, fiverr, mechanical Turk, etc.

      • nakamoto_damacy an hour ago

        Sure, it could, up to a point: in order for it to figure out that it has to use a tool or access the Internet, it will need more intelligence (to know that its answer or understanding is insufficient or incorrect). How do we as humans know that? Someone tells us. Who's going to tell it? Then you end up at Minsky's Society of Mind, but also a distributed perpetual motion machine. Evolution seems to have figured out intuition as some sort of probabilistic mechanism that's been honed for potentially millions of years, if not billions (white blood cells track pathogens without having any neural network, so it's possible). I think I opened a can of worms with these thoughts.

      • agentultra an hour ago

        On its own without any alignment or labelling. Super-intelligence or super-Grok?

      • api an hour ago

        That’s at least some contact with reality, at least by proxy. I’m referring to a brain in a vat somehow learning.

thom 5 hours ago

For values of zero quite far above zero.

  • falcor84 5 hours ago

    What am I missing? From my skimming, there's zero external data beyond what is needed for the Challenger to generate questions.

    • thom 3 hours ago

      An existing trained LLM is an enormous amount of 'data', however it might be encoded. AlphaZero didn't start with Stockfish or a database of games.

      • magicalhippo 3 hours ago

        As I understand it, the point of the article isn't to train an LLM from scratch; it's to teach a non-reasoning model to reason without additional explicit training data.

        • YeGoblynQueenne 3 hours ago

          The abstract does use the term "from scratch":

          >> To overcome this limitation, we introduce R-Zero, a fully autonomous framework that generates its own training data from scratch.

          Giving them the benefit of the doubt, they're just using the term loosely, but the way they use it sure reads like a claim that they found a way to initialise LLMs with zero data. Only the absurdity of that claim protects the reader from the misunderstanding, and that's never a good thing in a research paper.

          • magicalhippo 2 hours ago

            If you include the previous and following sentences, it's clear (at least to me) what they mean:

            > However, existing methods for training such models still rely heavily on vast human-curated tasks and labels, typically via fine-tuning or reinforcement learning, which poses a fundamental bottleneck to advancing AI systems toward capabilities beyond human intelligence.

            > To overcome this limitation, we introduce R-Zero, a fully autonomous framework that generates its own training data from scratch.

            > Starting from a single base LLM, R-Zero initializes two independent models with distinct roles, a Challenger and a Solver.

            Training an LLM is a multi-stage process[1], and they're tackling the stage at the end. That's where you do fine-tuning or reinforcement learning. They're not training an LLM from scratch. They explicitly state that they start from a base LLM, i.e. a pretrained model that hasn't been fine-tuned.

            As I understand it, and as they mention, the later stages have typically required large numbers of high-quality human-curated training samples, even when those are augmented using LLMs, say by generating multiple variations of each human-curated sample.

            Their proposal is to have a generative adversarial setup produce that data without any initial human input, i.e. from scratch.

            [1]: https://snorkel.ai/blog/large-language-model-training-three-...
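
            A toy, runnable sketch of what that kind of self-generated data loop could look like (the "Challenger" and "Solver" here are stand-in functions I made up, arithmetic questions and a noisy solver, not anything from the paper):

                import random
                from collections import Counter

                # Stand-in "Challenger": proposes multiplication questions of a given difficulty.
                def challenge(difficulty):
                    a = random.randint(10 ** difficulty, 10 ** (difficulty + 1))
                    b = random.randint(10 ** difficulty, 10 ** (difficulty + 1))
                    return f"{a} * {b}"

                # Stand-in "Solver": answers correctly with a probability that drops as
                # questions get harder, mimicking a model sampled at temperature > 0.
                def solve(question, difficulty):
                    a, b = map(int, question.split(" * "))
                    p_correct = max(0.1, 0.95 - 0.2 * difficulty)
                    return a * b if random.random() < p_correct else a * b + random.randint(1, 9)

                # One round of self-generated training data: no human labels anywhere.
                dataset = []
                for difficulty in range(1, 5):
                    for _ in range(25):
                        q = challenge(difficulty)
                        answers = [solve(q, difficulty) for _ in range(10)]
                        label, votes = Counter(answers).most_common(1)[0]
                        accuracy = votes / len(answers)
                        # keep only questions the Solver is unsure about; its majority vote is the label
                        if 0.25 <= accuracy <= 0.75:
                            dataset.append((q, label))
                print(f"kept {len(dataset)} (question, pseudo-label) pairs for fine-tuning")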

      • tucnak 3 hours ago

        AlphaZero is often dragged out to ridicule the so-called "self-play LLM training" techniques, although I don't find those arguments terribly convincing. You can think of AlphaZero's games as effectively synthetic data in an adversarial setting; yes, that data is cheap to produce and verify because the rules of chess make every game checkable, so on paper it doesn't require much data. That is not the case for most text, with some notable exceptions in verifiable domains, which is not coincidentally where self-play has been applied most successfully. Thus, you could argue that the pre-existing "trained LLM" is merely functioning as a verifier proxy, analogous to the well-defined chess verifier in AlphaZero.

        • nerpderp82 44 minutes ago

          Thank you for your mature, intelligent answer.

jasonjmcghee 8 hours ago

Conceptually, it's effectively a GAN.

  • frumiousirc 3 hours ago

    My initial thought as well. But what is the "Discriminator" here? What grounds the training toward reality? The "Challenger" and "Solver" adversarial loop alone can only serve to amplify hallucination.

    Ahh, GPT-4o is the arbiter.

    So, basically, this is a way to perform LLM model compression (GPT-4o to qwen3) while maximizing the in-distribution domain size. As such, it seems reasonable and useful.

    However, the reliance on an arbiter LLM makes the claim that this will overcome the lack of training data unreasonable. Once the target LLM is scaled up to match the arbiter's in-distribution domain size, it seems to me it will turn back into a hallucination amplifier.

    • djoldman an hour ago

      See Figure 2.

      The solver/challenger is the GAN discriminator/generator.

      The Challenger is trained to create difficult questions. The Solver is trained to strengthen pathways that correctly solve those questions, like so:

      > To guide the Challenger toward producing challenging yet solvable questions, we first define an uncertainty score. For a generated question x, we query the current Solver... The most frequent response is treated as the pseudo-label ỹ(x), and we compute the Solver’s empirical accuracy... The uncertainty reward is then defined... This function incentivizes questions where the Solver is maximally uncertain (accuracy approaches 50%)

      Identifying the best pseudo-label seems like it would be the limitation of the approach.
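
      A minimal sketch of that reward as I read the quoted description (the 1 - 2|p - 0.5| form is my guess at how "maximally uncertain at ~50%" gets scored; the function name is made up):

          from collections import Counter

          def uncertainty_reward(solver_answers):
              # solver_answers: the Solver's sampled answers to one Challenger question.
              # The majority vote is the pseudo-label; its empirical frequency is p.
              pseudo_label, votes = Counter(solver_answers).most_common(1)[0]
              p = votes / len(solver_answers)
              # Peaks at 1.0 when the majority answer wins only ~50% of the time,
              # drops toward 0.0 as the Solver becomes fully consistent.
              return pseudo_label, 1.0 - 2.0 * abs(p - 0.5)

          # 10 samples, 5 of which agree -> p = 0.5 -> reward 1.0 (maximally uncertain)
          print(uncertainty_reward(["42", "42", "41", "43", "42", "40", "42", "39", "42", "44"]))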

  • magicalhippo 3 hours ago

    For those not in the know, that's Generative Adversarial Networks[1], where two neural networks are trained in a competitive way.

    One network typically generates tasks for the other, and is rewarded if it manages to make the other network fail the task. The other network is rewarded if it successfully completes the task.

    Thus the adversarial network tries to find weaknesses to exploit, and the combined training makes the solving network much stronger. Or at least that's the idea.

    [1]: https://en.wikipedia.org/wiki/Generative_adversarial_network
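
    A minimal toy of that dynamic (nothing to do with the paper, just a 1-D PyTorch example I'm making up to show the two losses):

        import torch
        import torch.nn as nn

        # Toy GAN: the generator learns to imitate samples from N(3, 1),
        # the discriminator learns to tell real samples from generated ones.
        G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
        D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
        bce = nn.BCELoss()

        for step in range(5000):
            real = torch.randn(64, 1) + 3.0      # samples from the "real" distribution
            fake = G(torch.randn(64, 8))         # generated samples

            # Discriminator is rewarded for labelling real as 1 and fake as 0.
            d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator is rewarded for getting its output labelled as real.
            g_loss = bce(D(fake), torch.ones(64, 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()

        print(G(torch.randn(1000, 8)).mean().item())  # drifts toward ~3 as training converges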

  • torginus 3 hours ago

    GANs are a supervised training method, not really self-improving (once they've converged to reproducing the training set).

clbrmbr 2 hours ago

Terrible choice of name. DeepSeek developed a historically important model called “R1-Zero” (the predecessor to R1, trained without any cold-start SFT; it was very strong, but its chain of thought was hard to read because it code-switches into Chinese and has no line breaks).