I linked to the paper in a recent comment [1]. The author has been active in the Higher Order Community Discord channel for quite a while. The Higher Order Company [2] is developing HVM (Higher Order Virtual Machine), a high-performance implementation of interaction nets for both CPUs and GPUs, and started the channel to discuss development and related topics, such as the Bend language for programming the HVM, discussed on HN at [3].
The paper manages to present previous work on Levy's optimal beta reduction in a more streamlined fashion, generalizing duplicators to so-called replicators that avoid the need for separate book-keeping gadgets (like croissants and brackets). Its author also proves himself to be a skilled programmer in bringing his theory to life with this web-based evaluator.
The app contrasts traditional graph-based λ-calculus reduction (which replaces nodes with entire subgraphs) with interaction (which makes only local modifications and hence needs more steps), while showing that semantics is preserved.
[1] https://news.ycombinator.com/item?id=46034355
[2] https://higherorderco.com/
[3] https://news.ycombinator.com/item?id=40390287
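To make concrete what "replaces nodes with entire subgraphs" means, here is a minimal TypeScript sketch of my own (not the author's code; it assumes all bound variable names are distinct, so no capture-avoiding renaming is needed). A single beta step copies the whole argument subterm into every occurrence of the bound variable, which is exactly the kind of bulk substitution that an interaction-net evaluator decomposes into many small local steps:

    type Term =
      | { tag: "var"; name: string }
      | { tag: "lam"; param: string; body: Term }
      | { tag: "app"; fn: Term; arg: Term };

    // Naive substitution: assumes all bound names are distinct, so no
    // capture-avoiding renaming is needed (a simplifying assumption).
    function subst(t: Term, name: string, value: Term): Term {
      switch (t.tag) {
        case "var":
          return t.name === name ? value : t;
        case "lam":
          return t.param === name
            ? t
            : { tag: "lam", param: t.param, body: subst(t.body, name, value) };
        case "app":
          return {
            tag: "app",
            fn: subst(t.fn, name, value),
            arg: subst(t.arg, name, value),
          };
      }
    }

    // One leftmost-outermost beta step: (\x. body) arg  ~>  body[x := arg].
    // The whole `arg` subterm gets copied into every occurrence of `x`.
    function betaStep(t: Term): Term | null {
      if (t.tag === "app" && t.fn.tag === "lam") {
        return subst(t.fn.body, t.fn.param, t.arg);
      }
      if (t.tag === "app") {
        const fn = betaStep(t.fn);
        if (fn !== null) return { tag: "app", fn, arg: t.arg };
        const arg = betaStep(t.arg);
        if (arg !== null) return { tag: "app", fn: t.fn, arg };
        return null;
      }
      if (t.tag === "lam") {
        const body = betaStep(t.body);
        return body === null ? null : { tag: "lam", param: t.param, body };
      }
      return null;
    }

With a term like (\x. x x) BIG, this step duplicates BIG wholesale; sharing that duplicated work instead of redoing it is what optimal reduction is about.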
Reminds me of https://github.com/HigherOrderCO/HVM
I see Salvadori has a fork, so they are obviously aware of it. Unsure whether they're proposing the exact same thing without reference or citation…
This is quite different. Salvadori's work aims for optimal reduction of the full lambda calculus (which requires something called "bookkeeping"/"oracle"), while HOC works on optimal/parallel reduction of a certain subset of the lambda calculus.
Both approaches have been researched for a long time now; HOC's subset is typically referred to as the "abstract algorithm". For example, a version of the lambda calculus in which any variable can be used at most once (the "affine lambda calculus") can be reduced optimally with interaction nets without requiring any bookkeeping.
The novel thing about Salvadori's work is that it develops a new (and better explained) bookkeeping mechanism.
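To make the affine fragment mentioned above concrete, here is a small sketch of my own (not from the paper or from HVM) that checks whether a term uses each bound variable at most once; terms in this fragment can be reduced optimally on interaction nets with no bookkeeping at all:

    // Same Term shape as in the earlier sketch.
    type Term =
      | { tag: "var"; name: string }
      | { tag: "lam"; param: string; body: Term }
      | { tag: "app"; fn: Term; arg: Term };

    // Count uses of a variable name, ignoring shadowed binders.
    function countUses(t: Term, name: string): number {
      switch (t.tag) {
        case "var":
          return t.name === name ? 1 : 0;
        case "lam":
          return t.param === name ? 0 : countUses(t.body, name);
        case "app":
          return countUses(t.fn, name) + countUses(t.arg, name);
      }
    }

    // Affine: every lambda uses its parameter at most once.
    function isAffine(t: Term): boolean {
      switch (t.tag) {
        case "var":
          return true;
        case "lam":
          return countUses(t.body, t.param) <= 1 && isAffine(t.body);
        case "app":
          return isAffine(t.fn) && isAffine(t.arg);
      }
    }

For example, \x. x is affine while \x. x x is not; the latter is where duplication, and hence bookkeeping, enters the picture.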
What the hell is this?
The linked paper (https://arxiv.org/pdf/2505.20314) claims the squiggles it introduces are a model for Levy-optimal parallel reduction of lambda terms.
But the author has no affiliation, it's very weird that they're calling this "lambda-reduction", and it heavily smells of AI slop?
I hope I'm wrong but it doesn't look right. Can anyone with expertise in this field chime in?
HN Guidelines: "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
I'm not being "generically" negative, I'm being very specifically negative.
We have a paper from someone not working in the field, with no affiliation, with an abstract that claims to "solve the longstanding enigma with groundbreaking clarity", a sentence never before uttered by a human being in flesh and blood, and that somehow needs 4 (four) citations to justify that lambda calculus is Turing-complete, a fact well known to every undergraduate student.
I'm sorry if this gives reviewer #2 vibes, but this doesn't look right to me and I'm asking if someone with actual expertise in the field can clarify what's happening.
The AI slop statement is harsh. The website looks nice.
The author, "Daniel Augusto Rizzi Salvadori", and the GitHub user https://github.com/danaugrs align. Couldn't comment on the actual content, though.
By affiliation they mean an institute/company funding their research/work. It's quite rare, if it ever happens, for someone to find an innovative algorithm, let alone write a technical paper, as a hobbyist.
The interactive lambda-calculus interpreter looks like it does the right thing, not that I've tried to push it too hard.
Can't comment on the delta-nets. If you're looking for a real person who's been plugging away at parallel & optimal reduction of lambda terms, this is where to look: https://github.com/VictorTaelin
I don't think "lambda-reduction" is a red flag. The "real" term would be "beta-reduction" (but that's the incumbent algorithm which TFA claims to replace or improve on - so why not give it a new name?)
But if I were to go sniffing for red flags:
From the first commit:
lambdacalc.ts: // The original lambda calculus introduced by Church was the 'relevant' lambda calculus which doesn't allow for weakening/erasure. This is why I add the '+' below to indicate that the lambda calculus started at 1936 but was extended afterwards.
What?
util.ts: Why is this full of Gaussian elimination of matrices? The paper doesn't mention it.
Weak vs. "strong" lambda calculus maybe? Typed vs untyped?