I saw this comment a little bit back and I don’t think the OP expanded on it, but this looks like a fantastic idea to me:
sam0x17 20 days ago:
Didn't want to bury the lede, but I've done a bunch of work with this myself. It goes fine as long as you give it both the textual representation and the ability to walk along the AST. You give it the raw source code, and then also give it the ability to ask a language server to move a cursor that walks along the AST, and every time it makes a change you update the cursor location accordingly. You basically have a cursor in the text and a cursor in the AST, and you keep them in sync so the LLM can't mess it up. If I ever have time I'll release something, but right now I'm just experimenting locally with it for my Rust stuff.

On the topic of LLMs understanding ASTs, they are also quite good at this. I've done a bunch of applications where you tell an LLM a novel grammar it's never seen before _in the system prompt_, and that plus a few translation examples is usually all it takes for it to learn fairly complex grammars. Combine that with a feedback loop between the LLM and a compiler for the grammar, where you don't let it produce invalid sentences (when it does, you just feed it back the compiler error), and you get a pretty robust system that can translate user input into valid sentences in an arbitrary grammar.
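The dual-cursor idea can be sketched with toy types. Everything here (`Span`, `SyncedCursor`) is hypothetical, not the commenter's unreleased code; the point is only that one edit event updates the text offset and the AST position together.

```rust
// Hypothetical sketch of the dual-cursor idea: keep a byte cursor in the
// text and a node cursor in a flattened AST, and shift both on every edit.

#[derive(Debug, Clone, PartialEq)]
struct Span {
    start: usize, // byte offset where this AST node begins
    end: usize,   // byte offset one past where it ends
}

struct SyncedCursor {
    text_pos: usize,  // cursor in the raw source text
    node: usize,      // index of the AST node the cursor sits inside
    spans: Vec<Span>, // flattened AST node spans, sorted by start
}

impl SyncedCursor {
    /// After replacing `old_len` bytes at `at` with `new_len` bytes,
    /// shift every affected span and the text cursor by the same delta,
    /// then re-derive the AST cursor so both views stay in sync.
    fn apply_edit(&mut self, at: usize, old_len: usize, new_len: usize) {
        let delta = new_len as isize - old_len as isize;
        let edit_end = at + old_len;
        for s in &mut self.spans {
            if s.start >= edit_end {
                s.start = (s.start as isize + delta) as usize;
            }
            if s.end >= edit_end {
                s.end = (s.end as isize + delta) as usize;
            }
        }
        if self.text_pos >= edit_end {
            self.text_pos = (self.text_pos as isize + delta) as usize;
        }
        self.node = self
            .spans
            .iter()
            .position(|s| s.start <= self.text_pos && self.text_pos < s.end)
            .unwrap_or(0);
    }
}
```

A real implementation would ask the language server or parser for the updated tree instead of shifting spans arithmetically, but the synchronization contract is the same.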
https://news.ycombinator.com/item?id=44941999
One thing to take care with in cases like this, it probably needs to handle code with syntax errors. It's not uncommon for developers to work with code that doesn't parse (e.g. while you're typing, to resolve merge conflicts, etc).
In general, a drum I beat regularly is that during development the code spends most of its time incorrect in one way or another. Syntax errors, doesn't type check, missing function implementations, still working out the types and their relationships, etc. Any developer tooling that only works on valid code immediately loses a lot of its value.
Isn't that the benefit of treesitter? I was under the impression that it's more accepting of these types of errors, at least to a degree where you can get enough info to fix it.
> thread 'main' (17953) panicked at ck-cli/src/main.rs:305:41: byte index 100 is not a char boundary
I seem to have gotten 'lucky' and it split an emoji just right.
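For context, that panic is what you get when a `&str` is sliced at a byte index that lands inside a multi-byte character (such as an emoji). A common fix, sketched here as a hand-rolled version of the still-unstable `str::floor_char_boundary`, is to round the index down to a char boundary before slicing:

```rust
/// Largest index <= `max_len` that falls on a UTF-8 char boundary of `s`.
fn floor_char_boundary(s: &str, max_len: usize) -> usize {
    if max_len >= s.len() {
        return s.len();
    }
    let mut i = max_len;
    while !s.is_char_boundary(i) {
        i -= 1; // a boundary is at most 3 bytes back in valid UTF-8
    }
    i
}

/// Truncate a snippet without ever slicing through a multi-byte char.
fn truncate_snippet(s: &str, max_len: usize) -> &str {
    &s[..floor_char_boundary(s, max_len)]
}
```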
---
For anyone curious: this is great for large, disjointed, and/or poorly documented code bases. If you kept yours tight, with files smaller than ~600 lines, it is almost always better to nudge LLMs into reading whole files.
Nice catch - should be fixed in the latest version.
There's also https://github.com/bartolli/codanna, that's similarly new. I'll have to try that again, and this one.
I've benchmarked the code search MCPs extensively, and agents with LSP-aware MCPs outperform agents using raw indexed stores quite handily. Serena, as janky as it is, is a better enabler than Codanna.
This generalizes to a whole new category of tools: UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use them, but LLMs will put in the work to use them.
> UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use
Really? My thinking is more that human devs are way too likely to sink time into powerful but complex tools that may end up being a yak shave with minimal/no benefit in the end. "too lazy to use" doesn't seem like a common problem from what I've seen.
Not that the speed of an agent being able to experiment with this kind of thing isn't a benefit... but not how I would have thought to pose it.
I actually have a WIP library for this. The indexing server isn't where I want it just yet, but I have an entire agent toolkit that does this stuff, and the indexing server is quite advanced: self-tuning, RAPTOR/LSP integration, solving for an optimal result set with a knapsack formulation, etc.
https://github.com/sibyllinesoft/grimoire
I have to know, what is the Lens SPI? The link in your readme is broken, and Kagi results for this cannot possibly be right.
Lens is basically a Rust, local-first, mmapped, file-based search store. It combines RAPTOR with LSP, semantic vectors, and a dual dense/sparse encoding, and can learn a function over those to tune the weights of the relevance sources adaptively per query using your data. It also uses linear programming to select an "efficient" set of results that minimizes mutual information between result atoms; regular RAG/rerank pipelines just dump the top K, but those often have a significant amount of overlap, so you bloat context for no benefit.
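The comment describes a linear-programming formulation; a much simpler greedy, MMR-style sketch (an illustration of the overlap-penalty idea, not Lens's actual algorithm) looks like this:

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Greedy diversified selection: at each step, pick the result that best
/// trades relevance against overlap with what's already been selected.
fn select_diverse(relevance: &[f32], embs: &[Vec<f32>], k: usize, lambda: f32) -> Vec<usize> {
    let mut picked: Vec<usize> = Vec::new();
    while picked.len() < k.min(relevance.len()) {
        let best = (0..relevance.len())
            .filter(|i| !picked.contains(i))
            .map(|i| {
                // penalty: worst-case overlap with anything already picked
                let overlap = picked
                    .iter()
                    .map(|&j| cosine(&embs[i], &embs[j]))
                    .fold(0.0f32, f32::max);
                (i, lambda * relevance[i] - (1.0 - lambda) * overlap)
            })
            .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
        match best {
            Some((i, _)) => picked.push(i),
            None => break,
        }
    }
    picked
}
```

With `lambda = 1.0` this degenerates to plain top-k; lower values increasingly prefer results whose embeddings don't resemble what's already in the set, which is exactly the context-bloat problem the comment is pointing at.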
Well, there's also mine https://github.com/VectorOps/know with some details what it does and how: https://vectorops.dev/blog/post-1/
Cool. Some AI fluff can be detected in the README.
For example under the "Why CK?" section, "For teams" is of no substance compared to "For developers"
This is cool, but I don't understand why it tries to re-implement (a subset of) grep. Not only that, but the grep-like behaviour is the default and I need to opt-in to the semantic search using the --sem flag. If I want grep I can use grep/ripgrep.
Fair comment - the initial thinking was to have both, and in fact a hybrid mode too, which fuses results so you can get chunks that match both semantically and on keyword search in one result set. Later we could add a reranker too.
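One standard way to fuse a keyword ranking and a semantic ranking into a single result set, when their scores aren't comparable, is reciprocal rank fusion. This is a generic sketch, not necessarily how ck's hybrid mode fuses results:

```rust
/// Reciprocal rank fusion: merge several rankings (each a list of doc
/// ids, best first) using only ranks, never raw scores. `k` damps the
/// contribution of lower-ranked hits (60.0 is the commonly used default).
fn rrf(rankings: &[Vec<usize>], k: f32, n_docs: usize) -> Vec<usize> {
    let mut score = vec![0.0f32; n_docs];
    for ranking in rankings {
        for (rank, &doc) in ranking.iter().enumerate() {
            score[doc] += 1.0 / (k + rank as f32 + 1.0);
        }
    }
    // order doc ids by fused score, best first
    let mut order: Vec<usize> = (0..n_docs).collect();
    order.sort_by(|&a, &b| score[b].partial_cmp(&score[a]).unwrap());
    order
}
```

Because it only looks at ranks, it doesn't matter that grep "scores" (match counts, positions) and embedding distances live on completely different scales.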
Or another way of thinking. How much is the penalty we are talking about for semantic vs conventional grep?
My thinking is that for a large codebase, sorting embedding matches may be more efficient than reading all files, and hence there is no point in putting semantic search behind a --semantic flag.
The reason to overload grep is that the agents already understand most of the semantics and are primed to use it, so it's a small lift to get them to call a modified grep with some minor additional semantics.
I tried in my relatively small project.
All I got was a spinning M2 Mac fan after a minute, and I gave up.

Interesting - can I ask you to try a ck --index . ?
It'd be nice if it respected gitignore. It's turning my M4 MBP into a space heater too.
coming up next.
I saw that you added it, thanks! I'll give this a shot for a few days.
Fyi, I just grabbed the same lib that ripgrep uses. That bit is extracted iirc, and was quite nice and simple to use.
The biggest improvement to CC would be it using the TypeScript LSP to immediately get type feedback and inspect types.
I added the VSCode plugin but it didn't seem to help; likewise, searching around yesterday, I surprisingly didn't see anything.
Man, that's a great thing! Really waiting to see Ruby and Elixir. Fingers crossed for you!
Added Ruby, but Elixir isn't very well supported by tree-sitter.
This looks very useful.
Looks like you have to build an index. When should it be rebuilt? Any support for automatic rebuilds?
Yes - files are hashed and checked whenever you search, so the index should always remain up to date. Only changed files are reindexed. You can also inspect the metadata (chunking semantics, embeddings). It's all in the .ck sidecar.
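A minimal sketch of that check-on-search freshness scheme. The hashing (std's `DefaultHasher`) and the in-memory shapes are stand-ins, since the thread doesn't specify ck's actual sidecar format:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Content hash for one file (stand-in: ck's real hash isn't specified).
fn content_hash(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

/// Compare current file contents against the hashes recorded at index
/// time, returning only the paths whose contents changed (or are new).
fn stale_files<'a>(
    files: &'a [(&'a str, &'a [u8])],  // (path, current contents)
    index: &HashMap<String, u64>,      // path -> hash stored in the sidecar
) -> Vec<&'a str> {
    files
        .iter()
        .filter(|(path, bytes)| index.get(*path) != Some(&content_hash(bytes)))
        .map(|(path, _)| *path)
        .collect()
}
```

At search time you'd reindex only what `stale_files` returns, then refresh the stored hashes, which is why only changed files pay the embedding cost.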
What model are you using to create the embeddings?
BAAI/bge-small-en-v1.5, but I'm considering switching this to Google's latest Gemma embedding model - it's fairly switchable.
this is so cool, is there any other tool which is more mature?
Roo has codebase indexing that it'll instruct the agent to use if enabled.
It uses whatever arbitrary embedding model you want to point it at and backs it with a qdrant vector db. Roo's documents point you toward free cloud services for this, but I found those to be dreadfully slow.
Fortunately, it takes about 20 minutes to spin up a qdrant docker container and install ollama locally. I've found the nomic text embed model is fast enough for the task even running on CPU. You'll have an initial spin up as it embeds existing codebase data then it's basically real-time as changes are made.
FWIW, I've found that the indexing is worth the effort to set up. The models are generally better about finding what they need without completely blowing up their context windows when it's available.
I recently saw SemTools [0], but have not tried it out yet myself.
[0] https://github.com/run-llama/semtools
I don't see how these are apples-to-apples given its "send me all your content" approach <https://github.com/run-llama/semtools#:~:text=get%20your%20a...>
versus https://github.com/BeaconBay/ck#:~:text=yes%2C%20completely%...
LlamaIndex is batting a thousand since their inception. Can't go wrong with this tool, either.
Agreed - Logan is a legend, this is similar but simpler - no dependency on external models (might add it)
Thanks!
Seems like CLI tools are all the rage these days
We really are living in the golden age of the terminal. I thought this would take a chunk out of the TypeScript/Node market share among young coders, but I'm starting to see more and more of these animals building TUIs using nothing but npm packages.
Have they no shame?
Last week I built my own CLI coding agent tool using just nodejs and zero dependencies! It is a lot of fun to build, really, I think everyone should try it out
help make it mature :D Add any issues
[flagged]
At this point, we aren't even saying it's written in Rust anymore, we just mention it in the title whenever possible.
I did look into the core features and I gotta say, that looked quite cool. It's like Google search, but for the codebase. What does it take to support other languages?
It supports most languages but needs a bit of tree-sitter setup to do semantic chunking. Let me know what languages you’d like added
Java would be useful as well for larger backend codebases.
Thanks for your quick response, most large codebases I've been fiddling on is Ruby!
I'd love to see elixir support.
Clojure would be awesome
[dead]
[stub for offtopicness]
What does this have to do with Claude Code?
Mainly I wrote it because I noticed Claude's "by design" use of grep meant it couldn't search the code base for things it didn't already know the name of, or find "the auth section". But equally, it's well documented that e.g. Cursor's old RAG technique wasn't that great.
My idea was to make a tool that just does a quick and simple embedding on each file, and uses that to provide a semantic alternative that is much closer to grep in nature, but allows an AI tool like Claude Code to run it from the command line - with some parameters.
Arguably could be MCP, but in my experience setting up a server for a basic tool like this is a whole lot of hassle.
I'm fairly confident that this is a useful tool for CC as it started using it while I was coding it, and even when buggy, was more than willing to work around the issues for the benefit of having semantic search!
CC is so good with grep that I'm half expecting to clutter its context with bad results from semantic search. But also half optimistic at this just improving its search.
If you're getting useful results from hybrid mode, that's very interesting to me, since the well-constructed greps that Claude executes don't really look like they'd work great for semantic search to me! But intuition is often wrong on this stuff.
I am very curious about your thoughts on speed. I'd rather any tools Claude invokes be as fast as possible so it can get feedback immediately and execute again.
If you're concerned about context, you can trivially make a hook that will prune your conversation history of older semantic search results.
i do a lot of context management with hooks for all sorts of tool calls.
That sounds great - do you have any examples?
For example I have a Stop hook that scans my messages to see which files we've worked on. It'll check to see if the changes to those files have been committed and, if not, it will prevent Claude from stopping and send it a message to commit the specific files in a specific style that includes the id of the current session. The same script also cleans up all previous instances of the same message in the conversation, saving like 5k tokens per session.
I have a lot of PreToolUse hooks that inject guideline messages whenever certain tools are called or bash commands run. My hooks also prune older versions of those out of context. All of the transcripts are in ~/.claude/projects/ in jsonl format and are hot-editable.
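The pruning those hooks do can be sketched as a single pass over the jsonl transcript that keeps only the newest copy of a given injected message. The `GUIDE` marker string here is hypothetical; a real hook would tag its messages with its own identifier:

```rust
/// Keep only the last occurrence of any transcript line containing
/// `marker`, dropping the older duplicates that just bloat context.
fn prune_repeated(lines: &[&str], marker: &str) -> Vec<String> {
    // index of the newest line carrying the marker, if any
    let last = lines.iter().rposition(|l| l.contains(marker));
    lines
        .iter()
        .enumerate()
        .filter(|(i, l)| !l.contains(marker) || Some(*i) == last)
        .map(|(_, l)| l.to_string())
        .collect()
}
```

A real hook would parse each jsonl record properly rather than substring-match, but the shape of the operation (scan, find newest, drop older copies, rewrite the file) is the same.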
Starred the repo.
Went to the github repo and was expecting a section about Claude Code and best practices on how to set this up with Claude Code. Very curious to hear how that might work, especially with what you've found compared to Claude Code's love of grep.
> Went to the github repo and was expecting a section about Claude Code and best practices on how to set this up with Claude Code. Very curious to hear how that might work, especially with what you've found compared to Claude Code's love of grep.
A write up on this would be great!
A proper title could be "Semantic grep with completely local embeddings"
Put the title aside, the tool, if it works as described, is pretty insane
Ok, we'll use that above. Thanks!
(Submitted title was "Semantic grep for Claude Code (RUST) (local embeddings)")
Isn't Claude Code's selling point that it doesn't use embeddings?
Why would "not using embeddings" be a selling point? Some of the most effective IR systems use embeddings (bi-encoders, cross-encoders)
I don’t think that “Claude Code” is relevant to this semantic grep tool.
Bear in mind that Claude Code by default uses grep - if you watch, you'll see that when it's looking for something it doesn't know the name of, it flails around with different patterns. Try this tool: tell CC to take a look using ck --help and take it for a spin.
CC in my case likes it so much, it started using it to debug the repo rather than grep and suggesting its own additions
Note that it’s grep AND semantic - so Claude can start with a grep strategy and if it finds nothing can switch to semantic, and since it’s local and fast, it keeps in sync easily enough
How do you tell CC to use it? Just as an entry in Claude.md?
To start with just tell it- but yes Claude.md works too.
“We have a new grep semantic hybrid tool installed called ck - check it out using ck --help and take it for a spin”
Why does it need to say RUST in the headline as if this was a feature, lol
we all know rust CLI tools are better right?
Please don't post misleading titles. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
We've taken the Rust out of the title now.
(Submitted title was "Semantic grep for Claude Code (RUST) (local embeddings)")
This looks interesting and I look forward to trying it but the title here should really just use the description of the repo, or that be adjusted.
Apart from anything else, it appears to be very misleading, as Rust (ironically) is not one of the supported languages according to the documentation.
I clicked on this because it said rust in the title. Very disappointed.
I'll add Rust, Ruby, Elixir, and Clojure next. It says Rust as it's written in Rust - sorry about that!