Claude and keeping context when interacting with an LLM


I’ve been experimenting with Claude lately.

So far for me, it’s the most meaningful competitor to OpenAI’s ChatGPT. The software looks great and has all these neat little UX touches. Every time I use it I come back more impressed.

The company behind it, Anthropic, also seems to make a conscious effort to do AI “well”. Whether that’s just good marketing or genuinely true is hard to evaluate at this point, but it’s nice to see a company put messaging around AGI safety at the core of its values.

Using ChatGPT, my workflow has been to paste Svelte code snippets and ask a question whenever I feel an AI can move my process forward. Usually I have a component and a page that consumes the component.

Lately I’ve been wondering how the AI understands where one file begins and another ends. I usually put manual markers in my prompt, like File1.svelte:, to denote beginnings and endings. ChatGPT always seems to figure out which files are separate, and Claude does as well. I guess they understand that a new file usually begins with a <script> tag and use that to delineate.
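
For reference, my prompts typically look something like this (the file names and contents are made up, just to show the marker format):

```
Card.svelte:
<script>
  export let title;
</script>

<div class="card">
  <h2>{title}</h2>
  <slot />
</div>

+page.svelte:
<script>
  import Card from '$lib/Card.svelte';
</script>

<Card title="Latest posts">
  <p>Post list goes here.</p>
</Card>

Question: how do I make the card collapsible with a button in the header?
```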

In my ChatGPT workflow to create Svelte code, I would often find the following problems:

  • The output would be React-like and I would have to steer the LLM toward the Svelte way of doing things (see the small example after this list)
  • It did not know about general logic I already had elsewhere in the codebase
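
To make the first point concrete, here is the kind of React-flavoured markup I would sometimes get back, next to the plain Svelte I would steer it toward (the snippets are made up, but representative):

```svelte
<script>
  // hypothetical click handler, just so the snippet is self-contained
  function handleClick() {
    console.log('saved');
  }
</script>

<!-- React-flavoured output I'd sometimes get: className and onClick are not Svelte -->
<!-- <button className="primary" onClick={handleClick}>Save</button> -->

<!-- The Svelte way: class and on:click -->
<button class="primary" on:click={handleClick}>Save</button>
```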

For part of my work I am researching all kinds of LLM solutions, including Google Gemini, Claude and running local models (like Mistral 7B).

I found that Claude can deliver quite impressive results.

Claude has three models: Opus, Sonnet and Haiku. Opus is supposed to be the “smartest” one, meant for deep work. Sonnet is the balanced model, sitting somewhere in the middle. Haiku is the fastest. This matters because running a more “intelligent” model requires more computing power and is therefore more expensive.

I think what’s helping a lot of designer/developer/hybrid profiles these days is a new feature in Claude called “Artifacts”.

If you have a Claude account and enable the Artifacts feature in the settings, you will see that when you ask Claude a code-related question, it spawns a window with your code on the right side. If your code-related question concerns multiple files, it will even show those multiple files. As you keep prompting toward an answer, you can view every version of the code as it evolves.

But what if you are a bit further into a project? Wouldn’t it be useful if the AI knew about your codebase? I discovered you can use a package like ai-digest to generate a single markdown file of your codebase. If you give that file to the LLM as context, it can draw on what’s already there when generating new code. A simple example: pretty much every codebase has a Button component. If the LLM knows about that component and its props, it can reuse it in its answers and potentially give a better one.
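
As a rough sketch of how that plays out (the component below is hypothetical, and I’m assuming ai-digest’s default behaviour of collecting everything into a single codebase.md when you run npx ai-digest in the project root):

```svelte
<!-- src/lib/Button.svelte: a hypothetical shared component the digest would pick up -->
<script>
  export let variant = 'primary'; // 'primary' | 'secondary'
  export let disabled = false;
</script>

<button class="btn {variant}" {disabled} on:click>
  <slot />
</button>
```

With that file in its context, a request like “add a save button to this form” can come back reusing <Button variant="primary" on:click={save}>Save</Button> instead of a freshly invented, slightly different button.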

One interesting aspect of Claude’s code generation is that it actively suggests ways to improve your codebase, even touching on accessibility. The suggestions are not always correct, but they are interesting nonetheless.
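
As an example of the kind of suggestion I mean (paraphrased, not a literal quote from Claude):

```svelte
<script>
  // hypothetical close handler for an icon-only dialog button
  function close() {
    console.log('dialog closed');
  }
</script>

<!-- Before: an icon-only button with no accessible name -->
<!-- <button on:click={close}>✕</button> -->

<!-- After the suggestion: aria-label gives screen readers something to announce -->
<button on:click={close} aria-label="Close dialog">✕</button>
```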

I have to experiment more with Claude, but I would encourage everyone who’s interested in using LLMs to answer code questions or move their code forward to give it a try. It’s really, really cool.
