Get Motorhead up and running with LangChain

10 May 2023

LLMs are inherently stateless. If you are building an app like a chatbot that needs to maintain conversation context, you need memory. Motorhead makes it easy to add long-term memory to chatbots.

If you are unfamiliar with the concept of conversational memory, Pinecone has a brilliant explainer on it.

Redis

Motorhead uses Redis as its backend, and the Redis instance needs to have the RediSearch module enabled. The easiest way to get a Redis + RediSearch instance is through Redis Labs. It gives you a Redis URL that you can pass to the Motorhead server.
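If you want to double-check that RediSearch is available before wiring everything up, you can ask Redis for its loaded modules. Here is a minimal sketch using the redis-py client (running MODULE LIST from redis-cli works just as well):

import redis

# Use the same URL you will pass to Motorhead as REDIS_URL.
r = redis.from_url("<redis>")

# MODULE LIST enumerates loaded modules; look for "search" in the reply.
print(r.execute_command("MODULE", "LIST"))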

Motorhead server

Motorhead gives you a couple of options to run the server. The simplest is to run the Docker image, passing OPENAI_API_KEY and REDIS_URL as environment variables.

docker run --name motorhead -p 8080:8080 -e MOTORHEAD_PORT=8080 -e REDIS_URL='<redis>' -e MOTORHEAD_LONG_TERM_MEMORY=true -e OPENAI_API_KEY='sk-<>' ghcr.io/getmetal/motorhead:latest

You can verify that the server is running correctly by hitting the endpoints specified in the Motorhead README.
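For example, assuming the per-session memory endpoint from the README, a quick smoke test from Python looks like this:

import requests

# Reading a fresh session's memory should return an empty message list,
# not an error, if the server and Redis are wired up correctly.
resp = requests.get("http://localhost:8080/sessions/smoke-test/memory")
print(resp.status_code, resp.json())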

LangChain

Once you have the Motorhead server up and running, you can use the MotorheadMemory class in LangChain to store chat messages in it. LangChain's Python docs do a decent job of explaining it.
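A minimal sketch of wiring it up, loosely following the LangChain docs example (the session ID is arbitrary, and the URL assumes the local Docker setup from above):

import asyncio

from langchain.memory import MotorheadMemory

memory = MotorheadMemory(
    url="http://localhost:8080",  # the Motorhead server started above
    session_id="my-session",      # one session per conversation
    memory_key="chat_history",    # the prompt variable the history fills
)
asyncio.run(memory.init())  # init() is async; it pulls existing state from Motorhead

Pass this memory object to an LLMChain, and every turn of the conversation gets persisted to Motorhead under that session ID.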

Here is a basic Flask server that accepts an input message, sends it to OpenAI + Motorhead, and returns the AI's response.
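The following is a rough reconstruction of such a server; the /chat route and the message/answer JSON fields are illustrative choices, not from the original post:

import asyncio

from flask import Flask, jsonify, request
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.memory import MotorheadMemory

app = Flask(__name__)

template = """You are a chatbot having a conversation with a human.

{chat_history}
Human: {human_input}
AI:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"], template=template
)

memory = MotorheadMemory(
    url="http://localhost:8080",
    session_id="my-session",
    memory_key="chat_history",
)
asyncio.run(memory.init())

chain = LLMChain(llm=OpenAI(), prompt=prompt, memory=memory)  # reads OPENAI_API_KEY

@app.route("/chat", methods=["POST"])
def chat():
    message = request.json["message"]
    return jsonify({"answer": chain.run(message)})

if __name__ == "__main__":
    app.run(port=5000)

POST {"message": "Hi, I'm Bob"} to /chat, and the reply, along with the running history, ends up in Motorhead.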

Gotcha

There is a bug in the current MotorheadMemory implementation: it doesn't include the context from Motorhead when generating the prompt. I have reported it in their Discord. As a workaround, you can manually pass the context into the prompt yourself.
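Here is a sketch of that workaround, continuing from the server above: rebuild the template after init() so Motorhead's summarized context is baked in (the "Here's vital context" phrasing follows the LangChain docs example):

# After asyncio.run(memory.init()), memory.context holds Motorhead's summary.
context = ""
if memory.context:
    context = f"\nHere's vital context: {memory.context}"

template = f"""You are a chatbot having a conversation with a human.{context}

{{chat_history}}
Human: {{human_input}}
AI:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"], template=template
)
chain = LLMChain(llm=OpenAI(), prompt=prompt, memory=memory)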

Motorhead (and even LangChain) is at an early stage of development, so you are likely to run into issues while using it; the best place to get them resolved is their Discord. Still, Motorhead does make building conversational chatbots with long-term memory easier, and it will be interesting to see how it evolves from here.