Chatbot Factory: Streamline Python Chatbot Creation with LLMs and Langchain


This guide, based on my personal experience creating Chatbots, helps you build Chatbots that use advanced technologies: large language models, external tools such as Wolfram Alpha and Google search, and short-term and long-term memory.

I created this video to showcase all the code in this tutorial, so feel free to play it as you scroll through the article! 👇


I used all these technologies with simple code and a few easy-to-learn software design patterns. I’ll cover how to create these bots and how to run them on the command line. In later articles, I’ll show how to run them on other platforms, such as Discord and a website.

You can use this project to learn, extend, or develop your own Chatbot. I’ll demo two bots: an entertaining, imaginative StoryBot and a business bot for a chiropractor called the ChiropractorBot.

An Overview 

I wanted to create many different types of bots, so I decided to use the factory design pattern. This design pattern makes it easy to create many objects, in this case Chatbots.

Below is a diagram of a ChatBot Factory:

(1) WebsiteClient, DiscordClient, CommandLineClient: These are the client classes that require objects of a Bot type. They can be different applications or interfaces that interact with the BotFactory.

Here’s an example of the command-line client:

from chatbot_factory import ChatBotFactory
from chatbot_settings import ChatBotSettings
from langchain.chains.conversation.memory import ConversationBufferMemory
from colorama import Fore, init

init(autoreset=True)

chatbot_factory = ChatBotFactory()

# Ask the user which of the registered bots to create
selected_bot = chatbot_factory.select_chatbot(
    "Please select a chatbot from the following options:")

# The factory builds the bot, injecting the LLM, memory, and tools it needs
chatbot = chatbot_factory.create_service(
    ChatBotSettings(llm=chatbot_factory.llms[ChatBotFactory.LLM_CHAT_OPENAI],
                    tools=['serpapi', 'wolfram-alpha']))

print(Fore.GREEN + "Please enter a prompt:")

while True:
    # Get user input from the command line
    user_input = input()
    # If the user types 'exit', end the chatbot session
    if user_input.lower() == 'exit':
    # Get the chatbot's response
    response = chatbot.get_bot_response(user_input)
    # Some bots return audio (gTTS); save it so it can be played back
    if type(response).__name__ == "gTTS":"welcome.mp3")
    elif isinstance(response, str):
        # Print the chatbot's response
        print(Fore.GREEN + response)

(2) BotFactory: This class provides a method to create and return Bot objects. All clients use this same BotFactory to create bots.

import os
from typing import Callable, Dict, List, Optional, Union, Tuple
from chatbot_settings import ChatBotSettings
from bot_conversation_chain import BotConversationChain
from bot_knowledge_base import BotKnowledgeBase
from bot_dalle_imagine import BotDalle
from bot_gtts_audio import BotGtts
from bot_circumference_calculator import BotCirucumferenceTool
from bot_pinecone import BotPineCone
from bot_agent_tools import BotAgentTools
from bot_story_imagine import BotStoryImagine
from langchain import (HuggingFaceHub, Cohere)
from langchain.chat_models import ChatOpenAI
from colorama import Fore, Style, init

class ChatBotFactory:
    # Registry of every concrete bot the factory can build
    services = {
        BotConversationChain.__name__: BotConversationChain,
        BotPineCone.__name__: BotPineCone,
        BotAgentTools.__name__: BotAgentTools,
        BotKnowledgeBase.__name__: BotKnowledgeBase,
        BotDalle.__name__: BotDalle,
        BotGtts.__name__: BotGtts,
        BotCirucumferenceTool.__name__: BotCirucumferenceTool,
        BotStoryImagine.__name__: BotStoryImagine
    }

    LLM_CHAT_OPENAI = "ChatOpenAI"
    LLM_COHERE = "Cohere"
    LLM_HUGGINGFACE_HUB = "HuggingFaceHub"

    llms = {
        LLM_CHAT_OPENAI: ChatOpenAI(),
        LLM_COHERE: Cohere(model='command-xlarge'),
        LLM_HUGGINGFACE_HUB: HuggingFaceHub(
            repo_id="google/flan-t5-xl",  # example hosted model
            model_kwargs={"temperature": 0, "max_length": 200})
    }

    @classmethod
    def create_service(cls, service_type, settings):
        if service_type not in
            raise ValueError(f'Unknown service type {service_type}')
        return[service_type](settings)

    @classmethod
    def select_chatbot(cls, chatbots, selection_text):
        print(Fore.GREEN + selection_text)
        # Print the options to the user
        for i, bot in enumerate(chatbots, start=1):
            print(Fore.GREEN + f"{i}. {bot}")

        # Get user input
        selection = input(Fore.GREEN + "Enter the number of your selection: ")

        selected_bot = list(chatbots)[int(selection) - 1]

        print(Fore.GREEN + f"You selected {selected_bot}")

        return selected_bot

(3) BotAbstract: This is the abstract class that defines the interface for Bots. It tells you how to structure a bot: the methods are abstract, and every bot must implement a constructor and a get_bot_response method.

from abc import ABC, abstractmethod
from typing import Any

class BotAbstract(ABC):
    @abstractmethod
    def __init__(self, settings: Any):

    def get_bot_response(self, text: str) -> str:

(4) ConcreteBot1, ConcreteBot2: These are the subclasses of the BotAbstract class that the BotFactory can produce. Each ConcreteBot class implements the BotAbstract interface in its own way.

Here’s an example of a concrete bot called BotConversationChain. This bot uses ConversationBufferMemory and will remember what you’ve said to the bot until it hits its memory limit. This bot uses langchain.

Here’s an excellent introduction to Langchain

from typing import Callable, Dict, List, Optional, Union
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from chatbot_settings import ChatBotSettings
from bot_abstract_class import BotAbstract

class BotConversationChain(BotAbstract):
    def __init__(self, chatBotSettings: ChatBotSettings):
        self.chatbotSettings = chatBotSettings

        self.llm = chatBotSettings.llm
        self.memory = chatBotSettings.memory
        # Wire the LLM and the memory together into a conversation chain
        self.conversation_buf: ConversationChain = ConversationChain(
            llm=self.llm, memory=self.memory)

    def get_bot_response(self, text: str):
        reply = self.conversation_buf(text)
        return reply['response']

Each client, instead of calling the constructors of the Bot classes directly, calls the BotFactory‘s creation method to get an instance of a Bot.

The clients are only aware of the BotAbstract interface and do not need to know about the ConcreteBot classes. This decouples the clients from the ConcreteBot classes and makes the client code easier to maintain and extend. This allows for the easy creation of many bots. 

In the case of the command line client, this makes it easy to create chatbots that share a common interface: a constructor and get_bot_response. There is also a ChatBotSettings class that holds the API keys and settings passed in each time a bot is created.

Here’s how you can run the factory code. 

Getting Started

Requirements for Getting Started:

  • Python 3.7 or greater
  • Git
  • OpenAI API key

Running the Chatbot Locally

To run the chatbot locally, you need to clone the chatbot repository, set up a virtual environment, install the required dependencies, and enter the API key. Here are the steps:

# Clone the chatbot repository
git clone

# Change directory to the cloned repository
cd ExtensibleChatBot

# Create a virtual environment
python -m venv .venv

# Activate the virtual environment
# On Windows, use
# .venv\Scripts\activate

# On Unix/Linux, use
# source .venv/bin/activate

# Install the required dependencies
pip install -r requirements.txt

# Rename the environ settings file which contains the keys
cp .\

# Put your key in os.environ["OPENAI_API_KEY"]

# Run chatbot_client and test it with a few queries

# In the command line:
# Select 1. BotConversationChain
# Experiment talking with this bot.
# This bot has short-term memory

The Story Bot: The StoryBot takes in text from an input, generates a story, and puts that story in a PDF along with a representative DALL-E image of the prompt and an audio narration.

I made several stories for my sons so they get a full story in PDF form with audio to go along with it.

from abc import ABC, abstractmethod
from typing import Any
from gtts import gTTS
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas
from reportlab.platypus import SimpleDocTemplate, Paragraph, Image
from reportlab.lib.styles import getSampleStyleSheet
from langchain.chains import LLMChain, ConversationChain
import openai
from langchain.chat_models import ChatOpenAI
from chatbot_settings import ChatBotSettings
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)
from bot_abstract_class import BotAbstract

class BotStoryImagine(BotAbstract):
    def __init__(self, settings: Any):
        self.llm = settings.llm

    def get_bot_response(self, text: str) -> str:
        language = 'en'
        test_llm = ChatOpenAI()

        # Prime the model with a storytelling persona, then pass the user's prompt
        messages = [
            SystemMessage(content="You are an adventure mystery story telling bot for young teens."),
            HumanMessage(content="Hi AI, what are your main themes?"),
            AIMessage(content="My main themes are doing good, solving puzzles, and learning about science in the world."),
            HumanMessage(content=text)

        reply = test_llm(messages)

        rs = reply.content

        # Narrate the story to an MP3 file
        myobj = gTTS(text=rs, lang=language, slow=False)"story.mp3")

        # Generate a representative image for the story with DALL-E
        response = openai.Image.create(prompt=text, n=1, size="512x512")
        image_path = response["data"][0]["url"]
        self.create_pdf("Story Bot", "story.pdf", image_path, rs)
        return myobj
    def create_pdf(self, doc_title, doc_filename, image_path, doc_text):
        document = SimpleDocTemplate(doc_filename, pagesize=letter)
        # Container for the 'Flowable' objects
        elements = []
        styles = getSampleStyleSheet()

        # Add title
        title = Paragraph(doc_title, styles['Title'])
        # Add image
        img = Image(image_path, 200, 200)  # 200, 200 are the width and height in points

        text = Paragraph(doc_text, styles['BodyText'])

        elements.extend([title, img, text])

        # Generate PDF

        return doc_filename

The Chiropractor Bots

The purpose of the Chiropractor bots is to gather information from the patient first and then have a separate bot that helps the Chiropractor evaluate the data collected from the patients. A patient bot takes in input from patients and stores it in short-term memory in the form of ConversationBufferMemory and long-term memory in a JSON file. 

There is also a separate bot that presents that input to the Chiropractor so they can evaluate each patient’s status.

The bot uses JSON as a store of its questions and patient responses. The bot gets basic information and stores it in JSON as memory. It also saves the responses in a patient_responses.json file.

For enhanced conversation, a Chiropractor could load professionally curated knowledge from a PDF into the bot’s long-term memory. While this bot does not do that, it is a possible way to enhance the conversation; instead, this bot relies on the data OpenAI trained its models on.

Here’s how to load the memory store in a Pinecone vector database.  

The Patient view 

Run the Chiropractor patient view first. Enter data for a couple of patients.

python .\

import json
import os
import datetime
from abc import ABC, abstractmethod
from typing import Any
from gtts import gTTS
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas
from reportlab.platypus import SimpleDocTemplate, Paragraph, Image
from reportlab.lib.styles import getSampleStyleSheet
from langchain.chains import LLMChain, ConversationChain
import openai
from langchain.chat_models import ChatOpenAI
from chatbot_settings import ChatBotSettings
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)
from langchain.prompts.prompt import PromptTemplate
from langchain.chains.conversation.memory import ConversationBufferMemory
from pandasai import PandasAI

import pandas as pd

llm = ChatOpenAI()

def load_patient_questions():
    # Define the file path
    file_path = 'patient_questions.json'

    # Check if the file exists
    if os.path.exists(file_path):
        # Load questions from the existing file
        with open(file_path, 'r') as f:
            loaded_data = json.load(f)
            loaded_questions = loaded_data["questions"]
        # First run: define the initial question list
        questions = [
            "How are you feeling today?",
            "Have you experienced any symptoms?",
            "Are you taking any medications?",
            "What is your level of pain? (1-5)",
            "Do you have any other symptoms?"
            # Add more questions here...

        # Create the JSON object
        data = {
            "questions": questions

        # Save the initial questions to the file
        with open(file_path, 'w') as f:
            json.dump(data, f)

        # Assign the initial questions to loaded_questions
        loaded_questions = questions

    return loaded_questions

# Call the function to load patient questions
patient_questions = load_patient_questions()
# Print the loaded questions
print(patient_questions)

patients = [{"name": "John Doe"}, {"name": "Jane Doe"}, {"name":"Steve Smith"}]  # List of patients

responses = []

for patient in patients:
    patient_responses = {
        "name": patient["name"],
        "date": str(,
        "responses": {},
        "questions": []
    }
    questions = patient_questions

    print('Hi ' + patient["name"])
    # Predefined questions
    for question in questions:
        answer = input(question + " ")
        patient_responses["responses"][question] = answer

    question_response_pairs = [f"{question}: {response}" for question, response in patient_responses['responses'].items()]
    questions_and_responses = ' '.join(question_response_pairs)

    template = """The following is a conversation with an AI Chiropractor. The AI Chiropractor has an excellent bedside manner and provides specific details from its context.
    If the AI does not know the answer to a question, it truthfully says it does not know. The AI does not demand that someone asks a question.
    Current conversation:
    {history}
    Patient: {input} """ + "Patient: " + questions_and_responses

    prompt = PromptTemplate(
            input_variables=["history", "input"], template=template)
    conversation_with_chain = ConversationChain(
        llm=llm, memory=ConversationBufferMemory(), prompt=prompt)

    # Open-ended questions
    while True:
        question = input(patient["name"] + ", What are your questions? (type 'done' when you are finished) ")
        if question.lower() == 'done':

        answer = conversation_with_chain(question)

        patient_responses["responses"][question] = answer['response']


# Load existing responses from the file, if it exists
existing_responses = []
if os.path.exists('patient_responses.json'):
    with open('patient_responses.json') as f:
        existing_responses = json.load(f)

# Append the new responses to the existing ones

# Save the updated responses back to the file in JSON format
with open('patient_responses.json', 'w') as f:
    json.dump(existing_responses, f)

This is the bot for the Chiropractor. It uses PandasAI so the chiropractor can query the collected patient data as a DataFrame. Run python .\ to see it run.

import json
from chatbot_settings import ChatBotSettings
from pandasai import PandasAI

import pandas as pd

chatbotSettings = ChatBotSettings()
with open('patient_responses.json', 'r') as f:
    all_responses = json.load(f)

# Open the JSON file
with open('patient_questions.json') as file:
    data = json.load(file)

# Get the list of questions
questions = data['questions']

# Flatten the dictionary inside the 'responses' key and match questions to knowledge base
flattened_data = []
for item in all_responses:
    flattened_dict = {}
    flattened_dict['name'] = item['name']
    flattened_dict['date'] = item['date']
    responses = item['responses']
    for question, response in responses.items():
        if question in questions:
            # If a match is found, add the match as a new entry in the dictionary
            flattened_dict[question] = response

print("Hello Chiropractor")
# Create DataFrame
df = pd.DataFrame(flattened_data)
df.to_csv('patient_responses.csv', index=False)

from pandasai.llm.openai import OpenAI
llm = OpenAI()

# Create an empty list to store questions and responses
questions_and_responses = []

# PandasAI wraps the LLM so the chiropractor can query the DataFrame in plain English
pandas_ai = PandasAI(llm, conversational=False)

while True:
    user_input = input("What data do you want from your patients? ")
    response =, prompt=user_input)

    # Store the question and response in a dictionary
    question_response = {
        "question": user_input,
        "response": response
    }

    # Append the question and response to the list

    # Check if the user wants to continue
    cont = input("Do you want to ask another question? (yes/no): ")
    if cont.lower() != "yes":

# Save the questions and responses to a JSON file
with open('questions_and_responses.json', 'w') as f:
    json.dump(questions_and_responses, f)

Here’s an example DataFrame, the query made by the Chiropractor (“What’s John Doe’s average pain level?”), and the output of the query:


With Langchain and the Factory Design Pattern, it’s truly remarkable how effortless and streamlined bot creation can become.

As we progress in our discussion, our next piece will delve into the implementation of a demonstration bot on a live website. This journey will further illustrate the immense capabilities of chatbot technology today with the usage of LLMs, Langchain, vector databases, and other APIs.

With minimal lines of code, we can unlock an array of functionalities that go a long way in enhancing user interactions. Stay tuned as we continue to explore the fascinating world of chatbot development.