Chatbots are among the most popular applications of large language models (LLMs). Often, an LLM's internal knowledge base is adequate for answering users' questions. In other cases, however, the model may produce outdated, incorrect, or overly generic responses when specificity is expected. These challenges can be partially addressed by supplementing the LLM with an external knowledge base through the retrieval-augmented generation (RAG) technique.
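
To make the idea concrete, here is a minimal sketch of the RAG flow: retrieve relevant snippets from an external knowledge base and prepend them to the user's question before it reaches the model. The toy in-memory knowledge base and the naive keyword retriever are illustrative assumptions; a real system would use a vector store with embedding-based search, and the final LLM call is left as a placeholder.

```python
# Toy knowledge base; in practice this would be a vector store.
KNOWLEDGE_BASE = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm CET.",
    "The Pro plan includes priority support and a 99.9% uptime SLA.",
]


def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embeddings."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(doc.lower().split())), doc)
        for doc in KNOWLEDGE_BASE
    ]
    scored.sort(reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def build_prompt(question: str) -> str:
    """Augment the user's question with retrieved context."""
    context = "\n".join(retrieve(question))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    prompt = build_prompt("What is your return policy?")
    print(prompt)
    # answer = llm.generate(prompt)  # hypothetical call to the chat model
```

With this setup, the model answers from the retrieved context rather than from its internal memory alone, which is what lets RAG keep responses current and specific.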