Core Boxx – AI Chatbot


DOWNLOAD & INSTALLATION

First, the download links for the module, along with the installation instructions.

 

LICENSE & DOWNLOAD

GitHub | Source Forge

Core Boxx is released under the MIT License. You are free to use it for personal and commercial projects, and to modify it as you see fit, on the condition that the software is provided “as-is”. There are no warranties and no strings attached; Code Boxx and the authors are not liable for any claims, damages, or liabilities.

 

SORRY FOR THE ADS...

But someone has to pay the bills, and sponsors are paying for them. I refuse to turn Code Boxx into a “paid scripts” business, and I don’t block people who use Adblock. Every little bit of support helps.

Buy Me A Coffee | Code Boxx eBooks

 

REQUIREMENTS

  • Python – Yes, Python, not PHP. At the time of writing, versions 3.9~3.10 work just fine.
  • Microsoft C++ Build Tools
  • CMake
  • Nvidia CUDA Toolkit – If you have an Nvidia graphics card.
  • An Nvidia graphics card with at least 8GB VRAM is highly recommended. You can TRY to run on CPU-only, but that will be painfully slow.
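The requirements above can be sanity-checked from Python before running setup. This helper is not part of the module; it is a minimal sketch, assuming the usual executable names (cmake, nvcc) are on the PATH:

```python
import shutil
import sys

# Hypothetical helper (not part of this module): checks the basic
# requirements listed above.
def check_requirements():
    report = {}
    # Python 3.9~3.10 is known to work at the time of writing.
    report["python"] = (3, 9) <= sys.version_info[:2] <= (3, 10)
    # CMake must be on the PATH to build Llama.cpp.
    report["cmake"] = shutil.which("cmake") is not None
    # nvcc is present if the Nvidia CUDA Toolkit is installed.
    report["cuda"] = shutil.which("nvcc") is not None
    return report
```

If "cuda" comes back False on a machine with an Nvidia card, install the CUDA Toolkit before running setup, or fall back to the (painfully slow) CPU-only install.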

 

INSTALLATION

  • Copy/unzip this module into your Core Boxx folder.
  • Put the documents you want the AI to “learn” into chatbot/docs. Accepted file types: csv pdf txt epub html md odt doc docx ppt pptx.
  • WARNING, SEVERAL GIGABYTES WORTH OF DOWNLOAD!
    • GPU – Run 0-setup.bat (Windows) 0-setup.sh (Linux).
    • CPU – Run 0-setup.bat CPU (Windows) 0-setup.sh CPU (Linux).
  • You will need to manually download a model, see “changing an AI model” below.
  • Access http://your-site.com/ai for the demo.
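For reference, filtering a folder down to the accepted file types above takes only a few lines. This is an illustrative sketch, not the module's own code; the folder name follows the layout described above:

```python
from pathlib import Path

# Accepted document types, per the installation notes above.
ACCEPTED = {".csv", ".pdf", ".txt", ".epub", ".html", ".md",
            ".odt", ".doc", ".docx", ".ppt", ".pptx"}

# Hypothetical helper: list the documents the bot would ingest,
# skipping unsupported file types.
def list_docs(folder="chatbot/docs"):
    root = Path(folder)
    if not root.is_dir():
        return []
    return sorted(p for p in root.rglob("*")
                  if p.suffix.lower() in ACCEPTED)
```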

 

 

POST INSTALL NOTES

  • By default, the AI chatbot will be deployed at http://your-site.com:8008. You can change this in:
    • chatbot/settings.py – http_host and http_port.
    • lib/CORE-Config.php – HOST_CHATBOT.
  • If you add (or remove) a graphics card, you must reinstall PyTorch and Llama.cpp. The easiest way is to just rerun 0-setup.bat / 0-setup.sh.
  • To rebuild the documents database, simply add/remove documents from chatbot/docs and run 1-create.bat / 1-create.sh.
  • To launch the bot, simply run 2-bot.bat / 2-bot.sh.
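The relevant fragment of chatbot/settings.py might look like this. Only the two setting names come from the notes above; the values shown are assumptions to adjust for your deployment:

```python
# chatbot/settings.py (fragment) – example values only.
http_host = "0.0.0.0"   # listen on all interfaces
http_port = 8008        # must match HOST_CHATBOT in lib/CORE-Config.php
```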

 

FILES LIST

PYTHON

  • chatbot/0-setup.XYZ – Setup-related scripts.
  • chatbot/1-create.XYZ and create.py – Create a database from the provided documents.
  • chatbot/2-bot.XYZ and bot.py – The AI Chatbot itself.
  • chatbot/settings.py – Settings.

ASSETS & PAGES

  • pages/PAGE-ai.php – Demo page.
  • assets/ai-XYZ.webp – “Avatar” pics.
  • assets/PAGE-ai.js and assets/PAGE-ai.css – Demo page Javascript and CSS.

 

 

CHOOSING AN AI MODEL

I wish the AI world was easier, but it’s not…

 

A FEW NOTES ABOUT AI MODELS

  • Hugging Face has a lot of models to choose from.
  • Not all models are built the same.
    • GGML – Optimized for CPU.
    • GPTQ – Optimized for GPU.
    • GGUF – A newer format that replaces GGML; it runs on CPU, with optional GPU offloading.
    • Don’t be confused. Some models can technically run “CPU-only”, but it is painfully slow and you still need a ton of RAM.
    • CHAT – Models with “chat” in the name are tuned for general conversation.
    • CODE – These models are tuned to provide programming assistance.
    • MATH – Tuned to do calculations.
    • 7B ~ 13B ~ 34B ~ 70B – The number of parameters. Technically, more parameters equal a “smarter” AI, but also a larger file size and more system resources required to run.
  • A few common popular models:
    • Llama by Meta (Facebook)
    • Wizard – Overall well-tuned models.
    • Nomic AI – Guys behind the GPT4All project, plenty of good CPU-only models.
    • TheBloke – This guy has a lot of pre-built models.
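As a rough rule of thumb (an approximation, not a figure from this module), a quantized model file weighs in at about parameters × bits-per-weight ÷ 8:

```python
# Rough size estimate for a quantized model file, in GB.
# params_billion: parameter count in billions (7, 13, 34, 70...).
# bits: quantization bits per weight (4-bit is a common choice).
def approx_size_gb(params_billion: float, bits: int = 4) -> float:
    return round(params_billion * bits / 8, 1)
```

By this estimate, a 7B model at 4-bit is roughly 3.5 GB and a 13B roughly 6.5 GB, which is why at least 8GB of VRAM is recommended for models in that range.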

 

HOW TO WORK WITH THIS MODULE

  • This module runs on 2 different libraries – Llama.cpp and Transformers.
  • Handled by Llama.cpp:
    • Download and save the model into models/.
    • Change model_name in chatbot/a_settings.py.
    • Most GGML/GGUF models should have no issues.
  • Handled by Transformers:
    • Just copy the Hugging Face URL path/suffix, e.g. TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ
    • Paste the URL path into model_name in chatbot/a_settings.py.
    • The next time you launch the bot, Transformers will automatically download and configure the model.
    • But take note – The Transformers library seemingly only supports GPTQ at the time of writing.
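The two paths above boil down to a simple dispatch: a local GGML/GGUF file in models/ goes through Llama.cpp, while a Hugging Face repo path goes through Transformers. This sketch only illustrates the decision; the actual loading calls (shown as comments) are assumptions, not the module's verified code:

```python
from pathlib import Path

# Hypothetical dispatch between the two backends described above.
def pick_loader(model_name: str) -> str:
    if "/" in model_name and not Path("models", model_name).exists():
        # Repo id, e.g. "TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ" –
        # Transformers downloads it automatically on first launch.
        # e.g. AutoModelForCausalLM.from_pretrained(model_name)
        return "transformers"
    # Local file under models/ – load with the Llama.cpp bindings.
    # e.g. Llama(model_path=f"models/{model_name}")
    return "llama.cpp"
```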

 

 

DEVELOPMENT NOTES

A couple of notes if you want to further customize the chatbot.

 

LIBRARIES & LINKS

This module is based on:

  • LangChain – Document loading, embedding, and retrieval.
  • Llama.cpp – Runs GGML/GGUF models.
  • Transformers (Hugging Face) – Runs GPTQ models.

OTHER USES FOR AI

  • A “ChatBot” is only one of the many possibilities.
  • LangChain is capable of doing more than just a chatbot. Examples:
    • Summarize/simplify a piece of article.
    • Extract information from a webpage/advanced crawler.
    • Evaluate information from a database.
    • Interact with API.
  • See the LangChain documentation for the full list of use cases.

 

RESTRICT CHATBOT TO REGISTERED USERS ONLY

  • Edit pages/PAGE-ai.php, enable the user check at the top.
  • Edit chatbot/bot.py, uncomment the “ENABLE THIS TO OPEN FOR REGISTERED USERS ONLY” sections.
  • In chatbot/bot.py, you can also modify jwtVerify() – the token contains the user ID. Connect to the database and get the user info “as usual”.
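For orientation, here is a minimal sketch of what a jwtVerify()-style check has to do for an HS256 token. The function name and secret handling here are illustrative assumptions; the module's actual implementation may differ:

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(s: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

# Hypothetical stand-in for the module's jwtVerify().
def jwt_verify(token: str, secret: str):
    """Return the payload (which contains the user ID) if the HS256
    signature checks out, else None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        return None
    return json.loads(b64url_decode(payload_b64))
```

Once the payload is verified, use the user ID inside it to look up the user in the database “as usual”.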

 

 

TUNING THE CHATBOT

Not all models are built the same, but we can control the AI to a certain degree with some settings in a_settings.py.

  • tokens – A “unit of text” is called a token; Microsoft gives a good explanation.
    • More tokens = AI is capable of handling more data at once.
    • But more tokens = More system resources required.
    • Models have a native “max tokens” limit as well.
  • temperature – From 0 to 1. Higher gives the AI more “creativity”, but also more unpredictability.
  • top_p and top_k – These control how the model samples its next token. Read this.
  • prompt_template
    • Simply put, we insert the user’s query into {question}.
    • The related document search results go into {context}.
    • Then, pass the entire string to AI for processing.
    • TLDR – Modify the user’s question before passing it to the AI. Use this to give more context and instructions to the AI.
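The prompt_template mechanics above can be sketched in a few lines. The template text here is an assumed example; the module's actual default in its settings file may differ:

```python
# Example template – {context} and {question} are the two placeholders
# described above. The wording is illustrative, not the module's default.
prompt_template = """Use the following context to answer the question.
Context: {context}
Question: {question}
Answer:"""

def build_prompt(question: str, docs: list[str]) -> str:
    # Document search results fill {context}, the user's query fills
    # {question}; the combined string is what the model actually sees.
    return prompt_template.format(
        context="\n".join(docs), question=question)
```

Editing the template is the easiest way to give the AI standing instructions, e.g. “Answer only from the context” or “Reply in formal English”.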
