
ETL to QE, Update 43, Design Documents are Constraints

Date: 2024-10-19

Problem Statement

The medium of social media feels like staring into a spreadsheet, when it should instead be a fractal graph that mirrors the flow of human thought and conversation.

Goals and Objectives

Nostr Version Controlled Wiki

I want to be able to publish this repo, with version control, as Nostr events, using Multiformats IPFS CID hashes as references to content such as images, videos, and files.
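As a rough sketch of what one such event could look like: NIP-54 defines kind 30818 for wiki articles, and media can be referenced by content hash rather than by mutable URL. The pubkey, signature, and CID are omitted or shown as placeholders here, and the exact tag layout is an assumption, not a finished design.

```python
# Sketch of an unsigned Nostr wiki-article event (NIP-54 uses kind 30818).
# The CID below is a placeholder, not a real hash; id/pubkey/sig are omitted
# because they are produced at signing time.
import json
import time

def wiki_event(title: str, markdown: str, image_cid: str) -> dict:
    """Build an unsigned Nostr event that references an image by IPFS CID."""
    return {
        "kind": 30818,                               # NIP-54 wiki article
        "created_at": int(time.time()),
        "tags": [
            ["d", title.lower().replace(" ", "-")],  # replaceable-event identifier
            ["title", title],
        ],
        # Media is referenced by content hash instead of a mutable URL.
        "content": markdown + f"\n\n![diagram](ipfs://{image_cid})",
    }

event = wiki_event("Design Documents are Constraints",
                   "Article body here.", "<placeholder-CID>")
print(json.dumps(event, indent=2))
```

Because the `d` tag makes the event replaceable, publishing a new revision with the same `d` value is what gives the wiki its version-controlled feel on relays.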

Nostr LLM Chat bot with metadata logging built into the medium

There is a difference between interfacing directly with an LLM and interfacing with a chatbot built on top of one.

  • RAG - When you upload a file to ChatGPT and talk to it, the program is executing RAG (retrieval-augmented generation) functions in the background and responding to you with the result. ChatGPT also has the capability to use tools.
  • Images - When a user uploads an image, the chatbot obtains data from the image and sometimes even runs a short conversation "script" with a separate vision system, distinct from the LLM, that analyzes the image.
  • Persona System Prompts - There are special personas one can interact with inside various LLM assistant programs. These usually involve additional "system prompts" that the user does not see through the chatbot interface.
  • Tool Math Calculations - A chatbot can detect whether a user's message is asking to do math, then feed the math-related part of the message into a separate calculator program such as Python or Wolfram Language.
  • Tool Web Search - A chatbot can detect if someone is asking a question that can be searched on the internet. When this is the case, the chatbot can literally query search engines such as Google and friends, then run a short conversation "script" over the results to look for the answer.
  • Tool Query Structured Data - A chatbot can detect if one is asking a question about a specific piece of structured data, such as a database or spreadsheet; transform that question into a query that can be run on the data; run said query; possibly troubleshoot the query based on errors; get the answer; then take that response and phrase it to the user in context using natural language.
  • Agent Troubleshoot Code - When a user submits a piece of code for the chatbot to look at, the LLM can extract the code, run it, read the error it produces, try to fix the code based on the error, repeat this a couple of times, then review all the troubleshooting steps and provide the user a single message about how the chatbot tried to make sense of the user's code.
  • Agent Follow User Through a Single Troubleshooting Step - If a user has a problem connecting to the internet, there are several steps:
    • Steps to connect to internet
      • Turn on Computer
      • Login to OS
      • Connect Ethernet / Wifi
      • etc. etc.
    • An individual agent can manage an individual step, such as turning on the computer:
      • Check power cable
      • Check power outlet works
      • Check switch on back of power supply
      • Remove battery
      • Press button to turn computer on
      • Hold button to turn computer on
    • Once the agent has walked the user through troubleshooting a single step, it provides a summary to the meta agent troubleshooting the internet connection.
  • Agent Editor - There are many steps of the editing process that an LLM should perform in sequence, then provide the conclusions after completing all the steps, summarizing what the author of the text actually needs to know. Additional context can be provided to the editor, such as the intended audience, the purpose of the text, and any guidance on argument structure.
    • Additional context here
  • For more info on tools used in LLM chatbots, check out the FlowiseAI docs or OpenAI's tools page.
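The tool-dispatch bullets above (math, web search, structured data) share one pattern: decide which tool a message needs, run it, and phrase the result back. A minimal sketch follows; real chatbots usually let the LLM itself pick a tool via function calling, so the regex heuristics and `eval` below are toy stand-ins for that decision and for a real calculator tool.

```python
# Toy tool router: classify a message, then dispatch it to a tool.
# The routing rules here are illustrative heuristics, not production logic.
import re

def route(message: str) -> str:
    """Return which tool the chatbot would dispatch this message to."""
    if re.fullmatch(r"[\d\s\.\+\-\*/\(\)]+", message):
        return "calculator"
    if message.lower().startswith(("who", "what", "when", "where")):
        return "web_search"
    return "llm_only"

def answer(message: str) -> str:
    tool = route(message)
    if tool == "calculator":
        # Feed only the math expression into a separate evaluator program
        # (a stand-in for a Python or Wolfram Language tool call).
        return str(eval(message))
    return f"[{tool}] would handle: {message}"

print(answer("2 * (3 + 4)"))        # the calculator branch evaluates this
print(answer("Who invented Nostr?"))
```

The key design point is that the LLM never does the arithmetic or the searching itself; it only decides which specialist tool to hand the message to.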
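The "Agent Troubleshoot Code" loop above can also be sketched: run the user's code, read the error, attempt a fix, repeat a few times, and keep a log to summarize at the end. Here `propose_fix` is a hypothetical stand-in for asking the LLM to patch the code; it only knows how to define a missing variable.

```python
# Run-fix-retry loop for the code-troubleshooting agent described above.
import traceback

def run(code: str) -> str | None:
    """Execute the snippet; return the error's last line, or None on success."""
    try:
        exec(code, {})
        return None
    except Exception:
        return traceback.format_exc().strip().splitlines()[-1]

def propose_fix(code: str, error: str) -> str:
    """Stand-in for an LLM patch step: define any name the code complains about."""
    if error.startswith("NameError"):
        missing = error.split("'")[1]
        return f"{missing} = 0\n" + code
    return code

def troubleshoot(code: str, max_rounds: int = 3) -> list[str]:
    log = []
    for _ in range(max_rounds):
        error = run(code)
        log.append(error or "success")
        if error is None:
            break
        code = propose_fix(code, error)
    return log  # the chatbot would summarize this log in one user-facing message

print(troubleshoot("print(x + 1)"))
```

The user only ever sees the final summary of the log, not each failed round, which is exactly the "single message" behavior the bullet describes.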
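The meta-agent / step-agent split from the internet-troubleshooting bullets can be sketched the same way: each step agent walks the user through one checklist, then reports a one-line summary up to the meta agent that owns the overall goal. The plan and the boolean "observations" dict are simplified assumptions; a real system would gather them through conversation.

```python
# Meta agent delegating checklist steps to per-step agents, as described above.
def step_agent(step: str, checklist: list[str], observations: dict[str, bool]) -> str:
    """Walk one step's checklist against what the user reports; summarize it."""
    failed = [item for item in checklist if not observations.get(item, False)]
    if failed:
        return f"{step}: blocked at '{failed[0]}'"
    return f"{step}: completed"

def meta_agent(plan: dict[str, list[str]], observations: dict[str, bool]) -> list[str]:
    """Delegate each step to a step agent and collect the summaries."""
    return [step_agent(step, items, observations) for step, items in plan.items()]

plan = {
    "Turn on computer": ["Check power cable", "Press power button"],
    "Connect Ethernet / Wifi": ["Plug in cable"],
}
reports = meta_agent(plan, {"Check power cable": True, "Press power button": True})
print(reports)
```

Because each step agent returns only a summary, the meta agent reasons over a few short lines instead of the full back-and-forth of every checklist.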