Daily Post Image from Unsplash

June: 10

2024

Prompt

One of the last issues that we are facing is how we would organize and store the different prompts that we intend to use. I am thinking of maybe placing them all inside the frontmatter within ml.mdx and then turning them into JSON? This would be the best move for now, so let me start working that out.

What we could do is include a prompts key inside each of the MDX documents that would hold the prompts we would be using for that subject. This way, when we reference the docker.mdx document, we also have the prompt examples right there within the document.

As for the different parts within the prompts, we would aim to include name, description, items and task. Putting it together, it would look like this:


---
prompts:
    - name: Role Examples
      description: Common role examples for text transformers.
      items:
        - Act as a javascript console
        - Act as an excel sheet
        - Act as a HR interviewer
        - Act as an advertiser
        - Act as a publisher
        - Act as a music teacher
        - Act as a relationship coach
        - Act as a World of Warcraft player and limit the response to 50 characters
      task: |
        Define the role for the AI model and prompt it to perform actions specific to that role. For example, you can ask it to act as a JavaScript console and provide code snippets, or act as a music teacher and give lessons.
---

Now the question would be if we need to expand it out? Maybe include pathways?

The way that I was thinking of structuring the pathways was like this:


pathways = {
    "initial": {
        "prompt": "Initial prompt",
        "next": [
            {"condition": r"regex1", "action": "pathway1"},
            {"condition": r"regex2", "action": "pathway2"}
        ]
    },
    "pathway1": {
        "prompt": "Prompt for pathway 1",
        "next": [
            {"condition": r"regex3", "action": "pathway3"},
            {"condition": r"regex4", "action": "pathway4"}
        ]
    },
    "pathway2": {
        "prompt": "Prompt for pathway 2",
        "next": [
            {"condition": r"regex5", "action": "pathway5"},
            {"condition": r"regex6", "action": "pathway6"}
        ]
    },
    # Add more pathways as needed
}

Yet that might not be the best way to handle it; let me think about what we could do for pathways that would help resolve the issues. Maybe we try out this method for now and then come back to it later on. The chaining might introduce more API calls than it would otherwise require, yet it might help us resolve edge cases.
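
To make the chaining a bit more concrete, here is a minimal sketch of how the engine could walk that table; this is only an assumption about the eventual design, written in TypeScript, with the pathway ids and regex conditions copied straight from the placeholder example above. Each hop would trigger another model call, which is where the extra API usage would come from.

type Transition = { condition: RegExp; action: string };
type Pathway = { prompt: string; next: Transition[] };

// Placeholder pathways mirroring the example above; the real table would be
// loaded from the prompt data rather than hard-coded.
const pathways: Record<string, Pathway> = {
	initial: {
		prompt: 'Initial prompt',
		next: [
			{ condition: /regex1/, action: 'pathway1' },
			{ condition: /regex2/, action: 'pathway2' },
		],
	},
	// Add more pathways as needed
};

// Given the current pathway and the model's response, return the id of the
// next pathway whose condition matches, or undefined when nothing matches.
function nextPathway(current: string, response: string): string | undefined {
	return pathways[current]?.next.find((t) => t.condition.test(response))?.action;
}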

After adding pathways, we will add tools as a reference too.

An example of the tools block is below:


tools = [
        {
            "type": "function",
            "function": {
                "name": "get_game_score",
                "description": "Get the score for a given NBA game",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "team_name": {
                            "type": "string",
                            "description": "The name of the NBA team (e.g. 'Golden State Warriors')",
                        }
                    },
                    "required": ["team_name"],
                },
            },
        }
]

Should this be included within the prompt? I am thinking that it should not be held within the prompt but rather be something that the user would define within their codebase. We can add the tools aspect for demo purposes. Oh! I forgot that we would include the output as well; okay, to recap the prompt engine, we would have:

  • name
  • description
  • items
  • task
  • tools
  • output
  • pathways

Here is an example of this prompt engine:


---

prompts:
    - name: Role Examples
      description: Common role examples for text transformers.
      items:
        - Act as a javascript console
        - Act as an excel sheet
        - Act as a HR interviewer
        - Act as an advertiser
        - Act as a publisher
        - Act as a music teacher
        - Act as a relationship coach
        - Act as a World of Warcraft player and limit the response to 50 characters
      task: |
        Define the role for the AI model and prompt it to perform actions specific to that role. For example, you can ask it to act as a JavaScript console and provide code snippets, or act as a music teacher and give lessons.
      tools: |
        [
            {
                "type": "function",
                "function": {
                    "name": "get_role_specific_info",
                    "description": "Get specific information for a given role",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "role_name": {
                                "type": "string",
                                "description": "The name of the role (e.g. 'JavaScript console')"
                            }
                        },
                        "required": ["role_name"]
                    }
                }
            }
        ]
      output: text
      pathways: |
        {
            "initial": {
                "prompt": "Choose a role for the AI to act as:",
                "next": [
                    {"condition": "javascript console", "action": "pathway1"},
                    {"condition": "excel sheet", "action": "pathway2"}
                ]
            },
            "pathway1": {
                "prompt": "You chose JavaScript console. Provide a JavaScript code snippet:",
                "next": [
                    {"condition": "valid js code", "action": "pathway3"},
                    {"condition": "invalid js code", "action": "pathway4"}
                ]
            },
            "pathway2": {
                "prompt": "You chose Excel sheet. What operation would you like to perform?",
                "next": [
                    {"condition": "create sheet", "action": "pathway5"},
                    {"condition": "edit sheet", "action": "pathway6"}
                ]
            }
        }

---


I think this is a solid start for our prompt engine needs; we can then turn this into JSON and call it from the custom action that we are building. Okay, I will go ahead and add the prompts to ml.mdx, and then begin to assemble them into a specific JSON file for each instance? Should we have it rebuild after each run? That might be a waste of resources, hmm, but for now I will just do it that way.

Zod

With the base prompt data out of the way, we will have to update the Zod schema with this information, so that it will be easy to reference throughout our website and external applications. The base Zod schema that we have right now looks like this:


	docs: defineCollection({
		schema: docsSchema({
			extend: z.object({
				// Extend the built-in frontmatter schema with optional custom fields.
				tags: z.array(z.string()).optional(),
				"yt-tracks": z.array(z.string()).optional(),
				"yt-sets": z.array(z.string()).optional(),
			}),
		}),
	}),

There are a couple of ways we could extend this out; I am thinking of creating a couple of z.object schemas that would hold the information. So let us extend the z.object within the docsSchema to hold the prompt schema data set, defining it like this: prompts: z.array(PromptSchema).optional().
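
As a rough sketch of what that could look like in the content config (the PromptSchema name and field types are assumptions based on the recap above, not a final definition):

import { defineCollection, z } from 'astro:content';
import { docsSchema } from '@astrojs/starlight/schema';

// Hypothetical schema for a single prompt entry; the fields mirror the recap
// above (name, description, items, task, tools, output, pathways).
const PromptSchema = z.object({
	name: z.string(),
	description: z.string(),
	items: z.array(z.string()).optional(),
	task: z.string().optional(),
	tools: z.string().optional(),    // kept as a JSON string inside the frontmatter
	output: z.string().optional(),
	pathways: z.string().optional(), // kept as a JSON string inside the frontmatter
});

export const collections = {
	docs: defineCollection({
		schema: docsSchema({
			extend: z.object({
				tags: z.array(z.string()).optional(),
				"yt-tracks": z.array(z.string()).optional(),
				"yt-sets": z.array(z.string()).optional(),
				prompts: z.array(PromptSchema).optional(),
			}),
		}),
	}),
};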

After extending out the schema, we can move forward with setting up the json file with the prompts. We can split the prompts up into two areas, one being public prompts that anyone can use and the second would be private prompts that would be stored in our database.

There could be additional options in the future to extend it out and let people set up their own shop or service through the usage of those prompts; hmm, for now we will just focus on the public prompt engine.

To get started, we will go ahead and head into the /apps/kbve.com/pages/api/ directory and take a look at the other two .json.js files that we made. Based upon the way we handled those two files, we could easily extend that core concept out to the prompt engine. Now the question would be whether we want one central file with all the prompts or whether we should isolate them; hmm, since we have not yet expanded too far, I think it would be safe to just have a prompt.json.js that will hold all our public prompts. We could create a nested JSON, where it uses the title of the mdx to help build out the prompts, but I am not yet sure of the best way to go about that.
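
Without looking at the exact contents of those two existing .json.js files, a minimal sketch of what prompt.json.js could look like (assuming the docs collection and the prompts frontmatter field from the Zod section) would be:

// Hypothetical src/pages/api/prompt.json.js; a sketch only, not the final endpoint.
import { getCollection } from 'astro:content';

export async function GET() {
	// Grab every docs entry that actually declares prompts in its frontmatter.
	const docs = await getCollection('docs', (entry) => (entry.data.prompts ?? []).length > 0);

	// Nest the public prompts under each document's title for now.
	const payload = {};
	for (const entry of docs) {
		payload[entry.data.title] = entry.data.prompts;
	}

	return new Response(JSON.stringify(payload), {
		headers: { 'Content-Type': 'application/json' },
	});
}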

While I am thinking this through, I should go back and update my notes within ml.mdx, since I have yet to rework them with a transform into Starlight components.

To make things easier to reference, we can extend the z.objects within the frontmatter docsSchema with a prompt_index: z.string().optional(), and this would be the key that we would use to help us build out the nested JSON.
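
As a small sketch of how that key could drive the nesting (prompt_index is the assumed frontmatter field from above, with the document title as a fallback):

// Group prompts by the prompt_index frontmatter key, falling back to the title
// when a document does not define one.
function groupPromptsByIndex(docs) {
	const nested = {};
	for (const entry of docs) {
		const key = entry.data.prompt_index ?? entry.data.title;
		nested[key] = entry.data.prompts ?? [];
	}
	return nested;
}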

Going to run ./kbve.sh -nx kbve.com:build to make sure that everything builds before adding in the new json file.

The file that we created that holds the prompts will be accessible via https://kbve.com/api/prompt.json; now we need a way to filter it.
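
As a quick, hedged example of what that filtering could look like from the consuming side (this just filters client-side by prompt name; whether the endpoint itself should take query parameters is still open):

// Fetch the public prompt JSON and pull out a single named prompt.
// Assumes an ES module context for the top-level await.
const res = await fetch('https://kbve.com/api/prompt.json');
const data = await res.json();

// Flatten the nested structure and look up the "Role Examples" prompt from above.
const roleExamples = Object.values(data)
	.flat()
	.find((prompt) => prompt.name === 'Role Examples');

console.log(roleExamples);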

Groq Pooling

After building out the base pool, we would need to extend it with temporary chat storage, so that it can be referenced when doing the prompt chaining. The question would be how we would go about storing the chat; I suppose it comes down to the type of vector database that we would use, or should we write our own tiny vector database? Maybe a small custom layer that could sit on top of different vector databases like pgvector and chromadb?
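
Just to pin the idea down, here is a minimal sketch of that tiny in-memory store for temporary chat storage; it assumes the embeddings come from somewhere else (an embedding model call) and only handles storage plus cosine-similarity lookup, so it could later be swapped for pgvector or chromadb behind the same interface.

type ChatVector = { id: string; text: string; embedding: number[] };

// Tiny in-memory vector store: push chat turns in, query the most similar ones back out.
class TinyVectorStore {
	private items: ChatVector[] = [];

	add(item: ChatVector): void {
		this.items.push(item);
	}

	// Return the k stored chats most similar to the query embedding.
	query(embedding: number[], k = 3): ChatVector[] {
		return [...this.items]
			.sort((a, b) => cosine(b.embedding, embedding) - cosine(a.embedding, embedding))
			.slice(0, k);
	}
}

function cosine(a: number[], b: number[]): number {
	let dot = 0, na = 0, nb = 0;
	for (let i = 0; i < a.length; i++) {
		dot += a[i] * b[i];
		na += a[i] * a[i];
		nb += b[i] * b[i];
	}
	return dot / (Math.sqrt(na * nb) || 1);
}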

2023

  • 12:30pm - Going over the login screen for KBVE, it looks like I am almost done! This was something that should have been really quick but I had to sit through and understand the whole OAuth situation, as we switched APIs. Now the whole setup is cleaner and easier to use! We can add and remove the components with just a couple quick lines and for the most part, everything seems to be on point!
  • 1:40pm - The login with GitHub looks like it is working fine! I think we can move forward with the Login with Discord next. Afterwards I will double check what we need to finish up Google and Twitch. Moving past the basic authentication, then it's the Profile page. We will keep the objective for the profile page to a narrow scope before extending it out further. The key components that we could focus on would be the general information and vibe the profile page should produce, something unique and creative but not overloading the browser.
  • 2:30pm - Time for a quick coffee break! One of the other aspects that I was looking into was extending out the different page layouts, including starting the blog.astro and adding a couple unique blog articles onto our website. I also did postpone the webmaster tool for a bit too long, so I should look over that and get it going too!
  • 2:45pm - It is interesting to see how Google OAuth2 does its integration; they seem to be very protective of it, which makes sense to me because they have a large scope and can be a target for various malpractice. We started with a test application and are now moving towards production, and in this situation we would have to provide a lot of extra information that I did not expect. I am wondering if Google really needs me to create a YouTube video that shows why we need their login support?
  • 3:34pm - Yes! It looks like we are finally done with the basic production-ready login screen. There are a couple of CSS issues that I see, like the buttons not being exactly in the center? But I can address the style sheet problems later on, near the end. We got GitHub, Discord, Google and Twitch to be functional and operational! Now we can move straight forward with the next step, which is the profile.
  • 4:20pm - Currently looking around for some unique profile templates to base our KBVE one off of. While I was doing some generic research into the topic, I came across the million.js library, maybe we could add this into the KBVE repo? I am going to do a bit more research into how to integrate this library into Astro.
  • 6:30pm - Upgraded Astro.js to the latest version of 2.6.3 and decided to add Million.js into the project! This might help with improving the load time for complex React scripts. I will keep the integration of Million.js to a limit because it is a younger library and there might be issues down the line. This reminds me of the Preact <-> React situation, where certain components failed to render because of the dependencies.
  • 9:45pm - Having a nice salad for dinner, about to hit the 40 range on the pure, and the profile page is coming out pretty cute! It's going to be a long night of programming and test casing, but I am thinking that it will be amazing!

Quote

Love demands infinitely less than friendship. — George Jean Nathan


Tasks

  • Add Discord Login
  • Add Google Login
  • Add GitHub Login
  • Add Twitch Login
  • Prepare for Nephew's Birthday Party