2024
Kilobase
The number of hoops that I have to jump through to get this docker image uploaded onto Docker Hub! Honestly, we got the build working; just passing the push flag seems to be a pain, and it might just be some sort of out-dated docs that I have been referencing? Hmm. While the build is going on, we can take a look back at the bgworker and maybe take a look at ways to integrate the jedi crate! One of the areas that I was thinking about was double-checking the URL before we use it.
To get started, we will alter the jedi crate and create a new regex and function:
```rust
use once_cell::sync::Lazy;
use regex::Regex;

// Case-insensitive, HTTPS-only URL pattern: a host of up to 63 characters,
// a TLD of at least two letters, and up to 64 optional path segments.
pub static SANITIZATION_URL_REGEX: Lazy<Regex> = Lazy::new(|| {
    Regex::new(r"(?i)^https://[A-Z0-9._%+-]{1,63}\.[A-Z]{2,}(/[A-Z0-9._%+-]*){0,64}$").unwrap()
});

// Validate the input against the regex; on success, return the same borrowed
// slice so the caller avoids any allocation (hence "zero copy").
pub fn extract_url_from_regex_zero_copy(input: &str) -> Result<&str, &'static str> {
    if SANITIZATION_URL_REGEX.is_match(input) {
        Ok(input)
    } else {
        Err("invalid_url")
    }
}
```
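As a quick sanity check, this is how the helper might be exercised; the sample URLs here are just placeholders:

```rust
fn main() {
    // A well-formed HTTPS URL passes through as the same borrowed slice.
    assert_eq!(
        extract_url_from_regex_zero_copy("https://kbve.com/journal"),
        Ok("https://kbve.com/journal")
    );
    // Anything that is not HTTPS (or is otherwise malformed) gets rejected.
    assert_eq!(
        extract_url_from_regex_zero_copy("http://kbve.com"),
        Err("invalid_url")
    );
}
```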
After adding the function, we want to run `./kbve.sh -nx jedi:dry --no-cloud` to make sure that the package will run without any major issues.
Docker
Okay, switching back around to the Docker Hub release: we got kilobase uploaded, but the image that it pushed was tagged with a direct reference to the sha and the PR, not the tag that we wanted.
I am going to move this part of the issue off into a side note because it has been way too long of a problem to resolve!
This might be because we are not calling `nx container` directly but rather a custom nx command.
I went back around to the `project.json` and took out some of the additional tags that were there. The current settings for the json are these:
"tags": [
"type=semver,pattern=15.1",
"kbve/kilobase:15.1"
]
We could either drop the `type=semver,pattern=15.1` or the `kbve/kilobase:15.1`.
The whole tag system feels a bit off!
I believe it might be because this is my first time doing the tags through the nx monorepo, so I am definitely thrown off a bit.
The next round we should look into pulling the tag from an environment variable or setting up a script that would help adjust the tags for us.
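For reference, assuming the custom command feeds these into something that parses tags the way docker/metadata-action does (an assumption on my part), pinning the release as a raw value might be the cleaner split:

```json
"tags": [
  "type=raw,value=15.1",
  "type=sha"
]
```

That keeps a sha tag around for traceability, while the raw value carries the release tag we actually want on Docker Hub.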
For automating the tag system, we could just keep the major and minor to reflect the postgres version, then use the last number to reference the version of kilobase that we are using? We could also do a system where the supabase version is our base N and then make changes as N+C+1, where C is the commit? That might end up being a bit too complex; better to start off small and keep it simple.
An example of the shell script would be to pull the current cargo version for kilobase and then pull the major and minor from the base supabase image. The shell script would be two parts; hmm, we are checking the `apps/kilobase/Cargo.toml` and the `apps/kilobase/Dockerfile`, grabbing the data from those two files to then generate our actual version. A rough sketch of that logic is below.
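Since that script does not exist yet, here is a minimal sketch of the logic in Rust; the paths come from above, but the `FROM supabase/postgres:` line and the parsing details are assumptions:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Read both source files named above.
    let cargo = fs::read_to_string("apps/kilobase/Cargo.toml")?;
    let dockerfile = fs::read_to_string("apps/kilobase/Dockerfile")?;

    // Grab the crate version from the first `version = "x.y.z"` line,
    // keeping only the patch segment for the final tag.
    let crate_version = cargo
        .lines()
        .find_map(|line| line.trim().strip_prefix("version = \""))
        .and_then(|rest| rest.strip_suffix('"'))
        .expect("version line in Cargo.toml");
    let patch = crate_version.split('.').nth(2).expect("patch segment");

    // Grab major.minor from a base image line such as
    // `FROM supabase/postgres:15.1.0` (the image name is an assumption).
    let base_tag = dockerfile
        .lines()
        .find_map(|line| line.trim().strip_prefix("FROM supabase/postgres:"))
        .expect("base image line in Dockerfile");
    let mut parts = base_tag.split('.');
    let (major, minor) = (parts.next().unwrap(), parts.next().unwrap());

    // e.g. postgres 15.1 + kilobase patch 2 => kbve/kilobase:15.1.2
    println!("kbve/kilobase:{major}.{minor}.{patch}");
    Ok(())
}
```

The nx target could then consume that output as the image tag instead of the hardcoded `15.1`.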
Benching
A quick break from the coding in about 20 minutes to do some quick workouts; my brain might need some more bloodflow xD. I spent almost the whole night trying to debug these weird and nearly pointless build problems that should have been super easy. Setting up k3s was easier than this pepperhole.
Armada
We can begin setting up the armada next!
The central parts of the armada will be located here: `migrations/kube/charts/armada`, with the core being under a subfolder that we will call `shipyard`.
Under the shipyard, we will deploy the operators and management tools, with the goal of having it all within one easy-to-deploy cluster.
Part of the Armada will be the Elemental deployment, but that will take some time to move into; we will most likely do that after the supabase deployment.
Upon further research, we can have the shipyard access the helm charts directly, allowing us to build on top of the bitnami charts. There are still some concepts that I am not fully grasping, but that might have to do with just me not being fully based. Thus we will be deploying the armada this weekend, which should provide us with a couple of additional operators that we can use to help us debug some of the supabase issues.
During deployment, we got this error:
```
level=fatal msg="open migrations/kube/charts/armada/shipyard/values.yaml: no such file or directory"
```
I am thinking that it might be because we have a file called values.yaml that is empty, so let's go ahead and remove that?
Getting nowhere with this fleet deployment; it's even worse than the supabase one because we are going backwards! xD To rule out whether it's a sealed secrets issue or a general fleet issue, I will try to deploy redis as well.
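If the redis test goes through Fleet's helm support, a bare-bones fleet.yaml for that bundle might look something like this; the chart version and values here are assumptions, not something we have actually pinned:

```yaml
# Hypothetical fleet.yaml for the redis smoke-test bundle.
defaultNamespace: armada
helm:
  repo: https://charts.bitnami.com/bitnami
  chart: redis
  version: 18.1.6
  values:
    architecture: standalone
    auth:
      enabled: false
```

If this bundle deploys cleanly while the shipyard one keeps failing, that would point at the sealed secrets side rather than Fleet itself.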
2023
- 9:34am - The next move would be to test-case the appwrite connection on multiple widgets and concepts outside of the general structure. This will be an interesting way to see if the CORS issue is specific to the `kbve.com` domain, or if it's Traefik, or if it's Cloudflare, etc. There are a couple of different areas where we might be having this CORS issue, but this might be the best way to guess and check.
- 1:26pm - I am thinking of cloning out the YoRHa UI to be a bit like the Aura Idle, but I am on the fence because of too many abstract libraries. I might use a 3rd-party engine, but at the same time, I might just stick with basic vanilla Javascript as much as I can before I really add more weight to the components.
- 3:00pm - Taking a side break from the YoRHa UI and now adding Toasts into the KBVE repo. There are two main libraries that I am looking into, Toastify and Svelte-Toast; both seem to get the job done, but I am on the fence about which one I would use.
- 4:03pm - Was really hoping the feds would not increase the rates again but I will take a small hike on the table for the remaining year, I guess. Those calls all expired in profit!
- 5:14pm - Toast looks like it's fine for now; it needs to be a bit refined, but I will have to figure out the best way to do so. I am thinking maybe it sits outside the scope of the current templating system and operates as its own component.
Quote
Once you choose hope, anything’s possible. — Christopher Reeve
Tasks
- [ ]