Daily Post Image from Unsplash

April 19

2024

GHA

GitHub Actions is throwing too many errors for me to handle without overthinking, so the next step for the night is to start placing dynamic SHA hashes into each pull request and branch within the automation system. I was avoiding this before because I figured it would create too many pull requests, yet skipping it seems to be causing more build failures and is slowly becoming a pain point.

So my solution to this would be to create a globals job before the alter job and then call its outputs like the example below:

jobs:
    build:
        runs-on: ubuntu-latest
        outputs:
            # Expose the hash so downstream jobs can read it via needs.build.outputs
            sha256head: ${{ steps.hash.outputs.sha256head }}
        steps:
            - name: Checkout code
              uses: actions/checkout@v4
            - name: Calculate SHA-256 Hash
              id: hash
              # ::set-output is deprecated, so write to $GITHUB_OUTPUT instead
              run: |
                  echo "sha256head=$(echo -n ${{ github.sha }} | sha256sum | awk '{print $1}')" >> "$GITHUB_OUTPUT"
            - name: Log SHA-256 Hash
              run: |
                  echo "SHA-256 of the commit is ${{ steps.hash.outputs.sha256head }}"

    use_hash:
        needs: build
        runs-on: ubuntu-latest
        steps:
            - name: Use SHA-256 Hash
              run: |
                  echo "Using SHA-256 Hash in another job: ${{ needs.build.outputs.sha256head }}"

Going to place this within the ci-main.yml and reference it from any of the pull requests that create a new branch. Now I will push this file change through the main branch and see how it performs.

v01d

The current v01d is missing the NVIDIA and CUDA drivers, so let me start fixing that up. We want to install the CUDA drivers for NVIDIA, so here is what we can do from the Dockerfile side.

E: Conflicting values set for option Signed-By regarding source https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /: /usr/share/keyrings/cuda-archive-keyring.gpg !=
E: The list of sources could not be read.

Okay, going to update the Dockerfile and remove the original CUDA keys. I went back to the docs and here is what we got:


# Pin the CUDA repository so apt prefers NVIDIA's packages
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
# Download and register the local CUDA 12.4.1 repository
wget https://developer.download.nvidia.com/compute/cuda/12.4.1/local_installers/cuda-repo-ubuntu2204-12-4-local_12.4.1-550.54.15-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-12-4-local_12.4.1-550.54.15-1_amd64.deb
# Install the fresh signing key, replacing the conflicting one from the error above
sudo cp /var/cuda-repo-ubuntu2204-12-4-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-4

When the Docker shell is frozen, you know you've gone too far! xD It seems to freeze up on the libllvm15 library; maybe that is a side effect of the sheer amount of data we are pulling? Hmm, maybe I should restart the Docker build.

Looks like the v01d Dockerfile builds without any issues on the local end; going to prepare the official build next.

When looking at the GitHub Actions, there are two options for figuring out if files have changed: either kbve_all_changed_files or kbve_any_changed. In this case, we will use the _any_changed output for the file paths.
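
As a minimal sketch of how that output could be wired up, assuming it comes from tj-actions/changed-files with a files_yaml key named kbve (the job names and paths below are placeholders, not the actual ci-main.yml):

jobs:
    changes:
        runs-on: ubuntu-latest
        outputs:
            kbve_any_changed: ${{ steps.delta.outputs.kbve_any_changed }}
        steps:
            - uses: actions/checkout@v4
            - name: Detect changed files
              id: delta
              uses: tj-actions/changed-files@v44
              with:
                  files_yaml: |
                      kbve:
                          - apps/v01d/**

    build_v01d:
        needs: changes
        # Only run the build when the watched file paths actually changed
        if: needs.changes.outputs.kbve_any_changed == 'true'
        runs-on: ubuntu-latest
        steps:
            - name: Build
              run: echo "v01d file paths changed, run the build here."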

3:47pm - It seems we ran out of space during the Docker build for this image; hmm, maybe I can go back to the drawing board and figure out another way to handle this.
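
One common workaround, sketched here as an assumption rather than a tested fix: the hosted ubuntu-latest runners ship with large preinstalled toolchains that can be deleted before the build to reclaim tens of gigabytes.

# Reclaim disk space on the hosted runner before building the image
sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc
# Drop any dangling Docker layers and images
sudo docker system prune -af
# Confirm how much space is now free
df -h /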

The NVIDIA image comes in three flavors: base, runtime, and devel.
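
For reference, a sketch of what those three tags look like on Docker Hub; the 12.4.1/ubuntu22.04 versions are assumptions chosen to match the installer above:

# base: just the minimal CUDA runtime (cudart)
docker pull nvidia/cuda:12.4.1-base-ubuntu22.04
# runtime: adds the CUDA math libraries on top of base
docker pull nvidia/cuda:12.4.1-runtime-ubuntu22.04
# devel: adds headers, nvcc, and dev tooling on top of runtime
docker pull nvidia/cuda:12.4.1-devel-ubuntu22.04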

The other option might be to use an existing image; this Fooocus-API seems to fit perfectly into this project.

There are three paths that I see: the first would be having the v01d use the NVIDIA-based image for its own Docker container, and the second would be to extend out the Fooocus-API Dockerfile.

The third path would be to just drop CUDA support from the base image and shift it over to a shell file that installs the needed drivers outside of the base image.
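
If that third path wins, the shell file could be little more than the repo setup from earlier wrapped in a script; a rough sketch, with the filename and the nvcc guard being assumptions:

#!/usr/bin/env bash
# install-cuda.sh - install the CUDA toolkit outside of the base image
set -euo pipefail

# Skip the install if the toolkit is already present
if command -v nvcc >/dev/null 2>&1; then
    echo "CUDA toolkit already installed, skipping."
    exit 0
fi

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.4.1/local_installers/cuda-repo-ubuntu2204-12-4-local_12.4.1-550.54.15-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-12-4-local_12.4.1-550.54.15-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-12-4-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-4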

Docker

The main kbve Dockerfile seems to fail at building, so I am going to trace back some issues that might be the cause. I will start with making sure the pnpm-lock.yaml for the whole repo is up to date, as I noticed that it was still on an older version.
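
A quick sketch of that refresh, assuming a standard pnpm workspace at the repo root:

# Regenerate pnpm-lock.yaml without touching node_modules
pnpm install --lockfile-only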

Hmm, that was the error, so we can move forward with our v01d build.

For some reason the pnpm lock is still becoming a pain point; maybe I need to look back at the Node versions that are split among the different Dockerfiles.

Pinning everything to Node v20 or Node v21 might be the safest bet.
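
One way to keep those versions in lockstep, sketched as an assumption about how the Dockerfiles could be laid out: declare the Node version once as a build arg and reuse it in every image.

# Shared across the Dockerfiles; override with --build-arg NODE_VERSION=21
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-slim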

2023

India

  • Visiting the inner city this morning and it was wild! The food smells amazing, but the crowd is mind-numbing and the traffic is insane; damn, I almost got hit by a tuk-tuk / rickshaw at least three times.
  • Currently trying to fix the issues with SendGrid and Ezoic. It seems that Ezoic's DNS is replacing the SendGrid CNAME by dropping it. When trying to place the CNAME inside of the Ezoic DNS panel, it says that it cannot proxy it. I raised a community ticket on their forum; hopefully they can get it resolved, but if they cannot, then we might use a 3rd party domain.
  • Upgrading Coolify to v3.12.31. Before upgrading, I am going to execute an AWX backup stack. This is another test case for a full scale backup, including the SQL database but excluding snapshots.
  • Integration of Portainer and Coolify still has some issues, including volume and network management. I will raise an issue after doing more R&D between both applications.
  • Starting the AppWrite Register for Astro.

Notes

No notes to sync.

Tasks

  • Upgrade Coolify to v3.12.31.
  • R&D for automated RAW image conversion.
  • Rollback Astro upgrade because of errors.
  • Pay off any additional CC debt(s) for the month.
  • Leverage 100 shares of $SPY again for the week.
  • AppWrite -> Register -> https://github.com/KBVE/kbve.com/issues/122
  • Scope out local tailor for pants.
  • Ant Spray for the house in India.

Quote

Many men go fishing all of their lives without knowing that it is not fish they are after. — Henry David Thoreau