I launched the blog about one year ago. At the time, I didn’t put much effort into evaluating different tech stacks. I chose Next.js, since I’d heard it was one of the best frameworks for server-side rendering. For the CMS, I picked Strapi after a quick search for headless CMS solutions. On top of these, I set up an NGINX reverse proxy to handle routing and SSL. For storage I went with MariaDB, and for search I added Meilisearch. Everything was deployed to a VPS with Docker, and all of the code lived in a monorepo. Looking back, it was far from a production-ready setup.
My main priority was learning Next.js and getting the blog online. Still, issues surfaced quickly. The monorepo structure made things messy, with no clear separation between services. Docker was convenient for spinning things up, but lacked production features like rolling updates. Strapi (v4 at the time) didn’t win me over either: the admin panel was clunky, and the content manager sometimes duplicated entries, forcing me to double- and triple-check everything before publishing.
I was happy with Next.js, so I kept it; the CMS, however, had to go. That’s when I discovered Payload, a CMS built natively for Next.js. It felt like a perfect fit: seamless integration, excellent docs, helpful plugins, and even starter templates. The admin panel was nice: minimal, intuitive, and easy to customize.
Since I was already reworking the CMS, I decided to refresh the UI as well. The old design was functional, but it didn’t feel right. Inspired by the clean, modern design of shadcn and Next.js, I kept the overall layout but updated the colors and styling. The result feels much cleaner and more polished.
The infrastructure is where the biggest changes happened. I wanted a workflow that was automated, reliable, and required less manual fiddling. That’s how I found GitOps, a methodology that manages infrastructure and applications as code, with a Git repository as the single source of truth.
I set up a single-node k3s cluster and installed ArgoCD, which continuously syncs my cluster state with a GitOps repository.
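Each app is described by an ArgoCD Application resource pointing at a path in that repo. A rough sketch of what the staging blog looks like (the repo URL, paths, and namespaces here are placeholders rather than my exact setup):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blog-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops.git   # the GitOps repo ArgoCD watches
    targetRevision: main
    path: apps/blog/staging                          # manifests for the staging environment
  destination:
    server: https://kubernetes.default.svc           # deploy into the same cluster
    namespace: blog-staging
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```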
To tie everything together, I integrated GitHub Actions into my workflow. Whenever I push to a release branch, GitHub Actions builds a Docker image tagged with the commit SHA. I then update the version tag in the GitOps repo, and ArgoCD automatically detects the change and deploys it to staging. This gives me a safe testing environment before production. When I’m ready, I merge to master, and the same process builds and deploys a new production image, seamlessly, with minimal downtime.
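The build job itself is nothing fancy; here’s a trimmed-down sketch of the workflow (the registry path and branch names are illustrative, not my actual repo):

```yaml
name: build-image
on:
  push:
    branches: [release, master]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          # tag the image with the commit SHA so the GitOps repo can pin exact versions
          tags: ghcr.io/example/blog:${{ github.sha }}
```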
To secure sensitive services (like the ArgoCD web UI and the staging blog), I added Tailscale, creating a private network accessible only from trusted devices. External traffic is managed by nginx-ingress, while cert-manager handles SSL certificates and external-dns keeps DNS records in sync.
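Both cert-manager and external-dns are driven by annotations on the Ingress resource, so the wiring is mostly declarative. A rough sketch for the public blog (the hostname, issuer, and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod              # cert-manager requests and renews the cert
    external-dns.alpha.kubernetes.io/hostname: blog.example.com   # external-dns creates the DNS record
spec:
  ingressClassName: nginx
  tls:
    - hosts: [blog.example.com]
      secretName: blog-tls       # cert-manager stores the certificate here
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog
                port:
                  number: 3000
```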
For secrets management, I use SOPS with PGP encryption, ensuring credentials are securely stored in the GitOps repository and only decryptable by authorized users.
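Concretely, a small .sops.yaml at the root of the repo tells SOPS which files to encrypt and with which keys; something along these lines (the path pattern and fingerprint are placeholders):

```yaml
creation_rules:
  - path_regex: .*/secrets/.*\.yaml$        # only files under a secrets/ directory
    encrypted_regex: ^(data|stringData)$    # encrypt Secret values, leave metadata readable
    pgp: "0123456789ABCDEF0123456789ABCDEF01234567"   # PGP key fingerprint (placeholder)
```

Encrypting a file is then just `sops --encrypt --in-place path/to/secret.yaml`, and only the listed keys can decrypt it.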
This setup still isn’t fully production-ready, but it’s miles ahead of where I started. Next on my list is setting up monitoring to track performance and key metrics, experimenting with new blog features, and committing to publishing at least one post each month on programming and tech topics. Maybe in the future I’ll upgrade to a full HA cluster or even build my own homelab, but I’m not ready to go down that rabbit hole right now. The journey has already been a huge learning experience, and I’m excited to see how far I can push it.