update readme

This commit is contained in:
liquidrinu 2025-05-20 13:00:00 +02:00
parent b48b3dc027
commit ad70178202
4 changed files with 336 additions and 2 deletions

@@ -19,9 +19,63 @@ fusero-app-boilerplate/
- Docker and Docker Compose
- Git
## Development Setup
## 🗃️ Create Docker Volume for Postgres
Before starting the database, create a Docker volume to persist Postgres data:
```sh
docker volume create fusero-db-data
```
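As an illustrative sketch only, the volume can then be attached when starting Postgres locally. The container name, credentials, and host port below are assumptions for local development, not values from this repo; the image tag matches the one in the Helm chart:

```shell
# Illustrative only: credentials and port are placeholder assumptions.
docker run -d \
  --name fusero-db \
  -e POSTGRES_USER=admin \
  -e POSTGRES_PASSWORD=admin \
  -e POSTGRES_DB=fusero-db \
  -p 5432:5432 \
  -v fusero-db-data:/var/lib/postgresql/data \
  postgres:15
```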
## 🛠️ Running Migrations and Seeding the Database in Kubernetes
To run migrations and seed the database in your Kubernetes cluster, a job is included in the Helm chart. The job runs the following command:
```sh
npx mikro-orm migration:up && npm run seed
```
This job is triggered automatically on deployment. If you need to rerun it manually, you can delete and recreate the job using kubectl.
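A minimal sketch of the manual rerun (the job name `fusero-backend-db-init` is the one used in the deploy guide; adjust it if your release names the job differently):

```shell
# Delete the completed job, then let Helm recreate it on the next upgrade.
kubectl delete job fusero-backend-db-init
helm upgrade fusero ./chart -f chart/values.prod.yml
```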
## 💻 Development Setup
Backend setup:
- Copy the example environment file:
```sh
cp backend/.env.example backend/.env
```
- Install dependencies:
```sh
npm install
```
- Run migrations and seed:
```sh
npm run migrate
npm run seed
```
- Start the backend in development mode:
```sh
npm run dev &
```
Frontend setup:
- Copy the example environment file:
```sh
cp frontend/.env.example frontend/.env
```
- Install dependencies:
```sh
npm install
```
- Start the frontend in development mode:
```sh
npm run dev &
```
The app should now be running:
- Frontend → http://localhost:3000
- Backend → http://localhost:14000
## Important Note: Database Must Run in Docker
The PostgreSQL database must always run in Docker, regardless of your development setup choice. This ensures consistent database behavior across all environments.
To start the database:

chart/values.prod.yml Normal file

@@ -0,0 +1,22 @@
backend:
  image: fusero-backend:latest
  env:
    POSTGRES_HOST: postgres-service
    POSTGRES_PORT: "5432"
    POSTGRES_NAME: fusero-db
    POSTGRES_USER: prod_admin
    POSTGRES_PASSWORD: REPLACE_ME
    DEFAULT_ADMIN_USERNAME: admin
    DEFAULT_ADMIN_EMAIL: admin@fusero.nl
    DEFAULT_ADMIN_PASSWORD: STRONG_REPLACE_ME
    ENCRYPTION_KEY: PROD_REPLACE_ME_KEY
    JWT_SECRET: PROD_REPLACE_ME_JWT
    CHATGPT_API_KEY: PROD_REPLACE_ME_CHATGPT
    CANVAS_API_KEY: PROD_REPLACE_ME_CANVAS
frontend:
  image: fusero-frontend:latest
postgres:
  image: postgres:15
  storage: 5Gi
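The `REPLACE_ME` placeholders should never be committed with real values. One option (an assumption about your workflow, not something the chart enforces) is to override them at deploy time with `--set`, which follows the same nesting as the values file above:

```shell
# Override secrets at deploy time instead of editing the committed file.
helm upgrade --install fusero ./chart \
  -f chart/values.prod.yml \
  --set backend.env.POSTGRES_PASSWORD="$(openssl rand -base64 24)" \
  --set backend.env.JWT_SECRET="$(openssl rand -base64 32)"
```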

docs/DEPLOY.md Normal file

@@ -0,0 +1,130 @@
# 📦 Fusero VPS Deployment Guide
This guide walks you through deploying the Fusero full-stack app to a plain Ubuntu VPS using Kubernetes (k3s), Helm, and automatic HTTPS via cert-manager.
---
## 📋 Prerequisites
- ✅ Ubuntu 22.04 VPS with root or sudo access
- ✅ Domain names pointed to your VPS IP:
- api.fusero.nl → for the backend
- app.fusero.nl → for the frontend
- ✅ Git access to your repo
---
## ☸️ 1. Install Kubernetes (k3s)
```sh
curl -sfL https://get.k3s.io | sh -
```
Set kubeconfig so kubectl works:
```sh
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc
source ~/.bashrc
```
Verify:
```sh
kubectl get nodes
```
---
## 📦 2. Install Helm
```sh
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
Verify:
```sh
helm version
```
---
## 📁 3. Clone the Project
```sh
git clone https://your.gitea.repo/fusero-app-boilerplate.git
cd fusero-app-boilerplate
```
---
## 🔐 4. Set Up HTTPS (cert-manager)
Install cert-manager:
```sh
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
```
Check pods:
```sh
kubectl get pods -n cert-manager
```
Create a file `cluster-issuer.yaml` with this content:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@email.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
```
Apply it:
```sh
kubectl apply -f cluster-issuer.yaml
```
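To confirm the issuer registered an ACME account with Let's Encrypt before deploying, you can check its Ready condition:

```shell
# READY should show True once the ACME account registration succeeds.
kubectl get clusterissuer letsencrypt-prod
kubectl describe clusterissuer letsencrypt-prod
```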
---
## 🌍 5. Update DNS
Ensure both api.fusero.nl and app.fusero.nl point to your VPS IP address.
Check propagation:
```sh
ping api.fusero.nl
```
---
## 🚀 6. Deploy with Helm
Ensure you're in the repo root and the chart directory exists.
```sh
helm upgrade --install fusero ./chart -f chart/values.prod.yml
```
This deploys the frontend, backend, Postgres, ingress, and HTTPS.
---
## 📜 7. Verify Access
- Frontend: https://app.fusero.nl
- Backend API: https://api.fusero.nl
---
## 🔁 8. (Optional) Rerun DB Migrations
```sh
kubectl delete job fusero-backend-db-init
helm upgrade fusero ./chart -f chart/values.prod.yml
```
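To watch the recreated job run, a short sketch using the job name above:

```shell
# Follow the migration/seed output until the job completes.
kubectl get jobs
kubectl logs job/fusero-backend-db-init -f
```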
---
## 🧪 9. Useful Commands
View backend logs:
```sh
kubectl logs deployment/fusero-backend
```
View frontend logs:
```sh
kubectl logs deployment/fusero-frontend
```
View pods and services:
```sh
kubectl get pods,svc,deployments
```
---
## ✅ You're Done!
You now have a production deployment of Fusero on a raw VPS with:
- Kubernetes (via k3s)
- TLS via Let's Encrypt
- Helm-managed services
- DNS routing for subdomains
For CI/CD automation via Gitea, see `.gitea-ci.yml` in the repo root.

docs/GUIDE-TO-K8S.md Normal file

@@ -0,0 +1,128 @@
# 📘 How to Install Kubernetes on Ubuntu 24.04 (Step-by-Step Guide)
This guide walks you through installing a multi-node Kubernetes cluster on Ubuntu 24.04 using `kubeadm`.
---
## 🧰 Prerequisites
* Ubuntu 24.04 instances with SSH enabled
* sudo user access
* At least 2GB RAM, 2 CPUs, and 20GB storage per node
* Internet access
### Sample Setup:
* **Master Node:** k8s-master-noble (192.168.1.120)
* **Worker 1:** k8s-worker01-noble (192.168.1.121)
* **Worker 2:** k8s-worker02-noble (192.168.1.122)
---
## 1⃣ Set Hostnames & Update Hosts File
Run on each node:
```sh
sudo hostnamectl set-hostname "k8s-master-noble"    # Master
sudo hostnamectl set-hostname "k8s-worker01-noble"  # Worker 1
sudo hostnamectl set-hostname "k8s-worker02-noble"  # Worker 2
```
Edit `/etc/hosts` on all nodes:
```
192.168.1.120 k8s-master-noble
192.168.1.121 k8s-worker01-noble
192.168.1.122 k8s-worker02-noble
```
---
## 2⃣ Disable Swap & Load Kernel Modules
```sh
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
sudo modprobe overlay
sudo modprobe br_netfilter
echo -e "overlay\nbr_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/kubernetes.conf
sudo sysctl --system
```
---
## 3⃣ Install and Configure containerd
```sh
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/containerd.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update && sudo apt install containerd.io -y
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```
---
## 4⃣ Add Kubernetes Repository
```sh
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/k8s.gpg
echo "deb [signed-by=/etc/apt/keyrings/k8s.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/k8s.list
```
---
## 5⃣ Install kubelet, kubeadm, kubectl
```sh
sudo apt update
sudo apt install kubelet kubeadm kubectl -y
```
---
## 6⃣ Initialize Kubernetes Cluster (Master Node Only)
```sh
sudo kubeadm init --control-plane-endpoint=k8s-master-noble
```
Then set up kubectl:
```sh
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
---
## 7⃣ Join Worker Nodes
Use the join command from the `kubeadm init` output on each worker node:
```sh
sudo kubeadm join k8s-master-noble:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
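If the token from `kubeadm init` has expired (bootstrap tokens are valid for 24 hours by default), generate a fresh join command on the master:

```shell
# Prints a complete "kubeadm join ..." command with a new token and hash.
sudo kubeadm token create --print-join-command
```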
---
## 8⃣ Install Calico Network Add-on (Master Only)
```sh
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/calico.yaml
```
Check readiness:
```sh
kubectl get pods -n kube-system
kubectl get nodes
```
---
## 9⃣ Test the Cluster
```sh
kubectl create ns demo-app
kubectl create deployment nginx-app --image nginx --replicas 2 --namespace demo-app
kubectl expose deployment nginx-app -n demo-app --type NodePort --port 80
kubectl get svc -n demo-app
```
Then access it:
```sh
curl http://<worker-node-ip>:<node-port>
```
✅ You now have a fully functional Kubernetes cluster on Ubuntu 24.04!
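As a small sketch, the assigned NodePort can also be read non-interactively instead of scanning the `kubectl get svc` table (service name and namespace are the ones created in the test above; the worker IP is from the sample setup):

```shell
# Extract the NodePort of the demo service and curl it on a worker node.
NODE_PORT=$(kubectl get svc nginx-app -n demo-app -o jsonpath='{.spec.ports[0].nodePort}')
curl "http://192.168.1.121:${NODE_PORT}"
```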