Deployment
The system can be deployed in two ways: locally via Docker Compose for development and testing, or on a local Kubernetes cluster using Minikube for a production-like environment.
Docker Compose
Prerequisites
- Docker installed and running.
Starting the system
First, clone the repository:

```sh
git clone https://github.com/ToRenameTeam/Nucleo.git
cd Nucleo
```

Then run the following script from the repository root:

```sh
./start-all-services.sh
```

The script performs the following steps:
- Configures `.env` files: for each service that provides a `.env.example` template, the script creates a `.env` file from it if one does not already exist. If a `.env` file is already present, any keys missing from it are automatically added from the template without overwriting existing values.
- Starts all services: each service is started in detached mode via `docker-compose up -d`, in the following order: Kafka, `appointments-service`, `users-service`, `master-data-service`, `documents-service`, NGINX, and `frontend-service`.
On first startup, databases are seeded automatically. Allow a few minutes for all containers to become healthy.
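The `.env` handling described above can be sketched in POSIX shell. The function name is hypothetical and the real script may differ; this only illustrates the "create if absent, merge missing keys otherwise" behavior:

```sh
# Sketch of the .env merge step (function name is hypothetical).
# Creates .env from the template if absent; otherwise appends any keys
# present in the template but missing from .env, keeping existing values.
sync_env_file() {
  example="$1" env="$2"
  [ -f "$env" ] || { cp "$example" "$env"; return; }
  while IFS= read -r line; do
    key=${line%%=*}
    # Skip blank lines and comments.
    case "$key" in ''|'#'*) continue ;; esac
    # Append only keys not already defined in .env.
    grep -q "^${key}=" "$env" || printf '%s\n' "$line" >> "$env"
  done < "$example"
}
```

Because existing lines are never rewritten, locally customized values (such as an API key) survive repeated runs of the start script.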
Note: the `ai-service` requires a valid Groq API key to function. Before starting the system, set the `GROQ_API_KEY` variable in `documents-service/.env` (the file is created automatically by the script on first run, but the key must be filled in manually):

```sh
GROQ_API_KEY=your_api_key_here
```
Once running, the system is accessible at http://localhost:3000.
Stopping the system
```sh
./stop-all-services.sh
```

The script runs `docker-compose down` for each service in reverse order. Database volumes are not removed, so data is preserved across restarts.
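The reverse-order teardown can be sketched as below. The service list mirrors the startup order given earlier; the actual `docker-compose down` invocation is shown as a comment since it depends on the repository layout:

```sh
# Sketch of the reverse-order teardown (list mirrors the startup order above;
# the real script may differ).
SERVICES="kafka appointments-service users-service master-data-service documents-service nginx frontend-service"

stop_all() {
  # Reverse the list so services stop in the opposite order they started.
  reversed=""
  for s in $SERVICES; do reversed="$s $reversed"; done
  for s in $reversed; do
    echo "stopping $s"
    # (cd "$s" && docker-compose down)   # no -v flag: volumes are kept
  done
}
```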
Kubernetes (Minikube)
Prerequisites
Cluster Configuration
All Kubernetes manifests and Helm charts are located in the `kubernetes/` directory. The following Helm charts are defined for the different types of components:
- `node-chart`: for Node.js microservices
- `kotlin-chart`: for JVM microservices
- `python-chart`: for the Python service
- `mongo-chart`: for MongoDB instances
- `postgres-chart`: for the PostgreSQL instance
- `minio-chart`: for MinIO object storage
Kafka is managed by the Strimzi Kafka Operator, which handles the full lifecycle of the Kafka cluster and topics as Kubernetes custom resources.
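For illustration, a Strimzi-managed topic is declared as a Kubernetes custom resource along these lines (the topic and cluster names here are assumptions, not the project's actual manifests):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: appointments-events          # hypothetical topic name
  labels:
    strimzi.io/cluster: nucleo-kafka # hypothetical Kafka cluster name
spec:
  partitions: 1
  replicas: 1
```

The operator reconciles such resources into real Kafka topics, which is what lets the deploy script wait on them like any other Kubernetes object.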
Routing between the external network and the cluster is handled by the Gateway API with Traefik as the controller, replacing the traditional Ingress resource.
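With the Gateway API, routing rules are expressed as `HTTPRoute` resources attached to a `Gateway` rather than as Ingress annotations. A sketch of what such a route could look like (route name, Gateway name, path, and port are assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: users-route                  # hypothetical route name
spec:
  parentRefs:
    - name: gateway                  # hypothetical Gateway resource
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api/users        # hypothetical path prefix
      backendRefs:
        - name: users-service
          port: 3000                 # hypothetical service port
```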
Starting the cluster
First, clone the repository if you have not already done so:

```sh
git clone https://github.com/ToRenameTeam/Nucleo.git
cd Nucleo
```

Before deploying, set the `GROQ_API_KEY` in `ai-service/helm-values/app.values.yaml`:

```yaml
secretEnv:
  GROQ_API_KEY: your_api_key_here
```

Then start Minikube with sufficient resources:

```sh
minikube start --memory=8192 --cpus=4
```

And run the deploy script from the repository root:

```sh
./scripts/deploy.sh
```

The script performs the following steps in order:
- Checks prerequisites: verifies that `kubectl`, `helm`, `minikube`, and `docker` are available and that Minikube is running.
- Installs Gateway API CRDs and Traefik controller: applies the standard Gateway API CRDs and deploys Traefik via Helm.
- Deploys Kafka: installs the Strimzi operator, then creates the Kafka cluster and topics as custom resources. The script waits for the cluster and all topics to become ready.
- Deploys data stores: installs MongoDB (for `users-service`, `master-data-service`, and `documents-service`), PostgreSQL (for `appointments-service`), and MinIO via their respective Helm charts.
- Builds and loads Docker images: builds each service image locally and loads it into Minikube's image registry.
- Deploys application services: installs each microservice via Helm and applies the Gateway API routes.
The deployment takes several minutes. The script waits for each component to become ready before proceeding to the next step.
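The build-and-load step can be sketched as follows. The image naming scheme and service paths are assumptions for illustration, not the script's actual conventions:

```sh
# Sketch of the build-and-load step (image tag scheme and paths are
# assumptions, not the deploy script's actual conventions).
build_and_load() {
  for dir in "$@"; do
    image="nucleo/${dir##*/}:latest"  # hypothetical image tag scheme
    docker build -t "$image" "$dir"   # build locally...
    minikube image load "$image"      # ...then copy into Minikube's registry
  done
}
```

Loading images into Minikube directly avoids pushing to an external registry, which is why the deployment pods can use `imagePullPolicy` settings that never reach the network.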
Once the script completes, expose the gateway locally with:
```sh
kubectl -n default port-forward service/gateway-api-controller-traefik 3000:80
```

The application will be accessible at http://localhost:3000.
Stopping the cluster
To remove all deployed resources while preserving database volumes:
```sh
./scripts/undeploy.sh
```

To perform a full cleanup including all persistent volume claims (required for a clean redeploy):

```sh
./scripts/undeploy.sh --purge-pv
```

Note: without `--purge-pv`, all persistent volume claims are preserved across redeployments. This means each data store will reuse its existing data on the next deploy.
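The effect of `--purge-pv` can be approximated with `kubectl` directly. This is a sketch, not the script's exact commands, and it assumes everything lives in the default namespace:

```sh
# Approximate the --purge-pv cleanup by hand (sketch; the script's exact
# selectors may differ). Deletes every PVC in the given namespace.
purge_pvcs() {
  kubectl delete pvc --all -n "${1:-default}"
}
```

Deleting the claims releases the bound volumes, so the next deploy starts each data store from an empty, freshly seeded state.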