Create and Manage a Compose Project

Open an existing docker-compose.yml, start and stop services, and view aggregated logs.

Prerequisites

  • Docker is running
  • Docker Compose is installed (included with Docker Desktop)
  • A docker-compose.yml file exists on your filesystem

Scenario 1: Open an Existing Project

  1. Click Compose in the sidebar
  2. Click the Open button (folder icon) in the toolbar
  3. Browse to the directory containing your docker-compose.yml
  4. Select the file and click Open
  5. Zenithal parses the file and displays:
    • Project name (derived from the directory name)
    • List of services with their images and status
    • Network and volume definitions

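Zenithal's internals aren't documented here, but the project-name rule in step 5 mirrors Compose's own default: the name of the directory containing the compose file, lowercased. A minimal sketch (the function name is illustrative, not Zenithal's API):

```python
from pathlib import Path

def project_name(compose_path: str) -> str:
    # Compose derives the default project name from the directory
    # containing the compose file, lowercased.
    return Path(compose_path).resolve().parent.name.lower()

print(project_name("/home/me/Projects/web-db-scenario/docker-compose.yml"))
# → web-db-scenario
```
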
Scenario 2: Start and Stop Services

  1. With a Compose project open, click Start All in the toolbar
  2. All services start in dependency order (services with depends_on start after their dependencies)
  3. Service status indicators update in real time:
    • Green = Running
    • Yellow = Starting
    • Red = Exited / Error
  4. To stop all services, click Stop All
  5. To start/stop individual services, right-click a service and select Start or Stop

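Dependency-ordered startup (step 2) is, at heart, a topological sort over the depends_on graph. This is not Zenithal's actual implementation, just a sketch of the idea using the standard library:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# depends_on graph for the example project below: web depends on db.
depends_on = {
    "web": {"db"},
    "db": set(),
}

# TopologicalSorter yields dependencies before their dependents,
# which is the order a Compose runtime starts services in.
start_order = list(TopologicalSorter(depends_on).static_order())
print(start_order)  # → ['db', 'web']
```

Stopping services uses the reverse of this order, so dependents shut down before their dependencies.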
Scenario 3: View Service Logs

  1. With services running, click a service name to view its logs
  2. Logs stream in real time, similar to the container log viewer
  3. Use the Aggregated Logs toggle to view logs from all services combined with color-coded service names
  4. Filter by service name or search within logs

Example Project

Create a directory ~/Projects/web-db-scenario/ containing the files below. It is a two-container setup: a Flask server exposing a simple task manager API, backed by a PostgreSQL 16 database.

docker-compose.yml

version: "3.9"

services:
  web:
    build: ./web
    ports:
      - "8080:5000"
    environment:
      DATABASE_URL: postgresql://appuser:apppass@db:5432/taskdb
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: taskdb
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: apppass
    ports:
      - "5433:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d taskdb"]
      interval: 5s
      timeout: 3s
      retries: 5

volumes:
  pgdata:

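With `condition: service_healthy`, Compose starts `web` only after the db healthcheck passes. The retry semantics of `interval`/`retries` can be approximated in Python like this (a simplification: real healthchecks also enforce `timeout` per probe and support `start_period`):

```python
import time

def wait_healthy(probe, interval=5.0, timeout=3.0, retries=5):
    """Approximate Compose healthcheck semantics: run `probe` up to
    `retries` times, `interval` seconds apart. The `timeout` parameter
    is accepted for parity with the compose file, but per-probe timeout
    enforcement is left to the probe itself in this sketch."""
    for attempt in range(retries):
        if probe():
            return True
        if attempt < retries - 1:
            time.sleep(interval)
    return False

# Simulate pg_isready failing twice, then succeeding on the third try.
attempts = iter([False, False, True])
print(wait_healthy(lambda: next(attempts), interval=0))  # → True
```
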
db/init.sql

This seed script creates the tasks table and inserts sample data:

CREATE TABLE IF NOT EXISTS tasks (
    id         SERIAL PRIMARY KEY,
    title      TEXT NOT NULL,
    done       BOOLEAN NOT NULL DEFAULT FALSE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Seed a few example rows
INSERT INTO tasks (title) VALUES
    ('Buy groceries'),
    ('Read Docker docs'),
    ('Set up CI pipeline');

web/Dockerfile

FROM python:3.12-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

EXPOSE 5000

CMD ["gunicorn", "-b", "0.0.0.0:5000", "-w", "2", "app:app"]

web/requirements.txt

flask==3.1.0
psycopg2-binary==2.9.10
gunicorn==23.0.0

web/app.py

"""Simple task manager API — demonstrates web server + database interaction."""

import os

import psycopg2
import psycopg2.extras
from flask import Flask, jsonify, request

app = Flask(__name__)
DATABASE_URL = os.environ["DATABASE_URL"]


def get_db():
    conn = psycopg2.connect(DATABASE_URL)
    conn.autocommit = True
    return conn


@app.route("/tasks", methods=["GET"])
def list_tasks():
    with get_db() as conn:
        with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
            cur.execute("SELECT * FROM tasks ORDER BY created_at DESC")
            tasks = cur.fetchall()
    return jsonify([{**t, "created_at": t["created_at"].isoformat()} for t in tasks])


@app.route("/tasks", methods=["POST"])
def create_task():
    body = request.get_json(force=True, silent=True) or {}
    title = (body.get("title") or "").strip()
    if not title:
        return jsonify({"error": "title is required"}), 400

    with get_db() as conn:
        with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
            cur.execute(
                "INSERT INTO tasks (title) VALUES (%s) RETURNING *",
                (title,),
            )
            task = cur.fetchone()
    return jsonify({**task, "created_at": task["created_at"].isoformat()}), 201


@app.route("/health")
def health():
    try:
        with get_db() as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
        return jsonify({"status": "ok", "db": "connected"})
    except Exception as e:
        return jsonify({"status": "error", "db": str(e)}), 503


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
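
Once the stack is running, you can exercise the API on the host port mapped in docker-compose.yml (8080). A small client sketch using only the standard library (the `create_task` helper is illustrative, not part of the app):

```python
import json
import urllib.request

BASE = "http://localhost:8080"  # host port mapped in docker-compose.yml

def create_task(title: str, base: str = BASE) -> urllib.request.Request:
    # Build the POST request the /tasks endpoint expects.
    data = json.dumps({"title": title}).encode()
    return urllib.request.Request(
        f"{base}/tasks",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the stack running:
#   urllib.request.urlopen(create_task("Try Zenithal"))
#   urllib.request.urlopen(f"{BASE}/tasks")   # list tasks
#   urllib.request.urlopen(f"{BASE}/health")  # check DB connectivity
```
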

What You'll See

  • The Compose project view shows a service list with status badges
  • Starting services shows a progress indicator for image pulls (if images aren't cached)
  • The service graph visualizes dependencies between services
  • Aggregated logs use colored prefixes to distinguish services

Tips

  • Zenithal watches the docker-compose.yml file for changes — if you edit it externally, the project refreshes automatically
  • Use standard Compose (v3) syntax in docker-compose.yml — Zenithal supports the standard Compose file format
  • Environment variables with ${} syntax are resolved from .env files in the same directory
  • If a service fails to start, check its logs for error details — common issues include port conflicts and missing images
  • YAML values containing special characters are automatically quoted by Zenithal when saving
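
The ${} resolution mentioned above can be sketched as a two-step process: parse the .env file, then substitute into the compose text. This is a minimal illustration, not Compose's full interpolation engine (which also supports quoting, escapes, and defaults like ${VAR:-fallback}):

```python
import re

def load_env(path=".env"):
    """Minimal .env parser: KEY=VALUE lines, '#' comments and blank
    lines ignored. Returns {} if the file does not exist."""
    env = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    env[key.strip()] = value.strip()
    except FileNotFoundError:
        pass
    return env

def substitute(text, env):
    # Replace ${VAR} with its value; unset variables become empty
    # strings, matching Compose's default (Compose also warns).
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), text)

env = {**load_env(), "POSTGRES_PASSWORD": "apppass"}
print(substitute("POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}", env))
# → POSTGRES_PASSWORD: apppass
```
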

Related Tutorials