Compare commits

...

80 Commits

Author SHA1 Message Date
kodi 94a2f4586a fix: cpu/mem container view 2026-03-27 18:23:16 +01:00
kodi 7d2f19f81f fix (containers): use PODMAN_SYSTEMD_UNIT label as ground truth for Managed By
The old logic missed .kube quadlets entirely: it only looked for .container
files by name and fell back to a fragile pod-name heuristic. Containers
started via mediaserver.kube and bookstack.kube were therefore classified
as 'podman' even though they are systemd-managed.

The PODMAN_SYSTEMD_UNIT label is set automatically by Podman/systemd on
every container started via a quadlet (.container, .kube, .pod). It is
the only reliable source.
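The label-based classification can be sketched as follows; the function name and the inspect-style dict shape are illustrative, not the actual backend code:

```python
def managed_by(container: dict) -> str:
    """Classify a container as systemd- or podman-managed.

    PODMAN_SYSTEMD_UNIT is set automatically on every container started
    via a quadlet (.container, .kube, .pod), so its presence is the single
    reliable signal. The dict shape follows Libpod inspect Config.Labels;
    names here are assumptions for illustration.
    """
    labels = container.get("Labels") or {}
    unit = labels.get("PODMAN_SYSTEMD_UNIT")
    return f"systemd ({unit})" if unit else "podman"

# A quadlet-started container carries the label; an ad-hoc one does not.
print(managed_by({"Labels": {"PODMAN_SYSTEMD_UNIT": "mediaserver.service"}}))  # systemd (mediaserver.service)
print(managed_by({"Labels": {}}))  # podman
```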

Removed: _unit_is_active(), unit_active_cache, the _map_pod_to_unit import.
Kept: find_defined_containers() for section C (offline containers) and
action routing (start/stop/restart via systemd unit).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-24 11:14:50 +01:00
kodi fba9b59445 docs: update documentation for app_volumes and tabs tree view
- CLAUDE.md: app_volumes.py in the module table, frontend tabs list, py_compile and smoke tests
- ARCHITECTURE.md: app_volumes.py in feature routers, py_compile and smoke tests
- API_GOLDEN.md: volumes endpoints documented (GET/POST/DELETE/prune/exists)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:26:08 +01:00
kodi 2dfe53895b feat (ui): compact IDE-style sidebar tree view for the Files tab
- Replace card-style folder rows with compact, flat rows (2px padding, no border)
- Remove badge counters (📁 N, 📄 N) from folder rows
- Add .btn.tiny for small action buttons (+/✕) in the tree
- All folders collapsed by default; localStorage preserves the expanded state
- file-entry hover highlight; remove the per-row bottom border

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 18:19:04 +01:00
kodi 5f6719464d fix (ui/volumes): restore container coupling via the inspect endpoint
containers-dashboard returns Mounts as strings (destination paths).
Full mount info (Type + Name) is only available in /containers/inspect/{name}.

Fix: for containers with non-empty Mounts, fetch inspect in parallel,
then filter on Type === "volume" for the named-volume coupling.

Tested: postgresdb_data → postgres-db, n8n_data → n8n.
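The filtering step lives in volumes.js, but the logic can be sketched in Python against the Libpod inspect Mounts shape (keys "Type", "Name", "Destination" as documented by Libpod; this is an illustration, not the frontend code):

```python
def named_volumes(inspect_mounts: list[dict]) -> dict[str, str]:
    """Map named volumes to their mount destinations.

    The dashboard endpoint only returns destination paths as strings;
    full mount info (Type + Name) only appears in the inspect payload,
    so bind mounts must be filtered out by Type.
    """
    return {
        m["Name"]: m.get("Destination", "")
        for m in inspect_mounts
        if m.get("Type") == "volume" and m.get("Name")
    }

mounts = [
    {"Type": "volume", "Name": "n8n_data", "Destination": "/home/node"},
    {"Type": "bind", "Source": "/etc/localtime", "Destination": "/etc/localtime"},
]
print(named_volumes(mounts))  # {'n8n_data': '/home/node'}
```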

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 17:51:37 +01:00
kodi 249d24721c feat (ui): add Volumes tab to the webui
New tab after Images with:
- Table: Name, Driver, Mountpoint (truncated + tooltip), Created
  (relative time), Labels (pills), Containers (pills via Mounts coupling)
- Toolbar: Refresh, + Volume, Prune (with confirmation dialog)
- Per-row Delete button (disabled when the volume is in use)
- Create Volume modal: name (required) + labels (key=value per line)
- Empty state via renderStateBox

volumes.js: _volEsc() for XSS-safe rendering, encodeURIComponent
for onclick handlers, parallel fetch of volumes + containers-dashboard
for the container coupling via Mounts[].Name.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 17:34:29 +01:00
kodi f8bbb783b0 feat (volumes): add volumes router to the backend
New file control/app_volumes.py with Libpod volume operations:
- GET  /volumes          — list all volumes (optional ?filters=key=value)
- POST /volumes          — create a volume (name, driver, labels, driverOpts)
- GET  /volumes/{name}   — details of a single volume
- GET  /volumes/{name}/exists — existence check (204 → true, 404 → false)
- DELETE /volumes/{name} — remove a volume (?force=true optional)
- POST /volumes/prune    — ⚠️ removes all unused volumes

Filters: the key=value format is automatically converted to the
{"key":["value"]} JSON the Libpod API expects.

Containerfile: COPY app_volumes.py added.
app.py: init_volumes_router registered.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 13:25:58 +01:00
kodi 4404c02967 docs: update AGENTS/SAFE_FILES/rationale after D-Bus removal
- AGENTS.md: run command updated (remove the broad /run/user/1000
  mount and DBUS_SESSION_BUS_ADDRESS); note that D-Bus is no longer required
- SAFE_FILES.md: remove DBUS_SESSION_BUS_ADDRESS; describe the
  concrete mounts (Podman socket + helper directory)
- podman-helper-rationale.md: daemon-reload section updated; it now
  goes through the helper instead of D-Bus; summary table corrected

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 12:28:22 +01:00
kodi bae6fd8b9f docs (CLAUDE.md): document health check behaviour and helper architecture
Describe that systemd_user.reachable is derived from helper.ok,
that the container itself makes no D-Bus/systemctl calls, and that
all systemctl actions (incl. daemon-reload) go through the helper socket.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 11:54:43 +01:00
kodi ed94ee31f4 feat (helper): daemon-reload via helper; remove D-Bus dependency
- podman-helper: add daemon-reload to ALLOWED_ACTIONS; actions in
  NO_UNIT_ACTIONS skip unit validation and build the cmd without a
  unit argument
- app_system: the /daemon-reload endpoint now uses _helper_call
  instead of a direct subprocess; remove the subprocess import
- app_system: the health check derives systemd_reachable from helper_ok
  instead of systemctl --user list-units; the helper runs as the
  host user, so its reachability implies systemd reachability
- CLAUDE.md: remove the DBUS_SESSION_BUS_ADDRESS env var; the D-Bus
  mount is no longer needed

Deploy: copy podman-helper.py to the host, daemon-reload, restart the
helper, rebuild the backend image, restart the container without the bus mount.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 11:39:10 +01:00
kodi 5196e7840f fix (helper): move socket to dedicated subdirectory /run/podman-mvp/
Replaces the file bind mount with a directory mount to fix a stale-inode
problem: with file bind mounts Podman binds the inode at run time;
if podman-helper stops and removes the socket, the container still
points at the deleted inode. A directory mount always resolves to the
current directory contents, including new inodes.

Changes:
- podman-helper.py: SOCKET_PATH → XDG_RUNTIME_DIR/podman-mvp/podman-helper.sock
- common.py: HELPER_SOCKET → /run/podman-mvp/podman-helper.sock
- CLAUDE.md: the run command uses -v /run/user/1000/podman-mvp:/run/podman-mvp

Deploy: copy podman-helper.py to the host, daemon-reload, restart the helper,
rebuild the backend image, restart the container with the new mount.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 09:50:31 +01:00
kodi a05d79ae2c fix (helper): remove ExecStopPost socket cleanup
ExecStopPost=-/bin/rm -f ${XDG_RUNTIME_DIR}/podman-helper.sock removed
the socket file on stop. On restart a new inode was therefore created
while the container's bind mount still held the old inode (stale mount).
Result: the health check and _helper_call failed after a restart even
though the helper was running.

The cleanup is redundant: podman-helper.py calls os.unlink() at startup
(line 153) and on shutdown via a finally block (line 178).
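The startup-unlink plus finally-block pattern can be sketched like this; it is a minimal illustration of the lifecycle, not the actual podman-helper code:

```python
import os
import socket

def serve_once(path: str) -> None:
    """Own the socket path for the process lifetime: remove a stale
    file at startup and clean up in a finally block, so no external
    ExecStopPost cleanup is needed."""
    try:
        os.unlink(path)  # remove a stale socket left by a crash
    except FileNotFoundError:
        pass
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        srv.bind(path)
        srv.listen(1)
        # ... accept/handle loop would run here ...
    finally:
        srv.close()
        try:
            os.unlink(path)  # same inode we created, so safe to remove
        except FileNotFoundError:
            pass
```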

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 08:31:38 +01:00
kodi 5e7d1b887c feat (health): add helper socket check, three visual states
Backend (/api/health):
- Import HELPER_SOCKET from common.py
- Add helper check: connect() on /run/podman-helper.sock, timeout=2s
- ok stays true when only the helper is missing (warning, not an error)
- New response key: "helper": {"ok": bool}
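The helper probe described above can be sketched as a bare connect() with a timeout; this is an illustration of the check, not the literal backend code:

```python
import socket

def helper_ok(socket_path: str, timeout: float = 2.0) -> bool:
    """Probe the helper's Unix socket; a successful connect() means the
    helper process is alive and listening."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(socket_path)
        return True
    except OSError:
        return False
    finally:
        s.close()
```

The health endpoint would then merge this into its response as `"helper": {"ok": helper_ok(HELPER_SOCKET)}` without failing the overall check.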

Frontend (pingApi / setApiState):
- pingApi() now calls /api/health instead of /pods-dashboard
- setApiState(state, msg) accepts 'ok' / 'warn' / 'error'
- Yellow dot with the --warn colour when helper.ok=false but the core is OK
- refreshActive() delegates the status update to pingApi()
- Detail message on failure: shows which component (podman/systemd) is failing

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 08:06:38 +01:00
kodi e469508570 feat (docs): add Swagger UI at /docs, bundled locally
- Swagger UI v5.32.1 local in assets/swagger-ui/ (no CDN, usable offline)
- webui/html/docs/index.html: custom page that loads /api/openapi.json
  with a requestInterceptor so Try it out works same-origin
- Link added to the dashboard "Quick actions": API docs ↗ (opens in a new tab)
- Docstrings added to destructive endpoints (app_containers, app_images):
  container stop/restart, image remove (batch + single), and image prune
  now show ⚠️ warnings in the Swagger UI description
- Backend rebuild needed for the docstrings to appear in the spec

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-23 07:30:51 +01:00
kodi c338955320 refactor (networks): rewrite networks_usage, bundle D3 locally
- Remove the phase-3 hex-ID fallback (~160 lines): NetworkSettings.Networks
  from container inspect is the ground truth, not network inspect + scanning
- Filter infra containers via the IsInfra flag + name regex ^[0-9a-f]+-infra$
- Add IP and aliases to byNetwork container entries (via inspect)
- Bridge containers always get an inspect call for IP/aliases;
  pasta/host/none containers are skipped
- D3 v7.9.0 bundled locally (assets/js/d3.min.js, CDN dependency removed)
- New webui/Containerfile for reproducible webui image builds

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 16:28:39 +01:00
kodi f016c2bae0 perf: stats poll via lightweight /api/stats instead of /containers-dashboard
The frontend fetched CPU/mem stats via the heavy /containers-dashboard
endpoint (Podman call + os.walk + systemctl subprocesses per container).
The stats poll now goes through a new /api/stats endpoint that only
returns the existing in-memory cache (<5ms vs ~400ms).

- app_containers.py: /api/stats endpoint added (direct cache return)
- app_containers.py: _STATS_SHOWN_NAMES maintained per dashboard call
  (filters out infra/management containers based on _dashboard_source)
- containers.js: pollContainersDashboardStatsOnce() uses /api/stats
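The cache-return pattern can be sketched like this; the cache shape, field names, and sample values are assumptions for illustration, not the actual app_containers.py code:

```python
import time

# In-memory state the stats poller and dashboard would maintain.
_STATS_CACHE: dict[str, dict] = {}
_STATS_SHOWN_NAMES: set[str] = set()

def api_stats() -> dict:
    """Return the cached stats directly: no Podman call, no os.walk,
    no systemctl subprocesses. Filtered to the names the dashboard
    actually shows (infra/management containers excluded)."""
    return {
        "ts": time.time(),
        "stats": {
            name: s for name, s in _STATS_CACHE.items()
            if name in _STATS_SHOWN_NAMES
        },
    }

_STATS_CACHE["n8n"] = {"cpu": 1.2, "mem": 150}
_STATS_CACHE["mvp-pod-infra"] = {"cpu": 0.0, "mem": 2}
_STATS_SHOWN_NAMES.add("n8n")
print(api_stats()["stats"])  # {'n8n': {'cpu': 1.2, 'mem': 150}}
```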

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 15:09:53 +01:00
kodi e922cea167 added helper 2026-03-22 14:39:38 +01:00
kodi 7d2c205930 feat: systemd unit actions via podman-helper Unix socket
start/stop/restart of systemd units now go through the host helper
(/run/podman-helper.sock) instead of a direct systemctl subprocess
from the container. This bypasses the user-namespace isolation that
makes D-Bus calls from the container unreliable.

- common.py: _helper_call(action, unit) added
- app_system.py: the /{action}/{unit} route uses the helper for start/stop/restart
- app_containers.py: container_action() uses the helper
- daemon-reload and is-active remain subprocess calls (read-only, already working)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 11:24:25 +01:00
kodi 580c301718 refactor: move run() duplicate to common.py
run() existed identically in app.py and app_system.py. Moved to
common.py as the single source of truth; both modules now import
the central version.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 10:26:34 +01:00
kodi 1c61854143 fix: remove dead Flask-style legacy route in app_system.py
The route @router.post("/api/<action>/<unit>") used Flask syntax
that never matches in FastAPI. Dead code removed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 10:22:25 +01:00
kodi bacab3b20a fix (security): close path traversal in legacy /workloads/ endpoints
Three endpoints used os.path.join without validation, allowing an
attacker to read and write outside WORKLOADS_DIR. Replaced with the
existing _files_safe_join() helper already used by all /files/
endpoints.

Endpoints: /workloads/read/, /workloads/save-file, /workloads/deploy/

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 09:52:27 +01:00
kodi 2c5cb07cdb feat (ui): exec phase 5 2026-03-06 19:55:31 +01:00
kodi 3da82255ff feat (ui): exec phase 4 2026-03-06 18:38:58 +01:00
kodi a4099867a5 feat (ui): exec phase 3 2026-03-06 18:13:55 +01:00
kodi 39a33e5711 feat (ui): exec phase 2 2026-03-06 17:25:11 +01:00
kodi 92e0e04905 feat (ui): exec phase 1 2026-03-04 16:39:41 +01:00
kodi d96fc19f41 feat (backend): exec phase 1 2026-03-04 16:33:24 +01:00
kodi 8045fdc869 feat (ui): show the number of networks on the networks button in the left menu 2026-03-04 15:30:48 +01:00
kodi a1609c8ea7 Clean up app.css: steps 1 & 2 complete
Step 1: remove unused tokens and selectors
- Unused tokens: --badge-green-*, --badge-yellow-*
- Unused selectors: .pill, .data-table, .badge-green/yellow
- Result: -38 lines, lower maintenance burden.

Step 2: consolidate and clean up the cascade
- .mapLegend fully consolidated into one definitive set.
- Overlapping file-row fallback removed.
- Result: -68 lines, cleaner cascade.
2026-03-04 08:13:45 +01:00
kodi 6bf30db62c feat (ui): light/dark theme, complete 2026-03-04 07:48:58 +01:00
kodi ebefd2d80c feat (ui): light/dark theme, part 2 2026-03-04 07:29:43 +01:00
kodi 1d5bdd5089 feat (ui): light/dark theme added 2026-03-03 15:17:52 +01:00
kodi 3a80ba09af feat(ui): make Containerfile support visible in the build UI
- Label changed to 'Dockerfile/Containerfile'
- Picker title changed to 'Choose Dockerfile/Containerfile'
- Default value of the buildDockerfile input cleared
- Validation message changed to 'Dockerfile/Containerfile'

No backend or API changes; dockerfile remains the leading field.
2026-03-01 11:09:18 +01:00
kodi 417d08b162 Fix: prevent pods from .container workloads
/api/pods-dashboard incorrectly generated 'pod<basename>' entries for .container Quadlet files, leading to empty fake pods such as 'podn8n' in the WebUI.

Only real pod workloads (.pod, possibly .kube) may still produce a Source:"systemd" pod row.

No endpoint or schema changes. Only the filtering in control/app_pods.py was adjusted.
2026-03-01 08:41:20 +01:00
kodi 7d84733b17 refactor: completed 2026-02-28 15:51:58 +01:00
kodi df2a577402 refactor(api): move system endpoints into app_system router 2026-02-28 13:00:20 +01:00
kodi 1226b0654e refactor(api): move /test-hybrid into app_system router 2026-02-28 11:58:55 +01:00
kodi e61f2ccf76 refactor(api): move /health into app_system router 2026-02-28 10:51:22 +01:00
kodi 492edc2ec0 refactor(api): remove DI callables, routers import common directly 2026-02-28 10:02:42 +01:00
kodi 61b2748854 refactor(api): introduce shared common helpers (mechanical extract) 2026-02-28 09:14:35 +01:00
kodi a8d62fa340 chore(api): remove unused imports and BASE_DIR from app.py 2026-02-28 07:41:10 +01:00
kodi 278d31b68c refactor(api): move containers endpoints and stats poller into app_containers router 2026-02-27 16:01:15 +01:00
kodi efd4fe46d7 refactor(api): move pods endpoints into app_pods router 2026-02-27 15:02:53 +01:00
kodi cab706deb2 refactor(api): move networks endpoints into app_networks router 2026-02-27 14:23:43 +01:00
kodi 3d516c96e4 refactor(api): move files/workloads endpoints into app_files router 2026-02-27 13:55:09 +01:00
kodi 65395cf7e8 chore(api): remove legacy systemd allowlist 2026-02-27 12:39:34 +01:00
kodi b21d2cb2ac refactor (ui): 01 2026-02-25 18:17:08 +01:00
kodi 8e4e0067ff feat(api): add cached container cpu/mem fields on containers-dashboard 2026-02-25 14:42:03 +01:00
kodi 658e41cfba feat(dashboard): add cached cpu/mem stats fields to containers-dashboard 2026-02-25 14:10:49 +01:00
kodi b89a31a068 feat(api): Codex: add /health endpoint with podman + systemd checks 2026-02-25 13:16:53 +01:00
kodi ebb6d755a0 feat (ui): polish networks and files views 2026-02-25 10:07:35 +01:00
kodi ec13059437 feat (ui): extend and polish network map functionality 2026-02-24 12:37:17 +01:00
kodi 289d222707 feat (ui): extend network map with per-network detail info 2026-02-23 16:27:07 +01:00
kodi 001b745e2f feat (ui): add auto-layout and reset-view button functionality 2026-02-23 14:04:29 +01:00
kodi 0337f1438f feat(gui): networks tab graphical network layout - 02 2026-02-22 19:59:45 +01:00
kodi 18ee367e1d feat(gui): networks tab graphical network layout 2026-02-22 18:24:57 +01:00
kodi e4214858ac Networks UI refactor: shared netns badge moved to the Flags column
- The Containers column now shows only the numeric container count
- Shared network namespace is determined via an explicit isShared check
- The 'shared' badge moved from the Containers column to the Flags column
- Earlier alignment experiments and CSS overrides cleaned up
- Clear separation between metric (count) and status (shared netns)

Result: semantically more correct table, more stable layout, better readability.
2026-02-22 13:51:56 +01:00
kodi cffb5e94a2 refactor(ui): WebUI: move network logic out of index.html into networks.js 2026-02-22 07:48:39 +01:00
kodi 597388055c feat(ui): add sorting to the images tab 2026-02-21 14:34:55 +01:00
kodi d28633a22d feat(ui): extend images tab with Dockerfile selection and tag suggestion 2026-02-21 14:09:13 +01:00
kodi 815d16f872 feat(ui): add images tab 2026-02-21 12:33:10 +01:00
kodi 1ed7699437 feat(backend): add image endpoints 2026-02-21 12:04:21 +01:00
kodi acbf150e28 feat(ui): add network overview 2026-02-21 10:28:20 +01:00
kodi 5d5fdab122 feat(backend): add network endpoints 2026-02-21 07:36:32 +01:00
kodi 881382602b refactor(ui)!: remove pods/systemd tab and merge its functionality into containers 2026-02-20 15:46:25 +01:00
kodi a7e32d08f0 refactor(ui): remove pods tab (pod actions remain available via containers) 2026-02-20 13:30:53 +01:00
kodi b8ba0f08dc refactor(webui): introduce assets structure and external stylesheet
- CSS moved to assets/css/app.css
- Logo and favicon moved to assets/img and assets/icons
- index.html references the new paths
2026-02-20 12:05:28 +01:00
kodi 9a7321834c feat(files-dashboard): collapsible/expandable folders 2026-02-20 11:13:24 +01:00
kodi 7402c20791 feat(ui): move menu bar to the left side of the GUI 2026-02-20 10:31:52 +01:00
kodi c1f8e8335b feat(ui): replace start/restart/stop buttons with a 3-dot dropdown in containers and pods 2026-02-20 09:05:08 +01:00
kodi d18d0c0f77 feat(containers): show total CPU/MEM per pod and distinguish inactive vs stats paused 2026-02-20 08:06:46 +01:00
kodi 427d7b47a1 feat(containers): reset pod CPU/MEM totals when the stats stream stops or errors 2026-02-19 16:41:44 +01:00
kodi c81f603ccc feat(containers-dashboard): extend pod header with CPU and MEM totals 2026-02-19 15:46:57 +01:00
kodi 3b586fe86d feat(containers-dashboard): move port mappings of pod containers to the pod header row 2026-02-19 15:14:44 +01:00
kodi 98fc50c1d5 bugfix(containers-dashboard): pod remained in the container overview after stopping 2026-02-19 14:36:09 +01:00
kodi 4753dcb6d4 feat(containers-dashboard): add pod start/restart/stop buttons and favicon 2026-02-19 12:45:25 +01:00
kodi 35e5682b91 feat(containers-dashboard): add container grouping 2026-02-18 15:46:31 +01:00
kodi eecf4ad9f2 bugfix(containers-dashboard): containers in a systemctl-managed pod are now also shown as systemctl-managed 2026-02-18 14:49:07 +01:00
kodi 10400846d2 bugfix(containers-dashboard): managed column: systemctl containers are now shown as systemctl-managed instead of podman (POD containers not yet) 2026-02-18 14:31:03 +01:00
kodi 2a08ad6989 feat(containers-dashboard): add host_ip to published ports and fix port display 2026-02-18 11:41:48 +01:00
38 changed files with 8504 additions and 1540 deletions
+215
@@ -0,0 +1,215 @@
# AGENTS.md — podman-mvp (WebUI + API)
## Goal
Primary goal: extend functionality and evolve the platform.
Feature development is the default workflow.
Refactoring is allowed when:
- it improves maintainability, OR
- it is required to implement a feature.
Refactoring must always:
- be proposed first,
- remain backward compatible,
- not change existing behaviour or API contracts without agreement.
---
## Repository structure
Backend (FastAPI modular monolith)
Bootstrap
- control/app.py (application bootstrap & router wiring ONLY)
System / platform router
- control/app_system.py
Feature routers
- control/app_containers.py
- control/app_pods.py
- control/app_networks.py
- control/app_files.py
- control/app_images.py
Shared infrastructure
- control/common.py
WebUI (static Apache)
- webui/html/index.html
- webui/html/assets/js/tabs/
- webui/conf/httpd.conf
### Backend architecture rule (HARD)
control/app.py is a bootstrap layer only.
It may:
- create FastAPI app
- include routers
- register startup events
It must NOT:
- contain feature endpoints
- contain system logic
- contain Podman or systemctl implementations
All endpoints belong in routers.
### Module ownership rule
If unsure where new logic belongs:
- shared logic → control/common.py
- system/platform logic → control/app_system.py
- feature logic → corresponding app_<feature>.py router
Never introduce new endpoints in control/app.py.
---
## Runtime architecture (IMPORTANT)
Application runs inside a Podman pod.
Pod created with:
podman pod create \
--name mvp-pod \
-p 8080:8000 \
-p 8081:8081 \
--userns=keep-id
### Backend container
Runs FastAPI control API with Podman access.
Created with:
podman run -d --pod mvp-pod \
--name mvp-backend \
--ipc=host \
--pid=host \
-e XDG_RUNTIME_DIR=/run/user/1000 \
-v /run/user/1000/podman/podman.sock:/run/user/1000/podman/podman.sock:rw \
-v /run/user/1000/podman-mvp:/run/podman-mvp \
-v /home/kodi/.config/containers:/app/workloads:rw \
mvp-control:latest
Important notes:
- Backend communicates with Podman through unix socket.
- User-session Podman is used (not root).
- D-Bus is NOT required; all systemctl actions go through podman-helper.
- Host PID/IPC namespaces are intentional.
Do NOT change these assumptions without proposal.
---
### WebUI container
Static Apache frontend.
podman run -d --pod mvp-pod \
--name mvp-webui \
-v $HOME/.config/podman-mvp/webui/html:/usr/local/apache2/htdocs:ro \
-v $HOME/.config/podman-mvp/webui/conf/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro \
docker.io/library/httpd:2.4
Frontend is static JS calling API through proxy.
---
## Access
WebUI:
http://127.0.0.1:8081/
API (via proxy):
http://127.0.0.1:8081/api/
---
## Testing workflow (REQUIRED)
Always validate changes using curl.
Example:
curl -s http://127.0.0.1:8081/api/...
Before proposing implementation:
1. Analyse existing endpoints.
2. Confirm available data using curl tests.
3. Propose minimal change.
4. Provide verification curl commands.
---
## Contract rules (HARD)
- Never break existing API responses.
- Never rename or remove JSON keys.
- Maintain backward compatibility.
New functionality must be added via:
- new endpoints, OR
- optional response fields.
Security rules:
- No shell=True
- subprocess must be explicit and safe
- Never assume systemd states
Legacy notice:
allow_list / allowed_units.txt functionality has been removed
and must NOT be reintroduced.
---
## Change policy
Preferred workflow:
1. Analyse existing behaviour.
2. Propose small implementation plan.
3. Identify affected files.
4. Provide curl validation tests.
5. Implement after agreement.
Avoid:
- large rewrites
- structural changes without need
- hidden refactors.
All significant changes must follow PR_RULES.md workflow.
---
## UI direction
Target style:
Portainer-like dashboard UI.
Guidelines:
- tables and overview panels
- container status badges
- row-level actions
- minimalistic professional layout
Do NOT introduce large frontend frameworks without agreement.
---
## Coding style
Follow existing structure and conventions of each file.
Do not reformat unrelated code.
Minimize diff size whenever possible.
## Safety boundaries
Follow SAFE_FILES.md before modifying infrastructure or core files.
+84
@@ -0,0 +1,84 @@
# ARCHITECTURE.md — podman-mvp (WebUI + API)
## Purpose
This document describes **where code lives** and **which module owns what**.
It is a factual map of the system for reviewers and agents.
## Runtime + Test Base URLs
### WebUI
- http://127.0.0.1:8081/
### API (via WebUI reverse proxy)
- Base URL: http://127.0.0.1:8081/api
**All verification commands must target `127.0.0.1:8081` unless explicitly stated otherwise.**
Example:
- `curl -fsS http://127.0.0.1:8081/api/health`
## Architecture Overview (Modular Monolith)
Single deployable backend service, split into modules (routers) by domain.
### Layers
1. **Bootstrap / Composition Root**
- `control/app.py`
- Responsibilities:
- create FastAPI app
- include routers
- register startup events (if needed)
- Hard rule: **no feature endpoints** and **no system logic** here.
2. **System / Platform Router**
- `control/app_system.py`
- Owns platform endpoints such as:
- `/health`
- `/daemon-reload`
- systemctl endpoints (`/{action}/{unit}`, legacy `/api/<action>/<unit>`)
- diagnostic endpoints (e.g. `/test-hybrid`, if present)
- Route ordering rule: broad patterns like `/{action}/{unit}` must be defined **last**.
3. **Feature Routers (UI Tabs / Domains)**
- `control/app_containers.py` — containers tab endpoints (dashboard, inspect, logs, stats stream)
- `control/app_pods.py` — pods tab endpoints (dashboard, pod actions)
- `control/app_networks.py` — networks tab endpoints
- `control/app_files.py` — files/workloads endpoints (tree/read/save/etc.)
- `control/app_images.py` — images endpoints
- `control/app_volumes.py` — volumes endpoints (list/create/delete/prune/exists)
4. **Shared Infrastructure Layer**
- `control/common.py`
- Owns:
- Podman HTTP helpers (unix-socket requests)
- systemctl/subprocess helpers (when shared)
- shared parsing/normalization utilities
- Hard rule: routers should not duplicate shared helpers.
## Boundaries (Hard Rules)
- `control/app.py` is **bootstrap-only**.
- Endpoints must live in the appropriate router module:
- system/platform → `app_system.py`
- domain feature → `app_<domain>.py`
- Shared helpers belong in `common.py` (not copied into routers).
- Legacy `allow_list` / `allowed_units.txt` functionality is removed and must NOT be reintroduced.
## Contracts
API response shapes are governed by:
- `contracts/API_GOLDEN.md`
No endpoint response keys may be removed or renamed without explicit approval.
## Required Verification (Minimal)
After any change affecting backend routing or shared helpers, run:
```bash
python3 -m py_compile control/app.py control/common.py control/app_system.py \
control/app_containers.py control/app_pods.py control/app_networks.py \
control/app_files.py control/app_images.py control/app_volumes.py
curl -fsS http://127.0.0.1:8081/api/health | jq
curl -fsS http://127.0.0.1:8081/api/pods-dashboard >/dev/null && echo OK
curl -fsS http://127.0.0.1:8081/api/containers-dashboard >/dev/null && echo OK
curl -fsS http://127.0.0.1:8081/api/files/tree >/dev/null && echo OK
curl -fsS http://127.0.0.1:8081/api/volumes >/dev/null && echo OK
curl -fsS http://127.0.0.1:8081/api/networks/meta | jq
+125
@@ -0,0 +1,125 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
podman-mvp is a Portainer-like web dashboard for managing rootless user-session Podman containers. It runs as a two-container Podman pod: a FastAPI backend (`mvp-backend`) that talks to Podman over a Unix socket, and a static Apache frontend (`mvp-webui`) that reverse-proxies `/api/` to the backend.
## Architecture
### Backend — FastAPI modular monolith (`control/`)
| File | Role |
|---|---|
| `app.py` | Bootstrap only — creates FastAPI app, wires routers, no feature logic |
| `common.py` | Shared helpers: Podman HTTP, systemctl, utilities |
| `app_system.py` | System/platform router: `/health`, `/daemon-reload`, systemctl unit actions |
| `app_containers.py` | Containers router: dashboard, inspect, logs, stats stream, exec sessions |
| `app_pods.py` | Pods router: dashboard, pod actions |
| `app_networks.py` | Networks router |
| `app_images.py` | Images router |
| `app_volumes.py` | Volumes router: list, create, delete, prune, exists |
| `app_files.py` | Files/workloads router: tree, read, save |
Backend communicates with Podman through the Unix socket at `/run/user/1000/podman/podman.sock` using `requests_unixsocket`. Podman API base: `http+unix://%2Frun%2Fuser%2F1000%2Fpodman%2Fpodman.sock/v5.4.2`.
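The equivalent call can be shown with only the standard library; this stand-in for `requests_unixsocket` is an illustration of the `/libpod/info` request, not the project's actual helper code:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """Minimal HTTPConnection that dials a Unix socket instead of TCP."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def podman_info(sock_path: str = "/run/user/1000/podman/podman.sock") -> dict:
    """GET /libpod/info over the Podman Unix socket and decode the JSON."""
    conn = UnixHTTPConnection(sock_path)
    conn.request("GET", "/v5.4.2/libpod/info")
    return json.loads(conn.getresponse().read())
```

Calling `podman_info()` against a live rootless Podman socket returns the same payload the backend's health check inspects.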
### Frontend — Static Apache (`webui/`)
- `webui/html/index.html` — single-page app shell
- `webui/html/assets/js/tabs/` — per-tab JavaScript modules (containers, networks, images, volumes, files)
- `webui/conf/httpd.conf` — Apache config, proxies `/api/` to `http://127.0.0.1:8000/api/`
## Build & Deploy
```bash
# Build backend image
podman build -t mvp-control:latest control/
# Create pod
podman pod create --name mvp-pod -p 8080:8000 -p 8081:8081 --userns=keep-id
# Run backend
podman run -d --pod mvp-pod --name mvp-backend \
--ipc=host --pid=host \
-e XDG_RUNTIME_DIR=/run/user/1000 \
-v /run/user/1000/podman/podman.sock:/run/user/1000/podman/podman.sock:rw \
-v /run/user/1000/podman-mvp:/run/podman-mvp \
-v /home/kodi/.config/containers:/app/workloads:rw \
mvp-control:latest
# Run frontend
podman run -d --pod mvp-pod --name mvp-webui \
-v $HOME/.config/podman-mvp/webui/html:/usr/local/apache2/htdocs:ro \
-v $HOME/.config/podman-mvp/webui/conf/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro \
docker.io/library/httpd:2.4
```
## Verification Commands
```bash
# Syntax check all backend modules
python3 -m py_compile control/app.py control/common.py control/app_system.py \
control/app_containers.py control/app_pods.py control/app_networks.py \
control/app_files.py control/app_images.py control/app_volumes.py
# Smoke test key endpoints (all via proxy on :8081)
curl -fsS http://127.0.0.1:8081/api/health | jq
curl -fsS http://127.0.0.1:8081/api/containers-dashboard >/dev/null && echo OK
curl -fsS http://127.0.0.1:8081/api/pods-dashboard >/dev/null && echo OK
curl -fsS http://127.0.0.1:8081/api/files/tree >/dev/null && echo OK
curl -fsS http://127.0.0.1:8081/api/volumes >/dev/null && echo OK
curl -fsS http://127.0.0.1:8081/api/networks/meta | jq
```
All test/verification URLs must target `127.0.0.1:8081` (the proxy), not port 8000 directly.
## Health Check (`/api/health`)
`GET /api/health` returns three sub-results:
| Field | What it measures | Technique |
|---|---|---|
| `podman.ok` | Podman API reachable | HTTP GET `/libpod/info` on the Unix socket |
| `helper.ok` | podman-helper socket reachable | `connect()` on `/run/podman-mvp/podman-helper.sock` |
| `systemd_user.reachable` | Derived from `helper.ok` | Identical; the helper runs as the host user and executes `systemctl --user`, so helper reachability implies systemd reachability |
`ok` (top level) is `true` only if both `podman.ok` and `helper.ok` are true.
The container itself makes **no** `systemctl --user` or D-Bus calls. All systemctl actions (start/stop/restart/daemon-reload) go through the helper socket. D-Bus and `/run/user/1000/bus` are not mounted.
## Hard Rules
### Module placement
- `app.py` is bootstrap-only — no endpoints, no feature logic, no Podman/systemctl calls.
- New system/platform endpoints → `app_system.py`.
- New domain feature endpoints → the corresponding `app_<domain>.py`.
- Shared helpers → `common.py`, never duplicated into routers.
- `allow_list` / `allowed_units.txt` has been removed and must NOT be reintroduced.
- `app_system.py` broad wildcard routes (`/{action}/{unit}`) must be defined **last**.
### API contract (`contracts/API_GOLDEN.md`)
- Never remove or rename existing JSON response keys.
- Never change existing key data types.
- Extend via new optional fields or new endpoints only.
- UI-critical endpoints requiring pre-approval before any change: `/containers-dashboard`, `/pods-dashboard`, `/images`, `/networks/meta`.
### Security
- No `shell=True` in subprocess calls.
- All subprocess commands must be explicit lists.
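A hypothetical illustration of the list-only rule (`demo.service` is an example name, and `echo` stands in for the real binary so the snippet runs anywhere):

```python
import subprocess

# Build the command as an explicit argv list; the unit name is passed as a
# single argument and is never interpreted by a shell (no shell=True).
unit = "demo.service"
cmd = ["systemctl", "--user", "restart", unit]

# Illustration only: prefix with echo instead of actually invoking systemctl.
result = subprocess.run(["echo", *cmd], capture_output=True, text=True, check=False)
print(result.stdout.strip())  # → systemctl --user restart demo.service
```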
### Infrastructure (propose before changing)
- Pod name, port mappings, `userns=keep-id`.
- DBus/XDG_RUNTIME_DIR mounts, Podman socket path, host PID/IPC namespaces.
- `control/Containerfile`, `webui/conf/httpd.conf`.
## Change Workflow
For non-trivial changes, follow PR_RULES.md:
1. Analyse existing behaviour with curl.
2. Propose minimal plan identifying affected files.
3. Confirm API contract safety.
4. Provide curl validation commands showing expected output change.
5. Implement after agreement.
Minimize diff size. Do not reformat unrelated code. No large rewrites or hidden refactors.
@@ -0,0 +1,10 @@
# Core runtime / infra: always be extra careful
^control/Dockerfile$ @kodi
^webui/conf/httpd\.conf$ @kodi
# Core API files
^control/app\.py$ @kodi
^control/app_images\.py$ @kodi
# Frontend entry
^webui/html/index\.html$ @kodi
@@ -0,0 +1,51 @@
# Change / PR Rules — podman-mvp
All non-trivial changes must follow this workflow.
## Step 1 — Scope
Describe:
- What feature is added or improved
- Which files are touched
## Step 2 — Contract safety check
Must remain TRUE:
- Existing API responses unchanged
- No JSON keys removed or renamed
- Backward compatibility maintained
- allowed_units.txt respected
If not certain → STOP and propose first.
## Step 3 — Runtime safety
Do NOT change without agreement:
- Pod structure
- Podman socket mounts
- DBus configuration
- host PID/IPC usage
## Step 4 — Verification (required)
Provide curl validation commands.
Example:
curl -s http://127.0.0.1:8081/api/...
Explain what should change in output.
## Step 5 — Refactoring
Allowed only when:
- required for feature OR
- clearly improves maintainability
Refactor must:
- keep behaviour identical
- minimize diff size
- be proposed first.
@@ -0,0 +1,93 @@
# SAFE FILES — podman-mvp
These files and runtime assumptions are considered infrastructure-critical.
Changes are NOT forbidden, but must ALWAYS be proposed first
and explicitly approved before implementation.
---
## Runtime architecture (critical)
Do not change without agreement:
- Pod name: mvp-pod
- Port mappings:
- 8080 → backend
- 8081 → webui proxy
- userns=keep-id
Backend runtime assumptions:
- XDG_RUNTIME_DIR=/run/user/1000 (env var for the Podman socket path)
- Podman unix socket: /run/user/1000/podman/podman.sock
- Helper socket directory: /run/user/1000/podman-mvp → /run/podman-mvp
- host PID namespace
- host IPC namespace
Reason:
Backend communicates with user-session Podman via unix socket.
All systemctl actions (start/stop/restart/daemon-reload) go through
podman-helper. D-Bus is not mounted.
---
## Infrastructure sensitive files
High risk files:
control/Dockerfile
webui/conf/httpd.conf
Changes must be proposed first.
---
## Core API stability
Files requiring caution:
control/app.py
control/app_files.py
control/app_images.py
control/app_networks.py
control/app_pods.py
control/app_system.py
control/common.py
Rules:
- Never rewrite structure without agreement.
- Extend endpoints instead of replacing logic.
---
## Frontend stability
Files:
webui/html/index.html
Avoid:
- framework migrations
- large UI rewrites
Prefer incremental improvements.
---
## Allowed improvements
Safe changes include:
- new API endpoints
- optional JSON response fields
- new UI tabs
- bug fixes
- performance improvements
---
## Goal
System stability has priority over architectural perfection.
Prefer minimal and predictable changes.
@@ -1,4 +0,0 @@
demo1.service
demo2.service
sonarr.service
mediaserver.service
@@ -0,0 +1,38 @@
import os
# Files or directories we do NOT want to include
EXCLUDE_DIRS = {'.git', 'node_modules', '__pycache__', 'venv', '.next', 'dist', 'build'}
EXCLUDE_FILES = {'collect_code.py', 'project_context.txt', 'package-lock.json', '.DS_Store'}
# File types we do want to collect
INCLUDE_EXTENSIONS = {'.js', '.jsx', '.ts', '.tsx', '.py', '.html', '.css', '.json'}
def collect_code():
output_file = "project_context.txt"
with open(output_file, "w", encoding="utf-8") as f:
for root, dirs, files in os.walk("."):
# Filter out excluded directories
dirs[:] = [d for d in dirs if d not in EXCLUDE_DIRS]
for file in files:
if file in EXCLUDE_FILES:
continue
ext = os.path.splitext(file)[1]
if ext in INCLUDE_EXTENSIONS:
full_path = os.path.join(root, file)
f.write(f"\n{'='*50}\n")
f.write(f"FILE: {full_path}\n")
f.write(f"{'='*50}\n\n")
try:
with open(full_path, "r", encoding="utf-8") as code_file:
f.write(code_file.read())
except Exception as e:
f.write(f"Error reading file: {e}")
f.write("\n")
print(f"Done! All code is in {output_file}")
if __name__ == "__main__":
collect_code()
@@ -0,0 +1,249 @@
# API_GOLDEN.md — podman-mvp
Purpose:
Freeze existing API response contracts used by the WebUI.
Existing response structures MUST remain backward compatible.
Rules:
- Existing JSON keys MUST NOT be removed.
- Existing JSON keys MUST NOT be renamed.
- Data types of listed keys MUST NOT change.
- New optional fields are allowed.
- New endpoints are allowed.
- Podman passthrough responses must remain raw Podman responses.
API accessed via proxy:
http://127.0.0.1:8081/api
==================================================
GET /api/containers-dashboard
==================================================
Curl:
curl -s http://127.0.0.1:8081/api/containers-dashboard
Response type:
Array of container objects.
Golden keys per item:
- Names
- Image
- State
- Status
- Ports
- PodName
- _dashboard_source
- _dashboard_published_ports
- _dashboard_unit
- _dashboard_def_path
Golden example:
[
{
"Names": ["mvp-webui"],
"Image": "docker.io/library/httpd:2.4",
"State": "running",
"Status": "",
"Ports": [],
"PodName": "mvp-pod",
"_dashboard_source": "podman",
"_dashboard_published_ports": [
"8080:8000/tcp",
"8081:8081/tcp"
],
"_dashboard_unit": null,
"_dashboard_def_path": null
}
]
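The golden-key rule can be checked mechanically. A hedged sketch of such a guard (the key list comes from this contract; the helper itself is hypothetical, not part of the codebase):

```python
# Golden keys for each /containers-dashboard item, per the contract above.
GOLDEN_CONTAINER_KEYS = {
    "Names", "Image", "State", "Status", "Ports", "PodName",
    "_dashboard_source", "_dashboard_published_ports",
    "_dashboard_unit", "_dashboard_def_path",
}

def missing_golden_keys(item: dict) -> set:
    # New optional fields are allowed; only absent golden keys are violations.
    return GOLDEN_CONTAINER_KEYS - set(item)

sample = {
    "Names": ["mvp-webui"], "Image": "docker.io/library/httpd:2.4",
    "State": "running", "Status": "", "Ports": [], "PodName": "mvp-pod",
    "_dashboard_source": "podman",
    "_dashboard_published_ports": ["8080:8000/tcp", "8081:8081/tcp"],
    "_dashboard_unit": None, "_dashboard_def_path": None,
}
print(missing_golden_keys(sample))  # → set()

broken = dict(sample)
del broken["_dashboard_unit"]
print(missing_golden_keys(broken))  # → {'_dashboard_unit'}
```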
==================================================
GET /api/pods-dashboard
==================================================
Curl:
curl -s http://127.0.0.1:8081/api/pods-dashboard
Response type:
Array of pod dashboard objects.
Golden keys per item:
- Name
- Status
- Containers
- Unit
- Source
Golden example:
[
{
"Name": "mvp-pod",
"Status": "Running",
"Containers": [
"mvp-backend",
"mvp-webui"
],
"Unit": "pod-mvp-pod.service",
"Source": "podman"
}
]
==================================================
GET /api/images
==================================================
Curl:
curl -s http://127.0.0.1:8081/api/images
Response type:
Array of Podman image objects.
Golden keys per item:
- RepoTags
- RepoDigests
- Created
- Size
- Containers
- Digest
- Arch
- Os
Golden example:
[
{
"RepoTags": [
"docker.io/library/httpd:2.4"
],
"RepoDigests": [
"docker.io/library/httpd@sha256:..."
],
"Created": 1770085385,
"Size": 120210217,
"Containers": 1,
"Digest": "sha256:...",
"Arch": "amd64",
"Os": "linux"
}
]
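`Size` is in bytes and `Created` is a Unix timestamp. A small sketch converting both for display (the formatting choices are my own, not part of the contract):

```python
from datetime import datetime, timezone

def human_size(num_bytes: int) -> str:
    # Binary units, one decimal place; purely a display choice.
    size = float(num_bytes)
    for unit in ("B", "KiB", "MiB", "GiB"):
        if size < 1024 or unit == "GiB":
            return f"{size:.1f} {unit}"
        size /= 1024

print(human_size(120210217))  # → 114.6 MiB
print(datetime.fromtimestamp(1770085385, tz=timezone.utc).year)  # → 2026
```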
==================================================
GET /api/networks/meta
==================================================
Curl:
curl -s http://127.0.0.1:8081/api/networks/meta
Golden keys:
- networkBackend
- rootless
- infoEndpoint
Golden example:
{
"networkBackend": "netavark",
"rootless": true,
"infoEndpoint": "http+unix://%2Frun%2Fuser%2F1000%2Fpodman%2Fpodman.sock/v5.4.2/libpod/info"
}
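`infoEndpoint` percent-encodes the Unix socket path into the host part of the URL. Decoding that part recovers the socket path (a small sketch, assuming the `http+unix://` convention used by requests-unixsocket):

```python
from urllib.parse import unquote, urlsplit

endpoint = ("http+unix://%2Frun%2Fuser%2F1000%2Fpodman%2Fpodman.sock"
            "/v5.4.2/libpod/info")
parts = urlsplit(endpoint)
socket_path = unquote(parts.netloc)  # the encoded host part is the socket path
print(socket_path)  # → /run/user/1000/podman/podman.sock
print(parts.path)   # → /v5.4.2/libpod/info
```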
==================================================
GET /api/volumes
==================================================
Curl:
curl -s http://127.0.0.1:8081/api/volumes
Response type:
Array of Podman volume objects (raw Podman passthrough).
Golden keys per item:
- Name
- Driver
- Mountpoint
- CreatedAt
- Labels
Golden example:
[
{
"Name": "my-volume",
"Driver": "local",
"Mountpoint": "/home/kodi/.local/share/containers/storage/volumes/my-volume/_data",
"CreatedAt": "2026-03-01T12:00:00Z",
"Labels": {}
}
]
==================================================
POST /api/volumes
==================================================
Request body (JSON):
- name (string, required)
- driver (string, optional, default "local")
- labels (object, optional)
- driverOpts (object, optional)
Response: created volume object (raw Podman passthrough).
==================================================
DELETE /api/volumes/{name}
==================================================
Response on success (204 from Podman):
{"ok": true}
Error responses forwarded from Podman (e.g. 409 if in use).
==================================================
POST /api/volumes/prune
==================================================
Response: array of pruned volume names (raw Podman passthrough).
==================================================
GET /api/volumes/{name}/exists
==================================================
Response:
{"exists": true} — volume exists (Podman 204)
{"exists": false} — volume not found (Podman 404)
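The backend presumably maps Podman's status code to the boolean along these lines (a sketch of the mapping described above, not the actual implementation):

```python
def exists_response(podman_status: int) -> dict:
    # 204 → volume exists, 404 → not found (per the contract above).
    if podman_status == 204:
        return {"exists": True}
    if podman_status == 404:
        return {"exists": False}
    # Anything else is an upstream error; how it is surfaced is not
    # specified in this contract.
    raise RuntimeError(f"unexpected Podman status: {podman_status}")

print(exists_response(204))  # → {'exists': True}
print(exists_response(404))  # → {'exists': False}
```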
==================================================
GET /api/openapi.json
==================================================
Curl:
curl -s http://127.0.0.1:8081/api/openapi.json
Contract:
OpenAPI schema must remain available for tooling and inspection.
Required top-level keys:
- openapi
- info
- paths
==================================================
GENERAL BACKWARD COMPATIBILITY RULE
==================================================
The following dashboard endpoints are considered UI-critical:
- /containers-dashboard
- /pods-dashboard
- /images
- /networks/meta
Changes affecting these endpoints must be proposed before implementation.
System stability has priority over structural refactoring.
@@ -3,5 +3,12 @@ WORKDIR /app
RUN apt-get update && apt-get install -y curl systemd && rm -rf /var/lib/apt/lists/*
RUN pip install fastapi uvicorn requests-unixsocket pyyaml pytest httpx
COPY app.py .
COPY tests/ ./tests/
COPY app_images.py .
COPY app_volumes.py .
COPY app_files.py .
COPY app_networks.py .
COPY app_pods.py .
COPY app_containers.py .
COPY app_system.py .
COPY common.py .
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
@@ -1,723 +1,50 @@
import os
import subprocess
from fastapi import FastAPI, HTTPException, Query
from pydantic import BaseModel
from app_images import init_images_router
from app_volumes import init_volumes_router
from app_files import init_files_router
from app_pods import init_pods_router
from app_containers import init_containers_router, start_stats_poller
from app_networks import init_networks_router
from app_system import init_system_router
from fastapi import FastAPI
import requests_unixsocket
from common import (
_systemctl as _common_systemctl,
run,
)
import uvicorn
import asyncio
import json
from pathlib import Path
from fastapi.responses import StreamingResponse
app = FastAPI(title="Podman MVP Control Plane", root_path="/api")
SESSION = requests_unixsocket.Session()
PODMAN_API_BASE = "http+unix://%2Frun%2Fuser%2F1000%2Fpodman%2Fpodman.sock/v5.4.2"
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ALLOWLIST_FILE = os.getenv("ALLOWLIST_FILE", os.path.join(BASE_DIR, "allowed_units.txt"))
WORKLOADS_DIR = "/app/workloads"
# --- ADAPTERS (contract-neutral helpers) ---
# Centralize Podman socket and systemctl invocation.
# MUST NOT change endpoint outputs, status codes, or side-effects.
def _podman_get_json(url: str):
return SESSION.get(url).json()
def _podman_get_text(url: str) -> str:
return SESSION.get(url).text
def _podman_post(url: str, **kwargs):
return SESSION.post(url, **kwargs)
def _podman_action_post(kind: str, name: str, action: str):
if kind == "pods":
url = f"{PODMAN_API_BASE}/libpod/pods/{name}/{action}"
else:
url = f"{PODMAN_API_BASE}/libpod/containers/{name}/{action}"
return _podman_post(url)
def _podman_delete(url: str):
return SESSION.delete(url)
@app.on_event("startup")
async def _startup_stats_poller():
await start_stats_poller()
def _systemctl(cmd):
# Delegate to the shared systemctl helper in common.py (behavior-identical).
return _common_systemctl(cmd, run)
# --- ROUTERS ---
# Images API lives in dedicated modules to keep this file from growing further.
app.include_router(init_images_router(SESSION, PODMAN_API_BASE))
app.include_router(init_volumes_router(SESSION, PODMAN_API_BASE))
app.include_router(init_files_router(SESSION, PODMAN_API_BASE, WORKLOADS_DIR))
app.include_router(init_networks_router(SESSION, PODMAN_API_BASE))
app.include_router(init_containers_router(
SESSION,
PODMAN_API_BASE,
WORKLOADS_DIR,
_systemctl,
))
app.include_router(init_pods_router(
SESSION,
PODMAN_API_BASE,
WORKLOADS_DIR,
_systemctl,
))
app.include_router(init_system_router(SESSION, PODMAN_API_BASE, WORKLOADS_DIR))
def _run_systemctl_action(action: str, unit: str):
cmd = ["systemctl", "--user", action, unit]
return _systemctl(cmd)
# --- MODELS ---
class FileContent(BaseModel):
content: str
# --- WORKLOADS ---
@app.get("/workloads")
def list_workloads():
workloads = []
for root, _, files in os.walk(WORKLOADS_DIR):
for f in files:
if f.endswith((".yaml", ".yml", ".json")):
full = os.path.join(root, f)
rel = os.path.relpath(full, WORKLOADS_DIR)
workloads.append(rel)
return {"workloads": workloads}
@app.get("/workloads/read/{filename:path}")
def read_workload(filename: str):
path = os.path.join(WORKLOADS_DIR, filename)
if not os.path.exists(path):
raise HTTPException(404)
with open(path, 'r') as f:
content = f.read()
return {"filename": filename, "content": content}
@app.post("/workloads/save-file")
def save_workload_file(data: dict):
path = data.get("path")
content = data.get("content")
full_path = os.path.join(WORKLOADS_DIR, path)
os.makedirs(os.path.dirname(full_path), exist_ok=True)
with open(full_path, "w") as f:
f.write(content)
return {"status": "success"}
@app.post("/workloads/deploy/{filename:path}")
def deploy_workload(filename: str):
path = os.path.join(WORKLOADS_DIR, filename)
with open(path, 'r') as f:
yaml_content = f.read()
url = f"{PODMAN_API_BASE}/libpod/kube/play"
return _podman_post(url, data=yaml_content).json()
# --- FILE RESTRICTIONS ---
def safe_join(base, path):
# prevent traversal
base = os.path.abspath(base)
final = os.path.abspath(os.path.join(base, path))
if not final.startswith(base):
raise HTTPException(status_code=403, detail="Forbidden path")
return final
# STEP 4: Centralize WORKLOADS_DIR subtree enforcement via one helper.
# MUST be behavior-identical to previous safe_join(WORKLOADS_DIR, ...) calls.
def _files_safe_join(path: str) -> str:
return safe_join(WORKLOADS_DIR, path)
# --- FILES API ---
@app.get("/files/tree")
def file_tree():
root = WORKLOADS_DIR
result = []
for dirpath, dirnames, filenames in os.walk(root):
rel = os.path.relpath(dirpath, root)
if rel == ".":
rel = ""
result.append({
"path": rel,
"dirs": sorted(dirnames),
"files": sorted(filenames),
})
return result
@app.get("/files/read")
def file_read(path: str = Query(...)):
full = _files_safe_join(path)
if not os.path.exists(full):
raise HTTPException(status_code=404, detail="Not found")
if os.path.isdir(full):
raise HTTPException(status_code=403, detail="Is a directory")
with open(full, "r") as f:
content = f.read()
return {"content": content}
@app.post("/files/save")
def file_save(path: str = Query(...), data: FileContent = None):
full = _files_safe_join(path)
os.makedirs(os.path.dirname(full), exist_ok=True)
with open(full, "w") as f:
f.write(data.content)
return {"status": "success", "path": path}
@app.delete("/files/delete")
def file_delete(path: str = Query(...)):
full = _files_safe_join(path)
if not os.path.exists(full):
raise HTTPException(status_code=404, detail="Not found")
if os.path.isdir(full):
raise HTTPException(status_code=400, detail="Kan niet verwijderen: is directory")
try:
os.remove(full)
except Exception as e:
raise HTTPException(status_code=400, detail=f"Kan niet verwijderen: {e}")
return {"status": "deleted", "type": "file"}
@app.post("/files/mkdir")
def file_mkdir(path: str = Query(...)):
# UI expects operations under systemd/; enforce prefix if absent.
if not path.startswith("systemd"):
path = os.path.join("systemd", path)
full = _files_safe_join(path)
os.makedirs(full, exist_ok=True)
return {"status": "directory created", "path": path}
@app.delete("/files/rmdir")
def file_rmdir(path: str = Query(..., description="Directory path under systemd/")):
# Only allow deletion under systemd subtree
if not path or path == "systemd" or path == "systemd/":
raise HTTPException(status_code=400, detail="Refusing to delete systemd root")
if not path.startswith("systemd/") and path != "systemd":
raise HTTPException(status_code=400, detail="Only systemd subtree is allowed")
full = _files_safe_join(path)
if not os.path.exists(full):
raise HTTPException(status_code=404, detail="Directory not found")
if not os.path.isdir(full):
raise HTTPException(status_code=400, detail="Path is not a directory")
# directory must be empty
try:
Path(full).rmdir()
except OSError:
# not empty
# build a stable detail payload
try:
dirs = []
files = []
for entry in os.listdir(full):
p = os.path.join(full, entry)
if os.path.isdir(p):
dirs.append(entry)
else:
files.append(entry)
except Exception:
dirs, files = [], []
raise HTTPException(status_code=409, detail={
"error": "directory not empty",
"dirs": sorted(dirs),
"files": sorted(files),
})
return {"deleted": True, "path": path}
# --- PODS / CONTAINERS ---
@app.get("/pods")
def list_pods():
# Crucial: ?all=true ensures pods in EXIT_STATE are shown as well
url = f"{PODMAN_API_BASE}/libpod/pods/json?all=true"
return _podman_get_json(url)
@app.post("/actions/{action}/{name}")
def take_action(action: str, name: str):
# Legacy endpoint (keep behavior)
possible_names = [name, f"pod{name}", f"pod-{name}"]
if action == "start":
# STEP 1: Try starting the pod directly (the 'Cockpit' method)
for target in possible_names:
res = _podman_post(f"{PODMAN_API_BASE}/libpod/pods/{target}/start")
if res.status_code in (200, 204):
return {"status": "started", "target": target, "method": "direct"}
# STEP 2: If direct start fails, try redeploying the YAML
target_path = None
for ext in (".yaml", ".yml"):
cand = os.path.join(WORKLOADS_DIR, f"{name}{ext}")
if os.path.exists(cand):
target_path = cand
break
if target_path:
with open(target_path, 'r') as file:
yaml_content = file.read()
res = _podman_post(f"{PODMAN_API_BASE}/libpod/kube/play", data=yaml_content)
# SPECIAL CASE: the pod already exists, so force a restart
if res.status_code == 500 and "already exists" in res.text:
print(f"DEBUG: Forcing restart for {name} due to conflict")
for target in possible_names:
_podman_delete(f"{PODMAN_API_BASE}/libpod/pods/{target}?force=true")
# Now try again
retry_res = _podman_post(f"{PODMAN_API_BASE}/libpod/kube/play", data=yaml_content)
return retry_res.json()
return res.json()
return {"status": "unknown", "method": "no_yaml_found"}
if action == "stop":
for target in possible_names:
res = _podman_post(f"{PODMAN_API_BASE}/libpod/pods/{target}/stop")
if res.status_code in (200, 204):
return {"status": "stopped", "target": target}
return {"status": "not found"}
return {"status": "unknown"}
# --- DASHBOARD HELPERS (contract-neutral, no ordering/sorting changes) ---
def _build_pod_to_containers_map(containers: list):
# preserves original order of containers processing; no sorting added
pod_to_containers = {}
for c in containers:
pod_name = c.get("PodName") or ""
if pod_name:
pod_to_containers.setdefault(pod_name, []).append((c.get("Names") or ["?"])[0])
return pod_to_containers
def _map_pod_to_unit(podname: str) -> str | None:
"""
HOTFIX 3.1 FIX 1:
If podname starts with "pod", map to <rest>.service (e.g. podmediaserver -> mediaserver.service)
Else: <podname>.service
"""
if not podname:
return None
if podname.startswith("pod"):
return f"{podname[3:]}.service"
return f"{podname}.service"
def _append_podman_pods_dashboard_rows(dashboard: list, api_pods: list, pod_to_containers: dict):
# preserves original api_pods iteration order
for p in api_pods:
name = p.get("Name")
status = p.get("Status", "unknown")
unit = _map_pod_to_unit(name) if name else ""
dashboard.append({
"Name": name,
"Status": status,
"Containers": pod_to_containers.get(name, []),
"Unit": unit,
"Source": "podman",
})
def _append_defined_pods_dashboard_rows(dashboard: list, by_name: dict, root_dir: str):
# preserves original os.walk order and file iteration order
for root, _, files in os.walk(root_dir):
for f in files:
if f.endswith((".yaml", ".yml")):
base = os.path.splitext(os.path.basename(f))[0]
pod_name = f"pod{base}"
unit_name = _map_pod_to_unit(pod_name)
if pod_name not in by_name:
code, out = _systemctl(["systemctl", "--user", "is-active", unit_name])
status = (out or "").strip() or ("active" if code == 0 else "inactive")
dashboard.append({
"Name": pod_name,
"Status": status,
"Containers": [],
"Unit": unit_name,
"Source": "systemd",
})
def _ensure_container_status_field(container: dict):
# keep exact existing defaulting behavior
if "Status" not in container:
container["Status"] = container.get("State", "")
def _make_defined_container_dashboard_row(name: str, relpath: str):
# keep exact key set and default values as before
return {
"Names": [name],
"Image": "",
"State": "",
"Status": "",
"Ports": [],
"PodName": "",
"_dashboard_source": "systemd",
"_dashboard_unit": f"{name}.service",
"_dashboard_def_path": relpath,
}
def _legacy_dashboard_item_from_container(c: dict):
# Keep exact keys & defaults as before
return {
"name": (c.get("Names") or ["?"])[0],
"status": c.get("Status") or c.get("State") or "",
"path": "",
"ip": "",
"containers": [],
}
@app.get("/pods-dashboard")
def pods_dashboard():
dashboard = []
# 0) Build mapping: pod_name -> [container_names...]
containers = _podman_get_json(f"{PODMAN_API_BASE}/libpod/containers/json?all=true")
pod_to_containers = _build_pod_to_containers_map(containers)
# 1) A) real pods
api_pods = _podman_get_json(f"{PODMAN_API_BASE}/libpod/pods/json?all=true")
by_name = {p.get("Name"): p for p in api_pods}
_append_podman_pods_dashboard_rows(dashboard, api_pods, pod_to_containers)
# 1) B) defined pods via workloads scan
# Based on YAML files in WORKLOADS_DIR; show even if not running.
_append_defined_pods_dashboard_rows(dashboard, by_name, WORKLOADS_DIR)
return dashboard
def _systemd_then_podman(systemd_callable, podman_callable):
systemd_res = systemd_callable()
if systemd_res is not None:
if isinstance(systemd_res, dict) and systemd_res.get("exit", 1) == 0:
return systemd_res
return podman_callable(systemd_res)
return podman_callable(None)
def try_systemd_pod_action(action: str, podname: str):
# If systemd unit exists/allowed, prefer it.
unit = _map_pod_to_unit(podname)
if not unit:
return None
code, out = _systemctl(["systemctl", "--user", action, unit])
return {
"method": "systemd",
"pod": podname,
"unit": unit,
"cmd": f"systemctl --user {action} {unit}",
"exit": code,
"output": out,
}
@app.post("/pods/actions/{action}/{podname}")
def pod_action_prefer_systemd(action: str, podname: str):
if action not in ("start", "stop", "restart"):
return {"error": "Invalid action"}, 400
def _systemd_call():
return try_systemd_pod_action(action, podname)
def _podman_call(systemd_res):
if systemd_res:
note = "systemd failed; falling back to podman"
podman = _podman_action_post("pods", podname, action).json()
return {"method": "systemd_then_podman", "note": note, "systemd": systemd_res, "podman": podman}
return {"method": "podman", "result": _podman_action_post("pods", podname, action).json()}
return _systemd_then_podman(_systemd_call, _podman_call)
def find_defined_containers():
defined = {}
for root, _, files in os.walk(os.path.join(WORKLOADS_DIR, "systemd")):
for f in files:
if f.endswith(".container"):
name = os.path.splitext(f)[0]
full = os.path.join(root, f)
rel = os.path.relpath(full, WORKLOADS_DIR)
defined[name] = rel
return defined
@app.get("/containers-dashboard")
def containers_dashboard():
dashboard = []
# A) real containers (UNCHANGED)
real = _podman_get_json(f"{PODMAN_API_BASE}/libpod/containers/json?all=true")
for c in real:
_ensure_container_status_field(c)
c["_dashboard_source"] = "podman"
dashboard.append(c)
# B) Dedup set (HOTFIX 3.3) — exact extraction, no sorting
runtime_names = set((c.get("Names") or ["?"])[0] for c in real)
# C) defined containers from systemd/*.container (skip duplicates)
defined = find_defined_containers()
for name, relpath in defined.items():
if name in runtime_names:
continue
row = _make_defined_container_dashboard_row(name, relpath)
# fill Status from systemd is-active (existing hotfix 3.1 behavior)
code, out = _systemctl(["systemctl", "--user", "is-active", f"{name}.service"])
row["Status"] = (out or "").strip()
dashboard.append(row)
return dashboard
@app.get("/containers")
def list_containers():
# ?all=true here too, so stopped containers are included
url = f"{PODMAN_API_BASE}/libpod/containers/json?all=true"
return _podman_get_json(url)
@app.post("/containers/{action}/{name}")
def container_action(action: str, name: str):
if action not in ("start", "stop", "restart"):
return {"error": "Invalid action"}, 400
defined = find_defined_containers()
_sys = {"code": None, "out": None}
def _systemd_call():
if name in defined:
code, out = _systemctl(["systemctl", "--user", action, name])
_sys["code"] = code
_sys["out"] = out
if code == 0:
return {
"method": "systemd",
"name": name,
"unit": f"{name}.service",
"definition": defined[name],
"cmd": f"systemctl --user {action} {name}",
"exit": code,
"output": out,
}
return {"exit": code, "output": out}
return None
def _podman_call(systemd_res):
res = _podman_action_post("containers", name, action)
if res.status_code in (200, 204):
return {"method": "podman", "name": name, "cmd": f"podman {action} {name}", "status_code": res.status_code}
if res.status_code >= 400:
return {
"method": "podman",
"name": name,
"cmd": f"podman {action} {name}",
"status_code": res.status_code,
"error": getattr(res, "text", "") or "",
}, res.status_code
if name in defined:
return {
"method": "systemd",
"name": name,
"unit": f"{name}.service",
"definition": defined[name],
"cmd": f"systemctl --user {action} {name}",
"exit": _sys["code"],
"output": _sys["out"],
}
return {"method": "podman", "name": name, "cmd": f"podman {action} {name}", "status_code": res.status_code}
return _systemd_then_podman(_systemd_call, _podman_call)
@app.get("/debug/defined-containers")
def debug_defined_containers():
return find_defined_containers()
@app.get("/dashboard")
def get_dashboard():
# Legacy dashboard view (keep shape)
try:
api_containers = _podman_get_json(f"{PODMAN_API_BASE}/libpod/containers/json?all=true")
except Exception:
api_containers = []
items = []
for c in api_containers:
items.append(_legacy_dashboard_item_from_container(c))
return items
@app.get("/test-hybrid")
def test_hybrid():
# 1. Check filesystem
try:
bestanden = []
for root, _, files in os.walk(WORKLOADS_DIR):
for f in files:
bestanden.append(os.path.join(root, f))
except Exception as e:
bestanden = f"FS Fout: {str(e)}"
# 2. Check Podman API
try:
api_containers = _podman_get_json(f"{PODMAN_API_BASE}/libpod/containers/json?all=true")
except Exception as e:
api_containers = f"API Fout: {str(e)}"
return {
"bestanden_gevonden": bestanden if isinstance(bestanden, list) else [],
"api_containers_aantal": len(api_containers) if isinstance(api_containers, list) else -1,
"api_raw_sample": api_containers[0] if isinstance(api_containers, list) and api_containers else api_containers,
}
@app.get("/containers/logs/{name}")
def get_container_logs(name: str):
# Request the last 100 lines (tail=100)
txt = _podman_get_text(f"{PODMAN_API_BASE}/libpod/containers/{name}/logs?stdout=true&stderr=true&tail=100")
# Podman logs often carry some binary multiplexing metadata; we decode them as text
return {"logs": txt}
@app.get("/containers/inspect/{name}")
def inspect_container(name: str):
return _podman_get_json(f"{PODMAN_API_BASE}/libpod/containers/{name}/json")
# --- SYSTEMD allowlist ---
def read_allowlist():
units = []
if os.path.exists(ALLOWLIST_FILE):
with open(ALLOWLIST_FILE, "r") as f:
for line in f:
u = line.strip()
if u and u.endswith(".service"):
units.append(u)
return sorted(set(units))
def list_unit_files():
# fallback (if the allowlist is empty): try systemctl list-unit-files
code, out = _systemctl(["systemctl", "--user", "list-unit-files", "--type=service", "--no-pager"])
if code != 0:
return []
units = []
for line in out.splitlines():
parts = line.split()
if parts and parts[0].endswith(".service"):
units.append(parts[0])
return sorted(set(units))
def unit_state(unit):
# active state
_, active = _systemctl(["systemctl", "--user", "is-active", unit])
active = active.splitlines()[0].strip() if active else "unknown"
# enabled state (may fail in a container context)
code, enabled_out = _systemctl(["systemctl", "--user", "is-enabled", unit])
enabled = enabled_out.splitlines()[0].strip() if (enabled_out and code == 0) else "unknown"
return active, enabled
@app.get("/systemd/allowlist")
def systemd_allowlist():
units = read_allowlist()
allow_mode = len(units) > 0
if not units:
units = list_unit_files()
return {"allow_mode": allow_mode, "units": units}
@app.post("/daemon-reload")
def api_daemon_reload():
try:
code, out = _systemctl(["systemctl", "--user", "daemon-reload"])
return {
"cmd": "systemctl --user daemon-reload",
"exit": code,
"output": out,
}
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.post("/{action}/{unit}")
def api_action(action: str, unit: str):
if action not in ("status", "start", "stop", "restart"):
raise HTTPException(status_code=400, detail="Invalid action")
units = read_allowlist()
allow_mode = len(units) > 0
if allow_mode and unit not in units:
raise HTTPException(status_code=403, detail="Unit not allowed by allowlist")
cmd = ["systemctl", "--user", action, unit]
code, out = _run_systemctl_action(action, unit)
return {"cmd": " ".join(cmd), "exit": code, "output": out}
@app.post("/api/<action>/<unit>")
def legacy_api_action(action: str, unit: str):
# legacy flask-like path; keep behavior (even if not used by index.html)
if action not in ("status", "start", "stop", "restart"):
return {"error": "Invalid action"}, 400
cmd = ["systemctl", "--user", action, unit]
code, out = _run_systemctl_action(action, unit)
return {"cmd": " ".join(cmd), "exit": code, "output": out}
def run(cmd):
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
output = (result.stdout or "") + (result.stderr or "")
return result.returncode, output.strip()
except Exception as e:
return 1, str(e)
# ENDPOINT ADDED AFTER A CHATGPT SUGGESTION
@app.get("/containers/stats/stream")
async def containers_stats_stream(interval: float = 2.0):
"""
SSE stream with periodic container stats.
Contract-neutral: new endpoint; no existing outputs changed.
"""
# Guardrails against abuse
if interval < 0.5:
interval = 0.5
if interval > 30:
interval = 30
stats_url = f"{PODMAN_API_BASE}/libpod/containers/stats?all=true&stream=false"
async def event_gen():
try:
while True:
# timeout so a stalling podman socket does not "freeze" the stream
try:
data = SESSION.get(stats_url, timeout=5).json()
except Exception as e:
data = {"Error": str(e), "Stats": []}
payload = {
"ts": int(__import__("time").time()),
"data": data,
}
yield "event: stats\n"
yield f"data: {json.dumps(payload, separators=(',',':'))}\n\n"
await asyncio.sleep(interval)
except asyncio.CancelledError:
return
headers = {
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",  # helps with proxies
}
return StreamingResponse(event_gen(), media_type="text/event-stream", headers=headers)
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
@@ -0,0 +1,817 @@
import os
import asyncio
import json
import time
import socket
import secrets
import threading
from collections import deque
from typing import Optional
from urllib.parse import unquote
from fastapi import APIRouter, HTTPException
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, Field
from common import (
_helper_call,
_podman_action_post,
_podman_get_json,
_podman_get_text,
_systemd_then_podman,
)
_SESSION = None
_PODMAN_API_BASE = None
# --- STATS CACHE (contract-neutral; in-memory) ---
# Poll Podman stats centrally and expose as optional dashboard fields.
_STATS_CACHE_BY_NAME = {} # name -> {"cpu": float|None, "mem_usage": float|None, "mem_perc": float|None}
_STATS_CACHE_TS = None
_STATS_POLLER_TASK = None
_STATS_SHOWN_NAMES: set = set()  # names of all dashboard containers from the last dashboard call
# --- EXEC SESSION CACHE (in-memory) ---
_EXEC_SESSIONS = {} # session_id -> _ExecSessionState
_EXEC_SESSIONS_LOCK = threading.Lock()
_EXEC_SESSION_IDLE_TTL_SECONDS = 60 * 60
_EXEC_SESSION_CLOSED_GC_SECONDS = 5 * 60
_EXEC_SESSION_MAX_ACTIVE = 12
_EXEC_INPUT_MAX_BYTES = 32 * 1024
class ExecStartRequest(BaseModel):
cmd: list[str] = Field(default_factory=lambda: ["/bin/sh"])
tty: bool = True
class ExecInputRequest(BaseModel):
data: str = ""
class ExecResizeRequest(BaseModel):
rows: int = 24
cols: int = 80
class _ExecSessionState:
def __init__(self, session_id: str, exec_id: str, container: str, sock: socket.socket, tty: bool):
self.session_id = session_id
self.exec_id = exec_id
self.container = container
self.sock = sock
self.tty = tty
self.created_at = int(time.time())
self.last_activity = self.created_at
self.closed = False
self.close_reason = ""
self.seq = 0
self.events = deque(maxlen=2000) # {"seq","ts","type","data"}
self.lock = threading.Lock()
self.reader_thread = None
def push_event(self, event_type: str, data: str):
with self.lock:
self.seq += 1
self.events.append({
"seq": self.seq,
"ts": int(time.time()),
"type": event_type,
"data": data,
})
self.last_activity = int(time.time())
def mark_closed(self, reason: str):
if self.closed:
return
self.closed = True
self.close_reason = reason or "closed"
self.push_event("closed", self.close_reason)
def _parse_podman_unix_socket_and_base(api_base: str) -> tuple[str, str]:
if not isinstance(api_base, str) or not api_base.startswith("http+unix://"):
raise HTTPException(status_code=500, detail="Unsupported PODMAN_API_BASE for exec")
tail = api_base[len("http+unix://"):]
slash = tail.find("/")
if slash < 0:
encoded_socket = tail
base_path = ""
else:
encoded_socket = tail[:slash]
base_path = tail[slash:]
socket_path = unquote(encoded_socket)
if not socket_path:
raise HTTPException(status_code=500, detail="Podman socket path missing")
if not base_path:
base_path = ""
if base_path and not base_path.startswith("/"):
base_path = "/" + base_path
return socket_path, base_path
def _open_exec_hijacked_socket(exec_id: str, tty: bool) -> tuple[socket.socket, bytes]:
socket_path, base_path = _parse_podman_unix_socket_and_base(_PODMAN_API_BASE)
req_path = f"{base_path}/libpod/exec/{exec_id}/start"
body = json.dumps({"Detach": False, "Tty": bool(tty)}, separators=(",", ":")).encode("utf-8")
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.settimeout(10.0)
sock.connect(socket_path)
req = (
f"POST {req_path} HTTP/1.1\r\n"
"Host: d\r\n"
"Content-Type: application/json\r\n"
f"Content-Length: {len(body)}\r\n"
"\r\n"
).encode("utf-8") + body
sock.sendall(req)
raw = b""
while b"\r\n\r\n" not in raw:
chunk = sock.recv(4096)
if not chunk:
break
raw += chunk
if len(raw) > 64 * 1024:
sock.close()
raise HTTPException(status_code=502, detail="Exec start response headers too large")
if b"\r\n\r\n" not in raw:
sock.close()
raise HTTPException(status_code=502, detail="Exec start invalid HTTP response")
head, _, rest = raw.partition(b"\r\n\r\n")
status_line = head.split(b"\r\n", 1)[0].decode("utf-8", errors="replace")
try:
parts = status_line.split(" ", 2)
status_code = int(parts[1])
except Exception:
sock.close()
raise HTTPException(status_code=502, detail=f"Exec start parse error: {status_line}")
if status_code != 200:
body_preview = rest.decode("utf-8", errors="replace")
sock.close()
raise HTTPException(status_code=502, detail=f"Exec start failed ({status_code}): {body_preview}")
sock.settimeout(1.0)
return sock, rest
def _get_exec_session_or_404(session_id: str) -> _ExecSessionState:
with _EXEC_SESSIONS_LOCK:
sess = _EXEC_SESSIONS.get(session_id)
if not sess:
raise HTTPException(status_code=404, detail=f"Unknown exec session: {session_id}")
return sess
def _close_exec_session(sess: _ExecSessionState, reason: str):
try:
sess.sock.shutdown(socket.SHUT_RDWR)
except Exception:
pass
try:
sess.sock.close()
except Exception:
pass
sess.mark_closed(reason)
def _cleanup_exec_sessions():
now = int(time.time())
to_delete = []
with _EXEC_SESSIONS_LOCK:
for sid, sess in _EXEC_SESSIONS.items():
idle = now - int(sess.last_activity or now)
if sess.closed and idle > _EXEC_SESSION_CLOSED_GC_SECONDS:
to_delete.append(sid)
continue
if (not sess.closed) and idle > _EXEC_SESSION_IDLE_TTL_SECONDS:
_close_exec_session(sess, "idle-timeout")
for sid in to_delete:
_EXEC_SESSIONS.pop(sid, None)
def _reader_loop(session_id: str, sess: _ExecSessionState, initial_rest: bytes):
try:
if initial_rest:
txt = initial_rest.decode("utf-8", errors="replace")
if txt:
sess.push_event("stdout", txt)
while not sess.closed:
try:
chunk = sess.sock.recv(4096)
except socket.timeout:
continue
except Exception as e:
_close_exec_session(sess, f"read-error: {str(e)}")
break
if not chunk:
_close_exec_session(sess, "eof")
break
txt = chunk.decode("utf-8", errors="replace")
if txt:
sess.push_event("stdout", txt)
finally:
sess.mark_closed(sess.close_reason or "reader-exit")
def _norm_container_name(name) -> str:
try:
return str(name or "").lstrip("/")
except Exception:
return ""
def _parse_stats_interval_seconds() -> float:
raw = os.getenv("STATS_INTERVAL_SECONDS", "1.0")
try:
v = float(raw)
except Exception:
v = 1.0
if v <= 0:
v = 1.0
if v < 0.5:
v = 0.5
if v > 30:
v = 30
return v
def _parse_positive_int_env(name: str, default: int, minimum: int, maximum: int) -> int:
raw = os.getenv(name, str(default))
try:
v = int(raw)
except Exception:
v = int(default)
if v < minimum:
v = minimum
if v > maximum:
v = maximum
return v
def _exec_max_active_sessions() -> int:
return _parse_positive_int_env("EXEC_SESSION_MAX_ACTIVE", _EXEC_SESSION_MAX_ACTIVE, 1, 500)
def _exec_max_input_bytes() -> int:
return _parse_positive_int_env("EXEC_INPUT_MAX_BYTES", _EXEC_INPUT_MAX_BYTES, 64, 1024 * 1024)
async def _stats_poller_loop():
global _STATS_CACHE_BY_NAME, _STATS_CACHE_TS
interval = _parse_stats_interval_seconds()
stats_url = f"{_PODMAN_API_BASE}/libpod/containers/stats?all=true&stream=false"
def _to_float(x):
try:
return float(x)
except Exception:
return None
while True:
try:
data = _SESSION.get(stats_url, timeout=5).json()
stats_list = data.get("Stats") if isinstance(data, dict) else None
if not isinstance(stats_list, list):
stats_list = []
new_cache = {}
for st in stats_list:
if not isinstance(st, dict):
continue
key = _norm_container_name(st.get("Name"))
if not key:
continue
# CPUPerc returned by Podman is already percentage (0.10 == 0.10%)
cpu_val = st.get("CPUPerc")
if cpu_val is None:
cpu_val = st.get("CPU")
if cpu_val is None:
cpu_val = st.get("AvgCPU")
new_cache[key] = {
"cpu": _to_float(cpu_val),
"mem_usage": _to_float(st.get("MemUsage")),
"mem_perc": _to_float(st.get("MemPerc")),
}
_STATS_CACHE_BY_NAME = new_cache
_STATS_CACHE_TS = int(time.time())
except Exception:
# Keep last good cache; try again next tick.
pass
await asyncio.sleep(interval)
async def start_stats_poller():
global _STATS_POLLER_TASK
if _STATS_POLLER_TASK and not _STATS_POLLER_TASK.done():
return
loop = asyncio.get_running_loop()
_STATS_POLLER_TASK = loop.create_task(_stats_poller_loop())
def init_containers_router(
session,
podman_api_base: str,
workloads_dir: str,
systemctl_func,
) -> APIRouter:
router = APIRouter(tags=["containers"])
global _SESSION, _PODMAN_API_BASE
_SESSION = session
_PODMAN_API_BASE = podman_api_base
def find_defined_containers():
defined = {}
for root, _, files in os.walk(os.path.join(workloads_dir, "systemd")):
for f in files:
if f.endswith(".container"):
name = os.path.splitext(f)[0]
full = os.path.join(root, f)
rel = os.path.relpath(full, workloads_dir)
defined[name] = rel
return defined
def _ensure_container_status_field(container: dict):
# keep exact existing defaulting behavior
if "Status" not in container:
container["Status"] = container.get("State", "")
def _make_defined_container_dashboard_row(name: str, relpath: str):
# keep exact key set and default values as before
return {
"Names": [name],
"Image": "",
"State": "",
"Status": "",
"Ports": [],
"PodName": "",
"_dashboard_source": "systemd",
"_dashboard_unit": f"{name}.service",
"_dashboard_def_path": relpath,
"_dashboard_cpu": None,
"_dashboard_mem_usage": None,
"_dashboard_mem_perc": None,
}
def _legacy_dashboard_item_from_container(c: dict):
# Keep exact keys & defaults as before
return {
"name": (c.get("Names") or ["?"])[0],
"status": c.get("Status") or c.get("State") or "",
"path": "",
"ip": "",
"containers": [],
}
def _extract_published_ports(container: dict) -> list[str]:
"""
Normalize Podman API Ports into a stable display list:
- "127.0.0.1:8080:8000/tcp"
- "8080:8000/tcp" (if no host ip)
"""
out: list[str] = []
for p in (container.get("Ports") or []):
host_ip = p.get("host_ip") or p.get("HostIp") or ""
host_port = p.get("host_port") or p.get("HostPort")
cont_port = p.get("container_port") or p.get("ContainerPort")
proto = p.get("protocol") or p.get("Protocol") or ""
if host_port is None or cont_port is None:
continue
s = ""
if host_ip:
s += f"{host_ip}:"
s += f"{host_port}:{cont_port}"
if proto:
s += f"/{proto}"
out.append(s)
return out
@router.get("/containers-dashboard")
def containers_dashboard():
dashboard = []
defined = find_defined_containers()
stats_by_name = _STATS_CACHE_BY_NAME
# A) real containers (runtime)
real = _podman_get_json(session, f"{podman_api_base}/libpod/containers/json?all=true")
for c in real:
_ensure_container_status_field(c)
# Published ports: keep the existing hotfix
c["_dashboard_published_ports"] = _extract_published_ports(c)
# Normalize the name: Podman may return "/name"
rname = ((c.get("Names") or ["?"])[0] or "").lstrip("/")
# Optional live stats (always present; null on miss)
c["_dashboard_cpu"] = None
c["_dashboard_mem_usage"] = None
c["_dashboard_mem_perc"] = None
st = stats_by_name.get(rname)
if isinstance(st, dict):
c["_dashboard_cpu"] = st.get("cpu")
c["_dashboard_mem_usage"] = st.get("mem_usage")
c["_dashboard_mem_perc"] = st.get("mem_perc")
# Classification: the PODMAN_SYSTEMD_UNIT label is the ground truth
labels = c.get("Labels") or {}
podman_unit = labels.get("PODMAN_SYSTEMD_UNIT") or ""
if podman_unit:
c["_dashboard_source"] = "systemd"
c["_dashboard_unit"] = podman_unit
else:
c["_dashboard_source"] = "podman"
# Definition path: independent of classification
if rname in defined:
c["_dashboard_def_path"] = defined[rname]
dashboard.append(c)
# B) Dedup set: also normalized (prevents /name vs name duplicates)
runtime_names = set((((c.get("Names") or ["?"])[0] or "").lstrip("/")) for c in real)
# C) defined containers from systemd/*.container (skip duplicates)
for name, relpath in defined.items():
if name in runtime_names:
continue
row = _make_defined_container_dashboard_row(name, relpath)
code, out = systemctl_func(["systemctl", "--user", "is-active", f"{name}.service"])
row["Status"] = (out or "").strip()
dashboard.append(row)
# Track which container names are shown on the dashboard (for the /stats filter)
global _STATS_SHOWN_NAMES
_STATS_SHOWN_NAMES = {
_norm_container_name((c.get("Names") or ["?"])[0])
for c in dashboard
} - {"?", ""}
return dashboard
@router.get("/stats")
def stats_snapshot():
cache = _STATS_CACHE_BY_NAME
if _STATS_SHOWN_NAMES:
return {k: v for k, v in cache.items() if k in _STATS_SHOWN_NAMES}
return cache
@router.get("/containers")
def list_containers():
# ?all=true here as well, to include stopped containers
url = f"{podman_api_base}/libpod/containers/json?all=true"
return _podman_get_json(session, url)
@router.get("/containers/inspect/{name}")
def inspect_container(name: str):
return _podman_get_json(session, f"{podman_api_base}/libpod/containers/{name}/json")
@router.get("/containers/logs/{name}")
def get_container_logs(name: str):
# Fetch the last 100 lines (tail=100)
txt = _podman_get_text(session, f"{podman_api_base}/libpod/containers/{name}/logs?stdout=true&stderr=true&tail=100")
# Podman logs often include binary multiplexing metadata; decode it as text
return {"logs": txt}
@router.get("/containers/stats/stream")
async def containers_stats_stream(interval: float = 2.0):
"""
SSE stream with periodic container stats.
Contract-neutral: new endpoint; no existing outputs changed.
"""
# Guardrails against abuse
if interval < 0.5:
interval = 0.5
if interval > 30:
interval = 30
stats_url = f"{podman_api_base}/libpod/containers/stats?all=true&stream=false"
async def event_gen():
try:
while True:
# timeout so a stalling podman socket does not "freeze" the stream
try:
data = session.get(stats_url, timeout=5).json()
except Exception as e:
data = {"Error": str(e), "Stats": []}
payload = {
"ts": int(__import__("time").time()),
"data": data,
}
yield "event: stats\n"
yield f"data: {json.dumps(payload, separators=(',',':'))}\n\n"
await asyncio.sleep(interval)
except asyncio.CancelledError:
return
headers = {
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no", # helpt bij proxies
}
return StreamingResponse(event_gen(), media_type="text/event-stream", headers=headers)
@router.get("/debug/defined-containers")
def debug_defined_containers():
return find_defined_containers()
@router.get("/dashboard")
def get_dashboard():
# Legacy dashboard view (keep shape)
try:
api_containers = _podman_get_json(session, f"{podman_api_base}/libpod/containers/json?all=true")
except Exception:
api_containers = []
items = []
for c in api_containers:
items.append(_legacy_dashboard_item_from_container(c))
return items
@router.post("/containers/{action}/{name}")
def container_action(action: str, name: str):
"""
Run an action on a container.
- **start** — Starts the container (or its systemd unit).
- **stop** — ⚠️ Destructive: stops the container immediately.
- **restart** — ⚠️ Destructive: restarts the container immediately.
Uses systemd when the container has a managed unit; otherwise the Podman API directly.
"""
if action not in ("start", "stop", "restart"):
# Flask-style tuple returns do not set the status code in FastAPI
raise HTTPException(status_code=400, detail="Invalid action")
defined = find_defined_containers()
_sys = {"code": None, "out": None}
def _systemd_call():
if name in defined:
code, out = _helper_call(action, f"{name}.service")
_sys["code"] = code
_sys["out"] = out
if code == 0:
return {
"method": "systemd",
"name": name,
"unit": f"{name}.service",
"definition": defined[name],
"cmd": f"systemctl --user {action} {name}",
"exit": code,
"output": out,
}
return {"exit": code, "output": out}
return None
def _podman_call(systemd_res):
res = _podman_action_post(session, podman_api_base, "containers", name, action)
if res.status_code in (200, 204):
return {"method": "podman", "name": name, "cmd": f"podman {action} {name}", "status_code": res.status_code}
if res.status_code >= 400:
# Raise instead of returning a (dict, status) tuple, which FastAPI
# would serialize as a JSON array with status 200
raise HTTPException(status_code=res.status_code, detail={
"method": "podman",
"name": name,
"cmd": f"podman {action} {name}",
"status_code": res.status_code,
"error": getattr(res, "text", "") or "",
})
if name in defined:
return {
"method": "systemd",
"name": name,
"unit": f"{name}.service",
"definition": defined[name],
"cmd": f"systemctl --user {action} {name}",
"exit": _sys["code"],
"output": _sys["out"],
}
return {"method": "podman", "name": name, "cmd": f"podman {action} {name}", "status_code": res.status_code}
return _systemd_then_podman(_systemd_call, _podman_call)
@router.post("/containers/{name}/exec/start")
def container_exec_start(name: str, req: Optional[ExecStartRequest] = None):
_cleanup_exec_sessions()
if req is None:
req = ExecStartRequest()
cmd = req.cmd or ["/bin/sh"]
with _EXEC_SESSIONS_LOCK:
active = sum(1 for s in _EXEC_SESSIONS.values() if not s.closed)
max_active = _exec_max_active_sessions()
if active >= max_active:
raise HTTPException(
status_code=429,
detail=f"Too many active exec sessions ({active}/{max_active})",
)
create_url = f"{podman_api_base}/libpod/containers/{name}/exec"
payload = {
"AttachStdin": True,
"AttachStdout": True,
"AttachStderr": True,
"Tty": bool(req.tty),
"Cmd": cmd,
}
try:
create_res = session.post(create_url, json=payload, timeout=10)
except Exception as e:
raise HTTPException(status_code=502, detail=f"Exec create request failed: {str(e)}")
if create_res.status_code >= 400:
raise HTTPException(status_code=502, detail=create_res.text)
try:
exec_id = (create_res.json() or {}).get("Id")
except Exception:
exec_id = None
if not exec_id:
raise HTTPException(status_code=502, detail=f"Exec create returned no Id: {create_res.text}")
sock, initial_rest = _open_exec_hijacked_socket(exec_id, bool(req.tty))
session_id = secrets.token_hex(8)
sess = _ExecSessionState(
session_id=session_id,
exec_id=exec_id,
container=name,
sock=sock,
tty=bool(req.tty),
)
t = threading.Thread(target=_reader_loop, args=(session_id, sess, initial_rest), daemon=True)
sess.reader_thread = t
with _EXEC_SESSIONS_LOCK:
_EXEC_SESSIONS[session_id] = sess
t.start()
return {
"session_id": session_id,
"exec_id": exec_id,
"container": name,
"tty": bool(req.tty),
"cmd": cmd,
"created_at": sess.created_at,
}
@router.get("/containers/exec/{session_id}")
def container_exec_session_info(session_id: str):
_cleanup_exec_sessions()
sess = _get_exec_session_or_404(session_id)
with sess.lock:
events = len(sess.events)
seq = sess.seq
return {
"session_id": sess.session_id,
"exec_id": sess.exec_id,
"container": sess.container,
"tty": sess.tty,
"created_at": sess.created_at,
"last_activity": sess.last_activity,
"closed": sess.closed,
"close_reason": sess.close_reason,
"event_count": events,
"event_seq": seq,
}
@router.get("/containers/exec/{session_id}/stream")
async def container_exec_stream(session_id: str, after: int = 0):
_cleanup_exec_sessions()
sess = _get_exec_session_or_404(session_id)
async def event_gen():
cursor = int(after or 0)
last_ping = time.time()
try:
while True:
pending = []
closed = False
with sess.lock:
pending = [e for e in sess.events if e["seq"] > cursor]
closed = sess.closed
if pending:
for ev in pending:
cursor = ev["seq"]
yield "event: exec\n"
yield f"data: {json.dumps(ev, separators=(',',':'))}\n\n"
else:
now = time.time()
if (now - last_ping) >= 10.0:
last_ping = now
yield "event: ping\n"
yield f"data: {int(now)}\n\n"
if closed and not pending:
break
await asyncio.sleep(0.2)
except asyncio.CancelledError:
return
headers = {
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
}
return StreamingResponse(event_gen(), media_type="text/event-stream", headers=headers)
@router.post("/containers/exec/{session_id}/input")
def container_exec_input(session_id: str, req: ExecInputRequest):
_cleanup_exec_sessions()
sess = _get_exec_session_or_404(session_id)
if sess.closed:
raise HTTPException(status_code=409, detail=f"Exec session is closed: {sess.close_reason or 'closed'}")
data = (req.data or "").encode("utf-8")
if not data:
return {"ok": True, "session_id": session_id, "bytes": 0}
max_bytes = _exec_max_input_bytes()
if len(data) > max_bytes:
raise HTTPException(
status_code=413,
detail=f"Input too large ({len(data)} bytes > {max_bytes} bytes)",
)
try:
sess.sock.sendall(data)
sess.last_activity = int(time.time())
return {"ok": True, "session_id": session_id, "bytes": len(data)}
except Exception as e:
_close_exec_session(sess, f"write-error: {str(e)}")
raise HTTPException(status_code=409, detail=f"Exec input failed: {str(e)}")
@router.post("/containers/exec/{session_id}/resize")
def container_exec_resize(session_id: str, req: ExecResizeRequest):
_cleanup_exec_sessions()
sess = _get_exec_session_or_404(session_id)
try:
insp = session.get(f"{podman_api_base}/libpod/exec/{sess.exec_id}/json", timeout=5)
except Exception as e:
raise HTTPException(status_code=502, detail=f"Exec inspect failed: {str(e)}")
if insp.status_code >= 400:
raise HTTPException(status_code=502, detail=insp.text)
try:
running = bool((insp.json() or {}).get("Running"))
except Exception:
running = False
if not running:
raise HTTPException(status_code=409, detail="Exec session is not running; resize requires running session")
url = f"{podman_api_base}/libpod/exec/{sess.exec_id}/resize?h={int(req.rows)}&w={int(req.cols)}"
try:
res = session.post(url, timeout=5)
except Exception as e:
raise HTTPException(status_code=502, detail=f"Exec resize request failed: {str(e)}")
if res.status_code >= 400:
detail = (res.text or "").strip()
if res.status_code == 500 and "not running" in detail.lower():
raise HTTPException(status_code=409, detail="Exec session is not running")
raise HTTPException(status_code=502, detail=detail)
sess.last_activity = int(time.time())
return {"ok": True, "session_id": session_id, "rows": int(req.rows), "cols": int(req.cols)}
@router.post("/containers/exec/{session_id}/stop")
def container_exec_stop(session_id: str):
_cleanup_exec_sessions()
sess = _get_exec_session_or_404(session_id)
if sess.closed:
return {"ok": True, "session_id": session_id, "already_closed": True, "reason": sess.close_reason}
_close_exec_session(sess, "stopped-by-user")
return {"ok": True, "session_id": session_id, "already_closed": False, "reason": "stopped-by-user"}
return router
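The exec streaming endpoints above replay events by sequence number: every event gets a monotonically increasing `seq`, and a client resumes with `?after=<seq>` to receive only newer events from the bounded `deque(maxlen=2000)` buffer. A standalone sketch of that cursor logic (mirroring, not reusing, `_ExecSessionState`):

```python
from collections import deque

class EventBuffer:
    """Bounded event log with seq-cursor replay, as in the exec sessions."""
    def __init__(self, maxlen=2000):
        self.seq = 0
        self.events = deque(maxlen=maxlen)

    def push(self, event_type, data):
        self.seq += 1
        self.events.append({"seq": self.seq, "type": event_type, "data": data})

    def since(self, cursor):
        # Same filter as the stream handler: e["seq"] > cursor
        return [e for e in self.events if e["seq"] > cursor]

buf = EventBuffer(maxlen=3)
for i in range(5):
    buf.push("stdout", f"chunk-{i}")
# Only the last 3 events survive (seqs 3, 4, 5); a client that already
# consumed up to seq 3 receives seqs 4 and 5.
pending = buf.since(3)
```

Note the trade-off this implies: if a client falls more than `maxlen` events behind, the oldest output is silently dropped rather than buffered without bound.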
@@ -0,0 +1,168 @@
import os
from pathlib import Path
from fastapi import APIRouter, HTTPException, Query
from pydantic import BaseModel
class FileContent(BaseModel):
content: str
def safe_join(base, path):
# Prevent path traversal. Compare against base + os.sep so that a sibling
# directory such as "/app/workloads-evil" cannot pass a bare prefix check.
base = os.path.abspath(base)
final = os.path.abspath(os.path.join(base, path))
if final != base and not final.startswith(base + os.sep):
raise HTTPException(status_code=403, detail="Forbidden path")
return final
def init_files_router(session, podman_api_base: str, workloads_dir: str) -> APIRouter:
router = APIRouter(tags=["files"])
def _podman_post(url: str, **kwargs):
# Keep behavior identical to app.py wrapper used by old /workloads/deploy.
return session.post(url, **kwargs)
# STEP 4: Centralize WORKLOADS_DIR subtree enforcement via one helper.
# MUST be behavior-identical to previous safe_join(WORKLOADS_DIR, ...) calls.
def _files_safe_join(path: str) -> str:
return safe_join(workloads_dir, path)
# --- WORKLOADS ---
@router.get("/workloads")
def list_workloads():
workloads = []
for root, _, files in os.walk(workloads_dir):
for f in files:
if f.endswith((".yaml", ".yml", ".json")):
full = os.path.join(root, f)
rel = os.path.relpath(full, workloads_dir)
workloads.append(rel)
return {"workloads": workloads}
@router.get("/workloads/read/{filename:path}")
def read_workload(filename: str):
path = _files_safe_join(filename)
if not os.path.exists(path):
raise HTTPException(404)
with open(path, 'r') as f:
content = f.read()
return {"filename": filename, "content": content}
@router.post("/workloads/save-file")
def save_workload_file(data: dict):
path = data.get("path")
content = data.get("content")
full_path = _files_safe_join(path)
os.makedirs(os.path.dirname(full_path), exist_ok=True)
with open(full_path, "w") as f:
f.write(content)
return {"status": "success"}
@router.post("/workloads/deploy/{filename:path}")
def deploy_workload(filename: str):
path = _files_safe_join(filename)
with open(path, 'r') as f:
yaml_content = f.read()
url = f"{podman_api_base}/libpod/kube/play"
return _podman_post(url, data=yaml_content).json()
# --- FILES API ---
@router.get("/files/tree")
def file_tree():
root = workloads_dir
result = []
for dirpath, dirnames, filenames in os.walk(root):
rel = os.path.relpath(dirpath, root)
if rel == ".":
rel = ""
result.append({
"path": rel,
"dirs": sorted(dirnames),
"files": sorted(filenames),
})
return result
@router.get("/files/read")
def file_read(path: str = Query(...)):
full = _files_safe_join(path)
if not os.path.exists(full):
raise HTTPException(status_code=404, detail="Not found")
if os.path.isdir(full):
raise HTTPException(status_code=403, detail="Is a directory")
with open(full, "r") as f:
content = f.read()
return {"content": content}
@router.post("/files/save")
def file_save(path: str = Query(...), data: FileContent = None):
full = _files_safe_join(path)
os.makedirs(os.path.dirname(full), exist_ok=True)
with open(full, "w") as f:
f.write(data.content)
return {"status": "success", "path": path}
@router.delete("/files/delete")
def file_delete(path: str = Query(...)):
full = _files_safe_join(path)
if not os.path.exists(full):
raise HTTPException(status_code=404, detail="Not found")
if os.path.isdir(full):
raise HTTPException(status_code=400, detail="Kan niet verwijderen: is directory")
try:
os.remove(full)
except Exception as e:
raise HTTPException(status_code=400, detail=f"Kan niet verwijderen: {e}")
return {"status": "deleted", "type": "file"}
@router.post("/files/mkdir")
def file_mkdir(path: str = Query(...)):
# UI expects operations under systemd/; enforce prefix if absent.
if not path.startswith("systemd"):
path = os.path.join("systemd", path)
full = _files_safe_join(path)
os.makedirs(full, exist_ok=True)
return {"status": "directory created", "path": path}
@router.delete("/files/rmdir")
def file_rmdir(path: str = Query(..., description="Directory path under systemd/")):
# Only allow deletion under systemd subtree
if not path or path == "systemd" or path == "systemd/":
raise HTTPException(status_code=400, detail="Refusing to delete systemd root")
if not path.startswith("systemd/") and path != "systemd":
raise HTTPException(status_code=400, detail="Only systemd subtree is allowed")
full = _files_safe_join(path)
if not os.path.exists(full):
raise HTTPException(status_code=404, detail="Directory not found")
if not os.path.isdir(full):
raise HTTPException(status_code=400, detail="Path is not a directory")
# directory must be empty
try:
Path(full).rmdir()
except OSError:
# not empty
# build a stable detail payload
try:
dirs = []
files = []
for entry in os.listdir(full):
p = os.path.join(full, entry)
if os.path.isdir(p):
dirs.append(entry)
else:
files.append(entry)
except Exception:
dirs, files = [], []
raise HTTPException(status_code=409, detail={
"error": "directory not empty",
"dirs": sorted(dirs),
"files": sorted(files),
})
return {"deleted": True, "path": path}
return router
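Every files endpoint above funnels its path through `safe_join`, so the traversal guard is the single security boundary of this router. A standalone sketch of an equivalent guard (using `PermissionError` instead of FastAPI's `HTTPException`, and comparing against `base + os.sep` so a sibling directory cannot pass a bare prefix check):

```python
import os

def safe_join(base, path):
    """Resolve path under base; refuse anything that escapes it."""
    base = os.path.abspath(base)
    final = os.path.abspath(os.path.join(base, path))
    if final != base and not final.startswith(base + os.sep):
        raise PermissionError("Forbidden path")
    return final

root = os.path.abspath("/tmp/workloads")
ok = safe_join(root, "systemd/app.container")
try:
    safe_join(root, "../etc/passwd")  # resolves outside root
    escaped = True
except PermissionError:
    escaped = False
```

The `os.path.abspath` normalization is what defeats `..` segments: the comparison happens after the path has been fully resolved, not on the raw input string.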
@@ -0,0 +1,160 @@
from __future__ import annotations
import io
import os
import tarfile
import tempfile
from pathlib import Path
from typing import List
from fastapi import APIRouter, HTTPException, Query
from pydantic import BaseModel
class ImageRemoveRequest(BaseModel):
images: List[str]
force: bool = False
ignore: bool = False
class ImageBuildRequest(BaseModel):
# paths RELATIVE to /app/workloads (Files tab)
context_dir: str
dockerfile: str  # e.g. "Dockerfile" or "subdir/Dockerfile" inside the context
tag: str  # e.g. "localhost/testimg:latest"
pull: bool = False
nocache: bool = False
## Helpers ##
def _safe_join(root: Path, rel: str) -> Path:
p = (root / rel).resolve()
root_resolved = root.resolve()
if root_resolved not in p.parents and p != root_resolved:
raise HTTPException(status_code=400, detail="Path escapes workloads root")
return p
def _create_context_tar(context_dir: Path) -> str:
# Create the tar in /tmp so we do not hold everything in RAM
tmp = tempfile.NamedTemporaryFile(prefix="podman-mvp-buildctx-", suffix=".tar", delete=False)
tmp_path = tmp.name
tmp.close()
with tarfile.open(tmp_path, "w") as tf:
# Add everything from context_dir
for root, dirs, files in os.walk(context_dir):
root_path = Path(root)
for name in files:
fp = root_path / name
# the tar path must be relative to context_dir
arcname = fp.relative_to(context_dir)
tf.add(fp, arcname=str(arcname))
return tmp_path
## End helpers
def _raise_on_error(resp):
if 200 <= resp.status_code < 300:
return
# The Podman API often returns a JSON error body, but text is always safe
raise HTTPException(status_code=resp.status_code, detail=resp.text)
def init_images_router(session, podman_api_base: str) -> APIRouter:
router = APIRouter(prefix="/images", tags=["images"])
@router.get("")
def list_images():
url = f"{podman_api_base}/libpod/images/json"
resp = session.get(url)
_raise_on_error(resp)
return resp.json()
# --- STEP 2: remove selected (batch) ---
@router.post("/remove")
def remove_images(req: ImageRemoveRequest):
"""⚠️ Destructief: verwijdert één of meerdere images permanent. Niet terug te draaien."""
# Libpod heeft batch remove via query params (images=...).
url = f"{podman_api_base}/libpod/images/remove"
params = {
"images": req.images,
"force": str(req.force).lower(),
"ignore": str(req.ignore).lower(),
}
resp = session.delete(url, params=params)
_raise_on_error(resp)
return resp.json()
# Convenience: delete a single image (useful for per-row UI actions)
@router.delete("/{image_ref:path}")
def remove_image(
image_ref: str,
force: bool = Query(False),
ignore: bool = Query(False),
):
"""⚠️ Destructief: verwijdert één image permanent op basis van naam of ID."""
url = f"{podman_api_base}/libpod/images/remove"
params = {
"images": [image_ref],
"force": str(force).lower(),
"ignore": str(ignore).lower(),
}
resp = session.delete(url, params=params)
_raise_on_error(resp)
return resp.json()
# --- STEP 2: prune (dangling by default, all=true => unused) ---
@router.post("/prune")
def prune_images(all: bool = Query(False)):
"""⚠️ Destructief: verwijdert dangling images (standaard) of alle ongebruikte images (`all=true`)."""
url = f"{podman_api_base}/libpod/images/prune"
params = {"all": str(all).lower()}
resp = session.post(url, params=params)
_raise_on_error(resp)
return resp.json()
@router.post("/build")
def build_image(req: ImageBuildRequest):
if not req.context_dir.startswith("systemd/"):
raise HTTPException(status_code=400, detail="context_dir must start with systemd/")
workloads_root = Path("/app/workloads")
context_dir = _safe_join(workloads_root, req.context_dir)
if not context_dir.is_dir():
raise HTTPException(status_code=400, detail="context_dir is not a directory")
dockerfile_path = (context_dir / req.dockerfile).resolve()
if context_dir.resolve() not in dockerfile_path.parents:
raise HTTPException(status_code=400, detail="dockerfile must be inside context_dir")
if not dockerfile_path.is_file():
raise HTTPException(status_code=400, detail="dockerfile not found")
tar_path = _create_context_tar(context_dir)
try:
url = f"{podman_api_base}/build"
params = {
"dockerfile": str(Path(req.dockerfile)),
"t": req.tag,
"pull": str(req.pull).lower(),
"nocache": str(req.nocache).lower(),
}
with open(tar_path, "rb") as f:
resp = session.post(
url,
params=params,
data=f,
headers={"Content-Type": "application/x-tar"},
)
_raise_on_error(resp)
# The build API usually returns JSON-lines/stream text; for the MVP we return the raw text.
return {"ok": True, "output": resp.text}
finally:
try:
os.unlink(tar_path)
except OSError:
pass
return router
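The build endpoint depends on `_create_context_tar` producing archive entries whose names are relative to the context root, so the Podman build API sees `Dockerfile`, `src/x`, and so on rather than absolute host paths. A self-contained sketch of that packing step, exercised against a throwaway directory:

```python
import os
import tarfile
import tempfile
from pathlib import Path

def create_context_tar(context_dir: Path) -> str:
    """Pack context_dir into a tar on disk; arcnames are context-relative."""
    tmp = tempfile.NamedTemporaryFile(prefix="buildctx-", suffix=".tar", delete=False)
    tmp.close()
    with tarfile.open(tmp.name, "w") as tf:
        for root, _dirs, files in os.walk(context_dir):
            for name in files:
                fp = Path(root) / name
                tf.add(fp, arcname=str(fp.relative_to(context_dir)))
    return tmp.name

# Build a tiny illustrative context and inspect the resulting entry names.
ctx = Path(tempfile.mkdtemp())
(ctx / "Dockerfile").write_text("FROM scratch\n")
(ctx / "sub").mkdir()
(ctx / "sub" / "app.py").write_text("print('hi')\n")
tar_path = create_context_tar(ctx)
with tarfile.open(tar_path) as tf:
    names = sorted(tf.getnames())
```

Writing the tar to `/tmp` instead of an in-memory buffer keeps memory flat for large contexts, at the cost of a temp file the caller must unlink afterwards, which the endpoint's `finally` block does.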
@@ -0,0 +1,266 @@
from fastapi import APIRouter, HTTPException
def init_networks_router(session, podman_api_base: str) -> APIRouter:
router = APIRouter(tags=["networks"])
def _podman_get_json_checked(url: str):
r = session.get(url)
if r.status_code >= 400:
raise HTTPException(status_code=502, detail=f"Podman API fout {r.status_code}: {r.text}")
try:
return r.json()
except Exception:
raise HTTPException(status_code=502, detail=f"Podman API gaf geen JSON terug: {r.text[:2000]}")
def _deep_get(d, path, default=None):
cur = d
for key in path:
if not isinstance(cur, dict) or key not in cur:
return default
cur = cur[key]
return cur
@router.get("/networks")
def list_networks():
# Libpod: /libpod/networks/json
url = f"{podman_api_base}/libpod/networks/json"
return {"networks": _podman_get_json_checked(url)}
@router.get("/networks/meta")
def networks_meta():
candidates = [
f"{podman_api_base}/libpod/info",
f"{podman_api_base}/libpod/info/json",
f"{podman_api_base}/info",
f"{podman_api_base}/info/json",
f"{podman_api_base}/libpod/system/info",
f"{podman_api_base}/libpod/system/info/json",
]
last_err = None
info = None
used = None
for url in candidates:
r = session.get(url)
if r.status_code == 200:
used = url
try:
info = r.json()
except Exception:
raise HTTPException(status_code=502, detail=f"Podman info endpoint did not return JSON: {url}")
break
last_err = f"{r.status_code}: {r.text}"
if info is None:
raise HTTPException(status_code=502, detail=f"Podman info endpoint not found. Last error: {last_err}")
network_backend = (
_deep_get(info, ["host", "networkBackend"]) or
_deep_get(info, ["Host", "NetworkBackend"]) or
_deep_get(info, ["host", "network", "backend"]) or
_deep_get(info, ["Host", "Network", "Backend"])
)
rootless = (
_deep_get(info, ["host", "rootless"]) or
_deep_get(info, ["Host", "Rootless"]) or
_deep_get(info, ["host", "security", "rootless"]) or
_deep_get(info, ["Host", "Security", "Rootless"])
)
if not isinstance(rootless, bool):
rootless = None
return {
"networkBackend": network_backend,
"rootless": rootless,
"infoEndpoint": used,
}
@router.get("/networks/usage")
def networks_usage():
"""
Builds the mapping network -> containers/pods, and container -> networks.
Ground truth: NetworkSettings.Networks from container inspect.
Infra containers (IsInfra=true) are filtered out.
"""
# 1) Fetch containers
containers = _podman_get_json_checked(
f"{podman_api_base}/libpod/containers/json?all=true"
) or []
by_network: dict[str, dict] = {}
by_container: dict[str, list[str]] = {}
by_container_meta: dict[str, dict] = {}
def _norm_name(c: dict) -> str:
n = c.get("Name")
if isinstance(n, str) and n:
return n
names = c.get("Names")
if isinstance(names, list) and names:
return str(names[0]).lstrip("/")
cid = c.get("Id") or c.get("id") or ""
return cid[:12] if cid else "(unknown)"
def _norm_id(c: dict) -> str:
return c.get("Id") or c.get("id") or ""
def _pod_name(c: dict) -> str | None:
for k in ("PodName", "pod", "Pod"):
v = c.get(k)
if isinstance(v, str) and v:
return v
return None
def _extract_networks_from_summary(c: dict) -> list[str] | None:
nets = c.get("Networks")
if isinstance(nets, dict):
return list(nets.keys())
if isinstance(nets, list):
return [str(x) for x in nets if x]
ns = c.get("NetworkSettings")
if isinstance(ns, dict):
nets2 = ns.get("Networks")
if isinstance(nets2, dict):
return list(nets2.keys())
nn = c.get("NetworkNames")
if isinstance(nn, list):
return [str(x) for x in nn if x]
return None
def _ns_networks(insp: dict) -> dict:
"""Fetch the NetworkSettings.Networks dict from inspect (the ground truth)."""
ns = insp.get("NetworkSettings") if isinstance(insp, dict) else None
nets = ns.get("Networks") if isinstance(ns, dict) else None
return nets if isinstance(nets, dict) else {}
def _extract_from_inspect(cid: str) -> tuple[list[str], dict, dict]:
"""
Returns: (net_names, extra, net_details)
- net_names: list of network names
- extra: {networkMode, networkOwnerId, networkOwnerName} for container: mode
- net_details: {net_name: {ip, aliases}} for bridge networks
"""
if not cid:
return [], {}, {}
insp = _podman_get_json_checked(
f"{podman_api_base}/libpod/containers/{cid}/json"
)
extra: dict = {}
# 1) NetworkSettings.Networks is the ground truth for bridge containers
nets_dict = _ns_networks(insp)
if nets_dict:
net_details = {}
for net_name, net_info in nets_dict.items():
if isinstance(net_info, dict):
ip = net_info.get("IPAddress") or ""
aliases = [
a for a in (net_info.get("Aliases") or [])
if isinstance(a, str)
]
net_details[net_name] = {"ip": ip, "aliases": aliases}
else:
net_details[net_name] = {"ip": "", "aliases": []}
return sorted(nets_dict.keys()), extra, net_details
# 2) Shared network namespace: NetworkMode = "container:<id>"
hc = insp.get("HostConfig") if isinstance(insp, dict) else None
nm = hc.get("NetworkMode") if isinstance(hc, dict) else None
if isinstance(nm, str) and nm.startswith("container:"):
owner_id = nm.split("container:", 1)[1]
extra["networkMode"] = nm
extra["networkOwnerId"] = owner_id
owner = _podman_get_json_checked(
f"{podman_api_base}/libpod/containers/{owner_id}/json"
)
owner_name = str(owner.get("Name") or owner_id[:12]).lstrip("/")
extra["networkOwnerName"] = owner_name
owner_nets = _ns_networks(owner)
if owner_nets:
return sorted(owner_nets.keys()), extra, {}
# Owner uses pasta/host/none
owner_nm = (owner.get("HostConfig") or {}).get("NetworkMode") or ""
if owner_nm in ("pasta", "host", "none"):
return [owner_nm], extra, {}
return [], extra, {}
# 3) Pseudo-networks: pasta / host / none
if isinstance(nm, str) and nm in ("pasta", "host", "none"):
extra["networkMode"] = nm
return [nm], extra, {}
return [], {}, {}
import re
_INFRA_NAME_RE = re.compile(r"^[0-9a-f]+-infra$")
_PSEUDO_NETS = {"pasta", "host", "none"}
# 2) Iterate over all containers
for c in containers:
if not isinstance(c, dict):
continue
cname_pre = _norm_name(c)
if c.get("IsInfra") or _INFRA_NAME_RE.match(cname_pre):
continue # skip pod infra containers
cid = _norm_id(c)
cname = cname_pre
pod = _pod_name(c)
nets = _extract_networks_from_summary(c)
extra: dict = {}
net_details: dict = {}
if not nets:
nets, extra, net_details = _extract_from_inspect(cid)
elif any(n not in _PSEUDO_NETS for n in nets):
# Bridge container: inspect for IP/aliases
_, extra, net_details = _extract_from_inspect(cid)
by_container_meta[cname] = extra
nets = [n for n in (nets or []) if isinstance(n, str) and n]
by_container[cname] = sorted(set(nets))
for n in nets:
slot = by_network.setdefault(n, {"containers": [], "pods": []})
nd = net_details.get(n, {})
slot["containers"].append({
"id": cid,
"name": cname,
"pod": pod,
"ip": nd.get("ip", ""),
"aliases": nd.get("aliases", []),
**extra,
})
# 3) Derive pods from their containers
for n, slot in by_network.items():
pods = sorted({
c.get("pod") for c in slot["containers"]
if isinstance(c.get("pod"), str) and c.get("pod")
})
slot["pods"] = [{"name": p} for p in pods]
return {"byNetwork": by_network, "byContainer": by_container, "byContainerMeta": by_container_meta}
@router.get("/networks/{name}")
def inspect_network(name: str):
url1 = f"{podman_api_base}/libpod/networks/{name}/json"
r = session.get(url1)
if r.status_code == 200:
try:
return r.json()
except Exception:
raise HTTPException(status_code=502, detail=f"Podman API did not return JSON: {r.text[:2000]}")
# Fallback for older API versions that use the singular 'network' path
url2 = f"{podman_api_base}/libpod/network/{name}/json"
return _podman_get_json_checked(url2)
return router
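The nested `_deep_get` helper is what lets `networks_meta` tolerate the casing differences between Podman versions (`host.networkBackend` vs `Host.NetworkBackend`). A standalone re-statement of that lookup logic (the `info` sample dict is illustrative, not real API output):

```python
def deep_get(d, path, default=None):
    """Walk nested dicts along `path`; return `default` on any miss."""
    cur = d
    for key in path:
        if not isinstance(cur, dict) or key not in cur:
            return default
        cur = cur[key]
    return cur

# Illustrative sample shaped like /libpod/info output:
info = {"host": {"networkBackend": "netavark", "rootless": True}}

# Try both casing variants, as networks_meta does:
backend = deep_get(info, ["host", "networkBackend"]) or deep_get(info, ["Host", "NetworkBackend"])
print(backend)  # → netavark
```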
@@ -0,0 +1,164 @@
import os
from fastapi import APIRouter, HTTPException
from common import (
_build_pod_to_containers_map,
_map_pod_to_unit,
_podman_action_post,
_podman_delete,
_podman_get_json,
_podman_post,
_systemd_then_podman,
)
def init_pods_router(
session,
podman_api_base: str,
workloads_dir: str,
systemctl_func,
) -> APIRouter:
router = APIRouter(tags=["pods"])
def _append_podman_pods_dashboard_rows(dashboard: list, api_pods: list, pod_to_containers: dict):
# preserves original api_pods iteration order
for p in api_pods:
name = p.get("Name")
status = p.get("Status", "unknown")
unit = _map_pod_to_unit(name) if name else ""
dashboard.append({
"Name": name,
"Status": status,
"Containers": pod_to_containers.get(name, []),
"Unit": unit,
"Source": "podman",
})
def _append_defined_pods_dashboard_rows(dashboard: list, by_name: dict, root_dir: str):
# preserves original os.walk order and file iteration order
SUPPORTED_POD_WORKLOAD_EXTENSIONS = {".pod", ".kube"}
for root, _, files in os.walk(root_dir):
for f in files:
_, ext = os.path.splitext(f)
if ext in SUPPORTED_POD_WORKLOAD_EXTENSIONS:
base = os.path.splitext(os.path.basename(f))[0]
pod_name = f"pod{base}"
unit_name = _map_pod_to_unit(pod_name)
if pod_name not in by_name:
code, out = systemctl_func(["systemctl", "--user", "is-active", unit_name])
status = (out or "").strip() or ("active" if code == 0 else "inactive")
dashboard.append({
"Name": pod_name,
"Status": status,
"Containers": [],
"Unit": unit_name,
"Source": "systemd",
})
def try_systemd_pod_action(action: str, podname: str):
# If systemd unit exists/allowed, prefer it.
unit = _map_pod_to_unit(podname)
if not unit:
return None
code, out = systemctl_func(["systemctl", "--user", action, unit])
return {
"method": "systemd",
"pod": podname,
"unit": unit,
"cmd": f"systemctl --user {action} {unit}",
"exit": code,
"output": out,
}
@router.get("/pods")
def list_pods():
# Crucial: ?all=true ensures pods in an exited state are shown as well
url = f"{podman_api_base}/libpod/pods/json?all=true"
return _podman_get_json(session, url)
@router.post("/actions/{action}/{name}")
def take_action(action: str, name: str):
# Legacy endpoint (keep behavior)
possible_names = [name, f"pod{name}", f"pod-{name}"]
if action == "start":
# STEP 1: Try to start the pod directly (the 'Cockpit' approach)
for target in possible_names:
res = _podman_post(session, f"{podman_api_base}/libpod/pods/{target}/start")
if res.status_code in (200, 204):
return {"status": "started", "target": target, "method": "direct"}
# STEP 2: If the direct start fails, try redeploying the YAML
target_path = None
for ext in (".yaml", ".yml"):
cand = os.path.join(workloads_dir, f"{name}{ext}")
if os.path.exists(cand):
target_path = cand
break
if target_path:
with open(target_path, 'r') as file:
yaml_content = file.read()
res = _podman_post(session, f"{podman_api_base}/libpod/kube/play", data=yaml_content)
# SPECIAL CASE: the pod already exists, so force a restart
if res.status_code == 500 and "already exists" in res.text:
print(f"DEBUG: forcing a restart for {name} because of a conflict")
for target in possible_names:
_podman_delete(session, f"{podman_api_base}/libpod/pods/{target}?force=true")
# Now try again
retry_res = _podman_post(session, f"{podman_api_base}/libpod/kube/play", data=yaml_content)
return retry_res.json()
return res.json()
return {"status": "unknown", "method": "no_yaml_found"}
if action == "stop":
for target in possible_names:
res = _podman_post(session, f"{podman_api_base}/libpod/pods/{target}/stop")
if res.status_code in (200, 204):
return {"status": "stopped", "target": target}
return {"status": "not found"}
return {"status": "unknown"}
@router.get("/pods-dashboard")
def pods_dashboard():
dashboard = []
# 0) Build mapping: pod_name -> [container_names...]
containers = _podman_get_json(session, f"{podman_api_base}/libpod/containers/json?all=true")
pod_to_containers = _build_pod_to_containers_map(containers)
# 1) A) real pods
api_pods = _podman_get_json(session, f"{podman_api_base}/libpod/pods/json?all=true")
by_name = {p.get("Name"): p for p in api_pods}
_append_podman_pods_dashboard_rows(dashboard, api_pods, pod_to_containers)
# 1) B) defined pods via workloads scan
# Based on YAML files in WORKLOADS_DIR; show even if not running.
_append_defined_pods_dashboard_rows(dashboard, by_name, workloads_dir)
return dashboard
@router.post("/pods/actions/{action}/{podname}")
def pod_action_prefer_systemd(action: str, podname: str):
if action not in ("start", "stop", "restart"):
raise HTTPException(status_code=400, detail="Invalid action")
def _systemd_call():
return try_systemd_pod_action(action, podname)
def _podman_call(systemd_res):
if systemd_res:
note = "systemd failed; falling back to podman"
podman = _podman_action_post(session, podman_api_base, "pods", podname, action).json()
return {"method": "systemd_then_podman", "note": note, "systemd": systemd_res, "podman": podman}
return {"method": "podman", "result": _podman_action_post(session, podman_api_base, "pods", podname, action).json()}
return _systemd_then_podman(_systemd_call, _podman_call)
return router
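The `pods_dashboard` endpoint merges two sources: live pods from the Podman API and defined-but-stopped pods found by scanning the workloads directory. The grouping step it relies on, `_build_pod_to_containers_map` from `common.py`, can be sketched standalone (the `sample` data is invented for illustration):

```python
def build_pod_to_containers_map(containers: list) -> dict:
    """Group container names by their PodName, preserving input order."""
    pod_to_containers: dict = {}
    for c in containers:
        pod_name = c.get("PodName") or ""
        if pod_name:
            pod_to_containers.setdefault(pod_name, []).append((c.get("Names") or ["?"])[0])
    return pod_to_containers

# Illustrative sample shaped like /libpod/containers/json output:
sample = [
    {"PodName": "podmedia", "Names": ["sonarr"]},
    {"PodName": "podmedia", "Names": ["radarr"]},
    {"PodName": "", "Names": ["standalone"]},  # not in a pod: skipped
]
print(build_pod_to_containers_map(sample))  # → {'podmedia': ['sonarr', 'radarr']}
```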
@@ -0,0 +1,105 @@
import os
import socket
from fastapi import APIRouter, HTTPException
from common import (
HELPER_SOCKET,
_helper_call,
_podman_get_json as _common_podman_get_json,
_systemctl as _common_systemctl,
run,
)
def init_system_router(session, podman_api_base: str, workloads_dir: str) -> APIRouter:
router = APIRouter(tags=["system"])
@router.get("/health")
def health():
podman_ok = False
try:
r = session.get(f"{podman_api_base}/libpod/info", timeout=2)
if r.status_code == 200:
try:
r.json()
podman_ok = True
except Exception:
podman_ok = False
except Exception:
podman_ok = False
helper_ok = False
try:
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
s.settimeout(2)
s.connect(HELPER_SOCKET)
helper_ok = True
except Exception:
helper_ok = False
# The helper runs on the host as the kodi user and executes systemctl --user.
# If the helper is reachable, systemd is reachable as well.
systemd_reachable = helper_ok
ok = podman_ok and helper_ok
return {
"ok": ok,
"podman": {"ok": podman_ok},
"systemd_user": {"reachable": systemd_reachable},
"helper": {"ok": helper_ok},
}
@router.get("/test-hybrid")
def test_hybrid():
# 1. Check filesystem
try:
bestanden = []
for root, _, files in os.walk(workloads_dir):
for f in files:
bestanden.append(os.path.join(root, f))
except Exception as e:
bestanden = f"FS error: {str(e)}"
# 2. Check Podman API
try:
api_containers = _common_podman_get_json(session, f"{podman_api_base}/libpod/containers/json?all=true")
except Exception as e:
api_containers = f"API error: {str(e)}"
return {
"bestanden_gevonden": bestanden if isinstance(bestanden, list) else [],
"api_containers_aantal": len(api_containers) if isinstance(api_containers, list) else -1,
"api_raw_sample": api_containers[0] if isinstance(api_containers, list) and api_containers else api_containers,
}
def _systemctl(cmd):
return _common_systemctl(cmd, run)
def _run_systemctl_action(action: str, unit: str):
cmd = ["systemctl", "--user", action, unit]
return _systemctl(cmd)
@router.post("/daemon-reload")
def api_daemon_reload():
try:
code, out = _helper_call("daemon-reload", "")
return {
"cmd": "systemctl --user daemon-reload",
"exit": code,
"output": out,
}
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@router.post("/{action}/{unit}")
def api_action(action: str, unit: str):
if action not in ("status", "start", "stop", "restart"):
raise HTTPException(status_code=400, detail="Invalid action")
cmd = ["systemctl", "--user", action, unit]
if action in ("start", "stop", "restart"):
code, out = _helper_call(action, unit)
else:
code, out = _run_systemctl_action(action, unit)
return {"cmd": " ".join(cmd), "exit": code, "output": out}
return router
@@ -0,0 +1,96 @@
from __future__ import annotations
import json
from typing import Dict, Optional
from fastapi import APIRouter, HTTPException, Query
from pydantic import BaseModel
def _normalize_filters(filters: str) -> str:
"""Converts key=value format to the {"key":["value"]} JSON that Libpod expects.
If the value already starts with '{', it is passed through unchanged."""
if filters.startswith("{"):
return filters
# key=value → {"key": ["value"]}
if "=" in filters:
key, _, value = filters.partition("=")
return json.dumps({key.strip(): [value.strip()]})
# A bare key without a value → {"key": ["true"]}
return json.dumps({filters.strip(): ["true"]})
class VolumeCreateRequest(BaseModel):
name: str
driver: str = "local"
driverOpts: Optional[Dict[str, str]] = None
labels: Optional[Dict[str, str]] = None
def _raise_on_error(resp):
if 200 <= resp.status_code < 300:
return
raise HTTPException(status_code=resp.status_code, detail=resp.text)
def init_volumes_router(session, podman_api_base: str) -> APIRouter:
router = APIRouter(prefix="/volumes", tags=["volumes"])
@router.get("")
def list_volumes(filters: Optional[str] = Query(None)):
url = f"{podman_api_base}/libpod/volumes/json"
params = {}
if filters is not None:
params["filters"] = _normalize_filters(filters)
resp = session.get(url, params=params)
_raise_on_error(resp)
return resp.json()
@router.post("")
def create_volume(req: VolumeCreateRequest):
url = f"{podman_api_base}/libpod/volumes/create"
body: dict = {"name": req.name, "driver": req.driver}
if req.driverOpts:
body["driverOpts"] = req.driverOpts
if req.labels:
body["labels"] = req.labels
resp = session.post(url, json=body)
_raise_on_error(resp)
return resp.json()
@router.post("/prune")
def prune_volumes():
"""⚠️ Destructive: permanently removes all unused volumes. This cannot be undone."""
url = f"{podman_api_base}/libpod/volumes/prune"
resp = session.post(url)
_raise_on_error(resp)
return resp.json()
@router.get("/{name}/exists")
def volume_exists(name: str):
url = f"{podman_api_base}/libpod/volumes/{name}/exists"
resp = session.get(url)
if resp.status_code == 204:
return {"exists": True}
if resp.status_code == 404:
return {"exists": False}
_raise_on_error(resp)
# Any other 2xx is unexpected; treat it as not found rather than returning null
return {"exists": False}
@router.get("/{name}")
def get_volume(name: str):
url = f"{podman_api_base}/libpod/volumes/{name}/json"
resp = session.get(url)
_raise_on_error(resp)
return resp.json()
@router.delete("/{name}")
def remove_volume(name: str, force: bool = Query(False)):
"""⚠️ Destructive: permanently removes a volume. Not reversible if the volume contains data."""
url = f"{podman_api_base}/libpod/volumes/{name}"
params = {"force": str(force).lower()}
resp = session.delete(url, params=params)
if resp.status_code == 204:
return {"ok": True}
_raise_on_error(resp)
return router
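The filter normalization above is easiest to see with concrete inputs. A standalone re-statement of `_normalize_filters` covering the three input shapes it handles (JSON passthrough, `key=value`, bare key):

```python
import json

def normalize_filters(filters: str) -> str:
    """Convert `key=value` (or a bare key) to the {"key": ["value"]} JSON Libpod expects."""
    if filters.startswith("{"):
        return filters  # already JSON: pass through unchanged
    if "=" in filters:
        key, _, value = filters.partition("=")
        return json.dumps({key.strip(): [value.strip()]})
    return json.dumps({filters.strip(): ["true"]})

print(normalize_filters("dangling=true"))          # → {"dangling": ["true"]}
print(normalize_filters("dangling"))               # → {"dangling": ["true"]}
print(normalize_filters('{"driver": ["local"]}'))  # passthrough, unchanged
```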
@@ -0,0 +1,106 @@
import json
import socket
import subprocess
from fastapi import HTTPException
HELPER_SOCKET = "/run/podman-mvp/podman-helper.sock"
def _helper_call(action: str, unit: str) -> tuple[int, str]:
"""Sends start/stop/restart to the host helper over a Unix socket.
Return type identical to run(): (returncode, output)."""
payload = json.dumps({"action": action, "unit": unit}).encode()
try:
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
s.settimeout(35)
s.connect(HELPER_SOCKET)
s.sendall(payload)
s.shutdown(socket.SHUT_WR)
data = b""
while True:
chunk = s.recv(4096)
if not chunk:
break
data += chunk
resp = json.loads(data.decode())
if resp.get("ok"):
return 0, resp.get("output", "")
return 1, resp.get("error", "failed")
except Exception as e:
return 1, f"helper unreachable: {e}"
def run(cmd):
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
output = (result.stdout or "") + (result.stderr or "")
return result.returncode, output.strip()
except Exception as e:
return 1, str(e)
def _podman_get_json_checked(session, url: str):
r = session.get(url)
if r.status_code >= 400:
raise HTTPException(status_code=502, detail=r.text)
try:
return r.json()
except Exception:
raise HTTPException(status_code=502, detail=r.text)
def _podman_get_json(session, url: str):
return session.get(url).json()
def _podman_get_text(session, url: str) -> str:
return session.get(url).text
def _podman_post(session, url: str, **kwargs):
return session.post(url, **kwargs)
def _podman_action_post(session, podman_api_base: str, kind: str, name: str, action: str):
if kind == "pods":
url = f"{podman_api_base}/libpod/pods/{name}/{action}"
else:
url = f"{podman_api_base}/libpod/containers/{name}/{action}"
return _podman_post(session, url)
def _podman_delete(session, url: str):
return session.delete(url)
def _systemctl(cmd, run_func):
# Proxy to existing run() to avoid behavioral changes.
return run_func(cmd)
def _build_pod_to_containers_map(containers: list):
# preserves original order of containers processing; no sorting added
pod_to_containers = {}
for c in containers:
pod_name = c.get("PodName") or ""
if pod_name:
pod_to_containers.setdefault(pod_name, []).append((c.get("Names") or ["?"])[0])
return pod_to_containers
def _map_pod_to_unit(podname: str) -> str | None:
if not podname:
return None
if podname.startswith("pod"):
return f"{podname[3:]}.service"
return f"{podname}.service"
def _systemd_then_podman(systemd_callable, podman_callable):
systemd_res = systemd_callable()
if systemd_res is not None:
if isinstance(systemd_res, dict) and systemd_res.get("exit", 1) == 0:
return systemd_res
return podman_callable(systemd_res)
return podman_callable(None)
@@ -0,0 +1,296 @@
# The podman-helper: why, how, and what it solves
## Introduction
This document describes the technical background and motivation for the `podman-helper` service in the Podman MVP project. It explains the fundamental problem, why several earlier solutions failed, and how the helper solves it for good.
---
## The problem: managing systemd units from a rootless container
### What we wanted
A user must be able to start, stop, and restart a quadlet service from the webui. A quadlet service is a systemd user service generated by Podman's quadlet generator from a `.container` or `.kube` file. Examples:
- `sonarr.container` → systemd generates `sonarr.service`
- `test-web.container` → systemd generates `test-web.service`
The intended flow was:
```
User clicks "stop" in the webui
→ webui sends POST /api/system/unit/sonarr.service/stop
→ the API container performs the stop
→ sonarr stops
```
### Why this is not trivial
The API container is a rootless Podman container. It runs as a regular user (kodi, UID 1000), but **inside an isolated namespace**. Systemd user services run in the **host user session** of UID 1000. Those two worlds are not simply interchangeable.
---
## Attempt 1: D-Bus StopUnit from inside the container
### Approach
Systemd is reachable over D-Bus. The rootless D-Bus session bus is available at:
```
unix:path=/run/user/1000/bus
```
By mounting this socket as a volume in the container and setting the `DBUS_SESSION_BUS_ADDRESS` environment variable, it seemed possible to send systemd commands via `dbus-send`:
```python
_dbus_call("StopUnit", f"string:{unit_name}", "string:replace")
```
### Why this failed
**D-Bus StopUnit does not behave like systemctl stop.**
When `StopUnit` is invoked over D-Bus:
1. Systemd sends a stop signal to the container
2. The container receives SIGTERM, but the application does not react quickly enough
3. After TimeoutStopSec (10 seconds by default), systemd sends SIGKILL
4. The container dies with **exit code 137** (128 + 9 = SIGKILL)
The problem lies in what happens next. The quadlet-generated service has `Restart=on-failure` by default in its generated unit. Systemd treats exit code 137 as a **failure**, so it restarts the service automatically.
**`systemctl --user stop` does work**, because it internally sets a `prevent_restart` flag that suppresses the automatic restart. D-Bus `StopUnit` lacks this flag.
**Result:** the service appears stopped but restarts itself within a few seconds. From the webui this is invisible: the user sees "stopped" while the service simply keeps running.
### Evidence
```bash
# A manual test showed this behavior:
curl -s -X POST http://api/system/unit/sonarr.service/stop
# Response: {"message": "sonarr.service stopped"} ← misleading
sleep 5
systemctl --user is-active sonarr.service
# Output: active ← the service simply keeps running
```
---
## Attempt 2: podman stop + D-Bus StopUnit
### Approach
The idea: first call `podman stop` (which sends SIGTERM and lets the container stop cleanly with exit 0) and then call D-Bus StopUnit for the systemd cleanup, so the combination should work.
```python
# Step 1: stop the container cleanly
await _request("POST", f"/containers/{container_name}/stop")
# Step 2: clean up systemd
_dbus_call("StopUnit", f"string:{unit_name}", "string:replace")
```
### Why this failed
The `StopUnit` D-Bus call after `podman stop` **blocked the subsequent StartUnit**. When we then tried to start the service:
```python
_dbus_call("StartUnit", f"string:{unit_name}", "string:replace")
```
D-Bus did return a job path (suggesting success), but systemd never actually executed the job. The service remained `inactive`.
The cause: the extra `StopUnit` call put the unit into a transitional state from which systemd cancelled the `StartUnit` job.
---
## Attempt 3: podman stop + wait for inactive + StartUnit
### Approach
Maybe it was timing. We added a wait function that waits until the unit is really `inactive` before calling `StartUnit`:
```python
async def _wait_for_unit_state(unit_name, target="inactive", timeout=15):
    for _ in range(timeout):
        state = _get_unit_active_state(unit_name)
        if state == target:
            return True
        await asyncio.sleep(1)
    return False

# Usage:
await _wait_for_unit_state(unit_name, "inactive")
await asyncio.sleep(2)  # extra buffer
_dbus_call("StartUnit", ...)
```
### Why this failed as well
Even once `ActiveState == inactive`, systemd is internally still busy with the deactivation cleanup. The `StartUnit` job is created (D-Bus returns a job path), but systemd cancels it internally because the unit has not fully stopped yet.
Extensive tests showed:
- `ActiveState` was already `inactive`
- `SubState` was already `dead`
- Yet `StartUnit` over D-Bus from inside the container did not result in an actual start
**The same command run directly on the host did work:**
```bash
# On the host:
dbus-send --session --print-reply \
--dest=org.freedesktop.systemd1 \
/org/freedesktop/systemd1 \
org.freedesktop.systemd1.Manager.StartUnit \
string:test-web.service string:replace
# → active ✓
# From inside the container (same command):
podman exec podman-api dbus-send --session --print-reply \
--dest=org.freedesktop.systemd1 \
/org/freedesktop/systemd1 \
org.freedesktop.systemd1.Manager.StartUnit \
string:test-web.service string:replace
# → D-Bus reports OK but the service stays inactive ✗
```
This confirmed that the problem is fundamental: D-Bus from a container context and D-Bus from the host user session are not equivalent, even though they communicate over the same socket.
---
## The root cause: container namespace isolation
The core of the problem is that a rootless Podman container runs in a **user namespace**. Although the D-Bus socket is mounted and communication is technically possible, systemd performs an internal check on the **peer credentials** of the D-Bus connection.
When a D-Bus message arrives from a process in a different user namespace (the container), systemd treats it as a different security context than a message from the host user session. Certain operations, in particular actually executing StartUnit/StopUnit jobs, are handled differently or cut short inside systemd.
This is not a bug but a deliberate security boundary in Linux: a process in a user namespace must not be able to freely manage services in the parent namespace.
**`systemctl --user stop` works** because systemctl establishes its D-Bus connection from the host user session with the right credentials. It also sets the `prevent_restart` flag that D-Bus `StopUnit` lacks.
---
## The solution: podman-helper
### Design
The `podman-helper` is a small Python service that runs **directly on the host** as the `kodi` user. It listens on a Unix socket and executes `systemctl --user` commands on behalf of the API container.
```
API container
│ Unix socket: /run/user/1000/podman-helper.sock
│ (mounted in the container as /run/podman-helper.sock)
podman-helper.py (runs on the HOST as the kodi user)
│ subprocess: systemctl --user start|stop|restart <unit>
systemd user session (host)
```
### Protocol
Plain JSON over the Unix socket:
**Request:**
```json
{"action": "start", "unit": "test-web.service"}
```
**Response on success:**
```json
{"ok": true, "output": "test-web.service start succeeded"}
```
**Response on failure:**
```json
{"ok": false, "error": "Action 'kill' not allowed. Use: restart, start, stop"}
```
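From the API side this protocol is one request/response round trip over the socket. A minimal client sketch (the function name `send_helper_request` is mine; it mirrors `_helper_call` in the repo's `common.py`, and the default socket path is the in-container mount path used there):

```python
import json
import socket

def send_helper_request(action: str, unit: str,
                        socket_path: str = "/run/podman-mvp/podman-helper.sock") -> dict:
    """One request/response round trip over the helper's Unix socket."""
    payload = json.dumps({"action": action, "unit": unit}).encode()
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(35)
        s.connect(socket_path)
        s.sendall(payload)
        s.shutdown(socket.SHUT_WR)  # signal end-of-request; the helper reads until EOF
        data = b""
        while chunk := s.recv(4096):
            data += chunk
    return json.loads(data.decode())

# Example (requires a running helper):
# send_helper_request("restart", "test-web.service")
```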
### Security
The helper enforces a strict whitelist:
```python
ALLOWED_ACTIONS = {"start", "stop", "restart"}
UNIT_PATTERN = re.compile(r'^[a-zA-Z0-9._\-]+\.service$')
```
- Only `start`, `stop`, and `restart` are allowed
- Unit names may contain only safe characters
- No shell injection is possible: `systemctl` is invoked directly via `subprocess`, not through a shell
- The Unix socket has permissions `0o600`, so only the owner (kodi) can talk to it
### Concurrent connections
The helper uses Python `asyncio` and `asyncio.start_unix_server`, so multiple concurrent requests are handled without problems; the event loop serves them without blocking.
A test confirmed this:
```bash
# 5 concurrent restart requests
for i in {1..5}; do
  echo '{"action": "restart", "unit": "test-web.service"}' | \
    socat - UNIX-CONNECT:$XDG_RUNTIME_DIR/podman-helper.sock &
done
wait
# Result: all 5 succeeded, service active ✓
```
---
## What now does work
### Stop
```
webui clicks stop
→ POST /api/system/unit/test-web.service/stop
→ API connects to the helper socket
→ helper runs: systemctl --user stop test-web.service
→ systemd stops the service with the prevent_restart flag
→ container disappears, cidfile cleaned up
→ service is inactive ✓
```
### Start
```
webui clicks start
→ POST /api/system/unit/test-web.service/start
→ API connects to the helper socket
→ helper runs: systemctl --user start test-web.service
→ the quadlet generator has already generated a .service
→ systemd starts the container
→ service is active ✓
```
### Restart
```
webui clicks restart
→ POST /api/system/unit/test-web.service/restart
→ API connects to the helper socket
→ helper runs: systemctl --user restart test-web.service
→ systemd stops and restarts the container
→ service is active (after ~8-10 seconds) ✓
```
---
## daemon-reload via the helper
`daemon-reload` now also goes through the helper. Originally `Manager.Reload` worked over D-Bus from the container, but in order to remove the D-Bus socket and the `DBUS_SESSION_BUS_ADDRESS` mount entirely, daemon-reload was added as a helper action.
The helper builds the command without a unit argument: `systemctl --user daemon-reload`.
---
## Summary
| Operation | Via D-Bus from the container | Via the helper on the host |
|---|---|---|
| daemon-reload | ❌ No longer via D-Bus | ✅ Works |
| Query unit status | ✅ Works (read-only) | — |
| Stop a unit | ❌ Restarts itself | ✅ Works |
| Start a unit | ❌ Job is ignored | ✅ Works |
| Restart a unit | ❌ Stays inactive | ✅ Works |
The helper is the minimal, safe solution to the fundamental problem that a process in a user namespace does not have the same rights as a process in the host user session, even when they communicate over the same D-Bus socket.
@@ -0,0 +1,184 @@
#!/usr/bin/env python3
"""
podman-helper.py
----------------
Unix socket helper that runs on the HOST as the regular user.
Receives JSON requests from the API container and executes systemctl --user.
Security model:
- Only start / stop / restart / daemon-reload are allowed
- start/stop/restart: only .service units containing safe characters
- daemon-reload: takes no unit name; any unit given is ignored
- Multiple concurrent connections are handled via asyncio
Protocol:
Incoming: {"action": "start"|"stop"|"restart", "unit": "name.service"}
{"action": "daemon-reload", "unit": ""}
Outgoing: {"ok": true, "output": "..."} or {"ok": false, "error": "..."}
"""
import asyncio
import json
import logging
import os
import re
import subprocess
import sys
# ── Configuration ────────────────────────────────────────────────────────────
SOCKET_PATH = os.getenv(
"HELPER_SOCKET",
os.path.join(os.getenv("XDG_RUNTIME_DIR", f"/run/user/{os.getuid()}"), "podman-mvp", "podman-helper.sock")
)
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
TIMEOUT = 30 # maximum seconds per systemctl invocation
# ── Logging ───────────────────────────────────────────────────────────────────
logging.basicConfig(
level = getattr(logging, LOG_LEVEL.upper(), logging.INFO),
format = "%(asctime)s [%(levelname)s] %(message)s",
datefmt = "%Y-%m-%d %H:%M:%S",
stream = sys.stdout,
)
log = logging.getLogger("podman-helper")
# ── Whitelist ─────────────────────────────────────────────────────────────────
ALLOWED_ACTIONS = {"start", "stop", "restart", "daemon-reload"}
UNIT_PATTERN = re.compile(r'^[a-zA-Z0-9._\-]+\.service$')
NO_UNIT_ACTIONS = {"daemon-reload"}
def validate(action: str, unit: str) -> str | None:
"""Returns an error message if the request is not allowed, otherwise None."""
if action not in ALLOWED_ACTIONS:
return f"Action '{action}' not allowed. Use: {', '.join(sorted(ALLOWED_ACTIONS))}"
if action not in NO_UNIT_ACTIONS and not UNIT_PATTERN.match(unit):
return f"Invalid unit name '{unit}'. Only .service units containing safe characters."
return None
async def run_systemctl(action: str, unit: str) -> dict:
"""Voert systemctl --user <action> [unit] uit en geeft het resultaat terug."""
if action in NO_UNIT_ACTIONS:
cmd = ["systemctl", "--user", action]
else:
cmd = ["systemctl", "--user", action, unit]
log.info("Uitvoeren: %s", " ".join(cmd))
try:
proc = await asyncio.create_subprocess_exec(
*cmd,
stdout = asyncio.subprocess.PIPE,
stderr = asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=TIMEOUT)
rc = proc.returncode
output = stdout.decode().strip() or stderr.decode().strip()
if rc == 0:
log.info("OK: %s %s (rc=0)", action, unit)
return {"ok": True, "output": output or f"{unit} {action} succeeded"}
else:
log.warning("Failed: %s %s (rc=%d) %s", action, unit, rc, output)
return {"ok": False, "error": output or f"{unit} {action} failed (rc={rc})"}
except asyncio.TimeoutError:
log.error("Timeout after %ds: %s %s", TIMEOUT, action, unit)
return {"ok": False, "error": f"Timeout after {TIMEOUT} seconds"}
except Exception as e:
log.error("Unexpected error: %s", e)
return {"ok": False, "error": str(e)}
async def handle_client(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
"""Handle a single client connection."""
peer = writer.get_extra_info("peername") or "unknown"
log.debug("Connection from: %s", peer)
try:
# Read up to 4 KB (more than enough for a JSON request)
data = await asyncio.wait_for(reader.read(4096), timeout=10)
if not data:
return
# Parse JSON
try:
request = json.loads(data.decode())
except json.JSONDecodeError as e:
log.warning("Invalid JSON: %s", e)
response = {"ok": False, "error": f"Invalid JSON: {e}"}
writer.write(json.dumps(response).encode())
await writer.drain()
return
action = str(request.get("action", "")).strip().lower()
unit = str(request.get("unit", "")).strip()
# Validate
error = validate(action, unit)
if error:
log.warning("Rejected: %s", error)
response = {"ok": False, "error": error}
writer.write(json.dumps(response).encode())
await writer.drain()
return
# Execute
response = await run_systemctl(action, unit)
writer.write(json.dumps(response).encode())
await writer.drain()
except asyncio.TimeoutError:
log.warning("Client timed out while reading")
response = {"ok": False, "error": "Timed out reading the request"}
try:
writer.write(json.dumps(response).encode())
await writer.drain()
except Exception:
pass
except Exception as e:
log.error("Fout bij verwerken verbinding: %s", e)
try:
response = {"ok": False, "error": str(e)}
writer.write(json.dumps(response).encode())
await writer.drain()
except Exception:
pass
finally:
try:
writer.close()
await writer.wait_closed()
except Exception:
pass
async def main() -> None:
# Remove a stale socket if one still exists
if os.path.exists(SOCKET_PATH):
os.unlink(SOCKET_PATH)
log.info("Removed stale socket: %s", SOCKET_PATH)
# Make sure the directory exists
os.makedirs(os.path.dirname(SOCKET_PATH), exist_ok=True)
server = await asyncio.start_unix_server(handle_client, path=SOCKET_PATH)
# Socket accessible by the owner only (the kodi user)
os.chmod(SOCKET_PATH, 0o600)
log.info("podman-helper started on %s", SOCKET_PATH)
log.info("Allowed actions: %s", ", ".join(sorted(ALLOWED_ACTIONS)))
async with server:
await server.serve_forever()
if __name__ == "__main__":
try:
asyncio.run(main())
except KeyboardInterrupt:
log.info("Gestopt")
finally:
if os.path.exists(SOCKET_PATH):
os.unlink(SOCKET_PATH)
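The JSON protocol implemented by the helper above can be exercised from Python with a minimal client sketch. This is an illustration, not part of the project; the default socket path is copied from `SOCKET_PATH` in podman-helper.py, and a running helper (or any server bound to that socket) is assumed:

```python
import json
import os
import socket

# Default path used by podman-helper.py (assumption: helper uses its defaults)
DEFAULT_SOCK = os.path.join(
    os.getenv("XDG_RUNTIME_DIR", f"/run/user/{os.getuid()}"),
    "podman-mvp", "podman-helper.sock",
)

def send_request(sock_path: str, action: str, unit: str = "") -> dict:
    """Send one JSON request to the helper socket and return the parsed reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(10)
        s.connect(sock_path)
        s.sendall(json.dumps({"action": action, "unit": unit}).encode())
        return json.loads(s.recv(4096).decode())

# Usage (helper must be running):
#   send_request(DEFAULT_SOCK, "restart", "test-web.service")
```

Like the bash test script further on, this sends one request per connection, which matches the helper's read-once, reply-once handler.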
@@ -0,0 +1,20 @@
[Unit]
Description=Podman systemctl helper socket service
Documentation=man:systemctl(1)
After=default.target
[Service]
Type=simple
Restart=on-failure
RestartSec=3s
Environment=XDG_RUNTIME_DIR=/run/user/%U
Environment=LOG_LEVEL=INFO
ExecStart=/usr/bin/python3 %h/.config/podman-mvp/podman-helper/podman-helper.py
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=default.target
@@ -0,0 +1,84 @@
#!/usr/bin/env bash
# test-helper.sh — Tests the podman-helper directly on the host
# Usage: ./test-helper.sh <unit> (default: test-web.service)
set -euo pipefail
UNIT="${1:-test-web.service}"
SOCKET="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman-helper.sock"
GREEN='\033[0;32m'; RED='\033[0;31m'; NC='\033[0m'
ok() { echo -e "${GREEN}✓${NC} $*"; }
fail() { echo -e "${RED}✗${NC} $*"; }
send() {
local action="$1"
local result
result=$(echo "{\"action\": \"$action\", \"unit\": \"$UNIT\"}" | \
socat - UNIX-CONNECT:"$SOCKET" 2>/dev/null)
echo "$result"
}
echo "Socket: $SOCKET"
echo "Unit: $UNIT"
echo ""
# Check socat
command -v socat &>/dev/null || { echo "socat not found; install it with: sudo apt install socat"; exit 1; }
# Check socket
[[ -S "$SOCKET" ]] || { fail "Socket not found. Is podman-helper.service active?"; exit 1; }
# ── Test 1: stop ──────────────────────────────────────────────────────────────
echo "Test 1: stop"
systemctl --user start "$UNIT" 2>/dev/null || true
sleep 2
RESULT=$(send "stop")
echo " Response: $RESULT"
sleep 5
STATE=$(systemctl --user is-active "$UNIT" 2>/dev/null || true)
echo " State na stop: $STATE"
[[ "$STATE" == "inactive" ]] && ok "Stop werkt" || fail "Stop mislukt (state: $STATE)"
echo ""
# ── Test 2: start ─────────────────────────────────────────────────────────────
echo "Test 2: start"
RESULT=$(send "start")
echo " Response: $RESULT"
sleep 5
STATE=$(systemctl --user is-active "$UNIT" 2>/dev/null || true)
echo " State na start: $STATE"
[[ "$STATE" == "active" ]] && ok "Start werkt" || fail "Start mislukt (state: $STATE)"
echo ""
# ── Test 3: restart ───────────────────────────────────────────────────────────
echo "Test 3: restart"
RESULT=$(send "restart")
echo " Response: $RESULT"
sleep 5
STATE=$(systemctl --user is-active "$UNIT" 2>/dev/null || true)
echo " State na restart: $STATE"
[[ "$STATE" == "active" ]] && ok "Restart werkt" || fail "Restart mislukt (state: $STATE)"
echo ""
# ── Test 4: invalid action (whitelist check) ──────────────────────────────────
echo "Test 4: invalid action (whitelist)"
RESULT=$(echo '{"action": "kill", "unit": "'"$UNIT"'"}' | \
socat - UNIX-CONNECT:"$SOCKET" 2>/dev/null)
echo " Response: $RESULT"
echo "$RESULT" | grep -q '"ok": false' && ok "Whitelist werkt" || fail "Whitelist werkt NIET"
echo ""
# ── Test 5: concurrent requests ───────────────────────────────────────────────
echo "Test 5: concurrent (5 restart requests)"
for i in {1..5}; do
echo '{"action": "restart", "unit": "'"$UNIT"'"}' | \
socat - UNIX-CONNECT:"$SOCKET" 2>/dev/null &
done
wait
sleep 5
STATE=$(systemctl --user is-active "$UNIT" 2>/dev/null || true)
echo " State na gelijktijdige aanvragen: $STATE"
[[ "$STATE" == "active" ]] && ok "Gelijktijdig werkt" || fail "Gelijktijdig mislukt (state: $STATE)"
echo ""
echo "Tests klaar."
@@ -0,0 +1,3 @@
FROM docker.io/library/httpd:2.4
COPY html/ /usr/local/apache2/htdocs/
COPY conf/httpd.conf /usr/local/apache2/conf/httpd.conf
@@ -22,6 +22,14 @@ DirectoryIndex index.html
ErrorLog /proc/self/fd/2
CustomLog /proc/self/fd/1 combined
#ProxyPreserveHost On
#ProxyPass "/api/" "http://127.0.0.1:8000/api/"
#ProxyPassReverse "/api/" "http://127.0.0.1:8000/api/"
# allow long-running upstream requests (image builds)
Timeout 600
ProxyTimeout 600
ProxyPreserveHost On
ProxyPass "/api/" "http://127.0.0.1:8000/api/"
ProxyPass "/api/" "http://127.0.0.1:8000/api/" connectiontimeout=5 timeout=600 retry=0
ProxyPassReverse "/api/" "http://127.0.0.1:8000/api/"
@@ -0,0 +1,466 @@
let cmEditor = null;
let filesDirty = false;
let filesSuppressDirtyEvent = false;
let filesTextareaBound = false;
function filesCurrentTheme() {
const t = document.documentElement.getAttribute('data-theme');
return (t === 'light') ? 'light' : 'dark';
}
function filesCodeMirrorTheme() {
return filesCurrentTheme() === 'light' ? 'default' : 'material-darker';
}
function filesSetEditorTheme(themeName) {
if (!cmEditor) return;
const cmTheme = (themeName === 'light') ? 'default' : 'material-darker';
cmEditor.setOption('theme', cmTheme);
cmEditor.refresh();
}
window.filesSetEditorTheme = filesSetEditorTheme;
function _isFolderCollapsed(folderKey, level) {
const stored = localStorage.getItem('files_folder_collapsed:' + folderKey);
if (stored !== null) return stored !== '0';
return true; // everything collapsed by default
}
function _setFolderCollapsed(folderKey, v) {
localStorage.setItem('files_folder_collapsed:' + folderKey, v ? '1' : '0');
}
// =========================
// Files tab (systemd subtree)
// =========================
const FILES_ROOT = 'systemd'; // API root inside WORKLOADS_DIR
let filesCurrentUiPath = ''; // without "systemd/"
let filesCurrentApiPath = ''; // with "systemd/"
function filesModeLabel(uiPath) {
const mode = cmModeForPath(uiPath);
if (mode === 'yaml') return 'YAML';
if (mode === 'application/json') return 'JSON';
if (mode === 'javascript') return 'JavaScript';
return 'Text';
}
function filesCursorLabel() {
if (cmEditor) {
const c = cmEditor.getCursor();
return `Ln ${c.line + 1}, Col ${c.ch + 1}`;
}
return '';
}
function filesUpdateEditorStatus() {
const el = document.getElementById('filesEditorStatus');
if (!el) return;
if (!filesCurrentUiPath) {
el.textContent = 'No file selected';
return;
}
const dirtyTxt = filesDirty ? 'Unsaved' : 'Saved';
const parts = [
dirtyTxt,
filesModeLabel(filesCurrentUiPath),
filesCurrentUiPath,
];
const cursor = filesCursorLabel();
if (cursor) parts.push(cursor);
el.textContent = parts.join(' | ');
}
function filesUpdateTreeSelection() {
const treeEl = document.getElementById('filesTree');
if (!treeEl) return;
treeEl.querySelectorAll('.file-entry').forEach(row => {
row.classList.remove('active', 'dirty');
const state = row.querySelector('.file-entry-state');
if (state) state.textContent = '';
});
if (!filesCurrentUiPath) return;
const key = encodeURIComponent(filesCurrentUiPath);
const row = treeEl.querySelector(`.file-entry[data-file="${CSS.escape(key)}"]`);
if (!row) return;
row.classList.add('active');
if (filesDirty) {
row.classList.add('dirty');
const state = row.querySelector('.file-entry-state');
if (state) state.textContent = '●';
}
}
function filesSetDirty(v) {
filesDirty = !!v && !!filesCurrentUiPath;
filesUpdateEditorStatus();
filesUpdateTreeSelection();
}
function cmModeForPath(uiPath) {
const p = (uiPath || '').toLowerCase();
if (p.endsWith('.yaml') || p.endsWith('.yml') || p.endsWith('.kube') || p.endsWith('.container')) return 'yaml';
if (p.endsWith('.json')) return 'application/json';
if (p.endsWith('.js')) return 'javascript';
return 'text/plain';
}
function filesToApiPath(uiPath) {
let p = (uiPath || '').trim().replace(/^\/+/, '');
if (!p) return FILES_ROOT;
if (p === FILES_ROOT || p.startsWith(FILES_ROOT + '/')) return p;
return `${FILES_ROOT}/${p}`;
}
function filesToUiPath(apiPath) {
const p = (apiPath || '').trim().replace(/^\/+/, '');
return p.replace(new RegExp('^' + FILES_ROOT + '/?'), '');
}
function filesSetCurrent(uiPath) {
filesCurrentUiPath = (uiPath || '').trim().replace(/^\/+/, '');
filesCurrentApiPath = filesToApiPath(filesCurrentUiPath);
document.getElementById('filesCurrent').textContent = filesCurrentUiPath || '-';
filesSetDirty(false);
}
async function filesRefresh() {
// Files editor: CodeMirror init (only if the textarea exists)
if (!cmEditor) {
const taFiles = document.getElementById('filesEditor');
if (taFiles && window.CodeMirror) {
cmEditor = CodeMirror.fromTextArea(taFiles, {
lineNumbers: true,
lineWrapping: true,
mode: 'text/plain',
theme: filesCodeMirrorTheme()
});
cmEditor.setSize('100%', 360);
cmEditor.on('change', () => {
if (filesSuppressDirtyEvent || !filesCurrentUiPath) return;
filesSetDirty(true);
});
cmEditor.on('cursorActivity', filesUpdateEditorStatus);
} else if (taFiles && !filesTextareaBound) {
filesTextareaBound = true;
taFiles.addEventListener('input', () => {
if (!filesCurrentUiPath) return;
filesSetDirty(true);
});
}
}
const treeEl = document.getElementById('filesTree');
treeEl.textContent = 'Loading...';
let data;
try {
data = await api('/files/tree', 'GET');
} catch (e) {
if (typeof window.updateNavCount === 'function') {
window.updateNavCount('countNavFiles', 0);
}
treeEl.innerHTML = (typeof window.renderStateBox === 'function')
? window.renderStateBox('error', 'Failed to load files', e.message || String(e))
: 'Failed to load files.';
filesUpdateEditorStatus();
return;
}
// Keep only the systemd subtree
const scoped = (data || []).filter(folder => {
const p = (folder.path || '').replace(/^\/+/, '');
return p === FILES_ROOT || p.startsWith(FILES_ROOT + '/');
});
if (!scoped.length) {
if (typeof window.updateNavCount === 'function') {
window.updateNavCount('countNavFiles', 0);
}
treeEl.innerHTML = (typeof window.renderStateBox === 'function')
? window.renderStateBox('empty', 'No files', 'No files were found under systemd.')
: 'No files found under systemd.';
filesUpdateEditorStatus();
return;
}
let totalFiles = 0;
for (const folder of scoped) {
totalFiles += Array.isArray(folder?.files) ? folder.files.length : 0;
}
if (typeof window.updateNavCount === 'function') {
window.updateNavCount('countNavFiles', totalFiles);
}
// Build a nested folder tree from the "flat" API response.
const folderByPath = new Map();
for (const f of scoped) {
const apiPath = (f.path || '').replace(/^\/+/, '');
folderByPath.set(apiPath, f);
}
function getOrCreateChild(parent, name) {
if (!parent.children.has(name)) {
const apiPath = parent.apiPath ? `${parent.apiPath}/${name}` : name;
parent.children.set(name, {
name,
apiPath,
uiPath: filesToUiPath(apiPath),
children: new Map(),
});
}
return parent.children.get(name);
}
const root = { name: FILES_ROOT, apiPath: FILES_ROOT, uiPath: '', children: new Map() };
// 1) Create nodes from the known folder paths
for (const apiPath of folderByPath.keys()) {
if (apiPath === FILES_ROOT) continue;
if (!apiPath.startsWith(FILES_ROOT + '/')) continue;
const rel = apiPath.slice((FILES_ROOT + '/').length);
const segs = rel.split('/').filter(Boolean);
let cur = root;
for (const s of segs) cur = getOrCreateChild(cur, s);
}
// 2) Add nodes from the dirs lists (so empty intermediate folders also appear)
for (const [apiPath, folder] of folderByPath.entries()) {
if (apiPath !== FILES_ROOT && !apiPath.startsWith(FILES_ROOT + '/')) continue;
let base = root;
if (apiPath !== FILES_ROOT) {
const rel = apiPath.slice((FILES_ROOT + '/').length);
const segs = rel.split('/').filter(Boolean);
for (const s of segs) base = getOrCreateChild(base, s);
}
for (const d of (folder.dirs || [])) getOrCreateChild(base, d);
}
function renderNode(node, level) {
const folderKey = node.apiPath;
const collapsed = _isFolderCollapsed(folderKey, level);
const label = node.uiPath || 'root';
const indent = Math.max(0, level) * 14;
const folder = folderByPath.get(folderKey);
const files = (folder && folder.files) ? folder.files : [];
const childNames = Array.from(node.children.keys()).sort((a,b) => a.localeCompare(b));
const sortedFiles = (files || []).slice().sort((a,b) => a.localeCompare(b));
const out = [];
out.push(`<div class="mono file-folder-row" data-folder="${esc(folderKey)}" style="padding-left:${indent}px;">
<span class="file-folder-left">
<span class="folder-toggle">${collapsed ? '▶' : '▼'}</span>
<span>📂 ${esc(label)}</span>
</span>
<span class="file-folder-actions" onclick="event.stopPropagation();">
<button class="btn tiny ok" title="Nieuw bestand in ${esc(label)}" onclick="filesNewFileInFolder(decodeURIComponent('${encodeURIComponent(node.uiPath)}'))">+</button>
<button class="btn tiny bad" title="Verwijder map (alleen als leeg)" onclick="filesDeleteFolder(decodeURIComponent('${encodeURIComponent(node.uiPath)}'))">✕</button>
</span>
</div>`);
out.push(`<div class="file-folder-files" data-folder-files="${esc(folderKey)}" style="${collapsed ? 'display:none;' : ''}">`);
for (const name of childNames) {
out.push(renderNode(node.children.get(name), level + 1));
}
for (const f of sortedFiles) {
const fullUi = node.uiPath ? `${node.uiPath}/${f}` : f;
const fileKey = encodeURIComponent(fullUi);
out.push(`<div class="file-entry" data-file="${fileKey}" style="padding-left:${indent + 16}px;">
<span class="mono file-entry-name" onclick="filesOpen(decodeURIComponent('${fileKey}'))">📄 ${esc(f)}</span>
<span class="file-entry-state"></span>
</div>`);
}
if (!childNames.length && !sortedFiles.length) {
out.push(`<div class="muted" style="padding-left:${indent + 16}px; font-size:0.85em;">(leeg)</div>`);
}
out.push(`</div>`);
return out.join('');
}
const parts = [];
const topNames = Array.from(root.children.keys()).sort((a,b) => a.localeCompare(b));
for (const n of topNames) parts.push(renderNode(root.children.get(n), 0));
// Show files directly under "systemd/" (root) at the top
const rootFolder = folderByPath.get(FILES_ROOT);
if (rootFolder && (rootFolder.files || []).length) {
const folderKey = FILES_ROOT;
const collapsed = _isFolderCollapsed(folderKey, 0);
parts.unshift(`<div class="mono file-folder-row" data-folder="${esc(folderKey)}">
<span class="file-folder-left">
<span class="folder-toggle">${collapsed ? '▶' : '▼'}</span>
<span>📂 root</span>
</span>
<span class="file-folder-actions" onclick="event.stopPropagation();">
<button class="btn tiny ok" title="Nieuw bestand in root" onclick="filesNewFileInFolder('')">+</button>
</span>
</div>
<div class="file-folder-files" data-folder-files="${esc(folderKey)}" style="${collapsed ? 'display:none;' : ''}">
${(rootFolder.files || []).slice().sort((a,b)=>a.localeCompare(b)).map(f => {
const fileKey = encodeURIComponent(f);
return `<div class="file-entry" data-file="${fileKey}" style="padding-left:16px;">
<span class="mono file-entry-name" onclick="filesOpen(decodeURIComponent('${fileKey}'))">📄 ${esc(f)}</span>
<span class="file-entry-state"></span>
</div>`;
}).join('')}
</div>`);
}
treeEl.innerHTML = parts.join('');
treeEl.onclick = (ev) => {
const row = ev.target.closest('.file-folder-row');
if (!row) return;
const folderKey = row.getAttribute('data-folder');
const isNowCollapsed = !_isFolderCollapsed(folderKey);
_setFolderCollapsed(folderKey, isNowCollapsed);
// update the arrow
const arrow = row.querySelector('.folder-toggle');
if (arrow) arrow.textContent = isNowCollapsed ? '▶' : '▼';
// show/hide the files block
const filesBlock = treeEl.querySelector(`[data-folder-files="${CSS.escape(folderKey)}"]`);
if (filesBlock) filesBlock.style.display = isNowCollapsed ? 'none' : '';
};
filesUpdateTreeSelection();
filesUpdateEditorStatus();
}
async function filesOpen(uiPath) {
filesSetCurrent(uiPath);
const res = await api(`/files/read?path=${encodeURIComponent(filesCurrentApiPath)}`, 'GET');
const text = res.content || '';
if (cmEditor) {
cmEditor.setOption('mode', cmModeForPath(uiPath));
filesSuppressDirtyEvent = true;
cmEditor.setValue(text);
filesSuppressDirtyEvent = false;
cmEditor.refresh();
cmEditor.setCursor({ line: 0, ch: 0 });
} else {
document.getElementById('filesEditor').value = text;
}
filesSetDirty(false);
filesUpdateEditorStatus();
}
async function filesSave() {
if (!filesCurrentApiPath || filesCurrentApiPath === FILES_ROOT) {
return showModal('Files', 'Select a file first.');
}
const content = cmEditor
? cmEditor.getValue()
: document.getElementById('filesEditor').value;
const res = await api(
`/files/save?path=${encodeURIComponent(filesCurrentApiPath)}`,
'POST',
{ content }
);
filesSetDirty(false);
showModal('Saved', JSON.stringify(res, null, 2));
await filesRefresh();
}
async function filesDelete() {
if (!filesCurrentApiPath || filesCurrentApiPath === FILES_ROOT) {
return showModal('Files', 'Select a file to delete first.');
}
if (!confirm(`Delete: ${filesCurrentUiPath}?`)) return;
const res = await api(`/files/delete?path=${encodeURIComponent(filesCurrentApiPath)}`, 'DELETE');
showModal('Deleted', JSON.stringify(res, null, 2));
// reset current
filesSetCurrent('');
if (cmEditor) {
filesSuppressDirtyEvent = true;
cmEditor.setValue('');
filesSuppressDirtyEvent = false;
cmEditor.refresh();
} else {
document.getElementById('filesEditor').value = '';
}
await filesRefresh();
}
async function filesNewFolder() {
const ui = prompt('New folder (under systemd):\nExample: mediaserver', '');
if (!ui) return;
const apiPath = filesToApiPath(ui);
const res = await api(`/files/mkdir?path=${encodeURIComponent(apiPath)}`, 'POST');
showModal('Folder created', JSON.stringify(res, null, 2));
await filesRefresh();
}
async function filesNewFile() {
const ui = prompt('New file (under systemd):\nExample: demo-web/demo-web.container', '');
if (!ui) return;
const apiPath = filesToApiPath(ui);
// always create the file empty (by design)
const res = await api(`/files/save?path=${encodeURIComponent(apiPath)}`, 'POST', { content: "" });
showModal('File created', JSON.stringify(res, null, 2));
// Open it immediately
filesSetCurrent(ui);
const editorEl = document.getElementById('filesEditor');
if (editorEl) editorEl.value = "";
await filesRefresh();
await filesOpen(ui);
}
async function filesNewFileInFolder(uiFolderPath) {
const base = (uiFolderPath || '').trim().replace(/^\/+/, '');
const name = prompt(`New file in "${base || 'root'}"\nE.g. test.yaml or demo.container`, '');
if (!name) return;
const uiFull = base ? `${base}/${name}` : name;
const apiPath = filesToApiPath(uiFull);
// always create the file empty (by design)
const res = await api(`/files/save?path=${encodeURIComponent(apiPath)}`, 'POST', { content: "" });
showModal('File created', JSON.stringify(res, null, 2));
await filesRefresh();
await filesOpen(uiFull);
}
async function filesDeleteFolder(uiFolderPath) {
const base = (uiFolderPath || '').trim().replace(/^\/+/, '');
if (!base) {
return showModal('Files', 'Deleting the root folder is not allowed.');
}
if (!confirm(`Delete folder (only if empty): ${base}?`)) return;
const apiPath = filesToApiPath(base);
try {
const res = await api(`/files/rmdir?path=${encodeURIComponent(apiPath)}`, 'DELETE');
showModal('Folder deleted', JSON.stringify(res, null, 2));
await filesRefresh();
} catch (e) {
showModal('Cannot delete folder', e.message);
}
}
@@ -0,0 +1,429 @@
let imagesData = [];
let imagesSort = { field: null, dir: null };
async function loadImages() {
const tbody = document.getElementById("images-tbody");
try {
const res = await fetch("/api/images");
if (!res.ok) throw new Error(`HTTP ${res.status}`);
const images = await res.json();
imagesData = Array.isArray(images) ? images : [];
if (typeof window.updateNavCount === "function") {
window.updateNavCount("countNavImages", imagesData.length);
}
updateSortIndicators();
applyImageSorting();
} catch (e) {
imagesData = [];
if (typeof window.updateNavCount === "function") {
window.updateNavCount("countNavImages", 0);
}
if (tbody) {
const box = (typeof window.renderStateBox === "function")
? window.renderStateBox("error", "Failed to load images", e.message || String(e))
: "Failed to load images.";
tbody.innerHTML = `<tr><td colspan="8">${box}</td></tr>`;
}
}
}
function renderImages(images) {
const tbody = document.getElementById("images-tbody");
tbody.innerHTML = "";
if (!images.length) {
const box = (typeof window.renderStateBox === "function")
? window.renderStateBox("empty", "No images", "No images were found.")
: "No images found.";
tbody.innerHTML = `<tr><td colspan="8">${box}</td></tr>`;
return;
}
images.forEach(img => {
const tr = document.createElement("tr");
const repoTag = (img.RepoTags && img.RepoTags.length > 0)
? img.RepoTags[0]
: "&lt;none&gt;"; // escaped: this string is injected via innerHTML
const shortId = img.Id.substring(0, 12);
const sizeMB = (img.Size / 1024 / 1024).toFixed(1);
const created = img.Created ? new Date(img.Created * 1000).toLocaleString() : "-";
const containers = img.Containers || 0;
const fullId = img.Id;
const status = containers > 0
? `<span class="badge ok">In use</span>`
: `<span class="badge warn">Unused</span>`;
const disabled = containers > 0 ? "disabled" : "";
tr.innerHTML = `
<td>
<input type="checkbox" class="image-checkbox" value="${fullId}" ${disabled}>
</td>
<td>${repoTag}</td>
<td>${shortId}</td>
<td class="num">${sizeMB} MB</td>
<td class="muted">${created}</td>
<td class="num">${containers}</td>
<td>${status}</td>
<td>
<button class="btn small bad" onclick="removeSingleImage('${fullId}')" ${disabled}>
Remove
</button>
</td>
`;
tbody.appendChild(tr);
});
}
function toggleSelectAllImages(master) {
document.querySelectorAll(".image-checkbox:not(:disabled)")
.forEach(cb => cb.checked = master.checked);
}
async function removeSingleImage(id) {
if (!confirm("Image verwijderen?")) return;
await fetch("/api/images/" + encodeURIComponent(id), {
method: "DELETE"
});
await loadImages();
}
async function removeSelectedImages() {
const selected = Array.from(document.querySelectorAll(".image-checkbox:checked"))
.map(cb => cb.value);
if (!selected.length) {
alert("No images selected.");
return;
}
if (!confirm("Delete the selected images?")) return;
await fetch("/api/images/remove", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ images: selected })
});
await loadImages();
}
async function pruneUnusedImages() {
if (!confirm("Alle unused images verwijderen?")) return;
await fetch("/api/images/prune?all=true", {
method: "POST"
});
await loadImages();
}
// ---------- Build Modal ----------
function openBuildModal() {
document.getElementById("buildModalBack").style.display = "flex";
document.getElementById("buildOutput").value = "";
const ctxEl = document.getElementById("buildContext");
const tagEl = document.getElementById("buildTag");
// Reset auto-mode
tagEl.dataset.auto = "1";
// Update tag whenever context changes
ctxEl.oninput = () => {
if (tagEl.dataset.auto === "1") {
const suggestion = suggestTagFromContext(ctxEl.value);
tagEl.value = suggestion;
}
};
// If user types manually → stop auto mode
tagEl.oninput = () => {
tagEl.dataset.auto = "0";
};
}
function hideBuildModal() {
document.getElementById("buildModalBack").style.display = "none";
}
function closeBuildModal(e) {
if (e.target.id === "buildModalBack") hideBuildModal();
}
async function buildImage() {
const context = document.getElementById("buildContext").value.trim();
const dockerfile = document.getElementById("buildDockerfile").value.trim();
const tag = document.getElementById("buildTag").value.trim();
const pull = document.getElementById("buildPull").checked;
const nocache = document.getElementById("buildNoCache").checked;
const outputBox = document.getElementById("buildOutput");
if (!context || !dockerfile || !tag) {
alert("Vul context_dir, Dockerfile/Containerfile en tag in.");
return;
}
if (!ensureSystemdContextOrAlert(context)) {
return;
}
outputBox.value = "Starting build...\n";
try {
const res = await fetch("/api/images/build", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
context_dir: context,
dockerfile: dockerfile,
tag: tag,
pull: pull,
nocache: nocache
})
});
const ct = (res.headers.get("content-type") || "").toLowerCase();
let data;
if (ct.includes("application/json")) {
data = await res.json();
} else {
const text = await res.text();
data = { ok: res.ok, status: res.status, non_json: true, body: text.slice(0, 4000) };
}
if (!res.ok) {
outputBox.value += "\nERROR:\n" + JSON.stringify(data, null, 2);
return;
}
outputBox.value += data.output || "Build completed.";
await loadImages();
} catch (err) {
outputBox.value += "\nERROR:\n" + err.message;
}
}
// ---------- Dockerfile picker ----------
function openDockerfilePicker() {
document.getElementById("dfPickerBack").style.display = "flex";
const search = document.getElementById("dfPickerSearch");
if (search) {
search.value = "";
search.oninput = () => renderDockerfilePickerList(window.__dfPickerAll || []);
}
refreshDockerfilePicker();
}
function hideDockerfilePicker() {
document.getElementById("dfPickerBack").style.display = "none";
}
function closeDockerfilePicker(e) {
if (e.target.id === "dfPickerBack") hideDockerfilePicker();
}
async function refreshDockerfilePicker() {
const listEl = document.getElementById("dfPickerList");
listEl.textContent = "Laden...";
try {
const res = await fetch("/api/files/tree");
const tree = await res.json(); // [{path:"systemd/..", files:[...]}]
const candidates = [];
for (const folder of (tree || [])) {
const folderPath = (folder.path || "").replace(/^\/+/, ""); // e.g. systemd/foo
if (!folderPath || !(folderPath === "systemd" || folderPath.startsWith("systemd/"))) continue;
const files = folder.files || [];
for (const f of files) {
if (!isDockerfileName(f)) continue;
// full path under workloads-root (without leading slash)
const full = folderPath === "systemd" ? `systemd/${f}` : `${folderPath}/${f}`;
candidates.push(full);
}
}
// sort alphabetically
candidates.sort((a, b) => a.localeCompare(b));
window.__dfPickerAll = candidates;
renderDockerfilePickerList(candidates);
} catch (e) {
listEl.textContent = "Fout bij laden: " + (e.message || e);
}
}
function isDockerfileName(name) {
const n = String(name || "").toLowerCase();
if (n === "dockerfile" || n === "containerfile") return true;
if (n.endsWith(".dockerfile") || n.endsWith(".containerfile")) return true;
return false;
}
function renderDockerfilePickerList(all) {
const listEl = document.getElementById("dfPickerList");
const q = (document.getElementById("dfPickerSearch")?.value || "").trim().toLowerCase();
const filtered = (all || []).filter(p => !q || p.toLowerCase().includes(q));
if (!filtered.length) {
listEl.innerHTML = `<div class="muted">No matches.</div>`;
return;
}
// Render as clickable buttons
listEl.innerHTML = filtered.map(p => {
const safe = p.replace(/"/g, "&quot;");
return `
<div style="display:flex; align-items:center; justify-content:space-between; gap:10px; padding:6px 0; border-bottom:1px dashed rgba(36,52,95,.35);">
<span>${safe}</span>
<button class="btn small ok" type="button" onclick="chooseDockerfilePath('${encodeURIComponent(p)}')">Kies</button>
</div>
`;
}).join("");
}
function chooseDockerfilePath(encodedPath) {
const fullPath = decodeURIComponent(encodedPath);
const idx = fullPath.lastIndexOf("/");
const contextDir = idx > 0 ? fullPath.substring(0, idx) : "systemd";
const dockerfile = idx > 0 ? fullPath.substring(idx + 1) : fullPath;
const ctxEl = document.getElementById("buildContext");
const tagEl = document.getElementById("buildTag");
ctxEl.value = contextDir;
document.getElementById("buildDockerfile").value = dockerfile;
if (tagEl.dataset.auto !== "0") {
const suggestion = suggestTagFromContext(contextDir);
tagEl.value = suggestion;
tagEl.dataset.auto = "1";
}
hideDockerfilePicker();
}
// ---------- Build helpers (4.3c) ----------
function suggestTagFromContext(contextDir) {
const p = String(contextDir || "").trim().replace(/^\/+/, "");
if (!p.startsWith("systemd/")) return "";
const parts = p.split("/").filter(Boolean);
// Just "systemd" or "systemd/" is not a valid image name
if (parts.length <= 1) return "";
const name = parts[parts.length - 1];
const safe = name
.toLowerCase()
.replace(/[^a-z0-9._-]+/g, "-")
.replace(/-+/g, "-")
.replace(/^-|-$/g, "");
return safe ? `localhost/${safe}:latest` : "";
}
function ensureSystemdContextOrAlert(contextDir) {
const p = String(contextDir || "").trim().replace(/^\/+/, "");
if (!p.startsWith("systemd/")) {
alert("Context directory moet beginnen met: systemd/\nVoorbeeld: systemd/buildtests/hello");
return false;
}
return true;
}
function sortImages(field) {
if (imagesSort.field !== field) {
imagesSort.field = field;
imagesSort.dir = "asc";
} else if (imagesSort.dir === "asc") {
imagesSort.dir = "desc";
} else if (imagesSort.dir === "desc") {
imagesSort.field = null;
imagesSort.dir = null;
} else {
imagesSort.dir = "asc";
}
updateSortIndicators();
applyImageSorting();
}
function applyImageSorting() {
let data = [...imagesData];
if (imagesSort.field && imagesSort.dir) {
data.sort((a, b) => {
let va, vb;
switch (imagesSort.field) {
case "repo":
va = (a.RepoTags && a.RepoTags[0]) || "";
vb = (b.RepoTags && b.RepoTags[0]) || "";
break;
case "id":
va = a.Id || "";
vb = b.Id || "";
break;
case "size":
va = a.Size || 0;
vb = b.Size || 0;
break;
case "created":
va = a.Created || 0;
vb = b.Created || 0;
break;
case "containers":
va = a.Containers || 0;
vb = b.Containers || 0;
break;
}
if (typeof va === "string") {
return imagesSort.dir === "asc"
? va.localeCompare(vb)
: vb.localeCompare(va);
} else {
return imagesSort.dir === "asc"
? va - vb
: vb - va;
}
});
}
renderImages(data);
}
function updateSortIndicators() {
// default: show that every column is sortable
document.querySelectorAll(".sort-indicator").forEach(el => el.textContent = "↕");
// no sort active: keep the defaults
if (!imagesSort.field || !imagesSort.dir) return;
// active column: ▲ or ▼
const el = document.getElementById("sort-" + imagesSort.field);
if (el) {
el.textContent = imagesSort.dir === "asc" ? "▲" : "▼";
}
}
File diff suppressed because it is too large
@@ -0,0 +1,219 @@
let volumesData = [];
let volumeContainersMap = {};
async function loadVolumes() {
const tbody = document.getElementById("volumes-tbody");
try {
const [volumes, containers] = await Promise.all([
fetch("/api/volumes").then(r => { if (!r.ok) throw new Error(`HTTP ${r.status}`); return r.json(); }),
fetch("/api/containers-dashboard").then(r => r.ok ? r.json() : []).catch(() => [])
]);
volumesData = Array.isArray(volumes) ? volumes : [];
// containers-dashboard returns Mounts as strings (destination paths).
// Full mount info (Type + Name) is only available from the inspect endpoint.
// Fetch inspect for all containers with non-empty Mounts, in parallel.
const containerList = Array.isArray(containers) ? containers : [];
const withMounts = containerList.filter(c => (c.Mounts || []).length > 0);
const inspectResults = await Promise.all(
withMounts.map(c => {
const name = (c.Names && c.Names[0]) || "";
if (!name) return Promise.resolve(null);
return fetch("/api/containers/inspect/" + encodeURIComponent(name))
.then(r => r.ok ? r.json() : null)
.catch(() => null);
})
);
// Build volume → containers mapping: keep only Type === "volume"
volumeContainersMap = {};
for (let i = 0; i < withMounts.length; i++) {
const inspect = inspectResults[i];
if (!inspect) continue;
const cname = (withMounts[i].Names && withMounts[i].Names[0]) || "";
for (const m of (inspect.Mounts || [])) {
if (m.Type === "volume" && m.Name) {
(volumeContainersMap[m.Name] = volumeContainersMap[m.Name] || []).push(cname);
}
}
}
if (typeof window.updateNavCount === "function") {
window.updateNavCount("countNavVolumes", volumesData.length);
}
renderVolumes(volumesData);
} catch (e) {
volumesData = [];
if (typeof window.updateNavCount === "function") window.updateNavCount("countNavVolumes", 0);
if (tbody) {
const box = typeof window.renderStateBox === "function"
? window.renderStateBox("error", "Volumes laden mislukt", e.message || String(e))
: "Volumes laden mislukt.";
tbody.innerHTML = `<tr><td colspan="7">${box}</td></tr>`;
}
}
}
function _volRelTime(isoStr) {
if (!isoStr) return "-";
const d = new Date(isoStr);
if (isNaN(d)) return String(isoStr);
const s = Math.floor((Date.now() - d.getTime()) / 1000);
if (s < 60) return `${s}s geleden`;
if (s < 3600) return `${Math.floor(s / 60)}m geleden`;
if (s < 86400) return `${Math.floor(s / 3600)}u geleden`;
return `${Math.floor(s / 86400)} dagen geleden`;
}
function _volEsc(s) {
return String(s || "")
.replace(/&/g, "&amp;")
.replace(/"/g, "&quot;")
.replace(/</g, "&lt;")
.replace(/>/g, "&gt;");
}
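Note that _volEsc() replaces `&` before the other entities; doing it in any other order would double-escape the `&` that `&quot;`, `&lt;`, and `&gt;` introduce. A standalone sketch of the same chain (`esc` is an illustrative name):

```javascript
// Sketch of the escaping order in _volEsc(): "&" must go first, otherwise
// the "&" inside the entities emitted by the later replaces would itself
// be rewritten to "&amp;".
function esc(s) {
  return String(s || "")
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

console.log(esc('<b title="a&b">')); // → "&lt;b title=&quot;a&amp;b&quot;&gt;"
```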
function renderVolumes(volumes) {
const tbody = document.getElementById("volumes-tbody");
if (!tbody) return;
tbody.innerHTML = "";
if (!volumes.length) {
const box = typeof window.renderStateBox === "function"
? window.renderStateBox("empty", "Geen volumes", "Er zijn momenteel geen volumes gevonden.")
: "Geen volumes gevonden.";
tbody.innerHTML = `<tr><td colspan="7">${box}</td></tr>`;
return;
}
volumes.forEach(vol => {
const name = vol.Name || "-";
const driver = vol.Driver || "-";
const mp = vol.Mountpoint || "";
const mpShort = mp.length > 45 ? mp.slice(0, 42) + "…" : mp;
const created = _volRelTime(vol.CreatedAt);
const labels = vol.Labels || {};
const cNames = volumeContainersMap[name] || [];
const inUse = cNames.length > 0;
const labelHtml = Object.keys(labels).length
? Object.keys(labels).map(k =>
`<span class="badge muted" title="${_volEsc(k + "=" + labels[k])}">${_volEsc(k)}</span>`
).join(" ")
: `<span class="muted">-</span>`;
const containersHtml = cNames.length
? cNames.map(n => `<span class="badge ok">${_volEsc(n)}</span>`).join(" ")
: `<span class="muted">-</span>`;
const nameEnc = encodeURIComponent(name);
const disabledAttr = inUse ? `disabled title="In gebruik door een container"` : "";
const tr = document.createElement("tr");
tr.innerHTML = `
<td><strong>${_volEsc(name)}</strong></td>
<td class="muted">${_volEsc(driver)}</td>
<td class="muted mono" title="${_volEsc(mp)}">${_volEsc(mpShort)}</td>
<td class="muted">${created}</td>
<td>${labelHtml}</td>
<td>${containersHtml}</td>
<td>
<button class="btn small bad" onclick="removeVolume(decodeURIComponent('${nameEnc}'))" ${disabledAttr}>
Verwijder
</button>
</td>
`;
tbody.appendChild(tr);
});
}
async function removeVolume(name) {
if (!confirm(`Volume '${name}' verwijderen?\nDit kan niet ongedaan worden gemaakt.`)) return;
try {
const res = await fetch("/api/volumes/" + encodeURIComponent(name), { method: "DELETE" });
if (!res.ok) {
const body = await res.text().catch(() => "");
alert(`Verwijderen mislukt (${res.status}): ${body}`);
return;
}
await loadVolumes();
} catch (e) {
alert(`Fout: ${e.message}`);
}
}
async function pruneVolumes() {
if (!confirm(
"Prune volumes\n\n" +
"Dit verwijdert alle volumes die niet aan een container gekoppeld zijn.\n" +
"Dit kan niet ongedaan worden gemaakt.\n\n" +
"Doorgaan?"
)) return;
try {
const res = await fetch("/api/volumes/prune", { method: "POST" });
if (!res.ok) {
const body = await res.text().catch(() => "");
alert(`Prune mislukt (${res.status}): ${body}`);
return;
}
const data = await res.json();
const removed = Array.isArray(data) ? data.length : 0;
alert(`Prune voltooid. ${removed} volume(s) verwijderd.`);
await loadVolumes();
} catch (e) {
alert(`Fout: ${e.message}`);
}
}
// ---- Create Volume Modal ----
function openCreateVolumeModal() {
document.getElementById("createVolumeModalBack").style.display = "flex";
document.getElementById("createVolumeName").value = "";
document.getElementById("createVolumeLabels").value = "";
}
function hideCreateVolumeModal() {
document.getElementById("createVolumeModalBack").style.display = "none";
}
function closeCreateVolumeModal(e) {
if (e.target.id === "createVolumeModalBack") hideCreateVolumeModal();
}
async function createVolume() {
const name = document.getElementById("createVolumeName").value.trim();
if (!name) { alert("Naam is verplicht."); return; }
const labelsRaw = document.getElementById("createVolumeLabels").value.trim();
const labels = {};
if (labelsRaw) {
for (const line of labelsRaw.split(/\r?\n/)) {
const l = line.trim();
if (!l) continue;
const idx = l.indexOf("=");
if (idx > 0) labels[l.slice(0, idx).trim()] = l.slice(idx + 1).trim();
}
}
const body = { name };
if (Object.keys(labels).length) body.labels = labels;
try {
const res = await fetch("/api/volumes", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(body)
});
if (!res.ok) {
const err = await res.text().catch(() => "");
alert(`Aanmaken mislukt (${res.status}): ${err}`);
return;
}
hideCreateVolumeModal();
await loadVolumes();
} catch (e) {
alert(`Fout: ${e.message}`);
}
}
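The labels textarea accepts one key=value pair per line; createVolume() splits on newlines and takes everything after the first `=` as the value. The parsing loop can be exercised on its own (`parseLabels` is an illustrative name):

```javascript
// Sketch of the "key=value per line" label parsing in createVolume().
// Blank lines are skipped; a line whose first "=" is at position 0 has an
// empty key and is ignored; later "=" characters stay in the value.
function parseLabels(raw) {
  const labels = {};
  for (const line of String(raw || "").split(/\r?\n/)) {
    const l = line.trim();
    if (!l) continue;
    const idx = l.indexOf("=");
    if (idx > 0) labels[l.slice(0, idx).trim()] = l.slice(idx + 1).trim();
  }
  return labels;
}

console.log(parseLabels("env=prod\napp = bookstack\n\n=bad"));
// → { env: "prod", app: "bookstack" }
```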
File diff suppressed because one or more lines are too long
@@ -0,0 +1,39 @@
<!doctype html>
<html lang="nl">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<title>API Documentatie — Podman MVP</title>
<link rel="stylesheet" href="../assets/swagger-ui/swagger-ui.css" />
<style>
body { margin: 0; }
.topbar { background: #1a1a2e; padding: 12px 20px; display: flex; align-items: center; gap: 16px; }
.topbar a { color: #ccc; text-decoration: none; font-size: 0.85rem; }
.topbar a:hover { color: #fff; }
.topbar-title { color: #fff; font-weight: 600; font-size: 1rem; }
</style>
</head>
<body>
<div class="topbar">
<span class="topbar-title">Podman MVP — API Documentatie</span>
<a href="/">← Terug naar UI</a>
</div>
<div id="swagger-ui"></div>
<script src="../assets/swagger-ui/swagger-ui-bundle.js"></script>
<script>
SwaggerUIBundle({
url: '/api/openapi.json',
dom_id: '#swagger-ui',
deepLinking: true,
presets: [SwaggerUIBundle.presets.apis, SwaggerUIBundle.SwaggerUIStandalonePreset],
layout: 'BaseLayout',
tryItOutEnabled: true,
requestInterceptor: (req) => {
// Route "Try it out" through the same origin (avoids CORS issues)
req.url = req.url.replace(/^https?:\/\/[^/]+/, '');
return req;
},
});
</script>
</body>
</html>
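The requestInterceptor in the page above rewrites any absolute URL Swagger UI builds into a same-origin relative one. The rewrite in isolation:

```javascript
// Sketch of the origin-stripping rewrite used by the Swagger requestInterceptor:
// a leading http(s) scheme + host is dropped, so the browser sends the request
// to whatever origin is serving the docs page. Already-relative URLs pass through.
const stripOrigin = (url) => url.replace(/^https?:\/\/[^/]+/, "");

console.log(stripOrigin("http://localhost:8080/api/volumes")); // → "/api/volumes"
console.log(stripOrigin("/api/openapi.json"));                 // → "/api/openapi.json"
```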
File diff suppressed because it is too large