# PLAN.md

Note: `project_docs/SPARRING_GPT_PROJECT_PROMPT.md` was not found in this repository on 10 March 2026. This plan was drawn up based on the other documents in `project_docs/`.

## 1. Purpose of the application

The application is a web-based storage manager for self-hosted environments, providing a secure browser interface for managing files within strictly whitelisted root directories.

Core value of v1:

- safe and predictable filesystem operations within the whitelist
- a transparent task model for long-running copy/move actions
- stable API contracts, guarded by golden tests

## 2. V1 scope and out of scope

In scope for v1:

- directory browsing within whitelist roots
- file operations: `rename`, `delete`, `mkdir`
- task-based operations: `copy`, `move`
- task status/progress polling
- bookmarks CRUD (minimum: list/create/delete)
- history logging of executed operations
- security path controls: canonicalisation, traversal blocking, symlink escape blocking

Out of scope for v1:

- user management and authentication/authorisation
- fine-grained permissions management
- cloud storage integrations
- distributed/multi-node storage
- advanced job scheduling (priorities, parallel queue tuning, retry policy)
- recycle bin/undo functionality

## 3. Proposed architecture for the first version

Technical stack:

- Backend: Python + FastAPI
- Frontend: lightweight JavaScript UI (no heavy framework)
- Storage: SQLite for tasks, bookmarks, history

Architecture v1 (monolithic, modular):

- `API layer` (FastAPI routes): validation, response shaping, HTTP error codes
- `Service layer`: use-case logic (browse, file ops, tasks, bookmarks)
- `Filesystem layer`: encapsulated filesystem calls
- `Security/path-guard`: central path resolution and whitelist enforcement
- `Task runner`: background worker for copy/move with persistent status
- `Repository layer`: SQLite access for tasks/bookmarks/history

Data flow for copy/move:

1. The API receives the request and validates the payload.
2. `path_guard` validates source/destination against the whitelist and security rules.
3. The service creates a task record with status `queued`.
4. A worker picks up the task, sets its status to `running`, performs the operation, and updates progress.
5. On completion the status becomes `completed` or `failed`; the result and any errors are stored.
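
Step 2 above, the central path-guard resolution, could be sketched roughly as follows. This is a minimal illustration, not the final implementation: the root aliases and mount points are assumptions, and traversal and whitelist violations are collapsed into a single containment check rather than the separate error codes the plan defines.

```python
from pathlib import Path

# Illustrative whitelist: root alias -> absolute root (assumed, not final config)
ROOTS = {
    "storage1": Path("/Volumes/8TB"),
    "archive": Path("/Volumes/8TB_RAID1"),
}

class PathGuardError(ValueError):
    """Raised when a path fails whitelist/traversal checks."""

def resolve_path(root_relative: str) -> Path:
    """Resolve a root-relative path like 'storage1/series' to a safe absolute path."""
    alias, _, subpath = root_relative.partition("/")
    root = ROOTS.get(alias)
    if root is None:
        raise PathGuardError("invalid_root_alias")
    # resolve() canonicalises '..' components and symlinks before containment check
    candidate = (root / subpath).resolve() if subpath else root
    if not candidate.is_relative_to(root):
        raise PathGuardError("path_outside_whitelist")
    return candidate
```

A real `path_guard.py` would additionally distinguish `path_traversal_detected` and `symlink_escape_detected`, per the error model in section 10.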

## 4. Proposed backend project structure

```text
backend/
  app/
    main.py
    config.py
    logging.py
    api/
      schemas.py
      errors.py
      routes_browse.py
      routes_files.py
      routes_tasks.py
      routes_bookmarks.py
    services/
      browse_service.py
      file_ops_service.py
      task_service.py
      bookmark_service.py
      history_service.py
    security/
      path_guard.py
    fs/
      filesystem_adapter.py
    tasks/
      worker.py
      progress.py
      transitions.py
    db/
      sqlite.py
      models.py
      migrations/
        001_init.sql
      repositories/
        task_repo.py
        bookmark_repo.py
        history_repo.py
  tests/
    unit/
      test_path_guard.py
      test_task_transitions.py
    feature/
      test_browse_flow.py
      test_file_ops_flow.py
      test_task_flow.py
    regression/
      test_path_traversal_blocked.py
      test_unicode_filenames.py
      test_large_directory_listing.py
    golden/
      test_api_browse_golden.py
      test_api_errors_golden.py
```

Guidelines:

- security checks only via `path_guard.py` (no ad-hoc checks in routes)
- filesystem calls only via `filesystem_adapter.py`
- API shapes remain stable and are guarded by golden tests

## 5. Key API endpoints for version 1

Browse:

- `GET /api/browse?path=<root-relative>`

File operations:

- `POST /api/files/rename`
- `POST /api/files/delete`
- `POST /api/files/mkdir`
- `POST /api/files/copy` (task-based)
- `POST /api/files/move` (task-based)

Tasks:

- `GET /api/tasks/{task_id}`
- `GET /api/tasks`

Bookmarks:

- `GET /api/bookmarks`
- `POST /api/bookmarks`
- `DELETE /api/bookmarks/{id}`

## 6. Browse contract (v1, tightened)

Path model:

- In v1 the API only accepts `root-relative` paths with an explicit root alias.
- Proposed request form: `GET /api/browse?path=<root>/<subpath>`
- Examples: `storage1/`, `storage1/series`, `archive/docs/2026`
- Absolute host paths are not accepted in the public API, to avoid ambiguity.

Success response:

```json
{
  "path": "storage1/series",
  "directories": [
    {
      "name": "ShowA",
      "path": "storage1/series/ShowA",
      "modified": "2026-03-10T12:00:00Z"
    }
  ],
  "files": [
    {
      "name": "episode.mkv",
      "path": "storage1/series/episode.mkv",
      "size": 734003200,
      "modified": "2026-03-10T11:30:00Z"
    }
  ]
}
```

Metadata fields (v1):

- for directories: `name`, `path`, `modified`
- for files: `name`, `path`, `size`, `modified`
- `modified` as a UTC ISO-8601 string
- no checksum/mime in v1

Hidden files policy:

- default: hidden entries (`.`-prefixed) are not shown.
- optional query flag: `show_hidden=true` to include them explicitly.
- `.` and `..` are never returned.
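
A minimal sketch of how a browse service could build this response shape, including the default hidden-file filtering; the function name and signature are assumptions for illustration, and path-guard validation is assumed to have happened before this point.

```python
import os
from datetime import datetime, timezone

def list_directory(fs_path: str, rel_path: str, show_hidden: bool = False) -> dict:
    """Build the v1 browse response shape for a single directory."""
    dirs, files = [], []
    for entry in sorted(os.scandir(fs_path), key=lambda e: e.name):
        if not show_hidden and entry.name.startswith("."):
            continue  # hidden entries are filtered out by default
        st = entry.stat(follow_symlinks=False)
        modified = datetime.fromtimestamp(st.st_mtime, tz=timezone.utc)
        item = {
            "name": entry.name,
            "path": f"{rel_path}/{entry.name}",
            "modified": modified.strftime("%Y-%m-%dT%H:%M:%SZ"),
        }
        if entry.is_dir(follow_symlinks=False):
            dirs.append(item)
        else:
            files.append({**item, "size": st.st_size})
    return {"path": rel_path, "directories": dirs, "files": files}
```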

Error responses:

- `400` for an invalid `path` parameter or malformed input
- `403` for a path outside the whitelist / security block
- `404` if the path does not exist
- `409` on a path type mismatch (e.g. a file instead of a directory)
- `500` on an unexpected I/O error

## 7. Delete behaviour (safety details, v1)

Endpoint:

- `POST /api/files/delete`
- request: `{ "path": "<root-relative>", "recursive": false }`

Behaviour for files vs directories:

- file delete: removed directly if the path is valid
- directory delete:
  - by default only an empty directory (`recursive=false`)
  - a non-empty directory yields `409 directory_not_empty`
  - recursive delete only with an explicit `recursive=true`

Non-empty directories:

- without recursive: blocked with a clear error message
- with recursive: allowed within the whitelist, with a strict guard against symlink escapes during traversal

Error handling:

- `404` target does not exist
- `403` security/path violations
- `409` directory non-empty without recursive, or an operation conflict
- `500` unexpected filesystem error

Confirmation assumptions:

- the backend performs no interactive confirmation
- the frontend must confirm destructive actions before calling the endpoint
- frontend proposal: extra confirmation text for recursive delete
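
The delete decision tree above can be sketched as follows. This is an illustrative outline, assuming the path has already passed `path_guard`; the per-entry symlink-escape guard during recursive traversal is deliberately omitted here and would need to be added in the real implementation.

```python
import os
import shutil

def delete_path(fs_path: str, recursive: bool = False) -> None:
    """Apply the v1 delete rules to an already-validated filesystem path."""
    if not os.path.lexists(fs_path):
        raise FileNotFoundError("path_not_found")      # -> 404
    if not os.path.isdir(fs_path) or os.path.islink(fs_path):
        os.remove(fs_path)                             # plain file (or symlink entry)
    elif os.listdir(fs_path) and not recursive:
        raise OSError("directory_not_empty")           # -> 409, explicit opt-in required
    elif recursive:
        shutil.rmtree(fs_path)                         # only on explicit recursive=true
    else:
        os.rmdir(fs_path)                              # empty directory
```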

## 8. Task/progress model (concrete, v1)

Progress model:

- primary metric: bytes (`done_bytes`, `total_bytes`) for copy/move of files
- secondary metric for directory operations: `done_items`, `total_items`
- the API returns both where available

When the total is unknown:

- `total_bytes` and/or `total_items` remain `null`
- the client shows an indeterminate progress state
- `done_*` can still increase

Partial failure behaviour:

- v1 policy: fail-fast per task
- the first fatal error sets the task to `failed`
- items already moved/copied remain in place (no rollback in v1)
- the response contains `failed_item` and error details

Final statuses in v1:

- `completed`
- `failed`
- `queued` and `running` as intermediate statuses
- `canceled` not in v1
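
The status set above implies a small state machine, which `tasks/transitions.py` could capture as a transition table; this sketch uses the exact status names from the plan, but the function name is an assumption.

```python
# Allowed v1 task status transitions; completed/failed are terminal.
TRANSITIONS = {
    "queued": {"running"},
    "running": {"completed", "failed"},
    "completed": set(),
    "failed": set(),
}

def transition(current: str, new: str) -> str:
    """Return the new status, or raise if the transition is illegal."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new
```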

## 9. Proposed SQLite schema (v1)

Table `tasks`:

- `id TEXT PRIMARY KEY`
- `operation TEXT NOT NULL` (`copy`/`move`)
- `source_path TEXT NOT NULL`
- `destination_path TEXT NOT NULL`
- `status TEXT NOT NULL` (`queued`/`running`/`completed`/`failed`)
- `done_bytes INTEGER NULL`
- `total_bytes INTEGER NULL`
- `done_items INTEGER NULL`
- `total_items INTEGER NULL`
- `current_item TEXT NULL`
- `error_code TEXT NULL`
- `error_message TEXT NULL`
- `failed_item TEXT NULL`
- `created_at TEXT NOT NULL`
- `started_at TEXT NULL`
- `finished_at TEXT NULL`

Indexes:

- `idx_tasks_status_created_at(status, created_at DESC)`
- `idx_tasks_created_at(created_at DESC)`

Table `bookmarks`:

- `id INTEGER PRIMARY KEY AUTOINCREMENT`
- `label TEXT NOT NULL`
- `path TEXT NOT NULL UNIQUE`
- `created_at TEXT NOT NULL`
- `updated_at TEXT NOT NULL`

Table `history`:

- `id INTEGER PRIMARY KEY AUTOINCREMENT`
- `operation TEXT NOT NULL`
- `path TEXT NULL`
- `source_path TEXT NULL`
- `destination_path TEXT NULL`
- `status TEXT NOT NULL`
- `task_id TEXT NULL`
- `error_code TEXT NULL`
- `error_message TEXT NULL`
- `created_at TEXT NOT NULL`

Indexes:

- `idx_history_created_at(created_at DESC)`
- `idx_history_operation_created_at(operation, created_at DESC)`
- `idx_history_task_id(task_id)`
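
As one possible rendering, the `tasks` columns and indexes above could land in `migrations/001_init.sql` roughly like this (shown here executed via `sqlite3` for verifiability). The `CHECK` constraints are an added assumption on top of the plan, not a stated requirement.

```python
import sqlite3

# Sketch of the `tasks` portion of migrations/001_init.sql.
DDL = """
CREATE TABLE tasks (
    id TEXT PRIMARY KEY,
    operation TEXT NOT NULL CHECK (operation IN ('copy', 'move')),
    source_path TEXT NOT NULL,
    destination_path TEXT NOT NULL,
    status TEXT NOT NULL
        CHECK (status IN ('queued', 'running', 'completed', 'failed')),
    done_bytes INTEGER,
    total_bytes INTEGER,
    done_items INTEGER,
    total_items INTEGER,
    current_item TEXT,
    error_code TEXT,
    error_message TEXT,
    failed_item TEXT,
    created_at TEXT NOT NULL,
    started_at TEXT,
    finished_at TEXT
);
CREATE INDEX idx_tasks_status_created_at ON tasks (status, created_at DESC);
CREATE INDEX idx_tasks_created_at ON tasks (created_at DESC);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
```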

## 10. API error model (v1)

Standard error shape:

```json
{
  "error": {
    "code": "path_outside_whitelist",
    "message": "Requested path is outside allowed roots",
    "details": {
      "path": "storage1/../../etc"
    }
  }
}
```

Error fields:

- `code`: machine-readable, stable for client logic
- `message`: human-readable summary
- `details`: optional object with context (no sensitive host paths)

Proposed error codes:

- security/path:
  - `path_outside_whitelist`
  - `path_traversal_detected`
  - `symlink_escape_detected`
  - `invalid_root_alias`
- not found/conflict/validation:
  - `path_not_found`
  - `already_exists`
  - `directory_not_empty`
  - `invalid_request`
  - `validation_error`
- operational:
  - `io_error`
  - `internal_error`

HTTP mapping:

- `400`: `invalid_request`, `validation_error`
- `403`: security/path errors
- `404`: `path_not_found`, `task_not_found`, `bookmark_not_found`
- `409`: `already_exists`, `directory_not_empty`, type conflicts
- `500`: `io_error`, `internal_error`
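
The code-to-status mapping could live in one table that `api/errors.py` uses to shape every error response; this is a sketch, and the helper name `error_response` is an assumption.

```python
# HTTP status per error code, mirroring the mapping above.
STATUS_BY_CODE = {
    "invalid_request": 400, "validation_error": 400,
    "path_outside_whitelist": 403, "path_traversal_detected": 403,
    "symlink_escape_detected": 403, "invalid_root_alias": 403,
    "path_not_found": 404, "task_not_found": 404, "bookmark_not_found": 404,
    "already_exists": 409, "directory_not_empty": 409,
    "io_error": 500, "internal_error": 500,
}

def error_response(code: str, message: str, details=None):
    """Return (http_status, body) in the standard error shape."""
    body = {"error": {"code": code, "message": message}}
    if details:
        body["error"]["details"] = details
    return STATUS_BY_CODE.get(code, 500), body
```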

## 11. Minimal frontend note for v1

Main screens:

- `Browser view`: directory listing + actions (rename/delete/mkdir/copy/move)
- `Tasks view`: list of active/recent tasks with detail status
- `Bookmarks`: quick navigation to saved paths

Task polling:

- with active tasks, poll `GET /api/tasks/{id}` or the batch `GET /api/tasks` every 1-2 seconds
- polling stops automatically at a final status (`completed`/`failed`)
- the UI shows determinate progress when totals are known, otherwise an indeterminate indicator

Error display:

- show errors using `error.message`
- client logic can branch on `error.code` (e.g. a specific message for `directory_not_empty`)
- destructive actions (delete, especially recursive) get an explicit confirmation dialog

## 12. Implementation plan in small steps

1. Set up the backend skeleton (FastAPI app, routers, config, logging).
2. Implement `path_guard` with the root-alias model and central security checks.
3. Add unit tests for whitelist/traversal/symlink scenarios.
4. Build the browse endpoint according to the tightened contract, incl. the hidden files policy.
5. Add golden tests for browse success + browse error responses.
6. Implement `rename`, `mkdir`, `delete` with safe delete behaviour (recursive policy).
7. Add feature tests for file operations, incl. non-empty directory errors.
8. Add the SQLite schema (`tasks`, `bookmarks`, `history`) with repositories.
9. Implement the task state machine and transitions + unit tests.
10. Implement the worker for copy/move with bytes/items progress.
11. Implement the task endpoints (`create`, `get`, `list`) with final statuses.
12. Feature tests for copy/move + partial failure behaviour + polling flow.
13. Implement the bookmarks endpoints + basic tests.
14. Regression tests for path traversal, unicode, large/nested directories.
15. Wire up the frontend v1 basic flows (browse, actions, task polling, error display).
16. Final check: full test run + golden contract verification.

## 13. Governance note

According to `CHANGE_POLICY.md` and `SAFE_FILES.md`, API changes and DB schema changes fall under "proposal required first". This tightened `PLAN.md` is the proposal that must be approved first. After explicit approval, implementation can start.

# Dockerfile

```dockerfile
# Use Debian Trixie (13) as a stable base
FROM debian:trixie-slim

# Install the required tools:
# rsync for data, python3 for the backend, sqlite3 for data storage
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    python3-full \
    rsync \
    sqlite3 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Create the directory structure as proposed
WORKDIR /app
RUN mkdir -p /app/backend /app/html /app/conf /Volumes/8TB /Volumes/8TB_RAID1

# Install a lightweight Python API framework (FastAPI)
# We use --break-system-packages because we are inside a container
RUN pip3 install fastapi uvicorn --break-system-packages

# Expose the port for the web interface
EXPOSE 8030

# Start script (placeholder until main.py exists); port kept in sync with EXPOSE
CMD ["python3", "-m", "uvicorn", "backend.main:app", "--host", "0.0.0.0", "--port", "8030", "--reload"]
```

# AGENTS.md

This document describes how AI agents must work in this project.

## Roles

### GPT

Responsible for:

- architecture
- analysis
- task breakdown
- test strategy
- Codex prompts
- review of changes

### Codex

Responsible for:

- implementing small changes
- adding tests
- updating documentation

Codex may **not change architecture** without instruction.

# API_GOLDEN.md

This document defines stable API responses.
Fields may not change without updating the golden tests.

## Browse

### `GET /api/browse`
Response shape:
```json
{
  "path": "storage1",
  "directories": [],
  "files": []
}
```
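
A golden test boils down to comparing a live response against a stored body, failing loudly on any field drift; a minimal helper (name and setup are assumptions about the eventual test suite) could look like:

```python
def assert_matches_golden(body: dict, golden: dict) -> None:
    """Fail when a response drifts from the stored golden body."""
    assert set(body.keys()) == set(golden.keys()), "response field set changed"
    assert body == golden, "response field values changed"

# Example against the browse golden shape above:
golden = {"path": "storage1", "directories": [], "files": []}
assert_matches_golden({"path": "storage1", "directories": [], "files": []}, golden)
```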

## File Ops (direct)

### `POST /api/files/mkdir`
Success:
```json
{ "path": "storage1/parent/new_dir" }
```

Conflict (`already_exists`):
```json
{
  "error": {
    "code": "already_exists",
    "message": "Target path already exists",
    "details": { "path": "storage1/parent/new_dir" }
  }
}
```

Invalid name (`invalid_request`):
```json
{
  "error": {
    "code": "invalid_request",
    "message": "Invalid name",
    "details": { "name": "bad/name" }
  }
}
```

### `POST /api/files/rename`
Success:
```json
{ "path": "storage1/parent/new_name.ext" }
```

Conflict (`already_exists`) and invalid name (`invalid_request`) use the same error shape as mkdir.

### `POST /api/files/delete`
Success:
```json
{ "path": "storage1/parent/file_or_empty_dir" }
```

Non-empty directory:
```json
{
  "error": {
    "code": "directory_not_empty",
    "message": "Directory is not empty",
    "details": { "path": "storage1/parent/non_empty" }
  }
}
```

## Task-based create endpoints

### `POST /api/files/copy`
### `POST /api/files/move`
Success (202):
```json
{
  "task_id": "<uuid>",
  "status": "queued"
}
```

## Tasks read endpoints

### `GET /api/tasks`
Response shape:
```json
{
  "items": [
    {
      "id": "<uuid>",
      "operation": "copy",
      "status": "running",
      "source": "storage1/a.txt",
      "destination": "storage2/a.txt",
      "created_at": "2026-03-10T10:00:00Z",
      "finished_at": null
    }
  ]
}
```

### `GET /api/tasks/{task_id}`
Response shape:
```json
{
  "id": "<uuid>",
  "operation": "move",
  "status": "running",
  "source": "storage1/a.txt",
  "destination": "storage2/a.txt",
  "done_bytes": 1024,
  "total_bytes": 4096,
  "done_items": null,
  "total_items": null,
  "current_item": "storage1/a.txt",
  "failed_item": null,
  "error_code": null,
  "error_message": null,
  "created_at": "2026-03-10T10:00:00Z",
  "started_at": "2026-03-10T10:00:01Z",
  "finished_at": null
}
```

Task not found:
```json
{
  "error": {
    "code": "task_not_found",
    "message": "Task was not found",
    "details": { "task_id": "task-missing" }
  }
}
```

# BACKEND_V1_CONSOLIDATION.md

## Goal
Consolidation of the current backend v1 contracts and limitations.

## Current endpoints

Direct endpoints (no task creation):
- `GET /api/browse`
- `POST /api/files/mkdir`
- `POST /api/files/rename`
- `POST /api/files/delete`
- `GET /api/tasks`
- `GET /api/tasks/{task_id}`

Task-based create endpoints:
- `POST /api/files/copy`
- `POST /api/files/move`

## Semantics per endpoint

### `GET /api/browse`
- Browses a directory within the whitelist roots.
- Hidden files are hidden by default, optionally shown via `show_hidden=true`.

### `POST /api/files/mkdir`
- Creates a directory at exactly `parent_path + name`.
- No implicit path normalisation beyond `path_guard` validation.

### `POST /api/files/rename`
- Renames within the same parent directory.
- No hidden move to another directory.

### `POST /api/files/delete`
- Deletes a file directly.
- Deletes only an empty directory.
- A non-empty directory => conflict.

### `POST /api/files/copy`
- File-only copy.
- `destination` is the full target path (no "copy into directory").
- A task is created (`202`, `task_id`, `queued`) and then executed.

### `POST /api/files/move`
- File-only move.
- `destination` is the full target path.
- A task is created (`202`, `task_id`, `queued`) and then executed.
- Same root: native move.
- Cross-root: copy + delete within the same task.

### `GET /api/tasks`
- List of tasks, sorted by `created_at DESC`.
- Each item contains at least: `id`, `operation`, `status`, `source`, `destination`, `created_at`, `finished_at`.

### `GET /api/tasks/{task_id}`
- Detail status including progress/error fields.
- Status set: `queued`, `running`, `completed`, `failed`.

## Error model per endpoint group

### Browse/file validation
- `invalid_request`
- `path_traversal_detected`
- `path_outside_whitelist`
- `invalid_root_alias`
- `path_not_found`
- `type_conflict`
- `already_exists`
- `directory_not_empty` (delete)

### Task create runtime
- validation errors before task creation: direct API error, no task
- runtime errors during task execution: the task moves to `failed` with `io_error`

### Tasks read
- `task_not_found` for an unknown task id

## Explicit v1 limitations

- copy: file-only
- move: file-only
- delete: files and empty directories only
- no recursive delete
- no directory copy/move
- no rollback
- no cancel/retry
- no batch operations

# BOOKMARKS_V1_CONSOLIDATION.md

## Endpoints
- `POST /api/bookmarks`
- `GET /api/bookmarks`
- `DELETE /api/bookmarks/{bookmark_id}`

## Duplicate policy
- A bookmark is unique on `path`.
- A second create with the same path yields `409 already_exists`.

## Validation
- `path` is validated centrally via `path_guard.resolve_path(...)`.
- This covers whitelist, traversal, and root-alias validation.
- `label` must not be empty (after `trim()`), otherwise `400 invalid_request`.

## Model v1
- `id`
- `path`
- `label`
- `created_at`

# CHANGE_POLICY.md

## Allowed without permission

- improving documentation
- adding tests
- improving logging
- small bugfixes

## Proposal required first

- API changes
- adding a dependency
- database schema changes
- changing a frontend flow

## Explicit approval required

- changing the security model
- changing the whitelist
- changing the directory structure
- changing destructive actions

# COPY_MOVE_TASKS_DESIGN.md

## Goal
Design proposal for the next implementation slice: `copy`, `move`, and `tasks`.

This document contains design decisions only. There is no code implementation in this step.

## 1. Destination semantics (explicit)

V1 chooses strict semantics:
- `destination` is always the full intended target path (including the filename).
- In v1, `destination` may **not** be interpreted as "copy/move into existing directory".
- If `destination` already exists (file or directory) -> `already_exists` (409).

Example, v1:
- source: `storage1/a/file.txt`
- destination: `storage2/b/file_copy.txt`

Not allowed as an implicit directory target in v1:
- destination: `storage2/b/` with the expectation that `file.txt` automatically ends up under it.

Rationale:
- prevents ambiguous behaviour
- simpler API contract
- less regression risk

## 2. Scope proposal: `copy` v1

### Decision
- In v1, `copy` supports **files only**.
- Directory recursion is deferred to a later phase.

### Motivation
- Complexity: recursive copy adds many edge cases (deep trees, symlinks, partial failures).
- Testability: file-only keeps golden/regression tests smaller and more deterministic.
- Regression risk: lower chance of unexpected performance/security regressions.

### Conflict behaviour
- destination already exists -> `already_exists` (409).
- no overwrite/merge in v1.

### Execution
- always task-based (async).
- create response: `task_id` + `queued`.

### Error model, copy
- `invalid_request` (400)
- `path_traversal_detected` (403)
- `path_outside_whitelist` (403)
- `invalid_root_alias` (403)
- `path_not_found` (404)
- `type_conflict` (409) (source is not a file)
- `already_exists` (409)
- `io_error` (500)

## 3. Scope proposal: `move` v1

### Rename vs move
- `rename` remains a name change within the same parent directory.
- `move` is a path change to a different target path (same root or cross-root).

### Cross-root behaviour
- allowed when source and destination both fall within the whitelist.
- same root: native rename/move where possible.
- cross-root: copy+delete within the same task.
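
The same-root vs cross-root split above can be sketched as a single worker helper; the function name is an assumption, and device comparison is used here as a stand-in for the root check.

```python
import os
import shutil

def move_file(src: str, dst: str) -> None:
    """Move a file: native rename on the same filesystem, else copy+delete."""
    if os.path.exists(dst):
        raise FileExistsError("already_exists")  # no overwrite in v1
    dst_parent = os.path.dirname(dst) or "."
    if os.stat(src).st_dev == os.stat(dst_parent).st_dev:
        os.rename(src, dst)       # same filesystem: native move
    else:
        shutil.copy2(src, dst)    # cross-root: copy...
        os.remove(src)            # ...then delete the source, within one task
```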

### Decision
- In v1, `move` supports **files only**.
- Directory move is out of scope in v1.

### Conflict behaviour
- destination already exists -> `already_exists` (409).
- no overwrite in v1.

### Error model, move
- `invalid_request` (400)
- `path_traversal_detected` (403)
- `path_outside_whitelist` (403)
- `invalid_root_alias` (403)
- `path_not_found` (404)
- `type_conflict` (409) (source is not a file)
- `already_exists` (409)
- `io_error` (500)

## 4. Symlink policy (copy/move)

### Source
- v1 is file-only: the source may not be a symlink.
- if a source symlink resolves to a path outside the whitelist -> blocked (`path_outside_whitelist`).
- if a source symlink resolves within the whitelist: in v1 it is still rejected as `type_conflict`, to keep the semantics simple.

### Destination
- the destination is canonically validated via `path_guard`.
- the destination parent must lie within the whitelist.
- a destination parent that falls outside the whitelist via a symlink -> blocked (`path_outside_whitelist`).

### Recursive escapes
- Not applicable in v1 (no directory recursion).
- For later directory copy: every visited path must get a per-entry containment check.

## 5. Task persistence and history

### Relationship between tasks and history
- In v1, `tasks` and `history` are separate models/table roles:
  - `tasks`: current and completed task status/progress
  - `history`: audit log of executed operations
- history can be filled from task completion, but it is not the same model.

### Retention
- v1 keeps `completed` and `failed` tasks persistently (no automatic cleanup in scope).

### Sorting of GET /api/tasks
- default sorting: `created_at DESC` (newest first).

## 6. Task model

### When a task is mandatory
- `copy` and `move` are always task-based.

### Statuses
- `queued`
- `running`
- `completed`
- `failed`

### Progress
- file-only v1:
  - `done_bytes`
  - `total_bytes`
- `done_items`/`total_items` are optional and may remain `null` in v1.
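
The byte-based progress could be produced by a chunked copy loop in the worker; this is a sketch under assumptions (chunk size, callback signature), and the `xb` open mode doubles as the `already_exists` guard at the filesystem level.

```python
import os

CHUNK = 1024 * 1024  # 1 MiB chunk size; an implementation assumption

def copy_with_progress(src: str, dst: str, report) -> None:
    """Copy one file, calling report(done_bytes, total_bytes) after each chunk."""
    total = os.path.getsize(src)
    done = 0
    with open(src, "rb") as fin, open(dst, "xb") as fout:  # 'x': fail if dst exists
        while chunk := fin.read(CHUNK):
            fout.write(chunk)
            done += len(chunk)
            report(done, total)
```

The worker would persist each `(done, total)` pair into the task record, which is what `GET /api/tasks/{task_id}` polls.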

### Partial failure
- fail-fast.
- first fatal error -> `failed`.
- no rollback in v1.
- task detail contains `failed_item`, `error_code`, `error_message`.

### Duplicate create requests (correction)
- v1 behaviour is **not idempotent**.
- two identical requests may create two different tasks.

## 7. API proposal

### Endpoints
- `POST /api/files/copy`
- `POST /api/files/move`
- `GET /api/tasks/{task_id}`
- `GET /api/tasks`

### Request/response shapes

#### POST /api/files/copy
Request:
```json
{
  "source": "storage1/path/file.txt",
  "destination": "storage2/path/file_copy.txt"
}
```
Response (202):
```json
{
  "task_id": "<uuid>",
  "status": "queued"
}
```

#### POST /api/files/move
Request:
```json
{
  "source": "storage1/path/file.txt",
  "destination": "storage2/path/file_moved.txt"
}
```
Response (202):
```json
{
  "task_id": "<uuid>",
  "status": "queued"
}
```

#### GET /api/tasks/{task_id}
Response (200):
```json
{
  "id": "<uuid>",
  "operation": "copy",
  "status": "running",
  "source": "storage1/a/file.txt",
  "destination": "storage2/b/file.txt",
  "done_bytes": 1024,
  "total_bytes": 4096,
  "done_items": null,
  "total_items": null,
  "current_item": "storage1/a/file.txt",
  "failed_item": null,
  "error_code": null,
  "error_message": null,
  "created_at": "2026-03-10T00:00:00Z",
  "started_at": "2026-03-10T00:00:01Z",
  "finished_at": null
}
```

#### GET /api/tasks
Response (200):
```json
{
  "items": [
    {
      "id": "<uuid>",
      "operation": "move",
      "status": "completed",
      "source": "storage1/a/file.txt",
      "destination": "storage2/b/file.txt",
      "created_at": "2026-03-10T00:00:00Z",
      "finished_at": "2026-03-10T00:00:05Z"
    }
  ]
}
```

### Why `source` and `destination` also appear in the task list
- Yes, include them in `GET /api/tasks`.
- Reason: an operator can see directly what each task does, without extra detail calls.
- This makes the UI and troubleshooting simpler.

### Error codes (narrowed)
- `invalid_request` (400)
- `path_traversal_detected` (403)
- `path_outside_whitelist` (403)
- `invalid_root_alias` (403)
- `path_not_found` (404)
- `task_not_found` (404)
- `type_conflict` (409)
- `already_exists` (409)
- `io_error` (500)

`internal_error` is not used as a separate public code in v1; unexpected errors are normalised to `io_error` or a framework 500.
|
||||
|
||||
## 8. Teststrategie
|
||||
|
||||
### Golden tests
|
||||
- `POST /api/files/copy` success (task create shape)
|
||||
- `POST /api/files/move` success (task create shape)
|
||||
- `already_exists` voor copy/move
|
||||
- `type_conflict` als source geen file is
|
||||
- `invalid_request` voor ongeldige payload
|
||||
- traversal/root errors voor source en destination
|
||||
- `GET /api/tasks/{task_id}` shapes voor `queued/running/completed/failed`
|
||||
- `GET /api/tasks` list shape inclusief `source` en `destination`
|
||||
|
||||
### Regressietests
|
||||
- cross-root move gebruikt copy+delete pad correct
|
||||
- file copy/move met unicode namen
|
||||
- grote files progress updates blijven monotone stijging
|
||||
- duplicate create requests leveren 2 verschillende `task_id`s (niet-idempotent)
|
||||
|
||||
### Security tests

- traversal blocked on source and destination
- destination outside the whitelist blocked
- invalid root alias blocked
- a symlinked source is rejected
- a symlinked destination parent outside the whitelist is rejected

## Implementation boundaries for the next step

- No changes to existing endpoints:
  - browse
  - mkdir
  - rename
  - delete
- No new dependencies without explicit justification.

@@ -0,0 +1,50 @@
# COPY_V1_CONSOLIDATION.md

## Task lifecycle (copy v1)

`POST /api/files/copy`:
1. The request is validated.
2. On valid input a task is created immediately with status `queued`.
3. A background runner sets the task to `running` and fills in progress (`done_bytes`, `total_bytes`).
4. Final status:
   - `completed` on a successful file copy
   - `failed` on a runtime I/O error (`error_code = io_error`)

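The lifecycle above implies a fixed transition table; a minimal sketch (illustrative names, not repository code):

```python
# Allowed status transitions for a copy task: queued -> running,
# running -> completed | failed; terminal states allow nothing.
ALLOWED_TRANSITIONS = {
    "queued": {"running"},
    "running": {"completed", "failed"},
    "completed": set(),
    "failed": set(),
}


def can_transition(old: str, new: str) -> bool:
    return new in ALLOWED_TRANSITIONS.get(old, set())
```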
## Validation errors vs runtime errors

Validation errors (before task creation):

- `invalid_request`
- `path_traversal_detected`
- `path_outside_whitelist`
- `invalid_root_alias`
- `path_not_found`
- `type_conflict`
- `already_exists`

Behavior:

- the request fails immediately with an error response
- no task is created

Runtime errors (after task creation):

- `io_error`

Behavior:

- the request itself returns `202` with a `task_id`
- the task transitions to `failed`
- error details are exposed via `GET /api/tasks/{task_id}`

## Copy metadata in v1 (`copy_file(...)`)

V1 copies:

- file content (byte stream)
- basic filesystem metadata via `copystat` (mtime/atime/mode where supported)

V1 explicitly does not handle:

- ownership/ACL normalization
- an extended-attributes policy

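A minimal sketch of this behavior with the standard library (it mirrors the description above; the repository's actual `copy_file(...)` may differ, e.g. by copying in chunks to report progress):

```python
import shutil


def copy_file(source: str, destination: str) -> None:
    # Byte-for-byte content copy, then basic metadata: shutil.copystat
    # transfers mode bits and timestamps where the filesystem supports
    # them. Ownership, ACLs and extended attributes are deliberately
    # left untouched, matching the v1 scope.
    shutil.copyfile(source, destination)
    shutil.copystat(source, destination)
```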
## Destination semantics (explicit)

In v1, `destination` is always the full target path.
- No implicit "copy into existing directory" interpretation.
- If the destination already exists: `already_exists`.

@@ -0,0 +1,49 @@
# MKDIR_RENAME_CONSOLIDATION.md

## Behavior confirmation

- `POST /api/files/rename` only works within the same parent directory.
- The target is built as `parent(source_path) + new_name`.
- `rename` does not allow a hidden move.
- `new_name` accepts no path segments (`/`, `\\`, `..`), so no directory switch is possible.
- `validate_name` is applied to:
  - `mkdir.name`
  - `rename.new_name`

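A minimal sketch of such a single-segment name check (illustrative; the repository's `validate_name` may differ in detail):

```python
def validate_name(name: str) -> bool:
    # A valid name is exactly one path segment: non-empty, not a
    # traversal token, and free of path separators.
    if not name or name in {".", ".."}:
        return False
    return "/" not in name and "\\" not in name
```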
## Error model for mkdir/rename

Standard error shape:

```json
{
  "error": {
    "code": "...",
    "message": "...",
    "details": {"...": "..."}
  }
}
```

Supported codes in this slice:

- `invalid_request`
  - invalid name (`..`, empty, or containing `/` or `\\`)
  - HTTP `400`
- `path_traversal_detected`
  - traversal in the supplied path (`../`)
  - HTTP `403`
- `path_not_found`
  - source path or parent path does not exist
  - HTTP `404`
- `already_exists`
  - target path already exists (file or directory)
  - HTTP `409`
- `io_error`
  - unexpected filesystem error during the operation
  - HTTP `500`

## Scope confirmation

This consolidation does not change:
- the browse endpoint
- delete/copy/move/tasks/frontend

@@ -0,0 +1,123 @@
# PRODUCT_SPEC.md

## Project

Web-based Storage Manager

A web application that lets users manage files on whitelisted storage volumes.

The application runs in a container environment.

---

# Goal

Build a secure web interface for filesystem management.

Via a browser, users must be able to:

- browse directories
- copy files
- move files
- rename files
- delete files
- create directories
- save bookmarks

The application must be suitable for use within a self-hosted infrastructure.

---

# Architecture assumptions

Backend

Python + FastAPI

Frontend

lightweight JS UI without heavy frameworks.

Database

SQLite for:

- tasks
- bookmarks
- history

---

# Storage model

The application may only operate within **whitelisted root directories**.

Example:

/mnt/storage1
/mnt/storage2
/media/archive

All paths must stay within these roots.

Traversal and symlink escapes must be blocked.

---

# Features

## Directory browsing

The UI must be able to display directories.

The response must contain:

- directories
- files
- metadata (size, modified)

---

## File operations

Supported actions:

rename
move
copy
delete
create directory

---

## Task system

Long-running actions (copy/move):

- receive a task id
- have their status persisted
- expose progress via the API

---

# Security requirements

- path validation
- whitelist enforcement
- symlink protection
- traversal protection

---

# Out of scope

This functionality is **not** part of version 1:

- user management
- permissions management
- cloud storage
- distributed storage
- multi-node clusters

@@ -0,0 +1,20 @@
# SAFE_FILES.md

## Safe to edit

- backend code
- frontend js
- tests
- documentation

## Ask first

- database schema
- container config
- security modules

## Do not edit

- deployment configs
- storage volume mappings
- whitelist configuration

@@ -0,0 +1,72 @@
# STEP1_5_CONSOLIDATION.md

## Goal
Consolidation of steps 1 through 5 (backend skeleton, path_guard, unit tests, browse endpoint, golden tests).

## 1. Test-run context

Tests run from the repository root with:
- `PYTHONPATH=webui`

Definitive test commands for this phase:
- `PYTHONPATH=webui python3 -m unittest discover -s webui/backend/tests/unit -p "test_*.py" -v`
- `PYTHONPATH=webui python3 -m unittest discover -s webui/backend/tests/golden -p "test_*.py" -v`
- `PYTHONPATH=webui python3 -m unittest discover -s webui/backend/tests -p "test_*.py" -v`

## 2. Minimal dependencies for steps 1-5

Pinned in:
- `webui/backend/requirements.txt`

Needed for this phase:
- FastAPI app + browse endpoint
- unit tests
- golden tests

Dependencies:
- `fastapi==0.111.0`
- `starlette==0.37.2`
- `pydantic==2.12.5`
- `httpx==0.27.2`
- `anyio==4.4.0`
- `sniffio==1.3.1`

## 3. Minimal changes in this phase

Functionally changed files:
- `webui/backend/app/api/routes_browse.py`
- `webui/backend/app/dependencies.py`
- `webui/backend/app/main.py`

Test-only changed files:
- `webui/backend/tests/golden/test_api_browse_golden.py`
- `webui/backend/tests/golden/test_api_errors_golden.py`

Why the golden tests were switched from TestClient to `httpx.ASGITransport`:
- In this environment the TestClient runner blocked on the sync execution path.
- `httpx.ASGITransport` executes requests in-memory against the ASGI app without that blockage.
- This is a test-executability fix, not an API contract change.

Why async changes were needed:
- The error path and dependency path had to stay async to avoid runtime blockage in this environment.
- The changes are internal and execution-oriented; the HTTP contract stays the same.

## 4. Contract preservation (explicit)

This consolidation step changed nothing about:
- endpoint path: `GET /api/browse`
- query parameters: `path`, `show_hidden`
- response shape: `path`, `directories[]`, `files[]` with the same fields
- error codes: among others `invalid_root_alias`, `path_traversal_detected`, `path_not_found`, `path_type_conflict`
- hidden-files policy: hidden off by default, optional via `show_hidden=true`

## 5. Scope confirmation

Not implemented in this consolidation step:
- rename
- mkdir
- delete
- copy
- move
- tasks/worker
- frontend

@@ -0,0 +1,34 @@
# TEST_STRATEGY.md

## Unit tests

Test individual functions:

- path validation
- whitelist checks
- helper functions
- task status transitions

## Feature tests

Test complete flows:

- browse directory
- copy
- move
- rename
- delete
- create folder

## Regression tests

Guard against known problems:

- path traversal
- unicode filenames
- large directories
- nested directories

## API golden tests

Verify that response formats do not change.

@@ -0,0 +1,233 @@
# UI_DUAL_PANE_DESIGN.md

## Purpose of this note

This note describes the design step for **Dual-pane UI v2**, based on the current v1.1 UI and `UI_VISION_MC.md`.

Constraints for this step:
- no backend changes
- no new dependencies
- no multi-select
- no viewer/editor
- no keyboard extensions (except as a later note)

---

## 1) Converting the single-page UI to two panes

### Current situation (v1.1)
- one browse pane with a single `currentPath`
- global selection for that pane
- actions (mkdir/rename/delete/copy/move) from that single context

### Target situation (v2)
- two browse panes side by side: `left` and `right`
- each pane has:
  - its own path input + go button
  - its own breadcrumbs
  - its own directory/file list
  - its own selected item
- one pane is always **active**
- actions are executed contextually from the active pane
- the tasks and bookmarks sections stay shared (below or beside the panes)

### High-level UI structure
- Header: general status
- Main layout:
  - Left pane (`left-pane`)
  - Right pane (`right-pane`)
  - Utility column (tasks + bookmarks)
- Action buttons:
  - per pane (mkdir)
  - active-pane actions (rename/delete/copy/move)

### Bookmark-open behavior (explicit choice)
- Clicking a bookmark always opens it in the **active pane**.
- Rationale (short): this is the most predictable option, fits the active/inactive-pane model, and avoids hidden side effects in the non-active pane.

---

## 2) Required state per pane

### State model

```js
state = {
  panes: {
    left: {
      currentPath: "storage1",
      showHidden: false,
      selectedItem: null, // { path, name, kind: "file"|"directory" }
      entries: { directories: [], files: [] }
    },
    right: {
      currentPath: "storage1",
      showHidden: false,
      selectedItem: null,
      entries: { directories: [], files: [] }
    }
  },
  activePane: "left", // "left" | "right"
  selectedTaskId: null,
  pollHandle: null
}
```

### Required core state
- `current path` per pane
- `selected item` per pane
- `active pane` globally

### Derived helper state
- `inactivePane = activePane === "left" ? "right" : "left"`
- `canRename/canDelete` based on the active pane's selection
- `canCopy/canMove` only when the active pane's selection is a file

---

## 3) Copy/move from the active pane to the other pane

### Semantics v2
- The source always comes from the `selectedItem` of the `activePane`.
- The target context defaults to the **other pane**.
- `destination` remains the full target path (no implicit shell-like "copy into" semantics).

### Destination derivation
- `targetBasePath = panes[inactivePane].currentPath`
- `destination = targetBasePath + "/" + sourceItem.name`
- The UI shows this proposal in a prompt/confirm dialog and allows editing.

### Copy flow
1. The user selects a file in the active pane.
2. The user chooses `Copy`.
3. The UI builds the default destination from the inactive pane's path plus the file name.
4. The UI calls `POST /api/files/copy` with `{source, destination}`.
5. On `202`: refresh the tasks list and point the task detail at the new task_id.
6. After the start: refresh both panes (best effort) so the user sees the result in context.

### Move flow
1. Same as copy, but against `POST /api/files/move`.
2. On `202`: refresh tasks and select the task detail.
3. Refresh both panes (the source may disappear, the destination may appear).

### Refresh semantics per action (explicit)
- `mkdir`: refresh only the active pane.
- `rename`: refresh only the active pane.
- `delete`: refresh only the active pane.
- `copy`: refresh both panes.
- `move`: refresh both panes.

Rationale (short):
- Local mutations (`mkdir/rename/delete`) are pane-bound, which keeps them fast and predictable.
- Transfer actions (`copy/move`) touch both the source and target context, so refreshing both panes prevents stale state.

### Error handling
- Validation errors before task creation are shown directly in the active pane's action area.
- Runtime errors surface through the task status (`failed`) in the task list/detail.

---

## 4) Existing backend endpoints being reused

No contract changes; the existing endpoints are reused:

Browse/file ops:
- `GET /api/browse`
- `POST /api/files/mkdir`
- `POST /api/files/rename`
- `POST /api/files/delete`
- `POST /api/files/copy`
- `POST /api/files/move`

Tasks:
- `GET /api/tasks`
- `GET /api/tasks/{task_id}`

Bookmarks:
- `GET /api/bookmarks`
- `POST /api/bookmarks`
- `DELETE /api/bookmarks/{bookmark_id}`

---

## 5) Files likely to change

Primarily frontend:
- `webui/html/index.html`
  - layout to dual-pane
  - pane-specific controls/containers
- `webui/html/app.js`
  - state refactor to `panes.left/right`
  - browse/render/actions per pane
  - active-pane switching
  - copy/move destination derived from the other pane
- `webui/html/style.css`
  - two visually equal panes
  - active-pane highlight
  - responsive fallback (mobile: stacked)

Tests:
- `webui/backend/tests/golden/test_ui_smoke_golden.py`
  - update assertions to the dual-pane markup/ids

Not planned in this step:
- backend Python files
- API schemas/routes/services

---

## 6) Regression risk

### High risk
- State confusion between the left and right pane (an action running against the wrong pane context).
- Incorrect destination construction for copy/move (accidentally using the active instead of the inactive path).

### Medium risk
- Button enable/disable logic out of sync with the active pane's selection.
- Refresh ordering after actions causing the UI to briefly show stale data.
- Bookmarks accidentally opening in the wrong pane when `activePane` is not used consistently.

### Low risk
- CSS regressions in the mobile layout.
- Minor text/label inconsistencies in error/status display.

### Mitigation
- Clear pane identifiers in the DOM and handlers (`left`/`right`).
- One central helper for `getActivePane()` and `getInactivePane()`.
- One central helper for building the copy/move destination.
- Smoke tests explicitly covering the dual-pane top-level structure.

---

## 7) UI smoke tests to adapt or add

### Adapting the existing smoke test
`test_ui_smoke_golden.py`:
- replace the current single-pane checks with dual-pane checks.

New/changed assertions:
- The UI mount exists at `/ui`.
- The HTML contains both main panes:
  - `id="left-pane"`
  - `id="right-pane"`
- The HTML contains an active-pane indicator/container (e.g. a class or data attribute).
- Assets remain mapped:
  - `/ui/app.js`
  - `/ui/style.css`

### Small extra smoke checks (without backend extension)
- basic action buttons present for the active-pane actions.
- the tasks and bookmarks panels remain present in the HTML.

No end-to-end browser automation in this step.

---

## Not in scope for this implementation slice

- multi-select
- viewer/editor
- cancel/retry
- history UI
- new backend endpoints
- keyboard mapping extensions (possibly later)

@@ -0,0 +1,141 @@
# UI_VISION_MC.md

## Goal

The web app should gradually evolve into a web-based file manager with a workflow inspired by Midnight Commander.

The goal is **not** to clone Midnight Commander exactly, but to adopt its key working principles:

- two panes side by side
- fast switching between panes
- an active pane and an inactive pane
- copy/move from the active pane to the other pane
- keyboard-efficient operation
- a focus on speed and low mouse dependency

---

## Design principles

- web-native execution, not a literal terminal clone
- preserve the existing backend API contracts
- incremental UI development in small steps
- safety and predictability remain leading
- existing working functionality must not regress

---

## Target picture

The UI should ultimately consist of:

### 1. Dual pane
- a left pane
- a right pane
- each pane has its own current path
- each pane shows directories and files
- each pane has its own selection

### 2. Active pane
- one pane is active
- actions are started from the active pane
- the other pane is the default target context for copy/move

### 3. Navigation
- the path is visible above each pane
- directory navigation per pane
- extensible later with keyboard shortcuts

### 4. File actions
- rename
- mkdir
- delete
- copy
- move

### 5. Tasks
- task list visible
- status polling remains possible
- copy/move remain task-based

---

## Out of scope for the next UI step

These things are not built yet:

- multi-select
- Insert-style selection
- viewer
- editor
- an F9-style menu bar
- chmod/chown/symlink tools
- search
- directory compare
- full keyboard mapping

---

## Next UI goal

The next UI step is:

### Dual-pane UI v2

This step contains only:

- a left and a right pane
- an own current path per pane
- an active pane
- visual active-pane indication
- copy/move defaulting to the other pane as destination context
- existing backend endpoints remain unchanged

Not yet in this step:

- multi-select
- extended keyboard controls
- viewer/editor
- history UI
- new backend features

---

## Expected impact

Changes are expected mainly in:

- `webui/html/index.html`
- `webui/html/app.js`
- `webui/html/style.css`

Backend reuse:

- `GET /api/browse`
- `POST /api/files/mkdir`
- `POST /api/files/rename`
- `POST /api/files/delete`
- `POST /api/files/copy`
- `POST /api/files/move`
- `GET /api/tasks`
- `GET /api/tasks/{task_id}`
- `GET /api/bookmarks`
- `POST /api/bookmarks`
- `DELETE /api/bookmarks/{bookmark_id}`

---

## Acceptance criteria for the next UI step

The dual-pane step is done only when:

- both panes can browse independently
- each pane shows its own path
- one pane is active
- the active pane is visually recognizable
- copy/move work logically from the active pane to the other pane
- existing functionality does not break
- existing tests keep passing
- UI smoke tests are updated where needed

@@ -0,0 +1 @@
"""Backend package."""
|
||||
Binary file not shown.
Binary file not shown.
@@ -0,0 +1 @@
"""Application package."""
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -0,0 +1 @@
"""API routes and schemas."""
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,11 @@
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class AppError(Exception):
    code: str
    message: str
    status_code: int
    details: dict[str, str] | None = None
@@ -0,0 +1,35 @@
from __future__ import annotations

from fastapi import APIRouter, Depends

from backend.app.api.schemas import (
    BookmarkCreateRequest,
    BookmarkDeleteResponse,
    BookmarkItem,
    BookmarkListResponse,
)
from backend.app.dependencies import get_bookmark_service
from backend.app.services.bookmark_service import BookmarkService

router = APIRouter(prefix="/bookmarks")


@router.post("", response_model=BookmarkItem)
async def create_bookmark(
    request: BookmarkCreateRequest,
    service: BookmarkService = Depends(get_bookmark_service),
) -> BookmarkItem:
    return service.create_bookmark(path=request.path, label=request.label)


@router.get("", response_model=BookmarkListResponse)
async def list_bookmarks(service: BookmarkService = Depends(get_bookmark_service)) -> BookmarkListResponse:
    return service.list_bookmarks()


@router.delete("/{bookmark_id}", response_model=BookmarkDeleteResponse)
async def delete_bookmark(
    bookmark_id: int,
    service: BookmarkService = Depends(get_bookmark_service),
) -> BookmarkDeleteResponse:
    return service.delete_bookmark(bookmark_id)
@@ -0,0 +1,18 @@
from __future__ import annotations

from fastapi import APIRouter, Depends, Query

from backend.app.api.schemas import BrowseResponse
from backend.app.dependencies import get_browse_service
from backend.app.services.browse_service import BrowseService

router = APIRouter()


@router.get("/browse", response_model=BrowseResponse)
async def browse(
    path: str = Query(...),
    show_hidden: bool = Query(False),
    service: BrowseService = Depends(get_browse_service),
) -> BrowseResponse:
    return service.browse(path=path, show_hidden=show_hidden)
@@ -0,0 +1,17 @@
from __future__ import annotations

from fastapi import APIRouter, Depends

from backend.app.api.schemas import CopyRequest, TaskCreateResponse
from backend.app.dependencies import get_copy_task_service
from backend.app.services.copy_task_service import CopyTaskService

router = APIRouter(prefix="/files")


@router.post("/copy", response_model=TaskCreateResponse, status_code=202)
async def copy_file(
    request: CopyRequest,
    service: CopyTaskService = Depends(get_copy_task_service),
) -> TaskCreateResponse:
    return service.create_copy_task(source=request.source, destination=request.destination)
@@ -0,0 +1,33 @@
from __future__ import annotations

from fastapi import APIRouter, Depends

from backend.app.api.schemas import DeleteRequest, DeleteResponse, MkdirRequest, MkdirResponse, RenameRequest, RenameResponse
from backend.app.dependencies import get_file_ops_service
from backend.app.services.file_ops_service import FileOpsService

router = APIRouter(prefix="/files")


@router.post("/mkdir", response_model=MkdirResponse)
async def mkdir(
    request: MkdirRequest,
    service: FileOpsService = Depends(get_file_ops_service),
) -> MkdirResponse:
    return service.mkdir(parent_path=request.parent_path, name=request.name)


@router.post("/rename", response_model=RenameResponse)
async def rename(
    request: RenameRequest,
    service: FileOpsService = Depends(get_file_ops_service),
) -> RenameResponse:
    return service.rename(path=request.path, new_name=request.new_name)


@router.post("/delete", response_model=DeleteResponse)
async def delete(
    request: DeleteRequest,
    service: FileOpsService = Depends(get_file_ops_service),
) -> DeleteResponse:
    return service.delete(path=request.path)
@@ -0,0 +1,17 @@
from __future__ import annotations

from fastapi import APIRouter, Depends

from backend.app.api.schemas import MoveRequest, TaskCreateResponse
from backend.app.dependencies import get_move_task_service
from backend.app.services.move_task_service import MoveTaskService

router = APIRouter(prefix="/files")


@router.post("/move", response_model=TaskCreateResponse, status_code=202)
async def move_file(
    request: MoveRequest,
    service: MoveTaskService = Depends(get_move_task_service),
) -> TaskCreateResponse:
    return service.create_move_task(source=request.source, destination=request.destination)
@@ -0,0 +1,19 @@
from __future__ import annotations

from fastapi import APIRouter, Depends

from backend.app.api.schemas import TaskDetailResponse, TaskListResponse
from backend.app.dependencies import get_task_service
from backend.app.services.task_service import TaskService

router = APIRouter(prefix="/tasks")


@router.get("", response_model=TaskListResponse)
async def list_tasks(service: TaskService = Depends(get_task_service)) -> TaskListResponse:
    return service.list_tasks()


@router.get("/{task_id}", response_model=TaskDetailResponse)
async def get_task(task_id: str, service: TaskService = Depends(get_task_service)) -> TaskDetailResponse:
    return service.get_task(task_id)
@@ -0,0 +1,126 @@
from __future__ import annotations

from pydantic import BaseModel


class ErrorBody(BaseModel):
    code: str
    message: str
    details: dict[str, str] | None = None


class ErrorResponse(BaseModel):
    error: ErrorBody


class DirectoryEntry(BaseModel):
    name: str
    path: str
    modified: str


class FileEntry(BaseModel):
    name: str
    path: str
    size: int
    modified: str


class BrowseResponse(BaseModel):
    path: str
    directories: list[DirectoryEntry]
    files: list[FileEntry]


class MkdirRequest(BaseModel):
    parent_path: str
    name: str


class MkdirResponse(BaseModel):
    path: str


class RenameRequest(BaseModel):
    path: str
    new_name: str


class RenameResponse(BaseModel):
    path: str


class DeleteRequest(BaseModel):
    path: str


class DeleteResponse(BaseModel):
    path: str


class TaskListItem(BaseModel):
    id: str
    operation: str
    status: str
    source: str
    destination: str
    created_at: str
    finished_at: str | None = None


class TaskListResponse(BaseModel):
    items: list[TaskListItem]


class TaskDetailResponse(BaseModel):
    id: str
    operation: str
    status: str
    source: str
    destination: str
    done_bytes: int | None = None
    total_bytes: int | None = None
    done_items: int | None = None
    total_items: int | None = None
    current_item: str | None = None
    failed_item: str | None = None
    error_code: str | None = None
    error_message: str | None = None
    created_at: str
    started_at: str | None = None
    finished_at: str | None = None


class CopyRequest(BaseModel):
    source: str
    destination: str


class TaskCreateResponse(BaseModel):
    task_id: str
    status: str


class MoveRequest(BaseModel):
    source: str
    destination: str


class BookmarkCreateRequest(BaseModel):
    path: str
    label: str


class BookmarkItem(BaseModel):
    id: int
    path: str
    label: str
    created_at: str


class BookmarkListResponse(BaseModel):
    items: list[BookmarkItem]


class BookmarkDeleteResponse(BaseModel):
    id: int
@@ -0,0 +1,39 @@
from __future__ import annotations

import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    root_aliases: dict[str, str]
    task_db_path: str


DEFAULT_ROOT_ALIASES = {
    "storage1": "/Volumes/8TB",
    "storage2": "/Volumes/8TB_RAID1",
}


def _load_root_aliases() -> dict[str, str]:
    # Minimal env override format: storage1=/path1,storage2=/path2
    raw = os.getenv("WEBMANAGER_ROOT_ALIASES", "").strip()
    if not raw:
        return dict(DEFAULT_ROOT_ALIASES)

    parsed: dict[str, str] = {}
    for entry in raw.split(","):
        if "=" not in entry:
            continue
        alias, root = entry.split("=", 1)
        alias = alias.strip()
        root = root.strip()
        if alias and root:
            parsed[alias] = root
    return parsed or dict(DEFAULT_ROOT_ALIASES)


def get_settings() -> Settings:
    task_db_path = os.getenv("WEBMANAGER_TASK_DB_PATH", "webui/backend/data/tasks.db").strip()
    return Settings(root_aliases=_load_root_aliases(), task_db_path=task_db_path)
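As a usage sketch (not part of the commit), the `WEBMANAGER_ROOT_ALIASES` override format consumed by `_load_root_aliases` above can be exercised standalone; the parsing below mirrors the commit's logic, with the function name `parse_root_aliases` chosen here for illustration:

```python
# Standalone sketch of the alias override parsing in config.py above.
# Format: "alias1=/path1,alias2=/path2"; malformed entries are skipped.
def parse_root_aliases(raw: str) -> dict[str, str]:
    parsed: dict[str, str] = {}
    for entry in raw.split(","):
        if "=" not in entry:
            continue  # skip entries without an '=' separator
        alias, root = entry.split("=", 1)
        alias = alias.strip()
        root = root.strip()
        if alias and root:
            parsed[alias] = root
    return parsed


print(parse_root_aliases("storage1=/mnt/a, storage2=/mnt/b,broken"))
# {'storage1': '/mnt/a', 'storage2': '/mnt/b'}
```

Note that whitespace around entries is tolerated and entries without `=` (like `broken`) are dropped silently, matching the "minimal override" intent of the original.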
@@ -0,0 +1 @@
"""Database utilities."""
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,94 @@
from __future__ import annotations

import sqlite3
from contextlib import contextmanager
from datetime import datetime, timezone
from pathlib import Path


class BookmarkRepository:
    def __init__(self, db_path: str):
        self._db_path = db_path
        self._ensure_schema()

    def create_bookmark(self, path: str, label: str) -> dict:
        created_at = self._now_iso()
        with self._connection() as conn:
            cursor = conn.execute(
                """
                INSERT INTO bookmarks (path, label, created_at)
                VALUES (?, ?, ?)
                """,
                (path, label, created_at),
            )
            bookmark_id = int(cursor.lastrowid)
            row = conn.execute(
                "SELECT id, path, label, created_at FROM bookmarks WHERE id = ?",
                (bookmark_id,),
            ).fetchone()
            return self._to_dict(row)

    def list_bookmarks(self) -> list[dict]:
        with self._connection() as conn:
            rows = conn.execute(
                """
                SELECT id, path, label, created_at
                FROM bookmarks
                ORDER BY created_at DESC
                """
            ).fetchall()
            return [self._to_dict(row) for row in rows]

    def delete_bookmark(self, bookmark_id: int) -> bool:
        with self._connection() as conn:
            cursor = conn.execute("DELETE FROM bookmarks WHERE id = ?", (bookmark_id,))
            return cursor.rowcount > 0

    def _ensure_schema(self) -> None:
        db_path = Path(self._db_path)
        if db_path.parent and str(db_path.parent) not in {"", "."}:
            db_path.parent.mkdir(parents=True, exist_ok=True)

        with self._connection() as conn:
            conn.execute(
                """
                CREATE TABLE IF NOT EXISTS bookmarks (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    path TEXT NOT NULL UNIQUE,
                    label TEXT NOT NULL,
                    created_at TEXT NOT NULL
                )
                """
            )
            conn.execute(
                """
                CREATE INDEX IF NOT EXISTS idx_bookmarks_created_at_desc
                ON bookmarks(created_at DESC)
                """
            )

    @contextmanager
    def _connection(self):
        conn = sqlite3.connect(self._db_path)
        conn.row_factory = sqlite3.Row
        try:
            yield conn
            conn.commit()
        except Exception:
            conn.rollback()
            raise
        finally:
            conn.close()

    @staticmethod
    def _to_dict(row: sqlite3.Row) -> dict:
        return {
            "id": int(row["id"]),
            "path": row["path"],
            "label": row["label"],
            "created_at": row["created_at"],
        }

    @staticmethod
    def _now_iso() -> str:
        return datetime.now(tz=timezone.utc).isoformat().replace("+00:00", "Z")
@@ -0,0 +1,241 @@
from __future__ import annotations

import sqlite3
import uuid
from contextlib import contextmanager
from datetime import datetime, timezone
from pathlib import Path

VALID_STATUSES = {"queued", "running", "completed", "failed"}
VALID_OPERATIONS = {"copy", "move"}


class TaskRepository:
    def __init__(self, db_path: str):
        self._db_path = db_path
        self._ensure_schema()

    def create_task(self, operation: str, source: str, destination: str) -> dict:
        if operation not in VALID_OPERATIONS:
            raise ValueError("invalid operation")

        task_id = str(uuid.uuid4())
        created_at = self._now_iso()

        with self._connection() as conn:
            conn.execute(
                """
                INSERT INTO tasks (
                    id, operation, status, source, destination,
                    done_bytes, total_bytes, done_items, total_items,
                    current_item, failed_item, error_code, error_message,
                    created_at, started_at, finished_at
                ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                """,
                (
                    task_id,
                    operation,
                    "queued",
                    source,
                    destination,
                    None,
                    None,
                    None,
                    None,
                    None,
                    None,
                    None,
                    None,
                    created_at,
                    None,
                    None,
                ),
            )
            row = conn.execute("SELECT * FROM tasks WHERE id = ?", (task_id,)).fetchone()

        return self._to_dict(row)

    def insert_task_for_testing(self, task: dict) -> None:
        status = task["status"]
        operation = task["operation"]
        if status not in VALID_STATUSES:
            raise ValueError("invalid status")
        if operation not in VALID_OPERATIONS:
            raise ValueError("invalid operation")

        with self._connection() as conn:
            conn.execute(
                """
                INSERT INTO tasks (
                    id, operation, status, source, destination,
                    done_bytes, total_bytes, done_items, total_items,
                    current_item, failed_item, error_code, error_message,
                    created_at, started_at, finished_at
                ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
                """,
                (
                    task["id"],
                    operation,
                    status,
                    task["source"],
                    task["destination"],
                    task.get("done_bytes"),
                    task.get("total_bytes"),
                    task.get("done_items"),
                    task.get("total_items"),
                    task.get("current_item"),
                    task.get("failed_item"),
                    task.get("error_code"),
                    task.get("error_message"),
                    task["created_at"],
                    task.get("started_at"),
                    task.get("finished_at"),
                ),
            )

    def get_task(self, task_id: str) -> dict | None:
        with self._connection() as conn:
            row = conn.execute("SELECT * FROM tasks WHERE id = ?", (task_id,)).fetchone()
            return self._to_dict(row) if row else None

    def list_tasks(self) -> list[dict]:
        with self._connection() as conn:
            rows = conn.execute(
                """
                SELECT * FROM tasks
                ORDER BY created_at DESC
                """
            ).fetchall()
            return [self._to_dict(row) for row in rows]

    def mark_running(self, task_id: str, done_bytes: int, total_bytes: int | None, current_item: str | None) -> None:
        started_at = self._now_iso()
        with self._connection() as conn:
            conn.execute(
                """
                UPDATE tasks
                SET status = ?, started_at = ?, done_bytes = ?, total_bytes = ?, current_item = ?
                WHERE id = ?
                """,
                ("running", started_at, done_bytes, total_bytes, current_item, task_id),
            )

    def update_progress(self, task_id: str, done_bytes: int, total_bytes: int | None, current_item: str | None) -> None:
        with self._connection() as conn:
            conn.execute(
                """
                UPDATE tasks
                SET done_bytes = ?, total_bytes = ?, current_item = ?
                WHERE id = ?
                """,
                (done_bytes, total_bytes, current_item, task_id),
            )

    def mark_completed(self, task_id: str, done_bytes: int | None, total_bytes: int | None) -> None:
        finished_at = self._now_iso()
        with self._connection() as conn:
            conn.execute(
                """
                UPDATE tasks
                SET status = ?, finished_at = ?, done_bytes = ?, total_bytes = ?
                WHERE id = ?
                """,
                ("completed", finished_at, done_bytes, total_bytes, task_id),
            )

    def mark_failed(
        self,
        task_id: str,
        error_code: str,
        error_message: str,
        failed_item: str | None,
        done_bytes: int | None,
        total_bytes: int | None,
    ) -> None:
        finished_at = self._now_iso()
        with self._connection() as conn:
            conn.execute(
                """
                UPDATE tasks
                SET status = ?, finished_at = ?, error_code = ?, error_message = ?, failed_item = ?, done_bytes = ?, total_bytes = ?
                WHERE id = ?
                """,
                ("failed", finished_at, error_code, error_message, failed_item, done_bytes, total_bytes, task_id),
            )

    def _ensure_schema(self) -> None:
        db_path = Path(self._db_path)
        if db_path.parent and str(db_path.parent) not in {"", "."}:
            db_path.parent.mkdir(parents=True, exist_ok=True)

        with self._connection() as conn:
            conn.execute(
                """
                CREATE TABLE IF NOT EXISTS tasks (
                    id TEXT PRIMARY KEY,
                    operation TEXT NOT NULL,
                    status TEXT NOT NULL,
                    source TEXT NOT NULL,
                    destination TEXT NOT NULL,
                    done_bytes INTEGER NULL,
                    total_bytes INTEGER NULL,
                    done_items INTEGER NULL,
                    total_items INTEGER NULL,
                    current_item TEXT NULL,
                    failed_item TEXT NULL,
                    error_code TEXT NULL,
                    error_message TEXT NULL,
                    created_at TEXT NOT NULL,
                    started_at TEXT NULL,
                    finished_at TEXT NULL
                )
                """
            )
            conn.execute(
                """
                CREATE INDEX IF NOT EXISTS idx_tasks_created_at_desc
                ON tasks(created_at DESC)
                """
            )

    def _connect(self) -> sqlite3.Connection:
        conn = sqlite3.connect(self._db_path)
        conn.row_factory = sqlite3.Row
        return conn

    @contextmanager
    def _connection(self):
        conn = self._connect()
        try:
            yield conn
            conn.commit()
        except Exception:
            conn.rollback()
            raise
        finally:
            conn.close()

    @staticmethod
    def _to_dict(row: sqlite3.Row) -> dict:
        return {
            "id": row["id"],
            "operation": row["operation"],
            "status": row["status"],
            "source": row["source"],
            "destination": row["destination"],
            "done_bytes": row["done_bytes"],
            "total_bytes": row["total_bytes"],
            "done_items": row["done_items"],
            "total_items": row["total_items"],
            "current_item": row["current_item"],
            "failed_item": row["failed_item"],
            "error_code": row["error_code"],
            "error_message": row["error_message"],
            "created_at": row["created_at"],
            "started_at": row["started_at"],
            "finished_at": row["finished_at"],
        }

    @staticmethod
    def _now_iso() -> str:
        return datetime.now(tz=timezone.utc).isoformat().replace("+00:00", "Z")
@@ -0,0 +1,75 @@
from __future__ import annotations

from functools import lru_cache

from backend.app.config import Settings, get_settings
from backend.app.db.bookmark_repository import BookmarkRepository
from backend.app.db.task_repository import TaskRepository
from backend.app.fs.filesystem_adapter import FilesystemAdapter
from backend.app.security.path_guard import PathGuard
from backend.app.services.bookmark_service import BookmarkService
from backend.app.services.browse_service import BrowseService
from backend.app.services.copy_task_service import CopyTaskService
from backend.app.services.file_ops_service import FileOpsService
from backend.app.services.move_task_service import MoveTaskService
from backend.app.services.task_service import TaskService
from backend.app.tasks_runner import TaskRunner


@lru_cache(maxsize=1)
def get_path_guard() -> PathGuard:
    settings: Settings = get_settings()
    return PathGuard(root_aliases=settings.root_aliases)


@lru_cache(maxsize=1)
def get_filesystem_adapter() -> FilesystemAdapter:
    return FilesystemAdapter()


@lru_cache(maxsize=1)
def get_task_repository() -> TaskRepository:
    settings: Settings = get_settings()
    return TaskRepository(db_path=settings.task_db_path)


@lru_cache(maxsize=1)
def get_bookmark_repository() -> BookmarkRepository:
    settings: Settings = get_settings()
    return BookmarkRepository(db_path=settings.task_db_path)


@lru_cache(maxsize=1)
def get_task_runner() -> TaskRunner:
    return TaskRunner(repository=get_task_repository(), filesystem=get_filesystem_adapter())


async def get_browse_service() -> BrowseService:
    return BrowseService(path_guard=get_path_guard(), filesystem=get_filesystem_adapter())


async def get_file_ops_service() -> FileOpsService:
    return FileOpsService(path_guard=get_path_guard(), filesystem=get_filesystem_adapter())


async def get_task_service() -> TaskService:
    return TaskService(repository=get_task_repository())


async def get_copy_task_service() -> CopyTaskService:
    return CopyTaskService(
        path_guard=get_path_guard(),
        repository=get_task_repository(),
        runner=get_task_runner(),
    )


async def get_move_task_service() -> MoveTaskService:
    return MoveTaskService(
        path_guard=get_path_guard(),
        repository=get_task_repository(),
        runner=get_task_runner(),
    )


async def get_bookmark_service() -> BookmarkService:
    return BookmarkService(path_guard=get_path_guard(), repository=get_bookmark_repository())
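The `@lru_cache(maxsize=1)` pattern used by the providers above turns each zero-argument factory into a process-wide singleton; a minimal sketch of that behavior (illustrative class name, not from the commit):

```python
from functools import lru_cache


class Repository:
    """Stand-in for an expensive-to-construct dependency."""


@lru_cache(maxsize=1)
def get_repository() -> Repository:
    # The body runs once; every later call returns the cached instance.
    return Repository()


print(get_repository() is get_repository())  # True
```

This is why the repository and runner providers can be called freely from several service factories without constructing duplicate SQLite connections or worker state.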
@@ -0,0 +1 @@
"""Filesystem access layer."""
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,61 @@
from __future__ import annotations

import shutil
from collections.abc import Callable
from datetime import datetime, timezone
from pathlib import Path


class FilesystemAdapter:
    def list_directory(self, directory: Path, show_hidden: bool) -> tuple[list[dict], list[dict]]:
        directories: list[dict] = []
        files: list[dict] = []

        for entry in sorted(directory.iterdir(), key=lambda item: item.name.lower()):
            if not show_hidden and entry.name.startswith("."):
                continue
            stat = entry.stat()
            modified = datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat().replace("+00:00", "Z")
            if entry.is_dir():
                directories.append({"name": entry.name, "modified": modified, "absolute": entry})
            elif entry.is_file():
                files.append(
                    {
                        "name": entry.name,
                        "size": int(stat.st_size),
                        "modified": modified,
                        "absolute": entry,
                    }
                )

        return directories, files

    def make_directory(self, path: Path) -> None:
        path.mkdir(parents=False, exist_ok=False)

    def rename_path(self, source: Path, destination: Path) -> None:
        source.rename(destination)

    def move_file(self, source: str, destination: str) -> None:
        Path(source).rename(Path(destination))

    def is_directory_empty(self, path: Path) -> bool:
        return not any(path.iterdir())

    def delete_file(self, path: Path) -> None:
        path.unlink()

    def delete_empty_directory(self, path: Path) -> None:
        path.rmdir()

    def copy_file(self, source: str, destination: str, on_progress: Callable[[int], None] | None = None) -> None:
        src = Path(source)
        dst = Path(destination)
        # "xb" fails if the destination already exists, preventing silent overwrite.
        with src.open("rb") as in_f, dst.open("xb") as out_f:
            while True:
                chunk = in_f.read(1024 * 1024)
                if not chunk:
                    break
                out_f.write(chunk)
                if on_progress:
                    on_progress(out_f.tell())
        shutil.copystat(src, dst, follow_symlinks=False)
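A standalone demonstration of the chunked-copy pattern in `copy_file` above (a sketch, not the commit's code): the exclusive `"xb"` open mode makes the copy fail rather than overwrite an existing destination, and `tell()` after each written chunk drives progress reporting.

```python
import shutil
import tempfile
from pathlib import Path


def copy_with_progress(source: str, destination: str, chunk_size: int = 1024 * 1024) -> list[int]:
    """Copy source to destination in chunks; return the byte offsets reported."""
    progress: list[int] = []
    src = Path(source)
    dst = Path(destination)
    with src.open("rb") as in_f, dst.open("xb") as out_f:  # "xb": error if dst exists
        while True:
            chunk = in_f.read(chunk_size)
            if not chunk:
                break
            out_f.write(chunk)
            progress.append(out_f.tell())
    shutil.copystat(src, dst)  # preserve timestamps/mode like the adapter does
    return progress


with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "a.bin"
    src.write_bytes(b"x" * 2500)
    offsets = copy_with_progress(str(src), str(Path(tmp) / "b.bin"), chunk_size=1000)
    print(offsets)  # [1000, 2000, 2500]
```

The offsets make it easy for the caller to feed `done_bytes` updates into the task repository's `update_progress` without the adapter knowing about tasks at all.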
@@ -0,0 +1,8 @@
import logging


def configure_logging() -> None:
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
@@ -0,0 +1,51 @@
from __future__ import annotations

from pathlib import Path

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from fastapi.staticfiles import StaticFiles

from backend.app.api.errors import AppError
from backend.app.api.routes_bookmarks import router as bookmarks_router
from backend.app.api.routes_browse import router as browse_router
from backend.app.api.routes_copy import router as copy_router
from backend.app.api.routes_files import router as files_router
from backend.app.api.routes_move import router as move_router
from backend.app.api.routes_tasks import router as tasks_router
from backend.app.logging import configure_logging

configure_logging()

BASE_DIR = Path(__file__).resolve().parents[3]
UI_DIR = Path(__file__).resolve().parents[2] / "html"
if not UI_DIR.exists():
    raise RuntimeError(f"UI directory does not exist: {UI_DIR}")

app = FastAPI(title="WebManager MVP Backend")
app.mount("/ui", StaticFiles(directory=str(UI_DIR), html=True), name="ui")
app.include_router(browse_router, prefix="/api")
app.include_router(files_router, prefix="/api")
app.include_router(copy_router, prefix="/api")
app.include_router(move_router, prefix="/api")
app.include_router(bookmarks_router, prefix="/api")
app.include_router(tasks_router, prefix="/api")


@app.exception_handler(AppError)
async def handle_app_error(_: Request, exc: AppError) -> JSONResponse:
    return JSONResponse(
        status_code=exc.status_code,
        content={
            "error": {
                "code": exc.code,
                "message": exc.message,
                "details": exc.details,
            }
        },
    )


@app.get("/")
async def read_root() -> dict[str, str]:
    return {"status": "ok"}
@@ -0,0 +1 @@
"""Security helpers."""
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,139 @@
from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path

from backend.app.api.errors import AppError


@dataclass(frozen=True)
class ResolvedPath:
    alias: str
    relative: str
    absolute: Path


class PathGuard:
    def __init__(self, root_aliases: dict[str, str]):
        normalized: dict[str, Path] = {}
        for alias, root in root_aliases.items():
            normalized[alias] = Path(root).resolve()
        self._roots = normalized

    def resolve_directory_path(self, input_path: str) -> ResolvedPath:
        resolved = self.resolve_path(input_path)
        if not resolved.absolute.exists():
            raise AppError(
                code="path_not_found",
                message="Requested path was not found",
                status_code=404,
                details={"path": input_path},
            )
        if not resolved.absolute.is_dir():
            raise AppError(
                code="path_type_conflict",
                message="Requested path is not a directory",
                status_code=409,
                details={"path": input_path},
            )
        return resolved

    def resolve_existing_path(self, input_path: str) -> ResolvedPath:
        resolved = self.resolve_path(input_path)
        if not resolved.absolute.exists():
            raise AppError(
                code="path_not_found",
                message="Requested path was not found",
                status_code=404,
                details={"path": input_path},
            )
        return resolved

    def resolve_path(self, input_path: str) -> ResolvedPath:
        alias, rel_segments, candidate = self.resolve_lexical_path(input_path)
        root = self._roots[alias]

        # Resolve symlinks for existing prefixes; for not-yet-existing tails, strict=False keeps
        # path normalization while still enabling the containment check.
        resolved_candidate = candidate.resolve(strict=False)
        if not self._is_under_root(resolved_candidate, root):
            raise AppError(
                code="path_outside_whitelist",
                message="Requested path is outside allowed roots",
                status_code=403,
                details={"path": input_path},
            )

        return ResolvedPath(
            alias=alias,
            relative=self._format_relative(alias, rel_segments),
            absolute=resolved_candidate,
        )

    def resolve_lexical_path(self, input_path: str) -> tuple[str, list[str], Path]:
        normalized_input = (input_path or "").strip().strip("/")
        if not normalized_input:
            raise AppError(
                code="invalid_request",
                message="Query parameter 'path' is required",
                status_code=400,
            )

        segments = [seg for seg in normalized_input.split("/") if seg]
        alias = segments[0] if segments else ""
        if alias not in self._roots:
            raise AppError(
                code="invalid_root_alias",
                message="Unknown root alias",
                status_code=403,
                details={"path": input_path},
            )

        rel_segments = segments[1:]
        if any(seg == ".." for seg in rel_segments):
            raise AppError(
                code="path_traversal_detected",
                message="Path traversal is not allowed",
                status_code=403,
                details={"path": input_path},
            )

        root = self._roots[alias]
        candidate = root.joinpath(*rel_segments)
        return alias, rel_segments, candidate

    def validate_name(self, name: str, field: str) -> str:
        normalized = (name or "").strip()
        if not normalized or normalized in {".", ".."} or "/" in normalized or "\\" in normalized:
            raise AppError(
                code="invalid_request",
                message="Invalid name",
                status_code=400,
                details={field: name},
            )
        return normalized

    def entry_relative_path(self, alias: str, absolute: Path) -> str:
        root = self._roots[alias]
        resolved_absolute = absolute.resolve(strict=False)
        if not self._is_under_root(resolved_absolute, root):
            raise AppError(
                code="symlink_escape_detected",
                message="Entry resolves outside allowed root",
                status_code=403,
                details={"path": f"{alias}"},
            )
        rel = resolved_absolute.relative_to(root).as_posix()
        return self._format_relative(alias, [p for p in rel.split("/") if p])

    @staticmethod
    def _is_under_root(path: Path, root: Path) -> bool:
        try:
            path.relative_to(root)
            return True
        except ValueError:
            return False

    @staticmethod
    def _format_relative(alias: str, rel_segments: list[str]) -> str:
        return alias if not rel_segments else f"{alias}/{'/'.join(rel_segments)}"
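The containment rule enforced by `_is_under_root` above can be shown in isolation; a minimal sketch, assuming lexical `..` segments were already rejected upstream as `resolve_lexical_path` does:

```python
from pathlib import Path


def is_under_root(path: Path, root: Path) -> bool:
    # Path.relative_to raises ValueError when path is not inside root,
    # which makes it a compact containment test after canonicalization.
    try:
        path.relative_to(root)
        return True
    except ValueError:
        return False


root = Path("/srv/storage1")
print(is_under_root(Path("/srv/storage1/movies/a.mkv"), root))  # True
print(is_under_root(Path("/etc/passwd"), root))                 # False
```

The guard applies this check only after `candidate.resolve(strict=False)`, so a symlink inside the root that points outside it resolves to its real target first and then fails this test.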
@@ -0,0 +1 @@
"""Service layer."""
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,53 @@
from __future__ import annotations

import sqlite3

from backend.app.api.errors import AppError
from backend.app.api.schemas import BookmarkDeleteResponse, BookmarkItem, BookmarkListResponse
from backend.app.db.bookmark_repository import BookmarkRepository
from backend.app.security.path_guard import PathGuard


class BookmarkService:
    def __init__(self, path_guard: PathGuard, repository: BookmarkRepository):
        self._path_guard = path_guard
        self._repository = repository

    def create_bookmark(self, path: str, label: str) -> BookmarkItem:
        normalized_label = (label or "").strip()
        if not normalized_label:
            raise AppError(
                code="invalid_request",
                message="Label is required",
                status_code=400,
                details={"label": label},
            )

        resolved = self._path_guard.resolve_path(path)

        try:
            bookmark = self._repository.create_bookmark(path=resolved.relative, label=normalized_label)
        except sqlite3.IntegrityError:
            raise AppError(
                code="already_exists",
                message="Bookmark already exists for path",
                status_code=409,
                details={"path": resolved.relative},
            )

        return BookmarkItem(**bookmark)

    def list_bookmarks(self) -> BookmarkListResponse:
        items = [BookmarkItem(**row) for row in self._repository.list_bookmarks()]
        return BookmarkListResponse(items=items)

    def delete_bookmark(self, bookmark_id: int) -> BookmarkDeleteResponse:
        deleted = self._repository.delete_bookmark(bookmark_id)
        if not deleted:
            raise AppError(
                code="path_not_found",
                message="Bookmark was not found",
                status_code=404,
                details={"bookmark_id": str(bookmark_id)},
            )
        return BookmarkDeleteResponse(id=bookmark_id)
@@ -0,0 +1,36 @@
from __future__ import annotations

from backend.app.api.schemas import BrowseResponse, DirectoryEntry, FileEntry
from backend.app.fs.filesystem_adapter import FilesystemAdapter
from backend.app.security.path_guard import PathGuard


class BrowseService:
    def __init__(self, path_guard: PathGuard, filesystem: FilesystemAdapter):
        self._path_guard = path_guard
        self._filesystem = filesystem

    def browse(self, path: str, show_hidden: bool) -> BrowseResponse:
        resolved = self._path_guard.resolve_directory_path(path)
        directories_raw, files_raw = self._filesystem.list_directory(resolved.absolute, show_hidden=show_hidden)

        directories = [
            DirectoryEntry(
                name=item["name"],
                path=self._path_guard.entry_relative_path(resolved.alias, item["absolute"]),
                modified=item["modified"],
            )
            for item in directories_raw
        ]

        files = [
            FileEntry(
                name=item["name"],
                path=self._path_guard.entry_relative_path(resolved.alias, item["absolute"]),
                size=item["size"],
                modified=item["modified"],
            )
            for item in files_raw
        ]

        return BrowseResponse(path=resolved.relative, directories=directories, files=files)
@@ -0,0 +1,77 @@
from __future__ import annotations

from backend.app.api.errors import AppError
from backend.app.api.schemas import TaskCreateResponse
from backend.app.db.task_repository import TaskRepository
from backend.app.security.path_guard import PathGuard
from backend.app.tasks_runner import TaskRunner


class CopyTaskService:
    def __init__(self, path_guard: PathGuard, repository: TaskRepository, runner: TaskRunner):
        self._path_guard = path_guard
        self._repository = repository
        self._runner = runner

    def create_copy_task(self, source: str, destination: str) -> TaskCreateResponse:
        resolved_source = self._path_guard.resolve_existing_path(source)
        _, _, lexical_source = self._path_guard.resolve_lexical_path(source)
        if lexical_source.is_symlink():
            raise AppError(
                code="type_conflict",
                message="Source must be a regular file",
                status_code=409,
                details={"path": source},
            )
        if not resolved_source.absolute.is_file():
            raise AppError(
                code="type_conflict",
                message="Source must be a file",
                status_code=409,
                details={"path": source},
            )

        resolved_destination = self._path_guard.resolve_path(destination)

        destination_parent = resolved_destination.absolute.parent
        parent_relative = self._path_guard.entry_relative_path(resolved_destination.alias, destination_parent)
        self._map_directory_validation(parent_relative)

        if resolved_destination.absolute.exists():
            raise AppError(
                code="already_exists",
                message="Target path already exists",
                status_code=409,
                details={"path": resolved_destination.relative},
            )

        total_bytes = int(resolved_source.absolute.stat().st_size)
        task = self._repository.create_task(
            operation="copy",
            source=resolved_source.relative,
            destination=resolved_destination.relative,
        )

        self._runner.enqueue_copy_file(
            task_id=task["id"],
            source=str(resolved_source.absolute),
            destination=str(resolved_destination.absolute),
            total_bytes=total_bytes,
        )

        return TaskCreateResponse(task_id=task["id"], status=task["status"])

    def _map_directory_validation(self, relative_path: str) -> None:
        try:
            self._path_guard.resolve_directory_path(relative_path)
        except AppError as exc:
            if exc.code == "path_type_conflict":
                raise AppError(
                    code="type_conflict",
                    message="Destination parent is not a directory",
                    status_code=409,
                    details=exc.details,
                )
            raise
@@ -0,0 +1,134 @@
from __future__ import annotations

from pathlib import Path

from backend.app.api.errors import AppError
from backend.app.api.schemas import DeleteResponse, MkdirResponse, RenameResponse
from backend.app.fs.filesystem_adapter import FilesystemAdapter
from backend.app.security.path_guard import PathGuard


class FileOpsService:
    def __init__(self, path_guard: PathGuard, filesystem: FilesystemAdapter):
        self._path_guard = path_guard
        self._filesystem = filesystem

    def mkdir(self, parent_path: str, name: str) -> MkdirResponse:
        resolved_parent = self._path_guard.resolve_directory_path(parent_path)
        safe_name = self._path_guard.validate_name(name, field="name")
        target_relative = self._join_relative(resolved_parent.relative, safe_name)
        resolved_target = self._path_guard.resolve_path(target_relative)

        if resolved_target.absolute.exists():
            raise AppError(
                code="already_exists",
                message="Target path already exists",
                status_code=409,
                details={"path": resolved_target.relative},
            )

        try:
            self._filesystem.make_directory(resolved_target.absolute)
        except FileExistsError:
            raise AppError(
                code="already_exists",
                message="Target path already exists",
                status_code=409,
                details={"path": resolved_target.relative},
            )
        except OSError as exc:
            raise AppError(
                code="io_error",
                message="Filesystem operation failed",
                status_code=500,
                details={"reason": str(exc)},
            )

        return MkdirResponse(path=resolved_target.relative)

    def rename(self, path: str, new_name: str) -> RenameResponse:
        resolved_source = self._path_guard.resolve_existing_path(path)
        safe_name = self._path_guard.validate_name(new_name, field="new_name")

        parent_relative = self._path_guard.entry_relative_path(resolved_source.alias, resolved_source.absolute.parent)
        target_relative = self._join_relative(parent_relative, safe_name)
        resolved_target = self._path_guard.resolve_path(target_relative)

        if resolved_target.absolute.exists():
            raise AppError(
                code="already_exists",
                message="Target path already exists",
                status_code=409,
                details={"path": resolved_target.relative},
            )

        try:
            self._filesystem.rename_path(resolved_source.absolute, resolved_target.absolute)
        except FileNotFoundError:
            raise AppError(
                code="path_not_found",
                message="Requested path was not found",
                status_code=404,
                details={"path": path},
            )
        except FileExistsError:
            raise AppError(
                code="already_exists",
                message="Target path already exists",
                status_code=409,
                details={"path": resolved_target.relative},
            )
        except OSError as exc:
            raise AppError(
                code="io_error",
                message="Filesystem operation failed",
                status_code=500,
                details={"reason": str(exc)},
            )

        return RenameResponse(path=resolved_target.relative)

    def delete(self, path: str) -> DeleteResponse:
        resolved_target = self._path_guard.resolve_existing_path(path)

        try:
            if resolved_target.absolute.is_file():
                self._filesystem.delete_file(resolved_target.absolute)
            elif resolved_target.absolute.is_dir():
                if not self._filesystem.is_directory_empty(resolved_target.absolute):
                    raise AppError(
                        code="directory_not_empty",
                        message="Directory is not empty",
                        status_code=409,
                        details={"path": resolved_target.relative},
                    )
                self._filesystem.delete_empty_directory(resolved_target.absolute)
            else:
                raise AppError(
                    code="type_conflict",
                    message="Unsupported path type for delete",
                    status_code=409,
                    details={"path": resolved_target.relative},
                )
        except AppError:
            raise
        except FileNotFoundError:
            raise AppError(
                code="path_not_found",
                message="Requested path was not found",
                status_code=404,
                details={"path": path},
            )
        except OSError as exc:
            raise AppError(
                code="io_error",
                message="Filesystem operation failed",
                status_code=500,
                details={"reason": str(exc)},
            )

        return DeleteResponse(path=resolved_target.relative)

    @staticmethod
    def _join_relative(base: str, name: str) -> str:
|
||||
return f"{base}/{name}" if base else name
|
||||
@@ -0,0 +1,77 @@
from __future__ import annotations

from backend.app.api.errors import AppError
from backend.app.api.schemas import TaskCreateResponse
from backend.app.db.task_repository import TaskRepository
from backend.app.security.path_guard import PathGuard
from backend.app.tasks_runner import TaskRunner


class MoveTaskService:
    def __init__(self, path_guard: PathGuard, repository: TaskRepository, runner: TaskRunner):
        self._path_guard = path_guard
        self._repository = repository
        self._runner = runner

    def create_move_task(self, source: str, destination: str) -> TaskCreateResponse:
        resolved_source = self._path_guard.resolve_existing_path(source)
        _, _, lexical_source = self._path_guard.resolve_lexical_path(source)

        if lexical_source.is_symlink():
            raise AppError(
                code="type_conflict",
                message="Source must be a regular file",
                status_code=409,
                details={"path": source},
            )
        if not resolved_source.absolute.is_file():
            raise AppError(
                code="type_conflict",
                message="Source must be a file",
                status_code=409,
                details={"path": source},
            )

        resolved_destination = self._path_guard.resolve_path(destination)
        destination_parent = resolved_destination.absolute.parent
        parent_relative = self._path_guard.entry_relative_path(resolved_destination.alias, destination_parent)
        self._map_directory_validation(parent_relative)

        if resolved_destination.absolute.exists():
            raise AppError(
                code="already_exists",
                message="Target path already exists",
                status_code=409,
                details={"path": resolved_destination.relative},
            )

        total_bytes = int(resolved_source.absolute.stat().st_size)
        task = self._repository.create_task(
            operation="move",
            source=resolved_source.relative,
            destination=resolved_destination.relative,
        )

        same_root = resolved_source.alias == resolved_destination.alias
        self._runner.enqueue_move_file(
            task_id=task["id"],
            source=str(resolved_source.absolute),
            destination=str(resolved_destination.absolute),
            total_bytes=total_bytes,
            same_root=same_root,
        )

        return TaskCreateResponse(task_id=task["id"], status=task["status"])

    def _map_directory_validation(self, relative_path: str) -> None:
        try:
            self._path_guard.resolve_directory_path(relative_path)
        except AppError as exc:
            if exc.code == "path_type_conflict":
                raise AppError(
                    code="type_conflict",
                    message="Destination parent is not a directory",
                    status_code=409,
                    details=exc.details,
                )
            raise
@@ -0,0 +1,42 @@
from __future__ import annotations

from backend.app.api.errors import AppError
from backend.app.api.schemas import TaskDetailResponse, TaskListItem, TaskListResponse
from backend.app.db.task_repository import TaskRepository


class TaskService:
    def __init__(self, repository: TaskRepository):
        self._repository = repository

    def create_task(self, operation: str, source: str, destination: str) -> TaskDetailResponse:
        task = self._repository.create_task(operation=operation, source=source, destination=destination)
        return TaskDetailResponse(**task)

    def get_task(self, task_id: str) -> TaskDetailResponse:
        task = self._repository.get_task(task_id)
        if not task:
            raise AppError(
                code="task_not_found",
                message="Task was not found",
                status_code=404,
                details={"task_id": task_id},
            )
        return TaskDetailResponse(**task)

    def list_tasks(self) -> TaskListResponse:
        tasks = self._repository.list_tasks()
        return TaskListResponse(
            items=[
                TaskListItem(
                    id=task["id"],
                    operation=task["operation"],
                    status=task["status"],
                    source=task["source"],
                    destination=task["destination"],
                    created_at=task["created_at"],
                    finished_at=task["finished_at"],
                )
                for task in tasks
            ]
        )
@@ -0,0 +1,125 @@
from __future__ import annotations

import threading
from pathlib import Path

from backend.app.db.task_repository import TaskRepository
from backend.app.fs.filesystem_adapter import FilesystemAdapter


class TaskRunner:
    def __init__(self, repository: TaskRepository, filesystem: FilesystemAdapter):
        self._repository = repository
        self._filesystem = filesystem

    def enqueue_copy_file(self, task_id: str, source: str, destination: str, total_bytes: int) -> None:
        thread = threading.Thread(
            target=self._run_copy_file,
            args=(task_id, source, destination, total_bytes),
            daemon=True,
        )
        thread.start()

    def enqueue_move_file(
        self,
        task_id: str,
        source: str,
        destination: str,
        total_bytes: int,
        same_root: bool,
    ) -> None:
        thread = threading.Thread(
            target=self._run_move_file,
            args=(task_id, source, destination, total_bytes, same_root),
            daemon=True,
        )
        thread.start()

    def _run_copy_file(self, task_id: str, source: str, destination: str, total_bytes: int) -> None:
        self._repository.mark_running(
            task_id=task_id,
            done_bytes=0,
            total_bytes=total_bytes,
            current_item=source,
        )

        progress = {"done": 0}

        def on_progress(done_bytes: int) -> None:
            progress["done"] = done_bytes
            self._repository.update_progress(
                task_id=task_id,
                done_bytes=done_bytes,
                total_bytes=total_bytes,
                current_item=source,
            )

        try:
            self._filesystem.copy_file(source=source, destination=destination, on_progress=on_progress)
            self._repository.mark_completed(
                task_id=task_id,
                done_bytes=total_bytes,
                total_bytes=total_bytes,
            )
        except OSError as exc:
            self._repository.mark_failed(
                task_id=task_id,
                error_code="io_error",
                error_message=str(exc),
                failed_item=source,
                done_bytes=progress["done"],
                total_bytes=total_bytes,
            )

    def _run_move_file(
        self,
        task_id: str,
        source: str,
        destination: str,
        total_bytes: int,
        same_root: bool,
    ) -> None:
        self._repository.mark_running(
            task_id=task_id,
            done_bytes=0,
            total_bytes=total_bytes,
            current_item=source,
        )

        progress = {"done": 0}

        try:
            if same_root:
                self._filesystem.move_file(source=source, destination=destination)
                self._repository.mark_completed(
                    task_id=task_id,
                    done_bytes=total_bytes,
                    total_bytes=total_bytes,
                )
                return

            def on_progress(done_bytes: int) -> None:
                progress["done"] = done_bytes
                self._repository.update_progress(
                    task_id=task_id,
                    done_bytes=done_bytes,
                    total_bytes=total_bytes,
                    current_item=source,
                )

            self._filesystem.copy_file(source=source, destination=destination, on_progress=on_progress)
            self._filesystem.delete_file(Path(source))
            self._repository.mark_completed(
                task_id=task_id,
                done_bytes=total_bytes,
                total_bytes=total_bytes,
            )
        except OSError as exc:
            self._repository.mark_failed(
                task_id=task_id,
                error_code="io_error",
                error_message=str(exc),
                failed_item=source,
                done_bytes=progress["done"],
                total_bytes=total_bytes,
            )
@@ -0,0 +1,3 @@
from backend.app.main import app

__all__ = ["app"]
@@ -0,0 +1,6 @@
fastapi==0.111.0
starlette==0.37.2
pydantic==2.12.5
httpx==0.27.2
anyio==4.4.0
sniffio==1.3.1
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -0,0 +1,157 @@
from __future__ import annotations

import asyncio
import sys
import tempfile
import unittest
from pathlib import Path

import httpx

sys.path.insert(0, str(Path(__file__).resolve().parents[3]))

from backend.app.dependencies import get_bookmark_service
from backend.app.db.bookmark_repository import BookmarkRepository
from backend.app.main import app
from backend.app.security.path_guard import PathGuard
from backend.app.services.bookmark_service import BookmarkService


class BookmarksApiGoldenTest(unittest.TestCase):
    def setUp(self) -> None:
        self.temp_dir = tempfile.TemporaryDirectory()
        self.root = Path(self.temp_dir.name) / "root"
        self.root.mkdir(parents=True, exist_ok=True)
        self.repo = BookmarkRepository(str(Path(self.temp_dir.name) / "bookmarks.db"))

        path_guard = PathGuard({"storage1": str(self.root)})
        service = BookmarkService(path_guard=path_guard, repository=self.repo)

        async def _override_bookmark_service() -> BookmarkService:
            return service

        app.dependency_overrides[get_bookmark_service] = _override_bookmark_service

    def tearDown(self) -> None:
        app.dependency_overrides.clear()
        self.temp_dir.cleanup()

    def _request(self, method: str, url: str, payload: dict | None = None) -> httpx.Response:
        async def _run() -> httpx.Response:
            transport = httpx.ASGITransport(app=app)
            async with httpx.AsyncClient(transport=transport, base_url="http://testserver") as client:
                if method == "POST":
                    return await client.post(url, json=payload)
                if method == "DELETE":
                    return await client.delete(url)
                return await client.get(url)

        return asyncio.run(_run())

    def test_create_success(self) -> None:
        response = self._request(
            "POST",
            "/api/bookmarks",
            {"path": "storage1/my/path", "label": "My Path"},
        )

        self.assertEqual(response.status_code, 200)
        body = response.json()
        self.assertEqual(body["path"], "storage1/my/path")
        self.assertEqual(body["label"], "My Path")
        self.assertIn("id", body)
        self.assertIn("created_at", body)

    def test_list_shape(self) -> None:
        self._request(
            "POST",
            "/api/bookmarks",
            {"path": "storage1/a", "label": "A"},
        )

        response = self._request("GET", "/api/bookmarks")

        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(response.json()["items"]), 1)
        item = response.json()["items"][0]
        self.assertEqual(set(item.keys()), {"id", "path", "label", "created_at"})

    def test_delete_success(self) -> None:
        created = self._request(
            "POST",
            "/api/bookmarks",
            {"path": "storage1/a", "label": "A"},
        ).json()

        response = self._request("DELETE", f"/api/bookmarks/{created['id']}")

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.json(), {"id": created["id"]})

    def test_invalid_path(self) -> None:
        response = self._request(
            "POST",
            "/api/bookmarks",
            {"path": "unknown/path", "label": "A"},
        )

        self.assertEqual(response.status_code, 403)
        self.assertEqual(response.json()["error"]["code"], "invalid_root_alias")

    def test_invalid_label(self) -> None:
        response = self._request(
            "POST",
            "/api/bookmarks",
            {"path": "storage1/a", "label": " "},
        )

        self.assertEqual(response.status_code, 400)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "invalid_request",
                    "message": "Label is required",
                    "details": {"label": " "},
                }
            },
        )

    def test_duplicate_conflict(self) -> None:
        self._request(
            "POST",
            "/api/bookmarks",
            {"path": "storage1/a", "label": "A"},
        )

        response = self._request(
            "POST",
            "/api/bookmarks",
            {"path": "storage1/a", "label": "Again"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "already_exists",
                    "message": "Bookmark already exists for path",
                    "details": {"path": "storage1/a"},
                }
            },
        )

    def test_traversal_attempt(self) -> None:
        response = self._request(
            "POST",
            "/api/bookmarks",
            {"path": "storage1/../etc", "label": "Bad"},
        )

        self.assertEqual(response.status_code, 403)
        self.assertEqual(response.json()["error"]["code"], "path_traversal_detected")


if __name__ == "__main__":
    unittest.main()
@@ -0,0 +1,105 @@
from __future__ import annotations

import asyncio
import os
import sys
import tempfile
import unittest
from datetime import datetime, timezone
from pathlib import Path

import httpx

sys.path.insert(0, str(Path(__file__).resolve().parents[3]))

from backend.app.dependencies import get_browse_service
from backend.app.fs.filesystem_adapter import FilesystemAdapter
from backend.app.main import app
from backend.app.security.path_guard import PathGuard
from backend.app.services.browse_service import BrowseService


class BrowseApiGoldenTest(unittest.TestCase):
    def setUp(self) -> None:
        self.temp_dir = tempfile.TemporaryDirectory()
        self.root = Path(self.temp_dir.name) / "root"
        self.root.mkdir(parents=True, exist_ok=True)

        folder = self.root / "folder"
        folder.mkdir()
        file_path = self.root / "video.mkv"
        file_path.write_bytes(b"abc")

        hidden_dir = self.root / ".hidden_dir"
        hidden_dir.mkdir()
        hidden_file = self.root / ".secret"
        hidden_file.write_bytes(b"x")

        mtime = 1710000000
        for path in [folder, file_path, hidden_dir, hidden_file]:
            path.touch()
            path.chmod(0o755)
            os.utime(path, (mtime, mtime))

        service = BrowseService(
            path_guard=PathGuard({"storage1": str(self.root)}),
            filesystem=FilesystemAdapter(),
        )

        async def _override_browse_service() -> BrowseService:
            return service

        app.dependency_overrides[get_browse_service] = _override_browse_service

    def tearDown(self) -> None:
        app.dependency_overrides.clear()
        self.temp_dir.cleanup()

    def _get(self, path: str, show_hidden: str | None = None) -> httpx.Response:
        async def _run() -> httpx.Response:
            transport = httpx.ASGITransport(app=app)
            async with httpx.AsyncClient(transport=transport, base_url="http://testserver") as client:
                params = {"path": path}
                if show_hidden is not None:
                    params["show_hidden"] = show_hidden
                return await client.get("/api/browse", params=params)

        return asyncio.run(_run())

    def test_browse_success_default_hides_hidden_entries(self) -> None:
        response = self._get("storage1")

        self.assertEqual(response.status_code, 200)
        modified = datetime.fromtimestamp(1710000000, tz=timezone.utc).isoformat().replace("+00:00", "Z")
        expected = {
            "path": "storage1",
            "directories": [
                {
                    "name": "folder",
                    "path": "storage1/folder",
                    "modified": modified,
                }
            ],
            "files": [
                {
                    "name": "video.mkv",
                    "path": "storage1/video.mkv",
                    "size": 3,
                    "modified": modified,
                }
            ],
        }
        self.assertEqual(response.json(), expected)

    def test_browse_success_show_hidden_true(self) -> None:
        response = self._get("storage1", show_hidden="true")

        self.assertEqual(response.status_code, 200)
        body = response.json()
        directory_names = [item["name"] for item in body["directories"]]
        file_names = [item["name"] for item in body["files"]]
        self.assertEqual(directory_names, [".hidden_dir", "folder"])
        self.assertEqual(file_names, [".secret", "video.mkv"])


if __name__ == "__main__":
    unittest.main()
@@ -0,0 +1,211 @@
from __future__ import annotations

import asyncio
import sys
import tempfile
import time
import unittest
from pathlib import Path
from typing import Callable

import httpx

sys.path.insert(0, str(Path(__file__).resolve().parents[3]))

from backend.app.dependencies import get_copy_task_service, get_task_service
from backend.app.db.task_repository import TaskRepository
from backend.app.main import app
from backend.app.security.path_guard import PathGuard
from backend.app.services.copy_task_service import CopyTaskService
from backend.app.services.task_service import TaskService
from backend.app.tasks_runner import TaskRunner
from backend.app.fs.filesystem_adapter import FilesystemAdapter


class FailingFilesystemAdapter(FilesystemAdapter):
    def copy_file(self, source: str, destination: str, on_progress: Callable[[int], None] | None = None) -> None:
        raise OSError("forced copy failure")


class CopyApiGoldenTest(unittest.TestCase):
    def setUp(self) -> None:
        self.temp_dir = tempfile.TemporaryDirectory()
        self.root = Path(self.temp_dir.name) / "root"
        self.root.mkdir(parents=True, exist_ok=True)
        self.repo = TaskRepository(str(Path(self.temp_dir.name) / "tasks.db"))

        path_guard = PathGuard({"storage1": str(self.root), "storage2": str(self.root)})
        self._set_services(path_guard=path_guard, filesystem=FilesystemAdapter())

    def tearDown(self) -> None:
        app.dependency_overrides.clear()
        self.temp_dir.cleanup()

    def _set_services(self, path_guard: PathGuard, filesystem: FilesystemAdapter) -> None:
        runner = TaskRunner(repository=self.repo, filesystem=filesystem)
        copy_service = CopyTaskService(path_guard=path_guard, repository=self.repo, runner=runner)
        task_service = TaskService(repository=self.repo)

        async def _override_copy_service() -> CopyTaskService:
            return copy_service

        async def _override_task_service() -> TaskService:
            return task_service

        app.dependency_overrides[get_copy_task_service] = _override_copy_service
        app.dependency_overrides[get_task_service] = _override_task_service

    def _request(self, method: str, url: str, payload: dict | None = None) -> httpx.Response:
        async def _run() -> httpx.Response:
            transport = httpx.ASGITransport(app=app)
            async with httpx.AsyncClient(transport=transport, base_url="http://testserver") as client:
                if method == "POST":
                    return await client.post(url, json=payload)
                return await client.get(url)

        return asyncio.run(_run())

    def _wait_task(self, task_id: str, timeout_s: float = 2.0) -> dict:
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            response = self._request("GET", f"/api/tasks/{task_id}")
            body = response.json()
            if body["status"] in {"completed", "failed"}:
                return body
            time.sleep(0.02)
        self.fail("task did not reach terminal state in time")

    def test_copy_success_create_task_shape(self) -> None:
        src = self.root / "source.txt"
        src.write_text("hello", encoding="utf-8")

        response = self._request(
            "POST",
            "/api/files/copy",
            {"source": "storage1/source.txt", "destination": "storage1/copy.txt"},
        )

        self.assertEqual(response.status_code, 202)
        body = response.json()
        self.assertIn("task_id", body)
        self.assertEqual(body["status"], "queued")

        detail = self._wait_task(body["task_id"])
        self.assertEqual(detail["status"], "completed")
        self.assertEqual(detail["total_bytes"], 5)
        self.assertEqual(detail["done_bytes"], 5)
        self.assertTrue((self.root / "copy.txt").exists())
        self.assertEqual((self.root / "copy.txt").read_text(encoding="utf-8"), "hello")

    def test_copy_source_not_found(self) -> None:
        response = self._request(
            "POST",
            "/api/files/copy",
            {"source": "storage1/missing.txt", "destination": "storage1/out.txt"},
        )

        self.assertEqual(response.status_code, 404)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "path_not_found",
                    "message": "Requested path was not found",
                    "details": {"path": "storage1/missing.txt"},
                }
            },
        )

    def test_copy_source_is_directory_type_conflict(self) -> None:
        (self.root / "dir").mkdir()

        response = self._request(
            "POST",
            "/api/files/copy",
            {"source": "storage1/dir", "destination": "storage1/out.txt"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(response.json()["error"]["code"], "type_conflict")

    def test_copy_destination_exists_already_exists(self) -> None:
        (self.root / "source.txt").write_text("x", encoding="utf-8")
        (self.root / "exists.txt").write_text("y", encoding="utf-8")

        response = self._request(
            "POST",
            "/api/files/copy",
            {"source": "storage1/source.txt", "destination": "storage1/exists.txt"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "already_exists",
                    "message": "Target path already exists",
                    "details": {"path": "storage1/exists.txt"},
                }
            },
        )

    def test_copy_traversal_source(self) -> None:
        response = self._request(
            "POST",
            "/api/files/copy",
            {"source": "storage1/../etc/passwd", "destination": "storage1/out.txt"},
        )

        self.assertEqual(response.status_code, 403)
        self.assertEqual(response.json()["error"]["code"], "path_traversal_detected")

    def test_copy_traversal_destination(self) -> None:
        (self.root / "source.txt").write_text("x", encoding="utf-8")

        response = self._request(
            "POST",
            "/api/files/copy",
            {"source": "storage1/source.txt", "destination": "storage1/../etc/out.txt"},
        )

        self.assertEqual(response.status_code, 403)
        self.assertEqual(response.json()["error"]["code"], "path_traversal_detected")

    def test_copy_source_symlink_rejected(self) -> None:
        target = self.root / "real.txt"
        target.write_text("x", encoding="utf-8")
        link = self.root / "link.txt"
        link.symlink_to(target)

        response = self._request(
            "POST",
            "/api/files/copy",
            {"source": "storage1/link.txt", "destination": "storage1/out.txt"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(response.json()["error"]["code"], "type_conflict")

    def test_copy_runtime_io_error_failed_task_shape(self) -> None:
        src = self.root / "source.txt"
        src.write_text("hello", encoding="utf-8")

        path_guard = PathGuard({"storage1": str(self.root), "storage2": str(self.root)})
        self._set_services(path_guard=path_guard, filesystem=FailingFilesystemAdapter())

        response = self._request(
            "POST",
            "/api/files/copy",
            {"source": "storage1/source.txt", "destination": "storage1/copy.txt"},
        )
        self.assertEqual(response.status_code, 202)

        task_id = response.json()["task_id"]
        detail = self._wait_task(task_id)
        self.assertEqual(detail["status"], "failed")
        self.assertEqual(detail["error_code"], "io_error")
        self.assertEqual(detail["failed_item"], str(src))


if __name__ == "__main__":
    unittest.main()
@@ -0,0 +1,110 @@
from __future__ import annotations

import asyncio
import sys
import tempfile
import unittest
from pathlib import Path

import httpx

sys.path.insert(0, str(Path(__file__).resolve().parents[3]))

from backend.app.dependencies import get_browse_service
from backend.app.fs.filesystem_adapter import FilesystemAdapter
from backend.app.main import app
from backend.app.security.path_guard import PathGuard
from backend.app.services.browse_service import BrowseService


class BrowseApiErrorsGoldenTest(unittest.TestCase):
    def setUp(self) -> None:
        self.temp_dir = tempfile.TemporaryDirectory()
        self.root = Path(self.temp_dir.name) / "root"
        self.root.mkdir(parents=True, exist_ok=True)
        (self.root / "a.txt").write_text("a", encoding="utf-8")

        service = BrowseService(
            path_guard=PathGuard({"storage1": str(self.root)}),
            filesystem=FilesystemAdapter(),
        )

        async def _override_browse_service() -> BrowseService:
            return service

        app.dependency_overrides[get_browse_service] = _override_browse_service

    def tearDown(self) -> None:
        app.dependency_overrides.clear()
        self.temp_dir.cleanup()

    def _get(self, path: str) -> httpx.Response:
        async def _run() -> httpx.Response:
            transport = httpx.ASGITransport(app=app)
            async with httpx.AsyncClient(transport=transport, base_url="http://testserver") as client:
                return await client.get("/api/browse", params={"path": path})

        return asyncio.run(_run())

    def test_invalid_root_alias_error_shape(self) -> None:
        response = self._get("unknown/path")

        self.assertEqual(response.status_code, 403)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "invalid_root_alias",
                    "message": "Unknown root alias",
                    "details": {"path": "unknown/path"},
                }
            },
        )

    def test_traversal_error_shape(self) -> None:
        response = self._get("storage1/../etc")

        self.assertEqual(response.status_code, 403)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "path_traversal_detected",
                    "message": "Path traversal is not allowed",
                    "details": {"path": "storage1/../etc"},
                }
            },
        )

    def test_not_found_error_shape(self) -> None:
        response = self._get("storage1/missing")

        self.assertEqual(response.status_code, 404)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "path_not_found",
                    "message": "Requested path was not found",
                    "details": {"path": "storage1/missing"},
                }
            },
        )

    def test_type_conflict_error_shape(self) -> None:
        response = self._get("storage1/a.txt")

        self.assertEqual(response.status_code, 409)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "path_type_conflict",
                    "message": "Requested path is not a directory",
                    "details": {"path": "storage1/a.txt"},
                }
            },
        )


if __name__ == "__main__":
    unittest.main()
@@ -0,0 +1,323 @@
from __future__ import annotations

import asyncio
import sys
import tempfile
import unittest
from pathlib import Path

import httpx

sys.path.insert(0, str(Path(__file__).resolve().parents[3]))

from backend.app.dependencies import get_file_ops_service
from backend.app.fs.filesystem_adapter import FilesystemAdapter
from backend.app.main import app
from backend.app.security.path_guard import PathGuard
from backend.app.services.file_ops_service import FileOpsService


class FileOpsApiGoldenTest(unittest.TestCase):
    def setUp(self) -> None:
        self.temp_dir = tempfile.TemporaryDirectory()
        self.root = Path(self.temp_dir.name) / "root"
        self.root.mkdir(parents=True, exist_ok=True)

        self.scope = self.root / "scope"
        self.scope.mkdir(parents=True, exist_ok=True)
        (self.scope / "old.txt").write_text("x", encoding="utf-8")
        (self.scope / "existing.txt").write_text("y", encoding="utf-8")

        service = FileOpsService(
            path_guard=PathGuard({"storage1": str(self.root)}),
            filesystem=FilesystemAdapter(),
        )

        async def _override_file_ops_service() -> FileOpsService:
            return service

        app.dependency_overrides[get_file_ops_service] = _override_file_ops_service

    def tearDown(self) -> None:
        app.dependency_overrides.clear()
        self.temp_dir.cleanup()

    def _post(self, url: str, payload: dict[str, str]) -> httpx.Response:
        async def _run() -> httpx.Response:
            transport = httpx.ASGITransport(app=app)
            async with httpx.AsyncClient(transport=transport, base_url="http://testserver") as client:
                return await client.post(url, json=payload)

        return asyncio.run(_run())

    def test_mkdir_success(self) -> None:
        response = self._post(
            "/api/files/mkdir",
            {"parent_path": "storage1/scope", "name": "new_folder"},
        )

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.json(), {"path": "storage1/scope/new_folder"})
        self.assertTrue((self.scope / "new_folder").is_dir())

    def test_mkdir_conflict_directory_exists(self) -> None:
        (self.scope / "existing_dir").mkdir()
        response = self._post(
            "/api/files/mkdir",
            {"parent_path": "storage1/scope", "name": "existing_dir"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "already_exists",
                    "message": "Target path already exists",
                    "details": {"path": "storage1/scope/existing_dir"},
                }
            },
        )

    def test_mkdir_conflict_file_exists(self) -> None:
        response = self._post(
            "/api/files/mkdir",
            {"parent_path": "storage1/scope", "name": "existing.txt"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "already_exists",
                    "message": "Target path already exists",
                    "details": {"path": "storage1/scope/existing.txt"},
                }
            },
        )

    def test_rename_success(self) -> None:
        response = self._post(
            "/api/files/rename",
            {"path": "storage1/scope/old.txt", "new_name": "renamed.txt"},
        )

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.json(), {"path": "storage1/scope/renamed.txt"})
        self.assertFalse((self.scope / "old.txt").exists())
        self.assertTrue((self.scope / "renamed.txt").exists())

    def test_rename_conflict(self) -> None:
        response = self._post(
            "/api/files/rename",
            {"path": "storage1/scope/old.txt", "new_name": "existing.txt"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "already_exists",
                    "message": "Target path already exists",
                    "details": {"path": "storage1/scope/existing.txt"},
                }
            },
        )

    def test_rename_not_found(self) -> None:
        response = self._post(
            "/api/files/rename",
            {"path": "storage1/scope/missing.txt", "new_name": "renamed.txt"},
        )

        self.assertEqual(response.status_code, 404)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "path_not_found",
                    "message": "Requested path was not found",
                    "details": {"path": "storage1/scope/missing.txt"},
                }
            },
        )

    def test_rename_invalid_new_name_dotdot(self) -> None:
        response = self._post(
            "/api/files/rename",
            {"path": "storage1/scope/old.txt", "new_name": ".."},
        )

        self.assertEqual(response.status_code, 400)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "invalid_request",
                    "message": "Invalid name",
                    "details": {"new_name": ".."},
                }
            },
        )

    def test_rename_invalid_new_name_with_slash(self) -> None:
        response = self._post(
            "/api/files/rename",
            {"path": "storage1/scope/old.txt", "new_name": "a/b"},
        )

        self.assertEqual(response.status_code, 400)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "invalid_request",
                    "message": "Invalid name",
                    "details": {"new_name": "a/b"},
                }
            },
        )

    def test_mkdir_invalid_path(self) -> None:
        response = self._post(
            "/api/files/mkdir",
            {"parent_path": "storage1/scope", "name": "bad/name"},
        )

        self.assertEqual(response.status_code, 400)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "invalid_request",
                    "message": "Invalid name",
                    "details": {"name": "bad/name"},
                }
            },
        )

    def test_mkdir_traversal_attempt(self) -> None:
        response = self._post(
            "/api/files/mkdir",
            {"parent_path": "storage1/../etc", "name": "x"},
        )

        self.assertEqual(response.status_code, 403)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "path_traversal_detected",
                    "message": "Path traversal is not allowed",
                    "details": {"path": "storage1/../etc"},
                }
            },
        )

    def test_delete_file_success(self) -> None:
        target = self.scope / "delete_me.txt"
        target.write_text("z", encoding="utf-8")

        response = self._post(
            "/api/files/delete",
            {"path": "storage1/scope/delete_me.txt"},
        )

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.json(), {"path": "storage1/scope/delete_me.txt"})
        self.assertFalse(target.exists())

    def test_delete_empty_directory_success(self) -> None:
        target = self.scope / "empty_dir"
        target.mkdir()

        response = self._post(
            "/api/files/delete",
            {"path": "storage1/scope/empty_dir"},
        )

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.json(), {"path": "storage1/scope/empty_dir"})
        self.assertFalse(target.exists())

    def test_delete_not_found(self) -> None:
        response = self._post(
            "/api/files/delete",
            {"path": "storage1/scope/missing.txt"},
        )

        self.assertEqual(response.status_code, 404)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "path_not_found",
                    "message": "Requested path was not found",
                    "details": {"path": "storage1/scope/missing.txt"},
                }
            },
        )

    def test_delete_traversal_attempt(self) -> None:
        response = self._post(
            "/api/files/delete",
            {"path": "storage1/../etc/passwd"},
        )

        self.assertEqual(response.status_code, 403)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "path_traversal_detected",
                    "message": "Path traversal is not allowed",
                    "details": {"path": "storage1/../etc/passwd"},
                }
            },
        )

    def test_delete_non_empty_directory_conflict(self) -> None:
        target = self.scope / "non_empty"
        target.mkdir()
        (target / "a.txt").write_text("a", encoding="utf-8")

        response = self._post(
            "/api/files/delete",
            {"path": "storage1/scope/non_empty"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "directory_not_empty",
                    "message": "Directory is not empty",
                    "details": {"path": "storage1/scope/non_empty"},
                }
            },
        )

    def test_delete_invalid_path(self) -> None:
        response = self._post(
            "/api/files/delete",
            {"path": ""},
        )

        self.assertEqual(response.status_code, 400)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "invalid_request",
                    "message": "Query parameter 'path' is required",
                    "details": None,
                }
            },
        )


if __name__ == "__main__":
    unittest.main()
@@ -0,0 +1,215 @@
from __future__ import annotations

import asyncio
import sys
import tempfile
import time
import unittest
from pathlib import Path

import httpx

sys.path.insert(0, str(Path(__file__).resolve().parents[3]))

from backend.app.dependencies import get_move_task_service, get_task_service
from backend.app.db.task_repository import TaskRepository
from backend.app.fs.filesystem_adapter import FilesystemAdapter
from backend.app.main import app
from backend.app.security.path_guard import PathGuard
from backend.app.services.move_task_service import MoveTaskService
from backend.app.services.task_service import TaskService
from backend.app.tasks_runner import TaskRunner


class FailingDeleteFilesystemAdapter(FilesystemAdapter):
    def delete_file(self, path: Path) -> None:
        raise OSError("forced delete failure")


class MoveApiGoldenTest(unittest.TestCase):
    def setUp(self) -> None:
        self.temp_dir = tempfile.TemporaryDirectory()
        self.root1 = Path(self.temp_dir.name) / "root1"
        self.root2 = Path(self.temp_dir.name) / "root2"
        self.root1.mkdir(parents=True, exist_ok=True)
        self.root2.mkdir(parents=True, exist_ok=True)

        self.repo = TaskRepository(str(Path(self.temp_dir.name) / "tasks.db"))
        path_guard = PathGuard({"storage1": str(self.root1), "storage2": str(self.root2)})
        self._set_services(path_guard=path_guard, filesystem=FilesystemAdapter())

    def tearDown(self) -> None:
        app.dependency_overrides.clear()
        self.temp_dir.cleanup()

    def _set_services(self, path_guard: PathGuard, filesystem: FilesystemAdapter) -> None:
        runner = TaskRunner(repository=self.repo, filesystem=filesystem)
        move_service = MoveTaskService(path_guard=path_guard, repository=self.repo, runner=runner)
        task_service = TaskService(repository=self.repo)

        async def _override_move_service() -> MoveTaskService:
            return move_service

        async def _override_task_service() -> TaskService:
            return task_service

        app.dependency_overrides[get_move_task_service] = _override_move_service
        app.dependency_overrides[get_task_service] = _override_task_service

    def _request(self, method: str, url: str, payload: dict | None = None) -> httpx.Response:
        async def _run() -> httpx.Response:
            transport = httpx.ASGITransport(app=app)
            async with httpx.AsyncClient(transport=transport, base_url="http://testserver") as client:
                if method == "POST":
                    return await client.post(url, json=payload)
                return await client.get(url)

        return asyncio.run(_run())

    def _wait_task(self, task_id: str, timeout_s: float = 2.0) -> dict:
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            response = self._request("GET", f"/api/tasks/{task_id}")
            body = response.json()
            if body["status"] in {"completed", "failed"}:
                return body
            time.sleep(0.02)
        self.fail("task did not reach terminal state in time")

    def test_move_success_same_root_create_task_shape_and_completed(self) -> None:
        src = self.root1 / "source.txt"
        src.write_text("hello", encoding="utf-8")

        response = self._request(
            "POST",
            "/api/files/move",
            {"source": "storage1/source.txt", "destination": "storage1/moved.txt"},
        )

        self.assertEqual(response.status_code, 202)
        body = response.json()
        self.assertIn("task_id", body)
        self.assertEqual(body["status"], "queued")

        detail = self._wait_task(body["task_id"])
        self.assertEqual(detail["status"], "completed")
        self.assertTrue((self.root1 / "moved.txt").exists())
        self.assertFalse(src.exists())

    def test_move_success_cross_root_create_task_shape_and_completed(self) -> None:
        src = self.root1 / "source.txt"
        src.write_text("hello", encoding="utf-8")

        response = self._request(
            "POST",
            "/api/files/move",
            {"source": "storage1/source.txt", "destination": "storage2/moved.txt"},
        )

        self.assertEqual(response.status_code, 202)
        body = response.json()
        self.assertIn("task_id", body)
        self.assertEqual(body["status"], "queued")

        detail = self._wait_task(body["task_id"])
        self.assertEqual(detail["status"], "completed")
        self.assertTrue((self.root2 / "moved.txt").exists())
        self.assertFalse(src.exists())

    def test_move_source_not_found(self) -> None:
        response = self._request(
            "POST",
            "/api/files/move",
            {"source": "storage1/missing.txt", "destination": "storage1/out.txt"},
        )

        self.assertEqual(response.status_code, 404)
        self.assertEqual(response.json()["error"]["code"], "path_not_found")

    def test_move_source_is_directory_type_conflict(self) -> None:
        (self.root1 / "dir").mkdir()

        response = self._request(
            "POST",
            "/api/files/move",
            {"source": "storage1/dir", "destination": "storage1/out.txt"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(response.json()["error"]["code"], "type_conflict")

    def test_move_destination_exists_already_exists(self) -> None:
        (self.root1 / "source.txt").write_text("x", encoding="utf-8")
        (self.root1 / "exists.txt").write_text("y", encoding="utf-8")

        response = self._request(
            "POST",
            "/api/files/move",
            {"source": "storage1/source.txt", "destination": "storage1/exists.txt"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(response.json()["error"]["code"], "already_exists")

    def test_move_traversal_source(self) -> None:
        response = self._request(
            "POST",
            "/api/files/move",
            {"source": "storage1/../etc/passwd", "destination": "storage1/out.txt"},
        )

        self.assertEqual(response.status_code, 403)
        self.assertEqual(response.json()["error"]["code"], "path_traversal_detected")

    def test_move_traversal_destination(self) -> None:
        (self.root1 / "source.txt").write_text("x", encoding="utf-8")

        response = self._request(
            "POST",
            "/api/files/move",
            {"source": "storage1/source.txt", "destination": "storage1/../etc/out.txt"},
        )

        self.assertEqual(response.status_code, 403)
        self.assertEqual(response.json()["error"]["code"], "path_traversal_detected")

    def test_move_source_symlink_rejected(self) -> None:
        target = self.root1 / "real.txt"
        target.write_text("x", encoding="utf-8")
        link = self.root1 / "link.txt"
        link.symlink_to(target)

        response = self._request(
            "POST",
            "/api/files/move",
            {"source": "storage1/link.txt", "destination": "storage1/out.txt"},
        )

        self.assertEqual(response.status_code, 409)
        self.assertEqual(response.json()["error"]["code"], "type_conflict")

    def test_move_runtime_io_error_failed_task_shape(self) -> None:
        src = self.root1 / "source.txt"
        src.write_text("hello", encoding="utf-8")

        path_guard = PathGuard({"storage1": str(self.root1), "storage2": str(self.root2)})
        self._set_services(path_guard=path_guard, filesystem=FailingDeleteFilesystemAdapter())

        response = self._request(
            "POST",
            "/api/files/move",
            {"source": "storage1/source.txt", "destination": "storage2/moved.txt"},
        )
        self.assertEqual(response.status_code, 202)

        task_id = response.json()["task_id"]
        detail = self._wait_task(task_id)

        self.assertEqual(detail["status"], "failed")
        self.assertEqual(detail["error_code"], "io_error")
        self.assertTrue((self.root2 / "moved.txt").exists())
        self.assertTrue(src.exists())


if __name__ == "__main__":
    unittest.main()
@@ -0,0 +1,261 @@
from __future__ import annotations

import asyncio
import sys
import tempfile
import unittest
from pathlib import Path

import httpx

sys.path.insert(0, str(Path(__file__).resolve().parents[3]))

from backend.app.dependencies import get_task_service
from backend.app.db.task_repository import TaskRepository
from backend.app.main import app
from backend.app.services.task_service import TaskService


class TasksApiGoldenTest(unittest.TestCase):
    def setUp(self) -> None:
        self.temp_dir = tempfile.TemporaryDirectory()
        self.db_path = str(Path(self.temp_dir.name) / "tasks.db")
        self.repo = TaskRepository(self.db_path)
        self.service = TaskService(self.repo)

        async def _override_task_service() -> TaskService:
            return self.service

        app.dependency_overrides[get_task_service] = _override_task_service

    def tearDown(self) -> None:
        app.dependency_overrides.clear()
        self.temp_dir.cleanup()

    def _get(self, url: str) -> httpx.Response:
        async def _run() -> httpx.Response:
            transport = httpx.ASGITransport(app=app)
            async with httpx.AsyncClient(transport=transport, base_url="http://testserver") as client:
                return await client.get(url)

        return asyncio.run(_run())

    def _insert_task(
        self,
        *,
        task_id: str,
        operation: str,
        status: str,
        source: str,
        destination: str,
        created_at: str,
        started_at: str | None = None,
        finished_at: str | None = None,
        done_bytes: int | None = None,
        total_bytes: int | None = None,
        done_items: int | None = None,
        total_items: int | None = None,
        current_item: str | None = None,
        failed_item: str | None = None,
        error_code: str | None = None,
        error_message: str | None = None,
    ) -> None:
        self.repo.insert_task_for_testing(
            {
                "id": task_id,
                "operation": operation,
                "status": status,
                "source": source,
                "destination": destination,
                "done_bytes": done_bytes,
                "total_bytes": total_bytes,
                "done_items": done_items,
                "total_items": total_items,
                "current_item": current_item,
                "failed_item": failed_item,
                "error_code": error_code,
                "error_message": error_message,
                "created_at": created_at,
                "started_at": started_at,
                "finished_at": finished_at,
            }
        )

    def test_get_tasks_empty_list(self) -> None:
        response = self._get("/api/tasks")

        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.json(), {"items": []})

    def test_get_tasks_list_shape(self) -> None:
        self._insert_task(
            task_id="task-old",
            operation="copy",
            status="completed",
            source="storage1/a.txt",
            destination="storage2/a.txt",
            created_at="2026-03-10T10:00:00Z",
            finished_at="2026-03-10T10:00:05Z",
        )
        self._insert_task(
            task_id="task-new",
            operation="move",
            status="running",
            source="storage1/b.txt",
            destination="storage2/b.txt",
            created_at="2026-03-10T10:01:00Z",
        )

        response = self._get("/api/tasks")

        self.assertEqual(response.status_code, 200)
        self.assertEqual(
            response.json(),
            {
                "items": [
                    {
                        "id": "task-new",
                        "operation": "move",
                        "status": "running",
                        "source": "storage1/b.txt",
                        "destination": "storage2/b.txt",
                        "created_at": "2026-03-10T10:01:00Z",
                        "finished_at": None,
                    },
                    {
                        "id": "task-old",
                        "operation": "copy",
                        "status": "completed",
                        "source": "storage1/a.txt",
                        "destination": "storage2/a.txt",
                        "created_at": "2026-03-10T10:00:00Z",
                        "finished_at": "2026-03-10T10:00:05Z",
                    },
                ]
            },
        )

    def test_get_task_detail_queued(self) -> None:
        self._insert_task(
            task_id="task-queued",
            operation="copy",
            status="queued",
            source="storage1/a.txt",
            destination="storage2/a.txt",
            created_at="2026-03-10T10:00:00Z",
        )

        response = self._get("/api/tasks/task-queued")

        self.assertEqual(response.status_code, 200)
        self.assertEqual(
            response.json(),
            {
                "id": "task-queued",
                "operation": "copy",
                "status": "queued",
                "source": "storage1/a.txt",
                "destination": "storage2/a.txt",
                "done_bytes": None,
                "total_bytes": None,
                "done_items": None,
                "total_items": None,
                "current_item": None,
                "failed_item": None,
                "error_code": None,
                "error_message": None,
                "created_at": "2026-03-10T10:00:00Z",
                "started_at": None,
                "finished_at": None,
            },
        )

    def test_get_task_detail_running(self) -> None:
        self._insert_task(
            task_id="task-running",
            operation="move",
            status="running",
            source="storage1/a.txt",
            destination="storage2/a.txt",
            created_at="2026-03-10T10:00:00Z",
            started_at="2026-03-10T10:00:01Z",
            done_bytes=1024,
            total_bytes=2048,
            done_items=1,
            total_items=2,
            current_item="storage1/a.txt",
        )

        response = self._get("/api/tasks/task-running")

        self.assertEqual(response.status_code, 200)
        body = response.json()
        self.assertEqual(body["status"], "running")
        self.assertEqual(body["done_bytes"], 1024)
        self.assertEqual(body["total_bytes"], 2048)
        self.assertEqual(body["current_item"], "storage1/a.txt")

    def test_get_task_detail_completed(self) -> None:
        self._insert_task(
            task_id="task-completed",
            operation="copy",
            status="completed",
            source="storage1/a.txt",
            destination="storage2/a.txt",
            created_at="2026-03-10T10:00:00Z",
            started_at="2026-03-10T10:00:01Z",
            finished_at="2026-03-10T10:00:03Z",
            done_bytes=2048,
            total_bytes=2048,
        )

        response = self._get("/api/tasks/task-completed")

        self.assertEqual(response.status_code, 200)
        body = response.json()
        self.assertEqual(body["status"], "completed")
        self.assertEqual(body["finished_at"], "2026-03-10T10:00:03Z")
        self.assertEqual(body["error_code"], None)

    def test_get_task_detail_failed(self) -> None:
        self._insert_task(
            task_id="task-failed",
            operation="move",
            status="failed",
            source="storage1/a.txt",
            destination="storage2/a.txt",
            created_at="2026-03-10T10:00:00Z",
            started_at="2026-03-10T10:00:01Z",
            finished_at="2026-03-10T10:00:02Z",
            failed_item="storage1/a.txt",
            error_code="io_error",
            error_message="write failed",
        )

        response = self._get("/api/tasks/task-failed")

        self.assertEqual(response.status_code, 200)
        body = response.json()
        self.assertEqual(body["status"], "failed")
        self.assertEqual(body["failed_item"], "storage1/a.txt")
        self.assertEqual(body["error_code"], "io_error")
        self.assertEqual(body["error_message"], "write failed")

    def test_get_task_not_found(self) -> None:
        response = self._get("/api/tasks/task-missing")

        self.assertEqual(response.status_code, 404)
        self.assertEqual(
            response.json(),
            {
                "error": {
                    "code": "task_not_found",
                    "message": "Task was not found",
                    "details": {"task_id": "task-missing"},
                }
            },
        )


if __name__ == "__main__":
    unittest.main()
@@ -0,0 +1,49 @@
from __future__ import annotations

import sys
import unittest
from pathlib import Path

from fastapi.staticfiles import StaticFiles
from starlette.routing import Mount

sys.path.insert(0, str(Path(__file__).resolve().parents[3]))

from backend.app.main import app


class UiSmokeGoldenTest(unittest.TestCase):
    def _ui_mount(self) -> Mount:
        for route in app.routes:
            if isinstance(route, Mount) and route.path == "/ui":
                return route
        self.fail("Expected /ui mount to be registered")

    def test_ui_mount_and_index_contains_expected_panels(self) -> None:
        mount = self._ui_mount()
        self.assertIsInstance(mount.app, StaticFiles)
        index_path = Path(mount.app.directory) / "index.html"
        self.assertTrue(index_path.exists())

        body = index_path.read_text(encoding="utf-8")
        self.assertIn('id="workspace"', body)
        self.assertIn('id="footer-bar"', body)
        self.assertIn('id="left-pane"', body)
        self.assertIn('id="right-pane"', body)
        self.assertNotIn('id="bookmarks-panel"', body)
        self.assertNotIn('id="tasks-panel"', body)

    def test_ui_static_assets_are_present_and_mapped(self) -> None:
        mount = self._ui_mount()
        static_root = Path(mount.app.directory)
        self.assertTrue((static_root / "app.js").exists())
        self.assertTrue((static_root / "style.css").exists())

        app_js_url = app.url_path_for("ui", path="/app.js")
        style_css_url = app.url_path_for("ui", path="/style.css")
        self.assertEqual(app_js_url, "/ui/app.js")
        self.assertEqual(style_css_url, "/ui/style.css")


if __name__ == "__main__":
    unittest.main()