feat: upgrade

uprightbass360
2025-11-20 02:11:24 -05:00
parent 9deff01441
commit 5f7bdcb7e7
25 changed files with 1502 additions and 777 deletions


@@ -122,11 +122,11 @@ flowchart TB
- **Worldserver debug logging** Need extra verbosity temporarily? Flip `COMPOSE_OVERRIDE_WORLDSERVER_DEBUG_LOGGING_ENABLED=1` to include `compose-overrides/worldserver-debug-logging.yml`, which bumps `AC_LOG_LEVEL` across all worldserver profiles. Turn it back off once you're done to avoid noisy logs.
- **Binary logging toggle** `MYSQL_DISABLE_BINLOG=1` appends `--skip-log-bin` via the MySQL wrapper entrypoint to keep disk churn low (and match Playerbot guidance). Flip the flag to `0` to re-enable binlogs for debugging or replication.
- **Drop-in configs** Any `.cnf` placed in `${STORAGE_PATH}/config/mysql/conf.d` (exposed via `MYSQL_CONFIG_DIR`) is mounted into `/etc/mysql/conf.d`. Use this to add custom tunables or temporarily override the binlog setting without touching the image. A combined sketch of these toggles appears after this list.
- **Forcing a fresh database import** The MySQL data volume (`local-storage/mysql-data`) tracks whether a restore/import completed via the sentinel file `.restore-completed`. The import workflow now double-checks the live MySQL runtime before trusting that sentinel, and automatically logs `Restoration marker found, but databases are empty - forcing re-import` (while deleting the stale marker) if it detects an empty tmpfs. Manual cleanup is only needed when you intentionally want to rerun the import; in that case delete the sentinel and run `docker compose run --rm ac-db-import` or the full `./scripts/bash/stage-modules.sh`. Leave the sentinel alone during normal operations so the import job doesn't wipe existing data on every start.
- **Module-driven SQL migration** Module code is staged through the `ac-modules` service and `scripts/bash/manage-modules.sh`, while SQL payloads are copied into the running `ac-worldserver` container by `scripts/bash/stage-modules.sh`. The staging script maintains a ledger at `storage/modules/.modules-meta/module-sql-ledger.txt` (mirrored in the container) so identical SQL files aren't copied twice, and it prunes any staged update that's already recorded in the database `updates` table. If you ever need to force a re-stage, delete that ledger file and rerun the script. Always trigger module/deploy workflows via these scripts rather than copying repositories manually; this keeps C++ builds, Lua assets, and SQL migrations synchronized with the database state.
- **Forcing a fresh database import** MySQL's persistent files (and the `.restore-*` sentinels) now live inside the Docker volume `mysql-data` at `/var/lib/mysql-persistent`. The import workflow still double-checks the live runtime before trusting those markers, logging `Restoration marker found, but databases are empty - forcing re-import` if the tmpfs is empty. When you intentionally need to rerun the import, delete the sentinel with `docker run --rm -v mysql-data:/var/lib/mysql-persistent alpine sh -c 'rm -f /var/lib/mysql-persistent/.restore-completed'` and then execute `docker compose run --rm ac-db-import` or `./scripts/bash/stage-modules.sh` (see the command sketch after this list). Leave the sentinel alone during normal operations so the import job doesn't wipe existing data on every start.
- **Module-driven SQL migration** Module code is staged through the `ac-modules` service and `scripts/bash/manage-modules.sh`, while SQL payloads are copied into the running `ac-worldserver` container by `scripts/bash/stage-modules.sh`. Every run clears `/azerothcore/data/sql/updates/{db_world,db_characters,db_auth}` and recopies all enabled module SQL files with deterministic names, letting AzerothCore's built-in updater decide what to apply. Always trigger module/deploy workflows via these scripts rather than copying repositories manually; this keeps C++ builds, Lua assets, and SQL migrations synchronized with the database state.
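A combined sketch of the toggles above. The `.env` values are the variables quoted in the bullets; the drop-in file name `99-custom.cnf` and its tunables are purely illustrative assumptions:

```
# .env - temporary worldserver debug logging on, MySQL binary logging disabled
COMPOSE_OVERRIDE_WORLDSERVER_DEBUG_LOGGING_ENABLED=1
MYSQL_DISABLE_BINLOG=1
```

```
# ${STORAGE_PATH}/config/mysql/conf.d/99-custom.cnf (hypothetical drop-in, mounted into /etc/mysql/conf.d)
[mysqld]
max_connections = 300
innodb_buffer_pool_size = 2G
```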
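And the forced re-import commands from the bullet above, chained in the order they would normally run (a sketch; only the commands already quoted in this section are used):

```
# 1. Remove the restore sentinel stored in the mysql-data volume
docker run --rm -v mysql-data:/var/lib/mysql-persistent alpine \
  sh -c 'rm -f /var/lib/mysql-persistent/.restore-completed'

# 2. Re-run the import job...
docker compose run --rm ac-db-import
# ...or run the full staging workflow instead:
./scripts/bash/stage-modules.sh
```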
### Restore-aware module SQL
When a backup successfully restores, the `ac-db-import` container automatically executes `scripts/bash/restore-and-stage.sh`. The helper refreshes the module SQL ledger in shared storage (using the snapshot stored alongside the backup when available, or rebuilding it from the modules directory) and writes a `.restore-prestaged` marker so the next `./scripts/bash/stage-modules.sh` run knows to repopulate `/azerothcore/data/sql/updates/*` before the worldserver boots. The staging script now recopies every module SQL file with deterministic names, letting AzerothCore's built-in updater decide whether an individual script should run while leaving already-applied files in place so the server never complains about missing history. If the snapshot is missing (legacy backup) the helper simply rebuilds the ledger and still sets the flag, so the runtime staging pass behaves exactly the same.
When a backup successfully restores, the `ac-db-import` container automatically executes `scripts/bash/restore-and-stage.sh`, which simply drops `storage/modules/.modules-meta/.restore-prestaged`. The next `./scripts/bash/stage-modules.sh --yes` clears any previously staged files and recopies every enabled module SQL file before the worldserver boots. AzerothCore's auto-updater then scans `/azerothcore/data/sql/updates/*`, applies any scripts that aren't recorded in the `updates` tables yet, and skips the rest without ever complaining about missing history files.
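A minimal way to verify this handshake, assuming the marker path quoted above (the `ls` check is just an illustration; only `stage-modules.sh --yes` is part of the documented workflow):

```
# Did restore-and-stage.sh drop the prestage marker?
ls storage/modules/.modules-meta/.restore-prestaged

# If present, the next staging run clears and recopies every enabled module SQL file
./scripts/bash/stage-modules.sh --yes
```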
## Compose Overrides
@@ -168,15 +168,16 @@ To tweak MySQL settings, place `.cnf` snippets in `storage/config/mysql/conf.d`.
**Local Storage** (`STORAGE_PATH_LOCAL` - default: `./local-storage`)
```
local-storage/
├── mysql-data/ # MySQL persistent data (tmpfs runtime + persistent snapshot)
├── client-data-cache/ # Downloaded WoW client data archives
├── source/ # AzerothCore source repository (created during builds)
│ └── azerothcore-playerbots/ # Playerbot fork (when playerbots enabled)
└── images/ # Exported Docker images for remote deployment
```
Local storage now only hosts build artifacts, cached downloads, and helper images; the database files have moved into a dedicated Docker volume.
**Docker Volume**
- `client-data-cache` - Temporary storage for client data downloads
**Docker Volumes**
- `client-data-cache` - Temporary storage for client data downloads
- `mysql-data` - MySQL persistent data + `.restore-*` sentinels (`/var/lib/mysql-persistent`); see the inspection sketch below
This separation ensures database and build artifacts stay on fast local storage while configuration, modules, and backups can be shared across hosts via NFS.
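To check where the `mysql-data` volume lives and whether the `.restore-*` sentinels are present, a throwaway container works; this is a sketch that reuses the alpine image and mount point from the re-import command earlier in this section:

```
# Host-side location and metadata of the volume
docker volume inspect mysql-data

# Peek inside for the sentinel files without touching the MySQL service
docker run --rm -v mysql-data:/var/lib/mysql-persistent alpine \
  ls -la /var/lib/mysql-persistent
```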