- **Worldserver debug logging** – Need extra verbosity temporarily? Flip `COMPOSE_OVERRIDE_WORLDSERVER_DEBUG_LOGGING_ENABLED=1` to include `compose-overrides/worldserver-debug-logging.yml`, which bumps `AC_LOG_LEVEL` across all worldserver profiles. Turn it back off once you're done to avoid noisy logs.
- **Binary logging toggle** – `MYSQL_DISABLE_BINLOG=1` appends `--skip-log-bin` via the MySQL wrapper entrypoint to keep disk churn low (and match Playerbot guidance). Flip the flag to `0` to re-enable binlogs for debugging or replication.
- **Drop-in configs** – Any `.cnf` placed in `${STORAGE_PATH}/config/mysql/conf.d` (exposed via `MYSQL_CONFIG_DIR`) is mounted into `/etc/mysql/conf.d`. Use this to add custom tunables or temporarily override the binlog setting without touching the image (see the sketch after this list).
- **Forcing a fresh database import** – MySQL’s persistent files (and the `.restore-*` sentinels) now live inside the Docker volume `mysql-data` at `/var/lib/mysql-persistent`. The import workflow still double-checks the live runtime before trusting those markers, logging `Restoration marker found, but databases are empty - forcing re-import` if the tmpfs is empty. When you intentionally need to rerun the import, delete the sentinel with `docker run --rm -v mysql-data:/var/lib/mysql-persistent alpine sh -c 'rm -f /var/lib/mysql-persistent/.restore-completed'` and then execute `docker compose run --rm ac-db-import` or `./scripts/bash/stage-modules.sh`. Leave the sentinel alone during normal operations so the import job doesn’t wipe existing data on every start.
- **Module-driven SQL migration** – Module code is staged through the `ac-modules` service and `scripts/bash/manage-modules.sh`, while SQL payloads are copied into the running `ac-worldserver` container by `scripts/bash/stage-modules.sh`. Every run clears `/azerothcore/data/sql/updates/{db_world,db_characters,db_auth}` and recopies all enabled module SQL files with deterministic names, letting AzerothCore’s built-in updater decide what to apply. Always trigger module/deploy workflows via these scripts rather than copying repositories manually; this keeps C++ builds, Lua assets, and SQL migrations synchronized with the database state.
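
A quick sketch of the first three toggles above in action (assuming the flags live in your `.env`; the `.cnf` contents are purely illustrative):

```bash
# Temporary worldserver verbosity - flip back to 0 when done
sed -i 's/^COMPOSE_OVERRIDE_WORLDSERVER_DEBUG_LOGGING_ENABLED=.*/COMPOSE_OVERRIDE_WORLDSERVER_DEBUG_LOGGING_ENABLED=1/' .env

# Keep binlogs off; set to 0 to re-enable for debugging/replication
grep -q '^MYSQL_DISABLE_BINLOG=' .env || echo 'MYSQL_DISABLE_BINLOG=1' >> .env

# Drop-in MySQL tunable, mounted into /etc/mysql/conf.d
cat > "${STORAGE_PATH:-./storage}/config/mysql/conf.d/99-custom.cnf" <<'EOF'
[mysqld]
max_connections = 500
EOF
```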
### Restore-aware module SQL
When a backup successfully restores, the `ac-db-import` container automatically executes `scripts/bash/restore-and-stage.sh`, which simply writes the marker `storage/modules/.modules-meta/.restore-prestaged`. The next `./scripts/bash/stage-modules.sh --yes` run clears any previously staged files and recopies every enabled module SQL file before the worldserver boots. AzerothCore’s auto-updater then scans `/azerothcore/data/sql/updates/*`, applies any scripts that aren’t recorded in the `updates` tables yet, and skips the rest, without ever complaining about missing history files.
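
Conceptually the handshake works like this; a minimal sketch using the paths named above (not the actual script):

```bash
META=storage/modules/.modules-meta
UPDATES=/azerothcore/data/sql/updates

if [ -f "$META/.restore-prestaged" ]; then
  # A restore just happened: clear anything staged previously, then consume the marker
  rm -f "$UPDATES"/db_world/MODULE_*.sql \
        "$UPDATES"/db_characters/MODULE_*.sql \
        "$UPDATES"/db_auth/MODULE_*.sql
  rm -f "$META/.restore-prestaged"
fi

# ...then every enabled module SQL file is recopied under a deterministic name,
# e.g. mod-npc-buffer's npc_buffer.sql -> $UPDATES/db_world/MODULE_mod-npc-buffer_npc_buffer.sql
```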
## Compose Overrides
To tweak MySQL settings, place `.cnf` snippets in `storage/config/mysql/conf.d`.

**Local Storage** (`STORAGE_PATH_LOCAL` - default: `./local-storage`)
```
local-storage/
├── mysql-data/                   # Legacy location – database files now live in the mysql-data Docker volume
├── client-data-cache/            # Downloaded WoW client data archives
├── source/                       # AzerothCore source repository (created during builds)
│   └── azerothcore-playerbots/   # Playerbot fork (when playerbots enabled)
└── images/                       # Exported Docker images for remote deployment
```
Local storage now only hosts build artifacts, cached downloads, and helper images; the database files have moved into a dedicated Docker volume.

**Docker Volumes**
- `client-data-cache` – Temporary storage for client data downloads
- `mysql-data` – MySQL persistent data + `.restore-*` sentinels (`/var/lib/mysql-persistent`)
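
To peek inside the named volume without starting MySQL, the same throwaway-container pattern used elsewhere in these docs for deleting the sentinel works for inspection too:

```bash
# List the persistent InnoDB files and .restore-* sentinels
docker run --rm -v mysql-data:/var/lib/mysql-persistent alpine \
  ls -la /var/lib/mysql-persistent
```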
This separation keeps build artifacts and caches on fast local storage while configuration, modules, and backups can be shared across hosts via NFS; the database itself lives in the dedicated `mysql-data` Docker volume.

The system automatically detects and restores backups on first startup.
### Restore Safety Checks & Sentinels
Because MySQL stores its hot data in a tmpfs (`/var/lib/mysql-runtime`) while persisting the durable files inside the Docker volume `mysql-data` (mounted at `/var/lib/mysql-persistent`), it is possible for the runtime data to be wiped (for example, after a host reboot) while the sentinel `.restore-completed` file still claims the databases are ready. To prevent the worldserver and authserver from entering restart loops, the `ac-db-import` workflow now performs an explicit sanity check before trusting those markers:

- The import script queries MySQL for the combined table count across `acore_auth`, `acore_world`, and `acore_characters` (sketched after this list).
- If **any tables exist**, the script logs `Backup restoration completed successfully` and skips the expensive restore just as before.
- If **no tables are found or the query fails**, the script logs `Restoration marker found, but databases are empty - forcing re-import`, automatically clears the stale marker, and reruns the backup restore + `dbimport` pipeline so services always start with real data.
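
The emptiness check boils down to one `information_schema` query; a sketch of the idea (the script's exact SQL may differ):

```sql
SELECT COUNT(*) AS table_count
FROM information_schema.tables
WHERE table_schema IN ('acore_auth', 'acore_world', 'acore_characters');
-- 0 => treat the sentinel as stale and force a re-import; anything else => trust it
```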
To complement that one-shot safety net, the long-running `ac-db-guard` service now watches the runtime tmpfs. It polls MySQL, and if it ever finds those schemas empty (the usual symptom after a daemon restart), it automatically reruns `db-import-conditional.sh` to rehydrate from the most recent backup before marking itself healthy. All auth/world services now depend on `ac-db-guard`'s health check, guaranteeing that AzerothCore never boots without real tables in memory. The guard also mounts the working SQL tree from `local-storage/source/azerothcore-playerbots/data/sql` into the db containers so that every `dbimport` run uses the exact SQL that matches your checked-out source, even if the Docker image was built earlier.
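
A stripped-down sketch of that polling loop (the real logic ships inside `ac-db-guard`; the host name, credentials variable, and interval here are assumptions):

```bash
while true; do
  tables=$(mysql -h ac-mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -N -B -e \
    "SELECT COUNT(*) FROM information_schema.tables
     WHERE table_schema IN ('acore_auth','acore_world','acore_characters');" 2>/dev/null)
  if [ "${tables:-0}" -eq 0 ]; then
    # Schemas vanished (e.g. tmpfs wiped by a daemon restart) - rehydrate from backup
    ./db-import-conditional.sh
  fi
  sleep 30
done
```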
Because new features sometimes require schema changes even when the databases already contain data, `ac-db-guard` now performs a `dbimport` verification sweep (configurable via `DB_GUARD_VERIFY_INTERVAL_SECONDS`) to proactively apply any outstanding updates from the mounted SQL tree. By default it runs once per bootstrap and then every 24 hours, so the auth/world servers always see the columns/tables expected by their binaries without anyone having to run host scripts manually.
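
For example, to run the sweep every six hours instead of the 24-hour default:

```bash
# .env
DB_GUARD_VERIFY_INTERVAL_SECONDS=21600   # 6 * 60 * 60
```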
Manual intervention is only required if you intentionally want to force a fresh import despite having data. In that scenario:
1. Stop the stack: `docker compose down`
2. Delete the sentinel inside the volume: `docker run --rm -v mysql-data:/var/lib/mysql-persistent alpine sh -c 'rm -f /var/lib/mysql-persistent/.restore-completed'`
3. Run `docker compose run --rm ac-db-import`

See [docs/ADVANCED.md#database-hardening](ADVANCED.md#database-hardening) for more background on the tmpfs/persistent split and why the sentinel exists, and review [docs/TROUBLESHOOTING.md](TROUBLESHOOTING.md#database-connection-issues) for quick steps when the automation logs the warning above.

To apply a SQL file by hand:

```
# From the MySQL shell:
SOURCE /path/to/your/file.sql;
# Or from the host:
docker exec -i ac-mysql mysql -uroot -pPASSWORD acore_world < yourfile.sql
```
### Module SQL Staging
`./scripts/bash/stage-modules.sh` recopies every enabled module SQL file into `/azerothcore/data/sql/updates/{db_world,db_characters,db_auth}` each time it runs. Files are named deterministically (`MODULE_mod-name_file.sql`) and left on disk permanently. AzerothCore’s auto-updater consults the `updates` tables to decide whether a script needs to run; if it already ran, the entry in `updates` prevents a reapply, but leaving the file in place avoids “missing history” warnings and provides a clear audit trail.
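
After a staging run the update directories contain entries along these lines (module and file names are illustrative, following the naming pattern above):

```
/azerothcore/data/sql/updates/db_world/MODULE_mod-npc-buffer_npc_buffer.sql
/azerothcore/data/sql/updates/db_characters/MODULE_mod-ollama-chat_characters.sql
/azerothcore/data/sql/updates/db_auth/MODULE_mod-ollama-chat_auth.sql
```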
### Restore-Time SQL Reconciliation
During a backup restore the `ac-db-import` service now runs `scripts/bash/restore-and-stage.sh`, which simply writes the `storage/modules/.modules-meta/.restore-prestaged` marker. On the next `./scripts/bash/stage-modules.sh --yes`, the script sees the flag, clears any previously staged files, and recopies every enabled SQL file before worldserver boots. Because the files are always present, AzerothCore’s updater has the complete history it needs to apply or skip scripts correctly, with no hash or ledger bookkeeping required.
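
To see what the updater has already recorded for modules, you can query the `updates` table directly (columns per the stock AzerothCore schema):

```bash
docker exec -it ac-mysql mysql -uroot -p \
  -e "SELECT name, state, timestamp FROM acore_world.updates WHERE name LIKE 'MODULE_%';"
```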
This marker-driven workflow means restoring a backup automatically replays any newly added module SQL, while the `updates` tables prevent duplicate inserts for modules that were already present. See **[docs/ADVANCED.md](ADVANCED.md)** for a deeper look at the marker workflow and container responsibilities.

If you intentionally need to reapply all module SQL (for example after manually cleaning tables):
1. Stop services: `docker compose down`
2. (Optional) Drop the relevant records from the `updates` table if you want AzerothCore to rerun them, e.g.:
   ```bash
   docker exec -it ac-mysql mysql -uroot -p \
     -e "DELETE FROM acore_characters.updates WHERE name LIKE '%MODULE_mod-ollama-chat%';"
   ```
3. Run `./scripts/bash/stage-modules.sh --yes`

Only perform step 2 if you understand the impact: deleting entries causes worldserver to execute those SQL scripts again on next startup.

**Database connection issues**

```bash
docker exec ac-mysql mysql -u root -p -e "SELECT 1;"
# Forcing a fresh import (if schema missing/invalid)
# 1. Stop the stack
docker compose down
# 2. Remove the sentinel created after a successful restore (inside the docker volume)
docker run --rm -v mysql-data:/var/lib/mysql-persistent alpine sh -c 'rm -f /var/lib/mysql-persistent/.restore-completed'
# 3. Re-run the import pipeline (either stand-alone or via stage-modules)
docker compose run --rm ac-db-import
# or
./scripts/bash/stage-modules.sh
#
# See docs/ADVANCED.md#database-hardening for details on the sentinel workflow and why it's required.
```
**Permission denied writing to local-storage or storage**
```bash
# Reset ownership/permissions on the shared directories
./scripts/bash/repair-storage-permissions.sh
```
> This script reuses the same helper container as the staging workflow to `chown`
> `storage/`, `local-storage/`, and module metadata paths back to the current
> host UID/GID so tools like `scripts/python/modules.py` can regenerate
> `modules.env` without manual intervention.
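
If you ever need the manual equivalent, the repair boils down to a recursive `chown` from a helper container; roughly (mount paths assumed to match the defaults):

```bash
docker run --rm \
  -v "$PWD/storage:/storage" \
  -v "$PWD/local-storage:/local-storage" \
  alpine chown -R "$(id -u):$(id -g)" /storage /local-storage
```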

```bash
# Check database initialization
docker logs ac-db-init
docker logs ac-db-import
docker logs ac-worldserver

# 2. Remove the staged SQL file that keeps replaying:
docker exec ac-worldserver rm /azerothcore/data/sql/updates/<db>/<filename>.sql

# 3. Re-run the staging workflow
./scripts/bash/stage-modules.sh --yes

# 4. Restart the worldserver container
docker compose restart ac-worldserver-playerbots # or the profile you use

# See docs/DATABASE_MANAGEMENT.md#module-sql-management for details on the workflow.
```
**Legacy backup missing module SQL snapshot**

Legacy backups behave the same as new ones now—just rerun `./scripts/bash/stage-modules.sh --yes` after a restore and the updater will apply whatever the database still needs.

The corresponding compose wiring for the `mysql-data` volume (excerpted; service keys are assumptions based on the container names):

```yaml
services:
  ac-mysql:                  # key assumed from container_name
    image: ${MYSQL_IMAGE}
    container_name: ac-mysql
    volumes:
      - mysql-data:/var/lib/mysql-persistent
      - ${HOST_ZONEINFO_PATH}:/usr/share/zoneinfo:ro
    command:
      - mysqld
  # ...
  ac-db-import:              # key assumed; the excerpt shows only its volumes
    volumes:
      - ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
      - ${STORAGE_PATH}/logs:/azerothcore/logs
      - mysql-data:/var/lib/mysql-persistent
```
> **Tip:** Need custom bind mounts for DBC overrides like in the upstream doc? Add them to `${STORAGE_PATH}/client-data` or mount extra read-only paths under the `ac-worldserver-*` service. RealmMaster already downloads `data.zip` via `ac-client-data-*` containers, so you can drop additional files beside the cached dataset.
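
For instance, a hypothetical read-only DBC override could be wired in like so (both paths are placeholders, not defaults shipped by RealmMaster):

```yaml
  ac-worldserver-playerbots:
    volumes:
      - ${STORAGE_PATH}/client-data/dbc-overrides:/azerothcore/data/dbc:ro  # placeholder paths
```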
For a full architecture diagram, cross-reference [README → Architecture Overview](../README.md#architecture-overview).
### Storage / Bind Mount Map
| Host Path | Mounted In | Purpose / Notes |
|-----------|------------|-----------------|
| `${STORAGE_PATH}/config` | `ac-authserver-*`, `ac-worldserver-*`, `ac-db-import`, `ac-db-guard`, `ac-post-install` | Holds `authserver.conf`, `worldserver.conf`, `dbimport.conf`, and module configs. Generated from the `.dist` templates during `setup.sh` / `auto-post-install.sh`. |
| `${STORAGE_PATH}/logs` | `ac-worldserver-*`, `ac-authserver-*`, `ac-db-import`, `ac-db-guard` | Persistent server logs (mirrors upstream `logs/` bind mount). |
| `${STORAGE_PATH}/modules` | `ac-worldserver-*`, `ac-db-import`, `ac-db-guard`, `ac-modules` | Cloned module repositories live here. `ac-modules` / `stage-modules.sh` sync this tree. |
| `${STORAGE_PATH}/lua_scripts` | `ac-worldserver-*` | Custom Lua scripts (same structure as upstream `lua_scripts`). |
| `${STORAGE_PATH}/backups` | `ac-db-import`, `ac-backup`, `ac-mysql` (via `mysql-data` volume) | Automatic hourly/daily SQL dumps. `ac-db-import` restores from here on cold start. |
| `${STORAGE_PATH}/client-data` | `ac-client-data-*`, `ac-worldserver-*`, `ac-authserver-*` | Cached `Data.zip` plus optional DBC/maps/vmaps overrides. Equivalent to mounting `data` in the original instructions. |
| `${STORAGE_PATH}/module-sql-updates` *(host literal path only used when you override the default)* | *(legacy, see below)* | Prior to this update, this path stayed under `storage/`. It now defaults to `${STORAGE_PATH_LOCAL}/module-sql-updates` so it can sit on a writable share even if `storage/` is NFS read-only. |
| `${STORAGE_PATH_LOCAL}/module-sql-updates` | `ac-db-import`, `ac-db-guard` (mounted as `/modules-sql`) | **New:** `stage-modules.sh` copies every staged `MODULE_*.sql` into this directory. The guard and importer copy from `/modules-sql` into `/azerothcore/data/sql/updates/*` before running `dbimport`, so historical module SQL is preserved across container rebuilds. |
| `${STORAGE_PATH_LOCAL}/client-data-cache` | `ac-client-data-*` | Download cache for `Data.zip`. Keeps the upstream client-data instructions intact. |
| `${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql` | `ac-db-import`, `ac-db-guard` | Mounted read-only so dbimport always sees the checked-out SQL (matches the upstream “mount the source tree” advice). |
| `mysql-data` (named volume) | `ac-mysql`, `ac-db-import`, `ac-db-init`, `ac-backup` | Stores the persistent InnoDB files. Runtime tmpfs lives inside the container, just like the original guide’s “tmpfs + bind mount” pattern. |

> Hosting storage over NFS/SMB? Point `STORAGE_PATH` at your read-only export and keep `STORAGE_PATH_LOCAL` on a writable tier for caches (`client-data-cache`, `module-sql-updates`, etc.). `stage-modules.sh` and `repair-storage-permissions.sh` respect those split paths.
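
A typical split might look like this in `.env` (paths illustrative):

```bash
STORAGE_PATH=/mnt/nfs/realmmaster/storage     # shared tier, may be read-only
STORAGE_PATH_LOCAL=/srv/fast/local-storage    # writable local tier
```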
## Familiar Workflow Using RealmMaster Commands