feat: comprehensive module system and database management improvements

This commit introduces major enhancements to the module installation system,
database management, and configuration handling for AzerothCore deployments.

## Module System Improvements

### Module SQL Staging & Installation
- Refactor module SQL staging to properly handle AzerothCore's sql/ directory structure
- Fix SQL staging path to use correct AzerothCore format (sql/custom/db_*/*)
- Implement conditional module database importing based on enabled modules
- Add support for both cpp-modules and lua-scripts module types
- Handle rsync exit code 23 (permission warnings) gracefully during deployment
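
A rough sketch of the exit-code handling above, with hypothetical paths: rsync's code 23 means a partial transfer (typically permission warnings), which the deploy step can treat as a warning rather than a failure.

```bash
#!/usr/bin/env bash
# Sketch: treat rsync exit code 23 (partial transfer, usually permission
# warnings on read-protected files) as non-fatal. SRC/DEST are placeholder
# paths for illustration only.
set -uo pipefail

SRC="${1:-./modules/}"
DEST="${2:-/azerothcore/modules/}"

rsync -a --delete "$SRC" "$DEST"
rc=$?

if [[ $rc -eq 0 ]]; then
  echo "rsync completed cleanly"
elif [[ $rc -eq 23 ]]; then
  echo "WARNING: rsync reported a partial transfer (exit 23); continuing" >&2
else
  echo "ERROR: rsync failed with exit code $rc" >&2
  exit "$rc"
fi
```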

### Module Manifest & Automation
- Add automated module manifest generation via GitHub Actions workflow
- Implement Python-based module manifest updater with comprehensive validation
- Add module dependency tracking and SQL file discovery
- Support for blocked modules and module metadata management
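
A minimal sketch of the blocked-module check and SQL discovery; the manifest path and field names (`name`, `enabled`, `blocked`) are assumptions and may differ from the real manifest schema.

```bash
#!/usr/bin/env bash
# Sketch: validate a module manifest and list each enabled module's SQL files.
# Manifest path, field names, and the data/sql layout are hypothetical.
set -euo pipefail

MANIFEST="${MANIFEST:-modules/manifest.json}"
MODULES_DIR="${MODULES_DIR:-modules}"

# Refuse to proceed if any module is both enabled and blocked.
jq -e '[.modules[] | select(.enabled and .blocked)] | length == 0' "$MANIFEST" >/dev/null \
  || { echo "ERROR: manifest enables a blocked module" >&2; exit 1; }

# Discover SQL files shipped by each enabled module.
jq -r '.modules[] | select(.enabled) | .name' "$MANIFEST" | while read -r mod; do
  find "$MODULES_DIR/$mod" -path '*/sql/*' -name '*.sql' 2>/dev/null | sed "s|^|[$mod] |" || true
done
```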

## Database Management Enhancements

### Database Import System
- Add db-guard container for continuous database health monitoring and verification
- Implement conditional database import that skips when databases are current
- Add backup restoration and SQL staging coordination
- Support for Playerbots database (4th database) in all import operations
- Add comprehensive database health checking and status reporting
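
A minimal sketch of the health/skip check, assuming hypothetical database names and credentials; the real db-guard and conditional-import logic may check considerably more.

```bash
#!/usr/bin/env bash
# Sketch: decide whether an import is needed by verifying that each core
# database exists and is non-empty. Names/credentials are placeholder defaults.
set -euo pipefail

MYSQL_HOST="${MYSQL_HOST:-ac-database}"
MYSQL_USER="${MYSQL_USER:-root}"
MYSQL_PASS="${MYSQL_ROOT_PASSWORD:-password}"

needs_import=0
for db in acore_auth acore_characters acore_world acore_playerbots; do
  tables=$(mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASS" -N -B \
    -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='$db';" 2>/dev/null || echo 0)
  if [[ "${tables:-0}" -eq 0 ]]; then
    echo "Database $db is missing or empty"
    needs_import=1
  else
    echo "Database $db looks current ($tables tables)"
  fi
done

exit "$needs_import"   # non-zero tells the caller an import is required
```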

### Database Configuration
- Implement 10 new dbimport.conf settings from environment variables:
  - Database.Reconnect.Seconds/Attempts for connection reliability
  - Updates.AllowedModules for module auto-update control
  - Updates.Redundancy for data integrity checks
  - Worker/Synch thread settings for all three core databases
- Auto-apply dbimport.conf settings via auto-post-install.sh
- Add environment variable injection for db-import and db-guard containers
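
A sketch of how env vars might be pushed into dbimport.conf; the env var names, the conf path, and the worker-thread key are illustrative assumptions, while the Reconnect/Updates keys come from the list above.

```bash
#!/usr/bin/env bash
# Sketch: apply selected environment variables to dbimport.conf before the
# importer runs. Variable names and the conf path are assumptions.
set -euo pipefail

CONF="${DBIMPORT_CONF:-/azerothcore/env/dist/etc/dbimport.conf}"

# Map hypothetical env vars to dbimport.conf keys.
declare -A SETTINGS=(
  [DB_RECONNECT_SECONDS]="Database.Reconnect.Seconds"
  [DB_RECONNECT_ATTEMPTS]="Database.Reconnect.Attempts"
  [DB_UPDATES_ALLOWED_MODULES]="Updates.AllowedModules"
  [DB_UPDATES_REDUNDANCY]="Updates.Redundancy"
  [DB_LOGIN_WORKER_THREADS]="LoginDatabase.WorkerThreads"   # key name assumed
)

for var in "${!SETTINGS[@]}"; do
  value="${!var:-}"                                          # indirect expansion
  if [[ -n "$value" ]]; then
    key="${SETTINGS[$var]}"
    if grep -q "^${key} *=" "$CONF"; then
      sed -i "s|^${key} *=.*|${key} = ${value}|" "$CONF"     # replace existing line
    else
      printf '%s = %s\n' "$key" "$value" >> "$CONF"          # append if missing
    fi
  fi
done
```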

### Backup & Recovery
- Fix backup scheduler to prevent immediate execution on container startup
- Add backup status monitoring script with detailed reporting
- Implement backup import/export utilities
- Add database verification scripts for SQL update tracking
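
A rough sketch of update-tracking verification against the `updates` table that AzerothCore's auto-updater maintains; host, credentials, and the staging path are placeholder assumptions.

```bash
#!/usr/bin/env bash
# Sketch: report which staged SQL files are not yet recorded in each database's
# `updates` table. Credentials and the staging root are hypothetical defaults.
set -euo pipefail

MYSQL_HOST="${MYSQL_HOST:-ac-database}"
MYSQL_USER="${MYSQL_USER:-root}"
MYSQL_PASS="${MYSQL_ROOT_PASSWORD:-password}"
STAGING_ROOT="${STAGING_ROOT:-/azerothcore/data/sql/updates}"

declare -A DBS=( [db_world]=acore_world [db_characters]=acore_characters [db_auth]=acore_auth )

for dir in "${!DBS[@]}"; do
  db="${DBS[$dir]}"
  echo "== $db =="
  for sql in "$STAGING_ROOT/$dir"/*.sql; do
    [[ -e "$sql" ]] || continue                              # nothing staged for this DB
    name=$(basename "$sql")
    applied=$(mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASS" -N -B "$db" \
      -e "SELECT COUNT(*) FROM updates WHERE name='$name';")
    [[ "$applied" -eq 0 ]] && echo "  PENDING: $name" || echo "  applied: $name"
  done
done
```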

## User Import Directory

- Add new import/ directory for user-provided database files and configurations
- Support for custom SQL files, configuration overrides, and example templates
- Automatic import of user-provided databases and configs during initialization
- Documentation and examples for custom database imports
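
The automatic-import step described above might look roughly like this sketch; the per-database subdirectory layout under import/ and the credentials are assumptions, not the shipped layout.

```bash
#!/usr/bin/env bash
# Sketch: apply user-provided SQL from import/ during initialization.
# Subdirectory names and credentials are hypothetical.
set -euo pipefail

IMPORT_DIR="${IMPORT_DIR:-./import}"
MYSQL_HOST="${MYSQL_HOST:-ac-database}"
MYSQL_USER="${MYSQL_USER:-root}"
MYSQL_PASS="${MYSQL_ROOT_PASSWORD:-password}"

declare -A TARGETS=( [db_auth]=acore_auth [db_characters]=acore_characters [db_world]=acore_world )

for sub in "${!TARGETS[@]}"; do
  db="${TARGETS[$sub]}"
  for sql in "$IMPORT_DIR/$sub"/*.sql; do
    [[ -e "$sql" ]] || continue            # no user-provided SQL for this database
    echo "Importing $(basename "$sql") into $db"
    mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASS" "$db" < "$sql"
  done
done
```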

## Configuration & Environment

- Eliminate CLIENT_DATA_VERSION warning by adding default value syntax
- Improve CLIENT_DATA_VERSION documentation in .env.template
- Add comprehensive database import settings to .env and .env.template
- Update setup.sh to handle new configuration variables with proper defaults
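
The default-value handling referenced above is the standard `${VAR:-default}` shell/Compose interpolation; a minimal sketch follows (the `v16` fallback is a made-up placeholder, not the real default).

```bash
# Sketch of default-value syntax used to avoid "variable is not set" warnings.
# Fall back to a default when the variable is unset or empty:
CLIENT_DATA_VERSION="${CLIENT_DATA_VERSION:-v16}"   # "v16" is a hypothetical value
echo "Using client data version: ${CLIENT_DATA_VERSION}"
```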

## Monitoring & Debugging

- Add status dashboard with Go-based terminal UI (statusdash.go)
- Implement JSON status output (statusjson.sh) for programmatic access
- Add comprehensive database health check script
- Add repair-storage-permissions.sh utility for permission issues
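
A minimal sketch of what a permission-repair pass can look like; the storage path and the container UID/GID (1000:1000) are assumptions and should match the images actually in use.

```bash
#!/usr/bin/env bash
# Sketch: re-own and re-permission the shared storage tree so containers can
# read/write it. Path and UID/GID are placeholder assumptions.
set -euo pipefail

STORAGE_PATH="${STORAGE_PATH:-./storage}"
CONTAINER_UID="${CONTAINER_UID:-1000}"
CONTAINER_GID="${CONTAINER_GID:-1000}"

echo "Re-owning ${STORAGE_PATH} to ${CONTAINER_UID}:${CONTAINER_GID} (requires sudo)"
sudo chown -R "${CONTAINER_UID}:${CONTAINER_GID}" "$STORAGE_PATH"

# Directories need the execute bit so containers can traverse them.
sudo find "$STORAGE_PATH" -type d -exec chmod u+rwx,g+rx {} +
sudo find "$STORAGE_PATH" -type f -exec chmod u+rw,g+r {} +
```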

## Testing & Documentation

- Add Phase 1 integration test suite for module installation verification
- Add comprehensive documentation for:
  - Database management (DATABASE_MANAGEMENT.md)
  - Module SQL analysis (AZEROTHCORE_MODULE_SQL_ANALYSIS.md)
  - Implementation mapping (IMPLEMENTATION_MAP.md)
  - SQL staging comparison and path coverage
  - Module assets and DBC file requirements
- Update SCRIPTS.md, ADVANCED.md, and troubleshooting documentation
- Update references from database-import/ to import/ directory

## Breaking Changes

- Renamed database-import/ directory to import/ for clarity
- Module SQL files now staged to AzerothCore-compatible paths
- db-guard container now required for proper database lifecycle management

## Bug Fixes

- Fix module SQL staging directory structure for AzerothCore compatibility
- Handle rsync exit code 23 gracefully during deployments
- Prevent backup from running immediately on container startup
- Correct SQL staging paths for proper module installation

Author: uprightbass360
Date: 2025-11-20 18:26:00 -05:00
Committed by: Deckard
Parent: 0d83f01995
Commit: e6231bb4a4
56 changed files with 11298 additions and 487 deletions


@@ -122,6 +122,11 @@ flowchart TB
- **Worldserver debug logging** - Need extra verbosity temporarily? Flip `COMPOSE_OVERRIDE_WORLDSERVER_DEBUG_LOGGING_ENABLED=1` to include `compose-overrides/worldserver-debug-logging.yml`, which bumps `AC_LOG_LEVEL` across all worldserver profiles. Turn it back off once you're done to avoid noisy logs.
- **Binary logging toggle** - `MYSQL_DISABLE_BINLOG=1` appends `--skip-log-bin` via the MySQL wrapper entrypoint to keep disk churn low (and match Playerbot guidance). Flip the flag to `0` to re-enable binlogs for debugging or replication.
- **Drop-in configs** - Any `.cnf` placed in `${STORAGE_PATH}/config/mysql/conf.d` (exposed via `MYSQL_CONFIG_DIR`) is mounted into `/etc/mysql/conf.d`. Use this to add custom tunables or temporarily override the binlog setting without touching the image.
- **Forcing a fresh database import** - MySQL's persistent files (and the `.restore-*` sentinels) now live inside the Docker volume `mysql-data` at `/var/lib/mysql-persistent`. The import workflow still double-checks the live runtime before trusting those markers, logging `Restoration marker found, but databases are empty - forcing re-import` if the tmpfs is empty. When you intentionally need to rerun the import, delete the sentinel with `docker run --rm -v mysql-data:/var/lib/mysql-persistent alpine sh -c 'rm -f /var/lib/mysql-persistent/.restore-completed'` and then execute `docker compose run --rm ac-db-import` or `./scripts/bash/stage-modules.sh`. Leave the sentinel alone during normal operations so the import job doesn't wipe existing data on every start.
- **Module-driven SQL migration** - Module code is staged through the `ac-modules` service and `scripts/bash/manage-modules.sh`, while SQL payloads are copied into the running `ac-worldserver` container by `scripts/bash/stage-modules.sh`. Every run clears `/azerothcore/data/sql/updates/{db_world,db_characters,db_auth}` and recopies all enabled module SQL files with deterministic names, letting AzerothCore's built-in updater decide what to apply. Always trigger module/deploy workflows via these scripts rather than copying repositories manually; this keeps C++ builds, Lua assets, and SQL migrations synchronized with the database state.
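
A rough sketch of that clear-and-recopy staging step. The module layout under `storage/modules`, the enabled-modules list file, and the `MODULE_<name>_` prefix are illustrative assumptions; the target update directories come from the text above.

```bash
#!/usr/bin/env bash
# Sketch: clear staged module SQL in the worldserver container and recopy
# every enabled module's SQL with deterministic names.
set -euo pipefail

MODULES_DIR="${MODULES_DIR:-storage/modules}"
WORLDSERVER="${WORLDSERVER:-ac-worldserver}"
UPDATE_ROOT="/azerothcore/data/sql/updates"
ENABLED_LIST="${ENABLED_LIST:-$MODULES_DIR/.modules-meta/enabled-modules.txt}"   # hypothetical list file

# Start from a clean slate so disabled modules leave no stale SQL behind.
for db in db_world db_characters db_auth; do
  docker exec "$WORLDSERVER" sh -c "mkdir -p $UPDATE_ROOT/$db && rm -f $UPDATE_ROOT/$db/*.sql"
done

# Recopy enabled module SQL so the auto-updater sees a stable file set each run.
while read -r module; do
  [[ -z "$module" ]] && continue
  for db in db_world db_characters db_auth; do
    for sql in "$MODULES_DIR/$module/data/sql/$db"/*.sql; do
      [[ -e "$sql" ]] || continue
      docker cp "$sql" "$WORLDSERVER:$UPDATE_ROOT/$db/MODULE_${module}_$(basename "$sql")"
    done
  done
done < "$ENABLED_LIST"
```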
### Restore-aware module SQL
When a backup successfully restores, the `ac-db-import` container automatically executes `scripts/bash/restore-and-stage.sh`, which simply drops `storage/modules/.modules-meta/.restore-prestaged`. The next `./scripts/bash/stage-modules.sh --yes` clears any previously staged files and recopies every enabled module SQL file before the worldserver boots. AzerothCore's auto-updater then scans `/azerothcore/data/sql/updates/*`, applies any scripts that aren't recorded in the `updates` tables yet, and skips the rest, without ever complaining about missing history files.
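
A tiny sketch of that sentinel handshake; the sentinel path comes from the text, while the `mark`/`stage` subcommands are purely illustrative.

```bash
#!/usr/bin/env bash
# Sketch of the restore sentinel handshake between restore-and-stage.sh and
# stage-modules.sh. Subcommand names are illustrative only.
set -euo pipefail

META_DIR="storage/modules/.modules-meta"
SENTINEL="$META_DIR/.restore-prestaged"

case "${1:-}" in
  mark)
    # What restore-and-stage.sh effectively does after a successful restore.
    mkdir -p "$META_DIR"
    touch "$SENTINEL"
    ;;
  stage)
    # What the next stage-modules.sh run checks before restaging module SQL.
    if [[ -f "$SENTINEL" ]]; then
      echo "Restore detected - clearing staged SQL and recopying module files"
      rm -f "$SENTINEL"
    fi
    ;;
  *)
    echo "usage: $0 {mark|stage}" >&2
    exit 1
    ;;
esac
```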
## Compose Overrides
@@ -163,15 +168,16 @@ To tweak MySQL settings, place `.cnf` snippets in `storage/config/mysql/conf.d`.
**Local Storage** (`STORAGE_PATH_LOCAL` - default: `./local-storage`)
```
local-storage/
├── mysql-data/ # MySQL persistent data (tmpfs runtime + persistent snapshot)
├── client-data-cache/ # Downloaded WoW client data archives
├── source/ # AzerothCore source repository (created during builds)
│ └── azerothcore-playerbots/ # Playerbot fork (when playerbots enabled)
└── images/ # Exported Docker images for remote deployment
```
Local storage now only hosts build artifacts, cached downloads, and helper images; the database files have moved into a dedicated Docker volume.
**Docker Volumes**
- `client-data-cache` - Temporary storage for client data downloads
- `mysql-data` - MySQL persistent data + `.restore-*` sentinels (`/var/lib/mysql-persistent`)
This separation ensures database and build artifacts stay on fast local storage while configuration, modules, and backups can be shared across hosts via NFS.