2 Commits

Author SHA1 Message Date
uprightbass360
71c1be1b46 cleanup: validation and integrations for importing data 2025-11-22 16:49:01 -05:00
uprightbass360
5c9f1d7389 feat: comprehensive module system and database management improvements
This commit introduces major enhancements to the module installation system,
database management, and configuration handling for AzerothCore deployments.

## Module System Improvements

### Module SQL Staging & Installation
- Refactor module SQL staging to properly handle AzerothCore's sql/ directory structure
- Fix SQL staging path to use correct AzerothCore format (sql/custom/db_*/*)
- Implement conditional module database importing based on enabled modules
- Add support for both cpp-modules and lua-scripts module types
- Handle rsync exit code 23 (permission warnings) gracefully during deployment
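As an illustration of the staging and rsync handling above, a minimal sketch (the helper name and example paths are illustrative, not the exact ones used by the deployment scripts):

```bash
# Stage module SQL into the AzerothCore sql/custom layout, tolerating
# rsync exit code 23 (partial transfer caused by permission warnings).
# Function name and paths below are assumptions for illustration only.
stage_module_sql() {
  local src="$1"    # e.g. local-storage/module-sql-updates/db_world
  local dest="$2"   # e.g. <staging root>/sql/custom/db_world
  local rc=0
  mkdir -p "$dest"
  rsync -a "$src"/ "$dest"/ || rc=$?
  if [ "$rc" -eq 23 ]; then
    echo "⚠️  rsync reported permission warnings (exit 23); continuing" >&2
    rc=0
  fi
  return "$rc"
}
```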

### Module Manifest & Automation
- Add automated module manifest generation via GitHub Actions workflow
- Implement Python-based module manifest updater with comprehensive validation
- Add module dependency tracking and SQL file discovery
- Support for blocked modules and module metadata management

## Database Management Enhancements

### Database Import System
- Add db-guard container for continuous database health monitoring and verification
- Implement conditional database import that skips when databases are already current (see the check sketched below)
- Add backup restoration and SQL staging coordination
- Support for Playerbots database (4th database) in all import operations
- Add comprehensive database health checking and status reporting
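The conditional import boils down to asking MySQL whether each schema already has tables and an `updates` tracking table. The sketch below mirrors the helper visible in the conditional-import script (connection variables shown with their documented defaults):

```bash
# Return 0 (import needed) when the schema is empty or lacks the
# AzerothCore `updates` tracking table; otherwise the import is skipped.
needs_import() {
  local db="$1"
  local tables updates
  tables=$(mysql -h "${CONTAINER_MYSQL:-ac-mysql}" -u"${MYSQL_USER:-root}" -p"${MYSQL_ROOT_PASSWORD:-root}" -N -B \
    -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${db}';" 2>/dev/null || echo 0)
  [ "${tables:-0}" -eq 0 ] && return 0
  updates=$(mysql -h "${CONTAINER_MYSQL:-ac-mysql}" -u"${MYSQL_USER:-root}" -p"${MYSQL_ROOT_PASSWORD:-root}" -N -B \
    -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${db}' AND table_name='updates';" 2>/dev/null || echo 0)
  [ "${updates:-0}" -eq 0 ]
}

if needs_import "${DB_WORLD_NAME:-acore_world}"; then
  echo "World database is empty or untracked - running full import"
else
  echo "World database is current - skipping import"
fi
```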

### Database Configuration
- Implement 10 new dbimport.conf settings from environment variables:
  - Database.Reconnect.Seconds/Attempts for connection reliability
  - Updates.AllowedModules for module auto-update control
  - Updates.Redundancy for data integrity checks
  - Worker/Synch thread settings for all three core databases
- Auto-apply dbimport.conf settings via auto-post-install.sh
- Add environment variable injection for db-import and db-guard containers
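Applying these settings amounts to an idempotent upsert of `key = value` pairs in dbimport.conf. A trimmed sketch of the idea, modeled on the `set_conf` helper used by the stack's seeding script (only a selection of the settings listed above is shown):

```bash
# Upsert key = value pairs in dbimport.conf from environment variables.
set_conf() {
  local key="$1" value="$2" file="$3"
  if grep -qE "^[[:space:]]*${key}[[:space:]]*=" "$file"; then
    sed -i "s|^[[:space:]]*${key}[[:space:]]*=.*|${key} = ${value}|" "$file"
  else
    printf '%s = %s\n' "$key" "$value" >> "$file"
  fi
}

CONF="/azerothcore/env/dist/etc/dbimport.conf"
set_conf "Database.Reconnect.Seconds"  "${DB_RECONNECT_SECONDS:-5}"            "$CONF"
set_conf "Database.Reconnect.Attempts" "${DB_RECONNECT_ATTEMPTS:-5}"           "$CONF"
set_conf "Updates.AllowedModules"      "\"${DB_UPDATES_ALLOWED_MODULES:-all}\"" "$CONF"
set_conf "Updates.Redundancy"          "${DB_UPDATES_REDUNDANCY:-1}"           "$CONF"
set_conf "LoginDatabase.WorkerThreads" "${DB_LOGIN_WORKER_THREADS:-1}"         "$CONF"
set_conf "WorldDatabase.SynchThreads"  "${DB_WORLD_SYNCH_THREADS:-1}"          "$CONF"
```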

### Backup & Recovery
- Fix the backup scheduler to prevent immediate execution on container startup (see the sketch below)
- Add backup status monitoring script with detailed reporting
- Implement backup import/export utilities
- Add database verification scripts for SQL update tracking
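One way to prevent that immediate first run is to sleep until the configured daily window before entering the backup loop. A rough sketch, assuming GNU date and the `BACKUP_DAILY_TIME` hour format from .env; this is not the scheduler's exact implementation:

```bash
# Delay the first backup until the next BACKUP_DAILY_TIME hour instead of
# running the moment the container starts (sketch; assumes GNU date).
wait_for_backup_window() {
  local hour="${BACKUP_DAILY_TIME:-09}"
  local now target
  now=$(date +%s)
  target=$(date -d "today ${hour}:00" +%s)
  if [ "$target" -le "$now" ]; then
    target=$(date -d "tomorrow ${hour}:00" +%s)
  fi
  echo "Next backup window: $(date -d "@${target}")"
  sleep $(( target - now ))
}
```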

## User Import Directory

- Add new import/ directory for user-provided database files and configurations
- Support for custom SQL files, configuration overrides, and example templates
- Automatic import of user-provided databases and configs during initialization
- Documentation and examples for custom database imports
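Typical usage, following the pattern documented in the README for the legacy database-import/ location (filenames are examples):

```bash
# Drop user-provided dumps (and optional config overrides) into import/,
# then redeploy; they are picked up automatically during initialization.
cp my_auth_backup.sql.gz ./import/db/
cp my_world_backup.sql.gz ./import/db/
cp my_characters_backup.sql.gz ./import/db/
./deploy.sh
```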

## Configuration & Environment

- Eliminate the CLIENT_DATA_VERSION warning by adding default-value syntax (example below)
- Improve CLIENT_DATA_VERSION documentation in .env.template
- Add comprehensive database import settings to .env and .env.template
- Update setup.sh to handle new configuration variables with proper defaults
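The warning goes away because interpolation now falls back to a default when the variable is unset, i.e. the standard `${VAR:-default}` form (the placeholder default here is illustrative):

```bash
# Without a fallback, compose warns when CLIENT_DATA_VERSION is unset;
# with the default-value syntax it silently uses the default instead.
echo "Client data version: ${CLIENT_DATA_VERSION:-latest}"
```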

## Monitoring & Debugging

- Add status dashboard with Go-based terminal UI (statusdash.go)
- Implement JSON status output (statusjson.sh) for programmatic access; see the usage example below
- Add comprehensive database health check script
- Add repair-storage-permissions.sh utility for permission issues
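A hedged usage example for the JSON output (the script path and field names are assumptions based on the repository layout and the Go snapshot struct; `jq` is only used here for illustration):

```bash
# Capture a snapshot and inspect service state programmatically.
./scripts/bash/statusjson.sh > /tmp/acore-status.json
jq '.services[] | {name, status, health}' /tmp/acore-status.json
```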

## Testing & Documentation

- Add Phase 1 integration test suite for module installation verification
- Add comprehensive documentation for:
  - Database management (DATABASE_MANAGEMENT.md)
  - Module SQL analysis (AZEROTHCORE_MODULE_SQL_ANALYSIS.md)
  - Implementation mapping (IMPLEMENTATION_MAP.md)
  - SQL staging comparison and path coverage
  - Module assets and DBC file requirements
- Update SCRIPTS.md, ADVANCED.md, and troubleshooting documentation
- Update references from database-import/ to import/ directory

## Breaking Changes

- Renamed the database-import/ directory to import/ for clarity (migration sketch below)
- Module SQL files now staged to AzerothCore-compatible paths
- db-guard container now required for proper database lifecycle management
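For existing checkouts, moving previously staged dumps across the rename is a small step (the old directory remains supported for backward compatibility per database-import/README.md). A sketch:

```bash
# Move any user-provided dumps from the legacy directory into the new layout.
mkdir -p import/db
mv database-import/*.sql import/db/ 2>/dev/null || true
mv database-import/*.sql.gz import/db/ 2>/dev/null || true
```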

## Bug Fixes

- Fix module SQL staging directory structure for AzerothCore compatibility
- Handle rsync exit code 23 gracefully during deployments
- Prevent backup from running immediately on container startup
- Correct SQL staging paths for proper module installation
2025-11-20 18:26:00 -05:00
19 changed files with 794 additions and 966 deletions


@@ -65,7 +65,7 @@ DB_GUARD_VERIFY_INTERVAL_SECONDS=86400
 # =====================
 # Module SQL staging
 # =====================
-STAGE_PATH_MODULE_SQL=${STORAGE_PATH_LOCAL}/module-sql-updates
+MODULE_SQL_STAGE_PATH=${STORAGE_PATH_LOCAL}/module-sql-updates
 # =====================
 # SQL Source Overlay
@@ -180,7 +180,6 @@ DB_CHARACTER_SYNCH_THREADS=1
 BACKUP_RETENTION_DAYS=3
 BACKUP_RETENTION_HOURS=6
 BACKUP_DAILY_TIME=09
-BACKUP_INTERVAL_MINUTES=60
 # Optional comma/space separated schemas to include in automated backups
 BACKUP_EXTRA_DATABASES=
 BACKUP_HEALTHCHECK_MAX_MINUTES=1440

.gitignore (vendored): 1 line changed

@@ -20,4 +20,3 @@ todo.md
 .gocache/
 .module-ledger/
 deploy.log
-statusdash


@@ -137,18 +137,11 @@ generate_module_state(){
   # Check if blocked modules were detected in warnings
   if echo "$validation_output" | grep -q "is blocked:"; then
-    # Gather blocked module keys for display
-    local blocked_modules
-    blocked_modules=$(echo "$validation_output" | grep -oE 'MODULE_[A-Za-z0-9_]+' | sort -u | tr '\n' ' ')
     # Blocked modules detected - show warning and ask for confirmation
     echo
     warn "════════════════════════════════════════════════════════════════"
     warn "⚠️ BLOCKED MODULES DETECTED ⚠️"
     warn "════════════════════════════════════════════════════════════════"
-    if [ -n "$blocked_modules" ]; then
-      warn "Affected modules: ${blocked_modules}"
-    fi
     warn "Some enabled modules are marked as blocked due to compatibility"
     warn "issues. These modules will be SKIPPED during the build process."
     warn ""

(File diff suppressed because it is too large.)

database-import/README.md (new file, 47 lines added)

@@ -0,0 +1,47 @@
# Database Import
> **📌 Note:** This directory is maintained for backward compatibility.
> **New location:** `import/db/` - See [import/README.md](../import/README.md) for the new unified import system.
Place your database backup files here for automatic import during deployment.
## Supported Imports
- `.sql` files (uncompressed SQL dumps)
- `.sql.gz` files (gzip compressed SQL dumps)
- **Full backup directories** (e.g., `ExportBackup_YYYYMMDD_HHMMSS/` containing multiple dumps)
- **Full backup archives** (`.tar`, `.tar.gz`, `.tgz`, `.zip`) that contain the files above
## How to Use
1. **Copy your backup files here:**
```bash
cp my_auth_backup.sql.gz ./database-import/
cp my_world_backup.sql.gz ./database-import/
cp my_characters_backup.sql.gz ./database-import/
# or drop an entire ExportBackup folder / archive
cp -r ExportBackup_20241029_120000 ./database-import/
cp ExportBackup_20241029_120000.tar.gz ./database-import/
```
2. **Run deployment:**
```bash
./deploy.sh
```
3. **Files are automatically copied to backup system** and imported during deployment
## File Naming
- Any filename works - the system will auto-detect database type by content
- Recommended naming: `auth.sql.gz`, `world.sql.gz`, `characters.sql.gz`
- Full backups keep their original directory/archive name so you can track multiple copies
## What Happens
- Individual `.sql`/`.sql.gz` files are copied to `storage/backups/daily/` with a timestamped name
- Full backup directories or archives are staged directly under `storage/backups/` (e.g., `storage/backups/ExportBackup_20241029_120000/`)
- Database import system automatically restores the most recent matching backup
- Original files remain here for reference (archives are left untouched)
## Notes
- Only processed on first deployment (when databases don't exist)
- Files/directories are copied once; existing restored databases will skip import
- Empty folder is ignored - no files, no import


@@ -35,9 +35,6 @@ REMOTE_COPY_SOURCE=0
REMOTE_ARGS_PROVIDED=0 REMOTE_ARGS_PROVIDED=0
REMOTE_AUTO_DEPLOY=0 REMOTE_AUTO_DEPLOY=0
REMOTE_AUTO_DEPLOY=0 REMOTE_AUTO_DEPLOY=0
REMOTE_STORAGE_OVERRIDE=""
REMOTE_CONTAINER_USER_OVERRIDE=""
REMOTE_ENV_FILE=""
MODULE_HELPER="$ROOT_DIR/scripts/python/modules.py" MODULE_HELPER="$ROOT_DIR/scripts/python/modules.py"
MODULE_STATE_INITIALIZED=0 MODULE_STATE_INITIALIZED=0
@@ -167,23 +164,6 @@ collect_remote_details(){
*) REMOTE_SKIP_STORAGE=0 ;; *) REMOTE_SKIP_STORAGE=0 ;;
esac esac
fi fi
# Optional remote env overrides (default to current values)
local storage_default container_user_default
storage_default="$(read_env STORAGE_PATH "./storage")"
container_user_default="$(read_env CONTAINER_USER "$(id -u):$(id -g)")"
if [ -z "$REMOTE_STORAGE_OVERRIDE" ] && [ "$interactive" -eq 1 ]; then
local storage_input
read -rp "Remote storage path (STORAGE_PATH) [${storage_default}]: " storage_input
REMOTE_STORAGE_OVERRIDE="${storage_input:-$storage_default}"
fi
if [ -z "$REMOTE_CONTAINER_USER_OVERRIDE" ] && [ "$interactive" -eq 1 ]; then
local cu_input
read -rp "Remote container user (CONTAINER_USER) [${container_user_default}]: " cu_input
REMOTE_CONTAINER_USER_OVERRIDE="${cu_input:-$container_user_default}"
fi
} }
validate_remote_configuration(){ validate_remote_configuration(){
@@ -240,8 +220,6 @@ Options:
--remote-skip-storage Skip syncing the storage directory during migration --remote-skip-storage Skip syncing the storage directory during migration
--remote-copy-source Copy the local project directory to remote instead of relying on git --remote-copy-source Copy the local project directory to remote instead of relying on git
--remote-auto-deploy Run './deploy.sh --yes --no-watch' on the remote host after migration --remote-auto-deploy Run './deploy.sh --yes --no-watch' on the remote host after migration
--remote-storage-path PATH Override STORAGE_PATH/STORAGE_PATH_LOCAL in the remote .env
--remote-container-user USER[:GROUP] Override CONTAINER_USER in the remote .env
--skip-config Skip applying server configuration preset --skip-config Skip applying server configuration preset
-h, --help Show this help -h, --help Show this help
@@ -270,8 +248,6 @@ while [[ $# -gt 0 ]]; do
--remote-skip-storage) REMOTE_SKIP_STORAGE=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;; --remote-skip-storage) REMOTE_SKIP_STORAGE=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-copy-source) REMOTE_COPY_SOURCE=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;; --remote-copy-source) REMOTE_COPY_SOURCE=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-auto-deploy) REMOTE_AUTO_DEPLOY=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;; --remote-auto-deploy) REMOTE_AUTO_DEPLOY=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-storage-path) REMOTE_STORAGE_OVERRIDE="$2"; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift 2;;
--remote-container-user) REMOTE_CONTAINER_USER_OVERRIDE="$2"; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift 2;;
--skip-config) SKIP_CONFIG=1; shift;; --skip-config) SKIP_CONFIG=1; shift;;
-h|--help) usage; exit 0;; -h|--help) usage; exit 0;;
*) err "Unknown option: $1"; usage; exit 1;; *) err "Unknown option: $1"; usage; exit 1;;
@@ -631,33 +607,6 @@ determine_profile(){
} }
run_remote_migration(){ run_remote_migration(){
if [ -z "$REMOTE_ENV_FILE" ] && { [ -n "$REMOTE_STORAGE_OVERRIDE" ] || [ -n "$REMOTE_CONTAINER_USER_OVERRIDE" ]; }; then
local base_env=""
if [ -f "$ENV_PATH" ]; then
base_env="$ENV_PATH"
elif [ -f "$TEMPLATE_PATH" ]; then
base_env="$TEMPLATE_PATH"
fi
REMOTE_ENV_FILE="$(mktemp)"
if [ -n "$base_env" ]; then
cp "$base_env" "$REMOTE_ENV_FILE"
else
: > "$REMOTE_ENV_FILE"
fi
if [ -n "$REMOTE_STORAGE_OVERRIDE" ]; then
{
echo
echo "STORAGE_PATH=$REMOTE_STORAGE_OVERRIDE"
} >>"$REMOTE_ENV_FILE"
fi
if [ -n "$REMOTE_CONTAINER_USER_OVERRIDE" ]; then
{
echo
echo "CONTAINER_USER=$REMOTE_CONTAINER_USER_OVERRIDE"
} >>"$REMOTE_ENV_FILE"
fi
fi
local args=(--host "$REMOTE_HOST" --user "$REMOTE_USER") local args=(--host "$REMOTE_HOST" --user "$REMOTE_USER")
if [ -n "$REMOTE_PORT" ] && [ "$REMOTE_PORT" != "22" ]; then if [ -n "$REMOTE_PORT" ] && [ "$REMOTE_PORT" != "22" ]; then
@@ -684,10 +633,6 @@ run_remote_migration(){
args+=(--yes) args+=(--yes)
fi fi
if [ -n "$REMOTE_ENV_FILE" ]; then
args+=(--env-file "$REMOTE_ENV_FILE")
fi
(cd "$ROOT_DIR" && ./scripts/bash/migrate-stack.sh "${args[@]}") (cd "$ROOT_DIR" && ./scripts/bash/migrate-stack.sh "${args[@]}")
} }


@@ -1,11 +1,4 @@
name: ${COMPOSE_PROJECT_NAME} name: ${COMPOSE_PROJECT_NAME}
x-logging: &logging-default
driver: json-file
options:
max-size: "10m"
max-file: "3"
services: services:
# ===================== # =====================
# Database Layer (db) # Database Layer (db)
@@ -47,7 +40,7 @@ services:
- --innodb-log-file-size=${MYSQL_INNODB_LOG_FILE_SIZE} - --innodb-log-file-size=${MYSQL_INNODB_LOG_FILE_SIZE}
- --innodb-redo-log-capacity=${MYSQL_INNODB_REDO_LOG_CAPACITY} - --innodb-redo-log-capacity=${MYSQL_INNODB_REDO_LOG_CAPACITY}
restart: unless-stopped restart: unless-stopped
logging: *logging-default logging:
healthcheck: healthcheck:
test: ["CMD", "sh", "-c", "mysqladmin ping -h localhost -u ${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} --silent || exit 1"] test: ["CMD", "sh", "-c", "mysqladmin ping -h localhost -u ${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} --silent || exit 1"]
interval: ${MYSQL_HEALTHCHECK_INTERVAL} interval: ${MYSQL_HEALTHCHECK_INTERVAL}
@@ -74,12 +67,11 @@ services:
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc - ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_PATH}/logs:/azerothcore/logs - ${STORAGE_PATH}/logs:/azerothcore/logs
- ${AC_SQL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro - ${AC_SQL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro
- ${STAGE_PATH_MODULE_SQL:-${STORAGE_PATH}/module-sql-updates}:/modules-sql - ${MODULE_SQL_STAGE_PATH:-${STORAGE_PATH}/module-sql-updates}:/modules-sql
- mysql-data:/var/lib/mysql-persistent - mysql-data:/var/lib/mysql-persistent
- ${STORAGE_PATH}/modules:/modules - ${STORAGE_PATH}/modules:/modules
- ${BACKUP_PATH}:/backups - ${BACKUP_PATH}:/backups
- ./scripts/bash/db-import-conditional.sh:/tmp/db-import-conditional.sh:ro - ./scripts/bash/db-import-conditional.sh:/tmp/db-import-conditional.sh:ro
- ./scripts/bash/seed-dbimport-conf.sh:/tmp/seed-dbimport-conf.sh:ro
- ./scripts/bash/restore-and-stage.sh:/tmp/restore-and-stage.sh:ro - ./scripts/bash/restore-and-stage.sh:/tmp/restore-and-stage.sh:ro
environment: environment:
AC_DATA_DIR: "/azerothcore/data" AC_DATA_DIR: "/azerothcore/data"
@@ -139,12 +131,11 @@ services:
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc - ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_PATH}/logs:/azerothcore/logs - ${STORAGE_PATH}/logs:/azerothcore/logs
- ${AC_SQL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro - ${AC_SQL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro
- ${STAGE_PATH_MODULE_SQL:-${STORAGE_PATH}/module-sql-updates}:/modules-sql - ${MODULE_SQL_STAGE_PATH:-${STORAGE_PATH}/module-sql-updates}:/modules-sql
- mysql-data:/var/lib/mysql-persistent - mysql-data:/var/lib/mysql-persistent
- ${STORAGE_PATH}/modules:/modules - ${STORAGE_PATH}/modules:/modules
- ${BACKUP_PATH}:/backups - ${BACKUP_PATH}:/backups
- ./scripts/bash/db-import-conditional.sh:/tmp/db-import-conditional.sh:ro - ./scripts/bash/db-import-conditional.sh:/tmp/db-import-conditional.sh:ro
- ./scripts/bash/seed-dbimport-conf.sh:/tmp/seed-dbimport-conf.sh:ro
- ./scripts/bash/restore-and-stage.sh:/tmp/restore-and-stage.sh:ro - ./scripts/bash/restore-and-stage.sh:/tmp/restore-and-stage.sh:ro
- ./scripts/bash/db-guard.sh:/tmp/db-guard.sh:ro - ./scripts/bash/db-guard.sh:/tmp/db-guard.sh:ro
environment: environment:
@@ -334,7 +325,7 @@ services:
profiles: ["client-data", "client-data-bots"] profiles: ["client-data", "client-data-bots"]
image: ${ALPINE_IMAGE} image: ${ALPINE_IMAGE}
container_name: ac-volume-init container_name: ac-volume-init
user: "0:0" user: "${CONTAINER_USER}"
volumes: volumes:
- ${CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/azerothcore/data - ${CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/azerothcore/data
- client-data-cache:/cache - client-data-cache:/cache
@@ -360,11 +351,10 @@ services:
profiles: ["db", "modules"] profiles: ["db", "modules"]
image: ${ALPINE_IMAGE} image: ${ALPINE_IMAGE}
container_name: ac-storage-init container_name: ac-storage-init
user: "0:0" user: "${CONTAINER_USER}"
volumes: volumes:
- ${STORAGE_PATH}:/storage-root - ${STORAGE_PATH}:/storage-root
- ${STORAGE_PATH_LOCAL}:/local-storage-root - ${STORAGE_PATH_LOCAL}:/local-storage-root
- ./scripts/bash/seed-dbimport-conf.sh:/tmp/seed-dbimport-conf.sh:ro
command: command:
- sh - sh
- -c - -c
@@ -374,48 +364,11 @@ services:
mkdir -p /storage-root/config/mysql/conf.d mkdir -p /storage-root/config/mysql/conf.d
mkdir -p /storage-root/client-data mkdir -p /storage-root/client-data
mkdir -p /storage-root/backups mkdir -p /storage-root/backups
# Copy core config files if they don't exist
# Copy core AzerothCore config template files (.dist) to config directory if [ -f "/local-storage-root/source/azerothcore-playerbots/src/tools/dbimport/dbimport.conf.dist" ] && [ ! -f "/storage-root/config/dbimport.conf.dist" ]; then
echo "📄 Copying AzerothCore configuration templates..." echo "📄 Copying dbimport.conf.dist..."
SOURCE_DIR="${SOURCE_DIR:-/local-storage-root/source/azerothcore-playerbots}" cp /local-storage-root/source/azerothcore-playerbots/src/tools/dbimport/dbimport.conf.dist /storage-root/config/
if [ ! -d "$SOURCE_DIR" ] && [ -d "/local-storage-root/source/azerothcore-wotlk" ]; then
SOURCE_DIR="/local-storage-root/source/azerothcore-wotlk"
fi fi
# Seed dbimport.conf with a shared helper (fallback to a simple copy if missing)
if [ -f "/tmp/seed-dbimport-conf.sh" ]; then
echo "🧩 Seeding dbimport.conf"
DBIMPORT_CONF_DIR="/storage-root/config" \
DBIMPORT_SOURCE_ROOT="$SOURCE_DIR" \
sh -c '. /tmp/seed-dbimport-conf.sh && seed_dbimport_conf' || true
else
if [ -f "$SOURCE_DIR/src/tools/dbimport/dbimport.conf.dist" ]; then
cp -n "$SOURCE_DIR/src/tools/dbimport/dbimport.conf.dist" /storage-root/config/ 2>/dev/null || true
if [ ! -f "/storage-root/config/dbimport.conf" ]; then
cp "$SOURCE_DIR/src/tools/dbimport/dbimport.conf.dist" /storage-root/config/dbimport.conf
echo " ✓ Created dbimport.conf"
fi
fi
fi
# Copy authserver.conf.dist
if [ -f "$SOURCE_DIR/env/dist/etc/authserver.conf.dist" ]; then
cp -n "$SOURCE_DIR/env/dist/etc/authserver.conf.dist" /storage-root/config/ 2>/dev/null || true
if [ ! -f "/storage-root/config/authserver.conf" ]; then
cp "$SOURCE_DIR/env/dist/etc/authserver.conf.dist" /storage-root/config/authserver.conf
echo " ✓ Created authserver.conf"
fi
fi
# Copy worldserver.conf.dist
if [ -f "$SOURCE_DIR/env/dist/etc/worldserver.conf.dist" ]; then
cp -n "$SOURCE_DIR/env/dist/etc/worldserver.conf.dist" /storage-root/config/ 2>/dev/null || true
if [ ! -f "/storage-root/config/worldserver.conf" ]; then
cp "$SOURCE_DIR/env/dist/etc/worldserver.conf.dist" /storage-root/config/worldserver.conf
echo " ✓ Created worldserver.conf"
fi
fi
mkdir -p /storage-root/config/temp
# Fix ownership of root directories and all contents # Fix ownership of root directories and all contents
if [ "$(id -u)" -eq 0 ]; then if [ "$(id -u)" -eq 0 ]; then
chown -R ${CONTAINER_USER} /storage-root /local-storage-root chown -R ${CONTAINER_USER} /storage-root /local-storage-root
@@ -525,7 +478,7 @@ services:
ports: ports:
- "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}" - "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
restart: unless-stopped restart: unless-stopped
logging: *logging-default logging:
networks: networks:
- azerothcore - azerothcore
volumes: volumes:
@@ -580,7 +533,7 @@ services:
- ${STORAGE_PATH}/modules:/azerothcore/modules - ${STORAGE_PATH}/modules:/azerothcore/modules
- ${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts - ${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts
restart: unless-stopped restart: unless-stopped
logging: *logging-default logging:
networks: networks:
- azerothcore - azerothcore
cap_add: ["SYS_NICE"] cap_add: ["SYS_NICE"]
@@ -618,7 +571,11 @@ services:
ports: ports:
- "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}" - "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
restart: unless-stopped restart: unless-stopped
logging: *logging-default logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
networks: networks:
- azerothcore - azerothcore
volumes: volumes:
@@ -654,7 +611,7 @@ services:
ports: ports:
- "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}" - "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
restart: unless-stopped restart: unless-stopped
logging: *logging-default logging:
networks: networks:
- azerothcore - azerothcore
volumes: volumes:
@@ -712,7 +669,7 @@ services:
- ${STORAGE_PATH}/modules:/azerothcore/modules - ${STORAGE_PATH}/modules:/azerothcore/modules
- ${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts - ${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts
restart: unless-stopped restart: unless-stopped
logging: *logging-default logging:
networks: networks:
- azerothcore - azerothcore
cap_add: ["SYS_NICE"] cap_add: ["SYS_NICE"]
@@ -769,7 +726,11 @@ services:
- "${WORLD_EXTERNAL_PORT}:${WORLD_PORT}" - "${WORLD_EXTERNAL_PORT}:${WORLD_PORT}"
- "${SOAP_EXTERNAL_PORT}:${SOAP_PORT}" - "${SOAP_EXTERNAL_PORT}:${SOAP_PORT}"
restart: unless-stopped restart: unless-stopped
logging: *logging-default logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
cap_add: ["SYS_NICE"] cap_add: ["SYS_NICE"]
healthcheck: healthcheck:
test: ["CMD", "sh", "-c", "ps aux | grep '[w]orldserver' | grep -v grep || exit 1"] test: ["CMD", "sh", "-c", "ps aux | grep '[w]orldserver' | grep -v grep || exit 1"]
@@ -858,10 +819,8 @@ services:
- | - |
apk add --no-cache bash curl docker-cli su-exec apk add --no-cache bash curl docker-cli su-exec
chmod +x /tmp/scripts/bash/auto-post-install.sh 2>/dev/null || true chmod +x /tmp/scripts/bash/auto-post-install.sh 2>/dev/null || true
echo "📥 Running post-install as root (testing mode)" echo "📥 Running post-install as ${CONTAINER_USER}"
mkdir -p /install-markers su-exec ${CONTAINER_USER} bash /tmp/scripts/bash/auto-post-install.sh
chown -R ${CONTAINER_USER} /azerothcore/config /install-markers 2>/dev/null || true
bash /tmp/scripts/bash/auto-post-install.sh
restart: "no" restart: "no"
networks: networks:
- azerothcore - azerothcore
@@ -918,7 +877,7 @@ services:
timeout: 10s timeout: 10s
retries: 3 retries: 3
start_period: 40s start_period: 40s
logging: *logging-default logging:
security_opt: security_opt:
- no-new-privileges:true - no-new-privileges:true
networks: networks:


@@ -170,8 +170,6 @@ Optional flags:
 - `--remote-port 2222` - Custom SSH port
 - `--remote-identity ~/.ssh/custom_key` - Specific SSH key
 - `--remote-skip-storage` - Don't sync storage directory (fresh install on remote)
-- `--remote-storage-path /mnt/acore-storage` - Override STORAGE_PATH on the remote host (local-storage stays per .env)
-- `--remote-container-user 1001:1001` - Override CONTAINER_USER on the remote host (uid:gid)
 ### Step 3: Deploy on Remote Host
 ```bash


@@ -24,34 +24,6 @@ STATUS_FILE="${DB_GUARD_STATUS_FILE:-/tmp/db-guard.status}"
 ERROR_FILE="${DB_GUARD_ERROR_FILE:-/tmp/db-guard.error}"
 MODULE_SQL_HOST_PATH="${MODULE_SQL_HOST_PATH:-/modules-sql}"
-SEED_CONF_SCRIPT="${SEED_DBIMPORT_CONF_SCRIPT:-/tmp/seed-dbimport-conf.sh}"
-if [ -f "$SEED_CONF_SCRIPT" ]; then
-  # shellcheck source=/dev/null
-  . "$SEED_CONF_SCRIPT"
-elif ! command -v seed_dbimport_conf >/dev/null 2>&1; then
-  seed_dbimport_conf(){
-    local conf="/azerothcore/env/dist/etc/dbimport.conf"
-    local dist="${conf}.dist"
-    mkdir -p "$(dirname "$conf")"
-    [ -f "$conf" ] && return 0
-    if [ -f "$dist" ]; then
-      cp "$dist" "$conf"
-    else
-      warn "dbimport.conf missing and no dist available; writing minimal defaults"
-      cat > "$conf" <<EOF
-LoginDatabaseInfo = "localhost;3306;root;root;acore_auth"
-WorldDatabaseInfo = "localhost;3306;root;root;acore_world"
-CharacterDatabaseInfo = "localhost;3306;root;root;acore_characters"
-PlayerbotsDatabaseInfo = "localhost;3306;root;root;acore_playerbots"
-EnableDatabases = 15
-Updates.AutoSetup = 1
-MySQLExecutable = "/usr/bin/mysql"
-TempDir = "/azerothcore/env/dist/etc/temp"
-EOF
-    fi
-  }
-fi
 declare -a DB_SCHEMAS=()
 for var in DB_AUTH_NAME DB_WORLD_NAME DB_CHARACTERS_NAME DB_PLAYERBOTS_NAME; do
   value="${!var:-}"
@@ -113,6 +85,15 @@ rehydrate(){
   "$IMPORT_SCRIPT"
 }
+ensure_dbimport_conf(){
+  local conf="/azerothcore/env/dist/etc/dbimport.conf"
+  local dist="${conf}.dist"
+  if [ ! -f "$conf" ] && [ -f "$dist" ]; then
+    cp "$dist" "$conf"
+  fi
+  mkdir -p /azerothcore/env/dist/temp
+}
 sync_host_stage_files(){
   local host_root="${MODULE_SQL_HOST_PATH}"
   [ -d "$host_root" ] || return 0
@@ -129,7 +110,7 @@ sync_host_stage_files(){
 dbimport_verify(){
   local bin_dir="/azerothcore/env/dist/bin"
-  seed_dbimport_conf
+  ensure_dbimport_conf
   sync_host_stage_files
   if [ ! -x "${bin_dir}/dbimport" ]; then
     warn "dbimport binary not found at ${bin_dir}/dbimport"


@@ -81,6 +81,15 @@ wait_for_mysql(){
return 1 return 1
} }
ensure_dbimport_conf(){
local conf="/azerothcore/env/dist/etc/dbimport.conf"
local dist="${conf}.dist"
if [ ! -f "$conf" ] && [ -f "$dist" ]; then
cp "$dist" "$conf"
fi
mkdir -p /azerothcore/env/dist/temp
}
case "${1:-}" in case "${1:-}" in
-h|--help) -h|--help)
print_help print_help
@@ -97,34 +106,6 @@ esac
echo "🔧 Conditional AzerothCore Database Import" echo "🔧 Conditional AzerothCore Database Import"
echo "========================================" echo "========================================"
SEED_CONF_SCRIPT="${SEED_DBIMPORT_CONF_SCRIPT:-/tmp/seed-dbimport-conf.sh}"
if [ -f "$SEED_CONF_SCRIPT" ]; then
# shellcheck source=/dev/null
. "$SEED_CONF_SCRIPT"
elif ! command -v seed_dbimport_conf >/dev/null 2>&1; then
seed_dbimport_conf(){
local conf="/azerothcore/env/dist/etc/dbimport.conf"
local dist="${conf}.dist"
mkdir -p "$(dirname "$conf")"
[ -f "$conf" ] && return 0
if [ -f "$dist" ]; then
cp "$dist" "$conf"
else
echo "⚠️ dbimport.conf missing and no dist available; using localhost defaults" >&2
cat > "$conf" <<EOF
LoginDatabaseInfo = "localhost;3306;root;root;acore_auth"
WorldDatabaseInfo = "localhost;3306;root;root;acore_world"
CharacterDatabaseInfo = "localhost;3306;root;root;acore_characters"
PlayerbotsDatabaseInfo = "localhost;3306;root;root;acore_playerbots"
EnableDatabases = 15
Updates.AutoSetup = 1
MySQLExecutable = "/usr/bin/mysql"
TempDir = "/azerothcore/env/dist/etc/temp"
EOF
fi
}
fi
if ! wait_for_mysql; then if ! wait_for_mysql; then
echo "❌ MySQL service is unavailable; aborting database import" echo "❌ MySQL service is unavailable; aborting database import"
exit 1 exit 1
@@ -177,8 +158,6 @@ echo "🔧 Starting database import process..."
echo "🔍 Checking for backups to restore..." echo "🔍 Checking for backups to restore..."
# Allow tolerant scanning; re-enable -e after search.
set +e
# Define backup search paths in priority order # Define backup search paths in priority order
BACKUP_SEARCH_PATHS=( BACKUP_SEARCH_PATHS=(
"/backups" "/backups"
@@ -274,16 +253,13 @@ if [ -z "$backup_path" ]; then
# Check for manual backups (*.sql files) # Check for manual backups (*.sql files)
if [ -z "$backup_path" ]; then if [ -z "$backup_path" ]; then
echo "🔍 Checking for manual backup files..." echo "🔍 Checking for manual backup files..."
latest_manual="" latest_manual=$(ls -1t "$BACKUP_DIRS"/*.sql 2>/dev/null | head -n 1)
if ls "$BACKUP_DIRS"/*.sql >/dev/null 2>&1; then if [ -n "$latest_manual" ] && [ -f "$latest_manual" ]; then
latest_manual=$(ls -1t "$BACKUP_DIRS"/*.sql | head -n 1) echo "📦 Found manual backup: $(basename "$latest_manual")"
if [ -n "$latest_manual" ] && [ -f "$latest_manual" ]; then if timeout 10 head -20 "$latest_manual" >/dev/null 2>&1; then
echo "📦 Found manual backup: $(basename "$latest_manual")" echo "✅ Valid manual backup file: $(basename "$latest_manual")"
if timeout 10 head -20 "$latest_manual" >/dev/null 2>&1; then backup_path="$latest_manual"
echo "✅ Valid manual backup file: $(basename "$latest_manual")" break
backup_path="$latest_manual"
break
fi
fi fi
fi fi
fi fi
@@ -296,7 +272,6 @@ if [ -z "$backup_path" ]; then
done done
fi fi
set -e
echo "🔄 Final backup path result: '$backup_path'" echo "🔄 Final backup path result: '$backup_path'"
if [ -n "$backup_path" ]; then if [ -n "$backup_path" ]; then
echo "📦 Found backup: $(basename "$backup_path")" echo "📦 Found backup: $(basename "$backup_path")"
@@ -382,7 +357,7 @@ if [ -n "$backup_path" ]; then
return 0 return 0
fi fi
seed_dbimport_conf ensure_dbimport_conf
cd /azerothcore/env/dist/bin cd /azerothcore/env/dist/bin
echo "🔄 Running dbimport to apply any missing updates..." echo "🔄 Running dbimport to apply any missing updates..."
@@ -449,73 +424,23 @@ fi
echo "🗄️ Creating fresh AzerothCore databases..." echo "🗄️ Creating fresh AzerothCore databases..."
mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e " mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "
DROP DATABASE IF EXISTS ${DB_AUTH_NAME}; CREATE DATABASE IF NOT EXISTS ${DB_AUTH_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
DROP DATABASE IF EXISTS ${DB_WORLD_NAME}; CREATE DATABASE IF NOT EXISTS ${DB_WORLD_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
DROP DATABASE IF EXISTS ${DB_CHARACTERS_NAME}; CREATE DATABASE IF NOT EXISTS ${DB_CHARACTERS_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
DROP DATABASE IF EXISTS ${DB_PLAYERBOTS_NAME:-acore_playerbots}; CREATE DATABASE IF NOT EXISTS acore_playerbots DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_AUTH_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_WORLD_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_CHARACTERS_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_PLAYERBOTS_NAME:-acore_playerbots} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
SHOW DATABASES;" || { echo "❌ Failed to create databases"; exit 1; } SHOW DATABASES;" || { echo "❌ Failed to create databases"; exit 1; }
echo "✅ Fresh databases created - proceeding with schema import" echo "✅ Fresh databases created - proceeding with schema import"
ensure_dbimport_conf
echo "🚀 Running database import..." echo "🚀 Running database import..."
cd /azerothcore/env/dist/bin cd /azerothcore/env/dist/bin
seed_dbimport_conf
maybe_run_base_import(){
local mysql_host="${CONTAINER_MYSQL:-ac-mysql}"
local mysql_port="${MYSQL_PORT:-3306}"
local mysql_user="${MYSQL_USER:-root}"
local mysql_pass="${MYSQL_ROOT_PASSWORD:-root}"
import_dir(){
local db="$1" dir="$2"
[ -d "$dir" ] || return 0
echo "🔧 Importing base schema for ${db} from $(basename "$dir")..."
for f in $(ls "$dir"/*.sql 2>/dev/null | LC_ALL=C sort); do
MYSQL_PWD="$mysql_pass" mysql -h "$mysql_host" -P "$mysql_port" -u "$mysql_user" "$db" < "$f" >/dev/null 2>&1 || true
done
}
needs_import(){
local db="$1"
local count
count="$(MYSQL_PWD="$mysql_pass" mysql -h "$mysql_host" -P "$mysql_port" -u "$mysql_user" -N -B -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${db}';" 2>/dev/null || echo 0)"
[ "${count:-0}" -eq 0 ] && return 0
local updates
updates="$(MYSQL_PWD="$mysql_pass" mysql -h "$mysql_host" -P "$mysql_port" -u "$mysql_user" -N -B -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${db}' AND table_name='updates';" 2>/dev/null || echo 0)"
[ "${updates:-0}" -eq 0 ]
}
if needs_import "${DB_WORLD_NAME:-acore_world}"; then
import_dir "${DB_WORLD_NAME:-acore_world}" "/azerothcore/data/sql/base/db_world"
fi
if needs_import "${DB_AUTH_NAME:-acore_auth}"; then
import_dir "${DB_AUTH_NAME:-acore_auth}" "/azerothcore/data/sql/base/db_auth"
fi
if needs_import "${DB_CHARACTERS_NAME:-acore_characters}"; then
import_dir "${DB_CHARACTERS_NAME:-acore_characters}" "/azerothcore/data/sql/base/db_characters"
fi
}
maybe_run_base_import
if ./dbimport; then if ./dbimport; then
echo "✅ Database import completed successfully!" echo "✅ Database import completed successfully!"
import_marker_msg="$(date): Database import completed successfully" echo "$(date): Database import completed successfully" > "$RESTORE_STATUS_DIR/.import-completed" || echo "$(date): Database import completed successfully" > "$MARKER_STATUS_DIR/.import-completed"
if [ -w "$RESTORE_STATUS_DIR" ]; then
echo "$import_marker_msg" > "$RESTORE_STATUS_DIR/.import-completed"
elif [ -w "$MARKER_STATUS_DIR" ]; then
echo "$import_marker_msg" > "$MARKER_STATUS_DIR/.import-completed" 2>/dev/null || true
fi
else else
echo "❌ Database import failed!" echo "❌ Database import failed!"
if [ -w "$RESTORE_STATUS_DIR" ]; then echo "$(date): Database import failed" > "$RESTORE_STATUS_DIR/.import-failed" || echo "$(date): Database import failed" > "$MARKER_STATUS_DIR/.import-failed"
echo "$(date): Database import failed" > "$RESTORE_STATUS_DIR/.import-failed"
elif [ -w "$MARKER_STATUS_DIR" ]; then
echo "$(date): Database import failed" > "$MARKER_STATUS_DIR/.import-failed" 2>/dev/null || true
fi
exit 1 exit 1
fi fi


@@ -1,7 +1,7 @@
#!/bin/bash #!/bin/bash
# Utility to migrate deployment images (and optionally storage) to a remote host. # Utility to migrate module images (and optionally storage) to a remote host.
# Assumes your runtime images have already been built or pulled locally. # Assumes module images have already been rebuilt locally.
set -euo pipefail set -euo pipefail
@@ -41,74 +41,6 @@ resolve_project_image(){
echo "${project_name}:${tag}" echo "${project_name}:${tag}"
} }
declare -a DEPLOY_IMAGE_REFS=()
declare -a CLEANUP_IMAGE_REFS=()
declare -A DEPLOY_IMAGE_SET=()
declare -A CLEANUP_IMAGE_SET=()
add_deploy_image_ref(){
local image="$1"
[ -z "$image" ] && return
if [[ -z "${DEPLOY_IMAGE_SET[$image]:-}" ]]; then
DEPLOY_IMAGE_SET["$image"]=1
DEPLOY_IMAGE_REFS+=("$image")
fi
add_cleanup_image_ref "$image"
}
add_cleanup_image_ref(){
local image="$1"
[ -z "$image" ] && return
if [[ -z "${CLEANUP_IMAGE_SET[$image]:-}" ]]; then
CLEANUP_IMAGE_SET["$image"]=1
CLEANUP_IMAGE_REFS+=("$image")
fi
}
collect_deploy_image_refs(){
local auth_modules world_modules auth_playerbots world_playerbots db_import client_data bots_client_data
local auth_standard world_standard client_data_standard
auth_modules="$(read_env_value AC_AUTHSERVER_IMAGE_MODULES "$(resolve_project_image "authserver-modules-latest")")"
world_modules="$(read_env_value AC_WORLDSERVER_IMAGE_MODULES "$(resolve_project_image "worldserver-modules-latest")")"
auth_playerbots="$(read_env_value AC_AUTHSERVER_IMAGE_PLAYERBOTS "$(resolve_project_image "authserver-playerbots")")"
world_playerbots="$(read_env_value AC_WORLDSERVER_IMAGE_PLAYERBOTS "$(resolve_project_image "worldserver-playerbots")")"
db_import="$(read_env_value AC_DB_IMPORT_IMAGE "$(resolve_project_image "db-import-playerbots")")"
client_data="$(read_env_value AC_CLIENT_DATA_IMAGE_PLAYERBOTS "$(resolve_project_image "client-data-playerbots")")"
auth_standard="$(read_env_value AC_AUTHSERVER_IMAGE "acore/ac-wotlk-authserver:master")"
world_standard="$(read_env_value AC_WORLDSERVER_IMAGE "acore/ac-wotlk-worldserver:master")"
client_data_standard="$(read_env_value AC_CLIENT_DATA_IMAGE "acore/ac-wotlk-client-data:master")"
local refs=(
"$auth_modules"
"$world_modules"
"$auth_playerbots"
"$world_playerbots"
"$db_import"
"$client_data"
"$auth_standard"
"$world_standard"
"$client_data_standard"
)
for ref in "${refs[@]}"; do
add_deploy_image_ref "$ref"
done
# Include default project-tagged images for cleanup even if env moved to custom tags
local fallback_refs=(
"$(resolve_project_image "authserver-modules-latest")"
"$(resolve_project_image "worldserver-modules-latest")"
"$(resolve_project_image "authserver-playerbots")"
"$(resolve_project_image "worldserver-playerbots")"
"$(resolve_project_image "db-import-playerbots")"
"$(resolve_project_image "client-data-playerbots")"
)
for ref in "${fallback_refs[@]}"; do
add_cleanup_image_ref "$ref"
done
}
ensure_host_writable(){ ensure_host_writable(){
local path="$1" local path="$1"
[ -n "$path" ] || return 0 [ -n "$path" ] || return 0
@@ -144,7 +76,6 @@ Options:
--port PORT SSH port (default: 22) --port PORT SSH port (default: 22)
--identity PATH SSH private key (passed to scp/ssh) --identity PATH SSH private key (passed to scp/ssh)
--project-dir DIR Remote project directory (default: ~/<project-name>) --project-dir DIR Remote project directory (default: ~/<project-name>)
--env-file PATH Use this env file for image lookup and upload (default: ./.env)
--tarball PATH Output path for the image tar (default: ./local-storage/images/acore-modules-images.tar) --tarball PATH Output path for the image tar (default: ./local-storage/images/acore-modules-images.tar)
--storage PATH Remote storage directory (default: <project-dir>/storage) --storage PATH Remote storage directory (default: <project-dir>/storage)
--skip-storage Do not sync the storage directory --skip-storage Do not sync the storage directory
@@ -172,7 +103,6 @@ while [[ $# -gt 0 ]]; do
--port) PORT="$2"; shift 2;; --port) PORT="$2"; shift 2;;
--identity) IDENTITY="$2"; shift 2;; --identity) IDENTITY="$2"; shift 2;;
--project-dir) PROJECT_DIR="$2"; shift 2;; --project-dir) PROJECT_DIR="$2"; shift 2;;
--env-file) ENV_FILE="$2"; shift 2;;
--tarball) TARBALL="$2"; shift 2;; --tarball) TARBALL="$2"; shift 2;;
--storage) REMOTE_STORAGE="$2"; shift 2;; --storage) REMOTE_STORAGE="$2"; shift 2;;
--skip-storage) SKIP_STORAGE=1; shift;; --skip-storage) SKIP_STORAGE=1; shift;;
@@ -189,14 +119,6 @@ if [[ -z "$HOST" || -z "$USER" ]]; then
exit 1 exit 1
fi fi
# Normalize env file path if provided and recompute defaults
if [ -n "$ENV_FILE" ] && [ -f "$ENV_FILE" ]; then
ENV_FILE="$(cd "$(dirname "$ENV_FILE")" && pwd)/$(basename "$ENV_FILE")"
else
ENV_FILE="$PROJECT_ROOT/.env"
fi
DEFAULT_PROJECT_NAME="$(project_name::resolve "$ENV_FILE" "$TEMPLATE_FILE")"
expand_remote_path(){ expand_remote_path(){
local path="$1" local path="$1"
case "$path" in case "$path" in
@@ -223,27 +145,6 @@ ensure_host_writable "$LOCAL_STORAGE_ROOT"
TARBALL="${TARBALL:-${LOCAL_STORAGE_ROOT}/images/acore-modules-images.tar}" TARBALL="${TARBALL:-${LOCAL_STORAGE_ROOT}/images/acore-modules-images.tar}"
ensure_host_writable "$(dirname "$TARBALL")" ensure_host_writable "$(dirname "$TARBALL")"
# Resolve module SQL staging paths (local and remote)
resolve_path_relative_to_project(){
local path="$1" root="$2"
if [[ "$path" != /* ]]; then
# drop leading ./ if present
path="${path#./}"
path="${root%/}/$path"
fi
echo "${path%/}"
}
STAGE_SQL_PATH_RAW="$(read_env_value STAGE_PATH_MODULE_SQL "${LOCAL_STORAGE_ROOT:-./local-storage}/module-sql-updates")"
# Ensure STORAGE_PATH_LOCAL is defined to avoid set -u failures during expansion
if [ -z "${STORAGE_PATH_LOCAL:-}" ]; then
STORAGE_PATH_LOCAL="$LOCAL_STORAGE_ROOT"
fi
# Expand any env references (e.g., ${STORAGE_PATH_LOCAL})
STAGE_SQL_PATH_RAW="$(eval "echo \"$STAGE_SQL_PATH_RAW\"")"
LOCAL_STAGE_SQL_DIR="$(resolve_path_relative_to_project "$STAGE_SQL_PATH_RAW" "$PROJECT_ROOT")"
REMOTE_STAGE_SQL_DIR="$(resolve_path_relative_to_project "$STAGE_SQL_PATH_RAW" "$PROJECT_DIR")"
SCP_OPTS=(-P "$PORT") SCP_OPTS=(-P "$PORT")
SSH_OPTS=(-p "$PORT") SSH_OPTS=(-p "$PORT")
if [[ -n "$IDENTITY" ]]; then if [[ -n "$IDENTITY" ]]; then
@@ -387,13 +288,25 @@ setup_remote_repository(){
cleanup_stale_docker_resources(){ cleanup_stale_docker_resources(){
echo "⋅ Cleaning up stale Docker resources on remote..." echo "⋅ Cleaning up stale Docker resources on remote..."
# Get project name to target our containers/images specifically
local project_name
project_name="$(resolve_project_name)"
# Stop and remove old containers # Stop and remove old containers
echo " • Removing old containers..." echo " • Removing old containers..."
run_ssh "docker ps -a --filter 'name=ac-' --format '{{.Names}}' | xargs -r docker rm -f 2>/dev/null || true" run_ssh "docker ps -a --filter 'name=ac-' --format '{{.Names}}' | xargs -r docker rm -f 2>/dev/null || true"
# Remove old project images to force fresh load # Remove old project images to force fresh load
echo " • Removing old project images..." echo " • Removing old project images..."
for img in "${CLEANUP_IMAGE_REFS[@]}"; do local images_to_remove=(
"${project_name}:authserver-modules-latest"
"${project_name}:worldserver-modules-latest"
"${project_name}:authserver-playerbots"
"${project_name}:worldserver-playerbots"
"${project_name}:db-import-playerbots"
"${project_name}:client-data-playerbots"
)
for img in "${images_to_remove[@]}"; do
run_ssh "docker rmi '$img' 2>/dev/null || true" run_ssh "docker rmi '$img' 2>/dev/null || true"
done done
@@ -407,25 +320,31 @@ cleanup_stale_docker_resources(){
validate_remote_environment validate_remote_environment
collect_deploy_image_refs echo "⋅ Exporting module images to $TARBALL"
echo "⋅ Exporting deployment images to $TARBALL"
# Ensure destination directory exists
ensure_host_writable "$(dirname "$TARBALL")"
# Check which images are available and collect them # Check which images are available and collect them
IMAGES_TO_SAVE=() IMAGES_TO_SAVE=()
MISSING_IMAGES=()
for image in "${DEPLOY_IMAGE_REFS[@]}"; do project_auth_modules="$(resolve_project_image "authserver-modules-latest")"
project_world_modules="$(resolve_project_image "worldserver-modules-latest")"
project_auth_playerbots="$(resolve_project_image "authserver-playerbots")"
project_world_playerbots="$(resolve_project_image "worldserver-playerbots")"
project_db_import="$(resolve_project_image "db-import-playerbots")"
project_client_data="$(resolve_project_image "client-data-playerbots")"
for image in \
"$project_auth_modules" \
"$project_world_modules" \
"$project_auth_playerbots" \
"$project_world_playerbots" \
"$project_db_import" \
"$project_client_data"; do
if docker image inspect "$image" >/dev/null 2>&1; then if docker image inspect "$image" >/dev/null 2>&1; then
IMAGES_TO_SAVE+=("$image") IMAGES_TO_SAVE+=("$image")
else
MISSING_IMAGES+=("$image")
fi fi
done done
if [ ${#IMAGES_TO_SAVE[@]} -eq 0 ]; then if [ ${#IMAGES_TO_SAVE[@]} -eq 0 ]; then
echo "❌ No AzerothCore images found to migrate. Run './build.sh' first or pull the images defined in your .env." echo "❌ No AzerothCore images found to migrate. Run './build.sh' first or pull standard images."
exit 1 exit 1
fi fi
@@ -433,11 +352,6 @@ echo "⋅ Found ${#IMAGES_TO_SAVE[@]} images to migrate:"
printf ' • %s\n' "${IMAGES_TO_SAVE[@]}" printf ' • %s\n' "${IMAGES_TO_SAVE[@]}"
docker image save "${IMAGES_TO_SAVE[@]}" > "$TARBALL" docker image save "${IMAGES_TO_SAVE[@]}" > "$TARBALL"
if [ ${#MISSING_IMAGES[@]} -gt 0 ]; then
echo "⚠️ Skipping ${#MISSING_IMAGES[@]} images not present locally (will need to pull on remote if required):"
printf ' • %s\n' "${MISSING_IMAGES[@]}"
fi
if [[ $SKIP_STORAGE -eq 0 ]]; then if [[ $SKIP_STORAGE -eq 0 ]]; then
if [[ -d storage ]]; then if [[ -d storage ]]; then
echo "⋅ Syncing storage to remote" echo "⋅ Syncing storage to remote"
@@ -473,18 +387,6 @@ if [[ $SKIP_STORAGE -eq 0 ]]; then
rm -f "$modules_tar" rm -f "$modules_tar"
run_ssh "tar -xf '$REMOTE_TEMP_DIR/acore-modules.tar' -C '$REMOTE_STORAGE/modules' && rm '$REMOTE_TEMP_DIR/acore-modules.tar'" run_ssh "tar -xf '$REMOTE_TEMP_DIR/acore-modules.tar' -C '$REMOTE_STORAGE/modules' && rm '$REMOTE_TEMP_DIR/acore-modules.tar'"
fi fi
# Sync module SQL staging directory (STAGE_PATH_MODULE_SQL)
if [[ -d "$LOCAL_STAGE_SQL_DIR" ]]; then
echo "⋅ Syncing module SQL staging to remote"
run_ssh "rm -rf '$REMOTE_STAGE_SQL_DIR' && mkdir -p '$REMOTE_STAGE_SQL_DIR'"
sql_tar=$(mktemp)
tar -cf "$sql_tar" -C "$LOCAL_STAGE_SQL_DIR" .
ensure_remote_temp_dir
run_scp "$sql_tar" "$USER@$HOST:$REMOTE_TEMP_DIR/acore-module-sql.tar"
rm -f "$sql_tar"
run_ssh "tar -xf '$REMOTE_TEMP_DIR/acore-module-sql.tar' -C '$REMOTE_STAGE_SQL_DIR' && rm '$REMOTE_TEMP_DIR/acore-module-sql.tar'"
fi
fi fi
reset_remote_post_install_marker(){ reset_remote_post_install_marker(){
@@ -504,9 +406,9 @@ ensure_remote_temp_dir
run_scp "$TARBALL" "$USER@$HOST:$REMOTE_TEMP_DIR/acore-modules-images.tar" run_scp "$TARBALL" "$USER@$HOST:$REMOTE_TEMP_DIR/acore-modules-images.tar"
run_ssh "docker load < '$REMOTE_TEMP_DIR/acore-modules-images.tar' && rm '$REMOTE_TEMP_DIR/acore-modules-images.tar'" run_ssh "docker load < '$REMOTE_TEMP_DIR/acore-modules-images.tar' && rm '$REMOTE_TEMP_DIR/acore-modules-images.tar'"
if [[ -f "$ENV_FILE" ]]; then if [[ -f .env ]]; then
echo "⋅ Uploading .env" echo "⋅ Uploading .env"
run_scp "$ENV_FILE" "$USER@$HOST:$PROJECT_DIR/.env" run_scp .env "$USER@$HOST:$PROJECT_DIR/.env"
fi fi
echo "⋅ Remote prepares completed" echo "⋅ Remote prepares completed"


@@ -1,88 +0,0 @@
#!/bin/bash
# Ensure dbimport.conf exists with usable connection values.
set -euo pipefail 2>/dev/null || set -eu
# Usage: seed_dbimport_conf [conf_dir]
# - conf_dir: target directory (defaults to DBIMPORT_CONF_DIR or /azerothcore/env/dist/etc)
seed_dbimport_conf() {
local conf_dir="${1:-${DBIMPORT_CONF_DIR:-/azerothcore/env/dist/etc}}"
local conf="${conf_dir}/dbimport.conf"
local dist="${conf}.dist"
local source_root="${DBIMPORT_SOURCE_ROOT:-${AC_SOURCE_DIR:-/local-storage-root/source/azerothcore-playerbots}}"
if [ ! -d "$source_root" ]; then
local fallback="/local-storage-root/source/azerothcore-wotlk"
if [ -d "$fallback" ]; then
source_root="$fallback"
fi
fi
local source_dist="${DBIMPORT_DIST_PATH:-${source_root}/src/tools/dbimport/dbimport.conf.dist}"
# Put temp dir inside the writable config mount so non-root can create files.
local temp_dir="${DBIMPORT_TEMP_DIR:-/azerothcore/env/dist/etc/temp}"
mkdir -p "$conf_dir" "$temp_dir"
# Prefer a real .dist from the source tree if it exists.
if [ -f "$source_dist" ]; then
cp -n "$source_dist" "$dist" 2>/dev/null || true
fi
if [ ! -f "$conf" ]; then
if [ -f "$dist" ]; then
cp "$dist" "$conf"
else
echo "⚠️ dbimport.conf.dist not found; generating minimal dbimport.conf" >&2
cat > "$conf" <<EOF
LoginDatabaseInfo = "localhost;3306;root;root;acore_auth"
WorldDatabaseInfo = "localhost;3306;root;root;acore_world"
CharacterDatabaseInfo = "localhost;3306;root;root;acore_characters"
PlayerbotsDatabaseInfo = "localhost;3306;root;root;acore_playerbots"
EnableDatabases = 15
Updates.AutoSetup = 1
MySQLExecutable = "/usr/bin/mysql"
TempDir = "/azerothcore/env/dist/temp"
EOF
fi
fi
set_conf() {
local key="$1" value="$2" file="$3" quoted="${4:-true}"
local formatted="$value"
if [ "$quoted" = "true" ]; then
formatted="\"${value}\""
fi
if grep -qE "^[[:space:]]*${key}[[:space:]]*=" "$file"; then
sed -i "s|^[[:space:]]*${key}[[:space:]]*=.*|${key} = ${formatted}|" "$file"
else
printf '%s = %s\n' "$key" "$formatted" >> "$file"
fi
}
local host="${CONTAINER_MYSQL:-${MYSQL_HOST:-localhost}}"
local port="${MYSQL_PORT:-3306}"
local user="${MYSQL_USER:-root}"
local pass="${MYSQL_ROOT_PASSWORD:-root}"
local db_auth="${DB_AUTH_NAME:-acore_auth}"
local db_world="${DB_WORLD_NAME:-acore_world}"
local db_chars="${DB_CHARACTERS_NAME:-acore_characters}"
local db_bots="${DB_PLAYERBOTS_NAME:-acore_playerbots}"
set_conf "LoginDatabaseInfo" "${host};${port};${user};${pass};${db_auth}" "$conf"
set_conf "WorldDatabaseInfo" "${host};${port};${user};${pass};${db_world}" "$conf"
set_conf "CharacterDatabaseInfo" "${host};${port};${user};${pass};${db_chars}" "$conf"
set_conf "PlayerbotsDatabaseInfo" "${host};${port};${user};${pass};${db_bots}" "$conf"
set_conf "EnableDatabases" "${AC_UPDATES_ENABLE_DATABASES:-15}" "$conf" false
set_conf "Updates.AutoSetup" "${AC_UPDATES_AUTO_SETUP:-1}" "$conf" false
set_conf "Updates.ExceptionShutdownDelay" "${AC_UPDATES_EXCEPTION_SHUTDOWN_DELAY:-10000}" "$conf" false
set_conf "Updates.AllowedModules" "${DB_UPDATES_ALLOWED_MODULES:-all}" "$conf"
set_conf "Updates.Redundancy" "${DB_UPDATES_REDUNDANCY:-1}" "$conf" false
set_conf "Database.Reconnect.Seconds" "${DB_RECONNECT_SECONDS:-5}" "$conf" false
set_conf "Database.Reconnect.Attempts" "${DB_RECONNECT_ATTEMPTS:-5}" "$conf" false
set_conf "LoginDatabase.WorkerThreads" "${DB_LOGIN_WORKER_THREADS:-1}" "$conf" false
set_conf "WorldDatabase.WorkerThreads" "${DB_WORLD_WORKER_THREADS:-1}" "$conf" false
set_conf "CharacterDatabase.WorkerThreads" "${DB_CHARACTER_WORKER_THREADS:-1}" "$conf" false
set_conf "LoginDatabase.SynchThreads" "${DB_LOGIN_SYNCH_THREADS:-1}" "$conf" false
set_conf "WorldDatabase.SynchThreads" "${DB_WORLD_SYNCH_THREADS:-1}" "$conf" false
set_conf "CharacterDatabase.SynchThreads" "${DB_CHARACTER_SYNCH_THREADS:-1}" "$conf" false
set_conf "MySQLExecutable" "/usr/bin/mysql" "$conf"
set_conf "TempDir" "$temp_dir" "$conf"
}


@@ -259,14 +259,14 @@ SENTINEL_FILE="$LOCAL_STORAGE_PATH/modules/.requires_rebuild"
 MODULES_META_DIR="$STORAGE_PATH/modules/.modules-meta"
 RESTORE_PRESTAGED_FLAG="$MODULES_META_DIR/.restore-prestaged"
 MODULES_ENABLED_FILE="$MODULES_META_DIR/modules-enabled.txt"
-STAGE_PATH_MODULE_SQL="$(read_env STAGE_PATH_MODULE_SQL "$STORAGE_PATH/module-sql-updates")"
-STAGE_PATH_MODULE_SQL="$(eval "echo \"$STAGE_PATH_MODULE_SQL\"")"
-if [[ "$STAGE_PATH_MODULE_SQL" != /* ]]; then
-  STAGE_PATH_MODULE_SQL="$PROJECT_DIR/$STAGE_PATH_MODULE_SQL"
+MODULE_SQL_STAGE_PATH="$(read_env MODULE_SQL_STAGE_PATH "$STORAGE_PATH/module-sql-updates")"
+MODULE_SQL_STAGE_PATH="$(eval "echo \"$MODULE_SQL_STAGE_PATH\"")"
+if [[ "$MODULE_SQL_STAGE_PATH" != /* ]]; then
+  MODULE_SQL_STAGE_PATH="$PROJECT_DIR/$MODULE_SQL_STAGE_PATH"
 fi
-STAGE_PATH_MODULE_SQL="$(canonical_path "$STAGE_PATH_MODULE_SQL")"
-mkdir -p "$STAGE_PATH_MODULE_SQL"
-ensure_host_writable "$STAGE_PATH_MODULE_SQL"
+MODULE_SQL_STAGE_PATH="$(canonical_path "$MODULE_SQL_STAGE_PATH")"
+mkdir -p "$MODULE_SQL_STAGE_PATH"
+ensure_host_writable "$MODULE_SQL_STAGE_PATH"
 HOST_STAGE_HELPER_IMAGE="$(read_env ALPINE_IMAGE "alpine:latest")"
 declare -A ENABLED_MODULES=()
@@ -439,7 +439,7 @@ esac
 # Stage module SQL to core updates directory (after containers start)
 host_stage_clear(){
   docker run --rm \
-    -v "$STAGE_PATH_MODULE_SQL":/host-stage \
+    -v "$MODULE_SQL_STAGE_PATH":/host-stage \
     "$HOST_STAGE_HELPER_IMAGE" \
     sh -c 'find /host-stage -type f -name "MODULE_*.sql" -delete' >/dev/null 2>&1 || true
 }
@@ -447,7 +447,7 @@ host_stage_clear(){
 host_stage_reset_dir(){
   local dir="$1"
   docker run --rm \
-    -v "$STAGE_PATH_MODULE_SQL":/host-stage \
+    -v "$MODULE_SQL_STAGE_PATH":/host-stage \
     "$HOST_STAGE_HELPER_IMAGE" \
     sh -c "mkdir -p /host-stage/$dir && rm -f /host-stage/$dir/MODULE_*.sql" >/dev/null 2>&1 || true
 }
@@ -461,7 +461,7 @@ copy_to_host_stage(){
   local base_name
   base_name="$(basename "$file_path")"
   docker run --rm \
-    -v "$STAGE_PATH_MODULE_SQL":/host-stage \
+    -v "$MODULE_SQL_STAGE_PATH":/host-stage \
     -v "$src_dir":/src \
     "$HOST_STAGE_HELPER_IMAGE" \
     sh -c "mkdir -p /host-stage/$core_dir && cp \"/src/$base_name\" \"/host-stage/$core_dir/$target_name\"" >/dev/null 2>&1


@@ -1,6 +1,6 @@
 module acore-compose/statusdash
-go 1.22
+go 1.22.2
 require (
 	github.com/gizak/termui/v3 v3.1.0 // indirect


@@ -62,26 +62,16 @@ type Module struct {
 }
 type Snapshot struct {
 	Timestamp string `json:"timestamp"`
 	Project   string `json:"project"`
 	Network   string `json:"network"`
 	Services  []Service `json:"services"`
 	Ports     []Port `json:"ports"`
 	Modules   []Module `json:"modules"`
 	Storage   map[string]DirInfo `json:"storage"`
 	Volumes   map[string]VolumeInfo `json:"volumes"`
 	Users     UserStats `json:"users"`
 	Stats     map[string]ContainerStats `json:"stats"`
-}
-var persistentServiceOrder = []string{
-	"ac-mysql",
-	"ac-db-guard",
-	"ac-authserver",
-	"ac-worldserver",
-	"ac-phpmyadmin",
-	"ac-keira3",
-	"ac-backup",
 }
 func runSnapshot() (*Snapshot, error) {
@@ -97,76 +87,27 @@ func runSnapshot() (*Snapshot, error) {
 	return snap, nil
 }
-func partitionServices(all []Service) ([]Service, []Service) {
-	byName := make(map[string]Service)
-	for _, svc := range all {
-		byName[svc.Name] = svc
-	}
-	seen := make(map[string]bool)
-	persistent := make([]Service, 0, len(persistentServiceOrder))
-	for _, name := range persistentServiceOrder {
-		if svc, ok := byName[name]; ok {
-			persistent = append(persistent, svc)
-			seen[name] = true
-		}
-	}
-	setups := make([]Service, 0, len(all))
-	for _, svc := range all {
-		if seen[svc.Name] {
-			continue
-		}
-		setups = append(setups, svc)
-	}
-	return persistent, setups
-}
 func buildServicesTable(s *Snapshot) *TableNoCol {
-	runningServices, setupServices := partitionServices(s.Services)
 	table := NewTableNoCol()
-	rows := [][]string{{"Group", "Service", "Status", "Health", "CPU%", "Memory"}}
-	appendRows := func(groupLabel string, services []Service) {
-		for _, svc := range services {
-			cpu := "-"
-			mem := "-"
-			if svcStats, ok := s.Stats[svc.Name]; ok {
-				cpu = fmt.Sprintf("%.1f", svcStats.CPU)
-				mem = strings.Split(svcStats.Memory, " / ")[0] // Just show used, not total
-			}
-			health := svc.Health
-			if svc.Status != "running" && svc.ExitCode != "0" && svc.ExitCode != "" {
-				health = fmt.Sprintf("%s (%s)", svc.Health, svc.ExitCode)
-			}
-			rows = append(rows, []string{groupLabel, svc.Label, svc.Status, health, cpu, mem})
-		}
-	}
-	appendRows("Persistent", runningServices)
-	appendRows("Setup", setupServices)
+	rows := [][]string{{"Service", "Status", "Health", "CPU%", "Memory"}}
+	for _, svc := range s.Services {
+		cpu := "-"
+		mem := "-"
+		if stats, ok := s.Stats[svc.Name]; ok {
+			cpu = fmt.Sprintf("%.1f", stats.CPU)
+			mem = strings.Split(stats.Memory, " / ")[0] // Just show used, not total
+		}
+		// Combine health with exit code for stopped containers
+		health := svc.Health
+		if svc.Status != "running" && svc.ExitCode != "0" && svc.ExitCode != "" {
+			health = fmt.Sprintf("%s (%s)", svc.Health, svc.ExitCode)
+		}
+		rows = append(rows, []string{svc.Label, svc.Status, health, cpu, mem})
+	}
 	table.Rows = rows
 	table.RowSeparator = false
 	table.Border = true
 	table.Title = "Services"
-	for i := 1; i < len(table.Rows); i++ {
-		if table.RowStyles == nil {
-			table.RowStyles = make(map[int]ui.Style)
-		}
-		state := strings.ToLower(table.Rows[i][2])
-		switch state {
-		case "running", "healthy":
-			table.RowStyles[i] = ui.NewStyle(ui.ColorGreen)
-		case "restarting", "unhealthy":
-			table.RowStyles[i] = ui.NewStyle(ui.ColorRed)
-		case "exited":
-			table.RowStyles[i] = ui.NewStyle(ui.ColorYellow)
-		default:
-			table.RowStyles[i] = ui.NewStyle(ui.ColorWhite)
-		}
-	}
 	return table
 }
@@ -204,6 +145,7 @@ func buildModulesList(s *Snapshot) *widgets.List {
 func buildStorageParagraph(s *Snapshot) *widgets.Paragraph {
 	var b strings.Builder
+	fmt.Fprintf(&b, "STORAGE:\n")
 	entries := []struct {
 		Key   string
 		Label string
@@ -219,7 +161,11 @@ func buildStorageParagraph(s *Snapshot) *widgets.Paragraph {
 		if !ok {
 			continue
 		}
-		fmt.Fprintf(&b, " %-15s %s (%s)\n", item.Label, info.Path, info.Size)
+		mark := "○"
+		if info.Exists {
+			mark = "●"
+		}
+		fmt.Fprintf(&b, " %-15s %s %s (%s)\n", item.Label, mark, info.Path, info.Size)
 	}
 	par := widgets.NewParagraph()
 	par.Title = "Storage"
@@ -231,6 +177,7 @@ func buildStorageParagraph(s *Snapshot) *widgets.Paragraph {
 func buildVolumesParagraph(s *Snapshot) *widgets.Paragraph {
 	var b strings.Builder
+	fmt.Fprintf(&b, "VOLUMES:\n")
 	entries := []struct {
 		Key   string
 		Label string
@@ -243,7 +190,11 @@ func buildVolumesParagraph(s *Snapshot) *widgets.Paragraph {
 		if !ok {
 			continue
 		}
-		fmt.Fprintf(&b, " %-13s %s\n", item.Label, info.Mountpoint)
+		mark := "○"
+		if info.Exists {
+			mark = "●"
+		}
+		fmt.Fprintf(&b, " %-13s %s %s\n", item.Label, mark, info.Mountpoint)
 	}
 	par := widgets.NewParagraph()
 	par.Title = "Volumes"
@@ -255,6 +206,22 @@ func buildVolumesParagraph(s *Snapshot) *widgets.Paragraph {
 func renderSnapshot(s *Snapshot, selectedModule int) (*widgets.List, *ui.Grid) {
 	servicesTable := buildServicesTable(s)
+	for i := 1; i < len(servicesTable.Rows); i++ {
+		if servicesTable.RowStyles == nil {
+			servicesTable.RowStyles = make(map[int]ui.Style)
+		}
+		state := strings.ToLower(servicesTable.Rows[i][1])
+		switch state {
+		case "running", "healthy":
+			servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorGreen)
+		case "restarting", "unhealthy":
+			servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorRed)
+		case "exited":
+			servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorYellow)
+		default:
+			servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorWhite)
+		}
+	}
 	portsTable := buildPortsTable(s)
 	for i := 1; i < len(portsTable.Rows); i++ {
 		if portsTable.RowStyles == nil {
@@ -280,7 +247,7 @@ func renderSnapshot(s *Snapshot, selectedModule int) (*widgets.List, *ui.Grid) {
 	moduleInfoPar.Title = "Module Info"
 	if selectedModule >= 0 && selectedModule < len(s.Modules) {
 		mod := s.Modules[selectedModule]
-		moduleInfoPar.Text = fmt.Sprintf("%s\nCategory: %s\nType: %s", mod.Description, mod.Category, mod.Type)
+		moduleInfoPar.Text = fmt.Sprintf("%s\n\nCategory: %s\nType: %s", mod.Description, mod.Category, mod.Type)
 	} else {
 		moduleInfoPar.Text = "Select a module to view info"
 	}
@@ -305,15 +272,15 @@ func renderSnapshot(s *Snapshot, selectedModule int) (*widgets.List, *ui.Grid) {
 	termWidth, termHeight := ui.TerminalDimensions()
 	grid.SetRect(0, 0, termWidth, termHeight)
 	grid.Set(
-		ui.NewRow(0.15,
+		ui.NewRow(0.18,
 			ui.NewCol(0.6, header),
 			ui.NewCol(0.4, usersPar),
 		),
-		ui.NewRow(0.46,
+		ui.NewRow(0.42,
 			ui.NewCol(0.6, servicesTable),
 			ui.NewCol(0.4, portsTable),
 		),
-		ui.NewRow(0.39,
+		ui.NewRow(0.40,
 			ui.NewCol(0.25, modulesList),
 			ui.NewCol(0.15,
 				ui.NewRow(0.30, helpPar),
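Note on the statusdash change above: the per-row colouring switch now runs once in renderSnapshot over the finished services table and keys on column index 1 (Status), since the Group column was dropped. As a point of reference, a minimal standalone sketch of the same mapping, assuming only the termui/v3 dependency from go.mod; statusStyle, main, and the sample statuses below are illustrative and not code from the repository:

package main

import (
	"fmt"
	"strings"

	ui "github.com/gizak/termui/v3"
)

// statusStyle mirrors the switch used in renderSnapshot: green for running or
// healthy rows, red for restarting/unhealthy, yellow for exited, white otherwise.
func statusStyle(status string) ui.Style {
	switch strings.ToLower(status) {
	case "running", "healthy":
		return ui.NewStyle(ui.ColorGreen)
	case "restarting", "unhealthy":
		return ui.NewStyle(ui.ColorRed)
	case "exited":
		return ui.NewStyle(ui.ColorYellow)
	default:
		return ui.NewStyle(ui.ColorWhite)
	}
}

func main() {
	// Building styles does not require initialising the terminal UI.
	for _, s := range []string{"running", "exited", "created"} {
		fmt.Printf("%-8s -> foreground colour %v\n", s, statusStyle(s).Fg)
	}
}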


@@ -588,16 +588,14 @@ def handle_generate(args: argparse.Namespace) -> int:
     write_outputs(state, output_dir)
     if state.warnings:
-        module_keys_with_warnings = sorted(
-            {warning.split()[0].strip(":,") for warning in state.warnings if warning.startswith("MODULE_")}
-        )
-        warning_lines = []
-        if module_keys_with_warnings:
-            warning_lines.append(f"- Modules with warnings: {', '.join(module_keys_with_warnings)}")
-        warning_lines.extend(f"- {warning}" for warning in state.warnings)
-        warning_block = textwrap.indent("\n".join(warning_lines), " ")
+        warning_block = "\n".join(f"- {warning}" for warning in state.warnings)
         print(
-            f"⚠️ Module manifest warnings detected:\n{warning_block}\n",
+            textwrap.dedent(
+                f"""\
+                ⚠️ Module manifest warnings detected:
+                {warning_block}
+                """
+            ),
             file=sys.stderr,
         )
     if state.errors:


@@ -1241,7 +1241,7 @@ fi
   "automation" "quality-of-life" "gameplay-enhancement" "npc-service"
   "pvp" "progression" "economy" "social" "account-wide"
   "customization" "scripting" "admin" "premium" "minigame"
-  "content" "rewards" "developer" "database" "tooling" "uncategorized"
+  "content" "rewards" "developer"
 )
 declare -A category_titles=(
   ["automation"]="🤖 Automation"
@@ -1261,18 +1261,30 @@ fi
   ["content"]="🏰 Content"
   ["rewards"]="🎁 Rewards"
   ["developer"]="🛠️ Developer Tools"
-  ["database"]="🗄️ Database"
-  ["tooling"]="🔨 Tooling"
-  ["uncategorized"]="📦 Miscellaneous"
 )
-declare -A processed_categories=()
-render_category() {
-  local cat="$1"
+# Group modules by category using arrays
+declare -A modules_by_category
+local key
+for key in "${selection_keys[@]}"; do
+  [ -n "${KNOWN_MODULE_LOOKUP[$key]:-}" ] || continue
+  local category="${MODULE_CATEGORY_MAP[$key]:-uncategorized}"
+  if [ -z "${modules_by_category[$category]:-}" ]; then
+    modules_by_category[$category]="$key"
+  else
+    modules_by_category[$category]="${modules_by_category[$category]} $key"
+  fi
+done
+# Process modules by category
+local cat
+for cat in "${category_order[@]}"; do
   local module_list="${modules_by_category[$cat]:-}"
-  [ -n "$module_list" ] || return 0
+  [ -n "$module_list" ] || continue
+  # Check if this category has any valid modules before showing header
   local has_valid_modules=0
+  # Split the space-separated string properly
   local -a module_array
   IFS=' ' read -ra module_array <<< "$module_list"
   for key in "${module_array[@]}"; do
@@ -1284,12 +1296,14 @@ fi
     fi
   done
-  [ "$has_valid_modules" = "1" ] || return 0
+  # Skip category if no valid modules
+  [ "$has_valid_modules" = "1" ] || continue
+  # Display category header only when we have valid modules
   local cat_title="${category_titles[$cat]:-$cat}"
   printf '\n%b\n' "${BOLD}${CYAN}═══ ${cat_title} ═══${NC}"
-  local first_in_cat=1
+  # Process modules in this category
   for key in "${module_array[@]}"; do
     [ -n "${KNOWN_MODULE_LOOKUP[$key]:-}" ] || continue
     local status_lc="${MODULE_STATUS_MAP[$key],,}"
@@ -1299,10 +1313,6 @@ fi
       printf -v "$key" '%s' "0"
       continue
     fi
-    if [ "$first_in_cat" -ne 1 ]; then
-      printf '\n'
-    fi
-    first_in_cat=0
     local prompt_label
     prompt_label="$(module_display_name "$key")"
     if [ "${MODULE_NEEDS_BUILD_MAP[$key]}" = "1" ]; then
@@ -1330,30 +1340,6 @@ fi
       printf -v "$key" '%s' "0"
     fi
   done
-  processed_categories["$cat"]=1
-}
-# Group modules by category using arrays
-declare -A modules_by_category
-local key
-for key in "${selection_keys[@]}"; do
-  [ -n "${KNOWN_MODULE_LOOKUP[$key]:-}" ] || continue
-  local category="${MODULE_CATEGORY_MAP[$key]:-uncategorized}"
-  if [ -z "${modules_by_category[$category]:-}" ]; then
-    modules_by_category[$category]="$key"
-  else
-    modules_by_category[$category]="${modules_by_category[$category]} $key"
-  fi
-done
-# Process modules by category (ordered, then any new categories)
-local cat
-for cat in "${category_order[@]}"; do
-  render_category "$cat"
-done
-for cat in "${!modules_by_category[@]}"; do
-  [ -n "${processed_categories[$cat]:-}" ] && continue
-  render_category "$cat"
-done
 done
 module_mode_label="preset 3 (Manual)"
 elif [ "$MODE_SELECTION" = "4" ]; then
@@ -1529,24 +1515,8 @@ fi
   # Set build sentinel to indicate rebuild is needed
   local sentinel="$LOCAL_STORAGE_ROOT_ABS/modules/.requires_rebuild"
   mkdir -p "$(dirname "$sentinel")"
-  if touch "$sentinel" 2>/dev/null; then
+  touch "$sentinel"
   say INFO "Build sentinel created at $sentinel"
-  else
-    say WARNING "Could not create build sentinel at $sentinel (permissions/ownership); forcing with sudo..."
-    if command -v sudo >/dev/null 2>&1; then
-      if sudo mkdir -p "$(dirname "$sentinel")" \
-        && sudo chown -R "$(id -u):$(id -g)" "$(dirname "$sentinel")" \
-        && sudo touch "$sentinel"; then
-        say INFO "Build sentinel created at $sentinel (after fixing ownership)"
-      else
-        say ERROR "Failed to force build sentinel creation at $sentinel. Fix permissions and rerun setup."
-        exit 1
-      fi
-    else
-      say ERROR "Cannot force build sentinel creation (sudo unavailable). Fix permissions on $(dirname "$sentinel") and rerun setup."
-      exit 1
-    fi
-  fi
 fi
 local default_source_rel="${LOCAL_STORAGE_ROOT}/source/azerothcore"


@@ -1,117 +0,0 @@
#!/bin/bash
#
# Safe wrapper to update to the latest commit on the current branch and run deploy.
set -euo pipefail

ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$ROOT_DIR"

BLUE='\033[0;34m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; NC='\033[0m'
info(){ printf '%b\n' "${BLUE} $*${NC}"; }
ok(){ printf '%b\n' "${GREEN}$*${NC}"; }
warn(){ printf '%b\n' "${YELLOW}⚠️ $*${NC}"; }
err(){ printf '%b\n' "${RED}$*${NC}"; }

FORCE_DIRTY=0
DEPLOY_ARGS=()
SKIP_BUILD=0
AUTO_DEPLOY=0

usage(){
  cat <<'EOF'
Usage: ./update-latest.sh [--force] [--help] [deploy args...]

Updates the current git branch with a fast-forward pull, runs a fresh build,
and optionally runs ./deploy.sh with any additional arguments you provide
(e.g., --yes --no-watch).

Options:
  --force       Skip the dirty-tree check (not recommended; you may lose changes)
  --skip-build  Do not run ./build.sh after updating
  --deploy      Auto-run ./deploy.sh after build (non-interactive)
  --help        Show this help

Examples:
  ./update-latest.sh --yes --no-watch
  ./update-latest.sh --deploy --yes --no-watch
  ./update-latest.sh --force --skip-build
  ./update-latest.sh --force --deploy --remote --remote-host my.host --remote-user sam --yes
EOF
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    --force) FORCE_DIRTY=1; shift;;
    --skip-build) SKIP_BUILD=1; shift;;
    --deploy) AUTO_DEPLOY=1; shift;;
    --help|-h) usage; exit 0;;
    *) DEPLOY_ARGS+=("$1"); shift;;
  esac
done

command -v git >/dev/null 2>&1 || { err "git is required"; exit 1; }

if [ "$FORCE_DIRTY" -ne 1 ]; then
  if [ -n "$(git status --porcelain)" ]; then
    err "Working tree is dirty. Commit/stash or re-run with --force."
    exit 1
  fi
fi

current_branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || true)"
if [ -z "$current_branch" ] || [ "$current_branch" = "HEAD" ]; then
  err "Cannot update: detached HEAD or unknown branch."
  exit 1
fi

if ! git ls-remote --exit-code --heads origin "$current_branch" >/dev/null 2>&1; then
  err "Remote branch origin/$current_branch not found."
  exit 1
fi

info "Fetching latest changes from origin/$current_branch"
git fetch --prune origin

info "Fast-forwarding to origin/$current_branch"
if ! git merge --ff-only "origin/$current_branch"; then
  err "Fast-forward failed. Resolve manually or rebase, then rerun."
  exit 1
fi
ok "Repository updated to $(git rev-parse --short HEAD)"

if [ "$SKIP_BUILD" -ne 1 ]; then
  info "Running build.sh --yes"
  if ! "$ROOT_DIR/build.sh" --yes; then
    err "Build failed. Resolve issues and re-run."
    exit 1
  fi
  ok "Build completed"
else
  warn "Skipping build (--skip-build set)"
fi

# Offer to run deploy
if [ "$AUTO_DEPLOY" -eq 1 ]; then
  info "Auto-deploy enabled; running deploy.sh ${DEPLOY_ARGS[*]:-(no extra args)}"
  exec "$ROOT_DIR/deploy.sh" "${DEPLOY_ARGS[@]}"
fi

if [ -t 0 ]; then
  read -r -p "Run deploy.sh now? [y/N]: " reply
  reply="${reply:-n}"
  case "$reply" in
    [Yy]*)
      info "Running deploy.sh ${DEPLOY_ARGS[*]:-(no extra args)}"
      exec "$ROOT_DIR/deploy.sh" "${DEPLOY_ARGS[@]}"
      ;;
    *)
      ok "Update (and build) complete. Run ./deploy.sh ${DEPLOY_ARGS[*]} when ready."
      exit 0
      ;;
  esac
else
  warn "Non-interactive mode and --deploy not set; skipping deploy."
  ok "Update (and build) complete. Run ./deploy.sh ${DEPLOY_ARGS[*]} when ready."
fi

updates-dry-run.json (new file)

@@ -0,0 +1,350 @@
[
{
"key": "MODULE_INDIVIDUAL_PROGRESSION",
"repo_name": "ZhengPeiRu21/mod-individual-progression",
"topic": "azerothcore-module",
"repo_url": "https://github.com/ZhengPeiRu21/mod-individual-progression"
},
{
"key": "MODULE_PLAYERBOTS",
"repo_name": "mod-playerbots/mod-playerbots",
"topic": "azerothcore-module",
"repo_url": "https://github.com/mod-playerbots/mod-playerbots"
},
{
"key": "MODULE_OLLAMA_CHAT",
"repo_name": "DustinHendrickson/mod-ollama-chat",
"topic": "azerothcore-module",
"repo_url": "https://github.com/DustinHendrickson/mod-ollama-chat"
},
{
"key": "MODULE_PLAYER_BOT_LEVEL_BRACKETS",
"repo_name": "DustinHendrickson/mod-player-bot-level-brackets",
"topic": "azerothcore-module",
"repo_url": "https://github.com/DustinHendrickson/mod-player-bot-level-brackets"
},
{
"key": "MODULE_DUEL_RESET",
"repo_name": "azerothcore/mod-duel-reset",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-duel-reset"
},
{
"key": "MODULE_AOE_LOOT",
"repo_name": "azerothcore/mod-aoe-loot",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-aoe-loot"
},
{
"key": "MODULE_TIC_TAC_TOE",
"repo_name": "azerothcore/mod-tic-tac-toe",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-tic-tac-toe"
},
{
"key": "MODULE_NPC_BEASTMASTER",
"repo_name": "azerothcore/mod-npc-beastmaster",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-npc-beastmaster"
},
{
"key": "MODULE_MORPHSUMMON",
"repo_name": "azerothcore/mod-morphsummon",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-morphsummon"
},
{
"key": "MODULE_WORGOBLIN",
"repo_name": "heyitsbench/mod-worgoblin",
"topic": "azerothcore-module",
"repo_url": "https://github.com/heyitsbench/mod-worgoblin"
},
{
"key": "MODULE_SKELETON_MODULE",
"repo_name": "azerothcore/skeleton-module",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/skeleton-module"
},
{
"key": "MODULE_AUTOBALANCE",
"repo_name": "azerothcore/mod-autobalance",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-autobalance"
},
{
"key": "MODULE_TRANSMOG",
"repo_name": "azerothcore/mod-transmog",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-transmog"
},
{
"key": "MODULE_ARAC",
"repo_name": "heyitsbench/mod-arac",
"topic": "azerothcore-module",
"repo_url": "https://github.com/heyitsbench/mod-arac"
},
{
"key": "MODULE_GLOBAL_CHAT",
"repo_name": "azerothcore/mod-global-chat",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-global-chat"
},
{
"key": "MODULE_PRESTIGE_DRAFT_MODE",
"repo_name": "Youpeoples/Prestige-and-Draft-Mode",
"topic": "azerothcore-module",
"repo_url": "https://github.com/Youpeoples/Prestige-and-Draft-Mode"
},
{
"key": "MODULE_BLACK_MARKET_AUCTION_HOUSE",
"repo_name": "Youpeoples/Black-Market-Auction-House",
"topic": "azerothcore-module",
"repo_url": "https://github.com/Youpeoples/Black-Market-Auction-House"
},
{
"key": "MODULE_ULTIMATE_FULL_LOOT_PVP",
"repo_name": "Youpeoples/Ultimate-Full-Loot-Pvp",
"topic": "azerothcore-module",
"repo_url": "https://github.com/Youpeoples/Ultimate-Full-Loot-Pvp"
},
{
"key": "MODULE_SERVER_AUTO_SHUTDOWN",
"repo_name": "azerothcore/mod-server-auto-shutdown",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-server-auto-shutdown"
},
{
"key": "MODULE_TIME_IS_TIME",
"repo_name": "dunjeon/mod-TimeIsTime",
"topic": "azerothcore-module",
"repo_url": "https://github.com/dunjeon/mod-TimeIsTime"
},
{
"key": "MODULE_WAR_EFFORT",
"repo_name": "azerothcore/mod-war-effort",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-war-effort"
},
{
"key": "MODULE_FIREWORKS",
"repo_name": "azerothcore/mod-fireworks-on-level",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-fireworks-on-level"
},
{
"key": "MODULE_NPC_ENCHANTER",
"repo_name": "azerothcore/mod-npc-enchanter",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-npc-enchanter"
},
{
"key": "MODULE_NPC_BUFFER",
"repo_name": "azerothcore/mod-npc-buffer",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-npc-buffer"
},
{
"key": "MODULE_PVP_TITLES",
"repo_name": "azerothcore/mod-pvp-titles",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-pvp-titles"
},
{
"key": "MODULE_CHALLENGE_MODES",
"repo_name": "ZhengPeiRu21/mod-challenge-modes",
"topic": "azerothcore-module",
"repo_url": "https://github.com/ZhengPeiRu21/mod-challenge-modes"
},
{
"key": "MODULE_TREASURE_CHEST_SYSTEM",
"repo_name": "zyggy123/Treasure-Chest-System",
"topic": "azerothcore-module",
"repo_url": "https://github.com/zyggy123/Treasure-Chest-System"
},
{
"key": "MODULE_ASSISTANT",
"repo_name": "noisiver/mod-assistant",
"topic": "azerothcore-module",
"repo_url": "https://github.com/noisiver/mod-assistant"
},
{
"key": "MODULE_STATBOOSTER",
"repo_name": "AnchyDev/StatBooster",
"topic": "azerothcore-module",
"repo_url": "https://github.com/AnchyDev/StatBooster"
},
{
"key": "MODULE_BG_SLAVERYVALLEY",
"repo_name": "Helias/mod-bg-slaveryvalley",
"topic": "azerothcore-module",
"repo_url": "https://github.com/Helias/mod-bg-slaveryvalley"
},
{
"key": "MODULE_REAGENT_BANK",
"repo_name": "ZhengPeiRu21/mod-reagent-bank",
"topic": "azerothcore-module",
"repo_url": "https://github.com/ZhengPeiRu21/mod-reagent-bank"
},
{
"key": "MODULE_ELUNA_TS",
"repo_name": "azerothcore/eluna-ts",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/eluna-ts"
},
{
"key": "MODULE_AZEROTHSHARD",
"repo_name": "azerothcore/mod-azerothshard",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-azerothshard"
},
{
"key": "MODULE_LEVEL_GRANT",
"repo_name": "michaeldelago/mod-quest-count-level",
"topic": "azerothcore-module",
"repo_url": "https://github.com/michaeldelago/mod-quest-count-level"
},
{
"key": "MODULE_DUNGEON_RESPAWN",
"repo_name": "AnchyDev/DungeonRespawn",
"topic": "azerothcore-module",
"repo_url": "https://github.com/AnchyDev/DungeonRespawn"
},
{
"key": "MODULE_LUA_AH_BOT",
"repo_name": "mostlynick3/azerothcore-lua-ah-bot",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/mostlynick3/azerothcore-lua-ah-bot"
},
{
"key": "MODULE_ACCOUNTWIDE_SYSTEMS",
"repo_name": "Aldori15/azerothcore-eluna-accountwide",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Aldori15/azerothcore-eluna-accountwide"
},
{
"key": "MODULE_ELUNA_SCRIPTS",
"repo_name": "Isidorsson/Eluna-scripts",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Isidorsson/Eluna-scripts"
},
{
"key": "MODULE_TRANSMOG_AIO",
"repo_name": "DanieltheDeveloper/azerothcore-transmog-3.3.5a",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/DanieltheDeveloper/azerothcore-transmog-3.3.5a"
},
{
"key": "MODULE_HARDCORE_MODE",
"repo_name": "PrivateDonut/hardcore_mode",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/PrivateDonut/hardcore_mode"
},
{
"key": "MODULE_RECRUIT_A_FRIEND",
"repo_name": "55Honey/Acore_RecruitAFriend",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_RecruitAFriend"
},
{
"key": "MODULE_EVENT_SCRIPTS",
"repo_name": "55Honey/Acore_eventScripts",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_eventScripts"
},
{
"key": "MODULE_LOTTERY_LUA",
"repo_name": "zyggy123/lottery-lua",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/zyggy123/lottery-lua"
},
{
"key": "MODULE_HORADRIC_CUBE",
"repo_name": "TITIaio/Horadric-Cube-for-World-of-Warcraft",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/TITIaio/Horadric-Cube-for-World-of-Warcraft"
},
{
"key": "MODULE_GLOBAL_MAIL_BANKING_AUCTIONS",
"repo_name": "Aldori15/azerothcore-global-mail_banking_auctions",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Aldori15/azerothcore-global-mail_banking_auctions"
},
{
"key": "MODULE_LEVEL_UP_REWARD",
"repo_name": "55Honey/Acore_LevelUpReward",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_LevelUpReward"
},
{
"key": "MODULE_AIO_BLACKJACK",
"repo_name": "Manmadedrummer/AIO-Blackjack",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Manmadedrummer/AIO-Blackjack"
},
{
"key": "MODULE_NPCBOT_EXTENDED_COMMANDS",
"repo_name": "Day36512/Npcbot_Extended_Commands",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Day36512/Npcbot_Extended_Commands"
},
{
"key": "MODULE_ACTIVE_CHAT",
"repo_name": "Day36512/ActiveChat",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Day36512/ActiveChat"
},
{
"key": "MODULE_MULTIVENDOR",
"repo_name": "Shadowveil-WotLK/AzerothCore-lua-MultiVendor",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Shadowveil-WotLK/AzerothCore-lua-MultiVendor"
},
{
"key": "MODULE_EXCHANGE_NPC",
"repo_name": "55Honey/Acore_ExchangeNpc",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_ExchangeNpc"
},
{
"key": "MODULE_DYNAMIC_TRADER",
"repo_name": "Day36512/Dynamic-Trader",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Day36512/Dynamic-Trader"
},
{
"key": "MODULE_DISCORD_NOTIFIER",
"repo_name": "0xCiBeR/Acore_DiscordNotifier",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/0xCiBeR/Acore_DiscordNotifier"
},
{
"key": "MODULE_ZONE_CHECK",
"repo_name": "55Honey/Acore_Zonecheck",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_Zonecheck"
},
{
"key": "MODULE_HARDCORE_MODE",
"repo_name": "HellionOP/Lua-HardcoreMode",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/HellionOP/Lua-HardcoreMode"
},
{
"key": "MODULE_SEND_AND_BIND",
"repo_name": "55Honey/Acore_SendAndBind",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_SendAndBind"
},
{
"key": "MODULE_TEMP_ANNOUNCEMENTS",
"repo_name": "55Honey/Acore_TempAnnouncements",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_TempAnnouncements"
},
{
"key": "MODULE_CARBON_COPY",
"repo_name": "55Honey/Acore_CarbonCopy",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_CarbonCopy"
}
]
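
The dry-run list above is a flat JSON array of module entries, each carrying key, repo_name, topic, and repo_url; note that MODULE_HARDCORE_MODE is listed twice, once per candidate repository. A minimal Go sketch for loading and summarising the file — this program is not part of the repository and assumes it runs in the directory that contains updates-dry-run.json:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// dryRunEntry matches the fields present in updates-dry-run.json.
type dryRunEntry struct {
	Key      string `json:"key"`
	RepoName string `json:"repo_name"`
	Topic    string `json:"topic"`
	RepoURL  string `json:"repo_url"`
}

func main() {
	raw, err := os.ReadFile("updates-dry-run.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}

	var entries []dryRunEntry
	if err := json.Unmarshal(raw, &entries); err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}

	// Count entries per topic and flag module keys that appear more than once.
	perTopic := make(map[string]int)
	seen := make(map[string]string)
	for _, e := range entries {
		perTopic[e.Topic]++
		if prev, ok := seen[e.Key]; ok {
			fmt.Printf("duplicate key %s: %s and %s\n", e.Key, prev, e.RepoName)
		}
		seen[e.Key] = e.RepoName
	}
	for topic, n := range perTopic {
		fmt.Printf("%-18s %d entries\n", topic, n)
	}
}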