Mirror of https://github.com/uprightbass360/AzerothCore-RealmMaster.git
Synced 2026-01-13 09:07:20 +00:00

Compare commits: `b62e33bb03...feat/modul` (2 commits)

| Author | SHA1 | Date |
|---|---|---|
| | 71c1be1b46 | |
| | 5c9f1d7389 | |
@@ -65,7 +65,7 @@ DB_GUARD_VERIFY_INTERVAL_SECONDS=86400
# =====================
# Module SQL staging
# =====================
STAGE_PATH_MODULE_SQL=${STORAGE_PATH_LOCAL}/module-sql-updates
MODULE_SQL_STAGE_PATH=${STORAGE_PATH_LOCAL}/module-sql-updates

# =====================
# SQL Source Overlay
@@ -180,7 +180,6 @@ DB_CHARACTER_SYNCH_THREADS=1
BACKUP_RETENTION_DAYS=3
BACKUP_RETENTION_HOURS=6
BACKUP_DAILY_TIME=09
BACKUP_INTERVAL_MINUTES=60
# Optional comma/space separated schemas to include in automated backups
BACKUP_EXTRA_DATABASES=
BACKUP_HEALTHCHECK_MAX_MINUTES=1440
3 .gitignore (vendored)
@@ -19,5 +19,4 @@ package.json
todo.md
.gocache/
.module-ledger/
deploy.log
statusdash
deploy.log
46 README.md
@@ -4,7 +4,7 @@

# AzerothCore RealmMaster

A complete containerized deployment of AzerothCore WoW 3.3.5a (Wrath of the Lich King) private server with **hundreds** of supported modules and intelligent automations to allow for easy setup, deployment and management.
A complete containerized deployment of AzerothCore WoW 3.3.5a (Wrath of the Lich King) private server with 93+ enhanced modules and intelligent automation.

## Table of Contents

@@ -23,10 +23,10 @@ A complete containerized deployment of AzerothCore WoW 3.3.5a (Wrath of the Lich

## Quick Start

### Recommendations
- **Docker** with Docker Compose 2
- **16GB+ RAM** and **64GB+ storage**
- **Linux/macOS/WSL2** - Fully tested with Ubuntu 24.04 and Debian 12
### Prerequisites
- **Docker** with Docker Compose
- **16GB+ RAM** and **32GB+ storage**
- **Linux/macOS/WSL2** (Windows with WSL2 recommended)

### Three Simple Steps

@@ -50,15 +50,17 @@ See [Getting Started](#getting-started) for detailed walkthrough.
## What You Get

### ✅ Core Server Components
- **AzerothCore 3.3.5a** - WotLK server application with 348 modules in the manifest (221 currently supported)
- **AzerothCore 3.3.5a** - WotLK server application with 93+ enhanced modules
- **MySQL 8.0** - Database with intelligent initialization and restoration
- **Smart Module System** - Automated module management and source builds
- **phpMyAdmin** - Web-based database administration
- **Keira3** - Game content editor and developer tools

### ✅ Automated Configuration
- **Intelligent Database Setup** - Smart backup detection, restoration, and conditional schema import (details in [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md))
- **Restore-Aware Backups & SQL** - Restore-aware SQL staging and snapshot safety checks keep modules in sync after restores ([docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md))
- **Intelligent Database Setup** - Smart backup detection, restoration, and conditional schema import
- **Restore Safety Checks** - The import job now validates the live MySQL runtime before honoring restore markers so stale tmpfs volumes can’t trick automation into skipping a needed restore (see [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md))
- **Backup Management** - Automated hourly/daily backups with intelligent restoration
- **Restore-Aware Module SQL** - After a backup restore, the ledger snapshot from that backup is synced into shared storage and `stage-modules.sh` recopies every enabled SQL file into `/azerothcore/data/sql/updates/*` so the worldserver’s built-in updater reapplies anything the database still needs (see [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md); a sketch follows this list)
- **Module Integration** - Automatic source builds when C++ modules are enabled
- **Service Orchestration** - Profile-based deployment (standard/playerbots/modules)
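
A rough sketch of the recopy step described under **Restore-Aware Module SQL** above (illustrative only: a per-database layout under `/modules-sql` is assumed, and the real logic lives in `stage-modules.sh`):

```bash
# Assumed layout: /modules-sql/<db_name>/*.sql staged by the module system.
for dir in /modules-sql/*/; do
  db="$(basename "$dir")"                                   # e.g. db_world, db_characters
  mkdir -p "/azerothcore/data/sql/updates/${db}"
  cp -f "${dir}"*.sql "/azerothcore/data/sql/updates/${db}/" 2>/dev/null || true
done
# The worldserver updater skips files already recorded in each database's
# `updates` table, so only statements the restored backup lacks get reapplied.
```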

@@ -77,9 +79,7 @@ For complete local and remote deployment guides, see **[docs/GETTING_STARTED.md]

## Complete Module Catalog

Choose from **hundreds of enhanced modules** spanning automation, quality-of-life improvements, gameplay enhancements, PvP features, and more. The manifest contains 348 modules (221 marked supported/active); the default RealmMaster preset enables 33 that are exercised in testing. All modules are automatically downloaded, configured, and integrated during deployment when selected.

Want a shortcut? Use a preset (RealmMaster, suggested QoL, playerbot variants, all-modules) from `config/module-profiles/`—see [docs/GETTING_STARTED.md#module-presets](docs/GETTING_STARTED.md#module-presets).
Choose from **93+ enhanced modules** spanning automation, quality-of-life improvements, gameplay enhancements, PvP features, and more. All modules are automatically downloaded, configured, and integrated during deployment.

**Popular Categories:**
- **Automation** - Playerbots, AI chat, level management
@@ -93,13 +93,23 @@ Browse the complete catalog with descriptions at **[docs/MODULES.md](docs/MODULE

## Custom NPCs Guide

The server includes **14 custom NPCs** spanning services, buffs, PvP, and guild support. Full spawn commands, coordinates, and functions are in **[docs/NPCS.md](docs/NPCS.md)**.
The server includes **14 custom NPCs** providing enhanced functionality including profession training, enchantments, arena services, and more. All NPCs are spawnable through GM commands and designed for permanent placement.

**Available NPCs:**
- **Service NPCs** - Profession training, reagent banking, instance resets
- **Enhancement NPCs** - Enchanting, buffing, pet management, transmog
- **PvP NPCs** - 1v1 arena battlemaster
- **Guild House NPCs** - Property management and services

For complete spawn commands, coordinates, and functionality details, see **[docs/NPCS.md](docs/NPCS.md)**.

---

## Management & Operations

For common workflows, management commands, and database operations, see **[docs/GETTING_STARTED.md](docs/GETTING_STARTED.md)**. For script details (including module manifest auto-sync), see **[docs/SCRIPTS.md](docs/SCRIPTS.md)**.
For common workflows, management commands, and database operations, see **[docs/GETTING_STARTED.md](docs/GETTING_STARTED.md)**.

- Keep the module catalog current with `scripts/python/update_module_manifest.py` or trigger the scheduled **Sync Module Manifest** GitHub Action to auto-open a PR with the latest AzerothCore topic repos.

---

@@ -139,8 +149,10 @@ This project builds upon:
- ✅ **Comprehensive Documentation** - Clear setup and troubleshooting guides

### Next Steps After Installation
**For detailed server administration, monitoring, backup configuration, and performance tuning, see [docs/GETTING_STARTED.md](docs/GETTING_STARTED.md).**

- **Create admin account** - Attach to worldserver and create a GM user (commands in **[docs/GETTING_STARTED.md#post-installation-steps](docs/GETTING_STARTED.md#post-installation-steps)**).
- **Point your client** - Update `realmlist.wtf` to your host/ports (defaults in the same section above).
- **Open services** - phpMyAdmin and Keira3 URLs/ports are listed in **[docs/GETTING_STARTED.md#post-installation-steps](docs/GETTING_STARTED.md#post-installation-steps)**.
**Essential First Steps:**
1. **Create admin account**: `docker attach ac-worldserver` → `account create admin password` → `account set gmlevel admin 3 -1`
2. **Test your setup**: Connect with WoW 3.3.5a client using `set realmlist 127.0.0.1`
3. **Access web tools**: phpMyAdmin (port 8081) and Keira3 (port 4201)
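
The first step, condensed into a copy/paste session (commands as in step 1; detaching with the standard Docker shortcut keeps the server running):

```bash
docker attach ac-worldserver
# At the worldserver console:
account create admin password
account set gmlevel admin 3 -1
# Detach without stopping the server: Ctrl-P, then Ctrl-Q
```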

**For detailed server administration, monitoring, backup configuration, and performance tuning, see [docs/GETTING_STARTED.md](docs/GETTING_STARTED.md).**
7 build.sh
@@ -137,18 +137,11 @@ generate_module_state(){

# Check if blocked modules were detected in warnings
if echo "$validation_output" | grep -q "is blocked:"; then
# Gather blocked module keys for display
local blocked_modules
blocked_modules=$(echo "$validation_output" | grep -oE 'MODULE_[A-Za-z0-9_]+' | sort -u | tr '\n' ' ')

# Blocked modules detected - show warning and ask for confirmation
echo
warn "════════════════════════════════════════════════════════════════"
warn "⚠️ BLOCKED MODULES DETECTED ⚠️"
warn "════════════════════════════════════════════════════════════════"
if [ -n "$blocked_modules" ]; then
warn "Affected modules: ${blocked_modules}"
fi
warn "Some enabled modules are marked as blocked due to compatibility"
warn "issues. These modules will be SKIPPED during the build process."
warn ""

File diff suppressed because it is too large

47 database-import/README.md (new file)
@@ -0,0 +1,47 @@

# Database Import

> **📌 Note:** This directory is maintained for backward compatibility.
> **New location:** `import/db/` - See [import/README.md](../import/README.md) for the new unified import system.

Place your database backup files here for automatic import during deployment.

## Supported Imports
- `.sql` files (uncompressed SQL dumps)
- `.sql.gz` files (gzip compressed SQL dumps)
- **Full backup directories** (e.g., `ExportBackup_YYYYMMDD_HHMMSS/` containing multiple dumps)
- **Full backup archives** (`.tar`, `.tar.gz`, `.tgz`, `.zip`) that contain the files above

## How to Use

1. **Copy your backup files here:**
```bash
cp my_auth_backup.sql.gz ./database-import/
cp my_world_backup.sql.gz ./database-import/
cp my_characters_backup.sql.gz ./database-import/
# or drop an entire ExportBackup folder / archive
cp -r ExportBackup_20241029_120000 ./database-import/
cp ExportBackup_20241029_120000.tar.gz ./database-import/
```

2. **Run deployment:**
```bash
./deploy.sh
```

3. **Files are automatically copied to backup system** and imported during deployment

## File Naming
- Any filename works - the system will auto-detect database type by content
- Recommended naming: `auth.sql.gz`, `world.sql.gz`, `characters.sql.gz`
- Full backups keep their original directory/archive name so you can track multiple copies

## What Happens
- Individual `.sql`/`.sql.gz` files are copied to `storage/backups/daily/` with a timestamped name
- Full backup directories or archives are staged directly under `storage/backups/` (e.g., `storage/backups/ExportBackup_20241029_120000/`)
- Database import system automatically restores the most recent matching backup
- Original files remain here for reference (archives are left untouched)
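
A quick way to confirm staging worked, using the locations listed above (paths assume the default `./storage` layout):

```bash
ls -lh storage/backups/daily/            # timestamped copies of individual dumps
ls -d storage/backups/ExportBackup_*/    # full backups staged under their original names
```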

## Notes
- Only processed on first deployment (when databases don't exist)
- Files/directories are copied once; existing restored databases will skip import
- Empty folder is ignored - no files, no import
72 deploy.sh
@@ -35,10 +35,6 @@ REMOTE_COPY_SOURCE=0
REMOTE_ARGS_PROVIDED=0
REMOTE_AUTO_DEPLOY=0
REMOTE_AUTO_DEPLOY=0
REMOTE_CLEAN_RUNTIME=0
REMOTE_STORAGE_OVERRIDE=""
REMOTE_CONTAINER_USER_OVERRIDE=""
REMOTE_ENV_FILE=""

MODULE_HELPER="$ROOT_DIR/scripts/python/modules.py"
MODULE_STATE_INITIALIZED=0
@@ -168,33 +164,6 @@ collect_remote_details(){
*) REMOTE_SKIP_STORAGE=0 ;;
esac
fi

if [ "$interactive" -eq 1 ] && [ "$REMOTE_ARGS_PROVIDED" -eq 0 ]; then
local cleanup_answer
read -rp "Stop/remove remote containers & project images during migration? [y/N]: " cleanup_answer
cleanup_answer="${cleanup_answer:-n}"
case "${cleanup_answer,,}" in
y|yes) REMOTE_CLEAN_RUNTIME=1 ;;
*) REMOTE_CLEAN_RUNTIME=0 ;;
esac
fi

# Optional remote env overrides (default to current values)
local storage_default container_user_default
storage_default="$(read_env STORAGE_PATH "./storage")"
container_user_default="$(read_env CONTAINER_USER "$(id -u):$(id -g)")"

if [ -z "$REMOTE_STORAGE_OVERRIDE" ] && [ "$interactive" -eq 1 ]; then
local storage_input
read -rp "Remote storage path (STORAGE_PATH) [${storage_default}]: " storage_input
REMOTE_STORAGE_OVERRIDE="${storage_input:-$storage_default}"
fi

if [ -z "$REMOTE_CONTAINER_USER_OVERRIDE" ] && [ "$interactive" -eq 1 ]; then
local cu_input
read -rp "Remote container user (CONTAINER_USER) [${container_user_default}]: " cu_input
REMOTE_CONTAINER_USER_OVERRIDE="${cu_input:-$container_user_default}"
fi
}

validate_remote_configuration(){
@@ -251,9 +220,6 @@ Options:
--remote-skip-storage Skip syncing the storage directory during migration
--remote-copy-source Copy the local project directory to remote instead of relying on git
--remote-auto-deploy Run './deploy.sh --yes --no-watch' on the remote host after migration
--remote-clean-runtime Stop/remove remote containers & project images during migration
--remote-storage-path PATH Override STORAGE_PATH/STORAGE_PATH_LOCAL in the remote .env
--remote-container-user USER[:GROUP] Override CONTAINER_USER in the remote .env
--skip-config Skip applying server configuration preset
-h, --help Show this help

@@ -282,9 +248,6 @@ while [[ $# -gt 0 ]]; do
--remote-skip-storage) REMOTE_SKIP_STORAGE=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-copy-source) REMOTE_COPY_SOURCE=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-auto-deploy) REMOTE_AUTO_DEPLOY=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-clean-runtime) REMOTE_CLEAN_RUNTIME=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-storage-path) REMOTE_STORAGE_OVERRIDE="$2"; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift 2;;
--remote-container-user) REMOTE_CONTAINER_USER_OVERRIDE="$2"; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift 2;;
--skip-config) SKIP_CONFIG=1; shift;;
-h|--help) usage; exit 0;;
*) err "Unknown option: $1"; usage; exit 1;;
@@ -644,33 +607,6 @@ determine_profile(){
}

run_remote_migration(){
if [ -z "$REMOTE_ENV_FILE" ] && { [ -n "$REMOTE_STORAGE_OVERRIDE" ] || [ -n "$REMOTE_CONTAINER_USER_OVERRIDE" ]; }; then
local base_env=""
if [ -f "$ENV_PATH" ]; then
base_env="$ENV_PATH"
elif [ -f "$TEMPLATE_PATH" ]; then
base_env="$TEMPLATE_PATH"
fi
REMOTE_ENV_FILE="$(mktemp)"
if [ -n "$base_env" ]; then
cp "$base_env" "$REMOTE_ENV_FILE"
else
: > "$REMOTE_ENV_FILE"
fi
if [ -n "$REMOTE_STORAGE_OVERRIDE" ]; then
{
echo
echo "STORAGE_PATH=$REMOTE_STORAGE_OVERRIDE"
} >>"$REMOTE_ENV_FILE"
fi
if [ -n "$REMOTE_CONTAINER_USER_OVERRIDE" ]; then
{
echo
echo "CONTAINER_USER=$REMOTE_CONTAINER_USER_OVERRIDE"
} >>"$REMOTE_ENV_FILE"
fi
fi

local args=(--host "$REMOTE_HOST" --user "$REMOTE_USER")

if [ -n "$REMOTE_PORT" ] && [ "$REMOTE_PORT" != "22" ]; then
@@ -693,18 +629,10 @@ run_remote_migration(){
args+=(--copy-source)
fi

if [ "$REMOTE_CLEAN_RUNTIME" -eq 1 ]; then
args+=(--cleanup-runtime)
fi

if [ "$ASSUME_YES" -eq 1 ]; then
args+=(--yes)
fi

if [ -n "$REMOTE_ENV_FILE" ]; then
args+=(--env-file "$REMOTE_ENV_FILE")
fi

(cd "$ROOT_DIR" && ./scripts/bash/migrate-stack.sh "${args[@]}")
}

@@ -1,11 +1,4 @@
name: ${COMPOSE_PROJECT_NAME}

x-logging: &logging-default
driver: json-file
options:
max-size: "10m"
max-file: "3"

services:
# =====================
# Database Layer (db)
@@ -47,7 +40,7 @@ services:
- --innodb-log-file-size=${MYSQL_INNODB_LOG_FILE_SIZE}
- --innodb-redo-log-capacity=${MYSQL_INNODB_REDO_LOG_CAPACITY}
restart: unless-stopped
logging: *logging-default
logging:
healthcheck:
test: ["CMD", "sh", "-c", "mysqladmin ping -h localhost -u ${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} --silent || exit 1"]
interval: ${MYSQL_HEALTHCHECK_INTERVAL}
@@ -74,12 +67,11 @@ services:
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_PATH}/logs:/azerothcore/logs
- ${AC_SQL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro
- ${STAGE_PATH_MODULE_SQL:-${STORAGE_PATH}/module-sql-updates}:/modules-sql
- ${MODULE_SQL_STAGE_PATH:-${STORAGE_PATH}/module-sql-updates}:/modules-sql
- mysql-data:/var/lib/mysql-persistent
- ${STORAGE_PATH}/modules:/modules
- ${BACKUP_PATH}:/backups
- ./scripts/bash/db-import-conditional.sh:/tmp/db-import-conditional.sh:ro
- ./scripts/bash/seed-dbimport-conf.sh:/tmp/seed-dbimport-conf.sh:ro
- ./scripts/bash/restore-and-stage.sh:/tmp/restore-and-stage.sh:ro
environment:
AC_DATA_DIR: "/azerothcore/data"
@@ -139,12 +131,11 @@ services:
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_PATH}/logs:/azerothcore/logs
- ${AC_SQL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro
- ${STAGE_PATH_MODULE_SQL:-${STORAGE_PATH}/module-sql-updates}:/modules-sql
- ${MODULE_SQL_STAGE_PATH:-${STORAGE_PATH}/module-sql-updates}:/modules-sql
- mysql-data:/var/lib/mysql-persistent
- ${STORAGE_PATH}/modules:/modules
- ${BACKUP_PATH}:/backups
- ./scripts/bash/db-import-conditional.sh:/tmp/db-import-conditional.sh:ro
- ./scripts/bash/seed-dbimport-conf.sh:/tmp/seed-dbimport-conf.sh:ro
- ./scripts/bash/restore-and-stage.sh:/tmp/restore-and-stage.sh:ro
- ./scripts/bash/db-guard.sh:/tmp/db-guard.sh:ro
environment:
@@ -334,7 +325,7 @@ services:
profiles: ["client-data", "client-data-bots"]
image: ${ALPINE_IMAGE}
container_name: ac-volume-init
user: "0:0"
user: "${CONTAINER_USER}"
volumes:
- ${CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/azerothcore/data
- client-data-cache:/cache
@@ -360,11 +351,10 @@ services:
profiles: ["db", "modules"]
image: ${ALPINE_IMAGE}
container_name: ac-storage-init
user: "0:0"
user: "${CONTAINER_USER}"
volumes:
- ${STORAGE_PATH}:/storage-root
- ${STORAGE_PATH_LOCAL}:/local-storage-root
- ./scripts/bash/seed-dbimport-conf.sh:/tmp/seed-dbimport-conf.sh:ro
command:
- sh
- -c
@@ -374,48 +364,11 @@ services:
mkdir -p /storage-root/config/mysql/conf.d
mkdir -p /storage-root/client-data
mkdir -p /storage-root/backups

# Copy core AzerothCore config template files (.dist) to config directory
echo "📄 Copying AzerothCore configuration templates..."
SOURCE_DIR="${SOURCE_DIR:-/local-storage-root/source/azerothcore-playerbots}"
if [ ! -d "$SOURCE_DIR" ] && [ -d "/local-storage-root/source/azerothcore-wotlk" ]; then
SOURCE_DIR="/local-storage-root/source/azerothcore-wotlk"
# Copy core config files if they don't exist
if [ -f "/local-storage-root/source/azerothcore-playerbots/src/tools/dbimport/dbimport.conf.dist" ] && [ ! -f "/storage-root/config/dbimport.conf.dist" ]; then
echo "📄 Copying dbimport.conf.dist..."
cp /local-storage-root/source/azerothcore-playerbots/src/tools/dbimport/dbimport.conf.dist /storage-root/config/
fi

# Seed dbimport.conf with a shared helper (fallback to a simple copy if missing)
if [ -f "/tmp/seed-dbimport-conf.sh" ]; then
echo "🧩 Seeding dbimport.conf"
DBIMPORT_CONF_DIR="/storage-root/config" \
DBIMPORT_SOURCE_ROOT="$SOURCE_DIR" \
sh -c '. /tmp/seed-dbimport-conf.sh && seed_dbimport_conf' || true
else
if [ -f "$SOURCE_DIR/src/tools/dbimport/dbimport.conf.dist" ]; then
cp -n "$SOURCE_DIR/src/tools/dbimport/dbimport.conf.dist" /storage-root/config/ 2>/dev/null || true
if [ ! -f "/storage-root/config/dbimport.conf" ]; then
cp "$SOURCE_DIR/src/tools/dbimport/dbimport.conf.dist" /storage-root/config/dbimport.conf
echo " ✓ Created dbimport.conf"
fi
fi
fi

# Copy authserver.conf.dist
if [ -f "$SOURCE_DIR/env/dist/etc/authserver.conf.dist" ]; then
cp -n "$SOURCE_DIR/env/dist/etc/authserver.conf.dist" /storage-root/config/ 2>/dev/null || true
if [ ! -f "/storage-root/config/authserver.conf" ]; then
cp "$SOURCE_DIR/env/dist/etc/authserver.conf.dist" /storage-root/config/authserver.conf
echo " ✓ Created authserver.conf"
fi
fi

# Copy worldserver.conf.dist
if [ -f "$SOURCE_DIR/env/dist/etc/worldserver.conf.dist" ]; then
cp -n "$SOURCE_DIR/env/dist/etc/worldserver.conf.dist" /storage-root/config/ 2>/dev/null || true
if [ ! -f "/storage-root/config/worldserver.conf" ]; then
cp "$SOURCE_DIR/env/dist/etc/worldserver.conf.dist" /storage-root/config/worldserver.conf
echo " ✓ Created worldserver.conf"
fi
fi
mkdir -p /storage-root/config/temp
# Fix ownership of root directories and all contents
if [ "$(id -u)" -eq 0 ]; then
chown -R ${CONTAINER_USER} /storage-root /local-storage-root
@@ -525,7 +478,7 @@ services:
ports:
- "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
restart: unless-stopped
logging: *logging-default
logging:
networks:
- azerothcore
volumes:
@@ -557,7 +510,7 @@ services:
AC_UPDATES_ENABLE_DATABASES: "7"
AC_BIND_IP: "0.0.0.0"
AC_DATA_DIR: "/azerothcore/data"
AC_SOAP_PORT: "${SOAP_PORT}"
AC_SOAP_PORT: "7878"
AC_PROCESS_PRIORITY: "0"
AC_ELUNA_ENABLED: "${AC_ELUNA_ENABLED}"
AC_ELUNA_TRACE_BACK: "${AC_ELUNA_TRACE_BACK}"
@@ -580,7 +533,7 @@ services:
- ${STORAGE_PATH}/modules:/azerothcore/modules
- ${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts
restart: unless-stopped
logging: *logging-default
logging:
networks:
- azerothcore
cap_add: ["SYS_NICE"]
@@ -618,7 +571,11 @@ services:
ports:
- "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
restart: unless-stopped
logging: *logging-default
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
networks:
- azerothcore
volumes:
@@ -654,7 +611,7 @@ services:
ports:
- "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
restart: unless-stopped
logging: *logging-default
logging:
networks:
- azerothcore
volumes:
@@ -688,7 +645,7 @@ services:
AC_UPDATES_ENABLE_DATABASES: "7"
AC_BIND_IP: "0.0.0.0"
AC_DATA_DIR: "/azerothcore/data"
AC_SOAP_PORT: "${SOAP_PORT}"
AC_SOAP_PORT: "7878"
AC_PROCESS_PRIORITY: "0"
AC_ELUNA_ENABLED: "${AC_ELUNA_ENABLED}"
AC_ELUNA_TRACE_BACK: "${AC_ELUNA_TRACE_BACK}"
@@ -712,7 +669,7 @@ services:
- ${STORAGE_PATH}/modules:/azerothcore/modules
- ${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts
restart: unless-stopped
logging: *logging-default
logging:
networks:
- azerothcore
cap_add: ["SYS_NICE"]
@@ -744,7 +701,7 @@ services:
AC_UPDATES_ENABLE_DATABASES: "7"
AC_BIND_IP: "0.0.0.0"
AC_DATA_DIR: "/azerothcore/data"
AC_SOAP_PORT: "${SOAP_PORT}"
AC_SOAP_PORT: "7878"
AC_PROCESS_PRIORITY: "0"
AC_ELUNA_ENABLED: "${AC_ELUNA_ENABLED}"
AC_ELUNA_TRACE_BACK: "${AC_ELUNA_TRACE_BACK}"
@@ -769,7 +726,11 @@ services:
- "${WORLD_EXTERNAL_PORT}:${WORLD_PORT}"
- "${SOAP_EXTERNAL_PORT}:${SOAP_PORT}"
restart: unless-stopped
logging: *logging-default
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
cap_add: ["SYS_NICE"]
healthcheck:
test: ["CMD", "sh", "-c", "ps aux | grep '[w]orldserver' | grep -v grep || exit 1"]
@@ -858,10 +819,8 @@ services:
- |
apk add --no-cache bash curl docker-cli su-exec
chmod +x /tmp/scripts/bash/auto-post-install.sh 2>/dev/null || true
echo "📥 Running post-install as root (testing mode)"
mkdir -p /install-markers
chown -R ${CONTAINER_USER} /azerothcore/config /install-markers 2>/dev/null || true
bash /tmp/scripts/bash/auto-post-install.sh
echo "📥 Running post-install as ${CONTAINER_USER}"
su-exec ${CONTAINER_USER} bash /tmp/scripts/bash/auto-post-install.sh
restart: "no"
networks:
- azerothcore
@@ -918,7 +877,7 @@ services:
timeout: 10s
retries: 3
start_period: 40s
logging: *logging-default
logging:
security_opt:
- no-new-privileges:true
networks:
@@ -1,261 +0,0 @@
# Generated by azerothcore-rm/setup.sh

# Compose overrides (set to 1 to include matching file under compose-overrides/)
# mysql-expose.yml -> exposes MySQL externally via COMPOSE_OVERRIDE_MYSQL_EXPOSE_ENABLED
# worldserver-debug-logging.yml -> raises log verbosity via COMPOSE_OVERRIDE_WORLDSERVER_DEBUG_LOGGING_ENABLED
COMPOSE_OVERRIDE_MYSQL_EXPOSE_ENABLED=0
COMPOSE_OVERRIDE_WORLDSERVER_DEBUG_LOGGING_ENABLED=0

COMPOSE_PROJECT_NAME=azerothcore-stack

STORAGE_PATH=/nfs/azerothcore
STORAGE_PATH_LOCAL=./local-storage
BACKUP_PATH=${STORAGE_PATH}/backups
TZ=America/New_York

# Database
MYSQL_IMAGE=mysql:8.0
MYSQL_ROOT_PASSWORD=azerothcore123
MYSQL_ROOT_HOST=%
MYSQL_USER=root
MYSQL_PORT=3306
MYSQL_EXTERNAL_PORT=64306
MYSQL_DISABLE_BINLOG=1
MYSQL_CONFIG_DIR=${STORAGE_PATH}/config/mysql/conf.d
MYSQL_CHARACTER_SET=utf8mb4
MYSQL_COLLATION=utf8mb4_unicode_ci
MYSQL_MAX_CONNECTIONS=1000
MYSQL_INNODB_BUFFER_POOL_SIZE=256M
MYSQL_INNODB_LOG_FILE_SIZE=64M
MYSQL_INNODB_REDO_LOG_CAPACITY=512M
MYSQL_RUNTIME_TMPFS_SIZE=8G
MYSQL_HOST=ac-mysql
DB_WAIT_RETRIES=60
DB_WAIT_SLEEP=10
DB_AUTH_NAME=acore_auth
DB_WORLD_NAME=acore_world
DB_CHARACTERS_NAME=acore_characters
DB_PLAYERBOTS_NAME=acore_playerbots
AC_DB_IMPORT_IMAGE=azerothcore-stack:db-import-playerbots

# Services (images)
AC_AUTHSERVER_IMAGE=acore/ac-wotlk-authserver:master
AC_WORLDSERVER_IMAGE=acore/ac-wotlk-worldserver:master
AC_AUTHSERVER_IMAGE_PLAYERBOTS=azerothcore-stack:authserver-playerbots
AC_WORLDSERVER_IMAGE_PLAYERBOTS=azerothcore-stack:worldserver-playerbots
AC_AUTHSERVER_IMAGE_MODULES=azerothcore-stack:authserver-modules-latest
AC_WORLDSERVER_IMAGE_MODULES=azerothcore-stack:worldserver-modules-latest

# Client data images
AC_CLIENT_DATA_IMAGE=acore/ac-wotlk-client-data:master
AC_CLIENT_DATA_IMAGE_PLAYERBOTS=azerothcore-stack:client-data-playerbots
CLIENT_DATA_CACHE_PATH=${STORAGE_PATH_LOCAL}/client-data-cache
CLIENT_DATA_PATH=${STORAGE_PATH}/client-data

# Build artifacts
DOCKER_IMAGE_TAG=master
AC_AUTHSERVER_IMAGE_BASE=acore/ac-wotlk-authserver
AC_WORLDSERVER_IMAGE_BASE=acore/ac-wotlk-worldserver
AC_DB_IMPORT_IMAGE_BASE=acore/ac-wotlk-db-import
AC_CLIENT_DATA_IMAGE_BASE=acore/ac-wotlk-client-data

# Container user
CONTAINER_USER=1001:1000

# Containers
CONTAINER_MYSQL=ac-mysql
CONTAINER_DB_IMPORT=ac-db-import
CONTAINER_DB_INIT=ac-db-init
CONTAINER_BACKUP=ac-backup
CONTAINER_MODULES=ac-modules
CONTAINER_POST_INSTALL=ac-post-install

# Ports
AUTH_EXTERNAL_PORT=3784
AUTH_PORT=3724
WORLD_EXTERNAL_PORT=8215
WORLD_PORT=8085
SOAP_EXTERNAL_PORT=7778
SOAP_PORT=7878

# Realm
SERVER_ADDRESS=192.168.0.179
REALM_PORT=8215

# Backups
BACKUP_RETENTION_DAYS=3
BACKUP_RETENTION_HOURS=6
BACKUP_DAILY_TIME=09
BACKUP_HEALTHCHECK_MAX_MINUTES=1440
BACKUP_HEALTHCHECK_GRACE_SECONDS=4500

# Modules
MODULE_PLAYERBOTS=1
MODULE_AOE_LOOT=0
MODULE_LEARN_SPELLS=1
MODULE_FIREWORKS=1
MODULE_INDIVIDUAL_PROGRESSION=0
MODULE_AHBOT=0
MODULE_AUTOBALANCE=0
MODULE_TRANSMOG=1
MODULE_NPC_BUFFER=1
MODULE_DYNAMIC_XP=0
MODULE_SOLO_LFG=1
MODULE_1V1_ARENA=1
MODULE_PHASED_DUELS=0
MODULE_BREAKING_NEWS=1
MODULE_BOSS_ANNOUNCER=1
MODULE_ACCOUNT_ACHIEVEMENTS=1
MODULE_AUTO_REVIVE=1
MODULE_GAIN_HONOR_GUARD=1
MODULE_ELUNA=1
MODULE_TIME_IS_TIME=1
MODULE_POCKET_PORTAL=0
MODULE_RANDOM_ENCHANTS=1
MODULE_SOLOCRAFT=1
MODULE_PVP_TITLES=0
MODULE_NPC_BEASTMASTER=1
MODULE_NPC_ENCHANTER=1
MODULE_INSTANCE_RESET=1
MODULE_LEVEL_GRANT=0
MODULE_ARAC=1
MODULE_ASSISTANT=1
MODULE_REAGENT_BANK=1
MODULE_BLACK_MARKET_AUCTION_HOUSE=1
MODULE_CHALLENGE_MODES=0
MODULE_OLLAMA_CHAT=0
MODULE_PLAYER_BOT_LEVEL_BRACKETS=0
MODULE_STATBOOSTER=0
MODULE_DUNGEON_RESPAWN=0
MODULE_SKELETON_MODULE=0
MODULE_BG_SLAVERYVALLEY=0
MODULE_AZEROTHSHARD=0
MODULE_WORGOBLIN=0
MODULE_ELUNA_TS=1
MODULE_AIO=1
MODULE_ELUNA_SCRIPTS=1
MODULE_TRANSMOG_AIO=0
MODULE_EVENT_SCRIPTS=1
MODULE_LEVEL_UP_REWARD=0
MODULE_ACCOUNTWIDE_SYSTEMS=0
MODULE_EXCHANGE_NPC=0
MODULE_RECRUIT_A_FRIEND=0
MODULE_PRESTIGE_DRAFT_MODE=0
MODULE_LUA_AH_BOT=0
MODULE_HARDCORE_MODE=0
MODULE_NPCBOT_EXTENDED_COMMANDS=0
MODULE_MULTIVENDOR=0
MODULE_TREASURE_CHEST_SYSTEM=0
MODULE_ACTIVE_CHAT=1
MODULE_ULTIMATE_FULL_LOOT_PVP=0
MODULE_HORADRIC_CUBE=0
MODULE_CARBON_COPY=0
MODULE_TEMP_ANNOUNCEMENTS=0
MODULE_ZONE_CHECK=0
MODULE_AIO_BLACKJACK=0
MODULE_SEND_AND_BIND=0
MODULE_DYNAMIC_TRADER=0
MODULE_LOTTERY_LUA=0
MODULE_DISCORD_NOTIFIER=0
MODULE_GLOBAL_MAIL_BANKING_AUCTIONS=0
MODULE_GUILDHOUSE=1
MODULE_PROGRESSION_SYSTEM=0
MODULE_NPC_FREE_PROFESSIONS=1
MODULE_DUEL_RESET=0
MODULE_ZONE_DIFFICULTY=0
MODULE_MORPHSUMMON=1
MODULE_SPELL_REGULATOR=0
MODULE_WEEKEND_XP=0
MODULE_REWARD_PLAYED_TIME=0
MODULE_RESURRECTION_SCROLL=0
MODULE_ITEM_LEVEL_UP=1
MODULE_NPC_TALENT_TEMPLATE=0
MODULE_GLOBAL_CHAT=1
MODULE_PREMIUM=0
MODULE_SYSTEM_VIP=0
MODULE_ACORE_SUBSCRIPTIONS=0
MODULE_KEEP_OUT=0
MODULE_SERVER_AUTO_SHUTDOWN=0
MODULE_WHO_LOGGED=0
MODULE_ACCOUNT_MOUNTS=0
MODULE_ANTIFARMING=0
MODULE_ARENA_REPLAY=0
MODULE_TIC_TAC_TOE=0
MODULE_WAR_EFFORT=0
MODULE_PROMOTION_AZEROTHCORE=0

# Client data
CLIENT_DATA_VERSION=

# Server configuration
SERVER_CONFIG_PRESET=none

# Playerbot runtime
PLAYERBOT_ENABLED=1
PLAYERBOT_MIN_BOTS=2000
PLAYERBOT_MAX_BOTS=4000
STACK_IMAGE_MODE=playerbots
STACK_SOURCE_VARIANT=playerbots
MODULES_ENABLED_LIST=MODULE_PLAYERBOTS,MODULE_LEARN_SPELLS,MODULE_FIREWORKS,MODULE_TRANSMOG,MODULE_NPC_BUFFER,MODULE_SOLO_LFG,MODULE_1V1_ARENA,MODULE_BREAKING_NEWS,MODULE_BOSS_ANNOUNCER,MODULE_ACCOUNT_ACHIEVEMENTS,MODULE_AUTO_REVIVE,MODULE_GAIN_HONOR_GUARD,MODULE_ELUNA,MODULE_TIME_IS_TIME,MODULE_RANDOM_ENCHANTS,MODULE_SOLOCRAFT,MODULE_NPC_BEASTMASTER,MODULE_NPC_ENCHANTER,MODULE_INSTANCE_RESET,MODULE_ARAC,MODULE_ASSISTANT,MODULE_REAGENT_BANK,MODULE_BLACK_MARKET_AUCTION_HOUSE,MODULE_STATBOOSTER,MODULE_ELUNA_TS,MODULE_AIO,MODULE_ELUNA_SCRIPTS,MODULE_EVENT_SCRIPTS,MODULE_ACTIVE_CHAT,MODULE_GUILDHOUSE,MODULE_NPC_FREE_PROFESSIONS,MODULE_MORPHSUMMON,MODULE_ITEM_LEVEL_UP,MODULE_GLOBAL_CHAT
MODULES_CPP_LIST=MODULE_LEARN_SPELLS,MODULE_FIREWORKS,MODULE_TRANSMOG,MODULE_NPC_BUFFER,MODULE_SOLO_LFG,MODULE_1V1_ARENA,MODULE_BREAKING_NEWS,MODULE_BOSS_ANNOUNCER,MODULE_ACCOUNT_ACHIEVEMENTS,MODULE_AUTO_REVIVE,MODULE_GAIN_HONOR_GUARD,MODULE_ELUNA,MODULE_TIME_IS_TIME,MODULE_RANDOM_ENCHANTS,MODULE_SOLOCRAFT,MODULE_NPC_BEASTMASTER,MODULE_NPC_ENCHANTER,MODULE_INSTANCE_RESET,MODULE_ARAC,MODULE_ASSISTANT,MODULE_REAGENT_BANK,MODULE_STATBOOSTER,MODULE_AIO,MODULE_GUILDHOUSE,MODULE_NPC_FREE_PROFESSIONS,MODULE_MORPHSUMMON,MODULE_ITEM_LEVEL_UP,MODULE_GLOBAL_CHAT
MODULES_REQUIRES_CUSTOM_BUILD=1
MODULES_REQUIRES_PLAYERBOT_SOURCE=1

# Rebuild automation
AUTO_REBUILD_ON_DEPLOY=0
MODULES_REBUILD_SOURCE_PATH=./local-storage/source/azerothcore-playerbots

# Eluna
AC_ELUNA_ENABLED=1 # Power users may set to 0 to turn off bundled Eluna runtime
AC_ELUNA_TRACE_BACK=1
AC_ELUNA_AUTO_RELOAD=1
AC_ELUNA_BYTECODE_CACHE=1
AC_ELUNA_SCRIPT_PATH=lua_scripts
AC_ELUNA_REQUIRE_PATHS=
AC_ELUNA_REQUIRE_CPATHS=
AC_ELUNA_AUTO_RELOAD_INTERVAL=1

# Tools
PMA_HOST=ac-mysql
PMA_PORT=3306
PMA_USER=root
PMA_EXTERNAL_PORT=8081
PMA_ARBITRARY=1
PMA_ABSOLUTE_URI=
PMA_UPLOAD_LIMIT=300M
PMA_MEMORY_LIMIT=512M
PMA_MAX_EXECUTION_TIME=600
KEIRA3_EXTERNAL_PORT=4201
KEIRA_DATABASE_HOST=ac-mysql
KEIRA_DATABASE_PORT=3306

# Health checks
MYSQL_HEALTHCHECK_INTERVAL=20s
MYSQL_HEALTHCHECK_TIMEOUT=15s
MYSQL_HEALTHCHECK_RETRIES=25
MYSQL_HEALTHCHECK_START_PERIOD=120s
AUTH_HEALTHCHECK_INTERVAL=30s
AUTH_HEALTHCHECK_TIMEOUT=10s
AUTH_HEALTHCHECK_RETRIES=3
AUTH_HEALTHCHECK_START_PERIOD=60s
WORLD_HEALTHCHECK_INTERVAL=30s
WORLD_HEALTHCHECK_TIMEOUT=10s
WORLD_HEALTHCHECK_RETRIES=3
WORLD_HEALTHCHECK_START_PERIOD=120s
BACKUP_HEALTHCHECK_INTERVAL=60s
BACKUP_HEALTHCHECK_TIMEOUT=30s
BACKUP_HEALTHCHECK_RETRIES=3
BACKUP_HEALTHCHECK_START_PERIOD=120s

# Networking
NETWORK_NAME=azerothcore
NETWORK_SUBNET=172.20.0.0/16
NETWORK_GATEWAY=172.20.0.1

# Storage helpers
HOST_ZONEINFO_PATH=/usr/share/zoneinfo

# Helper images
ALPINE_GIT_IMAGE=alpine/git:latest
ALPINE_IMAGE=alpine:latest
@@ -3,8 +3,6 @@
**Last Updated:** 2025-11-14
**Status:** ✅ All blocked modules properly disabled

**Note:** This summary is historical. The authoritative block list lives in `config/module-manifest.json` (currently 94 modules marked `status: "blocked"`). This file and `docs/DISABLED_MODULES.md` should be reconciled during the next blocklist refresh.

---

## Summary

@@ -4,8 +4,6 @@ This document tracks modules that have been disabled due to compilation errors o

**Last Updated:** 2025-11-14

**Note:** Historical snapshot. The current authoritative status for disabled/blocked modules is `status: "blocked"` in `config/module-manifest.json` (94 entries as of now). Align this file with the manifest during the next maintenance pass.

---

## Disabled Modules
@@ -113,7 +111,7 @@ These modules are blocked in the manifest with known issues:

## Current Working Module Count

**Total in Manifest:** ~93 modules (historical; current manifest: 348 total / 221 supported / 94 blocked)
**Total in Manifest:** ~93 modules
**Enabled:** 89 modules
**Disabled (Build Issues):** 4 modules
**Blocked (Manifest):** 3 modules
@@ -9,7 +9,7 @@ This guide provides a complete walkthrough for deploying AzerothCore RealmMaster

Before you begin, ensure you have:
- **Docker** with Docker Compose
- **16GB+ RAM** and **64GB+ storage**
- **16GB+ RAM** and **32GB+ storage**
- **Linux/macOS/WSL2** (Windows with WSL2 recommended)

## Quick Overview
@@ -40,7 +40,7 @@ cd AzerothCore-RealmMaster

The setup wizard will guide you through:
- **Server Configuration**: IP address, ports, timezone
- **Module Selection**: Choose from hundreds of official modules (348 in manifest; 221 currently supported) or use presets
- **Module Selection**: Choose from 30+ available modules or use presets
- **Module Definitions**: Customize defaults in `config/module-manifest.json` and optional presets under `config/module-profiles/`
- **Storage Paths**: Configure NFS/local storage locations
- **Playerbot Settings**: Max bots, account limits (if enabled)
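
The same choices can also be passed as flags to skip the wizard (flag names as shown in docs/SCRIPTS.md):

```bash
./setup.sh --module-config RealmMaster   # apply a bundled module preset non-interactively
./setup.sh --playerbot-max-bots 3000     # override playerbot limits
```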
@@ -170,8 +170,6 @@ Optional flags:
- `--remote-port 2222` - Custom SSH port
- `--remote-identity ~/.ssh/custom_key` - Specific SSH key
- `--remote-skip-storage` - Don't sync storage directory (fresh install on remote)
- `--remote-storage-path /mnt/acore-storage` - Override STORAGE_PATH on the remote host (local-storage stays per .env)
- `--remote-container-user 1001:1001` - Override CONTAINER_USER on the remote host (uid:gid)

### Step 3: Deploy on Remote Host
```bash
@@ -199,6 +197,8 @@ The remote deployment process transfers:

### Module Presets

> **⚠️ Warning:** Module preset support is still in progress. The bundled presets have not been fully tested yet—please share issues or suggestions via Discord (`uprightbass360`).

- Define JSON presets in `config/module-profiles/*.json` (a minimal example is sketched after this list). Each file contains:
- `modules` (array, required) – list of `MODULE_*` identifiers to enable.
- `label` (string, optional) – text shown in the setup menu (emoji welcome).
@@ -216,11 +216,11 @@ The remote deployment process transfers:
```
- `setup.sh` automatically adds these presets to the module menu and enables the listed modules when selected or when `--module-config <name>` is provided.
- Built-in presets:
- - `config/module-profiles/RealmMaster.json` – 33-module baseline used for testing.
- - `config/module-profiles/suggested-modules.json` – default solo-friendly QoL stack.
- - `config/module-profiles/playerbots-suggested-modules.json` – suggested stack plus playerbots.
- - `config/module-profiles/playerbots-only.json` – playerbot-focused profile (adjust `--playerbot-max-bots`).
- - `config/module-profiles/all-modules.json` – enable everything currently marked supported/active.
- `config/module-profiles/suggested-modules.json` – default solo-friendly QoL stack.
- `config/module-profiles/playerbots-suggested-modules.json` – suggested stack plus playerbots.
- `config/module-profiles/playerbots-only.json` – playerbot-focused profile (adjust `--playerbot-max-bots`).
- Custom example:
- `config/module-profiles/sam.json` – Sam's playerbot-focused profile (set `--playerbot-max-bots 3000` when using this preset).
- Module metadata lives in `config/module-manifest.json`; update that file if you need to add new modules or change repositories/branches.
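
For illustration, a minimal custom preset might look like this (file name and module selection are hypothetical; field meanings as described above):

```bash
# Hypothetical preset file: `modules` is required, `label` is optional.
cat > config/module-profiles/my-preset.json <<'EOF'
{
  "label": "🧪 My test preset",
  "modules": ["MODULE_ELUNA", "MODULE_TRANSMOG", "MODULE_SOLO_LFG"]
}
EOF
./setup.sh --module-config my-preset
```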

---
@@ -4,7 +4,7 @@ This document provides a comprehensive overview of all available modules in the

## Overview

AzerothCore RealmMaster currently ships a manifest of **348 modules** (221 marked supported/active). The default RealmMaster preset enables 33 of these for day-to-day testing. All modules are automatically downloaded, configured, and SQL scripts executed when enabled. Modules are organized into logical categories for easy browsing and selection.
AzerothCore RealmMaster includes **93 modules** that are automatically downloaded, configured, and SQL scripts executed when enabled. All modules are organized into logical categories for easy browsing and selection.

## How Modules Work

@@ -261,4 +261,4 @@ Modules are categorized by type:

For detailed setup and deployment instructions, see the main [README.md](../README.md) file.

For technical details about module management and the build system, refer to the [Architecture Overview](../README.md#architecture-overview) section.
For technical details about module management and the build system, refer to the [Architecture Overview](../README.md#architecture-overview) section.
@@ -6,8 +6,6 @@ This document tracks all modules that have been disabled due to compilation fail

**Total Blocked Modules:** 93

**Note:** Historical snapshot from 2025-11-22 validation. The current authoritative count lives in `config/module-manifest.json` (94 modules marked `status: "blocked"`). Update this file when reconciling the manifest.

---

## Compilation Errors

@@ -3,8 +3,6 @@
**Date:** 2025-11-14
**Status:** ✅ PRE-DEPLOYMENT TESTS PASSED

**Note:** Historical record for the 2025-11-14 run. Counts here reflect that test set (93 modules). The current manifest contains 348 modules, 221 marked supported/active, and the RealmMaster preset exercises 33 modules.

---

## Test Execution Summary
@@ -33,7 +31,7 @@
**Verified:**
- Environment file present
- Module configuration loaded
- 93 modules enabled for testing in this run (current manifest: 348 total / 221 supported; RealmMaster preset: 33)
- 93 modules enabled for testing

### Test 2: Module Manifest Validation ✅
```bash
@@ -141,7 +139,7 @@ MODULES_ENABLED="mod-playerbots mod-aoe-loot ..."

**What Gets Built:**
- AzerothCore with playerbots branch
- 93 modules compiled and integrated in this run (current manifest: 348 total / 221 supported)
- 93 modules compiled and integrated
- Custom Docker images: `acore-compose:worldserver-modules-latest` etc.

### Deployment Status: READY TO DEPLOY 🚀
@@ -263,7 +261,7 @@ docker exec ac-mysql mysql -uroot -p[password] acore_world \
- **Bash:** 5.0+
- **Python:** 3.x
- **Docker:** Available
- **Modules Enabled:** 93 (historical run)
- **Modules Enabled:** 93
- **Test Date:** 2025-11-14

---

@@ -23,7 +23,7 @@ Interactive `.env` generator with module selection, server configuration, and de

```bash
./setup.sh # Interactive configuration
./setup.sh --module-config RealmMaster # Use predefined module profile, check profiles directory
./setup.sh --module-config sam # Use predefined module profile, check profiles directory
./setup.sh --playerbot-max-bots 3000 # Set playerbot limits
```

@@ -4,17 +4,8 @@ set -euo pipefail

INVOCATION_DIR="$PWD"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$SCRIPT_DIR"

# Load environment defaults if present
if [ -f "$PROJECT_ROOT/.env" ]; then
set -a
# shellcheck disable=SC1091
source "$PROJECT_ROOT/.env"
set +a
fi

SUPPORTED_DBS=(auth characters world)
declare -A SUPPORTED_SET=()
for db in "${SUPPORTED_DBS[@]}"; do
@@ -25,12 +16,10 @@ declare -A DB_NAMES=([auth]="" [characters]="" [world]="")
declare -a INCLUDE_DBS=()
declare -a SKIP_DBS=()

MYSQL_PW="${MYSQL_ROOT_PASSWORD:-}"
MYSQL_PW=""
DEST_PARENT=""
DEST_PROVIDED=false
EXPLICIT_SELECTION=false
MYSQL_CONTAINER="${CONTAINER_MYSQL:-ac-mysql}"
DEFAULT_BACKUP_DIR="${BACKUP_PATH:-${STORAGE_PATH:-./storage}/backups}"

usage(){
cat <<'EOF'
@@ -39,7 +28,7 @@ Usage: ./backup-export.sh [options]
Creates a timestamped backup of one or more ACore databases.

Options:
-o, --output DIR Destination directory (default: BACKUP_PATH from .env, fallback: ./storage/backups)
-o, --output DIR Destination directory (default: storage/backups)
-p, --password PASS MySQL root password
--auth-db NAME Auth database schema name
--characters-db NAME Characters database schema name
@@ -235,9 +224,13 @@ done
if $DEST_PROVIDED; then
DEST_PARENT="$(resolve_relative "$INVOCATION_DIR" "$DEST_PARENT")"
else
DEFAULT_BACKUP_DIR="$(resolve_relative "$PROJECT_ROOT" "$DEFAULT_BACKUP_DIR")"
DEST_PARENT="$DEFAULT_BACKUP_DIR"
mkdir -p "$DEST_PARENT"
# Use storage/backups as default to align with existing backup structure
if [ -d "$SCRIPT_DIR/storage" ]; then
DEST_PARENT="$SCRIPT_DIR/storage/backups"
mkdir -p "$DEST_PARENT"
else
DEST_PARENT="$SCRIPT_DIR"
fi
fi

TIMESTAMP="$(date +%Y%m%d_%H%M%S)"
@@ -248,7 +241,7 @@ generated_at="$(date --iso-8601=seconds)"
dump_db(){
local schema="$1" outfile="$2"
echo "Dumping ${schema} -> ${outfile}"
docker exec "$MYSQL_CONTAINER" mysqldump -uroot -p"$MYSQL_PW" "$schema" | gzip > "$outfile"
docker exec ac-mysql mysqldump -uroot -p"$MYSQL_PW" "$schema" | gzip > "$outfile"
}

for db in "${ACTIVE_DBS[@]}"; do

@@ -24,34 +24,6 @@ STATUS_FILE="${DB_GUARD_STATUS_FILE:-/tmp/db-guard.status}"
ERROR_FILE="${DB_GUARD_ERROR_FILE:-/tmp/db-guard.error}"
MODULE_SQL_HOST_PATH="${MODULE_SQL_HOST_PATH:-/modules-sql}"

SEED_CONF_SCRIPT="${SEED_DBIMPORT_CONF_SCRIPT:-/tmp/seed-dbimport-conf.sh}"
if [ -f "$SEED_CONF_SCRIPT" ]; then
# shellcheck source=/dev/null
. "$SEED_CONF_SCRIPT"
elif ! command -v seed_dbimport_conf >/dev/null 2>&1; then
seed_dbimport_conf(){
local conf="/azerothcore/env/dist/etc/dbimport.conf"
local dist="${conf}.dist"
mkdir -p "$(dirname "$conf")"
[ -f "$conf" ] && return 0
if [ -f "$dist" ]; then
cp "$dist" "$conf"
else
warn "dbimport.conf missing and no dist available; writing minimal defaults"
cat > "$conf" <<EOF
LoginDatabaseInfo = "localhost;3306;root;root;acore_auth"
WorldDatabaseInfo = "localhost;3306;root;root;acore_world"
CharacterDatabaseInfo = "localhost;3306;root;root;acore_characters"
PlayerbotsDatabaseInfo = "localhost;3306;root;root;acore_playerbots"
EnableDatabases = 15
Updates.AutoSetup = 1
MySQLExecutable = "/usr/bin/mysql"
TempDir = "/azerothcore/env/dist/etc/temp"
EOF
fi
}
fi

declare -a DB_SCHEMAS=()
for var in DB_AUTH_NAME DB_WORLD_NAME DB_CHARACTERS_NAME DB_PLAYERBOTS_NAME; do
value="${!var:-}"
@@ -113,6 +85,15 @@ rehydrate(){
"$IMPORT_SCRIPT"
}

ensure_dbimport_conf(){
local conf="/azerothcore/env/dist/etc/dbimport.conf"
local dist="${conf}.dist"
if [ ! -f "$conf" ] && [ -f "$dist" ]; then
cp "$dist" "$conf"
fi
mkdir -p /azerothcore/env/dist/temp
}

sync_host_stage_files(){
local host_root="${MODULE_SQL_HOST_PATH}"
[ -d "$host_root" ] || return 0
@@ -129,7 +110,7 @@ sync_host_stage_files(){

dbimport_verify(){
local bin_dir="/azerothcore/env/dist/bin"
seed_dbimport_conf
ensure_dbimport_conf
sync_host_stage_files
if [ ! -x "${bin_dir}/dbimport" ]; then
warn "dbimport binary not found at ${bin_dir}/dbimport"

@@ -32,22 +32,6 @@ SHOW_PENDING=0
SHOW_MODULES=1
CONTAINER_NAME="ac-mysql"

resolve_path(){
local base="$1" path="$2"
if command -v python3 >/dev/null 2>&1; then
python3 - "$base" "$path" <<'PY'
import os, sys
base, path = sys.argv[1:3]
if os.path.isabs(path):
print(os.path.normpath(path))
else:
print(os.path.normpath(os.path.join(base, path)))
PY
else
(cd "$base" && realpath -m "$path")
fi
}

usage() {
cat <<'EOF'
Usage: ./db-health-check.sh [options]
@@ -89,10 +73,6 @@ if [ -f "$PROJECT_ROOT/.env" ]; then
set +a
fi

BACKUP_PATH_RAW="${BACKUP_PATH:-${STORAGE_PATH:-./storage}/backups}"
BACKUP_PATH="$(resolve_path "$PROJECT_ROOT" "$BACKUP_PATH_RAW")"
CONTAINER_NAME="${CONTAINER_MYSQL:-$CONTAINER_NAME}"

MYSQL_HOST="${MYSQL_HOST:-ac-mysql}"
MYSQL_PORT="${MYSQL_PORT:-3306}"
MYSQL_USER="${MYSQL_USER:-root}"
@@ -283,7 +263,7 @@ show_module_updates() {

# Get backup information
get_backup_info() {
local backup_dir="$BACKUP_PATH"
local backup_dir="$PROJECT_ROOT/storage/backups"

if [ ! -d "$backup_dir" ]; then
printf " ${ICON_INFO} No backups directory found\n"

@@ -81,6 +81,15 @@ wait_for_mysql(){
|
||||
return 1
|
||||
}
|
||||
|
||||
ensure_dbimport_conf(){
|
||||
local conf="/azerothcore/env/dist/etc/dbimport.conf"
|
||||
local dist="${conf}.dist"
|
||||
if [ ! -f "$conf" ] && [ -f "$dist" ]; then
|
||||
cp "$dist" "$conf"
|
||||
fi
|
||||
mkdir -p /azerothcore/env/dist/temp
|
||||
}
|
||||
|
||||
case "${1:-}" in
|
||||
-h|--help)
|
||||
print_help
|
||||
@@ -97,34 +106,6 @@ esac
|
||||
echo "🔧 Conditional AzerothCore Database Import"
|
||||
echo "========================================"
|
||||
|
||||
SEED_CONF_SCRIPT="${SEED_DBIMPORT_CONF_SCRIPT:-/tmp/seed-dbimport-conf.sh}"
|
||||
if [ -f "$SEED_CONF_SCRIPT" ]; then
|
||||
# shellcheck source=/dev/null
|
||||
. "$SEED_CONF_SCRIPT"
|
||||
elif ! command -v seed_dbimport_conf >/dev/null 2>&1; then
|
||||
seed_dbimport_conf(){
|
||||
local conf="/azerothcore/env/dist/etc/dbimport.conf"
|
||||
local dist="${conf}.dist"
|
||||
mkdir -p "$(dirname "$conf")"
|
||||
[ -f "$conf" ] && return 0
|
||||
if [ -f "$dist" ]; then
|
||||
cp "$dist" "$conf"
|
||||
else
|
||||
echo "⚠️ dbimport.conf missing and no dist available; using localhost defaults" >&2
|
||||
cat > "$conf" <<EOF
|
||||
LoginDatabaseInfo = "localhost;3306;root;root;acore_auth"
|
||||
WorldDatabaseInfo = "localhost;3306;root;root;acore_world"
|
||||
CharacterDatabaseInfo = "localhost;3306;root;root;acore_characters"
|
||||
PlayerbotsDatabaseInfo = "localhost;3306;root;root;acore_playerbots"
|
||||
EnableDatabases = 15
|
||||
Updates.AutoSetup = 1
|
||||
MySQLExecutable = "/usr/bin/mysql"
|
||||
TempDir = "/azerothcore/env/dist/etc/temp"
|
||||
EOF
|
||||
fi
|
||||
}
|
||||
fi
|
||||
|
||||
if ! wait_for_mysql; then
|
||||
echo "❌ MySQL service is unavailable; aborting database import"
|
||||
exit 1
|
||||
@@ -177,8 +158,6 @@ echo "🔧 Starting database import process..."
|
||||
|
||||
echo "🔍 Checking for backups to restore..."
|
||||
|
||||
# Allow tolerant scanning; re-enable -e after search.
|
||||
set +e
|
||||
# Define backup search paths in priority order
|
||||
BACKUP_SEARCH_PATHS=(
|
||||
"/backups"
|
||||
@@ -274,16 +253,13 @@ if [ -z "$backup_path" ]; then
|
||||
# Check for manual backups (*.sql files)
|
||||
if [ -z "$backup_path" ]; then
|
||||
echo "🔍 Checking for manual backup files..."
|
||||
latest_manual=""
|
||||
if ls "$BACKUP_DIRS"/*.sql >/dev/null 2>&1; then
|
||||
latest_manual=$(ls -1t "$BACKUP_DIRS"/*.sql | head -n 1)
|
||||
if [ -n "$latest_manual" ] && [ -f "$latest_manual" ]; then
|
||||
echo "📦 Found manual backup: $(basename "$latest_manual")"
|
||||
if timeout 10 head -20 "$latest_manual" >/dev/null 2>&1; then
echo "✅ Valid manual backup file: $(basename "$latest_manual")"
backup_path="$latest_manual"
break
fi
latest_manual=$(ls -1t "$BACKUP_DIRS"/*.sql 2>/dev/null | head -n 1)
if [ -n "$latest_manual" ] && [ -f "$latest_manual" ]; then
echo "📦 Found manual backup: $(basename "$latest_manual")"
if timeout 10 head -20 "$latest_manual" >/dev/null 2>&1; then
echo "✅ Valid manual backup file: $(basename "$latest_manual")"
backup_path="$latest_manual"
break
fi
fi
fi
@@ -296,7 +272,6 @@ if [ -z "$backup_path" ]; then
done
fi

set -e
echo "🔄 Final backup path result: '$backup_path'"
if [ -n "$backup_path" ]; then
echo "📦 Found backup: $(basename "$backup_path")"
@@ -382,7 +357,7 @@ if [ -n "$backup_path" ]; then
return 0
fi

seed_dbimport_conf
ensure_dbimport_conf

cd /azerothcore/env/dist/bin
echo "🔄 Running dbimport to apply any missing updates..."
@@ -449,73 +424,23 @@ fi

echo "🗄️ Creating fresh AzerothCore databases..."
mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "
DROP DATABASE IF EXISTS ${DB_AUTH_NAME};
DROP DATABASE IF EXISTS ${DB_WORLD_NAME};
DROP DATABASE IF EXISTS ${DB_CHARACTERS_NAME};
DROP DATABASE IF EXISTS ${DB_PLAYERBOTS_NAME:-acore_playerbots};
CREATE DATABASE ${DB_AUTH_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_WORLD_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_CHARACTERS_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_PLAYERBOTS_NAME:-acore_playerbots} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS ${DB_AUTH_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS ${DB_WORLD_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS ${DB_CHARACTERS_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS acore_playerbots DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
SHOW DATABASES;" || { echo "❌ Failed to create databases"; exit 1; }
echo "✅ Fresh databases created - proceeding with schema import"

ensure_dbimport_conf

echo "🚀 Running database import..."
cd /azerothcore/env/dist/bin
seed_dbimport_conf

maybe_run_base_import(){
local mysql_host="${CONTAINER_MYSQL:-ac-mysql}"
local mysql_port="${MYSQL_PORT:-3306}"
local mysql_user="${MYSQL_USER:-root}"
local mysql_pass="${MYSQL_ROOT_PASSWORD:-root}"

import_dir(){
local db="$1" dir="$2"
[ -d "$dir" ] || return 0
echo "🔧 Importing base schema for ${db} from $(basename "$dir")..."
for f in $(ls "$dir"/*.sql 2>/dev/null | LC_ALL=C sort); do
MYSQL_PWD="$mysql_pass" mysql -h "$mysql_host" -P "$mysql_port" -u "$mysql_user" "$db" < "$f" >/dev/null 2>&1 || true
done
}

needs_import(){
local db="$1"
local count
count="$(MYSQL_PWD="$mysql_pass" mysql -h "$mysql_host" -P "$mysql_port" -u "$mysql_user" -N -B -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${db}';" 2>/dev/null || echo 0)"
[ "${count:-0}" -eq 0 ] && return 0
local updates
updates="$(MYSQL_PWD="$mysql_pass" mysql -h "$mysql_host" -P "$mysql_port" -u "$mysql_user" -N -B -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${db}' AND table_name='updates';" 2>/dev/null || echo 0)"
[ "${updates:-0}" -eq 0 ]
}

if needs_import "${DB_WORLD_NAME:-acore_world}"; then
import_dir "${DB_WORLD_NAME:-acore_world}" "/azerothcore/data/sql/base/db_world"
fi
if needs_import "${DB_AUTH_NAME:-acore_auth}"; then
import_dir "${DB_AUTH_NAME:-acore_auth}" "/azerothcore/data/sql/base/db_auth"
fi
if needs_import "${DB_CHARACTERS_NAME:-acore_characters}"; then
import_dir "${DB_CHARACTERS_NAME:-acore_characters}" "/azerothcore/data/sql/base/db_characters"
fi
}

maybe_run_base_import
if ./dbimport; then
echo "✅ Database import completed successfully!"
import_marker_msg="$(date): Database import completed successfully"
if [ -w "$RESTORE_STATUS_DIR" ]; then
echo "$import_marker_msg" > "$RESTORE_STATUS_DIR/.import-completed"
elif [ -w "$MARKER_STATUS_DIR" ]; then
echo "$import_marker_msg" > "$MARKER_STATUS_DIR/.import-completed" 2>/dev/null || true
fi
echo "$(date): Database import completed successfully" > "$RESTORE_STATUS_DIR/.import-completed" || echo "$(date): Database import completed successfully" > "$MARKER_STATUS_DIR/.import-completed"
else
echo "❌ Database import failed!"
if [ -w "$RESTORE_STATUS_DIR" ]; then
echo "$(date): Database import failed" > "$RESTORE_STATUS_DIR/.import-failed"
elif [ -w "$MARKER_STATUS_DIR" ]; then
echo "$(date): Database import failed" > "$MARKER_STATUS_DIR/.import-failed" 2>/dev/null || true
fi
echo "$(date): Database import failed" > "$RESTORE_STATUS_DIR/.import-failed" || echo "$(date): Database import failed" > "$MARKER_STATUS_DIR/.import-failed"
exit 1
fi
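Note: needs_import() only triggers a base import when a schema has no tables at all, or has tables but no `updates` tracking table. A minimal sketch of checking that decision by hand, assuming the MySQL container name and credential variables shown in the snippet above (adjust to your .env):

# Hypothetical manual check mirroring needs_import()
MYSQL_PWD="$MYSQL_ROOT_PASSWORD" mysql -h ac-mysql -u root -N -B \
  -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='acore_world';"
MYSQL_PWD="$MYSQL_ROOT_PASSWORD" mysql -h ac-mysql -u root -N -B \
  -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='acore_world' AND table_name='updates';"
# A zero table count, or tables without an 'updates' table, is what makes the script import the base SQL.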
@@ -1,7 +1,7 @@
#!/bin/bash

# Utility to migrate deployment images (and optionally storage) to a remote host.
# Assumes your runtime images have already been built or pulled locally.
# Utility to migrate module images (and optionally storage) to a remote host.
# Assumes module images have already been rebuilt locally.

set -euo pipefail

@@ -41,74 +41,6 @@ resolve_project_image(){
echo "${project_name}:${tag}"
}

declare -a DEPLOY_IMAGE_REFS=()
declare -a CLEANUP_IMAGE_REFS=()
declare -A DEPLOY_IMAGE_SET=()
declare -A CLEANUP_IMAGE_SET=()

add_deploy_image_ref(){
local image="$1"
[ -z "$image" ] && return
if [[ -z "${DEPLOY_IMAGE_SET[$image]:-}" ]]; then
DEPLOY_IMAGE_SET["$image"]=1
DEPLOY_IMAGE_REFS+=("$image")
fi
add_cleanup_image_ref "$image"
}

add_cleanup_image_ref(){
local image="$1"
[ -z "$image" ] && return
if [[ -z "${CLEANUP_IMAGE_SET[$image]:-}" ]]; then
CLEANUP_IMAGE_SET["$image"]=1
CLEANUP_IMAGE_REFS+=("$image")
fi
}

collect_deploy_image_refs(){
local auth_modules world_modules auth_playerbots world_playerbots db_import client_data bots_client_data
local auth_standard world_standard client_data_standard

auth_modules="$(read_env_value AC_AUTHSERVER_IMAGE_MODULES "$(resolve_project_image "authserver-modules-latest")")"
world_modules="$(read_env_value AC_WORLDSERVER_IMAGE_MODULES "$(resolve_project_image "worldserver-modules-latest")")"
auth_playerbots="$(read_env_value AC_AUTHSERVER_IMAGE_PLAYERBOTS "$(resolve_project_image "authserver-playerbots")")"
world_playerbots="$(read_env_value AC_WORLDSERVER_IMAGE_PLAYERBOTS "$(resolve_project_image "worldserver-playerbots")")"
db_import="$(read_env_value AC_DB_IMPORT_IMAGE "$(resolve_project_image "db-import-playerbots")")"
client_data="$(read_env_value AC_CLIENT_DATA_IMAGE_PLAYERBOTS "$(resolve_project_image "client-data-playerbots")")"

auth_standard="$(read_env_value AC_AUTHSERVER_IMAGE "acore/ac-wotlk-authserver:master")"
world_standard="$(read_env_value AC_WORLDSERVER_IMAGE "acore/ac-wotlk-worldserver:master")"
client_data_standard="$(read_env_value AC_CLIENT_DATA_IMAGE "acore/ac-wotlk-client-data:master")"

local refs=(
"$auth_modules"
"$world_modules"
"$auth_playerbots"
"$world_playerbots"
"$db_import"
"$client_data"
"$auth_standard"
"$world_standard"
"$client_data_standard"
)
for ref in "${refs[@]}"; do
add_deploy_image_ref "$ref"
done

# Include default project-tagged images for cleanup even if env moved to custom tags
local fallback_refs=(
"$(resolve_project_image "authserver-modules-latest")"
"$(resolve_project_image "worldserver-modules-latest")"
"$(resolve_project_image "authserver-playerbots")"
"$(resolve_project_image "worldserver-playerbots")"
"$(resolve_project_image "db-import-playerbots")"
"$(resolve_project_image "client-data-playerbots")"
)
for ref in "${fallback_refs[@]}"; do
add_cleanup_image_ref "$ref"
done
}

ensure_host_writable(){
local path="$1"
[ -n "$path" ] || return 0
@@ -144,12 +76,10 @@ Options:
--port PORT          SSH port (default: 22)
--identity PATH      SSH private key (passed to scp/ssh)
--project-dir DIR    Remote project directory (default: ~/<project-name>)
--env-file PATH      Use this env file for image lookup and upload (default: ./.env)
--tarball PATH       Output path for the image tar (default: ./local-storage/images/acore-modules-images.tar)
--storage PATH       Remote storage directory (default: <project-dir>/storage)
--skip-storage       Do not sync the storage directory
--copy-source        Copy the full local project directory instead of syncing via git
--cleanup-runtime    Stop/remove existing ac-* containers and project images on remote
--yes, -y            Auto-confirm prompts (for existing deployments)
--help               Show this help
EOF_HELP
@@ -165,7 +95,6 @@ REMOTE_STORAGE=""
SKIP_STORAGE=0
ASSUME_YES=0
COPY_SOURCE=0
CLEANUP_RUNTIME=0

while [[ $# -gt 0 ]]; do
case "$1" in
@@ -174,12 +103,10 @@ while [[ $# -gt 0 ]]; do
--port) PORT="$2"; shift 2;;
--identity) IDENTITY="$2"; shift 2;;
--project-dir) PROJECT_DIR="$2"; shift 2;;
--env-file) ENV_FILE="$2"; shift 2;;
--tarball) TARBALL="$2"; shift 2;;
--storage) REMOTE_STORAGE="$2"; shift 2;;
--skip-storage) SKIP_STORAGE=1; shift;;
--copy-source) COPY_SOURCE=1; shift;;
--cleanup-runtime) CLEANUP_RUNTIME=1; shift;;
--yes|-y) ASSUME_YES=1; shift;;
--help|-h) usage; exit 0;;
*) echo "Unknown option: $1" >&2; usage; exit 1;;
@@ -192,14 +119,6 @@ if [[ -z "$HOST" || -z "$USER" ]]; then
exit 1
fi

# Normalize env file path if provided and recompute defaults
if [ -n "$ENV_FILE" ] && [ -f "$ENV_FILE" ]; then
ENV_FILE="$(cd "$(dirname "$ENV_FILE")" && pwd)/$(basename "$ENV_FILE")"
else
ENV_FILE="$PROJECT_ROOT/.env"
fi
DEFAULT_PROJECT_NAME="$(project_name::resolve "$ENV_FILE" "$TEMPLATE_FILE")"

expand_remote_path(){
local path="$1"
case "$path" in
@@ -226,27 +145,6 @@ ensure_host_writable "$LOCAL_STORAGE_ROOT"
TARBALL="${TARBALL:-${LOCAL_STORAGE_ROOT}/images/acore-modules-images.tar}"
ensure_host_writable "$(dirname "$TARBALL")"

# Resolve module SQL staging paths (local and remote)
resolve_path_relative_to_project(){
local path="$1" root="$2"
if [[ "$path" != /* ]]; then
# drop leading ./ if present
path="${path#./}"
path="${root%/}/$path"
fi
echo "${path%/}"
}

STAGE_SQL_PATH_RAW="$(read_env_value STAGE_PATH_MODULE_SQL "${LOCAL_STORAGE_ROOT:-./local-storage}/module-sql-updates")"
# Ensure STORAGE_PATH_LOCAL is defined to avoid set -u failures during expansion
if [ -z "${STORAGE_PATH_LOCAL:-}" ]; then
STORAGE_PATH_LOCAL="$LOCAL_STORAGE_ROOT"
fi
# Expand any env references (e.g., ${STORAGE_PATH_LOCAL})
STAGE_SQL_PATH_RAW="$(eval "echo \"$STAGE_SQL_PATH_RAW\"")"
LOCAL_STAGE_SQL_DIR="$(resolve_path_relative_to_project "$STAGE_SQL_PATH_RAW" "$PROJECT_ROOT")"
REMOTE_STAGE_SQL_DIR="$(resolve_path_relative_to_project "$STAGE_SQL_PATH_RAW" "$PROJECT_DIR")"

SCP_OPTS=(-P "$PORT")
SSH_OPTS=(-p "$PORT")
if [[ -n "$IDENTITY" ]]; then
@@ -388,20 +286,27 @@ setup_remote_repository(){
}

cleanup_stale_docker_resources(){
if [ "$CLEANUP_RUNTIME" -ne 1 ]; then
echo "⋅ Skipping remote runtime cleanup (containers and images preserved)."
return
fi

echo "⋅ Cleaning up stale Docker resources on remote..."

# Get project name to target our containers/images specifically
local project_name
project_name="$(resolve_project_name)"

# Stop and remove old containers
echo " • Removing old containers..."
run_ssh "docker ps -a --filter 'name=ac-' --format '{{.Names}}' | xargs -r docker rm -f 2>/dev/null || true"

# Remove old project images to force fresh load
echo " • Removing old project images..."
for img in "${CLEANUP_IMAGE_REFS[@]}"; do
local images_to_remove=(
"${project_name}:authserver-modules-latest"
"${project_name}:worldserver-modules-latest"
"${project_name}:authserver-playerbots"
"${project_name}:worldserver-playerbots"
"${project_name}:db-import-playerbots"
"${project_name}:client-data-playerbots"
)
for img in "${images_to_remove[@]}"; do
run_ssh "docker rmi '$img' 2>/dev/null || true"
done

@@ -415,25 +320,31 @@ cleanup_stale_docker_resources(){

validate_remote_environment

collect_deploy_image_refs

echo "⋅ Exporting deployment images to $TARBALL"
# Ensure destination directory exists
ensure_host_writable "$(dirname "$TARBALL")"

echo "⋅ Exporting module images to $TARBALL"
# Check which images are available and collect them
IMAGES_TO_SAVE=()
MISSING_IMAGES=()
for image in "${DEPLOY_IMAGE_REFS[@]}"; do

project_auth_modules="$(resolve_project_image "authserver-modules-latest")"
project_world_modules="$(resolve_project_image "worldserver-modules-latest")"
project_auth_playerbots="$(resolve_project_image "authserver-playerbots")"
project_world_playerbots="$(resolve_project_image "worldserver-playerbots")"
project_db_import="$(resolve_project_image "db-import-playerbots")"
project_client_data="$(resolve_project_image "client-data-playerbots")"

for image in \
"$project_auth_modules" \
"$project_world_modules" \
"$project_auth_playerbots" \
"$project_world_playerbots" \
"$project_db_import" \
"$project_client_data"; do
if docker image inspect "$image" >/dev/null 2>&1; then
IMAGES_TO_SAVE+=("$image")
else
MISSING_IMAGES+=("$image")
fi
done

if [ ${#IMAGES_TO_SAVE[@]} -eq 0 ]; then
echo "❌ No AzerothCore images found to migrate. Run './build.sh' first or pull the images defined in your .env."
echo "❌ No AzerothCore images found to migrate. Run './build.sh' first or pull standard images."
exit 1
fi

@@ -441,11 +352,6 @@ echo "⋅ Found ${#IMAGES_TO_SAVE[@]} images to migrate:"
printf ' • %s\n' "${IMAGES_TO_SAVE[@]}"
docker image save "${IMAGES_TO_SAVE[@]}" > "$TARBALL"

if [ ${#MISSING_IMAGES[@]} -gt 0 ]; then
echo "⚠️ Skipping ${#MISSING_IMAGES[@]} images not present locally (will need to pull on remote if required):"
printf ' • %s\n' "${MISSING_IMAGES[@]}"
fi

if [[ $SKIP_STORAGE -eq 0 ]]; then
if [[ -d storage ]]; then
echo "⋅ Syncing storage to remote"
@@ -481,18 +387,6 @@ if [[ $SKIP_STORAGE -eq 0 ]]; then
rm -f "$modules_tar"
run_ssh "tar -xf '$REMOTE_TEMP_DIR/acore-modules.tar' -C '$REMOTE_STORAGE/modules' && rm '$REMOTE_TEMP_DIR/acore-modules.tar'"
fi

# Sync module SQL staging directory (STAGE_PATH_MODULE_SQL)
if [[ -d "$LOCAL_STAGE_SQL_DIR" ]]; then
echo "⋅ Syncing module SQL staging to remote"
run_ssh "rm -rf '$REMOTE_STAGE_SQL_DIR' && mkdir -p '$REMOTE_STAGE_SQL_DIR'"
sql_tar=$(mktemp)
tar -cf "$sql_tar" -C "$LOCAL_STAGE_SQL_DIR" .
ensure_remote_temp_dir
run_scp "$sql_tar" "$USER@$HOST:$REMOTE_TEMP_DIR/acore-module-sql.tar"
rm -f "$sql_tar"
run_ssh "tar -xf '$REMOTE_TEMP_DIR/acore-module-sql.tar' -C '$REMOTE_STAGE_SQL_DIR' && rm '$REMOTE_TEMP_DIR/acore-module-sql.tar'"
fi
fi

reset_remote_post_install_marker(){
@@ -512,9 +406,9 @@ ensure_remote_temp_dir
run_scp "$TARBALL" "$USER@$HOST:$REMOTE_TEMP_DIR/acore-modules-images.tar"
run_ssh "docker load < '$REMOTE_TEMP_DIR/acore-modules-images.tar' && rm '$REMOTE_TEMP_DIR/acore-modules-images.tar'"

if [[ -f "$ENV_FILE" ]]; then
if [[ -f .env ]]; then
echo "⋅ Uploading .env"
run_scp "$ENV_FILE" "$USER@$HOST:$PROJECT_DIR/.env"
run_scp .env "$USER@$HOST:$PROJECT_DIR/.env"
fi

echo "⋅ Remote preparation completed"
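For orientation, a typical invocation of this migration helper might look like the sketch below. The script filename and the --host/--user flags are assumptions inferred from the required HOST/USER variables and the truncated option list above, not something the diff shows verbatim:

# Hypothetical example; substitute your own filename, host, user, and key
./migrate-to-remote.sh --host example.org --user deploy \
  --identity ~/.ssh/id_ed25519 --port 22 \
  --cleanup-runtime --yes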
@@ -1,88 +0,0 @@
#!/bin/bash
# Ensure dbimport.conf exists with usable connection values.
set -euo pipefail 2>/dev/null || set -eu

# Usage: seed_dbimport_conf [conf_dir]
# - conf_dir: target directory (defaults to DBIMPORT_CONF_DIR or /azerothcore/env/dist/etc)
seed_dbimport_conf() {
local conf_dir="${1:-${DBIMPORT_CONF_DIR:-/azerothcore/env/dist/etc}}"
local conf="${conf_dir}/dbimport.conf"
local dist="${conf}.dist"
local source_root="${DBIMPORT_SOURCE_ROOT:-${AC_SOURCE_DIR:-/local-storage-root/source/azerothcore-playerbots}}"
if [ ! -d "$source_root" ]; then
local fallback="/local-storage-root/source/azerothcore-wotlk"
if [ -d "$fallback" ]; then
source_root="$fallback"
fi
fi
local source_dist="${DBIMPORT_DIST_PATH:-${source_root}/src/tools/dbimport/dbimport.conf.dist}"
# Put temp dir inside the writable config mount so non-root can create files.
local temp_dir="${DBIMPORT_TEMP_DIR:-/azerothcore/env/dist/etc/temp}"

mkdir -p "$conf_dir" "$temp_dir"

# Prefer a real .dist from the source tree if it exists.
if [ -f "$source_dist" ]; then
cp -n "$source_dist" "$dist" 2>/dev/null || true
fi

if [ ! -f "$conf" ]; then
if [ -f "$dist" ]; then
cp "$dist" "$conf"
else
echo "⚠️ dbimport.conf.dist not found; generating minimal dbimport.conf" >&2
cat > "$conf" <<EOF
LoginDatabaseInfo = "localhost;3306;root;root;acore_auth"
WorldDatabaseInfo = "localhost;3306;root;root;acore_world"
CharacterDatabaseInfo = "localhost;3306;root;root;acore_characters"
PlayerbotsDatabaseInfo = "localhost;3306;root;root;acore_playerbots"
EnableDatabases = 15
Updates.AutoSetup = 1
MySQLExecutable = "/usr/bin/mysql"
TempDir = "/azerothcore/env/dist/temp"
EOF
fi
fi

set_conf() {
local key="$1" value="$2" file="$3" quoted="${4:-true}"
local formatted="$value"
if [ "$quoted" = "true" ]; then
formatted="\"${value}\""
fi
if grep -qE "^[[:space:]]*${key}[[:space:]]*=" "$file"; then
sed -i "s|^[[:space:]]*${key}[[:space:]]*=.*|${key} = ${formatted}|" "$file"
else
printf '%s = %s\n' "$key" "$formatted" >> "$file"
fi
}

local host="${CONTAINER_MYSQL:-${MYSQL_HOST:-localhost}}"
local port="${MYSQL_PORT:-3306}"
local user="${MYSQL_USER:-root}"
local pass="${MYSQL_ROOT_PASSWORD:-root}"
local db_auth="${DB_AUTH_NAME:-acore_auth}"
local db_world="${DB_WORLD_NAME:-acore_world}"
local db_chars="${DB_CHARACTERS_NAME:-acore_characters}"
local db_bots="${DB_PLAYERBOTS_NAME:-acore_playerbots}"

set_conf "LoginDatabaseInfo" "${host};${port};${user};${pass};${db_auth}" "$conf"
set_conf "WorldDatabaseInfo" "${host};${port};${user};${pass};${db_world}" "$conf"
set_conf "CharacterDatabaseInfo" "${host};${port};${user};${pass};${db_chars}" "$conf"
set_conf "PlayerbotsDatabaseInfo" "${host};${port};${user};${pass};${db_bots}" "$conf"
set_conf "EnableDatabases" "${AC_UPDATES_ENABLE_DATABASES:-15}" "$conf" false
set_conf "Updates.AutoSetup" "${AC_UPDATES_AUTO_SETUP:-1}" "$conf" false
set_conf "Updates.ExceptionShutdownDelay" "${AC_UPDATES_EXCEPTION_SHUTDOWN_DELAY:-10000}" "$conf" false
set_conf "Updates.AllowedModules" "${DB_UPDATES_ALLOWED_MODULES:-all}" "$conf"
set_conf "Updates.Redundancy" "${DB_UPDATES_REDUNDANCY:-1}" "$conf" false
set_conf "Database.Reconnect.Seconds" "${DB_RECONNECT_SECONDS:-5}" "$conf" false
set_conf "Database.Reconnect.Attempts" "${DB_RECONNECT_ATTEMPTS:-5}" "$conf" false
set_conf "LoginDatabase.WorkerThreads" "${DB_LOGIN_WORKER_THREADS:-1}" "$conf" false
set_conf "WorldDatabase.WorkerThreads" "${DB_WORLD_WORKER_THREADS:-1}" "$conf" false
set_conf "CharacterDatabase.WorkerThreads" "${DB_CHARACTER_WORKER_THREADS:-1}" "$conf" false
set_conf "LoginDatabase.SynchThreads" "${DB_LOGIN_SYNCH_THREADS:-1}" "$conf" false
set_conf "WorldDatabase.SynchThreads" "${DB_WORLD_SYNCH_THREADS:-1}" "$conf" false
set_conf "CharacterDatabase.SynchThreads" "${DB_CHARACTER_SYNCH_THREADS:-1}" "$conf" false
set_conf "MySQLExecutable" "/usr/bin/mysql" "$conf"
set_conf "TempDir" "$temp_dir" "$conf"
}
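As the usage comment above notes, the helper accepts an optional target directory. A minimal sketch of sourcing and calling it (the wrapper path is assumed for illustration; the connection values come from the same environment variables the function reads):

# Hypothetical wrapper; the actual library filename in the repo may differ
source /path/to/seed-dbimport-conf.sh
export CONTAINER_MYSQL=ac-mysql MYSQL_PORT=3306 MYSQL_USER=root MYSQL_ROOT_PASSWORD=secret
seed_dbimport_conf /azerothcore/env/dist/etc   # creates or updates dbimport.conf in that directory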
@@ -259,14 +259,14 @@ SENTINEL_FILE="$LOCAL_STORAGE_PATH/modules/.requires_rebuild"
MODULES_META_DIR="$STORAGE_PATH/modules/.modules-meta"
RESTORE_PRESTAGED_FLAG="$MODULES_META_DIR/.restore-prestaged"
MODULES_ENABLED_FILE="$MODULES_META_DIR/modules-enabled.txt"
STAGE_PATH_MODULE_SQL="$(read_env STAGE_PATH_MODULE_SQL "$STORAGE_PATH/module-sql-updates")"
STAGE_PATH_MODULE_SQL="$(eval "echo \"$STAGE_PATH_MODULE_SQL\"")"
if [[ "$STAGE_PATH_MODULE_SQL" != /* ]]; then
STAGE_PATH_MODULE_SQL="$PROJECT_DIR/$STAGE_PATH_MODULE_SQL"
MODULE_SQL_STAGE_PATH="$(read_env MODULE_SQL_STAGE_PATH "$STORAGE_PATH/module-sql-updates")"
MODULE_SQL_STAGE_PATH="$(eval "echo \"$MODULE_SQL_STAGE_PATH\"")"
if [[ "$MODULE_SQL_STAGE_PATH" != /* ]]; then
MODULE_SQL_STAGE_PATH="$PROJECT_DIR/$MODULE_SQL_STAGE_PATH"
fi
STAGE_PATH_MODULE_SQL="$(canonical_path "$STAGE_PATH_MODULE_SQL")"
mkdir -p "$STAGE_PATH_MODULE_SQL"
ensure_host_writable "$STAGE_PATH_MODULE_SQL"
MODULE_SQL_STAGE_PATH="$(canonical_path "$MODULE_SQL_STAGE_PATH")"
mkdir -p "$MODULE_SQL_STAGE_PATH"
ensure_host_writable "$MODULE_SQL_STAGE_PATH"
HOST_STAGE_HELPER_IMAGE="$(read_env ALPINE_IMAGE "alpine:latest")"

declare -A ENABLED_MODULES=()
@@ -439,7 +439,7 @@ esac
# Stage module SQL to core updates directory (after containers start)
host_stage_clear(){
docker run --rm \
-v "$STAGE_PATH_MODULE_SQL":/host-stage \
-v "$MODULE_SQL_STAGE_PATH":/host-stage \
"$HOST_STAGE_HELPER_IMAGE" \
sh -c 'find /host-stage -type f -name "MODULE_*.sql" -delete' >/dev/null 2>&1 || true
}
@@ -447,7 +447,7 @@ host_stage_clear(){
host_stage_reset_dir(){
local dir="$1"
docker run --rm \
-v "$STAGE_PATH_MODULE_SQL":/host-stage \
-v "$MODULE_SQL_STAGE_PATH":/host-stage \
"$HOST_STAGE_HELPER_IMAGE" \
sh -c "mkdir -p /host-stage/$dir && rm -f /host-stage/$dir/MODULE_*.sql" >/dev/null 2>&1 || true
}
@@ -461,7 +461,7 @@ copy_to_host_stage(){
local base_name
base_name="$(basename "$file_path")"
docker run --rm \
-v "$STAGE_PATH_MODULE_SQL":/host-stage \
-v "$MODULE_SQL_STAGE_PATH":/host-stage \
-v "$src_dir":/src \
"$HOST_STAGE_HELPER_IMAGE" \
sh -c "mkdir -p /host-stage/$core_dir && cp \"/src/$base_name\" \"/host-stage/$core_dir/$target_name\"" >/dev/null 2>&1
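These helpers route all file operations through a throwaway container built from the helper image so staged SQL can be managed even when the host user cannot write to the staging directory directly. The same pattern can be reused read-only to see what is currently staged; a minimal sketch using the variables defined above:

# Hypothetical inspection command following the same helper-image pattern
docker run --rm -v "$MODULE_SQL_STAGE_PATH":/host-stage "$HOST_STAGE_HELPER_IMAGE" \
  sh -c 'find /host-stage -name "MODULE_*.sql" | sort'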
@@ -22,32 +22,6 @@ ICON_ERROR="❌"
ICON_INFO="ℹ️"
ICON_TEST="🧪"

resolve_path(){
local base="$1" path="$2"
if command -v python3 >/dev/null 2>&1; then
python3 - "$base" "$path" <<'PY'
import os, sys
base, path = sys.argv[1:3]
if os.path.isabs(path):
    print(os.path.normpath(path))
else:
    print(os.path.normpath(os.path.join(base, path)))
PY
else
(cd "$base" && realpath -m "$path")
fi
}

if [ -f "$PROJECT_ROOT/.env" ]; then
set -a
# shellcheck disable=SC1091
source "$PROJECT_ROOT/.env"
set +a
fi

LOCAL_MODULES_DIR_RAW="${STORAGE_PATH_LOCAL:-./local-storage}/modules"
LOCAL_MODULES_DIR="$(resolve_path "$PROJECT_ROOT" "$LOCAL_MODULES_DIR_RAW")"

# Counters
TESTS_TOTAL=0
TESTS_PASSED=0
@@ -143,7 +117,7 @@ info "Running: python3 scripts/python/modules.py generate"
if python3 scripts/python/modules.py \
--env-path .env \
--manifest config/module-manifest.json \
generate --output-dir "$LOCAL_MODULES_DIR" > /tmp/phase1-modules-generate.log 2>&1; then
generate --output-dir local-storage/modules > /tmp/phase1-modules-generate.log 2>&1; then
ok "Module state generation successful"
else
# Check if it's just warnings
@@ -156,11 +130,11 @@ fi

# Test 4: Verify SQL manifest created
test_header "SQL Manifest Verification"
if [ -f "$LOCAL_MODULES_DIR/.sql-manifest.json" ]; then
ok "SQL manifest created: $LOCAL_MODULES_DIR/.sql-manifest.json"
if [ -f local-storage/modules/.sql-manifest.json ]; then
ok "SQL manifest created: local-storage/modules/.sql-manifest.json"

# Check manifest structure
module_count=$(python3 -c "import json; data=json.load(open('$LOCAL_MODULES_DIR/.sql-manifest.json')); print(len(data.get('modules', [])))" 2>/dev/null || echo "0")
module_count=$(python3 -c "import json; data=json.load(open('local-storage/modules/.sql-manifest.json')); print(len(data.get('modules', [])))" 2>/dev/null || echo "0")
info "Modules with SQL: $module_count"

if [ "$module_count" -gt 0 ]; then
@@ -168,7 +142,7 @@ if [ -f "$LOCAL_MODULES_DIR/.sql-manifest.json" ]; then

# Show first module
info "Sample module SQL info:"
python3 -c "import json; data=json.load(open('$LOCAL_MODULES_DIR/.sql-manifest.json')); m=data['modules'][0] if data['modules'] else {}; print(f\" Name: {m.get('name', 'N/A')}\n SQL files: {len(m.get('sql_files', {}))}\") " 2>/dev/null || true
python3 -c "import json; data=json.load(open('local-storage/modules/.sql-manifest.json')); m=data['modules'][0] if data['modules'] else {}; print(f\" Name: {m.get('name', 'N/A')}\n SQL files: {len(m.get('sql_files', {}))}\") " 2>/dev/null || true
else
warn "No modules with SQL files (expected if modules not yet staged)"
fi
@@ -178,19 +152,19 @@ fi

# Test 5: Verify modules.env created
test_header "Module Environment File Check"
if [ -f "$LOCAL_MODULES_DIR/modules.env" ]; then
if [ -f local-storage/modules/modules.env ]; then
ok "modules.env created"

# Check for key exports
if grep -q "MODULES_ENABLED=" "$LOCAL_MODULES_DIR/modules.env"; then
if grep -q "MODULES_ENABLED=" local-storage/modules/modules.env; then
ok "MODULES_ENABLED variable present"
fi

if grep -q "MODULES_REQUIRES_CUSTOM_BUILD=" "$LOCAL_MODULES_DIR/modules.env"; then
if grep -q "MODULES_REQUIRES_CUSTOM_BUILD=" local-storage/modules/modules.env; then
ok "Build requirement flags present"

# Check if build required
source "$LOCAL_MODULES_DIR/modules.env"
source local-storage/modules/modules.env
if [ "${MODULES_REQUIRES_CUSTOM_BUILD:-0}" = "1" ]; then
info "Custom build required (C++ modules enabled)"
else
@@ -203,8 +177,8 @@ fi

# Test 6: Check build requirement
test_header "Build Requirement Check"
if [ -f "$LOCAL_MODULES_DIR/modules.env" ]; then
source "$LOCAL_MODULES_DIR/modules.env"
if [ -f local-storage/modules/modules.env ]; then
source local-storage/modules/modules.env

info "MODULES_REQUIRES_CUSTOM_BUILD=${MODULES_REQUIRES_CUSTOM_BUILD:-0}"
info "MODULES_REQUIRES_PLAYERBOT_SOURCE=${MODULES_REQUIRES_PLAYERBOT_SOURCE:-0}"
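resolve_path, shown above, normalizes a possibly relative path against a base directory, preferring python3 and falling back to realpath -m. A quick sketch of the expected behavior (hypothetical paths):

resolve_path /opt/acore ./local-storage/modules   # -> /opt/acore/local-storage/modules
resolve_path /opt/acore /tmp/override             # -> /tmp/override (absolute paths pass through unchanged)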
@@ -98,23 +98,12 @@ read_env_value(){
if [ -f "$env_path" ]; then
value="$(grep -E "^${key}=" "$env_path" | tail -n1 | cut -d'=' -f2- | tr -d '\r')"
fi
# Fallback to template defaults if not set in the chosen env file
if [ -z "$value" ] && [ -f "$TEMPLATE_FILE" ]; then
value="$(grep -E "^${key}=" "$TEMPLATE_FILE" | tail -n1 | cut -d'=' -f2- | tr -d '\r')"
fi
if [ -z "$value" ]; then
value="$default"
fi
echo "$value"
}

MYSQL_EXTERNAL_PORT="$(read_env_value MYSQL_EXTERNAL_PORT 64306)"
AUTH_EXTERNAL_PORT="$(read_env_value AUTH_EXTERNAL_PORT 3784)"
WORLD_EXTERNAL_PORT="$(read_env_value WORLD_EXTERNAL_PORT 8215)"
SOAP_EXTERNAL_PORT="$(read_env_value SOAP_EXTERNAL_PORT 7778)"
PMA_EXTERNAL_PORT="$(read_env_value PMA_EXTERNAL_PORT 8081)"
KEIRA3_EXTERNAL_PORT="$(read_env_value KEIRA3_EXTERNAL_PORT 4201)"

handle_auto_rebuild(){
local storage_path
storage_path="$(read_env_value STORAGE_PATH_LOCAL "./local-storage")"
@@ -182,7 +171,7 @@ health_checks(){
check_health ac-worldserver || ((failures++))
if [ "$QUICK" = false ]; then
info "Port checks"
for port in "$MYSQL_EXTERNAL_PORT" "$AUTH_EXTERNAL_PORT" "$WORLD_EXTERNAL_PORT" "$SOAP_EXTERNAL_PORT" "$PMA_EXTERNAL_PORT" "$KEIRA3_EXTERNAL_PORT"; do
for port in 64306 3784 8215 7778 8081 4201; do
if timeout 3 bash -c "</dev/tcp/127.0.0.1/$port" 2>/dev/null; then ok "port $port: open"; else warn "port $port: closed"; fi
done
fi
@@ -201,7 +190,7 @@ main(){
fi
health_checks
handle_auto_rebuild
info "Endpoints: MySQL:${MYSQL_EXTERNAL_PORT}, Auth:${AUTH_EXTERNAL_PORT}, World:${WORLD_EXTERNAL_PORT}, SOAP:${SOAP_EXTERNAL_PORT}, phpMyAdmin:${PMA_EXTERNAL_PORT}, Keira3:${KEIRA3_EXTERNAL_PORT}"
info "Endpoints: MySQL:64306, Auth:3784, World:8215, SOAP:7778, phpMyAdmin:8081, Keira3:4201"
}

main "$@"
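Because read_env_value falls back first to the template file and then to the hard-coded default, a port override only has to appear in .env. A minimal sketch under that assumption, using the variable names from the defaults above:

# .env (hypothetical override)
MYSQL_EXTERNAL_PORT=33061
# the status checks then probe 33061 and report "MySQL:33061" while the other endpoints keep their defaults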
@@ -1,6 +1,6 @@
module acore-compose/statusdash

go 1.22
go 1.22.2

require (
github.com/gizak/termui/v3 v3.1.0 // indirect

@@ -62,26 +62,16 @@ type Module struct {
}

type Snapshot struct {
Timestamp string `json:"timestamp"`
Project string `json:"project"`
Network string `json:"network"`
Services []Service `json:"services"`
Ports []Port `json:"ports"`
Modules []Module `json:"modules"`
Storage map[string]DirInfo `json:"storage"`
Volumes map[string]VolumeInfo `json:"volumes"`
Users UserStats `json:"users"`
Stats map[string]ContainerStats `json:"stats"`
}

var persistentServiceOrder = []string{
"ac-mysql",
"ac-db-guard",
"ac-authserver",
"ac-worldserver",
"ac-phpmyadmin",
"ac-keira3",
"ac-backup",
Timestamp string `json:"timestamp"`
Project string `json:"project"`
Network string `json:"network"`
Services []Service `json:"services"`
Ports []Port `json:"ports"`
Modules []Module `json:"modules"`
Storage map[string]DirInfo `json:"storage"`
Volumes map[string]VolumeInfo `json:"volumes"`
Users UserStats `json:"users"`
Stats map[string]ContainerStats `json:"stats"`
}

func runSnapshot() (*Snapshot, error) {
@@ -97,76 +87,27 @@ func runSnapshot() (*Snapshot, error) {
return snap, nil
}

func partitionServices(all []Service) ([]Service, []Service) {
byName := make(map[string]Service)
for _, svc := range all {
byName[svc.Name] = svc
}

seen := make(map[string]bool)
persistent := make([]Service, 0, len(persistentServiceOrder))
for _, name := range persistentServiceOrder {
if svc, ok := byName[name]; ok {
persistent = append(persistent, svc)
seen[name] = true
}
}

setups := make([]Service, 0, len(all))
for _, svc := range all {
if seen[svc.Name] {
continue
}
setups = append(setups, svc)
}
return persistent, setups
}

func buildServicesTable(s *Snapshot) *TableNoCol {
runningServices, setupServices := partitionServices(s.Services)

table := NewTableNoCol()
rows := [][]string{{"Group", "Service", "Status", "Health", "CPU%", "Memory"}}
appendRows := func(groupLabel string, services []Service) {
for _, svc := range services {
cpu := "-"
mem := "-"
if svcStats, ok := s.Stats[svc.Name]; ok {
cpu = fmt.Sprintf("%.1f", svcStats.CPU)
mem = strings.Split(svcStats.Memory, " / ")[0] // Just show used, not total
}
health := svc.Health
if svc.Status != "running" && svc.ExitCode != "0" && svc.ExitCode != "" {
health = fmt.Sprintf("%s (%s)", svc.Health, svc.ExitCode)
}
rows = append(rows, []string{groupLabel, svc.Label, svc.Status, health, cpu, mem})
rows := [][]string{{"Service", "Status", "Health", "CPU%", "Memory"}}
for _, svc := range s.Services {
cpu := "-"
mem := "-"
if stats, ok := s.Stats[svc.Name]; ok {
cpu = fmt.Sprintf("%.1f", stats.CPU)
mem = strings.Split(stats.Memory, " / ")[0] // Just show used, not total
}
// Combine health with exit code for stopped containers
health := svc.Health
if svc.Status != "running" && svc.ExitCode != "0" && svc.ExitCode != "" {
health = fmt.Sprintf("%s (%s)", svc.Health, svc.ExitCode)
}
rows = append(rows, []string{svc.Label, svc.Status, health, cpu, mem})
}

appendRows("Persistent", runningServices)
appendRows("Setup", setupServices)

table.Rows = rows
table.RowSeparator = false
table.Border = true
table.Title = "Services"

for i := 1; i < len(table.Rows); i++ {
if table.RowStyles == nil {
table.RowStyles = make(map[int]ui.Style)
}
state := strings.ToLower(table.Rows[i][2])
switch state {
case "running", "healthy":
table.RowStyles[i] = ui.NewStyle(ui.ColorGreen)
case "restarting", "unhealthy":
table.RowStyles[i] = ui.NewStyle(ui.ColorRed)
case "exited":
table.RowStyles[i] = ui.NewStyle(ui.ColorYellow)
default:
table.RowStyles[i] = ui.NewStyle(ui.ColorWhite)
}
}
return table
}

@@ -204,6 +145,7 @@ func buildModulesList(s *Snapshot) *widgets.List {

func buildStorageParagraph(s *Snapshot) *widgets.Paragraph {
var b strings.Builder
fmt.Fprintf(&b, "STORAGE:\n")
entries := []struct {
Key string
Label string
@@ -219,7 +161,11 @@ func buildStorageParagraph(s *Snapshot) *widgets.Paragraph {
if !ok {
continue
}
fmt.Fprintf(&b, " %-15s %s (%s)\n", item.Label, info.Path, info.Size)
mark := "○"
if info.Exists {
mark = "●"
}
fmt.Fprintf(&b, " %-15s %s %s (%s)\n", item.Label, mark, info.Path, info.Size)
}
par := widgets.NewParagraph()
par.Title = "Storage"
@@ -231,6 +177,7 @@ func buildStorageParagraph(s *Snapshot) *widgets.Paragraph {

func buildVolumesParagraph(s *Snapshot) *widgets.Paragraph {
var b strings.Builder
fmt.Fprintf(&b, "VOLUMES:\n")
entries := []struct {
Key string
Label string
@@ -243,7 +190,11 @@ func buildVolumesParagraph(s *Snapshot) *widgets.Paragraph {
if !ok {
continue
}
fmt.Fprintf(&b, " %-13s %s\n", item.Label, info.Mountpoint)
mark := "○"
if info.Exists {
mark = "●"
}
fmt.Fprintf(&b, " %-13s %s %s\n", item.Label, mark, info.Mountpoint)
}
par := widgets.NewParagraph()
par.Title = "Volumes"
@@ -255,6 +206,22 @@ func buildVolumesParagraph(s *Snapshot) *widgets.Paragraph {

func renderSnapshot(s *Snapshot, selectedModule int) (*widgets.List, *ui.Grid) {
servicesTable := buildServicesTable(s)
for i := 1; i < len(servicesTable.Rows); i++ {
if servicesTable.RowStyles == nil {
servicesTable.RowStyles = make(map[int]ui.Style)
}
state := strings.ToLower(servicesTable.Rows[i][1])
switch state {
case "running", "healthy":
servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorGreen)
case "restarting", "unhealthy":
servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorRed)
case "exited":
servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorYellow)
default:
servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorWhite)
}
}
portsTable := buildPortsTable(s)
for i := 1; i < len(portsTable.Rows); i++ {
if portsTable.RowStyles == nil {
@@ -280,7 +247,7 @@ func renderSnapshot(s *Snapshot, selectedModule int) (*widgets.List, *ui.Grid) {
moduleInfoPar.Title = "Module Info"
if selectedModule >= 0 && selectedModule < len(s.Modules) {
mod := s.Modules[selectedModule]
moduleInfoPar.Text = fmt.Sprintf("%s\nCategory: %s\nType: %s", mod.Description, mod.Category, mod.Type)
moduleInfoPar.Text = fmt.Sprintf("%s\n\nCategory: %s\nType: %s", mod.Description, mod.Category, mod.Type)
} else {
moduleInfoPar.Text = "Select a module to view info"
}
@@ -305,15 +272,15 @@ func renderSnapshot(s *Snapshot, selectedModule int) (*widgets.List, *ui.Grid) {
termWidth, termHeight := ui.TerminalDimensions()
grid.SetRect(0, 0, termWidth, termHeight)
grid.Set(
ui.NewRow(0.15,
ui.NewRow(0.18,
ui.NewCol(0.6, header),
ui.NewCol(0.4, usersPar),
),
ui.NewRow(0.46,
ui.NewRow(0.42,
ui.NewCol(0.6, servicesTable),
ui.NewCol(0.4, portsTable),
),
ui.NewRow(0.39,
ui.NewRow(0.40,
ui.NewCol(0.25, modulesList),
ui.NewCol(0.15,
ui.NewRow(0.30, helpPar),

@@ -588,16 +588,14 @@ def handle_generate(args: argparse.Namespace) -> int:
write_outputs(state, output_dir)

if state.warnings:
module_keys_with_warnings = sorted(
{warning.split()[0].strip(":,") for warning in state.warnings if warning.startswith("MODULE_")}
)
warning_lines = []
if module_keys_with_warnings:
warning_lines.append(f"- Modules with warnings: {', '.join(module_keys_with_warnings)}")
warning_lines.extend(f"- {warning}" for warning in state.warnings)
warning_block = textwrap.indent("\n".join(warning_lines), " ")
warning_block = "\n".join(f"- {warning}" for warning in state.warnings)
print(
f"⚠️ Module manifest warnings detected:\n{warning_block}\n",
textwrap.dedent(
f"""\
⚠️ Module manifest warnings detected:
{warning_block}
"""
),
file=sys.stderr,
)
if state.errors:
82 setup.sh
@@ -1241,7 +1241,7 @@ fi
"automation" "quality-of-life" "gameplay-enhancement" "npc-service"
"pvp" "progression" "economy" "social" "account-wide"
"customization" "scripting" "admin" "premium" "minigame"
"content" "rewards" "developer" "database" "tooling" "uncategorized"
"content" "rewards" "developer"
)
declare -A category_titles=(
["automation"]="🤖 Automation"
@@ -1261,18 +1261,30 @@ fi
["content"]="🏰 Content"
["rewards"]="🎁 Rewards"
["developer"]="🛠️ Developer Tools"
["database"]="🗄️ Database"
["tooling"]="🔨 Tooling"
["uncategorized"]="📦 Miscellaneous"
)
declare -A processed_categories=()

render_category() {
local cat="$1"
# Group modules by category using arrays
declare -A modules_by_category
local key
for key in "${selection_keys[@]}"; do
[ -n "${KNOWN_MODULE_LOOKUP[$key]:-}" ] || continue
local category="${MODULE_CATEGORY_MAP[$key]:-uncategorized}"
if [ -z "${modules_by_category[$category]:-}" ]; then
modules_by_category[$category]="$key"
else
modules_by_category[$category]="${modules_by_category[$category]} $key"
fi
done

# Process modules by category
local cat
for cat in "${category_order[@]}"; do
local module_list="${modules_by_category[$cat]:-}"
[ -n "$module_list" ] || return 0
[ -n "$module_list" ] || continue

# Check if this category has any valid modules before showing header
local has_valid_modules=0
# Split the space-separated string properly
local -a module_array
IFS=' ' read -ra module_array <<< "$module_list"
for key in "${module_array[@]}"; do
@@ -1284,12 +1296,14 @@ fi
fi
done

[ "$has_valid_modules" = "1" ] || return 0
# Skip category if no valid modules
[ "$has_valid_modules" = "1" ] || continue

# Display category header only when we have valid modules
local cat_title="${category_titles[$cat]:-$cat}"
printf '\n%b\n' "${BOLD}${CYAN}═══ ${cat_title} ═══${NC}"

local first_in_cat=1
# Process modules in this category
for key in "${module_array[@]}"; do
[ -n "${KNOWN_MODULE_LOOKUP[$key]:-}" ] || continue
local status_lc="${MODULE_STATUS_MAP[$key],,}"
@@ -1299,10 +1313,6 @@ fi
printf -v "$key" '%s' "0"
continue
fi
if [ "$first_in_cat" -ne 1 ]; then
printf '\n'
fi
first_in_cat=0
local prompt_label
prompt_label="$(module_display_name "$key")"
if [ "${MODULE_NEEDS_BUILD_MAP[$key]}" = "1" ]; then
@@ -1330,30 +1340,6 @@ fi
printf -v "$key" '%s' "0"
fi
done
processed_categories["$cat"]=1
}

# Group modules by category using arrays
declare -A modules_by_category
local key
for key in "${selection_keys[@]}"; do
[ -n "${KNOWN_MODULE_LOOKUP[$key]:-}" ] || continue
local category="${MODULE_CATEGORY_MAP[$key]:-uncategorized}"
if [ -z "${modules_by_category[$category]:-}" ]; then
modules_by_category[$category]="$key"
else
modules_by_category[$category]="${modules_by_category[$category]} $key"
fi
done

# Process modules by category (ordered, then any new categories)
local cat
for cat in "${category_order[@]}"; do
render_category "$cat"
done
for cat in "${!modules_by_category[@]}"; do
[ -n "${processed_categories[$cat]:-}" ] && continue
render_category "$cat"
done
module_mode_label="preset 3 (Manual)"
elif [ "$MODE_SELECTION" = "4" ]; then
@@ -1529,24 +1515,8 @@ fi
# Set build sentinel to indicate rebuild is needed
local sentinel="$LOCAL_STORAGE_ROOT_ABS/modules/.requires_rebuild"
mkdir -p "$(dirname "$sentinel")"
if touch "$sentinel" 2>/dev/null; then
say INFO "Build sentinel created at $sentinel"
else
say WARNING "Could not create build sentinel at $sentinel (permissions/ownership); forcing with sudo..."
if command -v sudo >/dev/null 2>&1; then
if sudo mkdir -p "$(dirname "$sentinel")" \
&& sudo chown -R "$(id -u):$(id -g)" "$(dirname "$sentinel")" \
&& sudo touch "$sentinel"; then
say INFO "Build sentinel created at $sentinel (after fixing ownership)"
else
say ERROR "Failed to force build sentinel creation at $sentinel. Fix permissions and rerun setup."
exit 1
fi
else
say ERROR "Cannot force build sentinel creation (sudo unavailable). Fix permissions on $(dirname "$sentinel") and rerun setup."
exit 1
fi
fi
touch "$sentinel"
say INFO "Build sentinel created at $sentinel"
fi

local default_source_rel="${LOCAL_STORAGE_ROOT}/source/azerothcore"
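The rebuild sentinel is just an empty marker file under the local storage path. A minimal sketch of checking for it from the project root, assuming the default ./local-storage location shown above; clearing it manually is only illustrative and should normally be left to the tooling:

[ -f ./local-storage/modules/.requires_rebuild ] && echo "module rebuild pending"
# rm -f ./local-storage/modules/.requires_rebuild   # hypothetical manual reset if you know a rebuild already ran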
117 update-latest.sh
@@ -1,117 +0,0 @@
#!/bin/bash
#
# Safe wrapper to update to the latest commit on the current branch and run deploy.

set -euo pipefail

ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$ROOT_DIR"

BLUE='\033[0;34m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; NC='\033[0m'
info(){ printf '%b\n' "${BLUE}ℹ️ $*${NC}"; }
ok(){ printf '%b\n' "${GREEN}✅ $*${NC}"; }
warn(){ printf '%b\n' "${YELLOW}⚠️ $*${NC}"; }
err(){ printf '%b\n' "${RED}❌ $*${NC}"; }

FORCE_DIRTY=0
DEPLOY_ARGS=()
SKIP_BUILD=0
AUTO_DEPLOY=0

usage(){
cat <<'EOF'
Usage: ./update-latest.sh [--force] [--help] [deploy args...]

Updates the current git branch with a fast-forward pull, runs a fresh build,
and optionally runs ./deploy.sh with any additional arguments you provide
(e.g., --yes --no-watch).

Options:
  --force        Skip the dirty-tree check (not recommended; you may lose changes)
  --skip-build   Do not run ./build.sh after updating
  --deploy       Auto-run ./deploy.sh after build (non-interactive)
  --help         Show this help

Examples:
  ./update-latest.sh --yes --no-watch
  ./update-latest.sh --deploy --yes --no-watch
  ./update-latest.sh --force --skip-build
  ./update-latest.sh --force --deploy --remote --remote-host my.host --remote-user sam --yes
EOF
}

while [[ $# -gt 0 ]]; do
case "$1" in
--force) FORCE_DIRTY=1; shift;;
--skip-build) SKIP_BUILD=1; shift;;
--deploy) AUTO_DEPLOY=1; shift;;
--help|-h) usage; exit 0;;
*) DEPLOY_ARGS+=("$1"); shift;;
esac
done

command -v git >/dev/null 2>&1 || { err "git is required"; exit 1; }

if [ "$FORCE_DIRTY" -ne 1 ]; then
if [ -n "$(git status --porcelain)" ]; then
err "Working tree is dirty. Commit/stash or re-run with --force."
exit 1
fi
fi

current_branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || true)"
if [ -z "$current_branch" ] || [ "$current_branch" = "HEAD" ]; then
err "Cannot update: detached HEAD or unknown branch."
exit 1
fi

if ! git ls-remote --exit-code --heads origin "$current_branch" >/dev/null 2>&1; then
err "Remote branch origin/$current_branch not found."
exit 1
fi

info "Fetching latest changes from origin/$current_branch"
git fetch --prune origin

info "Fast-forwarding to origin/$current_branch"
if ! git merge --ff-only "origin/$current_branch"; then
err "Fast-forward failed. Resolve manually or rebase, then rerun."
exit 1
fi

ok "Repository updated to $(git rev-parse --short HEAD)"

if [ "$SKIP_BUILD" -ne 1 ]; then
info "Running build.sh --yes"
if ! "$ROOT_DIR/build.sh" --yes; then
err "Build failed. Resolve issues and re-run."
exit 1
fi
ok "Build completed"
else
warn "Skipping build (--skip-build set)"
fi

# Offer to run deploy
if [ "$AUTO_DEPLOY" -eq 1 ]; then
info "Auto-deploy enabled; running deploy.sh ${DEPLOY_ARGS[*]:-(no extra args)}"
exec "$ROOT_DIR/deploy.sh" "${DEPLOY_ARGS[@]}"
fi

if [ -t 0 ]; then
read -r -p "Run deploy.sh now? [y/N]: " reply
reply="${reply:-n}"
case "$reply" in
[Yy]*)
info "Running deploy.sh ${DEPLOY_ARGS[*]:-(no extra args)}"
exec "$ROOT_DIR/deploy.sh" "${DEPLOY_ARGS[@]}"
;;
*)
ok "Update (and build) complete. Run ./deploy.sh ${DEPLOY_ARGS[*]} when ready."
exit 0
;;
esac
else
warn "Non-interactive mode and --deploy not set; skipping deploy."
ok "Update (and build) complete. Run ./deploy.sh ${DEPLOY_ARGS[*]} when ready."
fi
350 updates-dry-run.json (new file)
@@ -0,0 +1,350 @@
[
  { "key": "MODULE_INDIVIDUAL_PROGRESSION", "repo_name": "ZhengPeiRu21/mod-individual-progression", "topic": "azerothcore-module", "repo_url": "https://github.com/ZhengPeiRu21/mod-individual-progression" },
  { "key": "MODULE_PLAYERBOTS", "repo_name": "mod-playerbots/mod-playerbots", "topic": "azerothcore-module", "repo_url": "https://github.com/mod-playerbots/mod-playerbots" },
  { "key": "MODULE_OLLAMA_CHAT", "repo_name": "DustinHendrickson/mod-ollama-chat", "topic": "azerothcore-module", "repo_url": "https://github.com/DustinHendrickson/mod-ollama-chat" },
  { "key": "MODULE_PLAYER_BOT_LEVEL_BRACKETS", "repo_name": "DustinHendrickson/mod-player-bot-level-brackets", "topic": "azerothcore-module", "repo_url": "https://github.com/DustinHendrickson/mod-player-bot-level-brackets" },
  { "key": "MODULE_DUEL_RESET", "repo_name": "azerothcore/mod-duel-reset", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-duel-reset" },
  { "key": "MODULE_AOE_LOOT", "repo_name": "azerothcore/mod-aoe-loot", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-aoe-loot" },
  { "key": "MODULE_TIC_TAC_TOE", "repo_name": "azerothcore/mod-tic-tac-toe", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-tic-tac-toe" },
  { "key": "MODULE_NPC_BEASTMASTER", "repo_name": "azerothcore/mod-npc-beastmaster", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-npc-beastmaster" },
  { "key": "MODULE_MORPHSUMMON", "repo_name": "azerothcore/mod-morphsummon", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-morphsummon" },
  { "key": "MODULE_WORGOBLIN", "repo_name": "heyitsbench/mod-worgoblin", "topic": "azerothcore-module", "repo_url": "https://github.com/heyitsbench/mod-worgoblin" },
  { "key": "MODULE_SKELETON_MODULE", "repo_name": "azerothcore/skeleton-module", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/skeleton-module" },
  { "key": "MODULE_AUTOBALANCE", "repo_name": "azerothcore/mod-autobalance", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-autobalance" },
  { "key": "MODULE_TRANSMOG", "repo_name": "azerothcore/mod-transmog", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-transmog" },
  { "key": "MODULE_ARAC", "repo_name": "heyitsbench/mod-arac", "topic": "azerothcore-module", "repo_url": "https://github.com/heyitsbench/mod-arac" },
  { "key": "MODULE_GLOBAL_CHAT", "repo_name": "azerothcore/mod-global-chat", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-global-chat" },
  { "key": "MODULE_PRESTIGE_DRAFT_MODE", "repo_name": "Youpeoples/Prestige-and-Draft-Mode", "topic": "azerothcore-module", "repo_url": "https://github.com/Youpeoples/Prestige-and-Draft-Mode" },
  { "key": "MODULE_BLACK_MARKET_AUCTION_HOUSE", "repo_name": "Youpeoples/Black-Market-Auction-House", "topic": "azerothcore-module", "repo_url": "https://github.com/Youpeoples/Black-Market-Auction-House" },
  { "key": "MODULE_ULTIMATE_FULL_LOOT_PVP", "repo_name": "Youpeoples/Ultimate-Full-Loot-Pvp", "topic": "azerothcore-module", "repo_url": "https://github.com/Youpeoples/Ultimate-Full-Loot-Pvp" },
  { "key": "MODULE_SERVER_AUTO_SHUTDOWN", "repo_name": "azerothcore/mod-server-auto-shutdown", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-server-auto-shutdown" },
  { "key": "MODULE_TIME_IS_TIME", "repo_name": "dunjeon/mod-TimeIsTime", "topic": "azerothcore-module", "repo_url": "https://github.com/dunjeon/mod-TimeIsTime" },
  { "key": "MODULE_WAR_EFFORT", "repo_name": "azerothcore/mod-war-effort", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-war-effort" },
  { "key": "MODULE_FIREWORKS", "repo_name": "azerothcore/mod-fireworks-on-level", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-fireworks-on-level" },
  { "key": "MODULE_NPC_ENCHANTER", "repo_name": "azerothcore/mod-npc-enchanter", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-npc-enchanter" },
  { "key": "MODULE_NPC_BUFFER", "repo_name": "azerothcore/mod-npc-buffer", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-npc-buffer" },
  { "key": "MODULE_PVP_TITLES", "repo_name": "azerothcore/mod-pvp-titles", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-pvp-titles" },
  { "key": "MODULE_CHALLENGE_MODES", "repo_name": "ZhengPeiRu21/mod-challenge-modes", "topic": "azerothcore-module", "repo_url": "https://github.com/ZhengPeiRu21/mod-challenge-modes" },
  { "key": "MODULE_TREASURE_CHEST_SYSTEM", "repo_name": "zyggy123/Treasure-Chest-System", "topic": "azerothcore-module", "repo_url": "https://github.com/zyggy123/Treasure-Chest-System" },
  { "key": "MODULE_ASSISTANT", "repo_name": "noisiver/mod-assistant", "topic": "azerothcore-module", "repo_url": "https://github.com/noisiver/mod-assistant" },
  { "key": "MODULE_STATBOOSTER", "repo_name": "AnchyDev/StatBooster", "topic": "azerothcore-module", "repo_url": "https://github.com/AnchyDev/StatBooster" },
  { "key": "MODULE_BG_SLAVERYVALLEY", "repo_name": "Helias/mod-bg-slaveryvalley", "topic": "azerothcore-module", "repo_url": "https://github.com/Helias/mod-bg-slaveryvalley" },
  { "key": "MODULE_REAGENT_BANK", "repo_name": "ZhengPeiRu21/mod-reagent-bank", "topic": "azerothcore-module", "repo_url": "https://github.com/ZhengPeiRu21/mod-reagent-bank" },
  { "key": "MODULE_ELUNA_TS", "repo_name": "azerothcore/eluna-ts", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/eluna-ts" },
  { "key": "MODULE_AZEROTHSHARD", "repo_name": "azerothcore/mod-azerothshard", "topic": "azerothcore-module", "repo_url": "https://github.com/azerothcore/mod-azerothshard" },
  { "key": "MODULE_LEVEL_GRANT", "repo_name": "michaeldelago/mod-quest-count-level", "topic": "azerothcore-module", "repo_url": "https://github.com/michaeldelago/mod-quest-count-level" },
  { "key": "MODULE_DUNGEON_RESPAWN", "repo_name": "AnchyDev/DungeonRespawn", "topic": "azerothcore-module", "repo_url": "https://github.com/AnchyDev/DungeonRespawn" },
  { "key": "MODULE_LUA_AH_BOT", "repo_name": "mostlynick3/azerothcore-lua-ah-bot", "topic": "azerothcore-lua", "repo_url": "https://github.com/mostlynick3/azerothcore-lua-ah-bot" },
  { "key": "MODULE_ACCOUNTWIDE_SYSTEMS", "repo_name": "Aldori15/azerothcore-eluna-accountwide", "topic": "azerothcore-lua", "repo_url": "https://github.com/Aldori15/azerothcore-eluna-accountwide" },
  { "key": "MODULE_ELUNA_SCRIPTS", "repo_name": "Isidorsson/Eluna-scripts", "topic": "azerothcore-lua", "repo_url": "https://github.com/Isidorsson/Eluna-scripts" },
  { "key": "MODULE_TRANSMOG_AIO", "repo_name": "DanieltheDeveloper/azerothcore-transmog-3.3.5a", "topic": "azerothcore-lua", "repo_url": "https://github.com/DanieltheDeveloper/azerothcore-transmog-3.3.5a" },
  { "key": "MODULE_HARDCORE_MODE", "repo_name": "PrivateDonut/hardcore_mode", "topic": "azerothcore-lua", "repo_url": "https://github.com/PrivateDonut/hardcore_mode" },
  { "key": "MODULE_RECRUIT_A_FRIEND", "repo_name": "55Honey/Acore_RecruitAFriend", "topic": "azerothcore-lua", "repo_url": "https://github.com/55Honey/Acore_RecruitAFriend" },
  { "key": "MODULE_EVENT_SCRIPTS", "repo_name": "55Honey/Acore_eventScripts", "topic": "azerothcore-lua", "repo_url": "https://github.com/55Honey/Acore_eventScripts" },
  { "key": "MODULE_LOTTERY_LUA", "repo_name": "zyggy123/lottery-lua", "topic": "azerothcore-lua", "repo_url": "https://github.com/zyggy123/lottery-lua" },
  { "key": "MODULE_HORADRIC_CUBE", "repo_name": "TITIaio/Horadric-Cube-for-World-of-Warcraft", "topic": "azerothcore-lua", "repo_url": "https://github.com/TITIaio/Horadric-Cube-for-World-of-Warcraft" },
  { "key": "MODULE_GLOBAL_MAIL_BANKING_AUCTIONS", "repo_name": "Aldori15/azerothcore-global-mail_banking_auctions", "topic": "azerothcore-lua", "repo_url": "https://github.com/Aldori15/azerothcore-global-mail_banking_auctions" },
  { "key": "MODULE_LEVEL_UP_REWARD", "repo_name": "55Honey/Acore_LevelUpReward", "topic": "azerothcore-lua", "repo_url": "https://github.com/55Honey/Acore_LevelUpReward" },
  { "key": "MODULE_AIO_BLACKJACK", "repo_name": "Manmadedrummer/AIO-Blackjack", "topic": "azerothcore-lua", "repo_url": "https://github.com/Manmadedrummer/AIO-Blackjack" },
  { "key": "MODULE_NPCBOT_EXTENDED_COMMANDS", "repo_name": "Day36512/Npcbot_Extended_Commands", "topic": "azerothcore-lua", "repo_url": "https://github.com/Day36512/Npcbot_Extended_Commands" },
  { "key": "MODULE_ACTIVE_CHAT", "repo_name": "Day36512/ActiveChat", "topic": "azerothcore-lua", "repo_url": "https://github.com/Day36512/ActiveChat" },
  { "key": "MODULE_MULTIVENDOR", "repo_name": "Shadowveil-WotLK/AzerothCore-lua-MultiVendor", "topic": "azerothcore-lua", "repo_url": "https://github.com/Shadowveil-WotLK/AzerothCore-lua-MultiVendor" },
  { "key": "MODULE_EXCHANGE_NPC", "repo_name": "55Honey/Acore_ExchangeNpc", "topic": "azerothcore-lua", "repo_url": "https://github.com/55Honey/Acore_ExchangeNpc" },
  { "key": "MODULE_DYNAMIC_TRADER", "repo_name": "Day36512/Dynamic-Trader", "topic": "azerothcore-lua", "repo_url": "https://github.com/Day36512/Dynamic-Trader" },
  { "key": "MODULE_DISCORD_NOTIFIER", "repo_name": "0xCiBeR/Acore_DiscordNotifier", "topic": "azerothcore-lua", "repo_url": "https://github.com/0xCiBeR/Acore_DiscordNotifier" },
  { "key": "MODULE_ZONE_CHECK", "repo_name": "55Honey/Acore_Zonecheck", "topic": "azerothcore-lua", "repo_url": "https://github.com/55Honey/Acore_Zonecheck" },
  { "key": "MODULE_HARDCORE_MODE", "repo_name": "HellionOP/Lua-HardcoreMode", "topic": "azerothcore-lua", "repo_url": "https://github.com/HellionOP/Lua-HardcoreMode" },
  { "key": "MODULE_SEND_AND_BIND", "repo_name": "55Honey/Acore_SendAndBind", "topic": "azerothcore-lua", "repo_url": "https://github.com/55Honey/Acore_SendAndBind" },
  { "key": "MODULE_TEMP_ANNOUNCEMENTS", "repo_name": "55Honey/Acore_TempAnnouncements", "topic": "azerothcore-lua", "repo_url": "https://github.com/55Honey/Acore_TempAnnouncements" },
  { "key": "MODULE_CARBON_COPY", "repo_name": "55Honey/Acore_CarbonCopy", "topic": "azerothcore-lua", "repo_url": "https://github.com/55Honey/Acore_CarbonCopy" }
]