44 Commits

Author SHA1 Message Date
uprightbass360
c0aaf8ce96 chore: sync module manifest 2026-01-03 09:11:12 +00:00
uprightbass360
861e924aae chore: update eluna naming 2026-01-03 04:07:45 -05:00
uprightbass360
4e8af3f7a7 cleanup ignore 2026-01-03 02:27:10 -05:00
uprightbass360
dc32e715ca chore: removes TS module and updates names 2026-01-03 02:21:35 -05:00
uprightbass360
93d3df7436 chore: docs update 2026-01-03 02:19:01 -05:00
uprightbass360
179c486f73 mod-ale finnagling 2026-01-02 18:33:10 -05:00
uprightbass360
a497f2844c cleanup: better variable handling 2026-01-02 16:18:25 -05:00
uprightbass360
b046f7f8ba fix: workaround for ale+playerbots 2026-01-02 02:53:47 -05:00
uprightbass360
f5b3b07bcb update module profiles 2026-01-01 15:50:04 -05:00
uprightbass360
07110902a6 cleanup: remove stale changelog and notes 2025-12-27 18:48:40 -05:00
uprightbass360
a67bfcd87b sets module rebuild path automatically 2025-12-27 18:41:33 -05:00
uprightbass360
b0444019ae cleans up variable expansion 2025-12-27 18:36:50 -05:00
Deckard
29d299e402 Change realmmaster status from 'active' to 'blocked' 2025-12-27 18:14:26 -05:00
Deckard
10c45716cf Remove MODULE_AZEROTHCORE_REALMMASTER from template 2025-12-27 18:14:26 -05:00
uprightbass360
3a8f076894 chore: sync module manifest 2025-12-27 18:14:26 -05:00
uprightbass360
3ec83b7714 adds fallback for workflow 2025-12-27 18:07:40 -05:00
uprightbass360
b7d55976cd updates setup language 2025-12-27 17:05:10 -05:00
uprightbass360
63b0a4ba5d adds thanks to readme 2025-12-27 17:00:05 -05:00
uprightbass360
9b9d99904a cleans up env generation and dropps disabled flags 2025-12-27 16:46:27 -05:00
uprightbass360
690ee4317c updates modules and module setup 2025-12-27 16:46:27 -05:00
uprightbass360
b8245e7b3f chore: updates modules and module updater 2025-12-27 15:30:59 -05:00
uprightbass360
6ed10dead7 add helps 2025-12-12 18:56:42 -05:00
uprightbass360
9f3038a516 flips qr generation to params 2025-12-12 18:49:17 -05:00
uprightbass360
ea3c2e750c adds pdump and 2fa generation 2025-12-12 18:33:53 -05:00
uprightbass360
63b2ca8151 backup fixes 2025-12-03 22:13:22 -05:00
uprightbass360
4596320856 add log bind mounts 2025-12-02 01:26:14 -05:00
uprightbass360
d11b9f4089 break apart paths for easier management 2025-11-30 23:21:09 -05:00
uprightbass360
82a5104e87 profile updates 2025-11-27 01:06:48 -05:00
uprightbass360
251b5d8f9f update port display for clarity 2025-11-26 15:37:41 -05:00
uprightbass360
5620fbae91 fix size computing for nested container 2025-11-26 15:19:41 -05:00
uprightbass360
319da1a553 remove test config 2025-11-26 15:00:08 -05:00
uprightbass360
681da2767b exclude bots from stats 2025-11-26 01:58:06 -05:00
uprightbass360
d38c7557e0 status info 2025-11-26 01:31:00 -05:00
uprightbass360
df7689f26a cleanup 2025-11-25 22:11:47 -05:00
uprightbass360
b62e33bb03 docs 2025-11-25 17:45:42 -05:00
uprightbass360
44f9beff71 cleanup hard-coded vars 2025-11-25 17:45:17 -05:00
uprightbass360
e1dc98f1e7 deploy updates 2025-11-23 16:42:50 -05:00
uprightbass360
7e9e6e1b4f setup hardening 2025-11-23 16:05:00 -05:00
uprightbass360
3d0e88e9f6 add status info for new containers 2025-11-23 16:04:29 -05:00
uprightbass360
b3019eb603 directory staging 2025-11-23 13:05:08 -05:00
uprightbass360
327774c0df tagging new modules and images 2025-11-22 22:08:07 -05:00
uprightbass360
9742ce3f83 cleanup 2025-11-22 16:59:18 -05:00
uprightbass360
6ddfe9b2c7 cleanup: validation and integrations for importing data 2025-11-22 16:56:02 -05:00
uprightbass360
e6231bb4a4 feat: comprehensive module system and database management improvements
This commit introduces major enhancements to the module installation system,
database management, and configuration handling for AzerothCore deployments.

## Module System Improvements

### Module SQL Staging & Installation
- Refactor module SQL staging to properly handle AzerothCore's sql/ directory structure
- Fix SQL staging path to use correct AzerothCore format (sql/custom/db_*/*)
- Implement conditional module database importing based on enabled modules
- Add support for both cpp-modules and lua-scripts module types
- Handle rsync exit code 23 (permission warnings) gracefully during deployment
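The rsync tolerance described in the last bullet can be sketched as a small wrapper. The function name, paths, and warning text below are illustrative, not the project's actual staging script:

```shell
# Stage module SQL, treating rsync exit code 23 (partial transfer,
# typically permission warnings) as a non-fatal condition.
stage_module_sql() {
  src="$1"; dest="$2"
  mkdir -p "$dest" || return 1
  rsync -a "$src/" "$dest/" && rc=0 || rc=$?
  if [ "$rc" -eq 23 ]; then
    echo "WARN: rsync exit 23 (permission warnings); continuing" >&2
    return 0
  fi
  return "$rc"
}
```

Any other nonzero rsync status still propagates, so genuine transfer failures continue to abort the deployment.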

### Module Manifest & Automation
- Add automated module manifest generation via GitHub Actions workflow
- Implement Python-based module manifest updater with comprehensive validation
- Add module dependency tracking and SQL file discovery
- Support for blocked modules and module metadata management

## Database Management Enhancements

### Database Import System
- Add db-guard container for continuous database health monitoring and verification
- Implement conditional database import that skips when databases are current
- Add backup restoration and SQL staging coordination
- Support for Playerbots database (4th database) in all import operations
- Add comprehensive database health checking and status reporting
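The skip-when-current behavior can be illustrated with a marker-file check. The marker path and revision source here are hypothetical, not the actual db-guard implementation:

```shell
# Decide whether a full database import is needed by comparing a stored
# marker against the current SQL revision (both arguments illustrative).
db_import_needed() {
  marker="$1"; current_rev="$2"
  # Needed when no marker exists or the recorded revision is stale.
  [ ! -f "$marker" ] || [ "$(cat "$marker")" != "$current_rev" ]
}

# After a successful import, record the revision so later runs skip it.
record_db_import() {
  marker="$1"; current_rev="$2"
  mkdir -p "$(dirname "$marker")"
  printf '%s' "$current_rev" > "$marker"
}
```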

### Database Configuration
- Implement 10 new dbimport.conf settings from environment variables:
  - Database.Reconnect.Seconds/Attempts for connection reliability
  - Updates.AllowedModules for module auto-update control
  - Updates.Redundancy for data integrity checks
  - Worker/Synch thread settings for all three core databases
- Auto-apply dbimport.conf settings via auto-post-install.sh
- Add environment variable injection for db-import and db-guard containers
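A minimal sketch of that env-to-conf injection follows; the `DBIMPORT_*` variable names and default values are assumptions for illustration (the real mapping lives in auto-post-install.sh):

```shell
# Render dbimport.conf settings from environment variables, applying
# defaults when a variable is unset or empty.
write_dbimport_conf() {
  conf="$1"
  {
    printf 'Database.Reconnect.Seconds = %s\n'  "${DBIMPORT_RECONNECT_SECONDS:-15}"
    printf 'Database.Reconnect.Attempts = %s\n' "${DBIMPORT_RECONNECT_ATTEMPTS:-20}"
    printf 'Updates.AllowedModules = %s\n'      "${DBIMPORT_ALLOWED_MODULES:-all}"
    printf 'Updates.Redundancy = %s\n'          "${DBIMPORT_REDUNDANCY:-1}"
  } > "$conf"
}
```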

### Backup & Recovery
- Fix backup scheduler to prevent immediate execution on container startup
- Add backup status monitoring script with detailed reporting
- Implement backup import/export utilities
- Add database verification scripts for SQL update tracking
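The scheduler fix amounts to sleeping until the next interval boundary instead of backing up immediately on startup; a sketch of the interval math only (no actual backup call):

```shell
# Seconds until the next epoch-aligned interval boundary, so a freshly
# started scheduler waits first instead of firing a backup right away.
seconds_until_next_run() {
  interval_min="${1:-60}"
  now="${2:-$(date +%s)}"            # injectable clock for testing
  interval_sec=$((interval_min * 60))
  echo $(( interval_sec - now % interval_sec ))
}

# Scheduler loop shape: sleep first, then back up.
# while true; do sleep "$(seconds_until_next_run 60)"; run_backup; done
```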

## User Import Directory

- Add new import/ directory for user-provided database files and configurations
- Support for custom SQL files, configuration overrides, and example templates
- Automatic import of user-provided databases and configs during initialization
- Documentation and examples for custom database imports
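A rough shape for that initialization pass; the directory layout and the commented-out mysql call are illustrative:

```shell
# Apply user-provided .sql files from the import directory in glob
# (lexical) order; real code would pipe each file into mysql.
apply_user_imports() {
  import_dir="$1"; db="$2"
  for f in "$import_dir"/*.sql; do
    [ -e "$f" ] || continue        # glob matched nothing
    echo "importing $f into $db"
    # mysql "$db" < "$f"
  done
}
```

Prefixing filenames with numbers (e.g. `10-accounts.sql`, `20-items.sql`) gives users deterministic ordering for free.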

## Configuration & Environment

- Eliminate CLIENT_DATA_VERSION warning by adding default value syntax
- Improve CLIENT_DATA_VERSION documentation in .env.template
- Add comprehensive database import settings to .env and .env.template
- Update setup.sh to handle new configuration variables with proper defaults
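The default-value syntax referred to above is the standard `${VAR:-default}` expansion, which both POSIX shells and Docker Compose interpolation understand:

```shell
# ${VAR:-default} expands to the default only when VAR is unset or
# empty, which silences Compose's "variable is not set" warning for
# optional settings like CLIENT_DATA_VERSION.
unset CLIENT_DATA_VERSION
echo "version=${CLIENT_DATA_VERSION:-auto-detect}"   # version=auto-detect
CLIENT_DATA_VERSION=v18.0
echo "version=${CLIENT_DATA_VERSION:-auto-detect}"   # version=v18.0
```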

## Monitoring & Debugging

- Add status dashboard with Go-based terminal UI (statusdash.go)
- Implement JSON status output (statusjson.sh) for programmatic access
- Add comprehensive database health check script
- Add repair-storage-permissions.sh utility for permission issues
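The JSON output mode can be as simple as a printf over the collected fields; the field names below are illustrative, not the actual statusjson.sh schema:

```shell
# Emit a one-line JSON status object for programmatic consumers.
status_json() {
  realm="$1"; world_state="$2"; players="$3"
  printf '{"realm":"%s","worldserver":"%s","players":%d}\n' \
    "$realm" "$world_state" "$players"
}
```

Example: `status_json "RealmMaster" running 12`. Note that a real script would need to escape quotes in field values; jq is the robust option when available.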

## Testing & Documentation

- Add Phase 1 integration test suite for module installation verification
- Add comprehensive documentation for:
  - Database management (DATABASE_MANAGEMENT.md)
  - Module SQL analysis (AZEROTHCORE_MODULE_SQL_ANALYSIS.md)
  - Implementation mapping (IMPLEMENTATION_MAP.md)
  - SQL staging comparison and path coverage
  - Module assets and DBC file requirements
- Update SCRIPTS.md, ADVANCED.md, and troubleshooting documentation
- Update references from database-import/ to import/ directory

## Breaking Changes

- Renamed database-import/ directory to import/ for clarity
- Module SQL files now staged to AzerothCore-compatible paths
- db-guard container now required for proper database lifecycle management

## Bug Fixes

- Fix module SQL staging directory structure for AzerothCore compatibility
- Handle rsync exit code 23 gracefully during deployments
- Prevent backup from running immediately on container startup
- Correct SQL staging paths for proper module installation
2025-11-22 16:56:02 -05:00
60 changed files with 5170 additions and 1546 deletions


@@ -14,13 +14,22 @@ COMPOSE_OVERRIDE_WORLDSERVER_DEBUG_LOGGING_ENABLED=0
# Project name
# =====================
# Customize this to match your deployment slug (used for container names/tags)
-COMPOSE_PROJECT_NAME=azerothcore-stack
+COMPOSE_PROJECT_NAME=azerothcore-realmmaster
# =====================
# Storage & Timezone
# =====================
STORAGE_PATH=./storage
STORAGE_PATH_LOCAL=./local-storage
STORAGE_CONFIG_PATH=${STORAGE_PATH}/config
STORAGE_LOGS_PATH=${STORAGE_PATH}/logs
STORAGE_MODULES_PATH=${STORAGE_PATH}/modules
STORAGE_LUA_SCRIPTS_PATH=${STORAGE_PATH}/lua_scripts
STORAGE_MODULES_META_PATH=${STORAGE_MODULES_PATH}/.modules-meta
STORAGE_MODULE_SQL_PATH=${STORAGE_PATH}/module-sql-updates
STORAGE_INSTALL_MARKERS_PATH=${STORAGE_PATH}/install-markers
STORAGE_CLIENT_DATA_PATH=${STORAGE_PATH}/client-data
STORAGE_LOCAL_SOURCE_PATH=${STORAGE_PATH_LOCAL}/source
BACKUP_PATH=${STORAGE_PATH}/backups
HOST_ZONEINFO_PATH=/usr/share/zoneinfo
TZ=UTC
@@ -65,12 +74,19 @@ DB_GUARD_VERIFY_INTERVAL_SECONDS=86400
# =====================
# Module SQL staging
# =====================
-MODULE_SQL_STAGE_PATH=${STORAGE_PATH_LOCAL}/module-sql-updates
+STAGE_PATH_MODULE_SQL=${STORAGE_MODULE_SQL_PATH}
# =====================
# Modules rebuild source path
# =====================
# Default AzerothCore source checkout used for module rebuilds
MODULES_REBUILD_SOURCE_PATH=${STORAGE_PATH_LOCAL}/source/azerothcore
# =====================
# SQL Source Overlay
# =====================
-AC_SQL_SOURCE_PATH=${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql
+SOURCE_DIR=${MODULES_REBUILD_SOURCE_PATH}
+AC_SQL_SOURCE_PATH=${MODULES_REBUILD_SOURCE_PATH}/data/sql
# =====================
# Images
@@ -80,15 +96,15 @@ AC_DB_IMPORT_IMAGE=acore/ac-wotlk-db-import:master
AC_AUTHSERVER_IMAGE=acore/ac-wotlk-authserver:master
AC_WORLDSERVER_IMAGE=acore/ac-wotlk-worldserver:master
# Services (Playerbots)
-AC_AUTHSERVER_IMAGE_PLAYERBOTS=azerothcore-realmmaster:authserver-playerbots
-AC_WORLDSERVER_IMAGE_PLAYERBOTS=azerothcore-realmmaster:worldserver-playerbots
+AC_AUTHSERVER_IMAGE_PLAYERBOTS=${COMPOSE_PROJECT_NAME}:authserver-playerbots
+AC_WORLDSERVER_IMAGE_PLAYERBOTS=${COMPOSE_PROJECT_NAME}:worldserver-playerbots
# Services (Module Build Tags)
# Images used during module compilation and tagging
-AC_AUTHSERVER_IMAGE_MODULES=azerothcore-realmmaster:authserver-modules-latest
-AC_WORLDSERVER_IMAGE_MODULES=azerothcore-realmmaster:worldserver-modules-latest
+AC_AUTHSERVER_IMAGE_MODULES=${COMPOSE_PROJECT_NAME}:authserver-modules-latest
+AC_WORLDSERVER_IMAGE_MODULES=${COMPOSE_PROJECT_NAME}:worldserver-modules-latest
# Client Data
AC_CLIENT_DATA_IMAGE=acore/ac-wotlk-client-data:master
-AC_CLIENT_DATA_IMAGE_PLAYERBOTS=uprightbass360/azerothcore-wotlk-playerbots:client-data-Playerbot
+AC_CLIENT_DATA_IMAGE_PLAYERBOTS=${COMPOSE_PROJECT_NAME}:client-data-playerbots
# Build artifacts
DOCKER_IMAGE_TAG=master
AC_AUTHSERVER_IMAGE_BASE=acore/ac-wotlk-authserver
@@ -141,7 +157,7 @@ MYSQL_INNODB_LOG_FILE_SIZE=64M
MYSQL_INNODB_REDO_LOG_CAPACITY=512M
MYSQL_RUNTIME_TMPFS_SIZE=8G
MYSQL_DISABLE_BINLOG=1
-MYSQL_CONFIG_DIR=${STORAGE_PATH}/config/mysql/conf.d
+MYSQL_CONFIG_DIR=${STORAGE_CONFIG_PATH}/mysql/conf.d
DB_WAIT_RETRIES=60
DB_WAIT_SLEEP=10
@@ -180,6 +196,7 @@ DB_CHARACTER_SYNCH_THREADS=1
BACKUP_RETENTION_DAYS=3
BACKUP_RETENTION_HOURS=6
BACKUP_DAILY_TIME=09
BACKUP_INTERVAL_MINUTES=60
# Optional comma/space separated schemas to include in automated backups
BACKUP_EXTRA_DATABASES=
BACKUP_HEALTHCHECK_MAX_MINUTES=1440
@@ -209,6 +226,8 @@ MODULES_REQUIRES_PLAYERBOT_SOURCE=0
# Only set this if you need to override the auto-detected version
# Example: v18.0, v17.0, etc.
CLIENT_DATA_VERSION=
# Client data path for deployment (auto-calculated when left blank)
CLIENT_DATA_PATH=
# =====================
# Server Configuration
@@ -217,176 +236,11 @@ CLIENT_DATA_VERSION=
# Available: none, blizzlike, fast-leveling, hardcore-pvp, casual-pve
SERVER_CONFIG_PRESET=none
CLIENT_DATA_CACHE_PATH=${STORAGE_PATH_LOCAL}/client-data-cache
CLIENT_DATA_PATH=${STORAGE_PATH}/client-data
# =====================
# Module toggles (0/1)
# =====================
# Enable/disable modules by setting to 1 (enabled) or 0 (disabled)
# Modules are organized by category for easier navigation
# 🤖 Automation
# Playerbot and AI systems
MODULE_NPCBOT_EXTENDED_COMMANDS=0
MODULE_OLLAMA_CHAT=0
# mod-playerbots: Installs SQL/config assets; core functionality is built into playerbot images
MODULE_PLAYERBOTS=0
MODULE_PLAYER_BOT_LEVEL_BRACKETS=0
# ✨ Quality of Life
# Convenience features that improve gameplay experience
MODULE_AOE_LOOT=0
MODULE_AUTO_REVIVE=0
MODULE_FIREWORKS=0
MODULE_INSTANCE_RESET=0
MODULE_LEARN_SPELLS=0
MODULE_SOLO_LFG=0
# ⚔️ Gameplay Enhancement
# Core gameplay improvements and mechanics
MODULE_AUTOBALANCE=0
MODULE_CHALLENGE_MODES=0
MODULE_DUEL_RESET=0
MODULE_DUNGEON_RESPAWN=0
MODULE_HARDCORE_MODE=0
MODULE_HORADRIC_CUBE=0
MODULE_SOLOCRAFT=0
MODULE_STATBOOSTER=0
MODULE_TIME_IS_TIME=0
# 🏪 NPC Services
# Service NPCs that provide player utilities
MODULE_ASSISTANT=0
MODULE_MULTIVENDOR=0
MODULE_NPC_BEASTMASTER=0
MODULE_NPC_BUFFER=0
MODULE_NPC_ENCHANTER=0
MODULE_NPC_FREE_PROFESSIONS=0
# mod-npc-talent-template: Admin commands: .templatenpc create [TemplateName] and .templatenpc reload
MODULE_NPC_TALENT_TEMPLATE=0
MODULE_REAGENT_BANK=0
MODULE_TRANSMOG=0
# ⚡ PvP
# Player vs Player focused modules
MODULE_1V1_ARENA=0
# mod-arena-replay: NPC ID: 98500; known issue: players who were participants experience unusual behavior when watching their own replay
MODULE_ARENA_REPLAY=0
MODULE_GAIN_HONOR_GUARD=0
MODULE_PHASED_DUELS=0
MODULE_PVP_TITLES=0
MODULE_ULTIMATE_FULL_LOOT_PVP=0
# 📈 Progression
# Character and server progression systems
MODULE_DYNAMIC_XP=0
MODULE_INDIVIDUAL_PROGRESSION=0
MODULE_ITEM_LEVEL_UP=0
MODULE_LEVEL_GRANT=0
# mod-progression-system: SQL files cannot be unloaded once executed; requires auto DB updater enabled in worldserver config
MODULE_PROGRESSION_SYSTEM=0
MODULE_PROMOTION_AZEROTHCORE=0
MODULE_WEEKEND_XP=0
# mod-zone-difficulty: Mythicmode NPC 1128001 spawned in raids/heroic dungeons; NPC 1128002 for Mythicmode rewards
MODULE_ZONE_DIFFICULTY=0
# 💰 Economy
# Auction house, trading, and economic systems
MODULE_AHBOT=0
MODULE_BLACK_MARKET_AUCTION_HOUSE=0
MODULE_DYNAMIC_TRADER=0
MODULE_EXCHANGE_NPC=0
MODULE_GLOBAL_MAIL_BANKING_AUCTIONS=0
MODULE_LOTTERY_LUA=0
MODULE_LUA_AH_BOT=0
MODULE_RANDOM_ENCHANTS=0
# 👥 Social
# Social and community features
MODULE_ACTIVE_CHAT=0
MODULE_BOSS_ANNOUNCER=0
MODULE_BREAKING_NEWS=0
MODULE_DISCORD_NOTIFIER=0
MODULE_GLOBAL_CHAT=0
MODULE_TEMP_ANNOUNCEMENTS=0
# 👤 Account-Wide
# Features that apply across all characters on an account
MODULE_ACCOUNTWIDE_SYSTEMS=0
MODULE_ACCOUNT_ACHIEVEMENTS=0
MODULE_ACCOUNT_MOUNTS=0
# 🎨 Customization
# Character and appearance customization
MODULE_ARAC=0
# mod-morphsummon: Allows customization of summoned creature appearances (Warlock demons, Death Knight ghouls, Mage water elementals); NPC ID: 601072
MODULE_MORPHSUMMON=0
MODULE_TRANSMOG_AIO=0
MODULE_WORGOBLIN=0
# 📜 Scripting
# Lua/Eluna scripting frameworks and tools
# mod-aio: Azeroth Interface Override - enables client-server interface communication
MODULE_AIO=0
MODULE_ELUNA=1
MODULE_ELUNA_SCRIPTS=0
MODULE_ELUNA_TS=0
MODULE_EVENT_SCRIPTS=0
# 🔧 Admin Tools
# Server administration and management utilities
MODULE_ANTIFARMING=0
MODULE_CARBON_COPY=0
# mod-keep-out: Requires editing database table mod_mko_map_lock; use .gps command to obtain map and zone IDs
MODULE_KEEP_OUT=0
MODULE_SEND_AND_BIND=0
MODULE_SERVER_AUTO_SHUTDOWN=0
# mod-spell-regulator: WARNING: Custom code changes mandatory before module functions; requires custom hooks from external gist
MODULE_SPELL_REGULATOR=0
MODULE_WHO_LOGGED=0
MODULE_ZONE_CHECK=0
# 💎 Premium/VIP
# Premium account and VIP systems
MODULE_ACORE_SUBSCRIPTIONS=0
# mod-premium: Script must be assigned to an item (like hearthstone) using script name 'premium_account'
MODULE_PREMIUM=0
MODULE_SYSTEM_VIP=0
# 🎮 Mini-Games
# Fun and entertainment features
MODULE_AIO_BLACKJACK=0
MODULE_POCKET_PORTAL=0
# mod-tic-tac-toe: NPC ID: 100155
MODULE_TIC_TAC_TOE=0
# 🏰 Content
# Additional game content and features
MODULE_AZEROTHSHARD=0
MODULE_BG_SLAVERYVALLEY=0
MODULE_GUILDHOUSE=0
MODULE_TREASURE_CHEST_SYSTEM=0
MODULE_WAR_EFFORT=0
# 🎁 Rewards
# Player reward and incentive systems
MODULE_LEVEL_UP_REWARD=0
MODULE_PRESTIGE_DRAFT_MODE=0
MODULE_RECRUIT_A_FRIEND=0
# mod-resurrection-scroll: Requires EnablePlayerSettings to be enabled in worldserver config file
MODULE_RESURRECTION_SCROLL=0
MODULE_REWARD_PLAYED_TIME=0
# 🛠️ Developer Tools
# Development and testing utilities
MODULE_SKELETON_MODULE=0
# =====================
# Rebuild automation
# =====================
AUTO_REBUILD_ON_DEPLOY=0
# Default AzerothCore source checkout used for module rebuilds
MODULES_REBUILD_SOURCE_PATH=${STORAGE_PATH_LOCAL}/source/azerothcore
# =====================
# Source repositories
@@ -443,39 +297,111 @@ KEIRA_DATABASE_HOST=ac-mysql
KEIRA_DATABASE_PORT=3306
# Auto-generated defaults for new modules
MODULE_NPCBOT_EXTENDED_COMMANDS=0
MODULE_OLLAMA_CHAT=0
MODULE_PLAYERBOTS=0
MODULE_PLAYER_BOT_LEVEL_BRACKETS=0
MODULE_AOE_LOOT=0
MODULE_AUTO_REVIVE=0
MODULE_FIREWORKS=0
MODULE_INSTANCE_RESET=0
MODULE_LEARN_SPELLS=0
MODULE_SOLO_LFG=0
MODULE_AUTOBALANCE=0
MODULE_DUEL_RESET=0
MODULE_HARDCORE_MODE=0
MODULE_HORADRIC_CUBE=0
MODULE_SOLOCRAFT=0
MODULE_TIME_IS_TIME=0
MODULE_ASSISTANT=0
MODULE_NPC_BEASTMASTER=0
MODULE_NPC_BUFFER=0
MODULE_NPC_ENCHANTER=0
MODULE_NPC_FREE_PROFESSIONS=0
MODULE_NPC_TALENT_TEMPLATE=0
MODULE_REAGENT_BANK=0
MODULE_TRANSMOG=0
MODULE_1V1_ARENA=0
MODULE_ARENA_REPLAY=0
MODULE_GAIN_HONOR_GUARD=0
MODULE_PHASED_DUELS=0
MODULE_PVP_TITLES=0
MODULE_ULTIMATE_FULL_LOOT_PVP=0
MODULE_DYNAMIC_XP=0
MODULE_INDIVIDUAL_PROGRESSION=0
MODULE_ITEM_LEVEL_UP=0
MODULE_PROGRESSION_SYSTEM=0
MODULE_PROMOTION_AZEROTHCORE=0
MODULE_WEEKEND_XP=0
MODULE_ZONE_DIFFICULTY=0
MODULE_DYNAMIC_TRADER=0
MODULE_EXCHANGE_NPC=0
MODULE_GLOBAL_MAIL_BANKING_AUCTIONS=0
MODULE_LOTTERY_LUA=0
MODULE_LUA_AH_BOT=0
MODULE_RANDOM_ENCHANTS=0
MODULE_ACTIVE_CHAT=0
MODULE_BOSS_ANNOUNCER=0
MODULE_BREAKING_NEWS=0
MODULE_DISCORD_NOTIFIER=0
MODULE_GLOBAL_CHAT=0
MODULE_TEMP_ANNOUNCEMENTS=0
MODULE_ACCOUNTWIDE_SYSTEMS=0
MODULE_ACCOUNT_ACHIEVEMENTS=0
MODULE_ACCOUNT_MOUNTS=0
MODULE_ARAC=0
MODULE_MORPHSUMMON=0
MODULE_TRANSMOG_AIO=0
MODULE_WORGOBLIN=0
MODULE_AIO=0
MODULE_ELUNA=1
MODULE_ELUNA_SCRIPTS=0
MODULE_ELUNA_TS=0
MODULE_EVENT_SCRIPTS=0
MODULE_ANTIFARMING=0
MODULE_CARBON_COPY=0
MODULE_KEEP_OUT=0
MODULE_SEND_AND_BIND=0
MODULE_SERVER_AUTO_SHUTDOWN=0
MODULE_SPELL_REGULATOR=0
MODULE_WHO_LOGGED=0
MODULE_ZONE_CHECK=0
MODULE_PREMIUM=0
MODULE_SYSTEM_VIP=0
MODULE_AIO_BLACKJACK=0
MODULE_TIC_TAC_TOE=0
MODULE_BG_SLAVERYVALLEY=0
MODULE_GUILDHOUSE=0
MODULE_TREASURE_CHEST_SYSTEM=0
MODULE_WAR_EFFORT=0
MODULE_LEVEL_UP_REWARD=0
MODULE_PRESTIGE_DRAFT_MODE=0
MODULE_RECRUIT_A_FRIEND=0
MODULE_RESURRECTION_SCROLL=0
MODULE_REWARD_PLAYED_TIME=0
MODULE_SKELETON_MODULE=0
MODULE_1V1_PVP_SYSTEM=0
MODULE_ACI=0
MODULE_ACORE_API=0
MODULE_ACORE_BG_END_ANNOUNCER=0
MODULE_ACORE_BOX=0
MODULE_ACORE_CLIENT=0
MODULE_ACORE_CMS=0
MODULE_ACORE_ELUNATEST=0
MODULE_ACORE_LINUX_RESTARTER=0
MODULE_ACORE_LUA_UNLIMITED_AMMO=0
MODULE_ACORE_LXD_IMAGE=0
MODULE_ACORE_MALL=0
MODULE_ACORE_MINI_REG_PAGE=0
MODULE_ACORE_NODE_SERVER=0
MODULE_ACORE_PWA=0
MODULE_ACORE_SOD=0
MODULE_ACORE_SUMMONALL=0
MODULE_ACORE_TILEMAP=0
MODULE_ACORE_ZONEDEBUFF=0
MODULE_ACREBUILD=0
MODULE_ADDON_FACTION_FREE_UNIT_POPUP=0
MODULE_AOE_LOOT_MERGE=0
MODULE_APAW=0
MODULE_ARENA_SPECTATOR=0
MODULE_ARENA_STATS=0
MODULE_ATTRIBOOST=0
MODULE_AUTO_CHECK_RESTART=0
MODULE_AZEROTHCOREADMIN=0
MODULE_AZEROTHCOREDISCORDBOT=0
MODULE_AZEROTHCORE_ADDITIONS=0
MODULE_AZEROTHCORE_ALL_STACKABLES_200=0
MODULE_AZEROTHCORE_ANSIBLE=0
MODULE_AZEROTHCORE_ARMORY=0
MODULE_AZEROTHCORE_LUA_ARENA_MASTER_COMMAND=0
MODULE_AZEROTHCORE_LUA_DEMON_MORPHER=0
MODULE_AZEROTHCORE_PASSRESET=0
@@ -485,41 +411,25 @@ MODULE_AZEROTHCORE_TRIVIA_SYSTEM=0
MODULE_AZEROTHCORE_WEBSITE=0
MODULE_AZEROTHCORE_WOWHEAD_MOD_LUA=0
MODULE_AZTRAL_AIRLINES=0
MODULE_BGQUEUECHECKER=0
MODULE_BG_QUEUE_ABUSER_VIEWER=0
MODULE_BLIZZLIKE_TELES=0
MODULE_BREAKINGNEWSOVERRIDE=0
MODULE_CLASSIC_MODE=0
MODULE_CODEBASE=0
MODULE_CONFIG_RATES=0
MODULE_DEVJOESTAR=0
MODULE_ELUNA_WOW_SCRIPTS=0
MODULE_EXTENDEDXP=0
MODULE_EXTENDED_HOLIDAYS_LUA=0
MODULE_FFAFIX=0
MODULE_FLAG_CHECKER=0
MODULE_GUILDBANKTABFEEFIXER=0
MODULE_HARDMODE=0
MODULE_HEARTHSTONE_COOLDOWNS=0
MODULE_ITEMBROADCASTGUILDCHAT=0
MODULE_KARGATUM_SYSTEM=0
MODULE_KEIRA3=0
MODULE_LOTTERY_CHANCE_INSTANT=0
MODULE_LUA_AIO_MODRATE_EXP=0
MODULE_LUA_COMMAND_PLUS=0
MODULE_LUA_ITEMUPGRADER_TEMPLATE=0
MODULE_LUA_NOTONLY_RANDOMMORPHER=0
MODULE_LUA_PARAGON_ANNIVERSARY=0
MODULE_LUA_PVP_TITLES_RANKING_SYSTEM=0
MODULE_LUA_SCRIPTS=0
MODULE_LUA_SUPER_BUFFERNPC=0
MODULE_LUA_VIP=0
MODULE_MOD_ACCOUNTBOUND=0
MODULE_MOD_ACCOUNT_VANITY_PETS=0
MODULE_MOD_ACTIVATEZONES=0
MODULE_MOD_AH_BOT_PLUS=0
MODULE_MOD_ALPHA_REWARDS=0
MODULE_MOD_AOE_LOOT=0
MODULE_MOD_APPRECIATION=0
MODULE_MOD_ARENA_TIGERSPEAK=0
MODULE_MOD_ARENA_TOLVIRON=0
@@ -530,44 +440,29 @@ MODULE_MOD_BG_ITEM_REWARD=0
MODULE_MOD_BG_REWARD=0
MODULE_MOD_BG_TWINPEAKS=0
MODULE_MOD_BIENVENIDA=0
MODULE_MOD_BLACK_MARKET=0
MODULE_MOD_BRAWLERS_GUILD=0
MODULE_MOD_BUFF_COMMAND=0
MODULE_MOD_CFPVE=0
MODULE_MOD_CHANGEABLESPAWNRATES=0
MODULE_MOD_CHARACTER_SERVICES=0
MODULE_MOD_CHARACTER_TOOLS=0
MODULE_MOD_CHAT_TRANSMITTER=0
MODULE_MOD_CHROMIE_XP=0
MODULE_MOD_CONGRATS_ON_LEVEL=0
MODULE_MOD_COSTUMES=0
MODULE_MOD_CRAFTSPEED=0
MODULE_MOD_CTA_SWITCH=0
MODULE_MOD_DEAD_MEANS_DEAD=0
MODULE_MOD_DEATHROLL_AIO=0
MODULE_MOD_DEMONIC_PACT_CLASSIC=0
MODULE_MOD_DESERTION_WARNINGS=0
MODULE_MOD_DISCORD_ANNOUNCE=0
MODULE_MOD_DISCORD_WEBHOOK=0
MODULE_MOD_DMF_SWITCH=0
MODULE_MOD_DUNGEONMASTER=0
MODULE_MOD_DUNGEON_SCALE=0
MODULE_MOD_DYNAMIC_LOOT_RATES=0
MODULE_MOD_DYNAMIC_RESURRECTIONS=0
MODULE_MOD_ENCOUNTER_LOGS=0
MODULE_MOD_FACTION_FREE=0
MODULE_MOD_FIRSTLOGIN_AIO=0
MODULE_MOD_FLIGHTMASTER_WHISTLE=0
MODULE_MOD_FORTIS_AUTOBALANCE=0
MODULE_MOD_GAME_STATE_API=0
MODULE_MOD_GEDDON_BINDING_SHARD=0
MODULE_MOD_GHOST_SPEED=0
MODULE_MOD_GLOBALCHAT=0
MODULE_MOD_GM_COMMANDS=0
MODULE_MOD_GOMOVE=0
MODULE_MOD_GROWNUP=0
MODULE_MOD_GUILDFUNDS=0
MODULE_MOD_GUILD_VILLAGE=0
MODULE_MOD_GUILD_ZONE_SYSTEM=0
MODULE_MOD_HARDCORE=0
MODULE_MOD_HARDCORE_MAKGORA=0
@@ -576,32 +471,21 @@ MODULE_MOD_HIGH_RISK_SYSTEM=0
MODULE_MOD_HUNTER_PET_STORAGE=0
MODULE_MOD_IMPROVED_BANK=0
MODULE_MOD_INCREMENT_CACHE_VERSION=0
MODULE_MOD_INDIVIDUAL_XP=0
MODULE_MOD_INFLUXDB=0
MODULE_MOD_INSTANCE_TOOLS=0
MODULE_MOD_IP2NATION=0
MODULE_MOD_IP_TRACKER=0
MODULE_MOD_ITEMLEVEL=0
MODULE_MOD_ITEM_UPGRADE=0
MODULE_MOD_JUNK_TO_GOLD=0
MODULE_MOD_LEARNSPELLS=0
MODULE_MOD_LEECH=0
MODULE_MOD_LEVEL_15_BOOST=0
MODULE_MOD_LEVEL_ONE_MOUNTS=0
MODULE_MOD_LEVEL_REWARDS=0
MODULE_MOD_LOGIN_REWARDS=0
MODULE_MOD_LOW_LEVEL_ARENA=0
MODULE_MOD_LOW_LEVEL_RBG=0
MODULE_MOD_MISSING_OBJECTIVES=0
MODULE_MOD_MONEY_FOR_KILLS=0
MODULE_MOD_MOUNTS_ON_ACCOUNT=0
MODULE_MOD_MOUNT_REQUIREMENTS=0
MODULE_MOD_MULTI_VENDOR=0
MODULE_MOD_MYTHIC_PLUS=0
MODULE_MOD_NOCLIP=0
MODULE_MOD_NORDF=0
MODULE_MOD_NOTIFY_MUTED=0
MODULE_MOD_NO_FARMING=0
MODULE_MOD_NO_HEARTHSTONE_COOLDOWN=0
MODULE_MOD_NPC_ALL_MOUNTS=0
MODULE_MOD_NPC_CODEBOX=0
@@ -611,90 +495,66 @@ MODULE_MOD_NPC_PROMOTION=0
MODULE_MOD_NPC_SERVICES=0
MODULE_MOD_NPC_SPECTATOR=0
MODULE_MOD_NPC_SUBCLASS=0
MODULE_MOD_OBJSCALE=0
MODULE_MOD_OLLAMA_BOT_BUDDY=0
MODULE_MOD_ONY_NAXX_LOGOUT_TELEPORT=0
MODULE_MOD_PEACEKEEPER=0
MODULE_MOD_PETEQUIP=0
MODULE_MOD_PREMIUM=0
MODULE_MOD_PREMIUM_LIB=0
MODULE_MOD_PROFESSION_EXPERIENCE=0
MODULE_MOD_PROFSPECS=0
MODULE_MOD_PTR_TEMPLATE=0
MODULE_MOD_PVPSCRIPT=0
MODULE_MOD_PVPSTATS_ANNOUNCER=0
MODULE_MOD_PVP_ZONES=0
MODULE_MOD_QUEST_LOOT_PARTY=0
MODULE_MOD_QUEST_STATUS=0
MODULE_MOD_QUEUE_LIST_CACHE=0
MODULE_MOD_QUICKBALANCE=0
MODULE_MOD_QUICK_RESPAWN=0
MODULE_MOD_RACIAL_TRAIT_SWAP=0
MODULE_MOD_RARE_DROPS=0
MODULE_MOD_RDF_EXPANSION=0
MODULE_MOD_REAL_ONLINE=0
MODULE_MOD_RECRUIT_FRIEND=0
MODULE_MOD_REFORGING=0
MODULE_MOD_RESET_RAID_COOLDOWNS=0
MODULE_MOD_REWARD_PLAYED_TIME_IMPROVED=0
MODULE_MOD_REWARD_SHOP=0
MODULE_MOD_SELL_ITEMS=0
MODULE_MOD_SETXPBAR=0
MODULE_MOD_SHARE_MOUNTS=0
MODULE_MOD_SPAWNPOINTS=0
MODULE_MOD_SPEC_REWARD=0
MODULE_MOD_SPELLREGULATOR=0
MODULE_MOD_SPONSORSHIP=0
MODULE_MOD_STARTER_GUILD=0
MODULE_MOD_STARTER_WANDS=0
MODULE_MOD_STARTING_PET=0
MODULE_MOD_STREAMS=0
MODULE_MOD_SWIFT_TRAVEL_FORM=0
MODULE_MOD_TALENTBUTTON=0
MODULE_MOD_TRADE_ITEMS_FILTER=0
MODULE_MOD_TREASURE=0
MODULE_MOD_TRIAL_OF_FINALITY=0
MODULE_MOD_VANILLA_NAXXRAMAS=0
MODULE_MOD_WARLOCK_PET_RENAME=0
MODULE_MOD_WEAPON_VISUAL=0
MODULE_MOD_WEEKENDBONUS=0
MODULE_MOD_WEEKEND_XP=0
MODULE_MOD_WHOLOGGED=0
MODULE_MORZA_ISLAND_ARAXIA_SERVER=0
MODULE_MPQ_TOOLS_OSX=0
MODULE_MYSQL_TOOLS=0
MODULE_NODEROUTER=0
MODULE_OPENPROJECTS=0
MODULE_PLAYERTELEPORT=0
MODULE_PORTALS_IN_ALL_CAPITALS=0
MODULE_PRESTIGE=0
MODULE_PRESTIGIOUS=0
MODULE_PVPSTATS=0
MODULE_RAIDTELEPORTER=0
MODULE_RECACHE=0
MODULE_RECYCLEDITEMS=0
MODULE_REWARD_SYSTEM=0
MODULE_SAHTOUTCMS=0
MODULE_SERVER_STATUS=0
MODULE_SETXPBAR=0
MODULE_SPELLSCRIPT_REFACTOR_TOOL=0
MODULE_SQL_NPC_TELEPORTER=0
MODULE_STATBOOSTERREROLLER=0
MODULE_STRAPI_AZEROTHCORE=0
MODULE_TBC_RAID_HP_RESTORATION=0
MODULE_TELEGRAM_AUTOMATED_DB_BACKUP=0
MODULE_TOOL_TC_MIGRATION=0
MODULE_TRANSMOG_ADDONS=0
MODULE_UPDATE_MOB_LEVEL_TO_PLAYER_AND_RANDOM_ITEM_STATS=0
MODULE_UPDATE_MODULE_CONFS=0
MODULE_WEB_CHARACTER_MIGRATION_TOOL=0
MODULE_WEEKLY_ARMOR_VENDOR_BLACK_MARKET=0
MODULE_WORLD_BOSS_RANK=0
MODULE_WOWDATABASEEDITOR=0
MODULE_WOWLAUNCHER_DELPHI=0
MODULE_WOWSIMS_TO_COMMANDS=0
MODULE_WOW_CLIENT_PATCHER=0
MODULE_WOW_ELUNA_TS_MODULE=0
MODULE_WOW_SERVER_RELAY=0
MODULE_WOW_STATISTICS=0
MODULE_WRATH_OF_THE_VANILLA=0
MODULE_MOD_BOTS_LOGIN_FIX=0
MODULE_MOD_MATERIAL_BANK=0
MODULE_MOD_PROGRESSION_BLIZZLIKE=0
MODULE_MOD_PYTHON_ENGINE=0
MODULE_WRATH_OF_THE_VANILLA_V2=0
MODULE_DUELS=0
MODULE_WOW_CORE=0


@@ -17,13 +17,31 @@ jobs:
        with:
          python-version: '3.11'
      - name: Configure git
        run: |
          git config --global user.name 'github-actions[bot]'
          git config --global user.email 'github-actions[bot]@users.noreply.github.com'
      - name: Update manifest from GitHub topics
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          python3 scripts/python/update_module_manifest.py --log
      - name: Check for changes
        id: changes
        run: |
          if git diff --quiet; then
            echo "changed=false" >> $GITHUB_OUTPUT
            echo "No changes detected in manifest or template files"
          else
            echo "changed=true" >> $GITHUB_OUTPUT
            echo "Changes detected:"
            git diff --name-only
          fi
      - name: Create Pull Request with changes
        if: steps.changes.outputs.changed == 'true'
        uses: peter-evans/create-pull-request@v5
        with:
          commit-message: 'chore: sync module manifest'

41
.gitignore vendored

@@ -1,22 +1,27 @@
database-import/*.sql
database-import/*.sql.gz
database-import/*/
database-import/ImportBackup*/
source/*
local-data-tools/
changelogs/
# ===================================
# Environment & Configuration
# ===================================
.env
.claude/
.mcp*/
# ===================================
# Storage & Data Directories
# ===================================
storage/
local-storage/
.claude/
images/
node_modules/
.mcp*/
scripts/__pycache__/*
scripts/python/__pycache__/*
.env
package-lock.json
package.json
todo.md
db_*/
# ===================================
# Build Artifacts & Cache
# ===================================
.gocache/
.module-ledger/
statusdash
scripts/__pycache__/*
scripts/bash/__pycache__/*
scripts/python/__pycache__/*
# ===================================
# Logs & Runtime State
# ===================================
deploy.log


@@ -1,87 +0,0 @@
# Changelog
## [2025-11-09] - Recent Changes
### ✨ Features
#### Backup System Enhancements
- **Manual Backup Support**: Added `manual-backup.sh` script (92 lines) enabling on-demand database backups through the ac-backup container
- **Backup Permission Fixes**: Resolved Docker volume permission issues with backup operations
- **Container User Configuration**: Backup operations now run as proper container user to avoid permission conflicts
#### Remote Deployment
- **Auto Deploy Option**: Added remote auto-deployment functionality to `deploy.sh` (36 additional lines) for automated server provisioning
#### Configuration Management System
- **Database/Config Import**: Major new feature with 1,405+ lines of code across 15 files
- Added `apply-config.py` (323 lines) for dynamic server configuration
- Created `configure-server.sh` (162 lines) for server setup automation
- Implemented `import-database-files.sh` (68 lines) for database initialization
- Added `parse-config-presets.py` (92 lines) for configuration templating
- **Configuration Presets**: 5 new server preset configurations
- `blizzlike.conf` - Authentic Blizzard-like experience
- `casual-pve.conf` - Relaxed PvE gameplay
- `fast-leveling.conf` - Accelerated character progression
- `hardcore-pvp.conf` - Competitive PvP settings
- `none.conf` - Minimal configuration baseline
- **Dynamic Server Overrides**: `server-overrides.conf` (134 lines) for customizable server parameters
- **Comprehensive Config Documentation**: `CONFIG_MANAGEMENT.md` (279 lines) detailing the entire configuration system
#### Infrastructure Improvements
- **MySQL Exposure Toggle**: Optional MySQL port exposure for external database access
- **Client Data Management**: Automatic client data detection, download, and binding with version detection
- **Dynamic Docker Overrides**: Flexible compose override system for modular container configurations
- **Module Profile System**: Structured module management with preset profiles
### 🏗️ Refactoring
#### Script Organization
- **Directory Restructure**: Reorganized all scripts into `scripts/bash/` and `scripts/python/` directories (40 files moved/modified)
- **Project Naming**: Added centralized project name management with `project_name.sh`
- **Module Manifest Rename**: Moved `modules.json` → `module-manifest.json` for clarity
### 🐛 Bug Fixes
#### Container Improvements
- **Client Data Container**: Enhanced with 7zip support, root access during extraction, and ownership fixes
- **Permission Resolution**: Fixed file ownership issues in client data extraction process
- **Path Updates**: Corrected deployment paths and script references after reorganization
### 📚 Documentation
#### Major Documentation Overhaul
- **Modular Documentation**: Split massive README into focused documents (1,500+ lines reorganized)
- `docs/GETTING_STARTED.md` (467 lines) - Setup and initial configuration
- `docs/MODULES.md` (264 lines) - Module management and customization
- `docs/SCRIPTS.md` (404 lines) - Script reference and automation
- `docs/ADVANCED.md` (207 lines) - Advanced configuration topics
- `docs/TROUBLESHOOTING.md` (127 lines) - Common issues and solutions
- **README Streamlining**: Reduced main README from 1,200+ to focused overview
- **Script Documentation**: Updated script references and usage examples throughout
### 🔧 Technical Changes
#### Development Experience
- **Setup Enhancements**: Improved `setup.sh` with better error handling and configuration options (66 lines added)
- **Status Monitoring**: Enhanced `status.sh` with better container and service monitoring
- **Build Process**: Updated build scripts with new directory structure and module handling
- **Cleanup Operations**: Improved cleanup scripts with proper path handling
#### DevOps & Deployment
- **Remote Cleanup**: Enhanced remote server cleanup and temporary file management
- **Network Binding**: Improved container networking and port management
- **Import Folder**: Added dedicated import directory structure
- **Development Onboarding**: Streamlined developer setup process
---
### Migration Notes
- Scripts have moved from `scripts/` to `scripts/bash/` and `scripts/python/`
- Module configuration is now in `config/module-manifest.json`
- New environment variables added for MySQL exposure and client data management
- Configuration presets are available in `config/presets/`
### Breaking Changes
- Script paths have changed due to reorganization
- Module manifest file has been renamed
- Some environment variables have been added/modified


@@ -4,7 +4,7 @@
# AzerothCore RealmMaster
A complete containerized deployment of AzerothCore WoW 3.3.5a (Wrath of the Lich King) private server with 93+ enhanced modules and intelligent automation.
A complete containerized deployment of an AzerothCore WoW 3.3.5a (Wrath of the Lich King) private server with **hundreds** of supported modules and intelligent automation for easy setup, deployment, and management.
## Table of Contents
@@ -23,10 +23,10 @@ A complete containerized deployment of AzerothCore WoW 3.3.5a (Wrath of the Lich
## Quick Start
### Prerequisites
- **Docker** with Docker Compose
- **16GB+ RAM** and **32GB+ storage**
- **Linux/macOS/WSL2** (Windows with WSL2 recommended)
### Recommendations
- **Docker** with Docker Compose 2
- **16GB+ RAM** and **64GB+ storage**
- **Linux/macOS/WSL2** (fully tested with Ubuntu 24.04 and Debian 12)
### Three Simple Steps
@@ -50,17 +50,15 @@ See [Getting Started](#getting-started) for detailed walkthrough.
## What You Get
### ✅ Core Server Components
- **AzerothCore 3.3.5a** - WotLK server application with 93+ enhanced modules
- **AzerothCore 3.3.5a** - WotLK server application with 348 modules in the manifest (221 currently supported)
- **MySQL 8.0** - Database with intelligent initialization and restoration
- **Smart Module System** - Automated module management and source builds
- **phpMyAdmin** - Web-based database administration
- **Keira3** - Game content editor and developer tools
### ✅ Automated Configuration
- **Intelligent Database Setup** - Smart backup detection, restoration, and conditional schema import
- **Restore Safety Checks** - The import job now validates the live MySQL runtime before honoring restore markers, so stale tmpfs volumes can't trick automation into skipping a needed restore (see [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md))
- **Backup Management** - Automated hourly/daily backups with intelligent restoration
- **Restore-Aware Module SQL** - After a backup restore, the ledger snapshot from that backup is synced into shared storage and `stage-modules.sh` recopies every enabled SQL file into `/azerothcore/data/sql/updates/*`, so the worldserver's built-in updater reapplies anything the database still needs (see [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md))
- **Intelligent Database Setup** - Smart backup detection, restoration, and conditional schema import (details in [docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md))
- **Restore-Aware Backups & SQL** - Restore-aware SQL staging and snapshot safety checks keep modules in sync after restores ([docs/DATABASE_MANAGEMENT.md](docs/DATABASE_MANAGEMENT.md))
- **Module Integration** - Automatic source builds when C++ modules are enabled
- **Service Orchestration** - Profile-based deployment (standard/playerbots/modules)
@@ -79,7 +77,9 @@ For complete local and remote deployment guides, see **[docs/GETTING_STARTED.md]
## Complete Module Catalog
Choose from **93+ enhanced modules** spanning automation, quality-of-life improvements, gameplay enhancements, PvP features, and more. All modules are automatically downloaded, configured, and integrated during deployment.
Choose from **hundreds of enhanced modules** spanning automation, quality-of-life improvements, gameplay enhancements, PvP features, and more. The manifest contains 348 modules (221 marked supported/active); the default RealmMaster preset enables 33 that are exercised in testing. All modules are automatically downloaded, configured, and integrated during deployment when selected.
Want a shortcut? Use a preset (`RealmMaster`, `suggested-modules`, `playerbots-suggested-modules`, `azerothcore-vanilla`, `playerbots-only`, `all-modules`) from `config/module-profiles/`—see [docs/GETTING_STARTED.md#module-presets](docs/GETTING_STARTED.md#module-presets).
**Popular Categories:**
- **Automation** - Playerbots, AI chat, level management
@@ -93,23 +93,13 @@ Browse the complete catalog with descriptions at **[docs/MODULES.md](docs/MODULE
## Custom NPCs Guide
The server includes **14 custom NPCs** providing enhanced functionality including profession training, enchantments, arena services, and more. All NPCs are spawnable through GM commands and designed for permanent placement.
**Available NPCs:**
- **Service NPCs** - Profession training, reagent banking, instance resets
- **Enhancement NPCs** - Enchanting, buffing, pet management, transmog
- **PvP NPCs** - 1v1 arena battlemaster
- **Guild House NPCs** - Property management and services
For complete spawn commands, coordinates, and functionality details, see **[docs/NPCS.md](docs/NPCS.md)**.
The server includes **14 custom NPCs** spanning services, buffs, PvP, and guild support. Full spawn commands, coordinates, and functions are in **[docs/NPCS.md](docs/NPCS.md)**.
---
## Management & Operations
For common workflows, management commands, and database operations, see **[docs/GETTING_STARTED.md](docs/GETTING_STARTED.md)**.
- Keep the module catalog current with `scripts/python/update_module_manifest.py` or trigger the scheduled **Sync Module Manifest** GitHub Action to auto-open a PR with the latest AzerothCore topic repos.
For common workflows, management commands, and database operations, see **[docs/GETTING_STARTED.md](docs/GETTING_STARTED.md)**. For script details (including module manifest auto-sync), see **[docs/SCRIPTS.md](docs/SCRIPTS.md)**.
---
@@ -140,6 +130,13 @@ For diagnostic procedures, common issues, and backup system documentation, see *
This project builds upon:
- **[AzerothCore](https://github.com/azerothcore/azerothcore-wotlk)** - Core server application
- **[AzerothCore Module Community](https://github.com/azerothcore)** - Enhanced gameplay modules
- **[acore-docker](https://github.com/azerothcore/acore-docker)** - Inspiration for containerized deployment
- **[mod-playerbots](https://github.com/mod-playerbots/azerothcore-wotlk)** - Advanced playerbot functionality
- **All module creators** - Making amazing things every day
### Community & Support
- **[AzerothCore Discord](https://discord.gg/gkt4y2x)** - Join the community for support and discussions
- **[GitHub Issues](https://github.com/uprightbass360/AzerothCore-RealmMaster/issues)** - Report build or deployment issues here
#### Key Features
- **Fully Automated Setup** - Interactive configuration and deployment
@@ -149,10 +146,8 @@ This project builds upon:
- **Comprehensive Documentation** - Clear setup and troubleshooting guides
### Next Steps After Installation
**Essential First Steps:**
1. **Create admin account**: `docker attach ac-worldserver`, then `account create admin password` and `account set gmlevel admin 3 -1`
2. **Test your setup**: Connect with WoW 3.3.5a client using `set realmlist 127.0.0.1`
3. **Access web tools**: phpMyAdmin (port 8081) and Keira3 (port 4201)
**For detailed server administration, monitoring, backup configuration, and performance tuning, see [docs/GETTING_STARTED.md](docs/GETTING_STARTED.md).**
- **Create admin account** - Attach to worldserver and create a GM user (commands in **[docs/GETTING_STARTED.md#post-installation-steps](docs/GETTING_STARTED.md#post-installation-steps)**).
- **Point your client** - Update `realmlist.wtf` to your host/ports (defaults in the same section above).
- **Open services** - phpMyAdmin and Keira3 URLs/ports are listed in **[docs/GETTING_STARTED.md#post-installation-steps](docs/GETTING_STARTED.md#post-installation-steps)**.
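In shell terms, the client-side step above amounts to a one-line realmlist edit, while account creation happens inside the attached worldserver console. A sketch (the realmlist path below is illustrative; in a real 3.3.5a client the file sits in the `Data/<locale>/` directory):

```shell
# Point the WoW 3.3.5a client at the server (illustrative path; the real
# file is Data/<locale>/realmlist.wtf inside the client directory)
REALMLIST_FILE="/tmp/realmlist.wtf"
echo 'set realmlist 127.0.0.1' > "$REALMLIST_FILE"
cat "$REALMLIST_FILE"

# GM account creation runs in the worldserver console, not the shell:
#   docker attach ac-worldserver
#   account create admin password
#   account set gmlevel admin 3 -1
# Detach with Ctrl-P Ctrl-Q so the worldserver keeps running.
```

Detaching with the escape sequence (rather than Ctrl-C) matters here: Ctrl-C would stop the worldserver process itself.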


@@ -38,6 +38,7 @@ Build AzerothCore with custom modules and create deployment-ready images.
Options:
--yes, -y Auto-confirm all prompts
--force Force rebuild even if no changes detected
--force-update Force update source repository to latest commits
--source-path PATH Custom source repository path
--skip-source-setup Skip automatic source repository setup
-h, --help Show this help
@@ -53,6 +54,7 @@ Examples:
./build.sh Interactive build
./build.sh --yes Auto-confirm build
./build.sh --force Force rebuild regardless of state
./build.sh --force-update Update source to latest and build
EOF
}
@@ -60,6 +62,7 @@ while [[ $# -gt 0 ]]; do
case "$1" in
--yes|-y) ASSUME_YES=1; shift;;
--force) FORCE_REBUILD=1; shift;;
--force-update) FORCE_UPDATE=1; shift;;
--source-path) CUSTOM_SOURCE_PATH="$2"; shift 2;;
--skip-source-setup) SKIP_SOURCE_SETUP=1; shift;;
-h|--help) usage; exit 0;;
@@ -137,11 +140,18 @@ generate_module_state(){
# Check if blocked modules were detected in warnings
if echo "$validation_output" | grep -q "is blocked:"; then
# Gather blocked module keys for display
local blocked_modules
blocked_modules=$(echo "$validation_output" | grep -oE 'MODULE_[A-Za-z0-9_]+' | sort -u | tr '\n' ' ')
# Blocked modules detected - show warning and ask for confirmation
echo
warn "════════════════════════════════════════════════════════════════"
warn "⚠️ BLOCKED MODULES DETECTED ⚠️"
warn "════════════════════════════════════════════════════════════════"
if [ -n "$blocked_modules" ]; then
warn "Affected modules: ${blocked_modules}"
fi
warn "Some enabled modules are marked as blocked due to compatibility"
warn "issues. These modules will be SKIPPED during the build process."
warn ""
@@ -233,6 +243,13 @@ ensure_source_repo(){
src_path="${src_path//\/.\//\/}"
if [ -d "$src_path/.git" ]; then
if [ "${FORCE_UPDATE:-0}" = "1" ]; then
info "Force update requested - updating source repository to latest" >&2
if ! (cd "$ROOT_DIR" && ./scripts/bash/setup-source.sh) >&2; then
err "Failed to update source repository" >&2
exit 1
fi
fi
echo "$src_path"
return
fi
@@ -533,6 +550,10 @@ stage_modules(){
rm -f "$staging_modules_dir/.modules_state" "$staging_modules_dir/.requires_rebuild" 2>/dev/null || true
fi
# Export environment variables needed by module hooks
export STACK_SOURCE_VARIANT="$(read_env STACK_SOURCE_VARIANT "core")"
export MODULES_REBUILD_SOURCE_PATH="$(read_env MODULES_REBUILD_SOURCE_PATH "")"
if ! (cd "$local_modules_dir" && bash "$ROOT_DIR/scripts/bash/manage-modules.sh"); then
err "Module staging failed; aborting build"
return 1


@@ -99,7 +99,36 @@ done
# Get last build time from container metadata
get_last_build_time() {
local containers=("ac-worldserver" "ac-authserver")
local images=("azerothcore-stack:worldserver-playerbots" "azerothcore-stack:authserver-playerbots")
local images=()
# Require COMPOSE_PROJECT_NAME to be set
if [[ -z "${COMPOSE_PROJECT_NAME:-}" ]]; then
warn "COMPOSE_PROJECT_NAME not set in environment"
return 1
fi
# Use actual image names from environment
# Detect variant to check appropriate images
if [[ "${STACK_IMAGE_MODE:-standard}" == "playerbots" ]] || [[ "${MODULE_PLAYERBOTS:-0}" == "1" ]] || [[ "${PLAYERBOT_ENABLED:-0}" == "1" ]] || [[ "${STACK_SOURCE_VARIANT:-}" == "playerbots" ]]; then
if [[ -z "${AC_WORLDSERVER_IMAGE_PLAYERBOTS:-}" ]] || [[ -z "${AC_AUTHSERVER_IMAGE_PLAYERBOTS:-}" ]]; then
warn "Playerbots mode detected but AC_WORLDSERVER_IMAGE_PLAYERBOTS or AC_AUTHSERVER_IMAGE_PLAYERBOTS not set"
return 1
fi
images=(
"${AC_WORLDSERVER_IMAGE_PLAYERBOTS}"
"${AC_AUTHSERVER_IMAGE_PLAYERBOTS}"
)
else
if [[ -z "${AC_WORLDSERVER_IMAGE:-}" ]] || [[ -z "${AC_AUTHSERVER_IMAGE:-}" ]]; then
warn "Standard mode detected but AC_WORLDSERVER_IMAGE or AC_AUTHSERVER_IMAGE not set"
return 1
fi
images=(
"${AC_WORLDSERVER_IMAGE}"
"${AC_AUTHSERVER_IMAGE}"
)
fi
local latest_date=""
# Try to get build timestamp from containers and images
@@ -143,7 +172,7 @@ if [[ -n "$SINCE_DATE" ]]; then
DATE_DESC="since $SINCE_DATE"
else
# Try to use last build time as default
LAST_BUILD_DATE=$(get_last_build_time)
LAST_BUILD_DATE=$(get_last_build_time 2>/dev/null) || LAST_BUILD_DATE=""
if [[ -n "$LAST_BUILD_DATE" ]]; then
SINCE_OPTION="--since=$LAST_BUILD_DATE"
@@ -194,11 +223,17 @@ detect_source_config() {
$VERBOSE && log "Switched to playerbots variant" >&2
fi
# Repository URLs from environment or defaults
local standard_repo="${ACORE_REPO_STANDARD:-https://github.com/azerothcore/azerothcore-wotlk.git}"
local standard_branch="${ACORE_BRANCH_STANDARD:-master}"
local playerbots_repo="${ACORE_REPO_PLAYERBOTS:-https://github.com/mod-playerbots/azerothcore-wotlk.git}"
local playerbots_branch="${ACORE_BRANCH_PLAYERBOTS:-Playerbot}"
# Repository URLs from environment (required)
local standard_repo="${ACORE_REPO_STANDARD}"
local standard_branch="${ACORE_BRANCH_STANDARD}"
local playerbots_repo="${ACORE_REPO_PLAYERBOTS}"
local playerbots_branch="${ACORE_BRANCH_PLAYERBOTS}"
if [[ -z "$standard_repo" ]] || [[ -z "$standard_branch" ]] || [[ -z "$playerbots_repo" ]] || [[ -z "$playerbots_branch" ]]; then
warn "Repository configuration missing from environment"
warn "Required: ACORE_REPO_STANDARD, ACORE_BRANCH_STANDARD, ACORE_REPO_PLAYERBOTS, ACORE_BRANCH_PLAYERBOTS"
return 1
fi
if [[ "$variant" == "playerbots" ]]; then
echo "$playerbots_repo|$playerbots_branch|$LOCAL_STORAGE_ROOT/source/azerothcore-playerbots"


@@ -146,8 +146,6 @@ sanitize_project_name(){
project_name::sanitize "$1"
}
PROJECT_IMAGE_PREFIX="$(sanitize_project_name "${COMPOSE_PROJECT_NAME:-$DEFAULT_PROJECT_NAME}")"
remove_storage_dir(){
local path="$1"
if [ -d "$path" ]; then
@@ -223,8 +221,7 @@ nuclear_cleanup() {
# Remove project images (server/tool images typical to this project)
execute_command "Remove acore images" "docker images --format '{{.Repository}}:{{.Tag}}' | grep -E '^acore/' | xargs -r docker rmi"
execute_command "Remove local project images" "docker images --format '{{.Repository}}:{{.Tag}}' | grep -E '^${PROJECT_IMAGE_PREFIX}:' | xargs -r docker rmi"
execute_command "Remove legacy playerbots images" "docker images --format '{{.Repository}}:{{.Tag}}' | grep -E '^uprightbass360/azerothcore-wotlk-playerbots' | xargs -r docker rmi"
execute_command "Remove project-specific images" "docker images --format '{{.Repository}}:{{.Tag}}' | grep -E \"^${PROJECT_NAME}:\" | xargs -r docker rmi"
execute_command "Remove tool images" "docker images --format '{{.Repository}}:{{.Tag}}' | grep -E 'phpmyadmin|uprightbass360/keira3' | xargs -r docker rmi"
# Storage cleanup (preserve backups if requested)

File diff suppressed because it is too large


@@ -12,7 +12,6 @@
"MODULE_ACCOUNT_ACHIEVEMENTS",
"MODULE_AUTO_REVIVE",
"MODULE_GAIN_HONOR_GUARD",
"MODULE_ELUNA",
"MODULE_TIME_IS_TIME",
"MODULE_RANDOM_ENCHANTS",
"MODULE_SOLOCRAFT",
@@ -23,7 +22,7 @@
"MODULE_ASSISTANT",
"MODULE_REAGENT_BANK",
"MODULE_BLACK_MARKET_AUCTION_HOUSE",
"MODULE_ELUNA_TS",
"MODULE_ELUNA",
"MODULE_AIO",
"MODULE_ELUNA_SCRIPTS",
"MODULE_EVENT_SCRIPTS",
@@ -34,7 +33,7 @@
"MODULE_ITEM_LEVEL_UP",
"MODULE_GLOBAL_CHAT"
],
"label": "\ud83e\udde9 Sam",
"description": "Sam's playerbot-centric preset (use high bot counts)",
"order": 7
"label": "\ud83e\udde9 RealmMaster",
"description": "RealmMaster suggested build (33 enabled modules)",
"order": 0
}


@@ -342,6 +342,6 @@
"MODULE_CLASSIC_MODE"
],
"label": "\ud83e\udde9 All Modules",
"description": "Enable every optional module in the repository",
"order": 5
"description": "Enable every optional module in the repository - NOT RECOMMENDED",
"order": 7
}


@@ -0,0 +1,8 @@
{
"modules": [
"MODULE_ELUNA"
],
"label": "\ud83d\udd30 AzerothCore Main - Mod Free",
"description": "Pure AzerothCore with no optional modules enabled",
"order": 3
}


@@ -1,10 +1,9 @@
{
"modules": [
"MODULE_PLAYERBOTS",
"MODULE_ELUNA",
"MODULE_ELUNA_TS"
"MODULE_ELUNA"
],
"label": "\ud83e\udde9 Playerbots Only",
"description": "Minimal preset that only enables playerbot prerequisites",
"order": 6
"order": 4
}


@@ -7,9 +7,11 @@
"MODULE_TRANSMOG",
"MODULE_NPC_BUFFER",
"MODULE_LEARN_SPELLS",
"MODULE_FIREWORKS"
"MODULE_FIREWORKS",
"MODULE_ELUNA",
"MODULE_AIO"
],
"label": "\ud83e\udd16 Playerbots + Suggested modules",
"label": "\ud83e\udd16 Suggested modules (Playerbots)",
"description": "Suggested stack plus playerbots enabled",
"order": 2
"order": 1
}


@@ -1,6 +1,7 @@
{
"modules": [
"MODULE_ELUNA",
"MODULE_AIO",
"MODULE_SOLO_LFG",
"MODULE_SOLOCRAFT",
"MODULE_AUTOBALANCE",
@@ -9,7 +10,7 @@
"MODULE_LEARN_SPELLS",
"MODULE_FIREWORKS"
],
"label": "\u2b50 Suggested Modules",
"description": "Baseline solo-friendly quality of life mix",
"order": 1
}
"label": "\u2b50 Suggested Modules (Main)",
"description": "Baseline solo-friendly quality of life mix (no playerbots)",
"order": 2
}


@@ -1,47 +0,0 @@
# Database Import
> **📌 Note:** This directory is maintained for backward compatibility.
> **New location:** `import/db/` - See [import/README.md](../import/README.md) for the new unified import system.
Place your database backup files here for automatic import during deployment.
## Supported Imports
- `.sql` files (uncompressed SQL dumps)
- `.sql.gz` files (gzip compressed SQL dumps)
- **Full backup directories** (e.g., `ExportBackup_YYYYMMDD_HHMMSS/` containing multiple dumps)
- **Full backup archives** (`.tar`, `.tar.gz`, `.tgz`, `.zip`) that contain the files above
## How to Use
1. **Copy your backup files here:**
```bash
cp my_auth_backup.sql.gz ./database-import/
cp my_world_backup.sql.gz ./database-import/
cp my_characters_backup.sql.gz ./database-import/
# or drop an entire ExportBackup folder / archive
cp -r ExportBackup_20241029_120000 ./database-import/
cp ExportBackup_20241029_120000.tar.gz ./database-import/
```
2. **Run deployment:**
```bash
./deploy.sh
```
3. **Files are automatically copied to backup system** and imported during deployment
## File Naming
- Any filename works - the system will auto-detect database type by content
- Recommended naming: `auth.sql.gz`, `world.sql.gz`, `characters.sql.gz`
- Full backups keep their original directory/archive name so you can track multiple copies
## What Happens
- Individual `.sql`/`.sql.gz` files are copied to `storage/backups/daily/` with a timestamped name
- Full backup directories or archives are staged directly under `storage/backups/` (e.g., `storage/backups/ExportBackup_20241029_120000/`)
- Database import system automatically restores the most recent matching backup
- Original files remain here for reference (archives are left untouched)
## Notes
- Only processed on first deployment (when databases don't exist)
- Files/directories are copied once; existing restored databases will skip import
- Empty folder is ignored - no files, no import

deploy.sh

@@ -34,7 +34,12 @@ REMOTE_SKIP_STORAGE=0
REMOTE_COPY_SOURCE=0
REMOTE_ARGS_PROVIDED=0
REMOTE_AUTO_DEPLOY=0
REMOTE_CLEAN_CONTAINERS=0
REMOTE_STORAGE_OVERRIDE=""
REMOTE_CONTAINER_USER_OVERRIDE=""
REMOTE_ENV_FILE=""
REMOTE_SKIP_ENV=0
REMOTE_PRESERVE_CONTAINERS=0
MODULE_HELPER="$ROOT_DIR/scripts/python/modules.py"
MODULE_STATE_INITIALIZED=0
@@ -164,6 +169,43 @@ collect_remote_details(){
*) REMOTE_SKIP_STORAGE=0 ;;
esac
fi
if [ "$interactive" -eq 1 ] && [ "$REMOTE_ARGS_PROVIDED" -eq 0 ]; then
local cleanup_answer
read -rp "Stop/remove remote containers & project images during migration? [y/N]: " cleanup_answer
cleanup_answer="${cleanup_answer:-n}"
case "${cleanup_answer,,}" in
y|yes) REMOTE_CLEAN_CONTAINERS=1 ;;
*)
REMOTE_CLEAN_CONTAINERS=0
# Offer explicit preservation when declining cleanup
local preserve_answer
read -rp "Preserve remote containers/images (skip cleanup)? [Y/n]: " preserve_answer
preserve_answer="${preserve_answer:-Y}"
case "${preserve_answer,,}" in
n|no) REMOTE_PRESERVE_CONTAINERS=0 ;;
*) REMOTE_PRESERVE_CONTAINERS=1 ;;
esac
;;
esac
fi
# Optional remote env overrides (default to current values)
local storage_default container_user_default
storage_default="$(read_env STORAGE_PATH "./storage")"
container_user_default="$(read_env CONTAINER_USER "$(id -u):$(id -g)")"
if [ -z "$REMOTE_STORAGE_OVERRIDE" ] && [ "$interactive" -eq 1 ]; then
local storage_input
read -rp "Remote storage path (STORAGE_PATH) [${storage_default}]: " storage_input
REMOTE_STORAGE_OVERRIDE="${storage_input:-$storage_default}"
fi
if [ -z "$REMOTE_CONTAINER_USER_OVERRIDE" ] && [ "$interactive" -eq 1 ]; then
local cu_input
read -rp "Remote container user (CONTAINER_USER) [${container_user_default}]: " cu_input
REMOTE_CONTAINER_USER_OVERRIDE="${cu_input:-$container_user_default}"
fi
}
validate_remote_configuration(){
@@ -220,6 +262,11 @@ Options:
--remote-skip-storage Skip syncing the storage directory during migration
--remote-copy-source Copy the local project directory to remote instead of relying on git
--remote-auto-deploy Run './deploy.sh --yes --no-watch' on the remote host after migration
--remote-clean-containers Stop/remove remote containers & project images during migration
--remote-storage-path PATH Override STORAGE_PATH/STORAGE_PATH_LOCAL in the remote .env
--remote-container-user USER[:GROUP] Override CONTAINER_USER in the remote .env
--remote-skip-env Do not upload .env to the remote host
--remote-preserve-containers Skip stopping/removing remote containers during migration
--skip-config Skip applying server configuration preset
-h, --help Show this help
@@ -248,12 +295,22 @@ while [[ $# -gt 0 ]]; do
--remote-skip-storage) REMOTE_SKIP_STORAGE=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-copy-source) REMOTE_COPY_SOURCE=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-auto-deploy) REMOTE_AUTO_DEPLOY=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-clean-containers) REMOTE_CLEAN_CONTAINERS=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-storage-path) REMOTE_STORAGE_OVERRIDE="$2"; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift 2;;
--remote-container-user) REMOTE_CONTAINER_USER_OVERRIDE="$2"; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift 2;;
--remote-skip-env) REMOTE_SKIP_ENV=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--remote-preserve-containers) REMOTE_PRESERVE_CONTAINERS=1; REMOTE_MODE=1; REMOTE_ARGS_PROVIDED=1; shift;;
--skip-config) SKIP_CONFIG=1; shift;;
-h|--help) usage; exit 0;;
*) err "Unknown option: $1"; usage; exit 1;;
esac
done
if [ "$REMOTE_CLEAN_CONTAINERS" -eq 1 ] && [ "$REMOTE_PRESERVE_CONTAINERS" -eq 1 ]; then
err "Cannot combine --remote-clean-containers with --remote-preserve-containers."
exit 1
fi
require_cmd(){
command -v "$1" >/dev/null 2>&1 || { err "Missing required command: $1"; exit 1; }
}
@@ -515,6 +572,27 @@ prompt_build_if_needed(){
local build_reasons_output
build_reasons_output=$(detect_build_needed)
if [ -z "$build_reasons_output" ]; then
# Belt-and-suspenders: if C++ modules are enabled but module images missing, warn
ensure_module_state
if [ "${#MODULES_COMPILE_LIST[@]}" -gt 0 ]; then
local authserver_modules_image
local worldserver_modules_image
authserver_modules_image="$(read_env AC_AUTHSERVER_IMAGE_MODULES "$(resolve_project_image "authserver-modules-latest")")"
worldserver_modules_image="$(read_env AC_WORLDSERVER_IMAGE_MODULES "$(resolve_project_image "worldserver-modules-latest")")"
local missing_images=()
if ! docker image inspect "$authserver_modules_image" >/dev/null 2>&1; then
missing_images+=("$authserver_modules_image")
fi
if ! docker image inspect "$worldserver_modules_image" >/dev/null 2>&1; then
missing_images+=("$worldserver_modules_image")
fi
if [ ${#missing_images[@]} -gt 0 ]; then
build_reasons_output=$(printf "C++ modules enabled but module images missing: %s\n" "${missing_images[*]}")
fi
fi
fi
if [ -z "$build_reasons_output" ]; then
return 0 # No build needed
fi
@@ -607,6 +685,33 @@ determine_profile(){
}
run_remote_migration(){
if [ -z "$REMOTE_ENV_FILE" ] && { [ -n "$REMOTE_STORAGE_OVERRIDE" ] || [ -n "$REMOTE_CONTAINER_USER_OVERRIDE" ]; }; then
local base_env=""
if [ -f "$ENV_PATH" ]; then
base_env="$ENV_PATH"
elif [ -f "$TEMPLATE_PATH" ]; then
base_env="$TEMPLATE_PATH"
fi
REMOTE_ENV_FILE="$(mktemp)"
if [ -n "$base_env" ]; then
cp "$base_env" "$REMOTE_ENV_FILE"
else
: > "$REMOTE_ENV_FILE"
fi
if [ -n "$REMOTE_STORAGE_OVERRIDE" ]; then
{
echo
echo "STORAGE_PATH=$REMOTE_STORAGE_OVERRIDE"
} >>"$REMOTE_ENV_FILE"
fi
if [ -n "$REMOTE_CONTAINER_USER_OVERRIDE" ]; then
{
echo
echo "CONTAINER_USER=$REMOTE_CONTAINER_USER_OVERRIDE"
} >>"$REMOTE_ENV_FILE"
fi
fi
local args=(--host "$REMOTE_HOST" --user "$REMOTE_USER")
if [ -n "$REMOTE_PORT" ] && [ "$REMOTE_PORT" != "22" ]; then
@@ -629,10 +734,26 @@ run_remote_migration(){
args+=(--copy-source)
fi
if [ "$REMOTE_CLEAN_CONTAINERS" -eq 1 ]; then
args+=(--clean-containers)
fi
if [ "$ASSUME_YES" -eq 1 ]; then
args+=(--yes)
fi
if [ "$REMOTE_SKIP_ENV" -eq 1 ]; then
args+=(--skip-env)
fi
if [ "$REMOTE_PRESERVE_CONTAINERS" -eq 1 ]; then
args+=(--preserve-containers)
fi
if [ -n "$REMOTE_ENV_FILE" ]; then
args+=(--env-file "$REMOTE_ENV_FILE")
fi
(cd "$ROOT_DIR" && ./scripts/bash/migrate-stack.sh "${args[@]}")
}


@@ -1,4 +1,11 @@
name: ${COMPOSE_PROJECT_NAME}
x-logging: &logging-default
driver: json-file
options:
max-size: "10m"
max-file: "3"
services:
# =====================
# Database Layer (db)
@@ -18,6 +25,7 @@ services:
MYSQL_MAX_CONNECTIONS: ${MYSQL_MAX_CONNECTIONS}
MYSQL_INNODB_BUFFER_POOL_SIZE: ${MYSQL_INNODB_BUFFER_POOL_SIZE}
MYSQL_INNODB_LOG_FILE_SIZE: ${MYSQL_INNODB_LOG_FILE_SIZE}
MYSQL_BINLOG_EXPIRE_LOGS_SECONDS: 86400
TZ: "${TZ}"
entrypoint:
- /usr/local/bin/mysql-entrypoint.sh
@@ -26,7 +34,7 @@ services:
- mysql-data:/var/lib/mysql-persistent
- ${BACKUP_PATH}:/backups
- ${HOST_ZONEINFO_PATH}:/usr/share/zoneinfo:ro
- ${MYSQL_CONFIG_DIR:-${STORAGE_PATH}/config/mysql/conf.d}:/etc/mysql/conf.d
- ${MYSQL_CONFIG_DIR:-${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}/mysql/conf.d}:/etc/mysql/conf.d
tmpfs:
- /var/lib/mysql-runtime:size=${MYSQL_RUNTIME_TMPFS_SIZE}
command:
@@ -39,8 +47,11 @@ services:
- --innodb-buffer-pool-size=${MYSQL_INNODB_BUFFER_POOL_SIZE}
- --innodb-log-file-size=${MYSQL_INNODB_LOG_FILE_SIZE}
- --innodb-redo-log-capacity=${MYSQL_INNODB_REDO_LOG_CAPACITY}
- --expire_logs_days=0
- --binlog_expire_logs_seconds=86400
- --binlog_expire_logs_auto_purge=ON
restart: unless-stopped
logging:
logging: *logging-default
healthcheck:
test: ["CMD", "sh", "-c", "mysqladmin ping -h localhost -u ${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} --silent || exit 1"]
interval: ${MYSQL_HEALTHCHECK_INTERVAL}
@@ -64,14 +75,16 @@ services:
networks:
- azerothcore
volumes:
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_PATH}/logs:/azerothcore/logs
- ${AC_SQL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro
- ${MODULE_SQL_STAGE_PATH:-${STORAGE_PATH}/module-sql-updates}:/modules-sql
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
- ${AC_SQL_SOURCE_PATH:-${STORAGE_LOCAL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source}/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro
- ${STAGE_PATH_MODULE_SQL:-${STORAGE_MODULE_SQL_PATH:-${STORAGE_PATH}/module-sql-updates}}:/modules-sql
- mysql-data:/var/lib/mysql-persistent
- ${STORAGE_PATH}/modules:/modules
- ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/modules
- ${BACKUP_PATH}:/backups
- ./scripts/bash/db-import-conditional.sh:/tmp/db-import-conditional.sh:ro
- ./scripts/bash/seed-dbimport-conf.sh:/tmp/seed-dbimport-conf.sh:ro
- ./scripts/bash/restore-and-stage.sh:/tmp/restore-and-stage.sh:ro
environment:
AC_DATA_DIR: "/azerothcore/data"
@@ -128,14 +141,16 @@ services:
networks:
- azerothcore
volumes:
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_PATH}/logs:/azerothcore/logs
- ${AC_SQL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro
- ${MODULE_SQL_STAGE_PATH:-${STORAGE_PATH}/module-sql-updates}:/modules-sql
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
- ${AC_SQL_SOURCE_PATH:-${STORAGE_LOCAL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source}/azerothcore-playerbots/data/sql}:/azerothcore/data/sql:ro
- ${STAGE_PATH_MODULE_SQL:-${STORAGE_MODULE_SQL_PATH:-${STORAGE_PATH}/module-sql-updates}}:/modules-sql
- mysql-data:/var/lib/mysql-persistent
- ${STORAGE_PATH}/modules:/modules
- ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/modules
- ${BACKUP_PATH}:/backups
- ./scripts/bash/db-import-conditional.sh:/tmp/db-import-conditional.sh:ro
- ./scripts/bash/seed-dbimport-conf.sh:/tmp/seed-dbimport-conf.sh:ro
- ./scripts/bash/restore-and-stage.sh:/tmp/restore-and-stage.sh:ro
- ./scripts/bash/db-guard.sh:/tmp/db-guard.sh:ro
environment:
@@ -258,7 +273,7 @@ services:
CONTAINER_USER: ${CONTAINER_USER}
volumes:
- ${BACKUP_PATH}:/backups
- ${STORAGE_PATH}/modules/.modules-meta:/modules-meta:ro
- ${STORAGE_MODULES_META_PATH:-${STORAGE_PATH}/modules/.modules-meta}:/modules-meta:ro
- ./scripts:/tmp/scripts:ro
working_dir: /tmp
command:
@@ -325,9 +340,9 @@ services:
profiles: ["client-data", "client-data-bots"]
image: ${ALPINE_IMAGE}
container_name: ac-volume-init
user: "${CONTAINER_USER}"
user: "0:0"
volumes:
- ${CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/azerothcore/data
- ${CLIENT_DATA_PATH:-${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}}:/azerothcore/data
- client-data-cache:/cache
command:
- sh
@@ -351,10 +366,20 @@ services:
profiles: ["db", "modules"]
image: ${ALPINE_IMAGE}
container_name: ac-storage-init
user: "${CONTAINER_USER}"
user: "0:0"
volumes:
- ${STORAGE_PATH}:/storage-root
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/storage-root/config
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/storage-root/logs
- ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/storage-root/modules
- ${STORAGE_LUA_SCRIPTS_PATH:-${STORAGE_PATH}/lua_scripts}:/storage-root/lua_scripts
- ${STORAGE_INSTALL_MARKERS_PATH:-${STORAGE_PATH}/install-markers}:/storage-root/install-markers
- ${STORAGE_MODULE_SQL_PATH:-${STORAGE_PATH}/module-sql-updates}:/storage-root/module-sql-updates
- ${STORAGE_MODULES_META_PATH:-${STORAGE_PATH}/modules/.modules-meta}:/storage-root/modules/.modules-meta
- ${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/storage-root/client-data
- ${BACKUP_PATH}:/storage-root/backups
- ${STORAGE_PATH_LOCAL}:/local-storage-root
- ${STORAGE_LOCAL_SOURCE_PATH:-${STORAGE_PATH_LOCAL}/source}:/local-storage-root/source
- ./scripts/bash/seed-dbimport-conf.sh:/tmp/seed-dbimport-conf.sh:ro
command:
- sh
- -c
@@ -362,13 +387,51 @@ services:
echo "🔧 Initializing storage directories with proper permissions..."
mkdir -p /storage-root/config /storage-root/logs /storage-root/modules /storage-root/lua_scripts /storage-root/install-markers
mkdir -p /storage-root/config/mysql/conf.d
mkdir -p /storage-root/module-sql-updates /storage-root/modules/.modules-meta
mkdir -p /storage-root/client-data
mkdir -p /storage-root/backups
# Copy core config files if they don't exist
if [ -f "/local-storage-root/source/azerothcore-playerbots/src/tools/dbimport/dbimport.conf.dist" ] && [ ! -f "/storage-root/config/dbimport.conf.dist" ]; then
echo "📄 Copying dbimport.conf.dist..."
cp /local-storage-root/source/azerothcore-playerbots/src/tools/dbimport/dbimport.conf.dist /storage-root/config/
# Copy core AzerothCore config template files (.dist) to config directory
echo "📄 Copying AzerothCore configuration templates..."
SOURCE_DIR="${SOURCE_DIR:-/local-storage-root/source/azerothcore-playerbots}"
if [ ! -d "$SOURCE_DIR" ] && [ -d "/local-storage-root/source/azerothcore-wotlk" ]; then
SOURCE_DIR="/local-storage-root/source/azerothcore-wotlk"
fi
# Seed dbimport.conf with a shared helper (fallback to a simple copy if missing)
if [ -f "/tmp/seed-dbimport-conf.sh" ]; then
echo "🧩 Seeding dbimport.conf"
DBIMPORT_CONF_DIR="/storage-root/config" \
DBIMPORT_SOURCE_ROOT="$SOURCE_DIR" \
sh -c '. /tmp/seed-dbimport-conf.sh && seed_dbimport_conf' || true
else
if [ -f "$SOURCE_DIR/src/tools/dbimport/dbimport.conf.dist" ]; then
cp -n "$SOURCE_DIR/src/tools/dbimport/dbimport.conf.dist" /storage-root/config/ 2>/dev/null || true
if [ ! -f "/storage-root/config/dbimport.conf" ]; then
cp "$SOURCE_DIR/src/tools/dbimport/dbimport.conf.dist" /storage-root/config/dbimport.conf
echo " ✓ Created dbimport.conf"
fi
fi
fi
# Copy authserver.conf.dist
if [ -f "$SOURCE_DIR/env/dist/etc/authserver.conf.dist" ]; then
cp -n "$SOURCE_DIR/env/dist/etc/authserver.conf.dist" /storage-root/config/ 2>/dev/null || true
if [ ! -f "/storage-root/config/authserver.conf" ]; then
cp "$SOURCE_DIR/env/dist/etc/authserver.conf.dist" /storage-root/config/authserver.conf
echo " ✓ Created authserver.conf"
fi
fi
# Copy worldserver.conf.dist
if [ -f "$SOURCE_DIR/env/dist/etc/worldserver.conf.dist" ]; then
cp -n "$SOURCE_DIR/env/dist/etc/worldserver.conf.dist" /storage-root/config/ 2>/dev/null || true
if [ ! -f "/storage-root/config/worldserver.conf" ]; then
cp "$SOURCE_DIR/env/dist/etc/worldserver.conf.dist" /storage-root/config/worldserver.conf
echo " ✓ Created worldserver.conf"
fi
fi
mkdir -p /storage-root/config/temp
# Fix ownership of root directories and all contents
if [ "$(id -u)" -eq 0 ]; then
chown -R ${CONTAINER_USER} /storage-root /local-storage-root
@@ -393,7 +456,7 @@ services:
ac-volume-init:
condition: service_completed_successfully
volumes:
- ${CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/azerothcore/data
- ${CLIENT_DATA_PATH:-${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}}:/azerothcore/data
- client-data-cache:/cache
- ./scripts:/tmp/scripts:ro
working_dir: /tmp
@@ -424,7 +487,7 @@ services:
ac-volume-init:
condition: service_completed_successfully
volumes:
- ${CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/azerothcore/data
- ${CLIENT_DATA_PATH:-${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}}:/azerothcore/data
- client-data-cache:/cache
- ./scripts:/tmp/scripts:ro
working_dir: /tmp
@@ -478,11 +541,13 @@ services:
ports:
- "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
restart: unless-stopped
logging:
logging: *logging-default
networks:
- azerothcore
volumes:
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
cap_add: ["SYS_NICE"]
healthcheck:
test: ["CMD", "sh", "-c", "ps aux | grep '[a]uthserver' | grep -v grep || exit 1"]
@@ -510,7 +575,7 @@ services:
AC_UPDATES_ENABLE_DATABASES: "7"
AC_BIND_IP: "0.0.0.0"
AC_DATA_DIR: "/azerothcore/data"
AC_SOAP_PORT: "7878"
AC_SOAP_PORT: "${SOAP_PORT}"
AC_PROCESS_PRIORITY: "0"
AC_ELUNA_ENABLED: "${AC_ELUNA_ENABLED}"
AC_ELUNA_TRACE_BACK: "${AC_ELUNA_TRACE_BACK}"
@@ -527,13 +592,14 @@ services:
- "${WORLD_EXTERNAL_PORT}:${WORLD_PORT}"
- "${SOAP_EXTERNAL_PORT}:${SOAP_PORT}"
volumes:
- ${CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/azerothcore/data
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_PATH}/logs:/azerothcore/logs
- ${STORAGE_PATH}/modules:/azerothcore/modules
- ${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts
- ${CLIENT_DATA_PATH:-${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}}:/azerothcore/data
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
- ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/azerothcore/modules
- ${STORAGE_LUA_SCRIPTS_PATH:-${STORAGE_PATH}/lua_scripts}:/azerothcore/lua_scripts
restart: unless-stopped
logging:
logging: *logging-default
networks:
- azerothcore
cap_add: ["SYS_NICE"]
@@ -571,15 +637,11 @@ services:
ports:
- "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
restart: unless-stopped
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
logging: *logging-default
networks:
- azerothcore
volumes:
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
cap_add: ["SYS_NICE"]
healthcheck:
test: ["CMD", "sh", "-c", "ps aux | grep '[a]uthserver' | grep -v grep || exit 1"]
@@ -611,11 +673,11 @@ services:
ports:
- "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
restart: unless-stopped
logging:
logging: *logging-default
networks:
- azerothcore
volumes:
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
cap_add: ["SYS_NICE"]
healthcheck:
test: ["CMD", "sh", "-c", "ps aux | grep '[a]uthserver' | grep -v grep || exit 1"]
@@ -645,7 +707,7 @@ services:
AC_UPDATES_ENABLE_DATABASES: "7"
AC_BIND_IP: "0.0.0.0"
AC_DATA_DIR: "/azerothcore/data"
AC_SOAP_PORT: "7878"
AC_SOAP_PORT: "${SOAP_PORT}"
AC_PROCESS_PRIORITY: "0"
AC_ELUNA_ENABLED: "${AC_ELUNA_ENABLED}"
AC_ELUNA_TRACE_BACK: "${AC_ELUNA_TRACE_BACK}"
@@ -663,13 +725,14 @@ services:
- "${WORLD_EXTERNAL_PORT}:${WORLD_PORT}"
- "${SOAP_EXTERNAL_PORT}:${SOAP_PORT}"
volumes:
- ${CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/azerothcore/data
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_PATH}/logs:/azerothcore/logs
- ${STORAGE_PATH}/modules:/azerothcore/modules
- ${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts
- ${CLIENT_DATA_PATH:-${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}}:/azerothcore/data
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
- ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/azerothcore/modules
- ${STORAGE_LUA_SCRIPTS_PATH:-${STORAGE_PATH}/lua_scripts}:/azerothcore/lua_scripts
restart: unless-stopped
logging:
logging: *logging-default
networks:
- azerothcore
cap_add: ["SYS_NICE"]
@@ -701,7 +764,7 @@ services:
AC_UPDATES_ENABLE_DATABASES: "7"
AC_BIND_IP: "0.0.0.0"
AC_DATA_DIR: "/azerothcore/data"
AC_SOAP_PORT: "7878"
AC_SOAP_PORT: "${SOAP_PORT}"
AC_PROCESS_PRIORITY: "0"
AC_ELUNA_ENABLED: "${AC_ELUNA_ENABLED}"
AC_ELUNA_TRACE_BACK: "${AC_ELUNA_TRACE_BACK}"
@@ -715,22 +778,19 @@ services:
PLAYERBOT_MAX_BOTS: "${PLAYERBOT_MAX_BOTS}"
AC_LOG_LEVEL: "2"
volumes:
- ${CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}:/azerothcore/data
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_PATH}/logs:/azerothcore/logs
- ${STORAGE_PATH}/modules:/azerothcore/modules
- ${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts
- ${CLIENT_DATA_PATH:-${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}}:/azerothcore/data
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
- ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
- ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/azerothcore/modules
- ${STORAGE_LUA_SCRIPTS_PATH:-${STORAGE_PATH}/lua_scripts}:/azerothcore/lua_scripts
networks:
- azerothcore
ports:
- "${WORLD_EXTERNAL_PORT}:${WORLD_PORT}"
- "${SOAP_EXTERNAL_PORT}:${SOAP_PORT}"
restart: unless-stopped
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
logging: *logging-default
cap_add: ["SYS_NICE"]
healthcheck:
test: ["CMD", "sh", "-c", "ps aux | grep '[w]orldserver' | grep -v grep || exit 1"]
@@ -757,8 +817,8 @@ services:
ac-storage-init:
condition: service_completed_successfully
volumes:
- ${STORAGE_PATH}/modules:/modules
- ${STORAGE_PATH}/config:/azerothcore/env/dist/etc
- ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/modules
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
- ./scripts:/tmp/scripts:ro
- ./config:/tmp/config:ro
env_file:
@@ -783,8 +843,8 @@ services:
container_name: ${CONTAINER_POST_INSTALL}
user: "0:0"
volumes:
- ${STORAGE_PATH}/config:/azerothcore/config
- ${STORAGE_PATH}/install-markers:/install-markers
- ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/config
- ${STORAGE_INSTALL_MARKERS_PATH:-${STORAGE_PATH}/install-markers}:/install-markers
- ./scripts:/tmp/scripts:ro
- /var/run/docker.sock:/var/run/docker.sock:rw
working_dir: /tmp
@@ -819,8 +879,10 @@ services:
- |
apk add --no-cache bash curl docker-cli su-exec
chmod +x /tmp/scripts/bash/auto-post-install.sh 2>/dev/null || true
echo "📥 Running post-install as ${CONTAINER_USER}"
su-exec ${CONTAINER_USER} bash /tmp/scripts/bash/auto-post-install.sh
echo "📥 Running post-install as root (testing mode)"
mkdir -p /install-markers
chown -R ${CONTAINER_USER} /azerothcore/config /install-markers 2>/dev/null || true
bash /tmp/scripts/bash/auto-post-install.sh
restart: "no"
networks:
- azerothcore
@@ -877,7 +939,7 @@ services:
timeout: 10s
retries: 3
start_period: 40s
logging:
logging: *logging-default
security_opt:
- no-new-privileges:true
networks:

View File

@@ -152,7 +152,7 @@ storage/
├── client-data/ # Unpacked WoW client data & DBC overrides
├── logs/ # Server log files
├── modules/ # Downloaded module source code
├── lua_scripts/ # Eluna Lua scripts (auto-loaded)
├── lua_scripts/ # ALE Lua scripts (auto-loaded)
├── install-markers/ # Module installation state tracking
└── backups/ # Automated database backups
    ├── daily/ # Daily backups (retained per BACKUP_RETENTION_DAYS)
@@ -190,6 +190,26 @@ The build system is optimized for development and production deployments with Do
- Build artifact caching for faster rebuilds
- Support for custom patches and modifications
### Module Build Source Path
**`MODULES_REBUILD_SOURCE_PATH`** - Path to AzerothCore source used for C++ module compilation.
**Default:** `${STORAGE_PATH_LOCAL}/source/azerothcore`
Auto-selects the appropriate fork:
- Playerbots enabled → `./local-storage/source/azerothcore-playerbots`
- Standard build → `./local-storage/source/azerothcore`
**Custom Override:**
```bash
MODULES_REBUILD_SOURCE_PATH=/path/to/custom/azerothcore
```
**Notes:**
- Must be a valid AzerothCore git repository
- Cannot be inside `STORAGE_PATH` (for performance reasons)
- Auto-managed by `setup-source.sh` and `rebuild-with-modules.sh`
## Custom Configuration
Advanced customization options for specialized deployments and development environments.

View File

@@ -3,6 +3,8 @@
**Last Updated:** 2025-11-14
**Status:** ✅ All blocked modules properly disabled
**Note:** This summary is historical. The authoritative block list lives in `config/module-manifest.json` (currently 94 modules marked `status: "blocked"`). This file and `docs/DISABLED_MODULES.md` should be reconciled during the next blocklist refresh.
---
## Summary

View File

@@ -4,6 +4,8 @@ This document tracks modules that have been disabled due to compilation errors o
**Last Updated:** 2025-11-14
**Note:** Historical snapshot. The current authoritative status for disabled/blocked modules is `status: "blocked"` in `config/module-manifest.json` (94 entries as of now). Align this file with the manifest during the next maintenance pass.
---
## Disabled Modules
@@ -111,7 +113,7 @@ These modules are blocked in the manifest with known issues:
## Current Working Module Count
**Total in Manifest:** ~93 modules
**Total in Manifest:** ~93 modules (historical; current manifest: 348 total / 221 supported / 94 blocked)
**Enabled:** 89 modules
**Disabled (Build Issues):** 4 modules
**Blocked (Manifest):** 3 modules

View File

@@ -9,7 +9,7 @@ This guide provides a complete walkthrough for deploying AzerothCore RealmMaster
Before you begin, ensure you have:
- **Docker** with Docker Compose
- **16GB+ RAM** and **32GB+ storage**
- **16GB+ RAM** and **64GB+ storage**
- **Linux/macOS/WSL2** (Windows with WSL2 recommended)
## Quick Overview
@@ -40,7 +40,7 @@ cd AzerothCore-RealmMaster
The setup wizard will guide you through:
- **Server Configuration**: IP address, ports, timezone
- **Module Selection**: Choose from 30+ available modules or use presets
- **Module Selection**: Choose from hundreds of official modules (348 in manifest; 221 currently supported) or use presets
- **Module Definitions**: Customize defaults in `config/module-manifest.json` and optional presets under `config/module-profiles/`
- **Storage Paths**: Configure NFS/local storage locations
- **Playerbot Settings**: Max bots, account limits (if enabled)
@@ -170,6 +170,12 @@ Optional flags:
- `--remote-port 2222` - Custom SSH port
- `--remote-identity ~/.ssh/custom_key` - Specific SSH key
- `--remote-skip-storage` - Don't sync storage directory (fresh install on remote)
- `--remote-clean-containers` - Stop/remove existing `ac-*` containers and project images during migration
- `--remote-skip-env` - Leave the remote `.env` untouched (won't upload local one)
- `--remote-preserve-containers` - Do not stop/remove existing `ac-*` containers/images during migration
- `--remote-storage-path /mnt/acore-storage` - Override STORAGE_PATH on the remote host (local-storage stays per .env)
- `--remote-container-user 1001:1001` - Override CONTAINER_USER on the remote host (uid:gid)
- Note: do not combine `--remote-clean-containers` with `--remote-preserve-containers`; the flags are mutually exclusive.
### Step 3: Deploy on Remote Host
```bash
@@ -197,8 +203,6 @@ The remote deployment process transfers:
### Module Presets
> **⚠️ Warning:** Module preset support is still in progress. The bundled presets have not been fully tested yet—please share issues or suggestions via Discord (`uprightbass360`).
- Define JSON presets in `config/module-profiles/*.json`. Each file contains:
- `modules` (array, required) list of `MODULE_*` identifiers to enable.
- `label` (string, optional) text shown in the setup menu (emoji welcome).
@@ -216,11 +220,12 @@ The remote deployment process transfers:
```
- `setup.sh` automatically adds these presets to the module menu and enables the listed modules when selected or when `--module-config <name>` is provided.
- Built-in presets:
- `config/module-profiles/suggested-modules.json` default solo-friendly QoL stack.
- `config/module-profiles/playerbots-suggested-modules.json` suggested stack plus playerbots.
- `config/module-profiles/playerbots-only.json` playerbot-focused profile (adjust `--playerbot-max-bots`).
- Custom example:
- `config/module-profiles/sam.json` Sam's playerbot-focused profile (set `--playerbot-max-bots 3000` when using this preset).
- `config/module-profiles/RealmMaster.json` 33-module baseline used for testing.
- `config/module-profiles/suggested-modules.json` light AzerothCore QoL stack without playerbots.
- `config/module-profiles/playerbots-suggested-modules.json` suggested QoL stack plus playerbots.
- `config/module-profiles/azerothcore-vanilla.json` pure AzerothCore (no optional modules).
- `config/module-profiles/playerbots-only.json` playerbot prerequisites only (tune bot counts separately).
- `config/module-profiles/all-modules.json` enable everything currently marked supported/active (not recommended).
- Module metadata lives in `config/module-manifest.json`; update that file if you need to add new modules or change repositories/branches.
---

View File

@@ -4,7 +4,7 @@ This document provides a comprehensive overview of all available modules in the
## Overview
AzerothCore RealmMaster includes **93 modules** that are automatically downloaded, configured, and SQL scripts executed when enabled. All modules are organized into logical categories for easy browsing and selection.
AzerothCore RealmMaster currently ships a manifest of **348 modules** (221 marked supported/active). The default RealmMaster preset enables 33 of these for day-to-day testing. All modules are automatically downloaded, configured, and SQL scripts executed when enabled. Modules are organized into logical categories for easy browsing and selection.
## How Modules Work
@@ -158,7 +158,7 @@ The module collection is organized into the following categories:
| **[eluna-scripts](https://github.com/Isidorsson/Eluna-scripts.git)** | Collection of Lua scripts for creating custom gameplay mechanics and features |
| **[eluna-ts](https://github.com/azerothcore/eluna-ts.git)** | Adds a TS-to-Lua workflow so Eluna scripts can be authored with modern tooling |
| **[mod-aio](https://github.com/Rochet2/AIO.git)** | Pure Lua server-client communication system for bidirectional data transmission |
| **[mod-ale](https://github.com/azerothcore/mod-ale.git)** | Adds Eluna Lua scripting engine for creating custom gameplay mechanics |
| **[mod-ale](https://github.com/azerothcore/mod-ale.git)** | ALE (AzerothCore Lua Engine) - Lua scripting engine for custom gameplay mechanics (formerly Eluna) |
## Admin Tools
@@ -233,10 +233,13 @@ This will present a menu for selecting individual modules or choosing from prede
Pre-configured module combinations are available in `config/module-profiles/`:
- **Suggested Modules** - Baseline solo-friendly quality of life mix
- **Playerbots Suggested** - Suggested stack plus playerbots
- **Playerbots Only** - Playerbot-focused profile
- **Custom Profiles** - Additional specialized configurations
- `RealmMaster` - 33-module baseline used for day-to-day testing
- `suggested-modules` - Light AzerothCore QoL stack without playerbots
- `playerbots-suggested-modules` - Suggested QoL stack plus playerbots
- `azerothcore-vanilla` - Pure AzerothCore with no optional modules
- `playerbots-only` - Playerbot prerequisites only
- `all-modules` - Everything in the manifest (not recommended)
- Custom profiles - Drop new JSON files to add your own combinations
### Manual Configuration
@@ -261,4 +264,4 @@ Modules are categorized by type:
For detailed setup and deployment instructions, see the main [README.md](../README.md) file.
For technical details about module management and the build system, refer to the [Architecture Overview](../README.md#architecture-overview) section.

View File

@@ -6,6 +6,8 @@ This document tracks all modules that have been disabled due to compilation fail
**Total Blocked Modules:** 93
**Note:** Historical snapshot from 2025-11-22 validation. The current authoritative count lives in `config/module-manifest.json` (94 modules marked `status: "blocked"`). Update this file when reconciling the manifest.
---
## Compilation Errors

View File

@@ -3,6 +3,8 @@
**Date:** 2025-11-14
**Status:** ✅ PRE-DEPLOYMENT TESTS PASSED
**Note:** Historical record for the 2025-11-14 run. Counts here reflect that test set (93 modules). The current manifest contains 348 modules, 221 marked supported/active, and the RealmMaster preset exercises 33 modules.
---
## Test Execution Summary
@@ -31,7 +33,7 @@
**Verified:**
- Environment file present
- Module configuration loaded
- 93 modules enabled for testing
- 93 modules enabled for testing in this run (current manifest: 348 total / 221 supported; RealmMaster preset: 33)
### Test 2: Module Manifest Validation ✅
```bash
@@ -139,8 +141,8 @@ MODULES_ENABLED="mod-playerbots mod-aoe-loot ..."
**What Gets Built:**
- AzerothCore with playerbots branch
- 93 modules compiled and integrated
- Custom Docker images: `acore-compose:worldserver-modules-latest` etc.
- 93 modules compiled and integrated in this run (current manifest: 348 total / 221 supported)
- Custom Docker images: `${COMPOSE_PROJECT_NAME}:worldserver-modules-latest` etc.
### Deployment Status: READY TO DEPLOY 🚀
@@ -261,7 +263,7 @@ docker exec ac-mysql mysql -uroot -p[password] acore_world \
- **Bash:** 5.0+
- **Python:** 3.x
- **Docker:** Available
- **Modules Enabled:** 93
- **Modules Enabled:** 93 (historical run)
- **Test Date:** 2025-11-14
---

View File

@@ -23,7 +23,7 @@ Interactive `.env` generator with module selection, server configuration, and de
```bash
./setup.sh # Interactive configuration
./setup.sh --module-config sam # Use predefined module profile, check profiles directory
./setup.sh --module-config RealmMaster # Use predefined module profile (see config/module-profiles)
./setup.sh --playerbot-max-bots 3000 # Set playerbot limits
```
@@ -140,6 +140,147 @@ Restores user accounts and characters from backup while preserving world data.
- `acore_characters.sql[.gz]` - Character data (required)
- `acore_world.sql[.gz]` - World data (optional)
#### `scripts/bash/pdump-import.sh` - Character Import
Imports individual character dump files into the database.
```bash
# Import character from pdump file
./scripts/bash/pdump-import.sh --file character.pdump --account testuser --password azerothcore123
# Import with character rename
./scripts/bash/pdump-import.sh --file oldchar.pdump --account newuser --name "NewName" --password azerothcore123
# Validate pdump without importing (dry run)
./scripts/bash/pdump-import.sh --file character.pdump --account testuser --password azerothcore123 --dry-run
```
**Features:**
- Automatic GUID assignment or manual override with `--guid`
- Character renaming during import with `--name`
- Account validation and character name uniqueness checks
- Automatic database backup before import
- Safe server restart handling
#### `scripts/bash/import-pdumps.sh` - Batch Character Import
Processes multiple character dump files from the `import/pdumps/` directory.
```bash
# Import all pdumps with environment settings
./scripts/bash/import-pdumps.sh --password azerothcore123 --account defaultuser
# Non-interactive batch import
./scripts/bash/import-pdumps.sh --password azerothcore123 --non-interactive
```
**Directory Structure:**
```
import/pdumps/
├── character1.pdump # Character dump files
├── character2.sql # SQL dump files also supported
├── configs/ # Optional per-character configuration
│ ├── character1.conf # account=user1, name=NewName
│ └── character2.conf # account=user2, guid=5000
└── processed/ # Successfully imported files moved here
```
**Configuration Format (`.conf`):**
```ini
account=target_account_name_or_id
name=new_character_name # Optional: rename character
guid=force_specific_guid # Optional: force GUID
```
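The `.conf` format above is plain `key=value` with optional `#` comments. As an illustration only (this is not the project's actual parser, and the function name is hypothetical), a minimal reader for these files could look like:

```python
def parse_pdump_conf(path):
    """Parse the simple key=value format used by import/pdumps/configs/*.conf."""
    opts = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.split("#", 1)[0].strip()  # drop inline comments
            if not line or "=" not in line:
                continue  # skip blanks and malformed lines
            key, value = line.split("=", 1)
            opts[key.strip()] = value.strip()
    return opts
```

Only `account`, `name`, and `guid` are meaningful to the import scripts; anything else would simply be carried along by a parser like this.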
### Security Management Scripts
#### `scripts/bash/bulk-2fa-setup.sh` - Bulk 2FA Setup
Configures TOTP 2FA for multiple AzerothCore accounts using official SOAP API.
```bash
# Setup 2FA for all accounts without it
./scripts/bash/bulk-2fa-setup.sh --all
# Setup for specific accounts
./scripts/bash/bulk-2fa-setup.sh --account user1 --account user2
# Force regenerate with custom issuer
./scripts/bash/bulk-2fa-setup.sh --all --force --issuer "MyServer"
# Preview what would be done
./scripts/bash/bulk-2fa-setup.sh --all --dry-run
# Use custom SOAP credentials
./scripts/bash/bulk-2fa-setup.sh --all --soap-user admin --soap-pass adminpass
# Show help / options
./scripts/bash/bulk-2fa-setup.sh --help
```
**Features:**
- **Official AzerothCore API Integration**: Uses SOAP commands instead of direct database manipulation
- Generates AzerothCore-compatible 16-character Base32 TOTP secrets (longer secrets are rejected by SOAP)
- Automatic account discovery or specific targeting
- QR code generation for authenticator apps
- Force regeneration of existing 2FA secrets
- Comprehensive output with setup instructions
- Safe dry-run mode for testing
- SOAP connectivity validation
- Proper error handling and validation
**Requirements:**
- AzerothCore worldserver with SOAP enabled (SOAP.Enabled = 1)
- SOAP reachable on external port 7778 (`SOAP.Port = 7878` inside the container, mapped to host port 7778)
- Remote Access enabled (Ra.Enable = 1) in worldserver.conf
- SOAP.IP = "0.0.0.0" for external connectivity
- GM account with sufficient privileges (gmlevel 3)
- Provide SOAP credentials explicitly via `--soap-user` and `--soap-pass` (these are required; no env fallback)
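For reference, a SOAP console command against a worldserver configured as above is a plain HTTP POST with Basic auth. The sketch below only builds the request; the `urn:AC` namespace and `executeCommand` element are assumptions based on AzerothCore's stock SOAP handler, not taken from `bulk-2fa-setup.sh` itself:

```python
import base64

def build_soap_request(command, user, password, host="127.0.0.1", port=7778):
    """Build (url, headers, body) for an AzerothCore SOAP console command.

    Assumption: envelope shape matches AzerothCore's stock SOAP handler.
    """
    body = (
        '<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" '
        'xmlns:ns1="urn:AC">'
        "<SOAP-ENV:Body><ns1:executeCommand>"
        f"<command>{command}</command>"
        "</ns1:executeCommand></SOAP-ENV:Body></SOAP-ENV:Envelope>"
    )
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {"Content-Type": "text/xml", "Authorization": f"Basic {auth}"}
    return f"http://{host}:{port}/", headers, body
```

The user/password pair must belong to a gmlevel 3 account, which is why the script requires `--soap-user` and `--soap-pass`.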
**Output Structure:**
```
./2fa-setup-TIMESTAMP/
├── qr-codes/ # QR code images for each account
├── setup-report.txt # Complete setup summary
├── console-commands.txt # Manual verification commands
└── secrets-backup.csv # Secure backup of all secrets
```
**Security Notes:**
- Generated QR codes and backup files contain sensitive TOTP secrets
- Distribute QR codes securely to users
- Delete or encrypt backup files after distribution
- TOTP secrets are also stored in AzerothCore database
#### `scripts/bash/generate-2fa-qr.sh` / `generate-2fa-qr.py` - Individual 2FA Setup
Generate QR codes for individual account 2FA setup.
> Tip: each script supports `-h/--help` to see all options.
```bash
# Generate QR code for single account
./scripts/bash/generate-2fa-qr.sh -u username
# Use custom issuer and output path
./scripts/bash/generate-2fa-qr.sh -u username -i "MyServer" -o /tmp/qr.png
# Use existing secret
./scripts/bash/generate-2fa-qr.sh -u username -s JBSWY3DPEHPK3PXP
# Show help / options
./scripts/bash/generate-2fa-qr.sh -h
```
> AzerothCore's SOAP endpoint only accepts 16-character Base32 secrets (A-Z and 2-7). The generators enforce this length to avoid "The provided two-factor authentication secret is not valid" errors.
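Generating a compliant secret is straightforward with the standard library; a minimal sketch (the function name is illustrative, not from the bundled scripts):

```python
import secrets

BASE32_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"  # RFC 4648 Base32, A-Z and 2-7

def generate_totp_secret(length=16):
    """Generate a Base32 secret at the exact length the SOAP endpoint accepts."""
    return "".join(secrets.choice(BASE32_ALPHABET) for _ in range(length))
```

Any other length (or characters outside A-Z/2-7) will be rejected by the SOAP command.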
#### `scripts/bash/test-2fa-token.py` - Generate TOTP Test Codes
Quickly verify a 16-character Base32 secret produces valid 6-digit codes.
```bash
# Show help
./scripts/bash/test-2fa-token.py --help
# Generate two consecutive codes for a secret
./scripts/bash/test-2fa-token.py -s JBSWY3DPEHPK3PXP -c 2
```
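The check this helper performs can be reproduced with the standard library alone. The sketch below assumes the standard TOTP defaults (HMAC-SHA1, 30-second step, 6 digits) and is verified against the published RFC 6238 test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code from a Base32 secret (SHA-1 defaults)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59 → "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))
```

Feed your 16-character secret to a function like this (or to `test-2fa-token.py`) and compare against your authenticator app before relying on it for logins.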
### Module Management Scripts
#### `scripts/bash/stage-modules.sh` - Module Staging
@@ -274,6 +415,51 @@ Comprehensive deployment verification with health checks and service validation.
./scripts/bash/verify-deployment.sh --quick # Quick health check only
```
#### `scripts/bash/validate-env.sh` - Environment Configuration Validator
Validates `.env` configuration for required and optional variables with detailed reporting.
```bash
./scripts/bash/validate-env.sh # Basic validation (required vars only)
./scripts/bash/validate-env.sh --strict # Validate required + optional vars
./scripts/bash/validate-env.sh --quiet # Errors only, suppress success messages
```
**Exit Codes:**
- `0` - All required variables present (and optional if --strict)
- `1` - Missing required variables
- `2` - Missing optional variables (only in --strict mode)
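The exit-code semantics above can be modeled as follows (a conceptual sketch only; the `required`/`optional` lists are placeholders, not the script's actual variable sets):

```python
def validation_exit_code(env, required, optional, strict=False):
    """Mirror validate-env.sh's documented exit codes: 0 ok, 1 missing required, 2 missing optional (--strict)."""
    if any(not env.get(k) for k in required):
        return 1
    if strict and any(not env.get(k) for k in optional):
        return 2
    return 0
```

In a CI pipeline you would typically treat exit code 1 as fatal and exit code 2 as a warning.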
**Validates:**
- **Project Configuration:** `COMPOSE_PROJECT_NAME`, `NETWORK_NAME`
- **Repository URLs:** Standard and playerbots AzerothCore repositories
- **Storage Paths:** `STORAGE_PATH`, `STORAGE_PATH_LOCAL`, `MODULES_REBUILD_SOURCE_PATH`
- **Database Settings:** MySQL credentials, ports, database names
- **Container Config:** Container names and user permissions
- **Build Paths:** Module rebuild source paths (optional)
- **Performance Tuning:** MySQL buffer pool, InnoDB settings (optional)
- **Image References:** Docker image tags (optional)
**Use Cases:**
- Pre-deployment validation
- Troubleshooting configuration issues
- CI/CD pipeline checks
- Documentation of environment requirements
**Example Output:**
```
Validating environment configuration...
✅ Loaded environment from /path/to/.env
Checking required variables...
✅ COMPOSE_PROJECT_NAME=azerothcore-realmmaster
✅ NETWORK_NAME=azerothcore
✅ STORAGE_PATH=./storage
✅ MYSQL_ROOT_PASSWORD=********
✅ All required variables are set
✅ Environment validation passed ✨
```
### Backup System Scripts
#### `scripts/bash/backup-scheduler.sh` - Automated Backup Service

View File

@@ -7,7 +7,8 @@ This directory allows you to easily import custom database files and configurati
```
import/
├── db/ # Database SQL files to import
└── conf/ # Configuration file overrides
├── conf/ # Configuration file overrides
└── pdumps/ # Character dump files to import
```
## 🗄️ Database Import (`import/db/`)
@@ -93,6 +94,31 @@ AiPlayerbot.MaxRandomBots = 200
See `config/CONFIG_MANAGEMENT.md` for detailed preset documentation.
## 🎮 Character Import (`import/pdumps/`)
Import character dump files from other AzerothCore servers.
### Supported Formats
- **`.pdump`** - Character dump files from `.pdump write` command
- **`.sql`** - SQL character dump files
### Quick Start
1. Place character dump files in `import/pdumps/`
2. Run the import script:
```bash
./scripts/bash/import-pdumps.sh --password your_mysql_password --account target_account
```
### Advanced Configuration
Create `import/pdumps/configs/filename.conf` for per-character settings:
```ini
account=target_account
name=NewCharacterName # Optional: rename
guid=5000 # Optional: force GUID
```
**📖 For complete character import documentation, see [import/pdumps/README.md](pdumps/README.md)**
## 🔄 Automated Import
Both database and configuration imports are automatically handled during:
@@ -118,6 +144,7 @@ Both database and configuration imports are automatically handled during:
## 📚 Related Documentation
- [Character Import Guide](pdumps/README.md) - Complete pdump import documentation
- [Database Management](../docs/DATABASE_MANAGEMENT.md)
- [Configuration Management](../config/CONFIG_MANAGEMENT.md)
- [Module Management](../docs/ADVANCED.md#module-management)

import/pdumps/README.md Normal file
View File

@@ -0,0 +1,192 @@
# Character PDump Import
This directory allows you to easily import character pdump files into your AzerothCore server.
## 📁 Directory Structure
```
import/pdumps/
├── README.md # This file
├── *.pdump # Place your character dump files here
├── *.sql # SQL dump files also supported
├── configs/ # Optional per-file configuration
│ ├── character1.conf
│ └── character2.conf
├── examples/ # Example files and configurations
└── processed/ # Successfully imported files are moved here
```
## 🎮 Character Dump Import
### Quick Start
1. **Place your pdump files** in this directory:
```bash
cp /path/to/mycharacter.pdump import/pdumps/
```
2. **Run the import script**:
```bash
./scripts/bash/import-pdumps.sh --password your_mysql_password --account target_account
```
3. **Login and play** - your characters are now available!
### Supported File Formats
- **`.pdump`** - Character dump files from AzerothCore `.pdump write` command
- **`.sql`** - SQL character dump files
### Configuration Options
#### Environment Variables (`.env`)
```bash
# Set default account for all imports
DEFAULT_IMPORT_ACCOUNT=testuser
# Database credentials (usually already set)
MYSQL_ROOT_PASSWORD=your_mysql_password
ACORE_DB_AUTH_NAME=acore_auth
ACORE_DB_CHARACTERS_NAME=acore_characters
```
#### Per-Character Configuration (`configs/filename.conf`)
Create a `.conf` file with the same name as your pdump file to specify custom import options:
**Example: `configs/mycharacter.conf`**
```ini
# Target account (required if not set globally)
account=testuser
# Rename character during import (optional)
name=NewCharacterName
# Force specific GUID (optional, auto-assigned if not specified)
guid=5000
```
### Command Line Usage
#### Import All Files
```bash
# Use environment settings
./scripts/bash/import-pdumps.sh
# Override settings
./scripts/bash/import-pdumps.sh --password mypass --account testuser
```
#### Import Single File
```bash
# Direct import with pdump-import.sh
./scripts/bash/pdump-import.sh --file character.pdump --account testuser --password mypass
# With character rename
./scripts/bash/pdump-import.sh --file oldchar.pdump --account newuser --name "NewName" --password mypass
# Validate before import (dry run)
./scripts/bash/pdump-import.sh --file character.pdump --account testuser --password mypass --dry-run
```
## 🛠️ Advanced Features
### Account Management
- **Account Validation**: Scripts automatically verify that target accounts exist
- **Account ID or Name**: You can use either account names or numeric IDs
- **Interactive Mode**: If no account is specified, you'll be prompted to enter one
### GUID Handling
- **Auto-Assignment**: Next available GUID is automatically assigned
- **Force GUID**: Use `--guid` parameter or config file to force specific GUID
- **Conflict Detection**: Import fails safely if GUID already exists
### Character Names
- **Validation**: Character names must follow WoW naming rules (2-12 letters)
- **Uniqueness**: Import fails if character name already exists on server
- **Renaming**: Use `--name` parameter or config file to rename during import
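These GUID and name checks can be sketched as follows (a simplified approximation of the scripts' validation, with illustrative function names; the 2-12 letter regex is a rough stand-in for WoW's full naming rules):

```python
import re

def assign_guid(existing_guids, forced=None):
    """Auto-assign the next free GUID, or fail safely when a forced GUID conflicts."""
    if forced is not None:
        if forced in existing_guids:
            raise ValueError(f"GUID {forced} already exists")
        return forced
    return max(existing_guids, default=0) + 1

def is_valid_character_name(name, taken_names):
    """Check the 2-12 letter rule (approximated) and server-wide uniqueness."""
    return bool(re.fullmatch(r"[A-Za-z]{2,12}", name)) and name.lower() not in taken_names
```

The real scripts run these checks against the `characters` database before any data is written, which is why a conflicting `--guid` or duplicate name aborts the import instead of corrupting data.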
### Safety Features
- **Automatic Backup**: Characters database is backed up before each import
- **Server Management**: World server is safely stopped/restarted during import
- **Rollback Ready**: Backups are stored in `manual-backups/` directory
- **Dry Run**: Validate imports without actually importing
## 📋 Import Workflow
1. **Validation Phase**
- Check file format and readability
- Validate target account exists
- Verify character name availability (if specified)
- Check GUID conflicts
2. **Pre-Import Phase**
- Create automatic database backup
- Stop world server for safe import
3. **Processing Phase**
- Process SQL file (update account references, GUID, name)
- Import character data into database
4. **Post-Import Phase**
- Restart world server
- Verify import success
- Move processed files to `processed/` directory
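The four phases above can be sketched as a shell skeleton. The helper functions here are hypothetical stand-ins so the control flow is visible; the real work lives in `pdump-import.sh`.

```bash
# Hypothetical stubs standing in for the real implementation steps.
backup_characters_db() { echo "backup"; }
stop_worldserver()     { echo "stop"; }
start_worldserver()    { echo "start"; }
load_pdump()           { echo "load $1"; }

import_pdump() {
  local file="$1"
  [ -r "$file" ] || { echo "cannot read: $file" >&2; return 1; }  # 1. validation
  backup_characters_db                                            # 2. pre-import
  stop_worldserver
  load_pdump "$file"                                              # 3. processing
  start_worldserver                                               # 4. post-import
}
```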
## 🚨 Important Notes
### Before You Import
- **Backup Your Database**: Always backup before importing characters
- **Account Required**: Target account must exist in your auth database
- **Unique Names**: Character names must be unique across the entire server
- **Server Downtime**: World server is briefly restarted during import
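A manual pre-import backup can be taken with a one-liner in the same shape as this repo's own `dump_db` helper in `backup-export.sh`. The container name and schema are this stack's defaults and may differ in your `.env`:

```bash
# Timestamped destination matching the manual-backups/ convention above.
backup_name() { printf 'manual-backups/characters_%s.sql.gz' "$(date +%Y%m%d_%H%M%S)"; }

# Only attempt the dump if the expected MySQL container is running.
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^ac-mysql$'; then
  mkdir -p manual-backups
  docker exec ac-mysql mysqldump -uroot -p"${MYSQL_ROOT_PASSWORD:-}" acore_characters \
    | gzip > "$(backup_name)"
fi
```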
### PDump Limitations
The AzerothCore pdump system has some known limitations:
- **Guild Data**: Guild information is not included in pdump files
- **Module Data**: Some module-specific data (transmog, reagent bank) may not transfer
- **Version Compatibility**: Pdump files from different database versions may have issues
### Troubleshooting
- **"Account not found"**: Verify the account exists in the auth database
- **"Character name exists"**: Use `--name` to rename, or choose a different name
- **"GUID conflicts"**: Use `--guid` to force a different GUID, or let the system auto-assign one
- **"Database errors"**: Check that the pdump file is compatible with your database version
## 📚 Examples
### Basic Import
```bash
# Place file and import
cp character.pdump import/pdumps/
./scripts/bash/import-pdumps.sh --password mypass --account testuser
```
### Batch Import with Configuration
```bash
# Set up multiple characters
cp char1.pdump import/pdumps/
cp char2.pdump import/pdumps/
# Configure individual characters
echo "account=user1" > import/pdumps/configs/char1.conf
printf 'account=user2\nname=RenamedChar\n' > import/pdumps/configs/char2.conf
# Import all
./scripts/bash/import-pdumps.sh --password mypass
```
### Single Character Import
```bash
./scripts/bash/pdump-import.sh \
--file character.pdump \
--account testuser \
--name "Newchar" \
--password mypass
```
## 🔗 Related Documentation
- [Database Management](../../docs/DATABASE_MANAGEMENT.md)
- [Backup System](../../docs/TROUBLESHOOTING.md#backup-system)
- [Getting Started Guide](../../docs/GETTING_STARTED.md)

View File

@@ -0,0 +1,43 @@
#!/bin/bash
# Example batch import script
# This shows how to import multiple characters with different configurations
set -euo pipefail
MYSQL_PASSWORD="your_mysql_password_here"
echo "Setting up character import batch..."
# Create character-specific configurations
mkdir -p ../configs
# Character 1: Import to specific account
cat > ../configs/warrior.conf <<EOF
account=player1
EOF
# Character 2: Import with rename
cat > ../configs/mage.conf <<EOF
account=player2
name=NewMageName
EOF
# Character 3: Import with forced GUID
cat > ../configs/priest.conf <<EOF
account=player3
name=HolyPriest
guid=5000
EOF
echo "Configuration files created!"
echo ""
echo "Now place your pdump files:"
echo " warrior.pdump -> ../warrior.pdump"
echo " mage.pdump -> ../mage.pdump"
echo " priest.pdump -> ../priest.pdump"
echo ""
echo "Then run the import:"
echo " ../../../scripts/bash/import-pdumps.sh --password $MYSQL_PASSWORD"
echo ""
echo "Or import individually:"
echo " ../../../scripts/bash/pdump-import.sh --file ../warrior.pdump --account player1 --password $MYSQL_PASSWORD"

View File

@@ -0,0 +1,20 @@
# Example character import configuration
# Copy this file to configs/yourcharacter.conf and modify as needed
# Target account (required if DEFAULT_IMPORT_ACCOUNT is not set)
# Can be account name or account ID
account=testuser
# Rename character during import (optional)
# Must follow WoW naming rules: 2-12 letters, no numbers/special chars
name=Newcharacter
# Force specific character GUID (optional)
# If not specified, next available GUID will be used automatically
# guid=5000
# Additional notes:
# - Account must exist in auth database before import
# - Character names must be unique across the server
# - GUID conflicts will cause import to fail
# - Use dry-run mode to test before actual import

View File

@@ -4,8 +4,17 @@ set -euo pipefail
INVOCATION_DIR="$PWD"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$SCRIPT_DIR"
# Load environment defaults if present
if [ -f "$PROJECT_ROOT/.env" ]; then
set -a
# shellcheck disable=SC1091
source "$PROJECT_ROOT/.env"
set +a
fi
SUPPORTED_DBS=(auth characters world)
declare -A SUPPORTED_SET=()
for db in "${SUPPORTED_DBS[@]}"; do
@@ -16,10 +25,12 @@ declare -A DB_NAMES=([auth]="" [characters]="" [world]="")
declare -a INCLUDE_DBS=()
declare -a SKIP_DBS=()
MYSQL_PW=""
MYSQL_PW="${MYSQL_ROOT_PASSWORD:-}"
DEST_PARENT=""
DEST_PROVIDED=false
EXPLICIT_SELECTION=false
MYSQL_CONTAINER="${CONTAINER_MYSQL:-ac-mysql}"
DEFAULT_BACKUP_DIR="${BACKUP_PATH:-${STORAGE_PATH:-./storage}/backups}"
usage(){
cat <<'EOF'
@@ -28,7 +39,7 @@ Usage: ./backup-export.sh [options]
Creates a timestamped backup of one or more ACore databases.
Options:
-o, --output DIR Destination directory (default: storage/backups)
-o, --output DIR Destination directory (default: BACKUP_PATH from .env, fallback: ./storage/backups)
-p, --password PASS MySQL root password
--auth-db NAME Auth database schema name
--characters-db NAME Characters database schema name
@@ -224,13 +235,9 @@ done
if $DEST_PROVIDED; then
DEST_PARENT="$(resolve_relative "$INVOCATION_DIR" "$DEST_PARENT")"
else
# Use storage/backups as default to align with existing backup structure
if [ -d "$SCRIPT_DIR/storage" ]; then
DEST_PARENT="$SCRIPT_DIR/storage/backups"
mkdir -p "$DEST_PARENT"
else
DEST_PARENT="$SCRIPT_DIR"
fi
DEFAULT_BACKUP_DIR="$(resolve_relative "$PROJECT_ROOT" "$DEFAULT_BACKUP_DIR")"
DEST_PARENT="$DEFAULT_BACKUP_DIR"
mkdir -p "$DEST_PARENT"
fi
TIMESTAMP="$(date +%Y%m%d_%H%M%S)"
@@ -241,7 +248,7 @@ generated_at="$(date --iso-8601=seconds)"
dump_db(){
local schema="$1" outfile="$2"
echo "Dumping ${schema} -> ${outfile}"
docker exec ac-mysql mysqldump -uroot -p"$MYSQL_PW" "$schema" | gzip > "$outfile"
docker exec "$MYSQL_CONTAINER" mysqldump -uroot -p"$MYSQL_PW" "$schema" | gzip > "$outfile"
}
for db in "${ACTIVE_DBS[@]}"; do

scripts/bash/bulk-2fa-setup.sh Executable file
View File

@@ -0,0 +1,584 @@
#!/bin/bash
#
# AzerothCore Bulk 2FA Setup Script
# Generates and configures TOTP 2FA for multiple accounts
#
# Usage: ./scripts/bash/bulk-2fa-setup.sh [OPTIONS]
#
set -e
# Script directory for relative imports
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
# Source common utilities
source "$SCRIPT_DIR/lib/common.sh"
# Set environment paths
ENV_PATH="${ENV_PATH:-$PROJECT_ROOT/.env}"
DEFAULT_ENV_PATH="$PROJECT_ROOT/.env"
# =============================================================================
# GLOBAL VARIABLES
# =============================================================================
# Command line options
OPT_ALL=false
OPT_ACCOUNTS=()
OPT_FORCE=false
OPT_OUTPUT_DIR=""
OPT_DRY_RUN=false
OPT_ISSUER="AzerothCore"
OPT_FORMAT="qr"
# Container and database settings
WORLDSERVER_CONTAINER="ac-worldserver"
DATABASE_CONTAINER="ac-mysql"
MYSQL_PASSWORD=""
# SOAP settings for official AzerothCore API
SOAP_HOST="localhost"
SOAP_PORT="7778"
SOAP_USERNAME=""
SOAP_PASSWORD=""
# Output paths
OUTPUT_BASE_DIR=""
QR_CODES_DIR=""
SETUP_REPORT=""
CONSOLE_COMMANDS=""
SECRETS_BACKUP=""
# =============================================================================
# USAGE AND HELP
# =============================================================================
show_usage() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Bulk 2FA setup for AzerothCore accounts using official SOAP API"
echo ""
echo "Options:"
echo " --all Process all non-bot accounts without 2FA"
echo " --account USERNAME Process specific account (can be repeated)"
echo " --force Regenerate 2FA even if already exists"
echo " --output-dir PATH Custom output directory"
echo " --dry-run Show what would be done without executing"
echo " --issuer NAME Issuer name for TOTP (default: AzerothCore)"
echo " --format [qr|manual] Output QR codes or manual setup info"
echo " --soap-user USERNAME SOAP API username (required)"
echo " --soap-pass PASSWORD SOAP API password (required)"
echo " -h, --help Show this help message"
echo ""
echo "Examples:"
echo " $0 --all # Setup 2FA for all accounts"
echo " $0 --account user1 --account user2 # Setup for specific accounts"
echo " $0 --all --force --issuer MyServer # Force regenerate with custom issuer"
echo " $0 --all --dry-run # Preview what would be done"
echo ""
echo "Requirements:"
echo " - AzerothCore worldserver with SOAP enabled on port 7778"
echo " - GM account with sufficient privileges for SOAP access"
echo " - Remote Access (Ra.Enable = 1) enabled in worldserver.conf"
}
# =============================================================================
# UTILITY FUNCTIONS
# =============================================================================
# Check if required containers are running and healthy
check_containers() {
info "Checking container status..."
# Check worldserver container
if ! docker ps --format '{{.Names}}' | grep -q "^${WORLDSERVER_CONTAINER}$"; then
fatal "Container $WORLDSERVER_CONTAINER is not running"
fi
# Check if database container exists
if ! docker ps --format '{{.Names}}' | grep -q "^${DATABASE_CONTAINER}$"; then
fatal "Container $DATABASE_CONTAINER is not running"
fi
# Test database connectivity
if ! docker exec "$WORLDSERVER_CONTAINER" mysql -h "$DATABASE_CONTAINER" -u root -p"$MYSQL_PASSWORD" acore_auth -e "SELECT 1;" &>/dev/null; then
fatal "Cannot connect to AzerothCore database"
fi
# Test SOAP connectivity (only if credentials are available)
if [ -n "$SOAP_USERNAME" ] && [ -n "$SOAP_PASSWORD" ]; then
info "Testing SOAP API connectivity..."
if ! soap_result=$(soap_execute_command "server info"); then
fatal "Cannot connect to SOAP API: $soap_result"
fi
ok "SOAP API is accessible"
fi
ok "Containers are healthy and accessible"
}
# Execute MySQL query via container
mysql_query() {
local query="$1"
local database="${2:-acore_auth}"
docker exec "$WORLDSERVER_CONTAINER" mysql \
-h "$DATABASE_CONTAINER" \
-u root \
-p"$MYSQL_PASSWORD" \
"$database" \
-e "$query" \
2>/dev/null
}
# Execute SOAP command via AzerothCore official API
soap_execute_command() {
local command="$1"
local response
# Construct SOAP XML request
local soap_request='<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/1999/XMLSchema"
xmlns:ns1="urn:AC">
<SOAP-ENV:Body>
<ns1:executeCommand>
<command>'"$command"'</command>
</ns1:executeCommand>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>'
# Execute SOAP request
response=$(curl -s -X POST \
-H "Content-Type: text/xml" \
--user "$SOAP_USERNAME:$SOAP_PASSWORD" \
-d "$soap_request" \
"http://$SOAP_HOST:$SOAP_PORT/" 2>/dev/null)
# Flatten response for reliable parsing
local flat_response
flat_response=$(echo "$response" | tr -d '\n' | sed 's/\r//g')
# Check if response contains fault
if echo "$flat_response" | grep -q "SOAP-ENV:Fault"; then
# Extract fault string for error reporting
echo "$flat_response" | sed -n 's/.*<faultstring>\(.*\)<\/faultstring>.*/\1/p' | sed 's/&#xD;//g'
return 1
fi
# Extract successful result
echo "$flat_response" | sed -n 's/.*<result>\(.*\)<\/result>.*/\1/p' | sed 's/&#xD;//g'
return 0
}
# Generate Base32 TOTP secret
generate_totp_secret() {
# Use existing generation logic from generate-2fa-qr.sh
if command -v base32 >/dev/null 2>&1; then
openssl rand 10 | base32 -w0 | head -c16
else
# Fallback using Python
python3 -c "
import base64
import os
secret_bytes = os.urandom(10)
secret_b32 = base64.b32encode(secret_bytes).decode('ascii').rstrip('=')
print(secret_b32[:16])
"
fi
}
# Validate Base32 secret format
validate_base32_secret() {
local secret="$1"
if [[ ! "$secret" =~ ^[A-Z2-7]+$ ]]; then
return 1
fi
if [ ${#secret} -ne 16 ]; then
err "AzerothCore SOAP requires a 16-character Base32 secret (got ${#secret})"
return 1
fi
return 0
}
# =============================================================================
# ACCOUNT DISCOVERY FUNCTIONS
# =============================================================================
# Get all accounts that need 2FA setup
get_accounts_needing_2fa() {
local force="$1"
local query
if [ "$force" = "true" ]; then
# Include accounts that already have 2FA when force is enabled
query="SELECT username FROM account
WHERE username NOT LIKE 'rndbot%'
AND username NOT LIKE 'playerbot%'
ORDER BY username;"
else
# Only accounts without 2FA
query="SELECT username FROM account
WHERE (totp_secret IS NULL OR totp_secret = '')
AND username NOT LIKE 'rndbot%'
AND username NOT LIKE 'playerbot%'
ORDER BY username;"
fi
mysql_query "$query" | tail -n +2 # Remove header row
}
# Check if specific account exists
account_exists() {
local username="$1"
local result
result=$(mysql_query "SELECT COUNT(*) FROM account WHERE username = '$username';" | tail -n +2)
[ "$result" -eq 1 ]
}
# Check if account already has 2FA
account_has_2fa() {
local username="$1"
local result
result=$(mysql_query "SELECT COUNT(*) FROM account WHERE username = '$username' AND totp_secret IS NOT NULL AND totp_secret != '';" | tail -n +2)
[ "$result" -eq 1 ]
}
# =============================================================================
# 2FA SETUP FUNCTIONS
# =============================================================================
# Generate and set up 2FA for a single account
setup_2fa_for_account() {
local username="$1"
local force="$2"
local secret=""
local qr_output=""
info "Processing account: $username"
# Check if account exists
if ! account_exists "$username"; then
err "Account '$username' does not exist, skipping"
return 1
fi
# Check if account already has 2FA
if account_has_2fa "$username" && [ "$force" != "true" ]; then
warn "Account '$username' already has 2FA configured, use --force to regenerate"
return 0
fi
# Generate TOTP secret
secret=$(generate_totp_secret)
if [ -z "$secret" ] || ! validate_base32_secret "$secret"; then
err "Failed to generate valid TOTP secret for $username"
return 1
fi
if [ "$OPT_DRY_RUN" = "true" ]; then
log "DRY RUN: Would set 2FA secret for $username: $secret"
return 0
fi
# Set 2FA using official AzerothCore SOAP API
local soap_result
if ! soap_result=$(soap_execute_command ".account set 2fa $username $secret"); then
err "Failed to set 2FA for $username via SOAP API: $soap_result"
return 1
fi
# Verify success message
if ! echo "$soap_result" | grep -q "Successfully enabled two-factor authentication"; then
err "Unexpected SOAP response for $username: $soap_result"
return 1
fi
# Generate QR code if format is 'qr'
if [ "$OPT_FORMAT" = "qr" ]; then
qr_output="$QR_CODES_DIR/${username}_2fa_qr.png"
if ! "$SCRIPT_DIR/generate-2fa-qr.sh" -u "$username" -s "$secret" -i "$OPT_ISSUER" -o "$qr_output" >/dev/null; then
warn "Failed to generate QR code for $username, but secret was saved"
fi
fi
# Log setup information
echo "$username,$secret,$(date -u +"%Y-%m-%d %H:%M:%S UTC")" >> "$SECRETS_BACKUP"
echo "account set 2fa $username $secret" >> "$CONSOLE_COMMANDS"
ok "2FA configured for account: $username"
return 0
}
# =============================================================================
# OUTPUT AND REPORTING FUNCTIONS
# =============================================================================
# Create output directory structure
create_output_structure() {
local timestamp
timestamp=$(date +"%Y%m%d%H%M%S")
if [ -n "$OPT_OUTPUT_DIR" ]; then
OUTPUT_BASE_DIR="$OPT_OUTPUT_DIR"
else
OUTPUT_BASE_DIR="$PROJECT_ROOT/2fa-setup-$timestamp"
fi
# Create directories
mkdir -p "$OUTPUT_BASE_DIR"
QR_CODES_DIR="$OUTPUT_BASE_DIR/qr-codes"
mkdir -p "$QR_CODES_DIR"
# Set up output files
SETUP_REPORT="$OUTPUT_BASE_DIR/setup-report.txt"
CONSOLE_COMMANDS="$OUTPUT_BASE_DIR/console-commands.txt"
SECRETS_BACKUP="$OUTPUT_BASE_DIR/secrets-backup.csv"
# Initialize files
echo "# AzerothCore 2FA Console Commands" > "$CONSOLE_COMMANDS"
echo "# Generated on $(date)" >> "$CONSOLE_COMMANDS"
echo "" >> "$CONSOLE_COMMANDS"
echo "username,secret,generated_date" > "$SECRETS_BACKUP"
info "Output directory: $OUTPUT_BASE_DIR"
}
# Generate final setup report
generate_setup_report() {
local total_processed="$1"
local successful="$2"
local failed="$3"
{
echo "AzerothCore Bulk 2FA Setup Report"
echo "================================="
echo ""
echo "Generated: $(date)"
echo "Command: $0"
echo ""
echo "Summary:"
echo "--------"
echo "Total accounts processed: $total_processed"
echo "Successfully configured: $successful"
echo "Failed: $failed"
echo ""
echo "Output Files:"
echo "-------------"
echo "- QR Codes: $QR_CODES_DIR/"
echo "- Console Commands: $CONSOLE_COMMANDS"
echo "- Secrets Backup: $SECRETS_BACKUP"
echo ""
echo "Next Steps:"
echo "-----------"
echo "1. Distribute QR codes to users securely"
echo "2. Users scan QR codes with authenticator apps"
echo "3. Verify setup using console commands if needed"
echo "4. Store secrets backup securely and delete when no longer needed"
echo ""
echo "Security Notes:"
echo "--------------"
echo "- QR codes contain sensitive TOTP secrets"
echo "- Secrets backup file contains plaintext secrets"
echo "- Delete or encrypt these files after distribution"
echo "- Secrets are also stored in AzerothCore database"
} > "$SETUP_REPORT"
info "Setup report generated: $SETUP_REPORT"
}
# =============================================================================
# MAIN SCRIPT LOGIC
# =============================================================================
# Parse command line arguments
parse_arguments() {
while [[ $# -gt 0 ]]; do
case $1 in
--all)
OPT_ALL=true
shift
;;
--account)
if [ -z "$2" ]; then
fatal "Option --account requires a username argument"
fi
OPT_ACCOUNTS+=("$2")
shift 2
;;
--force)
OPT_FORCE=true
shift
;;
--output-dir)
if [ -z "$2" ]; then
fatal "Option --output-dir requires a path argument"
fi
OPT_OUTPUT_DIR="$2"
shift 2
;;
--dry-run)
OPT_DRY_RUN=true
shift
;;
--issuer)
if [ -z "$2" ]; then
fatal "Option --issuer requires a name argument"
fi
OPT_ISSUER="$2"
shift 2
;;
--format)
if [ -z "$2" ]; then
fatal "Option --format requires qr or manual"
fi
if [[ "$2" != "qr" && "$2" != "manual" ]]; then
fatal "Format must be 'qr' or 'manual'"
fi
OPT_FORMAT="$2"
shift 2
;;
--soap-user)
if [ -z "$2" ]; then
fatal "Option --soap-user requires a username argument"
fi
SOAP_USERNAME="$2"
shift 2
;;
--soap-pass)
if [ -z "$2" ]; then
fatal "Option --soap-pass requires a password argument"
fi
SOAP_PASSWORD="$2"
shift 2
;;
-h|--help)
show_usage
exit 0
;;
*)
fatal "Unknown option: $1"
;;
esac
done
}
# Main execution function
main() {
local accounts_to_process=()
local total_processed=0
local successful=0
local failed=0
# Show help if no arguments were provided
if [ $# -eq 0 ]; then
show_usage
exit 1
fi
# Parse arguments
parse_arguments "$@"
# Validate options
if [ "$OPT_ALL" = "false" ] && [ ${#OPT_ACCOUNTS[@]} -eq 0 ]; then
fatal "Must specify either --all or --account USERNAME"
fi
if [ "$OPT_ALL" = "true" ] && [ ${#OPT_ACCOUNTS[@]} -gt 0 ]; then
fatal "Cannot use --all with specific --account options"
fi
# Load environment variables
MYSQL_PASSWORD=$(read_env "MYSQL_ROOT_PASSWORD" "")
if [ -z "$MYSQL_PASSWORD" ]; then
fatal "MYSQL_ROOT_PASSWORD not found in environment"
fi
# Require SOAP credentials via CLI flags
if [ -z "$SOAP_USERNAME" ] || [ -z "$SOAP_PASSWORD" ]; then
fatal "SOAP credentials required. Provide --soap-user and --soap-pass."
fi
# Check container health
check_containers
# Create output structure
create_output_structure
# Determine accounts to process
if [ "$OPT_ALL" = "true" ]; then
info "Discovering accounts that need 2FA setup..."
readarray -t accounts_to_process < <(get_accounts_needing_2fa "$OPT_FORCE")
if [ ${#accounts_to_process[@]} -eq 0 ]; then
if [ "$OPT_FORCE" = "true" ]; then
warn "No accounts found in database"
else
ok "All accounts already have 2FA configured"
fi
exit 0
fi
info "Found ${#accounts_to_process[@]} accounts to process"
else
accounts_to_process=("${OPT_ACCOUNTS[@]}")
fi
# Display dry run information
if [ "$OPT_DRY_RUN" = "true" ]; then
warn "DRY RUN MODE - No changes will be made"
info "Would process the following accounts:"
for account in "${accounts_to_process[@]}"; do
echo " - $account"
done
echo ""
fi
# Process each account
info "Processing ${#accounts_to_process[@]} accounts..."
for account in "${accounts_to_process[@]}"; do
total_processed=$((total_processed + 1))
if setup_2fa_for_account "$account" "$OPT_FORCE"; then
successful=$((successful + 1))
else
failed=$((failed + 1))
fi
done
# Generate final report
if [ "$OPT_DRY_RUN" = "false" ]; then
generate_setup_report "$total_processed" "$successful" "$failed"
# Summary
echo ""
ok "Bulk 2FA setup completed"
info "Processed: $total_processed accounts"
info "Successful: $successful"
info "Failed: $failed"
info "Output directory: $OUTPUT_BASE_DIR"
if [ "$failed" -gt 0 ]; then
warn "Some accounts failed to process. Check the output for details."
exit 1
fi
else
info "Dry run completed. Use without --dry-run to execute."
if [ "$failed" -gt 0 ]; then
warn "Some accounts would fail to process."
exit 1
fi
fi
}
# Execute main function with all arguments
main "$@"

View File

@@ -24,6 +24,34 @@ STATUS_FILE="${DB_GUARD_STATUS_FILE:-/tmp/db-guard.status}"
ERROR_FILE="${DB_GUARD_ERROR_FILE:-/tmp/db-guard.error}"
MODULE_SQL_HOST_PATH="${MODULE_SQL_HOST_PATH:-/modules-sql}"
SEED_CONF_SCRIPT="${SEED_DBIMPORT_CONF_SCRIPT:-/tmp/seed-dbimport-conf.sh}"
if [ -f "$SEED_CONF_SCRIPT" ]; then
# shellcheck source=/dev/null
. "$SEED_CONF_SCRIPT"
elif ! command -v seed_dbimport_conf >/dev/null 2>&1; then
seed_dbimport_conf(){
local conf="/azerothcore/env/dist/etc/dbimport.conf"
local dist="${conf}.dist"
mkdir -p "$(dirname "$conf")"
[ -f "$conf" ] && return 0
if [ -f "$dist" ]; then
cp "$dist" "$conf"
else
warn "dbimport.conf missing and no dist available; writing minimal defaults"
cat > "$conf" <<EOF
LoginDatabaseInfo = "localhost;3306;root;root;acore_auth"
WorldDatabaseInfo = "localhost;3306;root;root;acore_world"
CharacterDatabaseInfo = "localhost;3306;root;root;acore_characters"
PlayerbotsDatabaseInfo = "localhost;3306;root;root;acore_playerbots"
EnableDatabases = 15
Updates.AutoSetup = 1
MySQLExecutable = "/usr/bin/mysql"
TempDir = "/azerothcore/env/dist/etc/temp"
EOF
fi
}
fi
declare -a DB_SCHEMAS=()
for var in DB_AUTH_NAME DB_WORLD_NAME DB_CHARACTERS_NAME DB_PLAYERBOTS_NAME; do
value="${!var:-}"
@@ -85,15 +113,6 @@ rehydrate(){
"$IMPORT_SCRIPT"
}
ensure_dbimport_conf(){
local conf="/azerothcore/env/dist/etc/dbimport.conf"
local dist="${conf}.dist"
if [ ! -f "$conf" ] && [ -f "$dist" ]; then
cp "$dist" "$conf"
fi
mkdir -p /azerothcore/env/dist/temp
}
sync_host_stage_files(){
local host_root="${MODULE_SQL_HOST_PATH}"
[ -d "$host_root" ] || return 0
@@ -110,7 +129,7 @@ sync_host_stage_files(){
dbimport_verify(){
local bin_dir="/azerothcore/env/dist/bin"
ensure_dbimport_conf
seed_dbimport_conf
sync_host_stage_files
if [ ! -x "${bin_dir}/dbimport" ]; then
warn "dbimport binary not found at ${bin_dir}/dbimport"

View File

@@ -32,6 +32,22 @@ SHOW_PENDING=0
SHOW_MODULES=1
CONTAINER_NAME="ac-mysql"
resolve_path(){
local base="$1" path="$2"
if command -v python3 >/dev/null 2>&1; then
python3 - "$base" "$path" <<'PY'
import os, sys
base, path = sys.argv[1:3]
if os.path.isabs(path):
print(os.path.normpath(path))
else:
print(os.path.normpath(os.path.join(base, path)))
PY
else
(cd "$base" && realpath -m "$path")
fi
}
usage() {
cat <<'EOF'
Usage: ./db-health-check.sh [options]
@@ -73,6 +89,10 @@ if [ -f "$PROJECT_ROOT/.env" ]; then
set +a
fi
BACKUP_PATH_RAW="${BACKUP_PATH:-${STORAGE_PATH:-./storage}/backups}"
BACKUP_PATH="$(resolve_path "$PROJECT_ROOT" "$BACKUP_PATH_RAW")"
CONTAINER_NAME="${CONTAINER_MYSQL:-$CONTAINER_NAME}"
MYSQL_HOST="${MYSQL_HOST:-ac-mysql}"
MYSQL_PORT="${MYSQL_PORT:-3306}"
MYSQL_USER="${MYSQL_USER:-root}"
@@ -263,7 +283,7 @@ show_module_updates() {
# Get backup information
get_backup_info() {
local backup_dir="$PROJECT_ROOT/storage/backups"
local backup_dir="$BACKUP_PATH"
if [ ! -d "$backup_dir" ]; then
printf " ${ICON_INFO} No backups directory found\n"

View File

@@ -81,15 +81,6 @@ wait_for_mysql(){
return 1
}
ensure_dbimport_conf(){
local conf="/azerothcore/env/dist/etc/dbimport.conf"
local dist="${conf}.dist"
if [ ! -f "$conf" ] && [ -f "$dist" ]; then
cp "$dist" "$conf"
fi
mkdir -p /azerothcore/env/dist/temp
}
case "${1:-}" in
-h|--help)
print_help
@@ -106,6 +97,34 @@ esac
echo "🔧 Conditional AzerothCore Database Import"
echo "========================================"
SEED_CONF_SCRIPT="${SEED_DBIMPORT_CONF_SCRIPT:-/tmp/seed-dbimport-conf.sh}"
if [ -f "$SEED_CONF_SCRIPT" ]; then
# shellcheck source=/dev/null
. "$SEED_CONF_SCRIPT"
elif ! command -v seed_dbimport_conf >/dev/null 2>&1; then
seed_dbimport_conf(){
local conf="/azerothcore/env/dist/etc/dbimport.conf"
local dist="${conf}.dist"
mkdir -p "$(dirname "$conf")"
[ -f "$conf" ] && return 0
if [ -f "$dist" ]; then
cp "$dist" "$conf"
else
echo "⚠️ dbimport.conf missing and no dist available; using localhost defaults" >&2
cat > "$conf" <<EOF
LoginDatabaseInfo = "localhost;3306;root;root;acore_auth"
WorldDatabaseInfo = "localhost;3306;root;root;acore_world"
CharacterDatabaseInfo = "localhost;3306;root;root;acore_characters"
PlayerbotsDatabaseInfo = "localhost;3306;root;root;acore_playerbots"
EnableDatabases = 15
Updates.AutoSetup = 1
MySQLExecutable = "/usr/bin/mysql"
TempDir = "/azerothcore/env/dist/etc/temp"
EOF
fi
}
fi
if ! wait_for_mysql; then
echo "❌ MySQL service is unavailable; aborting database import"
exit 1
@@ -158,6 +177,8 @@ echo "🔧 Starting database import process..."
echo "🔍 Checking for backups to restore..."
# Allow tolerant scanning; re-enable -e after search.
set +e
# Define backup search paths in priority order
BACKUP_SEARCH_PATHS=(
"/backups"
@@ -198,10 +219,12 @@ if [ -z "$backup_path" ]; then
echo "📦 Latest daily backup found: $latest_daily"
for backup_file in "$BACKUP_DIRS/daily/$latest_daily"/*.sql.gz; do
if [ -f "$backup_file" ] && [ -s "$backup_file" ]; then
if timeout 10 zcat "$backup_file" 2>/dev/null | head -20 | grep -q "CREATE DATABASE\|INSERT INTO\|CREATE TABLE"; then
if timeout 10 gzip -t "$backup_file" >/dev/null 2>&1; then
echo "✅ Valid daily backup file: $(basename "$backup_file")"
backup_path="$BACKUP_DIRS/daily/$latest_daily"
break 2
else
echo "⚠️ gzip validation failed for $(basename "$backup_file")"
fi
fi
done
@@ -216,10 +239,12 @@ if [ -z "$backup_path" ]; then
echo "📦 Latest hourly backup found: $latest_hourly"
for backup_file in "$BACKUP_DIRS/hourly/$latest_hourly"/*.sql.gz; do
if [ -f "$backup_file" ] && [ -s "$backup_file" ]; then
if timeout 10 zcat "$backup_file" >/dev/null 2>&1; then
if timeout 10 gzip -t "$backup_file" >/dev/null 2>&1; then
echo "✅ Valid hourly backup file: $(basename "$backup_file")"
backup_path="$BACKUP_DIRS/hourly/$latest_hourly"
break 2
else
echo "⚠️ gzip validation failed for $(basename "$backup_file")"
fi
fi
done
@@ -238,10 +263,12 @@ if [ -z "$backup_path" ]; then
echo "🔍 Validating timestamped backup content..."
for backup_file in "$BACKUP_DIRS/$latest_timestamped"/*.sql.gz; do
if [ -f "$backup_file" ] && [ -s "$backup_file" ]; then
if timeout 10 zcat "$backup_file" >/dev/null 2>&1; then
if timeout 10 gzip -t "$backup_file" >/dev/null 2>&1; then
echo "✅ Valid timestamped backup found: $(basename "$backup_file")"
backup_path="$BACKUP_DIRS/$latest_timestamped"
break 2
else
echo "⚠️ gzip validation failed for $(basename "$backup_file")"
fi
fi
done
@@ -253,13 +280,16 @@ if [ -z "$backup_path" ]; then
# Check for manual backups (*.sql files)
if [ -z "$backup_path" ]; then
echo "🔍 Checking for manual backup files..."
latest_manual=$(ls -1t "$BACKUP_DIRS"/*.sql 2>/dev/null | head -n 1)
if [ -n "$latest_manual" ] && [ -f "$latest_manual" ]; then
echo "📦 Found manual backup: $(basename "$latest_manual")"
if timeout 10 head -20 "$latest_manual" >/dev/null 2>&1; then
echo "✅ Valid manual backup file: $(basename "$latest_manual")"
backup_path="$latest_manual"
break
latest_manual=""
if ls "$BACKUP_DIRS"/*.sql >/dev/null 2>&1; then
latest_manual=$(ls -1t "$BACKUP_DIRS"/*.sql | head -n 1)
if [ -n "$latest_manual" ] && [ -f "$latest_manual" ]; then
echo "📦 Found manual backup: $(basename "$latest_manual")"
if timeout 10 head -20 "$latest_manual" >/dev/null 2>&1; then
echo "✅ Valid manual backup file: $(basename "$latest_manual")"
backup_path="$latest_manual"
break
fi
fi
fi
fi
@@ -272,6 +302,7 @@ if [ -z "$backup_path" ]; then
done
fi
set -e
echo "🔄 Final backup path result: '$backup_path'"
if [ -n "$backup_path" ]; then
echo "📦 Found backup: $(basename "$backup_path")"
@@ -357,7 +388,7 @@ if [ -n "$backup_path" ]; then
return 0
fi
ensure_dbimport_conf
seed_dbimport_conf
cd /azerothcore/env/dist/bin
echo "🔄 Running dbimport to apply any missing updates..."
@@ -424,23 +455,73 @@ fi
echo "🗄️ Creating fresh AzerothCore databases..."
mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "
CREATE DATABASE IF NOT EXISTS ${DB_AUTH_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS ${DB_WORLD_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS ${DB_CHARACTERS_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS acore_playerbots DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
DROP DATABASE IF EXISTS ${DB_AUTH_NAME};
DROP DATABASE IF EXISTS ${DB_WORLD_NAME};
DROP DATABASE IF EXISTS ${DB_CHARACTERS_NAME};
DROP DATABASE IF EXISTS ${DB_PLAYERBOTS_NAME:-acore_playerbots};
CREATE DATABASE ${DB_AUTH_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_WORLD_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_CHARACTERS_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE ${DB_PLAYERBOTS_NAME:-acore_playerbots} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
SHOW DATABASES;" || { echo "❌ Failed to create databases"; exit 1; }
echo "✅ Fresh databases created - proceeding with schema import"
ensure_dbimport_conf
echo "🚀 Running database import..."
cd /azerothcore/env/dist/bin
seed_dbimport_conf
maybe_run_base_import(){
local mysql_host="${CONTAINER_MYSQL:-ac-mysql}"
local mysql_port="${MYSQL_PORT:-3306}"
local mysql_user="${MYSQL_USER:-root}"
local mysql_pass="${MYSQL_ROOT_PASSWORD:-root}"
import_dir(){
local db="$1" dir="$2"
[ -d "$dir" ] || return 0
echo "🔧 Importing base schema for ${db} from $(basename "$dir")..."
for f in $(ls "$dir"/*.sql 2>/dev/null | LC_ALL=C sort); do
MYSQL_PWD="$mysql_pass" mysql -h "$mysql_host" -P "$mysql_port" -u "$mysql_user" "$db" < "$f" >/dev/null 2>&1 || true
done
}
needs_import(){
local db="$1"
local count
count="$(MYSQL_PWD="$mysql_pass" mysql -h "$mysql_host" -P "$mysql_port" -u "$mysql_user" -N -B -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${db}';" 2>/dev/null || echo 0)"
[ "${count:-0}" -eq 0 ] && return 0
local updates
updates="$(MYSQL_PWD="$mysql_pass" mysql -h "$mysql_host" -P "$mysql_port" -u "$mysql_user" -N -B -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='${db}' AND table_name='updates';" 2>/dev/null || echo 0)"
[ "${updates:-0}" -eq 0 ]
}
if needs_import "${DB_WORLD_NAME:-acore_world}"; then
import_dir "${DB_WORLD_NAME:-acore_world}" "/azerothcore/data/sql/base/db_world"
fi
if needs_import "${DB_AUTH_NAME:-acore_auth}"; then
import_dir "${DB_AUTH_NAME:-acore_auth}" "/azerothcore/data/sql/base/db_auth"
fi
if needs_import "${DB_CHARACTERS_NAME:-acore_characters}"; then
import_dir "${DB_CHARACTERS_NAME:-acore_characters}" "/azerothcore/data/sql/base/db_characters"
fi
}
maybe_run_base_import
if ./dbimport; then
echo "✅ Database import completed successfully!"
echo "$(date): Database import completed successfully" > "$RESTORE_STATUS_DIR/.import-completed" || echo "$(date): Database import completed successfully" > "$MARKER_STATUS_DIR/.import-completed"
import_marker_msg="$(date): Database import completed successfully"
if [ -w "$RESTORE_STATUS_DIR" ]; then
echo "$import_marker_msg" > "$RESTORE_STATUS_DIR/.import-completed"
elif [ -w "$MARKER_STATUS_DIR" ]; then
echo "$import_marker_msg" > "$MARKER_STATUS_DIR/.import-completed" 2>/dev/null || true
fi
else
echo "❌ Database import failed!"
if [ -w "$RESTORE_STATUS_DIR" ]; then
echo "$(date): Database import failed" > "$RESTORE_STATUS_DIR/.import-failed"
elif [ -w "$MARKER_STATUS_DIR" ]; then
echo "$(date): Database import failed" > "$MARKER_STATUS_DIR/.import-failed" 2>/dev/null || true
fi
exit 1
fi

scripts/bash/generate-2fa-qr.py Executable file

@@ -0,0 +1,116 @@
#!/usr/bin/env python3
"""
AzerothCore 2FA QR Code Generator (Python version)
Generates TOTP secrets and QR codes for AzerothCore accounts
"""
import argparse
import base64
import os
import sys
import re
def validate_base32(secret):
"""Validate Base32 secret format"""
if not re.match(r'^[A-Z2-7]+$', secret):
print("Error: Invalid Base32 secret. Only A-Z and 2-7 characters allowed.", file=sys.stderr)
return False
if len(secret) != 16:
print(f"Error: AzerothCore SOAP requires a 16-character Base32 secret (got {len(secret)}).", file=sys.stderr)
return False
return True
def generate_secret():
"""Generate a random 16-character Base32 secret (AzerothCore SOAP requirement)"""
secret_bytes = os.urandom(10)
secret_b32 = base64.b32encode(secret_bytes).decode('ascii').rstrip('=')
return secret_b32[:16]
def generate_qr_code(uri, output_path):
"""Generate QR code using available library"""
try:
import qrcode
qr = qrcode.QRCode(
version=1,
error_correction=qrcode.constants.ERROR_CORRECT_L,
box_size=6,
border=4,
)
qr.add_data(uri)
qr.make(fit=True)
img = qr.make_image(fill_color="black", back_color="white")
img.save(output_path)
return True
except ImportError:
print("Error: qrcode library not installed.", file=sys.stderr)
print("Install it with: pip3 install qrcode[pil]", file=sys.stderr)
return False
def main():
parser = argparse.ArgumentParser(
description="Generate TOTP secrets and QR codes for AzerothCore 2FA",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
%(prog)s -u john_doe
%(prog)s -u john_doe -o /tmp/qr.png
%(prog)s -u john_doe -s JBSWY3DPEHPK3PXP -i MyServer
"""
)
parser.add_argument('-u', '--username', required=True,
help='Target username for 2FA setup')
parser.add_argument('-o', '--output',
help='Path to save QR code image (default: ./USERNAME_2fa_qr.png)')
parser.add_argument('-s', '--secret',
help='Use existing 16-character Base32 secret (generates random if not provided)')
parser.add_argument('-i', '--issuer', default='AzerothCore',
help='Issuer name for the TOTP entry (default: AzerothCore)')
args = parser.parse_args()
# Set default output path
if not args.output:
args.output = f"./{args.username}_2fa_qr.png"
# Generate or validate secret
if args.secret:
print("Using provided secret...")
if not validate_base32(args.secret):
sys.exit(1)
secret = args.secret
else:
print("Generating new TOTP secret...")
secret = generate_secret()
print(f"Generated secret: {secret}")
# Create TOTP URI
uri = f"otpauth://totp/{args.issuer}:{args.username}?secret={secret}&issuer={args.issuer}"
# Generate QR code
print("Generating QR code...")
if generate_qr_code(uri, args.output):
print(f"✓ QR code generated successfully: {args.output}")
else:
print("\nManual setup information:")
print(f"Secret: {secret}")
print(f"URI: {uri}")
sys.exit(1)
# Display setup information
print("\n=== AzerothCore 2FA Setup Information ===")
print(f"Username: {args.username}")
print(f"Secret: {secret}")
print(f"QR Code: {args.output}")
print(f"Issuer: {args.issuer}")
print("\nNext steps:")
print("1. Share the QR code image with the user")
print("2. User scans QR code with authenticator app")
print("3. Run on AzerothCore console:")
print(f" account set 2fa {args.username} {secret}")
print("4. User can now use 6-digit codes for login")
print("\nSecurity Note: Keep the secret secure and delete the QR code after setup.")
if __name__ == "__main__":
main()

scripts/bash/generate-2fa-qr.sh Executable file

@@ -0,0 +1,166 @@
#!/bin/bash
# AzerothCore 2FA QR Code Generator
# Generates TOTP secrets and QR codes for AzerothCore accounts
set -e
# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to display usage
show_usage() {
echo "Usage: $0 -u USERNAME [-o OUTPUT_PATH] [-s SECRET] [-i ISSUER]"
echo ""
echo "Options:"
echo " -u USERNAME Target username for 2FA setup (required)"
echo " -o OUTPUT_PATH Path to save QR code image (default: ./USERNAME_2fa_qr.png)"
echo " -s SECRET Use existing 16-character Base32 secret (generates random if not provided)"
echo " -i ISSUER Issuer name for the TOTP entry (default: AzerothCore)"
echo " -h Show this help message"
echo ""
echo "Examples:"
echo " $0 -u john_doe"
echo " $0 -u john_doe -o /tmp/qr.png"
echo " $0 -u john_doe -s JBSWY3DPEHPK3PXP -i MyServer"
}
# Function to validate Base32
validate_base32() {
local secret="$1"
if [[ ! "$secret" =~ ^[A-Z2-7]+$ ]]; then
echo -e "${RED}Error: Invalid Base32 secret. Only A-Z and 2-7 characters allowed.${NC}" >&2
return 1
fi
if [ ${#secret} -ne 16 ]; then
echo -e "${RED}Error: AzerothCore SOAP requires a 16-character Base32 secret (got ${#secret}).${NC}" >&2
return 1
fi
}
# Function to generate Base32 secret
generate_secret() {
# Generate 10 random bytes and encode as 16-character Base32 (AzerothCore SOAP requirement)
if command -v base32 >/dev/null 2>&1; then
openssl rand 10 | base32 -w0 | head -c16
else
# Fallback using Python if base32 command not available
python3 -c "
import base64
import os
secret_bytes = os.urandom(10)
secret_b32 = base64.b32encode(secret_bytes).decode('ascii').rstrip('=')
print(secret_b32[:16])
"
fi
}
# Default values
USERNAME=""
OUTPUT_PATH=""
SECRET=""
ISSUER="AzerothCore"
# Parse command line arguments
# Leading ':' selects silent error handling so the \? and : cases receive $OPTARG
while getopts ":u:o:s:i:h" opt; do
case ${opt} in
u )
USERNAME="$OPTARG"
;;
o )
OUTPUT_PATH="$OPTARG"
;;
s )
SECRET="$OPTARG"
;;
i )
ISSUER="$OPTARG"
;;
h )
show_usage
exit 0
;;
\? )
echo -e "${RED}Invalid option: $OPTARG${NC}" 1>&2
show_usage
exit 1
;;
: )
echo -e "${RED}Invalid option: $OPTARG requires an argument${NC}" 1>&2
show_usage
exit 1
;;
esac
done
# Validate required parameters
if [ -z "$USERNAME" ]; then
echo -e "${RED}Error: Username is required.${NC}" >&2
show_usage
exit 1
fi
# Set default output path if not provided
if [ -z "$OUTPUT_PATH" ]; then
OUTPUT_PATH="./${USERNAME}_2fa_qr.png"
fi
# Generate secret if not provided
if [ -z "$SECRET" ]; then
echo -e "${BLUE}Generating new TOTP secret...${NC}"
SECRET=$(generate_secret)
if [ -z "$SECRET" ]; then
echo -e "${RED}Error: Failed to generate secret.${NC}" >&2
exit 1
fi
echo -e "${GREEN}Generated secret: $SECRET${NC}"
else
echo -e "${BLUE}Using provided secret...${NC}"
if ! validate_base32 "$SECRET"; then
exit 1
fi
fi
# Create TOTP URI
URI="otpauth://totp/${ISSUER}:${USERNAME}?secret=${SECRET}&issuer=${ISSUER}"
# Check if qrencode is available
if ! command -v qrencode >/dev/null 2>&1; then
echo -e "${RED}Error: qrencode is not installed.${NC}" >&2
echo "Install it with: sudo apt-get install qrencode (Ubuntu/Debian) or brew install qrencode (macOS)"
echo ""
echo -e "${BLUE}Manual setup information:${NC}"
echo "Secret: $SECRET"
echo "URI: $URI"
exit 1
fi
# Generate QR code
echo -e "${BLUE}Generating QR code...${NC}"
# printf avoids embedding a trailing newline in the QR payload
if printf '%s' "$URI" | qrencode -s 6 -o "$OUTPUT_PATH"; then
echo -e "${GREEN}✓ QR code generated successfully: $OUTPUT_PATH${NC}"
else
echo -e "${RED}Error: Failed to generate QR code.${NC}" >&2
exit 1
fi
# Display setup information
echo ""
echo -e "${YELLOW}=== AzerothCore 2FA Setup Information ===${NC}"
echo "Username: $USERNAME"
echo "Secret: $SECRET"
echo "QR Code: $OUTPUT_PATH"
echo "Issuer: $ISSUER"
echo ""
echo -e "${BLUE}Next steps:${NC}"
echo "1. Share the QR code image with the user"
echo "2. User scans QR code with authenticator app"
echo "3. Run on AzerothCore console:"
echo -e " ${GREEN}account set 2fa $USERNAME $SECRET${NC}"
echo "4. User can now use 6-digit codes for login"
echo ""
echo -e "${YELLOW}Security Note: Keep the secret secure and delete the QR code after setup.${NC}"

scripts/bash/import-pdumps.sh Executable file

@@ -0,0 +1,283 @@
#!/bin/bash
# Process and import character pdump files from import/pdumps/ directory
set -euo pipefail
INVOCATION_DIR="$PWD"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR/../.." # Go to project root
COLOR_RED='\033[0;31m'
COLOR_GREEN='\033[0;32m'
COLOR_YELLOW='\033[1;33m'
COLOR_BLUE='\033[0;34m'
COLOR_RESET='\033[0m'
log(){ printf '%b\n' "${COLOR_GREEN}$*${COLOR_RESET}"; }
warn(){ printf '%b\n' "${COLOR_YELLOW}$*${COLOR_RESET}"; }
err(){ printf '%b\n' "${COLOR_RED}$*${COLOR_RESET}"; }
info(){ printf '%b\n' "${COLOR_BLUE}$*${COLOR_RESET}"; }
fatal(){ err "$*"; exit 1; }
# Source environment variables
if [ -f ".env" ]; then
set -a
source .env
set +a
fi
IMPORT_DIR="./import/pdumps"
MYSQL_PW="${MYSQL_ROOT_PASSWORD:-}"
AUTH_DB="${ACORE_DB_AUTH_NAME:-acore_auth}"
CHARACTERS_DB="${ACORE_DB_CHARACTERS_NAME:-acore_characters}"
DEFAULT_ACCOUNT="${DEFAULT_IMPORT_ACCOUNT:-}"
INTERACTIVE=${INTERACTIVE:-true}
usage(){
cat <<'EOF'
Usage: ./import-pdumps.sh [options]
Automatically process and import all character pdump files from import/pdumps/ directory.
Options:
--password PASS MySQL root password (overrides env)
--account ACCOUNT Default account for imports (overrides env)
--auth-db NAME Auth database name (overrides env)
--characters-db NAME Characters database name (overrides env)
--non-interactive Don't prompt for missing information
-h, --help Show this help and exit
Directory Structure:
import/pdumps/
├── character1.pdump # Will be imported with default settings
├── character2.sql # SQL dump files also supported
└── configs/ # Optional: per-file configuration
├── character1.conf # account=testuser, name=NewName
└── character2.conf # account=12345, guid=5000
Configuration File Format (.conf):
account=target_account_name_or_id
name=new_character_name # Optional: rename character
guid=force_specific_guid # Optional: force GUID
Environment Variables:
MYSQL_ROOT_PASSWORD # MySQL root password
DEFAULT_IMPORT_ACCOUNT # Default account for imports
ACORE_DB_AUTH_NAME # Auth database name
ACORE_DB_CHARACTERS_NAME # Characters database name
Examples:
# Import all pdumps with environment settings
./import-pdumps.sh
# Import with specific password and account
./import-pdumps.sh --password mypass --account testuser
EOF
}
check_dependencies(){
if ! docker ps >/dev/null 2>&1; then
fatal "Docker is not running or accessible"
fi
if ! docker exec ac-mysql mysql --version >/dev/null 2>&1; then
fatal "MySQL container (ac-mysql) is not running or accessible"
fi
}
parse_config_file(){
local config_file="$1"
local -A config=()
if [[ -f "$config_file" ]]; then
while IFS='=' read -r key value; do
# Skip comments and empty lines
[[ "$key" =~ ^[[:space:]]*# ]] && continue
[[ -z "$key" ]] && continue
# Remove leading/trailing whitespace
key=$(echo "$key" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
value=$(echo "$value" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
config["$key"]="$value"
done < "$config_file"
fi
# Export as variables for the calling function
export CONFIG_ACCOUNT="${config[account]:-}"
export CONFIG_NAME="${config[name]:-}"
export CONFIG_GUID="${config[guid]:-}"
}
prompt_for_account(){
local filename="$1"
if [[ "$INTERACTIVE" != "true" ]]; then
fatal "No account specified for $filename and running in non-interactive mode"
fi
# Send prompts to stderr so the command substitution only captures the account value
echo "" >&2
warn "No account specified for: $filename" >&2
echo "Available options:" >&2
echo " 1. Provide account name or ID" >&2
echo " 2. Skip this file" >&2
echo "" >&2
while true; do
read -p "Enter account name/ID (or 'skip'): " account_input
case "$account_input" in
skip|Skip|SKIP)
return 1
;;
"")
warn "Please enter an account name/ID or 'skip'" >&2
continue
;;
*)
echo "$account_input"
return 0
;;
esac
done
}
process_pdump_file(){
local pdump_file="$1"
local filename
filename=$(basename "$pdump_file")
local config_file="$IMPORT_DIR/configs/${filename%.*}.conf"
info "Processing: $filename"
# Parse configuration file if it exists
parse_config_file "$config_file"
# Determine account
local target_account="${CONFIG_ACCOUNT:-$DEFAULT_ACCOUNT}"
if [[ -z "$target_account" ]]; then
if ! target_account=$(prompt_for_account "$filename"); then
warn "Skipping $filename (no account provided)"
return 0
fi
fi
# Build command arguments
local cmd_args=(
--file "$pdump_file"
--account "$target_account"
--password "$MYSQL_PW"
--auth-db "$AUTH_DB"
--characters-db "$CHARACTERS_DB"
)
# Add optional parameters if specified in config
[[ -n "$CONFIG_NAME" ]] && cmd_args+=(--name "$CONFIG_NAME")
[[ -n "$CONFIG_GUID" ]] && cmd_args+=(--guid "$CONFIG_GUID")
log "Importing $filename to account $target_account"
[[ -n "$CONFIG_NAME" ]] && log " Character name: $CONFIG_NAME"
[[ -n "$CONFIG_GUID" ]] && log " Forced GUID: $CONFIG_GUID"
# Execute the import
if "./scripts/bash/pdump-import.sh" "${cmd_args[@]}"; then
log "✅ Successfully imported: $filename"
# Move processed file to processed/ subdirectory
local processed_dir="$IMPORT_DIR/processed"
mkdir -p "$processed_dir"
mv "$pdump_file" "$processed_dir/"
[[ -f "$config_file" ]] && mv "$config_file" "$processed_dir/"
else
err "❌ Failed to import: $filename"
return 1
fi
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case "$1" in
--password)
[[ $# -ge 2 ]] || fatal "--password requires a value"
MYSQL_PW="$2"
shift 2
;;
--account)
[[ $# -ge 2 ]] || fatal "--account requires a value"
DEFAULT_ACCOUNT="$2"
shift 2
;;
--auth-db)
[[ $# -ge 2 ]] || fatal "--auth-db requires a value"
AUTH_DB="$2"
shift 2
;;
--characters-db)
[[ $# -ge 2 ]] || fatal "--characters-db requires a value"
CHARACTERS_DB="$2"
shift 2
;;
--non-interactive)
INTERACTIVE=false
shift
;;
-h|--help)
usage
exit 0
;;
*)
fatal "Unknown option: $1"
;;
esac
done
# Validate required parameters
[[ -n "$MYSQL_PW" ]] || fatal "MySQL password required (use --password or set MYSQL_ROOT_PASSWORD)"
# Check dependencies
check_dependencies
# Check if import directory exists and has files
if [[ ! -d "$IMPORT_DIR" ]]; then
info "Import directory doesn't exist: $IMPORT_DIR"
info "Create the directory and place your .pdump or .sql files there."
exit 0
fi
# Find pdump files
shopt -s nullglob
pdump_files=("$IMPORT_DIR"/*.pdump "$IMPORT_DIR"/*.sql)
shopt -u nullglob
if [[ ${#pdump_files[@]} -eq 0 ]]; then
info "No pdump files found in $IMPORT_DIR"
info "Place your .pdump or .sql files in this directory to import them."
exit 0
fi
log "Found ${#pdump_files[@]} pdump file(s) to process"
# Create configs directory if it doesn't exist
mkdir -p "$IMPORT_DIR/configs"
# Process each file
processed=0
failed=0
for pdump_file in "${pdump_files[@]}"; do
if process_pdump_file "$pdump_file"; then
processed=$((processed + 1)) # ((processed++)) returns 1 under set -e when the count is 0
else
failed=$((failed + 1))
fi
done
echo ""
log "Import summary:"
log " ✅ Processed: $processed"
[[ $failed -gt 0 ]] && err " ❌ Failed: $failed"
if [[ $processed -gt 0 ]]; then
log ""
log "Character imports completed! Processed files moved to $IMPORT_DIR/processed/"
log "You can now log in and access your imported characters."
fi
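As a concrete illustration of the layout the script scans, here is a hypothetical staging step (the character `Thrall` and account `testuser` are invented; comments must sit on their own lines, since `parse_config_file` keeps everything after the first `=` verbatim):

```shell
# Stage a per-file config; the .conf basename must match the dump's basename
mkdir -p import/pdumps/configs
cat > import/pdumps/configs/Thrall.conf <<'EOF'
# target account name or numeric id
account=testuser
# optional: rename the character on import
name=Thrallbert
EOF
```

With `Thrall.pdump` placed alongside, `./import-pdumps.sh --non-interactive` would then import it into `testuser` without prompting.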


@@ -141,6 +141,10 @@ run_post_install_hooks(){
export MODULES_ROOT="${MODULES_ROOT:-/modules}"
export LUA_SCRIPTS_TARGET="/azerothcore/lua_scripts"
# Pass build environment variables to hooks
export STACK_SOURCE_VARIANT="${STACK_SOURCE_VARIANT:-}"
export MODULES_REBUILD_SOURCE_PATH="${MODULES_REBUILD_SOURCE_PATH:-}"
# Execute the hook script
if "$hook_script"; then
ok "Hook '$hook' completed successfully"
@@ -174,7 +178,18 @@ install_enabled_modules(){
continue
fi
if [ -d "$dir/.git" ]; then
info "$dir already present; skipping clone"
info "$dir already present; checking for updates"
(cd "$dir" && git fetch origin >/dev/null 2>&1 || warn "Failed to fetch updates for $dir")
local current_branch
current_branch=$(cd "$dir" && git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "master")
local pull_output
if pull_output=$(cd "$dir" && git pull origin "$current_branch" 2>&1); then
if printf '%s' "$pull_output" | grep -q "Already up to date"; then
info "$dir is already up to date"
else
ok "$dir updated from remote"
fi
else
warn "Failed to pull updates for $dir"
fi
if [ -n "$ref" ]; then
(cd "$dir" && git checkout "$ref") || warn "Unable to checkout ref $ref for $dir"
fi
elif [ -d "$dir" ]; then
warn "$dir exists but is not a git repository; leaving in place"
else
@@ -467,6 +482,7 @@ load_sql_helper(){
# Module SQL is now staged at runtime by stage-modules.sh which copies files to
# /azerothcore/data/sql/updates/ (core directory) where they ARE scanned and processed.
track_module_state(){
echo 'Checking for module changes that require rebuild...'


@@ -1,7 +1,7 @@
#!/bin/bash
# Utility to migrate module images (and optionally storage) to a remote host.
# Assumes module images have already been rebuilt locally.
# Utility to migrate deployment images (and optionally storage) to a remote host.
# Assumes your runtime images have already been built or pulled locally.
set -euo pipefail
@@ -41,6 +41,74 @@ resolve_project_image(){
echo "${project_name}:${tag}"
}
declare -a DEPLOY_IMAGE_REFS=()
declare -a CLEANUP_IMAGE_REFS=()
declare -A DEPLOY_IMAGE_SET=()
declare -A CLEANUP_IMAGE_SET=()
add_deploy_image_ref(){
local image="$1"
[ -z "$image" ] && return
if [[ -z "${DEPLOY_IMAGE_SET[$image]:-}" ]]; then
DEPLOY_IMAGE_SET["$image"]=1
DEPLOY_IMAGE_REFS+=("$image")
fi
add_cleanup_image_ref "$image"
}
add_cleanup_image_ref(){
local image="$1"
[ -z "$image" ] && return
if [[ -z "${CLEANUP_IMAGE_SET[$image]:-}" ]]; then
CLEANUP_IMAGE_SET["$image"]=1
CLEANUP_IMAGE_REFS+=("$image")
fi
}
collect_deploy_image_refs(){
local auth_modules world_modules auth_playerbots world_playerbots db_import client_data bots_client_data
local auth_standard world_standard client_data_standard
auth_modules="$(read_env_value AC_AUTHSERVER_IMAGE_MODULES "$(resolve_project_image "authserver-modules-latest")")"
world_modules="$(read_env_value AC_WORLDSERVER_IMAGE_MODULES "$(resolve_project_image "worldserver-modules-latest")")"
auth_playerbots="$(read_env_value AC_AUTHSERVER_IMAGE_PLAYERBOTS "$(resolve_project_image "authserver-playerbots")")"
world_playerbots="$(read_env_value AC_WORLDSERVER_IMAGE_PLAYERBOTS "$(resolve_project_image "worldserver-playerbots")")"
db_import="$(read_env_value AC_DB_IMPORT_IMAGE "$(resolve_project_image "db-import-playerbots")")"
client_data="$(read_env_value AC_CLIENT_DATA_IMAGE_PLAYERBOTS "$(resolve_project_image "client-data-playerbots")")"
auth_standard="$(read_env_value AC_AUTHSERVER_IMAGE "acore/ac-wotlk-authserver:master")"
world_standard="$(read_env_value AC_WORLDSERVER_IMAGE "acore/ac-wotlk-worldserver:master")"
client_data_standard="$(read_env_value AC_CLIENT_DATA_IMAGE "acore/ac-wotlk-client-data:master")"
local refs=(
"$auth_modules"
"$world_modules"
"$auth_playerbots"
"$world_playerbots"
"$db_import"
"$client_data"
"$auth_standard"
"$world_standard"
"$client_data_standard"
)
for ref in "${refs[@]}"; do
add_deploy_image_ref "$ref"
done
# Include default project-tagged images for cleanup even if env moved to custom tags
local fallback_refs=(
"$(resolve_project_image "authserver-modules-latest")"
"$(resolve_project_image "worldserver-modules-latest")"
"$(resolve_project_image "authserver-playerbots")"
"$(resolve_project_image "worldserver-playerbots")"
"$(resolve_project_image "db-import-playerbots")"
"$(resolve_project_image "client-data-playerbots")"
)
for ref in "${fallback_refs[@]}"; do
add_cleanup_image_ref "$ref"
done
}
ensure_host_writable(){
local path="$1"
[ -n "$path" ] || return 0
@@ -76,9 +144,13 @@ Options:
--port PORT SSH port (default: 22)
--identity PATH SSH private key (passed to scp/ssh)
--project-dir DIR Remote project directory (default: ~/<project-name>)
--env-file PATH Use this env file for image lookup and upload (default: ./.env)
--tarball PATH Output path for the image tar (default: ./local-storage/images/acore-modules-images.tar)
--storage PATH Remote storage directory (default: <project-dir>/storage)
--skip-storage Do not sync the storage directory
--skip-env Do not upload .env to the remote host
--preserve-containers Skip stopping/removing existing remote containers and images
--clean-containers Stop/remove existing ac-* containers and project images on remote
--copy-source Copy the full local project directory instead of syncing via git
--yes, -y Auto-confirm prompts (for existing deployments)
--help Show this help
@@ -95,6 +167,9 @@ REMOTE_STORAGE=""
SKIP_STORAGE=0
ASSUME_YES=0
COPY_SOURCE=0
SKIP_ENV=0
PRESERVE_CONTAINERS=0
CLEAN_CONTAINERS=0
while [[ $# -gt 0 ]]; do
case "$1" in
@@ -103,9 +178,13 @@ while [[ $# -gt 0 ]]; do
--port) PORT="$2"; shift 2;;
--identity) IDENTITY="$2"; shift 2;;
--project-dir) PROJECT_DIR="$2"; shift 2;;
--env-file) ENV_FILE="$2"; shift 2;;
--tarball) TARBALL="$2"; shift 2;;
--storage) REMOTE_STORAGE="$2"; shift 2;;
--skip-storage) SKIP_STORAGE=1; shift;;
--skip-env) SKIP_ENV=1; shift;;
--preserve-containers) PRESERVE_CONTAINERS=1; shift;;
--clean-containers) CLEAN_CONTAINERS=1; shift;;
--copy-source) COPY_SOURCE=1; shift;;
--yes|-y) ASSUME_YES=1; shift;;
--help|-h) usage; exit 0;;
@@ -119,6 +198,19 @@ if [[ -z "$HOST" || -z "$USER" ]]; then
exit 1
fi
if [[ "$CLEAN_CONTAINERS" -eq 1 && "$PRESERVE_CONTAINERS" -eq 1 ]]; then
echo "Cannot combine --clean-containers with --preserve-containers." >&2
exit 1
fi
# Normalize env file path if provided and recompute defaults
if [ -n "$ENV_FILE" ] && [ -f "$ENV_FILE" ]; then
ENV_FILE="$(cd "$(dirname "$ENV_FILE")" && pwd)/$(basename "$ENV_FILE")"
else
ENV_FILE="$PROJECT_ROOT/.env"
fi
DEFAULT_PROJECT_NAME="$(project_name::resolve "$ENV_FILE" "$TEMPLATE_FILE")"
expand_remote_path(){
local path="$1"
case "$path" in
@@ -145,6 +237,27 @@ ensure_host_writable "$LOCAL_STORAGE_ROOT"
TARBALL="${TARBALL:-${LOCAL_STORAGE_ROOT}/images/acore-modules-images.tar}"
ensure_host_writable "$(dirname "$TARBALL")"
# Resolve module SQL staging paths (local and remote)
resolve_path_relative_to_project(){
local path="$1" root="$2"
if [[ "$path" != /* ]]; then
# drop leading ./ if present
path="${path#./}"
path="${root%/}/$path"
fi
echo "${path%/}"
}
STAGE_SQL_PATH_RAW="$(read_env_value STAGE_PATH_MODULE_SQL "${LOCAL_STORAGE_ROOT:-./local-storage}/module-sql-updates")"
# Ensure STORAGE_PATH_LOCAL is defined to avoid set -u failures during expansion
if [ -z "${STORAGE_PATH_LOCAL:-}" ]; then
STORAGE_PATH_LOCAL="$LOCAL_STORAGE_ROOT"
fi
# Expand any env references (e.g., ${STORAGE_PATH_LOCAL})
STAGE_SQL_PATH_RAW="$(eval "echo \"$STAGE_SQL_PATH_RAW\"")"
LOCAL_STAGE_SQL_DIR="$(resolve_path_relative_to_project "$STAGE_SQL_PATH_RAW" "$PROJECT_ROOT")"
REMOTE_STAGE_SQL_DIR="$(resolve_path_relative_to_project "$STAGE_SQL_PATH_RAW" "$PROJECT_DIR")"
SCP_OPTS=(-P "$PORT")
SSH_OPTS=(-p "$PORT")
if [[ -n "$IDENTITY" ]]; then
@@ -200,14 +313,35 @@ validate_remote_environment(){
local running_containers
running_containers=$(run_ssh "docker ps --filter 'name=ac-' --format '{{.Names}}' 2>/dev/null | wc -l")
if [ "$running_containers" -gt 0 ]; then
echo "⚠️ Warning: Found $running_containers running AzerothCore containers"
echo " Migration will overwrite existing deployment"
if [ "$ASSUME_YES" != "1" ]; then
read -r -p " Continue with migration? [y/N]: " reply
case "$reply" in
[Yy]*) echo " Proceeding with migration..." ;;
*) echo " Migration cancelled."; exit 1 ;;
esac
if [ "$PRESERVE_CONTAINERS" -eq 1 ]; then
echo "⚠️ Found $running_containers running AzerothCore containers; --preserve-containers set, leaving them running."
if [ "$ASSUME_YES" != "1" ]; then
read -r -p " Continue without stopping containers? [y/N]: " reply
case "$reply" in
[Yy]*) echo " Proceeding with migration (containers preserved)..." ;;
*) echo " Migration cancelled."; exit 1 ;;
esac
fi
elif [ "$CLEAN_CONTAINERS" -eq 1 ]; then
echo "⚠️ Found $running_containers running AzerothCore containers"
echo " --clean-containers set: they will be stopped/removed during migration."
if [ "$ASSUME_YES" != "1" ]; then
read -r -p " Continue with cleanup? [y/N]: " reply
case "$reply" in
[Yy]*) echo " Proceeding with cleanup..." ;;
*) echo " Migration cancelled."; exit 1 ;;
esac
fi
else
echo "⚠️ Warning: Found $running_containers running AzerothCore containers"
echo " Migration will NOT stop them automatically. Use --clean-containers to stop/remove."
if [ "$ASSUME_YES" != "1" ]; then
read -r -p " Continue with migration? [y/N]: " reply
case "$reply" in
[Yy]*) echo " Proceeding with migration..." ;;
*) echo " Migration cancelled."; exit 1 ;;
esac
fi
fi
fi
@@ -223,6 +357,25 @@ validate_remote_environment(){
echo "✅ Remote environment validation complete"
}
confirm_remote_storage_overwrite(){
if [[ $SKIP_STORAGE -ne 0 ]]; then
return
fi
if [[ "$ASSUME_YES" = "1" ]]; then
return
fi
local has_content
has_content=$(run_ssh "if [ -d '$REMOTE_STORAGE' ]; then find '$REMOTE_STORAGE' -mindepth 1 -maxdepth 1 -print -quit; fi")
if [ -n "$has_content" ]; then
echo "⚠️ Remote storage at $REMOTE_STORAGE contains existing data."
read -r -p " Continue and sync local storage over it? [y/N]: " reply
case "${reply,,}" in
y|yes) echo " Proceeding with storage sync..." ;;
*) echo " Skipping storage sync for this run."; SKIP_STORAGE=1 ;;
esac
fi
}
copy_source_tree(){
echo " • Copying full local project directory..."
ensure_remote_temp_dir
@@ -286,27 +439,23 @@ setup_remote_repository(){
}
cleanup_stale_docker_resources(){
if [ "$PRESERVE_CONTAINERS" -eq 1 ]; then
echo "⋅ Skipping remote container/image cleanup (--preserve-containers)"
return
fi
if [ "$CLEAN_CONTAINERS" -ne 1 ]; then
echo "⋅ Skipping remote runtime cleanup (containers and images preserved)."
return
fi
echo "⋅ Cleaning up stale Docker resources on remote..."
# Get project name to target our containers/images specifically
local project_name
project_name="$(resolve_project_name)"
# Stop and remove old containers
echo " • Removing old containers..."
run_ssh "docker ps -a --filter 'name=ac-' --format '{{.Names}}' | xargs -r docker rm -f 2>/dev/null || true"
# Remove old project images to force fresh load
echo " • Removing old project images..."
local images_to_remove=(
"${project_name}:authserver-modules-latest"
"${project_name}:worldserver-modules-latest"
"${project_name}:authserver-playerbots"
"${project_name}:worldserver-playerbots"
"${project_name}:db-import-playerbots"
"${project_name}:client-data-playerbots"
)
for img in "${images_to_remove[@]}"; do
for img in "${CLEANUP_IMAGE_REFS[@]}"; do
run_ssh "docker rmi '$img' 2>/dev/null || true"
done
@@ -320,31 +469,25 @@ cleanup_stale_docker_resources(){
validate_remote_environment
echo "⋅ Exporting module images to $TARBALL"
collect_deploy_image_refs
echo "⋅ Exporting deployment images to $TARBALL"
# Ensure destination directory exists
ensure_host_writable "$(dirname "$TARBALL")"
# Check which images are available and collect them
IMAGES_TO_SAVE=()
project_auth_modules="$(resolve_project_image "authserver-modules-latest")"
project_world_modules="$(resolve_project_image "worldserver-modules-latest")"
project_auth_playerbots="$(resolve_project_image "authserver-playerbots")"
project_world_playerbots="$(resolve_project_image "worldserver-playerbots")"
project_db_import="$(resolve_project_image "db-import-playerbots")"
project_client_data="$(resolve_project_image "client-data-playerbots")"
for image in \
"$project_auth_modules" \
"$project_world_modules" \
"$project_auth_playerbots" \
"$project_world_playerbots" \
"$project_db_import" \
"$project_client_data"; do
MISSING_IMAGES=()
for image in "${DEPLOY_IMAGE_REFS[@]}"; do
if docker image inspect "$image" >/dev/null 2>&1; then
IMAGES_TO_SAVE+=("$image")
else
MISSING_IMAGES+=("$image")
fi
done
if [ ${#IMAGES_TO_SAVE[@]} -eq 0 ]; then
echo "❌ No AzerothCore images found to migrate. Run './build.sh' first or pull standard images."
echo "❌ No AzerothCore images found to migrate. Run './build.sh' first or pull the images defined in your .env."
exit 1
fi
@@ -352,6 +495,13 @@ echo "⋅ Found ${#IMAGES_TO_SAVE[@]} images to migrate:"
printf ' • %s\n' "${IMAGES_TO_SAVE[@]}"
docker image save "${IMAGES_TO_SAVE[@]}" > "$TARBALL"
if [ ${#MISSING_IMAGES[@]} -gt 0 ]; then
echo "⚠️ Skipping ${#MISSING_IMAGES[@]} images not present locally (will need to pull on remote if required):"
printf ' • %s\n' "${MISSING_IMAGES[@]}"
fi
confirm_remote_storage_overwrite
if [[ $SKIP_STORAGE -eq 0 ]]; then
if [[ -d storage ]]; then
echo "⋅ Syncing storage to remote"
@@ -387,6 +537,18 @@ if [[ $SKIP_STORAGE -eq 0 ]]; then
rm -f "$modules_tar"
run_ssh "tar -xf '$REMOTE_TEMP_DIR/acore-modules.tar' -C '$REMOTE_STORAGE/modules' && rm '$REMOTE_TEMP_DIR/acore-modules.tar'"
fi
# Sync module SQL staging directory (STAGE_PATH_MODULE_SQL)
if [[ -d "$LOCAL_STAGE_SQL_DIR" ]]; then
echo "⋅ Syncing module SQL staging to remote"
run_ssh "rm -rf '$REMOTE_STAGE_SQL_DIR' && mkdir -p '$REMOTE_STAGE_SQL_DIR'"
sql_tar=$(mktemp)
tar -cf "$sql_tar" -C "$LOCAL_STAGE_SQL_DIR" .
ensure_remote_temp_dir
run_scp "$sql_tar" "$USER@$HOST:$REMOTE_TEMP_DIR/acore-module-sql.tar"
rm -f "$sql_tar"
run_ssh "tar -xf '$REMOTE_TEMP_DIR/acore-module-sql.tar' -C '$REMOTE_STAGE_SQL_DIR' && rm '$REMOTE_TEMP_DIR/acore-module-sql.tar'"
fi
fi
reset_remote_post_install_marker(){
@@ -406,9 +568,35 @@ ensure_remote_temp_dir
run_scp "$TARBALL" "$USER@$HOST:$REMOTE_TEMP_DIR/acore-modules-images.tar"
run_ssh "docker load < '$REMOTE_TEMP_DIR/acore-modules-images.tar' && rm '$REMOTE_TEMP_DIR/acore-modules-images.tar'"
if [[ -f .env ]]; then
echo "⋅ Uploading .env"
run_scp .env "$USER@$HOST:$PROJECT_DIR/.env"
if [[ -f "$ENV_FILE" ]]; then
if [[ $SKIP_ENV -eq 1 ]]; then
echo "⋅ Skipping .env upload (--skip-env)"
else
remote_env_path="$PROJECT_DIR/.env"
upload_env=1
if run_ssh "test -f '$remote_env_path'"; then
if [ "$ASSUME_YES" = "1" ]; then
echo "⋅ Overwriting existing remote .env (auto-confirm)"
elif [ -t 0 ]; then
read -r -p "⚠️ Remote .env exists at $remote_env_path. Overwrite? [y/N]: " reply
case "$reply" in
[Yy]*) ;;
*) upload_env=0 ;;
esac
else
echo "⚠️ Remote .env exists at $remote_env_path; skipping upload (no confirmation available)"
upload_env=0
fi
fi
if [[ $upload_env -eq 1 ]]; then
echo "⋅ Uploading .env"
run_scp "$ENV_FILE" "$USER@$HOST:$remote_env_path"
else
echo "⋅ Keeping existing remote .env"
fi
fi
fi
echo "⋅ Remote preparation complete"
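The `add_deploy_image_ref`/`add_cleanup_image_ref` helpers above implement ordered deduplication: an associative array acts as a membership set while an indexed array preserves first-seen order. A standalone sketch of the pattern (bash 4+; the image names are placeholders):

```shell
#!/usr/bin/env bash
# Ordered dedup: remember only the first occurrence, keep insertion order
declare -a REFS=()
declare -A SEEN=()
add_ref() {
  local item="$1"
  [ -n "$item" ] || return 0          # ignore empty refs
  if [[ -z "${SEEN[$item]:-}" ]]; then
    SEEN["$item"]=1
    REFS+=("$item")
  fi
}
add_ref "proj:authserver"
add_ref "proj:worldserver"
add_ref "proj:authserver"             # duplicate, silently skipped
printf '%s\n' "${REFS[@]}"
```

The `printf` at the end emits each image once, in the order first added.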

scripts/bash/pdump-import.sh Executable file

@@ -0,0 +1,344 @@
#!/bin/bash
# Import character pdump files into AzerothCore database
set -euo pipefail
INVOCATION_DIR="$PWD"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"
COLOR_RED='\033[0;31m'
COLOR_GREEN='\033[0;32m'
COLOR_YELLOW='\033[1;33m'
COLOR_BLUE='\033[0;34m'
COLOR_RESET='\033[0m'
log(){ printf '%b\n' "${COLOR_GREEN}$*${COLOR_RESET}"; }
warn(){ printf '%b\n' "${COLOR_YELLOW}$*${COLOR_RESET}"; }
err(){ printf '%b\n' "${COLOR_RED}$*${COLOR_RESET}"; }
info(){ printf '%b\n' "${COLOR_BLUE}$*${COLOR_RESET}"; }
fatal(){ err "$*"; exit 1; }
MYSQL_PW=""
PDUMP_FILE=""
TARGET_ACCOUNT=""
NEW_CHARACTER_NAME=""
FORCE_GUID=""
AUTH_DB="acore_auth"
CHARACTERS_DB="acore_characters"
DRY_RUN=false
BACKUP_BEFORE=true
usage(){
cat <<'EOF'
Usage: ./pdump-import.sh [options]
Import character pdump files into AzerothCore database.
Required Options:
-f, --file FILE Pdump file to import (.pdump or .sql format)
-a, --account ACCOUNT Target account name or ID for character import
-p, --password PASS MySQL root password
Optional:
-n, --name NAME New character name (if different from dump)
-g, --guid GUID Force specific character GUID
--auth-db NAME Auth database schema name (default: acore_auth)
--characters-db NAME Characters database schema name (default: acore_characters)
--dry-run Validate pdump without importing
--no-backup Skip pre-import backup (not recommended)
-h, --help Show this help and exit
Examples:
# Import character from pdump file
./pdump-import.sh --file character.pdump --account testaccount --password azerothcore123
# Import with new character name
./pdump-import.sh --file oldchar.pdump --account newaccount --name "NewCharName" --password azerothcore123
# Validate pdump file without importing
./pdump-import.sh --file character.pdump --account testaccount --password azerothcore123 --dry-run
Notes:
- Account must exist in the auth database before import
- Character names must be unique across the server
- Pre-import backup is created automatically (can be disabled with --no-backup)
- Use --dry-run to validate pdump structure before actual import
EOF
}
validate_account(){
local account="$1"
if [[ "$account" =~ ^[0-9]+$ ]]; then
# Account ID provided
local count
count=$(docker exec ac-mysql mysql -uroot -p"$MYSQL_PW" -N -B -e \
"SELECT COUNT(*) FROM ${AUTH_DB}.account WHERE id = $account;")
[[ "$count" -eq 1 ]] || fatal "Account ID $account not found in auth database"
else
# Account name provided
local count
count=$(docker exec ac-mysql mysql -uroot -p"$MYSQL_PW" -N -B -e \
"SELECT COUNT(*) FROM ${AUTH_DB}.account WHERE username = '$account';")
[[ "$count" -eq 1 ]] || fatal "Account '$account' not found in auth database"
fi
}
get_account_id(){
local account="$1"
if [[ "$account" =~ ^[0-9]+$ ]]; then
echo "$account"
else
docker exec ac-mysql mysql -uroot -p"$MYSQL_PW" -N -B -e \
"SELECT id FROM ${AUTH_DB}.account WHERE username = '$account';"
fi
}
validate_character_name(){
local name="$1"
# Check character name format (WoW naming rules)
if [[ ! "$name" =~ ^[A-Za-z]{2,12}$ ]]; then
fatal "Invalid character name: '$name'. Must be 2-12 letters, no numbers or special characters."
fi
# Check if character name already exists
local count
count=$(docker exec ac-mysql mysql -uroot -p"$MYSQL_PW" -N -B -e \
"SELECT COUNT(*) FROM ${CHARACTERS_DB}.characters WHERE name = '$name';")
[[ "$count" -eq 0 ]] || fatal "Character name '$name' already exists in database"
}
get_next_guid(){
docker exec ac-mysql mysql -uroot -p"$MYSQL_PW" -N -B -e \
"SELECT COALESCE(MAX(guid), 0) + 1 FROM ${CHARACTERS_DB}.characters;"
}
validate_pdump_format(){
local file="$1"
if [[ ! -f "$file" ]]; then
fatal "Pdump file not found: $file"
fi
# Check if file is readable and has SQL-like content
if ! head -10 "$file" | grep -qiE "INSERT|UPDATE|CREATE|ALTER"; then
warn "File does not appear to contain SQL statements. Continuing anyway..."
fi
info "Pdump file validation: OK"
}
backup_characters(){
local timestamp
timestamp=$(date +%Y%m%d_%H%M%S)
local backup_file="manual-backups/characters-pre-pdump-import-${timestamp}.sql"
mkdir -p manual-backups
log "Creating backup: $backup_file" >&2  # stderr, so $(backup_characters) captures only the path
docker exec ac-mysql mysqldump -uroot -p"$MYSQL_PW" "$CHARACTERS_DB" > "$backup_file"
echo "$backup_file"
}
process_pdump_sql(){
local file="$1"
local account_id="$2"
local new_guid="${3:-}"
local new_name="${4:-}"
# Create temporary processed file
local temp_file
temp_file=$(mktemp)
# Process the pdump SQL file
# Replace account references and optionally GUID/name
if [[ -n "$new_guid" && -n "$new_name" ]]; then
sed -e "s/\([^0-9]\)[0-9]\+\([^0-9].*account.*=\)/\1${account_id}\2/g" \
-e "s/\([^0-9]\)[0-9]\+\([^0-9].*guid.*=\)/\1${new_guid}\2/g" \
-e "s/'[^']*'\([^']*name.*=\)/'${new_name}'\1/g" \
"$file" > "$temp_file"
elif [[ -n "$new_guid" ]]; then
sed -e "s/\([^0-9]\)[0-9]\+\([^0-9].*account.*=\)/\1${account_id}\2/g" \
-e "s/\([^0-9]\)[0-9]\+\([^0-9].*guid.*=\)/\1${new_guid}\2/g" \
"$file" > "$temp_file"
elif [[ -n "$new_name" ]]; then
sed -e "s/\([^0-9]\)[0-9]\+\([^0-9].*account.*=\)/\1${account_id}\2/g" \
-e "s/'[^']*'\([^']*name.*=\)/'${new_name}'\1/g" \
"$file" > "$temp_file"
else
sed -e "s/\([^0-9]\)[0-9]\+\([^0-9].*account.*=\)/\1${account_id}\2/g" \
"$file" > "$temp_file"
fi
echo "$temp_file"
}
import_pdump(){
local processed_file="$1"
log "Importing character data into $CHARACTERS_DB database"
if docker exec -i ac-mysql mysql -uroot -p"$MYSQL_PW" "$CHARACTERS_DB" < "$processed_file"; then
log "Character import completed successfully"
else
fatal "Character import failed. Check MySQL logs for details."
fi
}
# Show usage when invoked with no arguments
if [[ $# -eq 0 ]]; then
usage
exit 1
fi
# Parse command line arguments
POSITIONAL=()
while [[ $# -gt 0 ]]; do
case "$1" in
-f|--file)
[[ $# -ge 2 ]] || fatal "--file requires a file path"
PDUMP_FILE="$2"
shift 2
;;
-a|--account)
[[ $# -ge 2 ]] || fatal "--account requires an account name or ID"
TARGET_ACCOUNT="$2"
shift 2
;;
-p|--password)
[[ $# -ge 2 ]] || fatal "--password requires a value"
MYSQL_PW="$2"
shift 2
;;
-n|--name)
[[ $# -ge 2 ]] || fatal "--name requires a character name"
NEW_CHARACTER_NAME="$2"
shift 2
;;
-g|--guid)
[[ $# -ge 2 ]] || fatal "--guid requires a GUID number"
FORCE_GUID="$2"
shift 2
;;
--auth-db)
[[ $# -ge 2 ]] || fatal "--auth-db requires a value"
AUTH_DB="$2"
shift 2
;;
--characters-db)
[[ $# -ge 2 ]] || fatal "--characters-db requires a value"
CHARACTERS_DB="$2"
shift 2
;;
--dry-run)
DRY_RUN=true
shift
;;
--no-backup)
BACKUP_BEFORE=false
shift
;;
-h|--help)
usage
exit 0
;;
--)
shift
while [[ $# -gt 0 ]]; do
POSITIONAL+=("$1")
shift
done
break
;;
-*)
fatal "Unknown option: $1"
;;
*)
POSITIONAL+=("$1")
shift
;;
esac
done
# Validate required arguments
[[ -n "$PDUMP_FILE" ]] || fatal "Pdump file is required. Use --file FILE"
[[ -n "$TARGET_ACCOUNT" ]] || fatal "Target account is required. Use --account ACCOUNT"
[[ -n "$MYSQL_PW" ]] || fatal "MySQL password is required. Use --password PASS"
# Resolve relative paths
if [[ ! "$PDUMP_FILE" =~ ^/ ]]; then
PDUMP_FILE="$INVOCATION_DIR/$PDUMP_FILE"
fi
# Validate inputs
log "Validating pdump file..."
validate_pdump_format "$PDUMP_FILE"
log "Validating target account..."
validate_account "$TARGET_ACCOUNT"
ACCOUNT_ID=$(get_account_id "$TARGET_ACCOUNT")
log "Target account ID: $ACCOUNT_ID"
if [[ -n "$NEW_CHARACTER_NAME" ]]; then
log "Validating new character name..."
validate_character_name "$NEW_CHARACTER_NAME"
fi
# Determine GUID
if [[ -n "$FORCE_GUID" ]]; then
[[ "$FORCE_GUID" =~ ^[0-9]+$ ]] || fatal "GUID must be a positive integer: $FORCE_GUID"
CHARACTER_GUID="$FORCE_GUID"
log "Using forced GUID: $CHARACTER_GUID"
else
CHARACTER_GUID=$(get_next_guid)
log "Using next available GUID: $CHARACTER_GUID"
fi
# Process pdump file
log "Processing pdump file..."
PROCESSED_FILE=$(process_pdump_sql "$PDUMP_FILE" "$ACCOUNT_ID" "$CHARACTER_GUID" "$NEW_CHARACTER_NAME")
if $DRY_RUN; then
info "DRY RUN: Pdump processing completed successfully"
info "Processed file saved to: $PROCESSED_FILE"
info "Account ID: $ACCOUNT_ID"
info "Character GUID: $CHARACTER_GUID"
[[ -n "$NEW_CHARACTER_NAME" ]] && info "Character name: $NEW_CHARACTER_NAME"
info "Run without --dry-run to perform actual import"
rm -f "$PROCESSED_FILE"
exit 0
fi
# Create backup before import
BACKUP_FILE=""
if $BACKUP_BEFORE; then
BACKUP_FILE=$(backup_characters)
fi
# Stop world server to prevent issues during import
log "Stopping world server for safe import..."
docker stop ac-worldserver >/dev/null 2>&1 || warn "World server was not running"
# Perform import
trap 'rm -f "$PROCESSED_FILE"' EXIT
import_pdump "$PROCESSED_FILE"
# Restart world server
log "Restarting world server..."
docker start ac-worldserver >/dev/null 2>&1 || fatal "Failed to start ac-worldserver"
# Wait for server to initialize
log "Waiting for world server to initialize..."
for i in {1..30}; do
if docker exec ac-worldserver pgrep worldserver >/dev/null 2>&1; then
log "World server is running"
break
fi
if [ "$i" -eq 30 ]; then
warn "World server took longer than expected to start"
break
fi
sleep 2
done
# Verify import
CHARACTER_COUNT=$(docker exec ac-mysql mysql -uroot -p"$MYSQL_PW" -N -B -e \
"SELECT COUNT(*) FROM ${CHARACTERS_DB}.characters WHERE account = $ACCOUNT_ID;")
log "Import completed successfully!"
log "Characters on account $TARGET_ACCOUNT: $CHARACTER_COUNT"
[[ -n "$BACKUP_FILE" ]] && log "Backup created: $BACKUP_FILE"
info "Character import from pdump completed. You can now log in and play!"
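The naming rule enforced by `validate_character_name` above (2-12 letters, no digits or punctuation) can be sketched as a small standalone predicate; the regex is taken directly from the script, while the function name is illustrative:

```python
import re

# Mirrors the script's check: 2-12 ASCII letters, no digits or special characters.
NAME_RE = re.compile(r"^[A-Za-z]{2,12}$")

def is_valid_character_name(name: str) -> bool:
    """Return True if `name` satisfies the naming rule used by the importer."""
    return bool(NAME_RE.fullmatch(name))

print(is_valid_character_name("Thrall"))   # True
print(is_valid_character_name("X"))        # too short -> False
print(is_valid_character_name("Arthas2"))  # digit -> False
```

The database-side uniqueness check in the script is a separate concern; this predicate only covers the format half of the validation.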


@@ -0,0 +1,88 @@
#!/bin/bash
# Ensure dbimport.conf exists with usable connection values.
set -euo pipefail 2>/dev/null || set -eu
# Usage: seed_dbimport_conf [conf_dir]
# - conf_dir: target directory (defaults to DBIMPORT_CONF_DIR or /azerothcore/env/dist/etc)
seed_dbimport_conf() {
local conf_dir="${1:-${DBIMPORT_CONF_DIR:-/azerothcore/env/dist/etc}}"
local conf="${conf_dir}/dbimport.conf"
local dist="${conf}.dist"
local source_root="${DBIMPORT_SOURCE_ROOT:-${AC_SOURCE_DIR:-/local-storage-root/source/azerothcore-playerbots}}"
if [ ! -d "$source_root" ]; then
local fallback="/local-storage-root/source/azerothcore-wotlk"
if [ -d "$fallback" ]; then
source_root="$fallback"
fi
fi
local source_dist="${DBIMPORT_DIST_PATH:-${source_root}/src/tools/dbimport/dbimport.conf.dist}"
# Put temp dir inside the writable config mount so non-root can create files.
local temp_dir="${DBIMPORT_TEMP_DIR:-/azerothcore/env/dist/etc/temp}"
mkdir -p "$conf_dir" "$temp_dir"
# Prefer a real .dist from the source tree if it exists.
if [ -f "$source_dist" ]; then
cp -n "$source_dist" "$dist" 2>/dev/null || true
fi
if [ ! -f "$conf" ]; then
if [ -f "$dist" ]; then
cp "$dist" "$conf"
else
echo "⚠️ dbimport.conf.dist not found; generating minimal dbimport.conf" >&2
cat > "$conf" <<EOF
LoginDatabaseInfo = "localhost;3306;root;root;acore_auth"
WorldDatabaseInfo = "localhost;3306;root;root;acore_world"
CharacterDatabaseInfo = "localhost;3306;root;root;acore_characters"
PlayerbotsDatabaseInfo = "localhost;3306;root;root;acore_playerbots"
EnableDatabases = 15
Updates.AutoSetup = 1
MySQLExecutable = "/usr/bin/mysql"
TempDir = "/azerothcore/env/dist/temp"
EOF
fi
fi
set_conf() {
local key="$1" value="$2" file="$3" quoted="${4:-true}"
local formatted="$value"
if [ "$quoted" = "true" ]; then
formatted="\"${value}\""
fi
if grep -qE "^[[:space:]]*${key}[[:space:]]*=" "$file"; then
sed -i "s|^[[:space:]]*${key}[[:space:]]*=.*|${key} = ${formatted}|" "$file"
else
printf '%s = %s\n' "$key" "$formatted" >> "$file"
fi
}
local host="${CONTAINER_MYSQL:-${MYSQL_HOST:-localhost}}"
local port="${MYSQL_PORT:-3306}"
local user="${MYSQL_USER:-root}"
local pass="${MYSQL_ROOT_PASSWORD:-root}"
local db_auth="${DB_AUTH_NAME:-acore_auth}"
local db_world="${DB_WORLD_NAME:-acore_world}"
local db_chars="${DB_CHARACTERS_NAME:-acore_characters}"
local db_bots="${DB_PLAYERBOTS_NAME:-acore_playerbots}"
set_conf "LoginDatabaseInfo" "${host};${port};${user};${pass};${db_auth}" "$conf"
set_conf "WorldDatabaseInfo" "${host};${port};${user};${pass};${db_world}" "$conf"
set_conf "CharacterDatabaseInfo" "${host};${port};${user};${pass};${db_chars}" "$conf"
set_conf "PlayerbotsDatabaseInfo" "${host};${port};${user};${pass};${db_bots}" "$conf"
set_conf "EnableDatabases" "${AC_UPDATES_ENABLE_DATABASES:-15}" "$conf" false
set_conf "Updates.AutoSetup" "${AC_UPDATES_AUTO_SETUP:-1}" "$conf" false
set_conf "Updates.ExceptionShutdownDelay" "${AC_UPDATES_EXCEPTION_SHUTDOWN_DELAY:-10000}" "$conf" false
set_conf "Updates.AllowedModules" "${DB_UPDATES_ALLOWED_MODULES:-all}" "$conf"
set_conf "Updates.Redundancy" "${DB_UPDATES_REDUNDANCY:-1}" "$conf" false
set_conf "Database.Reconnect.Seconds" "${DB_RECONNECT_SECONDS:-5}" "$conf" false
set_conf "Database.Reconnect.Attempts" "${DB_RECONNECT_ATTEMPTS:-5}" "$conf" false
set_conf "LoginDatabase.WorkerThreads" "${DB_LOGIN_WORKER_THREADS:-1}" "$conf" false
set_conf "WorldDatabase.WorkerThreads" "${DB_WORLD_WORKER_THREADS:-1}" "$conf" false
set_conf "CharacterDatabase.WorkerThreads" "${DB_CHARACTER_WORKER_THREADS:-1}" "$conf" false
set_conf "LoginDatabase.SynchThreads" "${DB_LOGIN_SYNCH_THREADS:-1}" "$conf" false
set_conf "WorldDatabase.SynchThreads" "${DB_WORLD_SYNCH_THREADS:-1}" "$conf" false
set_conf "CharacterDatabase.SynchThreads" "${DB_CHARACTER_SYNCH_THREADS:-1}" "$conf" false
set_conf "MySQLExecutable" "/usr/bin/mysql" "$conf"
set_conf "TempDir" "$temp_dir" "$conf"
}
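The `set_conf` helper above performs an in-place upsert of `Key = value` lines: rewrite the line if the key exists, append otherwise. The same semantics, sketched in Python for clarity (function and variable names are illustrative, not part of the script):

```python
import re

def set_conf(text: str, key: str, value: str, quoted: bool = True) -> str:
    """Replace an existing `key = ...` line or append one, mirroring the shell helper."""
    formatted = f'"{value}"' if quoted else value
    pattern = re.compile(rf"^\s*{re.escape(key)}\s*=.*$", re.MULTILINE)
    if pattern.search(text):
        return pattern.sub(f"{key} = {formatted}", text)
    return text + f"{key} = {formatted}\n"

conf = "Updates.AutoSetup = 0\n"
conf = set_conf(conf, "Updates.AutoSetup", "1", quoted=False)          # update in place
conf = set_conf(conf, "LoginDatabaseInfo", "db;3306;root;root;acore_auth")  # append, quoted
print(conf)
```

As in the shell version, numeric settings are written unquoted while connection strings are wrapped in double quotes.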


@@ -259,14 +259,14 @@ SENTINEL_FILE="$LOCAL_STORAGE_PATH/modules/.requires_rebuild"
MODULES_META_DIR="$STORAGE_PATH/modules/.modules-meta"
RESTORE_PRESTAGED_FLAG="$MODULES_META_DIR/.restore-prestaged"
MODULES_ENABLED_FILE="$MODULES_META_DIR/modules-enabled.txt"
-MODULE_SQL_STAGE_PATH="$(read_env MODULE_SQL_STAGE_PATH "$STORAGE_PATH/module-sql-updates")"
-MODULE_SQL_STAGE_PATH="$(eval "echo \"$MODULE_SQL_STAGE_PATH\"")"
-if [[ "$MODULE_SQL_STAGE_PATH" != /* ]]; then
-MODULE_SQL_STAGE_PATH="$PROJECT_DIR/$MODULE_SQL_STAGE_PATH"
+STAGE_PATH_MODULE_SQL="$(read_env STAGE_PATH_MODULE_SQL "$STORAGE_PATH/module-sql-updates")"
+STAGE_PATH_MODULE_SQL="$(eval "echo \"$STAGE_PATH_MODULE_SQL\"")"
+if [[ "$STAGE_PATH_MODULE_SQL" != /* ]]; then
+STAGE_PATH_MODULE_SQL="$PROJECT_DIR/$STAGE_PATH_MODULE_SQL"
 fi
-MODULE_SQL_STAGE_PATH="$(canonical_path "$MODULE_SQL_STAGE_PATH")"
-mkdir -p "$MODULE_SQL_STAGE_PATH"
-ensure_host_writable "$MODULE_SQL_STAGE_PATH"
+STAGE_PATH_MODULE_SQL="$(canonical_path "$STAGE_PATH_MODULE_SQL")"
+mkdir -p "$STAGE_PATH_MODULE_SQL"
+ensure_host_writable "$STAGE_PATH_MODULE_SQL"
HOST_STAGE_HELPER_IMAGE="$(read_env ALPINE_IMAGE "alpine:latest")"
declare -A ENABLED_MODULES=()
@@ -439,7 +439,7 @@ esac
# Stage module SQL to core updates directory (after containers start)
host_stage_clear(){
docker run --rm \
-  -v "$MODULE_SQL_STAGE_PATH":/host-stage \
+  -v "$STAGE_PATH_MODULE_SQL":/host-stage \
"$HOST_STAGE_HELPER_IMAGE" \
sh -c 'find /host-stage -type f -name "MODULE_*.sql" -delete' >/dev/null 2>&1 || true
}
@@ -447,7 +447,7 @@ host_stage_clear(){
host_stage_reset_dir(){
local dir="$1"
docker run --rm \
-  -v "$MODULE_SQL_STAGE_PATH":/host-stage \
+  -v "$STAGE_PATH_MODULE_SQL":/host-stage \
"$HOST_STAGE_HELPER_IMAGE" \
sh -c "mkdir -p /host-stage/$dir && rm -f /host-stage/$dir/MODULE_*.sql" >/dev/null 2>&1 || true
}
@@ -461,7 +461,7 @@ copy_to_host_stage(){
local base_name
base_name="$(basename "$file_path")"
docker run --rm \
-  -v "$MODULE_SQL_STAGE_PATH":/host-stage \
+  -v "$STAGE_PATH_MODULE_SQL":/host-stage \
-v "$src_dir":/src \
"$HOST_STAGE_HELPER_IMAGE" \
sh -c "mkdir -p /host-stage/$core_dir && cp \"/src/$base_name\" \"/host-stage/$core_dir/$target_name\"" >/dev/null 2>&1


@@ -4,11 +4,16 @@ import os
import re
import socket
import subprocess
import sys
import time
from pathlib import Path
PROJECT_DIR = Path(__file__).resolve().parents[2]
ENV_FILE = PROJECT_DIR / ".env"
DEFAULT_ACORE_STANDARD_REPO = "https://github.com/azerothcore/azerothcore-wotlk.git"
DEFAULT_ACORE_PLAYERBOTS_REPO = "https://github.com/mod-playerbots/azerothcore-wotlk.git"
DEFAULT_ACORE_STANDARD_BRANCH = "master"
DEFAULT_ACORE_PLAYERBOTS_BRANCH = "Playerbot"
def load_env():
env = {}
@@ -150,6 +155,195 @@ def volume_info(name, fallback=None):
pass
return {"name": name, "exists": False, "mountpoint": "-"}
def detect_source_variant(env):
variant = read_env(env, "STACK_SOURCE_VARIANT", "").strip().lower()
if variant in ("playerbots", "playerbot"):
return "playerbots"
if variant == "core":
return "core"
if read_env(env, "STACK_IMAGE_MODE", "").strip().lower() == "playerbots":
return "playerbots"
if read_env(env, "MODULE_PLAYERBOTS", "0") == "1" or read_env(env, "PLAYERBOT_ENABLED", "0") == "1":
return "playerbots"
return "core"
def repo_config_for_variant(env, variant):
if variant == "playerbots":
repo = read_env(env, "ACORE_REPO_PLAYERBOTS", DEFAULT_ACORE_PLAYERBOTS_REPO)
branch = read_env(env, "ACORE_BRANCH_PLAYERBOTS", DEFAULT_ACORE_PLAYERBOTS_BRANCH)
else:
repo = read_env(env, "ACORE_REPO_STANDARD", DEFAULT_ACORE_STANDARD_REPO)
branch = read_env(env, "ACORE_BRANCH_STANDARD", DEFAULT_ACORE_STANDARD_BRANCH)
return repo, branch
def image_labels(image):
try:
result = subprocess.run(
["docker", "image", "inspect", "--format", "{{json .Config.Labels}}", image],
capture_output=True,
text=True,
check=True,
timeout=3,
)
labels = json.loads(result.stdout or "{}")
if isinstance(labels, dict):
return {k: (v or "").strip() for k, v in labels.items()}
except Exception:
pass
return {}
def first_label(labels, keys):
for key in keys:
value = labels.get(key, "")
if value:
return value
return ""
def short_commit(commit):
commit = commit.strip()
if re.fullmatch(r"[0-9a-fA-F]{12,}", commit):
return commit[:12]
return commit
def git_info_from_path(path):
repo_path = Path(path)
if not (repo_path / ".git").exists():
return None
def run_git(args):
try:
result = subprocess.run(
["git"] + args,
cwd=repo_path,
capture_output=True,
text=True,
check=True,
)
return result.stdout.strip()
except Exception:
return ""
commit = run_git(["rev-parse", "HEAD"])
if not commit:
return None
return {
"commit": commit,
"commit_short": run_git(["rev-parse", "--short", "HEAD"]) or short_commit(commit),
"date": run_git(["log", "-1", "--format=%cd", "--date=iso-strict"]),
"repo": run_git(["remote", "get-url", "origin"]),
"branch": run_git(["rev-parse", "--abbrev-ref", "HEAD"]),
"path": str(repo_path),
}
def candidate_source_paths(env, variant):
paths = []
for key in ("MODULES_REBUILD_SOURCE_PATH", "SOURCE_DIR"):
value = read_env(env, key, "")
if value:
paths.append(value)
local_root = read_env(env, "STORAGE_PATH_LOCAL", "./local-storage")
primary_dir = "azerothcore-playerbots" if variant == "playerbots" else "azerothcore"
fallback_dir = "azerothcore" if variant == "playerbots" else "azerothcore-playerbots"
paths.append(os.path.join(local_root, "source", primary_dir))
paths.append(os.path.join(local_root, "source", fallback_dir))
normalized = []
for p in paths:
expanded = expand_path(p, env)
try:
normalized.append(str(Path(expanded).expanduser().resolve()))
except Exception:
normalized.append(str(Path(expanded).expanduser()))
# Deduplicate while preserving order
seen = set()
unique_paths = []
for p in normalized:
if p not in seen:
seen.add(p)
unique_paths.append(p)
return unique_paths
def build_info(service_data, env):
variant = detect_source_variant(env)
repo, branch = repo_config_for_variant(env, variant)
info = {
"variant": variant,
"repo": repo,
"branch": branch,
"image": "",
"commit": "",
"commit_date": "",
"commit_source": "",
"source_path": "",
}
image_candidates = []
for svc in service_data:
if svc.get("name") in ("ac-worldserver", "ac-authserver", "ac-db-import"):
image = svc.get("image") or ""
if image:
image_candidates.append(image)
for env_key in (
"AC_WORLDSERVER_IMAGE_PLAYERBOTS",
"AC_WORLDSERVER_IMAGE_MODULES",
"AC_WORLDSERVER_IMAGE",
"AC_AUTHSERVER_IMAGE_PLAYERBOTS",
"AC_AUTHSERVER_IMAGE_MODULES",
"AC_AUTHSERVER_IMAGE",
):
value = read_env(env, env_key, "")
if value:
image_candidates.append(value)
seen = set()
deduped_images = []
for img in image_candidates:
if img not in seen:
seen.add(img)
deduped_images.append(img)
commit_label_keys = [
"build.source_commit",
"org.opencontainers.image.revision",
"org.opencontainers.image.version",
]
date_label_keys = [
"build.source_date",
"org.opencontainers.image.created",
"build.timestamp",
]
for image in deduped_images:
labels = image_labels(image)
if not info["image"]:
info["image"] = image
if not labels:
continue
commit = short_commit(first_label(labels, commit_label_keys))
date = first_label(labels, date_label_keys)
if commit or date:
info["commit"] = commit
info["commit_date"] = date
info["commit_source"] = "image-label"
info["image"] = image
return info
for path in candidate_source_paths(env, variant):
git_meta = git_info_from_path(path)
if git_meta:
info["commit"] = git_meta.get("commit_short") or short_commit(git_meta.get("commit", ""))
info["commit_date"] = git_meta.get("date", "")
info["commit_source"] = "source-tree"
info["source_path"] = git_meta.get("path", "")
info["repo"] = git_meta.get("repo") or info["repo"]
info["branch"] = git_meta.get("branch") or info["branch"]
return info
return info
def expand_path(value, env):
storage = read_env(env, "STORAGE_PATH", "./storage")
local_storage = read_env(env, "STORAGE_PATH_LOCAL", "./local-storage")
@@ -175,13 +369,61 @@ def mysql_query(env, database, query):
except Exception:
return 0
def escape_like_prefix(prefix):
# Basic escape for single quotes in SQL literals
return prefix.replace("'", "''")
def bot_prefixes(env):
prefixes = []
for key in ("PLAYERBOT_ACCOUNT_PREFIXES", "PLAYERBOT_ACCOUNT_PREFIX"):
raw = read_env(env, key, "")
for part in raw.replace(",", " ").split():
part = part.strip()
if part:
prefixes.append(part)
# Default fallback if nothing configured
if not prefixes:
prefixes.extend(["playerbot", "rndbot", "bot"])
return prefixes
def user_stats(env):
db_auth = read_env(env, "DB_AUTH_NAME", "acore_auth")
db_characters = read_env(env, "DB_CHARACTERS_NAME", "acore_characters")
-accounts = mysql_query(env, db_auth, "SELECT COUNT(*) FROM account;")
-online = mysql_query(env, db_auth, "SELECT COUNT(*) FROM account WHERE online = 1;")
prefixes = bot_prefixes(env)
account_conditions = []
for prefix in prefixes:
prefix = escape_like_prefix(prefix)
upper_prefix = prefix.upper()
account_conditions.append(f"UPPER(username) NOT LIKE '{upper_prefix}%%'")
account_query = "SELECT COUNT(*) FROM account"
if account_conditions:
account_query += " WHERE " + " AND ".join(account_conditions)
accounts = mysql_query(env, db_auth, account_query + ";")
online_conditions = ["c.online = 1"]
for prefix in prefixes:
prefix = escape_like_prefix(prefix)
upper_prefix = prefix.upper()
online_conditions.append(f"UPPER(a.username) NOT LIKE '{upper_prefix}%%'")
online_query = (
f"SELECT COUNT(DISTINCT a.id) FROM `{db_characters}`.characters c "
f"JOIN `{db_auth}`.account a ON a.id = c.account "
f"WHERE {' AND '.join(online_conditions)};"
)
online = mysql_query(env, db_characters, online_query)
active = mysql_query(env, db_auth, "SELECT COUNT(*) FROM account WHERE last_login >= DATE_SUB(UTC_TIMESTAMP(), INTERVAL 7 DAY);")
-characters = mysql_query(env, db_characters, "SELECT COUNT(*) FROM characters;")
character_conditions = []
for prefix in prefixes:
prefix = escape_like_prefix(prefix)
upper_prefix = prefix.upper()
character_conditions.append(f"UPPER(a.username) NOT LIKE '{upper_prefix}%%'")
characters_query = (
f"SELECT COUNT(*) FROM `{db_characters}`.characters c "
f"JOIN `{db_auth}`.account a ON a.id = c.account"
)
if character_conditions:
characters_query += " WHERE " + " AND ".join(character_conditions)
characters = mysql_query(env, db_characters, characters_query + ";")
return {
"accounts": accounts,
"online": online,
@@ -227,8 +469,14 @@ def docker_stats():
def main():
env = load_env()
-project = read_env(env, "COMPOSE_PROJECT_NAME", "acore-compose")
-network = read_env(env, "NETWORK_NAME", "azerothcore")
+project = read_env(env, "COMPOSE_PROJECT_NAME")
+if not project:
+    print(json.dumps({"error": "COMPOSE_PROJECT_NAME not set in environment"}), file=sys.stderr)
+    sys.exit(1)
+network = read_env(env, "NETWORK_NAME")
+if not network:
+    print(json.dumps({"error": "NETWORK_NAME not set in environment"}), file=sys.stderr)
+    sys.exit(1)
services = [
("ac-mysql", "MySQL"),
@@ -274,6 +522,8 @@ def main():
"mysql_data": volume_info(f"{project}_mysql-data", "mysql-data"),
}
build = build_info(service_data, env)
data = {
"timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
"project": project,
@@ -285,6 +535,7 @@ def main():
"volumes": volumes,
"users": user_stats(env),
"stats": docker_stats(),
"build": build,
}
print(json.dumps(data))
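Both `build_info` and `candidate_source_paths` above rely on the same order-preserving deduplication idiom for image and path candidates; extracted as a standalone helper (the function name is illustrative):

```python
def dedupe_keep_order(items):
    """Drop duplicates while keeping first-seen order, as done for image/path candidates."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

print(dedupe_keep_order(["img:a", "img:b", "img:a", "img:c"]))  # ['img:a', 'img:b', 'img:c']
```

First-seen order matters here because the candidate lists are built most-specific-first, so the first surviving entry is also the preferred one.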

scripts/bash/test-2fa-token.py Executable file

@@ -0,0 +1,65 @@
#!/usr/bin/env python3
"""
Test TOTP token generation for AzerothCore 2FA
"""
import base64
import hmac
import hashlib
import struct
import time
import argparse
def generate_totp(secret, timestamp=None, interval=30):
"""Generate TOTP token from Base32 secret"""
if timestamp is None:
timestamp = int(time.time())
# Calculate time counter
counter = timestamp // interval
# Decode Base32 secret
# Add padding if needed
secret = secret.upper()
missing_padding = len(secret) % 8
if missing_padding:
secret += '=' * (8 - missing_padding)
key = base64.b32decode(secret)
# Pack counter as big-endian 8-byte integer
counter_bytes = struct.pack('>Q', counter)
# Generate HMAC-SHA1 hash
hmac_hash = hmac.new(key, counter_bytes, hashlib.sha1).digest()
# Dynamic truncation
offset = hmac_hash[-1] & 0xf
code = struct.unpack('>I', hmac_hash[offset:offset + 4])[0]
code &= 0x7fffffff
code %= 1000000
return f"{code:06d}"
def main():
parser = argparse.ArgumentParser(description="Generate TOTP tokens for testing")
parser.add_argument('-s', '--secret', required=True, help='Base32 secret')
parser.add_argument('-t', '--time', type=int, help='Unix timestamp (default: current time)')
parser.add_argument('-c', '--count', type=int, default=1, help='Number of tokens to generate')
args = parser.parse_args()
timestamp = args.time if args.time is not None else int(time.time())
print(f"Secret: {args.secret}")
print(f"Timestamp: {timestamp} ({time.ctime(timestamp)})")
print("Interval: 30 seconds")
print()
for i in range(args.count):
current_time = timestamp + (i * 30)
token = generate_totp(args.secret, current_time)
print(f"Time: {time.ctime(current_time)} | Token: {token}")
if __name__ == "__main__":
main()
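The implementation above follows RFC 6238 (TOTP = HOTP over HMAC-SHA1 with a time counter), so it can be sanity-checked against the RFC's published SHA-1 test vector: the secret `12345678901234567890` (Base32 `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`) at T=59 yields `94287082`, whose last six digits are `287082`. The snippet below restates the function so it runs standalone:

```python
import base64
import hashlib
import hmac
import struct

def generate_totp(secret, timestamp, interval=30):
    """Same algorithm as test-2fa-token.py: HMAC-SHA1 + RFC 4226 dynamic truncation."""
    counter = timestamp // interval
    secret = secret.upper()
    if len(secret) % 8:
        secret += "=" * (8 - len(secret) % 8)
    key = base64.b32decode(secret)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0xF
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# RFC 6238 Appendix B vector (SHA-1): T=59 -> 94287082; six-digit form:
print(generate_totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59))  # 287082
```

If this vector matches, the truncation, byte order, and Base32 padding handling are all correct.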


@@ -22,6 +22,32 @@ ICON_ERROR="❌"
ICON_INFO=""
ICON_TEST="🧪"
resolve_path(){
local base="$1" path="$2"
if command -v python3 >/dev/null 2>&1; then
python3 - "$base" "$path" <<'PY'
import os, sys
base, path = sys.argv[1:3]
if os.path.isabs(path):
print(os.path.normpath(path))
else:
print(os.path.normpath(os.path.join(base, path)))
PY
else
(cd "$base" && realpath -m "$path")
fi
}
if [ -f "$PROJECT_ROOT/.env" ]; then
set -a
# shellcheck disable=SC1091
source "$PROJECT_ROOT/.env"
set +a
fi
LOCAL_MODULES_DIR_RAW="${STORAGE_PATH_LOCAL:-./local-storage}/modules"
LOCAL_MODULES_DIR="$(resolve_path "$PROJECT_ROOT" "$LOCAL_MODULES_DIR_RAW")"
# Counters
TESTS_TOTAL=0
TESTS_PASSED=0
@@ -117,7 +143,7 @@ info "Running: python3 scripts/python/modules.py generate"
if python3 scripts/python/modules.py \
--env-path .env \
--manifest config/module-manifest.json \
-generate --output-dir local-storage/modules > /tmp/phase1-modules-generate.log 2>&1; then
+generate --output-dir "$LOCAL_MODULES_DIR" > /tmp/phase1-modules-generate.log 2>&1; then
ok "Module state generation successful"
else
# Check if it's just warnings
@@ -130,11 +156,11 @@ fi
# Test 4: Verify SQL manifest created
test_header "SQL Manifest Verification"
-if [ -f local-storage/modules/.sql-manifest.json ]; then
-ok "SQL manifest created: local-storage/modules/.sql-manifest.json"
+if [ -f "$LOCAL_MODULES_DIR/.sql-manifest.json" ]; then
+ok "SQL manifest created: $LOCAL_MODULES_DIR/.sql-manifest.json"
# Check manifest structure
-module_count=$(python3 -c "import json; data=json.load(open('local-storage/modules/.sql-manifest.json')); print(len(data.get('modules', [])))" 2>/dev/null || echo "0")
+module_count=$(python3 -c "import json; data=json.load(open('$LOCAL_MODULES_DIR/.sql-manifest.json')); print(len(data.get('modules', [])))" 2>/dev/null || echo "0")
info "Modules with SQL: $module_count"
if [ "$module_count" -gt 0 ]; then
@@ -142,7 +168,7 @@ if [ -f local-storage/modules/.sql-manifest.json ]; then
# Show first module
info "Sample module SQL info:"
-python3 -c "import json; data=json.load(open('local-storage/modules/.sql-manifest.json')); m=data['modules'][0] if data['modules'] else {}; print(f\" Name: {m.get('name', 'N/A')}\n SQL files: {len(m.get('sql_files', {}))}\") " 2>/dev/null || true
+python3 -c "import json; data=json.load(open('$LOCAL_MODULES_DIR/.sql-manifest.json')); m=data['modules'][0] if data['modules'] else {}; print(f\" Name: {m.get('name', 'N/A')}\n SQL files: {len(m.get('sql_files', {}))}\") " 2>/dev/null || true
else
warn "No modules with SQL files (expected if modules not yet staged)"
fi
@@ -152,19 +178,19 @@ fi
# Test 5: Verify modules.env created
test_header "Module Environment File Check"
-if [ -f local-storage/modules/modules.env ]; then
+if [ -f "$LOCAL_MODULES_DIR/modules.env" ]; then
ok "modules.env created"
# Check for key exports
-if grep -q "MODULES_ENABLED=" local-storage/modules/modules.env; then
+if grep -q "MODULES_ENABLED=" "$LOCAL_MODULES_DIR/modules.env"; then
ok "MODULES_ENABLED variable present"
fi
-if grep -q "MODULES_REQUIRES_CUSTOM_BUILD=" local-storage/modules/modules.env; then
+if grep -q "MODULES_REQUIRES_CUSTOM_BUILD=" "$LOCAL_MODULES_DIR/modules.env"; then
ok "Build requirement flags present"
# Check if build required
-source local-storage/modules/modules.env
+source "$LOCAL_MODULES_DIR/modules.env"
if [ "${MODULES_REQUIRES_CUSTOM_BUILD:-0}" = "1" ]; then
info "Custom build required (C++ modules enabled)"
else
@@ -177,8 +203,8 @@ fi
# Test 6: Check build requirement
test_header "Build Requirement Check"
-if [ -f local-storage/modules/modules.env ]; then
-source local-storage/modules/modules.env
+if [ -f "$LOCAL_MODULES_DIR/modules.env" ]; then
+source "$LOCAL_MODULES_DIR/modules.env"
info "MODULES_REQUIRES_CUSTOM_BUILD=${MODULES_REQUIRES_CUSTOM_BUILD:-0}"
info "MODULES_REQUIRES_PLAYERBOT_SOURCE=${MODULES_REQUIRES_PLAYERBOT_SOURCE:-0}"
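The `resolve_path` helper defined earlier in this script normalizes a possibly-relative path against a base directory; its embedded Python is equivalent to the following standalone form:

```python
import os

def resolve_path(base: str, path: str) -> str:
    """Absolute paths pass through normalized; relative ones are joined to base."""
    if os.path.isabs(path):
        return os.path.normpath(path)
    return os.path.normpath(os.path.join(base, path))

print(resolve_path("/srv/acore", "./local-storage/modules"))  # /srv/acore/local-storage/modules
print(resolve_path("/srv/acore", "/opt/data"))                # /opt/data
```

Using `normpath` rather than `realpath` means symlinks are preserved, which matters when the storage path is itself a symlink into larger storage.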

scripts/bash/update-remote.sh Executable file

@@ -0,0 +1,121 @@
#!/bin/bash
# Helper to push a fresh build to a remote host with minimal downtime and no data touch by default.
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
DEFAULT_PROJECT_DIR="~/$(basename "$ROOT_DIR")"
HOST=""
USER=""
PORT=22
IDENTITY=""
PROJECT_DIR="$DEFAULT_PROJECT_DIR"
PUSH_ENV=0
PUSH_STORAGE=0
CLEAN_CONTAINERS=0
AUTO_DEPLOY=1
ASSUME_YES=0
usage(){
cat <<'EOF'
Usage: scripts/bash/update-remote.sh --host HOST --user USER [options]
Options:
--host HOST Remote hostname or IP (required)
--user USER SSH username on remote host (required)
--port PORT SSH port (default: 22)
--identity PATH SSH private key
--project-dir DIR Remote project directory (default: ~/<repo-name>)
--remote-path DIR Alias for --project-dir (backward compat)
--push-env Upload local .env to remote (default: skip)
--push-storage Sync ./storage to remote (default: skip)
--clean-containers Stop/remove remote ac-* containers & project images during migration (default: preserve)
--no-auto-deploy Do not trigger remote deploy after migration
--yes Auto-confirm prompts
--help Show this help
EOF
}
while [[ $# -gt 0 ]]; do
case "$1" in
--host) HOST="$2"; shift 2;;
--user) USER="$2"; shift 2;;
--port) PORT="$2"; shift 2;;
--identity) IDENTITY="$2"; shift 2;;
--project-dir) PROJECT_DIR="$2"; shift 2;;
--remote-path) PROJECT_DIR="$2"; shift 2;;
--push-env) PUSH_ENV=1; shift;;
--push-storage) PUSH_STORAGE=1; shift;;
--clean-containers) CLEAN_CONTAINERS=1; shift;;
--no-auto-deploy) AUTO_DEPLOY=0; shift;;
--yes) ASSUME_YES=1; shift;;
--help|-h) usage; exit 0;;
*) echo "Unknown option: $1" >&2; usage; exit 1;;
esac
done
if [[ -z "$HOST" || -z "$USER" ]]; then
echo "--host and --user are required" >&2
usage
exit 1
fi
deploy_args=(--remote --remote-host "$HOST" --remote-user "$USER")
if [ -n "$PROJECT_DIR" ]; then
deploy_args+=(--remote-project-dir "$PROJECT_DIR")
fi
if [ -n "$IDENTITY" ]; then
deploy_args+=(--remote-identity "$IDENTITY")
fi
if [ "$PORT" != "22" ]; then
deploy_args+=(--remote-port "$PORT")
fi
if [ "$PUSH_STORAGE" -ne 1 ]; then
deploy_args+=(--remote-skip-storage)
fi
if [ "$PUSH_ENV" -ne 1 ]; then
deploy_args+=(--remote-skip-env)
fi
if [ "$CLEAN_CONTAINERS" -eq 1 ]; then
deploy_args+=(--remote-clean-containers)
else
deploy_args+=(--remote-preserve-containers)
fi
if [ "$AUTO_DEPLOY" -eq 1 ]; then
deploy_args+=(--remote-auto-deploy)
fi
deploy_args+=(--no-watch)
if [ "$ASSUME_YES" -eq 1 ]; then
deploy_args+=(--yes)
fi
echo "Remote update plan:"
echo " Host/User : ${USER}@${HOST}:${PORT}"
echo " Project Dir : ${PROJECT_DIR}"
echo " Push .env : $([ "$PUSH_ENV" -eq 1 ] && echo yes || echo no)"
echo " Push storage : $([ "$PUSH_STORAGE" -eq 1 ] && echo yes || echo no)"
echo " Cleanup mode : $([ "$CLEAN_CONTAINERS" -eq 1 ] && echo 'clean containers' || echo 'preserve containers')"
echo " Auto deploy : $([ "$AUTO_DEPLOY" -eq 1 ] && echo yes || echo no)"
if [ "$AUTO_DEPLOY" -eq 1 ] && [ "$PUSH_ENV" -ne 1 ]; then
echo " ⚠️ Auto-deploy is enabled but push-env is off; remote deploy will fail without a valid .env."
fi
if [ "$ASSUME_YES" -ne 1 ]; then
read -r -p "Proceed with remote update? [y/N]: " reply
reply="${reply:-n}"
case "${reply,,}" in
y|yes) ;;
*) echo "Aborted."; exit 1 ;;
esac
deploy_args+=(--yes)
fi
cd "$ROOT_DIR"
./deploy.sh "${deploy_args[@]}"

scripts/bash/validate-env.sh Executable file

@@ -0,0 +1,301 @@
#!/bin/bash
# Validate environment configuration for AzerothCore RealmMaster
# Usage: ./scripts/bash/validate-env.sh [--strict] [--quiet]
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
ENV_FILE="$PROJECT_ROOT/.env"
TEMPLATE_FILE="$PROJECT_ROOT/.env.template"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Flags
STRICT_MODE=false
QUIET_MODE=false
EXIT_CODE=0
# Parse arguments
while [[ $# -gt 0 ]]; do
case "$1" in
--strict)
STRICT_MODE=true
shift
;;
--quiet)
QUIET_MODE=true
shift
;;
-h|--help)
cat <<EOF
Usage: $0 [OPTIONS]
Validates environment configuration for required variables.
OPTIONS:
--strict Fail on missing optional variables
--quiet Only show errors, suppress info/success messages
-h, --help Show this help
EXIT CODES:
0 - All required variables present
1 - Missing required variables
2 - Missing optional variables (only in --strict mode)
REQUIRED VARIABLES:
Project Configuration:
COMPOSE_PROJECT_NAME - Project name for containers/images
NETWORK_NAME - Docker network name
Repository Configuration:
ACORE_REPO_STANDARD - Standard AzerothCore repository URL
ACORE_BRANCH_STANDARD - Standard AzerothCore branch name
ACORE_REPO_PLAYERBOTS - Playerbots repository URL
ACORE_BRANCH_PLAYERBOTS - Playerbots branch name
Storage Paths:
STORAGE_PATH - Main storage path
STORAGE_PATH_LOCAL - Local storage path
Database Configuration:
MYSQL_ROOT_PASSWORD - MySQL root password
MYSQL_USER - MySQL user (typically root)
MYSQL_PORT - MySQL port (typically 3306)
MYSQL_HOST - MySQL hostname
DB_AUTH_NAME - Auth database name
DB_WORLD_NAME - World database name
DB_CHARACTERS_NAME - Characters database name
DB_PLAYERBOTS_NAME - Playerbots database name
Container Configuration:
CONTAINER_MYSQL - MySQL container name
CONTAINER_USER - Container user (format: uid:gid)
OPTIONAL VARIABLES (checked with --strict):
MySQL Performance:
MYSQL_INNODB_BUFFER_POOL_SIZE - InnoDB buffer pool size
MYSQL_INNODB_LOG_FILE_SIZE - InnoDB log file size
MYSQL_INNODB_REDO_LOG_CAPACITY - InnoDB redo log capacity
Database Connection:
DB_RECONNECT_SECONDS - Database reconnection delay
DB_RECONNECT_ATTEMPTS - Database reconnection attempts
Build Configuration:
MODULES_REBUILD_SOURCE_PATH - Path to source for module builds
Backup Configuration:
BACKUP_PATH - Backup storage path
BACKUP_RETENTION_DAYS - Daily backup retention
BACKUP_RETENTION_HOURS - Hourly backup retention
Image Configuration:
AC_AUTHSERVER_IMAGE - Auth server Docker image
AC_WORLDSERVER_IMAGE - World server Docker image
AC_DB_IMPORT_IMAGE - Database import Docker image
EXAMPLES:
$0 # Basic validation
$0 --strict # Strict validation (check optional vars)
$0 --quiet # Only show errors
EOF
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}" >&2
exit 1
;;
esac
done
log_info() {
$QUIET_MODE || echo -e "${BLUE} $*${NC}"
}
log_success() {
$QUIET_MODE || echo -e "${GREEN}$*${NC}"
}
log_warning() {
echo -e "${YELLOW}⚠️ $*${NC}" >&2
}
log_error() {
echo -e "${RED}$*${NC}" >&2
}
# Load environment
load_env() {
local file="$1"
if [[ ! -f "$file" ]]; then
return 1
fi
set -a
# shellcheck disable=SC1090
source "$file" 2>/dev/null || return 1
set +a
return 0
}
# Check if variable is set and non-empty
check_var() {
local var_name="$1"
local var_value="${!var_name:-}"
if [[ -z "$var_value" ]]; then
return 1
fi
return 0
}
# Validate required variables
validate_required() {
local missing=()
local required_vars=(
# Project Configuration
"COMPOSE_PROJECT_NAME"
"NETWORK_NAME"
# Repository Configuration
"ACORE_REPO_STANDARD"
"ACORE_BRANCH_STANDARD"
"ACORE_REPO_PLAYERBOTS"
"ACORE_BRANCH_PLAYERBOTS"
# Storage Paths
"STORAGE_PATH"
"STORAGE_PATH_LOCAL"
# Database Configuration
"MYSQL_ROOT_PASSWORD"
"MYSQL_USER"
"MYSQL_PORT"
"MYSQL_HOST"
"DB_AUTH_NAME"
"DB_WORLD_NAME"
"DB_CHARACTERS_NAME"
"DB_PLAYERBOTS_NAME"
# Container Configuration
"CONTAINER_MYSQL"
"CONTAINER_USER"
)
log_info "Checking required variables..."
for var in "${required_vars[@]}"; do
if check_var "$var"; then
case "$var" in
*PASSWORD*|*SECRET*) log_success "$var=********" ;; # Avoid echoing credentials
*) log_success "$var=${!var}" ;;
esac
else
log_error "$var is not set"
missing+=("$var")
fi
done
if [[ ${#missing[@]} -gt 0 ]]; then
log_error "Missing required variables: ${missing[*]}"
return 1
fi
log_success "All required variables are set"
return 0
}
# Validate optional variables (strict mode)
validate_optional() {
local missing=()
local optional_vars=(
# MySQL Performance Tuning
"MYSQL_INNODB_BUFFER_POOL_SIZE"
"MYSQL_INNODB_LOG_FILE_SIZE"
"MYSQL_INNODB_REDO_LOG_CAPACITY"
# Database Connection Settings
"DB_RECONNECT_SECONDS"
"DB_RECONNECT_ATTEMPTS"
# Build Configuration
"MODULES_REBUILD_SOURCE_PATH"
# Backup Configuration
"BACKUP_PATH"
"BACKUP_RETENTION_DAYS"
"BACKUP_RETENTION_HOURS"
# Image Configuration
"AC_AUTHSERVER_IMAGE"
"AC_WORLDSERVER_IMAGE"
"AC_DB_IMPORT_IMAGE"
)
log_info "Checking optional variables..."
for var in "${optional_vars[@]}"; do
if check_var "$var"; then
log_success "$var is set"
else
log_warning "$var is not set (using default)"
missing+=("$var")
fi
done
if [[ ${#missing[@]} -gt 0 ]]; then
log_warning "Optional variables not set: ${missing[*]}"
return 2
fi
log_success "All optional variables are set"
return 0
}
# Main validation
main() {
log_info "Validating environment configuration..."
echo ""
# Check if .env exists
if [[ ! -f "$ENV_FILE" ]]; then
log_error ".env file not found at $ENV_FILE"
log_info "Copy .env.template to .env and configure it:"
log_info " cp $TEMPLATE_FILE $ENV_FILE"
exit 1
fi
# Load environment
if ! load_env "$ENV_FILE"; then
log_error "Failed to load $ENV_FILE"
exit 1
fi
log_success "Loaded environment from $ENV_FILE"
echo ""
# Validate required variables
if ! validate_required; then
EXIT_CODE=1
fi
echo ""
# Validate optional variables if strict mode
if $STRICT_MODE; then
if ! validate_optional; then
[[ $EXIT_CODE -eq 0 ]] && EXIT_CODE=2
fi
echo ""
fi
# Final summary
if [[ $EXIT_CODE -eq 0 ]]; then
log_success "Environment validation passed ✨"
elif [[ $EXIT_CODE -eq 1 ]]; then
log_error "Environment validation failed (missing required variables)"
elif [[ $EXIT_CODE -eq 2 ]]; then
log_warning "Environment validation passed with warnings (missing optional variables)"
fi
exit $EXIT_CODE
}
main "$@"


@@ -98,12 +98,23 @@ read_env_value(){
if [ -f "$env_path" ]; then
value="$(grep -E "^${key}=" "$env_path" | tail -n1 | cut -d'=' -f2- | tr -d '\r')"
fi
# Fallback to template defaults if not set in the chosen env file
if [ -z "$value" ] && [ -f "$TEMPLATE_FILE" ]; then
value="$(grep -E "^${key}=" "$TEMPLATE_FILE" | tail -n1 | cut -d'=' -f2- | tr -d '\r')"
fi
if [ -z "$value" ]; then
value="$default"
fi
echo "$value"
}
MYSQL_EXTERNAL_PORT="$(read_env_value MYSQL_EXTERNAL_PORT 64306)"
AUTH_EXTERNAL_PORT="$(read_env_value AUTH_EXTERNAL_PORT 3784)"
WORLD_EXTERNAL_PORT="$(read_env_value WORLD_EXTERNAL_PORT 8215)"
SOAP_EXTERNAL_PORT="$(read_env_value SOAP_EXTERNAL_PORT 7778)"
PMA_EXTERNAL_PORT="$(read_env_value PMA_EXTERNAL_PORT 8081)"
KEIRA3_EXTERNAL_PORT="$(read_env_value KEIRA3_EXTERNAL_PORT 4201)"
handle_auto_rebuild(){
local storage_path
storage_path="$(read_env_value STORAGE_PATH_LOCAL "./local-storage")"
@@ -171,7 +182,7 @@ health_checks(){
check_health ac-worldserver || ((failures++))
if [ "$QUICK" = false ]; then
info "Port checks"
for port in 64306 3784 8215 7778 8081 4201; do
for port in "$MYSQL_EXTERNAL_PORT" "$AUTH_EXTERNAL_PORT" "$WORLD_EXTERNAL_PORT" "$SOAP_EXTERNAL_PORT" "$PMA_EXTERNAL_PORT" "$KEIRA3_EXTERNAL_PORT"; do
if timeout 3 bash -c "</dev/tcp/127.0.0.1/$port" 2>/dev/null; then ok "port $port: open"; else warn "port $port: closed"; fi
done
fi
@@ -190,7 +201,7 @@ main(){
fi
health_checks
handle_auto_rebuild
info "Endpoints: MySQL:64306, Auth:3784, World:8215, SOAP:7778, phpMyAdmin:8081, Keira3:4201"
info "Endpoints: MySQL:${MYSQL_EXTERNAL_PORT}, Auth:${AUTH_EXTERNAL_PORT}, World:${WORLD_EXTERNAL_PORT}, SOAP:${SOAP_EXTERNAL_PORT}, phpMyAdmin:${PMA_EXTERNAL_PORT}, Keira3:${KEIRA3_EXTERNAL_PORT}"
}
main "$@"


@@ -1,6 +1,6 @@
module acore-compose/statusdash
module azerothcore-realmmaster/statusdash
go 1.22.2
go 1.22
require (
github.com/gizak/termui/v3 v3.1.0 // indirect


@@ -4,6 +4,8 @@ import (
"encoding/json"
"fmt"
"log"
"net"
"os"
"os/exec"
"strings"
"time"
@@ -61,17 +63,114 @@ type Module struct {
Type string `json:"type"`
}
type BuildInfo struct {
Variant string `json:"variant"`
Repo string `json:"repo"`
Branch string `json:"branch"`
Image string `json:"image"`
Commit string `json:"commit"`
CommitDate string `json:"commit_date"`
CommitSource string `json:"commit_source"`
SourcePath string `json:"source_path"`
}
type Snapshot struct {
Timestamp string `json:"timestamp"`
Project string `json:"project"`
Network string `json:"network"`
Services []Service `json:"services"`
Ports []Port `json:"ports"`
Modules []Module `json:"modules"`
Storage map[string]DirInfo `json:"storage"`
Volumes map[string]VolumeInfo `json:"volumes"`
Users UserStats `json:"users"`
Stats map[string]ContainerStats `json:"stats"`
Timestamp string `json:"timestamp"`
Project string `json:"project"`
Network string `json:"network"`
Services []Service `json:"services"`
Ports []Port `json:"ports"`
Modules []Module `json:"modules"`
Storage map[string]DirInfo `json:"storage"`
Volumes map[string]VolumeInfo `json:"volumes"`
Users UserStats `json:"users"`
Stats map[string]ContainerStats `json:"stats"`
Build BuildInfo `json:"build"`
}
var persistentServiceOrder = []string{
"ac-mysql",
"ac-db-guard",
"ac-authserver",
"ac-worldserver",
"ac-phpmyadmin",
"ac-keira3",
"ac-backup",
}
func humanDuration(d time.Duration) string {
if d < time.Minute {
return "<1m"
}
days := d / (24 * time.Hour)
d -= days * 24 * time.Hour
hours := d / time.Hour
d -= hours * time.Hour
mins := d / time.Minute
switch {
case days > 0:
return fmt.Sprintf("%dd %dh", days, hours)
case hours > 0:
return fmt.Sprintf("%dh %dm", hours, mins)
default:
return fmt.Sprintf("%dm", mins)
}
}
func formatUptime(startedAt string) string {
if startedAt == "" {
return "-"
}
parsed, err := time.Parse(time.RFC3339Nano, startedAt)
if err != nil {
parsed, err = time.Parse(time.RFC3339, startedAt)
if err != nil {
return "-"
}
}
if parsed.IsZero() {
return "-"
}
uptime := time.Since(parsed)
if uptime < 0 {
uptime = 0
}
return humanDuration(uptime)
}
func primaryIPv4() string {
ifaces, err := net.Interfaces()
if err != nil {
return ""
}
for _, iface := range ifaces {
if iface.Flags&net.FlagUp == 0 || iface.Flags&net.FlagLoopback != 0 {
continue
}
addrs, err := iface.Addrs()
if err != nil {
continue
}
for _, addr := range addrs {
var ip net.IP
switch v := addr.(type) {
case *net.IPNet:
ip = v.IP
case *net.IPAddr:
ip = v.IP
}
if ip == nil || ip.IsLoopback() {
continue
}
ip = ip.To4()
if ip == nil {
continue
}
return ip.String()
}
}
return ""
}
func runSnapshot() (*Snapshot, error) {
@@ -87,27 +186,76 @@ func runSnapshot() (*Snapshot, error) {
return snap, nil
}
func buildServicesTable(s *Snapshot) *TableNoCol {
table := NewTableNoCol()
rows := [][]string{{"Service", "Status", "Health", "CPU%", "Memory"}}
for _, svc := range s.Services {
cpu := "-"
mem := "-"
if stats, ok := s.Stats[svc.Name]; ok {
cpu = fmt.Sprintf("%.1f", stats.CPU)
mem = strings.Split(stats.Memory, " / ")[0] // Just show used, not total
}
// Combine health with exit code for stopped containers
health := svc.Health
if svc.Status != "running" && svc.ExitCode != "0" && svc.ExitCode != "" {
health = fmt.Sprintf("%s (%s)", svc.Health, svc.ExitCode)
}
rows = append(rows, []string{svc.Label, svc.Status, health, cpu, mem})
func partitionServices(all []Service) ([]Service, []Service) {
byName := make(map[string]Service)
for _, svc := range all {
byName[svc.Name] = svc
}
seen := make(map[string]bool)
persistent := make([]Service, 0, len(persistentServiceOrder))
for _, name := range persistentServiceOrder {
if svc, ok := byName[name]; ok {
persistent = append(persistent, svc)
seen[name] = true
}
}
setups := make([]Service, 0, len(all))
for _, svc := range all {
if seen[svc.Name] {
continue
}
setups = append(setups, svc)
}
return persistent, setups
}
func buildServicesTable(s *Snapshot) *TableNoCol {
runningServices, setupServices := partitionServices(s.Services)
table := NewTableNoCol()
rows := [][]string{{"Service", "Status", "Health", "Uptime", "CPU%", "Memory"}}
appendRows := func(services []Service) {
for _, svc := range services {
cpu := "-"
mem := "-"
if svcStats, ok := s.Stats[svc.Name]; ok {
cpu = fmt.Sprintf("%.1f", svcStats.CPU)
mem = strings.Split(svcStats.Memory, " / ")[0] // Just show used, not total
}
health := svc.Health
if svc.Status != "running" && svc.ExitCode != "0" && svc.ExitCode != "" {
health = fmt.Sprintf("%s (%s)", svc.Health, svc.ExitCode)
}
rows = append(rows, []string{svc.Label, svc.Status, health, formatUptime(svc.StartedAt), cpu, mem})
}
}
appendRows(runningServices)
appendRows(setupServices)
table.Rows = rows
table.RowSeparator = false
table.Border = true
table.Title = "Services"
for i := 1; i < len(table.Rows); i++ {
if table.RowStyles == nil {
table.RowStyles = make(map[int]ui.Style)
}
state := strings.ToLower(table.Rows[i][2])
switch state {
case "running", "healthy":
table.RowStyles[i] = ui.NewStyle(ui.ColorGreen)
case "restarting", "unhealthy":
table.RowStyles[i] = ui.NewStyle(ui.ColorRed)
case "exited":
table.RowStyles[i] = ui.NewStyle(ui.ColorYellow)
default:
table.RowStyles[i] = ui.NewStyle(ui.ColorWhite)
}
}
return table
}
@@ -115,9 +263,9 @@ func buildPortsTable(s *Snapshot) *TableNoCol {
table := NewTableNoCol()
rows := [][]string{{"Port", "Number", "Reachable"}}
for _, p := range s.Ports {
state := "down"
state := "Closed"
if p.Reachable {
state = "up"
state = "Open"
}
rows = append(rows, []string{p.Name, p.Port, state})
}
@@ -145,7 +293,6 @@ func buildModulesList(s *Snapshot) *widgets.List {
func buildStorageParagraph(s *Snapshot) *widgets.Paragraph {
var b strings.Builder
fmt.Fprintf(&b, "STORAGE:\n")
entries := []struct {
Key string
Label string
@@ -161,23 +308,20 @@ func buildStorageParagraph(s *Snapshot) *widgets.Paragraph {
if !ok {
continue
}
mark := "○"
if info.Exists {
mark = "●"
}
fmt.Fprintf(&b, " %-15s %s %s (%s)\n", item.Label, mark, info.Path, info.Size)
fmt.Fprintf(&b, " %-15s %s (%s)\n", item.Label, info.Path, info.Size)
}
par := widgets.NewParagraph()
par.Title = "Storage"
par.Text = b.String()
par.Text = strings.TrimRight(b.String(), "\n")
par.Border = true
par.BorderStyle = ui.NewStyle(ui.ColorYellow)
par.PaddingLeft = 0
par.PaddingRight = 0
return par
}
func buildVolumesParagraph(s *Snapshot) *widgets.Paragraph {
var b strings.Builder
fmt.Fprintf(&b, "VOLUMES:\n")
entries := []struct {
Key string
Label string
@@ -190,47 +334,89 @@ func buildVolumesParagraph(s *Snapshot) *widgets.Paragraph {
if !ok {
continue
}
mark := "○"
if info.Exists {
mark = "●"
}
fmt.Fprintf(&b, " %-13s %s %s\n", item.Label, mark, info.Mountpoint)
fmt.Fprintf(&b, " %-13s %s\n", item.Label, info.Mountpoint)
}
par := widgets.NewParagraph()
par.Title = "Volumes"
par.Text = b.String()
par.Text = strings.TrimRight(b.String(), "\n")
par.Border = true
par.BorderStyle = ui.NewStyle(ui.ColorYellow)
par.PaddingLeft = 0
par.PaddingRight = 0
return par
}
func simplifyRepo(repo string) string {
repo = strings.TrimSpace(repo)
repo = strings.TrimSuffix(repo, ".git")
repo = strings.TrimPrefix(repo, "https://")
repo = strings.TrimPrefix(repo, "http://")
repo = strings.TrimPrefix(repo, "git@")
repo = strings.TrimPrefix(repo, "github.com:")
repo = strings.TrimPrefix(repo, "gitlab.com:")
repo = strings.TrimPrefix(repo, "github.com/")
repo = strings.TrimPrefix(repo, "gitlab.com/")
return repo
}
func buildInfoParagraph(s *Snapshot) *widgets.Paragraph {
build := s.Build
var lines []string
if build.Branch != "" {
lines = append(lines, fmt.Sprintf("Branch: %s", build.Branch))
}
if repo := simplifyRepo(build.Repo); repo != "" {
lines = append(lines, fmt.Sprintf("Repo: %s", repo))
}
commitLine := "Git: unknown"
if build.Commit != "" {
commitLine = fmt.Sprintf("Git: %s", build.Commit)
switch build.CommitSource {
case "image-label":
commitLine += " [image]"
case "source-tree":
commitLine += " [source]"
}
}
lines = append(lines, commitLine)
// Image line intentionally omitted to keep the header compact.
lines = append(lines, fmt.Sprintf("Updated: %s", s.Timestamp))
par := widgets.NewParagraph()
par.Title = "Build"
par.Text = strings.Join(lines, "\n")
par.Border = true
par.BorderStyle = ui.NewStyle(ui.ColorYellow)
return par
}
func renderSnapshot(s *Snapshot, selectedModule int) (*widgets.List, *ui.Grid) {
servicesTable := buildServicesTable(s)
for i := 1; i < len(servicesTable.Rows); i++ {
if servicesTable.RowStyles == nil {
servicesTable.RowStyles = make(map[int]ui.Style)
}
state := strings.ToLower(servicesTable.Rows[i][1])
switch state {
case "running", "healthy":
servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorGreen)
case "restarting", "unhealthy":
servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorRed)
case "exited":
servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorYellow)
default:
servicesTable.RowStyles[i] = ui.NewStyle(ui.ColorWhite)
}
hostname, err := os.Hostname()
if err != nil || hostname == "" {
hostname = "unknown"
}
ip := primaryIPv4()
if ip == "" {
ip = "unknown"
}
servicesTable := buildServicesTable(s)
portsTable := buildPortsTable(s)
for i := 1; i < len(portsTable.Rows); i++ {
if portsTable.RowStyles == nil {
portsTable.RowStyles = make(map[int]ui.Style)
}
if portsTable.Rows[i][2] == "up" {
if portsTable.Rows[i][2] == "Open" {
portsTable.RowStyles[i] = ui.NewStyle(ui.ColorGreen)
} else {
portsTable.RowStyles[i] = ui.NewStyle(ui.ColorRed)
portsTable.RowStyles[i] = ui.NewStyle(ui.ColorYellow)
}
}
modulesList := buildModulesList(s)
@@ -247,50 +433,88 @@ func renderSnapshot(s *Snapshot, selectedModule int) (*widgets.List, *ui.Grid) {
moduleInfoPar.Title = "Module Info"
if selectedModule >= 0 && selectedModule < len(s.Modules) {
mod := s.Modules[selectedModule]
moduleInfoPar.Text = fmt.Sprintf("%s\n\nCategory: %s\nType: %s", mod.Description, mod.Category, mod.Type)
moduleInfoPar.Text = fmt.Sprintf("%s\nCategory: %s\nType: %s", mod.Description, mod.Category, mod.Type)
} else {
moduleInfoPar.Text = "Select a module to view info"
}
moduleInfoPar.Border = true
moduleInfoPar.BorderStyle = ui.NewStyle(ui.ColorMagenta)
storagePar := buildStorageParagraph(s)
storagePar.Border = true
storagePar.BorderStyle = ui.NewStyle(ui.ColorYellow)
storagePar.PaddingLeft = 1
storagePar.PaddingRight = 1
volumesPar := buildVolumesParagraph(s)
header := widgets.NewParagraph()
header.Text = fmt.Sprintf("Project: %s\nNetwork: %s\nUpdated: %s", s.Project, s.Network, s.Timestamp)
header.Text = fmt.Sprintf("Host: %s\nIP: %s\nProject: %s\nNetwork: %s", hostname, ip, s.Project, s.Network)
header.Border = true
buildPar := buildInfoParagraph(s)
usersPar := widgets.NewParagraph()
usersPar.Text = fmt.Sprintf("USERS:\n Accounts: %d\n Online: %d\n Characters: %d\n Active 7d: %d", s.Users.Accounts, s.Users.Online, s.Users.Characters, s.Users.Active7d)
usersPar.Title = "Users"
usersPar.Text = fmt.Sprintf(" Online: %d\n Accounts: %d\n Characters: %d\n Active 7d: %d", s.Users.Online, s.Users.Accounts, s.Users.Characters, s.Users.Active7d)
usersPar.Border = true
const headerRowFrac = 0.18
const middleRowFrac = 0.43
const bottomRowFrac = 0.39
// Derive inner row ratios from the computed bottom row height so that
// internal containers tile their parent with the same spacing behavior
// as top-level rows.
grid := ui.NewGrid()
termWidth, termHeight := ui.TerminalDimensions()
headerHeight := int(float64(termHeight) * headerRowFrac)
middleHeight := int(float64(termHeight) * middleRowFrac)
bottomHeight := termHeight - headerHeight - middleHeight
if bottomHeight <= 0 {
bottomHeight = int(float64(termHeight) * bottomRowFrac)
}
helpHeight := int(float64(bottomHeight) * 0.32)
if helpHeight < 1 {
helpHeight = 1
}
moduleInfoHeight := bottomHeight - helpHeight
if moduleInfoHeight < 1 {
moduleInfoHeight = 1
}
storageHeight := int(float64(bottomHeight) * 0.513)
if storageHeight < 1 {
storageHeight = 1
}
volumesHeight := bottomHeight - storageHeight
if volumesHeight < 1 {
volumesHeight = 1
}
helpRatio := float64(helpHeight) / float64(bottomHeight)
moduleInfoRatio := float64(moduleInfoHeight) / float64(bottomHeight)
storageRatio := float64(storageHeight) / float64(bottomHeight)
volumesRatio := float64(volumesHeight) / float64(bottomHeight)
grid.SetRect(0, 0, termWidth, termHeight)
grid.Set(
ui.NewRow(0.18,
ui.NewCol(0.6, header),
ui.NewCol(0.4, usersPar),
ui.NewRow(headerRowFrac,
ui.NewCol(0.34, header),
ui.NewCol(0.33, buildPar),
ui.NewCol(0.33, usersPar),
),
ui.NewRow(0.42,
ui.NewRow(middleRowFrac,
ui.NewCol(0.6, servicesTable),
ui.NewCol(0.4, portsTable),
),
ui.NewRow(0.40,
ui.NewRow(bottomRowFrac,
ui.NewCol(0.25, modulesList),
ui.NewCol(0.15,
ui.NewRow(0.30, helpPar),
ui.NewRow(0.70, moduleInfoPar),
ui.NewRow(helpRatio, helpPar),
ui.NewRow(moduleInfoRatio, moduleInfoPar),
),
ui.NewCol(0.6,
ui.NewRow(0.55,
ui.NewRow(storageRatio,
ui.NewCol(1.0, storagePar),
),
ui.NewRow(0.45,
ui.NewRow(volumesRatio,
ui.NewCol(1.0, volumesPar),
),
),


@@ -41,9 +41,68 @@ Reads patch definitions from module metadata.
## Module-Specific Hooks
Module-specific hooks are named after their primary module:
- `mod-ale-patches` - Apply mod-ale compatibility fixes
- `black-market-setup` - Black Market specific setup
Module-specific hooks are named after their primary module and handle unique setup requirements.
### `mod-ale-patches`
Applies compatibility patches for mod-ale (ALE - AzerothCore Lua Engine, formerly Eluna) when building with the AzerothCore playerbots fork.
**Auto-Detection:**
The hook automatically detects if you're building with the playerbots fork by checking:
1. `STACK_SOURCE_VARIANT=playerbots` environment variable
2. `MODULES_REBUILD_SOURCE_PATH` contains "azerothcore-playerbots"
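The detection order above can be condensed into a small shell sketch (the variable names are the real ones from the hook; the function itself is illustrative, not the hook's actual code):

```shell
#!/bin/sh
# Illustrative condensation of the two detection methods described above.
# STACK_SOURCE_VARIANT takes priority; the rebuild source path is the fallback.
detect_playerbots_fork() {
  if [ "${STACK_SOURCE_VARIANT:-}" = "playerbots" ]; then
    echo 1
  elif printf '%s' "${MODULES_REBUILD_SOURCE_PATH:-}" | grep -q "azerothcore-playerbots"; then
    echo 1
  else
    echo 0
  fi
}

# Example: detection via the rebuild source path alone
STACK_SOURCE_VARIANT=""
MODULES_REBUILD_SOURCE_PATH="/build/azerothcore-playerbots"
detect_playerbots_fork   # prints: 1
```

Either signal alone is sufficient; the hook only needs one match to enable the playerbots-specific patches.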
**Patches Applied:**
#### SendTrainerList Compatibility Fix
**When Applied:** Automatically for playerbots fork (or when `APPLY_SENDTRAINERLIST_PATCH=1`)
**What it fixes:** Adds missing `GetGUID()` call to fix trainer list display
**File:** `src/LuaEngine/methods/PlayerMethods.h`
**Change:**
```cpp
// Before (broken)
player->GetSession()->SendTrainerList(obj);
// After (fixed)
player->GetSession()->SendTrainerList(obj->GetGUID());
```
#### MovePath Compatibility Fix
**When Applied:** Only when explicitly enabled with `APPLY_MOVEPATH_PATCH=1` (disabled by default)
**What it fixes:** Updates deprecated waypoint movement API
**File:** `src/LuaEngine/methods/CreatureMethods.h`
**Change:**
```cpp
// Before (deprecated)
MoveWaypoint(creature->GetWaypointPath(), true);
// After (updated API)
MovePath(creature->GetWaypointPath(), FORCED_MOVEMENT_RUN);
```
**Note:** Currently disabled by default as testing shows it's not required for normal operation.
**Feature Flags:**
```bash
# Automatically set for playerbots fork
APPLY_SENDTRAINERLIST_PATCH=1
# Disabled by default - enable if needed
APPLY_MOVEPATH_PATCH=0
```
**Debug Output:**
The hook provides detailed debug information during builds:
```
🔧 mod-ale-patches: Applying playerbots fork compatibility fixes to mod-ale
✅ Playerbots detected via MODULES_REBUILD_SOURCE_PATH
✅ Applied SendTrainerList compatibility fix
✅ Applied 1 compatibility patch(es)
```
**Why This Exists:**
The playerbots fork has slightly different API signatures in certain WorldSession methods. These patches ensure mod-ale (Eluna) compiles and functions correctly with both standard AzerothCore and the playerbots fork.
### `black-market-setup`
Handles Black Market-specific setup tasks.
## Usage in Manifest


@@ -1,5 +1,6 @@
#!/bin/bash
# Module-specific hook for mod-ale compatibility patches
# NOTE: These patches are primarily needed for the AzerothCore playerbots fork
set -e
# Hook environment
@@ -7,12 +8,42 @@ MODULE_KEY="${MODULE_KEY:-}"
MODULE_DIR="${MODULE_DIR:-}"
MODULE_NAME="${MODULE_NAME:-}"
# Detect if we're building with playerbots fork
IS_PLAYERBOTS_FORK=0
# Method 1: Check STACK_SOURCE_VARIANT environment variable
if [ "${STACK_SOURCE_VARIANT:-}" = "playerbots" ]; then
IS_PLAYERBOTS_FORK=1
echo " ✅ Playerbots detected via STACK_SOURCE_VARIANT"
# Method 2: Check MODULES_REBUILD_SOURCE_PATH
elif [ -n "${MODULES_REBUILD_SOURCE_PATH:-}" ] && echo "${MODULES_REBUILD_SOURCE_PATH}" | grep -q "azerothcore-playerbots"; then
IS_PLAYERBOTS_FORK=1
echo " ✅ Playerbots detected via MODULES_REBUILD_SOURCE_PATH"
else
echo " ❌ Playerbots fork not detected"
echo " 🔍 Debug: STACK_SOURCE_VARIANT='${STACK_SOURCE_VARIANT:-}'"
echo " 🔍 Debug: MODULES_REBUILD_SOURCE_PATH='${MODULES_REBUILD_SOURCE_PATH:-}'"
fi
# Feature flags (set to 0 to disable specific patches)
APPLY_MOVEPATH_PATCH="${APPLY_MOVEPATH_PATCH:-0}" # Disabled by default - appears unnecessary
# SendTrainerList patch: auto-detect based on fork, but allow an explicit override
APPLY_SENDTRAINERLIST_PATCH="${APPLY_SENDTRAINERLIST_PATCH:-$IS_PLAYERBOTS_FORK}" # Only needed for playerbots fork
if [ -z "$MODULE_DIR" ] || [ ! -d "$MODULE_DIR" ]; then
echo "❌ mod-ale-patches: Invalid module directory: $MODULE_DIR"
exit 2
fi
echo "🔧 mod-ale-patches: Applying compatibility fixes to $MODULE_NAME"
if [ "$IS_PLAYERBOTS_FORK" = "1" ]; then
echo "🔧 mod-ale-patches: Applying playerbots fork compatibility fixes to $MODULE_NAME"
else
echo "🔧 mod-ale-patches: Checking compatibility fixes for $MODULE_NAME"
fi
# Apply MovePath compatibility patch
apply_movepath_patch() {
@@ -37,10 +68,42 @@ apply_movepath_patch() {
fi
}
# Apply SendTrainerList compatibility patch
apply_sendtrainerlist_patch() {
local target_file="$MODULE_DIR/src/LuaEngine/methods/PlayerMethods.h"
if [ ! -f "$target_file" ]; then
echo " ⚠️ SendTrainerList patch target file missing: $target_file"
return 1
fi
# Check if the buggy code exists (without GetGUID())
if grep -q 'player->GetSession()->SendTrainerList(obj);' "$target_file"; then
# Apply the fix by adding ->GetGUID()
if sed -i 's/player->GetSession()->SendTrainerList(obj);/player->GetSession()->SendTrainerList(obj->GetGUID());/' "$target_file"; then
echo " ✅ Applied SendTrainerList compatibility fix"
return 0
else
echo " ❌ Failed to apply SendTrainerList compatibility fix"
return 2
fi
else
echo " ✅ SendTrainerList compatibility fix already present"
return 0
fi
}
# Apply all patches
patch_count=0
if apply_movepath_patch; then
patch_count=$((patch_count + 1))
if [ "$APPLY_MOVEPATH_PATCH" = "1" ]; then
if apply_movepath_patch; then
patch_count=$((patch_count + 1))
fi
fi
if [ "$APPLY_SENDTRAINERLIST_PATCH" = "1" ]; then
if apply_sendtrainerlist_patch; then
patch_count=$((patch_count + 1))
fi
fi
if [ $patch_count -eq 0 ]; then


@@ -371,12 +371,7 @@ def build_state(env_path: Path, manifest_path: Path) -> ModuleCollectionState:
for unknown_key in extra_env_modules:
warnings.append(f".env defines {unknown_key} but it is missing from the manifest")
# Warn if manifest entry lacks .env toggle
for module in modules:
if module.key not in env_map and module.key not in os.environ:
warnings.append(
f"Manifest includes {module.key} but .env does not define it (defaulting to 0)"
)
# Skip warnings for missing modules - they default to disabled (0) as intended
return ModuleCollectionState(
manifest_path=manifest_path,
@@ -588,14 +583,16 @@ def handle_generate(args: argparse.Namespace) -> int:
write_outputs(state, output_dir)
if state.warnings:
warning_block = "\n".join(f"- {warning}" for warning in state.warnings)
module_keys_with_warnings = sorted(
{warning.split()[0].strip(":,") for warning in state.warnings if warning.startswith("MODULE_")}
)
warning_lines = []
if module_keys_with_warnings:
warning_lines.append(f"- Modules with warnings: {', '.join(module_keys_with_warnings)}")
warning_lines.extend(f"- {warning}" for warning in state.warnings)
warning_block = textwrap.indent("\n".join(warning_lines), " ")
print(
textwrap.dedent(
f"""\
⚠️ Module manifest warnings detected:
{warning_block}
"""
),
f"⚠️ Module manifest warnings detected:\n{warning_block}\n",
file=sys.stderr,
)
if state.errors:


@@ -50,6 +50,9 @@ def clean(value: str) -> str:
def cmd_keys(manifest_path: str) -> None:
manifest = load_manifest(manifest_path)
for entry in iter_modules(manifest):
# Skip blocked modules
if entry.get("status") == "blocked":
continue
print(entry["key"])
@@ -96,7 +99,7 @@ def cmd_metadata(manifest_path: str) -> None:
def cmd_sorted_keys(manifest_path: str) -> None:
manifest = load_manifest(manifest_path)
modules = list(iter_modules(manifest))
modules = [entry for entry in iter_modules(manifest) if entry.get("status") != "blocked"]
modules.sort(
key=lambda item: (
# Primary sort by order (default to 5000 if not specified)


@@ -28,8 +28,9 @@ def normalize_modules(raw_modules: Iterable[str], profile: Path) -> List[str]:
if not value:
continue
modules.append(value)
if not modules:
raise ValueError(f"Profile {profile.name}: modules list cannot be empty")
# Allow empty modules list for vanilla/minimal profiles
if not modules and "vanilla" not in profile.stem.lower() and "minimal" not in profile.stem.lower():
raise ValueError(f"Profile {profile.name}: modules list cannot be empty (except for vanilla/minimal profiles)")
return modules
@@ -79,7 +80,7 @@ def cmd_list(directory: Path) -> int:
profiles.sort(key=lambda item: item[4])
for name, modules, label, description, order in profiles:
modules_csv = ",".join(modules)
modules_csv = ",".join(modules) if modules else "-"
print("\t".join([name, modules_csv, label, description, str(order)]))
return 0


@@ -18,6 +18,7 @@ import re
import sys
import time
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, Iterable, List, Optional, Sequence
from urllib import error, parse, request
@@ -45,7 +46,7 @@ CATEGORY_BY_TYPE = {
"data": "data",
"cpp": "uncategorized",
}
USER_AGENT = "acore-compose-module-manifest"
USER_AGENT = "azerothcore-realmmaster-module-manifest"
def parse_args(argv: Sequence[str]) -> argparse.Namespace:
@@ -87,6 +88,16 @@ def parse_args(argv: Sequence[str]) -> argparse.Namespace:
action="store_true",
help="Print verbose progress information",
)
parser.add_argument(
"--update-template",
default=".env.template",
help="Path to the .env.template to update with missing module variables (default: %(default)s)",
)
parser.add_argument(
"--skip-template",
action="store_true",
help="Skip updating .env.template",
)
return parser.parse_args(argv)
@@ -273,6 +284,117 @@ def collect_repositories(
return list(seen.values())
def update_env_template(manifest_path: str, template_path: str) -> bool:
"""Update .env.template with module variables for active modules only.
Args:
manifest_path: Path to the module manifest JSON file
template_path: Path to .env.template file
Returns:
True if template was updated, False if no changes needed
"""
# Load manifest to get all module keys
manifest = load_manifest(manifest_path)
modules = manifest.get("modules", [])
if not modules:
return False
# Extract only active module keys
active_module_keys = set()
disabled_module_keys = set()
for module in modules:
key = module.get("key")
status = module.get("status", "active")
if key:
if status == "active":
active_module_keys.add(key)
else:
disabled_module_keys.add(key)
if not active_module_keys and not disabled_module_keys:
return False
# Check if template file exists
template_file = Path(template_path)
if not template_file.exists():
print(f"Warning: .env.template not found at {template_path}")
return False
# Read current template content
try:
current_content = template_file.read_text(encoding="utf-8")
current_lines = current_content.splitlines()
except Exception as exc:
print(f"Error reading .env.template: {exc}")
return False
# Find which module variables are currently in the template
existing_vars = set()
current_module_lines = []
non_module_lines = []
for line in current_lines:
stripped = line.strip()
if "=" in stripped and not stripped.startswith("#"):
var_name = stripped.split("=", 1)[0].strip()
if var_name.startswith("MODULE_"):
existing_vars.add(var_name)
current_module_lines.append((var_name, line))
else:
non_module_lines.append(line)
else:
non_module_lines.append(line)
# Determine what needs to change
missing_vars = active_module_keys - existing_vars
vars_to_remove = disabled_module_keys & existing_vars
vars_to_keep = active_module_keys & existing_vars
changes_made = False
# Report what will be done
if missing_vars:
print(f"📝 Adding {len(missing_vars)} active module variable(s) to .env.template:")
for var in sorted(missing_vars):
print(f" + {var}=0")
changes_made = True
if vars_to_remove:
print(f"🗑️ Removing {len(vars_to_remove)} disabled module variable(s) from .env.template:")
for var in sorted(vars_to_remove):
print(f" - {var}")
changes_made = True
if not changes_made:
print("✅ .env.template is up to date with active modules")
return False
# Build new content: non-module lines + active module lines
new_lines = non_module_lines[:]
# Add existing active module variables (preserve their current values)
for var_name, original_line in current_module_lines:
if var_name in vars_to_keep:
new_lines.append(original_line)
# Add new active module variables
for var in sorted(missing_vars):
new_lines.append(f"{var}=0")
# Write updated content
try:
new_content = "\n".join(new_lines) + "\n"
template_file.write_text(new_content, encoding="utf-8")
print("✅ .env.template updated successfully")
print(f" Active modules: {len(active_module_keys)}")
print(f" Disabled modules removed: {len(vars_to_remove)}")
return True
except Exception as exc:
print(f"Error writing .env.template: {exc}")
return False
def main(argv: Sequence[str]) -> int:
args = parse_args(argv)
topics = args.topics or DEFAULT_TOPICS
@@ -291,6 +413,13 @@ def main(argv: Sequence[str]) -> int:
handle.write("\n")
print(f"Updated manifest {args.manifest}: added {added}, refreshed {updated}")
# Update .env.template if requested (always run to clean up disabled modules)
if not args.skip_template:
template_updated = update_env_template(args.manifest, args.update_template)
if template_updated:
print(f"Updated {args.update_template} with active modules only")
return 0

setup.sh

@@ -3,9 +3,9 @@ set -e
clear
# ==============================================
# azerothcore-rm - Interactive .env generator
# AzerothCore-RealmMaster - Interactive .env generator
# ==============================================
# Mirrors options from scripts/setup-server.sh but targets azerothcore-rm/.env
# Mirrors options from scripts/setup-server.sh but targets .env
# Get script directory for template reading
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
@@ -16,6 +16,12 @@ TEMPLATE_FILE="$SCRIPT_DIR/.env.template"
source "$SCRIPT_DIR/scripts/bash/project_name.sh"
DEFAULT_PROJECT_NAME="$(project_name::resolve "$ENV_FILE" "$TEMPLATE_FILE")"
# ==============================================
# Feature Flags
# ==============================================
# Set to 0 to disable server configuration preset selection
ENABLE_CONFIG_PRESETS="${ENABLE_CONFIG_PRESETS:-0}"
# ==============================================
# Constants (auto-loaded from .env.template)
# ==============================================
@@ -331,25 +337,57 @@ show_wow_header() {
echo -e "${RED}"
cat <<'EOF'
:::. :::::::::.,:::::: :::::::.. ... :::::::::::: :: .: .,-::::: ... :::::::.. .,::::::
;;`;; '`````;;;;;;;'''' ;;;;``;;;; .;;;;;;;.;;;;;;;;'''',;; ;;, ,;;;'````' .;;;;;;;. ;;;;``;;;; ;;;;''''
,[[ '[[, .n[[' [[cccc [[[,/[[[' ,[[ \[[, [[ ,[[[,,,[[[ [[[ ,[[ \[[,[[[,/[[[' [[cccc
c$$$cc$$$c ,$$P" $$"""" $$$$$$c $$$, $$$ $$ "$$$"""$$$ $$$ $$$, $$$$$$$$$c $$""""
888 888,,888bo,_ 888oo,__ 888b "88bo,"888,_ _,88P 88, 888 "88o`88bo,__,o,"888,_ _,88P888b "88bo,888oo,__
YMM ""` `""*UMM """"YUMMMMMMM "W" "YMMMMMP" MMM MMM YMM "YUMMMMMP" "YMMMMMP" MMMM "W" """"\MMM
___ ___ ___ ___ ___ ___ ___
.'`~ ``. .'`~ ``. .'`~ ``. .'`~ ``. .'`~ ``. .'`~ ``. .'`~ ``.
)`_ ._ ( )`_ ._ ( )`_ ._ ( )`_ ._ ( )`_ ._ ( )`_ ._ ( )`_ ._ (
|(_/^\_)| |(_/^\_)| |(_/^\_)| |(_/^\_)| |(_/^\_)| |(_/^\_)| |(_/^\_)|
`-.`''.-' `-.`''.-' `-.`''.-' `-.`''.-' `-.`''.-' `-.`''.-' `-.`''.-'
""" """ """ """ """ """ """
.')'=.'_`.='(`. .')'=.'_`.='(`. .')'=.'_`.='(`. .')'=.'_`.='(`. .')'=.'_`.='(`. .')'=.'_`.='(`. .')'=.'_`.='(`.
:| -.._H_,.- |: :| -.._H_,.- |: :| -.._H_,.- |: :| -.._H_,.- |: :| -.._H_,.- |: :| -.._H_,.- |: :| -.._H_,.- |:
|: -.__H__.- :| |: -.__H__.- :| |: -.__H__.- :| |: -.__H__.- :| |: -.__H__.- :| |: -.__H__.- :| |: -.__H__.- :|
<' `--V--' `> <' `--V--' `> <' `--V--' `> <' `--V--' `> <' `--V--' `> <' `--V--' `> <' `--V--' `>
art: littlebitspace@https://littlebitspace.com/
##
### :*
##### .**#
###### ***##
****###* *****##.
******##- ******###.
.*********###= ********###
************##### #****###:+* ********####
***********+****##########**********##**# ********#####
********=+***********######**********######*#**+*******###+
-+*****=**************#######*******####**#####**##*****####-
++**++****************#########**####***####***#####****####:
:++*******************#*******####*****#****######***##*****#######
*= -++++++******************************###**********###******######
.+***. :++++++++***************************#+*#*-*******************#**+
++*****= =+++=+++***************************+**###**************++*#####*
-++*****+++- -=++++++++*********+++++**###**+++=+*###**+*********##+++*+++##
+++*********+++=-=+++++++++****+++***+++++*####***+++**=**#*==***#####*++***+*+
+++++***********++=-=++++++++*++****=++*++*#######**+=-=+****+*#########***==+*#*
=+++++++*****++++===-++++++++=+++++=++*+=-+#**#**=####****#**+-+**************##*
++++++++++++++======++++++++=====+++++=-+++*+##########*****==*######*****####
+++++++=++++++====++++++++++========---++++*****#######**==***#*******####*
++===++++++++=====+++++++=+++:::--:::.++++++*****####**+=**************#
=+++++=: =+=====-+++++++++++++++++++++==+++--==----:-++++++****####****+=+*+*******:
++++++++++++++++==+++++++++++++++++++++=+=-===-----:+++++++++**+++****####***+++
=++++++++++++++++++++++++++++++++++++=++++======----==+++++++=+************:
:++++++++++++++=+++++++++++++++++++======-------:-====+****************.
=----=+++-==++++++*******++++++++++++++===============****************=
-=---==-=====--+++++++++++++++++++++++++++===+++++++********++#***#++******
+++++========+=====----++++++++++++++++===+++++===--=**********+=++*++********
+++==========-=============-----:-=++=====+++++++++++++++=-=***********+*********
==----=+===+=================+++++++++++++++++++++++++=-********************
.======++++++===============---:::::==++++++++++++++++++++++=**********++*******:
+++==--::-=+++++++++++++========+===--=+- :::=-=++++++++++++++++++++++ +*****++**+***
.-----::::-=++++++++++++++++++==::-----++. :=+++++++++++++++++++*..-+*********=
:=+++++++++++++++++==.:--===-+++++++++++**++++++:::-********
++++++++++++++++++=+++++++++++++**+++++*****==******
.++++++++++++=-:.-+++++++++***++++************+
+++=========:.=+=-::++*****+*************
-++++++++==+: ..::=-. ..::::=********
.+========+==+++==========---::-+*-
++++++++++++=======-======
++++++++++++++======++
-=======++++++:
...
:::. :::::::::.,:::::: :::::::.. ... :::::::::::: :: .: .,-::::: ... :::::::.. .,::::::
;;`;; '`````;;;;;;;'''' ;;;;``;;;; .;;;;;;;.;;;;;;;;'''',;; ;;, ,;;;'````' .;;;;;;;. ;;;;``;;;; ;;;;''''
,[[ '[[, .n[[' [[cccc [[[,/[[[' ,[[ \[[, [[ ,[[[,,,[[[ [[[ ,[[ \[[,[[[,/[[[' [[cccc
c$$$cc$$$c ,$$P" $$"""" $$$$$$c $$$, $$$ $$ "$$$"""$$$ $$$ $$$, $$$$$$$$$c $$""""
888 888,,888bo,_ 888oo,__ 888b "88bo,"888,_ _,88P 88, 888 "88o`88bo,__,o,"888,_ _,88P888b "88bo,888oo,__
YMM ""` `""*UMM """"YUMMMMMMM "W" "YMMMMMP" MMM MMM YMM "YUMMMMMP" "YMMMMMP" MMMM "W" """"\MMM
EOF
echo -e "${NC}"
}
@@ -578,8 +616,6 @@ main(){
local CLI_PLAYERBOT_ENABLED=""
local CLI_PLAYERBOT_MIN=""
local CLI_PLAYERBOT_MAX=""
local CLI_AUTO_REBUILD=0
local CLI_MODULES_SOURCE=""
local FORCE_OVERWRITE=0
local CLI_ENABLE_MODULES_RAW=()
@@ -593,7 +629,7 @@ main(){
Usage: ./setup.sh [options]
Description:
Interactive wizard that generates azerothcore-rm/.env for the
Interactive wizard that generates .env for the
profiles-based compose. Prompts for deployment type, ports, storage,
MySQL credentials, backup retention, and module presets or manual
toggles.
@@ -622,9 +658,6 @@ Options:
--playerbot-enabled 0|1 Override PLAYERBOT_ENABLED flag
--playerbot-min-bots N Override PLAYERBOT_MIN_BOTS value
--playerbot-max-bots N Override PLAYERBOT_MAX_BOTS value
--auto-rebuild-on-deploy Enable automatic rebuild during deploys
--modules-rebuild-source PATH Source checkout used for module rebuilds
--deploy-after Run ./deploy.sh automatically after setup completes
--force Overwrite existing .env without prompting
EOF
exit 0
@@ -779,25 +812,10 @@ EOF
--playerbot-max-bots=*)
CLI_PLAYERBOT_MAX="${1#*=}"; shift
;;
--auto-rebuild-on-deploy)
CLI_AUTO_REBUILD=1
shift
;;
--modules-rebuild-source)
[[ $# -ge 2 ]] || { say ERROR "--modules-rebuild-source requires a value"; exit 1; }
CLI_MODULES_SOURCE="$2"; shift 2
;;
--modules-rebuild-source=*)
CLI_MODULES_SOURCE="${1#*=}"; shift
;;
--force)
FORCE_OVERWRITE=1
shift
;;
--deploy-after)
CLI_DEPLOY_AFTER=1
shift
;;
*)
echo "Unknown argument: $1" >&2
echo "Use --help for usage" >&2
@@ -829,7 +847,7 @@ EOF
fi
show_wow_header
say INFO "This will create azerothcore-rm/.env for compose profiles."
say INFO "This will create .env for compose profiles."
# Deployment type
say HEADER "DEPLOYMENT TYPE"
@@ -983,58 +1001,65 @@ fi
BACKUP_DAILY_TIME=$(ask "Daily backup hour (00-23, UTC)" "${CLI_BACKUP_TIME:-$DEFAULT_BACKUP_TIME}" validate_number)
# Server configuration
say HEADER "SERVER CONFIGURATION PRESET"
local SERVER_CONFIG_PRESET
if [ -n "$CLI_CONFIG_PRESET" ]; then
SERVER_CONFIG_PRESET="$CLI_CONFIG_PRESET"
say INFO "Using preset from command line: $SERVER_CONFIG_PRESET"
if [ "$ENABLE_CONFIG_PRESETS" = "1" ]; then
say HEADER "SERVER CONFIGURATION PRESET"
if [ -n "$CLI_CONFIG_PRESET" ]; then
SERVER_CONFIG_PRESET="$CLI_CONFIG_PRESET"
say INFO "Using preset from command line: $SERVER_CONFIG_PRESET"
else
declare -A CONFIG_PRESET_NAMES=()
declare -A CONFIG_PRESET_DESCRIPTIONS=()
declare -A CONFIG_MENU_INDEX=()
local config_dir="$SCRIPT_DIR/config/presets"
local menu_index=1
echo "Choose a server configuration preset:"
if [ -x "$SCRIPT_DIR/scripts/python/parse-config-presets.py" ] && [ -d "$config_dir" ]; then
while IFS=$'\t' read -r preset_key preset_name preset_desc; do
[ -n "$preset_key" ] || continue
CONFIG_PRESET_NAMES["$preset_key"]="$preset_name"
CONFIG_PRESET_DESCRIPTIONS["$preset_key"]="$preset_desc"
CONFIG_MENU_INDEX[$menu_index]="$preset_key"
echo "$menu_index) $preset_name"
echo " $preset_desc"
menu_index=$((menu_index + 1))
done < <(python3 "$SCRIPT_DIR/scripts/python/parse-config-presets.py" list --presets-dir "$config_dir")
else
# Fallback if parser script not available
CONFIG_MENU_INDEX[1]="none"
CONFIG_PRESET_NAMES["none"]="Default (No Preset)"
CONFIG_PRESET_DESCRIPTIONS["none"]="Use default AzerothCore settings"
echo "1) Default (No Preset)"
echo " Use default AzerothCore settings without any modifications"
fi
local max_config_option=$((menu_index - 1))
if [ "$NON_INTERACTIVE" = "1" ]; then
SERVER_CONFIG_PRESET="none"
say INFO "Non-interactive mode: Using default configuration preset"
else
while true; do
read -p "$(echo -e "${YELLOW}🎯 Select server configuration [1-$max_config_option]: ${NC}")" choice
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 1 ] && [ "$choice" -le "$max_config_option" ]; then
SERVER_CONFIG_PRESET="${CONFIG_MENU_INDEX[$choice]}"
local chosen_name="${CONFIG_PRESET_NAMES[$SERVER_CONFIG_PRESET]}"
say INFO "Selected: $chosen_name"
break
else
say ERROR "Please select a number between 1 and $max_config_option"
fi
done
fi
fi
else
declare -A CONFIG_PRESET_NAMES=()
declare -A CONFIG_PRESET_DESCRIPTIONS=()
declare -A CONFIG_MENU_INDEX=()
local config_dir="$SCRIPT_DIR/config/presets"
local menu_index=1
echo "Choose a server configuration preset:"
if [ -x "$SCRIPT_DIR/scripts/python/parse-config-presets.py" ] && [ -d "$config_dir" ]; then
while IFS=$'\t' read -r preset_key preset_name preset_desc; do
[ -n "$preset_key" ] || continue
CONFIG_PRESET_NAMES["$preset_key"]="$preset_name"
CONFIG_PRESET_DESCRIPTIONS["$preset_key"]="$preset_desc"
CONFIG_MENU_INDEX[$menu_index]="$preset_key"
echo "$menu_index) $preset_name"
echo " $preset_desc"
menu_index=$((menu_index + 1))
done < <(python3 "$SCRIPT_DIR/scripts/python/parse-config-presets.py" list --presets-dir "$config_dir")
else
# Fallback if parser script not available
CONFIG_MENU_INDEX[1]="none"
CONFIG_PRESET_NAMES["none"]="Default (No Preset)"
CONFIG_PRESET_DESCRIPTIONS["none"]="Use default AzerothCore settings"
echo "1) Default (No Preset)"
echo " Use default AzerothCore settings without any modifications"
fi
local max_config_option=$((menu_index - 1))
if [ "$NON_INTERACTIVE" = "1" ]; then
SERVER_CONFIG_PRESET="none"
say INFO "Non-interactive mode: Using default configuration preset"
else
while true; do
read -p "$(echo -e "${YELLOW}🎯 Select server configuration [1-$max_config_option]: ${NC}")" choice
if [[ "$choice" =~ ^[0-9]+$ ]] && [ "$choice" -ge 1 ] && [ "$choice" -le "$max_config_option" ]; then
SERVER_CONFIG_PRESET="${CONFIG_MENU_INDEX[$choice]}"
local chosen_name="${CONFIG_PRESET_NAMES[$SERVER_CONFIG_PRESET]}"
say INFO "Selected: $chosen_name"
break
else
say ERROR "Please select a number between 1 and $max_config_option"
fi
done
fi
# Config presets disabled - use default
SERVER_CONFIG_PRESET="none"
say INFO "Server configuration presets disabled - using default settings"
fi
local MODE_SELECTION=""
@@ -1114,12 +1139,29 @@ fi
MODE_PRESET_NAME="$CLI_MODULE_PRESET"
fi
# Function to determine source branch for a preset
get_preset_source_branch() {
local preset_name="$1"
local preset_modules="${MODULE_PRESET_CONFIGS[$preset_name]:-}"
# Check if playerbots module is in the preset
if [[ "$preset_modules" == *"MODULE_PLAYERBOTS"* ]]; then
echo "azerothcore-playerbots"
else
echo "azerothcore-wotlk"
fi
}
# Module config
say HEADER "MODULE PRESET"
echo "1) ${MODULE_PRESET_LABELS[$DEFAULT_PRESET_SUGGESTED]:-⭐ Suggested Modules}"
echo "2) ${MODULE_PRESET_LABELS[$DEFAULT_PRESET_PLAYERBOTS]:-🤖 Playerbots + Suggested modules}"
echo "3) ⚙️ Manual selection"
echo "4) 🚫 No modules"
printf " %s) %s\n" "1" "⭐ Suggested Modules"
printf " %s (%s)\n" "Baseline solo-friendly quality of life mix" "azerothcore-wotlk"
printf " %s) %s\n" "2" "🤖 Playerbots + Suggested modules"
printf " %s (%s)\n" "Suggested stack plus playerbots enabled" "azerothcore-playerbots"
printf " %s) %s\n" "3" "⚙️ Manual selection"
printf " %s (%s)\n" "Choose individual modules manually" "(depends on modules)"
printf " %s) %s\n" "4" "🚫 No modules"
printf " %s (%s)\n" "Pure AzerothCore with no modules" "azerothcore-wotlk"
local menu_index=5
declare -A MENU_PRESET_INDEX=()
@@ -1138,13 +1180,16 @@ fi
for entry in "${ORDERED_PRESETS[@]}"; do
local preset_name="${entry#*::}"
[ -n "${MODULE_PRESET_CONFIGS[$preset_name]:-}" ] || continue
local pretty_name
local pretty_name preset_desc
if [ -n "${MODULE_PRESET_LABELS[$preset_name]:-}" ]; then
pretty_name="${MODULE_PRESET_LABELS[$preset_name]}"
else
pretty_name=$(echo "$preset_name" | tr '_-' ' ' | awk '{for(i=1;i<=NF;i++){$i=toupper(substr($i,1,1)) substr($i,2)}}1')
fi
echo "${menu_index}) ${pretty_name} (config/module-profiles/${preset_name}.json)"
preset_desc="${MODULE_PRESET_DESCRIPTIONS[$preset_name]:-No description available}"
local source_branch=$(get_preset_source_branch "$preset_name")
printf " %s) %s\n" "$menu_index" "$pretty_name"
printf " %s (%s)\n" "$preset_desc" "$source_branch"
MENU_PRESET_INDEX[$menu_index]="$preset_name"
menu_index=$((menu_index + 1))
done
@@ -1210,8 +1255,6 @@ fi
local PLAYERBOT_MIN_BOTS="${DEFAULT_PLAYERBOT_MIN:-40}"
local PLAYERBOT_MAX_BOTS="${DEFAULT_PLAYERBOT_MAX:-40}"
local AUTO_REBUILD_ON_DEPLOY=$CLI_AUTO_REBUILD
local MODULES_REBUILD_SOURCE_PATH_VALUE="${CLI_MODULES_SOURCE}"
local NEEDS_CXX_REBUILD=0
local module_mode_label=""
@@ -1241,7 +1284,7 @@ fi
"automation" "quality-of-life" "gameplay-enhancement" "npc-service"
"pvp" "progression" "economy" "social" "account-wide"
"customization" "scripting" "admin" "premium" "minigame"
"content" "rewards" "developer"
"content" "rewards" "developer" "database" "tooling" "uncategorized"
)
declare -A category_titles=(
["automation"]="🤖 Automation"
@@ -1261,30 +1304,18 @@ fi
["content"]="🏰 Content"
["rewards"]="🎁 Rewards"
["developer"]="🛠️ Developer Tools"
["database"]="🗄️ Database"
["tooling"]="🔨 Tooling"
["uncategorized"]="📦 Miscellaneous"
)
declare -A processed_categories=()
# Group modules by category using arrays
declare -A modules_by_category
local key
for key in "${selection_keys[@]}"; do
[ -n "${KNOWN_MODULE_LOOKUP[$key]:-}" ] || continue
local category="${MODULE_CATEGORY_MAP[$key]:-uncategorized}"
if [ -z "${modules_by_category[$category]:-}" ]; then
modules_by_category[$category]="$key"
else
modules_by_category[$category]="${modules_by_category[$category]} $key"
fi
done
# Process modules by category
local cat
for cat in "${category_order[@]}"; do
render_category() {
local cat="$1"
local module_list="${modules_by_category[$cat]:-}"
[ -n "$module_list" ] || continue
[ -n "$module_list" ] || return 0
# Check if this category has any valid modules before showing header
local has_valid_modules=0
# Split the space-separated string properly
local -a module_array
IFS=' ' read -ra module_array <<< "$module_list"
for key in "${module_array[@]}"; do
@@ -1296,14 +1327,12 @@ fi
fi
done
# Skip category if no valid modules
[ "$has_valid_modules" = "1" ] || continue
[ "$has_valid_modules" = "1" ] || return 0
# Display category header only when we have valid modules
local cat_title="${category_titles[$cat]:-$cat}"
printf '\n%b\n' "${BOLD}${CYAN}═══ ${cat_title} ═══${NC}"
# Process modules in this category
local first_in_cat=1
for key in "${module_array[@]}"; do
[ -n "${KNOWN_MODULE_LOOKUP[$key]:-}" ] || continue
local status_lc="${MODULE_STATUS_MAP[$key],,}"
@@ -1313,6 +1342,10 @@ fi
printf -v "$key" '%s' "0"
continue
fi
if [ "$first_in_cat" -ne 1 ]; then
printf '\n'
fi
first_in_cat=0
local prompt_label
prompt_label="$(module_display_name "$key")"
if [ "${MODULE_NEEDS_BUILD_MAP[$key]}" = "1" ]; then
@@ -1340,6 +1373,30 @@ fi
printf -v "$key" '%s' "0"
fi
done
processed_categories["$cat"]=1
}
# Group modules by category using arrays
declare -A modules_by_category
local key
for key in "${selection_keys[@]}"; do
[ -n "${KNOWN_MODULE_LOOKUP[$key]:-}" ] || continue
local category="${MODULE_CATEGORY_MAP[$key]:-uncategorized}"
if [ -z "${modules_by_category[$category]:-}" ]; then
modules_by_category[$category]="$key"
else
modules_by_category[$category]="${modules_by_category[$category]} $key"
fi
done
# Process modules by category (ordered, then any new categories)
local cat
for cat in "${category_order[@]}"; do
render_category "$cat"
done
for cat in "${!modules_by_category[@]}"; do
[ -n "${processed_categories[$cat]:-}" ] && continue
render_category "$cat"
done
module_mode_label="preset 3 (Manual)"
elif [ "$MODE_SELECTION" = "4" ]; then
@@ -1430,11 +1487,16 @@ fi
MODULES_CPP_LIST="$(IFS=','; printf '%s' "${enabled_cpp_module_keys[*]}")"
fi
local STACK_IMAGE_MODE="standard"
# Determine source variant based ONLY on playerbots module
local STACK_SOURCE_VARIANT="core"
if [ "$MODULE_PLAYERBOTS" = "1" ] || [ "$PLAYERBOT_ENABLED" = "1" ]; then
STACK_IMAGE_MODE="playerbots"
STACK_SOURCE_VARIANT="playerbots"
fi
# Determine image mode based on source variant and build requirements
local STACK_IMAGE_MODE="standard"
if [ "$STACK_SOURCE_VARIANT" = "playerbots" ]; then
STACK_IMAGE_MODE="playerbots"
elif [ "$NEEDS_CXX_REBUILD" = "1" ]; then
STACK_IMAGE_MODE="modules"
fi
@@ -1459,7 +1521,6 @@ fi
printf " %-18s %s\n" "Storage Path:" "$STORAGE_PATH"
printf " %-18s %s\n" "Container User:" "$CONTAINER_USER"
printf " %-18s Daily %s:00 UTC, keep %sd/%sh\n" "Backups:" "$BACKUP_DAILY_TIME" "$BACKUP_RETENTION_DAYS" "$BACKUP_RETENTION_HOURS"
printf " %-18s %s\n" "Source checkout:" "$default_source_rel"
printf " %-18s %s\n" "Modules images:" "$AC_AUTHSERVER_IMAGE_MODULES_VALUE | $AC_WORLDSERVER_IMAGE_MODULES_VALUE"
printf " %-18s %s\n" "Modules preset:" "$SUMMARY_MODE_TEXT"
@@ -1506,41 +1567,37 @@ fi
echo ""
say WARNING "These modules require compiling AzerothCore from source."
say INFO "Run './build.sh' to compile your custom modules before deployment."
if [ "$CLI_AUTO_REBUILD" = "1" ]; then
AUTO_REBUILD_ON_DEPLOY=1
else
AUTO_REBUILD_ON_DEPLOY=$(ask_yn "Enable automatic rebuild during future deploys?" "$( [ "$AUTO_REBUILD_ON_DEPLOY" = "1" ] && echo y || echo n )")
fi
# Set build sentinel to indicate rebuild is needed
local sentinel="$LOCAL_STORAGE_ROOT_ABS/modules/.requires_rebuild"
mkdir -p "$(dirname "$sentinel")"
touch "$sentinel"
say INFO "Build sentinel created at $sentinel"
if touch "$sentinel" 2>/dev/null; then
say INFO "Build sentinel created at $sentinel"
else
say WARNING "Could not create build sentinel at $sentinel (permissions/ownership); forcing with sudo..."
if command -v sudo >/dev/null 2>&1; then
if sudo mkdir -p "$(dirname "$sentinel")" \
&& sudo chown -R "$(id -u):$(id -g)" "$(dirname "$sentinel")" \
&& sudo touch "$sentinel"; then
say INFO "Build sentinel created at $sentinel (after fixing ownership)"
else
say ERROR "Failed to force build sentinel creation at $sentinel. Fix permissions and rerun setup."
exit 1
fi
else
say ERROR "Cannot force build sentinel creation (sudo unavailable). Fix permissions on $(dirname "$sentinel") and rerun setup."
exit 1
fi
fi
fi
local default_source_rel="${LOCAL_STORAGE_ROOT}/source/azerothcore"
if [ "$NEEDS_CXX_REBUILD" = "1" ] || [ "$MODULE_PLAYERBOTS" = "1" ]; then
if [ "$STACK_SOURCE_VARIANT" = "playerbots" ]; then
default_source_rel="${LOCAL_STORAGE_ROOT}/source/azerothcore-playerbots"
fi
if [ -n "$MODULES_REBUILD_SOURCE_PATH_VALUE" ]; then
local storage_abs="$STORAGE_PATH"
if [[ "$storage_abs" != /* ]]; then
storage_abs="$(pwd)/${storage_abs#./}"
fi
local candidate_path="$MODULES_REBUILD_SOURCE_PATH_VALUE"
if [[ "$candidate_path" != /* ]]; then
candidate_path="$(pwd)/${candidate_path#./}"
fi
if [[ "$candidate_path" == "$storage_abs"* ]]; then
say WARNING "MODULES_REBUILD_SOURCE_PATH is inside shared storage (${candidate_path}). Using local workspace ${default_source_rel} instead."
MODULES_REBUILD_SOURCE_PATH_VALUE="$default_source_rel"
fi
fi
# Module staging will be handled directly in the rebuild section below
# Persist rebuild source path for downstream build scripts
MODULES_REBUILD_SOURCE_PATH="$default_source_rel"
# Confirm write
@@ -1556,10 +1613,6 @@ fi
[ "$cont" = "1" ] || { say ERROR "Aborted"; exit 1; }
fi
if [ -z "$MODULES_REBUILD_SOURCE_PATH_VALUE" ]; then
MODULES_REBUILD_SOURCE_PATH_VALUE="$default_source_rel"
fi
DB_PLAYERBOTS_NAME=${DB_PLAYERBOTS_NAME:-$DEFAULT_DB_PLAYERBOTS_NAME}
HOST_ZONEINFO_PATH=${HOST_ZONEINFO_PATH:-$DEFAULT_HOST_ZONEINFO_PATH}
MYSQL_INNODB_REDO_LOG_CAPACITY=${MYSQL_INNODB_REDO_LOG_CAPACITY:-$DEFAULT_MYSQL_INNODB_REDO_LOG_CAPACITY}
@@ -1621,7 +1674,7 @@ fi
{
cat <<EOF
# Generated by azerothcore-rm/setup.sh
# Generated by setup.sh
# Compose overrides (set to 1 to include matching file under compose-overrides/)
# mysql-expose.yml -> exposes MySQL externally via COMPOSE_OVERRIDE_MYSQL_EXPOSE_ENABLED
@@ -1633,6 +1686,15 @@ COMPOSE_PROJECT_NAME=$DEFAULT_COMPOSE_PROJECT_NAME
STORAGE_PATH=$STORAGE_PATH
STORAGE_PATH_LOCAL=$LOCAL_STORAGE_ROOT
STORAGE_CONFIG_PATH=$(get_template_value "STORAGE_CONFIG_PATH")
STORAGE_LOGS_PATH=$(get_template_value "STORAGE_LOGS_PATH")
STORAGE_MODULES_PATH=$(get_template_value "STORAGE_MODULES_PATH")
STORAGE_LUA_SCRIPTS_PATH=$(get_template_value "STORAGE_LUA_SCRIPTS_PATH")
STORAGE_MODULES_META_PATH=$(get_template_value "STORAGE_MODULES_META_PATH")
STORAGE_MODULE_SQL_PATH=$(get_template_value "STORAGE_MODULE_SQL_PATH")
STORAGE_INSTALL_MARKERS_PATH=$(get_template_value "STORAGE_INSTALL_MARKERS_PATH")
STORAGE_CLIENT_DATA_PATH=$(get_template_value "STORAGE_CLIENT_DATA_PATH")
STORAGE_LOCAL_SOURCE_PATH=$(get_template_value "STORAGE_LOCAL_SOURCE_PATH")
BACKUP_PATH=$BACKUP_PATH
TZ=$DEFAULT_TZ
@@ -1701,10 +1763,31 @@ CONTAINER_USER=$CONTAINER_USER
CONTAINER_MYSQL=$DEFAULT_CONTAINER_MYSQL
CONTAINER_DB_IMPORT=$DEFAULT_CONTAINER_DB_IMPORT
CONTAINER_DB_INIT=$DEFAULT_CONTAINER_DB_INIT
CONTAINER_DB_GUARD=$(get_template_value "CONTAINER_DB_GUARD")
CONTAINER_BACKUP=$DEFAULT_CONTAINER_BACKUP
CONTAINER_MODULES=$DEFAULT_CONTAINER_MODULES
CONTAINER_POST_INSTALL=$DEFAULT_CONTAINER_POST_INSTALL
# Database Guard Defaults
DB_GUARD_RECHECK_SECONDS=$(get_template_value "DB_GUARD_RECHECK_SECONDS")
DB_GUARD_RETRY_SECONDS=$(get_template_value "DB_GUARD_RETRY_SECONDS")
DB_GUARD_WAIT_ATTEMPTS=$(get_template_value "DB_GUARD_WAIT_ATTEMPTS")
DB_GUARD_HEALTH_MAX_AGE=$(get_template_value "DB_GUARD_HEALTH_MAX_AGE")
DB_GUARD_HEALTHCHECK_INTERVAL=$(get_template_value "DB_GUARD_HEALTHCHECK_INTERVAL")
DB_GUARD_HEALTHCHECK_TIMEOUT=$(get_template_value "DB_GUARD_HEALTHCHECK_TIMEOUT")
DB_GUARD_HEALTHCHECK_RETRIES=$(get_template_value "DB_GUARD_HEALTHCHECK_RETRIES")
DB_GUARD_VERIFY_INTERVAL_SECONDS=$(get_template_value "DB_GUARD_VERIFY_INTERVAL_SECONDS")
# Module SQL staging
STAGE_PATH_MODULE_SQL=$(get_template_value "STAGE_PATH_MODULE_SQL")
# Modules rebuild source path
MODULES_REBUILD_SOURCE_PATH=$MODULES_REBUILD_SOURCE_PATH
# SQL Source Overlay
SOURCE_DIR=$(get_template_value "SOURCE_DIR")
AC_SQL_SOURCE_PATH=$(get_template_value "AC_SQL_SOURCE_PATH")
# Ports
AUTH_EXTERNAL_PORT=$AUTH_EXTERNAL_PORT
AUTH_PORT=$DEFAULT_AUTH_INTERNAL_PORT
@@ -1721,16 +1804,22 @@ REALM_PORT=$REALM_PORT
BACKUP_RETENTION_DAYS=$BACKUP_RETENTION_DAYS
BACKUP_RETENTION_HOURS=$BACKUP_RETENTION_HOURS
BACKUP_DAILY_TIME=$BACKUP_DAILY_TIME
BACKUP_INTERVAL_MINUTES=$(get_template_value "BACKUP_INTERVAL_MINUTES")
BACKUP_EXTRA_DATABASES=$(get_template_value "BACKUP_EXTRA_DATABASES")
BACKUP_HEALTHCHECK_MAX_MINUTES=$BACKUP_HEALTHCHECK_MAX_MINUTES
BACKUP_HEALTHCHECK_GRACE_SECONDS=$BACKUP_HEALTHCHECK_GRACE_SECONDS
EOF
echo
echo "# Modules"
for module_key in "${MODULE_KEYS[@]}"; do
printf "%s=%s\n" "$module_key" "${!module_key:-0}"
done
cat <<EOF
echo "# Modules"
for module_key in "${MODULE_KEYS[@]}"; do
local module_value="${!module_key:-0}"
# Only write enabled modules (value=1) to .env
if [ "$module_value" = "1" ]; then
printf "%s=%s\n" "$module_key" "$module_value"
fi
done
cat <<EOF
# Client data
CLIENT_DATA_VERSION=${CLIENT_DATA_VERSION:-$DEFAULT_CLIENT_DATA_VERSION}
@@ -1749,12 +1838,8 @@ MODULES_CPP_LIST=$MODULES_CPP_LIST
MODULES_REQUIRES_CUSTOM_BUILD=$MODULES_REQUIRES_CUSTOM_BUILD
MODULES_REQUIRES_PLAYERBOT_SOURCE=$MODULES_REQUIRES_PLAYERBOT_SOURCE
# Rebuild automation
AUTO_REBUILD_ON_DEPLOY=$AUTO_REBUILD_ON_DEPLOY
MODULES_REBUILD_SOURCE_PATH=$MODULES_REBUILD_SOURCE_PATH_VALUE
# Eluna
AC_ELUNA_ENABLED=$DEFAULT_ELUNA_ENABLED
# Eluna
AC_ELUNA_ENABLED=$DEFAULT_ELUNA_ENABLED
AC_ELUNA_TRACE_BACK=$DEFAULT_ELUNA_TRACE_BACK
AC_ELUNA_AUTO_RELOAD=$DEFAULT_ELUNA_AUTO_RELOAD
AC_ELUNA_BYTECODE_CACHE=$DEFAULT_ELUNA_BYTECODE_CACHE
@@ -1823,16 +1908,6 @@ EOF
printf ' 🚀 Quick deploy: ./deploy.sh\n'
fi
if [ "${CLI_DEPLOY_AFTER:-0}" = "1" ]; then
local deploy_args=(bash "./deploy.sh" --yes)
if [ "$MODULE_PLAYERBOTS" != "1" ]; then
deploy_args+=(--profile standard)
fi
say INFO "Launching deploy after setup (--deploy-after enabled)"
if ! "${deploy_args[@]}"; then
say WARNING "Automatic deploy failed; please run ./deploy.sh manually."
fi
fi
}
main "$@"

update-latest.sh Executable file

@@ -0,0 +1,117 @@
#!/bin/bash
#
# Safe wrapper to update to the latest commit on the current branch and run deploy.
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$ROOT_DIR"
BLUE='\033[0;34m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; NC='\033[0m'
info(){ printf '%b\n' "${BLUE}ℹ️  $*${NC}"; }
ok(){ printf '%b\n' "${GREEN}✅ $*${NC}"; }
warn(){ printf '%b\n' "${YELLOW}⚠️  $*${NC}"; }
err(){ printf '%b\n' "${RED}❌ $*${NC}"; }
FORCE_DIRTY=0
DEPLOY_ARGS=()
SKIP_BUILD=0
AUTO_DEPLOY=0
usage(){
cat <<'EOF'
Usage: ./update-latest.sh [--force] [--skip-build] [--deploy] [--help] [deploy args...]
Updates the current git branch with a fast-forward pull, runs a fresh build,
and optionally runs ./deploy.sh with any additional arguments you provide
(e.g., --yes --no-watch).
Options:
--force Skip the dirty-tree check (not recommended; you may lose changes)
--skip-build Do not run ./build.sh after updating
--deploy Auto-run ./deploy.sh after build (non-interactive)
--help Show this help
Examples:
./update-latest.sh --yes --no-watch
./update-latest.sh --deploy --yes --no-watch
./update-latest.sh --force --skip-build
./update-latest.sh --force --deploy --remote --remote-host my.host --remote-user sam --yes
EOF
}
while [[ $# -gt 0 ]]; do
case "$1" in
--force) FORCE_DIRTY=1; shift;;
--skip-build) SKIP_BUILD=1; shift;;
--deploy) AUTO_DEPLOY=1; shift;;
--help|-h) usage; exit 0;;
*) DEPLOY_ARGS+=("$1"); shift;;
esac
done
command -v git >/dev/null 2>&1 || { err "git is required"; exit 1; }
if [ "$FORCE_DIRTY" -ne 1 ]; then
if [ -n "$(git status --porcelain)" ]; then
err "Working tree is dirty. Commit/stash or re-run with --force."
exit 1
fi
fi
current_branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || true)"
if [ -z "$current_branch" ] || [ "$current_branch" = "HEAD" ]; then
err "Cannot update: detached HEAD or unknown branch."
exit 1
fi
if ! git ls-remote --exit-code --heads origin "$current_branch" >/dev/null 2>&1; then
err "Remote branch origin/$current_branch not found."
exit 1
fi
info "Fetching latest changes from origin/$current_branch"
git fetch --prune origin
info "Fast-forwarding to origin/$current_branch"
if ! git merge --ff-only "origin/$current_branch"; then
err "Fast-forward failed. Resolve manually or rebase, then rerun."
exit 1
fi
ok "Repository updated to $(git rev-parse --short HEAD)"
if [ "$SKIP_BUILD" -ne 1 ]; then
info "Running build.sh --yes"
if ! "$ROOT_DIR/build.sh" --yes; then
err "Build failed. Resolve issues and re-run."
exit 1
fi
ok "Build completed"
else
warn "Skipping build (--skip-build set)"
fi
# Offer to run deploy
if [ "$AUTO_DEPLOY" -eq 1 ]; then
info "Auto-deploy enabled; running deploy.sh ${DEPLOY_ARGS[*]:-(no extra args)}"
exec "$ROOT_DIR/deploy.sh" "${DEPLOY_ARGS[@]}"
fi
if [ -t 0 ]; then
read -r -p "Run deploy.sh now? [y/N]: " reply
reply="${reply:-n}"
case "$reply" in
[Yy]*)
info "Running deploy.sh ${DEPLOY_ARGS[*]:-(no extra args)}"
exec "$ROOT_DIR/deploy.sh" "${DEPLOY_ARGS[@]}"
;;
*)
ok "Update (and build) complete. Run ./deploy.sh ${DEPLOY_ARGS[*]} when ready."
exit 0
;;
esac
else
warn "Non-interactive mode and --deploy not set; skipping deploy."
ok "Update (and build) complete. Run ./deploy.sh ${DEPLOY_ARGS[*]} when ready."
fi


@@ -1,350 +0,0 @@
[
{
"key": "MODULE_INDIVIDUAL_PROGRESSION",
"repo_name": "ZhengPeiRu21/mod-individual-progression",
"topic": "azerothcore-module",
"repo_url": "https://github.com/ZhengPeiRu21/mod-individual-progression"
},
{
"key": "MODULE_PLAYERBOTS",
"repo_name": "mod-playerbots/mod-playerbots",
"topic": "azerothcore-module",
"repo_url": "https://github.com/mod-playerbots/mod-playerbots"
},
{
"key": "MODULE_OLLAMA_CHAT",
"repo_name": "DustinHendrickson/mod-ollama-chat",
"topic": "azerothcore-module",
"repo_url": "https://github.com/DustinHendrickson/mod-ollama-chat"
},
{
"key": "MODULE_PLAYER_BOT_LEVEL_BRACKETS",
"repo_name": "DustinHendrickson/mod-player-bot-level-brackets",
"topic": "azerothcore-module",
"repo_url": "https://github.com/DustinHendrickson/mod-player-bot-level-brackets"
},
{
"key": "MODULE_DUEL_RESET",
"repo_name": "azerothcore/mod-duel-reset",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-duel-reset"
},
{
"key": "MODULE_AOE_LOOT",
"repo_name": "azerothcore/mod-aoe-loot",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-aoe-loot"
},
{
"key": "MODULE_TIC_TAC_TOE",
"repo_name": "azerothcore/mod-tic-tac-toe",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-tic-tac-toe"
},
{
"key": "MODULE_NPC_BEASTMASTER",
"repo_name": "azerothcore/mod-npc-beastmaster",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-npc-beastmaster"
},
{
"key": "MODULE_MORPHSUMMON",
"repo_name": "azerothcore/mod-morphsummon",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-morphsummon"
},
{
"key": "MODULE_WORGOBLIN",
"repo_name": "heyitsbench/mod-worgoblin",
"topic": "azerothcore-module",
"repo_url": "https://github.com/heyitsbench/mod-worgoblin"
},
{
"key": "MODULE_SKELETON_MODULE",
"repo_name": "azerothcore/skeleton-module",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/skeleton-module"
},
{
"key": "MODULE_AUTOBALANCE",
"repo_name": "azerothcore/mod-autobalance",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-autobalance"
},
{
"key": "MODULE_TRANSMOG",
"repo_name": "azerothcore/mod-transmog",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-transmog"
},
{
"key": "MODULE_ARAC",
"repo_name": "heyitsbench/mod-arac",
"topic": "azerothcore-module",
"repo_url": "https://github.com/heyitsbench/mod-arac"
},
{
"key": "MODULE_GLOBAL_CHAT",
"repo_name": "azerothcore/mod-global-chat",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-global-chat"
},
{
"key": "MODULE_PRESTIGE_DRAFT_MODE",
"repo_name": "Youpeoples/Prestige-and-Draft-Mode",
"topic": "azerothcore-module",
"repo_url": "https://github.com/Youpeoples/Prestige-and-Draft-Mode"
},
{
"key": "MODULE_BLACK_MARKET_AUCTION_HOUSE",
"repo_name": "Youpeoples/Black-Market-Auction-House",
"topic": "azerothcore-module",
"repo_url": "https://github.com/Youpeoples/Black-Market-Auction-House"
},
{
"key": "MODULE_ULTIMATE_FULL_LOOT_PVP",
"repo_name": "Youpeoples/Ultimate-Full-Loot-Pvp",
"topic": "azerothcore-module",
"repo_url": "https://github.com/Youpeoples/Ultimate-Full-Loot-Pvp"
},
{
"key": "MODULE_SERVER_AUTO_SHUTDOWN",
"repo_name": "azerothcore/mod-server-auto-shutdown",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-server-auto-shutdown"
},
{
"key": "MODULE_TIME_IS_TIME",
"repo_name": "dunjeon/mod-TimeIsTime",
"topic": "azerothcore-module",
"repo_url": "https://github.com/dunjeon/mod-TimeIsTime"
},
{
"key": "MODULE_WAR_EFFORT",
"repo_name": "azerothcore/mod-war-effort",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-war-effort"
},
{
"key": "MODULE_FIREWORKS",
"repo_name": "azerothcore/mod-fireworks-on-level",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-fireworks-on-level"
},
{
"key": "MODULE_NPC_ENCHANTER",
"repo_name": "azerothcore/mod-npc-enchanter",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-npc-enchanter"
},
{
"key": "MODULE_NPC_BUFFER",
"repo_name": "azerothcore/mod-npc-buffer",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-npc-buffer"
},
{
"key": "MODULE_PVP_TITLES",
"repo_name": "azerothcore/mod-pvp-titles",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-pvp-titles"
},
{
"key": "MODULE_CHALLENGE_MODES",
"repo_name": "ZhengPeiRu21/mod-challenge-modes",
"topic": "azerothcore-module",
"repo_url": "https://github.com/ZhengPeiRu21/mod-challenge-modes"
},
{
"key": "MODULE_TREASURE_CHEST_SYSTEM",
"repo_name": "zyggy123/Treasure-Chest-System",
"topic": "azerothcore-module",
"repo_url": "https://github.com/zyggy123/Treasure-Chest-System"
},
{
"key": "MODULE_ASSISTANT",
"repo_name": "noisiver/mod-assistant",
"topic": "azerothcore-module",
"repo_url": "https://github.com/noisiver/mod-assistant"
},
{
"key": "MODULE_STATBOOSTER",
"repo_name": "AnchyDev/StatBooster",
"topic": "azerothcore-module",
"repo_url": "https://github.com/AnchyDev/StatBooster"
},
{
"key": "MODULE_BG_SLAVERYVALLEY",
"repo_name": "Helias/mod-bg-slaveryvalley",
"topic": "azerothcore-module",
"repo_url": "https://github.com/Helias/mod-bg-slaveryvalley"
},
{
"key": "MODULE_REAGENT_BANK",
"repo_name": "ZhengPeiRu21/mod-reagent-bank",
"topic": "azerothcore-module",
"repo_url": "https://github.com/ZhengPeiRu21/mod-reagent-bank"
},
{
"key": "MODULE_ELUNA_TS",
"repo_name": "azerothcore/eluna-ts",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/eluna-ts"
},
{
"key": "MODULE_AZEROTHSHARD",
"repo_name": "azerothcore/mod-azerothshard",
"topic": "azerothcore-module",
"repo_url": "https://github.com/azerothcore/mod-azerothshard"
},
{
"key": "MODULE_LEVEL_GRANT",
"repo_name": "michaeldelago/mod-quest-count-level",
"topic": "azerothcore-module",
"repo_url": "https://github.com/michaeldelago/mod-quest-count-level"
},
{
"key": "MODULE_DUNGEON_RESPAWN",
"repo_name": "AnchyDev/DungeonRespawn",
"topic": "azerothcore-module",
"repo_url": "https://github.com/AnchyDev/DungeonRespawn"
},
{
"key": "MODULE_LUA_AH_BOT",
"repo_name": "mostlynick3/azerothcore-lua-ah-bot",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/mostlynick3/azerothcore-lua-ah-bot"
},
{
"key": "MODULE_ACCOUNTWIDE_SYSTEMS",
"repo_name": "Aldori15/azerothcore-eluna-accountwide",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Aldori15/azerothcore-eluna-accountwide"
},
{
"key": "MODULE_ELUNA_SCRIPTS",
"repo_name": "Isidorsson/Eluna-scripts",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Isidorsson/Eluna-scripts"
},
{
"key": "MODULE_TRANSMOG_AIO",
"repo_name": "DanieltheDeveloper/azerothcore-transmog-3.3.5a",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/DanieltheDeveloper/azerothcore-transmog-3.3.5a"
},
{
"key": "MODULE_HARDCORE_MODE",
"repo_name": "PrivateDonut/hardcore_mode",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/PrivateDonut/hardcore_mode"
},
{
"key": "MODULE_RECRUIT_A_FRIEND",
"repo_name": "55Honey/Acore_RecruitAFriend",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_RecruitAFriend"
},
{
"key": "MODULE_EVENT_SCRIPTS",
"repo_name": "55Honey/Acore_eventScripts",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_eventScripts"
},
{
"key": "MODULE_LOTTERY_LUA",
"repo_name": "zyggy123/lottery-lua",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/zyggy123/lottery-lua"
},
{
"key": "MODULE_HORADRIC_CUBE",
"repo_name": "TITIaio/Horadric-Cube-for-World-of-Warcraft",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/TITIaio/Horadric-Cube-for-World-of-Warcraft"
},
{
"key": "MODULE_GLOBAL_MAIL_BANKING_AUCTIONS",
"repo_name": "Aldori15/azerothcore-global-mail_banking_auctions",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Aldori15/azerothcore-global-mail_banking_auctions"
},
{
"key": "MODULE_LEVEL_UP_REWARD",
"repo_name": "55Honey/Acore_LevelUpReward",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_LevelUpReward"
},
{
"key": "MODULE_AIO_BLACKJACK",
"repo_name": "Manmadedrummer/AIO-Blackjack",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Manmadedrummer/AIO-Blackjack"
},
{
"key": "MODULE_NPCBOT_EXTENDED_COMMANDS",
"repo_name": "Day36512/Npcbot_Extended_Commands",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Day36512/Npcbot_Extended_Commands"
},
{
"key": "MODULE_ACTIVE_CHAT",
"repo_name": "Day36512/ActiveChat",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Day36512/ActiveChat"
},
{
"key": "MODULE_MULTIVENDOR",
"repo_name": "Shadowveil-WotLK/AzerothCore-lua-MultiVendor",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Shadowveil-WotLK/AzerothCore-lua-MultiVendor"
},
{
"key": "MODULE_EXCHANGE_NPC",
"repo_name": "55Honey/Acore_ExchangeNpc",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_ExchangeNpc"
},
{
"key": "MODULE_DYNAMIC_TRADER",
"repo_name": "Day36512/Dynamic-Trader",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/Day36512/Dynamic-Trader"
},
{
"key": "MODULE_DISCORD_NOTIFIER",
"repo_name": "0xCiBeR/Acore_DiscordNotifier",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/0xCiBeR/Acore_DiscordNotifier"
},
{
"key": "MODULE_ZONE_CHECK",
"repo_name": "55Honey/Acore_Zonecheck",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_Zonecheck"
},
{
"key": "MODULE_HARDCORE_MODE",
"repo_name": "HellionOP/Lua-HardcoreMode",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/HellionOP/Lua-HardcoreMode"
},
{
"key": "MODULE_SEND_AND_BIND",
"repo_name": "55Honey/Acore_SendAndBind",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_SendAndBind"
},
{
"key": "MODULE_TEMP_ANNOUNCEMENTS",
"repo_name": "55Honey/Acore_TempAnnouncements",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_TempAnnouncements"
},
{
"key": "MODULE_CARBON_COPY",
"repo_name": "55Honey/Acore_CarbonCopy",
"topic": "azerothcore-lua",
"repo_url": "https://github.com/55Honey/Acore_CarbonCopy"
}
]
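
The removed manifest is a flat JSON array of module descriptors, each with `key`, `repo_name`, `topic`, and `repo_url` fields. As a hedged sketch (the function names are illustrative and not part of this repo), a loader could validate such a manifest and flag duplicate keys — note that `MODULE_HARDCORE_MODE` appears twice above, pointing at different repositories:

```python
import json
from collections import Counter

REQUIRED_FIELDS = ("key", "repo_name", "topic", "repo_url")

def load_manifest(path):
    """Load a module manifest: a JSON array of module descriptors."""
    with open(path, encoding="utf-8") as f:
        entries = json.load(f)
    for entry in entries:
        missing = [field for field in REQUIRED_FIELDS if field not in entry]
        if missing:
            raise ValueError(f"entry {entry.get('key', '?')} missing {missing}")
    return entries

def duplicate_keys(entries):
    """Return manifest keys that appear more than once, sorted."""
    counts = Counter(entry["key"] for entry in entries)
    return sorted(key for key, n in counts.items() if n > 1)
```

Running `duplicate_keys` over the array above would surface the doubled `MODULE_HARDCORE_MODE` entry, which a sync step could then reject or deduplicate.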