refactoring and adding automations

This commit is contained in:
Deckard
2025-10-17 01:40:50 -04:00
parent 4bf22d1829
commit 859a214e12
65 changed files with 4622 additions and 956 deletions


@@ -1,239 +0,0 @@
# AzerothCore Cleanup Script
This script provides safe and comprehensive cleanup options for AzerothCore Docker resources with multiple levels of cleanup intensity.
## Quick Reference
```bash
cd scripts
# Safe cleanup - stop containers only
./cleanup.sh --soft
# Moderate cleanup - remove containers and networks (preserves data)
./cleanup.sh --hard
# Complete cleanup - remove everything (DESTROYS ALL DATA)
./cleanup.sh --nuclear
# See what would happen without doing it
./cleanup.sh --hard --dry-run
```
## Cleanup Levels
### 🟢 **Soft Cleanup** (`--soft`)
- **What it does**: Stops all AzerothCore containers
- **What it preserves**: Everything (data, networks, images)
- **Use case**: Temporary shutdown, reboot, or switching between deployments
- **Recovery**: Quick restart with deployment script
```bash
./cleanup.sh --soft
```
**After soft cleanup:**
- All your game data is safe
- Quick restart: `./deploy-and-check.sh --skip-deploy`
### 🟡 **Hard Cleanup** (`--hard`)
- **What it does**: Removes containers and networks
- **What it preserves**: Data volumes and Docker images
- **Use case**: Clean slate deployment while keeping your data
- **Recovery**: Full deployment (but reuses existing data)
```bash
./cleanup.sh --hard
```
**After hard cleanup:**
- Your database and game data are preserved
- Fresh deployment: `./deploy-and-check.sh`
- No need to re-download client data
### 🔴 **Nuclear Cleanup** (`--nuclear`)
- **What it does**: Removes EVERYTHING
- **What it preserves**: Nothing
- **Use case**: Complete fresh start or when troubleshooting major issues
- **Recovery**: Full deployment with fresh downloads
```bash
./cleanup.sh --nuclear
```
**⚠️ WARNING: This permanently deletes ALL AzerothCore data including:**
- Database schemas and characters
- Client data (15GB+ will need re-download)
- Configuration files
- Logs and backups
- All containers and images
## Command Options
| Option | Description |
|--------|-------------|
| `--soft` | Stop containers only (safest) |
| `--hard` | Remove containers + networks (preserves data) |
| `--nuclear` | Complete removal (DESTROYS ALL DATA) |
| `--dry-run` | Show what would be done without actually doing it |
| `--force` | Skip confirmation prompts (useful for scripts) |
| `--help` | Show help message |
## Examples
### Safe Exploration
```bash
# See what would be removed with hard cleanup
./cleanup.sh --hard --dry-run
# See what would be removed with nuclear cleanup
./cleanup.sh --nuclear --dry-run
```
### Automated Scripts
```bash
# Force cleanup without prompts (for CI/CD)
./cleanup.sh --hard --force
# Dry run for validation
./cleanup.sh --nuclear --dry-run --force
```
### Interactive Cleanup
```bash
# Standard cleanup with confirmation
./cleanup.sh --hard
# Will prompt: "Are you sure? (yes/no):"
```
## What Gets Cleaned
### Resources Identified
The script automatically identifies and shows:
- **Containers**: All `ac-*` containers (running and stopped)
- **Networks**: `azerothcore` and related networks
- **Volumes**: AzerothCore data volumes (if any named volumes exist)
- **Images**: AzerothCore server images and related tools
### Cleanup Actions by Level
| Resource Type | Soft | Hard | Nuclear |
|---------------|------|------|---------|
| Containers | Stop | Remove | Remove |
| Networks | Keep | Remove | Remove |
| Volumes | Keep | Keep | **DELETE** |
| Images | Keep | Keep | **DELETE** |
| Local Data | Keep | Keep | **DELETE** |
## Recovery After Cleanup
### After Soft Cleanup
```bash
# Quick restart (containers only)
./deploy-and-check.sh --skip-deploy
# Or restart specific layer
docker compose -f ../docker-compose-azerothcore-services.yml up -d
```
### After Hard Cleanup
```bash
# Full deployment (reuses existing data)
./deploy-and-check.sh
```
### After Nuclear Cleanup
```bash
# Complete fresh deployment
./deploy-and-check.sh
# This will:
# - Download ~15GB client data again
# - Import fresh database schemas
# - Create new containers and networks
```
## Safety Features
### Confirmation Prompts
- All destructive operations require confirmation
- Clear warnings about data loss
- Use `--force` to skip prompts for automation
### Dry Run Mode
- See exactly what would be done
- No actual changes made
- Perfect for understanding impact
### Resource Detection
- Shows current resources before cleanup
- Identifies exactly what will be affected
- Prevents unnecessary operations
## Integration with Other Scripts
### Combined Usage
```bash
# Complete refresh workflow
./cleanup.sh --hard --force
./deploy-and-check.sh
# Troubleshooting workflow
./cleanup.sh --nuclear --dry-run # See what would be removed
./cleanup.sh --nuclear --force # If needed
./deploy-and-check.sh # Fresh start
```
### CI/CD Usage
```bash
# Automated cleanup in pipelines
./cleanup.sh --hard --force
./deploy-and-check.sh --skip-deploy || ./deploy-and-check.sh
```
## Troubleshooting
### Common Issues
**Cleanup hangs or fails:**
```bash
# Force remove stuck containers
docker kill $(docker ps -q --filter "name=ac-")
docker rm $(docker ps -aq --filter "name=ac-")
```
**Permission errors:**
```bash
# Some local directories might need sudo
sudo ./cleanup.sh --nuclear
```
**Resources not found:**
- This is normal if no AzerothCore deployment exists
- Script will show "No resources found" and exit safely
### Manual Cleanup
If the script fails, you can manually clean up:
```bash
# Manual container removal
docker ps -a --format '{{.Names}}' | grep '^ac-' | xargs docker rm -f
# Manual network removal
docker network rm azerothcore
# Manual volume removal (DESTROYS DATA)
docker volume ls --format '{{.Name}}' | grep 'ac_' | xargs docker volume rm
# Manual image removal
docker images --format '{{.Repository}}:{{.Tag}}' | grep '^acore/' | xargs docker rmi
docker images --format '{{.Repository}}:{{.Tag}}' | grep '^uprightbass360/azerothcore-wotlk-playerbots' | xargs docker rmi
```
## Exit Codes
- **0**: Cleanup completed successfully
- **1**: Error occurred or user cancelled operation
Use these exit codes in scripts to handle cleanup results appropriately.
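A caller can branch on these codes directly; the sketch below assumes `cleanup.sh` is invoked as shown earlier, and `handle_cleanup_result` is a hypothetical helper, not part of the repository:

```bash
# Hypothetical helper: turn cleanup.sh's exit status into a log message.
handle_cleanup_result() {
  if [ "$1" -eq 0 ]; then
    echo "cleanup succeeded"
  else
    echo "cleanup failed (exit code $1)" >&2
    return 1
  fi
}

# Typical usage:
#   ./cleanup.sh --hard --force
#   handle_cleanup_result $?
```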


@@ -1,157 +0,0 @@
# AzerothCore Deployment & Health Check
This document describes how to use the automated deployment and health check script for the AzerothCore Docker stack.
## Quick Start
The script is located in the `scripts/` directory and should be run from there:
### Full Deployment and Health Check
```bash
cd scripts
./deploy-and-check.sh
```
### Health Check Only (Skip Deployment)
```bash
cd scripts
./deploy-and-check.sh --skip-deploy
```
### Quick Health Check (Basic Tests Only)
```bash
cd scripts
./deploy-and-check.sh --skip-deploy --quick-check
```
## Script Features
### Deployment
- **Layered Deployment**: Deploys database → services → tools layers in correct order
- **Dependency Waiting**: Waits for each layer to be ready before proceeding
- **Error Handling**: Stops on errors with clear error messages
- **Progress Monitoring**: Shows deployment progress and status
### Health Checks
- **Container Health**: Verifies all containers are running and healthy
- **Port Connectivity**: Tests all external ports are accessible
- **Web Service Verification**: Validates web interfaces are responding correctly
- **Database Validation**: Confirms database schemas and realm configuration
- **Comprehensive Reporting**: Color-coded status with detailed results
## Command Line Options
| Option | Description |
|--------|-------------|
| `--skip-deploy` | Skip the deployment phase, only run health checks |
| `--quick-check` | Run basic health checks only (faster, less comprehensive) |
| `--help` | Show usage information |
## What Gets Checked
### Container Health Status
- **ac-mysql**: Database server
- **ac-backup**: Automated backup service
- **ac-authserver**: Authentication server
- **ac-worldserver**: Game world server
- **ac-phpmyadmin**: Database management interface
- **ac-keira3**: Database editor
### Port Connectivity Tests
- **Database Layer**: MySQL (64306)
- **Services Layer**: Auth Server (3784), World Server (8215), SOAP API (7778)
- **Tools Layer**: PHPMyAdmin (8081), Keira3 (4201)
### Web Service Health Checks
- **PHPMyAdmin**: HTTP response and content verification
- **Keira3**: Health endpoint and content verification
### Database Validation
- **Schema Verification**: Confirms all required databases exist
- **Realm Configuration**: Validates realm setup
## Service URLs and Credentials
### Web Interfaces
- 🌐 **PHPMyAdmin**: http://localhost:8081
- 🛠️ **Keira3**: http://localhost:4201
### Game Connections
- 🎮 **Game Server**: localhost:8215
- 🔐 **Auth Server**: localhost:3784
- 🔧 **SOAP API**: localhost:7778
- 🗄️ **MySQL**: localhost:64306
### Default Credentials
- **MySQL**: root / azerothcore123
## Deployment Process
The script follows this deployment sequence:
### 1. Database Layer
- Deploys MySQL database server
- Waits for MySQL to be ready
- Runs database initialization
- Imports AzerothCore schemas
- Starts backup service
### 2. Services Layer
- Deploys authentication server
- Starts client data download/extraction (10-20 minutes)
- Deploys world server (waits for client data)
- Starts module management service
### 3. Tools Layer
- Deploys PHPMyAdmin database interface
- Deploys Keira3 database editor
## Troubleshooting
### Common Issues
**Port conflicts**: If ports are already in use, modify the environment files to use different external ports.
**Slow client data download**: The initial download is ~15GB and may take 10-30 minutes depending on connection speed.
**Container restart loops**: Check container logs with `docker logs <container-name>` for specific error messages.
### Manual Checks
```bash
# Check container status
docker ps | grep ac-
# Check specific container logs
docker logs ac-worldserver --tail 50
# Test port connectivity manually
nc -z localhost 8215
# Check container health
docker inspect ac-mysql --format='{{.State.Health.Status}}'
```
### Recovery Commands
```bash
# Restart specific layer
docker compose -f docker-compose-azerothcore-services.yml restart
# Reset specific service
docker compose -f docker-compose-azerothcore-services.yml stop ac-worldserver
docker compose -f docker-compose-azerothcore-services.yml up -d ac-worldserver
# Full reset (WARNING: destroys all data)
docker compose -f docker-compose-azerothcore-tools.yml down
docker compose -f docker-compose-azerothcore-services.yml down
docker compose -f docker-compose-azerothcore-database.yml down
docker volume prune -f
```
## Script Exit Codes
- **0**: All health checks passed successfully
- **1**: Health check failures detected or deployment errors
Use the exit code in CI/CD pipelines or automated deployment scripts to determine deployment success.


@@ -1,361 +0,0 @@
# GitHub-Hosted Service Scripts Documentation
This document describes the GitHub-hosted scripts that are automatically downloaded and executed by Docker containers during AzerothCore deployment.
## Overview
The AzerothCore Docker deployment uses a hybrid script management approach:
- **Local Scripts**: Run from your environment for setup and management
- **GitHub-Hosted Scripts**: Downloaded at runtime by containers for service operations
This pattern keeps the stack Portainer-compatible while remaining flexible and easy to maintain.
## GitHub-Hosted Scripts
### 🗂️ `download-client-data.sh`
**Purpose**: Downloads and extracts WoW 3.3.5a client data files (~15GB)
**Features**:
- Intelligent caching system to avoid re-downloads
- Progress monitoring during extraction
- Integrity verification of downloaded files
- Fallback URLs for reliability
- Automatic directory structure validation
**Container Usage**: `ac-client-data` service
**Volumes Required**:
- `/cache` - For caching downloaded files
- `/azerothcore/data` - For extracted game data
**Environment Variables**:
```bash
# Automatically set by container, no manual configuration needed
```
**Process Flow**:
1. Fetches latest release info from wowgaming/client-data
2. Checks cache for existing files
3. Downloads if not cached or corrupted
4. Extracts with progress monitoring
5. Validates directory structure (maps, vmaps, mmaps, dbc)
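The cache check in step 2 can be sketched roughly as follows (the `CACHE_DIR` path and archive naming scheme are illustrative assumptions, not the script's actual internals):

```bash
# Hypothetical sketch of the cache decision; the archive naming
# convention shown here is an assumption for illustration.
CACHE_DIR="${CACHE_DIR:-/cache}"

needs_download() {
  local version="$1"
  local cached="$CACHE_DIR/client-data-$version.zip"
  # Download only when no archive for this version is already cached
  [ ! -f "$cached" ]
}
```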
---
### 🔧 `manage-modules.sh`
**Purpose**: Comprehensive AzerothCore module management and configuration
**Features**:
- Dynamic module installation based on environment variables
- Automatic removal of disabled modules
- Configuration file management (.conf.dist → .conf)
- SQL script execution for module databases
- Module state tracking for rebuild detection
- Integration with external SQL script library
**Container Usage**: `ac-modules` service
**Volumes Required**:
- `/modules` - Module installation directory
- `/azerothcore/env/dist/etc` - Configuration files
**Environment Variables**:
```bash
# Git Configuration
GIT_EMAIL=your-email@example.com
GIT_PAT=your-github-token
GIT_USERNAME=your-username
# Module Toggle Variables (1=enabled, 0=disabled)
MODULE_PLAYERBOTS=1
MODULE_AOE_LOOT=1
MODULE_LEARN_SPELLS=1
MODULE_FIREWORKS=1
MODULE_INDIVIDUAL_PROGRESSION=1
MODULE_AHBOT=1
MODULE_AUTOBALANCE=1
MODULE_TRANSMOG=1
MODULE_NPC_BUFFER=1
MODULE_DYNAMIC_XP=1
MODULE_SOLO_LFG=1
MODULE_1V1_ARENA=1
MODULE_PHASED_DUELS=1
MODULE_BREAKING_NEWS=1
MODULE_BOSS_ANNOUNCER=1
MODULE_ACCOUNT_ACHIEVEMENTS=1
MODULE_AUTO_REVIVE=1
MODULE_GAIN_HONOR_GUARD=1
MODULE_ELUNA=1
MODULE_TIME_IS_TIME=1
MODULE_POCKET_PORTAL=1
MODULE_RANDOM_ENCHANTS=1
MODULE_SOLOCRAFT=1
MODULE_PVP_TITLES=1
MODULE_NPC_BEASTMASTER=1
MODULE_NPC_ENCHANTER=1
MODULE_INSTANCE_RESET=1
MODULE_LEVEL_GRANT=1
MODULE_ARAC=1
MODULE_ASSISTANT=1
MODULE_REAGENT_BANK=1
MODULE_BLACK_MARKET_AUCTION_HOUSE=1
# Database Configuration
CONTAINER_MYSQL=ac-mysql
MYSQL_ROOT_PASSWORD=your-password
DB_AUTH_NAME=acore_auth
DB_WORLD_NAME=acore_world
DB_CHARACTERS_NAME=acore_characters
```
**Process Flow**:
1. Sets up Git configuration for module downloads
2. Removes disabled modules from `/modules` directory
3. Clones enabled modules from GitHub repositories
4. Installs module configuration files
5. Executes module SQL scripts via `manage-modules-sql.sh`
6. Tracks module state changes for rebuild detection
7. Downloads rebuild script for user convenience
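The toggle-driven logic in steps 2-3 can be sketched as below; the module list is a small illustrative subset of the `MODULE_*` variables above, and the function name is hypothetical:

```bash
# Hypothetical sketch of the toggle loop: collect modules whose
# MODULE_<NAME> variable is set to 1.
enabled_modules() {
  local mod flag_var enabled=""
  for mod in PLAYERBOTS AOE_LOOT TRANSMOG; do
    flag_var="MODULE_${mod}"
    if [ "${!flag_var:-0}" = "1" ]; then   # bash indirect expansion
      enabled="$enabled $mod"
    fi
  done
  echo $enabled   # unquoted on purpose: trims the leading space
}
```

The real script would clone each enabled repository and delete directories for disabled modules; this sketch only shows the selection step.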
---
### 🗄️ `manage-modules-sql.sh`
**Purpose**: SQL script execution functions for module database setup
**Features**:
- Systematic SQL file discovery and execution
- Support for multiple database targets (auth, world, characters)
- Error handling and logging
- MariaDB client installation if needed
**Container Usage**: Sourced by `manage-modules.sh`
**Dependencies**: Requires MariaDB/MySQL client tools
**Function**: `execute_module_sql_scripts()`
- Executes SQL for all enabled modules
- Searches common SQL directories (`data/sql/`, `sql/`)
- Handles auth, world, and character database scripts
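A rough sketch of that discovery step, using the directory conventions above (the function shown is illustrative, not the script's actual code):

```bash
# Hypothetical sketch: list a module's SQL files for one target database,
# searching the conventional data/sql/ and sql/ locations.
find_module_sql() {
  local module_dir="$1" db="$2"   # db is one of: auth, world, characters
  local base
  for base in "$module_dir/data/sql" "$module_dir/sql"; do
    if [ -d "$base/$db" ]; then
      find "$base/$db" -name '*.sql' | sort
    fi
  done
  return 0
}
```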
---
### 🚀 `mysql-startup.sh`
**Purpose**: MySQL initialization with backup restoration support
**Features**:
- NFS-compatible permission handling
- Automatic backup detection and restoration
- Support for multiple backup formats (daily, hourly, legacy)
- Configurable MySQL parameters
- Background restore operations
**Container Usage**: `ac-mysql` service
**Volumes Required**:
- `/var/lib/mysql-runtime` - Runtime MySQL data (tmpfs)
- `/backups` - Backup storage directory
**Environment Variables**:
```bash
MYSQL_CHARACTER_SET=utf8mb4
MYSQL_COLLATION=utf8mb4_unicode_ci
MYSQL_MAX_CONNECTIONS=500
MYSQL_INNODB_BUFFER_POOL_SIZE=1G
MYSQL_INNODB_LOG_FILE_SIZE=256M
MYSQL_ROOT_PASSWORD=your-password
```
**Process Flow**:
1. Creates and configures runtime MySQL directory
2. Scans for available backups (daily → hourly → legacy)
3. Starts MySQL in background if restore needed
4. Downloads and executes restore script from GitHub
5. Runs MySQL normally if no restore required
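Step 2's priority scan might look roughly like this (the function and file names are illustrative assumptions; the daily, hourly, and legacy subdirectory names mirror the backup formats listed above):

```bash
# Hypothetical sketch: pick the newest backup, preferring daily over
# hourly over legacy.
pick_backup() {
  local root="$1" sub newest
  for sub in daily hourly legacy; do
    newest=$(ls -1 "$root/$sub" 2>/dev/null | sort | tail -n 1)
    if [ -n "$newest" ]; then
      echo "$sub/$newest"
      return 0
    fi
  done
  return 1   # nothing to restore; start with an empty database
}
```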
---
### ⏰ `backup-scheduler.sh`
**Purpose**: Enhanced backup scheduler with hourly and daily schedules
**Features**:
- Configurable backup timing
- Separate hourly and daily backup retention
- Automatic backup script downloading
- Collision avoidance between backup types
- Initial backup execution
**Container Usage**: `ac-backup` service
**Volumes Required**:
- `/backups` - Backup storage directory
**Environment Variables**:
```bash
BACKUP_DAILY_TIME=03 # Hour for daily backups (UTC)
BACKUP_RETENTION_DAYS=7 # Daily backup retention
BACKUP_RETENTION_HOURS=48 # Hourly backup retention
MYSQL_HOST=ac-mysql
MYSQL_ROOT_PASSWORD=your-password
```
**Process Flow**:
1. Downloads backup scripts from GitHub
2. Waits for MySQL to be available
3. Executes initial daily backup
4. Runs continuous scheduler loop
5. Executes hourly/daily backups based on time
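The hourly-versus-daily decision in step 5 can be sketched as follows (`backup_type_for_hour` is a hypothetical helper; only `BACKUP_DAILY_TIME` comes from the documented variables):

```bash
# Hypothetical sketch: decide which backup to run for the current hour.
backup_type_for_hour() {
  local hour="$1"   # current UTC hour, zero-padded, e.g. "03"
  if [ "$hour" = "${BACKUP_DAILY_TIME:-03}" ]; then
    echo "daily"    # the daily backup replaces that hour's hourly one
  else
    echo "hourly"
  fi
}
```

Running the daily backup in place of that hour's hourly backup is one way to implement the collision avoidance mentioned above.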
---
### 🏗️ `db-init.sh`
**Purpose**: Database creation and initialization
**Features**:
- MySQL readiness validation
- Legacy backup restoration support
- AzerothCore database creation
- Character set and collation configuration
**Container Usage**: `ac-db-init` service
**Environment Variables**:
```bash
MYSQL_HOST=ac-mysql
MYSQL_USER=root
MYSQL_ROOT_PASSWORD=your-password
DB_AUTH_NAME=acore_auth
DB_WORLD_NAME=acore_world
DB_CHARACTERS_NAME=acore_characters
MYSQL_CHARACTER_SET=utf8mb4
MYSQL_COLLATION=utf8mb4_unicode_ci
DB_WAIT_RETRIES=60
DB_WAIT_SLEEP=5
```
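The `DB_WAIT_RETRIES`/`DB_WAIT_SLEEP` pair suggests a retry loop along these lines (a sketch only; `probe_mysql` is a stand-in for the real readiness probe, such as `mysqladmin ping` against `$MYSQL_HOST`):

```bash
# Hypothetical sketch of the MySQL readiness wait.
wait_for_mysql() {
  local retries="${DB_WAIT_RETRIES:-60}" delay="${DB_WAIT_SLEEP:-5}" i=0
  while [ "$i" -lt "$retries" ]; do
    if probe_mysql; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1   # MySQL never became ready within the retry budget
}
```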
---
### 📥 `db-import.sh`
**Purpose**: Database schema import operations
**Features**:
- Database availability verification
- Dynamic configuration file generation
- AzerothCore dbimport execution
- Extended timeout handling
**Container Usage**: `ac-db-import` service
**Environment Variables**:
```bash
CONTAINER_MYSQL=ac-mysql
MYSQL_PORT=3306
MYSQL_USER=root
MYSQL_ROOT_PASSWORD=your-password
DB_AUTH_NAME=acore_auth
DB_WORLD_NAME=acore_world
DB_CHARACTERS_NAME=acore_characters
```
## Script Deployment Pattern
### Download Pattern
All GitHub-hosted scripts use this consistent pattern:
```bash
# Install curl if needed
apk add --no-cache curl # Alpine
# OR
apt-get update && apt-get install -y curl # Debian/Ubuntu
# Download script
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/SCRIPT_NAME.sh -o /tmp/SCRIPT_NAME.sh
# Make executable and run
chmod +x /tmp/SCRIPT_NAME.sh
/tmp/SCRIPT_NAME.sh
```
### Error Handling
- All scripts use `set -e` for immediate exit on errors
- Network failures trigger retries or fallback mechanisms
- Missing dependencies are automatically installed
- Detailed logging with emoji indicators for easy monitoring
### Security Considerations
- Scripts are downloaded from the official repository
- HTTPS is used for all downloads
- File integrity is verified where applicable
- Minimal privilege escalation (only when necessary)
## Troubleshooting
### Common Issues
**Script Download Failures**:
```bash
# Check network connectivity
ping raw.githubusercontent.com
# Manual download test
curl -v https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/download-client-data.sh
```
**Module Installation Issues**:
```bash
# Check module environment variables
docker exec ac-modules env | grep MODULE_
# Verify Git authentication
docker exec ac-modules git config --list
```
**Database Connection Issues**:
```bash
# Test MySQL connectivity
docker exec ac-db-init mysql -h ac-mysql -u root -p[password] -e "SELECT 1;"
# Check database container status
docker logs ac-mysql
```
### Manual Script Testing
You can download and test any GitHub-hosted script manually:
```bash
# Create test environment
mkdir -p /tmp/script-test
cd /tmp/script-test
# Download script
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/SCRIPT_NAME.sh -o test-script.sh
# Review script content
cat test-script.sh
# Set required environment variables
export MYSQL_ROOT_PASSWORD=testpass
# ... other variables as needed
# Execute (with caution - some scripts modify filesystem)
chmod +x test-script.sh
./test-script.sh
```
## Benefits of GitHub-Hosted Pattern
### ✅ Portainer Compatibility
- Only requires `docker-compose.yml` and `.env` files
- No additional file dependencies
- Works with any Docker Compose deployment method
### ✅ Maintainability
- Scripts can be updated without rebuilding containers
- Version control for all service logic
- Easy rollback to previous versions
### ✅ Consistency
- Same scripts across all environments
- Centralized script management
- Reduced configuration drift
### ✅ Reliability
- Fallback mechanisms for network failures
- Automatic dependency installation
- Comprehensive error handling
This pattern makes the AzerothCore deployment both powerful and portable, suitable for everything from local development to production Portainer deployments.


@@ -1,237 +0,0 @@
# Scripts Directory
This directory contains deployment, configuration, and management scripts for the AzerothCore Docker deployment.
## Core Scripts
### 🚀 Setup & Deployment
- **`setup-server.sh`** - Interactive server setup wizard (recommended for new users)
- **`deploy-and-check.sh`** - Automated deployment and comprehensive health check script
- **`auto-post-install.sh`** - Automated post-installation configuration
### 🔧 Configuration & Management
- **`configure-modules.sh`** - Module configuration analysis and guidance tool
- **`setup-eluna.sh`** - Lua scripting environment setup
- **`update-realmlist.sh`** - Update server address in realmlist configuration
- **`update-config.sh`** - Configuration file updates and management
### 💾 Backup & Restore
- **`backup.sh`** - Manual database backup
- **`backup-hourly.sh`** - Hourly automated backup script
- **`backup-daily.sh`** - Daily automated backup script
- **`backup-scheduler.sh`** - Enhanced backup scheduler with hourly and daily schedules
- **`restore.sh`** - Database restoration from backup
### 🧹 Maintenance
- **`cleanup.sh`** - Resource cleanup script with multiple cleanup levels
- **`rebuild-with-modules.sh`** - Rebuild containers with module compilation
- **`test-local-worldserver.sh`** - Local worldserver testing
### 🔧 Service Management (GitHub-hosted)
- **`download-client-data.sh`** - Downloads and extracts WoW client data files
- **`manage-modules.sh`** - Comprehensive module management and configuration
- **`manage-modules-sql.sh`** - SQL execution functions for module database setup
- **`mysql-startup.sh`** - MySQL initialization with backup restoration support
- **`db-init.sh`** - Database creation and initialization
- **`db-import.sh`** - Database schema import operations
### 📚 Documentation
- **`DEPLOYMENT.md`** - Complete documentation for deployment scripts
- **`CLEANUP.md`** - Complete documentation for cleanup scripts
- **`GITHUB-HOSTED-SCRIPTS.md`** - Comprehensive documentation for service scripts
## Quick Usage
### 🆕 First-Time Setup (Recommended)
```bash
# Interactive setup wizard
./scripts/setup-server.sh
```
### 🔧 Module Configuration Analysis
```bash
# Check module configuration requirements
./scripts/configure-modules.sh
```
### 🎮 Lua Scripting Setup
```bash
# Setup Eluna scripting environment
./scripts/setup-eluna.sh
```
### 🩺 Health Checks & Deployment
**Run Health Check on Current Deployment**
```bash
cd scripts
./deploy-and-check.sh --skip-deploy
```
**Full Deployment with Health Checks**
```bash
cd scripts
./deploy-and-check.sh
```
**Quick Health Check (Basic Tests Only)**
```bash
cd scripts
./deploy-and-check.sh --skip-deploy --quick-check
```
### 🧹 Cleanup Resources
```bash
cd scripts
# Stop containers only (safe)
./cleanup.sh --soft
# Remove containers + networks (preserves data)
./cleanup.sh --hard
# Complete removal (DESTROYS ALL DATA)
./cleanup.sh --nuclear
# Dry run to see what would happen
./cleanup.sh --hard --dry-run
```
### 💾 Backup & Restore Operations
```bash
# Manual backup
./scripts/backup.sh
# Restore from backup
./scripts/restore.sh backup_filename.sql
# Setup automated backups (already configured in containers)
# Hourly: ./scripts/backup-hourly.sh
# Daily: ./scripts/backup-daily.sh
```
### ☁️ GitHub-Hosted Script Usage
The GitHub-hosted scripts are automatically executed by Docker containers, but you can also run them manually for testing:
```bash
# Download and test client data script
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/download-client-data.sh -o /tmp/download-client-data.sh
chmod +x /tmp/download-client-data.sh
# Note: Requires proper environment variables and volume mounts
# Download and test module management script
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/manage-modules.sh -o /tmp/manage-modules.sh
chmod +x /tmp/manage-modules.sh
# Note: Requires module environment variables
# Download backup scheduler
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/backup-scheduler.sh -o /tmp/backup-scheduler.sh
chmod +x /tmp/backup-scheduler.sh
# Note: Requires backup environment variables
```
**Script Dependencies:**
- **Client Data Script**: Requires `/cache` and `/azerothcore/data` volumes
- **Module Scripts**: Require module environment variables and `/modules` volume
- **Database Scripts**: Require MySQL environment variables and connectivity
- **Backup Scripts**: Require `/backups` volume and MySQL connectivity
## GitHub-Hosted Service Scripts
The AzerothCore deployment uses a hybrid approach for script management:
### 🏠 Local Scripts
Traditional scripts that you run directly from your local environment for setup, configuration, and management tasks.
### ☁️ GitHub-Hosted Scripts
Service scripts that are automatically downloaded and executed by Docker containers at runtime. These scripts handle:
- **Client Data Management**: Automated download and caching of ~15GB WoW client data
- **Module Management**: Dynamic installation and configuration of AzerothCore modules
- **Database Operations**: MySQL initialization, backup restoration, and schema imports
- **Service Initialization**: Container startup logic with error handling and logging
**Benefits of GitHub-Hosted Scripts:**
- **Portainer Compatible**: Only requires docker-compose.yml and .env files
- **Always Current**: Scripts are pulled from the latest repository version
- **Maintainable**: Updates don't require container rebuilds
- **Consistent**: Same logic across all deployment environments
## Features
### 🚀 Setup & Deployment Features
- **Interactive Setup Wizard**: Guided configuration for new users
- **Automated Server Deployment**: Complete three-layer deployment system
- **Module Management**: Automated installation and configuration of 13 enhanced modules
- **Post-Install Automation**: Automatic database setup, realmlist configuration, and service restart
### 🔧 Configuration Features
- **Module Analysis**: Identifies missing configurations and requirements
- **Lua Scripting Setup**: Automated Eluna environment with example scripts
- **Realmlist Management**: Dynamic server address configuration
- **Config File Management**: Automated .conf file generation from .conf.dist templates
### 🩺 Health & Monitoring Features
- **Container Health Validation**: Checks all core containers
- **Port Connectivity Tests**: Validates all external ports
- **Web Service Verification**: HTTP response and content validation
- **Database Validation**: Schema and realm configuration checks
- **Comprehensive Reporting**: Color-coded status with detailed results
### 💾 Backup & Maintenance Features
- **Automated Backups**: Scheduled hourly and daily database backups
- **Manual Backup/Restore**: On-demand backup and restoration tools
- **Multi-Level Cleanup**: Safe, hard, and nuclear cleanup options
- **Container Rebuilding**: Module compilation and container rebuilding support
## Script Usage Examples
### First-Time Server Setup
```bash
# Complete guided setup (recommended)
./scripts/setup-server.sh
# Follow the interactive prompts to configure:
# - Server network settings
# - Storage locations
# - Database passwords
# - Module selections
```
### Post-Installation Configuration
```bash
# Analyze and configure modules
./scripts/configure-modules.sh
# Setup Lua scripting environment
./scripts/setup-eluna.sh
# Update server address after IP changes
./scripts/update-realmlist.sh new.server.address
```
### Maintenance Operations
```bash
# Health check existing deployment
./scripts/deploy-and-check.sh --skip-deploy
# Clean restart (preserves data)
./scripts/cleanup.sh --hard
./scripts/deploy-and-check.sh
# Backup before major changes
./scripts/backup.sh
```
## Configuration Variables
The scripts work with the updated environment variable names:
- `MYSQL_EXTERNAL_PORT` (database port)
- `AUTH_EXTERNAL_PORT` (authentication server port)
- `WORLD_EXTERNAL_PORT` (world server port)
- `SOAP_EXTERNAL_PORT` (SOAP API port)
- `MYSQL_ROOT_PASSWORD` (database root password)
- `SERVER_ADDRESS` (external server address)
- `STORAGE_ROOT` (data storage location)
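A minimal `.env` sketch wiring these names together (all values are illustrative, though the ports match the defaults documented above):

```bash
# Illustrative values only; adjust for your environment
MYSQL_EXTERNAL_PORT=64306
AUTH_EXTERNAL_PORT=3784
WORLD_EXTERNAL_PORT=8215
SOAP_EXTERNAL_PORT=7778
MYSQL_ROOT_PASSWORD=change-me
SERVER_ADDRESS=play.example.com
STORAGE_ROOT=./storage
```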
For complete documentation, see `DEPLOYMENT.md` and `CLEANUP.md`.


@@ -1,172 +0,0 @@
# Test Local Worldserver Performance
This test setup lets you compare worldserver performance with game files stored locally inside the container versus on an external volume mount.
## What This Tests
### 🧪 **Test Configuration**: Local Game Files with NFS Caching
- Game files (maps, vmaps, mmaps, DBC) cached on NFS and copied to local container storage
- **No external volume mount** for `/azerothcore/data` (files stored locally for performance)
- **NFS cache** for downloaded files (persistent across container restarts)
- First run: ~15GB download and extraction time
- Subsequent runs: ~5-10 minutes (extraction only from cache)
### 📊 **Comparison with Standard Configuration**: External Volume
- Game files stored in external volume mount
- Persistent across container restarts
- One-time download, reused across deployments
## Quick Start
### Prerequisites
Make sure the database and authserver are running first:
```bash
# Start database layer
docker compose --env-file docker-compose-azerothcore-database.env -f docker-compose-azerothcore-database.yml up -d
# Start authserver (minimal requirement)
docker compose --env-file docker-compose-azerothcore-services.env -f docker-compose-azerothcore-services.yml up -d ac-authserver
```
### Run the Test
```bash
cd scripts
# Start test worldserver (downloads files locally)
./test-local-worldserver.sh
# Monitor logs
./test-local-worldserver.sh --logs
# Cleanup when done
./test-local-worldserver.sh --cleanup
```
## Test Details
### Port Configuration
- **Test Worldserver**: `localhost:8216` (game), `localhost:7779` (SOAP)
- **Regular Worldserver**: `localhost:8215` (game), `localhost:7778` (SOAP)
Both can run simultaneously without conflicts.
### Download Process
The test worldserver will:
1. Check for cached client data in NFS storage
2. If cached: Copy from cache (fast)
3. If not cached: Download ~15GB client data from GitHub releases and cache it
4. Extract maps, vmaps, mmaps, and DBC files to local container storage
5. Verify all required directories exist
6. Start the worldserver
**Expected startup time**:
- First run: 20-30 minutes (download + extraction)
- Subsequent runs: 5-10 minutes (extraction only from cache)
### Storage Locations
- **Game Files**: `/azerothcore/data` (inside container, not mounted - for performance testing)
- **Cache**: External mount at `storage/azerothcore/cache-test/` (persistent across restarts)
- **Config**: External mount (shared with regular deployment)
- **Logs**: External mount at `storage/azerothcore/logs-test/`
## Performance Metrics to Compare
### Startup Time
- **Regular**: ~2-3 minutes (files already extracted in external volume)
- **Test (first run)**: ~20-30 minutes (download + extraction + cache)
- **Test (cached)**: ~5-10 minutes (extraction only from cache)
### Runtime Performance
Compare these during gameplay:
- Map loading times
- Zone transitions
- Server responsiveness
- Memory usage
- CPU utilization
### Storage Usage
- **Regular**: Persistent ~15GB in external volume
- **Test**: ~15GB cache in external volume + ~15GB ephemeral inside container
- **Test Total**: ~30GB during operation (cache + local copy)
## Monitoring Commands
```bash
# Check container status
docker ps | grep test
# Monitor logs
docker logs ac-worldserver-test -f
# Check game data size (local in container)
docker exec ac-worldserver-test du -sh /azerothcore/data/*
# Check cache size (persistent)
ls -la storage/azerothcore/cache-test/
du -sh storage/azerothcore/cache-test/*
# Check cached version
cat storage/azerothcore/cache-test/client-data-version.txt
# Check server processes
docker exec ac-worldserver-test ps aux | grep worldserver
# Monitor resource usage
docker stats ac-worldserver-test
```
## Testing Scenarios
### 1. Startup Performance
```bash
# Time the full startup
time ./test-local-worldserver.sh
# Compare with regular worldserver restart
docker restart ac-worldserver
```
### 2. Runtime Performance
Connect a game client to both servers and compare:
- Zone loading times
- Combat responsiveness
- Large area rendering
### 3. Resource Usage
```bash
# Compare memory usage
docker stats ac-worldserver ac-worldserver-test --no-stream
# Compare disk I/O (iostat requires the sysstat package inside each image)
docker exec ac-worldserver-test iostat 1 5
docker exec ac-worldserver iostat 1 5
```
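To turn the `docker stats` snapshots into a side-by-side number, memory values can be normalized first. The `mem_to_mib` helper below is an assumption added for illustration; `--format '{{.MemUsage}}'` is standard `docker stats` templating.

```shell
# Normalize docker stats memory strings like "512MiB / 8GiB" to MiB.
# mem_to_mib is a hypothetical helper, not part of any tool.
mem_to_mib() {
  local v="${1%% *}"            # keep the usage part of "512MiB / 8GiB"
  case "$v" in
    *GiB) awk -v n="${v%GiB}" 'BEGIN { printf "%d", n * 1024 }' ;;
    *MiB) awk -v n="${v%MiB}" 'BEGIN { printf "%d", n }' ;;
    *KiB) awk -v n="${v%KiB}" 'BEGIN { printf "%d", n / 1024 }' ;;
    *)    echo 0 ;;
  esac
}

# Usage (requires both containers running):
# for c in ac-worldserver ac-worldserver-test; do
#   usage=$(docker stats "$c" --no-stream --format '{{.MemUsage}}')
#   echo "$c: $(mem_to_mib "$usage") MiB"
# done
```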
## Cleanup
```bash
# Stop and remove test container
./test-local-worldserver.sh --cleanup
# Remove test logs
rm -rf storage/azerothcore/logs-test/
```
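A quick post-cleanup check can confirm nothing was left behind. The `test_cleanup_ok` function is a hedged sketch (its name and interface are assumptions); the container name and logs path come from this guide.

```shell
# Verify the test deployment is fully removed.
# $1: newline-separated container names, $2: test logs directory
test_cleanup_ok() {
  ! printf '%s\n' "$1" | grep -qx 'ac-worldserver-test' && [ ! -d "$2" ]
}

# Usage:
# if test_cleanup_ok "$(docker ps -a --format '{{.Names}}')" storage/azerothcore/logs-test; then
#   echo "cleanup complete"
# else
#   echo "leftovers remain"
# fi
```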
## Expected Results
### Pros of Local Files
- Potentially faster file I/O (no network mount overhead)
- Self-contained container
- No external volume dependencies
### Cons of Local Files
- Much longer cold-start time (20-30 minutes for download + extraction; 5-10 minutes from cache)
- Re-extraction on every container recreation (the download itself is skipped once cached)
- Larger total storage footprint (~30GB: persistent cache plus in-container copy)
- Game data is lost when the container is recreated (only the cache persists)
## Conclusion
This test will help determine if the performance benefits of local file storage outweigh the significant startup time and storage overhead costs.

@@ -1,11 +1,12 @@
#!/bin/bash
# ac-compose
set -e
echo "🚀 AzerothCore Auto Post-Install Configuration"
echo "=============================================="
# Install required packages
apk add --no-cache curl mysql-client bash docker-cli-compose jq
apk add --no-cache curl mysql-client bash docker-cli-compose jq || apk add --no-cache curl mysql-client bash jq
# Create install markers directory
mkdir -p /install-markers
@@ -15,7 +16,6 @@ if [ -f "/install-markers/post-install-completed" ]; then
echo "✅ Post-install configuration already completed"
echo " Marker file found: /install-markers/post-install-completed"
echo "🔄 To re-run post-install configuration, delete the marker file and restart this container"
echo "📝 Command: docker exec ${CONTAINER_POST_INSTALL} rm -f /install-markers/post-install-completed"
echo ""
echo "🏃 Keeping container alive for manual operations..."
tail -f /dev/null
@@ -50,8 +50,6 @@ else
if [ ! -f "/azerothcore/config/authserver.conf" ] || [ ! -f "/azerothcore/config/worldserver.conf" ]; then
echo "❌ Configuration files not found after waiting"
echo " Expected: /azerothcore/config/authserver.conf"
echo " Expected: /azerothcore/config/worldserver.conf"
exit 1
fi
@@ -59,126 +57,34 @@ else
echo ""
echo "🔧 Step 1: Updating configuration files..."
# Download and execute update-config.sh
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/update-config.sh -o /tmp/update-config.sh
chmod +x /tmp/update-config.sh
# Update DB connection lines and any necessary settings directly with sed
sed -i "s|^LoginDatabaseInfo *=.*|LoginDatabaseInfo = \"${MYSQL_HOST};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}\"|" /azerothcore/config/authserver.conf || true
sed -i "s|^LoginDatabaseInfo *=.*|LoginDatabaseInfo = \"${MYSQL_HOST};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}\"|" /azerothcore/config/worldserver.conf || true
sed -i "s|^WorldDatabaseInfo *=.*|WorldDatabaseInfo = \"${MYSQL_HOST};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_WORLD_NAME}\"|" /azerothcore/config/worldserver.conf || true
sed -i "s|^CharacterDatabaseInfo *=.*|CharacterDatabaseInfo = \"${MYSQL_HOST};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_CHARACTERS_NAME}\"|" /azerothcore/config/worldserver.conf || true
# Modify script to use container environment
sed -i 's|docker-compose-azerothcore-services.env|/project/docker-compose-azerothcore-services.env|' /tmp/update-config.sh
sed -i 's|CONFIG_DIR="${STORAGE_PATH}/config"|CONFIG_DIR="/azerothcore/config"|' /tmp/update-config.sh
# Execute update-config.sh
cd /project
/tmp/update-config.sh
if [ $? -eq 0 ]; then
echo "✅ Configuration files updated successfully"
else
echo "❌ Failed to update configuration files"
exit 1
fi
echo "✅ Configuration files updated"
# Step 2: Update realmlist table
echo ""
echo "🌐 Step 2: Updating realmlist table..."
mysql -h "${MYSQL_HOST}" -u"${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" --skip-ssl-verify "${DB_AUTH_NAME}" -e "
UPDATE realmlist SET address='${SERVER_ADDRESS}', port=${REALM_PORT} WHERE id=1;
" || echo "⚠️ Could not update realmlist table"
# Download and execute update-realmlist.sh
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/update-realmlist.sh -o /tmp/update-realmlist.sh
chmod +x /tmp/update-realmlist.sh
echo "✅ Realmlist updated"
# Modify script to use container environment
sed -i 's|docker-compose-azerothcore-services.env|/project/docker-compose-azerothcore-services.env|' /tmp/update-realmlist.sh
# Replace all docker exec mysql commands with direct mysql commands
sed -i "s|docker exec ac-mysql mysql -u \"\${MYSQL_USER}\" -p\"\${MYSQL_ROOT_PASSWORD}\" \"\${DB_AUTH_NAME}\"|mysql -h \"${MYSQL_HOST}\" -u\"${MYSQL_USER}\" -p\"${MYSQL_ROOT_PASSWORD}\" --skip-ssl-verify \"${DB_AUTH_NAME}\"|g" /tmp/update-realmlist.sh
sed -i "s|docker exec ac-mysql mysql -u \"\${MYSQL_USER}\" -p\"\${MYSQL_ROOT_PASSWORD}\"|mysql -h \"${MYSQL_HOST}\" -u\"${MYSQL_USER}\" -p\"${MYSQL_ROOT_PASSWORD}\" --skip-ssl-verify|g" /tmp/update-realmlist.sh
# Execute update-realmlist.sh
cd /project
/tmp/update-realmlist.sh
if [ $? -eq 0 ]; then
echo "✅ Realmlist table updated successfully"
else
echo "❌ Failed to update realmlist table"
exit 1
fi
# Step 3: Restart services to apply changes
echo ""
echo " Step 3: Restarting services to apply changes..."
echo "📝 Configuration changes have been applied to files"
echo "🔄 Restarting authserver and worldserver to pick up new configuration..."
# Detect container runtime (Docker or Podman)
CONTAINER_CMD=""
if command -v docker >/dev/null 2>&1; then
# Check if we can connect to Docker daemon
if docker version >/dev/null 2>&1; then
CONTAINER_CMD="docker"
echo "🐳 Detected Docker runtime"
fi
fi
if [ -z "$CONTAINER_CMD" ] && command -v podman >/dev/null 2>&1; then
# Check if we can connect to Podman
if podman version >/dev/null 2>&1; then
CONTAINER_CMD="podman"
echo "🦭 Detected Podman runtime"
fi
fi
if [ -z "$CONTAINER_CMD" ]; then
echo "⚠️ No container runtime detected (docker/podman) - skipping restart"
else
# Restart authserver
if [ -n "$CONTAINER_AUTHSERVER" ]; then
echo "🔄 Restarting authserver container: $CONTAINER_AUTHSERVER"
if $CONTAINER_CMD restart "$CONTAINER_AUTHSERVER" 2>/dev/null; then
echo "✅ Authserver restarted successfully"
else
echo "⚠️ Failed to restart authserver (may not be running yet)"
fi
fi
# Restart worldserver
if [ -n "$CONTAINER_WORLDSERVER" ]; then
echo "🔄 Restarting worldserver container: $CONTAINER_WORLDSERVER"
if $CONTAINER_CMD restart "$CONTAINER_WORLDSERVER" 2>/dev/null; then
echo "✅ Worldserver restarted successfully"
else
echo "⚠️ Failed to restart worldserver (may not be running yet)"
fi
fi
fi
echo "✅ Service restart completed"
echo " Step 3: (Optional) Restart services to apply changes — handled externally"
# Create completion marker
echo "$(date)" > /install-markers/post-install-completed
echo "NEW_INSTALL_DATE=$(date)" >> /install-markers/post-install-completed
echo "CONFIG_FILES_UPDATED=true" >> /install-markers/post-install-completed
echo "REALMLIST_UPDATED=true" >> /install-markers/post-install-completed
echo "SERVICES_RESTARTED=true" >> /install-markers/post-install-completed
echo ""
echo "🎉 Auto post-install configuration completed successfully!"
echo ""
echo "📋 Summary of changes:"
echo " ✅ AuthServer configured with production database settings"
echo " ✅ WorldServer configured with production database settings"
echo " ✅ Realmlist updated with server address: ${SERVER_ADDRESS}:${REALM_PORT}"
echo " ✅ Services restarted to apply changes"
echo " ✅ Completion marker created: /install-markers/post-install-completed"
echo ""
echo "🎮 Your AzerothCore server is now ready for production!"
echo " Players can connect to: ${SERVER_ADDRESS}:${REALM_PORT}"
echo ""
echo "💡 Next steps:"
echo " 1. Create admin accounts using the worldserver console"
echo " 2. Test client connectivity"
echo " 3. Configure any additional modules as needed"
echo ""
echo "🏃 Keeping container alive for future manual operations..."
tail -f /dev/null
fi
fi

@@ -1,97 +0,0 @@
#!/bin/bash
set -e
# Configuration from environment variables
MYSQL_HOST=${MYSQL_HOST:-ac-mysql}
MYSQL_PORT=${MYSQL_PORT:-3306}
MYSQL_USER=${MYSQL_USER:-root}
MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
BACKUP_DIR="/backups"
RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-3}
DATE_FORMAT="%Y%m%d_%H%M%S"
# Database names from environment variables - core databases
DATABASES=("${DB_AUTH_NAME:-acore_auth}" "${DB_WORLD_NAME:-acore_world}" "${DB_CHARACTERS_NAME:-acore_characters}")
# Check if acore_playerbots database exists and add it to backup list
echo "Checking for optional acore_playerbots database..."
if mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD -e "USE acore_playerbots;" 2>/dev/null; then
DATABASES+=("acore_playerbots")
echo "✅ acore_playerbots database found - will be included in backup"
else
echo " acore_playerbots database not found - skipping (this is normal for some installations)"
fi
# Create daily backup directory
DAILY_DIR="$BACKUP_DIR/daily"
mkdir -p $DAILY_DIR
# Generate timestamp
TIMESTAMP=$(date +$DATE_FORMAT)
BACKUP_SUBDIR="$DAILY_DIR/$TIMESTAMP"
mkdir -p $BACKUP_SUBDIR
echo "[$TIMESTAMP] Starting AzerothCore daily backup..."
echo "[$TIMESTAMP] Databases to backup: ${DATABASES[@]}"
# Backup each database with additional options for daily backups
for db in "${DATABASES[@]}"; do
echo "[$TIMESTAMP] Backing up database: $db"
mysqldump -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD \
--single-transaction --routines --triggers --events \
--hex-blob --quick --lock-tables=false \
--add-drop-database --databases $db \
--master-data=2 --flush-logs \
| gzip > $BACKUP_SUBDIR/${db}.sql.gz
if [ $? -eq 0 ]; then
SIZE=$(du -h $BACKUP_SUBDIR/${db}.sql.gz | cut -f1)
echo "[$TIMESTAMP] ✅ Successfully backed up $db ($SIZE)"
else
echo "[$TIMESTAMP] ❌ Failed to backup $db"
exit 1
fi
done
# Create comprehensive backup manifest for daily backups
BACKUP_SIZE=$(du -sh $BACKUP_SUBDIR | cut -f1)
MYSQL_VERSION=$(mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD -e 'SELECT VERSION();' -s -N)
cat > $BACKUP_SUBDIR/manifest.json <<EOF
{
"timestamp": "$TIMESTAMP",
"type": "daily",
"databases": ["${DATABASES[@]}"],
"backup_size": "$BACKUP_SIZE",
"retention_days": $RETENTION_DAYS,
"mysql_version": "$MYSQL_VERSION",
"backup_method": "mysqldump with master-data and flush-logs",
"created_by": "acore-compose2 backup system"
}
EOF
# Create database statistics for daily backups
echo "[$TIMESTAMP] Generating database statistics..."
for db in "${DATABASES[@]}"; do
echo "[$TIMESTAMP] Statistics for $db:"
mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD -e "
SELECT
TABLE_SCHEMA as 'Database',
COUNT(*) as 'Tables',
ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024, 2) as 'Size_MB'
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = '$db'
GROUP BY TABLE_SCHEMA;
" >> $BACKUP_SUBDIR/database_stats.txt
done
# Clean up old daily backups (keep only last N days)
echo "[$TIMESTAMP] Cleaning up daily backups older than $RETENTION_DAYS days..."
find $DAILY_DIR -type d -name "[0-9]*" -mtime +$RETENTION_DAYS -exec rm -rf {} + 2>/dev/null || true
# Log backup completion
echo "[$TIMESTAMP] ✅ Daily backup completed successfully"
echo "[$TIMESTAMP] Backup location: $BACKUP_SUBDIR"
echo "[$TIMESTAMP] Backup size: $BACKUP_SIZE"
echo "[$TIMESTAMP] Current daily backups:"
ls -la $DAILY_DIR/ | tail -n +2

@@ -1,75 +0,0 @@
#!/bin/bash
set -e
# Configuration from environment variables
MYSQL_HOST=${MYSQL_HOST:-ac-mysql}
MYSQL_PORT=${MYSQL_PORT:-3306}
MYSQL_USER=${MYSQL_USER:-root}
MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
BACKUP_DIR="/backups"
RETENTION_HOURS=${BACKUP_RETENTION_HOURS:-6}
DATE_FORMAT="%Y%m%d_%H%M%S"
# Database names from environment variables - core databases
DATABASES=("${DB_AUTH_NAME:-acore_auth}" "${DB_WORLD_NAME:-acore_world}" "${DB_CHARACTERS_NAME:-acore_characters}")
# Check if acore_playerbots database exists and add it to backup list
echo "Checking for optional acore_playerbots database..."
if mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD -e "USE acore_playerbots;" 2>/dev/null; then
DATABASES+=("acore_playerbots")
echo "✅ acore_playerbots database found - will be included in backup"
else
echo " acore_playerbots database not found - skipping (this is normal for some installations)"
fi
# Create hourly backup directory
HOURLY_DIR="$BACKUP_DIR/hourly"
mkdir -p $HOURLY_DIR
# Generate timestamp
TIMESTAMP=$(date +$DATE_FORMAT)
BACKUP_SUBDIR="$HOURLY_DIR/$TIMESTAMP"
mkdir -p $BACKUP_SUBDIR
echo "[$TIMESTAMP] Starting AzerothCore hourly backup..."
echo "[$TIMESTAMP] Databases to backup: ${DATABASES[@]}"
# Backup each database
for db in "${DATABASES[@]}"; do
echo "[$TIMESTAMP] Backing up database: $db"
mysqldump -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD \
--single-transaction --routines --triggers --events \
--hex-blob --quick --lock-tables=false \
--add-drop-database --databases $db \
| gzip > $BACKUP_SUBDIR/${db}.sql.gz
if [ $? -eq 0 ]; then
SIZE=$(du -h $BACKUP_SUBDIR/${db}.sql.gz | cut -f1)
echo "[$TIMESTAMP] ✅ Successfully backed up $db ($SIZE)"
else
echo "[$TIMESTAMP] ❌ Failed to backup $db"
exit 1
fi
done
# Create backup manifest
cat > $BACKUP_SUBDIR/manifest.json <<EOF
{
"timestamp": "$TIMESTAMP",
"type": "hourly",
"databases": ["${DATABASES[@]}"],
"backup_size": "$(du -sh $BACKUP_SUBDIR | cut -f1)",
"retention_hours": $RETENTION_HOURS,
"mysql_version": "$(mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD -e 'SELECT VERSION();' -s -N)"
}
EOF
# Clean up old hourly backups (keep only last N hours)
echo "[$TIMESTAMP] Cleaning up hourly backups older than $RETENTION_HOURS hours..."
find $HOURLY_DIR -type d -name "[0-9]*" -mmin +$((RETENTION_HOURS * 60)) -exec rm -rf {} + 2>/dev/null || true
# Log backup completion
echo "[$TIMESTAMP] ✅ Hourly backup completed successfully"
echo "[$TIMESTAMP] Backup location: $BACKUP_SUBDIR"
echo "[$TIMESTAMP] Current hourly backups:"
ls -la $HOURLY_DIR/ | tail -n +2

scripts/backup-scheduler.sh Normal file → Executable file
@@ -1,57 +1,104 @@
#!/bin/bash
# ac-compose
set -e
echo "🔧 Starting enhanced backup service with hourly and daily schedules..."
BACKUP_DIR_BASE="/backups"
HOURLY_DIR="$BACKUP_DIR_BASE/hourly"
DAILY_DIR="$BACKUP_DIR_BASE/daily"
RETENTION_HOURS=${BACKUP_RETENTION_HOURS:-6}
RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-3}
DAILY_TIME=${BACKUP_DAILY_TIME:-09}
MYSQL_PORT=${MYSQL_PORT:-3306}
# Install curl if not available (handle different package managers)
# NOTE: curl is already available in mysql:8.0 base image, commenting out to fix operator precedence issue
# microdnf install -y curl || yum install -y curl || apt-get update && apt-get install -y curl
mkdir -p "$HOURLY_DIR" "$DAILY_DIR"
# Download backup scripts from GitHub
echo "📥 Downloading backup scripts from GitHub..."
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/backup.sh -o /tmp/backup.sh
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/backup-hourly.sh -o /tmp/backup-hourly.sh
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/backup-daily.sh -o /tmp/backup-daily.sh
chmod +x /tmp/backup.sh /tmp/backup-hourly.sh /tmp/backup-daily.sh
log() { echo "[$(date '+%F %T')] $*"; }
# Wait for MySQL to be ready before starting backup service
echo "⏳ Waiting for MySQL to be ready..."
sleep 30
# Build database list from env (include optional acore_playerbots if present)
database_list() {
local dbs=("${DB_AUTH_NAME}" "${DB_WORLD_NAME}" "${DB_CHARACTERS_NAME}")
if mysql -h"${MYSQL_HOST}" -P"${MYSQL_PORT}" -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -e "USE acore_playerbots;" >/dev/null 2>&1; then
dbs+=("acore_playerbots")
log "Detected optional database: acore_playerbots (will be backed up)"
fi
printf '%s\n' "${dbs[@]}"
}
# Run initial daily backup
echo "🚀 Running initial daily backup..."
/tmp/backup-daily.sh
run_backup() {
local tier_dir="$1" # hourly or daily dir
local tier_type="$2" # "hourly" or "daily"
local ts=$(date '+%Y%m%d_%H%M%S')
local target_dir="$tier_dir/$ts"
mkdir -p "$target_dir"
log "Starting ${tier_type} backup to $target_dir"
# Enhanced scheduler with hourly and daily backups
echo "⏰ Starting enhanced backup scheduler:"
echo " 📅 Daily backups: ${BACKUP_DAILY_TIME}:00 UTC (retention: ${BACKUP_RETENTION_DAYS} days)"
echo " ⏰ Hourly backups: every hour (retention: ${BACKUP_RETENTION_HOURS} hours)"
local -a dbs
mapfile -t dbs < <(database_list)
# Track last backup times to avoid duplicates
last_daily_hour=""
last_hourly_minute=""
for db in "${dbs[@]}"; do
log "Backing up database: $db"
if mysqldump \
-h"${MYSQL_HOST}" -P"${MYSQL_PORT}" -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" \
--single-transaction --routines --triggers --events \
--hex-blob --quick --lock-tables=false \
--add-drop-database --databases "$db" \
| gzip -c > "$target_dir/${db}.sql.gz"; then
log "✅ Successfully backed up $db"
else
log "❌ Failed to back up $db"
fi
done
# Create backup manifest (parity with scripts/backup.sh and backup-hourly.sh)
local size; size=$(du -sh "$target_dir" | cut -f1)
local mysql_ver; mysql_ver=$(mysql -h"${MYSQL_HOST}" -P"${MYSQL_PORT}" -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -e 'SELECT VERSION();' -s -N 2>/dev/null || echo "unknown")
if [ "$tier_type" = "hourly" ]; then
cat > "$target_dir/manifest.json" <<EOF
{
"timestamp": "${ts}",
"type": "hourly",
"databases": [$(printf '"%s",' "${dbs[@]}" | sed 's/,$//')],
"backup_size": "${size}",
"retention_hours": ${RETENTION_HOURS},
"mysql_version": "${mysql_ver}"
}
EOF
else
cat > "$target_dir/manifest.json" <<EOF
{
"timestamp": "${ts}",
"type": "daily",
"databases": [$(printf '"%s",' "${dbs[@]}" | sed 's/,$//')],
"backup_size": "${size}",
"retention_days": ${RETENTION_DAYS},
"mysql_version": "${mysql_ver}"
}
EOF
fi
log "Backup complete: $target_dir (size ${size})"
}
cleanup_old() {
find "$HOURLY_DIR" -mindepth 1 -maxdepth 1 -type d -mmin +$((RETENTION_HOURS*60)) -print -exec rm -rf {} + 2>/dev/null || true
find "$DAILY_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -print -exec rm -rf {} + 2>/dev/null || true
}
log "Backup scheduler starting: hourly($RETENTION_HOURS h), daily($RETENTION_DAYS d at ${DAILY_TIME}:00)"
while true; do
current_hour=$(date +%H)
current_minute=$(date +%M)
current_time="$current_hour:$current_minute"
minute=$(date '+%M')
hour=$(date '+%H')
# Daily backup check (configurable time)
if [ "$current_hour" = "${BACKUP_DAILY_TIME}" ] && [ "$current_minute" = "00" ] && [ "$last_daily_hour" != "$current_hour" ]; then
echo "📅 [$(date)] Daily backup time reached, running daily backup..."
/tmp/backup-daily.sh
last_daily_hour="$current_hour"
# Sleep for 2 minutes to avoid running multiple times
sleep 120
# Hourly backup check (every hour at minute 0, except during daily backup)
elif [ "$current_minute" = "00" ] && [ "$current_hour" != "${BACKUP_DAILY_TIME}" ] && [ "$last_hourly_minute" != "$current_minute" ]; then
echo "⏰ [$(date)] Hourly backup time reached, running hourly backup..."
/tmp/backup-hourly.sh
last_hourly_minute="$current_minute"
# Sleep for 2 minutes to avoid running multiple times
sleep 120
else
# Sleep for 1 minute before checking again
sleep 60
if [ "$minute" = "00" ]; then
run_backup "$HOURLY_DIR" "hourly"
fi
done
if [ "$hour" = "$DAILY_TIME" ] && [ "$minute" = "00" ]; then
run_backup "$DAILY_DIR" "daily"
fi
cleanup_old
sleep 60
done

@@ -1,73 +0,0 @@
#!/bin/bash
set -e
# Configuration from environment variables
MYSQL_HOST=${MYSQL_HOST:-ac-mysql}
MYSQL_PORT=${MYSQL_PORT:-3306}
MYSQL_USER=${MYSQL_USER:-root}
MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
BACKUP_DIR="/backups"
RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}
DATE_FORMAT="%Y%m%d_%H%M%S"
# Database names - core databases
DATABASES=("acore_auth" "acore_world" "acore_characters")
# Check if acore_playerbots database exists and add it to backup list
echo "Checking for optional acore_playerbots database..."
if mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD -e "USE acore_playerbots;" 2>/dev/null; then
DATABASES+=("acore_playerbots")
echo "✅ acore_playerbots database found - will be included in backup"
else
echo " acore_playerbots database not found - skipping (this is normal for some installations)"
fi
# Create backup directory
mkdir -p $BACKUP_DIR
# Generate timestamp
TIMESTAMP=$(date +$DATE_FORMAT)
BACKUP_SUBDIR="$BACKUP_DIR/$TIMESTAMP"
mkdir -p $BACKUP_SUBDIR
echo "[$TIMESTAMP] Starting AzerothCore database backup..."
echo "[$TIMESTAMP] Databases to backup: ${DATABASES[@]}"
# Backup each database
for db in "${DATABASES[@]}"; do
echo "[$TIMESTAMP] Backing up database: $db"
mysqldump -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD \
--single-transaction --routines --triggers --events \
--hex-blob --quick --lock-tables=false \
--add-drop-database --databases $db \
| gzip > $BACKUP_SUBDIR/${db}.sql.gz
if [ $? -eq 0 ]; then
SIZE=$(du -h $BACKUP_SUBDIR/${db}.sql.gz | cut -f1)
echo "[$TIMESTAMP] ✅ Successfully backed up $db ($SIZE)"
else
echo "[$TIMESTAMP] ❌ Failed to backup $db"
exit 1
fi
done
# Create backup manifest
cat > $BACKUP_SUBDIR/manifest.json <<EOF
{
"timestamp": "$TIMESTAMP",
"databases": ["${DATABASES[@]}"],
"backup_size": "$(du -sh $BACKUP_SUBDIR | cut -f1)",
"retention_days": $RETENTION_DAYS,
"mysql_version": "$(mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD -e 'SELECT VERSION();' -s -N)"
}
EOF
# Clean up old backups based on retention policy
echo "[$TIMESTAMP] Cleaning up backups older than $RETENTION_DAYS days..."
find $BACKUP_DIR -type d -name "[0-9]*" -mtime +$RETENTION_DAYS -exec rm -rf {} + 2>/dev/null || true
# Log backup completion
echo "[$TIMESTAMP] ✅ Backup completed successfully"
echo "[$TIMESTAMP] Backup location: $BACKUP_SUBDIR"
echo "[$TIMESTAMP] Current backups:"
ls -la $BACKUP_DIR/

@@ -1,419 +0,0 @@
#!/bin/bash
# ==============================================
# AzerothCore Docker Cleanup Script
# ==============================================
# This script provides various levels of cleanup for AzerothCore Docker resources
# Usage: ./cleanup.sh [--soft] [--hard] [--nuclear] [--dry-run]
set -e # Exit on any error
# Change to the project root directory (parent of scripts directory)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
cd "$PROJECT_ROOT"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color
# Script options
CLEANUP_LEVEL=""
DRY_RUN=false
FORCE=false
PRESERVE_BACKUPS=false
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--soft)
CLEANUP_LEVEL="soft"
shift
;;
--hard)
CLEANUP_LEVEL="hard"
shift
;;
--nuclear)
CLEANUP_LEVEL="nuclear"
shift
;;
--dry-run)
DRY_RUN=true
shift
;;
--force)
FORCE=true
shift
;;
--preserve-backups)
PRESERVE_BACKUPS=true
shift
;;
-h|--help)
echo "AzerothCore Docker Cleanup Script"
echo ""
echo "Usage: $0 [CLEANUP_LEVEL] [OPTIONS]"
echo ""
echo "CLEANUP LEVELS:"
echo " --soft Stop containers only (preserves data)"
echo " --hard Stop containers + remove containers + networks (preserves volumes/data)"
echo " --nuclear Complete removal: containers + networks + volumes + images (DESTROYS ALL DATA)"
echo ""
echo "OPTIONS:"
echo " --dry-run Show what would be done without actually doing it"
echo " --preserve-backups Keep database backup files when cleaning storage"
echo " --force Skip confirmation prompts"
echo " --help Show this help message"
echo ""
echo "EXAMPLES:"
echo " $0 --soft # Stop all containers"
echo " $0 --hard --dry-run # Show what hard cleanup would do"
echo " $0 --nuclear --force # Complete removal without prompts"
exit 0
;;
*)
echo "Unknown option $1"
echo "Use --help for usage information"
exit 1
;;
esac
done
# Function to print colored output
print_status() {
local status=$1
local message=$2
case $status in
"INFO")
echo -e "${BLUE} ${message}${NC}"
;;
"SUCCESS")
echo -e "${GREEN}${message}${NC}"
;;
"WARNING")
echo -e "${YELLOW}⚠️ ${message}${NC}"
;;
"ERROR")
echo -e "${RED}${message}${NC}"
;;
"DANGER")
echo -e "${RED}💀 ${message}${NC}"
;;
"HEADER")
echo -e "\n${MAGENTA}=== ${message} ===${NC}"
;;
esac
}
# Function to execute command with dry-run support
execute_command() {
local description=$1
local command=$2
if [ "$DRY_RUN" = true ]; then
print_status "INFO" "[DRY RUN] Would execute: $description"
echo " Command: $command"
else
print_status "INFO" "Executing: $description"
if eval "$command"; then
print_status "SUCCESS" "Completed: $description"
else
print_status "WARNING" "Failed or no action needed: $description"
fi
fi
}
# Function to get confirmation
get_confirmation() {
local message=$1
if [ "$FORCE" = true ]; then
print_status "INFO" "Force mode enabled, skipping confirmation"
return 0
fi
echo -e "${YELLOW}⚠️ ${message}${NC}"
read -p "Are you sure? (yes/no): " response
case $response in
yes|YES|y|Y)
return 0
;;
*)
print_status "INFO" "Operation cancelled by user"
exit 0
;;
esac
}
# Function to show current resources
show_current_resources() {
print_status "HEADER" "CURRENT AZEROTHCORE RESOURCES"
echo -e "${BLUE}Containers:${NC}"
if docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Image}}" | grep -E "ac-|acore" | head -20; then
echo ""
else
echo " No AzerothCore containers found"
fi
echo -e "${BLUE}Networks:${NC}"
if docker network ls --format "table {{.Name}}\t{{.Driver}}\t{{.Scope}}" | grep -E "azerothcore|acore"; then
echo ""
else
echo " No AzerothCore networks found"
fi
echo -e "${BLUE}Volumes:${NC}"
if docker volume ls --format "table {{.Name}}\t{{.Driver}}" | grep -E "ac_|acore|azerothcore"; then
echo ""
else
echo " No AzerothCore volumes found"
fi
echo -e "${BLUE}Images:${NC}"
if docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" | grep -E "acore|azerothcore|phpmyadmin|keira3|uprightbass360.*playerbots" | head -10; then
echo ""
else
echo " No AzerothCore-related images found"
fi
}
# Function to perform soft cleanup
soft_cleanup() {
print_status "HEADER" "SOFT CLEANUP - STOPPING CONTAINERS"
get_confirmation "This will stop all AzerothCore containers but preserve all data."
# Stop modules layer (if exists)
execute_command "Stop modules layer" \
"docker compose --env-file docker-compose-azerothcore-modules-custom.env -f docker-compose-azerothcore-modules.yml down 2>/dev/null || docker compose --env-file docker-compose-azerothcore-modules.env -f docker-compose-azerothcore-modules.yml down 2>/dev/null || true"
# Stop tools layer (if exists)
execute_command "Stop tools layer" \
"docker compose --env-file docker-compose-azerothcore-tools-custom.env -f docker-compose-azerothcore-tools.yml down 2>/dev/null || docker compose --env-file docker-compose-azerothcore-tools.env -f docker-compose-azerothcore-tools.yml down 2>/dev/null || true"
# Stop services layer
execute_command "Stop services layer" \
"docker compose --env-file docker-compose-azerothcore-services-custom.env -f docker-compose-azerothcore-services.yml down 2>/dev/null || docker compose --env-file docker-compose-azerothcore-services.env -f docker-compose-azerothcore-services.yml down"
# Stop database layer
execute_command "Stop database layer" \
"docker compose --env-file docker-compose-azerothcore-database-custom.env -f docker-compose-azerothcore-database.yml down 2>/dev/null || docker compose --env-file docker-compose-azerothcore-database.env -f docker-compose-azerothcore-database.yml down"
print_status "SUCCESS" "Soft cleanup completed - all containers stopped"
print_status "INFO" "Data volumes and images are preserved"
print_status "INFO" "Use deployment script to restart services"
}
# Function to perform hard cleanup
hard_cleanup() {
print_status "HEADER" "HARD CLEANUP - REMOVING CONTAINERS AND NETWORKS"
get_confirmation "This will remove all containers and networks but preserve data volumes and images."
# Remove containers and networks
execute_command "Remove modules layer (containers + networks)" \
"docker compose --env-file docker-compose-azerothcore-modules-custom.env -f docker-compose-azerothcore-modules.yml down --remove-orphans 2>/dev/null || docker compose --env-file docker-compose-azerothcore-modules.env -f docker-compose-azerothcore-modules.yml down --remove-orphans 2>/dev/null || true"
execute_command "Remove tools layer (containers + networks)" \
"docker compose --env-file docker-compose-azerothcore-tools-custom.env -f docker-compose-azerothcore-tools.yml down --remove-orphans 2>/dev/null || docker compose --env-file docker-compose-azerothcore-tools.env -f docker-compose-azerothcore-tools.yml down --remove-orphans 2>/dev/null || true"
execute_command "Remove services layer (containers + networks)" \
"docker compose --env-file docker-compose-azerothcore-services-custom.env -f docker-compose-azerothcore-services.yml down --remove-orphans 2>/dev/null || docker compose --env-file docker-compose-azerothcore-services.env -f docker-compose-azerothcore-services.yml down --remove-orphans"
execute_command "Remove database layer (containers + networks)" \
"docker compose --env-file docker-compose-azerothcore-database-custom.env -f docker-compose-azerothcore-database.yml down --remove-orphans 2>/dev/null || docker compose --env-file docker-compose-azerothcore-database.env -f docker-compose-azerothcore-database.yml down --remove-orphans"
# Clean up any remaining AzerothCore containers
execute_command "Remove any remaining AzerothCore containers" \
"docker ps -a --format '{{.Names}}' | grep -E '^ac-' | xargs -r docker rm -f"
# Clean up AzerothCore networks
execute_command "Remove AzerothCore networks" \
"docker network ls --format '{{.Name}}' | grep -E 'azerothcore|acore' | xargs -r docker network rm"
print_status "SUCCESS" "Hard cleanup completed - containers and networks removed"
print_status "INFO" "Data volumes and images are preserved"
print_status "INFO" "Run full deployment script to recreate the stack"
}
# Function to perform nuclear cleanup
nuclear_cleanup() {
print_status "HEADER" "NUCLEAR CLEANUP - COMPLETE REMOVAL"
print_status "DANGER" "THIS WILL DESTROY ALL DATA AND REMOVE EVERYTHING!"
get_confirmation "This will permanently delete ALL AzerothCore data, containers, networks, volumes, and images. This action CANNOT be undone!"
# Stop and remove everything
execute_command "Stop and remove modules layer (with volumes)" \
"docker compose --env-file docker-compose-azerothcore-modules-custom.env -f docker-compose-azerothcore-modules.yml down --volumes --remove-orphans 2>/dev/null || docker compose --env-file docker-compose-azerothcore-modules.env -f docker-compose-azerothcore-modules.yml down --volumes --remove-orphans 2>/dev/null || true"
execute_command "Stop and remove tools layer (with volumes)" \
"docker compose --env-file docker-compose-azerothcore-tools-custom.env -f docker-compose-azerothcore-tools.yml down --volumes --remove-orphans 2>/dev/null || docker compose --env-file docker-compose-azerothcore-tools.env -f docker-compose-azerothcore-tools.yml down --volumes --remove-orphans 2>/dev/null || true"
execute_command "Stop and remove services layer (with volumes)" \
"docker compose --env-file docker-compose-azerothcore-services-custom.env -f docker-compose-azerothcore-services.yml down --volumes --remove-orphans 2>/dev/null || docker compose --env-file docker-compose-azerothcore-services.env -f docker-compose-azerothcore-services.yml down --volumes --remove-orphans 2>/dev/null || true"
execute_command "Stop and remove database layer (with volumes)" \
"docker compose --env-file docker-compose-azerothcore-database-custom.env -f docker-compose-azerothcore-database.yml down --volumes --remove-orphans 2>/dev/null || docker compose --env-file docker-compose-azerothcore-database.env -f docker-compose-azerothcore-database.yml down --volumes --remove-orphans 2>/dev/null || true"
# Remove any remaining containers
execute_command "Remove any remaining AzerothCore containers" \
"docker ps -a --format '{{.Names}}' | grep -E '^ac-|acore' | xargs -r docker rm -f"
# Remove networks
execute_command "Remove AzerothCore networks" \
"docker network ls --format '{{.Name}}' | grep -E 'azerothcore|acore' | xargs -r docker network rm"
# Remove volumes
execute_command "Remove AzerothCore volumes" \
"docker volume ls --format '{{.Name}}' | grep -E '^ac_|acore|azerothcore' | xargs -r docker volume rm"
# Remove images
execute_command "Remove AzerothCore server images" \
"docker images --format '{{.Repository}}:{{.Tag}}' | grep -E '^acore/' | xargs -r docker rmi"
execute_command "Remove mod-playerbots images" \
"docker images --format '{{.Repository}}:{{.Tag}}' | grep -E '^uprightbass360/azerothcore-wotlk-playerbots' | xargs -r docker rmi"
execute_command "Remove related tool images" \
"docker images --format '{{.Repository}}:{{.Tag}}' | grep -E 'phpmyadmin|uprightbass360/keira3' | xargs -r docker rmi"
# Clean up local data directories
if [ "$PRESERVE_BACKUPS" = true ]; then
# Create a function to clean storage while preserving backups
cleanup_storage_preserve_backups() {
if [ -d "./storage" ]; then
# Find the storage path from environment files
STORAGE_ROOT=$(grep "^STORAGE_ROOT=" docker-compose-azerothcore-database*.env 2>/dev/null | head -1 | cut -d'=' -f2 || echo "/nfs/azerothcore")
BACKUP_PATH="${STORAGE_ROOT}/backups"
# Temporarily move backups if they exist
if [ -d "${BACKUP_PATH}" ]; then
print_status "INFO" "Preserving backups at ${BACKUP_PATH}"
sudo mkdir -p /tmp/azerothcore-backups-preserve 2>/dev/null || mkdir -p /tmp/azerothcore-backups-preserve
sudo cp -r "${BACKUP_PATH}" /tmp/azerothcore-backups-preserve/ 2>/dev/null || cp -r "${BACKUP_PATH}" /tmp/azerothcore-backups-preserve/
fi
# Remove storage directories
sudo rm -rf ./storage 2>/dev/null || rm -rf ./storage 2>/dev/null || true
# Restore backups if they were preserved
if [ -d "/tmp/azerothcore-backups-preserve/backups" ]; then
sudo mkdir -p "${STORAGE_ROOT}" 2>/dev/null || mkdir -p "${STORAGE_ROOT}"
sudo mv /tmp/azerothcore-backups-preserve/backups "${BACKUP_PATH}" 2>/dev/null || mv /tmp/azerothcore-backups-preserve/backups "${BACKUP_PATH}"
sudo rm -rf /tmp/azerothcore-backups-preserve 2>/dev/null || rm -rf /tmp/azerothcore-backups-preserve
print_status "SUCCESS" "Backups preserved at ${BACKUP_PATH}"
fi
fi
# Still remove ./backups directory (local backups, not NFS backups)
sudo rm -rf ./backups 2>/dev/null || rm -rf ./backups 2>/dev/null || true
}
execute_command "Remove storage directories (preserving backups)" \
"cleanup_storage_preserve_backups"
else
execute_command "Remove local storage directories" \
"sudo rm -rf ./storage ./backups 2>/dev/null || rm -rf ./storage ./backups 2>/dev/null || true"
fi
# System cleanup
execute_command "Clean up unused Docker resources" \
"docker system prune -af --volumes"
print_status "SUCCESS" "Nuclear cleanup completed - everything removed"
print_status "DANGER" "ALL AZEROTHCORE DATA HAS BEEN PERMANENTLY DELETED"
print_status "INFO" "Run full deployment script to start fresh"
}
# Function to show cleanup summary
show_cleanup_summary() {
local level=$1
print_status "HEADER" "CLEANUP SUMMARY"
case $level in
"soft")
echo -e "${GREEN}✅ Containers: Stopped${NC}"
echo -e "${BLUE} Networks: Preserved${NC}"
echo -e "${BLUE} Volumes: Preserved (data safe)${NC}"
echo -e "${BLUE} Images: Preserved${NC}"
echo ""
echo -e "${GREEN}Next steps:${NC}"
echo " • To restart: cd scripts && ./deploy-and-check.sh --skip-deploy"
echo " • To deploy fresh: cd scripts && ./deploy-and-check.sh"
;;
"hard")
echo -e "${GREEN}✅ Containers: Removed${NC}"
echo -e "${GREEN}✅ Networks: Removed${NC}"
echo -e "${BLUE} Volumes: Preserved (data safe)${NC}"
echo -e "${BLUE} Images: Preserved${NC}"
echo ""
echo -e "${GREEN}Next steps:${NC}"
echo " • To deploy: cd scripts && ./deploy-and-check.sh"
;;
"nuclear")
echo -e "${RED}💀 Containers: DESTROYED${NC}"
echo -e "${RED}💀 Networks: DESTROYED${NC}"
echo -e "${RED}💀 Volumes: DESTROYED${NC}"
echo -e "${RED}💀 Images: DESTROYED${NC}"
echo -e "${RED}💀 Data: PERMANENTLY DELETED${NC}"
echo ""
echo -e "${YELLOW}Next steps:${NC}"
echo " • To start fresh: cd scripts && ./deploy-and-check.sh"
echo " • This will re-download ~15GB of client data"
;;
esac
}
# Main execution
main() {
print_status "HEADER" "AZEROTHCORE CLEANUP SCRIPT"
# Check if docker is available
if ! command -v docker &> /dev/null; then
print_status "ERROR" "Docker is not installed or not in PATH"
exit 1
fi
# Show help if no cleanup level specified
if [ -z "$CLEANUP_LEVEL" ]; then
echo "Please specify a cleanup level:"
echo " --soft Stop containers only (safe)"
echo " --hard Remove containers + networks (preserves data)"
echo " --nuclear Complete removal (DESTROYS ALL DATA)"
echo ""
echo "Use --help for more information"
exit 1
fi
# Show current resources
show_current_resources
# Execute cleanup based on level
case $CLEANUP_LEVEL in
"soft")
soft_cleanup
;;
"hard")
hard_cleanup
;;
"nuclear")
nuclear_cleanup
;;
esac
# Show final summary
show_cleanup_summary "$CLEANUP_LEVEL"
print_status "SUCCESS" "🧹 Cleanup completed successfully!"
}
# Run main function
main "$@"


@@ -1,290 +0,0 @@
#!/bin/bash
# ==============================================
# AzerothCore Module Configuration Script
# ==============================================
# Handles post-installation configuration that requires manual setup beyond Docker automation
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
local status=$1
local message=$2
case $status in
"INFO")
echo -e "${BLUE}ℹ️ ${message}${NC}"
;;
"SUCCESS")
echo -e "${GREEN}✅ ${message}${NC}"
;;
"WARNING")
echo -e "${YELLOW}⚠️ ${message}${NC}"
;;
"ERROR")
echo -e "${RED}❌ ${message}${NC}"
;;
"HEADER")
echo -e "\n${MAGENTA}=== ${message} ===${NC}"
;;
"CRITICAL")
echo -e "${RED}🚨 CRITICAL: ${message}${NC}"
;;
esac
}
# Load environment variables
if [ -f "docker-compose-azerothcore-services.env" ]; then
source docker-compose-azerothcore-services.env
else
print_status "ERROR" "Environment file not found. Run from acore-compose directory."
exit 1
fi
print_status "HEADER" "AZEROTHCORE MODULE CONFIGURATION ANALYSIS"
echo "This script analyzes your enabled modules and identifies manual configuration requirements."
echo ""
# Check which modules are enabled
ENABLED_MODULES=()
[ "$MODULE_PLAYERBOTS" = "1" ] && ENABLED_MODULES+=("PLAYERBOTS")
[ "$MODULE_AOE_LOOT" = "1" ] && ENABLED_MODULES+=("AOE_LOOT")
[ "$MODULE_LEARN_SPELLS" = "1" ] && ENABLED_MODULES+=("LEARN_SPELLS")
[ "$MODULE_FIREWORKS" = "1" ] && ENABLED_MODULES+=("FIREWORKS")
[ "$MODULE_INDIVIDUAL_PROGRESSION" = "1" ] && ENABLED_MODULES+=("INDIVIDUAL_PROGRESSION")
[ "$MODULE_TRANSMOG" = "1" ] && ENABLED_MODULES+=("TRANSMOG")
[ "$MODULE_SOLO_LFG" = "1" ] && ENABLED_MODULES+=("SOLO_LFG")
[ "$MODULE_ELUNA" = "1" ] && ENABLED_MODULES+=("ELUNA")
[ "$MODULE_ARAC" = "1" ] && ENABLED_MODULES+=("ARAC")
[ "$MODULE_NPC_ENCHANTER" = "1" ] && ENABLED_MODULES+=("NPC_ENCHANTER")
[ "$MODULE_ASSISTANT" = "1" ] && ENABLED_MODULES+=("ASSISTANT")
[ "$MODULE_REAGENT_BANK" = "1" ] && ENABLED_MODULES+=("REAGENT_BANK")
[ "$MODULE_BLACK_MARKET_AUCTION_HOUSE" = "1" ] && ENABLED_MODULES+=("BLACK_MARKET")
print_status "INFO" "Found ${#ENABLED_MODULES[@]} enabled modules: ${ENABLED_MODULES[*]}"
echo ""
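The module checks below rely on Bash's whitespace-padded pattern match for array membership; a standalone sketch of the idiom:

```shell
ENABLED_MODULES=(PLAYERBOTS TRANSMOG SOLO_LFG)
# Padding both sides with spaces keeps TRANSMOG from matching a longer token
# such as TRANSMOG_PLUS in the joined array string
if [[ " ${ENABLED_MODULES[*]} " =~ " TRANSMOG " ]]; then
  result="enabled"
else
  result="disabled"
fi
echo "$result"
```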
# Critical Compatibility Issues
print_status "HEADER" "CRITICAL COMPATIBILITY ISSUES"
if [[ " ${ENABLED_MODULES[*]} " =~ " PLAYERBOTS " ]]; then
print_status "CRITICAL" "mod-playerbots REQUIRES CUSTOM AZEROTHCORE BRANCH"
echo " 🔗 Required: liyunfan1223/azerothcore-wotlk/tree/Playerbot"
echo " ❌ Current: Standard AzerothCore (INCOMPATIBLE)"
echo " 📋 Action: Switch to Playerbot branch OR disable MODULE_PLAYERBOTS"
echo ""
fi
# Client-Side Requirements
print_status "HEADER" "CLIENT-SIDE PATCH REQUIREMENTS"
CLIENT_PATCHES_NEEDED=false
if [[ " ${ENABLED_MODULES[*]} " =~ " INDIVIDUAL_PROGRESSION " ]]; then
print_status "WARNING" "mod-individual-progression requires CLIENT PATCHES"
echo " 📁 Location: ${STORAGE_PATH}/modules/mod-individual-progression/optional/"
echo " 📦 Required: patch-V.mpq (Vanilla crafting/recipes)"
echo " 📦 Optional: patch-J.mpq (Vanilla login screen)"
echo " 📦 Optional: patch-U.mpq (Vanilla loading screens)"
echo " 🎯 Install: Copy to client WoW/Data/ directory"
CLIENT_PATCHES_NEEDED=true
echo ""
fi
if [[ " ${ENABLED_MODULES[*]} " =~ " ARAC " ]]; then
print_status "WARNING" "mod-arac requires CLIENT PATCHES"
echo " 📦 Required: Patch-A.MPQ"
echo " 📁 Location: ${STORAGE_PATH}/modules/mod-arac/patch-contents/"
echo " 🎯 Install: Copy Patch-A.MPQ to client WoW/Data/ directory"
echo " 🔧 Server: DBC files automatically applied during module setup"
CLIENT_PATCHES_NEEDED=true
echo ""
fi
if [ "$CLIENT_PATCHES_NEEDED" = true ]; then
print_status "INFO" "Client patches must be distributed manually to all players"
fi
# Critical Server Configuration Requirements
print_status "HEADER" "CRITICAL SERVER CONFIGURATION"
CONFIG_CHANGES_NEEDED=false
if [[ " ${ENABLED_MODULES[*]} " =~ " INDIVIDUAL_PROGRESSION " ]]; then
print_status "CRITICAL" "mod-individual-progression requires worldserver.conf changes"
echo " ⚙️ Required: EnablePlayerSettings = 1"
echo " ⚙️ Required: DBC.EnforceItemAttributes = 0"
echo " 📁 File: ${STORAGE_PATH}/config/worldserver.conf"
CONFIG_CHANGES_NEEDED=true
echo ""
fi
if [[ " ${ENABLED_MODULES[*]} " =~ " AOE_LOOT " ]]; then
print_status "WARNING" "mod-aoe-loot requires worldserver.conf optimization"
echo " ⚙️ Required: Rate.Corpse.Decay.Looted = 0.01 (default: 0.5)"
echo " 📁 File: ${STORAGE_PATH}/config/worldserver.conf"
CONFIG_CHANGES_NEEDED=true
echo ""
fi
# Manual NPC Spawning Requirements
print_status "HEADER" "MANUAL NPC SPAWNING REQUIRED"
NPC_SPAWNING_NEEDED=false
if [[ " ${ENABLED_MODULES[*]} " =~ " TRANSMOG " ]]; then
print_status "INFO" "mod-transmog requires NPC spawning"
echo " 🤖 Command: .npc add 190010"
echo " 📍 Location: Spawn in major cities (Stormwind, Orgrimmar, etc.)"
NPC_SPAWNING_NEEDED=true
echo ""
fi
if [[ " ${ENABLED_MODULES[*]} " =~ " NPC_ENCHANTER " ]]; then
print_status "INFO" "mod-npc-enchanter requires NPC spawning"
echo " 🤖 Command: .npc add [enchanter_id]"
echo " 📍 Location: Spawn in major cities"
NPC_SPAWNING_NEEDED=true
echo ""
fi
if [[ " ${ENABLED_MODULES[*]} " =~ " REAGENT_BANK " ]]; then
print_status "INFO" "mod-reagent-bank requires NPC spawning"
echo " 🤖 Command: .npc add 290011"
echo " 📍 Location: Spawn in major cities"
NPC_SPAWNING_NEEDED=true
echo ""
fi
if [ "$NPC_SPAWNING_NEEDED" = true ]; then
print_status "INFO" "Use GM account with level 3 permissions to spawn NPCs"
fi
# Configuration File Management
print_status "HEADER" "CONFIGURATION FILE SETUP"
echo "Module configuration files are automatically copied during container startup:"
echo ""
for module in "${ENABLED_MODULES[@]}"; do
case $module in
"PLAYERBOTS")
echo " 📝 playerbots.conf - Bot behavior, RandomBot settings"
;;
"AOE_LOOT")
echo " 📝 mod_aoe_loot.conf - Loot range, group settings"
;;
"LEARN_SPELLS")
echo " 📝 mod_learnspells.conf - Auto-learn behavior"
;;
"FIREWORKS")
echo " 📝 mod_fireworks.conf - Level-up effects"
;;
"INDIVIDUAL_PROGRESSION")
echo " 📝 individual_progression.conf - Era progression settings"
;;
"TRANSMOG")
echo " 📝 transmog.conf - Transmogrification rules"
;;
"SOLO_LFG")
echo " 📝 SoloLfg.conf - Solo dungeon finder settings"
;;
"ELUNA")
echo " 📝 mod_LuaEngine.conf - Lua scripting engine"
;;
*)
;;
esac
done
# Database Backup Recommendation
print_status "HEADER" "DATABASE BACKUP RECOMMENDATION"
if [[ " ${ENABLED_MODULES[*]} " =~ " ARAC " ]] || [[ " ${ENABLED_MODULES[*]} " =~ " INDIVIDUAL_PROGRESSION " ]]; then
print_status "CRITICAL" "Database backup STRONGLY RECOMMENDED"
echo " 💾 Modules modify core database tables"
echo " 🔄 Backup command: docker exec ac-mysql mysqldump -u root -p\${MYSQL_ROOT_PASSWORD} --all-databases > backup.sql"
echo ""
fi
# Performance Considerations
print_status "HEADER" "PERFORMANCE CONSIDERATIONS"
if [[ " ${ENABLED_MODULES[*]} " =~ " PLAYERBOTS " ]]; then
print_status "WARNING" "mod-playerbots can significantly impact server performance"
echo " 🤖 Default: 500 RandomBots (MinRandomBots/MaxRandomBots)"
echo " 💡 Recommendation: Start with lower numbers and scale up"
echo " 📊 Monitor: CPU usage, memory consumption, database load"
echo ""
fi
if [[ " ${ENABLED_MODULES[*]} " =~ " ELUNA " ]]; then
print_status "INFO" "mod-eluna performance depends on Lua script complexity"
echo " 📜 Complex scripts can impact server performance"
echo " 🔍 Monitor script execution times"
echo ""
fi
# Eluna Lua Scripting Setup
if [[ " ${ENABLED_MODULES[*]} " =~ " ELUNA " ]]; then
print_status "HEADER" "ELUNA LUA SCRIPTING REQUIREMENTS"
if [ -d "${STORAGE_PATH}/lua_scripts" ]; then
print_status "SUCCESS" "Lua scripts directory exists: ${STORAGE_PATH}/lua_scripts"
SCRIPT_COUNT=$(find "${STORAGE_PATH}/lua_scripts" -name "*.lua" 2>/dev/null | wc -l)
print_status "INFO" "Found $SCRIPT_COUNT Lua script(s)"
else
print_status "WARNING" "Lua scripts directory missing: ${STORAGE_PATH}/lua_scripts"
print_status "INFO" "Run ./scripts/setup-eluna.sh to create directory and example scripts"
fi
print_status "INFO" "Eluna Script Management:"
echo " 🔄 Reload scripts: .reload eluna"
echo " 📁 Script location: ${STORAGE_PATH}/lua_scripts"
echo " ⚠️ Compatibility: AzerothCore mod-eluna only (NOT standard Eluna)"
echo " 📋 Requirements: English DBC files recommended"
echo ""
fi
# Summary and Next Steps
print_status "HEADER" "SUMMARY AND NEXT STEPS"
echo "📋 REQUIRED MANUAL ACTIONS:"
echo ""
if [[ " ${ENABLED_MODULES[*]} " =~ " PLAYERBOTS " ]]; then
echo "1. 🔧 CRITICAL: Switch to Playerbot AzerothCore branch OR disable MODULE_PLAYERBOTS"
fi
if [ "$CONFIG_CHANGES_NEEDED" = true ]; then
echo "2. ⚙️ Edit worldserver.conf with required settings (see above)"
fi
if [ "$CLIENT_PATCHES_NEEDED" = true ]; then
echo "3. 📦 Distribute client patches to all players"
fi
if [ "$NPC_SPAWNING_NEEDED" = true ]; then
echo "4. 🤖 Spawn required NPCs using GM commands"
fi
echo ""
echo "📖 RECOMMENDED ORDER:"
echo " 1. Complete server configuration changes"
echo " 2. Rebuild containers with: ./scripts/rebuild-with-modules.sh"
echo " 3. Test in development environment first"
echo " 4. Create GM account and spawn NPCs"
echo " 5. Distribute client patches to players"
echo " 6. Monitor performance and adjust settings as needed"
echo ""
print_status "SUCCESS" "Module configuration analysis complete!"
print_status "INFO" "Review all CRITICAL and WARNING items before deploying to production"


@@ -1,6 +1,49 @@
#!/bin/bash
# ac-compose
set -e
print_help() {
cat <<'EOF'
Usage: db-import-conditional.sh [options]
Description:
Conditionally restores AzerothCore databases from backups if available;
otherwise creates fresh databases and runs the dbimport tool to populate
schemas. Uses status markers to prevent overwriting restored data.
Options:
-h, --help Show this help message and exit
Environment variables:
CONTAINER_MYSQL Hostname of the MySQL container (default: ac-mysql)
MYSQL_PORT MySQL port (default: 3306)
MYSQL_USER MySQL user (default: root)
MYSQL_ROOT_PASSWORD MySQL password for the user above
DB_AUTH_NAME Auth DB name (default: acore_auth)
DB_WORLD_NAME World DB name (default: acore_world)
DB_CHARACTERS_NAME Characters DB name (default: acore_characters)
Paths:
  Backup directories    /backups/{daily,timestamped}, if present
  Status markers        /var/lib/mysql-persistent/.restore-*
Notes:
  - If a valid backup is detected and successfully restored, the schema import is skipped.
  - On fresh setups, the script creates the databases and runs dbimport.
EOF
}
case "${1:-}" in
-h|--help)
print_help
exit 0
;;
"") ;;
*)
echo "Unknown option: $1" >&2
print_help
exit 1
;;
esac
echo "🔧 Conditional AzerothCore Database Import"
echo "========================================"
@@ -12,7 +55,6 @@ RESTORE_FAILED_MARKER="$RESTORE_STATUS_DIR/.restore-failed"
RESTORE_SUCCESS_MARKER_TMP="$MARKER_STATUS_DIR/.restore-completed"
RESTORE_FAILED_MARKER_TMP="$MARKER_STATUS_DIR/.restore-failed"
# Ensure we can write to the status directory, fallback to tmp
mkdir -p "$RESTORE_STATUS_DIR" 2>/dev/null || true
if ! touch "$RESTORE_STATUS_DIR/.test-write" 2>/dev/null; then
echo "⚠️ Cannot write to $RESTORE_STATUS_DIR, using $MARKER_STATUS_DIR for markers"
@@ -24,23 +66,16 @@ fi
echo "🔍 Checking restoration status..."
# Check if backup was successfully restored
if [ -f "$RESTORE_SUCCESS_MARKER" ]; then
echo "✅ Backup restoration completed successfully"
echo "📄 Restoration details:"
cat "$RESTORE_SUCCESS_MARKER"
echo ""
cat "$RESTORE_SUCCESS_MARKER" || true
echo "🚫 Skipping database import - data already restored from backup"
echo "💡 This prevents overwriting restored data with fresh schema"
exit 0
fi
# Check if restoration failed (fresh databases created)
if [ -f "$RESTORE_FAILED_MARKER" ]; then
echo "ℹ️ No backup was restored - fresh databases detected"
echo "📄 Database creation details:"
cat "$RESTORE_FAILED_MARKER"
echo ""
cat "$RESTORE_FAILED_MARKER" || true
echo "▶️ Proceeding with database import to populate fresh databases"
else
echo "⚠️ No restoration status found - assuming fresh installation"
@@ -50,66 +85,11 @@ fi
echo ""
echo "🔧 Starting database import process..."
# First attempt backup restoration
echo "🔍 Checking for backups to restore..."
BACKUP_DIRS="/backups"
# Function to restore from backup (directory or single file)
restore_from_directory() {
local backup_path="$1"
echo "🔄 Restoring from backup: $backup_path"
local restore_success=true
# Handle single .sql file (legacy backup)
if [ -f "$backup_path" ] && [[ "$backup_path" == *.sql ]]; then
echo "📥 Restoring legacy backup file: $(basename "$backup_path")"
if timeout 300 mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} < "$backup_path"; then
echo "✅ Successfully restored legacy backup"
return 0
else
echo "❌ Failed to restore legacy backup"
return 1
fi
fi
# Handle directory with .sql.gz files (modern timestamped backups)
if [ -d "$backup_path" ]; then
echo "🔄 Restoring from backup directory: $backup_path"
# Restore each database backup
for backup_file in "$backup_path"/*.sql.gz; do
if [ -f "$backup_file" ]; then
local db_name=$(basename "$backup_file" .sql.gz)
echo "📥 Restoring database: $db_name"
if timeout 300 zcat "$backup_file" | mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD}; then
echo "✅ Successfully restored $db_name"
else
echo "❌ Failed to restore $db_name"
restore_success=false
fi
fi
done
if [ "$restore_success" = true ]; then
return 0
else
return 1
fi
fi
# If we get here, backup_path is neither a valid .sql file nor a directory
echo "❌ Invalid backup path: $backup_path (not a .sql file or directory)"
return 1
}
# Attempt backup restoration with full functionality restored
echo "🔄 Checking for backups..."
backup_path=""
# Priority 1: Legacy single backup file with content validation
echo "🔍 Checking for legacy backup file..."
if [ -f "/var/lib/mysql-persistent/backup.sql" ]; then
echo "📄 Found legacy backup file, validating content..."
@@ -123,52 +103,40 @@ else
echo "🔍 No legacy backup found"
fi
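The per-file validation used below (a bounded `zcat`, then a search for telltale SQL statements) can be exercised against a synthetic archive; the paths here are temporary and hypothetical, not the script's real backup layout:

```shell
tmpdir=$(mktemp -d)
printf 'CREATE TABLE account (id INT);\n' | gzip > "$tmpdir/acore_auth.sql.gz"
# Same check the backup scan applies to each .sql.gz file
if timeout 10 zcat "$tmpdir/acore_auth.sql.gz" 2>/dev/null | head -20 | grep -q "CREATE DATABASE\|INSERT INTO\|CREATE TABLE"; then
  verdict="valid"
else
  verdict="invalid"
fi
echo "$verdict"
rm -rf "$tmpdir"
```

The `timeout` bound matters: it keeps a corrupt or enormous archive from stalling the whole import path.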
# Priority 2: Modern timestamped backups (only if no legacy backup found)
if [ -z "$backup_path" ] && [ -d "$BACKUP_DIRS" ]; then
echo "📁 Backup directory exists, checking for timestamped backups..."
if [ "$(ls -A $BACKUP_DIRS 2>/dev/null | wc -l)" -gt 0 ]; then
# Check daily backups first
if [ -d "$BACKUP_DIRS/daily" ] && [ "$(ls -A $BACKUP_DIRS/daily 2>/dev/null | wc -l)" -gt 0 ]; then
echo "📅 Found daily backup directory, finding latest..."
latest_daily=$(ls -1t $BACKUP_DIRS/daily 2>/dev/null | head -n 1)
if [ -n "$(ls -A "$BACKUP_DIRS" 2>/dev/null)" ]; then
if [ -d "$BACKUP_DIRS/daily" ]; then
echo "🔍 Checking for daily backups..."
latest_daily=$(ls -1t "$BACKUP_DIRS/daily" 2>/dev/null | head -n 1)
if [ -n "$latest_daily" ] && [ -d "$BACKUP_DIRS/daily/$latest_daily" ]; then
echo "📦 Checking backup directory: $latest_daily"
# Check if directory has .sql.gz files
if ls "$BACKUP_DIRS/daily/$latest_daily"/*.sql.gz >/dev/null 2>&1; then
# Validate at least one backup file has content
echo "🔍 Validating backup content..."
for backup_file in "$BACKUP_DIRS/daily/$latest_daily"/*.sql.gz; do
if [ -f "$backup_file" ] && [ -s "$backup_file" ]; then
# Use timeout to prevent hanging on zcat
if timeout 10 zcat "$backup_file" 2>/dev/null | head -20 | grep -q "CREATE DATABASE\|INSERT INTO\|CREATE TABLE"; then
echo "✅ Valid backup found: $(basename $backup_file)"
backup_path="$BACKUP_DIRS/daily/$latest_daily"
break
fi
echo "📦 Latest daily backup found: $latest_daily"
for backup_file in "$BACKUP_DIRS/daily/$latest_daily"/*.sql.gz; do
if [ -f "$backup_file" ] && [ -s "$backup_file" ]; then
if timeout 10 zcat "$backup_file" 2>/dev/null | head -20 | grep -q "CREATE DATABASE\|INSERT INTO\|CREATE TABLE"; then
echo "✅ Valid daily backup file: $(basename "$backup_file")"
backup_path="$BACKUP_DIRS/daily/$latest_daily"
break
fi
done
else
echo "⚠️ No .sql.gz files found in backup directory"
fi
fi
done
else
echo "📅 No daily backup directory found"
fi
else
echo "📅 No daily backup directory found"
# Check for timestamped backup directories (legacy format: YYYYMMDD_HHMMSS)
echo "🔍 Checking for timestamped backup directories..."
timestamped_backups=$(ls -1t $BACKUP_DIRS 2>/dev/null | grep -E '^[0-9]{8}_[0-9]{6}$' | head -n 1)
if [ -n "$timestamped_backups" ]; then
latest_timestamped="$timestamped_backups"
echo "📦 Found timestamped backup: $latest_timestamped"
if [ -d "$BACKUP_DIRS/$latest_timestamped" ]; then
# Check if directory has .sql.gz files
if ls "$BACKUP_DIRS/$latest_timestamped"/*.sql.gz >/dev/null 2>&1; then
# Validate at least one backup file has content
echo "🔍 Validating timestamped backup content..."
for backup_file in "$BACKUP_DIRS/$latest_timestamped"/*.sql.gz; do
if [ -f "$backup_file" ] && [ -s "$backup_file" ]; then
# Use timeout to prevent hanging on zcat
if timeout 10 zcat "$backup_file" 2>/dev/null | head -20 | grep -q "CREATE DATABASE\|INSERT INTO\|CREATE TABLE"; then
echo "✅ Valid timestamped backup found: $(basename $backup_file)"
echo "✅ Valid timestamped backup found: $(basename "$backup_file")"
backup_path="$BACKUP_DIRS/$latest_timestamped"
break
fi
@@ -191,71 +159,47 @@ fi
echo "🔄 Final backup path result: '$backup_path'"
if [ -n "$backup_path" ]; then
echo "📦 Found backup: $(basename $backup_path)"
if restore_from_directory "$backup_path"; then
echo "✅ Database restoration completed successfully!"
echo "$(date): Backup successfully restored from $backup_path" > "$RESTORE_SUCCESS_MARKER"
echo "🚫 Skipping schema import - data already restored from backup"
exit 0
else
echo "❌ Backup restoration failed - proceeding with fresh setup"
echo "$(date): Backup restoration failed - proceeding with fresh setup" > "$RESTORE_FAILED_MARKER"
echo "📦 Found backup: $(basename "$backup_path")"
if [ -d "$backup_path" ]; then
echo "🔄 Restoring from backup directory: $backup_path"
restore_success=true
for backup_file in "$backup_path"/*.sql.gz; do
if [ -f "$backup_file" ]; then
if timeout 300 zcat "$backup_file" | mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD}; then
echo "✅ Restored $(basename "$backup_file")"
else
echo "❌ Failed to restore $(basename "$backup_file")"; restore_success=false
fi
fi
done
if [ "$restore_success" = true ]; then
echo "$(date): Backup successfully restored from $backup_path" > "$RESTORE_SUCCESS_MARKER"
exit 0
else
echo "$(date): Backup restoration failed - proceeding with fresh setup" > "$RESTORE_FAILED_MARKER"
fi
elif [ -f "$backup_path" ]; then
echo "🔄 Restoring from backup file: $backup_path"
if timeout 300 mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} < "$backup_path"; then
echo "$(date): Backup successfully restored from $backup_path" > "$RESTORE_SUCCESS_MARKER"
exit 0
else
echo "$(date): Backup restoration failed - proceeding with fresh setup" > "$RESTORE_FAILED_MARKER"
fi
fi
else
echo "ℹ️ No valid backups found - proceeding with fresh setup"
echo "$(date): No backup found - fresh setup needed" > "$RESTORE_FAILED_MARKER"
fi
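Picking the newest `YYYYMMDD_HHMMSS` directory, as the timestamped-backup branch above does, can be sketched in isolation; this version sorts by name rather than the script's mtime order (`ls -1t`), which gives the same result when directory names encode their timestamps:

```shell
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/20250101_020000" "$tmpdir/20251017_014000" "$tmpdir/daily"
# Only names matching the timestamp format qualify; "daily" is filtered out
latest=$(ls -1 "$tmpdir" | grep -E '^[0-9]{8}_[0-9]{6}$' | sort -r | head -n 1)
echo "$latest"
rm -rf "$tmpdir"
```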
# Create fresh databases if restoration didn't happen
echo "🗄️ Creating fresh AzerothCore databases..."
mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "
CREATE DATABASE IF NOT EXISTS ${DB_AUTH_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS ${DB_WORLD_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE IF NOT EXISTS ${DB_CHARACTERS_NAME} DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
SHOW DATABASES;" || {
echo "❌ Failed to create databases"
exit 1
}
SHOW DATABASES;" || { echo "❌ Failed to create databases"; exit 1; }
echo "✅ Fresh databases created - proceeding with schema import"
# Wait for databases to be ready (they should exist now)
echo "⏳ Verifying databases are accessible..."
for i in $(seq 1 10); do
if mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "USE ${DB_AUTH_NAME}; USE ${DB_WORLD_NAME}; USE ${DB_CHARACTERS_NAME};" >/dev/null 2>&1; then
echo "✅ All databases accessible"
break
fi
echo "⏳ Waiting for databases... attempt $i/10"
sleep 2
done
# Verify databases are actually empty before importing
echo "🔍 Verifying databases are empty before import..."
check_table_count() {
local db_name="$1"
local count=$(mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "
SELECT COUNT(*) FROM information_schema.tables
WHERE table_schema='$db_name' AND table_type='BASE TABLE';" -s -N 2>/dev/null || echo "0")
echo "$count"
}
auth_tables=$(check_table_count "${DB_AUTH_NAME}")
world_tables=$(check_table_count "${DB_WORLD_NAME}")
char_tables=$(check_table_count "${DB_CHARACTERS_NAME}")
echo "📊 Current table counts:"
echo " ${DB_AUTH_NAME}: $auth_tables tables"
echo " ${DB_WORLD_NAME}: $world_tables tables"
echo " ${DB_CHARACTERS_NAME}: $char_tables tables"
# Warn if databases appear to have data
if [ "$auth_tables" -gt 5 ] || [ "$world_tables" -gt 50 ] || [ "$char_tables" -gt 5 ]; then
echo "⚠️ WARNING: Databases appear to contain data!"
echo "⚠️ Import may overwrite existing data. Consider backing up first."
echo "⚠️ Continuing in 10 seconds... (Ctrl+C to cancel)"
sleep 10
fi
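The populated-database guard above reduces to simple threshold checks on table counts; a sketch with hypothetical counts standing in for the `information_schema` queries:

```shell
# Hypothetical counts, as check_table_count would return them
auth_tables=2
world_tables=120
char_tables=0
# A fresh world DB has no tables; 50+ strongly suggests an imported schema
if [ "$auth_tables" -gt 5 ] || [ "$world_tables" -gt 50 ] || [ "$char_tables" -gt 5 ]; then
  verdict="has-data"
else
  verdict="empty"
fi
echo "$verdict"
```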
echo "📝 Creating dbimport configuration..."
mkdir -p /azerothcore/env/dist/etc
cat > /azerothcore/env/dist/etc/dbimport.conf <<EOF
@@ -264,68 +208,17 @@ WorldDatabaseInfo = "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT
CharacterDatabaseInfo = "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_CHARACTERS_NAME}"
Updates.EnableDatabases = 7
Updates.AutoSetup = 1
# Required configuration properties
MySQLExecutable = ""
TempDir = ""
SourceDirectory = ""
Updates.AllowedModules = "all"
LoginDatabase.WorkerThreads = 1
LoginDatabase.SynchThreads = 1
WorldDatabase.WorkerThreads = 1
WorldDatabase.SynchThreads = 1
CharacterDatabase.WorkerThreads = 1
CharacterDatabase.SynchThreads = 1
Updates.Redundancy = 1
Updates.AllowRehash = 1
Updates.ArchivedRedundancy = 0
Updates.CleanDeadRefMaxCount = 3
# Logging configuration
Appender.Console=1,3,6
Logger.root=3,Console
EOF
echo "🚀 Running database import..."
cd /azerothcore/env/dist/bin
# Run dbimport with error handling
if ./dbimport; then
echo "✅ Database import completed successfully!"
# Create import completion marker
if touch "$RESTORE_STATUS_DIR/.import-completed" 2>/dev/null; then
echo "$(date): Database import completed successfully" > "$RESTORE_STATUS_DIR/.import-completed"
else
echo "$(date): Database import completed successfully" > "$MARKER_STATUS_DIR/.import-completed"
echo "⚠️ Using temporary location for completion marker"
fi
# Verify import was successful
echo "🔍 Verifying import results..."
auth_tables_after=$(check_table_count "${DB_AUTH_NAME}")
world_tables_after=$(check_table_count "${DB_WORLD_NAME}")
char_tables_after=$(check_table_count "${DB_CHARACTERS_NAME}")
echo "📊 Post-import table counts:"
echo " ${DB_AUTH_NAME}: $auth_tables_after tables"
echo " ${DB_WORLD_NAME}: $world_tables_after tables"
echo " ${DB_CHARACTERS_NAME}: $char_tables_after tables"
if [ "$auth_tables_after" -gt 0 ] && [ "$world_tables_after" -gt 0 ]; then
echo "✅ Import verification successful - databases populated"
else
echo "⚠️ Import verification failed - databases may be empty"
fi
echo "$(date): Database import completed successfully" > "$RESTORE_STATUS_DIR/.import-completed" || echo "$(date): Database import completed successfully" > "$MARKER_STATUS_DIR/.import-completed"
else
echo "❌ Database import failed!"
if touch "$RESTORE_STATUS_DIR/.import-failed" 2>/dev/null; then
echo "$(date): Database import failed" > "$RESTORE_STATUS_DIR/.import-failed"
else
echo "$(date): Database import failed" > "$MARKER_STATUS_DIR/.import-failed"
echo "⚠️ Using temporary location for failed marker"
fi
echo "$(date): Database import failed" > "$RESTORE_STATUS_DIR/.import-failed" || echo "$(date): Database import failed" > "$MARKER_STATUS_DIR/.import-failed"
exit 1
fi
echo "🎉 Database import process complete!"


@@ -1,50 +0,0 @@
#!/bin/bash
set -e
echo 'Waiting for databases to be ready...'
# Wait for databases to exist with longer timeout
for i in $(seq 1 120); do
if mysql -h ${CONTAINER_MYSQL} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "USE ${DB_AUTH_NAME}; USE ${DB_WORLD_NAME}; USE ${DB_CHARACTERS_NAME};" >/dev/null 2>&1; then
echo "✅ All databases accessible"
break
fi
echo "⏳ Waiting for databases... attempt $i/120"
sleep 5
done
echo 'Creating config file for dbimport...'
mkdir -p /azerothcore/env/dist/etc
cat > /azerothcore/env/dist/etc/dbimport.conf <<EOF
LoginDatabaseInfo = "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}"
WorldDatabaseInfo = "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_WORLD_NAME}"
CharacterDatabaseInfo = "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_CHARACTERS_NAME}"
Updates.EnableDatabases = 7
Updates.AutoSetup = 1
# Required configuration properties
MySQLExecutable = ""
TempDir = ""
SourceDirectory = ""
Updates.AllowedModules = "all"
LoginDatabase.WorkerThreads = 1
LoginDatabase.SynchThreads = 1
WorldDatabase.WorkerThreads = 1
WorldDatabase.SynchThreads = 1
CharacterDatabase.WorkerThreads = 1
CharacterDatabase.SynchThreads = 1
Updates.Redundancy = 1
Updates.AllowRehash = 1
Updates.ArchivedRedundancy = 0
Updates.CleanDeadRefMaxCount = 3
# Logging configuration
Appender.Console=1,3,6
Logger.root=3,Console
EOF
echo 'Running database import...'
cd /azerothcore/env/dist/bin
./dbimport
echo 'Database import complete!'


@@ -1,204 +0,0 @@
#!/bin/bash
set -e
echo "🔧 Enhanced AzerothCore Database Initialization"
echo "=============================================="
# Restoration status markers
RESTORE_STATUS_DIR="/var/lib/mysql-persistent"
RESTORE_SUCCESS_MARKER="$RESTORE_STATUS_DIR/.restore-completed"
RESTORE_FAILED_MARKER="$RESTORE_STATUS_DIR/.restore-failed"
BACKUP_DIRS="/backups"
# Clean up old status markers
rm -f "$RESTORE_SUCCESS_MARKER" "$RESTORE_FAILED_MARKER"
echo "🔧 Waiting for MySQL to be ready..."
# Wait for MySQL to be responsive with longer timeout
for i in $(seq 1 ${DB_WAIT_RETRIES}); do
  if mysql -h ${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "SELECT 1;" >/dev/null 2>&1; then
    echo "✅ MySQL is responsive"
    break
  fi
  echo "⏳ Waiting for MySQL... attempt $i/${DB_WAIT_RETRIES}"
  sleep ${DB_WAIT_SLEEP}
done
# Function to check if databases have data (not just schema)
check_database_populated() {
  local db_name="$1"
  local table_count=$(mysql -h ${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "
    SELECT COUNT(*) FROM information_schema.tables
    WHERE table_schema='$db_name' AND table_type='BASE TABLE';" -s -N 2>/dev/null || echo "0")
  if [ "$table_count" -gt 0 ]; then
    echo "🔍 Database $db_name has $table_count tables"
    return 0
  else
    echo "🔍 Database $db_name is empty or doesn't exist"
    return 1
  fi
}
# Function to validate backup integrity
validate_backup() {
  local backup_path="$1"
  # Progress messages go to stderr so that callers capturing stdout
  # (e.g. backup_path=$(find_latest_backup)) receive only the path
  echo "🔍 Validating backup: $backup_path" >&2
  if [ -f "$backup_path" ]; then
    # Check if it's a valid SQL file
    if head -10 "$backup_path" | grep -q "CREATE DATABASE\|INSERT INTO\|CREATE TABLE"; then
      echo "✅ Backup appears valid" >&2
      return 0
    fi
  fi
  echo "❌ Backup validation failed" >&2
  return 1
}
# Function to find and validate the most recent backup
find_latest_backup() {
  # Priority 1: Legacy single backup file
  if [ -f "/var/lib/mysql-persistent/backup.sql" ]; then
    if validate_backup "/var/lib/mysql-persistent/backup.sql"; then
      echo "/var/lib/mysql-persistent/backup.sql"
      return 0
    fi
  fi
  # Priority 2: Modern timestamped backups
  if [ -d "$BACKUP_DIRS" ] && [ "$(ls -A $BACKUP_DIRS)" ]; then
    # Try daily backups first
    if [ -d "$BACKUP_DIRS/daily" ] && [ "$(ls -A $BACKUP_DIRS/daily)" ]; then
      local latest_daily=$(ls -1t $BACKUP_DIRS/daily | head -n 1)
      if [ -n "$latest_daily" ] && [ -d "$BACKUP_DIRS/daily/$latest_daily" ]; then
        echo "$BACKUP_DIRS/daily/$latest_daily"
        return 0
      fi
    fi
    # Try hourly backups second
    if [ -d "$BACKUP_DIRS/hourly" ] && [ "$(ls -A $BACKUP_DIRS/hourly)" ]; then
      local latest_hourly=$(ls -1t $BACKUP_DIRS/hourly | head -n 1)
      if [ -n "$latest_hourly" ] && [ -d "$BACKUP_DIRS/hourly/$latest_hourly" ]; then
        echo "$BACKUP_DIRS/hourly/$latest_hourly"
        return 0
      fi
    fi
    # Try legacy timestamped backups
    local latest_legacy=$(ls -1dt $BACKUP_DIRS/[0-9]* 2>/dev/null | head -n 1)
    if [ -n "$latest_legacy" ] && [ -d "$latest_legacy" ]; then
      echo "$latest_legacy"
      return 0
    fi
  fi
  return 1
}
# Function to restore from timestamped backup directory
restore_from_directory() {
  local backup_dir="$1"
  echo "🔄 Restoring from backup directory: $backup_dir"
  local restore_success=true
  # Restore each database backup
  for backup_file in "$backup_dir"/*.sql.gz; do
    if [ -f "$backup_file" ]; then
      local db_name=$(basename "$backup_file" .sql.gz)
      echo "📥 Restoring database: $db_name"
      if zcat "$backup_file" | mysql -h ${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD}; then
        echo "✅ Successfully restored $db_name"
      else
        echo "❌ Failed to restore $db_name"
        restore_success=false
      fi
    fi
  done
  [ "$restore_success" = true ]
}
# Function to restore from single SQL file
restore_from_file() {
  local backup_file="$1"
  echo "🔄 Restoring from backup file: $backup_file"
  if mysql -h ${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} < "$backup_file"; then
    echo "✅ Successfully restored from $backup_file"
    return 0
  else
    echo "❌ Failed to restore from $backup_file"
    return 1
  fi
}
# Main backup detection and restoration logic
backup_restored=false
# Check if databases already have data
if check_database_populated "${DB_AUTH_NAME}" && check_database_populated "${DB_WORLD_NAME}"; then
  echo "✅ Databases already populated - skipping backup detection"
  backup_restored=true
else
  echo "🔍 Databases appear empty - checking for backups to restore..."
  # Run the lookup inside the condition so its non-zero return does not
  # trip `set -e` when no backup exists
  if backup_path=$(find_latest_backup) && [ -n "$backup_path" ]; then
    if [ -f "$backup_path" ]; then
      echo "📦 Found legacy backup file: $(basename "$backup_path")"
      if restore_from_file "$backup_path"; then
        backup_restored=true
      fi
    elif [ -d "$backup_path" ]; then
      echo "📦 Found backup directory: $(basename "$backup_path")"
      if restore_from_directory "$backup_path"; then
        backup_restored=true
      fi
    fi
  else
    echo "ℹ️ No valid backups found"
  fi
fi
# Create databases if restore didn't happen or failed
if [ "$backup_restored" = false ]; then
  echo "🗄️ Creating fresh AzerothCore databases..."
  mysql -h ${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "
    CREATE DATABASE IF NOT EXISTS ${DB_AUTH_NAME} DEFAULT CHARACTER SET ${MYSQL_CHARACTER_SET} COLLATE ${MYSQL_COLLATION};
    CREATE DATABASE IF NOT EXISTS ${DB_WORLD_NAME} DEFAULT CHARACTER SET ${MYSQL_CHARACTER_SET} COLLATE ${MYSQL_COLLATION};
    CREATE DATABASE IF NOT EXISTS ${DB_CHARACTERS_NAME} DEFAULT CHARACTER SET ${MYSQL_CHARACTER_SET} COLLATE ${MYSQL_COLLATION};
    SHOW DATABASES;
  " || {
    echo "❌ Failed to create databases"
    exit 1
  }
  echo "✅ Fresh databases created!"
fi
# Set restoration status markers for db-import service
# (the redirect creates the marker file, so no separate touch is needed)
if [ "$backup_restored" = true ]; then
  echo "📝 Creating restoration success marker"
  echo "$(date): Backup successfully restored" > "$RESTORE_SUCCESS_MARKER"
  echo "🚫 DB import will be skipped - restoration completed successfully"
else
  echo "📝 Creating restoration failed marker"
  echo "$(date): No backup restored - fresh databases created" > "$RESTORE_FAILED_MARKER"
  echo "▶️ DB import will proceed - fresh databases need population"
fi
echo "✅ Database initialization complete!"
echo " Backup restored: $backup_restored"
echo " Status marker: $([ "$backup_restored" = true ] && echo "$RESTORE_SUCCESS_MARKER" || echo "$RESTORE_FAILED_MARKER")"
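The marker files above form a small handshake between db-init and db-import: db-init always writes exactly one of the two files, and db-import keys its behavior off which one exists. A minimal sketch of that control flow, with the MySQL work stubbed out (the temp directory stands in for `/var/lib/mysql-persistent`):

```shell
#!/bin/sh
# Sketch of the restore-status marker handshake; paths are illustrative.
STATUS_DIR=$(mktemp -d)
SUCCESS_MARKER="$STATUS_DIR/.restore-completed"
FAILED_MARKER="$STATUS_DIR/.restore-failed"

backup_restored=false                      # pretend no usable backup was found
rm -f "$SUCCESS_MARKER" "$FAILED_MARKER"   # stale markers never leak into this run

# db-init's side: write exactly one marker
if [ "$backup_restored" = true ]; then
  echo "$(date): Backup successfully restored" > "$SUCCESS_MARKER"
else
  echo "$(date): No backup restored - fresh databases created" > "$FAILED_MARKER"
fi

# db-import's side: skip the import only when a restore succeeded
if [ -f "$SUCCESS_MARKER" ]; then
  echo "skip import"
else
  echo "run import"
fi
```

Because the markers are cleared at the top of every run, a stale success marker from a previous deployment can never suppress a needed import.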

View File

@@ -1,34 +0,0 @@
#!/bin/bash
set -e
echo "🔧 Waiting for MySQL to be ready..."
# Wait for MySQL to be responsive with longer timeout
for i in $(seq 1 ${DB_WAIT_RETRIES}); do
  if mysql -h ${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "SELECT 1;" >/dev/null 2>&1; then
    echo "✅ MySQL is responsive"
    break
  fi
  echo "⏳ Waiting for MySQL... attempt $i/${DB_WAIT_RETRIES}"
  sleep ${DB_WAIT_SLEEP}
done
# Check if we should restore from backup
if [ -f "/var/lib/mysql-persistent/backup.sql" ]; then
  echo "🔄 Restoring databases from backup..."
  mysql -h ${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} < /var/lib/mysql-persistent/backup.sql || {
    echo "⚠️ Backup restore failed, will create fresh databases"
  }
fi
echo "🗄️ Creating/verifying AzerothCore databases..."
mysql -h ${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_ROOT_PASSWORD} -e "
  CREATE DATABASE IF NOT EXISTS ${DB_AUTH_NAME} DEFAULT CHARACTER SET ${MYSQL_CHARACTER_SET} COLLATE ${MYSQL_COLLATION};
  CREATE DATABASE IF NOT EXISTS ${DB_WORLD_NAME} DEFAULT CHARACTER SET ${MYSQL_CHARACTER_SET} COLLATE ${MYSQL_COLLATION};
  CREATE DATABASE IF NOT EXISTS ${DB_CHARACTERS_NAME} DEFAULT CHARACTER SET ${MYSQL_CHARACTER_SET} COLLATE ${MYSQL_COLLATION};
  SHOW DATABASES;
" || {
  echo "❌ Failed to create databases"
  exit 1
}
echo "✅ Databases ready!"
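The script above relies on a restore-then-create pattern whose behavior under `set -e` is easy to miss: the restore step is allowed to fail (its `||` handler keeps `set -e` from aborting), while a failed create is fatal. Reduced to stubs so only the control flow remains (`restore_stub` and `create_stub` stand in for the real mysql invocations):

```shell
#!/bin/sh
set -e
# Stand-in for: mysql ... < /var/lib/mysql-persistent/backup.sql
restore_stub() { return 1; }
# Stand-in for: mysql -e "CREATE DATABASE IF NOT EXISTS ..."
create_stub()  { echo "databases created"; }

# Non-fatal: the || handler absorbs the failure, so set -e does not abort
restore_stub || echo "restore failed, will create fresh databases"
# Fatal: if creation fails, the handler exits the script
create_stub || { echo "failed to create databases"; exit 1; }
```

Since the create step uses `CREATE DATABASE IF NOT EXISTS`, running it after a successful restore is harmless, which is why the two steps can be chained unconditionally.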

View File

@@ -1,375 +0,0 @@
#!/bin/bash
# ==============================================
# AzerothCore Podman Deployment & Health Check Script (Distrobox Compatible)
# ==============================================
# This script deploys the complete AzerothCore stack using Podman via distrobox-host-exec
# Usage: ./deploy-and-check-distrobox.sh [--skip-deploy] [--quick-check]
set -e # Exit on any error
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script options
SKIP_DEPLOY=false
QUICK_CHECK=false
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--skip-deploy)
SKIP_DEPLOY=true
shift
;;
--quick-check)
QUICK_CHECK=true
shift
;;
-h|--help)
echo "Usage: $0 [--skip-deploy] [--quick-check]"
echo " --skip-deploy Skip deployment, only run health checks"
echo " --quick-check Run basic health checks only"
exit 0
;;
*)
echo "Unknown option $1"
exit 1
;;
esac
done
# Function to print colored output
print_status() {
local status=$1
local message=$2
case $status in
"INFO")
echo -e "${BLUE}ℹ️ ${message}${NC}"
;;
"SUCCESS")
echo -e "${GREEN}✅ ${message}${NC}"
;;
"WARNING")
echo -e "${YELLOW}⚠️ ${message}${NC}"
;;
"ERROR")
echo -e "${RED}❌ ${message}${NC}"
;;
"HEADER")
echo -e "\n${BLUE}=== ${message} ===${NC}"
;;
esac
}
# Function to check if a port is accessible
check_port() {
local port=$1
local service_name=$2
local timeout=${3:-5}
if timeout $timeout bash -c "echo >/dev/tcp/localhost/$port" 2>/dev/null; then
print_status "SUCCESS" "$service_name (port $port): CONNECTED"
return 0
else
print_status "ERROR" "$service_name (port $port): FAILED"
return 1
fi
}
# Function to wait for a service to be ready
wait_for_service() {
local service_name=$1
local max_attempts=$2
local check_command=$3
print_status "INFO" "Waiting for $service_name to be ready..."
for i in $(seq 1 $max_attempts); do
if eval "$check_command" &>/dev/null; then
print_status "SUCCESS" "$service_name is ready!"
return 0
fi
if [ $i -eq $max_attempts ]; then
print_status "ERROR" "$service_name failed to start after $max_attempts attempts"
return 1
fi
echo -n "."
sleep 5
done
}
# Function to check container health
check_container_health() {
local container_name=$1
# Check if container is running
if distrobox-host-exec podman ps --format '{{.Names}}' 2>/dev/null | grep -q "^${container_name}$"; then
print_status "SUCCESS" "$container_name: running"
return 0
else
print_status "ERROR" "$container_name: not running"
return 1
fi
}
# Function to deploy the stack
deploy_stack() {
print_status "HEADER" "DEPLOYING AZEROTHCORE STACK"
# Check if environment files exist
for env_file in "docker-compose-azerothcore-database.env" "docker-compose-azerothcore-services.env"; do
if [ ! -f "$env_file" ]; then
print_status "ERROR" "Environment file $env_file not found"
exit 1
fi
done
print_status "INFO" "Step 1: Cleaning up existing containers..."
distrobox-host-exec bash -c "podman rm -f ac-mysql ac-backup ac-db-init ac-db-import ac-authserver ac-worldserver ac-client-data 2>/dev/null || true"
print_status "INFO" "Step 2: Creating required directories..."
mkdir -p storage/azerothcore/{mysql-data,backups,config,data,logs,modules,lua_scripts,cache}
print_status "INFO" "Step 3: Creating network..."
distrobox-host-exec bash -c "podman network create azerothcore --subnet 172.20.0.0/16 --gateway 172.20.0.1 2>/dev/null || true"
print_status "INFO" "Step 4: Starting MySQL..."
distrobox-host-exec bash -c "podman run -d --name ac-mysql --network azerothcore --network-alias ac-mysql -p 64306:3306 \
-e MYSQL_ROOT_PASSWORD=azerothcore123 -e MYSQL_ROOT_HOST='%' -e MYSQL_ALLOW_EMPTY_PASSWORD=no \
-v ./storage/azerothcore/mysql-data:/var/lib/mysql-persistent \
-v ./storage/azerothcore/backups:/backups \
--tmpfs /var/lib/mysql-runtime:size=2G \
--restart unless-stopped \
docker.io/library/mysql:8.0 \
mysqld --datadir=/var/lib/mysql-runtime --default-authentication-plugin=mysql_native_password \
--character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --max_connections=1000 \
--innodb-buffer-pool-size=256M --innodb-log-file-size=64M"
# Wait for MySQL
wait_for_service "MySQL" 24 "distrobox-host-exec podman exec ac-mysql mysql -uroot -pazerothcore123 -e 'SELECT 1' 2>/dev/null"
print_status "INFO" "Step 5: Starting backup service..."
distrobox-host-exec bash -c "podman run -d --name ac-backup --network azerothcore \
-e MYSQL_HOST=ac-mysql -e MYSQL_PORT=3306 -e MYSQL_USER=root -e MYSQL_PASSWORD=azerothcore123 \
-e BACKUP_RETENTION_DAYS=3 -e BACKUP_RETENTION_HOURS=6 -e BACKUP_DAILY_TIME=09 \
-e DB_AUTH_NAME=acore_auth -e DB_WORLD_NAME=acore_world -e DB_CHARACTERS_NAME=acore_characters -e TZ=UTC \
-v ./storage/azerothcore/backups:/backups -w /tmp --restart unless-stopped \
docker.io/library/mysql:8.0 /bin/bash -c \
'microdnf install -y curl || yum install -y curl; \
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/backup-scheduler.sh -o /tmp/backup-scheduler.sh; \
chmod +x /tmp/backup-scheduler.sh; /tmp/backup-scheduler.sh'"
print_status "INFO" "Step 6: Initializing databases..."
distrobox-host-exec bash -c "podman run -d --name ac-db-init --network azerothcore \
-e MYSQL_PWD=azerothcore123 -e MYSQL_HOST=ac-mysql -e MYSQL_USER=root -e MYSQL_ROOT_PASSWORD=azerothcore123 \
-e DB_WAIT_RETRIES=60 -e DB_WAIT_SLEEP=10 \
-e DB_AUTH_NAME=acore_auth -e DB_WORLD_NAME=acore_world -e DB_CHARACTERS_NAME=acore_characters \
-e MYSQL_CHARACTER_SET=utf8mb4 -e MYSQL_COLLATION=utf8mb4_unicode_ci \
-v ./storage/azerothcore/mysql-data:/var/lib/mysql-persistent --restart no \
docker.io/library/mysql:8.0 sh -c \
'microdnf install -y curl || yum install -y curl; \
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/db-init.sh -o /tmp/db-init.sh; \
chmod +x /tmp/db-init.sh; /tmp/db-init.sh'"
# Wait for db-init to complete
wait_for_service "Database Init" 36 "distrobox-host-exec podman ps -a --format '{{.Names}} {{.Status}}' | grep 'ac-db-init' | grep -q 'Exited (0)'"
print_status "INFO" "Step 7: Importing database..."
sudo chmod -R 777 storage/azerothcore/config 2>/dev/null || true
distrobox-host-exec bash -c "podman run -d --name ac-db-import --network azerothcore --privileged \
-e AC_DATA_DIR=/azerothcore/data -e AC_LOGS_DIR=/azerothcore/logs \
-e AC_LOGIN_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_auth' \
-e AC_WORLD_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_world' \
-e AC_CHARACTER_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_characters' \
-e AC_CLOSE_IDLE_CONNECTIONS=false -e AC_UPDATES_ENABLE_DATABASES=7 -e AC_UPDATES_AUTO_SETUP=1 \
-e AC_LOG_LEVEL=1 -e AC_LOGGER_ROOT_CONFIG='1,Console' -e AC_LOGGER_SERVER_CONFIG='1,Console' -e AC_APPENDER_CONSOLE_CONFIG='1,2,0' \
-v ./storage/azerothcore/config:/azerothcore/env/dist/etc -u 0:0 --restart no \
docker.io/acore/ac-wotlk-db-import:14.0.0-dev"
# Wait for db-import to complete
wait_for_service "Database Import" 60 "distrobox-host-exec podman ps -a --format '{{.Names}} {{.Status}}' | grep 'ac-db-import' | grep -q 'Exited (0)'"
print_status "INFO" "Step 8: Starting client data download..."
distrobox-host-exec bash -c "podman run -d --name ac-client-data --network azerothcore --privileged \
-v ./storage/azerothcore/data:/azerothcore/data -v ./storage/azerothcore/cache:/cache -w /tmp --restart no \
docker.io/library/alpine:latest sh -c \
'apk add --no-cache curl unzip wget ca-certificates p7zip jq; \
chown -R 1001:1001 /azerothcore/data /cache 2>/dev/null || true; mkdir -p /cache; \
curl -fsSL https://raw.githubusercontent.com/uprightbass360/acore-compose/main/scripts/download-client-data.sh -o /tmp/download-client-data.sh; \
chmod +x /tmp/download-client-data.sh; /tmp/download-client-data.sh'" &
print_status "INFO" "Step 9: Starting Auth Server..."
distrobox-host-exec bash -c "podman run -d --name ac-authserver --network azerothcore --privileged -p 3784:3724 \
-e AC_LOGIN_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_auth' \
-e AC_UPDATES_ENABLE_DATABASES=0 -e AC_BIND_IP='0.0.0.0' -e AC_LOG_LEVEL=1 \
-e AC_LOGGER_ROOT_CONFIG='1,Console' -e AC_LOGGER_SERVER_CONFIG='1,Console' -e AC_APPENDER_CONSOLE_CONFIG='1,2,0' \
-v ./storage/azerothcore/config:/azerothcore/env/dist/etc --cap-add SYS_NICE --restart unless-stopped \
docker.io/acore/ac-wotlk-authserver:14.0.0-dev"
# Wait for authserver
wait_for_service "Auth Server" 12 "check_container_health ac-authserver"
print_status "INFO" "Step 10: Waiting for client data (this may take 10-20 minutes)..."
print_status "INFO" "World Server will start once data download completes..."
print_status "SUCCESS" "Deployment in progress! Client data downloading in background."
print_status "INFO" "World Server will be started manually once client data is ready."
}
# Function to start worldserver
start_worldserver() {
print_status "INFO" "Starting World Server..."
distrobox-host-exec bash -c "podman run -d --name ac-worldserver --network azerothcore --privileged -t -p 8215:8085 -p 7778:7878 \
-e AC_LOGIN_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_auth' \
-e AC_WORLD_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_world' \
-e AC_CHARACTER_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_characters' \
-e AC_UPDATES_ENABLE_DATABASES=0 -e AC_BIND_IP='0.0.0.0' -e AC_DATA_DIR='/azerothcore/data' \
-e AC_SOAP_PORT=7878 -e AC_PROCESS_PRIORITY=0 -e PLAYERBOT_ENABLED=1 -e PLAYERBOT_MAX_BOTS=40 -e AC_LOG_LEVEL=2 \
-v ./storage/azerothcore/data:/azerothcore/data \
-v ./storage/azerothcore/config:/azerothcore/env/dist/etc \
-v ./storage/azerothcore/logs:/azerothcore/logs \
-v ./storage/azerothcore/modules:/azerothcore/modules \
-v ./storage/azerothcore/lua_scripts:/azerothcore/lua_scripts \
--cap-add SYS_NICE --restart unless-stopped \
docker.io/acore/ac-wotlk-worldserver:14.0.0-dev"
wait_for_service "World Server" 12 "check_container_health ac-worldserver"
}
# Function to perform health checks
perform_health_checks() {
print_status "HEADER" "CONTAINER HEALTH STATUS"
# Check all containers
local containers=("ac-mysql" "ac-backup" "ac-authserver" "ac-worldserver")
local container_failures=0
for container in "${containers[@]}"; do
if distrobox-host-exec podman ps -a --format '{{.Names}}' 2>/dev/null | grep -q "^${container}$"; then
if ! check_container_health "$container"; then
((container_failures++))
fi
fi
done
print_status "HEADER" "PORT CONNECTIVITY TESTS"
# Database Layer
print_status "INFO" "Database Layer:"
local port_failures=0
if ! check_port 64306 "MySQL"; then ((port_failures++)); fi
# Services Layer
print_status "INFO" "Services Layer:"
if ! check_port 3784 "Auth Server"; then ((port_failures++)); fi
if distrobox-host-exec podman ps --format '{{.Names}}' 2>/dev/null | grep -q "^ac-worldserver$"; then
if ! check_port 8215 "World Server"; then ((port_failures++)); fi
if ! check_port 7778 "SOAP API"; then ((port_failures++)); fi
else
print_status "INFO" "World Server: not started yet (waiting for client data)"
fi
if [ "$QUICK_CHECK" = false ]; then
print_status "HEADER" "DATABASE CONNECTIVITY TEST"
# Test database connectivity and verify schemas
if distrobox-host-exec podman exec ac-mysql mysql -uroot -pazerothcore123 -e "SHOW DATABASES;" 2>/dev/null | grep -q "acore_auth"; then
print_status "SUCCESS" "Database schemas: verified"
else
print_status "ERROR" "Database schemas: verification failed"
((container_failures++))
fi
# Test realm configuration
realm_count=$(distrobox-host-exec podman exec ac-mysql mysql -uroot -pazerothcore123 -e "USE acore_auth; SELECT COUNT(*) FROM realmlist;" 2>/dev/null | tail -1)
if [ "$realm_count" -gt 0 ] 2>/dev/null; then
print_status "SUCCESS" "Realm configuration: $realm_count realm(s) configured"
else
print_status "WARNING" "Realm configuration: no realms configured yet (post-install needed)"
fi
# Check for playerbots database
if distrobox-host-exec podman exec ac-mysql mysql -uroot -pazerothcore123 -e "SHOW DATABASES;" 2>/dev/null | grep -q "acore_playerbots"; then
print_status "SUCCESS" "Playerbots database: detected"
else
print_status "INFO" "Playerbots database: not present (standard installation)"
fi
fi
print_status "HEADER" "DEPLOYMENT SUMMARY"
# Summary
local total_failures=$((container_failures + port_failures))
if [ $total_failures -eq 0 ]; then
print_status "SUCCESS" "All services are healthy and operational!"
print_status "INFO" "Available services:"
echo " 🎮 Game Server: localhost:8215"
echo " 🔐 Auth Server: localhost:3784"
echo " 🔧 SOAP API: localhost:7778"
echo " 🗄️ MySQL: localhost:64306"
echo ""
print_status "INFO" "Default credentials:"
echo " 🗄️ MySQL: root / azerothcore123"
return 0
else
print_status "WARNING" "Health check completed with $total_failures issue(s)"
print_status "INFO" "Check container logs for details: distrobox-host-exec podman logs <container-name>"
return 1
fi
}
# Function to show container status
show_container_status() {
print_status "HEADER" "CONTAINER STATUS OVERVIEW"
echo -e "${BLUE}Container Name\t\tStatus${NC}"
echo "=============================================="
distrobox-host-exec podman ps -a --format "table {{.Names}}\t{{.Status}}" 2>/dev/null | grep ac- || echo "No containers found"
}
# Main execution
main() {
print_status "HEADER" "AZEROTHCORE DEPLOYMENT & HEALTH CHECK (DISTROBOX/PODMAN)"
# Check if distrobox-host-exec is available
if ! command -v distrobox-host-exec &> /dev/null; then
print_status "ERROR" "distrobox-host-exec is not available - are you running in a distrobox?"
exit 1
fi
# Check if podman is available on host
if ! distrobox-host-exec podman version &> /dev/null; then
print_status "ERROR" "Podman is not available on the host system"
exit 1
fi
# Deploy the stack unless skipped
if [ "$SKIP_DEPLOY" = false ]; then
deploy_stack
else
print_status "INFO" "Skipping deployment, running health checks only..."
fi
# Show container status
show_container_status
# Perform health checks
if perform_health_checks; then
print_status "SUCCESS" "🎉 AzerothCore stack deployment successful!"
exit 0
else
print_status "INFO" "⚠️ Some services may still be starting - check status with: distrobox-host-exec podman ps -a"
exit 0
fi
}
# Run main function
main "$@"
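The one-shot containers in this script (ac-db-init, ac-db-import) signal completion through their exit status, and `wait_for_service` polls `podman ps -a` for an `Exited (0)` status line. That predicate, isolated for clarity; the status strings below are illustrative samples of `podman ps -a --format '{{.Names}} {{.Status}}'` output, not real container state:

```shell
#!/bin/sh
# A one-shot container is "done" only when it exited AND its exit code was 0;
# grep for the literal "Exited (0)" covers both conditions at once.
completed_ok() {
  echo "$1" | grep -q 'Exited (0)'
}

completed_ok "ac-db-init Exited (0) 2 minutes ago" && echo "init done"
completed_ok "ac-db-import Exited (1) 1 minute ago" || echo "import failed"
```

A nonzero exit code (`Exited (1)`, `Exited (137)`, …) or a still-running container both fail the check, so the deploy script keeps polling until the timeout.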

View File

@@ -1,541 +0,0 @@
#!/bin/bash
# ==============================================
# AzerothCore Docker Deployment & Health Check Script
# ==============================================
# This script deploys the complete AzerothCore stack and performs comprehensive health checks
# Usage: ./deploy-and-check.sh [--skip-deploy] [--quick-check] [--setup]
set -e # Exit on any error
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script options
SKIP_DEPLOY=false
QUICK_CHECK=false
RUN_SETUP=false
MODULES_ENABLED=false
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--skip-deploy)
SKIP_DEPLOY=true
shift
;;
--quick-check)
QUICK_CHECK=true
shift
;;
--setup)
RUN_SETUP=true
shift
;;
-h|--help)
echo "Usage: $0 [--skip-deploy] [--quick-check] [--setup]"
echo " --skip-deploy Skip deployment, only run health checks"
echo " --quick-check Run basic health checks only"
echo " --setup Run interactive server setup before deployment"
exit 0
;;
*)
echo "Unknown option $1"
exit 1
;;
esac
done
# Function to print colored output
print_status() {
local status=$1
local message=$2
case $status in
"INFO")
echo -e "${BLUE}ℹ️ ${message}${NC}"
;;
"SUCCESS")
echo -e "${GREEN}✅ ${message}${NC}"
;;
"WARNING")
echo -e "${YELLOW}⚠️ ${message}${NC}"
;;
"ERROR")
echo -e "${RED}❌ ${message}${NC}"
;;
"HEADER")
echo -e "\n${BLUE}=== ${message} ===${NC}"
;;
esac
}
# Function to check if a port is accessible
check_port() {
local port=$1
local service_name=$2
local timeout=${3:-5}
if timeout $timeout bash -c "echo >/dev/tcp/localhost/$port" 2>/dev/null; then
print_status "SUCCESS" "$service_name (port $port): CONNECTED"
return 0
else
print_status "ERROR" "$service_name (port $port): FAILED"
return 1
fi
}
# Function to format seconds as MM:SS
format_time() {
local total_seconds=$1
local minutes=$((total_seconds / 60))
local seconds=$((total_seconds % 60))
printf "%d:%02d" "$minutes" "$seconds"
}
# Function to wait for a service to be ready
wait_for_service() {
local service_name=$1
local max_attempts=$2
local check_command=$3
local container_name=""
# Extract container name from common patterns
if echo "$check_command" | grep -q "ac-client-data"; then
container_name="ac-client-data"
elif echo "$check_command" | grep -q "ac-db-import"; then
container_name="ac-db-import"
elif echo "$check_command" | grep -q "ac-mysql"; then
container_name="ac-mysql"
elif echo "$check_command" | grep -q "ac-worldserver"; then
container_name="ac-worldserver"
elif echo "$check_command" | grep -q "ac-authserver"; then
container_name="ac-authserver"
fi
local timeout_formatted=$(format_time $((max_attempts * 5)))
print_status "INFO" "Waiting for $service_name to be ready... (timeout: $timeout_formatted)"
for i in $(seq 1 $max_attempts); do
if eval "$check_command" &>/dev/null; then
print_status "SUCCESS" "$service_name is ready!"
return 0
fi
if [ $i -eq $max_attempts ]; then
print_status "ERROR" "$service_name failed to start after $max_attempts attempts"
if [ -n "$container_name" ]; then
print_status "INFO" "Last few log lines from $container_name:"
docker logs "$container_name" --tail 5 2>/dev/null | sed 's/^/ /' || echo " (no logs available)"
fi
return 1
fi
# Show progress with more informative output
local elapsed=$((i * 5))
local remaining=$(( (max_attempts - i) * 5))
local elapsed_formatted=$(format_time $elapsed)
local remaining_formatted=$(format_time $remaining)
if [ -n "$container_name" ]; then
# Get container status
local status=$(docker inspect --format='{{.State.Status}}' "$container_name" 2>/dev/null || echo "unknown")
local health=$(docker inspect --format='{{.State.Health.Status}}' "$container_name" 2>/dev/null || echo "no-health-check")
# Show different progress info based on service
case "$service_name" in
"Client Data")
local last_log=$(docker logs "$container_name" --tail 1 2>/dev/null | head -c 80 || echo "...")
printf "${YELLOW}⏳${NC} %s elapsed, %s remaining | Status: %s | Latest: %s\n" "$elapsed_formatted" "$remaining_formatted" "$status" "$last_log"
;;
"Database Import")
printf "${YELLOW}⏳${NC} %s elapsed, %s remaining | Status: %s | Importing databases...\n" "$elapsed_formatted" "$remaining_formatted" "$status"
;;
*)
printf "${YELLOW}⏳${NC} %s elapsed, %s remaining | Status: %s" "$elapsed_formatted" "$remaining_formatted" "$status"
if [ "$health" != "no-health-check" ]; then
printf " | Health: %s" "$health"
fi
printf "\n"
;;
esac
else
printf "${YELLOW}⏳${NC} %s elapsed, %s remaining | Checking...\n" "$elapsed_formatted" "$remaining_formatted"
fi
sleep 5
done
}
# Function to check container health
check_container_health() {
local container_name=$1
local status=$(docker inspect --format='{{.State.Health.Status}}' $container_name 2>/dev/null || echo "no-health-check")
if [ "$status" = "healthy" ]; then
print_status "SUCCESS" "$container_name: healthy"
return 0
elif [ "$status" = "no-health-check" ] || [ "$status" = "<no value>" ]; then
# Check if container is running
if docker ps --format '{{.Names}}' | grep -q "^${container_name}$"; then
print_status "SUCCESS" "$container_name: running (no health check)"
return 0
else
print_status "ERROR" "$container_name: not running"
return 1
fi
else
print_status "WARNING" "$container_name: $status"
return 1
fi
}
# Function to check web service health
check_web_service() {
local url=$1
local service_name=$2
local expected_pattern=$3
response=$(curl -s --max-time 10 "$url" 2>/dev/null || echo "")
if [ -n "$expected_pattern" ]; then
if echo "$response" | grep -q "$expected_pattern"; then
print_status "SUCCESS" "$service_name: HTTP OK (content verified)"
return 0
else
print_status "ERROR" "$service_name: HTTP OK but content verification failed"
return 1
fi
else
if [ -n "$response" ]; then
print_status "SUCCESS" "$service_name: HTTP OK"
return 0
else
print_status "ERROR" "$service_name: HTTP failed"
return 1
fi
fi
}
# Function to deploy the stack
deploy_stack() {
print_status "HEADER" "DEPLOYING AZEROTHCORE STACK"
# Check if custom environment files exist first, then fallback to base files
DB_ENV_FILE="./docker-compose-azerothcore-database-custom.env"
SERVICES_ENV_FILE="./docker-compose-azerothcore-services-custom.env"
MODULES_ENV_FILE="./docker-compose-azerothcore-modules-custom.env"
TOOLS_ENV_FILE="./docker-compose-azerothcore-tools-custom.env"
# Fallback to base files if custom files don't exist
if [ ! -f "$DB_ENV_FILE" ]; then
DB_ENV_FILE="./docker-compose-azerothcore-database.env"
fi
if [ ! -f "$SERVICES_ENV_FILE" ]; then
SERVICES_ENV_FILE="./docker-compose-azerothcore-services.env"
fi
if [ ! -f "$MODULES_ENV_FILE" ]; then
MODULES_ENV_FILE="./docker-compose-azerothcore-modules.env"
fi
if [ ! -f "$TOOLS_ENV_FILE" ]; then
TOOLS_ENV_FILE="./docker-compose-azerothcore-tools.env"
fi
# Check if required environment files exist
for env_file in "$DB_ENV_FILE" "$SERVICES_ENV_FILE" "$TOOLS_ENV_FILE"; do
if [ ! -f "$env_file" ]; then
print_status "ERROR" "Environment file $env_file not found"
print_status "INFO" "Run ./scripts/setup-server.sh first to create environment files"
exit 1
fi
done
# Check if modules are enabled (set global variable)
if [ -f "$MODULES_ENV_FILE" ]; then
MODULES_ENABLED=true
else
MODULES_ENABLED=false
fi
print_status "INFO" "Step 1: Deploying database layer..."
docker compose --env-file "$DB_ENV_FILE" -f ./docker-compose-azerothcore-database.yml up -d --remove-orphans
# Wait for database initialization
wait_for_service "MySQL" 24 "docker exec ac-mysql mysql -uroot -pazerothcore123 -e 'SELECT 1' >/dev/null 2>&1"
# Wait for database import (can succeed with backup restore OR fail without backup)
print_status "INFO" "Waiting for Database Import to complete (backup restore attempt)..."
local import_result=""
local elapsed=0
local max_time=180 # 3 minutes max for import to complete
while [ $elapsed -lt $max_time ]; do
local import_status=$(docker inspect ac-db-import --format='{{.State.Status}}' 2>/dev/null || echo "unknown")
if [ "$import_status" = "exited" ]; then
local exit_code=$(docker inspect ac-db-import --format='{{.State.ExitCode}}' 2>/dev/null || echo "unknown")
if [ "$exit_code" = "0" ]; then
print_status "SUCCESS" "Database Import completed successfully (backup restored)"
import_result="restored"
break
else
print_status "INFO" "Database Import failed (no valid backup found - expected for fresh setup)"
import_result="failed"
break
fi
fi
printf "${YELLOW}⏳${NC} ${elapsed}s elapsed, $((max_time - elapsed))s remaining | Status: $import_status | Checking for backup...\n"
sleep 5
elapsed=$((elapsed + 5))
done
if [ -z "$import_result" ]; then
print_status "ERROR" "Database Import did not complete within timeout"
exit 1
fi
# If import failed (no backup), wait for init to create databases
if [ "$import_result" = "failed" ]; then
print_status "INFO" "Waiting for Database Init to create fresh databases..."
local init_elapsed=0
local init_max_time=120 # 2 minutes for init
while [ $init_elapsed -lt $init_max_time ]; do
local init_status=$(docker inspect ac-db-init --format='{{.State.Status}}' 2>/dev/null || echo "created")
if [ "$init_status" = "exited" ]; then
local init_exit_code=$(docker inspect ac-db-init --format='{{.State.ExitCode}}' 2>/dev/null || echo "unknown")
if [ "$init_exit_code" = "0" ]; then
print_status "SUCCESS" "Database Init completed successfully (fresh databases created)"
break
else
print_status "ERROR" "Database Init failed"
print_status "INFO" "Last few log lines from ac-db-init:"
docker logs ac-db-init --tail 10 2>/dev/null || true
exit 1
fi
elif [ "$init_status" = "running" ]; then
printf "${YELLOW}⏳${NC} ${init_elapsed}s elapsed, $((init_max_time - init_elapsed))s remaining | Status: $init_status | Creating databases...\n"
else
printf "${YELLOW}⏳${NC} ${init_elapsed}s elapsed, $((init_max_time - init_elapsed))s remaining | Status: $init_status | Waiting to start...\n"
fi
sleep 5
init_elapsed=$((init_elapsed + 5))
done
if [ $init_elapsed -ge $init_max_time ]; then
print_status "ERROR" "Database Init did not complete within timeout"
exit 1
fi
fi
print_status "INFO" "Step 2: Deploying services layer..."
docker compose --env-file "$SERVICES_ENV_FILE" -f ./docker-compose-azerothcore-services.yml up -d 2>&1 | grep -v "Found orphan containers"
# Wait for client data extraction
print_status "INFO" "Waiting for client data download and extraction (this may take 15-25 minutes)..."
print_status "INFO" "Press Ctrl+C to exit if needed..."
wait_for_service "Client Data" 480 "docker logs ac-client-data 2>/dev/null | grep -q 'Game data setup complete'"
# Wait for worldserver to be healthy
wait_for_service "World Server" 24 "check_container_health ac-worldserver"
# Deploy modules if enabled
if [ "$MODULES_ENABLED" = true ]; then
print_status "INFO" "Step 3: Deploying modules layer..."
docker compose --env-file "$MODULES_ENV_FILE" -f ./docker-compose-azerothcore-modules.yml up -d 2>&1 | grep -v "Found orphan containers"
# Wait for modules to be ready
sleep 5
STEP_NUMBER=4
else
print_status "INFO" "Modules layer skipped (no custom modules configuration found)"
STEP_NUMBER=3
fi
print_status "INFO" "Step $STEP_NUMBER: Deploying tools layer..."
docker compose --env-file "$TOOLS_ENV_FILE" -f ./docker-compose-azerothcore-tools.yml up -d
# Wait for tools to be ready
sleep 10
print_status "SUCCESS" "Deployment completed!"
}
# Function to perform health checks
perform_health_checks() {
print_status "HEADER" "CONTAINER HEALTH STATUS"
# Check all containers
local containers=("ac-mysql" "ac-backup" "ac-authserver" "ac-worldserver" "ac-phpmyadmin" "ac-keira3")
# Add modules container if modules are enabled
if [ "$MODULES_ENABLED" = true ]; then
containers+=("ac-modules")
fi
local container_failures=0
for container in "${containers[@]}"; do
# Only check containers that actually exist
if docker ps -a --format '{{.Names}}' | grep -q "^${container}$"; then
if ! check_container_health "$container"; then
# Only count as failure if container is not running, not just missing health check
if ! docker ps --format '{{.Names}}' | grep -q "^${container}$"; then
((container_failures++))
fi
fi
fi
done
print_status "HEADER" "PORT CONNECTIVITY TESTS"
# Database Layer
print_status "INFO" "Database Layer:"
local port_failures=0
if ! check_port 64306 "MySQL"; then ((port_failures++)); fi
# Services Layer
print_status "INFO" "Services Layer:"
if ! check_port 3784 "Auth Server"; then ((port_failures++)); fi
if ! check_port 8215 "World Server"; then ((port_failures++)); fi
if ! check_port 7778 "SOAP API"; then ((port_failures++)); fi
# Tools Layer
print_status "INFO" "Tools Layer:"
if ! check_port 8081 "PHPMyAdmin"; then ((port_failures++)); fi
if ! check_port 4201 "Keira3"; then ((port_failures++)); fi
if [ "$QUICK_CHECK" = false ]; then
print_status "HEADER" "WEB SERVICE HEALTH CHECKS"
local web_failures=0
if ! check_web_service "http://localhost:8081/" "PHPMyAdmin" "phpMyAdmin"; then ((web_failures++)); fi
if ! check_web_service "http://localhost:4201/health" "Keira3" "healthy"; then ((web_failures++)); fi
print_status "HEADER" "DATABASE CONNECTIVITY TEST"
# Test database connectivity and verify schemas
if docker exec ac-mysql mysql -uroot -pazerothcore123 -e "SHOW DATABASES;" 2>/dev/null | grep -q "acore_auth"; then
print_status "SUCCESS" "Database schemas: verified"
else
print_status "ERROR" "Database schemas: verification failed"
((web_failures++))
fi
# Test realm configuration
realm_count=$(docker exec ac-mysql mysql -uroot -pazerothcore123 -e "USE acore_auth; SELECT COUNT(*) FROM realmlist;" 2>/dev/null | tail -1)
if [ "$realm_count" -gt 0 ] 2>/dev/null; then
print_status "SUCCESS" "Realm configuration: $realm_count realm(s) configured"
else
print_status "ERROR" "Realm configuration: no realms found"
((web_failures++))
fi
fi
print_status "HEADER" "DEPLOYMENT SUMMARY"
# Summary
local total_failures=$((container_failures + port_failures + ${web_failures:-0}))
if [ $total_failures -eq 0 ]; then
print_status "SUCCESS" "All services are healthy and operational!"
print_status "INFO" "Available services:"
echo " 🌐 PHPMyAdmin: http://localhost:8081"
echo " 🛠️ Keira3: http://localhost:4201"
echo " 🎮 Game Server: localhost:8215"
echo " 🔐 Auth Server: localhost:3784"
echo " 🔧 SOAP API: localhost:7778"
echo " 🗄️ MySQL: localhost:64306"
echo ""
print_status "INFO" "Default credentials:"
echo " 🗄️ MySQL: root / azerothcore123"
return 0
else
print_status "ERROR" "Health check failed with $total_failures issue(s)"
print_status "INFO" "Check container logs for details: docker logs <container-name>"
return 1
fi
}
# Function to show container status
show_container_status() {
print_status "HEADER" "CONTAINER STATUS OVERVIEW"
echo -e "${BLUE}Container Name\t\tStatus\t\t\tPorts${NC}"
echo "=================================================================="
  docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep "ac-" || true
}
# Main execution
main() {
print_status "HEADER" "AZEROTHCORE DEPLOYMENT & HEALTH CHECK"
# Check if docker is available
if ! command -v docker &> /dev/null; then
print_status "ERROR" "Docker is not installed or not in PATH"
exit 1
fi
# Check if docker compose is available
if ! docker compose version &> /dev/null; then
print_status "ERROR" "Docker Compose is not available"
exit 1
fi
# Run setup if requested
if [ "$RUN_SETUP" = true ]; then
print_status "HEADER" "RUNNING SERVER SETUP"
print_status "INFO" "Starting interactive server configuration..."
# Change to parent directory to run setup script
cd "$(dirname "$(pwd)")"
if [ -f "scripts/setup-server.sh" ]; then
bash scripts/setup-server.sh
if [ $? -ne 0 ]; then
print_status "ERROR" "Server setup failed or was cancelled"
exit 1
fi
else
print_status "ERROR" "Setup script not found at scripts/setup-server.sh"
exit 1
fi
# Return to scripts directory
cd scripts
print_status "SUCCESS" "Server setup completed!"
echo ""
fi
# Deploy the stack unless skipped
if [ "$SKIP_DEPLOY" = false ]; then
deploy_stack
else
print_status "INFO" "Skipping deployment, running health checks only..."
fi
# Show container status
show_container_status
# Perform health checks
if perform_health_checks; then
print_status "SUCCESS" "🎉 AzerothCore stack is fully operational!"
exit 0
else
print_status "ERROR" "❌ Health check failed - see issues above"
exit 1
fi
}
# Run main function
main "$@"
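The deployment and health-check steps above lean on one recurring idiom: poll a condition until it succeeds or a timeout expires. A minimal standalone sketch of that pattern (`wait_until` is an illustrative name, not a function from this script):

```shell
# Generic poll-with-timeout helper: run the given command once per second
# until it succeeds or max_seconds elapse. Illustrative sketch only.
wait_until() {
  local max_seconds="$1"; shift
  local elapsed=0
  while [ "$elapsed" -lt "$max_seconds" ]; do
    if "$@"; then
      echo "ready after ${elapsed}s"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "timed out after ${max_seconds}s" >&2
  return 1
}

# Example: a condition that is immediately true.
wait_until 5 true
```

The real script varies the poll interval (5s) and reports progress on each iteration, but the control flow is the same.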

scripts/download-client-data.sh Normal file → Executable file

@@ -1,32 +1,40 @@
#!/bin/bash
# ac-compose
set -e
echo '🚀 Starting AzerothCore game data setup...'
REQUESTED_TAG="${CLIENT_DATA_VERSION:-}"
if [ -n "$REQUESTED_TAG" ]; then
echo "📌 Using requested client data version: $REQUESTED_TAG"
LATEST_TAG="$REQUESTED_TAG"
LATEST_URL="https://github.com/wowgaming/client-data/releases/download/${REQUESTED_TAG}/data.zip"
else
echo '📡 Fetching latest client data release info...'
RELEASE_INFO=$(wget -qO- https://api.github.com/repos/wowgaming/client-data/releases/latest 2>/dev/null)
if [ -n "$RELEASE_INFO" ]; then
LATEST_URL=$(echo "$RELEASE_INFO" | grep '"browser_download_url":' | grep '\.zip' | cut -d'"' -f4 | head -1)
LATEST_TAG=$(echo "$RELEASE_INFO" | grep '"tag_name":' | cut -d'"' -f4)
LATEST_SIZE=$(echo "$RELEASE_INFO" | grep '"size":' | head -1 | grep -o '[0-9]*')
fi
if [ -z "$LATEST_URL" ]; then
echo '❌ Could not fetch client-data release information. Aborting.'
exit 1
fi
fi
echo "📍 Latest release: $LATEST_TAG"
echo "📥 Download URL: $LATEST_URL"
# Cache file paths
CACHE_FILE="/cache/client-data-$LATEST_TAG.zip"
VERSION_FILE="/cache/client-data-version.txt"
CACHE_DIR="/cache"
mkdir -p "$CACHE_DIR"
CACHE_FILE="${CACHE_DIR}/client-data-${LATEST_TAG}.zip"
TMP_FILE="${CACHE_FILE}.tmp"
VERSION_FILE="${CACHE_DIR}/client-data-version.txt"
# Check if we have a cached version
if [ -f "$CACHE_FILE" ] && [ -f "$VERSION_FILE" ]; then
@@ -36,7 +44,22 @@ if [ -f "$CACHE_FILE" ] && [ -f "$VERSION_FILE" ]; then
echo "📊 Cached file size: $(ls -lh "$CACHE_FILE" | awk '{print $5}')"
# Verify cache file integrity
echo "🔍 Verifying cached file integrity..."
CACHE_INTEGRITY_OK=false
if command -v 7z >/dev/null 2>&1; then
if 7z t "$CACHE_FILE" >/dev/null 2>&1; then
CACHE_INTEGRITY_OK=true
fi
fi
if [ "$CACHE_INTEGRITY_OK" = "false" ]; then
if unzip -t "$CACHE_FILE" > /dev/null 2>&1; then
CACHE_INTEGRITY_OK=true
fi
fi
if [ "$CACHE_INTEGRITY_OK" = "true" ]; then
echo "✅ Cache file integrity verified"
echo "⚡ Using cached download - skipping download phase"
cp "$CACHE_FILE" data.zip
@@ -47,140 +70,90 @@ if [ -f "$CACHE_FILE" ] && [ -f "$VERSION_FILE" ]; then
else
echo "📦 Cache version ($CACHED_VERSION) differs from latest ($LATEST_TAG)"
echo "🗑️ Removing old cache"
rm -f "${CACHE_DIR}"/client-data-*.zip "$VERSION_FILE"
fi
fi
# Download if we don't have a valid cached file
if [ ! -f "data.zip" ]; then
echo "📥 Downloading client data (~15GB)..."
echo "📍 Source: $LATEST_URL"
# Download with clean progress indication
echo "📥 Starting download..."
if command -v aria2c >/dev/null 2>&1; then
aria2c --max-connection-per-server=8 --split=8 --min-split-size=10M \
--summary-interval=5 --download-result=hide \
--console-log-level=warn --show-console-readout=false \
--dir "$CACHE_DIR" -o "$(basename "$TMP_FILE")" "$LATEST_URL" || {
echo '⚠️ aria2c failed, falling back to wget...'
wget --progress=dot:giga -O "$TMP_FILE" "$LATEST_URL" 2>&1 | sed 's/^/📊 /' || {
echo '❌ wget failed, trying curl...'
curl -L --progress-bar -o "$TMP_FILE" "$LATEST_URL" || {
echo '❌ All download methods failed'
rm -f "$TMP_FILE"
exit 1
}
}
}
else
echo "📥 Using wget (aria2c not available)..."
wget --progress=dot:giga -O "$TMP_FILE" "$LATEST_URL" 2>&1 | sed 's/^/📊 /' || {
echo '❌ wget failed, trying curl...'
curl -L --progress-bar -o "$TMP_FILE" "$LATEST_URL" || {
echo '❌ All download methods failed'
rm -f "$TMP_FILE"
exit 1
}
}
fi
# Verify download integrity
echo "🔍 Verifying download integrity..."
INTEGRITY_OK=false
if command -v 7z >/dev/null 2>&1; then
if 7z t "$TMP_FILE" >/dev/null 2>&1; then
INTEGRITY_OK=true
fi
fi
if [ "$INTEGRITY_OK" = "false" ]; then
if unzip -t "$TMP_FILE" > /dev/null 2>&1; then
INTEGRITY_OK=true
fi
fi
if [ "$INTEGRITY_OK" = "true" ]; then
mv "$TMP_FILE" "$CACHE_FILE"
echo "$LATEST_TAG" > "$VERSION_FILE"
echo '✅ Download completed and verified'
echo "📊 File size: $(ls -lh "$CACHE_FILE" | awk '{print $5}')"
cp "$CACHE_FILE" data.zip
else
echo '❌ Downloaded file is corrupted'
rm -f "$TMP_FILE"
exit 1
fi
fi
echo '📂 Extracting client data (this may take several minutes)...'
rm -rf /azerothcore/data/maps /azerothcore/data/vmaps /azerothcore/data/mmaps /azerothcore/data/dbc
if command -v 7z >/dev/null 2>&1; then
7z x -aoa -o/azerothcore/data/ data.zip >/dev/null 2>&1
else
unzip -o -q data.zip -d /azerothcore/data/
fi
# Handle nested Data directory issue - move contents if extracted to Data subdirectory
if [ -d "/azerothcore/data/Data" ] && [ -n "$(ls -A /azerothcore/data/Data 2>/dev/null)" ]; then
echo '🔧 Fixing data directory structure (moving from Data/ subdirectory)...'
# Move all contents from Data subdirectory to the root data directory
for item in /azerothcore/data/Data/*; do
if [ -e "$item" ]; then
mv "$item" /azerothcore/data/ 2>/dev/null || {
echo "⚠️ Could not move $(basename "$item"), using copy instead..."
cp -r "$item" /azerothcore/data/
rm -rf "$item"
}
fi
done
# Remove empty Data directory
rmdir /azerothcore/data/Data 2>/dev/null || true
echo '✅ Data directory structure fixed'
fi
# Clean up temporary extraction file (keep cached version)
rm -f data.zip
echo '✅ Client data extraction complete!'
echo '📁 Verifying extracted directories:'
# Verify required directories exist and have content
for dir in maps vmaps mmaps dbc; do
if [ -d "/azerothcore/data/$dir" ] && [ -n "$(ls -A /azerothcore/data/$dir 2>/dev/null)" ]; then
DIR_SIZE=$(du -sh /azerothcore/data/$dir 2>/dev/null | cut -f1)
echo "$dir directory: OK ($DIR_SIZE)"
else
echo "$dir directory: MISSING or EMPTY"
exit 1
fi
done
echo '🎉 Game data setup complete! AzerothCore worldserver can now start.'
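The tag and URL extraction above relies on the GitHub API returning pretty-printed JSON with one field per line; a self-contained sketch of the same grep/cut parsing against a canned two-line sample (the sample values are illustrative):

```shell
# Canned sample mimicking two lines of a GitHub releases API response
# (the real script fetches this with wget from api.github.com).
RELEASE_INFO='"tag_name": "v16",
"browser_download_url": "https://github.com/wowgaming/client-data/releases/download/v16/data.zip",'

# Field 4 of a double-quote-delimited split is the value of each JSON field.
LATEST_TAG=$(echo "$RELEASE_INFO" | grep '"tag_name":' | cut -d'"' -f4)
LATEST_URL=$(echo "$RELEASE_INFO" | grep '"browser_download_url":' | grep '\.zip' | cut -d'"' -f4 | head -1)
echo "$LATEST_TAG"
echo "$LATEST_URL"
```

Note this parsing breaks on minified (single-line) JSON; a `jq`-based extraction would be more robust if the image ships it.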

scripts/manage-modules-sql.sh Normal file → Executable file

@@ -1,4 +1,5 @@
#!/bin/bash
# ac-compose
set -e
# Function to execute SQL files for a module
@@ -194,4 +195,4 @@ execute_module_sql_scripts() {
if [ "$MODULE_BLACK_MARKET_AUCTION_HOUSE" = "1" ] && [ -d "mod-black-market" ]; then
execute_module_sql "mod-black-market" "Black Market"
fi
}

scripts/manage-modules.sh Normal file → Executable file

@@ -1,10 +1,11 @@
#!/bin/bash
# ac-compose
set -e
echo 'Setting up git user'
git config --global user.name "${GIT_USERNAME:-ac-compose}"
git config --global user.email "${GIT_EMAIL:-noreply@azerothcore.org}"
# PAT not needed for public repositories
echo 'Initializing module management...'
cd /modules
@@ -538,18 +539,6 @@ if [ "$MODULE_LEVEL_GRANT" != "1" ]; then
rm -f /azerothcore/env/dist/etc/levelGrant.conf*
fi
if [ "$MODULE_ASSISTANT" != "1" ]; then
rm -f /azerothcore/env/dist/etc/mod_assistant.conf*
fi
if [ "$MODULE_REAGENT_BANK" != "1" ]; then
rm -f /azerothcore/env/dist/etc/mod_reagent_bank.conf*
fi
if [ "$MODULE_BLACK_MARKET_AUCTION_HOUSE" != "1" ]; then
rm -f /azerothcore/env/dist/etc/mod_black_market.conf*
fi
# Install configuration files for enabled modules
for module_dir in mod-*; do
if [ -d "$module_dir" ]; then
@@ -558,15 +547,21 @@ for module_dir in mod-*; do
fi
done
echo 'Configuration file management complete.'
# Load SQL runner if present
if [ -f "/scripts/manage-modules-sql.sh" ]; then
. /scripts/manage-modules-sql.sh
elif [ -f "/tmp/scripts/manage-modules-sql.sh" ]; then
. /tmp/scripts/manage-modules-sql.sh
else
echo "⚠️ SQL helper not found, skipping module SQL execution"
fi
# Execute SQLs for enabled modules (via helper)
if declare -f execute_module_sql_scripts >/dev/null 2>&1; then
echo 'Executing module SQL scripts...'
execute_module_sql_scripts
echo 'SQL execution complete.'
fi
# Module state tracking and rebuild logic
echo 'Checking for module changes that require rebuild...'
@@ -576,7 +571,7 @@ CURRENT_STATE=""
REBUILD_REQUIRED=0
# Create current module state hash
for module_var in MODULE_PLAYERBOTS MODULE_AOE_LOOT MODULE_LEARN_SPELLS MODULE_FIREWORKS MODULE_INDIVIDUAL_PROGRESSION MODULE_AHBOT MODULE_AUTOBALANCE MODULE_TRANSMOG MODULE_NPC_BUFFER MODULE_DYNAMIC_XP MODULE_SOLO_LFG MODULE_1V1_ARENA MODULE_PHASED_DUELS MODULE_BREAKING_NEWS MODULE_BOSS_ANNOUNCER MODULE_ACCOUNT_ACHIEVEMENTS MODULE_AUTO_REVIVE MODULE_GAIN_HONOR_GUARD MODULE_ELUNA MODULE_TIME_IS_TIME MODULE_POCKET_PORTAL MODULE_RANDOM_ENCHANTS MODULE_SOLOCRAFT MODULE_PVP_TITLES MODULE_NPC_BEASTMASTER MODULE_NPC_ENCHANTER MODULE_INSTANCE_RESET MODULE_LEVEL_GRANT MODULE_ARAC MODULE_ASSISTANT MODULE_REAGENT_BANK MODULE_BLACK_MARKET_AUCTION_HOUSE; do
eval "value=\$$module_var"
CURRENT_STATE="$CURRENT_STATE$module_var=$value|"
done
@@ -598,9 +593,9 @@ fi
# Save current state
echo "$CURRENT_STATE" > "$MODULES_STATE_FILE"
# Check if any C++ modules are enabled (modules requiring source compilation)
# NOTE: mod-playerbots uses pre-built images and doesn't require rebuild
ENABLED_MODULES=""
[ "$MODULE_PLAYERBOTS" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-playerbots"
[ "$MODULE_AOE_LOOT" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-aoe-loot"
[ "$MODULE_LEARN_SPELLS" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-learn-spells"
[ "$MODULE_FIREWORKS" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-fireworks-on-level"
@@ -619,7 +614,6 @@ ENABLED_MODULES=""
[ "$MODULE_AUTO_REVIVE" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-auto-revive"
[ "$MODULE_GAIN_HONOR_GUARD" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-gain-honor-guard"
[ "$MODULE_ELUNA" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-eluna"
[ "$MODULE_TIME_IS_TIME" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-time-is-time"
[ "$MODULE_POCKET_PORTAL" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-pocket-portal"
[ "$MODULE_RANDOM_ENCHANTS" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-random-enchants"
@@ -629,9 +623,6 @@ ENABLED_MODULES=""
[ "$MODULE_NPC_ENCHANTER" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-npc-enchanter"
[ "$MODULE_INSTANCE_RESET" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-instance-reset"
[ "$MODULE_LEVEL_GRANT" = "1" ] && ENABLED_MODULES="$ENABLED_MODULES mod-quest-count-level"
if [ -n "$ENABLED_MODULES" ]; then
ENABLED_COUNT=$(echo $ENABLED_MODULES | wc -w)
@@ -646,14 +637,10 @@ if [ -n "$ENABLED_MODULES" ]; then
echo "Module configuration has changed. To integrate C++ modules into AzerothCore:"
echo ""
echo "1. Stop current services:"
echo " docker compose down"
echo ""
echo "2. Build with source-based compilation (external process)"
echo " ./scripts/rebuild-with-modules.sh (if available)"
echo ""
echo "📋 NOTE: Source-based build will compile AzerothCore with all enabled modules"
echo "⏱️ Expected build time: 15-45 minutes depending on system performance"
@@ -665,21 +652,14 @@ fi
echo 'Module management complete.'
REBUILD_SENTINEL="/modules/.requires_rebuild"
if [ "$REBUILD_REQUIRED" = "1" ] && [ -n "$ENABLED_MODULES" ]; then
echo "$ENABLED_MODULES" > "$REBUILD_SENTINEL"
else
rm -f "$REBUILD_SENTINEL" 2>/dev/null || true
fi
# Optional: keep container alive for inspection in CI/debug contexts
if [ "${MODULES_DEBUG_KEEPALIVE:-0}" = "1" ]; then
tail -f /dev/null
fi
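The rebuild detection above boils down to: serialize the flag set into one string, compare it with the previously saved string, and flag a rebuild on any difference. A standalone sketch under those assumptions (`build_state` and the flag names here are illustrative, not from the script):

```shell
# Serialize a set of MODULE_* flags into a single "NAME=value|" state string
# (sketch of the change-detection logic above; build_state is an illustrative name).
build_state() {
  local state="" var value
  for var in "$@"; do
    eval "value=\$$var"
    state="${state}${var}=${value}|"
  done
  printf '%s' "$state"
}

STATE_FILE="$(mktemp)"
MODULE_AOE_LOOT=1 MODULE_TRANSMOG=0
build_state MODULE_AOE_LOOT MODULE_TRANSMOG > "$STATE_FILE"

# Flip a flag and compare against the saved state.
MODULE_TRANSMOG=1
if [ "$(build_state MODULE_AOE_LOOT MODULE_TRANSMOG)" != "$(cat "$STATE_FILE")" ]; then
  echo "rebuild required"
fi
rm -f "$STATE_FILE"
```

A hash of the string (e.g. via `sha256sum`) would work equally well; the plain string has the advantage of being human-readable when debugging the saved state file.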


@@ -1,77 +0,0 @@
#!/bin/bash
set -e
echo "🔧 Starting MySQL with NFS-compatible setup and auto-restore..."
mkdir -p /var/lib/mysql-runtime
chown -R mysql:mysql /var/lib/mysql-runtime
chmod 755 /var/lib/mysql-runtime
# Check if MySQL data directory is empty (fresh start)
if [ ! -d "/var/lib/mysql-runtime/mysql" ]; then
echo "🆕 Fresh MySQL installation detected..."
# Check for available backups (prefer daily, fallback to hourly, then legacy)
if [ -d "/backups" ] && [ "$(ls -A /backups)" ]; then
# Try daily backups first
if [ -d "/backups/daily" ] && [ "$(ls -A /backups/daily)" ]; then
LATEST_BACKUP=$(ls -1t /backups/daily | head -n 1)
if [ -n "$LATEST_BACKUP" ] && [ -d "/backups/daily/$LATEST_BACKUP" ]; then
echo "📦 Latest daily backup found: $LATEST_BACKUP"
echo "🔄 Will restore after MySQL initializes..."
export RESTORE_BACKUP="/backups/daily/$LATEST_BACKUP"
fi
# Try hourly backups second
elif [ -d "/backups/hourly" ] && [ "$(ls -A /backups/hourly)" ]; then
LATEST_BACKUP=$(ls -1t /backups/hourly | head -n 1)
if [ -n "$LATEST_BACKUP" ] && [ -d "/backups/hourly/$LATEST_BACKUP" ]; then
echo "📦 Latest hourly backup found: $LATEST_BACKUP"
echo "🔄 Will restore after MySQL initializes..."
export RESTORE_BACKUP="/backups/hourly/$LATEST_BACKUP"
fi
# Try legacy backup structure last
else
LATEST_BACKUP=$(ls -1t /backups | head -n 1)
if [ -n "$LATEST_BACKUP" ] && [ -d "/backups/$LATEST_BACKUP" ]; then
echo "📦 Latest legacy backup found: $LATEST_BACKUP"
echo "🔄 Will restore after MySQL initializes..."
export RESTORE_BACKUP="/backups/$LATEST_BACKUP"
else
echo "🆕 No valid backups found, will initialize fresh..."
fi
fi
else
echo "🆕 No backup directory found, will initialize fresh..."
fi
else
echo "📁 Existing MySQL data found, skipping restore..."
fi
echo "🚀 Starting MySQL server with custom datadir..."
# Set defaults for any missing environment variables
MYSQL_CHARACTER_SET=${MYSQL_CHARACTER_SET:-utf8mb4}
MYSQL_COLLATION=${MYSQL_COLLATION:-utf8mb4_unicode_ci}
MYSQL_MAX_CONNECTIONS=${MYSQL_MAX_CONNECTIONS:-1000}
MYSQL_INNODB_BUFFER_POOL_SIZE=${MYSQL_INNODB_BUFFER_POOL_SIZE:-256M}
MYSQL_INNODB_LOG_FILE_SIZE=${MYSQL_INNODB_LOG_FILE_SIZE:-64M}
echo "📊 MySQL Configuration:"
echo " Character Set: $MYSQL_CHARACTER_SET"
echo " Collation: $MYSQL_COLLATION"
echo " Max Connections: $MYSQL_MAX_CONNECTIONS"
echo " Buffer Pool Size: $MYSQL_INNODB_BUFFER_POOL_SIZE"
echo " Log File Size: $MYSQL_INNODB_LOG_FILE_SIZE"
# For now, skip restore and just start MySQL normally
# The restore functionality can be added back later once the basic stack is working
echo "🚀 Starting MySQL without restore for initial deployment..."
# Normal startup without restore
exec docker-entrypoint.sh mysqld \
--datadir=/var/lib/mysql-runtime \
--default-authentication-plugin=mysql_native_password \
--character-set-server=$MYSQL_CHARACTER_SET \
--collation-server=$MYSQL_COLLATION \
--max_connections=$MYSQL_MAX_CONNECTIONS \
--innodb-buffer-pool-size=$MYSQL_INNODB_BUFFER_POOL_SIZE \
--innodb-log-file-size=$MYSQL_INNODB_LOG_FILE_SIZE


@@ -1,102 +0,0 @@
#!/bin/bash
# AzerothCore Post-Installation Setup Script
# Configures fresh authserver and worldserver installations for production
set -e
echo "🚀 AzerothCore Post-Installation Setup"
echo "====================================="
echo ""
# Load environment variables from env file if it exists
if [ -f "docker-compose-azerothcore-services.env" ]; then
echo "📂 Loading environment from docker-compose-azerothcore-services.env"
set -a # automatically export all variables
source docker-compose-azerothcore-services.env
set +a # turn off automatic export
echo ""
fi
# Configuration variables from environment
MYSQL_HOST="${MYSQL_HOST:-ac-mysql}"
MYSQL_PORT="${MYSQL_PORT:-3306}"
MYSQL_USER="${MYSQL_USER:-root}"
MYSQL_ROOT_PASSWORD="${MYSQL_ROOT_PASSWORD:-azerothcore123}"
DB_AUTH_NAME="${DB_AUTH_NAME:-acore_auth}"
DB_WORLD_NAME="${DB_WORLD_NAME:-acore_world}"
DB_CHARACTERS_NAME="${DB_CHARACTERS_NAME:-acore_characters}"
STORAGE_PATH="${STORAGE_PATH:-./storage/azerothcore}"
SERVER_ADDRESS="${SERVER_ADDRESS:-127.0.0.1}"
SERVER_PORT="${REALM_PORT:-8085}"
echo "📋 Configuration Summary:"
echo " Database: ${MYSQL_HOST}:${MYSQL_PORT}"
echo " Auth DB: ${DB_AUTH_NAME}"
echo " World DB: ${DB_WORLD_NAME}"
echo " Characters DB: ${DB_CHARACTERS_NAME}"
echo " Storage: ${STORAGE_PATH}"
echo " Server: ${SERVER_ADDRESS}:${SERVER_PORT}"
echo ""
# Step 1: Update configuration files
echo "🔧 Step 1: Updating configuration files..."
if [ ! -x "./scripts/update-config.sh" ]; then
echo "❌ Error: update-config.sh script not found or not executable"
exit 1
fi
echo "password" | sudo -S STORAGE_PATH="${STORAGE_PATH}" ./scripts/update-config.sh
if [ $? -eq 0 ]; then
echo "✅ Configuration files updated successfully"
else
echo "❌ Failed to update configuration files"
exit 1
fi
echo ""
# Step 2: Update realmlist table
echo "🌐 Step 2: Updating realmlist table..."
if [ ! -x "./scripts/update-realmlist.sh" ]; then
echo "❌ Error: update-realmlist.sh script not found or not executable"
exit 1
fi
./scripts/update-realmlist.sh
if [ $? -eq 0 ]; then
echo "✅ Realmlist table updated successfully"
else
echo "❌ Failed to update realmlist table"
exit 1
fi
echo ""
# Step 3: Restart services to apply changes
echo "🔄 Step 3: Restarting services to apply changes..."
docker compose -f docker-compose-azerothcore-services.yml restart ac-authserver ac-worldserver
if [ $? -eq 0 ]; then
echo "✅ Services restarted successfully"
else
echo "❌ Failed to restart services"
exit 1
fi
echo ""
echo "🎉 Post-installation setup completed successfully!"
echo ""
echo "📋 Summary of changes:"
echo " ✅ AuthServer configured with production database settings"
echo " ✅ WorldServer configured with production database settings"
echo " ✅ Realmlist updated with server address: ${SERVER_ADDRESS}:${SERVER_PORT}"
echo " ✅ Services restarted to apply changes"
echo ""
echo "🎮 Your AzerothCore server is now ready for production!"
echo " Players can connect to: ${SERVER_ADDRESS}:${SERVER_PORT}"
echo ""
echo "💡 Next steps:"
echo " 1. Create admin accounts using the worldserver console"
echo " 2. Test client connectivity"
echo " 3. Configure any additional modules as needed"


@@ -1,128 +1,195 @@
#!/bin/bash
# ac-compose helper to rebuild AzerothCore from source with enabled modules.
set -e
echo "🔧 AzerothCore Module Rebuild Script"
echo "==================================="
echo ""
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"
ENV_FILE="$PROJECT_DIR/.env"
usage(){
cat <<EOF
Usage: $(basename "$0") [options]
Options:
--yes, -y Skip interactive confirmation prompts
--source PATH Override MODULES_REBUILD_SOURCE_PATH from .env
--skip-stop Do not run 'docker compose down' in the source tree before rebuilding
-h, --help Show this help
EOF
}
read_env(){
local key="$1" default="$2" env_path="$ENV_FILE" value
if [ -f "$env_path" ]; then
value="$(grep -E "^${key}=" "$env_path" | tail -n1 | cut -d'=' -f2- | tr -d '\r')"
fi
if [ -z "$value" ]; then
value="$default"
fi
echo "$value"
}
confirm(){
local prompt="$1" default="$2" reply
if [ "$ASSUME_YES" = "1" ]; then
return 0
fi
while true; do
if [ "$default" = "y" ]; then
read -r -p "$prompt [Y/n]: " reply
reply="${reply:-y}"
else
read -r -p "$prompt [y/N]: " reply
reply="${reply:-n}"
fi
case "$reply" in
[Yy]*) return 0 ;;
[Nn]*) return 1 ;;
esac
done
}
ASSUME_YES=0
SOURCE_OVERRIDE=""
SKIP_STOP=0
while [[ $# -gt 0 ]]; do
case "$1" in
--yes|-y) ASSUME_YES=1; shift;;
--source) SOURCE_OVERRIDE="$2"; shift 2;;
--skip-stop) SKIP_STOP=1; shift;;
-h|--help) usage; exit 0;;
*) echo "Unknown option: $1" >&2; usage; exit 1;;
esac
done
if ! command -v docker >/dev/null 2>&1; then
echo "❌ Docker CLI not found in PATH."
exit 1
fi
STORAGE_PATH="$(read_env STORAGE_PATH "./storage")"
if [[ "$STORAGE_PATH" != /* ]]; then
STORAGE_PATH="$PROJECT_DIR/$STORAGE_PATH"
fi
MODULES_DIR="$STORAGE_PATH/modules"
SENTINEL_FILE="$MODULES_DIR/.requires_rebuild"
REBUILD_SOURCE_PATH="$SOURCE_OVERRIDE"
if [ -z "$REBUILD_SOURCE_PATH" ]; then
REBUILD_SOURCE_PATH="$(read_env MODULES_REBUILD_SOURCE_PATH "")"
fi
if [ -z "$REBUILD_SOURCE_PATH" ]; then
cat <<EOF
❌ MODULES_REBUILD_SOURCE_PATH is not configured.
Set MODULES_REBUILD_SOURCE_PATH in .env to the AzerothCore source repository
that contains the Docker Compose file used for source builds, then rerun:
scripts/rebuild-with-modules.sh --yes
EOF
exit 1
fi
if [[ "$REBUILD_SOURCE_PATH" != /* ]]; then
REBUILD_SOURCE_PATH="$(realpath "$REBUILD_SOURCE_PATH" 2>/dev/null || echo "$REBUILD_SOURCE_PATH")"
fi
SOURCE_COMPOSE="$REBUILD_SOURCE_PATH/docker-compose.yml"
if [ ! -f "$SOURCE_COMPOSE" ]; then
echo "Source docker-compose.yml not found at $SOURCE_COMPOSE"
exit 1
fi
# Check current module configuration
echo "📋 Checking current module configuration..."
declare -A MODULE_REPO_MAP=(
[MODULE_AOE_LOOT]=mod-aoe-loot
[MODULE_LEARN_SPELLS]=mod-learn-spells
[MODULE_FIREWORKS]=mod-fireworks-on-level
[MODULE_INDIVIDUAL_PROGRESSION]=mod-individual-progression
[MODULE_AHBOT]=mod-ahbot
[MODULE_AUTOBALANCE]=mod-autobalance
[MODULE_TRANSMOG]=mod-transmog
[MODULE_NPC_BUFFER]=mod-npc-buffer
[MODULE_DYNAMIC_XP]=mod-dynamic-xp
[MODULE_SOLO_LFG]=mod-solo-lfg
[MODULE_1V1_ARENA]=mod-1v1-arena
[MODULE_PHASED_DUELS]=mod-phased-duels
[MODULE_BREAKING_NEWS]=mod-breaking-news-override
[MODULE_BOSS_ANNOUNCER]=mod-boss-announcer
[MODULE_ACCOUNT_ACHIEVEMENTS]=mod-account-achievements
[MODULE_AUTO_REVIVE]=mod-auto-revive
[MODULE_GAIN_HONOR_GUARD]=mod-gain-honor-guard
[MODULE_ELUNA]=mod-eluna
[MODULE_TIME_IS_TIME]=mod-TimeIsTime
[MODULE_POCKET_PORTAL]=mod-pocket-portal
[MODULE_RANDOM_ENCHANTS]=mod-random-enchants
[MODULE_SOLOCRAFT]=mod-solocraft
[MODULE_PVP_TITLES]=mod-pvp-titles
[MODULE_NPC_BEASTMASTER]=mod-npc-beastmaster
[MODULE_NPC_ENCHANTER]=mod-npc-enchanter
[MODULE_INSTANCE_RESET]=mod-instance-reset
[MODULE_LEVEL_GRANT]=mod-quest-count-level
)
MODULES_ENABLED=0
ENABLED_MODULES=""
compile_modules=()
for key in "${!MODULE_REPO_MAP[@]}"; do
if [ "$(read_env "$key" "0")" = "1" ]; then
compile_modules+=("${MODULE_REPO_MAP[$key]}")
fi
done
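`read_env` is defined earlier in the script; for reference, a minimal stand-in with the same contract (look up a key in `.env`, fall back to a supplied default) might look like this — a sketch, not the script's actual implementation:

```shell
# Hypothetical minimal read_env: the last assignment of KEY in .env wins,
# otherwise the supplied default is echoed.
read_env() {
local key=$1 default=$2 line
line=$(grep -E "^${key}=" .env 2>/dev/null | tail -n 1)
if [ -n "$line" ]; then
echo "${line#*=}"
else
echo "$default"
fi
}

printf 'MODULE_ELUNA=1\nMODULE_TRANSMOG=0\n' > .env
read_env MODULE_ELUNA 0   # prints 1
read_env MODULE_AHBOT 0   # prints 0 (not set, default used)
```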
if [ ${#compile_modules[@]} -eq 0 ]; then
echo "✅ No C++ modules enabled that require a source rebuild."
echo "You can keep using the pre-built containers for better performance."
rm -f "$SENTINEL_FILE" 2>/dev/null || true
exit 0
fi
echo "🔧 Modules requiring compilation:"
for mod in "${compile_modules[@]}"; do
echo "  • $mod"
done
if [ ! -d "$MODULES_DIR" ]; then
echo "⚠️ Modules directory not found at $MODULES_DIR"
fi
if ! confirm "Proceed with source rebuild in $REBUILD_SOURCE_PATH? (15-45 minutes)" n; then
echo "❌ Rebuild cancelled"
exit 1
fi
pushd "$REBUILD_SOURCE_PATH" >/dev/null
if [ "$SKIP_STOP" != "1" ]; then
echo "🛑 Stopping existing source services (if any)..."
docker compose down || true
fi
if [ -d "$MODULES_DIR" ]; then
echo "🔄 Syncing enabled modules into source tree..."
mkdir -p modules
if command -v rsync >/dev/null 2>&1; then
rsync -a --delete "$MODULES_DIR"/ modules/
else
rm -rf modules/*
cp -R "$MODULES_DIR"/. modules/
fi
else
echo "⚠️ No modules directory found at $MODULES_DIR; continuing without sync."
fi
echo ""
echo "🔧 Starting source-based compilation..."
echo "⏱️ This will take 15-45 minutes depending on your system..."
echo ""
if ! docker compose build --no-cache; then
echo "❌ Build failed"
echo ""
echo "🔍 Check build logs for errors:"
echo "   docker compose logs"
popd >/dev/null
exit 1
fi
echo ""
echo "✅ Build completed successfully!"
echo ""
echo "🟢 Starting services with compiled modules..."
if ! docker compose up -d; then
echo "❌ Failed to start services"
popd >/dev/null
exit 1
fi
echo ""
echo "📊 Service status:"
docker compose ps
echo ""
echo "📝 To monitor logs:"
echo "   docker compose logs -f"
echo ""
echo "🌐 Server should be available on configured ports once fully started."
popd >/dev/null
rm -f "$SENTINEL_FILE" 2>/dev/null || true
echo ""
echo "✅ Rebuild process complete!"
echo "🎉 SUCCESS! AzerothCore is now running with compiled modules."
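The `.requires_rebuild` sentinel used above is a simple file-based handshake: a module-management step touches the file, and the rebuild script removes it once a build succeeds. A standalone sketch of the pattern (paths here are illustrative, not the ones the script uses):

```shell
# File-based rebuild handshake (illustrative paths)
MODULES_DIR=/tmp/demo-modules
SENTINEL_FILE="$MODULES_DIR/.requires_rebuild"
mkdir -p "$MODULES_DIR"
touch "$SENTINEL_FILE"            # module installer flags a rebuild
if [ -f "$SENTINEL_FILE" ]; then
echo "rebuild required"
# ... the source build would run here ...
rm -f "$SENTINEL_FILE"            # rebuild script clears the flag
fi
```

Because the flag lives on shared storage, the deploy script can check it on the next run and refuse to start pre-built containers while a rebuild is pending.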


@@ -1,55 +0,0 @@
#!/bin/bash
set -e
MYSQL_HOST=${MYSQL_HOST:-ac-mysql}
MYSQL_PORT=${MYSQL_PORT:-3306}
MYSQL_USER=${MYSQL_USER:-root}
MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
BACKUP_DIR="/backups"
if [ -z "$1" ]; then
echo "Usage: restore.sh <backup_timestamp>"
echo "Available backups:"
ls -la "$BACKUP_DIR"/ | grep "^d" | grep "[0-9]"
exit 1
fi
TIMESTAMP=$1
BACKUP_SUBDIR="$BACKUP_DIR/$TIMESTAMP"
if [ ! -d "$BACKUP_SUBDIR" ]; then
echo "❌ Backup not found: $BACKUP_SUBDIR"
exit 1
fi
echo "⚠️ WARNING: This will overwrite existing databases!"
echo "Restoring from backup: $TIMESTAMP"
# List databases that will be restored
echo "Databases that will be restored:"
for backup_file in "$BACKUP_SUBDIR"/*.sql.gz; do
if [ -f "$backup_file" ]; then
db_name=$(basename "$backup_file" .sql.gz)
echo " - $db_name"
fi
done
echo "Press Ctrl+C within 10 seconds to cancel..."
sleep 10
# Restore databases
for backup_file in "$BACKUP_SUBDIR"/*.sql.gz; do
if [ -f "$backup_file" ]; then
db_name=$(basename "$backup_file" .sql.gz)
echo "Restoring database: $db_name"
# Restore with error handling for optional databases
if zcat "$backup_file" | mysql -h"$MYSQL_HOST" -P"$MYSQL_PORT" -u"$MYSQL_USER" -p"$MYSQL_PASSWORD"; then
echo "✅ Successfully restored $db_name"
else
echo "⚠️ Warning: Failed to restore $db_name (this may be normal for optional databases)"
fi
fi
done
echo "✅ Database restore completed"
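The restore loop can be exercised without a live MySQL server by swapping `mysql` for `cat`; a dry-run sketch with a made-up timestamp and path:

```shell
# Dry-run of the restore pipeline: mysql is replaced by cat so the loop
# can be tested without a database (timestamp/path are made up).
BACKUP_SUBDIR=/tmp/demo-backups/20250101_020000
mkdir -p "$BACKUP_SUBDIR"
printf 'CREATE DATABASE IF NOT EXISTS acore_auth;\n' | gzip > "$BACKUP_SUBDIR/acore_auth.sql.gz"

for backup_file in "$BACKUP_SUBDIR"/*.sql.gz; do
db_name=$(basename "$backup_file" .sql.gz)
echo "Restoring database: $db_name"
zcat "$backup_file" | cat   # real script: ... | mysql -h"$MYSQL_HOST" ...
done
```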


@@ -1,453 +0,0 @@
#!/bin/bash
# ==============================================
# AzerothCore Eluna Lua Scripting Setup
# ==============================================
# Sets up Lua scripting environment for mod-eluna
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
local status=$1
local message=$2
case $status in
"INFO")
echo -e "${BLUE} ${message}${NC}"
;;
"SUCCESS")
echo -e "${GREEN}${message}${NC}"
;;
"WARNING")
echo -e "${YELLOW}⚠️ ${message}${NC}"
;;
"ERROR")
echo -e "${RED}${message}${NC}"
;;
"HEADER")
echo -e "\n${MAGENTA}=== ${message} ===${NC}"
;;
esac
}
# Load environment variables
if [ -f "docker-compose-azerothcore-services.env" ]; then
source docker-compose-azerothcore-services.env
else
print_status "ERROR" "Environment file not found. Run from acore-compose directory."
exit 1
fi
print_status "HEADER" "AZEROTHCORE ELUNA LUA SCRIPTING SETUP"
# Check if Eluna is enabled
if [ "$MODULE_ELUNA" != "1" ]; then
print_status "ERROR" "MODULE_ELUNA is not enabled. Set MODULE_ELUNA=1 in environment file."
exit 1
fi
print_status "SUCCESS" "mod-eluna is enabled"
# Create lua_scripts directory
LUA_SCRIPTS_DIR="${STORAGE_PATH}/lua_scripts"
print_status "INFO" "Creating Lua scripts directory: $LUA_SCRIPTS_DIR"
if [ ! -d "$LUA_SCRIPTS_DIR" ]; then
mkdir -p "$LUA_SCRIPTS_DIR"
print_status "SUCCESS" "Created lua_scripts directory"
else
print_status "INFO" "lua_scripts directory already exists"
fi
# Create typescript directory for ac-eluna container
TYPESCRIPT_DIR="${STORAGE_PATH}/typescript"
print_status "INFO" "Creating TypeScript scripts directory: $TYPESCRIPT_DIR"
if [ ! -d "$TYPESCRIPT_DIR" ]; then
mkdir -p "$TYPESCRIPT_DIR"
print_status "SUCCESS" "Created typescript directory"
else
print_status "INFO" "typescript directory already exists"
fi
# Create example scripts
print_status "HEADER" "CREATING EXAMPLE LUA SCRIPTS"
# Welcome script
cat > "$LUA_SCRIPTS_DIR/welcome.lua" << 'EOF'
-- ==============================================
-- Welcome Script for AzerothCore mod-eluna
-- ==============================================
-- Sends welcome message to players on login
local PLAYER_EVENT_ON_LOGIN = 3
local function OnPlayerLogin(event, player)
local playerName = player:GetName()
local accountId = player:GetAccountId()
-- Send welcome message
player:SendBroadcastMessage("|cff00ff00Welcome to the AzerothCore server, " .. playerName .. "!|r")
player:SendBroadcastMessage("|cffffff00🎮 This server features custom modules and Lua scripting!|r")
-- Log the login
print("Player " .. playerName .. " (Account: " .. accountId .. ") has logged in")
end
-- Register the event
RegisterPlayerEvent(PLAYER_EVENT_ON_LOGIN, OnPlayerLogin)
print("✅ Welcome script loaded successfully")
EOF
print_status "SUCCESS" "Created example welcome.lua script"
# Server info script
cat > "$LUA_SCRIPTS_DIR/server_info.lua" << 'EOF'
-- ==============================================
-- Server Info Commands for AzerothCore mod-eluna
-- ==============================================
-- Provides custom server information commands
local function ServerInfoCommand(event, player, command)
if command == "info" or command == "serverinfo" then
player:SendBroadcastMessage("|cff00ffffServer Information:|r")
player:SendBroadcastMessage("• Core: AzerothCore with mod-eluna")
player:SendBroadcastMessage("• Lua Scripting: Enabled")
player:SendBroadcastMessage("• Active Modules: 13 gameplay enhancing modules")
player:SendBroadcastMessage("• Features: Playerbots, Transmog, Solo LFG, and more!")
return false -- Command handled
end
return true -- Command not handled, continue processing
end
-- Register the command handler
local PLAYER_EVENT_ON_COMMAND = 42
RegisterPlayerEvent(PLAYER_EVENT_ON_COMMAND, ServerInfoCommand)
print("✅ Server info commands loaded successfully")
print(" Usage: .info or .serverinfo")
EOF
print_status "SUCCESS" "Created example server_info.lua script"
# Level reward script
cat > "$LUA_SCRIPTS_DIR/level_rewards.lua" << 'EOF'
-- ==============================================
-- Level Reward Script for AzerothCore mod-eluna
-- ==============================================
-- Gives rewards to players when they level up
local PLAYER_EVENT_ON_LEVEL_CHANGE = 13
local function OnPlayerLevelUp(event, player, oldLevel)
local newLevel = player:GetLevel()
local playerName = player:GetName()
-- Skip if level decreased (rare edge case)
if newLevel <= oldLevel then
return
end
-- Congratulate the player
player:SendBroadcastMessage("|cffff6600Congratulations on reaching level " .. newLevel .. "!|r")
-- Give rewards for milestone levels
local milestoneRewards = {
[10] = {item = 6948, count = 1, message = "Hearthstone for your travels!"},
[20] = {gold = 100, message = "1 gold to help with expenses!"},
[30] = {gold = 500, message = "5 gold for your dedication!"},
[40] = {gold = 1000, message = "10 gold for reaching level 40!"},
[50] = {gold = 2000, message = "20 gold for reaching level 50!"},
[60] = {gold = 5000, message = "50 gold for reaching the original level cap!"},
[70] = {gold = 10000, message = "100 gold for reaching the Burning Crusade cap!"},
[80] = {gold = 20000, message = "200 gold for reaching max level!"}
}
local reward = milestoneRewards[newLevel]
if reward then
if reward.item then
player:AddItem(reward.item, reward.count or 1)
end
if reward.gold then
player:ModifyMoney(reward.gold * 100) -- Values are in silver; convert to copper (1g = 100s = 10000c)
end
player:SendBroadcastMessage("|cffff0000Milestone Reward: " .. reward.message .. "|r")
-- Announce to server for major milestones
if newLevel >= 60 then
SendWorldMessage("|cffff6600" .. playerName .. " has reached level " .. newLevel .. "! Congratulations!|r")
end
end
print("Player " .. playerName .. " leveled from " .. oldLevel .. " to " .. newLevel)
end
-- Register the event
RegisterPlayerEvent(PLAYER_EVENT_ON_LEVEL_CHANGE, OnPlayerLevelUp)
print("✅ Level rewards script loaded successfully")
EOF
print_status "SUCCESS" "Created example level_rewards.lua script"
# Create a main loader script
cat > "$LUA_SCRIPTS_DIR/init.lua" << 'EOF'
-- ==============================================
-- Main Loader Script for AzerothCore mod-eluna
-- ==============================================
-- This script loads all other Lua scripts
print("🚀 Loading AzerothCore Lua Scripts...")
-- Load all scripts in this directory
-- Note: Individual scripts are loaded automatically by mod-eluna
-- This file serves as documentation for loaded scripts
local loadedScripts = {
"welcome.lua - Player welcome messages on login",
"server_info.lua - Custom server information commands",
"level_rewards.lua - Milestone rewards for leveling"
}
print("📜 Available Lua Scripts:")
for i, script in ipairs(loadedScripts) do
print(" " .. i .. ". " .. script)
end
print("✅ Lua script initialization complete")
print("🔧 To reload scripts: .reload eluna")
EOF
print_status "SUCCESS" "Created init.lua loader script"
# Create TypeScript example
print_status "HEADER" "CREATING TYPESCRIPT EXAMPLE"
cat > "$TYPESCRIPT_DIR/index.ts" << 'EOF'
// ==============================================
// TypeScript Example for AzerothCore Eluna-TS
// ==============================================
// This TypeScript file will be compiled to Lua by ac-eluna container
// Event constants
const PLAYER_EVENT_ON_LOGIN = 3;
const PLAYER_EVENT_ON_LEVEL_CHANGE = 13;
// Welcome message for players
function OnPlayerLogin(event: number, player: Player): void {
const playerName = player.GetName();
const playerLevel = player.GetLevel();
player.SendBroadcastMessage(
`|cff00ff00Welcome ${playerName}! You are level ${playerLevel}.|r`
);
player.SendBroadcastMessage(
"|cffffff00🚀 This server supports TypeScript scripting via Eluna-TS!|r"
);
print(`TypeScript: Player ${playerName} logged in at level ${playerLevel}`);
}
// Level up rewards
function OnPlayerLevelUp(event: number, player: Player, oldLevel: number): void {
const newLevel = player.GetLevel();
const playerName = player.GetName();
if (newLevel <= oldLevel) {
return;
}
player.SendBroadcastMessage(
`|cffff6600Congratulations on reaching level ${newLevel}!|r`
);
// Milestone rewards
const rewards: { [key: number]: { gold?: number; message: string } } = {
10: { gold: 100, message: "1 gold for reaching level 10!" },
20: { gold: 500, message: "5 gold for reaching level 20!" },
30: { gold: 1000, message: "10 gold for reaching level 30!" },
40: { gold: 2000, message: "20 gold for reaching level 40!" },
50: { gold: 5000, message: "50 gold for reaching level 50!" },
60: { gold: 10000, message: "100 gold for reaching the original cap!" },
70: { gold: 20000, message: "200 gold for reaching TBC cap!" },
80: { gold: 50000, message: "500 gold for reaching max level!" }
};
const reward = rewards[newLevel];
if (reward) {
if (reward.gold) {
player.ModifyMoney(reward.gold * 100); // Values are in silver; convert to copper
}
player.SendBroadcastMessage(`|cffff0000${reward.message}|r`);
if (newLevel >= 60) {
SendWorldMessage(
`|cffff6600${playerName} has reached level ${newLevel}! Congratulations!|r`
);
}
}
print(`TypeScript: Player ${playerName} leveled from ${oldLevel} to ${newLevel}`);
}
// Register events
RegisterPlayerEvent(PLAYER_EVENT_ON_LOGIN, OnPlayerLogin);
RegisterPlayerEvent(PLAYER_EVENT_ON_LEVEL_CHANGE, OnPlayerLevelUp);
print("✅ TypeScript scripts loaded and will be compiled to Lua by ac-eluna");
EOF
print_status "SUCCESS" "Created TypeScript example: index.ts"
# Create Eluna configuration documentation
cat > "$LUA_SCRIPTS_DIR/README.md" << 'EOF'
# AzerothCore Eluna Lua Scripts
This directory contains Lua scripts for the AzerothCore mod-eluna engine.
## Available Scripts
### welcome.lua
- Sends welcome messages to players on login
- Logs player login events
- Demonstrates basic player event handling
### server_info.lua
- Provides `.info` and `.serverinfo` commands
- Shows server configuration and features
- Demonstrates custom command registration
### level_rewards.lua
- Gives rewards to players at milestone levels (10, 20, 30, etc.)
- Announces major level achievements to the server
- Demonstrates player level change events and item/gold rewards
### init.lua
- Documentation script listing all available scripts
- Serves as a reference for loaded functionality
## Script Management
### Reloading Scripts
```
.reload eluna
```
### Adding New Scripts
1. Create `.lua` file in this directory
2. Use RegisterPlayerEvent, RegisterCreatureEvent, etc. to register events
3. Reload scripts with `.reload eluna` command
### Configuration
Eluna configuration is managed in `/azerothcore/config/mod_LuaEngine.conf`:
- Script path: `lua_scripts` (this directory)
- Auto-reload: Disabled by default (enable for development)
- Bytecode cache: Enabled for performance
## Event Types
Common event types for script development:
- `PLAYER_EVENT_ON_LOGIN = 3`
- `PLAYER_EVENT_ON_LOGOUT = 4`
- `PLAYER_EVENT_ON_LEVEL_CHANGE = 13`
- `PLAYER_EVENT_ON_COMMAND = 42`
- `CREATURE_EVENT_ON_SPAWN = 5`
- `SPELL_EVENT_ON_CAST = 1`
## API Reference
### Player Methods
- `player:GetName()` - Get player name
- `player:GetLevel()` - Get player level
- `player:SendBroadcastMessage(msg)` - Send message to player
- `player:AddItem(itemId, count)` - Add item to player
- `player:ModifyMoney(copper)` - Add/remove money (in copper)
### Global Functions
- `print(message)` - Log to server console
- `SendWorldMessage(message)` - Send message to all players
- `RegisterPlayerEvent(eventId, function)` - Register player event handler
## Development Tips
1. **Test in Development**: Enable auto-reload during development
2. **Error Handling**: Use pcall() for error-safe script execution
3. **Performance**: Avoid heavy operations in frequently called events
4. **Debugging**: Use print() statements for debugging output
## Compatibility Notes
- **AzerothCore Specific**: These scripts are for AzerothCore's mod-eluna
- **Not Compatible**: Standard Eluna scripts will NOT work
- **API Differences**: AzerothCore mod-eluna has different API than standard Eluna
EOF
print_status "SUCCESS" "Created comprehensive README.md documentation"
# Check if volume mount exists in docker-compose
print_status "HEADER" "CHECKING DOCKER COMPOSE CONFIGURATION"
if grep -q "lua_scripts" docker-compose-azerothcore-services.yml; then
print_status "SUCCESS" "lua_scripts volume mount already configured"
else
print_status "WARNING" "lua_scripts volume mount not found in docker-compose-azerothcore-services.yml"
print_status "INFO" "You may need to add volume mount to worldserver service:"
echo " volumes:"
echo " - \${STORAGE_PATH}/lua_scripts:/azerothcore/lua_scripts"
fi
# Check if Eluna container is configured
if grep -q "ac-eluna:" docker-compose-azerothcore-services.yml; then
print_status "SUCCESS" "Eluna container (ac-eluna) is configured"
else
print_status "INFO" "No separate Eluna container found (using embedded mod-eluna)"
fi
# Check for Black Market integration
if [ "$MODULE_BLACK_MARKET_AUCTION_HOUSE" = "1" ]; then
print_status "INFO" "Black Market Auction House module enabled - requires Eluna integration"
if [ -f "$LUA_SCRIPTS_DIR/bmah_server.lua" ]; then
print_status "SUCCESS" "Black Market Lua script found in lua_scripts directory"
else
print_status "WARNING" "Black Market Lua script not found - will be copied during module installation"
fi
fi
# Summary
print_status "HEADER" "SETUP COMPLETE"
echo "📁 Lua Scripts Directory: $LUA_SCRIPTS_DIR"
echo "📁 TypeScript Directory: $TYPESCRIPT_DIR"
echo ""
echo "📜 Example Scripts Created:"
echo " Lua Scripts:"
echo " • welcome.lua - Player login messages"
echo " • server_info.lua - Custom info commands"
echo " • level_rewards.lua - Milestone rewards"
echo " • init.lua - Script loader documentation"
echo " • README.md - Complete documentation"
echo ""
echo " TypeScript Scripts:"
echo " • index.ts - TypeScript example with type safety"
echo ""
print_status "INFO" "Next Steps:"
echo "1. Start/restart your worldserver container"
echo "2. Test scripts with GM commands:"
echo " • .reload eluna"
echo " • .info (test server_info.lua)"
echo "3. Login with a character to test welcome.lua"
echo "4. Level up a character to test level_rewards.lua"
echo ""
print_status "SUCCESS" "Eluna Lua scripting environment setup complete!"
print_status "WARNING" "Remember: AzerothCore mod-eluna is NOT compatible with standard Eluna scripts"


@@ -1,947 +0,0 @@
#!/bin/bash
# ==============================================
# AzerothCore Server Setup Script
# ==============================================
# Interactive script to configure common server settings and generate deployment-ready environment files
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
local status=$1
local message=$2
case $status in
"INFO")
echo -e "${BLUE} ${message}${NC}"
;;
"SUCCESS")
echo -e "${GREEN}${message}${NC}"
;;
"WARNING")
echo -e "${YELLOW}⚠️ ${message}${NC}"
;;
"ERROR")
echo -e "${RED}${message}${NC}"
;;
"HEADER")
echo -e "\n${MAGENTA}=== ${message} ===${NC}"
;;
"PROMPT")
echo -e "${YELLOW}🔧 ${message}${NC}"
;;
esac
}
# Function to validate IP address
validate_ip() {
local ip=$1
if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
return 0
else
return 1
fi
}
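Note that the regex above accepts out-of-range octets such as `999.1.1.1`. If stricter validation is ever wanted, a variant (a sketch, not part of the original script) can range-check each octet:

```shell
# Stricter IPv4 check: shape via regex, then each octet must be 0-255
validate_ip_strict() {
local ip=$1 octet
[[ $ip =~ ^[0-9]{1,3}(\.[0-9]{1,3}){3}$ ]] || return 1
IFS='.' read -ra octets <<< "$ip"
for octet in "${octets[@]}"; do
[ "$octet" -le 255 ] || return 1
done
return 0
}

validate_ip_strict 192.168.1.100 && echo "valid"
validate_ip_strict 999.1.1.1 || echo "rejected"
```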
# Function to validate port number
validate_port() {
local port=$1
if [[ $port =~ ^[0-9]+$ ]] && [ $port -ge 1 ] && [ $port -le 65535 ]; then
return 0
else
return 1
fi
}
# Function to validate number
validate_number() {
local num=$1
if [[ $num =~ ^[0-9]+$ ]]; then
return 0
else
return 1
fi
}
# Function to prompt for input with validation
prompt_input() {
local prompt=$1
local default=$2
local validator=$3
local value=""
while true; do
if [ -n "$default" ]; then
read -p "$(echo -e "${YELLOW}🔧 ${prompt} [${default}]: ${NC}")" value
value=${value:-$default}
else
read -p "$(echo -e "${YELLOW}🔧 ${prompt}: ${NC}")" value
fi
if [ -z "$validator" ] || $validator "$value"; then
echo "$value"
return 0
else
print_status "ERROR" "Invalid input. Please try again."
fi
done
}
# Function to prompt for yes/no input
prompt_yes_no() {
local prompt=$1
local default=$2
while true; do
if [ "$default" = "y" ]; then
read -p "$(echo -e "${YELLOW}🔧 ${prompt} [Y/n]: ${NC}")" value
value=${value:-y}
else
read -p "$(echo -e "${YELLOW}🔧 ${prompt} [y/N]: ${NC}")" value
value=${value:-n}
fi
case $value in
[Yy]*) echo "1"; return 0 ;;
[Nn]*) echo "0"; return 0 ;;
*) print_status "ERROR" "Please answer y or n" ;;
esac
done
}
# Function to show deployment type info
show_deployment_info() {
local type=$1
case $type in
"local")
print_status "INFO" "Local Development Setup:"
echo " - Server accessible only on this machine"
echo " - Server address: 127.0.0.1"
echo " - Storage: ./storage (local directory)"
echo " - Perfect for development and testing"
;;
"lan")
print_status "INFO" "LAN Server Setup:"
echo " - Server accessible on local network"
echo " - Requires your machine's LAN IP address"
echo " - Storage: configurable"
echo " - Good for home networks or office environments"
;;
"public")
print_status "INFO" "Public Server Setup:"
echo " - Server accessible from the internet"
echo " - Requires public IP or domain name"
echo " - Requires port forwarding configuration"
echo " - Storage: recommended to use persistent storage"
;;
esac
echo ""
}
# Main configuration function
main() {
print_status "HEADER" "AZEROTHCORE SERVER SETUP"
echo "This script will help you configure your AzerothCore server for deployment."
echo "It will create customized environment files based on your configuration."
echo ""
# Check if we're in the right directory
if [ ! -f "docker-compose-azerothcore-database.env" ] || [ ! -f "docker-compose-azerothcore-services.env" ]; then
print_status "ERROR" "Environment files not found. Please run this script from the acore-compose directory."
exit 1
fi
# Deployment type selection
print_status "HEADER" "DEPLOYMENT TYPE"
echo "Select your deployment type:"
echo "1) Local Development (single machine)"
echo "2) LAN Server (local network)"
echo "3) Public Server (internet accessible)"
echo ""
while true; do
read -p "$(echo -e "${YELLOW}🔧 Select deployment type [1-3]: ${NC}")" deploy_type
case $deploy_type in
1)
DEPLOYMENT_TYPE="local"
show_deployment_info "local"
break
;;
2)
DEPLOYMENT_TYPE="lan"
show_deployment_info "lan"
break
;;
3)
DEPLOYMENT_TYPE="public"
show_deployment_info "public"
break
;;
*)
print_status "ERROR" "Please select 1, 2, or 3"
;;
esac
done
# Permission scheme selection
print_status "HEADER" "PERMISSION SCHEME"
echo "Select your container permission scheme:"
echo "1) Local Development (WSL/Docker Desktop)"
echo " - PUID=0, PGID=0 (root permissions)"
echo " - Best for: Local development, WSL, Docker Desktop"
echo " - Storage: Local directories with full access"
echo ""
echo "2) NFS Server Deployment"
echo " - PUID=1001, PGID=1000 (sharing user)"
echo " - Best for: NFS mounts, multi-user servers"
echo " - Storage: Network storage with user mapping"
echo ""
echo "3) Custom"
echo " - User-specified PUID/PGID values"
echo " - Best for: Specific user requirements"
echo " - Storage: User-specified storage path"
echo " - Manual PUID/PGID input with validation"
echo ""
while true; do
read -p "$(echo -e "${YELLOW}🔧 Select permission scheme [1-3]: ${NC}")" permission_scheme
case $permission_scheme in
1)
PERMISSION_SCHEME="local-dev"
PUID=0
PGID=0
SCHEME_DESCRIPTION="Local Development (0:0) - Root permissions for local development"
print_status "INFO" "Permission scheme: Local Development"
echo " - PUID=0, PGID=0 (root permissions)"
echo " - Optimized for WSL and Docker Desktop environments"
echo ""
break
;;
2)
PERMISSION_SCHEME="nfs-server"
PUID=1001
PGID=1000
SCHEME_DESCRIPTION="NFS Server Deployment (1001:1000) - Sharing user for network storage"
print_status "INFO" "Permission scheme: NFS Server Deployment"
echo " - PUID=1001, PGID=1000 (sharing user)"
echo " - Compatible with NFS mounts and multi-user servers"
echo ""
break
;;
3)
PERMISSION_SCHEME="custom"
print_status "INFO" "Permission scheme: Custom"
echo " - Manual PUID/PGID configuration"
echo ""
PUID=$(prompt_input "Enter PUID (user ID)" "1000" validate_number)
PGID=$(prompt_input "Enter PGID (group ID)" "1000" validate_number)
SCHEME_DESCRIPTION="Custom (${PUID}:${PGID}) - User-specified permissions"
print_status "SUCCESS" "Custom permissions set: PUID=${PUID}, PGID=${PGID}"
echo ""
break
;;
*)
print_status "ERROR" "Please select 1, 2, or 3"
;;
esac
done
# Server configuration
print_status "HEADER" "SERVER CONFIGURATION"
# Server address configuration
if [ "$DEPLOYMENT_TYPE" = "local" ]; then
SERVER_ADDRESS="127.0.0.1"
print_status "INFO" "Server address set to: $SERVER_ADDRESS"
else
if [ "$DEPLOYMENT_TYPE" = "lan" ]; then
# Try to detect LAN IP
LAN_IP=$(ip route get 1.1.1.1 2>/dev/null | head -1 | awk '{print $7}' || echo "")
if [ -n "$LAN_IP" ]; then
SERVER_ADDRESS=$(prompt_input "Enter server IP address" "$LAN_IP" validate_ip)
else
SERVER_ADDRESS=$(prompt_input "Enter server IP address (e.g., 192.168.1.100)" "" validate_ip)
fi
else
# Public server
SERVER_ADDRESS=$(prompt_input "Enter server address (IP or domain)" "your-domain.com" "")
fi
fi
# Port configuration
REALM_PORT=$(prompt_input "Enter client connection port" "8215" validate_port)
AUTH_EXTERNAL_PORT=$(prompt_input "Enter auth server port" "3784" validate_port)
SOAP_EXTERNAL_PORT=$(prompt_input "Enter SOAP API port" "7778" validate_port)
MYSQL_EXTERNAL_PORT=$(prompt_input "Enter MySQL external port" "64306" validate_port)
# Database configuration
print_status "HEADER" "DATABASE CONFIGURATION"
MYSQL_ROOT_PASSWORD=$(prompt_input "Enter MySQL root password" "azerothcore123" "")
# Storage configuration
print_status "HEADER" "STORAGE CONFIGURATION"
if [ "$DEPLOYMENT_TYPE" = "local" ]; then
STORAGE_ROOT="./storage"
print_status "INFO" "Storage path set to: $STORAGE_ROOT"
else
echo "Storage options:"
echo "1) ./storage (local directory)"
echo "2) /nfs/azerothcore (NFS mount)"
echo "3) Custom path"
while true; do
read -p "$(echo -e "${YELLOW}🔧 Select storage option [1-3]: ${NC}")" storage_option
case $storage_option in
1)
STORAGE_ROOT="./storage"
break
;;
2)
STORAGE_ROOT="/nfs/azerothcore"
break
;;
3)
STORAGE_ROOT=$(prompt_input "Enter custom storage path" "/mnt/azerothcore-data" "")
break
;;
*)
print_status "ERROR" "Please select 1, 2, or 3"
;;
esac
done
fi
# Storage directory pre-creation option
print_status "HEADER" "STORAGE DIRECTORY SETUP"
echo "Docker may have permission issues with NFS/network storage when auto-creating directories."
echo "Pre-creating directories with correct permissions can prevent deployment issues."
echo ""
echo "Would you like to pre-create storage directories?"
echo "1) Yes - Create directories now (recommended for NFS/network storage)"
echo "2) No - Let Docker auto-create directories (may cause permission issues)"
while true; do
read -p "$(echo -e "${YELLOW}🔧 Pre-create storage directories? [1-2]: ${NC}")" precreate_option
case $precreate_option in
1)
PRE_CREATE_DIRECTORIES=true
break
;;
2)
PRE_CREATE_DIRECTORIES=false
break
;;
*)
print_status "ERROR" "Please select 1 or 2"
;;
esac
done
# Create directories if requested
if [ "$PRE_CREATE_DIRECTORIES" = true ]; then
print_status "INFO" "Creating storage directories..."
STORAGE_PATH="${STORAGE_ROOT}"
# Create all required directories
DIRECTORIES=(
"$STORAGE_PATH/config"
"$STORAGE_PATH/data"
"$STORAGE_PATH/cache"
"$STORAGE_PATH/logs"
"$STORAGE_PATH/modules"
"$STORAGE_PATH/mysql-data"
"$STORAGE_PATH/typescript"
"$STORAGE_PATH/backups"
)
for dir in "${DIRECTORIES[@]}"; do
if [ ! -d "$dir" ]; then
mkdir -p "$dir"
print_status "SUCCESS" "Created: $dir"
else
print_status "INFO" "Already exists: $dir"
fi
done
# Set permissions for better compatibility
chmod -R 755 "$STORAGE_PATH" 2>/dev/null || print_status "WARNING" "Could not set directory permissions (this may be normal for NFS)"
print_status "SUCCESS" "Storage directories created successfully!"
echo ""
fi
# Backup configuration
print_status "HEADER" "BACKUP CONFIGURATION"
BACKUP_RETENTION_DAYS=$(prompt_input "Days to keep daily backups" "3" validate_number)
BACKUP_RETENTION_HOURS=$(prompt_input "Hours to keep hourly backups" "6" validate_number)
BACKUP_DAILY_TIME=$(prompt_input "Daily backup time (24h format, e.g., 09 for 9 AM)" "09" "")
# Optional: Timezone
TIMEZONE=$(prompt_input "Server timezone" "America/New_York" "")
# Module Configuration
print_status "HEADER" "MODULE CONFIGURATION"
echo "AzerothCore supports 25+ enhancement modules. Choose your setup:"
echo "1) Suggested Modules (recommended for beginners)"
echo "2) Playerbots Setup (AI companions + solo-friendly modules)"
echo "3) Manual Selection (advanced users)"
echo "4) No Modules (vanilla experience)"
echo ""
MODULE_SELECTION_MODE=""
while true; do
read -p "$(echo -e "${YELLOW}🔧 Select module configuration [1-4]: ${NC}")" module_choice
case $module_choice in
1)
MODULE_SELECTION_MODE="suggested"
print_status "INFO" "Suggested Modules Selected:"
echo " ✅ Solo LFG - Dungeon finder for solo players"
echo " ✅ Solocraft - Scale content for solo players"
echo " ✅ Autobalance - Dynamic dungeon difficulty"
echo " ✅ AH Bot - Auction house automation"
echo " ✅ Transmog - Equipment appearance customization"
echo " ✅ NPC Buffer - Convenience buffs"
echo " ✅ Learn Spells - Auto-learn class spells"
echo " ✅ Fireworks - Level-up celebrations"
echo ""
break
;;
2)
MODULE_SELECTION_MODE="playerbots"
print_status "INFO" "Playerbots Setup Selected:"
echo " 🤖 Playerbots - AI companions and guild members"
echo " ✅ Solo LFG - Dungeon finder for solo players"
echo " ✅ Solocraft - Scale content for solo players"
echo " ✅ Autobalance - Dynamic dungeon difficulty"
echo " ✅ AH Bot - Auction house automation"
echo " ✅ Transmog - Equipment appearance customization"
echo " ✅ NPC Buffer - Convenience buffs"
echo " ✅ Learn Spells - Auto-learn class spells"
echo " ✅ Fireworks - Level-up celebrations"
echo ""
print_status "WARNING" "Playerbots requires special build - this setup uses uprightbass360/azerothcore-wotlk-playerbots"
echo ""
break
;;
3)
MODULE_SELECTION_MODE="manual"
print_status "INFO" "Manual Module Selection:"
echo " You will be prompted for each of the 32 available modules"
echo " This allows full customization of your server experience"
echo ""
break
;;
4)
MODULE_SELECTION_MODE="none"
print_status "INFO" "No Modules Selected:"
echo " Pure AzerothCore experience without enhancements"
echo " You can add modules later if needed"
echo ""
break
;;
*)
print_status "ERROR" "Please select 1, 2, 3, or 4"
;;
esac
done
# Initialize all modules to disabled
MODULE_PLAYERBOTS=0
MODULE_AOE_LOOT=0
MODULE_LEARN_SPELLS=0
MODULE_FIREWORKS=0
MODULE_INDIVIDUAL_PROGRESSION=0
MODULE_AHBOT=0
MODULE_AUTOBALANCE=0
MODULE_TRANSMOG=0
MODULE_NPC_BUFFER=0
MODULE_DYNAMIC_XP=0
MODULE_SOLO_LFG=0
MODULE_1V1_ARENA=0
MODULE_PHASED_DUELS=0
MODULE_BREAKING_NEWS=0
MODULE_BOSS_ANNOUNCER=0
MODULE_ACCOUNT_ACHIEVEMENTS=0
MODULE_AUTO_REVIVE=0
MODULE_GAIN_HONOR_GUARD=0
MODULE_ELUNA=0
MODULE_TIME_IS_TIME=0
MODULE_POCKET_PORTAL=0
MODULE_RANDOM_ENCHANTS=0
MODULE_SOLOCRAFT=0
MODULE_PVP_TITLES=0
MODULE_NPC_BEASTMASTER=0
MODULE_NPC_ENCHANTER=0
MODULE_INSTANCE_RESET=0
MODULE_LEVEL_GRANT=0
MODULE_ASSISTANT=0
MODULE_REAGENT_BANK=0
MODULE_BLACK_MARKET_AUCTION_HOUSE=0
MODULE_ARAC=0
# Configure modules based on selection
if [ "$MODULE_SELECTION_MODE" = "suggested" ]; then
# Enable suggested modules for beginners
MODULE_SOLO_LFG=1
MODULE_SOLOCRAFT=1
MODULE_AUTOBALANCE=1
MODULE_AHBOT=1
MODULE_TRANSMOG=1
MODULE_NPC_BUFFER=1
MODULE_LEARN_SPELLS=1
MODULE_FIREWORKS=1
elif [ "$MODULE_SELECTION_MODE" = "playerbots" ]; then
# Enable playerbots + solo-friendly modules
MODULE_PLAYERBOTS=1
MODULE_SOLO_LFG=1
MODULE_SOLOCRAFT=1
MODULE_AUTOBALANCE=1
MODULE_AHBOT=1
MODULE_TRANSMOG=1
MODULE_NPC_BUFFER=1
MODULE_LEARN_SPELLS=1
MODULE_FIREWORKS=1
elif [ "$MODULE_SELECTION_MODE" = "manual" ]; then
print_status "PROMPT" "Configure each module (y/n):"
# Core Gameplay Modules
echo -e "\n${BLUE}🎮 Core Gameplay Modules:${NC}"
MODULE_PLAYERBOTS=$(prompt_yes_no "Playerbots - AI companions (uses uprightbass360/azerothcore-wotlk-playerbots build)" "n")
MODULE_SOLO_LFG=$(prompt_yes_no "Solo LFG - Dungeon finder for solo players" "n")
MODULE_SOLOCRAFT=$(prompt_yes_no "Solocraft - Scale dungeons/raids for solo play" "n")
MODULE_AUTOBALANCE=$(prompt_yes_no "Autobalance - Dynamic difficulty scaling" "n")
# Quality of Life Modules
echo -e "\n${BLUE}🛠️ Quality of Life Modules:${NC}"
MODULE_TRANSMOG=$(prompt_yes_no "Transmog - Equipment appearance customization" "n")
MODULE_NPC_BUFFER=$(prompt_yes_no "NPC Buffer - Convenience buff NPCs" "n")
MODULE_LEARN_SPELLS=$(prompt_yes_no "Learn Spells - Auto-learn class spells on level" "n")
MODULE_AOE_LOOT=$(prompt_yes_no "AOE Loot - Loot multiple corpses at once" "n")
MODULE_FIREWORKS=$(prompt_yes_no "Fireworks - Celebrate level ups" "n")
MODULE_ASSISTANT=$(prompt_yes_no "Assistant - Multi-service NPC" "n")
# Economy & Auction House
echo -e "\n${BLUE}💰 Economy Modules:${NC}"
MODULE_AHBOT=$(prompt_yes_no "AH Bot - Auction house automation" "n")
MODULE_REAGENT_BANK=$(prompt_yes_no "Reagent Bank - Material storage system" "n")
MODULE_BLACK_MARKET_AUCTION_HOUSE=$(prompt_yes_no "Black Market - MoP-style black market" "n")
# PvP & Arena
echo -e "\n${BLUE}⚔️ PvP Modules:${NC}"
MODULE_1V1_ARENA=$(prompt_yes_no "1v1 Arena - Solo arena battles" "n")
MODULE_PHASED_DUELS=$(prompt_yes_no "Phased Duels - Instanced dueling" "n")
MODULE_PVP_TITLES=$(prompt_yes_no "PvP Titles - Additional honor titles" "n")
# Progression & Experience
echo -e "\n${BLUE}📈 Progression Modules:${NC}"
MODULE_INDIVIDUAL_PROGRESSION=$(prompt_yes_no "Individual Progression - Per-player vanilla→TBC→WotLK" "n")
MODULE_DYNAMIC_XP=$(prompt_yes_no "Dynamic XP - Customizable experience rates" "n")
MODULE_LEVEL_GRANT=$(prompt_yes_no "Level Grant - Quest-based leveling rewards" "n")
MODULE_ACCOUNT_ACHIEVEMENTS=$(prompt_yes_no "Account Achievements - Account-wide achievements" "n")
# Server Management & Features
echo -e "\n${BLUE}🔧 Server Features:${NC}"
MODULE_BREAKING_NEWS=$(prompt_yes_no "Breaking News - Login screen announcements" "n")
MODULE_BOSS_ANNOUNCER=$(prompt_yes_no "Boss Announcer - Server-wide boss kill announcements" "n")
MODULE_AUTO_REVIVE=$(prompt_yes_no "Auto Revive - Automatic resurrection" "n")
MODULE_ELUNA=$(prompt_yes_no "Eluna - Lua scripting engine" "n")
# Special & Utility
echo -e "\n${BLUE}🎯 Utility Modules:${NC}"
MODULE_NPC_BEASTMASTER=$(prompt_yes_no "NPC Beastmaster - Pet management NPC" "n")
MODULE_NPC_ENCHANTER=$(prompt_yes_no "NPC Enchanter - Enchanting services" "n")
MODULE_RANDOM_ENCHANTS=$(prompt_yes_no "Random Enchants - Diablo-style random item stats" "n")
MODULE_POCKET_PORTAL=$(prompt_yes_no "Pocket Portal - Portable teleportation" "n")
MODULE_INSTANCE_RESET=$(prompt_yes_no "Instance Reset - Manual instance resets" "n")
MODULE_TIME_IS_TIME=$(prompt_yes_no "Time is Time - Real-time game world" "n")
MODULE_GAIN_HONOR_GUARD=$(prompt_yes_no "Gain Honor Guard - Honor from guard kills" "n")
MODULE_ARAC=$(prompt_yes_no "All Races All Classes - Remove class restrictions (REQUIRES CLIENT PATCH)" "n")
fi
# Summary
print_status "HEADER" "CONFIGURATION SUMMARY"
echo "Deployment Type: $DEPLOYMENT_TYPE"
echo "Permission Scheme: $SCHEME_DESCRIPTION"
echo "Server Address: $SERVER_ADDRESS"
echo "Client Port: $REALM_PORT"
echo "Auth Port: $AUTH_EXTERNAL_PORT"
echo "SOAP Port: $SOAP_EXTERNAL_PORT"
echo "MySQL Port: $MYSQL_EXTERNAL_PORT"
echo "Storage Path: $STORAGE_ROOT"
echo "Daily Backup Time: ${BACKUP_DAILY_TIME}:00 (${TIMEZONE})"
echo "Backup Retention: ${BACKUP_RETENTION_DAYS} days of daily backups, ${BACKUP_RETENTION_HOURS} hours of hourly backups"
# Module summary
if [ "$MODULE_SELECTION_MODE" = "suggested" ]; then
echo "Modules: Suggested preset (8 modules)"
elif [ "$MODULE_SELECTION_MODE" = "playerbots" ]; then
echo "Modules: Playerbots preset (9 modules including AI companions)"
elif [ "$MODULE_SELECTION_MODE" = "manual" ]; then
ENABLED_COUNT=0
# Tally enabled modules with bash indirect expansion: "${!module_var}" expands
# to the value of the variable whose name is stored in module_var.
for module_var in \
MODULE_PLAYERBOTS MODULE_SOLO_LFG MODULE_SOLOCRAFT MODULE_AUTOBALANCE \
MODULE_TRANSMOG MODULE_NPC_BUFFER MODULE_LEARN_SPELLS MODULE_AOE_LOOT \
MODULE_FIREWORKS MODULE_ASSISTANT MODULE_AHBOT MODULE_REAGENT_BANK \
MODULE_BLACK_MARKET_AUCTION_HOUSE MODULE_1V1_ARENA MODULE_PHASED_DUELS \
MODULE_PVP_TITLES MODULE_INDIVIDUAL_PROGRESSION MODULE_DYNAMIC_XP \
MODULE_LEVEL_GRANT MODULE_ACCOUNT_ACHIEVEMENTS MODULE_BREAKING_NEWS \
MODULE_BOSS_ANNOUNCER MODULE_AUTO_REVIVE MODULE_ELUNA \
MODULE_NPC_BEASTMASTER MODULE_NPC_ENCHANTER MODULE_RANDOM_ENCHANTS \
MODULE_POCKET_PORTAL MODULE_INSTANCE_RESET MODULE_TIME_IS_TIME \
MODULE_GAIN_HONOR_GUARD MODULE_ARAC; do
[ "${!module_var}" = "1" ] && ENABLED_COUNT=$((ENABLED_COUNT + 1))
done
echo "Modules: Custom selection ($ENABLED_COUNT modules)"
else
echo "Modules: None (vanilla experience)"
fi
echo ""
# Confirmation
while true; do
read -p "$(echo -e "${YELLOW}🔧 Proceed with this configuration? [y/N]: ${NC}")" confirm
case $confirm in
[Yy]*)
break
;;
[Nn]*|"")
print_status "INFO" "Configuration cancelled"
exit 0
;;
*)
print_status "ERROR" "Please answer y or n"
;;
esac
done
# Create custom environment files
print_status "HEADER" "CREATING ENVIRONMENT FILES"
# Create custom database environment file
print_status "INFO" "Creating custom database environment file..."
cp docker-compose-azerothcore-database.env docker-compose-azerothcore-database-custom.env
# Substitute values in the database env file ('#' as the sed delimiter so paths containing '/' need no escaping)
sed -i "s#STORAGE_ROOT=.*#STORAGE_ROOT=${STORAGE_ROOT}#" docker-compose-azerothcore-database-custom.env
sed -i "s#MYSQL_ROOT_PASSWORD=.*#MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}#" docker-compose-azerothcore-database-custom.env
sed -i "s#MYSQL_EXTERNAL_PORT=.*#MYSQL_EXTERNAL_PORT=${MYSQL_EXTERNAL_PORT}#" docker-compose-azerothcore-database-custom.env
sed -i "s#BACKUP_RETENTION_DAYS=.*#BACKUP_RETENTION_DAYS=${BACKUP_RETENTION_DAYS}#" docker-compose-azerothcore-database-custom.env
sed -i "s#BACKUP_RETENTION_HOURS=.*#BACKUP_RETENTION_HOURS=${BACKUP_RETENTION_HOURS}#" docker-compose-azerothcore-database-custom.env
sed -i "s#BACKUP_DAILY_TIME=.*#BACKUP_DAILY_TIME=${BACKUP_DAILY_TIME}#" docker-compose-azerothcore-database-custom.env
sed -i "s#TZ=.*#TZ=${TIMEZONE}#" docker-compose-azerothcore-database-custom.env
# Apply permission scheme settings
sed -i "s#PUID=.*#PUID=${PUID}#" docker-compose-azerothcore-database-custom.env
sed -i "s#PGID=.*#PGID=${PGID}#" docker-compose-azerothcore-database-custom.env
# Toggle database images based on playerbots module selection
if [ "$MODULE_PLAYERBOTS" = "1" ]; then
# Swap AC_DB_IMPORT_IMAGE to enable the mod-playerbots database.
# Three-pass swap: active value -> _TEMP, _DISABLED value -> active, _TEMP -> _DISABLED
sed -i 's/^\(AC_DB_IMPORT_IMAGE\)=\(.*\)/\1_TEMP=\2/' docker-compose-azerothcore-database-custom.env
sed -i 's/^\(AC_DB_IMPORT_IMAGE\)_DISABLED=\(.*\)/\1=\2/' docker-compose-azerothcore-database-custom.env
sed -i 's/^\(AC_DB_IMPORT_IMAGE\)_TEMP=\(.*\)/\1_DISABLED=\2/' docker-compose-azerothcore-database-custom.env
fi
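# Illustration only (defined here but never called): the three-pass swap above
# can be factored into a hypothetical helper that exchanges the KEY= and
# KEY_DISABLED= values in any env file via a KEY_TEMP= placeholder.
demo_swap_env_key() {
local key="$1" env_file="$2"
sed -i "s/^\(${key}\)=\(.*\)/\1_TEMP=\2/" "$env_file"
sed -i "s/^\(${key}\)_DISABLED=\(.*\)/\1=\2/" "$env_file"
sed -i "s/^\(${key}\)_TEMP=\(.*\)/\1_DISABLED=\2/" "$env_file"
}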
# Create custom services environment file
print_status "INFO" "Creating custom services environment file..."
cp docker-compose-azerothcore-services.env docker-compose-azerothcore-services-custom.env
# Substitute values in the services env file ('#' as the sed delimiter so paths containing '/' need no escaping)
sed -i "s#STORAGE_ROOT=.*#STORAGE_ROOT=${STORAGE_ROOT}#" docker-compose-azerothcore-services-custom.env
sed -i "s#MYSQL_ROOT_PASSWORD=.*#MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}#" docker-compose-azerothcore-services-custom.env
sed -i "s#AUTH_EXTERNAL_PORT=.*#AUTH_EXTERNAL_PORT=${AUTH_EXTERNAL_PORT}#" docker-compose-azerothcore-services-custom.env
sed -i "s#WORLD_EXTERNAL_PORT=.*#WORLD_EXTERNAL_PORT=${REALM_PORT}#" docker-compose-azerothcore-services-custom.env
sed -i "s#SOAP_EXTERNAL_PORT=.*#SOAP_EXTERNAL_PORT=${SOAP_EXTERNAL_PORT}#" docker-compose-azerothcore-services-custom.env
sed -i "s#SERVER_ADDRESS=.*#SERVER_ADDRESS=${SERVER_ADDRESS}#" docker-compose-azerothcore-services-custom.env
sed -i "s#REALM_PORT=.*#REALM_PORT=${REALM_PORT}#" docker-compose-azerothcore-services-custom.env
# Apply permission scheme settings
sed -i "s#PUID=.*#PUID=${PUID}#" docker-compose-azerothcore-services-custom.env
sed -i "s#PGID=.*#PGID=${PGID}#" docker-compose-azerothcore-services-custom.env
# Toggle Docker images based on playerbots module selection
if [ "$MODULE_PLAYERBOTS" = "1" ]; then
# Switch to playerbots images (using _PLAYERBOTS variants)
sed -i 's/^AC_AUTHSERVER_IMAGE=.*/AC_AUTHSERVER_IMAGE=uprightbass360\/azerothcore-wotlk-playerbots:authserver-Playerbot/' docker-compose-azerothcore-services-custom.env
sed -i 's/^AC_WORLDSERVER_IMAGE=.*/AC_WORLDSERVER_IMAGE=uprightbass360\/azerothcore-wotlk-playerbots:worldserver-Playerbot/' docker-compose-azerothcore-services-custom.env
sed -i 's/^AC_CLIENT_DATA_IMAGE=.*/AC_CLIENT_DATA_IMAGE=uprightbass360\/azerothcore-wotlk-playerbots:client-data-Playerbot/' docker-compose-azerothcore-services-custom.env
sed -i 's/^MODULE_PLAYERBOTS=.*/MODULE_PLAYERBOTS=1/' docker-compose-azerothcore-services-custom.env
fi
# Create custom tools environment file
print_status "INFO" "Creating custom tools environment file..."
cp docker-compose-azerothcore-tools.env docker-compose-azerothcore-tools-custom.env
# Substitute values in the tools env file ('#' as the sed delimiter so paths containing '/' need no escaping)
sed -i "s#STORAGE_ROOT=.*#STORAGE_ROOT=${STORAGE_ROOT}#" docker-compose-azerothcore-tools-custom.env
# Apply permission scheme settings
sed -i "s#PUID=.*#PUID=${PUID}#" docker-compose-azerothcore-tools-custom.env
sed -i "s#PGID=.*#PGID=${PGID}#" docker-compose-azerothcore-tools-custom.env
# Toggle tools images based on playerbots module selection
if [ "$MODULE_PLAYERBOTS" = "1" ]; then
# Swap AC_TOOLS_IMAGE to enable the mod-playerbots tools.
# Three-pass swap: active value -> _TEMP, _DISABLED value -> active, _TEMP -> _DISABLED
sed -i 's/^\(AC_TOOLS_IMAGE\)=\(.*\)/\1_TEMP=\2/' docker-compose-azerothcore-tools-custom.env
sed -i 's/^\(AC_TOOLS_IMAGE\)_DISABLED=\(.*\)/\1=\2/' docker-compose-azerothcore-tools-custom.env
sed -i 's/^\(AC_TOOLS_IMAGE\)_TEMP=\(.*\)/\1_DISABLED=\2/' docker-compose-azerothcore-tools-custom.env
fi
# Create custom modules environment file (only if modules are enabled)
if [ "$MODULE_SELECTION_MODE" != "none" ]; then
print_status "INFO" "Creating custom modules environment file..."
cp docker-compose-azerothcore-modules.env docker-compose-azerothcore-modules-custom.env
# Substitute values in modules env file
sed -i "s#STORAGE_ROOT=.*#STORAGE_ROOT=${STORAGE_ROOT}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MYSQL_ROOT_PASSWORD=.*#MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}#" docker-compose-azerothcore-modules-custom.env
# Apply permission scheme settings
sed -i "s#PUID=.*#PUID=${PUID}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#PGID=.*#PGID=${PGID}#" docker-compose-azerothcore-modules-custom.env
# Set all module variables
sed -i "s#MODULE_PLAYERBOTS=.*#MODULE_PLAYERBOTS=${MODULE_PLAYERBOTS}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_AOE_LOOT=.*#MODULE_AOE_LOOT=${MODULE_AOE_LOOT}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_LEARN_SPELLS=.*#MODULE_LEARN_SPELLS=${MODULE_LEARN_SPELLS}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_FIREWORKS=.*#MODULE_FIREWORKS=${MODULE_FIREWORKS}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_INDIVIDUAL_PROGRESSION=.*#MODULE_INDIVIDUAL_PROGRESSION=${MODULE_INDIVIDUAL_PROGRESSION}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_AHBOT=.*#MODULE_AHBOT=${MODULE_AHBOT}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_AUTOBALANCE=.*#MODULE_AUTOBALANCE=${MODULE_AUTOBALANCE}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_TRANSMOG=.*#MODULE_TRANSMOG=${MODULE_TRANSMOG}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_NPC_BUFFER=.*#MODULE_NPC_BUFFER=${MODULE_NPC_BUFFER}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_DYNAMIC_XP=.*#MODULE_DYNAMIC_XP=${MODULE_DYNAMIC_XP}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_SOLO_LFG=.*#MODULE_SOLO_LFG=${MODULE_SOLO_LFG}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_1V1_ARENA=.*#MODULE_1V1_ARENA=${MODULE_1V1_ARENA}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_PHASED_DUELS=.*#MODULE_PHASED_DUELS=${MODULE_PHASED_DUELS}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_BREAKING_NEWS=.*#MODULE_BREAKING_NEWS=${MODULE_BREAKING_NEWS}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_BOSS_ANNOUNCER=.*#MODULE_BOSS_ANNOUNCER=${MODULE_BOSS_ANNOUNCER}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_ACCOUNT_ACHIEVEMENTS=.*#MODULE_ACCOUNT_ACHIEVEMENTS=${MODULE_ACCOUNT_ACHIEVEMENTS}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_AUTO_REVIVE=.*#MODULE_AUTO_REVIVE=${MODULE_AUTO_REVIVE}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_GAIN_HONOR_GUARD=.*#MODULE_GAIN_HONOR_GUARD=${MODULE_GAIN_HONOR_GUARD}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_ELUNA=.*#MODULE_ELUNA=${MODULE_ELUNA}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_TIME_IS_TIME=.*#MODULE_TIME_IS_TIME=${MODULE_TIME_IS_TIME}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_POCKET_PORTAL=.*#MODULE_POCKET_PORTAL=${MODULE_POCKET_PORTAL}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_RANDOM_ENCHANTS=.*#MODULE_RANDOM_ENCHANTS=${MODULE_RANDOM_ENCHANTS}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_SOLOCRAFT=.*#MODULE_SOLOCRAFT=${MODULE_SOLOCRAFT}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_PVP_TITLES=.*#MODULE_PVP_TITLES=${MODULE_PVP_TITLES}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_NPC_BEASTMASTER=.*#MODULE_NPC_BEASTMASTER=${MODULE_NPC_BEASTMASTER}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_NPC_ENCHANTER=.*#MODULE_NPC_ENCHANTER=${MODULE_NPC_ENCHANTER}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_INSTANCE_RESET=.*#MODULE_INSTANCE_RESET=${MODULE_INSTANCE_RESET}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_LEVEL_GRANT=.*#MODULE_LEVEL_GRANT=${MODULE_LEVEL_GRANT}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_ASSISTANT=.*#MODULE_ASSISTANT=${MODULE_ASSISTANT}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_REAGENT_BANK=.*#MODULE_REAGENT_BANK=${MODULE_REAGENT_BANK}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_BLACK_MARKET_AUCTION_HOUSE=.*#MODULE_BLACK_MARKET_AUCTION_HOUSE=${MODULE_BLACK_MARKET_AUCTION_HOUSE}#" docker-compose-azerothcore-modules-custom.env
sed -i "s#MODULE_ARAC=.*#MODULE_ARAC=${MODULE_ARAC}#" docker-compose-azerothcore-modules-custom.env
fi
# Format selection
print_status "HEADER" "OUTPUT FORMAT"
echo "Select your preferred deployment format:"
echo "1) Environment files (Docker Compose + env files)"
echo "2) Flattened YAML files (Portainer Stack compatible)"
echo ""
while true; do
read -p "$(echo -e "${YELLOW}🔧 Select output format [1-2]: ${NC}")" format_type
case $format_type in
1)
OUTPUT_FORMAT="env"
break
;;
2)
OUTPUT_FORMAT="portainer"
break
;;
*)
print_status "ERROR" "Please select 1 or 2"
;;
esac
done
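# Illustration only (defined here but never called): this script repeats the
# same read/case menu loop several times; a hypothetical helper can factor the
# pattern out, echoing the validated choice and retrying on invalid input.
demo_prompt_choice() {
local prompt="$1" choice valid
shift
while true; do
read -p "$prompt" choice || return 1
for valid in "$@"; do
if [ "$choice" = "$valid" ]; then
echo "$choice"
return 0
fi
done
echo "Invalid selection, try again" >&2
done
}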
print_status "SUCCESS" "Custom environment files created:"
echo " - docker-compose-azerothcore-database-custom.env"
echo " - docker-compose-azerothcore-services-custom.env"
echo " - docker-compose-azerothcore-tools-custom.env"
if [ "$MODULE_SELECTION_MODE" != "none" ]; then
echo " - docker-compose-azerothcore-modules-custom.env"
fi
echo ""
# Generate Portainer YAML files if selected
if [ "$OUTPUT_FORMAT" = "portainer" ]; then
generate_portainer_yamls
fi
# Deployment instructions
print_status "HEADER" "DEPLOYMENT INSTRUCTIONS"
if [ "$OUTPUT_FORMAT" = "portainer" ]; then
echo "To deploy your server using Portainer stacks:"
echo ""
echo "1. Create and deploy database stack:"
echo " • Copy portainer-database-stack.yml contents"
echo " • Create new stack in Portainer"
echo " • Wait for healthy status"
echo ""
echo "2. Create and deploy services stack:"
echo " • Copy portainer-services-stack.yml contents"
echo " • Create new stack in Portainer"
echo ""
if [ "$MODULE_SELECTION_MODE" != "none" ]; then
echo "3. Create and deploy modules stack:"
echo " • Copy portainer-modules-stack.yml contents"
echo " • Create new stack in Portainer"
echo ""
echo "4. Create and deploy tools stack (optional):"
echo " • Copy portainer-tools-stack.yml contents"
echo " • Create new stack in Portainer"
echo ""
else
echo "3. Create and deploy tools stack (optional):"
echo " • Copy portainer-tools-stack.yml contents"
echo " • Create new stack in Portainer"
echo ""
fi
else
echo "To deploy your server with Docker Compose:"
echo ""
echo "1. Deploy database layer:"
echo " docker compose --env-file docker-compose-azerothcore-database-custom.env -f docker-compose-azerothcore-database.yml up -d"
echo ""
echo "2. Deploy services layer:"
echo " docker compose --env-file docker-compose-azerothcore-services-custom.env -f docker-compose-azerothcore-services.yml up -d"
echo ""
if [ "$MODULE_SELECTION_MODE" != "none" ]; then
echo "3. Deploy modules layer (installs and configures selected modules):"
echo " docker compose --env-file docker-compose-azerothcore-modules-custom.env -f docker-compose-azerothcore-modules.yml up -d"
echo ""
echo "4. Deploy tools layer (optional):"
echo " docker compose --env-file docker-compose-azerothcore-tools-custom.env -f docker-compose-azerothcore-tools.yml up -d"
echo ""
else
echo "3. Deploy tools layer (optional):"
echo " docker compose --env-file docker-compose-azerothcore-tools-custom.env -f docker-compose-azerothcore-tools.yml up -d"
echo ""
fi
fi
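# Illustration only (defined here but never executed): the four compose layers
# printed above can be brought up in order with a loop over the layer names,
# stopping at the first failure. File names follow this project's
# docker-compose-azerothcore-<layer>[-custom] naming convention.
demo_deploy_all() {
local layer
for layer in database services modules tools; do
docker compose \
--env-file "docker-compose-azerothcore-${layer}-custom.env" \
-f "docker-compose-azerothcore-${layer}.yml" up -d || return 1
done
}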
if [ "$DEPLOYMENT_TYPE" != "local" ]; then
print_status "WARNING" "Additional configuration required for ${DEPLOYMENT_TYPE} deployment:"
echo " - Ensure firewall allows traffic on configured ports"
if [ "$DEPLOYMENT_TYPE" = "public" ]; then
echo " - Configure port forwarding on your router:"
echo " - ${REALM_PORT} (client connections)"
echo " - ${AUTH_EXTERNAL_PORT} (auth server)"
echo " - ${SOAP_EXTERNAL_PORT} (SOAP API)"
fi
echo ""
fi
# Client configuration
print_status "HEADER" "CLIENT CONFIGURATION"
echo "Configure your WoW 3.3.5a client by editing realmlist.wtf:"
if [ "$REALM_PORT" = "8215" ]; then
echo " set realmlist ${SERVER_ADDRESS}"
else
echo " set realmlist ${SERVER_ADDRESS} ${REALM_PORT}"
fi
echo ""
# Playerbots usage information
if [ "$MODULE_SELECTION_MODE" = "playerbots" ] || [ "$MODULE_PLAYERBOTS" = "1" ]; then
print_status "HEADER" "PLAYERBOTS USAGE"
echo "Your server includes AI playerbots! Here are the key commands:"
echo ""
echo "🤖 Guild Bot Management:"
echo " .bot add <name> - Add a random bot to your guild"
echo " .bot add <name> <class> - Add a bot of specific class"
echo " .bot remove <name> - Remove a bot from your guild"
echo " .guild create <name> - Create a guild (if needed)"
echo ""
echo "🎮 Bot Control:"
echo " .bot invite <name> - Invite bot to group"
echo " .bot uninvite <name> - Remove bot from group"
echo " .bot command <action> - Send commands to your bots"
echo ""
echo "⚙️ Bot Configuration:"
echo " .bot settings - View bot configuration options"
echo " .bot stats - Show server bot statistics"
echo ""
echo "📖 For more commands, visit: https://github.com/celguar/playerbots"
echo ""
fi
print_status "SUCCESS" "🎉 Server setup complete!"
if [ "$OUTPUT_FORMAT" = "portainer" ]; then
print_status "INFO" "Your Portainer YAML stack files are ready for deployment."
else
print_status "INFO" "Your custom environment files are ready for deployment."
fi
}
# Function to generate flattened Portainer YAML files
generate_portainer_yamls() {
print_status "INFO" "Generating Portainer-compatible YAML files..."
# Generate database stack
print_status "INFO" "Creating portainer-database-stack.yml..."
docker compose --env-file docker-compose-azerothcore-database-custom.env -f docker-compose-azerothcore-database.yml config > portainer-database-stack.yml
# Generate services stack
print_status "INFO" "Creating portainer-services-stack.yml..."
docker compose --env-file docker-compose-azerothcore-services-custom.env -f docker-compose-azerothcore-services.yml config > portainer-services-stack.yml
# Generate tools stack
print_status "INFO" "Creating portainer-tools-stack.yml..."
docker compose --env-file docker-compose-azerothcore-tools-custom.env -f docker-compose-azerothcore-tools.yml config > portainer-tools-stack.yml
# Generate modules stack (if modules are enabled)
if [ "$MODULE_SELECTION_MODE" != "none" ]; then
print_status "INFO" "Creating portainer-modules-stack.yml..."
docker compose --env-file docker-compose-azerothcore-modules-custom.env -f docker-compose-azerothcore-modules.yml config > portainer-modules-stack.yml
fi
print_status "SUCCESS" "Portainer YAML files generated:"
echo " - portainer-database-stack.yml"
echo " - portainer-services-stack.yml"
echo " - portainer-tools-stack.yml"
if [ "$MODULE_SELECTION_MODE" != "none" ]; then
echo " - portainer-modules-stack.yml"
fi
echo ""
print_status "INFO" "These files can be copied and pasted directly into Portainer stacks."
print_status "INFO" "Deploy in order: database → services → modules → tools"
}
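# Optional sanity check (illustrative; not invoked by this script): ask docker
# compose to re-parse a generated stack file, so a copy/paste into Portainer
# starts from YAML that is known to be valid.
verify_stack_file() {
local stack_file="$1"
if docker compose -f "$stack_file" config --quiet 2>/dev/null; then
echo "OK: ${stack_file}"
else
echo "INVALID: ${stack_file}" >&2
return 1
fi
}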
# Run main function
main "$@"


@@ -1,326 +0,0 @@
#!/bin/bash
# ==============================================
# AzerothCore Service Status Script
# ==============================================
# This script displays the current status of all AzerothCore services
# Usage: ./status.sh [--watch] [--logs]
set -e
# Change to the project root directory (parent of scripts directory)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
cd "$PROJECT_ROOT"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# Options
WATCH_MODE=false
SHOW_LOGS=false
LOG_LINES=5
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
--watch|-w)
WATCH_MODE=true
shift
;;
--logs|-l)
SHOW_LOGS=true
shift
;;
--lines)
LOG_LINES="$2"
shift 2
;;
-h|--help)
echo "AzerothCore Service Status Script"
echo ""
echo "Usage: $0 [OPTIONS]"
echo ""
echo "OPTIONS:"
echo " --watch, -w Watch mode - continuously update status"
echo " --logs, -l Show recent log entries for each service"
echo " --lines N Number of log lines to show (default: 5)"
echo " --help, -h Show this help message"
echo ""
echo "EXAMPLES:"
echo " $0 Show current status"
echo " $0 --watch Continuously monitor status"
echo " $0 --logs Show status with recent logs"
exit 0
;;
*)
echo "Unknown option: $1"
echo "Use --help for usage information"
exit 1
;;
esac
done
# Function to print status with color
print_status() {
local level=$1
local message=$2
local timestamp=$(date '+%H:%M:%S')
case $level in
"SUCCESS"|"HEALTHY")
printf "${GREEN}✅ [%s] %s${NC}\n" "$timestamp" "$message"
;;
"WARNING"|"UNHEALTHY")
printf "${YELLOW}⚠️ [%s] %s${NC}\n" "$timestamp" "$message"
;;
"ERROR"|"FAILED")
printf "${RED}❌ [%s] %s${NC}\n" "$timestamp" "$message"
;;
"INFO")
printf "${BLUE} [%s] %s${NC}\n" "$timestamp" "$message"
;;
"HEADER")
printf "${MAGENTA}🚀 [%s] %s${NC}\n" "$timestamp" "$message"
;;
*)
printf "${CYAN}📋 [%s] %s${NC}\n" "$timestamp" "$message"
;;
esac
}
# Function to get container status with health
get_container_status() {
local container_name=$1
local status=""
local health=""
local uptime=""
if docker ps -a --format "table {{.Names}}" | grep -q "^${container_name}$"; then
status=$(docker inspect --format='{{.State.Status}}' "$container_name" 2>/dev/null || echo "unknown")
health=$(docker inspect --format='{{.State.Health.Status}}' "$container_name" 2>/dev/null || echo "no-health-check")
uptime=$(docker inspect --format='{{.State.StartedAt}}' "$container_name" 2>/dev/null | xargs -I {} date -d {} '+%H:%M:%S' 2>/dev/null || echo "unknown")
# Format status with color
case "$status" in
"running")
if [ "$health" = "healthy" ]; then
printf "${GREEN}✅${NC} Running (healthy) - Started: %s\n" "$uptime"
elif [ "$health" = "unhealthy" ]; then
printf "${RED}❌${NC} Running (unhealthy) - Started: %s\n" "$uptime"
elif [ "$health" = "starting" ]; then
printf "${YELLOW}⏳${NC} Running (starting) - Started: %s\n" "$uptime"
else
printf "${GREEN}✅${NC} Running - Started: %s\n" "$uptime"
fi
;;
"exited")
local exit_code=$(docker inspect --format='{{.State.ExitCode}}' "$container_name" 2>/dev/null || echo "unknown")
if [ "$exit_code" = "0" ]; then
printf "${YELLOW}⚠️${NC} Exited (0) - Completed successfully\n"
else
printf "${RED}❌${NC} Exited (%s) - Failed\n" "$exit_code"
fi
;;
"restarting")
printf "${YELLOW}🔄${NC} Restarting - Started: %s\n" "$uptime"
;;
"paused")
printf "${YELLOW}⏸️${NC} Paused - Started: %s\n" "$uptime"
;;
"created")
printf "${CYAN}📋${NC} Created (not started)\n"
;;
*)
printf "${RED}❌${NC} %s\n" "$status"
;;
esac
else
printf "${RED}❌${NC} Not found\n"
fi
}
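# Illustration only (defined here but not called): the "docker ps -a | grep"
# existence tests used throughout this script can also be written as a direct
# inspect call, which avoids format-string and grep-anchoring edge cases.
container_exists() {
docker container inspect "$1" >/dev/null 2>&1
}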
# Function to show service logs
show_service_logs() {
local container_name=$1
local service_display_name=$2
if docker ps -a --format "table {{.Names}}" | grep -q "^${container_name}$"; then
printf " ${CYAN}📄 Recent logs:${NC}\n"
docker logs "$container_name" --tail "$LOG_LINES" 2>/dev/null | sed 's/^/ /' || printf " ${YELLOW}(no logs available)${NC}\n"
echo ""
fi
}
# Function to display service status
display_service_status() {
local container_name=$1
local service_display_name=$2
local description=$3
printf "${CYAN}%-20s${NC} " "$service_display_name"
get_container_status "$container_name"
# Show image name if container exists
if docker ps -a --format "table {{.Names}}" | grep -q "^${container_name}$"; then
local image_name=$(docker inspect --format='{{.Config.Image}}' "$container_name" 2>/dev/null || echo "unknown")
printf " ${CYAN}🏷️ Image: %s${NC}\n" "$image_name"
fi
if [ "$SHOW_LOGS" = true ]; then
show_service_logs "$container_name" "$service_display_name"
fi
}
# Function to get database info
get_database_info() {
if docker ps --format "table {{.Names}}" | grep -q "^ac-mysql$"; then
local db_pass="${MYSQL_ROOT_PASSWORD:-azerothcore123}"
local db_count=$(docker exec ac-mysql mysql -u root -p"${db_pass}" -e "SHOW DATABASES;" 2>/dev/null | grep -E "^(acore_|mysql|information_schema|performance_schema)" | wc -l || echo "0")
local user_count=$(docker exec ac-mysql mysql -u root -p"${db_pass}" -D acore_auth -e "SELECT COUNT(*) FROM account;" 2>/dev/null | tail -1 || echo "0")
printf " ${CYAN}📊 Databases: %s | User accounts: %s${NC}\n" "$db_count" "$user_count"
fi
}
# Function to get client data progress
get_client_data_progress() {
if docker ps --format "table {{.Names}}" | grep -q "^ac-client-data$"; then
local last_progress=$(docker logs ac-client-data --tail 1 2>/dev/null | grep "Progress" || echo "")
if [ -n "$last_progress" ]; then
printf " ${CYAN}📊 %s${NC}\n" "$last_progress"
fi
fi
}
# Function to get enabled modules info
get_enabled_modules() {
printf "${CYAN}%-20s${NC} " "Enabled Modules"
# Check if modules are enabled by looking for environment files
local modules_enabled=false
local module_count=0
local modules_list=""
if [ -f "docker-compose-azerothcore-modules.env" ] || [ -f "docker-compose-azerothcore-modules-custom.env" ]; then
# Check for playerbots module
if docker ps --format "table {{.Names}}" | grep -q "^ac-modules$"; then
if docker logs ac-modules 2>/dev/null | grep -q "playerbot"; then
modules_list="playerbots"
module_count=$((module_count + 1))
modules_enabled=true
fi
fi
# Check for eluna module
if docker ps --format "table {{.Names}}" | grep -q "^ac-eluna$"; then
if [ -n "$modules_list" ]; then
modules_list="$modules_list, eluna"
else
modules_list="eluna"
fi
module_count=$((module_count + 1))
modules_enabled=true
fi
fi
if [ "$modules_enabled" = true ]; then
printf "${GREEN}✅${NC} %s modules active\n" "$module_count"
printf " ${CYAN}📦 Modules: %s${NC}\n" "$modules_list"
else
printf "${YELLOW}⚠️${NC} No modules enabled\n"
fi
}
# Main status display function
show_status() {
# Capture all output to a temp file, then display at once
local temp_file=$(mktemp)
{
print_status "HEADER" "AZEROTHCORE SERVICE STATUS"
echo ""
# Database Layer
printf "${MAGENTA}=== DATABASE LAYER ===${NC}\n"
display_service_status "ac-mysql" "MySQL Database" "Core database server"
if docker ps --format "table {{.Names}}" | grep -q "^ac-mysql$"; then
get_database_info
fi
display_service_status "ac-backup" "Backup Service" "Database backup automation"
display_service_status "ac-db-init" "DB Initializer" "Database initialization (one-time)"
display_service_status "ac-db-import" "DB Import" "Database import (one-time)"
echo ""
# Services Layer
printf "${MAGENTA}=== SERVICES LAYER ===${NC}\n"
display_service_status "ac-authserver" "Auth Server" "Player authentication"
display_service_status "ac-worldserver" "World Server" "Game world simulation"
display_service_status "ac-client-data" "Client Data" "Game data download/extraction"
if docker ps --format '{{.Names}}' | grep -q "^ac-client-data$"; then
get_client_data_progress
fi
echo ""
# Support Services
printf "${MAGENTA}=== SUPPORT SERVICES ===${NC}\n"
display_service_status "ac-modules" "Module Manager" "Server module management"
display_service_status "ac-eluna" "Eluna Engine" "Lua scripting engine"
display_service_status "ac-post-install" "Post-Install" "Configuration automation"
echo ""
# Enabled Modules
printf "${MAGENTA}=== MODULE STATUS ===${NC}\n"
get_enabled_modules
echo ""
# Network and ports
printf "${MAGENTA}=== NETWORK STATUS ===${NC}\n"
if docker network ls | grep -q azerothcore; then
printf "${CYAN}%-20s${NC} ${GREEN}✅${NC} Network 'azerothcore' exists\n" "Docker Network"
else
printf "${CYAN}%-20s${NC} ${RED}❌${NC} Network 'azerothcore' missing\n" "Docker Network"
fi
# Check if auth server port is accessible
if docker ps --format '{{.Names}}\t{{.Ports}}' | grep ac-authserver | grep -q "3784"; then
printf "${CYAN}%-20s${NC} ${GREEN}✅${NC} Port 3784 (Auth) exposed\n" "Auth Port"
else
printf "${CYAN}%-20s${NC} ${RED}❌${NC} Port 3784 (Auth) not exposed\n" "Auth Port"
fi
# Check if world server port is accessible
if docker ps --format '{{.Names}}\t{{.Ports}}' | grep ac-worldserver | grep -q "8215"; then
printf "${CYAN}%-20s${NC} ${GREEN}✅${NC} Port 8215 (World) exposed\n" "World Port"
else
printf "${CYAN}%-20s${NC} ${RED}❌${NC} Port 8215 (World) not exposed\n" "World Port"
fi
echo ""
printf "${CYAN}Last updated: $(date '+%Y-%m-%d %H:%M:%S')${NC}\n"
if [ "$WATCH_MODE" = true ]; then
echo ""
print_status "INFO" "Press Ctrl+C to exit watch mode"
fi
} > "$temp_file"
# Clear screen and display all content at once
clear 2>/dev/null || printf '\033[2J\033[H'
cat "$temp_file"
rm "$temp_file"
}
# Main execution
if [ "$WATCH_MODE" = true ]; then
while true; do
show_status
sleep 3
done
else
show_status
fi


@@ -1,195 +0,0 @@
#!/bin/bash
set -e
echo "🧪 Testing Enhanced Backup Detection Logic"
echo "========================================="
# Test configuration
RESTORE_STATUS_DIR="./storage/azerothcore/mysql-data"
RESTORE_SUCCESS_MARKER="$RESTORE_STATUS_DIR/.restore-completed"
RESTORE_FAILED_MARKER="$RESTORE_STATUS_DIR/.restore-failed"
BACKUP_DIRS="./storage/azerothcore/backups"
# Clean up old status markers
rm -f "$RESTORE_SUCCESS_MARKER" "$RESTORE_FAILED_MARKER"
echo "🔍 Test Environment:"
echo " Backup directory: $BACKUP_DIRS"
echo " Status directory: $RESTORE_STATUS_DIR"
echo ""
# Function to validate backup
validate_backup() {
local backup_path="$1"
echo "🔍 Validating backup: $backup_path"
if [ -f "$backup_path" ]; then
# Check if it's a valid SQL file
if head -10 "$backup_path" | grep -q "CREATE DATABASE\|INSERT INTO\|CREATE TABLE\|DROP DATABASE"; then
echo "✅ Backup appears valid"
return 0
fi
fi
echo "❌ Backup validation failed"
return 1
}
# Function to find and validate the most recent backup
find_latest_backup() {
echo "🔍 Searching for available backups..."
# Priority 1: Legacy single backup file
if [ -f "./storage/azerothcore/mysql-data/backup.sql" ]; then
if validate_backup "./storage/azerothcore/mysql-data/backup.sql"; then
echo "📦 Found valid legacy backup: backup.sql"
echo "./storage/azerothcore/mysql-data/backup.sql"
return 0
fi
fi
# Priority 2: Modern timestamped backups
if [ -d "$BACKUP_DIRS" ] && [ "$(ls -A $BACKUP_DIRS 2>/dev/null)" ]; then
# Try daily backups first
if [ -d "$BACKUP_DIRS/daily" ] && [ "$(ls -A $BACKUP_DIRS/daily 2>/dev/null)" ]; then
local latest_daily=$(ls -1t $BACKUP_DIRS/daily 2>/dev/null | head -n 1)
if [ -n "$latest_daily" ] && [ -d "$BACKUP_DIRS/daily/$latest_daily" ]; then
echo "📦 Found daily backup: $latest_daily"
echo "$BACKUP_DIRS/daily/$latest_daily"
return 0
fi
fi
# Try hourly backups second
if [ -d "$BACKUP_DIRS/hourly" ] && [ "$(ls -A $BACKUP_DIRS/hourly 2>/dev/null)" ]; then
local latest_hourly=$(ls -1t $BACKUP_DIRS/hourly 2>/dev/null | head -n 1)
if [ -n "$latest_hourly" ] && [ -d "$BACKUP_DIRS/hourly/$latest_hourly" ]; then
echo "📦 Found hourly backup: $latest_hourly"
echo "$BACKUP_DIRS/hourly/$latest_hourly"
return 0
fi
fi
# Try legacy timestamped backups
local latest_legacy=$(ls -1dt $BACKUP_DIRS/[0-9]* 2>/dev/null | head -n 1)
if [ -n "$latest_legacy" ] && [ -d "$latest_legacy" ]; then
echo "📦 Found legacy timestamped backup: $(basename $latest_legacy)"
echo "$latest_legacy"
return 0
fi
# Try individual SQL files in backup root
local sql_files=$(ls -1t $BACKUP_DIRS/*.sql $BACKUP_DIRS/*.sql.gz 2>/dev/null | head -n 1)
if [ -n "$sql_files" ]; then
echo "📦 Found individual SQL backup: $(basename $sql_files)"
echo "$sql_files"
return 0
fi
fi
echo " No valid backups found"
return 1
}
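# --- Hedged sketch (illustrative addition, not part of the original script) ---
# validate_backup above reads the file with head, so a compressed .sql.gz
# backup would always fail the content check. One way to handle both forms,
# assuming gzip is available on the host:
validate_backup_any() {
local backup_path="$1"
[ -f "$backup_path" ] || return 1
case "$backup_path" in
*.gz) gzip -dc "$backup_path" 2>/dev/null | head -n 10 ;;
*) head -n 10 "$backup_path" ;;
esac | grep -q "CREATE DATABASE\|INSERT INTO\|CREATE TABLE\|DROP DATABASE"
}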
# Function to simulate restore from timestamped backup directory
simulate_restore_from_directory() {
local backup_dir="$1"
echo "🔄 Simulating restore from backup directory: $backup_dir"
local restore_success=true
local file_count=0
# Check each database backup
for backup_file in "$backup_dir"/*.sql.gz "$backup_dir"/*.sql; do
if [ -f "$backup_file" ]; then
local db_name=$(basename "$backup_file" .sql.gz)
db_name=$(basename "$db_name" .sql)
echo "📥 Would restore database: $db_name from $(basename $backup_file)"
file_count=$((file_count + 1))
fi
done
if [ $file_count -gt 0 ]; then
echo "✅ Would successfully restore $file_count database(s)"
return 0
else
echo "❌ No database files found in backup directory"
return 1
fi
}
# Function to simulate restore from single SQL file
simulate_restore_from_file() {
local backup_file="$1"
echo "🔄 Simulating restore from backup file: $backup_file"
if [ -f "$backup_file" ]; then
echo "✅ Would successfully restore from $(basename $backup_file)"
return 0
else
echo "❌ Backup file not found: $backup_file"
return 1
fi
}
# Main backup detection and restoration logic
echo "🧪 Running backup detection test..."
echo ""
backup_restored=false
# The || fallback keeps set -e from aborting the script when no backup exists
backup_path=$(find_latest_backup) || backup_path=""
if [ -n "$backup_path" ]; then
echo ""
echo "📦 Latest backup found: $backup_path"
echo ""
if [ -f "$backup_path" ]; then
# Single file backup
if simulate_restore_from_file "$backup_path"; then
backup_restored=true
fi
elif [ -d "$backup_path" ]; then
# Directory backup
if simulate_restore_from_directory "$backup_path"; then
backup_restored=true
fi
fi
else
echo ""
echo "ℹ️ No backups found - would create fresh databases"
fi
echo ""
echo "📊 Test Results:"
echo " Backup detected: $([ -n "$backup_path" ] && echo "Yes" || echo "No")"
echo " Backup path: ${backup_path:-"None"}"
echo " Would restore: $backup_restored"
# Simulate status marker creation
if [ "$backup_restored" = true ]; then
echo "📝 Would create restoration success marker: $RESTORE_SUCCESS_MARKER"
mkdir -p "$(dirname $RESTORE_SUCCESS_MARKER)"
echo "$(date): [TEST] Backup successfully restored from $backup_path" > "$RESTORE_SUCCESS_MARKER"
echo "🚫 DB import would be SKIPPED - restoration completed successfully"
else
echo "📝 Would create restoration failed marker: $RESTORE_FAILED_MARKER"
mkdir -p "$(dirname $RESTORE_FAILED_MARKER)"
echo "$(date): [TEST] No backup restored - fresh databases would be created" > "$RESTORE_FAILED_MARKER"
echo "▶️ DB import would PROCEED - fresh databases need population"
fi
echo ""
echo "🏁 Test Complete!"
echo ""
echo "📁 Created status marker files:"
ls -la "$RESTORE_STATUS_DIR"/.restore-* "$RESTORE_STATUS_DIR"/.import-* 2>/dev/null || echo " No marker files found"
echo ""
echo "📝 Status marker contents:"
for marker in "$RESTORE_SUCCESS_MARKER" "$RESTORE_FAILED_MARKER"; do
if [ -f "$marker" ]; then
echo " $(basename $marker): $(cat $marker)"
fi
done
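# --- Hedged sketch (illustrative addition, not part of the original script) ---
# The "latest backup" selection above leans on ls -1t, which sorts entries by
# modification time, newest first. The core of that pattern as a standalone
# helper:
newest_entry() {
# Print the most recently modified entry in a directory, if any.
ls -1t "$1" 2>/dev/null | head -n 1
}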


@@ -1,47 +0,0 @@
#!/bin/bash
# Test script to verify acore_playerbots database detection
# This script simulates the database detection logic without running an actual backup
set -e
# Configuration from environment variables
MYSQL_HOST=${MYSQL_HOST:-ac-mysql}
MYSQL_PORT=${MYSQL_PORT:-3306}
MYSQL_USER=${MYSQL_USER:-root}
MYSQL_PASSWORD=${MYSQL_PASSWORD:-password}
echo "=== Testing AzerothCore Database Detection ==="
echo ""
# Core databases
DATABASES=("acore_auth" "acore_world" "acore_characters")
echo "Core databases: ${DATABASES[@]}"
# Test if acore_playerbots database exists
echo ""
echo "Testing for acore_playerbots database..."
if mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD -e "USE acore_playerbots;" 2>/dev/null; then
DATABASES+=("acore_playerbots")
echo "✅ acore_playerbots database found - would be included in backup"
else
echo "ℹ️ acore_playerbots database not found - would be skipped (this is normal for some installations)"
fi
echo ""
echo "Final database list that would be backed up: ${DATABASES[@]}"
echo ""
# Test connection to each database that would be backed up
echo "Testing connection to each database:"
for db in "${DATABASES[@]}"; do
if mysql -h$MYSQL_HOST -P$MYSQL_PORT -u$MYSQL_USER -p$MYSQL_PASSWORD -e "USE $db; SELECT 'OK' as status;" 2>/dev/null | grep -q OK; then
echo "✅ $db: Connection successful"
else
echo "❌ $db: Connection failed"
fi
done
echo ""
echo "=== Database Detection Test Complete ==="
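# --- Hedged sketch (illustrative addition, not part of the original script) ---
# The detection above appends acore_playerbots to the array only when a probe
# succeeds. The same pattern with an injectable probe command, so the logic
# can be exercised without a running MySQL server:
build_db_list() {
local probe="$1"; shift
local dbs=("$@")
if "$probe" acore_playerbots; then
dbs+=("acore_playerbots")
fi
echo "${dbs[@]}"
}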


@@ -1,225 +0,0 @@
#!/bin/bash
# ==============================================
# TEST LOCAL WORLDSERVER DEPLOYMENT SCRIPT
# ==============================================
# This script tests worldserver performance with local game files
# vs. external volume mount
set -e # Exit on any error
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
local status=$1
local message=$2
case $status in
"INFO")
echo -e "${BLUE}ℹ️ ${message}${NC}"
;;
"SUCCESS")
echo -e "${GREEN}✅ ${message}${NC}"
;;
"WARNING")
echo -e "${YELLOW}⚠️ ${message}${NC}"
;;
"ERROR")
echo -e "${RED}${message}${NC}"
;;
"HEADER")
echo -e "\n${MAGENTA}=== ${message} ===${NC}"
;;
"TEST")
echo -e "${YELLOW}🧪 ${message}${NC}"
;;
esac
}
# Parse command line arguments
CLEANUP=false
LOGS=false
while [[ $# -gt 0 ]]; do
case $1 in
--cleanup)
CLEANUP=true
shift
;;
--logs)
LOGS=true
shift
;;
-h|--help)
echo "Test Local Worldserver Deployment Script"
echo ""
echo "Usage: $0 [OPTIONS]"
echo ""
echo "OPTIONS:"
echo " --cleanup Stop and remove test worldserver"
echo " --logs Follow test worldserver logs"
echo " --help Show this help message"
echo ""
echo "EXAMPLES:"
echo " $0 # Deploy test worldserver"
echo " $0 --logs # Follow logs of running test"
echo " $0 --cleanup # Clean up test deployment"
exit 0
;;
*)
echo "Unknown option $1"
echo "Use --help for usage information"
exit 1
;;
esac
done
# Change to parent directory for compose commands
cd "$(dirname "$0")/.."
if [ "$CLEANUP" = true ]; then
print_status "HEADER" "CLEANING UP TEST WORLDSERVER"
print_status "INFO" "Stopping test worldserver..."
docker-compose --env-file docker-compose-test-worldserver.env -f docker-compose-test-worldserver.yml down
print_status "INFO" "Removing test container if exists..."
docker rm -f ac-worldserver-test 2>/dev/null || true
print_status "SUCCESS" "Test cleanup completed"
exit 0
fi
if [ "$LOGS" = true ]; then
print_status "HEADER" "FOLLOWING TEST WORLDSERVER LOGS"
docker logs ac-worldserver-test -f
exit 0
fi
# Main deployment
print_status "HEADER" "DEPLOYING TEST WORLDSERVER WITH LOCAL GAME FILES"
# Check if docker is available
if ! command -v docker &> /dev/null; then
print_status "ERROR" "Docker is not installed or not in PATH"
exit 1
fi
# Check if main database is running
if ! docker ps | grep ac-mysql > /dev/null; then
print_status "ERROR" "Main database (ac-mysql) is not running"
print_status "INFO" "Please start the database layer first:"
print_status "INFO" " docker-compose --env-file docker-compose-azerothcore-database.env -f docker-compose-azerothcore-database.yml up -d"
exit 1
fi
# Check if authserver is running
if ! docker ps | grep ac-authserver > /dev/null; then
print_status "ERROR" "Auth server (ac-authserver) is not running"
print_status "INFO" "Please start the services layer first (or at least authserver):"
print_status "INFO" " docker-compose --env-file docker-compose-azerothcore-services.env -f docker-compose-azerothcore-services.yml up -d ac-authserver"
exit 1
fi
# Check if regular worldserver is running (warn about port conflicts)
if docker ps | grep ac-worldserver | grep -v test > /dev/null; then
print_status "WARNING" "Regular worldserver is running - test uses different ports"
print_status "INFO" "Test worldserver ports: 8216 (world), 7779 (SOAP)"
print_status "INFO" "Regular worldserver ports: 8215 (world), 7778 (SOAP)"
fi
print_status "INFO" "Prerequisites check passed"
# Check for cached files
if [ -f "storage/azerothcore/cache-test/client-data-version.txt" ]; then
CACHED_VERSION=$(cat storage/azerothcore/cache-test/client-data-version.txt 2>/dev/null)
print_status "INFO" "Found cached game files (version: $CACHED_VERSION)"
print_status "SUCCESS" "No internet download needed - using cached files!"
print_status "INFO" "Expected startup time: 5-10 minutes (extraction only)"
else
print_status "WARNING" "No cached files found - will download ~15GB from internet"
print_status "INFO" "Expected startup time: 20-30 minutes (download + extraction)"
fi
# Start test worldserver
print_status "TEST" "Starting test worldserver with cached local game files..."
print_status "INFO" "Cache location: storage/azerothcore/cache-test/"
print_status "INFO" "Game files will be copied to local container storage for performance testing"
print_status "INFO" "Test worldserver will be available on port 8216"
# Record start time
START_TIME=$(date +%s)
print_status "INFO" "Deployment started at: $(date)"
# Start the test container
docker-compose --env-file docker-compose-test-worldserver.env -f docker-compose-test-worldserver.yml up -d
print_status "SUCCESS" "Test worldserver container started"
print_status "INFO" "Container name: ac-worldserver-test"
print_status "HEADER" "MONITORING TEST DEPLOYMENT"
print_status "INFO" "Following logs for the first few minutes..."
print_status "INFO" "Press Ctrl+C to stop following logs (container will continue running)"
print_status "INFO" ""
print_status "TEST" "=== LIVE LOG OUTPUT ==="
# Follow logs for a bit
timeout 300 docker logs ac-worldserver-test -f 2>/dev/null || true
print_status "INFO" ""
print_status "HEADER" "TEST DEPLOYMENT STATUS"
# Check if container is still running
if docker ps | grep ac-worldserver-test > /dev/null; then
print_status "SUCCESS" "Test container is running"
# Calculate elapsed time
CURRENT_TIME=$(date +%s)
ELAPSED=$((CURRENT_TIME - START_TIME))
ELAPSED_MIN=$((ELAPSED / 60))
print_status "INFO" "Elapsed time: ${ELAPSED_MIN} minutes"
print_status "INFO" "Container status: $(docker ps --format '{{.Status}}' --filter name=ac-worldserver-test)"
print_status "HEADER" "USEFUL COMMANDS"
echo -e "${BLUE}Monitor logs:${NC}"
echo " $0 --logs"
echo " docker logs ac-worldserver-test -f"
echo ""
echo -e "${BLUE}Check container status:${NC}"
echo " docker ps | grep test"
echo " docker exec ac-worldserver-test ps aux | grep worldserver"
echo ""
echo -e "${BLUE}Check game data (local in container):${NC}"
echo " docker exec ac-worldserver-test ls -la /azerothcore/data/"
echo " docker exec ac-worldserver-test du -sh /azerothcore/data/*"
echo ""
echo -e "${BLUE}Check cached files (persistent):${NC}"
echo " ls -la storage/azerothcore/cache-test/"
echo " du -sh storage/azerothcore/cache-test/*"
echo " cat storage/azerothcore/cache-test/client-data-version.txt"
echo ""
echo -e "${BLUE}Connect to test server:${NC}"
echo " Game Port: localhost:8216"
echo " SOAP Port: localhost:7779"
echo ""
echo -e "${BLUE}Performance comparison:${NC}"
echo " docker stats ac-worldserver ac-worldserver-test --no-stream"
echo ""
echo -e "${BLUE}Cleanup test:${NC}"
echo " $0 --cleanup"
echo " rm -rf storage/azerothcore/cache-test/ # Remove cache"
else
print_status "ERROR" "Test container has stopped or failed"
print_status "INFO" "Check logs for details:"
print_status "INFO" " docker logs ac-worldserver-test"
exit 1
fi
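# --- Hedged sketch (illustrative addition, not part of the original script) ---
# The elapsed-time report above truncates to whole minutes; a small helper
# that keeps the leftover seconds as well:
format_elapsed() {
local total=$1
printf '%dm %02ds' $((total / 60)) $((total % 60))
}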


@@ -1,96 +0,0 @@
#!/bin/bash
# ==============================================
# Playerbots Toggle Script
# ==============================================
# Simple script to enable/disable playerbots without rebuilding
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
local status=$1
local message=$2
case $status in
"INFO")
echo -e "${BLUE}ℹ️ ${message}${NC}"
;;
"SUCCESS")
echo -e "${GREEN}✅ ${message}${NC}"
;;
"WARNING")
echo -e "${YELLOW}⚠️ ${message}${NC}"
;;
"ERROR")
echo -e "${RED}${message}${NC}"
;;
esac
}
# Change to project root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
cd "$PROJECT_ROOT"
ENV_FILE="docker-compose-azerothcore-services.env"
if [ ! -f "$ENV_FILE" ]; then
print_status "ERROR" "Environment file not found: $ENV_FILE"
exit 1
fi
# Check current state
current_state=$(grep "^MODULE_PLAYERBOTS=" "$ENV_FILE" | cut -d'=' -f2)
current_authserver=$(grep "^AC_AUTHSERVER_IMAGE=" "$ENV_FILE" | cut -d'=' -f2)
if [[ "$current_authserver" == *"playerbots"* ]]; then
is_playerbots_active=true
else
is_playerbots_active=false
fi
print_status "INFO" "CURRENT PLAYERBOTS STATUS"
echo "Module Setting: MODULE_PLAYERBOTS=$current_state"
echo "Active Images: $(if $is_playerbots_active; then echo "Playerbots"; else echo "Standard AzerothCore"; fi)"
echo ""
if [ "$1" = "status" ]; then
exit 0
fi
# Toggle logic
if $is_playerbots_active; then
print_status "WARNING" "Disabling playerbots (switching to standard AzerothCore images)"
# Switch to standard images
sed -i.bak \
-e 's/^AC_AUTHSERVER_IMAGE=uprightbass360.*/AC_AUTHSERVER_IMAGE=acore\/ac-wotlk-authserver:14.0.0-dev/' \
-e 's/^AC_WORLDSERVER_IMAGE=uprightbass360.*/AC_WORLDSERVER_IMAGE=acore\/ac-wotlk-worldserver:14.0.0-dev/' \
-e 's/^MODULE_PLAYERBOTS=1/MODULE_PLAYERBOTS=0/' \
"$ENV_FILE"
print_status "SUCCESS" "Playerbots disabled"
else
print_status "INFO" "Enabling playerbots (switching to pre-built playerbots images)"
# Switch to playerbots images
sed -i.bak \
-e 's/^AC_AUTHSERVER_IMAGE=acore.*/AC_AUTHSERVER_IMAGE=uprightbass360\/azerothcore-wotlk-playerbots:authserver-Playerbot/' \
-e 's/^AC_WORLDSERVER_IMAGE=acore.*/AC_WORLDSERVER_IMAGE=uprightbass360\/azerothcore-wotlk-playerbots:worldserver-Playerbot/' \
-e 's/^MODULE_PLAYERBOTS=0/MODULE_PLAYERBOTS=1/' \
"$ENV_FILE"
print_status "SUCCESS" "Playerbots enabled"
fi
print_status "INFO" "To apply changes, redeploy the services:"
echo " docker compose --env-file $ENV_FILE -f docker-compose-azerothcore-services.yml up -d"
echo ""
print_status "INFO" "No rebuild required - using pre-built images!"
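# --- Hedged sketch (illustrative addition, not part of the original script) ---
# The toggle above is a set of anchored sed substitutions on the env file.
# The flag flip in isolation, safe to try on a scratch copy first:
toggle_playerbots_flag() {
local file="$1"
if grep -q '^MODULE_PLAYERBOTS=1' "$file"; then
sed -i.bak 's/^MODULE_PLAYERBOTS=1/MODULE_PLAYERBOTS=0/' "$file"
else
sed -i.bak 's/^MODULE_PLAYERBOTS=0/MODULE_PLAYERBOTS=1/' "$file"
fi
}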


@@ -1,96 +0,0 @@
#!/bin/bash
# AzerothCore Configuration Update Script
# Updates .conf files with production database settings
set -e
echo "🔧 AzerothCore Configuration Update Script"
echo "=========================================="
# Load environment variables from env file if it exists
if [ -f "docker-compose-azerothcore-services.env" ]; then
echo "📂 Loading environment from docker-compose-azerothcore-services.env"
set -a # automatically export all variables
source docker-compose-azerothcore-services.env
set +a # turn off automatic export
echo ""
fi
# Configuration variables from environment
MYSQL_HOST="${MYSQL_HOST:-ac-mysql}"
MYSQL_PORT="${MYSQL_PORT:-3306}"
MYSQL_USER="${MYSQL_USER:-root}"
MYSQL_ROOT_PASSWORD="${MYSQL_ROOT_PASSWORD:-azerothcore123}"
DB_AUTH_NAME="${DB_AUTH_NAME:-acore_auth}"
DB_WORLD_NAME="${DB_WORLD_NAME:-acore_world}"
DB_CHARACTERS_NAME="${DB_CHARACTERS_NAME:-acore_characters}"
# Configuration file paths
CONFIG_DIR="${STORAGE_PATH}/config"
AUTHSERVER_CONF="${CONFIG_DIR}/authserver.conf"
WORLDSERVER_CONF="${CONFIG_DIR}/worldserver.conf"
echo "📍 Configuration directory: ${CONFIG_DIR}"
# Check if configuration files exist
if [ ! -f "${AUTHSERVER_CONF}" ]; then
echo "❌ Error: ${AUTHSERVER_CONF} not found"
exit 1
fi
if [ ! -f "${WORLDSERVER_CONF}" ]; then
echo "❌ Error: ${WORLDSERVER_CONF} not found"
exit 1
fi
echo "✅ Configuration files found"
# Backup original files
echo "💾 Creating backups..."
cp "${AUTHSERVER_CONF}" "${AUTHSERVER_CONF}.backup.$(date +%Y%m%d_%H%M%S)"
cp "${WORLDSERVER_CONF}" "${WORLDSERVER_CONF}.backup.$(date +%Y%m%d_%H%M%S)"
# Update AuthServer configuration
echo "🔧 Updating AuthServer configuration..."
sed -i "s/^LoginDatabaseInfo = .*/LoginDatabaseInfo = \"${MYSQL_HOST};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}\"/" "${AUTHSERVER_CONF}"
# Verify AuthServer update
AUTH_UPDATED=$(grep "LoginDatabaseInfo" "${AUTHSERVER_CONF}" | grep "${MYSQL_HOST}" || true) # || true keeps set -e from aborting before the error branch
if [ -n "${AUTH_UPDATED}" ]; then
echo "✅ AuthServer configuration updated successfully"
echo " ${AUTH_UPDATED}"
else
echo "❌ Failed to update AuthServer configuration"
exit 1
fi
# Update WorldServer configuration
echo "🔧 Updating WorldServer configuration..."
sed -i "s/^LoginDatabaseInfo = .*/LoginDatabaseInfo = \"${MYSQL_HOST};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}\"/" "${WORLDSERVER_CONF}"
sed -i "s/^WorldDatabaseInfo = .*/WorldDatabaseInfo = \"${MYSQL_HOST};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_WORLD_NAME}\"/" "${WORLDSERVER_CONF}"
sed -i "s/^CharacterDatabaseInfo = .*/CharacterDatabaseInfo = \"${MYSQL_HOST};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_CHARACTERS_NAME}\"/" "${WORLDSERVER_CONF}"
# Verify WorldServer updates
# || true keeps set -e from aborting before the error branch below can report failure
LOGIN_UPDATED=$(grep "^LoginDatabaseInfo" "${WORLDSERVER_CONF}" | grep "${MYSQL_HOST}" || true)
WORLD_UPDATED=$(grep "^WorldDatabaseInfo" "${WORLDSERVER_CONF}" | grep "${MYSQL_HOST}" || true)
CHARACTER_UPDATED=$(grep "^CharacterDatabaseInfo" "${WORLDSERVER_CONF}" | grep "${MYSQL_HOST}" || true)
if [ -n "${LOGIN_UPDATED}" ] && [ -n "${WORLD_UPDATED}" ] && [ -n "${CHARACTER_UPDATED}" ]; then
echo "✅ WorldServer configuration updated successfully"
echo " Login: ${LOGIN_UPDATED}"
echo " World: ${WORLD_UPDATED}"
echo " Character: ${CHARACTER_UPDATED}"
else
echo "❌ Failed to update WorldServer configuration"
exit 1
fi
echo ""
echo "🎉 Configuration update completed successfully!"
echo "📋 Updated files:"
echo " - ${AUTHSERVER_CONF}"
echo " - ${WORLDSERVER_CONF}"
echo ""
echo "💡 Restart authserver and worldserver services to apply changes:"
echo " docker compose -f docker-compose-azerothcore-services.yml restart ac-authserver ac-worldserver"
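# --- Hedged sketch (illustrative addition, not part of the original script) ---
# Each update above rewrites a whole 'Key = value' line with sed. The same
# idea as a reusable helper; note the | delimiter, which avoids escaping any
# / that may appear in the value:
set_conf_value() {
local file="$1" key="$2" value="$3"
sed -i.bak "s|^${key} = .*|${key} = \"${value}\"|" "$file"
}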


@@ -1,112 +0,0 @@
#!/bin/bash
# AzerothCore Realmlist Update Script
# Updates the realmlist table with production server address and port
set -e
echo "🌐 AzerothCore Realmlist Update Script"
echo "======================================"
# Store any pre-existing environment variables
SAVED_SERVER_ADDRESS="$SERVER_ADDRESS"
SAVED_REALM_PORT="$REALM_PORT"
# Load environment variables from env file if it exists
if [ -f "docker-compose-azerothcore-services.env" ]; then
echo "📂 Loading environment from docker-compose-azerothcore-services.env"
set -a # automatically export all variables
source docker-compose-azerothcore-services.env
set +a # turn off automatic export
fi
# Restore command line variables if they were set
if [ -n "$SAVED_SERVER_ADDRESS" ]; then
SERVER_ADDRESS="$SAVED_SERVER_ADDRESS"
echo "🔧 Using command line SERVER_ADDRESS: $SERVER_ADDRESS"
fi
if [ -n "$SAVED_REALM_PORT" ]; then
REALM_PORT="$SAVED_REALM_PORT"
echo "🔧 Using command line REALM_PORT: $REALM_PORT"
fi
# Configuration variables from environment
MYSQL_HOST="${MYSQL_HOST:-ac-mysql}"
MYSQL_PORT="${MYSQL_PORT:-3306}"
MYSQL_USER="${MYSQL_USER:-root}"
MYSQL_ROOT_PASSWORD="${MYSQL_ROOT_PASSWORD:-azerothcore123}"
DB_AUTH_NAME="${DB_AUTH_NAME:-acore_auth}"
# Server configuration - Loaded from environment file or command line
SERVER_ADDRESS="${SERVER_ADDRESS:-127.0.0.1}"
SERVER_PORT="${REALM_PORT:-8085}"
REALM_ID="${REALM_ID:-1}"
echo "📍 Database: ${MYSQL_HOST}:${MYSQL_PORT}/${DB_AUTH_NAME}"
echo "🌐 Server Address: ${SERVER_ADDRESS}:${SERVER_PORT}"
echo "🏰 Realm ID: ${REALM_ID}"
# Test database connection
echo "🔌 Testing database connection..."
# Run the command in the if condition so a failure is handled here instead
# of tripping set -e before the error branch can run.
if docker exec ac-mysql mysql -u "${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" "${DB_AUTH_NAME}" -e "SELECT 1;" > /dev/null 2>&1; then
echo "✅ Database connection successful"
else
echo "❌ Database connection failed"
exit 1
fi
# Check current realmlist entries
echo "📋 Current realmlist entries:"
docker exec ac-mysql mysql -u "${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" "${DB_AUTH_NAME}" -e "SELECT id, name, address, localAddress, localSubnetMask, port, icon, flag, timezone, allowedSecurityLevel, population, gamebuild FROM realmlist;"
# Check if realm ID exists before updating
echo "🔍 Checking if realm ID ${REALM_ID} exists..."
REALM_EXISTS=$(docker exec ac-mysql mysql -u "${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" "${DB_AUTH_NAME}" -se "SELECT COUNT(*) FROM realmlist WHERE id = ${REALM_ID};")
if [ "${REALM_EXISTS}" -eq 0 ]; then
echo "❌ Error: Realm ID ${REALM_ID} does not exist in realmlist table"
echo "💡 Available realm IDs:"
docker exec ac-mysql mysql -u "${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" "${DB_AUTH_NAME}" -e "SELECT id, name FROM realmlist;"
exit 1
fi
echo "✅ Realm ID ${REALM_ID} found"
# Check if update is needed (compare current values)
CURRENT_VALUES=$(docker exec ac-mysql mysql -u "${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" "${DB_AUTH_NAME}" -se "SELECT CONCAT(address, ':', port) FROM realmlist WHERE id = ${REALM_ID};")
TARGET_VALUES="${SERVER_ADDRESS}:${SERVER_PORT}"
if [ "${CURRENT_VALUES}" = "${TARGET_VALUES}" ]; then
echo "ℹ️ Values already match target (${TARGET_VALUES}) - no update needed"
echo "✅ Realmlist is already configured correctly"
else
echo "🔧 Updating existing realm ID ${REALM_ID} from ${CURRENT_VALUES} to ${TARGET_VALUES}..."
# Run the UPDATE in the if condition so a failure reaches the error branch
# below instead of tripping set -e first.
if docker exec ac-mysql mysql -u "${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" "${DB_AUTH_NAME}" -e "UPDATE realmlist SET address = '${SERVER_ADDRESS}', port = ${SERVER_PORT} WHERE id = ${REALM_ID};"; then
# Verify the change was applied
NEW_VALUES=$(docker exec ac-mysql mysql -u "${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" "${DB_AUTH_NAME}" -se "SELECT CONCAT(address, ':', port) FROM realmlist WHERE id = ${REALM_ID};")
if [ "${NEW_VALUES}" = "${TARGET_VALUES}" ]; then
echo "✅ Realmlist update successful (${CURRENT_VALUES}${NEW_VALUES})"
else
echo "❌ Update failed - values did not change (${NEW_VALUES})"
exit 1
fi
else
echo "❌ Failed to execute UPDATE statement"
exit 1
fi
fi
# Verify the update
echo "📋 Updated realmlist entries:"
docker exec ac-mysql mysql -u "${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" "${DB_AUTH_NAME}" -e "SELECT id, name, address, localAddress, localSubnetMask, port, icon, flag, timezone, allowedSecurityLevel, population, gamebuild FROM realmlist WHERE id = ${REALM_ID};"
echo ""
echo "🎉 Realmlist update completed successfully!"
echo "📋 Summary:"
echo " - Realm ID: ${REALM_ID}"
echo " - Address: ${SERVER_ADDRESS}"
echo " - Port: ${SERVER_PORT}"
echo ""
echo "💡 Players should now connect to: ${SERVER_ADDRESS}:${SERVER_PORT}"


@@ -1,179 +0,0 @@
#!/bin/bash
# ==============================================
# Wait for Client Data and Start World Server
# ==============================================
# This script monitors the client data download and automatically starts
# the world server once the data is ready
# Usage: ./wait-and-start-worldserver.sh
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
local status=$1
local message=$2
case $status in
"INFO")
echo -e "${BLUE}ℹ️ ${message}${NC}"
;;
"SUCCESS")
echo -e "${GREEN}✅ ${message}${NC}"
;;
"WARNING")
echo -e "${YELLOW}⚠️ ${message}${NC}"
;;
"ERROR")
echo -e "${RED}${message}${NC}"
;;
"HEADER")
echo -e "\n${BLUE}=== ${message} ===${NC}"
;;
esac
}
print_status "HEADER" "WAITING FOR CLIENT DATA AND STARTING WORLD SERVER"
# Check if distrobox-host-exec is available
if ! command -v distrobox-host-exec &> /dev/null; then
print_status "ERROR" "distrobox-host-exec is not available"
exit 1
fi
# Check if client-data container exists
if ! distrobox-host-exec podman ps -a --format '{{.Names}}' 2>/dev/null | grep -q "^ac-client-data$"; then
print_status "ERROR" "ac-client-data container not found"
print_status "INFO" "Run the deployment script first: ./scripts/deploy-and-check-distrobox.sh"
exit 1
fi
# Check if client data is already complete
print_status "INFO" "Checking client data status..."
if distrobox-host-exec podman logs ac-client-data 2>&1 | grep -q "Game data setup complete"; then
print_status "SUCCESS" "Client data already complete!"
else
# Monitor the download progress
print_status "INFO" "Client data download in progress..."
print_status "INFO" "Monitoring progress (Ctrl+C to stop monitoring, script will continue)..."
LAST_LINE=""
CHECK_COUNT=0
while true; do
# Check if container is still running or has completed
CONTAINER_STATUS=$(distrobox-host-exec podman ps -a --format '{{.Names}} {{.Status}}' 2>/dev/null | grep "^ac-client-data" | awk '{print $2}')
if [[ "$CONTAINER_STATUS" == "Exited" ]]; then
# Container finished, check if successful
EXIT_CODE=$(distrobox-host-exec podman inspect ac-client-data --format='{{.State.ExitCode}}' 2>/dev/null)
if [ "$EXIT_CODE" = "0" ]; then
print_status "SUCCESS" "Client data download and extraction completed!"
break
else
print_status "ERROR" "Client data container failed with exit code $EXIT_CODE"
print_status "INFO" "Check logs: distrobox-host-exec podman logs ac-client-data"
exit 1
fi
fi
# Show progress every 30 seconds
if [ $((CHECK_COUNT % 6)) -eq 0 ]; then
# Get latest progress line
CURRENT_LINE=$(distrobox-host-exec podman logs --tail 5 ac-client-data 2>&1 | grep -E "(📊|📂|📁|✅|🎉)" | tail -1)
if [ "$CURRENT_LINE" != "$LAST_LINE" ] && [ -n "$CURRENT_LINE" ]; then
echo "$CURRENT_LINE"
LAST_LINE="$CURRENT_LINE"
fi
fi
CHECK_COUNT=$((CHECK_COUNT + 1)) # ((CHECK_COUNT++)) would trip set -e when the old value is 0
sleep 5
done
fi
# Verify data directories exist
print_status "INFO" "Verifying client data directories..."
DATA_DIRS=("maps" "vmaps" "mmaps" "dbc")
MISSING_DIRS=()
for dir in "${DATA_DIRS[@]}"; do
if [ -d "storage/azerothcore/data/$dir" ] && [ -n "$(ls -A storage/azerothcore/data/$dir 2>/dev/null)" ]; then
DIR_SIZE=$(du -sh storage/azerothcore/data/$dir 2>/dev/null | cut -f1)
print_status "SUCCESS" "$dir directory exists ($DIR_SIZE)"
else
print_status "ERROR" "$dir directory missing or empty"
MISSING_DIRS+=("$dir")
fi
done
if [ ${#MISSING_DIRS[@]} -gt 0 ]; then
print_status "ERROR" "Cannot start world server - missing data directories"
exit 1
fi
# Check if world server is already running
if distrobox-host-exec podman ps --format '{{.Names}}' 2>/dev/null | grep -q "^ac-worldserver$"; then
print_status "WARNING" "World server is already running"
print_status "INFO" "To restart: distrobox-host-exec podman restart ac-worldserver"
exit 0
fi
# Remove any stopped world server container
distrobox-host-exec podman rm -f ac-worldserver 2>/dev/null || true
# Start the world server
print_status "INFO" "Starting World Server..."
distrobox-host-exec bash -c "podman run -d --name ac-worldserver --network azerothcore --privileged -t \
-p 8215:8085 -p 7778:7878 \
-e AC_LOGIN_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_auth' \
-e AC_WORLD_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_world' \
-e AC_CHARACTER_DATABASE_INFO='ac-mysql;3306;root;azerothcore123;acore_characters' \
-e AC_UPDATES_ENABLE_DATABASES=0 \
-e AC_BIND_IP='0.0.0.0' \
-e AC_DATA_DIR='/azerothcore/data' \
-e AC_SOAP_PORT=7878 \
-e AC_PROCESS_PRIORITY=0 \
-e PLAYERBOT_ENABLED=1 \
-e PLAYERBOT_MAX_BOTS=40 \
-e AC_LOG_LEVEL=2 \
-v ./storage/azerothcore/data:/azerothcore/data \
-v ./storage/azerothcore/config:/azerothcore/env/dist/etc \
-v ./storage/azerothcore/logs:/azerothcore/logs \
-v ./storage/azerothcore/modules:/azerothcore/modules \
-v ./storage/azerothcore/lua_scripts:/azerothcore/lua_scripts \
--cap-add SYS_NICE \
--restart unless-stopped \
docker.io/acore/ac-wotlk-worldserver:14.0.0-dev" 2>&1 | grep -v "level=error.*graph driver"
print_status "INFO" "Waiting for world server to start..."
sleep 10
# Check if world server is running
if distrobox-host-exec podman ps --format '{{.Names}}' 2>/dev/null | grep -q "^ac-worldserver$"; then
print_status "SUCCESS" "World server started successfully!"
# Show initial logs
print_status "INFO" "Initial world server logs:"
distrobox-host-exec podman logs --tail 15 ac-worldserver 2>&1 | grep -v "level=error.*graph driver" || true
print_status "HEADER" "WORLD SERVER STATUS"
print_status "SUCCESS" "🎮 World Server: Running on port 8215"
print_status "SUCCESS" "🔧 SOAP API: Available on port 7778"
print_status "INFO" "Monitor logs: distrobox-host-exec podman logs -f ac-worldserver"
print_status "INFO" "Connect with WoW client: Set realmlist to 127.0.0.1:8215"
else
print_status "ERROR" "World server failed to start"
print_status "INFO" "Check logs: distrobox-host-exec podman logs ac-worldserver"
exit 1
fi
print_status "SUCCESS" "🎉 AzerothCore is now fully operational!"
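# --- Hedged sketch (illustrative addition, not part of the original script) ---
# The monitoring loops above poll a condition on a fixed interval. A generic
# form with an attempt limit, reusable for container or port checks
# (the example command name below is hypothetical):
wait_for() {
local attempts=$1 interval=$2; shift 2
local i
for i in $(seq 1 "$attempts"); do
if "$@"; then
return 0
fi
sleep "$interval"
done
return 1
}
# Example: wait_for 12 5 some_check_command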