1 Commits

Author: uprightbass360 | SHA1: 3b11e23546 | Message: refactor and compress code | Date: 2025-12-02 21:43:05 -05:00
19 changed files with 2181 additions and 200 deletions

CLEANUP_TODO.md (new file, +372 lines)

@@ -0,0 +1,372 @@
# AzerothCore RealmMaster - Cleanup TODO
## Overview
This document outlines systematic cleanup opportunities using the proven methodology from our successful consolidation. Each phase must be validated and tested incrementally without breaking existing functionality.
## Methodology
1. **Analyze** - Map dependencies and usage patterns
2. **Consolidate** - Create shared libraries/templates
3. **Replace** - Update scripts to use centralized versions
4. **Test** - Validate each change incrementally
5. **Document** - Track changes and dependencies
---
## Phase 1: Complete Script Function Consolidation
**Priority: HIGH** | **Risk: LOW** | **Impact: HIGH**
### Status
**Completed**: Master scripts (deploy.sh, build.sh, cleanup.sh) + 4 critical scripts
🔄 **Remaining**: 10+ scripts with duplicate logging functions
### Remaining Scripts to Consolidate
```bash
# Root level scripts
./changelog.sh # Has: info(), warn(), err()
./update-latest.sh # Has: info(), ok(), warn(), err()
# Backup system scripts
./scripts/bash/backup-export.sh # Has: info(), ok(), warn(), err()
./scripts/bash/backup-import.sh # Has: info(), ok(), warn(), err()
# Database scripts
./scripts/bash/db-guard.sh # Has: info(), warn(), err()
./scripts/bash/db-health-check.sh # Has: info(), ok(), warn(), err()
# Module & verification scripts
./scripts/bash/verify-sql-updates.sh # Has: info(), warn(), err()
./scripts/bash/manage-modules.sh # Has: info(), ok(), warn(), err()
./scripts/bash/repair-storage-permissions.sh # Has: info(), warn(), err()
./scripts/bash/test-phase1-integration.sh # Has: info(), ok(), warn(), err()
```
### Implementation Plan
**Step 1.1**: Consolidate Root Level Scripts (changelog.sh, update-latest.sh)
- Add lib/common.sh sourcing with error handling
- Remove duplicate function definitions
- Test functionality with `--help` flags
**Step 1.2**: Consolidate Backup System Scripts
- Update backup-export.sh and backup-import.sh
- Ensure backup operations still work correctly
- Test with dry-run flags where available
**Step 1.3**: Consolidate Database Scripts
- Update db-guard.sh and db-health-check.sh
- **CRITICAL**: These run in containers - verify mount paths work
- Test with existing database connections
**Step 1.4**: Consolidate Module & Verification Scripts
- Update manage-modules.sh, verify-sql-updates.sh, repair-storage-permissions.sh
- Test module staging and SQL verification workflows
- Verify test-phase1-integration.sh still functions
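The header each consolidated script gains follows the same shape; a minimal runnable sketch (the throwaway stub library stands in for `scripts/bash/lib/common.sh` so the example is self-contained — real scripts point at the repository path):

```shell
#!/bin/bash
set -euo pipefail

# Load a shared library or abort, instead of keeping duplicate local
# definitions of info()/ok()/warn()/err() in every script.
source_lib_or_exit() {
  local lib_path="$1"
  if ! source "$lib_path" 2>/dev/null; then
    echo "FATAL: Cannot load $lib_path" >&2
    exit 1
  fi
}

# Throwaway stand-in for scripts/bash/lib/common.sh so the sketch runs
# anywhere; delete this and source the real library in practice.
demo_lib="$(mktemp)"
printf '%s\n' 'info(){ printf "INFO %s\n" "$*"; }' > "$demo_lib"

source_lib_or_exit "$demo_lib"
info "duplicate logging functions removed"   # prints: INFO duplicate logging functions removed
rm -f "$demo_lib"
```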
### Validation Tests
```bash
# Test each script category after consolidation
./changelog.sh --help
./update-latest.sh --help
./scripts/bash/backup-export.sh --dry-run
./scripts/bash/manage-modules.sh --list
```
---
## Phase 2: Docker Compose YAML Anchor Completion
**Priority: HIGH** | **Risk: MEDIUM** | **Impact: HIGH**
### Status
**Completed**: Basic YAML anchors, 2 authserver services consolidated
🔄 **Remaining**: 3 worldserver service variants, 1 authserver service, database services, volume patterns
### Current Docker Compose Analysis
```yaml
# Services needing consolidation:
- ac-worldserver-standard # ~45 lines → can reduce to ~10
- ac-worldserver-playerbots # ~45 lines → can reduce to ~10
- ac-worldserver-modules # ~45 lines → can reduce to ~10
- ac-authserver-modules # ~30 lines → can reduce to ~8
# Database services with repeated patterns:
- ac-db-import # Repeated volume mounts
- ac-db-guard # Similar environment variables
- ac-db-init # Similar MySQL connection patterns
# Volume mount patterns repeated 15+ times:
- ${STORAGE_CONFIG_PATH}:/azerothcore/env/dist/etc
- ${STORAGE_LOGS_PATH}:/azerothcore/logs
- ${BACKUP_PATH}:/backups
```
### Implementation Plan
**Step 2.1**: Complete Worldserver Service Consolidation
- Extend x-worldserver-common anchor to cover all variants
- Consolidate ac-worldserver-standard, ac-worldserver-playerbots, ac-worldserver-modules
- Test each Docker profile: `docker compose --profile services-standard config`
**Step 2.2**: Database Services Consolidation
- Create x-database-common anchor for shared database configurations
- Create x-database-volumes anchor for repeated volume patterns
- Update ac-db-import, ac-db-guard, ac-db-init services
**Step 2.3**: Complete Authserver Consolidation
- Consolidate remaining ac-authserver-modules service
- Verify all three profiles work: standard, playerbots, modules
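With the anchors in place, each service variant collapses to its profile-specific fields plus a YAML merge key; a sketch based on the ac-worldserver-playerbots case (only the fields that actually differ between variants remain):

```yaml
ac-worldserver-playerbots:
  <<: *worldserver-common            # inherits environment, volumes, healthcheck, etc.
  profiles: ["services-playerbots"]
  image: ${AC_WORLDSERVER_IMAGE_PLAYERBOTS}
  container_name: ac-worldserver
  depends_on:
    ac-authserver-playerbots:
      condition: service_healthy
```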
### Validation Tests
```bash
# Test all profiles generate valid configurations
docker compose --profile services-standard config --quiet
docker compose --profile services-playerbots config --quiet
docker compose --profile services-modules config --quiet
# Test actual deployment (non-destructive)
docker compose --profile services-standard up --dry-run
```
---
## Phase 3: Utility Function Libraries
**Priority: MEDIUM** | **Risk: MEDIUM** | **Impact: MEDIUM**
### Status
**Completed**: All three utility libraries created and tested
**Completed**: Integration with backup-import.sh as proof of concept
🔄 **Remaining**: Update remaining 14+ scripts to use new libraries
### Created Libraries
**✅ scripts/bash/lib/mysql-utils.sh** - COMPLETED
- MySQL connection management: `mysql_test_connection()`, `mysql_wait_for_connection()`
- Query execution: `mysql_exec_with_retry()`, `mysql_query()`, `docker_mysql_query()`
- Database utilities: `mysql_database_exists()`, `mysql_get_table_count()`
- Backup/restore: `mysql_backup_database()`, `mysql_restore_database()`
- Configuration: `mysql_validate_configuration()`, `mysql_print_configuration()`
**✅ scripts/bash/lib/docker-utils.sh** - COMPLETED
- Container management: `docker_get_container_status()`, `docker_wait_for_container_state()`
- Execution: `docker_exec_with_retry()`, `docker_is_container_running()`
- Project management: `docker_get_project_name()`, `docker_list_project_containers()`
- Image operations: `docker_get_container_image()`, `docker_pull_image_with_retry()`
- Compose integration: `docker_compose_validate()`, `docker_compose_deploy()`
- System utilities: `docker_check_daemon()`, `docker_cleanup_system()`
**✅ scripts/bash/lib/env-utils.sh** - COMPLETED
- Environment management: `env_read_with_fallback()`, `env_read_typed()`, `env_update_value()`
- Path utilities: `path_resolve_absolute()`, `file_ensure_writable_dir()`
- File operations: `file_create_backup()`, `file_set_permissions()`
- Configuration: `config_read_template_value()`, `config_validate_env()`
- System detection: `system_detect_os()`, `system_check_requirements()`
### Integration Status
**✅ Proof of Concept**: backup-import.sh updated with fallback compatibility
- Uses new utility functions when available
- Maintains backward compatibility with graceful fallbacks
- Tested and functional
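A sketch of that fallback pattern (library and function names as listed above; the fallback body is illustrative):

```shell
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)"

# Minimal local warn() so the script can report a missing library even
# before lib/common.sh has loaded (common.sh redefines it on success).
warn(){ printf 'WARN %s\n' "$*" >&2; }

# Utility libraries are optional during the progressive rollout.
source "$SCRIPT_DIR/lib/common.sh" 2>/dev/null || warn "common.sh not found; using stand-ins"
source "$SCRIPT_DIR/lib/mysql-utils.sh" 2>/dev/null || warn "MySQL utilities not available"

# Prefer the library helper when it loaded, else fall back to plain mysql.
if declare -F mysql_test_connection >/dev/null; then
  run_connection_check(){ mysql_test_connection; }
else
  run_connection_check(){ mysql -h "${MYSQL_HOST:-127.0.0.1}" -e 'SELECT 1' >/dev/null; }
fi
```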
### Remaining Implementation
**Step 3.4**: Update High-Priority Scripts
- backup-export.sh: Use mysql-utils and env-utils functions
- db-guard.sh: Use mysql-utils for database operations
- deploy-tools.sh: Use docker-utils for container management
- verify-deployment.sh: Use docker-utils for status checking
**Step 3.5**: Update Database Scripts
- db-health-check.sh: Use mysql-utils for health validation
- db-import-conditional.sh: Use mysql-utils and env-utils
- manual-backup.sh: Use mysql-utils backup functions
**Step 3.6**: Update Deployment Scripts
- migrate-stack.sh: Use docker-utils for remote operations
- stage-modules.sh: Use env-utils for path management
- rebuild-with-modules.sh: Use docker-utils for build operations
### Validation Tests - COMPLETED ✅
```bash
# Test MySQL utilities
source scripts/bash/lib/mysql-utils.sh
mysql_print_configuration # ✅ PASSED
# Test Docker utilities
source scripts/bash/lib/docker-utils.sh
docker_print_system_info # ✅ PASSED
# Test Environment utilities
source scripts/bash/lib/env-utils.sh
env_utils_validate # ✅ PASSED
# Test integrated script
./backup-import.sh --help # ✅ PASSED with new libraries
```
### Next Steps
- Continue with Step 3.4: Update backup-export.sh, db-guard.sh, deploy-tools.sh
- Implement progressive rollout with testing after each script update
- Complete remaining 11 scripts in dependency order
---
## Phase 4: Error Handling Standardization
**Priority: MEDIUM** | **Risk: LOW** | **Impact: MEDIUM**
### Analysis
**Current State**: Mixed error handling patterns across scripts
```bash
# Found patterns:
set -e # 45 scripts
set -euo pipefail # 23 scripts
set -eu # 8 scripts
(no error handling) # 12 scripts
```
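Counts like the ones above can be re-derived with a small audit helper (a sketch; run from the repository root, and note it only matches the exact `set -euo pipefail` spelling — the `set -e` and `set -eu` variants need their own `grep -rl` passes):

```shell
# Print shell scripts under a directory that never enable full strict
# mode; grep -L lists files containing no matching line.
find_lax_scripts() {
  grep -rL 'set -euo pipefail' --include='*.sh' "${1:-.}"
}
```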
### Implementation Plan
**Step 4.1**: Standardize Error Handling
- Add `set -euo pipefail` to all scripts where safe
- Add error traps for cleanup in critical scripts
- Implement consistent exit codes
**Step 4.2**: Add Script Validation Framework
- Create validation helper functions
- Add dependency checking to critical scripts
- Implement graceful degradation where possible
### Target Pattern
```bash
#!/bin/bash
set -euo pipefail

# Error handling setup
cleanup_on_exit() { :; }   # each script overrides this with its real cleanup
trap 'echo "❌ Error on line $LINENO" >&2' ERR
trap 'cleanup_on_exit' EXIT

# Source libraries with validation
source_lib_or_exit() {
  local lib_path="$1"
  if ! source "$lib_path" 2>/dev/null; then
    echo "❌ FATAL: Cannot load $lib_path" >&2
    exit 1
  fi
}
```
---
## Phase 5: Configuration Template Consolidation
**Priority: LOW** | **Risk: LOW** | **Impact: LOW**
### Analysis
**Found**: 71 instances of duplicate color definitions across scripts
**Found**: Multiple .env template patterns that could be standardized
### Implementation Plan
**Step 5.1**: Color Definition Consolidation
- Ensure all scripts use lib/common.sh colors exclusively
- Remove remaining duplicate color definitions
- Add color theme support (optional)
**Step 5.2**: Configuration Template Cleanup
- Consolidate environment variable patterns
- Create shared configuration validation
- Standardize default value patterns
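For reference, the centralized definitions in `scripts/bash/lib/common.sh` presumably mirror the per-script copies they replace; a sketch (exact emoji and spacing may differ from the real library):

```shell
# Single source of truth for colors and logging (sketch of lib/common.sh).
BLUE='\033[0;34m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; NC='\033[0m'
info(){ printf '%b\n' "${BLUE}$*${NC}"; }
ok(){   printf '%b\n' "${GREEN}$*${NC}"; }
warn(){ printf '%b\n' "${YELLOW}⚠️  $*${NC}"; }
err(){  printf '%b\n' "${RED}$*${NC}"; }
```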
---
## Implementation Priority Order
### **Week 1: High Impact, Low Risk**
- [ ] Phase 1.1-1.2: Consolidate remaining root and backup scripts
- [ ] Phase 2.1: Complete worldserver YAML anchor consolidation
- [ ] Validate: All major scripts and Docker profiles work
### **Week 2: Complete Core Consolidation**
- [ ] Phase 1.3-1.4: Consolidate database and module scripts
- [ ] Phase 2.2-2.3: Complete database service and authserver consolidation
- [ ] Validate: Full deployment pipeline works end-to-end
### **Week 3: Utility Libraries**
- [ ] Phase 3.4: Integrate mysql-utils and docker-utils into high-priority scripts
- [ ] Phase 3.5: Update database scripts to use the utility libraries
- [ ] Validate: Scripts using new libraries function correctly
### **Week 4: Polish and Standardization**
- [ ] Phase 3.6: Update deployment scripts to use the utility libraries
- [ ] Phase 4.1-4.2: Standardize error handling
- [ ] Phase 5.1-5.2: Final cleanup of colors and configs
- [ ] Validate: Complete system testing
---
## Validation Framework
### **Incremental Testing**
Each phase must pass these tests before proceeding:
**Script Functionality Tests:**
```bash
# Master scripts
./deploy.sh --help && ./build.sh --help && ./cleanup.sh --help
# Docker compose validation
docker compose config --quiet
# Profile validation
for profile in services-standard services-playerbots services-modules; do
docker compose --profile $profile config --quiet
done
```
**Integration Tests:**
```bash
# End-to-end validation (non-destructive)
./deploy.sh --profile services-standard --dry-run --no-watch
./scripts/bash/verify-deployment.sh --profile services-standard
```
**Regression Prevention:**
- Git commit after each completed phase
- Tag successful consolidations
- Maintain rollback procedures
---
## Risk Mitigation
### **Container Script Dependencies**
- **High Risk**: Scripts mounted into containers (db-guard.sh, backup-scheduler.sh)
- **Mitigation**: Test container mounting before consolidating
- **Validation**: Verify scripts work inside container environment
### **Remote Deployment Impact**
- **Medium Risk**: SSH deployment scripts (migrate-stack.sh)
- **Mitigation**: Test remote deployment on non-production host
- **Validation**: Verify remote script sourcing works correctly
### **Docker Compose Version Compatibility**
- **Medium Risk**: Advanced YAML anchors may not work on older versions
- **Mitigation**: Add version detection and warnings
- **Validation**: Test on minimum supported Docker Compose version
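The version check might be sketched as follows (`docker compose version --short` is available on Compose v2; the 2.20.0 minimum is a placeholder to set per project):

```shell
# Warn when the installed Docker Compose predates the required minimum,
# since older releases handle some YAML anchor/merge patterns poorly.
check_compose_version() {
  local min="${1:-2.20.0}" have
  have="$(docker compose version --short 2>/dev/null || echo 0)"
  if [ "$(printf '%s\n%s\n' "$min" "$have" | sort -V | head -n1)" != "$min" ]; then
    echo "WARN: Docker Compose $have is older than required $min" >&2
    return 1
  fi
}
```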
---
## Success Metrics
### **Quantitative Goals**
- Reduce duplicate logging functions from 14 → 0 scripts
- Reduce Docker compose file from ~1000 → ~600 lines
- Reduce color definitions from 71 → 1 centralized location
- Consolidate MySQL connection patterns from 22 → 1 library
### **Qualitative Goals**
- Single source of truth for common functionality
- Consistent user experience across all scripts
- Maintainable and extensible architecture
- Clear dependency relationships
- Robust error handling and validation
### **Completion Criteria**
- [ ] All scripts source centralized libraries exclusively
- [ ] No duplicate function definitions remain
- [ ] Docker compose uses YAML anchors for all repeated patterns
- [ ] Comprehensive test suite validates all functionality
- [ ] Documentation updated to reflect new architecture

build.sh

@@ -9,6 +9,13 @@ set -euo pipefail
 ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 ENV_PATH="$ROOT_DIR/.env"
 TEMPLATE_PATH="$ROOT_DIR/.env.template"
+# Source common library with proper error handling
+if ! source "$ROOT_DIR/scripts/bash/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $ROOT_DIR/scripts/bash/lib/common.sh" >&2
+  echo "This library is required for build.sh to function." >&2
+  exit 1
+fi
 source "$ROOT_DIR/scripts/bash/project_name.sh"
 # Default project name (read from .env or template)
@@ -17,11 +24,7 @@ ASSUME_YES=0
 FORCE_REBUILD=0
 SKIP_SOURCE_SETUP=0
 CUSTOM_SOURCE_PATH=""
-BLUE='\033[0;34m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; NC='\033[0m'
-info(){ printf '%b\n' "${BLUE} $*${NC}"; }
-ok(){ printf '%b\n' "${GREEN}$*${NC}"; }
-warn(){ printf '%b\n' "${YELLOW}⚠️ $*${NC}"; }
-err(){ printf '%b\n' "${RED}$*${NC}"; }
+# Color definitions and logging functions now provided by lib/common.sh
 show_build_header(){
   printf '\n%b\n' "${BLUE}🔨 AZEROTHCORE BUILD SYSTEM 🔨${NC}"

changelog.sh

@@ -7,6 +7,12 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 PROJECT_ROOT="$SCRIPT_DIR"
 cd "$PROJECT_ROOT"
+# Source common library for standardized logging
+if ! source "$SCRIPT_DIR/scripts/bash/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $SCRIPT_DIR/scripts/bash/lib/common.sh" >&2
+  exit 1
+fi
 # Load environment configuration (available on deployed servers)
 if [ -f ".env" ]; then
   set -a
@@ -20,11 +26,10 @@ OUTPUT_DIR="${CHANGELOG_OUTPUT_DIR:-./changelogs}"
 DAYS_BACK="${CHANGELOG_DAYS_BACK:-7}"
 FORMAT="${CHANGELOG_FORMAT:-markdown}"
-# Colors for output
-GREEN='\033[0;32m'; BLUE='\033[0;34m'; YELLOW='\033[1;33m'; NC='\033[0m'
-log() { echo -e "${BLUE}[$(date '+%H:%M:%S')]${NC} $*" >&2; }
-success() { echo -e "${GREEN}${NC} $*" >&2; }
-warn() { echo -e "${YELLOW}⚠️${NC} $*" >&2; }
+# Specialized logging with timestamp for changelog context
+log() { info "[$(date '+%H:%M:%S')] $*"; }
+success() { ok "$*"; }
+# warn() function already provided by lib/common.sh
 usage() {
   cat <<EOF

cleanup.sh

@@ -14,6 +14,13 @@ PROJECT_DIR="${SCRIPT_DIR}"
 DEFAULT_COMPOSE_FILE="${PROJECT_DIR}/docker-compose.yml"
 ENV_FILE="${PROJECT_DIR}/.env"
 TEMPLATE_FILE="${PROJECT_DIR}/.env.template"
+# Source common library with proper error handling
+if ! source "${PROJECT_DIR}/scripts/bash/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load ${PROJECT_DIR}/scripts/bash/lib/common.sh" >&2
+  echo "This library is required for cleanup.sh to function." >&2
+  exit 1
+fi
 source "${PROJECT_DIR}/scripts/bash/project_name.sh"
 # Default project name (read from .env or template)
@@ -21,17 +28,16 @@ DEFAULT_PROJECT_NAME="$(project_name::resolve "$ENV_FILE" "$TEMPLATE_FILE")"
 source "${PROJECT_DIR}/scripts/bash/compose_overrides.sh"
 declare -a COMPOSE_FILE_ARGS=()
-# Colors
-RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; BLUE='\033[0;34m'; MAGENTA='\033[0;35m'; NC='\033[0m'
+# Color definitions now provided by lib/common.sh
+# Legacy print_status function for cleanup.sh compatibility
 print_status() {
   case "$1" in
-    INFO) echo -e "${BLUE} ${2}${NC}";;
-    SUCCESS) echo -e "${GREEN}${2}${NC}";;
-    WARNING) echo -e "${YELLOW}⚠️ ${2}${NC}";;
-    ERROR) echo -e "${RED}${2}${NC}";;
-    DANGER) echo -e "${RED}💀 ${2}${NC}";;
-    HEADER) echo -e "\n${MAGENTA}=== ${2} ===${NC}";;
+    INFO) info "${2}";;
+    SUCCESS) ok "${2}";;
+    WARNING) warn "${2}";;
+    ERROR) err "${2}";;
+    DANGER) printf '%b\n' "${RED}💀 ${2}${NC}";;
+    HEADER) printf '\n%b\n' "${CYAN}=== ${2} ===${NC}";;
   esac
 }

deploy.sh

@@ -12,6 +12,13 @@ ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 DEFAULT_COMPOSE_FILE="$ROOT_DIR/docker-compose.yml"
 ENV_PATH="$ROOT_DIR/.env"
 TEMPLATE_PATH="$ROOT_DIR/.env.template"
+# Source common library with proper error handling
+if ! source "$ROOT_DIR/scripts/bash/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $ROOT_DIR/scripts/bash/lib/common.sh" >&2
+  echo "This library is required for deploy.sh to function." >&2
+  exit 1
+fi
 source "$ROOT_DIR/scripts/bash/project_name.sh"
 # Default project name (read from .env or template)
@@ -46,11 +53,7 @@ MODULE_STATE_INITIALIZED=0
 declare -a MODULES_COMPILE_LIST=()
 declare -a COMPOSE_FILE_ARGS=()
-BLUE='\033[0;34m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; NC='\033[0m'
-info(){ printf '%b\n' "${BLUE} $*${NC}"; }
-ok(){ printf '%b\n' "${GREEN}$*${NC}"; }
-warn(){ printf '%b\n' "${YELLOW}⚠️ $*${NC}"; }
-err(){ printf '%b\n' "${RED}$*${NC}"; }
+# Color definitions and logging functions now provided by lib/common.sh
 show_deployment_header(){
   printf '\n%b\n' "${BLUE}⚔️ AZEROTHCORE REALM DEPLOYMENT ⚔️${NC}"

docker-compose.yml

@@ -1,11 +1,110 @@
 name: ${COMPOSE_PROJECT_NAME}
+# =============================================================================
+# YAML ANCHORS - Shared Configuration Templates
+# =============================================================================
 x-logging: &logging-default
   driver: json-file
   options:
     max-size: "10m"
     max-file: "3"
+# Common database connection environment variables
+x-database-config: &database-config
+  CONTAINER_MYSQL: ${CONTAINER_MYSQL}
+  MYSQL_PORT: ${MYSQL_PORT}
+  MYSQL_USER: ${MYSQL_USER}
+  MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
+  DB_AUTH_NAME: ${DB_AUTH_NAME}
+  DB_WORLD_NAME: ${DB_WORLD_NAME}
+  DB_CHARACTERS_NAME: ${DB_CHARACTERS_NAME}
+  DB_RECONNECT_SECONDS: ${DB_RECONNECT_SECONDS}
+  DB_RECONNECT_ATTEMPTS: ${DB_RECONNECT_ATTEMPTS}
+# AzerothCore database connection strings
+x-azerothcore-databases: &azerothcore-databases
+  AC_LOGIN_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}"
+  AC_WORLD_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_WORLD_NAME}"
+  AC_CHARACTER_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_CHARACTERS_NAME}"
+# Common storage volume mounts
+x-storage-volumes: &storage-volumes
+  - ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
+  - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
+  - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
+# Authserver common configuration
+x-authserver-common: &authserver-common
+  user: "${CONTAINER_USER}"
+  environment:
+    <<: *azerothcore-databases
+    AC_UPDATES_ENABLE_DATABASES: "0"
+    AC_BIND_IP: "0.0.0.0"
+    AC_LOG_LEVEL: "1"
+    AC_LOGGER_ROOT_CONFIG: "1,Console"
+    AC_LOGGER_SERVER_CONFIG: "1,Console"
+    AC_APPENDER_CONSOLE_CONFIG: "1,2,0"
+  volumes: *storage-volumes
+  ports:
+    - "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
+  restart: unless-stopped
+  logging: *logging-default
+  networks:
+    - azerothcore
+  cap_add: ["SYS_NICE"]
+  healthcheck: &auth-healthcheck
+    test: ["CMD", "sh", "-c", "ps aux | grep '[a]uthserver' | grep -v grep || exit 1"]
+    interval: ${AUTH_HEALTHCHECK_INTERVAL}
+    timeout: ${AUTH_HEALTHCHECK_TIMEOUT}
+    retries: ${AUTH_HEALTHCHECK_RETRIES}
+    start_period: ${AUTH_HEALTHCHECK_START_PERIOD}
+# Worldserver common configuration
+x-worldserver-common: &worldserver-common
+  user: "${CONTAINER_USER}"
+  stdin_open: true
+  tty: true
+  environment:
+    <<: *azerothcore-databases
+    AC_UPDATES_ENABLE_DATABASES: "7"
+    AC_BIND_IP: "0.0.0.0"
+    AC_DATA_DIR: "/azerothcore/data"
+    AC_SOAP_PORT: "${SOAP_PORT}"
+    AC_PROCESS_PRIORITY: "0"
+    AC_ELUNA_ENABLED: "${AC_ELUNA_ENABLED}"
+    AC_ELUNA_TRACE_BACK: "${AC_ELUNA_TRACE_BACK}"
+    AC_ELUNA_AUTO_RELOAD: "${AC_ELUNA_AUTO_RELOAD}"
+    AC_ELUNA_BYTECODE_CACHE: "${AC_ELUNA_BYTECODE_CACHE}"
+    AC_ELUNA_SCRIPT_PATH: "${AC_ELUNA_SCRIPT_PATH}"
+    AC_ELUNA_REQUIRE_PATHS: "${AC_ELUNA_REQUIRE_PATHS}"
+    AC_ELUNA_REQUIRE_CPATHS: "${AC_ELUNA_REQUIRE_CPATHS}"
+    AC_ELUNA_AUTO_RELOAD_INTERVAL: "${AC_ELUNA_AUTO_RELOAD_INTERVAL}"
+    PLAYERBOT_ENABLED: "${PLAYERBOT_ENABLED}"
+    PLAYERBOT_MAX_BOTS: "${PLAYERBOT_MAX_BOTS}"
+    AC_LOG_LEVEL: "2"
+  ports:
+    - "${WORLD_EXTERNAL_PORT}:${WORLD_PORT}"
+    - "${SOAP_EXTERNAL_PORT}:${SOAP_PORT}"
+  volumes:
+    - ${CLIENT_DATA_PATH:-${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}}:/azerothcore/data
+    - ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
+    - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
+    - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
+    - ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/azerothcore/modules
+    - ${STORAGE_LUA_SCRIPTS_PATH:-${STORAGE_PATH}/lua_scripts}:/azerothcore/lua_scripts
+  restart: unless-stopped
+  logging: *logging-default
+  networks:
+    - azerothcore
+  cap_add: ["SYS_NICE"]
+  healthcheck: &world-healthcheck
+    test: ["CMD", "sh", "-c", "ps aux | grep '[w]orldserver' | grep -v grep || exit 1"]
+    interval: ${WORLD_HEALTHCHECK_INTERVAL}
+    timeout: ${WORLD_HEALTHCHECK_TIMEOUT}
+    retries: ${WORLD_HEALTHCHECK_RETRIES}
+    start_period: ${WORLD_HEALTHCHECK_START_PERIOD}
 services:
   # =====================
   # Database Layer (db)
@@ -515,10 +614,10 @@ services:
   # Services - Standard (services-standard)
   # =====================
   ac-authserver-standard:
+    <<: *authserver-common
     profiles: ["services-standard"]
     image: ${AC_AUTHSERVER_IMAGE}
     container_name: ac-authserver
-    user: "${CONTAINER_USER}"
     depends_on:
       ac-mysql:
         condition: service_healthy
@@ -526,94 +625,26 @@ services:
         condition: service_completed_successfully
       ac-db-init:
         condition: service_completed_successfully
-    environment:
-      AC_LOGIN_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}"
-      AC_UPDATES_ENABLE_DATABASES: "0"
-      AC_BIND_IP: "0.0.0.0"
-      AC_LOG_LEVEL: "1"
-      AC_LOGGER_ROOT_CONFIG: "1,Console"
-      AC_LOGGER_SERVER_CONFIG: "1,Console"
-      AC_APPENDER_CONSOLE_CONFIG: "1,2,0"
-    ports:
-      - "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
-    restart: unless-stopped
-    logging: *logging-default
-    networks:
-      - azerothcore
-    volumes:
-      - ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
-      - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
-      - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
-    cap_add: ["SYS_NICE"]
-    healthcheck:
-      test: ["CMD", "sh", "-c", "ps aux | grep '[a]uthserver' | grep -v grep || exit 1"]
-      interval: ${AUTH_HEALTHCHECK_INTERVAL}
-      timeout: ${AUTH_HEALTHCHECK_TIMEOUT}
-      retries: ${AUTH_HEALTHCHECK_RETRIES}
-      start_period: ${AUTH_HEALTHCHECK_START_PERIOD}
   ac-worldserver-standard:
+    <<: *worldserver-common
     profiles: ["services-standard"]
     image: ${AC_WORLDSERVER_IMAGE}
     container_name: ac-worldserver
-    user: "${CONTAINER_USER}"
-    stdin_open: true
-    tty: true
     depends_on:
       ac-authserver-standard:
         condition: service_healthy
       ac-client-data-standard:
         condition: service_completed_successfully
-    environment:
-      AC_LOGIN_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}"
-      AC_WORLD_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_WORLD_NAME}"
-      AC_CHARACTER_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_CHARACTERS_NAME}"
-      AC_UPDATES_ENABLE_DATABASES: "7"
-      AC_BIND_IP: "0.0.0.0"
-      AC_DATA_DIR: "/azerothcore/data"
-      AC_SOAP_PORT: "${SOAP_PORT}"
-      AC_PROCESS_PRIORITY: "0"
-      AC_ELUNA_ENABLED: "${AC_ELUNA_ENABLED}"
-      AC_ELUNA_TRACE_BACK: "${AC_ELUNA_TRACE_BACK}"
-      AC_ELUNA_AUTO_RELOAD: "${AC_ELUNA_AUTO_RELOAD}"
-      AC_ELUNA_BYTECODE_CACHE: "${AC_ELUNA_BYTECODE_CACHE}"
-      AC_ELUNA_SCRIPT_PATH: "${AC_ELUNA_SCRIPT_PATH}"
-      AC_ELUNA_REQUIRE_PATHS: "${AC_ELUNA_REQUIRE_PATHS}"
-      AC_ELUNA_REQUIRE_CPATHS: "${AC_ELUNA_REQUIRE_CPATHS}"
-      AC_ELUNA_AUTO_RELOAD_INTERVAL: "${AC_ELUNA_AUTO_RELOAD_INTERVAL}"
-      PLAYERBOT_ENABLED: "${PLAYERBOT_ENABLED}"
-      PLAYERBOT_MAX_BOTS: "${PLAYERBOT_MAX_BOTS}"
-      AC_LOG_LEVEL: "2"
-    ports:
-      - "${WORLD_EXTERNAL_PORT}:${WORLD_PORT}"
-      - "${SOAP_EXTERNAL_PORT}:${SOAP_PORT}"
-    volumes:
-      - ${CLIENT_DATA_PATH:-${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}}:/azerothcore/data
-      - ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
-      - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
-      - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
-      - ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/azerothcore/modules
-      - ${STORAGE_LUA_SCRIPTS_PATH:-${STORAGE_PATH}/lua_scripts}:/azerothcore/lua_scripts
-    restart: unless-stopped
-    logging: *logging-default
-    networks:
-      - azerothcore
-    cap_add: ["SYS_NICE"]
-    healthcheck:
-      test: ["CMD", "sh", "-c", "ps aux | grep '[w]orldserver' | grep -v grep || exit 1"]
-      interval: ${WORLD_HEALTHCHECK_INTERVAL}
-      timeout: ${WORLD_HEALTHCHECK_TIMEOUT}
-      retries: ${WORLD_HEALTHCHECK_RETRIES}
-      start_period: ${WORLD_HEALTHCHECK_START_PERIOD}
   # =====================
   # Services - Playerbots (services-playerbots)
   # =====================
   ac-authserver-playerbots:
+    <<: *authserver-common
     profiles: ["services-playerbots"]
     image: ${AC_AUTHSERVER_IMAGE_PLAYERBOTS}
     container_name: ac-authserver
-    user: "${CONTAINER_USER}"
     depends_on:
       ac-mysql:
         condition: service_healthy
@@ -622,7 +653,7 @@ services:
       ac-db-init:
         condition: service_completed_successfully
     environment:
-      AC_LOGIN_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}"
+      <<: *azerothcore-databases
       AC_UPDATES_ENABLE_DATABASES: "0"
       AC_BIND_IP: "0.0.0.0"
       TZ: "${TZ}"
@@ -630,21 +661,6 @@ services:
       AC_LOGGER_ROOT_CONFIG: "1,Console"
       AC_LOGGER_SERVER_CONFIG: "1,Console"
       AC_APPENDER_CONSOLE_CONFIG: "1,2,0"
-    ports:
-      - "${AUTH_EXTERNAL_PORT}:${AUTH_PORT}"
-    restart: unless-stopped
-    logging: *logging-default
-    networks:
-      - azerothcore
-    volumes:
-      - ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
-    cap_add: ["SYS_NICE"]
-    healthcheck:
-      test: ["CMD", "sh", "-c", "ps aux | grep '[a]uthserver' | grep -v grep || exit 1"]
-      interval: ${AUTH_HEALTHCHECK_INTERVAL}
-      timeout: ${AUTH_HEALTHCHECK_TIMEOUT}
-      retries: ${AUTH_HEALTHCHECK_RETRIES}
-      start_period: ${AUTH_HEALTHCHECK_START_PERIOD}
   ac-authserver-modules:
     profiles: ["services-modules"]
@@ -683,12 +699,10 @@ services:
       start_period: ${AUTH_HEALTHCHECK_START_PERIOD}
   ac-worldserver-playerbots:
+    <<: *worldserver-common
     profiles: ["services-playerbots"]
     image: ${AC_WORLDSERVER_IMAGE_PLAYERBOTS}
     container_name: ac-worldserver
-    user: "${CONTAINER_USER}"
-    stdin_open: true
-    tty: true
     depends_on:
       ac-authserver-playerbots:
         condition: service_healthy
@@ -697,9 +711,7 @@ services:
       ac-db-guard:
         condition: service_healthy
     environment:
-      AC_LOGIN_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_AUTH_NAME}"
-      AC_WORLD_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_WORLD_NAME}"
-      AC_CHARACTER_DATABASE_INFO: "${CONTAINER_MYSQL};${MYSQL_PORT};${MYSQL_USER};${MYSQL_ROOT_PASSWORD};${DB_CHARACTERS_NAME}"
+      <<: *azerothcore-databases
       AC_UPDATES_ENABLE_DATABASES: "7"
       AC_BIND_IP: "0.0.0.0"
       AC_DATA_DIR: "/azerothcore/data"
@@ -717,27 +729,6 @@ services:
       PLAYERBOT_ENABLED: "${PLAYERBOT_ENABLED}"
       PLAYERBOT_MAX_BOTS: "${PLAYERBOT_MAX_BOTS}"
       AC_LOG_LEVEL: "2"
-    ports:
-      - "${WORLD_EXTERNAL_PORT}:${WORLD_PORT}"
-      - "${SOAP_EXTERNAL_PORT}:${SOAP_PORT}"
-    volumes:
-      - ${CLIENT_DATA_PATH:-${STORAGE_CLIENT_DATA_PATH:-${STORAGE_PATH}/client-data}}:/azerothcore/data
-      - ${STORAGE_CONFIG_PATH:-${STORAGE_PATH}/config}:/azerothcore/env/dist/etc
-      - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/logs
-      - ${STORAGE_LOGS_PATH:-${STORAGE_PATH}/logs}:/azerothcore/env/dist/logs
-      - ${STORAGE_MODULES_PATH:-${STORAGE_PATH}/modules}:/azerothcore/modules
-      - ${STORAGE_LUA_SCRIPTS_PATH:-${STORAGE_PATH}/lua_scripts}:/azerothcore/lua_scripts
-    restart: unless-stopped
-    logging: *logging-default
-    networks:
-      - azerothcore
-    cap_add: ["SYS_NICE"]
-    healthcheck:
-      test: ["CMD", "sh", "-c", "ps aux | grep '[w]orldserver' | grep -v grep || exit 1"]
-      interval: ${WORLD_HEALTHCHECK_INTERVAL}
-      timeout: ${WORLD_HEALTHCHECK_TIMEOUT}
-      retries: ${WORLD_HEALTHCHECK_RETRIES}
-      start_period: ${WORLD_HEALTHCHECK_START_PERIOD}
   ac-worldserver-modules:
     profiles: ["services-modules"]

View File

@@ -195,8 +195,10 @@ else
   # Step 3: Update realmlist table
   echo ""
   echo "🌐 Step 3: Updating realmlist table..."
+  echo "   🔧 Setting realm address to: ${SERVER_ADDRESS}:${REALM_PORT}"
   mysql -h "${MYSQL_HOST}" -u"${MYSQL_USER}" -p"${MYSQL_ROOT_PASSWORD}" --skip-ssl-verify "${DB_AUTH_NAME}" -e "
     UPDATE realmlist SET address='${SERVER_ADDRESS}', port=${REALM_PORT} WHERE id=1;
+    SELECT CONCAT('   ✓ Realm configured: ', name, ' at ', address, ':', port) AS status FROM realmlist WHERE id=1;
   " || echo "⚠️ Could not update realmlist table"
   echo "✅ Realmlist updated"

View File

@@ -7,6 +7,17 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
 cd "$SCRIPT_DIR"
+# Source common libraries for standardized functionality
+if ! source "$SCRIPT_DIR/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $SCRIPT_DIR/lib/common.sh" >&2
+  exit 1
+fi
+# Source utility libraries
+source "$SCRIPT_DIR/lib/mysql-utils.sh" 2>/dev/null || warn "MySQL utilities not available"
+source "$SCRIPT_DIR/lib/docker-utils.sh" 2>/dev/null || warn "Docker utilities not available"
+source "$SCRIPT_DIR/lib/env-utils.sh" 2>/dev/null || warn "Environment utilities not available"
 # Load environment defaults if present
 if [ -f "$PROJECT_ROOT/.env" ]; then
   set -a
@@ -63,7 +74,7 @@ Examples:
 EOF
 }
-err(){ printf 'Error: %s\n' "$*" >&2; }
+# Use standardized error function from lib/common.sh
 die(){ err "$1"; exit 1; }
 normalize_token(){
@@ -104,7 +115,11 @@ remove_from_list(){
   arr=("${filtered[@]}")
 }
+# Use env-utils.sh function if available, fallback to local implementation
 resolve_relative(){
+  if command -v path_resolve_absolute >/dev/null 2>&1; then
+    path_resolve_absolute "$2" "$1"
+  else
   local base="$1" path="$2"
   if command -v python3 >/dev/null 2>&1; then
     python3 - "$base" "$path" <<'PY'
@@ -120,6 +135,7 @@ PY
   else
     die "python3 is required but was not found on PATH"
   fi
+  fi
 }
 json_string(){
@@ -248,7 +264,13 @@ generated_at="$(date --iso-8601=seconds)"
 dump_db(){
   local schema="$1" outfile="$2"
   echo "Dumping ${schema} -> ${outfile}"
+  # Use mysql-utils.sh function if available, fallback to direct command
+  if command -v mysql_backup_database >/dev/null 2>&1; then
+    mysql_backup_database "$schema" "$outfile" "gzip" "$MYSQL_CONTAINER" "$MYSQL_PW"
+  else
   docker exec "$MYSQL_CONTAINER" mysqldump -uroot -p"$MYSQL_PW" "$schema" | gzip > "$outfile"
+  fi
 }
 for db in "${ACTIVE_DBS[@]}"; do

View File

@@ -6,15 +6,19 @@ INVOCATION_DIR="$PWD"
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 cd "$SCRIPT_DIR"
-COLOR_RED='\033[0;31m'
-COLOR_GREEN='\033[0;32m'
-COLOR_YELLOW='\033[1;33m'
-COLOR_RESET='\033[0m'
-log(){ printf '%b\n' "${COLOR_GREEN}$*${COLOR_RESET}"; }
-warn(){ printf '%b\n' "${COLOR_YELLOW}$*${COLOR_RESET}"; }
-err(){ printf '%b\n' "${COLOR_RED}$*${COLOR_RESET}"; }
-fatal(){ err "$*"; exit 1; }
+# Source common libraries for standardized functionality
+if ! source "$SCRIPT_DIR/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $SCRIPT_DIR/lib/common.sh" >&2
+  exit 1
+fi
+# Source utility libraries
+source "$SCRIPT_DIR/lib/mysql-utils.sh" 2>/dev/null || warn "MySQL utilities not available"
+source "$SCRIPT_DIR/lib/docker-utils.sh" 2>/dev/null || warn "Docker utilities not available"
+source "$SCRIPT_DIR/lib/env-utils.sh" 2>/dev/null || warn "Environment utilities not available"
+# Use log() for main output to maintain existing behavior
+log() { ok "$*"; }
 SUPPORTED_DBS=(auth characters world)
 declare -A SUPPORTED_SET=()
@@ -102,7 +106,11 @@ remove_from_list(){
   arr=("${filtered[@]}")
 }
+# Use env-utils.sh function if available, fallback to local implementation
 resolve_relative(){
+  if command -v path_resolve_absolute >/dev/null 2>&1; then
+    path_resolve_absolute "$2" "$1"
+  else
   local base="$1" path="$2"
   if command -v python3 >/dev/null 2>&1; then
     python3 - "$base" "$path" <<'PY'
@@ -118,6 +126,7 @@ PY
   else
     fatal "python3 is required but was not found on PATH"
   fi
+  fi
 }
 load_manifest(){
@@ -280,7 +289,13 @@ backup_db(){
   local out="manual-backups/${label}-pre-import-$(timestamp).sql"
   mkdir -p manual-backups
   log "Backing up current ${schema} to ${out}"
+  # Use mysql-utils.sh function if available, fallback to direct command
+  if command -v mysql_backup_database >/dev/null 2>&1; then
+    mysql_backup_database "$schema" "$out" "none" "ac-mysql" "$MYSQL_PW"
+  else
   docker exec ac-mysql mysqldump -uroot -p"$MYSQL_PW" "$schema" > "$out"
+  fi
 }
 restore(){
@@ -302,7 +317,22 @@ db_selected(){
 }
 count_rows(){
+  # Use mysql-utils.sh function if available, fallback to direct command
+  if command -v docker_mysql_query >/dev/null 2>&1; then
+    # Extract database name from query for mysql-utils function
+    local query="$1"
+    local db_name
+    # Simple extraction - assumes "FROM database.table" or "database.table" pattern
+    if [[ "$query" =~ FROM[[:space:]]+([^.[:space:]]+)\. ]]; then
+      db_name="${BASH_REMATCH[1]}"
+      docker_mysql_query "$db_name" "$query" "ac-mysql" "$MYSQL_PW"
+    else
+      # Fallback to original method if can't parse database
+      docker exec ac-mysql mysql -uroot -p"$MYSQL_PW" -N -B -e "$query"
+    fi
+  else
   docker exec ac-mysql mysql -uroot -p"$MYSQL_PW" -N -B -e "$1"
+  fi
 }
 case "${1:-}" in
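The schema extraction inside `count_rows` leans entirely on one bash regex, so it only fires for schema-qualified tables. A quick standalone sketch (the query string is illustrative; the regex is the one from the script) shows what it captures:

```shell
#!/usr/bin/env bash
# Extract the database name from a schema-qualified query, as count_rows does
query='SELECT COUNT(*) FROM acore_auth.account'
if [[ "$query" =~ FROM[[:space:]]+([^.[:space:]]+)\. ]]; then
  echo "db=${BASH_REMATCH[1]}"   # → db=acore_auth
else
  echo "no schema-qualified table found"
fi
```

Queries like `SELECT COUNT(*) FROM account` (no `schema.` prefix) take the fallback branch, which is why the direct `docker exec` path is kept.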

View File

@@ -6,18 +6,14 @@ set -euo pipefail
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 cd "$SCRIPT_DIR"
-COLOR_RED='\033[0;31m'
-COLOR_GREEN='\033[0;32m'
-COLOR_YELLOW='\033[1;33m'
-COLOR_BLUE='\033[0;34m'
-COLOR_CYAN='\033[0;36m'
-COLOR_RESET='\033[0m'
-log(){ printf '%b\n' "${COLOR_GREEN}$*${COLOR_RESET}"; }
-info(){ printf '%b\n' "${COLOR_CYAN}$*${COLOR_RESET}"; }
-warn(){ printf '%b\n' "${COLOR_YELLOW}$*${COLOR_RESET}"; }
-err(){ printf '%b\n' "${COLOR_RED}$*${COLOR_RESET}"; }
-fatal(){ err "$*"; exit 1; }
+# Source common library for standardized logging
+if ! source "$SCRIPT_DIR/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $SCRIPT_DIR/lib/common.sh" >&2
+  exit 1
+fi
+# Use log() instead of info() for main output to maintain existing behavior
+log() { ok "$*"; }
 MYSQL_PW=""
 BACKUP_DIR=""

View File

@@ -4,9 +4,31 @@
 # automatically rerun db-import-conditional to hydrate from backups.
 set -euo pipefail
-log(){ echo "🛡️ [db-guard] $*"; }
-warn(){ echo "⚠️ [db-guard] $*" >&2; }
-err(){ echo "❌ [db-guard] $*" >&2; }
+# Source common library if available (container environment)
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+if [ -f "$SCRIPT_DIR/../scripts/bash/lib/common.sh" ]; then
+  # Running from project root
+  source "$SCRIPT_DIR/../scripts/bash/lib/common.sh"
+  db_guard_log() { info "🛡️ [db-guard] $*"; }
+  db_guard_warn() { warn "[db-guard] $*"; }
+  db_guard_err() { err "[db-guard] $*"; }
+elif [ -f "$SCRIPT_DIR/lib/common.sh" ]; then
+  # Running from scripts/bash directory
+  source "$SCRIPT_DIR/lib/common.sh"
+  db_guard_log() { info "🛡️ [db-guard] $*"; }
+  db_guard_warn() { warn "[db-guard] $*"; }
+  db_guard_err() { err "[db-guard] $*"; }
+else
+  # Fallback for container environment where lib/common.sh may not be available
+  db_guard_log(){ echo "🛡️ [db-guard] $*"; }
+  db_guard_warn(){ echo "⚠️ [db-guard] $*" >&2; }
+  db_guard_err(){ echo "❌ [db-guard] $*" >&2; }
+fi
+# Maintain compatibility with existing function names
+log() { db_guard_log "$*"; }
+warn() { db_guard_warn "$*"; }
+err() { db_guard_err "$*"; }
 MYSQL_HOST="${CONTAINER_MYSQL:-ac-mysql}"
 MYSQL_PORT="${MYSQL_PORT:-3306}"

View File

@@ -6,6 +6,13 @@ set -euo pipefail
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}" )" && pwd)"
 ROOT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
+# Source common library for standardized logging
+if ! source "$ROOT_DIR/scripts/bash/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $ROOT_DIR/scripts/bash/lib/common.sh" >&2
+  exit 1
+fi
 DEFAULT_COMPOSE_FILE="$ROOT_DIR/docker-compose.yml"
 ENV_FILE="$ROOT_DIR/.env"
 TEMPLATE_FILE="$ROOT_DIR/.env.template"
@@ -16,17 +23,6 @@ DEFAULT_PROJECT_NAME="$(project_name::resolve "$ENV_FILE" "$TEMPLATE_FILE")"
 source "$ROOT_DIR/scripts/bash/compose_overrides.sh"
 declare -a COMPOSE_FILE_ARGS=()
-BLUE='\033[0;34m'
-GREEN='\033[0;32m'
-YELLOW='\033[1;33m'
-RED='\033[0;31m'
-NC='\033[0m'
-info(){ echo -e "${BLUE} $*${NC}"; }
-ok(){ echo -e "${GREEN}$*${NC}"; }
-warn(){ echo -e "${YELLOW}⚠️ $*${NC}"; }
-err(){ echo -e "${RED}$*${NC}"; }
 read_env(){
   local key="$1" default="${2:-}" value=""
   if [ -f "$ENV_FILE" ]; then

View File

@@ -50,9 +50,9 @@ log() {
   printf '%b\n' "${GREEN}$*${NC}"
 }
-# Log warning messages (yellow with warning icon)
+# Log warning messages (yellow with warning icon, to stderr for compatibility)
 warn() {
-  printf '%b\n' "${YELLOW}⚠️ $*${NC}"
+  printf '%b\n' "${YELLOW}⚠️ $*${NC}" >&2
 }
 # Log error messages (red with error icon, continues execution)
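Moving `warn()` to stderr matters for callers that capture a function's stdout with `$(...)`: the warning no longer pollutes the captured value. A minimal sketch (the `warn`/`get_port` functions here are illustrative, not the project's real ones):

```shell
#!/usr/bin/env bash
# warn writes to stderr, so command substitution captures only real output
warn() { printf 'Warning: %s\n' "$*" >&2; }
get_port() { warn "PORT not set, using default"; echo "3306"; }

port=$(get_port)   # captures "3306"; the warning still reaches the terminal
echo "port=$port"  # → port=3306
```

Had `warn` written to stdout, `$port` would contain the warning text as well.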

View File

@@ -0,0 +1,530 @@
#!/bin/bash
#
# Docker utility library for AzerothCore RealmMaster scripts
# This library provides standardized Docker operations, container management,
# and deployment functions.
#
# Usage: source /path/to/scripts/bash/lib/docker-utils.sh
#
# Prevent multiple sourcing
if [ -n "${_DOCKER_UTILS_LIB_LOADED:-}" ]; then
return 0
fi
_DOCKER_UTILS_LIB_LOADED=1
# Source common library for logging functions
DOCKER_UTILS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ -f "$DOCKER_UTILS_DIR/common.sh" ]; then
source "$DOCKER_UTILS_DIR/common.sh"
elif command -v info >/dev/null 2>&1; then
# Common functions already available
:
else
# Fallback logging functions
info() { printf '\033[0;34m %s\033[0m\n' "$*"; }
warn() { printf '\033[1;33m⚠ %s\033[0m\n' "$*" >&2; }
err() { printf '\033[0;31m❌ %s\033[0m\n' "$*" >&2; }
fatal() { err "$*"; exit 1; }
fi
# =============================================================================
# DOCKER CONTAINER MANAGEMENT
# =============================================================================
# Get container status
# Returns: running, exited, paused, restarting, removing, dead, created, or "not_found"
#
# Usage:
# status=$(docker_get_container_status "ac-mysql")
# if [ "$status" = "running" ]; then
# echo "Container is running"
# fi
#
docker_get_container_status() {
    local container_name="$1"
    # Exact-name match: a prefix grep could match e.g. "ac-mysql-backup"
    # when asked for "ac-mysql", and the "table" format adds a header row
    if ! docker ps -a --format "{{.Names}}" | grep -q "^${container_name}$"; then
        echo "not_found"
        return 1
    fi
    docker inspect --format='{{.State.Status}}' "$container_name" 2>/dev/null || echo "not_found"
}
# Check if container is running
# Returns 0 if running, 1 if not running or not found
#
# Usage:
# if docker_is_container_running "ac-mysql"; then
# echo "MySQL container is running"
# fi
#
docker_is_container_running() {
local container_name="$1"
local status
status=$(docker_get_container_status "$container_name")
[ "$status" = "running" ]
}
# Wait for container to reach desired state
# Returns 0 if container reaches state within timeout, 1 if timeout
#
# Usage:
# docker_wait_for_container_state "ac-mysql" "running" 30
# docker_wait_for_container_state "ac-mysql" "exited" 10
#
docker_wait_for_container_state() {
local container_name="$1"
local desired_state="$2"
local timeout="${3:-30}"
local check_interval="${4:-2}"
local elapsed=0
info "Waiting for container '$container_name' to reach state '$desired_state' (timeout: ${timeout}s)"
while [ $elapsed -lt $timeout ]; do
local current_state
current_state=$(docker_get_container_status "$container_name")
if [ "$current_state" = "$desired_state" ]; then
info "Container '$container_name' reached desired state: $desired_state"
return 0
fi
sleep "$check_interval"
elapsed=$((elapsed + check_interval))
done
err "Container '$container_name' did not reach state '$desired_state' within ${timeout}s (current: $current_state)"
return 1
}
# Execute command in container with retry logic
# Handles container availability and connection issues
#
# Usage:
# docker_exec_with_retry "ac-mysql" "mysql -uroot -ppassword -e 'SELECT 1'"
# echo "SELECT 1" | docker_exec_with_retry "ac-mysql" "mysql -uroot -ppassword"
#
docker_exec_with_retry() {
local container_name="$1"
local command="$2"
local max_attempts="${3:-3}"
local retry_delay="${4:-2}"
local interactive="${5:-false}"
if ! docker_is_container_running "$container_name"; then
err "Container '$container_name' is not running"
return 1
fi
local attempt=1
while [ $attempt -le $max_attempts ]; do
if [ "$interactive" = "true" ]; then
if docker exec -i "$container_name" sh -c "$command"; then
return 0
fi
else
if docker exec "$container_name" sh -c "$command"; then
return 0
fi
fi
if [ $attempt -lt $max_attempts ]; then
warn "Docker exec failed in '$container_name' (attempt $attempt/$max_attempts), retrying in ${retry_delay}s..."
sleep "$retry_delay"
fi
attempt=$((attempt + 1))
done
err "Docker exec failed in '$container_name' after $max_attempts attempts"
return 1
}
# =============================================================================
# DOCKER COMPOSE PROJECT MANAGEMENT
# =============================================================================
# Get project name from environment or docker-compose.yml
# Returns the Docker Compose project name
#
# Usage:
# project_name=$(docker_get_project_name)
# echo "Project: $project_name"
#
docker_get_project_name() {
# Check environment variable first
if [ -n "${COMPOSE_PROJECT_NAME:-}" ]; then
echo "$COMPOSE_PROJECT_NAME"
return 0
fi
# Check for docker-compose.yml name directive
if [ -f "docker-compose.yml" ] && command -v python3 >/dev/null 2>&1; then
local project_name
project_name=$(python3 -c "
import yaml
try:
with open('docker-compose.yml', 'r') as f:
data = yaml.safe_load(f)
print(data.get('name', ''))
except:
print('')
" 2>/dev/null)
if [ -n "$project_name" ]; then
echo "$project_name"
return 0
fi
fi
# Fallback to directory name
basename "$PWD" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]//g'
}
# List containers for current project
# Returns list of container names with optional filtering
#
# Usage:
# containers=$(docker_list_project_containers)
# running_containers=$(docker_list_project_containers "running")
#
docker_list_project_containers() {
local status_filter="${1:-}"
local project_name
project_name=$(docker_get_project_name)
local filter_arg=""
if [ -n "$status_filter" ]; then
filter_arg="--filter status=$status_filter"
fi
# Use project label to find containers
docker ps -a $filter_arg --filter "label=com.docker.compose.project=$project_name" --format "{{.Names}}" 2>/dev/null
}
# Stop project containers gracefully
# Stops containers with configurable timeout
#
# Usage:
# docker_stop_project_containers 30 # Stop with 30s timeout
# docker_stop_project_containers # Use default 10s timeout
#
docker_stop_project_containers() {
local timeout="${1:-10}"
local containers
containers=$(docker_list_project_containers "running")
if [ -z "$containers" ]; then
info "No running containers found for project"
return 0
fi
info "Stopping project containers with ${timeout}s timeout: $containers"
echo "$containers" | xargs -r docker stop -t "$timeout"
}
# Start project containers
# Starts containers that are stopped but exist
#
# Usage:
# docker_start_project_containers
#
docker_start_project_containers() {
local containers
containers=$(docker_list_project_containers "exited")
if [ -z "$containers" ]; then
info "No stopped containers found for project"
return 0
fi
info "Starting project containers: $containers"
echo "$containers" | xargs -r docker start
}
# =============================================================================
# DOCKER IMAGE MANAGEMENT
# =============================================================================
# Get image information for container
# Returns image name:tag for specified container
#
# Usage:
# image=$(docker_get_container_image "ac-mysql")
# echo "MySQL container using image: $image"
#
docker_get_container_image() {
    local container_name="$1"
    # docker inspect works for running and stopped containers alike
    docker inspect --format='{{.Config.Image}}' "$container_name" 2>/dev/null || echo "unknown"
}
# Check if image exists locally
# Returns 0 if image exists, 1 if not found
#
# Usage:
# if docker_image_exists "mysql:8.0"; then
# echo "MySQL image is available"
# fi
#
docker_image_exists() {
local image_name="$1"
docker images --format "{{.Repository}}:{{.Tag}}" | grep -q "^${image_name}$"
}
# Pull image with retry logic
# Handles temporary network issues and registry problems
#
# Usage:
# docker_pull_image_with_retry "mysql:8.0"
# docker_pull_image_with_retry "azerothcore/ac-wotlk-worldserver:latest" 5 10
#
docker_pull_image_with_retry() {
local image_name="$1"
local max_attempts="${2:-3}"
local retry_delay="${3:-5}"
if docker_image_exists "$image_name"; then
info "Image '$image_name' already exists locally"
return 0
fi
local attempt=1
while [ $attempt -le $max_attempts ]; do
info "Pulling image '$image_name' (attempt $attempt/$max_attempts)"
if docker pull "$image_name"; then
info "Successfully pulled image '$image_name'"
return 0
fi
if [ $attempt -lt $max_attempts ]; then
warn "Failed to pull image '$image_name', retrying in ${retry_delay}s..."
sleep "$retry_delay"
fi
attempt=$((attempt + 1))
done
err "Failed to pull image '$image_name' after $max_attempts attempts"
return 1
}
# =============================================================================
# DOCKER COMPOSE OPERATIONS
# =============================================================================
# Validate docker-compose.yml configuration
# Returns 0 if valid, 1 if invalid or errors found
#
# Usage:
# if docker_compose_validate; then
# echo "Docker Compose configuration is valid"
# fi
#
docker_compose_validate() {
local compose_file="${1:-docker-compose.yml}"
if [ ! -f "$compose_file" ]; then
err "Docker Compose file not found: $compose_file"
return 1
fi
if docker compose -f "$compose_file" config --quiet; then
info "Docker Compose configuration is valid"
return 0
else
err "Docker Compose configuration validation failed"
return 1
fi
}
# Get service status from docker-compose
# Returns service status or "not_found" if service doesn't exist
#
# Usage:
# status=$(docker_compose_get_service_status "ac-mysql")
#
docker_compose_get_service_status() {
local service_name="$1"
local project_name
project_name=$(docker_get_project_name)
# Get container name for the service
local container_name="${project_name}-${service_name}-1"
docker_get_container_status "$container_name"
}
# Deploy with profile and options
# Wrapper around docker compose up with standardized options
#
# Usage:
# docker_compose_deploy "services-standard" "--detach"
# docker_compose_deploy "services-modules" "--no-deps ac-worldserver"
#
docker_compose_deploy() {
local profile="${1:-services-standard}"
local additional_options="${2:-}"
if ! docker_compose_validate; then
err "Cannot deploy: Docker Compose configuration is invalid"
return 1
fi
info "Deploying with profile: $profile"
    # Pass through caller-supplied options verbatim; default to detached mode
if [ -n "$additional_options" ]; then
docker compose --profile "$profile" up $additional_options
else
docker compose --profile "$profile" up --detach
fi
}
# =============================================================================
# DOCKER SYSTEM UTILITIES
# =============================================================================
# Check Docker daemon availability
# Returns 0 if Docker is available, 1 if not
#
# Usage:
# if docker_check_daemon; then
# echo "Docker daemon is available"
# fi
#
docker_check_daemon() {
if docker info >/dev/null 2>&1; then
return 0
else
err "Docker daemon is not available or accessible"
return 1
fi
}
# Get Docker system information
# Returns formatted system info for debugging
#
# Usage:
# docker_print_system_info
#
docker_print_system_info() {
info "Docker System Information:"
if ! docker_check_daemon; then
err "Cannot retrieve Docker system information - daemon not available"
return 1
fi
local docker_version compose_version
docker_version=$(docker --version 2>/dev/null | cut -d' ' -f3 | tr -d ',' || echo "unknown")
compose_version=$(docker compose version --short 2>/dev/null || echo "unknown")
info " Docker Version: $docker_version"
info " Compose Version: $compose_version"
info " Project Name: $(docker_get_project_name)"
local running_containers
running_containers=$(docker_list_project_containers "running" | wc -l)
info " Running Containers: $running_containers"
}
# Cleanup unused Docker resources
# Removes stopped containers, unused networks, and dangling images
#
# Usage:
# docker_cleanup_system true # Include unused volumes
# docker_cleanup_system false # Preserve volumes (default)
#
docker_cleanup_system() {
local include_volumes="${1:-false}"
info "Cleaning up Docker system resources..."
# Remove stopped containers
local stopped_containers
stopped_containers=$(docker ps -aq --filter "status=exited")
if [ -n "$stopped_containers" ]; then
info "Removing stopped containers"
echo "$stopped_containers" | xargs docker rm
fi
# Remove unused networks
info "Removing unused networks"
docker network prune -f
# Remove dangling images
info "Removing dangling images"
docker image prune -f
# Remove unused volumes if requested
if [ "$include_volumes" = "true" ]; then
warn "Removing unused volumes (this may delete data!)"
docker volume prune -f
fi
info "Docker system cleanup completed"
}
# =============================================================================
# CONTAINER HEALTH AND MONITORING
# =============================================================================
# Get container resource usage
# Returns CPU and memory usage statistics
#
# Usage:
# docker_get_container_stats "ac-mysql"
#
docker_get_container_stats() {
local container_name="$1"
if ! docker_is_container_running "$container_name"; then
err "Container '$container_name' is not running"
return 1
fi
docker stats --no-stream --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}" "$container_name"
}
# Check container logs for errors
# Searches recent logs for error patterns
#
# Usage:
# docker_check_container_errors "ac-mysql" 100
#
docker_check_container_errors() {
local container_name="$1"
local lines="${2:-50}"
if ! docker ps -a --format "{{.Names}}" | grep -q "^${container_name}$"; then
err "Container '$container_name' not found"
return 1
fi
info "Checking last $lines log lines for errors in '$container_name'"
# Look for common error patterns
docker logs --tail "$lines" "$container_name" 2>&1 | grep -i "error\|exception\|fail\|fatal" || {
info "No obvious errors found in recent logs"
return 0
}
}
# =============================================================================
# INITIALIZATION
# =============================================================================
# Library loaded successfully
# Scripts can check for $_DOCKER_UTILS_LIB_LOADED to verify library is loaded

View File

@@ -0,0 +1,613 @@
#!/bin/bash
#
# Environment and file utility library for AzerothCore RealmMaster scripts
# This library provides enhanced environment variable handling, file operations,
# and path management functions.
#
# Usage: source /path/to/scripts/bash/lib/env-utils.sh
#
# Prevent multiple sourcing
if [ -n "${_ENV_UTILS_LIB_LOADED:-}" ]; then
return 0
fi
_ENV_UTILS_LIB_LOADED=1
# Source common library for logging functions
ENV_UTILS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ -f "$ENV_UTILS_DIR/common.sh" ]; then
source "$ENV_UTILS_DIR/common.sh"
elif command -v info >/dev/null 2>&1; then
# Common functions already available
:
else
# Fallback logging functions
info() { printf '\033[0;34m %s\033[0m\n' "$*"; }
warn() { printf '\033[1;33m⚠ %s\033[0m\n' "$*" >&2; }
err() { printf '\033[0;31m❌ %s\033[0m\n' "$*" >&2; }
fatal() { err "$*"; exit 1; }
fi
# =============================================================================
# ENVIRONMENT VARIABLE MANAGEMENT
# =============================================================================
# Enhanced read_env function with advanced features
# Supports multiple .env files, environment variable precedence, and validation
#
# Usage:
# value=$(env_read_with_fallback "MYSQL_PASSWORD" "default_password")
# value=$(env_read_with_fallback "PORT" "" ".env.local" "validate_port")
#
env_read_with_fallback() {
local key="$1"
local default="${2:-}"
local env_file="${3:-${ENV_PATH:-${DEFAULT_ENV_PATH:-.env}}}"
local validator_func="${4:-}"
local value=""
# 1. Check if variable is already set in environment (highest precedence)
if [ -n "${!key:-}" ]; then
value="${!key}"
else
# 2. Read from .env file if it exists
if [ -f "$env_file" ]; then
# Extract value using grep and cut, handling various formats
value="$(grep -E "^${key}=" "$env_file" 2>/dev/null | tail -n1 | cut -d'=' -f2- | tr -d '\r')"
            # Strip inline comments (note: this also strips a '#' that appears inside a quoted value)
value="$(echo "$value" | sed 's/[[:space:]]*#.*//' | sed 's/[[:space:]]*$//')"
# Strip quotes if present
if [[ "$value" == \"*\" && "$value" == *\" ]]; then
# Double quotes
value="${value:1:-1}"
elif [[ "$value" == \'*\' && "$value" == *\' ]]; then
# Single quotes
value="${value:1:-1}"
fi
fi
# 3. Use default if still empty
if [ -z "${value:-}" ]; then
value="$default"
fi
fi
# 4. Validate if validator function provided
if [ -n "$validator_func" ] && command -v "$validator_func" >/dev/null 2>&1; then
if ! "$validator_func" "$value"; then
err "Validation failed for $key: $value"
return 1
fi
fi
printf '%s\n' "${value}"
}
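The comment- and quote-stripping steps above are easy to misread, so here is a standalone run of the same pipeline on a hypothetical `.env` value (requires bash 4.2+ for the `${var:1:-1}` slice):

```shell
#!/usr/bin/env bash
# Same stripping sequence as env_read_with_fallback, on a sample value
value='"s3cret"   # production password'
# 1. Drop the inline comment and trailing whitespace
value="$(echo "$value" | sed 's/[[:space:]]*#.*//' | sed 's/[[:space:]]*$//')"
# 2. Remove surrounding double quotes
if [[ "$value" == \"*\" && "$value" == *\" ]]; then
  value="${value:1:-1}"
fi
echo "$value"   # → s3cret
```

Single-quoted values follow the analogous branch in the library.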
# Read environment variable with type conversion
# Supports string, int, bool, and path types
#
# Usage:
# port=$(env_read_typed "MYSQL_PORT" "int" "3306")
# debug=$(env_read_typed "DEBUG" "bool" "false")
# path=$(env_read_typed "DATA_PATH" "path" "/data")
#
env_read_typed() {
local key="$1"
local type="$2"
local default="${3:-}"
local value
value=$(env_read_with_fallback "$key" "$default")
case "$type" in
int|integer)
if ! [[ "$value" =~ ^[0-9]+$ ]]; then
err "Environment variable $key must be an integer: $value"
return 1
fi
echo "$value"
;;
bool|boolean)
case "${value,,}" in
true|yes|1|on|enabled) echo "true" ;;
false|no|0|off|disabled) echo "false" ;;
*) err "Environment variable $key must be boolean: $value"; return 1 ;;
esac
;;
path)
# Expand relative paths to absolute
if [ -n "$value" ]; then
path_resolve_absolute "$value"
fi
;;
string|*)
echo "$value"
;;
esac
}
# Update or add environment variable in .env file with backup
# Creates backup and maintains file integrity
#
# Usage:
# env_update_value "MYSQL_PASSWORD" "new_password"
# env_update_value "DEBUG" "true" ".env.local"
# env_update_value "PORT" "8080" ".env" "true" # create backup
#
env_update_value() {
local key="$1"
local value="$2"
local env_file="${3:-${ENV_PATH:-${DEFAULT_ENV_PATH:-.env}}}"
local create_backup="${4:-false}"
[ -n "$env_file" ] || return 0
# Create backup if requested
if [ "$create_backup" = "true" ] && [ -f "$env_file" ]; then
file_create_backup "$env_file"
fi
# Create file if it doesn't exist
if [ ! -f "$env_file" ]; then
file_ensure_writable_dir "$(dirname "$env_file")"
printf '%s=%s\n' "$key" "$value" >> "$env_file"
return 0
fi
# Update existing or append new
if grep -q "^${key}=" "$env_file"; then
        # Write through a temporary file, then move it into place: portable
        # across GNU/BSD sed (no -i flag quirks) and safe against partial writes
        local temp_file="${env_file}.tmp.$$"
        sed "s|^${key}=.*|${key}=${value}|" "$env_file" > "$temp_file" && mv "$temp_file" "$env_file"
else
printf '\n%s=%s\n' "$key" "$value" >> "$env_file"
fi
info "Updated $key in $env_file"
}
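The temp-file-then-move pattern used by `env_update_value` can be exercised in isolation; this sketch uses a throwaway `mktemp` file rather than a real `.env`:

```shell
#!/usr/bin/env bash
# Update one key in a KEY=value file via a temp file, as env_update_value does
env_file=$(mktemp)
printf 'PORT=3306\nDEBUG=false\n' > "$env_file"

key=PORT; value=8080
temp_file="${env_file}.tmp.$$"
sed "s|^${key}=.*|${key}=${value}|" "$env_file" > "$temp_file" && mv "$temp_file" "$env_file"

cat "$env_file"   # PORT=8080 / DEBUG=false
rm -f "$env_file"
```

Because `mv` within one filesystem is atomic, readers never observe a half-written file, which a plain `sed -i` cannot guarantee on every platform.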
# Load multiple environment files with precedence
# Later files override earlier ones
#
# Usage:
# env_load_multiple ".env" ".env.local" ".env.production"
#
env_load_multiple() {
local files=("$@")
local loaded_count=0
for env_file in "${files[@]}"; do
if [ -f "$env_file" ]; then
info "Loading environment from: $env_file"
set -a
# shellcheck disable=SC1090
source "$env_file"
set +a
loaded_count=$((loaded_count + 1))
fi
done
if [ $loaded_count -eq 0 ]; then
warn "No environment files found: ${files[*]}"
return 1
fi
info "Loaded $loaded_count environment file(s)"
return 0
}
# =============================================================================
# PATH AND FILE UTILITIES
# =============================================================================
# Resolve path to absolute form with proper error handling
# Handles both existing and non-existing paths
#
# Usage:
# abs_path=$(path_resolve_absolute "./relative/path")
# abs_path=$(path_resolve_absolute "/already/absolute")
#
path_resolve_absolute() {
local path="$1"
local base_dir="${2:-$PWD}"
if command -v python3 >/dev/null 2>&1; then
python3 - "$base_dir" "$path" <<'PY'
import os, sys
base, path = sys.argv[1:3]
if not path:
print(os.path.abspath(base))
elif os.path.isabs(path):
print(os.path.normpath(path))
else:
print(os.path.normpath(os.path.join(base, path)))
PY
elif command -v realpath >/dev/null 2>&1; then
if [ "${path:0:1}" = "/" ]; then
echo "$path"
else
realpath -m "$base_dir/$path"
fi
else
# Fallback manual resolution
if [ "${path:0:1}" = "/" ]; then
echo "$path"
else
echo "$base_dir/$path"
fi
fi
}
# Ensure directory exists and is writable with proper permissions
# Creates parent directories if needed
#
# Usage:
# file_ensure_writable_dir "/path/to/directory"
# file_ensure_writable_dir "/path/to/directory" "0755"
#
file_ensure_writable_dir() {
local dir="$1"
local permissions="${2:-0755}"
if [ ! -d "$dir" ]; then
if mkdir -p "$dir" 2>/dev/null; then
info "Created directory: $dir"
chmod "$permissions" "$dir" 2>/dev/null || warn "Could not set permissions on $dir"
else
err "Failed to create directory: $dir"
return 1
fi
fi
if [ ! -w "$dir" ]; then
if chmod u+w "$dir" 2>/dev/null; then
info "Made directory writable: $dir"
else
err "Directory not writable and cannot fix permissions: $dir"
return 1
fi
fi
return 0
}
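The create-then-verify pattern above can be exercised on a throwaway path; this sketch (standalone, not calling the library) mirrors the same mkdir/chmod/writability checks.

```bash
# Demo of the ensure-writable pattern on a temporary directory.
dir="/tmp/env-utils-demo.$$"
mkdir -p "$dir" && chmod 0755 "$dir"
writable=no
[ -w "$dir" ] && writable=yes   # same test file_ensure_writable_dir performs
rmdir "$dir"
```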
# Create timestamped backup of file
# Supports custom backup directory and compression
#
# Usage:
# file_create_backup "/path/to/important.conf"
# file_create_backup "/path/to/file" "/backup/dir" "gzip"
#
file_create_backup() {
local file="$1"
local backup_dir="${2:-$(dirname "$file")}"
local compression="${3:-none}"
if [ ! -f "$file" ]; then
warn "File does not exist, skipping backup: $file"
return 0
fi
file_ensure_writable_dir "$backup_dir"
local filename basename backup_file
filename=$(basename "$file")
basename="${filename%.*}"
local extension="${filename##*.}"
# Create backup filename with timestamp
if [ "$filename" = "$basename" ]; then
# No extension
backup_file="${backup_dir}/${filename}.backup.$(date +%Y%m%d_%H%M%S)"
else
# Has extension
backup_file="${backup_dir}/${basename}.backup.$(date +%Y%m%d_%H%M%S).${extension}"
fi
case "$compression" in
gzip|gz)
if gzip -c "$file" > "${backup_file}.gz"; then
info "Created compressed backup: ${backup_file}.gz"
else
err "Failed to create compressed backup: ${backup_file}.gz"
return 1
fi
;;
none|*)
if cp "$file" "$backup_file"; then
info "Created backup: $backup_file"
else
err "Failed to create backup: $backup_file"
return 1
fi
;;
esac
return 0
}
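The extension-splitting logic that `file_create_backup` relies on is worth seeing in isolation (hypothetical helper `split_name`): for a name with no dot, `${filename%.*}` strips nothing, so `basename == filename` is exactly how the "no extension" branch is detected.

```bash
# Standalone sketch of the parameter expansions used for backup naming.
split_name() {
  local filename="$1"
  basename="${filename%.*}"     # strip last .extension if present
  extension="${filename##*.}"   # keep text after last dot
}
split_name "worldserver.conf"
conf_base="$basename" conf_ext="$extension"
split_name "Makefile"           # no dot: both expansions return the full name
plain_base="$basename" plain_ext="$extension"
```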
# Set file permissions safely with validation
# Handles both numeric and symbolic modes
#
# Usage:
# file_set_permissions "/path/to/file" "0644"
# file_set_permissions "/path/to/script" "u+x"
#
file_set_permissions() {
local file="$1"
local permissions="$2"
local recursive="${3:-false}"
if [ ! -e "$file" ]; then
err "File or directory does not exist: $file"
return 1
fi
local chmod_opts=""
if [ "$recursive" = "true" ] && [ -d "$file" ]; then
chmod_opts="-R"
fi
if chmod $chmod_opts "$permissions" "$file" 2>/dev/null; then
info "Set permissions $permissions on $file"
return 0
else
err "Failed to set permissions $permissions on $file"
return 1
fi
}
# =============================================================================
# CONFIGURATION FILE UTILITIES
# =============================================================================
# Read value from template file with variable expansion support
# Enhanced version supporting more template formats
#
# Usage:
# value=$(config_read_template_value "MYSQL_PASSWORD" ".env.template")
# value=$(config_read_template_value "PORT" "config.template.yml" "yaml")
#
config_read_template_value() {
local key="$1"
local template_file="${2:-${TEMPLATE_FILE:-${TEMPLATE_PATH:-.env.template}}}"
local format="${3:-env}"
if [ ! -f "$template_file" ]; then
err "Template file not found: $template_file"
return 1
fi
case "$format" in
env)
local raw_line value
raw_line=$(grep "^${key}=" "$template_file" 2>/dev/null | head -1)
if [ -z "$raw_line" ]; then
err "Key '$key' not found in template: $template_file"
return 1
fi
value="${raw_line#*=}"
value=$(echo "$value" | sed 's/^"\(.*\)"$/\1/')
# Handle ${VAR:-default} syntax by extracting the default value
if [[ "$value" =~ ^\$\{[^}]*:-([^}]*)\}$ ]]; then
value="${BASH_REMATCH[1]}"
fi
echo "$value"
;;
yaml|yml)
if command -v python3 >/dev/null 2>&1; then
python3 -c "
import yaml, sys
try:
with open('$template_file', 'r') as f:
data = yaml.safe_load(f)
# Simple key lookup - can be enhanced for nested keys
print(data.get('$key', ''))
except:
sys.exit(1)
" 2>/dev/null
else
err "python3 required for YAML template parsing"
return 1
fi
;;
*)
err "Unsupported template format: $format"
return 1
;;
esac
}
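The `${VAR:-default}` handling above is the subtle part: when a template line's value is itself a shell fallback expression, the regex captures only the default. A standalone sketch:

```bash
# Demo of the default-extraction regex from config_read_template_value.
value='${MYSQL_PORT:-3306}'
if [[ "$value" =~ ^\$\{[^}]*:-([^}]*)\}$ ]]; then
  value="${BASH_REMATCH[1]}"   # capture group holds the default
fi
```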
# Validate configuration against schema
# Supports basic validation rules
#
# Usage:
# config_validate_env ".env" "required:MYSQL_PASSWORD,PORT;optional:DEBUG"
#
config_validate_env() {
local env_file="$1"
local rules="${2:-}"
if [ ! -f "$env_file" ]; then
err "Environment file not found: $env_file"
return 1
fi
if [ -z "$rules" ]; then
info "No validation rules specified"
return 0
fi
local validation_failed=false
# Parse validation rules
IFS=';' read -ra rule_sets <<< "$rules"
for rule_set in "${rule_sets[@]}"; do
IFS=':' read -ra rule_parts <<< "$rule_set"
local rule_type="${rule_parts[0]}"
local variables="${rule_parts[1]}"
case "$rule_type" in
required)
IFS=',' read -ra req_vars <<< "$variables"
for var in "${req_vars[@]}"; do
if ! grep -q "^${var}=" "$env_file" || [ -z "$(env_read_with_fallback "$var" "" "$env_file")" ]; then
err "Required environment variable missing or empty: $var"
validation_failed=true
fi
done
;;
optional)
# Optional variables - just log if missing
IFS=',' read -ra opt_vars <<< "$variables"
for var in "${opt_vars[@]}"; do
if ! grep -q "^${var}=" "$env_file"; then
info "Optional environment variable not set: $var"
fi
done
;;
esac
done
if [ "$validation_failed" = "true" ]; then
err "Environment validation failed"
return 1
fi
info "Environment validation passed"
return 0
}
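The rule-string grammar parsed above is semicolon-separated rule sets, each of the form `type:VAR1,VAR2`. This sketch shows just the tokenization step, independent of any env file:

```bash
# Parse "required:A,B;optional:C" into type/variable pairs.
rules="required:MYSQL_PASSWORD,PORT;optional:DEBUG"
parsed=""
IFS=';' read -ra rule_sets <<< "$rules"
for rule_set in "${rule_sets[@]}"; do
  parsed+="${rule_set%%:*}=[${rule_set#*:}] "   # type=[vars]
done
```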
# =============================================================================
# SYSTEM UTILITIES
# =============================================================================
# Detect operating system and distribution
# Returns standardized OS identifier
#
# Usage:
# os=$(system_detect_os)
# if [ "$os" = "ubuntu" ]; then
# echo "Running on Ubuntu"
# fi
#
system_detect_os() {
local os="unknown"
if [ -f /etc/os-release ]; then
# Source os-release for distribution info
local id
id=$(grep '^ID=' /etc/os-release | cut -d'=' -f2 | tr -d '"')
case "$id" in
ubuntu|debian|centos|rhel|fedora|alpine|arch)
os="$id"
;;
*)
os="linux"
;;
esac
elif [[ "$OSTYPE" == "darwin"* ]]; then
os="macos"
elif [[ "$OSTYPE" == "cygwin" || "$OSTYPE" == "msys" ]]; then
os="windows"
fi
echo "$os"
}
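The ID extraction from `/etc/os-release` can be demonstrated against a temporary file so it behaves the same on any host (the sample file contents are illustrative):

```bash
# Standalone demo of the grep/cut/tr pipeline in system_detect_os.
tmp=$(mktemp)
printf 'NAME="Ubuntu"\nID=ubuntu\nVERSION_ID="22.04"\n' > "$tmp"
id=$(grep '^ID=' "$tmp" | cut -d'=' -f2 | tr -d '"')   # anchored, so VERSION_ID is skipped
rm -f "$tmp"
```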
# Check system requirements
# Validates required commands and versions
#
# Usage:
# system_check_requirements "docker:20.0,python3:3.6"
#
system_check_requirements() {
local requirements="${1:-}"
if [ -z "$requirements" ]; then
return 0
fi
local check_failed=false
IFS=',' read -ra req_list <<< "$requirements"
for requirement in "${req_list[@]}"; do
IFS=':' read -ra req_parts <<< "$requirement"
local command="${req_parts[0]}"
local min_version="${req_parts[1]:-}"
if ! command -v "$command" >/dev/null 2>&1; then
err "Required command not found: $command"
check_failed=true
continue
fi
if [ -n "$min_version" ]; then
# Basic version checking - can be enhanced
info "Found $command (version checking not fully implemented)"
else
info "Found required command: $command"
fi
done
if [ "$check_failed" = "true" ]; then
err "System requirements check failed"
return 1
fi
info "System requirements check passed"
return 0
}
# =============================================================================
# INITIALIZATION AND VALIDATION
# =============================================================================
# Validate environment utility configuration
# Checks that utilities are working correctly
#
# Usage:
# env_utils_validate
#
env_utils_validate() {
info "Validating environment utilities..."
# Test path resolution
local test_path
test_path=$(path_resolve_absolute "." 2>/dev/null)
if [ -z "$test_path" ]; then
err "Path resolution utility not working"
return 1
fi
# Test directory operations
if ! file_ensure_writable_dir "/tmp/env-utils-test.$$"; then
err "Directory utility not working"
return 1
fi
rmdir "/tmp/env-utils-test.$$" 2>/dev/null || true
info "Environment utilities validation successful"
return 0
}
# =============================================================================
# INITIALIZATION
# =============================================================================
# Library loaded successfully
# Scripts can check for $_ENV_UTILS_LIB_LOADED to verify library is loaded


@@ -0,0 +1,376 @@
#!/bin/bash
#
# MySQL utility library for AzerothCore RealmMaster scripts
# This library provides standardized MySQL operations, connection management,
# and database interaction functions.
#
# Usage: source /path/to/scripts/bash/lib/mysql-utils.sh
#
# Prevent multiple sourcing
if [ -n "${_MYSQL_UTILS_LIB_LOADED:-}" ]; then
return 0
fi
_MYSQL_UTILS_LIB_LOADED=1
# Source common library for logging functions
MYSQL_UTILS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ -f "$MYSQL_UTILS_DIR/common.sh" ]; then
source "$MYSQL_UTILS_DIR/common.sh"
elif command -v info >/dev/null 2>&1; then
# Common functions already available
:
else
# Fallback logging functions
info() { printf '\033[0;34m %s\033[0m\n' "$*"; }
warn() { printf '\033[1;33m⚠ %s\033[0m\n' "$*" >&2; }
err() { printf '\033[0;31m❌ %s\033[0m\n' "$*" >&2; }
fatal() { err "$*"; exit 1; }
fi
# =============================================================================
# MYSQL CONNECTION CONFIGURATION
# =============================================================================
# Default MySQL configuration - can be overridden by environment
MYSQL_HOST="${MYSQL_HOST:-${CONTAINER_MYSQL:-ac-mysql}}"
MYSQL_PORT="${MYSQL_PORT:-3306}"
MYSQL_USER="${MYSQL_USER:-root}"
MYSQL_ROOT_PASSWORD="${MYSQL_ROOT_PASSWORD:-${MYSQL_PW:-azerothcore}}"
MYSQL_CONTAINER="${MYSQL_CONTAINER:-ac-mysql}"
# =============================================================================
# MYSQL CONNECTION FUNCTIONS
# =============================================================================
# Test MySQL connection with current configuration
# Returns 0 if connection successful, 1 if failed
#
# Usage:
# if mysql_test_connection; then
# echo "MySQL is available"
# fi
#
mysql_test_connection() {
local host="${1:-$MYSQL_HOST}"
local port="${2:-$MYSQL_PORT}"
local user="${3:-$MYSQL_USER}"
local password="${4:-$MYSQL_ROOT_PASSWORD}"
MYSQL_PWD="$password" mysql -h "$host" -P "$port" -u "$user" -e "SELECT 1" >/dev/null 2>&1
}
# Wait for MySQL to be ready with timeout
# Returns 0 if MySQL becomes available within timeout, 1 if timeout reached
#
# Usage:
# mysql_wait_for_connection 60 # Wait up to 60 seconds
# mysql_wait_for_connection # Use default 30 second timeout
#
mysql_wait_for_connection() {
local timeout="${1:-30}"
local retry_interval="${2:-2}"
local elapsed=0
info "Waiting for MySQL connection (${MYSQL_HOST}:${MYSQL_PORT}) with ${timeout}s timeout..."
while [ $elapsed -lt $timeout ]; do
if mysql_test_connection; then
info "MySQL connection established"
return 0
fi
sleep "$retry_interval"
elapsed=$((elapsed + retry_interval))
done
err "MySQL connection failed after ${timeout}s timeout"
return 1
}
# Execute MySQL command with retry logic
# Handles both direct queries and piped input
#
# Usage:
# mysql_exec_with_retry "database_name" "SELECT COUNT(*) FROM table;"
# echo "SELECT 1;" | mysql_exec_with_retry "database_name"
# mysql_exec_with_retry "database_name" < script.sql
#
mysql_exec_with_retry() {
local database="$1"
local query="${2:-}"
local max_attempts="${3:-3}"
local retry_delay="${4:-2}"
local attempt=1
while [ $attempt -le $max_attempts ]; do
if [ -n "$query" ]; then
# Direct query execution
if MYSQL_PWD="$MYSQL_ROOT_PASSWORD" mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u "$MYSQL_USER" "$database" -e "$query"; then
return 0
fi
else
# Input from pipe/stdin
if MYSQL_PWD="$MYSQL_ROOT_PASSWORD" mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u "$MYSQL_USER" "$database"; then
return 0
fi
fi
if [ $attempt -lt $max_attempts ]; then
warn "MySQL query failed (attempt $attempt/$max_attempts), retrying in ${retry_delay}s..."
sleep "$retry_delay"
fi
attempt=$((attempt + 1))
done
err "MySQL query failed after $max_attempts attempts"
return 1
}
# Execute MySQL query and return result (no table headers)
# Optimized for single values and parsing
#
# Usage:
# count=$(mysql_query "acore_characters" "SELECT COUNT(*) FROM characters")
# tables=$(mysql_query "information_schema" "SHOW TABLES")
#
mysql_query() {
local database="$1"
local query="$2"
local host="${3:-$MYSQL_HOST}"
local port="${4:-$MYSQL_PORT}"
local user="${5:-$MYSQL_USER}"
local password="${6:-$MYSQL_ROOT_PASSWORD}"
MYSQL_PWD="$password" mysql -h "$host" -P "$port" -u "$user" -N -B "$database" -e "$query" 2>/dev/null
}
# =============================================================================
# DOCKER MYSQL FUNCTIONS
# =============================================================================
# Execute MySQL command inside Docker container
# Wrapper around docker exec with standardized MySQL connection
#
# Usage:
# docker_mysql_exec "acore_auth" "SELECT COUNT(*) FROM account;"
# echo "SELECT 1;" | docker_mysql_exec "acore_auth"
#
docker_mysql_exec() {
local database="$1"
local query="${2:-}"
local container="${3:-$MYSQL_CONTAINER}"
local password="${4:-$MYSQL_ROOT_PASSWORD}"
if [ -n "$query" ]; then
docker exec "$container" mysql -uroot -p"$password" "$database" -e "$query"
else
docker exec -i "$container" mysql -uroot -p"$password" "$database"
fi
}
# Execute MySQL query in Docker container (no table headers)
# Optimized for single values and parsing
#
# Usage:
# count=$(docker_mysql_query "acore_characters" "SELECT COUNT(*) FROM characters")
#
docker_mysql_query() {
local database="$1"
local query="$2"
local container="${3:-$MYSQL_CONTAINER}"
local password="${4:-$MYSQL_ROOT_PASSWORD}"
docker exec "$container" mysql -uroot -p"$password" -N -B "$database" -e "$query" 2>/dev/null
}
# Check if MySQL container is healthy and accepting connections
#
# Usage:
# if docker_mysql_is_ready; then
# echo "MySQL container is ready"
# fi
#
docker_mysql_is_ready() {
local container="${1:-$MYSQL_CONTAINER}"
local password="${2:-$MYSQL_ROOT_PASSWORD}"
docker exec "$container" mysqladmin ping -uroot -p"$password" >/dev/null 2>&1
}
# =============================================================================
# DATABASE UTILITY FUNCTIONS
# =============================================================================
# Check if database exists
# Returns 0 if database exists, 1 if not found
#
# Usage:
# if mysql_database_exists "acore_world"; then
# echo "World database found"
# fi
#
mysql_database_exists() {
local database_name="$1"
local result
result=$(mysql_query "information_schema" "SELECT COUNT(*) FROM SCHEMATA WHERE SCHEMA_NAME='$database_name'" 2>/dev/null || echo "0")
[ "$result" -gt 0 ] 2>/dev/null
}
# Get table count for database(s)
# Supports both single database and multiple database patterns
#
# Usage:
# count=$(mysql_get_table_count "acore_world")
# count=$(mysql_get_table_count "acore_auth,acore_characters")
#
mysql_get_table_count() {
local databases="$1"
local schema_list
# Convert comma-separated list to SQL IN clause format
schema_list=$(echo "$databases" | sed "s/,/','/g" | sed "s/^/'/" | sed "s/$/'/")
mysql_query "information_schema" "SELECT COUNT(*) FROM tables WHERE table_schema IN ($schema_list)"
}
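The sed chain above turns a comma-separated database list into a quoted SQL `IN ()` list; shown standalone:

```bash
# "a,b" -> "'a','b'" for use in: WHERE table_schema IN (...)
databases="acore_auth,acore_characters"
schema_list=$(echo "$databases" | sed "s/,/','/g" | sed "s/^/'/" | sed "s/$/'/")
```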
# Get database connection string for applications
# Returns connection string in format: host;port;user;password;database
#
# Usage:
# conn_str=$(mysql_get_connection_string "acore_auth")
#
mysql_get_connection_string() {
local database="$1"
local host="${2:-$MYSQL_HOST}"
local port="${3:-$MYSQL_PORT}"
local user="${4:-$MYSQL_USER}"
local password="${5:-$MYSQL_ROOT_PASSWORD}"
printf '%s;%s;%s;%s;%s\n' "$host" "$port" "$user" "$password" "$database"
}
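The `host;port;user;password;database` layout matches the semicolon-delimited format AzerothCore's `*DatabaseInfo` config entries expect. A standalone sketch (hypothetical helper name, illustrative values):

```bash
# Build the semicolon-delimited connection string.
build_conn_string() {
  printf '%s;%s;%s;%s;%s\n' "$1" "$2" "$3" "$4" "$5"
}
conn=$(build_conn_string ac-mysql 3306 root secret acore_auth)
```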
# =============================================================================
# BACKUP AND RESTORE UTILITIES
# =============================================================================
# Create database backup using mysqldump
# Supports both compressed and uncompressed output
#
# Usage:
# mysql_backup_database "acore_characters" "/path/to/backup.sql"
# mysql_backup_database "acore_world" "/path/to/backup.sql.gz" "gzip"
#
mysql_backup_database() {
local database="$1"
local output_file="$2"
local compression="${3:-none}"
local container="${4:-$MYSQL_CONTAINER}"
local password="${5:-$MYSQL_ROOT_PASSWORD}"
info "Creating backup of $database -> $output_file"
case "$compression" in
gzip|gz)
docker exec "$container" mysqldump -uroot -p"$password" "$database" | gzip > "$output_file"
;;
none|*)
docker exec "$container" mysqldump -uroot -p"$password" "$database" > "$output_file"
;;
esac
}
# Restore database from backup file
# Handles both compressed and uncompressed files automatically
#
# Usage:
# mysql_restore_database "acore_characters" "/path/to/backup.sql"
# mysql_restore_database "acore_world" "/path/to/backup.sql.gz"
#
mysql_restore_database() {
local database="$1"
local backup_file="$2"
local container="${3:-$MYSQL_CONTAINER}"
local password="${4:-$MYSQL_ROOT_PASSWORD}"
if [ ! -f "$backup_file" ]; then
err "Backup file not found: $backup_file"
return 1
fi
info "Restoring $database from $backup_file"
case "$backup_file" in
*.gz)
gzip -dc "$backup_file" | docker exec -i "$container" mysql -uroot -p"$password" "$database"
;;
*.sql)
docker exec -i "$container" mysql -uroot -p"$password" "$database" < "$backup_file"
;;
*)
warn "Unknown backup file format, treating as uncompressed SQL"
docker exec -i "$container" mysql -uroot -p"$password" "$database" < "$backup_file"
;;
esac
}
# =============================================================================
# VALIDATION AND DIAGNOSTICS
# =============================================================================
# Validate MySQL configuration and connectivity
# Comprehensive health check for MySQL setup
#
# Usage:
# mysql_validate_configuration
#
mysql_validate_configuration() {
info "Validating MySQL configuration..."
# Check required environment variables
if [ -z "$MYSQL_ROOT_PASSWORD" ]; then
err "MYSQL_ROOT_PASSWORD is not set"
return 1
fi
# Test basic connectivity
if ! mysql_test_connection; then
err "Cannot connect to MySQL at ${MYSQL_HOST}:${MYSQL_PORT}"
return 1
fi
# Check Docker container if using container setup
if docker ps --format "table {{.Names}}" | grep -q "$MYSQL_CONTAINER"; then
if ! docker_mysql_is_ready; then
err "MySQL container $MYSQL_CONTAINER is not ready"
return 1
fi
info "MySQL container $MYSQL_CONTAINER is healthy"
fi
info "MySQL configuration validation successful"
return 0
}
# Print MySQL configuration summary
# Useful for debugging and verification
#
# Usage:
# mysql_print_configuration
#
mysql_print_configuration() {
info "MySQL Configuration Summary:"
info " Host: $MYSQL_HOST"
info " Port: $MYSQL_PORT"
info " User: $MYSQL_USER"
info " Container: $MYSQL_CONTAINER"
info " Password: $([ -n "$MYSQL_ROOT_PASSWORD" ] && echo "***SET***" || echo "***NOT SET***")"
}
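The password-masking expression in the summary above is a small command-substitution idiom, shown here in isolation:

```bash
# Mask a secret in log output without revealing its value.
MYSQL_ROOT_PASSWORD="secret"
masked=$([ -n "$MYSQL_ROOT_PASSWORD" ] && echo "***SET***" || echo "***NOT SET***")
```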
# =============================================================================
# INITIALIZATION
# =============================================================================
# Library loaded successfully
# Scripts can check for $_MYSQL_UTILS_LIB_LOADED to verify library is loaded


@@ -3,8 +3,21 @@
 # to re-copy SQL files.
 set -euo pipefail
-info(){ echo "🔧 [restore-stage] $*"; }
-warn(){ echo "⚠️ [restore-stage] $*" >&2; }
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+# Source common library for standardized logging
+if ! source "$SCRIPT_DIR/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $SCRIPT_DIR/lib/common.sh" >&2
+  exit 1
+fi
+# Specialized prefixed logging for this restoration context
+restore_info() { info "🔧 [restore-stage] $*"; }
+restore_warn() { warn "[restore-stage] $*"; }
+# Maintain compatibility with existing function calls
+info() { restore_info "$*"; }
+warn() { restore_warn "$*"; }
 MODULES_DIR="${MODULES_DIR:-/modules}"
 MODULES_META_DIR="${MODULES_DIR}/.modules-meta"


@@ -4,13 +4,14 @@ set -e
 # Simple profile-aware deploy + health check for profiles-verify/docker-compose.yml
-BLUE='\033[0;34m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; NC='\033[0m'
-info(){ echo -e "${BLUE} $*${NC}"; }
-ok(){ echo -e "${GREEN}$*${NC}"; }
-warn(){ echo -e "${YELLOW}⚠️ $*${NC}"; }
-err(){ echo -e "${RED}$*${NC}"; }
 PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
+# Source common library for standardized logging
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+if ! source "$SCRIPT_DIR/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $SCRIPT_DIR/lib/common.sh" >&2
+  exit 1
+fi
 COMPOSE_FILE="$PROJECT_DIR/docker-compose.yml"
 ENV_FILE=""
 TEMPLATE_FILE="$PROJECT_DIR/.env.template"


@@ -7,11 +7,11 @@ set -euo pipefail
 ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 cd "$ROOT_DIR"
-BLUE='\033[0;34m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; NC='\033[0m'
-info(){ printf '%b\n' "${BLUE} $*${NC}"; }
-ok(){ printf '%b\n' "${GREEN}$*${NC}"; }
-warn(){ printf '%b\n' "${YELLOW}⚠️ $*${NC}"; }
-err(){ printf '%b\n' "${RED}$*${NC}"; }
+# Source common library for standardized logging
+if ! source "$ROOT_DIR/scripts/bash/lib/common.sh" 2>/dev/null; then
+  echo "❌ FATAL: Cannot load $ROOT_DIR/scripts/bash/lib/common.sh" >&2
+  exit 1
+fi
 FORCE_DIRTY=0
 DEPLOY_ARGS=()