AzerothCore-RealmMaster/scripts/bash/import-database-files.sh
uprightbass360 5c9f1d7389 feat: comprehensive module system and database management improvements
This commit introduces major enhancements to the module installation system,
database management, and configuration handling for AzerothCore deployments.

## Module System Improvements

### Module SQL Staging & Installation
- Refactor module SQL staging to properly handle AzerothCore's sql/ directory structure
- Fix SQL staging path to use correct AzerothCore format (sql/custom/db_*/*)
- Implement conditional module database importing based on enabled modules
- Add support for both cpp-modules and lua-scripts module types
- Handle rsync exit code 23 (permission warnings) gracefully during deployment
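
As a rough illustration of the staging layout and the exit-code-23 handling described above (the paths and variable names below are placeholders, not the exact ones used by the deployment scripts):

```bash
# Illustrative sketch: copy a module's SQL into AzerothCore's sql/custom layout and
# treat rsync exit code 23 (partial transfer, e.g. permission warnings) as non-fatal.
# MODULE_DIR and AC_SQL_ROOT are hypothetical names used only for this example.
MODULE_DIR="modules/mod-example"
AC_SQL_ROOT="azerothcore/data/sql/custom"

for db in db_world db_characters db_auth; do
    src="$MODULE_DIR/data/sql/$db"
    [ -d "$src" ] || continue
    mkdir -p "$AC_SQL_ROOT/$db"
    rc=0
    rsync -a "$src/" "$AC_SQL_ROOT/$db/" || rc=$?
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 23 ]; then
        echo "rsync failed for $db with exit code $rc" >&2
        exit "$rc"
    fi
done
```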

### Module Manifest & Automation
- Add automated module manifest generation via GitHub Actions workflow
- Implement Python-based module manifest updater with comprehensive validation
- Add module dependency tracking and SQL file discovery
- Support for blocked modules and module metadata management

## Database Management Enhancements

### Database Import System
- Add db-guard container for continuous database health monitoring and verification
- Implement conditional database import that skips when databases are current
- Add backup restoration and SQL staging coordination
- Support for Playerbots database (4th database) in all import operations
- Add comprehensive database health checking and status reporting

### Database Configuration
- Implement 10 new dbimport.conf settings from environment variables (see the sketch after this list):
  - Database.Reconnect.Seconds/Attempts for connection reliability
  - Updates.AllowedModules for module auto-update control
  - Updates.Redundancy for data integrity checks
  - Worker/Synch thread settings for all three core databases
- Auto-apply dbimport.conf settings via auto-post-install.sh
- Add environment variable injection for db-import and db-guard containers
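
A minimal sketch of how these settings could be written into dbimport.conf from the environment; the config path and the `DB_*`/`WORLD_DB_*` environment variable names are assumptions for illustration, not the names used by auto-post-install.sh:

```bash
# Hypothetical helper: ensure "Key = Value" lines exist in dbimport.conf.
# DBIMPORT_CONF and the DB_* / WORLD_DB_* variable names are assumptions.
DBIMPORT_CONF="${DBIMPORT_CONF:-/azerothcore/env/dist/etc/dbimport.conf}"

set_conf() {
    local key="$1" value="$2"
    if grep -q "^${key} =" "$DBIMPORT_CONF"; then
        sed -i "s|^${key} =.*|${key} = ${value}|" "$DBIMPORT_CONF"
    else
        echo "${key} = ${value}" >> "$DBIMPORT_CONF"
    fi
}

set_conf "Database.Reconnect.Seconds"   "${DB_RECONNECT_SECONDS:-15}"
set_conf "Database.Reconnect.Attempts"  "${DB_RECONNECT_ATTEMPTS:-20}"
set_conf "Updates.AllowedModules"       "${DB_UPDATES_ALLOWED_MODULES:-all}"
set_conf "Updates.Redundancy"           "${DB_UPDATES_REDUNDANCY:-1}"
set_conf "WorldDatabase.WorkerThreads"  "${WORLD_DB_WORKER_THREADS:-1}"
set_conf "WorldDatabase.SynchThreads"   "${WORLD_DB_SYNCH_THREADS:-1}"
```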

### Backup & Recovery
- Fix backup scheduler to prevent immediate execution on container startup (see the sketch after this list)
- Add backup status monitoring script with detailed reporting
- Implement backup import/export utilities
- Add database verification scripts for SQL update tracking
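
The startup fix can be sketched as a sleep-first scheduler loop, assuming a simple loop-based scheduler; `BACKUP_INTERVAL_SECONDS` and `/scripts/run-backup.sh` are placeholder names for this example:

```bash
# Sketch only: sleep before the first backup so a container (re)start never
# triggers an immediate run. BACKUP_INTERVAL_SECONDS and /scripts/run-backup.sh
# are hypothetical names used for illustration.
BACKUP_INTERVAL_SECONDS="${BACKUP_INTERVAL_SECONDS:-86400}"

echo "Backup scheduler started; first run in ${BACKUP_INTERVAL_SECONDS}s"
while true; do
    sleep "$BACKUP_INTERVAL_SECONDS"
    /scripts/run-backup.sh || echo "Backup run failed" >&2
done
```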

## User Import Directory

- Add new import/ directory for user-provided database files and configurations
- Support for custom SQL files, configuration overrides, and example templates
- Automatic import of user-provided databases and configs during initialization
- Documentation and examples for custom database imports

## Configuration & Environment

- Eliminate CLIENT_DATA_VERSION warning by adding default value syntax (see the example after this list)
- Improve CLIENT_DATA_VERSION documentation in .env.template
- Add comprehensive database import settings to .env and .env.template
- Update setup.sh to handle new configuration variables with proper defaults
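
For reference, the default-value syntax that silences the "variable is not set" warning looks like this; the fallback value shown is illustrative, not the project's actual default:

```bash
# ${VAR:-default} substitutes a default when the variable is unset or empty,
# both in shell scripts and in docker compose interpolation, so no warning is emitted.
CLIENT_DATA_VERSION="${CLIENT_DATA_VERSION:-latest}"   # "latest" is a placeholder default
echo "Using client data version: $CLIENT_DATA_VERSION"
```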

## Monitoring & Debugging

- Add status dashboard with Go-based terminal UI (statusdash.go)
- Implement JSON status output (statusjson.sh) for programmatic access
- Add comprehensive database health check script (sketched after this list)
- Add repair-storage-permissions.sh utility for permission issues
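
A hedged sketch of what such a health check can look like; the container name, credentials, and database names below are assumptions, not the exact values used by this project:

```bash
# Check that each core database (plus Playerbots when enabled) answers a trivial query.
# "ac-database", MYSQL_USER/MYSQL_PASS, and the acore_* names are illustrative.
DBS="acore_auth acore_characters acore_world"
[ "${PLAYERBOTS_ENABLED:-0}" = "1" ] && DBS="$DBS acore_playerbots"

for db in $DBS; do
    if docker exec ac-database \
        mysql -u"${MYSQL_USER:-root}" -p"${MYSQL_PASS:-password}" \
        -e "SELECT 1" "$db" >/dev/null 2>&1; then
        echo "✅ $db reachable"
    else
        echo "❌ $db missing or unreachable"
    fi
done
```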

## Testing & Documentation

- Add Phase 1 integration test suite for module installation verification
- Add comprehensive documentation for:
  - Database management (DATABASE_MANAGEMENT.md)
  - Module SQL analysis (AZEROTHCORE_MODULE_SQL_ANALYSIS.md)
  - Implementation mapping (IMPLEMENTATION_MAP.md)
  - SQL staging comparison and path coverage
  - Module assets and DBC file requirements
- Update SCRIPTS.md, ADVANCED.md, and troubleshooting documentation
- Update references from database-import/ to import/ directory

## Breaking Changes

- Renamed database-import/ directory to import/ for clarity
- Module SQL files now staged to AzerothCore-compatible paths
- db-guard container now required for proper database lifecycle management

## Bug Fixes

- Fix module SQL staging directory structure for AzerothCore compatibility
- Handle rsync exit code 23 gracefully during deployments
- Prevent backup from running immediately on container startup
- Correct SQL staging paths for proper module installation
2025-11-20 18:26:00 -05:00

126 lines
3.6 KiB
Bash
Executable File

#!/bin/bash
# Copy user database files or full backup archives from import/db/ or database-import/ to backup system
set -euo pipefail
# Source environment variables
if [ -f ".env" ]; then
set -a
source .env
set +a
fi
# Support both new (import/db) and legacy (database-import) directories
IMPORT_DIR_NEW="./import/db"
IMPORT_DIR_LEGACY="./database-import"
# Prefer new directory if it has files, otherwise fall back to legacy
IMPORT_DIR="$IMPORT_DIR_NEW"
if [ ! -d "$IMPORT_DIR" ] || [ -z "$(ls -A "$IMPORT_DIR" 2>/dev/null)" ]; then
IMPORT_DIR="$IMPORT_DIR_LEGACY"
fi
STORAGE_PATH="${STORAGE_PATH:-./storage}"
STORAGE_PATH_LOCAL="${STORAGE_PATH_LOCAL:-./local-storage}"
BACKUP_ROOT="${STORAGE_PATH}/backups"
MYSQL_DATA_VOLUME_NAME="${MYSQL_DATA_VOLUME_NAME:-mysql-data}"
ALPINE_IMAGE="${ALPINE_IMAGE:-alpine:latest}"
shopt -s nullglob
sql_files=("$IMPORT_DIR"/*.sql "$IMPORT_DIR"/*.sql.gz)
shopt -u nullglob
if [ ! -d "$IMPORT_DIR" ] || [ ${#sql_files[@]} -eq 0 ]; then
echo "📁 No loose database files found in $IMPORT_DIR - skipping import"
exit 0
fi
# Exit if backup system already has databases restored
has_restore_marker(){
    # Prefer Docker volume marker (post-migration), fall back to legacy host path
    if command -v docker >/dev/null 2>&1; then
        if docker volume inspect "$MYSQL_DATA_VOLUME_NAME" >/dev/null 2>&1; then
            if docker run --rm \
                -v "${MYSQL_DATA_VOLUME_NAME}:/var/lib/mysql-persistent" \
                "$ALPINE_IMAGE" \
                sh -c 'test -f /var/lib/mysql-persistent/.restore-completed' >/dev/null 2>&1; then
                return 0
            fi
        fi
    fi
    if [ -f "${STORAGE_PATH_LOCAL}/mysql-data/.restore-completed" ]; then
        return 0
    fi
    return 1
}
if has_restore_marker; then
    echo "✅ Database already restored - skipping import"
    exit 0
fi
echo "📥 Found ${#sql_files[@]} database files in $IMPORT_DIR"
echo "📂 Bundling files for backup import..."
# Ensure backup directory exists
mkdir -p "$BACKUP_ROOT"
# Append _2, _3, ... until the candidate path does not already exist
generate_unique_path(){
    local target="$1"
    local base="$target"
    local counter=2
    while [ -e "$target" ]; do
        target="${base}_${counter}"
        counter=$((counter + 1))
    done
    printf '%s\n' "$target"
}
stage_backup_directory(){
    local src_dir="$1"
    if [ -z "$src_dir" ] || [ ! -d "$src_dir" ]; then
        echo "⚠️ Invalid source directory: $src_dir" >&2
        return 1
    fi
    local dirname
    dirname="$(basename "$src_dir")"
    local dest="$BACKUP_ROOT/$dirname"
    dest="$(generate_unique_path "$dest")"
    # Status messages go to stderr so callers capturing stdout receive only the destination path
    echo "📦 Copying backup directory $(basename "$src_dir") → $(basename "$dest")" >&2
    if ! cp -a "$src_dir" "$dest"; then
        echo "❌ Failed to copy backup directory" >&2
        return 1
    fi
    printf '%s\n' "$dest"
}
bundle_loose_files(){
    local batch_timestamp
    batch_timestamp="$(date +%Y%m%d_%H%M%S)"
    local batch_name="ImportBackup_${batch_timestamp}"
    local batch_dir="$IMPORT_DIR/$batch_name"
    local moved=0
    batch_dir="$(generate_unique_path "$batch_dir")"
    if ! mkdir -p "$batch_dir"; then
        echo "❌ Failed to create batch directory: $batch_dir"
        exit 1
    fi
    # Move loose SQL files into a single timestamped batch directory
    for file in "${sql_files[@]}"; do
        [ -f "$file" ] || continue
        echo "📦 Moving $(basename "$file") → $(basename "$batch_dir")/"
        if ! mv "$file" "$batch_dir/"; then
            echo "❌ Failed to move $file"
            exit 1
        fi
        moved=$((moved + 1))
    done
    echo "🗂️ Created import batch $(basename "$batch_dir") with $moved file(s)"
    local dest_path
    dest_path="$(stage_backup_directory "$batch_dir")"
    echo "✅ Backup batch copied to $(basename "$dest_path")"
    echo "💡 Files will be automatically imported during deployment"
}
bundle_loose_files