feat: comprehensive module system and database management improvements

This commit introduces major enhancements to the module installation system,
database management, and configuration handling for AzerothCore deployments.

## Module System Improvements

### Module SQL Staging & Installation
- Refactor module SQL staging to properly handle AzerothCore's sql/ directory structure
- Fix SQL staging path to use correct AzerothCore format (sql/custom/db_*/*)
- Implement conditional module database importing based on enabled modules
- Add support for both cpp-modules and lua-scripts module types
- Handle rsync exit code 23 (partial transfer, e.g. permission warnings) gracefully during deployment (see the sketch after this list)
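
A minimal sketch of the exit-23 handling, written in Python for consistency with the other examples in this commit; the function name and paths are illustrative, not the repository's actual staging code:

```python
import subprocess

def stage_module_sql(src: str, dest: str) -> None:
    """Copy staged SQL, tolerating rsync exit code 23 ("partial transfer
    due to error", typically permission warnings on files we don't need)."""
    result = subprocess.run(["rsync", "-a", src, dest])
    if result.returncode == 23:
        print("rsync reported a partial transfer (exit 23); continuing")
    elif result.returncode != 0:
        raise RuntimeError(f"rsync failed with exit code {result.returncode}")
```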

### Module Manifest & Automation
- Add automated module manifest generation via GitHub Actions workflow
- Implement Python-based module manifest updater with comprehensive validation (manifest shape sketched below)
- Add module dependency tracking and SQL file discovery
- Support for blocked modules and module metadata management
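
For illustration, the manifest fields that statusjson.sh (added below) reads back suggest entries shaped roughly like this; the sample values and the validator are hypothetical:

```python
# Hypothetical entry from config/module-manifest.json, shaped after the
# fields statusjson.sh consumes (key, description, category, type).
sample_entry = {
    "key": "MODULE_EXAMPLE",
    "description": "Example module",
    "category": "gameplay",
    "type": "cpp-modules",  # or "lua-scripts"
}

REQUIRED_FIELDS = ("key", "description", "category", "type")

def validate_entry(entry: dict) -> list[str]:
    """Return the list of required fields missing from one manifest entry."""
    return [f for f in REQUIRED_FIELDS if f not in entry]
```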

## Database Management Enhancements

### Database Import System
- Add db-guard container for continuous database health monitoring and verification
- Implement conditional database import that skips when databases are already current (see the skip-check sketch below)
- Add backup restoration and SQL staging coordination
- Support for Playerbots database (4th database) in all import operations
- Add comprehensive database health checking and status reporting
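
A sketch of the skip decision, assuming the `updates` tracking table that AzerothCore maintains in each core database; the helper and its wiring are hypothetical:

```python
import subprocess

def pending_updates(database: str, staged_files: set[str],
                    root_password: str) -> set[str]:
    """Compare staged SQL file names against the `updates` tracking table.
    An empty return value means everything staged is already applied,
    so the import for this database can be skipped."""
    result = subprocess.run(
        ["docker", "exec", "ac-mysql", "mysql", "-N", "-B",
         f"-p{root_password}", database, "-e", "SELECT name FROM updates;"],
        capture_output=True, text=True, check=True)
    applied = set(result.stdout.strip().splitlines())
    return staged_files - applied
```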

### Database Configuration
- Implement 10 new dbimport.conf settings from environment variables (injection sketched after this list):
  - Database.Reconnect.Seconds/Attempts for connection reliability
  - Updates.AllowedModules for module auto-update control
  - Updates.Redundancy for data integrity checks
  - Worker/Synch thread settings for all three core databases
- Auto-apply dbimport.conf settings via auto-post-install.sh
- Add environment variable injection for db-import and db-guard containers
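
A minimal sketch of the env-to-conf injection. The conf keys come from the list above; the environment variable names are illustrative, and the Worker/Synch thread keys for the three core databases would extend the map the same way:

```python
import os
import re
from pathlib import Path

# Hypothetical env-var -> dbimport.conf key mapping.
SETTINGS = {
    "DB_RECONNECT_SECONDS": "Database.Reconnect.Seconds",
    "DB_RECONNECT_ATTEMPTS": "Database.Reconnect.Attempts",
    "DB_UPDATES_ALLOWED_MODULES": "Updates.AllowedModules",
    "DB_UPDATES_REDUNDANCY": "Updates.Redundancy",
}

def apply_settings(conf_path: Path) -> None:
    """Rewrite "Key = value" lines in dbimport.conf from the environment."""
    text = conf_path.read_text()
    for var, key in SETTINGS.items():
        val = os.environ.get(var)
        if val is None:
            continue  # leave the shipped default untouched
        pattern = re.compile(rf"^{re.escape(key)}\s*=.*$", re.MULTILINE)
        if pattern.search(text):
            text = pattern.sub(f"{key} = {val}", text)
        else:
            text += f"\n{key} = {val}\n"
    conf_path.write_text(text)
```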

### Backup & Recovery
- Fix backup scheduler to prevent immediate execution on container startup (see the loop sketch below)
- Add backup status monitoring script with detailed reporting
- Implement backup import/export utilities
- Add database verification scripts for SQL update tracking
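
The scheduler fix amounts to sleeping a full interval before the first run; a minimal sketch, with a hypothetical backup hook:

```python
import time

def run_scheduler(interval_hours: float, do_backup) -> None:
    """Sleep a full interval *before* the first backup, so a container
    (re)start never triggers an immediate run."""
    while True:
        time.sleep(interval_hours * 3600)
        do_backup()
```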

## User Import Directory

- Add new import/ directory for user-provided database files and configurations
- Support for custom SQL files, configuration overrides, and example templates
- Automatic import of user-provided databases and configs during initialization (see the sketch after this list)
- Documentation and examples for custom database imports
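
A sketch of the initialization-time import, assuming a flat import/ layout and the ac-mysql container used elsewhere in this commit; the target database and helper are illustrative:

```python
import subprocess
from pathlib import Path

def apply_user_sql(root_password: str, import_dir: str = "import",
                   database: str = "acore_world") -> None:
    """Feed any user-provided .sql files from import/ into the target
    database, in sorted order, during initialization."""
    for sql_file in sorted(Path(import_dir).glob("*.sql")):
        with sql_file.open("rb") as fh:
            subprocess.run(
                ["docker", "exec", "-i", "ac-mysql",
                 "mysql", f"-p{root_password}", database],
                stdin=fh, check=True)
        print(f"applied {sql_file.name}")
```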

## Configuration & Environment

- Eliminate the CLIENT_DATA_VERSION warning by adding shell-style default-value syntax (`${VAR:-default}`)
- Improve CLIENT_DATA_VERSION documentation in .env.template
- Add comprehensive database import settings to .env and .env.template
- Update setup.sh to handle new configuration variables with proper defaults

## Monitoring & Debugging

- Add status dashboard with Go-based terminal UI (statusdash.go)
- Implement JSON status output (statusjson.sh) for programmatic access (consumer example below)
- Add comprehensive database health check script
- Add repair-storage-permissions.sh utility for permission issues
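
Since statusjson.sh prints a single JSON document on stdout, programmatic consumers can parse it directly; a small example, run from the project root:

```python
import json
import subprocess

# Parse the one-line JSON document statusjson.sh prints on stdout.
snapshot = json.loads(subprocess.run(
    ["scripts/bash/statusjson.sh"],
    capture_output=True, text=True, check=True).stdout)

# List any containers that are not currently running.
down = [s["name"] for s in snapshot["services"] if s["status"] != "running"]
print("services not running:", down or "none")
```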

## Testing & Documentation

- Add Phase 1 integration test suite for module installation verification
- Add comprehensive documentation for:
  - Database management (DATABASE_MANAGEMENT.md)
  - Module SQL analysis (AZEROTHCORE_MODULE_SQL_ANALYSIS.md)
  - Implementation mapping (IMPLEMENTATION_MAP.md)
  - SQL staging comparison and path coverage
  - Module assets and DBC file requirements
- Update SCRIPTS.md, ADVANCED.md, and troubleshooting documentation
- Update references from database-import/ to import/ directory

## Breaking Changes

- Renamed database-import/ directory to import/ for clarity
- Module SQL files now staged to AzerothCore-compatible paths
- db-guard container now required for proper database lifecycle management

## Bug Fixes

- Fix module SQL staging directory structure for AzerothCore compatibility
- Handle rsync exit code 23 gracefully during deployments
- Prevent backup from running immediately on container startup
- Correct SQL staging paths for proper module installation
---
Author: uprightbass360
Date: 2025-11-20 18:26:00 -05:00
Committed by: Deckard
Parent: 0d83f01995
Commit: e6231bb4a4
56 changed files with 11298 additions and 487 deletions

scripts/bash/statusjson.sh (new executable file, 293 lines)

@@ -0,0 +1,293 @@
#!/usr/bin/env python3
import json
import os
import re
import socket
import subprocess
import time
from pathlib import Path

PROJECT_DIR = Path(__file__).resolve().parents[2]
ENV_FILE = PROJECT_DIR / ".env"


def load_env():
    """Parse KEY=VALUE pairs from .env, skipping blanks, comments, and inline comments."""
    env = {}
    if ENV_FILE.exists():
        for line in ENV_FILE.read_text().splitlines():
            if not line or line.strip().startswith('#'):
                continue
            if '=' not in line:
                continue
            key, val = line.split('=', 1)
            val = val.split('#', 1)[0].strip()
            env[key.strip()] = val
    return env


def read_env(env, key, default=""):
    return env.get(key, default)


def docker_exists(name):
    """Return True if a container with this name exists (running or not)."""
    result = subprocess.run([
        "docker", "ps", "-a", "--format", "{{.Names}}"
    ], capture_output=True, text=True)
    names = set(result.stdout.split())
    return name in names


def docker_inspect(name, template):
    """Run docker inspect with a Go template; return "" on failure."""
    try:
        result = subprocess.run([
            "docker", "inspect", f"--format={template}", name
        ], capture_output=True, text=True, check=True)
        return result.stdout.strip()
    except subprocess.CalledProcessError:
        return ""


def service_snapshot(name, label):
    """Collect status, health, start time, image, and exit code for one container."""
    status = "missing"
    health = "none"
    started = ""
    image = ""
    exit_code = ""
    if docker_exists(name):
        status = docker_inspect(name, "{{.State.Status}}") or status
        health = docker_inspect(name, "{{if .State.Health}}{{.State.Health.Status}}{{else}}none{{end}}") or health
        started = docker_inspect(name, "{{.State.StartedAt}}") or ""
        image = docker_inspect(name, "{{.Config.Image}}") or ""
        exit_code = docker_inspect(name, "{{.State.ExitCode}}") or "0"
    return {
        "name": name,
        "label": label,
        "status": status,
        "health": health,
        "started_at": started,
        "image": image,
        "exit_code": exit_code,
    }


def port_reachable(port):
    """Attempt a 1-second TCP connection to 127.0.0.1:port."""
    if not port:
        return False
    try:
        port = int(port)
    except ValueError:
        return False
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=1):
            return True
    except OSError:
        return False


def module_list(env):
    """List enabled MODULE_* entries from .env, enriched from the module manifest."""
    manifest_path = PROJECT_DIR / "config" / "module-manifest.json"
    manifest_map = {}
    if manifest_path.exists():
        try:
            manifest_data = json.loads(manifest_path.read_text())
            for mod in manifest_data.get("modules", []):
                manifest_map[mod["key"]] = mod
        except Exception:
            pass
    modules = []
    pattern = re.compile(r"^MODULE_([A-Z0-9_]+)=1$")
    if ENV_FILE.exists():
        for line in ENV_FILE.read_text().splitlines():
            m = pattern.match(line.strip())
            if m:
                key = "MODULE_" + m.group(1)
                raw = m.group(1).lower().replace('_', ' ')
                title = raw.title()
                # Look up manifest info
                mod_info = manifest_map.get(key, {})
                modules.append({
                    "name": title,
                    "key": key,
                    "description": mod_info.get("description", "No description available"),
                    "category": mod_info.get("category", "unknown"),
                    "type": mod_info.get("type", "unknown")
                })
    return modules


def dir_info(path):
    """Report existence and human-readable size (via du -sh) for a directory."""
    p = Path(path)
    exists = p.exists()
    size = "--"
    if exists:
        try:
            result = subprocess.run(
                ["du", "-sh", str(p)],
                stdout=subprocess.PIPE,
                stderr=subprocess.DEVNULL,
                text=True,
                check=False,
            )
            if result.stdout:
                size = result.stdout.split()[0]
        except Exception:
            size = "--"
    return {"path": str(p), "exists": exists, "size": size}


def volume_info(name, fallback=None):
    """Inspect a Docker volume, trying a fallback name if the first is missing."""
    candidates = [name]
    if fallback:
        candidates.append(fallback)
    for cand in candidates:
        result = subprocess.run(["docker", "volume", "inspect", cand], capture_output=True, text=True)
        if result.returncode == 0:
            try:
                data = json.loads(result.stdout)[0]
                return {
                    "name": cand,
                    "exists": True,
                    "mountpoint": data.get("Mountpoint", "-")
                }
            except Exception:
                pass
    return {"name": name, "exists": False, "mountpoint": "-"}


def expand_path(value, env):
    """Expand ${STORAGE_PATH} and ${STORAGE_PATH_LOCAL} placeholders in a path."""
    storage = read_env(env, "STORAGE_PATH", "./storage")
    local_storage = read_env(env, "STORAGE_PATH_LOCAL", "./local-storage")
    value = value.replace('${STORAGE_PATH}', storage)
    value = value.replace('${STORAGE_PATH_LOCAL}', local_storage)
    return value


def mysql_query(env, database, query):
    """Run a scalar query inside the ac-mysql container; return 0 on any failure."""
    password = read_env(env, "MYSQL_ROOT_PASSWORD")
    user = read_env(env, "MYSQL_USER", "root")
    if not password or not database:
        return 0
    cmd = [
        "docker", "exec", "ac-mysql",
        "mysql", "-N", "-B",
        f"-u{user}", f"-p{password}", database,
        "-e", query
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        value = result.stdout.strip().splitlines()[-1]
        return int(value)
    except Exception:
        return 0


def user_stats(env):
    """Account and character counts from the auth and characters databases."""
    db_auth = read_env(env, "DB_AUTH_NAME", "acore_auth")
    db_characters = read_env(env, "DB_CHARACTERS_NAME", "acore_characters")
    accounts = mysql_query(env, db_auth, "SELECT COUNT(*) FROM account;")
    online = mysql_query(env, db_auth, "SELECT COUNT(*) FROM account WHERE online = 1;")
    active = mysql_query(env, db_auth, "SELECT COUNT(*) FROM account WHERE last_login >= DATE_SUB(UTC_TIMESTAMP(), INTERVAL 7 DAY);")
    characters = mysql_query(env, db_characters, "SELECT COUNT(*) FROM characters;")
    return {
        "accounts": accounts,
        "online": online,
        "characters": characters,
        "active7d": active,
    }


def docker_stats():
    """Get CPU and memory stats for running containers."""
    try:
        result = subprocess.run([
            "docker", "stats", "--no-stream", "--no-trunc",
            "--format", "{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
        ], capture_output=True, text=True, check=True, timeout=4)
        stats = {}
        for line in result.stdout.strip().splitlines():
            parts = line.split('\t')
            if len(parts) == 4:
                name, cpu, mem_usage, mem_perc = parts
                # Parse CPU percentage (e.g., "0.50%" -> 0.50)
                cpu_val = cpu.replace('%', '').strip()
                try:
                    cpu_float = float(cpu_val)
                except ValueError:
                    cpu_float = 0.0
                # Parse memory percentage
                mem_perc_val = mem_perc.replace('%', '').strip()
                try:
                    mem_perc_float = float(mem_perc_val)
                except ValueError:
                    mem_perc_float = 0.0
                stats[name] = {
                    "cpu": cpu_float,
                    "memory": mem_usage.strip(),
                    "memory_percent": mem_perc_float
                }
        return stats
    except Exception:
        return {}


def main():
    env = load_env()
    project = read_env(env, "COMPOSE_PROJECT_NAME", "acore-compose")
    network = read_env(env, "NETWORK_NAME", "azerothcore")
    services = [
        ("ac-mysql", "MySQL"),
        ("ac-backup", "Backup"),
        ("ac-volume-init", "Volume Init"),
        ("ac-storage-init", "Storage Init"),
        ("ac-db-init", "DB Init"),
        ("ac-db-import", "DB Import"),
        ("ac-authserver", "Auth Server"),
        ("ac-worldserver", "World Server"),
        ("ac-client-data", "Client Data"),
        ("ac-modules", "Module Manager"),
        ("ac-post-install", "Post Install"),
        ("ac-phpmyadmin", "phpMyAdmin"),
        ("ac-keira3", "Keira3"),
    ]
    service_data = [service_snapshot(name, label) for name, label in services]
    port_entries = [
        {"name": "Auth", "port": read_env(env, "AUTH_EXTERNAL_PORT"), "reachable": port_reachable(read_env(env, "AUTH_EXTERNAL_PORT"))},
        {"name": "World", "port": read_env(env, "WORLD_EXTERNAL_PORT"), "reachable": port_reachable(read_env(env, "WORLD_EXTERNAL_PORT"))},
        {"name": "SOAP", "port": read_env(env, "SOAP_EXTERNAL_PORT"), "reachable": port_reachable(read_env(env, "SOAP_EXTERNAL_PORT"))},
        {"name": "MySQL", "port": read_env(env, "MYSQL_EXTERNAL_PORT"), "reachable": port_reachable(read_env(env, "MYSQL_EXTERNAL_PORT")) if read_env(env, "COMPOSE_OVERRIDE_MYSQL_EXPOSE_ENABLED", "0") == "1" else False},
        {"name": "phpMyAdmin", "port": read_env(env, "PMA_EXTERNAL_PORT"), "reachable": port_reachable(read_env(env, "PMA_EXTERNAL_PORT"))},
        {"name": "Keira3", "port": read_env(env, "KEIRA3_EXTERNAL_PORT"), "reachable": port_reachable(read_env(env, "KEIRA3_EXTERNAL_PORT"))},
    ]
    storage_path = expand_path(read_env(env, "STORAGE_PATH", "./storage"), env)
    local_storage_path = expand_path(read_env(env, "STORAGE_PATH_LOCAL", "./local-storage"), env)
    client_data_path = expand_path(read_env(env, "CLIENT_DATA_PATH", f"{storage_path}/client-data"), env)
    storage_info = {
        "storage": dir_info(storage_path),
        "local_storage": dir_info(local_storage_path),
        "client_data": dir_info(client_data_path),
        "modules": dir_info(os.path.join(storage_path, "modules")),
        "local_modules": dir_info(os.path.join(local_storage_path, "modules")),
    }
    volumes = {
        "client_cache": volume_info(f"{project}_client-data-cache"),
        "mysql_data": volume_info(f"{project}_mysql-data", "mysql-data"),
    }
    data = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "project": project,
        "network": network,
        "services": service_data,
        "ports": port_entries,
        "modules": module_list(env),
        "storage": storage_info,
        "volumes": volumes,
        "users": user_stats(env),
        "stats": docker_stats(),
    }
    print(json.dumps(data))


if __name__ == "__main__":
    main()