Compare commits


16 Commits

Author SHA1 Message Date
a176dd2365 Makefile: avoid vault errors when detecting current host 2026-01-01 12:19:07 -05:00
98e0fc0bed Fix lint regressions after rebase 2026-01-01 12:10:45 -05:00
baf3e3de09 Refactor playbooks: servers/workstations, split monitoring, improve shell 2026-01-01 11:35:24 -05:00
69a39e5e5b Add POTE app project support and improve IP conflict detection (#3)
## Summary

This PR adds comprehensive support for deploying the **POTE** application project via Ansible, along with improvements to IP conflict detection and a new app stack provisioning system for Proxmox-managed LXC containers.

## Key Features

### 🆕 New Roles
- **`roles/pote`**: Python/venv deployment role for POTE (PostgreSQL, cron jobs, Alembic migrations)
- **`roles/app_setup`**: Generic app deployment role (Node.js/systemd)
- **`roles/base_os`**: Base OS hardening role

### 🛡️ Safety Improvements
- IP uniqueness validation within projects
- Proxmox-side IP conflict detection
- Enhanced error messages for IP conflicts
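As a rough illustration (the addresses below are invented, not from the repo), the project-level IP uniqueness check boils down to a duplicate scan over the declared addresses:

```shell
# Hypothetical address list for one project; any duplicate means two
# guests claim the same IP, so provisioning should abort.
IPS="10.0.10.11
10.0.10.12
10.0.10.11"
DUPES=$(printf '%s\n' "$IPS" | sort | uniq -d)
if [ -n "$DUPES" ]; then
  echo "IP conflict detected: $DUPES"
fi
```

`uniq -d` prints each duplicated value once, which doubles as the error message payload.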

### 📦 New Playbooks
- `playbooks/app/site.yml`: End-to-end app stack deployment
- `playbooks/app/provision_vms.yml`: Proxmox guest provisioning
- `playbooks/app/configure_app.yml`: OS + application configuration

## Security
- All secrets stored in encrypted `vault.yml`
- Deploy keys excluded via `.gitignore`
- No plaintext secrets committed
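A minimal sketch of how the "no plaintext secrets" claim can be spot-checked: a vault-encrypted file always begins with the `$ANSIBLE_VAULT` format header, while plaintext YAML does not (the file and ciphertext below are fabricated):

```shell
# Create a fake vault file and verify it carries the vault header.
f=$(mktemp)
printf '$ANSIBLE_VAULT;1.1;AES256\n3663383437\n' > "$f"   # fake ciphertext body
if head -n 1 "$f" | grep -q '^\$ANSIBLE_VAULT'; then
  echo "encrypted"
else
  echo "PLAINTEXT - do not commit"
fi
rm -f "$f"
```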

## Testing
- POTE successfully deployed to dev/qa/prod environments
- All components validated (Git, PostgreSQL, cron, migrations)

Co-authored-by: ilia <ilia@levkin.ca>
Reviewed-on: #3
2026-01-01 11:19:54 -05:00
e897b1a027 Fix: Resolve linting errors and improve firewall configuration (#2)
Some checks failed
CI / lint-and-test (push) Successful in 1m16s
CI / ansible-validation (push) Successful in 5m49s
CI / secret-scanning (push) Successful in 1m33s
CI / dependency-scan (push) Successful in 2m48s
CI / sast-scan (push) Successful in 5m46s
CI / license-check (push) Successful in 1m11s
CI / vault-check (push) Failing after 5m25s
CI / playbook-test (push) Successful in 5m32s
CI / container-scan (push) Successful in 4m32s
CI / sonar-analysis (push) Successful in 6m53s
CI / workflow-summary (push) Successful in 1m6s
- Fix UFW firewall to allow outbound traffic (was blocking all outbound)
- Add HOST parameter support to shell Makefile target
- Fix all ansible-lint errors (trailing spaces, missing newlines, document starts)
- Add changed_when: false to check commands
- Fix variable naming (vault_devGPU -> vault_devgpu)
- Update .ansible-lint config to exclude .gitea/ and allow strategy: free
- Fix NodeSource repository GPG key handling in shell playbook
- Add missing document starts to host_vars files
- Clean up empty lines in datascience role files

Reviewed-on: #2
2025-12-25 16:47:26 -05:00
95a301ae3f Merge pull request 'Fix: Update CI workflow to use Alpine-based images, install Node.js and Trivy with improved methods, and enhance dependency scanning steps' (#1) from update-ci into master
All checks were successful
CI / lint-and-test (push) Successful in 59s
CI / ansible-validation (push) Successful in 2m14s
CI / secret-scanning (push) Successful in 57s
CI / dependency-scan (push) Successful in 1m4s
CI / sast-scan (push) Successful in 1m57s
CI / license-check (push) Successful in 57s
CI / vault-check (push) Successful in 1m53s
CI / playbook-test (push) Successful in 1m57s
CI / container-scan (push) Successful in 1m26s
CI / sonar-analysis (push) Successful in 2m1s
CI / workflow-summary (push) Successful in 55s
Reviewed-on: #1
2025-12-17 22:45:00 -05:00
ilia c017ec6941 Fix: Update CI workflow to install a fixed version of Trivy for improved reliability and error handling during installation
All checks were successful
CI / lint-and-test (pull_request) Successful in 1m2s
CI / ansible-validation (pull_request) Successful in 3m6s
CI / secret-scanning (pull_request) Successful in 56s
CI / dependency-scan (pull_request) Successful in 1m0s
CI / sast-scan (pull_request) Successful in 2m13s
CI / license-check (pull_request) Successful in 57s
CI / vault-check (pull_request) Successful in 2m8s
CI / playbook-test (pull_request) Successful in 2m2s
CI / container-scan (pull_request) Successful in 1m26s
CI / sonar-analysis (pull_request) Successful in 2m3s
CI / workflow-summary (pull_request) Successful in 52s
2025-12-15 15:50:04 -05:00
ilia 9e7ef8159b Fix: Update CI workflow to disable SCM in SonarScanner configuration for improved analysis accuracy
Some checks failed
CI / lint-and-test (pull_request) Successful in 57s
CI / ansible-validation (pull_request) Successful in 2m20s
CI / secret-scanning (pull_request) Successful in 54s
CI / dependency-scan (pull_request) Successful in 59s
CI / sast-scan (pull_request) Successful in 2m26s
CI / license-check (pull_request) Successful in 57s
CI / vault-check (pull_request) Successful in 2m34s
CI / playbook-test (pull_request) Successful in 2m37s
CI / container-scan (pull_request) Failing after 1m42s
CI / sonar-analysis (pull_request) Successful in 2m18s
CI / workflow-summary (pull_request) Successful in 52s
2025-12-15 15:36:15 -05:00
ilia 3828e04b13 Fix: Update CI workflow to install Git alongside Node.js and enhance SonarScanner installation process with improved error handling
All checks were successful
CI / lint-and-test (pull_request) Successful in 59s
CI / ansible-validation (pull_request) Successful in 3m32s
CI / secret-scanning (pull_request) Successful in 56s
CI / dependency-scan (pull_request) Successful in 1m3s
CI / sast-scan (pull_request) Successful in 2m54s
CI / license-check (pull_request) Successful in 59s
CI / vault-check (pull_request) Successful in 2m43s
CI / playbook-test (pull_request) Successful in 3m7s
CI / container-scan (pull_request) Successful in 1m54s
CI / sonar-analysis (pull_request) Successful in 2m5s
CI / workflow-summary (pull_request) Successful in 52s
2025-12-15 15:11:36 -05:00
ilia d6655babd9 Refactor: Simplify connectivity analysis logic by breaking down into smaller helper functions for improved readability and maintainability
All checks were successful
CI / lint-and-test (pull_request) Successful in 1m0s
CI / ansible-validation (pull_request) Successful in 2m12s
CI / secret-scanning (pull_request) Successful in 54s
CI / dependency-scan (pull_request) Successful in 58s
CI / sast-scan (pull_request) Successful in 2m58s
CI / license-check (pull_request) Successful in 59s
CI / vault-check (pull_request) Successful in 2m50s
CI / playbook-test (pull_request) Successful in 2m42s
CI / container-scan (pull_request) Successful in 1m44s
CI / sonar-analysis (pull_request) Successful in 2m12s
CI / workflow-summary (pull_request) Successful in 51s
2025-12-15 14:55:10 -05:00
ilia dc94395bbc Fix: Enhance SonarScanner error handling in CI workflow with detailed failure messages and troubleshooting guidance
All checks were successful
CI / lint-and-test (pull_request) Successful in 57s
CI / ansible-validation (pull_request) Successful in 2m20s
CI / secret-scanning (pull_request) Successful in 53s
CI / dependency-scan (pull_request) Successful in 58s
CI / sast-scan (pull_request) Successful in 2m14s
CI / license-check (pull_request) Successful in 55s
CI / vault-check (pull_request) Successful in 2m9s
CI / playbook-test (pull_request) Successful in 2m4s
CI / container-scan (pull_request) Successful in 1m27s
CI / sonar-analysis (pull_request) Successful in 2m5s
CI / workflow-summary (pull_request) Successful in 51s
2025-12-14 21:35:52 -05:00
ilia 699aaefac3 Fix: Update CI workflow to improve SonarScanner installation process with enhanced error handling and version management
All checks were successful
CI / lint-and-test (pull_request) Successful in 57s
CI / ansible-validation (pull_request) Successful in 2m16s
CI / secret-scanning (pull_request) Successful in 53s
CI / dependency-scan (pull_request) Successful in 57s
CI / sast-scan (pull_request) Successful in 2m5s
CI / license-check (pull_request) Successful in 54s
CI / vault-check (pull_request) Successful in 1m53s
CI / playbook-test (pull_request) Successful in 2m20s
CI / container-scan (pull_request) Successful in 1m35s
CI / sonar-analysis (pull_request) Successful in 2m16s
CI / workflow-summary (pull_request) Successful in 51s
2025-12-14 21:21:26 -05:00
ilia 277a22d962 Fix: Clean up duplicate repository entries in application and development roles 2025-12-14 21:21:19 -05:00
ilia 83a5d988af Fix: Update ansible-lint configuration to exclude specific paths and skip certain rules for improved linting flexibility
Some checks failed
CI / lint-and-test (pull_request) Successful in 58s
CI / ansible-validation (pull_request) Successful in 2m17s
CI / secret-scanning (pull_request) Successful in 53s
CI / dependency-scan (pull_request) Successful in 57s
CI / sast-scan (pull_request) Successful in 2m17s
CI / license-check (pull_request) Successful in 55s
CI / vault-check (pull_request) Successful in 2m20s
CI / playbook-test (pull_request) Successful in 2m16s
CI / container-scan (pull_request) Successful in 1m25s
CI / sonar-analysis (pull_request) Failing after 1m56s
CI / workflow-summary (pull_request) Successful in 50s
2025-12-14 21:04:45 -05:00
ilia a45ee496e4 Fix: Update CI workflow to use Ubuntu 22.04 container, install Node.js and SonarScanner with improved methods, and enhance SonarQube connectivity verification
Some checks failed
CI / lint-and-test (pull_request) Successful in 57s
CI / ansible-validation (pull_request) Successful in 2m6s
CI / secret-scanning (pull_request) Successful in 53s
CI / dependency-scan (pull_request) Successful in 57s
CI / sast-scan (pull_request) Successful in 1m55s
CI / license-check (pull_request) Successful in 54s
CI / vault-check (pull_request) Successful in 1m58s
CI / playbook-test (pull_request) Successful in 1m58s
CI / container-scan (pull_request) Successful in 1m31s
CI / sonar-analysis (pull_request) Failing after 2m36s
CI / workflow-summary (pull_request) Successful in 50s
2025-12-14 20:51:36 -05:00
ilia e54ecfefc1 Fix: Update CI workflow to enhance playbook syntax checking and improve SonarQube connectivity verification
Some checks failed
CI / lint-and-test (pull_request) Successful in 58s
CI / ansible-validation (pull_request) Successful in 2m15s
CI / secret-scanning (pull_request) Successful in 54s
CI / dependency-scan (pull_request) Successful in 58s
CI / sast-scan (pull_request) Successful in 2m11s
CI / license-check (pull_request) Successful in 54s
CI / vault-check (pull_request) Successful in 1m54s
CI / playbook-test (pull_request) Successful in 1m52s
CI / container-scan (pull_request) Successful in 1m27s
CI / sonar-analysis (pull_request) Failing after 50s
CI / workflow-summary (pull_request) Successful in 50s
2025-12-14 20:43:28 -05:00
133 changed files with 5305 additions and 3232 deletions

```diff
@@ -1,18 +1,29 @@
-# Ansible Lint Configuration
 ---
-# Exclude patterns
+# ansible-lint configuration
+#
+# We exclude inventory host/group vars because many contain vault-encrypted content
+# that cannot be parsed without vault secrets in CI/dev environments.
 exclude_paths:
+  - inventories/production/host_vars/
+  - inventories/production/group_vars/all/vault.yml
+  - inventories/production/group_vars/all/vault.example.yml
+  # Exclude patterns
   - .cache/
   - .github/
+  - .gitea/
   - .ansible/

 # Skip specific rules
 skip_list:
   - yaml[line-length]  # Allow longer lines in some cases
+  - yaml[document-start]  # Allow missing document start in vault files
+  - yaml[truthy]  # Allow different truthy values in workflow files
   - name[casing]  # Allow mixed case in task names
   - args[module]  # Skip args rule that causes "file name too long" issues
   - var-naming[no-role-prefix]  # Allow shorter variable names for readability
   - risky-shell-pipe  # Allow shell pipes in maintenance scripts
+  - run-once[play]  # Allow strategy: free for parallel execution

 # Warn instead of error for these
 warn_list:
```

```diff
@@ -0,0 +1,33 @@
+## Project rules (Ansible infrastructure repo)
+
+### Canonical documentation
+
+- Start here: `project-docs/index.md`
+- Architecture: `project-docs/architecture.md`
+- Standards: `project-docs/standards.md`
+- Workflow: `project-docs/workflow.md`
+- Decisions: `project-docs/decisions.md`
+
+### Repo structure (high level)
+
+- **Inventory**: `inventories/production/`
+- **Playbooks**: `playbooks/`
+  - `playbooks/servers.yml`: server baseline
+  - `playbooks/workstations.yml`: workstation baseline + desktop apps on `desktop` group only
+  - `playbooks/app/*`: Proxmox app-project suite
+- **Roles**: `roles/*` (standard Ansible role layout)
+
+### Key standards to follow
+
+- **YAML**: 2-space indentation; tasks must have `name:`
+- **Modules**: prefer native modules; use FQCN (e.g., `ansible.builtin.*`, `community.general.*`)
+- **Idempotency**: no "always-changed" shell tasks; use `changed_when:` / `creates:` / `removes:`
+- **Secrets**: never commit plaintext; use Ansible Vault with `vault_`-prefixed vars
+- **Makefile-first**: prefer `make ...` targets over raw `ansible-playbook`
+
+### Architectural decisions (must not regress)
+
+- Editor/IDE installation is **out of scope** for Ansible roles/playbooks.
+- Monitoring is split: `monitoring_server` vs `monitoring_desktop`.
+- Desktop applications run only for `desktop` group (via workstations playbook).
```

```diff
@@ -1,14 +1,71 @@
 ---
 name: CI

-on:
+"on":
   push:
     branches: [master]
   pull_request:
+    types: [opened, synchronize, reopened]

 jobs:
-  lint-and-test:
+  # Check if CI should be skipped based on branch name or commit message
+  # Simple skip pattern: @skipci (case-insensitive)
+  skip-ci-check:
     runs-on: ubuntu-latest
+    outputs:
+      should-skip: ${{ steps.check.outputs.skip }}
+    steps:
+      - name: Check out code (for commit message)
+        uses: actions/checkout@v4
+        with:
+          fetch-depth: 1
+      - name: Check if CI should be skipped
+        id: check
+        run: |
+          # Simple skip pattern: @skipci (case-insensitive)
+          # Works in branch names and commit messages
+          SKIP_PATTERN="@skipci"
+          # Get branch name (works for both push and PR)
+          BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}"
+          # Get commit message (works for both push and PR)
+          COMMIT_MSG="${GITHUB_EVENT_HEAD_COMMIT_MESSAGE:-}"
+          if [ -z "$COMMIT_MSG" ]; then
+            COMMIT_MSG="${GITHUB_EVENT_PULL_REQUEST_HEAD_COMMIT_MESSAGE:-}"
+          fi
+          if [ -z "$COMMIT_MSG" ]; then
+            COMMIT_MSG=$(git log -1 --pretty=%B 2>/dev/null || echo "")
+          fi
+          SKIP=0
+          # Check branch name (case-insensitive)
+          if echo "$BRANCH_NAME" | grep -qiF "$SKIP_PATTERN"; then
+            echo "Skipping CI: branch name contains '$SKIP_PATTERN'"
+            SKIP=1
+          fi
+          # Check commit message (case-insensitive)
+          if [ $SKIP -eq 0 ] && [ -n "$COMMIT_MSG" ]; then
+            if echo "$COMMIT_MSG" | grep -qiF "$SKIP_PATTERN"; then
+              echo "Skipping CI: commit message contains '$SKIP_PATTERN'"
+              SKIP=1
+            fi
+          fi
+          echo "skip=$SKIP" >> $GITHUB_OUTPUT
+          echo "Branch: $BRANCH_NAME"
+          echo "Commit: ${COMMIT_MSG:0:50}..."
+          echo "Skip CI: $SKIP"
+
+  lint-and-test:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
+    runs-on: ubuntu-latest
-    # Skip push events for non-master branches (they'll be covered by PR events)
-    if: github.event_name == 'pull_request' || github.ref == 'refs/heads/master'
     container:
       image: node:20-bullseye
     steps:
```
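As an aside, the `@skipci` gate in this workflow diff reduces to a case-insensitive fixed-string match (`grep -iF`); the branch name below is a made-up example:

```shell
# Demonstrate the skip check in isolation: mixed case still matches
# because grep -i ignores case and -F treats the pattern literally.
SKIP_PATTERN="@skipci"
BRANCH_NAME="feature/docs-@SkipCI"
SKIP=0
if echo "$BRANCH_NAME" | grep -qiF "$SKIP_PATTERN"; then
  SKIP=1
fi
echo "skip=$SKIP"
```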
```diff
@@ -26,13 +83,17 @@ jobs:
       continue-on-error: true

   ansible-validation:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
     runs-on: ubuntu-latest
-    # Skip push events for non-master branches (they'll be covered by PR events)
-    if: github.event_name == 'pull_request' || github.ref == 'refs/heads/master'
     container:
       image: ubuntu:22.04
     steps:
       - name: Install Node.js for checkout action
         run: |
-          apt-get update && apt-get install -y curl
+          apt-get update && apt-get install -y curl git
           curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
           apt-get install -y nodejs
```
```diff
@@ -60,6 +121,8 @@ jobs:
       continue-on-error: true

   secret-scanning:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
     runs-on: ubuntu-latest
     container:
       image: zricethezav/gitleaks:latest
```
```diff
@@ -78,6 +141,8 @@ jobs:
       continue-on-error: true

   dependency-scan:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
     runs-on: ubuntu-latest
     container:
       image: aquasec/trivy:latest
```
```diff
@@ -93,6 +158,8 @@ jobs:
         run: trivy fs --scanners vuln,secret --exit-code 0 .

   sast-scan:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
     runs-on: ubuntu-latest
     container:
       image: ubuntu:22.04
```
```diff
@@ -116,6 +183,8 @@ jobs:
       continue-on-error: true

   license-check:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
     runs-on: ubuntu-latest
     container:
       image: node:20-bullseye
```
```diff
@@ -136,6 +205,8 @@ jobs:
       continue-on-error: true

   vault-check:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
     runs-on: ubuntu-latest
     container:
       image: ubuntu:22.04
```
```diff
@@ -159,7 +230,7 @@ jobs:
       - name: Validate vault files are encrypted
         run: |
           echo "Checking for Ansible Vault files..."
-          vault_files=$(find . -name "*vault*.yml" -o -name "*vault*.yaml" | grep -v ".git" || true)
+          vault_files=$(find . -name "*vault*.yml" -o -name "*vault*.yaml" | grep -v ".git" | grep -v ".example" || true)
           if [ -z "$vault_files" ]; then
             echo "No vault files found"
             exit 0
```
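The tightened file filter in this hunk can be exercised locally: `vault.example.yml` is a committed plaintext template, so it must not be flagged by the encryption check (throwaway temp directory, purely illustrative):

```shell
# Build a tiny fixture tree and apply the same find | grep -v pipeline;
# only the real vault file should survive the filter.
d=$(mktemp -d)
touch "$d/vault.yml" "$d/vault.example.yml"
find "$d" -name "*vault*.yml" | grep -v ".example"   # prints only .../vault.yml
rm -rf "$d"
```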
```diff
@@ -182,6 +253,8 @@ jobs:
           echo "All vault files are properly encrypted!"

   playbook-test:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
     runs-on: ubuntu-latest
     container:
       image: ubuntu:22.04
```
```diff
@@ -218,12 +291,17 @@ jobs:
             fi
           done
           if [ $failed -eq 1 ]; then
-            echo "Some playbooks have errors (this is expected without inventory/vault)"
-            exit 0
+            echo "❌ Some playbooks have syntax errors!"
+            echo "Note: This may be expected if playbooks require inventory/vault, but syntax errors should still be fixed."
+            exit 1
+          else
+            echo "✅ All playbooks passed syntax check"
           fi
         continue-on-error: true

   container-scan:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
     runs-on: ubuntu-latest
     container:
       image: ubuntu:22.04
```
```diff
@@ -239,22 +317,43 @@ jobs:
       - name: Install Trivy
         run: |
+          set -e
           apt-get update && apt-get install -y wget curl tar
-          # Try multiple download methods for reliability
-          echo "Downloading Trivy..."
-          if wget -q "https://github.com/aquasecurity/trivy/releases/latest/download/trivy_linux_amd64.tar.gz" -O /tmp/trivy.tar.gz 2>&1; then
-            echo "Downloaded tar.gz, extracting..."
-            tar -xzf /tmp/trivy.tar.gz -C /tmp/ trivy
-            mv /tmp/trivy /usr/local/bin/trivy
-          elif wget -q "https://github.com/aquasecurity/trivy/releases/latest/download/trivy_linux_amd64" -O /usr/local/bin/trivy 2>&1; then
-            echo "Downloaded binary directly"
-          else
-            echo "Failed to download Trivy, trying with version detection..."
-            TRIVY_VERSION=$(curl -s https://api.github.com/repos/aquasecurity/trivy/releases/latest | grep tag_name | cut -d '"' -f 4 | sed 's/v//')
-            wget -q "https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_Linux-64bit.tar.gz" -O /tmp/trivy.tar.gz
-            tar -xzf /tmp/trivy.tar.gz -C /tmp/ trivy
-            mv /tmp/trivy /usr/local/bin/trivy
-          fi
+          # Use a fixed, known-good Trivy version to avoid URL/redirect issues
+          TRIVY_VERSION="0.58.2"
+          TRIVY_URL="https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_Linux-64bit.tar.gz"
+          echo "Installing Trivy version: ${TRIVY_VERSION}"
+          echo "Downloading from: ${TRIVY_URL}"
+          if ! wget --progress=bar:force "${TRIVY_URL}" -O /tmp/trivy.tar.gz 2>&1; then
+            echo "❌ Failed to download Trivy archive"
+            echo "Checking if file was partially downloaded:"
+            ls -lh /tmp/trivy.tar.gz 2>/dev/null || echo "No file found"
+            exit 1
+          fi
+          if [ ! -f /tmp/trivy.tar.gz ] || [ ! -s /tmp/trivy.tar.gz ]; then
+            echo "❌ Downloaded Trivy archive is missing or empty"
+            exit 1
+          fi
+          echo "Download complete. File size: $(du -h /tmp/trivy.tar.gz | cut -f1)"
+          echo "Extracting Trivy..."
+          if ! tar -xzf /tmp/trivy.tar.gz -C /tmp/ trivy; then
+            echo "❌ Failed to extract Trivy binary from archive"
+            tar -tzf /tmp/trivy.tar.gz 2>&1 | head -20 || true
+            exit 1
+          fi
+          if [ ! -f /tmp/trivy ]; then
+            echo "❌ Trivy binary not found after extraction"
+            ls -la /tmp/ | grep trivy || ls -la /tmp/ | head -20
+            exit 1
+          fi
+          mv /tmp/trivy /usr/local/bin/trivy
           chmod +x /usr/local/bin/trivy
           /usr/local/bin/trivy --version
           trivy --version
```
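Pinning the release is what makes this installer reliable: the URL is fully deterministic, with no `latest` redirect to break. The construction can be checked in isolation (version number taken from the diff above; no network access needed):

```shell
# Build the pinned download URL exactly as the workflow does.
TRIVY_VERSION="0.58.2"
TRIVY_URL="https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_Linux-64bit.tar.gz"
echo "$TRIVY_URL"
```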
```diff
@@ -273,27 +372,139 @@ jobs:
       continue-on-error: true

   sonar-analysis:
+    needs: skip-ci-check
+    if: needs.skip-ci-check.outputs.should-skip != '1'
     runs-on: ubuntu-latest
     container:
-      image: sonarsource/sonar-scanner-cli:latest
+      image: ubuntu:22.04
     env:
       SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
       SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
     steps:
-      - name: Install Node.js for checkout action
-        run: |
-          apk add --no-cache nodejs npm curl
       - name: Check out code
         uses: actions/checkout@v4
+      - name: Install Java and SonarScanner
+        run: |
+          set -e
+          apt-get update && apt-get install -y wget curl unzip openjdk-21-jre
+          # Use a known working version to avoid download issues
+          SONAR_SCANNER_VERSION="5.0.1.3006"
+          SCANNER_URL="https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-${SONAR_SCANNER_VERSION}-linux.zip"
+          echo "Installing SonarScanner version: ${SONAR_SCANNER_VERSION}"
+          echo "Downloading from: ${SCANNER_URL}"
+          # Download with verbose error output
+          if ! wget --progress=bar:force "${SCANNER_URL}" -O /tmp/sonar-scanner.zip 2>&1; then
+            echo "❌ Failed to download SonarScanner"
+            echo "Checking if file was partially downloaded:"
+            ls -lh /tmp/sonar-scanner.zip 2>/dev/null || echo "No file found"
+            exit 1
+          fi
+          # Verify download
+          if [ ! -f /tmp/sonar-scanner.zip ] || [ ! -s /tmp/sonar-scanner.zip ]; then
+            echo "❌ Downloaded file is missing or empty"
+            exit 1
+          fi
+          echo "Download complete. File size: $(du -h /tmp/sonar-scanner.zip | cut -f1)"
+          echo "Extracting SonarScanner..."
+          if ! unzip -q /tmp/sonar-scanner.zip -d /tmp; then
+            echo "❌ Failed to extract SonarScanner"
+            echo "Archive info:"
+            file /tmp/sonar-scanner.zip || true
+            unzip -l /tmp/sonar-scanner.zip 2>&1 | head -20 || true
+            exit 1
+          fi
+          # Find the extracted directory (handle both naming conventions)
+          EXTRACTED_DIR=""
+          if [ -d "/tmp/sonar-scanner-${SONAR_SCANNER_VERSION}-linux" ]; then
+            EXTRACTED_DIR="/tmp/sonar-scanner-${SONAR_SCANNER_VERSION}-linux"
+          elif [ -d "/tmp/sonar-scanner-cli-${SONAR_SCANNER_VERSION}-linux" ]; then
+            EXTRACTED_DIR="/tmp/sonar-scanner-cli-${SONAR_SCANNER_VERSION}-linux"
+          else
+            # Try to find any sonar-scanner directory
+            EXTRACTED_DIR=$(find /tmp -maxdepth 1 -type d -name "*sonar-scanner*" | head -1)
+          fi
+          if [ -z "$EXTRACTED_DIR" ] || [ ! -d "$EXTRACTED_DIR" ]; then
+            echo "❌ SonarScanner directory not found after extraction"
+            echo "Contents of /tmp:"
+            ls -la /tmp/ | grep -E "(sonar|zip)" || ls -la /tmp/ | head -20
+            exit 1
+          fi
+          echo "Found extracted directory: ${EXTRACTED_DIR}"
+          mv "${EXTRACTED_DIR}" /opt/sonar-scanner
+          # Create symlink
+          if [ -f /opt/sonar-scanner/bin/sonar-scanner ]; then
+            ln -sf /opt/sonar-scanner/bin/sonar-scanner /usr/local/bin/sonar-scanner
+            chmod +x /opt/sonar-scanner/bin/sonar-scanner
+            chmod +x /usr/local/bin/sonar-scanner
+          else
+            echo "❌ sonar-scanner binary not found in /opt/sonar-scanner/bin/"
+            echo "Contents of /opt/sonar-scanner/bin/:"
+            ls -la /opt/sonar-scanner/bin/ || true
+            exit 1
+          fi
+          echo "Verifying installation..."
+          if ! sonar-scanner --version; then
+            echo "❌ SonarScanner verification failed"
+            echo "PATH: $PATH"
+            which sonar-scanner || echo "sonar-scanner not in PATH"
+            exit 1
+          fi
+          echo "✓ SonarScanner installed successfully"
+      - name: Verify SonarQube connection
+        run: |
+          echo "Checking SonarQube connectivity..."
+          if [ -z "$SONAR_HOST_URL" ] || [ -z "$SONAR_TOKEN" ]; then
+            echo "❌ ERROR: SONAR_HOST_URL or SONAR_TOKEN secrets are not set!"
+            echo "Please configure them in: Repository Settings → Actions → Secrets"
+            exit 1
+          fi
+          echo "✓ Secrets are configured"
+          echo "SonarQube URL: ${SONAR_HOST_URL}"
+          echo "Testing connectivity to SonarQube server..."
+          if curl -f -s -o /dev/null -w "%{http_code}" "${SONAR_HOST_URL}/api/system/status" | grep -q "200"; then
+            echo "✓ SonarQube server is reachable"
+          else
+            echo "⚠️ Warning: Could not verify SonarQube server connectivity"
+          fi
       - name: Run SonarScanner
         run: |
-          sonar-scanner \
-            -Dsonar.projectKey=ansible-infra \
+          echo "Starting SonarQube analysis..."
+          if ! sonar-scanner \
+            -Dsonar.projectKey=ansible \
             -Dsonar.sources=. \
             -Dsonar.host.url=${SONAR_HOST_URL} \
-            -Dsonar.login=${SONAR_TOKEN}
+            -Dsonar.token=${SONAR_TOKEN} \
+            -Dsonar.scm.disabled=true \
+            -Dsonar.python.version=3.10 \
+            -X; then
+            echo ""
+            echo "❌ SonarScanner analysis failed!"
+            echo ""
+            echo "Common issues:"
+            echo "  1. Project 'ansible' doesn't exist in SonarQube"
+            echo "     → Create it manually in SonarQube UI"
+            echo "  2. Token doesn't have permission to analyze/create project"
+            echo "     → Ensure token has 'Execute Analysis' permission"
+            echo "  3. Token doesn't have 'Create Projects' permission (if project doesn't exist)"
+            echo "     → Grant this permission in SonarQube user settings"
+            echo ""
+            echo "Check SonarQube logs for more details."
+            exit 1
+          fi
         continue-on-error: true

   workflow-summary:
```

.gitignore (10 changed lines)

```diff
@@ -6,6 +6,16 @@
 *.tmp
 *.bak
 *~
+vault.yml.bak.*
+# Deploy keys and SSH private keys - NEVER commit these!
+*_deploy_key
+*_deploy_key.pub
+*.pem
+*.key
+id_rsa
+id_ed25519
+id_ecdsa

 # Python bytecode
 __pycache__/
```

Makefile (154 changed lines)

```diff
@@ -1,4 +1,4 @@
-.PHONY: help bootstrap lint test check dev datascience inventory inventory-all local clean status tailscale tailscale-check tailscale-dev tailscale-status create-vault create-vm monitoring
+.PHONY: help bootstrap lint test check dev datascience inventory inventory-all local servers workstations clean status tailscale tailscale-check tailscale-dev tailscale-status create-vault create-vm monitoring
 .DEFAULT_GOAL := help

 ## Colors for output
```
```diff
@@ -13,9 +13,12 @@ RESET := \033[0m
 PLAYBOOK_SITE := playbooks/site.yml
 PLAYBOOK_DEV := playbooks/development.yml
 PLAYBOOK_LOCAL := playbooks/local.yml
+PLAYBOOK_SERVERS := playbooks/servers.yml
+PLAYBOOK_WORKSTATIONS := playbooks/workstations.yml
 PLAYBOOK_MAINTENANCE := playbooks/maintenance.yml
 PLAYBOOK_TAILSCALE := playbooks/tailscale.yml
 PLAYBOOK_PROXMOX := playbooks/infrastructure/proxmox-vm.yml
+PLAYBOOK_PROXMOX_INFO := playbooks/app/proxmox_info.yml

 # Collection and requirement paths
 COLLECTIONS_REQ := collections/requirements.yml
```
```diff
@@ -32,7 +35,8 @@ ANSIBLE_ARGS := --vault-password-file ~/.ansible-vault-pass
 ## Auto-detect current host to exclude from remote operations
 CURRENT_IP := $(shell hostname -I | awk '{print $$1}')
-CURRENT_HOST := $(shell ansible-inventory --list | jq -r '._meta.hostvars | to_entries[] | select(.value.ansible_host == "$(CURRENT_IP)") | .key' 2>/dev/null | head -1)
+# NOTE: inventory parsing may require vault secrets. Keep this best-effort and silent in CI.
+CURRENT_HOST := $(shell ansible-inventory --list --vault-password-file ~/.ansible-vault-pass 2>/dev/null | jq -r '._meta.hostvars | to_entries[] | select(.value.ansible_host == "$(CURRENT_IP)") | .key' 2>/dev/null | head -1)
 EXCLUDE_CURRENT := $(if $(CURRENT_HOST),--limit '!$(CURRENT_HOST)',)

 help: ## Show this help message
```
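For reference, the `CURRENT_IP` half of this detection is just first-field extraction from a `hostname -I` style, space-separated address list (the addresses below are sample data, not from the repo):

```shell
# hostname -I prints all addresses on one line; awk takes the first.
ADDRS="192.168.1.50 10.8.0.2 172.17.0.1"
CURRENT_IP=$(printf '%s' "$ADDRS" | awk '{print $1}')
echo "$CURRENT_IP"
```

The `CURRENT_HOST` lookup then matches that address against `ansible_host` in the inventory, which is why it needs the vault password file.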
```diff
@@ -152,6 +156,18 @@ test-syntax: ## Run comprehensive syntax and validation checks
         fi; \
     done
     @echo ""
+    @echo "$(YELLOW)App Project Playbooks:$(RESET)"
+    @for playbook in playbooks/app/site.yml playbooks/app/provision_vms.yml playbooks/app/configure_app.yml playbooks/app/ssh_client_config.yml; do \
+        if [ -f "$$playbook" ]; then \
+            printf "  %-25s " "$$playbook"; \
+            if ansible-playbook "$$playbook" --syntax-check >/dev/null 2>&1; then \
+                echo "$(GREEN)✓ OK$(RESET)"; \
+            else \
+                echo "$(RED)✗ FAIL$(RESET)"; \
+            fi; \
+        fi; \
+    done
+    @echo ""
     @echo "$(YELLOW)Role Test Playbooks:$(RESET)"
     @for test_playbook in roles/*/tests/test.yml; do \
         if [ -f "$$test_playbook" ]; then \
```
```diff
@@ -195,11 +211,15 @@ test-syntax: ## Run comprehensive syntax and validation checks
     @for yaml_file in inventories/production/group_vars/all/main.yml; do \
         if [ -f "$$yaml_file" ]; then \
            printf "  %-25s " "$$yaml_file (YAML)"; \
+            if python3 -c "import yaml" >/dev/null 2>&1; then \
                if python3 -c "import yaml; yaml.safe_load(open('$$yaml_file'))" >/dev/null 2>&1; then \
                    echo "$(GREEN)✓ OK$(RESET)"; \
                else \
                    echo "$(RED)✗ FAIL$(RESET)"; \
                fi; \
+            else \
+                echo "$(YELLOW)⚠ Skipped (PyYAML not installed)$(RESET)"; \
+            fi; \
         fi; \
     done
     @printf "  %-25s " "ansible.cfg (INI)"; \
```
@ -234,15 +254,45 @@ local: ## Run the local playbook on localhost
@echo "$(YELLOW)Applying local playbook...$(RESET)" @echo "$(YELLOW)Applying local playbook...$(RESET)"
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_LOCAL) -K $(ANSIBLE_PLAYBOOK) $(PLAYBOOK_LOCAL) -K
servers: ## Run baseline server playbook (usage: make servers [GROUP=services] [HOST=host1])
@echo "$(YELLOW)Applying server baseline...$(RESET)"
@EXTRA=""; \
if [ -n "$(HOST)" ]; then \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_SERVERS) --limit $(HOST); \
elif [ -n "$(GROUP)" ]; then \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_SERVERS) -e target_group=$(GROUP); \
else \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_SERVERS); \
fi
workstations: ## Run workstation baseline (usage: make workstations [GROUP=dev] [HOST=dev01])
@echo "$(YELLOW)Applying workstation baseline...$(RESET)"
@if [ -n "$(HOST)" ]; then \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_WORKSTATIONS) --limit $(HOST); \
elif [ -n "$(GROUP)" ]; then \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_WORKSTATIONS) -e target_group=$(GROUP); \
else \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_WORKSTATIONS); \
fi
# Host-specific targets
dev: ## Run on specific host (usage: make dev HOST=dev01 [SUDO=true] [SSH_PASS=true])
ifndef HOST
@echo "$(RED)Error: HOST parameter required$(RESET)"
@echo "Usage: make dev HOST=dev01 [SUDO=true] [SSH_PASS=true]"
@exit 1
endif
@echo "$(YELLOW)Running on host: $(HOST)$(RESET)"
@SSH_FLAGS=""; \
SUDO_FLAGS=""; \
if [ "$(SSH_PASS)" = "true" ]; then \
SSH_FLAGS="-k"; \
fi; \
if [ "$(SUDO)" = "true" ]; then \
SUDO_FLAGS="-K"; \
fi; \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_DEV) --limit $(HOST) $(ANSIBLE_ARGS) $$SSH_FLAGS $$SUDO_FLAGS
# Data science role
datascience: ## Install data science stack (usage: make datascience HOST=server01)
@@ -354,17 +404,26 @@ docker: ## Install/configure Docker only
@echo "$(YELLOW)Running Docker setup...$(RESET)"
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_DEV) --tags docker
shell: ## Configure shell (usage: make shell [HOST=dev02] [SUDO=true])
ifdef HOST
@echo "$(YELLOW)Running shell configuration on host: $(HOST)$(RESET)"
@if [ "$(SUDO)" = "true" ]; then \
$(ANSIBLE_PLAYBOOK) playbooks/shell.yml --limit $(HOST) $(ANSIBLE_ARGS) -K; \
else \
$(ANSIBLE_PLAYBOOK) playbooks/shell.yml --limit $(HOST) $(ANSIBLE_ARGS); \
fi
else
@echo "$(YELLOW)Running shell configuration on all dev hosts...$(RESET)"
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_DEV) --tags shell
endif
shell-all: ## Configure shell on all hosts (usage: make shell-all)
@echo "$(YELLOW)Running shell configuration on all hosts...$(RESET)"
$(ANSIBLE_PLAYBOOK) playbooks/shell.yml $(ANSIBLE_ARGS)
apps: ## Install applications only
@echo "$(YELLOW)Installing applications...$(RESET)"
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_WORKSTATIONS) --tags apps
# Connectivity targets
ping: auto-fallback ## Ping hosts with colored output (usage: make ping [GROUP=dev] [HOST=dev01])
@@ -528,6 +587,81 @@ monitoring: ## Install monitoring tools on all machines
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_DEV) --tags monitoring
@echo "$(GREEN)✓ Monitoring installation complete$(RESET)"
proxmox-info: ## Show Proxmox VM/LXC info (usage: make proxmox-info [PROJECT=projectA] [ALL=true] [TYPE=lxc|qemu|all])
@echo "$(YELLOW)Querying Proxmox guest info...$(RESET)"
@EXTRA=""; \
if [ -n "$(PROJECT)" ]; then EXTRA="$$EXTRA -e app_project=$(PROJECT)"; fi; \
if [ "$(ALL)" = "true" ]; then EXTRA="$$EXTRA -e proxmox_info_all=true"; fi; \
if [ -n "$(TYPE)" ]; then EXTRA="$$EXTRA -e proxmox_info_type=$(TYPE)"; fi; \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_PROXMOX_INFO) $$EXTRA
app-provision: ## Provision app project containers/VMs on Proxmox (usage: make app-provision PROJECT=projectA)
ifndef PROJECT
@echo "$(RED)Error: PROJECT parameter required$(RESET)"
@echo "Usage: make app-provision PROJECT=projectA"
@exit 1
endif
@echo "$(YELLOW)Provisioning app project guests on Proxmox: $(PROJECT)$(RESET)"
$(ANSIBLE_PLAYBOOK) playbooks/app/provision_vms.yml -e app_project=$(PROJECT)
app-configure: ## Configure OS + app on project guests (usage: make app-configure PROJECT=projectA)
ifndef PROJECT
@echo "$(RED)Error: PROJECT parameter required$(RESET)"
@echo "Usage: make app-configure PROJECT=projectA"
@exit 1
endif
@echo "$(YELLOW)Configuring app project guests: $(PROJECT)$(RESET)"
$(ANSIBLE_PLAYBOOK) playbooks/app/configure_app.yml -e app_project=$(PROJECT)
app: ## Provision + configure app project (usage: make app PROJECT=projectA)
ifndef PROJECT
@echo "$(RED)Error: PROJECT parameter required$(RESET)"
@echo "Usage: make app PROJECT=projectA"
@exit 1
endif
@echo "$(YELLOW)Provisioning + configuring app project: $(PROJECT)$(RESET)"
$(ANSIBLE_PLAYBOOK) playbooks/app/site.yml -e app_project=$(PROJECT)
# Timeshift snapshot and rollback
timeshift-snapshot: ## Create Timeshift snapshot (usage: make timeshift-snapshot HOST=dev02)
ifndef HOST
@echo "$(RED)Error: HOST parameter required$(RESET)"
@echo "Usage: make timeshift-snapshot HOST=dev02"
@exit 1
endif
@echo "$(YELLOW)Creating Timeshift snapshot on $(HOST)...$(RESET)"
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_DEV) --limit $(HOST) --tags timeshift,snapshot
@echo "$(GREEN)✓ Snapshot created$(RESET)"
timeshift-list: ## List Timeshift snapshots (usage: make timeshift-list HOST=dev02)
ifndef HOST
@echo "$(RED)Error: HOST parameter required$(RESET)"
@echo "Usage: make timeshift-list HOST=dev02"
@exit 1
endif
@echo "$(YELLOW)Listing Timeshift snapshots on $(HOST)...$(RESET)"
@$(ANSIBLE_PLAYBOOK) playbooks/timeshift.yml --limit $(HOST) -e "timeshift_action=list" $(ANSIBLE_ARGS)
timeshift-restore: ## Restore from Timeshift snapshot (usage: make timeshift-restore HOST=dev02 SNAPSHOT=2025-12-17_21-30-00)
ifndef HOST
@echo "$(RED)Error: HOST parameter required$(RESET)"
@echo "Usage: make timeshift-restore HOST=dev02 SNAPSHOT=2025-12-17_21-30-00"
@exit 1
endif
ifndef SNAPSHOT
@echo "$(RED)Error: SNAPSHOT parameter required$(RESET)"
@echo "Usage: make timeshift-restore HOST=dev02 SNAPSHOT=2025-12-17_21-30-00"
@echo "$(YELLOW)Available snapshots:$(RESET)"
@$(MAKE) timeshift-list HOST=$(HOST)
@exit 1
endif
@echo "$(RED)WARNING: This will restore the system to snapshot $(SNAPSHOT)$(RESET)"
@echo "$(YELLOW)This action cannot be undone. Continue? [y/N]$(RESET)"
@read -r confirm && [ "$$confirm" = "y" ] || exit 1
@echo "$(YELLOW)Restoring snapshot $(SNAPSHOT) on $(HOST)...$(RESET)"
@$(ANSIBLE_PLAYBOOK) playbooks/timeshift.yml --limit $(HOST) -e "timeshift_action=restore timeshift_snapshot=$(SNAPSHOT)" $(ANSIBLE_ARGS)
@echo "$(GREEN)✓ Snapshot restored$(RESET)"
test-connectivity: ## Test host connectivity with detailed diagnostics and recommendations
@echo "$(YELLOW)Testing host connectivity...$(RESET)"
@if [ -f "test_connectivity.py" ]; then \

README.md
@@ -1,178 +1,81 @@
# Ansible Infrastructure Management

Ansible automation for development machines, service hosts, and **Proxmox-managed guests** (LXC-first, with a path for KVM VMs).
## Quick start

```bash
# Install Python deps + Ansible collections
make bootstrap

# Edit secrets (Proxmox credentials, SSH public key, etc.)
make edit-group-vault

# Validate the repo
make test-syntax
```
## Proxmox app projects (LXC-first)

This repo can provision and configure **dev/qa/prod guests per application project** using the `app_projects` model.

- **Configure projects**: `inventories/production/group_vars/all/main.yml` (`app_projects`)
- **Configure secrets**: `inventories/production/group_vars/all/vault.yml` (encrypted)
- **Run end-to-end**:

```bash
make app PROJECT=projectA
```

Other useful entry points:

- **Provision only**: `make app-provision PROJECT=projectA`
- **Configure only**: `make app-configure PROJECT=projectA`
- **Info / safety**: `make proxmox-info [PROJECT=projectA] [ALL=true] [TYPE=lxc|qemu|all]`

Safety notes:

- **IP conflict precheck**: provisioning fails if the target IP responds (override with `-e allow_ip_conflicts=true` only if you really mean it).
- **VMID/CTID collision guardrail**: provisioning fails if the VMID exists but the guest name doesn't match (override with `-e allow_vmid_collision=true` only if you really mean it).
- **No destructive playbooks**: this repo intentionally does **not** ship "destroy/decommission" automation.

Docs:

- `docs/guides/app_stack_proxmox.md`
- `docs/guides/app_stack_execution_flow.md`
## Project structure (relevant paths)
```
ansible/
├── Makefile
├── ansible.cfg
├── collections/requirements.yml
├── inventories/production/
│   ├── hosts
│   ├── group_vars/all/
│   │   ├── main.yml
│   │   ├── vault.yml
│   │   └── vault.example.yml
│   └── host_vars/
├── playbooks/
│   ├── app/
│   │   ├── site.yml
│   │   ├── provision_vms.yml
│   │   ├── configure_app.yml
│   │   └── proxmox_info.yml
│   └── site.yml
└── roles/
    ├── proxmox_vm/
    ├── base_os/
    ├── app_setup/
    └── pote/
```
## Documentation

- **Guides**: `docs/guides/`
- **Reference**: `docs/reference/`
- **Project docs (architecture/standards/workflow)**: `project-docs/index.md`

collections/requirements.yml
@@ -1,4 +1,6 @@
---
# Collections required for this repo.
# Install with: ansible-galaxy collection install -r collections/requirements.yml

collections:
  - name: community.general
    version: ">=6.0.0"

configure_app.yml (new file)
@@ -0,0 +1,7 @@
---
# Wrapper playbook.
# Purpose:
#   ansible-playbook -i inventories/production configure_app.yml -e app_project=projectA

- name: Configure app project guests
  import_playbook: playbooks/app/configure_app.yml

docs/ROADMAP.md (new file)
@@ -0,0 +1,157 @@
# Project Roadmap & Future Improvements
Ideas and plans for enhancing the Ansible infrastructure.
## 🚀 Quick Wins (< 30 minutes each)
### Monitoring Enhancements
- [ ] Add Grafana + Prometheus for service monitoring dashboard
- [ ] Implement health check scripts for critical services
- [ ] Create custom Ansible callback plugin for better output
### Security Improvements
- [ ] Add ClamAV antivirus scanning
- [ ] Implement Lynis security auditing
- [ ] Set up automatic security updates with unattended-upgrades
- [ ] Add SSH key rotation mechanism
- [ ] Implement connection monitoring and alerting
## 📊 Medium Projects (1-2 hours each)
### Infrastructure Services
- [ ] **Centralized Logging**: Deploy ELK stack (Elasticsearch, Logstash, Kibana)
- [ ] **Container Orchestration**: Implement Docker Swarm or K3s
- [ ] **CI/CD Pipeline**: Set up GitLab Runner or Jenkins
- [ ] **Network Storage**: Configure NFS or Samba shares
- [ ] **DNS Server**: Deploy Pi-hole for ad blocking and local DNS
### New Service VMs
- [ ] **Monitoring VM**: Dedicated Prometheus + Grafana instance
- [ ] **Media VM**: Plex/Jellyfin media server
- [ ] **Security VM**: Security scanning and vulnerability monitoring
- [ ] **Database VM**: PostgreSQL/MySQL for application data
## 🎯 Service-Specific Enhancements
### giteaVM (Alpine)
Current: Git repository hosting ✅
- [ ] Add CI/CD runners
- [ ] Implement package registry
- [ ] Set up webhook integrations
- [ ] Add code review tools
### portainerVM (Alpine)
Current: Container management ✅
- [ ] Deploy Docker registry
- [ ] Add image vulnerability scanning
- [ ] Set up container monitoring
### homepageVM (Debian)
Current: Service dashboard ✅
- [ ] Add uptime monitoring (Uptime Kuma)
- [ ] Create public status page
- [ ] Implement service dependency mapping
- [ ] Add performance metrics display
### Development VMs
Current: Development environment ✅
- [ ] Add code quality tools (SonarQube)
- [ ] Deploy testing environments
- [ ] Implement development databases
- [ ] Set up local package caching (Artifactory/Nexus)
## 🔧 Ansible Improvements
### Role Enhancements
- [ ] Create reusable database role (PostgreSQL, MySQL, Redis)
- [ ] Develop monitoring role with multiple backends
- [ ] Build certificate management role (Let's Encrypt)
- [ ] Create reverse proxy role (nginx/traefik)
### Playbook Optimization
- [ ] Implement dynamic inventory from cloud providers
- [ ] Add parallel execution strategies
- [ ] Create rollback mechanisms
- [ ] Implement blue-green deployment patterns
### Testing & Quality
- [ ] Add Molecule tests for all roles
- [ ] Implement GitHub Actions CI/CD
- [ ] Create integration test suite
- [ ] Add performance benchmarking
## 📈 Long-term Goals
### High Availability
- [ ] Implement cluster management for critical services
- [ ] Set up load balancing
- [ ] Create disaster recovery procedures
- [ ] Implement automated failover
### Observability
- [ ] Full APM (Application Performance Monitoring)
- [ ] Distributed tracing
- [ ] Log aggregation and analysis
- [ ] Custom metrics and dashboards
### Automation
- [ ] GitOps workflow implementation
- [ ] Self-healing infrastructure
- [ ] Automated scaling
- [ ] Predictive maintenance
## 📝 Documentation Improvements
- [ ] Create video tutorials
- [ ] Add architecture diagrams
- [ ] Write troubleshooting guides
- [ ] Create role development guide
- [ ] Add contribution guidelines
## Priority Matrix
### ✅ **COMPLETED (This Week)**
1. ~~Fix any existing shell issues~~ - Shell configuration working
2. ~~Complete vault setup with all secrets~~ - Tailscale auth key in vault
3. ~~Deploy monitoring basics~~ - System monitoring deployed
4. ~~Fix Tailscale handler issues~~ - Case-sensitive handlers fixed
### 🎯 **IMMEDIATE (Next)**
1. **Security hardening** - ClamAV, Lynis auditing, vulnerability scanning
2. **Enhanced monitoring** - Add Grafana + Prometheus
3. **SSH key management** - Fix remaining connectivity issues
### Short-term (This Month)
1. Centralized logging
2. Enhanced monitoring
3. Security auditing
4. Advanced security monitoring
### Medium-term (Quarter)
1. CI/CD pipeline
2. Container orchestration
3. Service mesh
4. Advanced monitoring
### Long-term (Year)
1. Full HA implementation
2. Multi-region support
3. Complete observability
4. Full automation
## Contributing
To add new ideas:
1. Create an issue in the repository
2. Label with `enhancement` or `feature`
3. Discuss in team meetings
4. Update this roadmap when approved
## Notes
- Focus on stability over features
- Security and monitoring are top priorities
- All changes should be tested in dev first
- Document everything as you go

@@ -0,0 +1,205 @@
# Security Hardening Implementation Plan
## 🔒 **Security Hardening Role Structure**
### **Phase 1: Antivirus Protection (ClamAV)**
**What gets installed:**
```bash
- clamav-daemon # Background scanning service
- clamav-freshclam # Virus definition updates
- clamav-milter # Email integration
- clamdscan # Command-line scanner
```
**What gets configured:**
- **Daily scans** at 3 AM of critical directories
- **Real-time monitoring** of `/home`, `/var/www`, `/tmp`
- **Automatic updates** of virus definitions
- **Email alerts** for detected threats
- **Quarantine system** for suspicious files
**Ansible tasks:**
```yaml
- name: Install ClamAV
  apt:
    name: [clamav-daemon, clamav-freshclam, clamdscan]
    state: present

- name: Configure daily scans
  cron:
    name: "Daily ClamAV scan"
    job: "/usr/bin/clamscan -r /home /var/www --log=/var/log/clamav/daily.log"
    hour: "3"
    minute: "0"

- name: Enable real-time scanning
  systemd:
    name: clamav-daemon
    enabled: true
    state: started
```
### **Phase 2: Security Auditing (Lynis)**
**What gets installed:**
```bash
- lynis # Security auditing tool
- rkhunter # Rootkit hunter
- chkrootkit # Additional rootkit detection
```
**What gets configured:**
- **Weekly security audits** with detailed reports
- **Baseline security scoring** for comparison
- **Automated hardening** of common issues
- **Email reports** to administrators
- **Trend tracking** of security improvements
**Ansible tasks:**
```yaml
- name: Install Lynis
  get_url:
    url: "https://downloads.cisofy.com/lynis/lynis-3.0.8.tar.gz"
    dest: "/tmp/lynis.tar.gz"

- name: Extract and install Lynis
  unarchive:
    src: "/tmp/lynis.tar.gz"
    dest: "/opt/"
    remote_src: yes

- name: Create weekly audit cron
  cron:
    name: "Weekly Lynis audit"
    job: "/opt/lynis/lynis audit system --quick --report-file /var/log/lynis/weekly-$(date +\\%Y\\%m\\%d).log"
    weekday: "0"
    hour: "2"
    minute: "0"
```
### **Phase 3: Advanced Security Measures**
#### **File Integrity Monitoring (AIDE)**
```yaml
# Monitors critical system files for changes
- Tracks modifications to /etc, /bin, /sbin, /usr/bin
- Alerts on unauthorized changes
- Creates cryptographic checksums
- Daily integrity checks
```
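Phases 1 and 2 include task sketches; a comparable sketch for AIDE might look like this (a hypothetical outline only — task names and schedule are assumptions, package names are the Debian ones):

```yaml
# Hypothetical sketch -- not yet part of the security role
- name: Install AIDE
  apt:
    name: [aide, aide-common]
    state: present

- name: Initialize the AIDE database (Debian helper)
  command: aideinit
  args:
    creates: /var/lib/aide/aide.db.new

- name: Schedule daily integrity checks
  cron:
    name: "Daily AIDE check"
    job: "/usr/bin/aide --check --config /etc/aide/aide.conf"
    hour: "4"
    minute: "0"
```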
#### **Intrusion Detection (Fail2ban Enhancement)**
```yaml
# Already have basic fail2ban, enhance with:
- SSH brute force protection ✅ (already done)
- Web application attack detection
- Port scan detection
- DDoS protection rules
- Geographic IP blocking
```
#### **System Hardening**
```yaml
# Kernel security parameters
- Disable unused network protocols
- Enable ASLR (Address Space Layout Randomization)
- Configure secure memory settings
- Harden network stack parameters
# Service hardening
- Disable unnecessary services
- Secure service configurations
- Implement principle of least privilege
- Configure secure file permissions
```
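The kernel-parameter hardening above could be expressed with the `sysctl` module along these lines (an illustrative sketch — the exact parameter set would be decided during implementation):

```yaml
# Hypothetical sketch -- parameter list is an example, not the final policy
- name: Harden kernel network parameters
  sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true
    state: present
  loop:
    - { name: net.ipv4.conf.all.rp_filter, value: "1" }
    - { name: net.ipv4.tcp_syncookies, value: "1" }
    - { name: kernel.randomize_va_space, value: "2" }
```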
## 🎯 **Implementation Strategy**
### **Week 1: Basic Antivirus**
```bash
# Create security role
mkdir -p roles/security/{tasks,templates,handlers,defaults}
# Implement ClamAV
- Install and configure ClamAV
- Set up daily scans
- Configure email alerts
- Test malware detection
```
### **Week 2: Security Auditing**
```bash
# Add Lynis auditing
- Install Lynis security scanner
- Configure weekly audits
- Create reporting dashboard
- Baseline current security score
```
### **Week 3: Advanced Hardening**
```bash
# Implement AIDE and enhanced fail2ban
- File integrity monitoring
- Enhanced intrusion detection
- System parameter hardening
- Security policy enforcement
```
## 📊 **Expected Benefits**
### **Immediate (Week 1)**
- ✅ **Malware protection** on all systems
- ✅ **Automated threat detection**
- ✅ **Real-time file monitoring**
### **Short-term (Month 1)**
- ✅ **Security baseline** established
- ✅ **Vulnerability identification**
- ✅ **Automated hardening** applied
- ✅ **Security trend tracking**
### **Long-term (Ongoing)**
- ✅ **Proactive threat detection**
- ✅ **Compliance reporting**
- ✅ **Reduced attack surface**
- ✅ **Security incident prevention**
## 🚨 **Security Alerts & Monitoring**
### **Alert Types:**
1. **Critical**: Malware detected, system compromise
2. **High**: Failed security audit, integrity violation
3. **Medium**: Suspicious activity, configuration drift
4. **Low**: Routine scan results, update notifications
### **Notification Methods:**
- **Email alerts** for critical/high priority
- **Log aggregation** in centralized system
- **Dashboard indicators** in monitoring system
- **Weekly reports** with security trends
## 🔧 **Integration with Existing Infrastructure**
### **Works with your current setup:**
- ✅ **Fail2ban** - Enhanced with more rules
- ✅ **UFW firewall** - Additional hardening rules
- ✅ **SSH hardening** - Extended with key rotation
- ✅ **Monitoring** - Security metrics integration
- ✅ **Maintenance** - Security updates automation
### **Complements Proxmox + NAS:**
- **File-level protection** vs. VM snapshots
- **Real-time detection** vs. snapshot recovery
- **Proactive prevention** vs. reactive restoration
- **Security compliance** vs. data protection
## 📋 **Next Steps**
1. **Create security role** structure
2. **Implement ClamAV** antivirus protection
3. **Add Lynis** security auditing
4. **Configure monitoring** integration
5. **Test and validate** security improvements

docs/guides/app_stack_execution_flow.md (new file)
@@ -0,0 +1,173 @@
# App stack execution flow (what happens when you run it)
This document describes **exactly** what Ansible runs and what it changes when you execute the Proxmox app stack playbooks.
## Entry points
- Recommended end-to-end run:
- `playbooks/app/site.yml`
- Repo-root wrappers (equivalent):
- `site.yml` (imports `playbooks/site.yml`, and you can `--tags app`)
- `provision_vms.yml` (imports `playbooks/app/provision_vms.yml`)
- `configure_app.yml` (imports `playbooks/app/configure_app.yml`)
## High-level flow
When you run `playbooks/app/site.yml`, it imports two playbooks in order:
1. `playbooks/app/provision_vms.yml` (**Proxmox API changes happen here**)
2. `playbooks/app/configure_app.yml` (**SSH into guests and configure OS/app**)
## Variables that drive everything
All per-project/per-env inputs come from:
- `inventories/production/group_vars/all/main.yml` → `app_projects`
Each `app_projects.<project>.envs.<env>` contains:
- `name` (container hostname / inventory host name)
- `vmid` (Proxmox CTID)
- `ip` (static IP in CIDR form, e.g. `10.0.10.101/24`)
- `gateway` (e.g. `10.0.10.1`)
- `branch` (`dev`, `qa`, `main`)
- `env_vars` (key/value map written to `/srv/app/.env.<env>`)
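Putting those keys together, a single project entry might look like this (all names, IDs, and IPs below are illustrative placeholders, not values from the repo):

```yaml
# Illustrative shape only -- every value here is a placeholder
app_projects:
  projectA:
    repo_url: "git@example.com:org/projectA.git"
    envs:
      dev:
        name: projecta-dev
        vmid: 201
        ip: 10.0.10.101/24
        gateway: 10.0.10.1
        branch: dev
        env_vars:
          NODE_ENV: development
```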
Proxmox connection variables are also read from `inventories/production/group_vars/all/main.yml` but are usually vault-backed:
- `proxmox_host: "{{ vault_proxmox_host }}"`
- `proxmox_user: "{{ vault_proxmox_user }}"`
- `proxmox_node: "{{ vault_proxmox_node | default('pve') }}"`
## Phase 1: Provisioning via Proxmox API
### File chain
`playbooks/app/site.yml` imports `playbooks/app/provision_vms.yml`, which does:
- Validates `app_project` exists (if you passed one)
- Loops projects → includes `playbooks/app/provision_one_guest.yml`
- Loops envs inside the project → includes `playbooks/app/provision_one_env.yml`
### Preflight IP safety check
In `playbooks/app/provision_one_env.yml`:
- It runs `ping` against the target IP.
- If the IP responds, the play **fails** to prevent accidental duplicate-IP provisioning.
- You can override the guard (not recommended) with `-e allow_ip_conflicts=true`.
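A minimal sketch of such a guard (task wording and variable names are assumptions, not copied from the repo):

```yaml
# Hypothetical sketch of the IP precheck described above
- name: Check whether the target IP already responds
  command: "ping -c 1 -W 1 {{ env_def.ip.split('/') | first }}"
  register: ip_ping
  failed_when: false
  changed_when: false

- name: Fail on IP conflict unless explicitly allowed
  fail:
    msg: "IP {{ env_def.ip }} already responds; pass -e allow_ip_conflicts=true to override"
  when:
    - ip_ping.rc == 0
    - not (allow_ip_conflicts | default(false) | bool)
```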
### What it creates/updates in Proxmox
In `playbooks/app/provision_one_env.yml` it calls role `roles/proxmox_vm` with LXC variables.
`roles/proxmox_vm/tasks/main.yml` dispatches:
- If `proxmox_guest_type == 'lxc'` → includes `roles/proxmox_vm/tasks/lxc.yml`
`roles/proxmox_vm/tasks/lxc.yml` performs:
1. **Build CT network config**
- Produces a `netif` dict like:
- `net0: name=eth0,bridge=vmbr0,firewall=1,ip=<CIDR>,gw=<GW>`
2. **Create/update the container**
- Uses `community.proxmox.proxmox` with:
- `state: present`
- `update: true` (so re-runs reconcile config)
- `vmid`, `hostname`, `ostemplate`, CPU/mem/swap, rootfs sizing, `netif`
- `pubkey` and optionally `password` for initial root access
3. **Start the container**
- Ensures `state: started` (if `lxc_start_after_create: true`)
4. **Wait for SSH**
- `wait_for: host=<ip> port=22`
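A condensed sketch of steps 2-4 (variable names are assumptions; the real role passes more CPU/memory/rootfs options):

```yaml
# Condensed sketch -- see roles/proxmox_vm/tasks/lxc.yml for the full version
- name: Create or update the LXC container
  community.proxmox.proxmox:
    api_host: "{{ proxmox_host }}"
    api_user: "{{ proxmox_user }}"
    node: "{{ proxmox_node }}"
    vmid: "{{ lxc_vmid }}"
    hostname: "{{ lxc_hostname }}"
    ostemplate: "{{ lxc_ostemplate }}"
    netif:
      net0: "name=eth0,bridge=vmbr0,firewall=1,ip={{ lxc_ip }},gw={{ lxc_gateway }}"
    pubkey: "{{ lxc_pubkey }}"
    state: present
    update: true

- name: Wait for SSH on the new container
  wait_for:
    host: "{{ lxc_ip.split('/') | first }}"
    port: 22
```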
### Dynamic inventory creation
Still in `playbooks/app/provision_one_env.yml`, it calls `ansible.builtin.add_host` so the guests become available to later plays:
- Adds the guest to groups:
- `app_all`
- `app_<project>_all`
- `app_<project>_<env>`
- Sets:
- `ansible_host` to the IP (without CIDR)
- `ansible_user: root` (bootstrap user for first config)
- `app_project`, `app_env` facts
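A sketch of that `add_host` call (loop variable names are assumptions):

```yaml
# Hypothetical sketch of the dynamic-inventory step
- name: Register the guest for later plays
  add_host:
    name: "{{ env_def.name }}"
    groups:
      - app_all
      - "app_{{ project_name }}_all"
      - "app_{{ project_name }}_{{ env_name }}"
    ansible_host: "{{ env_def.ip.split('/') | first }}"
    ansible_user: root
    app_project: "{{ project_name }}"
    app_env: "{{ env_name }}"
```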
## Phase 2: Configure OS + app on the guests
`playbooks/app/configure_app.yml` contains two plays:
### Play A: Build dynamic inventory (localhost)
This play exists so you can run `configure_app.yml` even if you didn't run provisioning in the same Ansible invocation.
- It loops over projects/envs from `app_projects`
- Adds hosts to:
- `app_all`, `app_<project>_all`, `app_<project>_<env>`
- Uses:
- `ansible_user: "{{ app_bootstrap_user | default('root') }}"`
### Play B: Configure the hosts (SSH + sudo)
Targets:
- If you pass `-e app_project=projectA` → `hosts: app_projectA_all`
- Otherwise → `hosts: app_all`
Tasks executed on each guest:
1. **Resolve effective project/env variables**
- `project_def = app_projects[app_project]`
- `env_def = app_projects[app_project].envs[app_env]`
2. **Role: `base_os`** (`roles/base_os/tasks/main.yml`)
- Updates apt cache
- Installs baseline packages (git/curl/nodejs/npm/ufw/etc.)
- Creates `appuser` (passwordless sudo)
- Adds your SSH public key to `appuser`
- Enables UFW and allows:
- SSH (22)
- backend port (default `3001`, overridable per project)
- frontend port (default `3000`, overridable per project)
3. **Role: `app_setup`** (`roles/app_setup/tasks/main.yml`)
- Creates:
- `/srv/app`
- `/srv/app/backend`
- `/srv/app/frontend`
- Writes the env file:
- `/srv/app/.env.<dev|qa|prod>` from template `roles/app_setup/templates/env.j2`
- Writes the deploy script:
- `/usr/local/bin/deploy_app.sh` from `roles/app_setup/templates/deploy_app.sh.j2`
- Script does:
- `git clone` if missing
- `git checkout/pull` correct branch
- runs backend install + migrations
- runs frontend install + build
- restarts systemd services
- Writes systemd units:
- `/etc/systemd/system/app-backend.service` from `app-backend.service.j2`
- `/etc/systemd/system/app-frontend.service` from `app-frontend.service.j2`
- Reloads systemd and enables/starts both services
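The unit install and activation steps above might be sketched as follows (template names come from the list above; task wording is an assumption):

```yaml
# Hypothetical sketch of the systemd steps in roles/app_setup
- name: Install app systemd units
  template:
    src: "{{ item }}.j2"
    dest: "/etc/systemd/system/{{ item }}"
  loop:
    - app-backend.service
    - app-frontend.service
  notify: Reload systemd

- name: Enable and start app services
  systemd:
    name: "{{ item }}"
    daemon_reload: true
    enabled: true
    state: started
  loop:
    - app-backend.service
    - app-frontend.service
```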
## What changes on first run vs re-run
- **Provisioning**:
- First run: creates CTs in Proxmox, sets static IP config, starts them.
- Re-run: reconciles settings because `update: true` is used.
- **Configuration**:
- Mostly idempotent (directories/templates/users/firewall/services converge).
## Common “before you run” checklist
- Confirm `app_projects` has correct IPs/CTIDs/branches:
- `inventories/production/group_vars/all/main.yml`
- Ensure vault has Proxmox + SSH key material:
- `inventories/production/group_vars/all/vault.yml`
- Reference template: `inventories/production/group_vars/all/vault.example.yml`

docs/guides/app_stack_proxmox.md (new file)
@@ -0,0 +1,90 @@
# Proxmox App Projects (LXC-first)
This guide documents the **modular app-project stack** that provisions Proxmox guests (dev/qa/prod) and configures a full-stack app layout on them.
## What you get
- Proxmox provisioning via API (currently **LXC**; VM support remains via existing `roles/proxmox_vm` KVM path)
- A deployment user (`appuser`) with your SSH key
- `/srv/app/backend` and `/srv/app/frontend`
- Env file `/srv/app/.env.<dev|qa|prod>`
- `/usr/local/bin/deploy_app.sh` to pull the right branch and restart services
- systemd services:
- `app-backend.service`
- `app-frontend.service`
## Where to configure projects
Edit:
- `inventories/production/group_vars/all/main.yml`
Under `app_projects`, define projects like:
- `projectA.repo_url`
- `projectA.envs.dev|qa|prod.ip/gateway/branch`
- `projectA.guest_defaults` (cores/memory/rootfs sizing)
- `projectA.deploy.*` (install/build/migrate/start commands)
Adding **projectB** is just adding another top-level `app_projects.projectB` entry.
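As orientation, an `app_projects` entry could look roughly like this. The top-level keys follow the list above; the nested key names under `guest_defaults`/`deploy` and all concrete values are placeholders, not real config:
```yaml
app_projects:
  projectA:
    repo_url: "git@git.example.com:org/projectA.git"  # placeholder
    guest_defaults:
      cores: 2
      memory: 2048       # MiB (placeholder sizing)
      rootfs_size: 16    # GiB (placeholder key name)
    deploy:
      install: "npm ci"          # placeholder commands
      build: "npm run build"
    envs:
      dev:
        ip: "10.0.10.50/24"      # placeholder addressing
        gateway: "10.0.10.1"
        branch: "develop"
      prod:
        ip: "10.0.10.52/24"
        gateway: "10.0.10.1"
        branch: "main"
```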
## Proxmox credentials (vault)
This repo already expects Proxmox connection vars in vault (see existing Proxmox playbooks). Ensure these exist in:
- `inventories/production/group_vars/all/vault.yml` (encrypted)
Common patterns:
- `vault_proxmox_host`: `10.0.10.201`
- `vault_proxmox_user`: e.g. `root@pam` or `ansible@pve`
- `vault_proxmox_node`: e.g. `pve`
- Either:
- `vault_proxmox_password`, or
- `vault_proxmox_token` + `vault_proxmox_token_id`
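Sketched as a `vault.yml` fragment (values are placeholders — keep the real file encrypted with `ansible-vault`):
```yaml
vault_proxmox_host: "10.0.10.201"
vault_proxmox_user: "ansible@pve"
vault_proxmox_node: "pve"
# Either a password...
vault_proxmox_password: "CHANGE-ME"
# ...or an API token pair:
# vault_proxmox_token_id: "ansible"
# vault_proxmox_token: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```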
## Debian LXC template
The LXC provisioning uses `lxc_ostemplate`, defaulting to a Debian 12 template string like:
`local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst`
If your Proxmox has a different template filename, change `lxc_ostemplate` in `inventories/production/group_vars/all/main.yml`.
## Running it
Provision + configure one project:
```bash
ansible-playbook -i inventories/production playbooks/app/site.yml -e app_project=projectA
```
Provision + configure all projects in `app_projects`:
```bash
ansible-playbook -i inventories/production playbooks/app/site.yml
```
Only provisioning (Proxmox API):
```bash
ansible-playbook -i inventories/production playbooks/app/provision_vms.yml -e app_project=projectA
```
Only OS/app configuration:
```bash
ansible-playbook -i inventories/production playbooks/app/configure_app.yml -e app_project=projectA
```
## Optional: SSH aliases on your workstation
To write `~/.ssh/config` entries (disabled by default):
```bash
ansible-playbook -i inventories/production playbooks/app/ssh_client_config.yml -e manage_ssh_config=true -e app_project=projectA
```
This creates aliases like `projectA-dev`, `projectA-qa`, `projectA-prod`.
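Each alias presumably expands to an entry along these lines — the exact fields depend on the playbook's template, and the `User`/`IdentityFile` values here are assumptions:
```
Host projectA-dev
    HostName 10.0.10.50
    User appuser
    IdentityFile ~/.ssh/id_ed25519
```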

# Custom roles guide
This repo is designed to be extended by adding new roles under `roles/`.
## Role structure
Follow the standard Ansible role layout:
```
roles/<role_name>/
├── defaults/main.yml
├── handlers/main.yml
├── tasks/main.yml
├── templates/
├── files/
└── README.md
```
## Where to wire a new role
- Add it to the relevant playbook under `playbooks/` (or create a new playbook if it's a major concern).
- Prefer tagging the role at inclusion time so `make <feature>` targets can use `--tags`.
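Tagging at inclusion time can be sketched like this (playbook, role, and tag names are illustrative):
```yaml
# e.g. in a playbook under playbooks/ (sketch)
- hosts: services
  roles:
    - { role: my_new_role, tags: ['my_feature'] }
```
A Makefile target can then run the role in isolation with `ansible-playbook ... --tags my_feature`.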
## Standards (canonical)
- `project-docs/standards.md`
- `project-docs/decisions.md` (add an ADR entry for significant changes)

`docs/guides/monitoring.md` (new file)
# Monitoring guide
Monitoring is split by host type:
- **Servers**: `roles/monitoring_server/` (includes `fail2ban`, sysstat tooling)
- **Desktops/workstations**: `roles/monitoring_desktop/` (desktop-oriented tooling)
## Run monitoring only
```bash
# Dry-run
make monitoring CHECK=true
# Apply
make monitoring
```
## Notes
- Desktop apps are installed only on the `desktop` group via `playbooks/workstations.yml`.
- If you need packet analysis tools, keep them opt-in (see `docs/reference/applications.md`).

`docs/guides/security.md` (new file)
# Security hardening guide
This repo's “security” work is primarily implemented via roles and inventory defaults.
## What runs where
- **SSH hardening + firewall**: `roles/ssh/`
- **Baseline packages/security utilities**: `roles/base/`
- **Monitoring + intrusion prevention (servers)**: `roles/monitoring_server/` (includes `fail2ban`)
- **Secrets**: Ansible Vault in `inventories/production/group_vars/all/vault.yml`
## Recommended flow
```bash
# Dry-run first
make check
# Apply only security-tagged roles
make security
```
## Secrets / Vault
Use vault for anything sensitive:
- Guide: `docs/guides/vault.md`
## Canonical standards
- `project-docs/standards.md`


@@ -129,7 +129,7 @@ vault_ssh_public_key: "ssh-ed25519 AAAA..."
## Step 7: Configure Variables
### Global Settings
-Edit `group_vars/all.yml`:
+Edit `inventories/production/group_vars/all/main.yml`:
```yaml
# Timezone and locale
timezone: "America/New_York" # Your timezone
@@ -145,7 +145,7 @@ ssh_permit_root_login: "no"
```
### Host-Specific Settings
-Create/edit `host_vars/hostname.yml` for host-specific configuration.
+Create/edit `inventories/production/host_vars/<hostname>.yml` for host-specific configuration.
## Step 8: Test Configuration
@@ -159,7 +159,7 @@ make check
make check HOST=dev01
# Check specific role
-ansible-playbook dev-playbook.yml --check --tags docker
+ansible-playbook playbooks/development.yml --check --tags docker
```
## Step 9: Deploy
@@ -208,7 +208,7 @@ ansible dev -m shell -a "tailscale status"
### Vault Password Issues
- Check vault password file exists and has correct permissions
-- Verify password is correct: `ansible-vault view group_vars/all/vault.yml`
+- Verify password is correct: `ansible-vault view inventories/production/group_vars/all/vault.yml`
### Python Not Found
- Install Python on target: `sudo apt install python3`


@@ -46,21 +46,21 @@ make tailscale-status
make tailscale-dev
# Specific hosts
-ansible-playbook tailscale-playbook.yml --limit "dev01,bottom"
+ansible-playbook playbooks/tailscale.yml --limit "dev01,bottom"
```
### Manual Installation
```bash
# With custom auth key (not recommended - use vault instead)
-ansible-playbook tailscale-playbook.yml -e "tailscale_auth_key=your-key"
+ansible-playbook playbooks/tailscale.yml -e "tailscale_auth_key=your-key"
# As part of existing playbooks
-ansible-playbook dev-playbook.yml --tags tailscale
+ansible-playbook playbooks/development.yml --tags tailscale
```
## Configuration
-### Global Settings (`group_vars/all.yml`)
+### Global Settings (`inventories/production/group_vars/all/main.yml`)
```yaml
tailscale_auth_key: "{{ vault_tailscale_auth_key }}" # From vault
tailscale_accept_routes: true # Accept subnet routes
@@ -68,7 +68,7 @@ tailscale_accept_dns: true # Accept DNS settings
tailscale_ssh: true # Enable SSH over Tailscale
```
-### Host-Specific Settings (`host_vars/hostname.yml`)
+### Host-Specific Settings (`inventories/production/host_vars/<hostname>.yml`)
```yaml
tailscale_hostname: "custom-name" # Override hostname
tailscale_advertise_routes: "192.168.1.0/24" # Share local subnet
@@ -100,7 +100,7 @@ sudo tailscale up
### Reset Connection
```bash
-ansible-playbook tailscale-playbook.yml -e "tailscale_reset=true"
+ansible-playbook playbooks/tailscale.yml -e "tailscale_reset=true"
```
## Security Best Practices
@@ -119,7 +119,7 @@ The role automatically detects OS and uses appropriate package manager.
## How It Works
1. **Playbook runs** → looks for `tailscale_auth_key`
-2. **Checks `all.yml`** → finds `{{ vault_tailscale_auth_key }}`
+2. **Checks inventory group vars** → finds `{{ vault_tailscale_auth_key }}`
3. **Decrypts vault** → retrieves actual auth key
4. **Installs Tailscale** → configures with your settings
5. **Connects to network** → machine appears in admin console

`docs/guides/timeshift.md` (new file)
# Timeshift Snapshot and Rollback Guide
## Overview
Timeshift is a system restore utility that creates snapshots of your system before Ansible playbook execution.
This allows you to easily rollback if something goes wrong during configuration changes.
## How It Works
When you run a playbook, the Timeshift role automatically:
1. Checks if Timeshift is installed (installs if missing)
2. Creates a snapshot before making any changes
3. Tags the snapshot with "ansible" and "pre-playbook" for easy identification
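The snapshot step can be sketched as Ansible tasks along these lines (the real role's task names and variables may differ):
```yaml
- name: Ensure Timeshift is installed
  ansible.builtin.apt:
    name: timeshift
    state: present
  become: true

- name: Create pre-playbook snapshot
  ansible.builtin.command: timeshift --create --comments "ansible pre-playbook"
  become: true
```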
## Usage
### Automatic Snapshots
Snapshots are created automatically when running playbooks:
```bash
# Run playbook - snapshot created automatically
make dev HOST=dev02
# Or run only snapshot creation
make timeshift-snapshot HOST=dev02
```
### List Snapshots
```bash
# List all snapshots on a host
make timeshift-list HOST=dev02
# Or manually on the host
ssh ladmin@192.168.20.28 "sudo timeshift --list"
```
### Restore from Snapshot
```bash
# Restore from a specific snapshot
make timeshift-restore HOST=dev02 SNAPSHOT=2025-12-17_21-30-00
# The command will:
# 1. Show available snapshots if SNAPSHOT is not provided
# 2. Ask for confirmation before restoring
# 3. Restore the system to that snapshot
```
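Snapshot names follow Timeshift's `YYYY-MM-DD_HH-MM-SS` timestamp format (as in the example above); a quick way to generate a matching string from a script is:

```shell
# Produce a Timeshift-style snapshot name, e.g. for logging or for
# passing to `make timeshift-restore SNAPSHOT=...` from automation.
snap="$(date +%F_%H-%M-%S)"   # %F = YYYY-MM-DD
echo "$snap"
```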
### Manual Snapshot
```bash
# Create snapshot manually on host
ssh ladmin@192.168.20.28
sudo timeshift --create --comments "Manual snapshot before manual changes"
```
### Manual Restore
```bash
# SSH to host
ssh ladmin@192.168.20.28
# List snapshots
sudo timeshift --list
# Restore (interactive)
sudo timeshift --restore
# Or restore specific snapshot (non-interactive)
sudo timeshift --restore --snapshot '2025-12-17_21-30-00' --scripted
```
## Configuration
### Disable Auto-Snapshots
If you don't want automatic snapshots, disable them in `host_vars` or `group_vars`:
```yaml
# inventories/production/host_vars/dev02.yml
timeshift_auto_snapshot: false
```
### Customize Snapshot Settings
```yaml
# inventories/production/group_vars/dev/main.yml
timeshift_snapshot_description: "Pre-deployment snapshot"
timeshift_snapshot_tags: ["ansible", "deployment"]
timeshift_keep_daily: 7
timeshift_keep_weekly: 4
timeshift_keep_monthly: 6
```
## Important Notes
### Disk Space
- Snapshots require significant disk space (typically 10-50% of system size)
- RSYNC snapshots are larger but work on any filesystem
- BTRFS snapshots are smaller but require BTRFS filesystem
- Monitor disk usage: `df -h /timeshift`
### What Gets Backed Up
By default, Timeshift backs up:
- ✅ System files (`/etc`, `/usr`, `/boot`, etc.)
- ✅ System configuration
- ❌ User home directories (`/home`) - excluded by default
- ❌ User data
### Recovery Process
1. **Boot from recovery** (if system won't boot):
- Boot from live USB
- Install Timeshift: `sudo apt install timeshift`
- Run: `sudo timeshift --restore`
2. **Restore from running system**:
- SSH to host
- Run: `sudo timeshift --restore`
- Select snapshot and confirm
### Best Practices
1. **Always create snapshots before major changes**
```bash
make timeshift-snapshot HOST=dev02
make dev HOST=dev02
```
2. **Test rollback process** before you need it
```bash
# Create test snapshot
make timeshift-snapshot HOST=dev02
# Make a test change
# ...
# Practice restoring
make timeshift-list HOST=dev02
make timeshift-restore HOST=dev02 SNAPSHOT=<test-snapshot>
```
3. **Monitor snapshot disk usage**
```bash
ssh ladmin@192.168.20.28 "df -h /timeshift"
```
4. **Clean up old snapshots** if needed
```bash
ssh ladmin@192.168.20.28 "sudo timeshift --delete --snapshot 'OLD-SNAPSHOT'"
```
## Troubleshooting
### Snapshot Creation Fails
```bash
# Check Timeshift status
ssh ladmin@192.168.20.28 "sudo timeshift --list"
# Check disk space
ssh ladmin@192.168.20.28 "df -h"
# Check Timeshift logs
ssh ladmin@192.168.20.28 "sudo journalctl -u timeshift"
```
### Restore Fails
- Ensure you have enough disk space
- Check that snapshot still exists: `sudo timeshift --list`
- Try booting from recovery media if system won't boot
### Disk Full
```bash
# List snapshots
sudo timeshift --list
# Delete old snapshots
sudo timeshift --delete --snapshot 'OLD-SNAPSHOT'
# Or configure retention in group_vars
timeshift_keep_daily: 3 # Reduce from 7
timeshift_keep_weekly: 2 # Reduce from 4
```
## Integration with Ansible
The Timeshift role is automatically included in the development playbook and runs first to create snapshots before any changes are made.
This ensures you always have a restore point.
```yaml
# playbooks/development.yml
roles:
- {role: timeshift, tags: ['timeshift', 'snapshot']} # Runs first
- {role: base}
- {role: development}
# ... other roles
```
## See Also
- [Timeshift Documentation](https://github.com/teejee2008/timeshift)
- [Ansible Vault Guide](./vault.md) - For securing passwords
- [Maintenance Guide](../reference/makefile.md) - For system maintenance


@@ -6,7 +6,7 @@ Ansible Vault encrypts sensitive data like passwords and API keys while keeping
### Create Vault
```bash
-make create-vault
+make edit-group-vault
```
### Add Secrets
@@ -38,32 +38,31 @@ database_password: "{{ vault_db_password }}"
## File Structure
```
-group_vars/
-├── all.yml # Plain text configuration
-└── all/
-    └── vault.yml # Encrypted secrets (created by make create-vault)
-host_vars/
-├── dev01.yml # Host-specific plain text
-└── dev01/
-    └── vault.yml # Host-specific secrets
+inventories/production/
+├── group_vars/
+│   └── all/
+│       ├── main.yml # Plain text configuration
+│       └── vault.yml # Encrypted secrets (edit with make edit-group-vault)
+└── host_vars/
+    ├── dev01.yml # Host-specific plain text
+    └── dev01/
+        └── vault.yml # Host-specific secrets (edit with make edit-vault HOST=dev01)
```
## Common Commands
```bash
-# Create new vault
-make create-vault
-# Edit existing vault
-make edit-vault # Global vault
-make edit-vault HOST=dev01 # Host-specific vault
+# Edit group vault (production inventory)
+make edit-group-vault
+# Edit host-specific vault
+make edit-vault HOST=dev01
# View decrypted contents
-ansible-vault view group_vars/all/vault.yml
+ansible-vault view inventories/production/group_vars/all/vault.yml
# Change vault password
-ansible-vault rekey group_vars/all/vault.yml
+ansible-vault rekey inventories/production/group_vars/all/vault.yml
```
## Password Management

`docs/project-docs.md` (new file)
# Project docs (canonical)
Canonical project documentation lives in `project-docs/` (repo root):
- Index: `project-docs/index.md`
- Overview: `project-docs/overview.md`
- Architecture: `project-docs/architecture.md`
- Standards: `project-docs/standards.md`
- Workflow: `project-docs/workflow.md`
- Decisions (ADRs): `project-docs/decisions.md`
This file exists so users browsing `docs/` can quickly find the canonical project documentation.


@@ -54,10 +54,7 @@ Complete inventory of applications and tools deployed by Ansible playbooks.
| zsh | Z shell | apt | shell |
| tmux | Terminal multiplexer | apt | shell |
| fzf | Fuzzy finder | apt | shell |
-| oh-my-zsh | Zsh framework | git | shell |
+| zsh aliases | Minimal alias set (sourced from ~/.zshrc) | file | shell |
-| powerlevel10k | Zsh theme | git | shell |
-| zsh-syntax-highlighting | Syntax highlighting | git | shell |
-| zsh-autosuggestions | Command suggestions | git | shell |
### 📊 Monitoring Tools
| Package | Description | Source | Role |
@@ -84,37 +81,58 @@ Complete inventory of applications and tools deployed by Ansible playbooks.
### 🖱️ Desktop Applications
| Package | Description | Source | Role |
|---------|-------------|--------|------|
-| brave-browser | Privacy-focused browser | brave | applications |
+| copyq | Clipboard manager (history/search) | apt | applications |
-| libreoffice | Office suite | apt | applications |
| evince | PDF viewer | apt | applications |
| redshift | Blue light filter | apt | applications |
-### 📝 Code Editors
-| Package | Description | Source | Role |
-|---------|-------------|--------|------|
-| code | Visual Studio Code | snap | snap |
-| cursor | AI-powered editor | snap | snap |
+## Nice-to-have apps (not installed by default)
+These are good add-ons depending on how you use your workstations. Keep them opt-in to avoid bloating baseline installs.
+### Desktop / UX
+- **flameshot**: screenshots + annotation
+- **keepassxc**: local password manager (or use your preferred)
+- **syncthing**: peer-to-peer file sync (if you want self-hosted sync)
+- **remmina**: RDP/VNC client
+- **mpv**: lightweight media player
+### Developer workstation helpers
+- **direnv**: per-project env var loading
+- **shellcheck**: shell script linting
+- **jq** / **yq**: JSON/YAML CLI tooling (already in base here, but listing for completeness)
+- **ripgrep** / **fd-find**: fast search/find (already in base here)
+### Networking / diagnostics
+- **wireshark** (GUI) or **wireshark-common**: packet analysis (only if you need it)
+- **iperf3**: bandwidth testing
+- **dnsutils**: dig/nslookup tools
## Installation by Playbook
-### dev-playbook.yml
+### `playbooks/development.yml`
Installs all roles for development machines:
- All system tools
- Development environment
- Docker platform
- Shell configuration
-- Desktop applications
- Monitoring tools
- Tailscale VPN
-### local-playbook.yml
+### `playbooks/local.yml`
Installs for local machine management:
- Core system tools
- Shell environment
- Development basics
-- Selected applications
-### maintenance-playbook.yml
+### `playbooks/workstations.yml`
+Installs baseline for `dev:desktop:local`, and installs desktop apps only for the `desktop` group:
+- Workstation baseline (dev + desktop + local)
+- Desktop applications (desktop group only)
+### `playbooks/maintenance.yml`
Maintains existing installations:
- System updates
- Package cleanup
@@ -135,7 +153,6 @@ Maintains existing installations:
| snap | Snap packages | snapd daemon |
| docker | Docker repository | Docker GPG key + repo |
| tailscale | Tailscale repository | Tailscale GPG key + repo |
-| brave | Brave browser repository | Brave GPG key + repo |
| git | Git repositories | Direct clone |
## Services Enabled


@@ -1,259 +1,10 @@
-# Architecture Overview
+# Architecture (canonical doc moved)
-Technical architecture and design of the Ansible infrastructure management system.
+The canonical architecture document is now:
+- `project-docs/architecture.md`
+This `docs/reference/architecture.md` file is kept as a pointer to avoid maintaining two competing sources of truth.
-## System Architecture
-```
┌─────────────────────────────────────────────────────────────┐
│ Control Machine │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Ansible │ │ Makefile │ │ Vault │ │
│ │ Engine │ │ Automation │ │ Secrets │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└─────────────────────┬───────────────────────────────────────┘
│ SSH + Tailscale VPN
┌─────────────────┴──────────────────────────────┐
│ │
┌───▼────────┐ ┌──────────────┐ ┌─────────────────▼────────┐
│ Dev │ │ Service │ │ Infrastructure │
│ Machines │ │ VMs │ │ VMs │
├────────────┤ ├──────────────┤ ├──────────────────────────┤
│ • dev01 │ │ • giteaVM │ │ • Proxmox Controller │
│ • bottom │ │ • portainerVM│ │ • Future VMs │
│ • desktop │ │ • homepageVM │ │ │
└────────────┘ └──────────────┘ └──────────────────────────┘
```
## Network Topology
### Physical Network
- **LAN**: 192.168.1.0/24 (typical home/office network)
- **Proxmox Host**: Hypervisor for VM management
- **Physical Machines**: Direct network access
### Tailscale Overlay Network
- **Mesh VPN**: Secure peer-to-peer connections
- **100.x.x.x**: Tailscale IP range
- **Zero-config**: Automatic NAT traversal
- **End-to-end encryption**: WireGuard protocol
## Host Groups
### Development (`dev`)
**Purpose**: Developer workstations and environments
- **Hosts**: dev01, bottom, debianDesktopVM
- **OS**: Debian/Ubuntu
- **Roles**: Full development stack
### Services
**Purpose**: Self-hosted services and applications
#### Gitea (`gitea`)
- **Host**: giteaVM
- **OS**: Alpine Linux (lightweight)
- **Service**: Git repository hosting
#### Portainer (`portainer`)
- **Host**: portainerVM
- **OS**: Alpine Linux
- **Service**: Container management UI
#### Homepage (`homepage`)
- **Host**: homepageVM
- **OS**: Debian
- **Service**: Service dashboard
### Infrastructure (`ansible`)
**Purpose**: Ansible control and automation
- **Host**: Ansible controller VM
- **OS**: Ubuntu Server
- **Service**: Infrastructure automation
### Local (`local`)
**Purpose**: Local machine management
- **Host**: localhost
- **Connection**: Local (no SSH)
## Playbook Architecture
### Core Playbooks
```yaml
dev-playbook.yml # Development environment setup
├── roles/maintenance # System updates
├── roles/base # Core packages
├── roles/ssh # SSH hardening
├── roles/user # User management
├── roles/development # Dev tools
├── roles/shell # Shell config
├── roles/docker # Container platform
├── roles/applications # Desktop apps
├── roles/snap # Snap packages
├── roles/tailscale # VPN setup
├── roles/monitoring # Monitoring tools
local-playbook.yml # Local machine
├── roles/base
├── roles/shell
├── roles/development
└── roles/tailscale
maintenance-playbook.yml # System maintenance
└── roles/maintenance
tailscale-playbook.yml # VPN deployment
└── roles/tailscale
proxmox-create-vm.yml # VM provisioning
└── roles/proxmox_vm
```
### Role Dependencies
```
base
├── Required by: all other roles
├── Provides: core utilities, security tools
└── Dependencies: none
ssh
├── Required by: secure access
├── Provides: hardened SSH, firewall
└── Dependencies: base
user
├── Required by: system access
├── Provides: user accounts, sudo
└── Dependencies: base
development
├── Required by: coding tasks
├── Provides: git, nodejs, python
└── Dependencies: base
docker
├── Required by: containerization
├── Provides: Docker CE, compose
└── Dependencies: base
tailscale
├── Required by: secure networking
├── Provides: mesh VPN
└── Dependencies: base
```
## Data Flow
### Configuration Management
1. **Variables** → group_vars/all.yml (global)
2. **Secrets** → group_vars/all/vault.yml (encrypted)
3. **Host Config** → host_vars/hostname.yml (specific)
4. **Role Defaults** → roles/*/defaults/main.yml
5. **Tasks** → roles/*/tasks/main.yml
6. **Templates** → roles/*/templates/*.j2
7. **Handlers** → roles/*/handlers/main.yml
### Execution Flow
```
make command
Makefile target
ansible-playbook
Inventory + Variables
Role execution
Task processing
Handler notification
Result reporting
```
## Security Architecture
### Defense in Depth
#### Network Layer
- **Tailscale VPN**: Encrypted mesh network
- **UFW Firewall**: Default deny, explicit allow
- **SSH Hardening**: Key-only, rate limiting
#### Application Layer
- **Fail2ban**: Intrusion prevention
- **Package signing**: GPG verification
- **Service isolation**: Docker containers
#### Data Layer
- **Ansible Vault**: Encrypted secrets
- **SSH Keys**: Ed25519 cryptography
### Access Control
```
User → SSH Key → Jump Host → Tailscale → Target Host
↓ ↓ ↓ ↓
Ed25519 Bastion WireGuard Firewall
Encryption Rules
```
## Storage Architecture
### Configuration Storage
```
/etc/ # System configuration
/opt/ # Application data
/usr/local/ # Custom scripts
/var/log/ # Logs and audit trails
```
## Monitoring Architecture
### System Monitoring
- **btop/htop**: Process monitoring
- **iotop**: I/O monitoring
- **nethogs**: Network per-process
- **Custom dashboards**: sysinfo, netinfo
### Log Management
- **logwatch**: Daily summaries
- **journald**: System logs
- **fail2ban**: Security logs
## Scalability Considerations
### Horizontal Scaling
- Add hosts to inventory groups
- Parallel execution with ansible forks
- Role reusability across environments
### Vertical Scaling
- Proxmox VM resource adjustment
- Docker resource limits
- Service-specific tuning
## Technology Stack
### Core Technologies
- **Ansible**: 2.9+ (Configuration management)
- **Python**: 3.x (Ansible runtime)
- **Jinja2**: Templating engine
- **YAML**: Configuration format
### Target Platforms
- **Debian**: 11+ (Bullseye, Bookworm)
- **Ubuntu**: 20.04+ (Focal, Jammy, Noble)
- **Alpine**: 3.x (Lightweight containers)
### Service Technologies
- **Docker**: Container runtime
- **Tailscale**: Mesh VPN
- **SystemD**: Service management
- **UFW**: Firewall management
## Best Practices


@@ -58,6 +58,10 @@ Complete reference for all available `make` commands in the Ansible project.
| Command | Description | Usage |
|---------|-------------|-------|
| `create-vm` | Create Ansible controller VM on Proxmox | `make create-vm` |
+| `proxmox-info` | Show Proxmox guest info (LXC/VM) | `make proxmox-info [PROJECT=projectA] [ALL=true] [TYPE=lxc\|qemu\|all]` |
+| `app-provision` | Provision app project guests on Proxmox | `make app-provision PROJECT=projectA` |
+| `app-configure` | Configure OS + app on project guests | `make app-configure PROJECT=projectA` |
+| `app` | Provision + configure app project guests | `make app PROJECT=projectA` |
| `ping` | Ping hosts with colored output | `make ping [GROUP=dev] [HOST=dev01]` |
| `facts` | Gather facts from all hosts | `make facts` |
| `test-connectivity` | Test network and SSH access | `make test-connectivity` |
@@ -69,6 +73,7 @@ Complete reference for all available `make` commands in the Ansible project.
| `copy-ssh-key` | Copy SSH key to specific host | `make copy-ssh-key HOST=giteaVM` |
| `create-vault` | Create encrypted vault file | `make create-vault` |
| `edit-vault` | Edit encrypted host vars | `make edit-vault HOST=dev01` |
+| `edit-group-vault` | Edit encrypted group vars (production inventory) | `make edit-group-vault` |
## Utility Commands

`docs/reference/network.md` (new file)
# Network reference
## Overview
This repo manages hosts reachable over your LAN and optionally over a Tailscale overlay network.
## Physical network
- Typical LAN: `192.168.1.0/24` (adjust for your environment)
- Inventory host addressing is defined in `inventories/production/hosts`
## Tailscale overlay
- Tailscale provides a mesh VPN (WireGuard-based) with `100.x.y.z` addresses.
- The repo installs/configures it via `playbooks/tailscale.yml` + `roles/tailscale/`.
## References
- Tailscale guide: `docs/guides/tailscale.md`
- Canonical architecture: `project-docs/architecture.md`

# Playbooks & Tags Map
This repo is organized around playbooks in `playbooks/` (plus a few thin wrapper playbooks in the repo root).
This reference gives you:
- **Execution paths**: where each playbook “goes” (imports → roles → included tasks).
- **Tag paths**: what each tag actually selects, including Makefile shortcuts.
---
## Playbook entrypoints (paths)
### `site.yml` (wrapper)
`site.yml` is a wrapper that delegates to `playbooks/site.yml`.
```mermaid
flowchart TD
A[site.yml] --> B[playbooks/site.yml]
```
### `playbooks/site.yml` (dispatcher)
This is a pure dispatcher: it imports other playbooks and assigns **top-level tags** per import.
```mermaid
flowchart TD
S[playbooks/site.yml] -->|tags: maintenance| M[playbooks/maintenance.yml]
S -->|tags: development| D[playbooks/development.yml]
S -->|tags: tailscale| T[playbooks/tailscale.yml]
S -->|tags: app| A[playbooks/app/site.yml]
```
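The dispatcher pattern above can be sketched as (illustrative, not the repo's exact file):
```yaml
# playbooks/site.yml (sketch)
- import_playbook: maintenance.yml
  tags: [maintenance]
- import_playbook: development.yml
  tags: [development]
- import_playbook: tailscale.yml
  tags: [tailscale]
- import_playbook: app/site.yml
  tags: [app]
```
Because `import_playbook` tags apply to everything imported, `--tags app` selects the whole app suite.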
### `playbooks/maintenance.yml`
```mermaid
flowchart TD
P[playbooks/maintenance.yml] --> R[role: maintenance]
```
- **Notes**:
- `pre_tasks`/`post_tasks` in the playbook are **untagged**; if you run with
`--tags maintenance`, only the `maintenance` role runs (the untagged pre/post tasks are skipped).
### `playbooks/development.yml`
Targets: `hosts: dev`
```mermaid
flowchart TD
P[playbooks/development.yml] --> R1[role: maintenance]
P --> R2[role: base]
P --> R3[role: user]
P --> R4[role: ssh]
P --> R5[role: shell]
P --> R6[role: development]
P --> R7[role: datascience]
P --> R8[role: docker]
P --> R9[role: monitoring_desktop]
%% role-internal paths that matter
R5 --> S1[roles/shell/tasks/main.yml]
S1 --> S2[include_tasks: roles/shell/tasks/configure_user_shell.yml]
R8 --> D1[roles/docker/tasks/main.yml]
D1 --> D2[include_tasks: roles/docker/tasks/setup_gpg_key.yml]
D1 --> D3[include_tasks: roles/docker/tasks/setup_repo_*.yml]
```
- **Notes**:
- `pre_tasks` is **untagged**; with a `--tags` filter the apt-cache update is skipped (untagged tasks run only when no tag filter is given, or with `--tags all` / `--tags untagged`).
### `playbooks/local.yml`
Targets: `hosts: localhost`, `connection: local`
This is basically the same role stack as `playbooks/development.yml` (minus `datascience`), but applied locally.
```mermaid
flowchart TD
P[playbooks/local.yml] --> R1[role: maintenance]
P --> R2[role: base]
P --> R3[role: user]
P --> R4[role: ssh]
P --> R5[role: shell]
P --> R6[role: development]
P --> R7[role: docker]
P --> R8[role: monitoring_desktop]
```
### `playbooks/servers.yml`
Targets by default: `services:qa:ansible:tailscale` (override via `-e target_group=...`).
```mermaid
flowchart TD
P[playbooks/servers.yml] --> R1[role: maintenance]
P --> R2[role: base]
P --> R3[role: user]
P --> R4[role: ssh]
P --> R5[role: shell]
P --> R6[role: docker]
P --> R7[role: monitoring_server]
```
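A sketch (assumption — the playbook's actual variable handling may differ) of how the overridable target group could appear in the play header:
```yaml
# playbooks/servers.yml (sketch)
- hosts: "{{ target_group | default('services:qa:ansible:tailscale') }}"
  become: true
```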
### `playbooks/workstations.yml`
Two plays:
- Workstation baseline for `dev:desktop:local`
- Desktop applications only for the `desktop` group
```mermaid
flowchart TD
W[playbooks/workstations.yml] --> B1[play: workstation baseline]
B1 --> R1[role: maintenance]
B1 --> R2[role: base]
B1 --> R3[role: user]
B1 --> R4[role: ssh]
B1 --> R5[role: shell]
B1 --> R6[role: development]
B1 --> R7[role: datascience]
B1 --> R8[role: docker]
B1 --> R9[role: monitoring_desktop]
W --> B2[play: desktop apps]
B2 --> A1[role: applications]
```
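Since only the second play carries the `apps`/`applications` tags, the desktop-apps portion can be run on its own (illustrative invocation; inventory path assumed from other examples in these docs):

```bash
# Runs just the desktop-apps play (role: applications, desktop group)
ansible-playbook -i inventories/production playbooks/workstations.yml --tags apps
```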
### `playbooks/shell.yml`
```mermaid
flowchart TD
P[playbooks/shell.yml] --> R[role: shell]
R --> S1[roles/shell/tasks/main.yml]
S1 --> S2[include_tasks: roles/shell/tasks/configure_user_shell.yml]
```
### `playbooks/tailscale.yml`
```mermaid
flowchart TD
P[playbooks/tailscale.yml] --> R[role: tailscale]
R --> T1[roles/tailscale/tasks/main.yml]
T1 -->|Debian| T2[include_tasks: roles/tailscale/tasks/debian.yml]
T1 -->|Alpine| T3[include_tasks: roles/tailscale/tasks/alpine.yml]
```
### `playbooks/infrastructure/proxmox-vm.yml`
Creates an Ansible controller VM on Proxmox (local connection) via `role: proxmox_vm`.
```mermaid
flowchart TD
P[playbooks/infrastructure/proxmox-vm.yml] --> R[role: proxmox_vm]
R --> M1[roles/proxmox_vm/tasks/main.yml]
M1 -->|proxmox_guest_type=lxc| L1[include_tasks: roles/proxmox_vm/tasks/lxc.yml]
M1 -->|else| K1[include_tasks: roles/proxmox_vm/tasks/kvm.yml]
```
### App project suite (`playbooks/app/*`)
#### `playbooks/app/site.yml` (app dispatcher)
```mermaid
flowchart TD
S[playbooks/app/site.yml] -->|tags: app,provision| P[playbooks/app/provision_vms.yml]
S -->|tags: app,configure| C[playbooks/app/configure_app.yml]
```
#### `playbooks/app/provision_vms.yml` (provision app guests)
High-level loop: `project_key``env_item` → provision guest → add to dynamic inventory groups.
```mermaid
flowchart TD
P[playbooks/app/provision_vms.yml] --> T1[include_tasks: playbooks/app/provision_one_guest.yml]
T1 --> T2[include_tasks: playbooks/app/provision_one_env.yml]
T2 --> R[include_role: proxmox_vm]
R --> M1[roles/proxmox_vm/tasks/main.yml]
M1 --> L1[roles/proxmox_vm/tasks/lxc.yml]
T2 --> H[add_host groups]
H --> G1[app_all]
H --> G2[app_${project}_all]
H --> G3[app_${project}_${env}]
```
#### `playbooks/app/configure_app.yml` (configure app guests)
Two phases:
1) **localhost** builds a dynamic inventory from `app_projects` (static IPs)
2) **app_all** (or `app_${project}_all`) configures each host
```mermaid
flowchart TD
A[play: localhost build inventory] --> H[add_host groups]
H --> G1[app_all / app_${project}_*]
B[play: app_all configure] --> OS[include_role: base_os]
B --> POTE["include_role: pote (when app_project == 'pote')"]
B --> APP["include_role: app_setup (when app_project != 'pote')"]
```
#### `playbooks/app/proxmox_info.yml`
Single local play that queries Proxmox and prints a filtered summary.
#### `playbooks/app/ssh_client_config.yml`
Single local play that optionally manages `~/.ssh/config` (gated by `manage_ssh_config`).
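Both helper playbooks can be run directly; for example, enabling SSH-config management explicitly (the `manage_ssh_config` gate is taken from the description above; the invocation style is assumed from other examples in these docs):

```bash
# Manages ~/.ssh/config entries only when explicitly enabled
ansible-playbook -i inventories/production playbooks/app/ssh_client_config.yml -e manage_ssh_config=true
```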
---
## Tags map (what each tag hits)
### Top-level dispatcher tags (`playbooks/site.yml`)
- **maintenance**: runs `playbooks/maintenance.yml`
- **development**: runs `playbooks/development.yml`
- **tailscale**: runs `playbooks/tailscale.yml`
- **app**: runs `playbooks/app/site.yml` (and therefore app provision + configure)
### Dev/local role tags (`playbooks/development.yml`, `playbooks/local.yml`)
These playbooks tag roles directly:
- **maintenance**`role: maintenance`
- **base**`role: base`
- **security**`role: base` + `role: ssh` (both are tagged `security` at role inclusion)
- **user**`role: user`
- **ssh**`role: ssh`
- **shell**`role: shell`
- **development** / **dev**`role: development`
- **docker**`role: docker`
- **monitoring**`role: monitoring_desktop`
- **datascience** / **conda** / **jupyter** / **r**`role: datascience` (development playbook only)
- **tailscale** / **vpn** → (currently commented out in dev/local)
### Workstation + desktop apps tags (`playbooks/workstations.yml`)
- **apps** / **applications**`role: applications` (desktop group only)
### App suite tags
From `playbooks/app/site.yml` imports:
- **app**: everything in the app suite
- **provision**: `playbooks/app/provision_vms.yml` only
- **configure**: `playbooks/app/configure_app.yml` only
Standalone app playbook tags (so `--tags ...` works when running them directly):
- `playbooks/app/provision_vms.yml`: **app**, **provision**
- `playbooks/app/configure_app.yml`: **app**, **configure**
- `playbooks/app/proxmox_info.yml`: **app**, **proxmox**, **info**
- `playbooks/app/ssh_client_config.yml`: **app**, **ssh-config**
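For example, the provisioning phase alone (paths and flags follow the usage comments elsewhere in this repo):

```bash
# Provision only, for a single project
ansible-playbook -i inventories/production playbooks/app/provision_vms.yml -e app_project=projectA
# Same phase via the dispatcher, selected by tag
ansible-playbook -i inventories/production playbooks/app/site.yml -e app_project=projectA --tags provision
```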
### Role-internal tags (task/block level)
These are tags inside role task files (useful for targeting parts of a role even if the role itself isn't included with that tag):
- `roles/datascience/tasks/main.yml`:
- **conda**
- **jupyter**
- **r**, **rstats**
### Makefile tag shortcuts
Make targets that apply `--tags`:
- `make datascience HOST=...``--tags datascience`
- `make security``--tags security`
- `make docker``--tags docker`
- `make shell``--tags shell`
- `make apps``--tags apps`
- `make monitoring``--tags monitoring`
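A typical invocation (the `HOST=` convention is taken from the `datascience` target above; whether the other targets accept `HOST` is an assumption):

```bash
# Apply only security-tagged tasks (roles: base + ssh) to one host
make security HOST=dev01
```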
---
## Tag-filtering gotchas (important)
- If you run with `--tags X`, **untagged** `pre_tasks`/`tasks`/`post_tasks` in a playbook are skipped.
- Example: `playbooks/maintenance.yml` has untagged `pre_tasks` and `post_tasks`.
`--tags maintenance` runs only the `maintenance` role, not the surrounding reporting steps.


@ -0,0 +1,28 @@
# Security reference
## Overview
Security in this repo is implemented via:
- hardened SSH + firewall defaults (`roles/ssh/`)
- baseline system configuration (`roles/base/`)
- monitoring/intrusion prevention on servers (`roles/monitoring_server/`)
- secrets handled via Ansible Vault (`inventories/production/group_vars/all/vault.yml`)
## Recommended execution
```bash
# Dry-run first
make check
# Apply security-tagged tasks
make security
```
## Vault
- Vault guide: `docs/guides/vault.md`
## Canonical standards
- `project-docs/standards.md`


@ -30,3 +30,269 @@ tailscale_accept_routes: true
tailscale_accept_dns: true
tailscale_ssh: false
tailscale_hostname: "{{ inventory_hostname }}"
# -----------------------------------------------------------------------------
# Proxmox + modular app projects (LXC-first)
#
# This repo can manage many independent apps ("projects"). Each project defines
# its own dev/qa/prod guests (IPs/VMIDs/branches) under `app_projects`.
#
# Usage examples:
# - Run one project: ansible-playbook -i inventories/production playbooks/app/site.yml -e app_project=projectA
# - Run all projects: ansible-playbook -i inventories/production playbooks/app/site.yml
# -----------------------------------------------------------------------------
# Proxmox API connection (keep secrets in vault)
proxmox_host: "{{ vault_proxmox_host }}"
proxmox_user: "{{ vault_proxmox_user }}"
proxmox_node: "{{ vault_proxmox_node | default('pve') }}"
proxmox_api_port: "{{ vault_proxmox_api_port | default(8006) }}"
# Proxmox commonly uses a self-signed cert; keep validation off by default.
proxmox_validate_certs: false
# Prefer API token auth (store in vault):
# - proxmox_token_id: "ansible@pve!token-name"
# - vault_proxmox_token: "secret"
proxmox_token_id: "{{ vault_proxmox_token_id | default('') }}"
# Default guest type for new projects. (Later you can set to `kvm` per project/env.)
proxmox_guest_type: lxc
# Proxmox LXC defaults (override per project/env as needed)
lxc_ostemplate: "local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst"
lxc_storage: "local-lvm"
lxc_network_bridge: "vmbr0"
lxc_unprivileged: true
lxc_features_list:
- "keyctl=1"
- "nesting=1"
lxc_start_after_create: true
lxc_nameserver: "1.1.1.1 8.8.8.8"
# Base OS / access defaults
appuser_name: appuser
appuser_shell: /bin/bash
appuser_groups:
- sudo
# Store your workstation public key in vault_ssh_public_key
appuser_ssh_public_key: "{{ vault_ssh_public_key }}"
# App defaults (override per project)
app_backend_port: 3001
app_frontend_port: 3000
# Default Node workflow commands (override per project if your app differs)
app_backend_install_cmd: "npm ci"
app_backend_migrate_cmd: "npm run migrate"
app_backend_start_cmd: "npm start"
app_frontend_install_cmd: "npm ci"
app_frontend_build_cmd: "npm run build"
app_frontend_start_cmd: "npm start"
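# Example (hypothetical project): override the workflow per project when it
# differs from the npm defaults above, e.g. a yarn-based frontend:
#
#   app_projects:
#     projectB:
#       deploy:
#         frontend_install_cmd: "yarn install --frozen-lockfile"
#         frontend_build_cmd: "yarn build"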
# Projects definition: add as many projects as you want here.
# Each project has envs (dev/qa/prod) defining name/vmid/ip/gateway/branch and
# optional env_vars (dummy placeholders by default).
#
# -----------------------------------------------------------------------------
# Proxmox VMID/CTID ranges (DEDICATED; avoid collisions)
#
# Proxmox IDs are global. Never reuse IDs across unrelated guests.
# Suggested reservation table (edit to your preference):
# - 9000-9099: pote
# - 9100-9199: punimTagFE
# - 9200-9299: punimTagBE
# - 9300-9399: projectA (example)
# -----------------------------------------------------------------------------
app_projects:
projectA:
description: "Example full-stack app (edit repo_url, IPs, secrets)."
repo_url: "git@github.com:example/projectA.git"
components:
backend: true
frontend: true
# Repo is assumed to contain `backend/` and `frontend/` directories.
repo_dest: "/srv/app"
# Optional overrides for this project
backend_port: "{{ app_backend_port }}"
frontend_port: "{{ app_frontend_port }}"
guest_defaults:
guest_type: "{{ proxmox_guest_type }}"
cores: 2
memory_mb: 2048
swap_mb: 512
rootfs_size_gb: 16
deploy:
backend_install_cmd: "{{ app_backend_install_cmd }}"
backend_migrate_cmd: "{{ app_backend_migrate_cmd }}"
backend_start_cmd: "{{ app_backend_start_cmd }}"
frontend_install_cmd: "{{ app_frontend_install_cmd }}"
frontend_build_cmd: "{{ app_frontend_build_cmd }}"
frontend_start_cmd: "{{ app_frontend_start_cmd }}"
envs:
dev:
name: "projectA-dev"
vmid: 9301
ip: "10.0.10.101/24"
gateway: "10.0.10.1"
branch: "dev"
env_vars:
APP_ENV: "dev"
BACKEND_BASE_URL: "http://10.0.10.101:{{ app_backend_port }}"
FRONTEND_BASE_URL: "http://10.0.10.101:{{ app_frontend_port }}"
SECRET_PLACEHOLDER: "change-me"
qa:
name: "projectA-qa"
vmid: 9302
ip: "10.0.10.102/24"
gateway: "10.0.10.1"
branch: "qa"
env_vars:
APP_ENV: "qa"
BACKEND_BASE_URL: "http://10.0.10.102:{{ app_backend_port }}"
FRONTEND_BASE_URL: "http://10.0.10.102:{{ app_frontend_port }}"
SECRET_PLACEHOLDER: "change-me"
prod:
name: "projectA-prod"
vmid: 9303
ip: "10.0.10.103/24"
gateway: "10.0.10.1"
branch: "main"
pote:
description: "POTE (python/venv + cron) project (edit repo_url, IPs, secrets)."
repo_url: "gitea@10.0.30.169:ilia/POTE.git"
# POTE deploys as a user-owned python app (not /srv/app)
repo_dest: "/home/poteapp/pote"
os_user: "poteapp"
components:
backend: false
frontend: false
guest_defaults:
guest_type: "{{ proxmox_guest_type }}"
cores: 2
memory_mb: 2048
swap_mb: 512
rootfs_size_gb: 16
# POTE-specific optional defaults (override per env if needed)
pote_db_host: "localhost"
pote_db_user: "poteuser"
pote_db_name: "potedb"
pote_smtp_host: "mail.levkin.ca"
pote_smtp_port: 587
envs:
dev:
name: "pote-dev"
vmid: 9001
ip: "10.0.10.114/24"
gateway: "10.0.10.1"
branch: "dev"
qa:
name: "pote-qa"
vmid: 9002
ip: "10.0.10.112/24"
gateway: "10.0.10.1"
branch: "qa"
prod:
name: "pote-prod"
vmid: 9003
ip: "10.0.10.113/24"
gateway: "10.0.10.1"
branch: "main"
punimTagFE:
description: "punimTag frontend-only project (edit repo_url, IPs, secrets)."
repo_url: "git@github.com:example/punimTagFE.git"
repo_dest: "/srv/app"
components:
backend: false
frontend: true
guest_defaults:
guest_type: "{{ proxmox_guest_type }}"
cores: 2
memory_mb: 2048
swap_mb: 512
rootfs_size_gb: 16
deploy:
frontend_install_cmd: "{{ app_frontend_install_cmd }}"
frontend_build_cmd: "{{ app_frontend_build_cmd }}"
frontend_start_cmd: "{{ app_frontend_start_cmd }}"
envs:
dev:
name: "punimTagFE-dev"
vmid: 9101
ip: "10.0.10.121/24"
gateway: "10.0.10.1"
branch: "dev"
env_vars:
APP_ENV: "dev"
SECRET_PLACEHOLDER: "change-me"
qa:
name: "punimTagFE-qa"
vmid: 9102
ip: "10.0.10.122/24"
gateway: "10.0.10.1"
branch: "qa"
env_vars:
APP_ENV: "qa"
SECRET_PLACEHOLDER: "change-me"
prod:
name: "punimTagFE-prod"
vmid: 9103
ip: "10.0.10.123/24"
gateway: "10.0.10.1"
branch: "main"
env_vars:
APP_ENV: "prod"
SECRET_PLACEHOLDER: "change-me"
punimTagBE:
description: "punimTag backend-only project (edit repo_url, IPs, secrets)."
repo_url: "git@github.com:example/punimTagBE.git"
repo_dest: "/srv/app"
components:
backend: true
frontend: false
guest_defaults:
guest_type: "{{ proxmox_guest_type }}"
cores: 2
memory_mb: 2048
swap_mb: 512
rootfs_size_gb: 16
deploy:
backend_install_cmd: "{{ app_backend_install_cmd }}"
backend_migrate_cmd: "{{ app_backend_migrate_cmd }}"
backend_start_cmd: "{{ app_backend_start_cmd }}"
envs:
dev:
name: "punimTagBE-dev"
vmid: 9201
ip: "10.0.10.131/24"
gateway: "10.0.10.1"
branch: "dev"
env_vars:
APP_ENV: "dev"
SECRET_PLACEHOLDER: "change-me"
qa:
name: "punimTagBE-qa"
vmid: 9202
ip: "10.0.10.132/24"
gateway: "10.0.10.1"
branch: "qa"
env_vars:
APP_ENV: "qa"
SECRET_PLACEHOLDER: "change-me"
prod:
name: "punimTagBE-prod"
vmid: 9203
ip: "10.0.10.133/24"
gateway: "10.0.10.1"
branch: "main"
env_vars:
APP_ENV: "prod"
SECRET_PLACEHOLDER: "change-me"


@ -0,0 +1,42 @@
---
# Example vault values for Proxmox app projects.
#
# Copy required keys into your encrypted vault:
# make edit-group-vault
#
# Never commit real secrets unencrypted.
# Proxmox API
vault_proxmox_host: "10.0.10.201"
vault_proxmox_user: "root@pam"
vault_proxmox_node: "pve"
vault_proxmox_password: "CHANGE_ME"
# Optional token auth (recommended if you use it)
# vault_proxmox_token_id: "root@pam!ansible"
# vault_proxmox_token: "CHANGE_ME"
# SSH public key for appuser (workstation key)
vault_ssh_public_key: "ssh-ed25519 AAAA... you@example"
# LXC create bootstrap password (often required by Proxmox)
vault_lxc_root_password: "CHANGE_ME"
# -----------------------------------------------------------------------------
# POTE (python/venv + cron) secrets
# -----------------------------------------------------------------------------
# Private key used for cloning from Gitea (deploy key). Store as a multi-line block.
vault_pote_git_ssh_key: |
-----BEGIN OPENSSH PRIVATE KEY-----
CHANGE_ME
-----END OPENSSH PRIVATE KEY-----
# Environment-specific DB passwords (used by roles/pote)
vault_pote_db_password_dev: "CHANGE_ME"
vault_pote_db_password_qa: "CHANGE_ME"
vault_pote_db_password_prod: "CHANGE_ME"
# SMTP password for reports
vault_pote_smtp_password: "CHANGE_ME"


@ -1,10 +1,47 @@
$ANSIBLE_VAULT;1.1;AES256 $ANSIBLE_VAULT;1.1;AES256
36343265643238633236643162613137393331386164306133666537633336633036376433386161 36643038376636383030343730626264613839396462366365633837636130623639393361656634
3135366566623235333264386539346364333435373065300a633231633731316633313166346161 3238353261633635353662653036393835313963373562390a646535376366656163383632313835
30363334613965666634633665363632323966396464633636346533616634393664386566333230 39646666653362336661633736333365343962346432653131613134353361366263373162386631
3463666531323866660a666238383331383562313363386639646161653334313661393065343135 3134613438626132320a313765343338643535343837306339616564336564303166626164356530
33613762653361656633366465306264323935363032353737333935363165346639616330333939 63663363656535303137663431613861343662303664313332626166373463393931323937613230
39336538643866366361313838636338643336376365373166376234383838656430623339313162 66333665316331323637663437653339353737653336633864393033336630336438646162643662
37353461313263643263376232393138396233366234336333613535366234383661353938663032 31656164363933333036376263303034646366393134636630663631353235373831303264363762
65383737343164343431363764333063326230623263323231366232626131306637353361343466 66643865616130306537383836646237613730643133656333666632326538613764383530363363
6131 61386161646637316166303633643665383365346534323939383034613430386362303038313761
36303364396436373466653332303562653038373962616539356633373065643130303036363161
65353163326136383066393332376236386333653532326337613163346334616234643562643265
62316134386365343733636661336130623364386634383965386135616633323132643365613231
34636435333031376136396336316337666161383562343834383865316436633333333065323138
37343865363731303137666330306131373734623637343531623562353332353437646631343363
30393565376435303430396535643165616534313334326462363130626639343038643835336335
33613630336534666163356631353438373462306566376134323536373832643264633365653465
62386363326436623330653430383262653732376235626432656362306363303663623834653664
31373762306539376431353137393664396165396261613364653339373765393863633833396131
36666235666234633430373338323331313531643736656137303937653865303431643164373161
39633238383265396366386230303536613461633431333565353433643935613231333232333063
36643435376165656262623863373039393837643564366531666462376162653630626634663037
39373439336239646131306133663566343734656339346462356662373561306264333364383966
38343463616666613037636335333137633737666166633364343736646232396566373866633531
34303734376137386363373039656565323364333539626630323465666636396465323861333365
35376161663630356132373638333937376164316361303531303637396334306133373237656265
36356532623130323565396531306136363339363437376364343138653139653335343765316365
38313035366137393365316139326236326330386365343665376335313339666231333632333133
32353865626531373462346261653832386234396531653136323162653865303861396233376261
34616232363965313635373833333737336166643734373633313865323066393930666562316136
36373763356365646361656436383463393237623461383531343134373336663763663464336361
38396532383932643065303731663565353366373033353237383538636365323064396531386134
61643964613930373439383032373364316437303239393434376465393639373634663738623461
37386366616333626434363761326361373533306635316164316363393264303633353939613239
37353266303637323139653630663236663633313061306633316139666539376632306630313362
34633834326433646230303634313266303530633236353262633066396462646365623935343161
34393166643666366164313438383939386434366665613330653739383139613732396633383261
33633664303131383163356362316639353064373861343132623565636631333135663034373461
61303031616634333235303066633939643337393862653031323936363932633438303035323238
66323066353737316166383533636661336637303265343937633064626164623462656134333732
33316536336430636636646561626232666633656266326339623732363531326131643764313838
62356537326166346666313930383639386466633432626235373738633833393164646238366465
62373938363739373036666238666433303061633732663565666433333631326432626461353037
39636263636632313431353364386566383134653139393762623562643561616166633035353038
39326462356332616563303462636536636132633933336532383938373030666333363264346632
64643063373830353130613662323131353964313038323735626464313363326364653732323732
3663393964633138376665323435366463623463613237366465


@ -0,0 +1,9 @@
---
# Development group overrides
# Development machines may need more permissive SSH settings
# Allow root login for initial setup (can be disabled after setup)
ssh_permit_root_login: 'yes'
# Allow password authentication for initial setup (should be disabled after SSH keys are set up)
ssh_password_authentication: 'yes'


@ -0,0 +1,10 @@
---
# Host variables for KrakenMint
# Using root user directly, password will be prompted
ansible_become: true
# Configure shell for root
shell_users:
- ladmin


@ -0,0 +1,8 @@
$ANSIBLE_VAULT;1.1;AES256
39353931333431383166336133363735336334376339646261353331323162343663386265393337
3761626465643830323333613065316361623839363439630a653563306462313663393432306135
61383936326637366635373563623038623866643230356164336436666535626239346163323665
6339623335643238660a303031363233396466326333613831366265363839313435366235663139
35616161333063363035326636353936633465613865313033393331313662303436646537613665
39616336363533633833383266346562373161656332363237343665316337353764386661333664
336163353333613762626533333437376637


@ -1,3 +1,4 @@
---
$ANSIBLE_VAULT;1.1;AES256
31306264346663636630656534303766666564333866326139336137383339633338323834653266
6132333337363566623265303037336266646238633036390a663432623861363562386561393264


@ -1,3 +1,4 @@
---
$ANSIBLE_VAULT;1.1;AES256
66633265383239626163633134656233613638643862323562373330643363323036333334646566
3439646635343533353432323064643135623532333738380a353866643461636233376432396434


@ -0,0 +1,16 @@
---
# Host variables for dev02
# Use ladmin user with sudo to become root
ansible_become: true
ansible_become_method: sudo
ansible_become_password: "{{ vault_dev02_become_password }}"
# Configure shell for ladmin
shell_users:
- ladmin
# Skip data science stack
install_conda: false
install_jupyter: false
install_r: false


@ -1,4 +1,5 @@
---
ansible_become_password: "{{ vault_devgpu_become_password }}"
ansible_python_interpreter: /usr/bin/python3
@ -21,10 +22,4 @@ jupyter_bind_all_interfaces: true
# R configuration
install_r: true
# IDE/editor tooling is intentionally not managed by Ansible in this repo.


@ -0,0 +1,2 @@
---
vault_devgpu_become_password: root


@ -1,3 +1,4 @@
---
# Configure sudo path for git-ci-01
# Sudo may not be in PATH for non-interactive shells
ansible_become_exe: /usr/bin/sudo
@ -5,4 +6,3 @@ ansible_become_method: sudo
# Alternative: if sudo is in a different location, update this
# ansible_become_exe: /usr/local/bin/sudo


@ -1,3 +1,4 @@
---
$ANSIBLE_VAULT;1.1;AES256
61623232353833613730343036663434633265346638366431383737623936616131356661616238
3230346138373030396336663566353433396230346434630a313633633161303539373965343466


@ -1,3 +1,4 @@
---
$ANSIBLE_VAULT;1.1;AES256
31316663336338303832323464623866343366313261653536623233303466636630633235643638
3666646431323061313836333233356162643462323763380a623666663062386337393439653134


@ -1,3 +1,4 @@
---
$ANSIBLE_VAULT;1.1;AES256
62356361353835643235613335613661356230666539386533383536623432316333346431343462
3265376632633731623430376333323234633962643766380a363033666334643930326636343963


@ -7,4 +7,3 @@ ansible_become_method: sudo
# Configure shell for ladmin user
shell_users:
- ladmin


@ -1,3 +1,4 @@
---
$ANSIBLE_VAULT;1.1;AES256
35633833353965363964376161393730613065663236326239376562356231316166656131366263
6263363436373965316339623139353830643062393165370a643138356561613537616431316534


@ -2,26 +2,22 @@
# Primary IPs: Tailscale (100.x.x.x) for remote access
# Fallback IPs: Local network (10.0.x.x) when Tailscale is down
# Usage: ansible_host_fallback is available for manual fallback
#
# NOTE: Proxmox app projects (dev/qa/prod) are provisioned dynamically via
# `playbooks/app/site.yml` (it uses `add_host` based on `app_projects`).
# You generally do NOT need to add project hosts here.
[dev]
dev01 ansible_host=10.0.30.105 ansible_user=ladmin
bottom ansible_host=10.0.10.156 ansible_user=beast
debianDesktopVM ansible_host=10.0.10.206 ansible_user=user skip_reboot=true
devGPU ansible_host=10.0.30.63 ansible_user=root
[qa]
git-ci-01 ansible_host=10.0.10.223 ansible_user=ladmin
sonarqube-01 ansible_host=10.0.10.54 ansible_user=ladmin
dev02 ansible_host=10.0.10.100 ansible_user=ladmin
KrakenMint ansible_host=10.0.10.120 ansible_user=ladmin
[ansible]
ansibleVM ansible_host=10.0.10.157 ansible_user=master
@ -34,8 +30,14 @@ caddy ansible_host=10.0.10.50 ansible_user=root
jellyfin ansible_host=10.0.10.232 ansible_user=root
listmonk ansible_host=10.0.10.149 ansible_user=root
nextcloud ansible_host=10.0.10.25 ansible_user=root
actual ansible_host=10.0.10.158 ansible_user=root
vikanjans ansible_host=10.0.10.159 ansible_user=root
n8n ansible_host=10.0.10.158 ansible_user=root
giteaVM ansible_host=10.0.30.169 ansible_user=root
portainerVM ansible_host=10.0.30.69 ansible_user=ladmin
homepageVM ansible_host=10.0.30.12 ansible_user=homepage
vaultwardenVM ansible_host=10.0.10.142 ansible_user=ladmin
qBittorrent ansible_host=10.0.10.91 ansible_user=root port=8080
[desktop]
desktop-beast ansible_host=100.117.34.106 ansible_user=beast

package-lock.json (generated)

@ -13,7 +13,7 @@
"markdownlint-cli2": "^0.18.1"
},
"engines": {
"node": ">=20.0.0",
"npm": ">=10.0.0"
}
},


@ -0,0 +1,134 @@
---
# Playbook: app/configure_app.yml
# Purpose: Configure OS + app runtime on app project guests created via provision_vms.yml
# Targets: app_all or per-project group created dynamically
# Tags: app, configure
#
# Usage:
# - Run one project: ansible-playbook -i inventories/production playbooks/app/site.yml -e app_project=projectA
# - Run all projects: ansible-playbook -i inventories/production playbooks/app/site.yml
- name: Build dynamic inventory from app_projects (so configure can run standalone)
hosts: localhost
connection: local
gather_facts: false
tags: ['app', 'configure']
vars:
selected_projects: >-
{{
(app_projects | dict2items | map(attribute='key') | list)
if (app_project is not defined or app_project | length == 0)
else [app_project]
}}
app_bootstrap_user_default: root
# If true, configure plays will use vault_lxc_root_password for initial SSH bootstrap.
bootstrap_with_root_password_default: false
tasks:
- name: Validate requested project exists
ansible.builtin.assert:
that:
- app_project is not defined or app_project in app_projects
fail_msg: "Requested app_project={{ app_project }} does not exist in app_projects."
- name: Add each project/env host (by static IP) to dynamic inventory
ansible.builtin.add_host:
name: "{{ app_projects[item.0].envs[item.1].name | default(item.0 ~ '-' ~ item.1) }}"
groups:
- "app_all"
- "app_{{ item.0 }}_all"
- "app_{{ item.0 }}_{{ item.1 }}"
ansible_host: "{{ (app_projects[item.0].envs[item.1].ip | string).split('/')[0] }}"
ansible_user: "{{ app_bootstrap_user | default(app_bootstrap_user_default) }}"
ansible_password: >-
{{
vault_lxc_root_password
if ((bootstrap_with_root_password | default(bootstrap_with_root_password_default) | bool) and (vault_lxc_root_password | default('') | length) > 0)
else omit
}}
app_project: "{{ item.0 }}"
app_env: "{{ item.1 }}"
loop: "{{ selected_projects | product(['dev', 'qa', 'prod']) | list }}"
when:
- app_projects[item.0] is defined
- app_projects[item.0].envs[item.1] is defined
- (app_projects[item.0].envs[item.1].ip | default('')) | length > 0
- name: Configure app guests (base OS + app setup)
hosts: >-
{{
('app_' ~ app_project ~ '_all')
if (app_project is defined and app_project | length > 0)
else 'app_all'
}}
become: true
gather_facts: true
tags: ['app', 'configure']
tasks:
- name: Build project/env effective variables
ansible.builtin.set_fact:
project_def: "{{ app_projects[app_project] }}"
env_def: "{{ app_projects[app_project].envs[app_env] }}"
when: app_project is defined and app_env is defined
- name: Configure base OS
ansible.builtin.include_role:
name: base_os
vars:
base_os_backend_port: "{{ (project_def.backend_port | default(app_backend_port)) if project_def is defined else app_backend_port }}"
base_os_frontend_port: "{{ (project_def.frontend_port | default(app_frontend_port)) if project_def is defined else app_frontend_port }}"
base_os_enable_backend: "{{ project_def.components.backend | default(true) }}"
base_os_enable_frontend: "{{ project_def.components.frontend | default(true) }}"
base_os_user: "{{ project_def.os_user | default(appuser_name) }}"
base_os_user_ssh_public_key: "{{ project_def.os_user_ssh_public_key | default(appuser_ssh_public_key | default('')) }}"
# Only override when explicitly provided (avoids self-referential recursion)
base_os_packages: "{{ project_def.base_os_packages if (project_def is defined and project_def.base_os_packages is defined) else omit }}"
- name: Configure POTE (python/venv + cron)
ansible.builtin.include_role:
name: pote
vars:
pote_git_repo: "{{ project_def.repo_url }}"
pote_git_branch: "{{ env_def.branch }}"
pote_user: "{{ project_def.os_user | default('poteapp') }}"
pote_group: "{{ project_def.os_user | default('poteapp') }}"
pote_app_dir: "{{ project_def.repo_dest | default('/home/' ~ (project_def.os_user | default('poteapp')) ~ '/pote') }}"
pote_env: "{{ app_env }}"
pote_db_host: "{{ env_def.pote_db_host | default(project_def.pote_db_host | default('localhost')) }}"
pote_db_name: "{{ env_def.pote_db_name | default(project_def.pote_db_name | default('potedb')) }}"
pote_db_user: "{{ env_def.pote_db_user | default(project_def.pote_db_user | default('poteuser')) }}"
pote_smtp_host: "{{ env_def.pote_smtp_host | default(project_def.pote_smtp_host | default('mail.levkin.ca')) }}"
pote_smtp_port: "{{ env_def.pote_smtp_port | default(project_def.pote_smtp_port | default(587)) }}"
pote_smtp_user: "{{ env_def.pote_smtp_user | default(project_def.pote_smtp_user | default('')) }}"
pote_from_email: "{{ env_def.pote_from_email | default(project_def.pote_from_email | default('')) }}"
pote_report_recipients: "{{ env_def.pote_report_recipients | default(project_def.pote_report_recipients | default('')) }}"
when: app_project == 'pote'
- name: Configure app layout + deploy + systemd
ansible.builtin.include_role:
name: app_setup
vars:
app_repo_url: "{{ project_def.repo_url }}"
app_repo_dest: "{{ project_def.repo_dest | default('/srv/app') }}"
app_repo_branch: "{{ env_def.branch }}"
# app_env is already set per-host via add_host (dev/qa/prod)
app_owner: "{{ project_def.os_user | default(appuser_name) }}"
app_group: "{{ project_def.os_user | default(appuser_name) }}"
app_backend_port: "{{ project_def.backend_port | default(app_backend_port) }}"
app_frontend_port: "{{ project_def.frontend_port | default(app_frontend_port) }}"
app_enable_backend: "{{ project_def.components.backend | default(true) }}"
app_enable_frontend: "{{ project_def.components.frontend | default(true) }}"
app_backend_install_cmd: "{{ project_def.deploy.backend_install_cmd | default(app_backend_install_cmd) }}"
app_backend_migrate_cmd: "{{ project_def.deploy.backend_migrate_cmd | default(app_backend_migrate_cmd) }}"
app_backend_start_cmd: "{{ project_def.deploy.backend_start_cmd | default(app_backend_start_cmd) }}"
app_frontend_install_cmd: "{{ project_def.deploy.frontend_install_cmd | default(app_frontend_install_cmd) }}"
app_frontend_build_cmd: "{{ project_def.deploy.frontend_build_cmd | default(app_frontend_build_cmd) }}"
app_frontend_start_cmd: "{{ project_def.deploy.frontend_start_cmd | default(app_frontend_start_cmd) }}"
app_env_vars: "{{ env_def.env_vars | default({}) }}"
when: app_project != 'pote'


@ -0,0 +1,237 @@
---
# Helper tasks file for playbooks/app/provision_vms.yml
# Provisions a single (project, env) guest and adds it to dynamic inventory.
- name: Set environment facts
ansible.builtin.set_fact:
env_name: "{{ env_item.key }}"
env_def: "{{ env_item.value }}"
guest_name: "{{ env_item.value.name | default(project_key ~ '-' ~ env_item.key) }}"
# vmid is optional; if omitted, we will manage idempotency by unique guest_name
guest_vmid: "{{ env_item.value.vmid | default(none) }}"
- name: Normalize recreate_existing_envs to a list
ansible.builtin.set_fact:
recreate_envs_list: >-
{{
(recreate_existing_envs.split(',') | map('trim') | list)
if (recreate_existing_envs is defined and recreate_existing_envs is string)
else (recreate_existing_envs | default([]))
}}
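# Example (hypothetical values): passing -e recreate_existing_envs=dev,qa yields
# recreate_envs_list == ['dev', 'qa']; a YAML list supplied via vars is used as-is,
# and when the variable is undefined the list defaults to [].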
- name: Check if Proxmox guest already exists (by VMID when provided)
community.proxmox.proxmox_vm_info:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
type: lxc
vmid: "{{ guest_vmid }}"
register: proxmox_guest_info_vmid
when: guest_vmid is not none
- name: Check if Proxmox guest already exists (by name when VMID omitted)
community.proxmox.proxmox_vm_info:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
type: lxc
name: "{{ guest_name }}"
register: proxmox_guest_info_name
when: guest_vmid is none
- name: Set guest_exists fact
ansible.builtin.set_fact:
guest_exists: >-
{{
((proxmox_guest_info_vmid.proxmox_vms | default([])) | length > 0)
if (guest_vmid is not none)
else ((proxmox_guest_info_name.proxmox_vms | default([])) | length > 0)
}}
- name: "Guardrail: abort if VMID exists but name does not match (prevents overwriting other guests)"
ansible.builtin.fail:
msg: >-
Refusing to use VMID {{ guest_vmid }} for {{ guest_name }} because it already exists as
"{{ (proxmox_guest_info_vmid.proxmox_vms[0].name | default('UNKNOWN')) }}".
Pick a different vmid range in app_projects or omit vmid to auto-allocate.
when:
- guest_vmid is not none
- (proxmox_guest_info_vmid.proxmox_vms | default([])) | length > 0
- (proxmox_guest_info_vmid.proxmox_vms[0].name | default('')) != guest_name
- not (allow_vmid_collision | default(false) | bool)
- name: Delete existing guest if requested (recreate)
community.proxmox.proxmox:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
vmid: "{{ guest_vmid }}"
purge: true
force: true
state: absent
when:
- guest_exists | bool
- guest_vmid is not none
- recreate_existing_guests | default(false) | bool or (env_name in recreate_envs_list)
- name: Mark guest as not existing after delete
ansible.builtin.set_fact:
guest_exists: false
when:
- guest_vmid is not none
- recreate_existing_guests | default(false) | bool or (env_name in recreate_envs_list)
- name: "Preflight: detect IP conflicts on Proxmox (existing LXC net0 ip=)"
community.proxmox.proxmox_vm_info:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
type: lxc
config: current
register: proxmox_all_lxc
when:
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
- name: Set proxmox_ip_conflicts fact
ansible.builtin.set_fact:
proxmox_ip_conflicts: >-
{%- set conflicts = [] -%}
{%- set target_ip = ((env_def.ip | string).split('/')[0]) -%}
{%- for vm in (proxmox_all_lxc.proxmox_vms | default([])) -%}
{%- set cfg_net0 = (
vm['config']['net0']
if (
vm is mapping and ('config' in vm)
and (vm['config'] is mapping) and ('net0' in vm['config'])
)
else none
) -%}
{%- set vm_netif = (vm['netif'] if (vm is mapping and ('netif' in vm)) else none) -%}
{%- set net0 = (
cfg_net0
if (cfg_net0 is not none)
else (
vm_netif['net0']
if (vm_netif is mapping and ('net0' in vm_netif))
else (
vm_netif
if (vm_netif is string)
else (vm['net0'] if (vm is mapping and ('net0' in vm)) else '')
)
)
) | string -%}
{%- set vm_ip = (net0 | regex_search('(?:^|,)ip=([^,]+)', '\\1') | default('')) | regex_replace('/.*$', '') -%}
{%- if (vm_ip | length) > 0 and vm_ip == target_ip -%}
{%- set _ = conflicts.append({'vmid': (vm.vmid | default('') | string), 'name': (vm.name | default('') | string), 'net0': net0}) -%}
{%- endif -%}
{%- endfor -%}
{{ conflicts }}
when:
- proxmox_all_lxc is defined
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
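# Example of the net0 string this loop parses (hypothetical values):
#   "name=eth0,bridge=vmbr0,firewall=1,ip=192.168.1.50/24,gw=192.168.1.1"
# regex_search extracts "192.168.1.50/24" and regex_replace strips the "/24",
# so the comparison against target_ip is done on bare addresses.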
- name: Abort if IP is already assigned to an existing Proxmox LXC
ansible.builtin.fail:
msg: >-
Refusing to provision {{ guest_name }} because IP {{ (env_def.ip | string).split('/')[0] }}
is already present in Proxmox LXC net0 config: {{ proxmox_ip_conflicts }}.
Fix app_projects IPs or set -e allow_ip_conflicts=true.
when:
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
- (proxmox_ip_conflicts | default([])) | length > 0
- name: "Preflight: fail if target IP responds (avoid accidental duplicate IP)"
ansible.builtin.command: "ping -c 1 -W 1 {{ (env_def.ip | string).split('/')[0] }}"
register: ip_ping
changed_when: false
failed_when: false
when:
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
- name: Abort if IP appears to be in use
ansible.builtin.fail:
msg: >-
Refusing to provision {{ guest_name }} because IP {{ (env_def.ip | string).split('/')[0] }}
responded to ping. Fix app_projects IPs or set -e allow_ip_conflicts=true.
Note: this guardrail is ping-based; if your network blocks ICMP, an in-use IP may not respond.
when:
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
- ip_ping.rc == 0
- name: Provision LXC guest for project/env
ansible.builtin.include_role:
name: proxmox_vm
vars:
# NOTE: Use hostvars['localhost'] for defaults to avoid recursive self-references
proxmox_guest_type: "{{ project_def.guest_defaults.guest_type | default(hostvars['localhost'].proxmox_guest_type | default('lxc')) }}"
# Only pass vmid when provided; otherwise Proxmox will auto-allocate
lxc_vmid: "{{ guest_vmid if guest_vmid is not none else omit }}"
lxc_hostname: "{{ guest_name }}"
lxc_ostemplate: "{{ project_def.lxc_ostemplate | default(hostvars['localhost'].lxc_ostemplate) }}"
lxc_storage: "{{ project_def.lxc_storage | default(hostvars['localhost'].lxc_storage) }}"
lxc_network_bridge: "{{ project_def.lxc_network_bridge | default(hostvars['localhost'].lxc_network_bridge) }}"
lxc_unprivileged: "{{ project_def.lxc_unprivileged | default(hostvars['localhost'].lxc_unprivileged) }}"
lxc_features_list: "{{ project_def.lxc_features_list | default(hostvars['localhost'].lxc_features_list) }}"
lxc_cores: "{{ project_def.guest_defaults.cores | default(hostvars['localhost'].lxc_cores) }}"
lxc_memory_mb: "{{ project_def.guest_defaults.memory_mb | default(hostvars['localhost'].lxc_memory_mb) }}"
lxc_swap_mb: "{{ project_def.guest_defaults.swap_mb | default(hostvars['localhost'].lxc_swap_mb) }}"
lxc_rootfs_size_gb: "{{ project_def.guest_defaults.rootfs_size_gb | default(hostvars['localhost'].lxc_rootfs_size_gb) }}"
lxc_ip: "{{ env_def.ip }}"
lxc_gateway: "{{ env_def.gateway }}"
lxc_nameserver: "{{ project_def.lxc_nameserver | default(hostvars['localhost'].lxc_nameserver) }}"
lxc_pubkey: "{{ appuser_ssh_public_key | default('') }}"
lxc_start_after_create: "{{ project_def.lxc_start_after_create | default(hostvars['localhost'].lxc_start_after_create) }}"
- name: Wait for SSH to become available
ansible.builtin.wait_for:
host: "{{ (env_def.ip | string).split('/')[0] }}"
port: 22
timeout: 300
when: (env_def.ip | default('')) | length > 0
- name: Add guest to dynamic inventory
ansible.builtin.add_host:
name: "{{ guest_name }}"
groups:
- "app_all"
- "app_{{ project_key }}_all"
- "app_{{ project_key }}_{{ env_name }}"
ansible_host: "{{ (env_def.ip | string).split('/')[0] }}"
ansible_user: root
app_project: "{{ project_key }}"
app_env: "{{ env_name }}"
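# Example (hypothetical project "projectA", env "dev"): the guest joins groups
#   app_all, app_projectA_all, app_projectA_dev
# so later plays can target e.g. --limit app_projectA_dev or all app guests at once.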
# EOF


@ -0,0 +1,23 @@
---
# Helper tasks file for playbooks/app/provision_vms.yml
# Provisions all envs for a single project and adds dynamic inventory hosts.
- name: Set project definition
ansible.builtin.set_fact:
project_def: "{{ app_projects[project_key] }}"
- name: "Preflight: validate env IPs are unique within project"
ansible.builtin.assert:
that:
- (project_env_ips | length) == ((project_env_ips | unique) | length)
fail_msg: "Duplicate IPs detected in app_projects.{{ project_key }}.envs (IPs must be unique): {{ project_env_ips }}"
vars:
project_env_ips: "{{ project_def.envs | dict2items | map(attribute='value.ip') | select('defined') | map('string') | map('regex_replace', '/.*$', '') | reject('equalto', '') | list }}"
when:
- project_def.envs is defined
- (project_def.envs | length) > 0
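# Example (hypothetical IPs): envs with ip values "10.0.0.5/24" and "10.0.0.6/24"
# reduce to project_env_ips == ['10.0.0.5', '10.0.0.6']; if two envs share an IP,
# unique() shrinks the list and the length comparison makes the assert fail.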
- name: Provision each environment for project
ansible.builtin.include_tasks: provision_one_env.yml
loop: "{{ project_def.envs | dict2items }}"
loop_control:
loop_var: env_item
# EOF


@ -0,0 +1,35 @@
---
# Playbook: app/provision_vms.yml
# Purpose: Provision Proxmox guests for app projects (LXC-first) based on `app_projects`.
# Targets: localhost (Proxmox API)
# Tags: app, provision
#
# Usage:
# - Run one project: ansible-playbook -i inventories/production playbooks/app/provision_vms.yml -e app_project=projectA
# - Run all projects: ansible-playbook -i inventories/production playbooks/app/provision_vms.yml
- name: Provision Proxmox guests for app projects
hosts: localhost
connection: local
gather_facts: false
tags: ['app', 'provision']
vars:
selected_projects: >-
{{
(app_projects | dict2items | map(attribute='key') | list)
if (app_project is not defined or app_project | length == 0)
else [app_project]
}}
tasks:
- name: Validate requested project exists
ansible.builtin.assert:
that:
- app_project is not defined or app_project in app_projects
fail_msg: "Requested app_project={{ app_project }} does not exist in app_projects."
- name: Provision each project/env guest via Proxmox API
ansible.builtin.include_tasks: provision_one_guest.yml
loop: "{{ selected_projects }}"
loop_control:
loop_var: project_key


@ -0,0 +1,98 @@
---
# Playbook: app/proxmox_info.yml
# Purpose: Query Proxmox API for VM/LXC info (status, node, name, vmid) and
# optionally filter to just the guests defined in `app_projects`.
# Targets: localhost
# Tags: app, proxmox, info
#
# Usage examples:
# - Show only projectA guests: ansible-playbook -i inventories/production playbooks/app/proxmox_info.yml -e app_project=projectA
# - Show all VMs/CTs on the cluster: ansible-playbook -i inventories/production playbooks/app/proxmox_info.yml -e proxmox_info_all=true
# - Restrict to only LXC: -e proxmox_info_type=lxc
- name: Proxmox inventory info (VMs and containers)
hosts: localhost
connection: local
gather_facts: false
tags: ['app', 'proxmox', 'info']
vars:
selected_projects: >-
{{
(app_projects | dict2items | map(attribute='key') | list)
if (app_project is not defined or app_project | length == 0)
else [app_project]
}}
proxmox_info_all_default: false
proxmox_info_type_default: all # all|lxc|qemu
tasks:
- name: Validate requested project exists
ansible.builtin.assert:
that:
- app_project is not defined or app_project in app_projects
fail_msg: "Requested app_project={{ app_project | default('') }} does not exist in app_projects."
- name: Build list of expected VMIDs and names from app_projects
ansible.builtin.set_fact:
expected_vmids: >-
{{
selected_projects
| map('extract', app_projects)
| map(attribute='envs')
| map('dict2items')
| map('map', attribute='value')
| list
| flatten
| map(attribute='vmid')
| select('defined')
| list
}}
expected_names: >-
{{
selected_projects
| map('extract', app_projects)
| map(attribute='envs')
| map('dict2items')
| map('map', attribute='value')
| list
| flatten
| map(attribute='name')
| list
}}
- name: Query Proxmox for guest info
community.proxmox.proxmox_vm_info:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node | default(omit) }}"
type: "{{ proxmox_info_type | default(proxmox_info_type_default) }}"
config: none
register: proxmox_info
- name: Filter guests to expected VMIDs/names (unless proxmox_info_all)
ansible.builtin.set_fact:
filtered_guests: >-
{{
(proxmox_info.proxmox_vms | default([]))
if (proxmox_info_all | default(proxmox_info_all_default) | bool)
else (
(proxmox_info.proxmox_vms | default([]))
| selectattr('name', 'in', expected_names)
| list
)
}}
- name: Display Proxmox guest summary
ansible.builtin.debug:
msg: |
Proxmox: {{ proxmox_host }} (node={{ proxmox_node | default('any') }}, type={{ proxmox_info_type | default(proxmox_info_type_default) }})
Showing: {{ 'ALL guests' if (proxmox_info_all | default(proxmox_info_all_default) | bool) else ('app_projects for ' ~ (selected_projects | join(', '))) }}
{% for g in (filtered_guests | sort(attribute='vmid')) %}
- vmid={{ g.vmid }} type={{ g.id.split('/')[0] if g.id is defined else 'unknown' }} name={{ g.name | default('') }} node={{ g.node | default('') }} status={{ g.status | default('') }}
{% endfor %}

playbooks/app/site.yml (new file)

@ -0,0 +1,13 @@
---
# Playbook: app/site.yml
# Purpose: End-to-end provisioning + configuration for app projects.
# Targets: localhost (provision) + dynamic inventory groups (configure)
# Tags: app
- name: Provision Proxmox guests
import_playbook: provision_vms.yml
tags: ['app', 'provision']
- name: Configure guests
import_playbook: configure_app.yml
tags: ['app', 'configure']


@ -0,0 +1,50 @@
---
# Playbook: app/ssh_client_config.yml
# Purpose: Ensure ~/.ssh/config has convenient host aliases for project envs.
# Targets: localhost
# Tags: app, ssh-config
#
# Example:
# ssh projectA-dev
# ssh projectA-qa
# ssh projectA-prod
- name: Configure SSH client aliases for app projects
hosts: localhost
connection: local
gather_facts: false
tags: ['app', 'ssh-config']
vars:
# NOTE: do not default manage_ssh_config here as "{{ manage_ssh_config | default(false) }}";
# a play var that references its own name is self-referential and raises a recursive loop
# error whenever no higher-precedence value (e.g. -e) is supplied. The default is applied
# in the task condition below instead.
ssh_config_path: "{{ lookup('ansible.builtin.env', 'HOME') + '/.ssh/config' }}"
selected_projects: >-
{{
(app_projects | dict2items | map(attribute='key') | list)
if (app_project is not defined or app_project | length == 0)
else [app_project]
}}
tasks:
- name: Skip if SSH config management disabled
ansible.builtin.meta: end_play
when: not (manage_ssh_config | default(false) | bool)
- name: Ensure ~/.ssh directory exists
ansible.builtin.file:
path: "{{ lookup('ansible.builtin.env', 'HOME') + '/.ssh' }}"
state: directory
mode: "0700"
- name: Add SSH config entries for each project/env
community.general.ssh_config:
user_ssh_config_file: "{{ ssh_config_path }}"
host: "{{ app_projects[item.0].envs[item.1].name | default(item.0 ~ '-' ~ item.1) }}"
hostname: "{{ (app_projects[item.0].envs[item.1].ip | string).split('/')[0] }}"
user: "{{ appuser_name | default('appuser') }}"
identity_file: "{{ ssh_identity_file | default(omit) }}"
state: present
loop: "{{ selected_projects | product(['dev', 'qa', 'prod']) | list }}"
when:
- app_projects[item.0] is defined
- app_projects[item.0].envs[item.1] is defined
- (app_projects[item.0].envs[item.1].ip | default('')) | length > 0
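# The generated ~/.ssh/config entries look roughly like this (hypothetical values):
#   Host projectA-dev
#     HostName 192.168.1.50
#     User appuser
#     IdentityFile ~/.ssh/id_ed25519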


@ -2,32 +2,19 @@
 - name: Configure development environment
   hosts: dev
   become: true
-  strategy: free
   roles:
-    - {role: timeshift, tags: ['timeshift', 'snapshot']}  # Create snapshot before changes
     - {role: maintenance, tags: ['maintenance']}
     - {role: base, tags: ['base', 'security']}
     - {role: user, tags: ['user']}
     - {role: ssh, tags: ['ssh', 'security']}
-    - {role: shell, tags: ['shell']}
+    - {role: shell, tags: ['shell'], shell_mode: full, shell_set_default_shell: true}
     - {role: development, tags: ['development', 'dev']}
     - {role: datascience, tags: ['datascience', 'conda', 'jupyter', 'r']}
     - {role: docker, tags: ['docker']}
-    - {role: applications, tags: ['applications', 'apps']}
     # - {role: tailscale, tags: ['tailscale', 'vpn']}
-    - {role: monitoring, tags: ['monitoring']}
+    - {role: monitoring_desktop, tags: ['monitoring']}
-  pre_tasks:
-    - name: Update apt cache
-      ansible.builtin.apt:
-        update_cache: true
-      ignore_errors: true
-      register: apt_update_result
-    - name: Display apt update status
-      ansible.builtin.debug:
-        msg: "Apt cache update: {{ 'Success' if apt_update_result is succeeded else 'Failed - continuing anyway' }}"
-      when: ansible_debug_output | default(false) | bool
   tasks:
     # Additional tasks can be added here if needed


@ -12,14 +12,8 @@
     - {role: shell, tags: ['shell']}
     - {role: development, tags: ['development', 'dev']}
     - {role: docker, tags: ['docker']}
-    - {role: applications, tags: ['applications', 'apps']}
     # - {role: tailscale, tags: ['tailscale', 'vpn']}
-    - {role: monitoring, tags: ['monitoring']}
+    - {role: monitoring_desktop, tags: ['monitoring']}
-  pre_tasks:
-    - name: Update apt cache
-      ansible.builtin.apt:
-        update_cache: true
   tasks:
     - name: Display completion message


@ -22,12 +22,6 @@
           Group: {{ group_names | join(', ') }}
           Skip reboot: {{ skip_reboot | default(false) | bool }}
-    - name: Update apt cache
-      ansible.builtin.apt:
-        update_cache: true
-        cache_valid_time: 3600
-      when: maintenance_update_cache | bool
   roles:
     - {role: maintenance, tags: ['maintenance']}

playbooks/servers.yml (new file)

@ -0,0 +1,27 @@
---
# Playbook: servers.yml
# Purpose: Baseline configuration for servers (no desktop apps, no IDE install)
# Targets: services + qa + ansible + tailscale (override with -e target_group=...)
# Tags: maintenance, base, security, user, ssh, shell, docker, monitoring
# Usage:
# ansible-playbook -i inventories/production playbooks/servers.yml
# ansible-playbook -i inventories/production playbooks/servers.yml -e target_group=services
# ansible-playbook -i inventories/production playbooks/servers.yml --limit jellyfin
- name: Configure servers baseline
hosts: "{{ target_group | default('services:qa:ansible:tailscale') }}"
become: true
roles:
- {role: maintenance, tags: ['maintenance']}
- {role: base, tags: ['base', 'security']}
- {role: user, tags: ['user']}
- {role: ssh, tags: ['ssh', 'security']}
- {role: shell, tags: ['shell']}
- {role: docker, tags: ['docker']}
- {role: monitoring_server, tags: ['monitoring']}
tasks:
- name: Display completion message
ansible.builtin.debug:
msg: "Server baseline configuration completed successfully!"


@ -1,6 +1,6 @@
 ---
 # Playbook: shell.yml
-# Purpose: Configure shell environment (zsh, oh-my-zsh, plugins)
+# Purpose: Configure shell environment (minimal zsh + managed aliases)
 # Targets: all hosts
 # Tags: shell
 # Usage: make shell-all
@ -8,25 +8,12 @@
 - name: Configure shell environment
   hosts: all
   become: true
-  strategy: free
   ignore_errors: true
   ignore_unreachable: true
   roles:
     - {role: shell, tags: ['shell']}
-  pre_tasks:
-    - name: Update apt cache
-      ansible.builtin.apt:
-        update_cache: true
-      ignore_errors: true
-      register: apt_update_result
-    - name: Display apt update status
-      ansible.builtin.debug:
-        msg: "Apt cache update: {{ 'Success' if apt_update_result is succeeded else 'Failed - continuing anyway' }}"
-      when: ansible_debug_output | default(false) | bool
   tasks:
     - name: Display completion message
       ansible.builtin.debug:


@ -13,3 +13,7 @@
 - name: Tailscale VPN deployment
   import_playbook: tailscale.yml
   tags: ['tailscale']
+- name: App projects on Proxmox (LXC-first)
+  import_playbook: app/site.yml
+  tags: ['app']


@ -9,12 +9,6 @@
     # Override here if needed or pass via: --extra-vars "tailscale_auth_key=your_key"
     tailscale_auth_key: "{{ vault_tailscale_auth_key | default('') }}"
-  pre_tasks:
-    - name: Update package cache (Debian/Ubuntu)
-      ansible.builtin.apt:
-        update_cache: true
-      when: ansible_os_family == "Debian"
   roles:
     - {role: tailscale, tags: ['tailscale', 'vpn']}

playbooks/timeshift.yml (new file)

@ -0,0 +1,28 @@
---
- name: Timeshift operations
hosts: all
become: true
gather_facts: false
tasks:
- name: List Timeshift snapshots
ansible.builtin.command: timeshift --list
register: timeshift_list_result
when: timeshift_action == "list"
changed_when: false
- name: Display snapshots
ansible.builtin.debug:
msg: "{{ timeshift_list_result.stdout_lines }}"
when: timeshift_action == "list"
- name: Restore from snapshot
ansible.builtin.command: timeshift --restore --snapshot "{{ timeshift_snapshot }}" --scripted # noqa command-instead-of-module
when: timeshift_action == "restore"
register: timeshift_restore_result
changed_when: true # a restore always mutates system state
- name: Display restore result
ansible.builtin.debug:
msg: "{{ timeshift_restore_result.stdout_lines }}"
when: timeshift_action == "restore"


@ -0,0 +1,42 @@
---
# Playbook: workstations.yml
# Purpose: Workstation baseline (dev boxes + desktops). Desktop apps are applied only to the `desktop` group.
# Targets: dev + desktop + local (override with -e target_group=...)
# Tags: maintenance, base, security, user, ssh, shell, development, dev, datascience, docker, monitoring, apps
#
# Usage:
# ansible-playbook -i inventories/production playbooks/workstations.yml
# ansible-playbook -i inventories/production playbooks/workstations.yml -e target_group=dev
# ansible-playbook -i inventories/production playbooks/workstations.yml --tags apps
- name: Configure workstation baseline
hosts: "{{ target_group | default('dev:desktop:local') }}"
become: true
roles:
- {role: maintenance, tags: ['maintenance']}
- {role: base, tags: ['base', 'security']}
- {role: user, tags: ['user']}
- {role: ssh, tags: ['ssh', 'security']}
- {role: shell, tags: ['shell'], shell_mode: full, shell_set_default_shell: true}
- {role: development, tags: ['development', 'dev']}
- {role: datascience, tags: ['datascience', 'conda', 'jupyter', 'r']}
- {role: docker, tags: ['docker']}
- {role: monitoring_desktop, tags: ['monitoring']}
tasks:
- name: Display completion message
ansible.builtin.debug:
msg: "Workstation baseline configuration completed successfully!"
- name: Install desktop applications (desktop group only)
hosts: desktop
become: true
roles:
- {role: applications, tags: ['applications', 'apps']}
tasks:
- name: Display completion message
ansible.builtin.debug:
msg: "Desktop applications installed successfully!"


@ -0,0 +1,63 @@
## Architecture
### High-level map (modules and relationships)
- **Inventory**: `inventories/production/`
- `hosts`: groups like `dev`, `desktop`, `services`, `qa`, `ansible`, `tailscale`, `local`
- `group_vars/all/main.yml`: shared configuration (including `app_projects`)
- `group_vars/all/vault.yml`: encrypted secrets (Ansible Vault)
- `host_vars/*`: per-host overrides (some encrypted)
- **Playbooks**: `playbooks/`
- `playbooks/site.yml`: dispatcher (imports other playbooks)
- `playbooks/servers.yml`: baseline for servers (`services:qa:ansible:tailscale`)
- `playbooks/workstations.yml`: baseline for `dev:desktop:local` + desktop apps for `desktop` group only
- `playbooks/development.yml`: dev machines baseline (no desktop apps)
- `playbooks/local.yml`: localhost baseline (no desktop apps)
- `playbooks/app/*`: Proxmox app-project provisioning/configuration suite
- **Roles**: `roles/*`
- Baseline/security: `base`, `user`, `ssh`
- Dev tooling: `development`, `datascience`, `docker`
- Shell: `shell` (minimal aliases-only)
- Monitoring split:
- `monitoring_server` (fail2ban + sysstat)
- `monitoring_desktop` (desktop-oriented monitoring tooling)
- Proxmox guests: `proxmox_vm`
- App guest configuration: `base_os`, `app_setup`, `pote`
### Proxmox “app projects” flow (data model + execution)
- **Data model**: `app_projects` in `inventories/production/group_vars/all/main.yml`
- Defines projects and per-env (`dev/qa/prod`) guest parameters (ip, branch, vmid, etc.)
- **Provision**: `playbooks/app/provision_vms.yml`
- Loops `app_projects` → envs → calls `role: proxmox_vm` to create LXC guests
- Adds dynamic inventory groups:
- `app_all`
- `app_<project>_all`
- `app_<project>_<env>`
- **Configure**: `playbooks/app/configure_app.yml`
- Builds a dynamic inventory from `app_projects` (so it can run standalone)
- Applies:
- `role: base_os` (baseline OS for app guests)
- `role: app_setup` (deploy + systemd) or `role: pote` for the POTE project
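
A minimal `app_projects` entry, inferred from the keys the playbooks read (all values below are hypothetical):

```yaml
app_projects:
  projectA:
    repo_url: "git@git.example.com:org/projectA.git"
    os_user: appuser
    envs:
      dev:
        ip: "192.168.1.50/24"
        gateway: "192.168.1.1"
        branch: develop
        vmid: 2101   # optional; when omitted, Proxmox auto-allocates
```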
### Boundaries
- **Inventory/vars** define desired state and credentials.
- **Playbooks** define “what path to run” (role ordering, target groups, tags).
- **Roles** implement actual host configuration (idempotent tasks, handlers).
### External dependencies
- **Ansible collections**: `collections/requirements.yml`
- **Ansible Vault**: `inventories/production/group_vars/all/vault.yml`
- **Proxmox API**: used by `community.proxmox.*` modules in provisioning
### References
- Playbook execution graphs and tags: `docs/reference/playbooks-and-tags.md`
- Legacy pointer (do not update): `docs/reference/architecture.md` points to `project-docs/architecture.md`

project-docs/decisions.md (new file)

@ -0,0 +1,35 @@
## Decisions (ADR-style)
### 2025-12-31 — Do not manage IDE/editor installs in Ansible
- **Context**: IDEs/editors are interactive, fast-moving, and often user-preference-driven.
- **Decision**: Keep editor installation (Cursor, VS Code, etc.) out of Ansible roles/playbooks.
- **Consequences**:
- Faster, more stable provisioning runs
- Less drift caused by UI tooling changes
- Editor setup is handled separately (manual or via dedicated tooling)
### 2025-12-31 — Split monitoring into server vs workstation roles
- **Context**: Servers and workstations have different needs (e.g., fail2ban/sysstat are server-centric; wireshark-common is workstation-centric).
- **Decision**: Create `monitoring_server` and `monitoring_desktop` roles and wire them into `servers.yml` / workstation playbooks.
- **Consequences**:
- Smaller install footprint on servers
- Clearer intent and faster runs
### 2025-12-31 — Desktop applications are installed only on the `desktop` group
- **Context**: Desktop apps should not be installed on headless servers or dev VMs by default.
- **Decision**: Run `role: applications` only in a `desktop`-scoped play (workstations playbook).
- **Consequences**:
- Reduced unnecessary package installs
- Less attack surface and fewer updates on non-desktop hosts
### 2025-12-31 — Minimal shell role (aliases-only)
- **Context**: Oh-my-zsh/theme/plugin cloning is slow and overwriting `.zshrc` is risky.
- **Decision**: `role: shell` now manages a small alias file and ensures it is sourced; it does not overwrite `.zshrc`.
- **Consequences**:
- Much faster shell configuration
- Safer for servers and multi-user systems

project-docs/index.md (new file)

@ -0,0 +1,33 @@
## Project docs index
Last updated: **2025-12-31**
### Documents
- **`project-docs/overview.md`** (updated 2025-12-31)
High-level goals, scope, and primary users for this Ansible infrastructure repo.
- **`project-docs/architecture.md`** (updated 2025-12-31)
Architecture map: inventories, playbooks, roles, and the Proxmox app-project flow.
- **`project-docs/standards.md`** (updated 2025-12-31)
Conventions for Ansible YAML, role structure, naming, vault usage, and linting.
- **`project-docs/workflow.md`** (updated 2025-12-31)
How to run common tasks via `Makefile`, how to lint/test, and how to apply safely.
- **`project-docs/decisions.md`** (updated 2025-12-31)
Short ADR-style notes for important architectural decisions.
### Related docs (existing)
- **Playbooks/tags map**: `docs/reference/playbooks-and-tags.md`
- **Applications inventory**: `docs/reference/applications.md`
- **Makefile reference**: `docs/reference/makefile.md`
- **Proxmox app project guides**:
- `docs/guides/app_stack_proxmox.md`
- `docs/guides/app_stack_execution_flow.md`
Legacy pointers:
- `docs/reference/architecture.md` points to `project-docs/architecture.md`

project-docs/overview.md (new file)

@ -0,0 +1,27 @@
## Overview
This repository manages infrastructure automation using **Ansible** for:
- Development machines (`dev`)
- Desktop machines (`desktop`)
- Service hosts (`services`, `qa`, `ansible`, `tailscale`)
- Proxmox-managed guests for “app projects” (LXC-first, with a KVM path)
Primary entrypoint is the **Makefile** (`Makefile`) and playbooks under `playbooks/`.
### Goals
- **Predictable, repeatable provisioning** of hosts and Proxmox guests
- **Safe defaults**: avoid destructive automation; prefer guardrails and idempotency
- **Clear separation** between server vs workstation responsibilities
- **Secrets handled via Ansible Vault** (never commit plaintext credentials)
### Non-goals
- Automated decommission/destroy playbooks for infrastructure or guests
- Managing interactive IDE/editor installs (kept out of Ansible by design)
### Target users
- You (and collaborators) operating a small homelab / Proxmox environment
- Contributors extending roles/playbooks in a consistent style

project-docs/standards.md Normal file

@ -0,0 +1,49 @@
## Standards
### Ansible + YAML conventions
- **Indentation**: 2 spaces (no tabs)
- **Task naming**: every task should include a clear `name:`
- **Play-level privilege**: prefer `become: true` at play level when most tasks need sudo
- **Modules**:
- Prefer native modules over `shell`/`command`
- Use **fully qualified collection names** (FQCN), e.g. `ansible.builtin.apt`, `community.general.ufw`
- **Handlers**: use handlers for restarts/reloads
- **Idempotency**:
- If `shell`/`command` is unavoidable, set `changed_when:` / `creates:` / `removes:` appropriately
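
The conventions above can be sketched in a pair of tasks. This is an illustrative example (the script path and service name are placeholders), not a task from this repo:

```yaml
# Sketch: keeping unavoidable command tasks idempotent.
- name: Initialize data directory once
  ansible.builtin.command: /usr/local/bin/init-data.sh   # placeholder script
  args:
    creates: /var/lib/myapp/.initialized   # skip if marker file already exists

- name: Query current state without reporting change
  ansible.builtin.command: systemctl is-enabled tlp
  register: tlp_enabled
  changed_when: false    # read-only probe; never report "changed"
  failed_when: false     # absence of the unit is not a play failure
```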
### Role structure
Roles should follow:
```
roles/<role_name>/
├── defaults/main.yml
├── handlers/main.yml
├── tasks/main.yml
├── templates/
├── files/
└── README.md
```
### Variable naming
- **snake_case** everywhere
- Vault-backed variables are prefixed with **`vault_`**
### Secrets / Vault
- Never commit plaintext secrets.
- Use Ansible Vault for credentials:
- `inventories/production/group_vars/all/vault.yml` (encrypted)
- Local vault password file is expected at `~/.ansible-vault-pass`.
### Makefile-first workflow
- Prefer `make ...` targets over direct `ansible-playbook` commands for consistency.
### Linting
- `ansible-lint` is the primary linter.
- `.ansible-lint` excludes vault-containing inventory paths to keep linting deterministic without vault secrets.
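
A sketch of the corresponding `.ansible-lint` configuration, assuming the vault path mentioned above (the exact exclude list in this repo may differ):

```yaml
# Hypothetical .ansible-lint sketch; adjust paths to match the repo.
exclude_paths:
  - inventories/production/group_vars/all/   # contains encrypted vault.yml
skip_list: []
```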

project-docs/workflow.md Normal file

@ -0,0 +1,86 @@
## Workflow
### Setup
- Install dependencies (Python requirements, Node deps for docs, Ansible collections):
```bash
make bootstrap
```
- Edit vault secrets:
```bash
make edit-group-vault
```
### Validate (safe, local)
- Syntax checks:
```bash
make test-syntax
```
- Lint:
```bash
make lint
```
### Common apply flows
- **Servers baseline** (services + qa + ansible + tailscale):
```bash
make servers
make servers GROUP=services
make servers HOST=jellyfin
```
- **Workstations baseline** (dev + desktop + local; desktop apps only on `desktop` group):
```bash
make workstations
make workstations GROUP=dev
make apps
```
### Proxmox app projects
End-to-end:
```bash
make app PROJECT=projectA
```
Provision only / configure only:
```bash
make app-provision PROJECT=projectA
make app-configure PROJECT=projectA
```
Inspect Proxmox guests:
```bash
make proxmox-info PROJECT=projectA
make proxmox-info ALL=true
make proxmox-info TYPE=lxc
```
### Safety checks
- Prefer `--check --diff` first:
```bash
make check
```
### Debugging
```bash
make debug
make verbose
```

provision_vms.yml Normal file

@ -0,0 +1,7 @@
---
# Wrapper playbook
# Purpose:
# ansible-playbook -i inventories/production provision_vms.yml -e app_project=projectA
- name: Provision app project guests
  import_playbook: playbooks/app/provision_vms.yml

roles/app_setup/README.md Normal file

@ -0,0 +1,24 @@
# `app_setup`
Creates the standard app filesystem layout and runtime services:
- `/srv/app/backend` and `/srv/app/frontend`
- `/srv/app/.env.<dev|qa|prod>`
- `/usr/local/bin/deploy_app.sh` (git pull, install deps, build, migrate, restart services)
- systemd units:
- `app-backend.service`
- `app-frontend.service`
All behavior is driven by variables so you can reuse this role for multiple projects.
## Variables
See [`defaults/main.yml`](defaults/main.yml). Common inputs in the app stack:
- `app_project`, `app_env` (used for naming and `.env.<env>` selection)
- `app_repo_url`, `app_repo_dest`, `app_repo_branch`
- `app_env_vars` (map written into `/srv/app/.env.<env>`)
- `components.backend`, `components.frontend` (enable/disable backend/frontend setup)
- `app_backend_dir`, `app_frontend_dir`, ports and Node.js commands
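
An illustrative play wiring these variables together (the host group, repo URL, and vault variable names below are placeholders; the real wiring lives in the app playbooks):

```yaml
# Sketch only: how app_setup variables might be supplied per project.
- hosts: app_guests               # hypothetical group name
  become: true
  roles:
    - role: app_setup
      vars:
        app_project: projectA
        app_env: qa
        app_repo_url: "git@example.com:org/projectA.git"   # placeholder URL
        app_env_vars:
          DATABASE_URL: "{{ vault_projecta_db_url }}"      # assumed vault var
```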


@ -0,0 +1,39 @@
---
# Role: app_setup
# Purpose: app filesystem layout, env files, deploy script, and systemd units.
app_root: "/srv/app"
app_backend_dir: "{{ app_root }}/backend"
app_frontend_dir: "{{ app_root }}/frontend"
# Which environment file to render for this host: dev|qa|prod
app_env: dev
# Components (useful for single-repo projects)
app_enable_backend: true
app_enable_frontend: true
# Repo settings (project-driven)
app_repo_url: ""
app_repo_dest: "{{ app_root }}"
app_repo_branch: "main"
# Owner for app files
app_owner: "{{ appuser_name | default('appuser') }}"
app_group: "{{ appuser_name | default('appuser') }}"
# Ports
app_backend_port: 3001
app_frontend_port: 3000
# Commands (Node defaults; override per project as needed)
app_backend_install_cmd: "npm ci"
app_backend_migrate_cmd: "npm run migrate"
app_backend_start_cmd: "npm start"
app_frontend_install_cmd: "npm ci"
app_frontend_build_cmd: "npm run build"
app_frontend_start_cmd: "npm start"
# Arbitrary environment variables for the env file
app_env_vars: {}


@ -0,0 +1,6 @@
---
# Role: app_setup handlers
- name: Reload systemd
  ansible.builtin.systemd:
    daemon_reload: true


@ -0,0 +1,82 @@
---
# Role: app_setup
# Purpose: create app layout, env file, deploy script, and systemd units.
- name: Ensure app root directory exists
  ansible.builtin.file:
    path: "{{ app_root }}"
    state: directory
    owner: "{{ app_owner }}"
    group: "{{ app_group }}"
    mode: "0755"

- name: Ensure backend directory exists
  ansible.builtin.file:
    path: "{{ app_backend_dir }}"
    state: directory
    owner: "{{ app_owner }}"
    group: "{{ app_group }}"
    mode: "0755"
  when: app_enable_backend | bool

- name: Ensure frontend directory exists
  ansible.builtin.file:
    path: "{{ app_frontend_dir }}"
    state: directory
    owner: "{{ app_owner }}"
    group: "{{ app_group }}"
    mode: "0755"
  when: app_enable_frontend | bool

- name: Deploy environment file for this env
  ansible.builtin.template:
    src: env.j2
    dest: "{{ app_root }}/.env.{{ app_env }}"
    owner: "{{ app_owner }}"
    group: "{{ app_group }}"
    mode: "0640"

- name: Deploy deploy script
  ansible.builtin.template:
    src: deploy_app.sh.j2
    dest: /usr/local/bin/deploy_app.sh
    owner: root
    group: root
    mode: "0755"

- name: Deploy systemd unit for backend
  ansible.builtin.template:
    src: app-backend.service.j2
    dest: /etc/systemd/system/app-backend.service
    owner: root
    group: root
    mode: "0644"
  notify: Reload systemd
  when: app_enable_backend | bool

- name: Deploy systemd unit for frontend
  ansible.builtin.template:
    src: app-frontend.service.j2
    dest: /etc/systemd/system/app-frontend.service
    owner: root
    group: root
    mode: "0644"
  notify: Reload systemd
  when: app_enable_frontend | bool

- name: Ensure systemd is reloaded before enabling services
  ansible.builtin.meta: flush_handlers

- name: Enable and start backend service
  ansible.builtin.systemd:
    name: app-backend.service
    enabled: true
    state: started
  when: app_enable_backend | bool

- name: Enable and start frontend service
  ansible.builtin.systemd:
    name: app-frontend.service
    enabled: true
    state: started
  when: app_enable_frontend | bool


@ -0,0 +1,19 @@
[Unit]
Description=App Backend ({{ app_env }})
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User={{ app_owner }}
Group={{ app_group }}
WorkingDirectory={{ app_backend_dir }}
EnvironmentFile={{ app_root }}/.env.{{ app_env }}
ExecStart=/usr/bin/env bash -lc '{{ app_backend_start_cmd }}'
Restart=on-failure
RestartSec=3
[Install]
WantedBy=multi-user.target


@ -0,0 +1,19 @@
[Unit]
Description=App Frontend ({{ app_env }})
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User={{ app_owner }}
Group={{ app_group }}
WorkingDirectory={{ app_frontend_dir }}
EnvironmentFile={{ app_root }}/.env.{{ app_env }}
ExecStart=/usr/bin/env bash -lc '{{ app_frontend_start_cmd }}'
Restart=on-failure
RestartSec=3
[Install]
WantedBy=multi-user.target


@ -0,0 +1,57 @@
#!/usr/bin/env bash
# Ansible-managed deploy script
set -euo pipefail
REPO_URL="{{ app_repo_url }}"
BRANCH="{{ app_repo_branch }}"
APP_ROOT="{{ app_repo_dest }}"
BACKEND_DIR="{{ app_backend_dir }}"
FRONTEND_DIR="{{ app_frontend_dir }}"
ENV_FILE="{{ app_root }}/.env.{{ app_env }}"
echo "[deploy] repo=${REPO_URL} branch=${BRANCH} root=${APP_ROOT}"
if [[ ! -d "${APP_ROOT}/.git" ]]; then
  echo "[deploy] cloning repo"
  install -d -m 0755 "${APP_ROOT}"
  git clone --branch "${BRANCH}" --single-branch "${REPO_URL}" "${APP_ROOT}"
fi

echo "[deploy] syncing branch"
git -C "${APP_ROOT}" fetch origin --prune
if ! git -C "${APP_ROOT}" rev-parse --verify --quiet "refs/remotes/origin/${BRANCH}" >/dev/null; then
  echo "[deploy] ERROR: branch '${BRANCH}' not found on origin"
  exit 2
fi
git -C "${APP_ROOT}" checkout -B "${BRANCH}" "origin/${BRANCH}"
git -C "${APP_ROOT}" pull --ff-only origin "${BRANCH}"

if [[ "{{ app_enable_backend | bool }}" == "True" ]]; then
  echo "[deploy] backend install"
  cd "${BACKEND_DIR}"
  {{ app_backend_install_cmd }}
  echo "[deploy] backend migrations"
  {{ app_backend_migrate_cmd }}
fi

if [[ "{{ app_enable_frontend | bool }}" == "True" ]]; then
  echo "[deploy] frontend install"
  cd "${FRONTEND_DIR}"
  {{ app_frontend_install_cmd }}
  echo "[deploy] frontend build"
  {{ app_frontend_build_cmd }}
fi
echo "[deploy] restarting services"
{% if app_enable_backend | bool %}
systemctl restart app-backend.service
{% endif %}
{% if app_enable_frontend | bool %}
systemctl restart app-frontend.service
{% endif %}
echo "[deploy] done"


@ -0,0 +1,13 @@
# Ansible-managed environment file for {{ app_env }}
# Loaded by systemd units and deploy script.
# Common
APP_ENV={{ app_env }}
BACKEND_PORT={{ app_backend_port }}
FRONTEND_PORT={{ app_frontend_port }}
{% for k, v in (app_env_vars | default({})).items() %}
{{ k }}={{ v }}
{% endfor %}


@ -1,7 +1,7 @@
# Role: applications
## Description
Installs a small set of desktop GUI applications (desktop group only via `playbooks/workstations.yml`).
## Requirements
- Ansible 2.9+
@ -9,8 +9,7 @@ Installs desktop applications for development and productivity including browser
- Internet access for package downloads
## Installed Applications
- **CopyQ**: Clipboard manager (history, search, scripting)
- **Evince**: PDF document viewer
- **Redshift**: Blue light filter for eye comfort
@ -18,10 +17,7 @@ Installs desktop applications for development and productivity including browser
| Variable | Default | Description |
|----------|---------|-------------|
| `applications_desktop_packages` | `['copyq','evince','redshift']` | Desktop packages to install |
## Dependencies
- `base` role (for package management)
@ -31,16 +27,13 @@ Installs desktop applications for development and productivity including browser
```yaml
- hosts: desktop
  roles:
    - role: applications
```
## Tags
- `applications`: All application installations
- `apps`: Alias for applications
## Notes
- Requires desktop environment for GUI applications
- Applications are installed system-wide


@ -1 +1,7 @@
---
# Desktop GUI applications to install (desktop group only via playbooks/workstations.yml)
applications_desktop_packages:
  - copyq
  - evince
  - redshift


@ -3,109 +3,25 @@
    ansible.builtin.package_facts:
      manager: apt

- name: Set installation conditions
  ansible.builtin.set_fact:
    applications_desktop_apps_needed: >-
      {{
        (applications_desktop_packages | default([]))
        | difference(ansible_facts.packages.keys())
        | length > 0
      }}

- name: Install desktop applications
  ansible.builtin.apt:
    name: "{{ applications_desktop_packages }}"
    state: present
  when: applications_desktop_apps_needed

- name: Display application status
  ansible.builtin.debug:
    msg:
      - "Desktop apps needed: {{ applications_desktop_apps_needed }}"
      - "Redshift: {{ 'Installed' if 'redshift' in ansible_facts.packages else 'Missing' }}"
      - "Evince: {{ 'Installed' if 'evince' in ansible_facts.packages else 'Missing' }}"
  when: ansible_debug_output | default(false) | bool


@ -1,2 +1,8 @@
---
# defaults file for base
# Fail2ban email configuration
# Set these in group_vars/all/main.yml or host_vars to enable email notifications
fail2ban_destemail: "" # Empty by default - no email notifications
fail2ban_sender: "" # Empty by default
fail2ban_action: "%(action_mwl)s" # Mail, whois, and log action


@ -1,4 +1,10 @@
---
- name: Update apt cache (shared baseline)
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: "{{ apt_cache_valid_time | default(3600) }}"
  when: ansible_os_family == "Debian"

- name: Ensure Ansible remote_tmp directory exists with correct permissions
  ansible.builtin.file:
    path: /root/.ansible/tmp
@ -17,6 +23,7 @@
      - unzip
      - xclip
      - tree
      - copyq
      # Network and admin tools
      - net-tools
      - ufw
@ -25,6 +32,9 @@
      - jq
      - ripgrep
      - fd-find
      # Power management (TLP for laptops)
      - tlp
      - tlp-rdw
    state: present

- name: Install yq YAML processor
@ -68,3 +78,17 @@
  community.general.locale_gen:
    name: "{{ locale | default('en_US.UTF-8') }}"
    state: present

- name: Gather package facts to check for TLP
  ansible.builtin.package_facts:
    manager: apt
  when: ansible_facts.packages is not defined

- name: Enable and start TLP service
  ansible.builtin.systemd:
    name: tlp
    enabled: true
    state: started
    daemon_reload: true
  become: true
  when: ansible_facts.packages is defined and 'tlp' in ansible_facts.packages


@ -6,10 +6,14 @@ findtime = 600
# Allow 3 failures before banning
maxretry = 3
# Email notifications (configured via fail2ban_destemail variable)
{% if fail2ban_destemail | default('') | length > 0 %}
destemail = {{ fail2ban_destemail }}
sender = {{ fail2ban_sender | default(fail2ban_destemail) }}
action = {{ fail2ban_action | default('%(action_mwl)s') }}
{% else %}
# Email notifications disabled (set fail2ban_destemail in group_vars/all/main.yml to enable)
{% endif %}
[sshd]
enabled = true

roles/base_os/README.md Normal file

@ -0,0 +1,21 @@
# `base_os`
Baseline OS configuration for app guests:
- Installs required packages (git/curl/nodejs/npm/ufw/openssh-server/etc.)
- Creates deployment user (default `appuser`) with passwordless sudo
- Adds your authorized SSH key
- Configures UFW to allow SSH + backend/frontend ports
## Variables
See [`defaults/main.yml`](defaults/main.yml). Common inputs in the app stack:
- `appuser_name`, `appuser_groups`, `appuser_shell`
- `appuser_ssh_public_key` (usually `{{ vault_ssh_public_key }}`)
- `components.backend`, `components.frontend` (enable/disable firewall rules per component)
- `app_backend_port`, `app_frontend_port`
This role is used by `playbooks/app/configure_app.yml` after provisioning.
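
An illustrative invocation sketch (the host group below is a placeholder; `playbooks/app/configure_app.yml` is the real entrypoint and supplies these values per project):

```yaml
# Sketch only: supplying base_os inputs directly.
- hosts: app_guests               # hypothetical group name
  become: true
  roles:
    - role: base_os
      vars:
        appuser_name: appuser
        appuser_ssh_public_key: "{{ vault_ssh_public_key }}"
        app_backend_port: 3001
        app_frontend_port: 3000
```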


@ -0,0 +1,31 @@
---
# Role: base_os
# Purpose: baseline OS configuration for app guests (packages, appuser, firewall).
base_os_packages:
  - git
  - curl
  - ca-certificates
  - openssh-server
  - sudo
  - ufw
  - python3
  - python3-apt
  - nodejs
  - npm
base_os_allow_ssh_port: 22
# App ports (override per project)
base_os_backend_port: "{{ app_backend_port | default(3001) }}"
base_os_frontend_port: "{{ app_frontend_port | default(3000) }}"
base_os_enable_backend: true
base_os_enable_frontend: true
base_os_user: "{{ appuser_name | default('appuser') }}"
base_os_user_shell: "{{ appuser_shell | default('/bin/bash') }}"
base_os_user_groups: "{{ appuser_groups | default(['sudo']) }}"
base_os_user_ssh_public_key: "{{ appuser_ssh_public_key | default('') }}"
# If true, create passwordless sudo for base_os_user.
base_os_passwordless_sudo: true


@ -0,0 +1,6 @@
---
# Role: base_os handlers
- name: Reload ufw
  ansible.builtin.command: ufw reload
  changed_when: false


@ -0,0 +1,63 @@
---
# Role: base_os
# Purpose: baseline OS config for app guests.
- name: Ensure apt cache is up to date
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: 3600

- name: Install baseline packages
  ansible.builtin.apt:
    name: "{{ base_os_packages }}"
    state: present

- name: Ensure app user exists
  ansible.builtin.user:
    name: "{{ base_os_user }}"
    shell: "{{ base_os_user_shell }}"
    groups: "{{ base_os_user_groups }}"
    append: true
    create_home: true
    state: present

- name: Ensure app user has authorized SSH key
  ansible.posix.authorized_key:
    user: "{{ base_os_user }}"
    state: present
    key: "{{ base_os_user_ssh_public_key }}"
  when: base_os_user_ssh_public_key | length > 0

- name: Configure passwordless sudo for app user
  ansible.builtin.copy:
    dest: "/etc/sudoers.d/{{ base_os_user }}"
    content: "{{ base_os_user }} ALL=(ALL) NOPASSWD:ALL\n"
    owner: root
    group: root
    mode: "0440"
  when: base_os_passwordless_sudo | bool

- name: Ensure UFW allows SSH
  community.general.ufw:
    rule: allow
    port: "{{ base_os_allow_ssh_port }}"
    proto: tcp

- name: Ensure UFW allows backend port
  community.general.ufw:
    rule: allow
    port: "{{ base_os_backend_port }}"
    proto: tcp
  when: base_os_enable_backend | bool

- name: Ensure UFW allows frontend port
  community.general.ufw:
    rule: allow
    port: "{{ base_os_frontend_port }}"
    proto: tcp
  when: base_os_enable_frontend | bool

- name: Enable UFW (deny incoming by default)
  community.general.ufw:
    state: enabled
    policy: deny


@ -17,4 +17,3 @@ r_packages:
  - r-base
  - r-base-dev
  - r-recommended


@ -5,4 +5,3 @@
    state: restarted
    daemon_reload: true
  become: true


@ -1,4 +1,3 @@
---
dependencies:
  - role: base


@ -200,4 +200,3 @@
- name: Display R version
  ansible.builtin.debug:
    msg: "R version installed: {{ r_version.stdout_lines[0] if r_version.stdout_lines | length > 0 else 'Not checked in dry-run mode' }}"


@ -23,42 +23,12 @@ Installs core development tools and utilities for software development. This rol
- **npm**: Node package manager (included with Node.js)
  - Configured from official NodeSource repository
## Variables
### Core Settings
| Variable | Default | Description |
|----------|---------|-------------|
| `development_packages` | See defaults | Base packages installed by the role |
## Dependencies
- `base` role (for core utilities)
@ -72,61 +42,17 @@ When `install_cursor_extensions: true`, these are always installed:
    - role: development
```
### Customize packages
```yaml
- hosts: developers
  roles:
    - role: development
  vars:
    development_packages:
      - git
      - build-essential
      - python3
      - python3-pip
```
## Usage
@ -141,7 +67,6 @@ ansible-playbook playbooks/development.yml --limit dev01 --tags development
## Tags
- `development`, `dev`: All development tasks
## Post-Installation
@ -151,7 +76,6 @@ git --version
node --version
npm --version
python3 --version
```
### Node.js Usage
@ -163,28 +87,17 @@ npm install -g <package>
node --version # Should show v22.x
```
## Performance Notes
### Installation Time
- **Base packages**: 1-2 minutes
- **Node.js**: 1-2 minutes
- **Total**: ~3-5 minutes
### Disk Space
- **Node.js + npm**: ~100MB
- **Build tools**: ~50MB
- **Total**: ~150MB
## Integration
@ -217,17 +130,9 @@ apt-get remove nodejs
# Re-run playbook
```
## Notes
- Node.js 22 is the current LTS version
- NodeSource repository is configured for automatic updates
- Build tools (gcc, make) are essential for npm native modules
- Python 3 is included for development scripts
- All installations are idempotent (safe to re-run)
@ -239,11 +144,10 @@ alias cursor="cursor --no-sandbox --disable-gpu-sandbox..."
| Git | ✅ | - |
| Node.js | ✅ | - |
| Build Tools | ✅ | - |
| Anaconda | ❌ | ✅ |
| Jupyter | ❌ | ✅ |
| R Language | ❌ | ✅ |
| Install Time | ~10 min | ~30-60 min |
| Disk Space | ~150MB | ~3GB |
**Recommendation**: Use `development` role for general coding. Add `datascience` role only when needed for data analysis/ML work.


@ -1,87 +1,9 @@
--- ---
# Development role defaults # Development role defaults (IDEs intentionally not managed here).
# Node.js is installed by default from NodeSource # Base packages for a lightweight dev foundation.
# No additional configuration needed development_packages:
- git
# Cursor IDE - lightweight IDE installation - build-essential
install_cursor: true - python3
install_cursor_extensions: false - python3-pip
# Base Cursor extensions (always good to have)
cursor_extensions_base:
- usernamehw.errorlens # Better error highlighting
- eamodio.gitlens # Git supercharged
- mhutchie.git-graph # Git graph visualization
- streetsidesoftware.code-spell-checker # Spell checker
- EditorConfig.EditorConfig # EditorConfig support
- PKief.material-icon-theme # Better file icons
# Python/Data Science extensions
cursor_extensions_python:
- ms-python.python # Python language support
- ms-python.vscode-pylance # Python IntelliSense
- ms-python.black-formatter # Black formatter
- ms-python.isort # Import sorter
- ms-python.flake8 # Linter
- charliermarsh.ruff # Fast Python linter
# Jupyter/Data Science extensions
cursor_extensions_jupyter:
- ms-toolsai.jupyter # Jupyter notebooks
- ms-toolsai.jupyter-keymap # Jupyter keybindings
- ms-toolsai.jupyter-renderers # Jupyter renderers
# Web Development extensions
cursor_extensions_web:
- esbenp.prettier-vscode # Code formatter
- dbaeumer.vscode-eslint # ESLint
- bradlc.vscode-tailwindcss # Tailwind CSS
- vue.volar # Vue 3
- svelte.svelte-vscode # Svelte
# Testing extensions
cursor_extensions_testing:
- ms-playwright.playwright # Playwright testing
# Systems/DevOps extensions
cursor_extensions_devops:
- golang.go # Go language
- rust-lang.rust-analyzer # Rust language
- redhat.vscode-yaml # YAML support
- ms-azuretools.vscode-docker # Docker support
- redhat.ansible # Ansible support
# R language extensions
cursor_extensions_r:
- REditorSupport.r # R language support
- Ikuyadeu.r-pack # R package development
# Markdown/Documentation extensions
cursor_extensions_docs:
- yzhang.markdown-all-in-one # Markdown tools
- DavidAnson.vscode-markdownlint # Markdown linter
# Default combined list (customize per host in host_vars)
cursor_extensions: >-
{{
[
cursor_extensions_base,
(cursor_extensions_python if install_python | default(false) else []),
(cursor_extensions_jupyter if install_jupyter | default(false) else []),
(cursor_extensions_web if install_web | default(false) else []),
(cursor_extensions_testing if install_playwright | default(false) else []),
(cursor_extensions_devops if install_devops | default(false) else []),
(cursor_extensions_r if install_r | default(false) else []),
(cursor_extensions_docs if install_docs | default(false) else [])
] | flatten
}}
# Feature flags to enable extension groups
install_python: false
install_jupyter: false
install_web: false
install_playwright: false
install_devops: false
install_r: false
install_docs: false
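The `cursor_extensions` expression above gates each extension group on its feature flag and flattens the result into one list. A rough plain-Python equivalent of that selection logic (group contents abbreviated; this sketch is illustrative, not part of the role):

```python
def combined_extensions(groups, flags):
    """Mimic `[base, (python if flag else []), ...] | flatten`.

    groups: dict of group name -> list of extension ids (base always included).
    flags:  dict of group name -> bool feature flag.
    """
    result = list(groups.get("base", []))
    for name in ("python", "jupyter", "web", "testing", "devops", "r", "docs"):
        if flags.get(name, False):
            result.extend(groups.get(name, []))
    return result

groups = {
    "base": ["usernamehw.errorlens", "eamodio.gitlens"],
    "python": ["ms-python.python", "charliermarsh.ruff"],
    "docs": ["yzhang.markdown-all-in-one"],
}
# Only flagged groups are appended after the base group
print(combined_extensions(groups, {"python": True}))
# → ['usernamehw.errorlens', 'eamodio.gitlens', 'ms-python.python', 'charliermarsh.ruff']
```

Because the Jinja version is evaluated lazily, hosts can override individual flags in host_vars without redefining the combined list.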

View File

@@ -1,50 +1,51 @@
 ---
 - name: Install basic development packages
   ansible.builtin.apt:
-    name:
-      # Development tools
-      - git
-      # Build tools
-      - build-essential
-      - python3
-      - python3-pip
+    name: "{{ development_packages }}"
     state: present
   become: true
-- name: Check if NodeSource Node.js is installed
+- name: Check if Node.js is installed
   ansible.builtin.command: node --version
   register: node_version_check
   failed_when: false
   changed_when: false
-- name: Check if NodeSource repository exists and is correct
-  ansible.builtin.shell: |
-    if [ -f /etc/apt/sources.list.d/nodesource.list ]; then
-      if grep -q "deb \[signed-by=/etc/apt/keyrings/nodesource.gpg\] https://deb.nodesource.com/node_22.x nodistro main" /etc/apt/sources.list.d/nodesource.list; then
-        echo "correct_config"
-      else
-        echo "wrong_config"
-      fi
-    else
-      echo "not_exists"
-    fi
-  register: nodesource_repo_check
-  failed_when: false
+- name: Check NodeSource repository file presence
+  ansible.builtin.stat:
+    path: /etc/apt/sources.list.d/nodesource.list
+  register: nodesource_list_stat
   when: node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
-- name: Check if NodeSource GPG key exists and is correct
-  ansible.builtin.shell: |
-    if [ -f /etc/apt/keyrings/nodesource.gpg ]; then
-      if file /etc/apt/keyrings/nodesource.gpg | grep -q "PGP"; then
-        echo "correct_key"
-      else
-        echo "wrong_key"
-      fi
-    else
-      echo "not_exists"
-    fi
-  register: nodesource_key_check
-  failed_when: false
+- name: Read NodeSource repository file
+  ansible.builtin.slurp:
+    src: /etc/apt/sources.list.d/nodesource.list
+  register: nodesource_list_slurp
+  when:
+    - node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
+    - nodesource_list_stat.stat.exists | default(false)
+- name: Set NodeSource repository state
+  ansible.builtin.set_fact:
+    nodesource_repo_state: >-
+      {{
+        'not_exists'
+        if not (nodesource_list_stat.stat.exists | default(false))
+        else (
+          'correct_config'
+          if (
+            (nodesource_list_slurp.content | b64decode)
+            is search('^deb \\[signed-by=/etc/apt/keyrings/nodesource\\.gpg\\] https://deb\\.nodesource\\.com/node_22\\.x nodistro main', multiline=True)
+          )
+          else 'wrong_config'
+        )
+      }}
+  when: node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
+- name: Check NodeSource GPG key presence
+  ansible.builtin.stat:
+    path: /etc/apt/keyrings/nodesource.gpg
+  register: nodesource_key_stat
   when: node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
 - name: Remove incorrect NodeSource repository
@@ -54,16 +55,7 @@
   become: true
   when:
     - node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
-    - nodesource_repo_check.stdout == "wrong_config"
+    - nodesource_repo_state == "wrong_config"
-- name: Remove incorrect NodeSource key
-  ansible.builtin.file:
-    path: /etc/apt/keyrings/nodesource.gpg
-    state: absent
-  become: true
-  when:
-    - node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
-    - nodesource_key_check.stdout == "wrong_key"
 - name: Create keyrings directory
   ansible.builtin.file:
@@ -73,18 +65,26 @@
   become: true
   when:
     - node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
-    - nodesource_key_check.stdout in ["not_exists", "wrong_key"]
+    - not (nodesource_key_stat.stat.exists | default(false))
-- name: Add NodeSource GPG key only if needed
-  ansible.builtin.get_url:
-    url: https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key
-    dest: /etc/apt/keyrings/nodesource.gpg
-    mode: '0644'
-    force: true
+- name: Import NodeSource GPG key into apt keyring
+  ansible.builtin.shell: |
+    # Ensure keyrings directory exists
+    mkdir -p /etc/apt/keyrings
+    # Download and convert key to binary format for signed-by
+    curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
+    chmod 644 /etc/apt/keyrings/nodesource.gpg
+    # Verify the key file is valid
+    if ! file /etc/apt/keyrings/nodesource.gpg | grep -q "PGP"; then
+      echo "ERROR: Key file is not valid PGP format"
+      exit 1
+    fi
+  args:
+    creates: /etc/apt/keyrings/nodesource.gpg
   become: true
   when:
     - node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
-    - nodesource_key_check.stdout in ["not_exists", "wrong_key"]
+    - not (nodesource_key_stat.stat.exists | default(false))
 - name: Add NodeSource repository only if needed
   ansible.builtin.apt_repository:
@@ -94,14 +94,15 @@
   become: true
   when:
     - node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
-    - nodesource_repo_check.stdout in ["not_exists", "wrong_config"]
+    - nodesource_repo_state in ["not_exists", "wrong_config"]
 - name: Install Node.js 22 from NodeSource
   ansible.builtin.apt:
     name: nodejs
     state: present
   become: true
-  when: node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
+  when:
+    - (node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22'))
 - name: Verify Node.js installation
   ansible.builtin.command: node --version
@@ -111,92 +112,3 @@
 - name: Display Node.js version
   ansible.builtin.debug:
     msg: "Node.js version installed: {{ final_node_version.stdout if final_node_version.stdout is defined else 'Not checked in dry-run mode' }}"
-# Cursor IDE installation (using AppImage)
-# Downloads the latest version from cursor.com API
-- name: Install Cursor IDE block
-  tags: ['cursor', 'ide']
-  block:
-    - name: Install libfuse2 dependency for AppImage
-      ansible.builtin.apt:
-        name: libfuse2
-        state: present
-        update_cache: false
-      become: true
-      when: ansible_os_family == "Debian"
-    - name: Check if Cursor is already installed at /usr/local/bin
-      ansible.builtin.stat:
-        path: /usr/local/bin/cursor
-      register: cursor_bin_check
-    - name: Get Cursor download URL from API and download AppImage
-      ansible.builtin.shell: |
-        DOWNLOAD_URL=$(curl -sL "https://www.cursor.com/api/download?platform=linux-x64&releaseTrack=stable" | grep -o '"downloadUrl":"[^"]*' | cut -d'"' -f4)
-        wget --timeout=60 --tries=3 -O /tmp/cursor.AppImage "$DOWNLOAD_URL"
-      args:
-        creates: /tmp/cursor.AppImage
-      when: not cursor_bin_check.stat.exists
-      register: cursor_download
-      retries: 2
-      delay: 5
-      until: cursor_download.rc == 0
-    - name: Make Cursor AppImage executable
-      ansible.builtin.file:
-        path: /tmp/cursor.AppImage
-        mode: '0755'
-      when:
-        - not cursor_bin_check.stat.exists
-        - cursor_download is defined
-        - cursor_download.rc is defined
-        - cursor_download.rc == 0
-    - name: Install Cursor to /usr/local/bin
-      ansible.builtin.copy:
-        src: /tmp/cursor.AppImage
-        dest: /usr/local/bin/cursor
-        mode: '0755'
-        remote_src: true
-      when:
-        - not cursor_bin_check.stat.exists
-        - cursor_download is defined
-        - cursor_download.rc is defined
-        - cursor_download.rc == 0
-      become: true
-    - name: Clean up Cursor download
-      ansible.builtin.file:
-        path: /tmp/cursor.AppImage
-        state: absent
-      when:
-        - cursor_download is defined
-        - cursor_download.rc is defined
-        - cursor_download.rc == 0
-    - name: Display Cursor installation status
-      ansible.builtin.debug:
-        msg: "{{ 'Cursor already installed' if cursor_bin_check.stat.exists else ('Cursor installed successfully' if (cursor_download is defined and cursor_download.rc is defined and cursor_download.rc == 0) else 'Cursor installation failed - download manually from cursor.com') }}"
-# Cursor extensions installation
-- name: Install Cursor extensions block
-  when:
-    - install_cursor | default(true) | bool
-    - install_cursor_extensions | default(false) | bool
-    - cursor_extensions is defined
-    - cursor_extensions | length > 0
-  tags: ['cursor', 'extensions']
-  block:
-    - name: Install Cursor extensions
-      ansible.builtin.shell: |
-        cursor --install-extension {{ item }} --force --user-data-dir={{ ansible_env.HOME }}/.cursor-root 2>/dev/null || true
-      loop: "{{ cursor_extensions }}"
-      register: cursor_ext_install
-      changed_when: "'successfully installed' in cursor_ext_install.stdout.lower()"
-      failed_when: false
-      become: true
-      become_user: "{{ ansible_user }}"
-    - name: Display Cursor extensions status
-      ansible.builtin.debug:
-        msg: "Installed {{ cursor_extensions | length }} Cursor extensions"
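The stat/slurp/set_fact trio in this file replaces the old shell `grep` with a declarative three-way classification of the NodeSource repo file. The logic it computes can be sketched in Python (the pattern is copied from the `is search(...)` test above; the file contents passed in are illustrative):

```python
import re

# Pattern copied from the "Set NodeSource repository state" task;
# Jinja's `is search(..., multiline=True)` maps to re.search with re.MULTILINE.
PATTERN = (r'^deb \[signed-by=/etc/apt/keyrings/nodesource\.gpg\] '
           r'https://deb\.nodesource\.com/node_22\.x nodistro main')

def repo_state(file_exists, content=""):
    """Classify the repo file the same way nodesource_repo_state does."""
    if not file_exists:
        return "not_exists"
    if re.search(PATTERN, content, re.MULTILINE):
        return "correct_config"
    return "wrong_config"

good = ("deb [signed-by=/etc/apt/keyrings/nodesource.gpg] "
        "https://deb.nodesource.com/node_22.x nodistro main\n")
print(repo_state(True, good))   # → correct_config
print(repo_state(False))        # → not_exists
```

The `multiline=True` flag matters: it anchors `^deb` to any line of the slurped file, so a correct entry is found even when comments precede it.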

View File

@@ -11,7 +11,6 @@
 - name: Check if Docker is already installed
   ansible.builtin.command: docker --version
   register: docker_check
-  ignore_errors: true
   changed_when: false
   failed_when: false
   no_log: true

View File

@@ -12,6 +12,7 @@
     fi
   register: docker_key_check
   failed_when: false
+  changed_when: false
 - name: Remove incorrect Docker GPG key
   ansible.builtin.file:
@@ -43,4 +44,3 @@
     path: /tmp/docker.gpg
     state: absent
   when: docker_key_check.stdout in ["not_exists", "wrong_key"]
-

View File

@@ -12,6 +12,7 @@
     fi
   register: docker_repo_check
   failed_when: false
+  changed_when: false
 - name: Remove incorrect Docker repository
   ansible.builtin.file:
@@ -26,4 +27,3 @@
     state: present
     update_cache: true
   when: docker_repo_check.stdout in ["not_exists", "wrong_config"]
-

View File

@@ -20,6 +20,7 @@
     fi
   register: docker_repo_check
   failed_when: false
+  changed_when: false
 - name: Remove incorrect Docker repository
   ansible.builtin.file:
@@ -32,6 +33,11 @@
   ansible.builtin.apt_repository:
     repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu {{ docker_ubuntu_codename }} stable"
     state: present
-    update_cache: true
+    update_cache: false
   when: docker_repo_check.stdout in ["not_exists", "wrong_config"]
+- name: Update apt cache after adding Docker repository
+  ansible.builtin.apt:
+    update_cache: true
+  become: true
+  when: docker_repo_check.stdout in ["not_exists", "wrong_config"]

View File

@@ -12,6 +12,7 @@
     fi
   register: docker_repo_check
   failed_when: false
+  changed_when: false
 - name: Remove incorrect Docker repository
   ansible.builtin.file:
@@ -26,4 +27,3 @@
     state: present
     update_cache: true
   when: docker_repo_check.stdout in ["not_exists", "wrong_config"]
-

View File

@@ -6,10 +6,14 @@ findtime = 600
 # Allow 3 failures before banning
 maxretry = 3
-# Email notifications (uncomment and configure if needed)
-destemail = idobkin@gmail.com
-sender = idobkin@gmail.com
-action = %(action_mwl)s
+# Email notifications (configured via fail2ban_destemail variable)
+{% if fail2ban_destemail | default('') | length > 0 %}
+destemail = {{ fail2ban_destemail }}
+sender = {{ fail2ban_sender | default(fail2ban_destemail) }}
+action = {{ fail2ban_action | default('%(action_mwl)s') }}
+{% else %}
+# Email notifications disabled (set fail2ban_destemail in group_vars/all/main.yml to enable)
+{% endif %}
 [sshd]
 enabled = true
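The template change makes the jail.local email block conditional on `fail2ban_destemail` instead of hardcoding an address. What the Jinja2 conditional renders can be approximated in plain Python (variable names from the template; the address used below is a made-up example, and the `or` fallbacks mirror the `default(...)` filters):

```python
def render_email_block(destemail="", sender="", action=""):
    """Approximate the {% if fail2ban_destemail ... %} branch of the template."""
    if len(destemail) > 0:
        return "\n".join([
            f"destemail = {destemail}",
            f"sender = {sender or destemail}",          # fail2ban_sender | default(fail2ban_destemail)
            f"action = {action or '%(action_mwl)s'}",   # fail2ban_action | default('%(action_mwl)s')
        ])
    # The else-branch emits only a comment, so fail2ban falls back to its defaults
    return ("# Email notifications disabled "
            "(set fail2ban_destemail in group_vars/all/main.yml to enable)")

print(render_email_block("admin@example.com"))
```

With no `destemail` set, the rendered jail.local contains no `destemail`/`sender`/`action` keys at all, which is what keeps the mwl (mail-with-logs) action from firing on hosts that never configured mail.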

View File

@@ -0,0 +1,5 @@
---
# Monitoring (desktop/workstation) role defaults
monitoring_desktop_install_btop: true
monitoring_desktop_install_wireshark_common: true
monitoring_desktop_create_scripts: true

Some files were not shown because too many files have changed in this diff.