Compare commits

...

7 Commits

Author SHA1 Message Date
3415340e26 Refactor playbooks: servers/workstations, split monitoring, improve shell
All checks were successful
CI / skip-ci-check (pull_request) Successful in 1m18s
CI / lint-and-test (pull_request) Successful in 1m21s
CI / ansible-validation (pull_request) Successful in 2m43s
CI / secret-scanning (pull_request) Successful in 1m19s
CI / dependency-scan (pull_request) Successful in 1m23s
CI / sast-scan (pull_request) Successful in 2m28s
CI / license-check (pull_request) Successful in 1m20s
CI / vault-check (pull_request) Successful in 2m21s
CI / playbook-test (pull_request) Successful in 2m19s
CI / container-scan (pull_request) Successful in 1m48s
CI / sonar-analysis (pull_request) Successful in 1m26s
CI / workflow-summary (pull_request) Successful in 1m17s
2025-12-31 23:13:03 -05:00
572af82852 Add comment to CI skip check job
All checks were successful
CI / skip-ci-check (pull_request) Successful in 1m12s
CI / lint-and-test (pull_request) Successful in 1m20s
CI / ansible-validation (pull_request) Successful in 5m49s
CI / secret-scanning (pull_request) Successful in 1m38s
CI / dependency-scan (pull_request) Successful in 2m53s
CI / sast-scan (pull_request) Successful in 5m42s
CI / license-check (pull_request) Successful in 1m16s
CI / vault-check (pull_request) Successful in 5m34s
CI / playbook-test (pull_request) Successful in 5m35s
CI / container-scan (pull_request) Successful in 4m58s
CI / sonar-analysis (pull_request) Successful in 1m21s
CI / workflow-summary (pull_request) Successful in 1m11s
2025-12-29 00:00:58 -05:00
1b9b801713 Refactor CI skip check to use a single pattern
All checks were successful
CI / skip-ci-check (pull_request) Successful in 1m13s
CI / lint-and-test (pull_request) Has been skipped
CI / ansible-validation (pull_request) Has been skipped
CI / secret-scanning (pull_request) Has been skipped
CI / dependency-scan (pull_request) Has been skipped
CI / sast-scan (pull_request) Has been skipped
CI / license-check (pull_request) Has been skipped
CI / vault-check (pull_request) Has been skipped
CI / playbook-test (pull_request) Has been skipped
CI / container-scan (pull_request) Has been skipped
CI / sonar-analysis (pull_request) Has been skipped
CI / workflow-summary (pull_request) Successful in 1m11s
- Simplify the CI workflow by consolidating the skip check for both branch names and commit messages to a single case-insensitive pattern: @skipci.
- Remove the previous multiple pattern checks to streamline the logic and improve readability.
- Ensure that the CI process can be effectively skipped based on the new pattern, enhancing overall efficiency.
2025-12-28 23:54:02 -05:00
32479d03f8 Add CI skip check for branch name and commit message
All checks were successful
CI / skip-ci-check (pull_request) Successful in 1m12s
CI / lint-and-test (pull_request) Has been skipped
CI / ansible-validation (pull_request) Has been skipped
CI / secret-scanning (pull_request) Has been skipped
CI / dependency-scan (pull_request) Has been skipped
CI / sast-scan (pull_request) Has been skipped
CI / license-check (pull_request) Has been skipped
CI / vault-check (pull_request) Has been skipped
CI / playbook-test (pull_request) Has been skipped
CI / container-scan (pull_request) Has been skipped
CI / sonar-analysis (pull_request) Has been skipped
CI / workflow-summary (pull_request) Successful in 1m11s
- Introduce a new job in the CI workflow to determine if CI should be skipped based on specific patterns in the branch name or commit message.
- Update existing jobs to depend on the skip check, ensuring that CI processes are only executed when necessary.
- Enhance the overall efficiency of the CI pipeline by preventing unnecessary runs for certain commits.
2025-12-28 23:05:46 -05:00
c84b0b8260 Remove Node.js installation step from CI workflow
All checks were successful
CI / lint-and-test (pull_request) Successful in 1m20s
CI / ansible-validation (pull_request) Successful in 5m54s
CI / secret-scanning (pull_request) Successful in 1m39s
CI / dependency-scan (pull_request) Successful in 2m53s
CI / sast-scan (pull_request) Successful in 5m46s
CI / license-check (pull_request) Successful in 1m15s
CI / vault-check (pull_request) Successful in 5m29s
CI / playbook-test (pull_request) Successful in 5m35s
CI / container-scan (pull_request) Successful in 4m49s
CI / sonar-analysis (pull_request) Successful in 1m22s
CI / workflow-summary (pull_request) Successful in 1m11s
- Eliminate the installation of Node.js for the checkout action in the CI workflow to streamline the process and reduce unnecessary dependencies.
2025-12-28 21:42:20 -05:00
9ea1090d02 Update CI workflow to exclude example vault files from validation and add host variables for dev02
Some checks failed
CI / lint-and-test (pull_request) Successful in 1m21s
CI / ansible-validation (pull_request) Successful in 8m50s
CI / secret-scanning (pull_request) Successful in 2m49s
CI / dependency-scan (pull_request) Successful in 6m8s
CI / sast-scan (pull_request) Successful in 6m31s
CI / license-check (pull_request) Successful in 1m16s
CI / vault-check (pull_request) Successful in 5m34s
CI / playbook-test (pull_request) Successful in 5m33s
CI / container-scan (pull_request) Failing after 2m51s
CI / sonar-analysis (pull_request) Failing after 1m10s
CI / workflow-summary (pull_request) Successful in 1m11s
- Modify CI workflow to filter out example vault files during encryption validation
- Add new host variables for dev02, including sudo configuration and shell user settings
- Disable installation of data science stack components for dev02
2025-12-28 21:31:02 -05:00
ilia
c7a300b922 Add POTE app project support and improve IP conflict detection
Some checks failed
CI / lint-and-test (pull_request) Successful in 1m21s
CI / ansible-validation (pull_request) Successful in 9m3s
CI / secret-scanning (pull_request) Successful in 3m19s
CI / dependency-scan (pull_request) Successful in 7m13s
CI / sast-scan (pull_request) Successful in 6m38s
CI / license-check (pull_request) Successful in 1m16s
CI / vault-check (pull_request) Failing after 6m40s
CI / playbook-test (pull_request) Successful in 9m28s
CI / container-scan (pull_request) Successful in 7m59s
CI / sonar-analysis (pull_request) Failing after 1m11s
CI / workflow-summary (pull_request) Successful in 1m11s
- Add roles/pote: Python/venv deployment role with PostgreSQL, cron jobs
- Add playbooks/app/: Proxmox app stack provisioning and configuration
- Add roles/app_setup: Generic app deployment role (Node.js/systemd)
- Add roles/base_os: Base OS hardening role
- Enhance roles/proxmox_vm: Split LXC/KVM tasks, improve error handling
- Add IP uniqueness validation: Preflight check for duplicate IPs within projects
- Add Proxmox-side IP conflict detection: Check existing LXC net0 configs
- Update inventories/production/group_vars/all/main.yml: Add pote project config
- Add vault.example.yml: Template for POTE secrets (git key, DB, SMTP)
- Update .gitignore: Exclude deploy keys, backup files, and other secrets
- Update documentation: README, role docs, execution flow guides

Security:
- All secrets stored in encrypted vault.yml (never committed in plaintext)
- Deploy keys excluded via .gitignore
- IP conflict guardrails prevent accidental duplicate IP assignments
2025-12-28 20:54:50 -05:00
109 changed files with 4092 additions and 3145 deletions


@@ -1,7 +1,14 @@
# Ansible Lint Configuration
---
# Exclude patterns
# ansible-lint configuration
#
# We exclude inventory host/group vars because many contain vault-encrypted content
# that cannot be parsed without vault secrets in CI/dev environments.
exclude_paths:
- inventories/production/host_vars/
- inventories/production/group_vars/all/vault.yml
- inventories/production/group_vars/all/vault.example.yml
# Exclude patterns
- .cache/
- .github/
- .ansible/


@@ -0,0 +1,33 @@
## Project rules (Ansible infrastructure repo)
### Canonical documentation
- Start here: `project-docs/index.md`
- Architecture: `project-docs/architecture.md`
- Standards: `project-docs/standards.md`
- Workflow: `project-docs/workflow.md`
- Decisions: `project-docs/decisions.md`
### Repo structure (high level)
- **Inventory**: `inventories/production/`
- **Playbooks**: `playbooks/`
- `playbooks/servers.yml`: server baseline
- `playbooks/workstations.yml`: workstation baseline + desktop apps on `desktop` group only
- `playbooks/app/*`: Proxmox app-project suite
- **Roles**: `roles/*` (standard Ansible role layout)
### Key standards to follow
- **YAML**: 2-space indentation; tasks must have `name:`
- **Modules**: prefer native modules; use FQCN (e.g., `ansible.builtin.*`, `community.general.*`)
- **Idempotency**: no “always-changed” shell tasks; use `changed_when:` / `creates:` / `removes:`
- **Secrets**: never commit plaintext; use Ansible Vault with `vault_`-prefixed vars
- **Makefile-first**: prefer `make ...` targets over raw `ansible-playbook`
### Architectural decisions (must not regress)
- Editor/IDE installation is **out of scope** for Ansible roles/playbooks.
- Monitoring is split: `monitoring_server` vs `monitoring_desktop`.
- Desktop applications run only for `desktop` group (via workstations playbook).


@@ -1,13 +1,67 @@
---
name: CI
on:
"on":
push:
branches: [master]
pull_request:
jobs:
# Check if CI should be skipped based on branch name or commit message
# Simple skip pattern: @skipci (case-insensitive)
skip-ci-check:
runs-on: ubuntu-latest
outputs:
should-skip: ${{ steps.check.outputs.skip }}
steps:
- name: Check out code (for commit message)
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Check if CI should be skipped
id: check
run: |
# Simple skip pattern: @skipci (case-insensitive)
# Works in branch names and commit messages
SKIP_PATTERN="@skipci"
# Get branch name (works for both push and PR)
BRANCH_NAME="${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}"
# Get commit message (works for both push and PR)
COMMIT_MSG="${GITHUB_EVENT_HEAD_COMMIT_MESSAGE:-}"
if [ -z "$COMMIT_MSG" ]; then
COMMIT_MSG="${GITHUB_EVENT_PULL_REQUEST_HEAD_COMMIT_MESSAGE:-}"
fi
if [ -z "$COMMIT_MSG" ]; then
COMMIT_MSG=$(git log -1 --pretty=%B 2>/dev/null || echo "")
fi
SKIP=0
# Check branch name (case-insensitive)
if echo "$BRANCH_NAME" | grep -qiF "$SKIP_PATTERN"; then
echo "Skipping CI: branch name contains '$SKIP_PATTERN'"
SKIP=1
fi
# Check commit message (case-insensitive)
if [ $SKIP -eq 0 ] && [ -n "$COMMIT_MSG" ]; then
if echo "$COMMIT_MSG" | grep -qiF "$SKIP_PATTERN"; then
echo "Skipping CI: commit message contains '$SKIP_PATTERN'"
SKIP=1
fi
fi
echo "skip=$SKIP" >> $GITHUB_OUTPUT
echo "Branch: $BRANCH_NAME"
echo "Commit: ${COMMIT_MSG:0:50}..."
echo "Skip CI: $SKIP"
lint-and-test:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: node:20-bullseye
@@ -26,6 +80,8 @@ jobs:
continue-on-error: true
ansible-validation:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: ubuntu:22.04
@@ -60,6 +116,8 @@ jobs:
continue-on-error: true
secret-scanning:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: zricethezav/gitleaks:latest
@@ -78,6 +136,8 @@ jobs:
continue-on-error: true
dependency-scan:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: aquasec/trivy:latest
@@ -93,6 +153,8 @@ jobs:
run: trivy fs --scanners vuln,secret --exit-code 0 .
sast-scan:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: ubuntu:22.04
@@ -116,6 +178,8 @@ jobs:
continue-on-error: true
license-check:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: node:20-bullseye
@@ -136,6 +200,8 @@ jobs:
continue-on-error: true
vault-check:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: ubuntu:22.04
@@ -159,7 +225,7 @@ jobs:
- name: Validate vault files are encrypted
run: |
echo "Checking for Ansible Vault files..."
vault_files=$(find . -name "*vault*.yml" -o -name "*vault*.yaml" | grep -v ".git" || true)
vault_files=$(find . -name "*vault*.yml" -o -name "*vault*.yaml" | grep -v ".git" | grep -v ".example" || true)
if [ -z "$vault_files" ]; then
echo "No vault files found"
exit 0
@@ -182,6 +248,8 @@ jobs:
echo "All vault files are properly encrypted!"
playbook-test:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: ubuntu:22.04
@@ -224,6 +292,8 @@ jobs:
continue-on-error: true
container-scan:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: ubuntu:22.04
@@ -273,6 +343,8 @@ jobs:
continue-on-error: true
sonar-analysis:
needs: skip-ci-check
if: needs.skip-ci-check.outputs.should-skip != '1'
runs-on: ubuntu-latest
container:
image: sonarsource/sonar-scanner-cli:latest
@@ -280,10 +352,6 @@ jobs:
SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
steps:
- name: Install Node.js for checkout action
run: |
apk add --no-cache nodejs npm curl
- name: Check out code
uses: actions/checkout@v4

.gitignore

@@ -6,6 +6,16 @@
*.tmp
*.bak
*~
vault.yml.bak.*
# Deploy keys and SSH private keys - NEVER commit these!
*_deploy_key
*_deploy_key.pub
*.pem
*.key
id_rsa
id_ed25519
id_ecdsa
# Python bytecode
__pycache__/


@@ -1,4 +1,4 @@
.PHONY: help bootstrap lint test check dev datascience inventory inventory-all local clean status tailscale tailscale-check tailscale-dev tailscale-status create-vault create-vm monitoring
.PHONY: help bootstrap lint test check dev datascience inventory inventory-all local servers workstations clean status tailscale tailscale-check tailscale-dev tailscale-status create-vault create-vm monitoring
.DEFAULT_GOAL := help
## Colors for output
@@ -13,9 +13,12 @@ RESET := \033[0m
PLAYBOOK_SITE := playbooks/site.yml
PLAYBOOK_DEV := playbooks/development.yml
PLAYBOOK_LOCAL := playbooks/local.yml
PLAYBOOK_SERVERS := playbooks/servers.yml
PLAYBOOK_WORKSTATIONS := playbooks/workstations.yml
PLAYBOOK_MAINTENANCE := playbooks/maintenance.yml
PLAYBOOK_TAILSCALE := playbooks/tailscale.yml
PLAYBOOK_PROXMOX := playbooks/infrastructure/proxmox-vm.yml
PLAYBOOK_PROXMOX_INFO := playbooks/app/proxmox_info.yml
# Collection and requirement paths
COLLECTIONS_REQ := collections/requirements.yml
@@ -152,6 +155,18 @@ test-syntax: ## Run comprehensive syntax and validation checks
fi; \
done
@echo ""
@echo "$(YELLOW)App Project Playbooks:$(RESET)"
@for playbook in playbooks/app/site.yml playbooks/app/provision_vms.yml playbooks/app/configure_app.yml playbooks/app/ssh_client_config.yml; do \
if [ -f "$$playbook" ]; then \
printf " %-25s " "$$playbook"; \
if ansible-playbook "$$playbook" --syntax-check >/dev/null 2>&1; then \
echo "$(GREEN)✓ OK$(RESET)"; \
else \
echo "$(RED)✗ FAIL$(RESET)"; \
fi; \
fi; \
done
@echo ""
@echo "$(YELLOW)Role Test Playbooks:$(RESET)"
@for test_playbook in roles/*/tests/test.yml; do \
if [ -f "$$test_playbook" ]; then \
@@ -195,11 +210,15 @@ test-syntax: ## Run comprehensive syntax and validation checks
@for yaml_file in inventories/production/group_vars/all/main.yml; do \
if [ -f "$$yaml_file" ]; then \
printf " %-25s " "$$yaml_file (YAML)"; \
if python3 -c "import yaml" >/dev/null 2>&1; then \
if python3 -c "import yaml; yaml.safe_load(open('$$yaml_file'))" >/dev/null 2>&1; then \
echo "$(GREEN)✓ OK$(RESET)"; \
else \
echo "$(RED)✗ FAIL$(RESET)"; \
fi; \
else \
echo "$(YELLOW)⚠ Skipped (PyYAML not installed)$(RESET)"; \
fi; \
fi; \
done
@printf " %-25s " "ansible.cfg (INI)"; \
@@ -234,6 +253,28 @@ local: ## Run the local playbook on localhost
@echo "$(YELLOW)Applying local playbook...$(RESET)"
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_LOCAL) -K
servers: ## Run baseline server playbook (usage: make servers [GROUP=services] [HOST=host1])
@echo "$(YELLOW)Applying server baseline...$(RESET)"
@EXTRA=""; \
if [ -n "$(HOST)" ]; then \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_SERVERS) --limit $(HOST); \
elif [ -n "$(GROUP)" ]; then \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_SERVERS) -e target_group=$(GROUP); \
else \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_SERVERS); \
fi
workstations: ## Run workstation baseline (usage: make workstations [GROUP=dev] [HOST=dev01])
@echo "$(YELLOW)Applying workstation baseline...$(RESET)"
@EXTRA=""; \
if [ -n "$(HOST)" ]; then \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_WORKSTATIONS) --limit $(HOST); \
elif [ -n "$(GROUP)" ]; then \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_WORKSTATIONS) -e target_group=$(GROUP); \
else \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_WORKSTATIONS); \
fi
# Host-specific targets
dev: ## Run on specific host (usage: make dev HOST=dev01)
ifndef HOST
@@ -364,7 +405,7 @@ shell-all: ## Configure shell on all shell_hosts (usage: make shell-all)
apps: ## Install applications only
@echo "$(YELLOW)Installing applications...$(RESET)"
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_DEV) --tags apps
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_WORKSTATIONS) --tags apps
# Connectivity targets
ping: auto-fallback ## Ping hosts with colored output (usage: make ping [GROUP=dev] [HOST=dev01])
@@ -528,6 +569,42 @@ monitoring: ## Install monitoring tools on all machines
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_DEV) --tags monitoring
@echo "$(GREEN)✓ Monitoring installation complete$(RESET)"
proxmox-info: ## Show Proxmox VM/LXC info (usage: make proxmox-info [PROJECT=projectA] [ALL=true] [TYPE=lxc|qemu|all])
@echo "$(YELLOW)Querying Proxmox guest info...$(RESET)"
@EXTRA=""; \
if [ -n "$(PROJECT)" ]; then EXTRA="$$EXTRA -e app_project=$(PROJECT)"; fi; \
if [ "$(ALL)" = "true" ]; then EXTRA="$$EXTRA -e proxmox_info_all=true"; fi; \
if [ -n "$(TYPE)" ]; then EXTRA="$$EXTRA -e proxmox_info_type=$(TYPE)"; fi; \
$(ANSIBLE_PLAYBOOK) $(PLAYBOOK_PROXMOX_INFO) $$EXTRA
app-provision: ## Provision app project containers/VMs on Proxmox (usage: make app-provision PROJECT=projectA)
ifndef PROJECT
@echo "$(RED)Error: PROJECT parameter required$(RESET)"
@echo "Usage: make app-provision PROJECT=projectA"
@exit 1
endif
@echo "$(YELLOW)Provisioning app project guests on Proxmox: $(PROJECT)$(RESET)"
$(ANSIBLE_PLAYBOOK) playbooks/app/provision_vms.yml -e app_project=$(PROJECT)
app-configure: ## Configure OS + app on project guests (usage: make app-configure PROJECT=projectA)
ifndef PROJECT
@echo "$(RED)Error: PROJECT parameter required$(RESET)"
@echo "Usage: make app-configure PROJECT=projectA"
@exit 1
endif
@echo "$(YELLOW)Configuring app project guests: $(PROJECT)$(RESET)"
$(ANSIBLE_PLAYBOOK) playbooks/app/configure_app.yml -e app_project=$(PROJECT)
app: ## Provision + configure app project (usage: make app PROJECT=projectA)
ifndef PROJECT
@echo "$(RED)Error: PROJECT parameter required$(RESET)"
@echo "Usage: make app PROJECT=projectA"
@exit 1
endif
@echo "$(YELLOW)Provisioning + configuring app project: $(PROJECT)$(RESET)"
$(ANSIBLE_PLAYBOOK) playbooks/app/site.yml -e app_project=$(PROJECT)
test-connectivity: ## Test host connectivity with detailed diagnostics and recommendations
@echo "$(YELLOW)Testing host connectivity...$(RESET)"
@if [ -f "test_connectivity.py" ]; then \

README.md

@@ -1,178 +1,81 @@
# Ansible Infrastructure Management
Comprehensive infrastructure automation for development environments, server management, and VM provisioning.
Ansible automation for development machines, service hosts, and **Proxmox-managed guests** (LXC-first, with a path for KVM VMs).
## 📊 **Current Status**
### ✅ **Completed Infrastructure**
- **Core System**: Base packages, SSH hardening, user management
- **Development Environment**: Git, Node.js, Python, Docker, modern CLI tools
- **Shell Configuration**: Zsh + Oh My Zsh + Powerlevel10k + plugins
- **Applications**: VS Code, Cursor, Brave, LibreOffice, desktop tools
- **Monitoring**: System monitoring tools + custom scripts (`sysinfo`, `netinfo`)
- **VPN Mesh**: Tailscale integration with automated auth keys
- **Security**: UFW firewall, fail2ban, SSH hardening
- **Maintenance**: Automated package updates and system cleanup
### 🎯 **Next Priorities**
1. **Enhanced monitoring**: Grafana + Prometheus dashboard
2. **Security hardening**: ClamAV antivirus, Lynis auditing, vulnerability scanning
3. **Centralized logging**: ELK stack for log aggregation
4. **CI/CD pipeline**: GitLab Runner or Jenkins integration
5. **Advanced security**: Intrusion detection, automated patching
## 🚀 Quick Start
## Quick start
```bash
# Install dependencies
# Install Python deps + Ansible collections
make bootstrap
# Set up secrets management
make create-vault
# Edit secrets (Proxmox credentials, SSH public key, etc.)
make edit-group-vault
# Test configuration (comprehensive)
make test
# Deploy to all hosts (dry run first)
make check
make apply
# Validate the repo
make test-syntax
```
## 📚 Documentation
## Proxmox app projects (LXC-first)
### Getting Started
- [**Initial Setup Guide**](docs/guides/setup.md) - First-time setup instructions
- [**Ansible Vault Guide**](docs/guides/vault.md) - Managing secrets securely
- [**Tailscale VPN Setup**](docs/guides/tailscale.md) - Mesh networking configuration
This repo can provision and configure **dev/qa/prod guests per application project** using the `app_projects` model.
### Reference
- [**Installed Applications**](docs/reference/applications.md) - Complete software inventory
- [**Makefile Commands**](docs/reference/makefile.md) - All available make targets
- [**Architecture Overview**](docs/reference/architecture.md) - System design and structure
- **Configure projects**: `inventories/production/group_vars/all/main.yml` (`app_projects`)
- **Configure secrets**: `inventories/production/group_vars/all/vault.yml` (encrypted)
- **Run end-to-end**:
## 🏗️ Project Structure
```bash
make app PROJECT=projectA
```
Other useful entry points:
- **Provision only**: `make app-provision PROJECT=projectA`
- **Configure only**: `make app-configure PROJECT=projectA`
- **Info / safety**: `make proxmox-info [PROJECT=projectA] [ALL=true] [TYPE=lxc|qemu|all]`
Safety notes:
- **IP conflict precheck**: provisioning fails if the target IP responds
(override with `-e allow_ip_conflicts=true` only if you really mean it).
- **VMID/CTID collision guardrail**: provisioning fails if the VMID exists but the guest name doesn't match
(override with `-e allow_vmid_collision=true` only if you really mean it).
- **No destructive playbooks**: this repo intentionally does **not** ship “destroy/decommission” automation.
Docs:
- `docs/guides/app_stack_proxmox.md`
- `docs/guides/app_stack_execution_flow.md`
## Project structure (relevant paths)
```
ansible/
├── Makefile # Task automation
├── ansible.cfg # Ansible configuration
├── hosts # Inventory file
├── collections/
│ └── requirements.yml # Galaxy dependencies
├── group_vars/ # Global variables
│ ├── all.yml
│ └── all/vault.yml # Encrypted secrets
├── host_vars/ # Host-specific configs
├── roles/ # Ansible roles
│ ├── base/ # Core system setup
│ ├── development/ # Dev tools
│ ├── docker/ # Container platform
│ ├── monitoring/ # System monitoring
│ ├── tailscale/ # VPN networking
│ └── ... # Additional roles
├── Makefile
├── ansible.cfg
├── collections/requirements.yml
├── inventories/production/
│ ├── hosts
│ ├── group_vars/all/
│ │ ├── main.yml
│ │ ├── vault.yml
│ │ └── vault.example.yml
│ └── host_vars/
├── playbooks/
│ ├── dev-playbook.yml # Development setup
│ ├── local-playbook.yml # Local machine
│ ├── maintenance-playbook.yml
│ └── tailscale-playbook.yml
└── docs/ # Documentation
├── guides/ # How-to guides
└── reference/ # Technical reference
│ ├── app/
│ │ ├── site.yml
│ │ ├── provision_vms.yml
│ │ ├── configure_app.yml
│ │ └── proxmox_info.yml
│ └── site.yml
└── roles/
├── proxmox_vm/
├── base_os/
├── app_setup/
└── pote/
```
## 🎯 Key Features
## Documentation
### Infrastructure Management
- **Automated Provisioning**: Proxmox VM creation and configuration
- **Configuration Management**: Consistent setup across all machines
- **Network Security**: Tailscale VPN mesh networking
- **System Maintenance**: Automated updates and cleanup
### Development Environment
- **Shell Environment**: Zsh + Oh My Zsh + Powerlevel10k
- **Container Platform**: Docker CE with Compose
- **Development Tools**: Node.js, Python, Git, build tools
- **Code Editors**: VS Code, Cursor IDE
### Security & Monitoring
- **SSH Hardening**: Modern crypto, key-only auth, fail2ban
- **Firewall**: UFW with sensible defaults
- **Monitoring Tools**: btop, iotop, nethogs, custom dashboards
## 🧪 Testing & Validation
### Comprehensive Testing
```bash
make test # Full test suite (lint + syntax + validation)
make test-syntax # Syntax and configuration validation only
make lint # Ansible-lint only
```
### Testing Coverage
- **Playbook syntax**: All main playbooks and infrastructure playbooks
- **Role validation**: All role test playbooks
- **Configuration files**: YAML and INI file validation
- **Documentation**: Markdown syntax and link checking (installed via `make bootstrap`)
- **Linting**: Full Ansible best practices validation
## 🖥️ Managed Hosts
| Host | Type | OS | Purpose |
|------|------|-----|---------|
| dev01 | Physical | Debian | Primary development |
| bottom | Physical | Debian | Secondary development |
| debianDesktopVM | VM | Debian | Desktop environment |
| giteaVM | VM | Alpine | Git repository hosting |
| portainerVM | VM | Alpine | Container management |
| homepageVM | VM | Debian | Service dashboard |
## 🔧 Common Tasks
```bash
# System Maintenance
make maintenance # Update all systems
make maintenance HOST=dev01 # Update specific host
# Development Setup
make docker # Install Docker
make shell # Configure shell
make apps # Install applications
# Network & Security
make tailscale # Deploy VPN
make security # Security hardening
make monitoring # Deploy monitoring
# Infrastructure
make create-vm # Create new VM
make status # Check connectivity
make facts # Gather system info
```
## 🛠️ Requirements
### Control Machine (where you run Ansible)
- Python 3.x with `pipx` (recommended) or `pip3`
- Node.js and `npm` (for documentation testing)
- SSH access to target hosts
- Ansible Vault password (for secrets)
### Target Hosts
- SSH server running
- Python 3.x
- `sudo` access for the Ansible user
### Dependency Management
All project dependencies are managed through standard requirements files:
- **`requirements.txt`** - Python packages (ansible, ansible-lint, etc.)
- **`package.json`** - Node.js packages (markdown tools)
- **`collections/requirements.yml`** - Ansible collections
**Setup**: Run `make bootstrap` to install all dependencies automatically.
## 📝 Contributing
1. Test changes with `make check` (dry run)
2. Follow existing patterns and naming conventions
3. Update documentation for new features
4. Encrypt sensitive data with Ansible Vault
- **Guides**: `docs/guides/`
- **Reference**: `docs/reference/`
- **Project docs (architecture/standards/workflow)**: `project-docs/index.md`


@@ -1,4 +1,6 @@
---
# Collections required for this repo.
# Install with: ansible-galaxy collection install -r collections/requirements.yml
collections:
- name: community.general
version: ">=6.0.0"

configure_app.yml

@@ -0,0 +1,7 @@
---
# Wrapper playbook
# Purpose:
# ansible-playbook -i inventories/production configure_app.yml -e app_project=projectA
- name: Configure app project guests
import_playbook: playbooks/app/configure_app.yml


@@ -0,0 +1,173 @@
# App stack execution flow (what happens when you run it)
This document describes **exactly** what Ansible runs and what it changes when you execute the Proxmox app stack playbooks.
## Entry points
- Recommended end-to-end run:
- `playbooks/app/site.yml`
- Repo-root wrappers (equivalent):
- `site.yml` (imports `playbooks/site.yml`, and you can `--tags app`)
- `provision_vms.yml` (imports `playbooks/app/provision_vms.yml`)
- `configure_app.yml` (imports `playbooks/app/configure_app.yml`)
## High-level flow
When you run `playbooks/app/site.yml`, it imports two playbooks in order:
1. `playbooks/app/provision_vms.yml` (**Proxmox API changes happen here**)
2. `playbooks/app/configure_app.yml` (**SSH into guests and configure OS/app**)
## Variables that drive everything
All per-project/per-env inputs come from:
- `inventories/production/group_vars/all/main.yml` → `app_projects`
Each `app_projects.<project>.envs.<env>` contains:
- `name` (container hostname / inventory host name)
- `vmid` (Proxmox CTID)
- `ip` (static IP in CIDR form, e.g. `10.0.10.101/24`)
- `gateway` (e.g. `10.0.10.1`)
- `branch` (`dev`, `qa`, `main`)
- `env_vars` (key/value map written to `/srv/app/.env.<env>`)
Proxmox connection variables are also read from `inventories/production/group_vars/all/main.yml` but are usually vault-backed:
- `proxmox_host: "{{ vault_proxmox_host }}"`
- `proxmox_user: "{{ vault_proxmox_user }}"`
- `proxmox_node: "{{ vault_proxmox_node | default('pve') }}"`
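Put together, a minimal `app_projects` entry has this shape (the project name, VMID, addresses, and env vars below are illustrative placeholders, not values from the repo):

```yaml
# inventories/production/group_vars/all/main.yml (sketch)
app_projects:
  projectA:
    envs:
      dev:
        name: projecta-dev         # container hostname / inventory host name
        vmid: 201                  # Proxmox CTID
        ip: "10.0.10.101/24"       # static IP in CIDR form
        gateway: "10.0.10.1"
        branch: dev                # dev, qa, or main
        env_vars:                  # written to /srv/app/.env.dev
          NODE_ENV: development
          API_URL: "http://10.0.10.101:3001"

# Proxmox connection settings, usually vault-backed:
proxmox_host: "{{ vault_proxmox_host }}"
proxmox_user: "{{ vault_proxmox_user }}"
proxmox_node: "{{ vault_proxmox_node | default('pve') }}"
```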
## Phase 1: Provisioning via Proxmox API
### File chain
`playbooks/app/site.yml` imports `playbooks/app/provision_vms.yml`, which does:
- Validates `app_project` exists (if you passed one)
- Loops projects → includes `playbooks/app/provision_one_guest.yml`
- Loops envs inside the project → includes `playbooks/app/provision_one_env.yml`
### Preflight IP safety check
In `playbooks/app/provision_one_env.yml`:
- It runs `ping` against the target IP.
- If the IP responds, the play **fails** to prevent accidental duplicate-IP provisioning.
- You can override the guard (not recommended) with `-e allow_ip_conflicts=true`.
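A guard of this shape can be sketched as two tasks; the task names and the `env_def` variable follow the variable model above, but the exact implementation in the repo may differ:

```yaml
- name: Preflight | probe the target IP
  ansible.builtin.command: "ping -c 2 -W 1 {{ env_def.ip | split('/') | first }}"
  register: ip_probe
  failed_when: false      # a dead IP is the good case here
  changed_when: false
  delegate_to: localhost

- name: Preflight | fail if the IP already responds
  ansible.builtin.fail:
    msg: >-
      IP {{ env_def.ip }} already answers ping; refusing to provision.
      Override with -e allow_ip_conflicts=true only if intentional.
  when:
    - ip_probe.rc == 0
    - not (allow_ip_conflicts | default(false) | bool)
```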
### What it creates/updates in Proxmox
In `playbooks/app/provision_one_env.yml` it calls role `roles/proxmox_vm` with LXC variables.
`roles/proxmox_vm/tasks/main.yml` dispatches:
- If `proxmox_guest_type == 'lxc'` → includes `roles/proxmox_vm/tasks/lxc.yml`
`roles/proxmox_vm/tasks/lxc.yml` performs:
1. **Build CT network config**
- Produces a `netif` dict like:
- `net0: name=eth0,bridge=vmbr0,firewall=1,ip=<CIDR>,gw=<GW>`
2. **Create/update the container**
- Uses `community.proxmox.proxmox` with:
- `state: present`
- `update: true` (so re-runs reconcile config)
- `vmid`, `hostname`, `ostemplate`, CPU/mem/swap, rootfs sizing, `netif`
- `pubkey` and optionally `password` for initial root access
3. **Start the container**
- Ensures `state: started` (if `lxc_start_after_create: true`)
4. **Wait for SSH**
- `wait_for: host=<ip> port=22`
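Condensed into a single task, the create/update call looks roughly like this; the OS template string and the `vault_*` variable names are assumptions for illustration:

```yaml
- name: Create or update the LXC container
  community.proxmox.proxmox:
    api_host: "{{ proxmox_host }}"
    api_user: "{{ proxmox_user }}"
    api_password: "{{ vault_proxmox_password }}"   # assumed vault var name
    node: "{{ proxmox_node }}"
    vmid: "{{ env_def.vmid }}"
    hostname: "{{ env_def.name }}"
    ostemplate: "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst"
    netif:
      net0: "name=eth0,bridge=vmbr0,firewall=1,ip={{ env_def.ip }},gw={{ env_def.gateway }}"
    pubkey: "{{ vault_ssh_public_key }}"           # assumed vault var name
    state: present
    update: true   # re-runs reconcile config instead of failing
```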
### Dynamic inventory creation
Still in `playbooks/app/provision_one_env.yml`, it calls `ansible.builtin.add_host` so the guests become available to later plays:
- Adds the guest to groups:
- `app_all`
- `app_<project>_all`
- `app_<project>_<env>`
- Sets:
- `ansible_host` to the IP (without CIDR)
- `ansible_user: root` (bootstrap user for first config)
- `app_project`, `app_env` facts
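The registration step can be sketched as follows; `project_name` and `env_name` stand in for whatever loop variables the real include uses:

```yaml
- name: Register guest in the in-memory inventory
  ansible.builtin.add_host:
    name: "{{ env_def.name }}"
    groups:
      - app_all
      - "app_{{ project_name }}_all"
      - "app_{{ project_name }}_{{ env_name }}"
    ansible_host: "{{ env_def.ip | split('/') | first }}"  # IP without CIDR
    ansible_user: root                                     # bootstrap user
    app_project: "{{ project_name }}"
    app_env: "{{ env_name }}"
```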
## Phase 2: Configure OS + app on the guests
`playbooks/app/configure_app.yml` contains two plays:
### Play A: Build dynamic inventory (localhost)
This play exists so you can run `configure_app.yml` even if you didn't run provisioning in the same Ansible invocation.

- It loops over projects/envs from `app_projects`
- Adds hosts to:
- `app_all`, `app_<project>_all`, `app_<project>_<env>`
- Uses:
- `ansible_user: "{{ app_bootstrap_user | default('root') }}"`
### Play B: Configure the hosts (SSH + sudo)
Targets:
- If you pass `-e app_project=projectA` → `hosts: app_projectA_all`
- Otherwise → `hosts: app_all`
Tasks executed on each guest:
1. **Resolve effective project/env variables**
- `project_def = app_projects[app_project]`
- `env_def = app_projects[app_project].envs[app_env]`
2. **Role: `base_os`** (`roles/base_os/tasks/main.yml`)
- Updates apt cache
- Installs baseline packages (git/curl/nodejs/npm/ufw/etc.)
- Creates `appuser` (passwordless sudo)
- Adds your SSH public key to `appuser`
- Enables UFW and allows:
- SSH (22)
- backend port (default `3001`, overridable per project)
- frontend port (default `3000`, overridable per project)
3. **Role: `app_setup`** (`roles/app_setup/tasks/main.yml`)
- Creates:
- `/srv/app`
- `/srv/app/backend`
- `/srv/app/frontend`
- Writes the env file:
- `/srv/app/.env.<dev|qa|prod>` from template `roles/app_setup/templates/env.j2`
- Writes the deploy script:
- `/usr/local/bin/deploy_app.sh` from `roles/app_setup/templates/deploy_app.sh.j2`
- Script does:
- `git clone` if missing
- `git checkout/pull` correct branch
- runs backend install + migrations
- runs frontend install + build
- restarts systemd services
- Writes systemd units:
- `/etc/systemd/system/app-backend.service` from `app-backend.service.j2`
- `/etc/systemd/system/app-frontend.service` from `app-frontend.service.j2`
- Reloads systemd and enables/starts both services
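As an illustration, the template/systemd portion of the role likely reduces to tasks along these lines (paths match the description above; the exact task names and ownership settings are assumptions):

```yaml
- name: Render per-environment env file   # destination per the role description
  ansible.builtin.template:
    src: env.j2
    dest: "/srv/app/.env.{{ app_env }}"
    owner: appuser
    group: appuser
    mode: "0640"

- name: Install systemd units
  ansible.builtin.template:
    src: "{{ item }}.j2"
    dest: "/etc/systemd/system/{{ item }}"
  loop:
    - app-backend.service
    - app-frontend.service

- name: Enable and start app services
  ansible.builtin.systemd:
    name: "{{ item }}"
    daemon_reload: true
    enabled: true
    state: started
  loop:
    - app-backend.service
    - app-frontend.service
```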
## What changes on first run vs re-run
- **Provisioning**:
- First run: creates CTs in Proxmox, sets static IP config, starts them.
- Re-run: reconciles settings because `update: true` is used.
- **Configuration**:
- Mostly idempotent (directories/templates/users/firewall/services converge).
## Common “before you run” checklist
- Confirm `app_projects` has correct IPs/CTIDs/branches:
- `inventories/production/group_vars/all/main.yml`
- Ensure vault has Proxmox + SSH key material:
- `inventories/production/group_vars/all/vault.yml`
- Reference template: `inventories/production/group_vars/all/vault.example.yml`


@ -0,0 +1,90 @@
# Proxmox App Projects (LXC-first)
This guide documents the **modular app-project stack** that provisions Proxmox guests (dev/qa/prod) and configures a full-stack app layout on them.
## What you get
- Proxmox provisioning via API (currently **LXC**; VM support remains via existing `roles/proxmox_vm` KVM path)
- A deployment user (`appuser`) with your SSH key
- `/srv/app/backend` and `/srv/app/frontend`
- Env file `/srv/app/.env.<dev|qa|prod>`
- `/usr/local/bin/deploy_app.sh` to pull the right branch and restart services
- systemd services:
- `app-backend.service`
- `app-frontend.service`
## Where to configure projects
Edit:
- `inventories/production/group_vars/all/main.yml`
Under `app_projects`, define projects like:
- `projectA.repo_url`
- `projectA.envs.dev|qa|prod.ip/gateway/branch`
- `projectA.guest_defaults` (cores/memory/rootfs sizing)
- `projectA.deploy.*` (install/build/migrate/start commands)
Adding **projectB** is just adding another top-level `app_projects.projectB` entry.
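For example, a hypothetical `projectB` entry might look like this (key names mirror the bullets above, but consult the existing `projectA` block in `main.yml` for the exact schema):

```yaml
app_projects:
  projectB:
    repo_url: "https://git.example.com/org/projectB.git"   # hypothetical URL
    guest_defaults:
      cores: 2
      memory: 2048
      rootfs_size: 16
    envs:
      dev:
        vmid: 210
        ip: "192.168.1.60/24"
        gateway: "192.168.1.1"
        branch: develop
      prod:
        vmid: 211
        ip: "192.168.1.61/24"
        gateway: "192.168.1.1"
        branch: main
```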
## Proxmox credentials (vault)
This repo already expects Proxmox connection vars in vault (see existing Proxmox playbooks). Ensure these exist in:
- `inventories/production/group_vars/all/vault.yml` (encrypted)
Common patterns:
- `vault_proxmox_host`: `10.0.10.201`
- `vault_proxmox_user`: e.g. `root@pam` or `ansible@pve`
- `vault_proxmox_node`: e.g. `pve`
- Either:
- `vault_proxmox_password`, or
- `vault_proxmox_token` + `vault_proxmox_token_id`
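A plausible encrypted `vault.yml` fragment following those patterns (all values are placeholders):

```yaml
vault_proxmox_host: "10.0.10.201"
vault_proxmox_user: "ansible@pve"
vault_proxmox_node: "pve"
# Token auth (preferred):
vault_proxmox_token_id: "ansible@pve!ansible-token"
vault_proxmox_token: "00000000-0000-0000-0000-000000000000"
# ...or password auth instead:
# vault_proxmox_password: "changeme"
```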
## Debian LXC template
The LXC provisioning uses `lxc_ostemplate`, defaulting to a Debian 12 template string like:
`local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst`
If your Proxmox has a different template filename, change `lxc_ostemplate` in `inventories/production/group_vars/all/main.yml`.
## Running it
Provision + configure one project:
```bash
ansible-playbook -i inventories/production playbooks/app/site.yml -e app_project=projectA
```
Provision + configure all projects in `app_projects`:
```bash
ansible-playbook -i inventories/production playbooks/app/site.yml
```
Only provisioning (Proxmox API):
```bash
ansible-playbook -i inventories/production playbooks/app/provision_vms.yml -e app_project=projectA
```
Only OS/app configuration:
```bash
ansible-playbook -i inventories/production playbooks/app/configure_app.yml -e app_project=projectA
```
## Optional: SSH aliases on your workstation
To write `~/.ssh/config` entries (disabled by default):
```bash
ansible-playbook -i inventories/production playbooks/app/ssh_client_config.yml -e manage_ssh_config=true -e app_project=projectA
```
This creates aliases like `projectA-dev`, `projectA-qa`, `projectA-prod`.
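Each alias is an ordinary `~/.ssh/config` stanza, roughly like this (the IP and key path here are placeholders — the real values come from `app_projects`):

```
Host projectA-dev
    HostName 192.168.1.50        # env's static IP from app_projects (placeholder)
    User appuser
    IdentityFile ~/.ssh/id_ed25519
```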


@ -0,0 +1,28 @@
# Custom roles guide
This repo is designed to be extended by adding new roles under `roles/`.
## Role structure
Follow the standard Ansible role layout:
```
roles/<role_name>/
├── defaults/main.yml
├── handlers/main.yml
├── tasks/main.yml
├── templates/
├── files/
└── README.md
```
## Where to wire a new role
- Add it to the relevant playbook under `playbooks/` (or create a new playbook if it's a major concern).
- Prefer tagging the role at inclusion time so `make <feature>` targets can use `--tags`.
## Standards (canonical)
- `project-docs/standards.md`
- `project-docs/decisions.md` (add an ADR entry for significant changes)

docs/guides/monitoring.md Normal file

@ -0,0 +1,22 @@
# Monitoring guide
Monitoring is split by host type:
- **Servers**: `roles/monitoring_server/` (includes `fail2ban`, sysstat tooling)
- **Desktops/workstations**: `roles/monitoring_desktop/` (desktop-oriented tooling)
## Run monitoring only
```bash
# Dry-run
make monitoring CHECK=true
# Apply
make monitoring
```
## Notes
- Desktop apps are installed only on the `desktop` group via `playbooks/workstations.yml`.
- If you need packet analysis tools, keep them opt-in (see `docs/reference/applications.md`).

docs/guides/security.md Normal file

@ -0,0 +1,31 @@
# Security hardening guide
This repo's “security” work is primarily implemented via roles and inventory defaults.
## What runs where
- **SSH hardening + firewall**: `roles/ssh/`
- **Baseline packages/security utilities**: `roles/base/`
- **Monitoring + intrusion prevention (servers)**: `roles/monitoring_server/` (includes `fail2ban`)
- **Secrets**: Ansible Vault in `inventories/production/group_vars/all/vault.yml`
## Recommended flow
```bash
# Dry-run first
make check
# Apply only security-tagged roles
make security
```
## Secrets / Vault
Use vault for anything sensitive:
- Guide: `docs/guides/vault.md`
## Canonical standards
- `project-docs/standards.md`


@ -129,7 +129,7 @@ vault_ssh_public_key: "ssh-ed25519 AAAA..."
## Step 7: Configure Variables
### Global Settings
Edit `group_vars/all.yml`:
Edit `inventories/production/group_vars/all/main.yml`:
```yaml
# Timezone and locale
timezone: "America/New_York" # Your timezone
@ -145,7 +145,7 @@ ssh_permit_root_login: "no"
```
### Host-Specific Settings
Create/edit `host_vars/hostname.yml` for host-specific configuration.
Create/edit `inventories/production/host_vars/<hostname>.yml` for host-specific configuration.
## Step 8: Test Configuration
@ -159,7 +159,7 @@ make check
make check HOST=dev01
# Check specific role
ansible-playbook dev-playbook.yml --check --tags docker
ansible-playbook playbooks/development.yml --check --tags docker
```
## Step 9: Deploy
@ -208,7 +208,7 @@ ansible dev -m shell -a "tailscale status"
### Vault Password Issues
- Check vault password file exists and has correct permissions
- Verify password is correct: `ansible-vault view group_vars/all/vault.yml`
- Verify password is correct: `ansible-vault view inventories/production/group_vars/all/vault.yml`
### Python Not Found
- Install Python on target: `sudo apt install python3`


@ -46,21 +46,21 @@ make tailscale-status
make tailscale-dev
# Specific hosts
ansible-playbook tailscale-playbook.yml --limit "dev01,bottom"
ansible-playbook playbooks/tailscale.yml --limit "dev01,bottom"
```
### Manual Installation
```bash
# With custom auth key (not recommended - use vault instead)
ansible-playbook tailscale-playbook.yml -e "tailscale_auth_key=your-key"
ansible-playbook playbooks/tailscale.yml -e "tailscale_auth_key=your-key"
# As part of existing playbooks
ansible-playbook dev-playbook.yml --tags tailscale
ansible-playbook playbooks/development.yml --tags tailscale
```
## Configuration
### Global Settings (`group_vars/all.yml`)
### Global Settings (`inventories/production/group_vars/all/main.yml`)
```yaml
tailscale_auth_key: "{{ vault_tailscale_auth_key }}" # From vault
tailscale_accept_routes: true # Accept subnet routes
@ -68,7 +68,7 @@ tailscale_accept_dns: true # Accept DNS settings
tailscale_ssh: true # Enable SSH over Tailscale
```
### Host-Specific Settings (`host_vars/hostname.yml`)
### Host-Specific Settings (`inventories/production/host_vars/<hostname>.yml`)
```yaml
tailscale_hostname: "custom-name" # Override hostname
tailscale_advertise_routes: "192.168.1.0/24" # Share local subnet
@ -100,7 +100,7 @@ sudo tailscale up
### Reset Connection
```bash
ansible-playbook tailscale-playbook.yml -e "tailscale_reset=true"
ansible-playbook playbooks/tailscale.yml -e "tailscale_reset=true"
```
## Security Best Practices
@ -119,7 +119,7 @@ The role automatically detects OS and uses appropriate package manager.
## How It Works
1. **Playbook runs** → looks for `tailscale_auth_key`
2. **Checks `all.yml`** → finds `{{ vault_tailscale_auth_key }}`
2. **Checks inventory group vars** → finds `{{ vault_tailscale_auth_key }}`
3. **Decrypts vault** → retrieves actual auth key
4. **Installs Tailscale** → configures with your settings
5. **Connects to network** → machine appears in admin console


@ -6,7 +6,7 @@ Ansible Vault encrypts sensitive data like passwords and API keys while keeping
### Create Vault
```bash
make create-vault
make edit-group-vault
```
### Add Secrets
@ -38,32 +38,31 @@ database_password: "{{ vault_db_password }}"
## File Structure
```
group_vars/
├── all.yml # Plain text configuration
└── all/
└── vault.yml # Encrypted secrets (created by make create-vault)
host_vars/
inventories/production/
├── group_vars/
└── all/
│ ├── main.yml # Plain text configuration
│ └── vault.yml # Encrypted secrets (edit with make edit-group-vault)
└── host_vars/
├── dev01.yml # Host-specific plain text
└── dev01/
└── vault.yml # Host-specific secrets
└── vault.yml # Host-specific secrets (edit with make edit-vault HOST=dev01)
```
## Common Commands
```bash
# Create new vault
make create-vault
# Edit group vault (production inventory)
make edit-group-vault
# Edit existing vault
make edit-vault # Global vault
make edit-vault HOST=dev01 # Host-specific vault
# Edit host-specific vault
make edit-vault HOST=dev01
# View decrypted contents
ansible-vault view group_vars/all/vault.yml
ansible-vault view inventories/production/group_vars/all/vault.yml
# Change vault password
ansible-vault rekey group_vars/all/vault.yml
ansible-vault rekey inventories/production/group_vars/all/vault.yml
```
## Password Management

docs/project-docs.md Normal file

@ -0,0 +1,13 @@
# Project docs (canonical)
Canonical project documentation lives in `project-docs/` (repo root):
- Index: `project-docs/index.md`
- Overview: `project-docs/overview.md`
- Architecture: `project-docs/architecture.md`
- Standards: `project-docs/standards.md`
- Workflow: `project-docs/workflow.md`
- Decisions (ADRs): `project-docs/decisions.md`
This file exists so users browsing `docs/` can quickly find the canonical project documentation.


@ -54,10 +54,7 @@ Complete inventory of applications and tools deployed by Ansible playbooks.
| zsh | Z shell | apt | shell |
| tmux | Terminal multiplexer | apt | shell |
| fzf | Fuzzy finder | apt | shell |
| oh-my-zsh | Zsh framework | git | shell |
| powerlevel10k | Zsh theme | git | shell |
| zsh-syntax-highlighting | Syntax highlighting | git | shell |
| zsh-autosuggestions | Command suggestions | git | shell |
| zsh aliases | Minimal alias set (sourced from ~/.zshrc) | file | shell |
### 📊 Monitoring Tools
| Package | Description | Source | Role |
@ -84,37 +81,58 @@ Complete inventory of applications and tools deployed by Ansible playbooks.
### 🖱️ Desktop Applications
| Package | Description | Source | Role |
|---------|-------------|--------|------|
| brave-browser | Privacy-focused browser | brave | applications |
| libreoffice | Office suite | apt | applications |
| copyq | Clipboard manager (history/search) | apt | applications |
| evince | PDF viewer | apt | applications |
| redshift | Blue light filter | apt | applications |
### 📝 Code Editors
| Package | Description | Source | Role |
|---------|-------------|--------|------|
| code | Visual Studio Code | snap | snap |
| cursor | AI-powered editor | snap | snap |
## Nice-to-have apps (not installed by default)
These are good add-ons depending on how you use your workstations. Keep them opt-in to avoid bloating baseline installs.
### Desktop / UX
- **flameshot**: screenshots + annotation
- **keepassxc**: local password manager (or use your preferred)
- **syncthing**: peer-to-peer file sync (if you want self-hosted sync)
- **remmina**: RDP/VNC client
- **mpv**: lightweight media player
### Developer workstation helpers
- **direnv**: per-project env var loading
- **shellcheck**: shell script linting
- **jq** / **yq**: JSON/YAML CLI tooling (already in base here, but listing for completeness)
- **ripgrep** / **fd-find**: fast search/find (already in base here)
### Networking / diagnostics
- **wireshark** (GUI) or **wireshark-common**: packet analysis (only if you need it)
- **iperf3**: bandwidth testing
- **dnsutils**: dig/nslookup tools
## Installation by Playbook
### dev-playbook.yml
### `playbooks/development.yml`
Installs all roles for development machines:
- All system tools
- Development environment
- Docker platform
- Shell configuration
- Desktop applications
- Monitoring tools
- Tailscale VPN
### local-playbook.yml
### `playbooks/local.yml`
Installs for local machine management:
- Core system tools
- Shell environment
- Development basics
- Selected applications
### maintenance-playbook.yml
### `playbooks/workstations.yml`
Installs baseline for `dev:desktop:local`, and installs desktop apps only for the `desktop` group:
- Workstation baseline (dev + desktop + local)
- Desktop applications (desktop group only)
### `playbooks/maintenance.yml`
Maintains existing installations:
- System updates
- Package cleanup
@ -135,7 +153,6 @@ Maintains existing installations:
| snap | Snap packages | snapd daemon |
| docker | Docker repository | Docker GPG key + repo |
| tailscale | Tailscale repository | Tailscale GPG key + repo |
| brave | Brave browser repository | Brave GPG key + repo |
| git | Git repositories | Direct clone |
## Services Enabled


@ -1,259 +1,10 @@
# Architecture Overview
# Architecture (canonical doc moved)
Technical architecture and design of the Ansible infrastructure management system.
The canonical architecture document is now:
## System Architecture
- `project-docs/architecture.md`
```
┌─────────────────────────────────────────────────────────────┐
│                       Control Machine                       │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Ansible │ │ Makefile │ │ Vault │ │
│ │ Engine │ │ Automation │ │ Secrets │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└─────────────────────┬───────────────────────────────────────┘
│ SSH + Tailscale VPN
┌─────────────────┴──────────────────────────────┐
│ │
┌───▼────────┐ ┌──────────────┐ ┌─────────────────▼────────┐
│ Dev │ │ Service │ │ Infrastructure │
│ Machines │ │ VMs │ │ VMs │
├────────────┤ ├──────────────┤ ├──────────────────────────┤
│ • dev01 │ │ • giteaVM │ │ • Proxmox Controller │
│ • bottom │ │ • portainerVM│ │ • Future VMs │
│ • desktop │ │ • homepageVM │ │ │
└────────────┘ └──────────────┘ └──────────────────────────┘
```
## Network Topology
### Physical Network
- **LAN**: 192.168.1.0/24 (typical home/office network)
- **Proxmox Host**: Hypervisor for VM management
- **Physical Machines**: Direct network access
### Tailscale Overlay Network
- **Mesh VPN**: Secure peer-to-peer connections
- **100.x.x.x**: Tailscale IP range
- **Zero-config**: Automatic NAT traversal
- **End-to-end encryption**: WireGuard protocol
## Host Groups
### Development (`dev`)
**Purpose**: Developer workstations and environments
- **Hosts**: dev01, bottom, debianDesktopVM
- **OS**: Debian/Ubuntu
- **Roles**: Full development stack
### Services
**Purpose**: Self-hosted services and applications
#### Gitea (`gitea`)
- **Host**: giteaVM
- **OS**: Alpine Linux (lightweight)
- **Service**: Git repository hosting
#### Portainer (`portainer`)
- **Host**: portainerVM
- **OS**: Alpine Linux
- **Service**: Container management UI
#### Homepage (`homepage`)
- **Host**: homepageVM
- **OS**: Debian
- **Service**: Service dashboard
### Infrastructure (`ansible`)
**Purpose**: Ansible control and automation
- **Host**: Ansible controller VM
- **OS**: Ubuntu Server
- **Service**: Infrastructure automation
### Local (`local`)
**Purpose**: Local machine management
- **Host**: localhost
- **Connection**: Local (no SSH)
## Playbook Architecture
### Core Playbooks
```yaml
dev-playbook.yml # Development environment setup
├── roles/maintenance # System updates
├── roles/base # Core packages
├── roles/ssh # SSH hardening
├── roles/user # User management
├── roles/development # Dev tools
├── roles/shell # Shell config
├── roles/docker # Container platform
├── roles/applications # Desktop apps
├── roles/snap # Snap packages
├── roles/tailscale # VPN setup
├── roles/monitoring # Monitoring tools
local-playbook.yml # Local machine
├── roles/base
├── roles/shell
├── roles/development
└── roles/tailscale
maintenance-playbook.yml # System maintenance
└── roles/maintenance
tailscale-playbook.yml # VPN deployment
└── roles/tailscale
proxmox-create-vm.yml # VM provisioning
└── roles/proxmox_vm
```
### Role Dependencies
```
base
├── Required by: all other roles
├── Provides: core utilities, security tools
└── Dependencies: none
ssh
├── Required by: secure access
├── Provides: hardened SSH, firewall
└── Dependencies: base
user
├── Required by: system access
├── Provides: user accounts, sudo
└── Dependencies: base
development
├── Required by: coding tasks
├── Provides: git, nodejs, python
└── Dependencies: base
docker
├── Required by: containerization
├── Provides: Docker CE, compose
└── Dependencies: base
tailscale
├── Required by: secure networking
├── Provides: mesh VPN
└── Dependencies: base
```
## Data Flow
### Configuration Management
1. **Variables** → group_vars/all.yml (global)
2. **Secrets** → group_vars/all/vault.yml (encrypted)
3. **Host Config** → host_vars/hostname.yml (specific)
4. **Role Defaults** → roles/*/defaults/main.yml
5. **Tasks** → roles/*/tasks/main.yml
6. **Templates** → roles/*/templates/*.j2
7. **Handlers** → roles/*/handlers/main.yml
### Execution Flow
```
make command
Makefile target
ansible-playbook
Inventory + Variables
Role execution
Task processing
Handler notification
Result reporting
```
## Security Architecture
### Defense in Depth
#### Network Layer
- **Tailscale VPN**: Encrypted mesh network
- **UFW Firewall**: Default deny, explicit allow
- **SSH Hardening**: Key-only, rate limiting
#### Application Layer
- **Fail2ban**: Intrusion prevention
- **Package signing**: GPG verification
- **Service isolation**: Docker containers
#### Data Layer
- **Ansible Vault**: Encrypted secrets
- **SSH Keys**: Ed25519 cryptography
### Access Control
```
User → SSH Key → Jump Host → Tailscale → Target Host
↓ ↓ ↓ ↓
Ed25519 Bastion WireGuard Firewall
Encryption Rules
```
## Storage Architecture
### Configuration Storage
```
/etc/ # System configuration
/opt/ # Application data
/usr/local/ # Custom scripts
/var/log/ # Logs and audit trails
```
## Monitoring Architecture
### System Monitoring
- **btop/htop**: Process monitoring
- **iotop**: I/O monitoring
- **nethogs**: Network per-process
- **Custom dashboards**: sysinfo, netinfo
### Log Management
- **logwatch**: Daily summaries
- **journald**: System logs
- **fail2ban**: Security logs
## Scalability Considerations
### Horizontal Scaling
- Add hosts to inventory groups
- Parallel execution with ansible forks
- Role reusability across environments
### Vertical Scaling
- Proxmox VM resource adjustment
- Docker resource limits
- Service-specific tuning
## Technology Stack
### Core Technologies
- **Ansible**: 2.9+ (Configuration management)
- **Python**: 3.x (Ansible runtime)
- **Jinja2**: Templating engine
- **YAML**: Configuration format
### Target Platforms
- **Debian**: 11+ (Bullseye, Bookworm)
- **Ubuntu**: 20.04+ (Focal, Jammy, Noble)
- **Alpine**: 3.x (Lightweight containers)
### Service Technologies
- **Docker**: Container runtime
- **Tailscale**: Mesh VPN
- **SystemD**: Service management
- **UFW**: Firewall management
This `docs/reference/architecture.md` file is kept as a pointer to avoid maintaining two competing sources of truth.
## Best Practices


@ -58,6 +58,10 @@ Complete reference for all available `make` commands in the Ansible project.
| Command | Description | Usage |
|---------|-------------|-------|
| `create-vm` | Create Ansible controller VM on Proxmox | `make create-vm` |
| `proxmox-info` | Show Proxmox guest info (LXC/VM) | `make proxmox-info [PROJECT=projectA] [ALL=true] [TYPE=lxc\|qemu\|all]` |
| `app-provision` | Provision app project guests on Proxmox | `make app-provision PROJECT=projectA` |
| `app-configure` | Configure OS + app on project guests | `make app-configure PROJECT=projectA` |
| `app` | Provision + configure app project guests | `make app PROJECT=projectA` |
| `ping` | Ping hosts with colored output | `make ping [GROUP=dev] [HOST=dev01]` |
| `facts` | Gather facts from all hosts | `make facts` |
| `test-connectivity` | Test network and SSH access | `make test-connectivity` |
@ -69,6 +73,7 @@ Complete reference for all available `make` commands in the Ansible project.
| `copy-ssh-key` | Copy SSH key to specific host | `make copy-ssh-key HOST=giteaVM` |
| `create-vault` | Create encrypted vault file | `make create-vault` |
| `edit-vault` | Edit encrypted host vars | `make edit-vault HOST=dev01` |
| `edit-group-vault` | Edit encrypted group vars (production inventory) | `make edit-group-vault` |
## Utility Commands

docs/reference/network.md Normal file

@ -0,0 +1,21 @@
# Network reference
## Overview
This repo manages hosts reachable over your LAN and optionally over a Tailscale overlay network.
## Physical network
- Typical LAN: `192.168.1.0/24` (adjust for your environment)
- Inventory host addressing is defined in `inventories/production/hosts`
## Tailscale overlay
- Tailscale provides a mesh VPN (WireGuard-based) with `100.x.y.z` addresses.
- The repo installs/configures it via `playbooks/tailscale.yml` + `roles/tailscale/`.
## References
- Tailscale guide: `docs/guides/tailscale.md`
- Canonical architecture: `project-docs/architecture.md`


@ -0,0 +1,281 @@
# Playbooks & Tags Map
This repo is organized around playbooks in `playbooks/` (plus a few thin wrapper playbooks in the repo root).
This reference gives you:
- **Execution paths**: where each playbook “goes” (imports → roles → included tasks).
- **Tag paths**: what each tag actually selects, including Makefile shortcuts.
---
## Playbook entrypoints (paths)
### `site.yml` (wrapper)
`site.yml` is a wrapper that delegates to `playbooks/site.yml`.
```mermaid
flowchart TD
A[site.yml] --> B[playbooks/site.yml]
```
### `playbooks/site.yml` (dispatcher)
This is a pure dispatcher: it imports other playbooks and assigns **top-level tags** per import.
```mermaid
flowchart TD
S[playbooks/site.yml] -->|tags: maintenance| M[playbooks/maintenance.yml]
S -->|tags: development| D[playbooks/development.yml]
S -->|tags: tailscale| T[playbooks/tailscale.yml]
S -->|tags: app| A[playbooks/app/site.yml]
```
### `playbooks/maintenance.yml`
```mermaid
flowchart TD
P[playbooks/maintenance.yml] --> R[role: maintenance]
```
- **Notes**:
- `pre_tasks`/`post_tasks` in the playbook are **untagged**; if you run with
`--tags maintenance`, only the `maintenance` role runs (the untagged pre/post tasks are skipped).
### `playbooks/development.yml`
Targets: `hosts: dev`
```mermaid
flowchart TD
P[playbooks/development.yml] --> R1[role: maintenance]
P --> R2[role: base]
P --> R3[role: user]
P --> R4[role: ssh]
P --> R5[role: shell]
P --> R6[role: development]
P --> R7[role: datascience]
P --> R8[role: docker]
P --> R9[role: monitoring_desktop]
%% role-internal paths that matter
R5 --> S1[roles/shell/tasks/main.yml]
S1 --> S2[include_tasks: roles/shell/tasks/configure_user_shell.yml]
R8 --> D1[roles/docker/tasks/main.yml]
D1 --> D2[include_tasks: roles/docker/tasks/setup_gpg_key.yml]
D1 --> D3[include_tasks: roles/docker/tasks/setup_repo_*.yml]
```
- **Notes**:
- `pre_tasks` are **untagged**; under a `--tags` filter the apt-cache update is skipped. Untagged tasks run only
  when no tag filter is given, or with `--tags all` / `--tags untagged`.
### `playbooks/local.yml`
Targets: `hosts: localhost`, `connection: local`
This is basically the same role stack as `playbooks/development.yml` (minus `datascience`), but applied locally.
```mermaid
flowchart TD
P[playbooks/local.yml] --> R1[role: maintenance]
P --> R2[role: base]
P --> R3[role: user]
P --> R4[role: ssh]
P --> R5[role: shell]
P --> R6[role: development]
P --> R7[role: docker]
P --> R8[role: monitoring_desktop]
```
### `playbooks/servers.yml`
Targets by default: `services:qa:ansible:tailscale` (override via `-e target_group=...`).
```mermaid
flowchart TD
P[playbooks/servers.yml] --> R1[role: maintenance]
P --> R2[role: base]
P --> R3[role: user]
P --> R4[role: ssh]
P --> R5[role: shell]
P --> R6[role: docker]
P --> R7[role: monitoring_server]
```
### `playbooks/workstations.yml`
Two plays:
- Workstation baseline for `dev:desktop:local`
- Desktop applications only for the `desktop` group
```mermaid
flowchart TD
W[playbooks/workstations.yml] --> B1[play: workstation baseline]
B1 --> R1[role: maintenance]
B1 --> R2[role: base]
B1 --> R3[role: user]
B1 --> R4[role: ssh]
B1 --> R5[role: shell]
B1 --> R6[role: development]
B1 --> R7[role: datascience]
B1 --> R8[role: docker]
B1 --> R9[role: monitoring_desktop]
W --> B2[play: desktop apps]
B2 --> A1[role: applications]
```
### `playbooks/shell.yml`
```mermaid
flowchart TD
P[playbooks/shell.yml] --> R[role: shell]
R --> S1[roles/shell/tasks/main.yml]
S1 --> S2[include_tasks: roles/shell/tasks/configure_user_shell.yml]
```
### `playbooks/tailscale.yml`
```mermaid
flowchart TD
P[playbooks/tailscale.yml] --> R[role: tailscale]
R --> T1[roles/tailscale/tasks/main.yml]
T1 -->|Debian| T2[include_tasks: roles/tailscale/tasks/debian.yml]
T1 -->|Alpine| T3[include_tasks: roles/tailscale/tasks/alpine.yml]
```
### `playbooks/infrastructure/proxmox-vm.yml`
Creates an Ansible controller VM on Proxmox (local connection) via `role: proxmox_vm`.
```mermaid
flowchart TD
P[playbooks/infrastructure/proxmox-vm.yml] --> R[role: proxmox_vm]
R --> M1[roles/proxmox_vm/tasks/main.yml]
M1 -->|proxmox_guest_type=lxc| L1[include_tasks: roles/proxmox_vm/tasks/lxc.yml]
M1 -->|else| K1[include_tasks: roles/proxmox_vm/tasks/kvm.yml]
```
### App project suite (`playbooks/app/*`)
#### `playbooks/app/site.yml` (app dispatcher)
```mermaid
flowchart TD
S[playbooks/app/site.yml] -->|tags: app,provision| P[playbooks/app/provision_vms.yml]
S -->|tags: app,configure| C[playbooks/app/configure_app.yml]
```
#### `playbooks/app/provision_vms.yml` (provision app guests)
High-level loop: `project_key` → `env_item` → provision guest → add to dynamic inventory groups.
```mermaid
flowchart TD
P[playbooks/app/provision_vms.yml] --> T1[include_tasks: playbooks/app/provision_one_guest.yml]
T1 --> T2[include_tasks: playbooks/app/provision_one_env.yml]
T2 --> R[include_role: proxmox_vm]
R --> M1[roles/proxmox_vm/tasks/main.yml]
M1 --> L1[roles/proxmox_vm/tasks/lxc.yml]
T2 --> H[add_host groups]
H --> G1[app_all]
    H --> G2["app_${project}_all"]
    H --> G3["app_${project}_${env}"]
```
#### `playbooks/app/configure_app.yml` (configure app guests)
Two phases:
1) **localhost** builds a dynamic inventory from `app_projects` (static IPs)
2) **app_all** (or `app_${project}_all`) configures each host
```mermaid
flowchart TD
A[play: localhost build inventory] --> H[add_host groups]
    H --> G1["app_all / app_${project}_*"]
B[play: app_all configure] --> OS[include_role: base_os]
    B --> POTE["include_role: pote (only when app_project == 'pote')"]
    B --> APP["include_role: app_setup (when app_project != 'pote')"]
```
#### `playbooks/app/proxmox_info.yml`
Single local play that queries Proxmox and prints a filtered summary.
#### `playbooks/app/ssh_client_config.yml`
Single local play that optionally manages `~/.ssh/config` (gated by `manage_ssh_config`).
---
## Tags map (what each tag hits)
### Top-level dispatcher tags (`playbooks/site.yml`)
- **maintenance**: runs `playbooks/maintenance.yml`
- **development**: runs `playbooks/development.yml`
- **tailscale**: runs `playbooks/tailscale.yml`
- **app**: runs `playbooks/app/site.yml` (and therefore app provision + configure)
### Dev/local role tags (`playbooks/development.yml`, `playbooks/local.yml`)
These playbooks tag roles directly:
- **maintenance** → `role: maintenance`
- **base** → `role: base`
- **security** → `role: base` + `role: ssh` (both are tagged `security` at role-inclusion)
- **user** → `role: user`
- **ssh** → `role: ssh`
- **shell** → `role: shell`
- **development** / **dev** → `role: development`
- **docker** → `role: docker`
- **monitoring** → `role: monitoring_desktop`
- **datascience** / **conda** / **jupyter** / **r** → `role: datascience` (development playbook only)
- **tailscale** / **vpn** → (currently commented out in dev/local)
### Workstation + desktop apps tags (`playbooks/workstations.yml`)
- **apps** / **applications** → `role: applications` (desktop group only)
### App suite tags
From `playbooks/app/site.yml` imports:
- **app**: everything in the app suite
- **provision**: `playbooks/app/provision_vms.yml` only
- **configure**: `playbooks/app/configure_app.yml` only
Standalone app playbook tags (so `--tags ...` works when running them directly):
- `playbooks/app/provision_vms.yml`: **app**, **provision**
- `playbooks/app/configure_app.yml`: **app**, **configure**
- `playbooks/app/proxmox_info.yml`: **app**, **proxmox**, **info**
- `playbooks/app/ssh_client_config.yml`: **app**, **ssh-config**
### Role-internal tags (task/block level)
These are tags inside role task files (useful for targeting parts of a role even if the role itself isn't included with that tag):
- `roles/datascience/tasks/main.yml`:
- **conda**
- **jupyter**
- **r**, **rstats**
### Makefile tag shortcuts
Make targets that apply `--tags`:
- `make datascience HOST=...` → `--tags datascience`
- `make security` → `--tags security`
- `make docker` → `--tags docker`
- `make shell` → `--tags shell`
- `make apps` → `--tags apps`
- `make monitoring` → `--tags monitoring`
---
## Tag-filtering gotchas (important)
- If you run with `--tags X`, **untagged** `pre_tasks`/`tasks`/`post_tasks` in a playbook are skipped.
- Example: `playbooks/maintenance.yml` has untagged `pre_tasks` and `post_tasks`.
`--tags maintenance` runs only the `maintenance` role, not the surrounding reporting steps.
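If you want those surrounding steps to survive tag filtering, one option is to mark them `tags: always` (a sketch — the task name here is illustrative):

```yaml
pre_tasks:
  - name: Record run start time
    ansible.builtin.set_fact:
      run_started: "{{ ansible_date_time.iso8601 }}"
    tags: always   # runs even under --tags maintenance
```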


@ -0,0 +1,28 @@
# Security reference
## Overview
Security in this repo is implemented via:
- hardened SSH + firewall defaults (`roles/ssh/`)
- baseline system configuration (`roles/base/`)
- monitoring/intrusion prevention on servers (`roles/monitoring_server/`)
- secrets handled via Ansible Vault (`inventories/production/group_vars/all/vault.yml`)
## Recommended execution
```bash
# Dry-run first
make check
# Apply security-tagged tasks
make security
```
## Vault
- Vault guide: `docs/guides/vault.md`
## Canonical standards
- `project-docs/standards.md`


@ -30,3 +30,269 @@ tailscale_accept_routes: true
tailscale_accept_dns: true
tailscale_ssh: false
tailscale_hostname: "{{ inventory_hostname }}"
# -----------------------------------------------------------------------------
# Proxmox + modular app projects (LXC-first)
#
# This repo can manage many independent apps ("projects"). Each project defines
# its own dev/qa/prod guests (IPs/VMIDs/branches) under `app_projects`.
#
# Usage examples:
# - Run one project: ansible-playbook -i inventories/production playbooks/app/site.yml -e app_project=projectA
# - Run all projects: ansible-playbook -i inventories/production playbooks/app/site.yml
# -----------------------------------------------------------------------------
# Proxmox API connection (keep secrets in vault)
proxmox_host: "{{ vault_proxmox_host }}"
proxmox_user: "{{ vault_proxmox_user }}"
proxmox_node: "{{ vault_proxmox_node | default('pve') }}"
proxmox_api_port: "{{ vault_proxmox_api_port | default(8006) }}"
# Proxmox commonly uses a self-signed cert; keep validation off by default.
proxmox_validate_certs: false
# Prefer API token auth (store in vault):
# - proxmox_token_id: "ansible@pve!token-name"
# - vault_proxmox_token: "secret"
proxmox_token_id: "{{ vault_proxmox_token_id | default('') }}"
# Default guest type for new projects. (Later you can set to `kvm` per project/env.)
proxmox_guest_type: lxc
# Proxmox LXC defaults (override per project/env as needed)
lxc_ostemplate: "local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst"
lxc_storage: "local-lvm"
lxc_network_bridge: "vmbr0"
lxc_unprivileged: true
lxc_features_list:
- "keyctl=1"
- "nesting=1"
lxc_start_after_create: true
lxc_nameserver: "1.1.1.1 8.8.8.8"
# Base OS / access defaults
appuser_name: appuser
appuser_shell: /bin/bash
appuser_groups:
- sudo
# Store your workstation public key in vault_ssh_public_key
appuser_ssh_public_key: "{{ vault_ssh_public_key }}"
# App defaults (override per project)
app_backend_port: 3001
app_frontend_port: 3000
# Default Node workflow commands (override per project if your app differs)
app_backend_install_cmd: "npm ci"
app_backend_migrate_cmd: "npm run migrate"
app_backend_start_cmd: "npm start"
app_frontend_install_cmd: "npm ci"
app_frontend_build_cmd: "npm run build"
app_frontend_start_cmd: "npm start"
# Projects definition: add as many projects as you want here.
# Each project has envs (dev/qa/prod) defining name/vmid/ip/gateway/branch and
# optional env_vars (dummy placeholders by default).
#
# -----------------------------------------------------------------------------
# Proxmox VMID/CTID ranges (DEDICATED; avoid collisions)
#
# Proxmox IDs are global. Never reuse IDs across unrelated guests.
# Suggested reservation table (edit to your preference):
# - 9000-9099: pote
# - 9100-9199: punimTagFE
# - 9200-9299: punimTagBE
# - 9300-9399: projectA (example)
# -----------------------------------------------------------------------------
app_projects:
projectA:
description: "Example full-stack app (edit repo_url, IPs, secrets)."
repo_url: "git@github.com:example/projectA.git"
components:
backend: true
frontend: true
# Repo is assumed to contain `backend/` and `frontend/` directories.
repo_dest: "/srv/app"
# Optional overrides for this project
backend_port: "{{ app_backend_port }}"
frontend_port: "{{ app_frontend_port }}"
guest_defaults:
guest_type: "{{ proxmox_guest_type }}"
cores: 2
memory_mb: 2048
swap_mb: 512
rootfs_size_gb: 16
deploy:
backend_install_cmd: "{{ app_backend_install_cmd }}"
backend_migrate_cmd: "{{ app_backend_migrate_cmd }}"
backend_start_cmd: "{{ app_backend_start_cmd }}"
frontend_install_cmd: "{{ app_frontend_install_cmd }}"
frontend_build_cmd: "{{ app_frontend_build_cmd }}"
frontend_start_cmd: "{{ app_frontend_start_cmd }}"
envs:
dev:
name: "projectA-dev"
vmid: 9301
ip: "10.0.10.101/24"
gateway: "10.0.10.1"
branch: "dev"
env_vars:
APP_ENV: "dev"
BACKEND_BASE_URL: "http://10.0.10.101:{{ app_backend_port }}"
FRONTEND_BASE_URL: "http://10.0.10.101:{{ app_frontend_port }}"
SECRET_PLACEHOLDER: "change-me"
qa:
name: "projectA-qa"
vmid: 9302
ip: "10.0.10.102/24"
gateway: "10.0.10.1"
branch: "qa"
env_vars:
APP_ENV: "qa"
BACKEND_BASE_URL: "http://10.0.10.102:{{ app_backend_port }}"
FRONTEND_BASE_URL: "http://10.0.10.102:{{ app_frontend_port }}"
SECRET_PLACEHOLDER: "change-me"
prod:
name: "projectA-prod"
vmid: 9303
ip: "10.0.10.103/24"
gateway: "10.0.10.1"
branch: "main"
pote:
description: "POTE (python/venv + cron) project (edit repo_url, IPs, secrets)."
repo_url: "gitea@10.0.30.169:ilia/POTE.git"
# POTE deploys as a user-owned python app (not /srv/app)
repo_dest: "/home/poteapp/pote"
os_user: "poteapp"
components:
backend: false
frontend: false
guest_defaults:
guest_type: "{{ proxmox_guest_type }}"
cores: 2
memory_mb: 2048
swap_mb: 512
rootfs_size_gb: 16
# POTE-specific optional defaults (override per env if needed)
pote_db_host: "localhost"
pote_db_user: "poteuser"
pote_db_name: "potedb"
pote_smtp_host: "mail.levkin.ca"
pote_smtp_port: 587
envs:
dev:
name: "pote-dev"
vmid: 9001
ip: "10.0.10.114/24"
gateway: "10.0.10.1"
branch: "dev"
qa:
name: "pote-qa"
vmid: 9002
ip: "10.0.10.112/24"
gateway: "10.0.10.1"
branch: "qa"
prod:
name: "pote-prod"
vmid: 9003
ip: "10.0.10.113/24"
gateway: "10.0.10.1"
branch: "main"
punimTagFE:
description: "punimTag frontend-only project (edit repo_url, IPs, secrets)."
repo_url: "git@github.com:example/punimTagFE.git"
repo_dest: "/srv/app"
components:
backend: false
frontend: true
guest_defaults:
guest_type: "{{ proxmox_guest_type }}"
cores: 2
memory_mb: 2048
swap_mb: 512
rootfs_size_gb: 16
deploy:
frontend_install_cmd: "{{ app_frontend_install_cmd }}"
frontend_build_cmd: "{{ app_frontend_build_cmd }}"
frontend_start_cmd: "{{ app_frontend_start_cmd }}"
envs:
dev:
name: "punimTagFE-dev"
vmid: 9101
ip: "10.0.10.121/24"
gateway: "10.0.10.1"
branch: "dev"
env_vars:
APP_ENV: "dev"
SECRET_PLACEHOLDER: "change-me"
qa:
name: "punimTagFE-qa"
vmid: 9102
ip: "10.0.10.122/24"
gateway: "10.0.10.1"
branch: "qa"
env_vars:
APP_ENV: "qa"
SECRET_PLACEHOLDER: "change-me"
prod:
name: "punimTagFE-prod"
vmid: 9103
ip: "10.0.10.123/24"
gateway: "10.0.10.1"
branch: "main"
env_vars:
APP_ENV: "prod"
SECRET_PLACEHOLDER: "change-me"
punimTagBE:
description: "punimTag backend-only project (edit repo_url, IPs, secrets)."
repo_url: "git@github.com:example/punimTagBE.git"
repo_dest: "/srv/app"
components:
backend: true
frontend: false
guest_defaults:
guest_type: "{{ proxmox_guest_type }}"
cores: 2
memory_mb: 2048
swap_mb: 512
rootfs_size_gb: 16
deploy:
backend_install_cmd: "{{ app_backend_install_cmd }}"
backend_migrate_cmd: "{{ app_backend_migrate_cmd }}"
backend_start_cmd: "{{ app_backend_start_cmd }}"
envs:
dev:
name: "punimTagBE-dev"
vmid: 9201
ip: "10.0.10.131/24"
gateway: "10.0.10.1"
branch: "dev"
env_vars:
APP_ENV: "dev"
SECRET_PLACEHOLDER: "change-me"
qa:
name: "punimTagBE-qa"
vmid: 9202
ip: "10.0.10.132/24"
gateway: "10.0.10.1"
branch: "qa"
env_vars:
APP_ENV: "qa"
SECRET_PLACEHOLDER: "change-me"
prod:
name: "punimTagBE-prod"
vmid: 9203
ip: "10.0.10.133/24"
gateway: "10.0.10.1"
branch: "main"
env_vars:
APP_ENV: "prod"
SECRET_PLACEHOLDER: "change-me"


@ -0,0 +1,42 @@
---
# Example vault values for Proxmox app projects.
#
# Copy required keys into your encrypted vault:
# make edit-group-vault
#
# Never commit real secrets unencrypted.
# Proxmox API
vault_proxmox_host: "10.0.10.201"
vault_proxmox_user: "root@pam"
vault_proxmox_node: "pve"
vault_proxmox_password: "CHANGE_ME"
# Optional token auth (recommended if you use it)
# vault_proxmox_token_id: "root@pam!ansible"
# vault_proxmox_token: "CHANGE_ME"
# SSH public key for appuser (workstation key)
vault_ssh_public_key: "ssh-ed25519 AAAA... you@example"
# LXC create bootstrap password (often required by Proxmox)
vault_lxc_root_password: "CHANGE_ME"
# -----------------------------------------------------------------------------
# POTE (python/venv + cron) secrets
# -----------------------------------------------------------------------------
# Private key used for cloning from Gitea (deploy key). Store as a multi-line block.
vault_pote_git_ssh_key: |
-----BEGIN OPENSSH PRIVATE KEY-----
CHANGE_ME
-----END OPENSSH PRIVATE KEY-----
# Environment-specific DB passwords (used by roles/pote)
vault_pote_db_password_dev: "CHANGE_ME"
vault_pote_db_password_qa: "CHANGE_ME"
vault_pote_db_password_prod: "CHANGE_ME"
# SMTP password for reports
vault_pote_smtp_password: "CHANGE_ME"

View File

@ -1,10 +1,47 @@
$ANSIBLE_VAULT;1.1;AES256
36343265643238633236643162613137393331386164306133666537633336633036376433386161
3135366566623235333264386539346364333435373065300a633231633731316633313166346161
30363334613965666634633665363632323966396464633636346533616634393664386566333230
3463666531323866660a666238383331383562313363386639646161653334313661393065343135
33613762653361656633366465306264323935363032353737333935363165346639616330333939
39336538643866366361313838636338643336376365373166376234383838656430623339313162
37353461313263643263376232393138396233366234336333613535366234383661353938663032
65383737343164343431363764333063326230623263323231366232626131306637353361343466
6131
36643038376636383030343730626264613839396462366365633837636130623639393361656634
3238353261633635353662653036393835313963373562390a646535376366656163383632313835
39646666653362336661633736333365343962346432653131613134353361366263373162386631
3134613438626132320a313765343338643535343837306339616564336564303166626164356530
63663363656535303137663431613861343662303664313332626166373463393931323937613230
66333665316331323637663437653339353737653336633864393033336630336438646162643662
31656164363933333036376263303034646366393134636630663631353235373831303264363762
66643865616130306537383836646237613730643133656333666632326538613764383530363363
61386161646637316166303633643665383365346534323939383034613430386362303038313761
36303364396436373466653332303562653038373962616539356633373065643130303036363161
65353163326136383066393332376236386333653532326337613163346334616234643562643265
62316134386365343733636661336130623364386634383965386135616633323132643365613231
34636435333031376136396336316337666161383562343834383865316436633333333065323138
37343865363731303137666330306131373734623637343531623562353332353437646631343363
30393565376435303430396535643165616534313334326462363130626639343038643835336335
33613630336534666163356631353438373462306566376134323536373832643264633365653465
62386363326436623330653430383262653732376235626432656362306363303663623834653664
31373762306539376431353137393664396165396261613364653339373765393863633833396131
36666235666234633430373338323331313531643736656137303937653865303431643164373161
39633238383265396366386230303536613461633431333565353433643935613231333232333063
36643435376165656262623863373039393837643564366531666462376162653630626634663037
39373439336239646131306133663566343734656339346462356662373561306264333364383966
38343463616666613037636335333137633737666166633364343736646232396566373866633531
34303734376137386363373039656565323364333539626630323465666636396465323861333365
35376161663630356132373638333937376164316361303531303637396334306133373237656265
36356532623130323565396531306136363339363437376364343138653139653335343765316365
38313035366137393365316139326236326330386365343665376335313339666231333632333133
32353865626531373462346261653832386234396531653136323162653865303861396233376261
34616232363965313635373833333737336166643734373633313865323066393930666562316136
36373763356365646361656436383463393237623461383531343134373336663763663464336361
38396532383932643065303731663565353366373033353237383538636365323064396531386134
61643964613930373439383032373364316437303239393434376465393639373634663738623461
37386366616333626434363761326361373533306635316164316363393264303633353939613239
37353266303637323139653630663236663633313061306633316139666539376632306630313362
34633834326433646230303634313266303530633236353262633066396462646365623935343161
34393166643666366164313438383939386434366665613330653739383139613732396633383261
33633664303131383163356362316639353064373861343132623565636631333135663034373461
61303031616634333235303066633939643337393862653031323936363932633438303035323238
66323066353737316166383533636661336637303265343937633064626164623462656134333732
33316536336430636636646561626232666633656266326339623732363531326131643764313838
62356537326166346666313930383639386466633432626235373738633833393164646238366465
62373938363739373036666238666433303061633732663565666433333631326432626461353037
39636263636632313431353364386566383134653139393762623562643561616166633035353038
39326462356332616563303462636536636132633933336532383938373030666333363264346632
64643063373830353130613662323131353964313038323735626464313363326364653732323732
3663393964633138376665323435366463623463613237366465

View File

@ -0,0 +1,16 @@
---
# Host variables for dev02
# Use ladmin user with sudo to become root
ansible_become: true
ansible_become_method: sudo
ansible_become_password: "{{ vault_dev02_become_password }}"
# Configure shell for ladmin
shell_users:
- ladmin
# Skip data science stack
install_conda: false
install_jupyter: false
install_r: false


@ -1,3 +1,4 @@
---
ansible_become_password: root
ansible_python_interpreter: /usr/bin/python3
@ -21,10 +22,4 @@ jupyter_bind_all_interfaces: true
# R configuration
install_r: true
# Cursor IDE configuration
install_cursor_extensions: true
# Cursor extension groups to enable
install_python: true # Python development
install_docs: true # Markdown/documentation
# IDE/editor tooling is intentionally not managed by Ansible in this repo.


@ -1,3 +1,4 @@
---
# Configure sudo path for git-ci-01
# Sudo may not be in PATH for non-interactive shells
ansible_become_exe: /usr/bin/sudo
@ -5,4 +6,3 @@ ansible_become_method: sudo
# Alternative: if sudo is in a different location, update this
# ansible_become_exe: /usr/local/bin/sudo


@ -7,4 +7,3 @@ ansible_become_method: sudo
# Configure shell for ladmin user
shell_users:
- ladmin


@ -2,24 +2,18 @@
# Primary IPs: Tailscale (100.x.x.x) for remote access
# Fallback IPs: Local network (10.0.x.x) when Tailscale is down
# Usage: ansible_host_fallback is available for manual fallback
[gitea]
giteaVM ansible_host=10.0.30.169 ansible_user=root
[portainer]
portainerVM ansible_host=10.0.30.69 ansible_user=ladmin
[homepage]
homepageVM ansible_host=10.0.30.12 ansible_user=homepage
[vaultwarden]
vaultwardenVM ansible_host=10.0.10.142 ansible_user=root
#
# NOTE: Proxmox app projects (dev/qa/prod) are provisioned dynamically via
# `playbooks/app/site.yml` (it uses `add_host` based on `app_projects`).
# You generally do NOT need to add project hosts here.
[dev]
dev01 ansible_host=10.0.30.105 ansible_user=ladmin
bottom ansible_host=10.0.10.156 ansible_user=beast
debianDesktopVM ansible_host=10.0.10.206 ansible_user=user skip_reboot=true
devGPU ansible_host=10.0.30.63 ansible_user=root
[qa]
git-ci-01 ansible_host=10.0.10.223 ansible_user=ladmin
sonarqube-01 ansible_host=10.0.10.54 ansible_user=ladmin
@ -34,8 +28,14 @@ caddy ansible_host=10.0.10.50 ansible_user=root
jellyfin ansible_host=10.0.10.232 ansible_user=root
listmonk ansible_host=10.0.10.149 ansible_user=root
nextcloud ansible_host=10.0.10.25 ansible_user=root
actual ansible_host=10.0.10.159 ansible_user=root
actual ansible_host=10.0.10.158 ansible_user=root
vikanjans ansible_host=10.0.10.159 ansible_user=root
n8n ansible_host=10.0.10.158 ansible_user=root
giteaVM ansible_host=10.0.30.169 ansible_user=root
portainerVM ansible_host=10.0.30.69 ansible_user=ladmin
homepageVM ansible_host=10.0.30.12 ansible_user=homepage
vaultwardenVM ansible_host=10.0.10.142 ansible_user=ladmin
qBittorrent ansible_host=10.0.10.91 ansible_user=root port=8080
[desktop]
desktop-beast ansible_host=100.117.34.106 ansible_user=beast

package-lock.json (generated)

@ -13,7 +13,7 @@
"markdownlint-cli2": "^0.18.1"
},
"engines": {
"node": ">=22.0.0",
"node": ">=20.0.0",
"npm": ">=10.0.0"
}
},


@ -0,0 +1,134 @@
---
# Playbook: app/configure_app.yml
# Purpose: Configure OS + app runtime on app project guests created via provision_vms.yml
# Targets: app_all or per-project group created dynamically
# Tags: app, configure
#
# Usage:
# - Run one project: ansible-playbook -i inventories/production playbooks/app/site.yml -e app_project=projectA
# - Run all projects: ansible-playbook -i inventories/production playbooks/app/site.yml
- name: Build dynamic inventory from app_projects (so configure can run standalone)
hosts: localhost
connection: local
gather_facts: false
tags: ['app', 'configure']
vars:
selected_projects: >-
{{
(app_projects | dict2items | map(attribute='key') | list)
if (app_project is not defined or app_project | length == 0)
else [app_project]
}}
app_bootstrap_user_default: root
# If true, configure plays will use vault_lxc_root_password for initial SSH bootstrap.
bootstrap_with_root_password_default: false
tasks:
- name: Validate requested project exists
ansible.builtin.assert:
that:
- app_project is not defined or app_project in app_projects
fail_msg: "Requested app_project={{ app_project }} does not exist in app_projects."
- name: Add each project/env host (by static IP) to dynamic inventory
ansible.builtin.add_host:
name: "{{ app_projects[item.0].envs[item.1].name | default(item.0 ~ '-' ~ item.1) }}"
groups:
- "app_all"
- "app_{{ item.0 }}_all"
- "app_{{ item.0 }}_{{ item.1 }}"
ansible_host: "{{ (app_projects[item.0].envs[item.1].ip | string).split('/')[0] }}"
ansible_user: "{{ app_bootstrap_user | default(app_bootstrap_user_default) }}"
ansible_password: >-
{{
vault_lxc_root_password
if ((bootstrap_with_root_password | default(bootstrap_with_root_password_default) | bool) and (vault_lxc_root_password | default('') | length) > 0)
else omit
}}
app_project: "{{ item.0 }}"
app_env: "{{ item.1 }}"
loop: "{{ selected_projects | product(['dev', 'qa', 'prod']) | list }}"
when:
- app_projects[item.0] is defined
- app_projects[item.0].envs[item.1] is defined
- (app_projects[item.0].envs[item.1].ip | default('')) | length > 0
- name: Configure app guests (base OS + app setup)
hosts: >-
{{
('app_' ~ app_project ~ '_all')
if (app_project is defined and app_project | length > 0)
else 'app_all'
}}
become: true
gather_facts: true
tags: ['app', 'configure']
tasks:
- name: Build project/env effective variables
ansible.builtin.set_fact:
project_def: "{{ app_projects[app_project] }}"
env_def: "{{ app_projects[app_project].envs[app_env] }}"
when: app_project is defined and app_env is defined
- name: Configure base OS
ansible.builtin.include_role:
name: base_os
vars:
base_os_backend_port: "{{ (project_def.backend_port | default(app_backend_port)) if project_def is defined else app_backend_port }}"
base_os_frontend_port: "{{ (project_def.frontend_port | default(app_frontend_port)) if project_def is defined else app_frontend_port }}"
base_os_enable_backend: "{{ project_def.components.backend | default(true) }}"
base_os_enable_frontend: "{{ project_def.components.frontend | default(true) }}"
base_os_user: "{{ project_def.os_user | default(appuser_name) }}"
base_os_user_ssh_public_key: "{{ project_def.os_user_ssh_public_key | default(appuser_ssh_public_key | default('')) }}"
# Only override when explicitly provided (avoids self-referential recursion)
base_os_packages: "{{ project_def.base_os_packages if (project_def is defined and project_def.base_os_packages is defined) else omit }}"
- name: Configure POTE (python/venv + cron)
ansible.builtin.include_role:
name: pote
vars:
pote_git_repo: "{{ project_def.repo_url }}"
pote_git_branch: "{{ env_def.branch }}"
pote_user: "{{ project_def.os_user | default('poteapp') }}"
pote_group: "{{ project_def.os_user | default('poteapp') }}"
pote_app_dir: "{{ project_def.repo_dest | default('/home/' ~ (project_def.os_user | default('poteapp')) ~ '/pote') }}"
pote_env: "{{ app_env }}"
pote_db_host: "{{ env_def.pote_db_host | default(project_def.pote_db_host | default('localhost')) }}"
pote_db_name: "{{ env_def.pote_db_name | default(project_def.pote_db_name | default('potedb')) }}"
pote_db_user: "{{ env_def.pote_db_user | default(project_def.pote_db_user | default('poteuser')) }}"
pote_smtp_host: "{{ env_def.pote_smtp_host | default(project_def.pote_smtp_host | default('mail.levkin.ca')) }}"
pote_smtp_port: "{{ env_def.pote_smtp_port | default(project_def.pote_smtp_port | default(587)) }}"
pote_smtp_user: "{{ env_def.pote_smtp_user | default(project_def.pote_smtp_user | default('')) }}"
pote_from_email: "{{ env_def.pote_from_email | default(project_def.pote_from_email | default('')) }}"
pote_report_recipients: "{{ env_def.pote_report_recipients | default(project_def.pote_report_recipients | default('')) }}"
when: app_project == 'pote'
- name: Configure app layout + deploy + systemd
ansible.builtin.include_role:
name: app_setup
vars:
app_repo_url: "{{ project_def.repo_url }}"
app_repo_dest: "{{ project_def.repo_dest | default('/srv/app') }}"
app_repo_branch: "{{ env_def.branch }}"
# app_env is already set per-host via add_host (dev/qa/prod)
app_owner: "{{ project_def.os_user | default(appuser_name) }}"
app_group: "{{ project_def.os_user | default(appuser_name) }}"
app_backend_port: "{{ project_def.backend_port | default(app_backend_port) }}"
app_frontend_port: "{{ project_def.frontend_port | default(app_frontend_port) }}"
app_enable_backend: "{{ project_def.components.backend | default(true) }}"
app_enable_frontend: "{{ project_def.components.frontend | default(true) }}"
app_backend_install_cmd: "{{ project_def.deploy.backend_install_cmd | default(app_backend_install_cmd) }}"
app_backend_migrate_cmd: "{{ project_def.deploy.backend_migrate_cmd | default(app_backend_migrate_cmd) }}"
app_backend_start_cmd: "{{ project_def.deploy.backend_start_cmd | default(app_backend_start_cmd) }}"
app_frontend_install_cmd: "{{ project_def.deploy.frontend_install_cmd | default(app_frontend_install_cmd) }}"
app_frontend_build_cmd: "{{ project_def.deploy.frontend_build_cmd | default(app_frontend_build_cmd) }}"
app_frontend_start_cmd: "{{ project_def.deploy.frontend_start_cmd | default(app_frontend_start_cmd) }}"
app_env_vars: "{{ env_def.env_vars | default({}) }}"
when: app_project != 'pote'


@ -0,0 +1,237 @@
---
# Helper tasks file for playbooks/app/provision_vms.yml
# Provisions a single (project, env) guest and adds it to dynamic inventory.
- name: Set environment facts
ansible.builtin.set_fact:
env_name: "{{ env_item.key }}"
env_def: "{{ env_item.value }}"
guest_name: "{{ env_item.value.name | default(project_key ~ '-' ~ env_item.key) }}"
# vmid is optional; if omitted, we will manage idempotency by unique guest_name
guest_vmid: "{{ env_item.value.vmid | default(none) }}"
- name: Normalize recreate_existing_envs to a list
ansible.builtin.set_fact:
recreate_envs_list: >-
{{
(recreate_existing_envs.split(',') | map('trim') | list)
if (recreate_existing_envs is defined and recreate_existing_envs is string)
else (recreate_existing_envs | default([]))
}}
- name: Check if Proxmox guest already exists (by VMID when provided)
community.proxmox.proxmox_vm_info:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
type: lxc
vmid: "{{ guest_vmid }}"
register: proxmox_guest_info_vmid
when: guest_vmid is not none
- name: Check if Proxmox guest already exists (by name when VMID omitted)
community.proxmox.proxmox_vm_info:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
type: lxc
name: "{{ guest_name }}"
register: proxmox_guest_info_name
when: guest_vmid is none
- name: Set guest_exists fact
ansible.builtin.set_fact:
guest_exists: >-
{{
((proxmox_guest_info_vmid.proxmox_vms | default([])) | length > 0)
if (guest_vmid is not none)
else ((proxmox_guest_info_name.proxmox_vms | default([])) | length > 0)
}}
- name: "Guardrail: abort if VMID exists but name does not match (prevents overwriting other guests)"
ansible.builtin.fail:
msg: >-
Refusing to use VMID {{ guest_vmid }} for {{ guest_name }} because it already exists as
"{{ (proxmox_guest_info_vmid.proxmox_vms[0].name | default('UNKNOWN')) }}".
Pick a different vmid range in app_projects or omit vmid to auto-allocate.
when:
- guest_vmid is not none
- (proxmox_guest_info_vmid.proxmox_vms | default([])) | length > 0
- (proxmox_guest_info_vmid.proxmox_vms[0].name | default('')) != guest_name
- not (allow_vmid_collision | default(false) | bool)
- name: Delete existing guest if requested (recreate)
community.proxmox.proxmox:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
vmid: "{{ guest_vmid }}"
purge: true
force: true
state: absent
when:
- guest_exists | bool
- guest_vmid is not none
- recreate_existing_guests | default(false) | bool or (env_name in recreate_envs_list)
- name: Mark guest as not existing after delete
ansible.builtin.set_fact:
guest_exists: false
when:
- guest_vmid is not none
- recreate_existing_guests | default(false) | bool or (env_name in recreate_envs_list)
- name: "Preflight: detect IP conflicts on Proxmox (existing LXC net0 ip=)"
community.proxmox.proxmox_vm_info:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
type: lxc
config: current
register: proxmox_all_lxc
when:
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
- name: Set proxmox_ip_conflicts fact
ansible.builtin.set_fact:
proxmox_ip_conflicts: >-
{%- set conflicts = [] -%}
{%- set target_ip = ((env_def.ip | string).split('/')[0]) -%}
{%- for vm in (proxmox_all_lxc.proxmox_vms | default([])) -%}
{%- set cfg_net0 = (
vm['config']['net0']
if (
vm is mapping and ('config' in vm)
and (vm['config'] is mapping) and ('net0' in vm['config'])
)
else none
) -%}
{%- set vm_netif = (vm['netif'] if (vm is mapping and ('netif' in vm)) else none) -%}
{%- set net0 = (
cfg_net0
if (cfg_net0 is not none)
else (
vm_netif['net0']
if (vm_netif is mapping and ('net0' in vm_netif))
else (
vm_netif
if (vm_netif is string)
else (vm['net0'] if (vm is mapping and ('net0' in vm)) else '')
)
)
) | string -%}
{%- set vm_ip = (net0 | regex_search('(?:^|,)ip=([^,]+)', '\\1') | default('')) | regex_replace('/.*$', '') -%}
{%- if (vm_ip | length) > 0 and vm_ip == target_ip -%}
{%- set _ = conflicts.append({'vmid': (vm.vmid | default('') | string), 'name': (vm.name | default('') | string), 'net0': net0}) -%}
{%- endif -%}
{%- endfor -%}
{{ conflicts }}
when:
- proxmox_all_lxc is defined
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
- name: Abort if IP is already assigned to an existing Proxmox LXC
ansible.builtin.fail:
msg: >-
Refusing to provision {{ guest_name }} because IP {{ (env_def.ip | string).split('/')[0] }}
is already present in Proxmox LXC net0 config: {{ proxmox_ip_conflicts }}.
Fix app_projects IPs or set -e allow_ip_conflicts=true.
when:
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
- (proxmox_ip_conflicts | default([])) | length > 0
- name: "Preflight: fail if target IP responds (avoid accidental duplicate IP)"
ansible.builtin.command: "ping -c 1 -W 1 {{ (env_def.ip | string).split('/')[0] }}"
register: ip_ping
changed_when: false
failed_when: false
when:
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
- name: Abort if IP appears to be in use
ansible.builtin.fail:
msg: >-
Refusing to provision {{ guest_name }} because IP {{ (env_def.ip | string).split('/')[0] }}
responded to ping. Fix app_projects IPs or set -e allow_ip_conflicts=true.
Note: this guardrail is ping-based; if your network blocks ICMP, an in-use IP may not respond.
when:
- (env_def.ip | default('')) | length > 0
- not (allow_ip_conflicts | default(false) | bool)
- not (guest_exists | default(false) | bool)
- ip_ping.rc == 0
- name: Provision LXC guest for project/env
ansible.builtin.include_role:
name: proxmox_vm
vars:
# NOTE: Use hostvars['localhost'] for defaults to avoid recursive self-references
proxmox_guest_type: "{{ project_def.guest_defaults.guest_type | default(hostvars['localhost'].proxmox_guest_type | default('lxc')) }}"
# Only pass vmid when provided; otherwise Proxmox will auto-allocate
lxc_vmid: "{{ guest_vmid if guest_vmid is not none else omit }}"
lxc_hostname: "{{ guest_name }}"
lxc_ostemplate: "{{ project_def.lxc_ostemplate | default(hostvars['localhost'].lxc_ostemplate) }}"
lxc_storage: "{{ project_def.lxc_storage | default(hostvars['localhost'].lxc_storage) }}"
lxc_network_bridge: "{{ project_def.lxc_network_bridge | default(hostvars['localhost'].lxc_network_bridge) }}"
lxc_unprivileged: "{{ project_def.lxc_unprivileged | default(hostvars['localhost'].lxc_unprivileged) }}"
lxc_features_list: "{{ project_def.lxc_features_list | default(hostvars['localhost'].lxc_features_list) }}"
lxc_cores: "{{ project_def.guest_defaults.cores | default(hostvars['localhost'].lxc_cores) }}"
lxc_memory_mb: "{{ project_def.guest_defaults.memory_mb | default(hostvars['localhost'].lxc_memory_mb) }}"
lxc_swap_mb: "{{ project_def.guest_defaults.swap_mb | default(hostvars['localhost'].lxc_swap_mb) }}"
lxc_rootfs_size_gb: "{{ project_def.guest_defaults.rootfs_size_gb | default(hostvars['localhost'].lxc_rootfs_size_gb) }}"
lxc_ip: "{{ env_def.ip }}"
lxc_gateway: "{{ env_def.gateway }}"
lxc_nameserver: "{{ project_def.lxc_nameserver | default(hostvars['localhost'].lxc_nameserver) }}"
lxc_pubkey: "{{ appuser_ssh_public_key | default('') }}"
lxc_start_after_create: "{{ project_def.lxc_start_after_create | default(hostvars['localhost'].lxc_start_after_create) }}"
- name: Wait for SSH to become available
ansible.builtin.wait_for:
host: "{{ (env_def.ip | string).split('/')[0] }}"
port: 22
timeout: 300
when: (env_def.ip | default('')) | length > 0
- name: Add guest to dynamic inventory
ansible.builtin.add_host:
name: "{{ guest_name }}"
groups:
- "app_all"
- "app_{{ project_key }}_all"
- "app_{{ project_key }}_{{ env_name }}"
ansible_host: "{{ (env_def.ip | string).split('/')[0] }}"
ansible_user: root
app_project: "{{ project_key }}"
app_env: "{{ env_name }}"
# EOF


@ -0,0 +1,23 @@
---
# Helper tasks file for playbooks/app/provision_vms.yml
# Provisions all envs for a single project and adds dynamic inventory hosts.
- name: Set project definition
ansible.builtin.set_fact:
project_def: "{{ app_projects[project_key] }}"
- name: "Preflight: validate env IPs are unique within project"
ansible.builtin.assert:
that:
- (project_env_ips | length) == ((project_env_ips | unique) | length)
fail_msg: "Duplicate IPs detected in app_projects.{{ project_key }}.envs (IPs must be unique): {{ project_env_ips }}"
vars:
project_env_ips: "{{ project_def.envs | dict2items | map(attribute='value.ip') | select('defined') | map('string') | map('regex_replace', '/.*$', '') | reject('equalto', '') | list }}"
when:
- project_def.envs is defined
- (project_def.envs | length) > 0
- name: Provision each environment for project
ansible.builtin.include_tasks: provision_one_env.yml
loop: "{{ project_def.envs | dict2items }}"
loop_control:
loop_var: env_item
# EOF


@ -0,0 +1,35 @@
---
# Playbook: app/provision_vms.yml
# Purpose: Provision Proxmox guests for app projects (LXC-first) based on `app_projects`.
# Targets: localhost (Proxmox API)
# Tags: app, provision
#
# Usage:
# - Run one project: ansible-playbook -i inventories/production playbooks/app/provision_vms.yml -e app_project=projectA
# - Run all projects: ansible-playbook -i inventories/production playbooks/app/provision_vms.yml
- name: Provision Proxmox guests for app projects
hosts: localhost
connection: local
gather_facts: false
tags: ['app', 'provision']
vars:
selected_projects: >-
{{
(app_projects | dict2items | map(attribute='key') | list)
if (app_project is not defined or app_project | length == 0)
else [app_project]
}}
tasks:
- name: Validate requested project exists
ansible.builtin.assert:
that:
- app_project is not defined or app_project in app_projects
fail_msg: "Requested app_project={{ app_project | default('') }} does not exist in app_projects."
- name: Provision each project/env guest via Proxmox API
ansible.builtin.include_tasks: provision_one_guest.yml
loop: "{{ selected_projects }}"
loop_control:
loop_var: project_key

View File

@ -0,0 +1,98 @@
---
# Playbook: app/proxmox_info.yml
# Purpose: Query Proxmox API for VM/LXC info (status, node, name, vmid) and
# optionally filter to just the guests defined in `app_projects`.
# Targets: localhost
# Tags: app, proxmox, info
#
# Usage examples:
# - Show only projectA guests: ansible-playbook -i inventories/production playbooks/app/proxmox_info.yml -e app_project=projectA
# - Show all VMs/CTs on the cluster: ansible-playbook -i inventories/production playbooks/app/proxmox_info.yml -e proxmox_info_all=true
# - Restrict to only LXC: -e proxmox_info_type=lxc
- name: Proxmox inventory info (VMs and containers)
hosts: localhost
connection: local
gather_facts: false
tags: ['app', 'proxmox', 'info']
vars:
selected_projects: >-
{{
(app_projects | dict2items | map(attribute='key') | list)
if (app_project is not defined or app_project | length == 0)
else [app_project]
}}
proxmox_info_all_default: false
proxmox_info_type_default: all # all|lxc|qemu
tasks:
- name: Validate requested project exists
ansible.builtin.assert:
that:
- app_project is not defined or app_project in app_projects
fail_msg: "Requested app_project={{ app_project | default('') }} does not exist in app_projects."
- name: Build list of expected VMIDs and names from app_projects
ansible.builtin.set_fact:
expected_vmids: >-
{{
selected_projects
| map('extract', app_projects)
| map(attribute='envs')
| map('dict2items')
| map('map', attribute='value')
| list
| flatten
| map(attribute='vmid')
| select('defined')
| list
}}
expected_names: >-
{{
selected_projects
| map('extract', app_projects)
| map(attribute='envs')
| map('dict2items')
| map('map', attribute='value')
| list
| flatten
| map(attribute='name')
| list
}}
- name: Query Proxmox for guest info
community.proxmox.proxmox_vm_info:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node | default(omit) }}"
type: "{{ proxmox_info_type | default(proxmox_info_type_default) }}"
config: none
register: proxmox_info
- name: Filter guests to expected VMIDs/names (unless proxmox_info_all)
ansible.builtin.set_fact:
filtered_guests: >-
{{
(proxmox_info.proxmox_vms | default([]))
if (proxmox_info_all | default(proxmox_info_all_default) | bool)
else (
(proxmox_info.proxmox_vms | default([]))
| selectattr('name', 'in', expected_names)
| list
)
}}
- name: Display Proxmox guest summary
ansible.builtin.debug:
msg: |
Proxmox: {{ proxmox_host }} (node={{ proxmox_node | default('any') }}, type={{ proxmox_info_type | default(proxmox_info_type_default) }})
Showing: {{ 'ALL guests' if (proxmox_info_all | default(proxmox_info_all_default) | bool) else ('app_projects for ' ~ (selected_projects | join(', '))) }}
{% for g in (filtered_guests | sort(attribute='vmid')) %}
- vmid={{ g.vmid }} type={{ g.id.split('/')[0] if g.id is defined else 'unknown' }} name={{ g.name | default('') }} node={{ g.node | default('') }} status={{ g.status | default('') }}
{% endfor %}

13
playbooks/app/site.yml Normal file
View File

@ -0,0 +1,13 @@
---
# Playbook: app/site.yml
# Purpose: End-to-end provisioning + configuration for app projects.
# Targets: localhost (provision) + dynamic inventory groups (configure)
# Tags: app
- name: Provision Proxmox guests
import_playbook: provision_vms.yml
tags: ['app', 'provision']
- name: Configure guests
import_playbook: configure_app.yml
tags: ['app', 'configure']

View File

@ -0,0 +1,50 @@
---
# Playbook: app/ssh_client_config.yml
# Purpose: Ensure ~/.ssh/config has convenient host aliases for project envs.
# Targets: localhost
# Tags: app, ssh-config
#
# Example:
# ssh projectA-dev
# ssh projectA-qa
# ssh projectA-prod
- name: Configure SSH client aliases for app projects
hosts: localhost
connection: local
gather_facts: false
tags: ['app', 'ssh-config']
vars:
ssh_config_path: "{{ lookup('ansible.builtin.env', 'HOME') + '/.ssh/config' }}"
selected_projects: >-
{{
(app_projects | dict2items | map(attribute='key') | list)
if (app_project is not defined or app_project | length == 0)
else [app_project]
}}
tasks:
- name: Skip if SSH config management disabled
ansible.builtin.meta: end_play
when: not (manage_ssh_config | default(false) | bool)
- name: Ensure ~/.ssh directory exists
ansible.builtin.file:
path: "{{ lookup('ansible.builtin.env', 'HOME') + '/.ssh' }}"
state: directory
mode: "0700"
- name: Add SSH config entries for each project/env
community.general.ssh_config:
user_ssh_config_file: "{{ ssh_config_path }}"
host: "{{ app_projects[item.0].envs[item.1].name | default(item.0 ~ '-' ~ item.1) }}"
hostname: "{{ (app_projects[item.0].envs[item.1].ip | string).split('/')[0] }}"
user: "{{ appuser_name | default('appuser') }}"
identity_file: "{{ ssh_identity_file | default(omit) }}"
state: present
loop: "{{ selected_projects | product(['dev', 'qa', 'prod']) | list }}"
when:
- app_projects[item.0] is defined
- app_projects[item.0].envs[item.1] is defined
- (app_projects[item.0].envs[item.1].ip | default('')) | length > 0

View File

@ -2,32 +2,18 @@
- name: Configure development environment
hosts: dev
become: true
strategy: free
roles:
- {role: maintenance, tags: ['maintenance']}
- {role: base, tags: ['base', 'security']}
- {role: user, tags: ['user']}
- {role: ssh, tags: ['ssh', 'security']}
- {role: shell, tags: ['shell']}
- {role: shell, tags: ['shell'], shell_mode: full, shell_set_default_shell: true}
- {role: development, tags: ['development', 'dev']}
- {role: datascience, tags: ['datascience', 'conda', 'jupyter', 'r']}
- {role: docker, tags: ['docker']}
- {role: applications, tags: ['applications', 'apps']}
# - {role: tailscale, tags: ['tailscale', 'vpn']}
- {role: monitoring, tags: ['monitoring']}
pre_tasks:
- name: Update apt cache
ansible.builtin.apt:
update_cache: true
ignore_errors: true
register: apt_update_result
- name: Display apt update status
ansible.builtin.debug:
msg: "Apt cache update: {{ 'Success' if apt_update_result is succeeded else 'Failed - continuing anyway' }}"
when: ansible_debug_output | default(false) | bool
- {role: monitoring_desktop, tags: ['monitoring']}
tasks:
# Additional tasks can be added here if needed

View File

@ -12,14 +12,8 @@
- {role: shell, tags: ['shell']}
- {role: development, tags: ['development', 'dev']}
- {role: docker, tags: ['docker']}
- {role: applications, tags: ['applications', 'apps']}
# - {role: tailscale, tags: ['tailscale', 'vpn']}
- {role: monitoring, tags: ['monitoring']}
pre_tasks:
- name: Update apt cache
ansible.builtin.apt:
update_cache: true
- {role: monitoring_desktop, tags: ['monitoring']}
tasks:
- name: Display completion message

View File

@ -22,12 +22,6 @@
Group: {{ group_names | join(', ') }}
Skip reboot: {{ skip_reboot | default(false) | bool }}
- name: Update apt cache
ansible.builtin.apt:
update_cache: true
cache_valid_time: 3600
when: maintenance_update_cache | bool
roles:
- {role: maintenance, tags: ['maintenance']}

27
playbooks/servers.yml Normal file
View File

@ -0,0 +1,27 @@
---
# Playbook: servers.yml
# Purpose: Baseline configuration for servers (no desktop apps, no IDE install)
# Targets: services + qa + ansible + tailscale (override with -e target_group=...)
# Tags: maintenance, base, security, user, ssh, shell, docker, monitoring
# Usage:
# ansible-playbook -i inventories/production playbooks/servers.yml
# ansible-playbook -i inventories/production playbooks/servers.yml -e target_group=services
# ansible-playbook -i inventories/production playbooks/servers.yml --limit jellyfin
- name: Configure servers baseline
hosts: "{{ target_group | default('services:qa:ansible:tailscale') }}"
become: true
roles:
- {role: maintenance, tags: ['maintenance']}
- {role: base, tags: ['base', 'security']}
- {role: user, tags: ['user']}
- {role: ssh, tags: ['ssh', 'security']}
- {role: shell, tags: ['shell']}
- {role: docker, tags: ['docker']}
- {role: monitoring_server, tags: ['monitoring']}
tasks:
- name: Display completion message
ansible.builtin.debug:
msg: "Server baseline configuration completed successfully!"

View File

@ -1,6 +1,6 @@
---
# Playbook: shell.yml
# Purpose: Configure shell environment (zsh, oh-my-zsh, plugins)
# Purpose: Configure shell environment (minimal zsh + managed aliases)
# Targets: all hosts
# Tags: shell
# Usage: make shell-all
@ -8,25 +8,12 @@
- name: Configure shell environment
hosts: all
become: true
strategy: free
ignore_errors: true
ignore_unreachable: true
roles:
- {role: shell, tags: ['shell']}
pre_tasks:
- name: Update apt cache
ansible.builtin.apt:
update_cache: true
ignore_errors: true
register: apt_update_result
- name: Display apt update status
ansible.builtin.debug:
msg: "Apt cache update: {{ 'Success' if apt_update_result is succeeded else 'Failed - continuing anyway' }}"
when: ansible_debug_output | default(false) | bool
tasks:
- name: Display completion message
ansible.builtin.debug:

View File

@ -13,3 +13,7 @@
- name: Tailscale VPN deployment
import_playbook: tailscale.yml
tags: ['tailscale']
- name: App projects on Proxmox (LXC-first)
import_playbook: app/site.yml
tags: ['app']

View File

@ -9,12 +9,6 @@
# Override here if needed or pass via: --extra-vars "tailscale_auth_key=your_key"
tailscale_auth_key: "{{ vault_tailscale_auth_key | default('') }}"
pre_tasks:
- name: Update package cache (Debian/Ubuntu)
ansible.builtin.apt:
update_cache: true
when: ansible_os_family == "Debian"
roles:
- {role: tailscale, tags: ['tailscale', 'vpn']}

View File

@ -0,0 +1,42 @@
---
# Playbook: workstations.yml
# Purpose: Workstation baseline (dev boxes + desktops). Desktop apps are applied only to the `desktop` group.
# Targets: dev + desktop + local (override with -e target_group=...)
# Tags: maintenance, base, security, user, ssh, shell, development, dev, datascience, docker, monitoring, apps
#
# Usage:
# ansible-playbook -i inventories/production playbooks/workstations.yml
# ansible-playbook -i inventories/production playbooks/workstations.yml -e target_group=dev
# ansible-playbook -i inventories/production playbooks/workstations.yml --tags apps
- name: Configure workstation baseline
hosts: "{{ target_group | default('dev:desktop:local') }}"
become: true
roles:
- {role: maintenance, tags: ['maintenance']}
- {role: base, tags: ['base', 'security']}
- {role: user, tags: ['user']}
- {role: ssh, tags: ['ssh', 'security']}
- {role: shell, tags: ['shell'], shell_mode: full, shell_set_default_shell: true}
- {role: development, tags: ['development', 'dev']}
- {role: datascience, tags: ['datascience', 'conda', 'jupyter', 'r']}
- {role: docker, tags: ['docker']}
- {role: monitoring_desktop, tags: ['monitoring']}
tasks:
- name: Display completion message
ansible.builtin.debug:
msg: "Workstation baseline configuration completed successfully!"
- name: Install desktop applications (desktop group only)
hosts: desktop
become: true
roles:
- {role: applications, tags: ['applications', 'apps']}
tasks:
- name: Display completion message
ansible.builtin.debug:
msg: "Desktop applications installed successfully!"

View File

@ -0,0 +1,63 @@
## Architecture
### High-level map (modules and relationships)
- **Inventory**: `inventories/production/`
- `hosts`: groups like `dev`, `desktop`, `services`, `qa`, `ansible`, `tailscale`, `local`
- `group_vars/all/main.yml`: shared configuration (including `app_projects`)
- `group_vars/all/vault.yml`: encrypted secrets (Ansible Vault)
- `host_vars/*`: per-host overrides (some encrypted)
- **Playbooks**: `playbooks/`
- `playbooks/site.yml`: dispatcher (imports other playbooks)
- `playbooks/servers.yml`: baseline for servers (`services:qa:ansible:tailscale`)
- `playbooks/workstations.yml`: baseline for `dev:desktop:local` + desktop apps for `desktop` group only
- `playbooks/development.yml`: dev machines baseline (no desktop apps)
- `playbooks/local.yml`: localhost baseline (no desktop apps)
- `playbooks/app/*`: Proxmox app-project provisioning/configuration suite
- **Roles**: `roles/*`
- Baseline/security: `base`, `user`, `ssh`
- Dev tooling: `development`, `datascience`, `docker`
- Shell: `shell` (minimal aliases-only)
- Monitoring split:
- `monitoring_server` (fail2ban + sysstat)
- `monitoring_desktop` (desktop-oriented monitoring tooling)
- Proxmox guests: `proxmox_vm`
- App guest configuration: `base_os`, `app_setup`, `pote`
### Proxmox “app projects” flow (data model + execution)
- **Data model**: `app_projects` in `inventories/production/group_vars/all/main.yml`
- Defines projects and per-env (`dev/qa/prod`) guest parameters (ip, branch, vmid, etc.)
- **Provision**: `playbooks/app/provision_vms.yml`
- Loops `app_projects` → envs → calls `role: proxmox_vm` to create LXC guests
- Adds dynamic inventory groups:
- `app_all`
- `app_<project>_all`
- `app_<project>_<env>`
- **Configure**: `playbooks/app/configure_app.yml`
- Builds a dynamic inventory from `app_projects` (so it can run standalone)
- Applies:
- `role: base_os` (baseline OS for app guests)
- `role: app_setup` (deploy + systemd) or `role: pote` for the POTE project
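Concretely, an `app_projects` entry might look like the following. This is an illustrative sketch: the field names (`envs`, `name`, `vmid`, `ip`, `gateway`, `branch`, `guest_defaults.rootfs_size_gb`) are the ones the provisioning playbooks read, but the project key and all values here are hypothetical:

```yaml
app_projects:
  projectA:                          # hypothetical project key
    guest_defaults:
      rootfs_size_gb: 8              # falls back to the global lxc_rootfs_size_gb when omitted
    envs:
      dev:
        name: projectA-dev           # guest name, also used for SSH aliases
        vmid: 201
        ip: 192.168.1.201/24         # CIDR suffix is stripped for ansible_host
        gateway: 192.168.1.1
        branch: develop              # git branch deployed to this env
      prod:
        name: projectA-prod
        vmid: 203
        ip: 192.168.1.203/24
        gateway: 192.168.1.1
        branch: main
```

The preflight assertion in `provision_one_guest.yml` requires the `ip` values to be unique within a project.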
### Boundaries
- **Inventory/vars** define desired state and credentials.
- **Playbooks** define “what path to run” (role ordering, target groups, tags).
- **Roles** implement actual host configuration (idempotent tasks, handlers).
### External dependencies
- **Ansible collections**: `collections/requirements.yml`
- **Ansible Vault**: `inventories/production/group_vars/all/vault.yml`
- **Proxmox API**: used by `community.proxmox.*` modules in provisioning
### References
- Playbook execution graphs and tags: `docs/reference/playbooks-and-tags.md`
- Legacy pointer (do not update): `docs/reference/architecture.md` → `project-docs/architecture.md`

35
project-docs/decisions.md Normal file
View File

@ -0,0 +1,35 @@
## Decisions (ADR-style)
### 2025-12-31 — Do not manage IDE/editor installs in Ansible
- **Context**: IDEs/editors are interactive, fast-moving, and often user-preference-driven.
- **Decision**: Keep editor installation (Cursor, VS Code, etc.) out of Ansible roles/playbooks.
- **Consequences**:
- Faster, more stable provisioning runs
- Less drift caused by UI tooling changes
- Editor setup is handled separately (manual or via dedicated tooling)
### 2025-12-31 — Split monitoring into server vs workstation roles
- **Context**: Servers and workstations have different needs (e.g., fail2ban/sysstat are server-centric; wireshark-common is workstation-centric).
- **Decision**: Create `monitoring_server` and `monitoring_desktop` roles and wire them into `servers.yml` / workstation playbooks.
- **Consequences**:
- Smaller install footprint on servers
- Clearer intent and faster runs
### 2025-12-31 — Desktop applications are installed only on the `desktop` group
- **Context**: Desktop apps should not be installed on headless servers or dev VMs by default.
- **Decision**: Run `role: applications` only in a `desktop`-scoped play (workstations playbook).
- **Consequences**:
- Reduced unnecessary package installs
- Less attack surface and fewer updates on non-desktop hosts
### 2025-12-31 — Minimal shell role (aliases-only)
- **Context**: Oh-my-zsh/theme/plugin cloning is slow and overwriting `.zshrc` is risky.
- **Decision**: `role: shell` now manages a small alias file and ensures it is sourced; it does not overwrite `.zshrc`.
- **Consequences**:
- Much faster shell configuration
- Safer for servers and multi-user systems

33
project-docs/index.md Normal file
View File

@ -0,0 +1,33 @@
## Project docs index
Last updated: **2025-12-31**
### Documents
- **`project-docs/overview.md`** (updated 2025-12-31)
High-level goals, scope, and primary users for this Ansible infrastructure repo.
- **`project-docs/architecture.md`** (updated 2025-12-31)
Architecture map: inventories, playbooks, roles, and the Proxmox app-project flow.
- **`project-docs/standards.md`** (updated 2025-12-31)
Conventions for Ansible YAML, role structure, naming, vault usage, and linting.
- **`project-docs/workflow.md`** (updated 2025-12-31)
How to run common tasks via `Makefile`, how to lint/test, and how to apply safely.
- **`project-docs/decisions.md`** (updated 2025-12-31)
Short ADR-style notes for important architectural decisions.
### Related docs (existing)
- **Playbooks/tags map**: `docs/reference/playbooks-and-tags.md`
- **Applications inventory**: `docs/reference/applications.md`
- **Makefile reference**: `docs/reference/makefile.md`
- **Proxmox app project guides**:
- `docs/guides/app_stack_proxmox.md`
- `docs/guides/app_stack_execution_flow.md`
### Legacy pointers
- `docs/reference/architecture.md` → `project-docs/architecture.md`

27
project-docs/overview.md Normal file
View File

@ -0,0 +1,27 @@
## Overview
This repository manages infrastructure automation using **Ansible** for:
- Development machines (`dev`)
- Desktop machines (`desktop`)
- Service hosts (`services`, `qa`, `ansible`, `tailscale`)
- Proxmox-managed guests for “app projects” (LXC-first, with a KVM path)
Primary entrypoint is the **Makefile** (`Makefile`) and playbooks under `playbooks/`.
### Goals
- **Predictable, repeatable provisioning** of hosts and Proxmox guests
- **Safe defaults**: avoid destructive automation; prefer guardrails and idempotency
- **Clear separation** between server vs workstation responsibilities
- **Secrets handled via Ansible Vault** (never commit plaintext credentials)
### Non-goals
- Automated decommission/destroy playbooks for infrastructure or guests
- Managing interactive IDE/editor installs (kept out of Ansible by design)
### Target users
- You (and collaborators) operating a small homelab / Proxmox environment
- Contributors extending roles/playbooks in a consistent style

49
project-docs/standards.md Normal file
View File

@ -0,0 +1,49 @@
## Standards
### Ansible + YAML conventions
- **Indentation**: 2 spaces (no tabs)
- **Task naming**: every task should include a clear `name:`
- **Play-level privilege**: prefer `become: true` at play level when most tasks need sudo
- **Modules**:
- Prefer native modules over `shell`/`command`
- Use **fully qualified collection names** (FQCN), e.g. `ansible.builtin.apt`, `community.general.ufw`
- **Handlers**: use handlers for restarts/reloads
- **Idempotency**:
- If `shell`/`command` is unavoidable, set `changed_when:` / `creates:` / `removes:` appropriately
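For instance, an unavoidable `command` task can be kept idempotent with `creates:` and an explicit `changed_when:`. This is an illustrative sketch following the convention above; the script and marker-file paths are hypothetical, not tasks from this repo:

```yaml
- name: Initialize application database (runs once)
  ansible.builtin.command:
    cmd: /usr/local/bin/init_db.sh       # hypothetical one-shot script
    creates: /srv/app/.db_initialized    # task is skipped when the marker file exists
  register: init_db_result
  changed_when: init_db_result.rc == 0   # only report "changed" on a successful run
```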
### Role structure
Roles should follow:
```
roles/<role_name>/
├── defaults/main.yml
├── handlers/main.yml
├── tasks/main.yml
├── templates/
├── files/
└── README.md
```
### Variable naming
- **snake_case** everywhere
- Vault-backed variables are prefixed with **`vault_`**
### Secrets / Vault
- Never commit plaintext secrets.
- Use Ansible Vault for credentials:
- `inventories/production/group_vars/all/vault.yml` (encrypted)
- Local vault password file is expected at `~/.ansible-vault-pass`.
### Makefile-first workflow
- Prefer `make ...` targets over direct `ansible-playbook` commands for consistency.
### Linting
- `ansible-lint` is the primary linter.
- `.ansible-lint` excludes vault-containing inventory paths to keep linting deterministic without vault secrets.

86
project-docs/workflow.md Normal file
View File

@ -0,0 +1,86 @@
## Workflow
### Setup
- Install dependencies (Python requirements, Node deps for docs, Ansible collections):
```bash
make bootstrap
```
- Edit vault secrets:
```bash
make edit-group-vault
```
### Validate (safe, local)
- Syntax checks:
```bash
make test-syntax
```
- Lint:
```bash
make lint
```
### Common apply flows
- **Servers baseline** (services + qa + ansible + tailscale):
```bash
make servers
make servers GROUP=services
make servers HOST=jellyfin
```
- **Workstations baseline** (dev + desktop + local; desktop apps only on `desktop` group):
```bash
make workstations
make workstations GROUP=dev
make apps
```
### Proxmox app projects
End-to-end:
```bash
make app PROJECT=projectA
```
Provision only / configure only:
```bash
make app-provision PROJECT=projectA
make app-configure PROJECT=projectA
```
Inspect Proxmox guests:
```bash
make proxmox-info PROJECT=projectA
make proxmox-info ALL=true
make proxmox-info TYPE=lxc
```
### Safety checks
- Prefer `--check --diff` first:
```bash
make check
```
### Debugging
```bash
make debug
make verbose
```

7
provision_vms.yml Normal file
View File

@ -0,0 +1,7 @@
---
# Wrapper playbook
# Purpose:
# ansible-playbook -i inventories/production provision_vms.yml -e app_project=projectA
- name: Provision app project guests
import_playbook: playbooks/app/provision_vms.yml

24
roles/app_setup/README.md Normal file
View File

@ -0,0 +1,24 @@
# `app_setup`
Creates the standard app filesystem layout and runtime services:
- `/srv/app/backend` and `/srv/app/frontend`
- `/srv/app/.env.<dev|qa|prod>`
- `/usr/local/bin/deploy_app.sh` (git pull, install deps, build, migrate, restart services)
- systemd units:
- `app-backend.service`
- `app-frontend.service`
All behavior is driven by variables so you can reuse this role for multiple projects.
## Variables
See [`defaults/main.yml`](defaults/main.yml). Common inputs in the app stack:
- `app_project`, `app_env` (used for naming and `.env.<env>` selection)
- `app_repo_url`, `app_repo_dest`, `app_repo_branch`
- `app_env_vars` (map written into `/srv/app/.env.<env>`)
- `app_enable_backend`, `app_enable_frontend` (enable/disable backend/frontend setup)
- `app_backend_dir`, `app_frontend_dir`, ports and Node.js commands
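A minimal play wiring the role together might look like this. The group name matches the dynamic `app_<project>_<env>` groups created during provisioning; the repo URL and env vars are illustrative:

```yaml
- hosts: app_projectA_dev
  become: true
  roles:
    - role: app_setup
      vars:
        app_env: dev
        app_repo_url: "https://git.example.com/projectA.git"   # hypothetical repo
        app_repo_branch: develop
        app_env_vars:
          DB_HOST: 127.0.0.1
```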

View File

@ -0,0 +1,39 @@
---
# Role: app_setup
# Purpose: app filesystem layout, env files, deploy script, and systemd units.
app_root: "/srv/app"
app_backend_dir: "{{ app_root }}/backend"
app_frontend_dir: "{{ app_root }}/frontend"
# Which environment file to render for this host: dev|qa|prod
app_env: dev
# Components (useful for single-repo projects)
app_enable_backend: true
app_enable_frontend: true
# Repo settings (project-driven)
app_repo_url: ""
app_repo_dest: "{{ app_root }}"
app_repo_branch: "main"
# Owner for app files
app_owner: "{{ appuser_name | default('appuser') }}"
app_group: "{{ appuser_name | default('appuser') }}"
# Ports
app_backend_port: 3001
app_frontend_port: 3000
# Commands (Node defaults; override per project as needed)
app_backend_install_cmd: "npm ci"
app_backend_migrate_cmd: "npm run migrate"
app_backend_start_cmd: "npm start"
app_frontend_install_cmd: "npm ci"
app_frontend_build_cmd: "npm run build"
app_frontend_start_cmd: "npm start"
# Arbitrary environment variables for the env file
app_env_vars: {}

View File

@ -0,0 +1,6 @@
---
# Role: app_setup handlers
- name: Reload systemd
ansible.builtin.systemd:
daemon_reload: true

View File

@ -0,0 +1,82 @@
---
# Role: app_setup
# Purpose: create app layout, env file, deploy script, and systemd units.
- name: Ensure app root directory exists
ansible.builtin.file:
path: "{{ app_root }}"
state: directory
owner: "{{ app_owner }}"
group: "{{ app_group }}"
mode: "0755"
- name: Ensure backend directory exists
ansible.builtin.file:
path: "{{ app_backend_dir }}"
state: directory
owner: "{{ app_owner }}"
group: "{{ app_group }}"
mode: "0755"
when: app_enable_backend | bool
- name: Ensure frontend directory exists
ansible.builtin.file:
path: "{{ app_frontend_dir }}"
state: directory
owner: "{{ app_owner }}"
group: "{{ app_group }}"
mode: "0755"
when: app_enable_frontend | bool
- name: Deploy environment file for this env
ansible.builtin.template:
src: env.j2
dest: "{{ app_root }}/.env.{{ app_env }}"
owner: "{{ app_owner }}"
group: "{{ app_group }}"
mode: "0640"
- name: Deploy deploy script
ansible.builtin.template:
src: deploy_app.sh.j2
dest: /usr/local/bin/deploy_app.sh
owner: root
group: root
mode: "0755"
- name: Deploy systemd unit for backend
ansible.builtin.template:
src: app-backend.service.j2
dest: /etc/systemd/system/app-backend.service
owner: root
group: root
mode: "0644"
notify: Reload systemd
when: app_enable_backend | bool
- name: Deploy systemd unit for frontend
ansible.builtin.template:
src: app-frontend.service.j2
dest: /etc/systemd/system/app-frontend.service
owner: root
group: root
mode: "0644"
notify: Reload systemd
when: app_enable_frontend | bool
- name: Ensure systemd is reloaded before enabling services
ansible.builtin.meta: flush_handlers
- name: Enable and start backend service
ansible.builtin.systemd:
name: app-backend.service
enabled: true
state: started
when: app_enable_backend | bool
- name: Enable and start frontend service
ansible.builtin.systemd:
name: app-frontend.service
enabled: true
state: started
when: app_enable_frontend | bool

View File

@ -0,0 +1,19 @@
[Unit]
Description=App Backend ({{ app_env }})
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User={{ app_owner }}
Group={{ app_group }}
WorkingDirectory={{ app_backend_dir }}
EnvironmentFile={{ app_root }}/.env.{{ app_env }}
ExecStart=/usr/bin/env bash -lc '{{ app_backend_start_cmd }}'
Restart=on-failure
RestartSec=3
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,19 @@
[Unit]
Description=App Frontend ({{ app_env }})
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User={{ app_owner }}
Group={{ app_group }}
WorkingDirectory={{ app_frontend_dir }}
EnvironmentFile={{ app_root }}/.env.{{ app_env }}
ExecStart=/usr/bin/env bash -lc '{{ app_frontend_start_cmd }}'
Restart=on-failure
RestartSec=3
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,57 @@
#!/usr/bin/env bash
# Ansible-managed deploy script
set -euo pipefail
REPO_URL="{{ app_repo_url }}"
BRANCH="{{ app_repo_branch }}"
APP_ROOT="{{ app_repo_dest }}"
BACKEND_DIR="{{ app_backend_dir }}"
FRONTEND_DIR="{{ app_frontend_dir }}"
ENV_FILE="{{ app_root }}/.env.{{ app_env }}"
echo "[deploy] repo=${REPO_URL} branch=${BRANCH} root=${APP_ROOT}"
if [[ ! -d "${APP_ROOT}/.git" ]]; then
echo "[deploy] cloning repo"
install -d -m 0755 "${APP_ROOT}"
git clone --branch "${BRANCH}" --single-branch "${REPO_URL}" "${APP_ROOT}"
fi
echo "[deploy] syncing branch"
git -C "${APP_ROOT}" fetch origin --prune
if ! git -C "${APP_ROOT}" rev-parse --verify --quiet "refs/remotes/origin/${BRANCH}" >/dev/null; then
echo "[deploy] ERROR: branch '${BRANCH}' not found on origin"
exit 2
fi
git -C "${APP_ROOT}" checkout -B "${BRANCH}" "origin/${BRANCH}"
git -C "${APP_ROOT}" pull --ff-only origin "${BRANCH}"
{% if app_enable_backend | bool %}
echo "[deploy] backend install"
cd "${BACKEND_DIR}"
{{ app_backend_install_cmd }}
echo "[deploy] backend migrations"
{{ app_backend_migrate_cmd }}
{% endif %}
{% if app_enable_frontend | bool %}
echo "[deploy] frontend install"
cd "${FRONTEND_DIR}"
{{ app_frontend_install_cmd }}
echo "[deploy] frontend build"
{{ app_frontend_build_cmd }}
{% endif %}
echo "[deploy] restarting services"
{% if app_enable_backend | bool %}
systemctl restart app-backend.service
{% endif %}
{% if app_enable_frontend | bool %}
systemctl restart app-frontend.service
{% endif %}
echo "[deploy] done"

View File

@ -0,0 +1,13 @@
# Ansible-managed environment file for {{ app_env }}
# Loaded by systemd units and deploy script.
# Common
APP_ENV={{ app_env }}
BACKEND_PORT={{ app_backend_port }}
FRONTEND_PORT={{ app_frontend_port }}
{% for k, v in (app_env_vars | default({})).items() %}
{{ k }}={{ v }}
{% endfor %}
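As a worked example, with the role defaults (`app_env: dev`, `app_backend_port: 3001`, `app_frontend_port: 3000`) and an assumed `app_env_vars: {DB_HOST: 127.0.0.1}`, this template renders `/srv/app/.env.dev` as:

```ini
# Ansible-managed environment file for dev
# Loaded by systemd units and deploy script.
# Common
APP_ENV=dev
BACKEND_PORT=3001
FRONTEND_PORT=3000
DB_HOST=127.0.0.1
```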

View File

@ -1,7 +1,7 @@
# Role: applications
## Description
Installs desktop applications for development and productivity including browsers, office suites, and utilities.
Installs a small set of desktop GUI applications (desktop group only via `playbooks/workstations.yml`).
## Requirements
- Ansible 2.9+
@ -9,8 +9,7 @@ Installs desktop applications for development and productivity including browser
- Internet access for package downloads
## Installed Applications
- **Brave Browser**: Privacy-focused web browser
- **LibreOffice**: Complete office suite
- **CopyQ**: Clipboard manager (history, search, scripting)
- **Evince**: PDF document viewer
- **Redshift**: Blue light filter for eye comfort
@ -18,10 +17,7 @@ Installs desktop applications for development and productivity including browser
| Variable | Default | Description |
|----------|---------|-------------|
| `applications_install_brave` | `true` | Install Brave browser |
| `applications_install_libreoffice` | `true` | Install LibreOffice suite |
| `applications_install_evince` | `true` | Install PDF viewer |
| `applications_install_redshift` | `true` | Install blue light filter |
| `applications_desktop_packages` | `['copyq','evince','redshift']` | Desktop packages to install |
## Dependencies
- `base` role (for package management)
@ -31,16 +27,13 @@ Installs desktop applications for development and productivity including browser
```yaml
- hosts: desktop
roles:
- { role: applications, applications_install_brave: false }
- role: applications
```
## Tags
- `applications`: All application installations
- `apps`: Alias for applications
- `browser`: Browser installation only
- `office`: Office suite installation only
## Notes
- Adds external repositories for Brave browser
- Requires desktop environment for GUI applications
- Applications are installed system-wide

View File

@ -1 +1,7 @@
---
# Desktop GUI applications to install (desktop group only via playbooks/workstations.yml)
applications_desktop_packages:
- copyq
- evince
- redshift

View File

@ -3,109 +3,25 @@
ansible.builtin.package_facts:
manager: apt
- name: Check if Brave browser is installed
ansible.builtin.command: brave-browser --version
register: applications_brave_check
ignore_errors: true
changed_when: false
failed_when: false
no_log: true
- name: Set installation conditions
ansible.builtin.set_fact:
applications_desktop_apps_needed: "{{ ['redshift', 'libreoffice', 'evince'] | difference(ansible_facts.packages.keys()) | length > 0 }}"
applications_brave_needs_install: "{{ applications_brave_check.rc != 0 or 'brave-browser' not in ansible_facts.packages }}"
- name: Check if Brave GPG key exists and is correct
ansible.builtin.shell: |
if [ -f /usr/share/keyrings/brave-browser-archive-keyring.gpg ]; then
if file /usr/share/keyrings/brave-browser-archive-keyring.gpg | grep -q "PGP"; then
echo "correct_key"
else
echo "wrong_key"
fi
else
echo "not_exists"
fi
register: brave_key_check
failed_when: false
when: applications_brave_needs_install
- name: Check if Brave repository exists and is correct
ansible.builtin.shell: |
if [ -f /etc/apt/sources.list.d/brave-browser.list ]; then
if grep -q "deb \[signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg\]" /etc/apt/sources.list.d/brave-browser.list; then
echo "correct_config"
else
echo "wrong_config"
fi
else
echo "not_exists"
fi
register: brave_repo_check
failed_when: false
when: applications_brave_needs_install
- name: Clean up duplicate Brave repository files
ansible.builtin.file:
path: "{{ item }}"
state: absent
loop:
- /etc/apt/sources.list.d/brave-browser.list
- /etc/apt/sources.list.d/brave-browser-release.sources
become: true
failed_when: false
when:
- applications_brave_needs_install
- brave_repo_check.stdout == "wrong_config"
- name: Remove incorrect Brave GPG key
ansible.builtin.file:
path: /usr/share/keyrings/brave-browser-archive-keyring.gpg
state: absent
become: true
when:
- applications_brave_needs_install
- brave_key_check.stdout == "wrong_key"
applications_desktop_apps_needed: >-
{{
(applications_desktop_packages | default([]))
| difference(ansible_facts.packages.keys())
| length > 0
}}
- name: Install desktop applications
ansible.builtin.apt:
name:
- redshift
- libreoffice
- evince
name: "{{ applications_desktop_packages }}"
state: present
when: applications_desktop_apps_needed
- name: Brave browser installation
when: applications_brave_needs_install
block:
- name: Download Brave APT key only if needed
ansible.builtin.get_url:
url: https://brave-browser-apt-release.s3.brave.com/brave-browser-archive-keyring.gpg
dest: /usr/share/keyrings/brave-browser-archive-keyring.gpg
mode: '0644'
when: brave_key_check.stdout in ["not_exists", "wrong_key"]
- name: Add Brave repository only if needed
ansible.builtin.apt_repository:
repo: "deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg] https://brave-browser-apt-release.s3.brave.com/ stable main"
filename: brave-browser
state: present
when: brave_repo_check.stdout in ["not_exists", "wrong_config"]
- name: Install Brave browser
ansible.builtin.apt:
name: brave-browser
state: present
- name: Display application status
ansible.builtin.debug:
msg:
- "Desktop apps needed: {{ applications_desktop_apps_needed }}"
- "Brave needed: {{ applications_brave_needs_install }}"
- "Redshift: {{ 'Installed' if 'redshift' in ansible_facts.packages else 'Missing' }}"
- "LibreOffice: {{ 'Installed' if 'libreoffice' in ansible_facts.packages else 'Missing' }}"
- "Evince: {{ 'Installed' if 'evince' in ansible_facts.packages else 'Missing' }}"
- "Brave: {{ applications_brave_check.stdout if applications_brave_check.rc == 0 else 'Not installed' }}"
when: ansible_debug_output | default(false) | bool

View File

@ -1,4 +1,10 @@
---
- name: Update apt cache (shared baseline)
ansible.builtin.apt:
update_cache: true
cache_valid_time: "{{ apt_cache_valid_time | default(3600) }}"
when: ansible_os_family == "Debian"
- name: Ensure Ansible remote_tmp directory exists with correct permissions
ansible.builtin.file:
path: /root/.ansible/tmp

21
roles/base_os/README.md Normal file
View File

@ -0,0 +1,21 @@
# `base_os`
Baseline OS configuration for app guests:
- Installs required packages (git/curl/nodejs/npm/ufw/openssh-server/etc.)
- Creates deployment user (default `appuser`) with passwordless sudo
- Adds your authorized SSH key
- Configures UFW to allow SSH + backend/frontend ports
## Variables
See [`defaults/main.yml`](defaults/main.yml). Common inputs in the app stack:
- `appuser_name`, `appuser_groups`, `appuser_shell`
- `appuser_ssh_public_key` (usually `{{ vault_ssh_public_key }}`)
- `components.backend`, `components.frontend` (enable/disable firewall rules per component)
- `app_backend_port`, `app_frontend_port`
This role is used by `playbooks/app/configure_app.yml` after provisioning.
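A minimal wiring of these inputs might look like the following sketch (the host group name is illustrative; the role defaults cover everything not set here):

```yaml
# Illustrative play; assumes vault_ssh_public_key exists in your vault.
- hosts: app_guests  # illustrative group name
  become: true
  roles:
    - role: base_os
      vars:
        appuser_name: appuser
        appuser_ssh_public_key: "{{ vault_ssh_public_key }}"
        components:
          backend: true
          frontend: false  # skips the frontend UFW rule
```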

View File

@ -0,0 +1,31 @@
---
# Role: base_os
# Purpose: baseline OS configuration for app guests (packages, appuser, firewall).
base_os_packages:
- git
- curl
- ca-certificates
- openssh-server
- sudo
- ufw
- python3
- python3-apt
- nodejs
- npm
base_os_allow_ssh_port: 22
# App ports (override per project)
base_os_backend_port: "{{ app_backend_port | default(3001) }}"
base_os_frontend_port: "{{ app_frontend_port | default(3000) }}"
base_os_enable_backend: true
base_os_enable_frontend: true
base_os_user: "{{ appuser_name | default('appuser') }}"
base_os_user_shell: "{{ appuser_shell | default('/bin/bash') }}"
base_os_user_groups: "{{ appuser_groups | default(['sudo']) }}"
base_os_user_ssh_public_key: "{{ appuser_ssh_public_key | default('') }}"
# If true, create passwordless sudo for base_os_user.
base_os_passwordless_sudo: true

View File

@ -0,0 +1,6 @@
---
# Role: base_os handlers
- name: Reload ufw
ansible.builtin.command: ufw reload
changed_when: false

View File

@ -0,0 +1,63 @@
---
# Role: base_os
# Purpose: baseline OS config for app guests.
- name: Ensure apt cache is up to date
ansible.builtin.apt:
update_cache: true
cache_valid_time: 3600
- name: Install baseline packages
ansible.builtin.apt:
name: "{{ base_os_packages }}"
state: present
- name: Ensure app user exists
ansible.builtin.user:
name: "{{ base_os_user }}"
shell: "{{ base_os_user_shell }}"
groups: "{{ base_os_user_groups }}"
append: true
create_home: true
state: present
- name: Ensure app user has authorized SSH key
ansible.posix.authorized_key:
user: "{{ base_os_user }}"
state: present
key: "{{ base_os_user_ssh_public_key }}"
when: base_os_user_ssh_public_key | length > 0
- name: Configure passwordless sudo for app user
ansible.builtin.copy:
dest: "/etc/sudoers.d/{{ base_os_user }}"
content: "{{ base_os_user }} ALL=(ALL) NOPASSWD:ALL\n"
owner: root
group: root
mode: "0440"
when: base_os_passwordless_sudo | bool
- name: Ensure UFW allows SSH
community.general.ufw:
rule: allow
port: "{{ base_os_allow_ssh_port }}"
proto: tcp
- name: Ensure UFW allows backend port
community.general.ufw:
rule: allow
port: "{{ base_os_backend_port }}"
proto: tcp
when: base_os_enable_backend | bool
- name: Ensure UFW allows frontend port
community.general.ufw:
rule: allow
port: "{{ base_os_frontend_port }}"
proto: tcp
when: base_os_enable_frontend | bool
- name: Enable UFW (deny incoming by default)
community.general.ufw:
state: enabled
policy: deny

View File

@ -17,4 +17,3 @@ r_packages:
- r-base
- r-base-dev
- r-recommended

View File

@ -5,4 +5,3 @@
state: restarted
daemon_reload: true
become: true

View File

@ -1,4 +1,3 @@
---
dependencies:
- role: base

View File

@ -200,4 +200,3 @@
- name: Display R version
ansible.builtin.debug:
msg: "R version installed: {{ r_version.stdout_lines[0] if r_version.stdout_lines | length > 0 else 'Not checked in dry-run mode' }}"

View File

@ -23,42 +23,12 @@ Installs core development tools and utilities for software development. This rol
- **npm**: Node package manager (included with Node.js)
- Configured from official NodeSource repository
### Code Editors
- **Cursor IDE**: AI-powered code editor (AppImage)
- Installed to `/usr/local/bin/cursor`
- Latest stable version from cursor.com
## Variables
### Core Settings
| Variable | Default | Description |
|----------|---------|-------------|
| `install_cursor` | `true` | Install Cursor IDE |
| `install_cursor_extensions` | `false` | Install Cursor extensions |
### Extension Groups
Enable specific extension groups based on your development needs:
| Variable | Default | Extensions Included |
|----------|---------|-------------------|
| `install_python` | `false` | Python, Pylance, Black, isort, Flake8, Ruff |
| `install_jupyter` | `false` | Jupyter notebooks, keybindings, renderers |
| `install_web` | `false` | Prettier, ESLint, Tailwind, Vue, Svelte |
| `install_playwright` | `false` | Playwright testing framework |
| `install_devops` | `false` | Go, Rust, YAML, Docker, Ansible |
| `install_r` | `false` | R language support and package development |
| `install_docs` | `false` | Markdown tools and linter |
### Base Extensions (Always Installed)
When `install_cursor_extensions: true`, these are always installed:
- ErrorLens (better error highlighting)
- GitLens (Git supercharged)
- Git Graph (visualization)
- Code Spell Checker
- EditorConfig support
- Material Icon Theme
- GitHub Copilot (if licensed)
- Copilot Chat
| `development_packages` | See defaults | Base packages installed by the role |
## Dependencies
- `base` role (for core utilities)
@ -72,61 +42,17 @@ When `install_cursor_extensions: true`, these are always installed:
- role: development
```
### Python Data Science Machine
### Customize packages
```yaml
- hosts: datascience
- hosts: developers
roles:
- role: development
vars:
install_cursor_extensions: true
install_python: true
install_jupyter: true
install_docs: true
```
### Web Development Machine
```yaml
- hosts: webdevs
roles:
- role: development
vars:
install_cursor_extensions: true
install_web: true
install_playwright: true
install_docs: true
```
### Full Stack with DevOps
```yaml
- hosts: fullstack
roles:
- role: development
vars:
install_cursor_extensions: true
install_python: true
install_web: true
install_devops: true
install_docs: true
```
### Custom Extension List
You can also override the extension list completely in `host_vars`:
```yaml
# host_vars/myhost.yml
install_cursor_extensions: true
cursor_extensions:
- ms-python.python
- golang.go
- hashicorp.terraform
# ... your custom list
```
### With Cursor disabled
```yaml
- hosts: servers
roles:
- role: development
install_cursor: false
development_packages:
- git
- build-essential
- python3
- python3-pip
```
## Usage
@ -141,7 +67,6 @@ ansible-playbook playbooks/development.yml --limit dev01 --tags development
## Tags
- `development`, `dev`: All development tasks
- `cursor`, `ide`: Cursor IDE installation only
## Post-Installation
@ -151,7 +76,6 @@ git --version
node --version
npm --version
python3 --version
cursor --version
```
### Node.js Usage
@ -163,28 +87,17 @@ npm install -g <package>
node --version # Should show v22.x
```
### Cursor IDE Usage
```bash
# Launch Cursor (if using X11/Wayland)
cursor
# For root users, use the aliased version from .zshrc:
cursor # Automatically adds --no-sandbox flags
```
## Performance Notes
### Installation Time
- **Base packages**: 1-2 minutes
- **Node.js**: 1-2 minutes
- **Cursor IDE**: 2-5 minutes (~200MB download)
- **Total**: ~5-10 minutes
- **Total**: ~3-5 minutes
### Disk Space
- **Node.js + npm**: ~100MB
- **Cursor IDE**: ~200MB
- **Build tools**: ~50MB
- **Total**: ~350MB
- **Total**: ~150MB
## Integration
@ -217,17 +130,9 @@ apt-get remove nodejs
# Re-run playbook
```
### Cursor Won't Launch
For root users, use the alias that adds required flags:
```bash
# Check alias in .zshrc
alias cursor="cursor --no-sandbox --disable-gpu-sandbox..."
```
## Notes
- Node.js 22 is the current LTS version
- NodeSource repository is configured for automatic updates
- Cursor IDE is installed as AppImage for easy updates
- Build tools (gcc, make) are essential for npm native modules
- Python 3 is included for development scripts
- All installations are idempotent (safe to re-run)
@ -239,11 +144,10 @@ alias cursor="cursor --no-sandbox --disable-gpu-sandbox..."
| Git | ✅ | - |
| Node.js | ✅ | - |
| Build Tools | ✅ | - |
| Cursor IDE | ✅ | - |
| Anaconda | ❌ | ✅ |
| Jupyter | ❌ | ✅ |
| R Language | ❌ | ✅ |
| Install Time | ~3-5 min | ~30-60 min |
| Disk Space | ~350MB | ~3GB |
| Disk Space | ~150MB | ~3GB |
**Recommendation**: Use `development` role for general coding. Add `datascience` role only when needed for data analysis/ML work.

View File

@ -1,87 +1,9 @@
---
# Development role defaults
# Development role defaults (IDEs intentionally not managed here).
# Node.js is installed by default from NodeSource
# No additional configuration needed
# Cursor IDE - lightweight IDE installation
install_cursor: true
install_cursor_extensions: false
# Base Cursor extensions (always good to have)
cursor_extensions_base:
- usernamehw.errorlens # Better error highlighting
- eamodio.gitlens # Git supercharged
- mhutchie.git-graph # Git graph visualization
- streetsidesoftware.code-spell-checker # Spell checker
- EditorConfig.EditorConfig # EditorConfig support
- PKief.material-icon-theme # Better file icons
# Python/Data Science extensions
cursor_extensions_python:
- ms-python.python # Python language support
- ms-python.vscode-pylance # Python IntelliSense
- ms-python.black-formatter # Black formatter
- ms-python.isort # Import sorter
- ms-python.flake8 # Linter
- charliermarsh.ruff # Fast Python linter
# Jupyter/Data Science extensions
cursor_extensions_jupyter:
- ms-toolsai.jupyter # Jupyter notebooks
- ms-toolsai.jupyter-keymap # Jupyter keybindings
- ms-toolsai.jupyter-renderers # Jupyter renderers
# Web Development extensions
cursor_extensions_web:
- esbenp.prettier-vscode # Code formatter
- dbaeumer.vscode-eslint # ESLint
- bradlc.vscode-tailwindcss # Tailwind CSS
- vue.volar # Vue 3
- svelte.svelte-vscode # Svelte
# Testing extensions
cursor_extensions_testing:
- ms-playwright.playwright # Playwright testing
# Systems/DevOps extensions
cursor_extensions_devops:
- golang.go # Go language
- rust-lang.rust-analyzer # Rust language
- redhat.vscode-yaml # YAML support
- ms-azuretools.vscode-docker # Docker support
- redhat.ansible # Ansible support
# R language extensions
cursor_extensions_r:
- REditorSupport.r # R language support
- Ikuyadeu.r-pack # R package development
# Markdown/Documentation extensions
cursor_extensions_docs:
- yzhang.markdown-all-in-one # Markdown tools
- DavidAnson.vscode-markdownlint # Markdown linter
# Default combined list (customize per host in host_vars)
cursor_extensions: >-
{{
[
cursor_extensions_base,
(cursor_extensions_python if install_python | default(false) else []),
(cursor_extensions_jupyter if install_jupyter | default(false) else []),
(cursor_extensions_web if install_web | default(false) else []),
(cursor_extensions_testing if install_playwright | default(false) else []),
(cursor_extensions_devops if install_devops | default(false) else []),
(cursor_extensions_r if install_r | default(false) else []),
(cursor_extensions_docs if install_docs | default(false) else [])
] | flatten
}}
# Feature flags to enable extension groups
install_python: false
install_jupyter: false
install_web: false
install_playwright: false
install_devops: false
install_r: false
install_docs: false
# Base packages for a lightweight dev foundation.
development_packages:
- git
- build-essential
- python3
- python3-pip

View File

@ -1,13 +1,7 @@
---
- name: Install basic development packages
ansible.builtin.apt:
name:
# Development tools
- git
# Build tools
- build-essential
- python3
- python3-pip
name: "{{ development_packages }}"
state: present
become: true
@ -17,34 +11,41 @@
failed_when: false
changed_when: false
- name: Check if NodeSource repository exists and is correct
ansible.builtin.shell: |
if [ -f /etc/apt/sources.list.d/nodesource.list ]; then
if grep -q "deb \[signed-by=/etc/apt/keyrings/nodesource.gpg\] https://deb.nodesource.com/node_22.x nodistro main" /etc/apt/sources.list.d/nodesource.list; then
echo "correct_config"
else
echo "wrong_config"
fi
else
echo "not_exists"
fi
register: nodesource_repo_check
failed_when: false
- name: Check NodeSource repository file presence
ansible.builtin.stat:
path: /etc/apt/sources.list.d/nodesource.list
register: nodesource_list_stat
when: node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
- name: Check if NodeSource GPG key exists and is correct
ansible.builtin.shell: |
if [ -f /etc/apt/keyrings/nodesource.gpg ]; then
if file /etc/apt/keyrings/nodesource.gpg | grep -q "PGP"; then
echo "correct_key"
else
echo "wrong_key"
fi
else
echo "not_exists"
fi
register: nodesource_key_check
failed_when: false
- name: Read NodeSource repository file
ansible.builtin.slurp:
src: /etc/apt/sources.list.d/nodesource.list
register: nodesource_list_slurp
when:
- node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
- nodesource_list_stat.stat.exists | default(false)
- name: Set NodeSource repository state
ansible.builtin.set_fact:
nodesource_repo_state: >-
{{
'not_exists'
if not (nodesource_list_stat.stat.exists | default(false))
else (
'correct_config'
if (
(nodesource_list_slurp.content | b64decode)
is search('^deb \\[signed-by=/etc/apt/keyrings/nodesource\\.gpg\\] https://deb\\.nodesource\\.com/node_22\\.x nodistro main', multiline=True)
)
else 'wrong_config'
)
}}
when: node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
- name: Check NodeSource GPG key presence
ansible.builtin.stat:
path: /etc/apt/keyrings/nodesource.gpg
register: nodesource_key_stat
when: node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
- name: Remove incorrect NodeSource repository
@ -54,16 +55,7 @@
become: true
when:
- node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
- nodesource_repo_check.stdout == "wrong_config"
- name: Remove incorrect NodeSource key
ansible.builtin.file:
path: /etc/apt/keyrings/nodesource.gpg
state: absent
become: true
when:
- node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
- nodesource_key_check.stdout == "wrong_key"
- nodesource_repo_state == "wrong_config"
- name: Create keyrings directory
ansible.builtin.file:
@ -73,7 +65,7 @@
become: true
when:
- node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
- nodesource_key_check.stdout in ["not_exists", "wrong_key"]
- not (nodesource_key_stat.stat.exists | default(false))
- name: Add NodeSource GPG key only if needed
ansible.builtin.get_url:
@ -84,7 +76,7 @@
become: true
when:
- node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
- nodesource_key_check.stdout in ["not_exists", "wrong_key"]
- not (nodesource_key_stat.stat.exists | default(false))
- name: Add NodeSource repository only if needed
ansible.builtin.apt_repository:
@ -94,7 +86,7 @@
become: true
when:
- node_version_check.rc != 0 or not node_version_check.stdout.startswith('v22')
- nodesource_repo_check.stdout in ["not_exists", "wrong_config"]
- nodesource_repo_state in ["not_exists", "wrong_config"]
- name: Install Node.js 22 from NodeSource
ansible.builtin.apt:
@ -111,92 +103,3 @@
- name: Display Node.js version
ansible.builtin.debug:
msg: "Node.js version installed: {{ final_node_version.stdout if final_node_version.stdout is defined else 'Not checked in dry-run mode' }}"
# Cursor IDE installation (using AppImage)
# Downloads the latest version from cursor.com API
- name: Install Cursor IDE block
tags: ['cursor', 'ide']
block:
- name: Install libfuse2 dependency for AppImage
ansible.builtin.apt:
name: libfuse2
state: present
update_cache: false
become: true
when: ansible_os_family == "Debian"
- name: Check if Cursor is already installed at /usr/local/bin
ansible.builtin.stat:
path: /usr/local/bin/cursor
register: cursor_bin_check
- name: Get Cursor download URL from API and download AppImage
ansible.builtin.shell: |
DOWNLOAD_URL=$(curl -sL "https://www.cursor.com/api/download?platform=linux-x64&releaseTrack=stable" | grep -o '"downloadUrl":"[^"]*' | cut -d'"' -f4)
wget --timeout=60 --tries=3 -O /tmp/cursor.AppImage "$DOWNLOAD_URL"
args:
creates: /tmp/cursor.AppImage
when: not cursor_bin_check.stat.exists
register: cursor_download
retries: 2
delay: 5
until: cursor_download.rc == 0
- name: Make Cursor AppImage executable
ansible.builtin.file:
path: /tmp/cursor.AppImage
mode: '0755'
when:
- not cursor_bin_check.stat.exists
- cursor_download is defined
- cursor_download.rc is defined
- cursor_download.rc == 0
- name: Install Cursor to /usr/local/bin
ansible.builtin.copy:
src: /tmp/cursor.AppImage
dest: /usr/local/bin/cursor
mode: '0755'
remote_src: true
when:
- not cursor_bin_check.stat.exists
- cursor_download is defined
- cursor_download.rc is defined
- cursor_download.rc == 0
become: true
- name: Clean up Cursor download
ansible.builtin.file:
path: /tmp/cursor.AppImage
state: absent
when:
- cursor_download is defined
- cursor_download.rc is defined
- cursor_download.rc == 0
- name: Display Cursor installation status
ansible.builtin.debug:
msg: "{{ 'Cursor already installed' if cursor_bin_check.stat.exists else ('Cursor installed successfully' if (cursor_download is defined and cursor_download.rc is defined and cursor_download.rc == 0) else 'Cursor installation failed - download manually from cursor.com') }}"
# Cursor extensions installation
- name: Install Cursor extensions block
when:
- install_cursor | default(true) | bool
- install_cursor_extensions | default(false) | bool
- cursor_extensions is defined
- cursor_extensions | length > 0
tags: ['cursor', 'extensions']
block:
- name: Install Cursor extensions
ansible.builtin.shell: |
cursor --install-extension {{ item }} --force --user-data-dir={{ ansible_env.HOME }}/.cursor-root 2>/dev/null || true
loop: "{{ cursor_extensions }}"
register: cursor_ext_install
changed_when: "'successfully installed' in cursor_ext_install.stdout.lower()"
failed_when: false
become: true
become_user: "{{ ansible_user }}"
- name: Display Cursor extensions status
ansible.builtin.debug:
msg: "Installed {{ cursor_extensions | length }} Cursor extensions"

View File

@ -12,6 +12,7 @@
fi
register: docker_key_check
failed_when: false
changed_when: false
- name: Remove incorrect Docker GPG key
ansible.builtin.file:
@ -43,4 +44,3 @@
path: /tmp/docker.gpg
state: absent
when: docker_key_check.stdout in ["not_exists", "wrong_key"]

View File

@ -12,6 +12,7 @@
fi
register: docker_repo_check
failed_when: false
changed_when: false
- name: Remove incorrect Docker repository
ansible.builtin.file:
@ -26,4 +27,3 @@
state: present
update_cache: true
when: docker_repo_check.stdout in ["not_exists", "wrong_config"]

View File

@ -20,6 +20,7 @@
fi
register: docker_repo_check
failed_when: false
changed_when: false
- name: Remove incorrect Docker repository
ansible.builtin.file:
@ -34,4 +35,3 @@
state: present
update_cache: true
when: docker_repo_check.stdout in ["not_exists", "wrong_config"]

View File

@ -12,6 +12,7 @@
fi
register: docker_repo_check
failed_when: false
changed_when: false
- name: Remove incorrect Docker repository
ansible.builtin.file:
@ -26,4 +27,3 @@
state: present
update_cache: true
when: docker_repo_check.stdout in ["not_exists", "wrong_config"]

View File

@ -0,0 +1,5 @@
---
# Monitoring (desktop/workstation) role defaults
monitoring_desktop_install_btop: true
monitoring_desktop_install_wireshark_common: true
monitoring_desktop_create_scripts: true

View File

@ -0,0 +1,131 @@
---
- name: Install monitoring packages (desktop/workstation)
ansible.builtin.apt:
# System, network, and performance monitoring tools. `omit` is only valid as a
# module argument, not as a list element, so the optional wireshark-common
# package is appended to the list instead.
name: >-
{{
['htop', 'iotop', 'nethogs', 'iftop', 'ncdu', 'dstat', 'nmap', 'tcpdump', 'atop']
+ (['wireshark-common'] if monitoring_desktop_install_wireshark_common | bool else [])
}}
state: present
- name: Check if btop is available in apt
ansible.builtin.command: apt-cache policy btop
register: monitoring_desktop_btop_apt_check
changed_when: false
failed_when: false
when: monitoring_desktop_install_btop | bool
- name: Install btop from apt if available (Debian 12+)
ansible.builtin.apt:
name: btop
state: present
update_cache: false
when:
- monitoring_desktop_install_btop | bool
- monitoring_desktop_btop_apt_check.rc == 0
- "'Candidate:' in monitoring_desktop_btop_apt_check.stdout"
- "'(none)' not in monitoring_desktop_btop_apt_check.stdout"
failed_when: false
- name: Install btop from binary if apt not available
when:
- monitoring_desktop_install_btop | bool
- monitoring_desktop_btop_apt_check.rc != 0 or "(none)" in monitoring_desktop_btop_apt_check.stdout
block:
- name: Download btop binary
ansible.builtin.get_url:
url: https://github.com/aristocratos/btop/releases/latest/download/btop-x86_64-linux-musl.tbz
dest: /tmp/btop.tbz
mode: '0644'
failed_when: false
- name: Extract btop
ansible.builtin.unarchive:
src: /tmp/btop.tbz
dest: /tmp/
remote_src: true
failed_when: false
- name: Install btop binary
ansible.builtin.copy:
src: /tmp/btop/bin/btop
dest: /usr/local/bin/btop
mode: '0755'
remote_src: true
failed_when: false
- name: Clean up btop download
ansible.builtin.file:
path: "{{ item }}"
state: absent
loop:
- /tmp/btop.tbz
- /tmp/btop
failed_when: false
- name: Create monitoring scripts directory
ansible.builtin.file:
path: /usr/local/bin/monitoring
state: directory
mode: '0755'
when: monitoring_desktop_create_scripts | bool
- name: Deploy system monitoring script
ansible.builtin.copy:
content: |
#!/bin/bash
# System monitoring dashboard
echo "=== System Overview ==="
echo "Hostname: $(hostname)"
echo "Uptime: $(uptime -p)"
echo "Load: $(uptime | awk -F'load average:' '{print $2}')"
echo ""
echo "=== Memory ==="
free -h
echo ""
echo "=== Disk Usage ==="
df -h / /home 2>/dev/null | grep -v tmpfs
echo ""
echo "=== Top Processes ==="
ps aux --sort=-%cpu | head -6
echo ""
echo "=== Network Connections ==="
ss -tuln | head -10
echo ""
if command -v tailscale >/dev/null; then
echo "=== Tailscale Status ==="
tailscale status --peers=false 2>/dev/null || echo "Not connected"
fi
dest: /usr/local/bin/monitoring/sysinfo
mode: '0755'
when: monitoring_desktop_create_scripts | bool
- name: Deploy network monitoring script
ansible.builtin.copy:
content: |
#!/bin/bash
# Network monitoring script
echo "=== Network Interface Status ==="
ip addr show | grep -E "(inet |state )" | grep -v 127.0.0.1
echo ""
echo "=== Route Table ==="
ip route show
echo ""
echo "=== DNS Configuration ==="
grep nameserver /etc/resolv.conf
echo ""
echo "=== Open Ports ==="
ss -tuln | grep LISTEN | sort
dest: /usr/local/bin/monitoring/netinfo
mode: '0755'
when: monitoring_desktop_create_scripts | bool

View File

@ -0,0 +1,5 @@
---
# Monitoring (server) role defaults
monitoring_server_install_btop: true
monitoring_server_enable_sysstat: true
monitoring_server_create_scripts: true

View File

@ -0,0 +1,11 @@
---
- name: restart fail2ban
ansible.builtin.systemd:
name: fail2ban
state: restarted
- name: restart sysstat
ansible.builtin.systemd:
name: sysstat
state: restarted
enabled: true

View File

@ -0,0 +1,148 @@
---
- name: Install monitoring packages (server)
ansible.builtin.apt:
name:
# System monitoring
- htop
- iotop
- nethogs
- iftop
- ncdu
- dstat
# Log monitoring / security
- logwatch
- fail2ban
# Network monitoring
- nmap
- tcpdump
# Performance monitoring
- sysstat
- atop
state: present
- name: Check if btop is available in apt
ansible.builtin.command: apt-cache policy btop
register: monitoring_server_btop_apt_check
changed_when: false
failed_when: false
when: monitoring_server_install_btop | bool
- name: Install btop from apt if available (Debian 12+)
ansible.builtin.apt:
name: btop
state: present
update_cache: false
when:
- monitoring_server_install_btop | bool
- monitoring_server_btop_apt_check.rc == 0
- "'Candidate:' in monitoring_server_btop_apt_check.stdout"
- "'(none)' not in monitoring_server_btop_apt_check.stdout"
failed_when: false
- name: Install btop from binary if apt not available
when:
- monitoring_server_install_btop | bool
- monitoring_server_btop_apt_check.rc != 0 or "(none)" in monitoring_server_btop_apt_check.stdout
block:
- name: Download btop binary
ansible.builtin.get_url:
url: https://github.com/aristocratos/btop/releases/latest/download/btop-x86_64-linux-musl.tbz
dest: /tmp/btop.tbz
mode: '0644'
failed_when: false
- name: Extract btop
ansible.builtin.unarchive:
src: /tmp/btop.tbz
dest: /tmp/
remote_src: true
failed_when: false
- name: Install btop binary
ansible.builtin.copy:
src: /tmp/btop/bin/btop
dest: /usr/local/bin/btop
mode: '0755'
remote_src: true
failed_when: false
- name: Clean up btop download
ansible.builtin.file:
path: "{{ item }}"
state: absent
loop:
- /tmp/btop.tbz
- /tmp/btop
failed_when: false
- name: Configure fail2ban
ansible.builtin.template:
src: jail.local.j2
dest: /etc/fail2ban/jail.local
mode: '0644'
notify: restart fail2ban
- name: Enable sysstat data collection
ansible.builtin.lineinfile:
path: /etc/default/sysstat
regexp: '^ENABLED='
line: 'ENABLED="true"'
notify: restart sysstat
when: monitoring_server_enable_sysstat | bool
- name: Create monitoring scripts directory
ansible.builtin.file:
path: /usr/local/bin/monitoring
state: directory
mode: '0755'
when: monitoring_server_create_scripts | bool
- name: Deploy system monitoring script
ansible.builtin.copy:
content: |
#!/bin/bash
# System monitoring dashboard
echo "=== System Overview ==="
echo "Hostname: $(hostname)"
echo "Uptime: $(uptime -p)"
echo "Load: $(uptime | awk -F'load average:' '{print $2}')"
echo ""
echo "=== Memory ==="
free -h
echo ""
echo "=== Disk Usage ==="
df -h / /home 2>/dev/null | grep -v tmpfs
echo ""
echo "=== Top Processes ==="
ps aux --sort=-%cpu | head -6
echo ""
echo "=== Network Connections ==="
ss -tuln | head -10
echo ""
if command -v tailscale >/dev/null; then
echo "=== Tailscale Status ==="
tailscale status --peers=false 2>/dev/null || echo "Not connected"
fi
dest: /usr/local/bin/monitoring/sysinfo
mode: '0755'
when: monitoring_server_create_scripts | bool
- name: Deploy network monitoring script
ansible.builtin.copy:
content: |
#!/bin/bash
# Network monitoring script
echo "=== Network Interface Status ==="
ip addr show | grep -E "(inet |state )" | grep -v 127.0.0.1
echo ""
echo "=== Route Table ==="
ip route show
echo ""
echo "=== DNS Configuration ==="
grep nameserver /etc/resolv.conf
echo ""
echo "=== Open Ports ==="
ss -tuln | grep LISTEN | sort
dest: /usr/local/bin/monitoring/netinfo
mode: '0755'
when: monitoring_server_create_scripts | bool

View File

@ -0,0 +1,34 @@
[DEFAULT]
# Ban hosts for 1 hour
bantime = 3600
# Check for repeated failures for 10 minutes
findtime = 600
# Allow 3 failures before banning
maxretry = 3
# Email notifications
destemail = idobkin@gmail.com
sender = idobkin@gmail.com
action = %(action_mwl)s
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
[apache]
enabled = false
port = http,https
filter = apache-auth
logpath = /var/log/apache2/error.log
maxretry = 3
[nginx-http-auth]
enabled = false
port = http,https
filter = nginx-http-auth
logpath = /var/log/nginx/error.log
maxretry = 3

27
roles/pote/README.md Normal file
View File

@ -0,0 +1,27 @@
# `pote`
Deploys the **POTE** project as a Python/venv application (no HTTP services required) and schedules cron jobs.
## What it does
- Installs required system packages (git, python3.11/venv, build deps, postgresql server/client)
- Ensures a dedicated OS user exists (default: `poteapp`)
- Creates PostgreSQL database and user
- Clones/updates the repo from an SSH remote using a vault-backed private key
- Creates a Python virtualenv and installs from `pyproject.toml` (editable mode)
- Renders an environment file (default: `{{ pote_app_dir }}/.env`)
- Runs Alembic database migrations
- Installs cron jobs (daily/weekly/health-check)
## Key variables
See `defaults/main.yml`. Common inputs:
- `pote_git_repo`, `pote_git_branch`
- `pote_git_ssh_key` (set `vault_pote_git_ssh_key` in your vault)
- `pote_user`, `pote_app_dir`, `pote_venv_dir`
- `pote_db_*`, `pote_smtp_*`
- `pote_enable_cron`, `pote_*_time`, `pote_*_job`
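As a sketch, a host-level override of the common inputs might look like this (the repo path and host name are illustrative; secrets stay vault-backed):

```yaml
# host_vars/pote01.yml (illustrative)
pote_git_repo: "git@10.0.30.169:pote/pote.git"  # illustrative repo path
pote_git_branch: main
pote_user: poteapp
pote_app_dir: "/home/poteapp/pote"
pote_git_ssh_key: "{{ vault_pote_git_ssh_key }}"
pote_enable_cron: true
```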

View File

@ -0,0 +1,116 @@
---
# Role: pote
# Purpose: Deploy POTE (Python/venv + cron) from a Git repo via SSH.
# -----------------------------------------------------------------------------
# Git / source
# -----------------------------------------------------------------------------
pote_git_repo: ""
pote_git_branch: "main"
# SSH private key used to clone/pull (vault-backed). Keep this secret.
# Prefer setting `vault_pote_git_ssh_key` in your vault; `vault_git_ssh_key` is supported for compatibility.
pote_git_ssh_key: "{{ vault_pote_git_ssh_key | default(vault_git_ssh_key | default('')) }}"
# Host/IP for known_hosts (so first clone is non-interactive).
pote_git_host: "10.0.30.169"
pote_git_port: 22
# -----------------------------------------------------------------------------
# User / paths
# -----------------------------------------------------------------------------
pote_user: "poteapp"
pote_group: "{{ pote_user }}"
pote_app_dir: "/home/{{ pote_user }}/pote"
pote_venv_dir: "{{ pote_app_dir }}/venv"
pote_python_bin: "python3.11"
# Environment file
pote_env_file: "{{ pote_app_dir }}/.env"
pote_env_file_mode: "0600"
# Logs
pote_logs_dir: "/home/{{ pote_user }}/logs"
pote_log_level: "INFO"
pote_log_file: "{{ pote_logs_dir }}/pote.log"
# Monitoring / alerting (optional)
pote_market_tickers: ""
pote_alert_min_severity: ""
# Optional API keys
pote_quiverquant_api_key: ""
pote_fmp_api_key: ""
# -----------------------------------------------------------------------------
# System deps
# -----------------------------------------------------------------------------
pote_system_packages:
- git
- ca-certificates
- python3.11
- python3.11-venv
- python3.11-dev
- python3-pip
- build-essential
- postgresql
- postgresql-contrib
- postgresql-client
- libpq-dev
# -----------------------------------------------------------------------------
# Database
# -----------------------------------------------------------------------------
pote_db_host: "localhost"
pote_db_port: 5432
pote_db_name: "potedb"
pote_db_user: "poteuser"
# Prefer env-specific vault vars; fall back to a generic one if present.
pote_db_password: >-
{{
vault_pote_db_password
| default(
(vault_pote_db_password_dev | default(vault_db_password_dev | default(''), true)) if pote_env == 'dev'
else (vault_pote_db_password_qa | default(vault_db_password_qa | default(''), true)) if pote_env == 'qa'
else (vault_pote_db_password_prod | default(vault_db_password_prod | default(''), true)) if pote_env == 'prod'
else '',
true
)
}}
# Convenience computed URL (commonly used by Python apps)
pote_database_url: "postgresql://{{ pote_db_user }}:{{ pote_db_password }}@{{ pote_db_host }}:{{ pote_db_port }}/{{ pote_db_name }}"
# -----------------------------------------------------------------------------
# SMTP / email
# -----------------------------------------------------------------------------
pote_smtp_host: "mail.levkin.ca"
pote_smtp_port: 587
pote_smtp_user: ""
pote_smtp_password: "{{ vault_pote_smtp_password | default(vault_smtp_password | default('')) }}"
pote_from_email: ""
pote_report_recipients: ""
# -----------------------------------------------------------------------------
# Automation / cron
# -----------------------------------------------------------------------------
pote_enable_cron: true
# "minute hour" (e.g. "0 6")
pote_daily_report_time: "0 6"
# "minute hour dow" (e.g. "0 8 0" => Sunday 08:00)
pote_weekly_report_time: "0 8 0"
# "minute hour" for */6 style (e.g. "0 */6")
pote_health_check_time: "0 */6"
pote_daily_report_enabled: true
pote_weekly_report_enabled: true
pote_health_check_enabled: true
# Commands (adjust to your repo's actual scripts)
pote_daily_job: "{{ pote_app_dir }}/scripts/automated_daily_run.sh >> {{ pote_logs_dir }}/daily_run.log 2>&1"
pote_weekly_job: "{{ pote_app_dir }}/scripts/automated_weekly_run.sh >> {{ pote_logs_dir }}/weekly_run.log 2>&1"
pote_health_check_job: "{{ pote_venv_dir }}/bin/python {{ pote_app_dir }}/scripts/health_check.py >> {{ pote_logs_dir }}/health_check.log 2>&1"
# Environment name for templating/logging (dev|qa|prod)
pote_env: "{{ app_env | default('prod') }}"
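
The computed `pote_database_url` follows the standard libpq URL shape. A quick sanity check of that composition in Python, using made-up credentials in place of the role variables:

```python
from urllib.parse import urlsplit

# Illustrative values standing in for the role variables.
db_user, db_password = "poteuser", "s3cret"
db_host, db_port, db_name = "localhost", 5432, "potedb"

# Mirrors the Jinja template in defaults/main.yml.
url = f"postgresql://{db_user}:{db_password}@{db_host}:{db_port}/{db_name}"

parts = urlsplit(url)
print(parts.hostname, parts.port, parts.path)  # localhost 5432 /potedb
```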

roles/pote/tasks/main.yml (new file)

@ -0,0 +1,227 @@
---
# Role: pote
# Purpose: Deploy POTE (python/venv) and schedule cron jobs.
- name: Ensure POTE system dependencies are installed
ansible.builtin.apt:
name: "{{ pote_system_packages }}"
state: present
update_cache: true
cache_valid_time: 3600
- name: Ensure POTE group exists
ansible.builtin.group:
name: "{{ pote_group }}"
state: present
- name: Ensure POTE user exists
ansible.builtin.user:
name: "{{ pote_user }}"
group: "{{ pote_group }}"
shell: /bin/bash
create_home: true
state: present
- name: Ensure POTE app directory exists
ansible.builtin.file:
path: "{{ pote_app_dir }}"
state: directory
owner: "{{ pote_user }}"
group: "{{ pote_group }}"
mode: "0755"
- name: Ensure SSH directory exists for POTE user
ansible.builtin.file:
path: "/home/{{ pote_user }}/.ssh"
state: directory
owner: "{{ pote_user }}"
group: "{{ pote_group }}"
mode: "0700"
- name: Install Git SSH key for POTE (vault-backed)
ansible.builtin.copy:
dest: "/home/{{ pote_user }}/.ssh/id_ed25519"
content: "{{ pote_git_ssh_key }}"
owner: "{{ pote_user }}"
group: "{{ pote_group }}"
mode: "0600"
no_log: true
when: (pote_git_ssh_key | default('')) | length > 0
- name: Fetch Git host key (ssh-keyscan)
ansible.builtin.command: "ssh-keyscan -p {{ pote_git_port }} -H {{ pote_git_host }}"
register: pote_ssh_keyscan
changed_when: false
failed_when: false
when: (pote_git_host | default('')) | length > 0
- name: Ensure Git host is in known_hosts for POTE user
ansible.builtin.known_hosts:
path: "/home/{{ pote_user }}/.ssh/known_hosts"
name: "{{ pote_git_host }}"
key: "{{ pote_ssh_keyscan.stdout }}"
state: present
when:
- (pote_git_host | default('')) | length > 0
- (pote_ssh_keyscan.stdout | default('')) | length > 0
- name: Clone/update POTE repository
block:
- name: Clone/update POTE repository (git over SSH)
ansible.builtin.git:
repo: "{{ pote_git_repo }}"
dest: "{{ pote_app_dir }}"
version: "{{ pote_git_branch }}"
key_file: "/home/{{ pote_user }}/.ssh/id_ed25519"
accept_hostkey: true
update: true
become: true
become_user: "{{ pote_user }}"
register: pote_git_result
rescue:
- name: Abort with actionable Git SSH guidance
ansible.builtin.fail:
msg: >-
Failed to clone {{ pote_git_repo }} (branch={{ pote_git_branch }}) as user {{ pote_user }}.
Common causes:
- vault_pote_git_ssh_key is not a valid OpenSSH private key (or is passphrase-protected)
- the public key is not added to Gitea as a deploy key / user key with access to ilia/POTE
- repo or branch name is wrong
Error: {{ pote_git_result.msg | default(pote_git_result.stderr | default('unknown error')) }}
- name: Ensure PostgreSQL is running
ansible.builtin.systemd:
name: postgresql
state: started
enabled: true
- name: Check if PostgreSQL role exists
ansible.builtin.command: "psql -tAc \"SELECT 1 FROM pg_roles WHERE rolname='{{ pote_db_user }}'\""
become: true
become_user: postgres
register: pote_pg_role_check
changed_when: false
- name: Create PostgreSQL user for POTE
ansible.builtin.command: "psql -c \"CREATE USER {{ pote_db_user }} WITH PASSWORD '{{ pote_db_password }}'\""
become: true
become_user: postgres
when: (pote_pg_role_check.stdout | trim) != '1'
changed_when: true
- name: Ensure PostgreSQL user password is set (idempotent)
ansible.builtin.command: "psql -c \"ALTER USER {{ pote_db_user }} WITH PASSWORD '{{ pote_db_password }}'\""
become: true
become_user: postgres
when: (pote_db_password | default('')) | length > 0
changed_when: false
- name: Check if PostgreSQL database exists
ansible.builtin.command: "psql -tAc \"SELECT 1 FROM pg_database WHERE datname='{{ pote_db_name }}'\""
become: true
become_user: postgres
register: pote_pg_db_check
changed_when: false
- name: Create PostgreSQL database for POTE
ansible.builtin.command: "psql -c \"CREATE DATABASE {{ pote_db_name }} OWNER {{ pote_db_user }}\""
become: true
become_user: postgres
when: (pote_pg_db_check.stdout | trim) != '1'
changed_when: true
- name: Ensure Python virtual environment exists
ansible.builtin.command: "{{ pote_python_bin }} -m venv {{ pote_venv_dir }}"
args:
creates: "{{ pote_venv_dir }}/bin/activate"
become: true
become_user: "{{ pote_user }}"
- name: Upgrade pip in venv
ansible.builtin.pip:
name: pip
state: present
virtualenv: "{{ pote_venv_dir }}"
become: true
become_user: "{{ pote_user }}"
- name: Deploy POTE environment file
ansible.builtin.template:
src: env.j2
dest: "{{ pote_env_file }}"
owner: "{{ pote_user }}"
group: "{{ pote_group }}"
mode: "{{ pote_env_file_mode }}"
- name: Install POTE in editable mode (pyproject.toml)
ansible.builtin.pip:
name: "{{ pote_app_dir }}"
editable: true
virtualenv: "{{ pote_venv_dir }}"
become: true
become_user: "{{ pote_user }}"
- name: Run Alembic migrations
ansible.builtin.command: "{{ pote_venv_dir }}/bin/alembic upgrade head"
args:
chdir: "{{ pote_app_dir }}"
become: true
become_user: "{{ pote_user }}"
changed_when: false
- name: Ensure logs directory exists
ansible.builtin.file:
path: "{{ pote_logs_dir }}"
state: directory
owner: "{{ pote_user }}"
group: "{{ pote_group }}"
mode: "0755"
- name: Ensure automation shell scripts are executable
ansible.builtin.file:
path: "{{ pote_app_dir }}/scripts/{{ item }}"
mode: "0755"
loop:
- automated_daily_run.sh
- automated_weekly_run.sh
- setup_cron.sh
- setup_automation.sh
become: true
become_user: "{{ pote_user }}"
- name: Install cron job - daily report
ansible.builtin.cron:
name: "POTE daily report"
minute: "{{ pote_daily_report_time.split()[0] }}"
hour: "{{ pote_daily_report_time.split()[1] }}"
job: "{{ pote_daily_job }}"
user: "{{ pote_user }}"
state: present
when:
- pote_enable_cron | bool
- pote_daily_report_enabled | bool
- name: Install cron job - weekly report
ansible.builtin.cron:
name: "POTE weekly report"
minute: "{{ pote_weekly_report_time.split()[0] }}"
hour: "{{ pote_weekly_report_time.split()[1] }}"
weekday: "{{ pote_weekly_report_time.split()[2] }}"
job: "{{ pote_weekly_job }}"
user: "{{ pote_user }}"
state: present
when:
- pote_enable_cron | bool
- pote_weekly_report_enabled | bool
- name: Install cron job - health check
ansible.builtin.cron:
name: "POTE health check"
minute: "{{ pote_health_check_time.split()[0] }}"
hour: "{{ pote_health_check_time.split()[1] }}"
job: "{{ pote_health_check_job }}"
user: "{{ pote_user }}"
state: present
when:
- pote_enable_cron | bool
- pote_health_check_enabled | bool
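
The cron tasks above derive their fields by splitting the `pote_*_time` strings on whitespace, exactly as Python's `str.split()` does. A sketch of that parsing, using the same formats documented in the defaults:

```python
def parse_time(spec: str) -> list[str]:
    """Split a 'minute hour [dow]' spec into cron fields."""
    return spec.split()

minute, hour = parse_time("0 6")            # daily: 06:00
w_min, w_hour, w_dow = parse_time("0 8 0")  # weekly: Sunday 08:00
h_min, h_hour = parse_time("0 */6")         # every 6 hours
print(minute, hour, w_dow, h_hour)          # 0 6 0 */6
```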


@ -0,0 +1,27 @@
### Ansible-managed POTE environment
POTE_ENV="{{ pote_env }}"
# Database
DATABASE_URL="{{ pote_database_url }}"
# Email
SMTP_HOST="{{ pote_smtp_host }}"
SMTP_PORT="{{ pote_smtp_port }}"
SMTP_USER="{{ pote_smtp_user }}"
SMTP_PASSWORD="{{ pote_smtp_password }}"
FROM_EMAIL="{{ pote_from_email }}"
REPORT_RECIPIENTS="{{ pote_report_recipients }}"
# Monitoring / alerting (optional)
MARKET_MONITOR_TICKERS="{{ pote_market_tickers | default('') }}"
ALERT_MIN_SEVERITY="{{ pote_alert_min_severity | default('') }}"
# Logging
LOG_LEVEL="{{ pote_log_level }}"
LOG_FILE="{{ pote_log_file }}"
# Optional API keys
QUIVERQUANT_API_KEY="{{ pote_quiverquant_api_key | default('') }}"
FMP_API_KEY="{{ pote_fmp_api_key | default('') }}"
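
Applications typically load this file at startup. The format is simple `KEY="value"` lines, so a minimal stdlib loader sketch (not the app's actual loader) looks like:

```python
import io

def load_env(stream) -> dict[str, str]:
    """Parse KEY="value" lines, skipping comments and blank lines."""
    env = {}
    for line in stream:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value.strip().strip('"')
    return env

sample = io.StringIO('POTE_ENV="prod"\n# comment\nLOG_LEVEL="INFO"\n')
print(load_env(sample))  # {'POTE_ENV': 'prod', 'LOG_LEVEL': 'INFO'}
```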


@ -1,64 +1,82 @@
# Role: proxmox_vm
# Role: `proxmox_vm`
## Description
Creates and configures virtual machines on Proxmox VE hypervisor with cloud-init support and automated provisioning.
Provision Proxmox guests via API. This role supports **both**:
- **LXC containers** (`proxmox_guest_type: lxc`) via `community.proxmox.proxmox`
- **KVM VMs** (`proxmox_guest_type: kvm`) via `community.general.proxmox_kvm`
The entry point is `roles/proxmox_vm/tasks/main.yml`, which dispatches to `tasks/lxc.yml` or `tasks/kvm.yml`.
## Requirements
- Ansible 2.9+
- Proxmox VE server
- `community.general` collection
- Valid Proxmox credentials in vault
## Features
- Automated VM creation with cloud-init
- Configurable CPU, memory, and disk resources
- Network configuration with DHCP or static IP
- SSH key injection for passwordless access
- Ubuntu Server template support
- Ansible (project tested with modern Ansible; older 2.9-era setups may need adjustments)
- Proxmox VE API access
- Collections:
- `community.proxmox`
- `community.general` (for `proxmox_kvm`)
- Python lib on the control machine:
- `proxmoxer` (installed by `make bootstrap` / `requirements.txt`)
## Variables
## Authentication (vault-backed)
| Variable | Default | Description |
|----------|---------|-------------|
| `vm_memory` | `8192` | RAM allocation in MB |
| `vm_cores` | `2` | Number of CPU cores |
| `vm_disk_size` | `20G` | Disk size |
| `vm_iso` | `ubuntu-24.04-live-server-amd64.iso` | Installation ISO |
| `vm_ciuser` | `master` | Default cloud-init user |
| `vm_storage` | `local-lvm` | Proxmox storage backend |
Store secrets in `inventories/production/group_vars/all/vault.yml`:
## Vault Variables (Required)
- `vault_proxmox_host`
- `vault_proxmox_user`
- `vault_proxmox_password` (or token auth)
- `vault_proxmox_token_id` (optional)
- `vault_proxmox_token` (optional)
- `vault_ssh_public_key` (used for bootstrap access where applicable)
| Variable | Description |
|----------|-------------|
| `vault_proxmox_host` | Proxmox server IP/hostname |
| `vault_proxmox_user` | Proxmox username (e.g., root@pam) |
| `vault_proxmox_password` | Proxmox password |
| `vault_vm_cipassword` | VM default user password |
| `vault_ssh_public_key` | SSH public key for VM access |
## Key variables
## Dependencies
- Proxmox VE server with API access
- ISO images uploaded to Proxmox storage
Common:
## Example Playbook
- `proxmox_guest_type`: `lxc` or `kvm`
- `proxmox_host`, `proxmox_user`, `proxmox_node`
- `proxmox_api_port` (default `8006`)
- `proxmox_validate_certs` (default `false`)
LXC (`tasks/lxc.yml`):
- `lxc_vmid`, `lxc_hostname`
- `lxc_ostemplate` (e.g. `local:vztmpl/debian-12-standard_*.tar.zst`)
- `lxc_storage` (default `local-lvm`)
- `lxc_network_bridge` (default `vmbr0`)
- `lxc_ip` (CIDR), `lxc_gateway`
- `lxc_cores`, `lxc_memory_mb`, `lxc_swap_mb`, `lxc_rootfs_size_gb`
KVM (`tasks/kvm.yml`):
- `vm_id`, `vm_name`
- `vm_cores`, `vm_memory`, `vm_disk_size`
- `vm_storage`, `vm_network_bridge`
- cloud-init parameters used by the existing KVM provisioning flow
## Safety guardrails
LXC provisioning includes a VMID collision guardrail:
- If the target VMID already exists but the guest name does not match the expected name, provisioning fails.
- Override only if you really mean it: `-e allow_vmid_collision=true`
## Example usage
Provisioning is typically orchestrated by `playbooks/app/provision_vms.yml`, but you can call the role directly:
```yaml
- hosts: localhost
roles:
- role: proxmox_vm
vm_name: "test-vm"
vm_id: 999
vm_memory: 4096
- name: Provision one LXC
hosts: localhost
connection: local
gather_facts: false
tasks:
- name: Create/update container
ansible.builtin.include_role:
name: proxmox_vm
vars:
proxmox_guest_type: lxc
lxc_vmid: 9301
lxc_hostname: projectA-dev
lxc_ip: "10.0.10.101/24"
lxc_gateway: "10.0.10.1"
```
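
An equivalent KVM invocation, hedged the same way (the VMID, name, and sizing below are illustrative):

```yaml
- name: Provision one KVM VM
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create/update VM
      ansible.builtin.include_role:
        name: proxmox_vm
      vars:
        proxmox_guest_type: kvm
        vm_id: 9401
        vm_name: projectA-vm
        vm_cores: 2
        vm_memory: 4096
```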
## Tags
- `proxmox`: All Proxmox operations
- `vm`: VM creation tasks
- `infrastructure`: Infrastructure provisioning
## Notes
- Requires Proxmox API credentials in vault
- VM IDs must be unique on Proxmox cluster
- Cloud-init requires compatible ISO images
- VMs are created but not started by default


@ -25,3 +25,31 @@ vm_nameservers: "8.8.8.8 8.8.4.4"
vm_start_after_create: true
vm_enable_agent: true
vm_boot_order: "order=scsi0"
# -----------------------------------------------------------------------------
# Proxmox LXC defaults (used when proxmox_guest_type == 'lxc')
# -----------------------------------------------------------------------------
lxc_vmid: 300
lxc_hostname: "app-container"
lxc_ostemplate: "local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst"
lxc_storage: "local-lvm"
lxc_network_bridge: "vmbr0"
lxc_ip: "" # e.g. "10.0.10.101/24"
lxc_gateway: "" # e.g. "10.0.10.1"
lxc_nameserver: "1.1.1.1 8.8.8.8"
lxc_unprivileged: true
# Use list form because community.proxmox.proxmox expects list for `features`
lxc_features_list:
- "keyctl=1"
- "nesting=1"
lxc_cores: 2
lxc_memory_mb: 2048
lxc_swap_mb: 512
lxc_rootfs_size_gb: 16
# Add to /root/.ssh/authorized_keys (bootstrap). Override with appuser_ssh_public_key.
lxc_pubkey: ""
lxc_start_after_create: true


@ -0,0 +1,80 @@
---
# Proxmox QEMU VM provisioning via API (cloud-init).
# This task file preserves the repo's existing VM behavior.
# Break down the Proxmox VM creation to avoid "file name too long" error
- name: Set VM configuration facts
ansible.builtin.set_fact:
vm_scsi_config:
scsi0: "{{ vm_storage }}:{{ vm_disk_size }},format=raw"
vm_net_config:
net0: "virtio,bridge={{ vm_network_bridge }},firewall=1"
vm_ide_config:
ide2: "{{ vm_iso_storage }}:cloudinit,format=qcow2"
vm_ipconfig:
ipconfig0: "{{ vm_ip_config }}"
- name: Create VM on Proxmox
community.general.proxmox_kvm:
# Connection
api_host: "{{ proxmox_host }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password }}"
api_token_id: "{{ proxmox_token_id | default(omit) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit) }}"
# VM identification
vmid: "{{ vm_id }}"
name: "{{ vm_name }}"
node: "{{ proxmox_node }}"
# Hardware specs
memory: "{{ vm_memory }}"
cores: "{{ vm_cores }}"
sockets: "{{ vm_sockets }}"
cpu: "host"
# Storage and network
scsi: "{{ vm_scsi_config }}"
net: "{{ vm_net_config }}"
ide: "{{ vm_ide_config }}"
# Boot and OS
boot: "{{ vm_boot_order }}"
ostype: "{{ vm_os_type }}"
# Cloud-init
ciuser: "{{ vm_ciuser }}"
cipassword: "{{ vault_vm_cipassword | default(omit) }}"
sshkeys: "{{ vm_ssh_keys | join('\n') if vm_ssh_keys else omit }}"
ipconfig: "{{ vm_ipconfig }}"
nameserver: "{{ vm_nameservers }}"
# VM options
agent: "{{ vm_enable_agent | bool }}"
autostart: false
balloon: 0
state: present
register: vm_creation_result
- name: Start VM if requested
community.general.proxmox_kvm:
api_host: "{{ proxmox_host }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password }}"
api_token_id: "{{ proxmox_token_id | default(omit) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit) }}"
vmid: "{{ vm_id }}"
node: "{{ proxmox_node }}"
state: started
when: vm_start_after_create | bool
- name: Display VM creation results
ansible.builtin.debug:
msg: |
VM Created: {{ vm_name }} (ID: {{ vm_id }})
Memory: {{ vm_memory }}MB
Cores: {{ vm_cores }}
Storage: {{ vm_storage }}:{{ vm_disk_size }}
Network: {{ vm_network_bridge }}
Status: {{ vm_creation_result.msg | default('Created') }}


@ -0,0 +1,82 @@
---
# Proxmox LXC container provisioning via API.
#
# This uses `community.proxmox.proxmox` because it is widely available and
# supports idempotent updates via `update: true`.
- name: Build LXC netif configuration
ansible.builtin.set_fact:
lxc_netif_config:
# IMPORTANT: Proxmox requires net0 to be a single comma-delimited string.
# Avoid folded YAML blocks here (they can introduce newlines/spaces).
net0: >-
{{
(
['name=eth0', 'bridge=' ~ lxc_network_bridge, 'firewall=1']
+ (['ip=' ~ lxc_ip] if (lxc_ip is defined and (lxc_ip | string | length) > 0) else [])
+ (['gw=' ~ lxc_gateway] if (lxc_gateway is defined and (lxc_gateway | string | length) > 0) else [])
) | join(',')
}}
- name: Ensure LXC container is present (create or update)
community.proxmox.proxmox:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
# Only pass token params when they are set (avoid empty-string triggering required-together errors)
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
vmid: "{{ lxc_vmid | default(omit) }}"
hostname: "{{ lxc_hostname }}"
ostemplate: "{{ lxc_ostemplate }}"
unprivileged: "{{ lxc_unprivileged | bool }}"
features: "{{ lxc_features_list | default(omit) }}"
cores: "{{ lxc_cores }}"
memory: "{{ lxc_memory_mb }}"
swap: "{{ lxc_swap_mb }}"
# rootfs sizing (GiB). disk_volume is less version-sensitive than string `disk`.
disk_volume:
storage: "{{ lxc_storage }}"
size: "{{ lxc_rootfs_size_gb }}"
netif: "{{ lxc_netif_config }}"
nameserver: "{{ lxc_nameserver | default(omit) }}"
# Bootstrap root SSH access (used by Ansible until appuser exists).
pubkey: "{{ lxc_pubkey | default(omit) }}"
password: "{{ vault_lxc_root_password | default(omit) }}"
update: true
state: present
register: lxc_present
- name: Ensure LXC container is started
community.proxmox.proxmox:
api_host: "{{ proxmox_host }}"
api_port: "{{ proxmox_api_port | default(8006) }}"
validate_certs: "{{ proxmox_validate_certs | default(false) }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password | default(omit) }}"
api_token_id: "{{ proxmox_token_id | default(omit, true) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit, true) }}"
node: "{{ proxmox_node }}"
vmid: "{{ lxc_vmid | default(omit) }}"
state: started
when: lxc_start_after_create | bool
- name: Display LXC provisioning results
ansible.builtin.debug:
msg: |
LXC Present: {{ lxc_hostname }} (VMID: {{ lxc_vmid }})
Cores: {{ lxc_cores }}
Memory: {{ lxc_memory_mb }}MB (swap {{ lxc_swap_mb }}MB)
RootFS: {{ lxc_storage }}:{{ lxc_rootfs_size_gb }}
Net: {{ lxc_network_bridge }} / {{ lxc_ip | default('dhcp/unspecified') }}
Changed: {{ lxc_present.changed | default(false) }}
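
The `net0` string assembled above is a plain comma-joined list. Mirroring the Jinja expression in Python (with illustrative values) makes the expected output easy to verify:

```python
def build_net0(bridge: str, ip: str = "", gateway: str = "") -> str:
    """Mirror the Jinja expression that assembles Proxmox's net0 string."""
    parts = ["name=eth0", f"bridge={bridge}", "firewall=1"]
    if ip:
        parts.append(f"ip={ip}")
    if gateway:
        parts.append(f"gw={gateway}")
    return ",".join(parts)

print(build_net0("vmbr0", "10.0.10.101/24", "10.0.10.1"))
# name=eth0,bridge=vmbr0,firewall=1,ip=10.0.10.101/24,gw=10.0.10.1
```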


@ -1,77 +1,13 @@
---
# Break down the Proxmox VM creation to avoid "file name too long" error
- name: Set VM configuration facts
ansible.builtin.set_fact:
vm_scsi_config:
scsi0: "{{ vm_storage }}:{{ vm_disk_size }},format=raw"
vm_net_config:
net0: "virtio,bridge={{ vm_network_bridge }},firewall=1"
vm_ide_config:
ide2: "{{ vm_iso_storage }}:cloudinit,format=qcow2"
vm_ipconfig:
ipconfig0: "{{ vm_ip_config }}"
# Proxmox guest provisioning dispatcher.
#
# - `proxmox_guest_type: lxc` uses `tasks/lxc.yml`
# - default uses `tasks/kvm.yml` (existing behavior)
- name: Create VM on Proxmox
community.general.proxmox_kvm:
# Connection
api_host: "{{ proxmox_host }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password }}"
api_token_id: "{{ proxmox_token_id | default(omit) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit) }}"
- name: Provision LXC container
ansible.builtin.include_tasks: lxc.yml
when: (proxmox_guest_type | default('kvm')) == 'lxc'
# VM identification
vmid: "{{ vm_id }}"
name: "{{ vm_name }}"
node: "{{ proxmox_node }}"
# Hardware specs
memory: "{{ vm_memory }}"
cores: "{{ vm_cores }}"
sockets: "{{ vm_sockets }}"
cpu: "host"
# Storage and network
scsi: "{{ vm_scsi_config }}"
net: "{{ vm_net_config }}"
ide: "{{ vm_ide_config }}"
# Boot and OS
boot: "{{ vm_boot_order }}"
ostype: "{{ vm_os_type }}"
# Cloud-init
ciuser: "{{ vm_ciuser }}"
cipassword: "{{ vault_vm_cipassword | default(omit) }}"
sshkeys: "{{ vm_ssh_keys | join('\n') if vm_ssh_keys else omit }}"
ipconfig: "{{ vm_ipconfig }}"
nameserver: "{{ vm_nameservers }}"
# VM options
agent: "{{ vm_enable_agent | bool }}"
autostart: false
balloon: 0
state: present
register: vm_creation_result
- name: Start VM if requested
community.general.proxmox_kvm:
api_host: "{{ proxmox_host }}"
api_user: "{{ proxmox_user }}"
api_password: "{{ vault_proxmox_password }}"
api_token_id: "{{ proxmox_token_id | default(omit) }}"
api_token_secret: "{{ vault_proxmox_token | default(omit) }}"
vmid: "{{ vm_id }}"
node: "{{ proxmox_node }}"
state: started
when: vm_start_after_create | bool
- name: Display VM creation results
ansible.builtin.debug:
msg: |
VM Created: {{ vm_name }} (ID: {{ vm_id }})
Memory: {{ vm_memory }}MB
Cores: {{ vm_cores }}
Storage: {{ vm_storage }}:{{ vm_disk_size }}
Network: {{ vm_network_bridge }}
Status: {{ vm_creation_result.msg | default('Created') }}
- name: Provision QEMU VM (cloud-init)
ansible.builtin.include_tasks: kvm.yml
when: (proxmox_guest_type | default('kvm')) != 'lxc'


@ -1,7 +1,10 @@
### Role: shell
## Description
Configures modern shell environment with zsh, Oh My Zsh, Powerlevel10k theme, and useful plugins. Can be configured for multiple users on the same host.
Configures shell in one of two modes:
- **minimal**: aliases-only (safe for servers; does not overwrite `~/.zshrc`)
- **full**: installs Oh My Zsh + Powerlevel10k + plugins and deploys a managed `~/.zshrc` (intended for developer machines)
## Requirements
- Ansible 2.9+
@ -12,25 +15,23 @@ Configures modern shell environment with zsh, Oh My Zsh, Powerlevel10k theme, an
### Shell Environment
- **zsh**: Z shell
- **Oh My Zsh**: Zsh configuration framework
- **Powerlevel10k**: Modern, feature-rich theme
- **tmux**: Terminal multiplexer
- **fzf**: Fuzzy finder
### Zsh Plugins
- **zsh-syntax-highlighting**: Syntax highlighting for commands
- **zsh-autosuggestions**: Fish-like autosuggestions
- **oh-my-zsh / powerlevel10k**: only in `shell_mode=full`
### Configuration Files
- `.zshrc`: Custom zsh configuration with conda support
- `.p10k.zsh`: Powerlevel10k theme configuration
- `~/.zsh_aliases_ansible`: Managed aliases file (sourced from `~/.zshrc`)
- `~/.zshrc`: appended in `minimal` mode; fully managed in `full` mode
- `~/.p10k.zsh`: only in `shell_mode=full`
## Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `shell_users` | `[ansible_user]` | List of users to configure zsh for |
| `zsh_plugins` | See defaults/main.yml | List of zsh plugins to install |
| `shell_packages` | `['zsh','tmux','fzf']` | Packages installed by the role |
| `shell_mode` | `minimal` | `minimal` (aliases-only) or `full` (oh-my-zsh + p10k + managed zshrc) |
| `shell_set_default_shell` | `false` | If true, set login shell to `/usr/bin/zsh` |
## Dependencies
None
@ -44,6 +45,16 @@ None
- role: shell
```
### Full Zsh for developer machines
```yaml
- hosts: dev
roles:
- role: shell
vars:
shell_mode: full
shell_set_default_shell: true
```
### Configure Multiple Users
```yaml
- hosts: servers
@ -90,19 +101,14 @@ make dev HOST=devGPU --tags shell
## Post-Installation
### For Each Configured User
The shell configuration is immediately active. Users can:
The aliases are immediately available in new shells. Users can:
1. **Customize Powerlevel10k**:
```bash
p10k configure
```
2. **Reload Configuration**:
1. **Reload Configuration**:
```bash
source ~/.zshrc
```
3. **View Available Aliases**:
2. **View Available Aliases**:
```bash
alias # List all aliases
```
@ -110,27 +116,12 @@ The shell configuration is immediately active. Users can:
## Features
### Custom Aliases
The `.zshrc` includes aliases for:
- Git operations (`gs`, `ga`, `gc`, etc.)
- Docker (`dps`, `dex`, `dlogs`)
- System (`ll`, `la`, `update`, `sysinfo`)
- Networking (`ports`, `myip`)
- Development (`serve`, `mkcd`)
- Data Science (`jup`, `conda-list`, `r-studio`)
### Conda Integration
If Anaconda is installed (via datascience role), conda is automatically initialized in zsh.
### Root User Support
Includes special aliases for root users (IDEs with `--no-sandbox` flags).
The role installs a small, server-safe alias set in `~/.zsh_aliases_ansible`.
## Notes
- Zsh is set as the default shell for all configured users
- Oh My Zsh is installed in each user's home directory
- The role is idempotent - safe to run multiple times
- Existing `.zshrc` files are overwritten
- Users must log out and back in for shell changes to take effect
- Existing `.zshrc` files are **not** overwritten
- The role skips users that don't exist on the system
## Troubleshooting
@ -143,16 +134,13 @@ User username not found, skipping shell configuration
Solution: Ensure the user exists or remove from `shell_users` list.
### Oh My Zsh Installation Fails
Check user has a valid home directory and write permissions.
If `shell_mode=full`, ensure the host has outbound internet access to fetch the installer and clone git repos.
### Powerlevel10k Not Loading
Verify the theme is cloned:
```bash
ls ~/.oh-my-zsh/custom/themes/powerlevel10k
```
Only applies to `shell_mode=full`. Verify `~/.p10k.zsh` exists and the theme repo is present under `~/.oh-my-zsh/custom/themes/powerlevel10k`.
### Conda Not Initialized
The datascience role must be run to install Anaconda. The shell role only adds the initialization code to `.zshrc`.
`shell_mode=full` includes a minimal “initialize conda if present” block. Conda installation is still handled by the `datascience` role.
## Integration
@ -174,29 +162,21 @@ roles:
## Security Considerations
- The `.zshrc` is deployed from a template - review before deploying to production
- Root aliases include `--no-sandbox` flags for IDEs (required for root but less secure)
- Only an aliases file is managed; no remote scripts/themes are downloaded.
- Consider limiting which users get shell configuration on production servers
## Performance
- Installation time: ~2-3 minutes per user
- Disk space: ~10MB per user (Oh My Zsh + plugins + theme)
- First shell launch: ~1-2 seconds (Powerlevel10k initialization)
- Subsequent launches: <0.5 seconds
- Installation time: seconds per user (copy + lineinfile)
- Disk space: negligible
## Customization
### Adding Custom Aliases
Edit `roles/shell/files/.zshrc` and add your aliases.
Edit `roles/shell/files/ansible_aliases.zsh` and re-run the role.
### Adding More Plugins
Update `roles/shell/defaults/main.yml`:
```yaml
zsh_plugins:
- name: "my-plugin"
repo: "https://github.com/user/my-plugin.git"
```
Out of scope for this role (keep it fast/minimal).
### Custom Theme
Replace Powerlevel10k in tasks if desired, or users can run `p10k configure` to customize.
Out of scope for this role.


@ -15,7 +15,40 @@ shell_users:
# - ladmin
shell_additional_users: []
# Zsh plugins to install
# Shell configuration mode:
# - minimal: aliases-only (safe for servers; does not overwrite ~/.zshrc)
# - full: install oh-my-zsh + powerlevel10k + plugins and deploy managed ~/.zshrc
shell_mode: minimal
# Packages installed for all modes.
shell_packages_common:
- zsh
- tmux
- fzf
# Extra packages for full mode.
shell_packages_full_extra:
- git
# Effective package list.
shell_packages: "{{ shell_packages_common + (shell_packages_full_extra if shell_mode == 'full' else []) }}"
# If true, change users' login shell to zsh.
shell_set_default_shell: false
# Path (relative to the user's home) for the managed aliases file.
shell_aliases_filename: ".zsh_aliases_ansible"
# Line added to ~/.zshrc to source the managed aliases file.
shell_zshrc_source_line: '[ -f "$HOME/{{ shell_aliases_filename }}" ] && source "$HOME/{{ shell_aliases_filename }}"'
# Full mode settings
shell_install_oh_my_zsh: "{{ shell_mode == 'full' }}"
shell_install_powerlevel10k: "{{ shell_mode == 'full' }}"
shell_install_plugins: "{{ shell_mode == 'full' }}"
shell_deploy_managed_zshrc: "{{ shell_mode == 'full' }}"
# Zsh plugins cloned into oh-my-zsh custom plugins (full mode only).
zsh_plugins:
- name: "zsh-syntax-highlighting"
repo: "https://github.com/zsh-users/zsh-syntax-highlighting.git"

File diff suppressed because it is too large


@ -1,224 +0,0 @@
typeset -g POWERLEVEL9K_INSTANT_PROMPT=quiet
# Enable Powerlevel10k instant prompt. Should stay close to the top of ~/.zshrc.
# Initialization code that may require console input (password prompts, [y/n]
# confirmations, etc.) must go above this block; everything else may go below.
if [[ -r "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh" ]]; then
source "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh"
fi
# If you come from bash you might have to change your $PATH.
# export PATH=$HOME/bin:$HOME/.local/bin:/usr/local/bin:$PATH
# Path to your Oh My Zsh installation.
export ZSH="$HOME/.oh-my-zsh"
# Set name of the theme to load --- if set to "random", it will
# load a random theme each time Oh My Zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="powerlevel10k/powerlevel10k"
# Set list of themes to pick from when loading at random
# Setting this variable when ZSH_THEME=random will cause zsh to load
# a theme from this variable instead of looking in $ZSH/themes/
# If set to an empty array, this variable will have no effect.
# ZSH_THEME_RANDOM_CANDIDATES=( "robbyrussell" "agnoster" )
# Uncomment the following line to use case-sensitive completion.
# CASE_SENSITIVE="true"
# Uncomment the following line to use hyphen-insensitive completion.
# Case-sensitive completion must be off. _ and - will be interchangeable.
# HYPHEN_INSENSITIVE="true"
# Uncomment one of the following lines to change the auto-update behavior
# zstyle ':omz:update' mode disabled # disable automatic updates
# zstyle ':omz:update' mode auto # update automatically without asking
# zstyle ':omz:update' mode reminder # just remind me to update when it's time
# Uncomment the following line to change how often to auto-update (in days).
# zstyle ':omz:update' frequency 13
# Uncomment the following line if pasting URLs and other text is messed up.
# DISABLE_MAGIC_FUNCTIONS="true"
# Uncomment the following line to disable colors in ls.
# DISABLE_LS_COLORS="true"
# Uncomment the following line to disable auto-setting terminal title.
# DISABLE_AUTO_TITLE="true"
# Uncomment the following line to enable command auto-correction.
# ENABLE_CORRECTION="true"
# Uncomment the following line to display red dots whilst waiting for completion.
# You can also set it to another string to have that shown instead of the default red dots.
# e.g. COMPLETION_WAITING_DOTS="%F{yellow}waiting...%f"
# Caution: this setting can cause issues with multiline prompts in zsh < 5.7.1 (see #5765)
# COMPLETION_WAITING_DOTS="true"
# Uncomment the following line if you want to disable marking untracked files
# under VCS as dirty. This makes repository status check for large repositories
# much, much faster.
# DISABLE_UNTRACKED_FILES_DIRTY="true"
# Uncomment the following line if you want to change the command execution time
# stamp shown in the history command output.
# You can set one of the optional three formats:
# "mm/dd/yyyy"|"dd.mm.yyyy"|"yyyy-mm-dd"
# or set a custom format using the strftime function format specifications,
# see 'man strftime' for details.
# HIST_STAMPS="mm/dd/yyyy"
# Would you like to use another custom folder than $ZSH/custom?
# ZSH_CUSTOM=/path/to/new-custom-folder
# Which plugins would you like to load?
# Standard plugins can be found in $ZSH/plugins/
# Custom plugins may be added to $ZSH_CUSTOM/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(git sudo z colored-man-pages fzf zsh-syntax-highlighting zsh-autosuggestions web-search copypath)
source "$ZSH/oh-my-zsh.sh"
# User configuration
# export MANPATH="/usr/local/man:$MANPATH"
# You may need to manually set your language environment
# export LANG=en_US.UTF-8
# Preferred editor for local and remote sessions
# if [[ -n $SSH_CONNECTION ]]; then
# export EDITOR='vim'
# else
# export EDITOR='nvim'
# fi
# Compilation flags
# export ARCHFLAGS="-arch $(uname -m)"
# Set personal aliases, overriding those provided by Oh My Zsh libs,
# plugins, and themes. Aliases can be placed here, though Oh My Zsh
# users are encouraged to define aliases within a top-level file in
# the $ZSH_CUSTOM folder, with .zsh extension. Examples:
# - $ZSH_CUSTOM/aliases.zsh
# - $ZSH_CUSTOM/macos.zsh
# For a full list of active aliases, run `alias`.
#
# Example aliases
# alias zshconfig="mate ~/.zshrc"
# alias ohmyzsh="mate ~/.oh-my-zsh"
# To customize prompt, run `p10k configure` or edit ~/.p10k.zsh.
[[ ! -f ~/.p10k.zsh ]] || source ~/.p10k.zsh
[ -f ~/.fzf.zsh ] && source ~/.fzf.zsh
alias reload="source ~/.zshrc && echo 'ZSH config reloaded from ~/.zshrc'"
alias editrc="nano ~/.zshrc"
alias c="clear"
alias ls="ls --color=auto"
alias ..="cd .."
alias ...="cd ../.."
alias ....="cd ../../.."
alias cd..="cd .."
alias h="cd ~"
alias dcode="cd ~/Documents/code" # not "dc": that name is redefined below for docker-compose
# System information
alias df="df -h" # disk usage human readable
alias du="du -h" # directory usage human readable
alias free="free -h" # memory usage human readable
alias sysinfo="/usr/local/bin/monitoring/sysinfo 2>/dev/null || echo 'sysinfo script not found'"
# Process management
alias ps="ps aux"
alias cpu="lscpu"
command -v btop >/dev/null 2>&1 && alias top="btop" # only shadow top if btop is installed
alias mem="free -m"
alias ports="ss -tulpn" # open ports
# Network information
alias myip="curl -s http://ipecho.net/plain; echo"
alias localip="ip route get 1.2.3.4 | awk '{print \$7}'" # \$7 must be escaped or it expands (to nothing) at definition time
alias netinfo="/usr/local/bin/monitoring/netinfo 2>/dev/null || echo 'netinfo script not found'"
# Software inventory - show what's installed on this system
alias showapps="$HOME/.local/bin/showapps"
alias apps="showapps"
# Python
alias py="python3"
alias pip="pip3"
alias venv="python3 -m venv"
alias activate="source venv/bin/activate"
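# Hypothetical helper (a sketch, not part of the original config): create a
# virtualenv and activate it in one step. The default directory name "venv"
# matches the `activate` alias above; adjust to taste.
mkvenv() {
    local dir="${1:-venv}"
    python3 -m venv "$dir" && source "$dir/bin/activate"
}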
# Docker
alias d="docker"
alias dc="docker-compose"
alias dcu="docker-compose up -d"
alias dcd="docker-compose down"
alias dcb="docker-compose build"
alias dps="docker ps"
alias di="docker images"
# IDE - suppress root warnings
alias code="code --no-sandbox --user-data-dir=/root/.vscode-root"
alias cursor="cursor --no-sandbox --disable-gpu-sandbox --appimage-extract-and-run --user-data-dir=/root/.cursor-root"
# Date and time
alias now="date +'%Y-%m-%d %H:%M:%S'"
alias today="date +'%Y-%m-%d'"
# Package management (Debian/Ubuntu)
alias update="sudo apt update && sudo apt upgrade -y"
alias install="sudo apt install"
alias remove="sudo apt remove"
alias search="apt search"
# Permissions and ownership
alias chmox="chmod +x"
alias own='sudo chown -R $USER:$USER' # single quotes so $USER expands at use time
alias nfresh="rm -rf node_modules/ package-lock.json && npm install"
# SSH aliases for Ansible hosts
alias ssh-gitea="ssh gitea@10.0.30.169"
alias ssh-portainer="ssh ladmin@10.0.30.69"
alias ssh-homepage="ssh homepage@10.0.30.12"
alias ssh-vaultwarden="ssh root@100.100.19.11"
alias ssh-vaultwarden-fallback="ssh root@10.0.10.142"
alias ssh-dev01="ssh ladmin@10.0.30.105"
alias ssh-bottom="ssh beast@10.0.10.156"
alias ssh-debian="ssh user@10.0.10.206"
alias ssh-devGPU="ssh root@10.0.30.63"
alias ssh-ansible="ssh master@10.0.10.157"
alias ssh-tailscale="ssh ladmin@100.66.218.53"
alias ssh-caddy="ssh root@100.117.106.18"
alias ssh-caddy-fallback="ssh root@10.0.10.50"
alias ssh-jellyfin="ssh root@100.104.109.45"
alias ssh-jellyfin-fallback="ssh root@10.0.10.232"
alias ssh-listmonk="ssh root@100.73.190.115"
alias ssh-listmonk-fallback="ssh root@10.0.10.149"
alias ssh-desktop="ssh beast@100.117.34.106"
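# Hypothetical helper (a sketch, not part of the managed config): several
# hosts above have a Tailscale address plus a LAN "-fallback" alias. This
# tries the primary address with a short timeout, then falls back, e.g.:
#   ssh-try root 100.100.19.11 10.0.10.142
ssh-try() {
    local user="$1" primary="$2" fallback="$3"
    ssh -o ConnectTimeout=3 "${user}@${primary}" || ssh "${user}@${fallback}"
}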
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
if [ -f "$HOME/anaconda3/bin/conda" ]; then
    __conda_setup="$('$HOME/anaconda3/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)"
    if [ $? -eq 0 ]; then
        eval "$__conda_setup"
    else
        if [ -f "$HOME/anaconda3/etc/profile.d/conda.sh" ]; then
            . "$HOME/anaconda3/etc/profile.d/conda.sh"
        else
            export PATH="$HOME/anaconda3/bin:$PATH"
        fi
    fi
    unset __conda_setup
fi
# <<< conda initialize <<<
