diff --git a/README.md b/README.md
index 106d004..a82ebb0 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,215 @@
-This Repository contains all important playbooks/roles used for system operation automation such as updates and cleanups.
\ No newline at end of file
+# Operating Automation – Ansible Playbooks & Roles
+
+Automation for system operations: OS updates/upgrades, Docker cleanup, Mailcow maintenance, Checkmk onboarding, time services, hardening, and more.
+
+Last Update: 2025-11-19
+
+## Prerequisites
+
+- Ansible (>= 2.14 recommended)
+- Python on target systems, SSH access (key-based authentication preferred)
+- Collections (install once):
+
+```bash
+ansible-galaxy collection install \
+  community.docker:3.11.0 \
+  community.proxmox:1.5.0 \
+  checkmk.general
+```
+
+Notes:
+- `ansible.cfg` sets `roles_path = ./roles:/etc/ansible/roles` and disables host key checking.
+- Sensitive variables are stored in `vault.yml` (protect with Ansible Vault).
+
+## Inventories & Variables
+
+- Examples: `inventories/icp-fra-pve1.yml`, `inventories/icp-frav-packer01.yml`
+- Group variables: `inventories/group_vars/all.yml`
+- Important OS update variables (defaults in `roles/os-updates/defaults/main.yml`; an override sketch follows this list):
+  - `os_also_update_mirror` (bool, default: true)
+  - `os_update_mirrors` (list of mirror entries)
+  - `os_update_major_version` (bool)
+  - `os_update_version_codename` (e.g., `bookworm`, `trixie`)
+- Checkmk variables: `checkmk_server_url`, `checkmk_monitoring_site`, `checkmk_automation_user`, `checkmk_automation_pass`, `checkmk_agent_bakery_passphrase`, and others.
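+
+A minimal override sketch for these variables (file name and values are illustrative, not repository defaults). Because some playbooks also set a few of these keys in their own `vars:`, the most reliable way to override them is an extra-vars file passed with `-e @`:
+
+```yaml
+# os-update-overrides.yml (hypothetical file); run with:
+#   ansible-playbook -i inventories/icp-fra-pve1.yml playbooks/os-update.yml -K -e @os-update-overrides.yml
+os_also_update_mirror: true            # also rewrite the APT mirrors during this run
+os_update_version_codename: "trixie"   # codename used by the sources.list templates
+do_snapshots: false                    # skip the Proxmox snapshot, e.g. for hosts that are not Proxmox VMs
+```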
+
+Vault example (excerpt; store in `vault.yml` and encrypt with Vault):
+
+```yaml
+checkmk_automation_user: "automation"
+checkmk_automation_pass: ""
+checkmk_agent_bakery_passphrase: ""
+# Proxmox API (for major upgrade snapshots)
+proxmox_api_host: ""
+proxmox_api_user: ""
+proxmox_api_token_id: ""
+proxmox_api_token_secret: ""
+```
+
+## Quick Start
+
+1) Install collections (see above)
+
+2) Run playbooks (examples):
+
+```bash
+# OS update (minor) for all inventory hosts
+ansible-playbook -i inventories/icp-fra-pve1.yml playbooks/os-update.yml -K
+
+# OS major upgrade to Debian "trixie" (with Proxmox snapshot and reboot)
+ansible-playbook -i inventories/icp-fra-pve1.yml playbooks/os-major-upgrade.yml \
+  -e os_update_version_codename=trixie -K
+
+# Change mirrors
+ansible-playbook -i inventories/icp-fra-pve1.yml playbooks/os-change-mirror.yml -K
+
+# Configure time service via chronyd
+ansible-playbook -i inventories/icp-frav-packer01.yml playbooks/setup-chronyd.yml -K
+
+# Checkmk monitoring (create host, sign/bake agent, register)
+ansible-playbook -i inventories/icp-frav-packer01.yml playbooks/setup-checkmk-monitoring.yml --ask-vault-pass
+
+# Deploy ClamAV server (group "clamav-servers")
+ansible-playbook -i inventories/icp-fra-pve1.yml playbooks/deploy-clamav-server.yml -K
+
+# Docker: cleanup images only
+ansible-playbook -i inventories/icp-frav-packer01.yml playbooks/docker/cleanup-images.yml -K
+
+# Docker: full cleanup (containers/networks/volumes/cache), determine Mailcow first
+ansible-playbook -i inventories/icp-frav-packer01.yml playbooks/docker/cleanup-all.yml -K
+
+# Mailcow: update/restart/cleanup in sequence
+ansible-playbook -i inventories/icp-frav-packer01.yml playbooks/managed-mailcow/update-mailcow.yaml -K
+```
+
+## Playbook Reference
+
+### OS & System
+
+- `playbooks/os-update.yml`
+  - Purpose: Standard OS update on Debian. Checks for pending upgrades first; optionally updates mirrors (`os_also_update_mirror`) and creates a Proxmox snapshot (`do_snapshots`) before upgrading.
+  - Variables: `os_also_update_mirror` (bool), `do_snapshots` (bool), `snapshot_name`, `os_update_version_codename` (relevant for templates only)
+  - Role: `os-updates` (executes `update_mirrors.yaml` and `upgrade_packages.yaml`; reboots on kernel change via handler)
+
+- `playbooks/os-major-upgrade.yml`
+  - Purpose: Debian major upgrade to target codename (e.g., `trixie`) including Proxmox snapshot before and reboot after.
+  - Loads `vault.yml` (Proxmox API & Checkmk secrets, etc.).
+  - Roles/tasks: `proxmox-automation:get-vmid`, `proxmox-automation:create-snapshots`, `os-updates:update_major_version`.
+  - Requirement: Collection `community.proxmox` and valid API tokens.
+
+- `playbooks/os-change-mirror.yml`
+  - Purpose: Change Debian APT mirrors according to `os_update_mirrors`.
+  - Role: `os-updates:update_mirrors`.
+
+- `playbooks/setup-chronyd.yml`
+  - Purpose: Configure time service with Chrony (systemd-timesyncd is removed).
+  - Role: `system:setup-timeserver` (handler: restart chronyd).
+
+### Checkmk Onboarding
+
+- `playbooks/setup-checkmk-monitoring.yml`
+  - Purpose: Create host in Checkmk, sign/bake pending agent jobs, register agent, run discovery.
+  - Loads `vault.yml` (automation user/pass, etc.).
+  - Roles/tasks: `checkmk-monitoring:create-host`, `checkmk-monitoring:sign-bake-agents`, `checkmk.general.agent` (TLS/update/registration), `checkmk-monitoring:discover-host`.
+  - Tags: `checkmk-deploy` (for registration & wait time).
+  - Requirement: Collection `checkmk.general`.
+
+### Docker & Mailcow
+
+- `playbooks/docker/cleanup-images.yml`
+  - Purpose: Prune Docker images only; optionally capture Compose stack status (`docker_compose_path`).
+  - Role: `docker:cleanup-images` (collection `community.docker`).
+
+- `playbooks/docker/cleanup-all.yml`
+  - Purpose: Full Docker cleanup (containers/images/networks/volumes/builder cache) with running Mailcow stack.
+  - Roles/tasks: `managed-mailcow:find-mailcow-composedir`, `docker:get-containerstatus`, `docker:cleanup-all` (runs only if the container status check does not return "false").
+
+- `playbooks/managed-mailcow/update-mailcow.yaml`
+  - Purpose: Update Mailcow via `update.sh` when a newer tag is available; optionally create a Proxmox snapshot first and run a Docker cleanup afterwards.
+  - Variables: `github_mailcow_ver` (target tag), `do_snapshots` (bool), `load_vault` (bool), `debug` (an extra-vars sketch follows this subsection).
+  - Roles/tasks: `roles/managed-mailcow:*`, `proxmox-automation:get-vmid`/`create-snapshots`, `roles/system:check-disk-utilization`, `roles/docker:cleanup-all`.
+
+- `playbooks/managed-mailcow/start-stop-mailcow.yaml`
+  - Purpose: Stop and restart Mailcow stack (Compose v2).
+  - Roles/tasks: `managed-mailcow:find-mailcow-composedir`, `managed-mailcow:stop-mailcow`, `managed-mailcow:start-mailcow`.
+
+- `playbooks/managed-mailcow/check-mailcow-health.yml`
+  - Purpose: Check HTTP accessibility and ports (25/587/143/993); tolerates errors (`ignore_errors`).
+
+- `playbooks/managed-mailcow/enable-sni-globally.yml`
+  - Purpose: Set `ENABLE_SSL_SNI=y` in `mailcow.conf`; restart stack if changed.
+
+- `playbooks/managed-mailcow/change-garbagecleaner.yaml`
+  - Purpose: Set `MAILDIR_GC_TIME` to 7 days (10080 minutes) and restart stack if changed.
+
+- `playbooks/managed-mailcow/migrate-clamd.yaml`
+  - Purpose: Switch Rspamd to external/shared ClamAV, disable local ClamAV, restart Rspamd.
+
+- `playbooks/managed-mailcow/use-docker-image-proxy.yaml`
+  - Purpose: Configure Docker daemon proxy & CA, set systemd drop-in, restart Docker.
+
+- `playbooks/managed-mailcow/use-syslog-server.yaml`
+  - Purpose: Switch Docker logging to syslog and restart Mailcow if needed.
+
+- `playbooks/managed-mailcow/remove-watchdog-mail.yaml`
+  - Purpose: Remove `WATCHDOG_NOTIFY_EMAIL` from `mailcow.conf` and restart stack.
+
+- `playbooks/managed-mailcow/find-roundcube-versions.yaml`
+  - Purpose: Extract Roundcube version from `CHANGELOG.md` (under `data/web/rc|roundcube|roundcubemail`).
+
+- `playbooks/managed-mailcow/add-haveged.yaml`
+  - Purpose: Install `haveged` package.
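+
+A hedged extra-vars sketch for `update-mailcow.yaml` (the keys exist in the playbook's `vars:`; the file name and values are examples only):
+
+```yaml
+# mailcow-update-overrides.yml (hypothetical file); run with -e @mailcow-update-overrides.yml
+github_mailcow_ver: "2026-01"   # GitHub release tag to compare against the running version
+do_snapshots: true              # create a Proxmox snapshot before update.sh runs
+load_vault: true                # load vault.yml for the Proxmox API credentials
+debug: false                    # reduce verbose role output
+```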
+
+### Hardening
+
+- `playbooks/hardening/manage-ssh-keys.yaml`
+  - Purpose: Add good keys, remove bad keys; write comment with timestamp.
+  - Role: `manage-ssh-keys`
+  - Variables (see `roles/manage-ssh-keys/defaults/main.yml`; a short sketch appears at the end of this README):
+    - `ssh_user` (default: root)
+    - `good_keys` (list of allowed keys)
+    - `bad_keys` (list of keys to remove)
+
+### ClamAV Server
+
+- `playbooks/deploy-clamav-server.yml`
+  - Hosts: `clamav-servers`
+  - Role: `deploy-clamd` (compiles ClamAV, creates user/group, configures systemd services `clamd`/`freshclam`).
+  - Variable: `clamd_version` (default: 1.4.2). IPv6 binding via template (`TCPAddr {{ ansible_default_ipv6.address }}`).
+
+## Roles & Collections (Overview)
+
+- `roles/os-updates` – Mirror update, package upgrade, major upgrade including Exim blocking, reboot/apt cleanup handlers.
+- `roles/docker` – Compose v2 status, prune (images/all), Docker daemon restart. Collection: `community.docker`.
+- `roles/managed-mailcow` – Find Mailcow path, start/stop, update process, helper tasks.
+- `roles/system` – Chrony setup, Docker/MOTD/SSH hardening, disk utilization check, service handlers.
+- `roles/checkmk-monitoring` – Create host, discovery, agent bakery/activation. Collection: `checkmk.general`.
+- `roles/deploy-clamd` – ClamAV build/configuration/templates (systemd units, freshclam/clamd.conf).
+- `roles/proxmox-automation` – Snapshots/VM info (collection: `community.proxmox`).
+
+## Common Commands
+
+```bash
+# Create/edit vault file
+ansible-vault create vault.yml
+ansible-vault edit vault.yml
+
+# Syntax check
+ansible-playbook -i inventories/icp-fra-pve1.yml playbooks/os-update.yml --syntax-check
+
+# Target only one host group
+ansible-playbook -i inventories/icp-fra-pve1.yml playbooks/os-update.yml -l icp-fra-pve1
+
+# Dry run
+ansible-playbook -i inventories/icp-fra-pve1.yml playbooks/os-update.yml --check
+```
+
+## Notes & Best Practices
+
+- Never commit secrets in plaintext – only provide them via `vault.yml`.
+- Always create snapshots/backups before major upgrades (the playbook handles Proxmox snapshots automatically if configured).
+- `community.docker` requires a working Docker engine and Compose v2 on the target system.
+- Maintain inventory/hosts with IPv6 where possible (the repo is prepared for this).
+
+---
+
+Questions or feature requests? Please mention the playbook/use case – we're happy to extend the documentation and examples.
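+
+For reference, a short sketch of the variables consumed by `playbooks/hardening/manage-ssh-keys.yaml` (key material shortened to placeholders; the real defaults live in `roles/manage-ssh-keys/defaults/main.yml`):
+
+```yaml
+ssh_user: "root"        # account whose authorized_keys file is managed
+good_keys:              # keys that must be present
+  - "ssh-ed25519 AAAA...placeholder admin@workstation"
+bad_keys:               # keys that will be removed if found
+  - "ssh-rsa AAAA...placeholder retired-host"
+```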
\ No newline at end of file diff --git a/ansible.cfg b/ansible.cfg index b2eabc1..4c239b1 100644 --- a/ansible.cfg +++ b/ansible.cfg @@ -1,3 +1,4 @@ [defaults] host_key_checking = False -roles_path = ./roles:/etc/ansible/roles \ No newline at end of file +roles_path = ./roles:/etc/ansible/roles +ansible_python_interpreter=/usr/bin/python3 \ No newline at end of file diff --git a/inventories/extern.yml b/inventories/extern.yml new file mode 100644 index 0000000..070e11a --- /dev/null +++ b/inventories/extern.yml @@ -0,0 +1,11 @@ +all: + hosts: + mail.macbay.eu: + ansible_ssh_host: 116.203.102.78 + mx.linova.de: + ansible_ssh_host: mailinfrastructure.linova.de + ansible_ssh_port: 34713 + mail.picacho-nr.de: + ansible_ssh_host: 2a0e:b680:80::8a + mail.ketzin.de: + ansible_ssh_host: 176.9.238.149 diff --git a/inventories/group_vars/all.yml b/inventories/group_vars/all.yml index 3281e60..467c63c 100644 --- a/inventories/group_vars/all.yml +++ b/inventories/group_vars/all.yml @@ -1,4 +1,4 @@ -# Standardwerte, die überschrieben werden können +# Default values that can be overridden os_update_auto_upgrade: true os_also_update_mirror: true # Can either be true or false | Use this to enable mirror changes. Useful for first runs. os_update_mirrors: diff --git a/playbooks/cleanups/clean-pve-snapshots.yml b/playbooks/cleanups/clean-pve-snapshots.yml new file mode 100644 index 0000000..506a7f0 --- /dev/null +++ b/playbooks/cleanups/clean-pve-snapshots.yml @@ -0,0 +1,16 @@ +- hosts: all + user: tincadmin + gather_facts: false + become: true + vars_files: + # Load vault file for sensitive data like Proxmox API tokens + - ../../vault.yml + tasks: + - name: Include Proxmox Info task + ansible.builtin.include_role: + name: proxmox-automation + tasks_from: get-vmid + - name: Clean Proxmox VE Snapshots + ansible.builtin.include_role: + name: proxmox-automation + tasks_from: delete-snapshots \ No newline at end of file diff --git a/playbooks/deploy-clamav-server.yml b/playbooks/deploy-clamav-server.yml index 1991732..091a5b6 100644 --- a/playbooks/deploy-clamav-server.yml +++ b/playbooks/deploy-clamav-server.yml @@ -1,4 +1,6 @@ --- - hosts: clamav-servers + user: tincadmin + become: true roles: - deploy-clamd \ No newline at end of file diff --git a/playbooks/docker/cleanup-all.yml b/playbooks/docker/cleanup-all.yml index 297d86f..321f345 100644 --- a/playbooks/docker/cleanup-all.yml +++ b/playbooks/docker/cleanup-all.yml @@ -1,5 +1,7 @@ - name: Run Docker Cleanup (full) hosts: all + user: tincadmin + become: true tasks: - include_role: name: managed-mailcow diff --git a/playbooks/docker/cleanup-images.yml b/playbooks/docker/cleanup-images.yml index 212b80e..b7b7109 100644 --- a/playbooks/docker/cleanup-images.yml +++ b/playbooks/docker/cleanup-images.yml @@ -1,5 +1,7 @@ - name: Clean Docker Images on Host hosts: all + user: tincadmin + become: true tasks: - include_role: name: docker diff --git a/playbooks/docker/install-docker.yml b/playbooks/docker/install-docker.yml new file mode 100644 index 0000000..cec9318 --- /dev/null +++ b/playbooks/docker/install-docker.yml @@ -0,0 +1,11 @@ +- name: Install Docker on Host + hosts: all + user: tincadmin + become: true + tasks: + - include_role: + name: system + tasks_from: install-docker.yaml + vars: + docker_install_source: "official" + diff --git a/playbooks/hardening/manage-ssh-keys.yaml b/playbooks/hardening/manage-ssh-keys.yaml index e6c7be9..d139bf4 100644 --- a/playbooks/hardening/manage-ssh-keys.yaml +++ b/playbooks/hardening/manage-ssh-keys.yaml @@ -1,4 +1,6 
@@ - hosts: all + user: tincadmin + become: true # vars: # good_keys: "{{ lookup('env', 'good_keys') | from_json }}" # bad_keys: "{{ lookup('env', 'bad_keys') | from_json }}" diff --git a/playbooks/managed-mailcow/add-haveged.yaml b/playbooks/managed-mailcow/add-haveged.yaml index f9b4de5..41683c8 100644 --- a/playbooks/managed-mailcow/add-haveged.yaml +++ b/playbooks/managed-mailcow/add-haveged.yaml @@ -1,6 +1,8 @@ --- - name: Deploy Haveged to VMs hosts: all + user: tincadmin + become: true tasks: - name: Install Haveged apt: diff --git a/playbooks/managed-mailcow/change-garbagecleaner.yaml b/playbooks/managed-mailcow/change-garbagecleaner.yaml index 3d1a796..1bebe3e 100644 --- a/playbooks/managed-mailcow/change-garbagecleaner.yaml +++ b/playbooks/managed-mailcow/change-garbagecleaner.yaml @@ -1,10 +1,12 @@ --- -- name: Garbage Cleaner ändern +- name: Change garbage cleaner configuration hosts: all + user: tincadmin + become: true tasks: - - name: "Prüfe ob mailcow.conf exists" + - name: "Check if mailcow.conf exists" ansible.builtin.stat: path: /opt/mailcow-dockerized/mailcow.conf register: mailcow_conf diff --git a/playbooks/managed-mailcow/count-mailboxes.yml b/playbooks/managed-mailcow/count-mailboxes.yml index a69888c..094c595 100644 --- a/playbooks/managed-mailcow/count-mailboxes.yml +++ b/playbooks/managed-mailcow/count-mailboxes.yml @@ -1,6 +1,8 @@ --- - name: Mailcow Mailbox Counter hosts: all + user: tincadmin + become: true gather_facts: no tasks: - import_role: @@ -26,18 +28,18 @@ ansible.builtin.set_fact: mailbox_count_int: "{{ mailbox_count.stdout | int }}" -- name: Summiere alle Mailboxen über alle Hosts +- name: Summarize all mailboxes across all hosts hosts: all gather_facts: false run_once: true tasks: - - name: Summiere aktive Mailboxen + - name: Summarize active mailboxes ansible.builtin.set_fact: total_mailboxes: "{{ (total_mailboxes | default(0) | int) + (item.value.mailbox_count_int | default(0) | int) }}" loop: "{{ hostvars | dict2items }}" when: "'mailbox_count_int' in item.value" - - name: Zeige Gesamtsumme + - name: Show total sum ansible.builtin.debug: msg: "Gesamtanzahl aktiver Mailboxen: {{ total_mailboxes }}" \ No newline at end of file diff --git a/playbooks/managed-mailcow/enable-sni-globally.yml b/playbooks/managed-mailcow/enable-sni-globally.yml index a3648bf..e7b2f7f 100644 --- a/playbooks/managed-mailcow/enable-sni-globally.yml +++ b/playbooks/managed-mailcow/enable-sni-globally.yml @@ -2,6 +2,8 @@ - name: Enable SNI globally hosts: all + user: tincadmin + become: true vars: debug: false tasks: @@ -11,12 +13,12 @@ name: managed-mailcow tasks_from: find-mailcow-composedir - - name: "Prüfe ob mailcow.conf exists" + - name: "Check if mailcow.conf exists" ansible.builtin.stat: path: "{{ mailcow_dir_result.files[0].path }}/mailcow.conf" register: mailcow_conf - - name: "Setze SNI global ein" + - name: "Set SNI globally" ansible.builtin.replace: path: "{{ mailcow_dir_result.files[0].path }}/mailcow.conf" regexp: "^ENABLE_SSL_SNI=n" diff --git a/playbooks/managed-mailcow/find-roundcube-versions.yaml b/playbooks/managed-mailcow/find-roundcube-versions.yaml index 4351173..07dffa5 100644 --- a/playbooks/managed-mailcow/find-roundcube-versions.yaml +++ b/playbooks/managed-mailcow/find-roundcube-versions.yaml @@ -1,6 +1,7 @@ --- -- name: Prüfe mailcow-Installation und extrahiere Roundcube-Version aus CHANGELOG.md +- name: Check mailcow installation and extract Roundcube version from CHANGELOG.md hosts: all + user: tincadmin become: true vars: 
mailcow_search_paths: @@ -28,20 +29,20 @@ mailcow_root: "{{ mailcow_dir_result.files[0].path }}" when: mailcow_dir_result.matched > 0 - - name: Prüfe auf Roundcube-Ordner unter data/web + - name: Check for Roundcube folder under data/web ansible.builtin.stat: path: "{{ mailcow_root }}/data/web/{{ item }}" loop: "{{ rc_dirs }}" register: rc_stat when: mailcow_root is defined - - name: Bestimme den tatsächlichen Roundcube-Pfad + - name: Determine the actual Roundcube path ansible.builtin.set_fact: rc_path: "{{ mailcow_root }}/data/web/{{ item.item }}" loop: "{{ rc_stat.results }}" when: item.stat.exists and item.stat.isdir - - name: Prüfe ob CHANGELOG.md existiert + - name: Check if CHANGELOG.md exists ansible.builtin.stat: path: "{{ rc_path }}/CHANGELOG.md" register: changelog_stat @@ -61,9 +62,9 @@ msg: "Roundcube-Version (laut CHANGELOG.md): {{ rc_version.stdout }}" when: rc_version.stdout != "" - - name: Warnung wenn keine CHANGELOG.md gefunden wurde + - name: Warning if no CHANGELOG.md found ansible.builtin.debug: - msg: "Keine CHANGELOG.md unter {{ rc_path }} gefunden." + msg: "No CHANGELOG.md found under {{ rc_path }}." when: - rc_path is defined - not changelog_stat.stat.exists \ No newline at end of file diff --git a/playbooks/managed-mailcow/install-register-cmk-agent.yaml b/playbooks/managed-mailcow/install-register-cmk-agent.yaml index 756ee66..5e5665c 100644 --- a/playbooks/managed-mailcow/install-register-cmk-agent.yaml +++ b/playbooks/managed-mailcow/install-register-cmk-agent.yaml @@ -1,5 +1,7 @@ - name: "Register hosts against a remote site. Both for updates and TLS." hosts: all + user: tincadmin + become: true strategy: linear vars: # Basic server and authentication information. diff --git a/playbooks/managed-mailcow/migrate-clamd.yaml b/playbooks/managed-mailcow/migrate-clamd.yaml index 6652b36..f2a515e 100644 --- a/playbooks/managed-mailcow/migrate-clamd.yaml +++ b/playbooks/managed-mailcow/migrate-clamd.yaml @@ -2,6 +2,8 @@ - name: ClamAV Server auf neuen shared ClamAV setzen hosts: all + user: tincadmin + become: true tasks: - name: "Setze ClamAV Server in rspamd Config auf managed mailcows" ansible.builtin.replace: diff --git a/playbooks/managed-mailcow/remove-watchdog-mail.yaml b/playbooks/managed-mailcow/remove-watchdog-mail.yaml index 6d5139e..fdaa6d0 100644 --- a/playbooks/managed-mailcow/remove-watchdog-mail.yaml +++ b/playbooks/managed-mailcow/remove-watchdog-mail.yaml @@ -2,6 +2,8 @@ - name: Enable SNI globally hosts: all + user: tincadmin + become: true vars: debug: false tasks: diff --git a/playbooks/managed-mailcow/start-stop-mailcow.yaml b/playbooks/managed-mailcow/start-stop-mailcow.yaml index 05b0480..ad4ebd0 100644 --- a/playbooks/managed-mailcow/start-stop-mailcow.yaml +++ b/playbooks/managed-mailcow/start-stop-mailcow.yaml @@ -1,5 +1,7 @@ - name: Start/Stop mailcow hosts: all + user: tincadmin + become: true tasks: - import_role: name: managed-mailcow diff --git a/playbooks/managed-mailcow/update-mailcow.yaml b/playbooks/managed-mailcow/update-mailcow.yaml index adb0b37..04d87b2 100644 --- a/playbooks/managed-mailcow/update-mailcow.yaml +++ b/playbooks/managed-mailcow/update-mailcow.yaml @@ -1,10 +1,23 @@ - name: Update mailcow (update.sh) hosts: all + user: tincadmin + become: true vars: - github_mailcow_ver: "2025-09b" # GitHub Version Tag | Value to compare the current running mailcow version to. 
- disk_space_percent_max: "97" # Number in percent | Defines the max allowed disk utilization until ansible is not updating mailcow automatically + github_mailcow_ver: "2026-01" # GitHub Version Tag | Value to compare the current running mailcow version to. + do_snapshots: true # Set to true to create Proxmox snapshots before updating mailcow debug: true # Or False if you dont' wanna see verbose outputs of role outputs + + load_vault: true # Set to true to load vault file for sensitive data like Proxmox API tokens + + pre_tasks: + - name: Load vault vars (optional) + ansible.builtin.include_vars: + file: ../../vault.yml + when: load_vault | bool + no_log: true + tasks: + - import_role: name: roles/managed-mailcow tasks_from: find-mailcow-composedir.yml @@ -12,16 +25,52 @@ - import_role: name: roles/managed-mailcow tasks_from: install-mailcow-components.yml + when: mailcow_dir_result.files[0].path is defined + + - ansible.builtin.import_role: + name: roles/managed-mailcow + tasks_from: check-mailcow-install-status.yml + when: mailcow_dir_result.files[0].path is defined + + - ansible.builtin.import_role: + name: roles/managed-mailcow + tasks_from: get-mailcow-current-version.yml + when: mailcow_conf.stat.exists + failed_when: local_mailcow_version is not defined + + - name: Check Disk Utilization + import_role: + name: roles/system + tasks_from: check-disk-utilization.yaml + + - block: + - name: Include Proxmox Info task + ansible.builtin.include_role: + name: proxmox-automation + tasks_from: get-vmid + + - name: Create Snapshot before Modifications + ansible.builtin.include_role: + name: proxmox-automation + tasks_from: create-snapshots + vars: + snapshot_name: "pre-mailcow-update-{{ github_mailcow_ver }}" + when: + - do_snapshots + - local_mailcow_version.stdout != github_mailcow_ver + - disk_space_output.stdout | bool # Checks if snapshots are available, mailcow needs an update and disk space is sufficient if any of these is false no snapshot will be created + - proxmox_host is defined + - proxmox_user is defined + - proxmox_token_id is defined + - proxmox_token_secret is defined + - import_role: name: roles/managed-mailcow tasks_from: update-mailcow.yml - - - import_role: - name: roles/docker - tasks_from: restart-daemon.yml - when: github_mailcow_ver == "2025-09b" # Only restart docker if mailcow was updated + when: local_mailcow_version.stdout != github_mailcow_ver and disk_space_output.stdout | bool - import_role: name: roles/docker - tasks_from: cleanup-all.yml \ No newline at end of file + tasks_from: cleanup-all.yml + when: update_mailcow is changed \ No newline at end of file diff --git a/playbooks/managed-mailcow/update-mailcow.yaml.old b/playbooks/managed-mailcow/update-mailcow.yaml.old deleted file mode 100644 index 926296a..0000000 --- a/playbooks/managed-mailcow/update-mailcow.yaml.old +++ /dev/null @@ -1,41 +0,0 @@ ---- -- name: Update mailcow stacks - hosts: all - vars: - github_mailcow_ver: "2024-08a" - mailcow_search_paths: - - /opt - - /data - - /root - tasks: - - - name: Find mailcow-dockerized directory - ansible.builtin.find: - file_type: directory - paths: "{{ mailcow_search_paths }}" - patterns: mailcow-dockerized - recurse: yes - register: mailcow_dir_result - ignore_errors: true - - - name: 'DEBUG: Show file paths' - debug: - msg: "{{ mailcow_dir_result.files[0].path }}" - when: mailcow_dir_result is defined - - - name: Check if mailcow.conf exists - ansible.builtin.stat: - path: "{{ mailcow_dir_result.files[0].path }}/mailcow.conf" - register: mailcow_conf - 
when: mailcow_dir_result is defined - - - name: Check mailcow Version - ansible.builtin.shell: | - cd {{ mailcow_dir_result.files[0].path }}/data/web/inc - grep -oP '\$MAILCOW_GIT_VERSION="\K[^"]+' app_info.inc.php - register: local_mailcow_version - when: mailcow_conf.stat.exists - - - name: Update mailcow - shell: "cd {{ mailcow_dir_result.files[0].path }} && git fetch && git checkout origin/master update.sh && ./update.sh --force" - when: local_mailcow_version.stdout != github_mailcow_ver and mailcow_conf.stat.exists diff --git a/playbooks/managed-mailcow/use-docker-image-proxy.yaml b/playbooks/managed-mailcow/use-docker-image-proxy.yaml index 2f68117..fcde82e 100644 --- a/playbooks/managed-mailcow/use-docker-image-proxy.yaml +++ b/playbooks/managed-mailcow/use-docker-image-proxy.yaml @@ -1,7 +1,8 @@ --- - name: Update Docker Daemon configuration and apply proxy settings hosts: all - become: yes + user: tincadmin + become: true tasks: - name: Read current Docker daemon.json ansible.builtin.slurp: diff --git a/playbooks/managed-mailcow/use-syslog-server.yaml b/playbooks/managed-mailcow/use-syslog-server.yaml index 4977e9f..ad7ab5d 100644 --- a/playbooks/managed-mailcow/use-syslog-server.yaml +++ b/playbooks/managed-mailcow/use-syslog-server.yaml @@ -1,7 +1,8 @@ --- - name: Update Docker Daemon configuration to use Syslog Server hosts: all - become: yes + user: tincadmin + become: true tasks: - name: Read current Docker daemon.json ansible.builtin.slurp: diff --git a/playbooks/os-change-mirror.yml b/playbooks/os-change-mirror.yml index 2184dd9..7161756 100644 --- a/playbooks/os-change-mirror.yml +++ b/playbooks/os-change-mirror.yml @@ -1,5 +1,7 @@ - name: "Change Mirror" hosts: all + user: tincadmin + become: true tasks: - name: Verify if system is Debian ansible.builtin.debug: diff --git a/playbooks/os-major-upgrade.yml b/playbooks/os-major-upgrade.yml index 5bc9a45..cba06f6 100644 --- a/playbooks/os-major-upgrade.yml +++ b/playbooks/os-major-upgrade.yml @@ -2,10 +2,13 @@ vars: os_update_major_version: true # Can either be true or false | To toggle if systems need to be upgraded to newer codename os_update_version_codename: "trixie" # Change to switch major release (e.g. 
bookworm or trixie) | Used for jinja2 Template fill in as it determines the current codename of system where ansible is run on + do_snapshots: true # Can either be true or false | To toggle if snapshots should be created before major upgrade snapshot_name: "AUTO_before_major_{{ ansible_date_time.date }}" # Name of the snapshot to be created before major upgrade vars_files: # Load vault file for sensitive data like Proxmox API tokens - ../vault.yml + user: tincadmin + become: true tasks: - name: Verify if system is Debian debug: @@ -43,6 +46,7 @@ when: - ansible_os_family == "Debian" - current_os_codename | lower != os_update_version_codename | lower + - do_snapshots | default(false) - name: Create Snapshot before Modifications ansible.builtin.include_role: @@ -51,6 +55,7 @@ when: - ansible_os_family == "Debian" - current_os_codename | lower != os_update_version_codename | lower + - do_snapshots | default(false) - name: Include OS update role ansible.builtin.include_role: diff --git a/playbooks/os-update.yml b/playbooks/os-update.yml index 12a273d..5b7409a 100644 --- a/playbooks/os-update.yml +++ b/playbooks/os-update.yml @@ -1,7 +1,14 @@ - hosts: all + user: tincadmin + become: true vars: - os_update_major_version: true # Can either be true or false | To toggle if systems need to be upgraded to newer codename + os_also_update_mirror: false # Can either be true or false | To toggle if mirrors should be updated during major upgrade os_update_version_codename: "trixie" # Change to switch major release (e.g. bookworm or trixie) | Used for jinja2 Template fill in as it determines the current codename of system where ansible is run on + do_snapshots: true # Can either be true or false | To toggle if snapshots should be created before os update + snapshot_name: "AUTO_before_os_update_{{ ansible_date_time.date }}" # Name + vars_files: + # Load vault file for sensitive data like Proxmox API tokens + - ../vault.yml tasks: - name: Verify if system is Debian debug: @@ -13,7 +20,45 @@ msg: "This playbook only supports Debian." 
when: ansible_os_family != "Debian" + - name: Check for available updates + ansible.builtin.apt: + update_cache: yes + cache_valid_time: 0 + register: apt_update + when: ansible_os_family == "Debian" + + - name: Check if upgrades are available + ansible.builtin.command: apt list --upgradable + register: upgradable_packages + changed_when: false + when: ansible_os_family == "Debian" + + - name: Set fact if updates are needed + set_fact: + updates_needed: "{{ upgradable_packages.stdout_lines | length > 1 }}" + when: ansible_os_family == "Debian" + + - name: Include Proxmox Info task + ansible.builtin.include_role: + name: proxmox-automation + tasks_from: get-vmid + when: + - ansible_os_family == "Debian" + - do_snapshots | default(false) + - updates_needed | default(false) + + - name: Create Snapshot before Modifications + ansible.builtin.include_role: + name: proxmox-automation + tasks_from: create-snapshots + when: + - ansible_os_family == "Debian" + - do_snapshots | default(false) + - updates_needed | default(false) + - name: Include OS update role - include_role: + ansible.builtin.include_role: name: os-updates - when: ansible_os_family == "Debian" \ No newline at end of file + when: + - ansible_os_family == "Debian" + - updates_needed | default(false) \ No newline at end of file diff --git a/playbooks/reinstall-cmk-agent.yml b/playbooks/reinstall-cmk-agent.yml new file mode 100644 index 0000000..faa2bfc --- /dev/null +++ b/playbooks/reinstall-cmk-agent.yml @@ -0,0 +1,49 @@ +- name: "Reinstall CMK Agent" + hosts: all + user: tincadmin + become: true + strategy: linear + vars_files: + - ../vault.yml + vars: + # Basic server and authentication information. + # You have to provide the distributed setup yourself. + checkmk_agent_version: "2.4.0p17" + checkmk_agent_edition: "cee" + checkmk_agent_user: "{{ checkmk_automation_user }}" + checkmk_agent_pass: "{{ checkmk_automation_pass }}" + # Here comes the part, where we get into remote registration + checkmk_agent_server_protocol: https + # The following should be set to the central site. + # This where you configure the host objects. + # Currently the agent package is also pulled from here. + checkmk_agent_server: servercow.observer + checkmk_agent_site: "scowmon" + checkmk_server_url: "https://servercow.observer" + checkmk_monitoring_site: "scowmon" + # The following should be pointed to the respective remote site. + # This is where the registration will happen. + checkmk_agent_registration_server: "{{ checkmk_agent_server }}" + checkmk_agent_registration_site: "{{ checkmk_agent_site }}" + # The folder might differ from your remote site name, + # as it is the technical path. Check your configuration for this information. + checkmk_agent_folder: "/managed_mailcows" + # These options need to be enabled for all registrations to work. + # You can however disable the one you do not want to perform. + # But the host needs to be added and changes activated in any case. + checkmk_agent_auto_activate: true + checkmk_agent_update: true + checkmk_agent_tls: true + # These are some generic agent options you might want to configure. 
+ checkmk_agent_discover: true + checkmk_agent_discover_max_parallel_tasks: 0 + checkmk_agent_force_install: true + checkmk_agent_delegate_api_calls: localhost + checkmk_agent_delegate_download: "{{ inventory_hostname }}" + checkmk_agent_host_name: "{{ inventory_hostname }}" + checkmk_agent_host_folder: "{{ checkmk_agent_folder }}" + checkmk_agent_host_ip: "{{ ansible_host }}" + checkmk_agent_host_attributes: + ipaddress: "{{ checkmk_agent_host_ip | default(omit) }}" + roles: + - checkmk.general.agent \ No newline at end of file diff --git a/playbooks/setup-checkmk-monitoring.yml b/playbooks/setup-checkmk-monitoring.yml index 8598c04..5daaa90 100644 --- a/playbooks/setup-checkmk-monitoring.yml +++ b/playbooks/setup-checkmk-monitoring.yml @@ -1,5 +1,7 @@ - name: "Setup CheckMK Monitoring" hosts: all + user: tincadmin + become: true vars_files: - ../vault.yml tasks: diff --git a/playbooks/setup-chronyd.yml b/playbooks/setup-chronyd.yml index 69748f7..6ae564b 100644 --- a/playbooks/setup-chronyd.yml +++ b/playbooks/setup-chronyd.yml @@ -1,5 +1,7 @@ - name: "Setup chronyd" hosts: all + user: tincadmin + become: true tasks: - name: Verify if system is Debian or Ubuntu ansible.builtin.debug: diff --git a/roles/manage-ssh-keys/defaults/main.yml b/roles/manage-ssh-keys/defaults/main.yml index 320083a..36b38ba 100644 --- a/roles/manage-ssh-keys/defaults/main.yml +++ b/roles/manage-ssh-keys/defaults/main.yml @@ -3,7 +3,7 @@ ssh_user: "root" authorized_keys_file: >- {{ "/root/.ssh/authorized_keys" if ssh_user == "root" else "/home/{{ ssh_user }}/.ssh/authorized_keys" }} -# Liste der erwünschten (Good) Keys +# List of desired (good) keys good_keys: - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCKcSu464ffJh6fcrWSajlkdGzyeP1+eStHeiFWjfvTZN1YD/05LsADLv8QwnwDbjIHpi/jO2N9mzN55O2MP4FP33Ztmex5CW1sALHynCX7/LtxmklUxbezoJPp1+evhcEQ670KfCpuWWTgGI2ChANnfb/QlON6UWERjauHoNvO33LnO2ySWxHULDlv7BuJCrmk1ZgH2DI7nGIl2KEdkvtJrUaz/fkjalzdfsD+5bsCVxEXBwF5vOAflYdgLAA9AiiHNrwmoU7ELy+WN7YYA0ikoFAUsaW3R4lzA9Cl9wGQmnF30fMChB3JOHF+fFVLFgftChKlB1A1pddaNMPULPyxNJXBXpZCw0ntLcA3UNtnBl0McVKLdVvQfyeWygqqu9OYtkWWO1KApGxss2KDabKG9C+WRhx6z06lFlPMqZK2bmaZDszd8fKI+jbVRKBq2njZmE/uRfEvHHSXqskBDefdMqIUpRN8cN05vZm+sphIaHfOX1vCy1ZDVTiThFcd/z0= root@ansible-servercow" - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsWfznWCcqpgoq4awYDp2W8y62rDT8PEN0xx7818OA1B/mENiBb6jB9qojBpXuSqXKCg7WIVawtl4DSufN4tx2CCNXJPZGcYxkzYrA+bYHMgNUtDF6ps1odFFCu7D1ioVj+hSiM0coFzdgBeT4owg2S8h8kdUmwEbOECp75/3KjV/JUsHrytfJlSTN2mr+SpV3LRL19zFJ67PQXLUyC5oXUR1DZxgzCR2+bWPM7zW0xkVD3c1D+S2JRV4RCZts1Lfgoo/Fl88YMjwk1s3W38Zp/uAgIY6Boan193RWY1yqeCq6u2xAcIiAUqZrVnKesWVnXeRiPuTEESuthK3xSjxd mschild@WS-WIL-MSCHILD" @@ -15,10 +15,11 @@ good_keys: - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINlJlysj2Ff/8lLgNTkNX/uJVz4uIiEtvO/s3qzUMH1j eddsa-key-mv-tinc-20230130" - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICyZYxVyFQlhn/O6XpvnQL9l9bv652pH4jrkiUuNHMsT nm-tinc-eddsa-key-20240805" - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPb3H/K8w22FIpsb+tad+T1PQjrTdry+cM/fmYiLbSDo root@ansible-servercow" - - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICyZYxVyFQlhn/O6XpvnQL9l9bv652pH4jrkiUuNHMsT nm-tinc-eddsa-key-20240805" + - "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQDjDcw9yWAsc1OOtvefPP0NyoJC3c97OeklpH+UbT2FG44iraTshsYGtQrU6cRAOvZM+wOuv78QL9duqNM8PHbwahverwzrBkBz6lQ1owCaqXFdqdbdixutepPGC2f9qPeJIcKj7Y+IWqMTwF8P7+rp4ueLipu3gJEJOvtWYnFloVEf66zoHh5uuDyf7xSPkwL/oChVcCg/O6zRXFcNnLe4tpBWMv+x5HtR9HOnUEddkKB+GtGkm8wZpbuzkVN3LIGZnW/bxtphqkAKcUDTM4yBeQXsIm/nKw6eNeHjDHh5z19RUNWKSIywCn4RNaCTla//RA2CkSflnybbmNbqd+yRnza3xv9TnIGV3lvnluxB9Of87mGLFPr9aebR1SAiTz7owqbZmai1bs3wDebt0hjr0ixxFrPTDjp9X/f20MAN847R/2sV6adD+4M4ej4JZJb7Fq6YjCq7bw0xNgHdEi7mqQWDrwM37PtjLNN0PJ27A5a5ltyyHRBRXDKLXYTqfkZVOLBSKDG6iN3oKqA1hTSWHJqrdys9Kwnwtq3b+cav2m0N0seURjdniTDEh27S5AECL0VcoxnTIo73WpY2LglnA30wXp0NLZflChG7+wOT0I5p/O4GyTENPHFsqjHgiAKhYDv/h6ioTZXJxFJuLBdq2lFJ6HyG8qovnNYUS/h3xw== tobimuel@tobimuel-q6600" + - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDg1KyrhUDDor3nluin7QbgTFMF4HgUKxXxNRq6rwiTPZgi8DF41YMd4Xk7jgiWQDYqb9P1n/vgR/CiMgDzeF93pd0ShAwgOV+s8TSvtCgQAyfiV+fe0cO/abx2/lUXVIGUzVhhoU2atIgH6GcDWSR4FdjvaF+fhGcbiuqZQ/q7XMjJvn4MoW5uN4MuPMVr5wadBo8CrhxQi1MmaSfkGj9w5VKvjefbrz1CuEvxfnXmbgzsaczSN422BRvWeLY1RHnGf7m47LUQWRTZj1aguuon1uwx7g5uY+1zrYrD+5nU5J8ezgp//g2gOXrvkK1T9c+g2QloDigVJuFRnKwO9OkcJhhlDgZ+fDViLLnka7YPtPsr8qC6pUNfvzK2FMTNPxH7o/V4TNFiFoyP1EJvQSnUnAz96j3HBbbDdXXFOAosLEra4Zkpnkz8aZ0wjWTFtH3io25ok/h9VmWvJ7kPlTnyAMtW3V07Y7Fqc3+gDye5yqNmZxOIk0Qzg61PwWswsn7RL8o7aLNINZpmIHPCUO7vlcUi3hOVchkjfMBLB1kIE60yZgS8wVQYo2z10o9RsyvQHtLiGEpqEBP1/ofoEN2t0HuIJQfyOGsUGZbsddOPcduXGLLMpbazVGI7sS2vSRjcUVEgcqEHIjQq3i60Xv9k66IA2JJlLNs22qM2tp72DQ== tobimuel@tobimuel-e480" -# Liste der unerwünschten (Bad) Keys +# List of undesired (bad) keys bad_keys: - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCx5Gwq39Jaf9YQr0qWzCZMU0l1sPfrJE7vWyrZiQRv2IgVvkIuDl1gv+Gaf1wL69WookC0TGc4Ce2tH5xfcz2tiH72jIDf60izrf2attmPcbLnZfFgN6cPFzCIoMVMIMhROgOF9wF1MzO9WUggJBEpcxotoiPfKkmIrfYXLnnMmZ6XXs3LCcdP1wNOkh/mZ3KfwhH6/GhV/0/mjymzrO5DL/piu+89ZrLmsVU9F/VUZciG7zCv8g6Hhiy25vyOmtGL/DPHfszzlQuvRo0hjTjEdNsnv9b44zc7OtGYdrZ4SPK7v2dSLdzU9eL3+7m6zocaVrbM6YWTph9acwkKOehV root@ccp-wil-backup01" - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCcqqrN2lC4lajOmiFuUqHBQ2C07YTl3w5e/FT3+ddZ5YOiONr+e8FvKkiw4he5fvGnt6/RUZgnJW+rI7jlF5qPJjdkdJ3wZNiwp4gTiebNV2hvLx3AL0aoH/5tN9m4KDTYZKfnF1JZAgsZrLNrfYJp8F8+AQk24rAQINQ3Cku0i4cgenOQBrT48/Ibv7erav7ZkUFvIPkh4B4Owzu6MUGzKNFoLypgMRXMmLN2vyaor/q4aA9xeha2CKdbJYhTwgrYMieiAyDw9dbe8rJe0BB7VXxDmX54seLsmSWhs6/6L2JNDAdpV/f4Jb2n2L0GaFlyjGpi64nwfoWng2Meou0J mo@LenovoP340-Tiny" diff --git a/roles/manage-ssh-keys/tasks/add-goodkeys.yml b/roles/manage-ssh-keys/tasks/add-goodkeys.yml index f725dc6..ff34df0 100644 --- a/roles/manage-ssh-keys/tasks/add-goodkeys.yml +++ b/roles/manage-ssh-keys/tasks/add-goodkeys.yml @@ -1,5 +1,5 @@ --- -- name: Good Keys hinzufügen +- name: Add good keys lineinfile: path: "{{ authorized_keys_file }}" line: "{{ item }}" diff --git a/roles/manage-ssh-keys/tasks/main.yml b/roles/manage-ssh-keys/tasks/main.yml index d338881..724f78f 100644 --- a/roles/manage-ssh-keys/tasks/main.yml +++ b/roles/manage-ssh-keys/tasks/main.yml @@ -1,10 +1,10 @@ --- -# Haupt-Task der Rolle: Modularer Aufbau mit Subtasks -- name: Validiere SSH Keys +# Main task of the role: modular structure with subtasks +- name: Validate SSH keys import_tasks: validate-keys.yml -- name: Füge Good Keys hinzu +- name: Add good keys import_tasks: add-goodkeys.yml -- name: Entferne Bad Keys +- name: Remove bad keys import_tasks: remove-badkeys.yml \ No newline at end of file diff --git a/roles/manage-ssh-keys/tasks/remove-badkeys.yml b/roles/manage-ssh-keys/tasks/remove-badkeys.yml index 78ceb5c..8aafffd 100644 --- 
a/roles/manage-ssh-keys/tasks/remove-badkeys.yml +++ b/roles/manage-ssh-keys/tasks/remove-badkeys.yml @@ -1,5 +1,5 @@ --- -- name: Bad Keys entfernen +- name: Remove bad keys lineinfile: path: "{{ authorized_keys_file }}" line: "{{ item }}" diff --git a/roles/manage-ssh-keys/tasks/validate-keys.yml b/roles/manage-ssh-keys/tasks/validate-keys.yml index b6e4e60..f42ebaf 100644 --- a/roles/manage-ssh-keys/tasks/validate-keys.yml +++ b/roles/manage-ssh-keys/tasks/validate-keys.yml @@ -1,5 +1,5 @@ --- -- name: Stelle sicher, dass das .ssh-Verzeichnis existiert +- name: Ensure that .ssh directory exists file: path: "{{ authorized_keys_file | dirname }}" state: directory diff --git a/roles/managed-mailcow/tasks/check-mailcow-install-status.yml b/roles/managed-mailcow/tasks/check-mailcow-install-status.yml new file mode 100644 index 0000000..5856711 --- /dev/null +++ b/roles/managed-mailcow/tasks/check-mailcow-install-status.yml @@ -0,0 +1,6 @@ +--- +- name: Check if mailcow.conf exists + ansible.builtin.stat: + path: "{{ mailcow_dir_result.files[0].path | default('/opt/mailcow-dockerized') }}/mailcow.conf" + register: mailcow_conf + when: mailcow_dir_result.files[0].path is defined \ No newline at end of file diff --git a/roles/managed-mailcow/tasks/get-mailcow-current-version.yml b/roles/managed-mailcow/tasks/get-mailcow-current-version.yml new file mode 100644 index 0000000..323e532 --- /dev/null +++ b/roles/managed-mailcow/tasks/get-mailcow-current-version.yml @@ -0,0 +1,6 @@ +--- +- name: Check mailcow Version + ansible.builtin.shell: | + cd {{ mailcow_dir_result.files[0].path | default('/opt/mailcow-dockerized') }}/data/web/inc + grep -oP '\$MAILCOW_GIT_VERSION="\K[^"]+' app_info.inc.php + register: local_mailcow_version \ No newline at end of file diff --git a/roles/managed-mailcow/tasks/update-mailcow.yml b/roles/managed-mailcow/tasks/update-mailcow.yml index fb4ccdc..f6a365e 100644 --- a/roles/managed-mailcow/tasks/update-mailcow.yml +++ b/roles/managed-mailcow/tasks/update-mailcow.yml @@ -1,22 +1,5 @@ --- -- name: Check if mailcow.conf exists - ansible.builtin.stat: - path: "{{ mailcow_dir_result.files[0].path }}/mailcow.conf" - register: mailcow_conf - when: mailcow_dir_result.files[0].path is defined - -- name: Check mailcow Version - ansible.builtin.shell: | - cd {{ mailcow_dir_result.files[0].path }}/data/web/inc - grep -oP '\$MAILCOW_GIT_VERSION="\K[^"]+' app_info.inc.php - register: local_mailcow_version - when: mailcow_conf.stat.exists - -- name: Check Disk Utilization - import_role: - name: roles/system - tasks_from: check-disk-utilization.yaml - - name: Update mailcow + throttle: 30 shell: "cd {{ mailcow_dir_result.files[0].path }} && git fetch && git checkout origin/master update.sh && git checkout origin/master _modules && ./update.sh --force" - when: local_mailcow_version.stdout != github_mailcow_ver and mailcow_conf.stat.exists and disk_space_output.stdout | bool + register: update_mailcow diff --git a/roles/os-updates/defaults/main.yml b/roles/os-updates/defaults/main.yml index 48662c0..fc6ec9d 100644 --- a/roles/os-updates/defaults/main.yml +++ b/roles/os-updates/defaults/main.yml @@ -1,4 +1,4 @@ -# Standardwerte, die überschrieben werden können +# Default values that can be overridden os_update_auto_upgrade: true os_also_update_mirror: true # Can either be true or false | Use this to enable mirror changes. Useful for first runs. 
os_update_mirrors: diff --git a/roles/os-updates/tasks/upgrade_packages.yaml b/roles/os-updates/tasks/upgrade_packages.yaml index 8d8bb7d..de63e63 100644 --- a/roles/os-updates/tasks/upgrade_packages.yaml +++ b/roles/os-updates/tasks/upgrade_packages.yaml @@ -16,6 +16,10 @@ register: running_kernel changed_when: false failed_when: false + +- name: Trigger reboot if kernel has been updated + command: /bin/true notify: - Reboot system when: running_kernel.stdout != latest_kernel.stdout + changed_when: true diff --git a/roles/proxmox-automation/requirements.yml b/roles/proxmox-automation/requirements.yml index b2ce184..a4fe3cd 100644 --- a/roles/proxmox-automation/requirements.yml +++ b/roles/proxmox-automation/requirements.yml @@ -1,4 +1,4 @@ --- collections: - name: community.proxmox - version: 1.4.0 \ No newline at end of file + version: 1.5.0 \ No newline at end of file diff --git a/roles/proxmox-automation/tasks/delete-snapshots.yaml b/roles/proxmox-automation/tasks/delete-snapshots.yaml index 0801cd2..d7238e9 100644 --- a/roles/proxmox-automation/tasks/delete-snapshots.yaml +++ b/roles/proxmox-automation/tasks/delete-snapshots.yaml @@ -1,4 +1,14 @@ -- name: Delete snapshot before_major +- name: Get all snapshots + community.proxmox.proxmox_snap_info: + api_host: "{{ proxmox_host }}" + api_user: "{{ proxmox_user }}" + api_token_id: "{{ proxmox_token_id }}" + api_token_secret: "{{ proxmox_token_secret }}" + vmid: "{{ vmid }}" + register: snapshot_info + delegate_to: localhost + +- name: Delete all snapshots community.proxmox.proxmox_snap: api_host: "{{ proxmox_host }}" api_user: "{{ proxmox_user }}" @@ -6,5 +16,7 @@ api_token_secret: "{{ proxmox_token_secret }}" vmid: "{{ vmid }}" state: absent - snapname: before_major + snapname: "{{ item.name }}" + loop: "{{ snapshot_info.snapshots }}" + when: item.name != "current" delegate_to: localhost \ No newline at end of file diff --git a/roles/ssh/tasks/hardenize-ssh-algos.yaml b/roles/ssh/tasks/hardenize-ssh-algos.yaml new file mode 100644 index 0000000..e69de29 diff --git a/roles/system/tasks/check-disk-utilization.yaml b/roles/system/tasks/check-disk-utilization.yaml index 1242b2e..809990b 100644 --- a/roles/system/tasks/check-disk-utilization.yaml +++ b/roles/system/tasks/check-disk-utilization.yaml @@ -1,6 +1,6 @@ - name: Run disk space command - ansible.builtin.shell: "df --output=used,avail / | awk 'NR==2 {used=$1; available=$2; total=used+available; percentage=used*100/total; if (percentage < {{ disk_space_percent_max }} ) printf \"true\"; else printf \"false\"}'" - # System uses the disk_space_percent_max variable to determine condition this check is getting. Over the amount defined in the var causes the check to fail! 
+ ansible.builtin.shell: "df --output=avail / | awk 'NR==2 {avail=$1; if (avail >= 4194304) printf \"true\"; else printf \"false\"}'" + # System checks if root partition has at least 4 GB (4194304 KB) available for updates register: disk_space_output - name: "**DEBUG**: Server disk Utilization condition" diff --git a/roles/system/tasks/install-docker.yaml b/roles/system/tasks/install-docker.yaml index 7fe0952..aaa61f3 100644 --- a/roles/system/tasks/install-docker.yaml +++ b/roles/system/tasks/install-docker.yaml @@ -1,5 +1,11 @@ +- name: Install gpg package + ansible.builtin.apt: + name: gnupg + state: present + - name: Install Docker from official repo when: docker_install_source == "official" + block: - name: Ensure Docker GPG key is dearmored and installed ansible.builtin.get_url: diff --git a/roles/system/tasks/special-admin-create.yaml b/roles/system/tasks/special-admin-create.yaml index d45c8c9..a01e302 100644 --- a/roles/system/tasks/special-admin-create.yaml +++ b/roles/system/tasks/special-admin-create.yaml @@ -34,7 +34,7 @@ group: "{{ admin_user }}" mode: "0600" - - name: Jeden Key einzeln mit authorized_key hinzufügen + - name: Add each key individually with authorized_key ansible.builtin.authorized_key: user: "{{ admin_user }}" key: "{{ item | trim }}" @@ -42,7 +42,7 @@ loop: "{{ key_list }}" when: item | trim != "" - - name: Passwordless‑sudo für alle Befehle konfigurieren + - name: Configure passwordless sudo for all commands ansible.builtin.copy: dest: "/etc/sudoers.d/{{ admin_user }}" content: | diff --git a/roles/system/tasks/ssh-hardening.yaml b/roles/system/tasks/ssh-hardening.yaml index 582771c..9c6cd05 100644 --- a/roles/system/tasks/ssh-hardening.yaml +++ b/roles/system/tasks/ssh-hardening.yaml @@ -17,7 +17,7 @@ group: "root" mode: "0600" -- name: Jeden Key einzeln mit authorized_key hinzufügen +- name: Add each key individually with authorized_key ansible.builtin.authorized_key: user: "root" key: "{{ item | trim }}"