Understanding SSD Wear in Proxmox Environments
Although data hoarders often avoid SSDs because of their finite write endurance and their tendency to lose data when left unpowered for long stretches, these high-speed drives are excellent for booting consumer PCs and home servers. I prefer using SSDs when setting up bare-metal containerization environments, virtualization platforms, and server/NAS operating systems.
However, certain services can negatively impact SSD health, especially under heavy server workloads. As someone who relies on Proxmox for self-hosting tasks and computing experiments, I've implemented several strategies to cut down on unnecessary writes to my boot drive.
Identifying the Culprits Behind SSD Wear
SSDs use NAND flash cells, which degrade over time with repeated write operations. Even if you move I/O-heavy virtual machines to HDDs, Proxmox has background services that can still cause excessive writes to the boot drive.
The iotop utility is a helpful tool for identifying these services. On Debian-based systems like Proxmox, you can install it using the command:
apt install iotop -y
Then run:
iotop -ao
This will show which processes have generated the most write activity (-a accumulates totals for as long as iotop runs, and -o limits the list to processes actually doing I/O). Typically, the systemd-journald service and the pmxcfs process (the Proxmox cluster file system) are the main contributors.
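If you would rather capture a snapshot to review later, iotop also offers a non-interactive batch mode; the interval, sample count, and output file below are just examples:
iotop -aob -d 10 -n 6 > /root/iotop-report.txt
This records accumulated writes over roughly a minute and saves the table to a file, which makes it easy to compare numbers before and after applying the tweaks below.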
Disabling Unnecessary Services
For users not running a Proxmox cluster, disabling certain services can significantly reduce SSD wear. The two primary offenders are pve-ha-lrm.service and pve-ha-crm.service. The former, the local resource manager, handles resource allocation on the local PVE node; the latter, the cluster resource manager, oversees the entire cluster, monitoring node status and ensuring high availability.
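If you are not sure whether your node is actually part of a cluster, you can check before touching anything; on a standalone installation, pvecm reports that no cluster is configured:
pvecm status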
If you’re using a single-node setup, you can disable these services to slow down the wear on your boot drive. Use the following commands:
systemctl stop pve-ha-lrm.service
systemctl disable pve-ha-lrm.service
systemctl stop pve-ha-crm.service
systemctl disable pve-ha-crm.service
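To confirm the change took effect, you can query both units at once; they should now show up as inactive and disabled:
systemctl status pve-ha-lrm.service pve-ha-crm.service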
Optimizing Log Storage
Systemd-journald is crucial for logging system events, but it can also contribute to SSD wear. For low-quality SSDs, changing the Storage parameter in /etc/systemd/journald.conf to volatile forces logs to be stored in RAM. However, this means logs are lost upon shutdown or crash, making troubleshooting difficult.
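If you decide that trade-off is acceptable, the change is a single line under the [Journal] section of /etc/systemd/journald.conf, followed by a restart of the logging service:
Storage=volatile
systemctl restart systemd-journald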
An alternative solution is Log2Ram, which stores logs in memory and periodically syncs them to the SSD. This approach offers a balance between performance and durability. To set it up:
Add the Log2Ram repository:
echo "deb [signed-by=/usr/share/keyrings/azlux-archive-keyring.gpg] http://packages.azlux.fr/debian/ bookworm main" | tee /etc/apt/sources.list.d/azlux.list
wget -O /usr/share/keyrings/azlux-archive-keyring.gpg https://azlux.fr/repo.gpg
Install Log2Ram:
apt update && apt install log2ram -y
Reboot your Proxmox node and verify the installation with:
systemctl status log2ram
You can adjust settings like SIZE and PATH_DISK by editing /etc/log2ram.conf.
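For illustration, those two options look like this in the config file (the values shown are examples; SIZE should comfortably hold your typical log volume), and the service needs a restart for changes to apply:
SIZE=128M
PATH_DISK="/var/log"
systemctl restart log2ram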
Choosing the Right Boot Drive
While it’s not necessary to invest in a top-of-the-line PCIe 5.0 SSD for a Proxmox boot drive, opting for a reliable, mid-range model is essential. Low-quality drives from unknown manufacturers tend to degrade faster, even with optimization measures in place. Prioritizing quality ensures smoother performance and fewer headaches in the long run.