While automating my Proxmox environment with Packer, I found that most of the workflow worked flawlessly: Ubuntu autoinstall, cloud-init, SSH provisioning, and qemu-guest-agent all behaved exactly as expected. But every build consistently failed at the very last step, converting the VM into a template, which was very annoying.

Despite the VM installing perfectly, Proxmox refused to stop it cleanly and returned a persistent lock-related error. This led to a surprisingly long troubleshooting process, which eventually revealed a simple root cause: stale lock files left behind by earlier interrupted builds… sigh.

In this post, I’ll share the exact error, the steps I went through to diagnose it, and how cleaning up these old lock files immediately restored stable, repeatable builds. It’s been a few very long days…

Over the past weeks I’ve been working on automating my homelab using Infrastructure as Code. The goal: fully automated builds of Ubuntu Server and Ubuntu Desktop images using Packer, with the output automatically converted into Proxmox templates. Very cool stuff, but with a steep learning curve!

Most of the setup worked flawlessly:

  • Autoinstall for Server and Desktop.
  • cloud-init NoCloud ISO.
  • SSH key authentication for the communicator.
  • qemu-guest-agent running correctly.
  • Cleanup + hardening scripts.
  • Reboots executed using provisioners.

Except the last step would always fail, resulting in this specific error message:

Error converting VM to template, could not stop: can't lock file '/var/lock/qemu-server/lock-607.conf' - got timeout

Packer tried to stop the VM (via the Proxmox API), but Proxmox couldn’t acquire the lock on the VM config file. Because of this:

  • Proxmox never issued the shutdown.
  • The template conversion failed.
  • Packer deleted the VM.
  • And nothing was produced.

It happened every single time, regardless of what I changed in the autoinstall or provisioning steps.

Ruling out the usual suspects

Before blaming Proxmox, I verified everything inside the VM:

  • qemu-guest-agent was active and responsive (checks sketched after this list).
  • Autoinstall completed normally.
  • Desktop install using boot commands worked.
  • Cleanup script executed correctly.
  • Reboot inside provisioning worked.
  • Packer connected via SSH without issues.
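
For reference, this is roughly what those agent checks look like; a minimal sketch, where <vmid> is just a placeholder for the VM Packer is building:

# Inside the guest: confirm the agent service is running
systemctl is-active qemu-guest-agent
# From the Proxmox node: confirm the agent actually responds
qm agent <vmid> ping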

I even added:

  • a reboot provisioner (with expect_disconnect = true)
  • a “wait until qemu-guest-agent is active” loop (sketched below)
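
The wait loop itself was nothing fancy. As a minimal sketch, something along these lines runs in a shell step after the reboot (the retry count and sleep interval are just placeholder values):

# Poll until qemu-guest-agent reports active, giving up after roughly two minutes
for i in $(seq 1 24); do
  systemctl is-active --quiet qemu-guest-agent && break
  sleep 5
done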

Everything inside the VM behaved perfectly. So I eventually shifted focus to the Proxmox host itself.
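
One quick way to confirm the problem lived on the host side is to attempt the same stop directly on the node, since qm goes through the same per-VM lock as the API. As a sketch (607 is simply the VMID from my error message):

# On the Proxmox node: attempt the shutdown manually
qm shutdown 607 --timeout 60
# qm takes the same /var/lock/qemu-server/lock-607.conf lock as the API call,
# so a stuck lock surfaces here as the same "got timeout" error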

The breakthrough: the “/var/lock/qemu-server/” directory

During debugging I inspected the lock directory on the Proxmox node, and there it was: a graveyard of stale lock files, including locks for VM IDs that no longer even existed… another sigh. This explained the exact failure: Proxmox requires a lock file for VM operations such as:

  • starting.
  • stopping.
  • templating.
  • backup.
  • migration.

If a lock file already exists and is possibly corrupt for some reason (one being an unsuccessful Packer run), but the process that created it is long gone, then any new task trying to acquire that lock can hang until it times out, hence my issue!
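
A quick way to tell whether a lock is stale is to check whether the VM it belongs to still exists. On a standard Proxmox node the VM config lives under /etc/pve/qemu-server/, so as a sketch (with <vmid> as a placeholder):

# A lock file exists for a given VMID...
ls -l /var/lock/qemu-server/lock-<vmid>.conf
# ...but if there is no matching VM config on the node, the lock is stale
ls -l /etc/pve/qemu-server/<vmid>.conf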

The solution

The solution turned out to be surprisingly simple. So why didn’t I think of this before spending two days searching the Internet for similar situations and solutions…

1. Check which lock files exist

ls -l /var/lock/qemu-server/

2. Try a clean unlock first

qm unlock <vmid>


3. If the VM no longer exists but a lock file does

rm /var/lock/qemu-server/lock-<vmid>.conf
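
If, like me, you have a whole graveyard of them, a small loop saves some typing. This is just a sketch and assumes the standard layout with VM configs in /etc/pve/qemu-server/, so double-check before running it:

# Remove lock files whose VM config no longer exists on this node
for lock in /var/lock/qemu-server/lock-*.conf; do
  [ -e "$lock" ] || continue            # skip if the glob matched nothing
  vmid=$(basename "$lock" .conf)        # e.g. lock-607
  vmid=${vmid#lock-}                    # e.g. 607
  if [ ! -e "/etc/pve/qemu-server/${vmid}.conf" ]; then
    echo "removing stale lock: $lock"
    rm "$lock"
  fi
done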

After cleaning up the entire directory of old lock files:

  • 1st Packer run → successful
  • 2nd run → successful
  • 3rd run → successful
  • Aborted build → next run still successful
  • Build with lock intentionally left behind → still successful

This confirmed that we had been dealing with stale/corrupted locks, not a configuration issue. From that moment on, every build completed end-to-end without errors. It took me two days, but the learning curve was amazing. Good stuff to hit an error and learn from it!

Hopefully this post helps someone tackle this issue sooner! As always, let me know in the comments if you have remarks or questions, or simply send me a message through the contacts page.