VM Migration Between Hosts in Proxmox: Complete Guide to Live and Offline Migration
Virtual machine migration is one of the most valuable capabilities within Proxmox Virtual Environment, enabling administrators to move VMs between physical nodes without downtime or service interruption. Whether you are maintaining a large cluster, preparing hardware for upgrades, or balancing workloads to improve performance, understanding VM migration between hosts in Proxmox is essential.
This comprehensive guide explores how Proxmox migration works, step-by-step instructions for both live and offline migrations, storage considerations, networking requirements, best practices, troubleshooting, and recommended tools to streamline the process. Additionally, you will find links to useful resources, FAQs, and comparison tables to help you optimize your Proxmox cluster.
What Is VM Migration in Proxmox?
VM migration is the process of relocating a running or stopped virtual machine to another node within the same Proxmox cluster. Proxmox supports two primary types of migration:
- Live Migration: moves a running VM to another host with no downtime.
- Offline Migration: moves a powered-off VM, or one briefly stopped for the transfer.
Both methods are essential for cluster maintenance, load balancing, and achieving high availability. Live migration allows you to maintain uptime, while offline migration is useful for large disk transfers or when live migration conditions are not met.
Benefits of VM Migration Between Hosts in Proxmox
VM migration plays a central role in modern virtualization strategies. Some benefits include:
- Zero-downtime maintenance on physical hosts
- Improved workload distribution across nodes
- Enhanced resource optimization
- Greater reliability and fault tolerance
- Support for Proxmox HA failover environments
- Flexibility to scale and upgrade hardware
Requirements for VM Migration in Proxmox
Before initiating migration, ensure the environment meets the required technical conditions.
Cluster Requirements
- All nodes must belong to the same Proxmox cluster.
- Cluster must be healthy and in a quorum state.
- Nodes must have consistent time synchronization (via NTP).
Storage Requirements
Seamless live migration requires shared storage that is accessible from both the source and target nodes. Acceptable types include:
- NFS
- Ceph
- iSCSI
- ZFS over iSCSI
- GlusterFS
VMs stored on local disks (e.g., local-lvm) can still be migrated using the "with local disks" option (--with-local-disks on the CLI), but this copies the full disk contents over the network, so it is slower and, on older Proxmox versions, may require downtime.
Networking Requirements
- VM network configurations must exist on both nodes.
- Bridge names should be identical across nodes.
- Migration network should be configured for optimal performance.
How Live Migration Works in Proxmox
Live migration relies on memory replication while the VM continues running. The process includes:
- Initial transfer of VM memory to the destination node.
- Dirty page tracking to capture changes made during transfer.
- Final memory sync and switch over when changes are minimal.
- Resume VM operations on the new host.
This method ensures continuous availability of services with minimal interruption, usually measured in milliseconds.
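As a toy illustration of the pre-copy logic (this is not Proxmox source code, and the memory size, dirty rate, and threshold are made-up numbers), each pass only has to resend the memory that was dirtied during the previous pass, so the remainder shrinks until it is small enough for the final pause:

```shell
#!/bin/sh
# Toy sketch of pre-copy convergence with hypothetical numbers.
remaining=4096      # MiB of guest RAM still to transfer (example value)
dirty_rate=25       # percent of copied memory re-dirtied per pass (example)
threshold=16        # switch over when the remainder is this small (MiB)
pass=0
while [ "$remaining" -gt "$threshold" ]; do
    pass=$((pass + 1))
    # The next pass only needs to resend what was dirtied during this one.
    remaining=$((remaining * dirty_rate / 100))
    echo "pass $pass: ${remaining} MiB left to sync"
done
echo "final pause: sync last ${remaining} MiB and resume on target"
```

With these numbers the transfer converges in four passes; if the dirty rate were close to 100 percent, the loop would never shrink, which is exactly when live migration stalls on write-heavy VMs.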
Step-by-Step Guide: Live Migration of VMs in Proxmox
Performing live migration in Proxmox is straightforward.
Method 1: Using the Proxmox Web Interface
- Select the VM you want to migrate.
- Click Migrate in the top menu.
- Select the target node from the dropdown list.
- Choose Online migration mode.
- Click Migrate to begin.
Proxmox will show migration progress and automatically switch the VM to the new node once complete.
Method 2: Using the Proxmox CLI
Run the following command:
qm migrate VMID TARGETNODE --online
This is especially useful for batch processing or automation through scripts.
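For example, a minimal batch script could walk a list of VM IDs and migrate them one at a time. The VM IDs and target node name below are placeholders for illustration; since qm only exists on a Proxmox node, the sketch defaults to printing the commands (DRY_RUN=1) rather than executing them:

```shell
#!/bin/sh
# Live-migrate a list of VMs to one target node, one at a time.
# VM IDs and node name are example values; adjust for your cluster.
TARGET=node2
VMIDS="101 102 103"
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually run the commands

for id in $VMIDS; do
    cmd="qm migrate $id $TARGET --online"
    if [ "$DRY_RUN" = "1" ]; then
        echo "[dry-run] $cmd"
    else
        $cmd || { echo "migration of VM $id failed, stopping" >&2; exit 1; }
    fi
done
```

Migrating sequentially rather than in parallel keeps the migration network from being saturated by several memory streams at once.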
Step-by-Step Guide: Offline Migration in Proxmox
Offline migration is useful for VMs with local storage or when live migration prerequisites are not met.
Performing Offline Migration via Web UI
- Shut down or stop the VM.
- Click the Migrate button.
- Select the target node.
- Ensure Offline mode is selected.
- Start migration.
Command Line Offline Migration
qm migrate VMID TARGETNODE
Omitting the --online flag performs an offline migration: Proxmox requires the VM to be fully stopped, then transfers the configuration and storage.
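The full offline sequence (graceful shutdown, migrate, start on the new node) can be scripted. The VM ID and node name below are examples, and because qm is only available on a Proxmox node the sketch prints the commands by default:

```shell
#!/bin/sh
# Offline migration sequence: stop the VM cleanly, migrate, start it again.
# VMID and TARGET are example values; DRY_RUN=1 prints instead of executing.
VMID=100
TARGET=node2
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "[dry-run] $*"; else "$@"; fi
}

run qm shutdown "$VMID" --timeout 120   # graceful guest shutdown
run qm migrate "$VMID" "$TARGET"        # offline migration of a stopped VM
run qm start "$VMID"                    # run on the target node after migration
```

Note that after migration the VM's configuration lives on the target node, so the final start must be issued there (or via the web UI).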
Live Migration vs Offline Migration Comparison
| Feature | Live Migration | Offline Migration |
|---|---|---|
| Downtime | None or negligible | VM is stopped during migration |
| Speed | Fast with shared storage | Slower, especially with disk copy |
| Storage requirement | Shared storage required | Works with local storage |
| Use case | Maintenance, load balancing | Disk migration, hardware changes |
Networking Best Practices for Successful Migration
Consistent and optimized networking is essential for efficient migration.
- Use dedicated migration networks when possible.
- Ensure bridges have identical names across nodes.
- Use 10GbE or higher for large VMs.
- Enable jumbo frames to improve throughput.
- Test node-to-node latency regularly.
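A dedicated migration network is configured cluster-wide in /etc/pve/datacenter.cfg. The subnet below is an example value; "secure" tunnels the traffic over SSH:

```
# /etc/pve/datacenter.cfg (cluster-wide settings)
# Route migration traffic over a dedicated subnet, encrypted via SSH:
migration: secure,network=10.10.10.0/24
```

Each node needs an address inside that subnet; Proxmox then picks the matching interface automatically for migration traffic.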
Storage Options for VM Migration
Storage selection significantly impacts migration performance and capabilities.
1. Shared Storage
- Best for live migration
- Fastest migration performance
- Recommended for high availability clusters
2. Local Storage with Disk Migration
- Supports offline or live with disk copy
- Slower migration due to full disk transfer
- Useful for standalone nodes or budget setups
3. Ceph Distributed Storage
- Excellent for scalable clusters
- Supports live migration seamlessly
- Fully integrated with Proxmox
Best Practices for VM Migration on Proxmox Clusters
- Ensure cluster nodes run the same Proxmox version.
- Plan migrations during low workload periods.
- Use shared storage for mission-critical VMs.
- Monitor network bandwidth to prevent congestion.
- Enable migration logging for troubleshooting.
- Use HA for automated failover and resilience.
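A few of these checks can be rolled into a quick pre-flight script using the standard Proxmox and systemd CLI tools. Since these tools only exist on a Proxmox node, the sketch prints the commands by default:

```shell
#!/bin/sh
# Pre-migration sanity checks to run on a Proxmox node.
# DRY_RUN=1 prints the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "[dry-run] $*"; else "$@"; fi; }

run pveversion          # compare across nodes: versions should match
run pvecm status        # confirm the cluster is healthy and quorate
run timedatectl status  # confirm NTP time synchronization is active
```

Running the same checks on both the source and target node before a maintenance window catches version skew and quorum problems early.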
Common Migration Issues and How to Fix Them
Error: “Inconsistent Bridge Names”
Ensure that network bridges are identical across all nodes. Check via:
cat /etc/network/interfaces
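To compare bridges across nodes rather than eyeballing two files, you can extract and diff the bridge names. The file names below are hypothetical local copies of each node's /etc/network/interfaces (fetched with scp, for instance); the inline samples just demonstrate the check:

```shell
#!/bin/sh
# Compare the bridge names defined on two nodes, given local copies of
# each node's /etc/network/interfaces (example file names).
list_bridges() {
    # Print the bridge name from every "iface vmbrX ..." stanza, sorted.
    awk '/^iface vmbr/ { print $2 }' "$1" | sort
}

# Sample data standing in for the two nodes' real config files:
printf 'iface vmbr0 inet static\niface vmbr1 inet manual\n' > node1.interfaces
printf 'iface vmbr0 inet static\n' > node2.interfaces

list_bridges node1.interfaces > bridges_a.txt
list_bridges node2.interfaces > bridges_b.txt
if diff bridges_a.txt bridges_b.txt > /dev/null; then
    result="match"
else
    result="differ"
fi
echo "bridge names: $result"
```

In this sample the second node is missing vmbr1, so the check reports a difference; a VM attached to vmbr1 could not be migrated there until the bridge is created.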
Error: “Disk Locked”
This typically happens after failed backups or snapshots. Unlock using:
qm unlock VMID
Error: “Cannot Migrate VM Using Local Storage”
Use the "with local disks" option (--with-local-disks) or move the VM disk to shared storage first.
Error: “Network Timeout During Migration”
- Check MTU settings.
- Test latency between nodes.
- Verify migration network stability.
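A common MTU pitfall is that jumbo frames are enabled on the NICs but not on an intermediate switch. A non-fragmenting ping of the expected MTU verifies the whole path; the MTU and peer address below are example values (the ICMP payload is the MTU minus 20 bytes of IP header and 8 bytes of ICMP header):

```shell
#!/bin/sh
# Build a non-fragmenting ping that verifies a jumbo-frame path end to end.
MTU=9000                 # expected path MTU (example value)
PEER=10.10.10.2          # other node's migration address (example value)
payload=$((MTU - 28))    # ICMP payload = MTU - 20 (IP) - 8 (ICMP)
echo "ping -M do -s $payload -c 3 $PEER"
# If this ping fails while a default-size ping works, something on the
# path is not actually passing jumbo frames.
```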
Recommended Tools and Resources
- High-performance SSDs for Proxmox storage
- 10GbE network adapters
- Learn more about Proxmox storage configuration
Frequently Asked Questions (FAQ)
Can I migrate VMs with local storage in Proxmox?
Yes, but it requires disk migration. This process is slower and may need downtime depending on VM size.
Does live migration cause downtime?
No. Live migration keeps VMs running throughout the process with minimal service interruption.
Can LXC containers be migrated?
Yes, LXC containers can be migrated, but true live migration is not supported for containers: a running container is migrated in "restart mode" (stopped on the source, then started on the target). Offline migration works the same way as for VMs and benefits from shared storage.
Is shared storage required for migration?
It is required for fast live migrations but not for offline migrations.
Why does my migration fail with “no quorum”?
Your cluster lost quorum. Check corosync status and node communication.
Conclusion
VM migration between hosts in Proxmox is one of the platform's most important features, enabling zero-downtime maintenance, improved load balancing, and enhanced cluster performance. Whether using live or offline migration, following best practices ensures smooth transitions and a reliable virtualization environment. With careful planning and the right storage and networking setup, Proxmox can provide enterprise-grade flexibility and resilience for virtual infrastructure.