XSO-860: XenMotion fails - xc_domain_restore: [1] Restore failed (1 = Operation not permitted)


Details

    • Type: Bug
    • Resolution: Won't Do
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: 7.3
    • Component/s: VM Lifecycle
    • Labels: None

    Description

      Hello,

      We are migrating ~40 VMs (Debian/Ubuntu) with XenMotion from one pool (XS 7.2 up-to-date, CPU E5-2618L v4, local SR) to another pool (XS 7.2 up-to-date, CPU E5-2618L v3, local SR).
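
      Since the source and destination hosts have different CPU generations (E5-2618L v4 vs. v3), it may be worth comparing the CPU features each pool advertises. Below is a minimal sketch using the XenAPI Python bindings from the XenServer SDK; the pool-master URLs and credentials are placeholders, and the cpu_info key names ("modelname", "features_hvm") are what XS 7.x normally exposes, so treat them as assumptions.

      import XenAPI

      def dump_host_cpu_info(master_url, user, password):
          """Print each host's CPU model and HVM feature mask for one pool.

          Run this against both pool masters and compare the masks.
          master_url and credentials are placeholders for illustration.
          """
          session = XenAPI.Session(master_url)
          session.xenapi.login_with_password(user, password)
          try:
              for host in session.xenapi.host.get_all():
                  name = session.xenapi.host.get_name_label(host)
                  cpu = session.xenapi.host.get_cpu_info(host)
                  print(name, cpu.get("modelname"), cpu.get("features_hvm"))
          finally:
              session.xenapi.session.logout()

      # Placeholder addresses: run once per pool master and diff the output.
      dump_host_cpu_info("https://source-pool-master", "root", "password")
      dump_host_cpu_info("https://destination-pool-master", "root", "password")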

       

      Some of them have been moved smoothly with XenMotion, but others fail.

      When it fails, the VM is left in the "paused" state on the source pool, and I need to force-shutdown the VM to get it running again.

      It always fails at the end of the process.
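
      Recovering a VM left paused by a failed migration can also be scripted instead of done by hand. A minimal sketch with the XenAPI Python bindings, assuming placeholder credentials and VM name, that performs the same force-shutdown/start cycle described above:

      import XenAPI

      def recover_paused_vm(master_url, user, password, vm_name):
          """Force-shutdown a VM left paused by a failed migration, then boot it.

          master_url, credentials and vm_name are placeholders for illustration.
          """
          session = XenAPI.Session(master_url)
          session.xenapi.login_with_password(user, password)
          try:
              vm = session.xenapi.VM.get_by_name_label(vm_name)[0]
              if session.xenapi.VM.get_power_state(vm) == "Paused":
                  session.xenapi.VM.hard_shutdown(vm)        # equivalent of a force shutdown
                  session.xenapi.VM.start(vm, False, False)  # start_paused=False, force=False
          finally:
              session.xenapi.session.logout()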

       

      Log on the source pool:
      xenopsd-xc: [error|serv1|37 |VM.migrate_send R:a0b3d4d1a709|task_server] Task 1082979 failed; Xenops_migrate.Remote_failed("unmarshalling error message from remote")
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|xapi] xenops: not retrying migration: caught Xenops_interface.Internal_error("Xenops_migrate.Remote_failed(\"unmarshalling error message from remote\")") from sync_with_task in attempt 1 of 3.
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|xenops] re-enabled xenops events on VM: 48c93cd6-0fce-1c9e-7018-febf2c0d6c39; refreshing VM
      xenopsd-xc: [debug|serv1|856342 |VM.migrate_send R:a0b3d4d1a709|xenops_server] UPDATES.refresh_vm 48c93cd6-0fce-1c9e-7018-febf2c0d6c39
      xenopsd-xc: [debug|serv1|856342 |VM.migrate_send R:a0b3d4d1a709|xenops_server] VM_DB.signal 48c93cd6-0fce-1c9e-7018-febf2c0d6c39
      xenopsd-xc: [debug|serv1|856342 |VM.migrate_send R:a0b3d4d1a709|xenops_server] VBD_DB.signal 48c93cd6-0fce-1c9e-7018-febf2c0d6c39.xvda
      xenopsd-xc: [debug|serv1|856342 |VM.migrate_send R:a0b3d4d1a709|xenops_server] VIF_DB.signal 48c93cd6-0fce-1c9e-7018-febf2c0d6c39.1
      xenopsd-xc: [debug|serv1|856342 |VM.migrate_send R:a0b3d4d1a709|xenops_server] VIF_DB.signal 48c93cd6-0fce-1c9e-7018-febf2c0d6c39.0
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|xenops] Client.UPDATES.inject_barrier 95
      xenopsd-xc: [debug|serv1|856344 |VM.migrate_send R:a0b3d4d1a709|xenops_server] UPDATES.inject_barrier 48c93cd6-0fce-1c9e-7018-febf2c0d6c39 95
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|mscgen] xapi=>smapiv2 [label="DATA.MIRROR.stat"];
      xapi: [ info|serv1|6274247 INET :::80|Querying services D:7a60d8c8971f|storage_impl] DATA.MIRROR.stat dbg:VM.migrate_send R:a0b3d4d1a709 id:26ca89d3-5499-98a3-ff72-86dc73b59bf2/07e474b8-c4e4-4128-9108-3fd91c3d8bbb
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|storage_migrate] Got failure: checking for redirect
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|storage_migrate] Call was: -> DATA.MIRROR.stat({dbg:S(VM.migrate_send R:a0b3d4d1a709);id:S(26ca89d3-5499-98a3-ff72-86dc73b59bf2/07e474b8-c4e4-4128-9108-3fd91c3d8bbb)})
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|storage_migrate] result.contents: ["Does_not_exist", ["mirror", "26ca89d3-5499-98a3-ff72-86dc73b59bf2\/07e474b8-c4e4-4128-9108-3fd91c3d8bbb"]]
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|storage_migrate] Not a redirect
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|mscgen] xapi=>remote_xapi [label="VDI.destroy"];
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|stunnel] stunnel start
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|xmlrpc_client] stunnel pid: 21470 (cached = false) connected to 192.168.124.13:443
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|xmlrpc_client] with_recorded_stunnelpid task_opt=None s_pid=21470
      xapi: [error|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|xapi] Failed to destroy remote VDI
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|mscgen] xapi=>smapiv2 [label="DP.destroy"];
      xapi: [ info|serv1|6274252 INET :::80|Querying services D:48bbf69f4b1c|storage_impl] DP.destroy dbg:VM.migrate_send R:a0b3d4d1a709 dp:mirror_vbd/19/xvda allow_leak:false
      xapi: [error|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|xapi] Caught Storage_interface.Does_not_exist(_): cleaning up
      xapi: [debug|serv1|6273252 INET :::80|VM.migrate_send R:a0b3d4d1a709|xenops] suppressing xenops events on VM: 48c93cd6-0fce-1c9e-7018-febf2c0d6c39
      xenopsd-xc: [debug|serv1|856351 |VM.migrate_send R:a0b3d4d1a709|xenops_server] VM.stat 48c93cd6-0fce-1c9e-7018-febf2c0d6c39
      xenopsd-xc: [error|serv1|856351 ||backtrace] VM.migrate_send R:a0b3d4d1a709 failed with exception Xenops_interface.Does_not_exist(_)
       

      Log on the destination pool:

       
      xenopsd-xc: [debug|serv2|25 |VM.migrate_send R:a0b3d4d1a709|xenguesthelper] connect: args = [ -mode hvm_restore -domid 9 -fd e7848009-2ff2-46ca-b621-6365766be462 -store_port 5 -console_port 6 -fork true ]
      xenopsd-xc: [error|serv2|25 |VM.migrate_send R:a0b3d4d1a709|xenguesthelper] Received: xenguest: xc_domain_restore: [1] Restore failed (1 = Operation not permitted)
      xenopsd-xc: [error|serv2|25 |VM.migrate_send R:a0b3d4d1a709|xenguesthelper] Memory F 25446680 KiB S 0 KiB T 65422 MiB
      xenopsd-xc: [error|serv2|25 |VM.migrate_send R:a0b3d4d1a709|memory] VM 48c93cd6-0fce-1c9e-7018-febf2c0d6c39: restore failed: (Failure#012  "Error from xenguesthelper: xenguest: xc_domain_restore: [1] Restore failed (1 = Operation not permitted)")
      xenopsd-xc: [debug|serv2|25 |VM.migrate_send R:a0b3d4d1a709|memory] Memory reservation_id = 32e7595b-5609-4e29-8a34-0c39b0df8b49
      xenopsd-xc: [debug|serv2|25 |VM.migrate_send R:a0b3d4d1a709|memory] delete_reservation 32e7595b-5609-4e29-8a34-0c39b0df8b49
      xenopsd-xc: [ info|serv2|25 |VM.migrate_send R:a0b3d4d1a709|xenops_server] Caught (Failure#012  "Error from xenguesthelper: xenguest: xc_domain_restore: [1] Restore failed (1 = Operation not permitted)") executing ["VM_receive_memory", ["48c93cd6-0fce-1c9e-7018-febf2c0d6c39", 8589934592, 17]]: triggering cleanup actions
      xenopsd-xc: [debug|serv2|25 |VM.migrate_send R:a0b3d4d1a709|xenops_server] Task 13961 reference VM.migrate_send R:a0b3d4d1a709: ["VM_check_state", "48c93cd6-0fce-1c9e-7018-febf2c0d6c39"]
      xenopsd-xc: [debug|serv2|25 |VM.migrate_send R:a0b3d4d1a709|xenops_server] VM.shutdown 48c93cd6-0fce-1c9e-7018-febf2c0d6c39
      xenopsd-xc: [debug|serv2|25 |VM.migrate_send R:a0b3d4d1a709|xenops_server] Performing: ["VM_destroy_device_model", "48c93cd6-0fce-1c9e-7018-febf2c0d6c39"]
       

       

      Another test I made:

      • created 3 VMs (Debian Jessie, Ubuntu 16.04, pfSense 2.4.2) on the E5-2618L v3 host
      • moved them to the E5-2618L v4 host
      • moved them back to the E5-2618L v3 host.

      It works.

       

      Another test I made:

      • created 3 VMs (Debian Jessie, Ubuntu 16.04, pfSense 2.4.2) on the E5-2618L v3 host
      • moved them to another host with an E5-2618L v3 CPU.

      It works.

       

      • created 3 VMs (Debian Jessie, Ubuntu 16.04, pfSense 2.4.2) on the E5-2618L v3 host
      • moved them to the E5-2618L v4 host
      • stopped/started the VMs
      • moved them back to the E5-2618L v3 host.

      It fails for the Debian Jessie and Ubuntu VMs, but works for the pfSense 2.4.2 VM.
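
      This pattern suggests the failure may depend on the CPU feature set the VM recorded when it last booted (on the v4 host). One way to check, again a sketch with the XenAPI Python bindings and placeholder connection details, is to dump the VM's last_boot_CPU_flags and compare it with the destination host's cpu_info from the earlier sketch:

      import XenAPI

      def dump_vm_boot_cpu_flags(master_url, user, password, vm_name):
          """Print the CPU flags the VM recorded at its last boot.

          master_url, credentials and vm_name are placeholders; compare the
          output with the destination host's cpu_info to spot missing features.
          """
          session = XenAPI.Session(master_url)
          session.xenapi.login_with_password(user, password)
          try:
              vm = session.xenapi.VM.get_by_name_label(vm_name)[0]
              print(session.xenapi.VM.get_last_boot_CPU_flags(vm))
          finally:
              session.xenapi.session.logout()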

       

      Is there a way to make it work for all VMs?

      Thank you.

       

          People

            Assignee: Unassigned
            Reporter: Guillaume de Lafond (delaf)
