
Nova Placement API / NoValidHost

How to find the gap when the Placement API DB is out of sync, or when you keep receiving NoValidHost errors: there is a good chance the resources tracked in the placement DB are out of date. One change in the Nova scheduler is that the memory/disk/CPU filters are no longer used; instead the scheduler first asks the Placement API for a list of hypervisor (HV) candidates matching the flavor, then continues with the remaining filters such as ComputeFilter, etc.
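Before going into the DB, you can ask placement the same question the scheduler does. A minimal sketch (the endpoint URL, token, and flavor resources are placeholders; /allocation_candidates needs placement microversion 1.10 or later):

# Sketch: ask placement which resource providers could host a flavor's resources,
# i.e. the first step the scheduler performs before running its own filters.
import requests

PLACEMENT_URL = "http://placement.example.com"   # placeholder endpoint
TOKEN = "gAAAAA..."                              # placeholder keystone token

resp = requests.get(
    PLACEMENT_URL + "/allocation_candidates",
    params={"resources": "VCPU:2,MEMORY_MB:4096,DISK_GB:20"},  # flavor resources
    headers={
        "X-Auth-Token": TOKEN,
        "OpenStack-API-Version": "placement 1.10",  # minimum for allocation_candidates
    },
)
resp.raise_for_status()
for rp_uuid, summary in resp.json()["provider_summaries"].items():
    print(rp_uuid, summary["resources"])

If this returns no candidates for a flavor that should clearly fit, the gap is on the placement side, and the queries below help locate it.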

DB query to get basic capacity details:

-- resource_class_id: 0 = VCPU, 1 = MEMORY_MB, 2 = DISK_GB (standard resource classes)
select p.uuid, p.name hypervisor_hostname,
(select i.total from inventories i where p.id=i.resource_provider_id and i.resource_class_id=0) vcpus,
(select sum(used) from allocations a where a.resource_provider_id=p.id and a.resource_class_id=0) vcpus_used,
(select i.total from inventories i where p.id=i.resource_provider_id and i.resource_class_id=1) memory_mb,
(select sum(used) from allocations a where a.resource_provider_id=p.id and a.resource_class_id=1) memory_mb_used,
(select i.total from inventories i where p.id=i.resource_provider_id and i.resource_class_id=2) local_gb,
(select sum(used) from allocations a where a.resource_provider_id=p.id and a.resource_class_id=2) local_gb_used,
(select count(id) from allocations a where a.resource_provider_id=p.id and a.resource_class_id=2) running_vms
from resource_providers p order by p.name;

 

 

Combine nova DB + placement

Assume this query runs in the Placement API DB, and the Nova cell DB name is nova_cell_1.

select x.hypervisor_hostname, x.vcpus p_vcpus, y.vcpus n_vcpus, x.vcpus_used p_vcpus_used,y.vcpus_used n_vcpus_used, x.memory_mb p_memory_mb, y.memory_mb n_memory_mb, x.memory_mb_used p_memory_mb_used, y.memory_mb_used n_memory_mb_used, x.local_gb p_local_gb, y.local_gb n_local_gb, x.local_gb_used p_local_gb_used, y.local_gb_used n_local_gb_used, x.running_vms p_running_vms, y.running_vms n_running_vms from
(select p.name hypervisor_hostname,
(select i.total from inventories i where p.id=i.resource_provider_id and i.resource_class_id=0) vcpus,
(select sum(used) from allocations a where a.resource_provider_id=p.id and a.resource_class_id=0) vcpus_used,
(select i.total from inventories i where p.id=i.resource_provider_id and i.resource_class_id=1) memory_mb,
(select sum(used) from allocations a where a.resource_provider_id=p.id and a.resource_class_id=1) memory_mb_used,
(select i.total from inventories i where p.id=i.resource_provider_id and i.resource_class_id=2) local_gb,
(select sum(used) from allocations a where a.resource_provider_id=p.id and a.resource_class_id=2) local_gb_used,
(select count(id) from allocations a where a.resource_provider_id=p.id and a.resource_class_id=2) running_vms
from resource_providers p) x
join
(select hypervisor_hostname, vcpus, vcpus_used, memory_mb, memory_mb_used, local_gb, local_gb_used, running_vms from nova_cell_1.compute_nodes where deleted=0 ) y
on x.hypervisor_hostname=y.hypervisor_hostname
order by x.hypervisor_hostname;
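If direct DB access is not an option, roughly the same comparison can be done over the APIs. A hedged sketch (endpoints, token, and header values are assumptions for illustration), using placement's per-provider usages and Nova's os-hypervisors detail:

# Sketch: compare placement usage with nova's compute_nodes view via the APIs.
# PLACEMENT_URL, NOVA_URL and the token are placeholders.
import requests

PLACEMENT_URL = "http://placement.example.com"
NOVA_URL = "http://nova.example.com/v2.1"
TOKEN = "gAAAAA..."
P_HEADERS = {"X-Auth-Token": TOKEN, "OpenStack-API-Version": "placement 1.10"}

# Placement side: usage per resource provider.
p_usage = {}
providers = requests.get(PLACEMENT_URL + "/resource_providers",
                         headers=P_HEADERS).json()["resource_providers"]
for rp in providers:
    usages = requests.get(PLACEMENT_URL + "/resource_providers/%s/usages" % rp["uuid"],
                          headers=P_HEADERS).json()["usages"]
    p_usage[rp["name"]] = usages

# Nova side: compute_nodes as exposed by os-hypervisors.
hypervisors = requests.get(NOVA_URL + "/os-hypervisors/detail",
                           headers={"X-Auth-Token": TOKEN}).json()["hypervisors"]
for hv in hypervisors:
    name = hv["hypervisor_hostname"]
    p = p_usage.get(name, {})
    if p.get("VCPU", 0) != hv["vcpus_used"] or p.get("MEMORY_MB", 0) != hv["memory_mb_used"]:
        print("mismatch on %s: placement=%s nova=(%s vcpus, %s MB)"
              % (name, p, hv["vcpus_used"], hv["memory_mb_used"]))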

 

Is there any utility to rebuild the DB?

Set up Spice Console

Basic Flow

Obtain a URL for the web-based Spice console

openstack console url show --spice UUID

+-------+-----------------------------------------------------------------------------------------------------+
| Field | Value                                                                                               |
+-------+-----------------------------------------------------------------------------------------------------+
| type  | spice-html5                                                                                         |
| url   | http://yyy.xxx.com/spice_auto.html?token=9654cb37-000-000-000-fbf30f17293b |
+-------+-----------------------------------------------------------------------------------------------------+

Paste the above URL into a browser. It will load the Spice console (written in JavaScript), which connects back to spicehtml5proxy, which in turn forwards traffic to the corresponding HV.
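The same URL can also be requested straight from the Nova API. A sketch (placeholder endpoint, token, and server UUID; the remote-consoles call needs compute microversion 2.6 or later):

# Sketch: request a spice-html5 console URL for a server from the Nova API.
import requests

NOVA_URL = "http://nova.example.com/v2.1"                # placeholder endpoint
TOKEN = "gAAAAA..."                                      # placeholder keystone token
SERVER = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"          # placeholder server UUID

resp = requests.post(
    NOVA_URL + "/servers/%s/remote-consoles" % SERVER,
    json={"remote_console": {"protocol": "spice", "type": "spice-html5"}},
    headers={"X-Auth-Token": TOKEN,
             "X-OpenStack-Nova-API-Version": "2.6"},     # remote-consoles needs >= 2.6
)
resp.raise_for_status()
print(resp.json()["remote_console"]["url"])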

Obtain the access URL and token

Request path:

Client → Nova API → RPC (get_spice_console) → Cell → RPC (get_spice_console) → HV

Response path:

Client ← (url + token) ← Nova API ← Cell (saves token → connection info into the console auth store with a TTL) ← (host, port, url + token, token) ← HV
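Conceptually, what the cell side stores in console auth is just a token → connection-info mapping with an expiry, which the proxy later looks up. A purely illustrative sketch, not Nova's actual implementation:

# Illustration only: the shape of the console auth token store.
import time

class TokenStore(object):
    def __init__(self, ttl):
        self.ttl = ttl
        self._tokens = {}

    def authorize(self, token, host, port, access_url):
        # Cell side: save connection info under the token, with a TTL.
        self._tokens[token] = {"host": host, "port": port,
                               "access_url": access_url,
                               "expires": time.time() + self.ttl}

    def check(self, token):
        # Proxy side: look up the token; unknown or expired tokens are rejected.
        info = self._tokens.get(token)
        if info and info["expires"] > time.time():
            return info
        self._tokens.pop(token, None)
        return None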

On the browser:

The URL loads the app.

Round 1:

browser → (URL with token) → VIP → spicehtml5proxy (serves the content from [DEFAULT] web)

browser ← VIP ← (web-based Spice client) ← spicehtml5proxy

Round 2:

browser ←→ (WebSocket-based Spice traffic + token) ←→ VIP ←→ spicehtml5proxy (obtains connection info by token from console auth) ←→ (forwards traffic) ←→ HV

* Python >= 2.7.4 is required on the controller

Nova controller settings

[DEFAULT]
web = /usr/share/spice-html5 (the default location from which the web content is loaded)
[vnc]
enabled = False
[spice]
agent_enabled = True (optional)
enabled = True
html5proxy_base_url = http://os-vnc-vip-b01.ccg23.paypalc3.com/spice_auto.html (URL the end user can reach from the browser, VIP etc.)
#server_listen (used by the HV only)
#server_proxyclient_address (used by the HV only)
html5proxy_host = IP the proxy listens on, must be reachable from the VIP
html5proxy_port = port the proxy listens on
[console]
allowed_origins = http://os-vnc-vip-b01.ccg23.paypalc3.com (to be safe in case the LB modifies the Origin or Host header)
[consoleauth]
token_ttl = set the token TTL here

Hypervisor settings

The HV returns connection details, i.e. where the proxy should connect back to, including host and port.

The token is also generated on the HV, and the cell manager adds it into console auth later, along with the full access URL containing the token, so html5proxy_base_url is required on every HV as well.

[vnc]
enabled = False
[spice]
agent_enabled = True (optional)
enabled = True
server_listen = IP reachable from the controller
server_proxyclient_address = hostname or IP reachable from the controller
server_listen & server_proxyclient_address need to be identical
html5proxy_base_url = (URL end user can reach from browser, VIP etc...)

Console-auth & token store backend

By default console-auth saves everything in a local dictionary, so each spicehtml5proxy or cell manager (when authorizing a new token) may talk to a different console-auth instance, each maintaining its own independent token cache.

Fortunately, the token store is implemented with the oslo_cache library, with a configurable caching back end.

In production, memcached should be used so that console-auth can scale.

[cache]
enabled = True
memcache_servers = .....
[consoleauth]
token_ttl = set the token TTL here

More details regarding oslo.cache configuration:

https://docs.openstack.org/oslo.cache/latest/configuration/index.html
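For reference, this is roughly how a service consumes the [cache] section through oslo.cache; nova wires this up internally, so the sketch is only to show what the options above feed into (the config file path is an assumption):

# Sketch: build a cache region from the [cache] options in a config file.
from oslo_cache import core as cache
from oslo_config import cfg

conf = cfg.ConfigOpts()
cache.configure(conf)                                  # register the [cache] options
conf(["--config-file", "/etc/nova/nova.conf"])         # assumed config path

region = cache.create_region()
cache.configure_cache_region(conf, region)             # e.g. memcached back end

region.set("token-123", {"host": "hv1", "port": 5900})
print(region.get("token-123"))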

VIP/LB

Last but not least, the VIP/LB has to support either WebSocket (if doing L7) or just plain TCP, since Spice traffic requires upgrading HTTP to WebSocket.

OpenStack Nova Live Migration Security SASL

Live migration setup

Block-based migration flags (or leave them at the default):

# /etc/nova/nova.conf

block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED,VIR_MIGRATE_NON_SHARED_INC,VIR_MIGRATE_AUTO_CONVERGE

====TCP based======

/etc/nova/nova.conf

——————

..

[libvirt]

live_migration_uri = "qemu+tcp://%s/system"

..

/etc/libvirt/libvirtd.conf

——————

listen_tls = 0

listen_tcp = 1

unix_sock_group = "libvirt"

unix_sock_ro_perms = "0777"

unix_sock_rw_perms = "0770"

auth_unix_ro = "none"

auth_unix_rw = "none"

auth_tcp = "none"

/etc/default/libvirtd

——————————

start_libvirtd="yes"

libvirtd_opts="-l"

*restart both libvirt-bin & nova-compute *

testing

virsh -c qemu+tcp://hostname_peer/system hostname
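The same check can be done from Python with the libvirt bindings (python-libvirt installed; the peer hostname is a placeholder):

# Sketch: verify the qemu+tcp transport, equivalent to the virsh test above.
import libvirt

conn = libvirt.open("qemu+tcp://hostname_peer/system")   # placeholder peer hostname
print(conn.getHostname())
conn.close()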

====TCP+SASL ======

install

apt update

apt install sasl2-bin

/etc/nova/nova.conf

——————

..

[libvirt]

live_migration_uri = "qemu+tcp://%s/system"

..

/etc/libvirt/libvirtd.conf

——————

listen_tls = 0

listen_tcp = 1

unix_sock_group = "libvirt"

unix_sock_ro_perms = "0777"

unix_sock_rw_perms = "0770"

auth_unix_ro = "none"

auth_unix_rw = "none"

auth_tcp = "sasl"

/etc/default/libvirtd

——————————

start_libvirtd="yes"

libvirtd_opts="-l"

/etc/sasl2/libvirt.conf

—————————————

mech_list: digest-md5

sasldb_path: /etc/sasldb2

Create user test with nova as the password:

———

saslpasswd2 -a libvirt test

sasldblistusers2 -f /etc/sasldb2

Enable the libvirt client to authenticate automatically without a prompt:

——

/etc/libvirt/auth.conf

——————————

[credentials-defgrp]

authname=test

password=nova

[auth-libvirt-default]

credentials=defgrp

*restart both libvirt-bin & nova-compute *

Test; it should just work, with no prompt for user & password:

virsh -c qemu+tcp://hostname_peer/system hostname
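From Python, the SASL credentials can also be supplied programmatically instead of relying on /etc/libvirt/auth.conf. A sketch using libvirt.openAuth with the test/nova credentials created above:

# Sketch: authenticate against a SASL-protected libvirtd without auth.conf.
import libvirt

def request_cred(credentials, user_data):
    # Fill in the username and password when libvirt asks for them.
    for cred in credentials:
        if cred[0] == libvirt.VIR_CRED_AUTHNAME:
            cred[4] = "test"          # SASL user created with saslpasswd2
        elif cred[0] == libvirt.VIR_CRED_PASSPHRASE:
            cred[4] = "nova"          # its password
    return 0

auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE],
        request_cred, None]
conn = libvirt.openAuth("qemu+tcp://hostname_peer/system", auth, 0)
print(conn.getHostname())
conn.close()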

=====================

====TLS full validation ======

key tools

apt-get install gnutls-bin

/etc/nova/nova.conf

——————

..

[libvirt]

live_migration_uri = "qemu+tls://%s/system"

..

/etc/libvirt/libvirtd.conf

——————

listen_tls = 1

tls_no_verify_certificate = 0

tls_no_verify_address = 0

listen_tcp = 0

unix_sock_group = "libvirt"

unix_sock_ro_perms = "0777"

unix_sock_rw_perms = "0770"

auth_unix_ro = "none"

auth_unix_rw = "none"

auth_tls = "none"

/etc/default/libvirtd

——————————

start_libvirtd="yes"

libvirtd_opts="-l"

generate key pairs & certs following

https://wiki.libvirt.org/page/TLSSetup

service nova-compute restart

virsh -c qemu+tls://venus-2/system hostname

virsh -c qemu+tls://venus-6/system hostname

*restart both libvirt-bin & nova-compute *

test, should just work

virsh -c qemu+tls://hostname_peer/system hostname

======================

===TLS no validation ======

key tools

apt-get install gnutls-bin

/etc/nova/nova.conf

——————

..

[libvirt]

live_migration_uri = "qemu+tls://%s/system"

..

/etc/libvirt/libvirtd.conf

——————

listen_tls = 1

tls_no_verify_certificate = 1

tls_no_verify_address = 1

listen_tcp = 0

unix_sock_group = "libvirt"

unix_sock_ro_perms = "0777"

unix_sock_rw_perms = "0770"

auth_unix_ro = "none"

auth_unix_rw = "none"

auth_tls = "none"

/etc/default/libvirtd

——————————

start_libvirtd="yes"

libvirtd_opts="-l"

generate key pairs & certs following

https://wiki.libvirt.org/page/TLSSetup

service libvirt-bin restart

service nova-compute restart

virsh -c qemu+tls://host-1/system hostname

virsh -c qemu+tls://host-2/system hostname

*restart both libvirt-bin & nova-compute *

testing

virsh -c "qemu+tls://host-1/system?no_verify=1" hostname

virsh -c "qemu+tls://host-2/system?no_verify=1" hostname

 

========TLS no validation + SASL========

Following the TLS-no-validation setup, with a few changes:

/etc/libvirt/libvirtd.conf

——————

listen_tls = 1

tls_no_verify_certificate = 1

tls_no_verify_address = 1

listen_tcp = 0

unix_sock_group = "libvirt"

unix_sock_ro_perms = "0777"

unix_sock_rw_perms = "0770"

auth_unix_ro = "none"

auth_unix_rw = "none"

auth_tls = "sasl"

/etc/sasl2/libvirt.conf

—————————————

mech_list: digest-md5

sasldb_path: /etc/sasldb2

  • scram-sha-1 requires properly signed certs

Create user test with nova as the password:

———

saslpasswd2 -a libvirt test

sasldblistusers2 -f /etc/sasldb2

Enable the libvirt client to authenticate automatically without a prompt:

——

/etc/libvirt/auth.conf

——————————

[credentials-defgrp]

authname=test

password=nova

[auth-libvirt-default]

credentials=defgrp

*restart both libvirt-bin & nova-compute *

testing

virsh -c "qemu+tls://host-1/system?no_verify=1" hostname

====ssh tunneling ======

No libvirt change required

Only nova.conf needs changes, plus generating SSH keys for the nova user.

nova.conf

live_migration_uri = "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa"

Note that the user nova and the path to its private key file are specified in the URI.

Add the public keys to the authorized_keys files on the peer hosts.
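To confirm the nova user can open the same qemu+ssh URI that nova-compute will use, a quick sketch (run it as the nova user so the key file and host-key behaviour match; the peer hostname is a placeholder):

# Sketch: open the live-migration URI as the nova user.
import libvirt

uri = ("qemu+ssh://nova@hostname_peer/system"
       "?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa")
conn = libvirt.open(uri)
print(conn.getHostname())
conn.close()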

Note: the nova user needs a login shell; further research is required to limit the set of binaries it can run.

Havana message reorder fix

Probably a few people are still running Havana. A critical messaging-related issue is message reordering.

Nova RPC sends results back in multiple messages, which may go through different connections and reach the node running the master queue owner via different routes in the RabbitMQ cluster.

As a result, RPC results may become corrupted or incomplete.

Below you can see that at least two messages are sent out for a single RPC result, and each message is sent using a connection grabbed from the pool on the fly.

nova/openstack/common/rpc/amqp.py

    
    def _process_data(self, ctxt, version, method, namespace, args):
        ctxt.update_store()

        try:
            rval = self.proxy.dispatch(ctxt, version, method, namespace,
                                       **args)
            # Check if the result was a generator
            if inspect.isgenerator(rval):
                for x in rval:
                    ctxt.reply(x, None, connection_pool=self.connection_pool)
            else:
                ctxt.reply(rval, None, connection_pool=self.connection_pool)
            # This final None tells multicall that it is done.
            ctxt.reply(ending=True, connection_pool=self.connection_pool)
        # (except clauses elided; see the full method further below)

class RpcContext(rpc_common.CommonRpcContext):

    def reply(self, reply=None, failure=None, ending=False,
              connection_pool=None, log_failure=True):
        if self.msg_id:
            msg_reply(self.conf, self.msg_id, self.reply_q, connection_pool,
                      reply, failure, ending, log_failure)
            if ending:
                self.msg_id = None


def msg_reply(conf, msg_id, reply_q, connection_pool, reply=None,
              failure=None, ending=False, log_failure=True):
    ....
    with ConnectionContext(conf, connection_pool) as conn:
    ....
        if reply_q:
            msg['_msg_id'] = msg_id
            conn.direct_send(reply_q, rpc_common.serialize_msg(msg))
        else:
            conn.direct_send(msg_id, rpc_common.serialize_msg(msg))

A quick fix is to force all messages for the same reply to go through the same connection.

class RpcContext(rpc_common.CommonRpcContext):
    def reply2(self, reply=None, connection_pool=None):
        if self.msg_id:
            msg_reply2(self.conf, self.msg_id, self.reply_q,
                       connection_pool, reply)
            self.msg_id = None


def msg_reply2(conf, msg_id, reply_q, connection_pool, reply=None):
    def reply_msg(content, ending, conn):
        msg = {'result': content, 'failure': None}
        if ending:
            msg['ending'] = True
        _add_unique_id(msg)
        if reply_q:
            msg['_msg_id'] = msg_id
            conn.direct_send(reply_q, rpc_common.serialize_msg(msg))
        else:
            conn.direct_send(msg_id, rpc_common.serialize_msg(msg))

    # A single connection for the whole reply, so every message for this
    # msg_id takes the same route through the RabbitMQ cluster.
    with ConnectionContext(conf, connection_pool) as conn:
        # Check if the result was a generator
        if inspect.isgenerator(reply):
            for x in reply:
                reply_msg(x, False, conn)
        else:
            reply_msg(reply, False, conn)
        reply_msg(None, True, conn)



class ProxyCallback(_ThreadPoolWithWait):

    def _process_data(self, ctxt, version, method, namespace, args):
        """Process a message in a new thread.

        If the proxy object we have has a dispatch method
        (see rpc.dispatcher.RpcDispatcher), pass it the version,
        method, and args and let it dispatch as appropriate.  If not, use
        the old behavior of magically calling the specified method on the
        proxy we have here.
        """
        ctxt.update_store()
        try:
            rval = self.proxy.dispatch(ctxt, version, method, namespace,
                                       **args)
            ctxt.reply2(rval, self.connection_pool)
        except rpc_common.ClientException as e:
            LOG.debug(_('Expected exception during message handling (%s)') %
                      e._exc_info[1])
            ctxt.reply(None, e._exc_info,
                       connection_pool=self.connection_pool,
                       log_failure=False)
        except Exception:
            # sys.exc_info() is deleted by LOG.exception().
            exc_info = sys.exc_info()
            LOG.error(_('Exception during message handling'),
                      exc_info=exc_info)
            ctxt.reply(None, exc_info, connection_pool=self.connection_pool)


A more robust fix would be to reconstruct results based on a sequence number and the total number of messages, but then the fix would also need to handle timeouts, etc.

Kilo has already fixed this issue by returning the result in a single message.

nova suspend issue

nova suspend ended up with all instances in the error state.

Looking at the libvirt log shows the error: qemuMigrationUpdateJobStatus:946 : operation failed: domain save job: unexpectedly failed

The save images were generated in the following directory:

/var/lib/libvirt/qemu/save

/var is mounted on a separate partition with limited space; it ran out of space on /var again…

The fix: create a soft link pointing to a directory on a partition with more space.