Thursday, October 16, 2014

OpenStack Icehouse installation error: nova-api service getting stopped

While trying to install OpenStack Icehouse, I faced an issue with the nova-api service: it was not getting started. The following error was coming up in the nova-api log:

Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-save -c
Exit code: 1

 nova Stdout: ''
2014-10-17 07:21:08.058 27270 TRACE nova Stderr: 'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 6, in <module>\n    from oslo.rootwrap.cmd import main\nImportError: No module named rootwrap.cmd\n'

The problem was with the oslo.rootwrap module; it was broken.

The solution is to upgrade the module using pip:

 #pip install oslo.rootwrap --upgrade
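After upgrading, you can confirm that the import nova-rootwrap was failing on now resolves. A minimal sketch of that kind of check, using the stdlib json module as a stand-in since oslo.rootwrap is only present on the OpenStack node (there, replace json with oslo.rootwrap.cmd):

```shell
# Check that a Python module imports cleanly; on the OpenStack node,
# swap "json" for "oslo.rootwrap.cmd"
python3 -c "import json" && echo "module OK" || echo "module broken"
```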

Tuesday, September 2, 2014

OpenStack: Restrict instance deletion

In OpenStack, by default, users who are members of a tenant can delete all instances in that tenant, even those spun up by other users. If you want to restrict that, you need to tweak the nova policy file, i.e. /etc/nova/policy.json

Add the following rule to the file, and point the instance delete action at it:

    "admin_or_user": "is_admin:True or user_id:%(user_id)s",
    "compute:delete": "rule:admin_or_user",

Make the same changes in the /etc/openstack-dashboard/nova_policy.json file also

Now restart the openstack-nova-api service

Now users will be able to delete only those instances spun up by them; admin users will be able to delete all instances.
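The effect of the rule can be sketched as a small predicate (this is only an illustration of the logic, not the real oslo.policy engine):

```shell
# may_delete IS_ADMIN CALLER_ID OWNER_ID -> exit 0 when deletion is allowed
may_delete() {
  [ "$1" = "true" ] || [ "$2" = "$3" ]
}

may_delete false alice alice && echo "owner: allowed"
may_delete false bob   alice || echo "other member: denied"
may_delete true  admin alice && echo "admin: allowed"
```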

Monday, September 1, 2014

OpenStack : Assign floating IP using heat template

Creating YAML templates that assign floating IPs to the instances being spawned can be a bit tricky. Let us look at a scenario where we need to spin up a VM, assign a floating IP from a pool, and reference this floating IP in the userdata as well. We will make use of the network IDs of the internal and external networks, as well as the subnet ID of the internal network.

The logical workflow is as follows:

  • Create a port resource using internal network and internal subnet IDs
  • Create a floating IP resource, referring to the external network ID
  • Associate the floating IP to the port
  • In the server resource being created, associate the port resource

Now we will see how this can be implemented using both HOT and AWS template formats

HOT template  sample:

1. Define the network ID parameters:

    parameters:
      private_net:
        type: string
        default: "<default private network id>"
        description: ID of the private network for the compute server
      private_subnet:
        type: string
        default: "<default private subnet id>"
        description: ID of the private subnetwork for the compute server
      public_net:
        type: string
        default: "<default external network id>"
        description: ID of the public network for the compute server

You can get the IDs of the networks and subnet from the OpenStack UI or from the command line (neutron net-list and neutron subnet-list)

2. Create the resources:

Define a security group, a Neutron port and a floating IP, and associate the floating IP to the port:

    resources:
      external_access:
        type: AWS::EC2::SecurityGroup
        properties:
          GroupDescription: Enable access to the application and SSH access
          SecurityGroupIngress: [
            {IpProtocol: tcp, FromPort: {get_param: port}, ToPort: {get_param: port},
             CidrIp: ""},
            {IpProtocol: tcp, FromPort: "8080", ToPort: "8080",
             CidrIp: ""},
            {IpProtocol: icmp, FromPort: "-1", ToPort: "-1",
             CidrIp: ""}]

      public_port:
        type: OS::Neutron::Port
        properties:
          network_id: { get_param: private_net }
          fixed_ips:
            - subnet_id: { get_param: private_subnet }
          security_groups:
            - {get_resource: external_access}

      floating_ip:
        type: OS::Neutron::FloatingIP
        properties:
          floating_network_id: { get_param: public_net }
          port_id: { get_resource: public_port }

3. Associate the port to your VM instance:

      server:
        type: OS::Nova::Server
        properties:
          networks:
            - port: { get_resource: public_port }

AWS template  sample:

The logic is almost the same as in the HOT template, just that we are not defining the security groups here

1. Define the network ID parameters:

    "external_network" : {
      "Default": "<default external network id>",
      "Description" : "UUID of an existing external network",
      "Type" : "String"
    "internal_network" : {
      "Default": "<default private network id>"",
      "Description" : "UUID of an existing internal network",
      "Type" : "String"
    "internal_subnet" : {
      "Default": "<default private subnet id>",
      "Description" : "UUID of an existing internal subnet",
      "Type" : "String"

2. Create the resources:

    "port_floating": {
      "Type": "OS::Neutron::Port",
      "Properties": {
        "network_id": { "Ref" : "internal_network" },
        "fixed_ips": [
          {"subnet_id": { "Ref" : "internal_subnet" }

    "floating_ip": {
      "Type": "OS::Neutron::FloatingIP",
      "Properties": {
        "floating_network_id": { "Ref" : "external_network" }
    "floating_ip_assoc": {
      "Type": "OS::Neutron::FloatingIPAssociation",
      "Properties": {
        "floatingip_id": { "Ref" : "floating_ip" },
        "port_id": { "Ref" : "port_floating" }

3. Associate the port to your VM instance:

    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "NetworkInterfaces" : [ { "Ref" : "port_floating" } ],

Friday, August 29, 2014

OpenStack monitoring: Zabbix Ceilometer proxy installation

Recently, a Ceilometer proxy for Zabbix was released by OneSource. This proxy pulls instance information from OpenStack and populates it in Zabbix.

The source code can be downloaded from here:

The basic prerequisites for the server where the proxy runs are Python and the Pika library. There should also be network connectivity from the proxy machine to your OpenStack installation.

In the test installation, I tried it on a standalone Ubuntu machine. Python can be installed using apt-get, which was pretty much straightforward. The document suggests installing Pika using the pip package manager. Since the machine was sitting behind a proxy, we had some trouble using pip. As a workaround, Pika can be downloaded and installed directly from its source repository here

Simply download all the files in the repo and execute the script to install Pika.

Now, coming back to the Ceilometer proxy installation. For this, you have to uncomment the following line in your keystone.conf file.

 notification_driver = keystone.openstack.common.notifier.rpc_notifier
The next step is to update the proxy.conf file with the connection parameters of your OpenStack installation. This is also pretty straightforward for people familiar with OpenStack. I had some confusion about which RabbitMQ account to use; the 'guest' account worked fine for me. Other than that, it is just your Ceilometer API IP address and Keystone authentication details. In the zabbix_configs section, you need to provide your Zabbix host IP and the admin credentials for web login.

Once the proxy.conf file is updated, you can simply run the script to start the monitoring. A new entry for the proxy will be created under Administration -> DM.

Note: one shortcoming we have noticed is that instances created using the Heat orchestrator are not picked up by the proxy. Also, machines are not cleaned up from Zabbix once they are deleted from OpenStack.


Friday, August 22, 2014

Agentless OpenStack monitoring using Zabbix

Zabbix can be a tough cookie to crack! And if you are planning to monitor OpenStack using Zabbix, there is a lot of additional work to be done, more so if you want to go the agentless way, i.e. using SNMP.

So, here we go. I am using Ubuntu 12.04, both for my Zabbix server and for the OpenStack nodes.

  • First, you need to install the following packages using apt-get on the machine being monitored, i.e. the OpenStack node

apt-get install snmpd
apt-get install snmp snmp-mibs-downloader

  • snmpd will be installed by default on your Zabbix server, but you need to install the snmp and snmp-mibs-downloader packages on the server as well
  • Once that is done, edit the /etc/snmp/snmpd.conf file on your OpenStack node and update the following values
agentAddress udp:161,udp6:[::1]:161
rocommunity public <IP of your zabbix server>
proc  apache2
proc  neutron-server
proc  nova-api

PS: the process names will depend on the OpenStack node. Name all the processes that you want to monitor.

  • Create the OpenStack host in the Zabbix server, selecting the SNMP interface during host creation
  • By default, Zabbix has SNMP templates for monitoring disk space, CPU utilization, network interface status and system uptime. You can attach those templates to your host
  • In order to monitor the memory of the system using SNMP, we can make use of the following OIDs to create new templates
Memory Statistics (standard UCD-SNMP-MIB OIDs):
Total Swap Size: .1.3.6.1.4.1.2021.4.3.0
Available Swap Space: .1.3.6.1.4.1.2021.4.4.0
Total RAM in machine: .1.3.6.1.4.1.2021.4.5.0
Total RAM used: .1.3.6.1.4.1.2021.4.6.0
Total RAM Free: .1.3.6.1.4.1.2021.4.11.0
Total RAM Shared: .1.3.6.1.4.1.2021.4.13.0
Total RAM Buffered: .1.3.6.1.4.1.2021.4.14.0
Total Cached Memory: .1.3.6.1.4.1.2021.4.15.0

  • For example, if you want to monitor the total RAM used, first execute the following command from the Zabbix server
 snmpwalk -v 2c -c public <openstack node ip> .1.3.6.1.4.1.2021.4.6.0

You will get an output which will look like this:

UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 2420936 kB

In this case, memAvailReal.0 is the value you should use for the SNMP OID in the next step
  • You can clone any of the existing SNMP templates and create new items. You will have to update the 'Key' and 'SNMP OID' values in the new item based on the above output. The key can be any unique value; make sure that the OID matches the value mentioned in the above step
  • In case you want to monitor a process via SNMP, as mentioned earlier, it should be defined in the machine's snmpd.conf. Now execute the following command from the Zabbix server

 snmpwalk -v 2c -c public <openstack node ip> prTable 
  • The output should look something like this

UCD-SNMP-MIB::prNames.1 = STRING: mountd
UCD-SNMP-MIB::prNames.2 = STRING: ntalkd
UCD-SNMP-MIB::prNames.3 = STRING: sendmail
UCD-SNMP-MIB::prNames.4 = STRING: /usr/bin/nova-api
UCD-SNMP-MIB::prNames.5 = STRING: apache2
UCD-SNMP-MIB::prNames.6 = STRING: neutron-server
UCD-SNMP-MIB::prNames.7 = STRING: nova-api

UCD-SNMP-MIB::prErrorFlag.1 = INTEGER: error(1)
UCD-SNMP-MIB::prErrorFlag.2 = INTEGER: noError(0)
UCD-SNMP-MIB::prErrorFlag.3 = INTEGER: error(1)
UCD-SNMP-MIB::prErrorFlag.4 = INTEGER: error(1)
UCD-SNMP-MIB::prErrorFlag.5 = INTEGER: noError(0)
UCD-SNMP-MIB::prErrorFlag.6 = INTEGER: noError(0)
UCD-SNMP-MIB::prErrorFlag.7 = INTEGER: noError(0)

Note the prErrorFlag.n field. We will use this as the SNMP OID in the template for process monitoring. The logic, as is clear from the output above, is that if the process is up and running, the output will be noError(0).
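As a quick illustration of that logic, the failing entries can be picked out of such output with standard text tools; the variable below stands in for live snmpwalk output:

```shell
# Sample prErrorFlag lines, as returned by snmpwalk against prTable
snmp_output='UCD-SNMP-MIB::prErrorFlag.1 = INTEGER: error(1)
UCD-SNMP-MIB::prErrorFlag.2 = INTEGER: noError(0)
UCD-SNMP-MIB::prErrorFlag.3 = INTEGER: error(1)'

# Print the index of every process whose flag reads error(1)
echo "$snmp_output" | grep 'error(1)' | cut -d. -f2 | cut -d' ' -f1
```

On a live Zabbix server you would pipe the snmpwalk command itself instead of the sample variable.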

Thursday, August 21, 2014

Tech tip: Increase openstack project quota from command line

1. List the keystone tenants and search for the required tenant

keystone tenant-list |grep <tenantname>

Note the ID of the tenant being displayed. You need to use this ID in the next command

2. Get the quota details of the tenant using the following command

nova-manage project quota <tenantid>

You will get output similar to this:

Quota                                Limit      In Use     Reserved
metadata_items                       128        0          0
injected_file_content_bytes          10240      0          0
ram                                  51200      0          0
floating_ips                         10         0          0
security_group_rules                 20         0          0
instances                            10         0          0
key_pairs                            100        0          0
injected_files                       5          0          0
cores                                20         0          0
fixed_ips                            unlimited  0          0
injected_file_path_bytes             255        0          0
security_groups                      10         0          0

3. Update the value of the key you want to change. For example, if you want to increase the number of instances from 10 to 20, give the following command

nova-manage project quota <tenantid> --key instances --value 20

4. Now run the "nova-manage project quota <tenantid>" command again to verify that the quota is updated
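If you need to script such checks, the current limit can be pulled straight out of that table; a small sketch using a sample of the output above (on a live system you would pipe the nova-manage command itself):

```shell
# Sample lines from `nova-manage project quota <tenantid>` output
quota_output='instances                            10         0          0
cores                                20         0          0'

# Extract the current limit for the "instances" quota
echo "$quota_output" | awk '$1 == "instances" {print $2}'
```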

Wednesday, August 13, 2014

Instances go to paused state in OpenStack Havana


All instances in OpenStack will be in paused state. You will not be able to create new instances or switch on any of the paused instances.


Most often, the reason is lack of disk space on your compute node. By default, instances are created in the /var/lib/nova/instances folder of the compute node. This location is defined by the "instances_path" parameter in the nova.conf of the compute node. If your "/" partition is running out of disk space, you cannot perform any instance-related operations.


  • Change the "instances_path" location to a different location. Ideally, you could attach an additional disk, mount it to a directory and update the directory path in the "instances_path" variable.
  • A problem arises when you already have a number of instances in the previous folder. You should move them over to the new location.
  • Also, you should set the group and ownership of the new instances folder to the "nova" user, so that the permissions, ownership and group memberships are the same as those of the previous folder
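The move itself can be sketched as below. Temp directories stand in for /var/lib/nova/instances and the new mount point so the sketch is runnable anywhere; on a real compute node you would use the actual paths and restore nova ownership:

```shell
# Stand-ins for the old and new instances_path locations
OLD=$(mktemp -d); NEW=$(mktemp -d)
touch "$OLD/instance-00000001"     # stand-in for an existing instance dir

# Copy everything over, preserving permissions and timestamps
cp -a "$OLD"/. "$NEW"/
# chown -R nova:nova "$NEW"        # on the real node, restore nova ownership

ls "$NEW"                          # the instance shows up in the new location
rm -rf "$OLD" "$NEW"
```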

OpenStack Havana neutron agent-list alive status error

In some scenarios, the OpenStack neutron agent status will show as xxx even though you can see that the neutron agent services are up and running on the network and compute nodes. You may also see the agent status fluctuate if you try the agent-list command repeatedly. Confusing, right?

Actually, the problem is not with the actual agent status, but with two default configuration values in neutron.conf, i.e. agent_down_time and report_interval. These control the intervals at which neutron checks agent status. There is a bug reported against this issue.

As per the details in the bug: "report_interval is how often an agent sends out a heartbeat to the service. The Neutron service responds to these 'report_state' RPC messages by updating the agent's heartbeat DB record. The last heartbeat is then compared to the configured agent_down_time to determine if the agent is up or down."

The neutron agent-list command uses the agent_down_time value to display the status. The default values are set very low, because of which the alive status is shown as down or fluctuating.

Solution: as suggested in the bug, update the values of agent_down_time and report_interval to 75 and 30 seconds respectively. Since this also resolves the RPC issue with the Open vSwitch agent on the compute node, all the agents will then be shown as alive.
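In neutron.conf, that corresponds to the following fragment (assuming both options sit in the [DEFAULT] section, as in the Havana-era sample config):

```
[DEFAULT]
agent_down_time = 75
report_interval = 30
```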

Friday, July 25, 2014

Ubuntu 12.04 P2V conversion using non-root user

Ubuntu P2V conversion is not as straightforward as for other Linux machines with a root user. This is because we use a non-root user by default to manage Ubuntu machines, and the root credentials are not known to us. So how do you convert a physical Ubuntu machine to virtual without the root credentials? Here are the steps.

PS: please note the steps are for VMware vCenter Standalone Converter 5.5

1. Edit the VMware configuration files converter-agent.xml and converter-worker.xml present in C:\ProgramData\VMware\VMware vCenter Converter Standalone, and update the useSudo flag from false to true

2. Restart the VMware Converter Standalone agent service

3. On the physical server that needs to be converted, edit the /etc/sudoers file and add the following entry

<username> ALL=(ALL) NOPASSWD: ALL

4. Ensure that the following entry is not present in /etc/sudoers

Defaults requiretty

5. You need to change the user ID and group ID of the non-root user to 0. Edit the /etc/passwd and /etc/group files for this.

For example, in /etc/passwd, set the user's UID and GID fields to 0 (here "ubuntu" is a placeholder username):

    ubuntu:x:0:0:Ubuntu:/home/ubuntu:/bin/bash

In /etc/group, set the user's primary group ID to 0 as well:

    ubuntu:x:0:


6. In /etc/ssh/sshd_config, allow root login through ssh

PermitRootLogin yes

7. Now open your standalone converter as administrator and start the conversion wizard
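Steps 3 and 4 can be sanity-checked from a shell before launching the wizard. The sketch below runs against a generated sample sudoers file with a placeholder "ubuntu" user; on the real machine you would check /etc/sudoers itself (ideally via visudo):

```shell
# Build a sample sudoers file like the one produced by steps 3 and 4
SUDOERS=$(mktemp)
printf '%s\n' 'ubuntu ALL=(ALL) NOPASSWD: ALL' > "$SUDOERS"

# The NOPASSWD entry must exist and requiretty must not be set
grep -q 'NOPASSWD: ALL' "$SUDOERS" && echo "NOPASSWD entry present"
grep -q 'requiretty' "$SUDOERS" || echo "requiretty not set"

rm -f "$SUDOERS"
```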

Networking considerations during the conversion

A helper VM will be created during the conversion process, which will either get an IP from DHCP or need a static IP assigned by you. It will be attached by default to the "VM Network" port group, though there is an option to change it. If your network doesn't have DHCP, assign a static IP to the helper VM and make sure that VMs in the assigned port group can communicate with the physical server being converted.

Monday, July 21, 2014

Tech tip: Create separate routing table for multiple nics in Ubuntu

Scenario: two NICs in an Ubuntu machine, with a requirement to assign IPs from different VLANs to each of these interfaces, and access from the outside world to all the assigned IPs.

The situation was a bit complex since the machine was a VM in ESXi, and each of these NICs was added to port groups of two VLANs, 200 and 201. The first NIC, eth0, was assigned a gateway and was accessible from the outside world. The second NIC, eth1, was assigned an IP in VLAN 201, but machines in a different VLAN were not able to ping it.


In order to solve the issue, we had to add an additional routing table to select the default route for packets that should go out of eth1. The following lines, added to the eth1 interface configuration in the /etc/network/interfaces file, did the trick

post-up ip route add default via <vlan-201 gateway IP> dev eth1 table 101
post-up ip rule add from <eth1 IP> lookup 101
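Putting it together, the full eth1 stanza in /etc/network/interfaces would look roughly like this (all addresses are placeholders for illustration):

```
auto eth1
iface eth1 inet static
    address 10.201.0.10
    netmask 255.255.255.0
    post-up ip route add default via 10.201.0.1 dev eth1 table 101
    post-up ip rule add from 10.201.0.10/32 lookup 101
```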