
Friday, September 23, 2016

OpenStack Neutron VMware NSX REST API Extension Reference Now Available

Foreword

The OpenStack neutron team has done a fantastic job consolidating the neutron REST API reference source into the neutron-lib tree (note that this is an ongoing effort). Once built, the resulting documentation is published to the docs site and is what you see when viewing the neutron api-ref. While stadium projects can contribute their api-ref to the neutron-lib tree, non-stadium projects (such as the numerous neutron plugins including the VMware NSX plugin) must publish/maintain their own API reference documentation.


VMware NSX Neutron Plugin REST API Reference

We recently decided the most straightforward place to publish the OpenStack neutron VMware NSX plugin REST api-ref was right alongside the plugin source code. This document is in markdown format and can be found at vmware-nsx/api-ref/rest.md.

OpenStack neutron VMware NSX api-ref rendered in markdown

Moving forward, our goal is to keep the VMware NSX plugin api-ref in sync with the plugin source code so consumers can always find the api-ref for the release of the plugin they are using. Consumers using the VMware NSX neutron plugin for release REL can access the following URL to view its api-ref:

https://github.com/openstack/vmware-nsx/tree/stable/REL/api-ref/rest.md

However, since we just committed this documentation, consumers will only be able to access the api-ref using the above URL starting with the Ocata release (for now it can be accessed from the master branch of the plugin source repository).

We look forward to any feedback on this api-ref, so feel free to open a bug or reach out to me directly.

Tuesday, September 13, 2016

What's new with neutron-lib 0.4.0

Foreword

OpenStack neutron-lib version 0.4.0 was recently released to PyPI and contains a number of updates to constants, db, exceptions, policy and utils.

The complete list of public API changes is summarized below (and can be viewed on github):
New API Signatures
-----------------------------------------------------
neutron_lib.constants.DEVICE_OWNER_BAREMETAL_PREFIX = baremetal:
neutron_lib.db.constants.DESCRIPTION_FIELD_SIZE = 255
neutron_lib.db.constants.DEVICE_ID_FIELD_SIZE = 255
neutron_lib.db.constants.DEVICE_OWNER_FIELD_SIZE = 255
neutron_lib.db.constants.IP_ADDR_FIELD_SIZE = 64
neutron_lib.db.constants.LONG_DESCRIPTION_FIELD_SIZE = 1024
neutron_lib.db.constants.MAC_ADDR_FIELD_SIZE = 32
neutron_lib.db.constants.NAME_FIELD_SIZE = 255
neutron_lib.db.constants.PROJECT_ID_FIELD_SIZE = 255
neutron_lib.db.constants.RESOURCE_TYPE_FIELD_SIZE = 255
neutron_lib.db.constants.STATUS_FIELD_SIZE = 16
neutron_lib.db.constants.UUID_FIELD_SIZE = 36
neutron_lib.db.model_base.BASEV2 = PYIR UNKNOWN VALUE
neutron_lib.db.model_base.HasId
neutron_lib.db.model_base.HasId.id = PYIR UNKNOWN VALUE
neutron_lib.db.model_base.HasProject
neutron_lib.db.model_base.HasProject.get_tenant_id(self)
neutron_lib.db.model_base.HasProject.project_id = PYIR UNKNOWN VALUE
neutron_lib.db.model_base.HasProject.set_tenant_id(self, value)
neutron_lib.db.model_base.HasProject.tenant_id(cls)
neutron_lib.db.model_base.HasProjectNoIndex
neutron_lib.db.model_base.HasProjectNoIndex.project_id = PYIR UNKNOWN VALUE
neutron_lib.db.model_base.HasProjectPrimaryKey
neutron_lib.db.model_base.HasProjectPrimaryKey.project_id = PYIR UNKNOWN VALUE
neutron_lib.db.model_base.HasProjectPrimaryKeyIndex
neutron_lib.db.model_base.HasProjectPrimaryKeyIndex.project_id = PYIR UNKNOWN VALUE
neutron_lib.db.model_base.HasStatusDescription
neutron_lib.db.model_base.HasStatusDescription.status = PYIR UNKNOWN VALUE
neutron_lib.db.model_base.HasStatusDescription.status_description = PYIR UNKNOWN VALUE
neutron_lib.db.model_base.NeutronBaseV2
neutron_lib.db.utils.is_retriable(exception)
neutron_lib.db.utils.reraise_as_retryrequest(function)
neutron_lib.exceptions.DeviceNotFoundError
neutron_lib.exceptions.DeviceNotFoundError.message = Device '%(device_name)s' does not exist.
neutron_lib.exceptions.MultipleExceptions
neutron_lib.exceptions.PolicyCheckError
neutron_lib.exceptions.PolicyCheckError.message = Failed to check policy %(policy)s because %(reason)s.
neutron_lib.exceptions.PolicyInitError
neutron_lib.exceptions.PolicyInitError.message = Failed to initialize policy %(policy)s because %(reason)s.
neutron_lib.hacking.checks.check_no_eventlet_imports(logical_line)
neutron_lib.hacking.translation_checks.check_delayed_string_interpolation(logical_line, filename, noqa)
neutron_lib.policy.check_is_admin(context)
neutron_lib.policy.check_is_advsvc(context)
neutron_lib.policy.init(conf=PYIR UNKNOWN VALUE, policy_file=None)
neutron_lib.policy.refresh(policy_file=None)
neutron_lib.policy.reset()
neutron_lib.utils.file.ensure_dir(dir_path)
neutron_lib.utils.file.replace_file(file_name, data, file_mode=420)
neutron_lib.utils.helpers._(s)
neutron_lib.utils.helpers.camelize(s)
neutron_lib.utils.helpers.compare_elements(a, b)
neutron_lib.utils.helpers.dict2str(dic)
neutron_lib.utils.helpers.dict2tuple(d)
neutron_lib.utils.helpers.diff_list_of_dict(old_list, new_list)
neutron_lib.utils.helpers.get_random_string(length)
neutron_lib.utils.helpers.parse_mappings(mapping_list, unique_values=True, unique_keys=True)
neutron_lib.utils.helpers.round_val(val)
neutron_lib.utils.helpers.safe_decode_utf8(s)
neutron_lib.utils.helpers.safe_sort_key(value)
neutron_lib.utils.helpers.str2dict(string)
neutron_lib.utils.host.cpu_count()
neutron_lib.utils.net.get_hostname()
-----------------------------------------------------

Removed API Signatures
-----------------------------------------------------
-----------------------------------------------------

Changed API Signatures
-----------------------------------------------------
-----------------------------------------------------

Note that the report above does not include private API changes, tests, etc. As always, consumers should refrain from using private APIs as they are susceptible to change at any time.

In the previous neutron-lib release blog we dug into some of the actual API usage with sample Python code. However, as neutron-lib 0.4.0 has a number of new APIs, we'll stick to a high-level overview in this post with only a few quick, illustrative sketches rather than full walk-throughs.


Database

As shown in the public API report above, a number of classes, functions and constants have been re-homed from neutron.db into neutron_lib.db (for more details see the review). The goal here is to centralize common database functionality used across neutron stadium projects into neutron-lib. As part of this effort we need to be careful not to pull over any database functionality that couples neutron-lib to neutron.

In neutron-lib 0.4.0, neutron_lib.db.constants was added and defines a number of common database field sizes for use in place of hard-coded field size values. For example, instead of using 255 to define the size of a description field, consumers can use the neutron_lib.db.constants.DESCRIPTION_FIELD_SIZE constant.
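As a minimal sketch (assuming a SQLAlchemy column definition, which is how neutron models declare their fields), swapping a hard-coded size for the new constant might look like this:

import sqlalchemy as sa

from neutron_lib.db import constants as db_const


# Illustrative only: size the description column with the shared
# constant rather than a hard-coded 255.
description = sa.Column(sa.String(db_const.DESCRIPTION_FIELD_SIZE),
                        nullable=True)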

Additionally neutron_lib.db.model_base was added. This module contains a handful of "barebones" neutron base model definitions/mix-ins such as HasProject and others. Consumers can now start replacing their use of these models from neutron with the definition in neutron_lib.db.model_base.
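As a rough sketch, a hypothetical plugin model built on these mix-ins might look like the following (the MyResource class, its table name and its columns are made up for illustration):

import sqlalchemy as sa

from neutron_lib.db import constants as db_const
from neutron_lib.db import model_base


class MyResource(model_base.BASEV2, model_base.HasId,
                 model_base.HasProject):
    """A hypothetical model using the re-homed base model and mix-ins."""

    __tablename__ = 'my_resources'

    name = sa.Column(sa.String(db_const.NAME_FIELD_SIZE))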

Two new functions were also added to neutron_lib.db.utils (see API report above). These functions have been re-homed from neutron.db.api and are now ready for use.
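Those two functions are is_retriable() and reraise_as_retryrequest(). Here's a hedged sketch of how they might be consumed (the update_my_resource() function below is hypothetical):

from neutron_lib.db import utils as db_utils


@db_utils.reraise_as_retryrequest
def update_my_resource(context, resource_id, values):
    # Hypothetical DB operation; if it raises a retriable database
    # error, the decorator re-raises it wrapped so the caller's retry
    # machinery can run the operation again.
    pass


# is_retriable() reports whether an exception is one of the database
# errors worth retrying; an ordinary ValueError is not.
print(db_utils.is_retriable(ValueError('boom')))  # False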


Exceptions

A few exceptions were re-homed to neutron-lib, including DeviceNotFoundError, MultipleExceptions, PolicyCheckError and PolicyInitError. These exceptions are now ready for consumption from neutron-lib and will soon be deprecated at their original neutron location.
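As a quick, hedged example of consuming one of them (the policy name and reason below are made up):

from neutron_lib import exceptions


try:
    # Message format arguments are passed as keyword parameters, as
    # with other neutron exceptions.
    raise exceptions.PolicyInitError(policy='create_network',
                                     reason='policy file not found')
except exceptions.PolicyInitError as e:
    print(e)  # Failed to initialize policy create_network because ...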


Hacking Checks

The 0.4.0 release of neutron-lib contains two new hacking checks.

As the names imply, check_no_eventlet_imports checks that the eventlet library is not imported and check_delayed_string_interpolation ensures all logging calls use delayed string interpolation. While check_no_eventlet_imports is intended for neutron-lib specific hacking checks (consumers need not comply), check_delayed_string_interpolation will likely become one of the checks registered in neutron-lib's hacking check factory longer term.
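To give a feel for the style check_delayed_string_interpolation is after, here's a small sketch using a plain Python logger (the message and variable are made up):

import logging

LOG = logging.getLogger(__name__)
port_id = 'port-1'

# Flagged: the message is interpolated eagerly, even when the debug
# level is disabled.
LOG.debug('processing port %s' % port_id)

# Preferred: interpolation is deferred to the logging framework and
# only happens if the record is actually emitted.
LOG.debug('processing port %s', port_id)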

Neither check is automatically registered in neutron-lib's hacking check factory for 0.4.0. Before rolling out new hacking checks (via the factory), we need to better solidify neutron-lib's hacking check consumption and roll-out process (for example, patch 350723).


Policy

Neutron's policy API was re-homed to neutron-lib in 0.4.0 with patch 303867. This change adds the neutron_lib.policy module and its public APIs. Consumers should start moving their code to neutron-lib's policy module rather than using neutron's.
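Here's a rough sketch of the kind of consumption this enables (it assumes the application has already set up oslo.config and a policy file; error handling omitted):

from neutron_lib import context
from neutron_lib import policy


# Load the policy file(s) once at startup.
policy.init()

ctx = context.get_admin_context()
print(policy.check_is_admin(ctx))
print(policy.check_is_advsvc(ctx))

# Re-read the policy file(s) if they may have changed on disk.
policy.refresh()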


Utils

In 0.4.0 we started re-homing the common neutron utils into the neutron_lib.utils package (see 319769). Utility APIs in neutron-lib are grouped by functionality, so we have modules like neutron_lib.utils.net, neutron_lib.utils.host, etc. Consumers can now start using these utils by importing the respective modules they need, rather than calling these utility APIs from neutron.
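A few hedged one-liners showing the sort of thing that's now importable from neutron-lib (output values will obviously vary):

from neutron_lib.utils import helpers
from neutron_lib.utils import host
from neutron_lib.utils import net


print(net.get_hostname())
print(host.cpu_count())
print(helpers.camelize('a_snake_case_name'))
# parse_mappings() turns 'key:value' strings into a dict, e.g. for
# physnet/bridge mappings.
print(helpers.parse_mappings(['physnet1:br-eth1', 'physnet2:br-eth2']))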







Monday, August 15, 2016

What's new with neutron-lib 0.3.0

Foreword

OpenStack neutron-lib version 0.3.0 was recently released to PyPI and contains a number of updates to API validators, constants and hacking checks.

The complete list of public API changes is summarized below (and can be viewed on github):
 New API Signatures  
 -----------------------------------------------------  
 neutron_lib.api.validators.get_validator(validation_type, default=None)  
 neutron_lib.api.validators.validate_integer(data, valid_values=None)  
 neutron_lib.api.validators.validate_subports(data, valid_values=None)  
 neutron_lib.constants.DHCPV6_STATEFUL = dhcpv6-stateful  
 neutron_lib.constants.DHCPV6_STATELESS = dhcpv6-stateless  
 neutron_lib.constants.IPV6_MODES = [u'dhcpv6-stateful', u'dhcpv6-stateless', u'slaac']  
 neutron_lib.constants.IPV6_SLAAC = slaac  
 neutron_lib.constants.L3_AGENT_MODE = agent_mode  
 neutron_lib.constants.L3_AGENT_MODE_DVR = dvr  
 neutron_lib.constants.L3_AGENT_MODE_DVR_SNAT = dvr_snat  
 neutron_lib.constants.L3_AGENT_MODE_LEGACY = legacy  
 neutron_lib.constants.Sentinel  
 neutron_lib.hacking.translation_checks.check_log_warn_deprecated(logical_line, filename)  
 neutron_lib.hacking.translation_checks.check_raised_localized_exceptions(logical_line, filename)  
 neutron_lib.hacking.translation_checks.no_translate_debug_logs(logical_line, filename)  
 neutron_lib.hacking.translation_checks.validate_log_translations(logical_line, physical_line, filename)  
 -----------------------------------------------------  
   
 Removed API Signatures  
 -----------------------------------------------------  
 -----------------------------------------------------  
   
 Changed API Signatures  
 -----------------------------------------------------  
 -----------------------------------------------------  


Note:
The above public API changes were generated using a new tool we're looking to include with neutron-lib, and perhaps eventually use in the change summary for each neutron-lib release.


API Validators

Two new validators were added in neutron-lib 0.3.0: validate_integer and validate_subports.

As expected, validate_integer ensures a value is in fact an integer. The implementation includes smarts to detect if the value is a str, float or bool; these checks are often missing from common integer validation functions. In addition, the function supports passing a list of valid_values to check for value inclusion.

As with other validator functions, validate_integer returns None if the value is valid and a str message otherwise. The message for an invalid value explains, in user-friendly terms, why the value is bad.

Here's a sample Python snippet to showcase validate_integer:

from neutron_lib.api import validators


def test_validate(validator, val, valid_values=None):
    result = validator(val, valid_values)
    print("%s(%s, %s) --> %s" % (validator.__name__, val,
                                 valid_values, result))


print("Testing valid values...")
test_validate(validators.validate_integer, 1)
test_validate(validators.validate_integer, '-9')
test_validate(validators.validate_integer, 0)
test_validate(validators.validate_integer, 7, [9, 8, 7])
test_validate(validators.validate_integer, 7, [9, 8, 7])

print("\nTesting invalid values...")
test_validate(validators.validate_integer, True)
test_validate(validators.validate_integer, False)
test_validate(validators.validate_integer, '1.1')
test_validate(validators.validate_integer, -9.98933)
test_validate(validators.validate_integer, 7, [9, 8, 6])

When run, it outputs:

   
Testing valid values...
validate_integer(1, None) --> None
validate_integer(-9, None) --> None
validate_integer(0, None) --> None
validate_integer(7, [9, 8, 7]) --> None
validate_integer(7, [9, 8, 7]) --> None

Testing invalid values...
validate_integer(True, None) --> 'True' is not an integer:boolean
validate_integer(False, None) --> 'False' is not an integer:boolean
validate_integer(1.1, None) --> '1.1' is not an integer
validate_integer(-9.98933, None) --> '-9.98933' is not an integer
validate_integer(7, [9, 8, 6]) --> '7' is not in [9, 8, 6]


The validate_subports validator is also new in neutron-lib 0.3.0. This validator is used as part of the vlan-aware-vms workstream currently under development. Rather than diving into the details on this one, we'll defer to the blueprint and related change sets.

Finally, in the API validators space, we have some work going on to remove direct access to the neutron_lib.api.validators.validators attribute. This attribute is a dict of the currently "registered" validators known to neutron-lib.

Today, consumers add a "local" validator (a validator function defined outside of neutron-lib) by directly adding it to the dict.

For example:

validators.validators['type:my_validatable_type'] = my_validator_function

In general, this is bad practice and can cause complications if we ever decide to wrap API validator access with encapsulating logic. Consumers should now use the following accessors:

get_validator(validation_type, default=None)  
add_validator(validation_type, validator)

Both are defined in the neutron_lib.api.validators module. We've deprecated direct access to the validators dict and plan to remove it in the OpenStack "P" release.
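Here's a small, hedged sketch of the accessor-based flow, assuming both accessors are available in your neutron-lib version (my_validatable_type and its validator function are made up for illustration):

from neutron_lib.api import validators


def validate_my_type(data, valid_values=None):
    # Hypothetical local validator: return None when valid, otherwise
    # a user-friendly message explaining the problem.
    if not isinstance(data, str) or not data.startswith('my-'):
        return "'%s' is not a valid my_validatable_type" % (data,)


validators.add_validator('my_validatable_type', validate_my_type)
print(validators.get_validator('my_validatable_type'))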

For more information on related changes see review 350259.


Constants

The only particularly interesting change in neutron_lib.constants is the addition of the Sentinel class, which allows you to create instances that don't change, even with deepcopy(). For example:

import copy

from neutron_lib.constants import Sentinel


singleton = Sentinel()
print("deepcopy() = %s" % (copy.deepcopy(singleton) == singleton))


When run, it outputs:

deepcopy() = True


Hacking checks

A handful of new translation hacking checks have been added in the 0.3.0 release:
[N532] Validate that LOG.warning is used instead of LOG.warn. The latter is deprecated.
[N534] Exception messages should be translated
[N533] Validate that debug level logs are not translated
[N531] Validate that LOG messages, except debug ones, have translations

The behavior of these hacking checks should be evident from the descriptions shown above, so I won't belabor them here. These checks are all registered via neutron_lib.hacking.checks.factory() and therefore will be active by default if your project uses the neutron-lib factory function in its tox.ini.
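To give a feel for what these checks enforce, here's a hedged sketch; the _ and _LW functions below are stand-ins for the oslo.i18n translation markers a real project would import from its _i18n module:

import logging

LOG = logging.getLogger(__name__)


def _(msg):
    # Stand-in for the oslo.i18n translation marker.
    return msg


def _LW(msg):
    # Stand-in for the warning-level translation marker.
    return msg


# N532: use LOG.warning(), not the deprecated LOG.warn().
# N531: non-debug log messages should carry a translation marker.
LOG.warning(_LW('handled an unexpected condition: %s'), 'details')

# N533: debug-level messages should not be translated.
LOG.debug('raw detail only developers care about')

# N534: exception messages should be translated.
try:
    raise ValueError(_('a user-facing, translated message'))
except ValueError as e:
    LOG.warning(_LW('handled: %s'), e)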

Looking forward in this space, we hope to further solidify the hacking check interface as well as the dev-ref for its intended usage. For example: 350723

Thursday, May 1, 2014

KVM and Docker LXC Benchmarking with OpenStack

Foreword

Linux containers (LXCs) are rapidly becoming the new "unit of deployment", changing how we develop, package, deploy and manage applications at all scales (from test/dev to production service-ready environments). This application life cycle transformation also brings fluidity to use cases that were once full of friction in a traditional hypervisor Virtual Machine (VM) environment. For example, developing applications in virtual environments and seamlessly "migrating" to bare metal for production. Not only do containers simplify the workflow and life cycle of application development and deployment, but they also provide performance and density benefits which cannot be overlooked.

Monday, April 14, 2014

Docker (LXC) Enabled Images In SoftLayer

Foreword

Anyone who's worked with me in the past 6-9 months knows that docker and Linux Containers (LXC) are near and dear to my heart. I firmly believe these technologies are poised to change our modern Cloud era, and in fact I'd assert we're already beginning to see that change solidify now. There are numerous public resources which discuss the benefits of LXC as a "virtualization technology", but let's quickly recap some of those before going further. In particular, let's focus on LXC from a docker perspective.

Monday, April 7, 2014

Giving Your SoftLayer Servers A Personality With Provisioning Scripts

Foreword

Personalities no longer only apply to people (and some would argue animals); this terminology has found its way into the Cloud / virtualization space as well. In that space, the term personality refers to the act of tweaking or configuring a vanilla server instance (typically via a bootstrapping or automated process) for a particular purpose; the resulting "tweaked" server is said to have a personality. For example, let's say you take a vanilla Ubuntu Virtual Machine (VM) and install the Eclipse IDE, PyDev and a handful of Python modules, resulting in a VM specifically tailored for Python development. That VM can be said to have a "python dev" personality.

SoftLayer makes server personalities easy and convenient by means of "provisioning scripts". Not only are provisioning scripts a snap to use, but they also provide a consistent way to bootstrap any SoftLayer server type and image making them a very effective tool in the SoftLayer infrastructure.

Sunday, March 30, 2014

Managing OpenStack & SoftLayer Resources From A Single Pane of Glass With Jumpgate

Foreword

Imagine a world of interconnected Clouds capable of discovering, coordinating and collaborating in harmony to seamlessly carry out complex workloads in a transparent manner -- the intercloud. While this may be the dream of tomorrow, today's reality is a form of the intercloud called hybrid Cloud. In a hybrid Cloud model, organizations manage a number of on-premise resources, but also use off-premise provider services or resources for specific capabilities, in times of excess demand which cannot be fulfilled via on-premise resources, or for cost-effectiveness reasons. Both of these Cloud computing models have a common conduit to their realization -- open, standardized APIs, formats and protocols which enable interoperability between disparate Cloud deployments.

Sunday, March 16, 2014

Linux Containers - Building Blocks, Underpinnings and Motivations

I firmly believe Linux Containers (LXC) are poised to be the next Virtual Machine in our modern computing era. Consider:

  • Linux Containers run at near bare metal speeds.
  • LXC operations (start, stop, spawn) execute very quickly (seconds or milliseconds).
  • Containers provide nearly the same agility as traditional VMs.
  • They can be deployed with very little per container (VM) penalty.
  • Linux Containers are lightweight -- they can virtualize a system (Operating System) or one or more applications.
  • LXC can be realized with features provided by a modern Linux kernel.

More details on how containers are realized and some of their benefits can be found in my slide share presentation embedded below.




I will be speaking about Linux Containers at the 2014 cloudexpo east conference in NYC -- I hope you can join me to talk LXC. Please contact me for free access to the conference.

OpenStack nova VM migration (live and cold) call flow

OpenStack nova compute supports two flavors of Virtual Machine (VM) migration:

  • Cold migration -- the VM is powered off during the migrate operation, during which time it is inaccessible.
  • Hot or live migration -- zero-downtime migration in which the VM is not powered off during the migration and thus remains accessible.

Understanding these VM migration operations from an OpenStack internals perspective can be a daunting task. I had the pleasure of digging into these flows in the latter part of 2013 and as part of that effort created a rough outline of the internal flows. Others I've worked with found these flow outlines useful, and thus they're provided below.

OpenStack nova boot server call diagram

The OpenStack architecture consists of multiple distributed services which often work together to carry out a single logical operation. Given the nature of this architecture, getting up to speed on the call flows and interactions can be a daunting task for developers and operational admins alike.

Not so long ago, I had the pleasure of digging into one of the more common flows in OpenStack nova compute -- the nova 'boot server' operation. As we all know, the boot server operation provisions a new nova compute Virtual Machine (VM) on an underlying hypervisor such as KVM, ESXi, etc. As part of the boot server operation, a number of OpenStack components are involved including:

Thursday, March 13, 2014

OpenStack Keystone Workflow & Token Scoping

While recently browsing the OpenStack documentation updates for the Folsom release, I came across a new (new to me anyway) Keystone diagram which provides a well-deserved depiction of a typical end-user workflow using Keystone as an identity service provider. This diagram not only provides greater insight into this typical workflow, but it also illustrates the notion of scoped vs unscoped tokens. I've pasted the diagram below for convenience, but the original document can be found on the OpenStack documentation site.

Although this diagram paints a nice picture of a typical workflow, it leaves a bit to the imagination in terms of which APIs are used for each step. Moreover, some of the steps are a bit misleading depending on which token type scheme you are using with Keystone.

This post aims to further solidify the steps in the workflow diagram above.