Plug-ins typically have requirements for particular software that must be run on each node that handles data packets. This includes any node that runs nova-compute and nodes that run dedicated OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent, neutron-metering-agent or neutron-lbaas-agent.
A data-forwarding node typically has a network interface with an IP address on the management network and another interface on the data network.
This section shows you how to install and configure a subset of the available plug-ins, which might include the installation of switching software (for example, Open vSwitch) as well as agents used to communicate with the neutron-server process running elsewhere in the data center.
If you use the NSX plug-in, you must also install Open vSwitch on each data-forwarding node. However, you do not need to install an additional agent on each node.
Warning
It is critical that you run an Open vSwitch version that is compatible with the current version of the NSX Controller software. Do not use the Open vSwitch version that is installed by default on Ubuntu. Instead, use the Open vSwitch version that is provided on the VMware support portal for your NSX Controller version.
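To check which Open vSwitch version is installed on a node, you can query the tool directly and compare the output with the version recommended for your NSX Controller:
# ovs-vsctl --version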
To set up each node for the NSX plug-in
The DHCP service agent is compatible with all existing plug-ins and is required for all deployments where VMs should automatically receive IP addresses through DHCP.
To install and configure the DHCP agent
You must configure the host running the neutron-dhcp-agent as a data forwarding node according to the requirements for your plug-in.
Install the DHCP agent:
# apt-get install neutron-dhcp-agent
Update any options in the /etc/neutron/dhcp_agent.ini file that depend on the plug-in in use. See the following sub-sections for plug-in-specific settings.
Important
If you reboot a node that runs the DHCP agent, you must run the neutron-ovs-cleanup command before the neutron-dhcp-agent service starts.
On Red Hat, SUSE, and Ubuntu based systems, the neutron-ovs-cleanup service runs the neutron-ovs-cleanup command automatically. However, on Debian-based systems, you must manually run this command or write your own system script that runs on boot before the neutron-dhcp-agent service starts.
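For example, on a Debian-based system that uses systemd, one possible approach is a small one-shot unit that runs the cleanup command before the DHCP agent; this is only a sketch, and the unit name, binary path, and agent service name depend on your packaging:
[Unit]
Description=Run neutron-ovs-cleanup before the DHCP agent
Before=neutron-dhcp-agent.service
[Service]
Type=oneshot
ExecStart=/usr/bin/neutron-ovs-cleanup
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target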
The Networking DHCP agent can use the dnsmasq driver, which supports stateful and stateless DHCPv6 for subnets created with --ipv6_address_mode set to dhcpv6-stateful or dhcpv6-stateless.
For example:
$ neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateful \
  --ipv6_address_mode dhcpv6-stateful NETWORK CIDR
$ neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateless \
  --ipv6_address_mode dhcpv6-stateless NETWORK CIDR
If no dnsmasq process is running for a subnet's network, Networking launches one on the subnet's DHCP port in the qdhcp-XXX namespace. If a dnsmasq process is already running, Networking restarts it with a new configuration. Networking also updates the dnsmasq configuration and restarts the process when the subnet is updated.
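To confirm which DHCP namespaces exist on the node and whether a dnsmasq process is serving them, you can run, for example:
# ip netns list | grep qdhcp
# ps -ef | grep dnsmasq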
Note
To operate in IPv6 mode, the DHCP agent requires dnsmasq v2.63 or later.
You can configure the DHCP agent to automatically detach from a network after a configured timeframe when the agent is out of service or no longer needed.
This feature applies to all plug-ins that support DHCP scaling. For more information, see the DHCP agent configuration options listed in the OpenStack Configuration Reference.
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the OVS plug-in:
[DEFAULT]
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the NSX plug-in:
[DEFAULT]
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
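After you update the /etc/neutron/dhcp_agent.ini file, restart the agent so the new options take effect. On Ubuntu or Debian, for example:
# service neutron-dhcp-agent restart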
The OpenStack Networking service has a widely used API extension to allow administrators and tenants to create routers to interconnect L2 networks, and floating IPs to make ports on private networks publicly accessible.
Many plug-ins rely on the L3 service agent to implement the L3 functionality. However, the following plug-ins already have built-in L3 capabilities:
Big Switch/Floodlight plug-in, which supports both the open source Floodlight controller and the proprietary Big Switch controller.
Note
Only the proprietary BigSwitch controller implements L3 functionality. When using Floodlight as your OpenFlow controller, L3 functionality is not available.
IBM SDN-VE plug-in
MidoNet plug-in
NSX plug-in
PLUMgrid plug-in
Warning
Do not configure or use neutron-l3-agent if you use one of these plug-ins.
To install the L3 agent for all other plug-ins
Install the neutron-l3-agent binary on the network node:
# apt-get install neutron-l3-agent
To uplink the node that runs neutron-l3-agent to the external network, create a bridge named “br-ex” and attach the NIC for the external network to this bridge.
For example, with Open vSwitch and NIC eth1 connected to the external network, run:
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1
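You can verify that the bridge exists and that the NIC is attached to it:
# ovs-vsctl list-ports br-ex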
Do not manually configure an IP address on the NIC connected to the external network for the node running neutron-l3-agent. Rather, you must have a range of IP addresses from the external network that can be used by OpenStack Networking for routers that uplink to the external network. This range must be large enough to have an IP address for each router in the deployment, as well as each floating IP.
The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forwarding and NAT. In order to support multiple routers with potentially overlapping IP addresses, neutron-l3-agent defaults to using Linux network namespaces to provide isolated forwarding contexts. As a result, the IP addresses of routers are not visible simply by running the ip addr list or ifconfig command on the node. Similarly, you cannot directly ping fixed IPs.
To do either of these things, you must run the command within a particular network namespace for the router. The namespace has the name qrouter-ROUTER_UUID. These example commands run in the router namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b:
# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping FIXED_IP
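To find the router UUID to substitute into these commands, list the routers known to Networking or the namespaces present on the node:
$ neutron router-list
# ip netns list | grep qrouter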
Note
For iproute version 3.12.0 and above, network namespaces are deleted by default. This behavior can be changed for both the DHCP and L3 agents through the /etc/neutron/dhcp_agent.ini and /etc/neutron/l3_agent.ini configuration files, respectively.
For the DHCP namespaces, the configuration key is dhcp_delete_namespaces = True. You can set it to False if namespaces cannot be deleted cleanly on the host running the DHCP agent.
For the L3 namespaces, the configuration key is router_delete_namespaces = True. You can set it to False if namespaces cannot be deleted cleanly on the host running the L3 agent.
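As a sketch, the corresponding default settings look like this. In the /etc/neutron/dhcp_agent.ini file:
[DEFAULT]
dhcp_delete_namespaces = True
In the /etc/neutron/l3_agent.ini file:
[DEFAULT]
router_delete_namespaces = True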
Important
If you reboot a node that runs the L3 agent, you must run the neutron-ovs-cleanup command before the neutron-l3-agent service starts.
On Red Hat, SUSE and Ubuntu based systems, the neutron-ovs-cleanup service runs the neutron-ovs-cleanup command automatically. However, on Debian-based systems, you must manually run this command or write your own system script that runs on boot before the neutron-l3-agent service starts.
The Neutron metering agent runs on the same node as neutron-l3-agent.
To install the metering agent and configure the node
Install the agent by running:
# apt-get install neutron-metering-agent
If you use one of the following plug-ins, you also need to set the interface driver in the metering agent configuration file (/etc/neutron/metering_agent.ini):
An OVS-based plug-in such as OVS, NSX, NEC, BigSwitch/Floodlight:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
A plug-in that uses LinuxBridge:
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
To use the reference implementation, you must also set the metering driver in the same file:
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
Set the service_plugins option in the /etc/neutron/neutron.conf file on the host that runs neutron-server:
service_plugins = metering
If this option is already defined, add metering to the list, using a comma as separator. For example:
service_plugins = router,metering
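After restarting neutron-server so that the new service plug-in is loaded, you can verify the metering setup by creating a metering label and a traffic rule for it; the label name and CIDR below are example placeholders:
$ neutron meter-label-create EXAMPLE_LABEL
$ neutron meter-label-rule-create --direction ingress EXAMPLE_LABEL REMOTE_CIDR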
Configure Load-Balancer-as-a-Service (LBaaS) with the Open vSwitch or Linux Bridge plug-in. The Open vSwitch LBaaS driver is required when enabling LBaaS for OVS-based plug-ins, including BigSwitch, Floodlight, NEC, and NSX.
To configure LBaaS with the Open vSwitch or Linux Bridge plug-in
Install the agent:
# apt-get install neutron-lbaas-agent haproxy
Enable the HAProxy plug-in by using the service_provider option in the /etc/neutron/neutron.conf file:
service_provider = LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
Warning
The service_provider option is already defined in the /usr/share/neutron/neutron-dist.conf file on Red Hat based systems. Do not define it in neutron.conf; otherwise, the Networking services will fail to restart.
Enable the load-balancing plug-in by using the service_plugins option in the /etc/neutron/neutron.conf file:
service_plugins = lbaas
If this option is already defined, add lbaas to the list, using a comma as separator. For example:
service_plugins = router,lbaas
Enable the HAProxy load balancer in the /etc/neutron/lbaas_agent.ini file:
device_driver = neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
Select the required driver in the /etc/neutron/lbaas_agent.ini file:
Enable the Open vSwitch LBaaS driver:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
Or, enable the Linux Bridge LBaaS driver:
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
Create the required tables in the database:
# neutron-db-manage --service lbaas upgrade head
Apply the settings by restarting the neutron-server and neutron-lbaas-agent services.
Enable load balancing in the Project section of the dashboard.
Change the enable_lb option to True in the local_settings file (on Fedora, RHEL, and CentOS: /etc/openstack-dashboard/local_settings; on Ubuntu and Debian: /etc/openstack-dashboard/local_settings.py; on openSUSE and SLES: /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py):
OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': True,
...
}
Apply the settings by restarting the web server. You can now view the Load Balancer management options in the Project view in the dashboard.
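As an optional sanity check, you can also exercise the LBaaS v1 API from the command line by creating a small pool and VIP; the names below are examples and SUBNET_ID is a placeholder for a subnet in your deployment:
$ neutron lb-pool-create --name test-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id SUBNET_ID
$ neutron lb-vip-create --name test-vip --protocol-port 80 --protocol HTTP --subnet-id SUBNET_ID test-pool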
Before you install the OpenStack Networking Hyper-V L2 agent on a Hyper-V compute node, ensure the compute node has been configured correctly using these instructions.
To install the OpenStack Networking Hyper-V agent and configure the node
Download the OpenStack Networking code from the repository:
> cd C:\OpenStack\
> git clone https://git.openstack.org/cgit/openstack/neutron
Install the OpenStack Networking Hyper-V Agent:
> cd C:\OpenStack\neutron\
> python setup.py install
Copy the policy.json file:
> xcopy C:\OpenStack\neutron\etc\policy.json C:\etc\
Create the C:\etc\neutron-hyperv-agent.conf file and add the proper configuration options and the Hyper-V related options. Here is a sample configuration file:
[DEFAULT]
verbose = true
control_exchange = neutron
policy_file = C:\etc\policy.json
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = IP_ADDRESS
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = <password>
logdir = C:\OpenStack\Log
logfile = neutron-hyperv-agent.log
[AGENT]
polling_interval = 2
physical_network_vswitch_mappings = *:YOUR_BRIDGE_NAME
enable_metrics_collection = true
[SECURITYGROUP]
firewall_driver = neutron.plugins.hyperv.agent.security_groups_driver.HyperVSecurityGroupsDriver
enable_security_group = true
Start the OpenStack Networking Hyper-V agent:
> C:\Python27\Scripts\neutron-hyperv-agent.exe --config-file C:\etc\neutron-hyperv-agent.conf
This table shows examples of Networking commands that enable you to complete basic operations on agents:
Operation | Command |
---|---|
List all available agents. | $ neutron agent-list |
Show information of a given agent. | $ neutron agent-show AGENT_ID |
Update the admin status and description for a specified agent. The command can be used to enable and disable agents by setting the --admin-state-up parameter to False or True. | $ neutron agent-update --admin-state-up False AGENT_ID |
Delete a given agent. Consider disabling the agent before deletion. | $ neutron agent-delete AGENT_ID |
Basic operations on Networking agents
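For example, to retire an agent, first find its ID, disable it, and then delete it:
$ neutron agent-list
$ neutron agent-update --admin-state-up False AGENT_ID
$ neutron agent-delete AGENT_ID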
See the OpenStack Command-Line Interface Reference for more information on Networking commands.