NetVirt: L2Gateway
The NetVirt project has added support for the L2Gateway plugin, and this page describes how to set it up. It assumes you are already familiar with integrating devstack with OpenDaylight; for details, refer to the OpenStack and OpenDaylight wiki page.
Installing OpenDaylight
Prerequisites: OpenDaylight dev environment with JDK8
Pull the latest Netvirt code and compile it.
git clone https://git.opendaylight.org/gerrit/netvirt.git
cd netvirt
mvn clean install
Run karaf from the vpnservice distribution directory:
cd vpnservice/distribution/karaf/target/assembly/bin
./karaf
Install the vpnservice openstack feature:
feature:install odl-netvirt-vpnservice-openstack
Check the logs to make sure there are no fatal exceptions. Before proceeding, make sure ODL is listening on ports 8080, 6640 and 6633.
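A quick hedged check of the listening ports (ss is assumed here; netstat -ltn works the same way on older distributions):

# Confirm the ODL REST (8080), OVSDB (6640) and OpenFlow (6633) ports are open.
ss -ltn | grep -E '8080|6640|6633'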
Installing OpenStack
Prerequisites: a local.conf from a working devstack+ODL setup; it can be single node or multinode.
Note: The L2Gateway plugin and the ODL driver for it are only available in master and stable/mitaka as of now. If your setup is on an older branch, you will need to upgrade or create a new setup.
On the controller node, make the following changes to local.conf:
enable_plugin networking-l2gw http://git.openstack.org/openstack/networking-l2gw
enable_service l2gw-plugin
NETWORKING_L2GW_SERVICE_DRIVER=L2GW:OpenDaylight:networking_odl.l2gateway.driver.OpenDaylightL2gwDriver:default
Netvirt:Vpnservice requires bridges to be added manually, so add br-int (and br-ex if you require L3). This step is needed because the current devstack script blocks until these bridges are present on the controller/compute nodes.
sudo ovs-vsctl add-br br-int
sudo ovs-vsctl set bridge br-int protocols=OpenFlow13
sudo ovs-vsctl set-controller br-int tcp:192.168.56.1:6633
sudo ovs-vsctl add-br br-ex
Note: Bridge configuration can be done later too, but add-br must be done before running devstack. Also, if for some reason you ./unstack, you will have to add the bridge(s) again.
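Before stacking, you can sanity-check the bridge and controller configuration; a minimal check:

# List the bridges and confirm br-int points at the ODL controller.
sudo ovs-vsctl show
sudo ovs-vsctl get-controller br-int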
Run devstack
./stack.sh
If all goes well, your devstack will be up and running.
Configuring Tunnels
Unlike the old OVSDB Netvirt, where Netvirt discovered computes and created tunnels on demand, we now require some user configuration to create tunnels. For this we use the ITM (Internal Transport Manager) module. It creates TransportZones, which are simply logical groupings of all the devices that will be part of a tunnel mesh. These devices are called TEPs (Tunnel EndPoints). To configure TEPs, we first need the dpn-id of each device. This is the datapathId of br-int in decimal format. A quick way to acquire it is to make a GET call to the following URL:
curl -s -u admin:admin -X GET http://${ODL_IP}:8181/restconf/operational/odl-interface-meta:bridge-ref-info/
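Alternatively, you can read the datapath ID directly on each node and convert it to decimal yourself; a minimal sketch (the variable name is illustrative):

# Read br-int's datapath_id (a hex string) and convert it to the decimal dpn-id that ITM expects.
DPID_HEX=$(sudo ovs-vsctl get bridge br-int datapath_id | tr -d '"')
echo $((16#${DPID_HEX}))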
There are two ways to configure TransportZones and TEPs:
Karaf CLI: tep:add is the command to create a TransportZone and add devices. TBD. Refer to the karaf help for details.
Restconf: Here is a sample restconf request to create a TransportZone named TZA of type VxLan. It also adds a TEP with dpn-id 95311090836804, tunnel interface name eth2, and local endpoint IP 192.168.57.101.
URL: http://${ODL_IP}:8181/restconf/config/itm:transport-zones/

JSON:
{
  "transport-zone": [
    {
      "zone-name": "TZA",
      "subnets": [
        {
          "prefix": "192.168.57.0/24",
          "vlan-id": 0,
          "vteps": [
            {
              "dpn-id": 95311090836804,
              "portname": "eth2",
              "ip-address": "192.168.57.101"
            }
          ],
          "gateway-ip": "0.0.0.0"
        }
      ],
      "tunnel-type": "odl-interface:tunnel-type-vxlan"
    }
  ]
}
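A minimal sketch of pushing this configuration with curl, assuming the JSON above has been saved as transport-zone.json:

# POST the TransportZone definition to the ITM config datastore.
curl -s -u admin:admin -X POST -H "Content-Type: application/json" \
     -d @transport-zone.json \
     http://${ODL_IP}:8181/restconf/config/itm:transport-zones/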
Configuring the TOR Device
A TOR device is typically a physical switch running the HWVTEP schema. OVS also provides a VTEP emulator.
Physical TOR Device
The actual configuration CLI will vary from vendor to vendor, but the basic steps are the same:
Configure the interfaces that connect to the baremetals.
TBD - Would prefer sample configuration from different vendors.
Configure the manager connection:
The actual configuration CLI will vary from vendor to vendor, but it should be something like this:
ovs-vsctl set-manager tcp:${ODL_IP}:6640
TBD - sample configuration from different vendors
You can verify TOR connectivity by making the following REST call to ODL:
curl -s -u admin:admin -X GET http://${ODL_IP}:8181/restconf/operational/network-topology:network-topology/ | python -mjson.tool
OVS Vtep Emulator
We need OVS version 2.4.0. This will run on a different VM. Unlike the OpenStack controller or compute VMs, this can be a lightweight VM, as we will only be running OVS in it.
Configure the VTEP emulator: Details on the OVS VTEP emulator can be found at How to Use the VTEP Emulator.
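For reference, a minimal sketch of bringing up the emulator, assuming ovsdb-server is already serving the hardware_vtep (vtep.ovsschema) database as described in the linked how-to; the switch name ps1 and the tunnel IP 192.168.57.201 are illustrative:

# Create the emulated physical switch and register it in the hardware_vtep database.
sudo ovs-vsctl add-br ps1
sudo vtep-ctl add-ps ps1
# Tunnel endpoint IP of this emulator VM (assumption: 192.168.57.201).
sudo vtep-ctl set Physical_Switch ps1 tunnel_ips=192.168.57.201
# Start the VTEP emulator daemon for this switch (paths as installed by OVS 2.4.0).
sudo /usr/share/openvswitch/scripts/ovs-vtep \
     --log-file=/var/log/openvswitch/ovs-vtep.log \
     --pidfile=/var/run/openvswitch/ovs-vtep.pid \
     --detach ps1

The manager connection to ODL is then configured the same way as for a physical TOR (set-manager tcp:${ODL_IP}:6640), and connectivity can be verified with the network-topology REST call shown above.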
Configure simulated Baremetals: We will use namespaces to simulate baremetals in this setup.
export BRIDGE=ps1
sudo ip netns add nsbm1
sudo ovs-vsctl add-port $BRIDGE tapbm1 -- set Interface tapbm1 type=internal
sudo ip link set tapbm1 netns nsbm1
sudo ip netns exec nsbm1 ip link set dev tapbm1 up
sudo ip netns exec nsbm1 ifconfig tapbm1 11.11.11.111 netmask 255.255.255.0
sudo ip netns exec nsbm1 ifconfig
We've created a namespace nsbm1 which will act as our BareMetal device. Then we create a tapbm1 port and add it to the bridge ps1 that acts as our simulated TOR. Finally, we configure it with an IP address.
We can create multiple namespaces like this to simulate multiple BareMetals. For example:
export BRIDGE=ps1
sudo ip netns add nsbm2
sudo ovs-vsctl add-port $BRIDGE tapbm2 -- set Interface tapbm2 type=internal
sudo ip link set tapbm2 netns nsbm2
sudo ip netns exec nsbm2 ip link set dev tapbm2 up
sudo ip netns exec nsbm2 ifconfig tapbm2 11.11.11.112 netmask 255.255.255.0
sudo ip netns exec nsbm2 ifconfig
Configuring L2Gateway
Create Neutron Network and Subnet
Before creating the L2Gateway, we will first configure a Neutron network and subnet. Note that the BareMetal IP should be in the same subnet as the one being created with subnet-create.
neutron net-create mynet1 --tenant_id $(openstack project list | grep '\sadmin' | awk '{print $2}') --provider:network_type vxlan --provider:segmentation_id 1001
neutron subnet-create mynet1 11.11.11.0/24 --name net1-snet1
Create L2Gateway
The next step is to create the L2Gateway:
neutron l2-gateway-create gw1 --tenant_id $(openstack project list | grep '\sadmin' | awk '{print $2}') --device name=${PS_NAME},interface_names=${INTERFACE_NAME}
This will configure an L2Gateway named gw1, where:
PS_NAME is the name of the PhysicalSwitch on the TOR device.
INTERFACE_NAME is the name of the physical port on the TOR to which the given BareMetal is connected.
Multiple baremetals can also be configured, along with the VLAN ID to be used for each. Refer to L2Gateway API Usage for more details; a hedged example is shown below.
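For illustration only (gw2, the interface names and the VLAN IDs are assumptions, not values from this setup), the networking-l2gw CLI accepts a segmentation ID per interface using the interface|vlan syntax, with multiple interfaces separated by a semicolon:

# Hypothetical gateway with two TOR ports, each mapped to its own VLAN.
neutron l2-gateway-create gw2 --tenant_id $(openstack project list | grep '\sadmin' | awk '{print $2}') \
    --device name=${PS_NAME},interface_names="eth3|100;eth4|200"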
Create L2GatewayConnection
Finally, we create the L2GatewayConnection, which will result in the tunnels being created.
neutron l2-gateway-connection-create gw1 mynet1 --default-segmentation-id 0
gw1 and mynet1 are the L2Gateway and Neutron network we created earlier. If you have multiple TOR devices with multiple BareMetals, you can add an L2GatewayConnection for each of them, in which case you should now be able to ping between BareMetals.
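To confirm the tunnels were set up, you can look for VXLAN ports on the compute nodes and at the emulator's (or TOR's) view of the hardware_vtep database; a quick hedged check:

# On a compute node: the tunnels show up as vxlan interfaces on br-int.
sudo ovs-vsctl show | grep -B2 -A2 vxlan
# On the VTEP emulator or TOR: summarize the hardware_vtep database contents.
sudo vtep-ctl show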
Spawn VMs
Now you can spawn VMs on your computes. Once they come up, and if everything went fine, you should be able to ping between your VMs and the BareMetals.
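A minimal sketch of booting a test VM on mynet1 (the image and flavor names are assumptions; use whatever your devstack provides):

# Boot a VM attached to the network created earlier.
NET_ID=$(neutron net-list | grep mynet1 | awk '{print $2}')
nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec --nic net-id=${NET_ID} vm1
# Once the VM is active, ping a BareMetal IP (e.g. 11.11.11.111) from its console.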
Sample Configurations
TBD.
Troubleshooting
This section covers common issues you may run into and how to fix them.
Neutron server fails to come up
Problem: Devstack fails with the error: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)
Solution: Refer to BUG 1576560.
Creating L2Gateway gives 404 error
Problem: L2Gateway creation gives the error: The resource could not be found (404)
Solution: Open the /etc/neutron/neutron.conf file and make sure service_plugins has an entry for the l2gateway plugin. If not, append it to the existing list of service_plugins, for example:
service_plugins = networking_odl.l3.l3_odl.OpenDaylightL3RouterPlugin,networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin
Then restart the neutron-server process so the updated plugin list is loaded.