All-In-One OpenStack using RDO Packstack with External Public IPs

Click the link below for the latest version of this guide

All-In-One OpenStack Liberty using RDO Packstack with External Public IPs

OLD Guide Below

Summary

This is a single-node proof of concept that includes setting up external connectivity to your VMs (aka "instances" in OpenStack lingo). It uses the RDO tool Packstack and will install OpenStack Havana. You will have full access to the VMs from outside the OpenStack environment via their public floating IPs. I did this writeup because I had to follow 3-4 different guides to get here and figured others might like it all compiled into one guide.

TIP: I highly recommend initially doing this inside an existing virtualization system (e.g. VirtualBox). OpenStack works well running inside another virtual environment, and the benefits of snapshots/clones/virtual NICs will make your experiments and testing much easier. I'll be providing specifics for using VirtualBox with this guide alongside the non-virtual instructions.

Example Setup

Scenario 1: OpenStack VM Running Inside VirtualBox

This diagram shows the architectural layout assuming you're going to run it inside a VirtualBox VM.

[Figure: Architecture Inside of VirtualBox VM]

In this example OpenStack runs inside a VM on a computer running VirtualBox. Network connectivity for the VM is provided by a VirtualBox NAT Network. The downside to using a NAT Network instead of a bridged interface is that the OpenStack instance public IPs will only be public to other VirtualBox VMs. I chose a NAT Network because OpenStack needs a static IP to operate and a NAT Network guarantees that. In my case I kept breaking my OpenStack install because I would take my laptop home and all the IPs would change when I connected there. If your VirtualBox host will always remain attached to the same network, feel free to use a bridged interface for your OpenStack VM, which would allow OpenStack, and its instances, to have true public IPs.

Scenario 2: OpenStack Running Directly on Hardware

This diagram is the same as before, just without the VirtualBox VM sandbox. OpenStack public IPs will always be real ones here.

[Figure: Physical Architecture]

Meet the Networks

External Network

This is the outside network. For an all-in-one install we use it for both external and OpenStack API traffic. Our public IPs will use this network to communicate with the outside world (virtual and real). For the VirtualBox scenario this will be the NAT Network; for the physical scenario this will be your office/lab/Internet. This is the network our NIC is attached to.

Note: We're using our external interface for our private OpenStack API traffic. In an all-in-one the API traffic will never leave the public interface because it's always pointed at its own IP and will therefore loop back. Multi-node installs would have a separate private network for API communication, as it really should be kept off the public interface.

In our example we'll assume our external network is set up like this:

  • Subnet: 10.20.0.0/24
  • Netmask: 255.255.255.0
  • Gateway: 10.20.0.1
  • DNS1: 8.8.8.8
  • DNS2: 8.8.4.4
  • Public IP Range: 10.20.0.50 – 10.20.0.254

FYI: 8.8.8.8 and 8.8.4.4 are Google-provided public DNS servers that work everywhere.

Private Network

The private network is what the instances are connected to. All instance traffic will go through the private networks. In an all-in-one box the private interface is just the loopback device "lo", as all the instances are located on the same machine. Traffic headed out to the external network will start on the private network and then route to the public network via a virtual router inside of OpenStack (Neat!!  … Once it all works).

In our example we'll assume our private network is set up like this:

  • Subnet: 10.0.30.0/24
  • Netmask: 255.255.255.0
  • Gateway: 10.0.30.1
  • DNS1: 8.8.8.8
  • DNS2: 8.8.4.4
  • Allocation Pool (private IPs): 10.0.30.10 – 10.0.30.254

VirtualBox Setup

If you're going to do this inside VirtualBox, here's what you should do to set up your environment:

NAT Network Setup

So that we have a stable network in our virtual environment, we're going to set up a NAT Network. This is a combination of an internal network and a NAT, so it can access the outside world. We're doing this so that our OpenStack server will always have a consistent IP (10.20.0.20) no matter what network our physical machine is connected to. I was doing my testing on a laptop I would transfer between home and work, so my networks would change.

Note: If you want OpenStack to have a real physical IP and your physical machine isn't going to be changing networks, you can skip this and just attach the virtual NIC to a Bridged Adapter.

  1. Open VirtualBox
  2. Navigate to the Network Preferences
  3. Create a new NAT Network and name it "PubNatNet"
  4. Edit the NAT Network you just created
    1. Unselect "Supports DHCP" (statically define your OpenStack VM's IP, or you'll have problems if its IP changes)
    2. Add the following "Port Forwarding" rules
        • Protocol: TCP Host IP: 127.0.0.1 Host Port: 2022 Guest IP: 10.20.0.20 Guest Port: 22
        • Protocol: TCP Host IP: 127.0.0.1 Host Port: 2080 Guest IP: 10.20.0.20 Guest Port: 80
          These rules allow you to access the OpenStack VM from your physical machine via ssh and your web browser,
          i.e. ssh root@localhost -p 2022 and http://localhost:2080
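
If you prefer to script the VirtualBox side, the roughly equivalent VBoxManage commands look like this (a sketch run from the host machine; double-check the flags against your VirtualBox version):

VBoxManage natnetwork add --netname PubNatNet --network "10.20.0.0/24" --enable --dhcp off
VBoxManage natnetwork modify --netname PubNatNet --port-forward-4 "ossh:tcp:[127.0.0.1]:2022:[10.20.0.20]:22"
VBoxManage natnetwork modify --netname PubNatNet --port-forward-4 "oweb:tcp:[127.0.0.1]:2080:[10.20.0.20]:80"

The rule names "ossh" and "oweb" are arbitrary labels.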

Virtual Machine Configuration

    • Name: ostack-allinone
    • VCPUs: 1
    • RAM: 3GB minimum, 4GB recommended (2GB will only let you start two 128MB instances)
    • Storage: 10GB (Fixed Size)
    • Network: 1Gb NIC
        • Attached to the VirtualBox NAT Network (PubNatNet)
        • Under the Advanced Settings, set Adapter Type to "virtio-net"
          This will provide better network performance
        • Under the Advanced Settings, allow Promiscuous Mode
          This allows the public IPs we will be creating to communicate via this NIC
    • Install the VirtualBox Guest Additions for better performance
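
The NIC settings above can also be applied from the host's command line (a sketch, assuming the VM is named ostack-allinone as above; flags per recent VBoxManage versions):

VBoxManage modifyvm ostack-allinone --nic1 natnetwork --nat-network1 PubNatNet
VBoxManage modifyvm ostack-allinone --nictype1 virtio --nicpromisc1 allow-all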

Setup a Workstation VM

For the most part you can use the NAT port-forwarding rules to control your OpenStack VM. One exception is the console connection to newly created instances. When you access an instance's console via the OpenStack web interface, it redirects you to the OpenStack VM's IP and a port assigned for that VNC session. Because we're using port forwarding and accessing OpenStack via localhost on a forwarded port, this will break. The ugly way around it is to set up a VM that can run a web browser and attach it to the PubNatNet NAT Network. When you need to access the console of an instance, you console into this workstation VM and through it access the OpenStack web interface. Since it is inside the PubNatNet network, the redirection for the instance console will work.
I'm not proud of this workaround, but it gets the job done.

OpenStack Host OS

OS Install

I did a minimal install of CentOS 6.5 with a single large root partition using the entire 10GB of space. I will assume your root password is "0p3n5t4cK". Configure the host with the following network settings:

  • Network Type: Static
  • IP: 10.20.0.20
  • NetMask: 255.255.255.0
  • Gateway: 10.20.0.1
  • DNS: 8.8.8.8
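
With our example external network, the resulting /etc/sysconfig/network-scripts/ifcfg-eth0 should look roughly like this (a sketch; leave any HWADDR/UUID lines the installer wrote in place):

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.20.0.20
NETMASK=255.255.255.0
GATEWAY=10.20.0.1
DNS1=8.8.8.8
DNS2=8.8.4.4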

Now would be a good time to test that your network is working right. Your OpenStack machine should be able to ping an outside system via its FQDN.

# ping pingdom.com

Install OpenStack Prerequisites

Install the RDO repo

root# yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-8.noarch.rpm

Install the EPEL repo

root# yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Make sure all packages are current

root# yum -y upgrade

Install some nice utilities

root# yum install -y screen traceroute bind-utils

Install the packstack installer and its utilities

root# yum install -y openstack-packstack openstack-utils

Generate the initial answer file. Note: an answer file is easier to manage than packstack command-line options.

root# packstack --gen-answer-file=allinone-answers.cfg

Reboot to make sure we’re using the latest installed kernel etc…

root# reboot

Modify the answer file for our All-in-one install.

vi /root/allinone-answers.cfg

Modify the following variables:

CONFIG_NTP_SERVERS=0.rhel.pool.ntp.org,1.rhel.pool.ntp.org
CONFIG_KEYSTONE_ADMIN_PW=0p3n5t4cK
CONFIG_CINDER_VOLUMES_SIZE=4G
CONFIG_NOVA_COMPUTE_PRIVIF=lo
CONFIG_NOVA_NETWORK_PRIVIF=lo
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:10:20
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex

CONFIG_PROVISION_DEMO=n
Note: You can use the openstack-config utility to automate this in a script.
Ex. openstack-config --set ~/allinone-answers.cfg general CONFIG_NTP_SERVERS 0.rhel.pool.ntp.org,1.rhel.pool.ntp.org
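
A fuller script is sketched below, assuming the answer file is at /root/allinone-answers.cfg and that every option lives in the same "general" section as the example above:

A=/root/allinone-answers.cfg
openstack-config --set $A general CONFIG_NTP_SERVERS 0.rhel.pool.ntp.org,1.rhel.pool.ntp.org
openstack-config --set $A general CONFIG_KEYSTONE_ADMIN_PW 0p3n5t4cK
openstack-config --set $A general CONFIG_CINDER_VOLUMES_SIZE 4G
openstack-config --set $A general CONFIG_NOVA_COMPUTE_PRIVIF lo
openstack-config --set $A general CONFIG_NOVA_NETWORK_PRIVIF lo
openstack-config --set $A general CONFIG_NEUTRON_OVS_VLAN_RANGES physnet1:10:20
openstack-config --set $A general CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS physnet1:br-ex
openstack-config --set $A general CONFIG_PROVISION_DEMO n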

Here's what these variables do:

  • CONFIG_NTP_SERVERS (0.rhel.pool.ntp.org,1.rhel.pool.ntp.org): Time servers to keep our clock in sync (not required, but why not)
  • CONFIG_KEYSTONE_ADMIN_PW (0p3n5t4cK): Initial admin password for OpenStack
  • CONFIG_CINDER_VOLUMES_SIZE (4G): How much space we'll reserve for add-on volumes*
  • CONFIG_NOVA_COMPUTE_PRIVIF (lo): For the all-in-one compute service we use a loopback for our private network
  • CONFIG_NOVA_NETWORK_PRIVIF (lo): For the all-in-one network service we use a loopback for our private network
  • CONFIG_NEUTRON_OVS_VLAN_RANGES (physnet1:10:20): Defines the name physnet1, used by the OpenStack virtual switch, for the physical network that will be available to tenants**
  • CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS (physnet1:br-ex): Maps the physical network name physnet1 to our external bridge name br-ex
  • CONFIG_PROVISION_DEMO (n): Don't have packstack provision a demo project etc. The defaults for this are hard-coded and it's easier to do it manually

*Cinder provides persistent block storage for your instances… think of it as an external drive. The instance's root disk comes from the image (served by Glance), so Cinder isn't required to run your instances. Also, in a default all-in-one install Cinder is actually backed by a sparse file attached through a loopback device, so it's going to be hideously slow.

**The variable name CONFIG_NEUTRON_OVS_VLAN_RANGES is a bit of a misnomer here, as we're using "local" routing instead of VLANs. The VLAN range "10:20" is ignored and is only there because of an RDO error check that will barf if we don't put it in. We're just using this variable to define the physical network name so we can associate it with the external bridge.

Install Time

Run RDO packstack with the answer file we generated

packstack --answer-file=allinone-answers.cfg

Note: This takes a while. Patience, go get a soda from the fridge. You should see a bunch of [ Done ] messages; red is bad. Sometimes just rerunning packstack will clear an occasional red error.

Fix Horizon Config Bugs

For some reason RDO sets the default user role to a non-existent role. The result is that when you try to add users or projects through Horizon it will throw an error.

Another minor glitch is that Horizon is configured to accept connections only from itself. This setting seems to be ignored, as you'll still be able to connect from other machines, but you might as well have it configured correctly just in case.

vi /etc/openstack-dashboard/local_settings
ALLOWED_HOSTS = ["*"]
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
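
If you'd rather script these edits, a sed sketch like the following works, assuming both settings already appear uncommented in local_settings:

sed -i 's/^ALLOWED_HOSTS.*/ALLOWED_HOSTS = ["*"]/' /etc/openstack-dashboard/local_settings
sed -i 's/^OPENSTACK_KEYSTONE_DEFAULT_ROLE.*/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"/' /etc/openstack-dashboard/local_settings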

Restart Apache to apply the changes to Horizon

service httpd restart

Switch to Full SW Virtualization (VirtualBox Only)

If you're installing OpenStack in a VirtualBox VM, you need to switch to full software virtualization to run instances.

Reconfigure Nova to use qemu instead of kvm

root# openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu

Restart services to apply the change

service openstack-nova-compute restart
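
To confirm the change took, you can read the value back out of nova.conf (assuming your openstack-utils build supports --get):

root# openstack-config --get /etc/nova/nova.conf DEFAULT libvirt_type
qemu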

Here is What Your Bridges Should Look Like Now

Right after the install finishes your bridge config should look like this:

[root@ostack ~]# ovs-vsctl show
c448ffd1-2acb-4cb1-8720-5b3adf6a628d
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
    Bridge br-int
        Port int-br-ex
            Interface int-br-ex
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.11.0"

Note: Two ports were created as a result of CONFIG_NEUTRON_OVS_VLAN_RANGES and CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS. On the external bridge, br-ex, the phy-br-ex port was created; on the internal bridge, br-int, the int-br-ex port was created.

Attach the OpenStack Bridge to the NIC

Now that we're done with the initial install of OpenStack, let's set up the bridge on the OS side and attach it to the NIC. We're going to create a bridge called br-ex and transfer our IP configuration from eth0 to it. We'll also attach eth0 to the bridge.

Create the br-ex Bridge

The first step is to create the br-ex network device and copy the IP settings over from the physical interface. Assuming you're using our example external network, the config should look like this:

vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=none
IPADDR=10.20.0.20
NETMASK=255.255.255.0
GATEWAY=10.20.0.1
DNS1=8.8.8.8
DNS2=8.8.4.4
ONBOOT=yes

IPADDR/NETMASK/GATEWAY are all copied from our physical NIC configuration, "ifcfg-eth0".

Attach the physical NIC to your Bridge

Now that the bridge is configured we reconfigure the real NIC to point to it.

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
ONBOOT=yes
IPV6INIT=no
USERCTL=no

Notice that we've removed all the IP info from the physical NIC; make sure to remove BOOTPROTO as well.

Restart the Network

To apply the configs, just restart the network service. It would be a good idea to be physically on the box at this point, but if you're certain you got everything right you can do it while ssh'ed into the box and it should come back up.

service network restart

Now would be a good time to verify your OpenStack machine can still access the outside world.

# ping pingdom.com

Check Out Our New Bridge Setup

On the OS side we see:

[root@ostack ~]# ip a
...
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:11:8f:5b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fe11:8f5b/64 scope link
       valid_lft forever preferred_lft forever
...
...
5: br-ex: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 08:00:27:11:8f:5b brd ff:ff:ff:ff:ff:ff
    inet 10.20.0.20/24 brd 10.20.0.255 scope global br-ex
    inet6 fe80::9ca2:8fff:febb:24c8/64 scope link
       valid_lft forever preferred_lft forever

As you can see the bridge now has the IP and eth0 is unassigned.

On the OpenStack side we see:

[root@ostack ~]# ovs-vsctl show
72fe59a9-c26d-47c4-8805-f6b21b705805
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "1.11.0"

You'll notice that our bridge "br-ex" now has our NIC attached as a port. Think of the bridge as a switch to which we've just attached our uplink, eth0.

Almost There!!

At this point you should have a fully operational OpenStack all-in-one with external network connectivity. All that's left is to set up the environment for our projects inside OpenStack itself. The rest of this tutorial can be done from the web GUI (Horizon). I'll also include the commands near the end that do the same work from a shell prompt.

Accessing the Web Interface

The web interface is called Horizon. It lets you administer many aspects of your OpenStack install as well as provide a self-service web interface for your tenants.

Accessing Horizon From a VirtualBox Setup

If you're using VirtualBox, we will use one of the NAT rules we made as part of our NAT Network config. The URL is http://localhost:2080.

Accessing Horizon From a Physical Server Setup

If you've set up a physical OpenStack server, just access it via its IP: http://{physical server}/

The initial username is "admin" and the password is what you set in CONFIG_KEYSTONE_ADMIN_PW, aka "0p3n5t4cK". RDO also stores the password in an environment script, /root/keystonerc_admin.

Install a Test Image

We're going to upload a minimal Linux image that we will use for a test instance later on.

  1. Log in to Horizon as "admin" and make sure you're on the "Admin" tab
  2. Select "Images" from the left-side menu
  3. Select "+ Create Image"
    Name: cirros
    Image Source: Image Location
    Image Location: http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
    Format: QCOW2 – QEMU Emulator
    Minimum Disk (GB): 1
    Minimum Ram (MB): 128
    Public: Checked
  4. Select "Create Image"
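
The same image can be created from the CLI with glance (a sketch; source the admin credentials first so glance has credentials):

root# . /root/keystonerc_admin
root# glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public True --copy-from http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img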

Create a Nano Flavor

Since your proof of concept is probably working in tight spaces, in terms of RAM/storage, we're going to create a new flavor to minimize the resources used by our test instances. A flavor is a resource profile applied when launching an instance.

  1. Select “Flavors” from the left-side menu
  2. Select “+ Create Flavor”
    Name: m1.nano
    ID: auto
    VCPUs: 1
    RAM MB: 128
    Root Disk GB: 1
    Ephemeral Disk GB: 0
    Swap Disk MB: 0
  3. Select “Create Flavor”
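
The CLI equivalent is a one-liner (a sketch; run with the admin credentials sourced). The positional arguments are name, ID, RAM (MB), disk (GB), and VCPUs:

root# nova flavor-create m1.nano auto 128 1 1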

Create Public Floating Network

This is the virtual network that OpenStack will bridge to the outside world. We will assign public IPs to our instances from this network.

  1. Select "Networks" from the left-side menu
  2. Select "+ Create Network"
    Name: public
    Project: admin
    Admin State: Checked (a funny way of saying "enabled")
    Shared: Un-Checked (our tenants will never directly access this network; a virtual router will connect it to their private networks)
    External Network: Checked (attaches this network to the br-ex bridge)
  3. Select "Create Network"

Create Public Floating Subnet

Now that we've created the network, we need to add the range of IPs it will assign out.

  1. Select "Networks" from the left-side menu
  2. Select the network you just created, "public"
  3. Select "+ Create Subnet"
    Subnet Name: public_subnet
    Network Address: 10.20.0.0/24
    IP Version: IPv4
    Gateway IP: Leave this blank. It will be automatically filled with 10.20.0.1.
    Under "Subnet Details": uncheck "Enable DHCP" and set the Allocation Pool to "10.20.0.50,10.20.0.254" (our public IP range)
  4. Select "Create"

Setup Our 1st Project

Create the Demo Project

This will be the project we use for testing. Technically an admin project is created for you, but you really shouldn't use it to set up user instances etc. OpenStack also calls projects "tenants".

  1. Select “Projects” from the left-side menu
  2. Select “+ Create Project”
    Name: Demo
    Enabled: Checked
  3. Select “Create Project”

Create the Demo User

This will be the user we use for testing. It will be a member of the Demo project.

  1. Select “Users” from the left-side menu
  2. Select “+ Create User”
    User Name: demo
    Password: demo
    Primary Project: Demo
    Role: _member_
  3. Select "Create User"

Access the Demo Project

The rest of the config is done inside the project as the demo user, and is what allows the project to connect to the rest of OpenStack and the outside world.

Log out of admin and log in to Horizon using the newly created demo account.

Once you log in to the demo project you'll see a similar setup to when we were logged in as admin. The Admin tab is absent, of course, and so are a couple of the other options on the left-side menu.

Setup the Private Network

Under Construction Past This Point

I published this before it was completely done so things past this point are pretty crude. I should have this revised and complete in about a week.

These are the CLI equivalents of the Horizon steps above. If you already created the public network and subnet in Horizon, skip ahead to the tenant network commands.

Source the admin credentials:

root# cd /root
root# . keystonerc_admin

Create the public network:

root# neutron net-create public --router:external=True

Create the public subnet and its allocation pool:

root# neutron subnet-create --name public_subnet --disable-dhcp --allocation-pool start=10.20.0.50,end=10.20.0.254 public 10.20.0.0/24

Setup the Demo User Environment Script

We created the demo user and Demo project in Horizon above (if you skipped that: select "Users", then "+ Create User", fill in the info, and hit the "+" next to Primary Project to create the project).

So we can run CLI commands as the demo user, copy the admin environment script:

root# cd /root
root# cp keystonerc_admin keystonerc_demo

Then edit keystonerc_demo, replacing the admin values with the demo user's:

export OS_USERNAME=demo
export OS_TENANT_NAME=Demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.20.0.20:35357/v2.0/
export PS1='[\u@\h \W(keystone_demo)]\$ '

Create the Tenant (Private) Network

Source the demo user's credentials, then create the private network, its subnet, and a router connecting it to the public network:

root# cd /root
root# . keystonerc_demo
root# neutron net-create private
root# neutron subnet-create --name private_subnet --dns-nameserver 8.8.8.8 --allocation-pool start=10.0.30.10,end=10.0.30.254 private 10.0.30.0/24
root# neutron router-create extrouter
root# neutron router-gateway-set extrouter public
root# neutron router-interface-add extrouter private_subnet
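
As a quick sanity check (still as the demo user), the networks and router should now show up:

root# neutron net-list
root# neutron router-list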

Fixups

Fix Ceilometer: there's a bug with RDO; the note for it says you need to run packstack twice.

DNS resolution for instances: by design OpenStack ignores the OpenStack host's resolv.conf for dnsmasq (aka --no-resolv). You are expected to put the DNS servers in by hand. In the GUI this is under the "Subnet Details" tab.

Create a Keypair

Log in to the web interface as your test user, then:

  1. Select "Access & Security" and go to the "Keypairs" tab
  2. Select "Create Keypair" and name it {userid}_key
  3. Download the private key when prompted

Or create a key locally and import the public half:

ssh-keygen -t rsa -b 2048 -N '' -f id_rsa_demo
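
The import can be done from the CLI (a sketch; run with the demo user's credentials sourced, and note that demo_key is just an example name):

root# nova keypair-add --pub-key id_rsa_demo.pub demo_key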

First Instance Test

  1. Log in to the dashboard as the "demo" user (password "demo", also stored in /root/keystonerc_demo on the control node)
  2. Enable ssh and ping in your default security group
    1. Select "Access & Security"
    2. Select "Edit Rules" under the "default" security group
    3. Select "Add Rule", pick "SSH" from the dropdown, and confirm with the "Add" button
    4. Select "Add Rule", pick "Custom ICMP" from the dropdown, enter Type -1 and Code -1, and confirm with the "Add" button
  3. Launch the instance
    1. Select "Instances"
    2. Select "Launch Instance"
    3. For "Instance Name" enter "CTest"
    4. Select the "m1.nano" flavor
    5. Instance Boot Source: Boot from image
    6. Image Name: cirros
    7. Select the "Networking" tab and pick the "private" network
    8. Select "Launch"
  4. Associate a floating IP
    1. Under the "Instances" heading, select "More" for your launched instance
    2. Select "Associate Floating IP"
    3. Select the "+" next to the IP Address field
    4. Select "Allocate IP"
    5. Select "Associate"

You should now see both public and private IP addresses listed in the "IP Address" column for your instance.
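
To verify external connectivity, go to a machine on the external network (the Workstation VM in the VirtualBox scenario) and ping/ssh the floating IP. A sketch, assuming your instance was assigned 10.20.0.51:

ping 10.20.0.51
ssh cirros@10.20.0.51

The cirros 0.3.1 image's default login is "cirros" with the password "cubswin:)".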

For additional details, please read up on how to set a floating IP range.
