
All-In-One OpenStack Liberty using RDO Packstack with External Public IPs

Summary

This is a single node, proof-of-concept that includes setting up the external connectivity to your VMs (aka, “instances” in OpenStackian lingo). It uses the RDO tool Packstack and will install the Liberty release of OpenStack. You will have full access to an instance from outside the OpenStack environment via its Public floating IP. This guide uses the VXLAN provider, so it should work in multi-node configurations. The current RDO documentation for Public floating IPs uses the FLAT provider, which only works for a single node. I wrote this article because I had to follow 3-4 different guides to get here and figured others might like all of this information compiled into one guide.

TIP: I highly recommend using a virtual system (e.g., VirtualBox) for the setup, at least initially. OpenStack works well running inside another virtual environment, and the benefits of snapshots, clones, and virtual NICs will make your experiments and testing much easier. I’ll provide specifics for using VirtualBox with this guide, alongside the non-virtual instructions.

Example Setup

Scenario 1: OpenStack VM Running Inside VirtualBox

This diagram shows the architectural layout, assuming you run it inside a VirtualBox VM.

All-in-One Architecture

Architecture inside of VirtualBox VM

OpenStack will be run from inside a VM, on a computer running VirtualBox. Network connectivity for the VM will be provided by the NAT Network in VirtualBox. The downside to using a NAT Network instead of a Bridge Interface is that the OpenStack instance public IPs will only be public to other VirtualBox VMs. I chose a NAT Network interface because OpenStack needs a static IP to operate, and a NAT Network guarantees that. In my case, I kept breaking my OpenStack install because I would take my laptop home and all the IPs would change when I connected there. If your VirtualBox host will always remain attached to the same network, then feel free to use a Bridge Interface for your OpenStack VM, which would allow OpenStack, and its instances, to have true public IPs.

Scenario 2: OpenStack Running Directly on Hardware

This diagram is the same as before, just without the VirtualBox VM sandbox. OpenStack public IPs will be real ones from your Office/Lab/Internet network.

All-in-One Physical Architecture

Architecture For Baremetal Install

Meet the Networks

External Network

The external network is the outside network. For an all-in-one install, you use this network for external and OpenStack API traffic. Your public IPs will use this network to communicate with the outside world (virtual and real). For the VirtualBox scenario, this will be the NAT Network. For the physical scenario, this will be your Office/Lab/Internet network. This is the network your NIC will be physically attached to.

Note: We’re using your external interface for your private OpenStack API traffic. In an all-in-one, the API traffic will never leave the public interface because it’s always pointed at its own IP and will therefore loop back. Multi-node installs should have a separate private network for API communication, as it really should be kept off the public interface. However, most of the services will be listening on the external IP, so if you want to use “keystone,” for example, from another machine on the network, you can.

In this example, we’ll assume your external network is set up like this:

  • Subnet: 10.20.0.0/24
  • Netmask: 255.255.255.0
  • Gateway: 10.20.0.1
  • DNS1: 8.8.8.8
  • DNS2: 8.8.4.4
  • Public IP Range: 10.20.0.50 – 10.20.0.254
    FYI: 8.8.8.8 and 8.8.4.4 are Google-provided DNS servers that work everywhere.

Private Network

The private network is what the instances are connected to. All instance traffic will go through the private network. In an all-in-one box, the private interface is just the loopback device “lo,” since all the instances are located on the same machine. Traffic headed out to the external network will start on the private network and then route to the public network via a virtual router inside of OpenStack (neat!!… once it all works).

In this example, we’ll assume your private network is set up like this:

  • Subnet: 10.0.30.0/24
  • Netmask: 255.255.255.0
  • Gateway: 10.0.30.1
  • DNS1: 8.8.8.8
  • DNS2: 8.8.4.4
  • Private IP Range: 10.0.30.10 – 10.0.30.254

VirtualBox Setup

If you’re going to do this inside VirtualBox, here’s what you should do to set up your environment:

NAT Network Setup

In order to provide a stable network in your virtual environment, we’re going to set up a NAT Network. This is a combination of an internal network and a NAT, so that it can access the outside world. We’re doing this so that your OpenStack server will always have a consistent IP (10.20.0.20), no matter what network your physical machine is connected to. I was doing my testing on a laptop that I would move between work and home, which meant that my networks would change.

Note: If you want OpenStack to have a real physical IP, and your physical machine isn’t going to be changing networks, then you can skip this and just attach the virtual NIC to a bridged adapter.

In VirtualBox Create a New NAT Network

  • Name: PubAIO
  • Network CIDR: 10.20.0.0/24
  • Supports DHCP: Unchecked
    Statically define your OpenStack VM’s IP or you’ll have problems if its IP changes
  • Port Forwarding:
Name Protocol Host IP Host Port Guest IP Guest Port
SSH TCP 127.0.0.1 2022 10.20.0.20 22
HTTP TCP 127.0.0.1 2080 10.20.0.20 80

This will allow you to access the OpenStack VM from your physical machine via ssh and your web browser.
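If you prefer to script the VirtualBox side, something like the following VBoxManage commands should create the same NAT Network and forwarding rules. This is just a sketch; the exact natnetwork syntax can vary between VirtualBox versions.

user@workstation:~$ VBoxManage natnetwork add --netname PubAIO --network "10.20.0.0/24" --enable --dhcp off
user@workstation:~$ VBoxManage natnetwork modify --netname PubAIO --port-forward-4 "SSH:tcp:[127.0.0.1]:2022:[10.20.0.20]:22"
user@workstation:~$ VBoxManage natnetwork modify --netname PubAIO --port-forward-4 "HTTP:tcp:[127.0.0.1]:2080:[10.20.0.20]:80"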

Virtual Machine Configuration

  • Name: osaio
  • Type: Linux
  • Red Hat (64-bit)
  • VCPUs: 1
  • RAM: 3GB minimum, 4GB recommended (2GB will only let you start two 128MB instances)
  • Storage: 10GB Storage (fixed size)
  • Network: 1Gb NIC
  • Attached to: NAT Network
  • Name: PubAIO
  • Adapter Type: Paravirtualized Network (virtio-net)
    This will provide better network performance
  • Promiscuous Mode: Allow All
    This allows the public IPs you will be creating to communicate via this NIC
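The NIC settings above can also be applied from the host with VBoxManage while the VM is powered off (a sketch, assuming the VM is named osaio as above):

user@workstation:~$ VBoxManage modifyvm osaio --nic1 natnetwork --nat-network1 PubAIO
user@workstation:~$ VBoxManage modifyvm osaio --nictype1 virtio --nicpromisc1 allow-all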

Install VirtualBox Guest Additions for better performance (and less annoyance)

Setup a Workstation VM

You can use the NAT Port Forwarding rules to control your OpenStack VM. One exception will be the console connection to newly created instances. When you access the console from an instance, via the OpenStack web interface, it will redirect you to the OpenStack VM’s IP and a port assigned for that VNC session. Because we’re using Port Forwarding and accessing OpenStack via localhost on a forwarded port, the VNC session will break. The ugly way around this is to setup a workstation VM that can run a web browser and attach it to the PubAIO NAT Network. When you need to access the console for an instance, you will console into the workstation VM and through it, access the OpenStack web interface. Since it is inside the PubAIO NAT Network, the redirection for the instance’s console will work.
I’m not proud of this workaround, but it gets the job done.

Install OpenStack

OS Install

I did a minimal install of CentOS 7.x, with a single large root partition using most of the 10GB of space. The minimal install auto-partitioner is pretty dumb, so make sure to select manual partitioning. Once selected, you’ll be given an option to auto-configure and review the proposed changes. Configure the host with the following system settings:

  • Partitions:
  • /boot: 500MiB
  • / (root): 8672MiB
  • swap: 1024MiB
  • Network Type: Manual
  • IP: 10.20.0.20
  • NetMask: 255.255.255.0
  • Gateway: 10.20.0.1
  • DNS: 8.8.8.8
  • Root Password: 0p3n5t4cK
  • Hostname: osaio

Once the install is finished, confirm the VM can access the internet:

root@vm:~$ ping pingdom.com

Test the port forwarding; you should land at a prompt on the osaio VM:

user@workstation:~$ ssh root@localhost -p 2022
root@osaio:~$

Install OpenStack Prerequisites

  1. Make sure your environment has sane defaults
    root@osaio:~$ vi /etc/environment
    LANG=en_US.utf-8
    LC_ALL=en_US.utf-8
    
  2. Install the RDO repo
    root@osaio:~$ yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
    
  3. Install the EPEL repo
    root@osaio:~$ yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    
  4. Make sure all packages are current
    root@osaio:~$ yum -y upgrade
    
  5. Install some nice utilities
    root@osaio:~$ yum install -y screen traceroute bind-utils
    
  6. Disable NetworkManager (RDO doesn’t like it)
    root@osaio:~$ systemctl stop NetworkManager
    root@osaio:~$ systemctl disable NetworkManager
    root@osaio:~$ systemctl start network.service
    root@osaio:~$ systemctl enable network.service
    
  7. Install the Packstack installer and its utilities
    root@osaio:~$ yum install -y openstack-packstack openstack-utils
    
  8. Generate the initial answer file
    Note: This is easier to manage than Packstack command line options

    root@aionode:~$ packstack --gen-answer-file=allinone-answers.cfg
    
  9. Reboot to make sure you’re using the latest installed kernel, etc…
    root@aionode:~$ reboot
    
  10. Modify the answer file for your All-in-One install
    root@aionode:~$ vi /root/allinone-answers.cfg
    CONFIG_NTP_SERVERS=0.rhel.pool.ntp.org,1.rhel.pool.ntp.org
    CONFIG_DEFAULT_PASSWORD=0p3n5t4cK
    CONFIG_KEYSTONE_ADMIN_PW=0p3n5t4cK
    CONFIG_CINDER_VOLUMES_SIZE=4G
    CONFIG_NOVA_COMPUTE_PRIVIF=lo
    CONFIG_NOVA_NETWORK_PRIVIF=lo
    CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
    CONFIG_PROVISION_DEMO=n
    CONFIG_NOVA_NETWORK_PUBIF=eth0
    CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0
    

Note: You can use the openstack-config utility to automate this in a script

Example:

openstack-config --set ~/allinone-answers.cfg general CONFIG_NTP_SERVERS 0.rhel.pool.ntp.org,1.rhel.pool.ntp.org
openstack-config --set ~/allinone-answers.cfg general CONFIG_DEFAULT_PASSWORD 0p3n5t4cK
openstack-config --set ~/allinone-answers.cfg general CONFIG_KEYSTONE_ADMIN_PW 0p3n5t4cK
openstack-config --set ~/allinone-answers.cfg general CONFIG_CINDER_VOLUMES_SIZE 4G
openstack-config --set ~/allinone-answers.cfg general CONFIG_NOVA_COMPUTE_PRIVIF lo
openstack-config --set ~/allinone-answers.cfg general CONFIG_NOVA_NETWORK_PRIVIF lo
openstack-config --set ~/allinone-answers.cfg general CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS physnet1:br-ex
openstack-config --set ~/allinone-answers.cfg general CONFIG_PROVISION_DEMO n
openstack-config --set ~/allinone-answers.cfg general CONFIG_NOVA_NETWORK_PUBIF eth0
openstack-config --set ~/allinone-answers.cfg general CONFIG_NEUTRON_OVS_BRIDGE_IFACES br-ex:eth0

Here is an explanation of the variables:

VARIABLE NAME VALUE DESCRIPTION
CONFIG_NTP_SERVERS 0.rhel.pool.ntp.org, 1.rhel.pool.ntp.org Time Servers to keep your time in sync (not required, but why not)
CONFIG_DEFAULT_PASSWORD 0p3n5t4cK Set default password for various services
CONFIG_KEYSTONE_ADMIN_PW 0p3n5t4cK Initial admin password for OpenStack
CONFIG_CINDER_VOLUMES_SIZE 4G How much space you’ll reserve for add-on volumes [1]
CONFIG_NOVA_COMPUTE_PRIVIF lo For the All-in-One Compute service, you use a loopback for your private network
CONFIG_NOVA_NETWORK_PRIVIF lo For the All-in-One network service, use a loopback for your private network
CONFIG_NOVA_NETWORK_PUBIF eth0 [2] This should be the NIC on your VM/physical server that can reach the rest of the network
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS physnet1:br-ex Mapping from the physical network name, physnet1, to the external bridge name, br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES br-ex:eth0 [2] Similar to PUBIF, this automatically creates the bridge, br-ex, and transfers the eth0 config to it
CONFIG_PROVISION_DEMO n Don’t have Packstack provision a demo project. You'll be creating this manually with different values

Minimal OpenStack for the Impatient (Optional)

These components can be disabled, which will let the install go more quickly, while still leaving you enough functionality to complete this guide.

openstack-config --set ~/allinone-answers.cfg general CONFIG_CINDER_INSTALL n
openstack-config --set ~/allinone-answers.cfg general CONFIG_SWIFT_INSTALL n
openstack-config --set ~/allinone-answers.cfg general CONFIG_CEILOMETER_INSTALL n
openstack-config --set ~/allinone-answers.cfg general CONFIG_NAGIOS_INSTALL n

Pre-Deploy Network

root@osaio:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:1b:12:7d brd ff:ff:ff:ff:ff:ff
inet 10.20.0.20/24 brd 10.20.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe1b:127d/64 scope link
valid_lft forever preferred_lft forever

So before deployment, you have the loopback and your physical interface, which is configured with an IP address. The CONFIG_NEUTRON_OVS_BRIDGE_IFACES setting will change this.

Install Time

Run RDO Packstack With the Answer File you Generated

root@aionode:~$ packstack --answer-file=allinone-answers.cfg

Note: This takes a while. Patience, go get a soda from the fridge. You should be seeing bunches of [ Done ] (red is bad). Sometimes just rerunning will clear an occasional red message.
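Once Packstack reports success, a quick sanity check is to source the admin credentials it wrote and run openstack-status (it comes from the openstack-utils package installed earlier); most services should report active:

root@aionode:~$ source /root/keystonerc_admin
root@aionode:~$ openstack-status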

Switch to Full SW Virtualization (VirtualBox Only)

If you’re installing OpenStack in a VirtualBox VM, you need to switch to full software virtualization to run instances.

  1. Reconfigure Nova to use qemu instead of kvm
    root@aionode:~$ openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
  2. Restart the Nova compute service to apply the change
    root@aionode:~$ systemctl restart openstack-nova-compute.service

Review Networking

Post-Deploy Network

root@osaio:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
link/ether 08:00:27:1b:12:7d brd ff:ff:ff:ff:ff:ff
inet6 fe80::a00:27ff:fe1b:127d/64 scope link
valid_lft forever preferred_lft forever
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether aa:37:65:96:52:74 brd ff:ff:ff:ff:ff:ff
5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 08:00:27:1b:12:7d brd ff:ff:ff:ff:ff:ff
inet 10.20.0.20/24 brd 10.20.0.255 scope global br-ex
valid_lft forever preferred_lft forever
inet6 fe80::a45b:5aff:fe0d:4e4a/64 scope link
valid_lft forever preferred_lft forever
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 3e:bb:67:4f:b4:46 brd ff:ff:ff:ff:ff:ff
7: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether fa:f9:5a:7a:68:4f brd ff:ff:ff:ff:ff:ff

So the IP address has moved from the physical interface eth0 to br-ex. Several other bridges have also been created: br-int and br-tun.
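If you’re curious how this survives a reboot, Packstack rewrites the interface configs so that eth0 becomes an OVS port and br-ex carries the IP (assuming the standard CentOS 7 paths):

root@osaio:~$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
root@osaio:~$ cat /etc/sysconfig/network-scripts/ifcfg-br-ex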

OpenVswitch Config

Right after the install finishes your bridge config should look like this:

[root@ostack ~]# ovs-vsctl show
96ab8860-e31e-4455-8376-09dc774f4304
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth0"
Interface "eth0"
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-tun
fail_mode: secure
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
ovs_version: "2.4.0"

So you’ll notice that your bridge br-ex now has your NIC attached as a port. Think of the bridge as a switch and we’ve just attached your uplink, aka eth0.

The directive CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS created two new ports: phy-br-ex on the external bridge br-ex, and int-br-ex on the internal bridge br-int.

The directive CONFIG_NEUTRON_OVS_BRIDGE_IFACES created the br-ex interface and migrated the IP information from the physical interface eth0.
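A quicker way to confirm just the external bridge wiring is to list the ports on br-ex; based on the output above, you should see eth0 and the patch port (order may vary):

root@osaio:~$ ovs-vsctl list-ports br-ex
eth0
phy-br-ex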

Almost There!!

At this point you should have a fully operational OpenStack All-in-One with external network connectivity. All that’s left to do is setup the environment for your projects inside OpenStack itself. The rest of this tutorial can be done from the web GUI (Horizon).

Accessing the Web-Interface

As mentioned above, the web interface is called Horizon. It allows you to administer many aspects of your OpenStack install, as well as provide a self-service web interface for your tenants.

Accessing Horizon From a VirtualBox Setup

If you’re using VirtualBox, you will be using one of the NAT rules you made as part of your NAT Network config. The URL is http://localhost:2080.

Accessing Horizon From a Physical Server Setup

If you’ve set up a physical OpenStack server, just access it via its IP rather than localhost. Example: http://{physical server}/.

The initial username is “admin” and the password is what you set in CONFIG_KEYSTONE_ADMIN_PW, aka “0p3n5t4cK”. RDO also stores the password in an environment script /root/keystonerc_admin.
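If you ever forget the admin password, it’s easy to pull back out of that file (the exact variable layout may differ slightly between RDO releases):

root@osaio:~$ grep OS_PASSWORD /root/keystonerc_admin
export OS_PASSWORD=0p3n5t4cK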

Setup OpenStack Environment for Tenants

Install a Test Image

You’re going to upload a minimal Linux that you will use as a test instance later on.

  1. Login to Horizon as “admin”
  2. Select the “Admin” tab
  3. Select “Images” from the left-side menu
  4. Select “+ Create Image”
    • Name: cirros (the launch step later in this guide assumes this name)
    • Image Source: Image Location, pointing at a CirrOS cloud image in QCOW2 format
  5. Select “Create Image”
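If you’d rather do this from the command line, a rough equivalent looks like the following. It assumes the CirrOS 0.3.4 image; any small QCOW2 cloud image will do.

root@osaio:~$ source /root/keystonerc_admin
root@osaio:~$ curl -L -o /tmp/cirros.img http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
root@osaio:~$ openstack image create --disk-format qcow2 --container-format bare --public --file /tmp/cirros.img cirros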

Create a Nano Flavor

Since your proof of concept is probably working in tight quarters in terms of RAM and storage, you’re going to create a new flavor to minimize the resources used by your test instances. A flavor is a resource profile you apply when launching an instance.

  1. Select “Flavors” from the left-side menu
  2. Select “+ Create Flavor”
    • Name: m1.nano
    • ID: auto
    • VCPUs: 1
    • RAM MB: 128
    • Root Disk GB: 1
    • Ephemeral Disk GB: 0
    • Swap Disk MB: 0
  3. Select “Create Flavor”
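For reference, the CLI equivalent is a one-liner (arguments are name, ID, RAM in MB, disk in GB, and VCPUs), run with the admin credentials sourced:

root@osaio:~(keystone_admin)$ nova flavor-create m1.nano auto 128 1 1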

Create a Project

This will be the project you use for testing. Technically an admin project is created, but you really shouldn’t use it to set up user instances. OpenStack also calls projects tenants.

  1. Select "Identity/Projects" from the left-site menu
  2. Select "Create Project"
    • Name "Demo Project"
    • Enabled "Checked"
  3. Select "Quota" Tab
    • Volumes "10"
    • Volume Snapshots "10"
    • Total Size of Volumes "5"

You'll see some errors: unable to set quotas and can't determine volume limit. Don't sweat it.
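A rough CLI equivalent (the quotas above would still need to be adjusted separately):

root@osaio:~(keystone_admin)$ openstack project create --enable "Demo Project"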

Create the Demo User

This will be the user you use for testing. They will be a member of the Demo project.

  1. Select “Identity/Users” from the left-side menu
  2. Select “+ Create User”
    • User Name: demo
    • Password: demo
    • Primary Project: Demo Project
    • Role: member
  3. Select “Create User”
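A rough CLI equivalent, run as admin. Note that in RDO the default member role is usually named _member_; check openstack role list if unsure.

root@osaio:~(keystone_admin)$ openstack user create --password demo --project "Demo Project" demo
root@osaio:~(keystone_admin)$ openstack role add --user demo --project "Demo Project" _member_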

Setup Internal OpenStack Network

Create Public Floating Network (All Tenants)

This is the virtual network that OpenStack will bridge to the outside world. You will assign public IPs to your instances from this network.

Web Interface Method

  1. Select “System/Networks” from the left-side menu
  2. Select “+ Create Network”
    • Name: public
    • Project: admin
    • Provider Network Type: VXLAN
    • Segmentation ID: 96
      This is a fairly arbitrary number; it just shouldn’t conflict with a segmentation ID already in use
    • External Network: Checked
  3. Select “Create Network”

CLI Method

root@aionode:~$ source /root/keystonerc_admin
root@aionode:~$ neutron net-create public --router:external=True --provider:network_type=vxlan --provider:segmentation_id=96

Results

Field Value
admin_state_up True
id ef430df2-5206-4b5f-b630-ef25176eb351
mtu 0
name public
provider:network_type vxlan
provider:physical_network
provider:segmentation_id 96
router:external True
shared False
status ACTIVE
subnets
tenant_id c6ed09fb7970466d994985571201e775

Create IP Range

Now that you’ve created the network, you need to add a range of IPs to be used as the floating IP pool.

Web Interface Method

  1. Select “System/Networks” from the left-side menu
  2. Under the "Network Name" column select "public"
    Clicking "Edit Network" takes you somewhere else
  3. Select "Create Subnet"
    • Name: public_subnet
    • Network Address: 10.20.0.0/24
    • IP Version: IPv4
    • Gateway: 10.20.0.1
    • Disable Gateway: Unchecked
  4. Select "Subnet Details"
    • Enable DHCP: Unchecked
    • Allocation Pools: 10.20.0.100,10.20.0.150
  5. Select “Create”

CLI Method

root@aionode:~(keystone_admin)$ neutron subnet-create --name public_subnet --disable-dhcp --allocation-pool start=10.20.0.100,end=10.20.0.150 public 10.20.0.0/24

Result

Field Value
allocation_pools {“start”: “10.20.0.100”, “end”: “10.20.0.150”}
cidr 10.20.0.0/24
dns_nameservers
enable_dhcp False
gateway_ip 10.20.0.1
host_routes
id 8ffd99c5-b6d9-4d46-ac5a-92b6e659f839
ip_version 4
ipv6_address_mode
ipv6_ra_mode
name public_subnet
network_id 267ee4a2-8e11-4bb1-8e46-a4cc89ad23e3
subnetpool_id
tenant_id c6ed09fb7970466d994985571201e775

Done with Setup from Admin Side!!

At this point you’re done with the setup of the environment from the administrative side. All that remains is to login as the tenant and do some final network setup.

Finish the Network Setup in the Demo Project

The rest of the configuration is done inside the project as the demo user. You’re going to add some virtual routers and a private subnet, so that your instances will have private IPs and a route to the OpenStack wide floating IP network.

Access the Demo Project

Logout of the web interface and log back in using the demo/demo credentials.

Once you login to the demo project, you’ll see a similar setup to when you were logged in as the admin. The admin tab is absent, of course, and so are a couple of the other options on the left-side menu.

Setup the Private Network (Tenant Specific)

In order to access the CLI commands as the demo user, you need to create a shell script with that user’s credentials.

Set up the demo user environment script:
  1. Select “Compute/Access & Security” tab
  2. Select “API Access”
  3. Select “Download OpenStack RC file”
  4. Copy the contents to /root/keystonerc_demo on the OpenStack node
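If you’d rather type it than download it, the file boils down to a few exports. The values below are assumed from this guide’s setup; the downloaded RC file prompts for the password instead of hard-coding it.

root@osaio:~$ cat /root/keystonerc_demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_TENANT_NAME="Demo Project"
export OS_AUTH_URL=http://10.20.0.20:5000/v2.0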

Setup Tenant Network/Subnet

This is the private network your instances will attach to. Instances will be issued IPs from this private IP subnet.

Create Tenant Network

Web Interface Method

  1. Select “Network/Networks” from the left-side menu
  2. Select “Create Network”
    • Name: private
    • Create Subnet: checked
  3. Select “Subnet” tab
    • Subnet Name: private_subnet
    • Network Addresses: 10.0.30.0/24
    • IP Version: IPv4
    • Gateway IP: Default of 1st IP is fine so leave this blank
  4. Select “Subnet Details” tab
    • Allocation Pools: 10.0.30.50,10.0.30.100
    • DNS Name Servers:
    • 8.8.8.8
    • 8.8.4.4
  5. Select “Create”

CLI Method

root@osaio:~$ source /root/keystonerc_demo
root@osaio:(keystone_demo)$ neutron net-create private
root@osaio:(keystone_demo)$ neutron subnet-create --name private_subnet --dns-nameserver 8.8.8.8 --dns-nameserver 8.8.4.4 --allocation-pool start=10.0.30.10,end=10.0.30.254 private 10.0.30.0/24

Note: The prompt won’t actually display (keystone_demo) because the shell script doesn’t set it, but I’m using it here to indicate that you should be sourcing the demo users credentials.

Private Network Results
[root@osaio ~(keystone_admin)]# neutron net-show private

Field Value
admin_state_up True
id b018b8ac-002e-4ab9-bb60-2bd82f060728
mtu 0
name private
provider:network_type vxlan
provider:physical_network
provider:segmentation_id 54
router:external False
shared False
status ACTIVE
subnets 4009aae4-624a-4134-a0f4-05711278a6a7
tenant_id 3fa9a67f91e14f09a7b40a180e7a596c

Private Subnet Results
[root@osaio ~(keystone_admin)]# neutron subnet-show private_subnet

Field Value
allocation_pools {“start”: “10.0.30.50”, “end”: “10.0.30.100”}
cidr 10.0.30.0/24
dns_nameservers 8.8.8.8
8.8.4.4
enable_dhcp True
gateway_ip 10.0.30.1
host_routes
id 4009aae4-624a-4134-a0f4-05711278a6a7
ip_version 4
ipv6_address_mode
ipv6_ra_mode
name private_subnet
network_id b018b8ac-002e-4ab9-bb60-2bd82f060728
subnetpool_id
tenant_id 3fa9a67f91e14f09a7b40a180e7a596c

Create an External Router to Attach to floating IP Network

This router will attach to your private subnet and route to the public network, which is where your floating IPs are located.
Web Interface Method

  1. Select “Network/Routers” from the left-side menu
  2. Select “Create Router”
    • Name: extrouter
    • External Network: public
  3. Select “Create Router”
  4. Under the “Name” column select “extrouter”
  5. Select the “Interfaces” tab
  6. Select “Add Interface”
    • Subnet: private
  7. Select “Add Interface”

CLI Method

root@osaio:(keystone_demo)$ neutron router-create extrouter
root@osaio:(keystone_demo)$ neutron router-gateway-set extrouter public
root@osaio:(keystone_demo)$ neutron router-interface-add extrouter private_subnet

First Instance TEST

Log in to the dashboard (Horizon) as the “demo” user, using the same URL you used to log in as admin.

Create a Keypair

This is used to ssh into your instances without a password.

  1. Login via web interface as the demo user
  2. Select “Compute/Access & Security/Keypairs”
  3. Select “Create Keypair”
    • Key Pair Name: demo_key (this matches the demo_key.pem file used later in this guide)
  4. Download the private key when prompted
    You’ll only get one shot at this. You can’t go back later and get this file again

Note: To use the private key with PuTTY, you will have to load the .pem file into PuTTYgen, save it as a .ppk, and then import it into the SSH agent (Pageant).

Note: For OS X and Linux, just invoke ssh with -i pemfile.pem to log in.
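You can also create the keypair from the CLI, which drops the private key straight into a file (using the demo credentials and the key name assumed in this guide):

root@osaio:(keystone_demo)$ nova keypair-add demo_key > demo_key.pem
root@osaio:(keystone_demo)$ chmod 600 demo_key.pem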

Setup Security Groups

Security Groups are the equivalent of firewall rules in OpenStack. These rules will be applied to the interfaces of any instances you create in your tenant. You will open the ports for ssh and ping to the world. Obviously not a great idea to do this in a production environment. 😉

  1. Select “Compute/Access & Security”
  2. Select “Manage Rules” under “default” security group
  3. Select “Add Rule”
    • Rule: SSH
  4. Confirm with the “Add” button
  5. Select “Add Rule”
    • Rule: Custom ICMP
    • Type: -1
    • Code: -1
  6. Confirm with the “Add” button
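The same two rules from the CLI (a sketch; they open SSH and all ICMP from anywhere, matching the web steps above):

root@osaio:(keystone_demo)$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
root@osaio:(keystone_demo)$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0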

Create/Launch the Instance

  1. Select “Compute/Instances”
  2. Select “Launch Instance”
    • Instance Name: CTest
    • Flavor: m1.nano
    • Instance Boot Source: Boot from image
    • Image Name: cirros
  3. Select the “Access & Security” tab
    • Key Pair: demo_key (so the pem file you downloaded will work for ssh later)
  4. Select the “Networking” tab
    • Selected Networks: private
  5. Select “Launch”
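For reference, the same launch from the CLI looks roughly like this; grab the private network’s ID from neutron net-list first:

root@osaio:(keystone_demo)$ neutron net-list
root@osaio:(keystone_demo)$ nova boot --flavor m1.nano --image cirros --key-name demo_key --nic net-id={private network ID} CTest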

If you are doing this on physical hardware, you should be able to access the instance’s console now. If you’re in VirtualBox, you’ll need the Workstation VM on the 10.20.0.0 NAT Network with a GUI/web browser to access the console. Don’t sweat it if you can’t access the console because in the next step you’ll add a floating IP and will be able to access it that way.

Associate the Floating IP

  1. Select “Compute/Instances”
  2. Under “Actions” select “Associate floating IP”
  3. Select the “+” next to the IP Address
  4. Select “allocate IP”
  5. Select “Associate”
    You should see both public and private IP addresses listed in the “IP Address” column for your instance.
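From the CLI, the same allocate-and-associate looks roughly like this (the floating IP you actually receive will vary):

root@osaio:(keystone_demo)$ nova floating-ip-create public
root@osaio:(keystone_demo)$ nova floating-ip-associate CTest 10.20.0.101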

Now Test Floating IP

VirtualBox

Since you set up your All-In-One install in a NAT Network, you need to add new rules to allow access from your workstation to the new instance.

  1. Select “VirtualBox/Preferences/Networking”
  2. Edit “PubAIO”
  3. Select “Port Forwarding”
Name Protocol Host IP Host Port Guest IP Guest Port
SSH INST1 TCP 127.0.0.1 3022 10.20.0.101 22

Note: This assumes that the floating IP you’re issued for your VM is “10.20.0.101” otherwise, fill it in with the correct value.

  1. ping the floating IP (From osaio console)
    [root@osaio ~]# ping 10.20.0.101
    PING 10.20.0.101 (10.20.0.101) 56(84) bytes of data.
    64 bytes from 10.20.0.101: icmp_seq=1 ttl=63 time=1.25 ms
    64 bytes from 10.20.0.101: icmp_seq=2 ttl=63 time=0.503 ms
    64 bytes from 10.20.0.101: icmp_seq=3 ttl=63 time=0.904 ms
    
  2. ssh into instance (From your workstation using the VirtualBox floating IP)
    user@workstation ~$ chmod 600 demo_key.pem
    user@workstation ~$ ssh -i demo_key.pem -p 3022 cirros@localhost
    

    Note: If the pem key isn’t working, you can login with user: cirros and pass: cubswin:).

Caveats

Rebooting

  • It can take a while for all services to startup after rebooting
  • If an instance starts before everything is up it may not have connectivity (Try shutting down and restarting the instance from the OpenStack interface)

Bonded Interfaces

  • Don’t
  • If you have them, break them
  • I gave myself a serious migraine trying to get this working
  • It’s not worth it 😉

  [1] Cinder provides block storage to your instances… OK, before you go looking that up on dictionary.com, think of it as an external drive. Your OS storage is handled by Glance (the image service), so Cinder isn’t required to run your instances. Also, in a default All-in-One install, it’s actually using a sparse file attached through a loopback device, so it’s going to be hideously slow.

  [2] Assuming the interface connected to the external network is eth0. Otherwise, replace eth0 with the correct interface name.