Separate VM network in a multi-node configuration with FlatManager

Asked by Davor Cubranic

I'm setting up a two-machine cloud where one machine is the controller, and the other is the compute node. Each machine has two NICs, one of which is publicly accessible (let's call it 99.99.99.x, on eth0), while the other is on the cloud's private subnet (e.g., 192.168.1.x, on eth1). I want to use this private subnet as the management network, and have another separate network for the VMs (e.g., 10.x).

This setup is similar to that described in question 154185, but I don't want to run VlanManager. I tried to use FlatManager, but cannot figure out how to configure the bridge to make 10.x accessible from the host. (I was able to start an instance, but now have no way to even ping it.) Is this doable with the FlatManager, and if so, what should go into /etc/network/interfaces and/or iptables?
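For context, a FlatManager bridge is just an ordinary Linux bridge that you create yourself; one way to declare it in /etc/network/interfaces is sketched below. The br100 name, the 10.0.1.1/27 address, and enslaving eth1 are assumptions matching the subnets described above, not configuration that nova writes for you:

```
# Sketch only: bridge carrying the VM network (names/addresses assumed).
# Note that enslaving eth1 means the 192.168.1.x management address
# would have to move onto the bridge; adjust to your own layout.
auto br100
iface br100 inet static
    address 10.0.1.1
    netmask 255.255.255.224
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
```

With plain FlatManager the bridge must exist on every node before nova-network starts, which is why the question of what goes into /etc/network/interfaces comes up at all.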

Or do I have to use FlatDhcpManager? Does the additional network configuration in this case reside entirely in nova.conf, and then in /etc/network/interfaces I only set up eth0 and eth1 for their respective subnets (99.x, and 192.x)? The docs talk about setting this up when there is an unused NIC on the machine (by assigning it to "--flat-interface"), but I don't have one.
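For what it's worth, with FlatDHCPManager the flat network is described by nova flags and nova-network creates the bridge itself, so /etc/network/interfaces only needs eth0 and eth1. A sketch of the relevant nova.conf entries, with values guessed from the subnets above (flag names from this era of nova; double-check against your release):

```
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=10.0.1.0/27
--flat_network_dhcp_start=10.0.1.2
--public_interface=eth0
```

Here --flat_interface names the NIC that nova-network enslaves into br100, which is how the setup can work without a spare, unused NIC.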

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: Davor Cubranic
Revision history for this message
Launchpad Janitor (janitor) said :
#1

This question was expired because it remained in the 'Open' state without activity for the last 15 days.

Davor Cubranic (cubranic) said :
#2

Why couldn't I even ping a VM instance running on the controller host? Instance 10.0.1.2 is shown as running, but tracepath ends at 10.0.1.1 (the controller host). In the routing table, I have:

Destination  Gateway   Genmask          Flags  Metric  Ref  Use  Iface
10.0.1.0     0.0.0.0   255.255.255.224  U      0       0    0    br100

Interface br100:

br100 Link encap:Ethernet HWaddr 00:19:b9:cb:29:75
          inet addr:10.0.1.1 Bcast:10.0.1.31 Mask:255.255.255.224
          inet6 addr: fe80::101b:8aff:feb5:c326/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:783 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2528 (2.5 KB) TX bytes:67026 (67.0 KB)
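A quick way to sanity-check the /27 addressing shown in the ifconfig output above is Python's ipaddress module, which confirms that 10.0.1.1, the instance at 10.0.1.2, and the broadcast address 10.0.1.31 all fall inside the same 30-host block:

```python
import ipaddress

# The fixed range implied by Mask:255.255.255.224 on br100.
net = ipaddress.ip_network("10.0.1.0/27")

print(net.netmask)             # 255.255.255.224
print(net.broadcast_address)   # 10.0.1.31 (matches Bcast: above)

hosts = list(net.hosts())      # 30 usable addresses
print(hosts[0], hosts[-1])     # 10.0.1.1 10.0.1.30
```

So the routing table and bridge addressing are internally consistent, which points the finger at the instance never obtaining its address rather than at the route.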

Davor Cubranic (cubranic) said :
#3

My problem was essentially solved in answer 161446: I switched to FlatDHCPManager, and once I had flat_network_dhcp_start set properly, the instance was issued a valid address and I could SSH into it.

An important debugging step: don't rely on "euca-describe-instances" as an indication of an instance's status; look at its console log instead (euca-get-console-output).