This document is about six months old. It is a revised version of https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst, adapted from my own experiments. After the setup was finished, virtual machines could not obtain an IP address. While building a Grizzly deployment today I found a fix: on the network node, grant the quantum user sudo rights:
visudo
Add:
# Modify the quantum user
quantum ALL=NOPASSWD: ALL
I have not tried this on Folsom yet; if you are running Folsom, give it a try.
2. Getting Ready
2.1. Preparing Ubuntu 12.10
· After you install Ubuntu 12.10 Server 64-bit, go to sudo mode and don't leave it until the end of this guide:
sudo su
· Update your system:
apt-get update
apt-get upgrade
apt-get dist-upgrade
2.2. Networking
· Only one NIC on the controller node needs internet access:
vim /etc/network/interfaces
auto lo
iface lo inet loopback
# The primary network interface
auto p51p1
iface p51p1 inet static
address 10.10.10.14
netmask 255.255.255.0
network 10.10.10.0
broadcast 10.10.10.255
gateway 10.10.10.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 8.8.8.8
dns-search ourfuture.cn
auto p51p2
iface p51p2 inet static
address 192.168.31.14
netmask 255.255.255.0
/etc/init.d/networking restart
2.3. MySQL & RabbitMQ
· Install MySQL:
apt-get install mysql-server python-mysqldb
· Configure mysql to accept all incoming requests:
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
Allow MySQL to accept external connections:
Commands:
mysql -uroot -ppassword
grant all privileges on *.* to root@"%" identified by "password" with grant option;
FLUSH PRIVILEGES;
Remove the anonymous (empty) users:
Commands:
use mysql;
delete from user where user="";
quit;
Restart the service:
Command:
service mysql restart
· Install RabbitMQ:
apt-get install rabbitmq-server
2.4. Node synchronization
· Install the ntp service:
apt-get install ntp
· Configure the NTP server to synchronize between your compute nodes and the controller node:
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
service ntp restart
2.5. Others
· Install other services:
apt-get install vlan bridge-utils
· Enable IP_Forwarding:
vim /etc/sysctl.conf
# Uncomment net.ipv4.ip_forward=1; to save you from rebooting, do this:
sysctl net.ipv4.ip_forward=1
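If you prefer a one-liner, the following (an optional convenience, not part of the original guide) uncomments the setting in /etc/sysctl.conf and reloads it immediately:
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -p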
2.6. Keystone
· Start by installing the keystone packages:
apt-get install keystone
· Create a new MySQL database for keystone:
mysql -uroot -ppassword
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
quit;
· Adapt the connection attribute in the /etc/keystone/keystone.conf to the new database:
vim /etc/keystone/keystone.conf
connection = mysql://keystoneUser:keystonePass@192.168.31.14/keystone
· Restart the identity service then synchronize the database:
service keystone restart
keystone-manage db_sync
· Fill up the keystone database using the two scripts available in the Scripts folder of this git repository. Beware that you MUST comment out every part related to Quantum if you don't intend to install it, otherwise you will have trouble with your dashboard later:
#Modify the HOST_IP and EXT_HOST_IP variables before executing the scripts
chmod +x keystone_basic.sh
chmod +x keystone_endpoints_basic.sh
./keystone_basic.sh
./keystone_endpoints_basic.sh
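If you want a feel for what those scripts do before running them, here is a rough, illustrative sketch of the kind of keystone calls they issue (the token, names and placeholder ids below are assumptions; rely on the scripts themselves for the real commands):
# the scripts authenticate against keystone with the service token, e.g.:
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://192.168.31.14:35357/v2.0/
# keystone_basic.sh creates tenants, users and roles along these lines:
keystone tenant-create --name admin
keystone user-create --name admin --pass admin_pass --email admin@domain.com
keystone role-create --name admin
keystone user-role-add --user-id $put_admin_user_id --role-id $put_admin_role_id --tenant-id $put_admin_tenant_id
# keystone_endpoints_basic.sh registers each service and its endpoints, e.g. for Glance:
keystone service-create --name glance --type image --description 'OpenStack Image Service'
keystone endpoint-create --region RegionOne --service-id $put_glance_service_id --publicurl http://10.10.10.14:9292/v1 --adminurl http://192.168.31.14:9292/v1 --internalurl http://192.168.31.14:9292/v1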
· Create a simple credential file and load it so you won't be bothered later:
nano creds
#Paste the following:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://10.10.10.14:5000/v2.0/"
# Load it:
source creds
· To test Keystone, we use a simple curl request:
apt-get install curl openssl
curl http://10.10.10.14:35357/v2.0/endpoints -H 'x-auth-token: ADMIN'
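Alternatively, once the creds file is loaded you can query Keystone through its own CLI (standard python-keystoneclient commands); both should return data without errors:
keystone user-list
keystone endpoint-list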
2.7. Glance
· After installing Keystone, we continue with installing the image storage service, a.k.a. Glance:
apt-get install glance
· Create a new MySQL database for Glance:
mysql -uroot -ppassword
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY'glancePass';
quit;
· Update /etc/glance/glance-api-paste.ini with:
vim /etc/glance/glance-api-paste.ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.31.14
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass
· Update the /etc/glance/glance-registry-paste.ini with:
vim /etc/glance/glance-registry-paste.ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.31.14
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass
· Update /etc/glance/glance-api.conf with:
vim /etc/glance/glance-api.conf
sql_connection = mysql://glanceUser:glancePass@192.168.31.14/glance
And:
[paste_deploy]
flavor = keystone
· Update the /etc/glance/glance-registry.conf with:
vim /etc/glance/glance-registry.conf
sql_connection = mysql://glanceUser:glancePass@192.168.31.14/glance
And:
[paste_deploy]
flavor = keystone
· Restart the glance-api and glance-registry services:
service glance-api restart; service glance-registry restart
· Synchronize the glance database:
glance-manage db_sync
· Restart the services again to take into account the new modifications:
service glance-registry restart; service glance-api restart
· To test that Glance is correctly installed, we upload a new image to the store. Start by downloading the cirros cloud image to your node and then uploading it to Glance:
mkdir images
cd images
wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 < cirros-0.3.0-x86_64-disk.img
· Now list the images to see what you have just uploaded:
glance image-list
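Optionally, inspect the record of the image you just uploaded (glance image-show accepts the image name or id in this client version, as far as I recall); the status should be active:
glance image-show myFirstImage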
2.8. Quantum
· Install the Quantum server:
apt-get install quantum-server quantum-plugin-openvswitch
· Create a database:
mysql -uroot -ppassword
CREATE DATABASE quantum;
GRANT ALL ON quantum.* TO 'quantumUser'@'%' IDENTIFIED BY 'quantumPass';
quit;
· Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
vim /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
#Under the database section
[DATABASE]
sql_connection = mysql://quantumUser:quantumPass@192.168.31.14/quantum
#Under the OVS section
[OVS]
tenant_network_type=vlan
network_vlan_ranges = physnet1:1:4094
· Edit /etc/quantum/api-paste.ini:
vim /etc/quantum/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.31.14
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
· Restart the quantum server:
service quantum-server restart
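As an optional sanity check, the quantum CLI (python-quantumclient; install it if it did not come in as a dependency) should now be able to reach the server using the creds file from section 2.6; an empty list simply means no networks exist yet:
source creds
quantum net-list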
2.9. Nova
· Start by installing nova components:
apt-get install nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy
· Prepare a MySQL database for Nova:
mysql -uroot -ppassword
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
quit;
· Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:
vim /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.31.14
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
signing_dirname = /tmp/keystone-signing-nova
· Modify the /etc/nova/nova.conf like this:
vim /etc/nova/nova.conf
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
scheduler_driver=nova.scheduler.simple.SimpleScheduler
s3_host=192.168.31.14
ec2_host=192.168.31.14
ec2_dmz_host=192.168.31.14
rabbit_host=192.168.31.14
cc_host=192.168.31.14
dmz_cidr=169.254.169.254/32
metadata_host=192.168.31.14
metadata_listen=0.0.0.0
nova_url=http://192.168.31.14:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@192.168.31.14/nova
ec2_url=http://192.168.31.14:8773/services/Cloud
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
# Auth
use_deprecated_auth=false
auth_strategy=keystone
keystone_ec2_url=http://192.168.31.14:5000/v2.0/ec2tokens
# Imaging service
glance_api_servers=192.168.31.14:9292
image_service=nova.image.glance.GlanceImageService
# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://10.10.10.14:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.10.10.14
vncserver_listen=0.0.0.0
# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://192.168.31.14:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://192.168.31.14:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Compute #
compute_driver=libvirt.LibvirtDriver
# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
· Synchronize your database:
nova-manage db sync
· Restart nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
· Check for the smiling faces on nova-* services to confirm your installation:
nova-manage service list
2.10. Cinder
· Install the required packages:
apt-get install cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms
· Configure the iscsi services:
sed -i 's/false/true/g' /etc/default/iscsitarget
· Restart the services:
service iscsitarget start
service open-iscsi start
· Prepare a MySQL database for Cinder:
mysql -uroot -ppassword
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
quit;
· Configure /etc/cinder/api-paste.ini like the following:
vim /etc/cinder/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 10.10.10.14
service_port = 5000
auth_host = 192.168.31.14
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = service_pass
· Edit the /etc/cinder/cinder.conf to:
vim /etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@192.168.31.14/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900
· Then, synchronize your database:
cinder-manage db sync
· Finally, don't forget to create a volume group and name it cinder-volumes:
dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
losetup /dev/loop2 cinder-volumes
fdisk /dev/loop2
#Type in the following:
n
p
1
ENTER
ENTER
t
8e
w
· Proceed to create the physical volume then the volume group:
pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2
Note: Beware that this volume group gets lost after a system reboot; see the sketch below for one way to bring it back automatically.
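A simple approach is to re-attach the loop device at boot, for example from /etc/rc.local. This is only a sketch and assumes the cinder-volumes file was created in /root; adjust the path to wherever you ran the dd command:
# add to /etc/rc.local, before the final "exit 0"
losetup /dev/loop2 /root/cinder-volumes
vgchange -ay cinder-volumes
service cinder-volume restart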
· Restart the cinder services:
service cinder-volume restart
service cinder-api restart
2.11. Horizon
· To install Horizon, proceed like this:
apt-get install openstack-dashboard memcached
· If you don't like the OpenStack Ubuntu theme, you can disable it and go back to the default look:
vim /etc/openstack-dashboard/local_settings.py
#Comment these lines
#Enable the Ubuntu theme if it is present.
#try:
# from ubuntu_theme import *
#except ImportError:
# pass
· Reload Apache and memcached:
service apache2 restart; service memcached restart
You can now access your OpenStack dashboard at http://10.10.10.14/horizon with the credentials admin:admin_pass.
Note: A reboot might be needed for a successful login.
3. Network Node
3.1. Preparing the Node
· Update your system:
apt-get update
apt-get upgrade
apt-get dist-upgrade
· Install the ntp service:
apt-get install ntp
· Configure the NTP server to follow the controller node:
sed -i 's/server ntp.ubuntu.com/server 192.168.31.14/g' /etc/ntp.conf
service ntp restart
· Install other services:
apt-get install vlan bridge-utils
· Enable IP_Forwarding:
vim /etc/sysctl.conf
# Uncomment net.ipv4.ip_forward=1; to save you from rebooting, perform the following:
sysctl net.ipv4.ip_forward=1
3.2. Networking
· 2 NICs must be present:
vim /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
# VM internet Access
auto p51p1
iface p51p1 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
# OpenStack management & VM conf
auto p51p2
iface p51p2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
auto br-eth1
iface br-eth1 inet static
address 192.168.31.14
netmask 255.255.255.0
auto br-ex
iface br-ex inet static
address 10.10.10.14
netmask 255.255.255.0
network 10.10.10.0
broadcast 10.10.10.255
gateway 10.10.10.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 8.8.8.8
dns-search future.cn
/etc/init.d/networking restart
3.4. OpenVSwitch
· Install the openVSwitch:
apt-get install openvswitch-switch openvswitch-datapath-dkms
· Create the bridges:
#br-int will be used for VM integration
ovs-vsctl add-br br-int
#br-eth1 will be used for VM configuration
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 p51p2
#br-ex is used to make the VM accessible from the internet
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex p51p1
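You can check the resulting bridge and port layout at any time with the standard Open vSwitch command:
ovs-vsctl show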
3.5. Quantum
· Install the Quantum openvswitch agent, l3 agent and dhcp agent:
apt-get install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent
· Edit /etc/quantum/api-paste.ini:
vim /etc/quantum/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.31.14
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
· Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
vim /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
#Under the database section
[DATABASE]
sql_connection = mysql://quantumUser:quantumPass@192.168.31.14/quantum
#Under the OVS section
[OVS]
tenant_network_type=vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1
· In addition, update the /etc/quantum/l3_agent.ini:
vim /etc/quantum/l3_agent.ini
auth_url = http://192.168.31.14:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
metadata_ip = 10.10.10.14
metadata_port = 8775
· Make sure that your RabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:
vim /etc/quantum/quantum.conf
rabbit_host = 192.168.31.14
· Restart all the services:
service quantum-plugin-openvswitch-agent restart
service quantum-dhcp-agent restart
service quantum-l3-agent restart
4. Compute Node
4.1. Preparing the Node
· Update your system:
apt-get update
apt-get upgrade
apt-get dist-upgrade
· Install the ntp service:
apt-get install ntp
· Configure the NTP server to follow the controller node:
sed -i 's/server ntp.ubuntu.com/server 192.168.31.14/g' /etc/ntp.conf
service ntp restart
· Install other services:
apt-get install vlan bridge-utils
· Enable IP_Forwarding:
nano /etc/sysctl.conf
# Uncomment net.ipv4.ip_forward=1; to save you from rebooting, perform the following:
sysctl net.ipv4.ip_forward=1
4.2. Networking
· Perform the following:
vim /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto p51p1
iface p51p1 inet static
address 10.10.10.14
netmask 255.255.255.0
network 10.10.10.0
broadcast 10.10.10.255
gateway 10.10.10.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 8.8.8.8
dns-search ourfuture.cn
# OpenStack management & VM conf
auto p51p2
iface p51p2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
auto br-eth1
iface br-eth1 inet static
address 192.168.31.14
netmask 255.255.255.0
/etc/init.d/networking restart
4.3. KVM
· Make sure that your hardware enables virtualization:
apt-get install cpu-checker
kvm-ok
· Normally you would get a good response. Now, move on to install KVM and configure it:
apt-get install kvm libvirt-bin pm-utils
· Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:
vim /etc/libvirt/qemu.conf
cgroup_device_acl = [
"/dev/null", "/dev/full","/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm","/dev/kqemu",
"/dev/rtc","/dev/hpet","/dev/net/tun"
]
· Delete the default virtual bridge:
virsh net-destroy default
virsh net-undefine default
· Enable live migration by updating the /etc/libvirt/libvirtd.conf file:
vim /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
· Edit the libvirtd_opts variable in the /etc/init/libvirt-bin.conf file:
vim /etc/init/libvirt-bin.conf
env libvirtd_opts="-d -l"
· Edit the /etc/default/libvirt-bin file:
vim /etc/default/libvirt-bin
libvirtd_opts="-d -l"
· Restart the libvirt service to load the new values:
service libvirt-bin restart
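To verify that libvirtd is now listening for TCP connections (needed for live migration), you can check with netstat; port 16509 is the libvirt default, assuming nothing else was changed in libvirtd.conf:
netstat -lntp | grep libvirtd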
4.4. OpenVSwitch
· Install the openVSwitch:
apt-get install openvswitch-switch openvswitch-datapath-dkms
· Create the bridges:
#br-int will be used for VM integration
ovs-vsctl add-br br-int
#br-eth1 will be used for VM configuration
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 p51p2
4.5. Quantum
· Install the Quantum openvswitch agent:
apt-get install quantum-plugin-openvswitch-agent
· Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:
#Under the database section
[DATABASE]
sql_connection = mysql://quantumUser:quantumPass@192.168.31.14/quantum
#Under the OVS section
[OVS]
tenant_network_type=vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1
· Make sure that your RabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:
rabbit_host = 192.168.31.14
· Restart all the services:
service quantum-plugin-openvswitch-agent restart
4.6. Nova
· Install nova's required components for the compute node:
apt-get install nova-compute-kvm
· Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:
vim /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.31.14
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
signing_dirname = /tmp/keystone-signing-nova
· Edit the /etc/nova/nova-compute.conf file:
vim /etc/nova/nova-compute.conf
[DEFAULT]
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
· Modify the /etc/nova/nova.conf like this:
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
scheduler_driver=nova.scheduler.simple.SimpleScheduler
s3_host=192.168.31.14
ec2_host=192.168.31.14
ec2_dmz_host=192.168.31.14
rabbit_host=192.168.31.14
cc_host=192.168.31.14
dmz_cidr=169.254.169.254/32
metadata_host=192.168.31.14
metadata_listen=0.0.0.0
nova_url=http://192.168.31.14:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@192.168.31.14/nova
ec2_url=http://192.168.31.14:8773/services/Cloud
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
# Auth
use_deprecated_auth=false
auth_strategy=keystone
keystone_ec2_url=http://192.168.31.14:5000/v2.0/ec2tokens
# Imaging service
glance_api_servers=192.168.31.14:9292
image_service=nova.image.glance.GlanceImageService
# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://10.10.10.14:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=100.10.10.53
vncserver_listen=0.0.0.0
# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://192.168.31.14:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://192.168.31.14:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Compute #
compute_driver=libvirt.LibvirtDriver
# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
· Restart nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
· Check for the smiling faces on nova-* services to confirm your installation:
nova-manage service list
5. Your First VM
To start your first VM, we first need to create a new tenant, a user, and internal and external networks. SSH to your controller node and perform the following.
· Create a new tenant, a user, and add the user to the Member role:
keystone tenant-create --name project_one
keystone user-create --name=user_one --pass=user_one --tenant-id $put_id_of_project_one --email=user_one@domain.com
keystone role-list
keystone user-role-add --tenant-id $put_id_of_project_one --user-id $put_id_of_user_one --role-id $put_id_of_member_role
· Create a new network for the tenant:
quantum net-create --tenant-id $put_id_of_project_one net_proj_one --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1024
quantum subnet-create --tenant-id $put_id_of_project_one net_proj_one 50.50.1.0/24
quantum router-create --tenant-id $put_id_of_project_one router_proj_one
quantum router-interface-add $put_router_proj_one_id_here $put_subnet_id_here
· Create your external network with the tenant id belonging to the service tenant (use keystone tenant-list to get the appropriate id):
quantum net-create --tenant-id $put_id_of_service_tenant ext_net --router:external=True
quantum subnet-create --tenant-id $put_id_of_service_tenant --allocation-pool start=10.10.10.102,end=10.10.10.126 --gateway 10.10.10.1 ext_net 10.10.10.100/24 --enable_dhcp=False
quantum router-gateway-set $put_router_proj_one_id_here $put_id_of_ext_net_here
VMs gain access to the metadata server locally present in the controller node via the external network. To create that necessary connection, perform the following:
· Get the IP address of router proj one:
quantum port-list -- --device_id $put_router_proj_one_id_here
route add -net 50.50.1.0/24 gw $router_proj_one_IP
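Nothing has been booted yet at this point. A minimal sketch of launching an instance as user_one on the tenant network could look like the following; the flavor name and the net-id placeholder are assumptions, so adjust them to your environment:
# switch to the new user's credentials (same endpoints as the admin creds file)
export OS_TENANT_NAME=project_one
export OS_USERNAME=user_one
export OS_PASSWORD=user_one
export OS_AUTH_URL="http://10.10.10.14:5000/v2.0/"
# boot a VM from the cirros image uploaded in section 2.7 and check its state
nova boot --image myFirstImage --flavor m1.tiny --nic net-id=$put_id_of_net_proj_one my_first_vm
nova list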
Unfortunately, you can't use the dashboard to assign floating IPs to VMs, so you need to get your hands a bit dirty to give your VM a public IP.
· Start by allocating a floating IP to the project one tenant:
quantum floatingip-create --tenant-id $put_id_of_project_one ext_net
quantum port-list
· Associate the floating IP to your VM:
quantum floatingip-associate $put_id_floating_ip $put_id_vm_port
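To confirm the association, list the instances again; the floating IP should show up next to the fixed address:
nova list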
That's it! You can now ping your VM and start administering your OpenStack.
I hope you enjoyed this guide; if you have any feedback, please don't hesitate to share it.