Our production environment (a few dozen servers, a complex network topology, several hundred VMs) ran Grizzly quantum + OVS 1.4 + GRE, and hit several problems:
a. Network latency: LAN ping latency was around 20 ms with no load and over 1000 ms under load, making traffic between services on the LAN essentially unusable.
b. Upgrading OVS to 1.11 fixed the latency, but once nova and quantum were deployed across multiple nodes we hit network storms that again took the network down.
Clearly quantum + OVS + GRE is not yet a mature combination, and I do not recommend it for production (sparing you the ten-thousand-word tale of woe).
Searching the community docs, the alternative to the quantum OVS plugin is the quantum linux bridge plugin, but no public document described a working configuration. After two months of trial and error I finally got it running, so I am writing it down. I plan to test it for a while and, if it holds up, replace the production quantum + OVS + GRE deployment.
—————————————————————————————————————————
0. Network environment
External network: eth0 192.168.134.0/23 (floating IPs)
Management network: eth1 192.168.22.0/24, used for MySQL, AMQP, the APIs, and the VLAN traffic between VMs
1. NIC setup
Control node & network node:
eth0 192.168.134.8
eth1 192.168.22.8
nova-compute node:
eth0 192.168.134.7
eth1 192.168.22.7
Edit /etc/network/interfaces:
#———————————start——————————-
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.134.8
gateway 192.168.135.254
netmask 255.255.254.0
dns-nameservers 192.168.0.201
auto eth1
iface eth1 inet static
address 192.168.22.8
netmask 255.255.255.0
#———————————--end——————————-
2. Deployment layout
The control node runs everything: keystone, glance, nova, quantum, cinder, and so on, much like a single-node install.
Each compute node runs only the quantum agent and nova-compute.
—————————————————————————————————————————--
The following steps apply to both the control node and the compute nodes:
3. Package sources
cat > /etc/apt/sources.list.d/grizzly.list <<_GEEK_
deb
_GEEK_
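The deb line above lost its URL to formatting. For Grizzly on Ubuntu 12.04 the Ubuntu Cloud Archive line normally reads as follows (verify against Canonical's cloud-archive documentation before use):

```
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
```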
apt-get update
apt-get upgrade
apt-get install ubuntu-cloud-keyring
4. System tuning
sed -i 's/#net.ipv4.ip_forward = 1/net.ipv4.ip_forward = 1/g' /etc/sysctl.conf
sysctl -p
PROFILE_CONF=${PROFILE_CONF:-"/etc/profile"}
cat <<PROFILE >>$PROFILE_CONF
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123qwe
export OS_AUTH_URL=http://$MANAGEMENT_NODE_IP:5000/v2.0/
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=www.domain.com
export SERVICE_ENDPOINT=http://$MANAGEMENT_NODE_IP:35357/v2.0/
PROFILE
source /etc/profile
5. KVM and tooling
apt-get install -y kvm libvirt-bin pm-utils
Edit /etc/libvirt/qemu.conf:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet","/dev/net/tun"
]
Remove libvirt's default bridge:
virsh net-destroy default
virsh net-undefine default
Edit /etc/libvirt/libvirtd.conf:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
Edit /etc/init/libvirt-bin.conf:
env libvirtd_opts="-d -l"
Edit /etc/default/libvirt-bin:
libvirtd_opts="-d -l"
Restart the services:
service dbus restart && service libvirt-bin restart
———————————————————————————————————————————
The following steps are done on the control node only:
6. Database, MQ, and NTP installation and configuration
apt-get -y --force-yes install mysql-client-5.5 \
    mysql-client-core-5.5 mysql-server-core-5.5 mysql-server-5.5 \
    mysql-server rabbitmq-server mysql-client
mkdir /etc/mysql
cp my.cnf /etc/mysql/
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
sed -i '44 i skip-name-resolve' /etc/mysql/my.cnf
/etc/init.d/mysql restart
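The second sed above inserts skip-name-resolve at line 44, which assumes a particular my.cnf layout. A sketch of the same edits on a scratch file, anchored to the [mysqld] section header instead of a hard-coded line number (the sample my.cnf content here is minimal and illustrative):

```shell
# Demonstrate the my.cnf edits on a scratch file rather than the live config.
tmp=$(mktemp)
printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$tmp"
# Same substitution as above: listen on all interfaces.
sed -i 's/127.0.0.1/0.0.0.0/g' "$tmp"
# Equivalent of "44 i skip-name-resolve", but anchored to the [mysqld]
# header so it survives my.cnf layout changes.
sed -i '/^\[mysqld\]/a skip-name-resolve' "$tmp"
cat "$tmp"
rm -f "$tmp"
```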
MYSQL_PASSWD=${MYSQL_PASSWD:-"root"}
for DB_NAME in keystone glance quantum cinder nova
do
    mysql -uroot -p$MYSQL_PASSWD -e "CREATE DATABASE $DB_NAME;"
    mysql -uroot -p$MYSQL_PASSWD -e "GRANT ALL ON $DB_NAME.* TO '$DB_NAME'@'%' IDENTIFIED BY '$DB_NAME';"
done
mysql -uroot -p$MYSQL_PASSWD -e "grant all on *.* to 'root'@'%' identified by 'root';"
mysql -uroot -p$MYSQL_PASSWD -e "flush privileges;"
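The loop grants each service a database whose user name and password both equal the database name (keystone/keystone, nova/nova, and so on); the connection strings elided from the config files later on follow from this. A dry-run sketch that prints the SQL instead of executing it (sql here is a hypothetical stand-in for the mysql invocation):

```shell
# Stand-in for: mysql -uroot -p$MYSQL_PASSWD -e "$1"
sql() { echo "$1"; }

# Same loop as above, printing each statement it would run.
for DB_NAME in keystone glance quantum cinder nova; do
    sql "CREATE DATABASE $DB_NAME;"
    sql "GRANT ALL ON $DB_NAME.* TO '$DB_NAME'@'%' IDENTIFIED BY '$DB_NAME';"
done
```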
apt-get install ntp
sed -i 's/server
service ntp restart
7. Control node package install (the whole lot)
apt-get -y --force-yes install nova-api nova-common \
    nova-compute-kvm nova-conductor nova-novncproxy nova-scheduler \
    nova-spiceproxy nova-consoleauth keystone glance glance-api \
    glance-common glance-registry cinder-api cinder-common \
    cinder-scheduler cinder-volume quantum-common quantum-dhcp-agent \
    quantum-l3-agent quantum-metadata-agent quantum-plugin-linuxbridge* \
    quantum-server python-mysqldb python-cliff python-pyparsing \
    python-quantumclient python-cinderclient nova-ajax-console-proxy \
    nova-cert memcached libapache2-mod-wsgi openstack-dashboard
8. keystone configuration
8.1 Edit /etc/keystone/keystone.conf; leave everything else unchanged and modify only the following:
[DEFAULT]
admin_token =
bind_host = 0.0.0.0
[sql]
connection =
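The connection value was elided above; given the databases created in step 6 (user and password both equal to the database name), it would presumably take this shape, with 192.168.22.8 being the management IP used throughout this guide:

```ini
[sql]
connection = mysql://keystone:keystone@192.168.22.8/keystone
```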
8.2 Initialize keystone
rm -f /var/lib/keystone/keystone.db
/etc/init.d/keystone restart
keystone-manage db_sync
Create a file keystone-initialization.sh with the following content:
#——————————————————start————————————————-
#!/bin/bash
usage() {
cat <<UEND
$0 openstack grizzly keystone initialization script (ubuntu)
IP is the control node management network IP address
UEND
}
[ $# -ne 1 ] && usage && exit 1
KEYSTONE_IP="$1"
ADMIN_PASSWORD=${ADMIN_PASSWORD:-123qwe}
SERVICE_PASSWORD=${SERVICE_PASSWORD:-123qwe}
export SERVICE_TOKEN="www.domain.com"
export SERVICE_ENDPOINT="http://$KEYSTONE_IP:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
KEYSTONE_REGION=RegionOne
# If you need to expose the services on the public network, uncomment
# keystone_wlan_ip and swift_wlan_ip; on a multi-node architecture, set the
# variables below to the corresponding service IP addresses.
#KEYSTONE_WLAN_IP=$KEYSTONE_IP
SWIFT_IP=$KEYSTONE_IP
#SWIFT_WLAN_IP=$KEYSTONE_IP
COMPUTE_IP=$KEYSTONE_IP
EC2_IP=$KEYSTONE_IP
GLANCE_IP=$KEYSTONE_IP
VOLUME_IP=$KEYSTONE_IP
QUANTUM_IP=$KEYSTONE_IP
get_id () {
    echo `$@ | awk '/ id / { print $4 }'`
}
# Create Tenants
ADMIN_TENANT=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT tenant-create --name=admin)
SERVICE_TENANT=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT tenant-create --name=$SERVICE_TENANT_NAME)
DEMO_TENANT=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT tenant-create --name=demo)
INVIS_TENANT=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT tenant-create --name=invisible_to_admin)
# Create Users
ADMIN_USER=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-create --name=admin --pass="$ADMIN_PASSWORD")
DEMO_USER=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-create --name=demo --pass="$ADMIN_PASSWORD")
# Create Roles
ADMIN_ROLE=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT role-create --name=admin)
KEYSTONEADMIN_ROLE=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT role-create --name=KeystoneAdmin)
KEYSTONESERVICE_ROLE=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT role-create --name=KeystoneServiceAdmin)
# Add Roles to Users in Tenants
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $DEMO_TENANT
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT
# The Member role is used by Horizon and Swift
MEMBER_ROLE=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT role-create --name=Member)
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $DEMO_TENANT
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $INVIS_TENANT
# Configure service users/roles
NOVA_USER=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT)
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE
GLANCE_USER=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT)
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE
SWIFT_USER=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-create --name=swift --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT)
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $SWIFT_USER --role-id $ADMIN_ROLE
RESELLER_ROLE=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT role-create --name=ResellerAdmin)
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $RESELLER_ROLE
QUANTUM_USER=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT)
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE
CINDER_USER=$(get_id keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT)
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE
## Create Services
KEYSTONE_ID=$(keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT service-create --name keystone --type identity --description 'OpenStack Identity' | awk '/ id / { print $4 }')
COMPUTE_ID=$(keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT service-create --name=nova --type=compute --description='OpenStack Compute Service' | awk '/ id / { print $4 }')
CINDER_ID=$(keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT service-create --name=cinder --type=volume --description='OpenStack Volume Service' | awk '/ id / { print $4 }')
GLANCE_ID=$(keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT service-create --name=glance --type=image --description='OpenStack Image Service' | awk '/ id / { print $4 }')
SWIFT_ID=$(keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT service-create --name=swift --type=object-store --description='OpenStack Storage Service' | awk '/ id / { print $4 }')
EC2_ID=$(keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT service-create --name=ec2 --type=ec2 --description='OpenStack EC2 service' | awk '/ id / { print $4 }')
QUANTUM_ID=$(keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT service-create --name=quantum --type=network --description='OpenStack Networking service' | awk '/ id / { print $4 }')
## Create Endpoints
#identity
if [ "$KEYSTONE_WLAN_IP" != '' ];then
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$KEYSTONE_ID --publicurl
fi
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$KEYSTONE_ID --publicurl
#compute
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$COMPUTE_ID --publicurl
#volume
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$CINDER_ID --publicurl
#image
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$GLANCE_ID --publicurl
#object-store
if [ "$SWIFT_WLAN_IP" != '' ];then
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$SWIFT_ID --publicurl
fi
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$SWIFT_ID --publicurl
#ec2
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$EC2_ID --publicurl
#network
keystone --os-token $SERVICE_TOKEN --os-endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$QUANTUM_ID --publicurl
#———————————————-end————————————————--
chmod +x keystone-initialization.sh
Run it:
./keystone-initialization.sh 192.168.22.8
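keystone-initialization.sh leans entirely on get_id, which pulls field 4 out of the "| id | <uuid> |" row of keystone's table output. A self-contained illustration against a mocked keystone command (fake_keystone and its uuid are made up for the demo):

```shell
# get_id exactly as defined in the script above.
get_id () {
    echo `$@ | awk '/ id / { print $4 }'`
}

# Mock of a "keystone tenant-create" table; only the id row matters here.
fake_keystone () {
    printf '| enabled | True |\n'
    printf '| id | 0b42a53c1b954bfb8e59d5f38ef3f1f9 |\n'
    printf '| name | admin |\n'
}

# Captures the uuid from the mocked table.
ADMIN_TENANT=$(get_id fake_keystone)
echo "$ADMIN_TENANT"
```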
9. glance installation and configuration
9.1 /etc/glance/glance-api.conf
#———————————-start—————————————
[DEFAULT]
verbose = True
debug = True
default_store = file
bind_host = 0.0.0.0
bind_port = 9292
log_file = /var/log/glance/api.log
backlog = 4096
sql_connection =
sql_idle_timeout = 3600
workers = 10
registry_host = 192.168.22.8
registry_port = 9191
registry_client_protocol = http
notifier_strategy = rabbit
rabbit_host = 192.168.22.8
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_exchange = glance
rabbit_notification_topic = notifications
rabbit_durable_queues = False
filesystem_store_datadir = /var/lib/glance/images/
swift_store_auth_version = 2
swift_store_auth_address =
swift_store_user =
swift_store_key = 123qwe
swift_store_container = glance
swift_store_create_container_on_put = False
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 200
swift_enable_snet = False
s3_store_host = 127.0.0.1:8080/v1.0/
s3_store_access_key =
s3_store_secret_key =
s3_store_bucket = glance
s3_store_create_bucket_on_put = False
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8
delayed_delete = False
scrub_time = 43200
scrubber_datadir = /var/lib/glance/scrubber
image_cache_dir = /var/lib/glance/image-cache/
[keystone_authtoken]
auth_host = 192.168.22.8
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = 123qwe
[paste_deploy]
config_file = /etc/glance/glance-api-paste.ini
flavor = keystone
#———————————————end——————————————-
9.2 /etc/glance/glance-registry.conf
#—————————————--start————————————--
[DEFAULT]
verbose = True
debug = True
bind_host = 0.0.0.0
bind_port = 9191
log_file = /var/log/glance/registry.log
backlog = 4096
sql_connection =
sql_idle_timeout = 3600
api_limit_max = 1000
limit_param_default = 25
[keystone_authtoken]
auth_host = 192.168.22.8
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = 123qwe
[paste_deploy]
config_file = /etc/glance/glance-registry-paste.ini
flavor = keystone
#—————————————end———————————————————-
9.3 Initialize glance
rm -f /var/lib/glance/glance.sqlite
/etc/init.d/glance-api restart
/etc/init.d/glance-registry restart
glance-manage version_control 0
glance-manage db_sync
9.4 Import images (supply your own image files)
glance image-create --name='cirros0.3' --public --container-format=ovf --disk-format=qcow2 < cirros-0.3.0-x86_64-disk.img
glance image-create --name='winxp' --public --container-format=ovf --disk-format=qcow2
glance image-create --name='centos6.3' --public --container-format=ovf --disk-format=qcow2 < centos6.3_64.img
10. cinder installation and configuration
10.1 Edit /etc/cinder/api-paste.ini; leave the rest unchanged and modify the end of the file:
#———————————--start———————————
[filter:authtoken]
paste.filter_factory =
keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.168.22.8
service_port = 5000
auth_host = 192.168.22.8
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = 123qwe
signing_dir = /var/lib/cinder
#———————————end———————————--
10.2 /etc/cinder/cinder.conf
#————————————--start———————————-
[DEFAULT]
verbose = True
debug = True
iscsi_helper = tgtadm
auth_strategy = keystone
volume_group = cinder-volumes
volume_name_template = volume-%s
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
rabbit_host = 192.168.22.8
rabbit_password = guest
rpc_backend = cinder.openstack.common.rpc.impl_kombu
sql_connection =
osapi_volume_extension = cinder.api.contrib.standard_extensions
#————————————end————————————-
10.3 Initialize cinder
cinder-manage db sync
cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
11. nova installation and configuration
11.1 Edit /etc/nova/api-paste.ini; leave the rest unchanged and modify the end of the file:
#————————————start———————————-
[filter:authtoken]
paste.filter_factory =
keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.22.8
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = 123qwe
signing_dir = /var/lib/nova/keystone-signing
auth_version = v2.0
#————————————--end——————————--
11.2 /etc/nova/nova.conf
#—————————————-start——————————-
[DEFAULT]
debug = False
verbose = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lock/nova
rootwrap_config = /etc/nova/rootwrap.conf
dhcpbridge = /usr/bin/nova-dhcpbridge
#Aggregate
#scheduler_default_filters=AggregateInstanceExtraSp
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
volume_api_class = nova.volume.cinder.API
sql_connection =
instance_name_template = instance-%08x
api_paste_config = /etc/nova/api-paste.ini
allow_resize_to_same_host = True
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host = 192.168.22.8
s3_host = 192.168.22.8
rabbit_host = 192.168.22.8
rabbit_password = guest
image_service = nova.image.glance.GlanceImageService
glance_api_servers = 192.168.22.8:9292
network_api_class = nova.network.quantumv2.api.API
quantum_url =
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = 123qwe
quantum_admin_auth_url =
#linux-bridge-specific settings; not needed with openvswitch
quantum_use_dhcp=true
network_manager=nova.network.quantum.manager.QuantumManager
service_quantum_metadata_proxy = True
metadata_host = 192.168.22.8
metadata_listen = 0.0.0.0
forward_bridge_interface = all
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
novncproxy_base_url =
#On other nova-compute nodes, change the IP below to the node's own public IP
vncserver_proxyclient_address =
vncserver_listen = 0.0.0.0
vnc_enabled = True
auth_strategy = keystone
[keystone_authtoken]
auth_host = 192.168.22.8
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = 123qwe
signing_dir = /tmp/keystone-signing-nova
#—————————————-end——————————--
11.3 /etc/nova/nova-compute.conf
#—————————————-start——————————--
[DEFAULT]
connection_type=libvirt
libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver
linuxnet_interface_driver=nova.network.linux_net.QuantumLinuxBridgeInterfaceDriver
#—————————————-end——————————--
11.4 Initialize nova
nova-manage db sync
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
Check nova service status:
nova-manage service list
Add default security group rules (these open all TCP, UDP, and ICMP traffic):
nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
nova secgroup-add-rule default udp 1 65535 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
12. quantum installation and configuration
12.1 /etc/quantum/l3_agent.ini
#————————————--start————————————
[DEFAULT]
debug = True
verbose = True
interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver
use_namespaces = True
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = 123qwe
auth_url =
l3_agent_manager = quantum.agent.l3_agent.L3NATAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
# Name of bridge used for external network traffic. This should be set to an
# empty value for the linux bridge
#external_network_bridge = br-ex
external_network_bridge =
enable_multi_host = True
#—————————————--end————————————-
12.2 /etc/quantum/dhcp_agent.ini
#————————————--start————————————
[DEFAULT]
debug = True
verbose = True
interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
use_namespaces = True
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = 123qwe
auth_url =
dhcp_agent_manager = quantum.agent.dhcp_agent.DhcpAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
state_path = /var/lib/quantum
enable_isolated_metadata = False
enable_multi_host = True
#—————————————--end————————————-
12.3 /etc/quantum/metadata_agent.ini
#————————————--start————————————
[DEFAULT]
debug = True
auth_url =
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = 123qwe
nova_metadata_ip = 192.168.22.8
nova_metadata_port = 8775
#—————————————--end————————————-
12.4 /etc/quantum/quantum.conf
#————————————--start————————————
[DEFAULT]
debug = True
verbose = True
state_path = /var/lib/quantum
lock_path = $state_path/lock
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = quantum.plugins.linuxbridge.lb_quantum_plugin.LinuxBridgePluginV2
api_paste_config = /etc/quantum/api-paste.ini
dhcp_lease_duration = 1200
control_exchange = quantum
rabbit_host = 192.168.22.8
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
notification_driver = quantum.openstack.common.notifier.rpc_notifier
default_notification_level = INFO
notification_topics = notifications
[QUOTAS]
quota_items = network,subnet,port
# default number of resources allowed per tenant; a negative value means unlimited
default_quota = -1
# number of networks allowed per tenant; a negative value means unlimited
quota_network = -1
# number of subnets allowed per tenant; a negative value means unlimited
quota_subnet = -1
# number of ports allowed per tenant; a negative value means unlimited
quota_port = 150
# number of security groups allowed per tenant; a negative value means unlimited
quota_security_group = -1
# number of security group rules allowed per tenant; a negative value means unlimited
quota_security_group_rule = -1
# default driver to use for quota checks
quota_driver = quantum.quota.ConfDriver
[DEFAULT_SERVICETYPE]
[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
[keystone_authtoken]
auth_host = 192.168.22.8
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = 123qwe
signing_dir = /var/lib/quantum/keystone-signing
#—————————————————end————————————--
12.5 /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini
#————————————start——————————
[VLANS]
tenant_network_type =
network_vlan_ranges =
[DATABASE]
sql_connection =
reconnect_interval = 2
[LINUX_BRIDGE]
physical_interface_mappings =
[AGENT]
polling_interval = 2
[SECURITYGROUP]
#firewall_driver = quantum.agent.linux.iptables_firewall.IptablesFirewallDriver
#You must use the driver below; with the iptables driver VM networking breaks
firewall_driver = quantum.agent.firewall.NoopFirewallDriver
#———————————end————————————--
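The values for tenant_network_type, network_vlan_ranges, and physical_interface_mappings were lost to formatting above. On a layout like this one (VLAN tenant networks over eth1, flat external network over eth0) the sections typically take the following shape; the physnet names and the VLAN range here are illustrative, not the author's actual values:

```ini
[VLANS]
tenant_network_type = vlan
network_vlan_ranges = physnet1,physnet2:1000:2999

[LINUX_BRIDGE]
# physnet1 -> eth0 (flat external traffic), physnet2 -> eth1 (VLAN tenant traffic)
physical_interface_mappings = physnet1:eth0,physnet2:eth1
```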
12.6 Edit /etc/default/quantum-server:
#QUANTUM_PLUGIN_CONFIG="/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini"
QUANTUM_PLUGIN_CONFIG="/etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini"
12.7 Initialize quantum
cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done
13. Configure the dashboard
mv /etc/openstack-dashboard/ubuntu_theme.py /etc/openstack-dashboard/ubuntu_theme.py.bak
sed -i "s/127.0.0.1/$MANAGEMENT_NODE_IP/g" /etc/memcached.conf
/etc/init.d/memcached restart
/etc/init.d/apache2 restart
14. Create the networks. The provider options are critical; get them wrong and nothing will have connectivity.
14.1 Create the external network in flat mode, bound to the first NIC (eth0):
quantum net-create external-network --provider:network_type
quantum subnet-create --name floating external-network --gateway 192.168.135.254 192.168.134.0/23 --disable-dhcp --allocation-pool start=192.168.134.70,end=192.168.134.90 --dns-nameserver 192.168.0.201
14.2 Create the admin tenant's LAN in vlan mode, bound to the second NIC (eth1); this effectively takes load off the first NIC:
quantum net-create admin-net01 --provider:network_type
quantum subnet-create --name admin-fix1 admin-net01 10.10.18.0/24
14.3 Create a router and attach the internal network to the external one:
quantum router-create admin-router
quantum router-gateway-set admin-router external-network
quantum router-interface-add admin-router admin-fix1
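For reference, with the provider options restored the two net-create calls would take roughly the following shape; the physnet placeholders and segmentation id are illustrative and must line up with physical_interface_mappings and network_vlan_ranges in linuxbridge_conf.ini:

```
quantum net-create external-network --router:external=true \
    --provider:network_type flat --provider:physical_network <flat-physnet>
quantum net-create admin-net01 \
    --provider:network_type vlan --provider:physical_network <vlan-physnet> \
    --provider:segmentation_id <vlan-id>
```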
————————————————————————————————————————--
The following is installed and configured only on the nova-compute nodes:
15. Only nova-compute and quantum-plugin-linuxbridge-agent are needed
apt-get -y --force-yes install quantum-plugin-linuxbridge-agent nova-compute
Copy the nova and quantum configs from the control node above; they are identical, except that vncserver_proxyclient_address in nova.conf must be changed to the compute node's own public IP.
16. If you want separate control, network, and compute nodes
The control node only needs quantum-server nova-api nova-common nova-conductor nova-novncproxy nova-scheduler nova-spiceproxy nova-consoleauth
The network node needs quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent quantum-plugin-linuxbridge*
The compute nodes only need nova-compute quantum-plugin-linuxbridge-agent
That completes the setup. Log in to the dashboard as the admin user to create VMs and assign floating IPs.
Login credentials: admin / 123qwe
Make sure the matching values (passwords, IPs, tenant and service names) stay consistent across all of the configs above.