One-Click Multi-Node OpenStack Installation and Deployment

2020-06-16


I. Environment Preparation

Hostname                   Memory  CPU      Disk         NIC 1            NIC 2
controller (control node)  4 GB    2 cores  100G + 500G  192.168.100.10   192.168.200.100
compute (compute node)     4 GB    2 cores  100G + 500G  192.168.100.20   192.168.200.200
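Before configuring anything, it is worth confirming that each VM actually matches the table above. A quick sanity check, assuming standard CentOS 7 tooling and the interface names ens33/ens34 used throughout this guide:

# Run on each node and compare against the table
[root@localhost ~]# free -h       # memory: expect about 4 GB
[root@localhost ~]# nproc         # CPU: expect 2 cores
[root@localhost ~]# lsblk         # disks: expect a 100G system disk plus a 500G data disk
[root@localhost ~]# ip link show  # NICs: expect ens33 and ens34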

II. Node Configuration

1. Set the controller node's hostname and configure its two NICs
# Change the hostname
[root@localhost ~]# hostnamectl set-hostname controller
[root@localhost ~]# bash
# Configure the first NIC
[root@controller ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=cb333583-99f1-4542-9a09-69ab9e6ddf83
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.100.10
NETMASK=255.255.255.0
GATEWAY=192.168.100.2
DNS1=8.8.8.8
DNS2=114.114.114.114
[root@controller ~]#
# Configure the second NIC
[root@controller ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens34
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
UUID=d7cd310c-b709-411a-a83c-bd3ba72ceb32
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.200.100
NETMASK=255.255.255.0
GATEWAY=192.168.200.2
DNS1=8.8.8.8
DNS2=114.114.114.114
[root@controller ~]#
# Restart the network
[root@controller ~]# systemctl restart network
2. Set the compute node's hostname and configure its two NICs
# Change the hostname
[root@localhost ~]# hostnamectl set-hostname compute
[root@localhost ~]# bash
# Configure the first NIC
[root@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
[root@compute ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=2e3cdda5-dfad-410f-afd9-e07243c8866d
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.2
DNS1=8.8.8.8
DNS2=114.114.114.114
# Configure the second NIC
[root@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
[root@compute ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens34
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
UUID=06f1ebb1-3545-42e1-88b4-db0b7b2e9365
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.200.200
NETMASK=255.255.255.0
GATEWAY=192.168.200.2
DNS1=8.8.8.8
DNS2=114.114.114.114
[root@compute ~]#
# Restart the network
[root@compute ~]# systemctl restart network
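After restarting the network service on either node, a quick check confirms the static addresses took effect (output omitted; the inet lines should show the addresses from the table in Part I):

[root@compute ~]# ip addr show ens33 | grep inet
[root@compute ~]# ip addr show ens34 | grep inet
[root@compute ~]# ip route   # check the default route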
3. Add DNS resolver entries on the controller node
# If the resolver entries are missing, add them manually
[root@controller ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 114.114.114.114
[root@controller ~]#
4. Add DNS resolver entries on the compute node
# If the resolver entries are missing, add them manually
[root@compute ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 114.114.114.114
[root@compute ~]#
5. Test external network connectivity (on both nodes)
# Test the controller node
[root@controller ~]# ping www.baidu.com
PING www.wshifen.com (103.235.46.39) 56(84) bytes of data.
64 bytes from 103.235.46.39 (103.235.46.39): icmp_seq=1 ttl=128 time=253 ms
64 bytes from 103.235.46.39 (103.235.46.39): icmp_seq=2 ttl=128 time=234 ms
64 bytes from 103.235.46.39 (103.235.46.39): icmp_seq=3 ttl=128 time=244 ms
^C
--- www.wshifen.com ping statistics ---
4 packets transmitted, 3 received, 25% packet loss, time 3002ms
rtt min/avg/max/mdev = 234.305/243.852/253.029/7.659 ms
[root@controller ~]#
# Test the compute node
[root@compute ~]# ping www.baidu.com
PING www.wshifen.com (104.193.88.123) 56(84) bytes of data.
64 bytes from 104.193.88.123 (104.193.88.123): icmp_seq=1 ttl=128 time=251 ms
64 bytes from 104.193.88.123 (104.193.88.123): icmp_seq=2 ttl=128 time=267 ms
64 bytes from 104.193.88.123 (104.193.88.123): icmp_seq=3 ttl=128 time=248 ms
^C
--- www.wshifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 248.403/255.839/267.223/8.174 ms
[root@compute ~]#
6. Upload and extract openstack_rocky.tar.gz on the controller node
# Upload the openstack_rocky.tar.gz package to root's home directory with a transfer tool
[root@controller ~]# ls
anaconda-ks.cfg openstack_rocky.tar.gz
[root@controller ~]#
# Extract it
[root@controller ~]# tar -xf openstack_rocky.tar.gz
[root@controller ~]# ls
anaconda-ks.cfg openstack_rocky openstack_rocky.tar.gz
[root@controller ~]#
7. Upload and extract openstack_rocky.tar.gz on the compute node
# Upload the openstack_rocky.tar.gz package to root's home directory with a transfer tool
[root@compute ~]# ls
anaconda-ks.cfg openstack_rocky.tar.gz
[root@compute ~]#
# Extract it
[root@compute ~]# tar -xf openstack_rocky.tar.gz
[root@compute ~]# ls
anaconda-ks.cfg openstack_rocky openstack_rocky.tar.gz
[root@compute ~]#
8. Configure the yum repository on the controller node
# First install the yum repository metadata tool
[root@controller ~]# yum install createrepo -y
# Use /opt/ as the repository path
[root@controller ~]# mv openstack_rocky /opt/
# Generate the repository metadata under /opt/openstack_rocky/
[root@controller ~]# createrepo /opt/openstack_rocky/
# Move the stock yum repo files out of the way
[root@controller ~]# mv /etc/yum.repos.d/* /opt/
# Configure the yum repo by hand
[root@controller ~]# vi /etc/yum.repos.d/openstack.repo
[root@controller ~]# cat /etc/yum.repos.d/openstack.repo
[openstack]
name=openstack
baseurl=file:///opt/openstack_rocky
gpgcheck=0
[root@controller ~]#
# Clear the cache and reload the repositories
[root@controller ~]# yum clean all && yum repolist
Loaded plugins: fastestmirror
Cleaning repos: openstack
Cleaning up everything
Cleaning up list of fastest mirrors
Loaded plugins: fastestmirror
openstack | 2.9 kB 00:00
openstack/primary_db | 505 kB 00:00
Determining fastest mirrors
repo id repo name status
openstack openstack 787
repolist: 787
[root@controller ~]#
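If createrepo completed without errors, it will have written its index under /opt/openstack_rocky/repodata/ (createrepo always generates a repomd.xml there); checking for it before pointing yum at the directory saves a debugging round-trip:

[root@controller ~]# ls /opt/openstack_rocky/repodata/   # expect repomd.xml plus the *.xml/*.sqlite metadata files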
9. Configure the yum repository on the compute node
# First install the yum repository metadata tool
[root@compute ~]# yum install createrepo -y
# Use /opt/ as the repository path
[root@compute ~]# mv openstack_rocky /opt/
# Generate the repository metadata under /opt/openstack_rocky/
[root@compute ~]# createrepo /opt/openstack_rocky/
# Move the stock yum repo files out of the way
[root@compute ~]# mv /etc/yum.repos.d/* /opt/
# Configure the yum repo by hand
[root@compute ~]# vi /etc/yum.repos.d/openstack.repo
[root@compute ~]# cat /etc/yum.repos.d/openstack.repo
[openstack]
name=openstack
baseurl=file:///opt/openstack_rocky
gpgcheck=0
# Clear the cache and reload the repositories
[root@compute ~]# yum clean all && yum repolist
10. Disable the firewall, SELinux, and NetworkManager on the controller node
# Disable the firewall
[root@controller ~]# systemctl stop firewalld
[root@controller ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
# Disable SELinux
[root@controller ~]# setenforce 0
[root@controller ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
# Disable NetworkManager
[root@controller ~]# systemctl stop NetworkManager
[root@controller ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
[root@controller ~]#
11. Disable the firewall, SELinux, and NetworkManager on the compute node
# Disable the firewall
[root@compute ~]# systemctl stop firewalld
[root@compute ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@compute ~]#
# Disable SELinux
[root@compute ~]# setenforce 0
[root@compute ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
# Disable NetworkManager
[root@compute ~]# systemctl stop NetworkManager
[root@compute ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
[root@compute ~]#
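On both nodes, a short verification pass makes sure all three services are really out of the way before the Packstack run (standard systemd/SELinux commands):

[root@compute ~]# getenforce                           # expect Permissive now, Disabled after a reboot
[root@compute ~]# systemctl is-active firewalld        # expect inactive
[root@compute ~]# systemctl is-enabled NetworkManager  # expect disabled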
12. Configure host mappings on the controller node
[root@controller ~]# echo "192.168.100.10 controller" >> /etc/hosts
[root@controller ~]# echo "192.168.100.20 compute" >> /etc/hosts
[root@controller ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
[root@controller ~]#
13. Configure host mappings on the compute node
[root@compute ~]# echo "192.168.100.10 controller" >> /etc/hosts
[root@compute ~]# echo "192.168.100.20 compute" >> /etc/hosts
[root@compute ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
[root@compute ~]#
14. Ping the compute node from the controller node
[root@controller ~]# ping compute
PING compute (192.168.100.20) 56(84) bytes of data.
64 bytes from compute (192.168.100.20): icmp_seq=1 ttl=64 time=0.377 ms
64 bytes from compute (192.168.100.20): icmp_seq=2 ttl=64 time=1.03 ms
^C
--- compute ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.377/0.704/1.031/0.327 ms
[root@controller ~]#
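The hosts file was changed on both machines, so the reverse direction should be checked as well; for example, from the compute node:

[root@compute ~]# ping -c 3 controller   # -c 3 stops after three packets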
15. Configure NTP time synchronization on the controller node
# Install the time synchronization tool
[root@controller ~]# yum -y install ntpdate
# Sync against the Aliyun time server
[root@controller ~]# ntpdate ntp.aliyun.com
# Schedule a cron job to sync every 30 minutes
[root@controller ~]# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab
[root@controller ~]# crontab -l
*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >>/var/log/ntpdate.log
[root@controller ~]#
16. Configure NTP time synchronization on the compute node
# Install the time synchronization tool
[root@compute ~]# yum -y install ntpdate
# Sync against the Aliyun time server
[root@compute ~]# ntpdate ntp.aliyun.com
20 May 06:01:21 ntpdate[9830]: adjust time server 203.107.6.88 offset 0.012468 sec
[root@compute ~]#
# Schedule a cron job to sync every 30 minutes
[root@compute ~]# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab
[root@compute ~]# crontab -l
*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >>/var/log/ntpdate.log
[root@compute ~]#
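Once the cron job has fired at least once (or after running the ntpdate line by hand), the log it appends to gives a quick way to confirm the periodic sync is working on either node:

[root@compute ~]# tail /var/log/ntpdate.log   # each successful sync appends an "adjust time server" line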
17. Configure the controller node as the NTP server
# Install ntp
[root@controller ~]# yum install ntp -y
# Find the configuration below: uncomment the restrict line and set its subnet to the first NIC's network; comment out the server lines that follow
[root@controller ~]# vi /etc/ntp.conf
restrict 192.168.100.0 mask 255.255.255.0 nomodify notrap
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# Disable chronyd
[root@controller ~]# systemctl disable chronyd.service
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.
# Start ntpd
[root@controller ~]# systemctl restart ntpd
[root@controller ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@controller ~]#
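With ntpd now serving the 192.168.100.0/24 subnet, the compute node can sync against the controller directly instead of reaching out to the public server. A minimal sketch, using the controller's first NIC address from the table in Part I (ntpd may need a few minutes after startup before it answers):

[root@compute ~]# ntpdate -q 192.168.100.10   # query only: confirm the controller's ntpd responds
[root@compute ~]# ntpdate 192.168.100.10      # then actually sync against it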
18. Configure passwordless SSH from the controller node
[root@controller ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
81:98:ed:47:83:ab:51:07:c6:24:ca:66:16:2f:80:e1 root@controller
The key's randomart image is:
+--[ RSA 2048]----+
|+.. .o+          |
|oo + *.+         |
| EB + = =        |
|  + . o + o      |
|   .   o S       |
|        o .      |
|         .       |
|                 |
|                 |
+-----------------+
[root@controller ~]#
[root@controller ~]# ssh-copy-id compute
[root@controller ~]# ssh-copy-id controller
19. Configure passwordless SSH from the compute node
[root@compute ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
8d:41:12:5b:13:f6:04:21:1d:96:10:77:bc:78:ef:58 root@compute
The key's randomart image is:
+--[ RSA 2048]----+
| *=%Bo           |
|  Xo=.           |
| . ....          |
|   .+o           |
|   S...          |
|    E            |
|    +            |
|   . .           |
|                 |
+-----------------+
[root@compute ~]#
[root@compute ~]# ssh-copy-id controller
[root@compute ~]# ssh-copy-id compute
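With keys exchanged in both directions, passwordless login can be verified with a one-shot remote command; no password prompt should appear:

[root@controller ~]# ssh compute hostname     # should print: compute
[root@compute ~]# ssh controller hostname     # should print: controller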
20. Generate and edit the answer file on the controller node (remember to update the IPs)
[root@controller ~]# yum -y install openstack-packstack
[root@controller ~]# packstack --gen-answer-file=OpenStack.txt
[root@controller ~]# vi OpenStack.txt

Edit the following entries (the line numbers refer to positions inside OpenStack.txt):
● Line 41: SWIFT is OpenStack's object storage component. It defaults to y but is generally not installed in production, so change it to n.
● Line 50: defaults to y; change it to n.
● Line 94: the CONTROLLER host entry. Since Packstack runs on the controller node itself, this IP does not need to change.
● Line 97: change to the compute node's IP address.
● Line 101: change to the network node's IP address.
● Line 557: when setting up the CINDER component the installer creates a 20G volume; since the VM disk space is limited, shrink it to 1G.
● Line 778: values like 29948657b3aa409c are generated passwords.
● Line 782: LBAAS, the load-balancing component. Required here; set it to y.
● Line 790: FWAAS, the firewall component. Required here; set it to y.
● Line 794: VPNAAS, the VPN component. Required here; set it to y.
● Line 817: set the physical NIC name for the FLAT network.
● Line 862: set the physical NIC name here as well.
● Line 873: br-ex:eth1 maps the external bridge to the network node's NAT NIC.
● Line 1185: tells OpenStack to download a test image from the internet; since this environment is offline, change it to n.

[root@controller ~]# sed -i -r 's/192.168.17.10/192.168.200.100/g' OpenStack.txt
[root@controller ~]# sed -i -r 's/(.+_PW)=.+/\1=123/' OpenStack.txt
[root@controller ~]# sed -i -r 's/192.168.47.100/192.168.200.100/g' OpenStack.txt
[root@controller ~]# grep -vE "^#|^$" OpenStack.txt >OpenStackbak.txt
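Before launching the deployment it helps to spot-check the edited keys. The parameter names below are the standard Packstack answer-file ones for the options discussed above (Swift install, compute hosts, Cinder volume size); if your Packstack version names them differently, grep your own file instead:

[root@controller ~]# grep -E "CONFIG_SWIFT_INSTALL|CONFIG_COMPUTE_HOSTS|CONFIG_CINDER_VOLUMES_SIZE" OpenStack.txt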
21. Permanently mount the installation ISO on the controller node
[root@controller ~]# echo "/dev/sr0 /mnt iso9660 defaults 0 0" >> /etc/fstab
[root@controller ~]# mount -a
mount: /dev/sr0 is write-protected, mounting read-only
[root@controller ~]#
[root@controller ~]# df -h
Filesystem     Size  Used  Avail Use% Mounted on
/dev/sda3      50G   1.8G  49G   4%   /
devtmpfs       1.9G  0     1.9G  0%   /dev
tmpfs          1.9G  0     1.9G  0%   /dev/shm
tmpfs          1.9G  8.6M  1.9G  1%   /run
tmpfs          1.9G  0     1.9G  0%   /sys/fs/cgroup
/dev/sda5      46G   33M   46G   1%   /home
/dev/sda1      1014M 131M  884M  13%  /boot
tmpfs          378M  0     378M  0%   /run/user/0
/dev/sr0       4.1G  4.1G  0     100% /mnt
[root@controller ~]#
22. Permanently mount the installation ISO on the compute node
[root@compute ~]# echo "/dev/sr0 /mnt iso9660 defaults 0 0" >> /etc/fstab
[root@compute ~]# mount -a
mount: /dev/sr0 is write-protected, mounting read-only
[root@compute ~]# df -h
Filesystem     Size  Used  Avail Use% Mounted on
/dev/sda3      50G   1.7G  49G   4%   /
devtmpfs       1.9G  0     1.9G  0%   /dev
tmpfs          1.9G  0     1.9G  0%   /dev/shm
tmpfs          1.9G  8.6M  1.9G  1%   /run
tmpfs          1.9G  0     1.9G  0%   /sys/fs/cgroup
/dev/sda5      46G   33M   46G   1%   /home
/dev/sda1      1014M 131M  884M  13%  /boot
tmpfs          378M  0     378M  0%   /run/user/0
/dev/sr0       4.1G  4.1G  0     100% /mnt
[root@compute ~]#
23. Add the mounted system ISO as a second repository in the controller node's openstack.repo
[root@controller ~]# vi /etc/yum.repos.d/openstack.repo
[root@controller ~]# cat /etc/yum.repos.d/openstack.repo
[openstack]
name=openstack
baseurl=file:///opt/openstack_rocky
gpgcheck=0
[centos]
name=centos
baseurl=file:///mnt/
gpgcheck=0
[root@controller ~]#
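After saving the file, a reload confirms yum can see both repositories; the new centos repo should list the packages from the mounted ISO:

[root@controller ~]# yum clean all && yum repolist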
24. Run the automated deployment on the controller node. (The first run of packstack --answer-file=OpenStack.txt may fail; if the error output mentions swift, edit OpenStack.txt and make sure the Swift option on line 41 is set to n.)
[root@controller ~]# packstack --answer-file=OpenStack.txt
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20200520-071255-kT4RyC/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries                        [ DONE ]
Setting up CACERT                                    [ DONE ]
Preparing AMQP entries                               [ DONE ]
Preparing MariaDB entries                            [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty [ DONE ]
Preparing Keystone entries                           [ DONE ]
Preparing Glance entries                             [ DONE ]
Checking if the Cinder server has a cinder-volumes vg [ DONE ]
Preparing Cinder entries                             [ DONE ]
Preparing Nova API entries                           [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Preparing Nova Compute entries                       [ DONE ]
Preparing Nova Scheduler entries                     [ DONE ]
Preparing Nova VNC Proxy entries                     [ DONE ]
Preparing OpenStack Network-related Nova entries     [ DONE ]
Preparing Nova Common entries                        [ DONE ]
Preparing Neutron LBaaS Agent entries                [ DONE ]
Preparing Neutron API entries                        [ DONE ]
Preparing Neutron L3 entries                         [ DONE ]
Preparing Neutron L2 Agent entries                   [ DONE ]
Preparing Neutron DHCP Agent entries                 [ DONE ]
Preparing Neutron Metering Agent entries             [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Preparing OpenStack Client entries                   [ DONE ]
Preparing Horizon entries                            [ DONE ]
Preparing Gnocchi entries                            [ DONE ]
Preparing Redis entries                              [ DONE ]
Preparing Ceilometer entries                         [ DONE ]
Preparing Aodh entries                               [ DONE ]
Preparing Puppet manifests                           [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.200.100_controller.pp
192.168.200.100_controller.pp:                       [ DONE ]
Applying 192.168.200.100_network.pp
192.168.200.100_network.pp:                          [ DONE ]
Applying 192.168.200.100_compute.pp
192.168.200.100_compute.pp:                          [ DONE ]
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

**** Installation completed successfully ******

Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.200.100. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.200.100/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory.
* The installation log file is available at: /var/tmp/packstack/20200520-071255-kT4RyC/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20200520-071255-kT4RyC/manifests
[root@controller ~]#
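As the closing message notes, the admin credentials were written to /root/keystonerc_admin. A minimal smoke test, assuming the openstack CLI client that Packstack installs on the controller, is to source that file and list the registered services:

[root@controller ~]# source /root/keystonerc_admin
[root@controller ~(keystone_admin)]# openstack service list

The Dashboard at http://192.168.200.100/dashboard can then be reached with the same credentials.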