Deploying OpenStack with Kolla-Ansible

Design planning
Two node roles are designed: ceph and nova. Every node that is not part of the ceph cluster is a nova node and takes on compute services. The control and network roles are currently served by ceph{01..03}. All nodes are attached with dual (bonded) links.

| vlan | Name | Network segment (CIDR) | Use | Equipment | Remarks |
| --- | --- | --- | --- | --- | --- |
| 1031-1060 | os-tenant | User-defined | Project private networks | Layer 2 switches where the compute and network nodes sit | 30 private networks should be enough; otherwise expand to 900-1030 later. |
| 1031 | os-wuhan31 | 100.100.31.0/24 | Business area (wuhan31) host network | Layer 2 switches where the compute and network nodes sit | This cluster does not need it; listed to avoid accidental reuse. |
| 33 | os-extnet | 192.168.33.0/24 | Floating IP network, private network NAT | Layer 2 and layer 3 switches of all nodes | Lets private networks reach the outside, or be reached from outside (after binding a floating IP). |
| 34-37 | os-pubnet | 192.168.34.0/24-192.168.37.0/24 | Direct intranet | Layer 2 and layer 3 switches of all nodes | Generally used as the public egress network. |

IP and hostname planning
The gateway is 100.100.31.1

127.0.0.1 localhost
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6

100.100.31.254 cloud-wuhan31.***.org

100.100.31.201 wuhan31-ceph01.v3.os wuhan31-ceph01
100.100.31.202 wuhan31-ceph02.v3.os wuhan31-ceph02
100.100.31.203 wuhan31-ceph03.v3.os wuhan31-ceph03
100.100.31.102 wuhan31-nova01.v3.os wuhan31-nova01
100.100.31.103 wuhan31-nova02.v3.os wuhan31-nova02

Virtual machine specifications
cpu 1 2 4 8

Memory 1 2 4 8 16

Disk 20 50

In addition, the memory/cpu ratio is limited to values between 1 and 4. The script is written as follows; in the end 22 flavors are generated.

#!/bin/bash
desc="create flavors for openstack."

log_file="/dev/shm/create-flavor.log"
# config cpu, ram, and disk. values separated by spaces.
cpu_count_list="1 2 4 8"
ram_gb_list="1 2 4 8 16"
disk_gb_list="20 50"
# accepted ram/cpu ratio.
ram_cpu_factor_min=1
ram_cpu_factor_max=4

tip(){ echo >&2 "$*"; }
die(){ tip "$*"; exit 1; }

# openstack flavor create [-h] [-f {json,shell,table,value,yaml}]
#                         [-c COLUMN] [--max-width <integer>]
#                         [--fit-width] [--print-empty] [--noindent]
#                         [--prefix PREFIX] [--id <id>] [--ram <size-mb>]
#                         [--disk <size-gb>] [--ephemeral <size-gb>]
#                         [--swap <size-mb>] [--vcpus <vcpus>]
#                         [--rxtx-factor <factor>] [--public | --private]
#                         [--property <key=value>] [--project <project>]
#                         [--description <description>]
#                         [--project-domain <project-domain>]
#
OSC="openstack flavor create"
if [ "$1" != "run" ]; then
    tip "Usage: $0 [run] - $desc"
    tip "add the argument 'run' to really execute the commands, otherwise they are only printed."
    tip ""
    OSC="echo $OSC"
else
    # check openrc env.
    [ -z "$OS_USERNAME" ] && die "to run openstack commands, you need to source the openrc file first."
fi

for cpu in $cpu_count_list; do
    for ram in $ram_gb_list; do
        ram_cpu_factor=$((ram/cpu))
        [ "$ram_cpu_factor" -lt "$ram_cpu_factor_min" ] && { tip "INFO: ignore flavor because ram/cpu is less than ram_cpu_factor_min: $ram/$cpu < $ram_cpu_factor_min"; continue; }
        [ "$ram_cpu_factor" -gt "$ram_cpu_factor_max" ] && { tip "INFO: ignore flavor because ram/cpu is more than ram_cpu_factor_max: $ram/$cpu > $ram_cpu_factor_max"; continue; }
        for disk in $disk_gb_list; do
            # name matches the flavor list below, e.g. c1-m1G-d20G
            name="c$cpu-m${ram}G-d${disk}G"
            $OSC --id "$name" --vcpus "$cpu" --ram $((ram*1024)) --disk "$disk" "$name"
            sleep 0.01
        done
    done
done

This is the view after the creation completes:

[root@wuhan31-ceph01 ~]# openstack flavor list
+--------------+--------------+-------+------+-----------+-------+-----------+
| ID           | Name         |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------+--------------+-------+------+-----------+-------+-----------+
| c1-m1G-d20G  | c1-m1G-d20G  |  1024 |   20 |         0 |     1 | True      |
| c1-m1G-d50G  | c1-m1G-d50G  |  1024 |   50 |         0 |     1 | True      |
| c1-m2G-d20G  | c1-m2G-d20G  |  2048 |   20 |         0 |     1 | True      |
| c1-m2G-d50G  | c1-m2G-d50G  |  2048 |   50 |         0 |     1 | True      |
| c1-m4G-d20G  | c1-m4G-d20G  |  4096 |   20 |         0 |     1 | True      |
| c1-m4G-d50G  | c1-m4G-d50G  |  4096 |   50 |         0 |     1 | True      |
| c2-m2G-d20G  | c2-m2G-d20G  |  2048 |   20 |         0 |     2 | True      |
| c2-m2G-d50G  | c2-m2G-d50G  |  2048 |   50 |         0 |     2 | True      |
| c2-m4G-d20G  | c2-m4G-d20G  |  4096 |   20 |         0 |     2 | True      |
| c2-m4G-d50G  | c2-m4G-d50G  |  4096 |   50 |         0 |     2 | True      |
| c2-m8G-d20G  | c2-m8G-d20G  |  8192 |   20 |         0 |     2 | True      |
| c2-m8G-d50G  | c2-m8G-d50G  |  8192 |   50 |         0 |     2 | True      |
| c4-m16G-d20G | c4-m16G-d20G | 16384 |   20 |         0 |     4 | True      |
| c4-m16G-d50G | c4-m16G-d50G | 16384 |   50 |         0 |     4 | True      |
| c4-m4G-d20G  | c4-m4G-d20G  |  4096 |   20 |         0 |     4 | True      |
| c4-m4G-d50G  | c4-m4G-d50G  |  4096 |   50 |         0 |     4 | True      |
| c4-m8G-d20G  | c4-m8G-d20G  |  8192 |   20 |         0 |     4 | True      |
| c4-m8G-d50G  | c4-m8G-d50G  |  8192 |   50 |         0 |     4 | True      |
| c8-m16G-d20G | c8-m16G-d20G | 16384 |   20 |         0 |     8 | True      |
| c8-m16G-d50G | c8-m16G-d50G | 16384 |   50 |         0 |     8 | True      |
| c8-m8G-d20G  | c8-m8G-d20G  |  8192 |   20 |         0 |     8 | True      |
| c8-m8G-d50G  | c8-m8G-d50G  |  8192 |   50 |         0 |     8 | True      |
+--------------+--------------+-------+------+-----------+-------+-----------+
[root@wuhan31-ceph01 ~]#

Virtual machine network
Two ways of networking are provided for virtual machines: direct intranet access and private networks.

Please refer to the corresponding chapter for vlan planning.

Direct Intranet
Four /24 network segments are provided, which can connect up to 251*4 devices (254 host IPs - 1 gateway - 2 DHCP). If more are needed later, expand them yourself.

# Create provider networks that connect directly to the intranet. Because these vlan ids are not
# in the tenant range defined above, they must be created with administrator rights.
for net in {34..37}; do
  openstack network create --provider-network-type vlan --provider-physical-network physnet0 --provider-segment "$net" --share --project admin net-lan$net
  openstack subnet create --network net-lan$net --gateway 192.168.$net.1 --subnet-range 192.168.$net.0/24 --dns-nameserver 100.100.31.254 subnet-lan$net
done

Private network
30 vlans are preset, so up to 30 independent networks can be created; there is no limit on the number or size of subnets inside each network.
On a private network you can create subnets and routers freely. It is recommended to use it only for cluster-internal networks.
To reach the outside world, connect the network to the floating IP network via a router; to be reached from the outside, bind a floating IP or use load balancing (this paragraph to be confirmed).
The floating IP network currently has 250 IPs. If a large number of virtual machines on a network need to be reachable from the outside, prefer the "direct intranet" method instead.

# Create the external network; administrator rights required.
for net in {33..33}; do
  openstack network create --external --provider-network-type vlan --provider-physical-network physnet1 --provider-segment "$net" --share --project antiy net-ext-lan$net
  openstack subnet create --network net-ext-lan$net --gateway 192.168.$net.1 --subnet-range 192.168.$net.0/24 --dns-nameserver 100.100.31.254 subnet-floating$net
done

The following operations can be done by ordinary users:

# Create a private network. Ordinary user permissions are sufficient.
openstack network create --project antiy net-private-antiy01
# Create a router.
openstack router create --ha --project antiy router-antiy
# Connect the router to the networks. I have not found the command to configure the
# external gateway; I suggest configuring that part on the web interface (but see the sketch below).
#openstack router add subnet router-antiy subnet-private-antiy01
#openstack router add subnet router-antiy subnet-floating43
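For reference, newer openstack clients can also set the router's external gateway from the CLI; a sketch assuming the names above (and that a subnet named subnet-private-antiy01 exists on the private network):

# Attach the private subnet, then point the router at the external network.
openstack router add subnet router-antiy subnet-private-antiy01
openstack router set --external-gateway net-ext-lan33 router-antiy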

Physical network configuration
Configuration omitted; core switch port x0/0/1 is connected to C8-41 port x0/0/1.

The above is the network design and planning; the formal deployment follows.

One, basic environment preparation
1, environment preparation

| system | ip | host name | role |
| --- | --- | --- | --- |
| centos7.4 | 100.100.31.201 | wuhan31-ceph01.v3.os | ceph01, kolla-ansible |
| centos7.4 | 100.100.31.202 | wuhan31-ceph02.v3.os | ceph02 |
| centos7.4 | 100.100.31.203 | wuhan31-ceph03.v3.os | ceph03 |
| centos7.4 | 100.100.31.101 | wuhan31-nova01.v3.os | nova01 |
| centos7.4 | 100.100.31.102 | wuhan31-nova02.v3.os | nova02 |

Write the ip and host name into /etc/hosts

2, modify the host name

hostnamectl set-hostname wuhan31-ceph01.v3.os
hostnamectl set-hostname wuhan31-ceph02.v3.os
hostnamectl set-hostname wuhan31-ceph03.v3.os

3, turn off the firewall, selinux

systemctl stop firewalld
systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

4, configure yum source:

Modify yum source to internal company source.

Include the centos cloud and ceph mimic sources:

curl -v http://mirrors.***.org/repo/centos7.repo > /etc/yum.repos.d/CentOS-Base.repo
curl -v http://mirrors.***.org/repo/cloud.repo > /etc/yum.repos.d/cloud.repo
yum makecache

5, unified network card name

[root@wuhan31-ceph03 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
TYPE=bond
ONBOOT=yes
IPADDR=100.100.31.203
NETMASK=255.255.255.0
GATEWAY=100.100.31.1
DNS1=192.168.55.55
USERCTL=no
BONDING_MASTER=yes
BONDING_OPTS="miimon=200 mode=1"
[root@wuhan31-ceph03 network-scripts]# cat ifcfg-em1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=em1
ONBOOT=yes
MASTER=bond0
SLAVE=yes

[root@wuhan31-ceph03 network-scripts]# cat ifcfg-em2
TYPE=Ethernet
BOOTPROTO=none
DEVICE=em2
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@wuhan31-ceph03 network-scripts]#

All hosts use bond0 as the unified network interface name.
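After restarting the network, the bond state can be verified from the kernel (a quick check, not part of the original steps):

cat /proc/net/bonding/bond0    # shows the bonding mode, MII status, and which of em1/em2 is active
ip link show bond0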

6, install docker

Configure docker yum source

cat > /etc/yum.repos.d/docker.repo << EOF
[docker]
name=docker
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable
enabled=1
gpgcheck=0
EOF

Then install docker-ce

curl http://mirrors.***.org/repo/docker.repo > /etc/yum.repos.d/docker.repo
yum install docker-ce

Configure the private registry

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["http://mirrors.***.org:5000"]
}
EOF

Start the service

systemctl enable docker
systemctl start docker

7, install the required software
All nodes need to be installed:

yum install ceph python-pip -y

Debugging aids: to make debugging easier, it is recommended to install the completion scripts and common tools.

yum install bash-completion-extras libvirt-bash-completion net-tools bind-utils sysstat iftop nload tcpdump htop -y

8, install kolla-ansible
Install pip:

yum install python-pip -y

Install the dependent software required for kolla-ansible:

yum install ansible python2-setuptools python-cryptography python-openstackclient -y

Use pip to install kolla-ansible:

pip install kolla-ansible

Note:

If you get the error `requests 2.20.0 has requirement idna<2.8,>=2.5, but you'll have idna 2.4 which is incompatible.`, forcibly update the requests library:

pip install --ignore-installed requests
Similarly, if `Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.` appears, forcibly update it as well:

sudo pip install --ignore-installed PyYAML

Note: steps 1-7 and 9 are performed on all nodes; step 8 is performed on the deployment node (here wuhan31-ceph01).
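To confirm the tooling installed correctly (a quick check, not in the original text; versions will vary):

pip show kolla-ansible | grep -E '^(Name|Version)'
ansible --version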

Two, deploy ceph cluster
1, configure ceph user remote login
Performed on all ceph nodes. (Fill in the public key below according to the actual situation of your own machines. The purpose of this step is to allow the ceph user to log in to the system with a key.)

ssh-keygen -t rsa    # press Enter all the way
usermod -s /bin/bash ceph
mkdir ~ceph/.ssh/
cat >> ~ceph/.ssh/authorized_keys << EOF
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW6VghEC1cUrTZ6TfI9XcOEJZShkoL5YqtHBMtm2iZUnw8Pj6S3S1TCwKfdY0m+kInKlfZhoFCw3Xyee9XY7ZwPX6IEnixZMqO9EpC58LfxH841lw6xC0HesfF0QwWs+EVs5I1RwCN+Zoz2NPfu8RH30LHhBoSQpm75vRkF2trEbdtEI/kuzysO+73oF7R42lGJtgJtFbzLQSO2Vp/Xo7jdD/tdD/gcEsPniSPP3vFQg4EuSafdwxnJFuAxLAMCK+K1SQg7eNqboWYGhSWjOy39bTCZjieXOyNehPTVoqn3/qyC88c7D0PEbvTYxbNkuFU2MM7x9/k+ZGyvYnpex4t [email protected]
EOF
cat >> ~/.ssh/authorized_keys << EOF
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW6VghEC1cUrTZ6TfI9XcOEJZShkoL5YqtHBMtm2iZUnw8Pj6S3S1TCwKfdY0m+kInKlfZhoFCw3Xyee9XY7ZwPX6IEnixZMqO9EpC58LfxH841lw6xC0HesfF0QwWs+EVs5I1RwCN+Zoz2NPfu8RH30LHhBoSQpm75vRkF2trEbdtEI/kuzysO+73oF7R42lGJtgJtFbzLQSO2Vp/Xo7jdD/tdD/gcEsPniSPP3vFQg4EuSafdwxnJFuAxLAMCK+K1SQg7eNqboWYGhSWjOy39bTCZjieXOyNehPTVoqn3/qyC88c7D0PEbvTYxbNkuFU2MM7x9/k+ZGyvYnpex4t [email protected]
EOF
cat > /etc/sudoers.d/ceph << EOF
ceph ALL=(root) NOPASSWD: ALL
Defaults:ceph !requiretty
EOF
chown -R ceph:ceph ~ceph/.ssh/
chmod -R o-rwx ~ceph/.ssh/
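A quick way to confirm that key-based login works before running ceph-deploy (assuming the matching private key is in place for the deploying user):

for h in wuhan31-ceph{01..03}; do ssh ceph@"$h" hostname; done    # should print each hostname without a password prompt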

2, create ceph cluster
Operate on deployment node

Install the deployment tool ceph-deploy

yum install ceph-deploy -y

 mkdir ~ceph/ceph-deploy
cd ~ceph/ceph-deploy
ceph-deploy new wuhan31-ceph{01..03}.v3.os

Edit the configuration file ceph.conf

vim ceph.conf
[global]
fsid = 567be343-d631-4348-8f9d-2f18be36ce74
mon_initial_members = wuhan31-ceph01,wuhan31-ceph02,wuhan31-ceph03
mon_host = wuhan31-ceph01,wuhan31-ceph02,wuhan31-ceph03
mon_addr = 100.100.31.201:6789,100.100.31.202:6789,100.100.31.203:6789
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
mon_allow_pool_delete = 1

[osd]
osd_client_message_size_cap = 524288000
osd_deep_scrub_stride = 131072
osd_op_threads = 2
osd_disk_threads = 1
osd_mount_options_xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"
osd_recovery_max_active = 1
osd_max_backfills = 1
osd_recovery_threads = 1

[client]
rbd_cache = true
rbd_cache_size = 1073741824
rbd_cache_max_dirty = 134217728
rbd_cache_max_dirty_age = 5
rbd_cache_writethrough_until_flush = true
rbd_concurrent_management_ops = 50
rgw frontends = civetweb port=7480

Then create the initial node:

ceph-deploy mon create-initial
ceph-deploy admin wuhan31-ceph01 wuhan31-ceph02 wuhan31-ceph03
# Optional: allow the ceph user to use the admin keyring.
sudo setfacl -m u:ceph:r /etc/ceph/ceph.client.admin.keyring

Create mgr:
ceph-deploy mgr create wuhan31-ceph01 wuhan31-ceph02 wuhan31-ceph03

Add osd
These hard disks were used before, so each disk needs to be zapped first (on my machines the disks are sdb through sdk):

ceph-deploy disk zap wuhan31-ceph01 /dev/sd{b..k}
ceph-deploy disk zap wuhan31-ceph02 /dev/sd{b..k}
ceph-deploy disk zap wuhan31-ceph03 /dev/sd{b..k}

You can use the following script to add osds in batch:

for dev in /dev/sd{b..k}; do ceph-deploy osd create --data "$dev" wuhan31-ceph01 || break; done
for dev in /dev/sd{b..k}; do ceph-deploy osd create --data "$dev" wuhan31-ceph02 || break; done
for dev in /dev/sd{b..k}; do ceph-deploy osd create --data "$dev" wuhan31-ceph03 || break; done

If you encounter an error during this process, you can re-run the failed command individually.
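Before continuing, it is worth confirming that all osds came up (a quick check, not in the original steps):

ceph -s         # overall health; should report 30 osds, 30 up and 30 in
ceph osd tree   # osd layout per host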

3, create pools

Operate on the deployment node.
Create the pools required by openstack.
Calculation method: https://ceph.com/pgcalc/
Set different pg numbers according to expected size. Since there are currently only 3*10 osds, the preliminary pg numbers are as follows; they will be expanded later as needed.

images 32
volumes 256
vms 64
backups 128

Execute creation on any node with ceph admin permission:

ceph osd pool create images 32
ceph osd pool create volumes 256
ceph osd pool create vms 64
ceph osd pool create backups 128
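One note beyond the original text: since Luminous, ceph raises a health warning if a pool is not tagged with the application that uses it. All four pools here are consumed via rbd, so they can be tagged in one loop:

for pool in images volumes vms backups; do
    ceph osd pool application enable "$pool" rbd
done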

4, create ceph clients
Operate on the deployment node.
Create the clients and grant permissions. Either write the following into a script and execute it, or run the commands directly (both forms are shown below).

# Define the clients
clients="client.cinder client.nova client.glance client.cinder-backup"
# Create the clients.
for client in $clients; do
    ceph auth get-or-create "$client"
done

# Configure permissions
ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=cinder-ssd, allow rwx pool=vms, allow rwx pool=images'
ceph auth caps client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=cinder-ssd, allow rwx pool=vms, allow rwx pool=images'
ceph auth caps client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth caps client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
# Export keyrings
for client in $clients; do
    ceph auth export "$client" -o /etc/ceph/ceph."$client".keyring
done

Or define and create the clients one by one:

ceph auth get-or-create client.cinder
ceph auth get-or-create client.nova
ceph auth get-or-create client.glance
ceph auth get-or-create client.cinder-backup

Configure permissions:

ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=cinder-ssd, allow rwx pool=vms, allow rwx pool=images'
ceph auth caps client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=cinder-ssd, allow rwx pool=vms, allow rwx pool=images'
ceph auth caps client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth caps client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

Export keyrings:

ceph auth export client.cinder -o /etc/ceph/ceph.client.cinder.keyring
ceph auth export client.nova -o /etc/ceph/ceph.client.nova.keyring
ceph auth export client.glance -o /etc/ceph/ceph.client.glance.keyring
ceph auth export client.cinder-backup -o /etc/ceph/ceph.client.cinder-backup.keyring

5, configure the ceph dashboard plug-in
Operate on the deployment node.

ceph mgr module enable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_address ::
ceph config set mgr mgr/dashboard/server_port 7000
ceph dashboard set-login-credentials <username> <password>
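The dashboard URL can then be read back from the active mgr (a quick check):

ceph mgr services    # prints a JSON map such as {"dashboard": "http://<active-mgr>:7000/"}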

Three, kolla deploy openstack
All of the following operations are on the deployment node.

1, write configuration
Copy templates
Copy the kolla-ansible templates (kolla-ansible was installed here using pip):
Required: copy the configuration templates

cp -ar /usr/share/kolla-ansible/etc_examples/* /etc/

2, generate passwords
Please be sure to complete the "copy templates" step first; otherwise the passwords cannot be generated.
Execute the following command
kolla-genpwd

globals.yml
Edit and modify /etc/kolla/globals.yml

# Openstack version information. The rocky release is chosen here, with install type "source", because the source images have the most complete set of packages. With "binary" on a CentOS system, only the packages provided by Red Hat are available, which is not complete.
kolla_install_type: "source"
openstack_release: "rocky"

# If there are multiple control nodes, enable high availability. Note that the vip (virtual IP) must be an unused IP in the same network segment as the node IPs.
enable_haproxy: "yes"
kolla_internal_vip_address: "100.100.31.254"

# These fqdns need to be resolvable both by the internal DNS and in the hosts files.
kolla_internal_fqdn: "xiaoxuantest.***.org"
kolla_external_fqdn: "xiaoxuantest.***.org"

# Here is the path of the custom configuration. Only on the deployment node.
node_custom_config: "/etc/kolla/config"

# Virtualization type. If you are experimenting inside virtual machines, change this to qemu; it just runs slower.
# The kvm type requires CPU, motherboard and BIOS support, with hardware virtualization enabled in the BIOS. If the kvm kernel module cannot be loaded on a compute node, troubleshoot according to the dmesg errors.
nova_compute_virt_type: "kvm"

# Network interfaces. Note that the external interface must be a dedicated interface, otherwise the node will lose its connection.
neutron_external_interface: "eth1"
network_interface: "bond0"
api_interface: "bond0"
storage_interface: "bond0"
cluster_interface: "bond0"
tunnel_interface: "bond0"
# dns_interface: "eth" # The dns function is not integrated, please study it yourself later.

# Network virtualization technology. We don’t use openvswitch here, we use linuxbridge directly
neutron_plugin_agent: "linuxbridge"
enable_openvswitch: "no"
# Network high availability: creates multiple dhcp and l3 (routing) agents.
enable_neutron_agent_ha: "yes"
# Network encapsulation. Currently everything is vlan; flat is reserved for using physical network cards directly.
neutron_type_drivers: "flat,vlan"
# The tenant network isolation method is vlan here, but kolla does not generate that part of the
# configuration, so you need to add your own custom configuration under node_custom_config.
neutron_tenant_network_types: "vlan"

# Network plug-ins
enable_neutron_lbaas: "yes"
enable_neutron_vpnaas: "yes"
enable_neutron_fwaas: "yes"

# elk centralized log management
enable_central_logging: "yes"
# Enable debug mode, the log is very detailed. Turn it on temporarily as needed.
#openstack_logging_debug: "True"

# I forget the purpose of these... You can try turning them off; if other components depend on them, they will be enabled automatically.
enable_kafka: "yes"
enable_fluentd: "yes"

# External ceph is used here; kolla is not allowed to deploy ceph itself, because if an osd has problems during the kolla deployment the osd id sequence can end up out of order, which is inconvenient. It also avoids having to manage the storage cluster from the host later.
enable_ceph: "no"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
gnocchi_backend_storage: "ceph"
enable_manila_backend_cephfs_native: "yes"
# Enabled features.
#enable_ceilometer: "yes"
enable_cinder: "yes"
#enable_designate: "yes"
enable_destroy_images: "yes"
#enable_gnocchi: "yes"
enable_grafana: "yes"
enable_heat: "yes"
enable_horizon: "yes"
#enable_ironic: "yes"
#enable_ironic_ipxe: "yes"
#enable_ironic_neutron_agent: "yes"
#enable_kuryr: "yes"
#enable_magnum: "yes"
# enable_neutron_dvr
# enable_ovs_dpdk
#enable_nova_serialconsole_proxy: "yes"
#enable_octavia: "yes"
enable_redis: "yes"
#enable_trove: "yes"

# Other configuration
glance_backend_file: "no"
#designate_ns_record: "nova."
#ironic_dnsmasq_dhcp_range: "11.0.0.10,11.0.0.111"
openstack_region_name: "xiaoxuantest"

3, write the inventory file
Create a directory and copy the sample inventory file:

mkdir kolla-ansible
cp /usr/share/kolla-ansible/ansible/inventory/multinode kolla-ansible/inventory-xiaoxuantest

The key content of the inventory file after editing (other sections remain unchanged):

[control]
wuhan31-ceph01
wuhan31-ceph02
wuhan31-ceph03

[network]
wuhan31-ceph01
wuhan31-ceph02
wuhan31-ceph03

[external-compute]
wuhan31-ceph01
wuhan31-ceph02
wuhan31-ceph03

[monitoring:children]
control

[storage:children]
control
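Before bootstrapping, it is worth confirming that ansible can reach every host in the inventory (the usual smoke test, not in the original text):

ansible -i kolla-ansible/inventory-xiaoxuantest all -m ping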

4, ceph integration
Network
Because we use vlan tenant networks, the ml2 settings need to be configured manually:

mkdir /etc/kolla/config/neutron
cat > /etc/kolla/config/neutron/ml2_conf.ini << EOF
[ml2_type_vlan]
network_vlan_ranges = physnet0:1031:1060,physnet1

[linux_bridge]
physical_interface_mappings = physnet0:eth0,physnet1:eth1
EOF

Dashboard
Make the instance creation dialog default to not creating a new volume:

mkdir /etc/kolla/config/horizon/
cat > /etc/kolla/config/horizon/custom_local_settings << EOF
LAUNCH_INSTANCE_DEFAULTS = {
    'create_volume': False,
}
EOF

The complete contents of the /etc/kolla/config/ directory are listed below:

[root@wuhan31-ceph01 config]# ls -lR
.:
total 4
lrwxrwxrwx. 1 kolla kolla  19 Mar 11 17:06 ceph.conf -> /etc/ceph/ceph.conf
drwxr-xr-x. 4 kolla kolla 117 Mar 28 14:43 cinder
-rw-r--r--. 1 root  root   39 Mar 28 14:39 cinder.conf
drwxr-xr-x. 2 kolla kolla  80 Mar 11 17:18 glance
drwxr-xr-x. 2 root  root   35 Mar 19 11:21 horizon
drwxr-xr-x. 2 root  root   26 Mar 14 15:49 neutron
drwxr-xr-x. 2 kolla kolla 141 Mar 11 17:18 nova

./cinder:
total 8
lrwxrwxrwx. 1 kolla kolla 19 Mar 11 17:10 ceph.conf -> /etc/ceph/ceph.conf
drwxr-xr-x. 2 kolla kolla 81 Mar 11 17:18 cinder-backup
-rwxr-xr-x. 1 kolla kolla 274 Feb 26 16:47 cinder-backup.conf
drwxr-xr-x. 2 kolla kolla 40 Mar 11 17:18 cinder-volume
-rwxr-xr-x. 1 kolla kolla 534 Mar 28 14:38 cinder-volume.conf

./cinder/cinder-backup:
total 0
lrwxrwxrwx. 1 kolla kolla 43 Mar 11 17:18 ceph.client.cinder-backup.keyring -> /etc/ceph/ceph.client.cinder-backup.keyring
lrwxrwxrwx. 1 kolla kolla 36 Mar 11 17:18 ceph.client.cinder.keyring -> /etc/ceph/ceph.client.cinder.keyring

./cinder/cinder-volume:
total 0
lrwxrwxrwx. 1 kolla kolla 36 Mar 11 17:18 ceph.client.cinder.keyring -> /etc/ceph/ceph.client.cinder.keyring

./glance:
total 4
lrwxrwxrwx. 1 kolla kolla 36 Mar 11 17:18 ceph.client.glance.keyring -> /etc/ceph/ceph.client.glance.keyring
lrwxrwxrwx. 1 kolla kolla 19 Mar 11 17:07 ceph.conf -> /etc/ceph/ceph.conf
-rwxr-xr-x. 1 kolla kolla 138 Feb 27 11:55 glance-api.conf

./horizon:
total 4
-rw -r--r--. 1 root root 59 Mar 19 11:21 custom_local_settings

./neutron:
total 4
-rw-r--r--. 1 root root 141 Mar 14 15:49 ml2_conf.ini

./nova:
total 8
lrwxrwxrwx. 1 kolla kolla 36 Mar 11 17:18 ceph.client.cinder.keyring -> /etc/ceph/ceph.client.cinder.keyring
lrwxrwxrwx. 1 kolla kolla 34 Mar 11 17:18 ceph.client.nova.keyring -> /etc/ceph/ceph.client.nova.keyring
lrwxrwxrwx. 1 kolla kolla 19 Mar 11 17:07 ceph.conf -> /etc/ceph/ceph.conf
-rwxr-xr-x. 1 kolla kolla 101 Feb 27 17:28 nova-compute.conf
-rwxr-xr-x. 1 kolla kolla 39 Dec 24 16:52 nova-scheduler.conf

The contents of the relevant configuration files in the config directory are as follows:

# find -type f -printf "=== FILE: %p ===\n" -exec cat {} \;
=== FILE: ./cinder/cinder-backup.conf ===
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
=== FILE: ./cinder/cinder-volume.conf ===
[DEFAULT]
enabled_backends=cinder-sas,cinder-ssd

[cinder-sas]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=cinder-sas
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid=5b3ec4eb-c276-4cf2-a042-8ec906d05f69

[cinder-ssd]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=cinder-ssd
volume_backend_name=cinder-ssd
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid=5b3ec4eb-c276-4cf2-a042-8ec906d05f69
=== FILE: ./glance/glance-api.conf ===
[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
=== FILE: ./nova/nova-compute.conf ===
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
=== FILE: ./nova/nova-scheduler.conf ===
[DEFAULT]
scheduler_max_attempts = 100
=== FILE: ./neutron/ml2_conf.ini ===
[ml2_type_vlan]
network_vlan_ranges = physnet0:1000:1030,physnet1

[linux_bridge]
physical_interface_mappings = physnet0:eth0,physnet1:eth1

=== FILE: ./horizon/custom_local_settings ===

LAUNCH_INSTANCE_DEFAULTS = {
'create_volume': False,
}

=== FILE: ./cinder.conf ===
[DEFAULT]
default_volume_type=standard

*Note: set rbd_secret_uuid= in ./cinder/cinder-volume.conf to the cinder_rbd_secret_uuid value from /etc/kolla/passwords.yml:

[root@wuhan31-ceph01 ~]# grep cinder_rbd_secret_uuid /etc/kolla/passwords.yml
cinder_rbd_secret_uuid: 1ae2156b-7c33-4fbb-a26a-c770fadc54b6
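To avoid copying the value by hand, the substitution can also be scripted; a small sketch assuming the paths above:

# Read cinder_rbd_secret_uuid from passwords.yml and patch cinder-volume.conf in place.
uuid=$(awk '/^cinder_rbd_secret_uuid:/ {print $2}' /etc/kolla/passwords.yml)
sed -i "s/^rbd_secret_uuid=.*/rbd_secret_uuid=$uuid/" /etc/kolla/config/cinder/cinder-volume.conf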

The symlinks are created as follows.

ceph.conf:
ln -s /etc/ceph/ceph.conf /etc/kolla/config/nova/
ln -s /etc/ceph/ceph.conf /etc/kolla/config/glance/
ln -s /etc/ceph/ceph.conf /etc/kolla/config/cinder/

keyring:

ln -s /etc/ceph/ceph.client.cinder-backup.keyring /etc/kolla/config/cinder/cinder-backup/
ln -s /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-backup/
ln -s /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-volume/
ln -s /etc/ceph/ceph.client.glance.keyring /etc/kolla/config/glance/
ln -s /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/nova/
ln -s /etc/ceph/ceph.client.nova.keyring /etc/kolla/config/nova/

5, deploy openstack

Pre-configure the node environment
bootstrap also installs quite a few packages; consider installing them in advance later on.
kolla-ansible -i kolla-ansible/inventory-xiaoxuantest bootstrap-servers

Pre-checks
kolla-ansible -i kolla-ansible/inventory-xiaoxuantest prechecks

Formal deployment
kolla-ansible -i kolla-ansible/inventory-xiaoxuantest deploy

Adjusting the configuration and redeploying
If you need to adjust the configuration, edit globals.yml and then run reconfigure. With the -t option you can limit the run to the changed modules.

kolla-ansible -i kolla-ansible/inventory-xiaoxuantest reconfigure -t neutron
kolla-ansible -i kolla-ansible/inventory-xiaoxuantest deploy -t neutron

Finish the deployment
This step mainly generates admin-openrc.sh.

kolla-ansible post-deploy
. /etc/kolla/admin-openrc.sh

Initial demo

Running it automatically downloads a cirros image, creates networks, and spawns a batch of test virtual machines.

/usr/share/kolla-ansible/init-runonce

Look up the admin login password:

grep admin /etc/kolla/passwords.yml

Without an internal mirror the deployment takes quite a long time, because downloading from foreign sources is slow.
Official troubleshooting guide: https://docs.openstack.org/kolla-ansible/latest/user/troubleshooting.html

The containers running after a successful deployment are as follows:

[root@wuhan31-ceph01 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d141ac504ec6 kolla/centos-source-grafana:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks grafana
41e12c24ba7f kolla/centos-source-horizon:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks horizon
6989a4aeb33a kolla/centos-source-heat-engine:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks heat_engine
35665589b4a4 kolla/centos-source-heat-api-cfn:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks heat_api_cfn
f18c98468796 kolla/centos-source-heat-api:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks heat_api
bc77f4d3c957 kolla/centos-source-neutron-metadata-agent:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks neutron_metadata_agent
7334b93c6564 kolla/centos-source-neutron-lbaas-agent:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks neutron_lbaas_agent
0dd7a55245c4 kolla/centos-source-neutron-l3-agent:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks neutron_l3_agent
beec0f19ec7f kolla/centos-source-neutron-dhcp-agent:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks neutron_dhcp_agent
af6841ebc21e kolla/centos-source-neutron-linuxbridge-agent:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks neutron_linuxbridge_agent
49dc0457445d kolla/centos-source-neutron-server:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks neutron_server
677c0be4ab6b kolla/centos-source-nova-compute:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks nova_compute
402b1e673777 kolla/centos-source-nova-novncproxy:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks nova_novncproxy
e35729b76996 kolla/centos-source-nova-consoleauth:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks nova_consoleauth
8b193f562e47 kolla/centos-source-nova-conductor:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks nova_conductor
885581445be0 kolla/centos-source-nova-scheduler:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks nova_scheduler
171128b7bcb7 kolla/centos-source-nova-api:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks nova_api
8d7f3de2ad63 kolla/centos-source-nova-placement-api:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks placement_api
ab763320f268 kolla/centos-source-nova-libvirt:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks nova_libvirt
bbd4c3e2c961 kolla/centos-source-nova-ssh:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks nova_ssh
80e7098f0bfb kolla/centos-source-cinder-backup:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks cinder_backup
20e2ff43d0e1 kolla/centos-source-cinder-volume:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks cinder_volume
6caba29f7ce2 kolla/centos-source-cinder-scheduler:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks cinder_scheduler
3111622e4e83 kolla/centos-source-cinder-api:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks cinder_api
2c011cfae829 kolla/centos-source-glance-api:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks glance_api
be84e405afdd kolla/centos-source-kafka:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks kafka
09aef04ad59e kolla/centos-source-keystone-fernet:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks keystone_fernet
2ba9e19844fd kolla/centos-source-keystone-ssh:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks keystone_ssh
8eebe226b065 kolla/centos-source-keystone:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks keystone
662d85c00a64 kolla/centos-source-rabbitmq:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks rabbitmq
7d373ef0fdee kolla/centos-source-mariadb:rocky "dumb-init kolla_sta…" 8 weeks ago Up 8 weeks mariadb
ab9f5d612925 kolla/centos-source-memcached:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks memcached
a728298938f7 kolla/centos-source-kibana:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks kibana
7d22d71cc31b kolla/centos-source-keepalived:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks keepalived
dae774ca7e33 kolla/centos-source-haproxy:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks haproxy
14b340bb8139 kolla/centos-source-redis-sentinel:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks redis_sentinel
3023e95f465f kolla/centos-source-redis:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks redis
a3ed7e8fe8ff kolla/centos-source-elasticsearch:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks elasticsearch
06b28cd0f7c7 kolla/centos-source-zookeeper:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks zookeeper
630219f5fb29 kolla/centos-source-chrony:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks chrony
6f6189a4dfda kolla/centos-source-cron:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks cron
039f08ec1bbf kolla/centos-source-kolla-toolbox:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks kolla_toolbox
f839d23859cc kolla/centos-source-fluentd:rocky "dumb-init --single-…" 8 weeks ago Up 8 weeks fluentd
[root@wuhan31-ceph01 ~]#

[Screenshots: the deployed OpenStack dashboard, showing the networks that were added, all using the direct intranet.]

Four, common failures
mariadb: this was encountered after all nodes had been shut down.
Symptom: the vip listens on 3306, the nodes' own 3306 is not listening, and the containers keep restarting.
The likely cause is that with all nodes going down at once the mariadb service became unavailable and has to be recovered:
kolla-ansible -i kolla-ansible/inventory-xiaoxuantest mariadb_recovery
After it finishes, confirm that 3306 is in the LISTEN state on every node, for example as below.
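A quick way to check the listener state on each node (a hedged example, not from the original text):

ss -ltn | grep 3306    # should show a LISTEN entry on every node after recovery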

Other problems encountered:
ironic : Checking ironic-agent files exist for Ironic Inspector

TASK [ironic : Checking ironic-agent files exist for Ironic Inspector] ********************
failed: [localhost -> localhost] (item=ironic-agent.kernel) => {"changed": false, "failed_when_result": true, "item": "ironic-agent.kernel", "stat": {"exists": false}}
failed: [localhost -> localhost] (item=ironic-agent.initramfs) => {"changed": false, "failed_when_result": true, "item": "ironic-agent.initramfs", "stat": {"exists": false}}

Temporarily resolved by disabling the enable_ironic* settings.

neutron : Checking if 'MountFlags' for docker service is set to 'shared'

TASK [neutron : Checking if 'MountFlags' for docker service is set to 'shared'] ***********
fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["systemctl", "show", "docker"], "delta": "0:00:00.010391", "end": "2018-12-24 20:44:46.791156", "failed_when_result": true, "rc": 0, "start": "2018-12-24 20:44:46.780765",...

See the section "Basic environment preparation -- install docker".

ceilometer : Checking gnocchi backend for ceilometer

TASK [ceilometer : Checking gnocchi backend for ceilometer] *******************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "gnocchi is required but not enabled"}

Enable gnocchi.

octavia : Checking certificate files exist for octavia

TASK [octavia : Checking certificate files exist for octavia] *****************************
failed: [localhost -> localhost] (item=cakey.pem) => {"changed": false, "failed_when_result": true, "item": "cakey.pem", "stat": {"exists": false}}
failed: [localhost -> localhost] (item=ca_01.pem) => {"changed": false, "failed_when_result": true, "item": "ca_01.pem", "stat": {"exists": false}}
failed: [localhost -> localhost] (item=client.pem) => {"changed": false, "failed_when_result": true, "item": "client.pem", "stat": {"exists": false}}

Running kolla-ansible certificates still did not generate them. I looked it up; upstream has not fixed this yet: https://bugs.launchpad.net/kolla-ansible/+bug/1668377
Disable octavia for now; troubleshoot and generate the certificates manually later.

common : Restart fluentd container

RUNNING HANDLER [common : Restart fluentd container] **************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unknown error message: Get https://192.168.55.201:4000/v1/_ping: dial tcp 100.100.31.201:4000: getsockopt: connection refused"}

I checked, and port 4000 was indeed not up. Deployed a registry according to the official documentation [ref 1]; a sketch follows.
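A local registry can be brought up with the stock image; a minimal sketch for the deployment node (port 4000 chosen to match the address in the error; the storage path /opt/registry is an assumption):

docker run -d --name registry --restart=always \
  -p 4000:5000 -v /opt/registry:/var/lib/registry registry:2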
