The following is a description of the OpenStack core components (O release) published by a friend on 51CTO. The summary is very thorough, so I will not reinvent the wheel here.
https://down.51cto.com/data/2448945
Private cloud
Public cloud
Hybrid cloud
IaaS (Infrastructure as a Service): OpenStack, CloudStack
PaaS (Platform as a Service): Docker, OpenShift
SaaS (Software as a Service): aimed mainly at end users, who can use an application through a browser without installing anything.
DBaaS (Database as a Service)
FWaaS(Firewall as a Service)
Asynchronous queue service: a queue that receives tasks such as create, start, and delete. When 200 VM instances need to be started at the same time, the start-VM requests are simply placed into the asynchronous queue, and the caller can immediately move on to other work.
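The asynchronous-queue idea above can be sketched in a few lines. This is not OpenStack code, just a minimal model of the pattern: an API side enqueues 200 "start VM" tasks and returns immediately, while worker threads consume them in the background.

```python
# Minimal sketch (not OpenStack code) of the asynchronous-queue pattern:
# the "API" side enqueues start-VM tasks and moves on, while worker
# threads drain the queue in the background.
import queue
import threading

task_queue: "queue.Queue" = queue.Queue()
started = []
lock = threading.Lock()

def worker():
    while True:
        task = task_queue.get()
        if task is None:          # sentinel: shut this worker down
            task_queue.task_done()
            break
        with lock:
            started.append(task)  # stand-in for actually booting the VM
        task_queue.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

# The "API" side: enqueue 200 start requests and immediately move on.
for i in range(200):
    task_queue.put(f"start-vm-{i}")
for _ in workers:
    task_queue.put(None)

task_queue.join()
for w in workers:
    w.join()
print(len(started))  # all 200 tasks were processed asynchronously
```

In OpenStack itself this role is played by a message broker (e.g. RabbitMQ or Qpid) rather than an in-process queue, but the decoupling effect is the same.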
OpenStack components:
OpenStack's API style is RESTful, and it is compatible with AWS (Amazon's cloud) and S3; that is, applications written against AWS or S3 can call OpenStack directly, and OpenStack applications can likewise call AWS and S3. This makes it very convenient to combine components into a hybrid cloud.
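As an illustration of that RESTful style, the body below is what a client would POST to Keystone's v3 endpoint (`/v3/auth/tokens`) to obtain a token before calling the other services. The user, password, and project names here are placeholders, not real credentials.

```python
# Sketch of the JSON body a client POSTs to Keystone v3 (/v3/auth/tokens)
# to obtain a token. All names/credentials below are placeholders.
import json

def build_auth_request(username: str, password: str, project: str,
                       domain: str = "Default") -> str:
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # Scope the token to a project so it can be used with Nova etc.
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }
    return json.dumps(body)

payload = build_auth_request("demo", "secret", "demo-project")
print(payload)
```

Keystone returns the token in the `X-Subject-Token` response header; subsequent calls to Nova, Glance, and so on pass it back in an `X-Auth-Token` header.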
Core components:
1. Service name: Compute (code name: Nova). It manages the complete life cycle of VM instances: startup, resource allocation, shutdown, and destruction. SSH key injection at boot and SSH connectivity are also provided by it.
2. Service name: Networking (code name: Neutron). Early on it was provided by Nova (that is, Compute) and became independent in the F (Folsom) release. It provides network connectivity services, uses a plug-in design, and supports many popular network management plug-ins.
3. Storage: divided into two components, Block Storage (Cinder) and Object Storage (Swift).
Object storage: similar to VMware's disk files, although VMware's disk files are not object storage. An object carries its own metadata, so even when placed on a disk without a file system it can manage itself. OpenStack uses Swift, a heavyweight distributed storage system, because the company that developed it was one of the early initiators of OpenStack and contributed its own distributed storage system to OpenStack as the object storage system; that system is Swift.
Object Storage (code name: Swift): stores and retrieves unstructured data objects through a RESTful interface. It is a highly fault-tolerant and scalable storage architecture.
Block Storage (code name: Cinder): early on provided by Nova (the Compute component), independent since the F release. It mainly provides persistent block storage for VMs.
4. Identity (code name: Keystone): provides authentication, authorization, and an endpoint catalog service for all components except itself; that is, a function similar to a directory service, through which the access path to every component can be retrieved.
5. Image (code name: Glance): serves as the front end of Swift, providing metadata retrieval for stored objects. Simply put, before a VM starts, the location of its disk image file must be known, and the Image service is queried to retrieve it; the storage location information for everything in Swift is kept in the Image service. It tells the VM client where to download the image, and the client then fetches it itself.
6. Dashboard (user interaction interface, code name: Horizon): a web-based interface for interacting with the OpenStack components.
7. Telemetry (code name: Ceilometer): monitors and meters VMs. Metering means charging according to the resources a user's VM consumes, such as RAM, CPU, network bandwidth, and disk space.
8. Orchestration (code name: Heat): quickly links multiple resources into a unified service, based on its own template format or the AWS CloudFormation template format. Simply understood: template-driven system management.
9. Database (code name: Trove): provides the DBaaS component.
10. Data Processing (code name: Sahara): implements on-demand, scalable management of Hadoop within OpenStack.
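Item 8 above describes Heat's template-driven orchestration. As an illustration only, here is a minimal HOT-style template for booting one server, built as a Python dict and serialized as JSON (HOT templates are normally written in YAML; the image and flavor names are placeholders).

```python
# Illustrative sketch of a minimal HOT-style Heat template (item 8
# above), built as a Python dict. Image/flavor names are placeholders.
import json

template = {
    "heat_template_version": "2016-10-14",
    "description": "Start a single VM instance",
    "resources": {
        "my_server": {
            "type": "OS::Nova::Server",   # the Nova instance resource type
            "properties": {
                "image": "cirros",        # placeholder image name
                "flavor": "m1.small",     # placeholder flavor name
            },
        }
    },
}
print(json.dumps(template, indent=2))
```

Passing such a template to Heat makes it call Nova, Neutron, and Cinder on your behalf to realize the described resources, which is exactly the "template-driven system management" idea above.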
Interaction diagram of the OpenStack components
A brief description of the interaction between the components:
1. Through the Dashboard, the CLI (OpenStack command-line interface), or a self-developed VM-creation interface: to start a VM, you must first authenticate and log in to Keystone, and then use the obtained token to access Nova-API.
2. Nova-API is the calling interface that receives user requests; it listens on a socket so it can be accessed over TCP/IP. When the Dashboard/CLI sends a request, Nova-API first verifies that its token is valid, and if so accepts the request. It then asks Nova-DB (MySQL) about the VM. Nova-DB looks up the created VMs; if there is a VM to be started, it obtains that VM's information (such as instance name, RAM size, CPU, and disk) and returns it to Nova-API.
3. Once Nova-API has the VM information, it packages it into a specific request for operating the VM through the Hypervisor, and sends it toward the Hypervisor or Nova-compute, which is responsible for operating the VM. The interaction between Nova-API and Nova-compute is asynchronous; there is a queue between them.
[Note: Nova-API only receives VM operation requests, verifies whether the caller has permission to operate the VM, and obtains the VM information; it is the Hypervisor that completes the actual management operations on the VM.]
4. After Nova-API drops the request packet into the queue, Nova-Scheduler picks up the request and schedules it. Once scheduling is complete, Nova-Scheduler puts the scheduling information back into the queue, and at the same time sends the VM's updated startup state to Nova-DB. After Nova-DB updates the VM's status, it notifies Nova-Scheduler.
5. Nova-compute can be regarded as a concrete physical machine. It takes the scheduling information sent by Nova-Scheduler from the queue and determines whether the task is its own; if so, it accepts the start-VM task, generates status information indicating that the VM is being started, and throws it back into the queue.
6. When Nova-conductor finds the VM status update that Nova-compute threw into the queue, it reads it and sends the information to Nova-DB to update the VM's status. When the DB update is complete, Nova-DB notifies Nova-conductor, which then puts that confirmation into the queue. When Nova-compute sees the successful-update message thrown by Nova-conductor, it proceeds with the next VM startup steps.
7. Knowing that the VM status update is complete, Nova-compute requests the VM's boot disk image from glance-API. glance-API first queries glance-DB through glance-registry to see whether the VM's disk information exists; if it does, glance-registry returns the disk image information to glance-API. glance-API then verifies with Keystone that the token provided by Nova-compute is valid; if it is, it uses the VM disk image information to fetch the VM's boot disk image file from the Image Store and returns it to Nova-compute.
8. quantum-server: responsible for creating the network infrastructure for VMs, such as building virtual bridges and virtual routers, adding the corresponding forwarding rules, and even NAT. quantum-server is a very busy system because it must create network infrastructure for every VM; however, it does not create it directly, but throws the request into a queue inside the quantum subsystem.
9. The corresponding quantum-plugin reads the network-building information from the queue inside the quantum subsystem and queries Quantum-DB to determine which parts must be created on the Nova-compute node that actually runs the VM (for example, a bridge must be created locally) and which on the quantum side. If something must be created on the node running the VM, quantum-plugin throws that information into the internal queue; the quantum-agent running on the Nova-compute node picks it up from the quantum internal queue and creates the corresponding network infrastructure locally. When finished, quantum-agent sends a completion message to Quantum-DB to update the status of the network infrastructure. [Note: after receiving a request from Nova-compute, quantum-server also checks with Keystone that it is legitimate.] Finally, quantum-server notifies Nova-compute that the network has been built.
10. From the information it obtained to start the VM, Nova-compute checks whether a previously associated block device needs to be attached. If so, it sends a request to cinder-API, which forwards the request to cinder-volume; cinder-volume queries Cinder-DB to see whether the block device Nova-compute requested exists. If it does, cinder-volume notifies cinder-API, cinder-API initiates a verification request to Keystone, and after verification passes, cinder-API returns the block device information directly to nova-compute. If instead this is a request to create a volume, cinder-volume throws the volume information into the cinder internal queue; cinder-scheduler obtains the back-end storage information from Cinder-DB and, according to the size of the volume to be created, its VolumeType, required characteristics, and other criteria, selects the best-matching back-end storage host and writes the result into the cinder queue. The cinder-volume service on that storage node then calls the corresponding storage driver to create the volume. After creation completes, the information is written into Cinder-DB, cinder-volume tells cinder-API, and cinder-API returns the information to nova-compute.
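Steps 4 and 10 above both involve a scheduler picking the best host from resource information in the database. A simplified illustration of that selection step (not the real nova-scheduler, which uses pluggable filters and weighers; host names and numbers are made up):

```python
# Simplified illustration of the scheduling step in items 4 and 10 above:
# filter hosts that can satisfy the request, then pick the one with the
# most free RAM. Hosts and numbers below are invented for the example.

def select_host(hosts, ram_mb, vcpus):
    """hosts: list of dicts like {"name", "free_ram_mb", "free_vcpus"}."""
    candidates = [h for h in hosts
                  if h["free_ram_mb"] >= ram_mb and h["free_vcpus"] >= vcpus]
    if not candidates:
        raise RuntimeError("No valid host was found")
    # Weigh by free RAM: the most lightly loaded candidate wins.
    return max(candidates, key=lambda h: h["free_ram_mb"])["name"]

hosts = [
    {"name": "compute1", "free_ram_mb": 2048, "free_vcpus": 2},
    {"name": "compute2", "free_ram_mb": 8192, "free_vcpus": 8},
    {"name": "compute3", "free_ram_mb": 4096, "free_vcpus": 1},
]
print(select_host(hosts, ram_mb=1024, vcpus=2))  # compute2
```

cinder-scheduler does the analogous thing over storage back ends, matching volume size and VolumeType instead of RAM and vCPUs.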
The three-node architecture of Controller Node + Network Node + Compute Node is the smallest OpenStack deployment.
Controller Node:
Basic services:
1. Identity (authentication) service, i.e. Keystone.
2. Image service: Glance.
3. Compute: Nova management services.
4. Networking: Neutron server, ML2 plug-in.
5. Dashboard: the graphical management interface (Horizon).
Support services:
1. Database: e.g. MySQL/MariaDB.
2. Message queue: Qpid, which provides a buffering area between the basic services and for database connections.
It needs at least one network card interface, used as the management interface.
Note: if you need to provide public cloud services to the Internet, the Controller Node should also have a network card providing Internet access.
Network Node:
It needs to provide the basic Networking services, which include:
ML2 plug-in (layer-2 plug-in),
OVS Agent (layer 2): OVS is used to build bridges, VLANs, VxLAN, and GRE tunnels, and to communicate with the other nodes,
Layer-3 Agent: creates NAT namespaces and builds NAT rules, iptables filtering rules, routing and forwarding rules, etc.,
DHCP Agent: provides the DHCP service, offering dynamic address allocation and management for the VMs.
It needs to provide three network card interfaces:
1.Management interface.
2. The interface for building a tunnel with the instance.
3. The interface for connecting to the external network.
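The layer-2 agents described above build bridges on the node and plug VM interfaces into them. Purely as an illustration (not Neutron's actual code; the bridge and tap names are hypothetical), here are the iproute2 commands such an agent might issue:

```python
# Illustrative only: the iproute2 commands a layer-2 agent might issue to
# build a Linux bridge and plug a VM's tap device into it. Names like
# "brq-demo" and "tap0" are hypothetical; Neutron's real agents do more.

def bridge_plug_commands(bridge: str, tap: str) -> list:
    return [
        f"ip link add name {bridge} type bridge",  # create the bridge
        f"ip link set {bridge} up",                # bring the bridge up
        f"ip link set {tap} master {bridge}",      # plug the VM's tap in
        f"ip link set {tap} up",                   # bring the tap up
    ]

for cmd in bridge_plug_commands("brq-demo", "tap0"):
    print(cmd)
```

The real agents additionally wire the bridge into VLAN/VxLAN/GRE segments and apply security-group rules, as described in the agent list above.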
Compute Node:
It is the basic node that runs VMs; there can be many of them.
The basic services it needs to run:
1. Compute (computing service): Nova. Hypervisor: KVM or QEMU.
2. Networking: ML2 plug-in.
It needs two network card interfaces:
1. Management interface.
2. An interface for constructing tunnels.
What components are necessary for the control node? For the control node, the most important components are nova (nova-api, nova-scheduler, nova-conductor, etc.), neutron-server (neutron's layer-2 and layer-3 services, metadata agent), Glance (image service), and Keystone (authentication service); cinder services are usually deployed as well, so a control node generally needs to run these basic server processes. Sometimes services such as manila and heat are also needed, and they too are deployed on the control node. I have not researched heat much, but in my practice nova, neutron, glance, keystone, and cinder must be configured.

For example, when we start a VM, we must first access nova-api, and nova-api accesses keystone to authenticate the credentials the user provided. After authentication passes, the request is processed further. To create a VM, nova-api sends the request to nova-scheduler, which queries the nova database to obtain the resource information of all current compute nodes, selects the best compute node according to its scheduling algorithm, and sends the create-VM request to the corresponding message queue channel. All the nova-compute services on compute nodes subscribed to this channel can receive and parse the request; if it is its own task, a node starts the VM-creation task locally. Of course, this process involves all the components mentioned above: it first accesses glance to obtain the image for the VM being created, then accesses neutron to create the necessary network, and finally accesses cinder to obtain the information about the cloud disk the VM needs to mount. At each stage nova-conductor is notified to write the VM's startup status into the nova database; this begins right at the start of boot, so that the VM's startup status can be seen on the front-end Dashboard. A status report is therefore required at every stage.
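The fan-out over the message channel described above can be modeled in a toy form: every subscribed nova-compute sees the message, but only the host the scheduler targeted acts on it. Host and VM names here are invented for the example.

```python
# Toy model of the message-queue fan-out described above: every
# nova-compute subscribed to the channel sees the message, but only the
# host the scheduler targeted boots the VM. All names are hypothetical.

def dispatch(message, hosts):
    """Each host inspects the message and asks: is this my task?"""
    booted = []
    for host in hosts:
        if message["target_host"] == host:
            booted.append((host, message["vm"]))  # only this host acts
    return booted

msg = {"vm": "vm-42", "target_host": "compute2"}
print(dispatch(msg, ["compute1", "compute2", "compute3"]))
```

In the real system the broker delivers the RPC to the targeted compute service's queue, but the "is it my task?" check described above is the same idea.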
Also, with neutron, work usually happens when a user creates a network, for example a layer-2 network or a virtual router: neutron-server notifies the neutron-agent on each compute node through the message queue to create the basic virtual bridges on that node.

In addition, I found in practice that rabbitmq is usually used for the message queue service, memcached for cache and sessions, and MariaDB for the database, and these three services are best deployed separately. Generally, when deploying a test environment, they are placed together on one host, with HAProxy or Nginx+Keepalived in front. This kind of deployment reduces the pressure on, and the bottlenecks in, the message queue, cache, and database as much as possible, so that a failed MySQL does not take every service down. MySQL runs as dual masters, but in use it is operated in a master-slave manner.

What components are usually required for a compute node? A compute node usually has to run nova-compute at a minimum. Taking the creation of a VM as an example: after nova-api on the control node verifies the user's identity, it hands the VM-creation task to nova-scheduler, which obtains each compute node's information, such as memory, disk, and CPU, by querying the nova database, finds the compute node with the most abundant resources, and through the message queue notifies the nova-compute subscribed to that message channel. nova-compute then needs to communicate with the underlying hypervisor on the node, which it basically does through libvirtd, because libvirtd has unified most of the underlying hypervisors. For an IaaS-level private cloud on Linux this is usually KVM+QEMU; if the hardware does not support virtualization, only an emulator like QEMU can be used, but that is out of the question for a server.
VMs can be created, but users must also be able to access them, so the neutron agents must be installed as well. Through network agents on the compute node such as linuxbridge-agent and ovs-agent, neutron accepts the virtual-network-creation commands that neutron-server sends over the message queue; a layer-2 bridge can thus be created on the compute node, with one end of the virtual network cable plugged into the VM and the other connected to the bridge. Of course, if a security group is used, a network namespace is needed between the VM and the virtual network to hold the security policy. The VM is usually connected not directly on the public bridge it attaches to, but through the tunnel network to the Neutron-server side, and the VM accesses the Internet through NAT.

Remote management of the VM is more complicated. It mainly involves nova-compute on the compute node and several main nova components on the control node: after complex exchanges among nova-consoleauth, nova-novncproxy, and nova-api, a token is generated and the URL is converted into one accessible from the public network, which is finally returned to the user's browser. The end user can then see the VM's management console; the neutron network components are not involved in this.
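The console-access flow above boils down to minting a one-time token and embedding it in a publicly reachable proxy URL. A hedged sketch of that last step (not nova's actual code; the host name is hypothetical, though `vnc_auto.html` is the page the noVNC proxy conventionally serves):

```python
# Illustrative sketch of the noVNC console flow described above: mint a
# one-time token and embed it in a publicly reachable proxy URL. The
# host name is hypothetical; real nova validates the token via
# nova-consoleauth before proxying to the VM's VNC server.
import uuid

def build_console_url(proxy_base: str) -> str:
    token = uuid.uuid4().hex          # one-time console access token
    return f"{proxy_base}/vnc_auto.html?token={token}"

url = build_console_url("http://public.example.com:6080")
print(url)
```

When the browser opens this URL, the proxy checks the token and only then bridges the websocket to the VM's VNC endpoint on the compute node, which is why neutron is not involved.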