Microservice Gateway Kong in Practice

On today's Internet, and the mobile Internet in particular, interaction between devices and platforms is built on service API interfaces. API-driven development is the most common way for teams to collaborate, and since APIs are the cornerstone of that interaction, their accuracy, completeness, and timeliness are key to development efficiency.

In production, creating, publishing, maintaining, monitoring, and protecting APIs at any scale, handling thousands of concurrent API calls, and managing traffic, authorization and access control, monitoring, and API versioning are all problems that must be solved when adopting a microservice architecture.

One solution to these problems is the API gateway. Its main responsibility is to provide a unified service entry point, making the microservices transparent to the front end, and to offer management functions such as routing, caching, security, filtering, and flow control.

In the network topology, the API gateway sits between clients and back-end services. In current Internet architectures, this position is usually occupied by reverse-proxy and load-balancing systems such as Nginx, so extending Nginx (combined with Lua) to implement API gateway functions is a common approach. The KONG project, developed by Mashape, is an open-source product based on this idea.
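To illustrate the idea of extending Nginx with Lua, the sketch below shows a minimal gateway-style check using lua-nginx-module's `access_by_lua_block`: requests without an API key header are rejected before they reach the upstream. The header name `apikey` and the upstream name `backend_service` are made up for the example; this is not KONG's actual implementation, just the underlying mechanism it builds on.

```nginx
# Minimal API-gateway-style check in plain Nginx + Lua (lua-nginx-module).
# Rejects requests lacking an API key header before proxying upstream.
location /api/ {
    access_by_lua_block {
        local key = ngx.req.get_headers()["apikey"]
        if not key then
            ngx.status = 401
            ngx.say("missing API key")
            return ngx.exit(401)
        end
    }
    proxy_pass http://backend_service;
}
```

KONG generalizes this pattern: its plugins hook the same Nginx request phases, with configuration stored in a database instead of the Nginx config file.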

KONG

KONG is a Lua application that runs inside Nginx via lua-nginx-module. Rather than compiling lua-nginx-module into Nginx directly, KONG is built on OpenResty and is distributed together with OpenResty, which already bundles lua-nginx-module.

KONG supports Cassandra and PostgreSQL as data stores, loading and persisting the data produced by Kong operations. At the same time, thanks to Nginx, OpenResty, and the Lua plugin system, KONG offers high performance, high scalability, and flexibility as an API gateway.

KONG provides an API for user-defined plugin extensions and ships with many plugins that implement API gateway functions:

  • Authentication
    HTTP basic authentication, key authentication, OAuth, JWT, HMAC, and LDAP

  • Security
    ACL, CORS (Cross-Origin Resource Sharing), dynamic SSL, IP restriction, and bot detection

  • Flow control
    Request rate limiting, response rate limiting, and request payload limiting

  • Analysis and monitoring
    Galileo, Datadog, Runscope

  • Data conversion
    Request transformation, response transformation, and request/response correlation

  • Logging
    TCP, UDP, HTTP, file logging, Syslog, StatsD, and Loggly
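As a concrete example, these plugins are enabled through KONG's Admin API. Assuming a Kong 0.9.x instance with its Admin API on port 8001 and an already-registered API named `example-api` (both the API name and the limit value here are made up for illustration), a rate-limiting plugin could be attached roughly like this; endpoint paths follow the 0.9-era Admin API and may differ in other versions:

```shell
# Attach the rate-limiting plugin to one API, allowing 100 requests per minute
$ curl -i -X POST http://localhost:8001/apis/example-api/plugins \
    --data "name=rate-limiting" \
    --data "config.minute=100"
```

Omitting the API name (`POST /plugins`) would apply the plugin globally to every API.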

Installation and Deployment

KONG offers multiple deployment options: Docker containers, AWS, CentOS/RedHat, Debian/Ubuntu, Heroku, OSX, Vagrant, and compilation from source.

Take Docker container deployment as an example:

  1. Start the database (Cassandra is used here; when using PostgreSQL, refer to the official documentation):

    $ docker run -d --name kong-database \
        -p 9042:9042 \
        cassandra:2.2
  2. Start the KONG service instance:

    $ docker run -d --name kong \
        --link kong-database:kong-database \
        -e "KONG_DATABASE=cassandra" \
        -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
        -p 8000:8000 \
        -p 8443:8443 \
        -p 8001:8001 \
        -p 7946:7946 \
        -p 7946:7946/udp \
        kong
  3. Check that KONG started correctly:

    $ curl http://127.0.0.1:8001
  4. To start using KONG, refer to the official 5-minute quickstart example.
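The quickstart boils down to registering an API through the Admin API (port 8001) and then calling it through the proxy port (8000). The sketch below assumes Kong 0.9.x, with `mockbin.com` as a stand-in upstream and the name `example-api` invented for illustration; field names (`upstream_url`, `request_host`) follow the 0.9-era Admin API:

```shell
# Register an upstream service, routed by the Host header "example.com"
$ curl -i -X POST http://localhost:8001/apis/ \
    --data "name=example-api" \
    --data "upstream_url=http://mockbin.com" \
    --data "request_host=example.com"

# Call it through KONG's proxy port
$ curl -i http://localhost:8000/ -H "Host: example.com"
```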

Configuration Management

Mashape officially provides the commercial online monitoring and analysis tool Galileo and the online API development tool Gelato for KONG.

There are also third-party tools on GitHub; those offering graphical configuration management include Django Kong Admin, Jungle, and Kong Dashboard. Kong Dashboard is briefly introduced below.

Kong Dashboard is implemented in JavaScript and can easily be installed and started via NPM or Docker:

NPM method:

# Install Kong Dashboard
npm install -g kong-dashboard

# Start Kong Dashboard
kong-dashboard start

# To start Kong Dashboard on a custom port
kong-dashboard start -p [port]

# To start Kong Dashboard with basic auth
kong-dashboard start -a user=password

# You can set basic auth user with environment variables
# Do not set -a parameter or this will be overwritten
set kong-dashboard-name=admin && set kong-dashboard-pass=password && kong-dashboard start

Docker method:

# Start Kong Dashboard
docker run -d -p 8080:8080 pgbi/kong-dashboard

# Start Kong Dashboard on a custom port
docker run -d -p [port]:8080 pgbi/kong-dashboard

# Start Kong Dashboard with basic auth
docker run -d -p 8080:8080 pgbi/kong-dashboard npm start -- -a user=password

Opening it in a browser shows the following interface:

The interface makes it convenient to manage APIs, users, and plugins.

Management API:

Add API:

Manage users:

Add users:

Manage plugins:

Main functions

As an API gateway, KONG's core function is to proxy client requests and process them through its rich set of plugins. In addition, KONG supports cluster deployment and CLI management, and provides a rich management API for third-party integration as well as a plugin-development API so users can write their own processing plugins. See the official documentation for details.
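For reference, a custom plugin in the KONG versions of this era is a Lua module extending the base plugin class; the skeleton below (the plugin name `my-plugin` is made up) sketches the shape of a handler that hooks the access phase, per the Kong 0.x plugin API:

```lua
-- Skeleton of a custom KONG plugin handler (Kong 0.x plugin API)
local BasePlugin = require "kong.plugins.base_plugin"

local MyHandler = BasePlugin:extend()

function MyHandler:new()
  MyHandler.super.new(self, "my-plugin")
end

-- Runs for every proxied request, before it is sent upstream
function MyHandler:access(conf)
  MyHandler.super.access(self)
  -- custom request-processing logic goes here
end

return MyHandler
```

A complete plugin would also ship a schema describing its configuration fields; see KONG's plugin development guide.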

Combining KONG with DC/OS

As shown in the figure above, there are two modes for integrating an API gateway with DC/OS. In mode 1, the API gateway sits outside the DC/OS cluster, and Marathon-LB is deployed on public nodes as an external load-balancing service. In mode 2, the API gateway runs inside the DC/OS cluster, deployed on public nodes (it can also be deployed on private nodes, in which case an additional Marathon-LB is required as an external load balancer), with Marathon-LB acting as the internal load-balancing service.

KONG can be integrated with a DC/OS cluster following either mode 1 or mode 2. When deploying in mode 2, whether a single gateway instance covers all services or instances are split per service can likewise be adjusted to actual needs.

The steps below deploy a single gateway instance covering all services; other scenarios can be adjusted as needed. In this scenario, a client request flows as follows:

Client request <—> Marathon-LB ("external") <—> KONG <—> Marathon-LB ("internal") <—> internal service

  1. Deploy Marathon-LB ("external"):

    dcos package install marathon-lb

  2. Deploy the database required by KONG:

    Deploy the Cassandra storage KONG needs (PostgreSQL can also be used). Note that Kong 0.9.x and earlier only support Cassandra 2.2; Cassandra 3.x support starts with Kong 0.10.

    The Marathon application JSON definition for Cassandra 2.2 is as follows:

    {
      "id": "/kong/cassandra2",
      "instances": 1,
      "cpus": 0.5,
      "mem": 2048,
      "disk": 0,
      "container": {
        "docker": {
          "image": "cassandra:2.2",
          "forcePullImage": false,
          "privileged": false,
          "portMappings": [
            {
              "containerPort": 9042,
              "protocol": "tcp",
              "hostPort": 9042,
              "servicePort": 10121
            }
          ],
          "network": "BRIDGE"
        },
        "type": "DOCKER",
        "volumes": [
          {
            "containerPath": "/var/lib/cassandra",
            "hostPath": "/data/cassandra/2.2",
            "mode": "RW"
          }
        ]
      },
      "healthChecks": [
        {
          "protocol": "TCP",
          "gracePeriodSeconds": 60,
          "intervalSeconds": 30,
          "timeoutSeconds": 30,
          "maxConsecutiveFailures": 3
        }
      ],
      "portDefinitions": [
        {
          "port": 10121,
          "protocol": "tcp",
          "labels": {}
        }
      ],
      "requirePorts": false
    }
  3. Deploy KONG:

    dcos marathon app add kong.json

    KONG's Marathon application JSON is defined as follows:

    {
      "id": "/kong",
      "cmd": "KONG_NGINX_DAEMON=\"off\" KONG_CLUSTER_ADVERTISE=$HOST:$PORT3 kong start",
      "cpus": 1,
      "mem": 512,
      "disk": 0,
      "instances": 1,
      "acceptedResourceRoles": ["*"],
      "container": {
        "type": "DOCKER",
        "volumes": [],
        "docker": {
          "image": "kong",
          "network": "BRIDGE",
          "portMappings": [
            {
              "containerPort": 8000,
              "hostPort": 0,
              "servicePort": 10001,
              "protocol": "tcp",
              "name": "proxy",
              "labels": {}
            },
            {
              "containerPort": 8001,
              "hostPort": 0,
              "servicePort": 10002,
              "protocol": "tcp",
              "name": "admin",
              "labels": {}
            },
            {
              "containerPort": 8443,
              "hostPort": 0,
              "servicePort": 10003,
              "protocol": "tcp",
              "name": "ssl",
              "labels": {}
            },
            {
              "containerPort": 7946,
              "hostPort": 0,
              "servicePort": 10004,
              "protocol": "tcp,udp",
              "name": "serf",
              "labels": {}
            }
          ],
          "privileged": false,
          "parameters": [],
          "forcePullImage": true
        }
      },
      "env": {
        "KONG_CASSANDRA_CONTACT_POINTS": "node.cassandra.l4lb.thisdcos.directory",
        "KONG_DATABASE": "cassandra"
      },
      "healthChecks": [
        {
          "protocol": "TCP",
          "portIndex": 1,
          "gracePeriodSeconds": 300,
          "intervalSeconds": 60,
          "timeoutSeconds": 20,
          "maxConsecutiveFailures": 3,
          "ignoreHttp1xx": false
        }
      ],
      "labels": {
        "HAPROXY_1_GROUP": "external",
        "HAPROXY_0_GROUP": "external"
      },
      "portDefinitions": [
        {
          "port": 10001,
          "protocol": "tcp",
          "name": "proxy",
          "labels": {}
        },
        {
          "port": 10002,
          "protocol": "tcp",
          "name": "admin",
          "labels": {}
        },
        {
          "port": 10003,
          "protocol": "tcp",
          "name": "ssl",
          "labels": {}
        },
        {
          "port": 10004,
          "protocol": "udp",
          "name": "serf-udp",
          "labels": {}
        }
      ]
    }
  4. Deploy Marathon-LB ("internal"):

    dcos package install --options=marathon-lb-internal.json marathon-lb

    The corresponding Marathon application JSON is defined as follows:

    {
      "marathon-lb": {
        "name": "marathon-lb-internal",
        "haproxy-group": "internal",
        "bind-http-https": false,
        "role": ""
      }
    }
  5. Deploy internal services

    Note that this scheme uses the "internal" Marathon-LB as the load balancer for internal application services, so when deploying an application service, the value of "HAPROXY_GROUP" in its labels should be set to "internal".
    This example uses three Nginx instances as sample services.
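    For example, the Marathon definition of each internal Nginx service would carry this label; the fragment below is illustrative only, and the service id `/nginx-demo` is made up:

    {
      "id": "/nginx-demo",
      "labels": {
        "HAPROXY_GROUP": "internal"
      }
    }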

  6. Deploy the Kong Dashboard management application:

    {
      "id": "/kong-dashboard",
      "instances": 1,
      "cpus": 0.1,
      "mem": 128,
      "disk": 0,
      "container": {
        "docker": {
          "image": "pgbi/kong-dashboard",
          "forcePullImage": false,
          "privileged": false,
          "portMappings": [
            {
              "containerPort": 8080,
              "protocol": "tcp",
              "servicePort": 10305,
              "labels": {
                "VIP_0": "/kong-dashboard:8080"
              }
            }
          ],
          "network": "BRIDGE"
        }
      },
      "healthChecks": [
        {
          "protocol": "HTTP",
          "path": "/",
          "gracePeriodSeconds": 60,
          "intervalSeconds": 60,
          "timeoutSeconds": 30,
          "maxConsecutiveFailures": 3,
          "ignoreHttp1xx": false
        }
      ],
      "labels": {
        "HAPROXY_GROUP": "internal"
      },
      "portDefinitions": [
        {
          "port": 10305,
          "protocol": "tcp",
          "labels": {}
        }
      ]
    }
  7. Check that the KONG gateway works correctly

    Add an API to the gateway through Kong Dashboard, then access it to verify the gateway behaves as expected.

  8. After deployment, the list of service instances is as follows:

  9. Conclusion: after deployment, external clients reach the API gateway KONG (192.168.1.81) through the external MLB (192.168.1.51:10031); KONG proxies requests to the internal MLB (192.168.1.80), which load-balances the three Nginx services (the microservice examples).

    Test with the following curl command:
