A canary (gray) release rolls out an application in small increments, sending only part of the traffic to the canary version. There are several ways to do this: the simplest is to route a fixed percentage of traffic to the new version; traffic can also be split by request header or cookie.
Annotation reference
A canary release is implemented by adding the canary annotations supported by the NGINX Ingress controller to an Ingress resource. The service needs two Ingresses: a regular one, and a canary one carrying the following annotations.

nginx.ingress.kubernetes.io/canary: "true" — marks this Ingress as the canary Ingress.

nginx.ingress.kubernetes.io/canary-by-header — if the request carries the named header with the value always, the request is forwarded to the backend defined by this Ingress; if the value is never, it is not forwarded (useful for rolling back to the old version); any other value causes the annotation to be ignored.

nginx.ingress.kubernetes.io/canary-by-header-value — complements canary-by-header by letting you match a custom header value, not limited to always or never. When the header's value matches the configured value, the request is forwarded to this Ingress's backend; any other value causes the annotation to be ignored.

nginx.ingress.kubernetes.io/canary-by-header-pattern — like canary-by-header-value, except that the header value is matched against a regular expression instead of a single fixed value. If canary-by-header-value is also set, this annotation is ignored.

nginx.ingress.kubernetes.io/canary-by-cookie — like canary-by-header, but applied to a cookie; only always and never are supported.

nginx.ingress.kubernetes.io/canary-weight — the percentage of traffic routed to the canary Ingress, in the range [0-100]. For example, a value of 10 sends 10% of traffic to the canary Ingress's backend.
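To make the evaluation order of these annotations concrete, here is a simplified Python model of the routing decision. It is not the controller's actual implementation, only a sketch of the documented precedence canary-by-header → canary-by-cookie → canary-weight; the function name and parameters are invented for illustration.

```python
import random

def route_to_canary(headers, cookies, *, header_name=None, header_value=None,
                    cookie_name=None, weight=0):
    """Simplified model of the canary routing precedence:
    canary-by-header -> canary-by-cookie -> canary-weight."""
    if header_name and header_name in headers:
        v = headers[header_name]
        if header_value is not None:
            # canary-by-header-value: only an exact match goes to the canary;
            # any other value falls through to the next rule
            if v == header_value:
                return True
        elif v == "always":
            return True
        elif v == "never":
            return False
        # other values: annotation is ignored, fall through
    if cookie_name:
        # canary-by-cookie only honours the values always and never
        if cookies.get(cookie_name) == "always":
            return True
        if cookies.get(cookie_name) == "never":
            return False
    # canary-weight: route the given percentage of the remaining traffic
    return random.randrange(100) < weight

# e.g. a request with header Region: cd and canary-by-header-value "cd"
print(route_to_canary({"Region": "cd"}, {}, header_name="Region",
                      header_value="cd", weight=0))
```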
Weight-based traffic splitting

Suppose a service is already running in production, and we want to gray-release the new version to a portion of users by traffic percentage, then gradually roll it out fully once it has run stably for a while.
First, apply the production v1 version:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
  labels:
    app: nginx
spec:
  replicas: 2
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      affinity: {}
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
      securityContext: {}
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - "echo 'nginx-v1' > /usr/share/nginx/html/index.html"
        resources:
          requests:
            cpu: 100m
            memory: 512Mi
          limits:
            cpu: 100m
            memory: 512Mi
        terminationMessagePolicy: File
        terminationMessagePath: /dev/terminating-log
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-v1
spec:
  type: ClusterIP
  selector:
    app: nginx
    version: v1
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-v1
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: nginx.twf.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-v1
          servicePort: 80
Next, apply the v2 version:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2
  labels:
    app: nginx
spec:
  replicas: 2
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: nginx
      version: v2
  template:
    metadata:
      labels:
        app: nginx
        version: v2
    spec:
      affinity: {}
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
      securityContext: {}
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - "echo 'nginx-v2' > /usr/share/nginx/html/index.html"
        resources:
          requests:
            cpu: 100m
            memory: 512Mi
          limits:
            cpu: 100m
            memory: 512Mi
        terminationMessagePolicy: File
        terminationMessagePath: /dev/terminating-log
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-v2
spec:
  type: ClusterIP
  selector:
    app: nginx
    version: v2
  ports:
  - port: 80
    targetPort: 80
Then apply a new Ingress that configures the canary policy:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-v2
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: nginx.twf.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-v2
          servicePort: 80
Testing shows that the v1:v2 traffic ratio is roughly 9:1:
[root@master canary]# for i in {1..10}; do curl -H "Host: nginx.twf.com" http://nginx.twf.com:32080; done
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v2
nginx-v1
nginx-v1
nginx-v1
nginx-v1
nginx-v1
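With only 10 requests the observed split is noisy; over a larger sample the ratio converges to the configured weight. A quick simulation of this, assuming each request is routed to the canary independently with 10% probability (which is how canary-weight behaves):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
WEIGHT = 10      # matches canary-weight: "10"

# Simulate 10,000 independent requests and count canary hits
hits = sum(random.randrange(100) < WEIGHT for _ in range(10_000))
print(f"canary share: {hits / 10_000:.1%}")  # close to the configured 10%
```

In a real cluster you can measure the same thing by running more requests and piping the responses through sort | uniq -c.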

Header-based traffic splitting

For this scenario, the application uses a header or cookie to identify different classes of users, and the Ingress is configured so that requests carrying the specified header or cookie are forwarded to the new version while all other requests still go to the old version, gray-releasing the new version to a subset of users.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-v2
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "Region"
    nginx.ingress.kubernetes.io/canary-by-header-value: "cd"
spec:
  rules:
  - host: nginx.twf.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-v2
          servicePort: 80
Testing confirms that requests carrying the header Region: cd are routed to v2, while all other traffic goes to v1:
[root@master canary]# curl -H "Host: nginx.twf.com" -H "Region: cd" http://nginx.twf.com:32080
nginx-v2
[root@master canary]# curl -H "Host: nginx.twf.com" -H "Region: sz" http://nginx.twf.com:32080
nginx-v1
[root@master canary]# curl -H "Host: nginx.twf.com" http://nginx.twf.com:32080
nginx-v1
Cookie-based traffic splitting
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-v2
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "user_from_cd"
spec:
  rules:
  - host: nginx.twf.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-v2
          servicePort: 80
Testing confirms that requests whose user_from_cd cookie is set to always are routed to v2, while all other traffic goes to v1:
[root@master canary]# curl -H "Host: nginx.twf.com" --cookie "user_from_cd=always" http://nginx.twf.com:32080
nginx-v2
[root@master canary]# curl -H "Host: nginx.twf.com" --cookie "user_from_sz=always" http://nginx.twf.com:32080
nginx-v1
[root@master canary]# curl -H "Host: nginx.twf.com" http://nginx.twf.com:32080
nginx-v1
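Once the canary has proven stable, a typical promotion path is to raise the weight in steps and finally retire the canary Ingress and the old deployment. A sketch of that workflow, assuming the resource names from the manifests above and the default namespace (these commands require a running cluster):

```shell
# Shift more traffic to v2 step by step
kubectl annotate ingress nginx-v2 \
  nginx.ingress.kubernetes.io/canary-weight="50" --overwrite

# Send all traffic to v2
kubectl annotate ingress nginx-v2 \
  nginx.ingress.kubernetes.io/canary-weight="100" --overwrite

# When satisfied, point the regular ingress (nginx-v1) at the nginx-v2
# service, then remove the canary ingress and the old deployment
kubectl delete ingress nginx-v2
kubectl delete deployment nginx-v1
```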