K8S Ingress in Practice
Installation
The NGINX Ingress Controller is an Ingress Controller implementation built around the Kubernetes Ingress resource; it stores its Nginx configuration in a ConfigMap.
To expose services externally through Ingress, an Ingress Controller has to be installed first, so here we install the NGINX Ingress Controller. The nodes running nginx-ingress must be reachable from outside the cluster so that domain names can resolve directly to them, which means nginx-ingress should bind the nodes' ports 80 and 443, for example via hostPort (a sketch follows below). For production, high availability usually means running several nginx-ingress replicas, putting an nginx/haproxy in front as the entry point, and using keepalived to expose a VIP on the edge nodes.
“An edge node is a node inside the cluster whose job is to expose services to the outside world; external clients reach in-cluster services through it, so it is the Endpoint where the cluster meets the outside.”
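For the hostPort approach, the relevant part of the controller Deployment's pod template looks roughly like the fragment below (an illustrative sketch only; the mandatory.yaml we apply in this walkthrough does not set hostPort, and we expose the controller with a NodePort Service instead):

spec:
  containers:
  - name: nginx-ingress-controller
    ports:
    - name: http
      containerPort: 80
      hostPort: 80    # bind the node's port 80 directly
    - name: https
      containerPort: 443
      hostPort: 443   # bind the node's port 443 directly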
This walkthrough uses nginx-ingress-controller:0.30.0.
[root@centos03 ~]# cd k8s
[root@centos03 k8s]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
Pull the image (first check which image the manifest references):
[root@centos03 k8s]# cat mandatory.yaml | grep image
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
Pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0 on every node:
[root@centos03 k8s]# docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
Create the resources:
[root@centos03 k8s]# kubectl apply -f mandatory.yaml
Warning: resource namespaces/ingress-nginx is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/ingress-nginx configured
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
Check the pod:
[root@centos03 k8s]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-54b86f8f7b-tblw7 0/1 Running 0 24s
Describe the pod:
[root@centos03 k8s]# kubectl describe pod -n ingress-nginx
Name: nginx-ingress-controller-54b86f8f7b-tblw7
Namespace: ingress-nginx
Priority: 0
Node: centos03/192.168.222.12
Start Time: Fri, 17 Dec 2021 07:25:32 +0800
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
pod-template-hash=54b86f8f7b
Annotations: cni.projectcalico.org/podIP: 10.233.72.69/32
cni.projectcalico.org/podIPs: 10.233.72.69/32
kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container nginx-ingress-controller
prometheus.io/port: 10254
prometheus.io/scrape: true
Status: Running
IP: 10.233.72.69
IPs:
IP: 10.233.72.69
Controlled By: ReplicaSet/nginx-ingress-controller-54b86f8f7b
Containers:
nginx-ingress-controller:
Container ID: docker://f2521807b8b5dd7a146e0dad1d604bec7375553aba4fbbf3c559997d552acbd0
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
Image ID: docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:b312c91d0de688a21075078982b5e3a48b13b46eda4df743317d3059fc3ca0d9
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--configmap=$(POD_NAMESPACE)/nginx-configuration
--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
--udp-services-configmap=$(POD_NAMESPACE)/udp-services
--publish-service=$(POD_NAMESPACE)/ingress-nginx
--annotations-prefix=nginx.ingress.kubernetes.io
State: Running
Started: Fri, 17 Dec 2021 07:27:07 +0800
Last State: Terminated
Reason: Error
Exit Code: 143
Started: Fri, 17 Dec 2021 07:26:17 +0800
Finished: Fri, 17 Dec 2021 07:27:06 +0800
Ready: False
Restart Count: 2
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=10s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=0s timeout=10s period=10s #success=1 #failure=3
Environment:
POD_NAME: nginx-ingress-controller-54b86f8f7b-tblw7 (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-sgg6b (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nginx-ingress-serviceaccount-token-sgg6b:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-serviceaccount-token-sgg6b
Optional: false
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 111s default-scheduler Successfully assigned ingress-nginx/nginx-ingress-controller-54b86f8f7b-tblw7 to centos03
Normal Created 67s (x2 over 110s) kubelet Created container nginx-ingress-controller
Normal Started 66s (x2 over 110s) kubelet Started container nginx-ingress-controller
Warning Unhealthy 30s (x6 over 90s) kubelet Readiness probe failed: Get "http://10.233.72.69:10254/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 27s kubelet Liveness probe failed: Get "http://10.233.72.69:10254/healthz": dial tcp 10.233.72.69:10254: i/o timeout (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 20s (x2 over 50s) kubelet Readiness probe failed: Get "http://10.233.72.69:10254/healthz": dial tcp 10.233.72.69:10254: i/o timeout (Client.Timeout exceeded while awaiting headers)
Warning FailedPreStopHook 17s (x2 over 67s) kubelet Exec lifecycle hook ([/wait-shutdown]) for Container "nginx-ingress-controller" in Pod "nginx-ingress-controller-54b86f8f7b-tblw7_ingress-nginx(3a8106ae-a2da-4ab3-92e4-30d0e8fae251)" failed - error: command '/wait-shutdown' exited with 137: , message: ""
Warning Unhealthy 17s (x5 over 87s) kubelet Liveness probe failed: Get "http://10.233.72.69:10254/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Normal Killing 17s (x2 over 67s) kubelet Container nginx-ingress-controller failed liveness probe, will be restarted
Normal Pulled 16s (x3 over 110s) kubelet Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0" already present on machine
[root@centos03 k8s]# kubectl get pod -n nginx-ingress
No resources found in nginx-ingress namespace.
[root@centos03 k8s]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-54b86f8f7b-tblw7 0/1 Running 3 2m39s
[root@centos03 k8s]# kubectl get svc -n ingress-nginx
No resources found in ingress-nginx namespace.
1. Expose the nginx-controller via NodePort
Download service-nodeport.yaml:
[root@centos03 k8s]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml
[root@centos03 k8s]# ls -l
total 12
-rw-r--r--. 1 root root 6635 Dec 17 07:19 mandatory.yaml
-rw-r--r--. 1 root root 471 Dec 17 07:50 service-nodeport.yaml
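For reference, the downloaded service-nodeport.yaml defines roughly the following NodePort Service (abridged; verify against the file you actually downloaded):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx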
Apply the manifest:
[root@centos03 k8s]# kubectl apply -f service-nodeport.yaml
service/ingress-nginx created
[root@centos03 k8s]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.233.27.195 <none> 80:31974/TCP,443:30417/TCP 15s
[root@centos03 k8s]# kubectl get pod -o wide -n ingress-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-54b86f8f7b-tblw7 1/1 Running 6 25m 10.233.72.69 centos03 <none> <none>
Check the network (IPVS rules):
[root@centos03 k8s]# ipvsadm -Ln
-bash: ipvsadm: command not found
[root@centos03 k8s]# yum -y install ipvsadm
[root@centos03 k8s]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.233.0.1:443 rr
-> 192.168.222.12:6443 Masq 1 20 0
TCP 10.233.38.206:15443 rr
-> 10.233.72.44:15443 Masq 1 0 0
TCP 10.233.40.5:9090 rr persistent 10800
Check the pod status:
[root@centos03 k8s]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-54b86f8f7b-tblw7 1/1 Running 6 53m
[root@centos03 k8s]#
If the ingress controller's status shows Running, the installation succeeded and you can now create your own Ingress resources.
[root@centos03 k8s]# kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-54b86f8f7b-tblw7 1/1 Running 6 64m
Check the ports exposed by the NGINX Ingress Controller:
[root@centos03 k8s]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.233.27.195 <none> 80:31974/TCP,443:30417/TCP 11m
[root@centos03 k8s]#
You can see that the two externally exposed ports are 31974 and 30417. These are the ports you will use when configuring a LoadBalancer: 80 --> 31974 and 443 --> 30417.
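A quick sanity check is to hit the HTTP NodePort on a node IP directly; before any Ingress rules exist, the controller's default backend should answer with a 404 (the IP and port here are the ones from this walkthrough, adjust to your own):

[root@centos03 k8s]# curl http://192.168.222.12:31974/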
Once the nginx-ingress-controller is deployed, it pulls all the Ingress rules in the cluster; when a request is forwarded through a worker node to the ingress controller, the controller routes it to the corresponding Service according to those rules.
Note in particular that the Kubernetes cluster network is private: from outside the cluster, a service can only be reached through a worker node IP.
For example, my cluster has 3 worker nodes with the following IPs:
10.231.123.142
10.231.123.143
10.231.123.147
Typically we put a LoadBalancer in front of these three worker nodes; it is assigned a VIP, and we then add a DNS record mapping the domain to that VIP:
mycms.com 10.128.161.21
DNS itself can also provide a form of load balancing, although a DNS-based approach is quite different from a dedicated LoadBalancer.
With this in place, when we visit https://mycms.com/api, the request goes through the LB to a worker node and is then forwarded by the ingress controller to the target Service.
Hands-on
Create a Service and its backing Deployment (using Tomcat as the example):
[root@centos03 k8s]# cat tomcat-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: ajp
    port: 8009
    targetPort: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat:7-alpine
        ports:
        - name: httpd
          containerPort: 8080
        - name: ajp
          containerPort: 8009
Deploy it:
[root@centos03 k8s]# kubectl apply -f tomcat-deploy.yaml
Wait for the pods to become ready:
[root@centos03 k8s]# kubectl get pod
NAME READY STATUS RESTARTS AGE
tomcat-deploy-765ff74cf-99vdv 0/2 PodInitializing 0 15s
tomcat-deploy-765ff74cf-bznct 0/2 PodInitializing 0 15s
tomcat-deploy-765ff74cf-gjhx9 0/2 PodInitializing 0 15s
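Before wiring up the Ingress, it can be worth confirming that the tomcat Service actually selects these pods (commands only; the output depends on your cluster):

[root@centos03 k8s]# kubectl get svc tomcat
[root@centos03 k8s]# kubectl get endpoints tomcat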
Add tomcat to ingress-nginx:
[root@centos03 k8s]# cat ingress-tomcat.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: tomcat.magedu.com
    http:
      paths:
      - path:
        backend:
          serviceName: tomcat
          servicePort: 8080
kubectl apply -f ingress-tomcat.yaml
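Optionally confirm that the rule was created and picked up by the controller (output omitted here):

[root@centos03 k8s]# kubectl get ingress ingress-tomcat
[root@centos03 k8s]# kubectl describe ingress ingress-tomcat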
Configure domain resolution; in this test environment we simply use the hosts file:
192.168.222.12 tomcat.magedu.com
Access it via the domain name:
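Since the controller is exposed through a NodePort here rather than hostPort or a LoadBalancer, include the HTTP NodePort when testing, for example (values taken from this walkthrough):

[root@centos03 k8s]# curl http://tomcat.magedu.com:31974/
[root@centos03 k8s]# curl -H 'Host: tomcat.magedu.com' http://192.168.222.12:31974/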