Background
Kubernetes is Google's open-source container cluster management system. It provides mechanisms for application deployment, maintenance, and scaling, and makes it easy to manage containerized applications running across a cluster. It is commonly abbreviated as k8s (there are 8 letters between the "k" and the "s").
Why use a cluster management tool as complex as Kubernetes? I started with Docker's built-in Swarm, which makes setting up a Docker cluster very quick and simple. But with the Swarm bundled in Docker 1.13, the VIP load balancing sometimes failed to map ports to the external network correctly, or reported the address as already in use. That is a real problem for high availability, and since I could not find a fix, I switched to k8s.
Test environment
CentOS 7.3, 64-bit
Installation
- yum-config-manager --add-repo https://docs.docker.com/v1.13/engine/installation/linux/repo_files/centos/docker.repo
- yum makecache fast
- yum -y install docker-engine-1.13.1
- yum install epel-release -y
- yum remove -y docker-engine*
- yum install -y kubernetes etcd docker flannel
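Optionally, verify that the packages landed before moving on (a quick sanity check; exact package names may differ slightly depending on your repos):
- rpm -qa | grep -E "kubernetes|etcd|flannel|docker"
- docker --version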
Modify the configuration files
Note: replace 10.135.163.237 below with your own server's IP. These sed commands point etcd, the API server, the kubelet, and flannel at the master's address instead of localhost, and trim the admission controller list.
- sed -i "s/localhost:2379/10.135.163.237:2379/g" /etc/etcd/etcd.conf
- sed -i "s/localhost:2380/10.135.163.237:2380/g" /etc/etcd/etcd.conf
- sed -i "s/10.135.163.237:2379/10.135.163.237:2379,http:\/\/127.0.0.1:2379/g" /etc/etcd/etcd.conf
- sed -i "s/127.0.0.1:2379/10.135.163.237:2379/g" /etc/kubernetes/apiserver
- sed -i "s/--insecure-bind-address=127.0.0.1/--insecure-bind-address=0.0.0.0/g" /etc/kubernetes/apiserver
- sed -i "s/--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,Servi
- ceAccount,ResourceQuota/--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQ
- uota/g" /etc/kubernetes/apiserver
- sed -i "s/--hostname-override=127.0.0.1/--hostname-override=10.135.163.237/g" /etc/kubernetes/kubelet
- sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/kubelet
- sed -i "s/--address=127.0.0.1/--address=0.0.0.0/g" /etc/kubernetes/kubelet
- sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/config
- sed -i "s/127.0.0.1:2379/10.135.163.237:2379/g" /etc/sysconfig/flanneld
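Before starting anything, it can help to review the edited values in one pass (grep simply prints every non-comment line of the touched files):
- grep -v "^#" /etc/etcd/etcd.conf /etc/kubernetes/apiserver /etc/kubernetes/kubelet /etc/kubernetes/config /etc/sysconfig/flanneld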
Modify the hosts file
- vi /etc/hosts
- 10.135.163.237 k8s_master
Add the network
- systemctl enable etcd.service
- systemctl start etcd.service
- etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
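You can read the key back to confirm flanneld will find it (/atomic.io/network is the default FLANNEL_ETCD_PREFIX in /etc/sysconfig/flanneld):
- etcdctl get /atomic.io/network/config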
Start the services
- service docker start
- for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet docker flanneld; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done;
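If everything started cleanly, the API server should answer on port 8080 and report healthy components (replace the IP with your master's):
- kubectl -s http://10.135.163.237:8080 get componentstatuses
- kubectl -s http://10.135.163.237:8080 get nodes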
First demo
Write the file a.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.alauda.cn/yubang/paas_base_test
        ports:
        - containerPort: 80
        command: ["/bin/bash", "/var/start.sh"]
        resources:
          limits:
            cpu: 0.5
            memory: 64Mi
Write the file b.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app
spec:
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30964
  type: NodePort
  selector:
    app: my-app
Create the services
- kubectl create -f a.yaml --validate
- kubectl create -f b.yaml --validate
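To check the result, list the objects and hit the NodePort (the curl test assumes the image's /var/start.sh really serves HTTP on port 80):
- kubectl get deployments
- kubectl get pods -l app=my-app
- kubectl get svc my-app-svc
- curl http://10.135.163.237:30964/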
Delete the services
- kubectl delete -f a.yaml
- kubectl delete -f b.yaml
Add a worker node
Install the software
- yum-config-manager --add-repo https://docs.docker.com/v1.13/engine/installation/linux/repo_files/centos/docker.repo
- yum makecache fast
- yum -y install docker-engine-1.13.1
- yum install epel-release -y
- yum remove -y docker-engine*
- yum install -y kubernetes docker flannel
Modify the configuration files (10.135.163.237 is the master node's IP, 139.199.0.29 is the current node's IP)
- sed -i "s/--hostname-override=127.0.0.1/--hostname-override=139.199.0.29/g" /etc/kubernetes/kubelet
- sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/kubelet
- sed -i "s/--address=127.0.0.1/--address=0.0.0.0/g" /etc/kubernetes/kubelet
- sed -i "s/127.0.0.1:8080/10.135.163.237:8080/g" /etc/kubernetes/config
- sed -i "s/127.0.0.1:2379/10.135.163.237:2379/g" /etc/sysconfig/flanneld
- sed -i "s/--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,Security
- ContextDeny,ServiceAccount,ResourceQuota/--admission_control=NamespaceLifecycle,Na
- mespaceExists,LimitRanger,ResourceQuota/g" /etc/kubernetes/apiserver
Start the services
- service docker start
-
- for SERVICES in kube-proxy kubelet docker flanneld; do
- systemctl restart $SERVICES
- systemctl enable $SERVICES
- systemctl status $SERVICES
- done;
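To confirm that flanneld on the worker obtained a subnet from etcd, check its subnet file and compare it with the docker0 address (subnet.env is flanneld's default output file; the exact subnet will differ per node):
- cat /run/flannel/subnet.env
- ip addr show docker0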
View the nodes on the master server
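On the master, the new node should appear under the name set by --hostname-override (139.199.0.29 here) and turn Ready after a short while:
- kubectl get nodes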
Restart the service (to rejoin the cluster)
- systemctl restart kube-apiserver.service
Delete a node
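A node can be removed from the cluster on the master with kubectl, using the node name shown by kubectl get nodes (139.199.0.29 in this setup):
- kubectl delete node 139.199.0.29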