Kubernetes Study (1)-----Setting Up a Kubernetes Environment

4610 reads, 0 comments, 2017-08-31, frankzfz
Category: Cloud Computing

1. Setting up the Kubernetes environment

1.1 Environment preparation

1.1.1 Basic configuration

To avoid conflicts with the iptables rules Docker manages, disable the firewall on the nodes:

systemctl stop firewalld

systemctl disable firewalld

To keep all nodes' clocks in sync, install NTP on every node:

yum -y install ntp

systemctl start ntpd

systemctl enable ntpd

1.1.2 Node information

The test setup uses two CentOS 7 machines (CentOS Linux release 7.2.1511 (Core), kernel 3.10.0-327.22.2.el7.x86_64). Node information:

Master: public ip: master1    private ip: 10.9.178.109

Node1:  public ip: node1      private ip: 10.9.106.37

 

Software to deploy on the master: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, docker, etcd, flanneld

Software to deploy on Node1: kube-proxy, kubelet, flanneld, docker

1.1.3 Building and installing from source

1.1.3.1 Building and installing Kubernetes

Go 1.8.3 is used here; download the tarball with wget:

wget

tar -C /usr/local/ -xzf go1.8.3.linux-amd64.tar.gz

export PATH=$PATH:/usr/local/go/bin

export GOPATH=/data/k8s/go

go get -d k8s.io/kubernetes fetches the Kubernetes source (here it ended up under the /root directory); the binaries are built from this source tree, and the actual build directory is /data/k8s/kubernetes/_output.

Build with make. When the build finishes, an _output directory is created; copy the binaries under _output/bin/ into /usr/bin/. On machines with little memory, the build may abort with an out-of-memory message.

kubelet --version

Kubernetes v1.8.0-alpha.0.737+562e721ece8a16

1.1.3.2 Installing etcd

Fetch the etcd source with git:

git clone

Enter the etcd directory and run ./build, then:

cp etcd /usr/bin/etcd

Query etcd's version with the following command:

curl

{"etcdserver":"3.1.0","etcdcluster":"3.1.0"}
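The reply above can be parsed with plain shell tools. A minimal sketch, assuming etcd's default client port (the URL in the original post is elided, so the curl is shown only as a comment):

```shell
# Assumed endpoint; the actual URL is elided in the original:
#   curl http://127.0.0.1:2379/version
# Extract the server version from the JSON reply shown above.
reply='{"etcdserver":"3.1.0","etcdcluster":"3.1.0"}'
server_ver=$(echo "$reply" | sed -n 's/.*"etcdserver":"\([^"]*\)".*/\1/p')
echo "etcd server version: $server_ver"
```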

1.1.3.3 Installing flannel

git clone     

Build flannel: under $GOPATH (/data/k8s/go), create the directory github.com/coreos, and in it clone flannel with: git clone

Then build inside the flannel directory with the following command: CGO_ENABLED=1 make dist/flannel
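The directory layout step above can be sketched as a small script. $GOPATH is /data/k8s/go in this guide; the sketch falls back to a temporary directory if GOPATH is unset, and the repository URL stays elided as in the original:

```shell
# Lay out the source tree flannel's Makefile expects under $GOPATH.
gopath="${GOPATH:-$(mktemp -d)}"
mkdir -p "$gopath/src/github.com/coreos"
cd "$gopath/src/github.com/coreos"
# git clone ...              (flannel repository URL elided in the original)
# cd flannel
# CGO_ENABLED=1 make dist/flannel
echo "layout ready: $gopath/src/github.com/coreos"
```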

1.1.3.4 Installing Docker

The master and node1 run different Docker versions: the master uses 1.12 and node1 uses 17.06.0-ce. The 17.06.0-ce configuration files differ substantially from those of older versions. Docker was installed from RPM packages; installation and download documentation:

Note: every component installed from source above must be given its own systemd service file.

1.2 Master configuration

1.2.1 Etcd

1.2.1.1 Editing the etcd configuration file

Edit etcd's configuration file /etc/etcd/etcd.conf; the main items to set are:

ETCD_NAME=default

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_CLIENT_URLS=

ETCD_ADVERTISE_CLIENT_URLS=    # the IP address here is the master's address
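The two URL values above are elided in the original. Assuming etcd's default client port 2379, a filled-in example would look like this (both addresses are assumptions, not from the original):

```shell
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.9.178.109:2379"
```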

1.2.1.2 Adding etcd.service


cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

1.2.1.3 Configuring the etcd network

etcdctl mk /atomic.io/network/config '{"Network":"192.168.0.0/16"}'

1.2.1.4 etcd startup arguments

/usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=

1.2.2 Flanneld

1.2.2.1 flanneld configuration


cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

The subnet flanneld assigned on the master:

cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=192.168.16.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1404"
DOCKER_NETWORK_OPTIONS=" --bip=192.168.16.1/24 --ip-masq=true --mtu=1404"

[root@10-9-178-109 ~]# cat /etc/sysconfig/docker-network
# /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS=

ls /run/flannel/
docker  subnet.env

[root@10-9-178-109 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=192.168.0.0/16
FLANNEL_SUBNET=192.168.16.1/24
FLANNEL_MTU=1404
FLANNEL_IPMASQ=false
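The mk-docker-opts.sh step in the unit file above roughly translates flannel's subnet.env into Docker daemon options. A minimal sketch of that derivation, with the values copied from the master's output shown above:

```shell
# Sketch of what mk-docker-opts.sh derives from flannel's subnet.env.
subnet_env=$(mktemp)
cat > "$subnet_env" <<'EOF'
FLANNEL_NETWORK=192.168.0.0/16
FLANNEL_SUBNET=192.168.16.1/24
FLANNEL_MTU=1404
FLANNEL_IPMASQ=false
EOF
. "$subnet_env"
# Docker's bridge gets flannel's per-host subnet and MTU.
DOCKER_OPT_BIP="--bip=${FLANNEL_SUBNET}"
DOCKER_OPT_MTU="--mtu=${FLANNEL_MTU}"
echo "DOCKER_NETWORK_OPTIONS=\" ${DOCKER_OPT_BIP} --ip-masq=true ${DOCKER_OPT_MTU}\""
```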

1.2.2.2 flanneld startup arguments

    /usr/bin/flanneld -etcd-endpoints= -etcd-prefix=/atomic.io/network

1.2.3 Kubernetes configuration

Copy the generated binaries into /usr/bin/: kube-apiserver, kube-controller-manager, kube-scheduler, and kubelet from the kubernetes/_output/bin directory.

1.2.3.1 Creating the configuration files

1.2.3.1.1 apiserver

The apiserver file under /etc/kubernetes/:

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--insecure-port=8080"

KUBE_ETCD_SERVERS="--etcd-servers="    # the master's IP

KUBE_ADVERTISE_ADDR="--advertise-address=10.9.178.109"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.9.0.0/16"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

KUBE_API_ARGS=""
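The --etcd-servers value above is elided in the original. Assuming etcd runs on the master with its default client port, it would read (the URL is an assumption):

```shell
KUBE_ETCD_SERVERS="--etcd-servers=http://10.9.178.109:2379"
```

Note also that --service-cluster-ip-range=10.9.0.0/16 overlaps the nodes' private 10.9.x.x addresses; a range disjoint from the host network is usually safer.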

1.2.3.1.2 /etc/kubernetes/config

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=10.9.178.109:8080"

1.2.3.1.3 /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS=""

1.2.3.1.4 /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"

KUBELET_HOSTNAME="--hostname-override=10.9.178.109"

KUBELET_API_SERVER="--api-servers="

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

KUBELET_ARGS=""

1.2.3.1.5 Other configuration files

Neither controller-manager nor scheduler needs any additional configuration.
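kube-scheduler.service does reference /etc/kubernetes/scheduler and ${KUBE_SCHEDULER_ARGS}; because EnvironmentFile uses a leading "-", the file may be absent, but an empty placeholder keeps the setup explicit (a sketch, not from the original):

```shell
# /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS=""
```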

1.2.3.2 Service unit files

1.2.3.2.1 /usr/lib/systemd/system/kube-apiserver.service


[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
     $KUBE_LOGTOSTDERR \
     $KUBE_LOG_LEVEL \
     $KUBE_ETCD_SERVERS \
     $KUBE_API_ADDRESS \
     $KUBE_API_PORT \
     $KUBE_ALLOW_PRIV \
     $KUBE_SERVICE_ADDRESSES \
     $KUBE_ADMISSION_CONTROL \
     $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
1.2.3.2.2 /usr/lib/systemd/system/kube-controller-manager.service


[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager ${KUBE_LOGTOSTDERR} \
                                ${KUBE_LOG_LEVEL} \
                                ${KUBE_MASTER} \
                                ${KUBE_CONTROLLER_MANAGER_ARGS}
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
1.2.3.2.3  /usr/lib/systemd/system/kube-scheduler.service


[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler ${KUBE_LOGTOSTDERR} \
                        ${KUBE_LOG_LEVEL} \
                        ${KUBE_MASTER} \
                        ${KUBE_SCHEDULER_ARGS}
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
1.2.3.2.4 /usr/lib/systemd/system/kubelet.service


[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
     $KUBE_LOGTOSTDERR \
     $KUBE_LOG_LEVEL \
     $KUBELET_API_SERVER \
     $KUBELET_ADDRESS \
     $KUBELET_PORT \
     $KUBELET_HOSTNAME \
     $KUBE_ALLOW_PRIV \
     $KUBELET_POD_INFRA_CONTAINER \
     $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the services with the following script:

#!/bin/bash
systemctl daemon-reload
for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet; do
    systemctl restart $svc
    systemctl enable $svc
    systemctl status $svc
done
If startup succeeds, processes like the following should be visible:
/usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers= --insecure-bind-address=0.0.0.0 --insecure-port=8080 --allow-privileged=false --service-cluster-ip-range=10.9.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota

/usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=10.9.178.109:8080

/usr/bin/kube-scheduler --logtostderr=true --v=0 --master=10.9.178.109:8080

/usr/bin/kubelet --logtostderr=true --v=0 --api-servers= --address=0.0.0.0 --hostname-override=10.9.178.109 --allow-privileged=false --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest

After a normal startup, kubectl get nodes shows the master's status:
10.9.178.109   Ready     1d        v1.8.0-alpha.0.737+562e721ece8a16

1.3 Node configuration

1.3.1 flanneld

1.3.1.1 flanneld configuration


cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS=""

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Start flanneld with: systemctl start flanneld

1.3.1.2 Creating the network configuration

Create a config.json file:

{
    "Network": "192.168.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan",
        "VNI": 7890
    }
}

 

curl -L /v2/keys/atomic.io/network/config -XPUT --data-urlencode

Once the key is created, the configuration can be read back on the master with:

etcdctl get /atomic.io/network/config

{
    "Network": "192.168.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan",
        "VNI": 7890
    }
}
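The write-and-verify steps above can be sketched as a small script. The etcd endpoint is elided in the original, so the PUT stays commented with a placeholder host:

```shell
# Write the VXLAN configuration shown above to a file and sanity-check it
# before PUTting it into etcd's v2 key store.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
    "Network": "192.168.0.0/16",
    "SubnetLen": 24,
    "Backend": {
        "Type": "vxlan",
        "VNI": 7890
    }
}
EOF
# curl -L http://<etcd-host>:2379/v2/keys/atomic.io/network/config -XPUT --data-urlencode value@"$cfg"
grep -q '"Type": "vxlan"' "$cfg" && echo "config ok"
```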

After flanneld starts, the assigned address range can be seen:

[root@10-9-106-37 ~]# cat /run/flannel/docker

DOCKER_OPT_BIP="--bip=192.168.72.1/24"

DOCKER_OPT_IPMASQ="--ip-masq=true"

DOCKER_OPT_MTU="--mtu=1404"

DOCKER_NETWORK_OPTIONS="--bip=192.168.72.1/24 --mtu=1404  --bip=192.168.72.1/24 --ip-masq=true --mtu=1404"

[root@10-9-106-37 ~]# cat /run/flannel/subnet.env

FLANNEL_NETWORK=192.168.0.0/16

FLANNEL_SUBNET=192.168.72.1/24

FLANNEL_MTU=1404

FLANNEL_IPMASQ=false

The node's subnet lease can also be queried from etcd:

etcdctl get /atomic.io/network/subnets/192.168.72.0-24

1.3.2 Kubernetes configuration

After building Kubernetes, copy the kubelet and kube-proxy binaries from the k8s.io/kubernetes/_output/bin directory into /usr/bin:

1.3.2.1 kubelet and kube-proxy configuration files

  cat /etc/kubernetes/kubelet

# --address=0.0.0.0: The IP address for the Kubelet to serve on (set to 0.0.0.0 for all interfaces)

KUBELET__ADDRESS="--address=0.0.0.0"

 

# --port=10250: The port for the Kubelet to serve on. Note that "kubectl logs" will not work if you set this flag.

KUBELET_PORT="--port=10250"

 

# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.

KUBELET_HOSTNAME="--hostname-override=10.9.106.37"

 

# --api-servers=[]: List of Kubernetes API servers for publishing events,

# and reading pods and services. (ip:port), comma separated.

KUBELET_API_SERVER="--api-servers="

 

# pod infrastructure container: this option names the base image for pods;
# every container in a pod shares this container's network namespace
# (container network mode). If no image is specified here, the default
# gcr.io/google_containers/pause-amd64 is pulled from Google, which may fail
# behind a firewall. In that case, download a copy from Docker Hub and re-tag
# it with docker tag as gcr.io/google_containers/pause-amd64. If the Google
# registry is reachable, the option can be left unset and the default image
# is used automatically.
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

 

# Add your own!

KUBELET_ARGS=""
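A sketch of the re-tag workaround described in the comments above; the Docker Hub mirror repository and tag are assumptions, not from the original:

```shell
# Pull a mirror of the pause image from Docker Hub, then re-tag it under the
# gcr.io name kubelet expects (mirror repo and tag are assumptions).
docker pull mirrorgooglecontainers/pause-amd64:3.0
docker tag mirrorgooglecontainers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
```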

 

cat /etc/kubernetes/config

# --logtostderr=true: log to standard error instead of files

KUBE_LOGTOSTDERR="--logtostderr=true"

 

# --v=0: log level for V logs

KUBE_LOG_LEVEL="--v=0"

 

# --allow-privileged=false: If true, allow privileged containers.

KUBE_ALLOW_PRIV="--allow-privileged=false"

 

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=10.9.178.109:8080"

1.3.2.2 Service unit files


cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kube-proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet ${KUBE_LOGTOSTDERR} \
                    ${KUBE_LOG_LEVEL} \
                    ${KUBELET__ADDRESS} \
                    ${KUBELET_PORT} \
                    ${KUBELET_HOSTNAME} \
                    ${KUBELET_API_SERVER} \
                    ${KUBE_ALLOW_PRIV} \
                    ${KUBELET_POD_INFRA_CONTAINER} \
                    ${KUBELET_ARGS}
Restart=on-failure

[Install]
WantedBy=multi-user.target

1.3.2.3 Startup

systemctl daemon-reload

#!/bin/bash

for svc in kube-proxy kubelet; do

    systemctl restart $svc

    systemctl enable $svc

    systemctl status $svc

done

Normal process list after startup:

/usr/bin/kube-proxy --logtostderr=true --v=0 --master=10.9.178.109:8080

/usr/bin/kubelet --logtostderr=true --v=0 --address=0.0.0.0 --port=10250 --hostname-override=10.9.106.37 --api-servers= --allow-privileged=false

/usr/bin/flanneld -etcd-endpoints= -etcd-prefix=/atomic.io/network

Once both sides are running normally, the node's information can be retrieved on the master with kubectl get nodes.
