1. Setting up a Kubernetes environment
1.1 Environment preparation
1.1.1 Basic configuration
To avoid conflicts with the iptables rules that Docker manages, disable the firewall on the nodes:
systemctl stop firewalld
systemctl disable firewalld
To keep clocks consistent across nodes, install NTP on all of them:
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
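Once ntpd is running, it is worth confirming that each node has actually synchronized. A quick check using the standard ntp tools (nothing here is specific to this setup):
# list the peers ntpd is using; the line starting with '*' is the current sync source
ntpq -p
# ntpstat exits 0 once the local clock is synchronized
ntpstat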
1.1.2 Node information
The tests use two CentOS 7 machines running CentOS Linux release 7.2.1511 (Core) with kernel 3.10.0-327.22.2.el7.x86_64. Node details:
Master: public IP: master1, private IP: 10.9.178.109
Node1: public IP: node1, private IP: 10.9.106.37
Software to deploy on the master: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, docker, etcd, flanneld
Software to deploy on node1: kube-proxy, kubelet, flanneld, docker
1.1.3 Building and installing from source
1.1.3.1 Building and installing Kubernetes
Go 1.8.3 is used here; download the binary tarball with wget and unpack it into /usr/local (the URL below is the standard Go release location):
wget https://storage.googleapis.com/golang/go1.8.3.linux-amd64.tar.gz
tar -C /usr/local/ -xzf go1.8.3.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/data/k8s/go
Fetch the Kubernetes source with go get -d k8s.io/kubernetes (the -d flag downloads without building). In this setup the checkout landed under /root, and the actual build directory was /data/k8s/kubernetes/_output.
Build by simply running make. When the build completes, an _output directory appears in the source tree; copy the binaries from _output/bin/ to /usr/bin/. Note that on machines with little memory the build may abort with out-of-memory errors.
kubelet --version
Kubernetes v1.8.0-alpha.0.737+562e721ece8a16
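If the build does abort from memory pressure, one common workaround is to add temporary swap before rerunning make. A minimal sketch (the 4 GB size is an arbitrary choice, not taken from the original setup):
# create and enable a 4 GB swap file for the duration of the build
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
make
# remove the swap file once the build succeeds
swapoff /swapfile && rm -f /swapfile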
1.1.3.2 Installing etcd
Fetch the etcd source with git:
git clone https://github.com/coreos/etcd
Enter the etcd directory and run ./build, then copy the resulting binary into place:
cp bin/etcd /usr/bin/etcd
The etcd version can then be queried with:
curl http://127.0.0.1:2379/version
{"etcdserver":"3.1.0","etcdcluster":"3.1.0"}
1.1.3.3 Installing flannel
Building flannel: under $GOPATH (/data/k8s/go), create the directory src/github.com/coreos and clone flannel into it:
git clone https://github.com/coreos/flannel
Then, inside the flannel directory, build with:
CGO_ENABLED=1 make dist/flanneld
1.1.3.4 Installing Docker
The master and node1 run different Docker versions: the master uses 1.12 while node1 uses 17.06.0-ce, whose configuration files differ considerably from the older releases. Docker itself was installed from rpm packages, following Docker's installation and download documentation.
Note: every component installed from source above must be given a hand-written systemd service file.
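For reference, such a hand-written unit generally follows the skeleton below. This is only a sketch with placeholder names; the real units for etcd, flanneld, and the Kubernetes daemons are given in full in the following sections:
[Unit]
Description=Example daemon built from source
After=network-online.target
Wants=network-online.target

[Service]
# the leading '-' makes the environment file optional
EnvironmentFile=-/etc/sysconfig/example
ExecStart=/usr/bin/example $EXAMPLE_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target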
1.2 Master configuration
1.2.1 Etcd
1.2.1.1 Editing the etcd configuration file
Edit /etc/etcd/etcd.conf; the key settings are:
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.9.178.109:2379" // this IP address is the master's address
1.2.1.2 Adding etcd.service
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
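With the unit file in place, reload systemd and bring etcd up (standard systemctl usage):
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
systemctl status etcd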
1.2.1.3 Configuring the flannel network in etcd
etcdctl mk /atomic.io/network/config '{"Network":"192.168.0.0/16"}'
1.2.1.4 etcd startup arguments
/usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379
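To confirm etcd is healthy and reachable at the advertised address, either of the following should succeed (assuming the client port 2379 used above):
etcdctl cluster-health
curl http://10.9.178.109:2379/health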
1.2.2 Flanneld
1.2.2.1 flanneld configuration
cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
The subnet assigned to flanneld on the master:
cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=192.168.16.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1404"
DOCKER_NETWORK_OPTIONS=" --bip=192.168.16.1/24 --ip-masq=true --mtu=1404"
[root@10-9-178-109 ~]# cat /etc/sysconfig/docker-network
# /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS=
ls /run/flannel/
docker  subnet.env
[root@10-9-178-109 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=192.168.0.0/16
FLANNEL_SUBNET=192.168.16.1/24
FLANNEL_MTU=1404
FLANNEL_IPMASQ=false
1.2.2.2 flanneld startup arguments
/usr/bin/flanneld -etcd-endpoints=http://10.9.178.109:2379 -etcd-prefix=/atomic.io/network
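Because mk-docker-opts.sh writes /run/flannel/docker, docker must be (re)started after flanneld so that it picks up the --bip/--mtu options; a typical sequence (assuming docker's unit reads DOCKER_NETWORK_OPTIONS from that file):
systemctl daemon-reload
systemctl restart flanneld
systemctl restart docker
# docker0 should now have an address inside the flannel subnet
ip addr show docker0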
1.2.3 Kubernetes configuration
Put the built binaries into /usr/bin/: copy kube-apiserver, kube-controller-manager, kube-scheduler, and kubelet from kubernetes/_output/bin to /usr/bin.
1.2.3.1 Creating the configuration files
1.2.3.1.1 The apiserver file under /etc/kubernetes/
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.9.178.109:2379" // the master's IP
KUBE_ADVERTISE_ADDR="--advertise-address=10.9.178.109"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.9.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
1.2.3.1.2 /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=10.9.178.109:8080"
1.2.3.1.3 /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
1.2.3.1.4 /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=10.9.178.109"
KUBELET_API_SERVER="--api-servers=http://10.9.178.109:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
1.2.3.1.5 Other configuration files: controller-manager and scheduler need no further settings.
1.2.3.2 Service unit files
1.2.3.2.1 /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
1.2.3.2.2 /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager ${KUBE_LOGTOSTDERR} \
            ${KUBE_LOG_LEVEL} \
            ${KUBE_MASTER} \
            ${KUBE_CONTROLLER_MANAGER_ARGS}
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
1.2.3.2.3 /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler ${KUBE_LOGTOSTDERR} \
            ${KUBE_LOG_LEVEL} \
            ${KUBE_MASTER} \
            ${KUBE_SCHEDULER_ARGS}
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
1.2.3.2.4 /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
#!/bin/bash
# reload unit definitions before starting the newly installed services
systemctl daemon-reload
for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet; do
    systemctl restart $svc
    systemctl enable $svc
    systemctl status $svc
done
After a successful start, the following processes should be running:
/usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://10.9.178.109:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --allow-privileged=false --service-cluster-ip-range=10.9.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota
/usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=10.9.178.109:8080
/usr/bin/kube-scheduler --logtostderr=true --v=0 --master=10.9.178.109:8080
/usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://10.9.178.109:8080 --address=0.0.0.0 --hostname-override=10.9.178.109 --allow-privileged=false --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest
Once everything is up, node information can be retrieved with kubectl get nodes; the master shows up as:
10.9.178.109 Ready 1d v1.8.0-alpha.0.737+562e721ece8a16
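The control-plane components themselves can also be checked with a standard kubectl command:
kubectl get componentstatuses
# scheduler, controller-manager, and etcd-0 should all report Healthy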
1.3 Node configuration
1.3.1 flanneld
1.3.1.1 flanneld configuration
cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://10.9.178.109:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
Start flanneld:
systemctl start flanneld
1.3.1.2 Creating the network configuration
Create a config.json file:
{
"Network": "192.168.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 7890
}
}
curl -L http://10.9.178.109:2379/v2/keys/atomic.io/network/config -XPUT --data-urlencode value@config.json
Once created, the configuration can be read back on the master with:
etcdctl get /atomic.io/network/config
{
"Network": "192.168.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 7890
}
}
After starting flanneld, the subnet it was assigned can be seen:
[root@10-9-106-37 ~]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=192.168.72.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1404"
DOCKER_NETWORK_OPTIONS="--bip=192.168.72.1/24 --ip-masq=true --mtu=1404"
[root@10-9-106-37 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=192.168.0.0/16
FLANNEL_SUBNET=192.168.72.1/24
FLANNEL_MTU=1404
FLANNEL_IPMASQ=false
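On the master, flannel's per-node subnet leases can also be listed directly from etcd (etcdctl v2 syntax); expect one key per node, such as the master's 192.168.16.0-24 and node1's lease queried below:
etcdctl ls /atomic.io/network/subnets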
etcdctl get /atomic.io/network/subnets/192.168.72.0-24
1.3.2 Kubernetes configuration
After building Kubernetes, copy the kubelet and kube-proxy binaries from k8s.io/kubernetes/_output/bin to /usr/bin:
1.3.2.1 kubelet and kube-proxy configuration files
cat /etc/kubernetes/kubelet
# --address=0.0.0.0: The IP address for the Kubelet to serve on (set to 0.0.0.0 for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# --port=10250: The port for the Kubelet to serve on. Note that "kubectl logs" will not work if you set this flag.
KUBELET_PORT="--port=10250"
# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
KUBELET_HOSTNAME="--hostname-override=10.9.106.37"
# --api-servers=[]: List of Kubernetes API servers for publishing events,
# and reading pods and services. (ip:port), comma separated.
KUBELET_API_SERVER="--api-servers=http://10.9.178.109:8080"
# pod infrastructure container: this option names the base image for pods. Each
# pod gets one container created from this image, and the pod's other containers
# share its network (they run in that container's network namespace). If no image
# is specified, the default gcr.io/google_containers/pause-amd64 is pulled from
# Google, which may fail behind the firewall; in that case pull a copy from Docker
# Hub and retag it with docker tag into the gcr.io/google_containers/pause-amd64
# form (see the sketch after this config block). With the default left in place,
# the container is created from that image automatically.
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
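A sketch of the retagging workaround mentioned above; the Docker Hub source image is a placeholder, and any reachable mirror of the pause image will do:
# pull the pause image from a reachable mirror (placeholder name)
docker pull docker.io/kubernetes/pause
# retag it under the gcr.io name the kubelet expects; the exact tag depends on
# the kubelet build, check its --pod-infra-container-image default
docker tag docker.io/kubernetes/pause gcr.io/google_containers/pause-amd64:3.0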
cat /etc/kubernetes/config
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"
# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=0"
# --allow-privileged=false: If true, allow privileged containers.
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=10.9.178.109:8080"
1.3.2.2 Service unit files
cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kube-proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet ${KUBE_LOGTOSTDERR} \
            ${KUBE_LOG_LEVEL} \
            ${KUBELET_ADDRESS} \
            ${KUBELET_PORT} \
            ${KUBELET_HOSTNAME} \
            ${KUBELET_API_SERVER} \
            ${KUBE_ALLOW_PRIV} \
            ${KUBELET_POD_INFRA_CONTAINER} \
            ${KUBELET_ARGS}
Restart=on-failure

[Install]
WantedBy=multi-user.target
1.3.2.3 Starting the services
systemctl daemon-reload
#!/bin/bash
for svc in kube-proxy kubelet; do
    systemctl restart $svc
    systemctl enable $svc
    systemctl status $svc
done
Expected processes after a successful start:
/usr/bin/kube-proxy --logtostderr=true --v=0 --master=10.9.178.109:8080
/usr/bin/kubelet --logtostderr=true --v=0 --address=0.0.0.0 --port=10250 --hostname-override=10.9.106.37 --api-servers=http://10.9.178.109:8080 --allow-privileged=false
/usr/bin/flanneld -etcd-endpoints=http://10.9.178.109:2379 -etcd-prefix=/atomic.io/network
Once both machines are running normally, node information for both can be retrieved on the master with:
kubectl get nodes
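As a final sanity check of the flannel overlay, the two docker bridges should be reachable across hosts; for example, with the subnets shown above (192.168.16.1 on the master, 192.168.72.1 on node1):
# from the master, ping node1's docker0 address across the overlay
ping -c 3 192.168.72.1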