Chapter 1: Deploying a Kubernetes Cluster with RKE

Machines
192.168.17.129
192.168.17.130
192.168.17.131
192.168.17.132
Reference blog: https://blog.csdn.net/godservant/article/details/80895970

Install Docker

Install Docker on all four machines. Supported versions: 1.11.x, 1.12.x, 1.13.x, or 17.03.x.

chmod +x docker-compose
mv docker-compose /usr/bin/
rpm -ivh *.rpm --nodeps --force
systemctl start docker
systemctl enable docker

vim /etc/docker/daemon.json
{
"insecure-registries":["0.0.0.0/0"]
}

systemctl restart docker
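
To confirm the insecure-registry setting took effect after the restart, check the daemon info (a quick sanity check; the exact output format varies slightly between Docker versions):

docker info | grep -i -A 2 "insecure registries"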

Runtime environment

Software            Version / Image                                    Notes
OS RHEL 7.2
Docker 1.12.6
RKE v0.0.12-dev
kubectl v1.8.4
Kubernetes rancher/k8s:v1.8.3-rancher2
canal quay.io/calico/node:v2.6.2
quay.io/calico/cni:v1.11.0
quay.io/coreos/flannel:v0.9.1
etcd quay.io/coreos/etcd:latest
alpine alpine:latest
Nginx proxy rancher/rke-nginx-proxy:v0.1.0
Cert downloader rancher/rke-cert-deployer:v0.1.1
Service sidekick rancher/rke-service-sidekick:v0.1.0
kubedns gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
dnsmasq gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
Kubedns sidecar gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
Kubedns autoscaler gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0

Deploy the Kubernetes cluster

Create the rke user and set up passwordless SSH for it

su - rke  (passwordless SSH must be configured for every cluster node, including the node itself)

1. Generate a key pair in the default format. This creates id_rsa (private key) and id_rsa.pub (public key) under the rke user's ~/.ssh/ directory.

ssh-keygen

Check the ~/.ssh/ directory.

2. Copy the public key id_rsa.pub to the remote host's ~/.ssh/ directory, where it is appended to authorized_keys (here the remote host 192.168.17.130 is used as an example).

ssh-copy-id  -i  ~/.ssh/id_rsa.pub rke@192.168.17.130
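
You can verify that key-based login works before moving on (no password prompt should appear):

ssh rke@192.168.17.130 hostname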

关闭swap分区

Kubelet运行是需要worker节点关闭swap分区,执行以下命令关闭swap分区

1)永久禁用swap

可以直接修改/etc/fstab文件,注释掉swap项

1
vi /etc/fstab

2) Temporarily disable swap

swapoff -a
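
Verify that swap is off (the Swap line in free should read all zeros, and swapon -s should list no entries):

free -m
swapon -s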

Add the rke user to the docker group

usermod -aG docker <user_name>

e.g.: usermod -aG docker rke
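
After logging in again as rke, confirm that Docker commands work without sudo (a quick check, assuming the Docker daemon is running):

su - rke -c "docker ps"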

Disable SELinux

Edit the config file and set SELINUX=disabled, then reboot for the change to take effect:

vim /etc/selinux/config

reboot
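
For reference, the same change can be applied non-interactively (a sketch that assumes the default SELINUX=enforcing line is present):

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0   # switch to permissive immediately; the config change fully applies after reboot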

Download rke and kubectl

(1) Download the latest rke release (the rke_linux-amd64 binary) from https://github.com/rancher/rke/releases/

mv rke_linux-amd64 rke
chmod +x rke
mv rke /usr/bin/

(2) Download kubectl (use the linux/amd64 build for the cluster hosts): https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl

chmod +x kubectl
mv kubectl  /usr/bin/
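
Verify that both binaries are on the PATH:

rke --version
kubectl version --client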

Deploy the private container image registry (Harbor)

First push all images required by the cluster to the Harbor registry. On the machine holding the images, list and save the images carrying the registry-specific tag:

docker images --format "{{.Repository}}:{{.Tag}}" | grep 192.168.17.130

docker save -o rke.tar.gz $(docker images --format "{{.Repository}}:{{.Tag}}"|grep 192.168.17.130)

Download link for all the images: https://pan.baidu.com/s/1flpjyrVHX283f-t0b7RFtQ (extraction code: ubct)
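
On machines that need the images locally (or to seed the private Harbor), the saved archive can be loaded and re-pushed; a sketch, assuming the archive name above and the Harbor project used later in this document:

docker load -i rke.tar.gz
docker login 192.168.17.132
docker push 192.168.17.132/rke/rancher/rke-tools:v0.1.13   # repeat for each required image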

Edit the hosts file (optional)

vim /etc/hosts

192.168.17.129 rke
192.168.17.130 node1
192.168.17.131 node2
192.168.17.132 node3

Edit the cluster configuration file cluster.yml

You can run ./rke config as the rke user on a host to generate cluster.yml interactively. For convenience, I uploaded a prepared cluster.yml; its contents are shown below.

su - rke

Create the cluster.yml file

# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: "192.168.17.129"
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - etcd
  - worker
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  labels: {}
- address: "192.168.17.130"
  port: "22"
  internal_address: ""
  role:
  - etcd
  - worker
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  labels: {}
- address: "192.168.17.131"
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: rke
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  labels: {}
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    snapshot: false
    retention: ""
    creation: ""
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: [Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"]
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
authentication:
  strategy: x509
  options: {}
  sans: []
addons: ""
addons_include: []
system_images:
  etcd: 192.168.17.132/rke/quay.io/coreos/etcd:v3.1.12
  alpine: 192.168.17.132/rke/rancher/rke-tools:v0.1.13
  nginx_proxy: 192.168.17.132/rke/rancher/rke-tools:v0.1.13
  cert_downloader: 192.168.17.132/rke/rancher/rke-tools:v0.1.13
  kubernetes_services_sidecar: 192.168.17.132/rke/rancher/rke-tools:v0.1.13
  kubedns: 192.168.17.132/rke/gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
  dnsmasq: 192.168.17.132/rke/gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
  kubedns_sidecar: 192.168.17.132/rke/gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
  kubedns_autoscaler: 192.168.17.132/rke/gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0
  kubernetes: 192.168.17.132/rke/rancher/hyperkube:v1.9.7-rancher2
  flannel: 192.168.17.132/rke/quay.io/coreos/flannel:v0.9.1
  flannel_cni: 192.168.17.132/rke/quay.io/coreos/flannel-cni:v0.2.0
  calico_node: 192.168.17.132/rke/quay.io/calico/node:v3.1.1
  calico_cni: 192.168.17.132/rke/quay.io/calico/cni:v3.1.1
  calico_controllers: ""
  calico_ctl: 192.168.17.132/rke/quay.io/calico/ctl:v2.0.0
  canal_node: 192.168.17.132/rke/quay.io/calico/node:v3.1.1
  canal_cni: 192.168.17.132/rke/quay.io/calico/cni:v3.1.1
  canal_flannel: 192.168.17.132/rke/quay.io/coreos/flannel:v0.9.1
  wave_node: 192.168.17.132/rke/weaveworks/weave-kube:2.1.2
  weave_cni: 192.168.17.132/rke/weaveworks/weave-npc:2.1.2
  pod_infra_container: 192.168.17.132/rke/gcr.io/google_containers/pause-amd64:3.0
  ingress: 192.168.17.132/rke/rancher/nginx-ingress-controller:0.16.2-rancher1
  ingress_backend: 192.168.17.132/rke/k8s.gcr.io/defaultbackend:1.4
  metrics_server: 192.168.17.132/rke/metrics-server-amd64:v0.2.1
ssh_key_path: ~/.ssh/id_rsa
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
monitoring:
  provider: ""
  options: {}

Start rke and deploy the cluster

Run (the -d flag enables debug output):

rke -d up

or point it at the config file explicitly:

rke up --config cluster.yml

Output like the figure below indicates success.

Copy the cluster kubeconfig

After rke successfully deploys the cluster, it generates a kube_config_cluster.yml file.

cp kube_config_cluster.yml /home/rke/.kube/config
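
kubectl reads ~/.kube/config by default, so create the directory first if it does not exist, or simply point KUBECONFIG at the generated file and verify access:

mkdir -p /home/rke/.kube
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
kubectl get nodes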

Delete the Kubernetes cluster

Run:

rke -d remove

or with the config file specified explicitly:

rke remove --config cluster.yml

Then clean up the Docker containers by running the following on every node:

docker rm -fv $(docker ps -aq)
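
Depending on the RKE version, some state directories may remain on the nodes after rke remove; the ones below are the usual leftovers (double-check the paths in your own environment before deleting):

rm -rf /etc/kubernetes /etc/cni /opt/cni /var/lib/etcd /var/lib/cni /var/run/calico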

Label the cluster nodes

Create a label on a node:

kubectl label node 192.168.17.131 ci.role=slave

List the nodes together with their labels:

kubectl get node --show-labels

Query nodes carrying a specific label:

kubectl get node -l ci.role=slave

To delete a label, append the label key followed by a minus sign at the end of the command:

kubectl label nodes 192.168.17.131 ci.role-

A pipeline can then include a nodeSelector: 'ci.role=slave' statement.

Create a maven settings ConfigMap for the cluster

Create the configmap-setting.yaml file:

apiVersion: v1
data:
  settings.xml: |
    <?xml version="1.0" encoding="UTF-8"?>

    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

      <pluginGroups>
      </pluginGroups>

      <proxies>
      </proxies>

      <servers>
        <server>
          <id>chinaunicom</id>
          <username>admin</username>
          <password>11111111</password>
        </server>
        <server>
          <id>maven-snapshot-manager</id>
          <username>admin</username>
          <password>11111111</password>
        </server>
        <server>
          <id>maven-release-manager</id>
          <username>admin</username>
          <password>11111111</password>
        </server>
      </servers>

      <mirrors>
        <mirror>
          <id>chinaunicom</id>
          <mirrorOf>*</mirrorOf>
          <name>Human Readable Name for this Mirror.</name>
          <url>http://192.168.17.132:8081/repository/maven-public/</url>
        </mirror>
        <mirror>
          <id>maven-snapshot-manager</id>
          <mirrorOf>*</mirrorOf>
          <name>Human Readable Name for this Mirror.</name>
          <url>http://192.168.17.132:8081/repository/maven-snapshots/</url>
        </mirror>
        <mirror>
          <id>maven-release-manager</id>
          <mirrorOf>*</mirrorOf>
          <name>Human Readable Name for this Mirror.</name>
          <url>http://192.168.17.132:8081/repository/maven-releases/</url>
        </mirror>
      </mirrors>

      <profiles>
      </profiles>

    </settings>
kind: ConfigMap
metadata:
  name: configmap-maven-settings-mirror-31010
  namespace: jenkins

Run:

kubectl create -f configmap-setting.yaml

This creates a ConfigMap named configmap-maven-settings-mirror-31010, which pipelines can then mount as a volume (see the demo3 pipeline later in this document).

View the ConfigMaps:

kubectl get configmap -n jenkins

Create a kubeconfig ConfigMap for the cluster

Create the kubeconfig-dev.yaml file:

apiVersion: v1
data:
config: |
apiVersion: v1
kind: Config
clusters:
- cluster:
api-version: v1
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQWFxZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKTFdOaE1CNFhEVEU1TVRJd01URXlOREkwTTFvWERUSTVNVEV5T0RFeU5ESTBNMW93RWpFUU1BNEdBMVVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU53TVdZY1dheGk3ClA0VXNhOExCSW1NL3Q3bUlZMEVHTnZia2dGd3hpR2tBODhBMXJ3SjhzNFE3cFRmS3BzS3Zublp2QlY4aHdpNmQKKzZkK0xFMTlTRFNvL2owRHMzUDRJNFY5S3E1Mk9sRzN4U0pFTXY3VW9DM0VJOE9OM1pMZnIwSlpNRnpvaW1BbwpjUjFDTENlVVN0WDNudXk5K2Y0cjBuaEZGRDcyNkFJTmN3QlB6SjI5RHcwZ3grbmFuVnh6S3UzSjJOSm9hS05jCm41V0dMeUE4V2YxNUdUQVdGL29rcklkTHZFTUxQSUx5SG43elRHZm52TFNTRVhJWTFBZEl2bFRlcmI2UXdNUjIKcWFMSlg4WEo0WTlQR0UvdktUVGdrMHZGdm9ZQTcvbnZQdGRoUTF2aGJqcWpsL2NJM0JtYnhnNFZJL1RUQzl3SQppQ0IzeUdVeXpGRUNBd0VBQWFNak1DRXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTTBJakVGSHM0TmNiT1FQNVdpd0tXZ0k0WDdMZy9Sa2VCK20KaWx3YzU4aU1XR3ZGT1hJc3ZBaFU1YVdUSHlIQUt2VGd2U2xyamlwSDJxeVZVQTlkSlNkcjBCL2RPdnBac1RIcAprakdFOGFTYkNnajJBQ2o1bEZjT21DSC9oeTRPVHlnSEl0amJDaVpjejMvOVVZVTlkczh4RWlLejlsUDRwelE5Cm5IRGxwemJyU09pbWsrQTFiZTRQWXNIL0ozUWJ6cGs1VERWbEp0eWlGbHhBU2RWdmJUUWtMMjRUMXZHWFp3K1YKcEE0S1h1OTlTaTZncVhHeFQrcElJUE9HSjZzcDM2eXh1RE5YSnhKdGpTUk9zTWhlMm9oT0pXencwYUpWYWtpSQpqbDNiSnlMcDlVd2ZrOEZFOUd2UlVBd1c4VitCYzk1MkhwR2wyaXVyazZoSTVqV2ZEUnc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: "https://192.168.17.129:6443"
name: "local"
contexts:
- context:
cluster: "local"
user: "kube-admin-local"
name: "local"
current-context: "local"
users:
- name: "kube-admin-local"
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lJSXMrbG05aXN6RW93RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHhPVEV5TURFeE1qUXlORE5hRncweU1ERXhNekF4TWpReU5EWmFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXNLdGtaZWY4bzMzdzB0VmpmbFZITVMrVHpCM0QKV2NnM1o3L3JMZ0UzTS9ySlkvdUtJV2RiTmxYZG1ZeTBJakdQVEZqNFl4eHV4Ymo4cy9xVmdDR1dJQUk2dVAxcgpxYUlZdm5ud3dWZUVDcTZsZG9TM09vYTFGQzFlTFg3Z0o2OWE0UVprR2lYNHhZYkpkTUNTWm92N3JzamtIbUgvCmJ3Z010cVlzdk9sOTJ6S2NiaEtFNG5GejVlN3A2Qm5wL05BQXhvYnF2K0F4VHlrYUdBYWRna0trM3Jrd2VCUkIKODVQV0pSZDVITDBwOTUybzFwdkthbGx0MVh1a0ZPTHgzc3lJdTQrR1EwbWJWQkVmeVRYM2JoSFlpaG05NDlHdgo1c0MySVN4WUxLT0RSc2xuY1lVcGVYYVR5NldJRmFMODc1ZFVIaG9vRVZFTjJ5WnVYVlMvZmVMWWp3SURBUUFCCm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFKQktqZjZ1Zk40ZjBmdTBHMFQxNHJZWS96VGRPSjNOeGxBMXp3aUJrQndNWHFXcApBQW5HOFp6bzlvYWE0MFNkR2FRUUVCSnR4ZHFBYWNWOW1kREFJM1lmdW1UQlVTdVZCSDdEYUx4RG9HU283NElDCkFpVkYzNWk1cGRkWjNZRCt4c2Q1Ni94QXBJdTFvOGpxVFFRejlQUDFIZFRMaTNZYm9OM0FOdFNKZnFlSlBVckcKYXJjTDdmdGRldEdvdWJUdlhPazIxOXk3YldpRE9xSlFqVkpVajVudkNXL25xRFBmbDZmbkZ1YStxMUg4bXZ4QwpzRDdlMEgra0F0RHN5TW1RbXRzd2l4alpkVU5vRzZYZUlMZ25aaEMrejVWdjlRd0hEMXd4YXdRc0pZK3YzU0drClAydWFMcXFXb1FwQklmZHZXaThzK29DdTA1NXlYNzZpODgxVTFsWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBc0t0a1plZjhvMzN3MHRWamZsVkhNUytUekIzRFdjZzNaNy9yTGdFM00vckpZL3VLCklXZGJObFhkbVl5MElqR1BURmo0WXh4dXhiajhzL3FWZ0NHV0lBSTZ1UDFycWFJWXZubnd3VmVFQ3E2bGRvUzMKT29hMUZDMWVMWDdnSjY5YTRRWmtHaVg0eFliSmRNQ1Nab3Y3cnNqa0htSC9id2dNdHFZc3ZPbDkyektjYmhLRQo0bkZ6NWU3cDZCbnAvTkFBeG9icXYrQXhUeWthR0FhZGdrS2szcmt3ZUJSQjg1UFdKUmQ1SEwwcDk1Mm8xcHZLCmFsbHQxWHVrRk9MeDNzeUl1NCtHUTBtYlZCRWZ5VFgzYmhIWWlobTk0OUd2NXNDMklTeFlMS09EUnNsbmNZVXAKZVhhVHk2V0lGYUw4NzVkVUhob29FVkVOMnladVhWUy9mZUxZandJREFRQUJBb0lCQUN2MDJPd0tCbS9mTy9ZWgpKY0lmRWJHSk51cklWUHlYdGtGWUhQbTdUN0xkS1JKNVdXcnFQbVdNZzdCYXM4NzJLY05ETjduaEx5WisybEVsCmZlRDllazdJZnpmYnhkZlUvdmNWZS9OL0JObHJqcnVvVmJaNEljRzliL3M5NENPL200cjFmaDZMYUJRdGJ4NWYKYzQyVU1yRFFSd0hRUEMreC93ZkszTUs4RFpabDNRM0xzN21aM0ZHaVB3ajR1dFBYb0pQNzJxdC9xSUVNTnZIOQo0c0xZb1NzekM5NW9hcWk2Unh1V2pvZHVPazJxMjlZR2w4VHQxOXdiOEs3K2xzV0p5YndwVHQ5S1FSRXhucHorCmtYbFdXNWpwaXM2L3JHTkRYWGdhcHdvNW1jSCtPOHVrYmtscDlyaDVPS0hvRGc5aWhHUkhSODBQVkNWVkRxQ2QKRFUvbkpMRUNnWUVBd3MwTmFwcGNLSG9yKzFDU3d6ckJ6WlU5a3YvSXhDNFY5bjRqVUxyallXT1dNb1Bya3pDZwpqK3lRRk9pK0tjaEdJTEdhT3dzUlhGeGRocEdBWk16VGx0UmE0TmZONEI4MFg2SFdWWHFkZjB6ajBsNEcxRlUvCmpKM2xWS0hmU3czNmh2NEV1QVRrRE03NTNzeFY4OGRsK21qV1pKYkt3dHhBdGFQWW81blplbnNDZ1lFQTZDd2IKbDV1Y3dOVGd5RXY4QmYvWHJ4SFJRc2toSVBGbUtNOXBOUGRzUTBPT1pKTnBvYVprdVNVaEZ3NEZzV2Uvb1AvQwpjMGJ6OTlJTFMyZVRrYjhweUZTOStLYUhMNzBZQ1MzcTRjbWlEVE4xZWhwbURDajZ3c2d5aGY0K1pURTlzRmxmCnNkT2xaWElLOFB3c1NXQ3o2Ym1hWExQbnRPVlJ2SWR3WmVwdlYvMENnWUJORWlXeHZKcWpwUnFMbHVoSjk1QS8KejBFS1RNclkyMGJ6UEJxcTBSWXZMT0I2NGZpdFJucndGbTgyNXBKK0kyK2pkY0VJaFN0OE9Fc0VkOEt0bnVCRAo5NFp4R05DcVVJNC9HOStaK0NZaC9JRFNkVU1NZFNIc2QzZ0pVUFh3VXZxQXVEV1R2Tk9oUWE1WWVNMjA0bm8xClpZOFZReGU3bXJxN1lyVE9uWXNPeXdLQmdRQzgxUHNRSVBHcWFMbjJUczdKTmwvL05SZWxJUjcvd3pjYTVDOG0KZEVLcXBxeU9vdExjTmhCZ0FaSGJSWDFkNEFzYzhFZ0FLR3BQV3BmekdXZ050NVJOS3Bka1FGVmRmNGVvRjUrZApTcml4MGZPdmZ2OFd6dEc5VU1TKzlKMWRBbUt4SnMvTk8xMmZsOVRNVWQzWFJINndEMVE4SjlyQjUyM0dUOFljCkxrT25KUUtCZ1FDSXEydk42NTdHbUNOMUtCTlpMOGwvVWJud0twMEtKbVdOZ3RLU0VzTytvcnMvbFNnVlRwbVYKZE9IMXo1cmcvNHVCYnFXeEluMlA2eWpGUWlqck5odGlXR3N0anFPelJlU3FzYmt6QTJINUV3THB1WFVLWDUvZQpMRjhpNWt2bnFPZ1Vab1g4WVQ2L0N4SjZtU1pVTEN0NEVpditJeUc2bTQ1OHpDeW9zbFgrU3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
kind: ConfigMap
metadata:
name: configmap-kubeconfig-dev
namespace: jenkins

Run:

kubectl create -f kubeconfig-dev.yaml

This creates a ConfigMap named configmap-kubeconfig-dev, which pipelines can then mount (see the demo3 pipeline later in this document).

Debugging the cluster

View cluster information:

kubectl cluster-info

Check whether the nodes are working properly:

kubectl get nodes -o wide

List deployments in all namespaces:

kubectl get deployment --all-namespaces

List services in all namespaces:

kubectl get svc --all-namespaces

List pods in all namespaces:

kubectl get pod -o wide --all-namespaces

Tail the logs of a pod:

kubectl logs -f kubernetes-dashboard-7f9f8cc4cf-tm2ld -n kube-system

Delete a pod:

kubectl delete pod ms-cloud-tenant-service-5dc69784b-f42vh -n cloud

Describe a failing pod:

kubectl describe pod metrics-server-6c84bc5674-tf4qw -n kube-system

Inspect an image:

docker inspect 10.124.133.192/devops/jenkins-slaver-dind:v1.0.0

Watch pod status changes in a namespace in real time:

kubectl get pod -o wide -n jenkins -w

List all containers, including exited ones:

docker ps -a

Deploy the Kubernetes dashboard

Reference blog: https://www.kubernetes.org.cn/4004.html

Deploy the dashboard

(1) Create the k8s-dashboard.yaml file

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.17.132/rke/k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Run:

kubectl create -f k8s-dashboard.yaml

(2) Create the dashboard-usr.yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Run:

kubectl create -f dashboard-usr.yaml

(3) If you modify the YAML later, update the deployment with:

kubectl apply -f dashboard-usr.yaml

Access the dashboard

Via kubectl proxy on a Windows machine

(1) Install kubectl:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl.exe

(2) Copy the cluster's kube_config_cluster.yml file to a local config.yml, then run the following in cmd:

kubectl --kubeconfig=C:\Users\Administrator\Desktop\config.yml proxy

(3) Open in the browser:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

(4) Generate the token for web login:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Via the ingress-nginx tcp-services ConfigMap

(1) Edit the ConfigMap:

kubectl edit configmap tcp-services -n ingress-nginx

Add the following two lines:

data:
"9090": kube-system/kubernetes-dashboard:443


(2) Generate the token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

(3) If the first deployment reports an error, delete the stale secret:

kubectl --kubeconfig ../kube_config_cluster.yml delete secret kubernetes-dashboard-key-holder -n kube-system

Dashboard address: https://192.168.17.131:9090

Deploy graphical metrics for the Kubernetes dashboard

Upload the heapster-grafana.yaml, heapster.yaml and influxdb.yaml files, then create them.
(1) Create the heapster-grafana.yaml file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: 192.168.17.132/rke/heapster-grafana:v4.3.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - nodePort: 30108
    port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

(2) Create the heapster.yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: heapster
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: 192.168.17.132/rke/heapster:v1.4.0
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        # - --source=kubernetes:https://$KUBERNETES_SERVICE_HOST:443?inClusterConfig=true&useServiceAccount=true
        - --source=kubernetes:https://kubernetes.default?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc.cluster.local:8086?retention=0s
        - --v=2
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

(3) Create the influxdb.yaml file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: influxdb
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: 192.168.17.132/rke/heapster-influxdb:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - nodePort: 31001
    port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb

(4) Create the pods:

kubectl create -f heapster-grafana.yaml
kubectl create -f heapster.yaml
kubectl create -f influxdb.yaml
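
Check that the three monitoring pods come up in kube-system:

kubectl get pod -n kube-system | grep -E 'heapster|grafana|influxdb'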

--source: specifies the cluster to connect to.
  inClusterConfig: use the kube config in the service account associated with Heapster's namespace (default: true)
  kubeletPort: the kubelet port to use (default: 10255)
  kubeletHttps: whether to connect to the kubelets over https (default: false)
  apiVersion: the Kubernetes API version to use
  insecure: whether to trust the Kubernetes certificates without verification (default: false)
  auth: authentication file to use
  useServiceAccount: whether to use the Kubernetes service account token

--sink: specifies the backend data store; here InfluxDB is used.
Suffix parameters:
  user: InfluxDB user
  pw: InfluxDB password
  db: database name
  secure: connect to InfluxDB securely (default: false)
  withfields: use InfluxDB fields (default: false)

Chapter 2: Setting Up the Automated Build and Deployment Pipeline

Deploy jenkins-master

(1) Download the jenkins-master image
Link: https://pan.baidu.com/s/1lhllaOIsvDXJoiFMPx47Qw (extraction code: z24p)
or
Link: https://pan.baidu.com/s/1hROj7nt_iw0Vf1tQTShfYA (extraction code: iieu)

(2) Create a directory on 192.168.17.131:

mkdir /jenkins-data

(3) Grant permissions:

chown -R 1000:1000 /jenkins-data

(4) Create the jenkins-k8s-resources.yml file

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: jenkins-wdj
  namespace: jenkins
  labels:
    k8s-app: jenkins-wdj
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: jenkins-wdj
  template:
    metadata:
      labels:
        k8s-app: jenkins-wdj
    spec:
      nodeSelector:
        kubernetes.io/hostname: 192.168.17.131
      containers:
      - name: jenkins-wdj
        image: 192.168.17.132/rke/jenkinsci-blueocean:v1.0
        env:
        - name: TZ
          value: Asia/Shanghai
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: jenkins-home-test
          mountPath: /var/jenkins_home
        ports:
        - containerPort: 8080
          name: web
        - containerPort: 50000
          name: agent
      volumes:
      - name: jenkins-home-test
        hostPath:
          path: /jenkins-data
          type: DirectoryOrCreate

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: jenkins-wdj
  name: jenkins-wdj
  namespace: jenkins
spec:
  type: NodePort
  ports:
  - port: 8080
    name: web
    targetPort: 8080
    nodePort: 31666
  - port: 50000
    name: agent
    targetPort: 50000
  selector:
    k8s-app: jenkins-wdj

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-wdj
  namespace: jenkins

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins-wdj
  namespace: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins-wdj
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-wdj
subjects:
- kind: ServiceAccount
  name: jenkins-wdj
(5) Start jenkins-master:

kubectl create -f jenkins-k8s-resources.yml

(6) Install the Kubernetes plugin in Jenkins.

Deploy jenkins-slave

Build the jenkins-slave base image

The image is built on openshift/jenkins-slave-base-centos7:latest and bundles common build tools such as maven, nodejs, helm, go and beego.
(1) Download the jenkins-slave-base-centos7:latest image

Download link: https://pan.baidu.com/s/1Wvs1pRotGWqTT_6BLoRrTg (extraction code: xvde)

(2) Edit the Dockerfile

FROM openshift/jenkins-slave-base-centos7:latest

LABEL maintainer=caiy17@chinaunicom.cn
LABEL Description="Built on openshift/jenkins-slave-base-centos7:latest; bundles maven, nodejs, helm, go, beego and other common build tools"
ENV TZ Asia/Shanghai

###############################################################################
# Generate an ssh key pair
###############################################################################
USER root

RUN ssh-keygen -f /root/.ssh/id_rsa -N ''

###############################################################################
# Configure the yum repo and install common tools
###############################################################################
# RUN curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo \
# && yum makecache \
RUN yum install -y vim telnet net-tools wget unzip

###############################################################################
# Install the JDK
###############################################################################
ADD packages/jdk-8u181-linux-x64.tar.gz /usr/java

ENV JAVA_HOME /usr/java/jdk1.8.0_181
ENV PATH $JAVA_HOME/bin:$PATH

RUN chown root:root /usr/java/jdk1.8.0_181 -R \
&& chmod 755 /usr/java/jdk1.8.0_181 -R

###############################################################################
# Install maven
###############################################################################
ADD packages/apache-maven-3.6.0-bin.tar.gz /opt

RUN chown root:root /opt/apache-maven-3.6.0 -R \
&& chmod 755 /opt/apache-maven-3.6.0 -R \
&& ln -s /opt/apache-maven-3.6.0/ /opt/maven

ENV M2_HOME /opt/maven
ENV PATH $M2_HOME/bin:$PATH

# Configure the maven mirror
RUN sed -i 'N;146 a \ \ \ \ <mirror>\n\ \ \ \ \ \ <id>self-maven</id>\n\ \ \ \ \ \ <mirrorOf>self-maven</mirrorOf>\n\ \ \ \ \ \ <name>self-maven</name>\n\ \ \ \ \ \ <url>http://10.236.5.18:8088/repository/maven-public/</url>\n\ \ \ \ </mirror>' /opt/maven/conf/settings.xml

###############################################################################
# Install nodejs
###############################################################################
ADD packages/node-v10.15.3-linux-x64.tar.xz /opt

RUN chown root:root /opt/node-v10.15.3-linux-x64 -R \
&& chmod 755 /opt/node-v10.15.3-linux-x64 -R \
&& ln -s /opt/node-v10.15.3-linux-x64/ /opt/node

ENV NODE_HOME /opt/node
ENV PATH $PATH:$NODE_HOME/bin

###############################################################################
# Install helm
###############################################################################
ADD packages/helm-v2.8.2-linux-amd64.tar.gz /opt/helm

ENV HELM_HOME /opt/helm
ENV PATH $PATH:$HELM_HOME/linux-amd64

###############################################################################
# Install go
###############################################################################
ADD packages/go1.12.3.linux-amd64.tar.gz /opt

ENV GOROOT /opt/go
ENV GOPATH /root/go
ENV PATH $PATH:$GOROOT/bin:$GOPATH/bin

###############################################################################
# Install beego
###############################################################################
ADD packages/bee.tar.gz /root/go/bin

(3) Run the build command:

docker build -t jenkins-slave:2019-09-11-v1 .

Note the trailing dot: it tells docker to build using the Dockerfile in the current directory.

Download link for the resulting image: https://pan.baidu.com/s/1Wvs1pRotGWqTT_6BLoRrTg (extraction code: xvde)
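
If the slave image should also live in the private Harbor, tag and push it after the build (a sketch using this document's registry address; the repository path under it is just an example):

docker tag jenkins-slave:2019-09-11-v1 192.168.17.132/rke/jenkins-slave:2019-09-11-v1
docker push 192.168.17.132/rke/jenkins-slave:2019-09-11-v1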

Generate the client certificates for connecting to the Kubernetes cluster

1) Extract the certificates and key used to talk to the api-server, and test the connection

cat kube_config_cluster.yml

Then run the following, replacing each placeholder with the corresponding base64 value from kube_config_cluster.yml:

echo <certificate-authority-data value> | base64 -d > ca.crt

echo <client-certificate-data value> | base64 -d > client.crt

echo <client-key-data value> | base64 -d > client.key
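
Alternatively, the fields can be pulled straight out of the kubeconfig file; a sketch that assumes the single-line "key: value" layout RKE generates:

grep 'certificate-authority-data' kube_config_cluster.yml | awk '{print $2}' | base64 -d > ca.crt
grep 'client-certificate-data' kube_config_cluster.yml | awk '{print $2}' | base64 -d > client.crt
grep 'client-key-data' kube_config_cluster.yml | awk '{print $2}' | base64 -d > client.key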

Then generate the client authentication certificate cert.pfx from the files above:

openssl pkcs12 -export -out cert.pfx -inkey client.key -in client.crt -certfile ca.crt

Reference: http://www.mamicode.com/info-detail-2399348.html

Configure jenkins-master (reference)

Open the Jenkins UI and navigate to:
Manage Jenkins -> Configure System -> Add a new cloud (at the bottom of the page) -> Kubernetes
Configure it as follows:

Click Save when the configuration is done.

kubectl create serviceaccount jenkins-wdj -n jenkins

kubectl get serviceaccount --all-namespaces

kubectl describe serviceaccount/jenkins-wdj -n jenkins

kubectl get secret -n jenkins

kubectl get secret jenkins-wdj-token-cm449 -o yaml -n jenkins

kubectl get secret jenkins-wdj-token-ld6dc -n jenkins -o jsonpath={".data.token"} | base64 -d

kubectl delete serviceaccount/jenkins-xy -n jenkins

kubectl get sa jenkins-wdj -n jenkins -o yaml

kubectl get secret jenkins-wdj-token-cm449 -n jenkins -o jsonpath={".data.token"} | base64 -d

https://www.iteye.com/blog/m635674608-2361440

Configure jenkins-master (optional)

(1) Add the kube_config_cluster.yml file that was generated when the cluster was created;

(2) Manage Jenkins -> Plugin Manager -> Available plugins -> search for the Kubernetes plugin -> install it directly;

(3) Manage Jenkins -> Configure System -> add a new cloud and configure it as follows:

Create a pipeline job

demo1

(1) Create a test pipeline job named demo;

(2) Script:

def name = "ci-demo-backend"
def label = "${name}-${UUID.randomUUID().toString()}"

podTemplate(
    label: label,
    namespace: 'jenkins',
    cloud: 'kubernetes',
    containers: [
        containerTemplate(name: 'jnlp', image: '192.168.17.132/rke/cicd/jenkins-slaver-dind:v1.1.5')
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
    ]
) {
    node(label) {
        stage('test') {
            echo "hello, world"
            sleep 60
        }
    }
}

demo2

(1) Configure Harbor registry authentication in the Kubernetes cluster (reference)

1. Create the secret with kubectl (each namespace needs its own secret; secrets are not shared across namespaces):

kubectl create secret docker-registry harborsecret \
  --docker-server=192.168.17.132 \
  --docker-username=admin \
  --docker-password=Harbor12345 \
  --docker-email=admin@example.com \
  --namespace=jenkins
2. View the secret's contents:

kubectl get secret harborsecret --output=yaml -n jenkins
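
To let pods in the namespace pull from Harbor without listing imagePullSecrets in every manifest, the namespace's default service account can be patched to reference the secret (a common pattern, not a step from the original write-up):

kubectl patch serviceaccount default -n jenkins -p '{"imagePullSecrets": [{"name": "harborsecret"}]}'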

(2) Add credentials for the GitLab and Harbor repositories (optional)

(3) Install the Git Parameter plugin

(4) Set up the parameterized build

demo3

def label = "ci-ms-cloud-tenant-service-${UUID.randomUUID().toString()}"
def deployNamespace = "cloud"
def deploymentName = "ms-cloud-tenant-service"

podTemplate(
    label: label,
    cloud: 'kubernetes',
    namespace: 'jenkins',
    containers: [
        containerTemplate(name: 'jnlp', image: '192.168.17.132/rke/cicd/jenkins-slaver-dind:v1.1.5', envVars: [envVar(key: 'LANG', value: 'en_US.UTF-8')],)
    ],
    volumes: [
        configMapVolume(configMapName: 'configmap-maven-setting-wdj', mountPath: '/home/jenkins/maven'),
        configMapVolume(configMapName: 'configmap-kubeconfig-dev', mountPath: '/home/jenkins/kube'),
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
    ]
) {
    node(label) {
        stage('Checkout') {
            git credentialsId: 'ab716d57-9399-4978-bfb4-82eaccaea9d2', url: 'http://192.168.17.132:8080/cloud/ms-cloud-tenant-service.git'
        }
        stage('Build jar') {
            sh 'mvn clean package -U -Dmaven.test.skip=true --settings /home/jenkins/maven/settings.xml'
        }
        def dockerTag = new Date().format("yyyyMMddHHmmss")
        def dockerImageName = "192.168.17.132/devops/ms-cloud-tenant-service:v${dockerTag}"
        stage('Build docker image') {
            pwd()
            sh "docker build -f docker/Dockerfile --tag=\"${dockerImageName}\" ."
            echo "===> finish build docker image: ${dockerImageName}"
            sh 'docker images'
        }
        stage('Push docker image') {
            withDockerRegistry(credentialsId: '1049ee9b-99fd-42df-8c40-a818fe66ae5a', url: 'http://192.168.17.132/') {
                sh "docker push ${dockerImageName}"
                sh "docker rmi ${dockerImageName}"
                sh 'docker images'
            }
        }
        stage('Deploy') {
            sh "kubectl --kubeconfig=/home/jenkins/kube/config -n ${deployNamespace} patch deployment ${deploymentName} -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\":\"${deploymentName}\", \"image\":\"${dockerImageName}\"}]}}}}'"
            echo "==> deploy ${dockerImageName} successfully"
        }
    }
}

Reference:
https://www.cnblogs.com/miaocunf/p/11694943.html

Migrating Jenkins jobs

Import with the Job Import Plugin

(from http://10.244.6.41:31000 to the local 192.168.17.131:31666)

(1) First, on the new Jenkins, install the Job Import Plugin from the plugin manager, as shown below:

(2) After installation, go to "Manage Jenkins" -> "Configure System", find the Job Import Plugin configuration section, and set it up as follows:

name: can be anything; here it is named after the IP of the Jenkins being copied from
Url: the URL of the Jenkins to copy from; since jobs are being copied from 192.168.9.10, set this to the 192.168.9.10 Jenkins URL
Credentials: add an account for the old Jenkins (i.e. the 192.168.9.10 account); if none exists yet, click Add to create one, after which it can be selected from the dropdown as in the screenshot

(3) After saving, return to the Jenkins home page and click Job Import Plugin to start migrating jobs, as shown below:

On the Job Import Plugin page, select the configuration just added from the dropdown and click Query to list the jobs on the configured Jenkins; then select the jobs to migrate and import them:

(4) The old Jenkins may have plugins the new one lacks, so tick the option to install the required plugins as needed (see the screenshot above), and likewise decide whether existing jobs should be overwritten. A successful import shows a message like the following:

(5) After the success message, go back to the new Jenkins home page, check that the jobs were imported, and open an imported job to confirm its settings were copied correctly, as shown below:

You can see that the job and its settings were successfully imported into the new Jenkins.
The Job Import Plugin also supports copying several jobs at once: if the old Jenkins has multiple jobs, the Query step above lists all of them, and ticking several entries imports them together.

Import via the Jenkins CLI

Sometimes different Jenkins instances inside a company sit on different network segments that cannot reach each other, so importing with the Job Import Plugin is not an option. In that case you can export a job's configuration with the Jenkins CLI and then import it on the new Jenkins, completing the migration. The steps are described below.

Open the Jenkins CLI page: the command-line interface provides many commands for operating Jenkins. Among them, get-job dumps a job's definition as XML to the output stream, so it can be used to export a job from the old Jenkins to an external file; another command, create-job, creates a job from an existing XML configuration, so the file exported from the old job can be used as input to recreate it.
(1) On the old Jenkins, open the CLI page and click jenkins-cli.jar to download the jar locally, as shown below:

(2) Next, click the account in the top-right corner of Jenkins, choose Configure, click Show API Token, and copy the token; it is used for authentication when exporting the configuration.

(3) In the directory where jenkins-cli.jar was downloaded, run the following command to export a job. Here a job named test4 was created as an example, and its configuration is exported with:

java -jar jenkins-cli.jar -s http://192.168.9.10:8080/jenkins -auth admin:493375c06bc0006a455005804796c989 get-job "test4" > test4.xml

http://192.168.9.10:8080/jenkins is the URL of the Jenkins that owns the job

admin: the User ID shown under Show API Token in the screenshot above

493375c06bc0006a455005804796c989: the API Token value from the screenshot above

test4: the name of the job whose configuration is being exported

test4.xml: the name of the exported file; it can be anything

Replace the four values above to match your environment. Once the command finishes, the test4.xml file is generated.

(4) Then, on the new Jenkins, download jenkins-cli.jar in the same way, copy the generated test4.xml to the new Jenkins machine, obtain the API Token and User ID of an account on the new Jenkins, and run the following command to import the job:

java -jar jenkins-cli.jar -s http://192.168.9.8:8080/jenkins -auth admin:51964e7b89a427be5dd2a28f38c86eff create-job "test4" < test4.xml

Remember to replace the URL with the new Jenkins URL and substitute the User ID and token. After the command completes, the new job appears on the new Jenkins.

Reference: https://cloud.tencent.com/developer/article/1470433

Batch-deleting job builds

Go to Manage Jenkins -> Script Console, paste the following code, and click Run:

def jobName = "devops-ci-ms-cloud-tenant-service"
def maxNumber = 75

Jenkins.instance.getItemByFullName(jobName).builds.findAll {
    it.number < maxNumber
}.each {
    it.delete()
}

Configure the Jenkins credentialsIds

That is, generate the credentialsIds used to connect to GitLab and Harbor.
(1) Set up the git credentialsId for Jenkins to connect to GitLab

Error case:

Successful configuration:

Possible cause:
The git client version is too old.
Solution: https://blog.csdn.net/wudinaniya/article/details/97921383
git 2.16.5 was installed.
(2) Generate the credentialsId for Jenkins to connect to Harbor

Triggering Jenkins builds with a GitLab webhook

Reference: https://www.cnblogs.com/zblade/p/9480366.html
This section covers how to configure GitLab and Jenkins so that every merge event on GitLab triggers the corresponding Jenkins job. The main steps are as follows.

Create a GitLab test project

Log in to your GitLab account, click the plus sign in the top-right corner and choose New Project to create a personal GitLab project:

Leave everything else at the defaults, fill in the project name, and the new project is created, as shown:

Create the Jenkins job

(1) First verify that the GitLab plugin is installed.
Under "Manage Jenkins" -> "Plugin Manager", check the installed plugins for GitLab; if it is missing, search the available plugins for GitLab, install it, and restart.
(2) Create a Jenkins test job, as shown:

For source-code management choose Git, and enter the URL of the GitLab project just created together with your API_TOKEN:

For now only the master branch exists; later you can configure different URLs per branch to watch different branches. Under build triggers, tick "Build when a change is pushed to GitLab". The URL at the end of that option is the project's webhook URL; note that on a local machine it shows localhost, which you can replace with your own IP. Keep this URL handy, as it is needed in the next step. Click Apply and Save.

Configure the GitLab webhook

In the GitLab project, go to Settings -> Integrations and create the hook using an account with the owner role on the project. Fill in the URL and token from the previous step and select the merge event, so that when the owner merges code a Jenkins build is triggered automatically.

After clicking Add webhook you may get the error "Url is blocked: Requests to the local network are not allowed".

Cause:
Since GitLab 10.6, webhook requests to the local network are blocked for security reasons.

Solution:
Click the wrench (admin) button in the top bar -> Settings in the left sidebar -> Network ->
enable the option that allows webhooks to access the local network.

After clicking Save changes, go back to the project's Settings -> Integrations, create the hook again, and click Edit:

Click Test -> Push events:

Check the event status; a 200 means the hook was created successfully:

Other possible issues

(1) Anonymous builds are not allowed

If the error above appears, the cause is that anonymous builds are not permitted. In Jenkins, go to Manage Jenkins -> Configure Global Security and grant the anonymous user read access, as shown:

Click Apply and Save, return to GitLab and test again. If the error persists, open the job that was just created, click the Advanced button at the bottom right of the "Build when a change is pushed to GitLab" trigger, find the Secret token field and click Generate to create a token:

Copy it into the webhook configuration below the URL:

Save and test again; the hook should now pass, and Jenkins runs a build:

Check the console output:

Reference: https://www.cnblogs.com/zblade/p/9480366.html

Appendix: Problems encountered

(1) Cluster nodes cannot pull images directly from the Harbor registry

Solution: do all images have to be pulled to every node manually? No; set the rke project in the Harbor registry to be a public project.

(2) Running ./rke up on the 129 machine produced the error shown below

Solution: the 129 machine had not set up passwordless SSH to itself.

(3) The dashboard page shows no graphical statistics

Solution:

Change --source to the configuration shown in the red box in the figure.

(4) jenkins-master would not start on 192.168.17.131

Create the directory on the 131 machine with mkdir /jenkins-data and change its ownership with chown -R 1000:1000 /jenkins-data.