How to use local volumes on TKE
Kubernetes has supported local volumes since v1.10. Workloads (not only StatefulSets) can take full advantage of fast local SSDs and get better performance than remote volumes such as CephFS or RBD.
Before local volumes existed, StatefulSets could also use local SSDs by configuring a hostPath volume and pinning the pod to a specific node with nodeSelector or nodeAffinity. The problem with hostPath is that administrators have to manage the directories on every node by hand, which is inconvenient.
The following two kinds of applications are a good fit for local volumes:
- Data caches: the application can access data locally and process it quickly.
- Distributed storage systems, such as distributed databases (e.g. Cassandra) or distributed file systems (e.g. Ceph/Gluster).
The steps below walk through using local volumes on TKE by manually creating the PV, PVC, and Pod.
Local volumes do not support dynamic provisioning. In other words, you need to create the PV by hand first; only then can the PVC be created and bind successfully.
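Before deploying anything, the discovery directory on each node should be prepared. The local-volume-provisioner only discovers mount points under its configured hostDir, so each volume directory must itself be a mount: a real SSD or partition in production, or a bind mount of a directory onto itself for testing. The sketch below is illustrative; the function and variable names (`prepare_fast_disks`, `run`, `DRY_RUN`) are our own, and `/mnt/fast-disks` matches the hostDir used in the ConfigMap in step 1.

```shell
# Sketch: per-node preparation for local-volume discovery (run as root on
# each node). Every volume directory must itself be a mount point -- here
# a bind mount of the directory onto itself, which is enough for testing.
# With DRY_RUN=1 the privileged commands are printed instead of executed.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

prepare_fast_disks() {
  base=${1:-/mnt/fast-disks}   # must match hostDir in the provisioner config
  count=${2:-2}                # number of volume directories to create
  i=0
  while [ "$i" -lt "$count" ]; do
    dir="$base/vol$i"
    run mkdir -p "$dir"
    run mount --bind "$dir" "$dir"   # make the directory a mount point
    i=$((i+1))
  done
}

# Preview what would be done:
# DRY_RUN=1 prepare_fast_disks /mnt/fast-disks 2
```

In production you would typically format and mount dedicated disks instead of bind mounts; whatever filesystem you choose must match the fsType (ext4 here) in the provisioner ConfigMap.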
1. Create the local-volume-provisioner
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: weixnie
data:
  storageClassMap: |
    fast-disks:
      hostDir: /mnt/fast-disks
      mountDir: /mnt/fast-disks
      blockCleanerCommand:
      - "/scripts/shred.sh"
      - "2"
      volumeMode: Filesystem
      fsType: ext4
      namePattern: "*"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: weixnie
  labels:
    app: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin
      containers:
      - image: "dyrnq/local-volume-provisioner:v2.4.0"
        imagePullPolicy: "Always"
        name: provisioner
        securityContext:
          privileged: true
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /etc/provisioner/config
          name: provisioner-config
          readOnly: true
        - mountPath: /mnt/fast-disks
          name: fast-disks
          mountPropagation: "HostToContainer"
      volumes:
      - name: provisioner-config
        configMap:
          name: local-provisioner-config
      - name: fast-disks
        hostPath:
          path: /mnt/fast-disks
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-storage-admin
  namespace: weixnie
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-pv-binding
subjects:
- kind: ServiceAccount
  name: local-storage-admin
  namespace: weixnie
roleRef:
  kind: ClusterRole
  name: system:persistent-volume-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-storage-provisioner-node-clusterrole
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-node-binding
subjects:
- kind: ServiceAccount
  name: local-storage-admin
  namespace: weixnie
roleRef:
  kind: ClusterRole
  name: local-storage-provisioner-node-clusterrole
  apiGroup: rbac.authorization.k8s.io
2. Create the StorageClass
# Only create this for K8s 1.9+
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
# Supported policies: Delete, Retain
reclaimPolicy: Delete
3. Expose the local-volume-provisioner metrics through a Service
If you do not need to monitor the local-volume-provisioner, you can skip this Service. The provisioner serves Prometheus metrics on port 8080; once the Service exists you can check them with `kubectl port-forward svc/local-volume-provisioner 8080:8080 -n weixnie` followed by `curl http://127.0.0.1:8080/metrics`.
apiVersion: v1
kind: Service
metadata:
  name: local-volume-provisioner
  namespace: weixnie
  labels:
    app: local-volume-provisioner
spec:
  type: ClusterIP
  selector:
    app: local-volume-provisioner
  ports:
  - name: metrics
    port: 8080
    protocol: TCP
4. Create the PV
Note that when creating the PV you must pin it to a specific node through nodeAffinity; the scheduler uses this to place pods that use the volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast-disks
  local:
    path: /mnt/fast-disks
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 172.16.33.13
5. Create the PVC
Because volumeBindingMode is WaitForFirstConsumer, the PVC stays Pending until a pod that uses it is successfully scheduled onto the node 172.16.33.13 and mounts it; only then does it bind.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
  namespace: weixnie
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast-disks
6. Create a Deployment that mounts the PVC
Now create a Deployment that mounts the PVC we created above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-test-reader
  namespace: weixnie
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-test-reader
  template:
    metadata:
      labels:
        app: local-test-reader
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: reader
        image: busybox
        command:
        - "/bin/sh"
        args:
        - "-c"
        # create the file first, otherwise tail -f exits and the pod crash-loops
        - "touch /usr/test-pod/test_file && tail -f /usr/test-pod/test_file"
        volumeMounts:
        - name: local-vol
          mountPath: /usr/test-pod
      volumes:
      - name: local-vol
        persistentVolumeClaim:
          claimName: "example-local-claim"
Once the pod is running, the PVC becomes Bound. We can log into the pod to verify that a file written inside the container shows up in the node's directory. (The session below was captured with a similar test Deployment, named test, that mounts the volume at /tmp.)
[root@VM-33-13-tlinux /mnt/fast-disks]# kubectl get all -n weixnie
NAME READY STATUS RESTARTS AGE
pod/local-volume-provisioner-6m8rj 1/1 Running 0 3h22m
pod/local-volume-provisioner-gv5j5 1/1 Running 0 3h21m
pod/test-56dcc9cf9c-8w944 1/1 Running 0 179m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/local-volume-provisioner ClusterIP 10.3.0.229 <none> 8080/TCP 3h29m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/local-volume-provisioner 2 2 2 2 2 <none> 3h32m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/test 1/1 1 1 3h5m
NAME DESIRED CURRENT READY AGE
replicaset.apps/test-56dcc9cf9c 1 1 1 3h5m
[root@VM-33-13-tlinux /mnt/fast-disks]# kubectl exec -it pod/test-56dcc9cf9c-8w944 -n weixnie bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@test-56dcc9cf9c-8w944:/# cd /tmp/
root@test-56dcc9cf9c-8w944:/tmp# ls
test.txt
root@test-56dcc9cf9c-8w944:/tmp# cat test.txt
hello
root@test-56dcc9cf9c-8w944:/tmp# exit
exit
[root@VM-33-13-tlinux /mnt/fast-disks]# ls
test.txt
[root@VM-33-13-tlinux /mnt/fast-disks]# cat test.txt
hello
The check above shows that /tmp/test.txt inside the pod landed in the node's /mnt/fast-disks directory, confirming that the local-volume mount works.
Finally, we do not generally recommend local volumes for storage: PVs must be created by hand before anything can mount, which is cumbersome, and the data is tied to a single node. If your application has disk performance requirements, you can instead use the CBS volumes that TKE mounts by default; the CBS component supports the different cloud disk types.
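As a contrast to the manual flow above, a dynamically provisioned CBS StorageClass might look like the sketch below. This is an illustrative fragment: it assumes the TKE CBS CSI add-on is installed, and the provisioner name (com.tencent.cloud.csi.cbs) and the type parameter values should be checked against the current Tencent Cloud documentation.

```yaml
# Illustrative sketch: dynamic provisioning of Tencent Cloud CBS disks.
# Provisioner name and parameters assume the TKE CBS CSI add-on.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cbs-ssd
provisioner: com.tencent.cloud.csi.cbs
parameters:
  type: CLOUD_SSD        # disk class, e.g. CLOUD_PREMIUM or CLOUD_SSD
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

A PVC that references this class gets its cloud disk and PV created automatically, so no manual PV step is needed.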