k8s Storage Class

If a cluster contains many similar PVs, then when a PVC requests space, the match considers not only the name and access mode but also the requested capacity: the claim is bound to the PV whose size fits it best.

Run a web service as a Deployment based on the nginx image, persisting nginx's main document root: /usr/share/nginx/html. Then create a PVC and associate it with the above resources.

First create two PVs, web-pv1 (1Gi) and web-pv2 (2Gi):
[root@master ~]# vim web1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/web1
    server: 192.168.1.70
[root@master ~]# mkdir /nfsdata/web1
[root@master ~]# kubectl apply -f web1.yaml
persistentvolume/web-pv1 created
[root@master ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
web-pv1   1Gi        RWO            Recycle          Available           nfs                     6s
[root@master ~]# vim web2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/web2
    server: 192.168.1.70
[root@master ~]# mkdir /nfsdata/web2
[root@master ~]# kubectl apply -f web2.yaml
persistentvolume/web-pv2 created
[root@master ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
web-pv1   1Gi        RWO            Recycle          Available           nfs                     97s
web-pv2   2Gi        RWO            Recycle          Available           nfs                     6s
[root@master ~]# vim web.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-web
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: test-web
          mountPath: /usr/share/nginx/html
      volumes:
      - name: test-web
        persistentVolumeClaim:
          claimName: web-pvc
[root@master ~]# vim web-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[root@master ~]# kubectl apply -f web-pvc.yaml
[root@master ~]# kubectl get pvc
NAME      STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
web-pvc   Bound    web-pv1   1Gi        RWO            nfs            6s
[root@master ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
web-pv1   1Gi        RWO            Recycle          Bound       default/web-pvc   nfs                     9m8s
web-pv2   2Gi        RWO            Recycle          Available                     nfs                     7m37s
[root@master ~]# kubectl apply -f web.yaml
[root@master ~]# kubectl get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
test-web-67989b6d78-2b774   1/1     Running   0          2m1s   10.244.2.5   node02   <none>           <none>
When the name and access mode both match, capacity is taken into account and the PV whose size is closest to the request is bound.
[root@master ~]# kubectl exec -it test-web-67989b6d78-2b774 /bin/bash
root@test-web-67989b6d78-2b774:/# cd /usr/share/nginx/html/
root@test-web-67989b6d78-2b774:/usr/share/nginx/html# echo 12345 > index.html
root@test-web-67989b6d78-2b774:/usr/share/nginx/html# exit
exit
command terminated with exit code 127
[root@master ~]# curl 10.244.2.5
12345
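Since web-pv1 is backed by the NFS export on the master node, the same file should also be visible there, under the path defined in the PV:
[root@master ~]# cat /nfsdata/web1/index.html
12345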
With many services and many resource objects in play, two questions arise: to create a service with persistent data, you would need to know in advance which PVs are available; and to pre-create a PV for that service, you would also need to know roughly how much space it will use.
Storage Class: it can create the required PVs dynamically and automatically.
PVs are created by operators, while developers work with PVCs. In a large cluster there can be a great many PVs, and having operators create each one by hand is tedious; this is why dynamic provisioning (Dynamic Provisioning) exists. The PVs created above follow the static approach (Static Provisioning). The key to dynamic provisioning is the StorageClass, which serves as a template for creating PVs.
A StorageClass defines PV attributes such as storage type and size, and creating such PVs requires a storage plugin. The net effect: a user submits a PVC specifying a storage type, and if it matches a StorageClass we have defined, a PV is created for it automatically and bound to the claim.
A storage class (Storage class) is one of Kubernetes' resource types: a logical grouping that administrators create to make PV management easier, categorized for example by storage performance, overall quality of service, or backup policy. Kubernetes itself has no notion of what a category actually is; the class is only a description.
Provisioner: the storage system that supplies the storage resources. Kubernetes has multiple built-in provisioners, whose names all carry the "kubernetes.io" prefix; custom provisioners can also be defined.
Parameters: a storage class uses parameters to describe the volumes it provisions; note that the parameters differ from one provisioner to another.
ReclaimPolicy: the reclaim policy for the PVs the class creates; valid values are Delete (the default) and Retain.
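For illustration only (it is not used in this lab), here is a minimal StorageClass sketch built on one of the built-in provisioners; kubernetes.io/aws-ebs and its type parameter come from the upstream Kubernetes documentation:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs   # built-in provisioner, note the "kubernetes.io" prefix
parameters:
  type: gp2                          # provisioner-specific parameter (EBS volume type)
reclaimPolicy: Retain                # PVs created from this class are retained, not deleted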
The overall flow of dynamic provisioning based on a StorageClass:
1) The cluster administrator creates the storage class (StorageClass) in advance;
2) A user creates a persistent volume claim (PVC: PersistentVolumeClaim) that uses the storage class;
3) The claim notifies the system that it needs a persistent volume (PV: PersistentVolume);
4) The system reads the storage class's information;
5) Based on that information, the system automatically creates the PV the PVC needs in the background;
6) The user creates a Pod that uses the PVC;
7) The application in the Pod persists data through the PVC;
8) The PVC, in turn, uses the PV for the actual persistence.
For more details, see: https://www.kubernetes.org.cn/4078.html
1) The SC here is built on the NFS service, so NFS must be up and exporting:
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
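If the export were not already in place, it could be configured with something along these lines (an assumed example; adapt the export options to your environment):
[root@master ~]# cat /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# exportfs -rv
exporting *:/nfsdata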
2) RBAC permissions are needed.
RBAC is Kubernetes' API access-control mechanism: it grants permissions per identity and defines who may do what. Here it gives the SC's provisioner the permissions it needs to operate on cluster resources.
[root@master ~]# vim rbac-rolebind.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: bdqn-test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: bdqn-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole            # cluster-scoped, so it takes no namespace
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: bdqn-test
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master ~]# kubectl apply -f rbac-rolebind.yaml
namespace/bdqn-test created
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
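To double-check that the binding grants what the provisioner needs, kubectl can evaluate the service account's permissions directly (the expected answer is yes):
[root@master ~]# kubectl auth can-i create persistentvolumes --as=system:serviceaccount:bdqn-test:nfs-provisioner
yes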
3) nfs-deployment
What it does: it is really an NFS client, but it mounts the remote NFS server into a local directory through Kubernetes' built-in NFS driver, and then presents itself as the storage provider associated with the storage class.
[root@master ~]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: bdqn-test
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME    # name of the provisioner
          value: bdqn-test
        - name: NFS_SERVER          # address of the NFS server
          value: 192.168.1.70
        - name: NFS_PATH            # exported NFS path
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.70
          path: /nfsdata
[root@master ~]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created
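Before moving on, confirm the provisioner Pod is up (the name suffix and age will differ on your cluster):
[root@master ~]# kubectl get pod -n bdqn-test
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-57f49c99c7-lhd8n   1/1     Running   0          30s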
4) Create the StorageClass
[root@master ~]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass           # StorageClass is cluster-scoped, so it takes no namespace
metadata:
  name: sc-nfs
provisioner: bdqn-test       # must match the PROVISIONER_NAME value in nfs-deployment.yaml: bdqn-test
reclaimPolicy: Retain
[root@master ~]# kubectl apply -f test-storageclass.yaml
storageclass.storage.k8s.io/sc-nfs created
The provisioner field is what ties this StorageClass to the Deployment above: its value must be identical to the PROVISIONER_NAME set there (bdqn-test).
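A quick check that the class is registered (the column layout varies with the kubectl version):
[root@master ~]# kubectl get sc
NAME     PROVISIONER   AGE
sc-nfs   bdqn-test     8s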
5) Create a PVC
[root@master ~]# vim test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: bdqn-test
spec:
  storageClassName: sc-nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Mi
[root@master ~]# kubectl apply -f test-pvc.yaml
persistentvolumeclaim/test-claim created
[root@master ~]# kubectl get pvc -n bdqn-test
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-0c606810-9f93-441b-bc6b-391d7813dcab   20Mi       RWX            sc-nfs         2m15s
A PV has been generated for us automatically:
[root@master ~]# ls /nfsdata/
bdqn-test-test-claim-pvc-0c606810-9f93-441b-bc6b-391d7813dcab
[root@master ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
pvc-0c606810-9f93-441b-bc6b-391d7813dcab   20Mi       RWX            Delete           Bound    bdqn-test/test-claim   sc-nfs                  4m24s
6) Create a Pod to test
[root@master ~]# vim test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: bdqn-test
spec:
  containers:
  - name: test-pod
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - name: nfs-pvc
      mountPath: /test
  restartPolicy: OnFailure
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim
[root@master ~]# kubectl apply -f test-pod.yaml
pod/test-pod created
[root@master ~]# kubectl get pod -n bdqn-test
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-57f49c99c7-lhd8n   1/1     Running   0          27m
test-pod                                  1/1     Running   0          32s
[root@master ~]# kubectl exec -it test-pod -n bdqn-test /bin/sh
/ # cd /test
/test # touch test-file
/test # echo 123456 > test-file
/test # exit
[root@master ~]# cat /nfsdata/bdqn-test-test-claim-pvc-0c606810-9f93-441b-bc6b-391d7813dcab/test-file
123456
[root@master ~]# kubectl exec -it -n bdqn-test nfs-client-provisioner-57f49c99c7-lhd8n /bin/sh
/ # ls /
persistentvolumes
/ # cd /persistentvolumes/
/persistentvolumes # ls
bdqn-test-test-claim-pvc-0c606810-9f93-441b-bc6b-391d7813dcab
# this directory is the same one that appears under /nfsdata on the NFS server
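When the test is finished, one way to clean everything up is to delete the objects in reverse order; removing the PVC is what triggers the class's reclaim handling for the dynamically created PV:
[root@master ~]# kubectl delete -f test-pod.yaml -f test-pvc.yaml
[root@master ~]# kubectl delete -f test-storageclass.yaml -f nfs-deployment.yaml -f rbac-rolebind.yaml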