Building on the earlier post about using an external GlusterFS cluster as dynamically provisioned storage for Kubernetes (http://bryan.wiki/283), this post covers another dynamic provisioning option. Reusing the environment built so far in parts 1-3 of this series, we will run GlusterFS in containers as a DaemonSet pod on each minion, use the extra HDD attached to each minion (shown as /dev/sdx in the diagram) as GlusterFS's distributed disks, integrate the whole thing with Kubernetes through the Heketi API, and then actually create and test a PV.
With this hyper-converged approach there is no need for a separate storage solution such as an external GlusterFS cluster: every time you add a minion node with its own storage, the cluster's compute capacity and storage capacity grow together, giving you a truly scalable computing cluster.
Static provisioning was covered briefly in the previous post, so it is not repeated here. Besides GlusterFS, storage backends that support dynamic provisioning include OpenStack Cinder, AWS EBS, GCE Persistent Disk, Ceph RBD, and NetApp Trident.
* This setup reuses the environment from parts 1-3 as-is, turning kubenode1/2/3 into Gluster storage nodes by running a GlusterFS pod on each of them.
* Reference: gluster_kubernetes.pdf
Prerequisites
* On every k8s node, install the GlusterFS client and enable the required SELinux boolean with setsebool (a loop to push this to all nodes is sketched after the commands below).
[root@kubenode01 ~]# yum install -y centos-release-gluster310.noarch
[root@kubenode01 ~]# yum install -y glusterfs-client
[root@kubenode01 ~]# setsebool -P virt_sandbox_use_fusefs on
Download the gluster-kubernetes repo from GitHub
[root@kubemaster ~]# git clone https://github.com/gluster/gluster-kubernetes.git
Cloning into 'gluster-kubernetes'...
remote: Counting objects: 2024, done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 2024 (delta 5), reused 9 (delta 2), pack-reused 2008
Receiving objects: 100% (2024/2024), 987.73 KiB | 456.00 KiB/s, done.
Resolving deltas: 100% (1041/1041), done.
[root@kubemaster ~]# cd gluster-kubernetes/deploy
[root@kubemaster deploy]# cp topology.json.sample topology.json
[root@kubemaster deploy]# vi topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "kubenode1"
              ],
              "storage": [
                "10.255.10.171"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "kubenode2"
              ],
              "storage": [
                "10.255.20.170"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "kubenode3"
              ],
              "storage": [
                "10.255.20.171"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}
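Before running gk-deploy it is worth confirming that the device listed for each node (/dev/sdb here) is a bare, unpartitioned block device, since Heketi takes it over as an LVM physical volume. A minimal check, assuming the same node names and device as in the topology above:
# Verify that /dev/sdb on each node is empty (no partitions, no filesystem/LVM signatures)
for node in kubenode1 kubenode2 kubenode3; do
  echo "== ${node} =="
  ssh "root@${node}" "lsblk /dev/sdb; blkid /dev/sdb || echo 'no signature found (good)'"
done
# If an old signature is present, wiping it (DESTROYS all data on /dev/sdb) would look like:
#   ssh root@kubenodeN "wipefs -a /dev/sdb"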
Install the GlusterFS and Heketi Pods and Services - run gk-deploy
[root@kubemaster deploy]# ./gk-deploy -g
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.
Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.
The client machine that will run this script must have:
* Administrative access to an existing Kubernetes or OpenShift cluster
* Access to a python interpreter 'python'
Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
* 2222 - sshd (if running GlusterFS in a pod)
* 24007 - GlusterFS Management
* 24008 - GlusterFS RDMA
* 49152 to 49251 - Each brick for every volume on the host requires its own
port. For every new brick, one new port will be used starting at 49152. We
recommend a default range of 49152-49251 on each host, though you can adjust
this to fit your needs.
The following kernel modules must be loaded:
* dm_snapshot
* dm_mirror
* dm_thin_pool
For systems with SELinux, the following settings need to be considered:
* virt_sandbox_use_fusefs should be enabled on each node to allow writing to
remote GlusterFS volumes
In addition, for an OpenShift deployment you must:
* Have 'cluster_admin' role on the administrative account doing the deployment
* Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
* Have a router deployed that is configured to allow apps to access services
running in the cluster
Do you wish to proceed with deployment?
[Y]es, [N]o? [Default: Y]:
Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
GlusterFS pods ... not found.
deploy-heketi pod ... not found.
heketi pod ... not found.
Creating initial resources ... serviceaccount "heketi-service-account" created
clusterrolebinding "heketi-sa-view" created
clusterrolebinding "heketi-sa-view" labeled
OK
node "kubenode1" labeled
node "kubenode2" labeled
node "kubenode3" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
deployment "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: d9b90e275621648ad6aacb67759662c9
Creating node kubenode1 ... ID: 858e047eb0cb8943be7b1cd947a26e3c
Adding device /dev/sdb ... OK
Creating node kubenode2 ... ID: 6e9368eea425afb561846576c7b7df70
Adding device /dev/sdb ... OK
Creating node kubenode3 ... ID: 0c3da29a5fa92681a5bc64cf32e9e7e5
Adding device /dev/sdb ... OK
heketi topology loaded.
Saving /tmp/heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
service "heketi-storage-endpoints" labeled
pod "deploy-heketi-2199298601-mrh2p" deleted
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
deployment "deploy-heketi" deleted
secret "heketi-storage-secret" deleted
service "heketi" created
deployment "heketi" created
Waiting for heketi pod to start ... OK
heketi is now running and accessible via http://172.31.45.204:8080 . To run
administrative commands you can install 'heketi-cli' and use it as follows:
# heketi-cli -s http://172.31.45.204:8080 --user admin --secret '<ADMIN_KEY>' cluster list
You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:
# /usr/bin/kubectl -n default exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list
For dynamic provisioning, create a StorageClass similar to this:
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://172.31.45.204:8080"
Deployment complete!
* If the DaemonSet pods (glusterfs-xxxxx) take a long time to start, gk-deploy may give up with a "Waiting for ... pods not found." message. In that case, wait until all of the DaemonSet's pods reach the Ready state and then run gk-deploy again (a readiness check is sketched after the image pull output below).
* If your environment is likely to hit such timeouts, consider pre-pulling the key container images with "docker pull gluster/gluster-centos:latest && docker pull heketi/heketi:dev". Running the following on every host before gk-deploy prevents timeouts caused by the images being downloaded from the internet during deployment (run it identically on all hosts).
[root@kubenode1 ~]# docker pull gluster/gluster-centos:latest && docker pull heketi/heketi:dev
Trying to pull repository docker.io/gluster/gluster-centos ...
latest: Pulling from docker.io/gluster/gluster-centos
af4b0a2388c6: Pull complete
9047bb9800bc: Pull complete
58503fe557c9: Pull complete
db97d0162fd3: Pull complete
91198299ddcd: Pull complete
Digest: sha256:2a37b7287fa81131bf94f7344ff8b3bf9f1e179e936e5f086d978ff412723bf8
Status: Downloaded newer image for docker.io/gluster/gluster-centos:latest
Trying to pull repository docker.io/heketi/heketi ...
dev: Pulling from docker.io/heketi/heketi
2176639d844b: Pull complete
cd78d83c3c66: Pull complete
1798f56b8b7a: Pull complete
e1ba73362526: Pull complete
359781b6c209: Pull complete
Digest: sha256:79eddc6efc51cc76c3ff62eb534fe595376c4a3c25f441a7e047d00e5f6a15d1
Status: Downloaded newer image for docker.io/heketi/heketi:dev
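To tell when it is safe to re-run gk-deploy after such a timeout, watch the glusterfs DaemonSet (created by gk-deploy above) until all of its pods report Ready:
[root@kubemaster ~]# kubectl get daemonset glusterfs -w
[root@kubemaster ~]# kubectl get pods -o wide | grep glusterfs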
Verify that the Gluster cluster is working via the Heketi CLI
[root@kubemaster deploy]# echo "export HEKETI_CLI_SERVER=$(kubectl get svc/heketi --template 'http://{{.spec.clusterIP}}:{{(index .spec.ports 0).port}}')" | tee -a ~/.bashrc
export HEKETI_CLI_SERVER=http://10.128.23.39:8080
[root@kubemaster deploy]# yum install -y heketi-client
[root@kubemaster deploy]# source ~/.bashrc
[root@kubemaster deploy]# heketi-cli cluster list
Clusters:
d9b90e275621648ad6aacb67759662c9
[root@kubemaster deploy]# heketi-cli cluster info d9b90e275621648ad6aacb67759662c9
Cluster id: d9b90e275621648ad6aacb67759662c9
Nodes:
0c3da29a5fa92681a5bc64cf32e9e7e5
6e9368eea425afb561846576c7b7df70
858e047eb0cb8943be7b1cd947a26e3c
Volumes:
254add2981864590710f85567895758a
[root@kubemaster ~]# heketi-cli volume info 254add2981864590710f85567895758a
Name: heketidbstorage
Size: 2
Volume Id: 254add2981864590710f85567895758a
Cluster Id: d9b90e275621648ad6aacb67759662c9
Mount: 10.255.10.171:heketidbstorage
Mount Options: backup-volfile-servers=10.255.20.170,10.255.20.171
Durability Type: replicate
Distributed+Replica: 3
* Test creating a Gluster volume
[root@kubemaster ~]# heketi-cli volume create --size=1
Name: vol_5a876cb91239b4de80c3a00797888254
Size: 1
Volume Id: 5a876cb91239b4de80c3a00797888254
Cluster Id: d9b90e275621648ad6aacb67759662c9
Mount: 10.255.10.171:vol_5a876cb91239b4de80c3a00797888254
Mount Options: backup-volfile-servers=10.255.20.170,10.255.20.171
Durability Type: replicate
Distributed+Replica: 3
[root@kubemaster ~]# heketi-cli volume delete 5a876cb91239b4de80c3a00797888254
Volume 5a876cb91239b4de80c3a00797888254 deleted
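For a fuller sanity check, heketi-cli can also dump the entire topology it manages (clusters, nodes, devices, and bricks) in a single command; the output is long, so it is not reproduced here:
[root@kubemaster ~]# heketi-cli topology info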
Preparing Kubernetes to use Gluster PVs and testing PV creation
- Create a StorageClass
- The value of metadata.name is what a persistent volume claim (PVC) will later reference in its annotation
- resturl must be the same URL as the HEKETI_CLI_SERVER environment variable set above
[root@kubemaster ~]# mkdir gluster-storage-setup-test
[root@kubemaster ~]# cd gluster-storage-setup-test/
[root@kubemaster gluster-storage-setup-test]# vi 00-gluster-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.128.23.39:8080"
[root@kubemaster gluster-storage-setup-test]# kubectl create -f 00-gluster-storageclass.yaml
storageclass "glusterfs-storage" created
* Create a 1 GiB PVC (a storage allocation request) and test it
[root@kubemaster gluster-storage-setup-test]# vi test-pvc-1gi.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-dyn-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
* accessModes can be one of three types: RWO (ReadWriteOnce), ROX (ReadOnlyMany), RWX (ReadWriteMany)
[root@kubemaster gluster-storage-setup-test]# kubectl create -f test-pvc-1gi.yaml
persistentvolumeclaim "test-dyn-pvc" created
[root@kubemaster gluster-storage-setup-test]# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
pvc/test-dyn-pvc Bound pvc-1f3065fe-8900-11e7-827f-08002729d0c4 1Gi RWO glusterfs-storage 11s
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/pvc-1f3065fe-8900-11e7-827f-08002729d0c4 1Gi RWO Delete Bound default/test-dyn-pvc glusterfs-storage 6s
* A PV corresponding to test-dyn-pvc has been created automatically
Create a Pod that uses the provisioned PV and work with the mounted volume
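To see which Gluster volume actually backs the claim, describe the PV; the Source section of the output shows the Gluster endpoints and volume path:
[root@kubemaster gluster-storage-setup-test]# kubectl describe pv pvc-1f3065fe-8900-11e7-827f-08002729d0c4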
[root@kubemaster gluster-storage-setup-test]# vi test-nginx-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-pv
  labels:
    name: nginx-pod-pv
spec:
  containers:
  - name: nginx-pod-pv
    image: gcr.io/google_containers/nginx-slim:0.8
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: test-dyn-pvc
[root@kubemaster gluster-storage-setup-test]# kubectl create -f test-nginx-pvc.yaml
pod "nginx-pod-pv" created
* The mount point inside the Pod is /usr/share/nginx/html, nginx's default html directory
* The volume references the PVC name, test-dyn-pvc, not the PV name directly
[root@kubemaster gluster-storage-setup-test]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
busybox-for-dnstest 1/1 Running 29 1d 172.31.35.72 kubenode2
glusterfs-22v6s 1/1 Running 0 9h 10.255.20.171 kubenode3
glusterfs-pjsvb 1/1 Running 0 9h 10.255.10.171 kubenode1
glusterfs-tpd0h 1/1 Running 0 9h 10.255.20.170 kubenode2
heketi-2660258935-spb5g 1/1 Running 0 9h 172.31.45.204 kubenode3
hostnames-2923313648-3104j 1/1 Running 0 1d 172.31.35.73 kubenode2
hostnames-2923313648-h4n8j 1/1 Running 0 1d 172.31.45.199 kubenode3
hostnames-2923313648-r89h4 1/1 Running 0 1d 172.31.205.197 kubenode1
nginx-pod-pv 1/1 Running 0 3m 172.31.35.77 kubenode2
[root@kubemaster gluster-storage-setup-test]# kubectl exec -it nginx-pod-pv -- /bin/sh
# df -h
Filesystem Size Used Avail Use% Mounted on
...
10.255.10.171:vol_989e89f4758c8749c00ed6fd2fc1db92 1016M 33M 983M 4% /usr/share/nginx/html
...
# echo 'Hello World from GlusterFS!!!' > /usr/share/nginx/html/index.html
# exit
[root@kubemaster gluster-storage-setup-test]# curl http://172.31.35.77
Hello World from GlusterFS!!!
* Write (change) the web server's content, index.html
* Test web access to that URL using the nginx Pod's IP
Inspect the data directly inside a GlusterFS Pod (acting as a Gluster node)
[root@kubemaster ~]# kubectl exec -it glusterfs-22v6s -- /bin/sh
sh-4.2# mount | grep heketi
/dev/mapper/cl-root on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg_be4e161a83f2835fb8e9c61ee2513a69-brick_8ff06f0a54b9155179dd70a5fa02784b on /var/lib/heketi/mounts/vg_be4e161a83f2835fb8e9c61ee2513a69/brick_8ff06f0a54b9155179dd70a5fa02784b type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/mapper/vg_be4e161a83f2835fb8e9c61ee2513a69-brick_7cebe446228298f5b11b171567edf30f on /var/lib/heketi/mounts/vg_be4e161a83f2835fb8e9c61ee2513a69/brick_7cebe446228298f5b11b171567edf30f type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
sh-4.2# cd /var/lib/heketi/mounts/vg_be4e161a83f2835fb8e9c61ee2513a69/brick_7cebe446228298f5b11b171567edf30f/brick
sh-4.2# cat index.html
Hello World from GlusterFS!!!
Setting the default StorageClass
- A Kubernetes cluster can have multiple StorageClasses
- The steps below make GlusterFS the default StorageClass, used whenever no class is explicitly specified (a PVC that relies on the default class is sketched after the commands below)
[root@kubemaster ~]# kubectl get sc
NAME PROVISIONER AGE
glusterfs-storage kubernetes.io/glusterfs 32d
[root@kubemaster ~]# kubectl patch storageclass glusterfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass "glusterfs-storage" patched
[root@kubemaster ~]# kubectl get sc
NAME PROVISIONER AGE
glusterfs-storage (default) kubernetes.io/glusterfs 32d
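With a default StorageClass in place (and the DefaultStorageClass admission plugin enabled, as it is in typical kubeadm setups), a PVC that specifies no class at all is provisioned from GlusterFS automatically. A minimal sketch, with a hypothetical claim name:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-default-sc-pvc      # hypothetical name, for illustration only
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi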
Delete the test volume and Pod that are no longer needed
[root@kubemaster gluster-storage-setup-test]# kubectl delete -f test-nginx-pvc.yaml
pod "nginx-pod-pv" deleted
[root@kubemaster gluster-storage-setup-test]# kubectl delete -f test-pvc-1gi.yaml
persistentvolumeclaim "test-dyn-pvc" deleted
[root@kubemaster gluster-storage-setup-test]# kubectl get pv
No resources found.
[root@kubemaster gluster-storage-setup-test]# heketi-cli volume list
Id:254add2981864590710f85567895758a Cluster:d9b90e275621648ad6aacb67759662c9 Name:heketidbstorage
* When the PVC is deleted with kubectl, the PV is deleted as well, and the Heketi CLI confirms that the corresponding Gluster volume has been removed
(do not delete the heketidbstorage volume: it is the built-in volume that stores Heketi's metadata)
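If you ever need to delete a claim but keep its data, one option is to switch the bound PV's reclaim policy to Retain before deleting the PVC; the PV and the underlying Gluster volume are then left in place for manual cleanup. A sketch, using a placeholder PV name:
[root@kubemaster ~]# kubectl patch pv <PV_NAME> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'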
- Barracuda -
[Related posts]
[Kubernetes] Building a Kubernetes Cluster on CentOS 7.3 (with Flannel) - 1/4
[Kubernetes] Building a Kubernetes Cluster on CentOS 7.3 (adding nodes) - 2/4
[Kubernetes] Hyper-converged GlusterFS integration with Heketi - 4/4
[GlusterFS & Kubernetes] External Gluster PV with Heketi CLI/Rest API
'Technical > Cloud, Virtualization, Containers' 카테고리의 다른 글
[Kubernetes RBAC] API 연결로 확인해 보는 RBAC 설정 방법 (0) | 2017.12.01 |
---|---|
[Kubernetes ingress part-1] Nginx ingress 구현과 사용 방법 (0) | 2017.11.03 |
[Kubernetes] 1.7.x~1.9.x, kubeadm 으로 L3 네트워크 기반 Cluster 구성(with Calico CNI)-3/4 (2) | 2017.08.23 |
[Kubernetes] CentOS 7.3 으로 Kubernetes Cluster 구성(노드 추가하기)-2/4 (0) | 2017.08.13 |
[GlusterFS & Kubernetes] External Gluster PV with Heketi CLI/Rest API (0) | 2017.08.05 |