[Technical Support] rook-ceph deployment failed
이*진 | 2024-04-30 18:51:42 | hits: 80
Hello,
We previously had cp-cluster and cp-portal installed on physical servers; we removed them with the reset script and are now redeploying.
When running deploy-cp-cluster.sh, the rook-ceph-osd pods are not being deployed.
TASK [cp/storage : Deploy rook cluster] ******************************************************************************
changed: [master01]
Tuesday 30 April 2024 16:54:32 +0900 (0:00:01.554) 0:22:07.014 *********
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (60 retries left).
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (59 retries left).
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (58 retries left).
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (57 retries left).
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (56 retries left).
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (55 retries left).
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (54 retries left).
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (53 retries left).
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (52 retries left).
FAILED - RETRYING: [master01]: Check rook-ceph-osd-0 status (51 retries left).
Before running the reset script, lsblk showed that rook-ceph was using a partition on /dev/sdb3.
We deleted that partition with the commands below and then redeployed with the deploy-cp-cluster.sh script.
DISK=/dev/sdb
sudo sgdisk --zap-all $DISK                                          # wipe GPT/MBR partition tables
sudo lsblk -f
sudo dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct,dsync  # zero out the start of the disk
sudo blkdiscard $DISK                                                # discard all blocks on the device
sudo partprobe $DISK                                                 # make the kernel re-read the partition table
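For reference, a minimal verification sketch before redeploying; it assumes the same $DISK variable and a default Rook/ceph-volume layout:
sudo wipefs --all $DISK        # remove any remaining filesystem/LVM/ceph signatures
sudo vgs                       # check for leftover ceph-* LVM volume groups from a previous OSD
# sudo vgremove -f <ceph-...>  # remove them if present (the VG name is environment-specific)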
Currently, when checking with lsblk -f, no partition appears under /dev/sdb:
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
loop0 squashfs 4.0 0 100% /snap/core20/2182
loop1 squashfs 4.0 0 100% /snap/core20/2264
loop2 squashfs 4.0 0 100% /snap/lxd/27948
loop3 squashfs 4.0 0 100% /snap/lxd/28373
loop4 squashfs 4.0 0 100% /snap/snapd/21184
loop5 squashfs 4.0 0 100% /snap/snapd/21465
sda
├─sda1 vfat FAT32 78D7-32F2 1G 1% /boot/efi
└─sda2 ext4 1.0 a749f718-e329-4f9b-a5d0-b6ed0e5bd6d4 382G 7% /var/lib/containers/storage/overlay
/
sdb
nbd0
nbd1
nbd2
nbd3
nbd4
nbd5
nbd6
nbd7
nbd8
nbd9
nbd10
nbd11
nbd12
nbd13
nbd14
nbd15
rook-ceph-osd is not being deployed properly. Is there any way to resolve this?
kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-24cqm 2/2 Running 0 8m48s
csi-cephfsplugin-8fwrl 2/2 Running 0 8m48s
csi-cephfsplugin-8sbn9 2/2 Running 0 8m48s
csi-cephfsplugin-ktqwd 2/2 Running 0 8m48s
csi-cephfsplugin-lflbl 2/2 Running 0 8m48s
csi-cephfsplugin-provisioner-86788ff996-x77gh 5/5 Running 0 8m48s
csi-cephfsplugin-provisioner-86788ff996-z5tdw 5/5 Running 0 8m48s
csi-cephfsplugin-qf6mf 2/2 Running 0 8m48s
csi-cephfsplugin-vwhq2 2/2 Running 0 8m48s
csi-rbdplugin-7l6qk 2/2 Running 0 8m48s
csi-rbdplugin-7mrbb 2/2 Running 0 8m48s
csi-rbdplugin-lzpzl 2/2 Running 0 8m48s
csi-rbdplugin-provisioner-7b5494c7fd-ps692 5/5 Running 0 8m48s
csi-rbdplugin-provisioner-7b5494c7fd-xqrxj 5/5 Running 0 8m48s
csi-rbdplugin-vhhw2 2/2 Running 0 8m48s
csi-rbdplugin-vq5bj 2/2 Running 0 8m48s
csi-rbdplugin-wbw5h 2/2 Running 0 8m48s
csi-rbdplugin-wnfbk 2/2 Running 0 8m48s
rook-ceph-mon-a-7577547897-r66rj 2/2 Running 0 8m39s
rook-ceph-operator-8684cbf49d-ddkzk 1/1 Running 0 9m27s
We look forward to your reply. Thank you.
Hello, this is the Open Cloud Platform Center.
Here is our answer to your inquiry.
Please delete the cluster that is currently being deployed, remove the additional volume previously attached to the Worker nodes for rook-ceph, allocate a new volume, and then try reinstalling.
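For reference, a minimal check sketch before re-running deploy-cp-cluster.sh. The /dev/sdb device name, the /var/lib/rook path (Rook's default dataDirHostPath), and the osd-prepare label/container names below assume a default Rook install and may differ in your environment:
lsblk -f /dev/sdb                                    # the new volume should appear with an empty FSTYPE column
sudo rm -rf /var/lib/rook                            # clear leftover Rook state on each node before reinstalling
kubectl -n rook-ceph get pods -l app=rook-ceph-osd-prepare                        # after redeploying, check the prepare jobs
kubectl -n rook-ceph logs -l app=rook-ceph-osd-prepare -c provision --tail=100    # their logs usually show why no OSD was created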
Thank you.