
[Technical Support] Inquiry about standalone deployment portal installation: issues at the Vault and Harbor installation steps
엄*용 2023-11-08 11:11:59
Hits: 247

Hello.

Thank you for your answer to my previous inquiry.

Previous inquiry: https://k-paas.or.kr/notice/qnaView/2488?query=&query_type=all&query_type1=all&query_type2=all&start=1

As you advised, I switched to OPENSTACK and proceeded, but some parts are not going well, and I am wondering whether some other configuration needs to be in place first.

First, here is the output.

$ ./deploy-cp-portal.sh

cp: cannot stat '/home/ubuntu/.env//cp-vault-vars.sh': No such file or directory

./deploy-cp-portal.sh: line 7: cp-vault-vars.sh: No such file or directory

namespace/harbor created

secret/cp-secret created

NAME: cp-harbor

LAST DEPLOYED: Wed Nov 8 01:31:54 2023

NAMESPACE: harbor

STATUS: deployed

REVISION: 1

TEST SUITE: None

NOTES:

Please wait for several minutes for Harbor deployment to complete.

Then you should be able to visit the Harbor portal at http://192.168.0.151:30002

For more details, please visit https://github.com/goharbor/harbor

[000] Please wait for several minutes for Harbor deployment to complete...

[000] Please wait for several minutes for Harbor deployment to complete...

[000] Please wait for several minutes for Harbor deployment to complete...

[502] Please wait for several minutes for Harbor deployment to complete...

[502] Please wait for several minutes for Harbor deployment to complete...

[502] Please wait for several minutes for Harbor deployment to complete...
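A note on the bracketed numbers: they appear to be HTTP status codes from a curl-based wait loop in the script — 000 while no connection can be made at all, and 502 once Harbor's nginx is up but harbor-core behind it is not. Below is a minimal sketch of what such a loop presumably looks like; the variable names are illustrative, and this is not the actual script:

HARBOR_URL="http://192.168.0.151:30002"
while true; do
  # curl reports 000 for %{http_code} when the TCP connection itself fails
  STATUS=$(curl -s -o /dev/null -w '%{http_code}' "$HARBOR_URL")
  echo "[$STATUS] Please wait for several minutes for Harbor deployment to complete..."
  [ "$STATUS" = "200" ] && break
  sleep 10
done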

 

1. The first messages appear to occur because the Vault-related file is missing.

Below is the Vault variable section of 'cp-portal-vars.sh'.

# VAULT

VAULT_IP="${K8S_MASTER_NODE_IP}" # vault url

VAULT_PORT="31654" # vault port

VAULT_ROLE_NAME="cp_role" # vault role name

VAULT_VARS_PATH="/home/ubuntu/.env/" # vault vars file path

VAULT_VARS="cp-vault-vars.sh" # vault vars file name

In my case I installed the Container Platform, and that folder simply does not exist. Is the portal perhaps only usable when installed as the Application Platform?
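Incidentally, the two error lines at the top of the output suggest the deploy script copies the Vault vars file into its working directory and then sources it. Here is a hypothetical reconstruction of those lines, inferred only from the error messages and not taken from the actual script:

# Hypothetical reconstruction of the start of deploy-cp-portal.sh,
# inferred from the 'cp: cannot stat' and 'line 7: ...' errors above.
source cp-portal-vars.sh                  # defines VAULT_VARS_PATH, VAULT_VARS
cp "${VAULT_VARS_PATH}/${VAULT_VARS}" .   # the trailing '/' in VAULT_VARS_PATH explains the '//' in the error
source "${VAULT_VARS}"                    # line 7: fails because the file was never generated

Both failures trace back to /home/ubuntu/.env/cp-vault-vars.sh never having been created, which is presumably the job of the Vault deployment step.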

2. Regarding the response errors, I examined the response messages myself, but could you tell me the cause and how to resolve them?
Looking at the script just before these messages are printed, it deploys via a Helm chart; the deployment itself completed, but some pods do not appear to be running properly.

$ sudo kubectl get pods --all-namespaces | grep harbor

harbor cp-harbor-chartmuseum-847d8c9577-24hrx 1/1 Running 0 32m

harbor cp-harbor-core-577546f98d-zs48v 0/1 CrashLoopBackOff 9 (2m37s ago) 32m

harbor cp-harbor-database-0 1/1 Running 1 (23m ago) 32m

harbor cp-harbor-jobservice-896594945-tt4cc 0/1 CrashLoopBackOff 10 (4m26s ago) 32m

harbor cp-harbor-nginx-845666c5bf-n226z 1/1 Running 0 32m

harbor cp-harbor-notary-server-59b76844ff-bt6xf 1/1 Running 7 (22m ago) 32m

harbor cp-harbor-notary-signer-84847756c9-zbx5c 1/1 Running 7 (22m ago) 32m

harbor cp-harbor-portal-cbcf46469-95j6j 1/1 Running 0 32m

harbor cp-harbor-redis-0 1/1 Running 0 32m

harbor cp-harbor-registry-6dbbff66c4-9lntl 2/2 Running 0 32m

harbor cp-harbor-trivy-0 1/1 Running 0 32m

 

When I inspected the pods with kubectl describe, the jobservice pod was reporting a connection refused error, and core is in the following state.

$ sudo kubectl describe pod cp-harbor-core-577546f98d-zs48v -n harbor

Name: cp-harbor-core-577546f98d-zs48v

Namespace: harbor

Priority: 0

Service Account: default

Node: dev-211110-passta-cluster3/192.168.0.153

Start Time: Wed, 08 Nov 2023 01:32:04 +0000

Labels: app=harbor

component=core

pod-template-hash=577546f98d

release=cp-harbor

Annotations: checksum/configmap: 11436f9dfe11463c1290b56f361093edaa51352c949875b156dde015898a19aa

checksum/secret: a802c62eb4f5a1e646e511bbedcb9b84242014836536377902cb3731c90bf078

checksum/secret-jobservice: e3907036a16afb0af47473a42a15c27807b91ecfa6c6f8ede6bb08a9a96cc6a5

cni.projectcalico.org/containerID: 6c438ed6cfef177192e19d303859a274724095f8331363a272defa062c95e8c3

cni.projectcalico.org/podIP: 10.233.83.154/32

cni.projectcalico.org/podIPs: 10.233.83.154/32

Status: Running

IP: 10.233.83.154

IPs:

IP: 10.233.83.154

Controlled By: ReplicaSet/cp-harbor-core-577546f98d

Containers:

core:

Container ID: cri-o://5664f3254d8749f019fc1fddde4c1a2784497df9e5bfb00cc3768ce0f976d3c0

Image: goharbor/harbor-core:v2.6.0

Image ID: docker.io/goharbor/harbor-core@sha256:2f8c6f136606ba736db0673a5acab19ed61ada618e58453267eb0f4bc6b79f18

Port: 8080/TCP

Host Port: 0/TCP

State: Waiting

Reason: CrashLoopBackOff

Last State: Terminated

Reason: Error

Exit Code: 1

Started: Wed, 08 Nov 2023 02:02:05 +0000

Finished: Wed, 08 Nov 2023 02:02:06 +0000

Ready: False

Restart Count: 9

Liveness: http-get http://:8080/api/v2.0/ping delay=0s timeout=1s period=10s #success=1 #failure=2

Readiness: http-get http://:8080/api/v2.0/ping delay=0s timeout=1s period=10s #success=1 #failure=2

Startup: http-get http://:8080/api/v2.0/ping delay=10s timeout=1s period=10s #success=1 #failure=360

Environment Variables from:

cp-harbor-core ConfigMap Optional: false

cp-harbor-core Secret Optional: false

Environment:

CORE_SECRET: Optional: false

JOBSERVICE_SECRET: Optional: false

Mounts:

/etc/core/app.conf from config (rw,path="app.conf")

/etc/core/key from secret-key (rw,path="key")

/etc/core/private_key.pem from token-service-private-key (rw,path="tls.key")

/etc/core/token from psc (rw)

Conditions:

Type Status

Initialized True

Ready False

ContainersReady False

PodScheduled True

Volumes:

config:

Type: ConfigMap (a volume populated by a ConfigMap)

Name: cp-harbor-core

Optional: false

secret-key:

Type: Secret (a volume populated by a Secret)

SecretName: cp-harbor-core

Optional: false

token-service-private-key:

Type: Secret (a volume populated by a Secret)

SecretName: cp-harbor-core

Optional: false

psc:

Type: EmptyDir (a temporary directory that shares a pod's lifetime)

Medium:

SizeLimit:

QoS Class: BestEffort

Node-Selectors:

Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s

node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Events:

Type Reason Age From Message

---- ------ ---- ---- -------

Normal Scheduled 31m default-scheduler Successfully assigned harbor/cp-harbor-core-577546f98d-zs48v to dev-211110-passta-cluster3

Normal Pulled 30m kubelet Successfully pulled image "goharbor/harbor-core:v2.6.0" in 10.731386832s (22.497940356s including waiting)

Normal Created 29m (x2 over 30m) kubelet Created container core

Normal Started 29m (x2 over 30m) kubelet Started container core

Normal Pulled 29m kubelet Successfully pulled image "goharbor/harbor-core:v2.6.0" in 2.651556456s (2.651561476s including waiting)

Normal Pulling 27m (x3 over 31m) kubelet Pulling image "goharbor/harbor-core:v2.6.0"

Warning Unhealthy 20m (x36 over 30m) kubelet Startup probe failed: Get "http://10.233.83.154:8080/api/v2.0/ping": dial tcp 10.233.83.154:8080: connect: connection refused

Warning BackOff 3m20s (x96 over 28m) kubelet Back-off restarting failed container core in pod cp-harbor-core-577546f98d-zs48v_harbor(e2fc30d6-9edf-4947-831c-9e344e99f626)

 

Since the events alone did not make the situation clear, I looked at the logs and saw the following.

$ kubectl logs cp-harbor-core-577546f98d-zs48v -n harbor --previous

Appending internal tls trust CA to ca-bundle ...

find: '/etc/harbor/ssl': No such file or directory

Internal tls trust CA appending is Done.

2023-11-08T02:02:05Z [INFO] [/controller/artifact/annotation/parser.go:71]: the annotation parser to parser artifact annotation version v1alpha1 registered

2023-11-08T02:02:05Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to process media type application/vnd.wasm.config.v1+json registered

2023-11-08T02:02:05Z [ERROR] [/lib/config/config.go:81]: failed to get config manager

2023-11-08T02:02:05Z [ERROR] [/lib/config/config.go:81]: failed to get config manager

2023-11-08T02:02:05Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to process media type application/vnd.cncf.helm.config.v1+json registered

2023-11-08T02:02:05Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to process media type application/vnd.cnab.manifest.v1 registered

2023-11-08T02:02:05Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to process media type application/vnd.oci.image.index.v1+json registered

2023-11-08T02:02:05Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to process media type application/vnd.docker.distribution.manifest.list.v2+json registered

2023-11-08T02:02:05Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to process media type application/vnd.docker.distribution.manifest.v1+prettyjws registered

2023-11-08T02:02:05Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to process media type application/vnd.oci.image.config.v1+json registered

2023-11-08T02:02:05Z [INFO] [/controller/artifact/processor/processor.go:59]: the processor to process media type application/vnd.docker.container.image.v1+json registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/native/adapter.go:36]: the factory for adapter docker-registry registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/aliacr/adapter.go:30]: the factory for adapter ali-acr registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/artifacthub/adapter.go:30]: the factory for adapter artifact-hub registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/awsecr/adapter.go:44]: the factory for adapter aws-ecr registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/azurecr/adapter.go:29]: Factory for adapter azure-acr registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/dockerhub/adapter.go:26]: Factory for adapter docker-hub registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/dtr/adapter.go:22]: the factory of dtr adapter was registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/githubcr/adapter.go:29]: the factory for adapter github-ghcr registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/gitlab/adapter.go:18]: the factory for adapter gitlab registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/googlegcr/adapter.go:37]: the factory for adapter google-gcr registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/harbor/adaper.go:31]: the factory for adapter harbor registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/helmhub/adapter.go:30]: the factory for adapter helm-hub registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/huawei/huawei_adapter.go:40]: the factory of Huawei adapter was registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/jfrog/adapter.go:43]: the factory of jfrog artifactory adapter was registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/quay/adapter.go:54]: the factory of Quay adapter was registered

2023-11-08T02:02:05Z [INFO] [/pkg/reg/adapter/tencentcr/adapter.go:41]: the factory for adapter tencent-tcr registered

2023-11-08T02:02:05Z [INFO] [/core/controllers/base.go:153]: Config path: /etc/core/app.conf

2023-11-08T02:02:05Z [INFO] [/core/main.go:136]: initializing cache ...

2023-11-08T02:02:05Z [INFO] [/core/main.go:146]: initializing configurations...

2023-11-08T02:02:05Z [INFO] [/lib/config/systemconfig.go:198]: key path: /etc/core/key

2023-11-08T02:02:05Z [INFO] [/lib/config/config.go:92]: init secret store

2023-11-08T02:02:05Z [INFO] [/core/main.go:148]: configurations initialization completed

2023-11-08T02:02:05Z [INFO] [/common/dao/base.go:67]: Registering database: type-PostgreSQL host-cp-harbor-database port-5432 database-registry sslmode-"disable"

2023-11-08T02:02:05Z [INFO] [/common/dao/base.go:72]: Register database completed

2023-11-08T02:02:06Z [INFO] [/common/dao/pgsql.go:131]: Upgrading schema for pgsql ...

2023-11-08T02:02:06Z [ERROR] [/common/dao/pgsql.go:136]: Failed to upgrade schema, error: "Dirty database version 80. Fix and force version."

2023-11-08T02:02:06Z [FATAL] [/core/main.go:177]: failed to migrate the database, error: Dirty database version 80. Fix and force version.
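From what I could find, the final FATAL message comes from golang-migrate, which marks its schema_migrations table "dirty" when a migration is interrupted mid-run and then refuses to start until the flag is cleared. A commonly suggested remedy, which I have not tried yet, would look like the following — assuming Harbor's default PostgreSQL database name "registry" and the stock database pod, and noting that whether migration 80 actually completed should be checked first:

# Assumption: harbor-core's migrations live in the 'registry' database and
# golang-migrate tracks state in schema_migrations (columns: version, dirty).
$ kubectl exec -it cp-harbor-database-0 -n harbor -- \
    psql -U postgres -d registry \
    -c "UPDATE schema_migrations SET dirty = false WHERE version = 80;"

# Restart harbor-core so it retries the migration from a clean state:
$ kubectl delete pod -n harbor -l app=harbor,component=core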

 

If you know anything about this issue, I would greatly appreciate your advice.

Hello, this is the Open Cloud Platform Center (개방형 클라우드 플랫폼 센터).
Here is our answer to your inquiry.

Harbor can only operate normally when communication between its components is working.
Judging from the logs, the health check is failing because inter-component communication is not working. Please run `kubectl get pod -n harbor -o wide` to identify which pod owns the 10.233.83.154 address shown in the logs, and then check from the harbor-core pod, using curl against 10.233.83.154:8080, that there is no communication problem.
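For example (the pod name is taken from the output above; note that the harbor-core image may not ship curl, in which case the same check can be run from any other pod on the cluster network):

$ kubectl get pod -n harbor -o wide | grep 10.233.83.154
$ kubectl exec -it cp-harbor-core-577546f98d-zs48v -n harbor -- curl -v http://10.233.83.154:8080/api/v2.0/ping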

Please understand that when your environment differs from the default environment described in our guide, it takes us some time to provide an answer to the issue.

Thank you.
