Exposing F5 AST with the NGINX Plus Ingress Controller
F5 AST (Application Study Tool) provides everything you need to quickly build and run application insights, at a reliability level below production grade. This post walks through exposing F5 AST with the NGINX Plus Ingress Controller.
For production/operational use cases, you can build on the included components and add high availability, hardened security such as Grafana OIDC integration, and more. Alternatively, the OpenTelemetry Collector can be configured to forward data to your existing production monitoring tools as needed.
Table of Contents
1. Overview
2. Prerequisites for Deploying F5 AST
3. Deploying the Otel-Collector to Collect F5 AST Metrics
4. Deploying Prometheus to Scrape F5 AST Metrics
5. Deploying Grafana to Visualize F5 AST
6. Deploying a VirtualServer to Expose F5 AST on the Web
7. Deployed Resources
8. Verifying the F5 AST Deployment
9. Conclusion
1. Overview
This guide covers deploying the F5 Application Study Tool (F5 AST), which is normally deployed with Docker, in a Kubernetes environment.
For more details, see references such as the f5devcentral GitHub repository and F5 Community Training & Labs.
2. Prerequisites for Deploying F5 AST
Installing the NGINX Plus Ingress Controller is not covered in this post. To install the NGINX Ingress Controller, see the post below.
To load the Grafana dashboards provided by F5, clone the Git repository with the following command.
git clone https://github.com/f5devcentral/application-study-tool.git
By default, the dashboards are provided as .json files under the application-study-tool/services/grafana/provisioning/dashboards/ path. To deploy them to Kubernetes, convert them into ConfigMap resources.
The shell script below converts the .json files into ConfigMaps.
#!/bin/bash
NAMESPACE="f5-ast"
FILES=(
  application-study-tool/services/grafana/provisioning/dashboards/dashboards.yaml
  application-study-tool/services/grafana/provisioning/dashboards/bigip/device/device-gtm.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/device/device-irules.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/device/device-overview.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/device/device-pools-overview.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/device/device-ssl.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/device/device-virtual-server.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/device/device-waf-overview.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/device/top-n.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/fleet/fleet-apm-session.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/fleet/fleet-cgnat.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/fleet/fleet-device-utilization.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/fleet/fleet-dos.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/fleet/fleet-firewall.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/fleet/fleet-inventory.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/fleet/fleet-virtual-server.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/fleet/ssl-certificates.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/profile/ltm-dns.json
  application-study-tool/services/grafana/provisioning/dashboards/bigip/profile/ltm-http.json
  application-study-tool/services/grafana/provisioning/dashboards/otel-collector/collector-health.json
  application-study-tool/services/grafana/provisioning/dashboards/otel-collector/receiver-stats.json
)
for FILE in "${FILES[@]}"; do
  NAME=$(basename "$FILE" | sed 's/\.[^.]*$//')  # strip the file extension
  CONFIGMAP_NAME="grafana-${NAME//_/-}"          # build the ConfigMap name (underscores become hyphens)
  echo "Creating ConfigMap: $CONFIGMAP_NAME from $FILE"
  kubectl create configmap "$CONFIGMAP_NAME" -n "$NAMESPACE" \
    --from-file="$FILE" \
    --dry-run=client -o yaml > "${CONFIGMAP_NAME}.yaml"
done
Run the script to generate the ConfigMap YAML files.
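As a sanity check, the name derivation used by the script can be exercised on a single dashboard path (the path below is one of the files from the cloned repository):

```shell
# Reproduce the script's ConfigMap naming for one dashboard file.
FILE="application-study-tool/services/grafana/provisioning/dashboards/bigip/device/top-n.json"
NAME=$(basename "$FILE" | sed 's/\.[^.]*$//')  # "top-n.json" -> "top-n"
CONFIGMAP_NAME="grafana-${NAME//_/-}"          # underscores (if any) become hyphens
echo "$CONFIGMAP_NAME"                         # prints grafana-top-n
```

The generated manifests can then be applied in one shot, e.g. with kubectl apply -f grafana-*.yaml (assuming no other files in the directory match that glob).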
3. Deploying the Otel-Collector to Collect F5 AST Metrics
Deploy an OpenTelemetry Collector to collect metrics from F5 BIG-IP.
Create a Secret with the BIG-IP passwords used to access the devices.
otel-collector-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: device-secrets
  namespace: f5-ast
type: Opaque
data:
  BIGIP_PASSWORD_1: {base64-encoded password}
  BIGIP_PASSWORD_2: {base64-encoded password}
  BIGIP_PASSWORD_3: {base64-encoded password}
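The base64 values can be produced with the base64 utility; the password below is a made-up example. Note the -n flag on echo so no trailing newline is encoded:

```shell
# Encode an example password (hypothetical value) for the Secret's data field.
echo -n 'admin123' | base64          # -> YWRtaW4xMjM=
# Round-trip check: decoding returns the original string.
echo -n 'YWRtaW4xMjM=' | base64 -d   # -> admin123
```

Alternatively, kubectl create secret generic device-secrets -n f5-ast --from-literal=BIGIP_PASSWORD_1=<password> builds the same Secret without manual encoding.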
Create a ConfigMap resource that configures the Otel-Collector to collect, process, and export metrics from the F5 BIG-IP systems.
In the receivers.yaml section of the ConfigMap below, adjust the unique receiver names and each BIG-IP device's IP, username, and password to match your environment. If you have a certificate, add a TLS configuration as well; in this post the receivers are configured without certificates.
otel-collector-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
  namespace: f5-ast
data:
  bigip-scraper-config.yaml: |
    receivers: ${file:/etc/otel-collector-config/receivers.yaml}
    processors:
      batch/local:
      batch/f5-datafabric:
        send_batch_max_size: 8192
      # Only export data to f5 (if enabled) every 300s
      interval/f5-datafabric:
        interval: 300s
      # Apply the following transformations to metrics bound for F5 Datafabric
      attributes/f5-datafabric:
        actions:
          - key: dataType
            action: upsert
            value: bigip-ast-metric
    exporters:
      otlphttp/metrics-local:
        endpoint: http://prometheus:9090/api/v1/otlp
      otlp/f5-datafabric:
        endpoint: us.edge.df.f5.com:443
        headers:
          # Requires Sensor ID and Token to authenticate.
          Authorization: "kovacs ${env:SENSOR_ID} ${env:SENSOR_SECRET_TOKEN}"
          X-F5-OTEL: "GRPC"
        tls:
          insecure: false
          ca_file: /etc/ssl/certs/ca-certificates.pem
      debug/bigip:
        verbosity: basic
        sampling_initial: 5
        sampling_thereafter: 200
    service:
      # Changed in upstream otel collector, default only responds on localhost
      telemetry:
        metrics:
          readers:
            - pull:
                exporter:
                  prometheus:
                    host: '0.0.0.0'
                    port: 8888
      pipelines: ${file:/etc/otel-collector-config/pipelines.yaml}
  receivers.yaml: |
    bigip/1: # receiver names must use the bigip/ prefix (e.g. bigip/1, bigip/ve)
      collection_interval: 60s
      data_types:
        f5.apm:
          enabled: false
        f5.cgnat:
          enabled: false
        f5.dns:
          enabled: false
        f5.dos:
          enabled: false
        f5.firewall:
          enabled: false
        f5.gtm:
          enabled: false
        f5.policy.api_protection:
          enabled: false
        f5.policy.asm:
          enabled: false
        f5.policy.firewall:
          enabled: false
        f5.policy.ip_intelligence:
          enabled: false
        f5.policy.nat:
          enabled: false
        f5.profile.dos:
          enabled: false
      endpoint: protocol://BIGIP-IP # BIG-IP address
      username: user # BIG-IP username
      password: ${env:BIGIP_PASSWORD_3} # BIG-IP password (key of the Secret created above)
      tls:
        ca_file: ''
        insecure_skip_verify: true
    bigip/2:
      collection_interval: 60s
      data_types:
        f5.apm:
          enabled: false
        f5.cgnat:
          enabled: false
        f5.dns:
          enabled: false
        f5.dos:
          enabled: false
        f5.firewall:
          enabled: false
        f5.gtm:
          enabled: false
        f5.policy.api_protection:
          enabled: false
        f5.policy.asm:
          enabled: false
        f5.policy.firewall:
          enabled: false
        f5.policy.ip_intelligence:
          enabled: false
        f5.policy.nat:
          enabled: false
        f5.profile.dos:
          enabled: false
      endpoint: protocol://BIGIP-IP
      username: user
      password: ${env:BIGIP_PASSWORD_1}
      tls:
        ca_file: ''
        insecure_skip_verify: true
    bigip/3:
      collection_interval: 60s
      data_types:
        f5.apm:
          enabled: false
        f5.cgnat:
          enabled: false
        f5.dns:
          enabled: false
        f5.dos:
          enabled: false
        f5.firewall:
          enabled: false
        f5.gtm:
          enabled: false
        f5.policy.api_protection:
          enabled: false
        f5.policy.asm:
          enabled: false
        f5.policy.firewall:
          enabled: false
        f5.policy.ip_intelligence:
          enabled: false
        f5.policy.nat:
          enabled: false
        f5.profile.dos:
          enabled: false
      endpoint: protocol://BIGIP-IP
      username: user
      password: ${env:BIGIP_PASSWORD_1}
      tls:
        ca_file: ''
        insecure_skip_verify: true
    bigip/ve:
      collection_interval: 60s
      data_types:
        f5.apm:
          enabled: false
        f5.cgnat:
          enabled: false
        f5.dns:
          enabled: false
        f5.dos:
          enabled: false
        f5.firewall:
          enabled: false
        f5.gtm:
          enabled: false
        f5.policy.api_protection:
          enabled: false
        f5.policy.asm:
          enabled: false
        f5.policy.firewall:
          enabled: false
        f5.policy.ip_intelligence:
          enabled: false
        f5.policy.nat:
          enabled: false
        f5.profile.dos:
          enabled: false
      endpoint: protocol://BIGIP-IP
      username: user
      password: ${env:BIGIP_PASSWORD_2}
      tls:
        ca_file: ''
        insecure_skip_verify: true
  pipelines.yaml: |
    metrics/local:
      exporters:
        - otlphttp/metrics-local
        - debug/bigip
      processors:
        - batch/local
      receivers: # the unique BIG-IP receiver names defined in receivers.yaml above
        - bigip/1
        - bigip/2
        - bigip/3
        - bigip/ve
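The receivers above disable certificate verification (insecure_skip_verify: true). If a CA certificate is available, each receiver's tls block can be tightened instead; a minimal sketch, assuming the CA bundle is mounted into the collector pod at a hypothetical path:

```yaml
tls:
  ca_file: /etc/otel-collector-config/ca.pem  # hypothetical path; mount the CA file into the pod
  insecure_skip_verify: false
```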
Create a Deployment resource that uses the ConfigMap defined above.
otel-collector-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  namespace: f5-ast
spec:
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: ghcr.io/f5devcentral/application-study-tool/otel_custom_collector:v0.9.2 # F5 custom otel image
          args:
            - --config=/etc/otel-collector-config/bigip-scraper-config.yaml
          env:
            - name: BIGIP_PASSWORD_1
              valueFrom:
                secretKeyRef:
                  name: device-secrets
                  key: BIGIP_PASSWORD_1
            - name: BIGIP_PASSWORD_2
              valueFrom:
                secretKeyRef:
                  name: device-secrets
                  key: BIGIP_PASSWORD_2
            - name: BIGIP_PASSWORD_3
              valueFrom:
                secretKeyRef:
                  name: device-secrets
                  key: BIGIP_PASSWORD_3
          volumeMounts:
            - name: config-volume
              mountPath: /etc/otel-collector-config
      volumes:
        - name: config-volume
          configMap:
            name: otel-collector-config
Create a Service resource to expose the Otel-Collector's port.
otel-collector-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: f5-ast
spec:
  selector:
    app: otel-collector
  ports:
    - port: 8888
      targetPort: 8888
Save the files and deploy them.
kubectl apply -f otel-collector-cm.yaml,otel-collector-deploy.yaml,otel-collector-secret.yaml,otel-collector-svc.yaml
4. Deploying Prometheus to Scrape F5 AST Metrics
Deploy Prometheus to scrape the Otel-Collector's metrics and expose them to Grafana.
Create a ConfigMap that configures Prometheus to scrape the Otel-Collector.
prometheus-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: f5-ast
data:
  prometheus.yml: |
    scrape_configs:
      - job_name: 'otel'
        static_configs:
          - targets: ['otel-collector:8888']
Create a Deployment resource that uses the ConfigMap above.
prometheus-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: f5-ast
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:v2.54.1
          args:
            - --config.file=/etc/prometheus/prometheus.yml
            - --storage.tsdb.path=/prometheus
            - --web.console.libraries=/etc/prometheus/console_libraries
            - --web.console.templates=/etc/prometheus/consoles
            - --web.enable-lifecycle
            - --enable-feature=otlp-write-receiver
            - --storage.tsdb.retention.time=1y
          volumeMounts:
            - name: config-volume
              mountPath: /etc/prometheus
          ports:
            - containerPort: 9090
      volumes:
        - name: config-volume
          configMap:
            name: prometheus-config
Likewise, create a Service resource to expose Prometheus.
prometheus-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: f5-ast
spec:
  selector:
    app: prometheus
  ports:
    - port: 9090
      targetPort: 9090
Save the files and deploy them.
kubectl apply -f prometheus-cm.yaml,prometheus-deploy.yaml,prometheus-svc.yaml
5. Deploying Grafana to Visualize F5 AST
Deploy Grafana to visualize the F5 BIG-IP metrics as dashboards.
Create a ConfigMap so Grafana can read metrics from Prometheus as a data source.
grafana-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: f5-ast
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        orgId: 1
        url: http://prometheus:9090
        basicAuth: false
        isDefault: true
        editable: true
        jsonData:
          timeInterval: 60s
Create the Grafana dashboards provided by F5 in the directory cloned during the prerequisites as ConfigMap resources (the YAML files generated by the script earlier).


Create a Deployment resource that uses the ConfigMap resources created above.
grafana-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: f5-ast
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:11.2.0
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: dashboards-config
              mountPath: /etc/grafana/provisioning/dashboards/dashboards.yaml
              subPath: dashboards.yaml
            - name: device-gtm
              mountPath: /etc/grafana/provisioning/dashboards/bigip/device/device-gtm.json
              subPath: device-gtm.json
            - name: device-irules
              mountPath: /etc/grafana/provisioning/dashboards/bigip/device/device-irules.json
              subPath: device-irules.json
            - name: device-overview
              mountPath: /etc/grafana/provisioning/dashboards/bigip/device/device-overview.json
              subPath: device-overview.json
            - name: device-pools-overview
              mountPath: /etc/grafana/provisioning/dashboards/bigip/device/device-pools-overview.json
              subPath: device-pools-overview.json
            - name: device-ssl
              mountPath: /etc/grafana/provisioning/dashboards/bigip/device/device-ssl.json
              subPath: device-ssl.json
            - name: device-virtual-server
              mountPath: /etc/grafana/provisioning/dashboards/bigip/device/device-virtual-server.json
              subPath: device-virtual-server.json
            - name: device-waf-overview
              mountPath: /etc/grafana/provisioning/dashboards/bigip/device/device-waf-overview.json
              subPath: device-waf-overview.json
            - name: top-n
              mountPath: /etc/grafana/provisioning/dashboards/bigip/device/top-n.json
              subPath: top-n.json
            - name: fleet-apm-session
              mountPath: /etc/grafana/provisioning/dashboards/bigip/fleet/fleet-apm-session.json
              subPath: fleet-apm-session.json
            - name: fleet-cgnat
              mountPath: /etc/grafana/provisioning/dashboards/bigip/fleet/fleet-cgnat.json
              subPath: fleet-cgnat.json
            - name: fleet-device-utilization
              mountPath: /etc/grafana/provisioning/dashboards/bigip/fleet/fleet-device-utilization.json
              subPath: fleet-device-utilization.json
            - name: fleet-dos
              mountPath: /etc/grafana/provisioning/dashboards/bigip/fleet/fleet-dos.json
              subPath: fleet-dos.json
            - name: fleet-firewall
              mountPath: /etc/grafana/provisioning/dashboards/bigip/fleet/fleet-firewall.json
              subPath: fleet-firewall.json
            - name: fleet-inventory
              mountPath: /etc/grafana/provisioning/dashboards/bigip/fleet/fleet-inventory.json
              subPath: fleet-inventory.json
            - name: fleet-virtual-server
              mountPath: /etc/grafana/provisioning/dashboards/bigip/fleet/fleet-virtual-server.json
              subPath: fleet-virtual-server.json
            - name: ssl-certificates
              mountPath: /etc/grafana/provisioning/dashboards/bigip/fleet/ssl-certificates.json
              subPath: ssl-certificates.json
            - name: ltm-dns
              mountPath: /etc/grafana/provisioning/dashboards/bigip/profile/ltm-dns.json
              subPath: ltm-dns.json
            - name: ltm-http
              mountPath: /etc/grafana/provisioning/dashboards/bigip/profile/ltm-http.json
              subPath: ltm-http.json
            - name: collector-health
              mountPath: /etc/grafana/provisioning/dashboards/otel-collector/collector-health.json
              subPath: collector-health.json
            - name: receiver-stats
              mountPath: /etc/grafana/provisioning/dashboards/otel-collector/receiver-stats.json
              subPath: receiver-stats.json
            - name: datasources-config
              mountPath: /etc/grafana/provisioning/datasources/datasources.yaml
              subPath: datasources.yaml
      volumes:
        - name: dashboards-config
          configMap:
            name: grafana-dashboards
        - name: device-gtm
          configMap:
            name: grafana-device-gtm
        - name: device-irules
          configMap:
            name: grafana-device-irules
        - name: device-overview
          configMap:
            name: grafana-device-overview
        - name: device-pools-overview
          configMap:
            name: grafana-device-pools-overview
        - name: device-ssl
          configMap:
            name: grafana-device-ssl
        - name: device-virtual-server
          configMap:
            name: grafana-device-virtual-server
        - name: device-waf-overview
          configMap:
            name: grafana-device-waf-overview
        - name: top-n
          configMap:
            name: grafana-top-n
        - name: fleet-apm-session
          configMap:
            name: grafana-fleet-apm-session
        - name: fleet-cgnat
          configMap:
            name: grafana-fleet-cgnat
        - name: fleet-device-utilization
          configMap:
            name: grafana-fleet-device-utilization
        - name: fleet-dos
          configMap:
            name: grafana-fleet-dos
        - name: fleet-firewall
          configMap:
            name: grafana-fleet-firewall
        - name: fleet-inventory
          configMap:
            name: grafana-fleet-inventory
        - name: fleet-virtual-server
          configMap:
            name: grafana-fleet-virtual-server
        - name: ssl-certificates
          configMap:
            name: grafana-ssl-certificates
        - name: ltm-dns
          configMap:
            name: grafana-ltm-dns
        - name: ltm-http
          configMap:
            name: grafana-ltm-http
        - name: collector-health
          configMap:
            name: grafana-collector-health
        - name: receiver-stats
          configMap:
            name: grafana-receiver-stats
        - name: datasources-config
          configMap:
            name: grafana-datasources # the data source ConfigMap created earlier
Create a Service resource to expose Grafana.
grafana-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: f5-ast
spec:
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
  type: ClusterIP
Save the files and deploy them.
kubectl apply -f grafana-cm.yaml,grafana-deploy.yaml,grafana-svc.yaml
6. Deploying a VirtualServer to Expose F5 AST on the Web
Deploy an NGINX Plus Ingress Controller VirtualServer resource to expose the F5 AST Grafana deployed above on the web.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: grafana-vs
  namespace: f5-ast
spec:
  ingressClassName: nginx
  host: ast.devopsshin.co.kr
  upstreams:
    - name: grafana
      service: grafana
      port: 3000
  routes:
    - path: /
      action:
        pass: grafana
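The VirtualServer above serves plain HTTP. To terminate HTTPS at the Ingress Controller, the spec can additionally reference a TLS secret; a minimal sketch, assuming a kubernetes.io/tls Secret named grafana-tls-secret exists in the f5-ast namespace:

```yaml
spec:
  host: ast.devopsshin.co.kr
  tls:
    secret: grafana-tls-secret  # hypothetical Secret of type kubernetes.io/tls
```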
7. Deployed Resources
Check the deployed resources.
kubectl get pods,deployment,service,secret,virtualserver -n f5-ast
NAME READY STATUS RESTARTS AGE
pod/grafana-cd4f4d5b6-62ffp 1/1 Running 0 2d18h
pod/otel-collector-9db4d84cd-8mwtb 1/1 Running 0 2d16h
pod/prometheus-8fc54f9dc-bs8th 1/1 Running 0 2d19h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1/1 1 1 2d18h
deployment.apps/otel-collector 1/1 1 1 2d19h
deployment.apps/prometheus 1/1 1 1 2d19h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana ClusterIP 10.103.106.192 <none> 3000/TCP 2d18h
service/otel-collector ClusterIP 10.98.147.185 <none> 8888/TCP 2d19h
service/prometheus ClusterIP 10.103.191.48 <none> 9090/TCP 2d19h
NAME TYPE DATA AGE
secret/device-secrets Opaque 1 2d19h
NAME STATE HOST IP PORTS AGE
virtualserver.k8s.nginx.org/grafana-vs Valid ast.devopsshin.co.kr 3d16h
ConfigMap resources:
kubectl get configmap -n f5-ast
NAME DATA AGE
grafana-collector-health 1 2d18h
grafana-dashboards 1 2d18h
grafana-datasources 1 2d19h
grafana-device-gtm 1 2d18h
grafana-device-irules 1 2d18h
grafana-device-overview 1 2d18h
grafana-device-pools-overview 1 2d18h
grafana-device-ssl 1 2d18h
grafana-device-virtual-server 1 2d18h
grafana-device-waf-overview 1 2d18h
grafana-fleet-apm-session 1 2d18h
grafana-fleet-cgnat 1 2d18h
grafana-fleet-device-utilization 1 2d18h
grafana-fleet-dos 1 2d18h
grafana-fleet-firewall 1 2d18h
grafana-fleet-inventory 1 2d18h
grafana-fleet-virtual-server 1 2d18h
grafana-ltm-dns 1 2d18h
grafana-ltm-http 1 2d18h
grafana-receiver-stats 1 2d18h
grafana-ssl-certificates 1 2d18h
grafana-top-n 1 2d18h
kube-root-ca.crt 1 3d18h
otel-collector-config 3 2d19h
prometheus-config 1 2d19h
8. Verifying the F5 AST Deployment
Browse to the host configured in the VirtualServer and verify that Grafana comes up.

In Grafana's Dashboards menu, verify that the dashboards provided by F5 have been added.

As a quick check, Dashboards > BigIP – Device > Device Overview shows an overview of the registered BIG-IP devices.

9. Conclusion
F5 AST provides everything you need to quickly build and run application insights, at a reliability level below production grade.
For production/operational use cases, you can build on the included components and add high availability, hardened security such as Grafana OIDC integration, and more. Alternatively, the OpenTelemetry Collector can be configured to forward data to your existing production monitoring tools as needed.
Deploy F5 AST to your Kubernetes environment and monitor your BIG-IP devices.
To try the commercial version of the NGINX Ingress Controller, contact NGINX STORE to discuss it.