Architecture
- Three Spring Boot applications that generate simple logs
- Logstash, which collects the logs from the three applications and forwards them to Elasticsearch
- Elasticsearch, which stores the received logs
- Kibana, which calls the Elasticsearch search and aggregate APIs for visualization
- For convenience, the ELK stack is built with deviantony/docker-elk, which runs everything via Docker Compose
Spring Boot setup
- The applications are named springboot-elk-01, springboot-elk-02, and springboot-elk-03
- Each one logs its server name every second
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@Slf4j
@SpringBootApplication
public class SpringbootElk01Application implements ApplicationRunner {

    public static void main(String[] args) {
        SpringApplication.run(SpringbootElk01Application.class, args);
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        int i = 0;
        while (true) {
            Thread.sleep(1000);
            log.info("springboot-elk-01 :: {}", ++i);
        }
    }
}
- Add a logback.xml configuration file
- Place it in the src/main/resources directory
- Add a LogstashTcpSocketAppender so the logs generated above can be shipped to Logstash
- destination is the address of the Logstash server
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%-5level %d{HH:mm:ss.SSS} [%thread] %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>127.0.0.1:5000</destination>
        <!-- encoder is required -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <root level="INFO">
        <appender-ref ref="console"/>
        <appender-ref ref="stash"/>
    </root>
</configuration>
- Add the logstash-logback-encoder dependency so the appender above can be used
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.3</version>
</dependency>
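For Gradle-based builds, the equivalent declaration would be the following (the coordinates and version simply mirror the Maven snippet above):

```groovy
dependencies {
    implementation 'net.logstash.logback:logstash-logback-encoder:6.3'
}
```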
- Configure springboot-elk-02 and springboot-elk-03 the same way
- To run all three simultaneously, make sure their server ports do not overlap
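One way to keep the ports distinct is to set server.port in each application's configuration; a sketch, assuming a standard src/main/resources/application.yml per project and ports chosen for this example:

```yaml
# springboot-elk-01/src/main/resources/application.yml
server:
  port: 8081
# springboot-elk-02 -> 8082, springboot-elk-03 -> 8083
```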
Building the ELK stack
- For convenience, deviantony/docker-elk is used to bring up Elasticsearch, Logstash, and Kibana together as Docker containers
- git clone https://github.com/deviantony/docker-elk.git
- docker-compose build && docker-compose up -d
- Elasticsearch config
- docker-elk/elasticsearch/elasticsearch.yml
---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
# Allow access from any network interface
network.host: 0.0.0.0
## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
# X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, and graph features into a single, convenient package
# A self-generated trial license enables every X-Pack feature for 30 days
xpack.license.self_generated.type: trial
# Security features have been available since 6.8; once enabled, every request must carry credentials (username/password)
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
- Kibana config
- docker-elk/kibana/kibana.yml
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.js
#
server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: changeme
- Logstash config
- docker-elk/logstash/pipeline/logstash.conf
- Opens a TCP port to receive log events
- Writes the events to an index named springboot-elk in Elasticsearch
input {
    tcp {
        port => 5000
        codec => json_lines
    }
}

## Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        index => "springboot-elk"
        user => "elastic"
        password => "changeme"
    }
}
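The json_lines codec above expects one JSON object per line over the TCP connection, which is exactly the framing LogstashTcpSocketAppender produces. As a sanity check, that wire format can be sketched in Python; the event field names are assumptions based on LogstashEncoder's default output, not taken from this post:

```python
import json
import socket
from datetime import datetime, timezone

def send_log_line(host: str, port: int, record: dict) -> None:
    """Ship one JSON log event terminated by a newline, the framing
    the json_lines codec in the Logstash TCP input expects."""
    payload = json.dumps(record) + "\n"
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload.encode("utf-8"))

# An event shaped like LogstashEncoder's default output (assumed field names).
event = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "@version": "1",
    "message": "springboot-elk-01 :: 1",
    "logger_name": "com.example.SpringbootElk01Application",
    "thread_name": "main",
    "level": "INFO",
}

# With the stack above running, this would deliver the event to Logstash:
# send_log_line("127.0.0.1", 5000, event)
```

Anything that writes newline-delimited JSON to port 5000 will land in the springboot-elk index, which is handy for testing the pipeline before wiring up the applications.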
- docker-compose.yml
version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/  # Elasticsearch build context
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:
- docker-compose up -d
Verifying in Kibana
- Start the applications above
- Log in to Kibana > Index Patterns > Create index pattern > add the index specified in logstash.conf
- An index pattern can only be created once the index actually contains data
- Check the logs under Kibana > Discover
- The logs generated by springboot-elk-01, springboot-elk-02, and springboot-elk-03 should all be visible there
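To isolate a single application's output in Discover, a KQL filter on the message field works; the query below is an illustration based on the log format used above:

```
message : "springboot-elk-01"
```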
References
1. https://github.com/deviantony/docker-elk