From Planning to Operations: A Hands-On Guide to Enterprise System Development
```yaml
version: '3.8'

services:
  taiga-db:
    image: postgres:16
    environment:
      POSTGRES_DB: taiga
      POSTGRES_USER: taiga
      POSTGRES_PASSWORD: taiga_password
    volumes:
      - taiga_db_data:/var/lib/postgresql/data

  taiga-back:
    image: taigaio/taiga-back:latest
    environment:
      POSTGRES_DB: taiga
      POSTGRES_USER: taiga
      POSTGRES_HOST: taiga-db
      TAIGA_SECRET_KEY: your-secret-key
      TAIGA_SITES_SCHEME: http
      TAIGA_SITES_DOMAIN: localhost:9000
    depends_on:
      - taiga-db
    volumes:
      - taiga_static_data:/taiga-back/static
      - taiga_media_data:/taiga-back/media

  taiga-front:
    image: taigaio/taiga-front:latest
    environment:
      TAIGA_URL: http://localhost:9000/api/v1/
    ports:
      - "9000:80"
    depends_on:
      - taiga-back

volumes:
  taiga_db_data:
  taiga_static_data:
  taiga_media_data:
```
### 1.2 Initial Backlog Example

**Epic: User Management System**

User Stories:
1. User sign-up (Story Points: 5)
   - Acceptance Criteria:
     - Sign-up with email and password
     - Duplicate-email validation
     - Passwords stored encrypted (hashed)
2. OAuth social login (Story Points: 8)
   - Acceptance Criteria:
     - Google and GitHub OAuth support
     - Can be linked to an existing account
3. User profile management (Story Points: 3)
   - Acceptance Criteria:
     - View and edit profile
     - Profile image upload

**Epic: Task Management System**

User Stories:
1. Task CRUD (Story Points: 5)
2. Task status management (Story Points: 3)
3. Task assignment and priority (Story Points: 5)
4. Task search and filtering (Story Points: 8)
### 1.3 System Design Document
~~~markdown
# System Design Document

## 1. Domain Model

### User Aggregate
- User (Root Entity)
  - id: UUID
  - email: String (unique)
  - password: String (encrypted)
  - oauthProvider: Enum
  - oauthId: String
  - profile: Profile (Value Object)
  - createdAt: Timestamp
  - updatedAt: Timestamp

### Task Aggregate
- Task (Root Entity)
  - id: UUID
  - title: String
  - description: Text
  - status: Enum (TODO, IN_PROGRESS, DONE)
  - priority: Enum (LOW, MEDIUM, HIGH)
  - assigneeId: UUID (User FK)
  - createdBy: UUID (User FK)
  - dueDate: Date
  - createdAt: Timestamp
  - updatedAt: Timestamp
- TaskComment (Entity)
  - id: UUID
  - taskId: UUID
  - userId: UUID
  - content: Text
  - createdAt: Timestamp

## 2. API Design

### User Service API

#### POST /api/users/register
Request:
```json
{
  "email": "user@example.com",
  "password": "SecurePass123!",
  "name": "John Doe"
}
```
Response: 201 Created
```json
{
  "id": "uuid",
  "email": "user@example.com",
  "name": "John Doe",
  "createdAt": "2026-01-11T10:00:00Z"
}
```

#### POST /api/users/login
Request:
```json
{
  "email": "user@example.com",
  "password": "SecurePass123!"
}
```
Response: 200 OK
```json
{
  "accessToken": "jwt-token",
  "refreshToken": "refresh-token",
  "expiresIn": 3600
}
```

### Task Service API

#### POST /api/tasks
Request:
```json
{
  "title": "Implement user authentication",
  "description": "Add JWT-based authentication",
  "priority": "HIGH",
  "dueDate": "2026-01-20"
}
```
Response: 201 Created
```json
{
  "id": "uuid",
  "title": "Implement user authentication",
  "status": "TODO",
  "priority": "HIGH",
  "assigneeId": null,
  "createdBy": "user-uuid",
  "dueDate": "2026-01-20",
  "createdAt": "2026-01-11T10:00:00Z"
}
```

#### GET /api/tasks?status=TODO&assignee=me
Response: 200 OK
```json
{
  "data": [
    {
      "id": "uuid",
      "title": "Task 1",
      "status": "TODO",
      ...
    }
  ],
  "meta": {
    "total": 50,
    "page": 1,
    "perPage": 20
  }
}
```

## 3. Database Schema

### Users Table
```sql
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email VARCHAR(255) UNIQUE NOT NULL,
    password_hash VARCHAR(255),
    oauth_provider VARCHAR(50),
    oauth_id VARCHAR(255),
    name VARCHAR(255) NOT NULL,
    profile_image_url TEXT,
    is_active BOOLEAN DEFAULT TRUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_oauth ON users(oauth_provider, oauth_id);
```

### Tasks Table
```sql
CREATE TABLE tasks (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    title VARCHAR(500) NOT NULL,
    description TEXT,
    status VARCHAR(50) DEFAULT 'TODO',
    priority VARCHAR(50) DEFAULT 'MEDIUM',
    assignee_id UUID REFERENCES users(id),
    created_by UUID NOT NULL REFERENCES users(id),
    due_date DATE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_assignee ON tasks(assignee_id);
CREATE INDEX idx_tasks_created_by ON tasks(created_by);
```

### Task Comments Table
```sql
CREATE TABLE task_comments (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
    user_id UUID NOT NULL REFERENCES users(id),
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_comments_task ON task_comments(task_id);
```

## 4. Event Design (Kafka Topics)
- user-events
  - UserRegistered
  - UserProfileUpdated
  - UserDeactivated
- task-events
  - TaskCreated
  - TaskStatusChanged
  - TaskAssigned
  - TaskCommentAdded
- notification-events
  - NotificationRequested
  - EmailSent
~~~
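The filter endpoint above (`GET /api/tasks?status=TODO&assignee=me`) is easiest to build on the client with `URLSearchParams` rather than string concatenation, which avoids encoding bugs. A minimal sketch (the helper name is illustrative, not part of the project code):

```javascript
// Build a /api/tasks query string from a filter object.
// Keys with undefined/null values are omitted so callers can pass
// a partial filter like { status: 'TODO' }.
function buildTaskQuery(filters) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filters)) {
    if (value !== undefined && value !== null) {
      params.set(key, String(value));
    }
  }
  const qs = params.toString();
  return qs ? `/api/tasks?${qs}` : '/api/tasks';
}

console.log(buildTaskQuery({ status: 'TODO', assignee: 'me' }));
// → /api/tasks?status=TODO&assignee=me
```

`URLSearchParams` also percent-encodes values automatically, so free-text filters (e.g. a search term) stay safe.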
## Phase 2: Development Environment Setup
### 2.1 Project Structure
```
enterprise-task-manager/
├── backend/
│   ├── api-gateway/             # Spring Cloud Gateway
│   ├── user-service/            # User management
│   ├── task-service/            # Task management
│   ├── notification-service/    # Notification handling
│   └── common/                  # Shared libraries
├── frontend/
│   └── react-app/               # React application
├── infrastructure/
│   ├── docker/                  # Docker configurations
│   ├── kubernetes/              # K8s manifests (optional)
│   └── scripts/                 # Utility scripts
├── ci-cd/
│   ├── jenkins/                 # Jenkins pipeline
│   └── github-actions/          # GitHub Actions workflows
└── docker-compose.yml           # Local development setup
```
### 2.2 Base Docker Compose Configuration
```yaml
# docker-compose.yml
version: '3.8'

services:
  # PostgreSQL Database
  postgres:
    image: postgres:16-alpine
    container_name: postgres-db
    environment:
      POSTGRES_DB: taskmanager
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin123
      POSTGRES_MULTIPLE_DATABASES: userdb,taskdb
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./infrastructure/docker/postgres/init:/docker-entrypoint-initdb.d
    networks:
      - backend

  # Redis Cache
  redis:
    image: redis:7-alpine
    container_name: redis-cache
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    networks:
      - backend

  # Kafka & Zookeeper
  zookeeper:
    image: confluentinc/cp-zookeeper:7.5.0
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - backend

  kafka:
    image: confluentinc/cp-kafka:7.5.0
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - backend

  # API Gateway
  api-gateway:
    build:
      context: ./backend/api-gateway
      dockerfile: Dockerfile
    container_name: api-gateway
    ports:
      - "8080:8080"
    environment:
      SPRING_PROFILES_ACTIVE: docker
      USER_SERVICE_URL: http://user-service:8081
      TASK_SERVICE_URL: http://task-service:8082
      REDIS_HOST: redis
      REDIS_PORT: 6379
    depends_on:
      - redis
      - user-service
      - task-service
    networks:
      - backend

  # User Service
  user-service:
    build:
      context: ./backend/user-service
      dockerfile: Dockerfile
    container_name: user-service
    ports:
      - "8081:8081"
    environment:
      SPRING_PROFILES_ACTIVE: docker
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/userdb
      SPRING_DATASOURCE_USERNAME: admin
      SPRING_DATASOURCE_PASSWORD: admin123
      REDIS_HOST: redis
      KAFKA_BOOTSTRAP_SERVERS: kafka:29092
    depends_on:
      - postgres
      - redis
      - kafka
    networks:
      - backend

  # Task Service
  task-service:
    build:
      context: ./backend/task-service
      dockerfile: Dockerfile
    container_name: task-service
    ports:
      - "8082:8082"
    environment:
      SPRING_PROFILES_ACTIVE: docker
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/taskdb
      SPRING_DATASOURCE_USERNAME: admin
      SPRING_DATASOURCE_PASSWORD: admin123
      REDIS_HOST: redis
      KAFKA_BOOTSTRAP_SERVERS: kafka:29092
    depends_on:
      - postgres
      - redis
      - kafka
    networks:
      - backend

  # Notification Service
  notification-service:
    build:
      context: ./backend/notification-service
      dockerfile: Dockerfile
    container_name: notification-service
    ports:
      - "8083:8083"
    environment:
      SPRING_PROFILES_ACTIVE: docker
      KAFKA_BOOTSTRAP_SERVERS: kafka:29092
    depends_on:
      - kafka
    networks:
      - backend

  # React Frontend
  frontend:
    build:
      context: ./frontend/react-app
      dockerfile: Dockerfile
    container_name: react-frontend
    ports:
      - "3000:80"
    depends_on:
      - api-gateway
    networks:
      - backend

volumes:
  postgres_data:
  redis_data:

networks:
  backend:
    driver: bridge
```
### 2.3 PostgreSQL Initialization Script
```sql
-- infrastructure/docker/postgres/init/01-init-databases.sql

-- Create separate databases
CREATE DATABASE userdb;
CREATE DATABASE taskdb;

-- Connect to userdb
\c userdb;

-- Users table
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email VARCHAR(255) UNIQUE NOT NULL,
    password_hash VARCHAR(255),
    oauth_provider VARCHAR(50),
    oauth_id VARCHAR(255),
    name VARCHAR(255) NOT NULL,
    profile_image_url TEXT,
    is_active BOOLEAN DEFAULT TRUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_oauth ON users(oauth_provider, oauth_id);

-- Refresh tokens table
CREATE TABLE refresh_tokens (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    token VARCHAR(500) NOT NULL,
    expires_at TIMESTAMP NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_refresh_tokens_user ON refresh_tokens(user_id);
CREATE INDEX idx_refresh_tokens_token ON refresh_tokens(token);

-- Connect to taskdb
\c taskdb;

-- Tasks table
CREATE TABLE tasks (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    title VARCHAR(500) NOT NULL,
    description TEXT,
    status VARCHAR(50) DEFAULT 'TODO',
    priority VARCHAR(50) DEFAULT 'MEDIUM',
    assignee_id UUID,
    created_by UUID NOT NULL,
    due_date DATE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_assignee ON tasks(assignee_id);
CREATE INDEX idx_tasks_created_by ON tasks(created_by);

-- Task comments table
CREATE TABLE task_comments (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
    user_id UUID NOT NULL,
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_comments_task ON task_comments(task_id);

-- Task attachments table
CREATE TABLE task_attachments (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
    file_name VARCHAR(255) NOT NULL,
    file_url TEXT NOT NULL,
    uploaded_by UUID NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_attachments_task ON task_attachments(task_id);
```
## Phase 3: Backend Development
### 3.1 User Service Structure
```
user-service/
├── src/
│   └── main/
│       ├── java/com/example/userservice/
│       │   ├── UserServiceApplication.java
│       │   ├── config/
│       │   │   ├── SecurityConfig.java
│       │   │   ├── RedisConfig.java
│       │   │   └── KafkaProducerConfig.java
│       │   ├── controller/
│       │   │   ├── UserController.java
│       │   │   └── AuthController.java
│       │   ├── service/
│       │   │   ├── UserService.java
│       │   │   ├── AuthService.java
│       │   │   └── OAuth2Service.java
│       │   ├── repository/
│       │   │   ├── UserRepository.java
│       │   │   └── RefreshTokenRepository.java
│       │   ├── entity/
│       │   │   ├── User.java
│       │   │   └── RefreshToken.java
│       │   ├── dto/
│       │   │   ├── UserRegistrationRequest.java
│       │   │   ├── LoginRequest.java
│       │   │   └── UserResponse.java
│       │   ├── security/
│       │   │   ├── JwtTokenProvider.java
│       │   │   └── OAuth2SuccessHandler.java
│       │   └── event/
│       │       └── UserEventProducer.java
│       └── resources/
│           ├── application.yml
│           └── application-docker.yml
├── Dockerfile
└── pom.xml
```
### 3.2 User Service - pom.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.2.1</version>
        <relativePath/>
    </parent>

    <groupId>com.example</groupId>
    <artifactId>user-service</artifactId>
    <version>1.0.0</version>
    <name>User Service</name>

    <properties>
        <java.version>21</java.version>
        <spring-cloud.version>2023.0.0</spring-cloud.version>
    </properties>

    <dependencies>
        <!-- Spring Boot Starters -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-oauth2-client</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-validation</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>

        <!-- Database -->
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <scope>runtime</scope>
        </dependency>

        <!-- JWT -->
        <dependency>
            <groupId>io.jsonwebtoken</groupId>
            <artifactId>jjwt-api</artifactId>
            <version>0.12.3</version>
        </dependency>
        <dependency>
            <groupId>io.jsonwebtoken</groupId>
            <artifactId>jjwt-impl</artifactId>
            <version>0.12.3</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>io.jsonwebtoken</groupId>
            <artifactId>jjwt-jackson</artifactId>
            <version>0.12.3</version>
            <scope>runtime</scope>
        </dependency>

        <!-- Lombok -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>

        <!-- Testing -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
```
### 3.3 User Entity
```java
// backend/user-service/src/main/java/com/example/userservice/entity/User.java
package com.example.userservice.entity;

import jakarta.persistence.*;
import lombok.*;
import org.hibernate.annotations.CreationTimestamp;
import org.hibernate.annotations.UpdateTimestamp;

import java.time.LocalDateTime;
import java.util.UUID;

@Entity
@Table(name = "users")
@Getter
@Setter
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.UUID)
    private UUID id;

    @Column(unique = true, nullable = false)
    private String email;

    @Column(name = "password_hash")
    private String passwordHash;

    @Column(name = "oauth_provider")
    private String oauthProvider;

    @Column(name = "oauth_id")
    private String oauthId;

    @Column(nullable = false)
    private String name;

    @Column(name = "profile_image_url")
    private String profileImageUrl;

    @Builder.Default
    @Column(name = "is_active")
    private Boolean isActive = true;

    @CreationTimestamp
    @Column(name = "created_at", updatable = false)
    private LocalDateTime createdAt;

    @UpdateTimestamp
    @Column(name = "updated_at")
    private LocalDateTime updatedAt;
}
### 3.4 User Repository
```java
// backend/user-service/src/main/java/com/example/userservice/repository/UserRepository.java
package com.example.userservice.repository;

import com.example.userservice.entity.User;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

import java.util.Optional;
import java.util.UUID;

@Repository
public interface UserRepository extends JpaRepository<User, UUID> {
    Optional<User> findByEmail(String email);
    Optional<User> findByOauthProviderAndOauthId(String oauthProvider, String oauthId);
    boolean existsByEmail(String email);
}
```
### 3.5 JWT Token Provider
```java
// backend/user-service/src/main/java/com/example/userservice/security/JwtTokenProvider.java
package com.example.userservice.security;

import io.jsonwebtoken.*;
import io.jsonwebtoken.security.Keys;
import jakarta.annotation.PostConstruct;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Date;
import java.util.UUID;

@Slf4j
@Component
public class JwtTokenProvider {

    @Value("${jwt.secret}")
    private String jwtSecret;

    @Value("${jwt.access-token-expiration:3600000}") // 1 hour
    private long accessTokenExpiration;

    @Value("${jwt.refresh-token-expiration:604800000}") // 7 days
    private long refreshTokenExpiration;

    private SecretKey secretKey;

    @PostConstruct
    public void init() {
        this.secretKey = Keys.hmacShaKeyFor(jwtSecret.getBytes(StandardCharsets.UTF_8));
    }

    public String generateAccessToken(UUID userId, String email) {
        Date now = new Date();
        Date expiryDate = new Date(now.getTime() + accessTokenExpiration);
        return Jwts.builder()
                .subject(userId.toString())
                .claim("email", email)
                .claim("type", "access")
                .issuedAt(now)
                .expiration(expiryDate)
                .signWith(secretKey)
                .compact();
    }

    public String generateRefreshToken(UUID userId) {
        Date now = new Date();
        Date expiryDate = new Date(now.getTime() + refreshTokenExpiration);
        return Jwts.builder()
                .subject(userId.toString())
                .claim("type", "refresh")
                .issuedAt(now)
                .expiration(expiryDate)
                .signWith(secretKey)
                .compact();
    }

    public UUID getUserIdFromToken(String token) {
        Claims claims = Jwts.parser()
                .verifyWith(secretKey)
                .build()
                .parseSignedClaims(token)
                .getPayload();
        return UUID.fromString(claims.getSubject());
    }

    public boolean validateToken(String token) {
        try {
            Jwts.parser()
                    .verifyWith(secretKey)
                    .build()
                    .parseSignedClaims(token);
            return true;
        } catch (JwtException | IllegalArgumentException e) {
            log.error("Invalid JWT token: {}", e.getMessage());
            return false;
        }
    }
}
```
### 3.6 Auth Service
```java
// backend/user-service/src/main/java/com/example/userservice/service/AuthService.java
package com.example.userservice.service;

import com.example.userservice.dto.LoginRequest;
import com.example.userservice.dto.TokenResponse;
import com.example.userservice.dto.UserRegistrationRequest;
import com.example.userservice.dto.UserResponse;
import com.example.userservice.entity.RefreshToken;
import com.example.userservice.entity.User;
import com.example.userservice.event.UserEventProducer;
import com.example.userservice.repository.RefreshTokenRepository;
import com.example.userservice.repository.UserRepository;
import com.example.userservice.security.JwtTokenProvider;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import java.time.LocalDateTime;
import java.util.UUID;

@Service
@RequiredArgsConstructor
@Slf4j
public class AuthService {

    private final UserRepository userRepository;
    private final RefreshTokenRepository refreshTokenRepository;
    private final PasswordEncoder passwordEncoder;
    private final JwtTokenProvider jwtTokenProvider;
    private final UserEventProducer userEventProducer;

    @Transactional
    public UserResponse register(UserRegistrationRequest request) {
        // Check if email already exists
        if (userRepository.existsByEmail(request.getEmail())) {
            throw new IllegalArgumentException("Email already in use");
        }

        // Create new user
        User user = User.builder()
                .email(request.getEmail())
                .passwordHash(passwordEncoder.encode(request.getPassword()))
                .name(request.getName())
                .isActive(true)
                .build();
        User savedUser = userRepository.save(user);

        // Publish user registered event to Kafka
        userEventProducer.publishUserRegistered(savedUser);

        log.info("User registered: {}", savedUser.getEmail());
        return UserResponse.from(savedUser);
    }

    @Transactional
    public TokenResponse login(LoginRequest request) {
        // Find user by email
        User user = userRepository.findByEmail(request.getEmail())
                .orElseThrow(() -> new IllegalArgumentException("Invalid credentials"));

        // Verify password
        if (!passwordEncoder.matches(request.getPassword(), user.getPasswordHash())) {
            throw new IllegalArgumentException("Invalid credentials");
        }
        if (!user.getIsActive()) {
            throw new IllegalArgumentException("Account is deactivated");
        }

        // Generate tokens
        String accessToken = jwtTokenProvider.generateAccessToken(user.getId(), user.getEmail());
        String refreshToken = jwtTokenProvider.generateRefreshToken(user.getId());

        // Save refresh token
        RefreshToken refreshTokenEntity = RefreshToken.builder()
                .userId(user.getId())
                .token(refreshToken)
                .expiresAt(LocalDateTime.now().plusDays(7))
                .build();
        refreshTokenRepository.save(refreshTokenEntity);

        log.info("User logged in: {}", user.getEmail());
        return TokenResponse.builder()
                .accessToken(accessToken)
                .refreshToken(refreshToken)
                .expiresIn(3600)
                .tokenType("Bearer")
                .build();
    }

    @Transactional
    public TokenResponse refreshToken(String refreshToken) {
        // Validate refresh token
        if (!jwtTokenProvider.validateToken(refreshToken)) {
            throw new IllegalArgumentException("Invalid refresh token");
        }
        UUID userId = jwtTokenProvider.getUserIdFromToken(refreshToken);

        // Check if refresh token exists in database
        RefreshToken storedToken = refreshTokenRepository.findByToken(refreshToken)
                .orElseThrow(() -> new IllegalArgumentException("Refresh token not found"));
        if (storedToken.getExpiresAt().isBefore(LocalDateTime.now())) {
            refreshTokenRepository.delete(storedToken);
            throw new IllegalArgumentException("Refresh token expired");
        }

        // Get user
        User user = userRepository.findById(userId)
                .orElseThrow(() -> new IllegalArgumentException("User not found"));

        // Generate new access token
        String newAccessToken = jwtTokenProvider.generateAccessToken(user.getId(), user.getEmail());
        return TokenResponse.builder()
                .accessToken(newAccessToken)
                .refreshToken(refreshToken)
                .expiresIn(3600)
                .tokenType("Bearer")
                .build();
    }
}
```
### 3.7 Auth Controller
```java
// backend/user-service/src/main/java/com/example/userservice/controller/AuthController.java
package com.example.userservice.controller;

import com.example.userservice.dto.*;
import com.example.userservice.service.AuthService;
import jakarta.validation.Valid;
import lombok.RequiredArgsConstructor;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/auth")
@RequiredArgsConstructor
public class AuthController {

    private final AuthService authService;

    @PostMapping("/register")
    public ResponseEntity<ApiResponse<UserResponse>> register(
            @Valid @RequestBody UserRegistrationRequest request) {
        UserResponse user = authService.register(request);
        return ResponseEntity
                .status(HttpStatus.CREATED)
                .body(ApiResponse.success(user));
    }

    @PostMapping("/login")
    public ResponseEntity<ApiResponse<TokenResponse>> login(
            @Valid @RequestBody LoginRequest request) {
        TokenResponse tokens = authService.login(request);
        return ResponseEntity.ok(ApiResponse.success(tokens));
    }

    @PostMapping("/refresh")
    public ResponseEntity<ApiResponse<TokenResponse>> refresh(
            @RequestBody RefreshTokenRequest request) {
        TokenResponse tokens = authService.refreshToken(request.getRefreshToken());
        return ResponseEntity.ok(ApiResponse.success(tokens));
    }

    @PostMapping("/logout")
    public ResponseEntity<ApiResponse<Void>> logout(
            @RequestHeader("Authorization") String authHeader) {
        // Extract token and invalidate
        // Implementation depends on your session management strategy
        return ResponseEntity.ok(ApiResponse.success(null));
    }
}
```
### 3.8 User Event Producer (Kafka)
```java
// backend/user-service/src/main/java/com/example/userservice/event/UserEventProducer.java
package com.example.userservice.event;

import com.example.userservice.entity.User;
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import java.util.Map;

@Component
@RequiredArgsConstructor
@Slf4j
public class UserEventProducer {

    private static final String USER_EVENTS_TOPIC = "user-events";

    private final KafkaTemplate<String, String> kafkaTemplate;
    private final ObjectMapper objectMapper;

    public void publishUserRegistered(User user) {
        try {
            Map<String, Object> event = Map.of(
                    "eventType", "USER_REGISTERED",
                    "userId", user.getId().toString(),
                    "email", user.getEmail(),
                    "name", user.getName(),
                    "timestamp", System.currentTimeMillis()
            );
            String eventJson = objectMapper.writeValueAsString(event);
            kafkaTemplate.send(USER_EVENTS_TOPIC, user.getId().toString(), eventJson);
            log.info("Published USER_REGISTERED event for user: {}", user.getEmail());
        } catch (Exception e) {
            log.error("Failed to publish user registered event", e);
        }
    }

    public void publishUserProfileUpdated(User user) {
        try {
            Map<String, Object> event = Map.of(
                    "eventType", "USER_PROFILE_UPDATED",
                    "userId", user.getId().toString(),
                    "email", user.getEmail(),
                    "name", user.getName(),
                    "timestamp", System.currentTimeMillis()
            );
            String eventJson = objectMapper.writeValueAsString(event);
            kafkaTemplate.send(USER_EVENTS_TOPIC, user.getId().toString(), eventJson);
            log.info("Published USER_PROFILE_UPDATED event for user: {}", user.getEmail());
        } catch (Exception e) {
            log.error("Failed to publish user profile updated event", e);
        }
    }
}
```
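Note that events are keyed by `userId`, so Kafka preserves per-user ordering within a partition. A consumer (for example, notification-service) can therefore fold the JSON payloads into a read model in arrival order. A minimal, framework-free sketch of that fold (function and event names beyond those used by the producer above are illustrative; `USER_DEACTIVATED` is assumed from the event design in section 1.3):

```javascript
// Fold user-events payloads into an in-memory projection keyed by userId.
function applyUserEvent(projection, eventJson) {
  const event = JSON.parse(eventJson);
  switch (event.eventType) {
    case 'USER_REGISTERED':
    case 'USER_PROFILE_UPDATED':
      projection.set(event.userId, { email: event.email, name: event.name });
      break;
    case 'USER_DEACTIVATED':
      projection.delete(event.userId);
      break;
    default:
      // Unknown event types are skipped so older consumers
      // tolerate newly introduced events.
      break;
  }
  return projection;
}
```

Skipping unknown event types is what lets producers add new events without breaking existing consumers.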
### 3.9 Application Configuration
```yaml
# backend/user-service/src/main/resources/application.yml
spring:
  application:
    name: user-service
  datasource:
    url: jdbc:postgresql://localhost:5432/userdb
    username: admin
    password: admin123
    driver-class-name: org.postgresql.Driver
  jpa:
    hibernate:
      ddl-auto: validate
    show-sql: true
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
        format_sql: true
  data:
    redis:
      host: localhost
      port: 6379
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
  security:
    oauth2:
      client:
        registration:
          google:
            client-id: ${GOOGLE_CLIENT_ID}
            client-secret: ${GOOGLE_CLIENT_SECRET}
            scope: profile, email
            redirect-uri: "{baseUrl}/login/oauth2/code/{registrationId}"
          github:
            client-id: ${GITHUB_CLIENT_ID}
            client-secret: ${GITHUB_CLIENT_SECRET}
            scope: user:email
            redirect-uri: "{baseUrl}/login/oauth2/code/{registrationId}"

server:
  port: 8081

jwt:
  secret: ${JWT_SECRET:your-very-secure-secret-key-min-256-bits}
  access-token-expiration: 3600000    # 1 hour
  refresh-token-expiration: 604800000 # 7 days

logging:
  level:
    com.example.userservice: DEBUG
    org.springframework.security: DEBUG

---
# Docker profile
spring:
  config:
    activate:
      on-profile: docker
  datasource:
    url: jdbc:postgresql://postgres:5432/userdb
  data:
    redis:
      host: redis
  kafka:
    bootstrap-servers: kafka:29092
```
### 3.10 User Service Dockerfile
```dockerfile
# backend/user-service/Dockerfile
FROM eclipse-temurin:21-jdk-alpine AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN apk add --no-cache maven && \
    mvn clean package -DskipTests

FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "app.jar"]
```
## Phase 4: Frontend Development
### 4.1 React Project Structure
```
frontend/react-app/
├── public/
│   └── index.html
├── src/
│   ├── api/
│   │   ├── client.js
│   │   ├── authApi.js
│   │   └── taskApi.js
│   ├── components/
│   │   ├── common/
│   │   │   ├── Header.jsx
│   │   │   ├── Footer.jsx
│   │   │   └── Loading.jsx
│   │   ├── auth/
│   │   │   ├── LoginForm.jsx
│   │   │   └── RegisterForm.jsx
│   │   └── tasks/
│   │       ├── TaskList.jsx
│   │       ├── TaskCard.jsx
│   │       └── TaskForm.jsx
│   ├── pages/
│   │   ├── Home.jsx
│   │   ├── Login.jsx
│   │   ├── Register.jsx
│   │   └── Dashboard.jsx
│   ├── context/
│   │   └── AuthContext.jsx
│   ├── hooks/
│   │   ├── useAuth.js
│   │   └── useTasks.js
│   ├── utils/
│   │   ├── storage.js
│   │   └── validators.js
│   ├── App.jsx
│   └── index.js
├── Dockerfile
├── nginx.conf
└── package.json
```
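The `utils/validators.js` module listed above isn't shown later in this guide. A possible sketch that mirrors the registration acceptance criteria from section 1.2 is below; the exact rules are illustrative and should be kept in sync with the backend's Bean Validation annotations. (In the app these functions would be `export`ed from `utils/validators.js`.)

```javascript
// Basic client-side checks mirroring the registration acceptance criteria.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function isValidEmail(email) {
  return EMAIL_RE.test(email);
}

// At least 8 characters with an upper-case letter, a lower-case letter,
// and a digit (e.g. "SecurePass123!").
function isValidPassword(password) {
  return (
    password.length >= 8 &&
    /[A-Z]/.test(password) &&
    /[a-z]/.test(password) &&
    /[0-9]/.test(password)
  );
}
```

Client-side validation only improves feedback latency; the server must still enforce the same rules, since the API can be called directly.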
### 4.2 package.json
```json
{
  "name": "task-manager-frontend",
  "version": "1.0.0",
  "private": true,
  "dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-router-dom": "^6.21.1",
    "axios": "^1.6.5",
    "@tanstack/react-query": "^5.17.9",
    "zustand": "^4.4.7",
    "date-fns": "^3.0.6",
    "react-hook-form": "^7.49.3",
    "zod": "^3.22.4",
    "@hookform/resolvers": "^3.3.4"
  },
  "devDependencies": {
    "@vitejs/plugin-react": "^4.2.1",
    "vite": "^5.0.11",
    "eslint": "^8.56.0",
    "eslint-plugin-react": "^7.33.2",
    "tailwindcss": "^3.4.1",
    "autoprefixer": "^10.4.16",
    "postcss": "^8.4.33"
  },
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview",
    "lint": "eslint src --ext js,jsx"
  }
}
```
### 4.3 API Client Setup
// frontend/react-app/src/api/client.js
import axios from 'axios';
const API_BASE_URL = import.meta.env.VITE_API_BASE_URL || 'http://localhost:8080/api';
const apiClient = axios.create({
baseURL: API_BASE_URL,
headers: {
'Content-Type': 'application/json',
},
});
// Request interceptor - Add auth token
apiClient.interceptors.request.use(
(config) => {
const token = localStorage.getItem('accessToken');
if (token) {
config.headers.Authorization = `Bearer ${token}`;
}
return config;
},
(error) => {
return Promise.reject(error);
}
);
// Response interceptor - Handle token refresh
apiClient.interceptors.response.use(
(response) => response,
async (error) => {
const originalRequest = error.config;
// If 401 and not already retried, try to refresh token
if (error.response?.status === 401 && !originalRequest._retry) {
originalRequest._retry = true;
try {
const refreshToken = localStorage.getItem('refreshToken');
if (!refreshToken) {
throw new Error('No refresh token available');
}
const response = await axios.post(`${API_BASE_URL}/auth/refresh`, {
refreshToken,
});
const { accessToken } = response.data.data;
localStorage.setItem('accessToken', accessToken);
originalRequest.headers.Authorization = `Bearer ${accessToken}`;
return apiClient(originalRequest);
} catch (refreshError) {
// Refresh failed - redirect to login
localStorage.removeItem('accessToken');
localStorage.removeItem('refreshToken');
window.location.href = '/login';
return Promise.reject(refreshError);
}
}
return Promise.reject(error);
}
);
export default apiClient;
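One edge the interceptor above leaves open: if several requests receive a 401 at the same time, each one fires its own `/auth/refresh` call. A common pattern is to share a single in-flight refresh promise. A minimal, framework-free sketch (`doRefresh` is a hypothetical stand-in for the actual refresh request):

```javascript
// Single-flight wrapper: concurrent callers share one pending refresh.
// `doRefresh` is a hypothetical async function returning a new access token.
function makeSingleFlightRefresh(doRefresh) {
  let inFlight = null; // the shared pending refresh, if any
  return function refresh() {
    if (!inFlight) {
      inFlight = doRefresh().finally(() => {
        inFlight = null; // allow a fresh refresh after this one settles
      });
    }
    return inFlight;
  };
}
```

In `client.js`, the 401 branch would then await this shared refresh instead of posting `/auth/refresh` directly, so a burst of expired requests triggers exactly one refresh round-trip.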
### 4.4 Auth API
// frontend/react-app/src/api/authApi.js
import apiClient from './client';
export const authApi = {
register: async (userData) => {
const response = await apiClient.post('/auth/register', userData);
return response.data;
},
login: async (credentials) => {
const response = await apiClient.post('/auth/login', credentials);
return response.data;
},
logout: async () => {
const response = await apiClient.post('/auth/logout');
return response.data;
},
getCurrentUser: async () => {
const response = await apiClient.get('/users/me');
return response.data;
},
updateProfile: async (userId, profileData) => {
const response = await apiClient.put(`/users/${userId}`, profileData);
return response.data;
},
};
### 4.5 Task API
// frontend/react-app/src/api/taskApi.js
import apiClient from './client';
export const taskApi = {
getTasks: async (params = {}) => {
const response = await apiClient.get('/tasks', { params });
return response.data;
},
getTaskById: async (taskId) => {
const response = await apiClient.get(`/tasks/${taskId}`);
return response.data;
},
createTask: async (taskData) => {
const response = await apiClient.post('/tasks', taskData);
return response.data;
},
updateTask: async (taskId, taskData) => {
const response = await apiClient.put(`/tasks/${taskId}`, taskData);
return response.data;
},
deleteTask: async (taskId) => {
const response = await apiClient.delete(`/tasks/${taskId}`);
return response.data;
},
updateTaskStatus: async (taskId, status) => {
const response = await apiClient.patch(`/tasks/${taskId}/status`, { status });
return response.data;
},
addComment: async (taskId, comment) => {
const response = await apiClient.post(`/tasks/${taskId}/comments`, { content: comment });
return response.data;
},
getComments: async (taskId) => {
const response = await apiClient.get(`/tasks/${taskId}/comments`);
return response.data;
},
};
### 4.6 Auth Context
// frontend/react-app/src/context/AuthContext.jsx
import React, { createContext, useState, useEffect } from 'react';
import { authApi } from '../api/authApi';
export const AuthContext = createContext(null);
export const AuthProvider = ({ children }) => {
const [user, setUser] = useState(null);
const [loading, setLoading] = useState(true);
useEffect(() => {
// Check if user is logged in on mount
const loadUser = async () => {
const token = localStorage.getItem('accessToken');
if (token) {
try {
const response = await authApi.getCurrentUser();
setUser(response.data);
} catch (error) {
console.error('Failed to load user:', error);
localStorage.removeItem('accessToken');
localStorage.removeItem('refreshToken');
}
}
setLoading(false);
};
loadUser();
}, []);
const login = async (credentials) => {
const response = await authApi.login(credentials);
const { accessToken, refreshToken } = response.data;
localStorage.setItem('accessToken', accessToken);
localStorage.setItem('refreshToken', refreshToken);
const userResponse = await authApi.getCurrentUser();
setUser(userResponse.data);
return response;
};
const register = async (userData) => {
const response = await authApi.register(userData);
return response;
};
const logout = async () => {
try {
await authApi.logout();
} catch (error) {
console.error('Logout error:', error);
} finally {
localStorage.removeItem('accessToken');
localStorage.removeItem('refreshToken');
setUser(null);
}
};
const value = {
user,
loading,
login,
register,
logout,
isAuthenticated: !!user,
};
return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>;
};
### 4.7 Login Component
// frontend/react-app/src/components/auth/LoginForm.jsx
import React, { useState } from 'react';
import { useNavigate } from 'react-router-dom';
import { useAuth } from '../../hooks/useAuth';
export const LoginForm = () => {
const [email, setEmail] = useState('');
const [password, setPassword] = useState('');
const [error, setError] = useState('');
const [loading, setLoading] = useState(false);
const { login } = useAuth();
const navigate = useNavigate();
const handleSubmit = async (e) => {
e.preventDefault();
setError('');
setLoading(true);
try {
await login({ email, password });
navigate('/dashboard');
} catch (err) {
setError(err.response?.data?.message || 'Login failed. Please try again.');
} finally {
setLoading(false);
}
};
const handleGoogleLogin = () => {
window.location.href = `${import.meta.env.VITE_API_BASE_URL}/oauth2/authorization/google`;
};
const handleGithubLogin = () => {
window.location.href = `${import.meta.env.VITE_API_BASE_URL}/oauth2/authorization/github`;
};
return (
<div className="max-w-md mx-auto mt-8 p-6 bg-white rounded-lg shadow-md">
<h2 className="text-2xl font-bold mb-6">Login</h2>
{error && (
<div className="mb-4 p-3 bg-red-100 text-red-700 rounded">
{error}
</div>
)}
<form onSubmit={handleSubmit}>
<div className="mb-4">
<label className="block text-gray-700 mb-2">Email</label>
<input
type="email"
value={email}
onChange={(e) => setEmail(e.target.value)}
className="w-full px-3 py-2 border rounded focus:outline-none focus:ring-2 focus:ring-blue-500"
required
/>
</div>
<div className="mb-6">
<label className="block text-gray-700 mb-2">Password</label>
<input
type="password"
value={password}
onChange={(e) => setPassword(e.target.value)}
className="w-full px-3 py-2 border rounded focus:outline-none focus:ring-2 focus:ring-blue-500"
required
/>
</div>
<button
type="submit"
disabled={loading}
className="w-full bg-blue-500 text-white py-2 rounded hover:bg-blue-600 disabled:bg-gray-400"
>
{loading ? 'Logging in...' : 'Login'}
</button>
</form>
<div className="mt-6">
<div className="relative">
<div className="absolute inset-0 flex items-center">
<div className="w-full border-t border-gray-300"></div>
</div>
<div className="relative flex justify-center text-sm">
<span className="px-2 bg-white text-gray-500">Or continue with</span>
</div>
</div>
<div className="mt-6 grid grid-cols-2 gap-3">
<button
onClick={handleGoogleLogin}
className="w-full flex items-center justify-center px-4 py-2 border border-gray-300 rounded-md shadow-sm bg-white text-sm font-medium text-gray-700 hover:bg-gray-50"
>
Google
</button>
<button
onClick={handleGithubLogin}
className="w-full flex items-center justify-center px-4 py-2 border border-gray-300 rounded-md shadow-sm bg-white text-sm font-medium text-gray-700 hover:bg-gray-50"
>
GitHub
</button>
</div>
</div>
</div>
);
};
### 4.8 Task List Component
// frontend/react-app/src/components/tasks/TaskList.jsx
import React, { useState, useEffect } from 'react';
import { taskApi } from '../../api/taskApi';
import { TaskCard } from './TaskCard';
export const TaskList = () => {
const [tasks, setTasks] = useState([]);
const [loading, setLoading] = useState(true);
const [filter, setFilter] = useState({ status: 'ALL' });
useEffect(() => {
loadTasks();
}, [filter]);
const loadTasks = async () => {
setLoading(true);
try {
const params = filter.status !== 'ALL' ? { status: filter.status } : {};
const response = await taskApi.getTasks(params);
setTasks(response.data);
} catch (error) {
console.error('Failed to load tasks:', error);
} finally {
setLoading(false);
}
};
const handleStatusChange = async (taskId, newStatus) => {
try {
await taskApi.updateTaskStatus(taskId, newStatus);
loadTasks();
} catch (error) {
console.error('Failed to update task status:', error);
}
};
const handleDelete = async (taskId) => {
if (window.confirm('Are you sure you want to delete this task?')) {
try {
await taskApi.deleteTask(taskId);
loadTasks();
} catch (error) {
console.error('Failed to delete task:', error);
}
}
};
if (loading) {
return <div className="text-center p-8">Loading tasks...</div>;
}
return (
<div className="container mx-auto p-4">
<div className="mb-4 flex justify-between items-center">
<h2 className="text-2xl font-bold">Tasks</h2>
<select
value={filter.status}
onChange={(e) => setFilter({ status: e.target.value })}
className="px-4 py-2 border rounded"
>
<option value="ALL">All Tasks</option>
<option value="TODO">To Do</option>
<option value="IN_PROGRESS">In Progress</option>
<option value="DONE">Done</option>
</select>
</div>
{tasks.length === 0 ? (
<div className="text-center text-gray-500 p-8">
No tasks found
</div>
) : (
<div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4">
{tasks.map((task) => (
<TaskCard
key={task.id}
task={task}
onStatusChange={handleStatusChange}
onDelete={handleDelete}
/>
))}
</div>
)}
</div>
);
};
### 4.9 Frontend Dockerfile
# frontend/react-app/Dockerfile
# Build stage
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
### 4.10 Nginx Configuration
# frontend/react-app/nginx.conf
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
# Gzip compression
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml text/javascript;
location / {
try_files $uri $uri/ /index.html;
}
# API proxy
location /api {
proxy_pass http://api-gateway:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}
## Phase 5: Infrastructure Setup
### Docker Container Strategy
#### Container Design Principles
**1. Single Responsibility Principle**
Each container runs exactly one process:
✅ Correct:
- user-service container: the Spring Boot application only
- postgres container: PostgreSQL only
- redis container: Redis only
❌ Wrong:
- Spring Boot + PostgreSQL in a single container
**2. Immutability**
Containers must not be modified while running:
# Everything is decided at build time
FROM eclipse-temurin:21-jre-alpine
COPY app.jar /app/
CMD ["java", "-jar", "/app/app.jar"]
# No code changes at runtime
**3. Statelessness**
Application state lives in external stores:
Application Container (Stateless)
↓ session storage
Redis (Stateful)
↓ data storage
PostgreSQL (Stateful)
#### Network Topology
networks:
frontend:
driver: bridge
# React ↔ API Gateway
backend:
driver: bridge
internal: true # block external access
# Services ↔ DB, Redis, Kafka
Isolation strategy:
┌─────────────────────────────────────┐
│ Frontend Network (Public) │
│ - React Container │
│ - Nginx │
└──────────────┬──────────────────────┘
│
┌──────────────▼──────────────────────┐
│ API Gateway (Bridge) │
└──────────────┬──────────────────────┘
│
┌──────────────▼──────────────────────┐
│ Backend Network (Internal) │
│ - User Service │
│ - Task Service │
│ - Notification Service │
│ - PostgreSQL │
│ - Redis │
│ - Kafka │
└─────────────────────────────────────┘
### PostgreSQL Setup and Optimization
#### Database Structure
**Multi-Database Strategy**
#!/bin/bash
# init-databases.sh -- initialization script
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
-- Database for the User Service
CREATE DATABASE userdb;
-- Database for the Task Service
CREATE DATABASE taskdb;
-- Read-only user (for a Report Service)
CREATE USER readonly_user WITH PASSWORD 'readonly_password';
-- Grant privileges
GRANT CONNECT ON DATABASE taskdb TO readonly_user;
\c taskdb
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user;
EOSQL
#### PostgreSQL Optimization
**postgresql.conf tuning**
# /etc/postgresql/postgresql.conf
# --- Memory ---
# 25% of total RAM (2GB on an 8GB server)
shared_buffers = 2GB
# 2x shared_buffers
effective_cache_size = 4GB
# For complex queries
work_mem = 64MB
# For maintenance operations
maintenance_work_mem = 512MB
# --- Connections ---
max_connections = 200
# --- WAL (Write-Ahead Logging) ---
wal_buffers = 16MB
checkpoint_completion_target = 0.9
max_wal_size = 4GB
min_wal_size = 1GB
# --- Query planner ---
random_page_cost = 1.1 # SSD storage
effective_io_concurrency = 200
# --- Logging ---
log_min_duration_statement = 1000 # log queries slower than 1 second
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
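The two headline numbers above follow simple rules of thumb (shared_buffers ≈ 25% of RAM, effective_cache_size ≈ 2× shared_buffers). A tiny helper makes that arithmetic explicit when the server is resized; a sketch of the rule of thumb, not an official formula:

```javascript
// Derive the postgresql.conf memory values used above from total RAM (GB).
// Rules of thumb: shared_buffers = 25% of RAM, effective_cache_size = 2x that.
function pgMemorySettings(totalRamGb) {
  const sharedBuffersGb = Math.floor(totalRamGb * 0.25);
  return {
    shared_buffers: `${sharedBuffersGb}GB`,
    effective_cache_size: `${sharedBuffersGb * 2}GB`,
  };
}
```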
#### Connection Pooling (HikariCP)
Spring Boot configuration:
spring:
datasource:
hikari:
# Pool size
maximum-pool-size: 20
minimum-idle: 5
# Timeouts
connection-timeout: 30000 # 30 seconds
idle-timeout: 600000 # 10 minutes
max-lifetime: 1800000 # 30 minutes
# Health check (unnecessary for JDBC4-compliant drivers such as PostgreSQL)
connection-test-query: SELECT 1
# Metrics
pool-name: TaskManagerHikariCP
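The pool size of 20 is a starting point, not a law; HikariCP's own pool-sizing guidance suggests connections ≈ (core count × 2) + effective spindle count, which for SSD-backed servers usually means a surprisingly small pool. Sketched as a helper:

```javascript
// HikariCP's suggested starting point for maximum-pool-size:
// pool = cores * 2 + effective spindles (use 1 for SSDs/cloud volumes).
function suggestedPoolSize(coreCount, effectiveSpindles = 1) {
  return coreCount * 2 + effectiveSpindles;
}
```

An 8-core database host lands at 17, well below the 200 `max_connections` budget even with several services pooling against it.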
#### Index Strategy
**User Service indexes**
-- users table
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_oauth ON users(oauth_provider, oauth_id);
CREATE INDEX idx_users_active ON users(is_active) WHERE is_active = TRUE;
-- refresh_tokens table
CREATE INDEX idx_refresh_tokens_user ON refresh_tokens(user_id);
CREATE INDEX idx_refresh_tokens_token ON refresh_tokens(token);
CREATE INDEX idx_refresh_tokens_expires ON refresh_tokens(expires_at);
-- Expired-token cleanup: a partial index with WHERE expires_at < NOW() is
-- invalid (the predicate must be immutable). Instead, run a scheduled
-- DELETE FROM refresh_tokens WHERE expires_at < NOW(), which can use
-- idx_refresh_tokens_expires above.
**Task Service indexes**
-- tasks table
CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_assignee ON tasks(assignee_id);
CREATE INDEX idx_tasks_creator ON tasks(created_by);
CREATE INDEX idx_tasks_due_date ON tasks(due_date);
-- Composite indexes (columns frequently queried together)
CREATE INDEX idx_tasks_assignee_status ON tasks(assignee_id, status);
CREATE INDEX idx_tasks_status_priority ON tasks(status, priority);
-- Full-text search
CREATE INDEX idx_tasks_search ON tasks
USING gin(to_tsvector('english', title || ' ' || COALESCE(description, '')));
-- task_comments table
CREATE INDEX idx_comments_task ON task_comments(task_id);
CREATE INDEX idx_comments_user ON task_comments(user_id);
CREATE INDEX idx_comments_created ON task_comments(created_at DESC);
-- task_attachments table
CREATE INDEX idx_attachments_task ON task_attachments(task_id);
#### Partitioning Strategy (Optional)
Table partitioning for high-volume data:
-- Date-range partitioning (monthly)
CREATE TABLE task_comments (
id UUID NOT NULL,
task_id UUID NOT NULL,
user_id UUID NOT NULL,
content TEXT NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);
-- Create partitions
CREATE TABLE task_comments_2026_01 PARTITION OF task_comments
FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
CREATE TABLE task_comments_2026_02 PARTITION OF task_comments
FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');
-- Function that creates next month's partition automatically
CREATE OR REPLACE FUNCTION create_monthly_partitions()
RETURNS void AS $$
DECLARE
start_date DATE;
end_date DATE;
partition_name TEXT;
BEGIN
start_date := DATE_TRUNC('month', CURRENT_DATE + INTERVAL '1 month');
end_date := start_date + INTERVAL '1 month';
partition_name := 'task_comments_' || TO_CHAR(start_date, 'YYYY_MM');
EXECUTE format('CREATE TABLE IF NOT EXISTS %I PARTITION OF task_comments FOR VALUES FROM (%L) TO (%L)',
partition_name, start_date, end_date);
END;
$$ LANGUAGE plpgsql;
-- Run on the 1st of every month (requires the pg_cron extension)
SELECT cron.schedule('create-partitions', '0 0 1 * *', 'SELECT create_monthly_partitions()');
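The calendar logic of create_monthly_partitions() — next month's [from, to) range and a task_comments_YYYY_MM name — can be mirrored outside the database, e.g. to verify which partition a given row will land in. A sketch of the same logic in JavaScript:

```javascript
// Mirror of create_monthly_partitions(): given any date, return the
// partition name and [from, to) range for the FOLLOWING month.
function nextMonthPartition(today) {
  const start = new Date(Date.UTC(today.getUTCFullYear(), today.getUTCMonth() + 1, 1));
  const end = new Date(Date.UTC(start.getUTCFullYear(), start.getUTCMonth() + 1, 1));
  const ym = `${start.getUTCFullYear()}_${String(start.getUTCMonth() + 1).padStart(2, '0')}`;
  return {
    name: `task_comments_${ym}`,
    from: start.toISOString().slice(0, 10), // inclusive lower bound
    to: end.toISOString().slice(0, 10),     // exclusive upper bound
  };
}
```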
#### Docker Compose PostgreSQL Setup
services:
postgres:
image: postgres:16-alpine
container_name: postgres-db
environment:
POSTGRES_DB: postgres
POSTGRES_USER: admin
POSTGRES_PASSWORD: ${DB_PASSWORD:-admin123}
# Multiple databases (consumed by the custom init script, not the stock image)
POSTGRES_MULTIPLE_DATABASES: userdb,taskdb
ports:
- "5432:5432"
volumes:
# Persistent data
- postgres_data:/var/lib/postgresql/data
# Init scripts
- ./infrastructure/postgres/init:/docker-entrypoint-initdb.d
# Config file
- ./infrastructure/postgres/postgresql.conf:/etc/postgresql/postgresql.conf
# Backup directory
- ./backups/postgres:/backups
command:
- "postgres"
- "-c"
- "config_file=/etc/postgresql/postgresql.conf"
# Health check
healthcheck:
test: ["CMD-SHELL", "pg_isready -U admin"]
interval: 10s
timeout: 5s
retries: 5
networks:
- backend
# Resource limits
deploy:
resources:
limits:
cpus: '2'
memory: 4G
reservations:
cpus: '1'
memory: 2G
### Redis Caching Strategy
#### Redis Usage Patterns
**1. Session Store**
Key Pattern: session:{userId}
TTL: 7 days
Value: JSON (user info)
Example:
session:user-123 = {
"userId": "user-123",
"email": "user@example.com",
"role": "TEAM_MEMBER",
"lastAccess": "2026-01-11T10:00:00Z"
}
**2. Cache Layer**
Key Pattern: cache:{entity}:{id}
TTL: 15 minutes
Example:
cache:user:user-123 = {...}
cache:task:task-456 = {...}
cache:task:list:assignee:user-123 = [...]
**3. Rate Limiting**
Key Pattern: rate_limit:{ip}:{endpoint}
TTL: 1 minute
Value: request count
Example:
rate_limit:192.168.1.100:/api/tasks = 45
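This pattern is typically implemented as INCR plus EXPIRE on the first hit of each window. The fixed-window logic, emulated in memory (no Redis required for the sketch; a real implementation would issue the Redis commands instead of using a Map):

```javascript
// Fixed-window rate limiter emulating Redis INCR + EXPIRE semantics.
// A key's window expires after windowMs; requests are allowed while count <= limit.
function createRateLimiter(limit, windowMs, now = Date.now) {
  const windows = new Map(); // key -> { count, resetAt }
  return function allow(key) {
    const t = now();
    const w = windows.get(key);
    if (!w || t >= w.resetAt) {
      windows.set(key, { count: 1, resetAt: t + windowMs }); // first hit: INCR + EXPIRE
      return true;
    }
    w.count += 1; // subsequent hits: INCR only
    return w.count <= limit;
  };
}
```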
**4. Temporary Data (e.g., email verification codes)**
Key Pattern: temp:{purpose}:{identifier}
TTL: 10 minutes
Example:
temp:email_verification:user@example.com = "123456"
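Scattering these key formats across services invites typos; a single helper module keeps writers and readers of the keys consistent. A sketch of builders for the four patterns above:

```javascript
// Builders for the Redis key patterns documented above.
const redisKeys = {
  session: (userId) => `session:${userId}`,
  cache: (entity, id) => `cache:${entity}:${id}`,
  rateLimit: (ip, endpoint) => `rate_limit:${ip}:${endpoint}`,
  temp: (purpose, identifier) => `temp:${purpose}:${identifier}`,
};
```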
#### Redis Configuration
**redis.conf tuning**
# /etc/redis/redis.conf
# --- Memory ---
maxmemory 2gb
maxmemory-policy allkeys-lru # evict least-recently-used keys first
# --- Persistence ---
# RDB (snapshots)
save 900 1 # snapshot if >= 1 change within 900s
save 300 10 # snapshot if >= 10 changes within 300s
save 60 10000 # snapshot if >= 10000 changes within 60s
# AOF (Append-Only File) - safer but slower
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec # fsync once per second
# --- Network ---
timeout 300
tcp-keepalive 60
# --- Security ---
# Note: redis.conf does not expand environment variables; substitute the real
# password at deploy time (templated config or the --requirepass flag).
requirepass ${REDIS_PASSWORD}
# --- Logging ---
loglevel notice
logfile "/var/log/redis/redis.log"
#### Docker Compose Redis Setup
services:
redis:
image: redis:7-alpine
container_name: redis-cache
command: redis-server /etc/redis/redis.conf
ports:
- "6379:6379"
volumes:
- redis_data:/data
- ./infrastructure/redis/redis.conf:/etc/redis/redis.conf
- ./logs/redis:/var/log/redis
environment:
- REDIS_PASSWORD=${REDIS_PASSWORD:-redis123}
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 5
networks:
- backend
deploy:
resources:
limits:
cpus: '1'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
#### Spring Boot Redis Integration
**Dependencies**
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
**Redis Configuration**
@Configuration
@EnableCaching
public class RedisConfig {
@Value("${spring.data.redis.host}")
private String redisHost;
@Value("${spring.data.redis.port}")
private int redisPort;
@Value("${spring.data.redis.password}")
private String redisPassword;
@Bean
public RedisConnectionFactory redisConnectionFactory() {
RedisStandaloneConfiguration config = new RedisStandaloneConfiguration();
config.setHostName(redisHost);
config.setPort(redisPort);
config.setPassword(redisPassword);
LettuceConnectionFactory factory = new LettuceConnectionFactory(config);
factory.setShareNativeConnection(false);
return factory;
}
@Bean
public RedisTemplate<String, Object> redisTemplate(
RedisConnectionFactory connectionFactory) {
RedisTemplate<String, Object> template = new RedisTemplate<>();
template.setConnectionFactory(connectionFactory);
// JSON serialization
Jackson2JsonRedisSerializer<Object> serializer =
new Jackson2JsonRedisSerializer<>(Object.class);
template.setKeySerializer(new StringRedisSerializer());
template.setValueSerializer(serializer);
template.setHashKeySerializer(new StringRedisSerializer());
template.setHashValueSerializer(serializer);
template.afterPropertiesSet();
return template;
}
@Bean
public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
.entryTtl(Duration.ofMinutes(15))
.disableCachingNullValues()
.serializeKeysWith(
RedisSerializationContext.SerializationPair.fromSerializer(
new StringRedisSerializer()))
.serializeValuesWith(
RedisSerializationContext.SerializationPair.fromSerializer(
new Jackson2JsonRedisSerializer<>(Object.class)));
return RedisCacheManager.builder(connectionFactory)
.cacheDefaults(config)
.build();
}
}
**Caching Usage Example**
@Service
public class UserService {
@Autowired
private UserRepository userRepository;
@Autowired
private RedisTemplate<String, Object> redisTemplate;
// Use the Spring cache abstraction
@Cacheable(value = "users", key = "#userId")
public UserResponse getUserById(UUID userId) {
return userRepository.findById(userId)
.map(UserResponse::from)
.orElseThrow(() -> new NotFoundException("User not found"));
}
// Cache eviction
@CacheEvict(value = "users", key = "#userId")
public void updateUser(UUID userId, UpdateUserRequest request) {
// update logic
}
// Use RedisTemplate directly
public void cacheTaskList(UUID userId, List<TaskResponse> tasks) {
String key = "cache:task:list:assignee:" + userId;
redisTemplate.opsForValue().set(key, tasks, Duration.ofMinutes(5));
}
@SuppressWarnings("unchecked")
public List<TaskResponse> getCachedTaskList(UUID userId) {
String key = "cache:task:list:assignee:" + userId;
return (List<TaskResponse>) redisTemplate.opsForValue().get(key);
}
}
### Kafka Event Processing
#### Kafka Topic Design
**Topic structure**
user-events (3 partitions)
- UserRegistered
- UserProfileUpdated
- UserDeactivated
task-events (5 partitions)
- TaskCreated
- TaskUpdated
- TaskStatusChanged
- TaskAssigned
- TaskDeleted
- TaskCommentAdded
notification-events (3 partitions)
- NotificationRequested
- EmailSent
- PushNotificationSent
**Event schema**
// TaskCreated event
{
"eventId": "evt-123",
"eventType": "TaskCreated",
"timestamp": "2026-01-11T10:00:00Z",
"version": "1.0",
"payload": {
"taskId": "task-456",
"title": "API 구현",
"description": "JWT 인증 API 구현",
"status": "TODO",
"priority": "HIGH",
"assigneeId": "user-789",
"createdBy": "user-123",
"dueDate": "2026-01-20"
},
"metadata": {
"correlationId": "corr-abc",
"causationId": "cause-xyz",
"userId": "user-123"
}
}
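All events share this envelope (eventId, eventType, timestamp, version, payload, metadata), so a small factory can enforce the invariant at the producer side. A sketch with injectable `newId` and `clock` so it stays deterministic; real code would use a UUID generator and the system clock:

```javascript
// Build an event envelope matching the schema above.
// `newId` and `clock` are injectable for testing (assumptions of this sketch).
function makeEvent(eventType, payload, metadata = {},
                   newId = () => 'evt-1', clock = () => new Date(0)) {
  return {
    eventId: newId(),
    eventType,
    timestamp: clock().toISOString(),
    version: '1.0',
    payload,
    metadata,
  };
}
```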
#### Docker Compose Kafka Setup
services:
# Zookeeper (Kafka dependency)
zookeeper:
image: confluentinc/cp-zookeeper:7.5.0
container_name: zookeeper
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_SYNC_LIMIT: 2
volumes:
- zookeeper_data:/var/lib/zookeeper/data
- zookeeper_logs:/var/lib/zookeeper/log
networks:
- backend
healthcheck:
test: ["CMD", "nc", "-z", "localhost", "2181"]
interval: 10s
timeout: 5s
retries: 5
# Kafka Broker
kafka:
image: confluentinc/cp-kafka:7.5.0
container_name: kafka
depends_on:
zookeeper:
condition: service_healthy
ports:
- "9092:9092"
- "9093:9093"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
# Listener settings
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
# Topic settings
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
# Performance tuning
KAFKA_NUM_NETWORK_THREADS: 3
KAFKA_NUM_IO_THREADS: 8
KAFKA_SOCKET_SEND_BUFFER_BYTES: 102400
KAFKA_SOCKET_RECEIVE_BUFFER_BYTES: 102400
KAFKA_SOCKET_REQUEST_MAX_BYTES: 104857600
# Log settings
KAFKA_LOG_RETENTION_HOURS: 168 # 7 days
KAFKA_LOG_SEGMENT_BYTES: 1073741824
KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 300000
volumes:
- kafka_data:/var/lib/kafka/data
networks:
- backend
healthcheck:
test: ["CMD", "kafka-broker-api-versions", "--bootstrap-server", "localhost:9092"]
interval: 10s
timeout: 10s
retries: 5
deploy:
resources:
limits:
cpus: '2'
memory: 2G
# Kafka UI (optional, for development/debugging)
kafka-ui:
image: provectuslabs/kafka-ui:latest
container_name: kafka-ui
depends_on:
- kafka
ports:
- "8090:8080"
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
KAFKA_CLUSTERS_0_ZOOKEEPER: zookeeper:2181
networks:
- backend
#### Spring Boot Kafka Integration
**Producer configuration**
@Configuration
public class KafkaProducerConfig {
@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapServers;
@Bean
public ProducerFactory<String, String> producerFactory() {
Map<String, Object> config = new HashMap<>();
config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
// Performance and reliability settings
config.put(ProducerConfig.ACKS_CONFIG, "1"); // ack from the leader only
config.put(ProducerConfig.RETRIES_CONFIG, 3);
config.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
config.put(ProducerConfig.LINGER_MS_CONFIG, 10);
config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
return new DefaultKafkaProducerFactory<>(config);
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
}
**Using the producer**
@Service
public class TaskEventProducer {
private static final String TASK_EVENTS_TOPIC = "task-events";
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
@Autowired
private ObjectMapper objectMapper;
public void publishTaskCreated(Task task) {
try {
TaskCreatedEvent event = TaskCreatedEvent.builder()
.eventId(UUID.randomUUID().toString())
.eventType("TaskCreated")
.timestamp(Instant.now())
.payload(TaskEventPayload.from(task))
.build();
String eventJson = objectMapper.writeValueAsString(event);
// Key by task ID so events for the same task land on the same partition
kafkaTemplate.send(TASK_EVENTS_TOPIC, task.getId().toString(), eventJson)
.whenComplete((result, ex) -> {
if (ex == null) {
log.info("TaskCreated event published: {}", task.getId());
} else {
log.error("Failed to publish TaskCreated event", ex);
}
});
} catch (JsonProcessingException e) {
log.error("Failed to serialize TaskCreated event", e);
}
}
}
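Keying by task ID preserves per-task ordering because Kafka's default partitioner maps equal keys to the same partition: a hash of the key modulo the partition count. Kafka actually hashes with murmur2; the sketch below uses a simple string hash purely to illustrate the property:

```javascript
// Simplified stand-in for Kafka's default partitioner: equal keys always
// land on the same partition. (Kafka's real partitioner uses murmur2.)
function partitionFor(key, numPartitions) {
  let h = 0;
  for (const ch of key) {
    h = (h * 31 + ch.charCodeAt(0)) | 0; // 32-bit rolling hash
  }
  return Math.abs(h) % numPartitions;
}
```

Because the mapping is deterministic, every event for `task-456` goes to one partition of `task-events`, and a single consumer in the group sees them in publish order.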
**Consumer configuration**
@Configuration
@EnableKafka
public class KafkaConsumerConfig {
@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapServers;
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> config = new HashMap<>();
config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
config.put(ConsumerConfig.GROUP_ID_CONFIG, "notification-service-group");
// Offset management
config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
// Performance settings
config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
config.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1);
config.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);
return new DefaultKafkaConsumerFactory<>(config);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(3); // three consumer threads
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
return factory;
}
}
**Using the consumer**
public class NotificationEventConsumer {
@Autowired
private EmailService emailService;
@Autowired
private ObjectMapper objectMapper;
@KafkaListener(topics = "task-events", groupId = "notification-service-group")
public void consumeTaskEvent(
@Payload String message,
@Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
@Header(KafkaHeaders.OFFSET) long offset,
Acknowledgment acknowledgment) {
try {
TaskEvent event = objectMapper.readValue(message, TaskEvent.class);
log.info("Received event: {} from partition: {}, offset: {}",
event.getEventType(), partition, offset);
switch (event.getEventType()) {
case "TaskCreated":
handleTaskCreated(event);
break;
case "TaskAssigned":
handleTaskAssigned(event);
break;
case "TaskCommentAdded":
handleTaskCommentAdded(event);
break;
default:
log.warn("Unknown event type: {}", event.getEventType());
}
// 수동 오프셋 커밋
acknowledgment.acknowledge();
} catch (Exception e) {
log.error("Failed to process event", e);
// Error handling (retry, DLQ, etc.)
}
}
private void handleTaskCreated(TaskEvent event) {
// Confirmation email to the task creator
if (event.getPayload().getCreatedBy() != null) {
emailService.sendTaskCreatedEmail(event.getPayload());
}
}
private void handleTaskAssigned(TaskEvent event) {
// Notification email to the assignee
if (event.getPayload().getAssigneeId() != null) {
emailService.sendTaskAssignedEmail(event.getPayload());
}
}
}
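The catch block above only logs failures. In practice you retry a bounded number of times and then route the message to a dead-letter destination so it is not lost. A minimal plain-Java sketch of that control flow (the class and method names are illustrative, not part of the service):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative retry-then-DLQ policy: run the handler up to maxAttempts,
// then park the message in a dead-letter list instead of dropping it.
class RetryWithDeadLetter {
    private final int maxAttempts;
    private final List<String> deadLetters = new ArrayList<>();

    RetryWithDeadLetter(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    /** Returns true if the handler eventually succeeded. */
    public boolean process(String message, Consumer<String> handler) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                handler.accept(message);
                return true;          // success: the caller can acknowledge the offset
            } catch (RuntimeException e) {
                // swallow and fall through to the next attempt
            }
        }
        deadLetters.add(message);     // attempts exhausted: send to the DLQ
        return false;
    }

    public List<String> deadLetters() {
        return deadLetters;
    }
}
```

Spring Kafka packages this policy as `DefaultErrorHandler` combined with a `DeadLetterPublishingRecoverer` registered on the listener container factory; the sketch only shows the control flow.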
Unified Docker Compose Configuration
The complete docker-compose.yml
version: '3.8'
services:
# === Database ===
postgres:
image: postgres:16-alpine
container_name: postgres-db
environment:
POSTGRES_DB: postgres
POSTGRES_USER: admin
POSTGRES_PASSWORD: ${DB_PASSWORD:-admin123}
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
- ./infrastructure/postgres/init:/docker-entrypoint-initdb.d
- ./infrastructure/postgres/postgresql.conf:/etc/postgresql/postgresql.conf
command: postgres -c config_file=/etc/postgresql/postgresql.conf
healthcheck:
test: ["CMD-SHELL", "pg_isready -U admin"]
interval: 10s
timeout: 5s
retries: 5
networks:
- backend
deploy:
resources:
limits:
cpus: '2'
memory: 4G
# === Cache ===
redis:
image: redis:7-alpine
container_name: redis-cache
command: redis-server /etc/redis/redis.conf
ports:
- "6379:6379"
volumes:
- redis_data:/data
- ./infrastructure/redis/redis.conf:/etc/redis/redis.conf
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 5
networks:
- backend
deploy:
resources:
limits:
cpus: '1'
memory: 2G
# === Message Queue ===
zookeeper:
image: confluentinc/cp-zookeeper:7.5.0
container_name: zookeeper
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
volumes:
- zookeeper_data:/var/lib/zookeeper/data
networks:
- backend
healthcheck:
test: ["CMD", "nc", "-z", "localhost", "2181"]
interval: 10s
timeout: 5s
retries: 5
kafka:
image: confluentinc/cp-kafka:7.5.0
container_name: kafka
depends_on:
zookeeper:
condition: service_healthy
ports:
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
volumes:
- kafka_data:/var/lib/kafka/data
networks:
- backend
healthcheck:
test: ["CMD", "kafka-broker-api-versions", "--bootstrap-server", "localhost:9092"]
interval: 10s
timeout: 10s
retries: 5
# === Microservices ===
api-gateway:
build:
context: ./backend/api-gateway
dockerfile: Dockerfile
container_name: api-gateway
ports:
- "8080:8080"
environment:
SPRING_PROFILES_ACTIVE: docker
REDIS_HOST: redis
USER_SERVICE_URL: http://user-service:8081
TASK_SERVICE_URL: http://task-service:8082
depends_on:
redis:
condition: service_healthy
networks:
- backend
- frontend
restart: unless-stopped
user-service:
build:
context: ./backend/user-service
dockerfile: Dockerfile
container_name: user-service
ports:
- "8081:8081"
environment:
SPRING_PROFILES_ACTIVE: docker
SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/userdb
SPRING_DATASOURCE_PASSWORD: ${DB_PASSWORD:-admin123}
REDIS_HOST: redis
KAFKA_BOOTSTRAP_SERVERS: kafka:29092
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
kafka:
condition: service_healthy
networks:
- backend
restart: unless-stopped
task-service:
build:
context: ./backend/task-service
dockerfile: Dockerfile
container_name: task-service
ports:
- "8082:8082"
environment:
SPRING_PROFILES_ACTIVE: docker
SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/taskdb
SPRING_DATASOURCE_PASSWORD: ${DB_PASSWORD:-admin123}
REDIS_HOST: redis
KAFKA_BOOTSTRAP_SERVERS: kafka:29092
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
kafka:
condition: service_healthy
networks:
- backend
restart: unless-stopped
notification-service:
build:
context: ./backend/notification-service
dockerfile: Dockerfile
container_name: notification-service
ports:
- "8083:8083"
environment:
SPRING_PROFILES_ACTIVE: docker
KAFKA_BOOTSTRAP_SERVERS: kafka:29092
EMAIL_HOST: ${EMAIL_HOST}
EMAIL_USERNAME: ${EMAIL_USERNAME}
EMAIL_PASSWORD: ${EMAIL_PASSWORD}
depends_on:
kafka:
condition: service_healthy
networks:
- backend
restart: unless-stopped
# === Frontend ===
frontend:
build:
context: ./frontend/react-app
dockerfile: Dockerfile
container_name: react-frontend
ports:
- "3000:80"
depends_on:
- api-gateway
networks:
- frontend
restart: unless-stopped
# === Monitoring (optional) ===
kafka-ui:
image: provectuslabs/kafka-ui:latest
container_name: kafka-ui
ports:
- "8090:8080"
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
depends_on:
- kafka
networks:
- backend
volumes:
postgres_data:
redis_data:
zookeeper_data:
kafka_data:
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: false # false for development; set to true in production
Environment Variables File (.env)
# Database
DB_PASSWORD=secure_db_password_123
# Redis
REDIS_PASSWORD=secure_redis_password_456
# Email (Notification Service)
EMAIL_HOST=smtp.gmail.com
EMAIL_PORT=587
EMAIL_USERNAME=your-email@gmail.com
EMAIL_PASSWORD=your-app-password
# JWT
JWT_SECRET=your-very-secure-jwt-secret-min-256-bits
# OAuth
GOOGLE_CLIENT_ID=your-google-client-id
GOOGLE_CLIENT_SECRET=your-google-client-secret
GITHUB_CLIENT_ID=your-github-client-id
GITHUB_CLIENT_SECRET=your-github-client-secret
Infrastructure Monitoring
PostgreSQL Monitoring
Enabling pg_stat_statements
-- postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
pg_stat_statements.max = 10000
-- Install the extension
CREATE EXTENSION pg_stat_statements;
-- Inspect the slowest queries
SELECT
query,
calls,
total_exec_time,
mean_exec_time,
max_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
Redis Monitoring
# Redis CLI
redis-cli INFO
# Key metrics
- used_memory_human
- connected_clients
- total_commands_processed
- keyspace_hits
- keyspace_misses
# Hit rate calculation
Hit Rate = keyspace_hits / (keyspace_hits + keyspace_misses)
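The hit-rate formula above can be computed directly from the two counters reported by `redis-cli INFO stats`; a small sketch (the class name is illustrative):

```java
// Computes the Redis cache hit rate from the keyspace_hits and
// keyspace_misses counters reported by `redis-cli INFO stats`.
class RedisHitRate {
    public static double hitRate(long keyspaceHits, long keyspaceMisses) {
        long total = keyspaceHits + keyspaceMisses;
        if (total == 0) {
            return 0.0; // no lookups yet; avoid division by zero
        }
        return (double) keyspaceHits / total;
    }
}
```

As a rule of thumb, a sustained hit rate well below ~0.8 suggests the cache keys or TTLs need revisiting.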
Kafka Monitoring
# List topics
kafka-topics --bootstrap-server localhost:9092 --list
# Consumer group status -- the LAG column (consumer lag) is the key metric to watch
kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group notification-service-group
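The LAG reported by `kafka-consumer-groups --describe` is, per partition, the log end offset minus the group's committed offset; the group's total lag is the sum across partitions. A sketch of that arithmetic (illustrative names, made-up offsets):

```java
// Consumer lag per partition = log-end-offset - committed offset;
// total group lag is the sum across partitions.
class ConsumerLag {
    public static long totalLag(long[] logEndOffsets, long[] committedOffsets) {
        long total = 0;
        for (int p = 0; p < logEndOffsets.length; p++) {
            // max(0, ...) guards against a committed offset read slightly
            // after the log end offset snapshot
            total += Math.max(0, logEndOffsets[p] - committedOffsets[p]);
        }
        return total;
    }
}
```

A lag that grows monotonically means the consumers cannot keep up and the concurrency (or the handler's throughput) needs attention.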
Phase 6: API Gateway and Security
6.1 Spring Cloud Gateway Configuration
// backend/api-gateway/src/main/java/com/example/gateway/config/GatewayConfig.java
package com.example.gateway.config;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class GatewayConfig {
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
return builder.routes()
// User Service Routes
.route("user-service", r -> r
.path("/api/users/**", "/api/auth/**")
.filters(f -> f
.stripPrefix(0)
.addRequestHeader("X-Gateway", "API-Gateway")
)
.uri("lb://user-service"))
// Task Service Routes
.route("task-service", r -> r
.path("/api/tasks/**")
.filters(f -> f
.stripPrefix(0)
.addRequestHeader("X-Gateway", "API-Gateway")
)
.uri("lb://task-service"))
// Notification Service Routes
.route("notification-service", r -> r
.path("/api/notifications/**")
.filters(f -> f
.stripPrefix(0)
)
.uri("lb://notification-service"))
.build();
}
}
6.2 Rate Limiting Filter
// backend/api-gateway/src/main/java/com/example/gateway/filter/RateLimitingFilter.java
package com.example.gateway.filter;
import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.Refill;
import lombok.extern.slf4j.Slf4j;
import org.springframework.cloud.gateway.filter.GatewayFilter;
import org.springframework.cloud.gateway.filter.factory.AbstractGatewayFilterFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import java.time.Duration;
import java.util.concurrent.ConcurrentHashMap;
@Slf4j
@Component
public class RateLimitingFilter extends AbstractGatewayFilterFactory<RateLimitingFilter.Config> {
private final RedisTemplate<String, String> redisTemplate; // reserved for a Redis-backed distributed limiter; the map below is per-instance
private final ConcurrentHashMap<String, Bucket> cache = new ConcurrentHashMap<>();
public RateLimitingFilter(RedisTemplate<String, String> redisTemplate) {
super(Config.class);
this.redisTemplate = redisTemplate;
}
@Override
public GatewayFilter apply(Config config) {
return (exchange, chain) -> {
java.net.InetSocketAddress remote = exchange.getRequest().getRemoteAddress();
String clientIp = (remote != null) ? remote.getAddress().getHostAddress() : "unknown";
String key = "rate_limit:" + clientIp;
Bucket bucket = cache.computeIfAbsent(key, k -> createNewBucket(config));
if (bucket.tryConsume(1)) {
return chain.filter(exchange);
} else {
exchange.getResponse().setStatusCode(HttpStatus.TOO_MANY_REQUESTS);
log.warn("Rate limit exceeded for IP: {}", clientIp);
return exchange.getResponse().setComplete();
}
};
}
private Bucket createNewBucket(Config config) {
Bandwidth limit = Bandwidth.classic(
config.getCapacity(),
Refill.intervally(config.getRefillTokens(), Duration.ofSeconds(config.getRefillPeriodSeconds()))
);
return Bucket.builder()
.addLimit(limit)
.build();
}
public static class Config {
private int capacity = 100;
private int refillTokens = 100;
private int refillPeriodSeconds = 60;
// Getters and setters
public int getCapacity() { return capacity; }
public void setCapacity(int capacity) { this.capacity = capacity; }
public int getRefillTokens() { return refillTokens; }
public void setRefillTokens(int refillTokens) { this.refillTokens = refillTokens; }
public int getRefillPeriodSeconds() { return refillPeriodSeconds; }
public void setRefillPeriodSeconds(int refillPeriodSeconds) {
this.refillPeriodSeconds = refillPeriodSeconds;
}
}
}
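Bucket4j's `Bandwidth.classic(capacity, Refill.intervally(...))` implements a token bucket. A minimal plain-Java version of the same idea, with the clock passed in so the refill behavior is deterministic (illustrative only, not a replacement for Bucket4j):

```java
// Minimal token bucket: `capacity` tokens, refilled in full once every
// `refillPeriodMillis` -- the semantics of Refill.intervally used above.
// Not thread-safe; for illustration only.
class TokenBucket {
    private final long capacity;
    private final long refillPeriodMillis;
    private long tokens;
    private long lastRefillAt;

    TokenBucket(long capacity, long refillPeriodMillis, long nowMillis) {
        this.capacity = capacity;
        this.refillPeriodMillis = refillPeriodMillis;
        this.tokens = capacity;       // start full
        this.lastRefillAt = nowMillis;
    }

    public boolean tryConsume(long nowMillis) {
        if (nowMillis - lastRefillAt >= refillPeriodMillis) {
            tokens = capacity;        // interval elapsed: refill to capacity
            lastRefillAt = nowMillis;
        }
        if (tokens > 0) {
            tokens--;
            return true;              // request allowed
        }
        return false;                 // reject: maps to 429 Too Many Requests
    }
}
```

Note that because the buckets in the filter above live in an in-memory map, each gateway instance enforces its own limit; a shared Redis-backed bucket is needed for a cluster-wide limit.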
6.3 JWT Authentication Filter
// backend/api-gateway/src/main/java/com/example/gateway/filter/JwtAuthenticationFilter.java
package com.example.gateway.filter;
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.gateway.filter.GatewayFilter;
import org.springframework.cloud.gateway.filter.factory.AbstractGatewayFilterFactory;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
@Slf4j
@Component
public class JwtAuthenticationFilter extends AbstractGatewayFilterFactory<JwtAuthenticationFilter.Config> {
@Value("${jwt.secret}")
private String jwtSecret;
public JwtAuthenticationFilter() {
super(Config.class);
}
@Override
public GatewayFilter apply(Config config) {
return (exchange, chain) -> {
String path = exchange.getRequest().getURI().getPath();
// Skip authentication for public endpoints
if (isPublicPath(path)) {
return chain.filter(exchange);
}
String authHeader = exchange.getRequest().getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
if (authHeader == null || !authHeader.startsWith("Bearer ")) {
exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
return exchange.getResponse().setComplete();
}
String token = authHeader.substring(7);
try {
Claims claims = validateToken(token);
// Add user info to request headers
ServerWebExchange modifiedExchange = exchange.mutate()
.request(r -> r
.header("X-User-Id", claims.getSubject())
.header("X-User-Email", claims.get("email", String.class))
)
.build();
return chain.filter(modifiedExchange);
} catch (Exception e) {
log.error("JWT validation failed", e);
exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
return exchange.getResponse().setComplete();
}
};
}
private boolean isPublicPath(String path) {
return path.startsWith("/api/auth/login") ||
path.startsWith("/api/auth/register") ||
path.startsWith("/api/auth/oauth2") ||
path.startsWith("/health") ||
path.startsWith("/actuator");
}
private Claims validateToken(String token) {
SecretKey key = Keys.hmacShaKeyFor(jwtSecret.getBytes(StandardCharsets.UTF_8));
return Jwts.parser()
.verifyWith(key)
.build()
.parseSignedClaims(token)
.getPayload();
}
public static class Config {
// Configuration properties if needed
}
}
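`validateToken` delegates the signature check to jjwt, but the check itself is just HMAC-SHA256 over `base64url(header).base64url(payload)`, compared against the token's third segment (RFC 7515). A JDK-only sketch of that step (class and helper names are illustrative):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Recomputes the HS256 signature of a JWT and compares it with the
// token's third segment, the same check jjwt performs internally.
class Hs256Check {
    private static byte[] hmac(String data, byte[] secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            return mac.doFinal(data.getBytes(StandardCharsets.US_ASCII));
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static boolean signatureMatches(String jwt, byte[] secret) {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;
        byte[] expected = hmac(parts[0] + "." + parts[1], secret);
        byte[] actual = Base64.getUrlDecoder().decode(parts[2]);
        // constant-time comparison, as a real verifier must use
        return MessageDigest.isEqual(expected, actual);
    }

    // Helper to sign, so the check can be demonstrated without a JWT library.
    public static String sign(String headerDotPayload, byte[] secret) {
        byte[] sig = hmac(headerDotPayload, secret);
        return headerDotPayload + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }
}
```

This is why the secret must be at least 256 bits: the whole integrity guarantee rests on that HMAC key.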
6.4 API Gateway Application Configuration
# backend/api-gateway/src/main/resources/application.yml
spring:
application:
name: api-gateway
cloud:
gateway:
default-filters:
- name: JwtAuthenticationFilter
- name: RateLimitingFilter
args:
capacity: 100
refillTokens: 100
refillPeriodSeconds: 60
globalcors:
corsConfigurations:
'[/**]':
allowedOrigins:
- "http://localhost:3000"
- "http://frontend"
allowedMethods:
- GET
- POST
- PUT
- DELETE
- PATCH
- OPTIONS
allowedHeaders:
- "*"
allowCredentials: true
maxAge: 3600
data:
redis:
host: localhost
port: 6379
server:
port: 8080
jwt:
secret: ${JWT_SECRET:your-very-secure-secret-key-min-256-bits}
logging:
level:
org.springframework.cloud.gateway: DEBUG
com.example.gateway: DEBUG
---
spring:
config:
activate:
on-profile: docker
data:
redis:
host: redis
Phase 7: Testing Strategy
7.1 Unit Test Example (User Service)
// backend/user-service/src/test/java/com/example/userservice/service/AuthServiceTest.java
package com.example.userservice.service;
import com.example.userservice.dto.LoginRequest;
import com.example.userservice.dto.TokenResponse;
import com.example.userservice.dto.UserRegistrationRequest;
import com.example.userservice.dto.UserResponse;
import com.example.userservice.entity.User;
import com.example.userservice.repository.UserRepository;
import com.example.userservice.security.JwtTokenProvider;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import org.springframework.security.crypto.password.PasswordEncoder;
import java.util.Optional;
import java.util.UUID;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.*;
@ExtendWith(MockitoExtension.class)
class AuthServiceTest {
@Mock
private UserRepository userRepository;
@Mock
private PasswordEncoder passwordEncoder;
@Mock
private JwtTokenProvider jwtTokenProvider;
@InjectMocks
private AuthService authService;
private UserRegistrationRequest registrationRequest;
private LoginRequest loginRequest;
private User user;
@BeforeEach
void setUp() {
registrationRequest = new UserRegistrationRequest();
registrationRequest.setEmail("test@example.com");
registrationRequest.setPassword("Password123!");
registrationRequest.setName("Test User");
loginRequest = new LoginRequest();
loginRequest.setEmail("test@example.com");
loginRequest.setPassword("Password123!");
user = User.builder()
.id(UUID.randomUUID())
.email("test@example.com")
.passwordHash("hashedPassword")
.name("Test User")
.isActive(true)
.build();
}
@Test
void register_WithValidData_ShouldCreateUser() {
// Given
when(userRepository.existsByEmail(registrationRequest.getEmail())).thenReturn(false);
when(passwordEncoder.encode(registrationRequest.getPassword())).thenReturn("hashedPassword");
when(userRepository.save(any(User.class))).thenReturn(user);
// When
UserResponse response = authService.register(registrationRequest);
// Then
assertThat(response).isNotNull();
assertThat(response.getEmail()).isEqualTo("test@example.com");
assertThat(response.getName()).isEqualTo("Test User");
verify(userRepository).existsByEmail(registrationRequest.getEmail());
verify(passwordEncoder).encode(registrationRequest.getPassword());
verify(userRepository).save(any(User.class));
}
@Test
void register_WithExistingEmail_ShouldThrowException() {
// Given
when(userRepository.existsByEmail(registrationRequest.getEmail())).thenReturn(true);
// When & Then
assertThatThrownBy(() -> authService.register(registrationRequest))
.isInstanceOf(IllegalArgumentException.class)
.hasMessage("Email already in use");
verify(userRepository).existsByEmail(registrationRequest.getEmail());
verify(userRepository, never()).save(any(User.class));
}
@Test
void login_WithValidCredentials_ShouldReturnTokens() {
// Given
when(userRepository.findByEmail(loginRequest.getEmail())).thenReturn(Optional.of(user));
when(passwordEncoder.matches(loginRequest.getPassword(), user.getPasswordHash())).thenReturn(true);
when(jwtTokenProvider.generateAccessToken(any(UUID.class), anyString())).thenReturn("access-token");
when(jwtTokenProvider.generateRefreshToken(any(UUID.class))).thenReturn("refresh-token");
// When
TokenResponse response = authService.login(loginRequest);
// Then
assertThat(response).isNotNull();
assertThat(response.getAccessToken()).isEqualTo("access-token");
assertThat(response.getRefreshToken()).isEqualTo("refresh-token");
assertThat(response.getTokenType()).isEqualTo("Bearer");
verify(userRepository).findByEmail(loginRequest.getEmail());
verify(passwordEncoder).matches(loginRequest.getPassword(), user.getPasswordHash());
}
@Test
void login_WithInvalidPassword_ShouldThrowException() {
// Given
when(userRepository.findByEmail(loginRequest.getEmail())).thenReturn(Optional.of(user));
when(passwordEncoder.matches(loginRequest.getPassword(), user.getPasswordHash())).thenReturn(false);
// When & Then
assertThatThrownBy(() -> authService.login(loginRequest))
.isInstanceOf(IllegalArgumentException.class)
.hasMessage("Invalid credentials");
verify(userRepository).findByEmail(loginRequest.getEmail());
verify(passwordEncoder).matches(loginRequest.getPassword(), user.getPasswordHash());
}
}
7.2 Integration Test Example
// backend/user-service/src/test/java/com/example/userservice/integration/AuthControllerIntegrationTest.java
package com.example.userservice.integration;
import com.example.userservice.dto.LoginRequest;
import com.example.userservice.dto.UserRegistrationRequest;
import com.example.userservice.entity.User;
import com.example.userservice.repository.UserRepository;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.web.servlet.MockMvc;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;
@SpringBootTest
@AutoConfigureMockMvc
@ActiveProfiles("test")
class AuthControllerIntegrationTest {
@Autowired
private MockMvc mockMvc;
@Autowired
private ObjectMapper objectMapper;
@Autowired
private UserRepository userRepository;
@Autowired
private PasswordEncoder passwordEncoder;
@AfterEach
void tearDown() {
userRepository.deleteAll();
}
@Test
void register_WithValidData_ShouldReturn201() throws Exception {
// Given
UserRegistrationRequest request = new UserRegistrationRequest();
request.setEmail("newuser@example.com");
request.setPassword("Password123!");
request.setName("New User");
// When & Then
mockMvc.perform(post("/api/auth/register")
.contentType(MediaType.APPLICATION_JSON)
.content(objectMapper.writeValueAsString(request)))
.andExpect(status().isCreated())
.andExpect(jsonPath("$.data.email").value("newuser@example.com"))
.andExpect(jsonPath("$.data.name").value("New User"));
}
@Test
void login_WithValidCredentials_ShouldReturn200AndTokens() throws Exception {
// Given
User user = User.builder()
.email("existing@example.com")
.passwordHash(passwordEncoder.encode("Password123!"))
.name("Existing User")
.isActive(true)
.build();
userRepository.save(user);
LoginRequest request = new LoginRequest();
request.setEmail("existing@example.com");
request.setPassword("Password123!");
// When & Then
mockMvc.perform(post("/api/auth/login")
.contentType(MediaType.APPLICATION_JSON)
.content(objectMapper.writeValueAsString(request)))
.andExpect(status().isOk())
.andExpect(jsonPath("$.data.accessToken").exists())
.andExpect(jsonPath("$.data.refreshToken").exists())
.andExpect(jsonPath("$.data.tokenType").value("Bearer"));
}
}
7.3 Frontend Test Example (React Testing Library)
// frontend/react-app/src/components/auth/__tests__/LoginForm.test.jsx
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import { BrowserRouter } from 'react-router-dom';
import { AuthProvider } from '../../../context/AuthContext';
import { LoginForm } from '../LoginForm';
import { authApi } from '../../../api/authApi';
jest.mock('../../../api/authApi');
const renderWithProviders = (component) => {
return render(
<BrowserRouter>
<AuthProvider>
{component}
</AuthProvider>
</BrowserRouter>
);
};
describe('LoginForm', () => {
beforeEach(() => {
jest.clearAllMocks();
});
test('renders login form with email and password fields', () => {
renderWithProviders(<LoginForm />);
expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
expect(screen.getByLabelText(/password/i)).toBeInTheDocument();
expect(screen.getByRole('button', { name: /login/i })).toBeInTheDocument();
});
test('successful login redirects to dashboard', async () => {
authApi.login.mockResolvedValue({
data: {
accessToken: 'token',
refreshToken: 'refresh',
},
});
authApi.getCurrentUser.mockResolvedValue({
data: {
id: '1',
email: 'test@example.com',
name: 'Test User',
},
});
renderWithProviders(<LoginForm />);
const emailInput = screen.getByLabelText(/email/i);
const passwordInput = screen.getByLabelText(/password/i);
const loginButton = screen.getByRole('button', { name: /login/i });
fireEvent.change(emailInput, { target: { value: 'test@example.com' } });
fireEvent.change(passwordInput, { target: { value: 'password123' } });
fireEvent.click(loginButton);
await waitFor(() => {
expect(authApi.login).toHaveBeenCalledWith({
email: 'test@example.com',
password: 'password123',
});
});
});
test('displays error message on login failure', async () => {
authApi.login.mockRejectedValue({
response: {
data: {
message: 'Invalid credentials',
},
},
});
renderWithProviders(<LoginForm />);
const emailInput = screen.getByLabelText(/email/i);
const passwordInput = screen.getByLabelText(/password/i);
const loginButton = screen.getByRole('button', { name: /login/i });
fireEvent.change(emailInput, { target: { value: 'test@example.com' } });
fireEvent.change(passwordInput, { target: { value: 'wrongpassword' } });
fireEvent.click(loginButton);
await waitFor(() => {
expect(screen.getByText(/invalid credentials/i)).toBeInTheDocument();
});
});
});
Phase 8: CI/CD
8.1 Jenkins Pipeline (Jenkinsfile)
// ci-cd/jenkins/Jenkinsfile
pipeline {
agent any
environment {
DOCKER_REGISTRY = 'your-registry.com'
DOCKER_CREDENTIALS = 'docker-hub-credentials'
GIT_CREDENTIALS = 'github-credentials'
SONAR_HOST = 'http://sonarqube:9000'
SONAR_TOKEN = credentials('sonarqube-token')
}
stages {
stage('Checkout') {
steps {
git branch: 'main',
credentialsId: env.GIT_CREDENTIALS,
url: 'https://github.com/your-org/task-manager.git'
}
}
stage('Build Backend Services') {
parallel {
stage('Build User Service') {
steps {
dir('backend/user-service') {
sh 'mvn clean package -DskipTests'
}
}
}
stage('Build Task Service') {
steps {
dir('backend/task-service') {
sh 'mvn clean package -DskipTests'
}
}
}
stage('Build API Gateway') {
steps {
dir('backend/api-gateway') {
sh 'mvn clean package -DskipTests'
}
}
}
}
}
stage('Unit Tests') {
parallel {
stage('Test User Service') {
steps {
dir('backend/user-service') {
sh 'mvn test'
junit 'target/surefire-reports/*.xml'
}
}
}
stage('Test Task Service') {
steps {
dir('backend/task-service') {
sh 'mvn test'
junit 'target/surefire-reports/*.xml'
}
}
}
stage('Test Frontend') {
steps {
dir('frontend/react-app') {
sh 'npm ci'
sh 'npm test -- --coverage'
}
}
}
}
}
stage('Code Quality Analysis') {
steps {
script {
def scannerHome = tool 'SonarScanner'
withSonarQubeEnv('SonarQube') {
sh """
${scannerHome}/bin/sonar-scanner \
-Dsonar.projectKey=task-manager \
-Dsonar.sources=. \
-Dsonar.host.url=${SONAR_HOST} \
-Dsonar.login=${SONAR_TOKEN}
"""
}
}
}
}
stage('Quality Gate') {
steps {
timeout(time: 5, unit: 'MINUTES') {
waitForQualityGate abortPipeline: true
}
}
}
stage('Build Docker Images') {
parallel {
stage('Build User Service Image') {
steps {
script {
docker.build("${DOCKER_REGISTRY}/user-service:${BUILD_NUMBER}",
'./backend/user-service')
}
}
}
stage('Build Task Service Image') {
steps {
script {
docker.build("${DOCKER_REGISTRY}/task-service:${BUILD_NUMBER}",
'./backend/task-service')
}
}
}
stage('Build API Gateway Image') {
steps {
script {
docker.build("${DOCKER_REGISTRY}/api-gateway:${BUILD_NUMBER}",
'./backend/api-gateway')
}
}
}
stage('Build Frontend Image') {
steps {
script {
docker.build("${DOCKER_REGISTRY}/frontend:${BUILD_NUMBER}",
'./frontend/react-app')
}
}
}
}
}
stage('Push Docker Images') {
steps {
script {
docker.withRegistry("https://${DOCKER_REGISTRY}", DOCKER_CREDENTIALS) {
docker.image("${DOCKER_REGISTRY}/user-service:${BUILD_NUMBER}").push()
docker.image("${DOCKER_REGISTRY}/task-service:${BUILD_NUMBER}").push()
docker.image("${DOCKER_REGISTRY}/api-gateway:${BUILD_NUMBER}").push()
docker.image("${DOCKER_REGISTRY}/frontend:${BUILD_NUMBER}").push()
// Tag as latest
docker.image("${DOCKER_REGISTRY}/user-service:${BUILD_NUMBER}").push('latest')
docker.image("${DOCKER_REGISTRY}/task-service:${BUILD_NUMBER}").push('latest')
docker.image("${DOCKER_REGISTRY}/api-gateway:${BUILD_NUMBER}").push('latest')
docker.image("${DOCKER_REGISTRY}/frontend:${BUILD_NUMBER}").push('latest')
}
}
}
}
stage('Deploy to Staging') {
steps {
script {
sh '''
docker-compose -f docker-compose.staging.yml down
docker-compose -f docker-compose.staging.yml pull
docker-compose -f docker-compose.staging.yml up -d
'''
}
}
}
stage('Integration Tests') {
steps {
sh 'npm run test:integration'
}
}
stage('Deploy to Production') {
when {
branch 'main'
}
steps {
input message: 'Deploy to Production?', ok: 'Deploy'
script {
sh '''
docker-compose -f docker-compose.prod.yml down
docker-compose -f docker-compose.prod.yml pull
docker-compose -f docker-compose.prod.yml up -d
'''
}
}
}
}
post {
always {
cleanWs()
}
success {
echo 'Pipeline succeeded!'
// Send notification (Slack, email, etc.)
}
failure {
echo 'Pipeline failed!'
// Send alert
}
}
}
8.2 GitHub Actions Workflow
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main, develop ]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
test-backend:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:16
env:
POSTGRES_DB: testdb
POSTGRES_USER: test
POSTGRES_PASSWORD: test
ports:
- 5432:5432
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
strategy:
matrix:
service: [user-service, task-service, api-gateway]
steps:
- uses: actions/checkout@v4
- name: Set up JDK 21
uses: actions/setup-java@v4
with:
java-version: '21'
distribution: 'temurin'
cache: maven
- name: Run tests
working-directory: backend/${{ matrix.service }}
run: mvn test
- name: Generate coverage report
working-directory: backend/${{ matrix.service }}
run: mvn jacoco:report
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
with:
files: backend/${{ matrix.service }}/target/site/jacoco/jacoco.xml
flags: ${{ matrix.service }}
test-frontend:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
cache-dependency-path: frontend/react-app/package-lock.json
- name: Install dependencies
working-directory: frontend/react-app
run: npm ci
- name: Run tests
working-directory: frontend/react-app
run: npm test -- --coverage
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
files: frontend/react-app/coverage/lcov.info
flags: frontend
build-and-push:
needs: [test-backend, test-frontend]
runs-on: ubuntu-latest
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
permissions:
contents: read
packages: write
strategy:
matrix:
component:
- name: user-service
context: backend/user-service
- name: task-service
context: backend/task-service
- name: api-gateway
context: backend/api-gateway
- name: notification-service
context: backend/notification-service
- name: frontend
context: frontend/react-app
steps:
- uses: actions/checkout@v4
- name: Log in to Container Registry
uses: docker/login-action@v3
with:
registry: $
username: $
password: $
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: $/$/$
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern=
type=semver,pattern=.
type=sha
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: $
push: true
tags: $
labels: $
deploy:
needs: build-and-push
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v4
- name: Deploy to staging
run: |
echo "Deploying to staging environment"
# Add deployment commands here
# kubectl apply -f k8s/staging/
# or
# ansible-playbook deploy-staging.yml
- name: Run smoke tests
run: |
echo "Running smoke tests"
# Add smoke test commands
- name: Deploy to production
if: success()
run: |
echo "Deploying to production"
# kubectl apply -f k8s/production/
### 8.3 Docker Compose for Staging
~~~yaml
# docker-compose.staging.yml
version: '3.8'

services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: taskmanager
      POSTGRES_USER: staging_admin
      POSTGRES_PASSWORD: ${STAGING_DB_PASSWORD}
    volumes:
      - staging_postgres_data:/var/lib/postgresql/data
    networks:
      - staging

  redis:
    image: redis:7-alpine
    volumes:
      - staging_redis_data:/data
    networks:
      - staging

  kafka:
    image: confluentinc/cp-kafka:7.5.0
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
    networks:
      - staging

  zookeeper:
    image: confluentinc/cp-zookeeper:7.5.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    networks:
      - staging

  api-gateway:
    image: ${DOCKER_REGISTRY}/api-gateway:${BUILD_NUMBER}
    environment:
      SPRING_PROFILES_ACTIVE: staging
    ports:
      - "8080:8080"
    depends_on:
      - user-service
      - task-service
    networks:
      - staging

  user-service:
    image: ${DOCKER_REGISTRY}/user-service:${BUILD_NUMBER}
    environment:
      SPRING_PROFILES_ACTIVE: staging
      SPRING_DATASOURCE_PASSWORD: ${STAGING_DB_PASSWORD}
    depends_on:
      - postgres
      - redis
      - kafka
    networks:
      - staging

  task-service:
    image: ${DOCKER_REGISTRY}/task-service:${BUILD_NUMBER}
    environment:
      SPRING_PROFILES_ACTIVE: staging
      SPRING_DATASOURCE_PASSWORD: ${STAGING_DB_PASSWORD}
    depends_on:
      - postgres
      - redis
      - kafka
    networks:
      - staging

  notification-service:
    image: ${DOCKER_REGISTRY}/notification-service:${BUILD_NUMBER}
    environment:
      SPRING_PROFILES_ACTIVE: staging
    depends_on:
      - kafka
    networks:
      - staging

  frontend:
    image: ${DOCKER_REGISTRY}/frontend:${BUILD_NUMBER}
    ports:
      - "80:80"
    depends_on:
      - api-gateway
    networks:
      - staging

volumes:
  staging_postgres_data:
  staging_redis_data:

networks:
  staging:
    driver: bridge
~~~
## Phase 9: Monitoring and Operations

### 9.1 Prometheus Configuration
~~~yaml
# infrastructure/monitoring/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'api-gateway'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['api-gateway:8080']
  - job_name: 'user-service'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['user-service:8081']
  - job_name: 'task-service'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['task-service:8082']
  - job_name: 'notification-service'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['notification-service:8083']
  - job_name: 'postgres'
    static_configs:
      - targets: ['postgres-exporter:9187']
  - job_name: 'redis'
    static_configs:
      - targets: ['redis-exporter:9121']
~~~
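A scrape of `/actuator/prometheus` returns plain text, one sample per line. The sketch below is not part of the project's stack; it only illustrates the shape of that exposition format (the sample line is made up, and the parser is simplified — it assumes no commas inside label values):

```python
import re

# One sample line in Prometheus text exposition format (illustrative values)
SAMPLE = 'http_server_requests_seconds_count{status="200",uri="/api/tasks"} 42.0'

LINE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)(\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$'
)

def parse_metric(line: str) -> tuple[str, dict[str, str], float]:
    """Split a metric line into (name, labels, value)."""
    m = LINE_RE.match(line)
    if not m:
        raise ValueError(f"not a metric line: {line!r}")
    labels = {}
    if m.group("labels"):
        # Simplified: label values must not contain commas or escaped quotes
        for pair in m.group("labels").split(","):
            key, _, val = pair.partition("=")
            labels[key] = val.strip('"')
    return m.group("name"), labels, float(m.group("value"))

print(parse_metric(SAMPLE))
# -> ('http_server_requests_seconds_count', {'status': '200', 'uri': '/api/tasks'}, 42.0)
```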
### 9.2 Grafana Dashboard JSON (abridged)
~~~json
{
  "dashboard": {
    "title": "Task Manager - System Overview",
    "panels": [
      {
        "title": "Request Rate",
        "targets": [
          {
            "expr": "rate(http_server_requests_seconds_count[5m])",
            "legendFormat": ""
          }
        ]
      },
      {
        "title": "Response Time (p95)",
        "targets": [
          {
            "expr": "histogram_quantile(0.95, rate(http_server_requests_seconds_bucket[5m]))",
            "legendFormat": ""
          }
        ]
      },
      {
        "title": "Error Rate",
        "targets": [
          {
            "expr": "rate(http_server_requests_seconds_count{status=~\"5..\"}[5m])",
            "legendFormat": ""
          }
        ]
      },
      {
        "title": "Database Connections",
        "targets": [
          {
            "expr": "hikaricp_connections_active",
            "legendFormat": " - active"
          }
        ]
      }
    ]
  }
}
~~~
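The p95 panel relies on `histogram_quantile`, which estimates a quantile by linear interpolation inside cumulative `le` buckets. A rough Python sketch of that calculation (bucket boundaries and counts below are made up for illustration):

```python
def histogram_quantile(q: float, buckets: list[tuple[float, float]]) -> float:
    """Approximate a quantile from cumulative histogram buckets, the way
    PromQL's histogram_quantile interpolates.

    buckets: (upper_bound, cumulative_count) pairs sorted by bound;
    the last bound may be float('inf')."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound  # cannot interpolate into the +Inf bucket
            # Linear interpolation inside this bucket
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return prev_bound

# 100 requests: 60 finished under 0.1s, 90 under 0.25s, all under 0.5s
buckets = [(0.1, 60), (0.25, 90), (0.5, 100), (float("inf"), 100)]
print(histogram_quantile(0.95, buckets))  # -> 0.375
```

This is also why histogram bucket boundaries matter: the estimate can never be more precise than the bucket the quantile falls into.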
### 9.3 Application Monitoring Configuration
~~~yaml
# backend/user-service/src/main/resources/application.yml (additions)
management:
  endpoints:
    web:
      exposure:
        include: health,info,metrics,prometheus
  prometheus:
    metrics:
      export:
        enabled: true
  metrics:
    tags:
      application: ${spring.application.name}
      environment: ${spring.profiles.active}
  health:
    livenessState:
      enabled: true
    readinessState:
      enabled: true

logging:
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss} - %logger{36} - %msg%n"
  level:
    root: INFO
    com.example: DEBUG
  file:
    name: logs/user-service.log
  logback:
    rollingpolicy:
      max-file-size: 10MB
      max-history: 30
~~~
### 9.4 Kubernetes Deployment (optional)
~~~yaml
# infrastructure/kubernetes/user-service-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  labels:
    app: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: your-registry/user-service:latest
          ports:
            - containerPort: 8081
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "production"
            - name: SPRING_DATASOURCE_URL
              value: "jdbc:postgresql://postgres:5432/userdb"
            - name: SPRING_DATASOURCE_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8081
            initialDelaySeconds: 60
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
  type: ClusterIP
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
~~~
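The HPA's scaling rule is roughly `desired = ceil(currentReplicas × currentMetric / targetMetric)`, clamped to the min/max replica bounds and skipped while the ratio stays inside a small tolerance band. A minimal sketch of that arithmetic (function name and the 10% tolerance are illustrative, not Kubernetes API surface):

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_utilization: float,
                         target_utilization: float,
                         min_replicas: int = 3,
                         max_replicas: int = 10,
                         tolerance: float = 0.1) -> int:
    """Sketch of the HPA rule: ceil(current * current/target), clamped."""
    ratio = current_utilization / target_utilization
    if abs(ratio - 1.0) <= tolerance:  # within tolerance: no scaling
        return current_replicas
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, desired))

# 3 replicas averaging 95% CPU against the 70% target above -> scale out
print(hpa_desired_replicas(3, 95, 70))  # -> 5
```

This also shows why the manifest pins `minReplicas: 3`: even if utilization collapses, the computed value is clamped and the service never scales below three pods.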
## Appendix: Quick Start Guide

### Starting the local development environment
~~~bash
# 1. Clone the repository
git clone https://github.com/your-org/task-manager.git
cd task-manager

# 2. Configure environment variables
cp .env.example .env
# Edit the .env file and set the required values

# 3. Start the infrastructure services (PostgreSQL, Redis, Kafka)
docker-compose up -d postgres redis zookeeper kafka

# 4. Wait for the database to initialize
sleep 30

# 5. Build and run the backend services
cd backend/user-service && mvn spring-boot:run &
cd backend/task-service && mvn spring-boot:run &
cd backend/api-gateway && mvn spring-boot:run &

# 6. Run the frontend
cd frontend/react-app
npm install
npm run dev

# 7. Open the application
# Frontend: http://localhost:3000
# API Gateway: http://localhost:8080
~~~
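The blind `sleep 30` in step 4 can be replaced with a readiness poll. One possible retry helper (the `wait_for` name is ours, and the commented `pg_isready` invocation is an illustration, assuming the compose service names used above):

```shell
# wait_for: retry a readiness check instead of sleeping a fixed time.
# Usage: wait_for "<description>" <max_tries> <command...>
wait_for() {
  desc="$1"; tries="$2"; shift 2
  i=1
  while ! "$@" >/dev/null 2>&1; do
    if [ "$i" -ge "$tries" ]; then
      echo "$desc did not become ready" >&2
      return 1
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "$desc is ready"
}

# Real usage would poll the database container, e.g.:
#   wait_for "postgres" 30 docker-compose exec -T postgres pg_isready
wait_for "demo" 3 true   # stub command that succeeds immediately
```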
### Starting the full system with Docker Compose
~~~bash
# Start the full stack
docker-compose up -d

# Tail the logs
docker-compose logs -f

# Logs for a specific service
docker-compose logs -f user-service

# Stop
docker-compose down

# Stop and remove volumes as well
docker-compose down -v
~~~
## Summary and Next Steps

This guide covers the full ALM cycle of an enterprise application:
- Planning: project management, backlog, and Kanban boards with Taiga
- Development: Spring Boot microservices, React frontend
- Data: PostgreSQL, Redis, and Kafka integration
- Security: OAuth 2.0, JWT authentication, API Gateway
- Testing: unit, integration, and E2E tests
- CI/CD: Jenkins and GitHub Actions pipelines
- Deployment: Docker, Docker Compose, Kubernetes
- Monitoring: Prometheus, Grafana

### Next steps
- Production Kubernetes deployment setup
- Advanced security features (RBAC, encryption)
- Performance optimization and caching strategy
- Improved inter-service communication patterns
- Failover and disaster recovery planning
Written: 2026-01-11
# Taiga: An Open-Source Agile Project Management Platform

## What is Taiga?

Taiga.io is an open-source project management platform for agile teams. Launched in 2014, it is a web-based tool that supports the Scrum and Kanban methodologies.
### Basic facts
- Official website: https://taiga.io
- GitHub: https://github.com/taigaio
- License: GNU Affero General Public License v3.0 (AGPLv3)
- Languages:
  - Backend: Python (Django)
  - Frontend: JavaScript (Angular)
- Database: PostgreSQL
- Cloud service: https://tree.taiga.io
### Core philosophy
Taiga was built around the following ideas:
- Simplicity: an intuitive, uncluttered interface
- Open source: public source code and community contributions
- Agile first: optimized for Scrum and Kanban
- Beautiful design: a UI/UX that is a pleasure to use
## Key Features

### 1. Agile methodology support

Scrum
- Backlog management: structure Epics, User Stories, and Tasks
- Sprint planning: create and manage sprints
- Burndown charts: visualize progress
- Story points: estimate effort

Kanban
- Kanban board: move work with drag and drop
- WIP limits: cap Work In Progress per column
- Cumulative flow diagram: analyze flow

### 2. Collaboration
- Issue tracking: manage bugs, questions, and enhancements
- Wiki: project documentation
- Discussions: team conversations
- Notifications: real-time change alerts
- Mentions: ping teammates with @username

### 3. Customization
- Custom fields: project-specific fields
- Custom workflows: define your own statuses
- Roles and permissions: fine-grained access control
- Tags: free-form labeling

### 4. Integrations
- GitHub/GitLab: link commits and PRs to issues
- Slack: send notifications
- Webhooks: connect external systems
- API: RESTful API
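The REST API makes all of the above scriptable. A minimal sketch of building a login call against Taiga's `/api/v1/auth` endpoint, which issues the `auth_token` used as a Bearer token on subsequent requests (the base URL and credentials are placeholders; the request is only constructed here and would need a live instance to actually send):

```python
import json
import urllib.request

TAIGA_URL = "http://localhost:9000"  # placeholder: a self-hosted instance

def build_auth_request(username: str, password: str) -> urllib.request.Request:
    """Build (but do not send) the login request for Taiga's REST API."""
    payload = json.dumps({
        "type": "normal",        # normal username/password login
        "username": username,
        "password": password,
    }).encode()
    return urllib.request.Request(
        f"{TAIGA_URL}/api/v1/auth",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_auth_request("admin", "changeme")
print(req.method, req.full_url)  # -> POST http://localhost:9000/api/v1/auth
# Against a live server:
# token = json.load(urllib.request.urlopen(req))["auth_token"]
```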
## Taiga vs Jira

### Detailed comparison

| Item | Taiga | Jira |
|---|---|---|
| License | Open source (AGPLv3) | Commercial software |
| Cost | Free (self-hosted); cloud: $0-70/month | Free up to 10 users; from $7.75/user/month |
| Installation | Self-hosted or cloud | Mostly cloud; Data Center option |
| Complexity | Simple, intuitive | Complex, feature-rich |
| Learning curve | Low (1-2 days) | High (1-2 weeks) |
| Customization | Source code can be modified | Plugin/app marketplace |
| Target users | Small, agile teams | All sizes, enterprise |
| Strengths | Intuitive, fast, cost-effective | Rich features and ecosystem |
| Weaknesses | Fewer advanced features, small plugin ecosystem | Complexity, cost |
### Recommendations by scenario

Choose Taiga when:
- ✅ You are a startup or small team (10-50 people)
- ✅ You run pure Scrum/Kanban
- ✅ Budget is tight
- ✅ Your organization prefers open source
- ✅ You want a simple, fast tool
- ✅ Data privacy matters (self-hosted)

Choose Jira when:
- ✅ You are a large company or have a complex org structure
- ✅ You need the wider Atlassian suite (Confluence, Bitbucket)
- ✅ You need complex workflows and automation
- ✅ You need extensive reporting and analytics
- ✅ You need many plugins and extensibility
## Installation and Setup

### Option 1: Cloud version (easiest)

1. Sign up at https://tree.taiga.io
2. Create a project
   - Click "Create Project"
   - Enter a project name and description
   - Choose Scrum or Kanban
3. Invite your team
   - Settings → Members → Invite
   - Invite members by email

### Option 2: Self-hosted with Docker

Minimum requirements:
- Docker and Docker Compose installed
- At least 2GB RAM
- 10GB of disk space

docker-compose.yml
~~~yaml
version: '3.8'

services:
  # PostgreSQL Database
  taiga-db:
    image: postgres:16-alpine
    container_name: taiga-db
    environment:
      POSTGRES_DB: taiga
      POSTGRES_USER: taiga
      POSTGRES_PASSWORD: changeme_db_password
    volumes:
      - taiga_db_data:/var/lib/postgresql/data
    networks:
      - taiga

  # Taiga Backend
  taiga-back:
    image: taigaio/taiga-back:latest
    container_name: taiga-back
    environment:
      # PostgreSQL settings
      POSTGRES_DB: taiga
      POSTGRES_USER: taiga
      POSTGRES_PASSWORD: changeme_db_password
      POSTGRES_HOST: taiga-db
      # Taiga settings
      TAIGA_SECRET_KEY: changeme_secret_key_min_32_chars
      TAIGA_SITES_SCHEME: http
      TAIGA_SITES_DOMAIN: localhost:9000
      # Email settings (optional)
      EMAIL_BACKEND: django.core.mail.backends.smtp.EmailBackend
      DEFAULT_FROM_EMAIL: taiga@example.com
      EMAIL_HOST: smtp.gmail.com
      EMAIL_PORT: 587
      EMAIL_HOST_USER: your-email@gmail.com
      EMAIL_HOST_PASSWORD: your-app-password
      EMAIL_USE_TLS: "True"
      # Registration
      PUBLIC_REGISTER_ENABLED: "True"
    depends_on:
      - taiga-db
    volumes:
      - taiga_static_data:/taiga-back/static
      - taiga_media_data:/taiga-back/media
    networks:
      - taiga

  # Taiga Frontend
  taiga-front:
    image: taigaio/taiga-front:latest
    container_name: taiga-front
    environment:
      TAIGA_URL: http://localhost:9000
      TAIGA_WEBSOCKETS_URL: ws://localhost:9000
    ports:
      - "9000:80"
    depends_on:
      - taiga-back
    networks:
      - taiga

  # Taiga Events (WebSocket support)
  taiga-events:
    image: taigaio/taiga-events:latest
    container_name: taiga-events
    environment:
      RABBITMQ_URL: amqp://taiga:changeme_rabbitmq_password@taiga-rabbitmq:5672/taiga
      TAIGA_SECRET_KEY: changeme_secret_key_min_32_chars
    depends_on:
      - taiga-rabbitmq
    networks:
      - taiga

  # RabbitMQ for real-time events
  taiga-rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: taiga-rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: taiga
      RABBITMQ_DEFAULT_PASS: changeme_rabbitmq_password
      RABBITMQ_DEFAULT_VHOST: taiga
    volumes:
      - taiga_rabbitmq_data:/var/lib/rabbitmq
    networks:
      - taiga

volumes:
  taiga_db_data:
  taiga_static_data:
  taiga_media_data:
  taiga_rabbitmq_data:

networks:
  taiga:
    driver: bridge
~~~
Run commands
~~~bash
# 1. Create the docker-compose.yml file
nano docker-compose.yml
# (paste the contents above)

# 2. Change the passwords
# You MUST replace POSTGRES_PASSWORD, TAIGA_SECRET_KEY, and
# RABBITMQ_DEFAULT_PASS with secure values!

# 3. Start Docker Compose
docker-compose up -d

# 4. Check the logs
docker-compose logs -f

# 5. Open in a browser
# http://localhost:9000

# 6. Create the initial admin account
docker-compose exec taiga-back python manage.py createsuperuser
~~~
Initial setup:
1. Log in as administrator using the superuser account you created
2. Create a project: click "New Project" on the dashboard
3. Build the team: invite members and assign roles

### Option 3: Native installation (Ubuntu example)
For a detailed installation guide, see the official documentation: https://docs.taiga.io/
## Main Features

### 1. Project dashboard
~~~
┌─────────────────────────────────────────┐
│ Project Dashboard │
├─────────────────────────────────────────┤
│ ┌──────────┐ ┌──────────┐ ┌────────┐│
│ │ Timeline │ │ Activity │ │ Issues ││
│ │ 📈 │ │ 📋 │ │ 🐛 ││
│ └──────────┘ └──────────┘ └────────┘│
│ │
│ ┌─────────────────────────────────────┐│
│ │ Current Sprint ││
│ │ ━━━━━━━━━━━━━━━━━━━━━ 60% ││
│ │ 12 / 20 tasks completed ││
│ └─────────────────────────────────────┘│
└─────────────────────────────────────────┘
~~~
### 2. Backlog

Epic structure:
~~~
Epic: User Management System
└─ User Story: Sign-up
   ├─ Task: Email validation
   ├─ Task: Password encryption
   └─ Task: Email delivery
└─ User Story: Login
   ├─ Task: JWT implementation
   └─ Task: OAuth integration
~~~
User Story template:
~~~
As a [user type]
I want to [goal]
So that [reason]

Acceptance Criteria:
- [ ] Condition 1
- [ ] Condition 2
- [ ] Condition 3
~~~
### 3. Kanban board
~~~
┌─────────┬─────────────┬─────────────┬──────┐
│ TODO │ IN PROGRESS │ REVIEW │ DONE │
├─────────┼─────────────┼─────────────┼──────┤
│ Task A │ Task D │ Task G │Task J│
│ Task B │ Task E │ Task H │Task K│
│ Task C │ Task F │ Task I │Task L│
│ │ │ │ │
│ WIP: 3 │ WIP: 3/3 │ WIP: 3/2 │ │
└─────────┴─────────────┴─────────────┴──────┘
~~~
### 4. Sprint management

Sprint planning:
1. Pick User Stories from the backlog
2. Assign story points
3. Set the team capacity
4. Start the sprint
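One way to sanity-check a plan produced by those steps is to sum story points against capacity. A toy sketch of a greedy top-of-backlog commitment (function name and the point values are illustrative, echoing the backlog in section 1.2):

```python
def sprint_commitment(stories: dict[str, int], capacity: int) -> tuple[list[str], int]:
    """Greedy sketch: pull stories from the top of the backlog (dicts keep
    insertion order, here treated as priority order) until capacity is hit."""
    committed, used = [], 0
    for name, points in stories.items():
        if used + points <= capacity:
            committed.append(name)
            used += points
    return committed, used

backlog = {"Sign-up": 5, "OAuth login": 8, "Profile": 3, "Task search": 8}
stories, points = sprint_commitment(backlog, capacity=16)
print(stories, points)  # -> ['Sign-up', 'OAuth login', 'Profile'] 16
```

Note that a real planning session is a team conversation, not an algorithm; the sketch only shows the capacity arithmetic.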
Burndown chart:
~~~
Remaining Work
↑
100| ●
   |   ●
 75|     ●        Ideal
   |       ●━━━━
 50|         ●   ●
   |           ●   ●
 25| Actual      ●   ●
   |               ●  ●
  0|______________●__●→
    Day 1  3  5  7  9   Time
~~~
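The data behind a burndown chart is simply the remaining story points after each day, compared with a straight "ideal" line from the total down to zero. A small sketch (the daily numbers are made up):

```python
def burndown(total_points: int, completed_per_day: list[int]) -> list[int]:
    """Remaining story points after each day of the sprint."""
    remaining, series = total_points, [total_points]
    for done in completed_per_day:
        remaining -= done
        series.append(remaining)
    return series

def ideal(total_points: int, days: int) -> list[float]:
    """The straight 'ideal' line from the sprint total down to zero."""
    return [total_points * (1 - d / days) for d in range(days + 1)]

print(burndown(100, [10, 5, 20, 15, 25, 15, 10]))
# -> [100, 90, 85, 65, 50, 25, 10, 0]
```

When the actual series sits above the ideal line, the sprint is behind; below it, ahead.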
### 5. Issue tracking

Issue types:
- 🐛 Bug: bug reports
- ❓ Question: questions
- 💡 Enhancement: improvement proposals

Priorities:
- 🔴 Critical
- 🟠 High
- 🟡 Normal
- 🟢 Low

Status customization:
~~~
New → Accepted → In Progress → Ready for Test → Done
                      ↓
                   Blocked
~~~
### 6. Wiki

Markdown support:
~~~markdown
# Project Overview

## Architecture
- Frontend: React
- Backend: Spring Boot
- Database: PostgreSQL

## Deployment Process
1. Local testing
2. CI pipeline
3. Staging deployment
4. Production deployment
~~~
Page hierarchy:
~~~
Wiki
├── Getting Started
│   ├── Setup Guide
│   └── Development Workflow
├── Architecture
│   ├── System Design
│   └── API Documentation
└── Deployment
    ├── Staging
    └── Production
~~~
## Real-World Use Cases

### Case 1: Startup development team
Organization: fintech startup (15 developers)

Why they adopted it:
- Limited budget
- Fast development cycles
- Adoption of Scrum

How they use it:
- Two-week sprints
- Kanban board review before the daily stand-up
- GitHub commits linked to issues

Results:
- $0/month (self-hosted)
- 30% faster development
- Greater team transparency

### Case 2: Open-source project
Project: community-driven library development

Why they adopted it:
- Open-source ethos
- Managing contributors worldwide
- Public project management

How they use it:
- Public backlog
- Bug tracking via issues
- Documentation in the Wiki

Results:
- 50+ contributors
- A transparent, shared roadmap
- Increased community participation

### Case 3: Educational institution
Organization: university software engineering course

Why they adopted it:
- Gives students hands-on agile experience
- Free to use
- Gentle learning curve

How they use it:
- One Taiga project per team project
- Weekly sprint reviews
- The instructor monitors all projects

Results:
- Better student collaboration skills
- Experience with a real-world tool
- Visibility into project progress
## Pros and Cons

### Pros
1. Cost effectiveness
   - Completely free when self-hosted
   - Cloud plans cheaper than competitors
   - No user limits
2. Simplicity
   - Intuitive UI/UX
   - Low learning curve
   - Fast onboarding
3. Open source
   - Source code can be modified
   - Community contributions
   - No vendor lock-in
4. Data ownership
   - Full control over your data when self-hosted
   - Easier GDPR compliance
   - Security policy under your own management
5. Agile focus
   - Features optimized for Scrum and Kanban
   - No feature bloat
   - Agile best practices built in

### Cons
1. Limited advanced features
   - Little complex workflow automation
   - Limited advanced reporting
   - Constraints on custom fields
2. Plugin ecosystem
   - Fewer plugins than Jira
   - Limited third-party integrations
   - Smaller community
3. Enterprise features
   - Weak large-organization management
   - Limited advanced permission management
   - Limited audit logging
4. Self-hosted maintenance
   - You maintain it yourself
   - Manual upgrades
   - Requires technical know-how
5. Performance
   - Can slow down on very large projects (1000+ issues)
   - Limited real-time collaboration
## Alternative Tools

### Open-source alternatives

1. Plane (https://plane.so)
Features:
- Very modern UI
- Notion-like experience
- Real-time collaboration
- Fast performance

Pros:
- ✅ Beautiful design
- ✅ Fast
- ✅ Actively developed

Cons:
- ❌ Relatively new
- ❌ Some features still incomplete
2. Focalboard (https://www.focalboard.com)
Features:
- Trello/Notion alternative
- Multiple views (board, table, gallery)
- Mattermost integration

Pros:
- ✅ Very flexible
- ✅ Mattermost ecosystem
- ✅ Free for personal use

Cons:
- ❌ Weak Scrum support
- ❌ Limited team collaboration features
3. OpenProject (https://www.openproject.org)
Features:
- Traditional project management
- Gantt charts
- Time tracking
- Cost management

Pros:
- ✅ Feature-rich
- ✅ Enterprise-grade
- ✅ Long-proven stability

Cons:
- ❌ Complex UI
- ❌ Heavyweight
- ❌ More waterfall than agile
4. Redmine (https://www.redmine.org)
Features:
- The oldest open-source PM tool
- Very stable
- Many plugins

Pros:
- ✅ Proven stability
- ✅ Large plugin catalog
- ✅ Big community

Cons:
- ❌ Dated UI
- ❌ Slow performance
- ❌ Not modern
5. WeKan (https://wekan.github.io)
Features:
- Open-source Trello clone
- Pure Kanban
- Very lightweight

Pros:
- ✅ Simple
- ✅ Quick to set up
- ✅ Lightweight

Cons:
- ❌ No Scrum support
- ❌ Limited feature set
- ❌ Basics only
### Commercial tool comparison

| Tool | Price | Highlights | Best for |
|---|---|---|---|
| Jira | $7.75/user/month | Enterprise-grade, feature-rich | Large companies, complex teams |
| Trello | $5/user/month | Simple Kanban | Small teams, non-dev teams |
| Asana | $10.99/user/month | Task management, collaboration | General business |
| Monday.com | $8/user/month | Visual workflows | Marketing, creative teams |
| ClickUp | $5/user/month | All-in-one | Multi-purpose teams |
## Conclusion: When Should You Use Taiga?

### ✅ Strongly recommended when
- Pure agile teams
  - Using only Scrum or Kanban
  - No need for complex workflows
  - Speed matters
- Limited budget
  - Early-stage startups
  - Non-profit organizations
  - Educational institutions
- Open-source preference
  - Data ownership matters
  - Customization is needed
  - Avoiding vendor lock-in
- Small to mid-sized teams
  - Teams of 5-50
  - A single project or a handful of projects
  - Simple organizational structure

### ❌ Not recommended when
- Large enterprises
  - 100+ people
  - Complex organizational structure
  - Advanced permission management required
- Atlassian ecosystem users
  - Confluence/Bitbucket integration needed
  - Large amounts of existing Jira data
  - Teams already fluent in Atlassian tools
- Complex workflows
  - Multi-stage approval processes
  - Complex automation rules
  - Many custom fields required
- Broad integration needs
  - Dozens of third-party integrations
  - Special-purpose plugins
  - Heavy API usage
Written: 2026-01-11