kootaeng2 committed on
Commit
e850536
ยท
0 Parent(s):

Initial commit with final, clean project files

.gitattributes ADDED
@@ -0,0 +1,4 @@
+ korean-emotion-classifier-final/model.safetensors filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
.github/workflows/sync-to-hub.yml ADDED
@@ -0,0 +1,31 @@
+ name: Sync to Hugging Face hub # name of this Action
+
+ on:
+   push:
+     branches: [main] # runs whenever a push event occurs on the 'main' branch on GitHub
+
+ jobs:
+   sync-to-hub:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v3
+         with:
+           fetch-depth: 0
+           lfs: true
+
+       - name: Push to hub
+         env:
+           HF_TOKEN: ${{ secrets.HF_TOKEN }} # reads the HF_TOKEN value saved to GitHub in step 1
+         run: |
+           # Set the remote URL that contains the username (koons) and the Space name (emotion-chatbot).
+           # Configure user info in case git-lfs is used.
+           git config --global user.email "[email protected]"
+           git config --global user.name "Hugging Face"
+
+           # Add the remote repository under the name 'huggingface'.
+           # URL format: https://[username]:[token]@[Hugging Face address]
+           git remote add huggingface https://koons:${HF_TOKEN}@huggingface.co/spaces/koons/emotion-chatbot
+
+           # Force-push the current local HEAD (latest commit) to the main branch of the huggingface remote.
+           # The -f option overwrites the Space's existing history, so the Space always mirrors the state of GitHub.
+           git push huggingface HEAD:main -f
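For a one-off sync without CI, roughly the same push can be done from Python with the `huggingface_hub` client. This is a hedged sketch: the Space id `koons/emotion-chatbot` is taken from the workflow above, and the token is assumed to be available as an `HF_TOKEN` environment variable.

```python
# Sketch: push the working tree to the Space with huggingface_hub instead of raw git.
# Assumes `pip install huggingface_hub` and an HF_TOKEN environment variable.
import os
from huggingface_hub import HfApi

api = HfApi(token=os.environ["HF_TOKEN"])
api.upload_folder(
    folder_path=".",                    # project root
    repo_id="koons/emotion-chatbot",    # Space id used in the workflow above
    repo_type="space",
)
```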
.gitignore ADDED
@@ -0,0 +1,19 @@
+ # Virtual environment folders
+ venv/
+ .venv/
+
+ # Python cache files
+ __pycache__/
+ *.pyc
+
+ # VS Code settings folder
+ .vscode/
+
+ # Training log folder
+ logs/
+
+ # Folder for the trained model
+ results/
+
+ # Miscellaneous OS files
+ .DS_Store
.hfignore ADDED
File without changes
Dockerfile ADDED
@@ -0,0 +1,18 @@
+ # 1. Choose the base image (Python 3.10)
+ FROM python:3.10-slim
+
+ # 2. Set the working directory
+ WORKDIR /app
+
+ # 3. Install the required libraries
+ COPY requirements.txt requirements.txt
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # 4. Copy the entire project code
+ COPY . .
+
+ # 5. Expose the port (7860) that Hugging Face Spaces uses
+ EXPOSE 7860
+
+ # 6. Final run command (serve app.py inside the src folder with gunicorn)
+ CMD ["gunicorn", "--bind", "0.0.0.0:7860", "src.app:app"]
README.md ADDED
@@ -0,0 +1,111 @@
+ # 🤖 Diary-Based Emotion Analysis and Content Recommendation Web Application
+
+ > A web service that analyzes a user's diary entries with AI to detect their emotion and then recommends tailored content (movies, music, books) according to one of two choices: 'accept the emotion' or 'shift the mood'.
+
+ <br>
+
+ ![Python](https://img.shields.io/badge/Python-3.10-3776AB?style=for-the-badge&logo=python)
+ ![PyTorch](https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?style=for-the-badge&logo=PyTorch&logoColor=white)
+ ![Flask](https://img.shields.io/badge/Flask-000000?style=for-the-badge&logo=flask&logoColor=white)
+ ![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Transformers-yellow?style=for-the-badge)
+
+ <br>
+
+ ## ✨ Key Features
+
+ * **AI-based emotion analysis:** Hugging Face's `klue/roberta-base` model is fine-tuned on AI Hub's 'Emotional Dialogue Corpus (감성대화 말뭉치)', giving highly accurate emotion predictions for Korean text.
+ * **Situation-aware recommendations:** For the detected emotion, the app offers two paths, 'when you want to sit with the feeling (acceptance)' and 'when you want to move past it (shift)', and recommends different content depending on the user's current need.
+ * **Diary history:** Each diary entry and the AI's analysis result are stored in the browser's `localStorage`, so users can revisit past records at any time and follow how their emotions change.
+ * **User-friendly web interface:** A Flask web server combined with dynamic JavaScript provides an intuitive UI/UX in which anyone can easily record their feelings and receive recommendations.
+
+ <br>
+
+ ## 🖥️ Demo
+
+ Temporary homepage: https://kootaeng2.github.io/Emotion_Chatbot_project/templates/emotion_homepage.html
+
+ <br>
+
+ ## ⚙️ Tech Stack
+
+ | Category | Technology |
+ | :--- | :--- |
+ | **Backend** | Flask |
+ | **Frontend**| HTML, CSS, JavaScript |
+ | **AI / Data**| Python 3.10, PyTorch, Hugging Face Transformers, Scikit-learn, Pandas |
+ | **AI Model**| `klue/roberta-base` (Fine-tuned) |
+
+ <br>
+
+ ## 📂 Folder Structure
+
+ ```
+ sentiment_analysis_project/
+ ├── src/
+ │   ├── app.py              # web server entry point
+ │   ├── chatbot.py          # terminal chatbot
+ │   ├── emotion_engine.py   # emotion analysis engine module
+ │   └── recommender.py      # recommendation logic module
+ ├── scripts/
+ │   └── train_model.py      # AI model training script
+ ├── notebooks/
+ │   └── 1_explore_data.py   # data exploration and visualization notebook
+ ├── data/                   # raw dataset
+ ├── results/                # trained model files (not tracked by Git)
+ ├── templates/              # HTML files
+ ├── static/                 # CSS and client-side JS files
+ ├── .gitignore              # files ignored by Git
+ ├── README.md               # project documentation
+ └── requirements.txt        # list of required libraries
+ ```
+ <br>
+
+ ## 🚀 Installation & Run
+
+ **1. Clone the project**
+ ```bash
+ git clone https://github.com/kootaeng2/Emotion_Chatbot_project.git
+ cd Emotion_Chatbot_project
+ ```
+
+ **2. Create and activate a virtual environment (Python 3.10)**
+ ```bash
+ # Create a virtual environment pinned to Python 3.10
+ py -3.10 -m venv venv
+ # Activate the virtual environment
+ .\venv\Scripts\Activate
+ ```
+
+ **3. Install the required libraries**
+ ```bash
+ # Install PyTorch (CUDA 11.8 build) first.
+ pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
+ # Install the remaining libraries.
+ pip install -r requirements.txt
+ ```
+
+ **4. Train the AI model (required once, before first use)**
+ > **Note:** Before this step, download the 'Emotional Dialogue Corpus' source dataset from AI Hub and place it in the `data` folder. Training takes about 30-40 minutes on an RTX 4060 GPU.
+
+ ```bash
+ python scripts/train_model.py
+ ```
+
+ **5. Run the web application**
+ ```bash
+ python src/app.py
+ ```
+ * Once the server is running, open a web browser and go to `http://127.0.0.1:5000`.
+
+ <br>
+
+ ## 📊 Model Performance
+
+ The final model was evaluated on the validation set of the 'Emotional Dialogue Corpus'; the results are as follows.
+
+ | Metric | Score |
+ | :--- | :---: |
+ | **Accuracy** | **85.3%** |
+ | **F1-Score** (Weighted)| **0.852** |
+
+
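As a quick sanity check of the model described above, a minimal inference sketch might look like the following. This is hedged: the local folder name comes from `scripts/save_complete_model.py`, and the Hub repo id is the one assumed in `scripts/upload_model.py`; the printed output is illustrative only.

```python
# Minimal inference sketch for the fine-tuned classifier.
# MODEL_PATH is an assumption: either the local folder written by
# scripts/save_complete_model.py or a Hub id like "taehoon222/korean-emotion-classifier".
from transformers import pipeline

MODEL_PATH = "./korean-emotion-classifier-final"

classifier = pipeline("text-classification", model=MODEL_PATH)
print(classifier("오늘은 오랜만에 친구를 만나서 정말 즐거운 하루였다.")[0])
# e.g. {'label': '기쁨', 'score': 0.97}  <- illustrative output only
```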
notebooks/explore_data.py ADDED
@@ -0,0 +1,126 @@
+ # Exploratory graphs for the dataset survey/report
+
+ import pandas as pd
+ import json
+ import re
+
+ # Set the file path
+ file_path = './data/'
+
+ # Load the training-label JSON file
+ with open(file_path + 'training-label.json', 'r', encoding='utf-8') as file:
+     training_data_raw = json.load(file)
+
+ # Extract only the required fields into a list
+ extracted_data = []
+
+ # The data is a list, so it can be iterated over directly.
+ for dialogue in training_data_raw:
+     try:
+         # 1. Extract the emotion label (the emotion key lives inside profile)
+         emotion_type = dialogue['profile']['emotion']['type']
+
+         # 2. Extract the dialogue text (content lives inside the talk key)
+         dialogue_content = dialogue['talk']['content']
+
+         # 3. Keep only the values (the utterance texts) of the dictionary.
+         texts = list(dialogue_content.values())
+
+         # 4. Join all utterances into a single string.
+         #    Empty strings are dropped before joining.
+         full_text = " ".join([text for text in texts if text.strip()])
+
+         # 5. Add the record only when both the joined text and the emotion label are valid.
+         if full_text and emotion_type:
+             extracted_data.append({'text': full_text, 'emotion': emotion_type})
+
+     except KeyError:
+         # Skip entries missing keys such as 'profile', 'emotion', 'talk', or 'content'.
+         continue
+
+ # Build a new DataFrame
+ df_train = pd.DataFrame(extracted_data)
+
+ # 6. Inspect the combined data
+ print("--- First 5 rows of the extracted training DataFrame ---")
+ print(df_train.head())
+
+ print("\n--- DataFrame shape ---")
+ print(f"Training data: {df_train.shape}")
+
+ # Continue below the training-data loading code above.
+ # ------------------------------------------------------------------
+
+ # 1. Load the validation-label JSON file
+ with open(file_path + 'validation-label.json', 'r', encoding='utf-8') as file:
+     validation_data_raw = json.load(file)
+
+ # 2. Extract the validation data
+ extracted_val_data = []
+
+ for dialogue in validation_data_raw:
+     try:
+         emotion_type = dialogue['profile']['emotion']['type']
+         dialogue_content = dialogue['talk']['content']
+         texts = list(dialogue_content.values())
+         full_text = " ".join([text for text in texts if text.strip()])
+
+         if full_text and emotion_type:
+             extracted_val_data.append({'text': full_text, 'emotion': emotion_type})
+
+     except KeyError:
+         continue
+
+ # 3. Build a new DataFrame
+ df_val = pd.DataFrame(extracted_val_data)
+
+ # 4. Inspect the validation data
+ print("\n--- First 5 rows of the extracted validation DataFrame ---")
+ print(df_val.head())
+
+ print("\n--- Validation DataFrame shape ---")
+ print(f"Validation data: {df_val.shape}")
+
+ # Continue below the existing code at the bottom of main.py.
+ # -----------------------------------------------------------
+ # --- [Phase 1] Data exploration and preprocessing ---
+ # -----------------------------------------------------------
+ import matplotlib.pyplot as plt
+ import seaborn as sns
+
+ # 1. Data exploration and visualization
+ print("\n--- [Phase 1-1] Starting data exploration and visualization ---")
+
+ # Korean font setup (Windows: Malgun Gothic, Mac: AppleGothic)
+ plt.rcParams['font.family'] = 'Malgun Gothic'
+ plt.rcParams['axes.unicode_minus'] = False  # keep minus signs from rendering as broken glyphs
+
+ # Check the emotion distribution of the training data
+ print("\n--- Training data emotion distribution ---")
+ print(df_train['emotion'].value_counts())
+
+ # Visualize the emotion distribution
+ plt.figure(figsize=(10, 6))
+ sns.countplot(data=df_train, y='emotion', order=df_train['emotion'].value_counts().index)
+ plt.title('훈련 데이터 감정 분포 시각화', fontsize=15)
+ plt.xlabel('개수', fontsize=12)
+ plt.ylabel('감정', fontsize=12)
+ plt.grid(axis='x', linestyle='--', alpha=0.7)
+ plt.show()  # show the chart window
+
+ print("\nVisualization done. Close the chart window to continue to the next step.")
+
+ # 2. Text cleaning
+ print("\n--- [Phase 1-2] Starting text cleaning ---")
+ # The re module was already imported above.
+
+ def clean_text(text):
+     # Use a regular expression to remove every character except Korean, English letters, digits, and spaces
+     return re.sub(r'[^가-힣a-zA-Z0-9 ]', '', text)
+
+ # Apply the cleaning function to the training/validation data
+ df_train['cleaned_text'] = df_train['text'].apply(clean_text)
+ df_val['cleaned_text'] = df_val['text'].apply(clean_text)
+
+ print("Text cleaning complete.")
+ print(df_train[['text', 'cleaned_text']].head())
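For readers without access to the AI Hub download, the record shape that this parsing code assumes looks roughly like the sketch below. Only the field names (`profile`, `emotion`, `type`, `talk`, `content`) come from the code above; the utterance keys and all values are invented for illustration.

```python
# Illustrative record from training-label.json (values and utterance keys invented;
# the code above only relies on the nested field names and uses .values()).
sample_dialogue = {
    "profile": {"emotion": {"type": "기쁨"}},
    "talk": {"content": {"HS01": "오늘 드디어 시험이 끝났어!", "SS01": "정말 홀가분하겠다."}},
}

# The loader joins the non-empty values of talk.content into one string and
# pairs it with profile.emotion.type as the label.
text = " ".join(v for v in sample_dialogue["talk"]["content"].values() if v.strip())
label = sample_dialogue["profile"]["emotion"]["type"]
print({"text": text, "emotion": label})
```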
requirements.txt ADDED
Binary file (1.61 kB).
 
scripts/save_complete_model.py ADDED
@@ -0,0 +1,20 @@
+ # save_complete_model.py
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ # Path where the existing training output (checkpoint) is stored
+ checkpoint_path = "./results/checkpoint-9681"
+
+ # Name of the new folder in which to save the 'complete model'
+ output_dir = "./korean-emotion-classifier-final"
+
+ print(f"Loading the model and tokenizer from '{checkpoint_path}'...")
+ model = AutoModelForSequenceClassification.from_pretrained(checkpoint_path)
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)
+ print("Loading complete.")
+
+ print(f"Saving the complete model and tokenizer to '{output_dir}'...")
+ model.save_pretrained(output_dir)
+ tokenizer.save_pretrained(output_dir)
+
+ print("Done! Check the 'korean-emotion-classifier-final' folder.")
+ print("Move the files in this folder into the 'my-local-model' folder.")
scripts/train_model.py ADDED
@@ -0,0 +1,149 @@
+ # train_model.py
+ # Script that trains the AI model; reusable, do not delete
+
+ import pandas as pd
+ import json
+ import re
+ import sys
+ import transformers
+ import torch
+
+ from transformers import AutoTokenizer
+
+ # --- 1. Data loading and preprocessing ---
+
+ print("--- [Phase 1] Starting data loading and preprocessing ---")
+ # Set the file path
+ file_path = './data/'
+
+ # Load the training/validation data (same as before)
+ with open(file_path + 'training-label.json', 'r', encoding='utf-8') as file:
+     training_data_raw = json.load(file)
+ with open(file_path + 'validation-label.json', 'r', encoding='utf-8') as file:
+     validation_data_raw = json.load(file)
+
+ # DataFrame builder (wrapped in a function to keep the code tidy)
+ def create_dataframe(data_raw):
+     extracted_data = []
+     for dialogue in data_raw:
+         try:
+             emotion_type = dialogue['profile']['emotion']['type']
+             dialogue_content = dialogue['talk']['content']
+             full_text = " ".join(list(dialogue_content.values()))
+             if full_text and emotion_type:
+                 extracted_data.append({'text': full_text, 'emotion': emotion_type})
+         except KeyError:
+             continue
+     return pd.DataFrame(extracted_data)
+
+ df_train = create_dataframe(training_data_raw)
+ df_val = create_dataframe(validation_data_raw)
+
+ # Text cleaning
+ def clean_text(text):
+     return re.sub(r'[^가-힣a-zA-Z0-9 ]', '', text)
+
+ df_train['cleaned_text'] = df_train['text'].apply(clean_text)
+ df_val['cleaned_text'] = df_val['text'].apply(clean_text)
+ print("✅ Data loading and preprocessing complete!")
+
+
+ # --- 2. AI modelling preparation ---
+ print("\n--- [Phase 2] Starting AI modelling preparation ---")
+ # Load the model and tokenizer
+ MODEL_NAME = "klue/roberta-base"
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
+
+ # Tokenize the text
+ train_tokenized = tokenizer(list(df_train['cleaned_text']), return_tensors="pt", max_length=128, padding=True, truncation=True)
+ val_tokenized = tokenizer(list(df_val['cleaned_text']), return_tensors="pt", max_length=128, padding=True, truncation=True)
+
+ # Encode the labels
+ unique_labels = sorted(df_train['emotion'].unique())
+ label_to_id = {label: id for id, label in enumerate(unique_labels)}
+ id_to_label = {id: label for label, id in label_to_id.items()}
+ df_train['label'] = df_train['emotion'].map(label_to_id)
+ df_val['label'] = df_val['emotion'].map(label_to_id)
+ print("✅ Tokenization and label encoding complete!")
+ print("Everything is now ready for model training.")
+
+
+ # Replace the old [Phase 3] code with the block below.
+ # -----------------------------------------------------------
+ # --- [Phase 3] Model training and evaluation (minimal version) ---
+ # -----------------------------------------------------------
+ import torch
+ from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
+ from sklearn.metrics import accuracy_score, precision_recall_fscore_support
+
+ print("\n--- [Phase 3] Starting model training and evaluation ---")
+
+ # 1. Define the PyTorch Dataset class (same as before)
+ class EmotionDataset(torch.utils.data.Dataset):
+     def __init__(self, encodings, labels):
+         self.encodings = encodings
+         self.labels = labels
+     def __getitem__(self, idx):
+         item = {key: val[idx].clone().detach() for key, val in self.encodings.items()}
+         item['labels'] = torch.tensor(self.labels[idx])
+         return item
+     def __len__(self):
+         return len(self.labels)
+
+ train_dataset = EmotionDataset(train_tokenized, df_train['label'].tolist())
+ val_dataset = EmotionDataset(val_tokenized, df_val['label'].tolist())
+ print("✅ PyTorch datasets created.")
+
+ # 2. Load the AI model (same as before)
+ model = AutoModelForSequenceClassification.from_pretrained(
+     MODEL_NAME,
+     num_labels=len(unique_labels),
+     id2label=id_to_label,
+     label2id=label_to_id
+ )
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+ model.to(device)
+ print(f"✅ Model loaded! It will run on {device}.")
+
+
+ # 3. Define the metric function used to evaluate the model (fixed)
+ def compute_metrics(pred):
+     labels = pred.label_ids
+     # This is the part that was fixed.
+     preds = pred.predictions.argmax(-1)
+
+     precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted', zero_division=0)
+     acc = accuracy_score(labels, preds)
+     return {'accuracy': acc, 'f1': f1, 'precision': precision, 'recall': recall}
+
+ # 4. Define the TrainingArguments for training (all optional settings removed)
+ training_args = TrainingArguments(
+     output_dir='./results',            # where the model is saved (required)
+     num_train_epochs=3,                # number of training epochs
+     per_device_train_batch_size=16,    # training batch size
+     # All other evaluation/saving options are removed.
+ )
+
+ # ---!!! Key fix 2 !!!---
+ # 5. Define the Trainer (evaluation features disabled)
+ trainer = Trainer(
+     model=model,
+     args=training_args,
+     train_dataset=train_dataset,
+     # No evaluation runs during training, so the options below are left out.
+     # eval_dataset=val_dataset,
+     # compute_metrics=compute_metrics
+ )
+
+ # 6. Start training!
+ print("\n🔥 Starting AI model training...")
+ trainer.train()
+ print("\n🎉 Model training complete!")
+
+ # 7. The final evaluation runs 'separately' after training ends
+ print("\n--- Final model evaluation ---")
+ # Pass the held-out evaluation dataset directly to evaluate().
+ final_evaluation = trainer.evaluate(eval_dataset=val_dataset)
+ print(final_evaluation)
+
+ print("\nAll steps finished successfully! Check the trained model in the results folder.")
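The script above leaves the trained weights under `./results/checkpoint-*`, and `scripts/save_complete_model.py` later repackages one checkpoint. If you prefer to export directly at the end of training, a small addition along these lines would do it (a sketch meant to be appended after `trainer.evaluate()`; the output folder name mirrors the one used elsewhere in the repo):

```python
# Optional export right after trainer.train() / trainer.evaluate():
# writes the in-memory model plus tokenizer to the folder the web app expects.
final_dir = "./korean-emotion-classifier-final"   # same name used by save_complete_model.py
trainer.save_model(final_dir)                     # saves the model weights and config
tokenizer.save_pretrained(final_dir)              # saves the matching tokenizer files
```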
scripts/upload_model.py ADDED
@@ -0,0 +1,30 @@
+ # upload_model.py
+
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ # ---!!! 1. Update this part with your final details !!!---
+ YOUR_HF_ID = "taehoon222"  # your Hugging Face ID
+ YOUR_MODEL_NAME = "korean-emotion-classifier"  # suggested model name (change it if you like)
+ # ----------------------------------------------------
+
+ # 2. Path to the fully trained model stored on this machine
+ LOCAL_MODEL_PATH = 'E:/sentiment_analysis_project/results/checkpoint-9681'
+
+ print(f"Loading the model from '{LOCAL_MODEL_PATH}'...")
+ try:
+     tokenizer = AutoTokenizer.from_pretrained(LOCAL_MODEL_PATH)
+     model = AutoModelForSequenceClassification.from_pretrained(LOCAL_MODEL_PATH)
+     print("✅ Local model loaded successfully!")
+ except Exception as e:
+     print(f"❌ Failed to load the local model: {e}")
+     exit()
+
+ # 3. Upload to the Hugging Face Hub.
+ NEW_REPO_ID = f"{YOUR_HF_ID}/{YOUR_MODEL_NAME}"
+ print(f"Starting the upload to the Hub as '{NEW_REPO_ID}'...")
+ try:
+     tokenizer.push_to_hub(NEW_REPO_ID)
+     model.push_to_hub(NEW_REPO_ID)
+     print("\n🎉🎉🎉 Model upload succeeded! 🎉🎉🎉")
+ except Exception as e:
+     print(f"\n❌ An error occurred during the upload: {e}")
src/app.py ADDED
@@ -0,0 +1,68 @@
+ # app.py
+
+ from flask import Flask, render_template, request, jsonify
+ # Import both helper functions from emotion_engine.py.
+ from emotion_engine import load_emotion_classifier, predict_emotion
+ # Import the Recommender 'class' (capital R) from recommender.py.
+ from recommender import Recommender
+ import random
+
+ app = Flask(__name__)
+
+ print("Preparing the AI chatbot server...")
+ # Load the AI engine and the recommender exactly once when the server starts.
+ emotion_classifier = load_emotion_classifier()
+ recommender = Recommender()
+ # Dictionary that maps each emotion to the emoji sent to the web page.
+ emotion_emoji_map = {
+     '기쁨': '😄', '행복': '😊', '사랑': '❤️',
+     '불안': '😟', '슬픔': '😢', '상처': '💔',
+     '분노': '😠', '혐오': '🤢', '짜증': '😤',
+     '놀람': '😮',
+     '중립': '😐',
+ }
+ print("✅ AI chatbot server is ready.")
+
+ @app.route("/")
+ def home():
+     """Serves the main page shown when the browser first connects."""
+     # Render the emotion_homepage.html file in the templates folder.
+     return render_template("emotion_homepage.html")
+
+ @app.route("/api/recommend", methods=["POST"])
+ def api_recommend():
+     """Handles the 'get recommendation' button click from the web page."""
+     # 1. Read the diary text the user entered on the web page.
+     user_diary = request.json.get("diary")
+     if not user_diary:
+         return jsonify({"error": "일기 내용이 없습니다."}), 400
+
+     # 2. Predict the emotion with emotion_engine.
+     predicted_emotion = predict_emotion(emotion_classifier, user_diary)
+
+     # 3. Get both the 'acceptance' and 'shift' recommendations from the recommender.
+     accept_recs = recommender.recommend(predicted_emotion, "수용")
+     change_recs = recommender.recommend(predicted_emotion, "전환")
+
+     # 4. Pick one item at random from each list (guarding against empty results).
+     accept_choice = random.choice(accept_recs) if accept_recs else "추천 없음"
+     change_choice = random.choice(change_recs) if change_recs else "추천 없음"
+
+     # 5. Compose the final text shown on the web page.
+     recommendation_text = (
+         f"<b>[ 이 감정을 더 깊이 느끼고 싶다면... (수용) ]</b><br>"
+         f"• {accept_choice}<br><br>"
+         f"<b>[ 이 감정에서 벗어나고 싶다면... (전환) ]</b><br>"
+         f"• {change_choice}"
+     )
+
+     # 6. Return the final result to the web page as JSON.
+     response_data = {
+         "emotion": predicted_emotion,
+         "emoji": emotion_emoji_map.get(predicted_emotion, '🤔'),
+         "recommendation": recommendation_text
+     }
+     return jsonify(response_data)
+
+ if __name__ == "__main__":
+     app.run(debug=True)
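A quick way to exercise the `/api/recommend` route above without the browser UI (a sketch; it assumes the Flask dev server is running locally on its default port and that `requests` is installed):

```python
# POST one diary entry to the running server and print the JSON response.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/api/recommend",
    json={"diary": "오늘 발표를 망쳐서 하루 종일 우울했다."},
)
resp.raise_for_status()
print(resp.json())   # keys: 'emotion', 'emoji', 'recommendation'
```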
src/emotion_engine.py ADDED
@@ -0,0 +1,46 @@
+ # emotion_engine.py
+
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
+ import os
+
+ def load_emotion_classifier():
+     # Get the directory that contains this script file.
+     base_path = os.path.dirname(os.path.abspath(__file__))
+
+     # Build the absolute path to the model folder.
+     MODEL_PATH = os.path.join(base_path, "korean-emotion-classifier-final")
+
+     # Check that the path is a local directory.
+     if not os.path.isdir(MODEL_PATH):
+         print(f"❌ Error: no model folder exists at the given path '{MODEL_PATH}'.")
+         return None
+
+     print(f"--- Final model path: [{MODEL_PATH}] ---")
+     print(f"Loading the model directly from the local absolute path '{MODEL_PATH}'...")
+
+     try:
+         # 1. Pass the absolute path directly to from_pretrained().
+         # 2. `local_files_only=True` is not needed; the library detects a local path automatically.
+         tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
+         model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
+
+         print("✅ Local model files loaded successfully!")
+
+     except Exception as e:
+         print(f"❌ Error while loading the model: {e}")
+         # Print the exact cause of the failure.
+         print(f"Detailed error message: {e}")
+         return None
+
+     device = 0 if torch.cuda.is_available() else -1
+     emotion_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, device=device)
+
+     return emotion_classifier
+
+ # predict_emotion stays unchanged.
+ def predict_emotion(classifier, text):
+     if not text or not text.strip(): return "내용 없음"
+     if classifier is None: return "오류: 감정 분석 엔진 준비 안됨."
+     result = classifier(text)
+     return result[0]['label']
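For use outside the Flask app (for example, a terminal check), the module can be driven directly. A brief sketch, assuming it is run with `src/` on the import path and the model folder in place:

```python
# Standalone check of the engine: load once, then classify a few lines.
from emotion_engine import load_emotion_classifier, predict_emotion

clf = load_emotion_classifier()   # returns None if src/korean-emotion-classifier-final is missing
for line in ["요즘 잠을 잘 못 자서 너무 불안하다.", "오늘은 정말 행복한 하루였다."]:
    print(line, "->", predict_emotion(clf, line))
```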
src/recommender.py ADDED
@@ -0,0 +1,44 @@
+ # recommender.py
+
+ class Recommender:
+     """
+     Recommends content based on the detected emotion and the user's choice.
+     """
+     def __init__(self):
+         self.recommendation_db = {
+             '기쁨': {
+                 '수용': ["영화: 월터의 상상은 현실이 된다", "음악: Pharrell Williams - Happy", "책: 창문 넘어 도망친 100세 노인"],
+                 '전환': ["영화: 쇼생크 탈출", "음악: 이루마 - River Flows In You"]
+             },
+             '행복': {
+                 '수용': ["영화: 비긴 어게인", "음악: 쿨 - All for You", "책: 꾸뻬씨의 행복여행"],
+                 '전환': ["영화: 포레스트 검프", "음악: 토이 - 좋은 사람"]
+             },
+             '불안': {
+                 '수용': ["영화: 인사이드 아웃", "음악: 위로가 되는 연주곡 플레이리스트", "책: 미움받을 용기"],
+                 '전환': ["영화: 극한직업", "음악: Maroon 5 - Moves Like Jagger"]
+             },
+             '분노': {
+                 '수용': ["영화: 존 윅", "음악: 람슈타인 - Du Hast"],
+                 '전환': ["영화: 리틀 포레스트", "음악: 노라 존스 - Don't Know Why"]
+             },
+             '슬픔': {
+                 '수용': ["영화: 이터널 선샤인", "음악: 박효신 - 눈의 꽃", "책: 1리터의 눈물"],
+                 '전환': ["영화: 월-E", "음악: 거북이 - 비행기"]
+             },
+             '상처': {
+                 '수용': ["영화: 캐스트 어웨이", "음악: 김광석 - 서른 즈음에", "책: 죽고 싶지만 떡볶이는 먹고 싶어"],
+                 '전환': ["영화: 글러브 (승패를 떠난 야구의 순수한 열정과 감동을 느껴보세요)", "음악: 옥상달빛 - 수고했어, 오늘도"]
+             },
+             '놀람': {
+                 '수용': ["영화: 식스 센스", "음악: 박진영 - 어머님이 누구니"],
+                 '전환': ["음악: Bach - Air on G String", "책: 고요할수록 밝아지는 것들"]
+             },
+             '중립': {
+                 '수용': ["영화: 패터슨", "음악: 잔잔한 Lo-fi 플레이리스트", "책: 보통의 존재"],
+                 '전환': ["영화: 스파이더맨: 뉴 유니버스", "음악: Queen - Don't Stop Me Now"]
+             },
+         }
+
+     def recommend(self, emotion: str, choice: str) -> list:
+         return self.recommendation_db.get(emotion, {}).get(choice, ["😥 아쉽지만, 아직 준비된 추천이 없어요."])
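A short usage sketch of the class above, mirroring how `src/app.py` calls it (the commented outputs come from the table defined in `__init__`; the last lookup uses a deliberately unknown emotion to show the fallback):

```python
# Look up both branches for one emotion, plus the fallback for an unknown key.
rec = Recommender()
print(rec.recommend('슬픔', '전환'))   # ["영화: 월-E", "음악: 거북이 - 비행기"]
print(rec.recommend('기쁨', '수용'))   # three entries: movie, music, book
print(rec.recommend('권태', '수용'))   # unknown emotion -> default "no recommendation yet" message
```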
templates/emotion_homepage.html ADDED
@@ -0,0 +1,229 @@
+ <!-- Homepage -->
+
+ <!DOCTYPE html>
+ <html lang="ko">
+ <head>
+     <meta charset="UTF-8">
+     <title> 일기 기반 추천 웹페이지</title>
+     <style>
+         body {
+             font-family: "Spoqa Han Sans Neo", Arial, sans-serif;
+             padding: 50px;
+             background: linear-gradient(135deg, #ece9f7, #cacae0); /* default background */
+             transition: background 0.5s ease;
+         }
+         .container {
+             background: white;
+             border-radius: 20px;
+             padding: 30px;
+             box-shadow: 0 8px 32px #0001;
+             max-width: 600px;
+             margin: 40px auto;
+         }
+         h1 { color: #65e5c9; margin-bottom: 18px; }
+         textarea {
+             width: 100%;
+             border-radius: 8px;
+             border: 1px solid #d0d0e0;
+             padding: 16px;
+             font-size: 18px;
+             margin-bottom: 12px;
+             background: #f7f7fb;
+             resize: vertical;
+         }
+         button {
+             background: #6598e5;
+             color: white;
+             padding: 12px 24px;
+             border-radius: 8px;
+             border: none;
+             font-size: 18px;
+             cursor: pointer;
+             transition: 0.2s;
+             margin-right: 8px;
+         }
+         button:hover { background: #5540a3; }
+         #result {
+             margin-top: 24px;
+             font-size: 17px;
+             color: #444a70;
+             background: #f2f2f7;
+             border-radius: 8px;
+             padding: 18px;
+             box-shadow: 0 2px 8px #cfcfe0;
+             min-height: 40px;
+         }
+         .history-title {
+             margin: 32px 0 12px 0;
+             color: #8855b0;
+             font-size: 20px;
+         }
+         #history {
+             background: #fff9;
+             border-radius: 10px;
+             padding: 14px 12px;
+             font-size: 15px;
+             min-height: 40px;
+             box-shadow: 0 2px 8px #edebf5;
+             margin-bottom: 15px;
+         }
+         .history-item {
+             margin-bottom: 8px;
+             border-bottom: 1px solid #eee;
+             padding-bottom: 6px;
+         }
+         .history-date {
+             color: #b2a4d4;
+             font-size: 13px;
+             margin-bottom: 3px;
+             display: block;
+         }
+         .clear-btn {
+             background: #e25b66;
+             color: white;
+             font-size: 14px;
+             padding: 7px 13px;
+             border-radius: 6px;
+             border: none;
+             cursor: pointer;
+             float: right;
+             transition: 0.2s;
+         }
+         .clear-btn:hover { background: #c73442; }
+         /* theme-switch button styles */
+         .theme-buttons {
+             text-align: right;
+             margin-bottom: 15px;
+         }
+         .theme-buttons button {
+             font-size: 14px;
+             padding: 5px 10px;
+             margin-left: 5px;
+         }
+     </style>
+ </head>
+ <body>
+     <div class="container">
+
+         <!-- theme-switch buttons -->
+         <div class="theme-buttons">
+             <button onclick="changeTheme('light')">🌞 밝은 모드</button>
+             <button onclick="changeTheme('dark')">🌙 어두운 모드</button>
+             <button onclick="changeTheme('pastel')">🎨 파스텔 모드</button>
+         </div>
+
+         <h1>오늘의 일기 입력</h1>
+         <textarea id="diary" placeholder="오늘 있었던 일을 자유롭게 입력해 보세요."></textarea><br>
+         <button onclick="recommend()">추천 받기</button>
+         <div id="result"></div>
+
+         <div class="history-title">
+             나의 일기 히스토리
+             <button class="clear-btn" onclick="clearHistory()">모두 지우기</button>
+         </div>
+         <div id="history"></div>
+     </div>
+
+     <script>
+         // Theme switcher + persist the choice in localStorage
+         function changeTheme(mode) {
+             if (mode === 'light') {
+                 document.body.style.background = 'linear-gradient(135deg, #ffffff, #f0f0f0)';
+             }
+             else if (mode === 'dark') {
+                 document.body.style.background = 'linear-gradient(135deg, #2c2c3e, #1a1a28)';
+             }
+             else if (mode === 'pastel') {
+                 document.body.style.background = 'linear-gradient(135deg, #ffefd5, #ffe4e1)';
+             }
+             localStorage.setItem('selectedTheme', mode); // remember the selected theme
+         }
+
+         // Restore the saved theme when the page loads
+         window.onload = function() {
+             const savedTheme = localStorage.getItem('selectedTheme');
+             if (savedTheme) {
+                 changeTheme(savedTheme);
+             }
+             renderHistory();
+         };
+
+         async function recommend() {
+             const diary = document.getElementById('diary').value.trim();
+             const resultDiv = document.getElementById('result');
+
+             if (!diary) {
+                 resultDiv.innerHTML = "일기 내용을 입력해 주세요.";
+                 return;
+             }
+
+             resultDiv.innerHTML = "추천 콘텐츠를 생성 중입니다. 잠시만 기다려주세요...";
+
+             try {
+                 const response = await fetch('/api/recommend', {
+                     method: 'POST',
+                     headers: { 'Content-Type': 'application/json' },
+                     body: JSON.stringify({ diary })
+                 });
+                 const data = await response.json();
+
+                 if (data.error) {
+                     resultDiv.innerHTML = `오류: ${data.error}`;
+                 } else {
+                     resultDiv.innerHTML = `
+                         <strong>감정분석:</strong> ${data.emotion} ${data.emoji}<br>
+                         <strong>추천:</strong> ${data.recommendation}
+                     `;
+                     saveDiary({
+                         text: diary,
+                         emotion: data.emotion,
+                         emoji: data.emoji,
+                         date: new Date().toISOString()
+                     });
+                     renderHistory();
+                 }
+             } catch (error) {
+                 console.error('Fetch error:', error);
+                 resultDiv.innerHTML = '서버 통신 중 오류가 발생했습니다.';
+             }
+         }
+
+         function saveDiary(entry) {
+             let diaryHistory = JSON.parse(localStorage.getItem('diaryHistory') || '[]');
+             diaryHistory.unshift(entry);
+             if (diaryHistory.length > 20) diaryHistory = diaryHistory.slice(0, 20);
+             localStorage.setItem('diaryHistory', JSON.stringify(diaryHistory));
+         }
+
+         function renderHistory() {
+             const historyDiv = document.getElementById('history');
+             const diaryHistory = JSON.parse(localStorage.getItem('diaryHistory') || '[]');
+
+             if (!diaryHistory.length) {
+                 historyDiv.innerHTML = "저장된 일기가 없습니다.";
+                 return;
+             }
+
+             historyDiv.innerHTML = diaryHistory.map(item => `
+                 <div class="history-item">
+                     <span class="history-date">
+                         ${new Date(item.date).toLocaleString('ko-KR', {
+                             year: 'numeric', month: '2-digit', day: '2-digit',
+                             hour: '2-digit', minute: '2-digit'
+                         })}
+                     </span>
+                     <span>${item.emoji} <strong>[${item.emotion}]</strong> : ${item.text}</span>
+                 </div>
+             `).join('');
+         }
+
+         function clearHistory() {
+             if (confirm('정말 모두 삭제하시겠어요?')) {
+                 localStorage.removeItem('diaryHistory');
+                 renderHistory();
+                 document.getElementById('result').innerHTML = '';
+             }
+         }
+     </script>
+ </body>
+ </html>