introvoyz041 committed
Commit 9375225 · verified · 1 Parent(s): 05672d1

Upload README.md with huggingface_hub

Files changed (1): README.md ADDED (+95 −0)
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- shining-valiant
- shining-valiant-3
- valiant
- valiant-labs
- mistral3
- mistral
- mistral-common
- ministral-3-14b
- ministral
- reasoning
- code
- code-reasoning
- science
- science-reasoning
- physics
- biology
- chemistry
- earth-science
- astronomy
- machine-learning
- artificial-intelligence
- compsci
- computer-science
- information-theory
- ML-Ops
- math
- cuda
- deep-learning
- transformers
- agentic
- LLM
- neuromorphic
- self-improvement
- complex-systems
- cognition
- linguistics
- philosophy
- logic
- epistemology
- simulation
- game-theory
- knowledge-management
- creativity
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
- mlx
- mlx-my-repo
base_model: ValiantLabs/Ministral-3-14B-Reasoning-2512-ShiningValiant3
datasets:
- sequelbox/Celestia3-DeepSeek-R1-0528
- sequelbox/Mitakihara-DeepSeek-R1-0528
- sequelbox/Raiden-DeepSeek-R1
license: apache-2.0
---

# introvoyz041/Ministral-3-14B-Reasoning-2512-ShiningValiant3-mlx-4Bit

The model [introvoyz041/Ministral-3-14B-Reasoning-2512-ShiningValiant3-mlx-4Bit](https://huggingface.co/introvoyz041/Ministral-3-14B-Reasoning-2512-ShiningValiant3-mlx-4Bit) was converted to MLX format from [ValiantLabs/Ministral-3-14B-Reasoning-2512-ShiningValiant3](https://huggingface.co/ValiantLabs/Ministral-3-14B-Reasoning-2512-ShiningValiant3) using mlx-lm version **0.28.3**.
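
The conversion was most likely done with mlx-lm's convert utility. The exact command is not recorded in this card, so the following is an illustrative sketch; the quantization flags and output path are assumptions inferred from the repository name.

```bash
# Illustrative sketch only: convert and 4-bit quantize the base model with mlx-lm.
# The exact flags used for this repo are not documented; these values are assumptions.
mlx_lm.convert \
    --hf-path ValiantLabs/Ministral-3-14B-Reasoning-2512-ShiningValiant3 \
    --mlx-path Ministral-3-14B-Reasoning-2512-ShiningValiant3-mlx-4Bit \
    -q --q-bits 4
```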

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 4-bit quantized weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("introvoyz041/Ministral-3-14B-Reasoning-2512-ShiningValiant3-mlx-4Bit")

prompt = "hello"

# Format the prompt as a user turn via the model's chat template, if one is defined.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a response; verbose=True streams the output as it is produced.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
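
The model can also be run directly from the command line. A minimal sketch, assuming mlx-lm's bundled CLI is available on your PATH; the prompt and token limit are arbitrary examples:

```bash
# Minimal sketch: one-off generation via the mlx-lm command-line interface.
mlx_lm.generate \
    --model introvoyz041/Ministral-3-14B-Reasoning-2512-ShiningValiant3-mlx-4Bit \
    --prompt "hello" \
    --max-tokens 256
```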