Cognitive Architecture
Towards Modular LLMs by Building and Reusing a Library of LoRAs (arXiv:2405.11157)
Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts (arXiv:2406.12034)
FunAudioLLM: Voice Understanding and Generation Foundation Models for Natural Interaction Between Humans and LLMs (arXiv:2407.04051)
OLMoE: Open Mixture-of-Experts Language Models (arXiv:2409.02060)
OneGen: Efficient One-Pass Unified Generation and Retrieval for LLMs (arXiv:2409.05152)
LLaVA-o1: Let Vision Language Models Reason Step-by-Step (arXiv:2411.10440)
o1-Coder: an o1 Replication for Coding (arXiv:2412.00154)
Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search (arXiv:2412.18319)
Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling (arXiv:2502.06703)
How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? (arXiv:2502.14502)
Open Deep Search: Democratizing Search with Open-source Reasoning Agents (arXiv:2503.20201)