Hugging Face Space: Xenobd/whisper.cpp (duplicated from natasa365/whisper.cpp), status: Running
Path: whisper.cpp / ggml / src (6.6 MB) · 100 contributors · 553 commits · revision 45399ad
Latest commit 2b94a24 by cmdr2, about 1 year ago: Support pure float16 add/sub/mul/div operations in the CUDA (and CPU) backend (ggml/1121)
| Directory | Last commit | Age |
|---|---|---|
| ggml-amx | ggml : adapt AMX to tensor->grad removal (llama/0) | over 1 year ago |
| ggml-blas | ggml : add support for dynamic loading of backends (llama/10469) | over 1 year ago |
| ggml-cann | llama : add Qwen2VL support + multimodal RoPE (llama/10361) | about 1 year ago |
| ggml-cpu | Support pure float16 add/sub/mul/div operations in the CUDA (and CPU) backend (ggml/1121) | about 1 year ago |
| ggml-cuda | Support pure float16 add/sub/mul/div operations in the CUDA (and CPU) backend (ggml/1121) | about 1 year ago |
| ggml-hip | CUDA: app option to compile without FlashAttention (llama/12025) | about 1 year ago |
| ggml-kompute | llama : add Qwen2VL support + multimodal RoPE (llama/10361) | about 1 year ago |
| ggml-metal | metal : copy kernels for quant to F32/F16 conversions (llama/12017) | about 1 year ago |
| ggml-musa | CUDA: app option to compile without FlashAttention (llama/12025) | about 1 year ago |
| ggml-opencl | opencl: fix for small models (llama/11950) | about 1 year ago |
| ggml-rpc | rpc: fix known RCE in rpc-server (ggml/1103) | about 1 year ago |
| ggml-sycl | Optimize mul_mat for Q4_0 on Intel GPU (llama/12035) | about 1 year ago |
| ggml-vulkan | vulkan: implement several ops relevant for ggml_opt (llama/11769) | about 1 year ago |
| File | Size | Last commit | Age |
|---|---|---|---|
| CMakeLists.txt | 11.8 kB | `ci`: use sccache on windows instead of ccache (llama/11545) | about 1 year ago |
| ggml-alloc.c | 38.1 kB | vulkan: use smaller combined allocations to avoid fragmentation (llama/11551) | about 1 year ago |
| ggml-backend-impl.h | 12 kB | rpc : early register backend devices (llama/11262) | about 1 year ago |
| ggml-backend-reg.cpp | 17.2 kB | ggml : allow loading backend with env variable (ggml/1059) | about 1 year ago |
| ggml-backend.cpp | 77.5 kB | ggml-backend : only offload from host buffers (fix) (llama/11124) | about 1 year ago |
| ggml-common.h | 133 kB | CUDA: use arch list for compatibility check (llama/11775) | about 1 year ago |
| ggml-impl.h | 18.4 kB | MUSA: support ARM64 and enable dp4a .etc (llama/11843) | about 1 year ago |
| ggml-opt.cpp | 31.7 kB | ggml-opt: fix data corruption (ggml/1022) | over 1 year ago |
| ggml-quants.c | 214 kB | ggml : refactor online repacking (llama/10446) | over 1 year ago |
| ggml-quants.h | 8.34 kB | ggml : build backends as libraries (llama/10256) | over 1 year ago |
| ggml-threading.cpp | 250 Bytes | ggml : build backends as libraries (llama/10256) | over 1 year ago |
| ggml-threading.h | 198 Bytes | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (llama/10797) | about 1 year ago |
| ggml.c | 209 kB | ggml-cpu: Support s390x SIMD Instruction Set (llama/12019) | about 1 year ago |
| gguf.cpp | 45 kB | cmake : add sanitizer flags for llama.cpp (llama/11279) | about 1 year ago |