Issues: ollama/ollama
- Is it possible to enable the OpenAI API in the Docker image? (#4475, feature request, opened May 16, 2024 by Tomichi)
- Support images by URL when chatting with a vision model (#4474, feature request, opened May 16, 2024 by dickens88)
- llama3-chatqa always returns an empty response (#4472, bug, opened May 16, 2024 by pnmartinez)
- Warning: client version is different than Ollama version on Linux (#4471, bug, opened May 16, 2024 by sohang3112)
- "ollama list" should display creation time, not download time (#4470, feature request, opened May 16, 2024 by LaurentBonnaud)
- Ollama speed drops when setting OLLAMA_NUM_PARALLEL (#4468, bug, opened May 16, 2024 by hugefrog)
- ZLUDA not working with Ollama versions 0.1.33 to 0.1.38 using an RX 6600 on Windows 10 (#4464, bug, opened May 16, 2024 by usmandilmeer)
- Error: llama runner process has terminated: exit status 0xc0000409 (#4457, bug, opened May 15, 2024 by xdfnet)
- Ollama + sentence-transformers with torch CUDA (#4453, bug, opened May 15, 2024 by qsdhj)
- openai.error.InvalidRequestError: model 'deepseek-coder:6.7b' not found, try pulling it first (#4449, bug, opened May 15, 2024 by userandpass)
- Streaming chat completion via the OpenAI API should support a stream option to include usage (#4448, feature request, opened May 15, 2024 by odrobnik)
- JSON mode + streaming + OpenAI API + Llama3 never sends STOP and emits a lot of whitespace after the JSON (#4446, bug, opened May 15, 2024 by odrobnik)
- Add tab completions for fish shell (#4444, feature request, opened May 15, 2024 by coder543)
- Models remain resident in VRAM after deletion (#4443, bug, opened May 15, 2024 by coder543)
- Error: llama runner process has terminated: exit status 0xc0000409 (#4442, bug, opened May 15, 2024 by hcr707305003)
- Add support for third-party hosted APIs (#4440, feature request, opened May 14, 2024 by 19h)
- Ollama vs llama-cpp-python: slow response time compared to llama-cpp-python (#4437, bug, gpu, nvidia, opened May 14, 2024 by utility-aagrawal)
- GPU layer control / prioritisation (#4433, feature request, opened May 14, 2024 by AncientMystic)
- LoRA models / LoRA training (#4432, feature request, opened May 14, 2024 by AncientMystic)
- BUG: Custom system prompt not loading (#4431, bug, opened May 14, 2024 by MichaelFomenko)
- ollama can't run qwen:72b, error msg "gpu VRAM usage didn't recover within timeout" (#4427, bug, opened May 14, 2024 by changingshow)