Is there any UI that you provide for inference and for using the model's capabilities? LM Studio doesn't even recognize the vision capabilities.
I tried LM Studio; it doesn't seem to work.
Yes, we will provide a full set of demo code and a packaged Docker image that users can deploy easily; this is in progress. We hope community users will be able to run it on their own Mac with the same experience as the online demo.
What about Windows? I've only seen the Docker image for Mac so far.
Use WSL on Windows. Docker alone doesn't work on Windows.
I would like to know whether support for running the model on Linux will be made available as well.
Thanks...
It does work in LM Studio. You just have to put the vision GGUF in the same folder as the model and rename the vision GGUF to "MiniCPM-o-4_5.mmproj-f16.gguf". That worked for me.
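A minimal sketch of that workaround, in case it helps: the model folder path and the original vision GGUF filename below are assumptions (they depend on your LM Studio setup and which quant you downloaded); only the target name "MiniCPM-o-4_5.mmproj-f16.gguf" comes from the post above.

```python
# Copy the vision projector GGUF next to the model GGUF under the
# filename LM Studio expects, per the workaround described above.
import shutil
from pathlib import Path

# Assumed LM Studio models location and model folder -- adjust to your install.
model_dir = Path.home() / ".lmstudio" / "models" / "openbmb" / "MiniCPM-o-4_5-gguf"

# Assumed original filename of the downloaded vision GGUF -- adjust as needed.
vision_gguf = Path("mmproj-model-f16.gguf")

# Place the vision GGUF in the same folder as the model with the expected name.
shutil.copy(vision_gguf, model_dir / "MiniCPM-o-4_5.mmproj-f16.gguf")
```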
https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/web_demo/WebRTC_Demo/README.md
We have created a complete demo tutorial, which also comes with one-click deployment scripts and a Docker deployment option. It supports the Mac/Linux/Windows platforms.
If you run into problems while using it, you are always welcome to open issues, and we will continue to polish and improve it.
Honestly, I'm only interested in the voice functionality (real-time voice-to-voice interaction), not vision. In that case, I assume that leaving out the vision components will not make it impossible to run the model? Is my assumption correct? I don't want to waste time downloading the whole thing otherwise... Thank you. I'm running Ubuntu 24.04 LTS on my server.