

# Eric Hartford's Wizard-Vicuna-30B-Uncensored GPTQ

This is a GPTQ-format quantised 4-bit model of Eric Hartford's Wizard-Vicuna 30B Uncensored. It is the result of quantising to 4-bit using GPTQ-for-LLaMa.

Repositories available:

* float16 HF format model for GPU inference and further conversions
* 4-bit and 5-bit GGML models for CPU inference

## How to easily download and use this model in text-generation-webui

1. Open the text-generation-webui UI as normal.
2. Under **Download custom model or LoRA**, enter `TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.

## Want to contribute? TheBloke's Patreon page

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
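As an alternative to the web-UI download box, the same repository files can be fetched directly over the Hugging Face Hub's standard URL layout (`https://huggingface.co/<repo>/resolve/<revision>/<file>`). This is a minimal stdlib-only sketch, not part of text-generation-webui; the `config.json` filename and the destination path used in the example run are illustrative assumptions.

```python
"""Sketch: build direct-download URLs for this model's repo files.

Assumes the standard Hugging Face Hub resolve-URL layout; the filenames
passed in are up to the caller (config.json is just an example).
"""
import urllib.request

REPO_ID = "TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ"


def hub_url(filename: str, revision: str = "main") -> str:
    # Direct-download URL for one file in the repo at a given revision.
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"


def fetch(filename: str, dest: str) -> None:
    # Downloads a single repo file to dest (network access required).
    urllib.request.urlretrieve(hub_url(filename), dest)


if __name__ == "__main__":
    # Example: fetch the model config to the current directory.
    fetch("config.json", "config.json")
```

Note that the quantised weight files are several gigabytes, so for full downloads the web UI (or `git lfs clone`) is usually more convenient than fetching files one by one.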
