open-webui/open-webui: User-friendly AI Interface (Supports Ollama, OpenAI API, ...)


Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.

For more information, be sure to check out our Open WebUI Documentation.

Open WebUI Demo

Key Features of Open WebUI ⭐

  • 🚀 Effortless Setup: Install seamlessly using Docker or Kubernetes (kubectl, kustomize or helm) for a hassle-free experience with support for both :ollama and :cuda tagged images.

  • 🤝 Ollama/OpenAI API Integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the OpenAI API URL to link with LMStudio, GroqCloud, Mistral, OpenRouter, and more.

  • 🛡️ Granular Permissions and User Groups: By allowing administrators to create detailed user roles and permissions, we ensure a secure user environment. This granularity not only enhances security but also allows for customized user experiences, fostering a sense of ownership and responsibility amongst users.

  • 📱 Responsive Design: Enjoy a seamless experience across Desktop PC, Laptop, and Mobile devices.

  • 📱 Progressive Web App (PWA) for Mobile: Enjoy a native app-like experience on your mobile device with our PWA, providing offline access on localhost and a seamless user interface.

  • ✒️🔢 Full Markdown and LaTeX Support: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.

  • 🎤📹 Hands-Free Voice/Video Call: Experience seamless communication with integrated hands-free voice and video call features, allowing for a more dynamic and interactive chat environment.

  • 🛠️ Model Builder: Easily create Ollama models via the Web UI. Create and add custom characters/agents, customize chat elements, and import models effortlessly through Open WebUI Community integration.

  • 🐍 Native Python Function Calling Tool: Enhance your LLMs with built-in code editor support in the tools workspace. Bring Your Own Function (BYOF) by simply adding your pure Python functions, enabling seamless integration with LLMs (see the sketch after this list).

  • 📚 Local RAG Integration: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, effortlessly accessing them using the # command before a query.

  • 🔍 Web Search for RAG: Perform web searches using providers like SearXNG, Google PSE, Brave Search, serpstack, serper, Serply, DuckDuckGo, TavilySearch, SearchApi and Bing and inject the results directly into your chat experience.

  • 🌐 Web Browsing Capability: Seamlessly integrate websites into your chat experience using the # command followed by a URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.

  • 🎨 Image Generation Integration: Seamlessly incorporate image generation capabilities using options such as AUTOMATIC1111 API or ComfyUI (local), and OpenAI's DALL-E (external), enriching your chat experience with dynamic visual content.

  • ⚙️ Many Models Conversations: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.

  • 🔐 Role-Based Access Control (RBAC): Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.

  • 🌐🌍 Multilingual Support: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors!

  • 🧩 Pipelines, Open WebUI Plugin Support: Seamlessly integrate custom logic and Python libraries into Open WebUI using Pipelines Plugin Framework. Launch your Pipelines instance, set the OpenAI URL to the Pipelines URL, and explore endless possibilities. Examples include Function Calling, User Rate Limiting to control access, Usage Monitoring with tools like Langfuse, Live Translation with LibreTranslate for multilingual support, Toxic Message Filtering and much more.

  • 🌟 Continuous Updates: We are committed to improving Open WebUI with regular updates, fixes, and new features.
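
To make the Bring Your Own Function (BYOF) idea above concrete, here is a minimal sketch of a pure Python tool. The Tools class and docstring-described parameters follow the commonly used tools-workspace convention, but treat the exact structure as an assumption and consult the Open WebUI documentation for the current interface:

# Hypothetical BYOF tool sketch: a plain Python class whose typed,
# docstring-annotated methods are exposed to the LLM as callable functions.
class Tools:
    def word_count(self, text: str) -> str:
        """
        Count the number of words in a piece of text.
        :param text: The text to analyze.
        """
        # Pure Python, no external dependencies.
        return f"The text contains {len(text.split())} words."

Pasted into the code editor in the tools workspace, a function like this becomes available for the model to call during chat.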

Want to learn more about Open WebUI's features? Check out our Open WebUI documentation for a comprehensive overview!

🔗 Also Check Out Open WebUI Community!

Don't forget to explore our sibling project, Open WebUI Community, where you can discover, download, and explore customized Modelfiles. Open WebUI Community offers a wide range of exciting possibilities for enhancing your chat interactions with Open WebUI! 🚀

How to Install 🚀

Installation via Python pip 🐍

Open WebUI can be installed using pip, the Python package installer. Before proceeding, ensure you're using Python 3.11 to avoid compatibility issues.

  1. Install Open WebUI: Open your terminal and run the following command to install Open WebUI:
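
    pip install open-webui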

  2. Running Open WebUI: After installation, you can start Open WebUI by executing:
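
    open-webui serve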

This will start the Open WebUI server, which you can access at http://localhost:8080

Quick Start with Docker 🐳

Note

Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide in the Open WebUI Documentation is ready to assist you.

Warning

When using Docker to install Open WebUI, make sure to include the -v open-webui:/app/backend/data volume mount in your Docker command. This step is crucial as it ensures your database is properly mounted and prevents any loss of data.

Tip

If you wish to utilize Open WebUI with Ollama included or CUDA acceleration, we recommend utilizing our official images tagged with either :cuda or :ollama. To enable CUDA, you must install the Nvidia CUDA container toolkit on your Linux/WSL system.

Installation with Default Configuration

  • If Ollama is on your computer, use this command:

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  • If Ollama is on a Different Server, use this command:

    To connect to Ollama on another server, change the OLLAMA_BASE_URL to the server's URL:

    docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  • To run Open WebUI with Nvidia GPU support, use this command:

    docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

Installation for OpenAI API Usage Only

  • If you're only using OpenAI API, use this command:

    docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Installing Open WebUI with Bundled Ollama Support

This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:

  • With GPU Support: Utilize GPU resources by running the following command:

    docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
  • For CPU Only: If you're not using a GPU, use this command instead:

    docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.

After installation, you can access Open WebUI at http://localhost:3000. Enjoy! 😄

Other Installation Methods

We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our Open WebUI Documentation or join our Discord community for comprehensive guidance.

Troubleshooting

Encountering connection issues? Our Open WebUI Documentation has got you covered. For further assistance and to join our vibrant community, visit the Open WebUI Discord.

Open WebUI: Server Connection Error

If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the --network=host flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: http://localhost:8080.

Example Docker Command:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Keeping Your Docker Installation Up-to-Date

In case you want to update your local Docker installation to the latest version, you can do it with Watchtower:

docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui

In the last part of the command, replace open-webui with your container name if it is different.

Check our Migration Guide available in our Open WebUI Documentation.

Using the Dev Branch 🌙

Warning

The :dev branch contains the latest unstable features and changes. Use it at your own risk as it may have bugs or incomplete features.

If you want to try out the latest bleeding-edge features and are okay with occasional instability, you can use the :dev tag like this:

docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --add-host=host.docker.internal:host-gateway --restart always ghcr.io/open-webui/open-webui:dev

Offline Mode

If you are running Open WebUI in an offline environment, you can set the HF_HUB_OFFLINE environment variable to 1 to prevent attempts to download models from the internet.
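
For example, with Docker this is just one extra environment variable on top of the standard command shown earlier (illustrative; adapt it to whichever command variant matches your setup):

docker run -d -p 3000:8080 -e HF_HUB_OFFLINE=1 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main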

What's Next? 🌟

Discover upcoming features on our roadmap in the Open WebUI Documentation.

License 📜

This project is licensed under the BSD-3-Clause License - see the LICENSE file for details. 📄

Support 💬

If you have any questions, suggestions, or need assistance, please open an issue or join our Open WebUI Discord community to connect with us! 🤝

Star History

Star History Chart

Created by Timothy Jaeryang Baek - Let's make Open WebUI even more amazing together! 💪

