DeepFellow Infra Web Panel

DeepFellow Infra lets you manage your models. To install Infra, follow the Installation Guide.

Accessing Infra Web Panel

Type the following in your terminal:

deepfellow infra info

You will get output similar to this:

$ deepfellow infra info
💡      Variables stored in /home/mark/.deepfellow/infra/.env
        DF_NAME=infra
        DF_INFRA_URL=https://df-infra-node-1.com
        DF_INFRA_PORT=8086
        DF_INFRA_IMAGE=github.simplito.com:5050/df/deepfellow-infra:latest
        DF_MESH_KEY=dfmesh_dbc8c1cf-c07a-4bba-bda1-d89829be37bb
        DF_INFRA_API_KEY=dfinfra_c7a934fc-5e1f-41b2-b10e-57016c18d516
        DF_INFRA_ADMIN_KEY=dfinfraadmin_99d7df55-107b-4327-aeef-54e187f6c5aa
        DF_CONNECT_TO_MESH_URL=
        DF_CONNECT_TO_MESH_KEY=
        DF_INFRA_DOCKER_SUBNET=deepfellow-infra-net
        DF_INFRA_COMPOSE_PREFIX=dfd834zh_
        DF_INFRA_DOCKER_CONFIG=/home/mark/.docker/config.json
        DF_INFRA_STORAGE_DIR=/home/mark/.deepfellow/infra/storage
        DF_HUGGING_FACE_TOKEN=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
        DF_CIVITAI_TOKEN=7fea22cc5605cf498066059415157828
...
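If you need any of these values on the command line later (for example, for the curl sketches further down this page), you can load them from the file. A minimal bash sketch, using the file path reported above:

# Export every variable from the file into the current shell
set -a
source ~/.deepfellow/infra/.env
set +a

# Or read a single value, e.g. the admin key used to log in to the web panel
grep '^DF_INFRA_ADMIN_KEY=' ~/.deepfellow/infra/.env | cut -d= -f2-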

Head to the UI at http://localhost:8086.

In the pop-up window, enter your DF_INFRA_ADMIN_KEY. The services window will appear:

[Screenshot: DeepFellow Infra Web Panel services view with available services, e.g. ollama, openAI, and google]

Services

Models are organized under "services". Each service is named after the backend, e.g. "ollama", or after the provider, e.g. "openai". Services group models from the same family.

Choosing Services

The available LLM services – ollama, llamacpp, and vllm – differ in their level of hardware integration, including dependencies on specific CPU instruction sets.

To minimize hardware compatibility issues, consider the following services:

  • ollama – Recommended for most users. Automatically adapts to your hardware configuration with minimal setup required.
  • llamacpp – Supports models outside the ollama repository, in the GGUF model format. May require extra configuration due to a higher chance of hardware compatibility issues.
  • vllm – Offers the highest performance but carries the highest risk of hardware-related complications. Recommended for experienced users who are confident in troubleshooting and system configuration.

Recommendation: If you're not sure which service to choose, start with ollama.

Installing Services

To install a particular service, click "Install". A pop-up window will appear where you can choose service parameters.

If you want your service to use the CPU only, make sure the "Run on GPU" box is not checked.

A vLLM service installed in CPU mode ("Run on GPU" unchecked) currently supports only processors with the AVX-512 instruction set – learn more in the vLLM documentation.
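On Linux you can check whether your CPU advertises AVX-512 before installing vLLM in CPU mode; this is a generic check, not specific to DeepFellow:

grep -o 'avx512[a-z_]*' /proc/cpuinfo | sort -u   # empty output means no AVX-512 support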

We recommend using the 'ollama' or 'llamacpp' services instead whenever possible; they currently provide the smoothest experience.

Before installing the "openai" service, get an OpenAI API key to use OpenAI models with our anonymization layer.

Before installing the "google" service, get a Gemini API key to use Google models with our anonymization layer.
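If you want to sanity-check a key before installing either service, both providers expose a model-listing endpoint. A minimal sketch, assuming you have exported the keys as OPENAI_API_KEY and GEMINI_API_KEY (variable names chosen here for illustration):

# OpenAI: a valid key returns a JSON list of models
curl -s https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"

# Gemini: likewise returns a JSON list of models for a valid key
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY"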

[Screenshot: window with service parameters to set when installing ollama]

After the chosen service is installed, it will appear in the grid:

[Screenshot: green label showing the installed ollama service in the DeepFellow Infra Web Panel services list]

Uninstalling Services

Click the "Uninstall" button to uninstall a service.

Models

Installing Models

Click the "Models" button on the desired service. You will see a list of available models:

[Screenshot: DeepFellow Infra Web Panel showing the models available in the ollama service]

You can filter the model list by name or type. You can also show only models that are:

  • installed / not installed
  • custom / not custom

[Screenshot: models in the ollama service filtered by name]
[Screenshot: models in the ollama service filtered by model type]
[Screenshot: models in the ollama service filtered by installation state]

Uninstalling Models

Click the "Uninstall" button to uninstall a model.

Custom models

You can install your own custom models. The model must meet at least one of the criteria below:

  1. Any model present in the Ollama library – use the ollama service,
  2. Any model available on HuggingFace in GGUF format – use the llamacpp service,
  3. Any model available on HuggingFace that is supported by vLLM – use the vllm service,
  4. Any model from OpenAI or Google – use the openai or google service,
  5. Any image generation model compatible with Stable Diffusion (e.g. from Civitai or HuggingFace) – use the stable-diffusion service,
  6. Any LoRA compatible with Stable Diffusion (e.g. from Civitai or HuggingFace) – use the stable-diffusion service,
  7. Any Docker image – use the custom service.

[Screenshot: pop-up window with Docker image parameters to fill in when installing a custom model]

Read the Using Custom Models guide for details.

Install

The install procedure is similar for all services. The exception is the 'custom' service – read the Using Custom Models guide for details.

As an example, to add a custom model (qwen3-embedding:0.6b from the Ollama library) to your 'ollama' service:

  1. Go to the services view,
  2. Locate the 'ollama' tab and click 'Install' if the service is not already installed,
  3. Click the "Add custom model" button,
  4. In the pop-up window, enter the Model ID qwen3-embedding:0.6b,
  5. Enter the Size 639MB,
  6. Choose the embedding Model type from the drop-down. A new model tab will be shown,
  7. Click the "Install" button,
  8. In the pop-up window, add optional parameters and click "Install" to confirm,
  9. After a while your model will appear in the models list with a green "Installed" label. You can now use the model as normal.

Custom models are used exactly the same way as non-custom ones: you use their identifiers the same way in your inference requests or code.
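For example, an embeddings request for the model installed above might look like the sketch below. The route and auth header are assumptions (an OpenAI-compatible /v1/embeddings endpoint served on DF_INFRA_PORT, authenticated with DF_INFRA_API_KEY) – check the API documentation ("Go to Docs", see below) for the exact routes your installation exposes:

# Hypothetical request shape – verify the route in the Infra API docs
curl -s http://localhost:8086/v1/embeddings \
  -H "Authorization: Bearer $DF_INFRA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-embedding:0.6b", "input": "Hello, world"}'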

Uninstall

Removing a custom model requires two actions:

  • uninstalling model,
  • removing custom model tab.

Uninstalling model

  1. Go to the services view,
  2. Click "Models" on the tab of the service the model was installed from (e.g. 'ollama'),
  3. Find the model you want to uninstall (e.g. qwen3-embedding:0.6b),
  4. Click "Uninstall" to remove the model.

Removing custom model tab

  1. In the models view, search for the custom model name (e.g. qwen3-embedding:0.6b),
  2. Click "Remove custom model".

API Documentation

To access the DeepFellow Infra API documentation, click the "Go to Docs" button in the upper left corner.

Next steps

You can head to the tutorials on using the specific services listed here:
