DeepFellow DOCS

Hardware Requirements

DeepFellow adapts to your hardware – there are no strict minimum requirements. The optimal configuration depends on your use case, chosen models, and team size. Follow our recommendations below to maximize DeepFellow's performance within your infrastructure.

Enterprise Scale – High-Performance Inference

For teams with access to enterprise-grade GPUs (H100, H200, and beyond), DeepFellow scales seamlessly. Deploy multiple GPUs within a single machine, or distribute workloads across multiple machines using DeepFellow Infra for true horizontal scaling—see Installation.
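Before distributing workloads across machines, it can help to confirm what each node actually exposes in terms of GPUs and VRAM. The short sketch below is an illustrative check only; it assumes PyTorch is installed on the node and is not part of DeepFellow or DeepFellow Infra.

```python
# Illustrative pre-deployment check: list the GPUs a node exposes.
# Assumes PyTorch is installed; this is not a DeepFellow component.
import torch

def list_gpus() -> None:
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected on this node.")
        return
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        vram_gb = props.total_memory / 1024**3
        print(f"GPU {i}: {props.name}, {vram_gb:.1f} GB VRAM")

if __name__ == "__main__":
    list_gpus()
```

Running this on every machine you plan to enroll gives a quick inventory to compare against the model sizes you intend to serve.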

Consumer & Mid-Range Deployments

DeepFellow runs efficiently on consumer-grade hardware, making it accessible for smaller teams and individual developers. Whether you're prototyping with lightweight models or running production workloads on mid-tier GPUs, DeepFellow delivers consistent performance across a wide range of configurations.

Here are some example configurations, suitable for both smaller models and more demanding projects.

| Configuration | CPU (cores/threads) | RAM | VRAM | GPU | Performance |
| --- | --- | --- | --- | --- | --- |
| 1 | 6/12 | 32 GB | 16 GB | RTX 5060 Ti | Sufficient for a single user or a small group (assuming users do not query the model at the same time). Handles continuously running tasks, but performance will be fairly weak. |
| 2 | 8/16 | 32 GB | 16 GB | RTX 5070 Ti | 2x more efficient than the first setup. Still limited under concurrent use by a group of users; it will slow down in that case. |
| 3 | 8/16 | 64 GB | 16 GB | RTX 5080 | 2.5x more efficient than the first setup; suitable for a small to medium-sized group. |
| 4 | 8/16 | 64 GB | 32 GB | RTX 5090 | 4x more efficient than the first setup. More than enough for a single user and appropriate for a group of roughly 10 users. Very good performance on continuously running tasks. |
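To judge which of the configurations above can host a given model, a common rule of thumb is that the weights alone need roughly parameter count times bytes per parameter, plus headroom for the KV cache and runtime buffers. The sketch below encodes that back-of-the-envelope estimate; the 1.2x overhead factor is a general assumption, not a DeepFellow-specific figure.

```python
# Back-of-the-envelope VRAM estimate for fitting a model on one GPU.
# The 1.2x overhead factor (KV cache, activations, runtime buffers) is a
# rough assumption, not a DeepFellow-specific figure.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def estimate_vram_gb(params_billion: float, precision: str = "fp16",
                     overhead: float = 1.2) -> float:
    weights_gb = params_billion * BYTES_PER_PARAM[precision]
    return weights_gb * overhead

# Example: an 8B-parameter model quantized to int4 needs about
# 8 * 0.5 * 1.2 = 4.8 GB, so it fits comfortably in the 16 GB configurations,
# while the same model in fp16 (about 19.2 GB) calls for the 32 GB RTX 5090 setup.
for precision in ("fp16", "int8", "int4"):
    print(f"8B model @ {precision}: ~{estimate_vram_gb(8, precision):.1f} GB VRAM")
```

Compare the estimate against the VRAM column in the table above, and leave extra margin if several users will query the model concurrently.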
