Image Generation
Learn how to generate images.
Generate Images
The DeepFellow API provides an `images/generations` endpoint that lets you generate images from a text prompt.
Example code for generating an image:
```shell
curl -w '\n' -X 'POST' \
  'https://deepfellow-server-host/v1/images/generations' \
  -H 'Authorization: Bearer DEEPFELLOW-PROJECT-API-KEY' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "An image of ...",
    "model": "CyberRealistic-XL-FP16",
    "response_format": "b64_json"
  }' | jq -r ".data[0].b64_json" | base64 -d > image.png
```

```python
from openai import OpenAI
import base64

client = OpenAI(
    base_url="https://deepfellow-server-host/v1",
    api_key="DEEPFELLOW-PROJECT-API-KEY"
)

prompt = """
An image of ...
"""

result = client.images.generate(
    model="CyberRealistic-XL-FP16",
    prompt=prompt
)

image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)
with open("image.png", "wb") as f:
    f.write(image_bytes)
```

```typescript
import OpenAI from 'openai';
import * as fs from 'fs';
import { Buffer } from 'buffer';

const client = new OpenAI({
  baseURL: 'https://deepfellow-server-host/v1',
  apiKey: 'DEEPFELLOW-PROJECT-API-KEY'
});

const prompt = `
An image of ...
`;

const result = await client.images.generate({
  model: 'CyberRealistic-XL-FP16',
  prompt: prompt
});

const imageBase64 = result.data[0].b64_json;
const imageBytes = Buffer.from(imageBase64, 'base64');
fs.writeFileSync('image.png', imageBytes);
```

Customize Image Output
You can configure the following output options:
- Size: image dimensions (e.g. `1024x1024`, `1024x1536`)
- Quality: rendering quality (e.g. `low`, `medium`, `high`)
- Format: file output format
- Compression: compression level (0-100%) for `jpeg` and `webp` formats
- Background: transparent or opaque
The `size`, `quality`, and `background` parameters support the `auto` option, where the model automatically selects the best value based on the prompt.
Size and Quality Options
Square images with standard quality are the fastest to generate. The default size is 1024x1024 pixels.
Available sizes:
- 1024x1024 (square)
- 1536x1024 (landscape)
- 1024x1536 (portrait)
- auto (default)
Quality options:
- low
- medium
- high
- auto (default)
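As a sketch of how these options fit together, the helper below builds a request payload and checks the `size` and `quality` values against the lists above before sending. The helper itself is illustrative and not part of the DeepFellow API; the model name is the placeholder used throughout this page.

```python
# Documented option values (from the lists above).
VALID_SIZES = {"1024x1024", "1536x1024", "1024x1536", "auto"}
VALID_QUALITIES = {"low", "medium", "high", "auto"}

def image_params(prompt: str, size: str = "auto", quality: str = "auto") -> dict:
    """Build an images/generations payload, rejecting unsupported options early."""
    if size not in VALID_SIZES:
        raise ValueError(f"unsupported size: {size}")
    if quality not in VALID_QUALITIES:
        raise ValueError(f"unsupported quality: {quality}")
    return {
        "model": "CyberRealistic-XL-FP16",
        "prompt": prompt,
        "size": size,
        "quality": quality,
        "response_format": "b64_json",
    }

# Landscape render at high quality:
params = image_params("A mountain lake at sunrise", size="1536x1024", quality="high")

# Pass to an OpenAI-compatible client as in the examples above:
# result = client.images.generate(**params)
```

Validating locally like this turns a typo in `size` or `quality` into an immediate error instead of a round trip to the server.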
Output Format
The API returns base64-encoded image data. The default format is `png`, but you can also request `jpeg` or `webp`.
If using `jpeg` or `webp`, you can also set the `output_compression` parameter to control the compression level (0-100%). For example, `output_compression=50` applies 50% compression.
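For instance, a compressed JPEG request could look like the sketch below. The payload mirrors the parameters described above; the client call itself is commented out because it needs a live DeepFellow host and API key, and the model name is the placeholder used throughout this page.

```python
# Sketch: request a compressed JPEG and save it under a matching extension.
payload = {
    "model": "CyberRealistic-XL-FP16",
    "prompt": "An image of ...",
    "output_format": "jpeg",      # png (default) | jpeg | webp
    "output_compression": 50,     # 0-100%; only applies to jpeg and webp
    "response_format": "b64_json",
}
filename = f"image.{payload['output_format']}"

# With an OpenAI-compatible client (as in the examples above):
# import base64
# result = client.images.generate(**payload)
# with open(filename, "wb") as f:
#     f.write(base64.b64decode(result.data[0].b64_json))
```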
To read more about image generation, visit the OpenAI API Documentation.
Fine Control
You can precisely control the look of your generated image by:
- using `v1/images/generations` endpoint parameters (e.g., `size`, `background`, `quality`),
- injecting Stable Diffusion generation parameters into your prompt using `<sd> {JSON} </sd>` tags:
```shell
curl -w '\n' -X 'POST' \
  'https://deepfellow-server-host/v1/images/generations' \
  -H "Authorization: Bearer DEEPFELLOW-PROJECT-API-KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "\n<sd>\n{\n \"prompt\": \"blue, rose, 8k, cinematic macro shot, soft focus, dreamlike ambiance\",\n \"negative_prompt\": \"nsfw, bad anatomy, blurry, low-quality\",\n \"sampler_name\": \"DPM++ 2M\",\n \"hr_sampler_name\": \"Karras\",\n \"clip_skip\": 2,\n \"steps\": 32,\n \"cfg_scale\": 7\n}\n</sd>\n",
    "model": "CyberRealistic-XL-FP16",
    "size": "auto",
    "quality": "medium",
    "background": "auto",
    "output_compression": 95,
    "output_format": "png",
    "response_format": "b64_json"
  }' | jq -r '.data[0].b64_json' | base64 -d > rose.png
```

```python
import base64
from openai import OpenAI

client = OpenAI(
    base_url="https://deepfellow-server-host/v1",
    api_key="DEEPFELLOW-PROJECT-API-KEY"
)

prompt = """
<sd>
{
  "prompt": "blue, rose, 8k, cinematic macro shot, soft focus, dreamlike ambiance",
  "negative_prompt": "nsfw, bad anatomy, blurry, low-quality",
  "sampler_name": "DPM++ 2M",
  "hr_sampler_name": "Karras",
  "clip_skip": 2,
  "steps": 32,
  "cfg_scale": 7
}
</sd>
"""

generation_params = {
    "prompt": prompt,
    "model": "CyberRealistic-XL-FP16",
    "size": "auto",
    "quality": "medium",
    "background": "auto",
    "output_compression": 95,
    "output_format": "png",
    "response_format": "b64_json"
}

response = client.images.generate(**generation_params)
b64_image_data = response.data[0].b64_json
image_bytes = base64.b64decode(b64_image_data)
with open("rose.png", "wb") as f:
    f.write(image_bytes)
```

```typescript
import OpenAI from 'openai';
import * as fs from 'fs';
import { Buffer } from 'buffer';

const client = new OpenAI({
  baseURL: 'https://deepfellow-server-host/v1',
  apiKey: 'DEEPFELLOW-PROJECT-API-KEY'
});

const prompt = `
<sd>
{
  "prompt": "blue, rose, 8k, cinematic macro shot, soft focus, dreamlike ambiance",
  "negative_prompt": "nsfw, bad anatomy, blurry, low-quality",
  "sampler_name": "DPM++ 2M",
  "hr_sampler_name": "Karras",
  "clip_skip": 2,
  "steps": 32,
  "cfg_scale": 7
}
</sd>
`;

const generationParams: any = {
  prompt: prompt,
  model: 'CyberRealistic-XL-FP16',
  size: 'auto',
  quality: 'medium',
  background: 'auto',
  output_compression: 95,
  output_format: 'png',
  response_format: 'b64_json'
};

const result = await client.images.generate(generationParams);
const imageBase64 = result.data[0].b64_json;
const imageBytes = Buffer.from(imageBase64, 'base64');
fs.writeFileSync('rose.png', imageBytes);
```

Using Low-Rank Adapters (LoRA)
Low-rank adapters let you tune the output of your model while keeping its base capabilities intact. Using a LoRA is analogous to postproduction in the movie industry: after shooting, the material is edited, color-graded, and so on.
In the case of image generation, the base model generates the scene, and the LoRA then adjusts its tone, color grading, or style to reach the desired result. You need two Stable Diffusion models to do this:
- base model (any stable-diffusion model that can generate images)
- LoRA model
In a single image generation request, both models are used.
Example
```shell
curl -w '\n' -X 'POST' \
  'https://deepfellow-server-host/v1/images/generations' \
  -H "Authorization: Bearer DEEPFELLOW-PROJECT-API-KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "<sd>\n{\n \"prompt\": \"valley <lora:Fantastic_Landscape-LoRA:0.3>, cinematic lighting, colorful, extreme detail\",\n \"negative_prompt\": \"verybadimagenegative_v1.3, vignetting, bokeh, out of focus, distortion\",\n \"sampler_name\": \"Euler a\",\n \"hr_sampler_name\": \"Karras\",\n \"clip_skip\": 2,\n \"steps\": 20,\n \"cfg_scale\": 7\n}\n</sd>",
    "model": "yomama-2.5D",
    "size": "1024x1024",
    "quality": "medium",
    "background": "auto",
    "output_compression": 95,
    "output_format": "png",
    "response_format": "b64_json"
  }' | jq -r '.data[0].b64_json' | base64 -d > image.png
```

```python
import base64
from openai import OpenAI

client = OpenAI(
    base_url="https://deepfellow-server-host/v1",
    api_key="DEEPFELLOW-PROJECT-API-KEY"
)

prompt = """
<sd>
{
  "prompt": "valley <lora:Fantastic_Landscape-LoRA:0.3>, cinematic lighting, colorful, extreme detail",
  "negative_prompt": "verybadimagenegative_v1.3, vignetting, bokeh, out of focus, distortion",
  "sampler_name": "Euler a",
  "hr_sampler_name": "Karras",
  "clip_skip": 2,
  "steps": 20,
  "cfg_scale": 7
}
</sd>
"""

generation_params = {
    "prompt": prompt,
    "model": "yomama-2.5D",
    "size": "1024x1024",
    "quality": "medium",
    "background": "auto",
    "output_compression": 95,
    "output_format": "png",
    "response_format": "b64_json",
}

response = client.images.generate(**generation_params)
b64_image_data = response.data[0].b64_json
image_bytes = base64.b64decode(b64_image_data)
with open("image.png", "wb") as f:
    f.write(image_bytes)
```

```typescript
import OpenAI from 'openai';
import * as fs from 'fs';
import { Buffer } from 'buffer';

const client = new OpenAI({
  baseURL: 'https://deepfellow-server-host/v1',
  apiKey: 'DEEPFELLOW-PROJECT-API-KEY'
});

const prompt = `
<sd>
{
  "prompt": "valley <lora:Fantastic_Landscape-LoRA:0.3>, cinematic lighting, colorful, extreme detail",
  "negative_prompt": "verybadimagenegative_v1.3, vignetting, bokeh, out of focus, distortion",
  "sampler_name": "Euler a",
  "hr_sampler_name": "Karras",
  "clip_skip": 2,
  "steps": 20,
  "cfg_scale": 7
}
</sd>
`;

const generationParams: any = {
  prompt: prompt,
  model: 'yomama-2.5D',
  size: '1024x1024',
  quality: 'medium',
  background: 'auto',
  output_compression: 95,
  output_format: 'png',
  response_format: 'b64_json'
};

const result = await client.images.generate(generationParams);
const imageBase64 = result.data[0].b64_json;
const imageBytes = Buffer.from(imageBase64, 'base64');
fs.writeFileSync('image.png', imageBytes);
```