
Conversation

@Chesars (Contributor) commented Dec 17, 2025

Relevant issues

Follow-up to #18006 - adds text-to-image generation support for Black Forest Labs

Pre-Submission checklist

  • I have added tests in the tests/litellm/ directory (adding at least 1 test is a hard requirement; see details)
  • My PR passes all unit tests via make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature

Changes

Add native text-to-image generation support for Black Forest Labs Flux models.

Supported Models

  • flux-pro-1.1 - Fast & reliable standard generation ($0.04/image)
  • flux-pro-1.1-ultra - Ultra high-resolution up to 4MP ($0.06/image)
  • flux-dev - Development/open-source variant ($0.025/image)
  • flux-pro - Original pro model ($0.05/image)

Features

  • Polling-based async API: BFL returns a task ID immediately, then we poll for the result
  • Sync and async support: Both _poll_for_result() and _poll_for_result_async() methods
  • OpenAI-compatible parameters: Maps size → width/height, n → num_images, quality=hd → raw=True
  • Reuses shared HTTP clients: Uses _get_httpx_client() and get_async_httpx_client() for connection pooling
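The polling flow described above can be sketched roughly as follows. This is illustrative only: `fetch_status`, the interval, and the attempt cap are assumptions for the sketch, not LiteLLM's actual `_poll_for_result()` signature.

```python
import time

def poll_for_result(fetch_status, polling_interval=0.5, max_attempts=10):
    """Poll a BFL-style async endpoint until the task is Ready.

    `fetch_status` is a hypothetical callable standing in for the HTTP GET
    against the polling URL returned by the initial request; it returns a
    dict such as {"status": "Pending"} or {"status": "Ready", "result": {...}}.
    """
    for _ in range(max_attempts):
        payload = fetch_status()
        if payload.get("status") == "Ready":
            return payload["result"]
        time.sleep(polling_interval)
    raise TimeoutError("BFL task did not complete within the polling budget")
```

An `_poll_for_result_async()` counterpart would follow the same loop with `await asyncio.sleep(...)` in place of `time.sleep(...)`.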

Usage

```python
import litellm

response = litellm.image_generation(
    model="black_forest_labs/flux-pro-1.1",
    prompt="A beautiful sunset over the ocean",
    size="1024x1024",
)
print(response.data[0].url)
```
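The OpenAI-compatible parameter mapping listed under Features can be sketched like this. The helper itself is illustrative, not LiteLLM's actual transformation code; the field names (`width`, `height`, `num_images`, `raw`) follow the public BFL API.

```python
def map_openai_params_to_bfl(size="1024x1024", n=1, quality=None):
    """Sketch of the OpenAI -> BFL parameter mapping (assumed helper name)."""
    width, height = (int(d) for d in size.split("x"))
    params = {"width": width, "height": height, "num_images": n}
    if quality == "hd":
        params["raw"] = True  # quality=hd maps to BFL's raw output mode
    return params
```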

Files Changed

  • litellm/llms/black_forest_labs/image_generation/ - New module with transformation config
  • litellm/llms/black_forest_labs/__init__.py - Export new config
  • litellm/utils.py - Register provider in get_provider_image_generation_config()
  • litellm/images/main.py - Add to HTTP handler provider list
  • model_prices_and_context_window.json - Add model pricing

Documentation Added

  • docs/my-website/docs/providers/black_forest_labs.md - Full provider documentation with SDK and Proxy examples
  • Updated docs/my-website/docs/image_generation.md - Added Black Forest Labs to supported providers list

Tests Added

  • 39 unit tests covering transformation, polling, error handling, and async support

…st Labs

Add native integration for Black Forest Labs image editing models
(flux-kontext-pro, flux-kontext-max, flux-pro-1.0-fill, flux-pro-1.0-expand).

Changes:
- Add BlackForestLabsImageEditConfig for BFL API transformation
- Add BLACK_FOREST_LABS to LlmProviders enum
- Add use_multipart_form_data() to BaseImageEditConfig for JSON vs form-data
- Modify image_edit_handler to support JSON request bodies
- Add comprehensive unit tests
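The `use_multipart_form_data()` hook mentioned above could look roughly like this. Class and method names mirror the PR description, but the bodies are a sketch, not LiteLLM's actual implementation.

```python
class BaseImageEditConfig:
    def use_multipart_form_data(self) -> bool:
        # Default: image-edit providers upload images as multipart/form-data.
        return True

class BlackForestLabsImageEditConfig(BaseImageEditConfig):
    def use_multipart_form_data(self) -> bool:
        # BFL's edit endpoints expect JSON bodies (e.g. base64-encoded images).
        return False

def pick_content_type(config: BaseImageEditConfig) -> str:
    # The image_edit handler branches on the hook when building the request.
    if config.use_multipart_form_data():
        return "multipart/form-data"
    return "application/json"
```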

Closes BerriAI#11401
Add BFL models to model_prices_and_context_window.json with pricing:
- flux-kontext-pro: $0.04/image
- flux-kontext-max: $0.08/image
- flux-pro-1.0-fill: $0.05/image
- flux-pro-1.0-expand: $0.05/image

Add black_forest_labs_models set to __init__.py for model discovery.
Resolve comment conflict in llm_http_handler.py by combining
BFL and Gemini style comments for JSON request handling.
Replace direct httpx.get() calls with _get_httpx_client() to reuse
cached HTTP client, following the pattern used by other providers
(RunwayML, Azure AI OCR, Sagemaker, etc.).
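The client-reuse pattern described above boils down to returning a cached instance instead of constructing a new `httpx.Client` per call. A minimal sketch, with a stub standing in for the real pooled client:

```python
from functools import lru_cache

class _StubHTTPClient:
    """Stand-in for the pooled httpx client; hypothetical, for illustration."""
    pass

@lru_cache(maxsize=1)
def get_cached_httpx_client():
    # Returning a cached instance means every polling call reuses the same
    # connection pool rather than opening a fresh client per request.
    return _StubHTTPClient()
```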
Add native text-to-image generation for Black Forest Labs Flux models
(flux-pro-1.1, flux-pro-1.1-ultra, flux-dev, flux-pro).

- Polling-based async API with sync and async support
- OpenAI-compatible parameter mapping (size, n, quality)
- Reuses shared HTTP clients via _get_httpx_client()
- 39 unit tests added
vercel bot commented Dec 17, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Review | Updated (UTC) |
| ------- | ---------- | ------ | ------------- |
| litellm | Ready | Preview, Comment | Dec 17, 2025 8:27pm |
