Building Playground Composer with Apple’s Image Playground Framework
Apple’s Image Playground framework gives iOS apps a system-native way to create images from user concepts. For a portfolio project, that is much more interesting than building another generic image generation wrapper: the engineering challenge becomes how to make image creation feel like part of the operating system, while still keeping the app architecture testable and product-shaped.
This post walks through Playground Composer, a small SwiftUI app that demonstrates three integration paths:
- A SwiftUI system sheet using imagePlaygroundSheet(...)
- Programmatic generation using ImageCreator
- A UIKit bridge using ImagePlaygroundViewController
The app also includes device capability checks, local result persistence, unsupported-device fallbacks, and focused unit tests.
Product Goal
Playground Composer is a compact sample app for exploring system-native image generation workflows on iOS. The goal is not to train a model or build a full creative suite. Instead, the goal is to show how Apple’s official framework can be surfaced in realistic iOS workflows.
Playground Composer has three tabs:
- Compose: a primary SwiftUI workspace that opens Apple’s Image Playground sheet.
- Create: a programmatic generation flow powered by ImageCreator.
- Demos: reusable generation entry points embedded in avatar, sticker, and note illustration scenarios.
That structure intentionally separates system UI integration, async API usage, and reusable product embedding.
Core Data Model
The app uses a small request model to keep prompt validation and style selection explicit:
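A minimal sketch of that request model; the type and property names here are illustrative rather than the project's exact ones, and the style enum is expanded in the next section:

```swift
import Foundation

// Minimal stand-in for the app's style enum; the full wrapper is described below.
enum GenerationStyle: String {
    case animation, illustration, sketch
}

/// Request model that keeps prompt validation and style selection explicit.
struct ImageRequest {
    var prompt: String
    var style: GenerationStyle

    /// Prompt with surrounding whitespace removed; the basis for validation.
    var trimmedPrompt: String {
        prompt.trimmingCharacters(in: .whitespacesAndNewlines)
    }

    /// A request is valid only when the trimmed prompt is non-empty.
    var isValid: Bool {
        !trimmedPrompt.isEmpty
    }
}
```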
The style enum wraps Apple’s generation styles in app-facing labels:
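A sketch of that wrapper. ImagePlaygroundStyle and its animation, illustration, and sketch values are Apple's API; the enum name, labels, and symbol choices are illustrative:

```swift
import ImagePlayground

/// App-facing wrapper around Apple's generation styles.
enum GenerationStyle: String, CaseIterable, Identifiable {
    case animation, illustration, sketch

    /// Stable identifiers ("animation", "illustration", "sketch") used by tests.
    var id: String { rawValue }

    /// UI copy shown in the style picker.
    var label: String {
        switch self {
        case .animation: "Animation"
        case .illustration: "Illustration"
        case .sketch: "Sketch"
        }
    }

    /// SF Symbol shown alongside the label.
    var symbolName: String {
        switch self {
        case .animation: "sparkles"
        case .illustration: "paintbrush"
        case .sketch: "pencil.and.outline"
        }
    }

    /// Value handed to the Image Playground APIs.
    var playgroundStyle: ImagePlaygroundStyle {
        switch self {
        case .animation: .animation
        case .illustration: .illustration
        case .sketch: .sketch
        }
    }
}
```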
This keeps UI copy, symbols, subtitles, and Apple API values in one place. It also makes tests simple because the app has stable identifiers like "animation", "illustration", and "sketch".
Building Concepts
Image Playground expects concepts as [ImagePlaygroundConcept]. The current app supports text concepts:
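A sketch of that boundary; ImagePlaygroundConcept.text(_:) is the real API, while the builder name is illustrative:

```swift
import Foundation
import ImagePlayground

/// Single boundary for turning app input into playground concepts.
enum ConceptBuilder {
    static func concepts(from prompt: String) -> [ImagePlaygroundConcept] {
        let trimmed = prompt.trimmingCharacters(in: .whitespacesAndNewlines)
        guard !trimmed.isEmpty else { return [] }
        // Text is the only concept type used today; image or drawing concepts
        // could be appended here later without touching the UI layer.
        return [.text(trimmed)]
    }
}
```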
This is deliberately small, but the boundary is useful. Later, the same builder could add extracted text, image concepts, or PencilKit drawings without changing the UI layer.
SwiftUI System Sheet Integration
The reusable GenerationLauncher component owns the system sheet integration. It takes a prompt binding, selected style binding, and shared image library:
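A sketch of the launcher's shape; the binding-driven computed request is the point, and every name besides SwiftUI's is illustrative:

```swift
import SwiftUI

struct GenerationLauncher: View {
    @Binding var prompt: String
    @Binding var selectedStyle: GenerationStyle
    let library: GeneratedImageLibrary   // shared persistence, assumed type

    @State private var isPlaygroundPresented = false

    // Computed on every read, so it always reflects the latest bound input.
    private var request: ImageRequest {
        ImageRequest(prompt: prompt, style: selectedStyle)
    }

    var body: some View {
        // The generate button and the imagePlaygroundSheet modifier attach here.
        EmptyView()
    }
}
```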
request is a computed property because prompt and selectedStyle are bindings. Every time the UI reads request, it reflects the latest input.
The button opens the official Image Playground sheet:
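A sketch of that wiring. The imagePlaygroundSheet modifier and the supportsImagePlayground environment value are Apple's API; the surrounding names are illustrative:

```swift
import Foundation
import SwiftUI
import ImagePlayground

struct GenerationButton: View {
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    @State private var isPlaygroundPresented = false
    let request: ImageRequest
    let onResult: (URL) -> Void

    var body: some View {
        Button("Create with Image Playground") {
            isPlaygroundPresented = true
        }
        .disabled(!supportsImagePlayground || !request.isValid)
        .imagePlaygroundSheet(
            isPresented: $isPlaygroundPresented,
            concepts: [.text(request.trimmedPrompt)]
        ) { url in
            // The URL is temporary; copy the file into the sandbox right away.
            onResult(url)
        }
    }
}
```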
Two details matter here:
- The launcher checks supportsImagePlayground before enabling the button.
- The completion handler receives a temporary URL, so the app copies the image into its own sandbox before displaying it later.
Programmatic Generation with ImageCreator
The Create tab demonstrates the lower-level API:
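At its simplest, the lower-level path looks like this (a sketch; the prompt is illustrative):

```swift
import ImagePlayground

func generateDirectly() async throws {
    // ImageCreator's initializer is async and throwing: it fails on
    // unsupported devices.
    let creator = try await ImageCreator()

    // images(for:style:limit:) returns an async sequence of results.
    for try await created in creator.images(
        for: [.text("a paper lantern floating over a lake")],
        style: .animation,
        limit: 1
    ) {
        let cgImage = created.cgImage
        _ = cgImage   // hand off to the UI layer
    }
}
```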
The production implementation wraps ImageCreator:
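A sketch of that wrapper; ImageCreator and images(for:style:limit:) are Apple's API, while the service name is illustrative:

```swift
import CoreGraphics
import ImagePlayground

struct ImageCreatorService {
    /// Collects up to `limit` generated images for the given concepts.
    func generate(
        concepts: [ImagePlaygroundConcept],
        style: ImagePlaygroundStyle,
        limit: Int = 1
    ) async throws -> [CGImage] {
        let creator = try await ImageCreator()
        var results: [CGImage] = []
        for try await created in creator.images(for: concepts, style: style, limit: limit) {
            results.append(created.cgImage)
            // Cooperative cancellation: a cancel button upstream stops the loop.
            try Task.checkCancellation()
        }
        return results
    }
}
```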
The important part is that images(for:style:limit:) returns an async sequence. The app consumes that sequence with for try await, which gives a natural place to support cancellation, errors, and partial result handling in a larger version.
The UI model tracks a small state machine:
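A sketch of that state machine (case names are illustrative):

```swift
/// Drives the progress banner, failure message, completion count, and
/// cancel button in the Create tab.
enum GenerationState: Equatable {
    case idle
    case generating(completed: Int, requested: Int)
    case finished(count: Int)
    case failed(message: String)

    /// The cancel button is shown only while generating.
    var isGenerating: Bool {
        if case .generating = self { return true }
        return false
    }
}
```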
That state powers the progress banner, failure message, completion count, and cancel button.
UIKit Bridge
Even in a SwiftUI-first app, UIKit interop is still valuable. The Demos tab includes a UIViewControllerRepresentable wrapper around ImagePlaygroundViewController:
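A sketch of the bridge. ImagePlaygroundViewController, its concepts property, and its delegate are Apple's API; the wrapper name is illustrative:

```swift
import SwiftUI
import ImagePlayground

struct PlaygroundControllerView: UIViewControllerRepresentable {
    let concepts: [ImagePlaygroundConcept]
    let onImage: (URL) -> Void

    // Coordinator, the delegate object, is covered next.
    func makeCoordinator() -> Coordinator {
        Coordinator(onImage: onImage)
    }

    func makeUIViewController(context: Context) -> ImagePlaygroundViewController {
        let controller = ImagePlaygroundViewController()
        controller.concepts = concepts
        controller.delegate = context.coordinator
        return controller
    }

    func updateUIViewController(_ controller: ImagePlaygroundViewController, context: Context) {
        controller.concepts = concepts
    }
}
```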
The coordinator receives the delegate callbacks:
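A sketch of the coordinator, here extending a representable wrapper assumed to be named PlaygroundControllerView. The delegate methods follow ImagePlaygroundViewController.Delegate; verify the exact signatures against Apple's current documentation:

```swift
import ImagePlayground
import UIKit

extension PlaygroundControllerView {
    final class Coordinator: NSObject, ImagePlaygroundViewController.Delegate {
        let onImage: (URL) -> Void

        init(onImage: @escaping (URL) -> Void) {
            self.onImage = onImage
        }

        func imagePlaygroundViewController(
            _ imagePlaygroundViewController: ImagePlaygroundViewController,
            didCreateImageAt imageURL: URL
        ) {
            // Temporary URL: hand it off so the app can copy it immediately.
            onImage(imageURL)
        }

        func imagePlaygroundViewControllerDidCancel(
            _ imagePlaygroundViewController: ImagePlaygroundViewController
        ) {
            // Dismissal is handled by the presenting SwiftUI view.
        }
    }
}
```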
This shows that the app can integrate the same framework from both SwiftUI and UIKit surfaces.
Persisting Results
Generated images from the system sheet arrive as temporary URLs. The app persists them through ImageStore:
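A sketch of ImageStore built on FileManager (the method names and directory layout are illustrative):

```swift
import Foundation

/// Copies temporary result URLs into app-owned storage.
struct ImageStore {
    let directory: URL

    init(directory: URL? = nil) throws {
        if let directory {
            self.directory = directory
        } else {
            let base = try FileManager.default.url(
                for: .applicationSupportDirectory,
                in: .userDomainMask,
                appropriateFor: nil,
                create: true
            )
            self.directory = base.appendingPathComponent("GeneratedImages", isDirectory: true)
        }
        try FileManager.default.createDirectory(
            at: self.directory,
            withIntermediateDirectories: true
        )
    }

    /// Copies the system-provided temporary file into the store and
    /// returns the persistent URL.
    @discardableResult
    func persist(temporaryURL: URL) throws -> URL {
        var destination = directory.appendingPathComponent(UUID().uuidString)
        if !temporaryURL.pathExtension.isEmpty {
            destination = destination.appendingPathExtension(temporaryURL.pathExtension)
        }
        try FileManager.default.copyItem(at: temporaryURL, to: destination)
        return destination
    }
}
```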
The app stores only generated image files and local metadata. It does not upload contacts, photo library contents, or personal data. The demo is intentionally privacy-friendly by default.
Unsupported Device Fallback
Image Playground availability depends on OS, hardware, and Apple Intelligence support. The app checks:
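A sketch of the gate. The supportsImagePlayground environment value is Apple's API (UIKit call sites can check ImagePlaygroundViewController.isAvailable instead); the gate view name is illustrative:

```swift
import SwiftUI
import ImagePlayground

struct CapabilityGate<Supported: View, Fallback: View>: View {
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground

    @ViewBuilder let supported: () -> Supported
    @ViewBuilder let fallback: () -> Fallback

    var body: some View {
        if supportsImagePlayground {
            supported()
        } else {
            // Local sample-result path keeps the demo usable on simulators.
            fallback()
        }
    }
}
```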
When unsupported, the app disables real generation and shows a local sample result path. This makes the demo usable on simulators while still keeping real generation behind Apple’s capability check.
That distinction matters for portfolio work: a demo should not break just because the reviewer opens it on a simulator.
Keyboard Interaction Polish
The app also handles a common usability issue: keyboard dismissal. Each scrollable tab opts into interactive keyboard dismissal:
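scrollDismissesKeyboard(.interactively) is the standard SwiftUI modifier for this; a minimal sketch with an assumed tab name:

```swift
import SwiftUI

struct ComposeTab: View {
    var body: some View {
        ScrollView {
            // prompt field, style picker, results grid
        }
        // Dragging the scroll view pushes the keyboard down interactively.
        .scrollDismissesKeyboard(.interactively)
    }
}
```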
It also adds a keyboard toolbar with a Done button:
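A sketch using FocusState and a keyboard-placement toolbar (the field names are illustrative):

```swift
import SwiftUI

struct PromptField: View {
    @Binding var prompt: String
    @FocusState private var isPromptFocused: Bool

    var body: some View {
        TextField("Describe an image", text: $prompt)
            .focused($isPromptFocused)
            .toolbar {
                ToolbarItemGroup(placement: .keyboard) {
                    Spacer()
                    Button("Done") { isPromptFocused = false }
                }
            }
    }
}
```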
This is not the headline feature, but it makes the app feel much more complete during a recorded demo.
Testing Strategy
The test suite focuses on app-owned behavior:
- Prompt trimming and request validation
- Stable style identifiers
- Empty prompt handling in the concept builder
- Copying temporary result URLs into persistent storage
Example:
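One such test, sketched with the Swift Testing framework (the test names are illustrative):

```swift
import Testing

struct ImageRequestTests {
    @Test func trimsWhitespaceFromPrompt() {
        let request = ImageRequest(prompt: "  a fox in the snow  ", style: .sketch)
        #expect(request.trimmedPrompt == "a fox in the snow")
    }

    @Test func rejectsWhitespaceOnlyPrompts() {
        #expect(!ImageRequest(prompt: "   ", style: .animation).isValid)
    }
}
```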
The official Image Playground UI is not unit-tested directly. Instead, the app keeps its own boundaries testable and leaves system framework behavior to Apple.
What This Project Demonstrates
Playground Composer is intentionally compact, but it demonstrates several production-relevant iOS skills:
- SwiftUI composition
- Official Apple framework integration
- UIKit interop
- Async sequence consumption
- Capability-gated UI
- Local file persistence
- Keyboard and simulator demo polish
- Testable app-owned logic
The project focuses on native platform integration rather than treating image generation as a standalone API call. It shows how generation can become a native product surface, not just a backend feature with a text box.