Case Study

VideoCue

1776 had 75+ projection cues, three collaborators who needed the same picture, and a spreadsheet that couldn't carry the weight. I built a native Mac app from scratch.

Designer, engineer, projection operator · 2026

75+
Projection cues managed
1
Original Mac app, built from scratch
3
Disciplines kept aligned

The Problem

Gulf Coast Symphony's production of 1776: The Musical needed projection that could carry visual weight without ever stepping on the music or the staging. I started where any designer starts: a cue spreadsheet. Scene, cue area, projection content, color palette, typography, painting reference, technical notes, status. Workable, until the cue count tripled and the collaborator count tripled with it. By the time I had 75+ cues that the director, the maestro, and the lighting designer all needed to react to, the spreadsheet stopped being a tool and started being a tax.

The Solution

I built VideoCue, a native macOS cue-based playback app, from scratch in Swift and SwiftUI. Production-specific, no general-purpose ambitions, just the tool 1776 actually needed.

VideoCue running 1776 in production

The thing I kept hitting in existing show-control software was the gap between two kinds of tools. On one side, clip-based players where every video is its own tile you fire by hand. Fine for simple cue lists, painful once you need images, overlays, and precise timing between moments. On the other, traditional NLEs and compositors that produce a finished video but don’t let an operator pause, wait, loop, or riff with the room.

VideoCue sits in the middle: one global timeline that plays continuously, with named cues planted at specific times that can pause, stop-and-wait, play-through, or cross-fade into the next section. The operator hits GO to advance. The show plays itself between cues and waits for the operator at every decision point, so the same project runs identically every night but still bends to what’s happening in the room.
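
To make that concrete, here’s a minimal Swift sketch of the shape of that cue model. All names here (CueAction, Cue, CueEngine, go()) are illustrative, not VideoCue’s actual API.

```swift
import Foundation

// Illustrative sketch of the cue model described above; names are
// hypothetical, not VideoCue's real types.
enum CueAction {
    case pause                               // freeze the frame, wait for GO
    case stopAndWait                         // stop playback, hold until GO
    case playThrough                         // marker only; playback continues
    case loop                                // repeat a section until GO
    case crossFade(duration: TimeInterval)   // dissolve into the next section
}

struct Cue: Identifiable {
    let id = UUID()
    var name: String
    var time: TimeInterval   // position on the one global timeline
    var action: CueAction
}

final class CueEngine {
    private var cues: [Cue]
    private(set) var nextIndex = 0

    init(cues: [Cue]) {
        // Sort once so GO always advances in show order.
        self.cues = cues.sorted { $0.time < $1.time }
    }

    /// Called on every GO press: returns the cue to act on, or nil at show end.
    func go() -> Cue? {
        guard nextIndex < cues.count else { return nil }
        defer { nextIndex += 1 }
        return cues[nextIndex]
    }
}
```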

Before: the spreadsheet

The original cue spreadsheet. Eight columns, every cue, every scene.

The spreadsheet worked for the design conversation. It did not work for the show. I wasn’t going to pause a performance to scroll through eight columns looking for the right transition.

What it does

Timeline + cues. One global timeline of video and image clips with cross-fade transitions. Cues live at specific timecodes (pause, stop-and-wait, play-through, loop), fired from the keyboard, the cue list, or configurable key bindings. The cue list round-trips as CSV for sharing between projects.
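
A standalone sketch of what that CSV round-trip could look like. The column set and the CueRow type are assumptions, not VideoCue’s real format, and it assumes cue names contain no commas.

```swift
import Foundation

// Hypothetical cue-list CSV round-trip; columns are assumptions.
struct CueRow {
    var name: String
    var time: TimeInterval
    var action: String   // "pause", "stopAndWait", "playThrough", "loop", ...
}

func writeCSV(_ rows: [CueRow], to url: URL) throws {
    var lines = ["name,time,action"]
    lines += rows.map { "\($0.name),\($0.time),\($0.action)" }
    try lines.joined(separator: "\n").write(to: url, atomically: true, encoding: .utf8)
}

func readCSV(from url: URL) throws -> [CueRow] {
    let text = try String(contentsOf: url, encoding: .utf8)
    return text.split(separator: "\n").dropFirst().compactMap { line in
        let fields = line.split(separator: ",", maxSplits: 2).map(String.init)
        guard fields.count == 3, let time = TimeInterval(fields[1]) else { return nil }
        return CueRow(name: fields[0], time: time, action: fields[2])
    }
}
```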

Overlay tracks. Up to ten transparent layers composited on top of the main timeline. Each overlay has its own start/end, fades, opacity, blend mode, position, and size, fully independent of the cue system. PNG, MP4, and MOV overlays all sync to the global clock.
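
One way to model a layer like that, sketched in Swift with illustrative names. The fade math is my reconstruction of the behavior described, not VideoCue’s code.

```swift
import SwiftUI

// Sketch of one overlay layer; property names are hypothetical.
struct Overlay {
    var start: TimeInterval
    var end: TimeInterval
    var fadeIn: TimeInterval
    var fadeOut: TimeInterval
    var baseOpacity: Double
    var blendMode: BlendMode   // SwiftUI blend mode, e.g. .screen
    var position: CGPoint      // in output-space points
    var size: CGSize

    /// Opacity at a given global-clock time, ramping through the fades.
    func opacity(at t: TimeInterval) -> Double {
        guard t >= start, t <= end else { return 0 }
        let up = fadeIn > 0 ? min(1, (t - start) / fadeIn) : 1
        let down = fadeOut > 0 ? min(1, (end - t) / fadeOut) : 1
        return baseOpacity * min(up, down)
    }
}
```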

Preview + output. A live preview inside the control window and a separate output window that goes full-screen on the projector. A global output transform lets you shrink and reposition the whole composition without re-authoring the project.
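
A rough SwiftUI sketch of the output-transform idea, under assumed names (OutputTransform, OutputWindow): scale and offset are applied once, to the whole composited frame, so nothing in the project itself moves.

```swift
import SwiftUI

// Hypothetical global output transform: reposition the full composition
// on the projector without re-authoring the project.
struct OutputTransform {
    var scale: CGFloat = 1.0
    var offset: CGSize = .zero
}

struct OutputWindow: View {
    let composition: AnyView   // the fully composited frame (AnyView keeps the sketch short)
    let transform: OutputTransform

    var body: some View {
        composition
            .scaleEffect(transform.scale)   // shrink or grow the whole picture
            .offset(transform.offset)       // nudge it on the projection surface
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .background(Color.black)        // letterbox in black
            .ignoresSafeArea()
    }
}
```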

Build, flatten, perform. The actual show workflow is two passes. First pass, you build the show like a compositor: every layer, every transition, every overlay, with cues placed where they need to fire. Then you export the whole thing as a single flattened movie and reimport it into VideoCue with the cues intact. In performance, you’re playing back one processed file, not compositing live. The cues fire identically, but your machine is doing decode-and-display, not decode-and-composite. No frame drops because the room got hot, no surprises from a memory spike. The design lives in the project file. The show lives in the flattened export.
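
The flatten step maps naturally onto AVFoundation’s export pipeline. A minimal sketch, assuming the timeline has already been assembled into an AVMutableComposition (with an AVVideoComposition carrying the cross-fades and overlays); the function name and signature are mine, not VideoCue’s.

```swift
import AVFoundation

// Sketch of the flatten pass: one decode-composite-encode run, offline,
// so performance playback is plain decode-and-display.
func flatten(_ composition: AVComposition,
             video: AVVideoComposition?,
             to url: URL) async throws {
    guard let session = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality
    ) else { throw CocoaError(.fileWriteUnknown) }

    session.outputURL = url
    session.outputFileType = .mov
    session.videoComposition = video
    await session.export()
    if let error = session.error { throw error }
}
```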

Project hygiene. Media Consolidate copies every referenced file into a Media/ folder next to the project and rewrites paths relative to the project file, so projects are portable across machines. Undo/redo covers cue, overlay, and clip edits.
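
Consolidation is mostly FileManager work. A sketch under assumed names: copy each referenced file into Media/ next to the project, then store the relative path in place of the absolute one.

```swift
import Foundation

// Hypothetical consolidate step: returns the rewritten project-relative path.
func consolidate(_ mediaURL: URL, projectURL: URL) throws -> String {
    let fm = FileManager.default
    let mediaDir = projectURL
        .deletingLastPathComponent()
        .appendingPathComponent("Media", isDirectory: true)
    try fm.createDirectory(at: mediaDir, withIntermediateDirectories: true)

    let destination = mediaDir.appendingPathComponent(mediaURL.lastPathComponent)
    if !fm.fileExists(atPath: destination.path) {
        try fm.copyItem(at: mediaURL, to: destination)
    }
    // Store this in the project file instead of the absolute path.
    return "Media/" + mediaURL.lastPathComponent
}
```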

The technical approach

Swift 5.9, SwiftUI for the UI, AVFoundation for video and audio, macOS 14+. Single executable. No license server, no online check, nothing the show didn’t need.

The hardest decision was scope. I had two weeks. Every feature I cut was a feature the show didn’t need in order to open. Every feature I built had to be load-bearing on opening night. Things that made it: the cue system, overlays, full-screen output, MOV/MP4 export, CSV round-trip, undo/redo. Things that didn’t: server mode for remote operation, a plugin API, network sync, OSC. None of those would have made 1776 run any better.

Opening night, projection booth. Barbara B. Mann Performing Arts Hall.

The result

75+ cues ran identically every performance; the show opened on time and stayed shipped. The director, maestro, and lighting designer reacted to the same picture every night. The spreadsheet got archived. VideoCue is now the tool I’ll bring to the next production.

The bigger lesson, the one I keep coming back to: generalists who can both design the thing and build the system that runs the thing have a real edge right now. The renders are what the audience saw. The scaffolding is what made opening night possible.

Tools & Stack

Swift 5.9 + SwiftUI
Native macOS, single executable
AVFoundation
Video and audio playback, frame-accurate cueing
Build, flatten, perform
Two-pass workflow: composite in app, export as flat MOV, perform from the export
No license server, no telemetry
Just the tool the show needed