Interactive UI
Lofi Mood Mixer
A pixel-art lofi soundscape mixer where users adjust time of day, weather, and ambience to create a personalized listening environment.
01
The Problem
A pocket of stillness, built in a weekend.
Getting laid off twice in one year left me craving calm, and living in New York City meant everything around me was already moving fast. I kept going back to Lofi Girl's 24/7 lofi hip hop radio on YouTube as a way to slow down while working, but I wanted more: rain, fire, crickets, and forest noises layered underneath, and the ability to actually control them. Apps like Noisli handled ambient sound but had no music. Lofi Girl had the stream but no ambient layer. The combination I wanted didn't exist yet.

That gap, plus genuine curiosity about how far AI-assisted development could take me end to end, was the inspiration. I wanted to build something that felt like a small escape, healing both visually and sonically, and I wanted to see how quickly I could actually ship it.
02
Constraints
Constraints as a strategy, shipped in a weekend.
A weekend timeline forced every scope decision: no auth, no backend, no features that couldn't be justified in two days. Saving soundscape presets to localStorage (capped at 3 slots) was the right call since users shouldn't need an account to settle into an atmosphere.
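For illustration, the preset mechanics fit in a few lines. This is a minimal sketch rather than the shipped code; the storage key and the SavedVibe shape are assumptions:

```typescript
// Hypothetical preset storage: 3 slots in localStorage, no account.
type SavedVibe = {
  name: string;
  time: number;    // each slider value, assumed 0–100
  warmth: number;
  weather: number;
  nature: number;
};

const STORAGE_KEY = "lofi-mixer-vibes"; // illustrative key name
const MAX_SLOTS = 3;

function loadVibes(): SavedVibe[] {
  try {
    return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
  } catch {
    return []; // corrupt or missing data falls back to empty slots
  }
}

function saveVibe(vibe: SavedVibe): boolean {
  const vibes = loadVibes();
  if (vibes.length >= MAX_SLOTS) return false; // all three slots used
  localStorage.setItem(STORAGE_KEY, JSON.stringify([...vibes, vibe]));
  return true;
}
```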
The ambient volume ceiling was a balance constraint, not a technical one: all four ambient layers are capped at 50% so the lofi stream stays primary. The more restrictive constraint was the pixel art aesthetic, since every scene element had to evoke the same retro visual nostalgia. On the technical side, the YouTube IFrame API introduced real friction around autoplay policies, coordinating multiple simultaneous players, and responsive behavior across devices.
03
My Approach
Wireframe on iPad, build with Claude.
I sketched a rough layout on my iPad before touching code, which became the visual anchor for the rest of the build.
I drafted an initial prompt with ChatGPT's help, dropped it into Claude, and got the first artifact back quickly. From there it was iterative: describe the intent, inspect what came back, and redirect when things drifted. AI handled a significant portion of code generation, but it needed direction throughout. When it started losing the thread, I steered it back.
The pixel art pivot came mid-build. An earlier version used regular illustration-style graphics that looked fine but felt generic. Switching to pixel art found the right vibe immediately. Once I saw both versions side by side, there was no question.


Early visual exploration



References used to define the cozy, calm, and nature-blended ambience the mixer aims to recreate.
04
Interaction & UI Decisions
Sliders that actually do something.
Four sliders (Time, Warmth, Weather, Nature) each control one dimension of the environment. The visual scene and audio layers both respond to the same values, so moving a slider feels like a single action with a unified result.
Time-based nature sound switching
Daytime forest sounds play while the Time slider sits between sunrise and dusk (10–70%); night crickets take over outside that window. Hearing daybirds at midnight would break the illusion. (Both the nature and warmth thresholds are sketched after this list.)
Warmth audio threshold
Campfire crackling only becomes audible above 70% warmth so cool settings don't bleed warm sounds.
Presets as entry points, but not the only path
Five curated soundscapes let users skip to a mood. The drawer collapses to keep the scene unobstructed.
Saved vibes via localStorage
Capped at three slots, no account required, accessible from the preset drawer.
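Both threshold behaviors reduce to small pure functions. A sketch assuming 0–100 slider scales; the ramp above 70% warmth is my assumption, not necessarily the app's exact curve:

```typescript
// Which nature layer plays, given the Time slider (0–100).
type NatureLayer = "forest-day" | "crickets-night";

function natureLayerFor(time: number): NatureLayer {
  // Forest birds between sunrise and dusk (10–70%); crickets otherwise.
  return time >= 10 && time <= 70 ? "forest-day" : "crickets-night";
}

// Campfire volume, given the Warmth slider (0–100).
function campfireVolume(warmth: number): number {
  if (warmth <= 70) return 0; // inaudible at cool settings
  // Scale the top 30% of the slider into the 0–50% ambient ceiling.
  return Math.round(((warmth - 70) / 30) * 50);
}
```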
05
Implementation Details
One state root, two hooks, four audio players.
A single state object with four values (time, warmth, weather, nature) drives both the visual scene and the audio layers. That shared model is what makes everything feel synchronized rather than independently managed.
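In React terms, that root might look like the sketch below. Field names and defaults are assumptions; the write-up only names the four dimensions:

```typescript
import { useState } from "react";

// One shared state object: sliders write to it, and both the visual
// scene and the audio hooks read from it.
interface MixerState {
  time: number;    // 0–100, sunrise through night
  warmth: number;  // 0–100, cool through campfire
  weather: number; // 0–100, clear through rain
  nature: number;  // 0–100, quiet through full forest
}

function useMixerState() {
  // A single root means one slider change updates scene and audio in
  // the same render pass, which is what keeps them feeling synchronized.
  return useState<MixerState>({ time: 50, warmth: 40, weather: 20, nature: 60 });
}
```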
The lofi stream runs through one hook, and the four ambient YouTube players (rain, nature day, nature night, cozy fire) run through another. Each ambient layer starts muted, with volume set reactively by the slider values. Day and night nature sounds are mutually exclusive, and the campfire layer only appears above 70% warmth.
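Here is a sketch of how one ambient layer could sit on the YouTube IFrame API. It omits ready-state tracking for brevity, and the hook name, element ID, and typings (@types/youtube) are assumptions:

```typescript
import { useEffect, useRef } from "react";

// Hypothetical hook for one ambient layer. Assumes the IFrame API
// script is already loaded so the global YT namespace exists.
function useAmbientLayer(elementId: string, videoId: string, volume: number) {
  const playerRef = useRef<YT.Player | null>(null);

  useEffect(() => {
    playerRef.current = new YT.Player(elementId, {
      videoId,
      playerVars: { loop: 1, playlist: videoId }, // loop a single video
      events: {
        // Start muted, then play: muted playback satisfies browser
        // autoplay policies that block audible autoplay.
        onReady: (e) => {
          e.target.mute();
          e.target.playVideo();
        },
      },
    });
    return () => playerRef.current?.destroy();
  }, [elementId, videoId]);

  useEffect(() => {
    const player = playerRef.current;
    if (!player) return;
    if (volume > 0) {
      player.unMute();
      player.setVolume(Math.min(volume, 50)); // ambient ceiling stays at 50%
    } else {
      player.mute(); // below its threshold, the layer goes silent
    }
  }, [volume]);
}
```

Starting muted is the standard way around browser autoplay policies: playback is allowed to begin silently, and sound arrives only after the user interacts with a slider.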
Final application in action, shifting between presets and fine-tuning the scene’s ambience in real time.
06
Outcome
Shipped in two days.
The app launched on Vercel, fully functional across desktop and mobile. It was intentionally scoped to the two-day mark, and it does exactly what it set out to do.
More than the app itself, the outcome was a clearer picture of how AI fits into an actual build. Features that would have taken weeks came together in days, but judgment calls, visual redirects, and craft decisions still required a human.
07
Reflections
AI got me there faster, orchestration was still crucial.
AI dramatically accelerates execution, but it doesn't replace judgment. Claude could generate the SVG scene logic, wire up slider state, and scaffold the audio hooks, but it needed constant steering. When it lost the thread of the visual language or product intent, I redirected. That back-and-forth is the real workflow, and getting fluent in it was the actual learning.
That said, a few process changes would have made the build cleaner.
A CLAUDE.md from the start
A structured reference file would have prevented context drift across a long single-session conversation; a sketch of what it might have contained follows this list.
Scoped sub-sessions
One long thread was harder to manage than shorter, focused sessions. Separate sessions for the visual scene, audio, controls, and mobile would have kept each context tighter and more useful.
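A hypothetical sketch of that reference file. The structure is mine, but the rules are the ones the finished app actually follows:

```markdown
# CLAUDE.md: Lofi Mood Mixer (hypothetical sketch)

## Visual language
- Pixel art only; every scene element shares the same retro palette.

## Audio rules
- The lofi stream is primary; ambient layers never exceed 50% volume.
- Nature: forest sounds while Time is 10–70%, night crickets otherwise.
- Campfire is audible only above 70% warmth.

## Architecture
- One state object (time, warmth, weather, nature) drives scene and audio.
- No backend, no auth; presets live in localStorage, capped at 3 slots.
```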
