How It Works

Landing Page: Web Activity

Press start, then sit back and relax while Webarmonium transforms real-time web activity into generative music and visuals through a direct correlation system. Three data sources are polled continuously: Wikipedia edits (every 5s), HackerNews posts (every 10s), and GitHub pushes (every 60s). Each source feeds metrics into a dynamic normalization engine that tracks historical min/max values rather than fixed thresholds, so the system adapts to actual data patterns over time and keeps the musical output as varied as possible.
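A minimal sketch of that rolling-range idea, assuming hypothetical names (DynamicNormalizer, observe, normalize) rather than the project's actual engine:

```typescript
// Rolling min/max normalization (hypothetical names, not the actual engine).
class DynamicNormalizer {
  private min = Number.POSITIVE_INFINITY;
  private max = Number.NEGATIVE_INFINITY;

  /** Record a new raw sample and widen the historical range if needed. */
  observe(value: number): void {
    this.min = Math.min(this.min, value);
    this.max = Math.max(this.max, value);
  }

  /** Map a raw value into [0, 1] relative to everything seen so far. */
  normalize(value: number): number {
    const range = this.max - this.min;
    return range === 0 ? 0.5 : (value - this.min) / range;
  }
}

// Example: Wikipedia edit-rate samples arriving every 5 seconds.
const editRate = new DynamicNormalizer();
[12, 30, 7, 55].forEach((v) => editRate.observe(v));
console.log(editRate.normalize(30)); // ~0.48, relative to the observed 7-55 range
```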

Monitored parameters include edit rate, velocity (rate of change), edit size, new article count, post frequency, upvote averages, comment counts, push frequency, repository creates, and deletes. These metrics generate virtual gestures that feed a deterministic composition system: every musical decision derives directly from input data. The CompositionEngine selects musical forms based on energy level (low energy → contemplative forms like theme_and_variations, high energy → energetic forms like sonata). PhraseMorphology transforms gesture acceleration into rhythm variation (positive acceleration = rushing, negative = dragging) and curvature into syncopation. The HarmonicEngine selects progressions by complexity level, ensuring tonal coherence through voice leading.
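As a rough illustration of how such a deterministic mapping can look (the threshold, type names, and scaling below are assumptions, not the real CompositionEngine or PhraseMorphology code):

```typescript
// Hypothetical energy → form and acceleration → timing-feel mappings.
type Form = "theme_and_variations" | "sonata";

function selectForm(energy: number): Form {
  // Low energy favours the contemplative form, high energy the energetic one.
  return energy < 0.5 ? "theme_and_variations" : "sonata";
}

function timingOffset(acceleration: number): number {
  // Positive acceleration pushes notes ahead of the beat (rushing),
  // negative acceleration pulls them behind (dragging). Scale is illustrative.
  return acceleration * 0.05; // seconds of displacement
}

console.log(selectForm(0.8));    // "sonata"
console.log(timingOffset(-1.2)); // ≈ -0.06 → dragging
```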

Velocity (the ± value in the dashboard) represents the rate of change of each metric: positive values indicate increasing activity, negative values indicate decreasing activity. Only sources with significant velocity trigger musical events, preventing sonic overload and mimicking natural human interaction patterns.
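A hedged sketch of that velocity gate, with an invented SIGNIFICANCE_THRESHOLD constant standing in for whatever cutoff the app actually uses:

```typescript
// Velocity as rate of change, gated by a significance threshold before any audio fires.
// The threshold value is an assumption, not the project's actual cutoff.
const SIGNIFICANCE_THRESHOLD = 0.2; // normalized units per second (assumed)

function velocity(previous: number, current: number, dtSeconds: number): number {
  return (current - previous) / dtSeconds;
}

function shouldTrigger(v: number): boolean {
  // Only sources whose activity is changing noticeably produce musical events.
  return Math.abs(v) > SIGNIFICANCE_THRESHOLD;
}

const v = velocity(0.4, 0.9, 5); // Wikipedia metrics arrive every 5s
console.log(v, shouldTrigger(v)); // 0.1 false → this slow drift stays silent
```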

Audio-visual coherence: Each source has a distinct audio identity. Wikipedia plays in the bass tessitura (110-220Hz, A2-A3) with sawtooth waves (rich harmonics), HackerNews occupies the tenor range (196-392Hz, G3-G4) with pure sine tones, and GitHub sits in the soprano range (523-1047Hz, C5-C6) with hollow triangle waves. Cursor positions are calculated with a golden ratio distribution system: each cursor's X and Y coordinates are derived independently from gesture counters and multiple metrics. Visual pulses and particle flows respond to the same gesture data that drives the audio.
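The per-source timbres could be set up along these lines with Tone.js; the ranges and oscillator types come from the text above, while the synth configuration and the golden-ratio formula are illustrative assumptions:

```typescript
import * as Tone from "tone";

// Per-source register and timbre (ranges from the text; synth setup is illustrative).
const voices = {
  wikipedia:  { type: "sawtooth" as const, range: ["A2", "A3"] }, // 110-220Hz, rich harmonics
  hackernews: { type: "sine" as const,     range: ["G3", "G4"] }, // 196-392Hz, pure tone
  github:     { type: "triangle" as const, range: ["C5", "C6"] }, // 523-1047Hz, hollow timbre
};

function makeVoice(source: keyof typeof voices): Tone.Synth {
  return new Tone.Synth({ oscillator: { type: voices[source].type } }).toDestination();
}

// Example: a Wikipedia event plays an eighth note in its bass tessitura.
makeVoice("wikipedia").triggerAttackRelease("A2", "8n");

// Golden-ratio placement: successive counter values land at well-spread positions in [0, 1).
// The app's exact formula is not documented; this is the standard low-discrepancy form.
const PHI = (1 + Math.sqrt(5)) / 2;
function goldenPosition(counter: number): number {
  return (counter * (PHI - 1)) % 1;
}
```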

Background Composition Layer

Background music emerges organically in response to gestures: every parameter that feeds the composition algorithm is derived from user interactions in the room. No artificial or mechanical source is used, only the room's activity metrics. Several voices are driven by this data, providing a rich, modulating, polyphonic texture that adapts to user gestures and uses them as material for the composition.

Musical scheduler: All notes are clock-synchronized using Tone.js Transport with 25ms precision. Remote events snap to sixteenth-note boundaries, ensuring global synchronization despite network latency.
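A minimal sketch of sixteenth-note quantization on the shared Tone.js Transport, assuming the 25ms figure refers to the scheduler look-ahead (the actual setup may differ):

```typescript
import * as Tone from "tone";

// Snap a remote event to the next sixteenth-note boundary on the shared Transport.
// Assumes the 25ms figure corresponds to the audio scheduler's look-ahead window.
Tone.getContext().lookAhead = 0.025;
Tone.Transport.bpm.value = 100;
Tone.Transport.start();

const synth = new Tone.Synth().toDestination();

function playRemoteNote(note: string): void {
  // "@16n" quantizes the scheduled time to the next sixteenth note, so notes arriving
  // with different network delays still land on the same rhythmic grid everywhere.
  Tone.Transport.scheduleOnce((time) => {
    synth.triggerAttackRelease(note, "16n", time);
  }, "@16n");
}

playRemoteNote("G4");
```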

Collaborative Rooms: Real Users

In collaborative rooms, you can create music through gestures: tap for percussive notes, hold a tap for sustained notes, and drag for melodic phrases.
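A possible gesture classifier along these lines, with made-up duration and distance thresholds:

```typescript
// Illustrative gesture classification (thresholds are assumptions, not the app's actual values).
type Gesture = "percussive" | "sustained" | "melodic";

interface PointerTrace {
  durationMs: number; // time between pointer-down and pointer-up
  distancePx: number; // total distance travelled while pressed
}

function classify(trace: PointerTrace): Gesture {
  if (trace.distancePx > 20) return "melodic";    // drag → melodic phrase
  if (trace.durationMs > 250) return "sustained"; // held tap → sustained note
  return "percussive";                            // quick tap → percussive note
}

console.log(classify({ durationMs: 80, distancePx: 3 }));    // "percussive"
console.log(classify({ durationMs: 400, distancePx: 5 }));   // "sustained"
console.log(classify({ durationMs: 300, distancePx: 120 })); // "melodic"
```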

Multi-user composition: Up to 4 users can create polyphonic compositions together, with background accompaniment. The system validates new voices against counterpoint rules to ensure musical coherence. When only one real user is present, two virtual users (driven by web metrics) automatically join in.
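As one example of a counterpoint check (the app's actual rule set is not documented here), a parallel-fifths/octaves test between two voices might look like this:

```typescript
// Classic counterpoint rule as an illustration: reject parallel fifths/octaves between two voices.
function interval(a: number, b: number): number {
  return Math.abs(a - b) % 12; // interval class in semitones (MIDI note numbers)
}

function hasParallelPerfect(voiceA: number[], voiceB: number[]): boolean {
  for (let i = 1; i < Math.min(voiceA.length, voiceB.length); i++) {
    const prev = interval(voiceA[i - 1], voiceB[i - 1]);
    const curr = interval(voiceA[i], voiceB[i]);
    const bothMoved = voiceA[i] !== voiceA[i - 1] && voiceB[i] !== voiceB[i - 1];
    if (bothMoved && prev === curr && (curr === 7 || curr === 0)) return true;
  }
  return false;
}

// C4/G4 moving to D4/A4 is a parallel fifth → flagged.
console.log(hasParallelPerfect([60, 62], [67, 69])); // true
```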

Environmental memory: The room learns from gesture patterns over time, with 24-hour retention. Initial gestures have high influence, while mature rooms evolve more slowly. Pattern creation and evolution are derived from gesture characteristics: high-intensity gestures at unique positions create new patterns, while moderate-intensity gestures evolve existing dormant patterns, producing a deterministic “room personality” that shapes the composition.
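A sketch of how such a memory could work, with all thresholds and the influence formula invented for illustration:

```typescript
// Room memory sketch: 24-hour retention, new patterns from strong gestures at unfamiliar
// positions, evolution of existing patterns otherwise. All numeric values are assumed.
interface Pattern { x: number; y: number; weight: number; updatedAt: number; }

const RETENTION_MS = 24 * 60 * 60 * 1000;
const patterns: Pattern[] = [];

function onGesture(x: number, y: number, intensity: number, now = Date.now()): void {
  // Forget anything older than 24 hours.
  for (let i = patterns.length - 1; i >= 0; i--) {
    if (now - patterns[i].updatedAt > RETENTION_MS) patterns.splice(i, 1);
  }
  // Early gestures count more; mature rooms change slowly.
  const influence = 1 / (1 + patterns.length);
  const near = patterns.find((p) => Math.hypot(p.x - x, p.y - y) < 0.1);

  if (intensity > 0.7 && !near) {
    patterns.push({ x, y, weight: intensity, updatedAt: now }); // new pattern
  } else if (near && intensity > 0.3) {
    near.weight += intensity * influence; // evolve an existing (possibly dormant) pattern
    near.updatedAt = now;
  }
}
```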