💡 Summary
Tone.js is a Web Audio framework for creating interactive music applications in the browser.
```yaml
name: tonejs
description: Web Audio framework for creating interactive music in browsers. Use when building audio applications, synthesizers, musical instruments, effects processors, audio visualizations, DAWs, step sequencers, or any browser-based sound generation. Handles synthesis, scheduling, sample playback, effects, and audio routing.
license: MIT
metadata:
  author: Yotam Mann
  version: "15.4.0"
  repository: https://github.com/Tonejs/Tone.js
```
# Tone.js Skill
Build interactive music applications in the browser using the Web Audio API through Tone.js's high-level abstractions.
## When to Use This Skill
Use Tone.js when:
- Creating synthesizers, samplers, or musical instruments
- Building step sequencers, drum machines, or DAWs
- Adding sound effects or music to games
- Implementing audio visualizations synchronized to sound
- Processing audio in real-time with effects
- Scheduling musical events with precise timing
- Working with musical concepts (notes, tempo, measures)
## Core Concepts

### 1. Context and Initialization
The AudioContext must be started from a user interaction (a browser requirement):

```javascript
import * as Tone from "tone";

// ALWAYS call Tone.start() from a user interaction
document.querySelector("button").addEventListener("click", async () => {
  await Tone.start();
  console.log("Audio context ready");
  // Now safe to play audio
});
```
### 2. Audio Graph and Routing
All audio nodes connect in a graph leading to `Tone.Destination` (the speakers):

```javascript
// Basic connection
const synth = new Tone.Synth().toDestination();

// Serial routing: chain through effects
const chained = new Tone.Synth();
const filter = new Tone.Filter(400, "lowpass");
const feedbackDelay = new Tone.FeedbackDelay(0.125, 0.5);
chained.chain(filter, feedbackDelay, Tone.Destination);

// Parallel routing: split the signal
const reverb = new Tone.Reverb().toDestination();
const delay = new Tone.Delay(0.2).toDestination();
chained.connect(reverb);
chained.connect(delay);
```
### 3. Time and Scheduling
Tone.js abstracts time in musical notation:

- `"4n"` = quarter note
- `"8n"` = eighth note
- `"2m"` = two measures
- `"8t"` = eighth-note triplet
- Plain numbers = seconds
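These durations depend on the tempo; in the library itself, `Tone.Time("4n").toSeconds()` performs the conversion. The arithmetic can be sketched as follows (an illustrative helper assuming 4/4 time, not the Tone.js implementation):

```javascript
// Illustrative helper (not part of Tone.js): convert simple notation to seconds.
// Assumes 4/4 time; in the library, Tone.Time("4n").toSeconds() does this properly.
function notationToSeconds(notation, bpm = 120) {
  const quarter = 60 / bpm; // duration of one quarter note ("4n")
  const match = notation.match(/^(\d+)(n|m|t)$/);
  if (!match) throw new Error(`unsupported notation: ${notation}`);
  const value = Number(match[1]);
  switch (match[2]) {
    case "n": return quarter * (4 / value);           // "8n" -> half a quarter
    case "m": return quarter * 4 * value;             // "2m" -> two 4/4 measures
    case "t": return quarter * (4 / value) * (2 / 3); // triplet: 2/3 of the straight note
  }
}

notationToSeconds("4n", 120); // 0.5 seconds at 120 BPM
```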
**CRITICAL:** Always use the `time` parameter passed to callbacks:

```javascript
// CORRECT - sample-accurate timing
const loop = new Tone.Loop((time) => {
  synth.triggerAttackRelease("C4", "8n", time);
}, "4n");

// WRONG - JavaScript timing is imprecise
const badLoop = new Tone.Loop(() => {
  synth.triggerAttackRelease("C4", "8n"); // Will drift
}, "4n");
```
### 4. Transport System
The global timekeeper for synchronized events:

```javascript
// Schedule events on the Transport
const loop = new Tone.Loop((time) => {
  synth.triggerAttackRelease("C4", "8n", time);
}, "4n").start(0);

// Control the Transport
Tone.Transport.start();
Tone.Transport.stop();
Tone.Transport.pause();
Tone.Transport.bpm.value = 120; // Set tempo
```
## Step-by-Step Instructions

### Task 1: Create a Basic Synthesizer
- Import Tone.js
- Create a synth and connect it to the output
- Wait for a user interaction to start audio
- Play notes using `triggerAttackRelease`

```javascript
import * as Tone from "tone";

const synth = new Tone.Synth().toDestination();

button.addEventListener("click", async () => {
  await Tone.start();
  // Play C4 for an eighth note
  synth.triggerAttackRelease("C4", "8n");
});
```
### Task 2: Create a Polyphonic Instrument
- Use `PolySynth` to wrap a monophonic synth
- Pass multiple notes to play chords
- Release specific notes when needed

```javascript
const polySynth = new Tone.PolySynth(Tone.Synth).toDestination();

// Play a chord
polySynth.triggerAttack(["C4", "E4", "G4"]);

// Release specific notes
polySynth.triggerRelease(["E4"], "+1");
```
### Task 3: Load and Play Audio Files
- Create a `Player` or `Sampler`
- Wait for the `Tone.loaded()` promise
- Start playback

```javascript
const player = new Tone.Player("https://example.com/audio.mp3").toDestination();
await Tone.loaded();
player.start();

// For multi-sample instruments
const sampler = new Tone.Sampler({
  urls: {
    C4: "C4.mp3",
    "D#4": "Ds4.mp3",
    "F#4": "Fs4.mp3",
  },
  baseUrl: "https://example.com/samples/",
}).toDestination();
await Tone.loaded();
sampler.triggerAttackRelease(["C4", "E4"], 1);
```
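Note names like `"C4"` map to frequencies via 12-tone equal temperament (A4 = 440 Hz), which is also how `Sampler` decides how far to repitch the nearest available sample. A minimal sketch of that mapping (illustrative, not the library's code):

```javascript
// Illustrative: note name -> MIDI number -> frequency (12-TET, A4 = 440 Hz).
const SEMITONES = { C: 0, D: 2, E: 4, F: 5, G: 7, A: 9, B: 11 };

function noteToMidi(note) {
  const m = note.match(/^([A-G])(#|b)?(-?\d+)$/);
  if (!m) throw new Error(`bad note name: ${note}`);
  const accidental = m[2] === "#" ? 1 : m[2] === "b" ? -1 : 0;
  // MIDI 60 = C4, so octave n starts at 12 * (n + 1)
  return 12 * (Number(m[3]) + 1) + SEMITONES[m[1]] + accidental;
}

function midiToFreq(midi) {
  return 440 * Math.pow(2, (midi - 69) / 12); // A4 = MIDI 69
}

midiToFreq(noteToMidi("A4")); // 440
```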
### Task 4: Create a Looping Pattern
- Use `Tone.Loop` or `Tone.Sequence` for patterns
- Pass the time parameter to instrument triggers
- Start the loop and the Transport

```javascript
const synth = new Tone.Synth().toDestination();

const loop = new Tone.Loop((time) => {
  synth.triggerAttackRelease("C4", "8n", time);
}, "4n").start(0);

await Tone.start();
Tone.Transport.start();
```
### Task 5: Add Effects Processing
- Create effect instances
- Connect in the desired order (serial or parallel)
- Adjust the wet/dry mix if needed

```javascript
const synth = new Tone.Synth();
const distortion = new Tone.Distortion(0.4);
const reverb = new Tone.Reverb({
  decay: 2.5,
  wet: 0.5, // 50% effect, 50% dry
});
synth.chain(distortion, reverb, Tone.Destination);
```
### Task 6: Automate Parameters
- Access the parameter via its property (e.g., `frequency`, `volume`)
- Use methods like `rampTo`, `linearRampTo`, `exponentialRampTo`
- Schedule changes with a time parameter

```javascript
const osc = new Tone.Oscillator(440, "sine").toDestination();
osc.start();

// Ramp frequency to 880 Hz over 2 seconds
osc.frequency.rampTo(880, 2);

// Set value at a specific time
osc.frequency.setValueAtTime(440, "+4");

// Exponential ramp (better for frequency)
osc.frequency.exponentialRampTo(220, 1, "+4");
```
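For reference, an exponential ramp follows the Web Audio curve v(t) = v0 * (v1 / v0)^(t / T): the value changes by equal ratios, i.e. equal musical intervals, per unit time, which is why it sounds more natural than a linear ramp for frequency. A quick sketch of the math:

```javascript
// Value along an exponential ramp from v0 to v1 over T seconds (Web Audio curve).
function expRampValue(v0, v1, t, T) {
  return v0 * Math.pow(v1 / v0, t / T);
}

// Halfway through a 440 -> 880 Hz ramp the value is 440 * sqrt(2) ≈ 622.25 Hz:
// exactly a tritone up, half the octave in musical (not linear) terms.
expRampValue(440, 880, 1, 2);
```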
### Task 7: Synchronize Visuals with Audio
- Use `Tone.Draw.schedule()` for visual updates
- Schedule in the same callback as audio events
- Visual updates happen just before the audio plays

```javascript
const loop = new Tone.Loop((time) => {
  synth.triggerAttackRelease("C4", "8n", time);

  // Schedule the visual update
  Tone.Draw.schedule(() => {
    element.classList.add("active");
  }, time);
}, "4n");
```
## Common Patterns

### Pattern: Step Sequencer

```javascript
const synth = new Tone.Synth().toDestination();
const notes = ["C4", "D4", "E4", "G4"];

const seq = new Tone.Sequence(
  (time, note) => {
    synth.triggerAttackRelease(note, "8n", time);
  },
  notes,
  "8n"
).start(0);

Tone.Transport.start();
```
### Pattern: Probabilistic Playback

```javascript
const loop = new Tone.Loop((time) => {
  if (Math.random() > 0.5) {
    synth.triggerAttackRelease("C4", "8n", time);
  }
}, "8n");
```
### Pattern: Dynamic Effect Parameters

```javascript
const filter = new Tone.Filter(1000, "lowpass").toDestination();
const lfo = new Tone.LFO(4, 200, 2000); // 4 Hz rate, 200-2000 Hz range
lfo.connect(filter.frequency);
lfo.start();
```
## Sound Design Principles

### Core Insights
Auditory processing is 10x faster than visual (~25ms vs ~250ms). Sound provides immediate feedback that makes interactions feel responsive. A button that clicks feels faster than one that doesn't, even with identical visual feedback.
Sound communicates emotion instantly. A single tone conveys success, error, or tension better than visual choreography. When audio and visuals tell the same story together, the experience is stronger than either alone.
Less is more. Most interactions should be silent. Reserve sound for moments that matter: confirmations for major actions, errors that can't be overlooked, state transitions, and notifications. Always pair sound with visuals for accessibility - sound enhances, never replaces. Study games for reference - they've perfected informative, emotional, non-intrusive audio feedback.
### Design Philosophy
Good sound design transforms user experience across all platforms - web apps, mobile apps, desktop applications, and games. These principles apply universally whether creating notification sounds, UI feedback, or musical interactions.
Sound is a universal language understood by everyone. When designing audio:
Ask foundational questions:
- What is the essence of what this app/feature is about?
- What emotion do you want to evoke?
- How does it match the app's visual aesthetics?
- How would users understand this interaction without looking at the screen?
Consider context:
- Where will users hear this? (Pocket, desk, busy street, quiet room)
- What will they be doing? (Working, commuting, gaming)
- How often will they hear it? (Once per day vs hundreds of times)
### Notification Sound Design
Effective notification sounds have these characteristics:
#### 1. Distinguishable
- Create a unique sonic signature that identifies the app
- Use characteristic timbres or melodic patterns
- Layer simple elements to build recognition
- Don't mimic system defaults or other common sounds
#### 2. Conveys meaning
- The sound should connect to the message (not literal, but suggestive)
- Liquid qualities for water/weather, metallic for alerts, warm tones for success
- Use timbre and envelope to suggest the content
- Abstract representation, not sound effects
#### 3. Friendly and appropriate
- Match urgency to message importance
- Gentle sounds: Soft attacks (50ms+), smooth timbres (sine, triangle)
- Urgent sounds: Fast attacks (<5ms), brighter timbres (square, FM synthesis)
- Volume and brightness indicate priority
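These attack and timbre choices map directly onto the options object accepted by the `Tone.Synth` constructor. The presets below are illustrative values chosen to match the guidelines above, not canonical settings:

```javascript
// Illustrative presets (hypothetical values): gentle vs. urgent notification voices.
const gentlePreset = {
  oscillator: { type: "triangle" }, // smooth timbre
  envelope: { attack: 0.06, decay: 0.25, sustain: 0, release: 0.3 }, // soft 60 ms attack
};

const urgentPreset = {
  oscillator: { type: "square" }, // brighter, more insistent
  envelope: { attack: 0.003, decay: 0.15, sustain: 0, release: 0.2 }, // <5 ms attack
};

// Usage (in the browser, after Tone.start()):
// new Tone.Synth(gentlePreset).toDestination().triggerAttackRelease("C5", 0.3);
```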
#### 4. Simple and clean
- Avoid complex layering or dense harmonic content
- One or two-note patterns work better than melodies
- Pleasant timbres remain tolerable when heard repeatedly
- Clarity over cleverness
#### 5. Unobtrusive and repeatable
- Duration: 0.3-0.8 seconds maximum for notifications
- Use softer timbres (sine, triangle) for frequent sounds
- Avoid harsh, complex, or extremely bright timbres
- Should remain pleasant when heard 50+ times per day
#### 6. Cuts through noise, not abrasive
- Mid-range frequencies (300-3000Hz) are most effective
- Avoid extreme highs (>8kHz) and lows (<80Hz)
- Design for noisy environments without being harsh
- Triangle and sine waves are gentler than square waves and other bright timbres
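The duration and frequency guidelines above can be folded into a quick sanity check for a candidate notification sound (a sketch; the thresholds are the ones listed in this section):

```javascript
// Check a candidate notification sound against the guidelines in this section.
function checkNotificationSound({ durationSec, fundamentalHz }) {
  const issues = [];
  if (durationSec > 0.8) issues.push("longer than 0.8 s: may feel intrusive");
  if (fundamentalHz < 300 || fundamentalHz > 3000) {
    issues.push("fundamental outside the 300-3000 Hz range that cuts through noise");
  }
  return issues; // empty array = passes the guidelines
}

checkNotificationSound({ durationSec: 0.5, fundamentalHz: 880 }); // []
```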
## Pros
- High-level abstractions simplify audio programming
- Supports real-time audio processing and effects
- Suitable for a wide range of audio applications

## Cons
- Requires a user interaction to start the audio context
- Browser compatibility may vary
- Learning curve for complex audio routing
Copyright belongs to the original author, plyght.
