Your website speaks. Give it a voice.
Semantic audio feedback for shadcn/ui. One prop.
$ npx shadcn@latest add https://sensory-ui.com/r/sensory-ui-components
- 24 components
- 17 roles
- 9 sound packs
- ~26 KB gzipped
Feel the difference.
Every component listens for the sound prop. 17 semantic roles. 24 components.
Instant feedback that feels right.
Sounds generated in real time via Web Audio API. No files. No latency.
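One way to picture a role-based sound prop: each semantic role resolves to synthesis parameters, not an audio file. The sketch below is illustrative only; the role names, fields, and values are assumptions, not sensory-ui's actual tables.

```typescript
// Hypothetical mapping from semantic roles to synthesis parameters.
// Role names and numbers are examples, not the library's real data.
type SoundRole = "tick" | "navigate" | "success" | "error";

interface Tone {
  frequency: number; // Hz
  duration: number;  // seconds
  gain: number;      // peak amplitude, 0..1
}

const roleTones: Record<SoundRole, Tone> = {
  tick:     { frequency: 1800, duration: 0.03, gain: 0.15 }, // subtle checkbox tick
  navigate: { frequency: 600,  duration: 0.12, gain: 0.2  }, // short sweep
  success:  { frequency: 880,  duration: 0.25, gain: 0.3  }, // richer chime
  error:    { frequency: 220,  duration: 0.2,  gain: 0.25 }, // gentle low nudge
};

// Resolving a role is a lookup; in the browser the resulting Tone
// would then drive Web Audio nodes rather than load a file.
function toneFor(role: SoundRole): Tone {
  return roleTones[role];
}
```

Because the cue is just a few numbers, "no files, no latency" follows naturally: there is nothing to fetch or decode before playback.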
Sound with intention.
UI sound is a craft, not a gimmick. These principles keep sensory-ui from becoming noise.
Informative, not decorative
Sound confirms actions, signals errors, and reinforces state changes. If it doesn't communicate something useful, it doesn't play.

Weight matches action
A subtle tick for a checkbox. A short sweep for navigation. A richer chime for a milestone. Sound weight scales with interaction significance.

Accessible by default
Respects prefers-reduced-motion. Global kill-switch. Per-category toggles. Every audio cue has a visual equivalent - sound enhances, never replaces.
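The gating described above (reduced-motion, kill-switch, per-category toggles) can be sketched as one pure decision function. Type and field names here are hypothetical, not sensory-ui's actual API.

```typescript
// Sketch of audio gating, assuming these preference fields exist.
interface AudioPrefs {
  globalMute: boolean;           // global kill-switch
  mutedCategories: Set<string>;  // per-category toggles
  prefersReducedMotion: boolean; // in the browser, from
                                 // matchMedia("(prefers-reduced-motion: reduce)").matches
}

// A cue plays only if nothing has opted it out.
function shouldPlay(category: string, prefs: AudioPrefs): boolean {
  if (prefs.globalMute) return false;
  if (prefs.prefersReducedMotion) return false;
  return !prefs.mutedCategories.has(category);
}
```

Keeping the check pure makes it trivial to test, and the visual feedback path never goes through it, so sound stays an enhancement rather than a requirement.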

Reassuring, never punishing
Sounds create comfort and confirm intent. Errors are gentle nudges, not harsh buzzers. Under 30 KB. No bundled files. No side effects.

The forgotten dimension.
"Your ears are faster than your eyes. The auditory cortex processes sound in about 25ms, while visual processing takes nearly ten times longer. A button that clicks feels faster than one that doesn't, even when the visual feedback is identical."
"Sound should create a sense of comfort and security – only calling for action when needed. Informative, honest, and reassuring."
Google understood this in 2014. The soft tap of a Material button, the chime of a notification, the sweep of a page transition. Deliberate. Crafted. Meaningful.
Web developers largely ignored this dimension. There was no clean, framework-native way to add audio feedback to a React component tree. sensory-ui fills that gap - zero audio files, zero side effects, procedural synthesis powered by the Web Audio API.
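Procedural synthesis means every cue is computed at play time. Here is a self-contained sketch that renders a short decaying sine "click" into a raw sample buffer; in the browser this would more likely be wired with OscillatorNode and GainNode, but the pure version makes the point that no audio files are involved.

```typescript
// Render a short sine tone with an exponential decay envelope.
// Pure DSP: no browser APIs, no files, just math into a buffer.
function renderTone(
  frequency: number,
  duration: number,
  sampleRate = 44100
): Float32Array {
  const n = Math.floor(duration * sampleRate);
  const samples = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const t = i / sampleRate;
    const envelope = Math.exp(-6 * (t / duration)); // fast fade-out
    samples[i] = envelope * Math.sin(2 * Math.PI * frequency * t);
  }
  return samples;
}

// A ~30 ms high-pitched tick: a thousand-odd samples,
// generated on demand with nothing to fetch or decode.
const click = renderTone(1800, 0.03);
```

A buffer this small is cheap enough to synthesize on every interaction, which is why the library can ship at ~26 KB with no bundled assets.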
17 semantic roles across 5 categories

Give your UI a voice.
Add sensory-ui to your Next.js project in under 5 minutes.
Configure. Wire. Ship.
MIT License · Open source forever
