Editors designed for use with music typically allow the user to do the following:

 >  record audio from one or more inputs and store recordings in the computer's memory as digital audio

 >  edit the start time, stop time, and duration of any sound on the audio timeline

 >  fade into or out of a clip (e.g., an S-fade out during applause after a performance), or between clips (e.g., crossfading between takes)

 >  mix multiple sound sources or tracks at various volume levels, panning them between channels, into one or more output tracks

 >  apply simple or advanced effects or filters, including compression, expansion, flanging, reverb, audio noise reduction and equalization to change the audio

 >  play back sound (often after mixing) through one or more outputs, such as speakers, additional processors, or a recording medium

 >  convert between different audio file formats, or between different levels of sound quality
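Several of the operations above (fading, crossfading, mixing, panning) reduce to simple arithmetic on sample arrays. The following is a minimal sketch using NumPy; the function names, the 8 kHz sample rate, and the constant-power pan law are illustrative assumptions, not any particular editor's API.

```python
import numpy as np

SR = 8000  # sample rate in Hz (assumption for this sketch)

def crossfade(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Linearly fade out `a` while fading in `b` over `overlap` samples."""
    ramp = np.linspace(0.0, 1.0, overlap)
    mixed = a[-overlap:] * (1.0 - ramp) + b[:overlap] * ramp
    return np.concatenate([a[:-overlap], mixed, b[overlap:]])

def pan(mono: np.ndarray, position: float) -> np.ndarray:
    """Constant-power pan: position -1.0 = hard left, +1.0 = hard right.
    Returns a (samples, 2) stereo array."""
    angle = (position + 1.0) * np.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return np.column_stack([mono * np.cos(angle), mono * np.sin(angle)])

# Two one-second takes, joined with a 100 ms crossfade, panned left of center.
take1 = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)
take2 = np.sin(2 * np.pi * 330 * np.arange(SR) / SR)
joined = crossfade(take1, take2, overlap=SR // 10)
stereo = pan(joined, position=-0.5)
```

Mixing tracks is the same idea applied in parallel: scale each source by its volume level and sum the results into the output buffer.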

Typically these tasks can be performed non-linearly (any section of the audio can be accessed and edited in any order) and non-destructively (edits leave the original recording intact).
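One common way to achieve non-destructive editing is to keep the source samples untouched and record edits separately, applying them only when rendering for playback. The sketch below uses a simplified edit decision list of hypothetical `(start, stop, gain)` regions; real editors store richer metadata, but the principle is the same.

```python
import numpy as np

# The source audio is never modified; edits live in a separate list.
source = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)

edl = [
    (0, 4000, 1.0),      # keep the first quarter at full volume
    (8000, 12000, 0.5),  # skip ahead, play the third quarter at half volume
]

def render(source: np.ndarray, edl) -> np.ndarray:
    """Assemble the playback signal from the untouched source audio."""
    return np.concatenate([source[start:stop] * gain
                           for start, stop, gain in edl])

out = render(source, edl)
```

Because the edit list is just data, undoing or reordering edits never requires touching the recorded samples.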


Editors designed for use in speech research add the ability to make measurements and perform acoustic analyses such as extracting and displaying a fundamental frequency contour or spectrogram. They typically lack most or all of the effects of interest to musicians.
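A fundamental frequency contour of the kind mentioned above can be sketched with a basic autocorrelation pitch tracker. This is one of several standard estimation methods, shown here on a synthetic tone; the frame size, search range, and sample rate are illustrative assumptions.

```python
import numpy as np

SR = 16000  # sample rate in Hz (assumption)

def f0_autocorr(frame: np.ndarray, sr: int,
                fmin: float = 60.0, fmax: float = 400.0) -> float:
    """Estimate the fundamental frequency of one frame by picking the
    autocorrelation peak within a plausible pitch-lag range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # lag bounds for fmax..fmin
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# A 200 Hz test tone, analyzed in 512-sample frames (32 ms each).
t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 200 * t)
contour = [f0_autocorr(tone[i:i + 512], SR)
           for i in range(0, SR - 512, 512)]
```

On real speech the contour would also need voicing detection and smoothing, but the frame-by-frame structure is the same one a spectrogram display uses.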