Glossary

From Audacity Development Manual
Revision as of 23:09, 3 November 2009 by Olivier (talk | contribs) (Bad wikipedia link for "bit rate")


This page gives very brief explanations of technical terms related to digital audio, with some links to Wikipedia for much more comprehensive explanations.

ToDo: This or all table layouts may need to be customised to include horizontal lines; this page is very tiring to look at without them.

This is being tested with the prettytableb class in common.css.

In Opera, column borders are drawn, which is not what we wanted. - Gale
OperaGlossary.PNG



Even if it has to wait until after 1.4.0, we must also have an Index. When we do, I think these terms should be part of it (because of the overlap between the Index to our pages and the Glossary items). - Gale

See Discussion page

The following two definitions need to be addressed. I believe someone more knowledgeable than myself should take a stab at them. - John

   Cepstrum:

The result of taking the inverse Fourier Transform of the logarithm of the spectrum of a signal. We may still want a friendlier description. [1]

   Noise Floor:

A level or amplitude representing the amount of noise present in the signal.

Gale: How does this description distinguish noise that is above this level, such as HF whine, but which does not obliterate the other audio?

I guess I'm not sure what you are driving at. Can you point me to an Audacity page that discusses your point? - John

Bill: Quoting wikipedia: "noise floor is the measure of the signal created from the sum of all the noise sources and unwanted signals within a measurement system. The noise floor limits the smallest measurement that can be taken with certainty since any measured amplitude can on average be no less than the noise floor."

In my experience, when recording engineers talk about noise floor this is what they are talking about. In their non-rigorous way they are using it interchangeably with signal-to-noise ratio. It's easier to say "the signal is below the noise floor" than to say "the level of the signal relative to 0 dB is below the signal-to-noise ratio".

Addressing your point, Gale, the HF whine would certainly be "noise" to a listener, and it is an 'unwanted signal', but because it can be identified by amplitude and frequency it is, I believe, above the noise floor - it can be measured.

The Compressor effect has a "noise floor" slider. In my understanding the purpose of this slider is to prevent the upward compression of noise. In effect it creates a second threshold or inflection point on the I/O transfer curve of the compressor. Given that the transfer curve below the main threshold gives upward compression, the curve below the "noise" threshold would provide downward expansion. The Leveller effect has a "threshold for noise" popup that appears to do the same thing. Thus I think that our definition of noise floor should at least help users understand the use of these controls.
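To make that two-threshold behaviour concrete, here is a minimal Python sketch of such an I/O transfer curve. The threshold values and the ratio are hypothetical illustrations, not Audacity's actual parameters:

```python
def transfer_curve(level_db, threshold_db=-12.0, noise_floor_db=-40.0,
                   ratio=2.0):
    """Map an input level (dB) to an output level (dB) for a
    compressor with a second 'noise floor' threshold.

    Hypothetical illustration: above the main threshold the signal is
    compressed downward; between the noise floor and the threshold it
    is boosted (upward compression); below the noise floor it is
    expanded downward so that noise is not amplified.
    """
    if level_db >= threshold_db:
        # Downward compression: levels above the threshold rise more slowly.
        return threshold_db + (level_db - threshold_db) / ratio
    if level_db >= noise_floor_db:
        # Upward compression: quiet (but not noise-level) passages are boosted.
        return threshold_db - (threshold_db - level_db) / ratio
    # Downward expansion below the noise floor: each dB of input below the
    # floor loses 'ratio' dB of output, continuing from the output level
    # reached at the noise floor.
    out_at_floor = threshold_db - (threshold_db - noise_floor_db) / ratio
    return out_at_floor - (noise_floor_db - level_db) * ratio

for level in (-6, -20, -50):
    print(level, "->", round(transfer_curve(level), 1))
```

Plotting output against input would show the curve rising gently above the main threshold, bending upward between the noise floor and the threshold, and falling away steeply below the noise floor, so that noise is pushed down rather than up.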

So, I think John's definition is good. Technically you might want to replace 'signal' with 'system', but for our purposes I think that would be confusing.


Other items formerly in this note have been incorporated below, and the original notes moved to the Discussion Page.

Note that the glossary term wavelength isn't actually used anywhere in the manual. Definition Removed.

General Terms

Term Description
ADC: Analog to digital converter. The part of a sound card which records an analog, real world sound like a voice or guitar and converts it to a numerical representation of the audio that a computer can manipulate.
Algorithm: A set of steps or a procedure that will produce a desired result.
Amplitude: The level or magnitude of a signal. Audio signals with a higher amplitude will sound louder.
Audacity Project Format (.aup): The format in which Audacity stores its projects. This consists of a reference file with the extension .aup and a large number of small audio files with the extension .au. This structure makes it quicker for Audacity to move audio around - ideal for cutting and pasting audio in a project.
Audio CDs: CDs containing PCM audio data in accordance with the Red Book standard. They can be played on any standalone CD player as well as on computers.
Bit: A measure of quantity of data. A bit is one binary digit, a 0 or a 1.
Bit Rate: The number of computer bits that are conveyed or processed per unit of time. Normally expressed in kilobits per second (kbps).
CBR: Constant Bit Rate. Audio in this format uses its data at a fixed rate; silence uses as much 'space' as audible sound.
Cepstrum: The result of taking the inverse Fourier Transform of the logarithm of the spectrum of a signal. Useful for investigating pitch and echoes. [2]
Clipping: Distortion to sound that happens when the audio is too loud. When a waveform shows 'flat tops' rather than smooth curves it is usually an indication of clipping.
Compressed Audio Format: Any format that reduces the space required to store or represent an audio signal. Space savings can be made, for example, by discarding certain frequency components which may be inaudible; MP3 takes this approach. Other formats such as FLAC compress without audio loss, but achieve lower compression rates.
Compression: A process that tends to even out the overall volume level by increasing the level of softer passages and decreasing the level of louder passages. See also Compressed Audio Format.
Cycle: An audio tone consists of an oscillating sound pressure on the ear. One cycle is one full transition from positive pressure through negative pressure and back to positive pressure again.
DAC: Digital to analog converter. The part of a sound card which plays back a numerical representation of audio as an analog, real world sound like a voice or guitar.
Data CDs: Data CDs contain data intended to be read directly by a computer. The data may include audio and any other types of file such as images and documents. Most standalone CD players will not play data CDs, but some DVD players will. Including compressed audio files on a data CD can greatly increase the playing time compared to audio CDs.
dB: Decibels. A logarithmic unit (typically of sound pressure) describing the ratio of that unit to a reference level.
Dynamic Range: The difference between the loudest and softest part in an audio recording, the maximum possible being determined by its sample format. For a device, the difference between its maximum possible undistorted signal and its Noise Floor.
FFT: Fast Fourier Transform. A method for performing Fourier transforms quickly.
File name extension: A suffix of three or four characters added to a file name which defines the format of its contents. The suffix is separated from the file name by a dot (period), as in "song.mp3". The extension of common formats is often hidden on Windows, but can be turned on in the system's Folder Options.
Filter: A sound effect that lets some frequencies through and suppresses others.
Fourier Transform: A method for converting a waveform to a spectrum, and back.
Frequency: Audio frequency determines the pitch of a sound. Measured in Hz (see below), higher frequencies have higher pitch.
Gain: How much to amplify the sound by.
Harmonics: Most sounds are made up of a mix of different frequencies. In musical sounds, the component frequencies are simple multiples of each other, for example 100 Hz, 200 Hz, 300 Hz. These are called harmonics of the lowest frequency sound.
High Pass Filter: A filter that lets high (treble) frequencies through.
Hz: Hertz. Measures a frequency event in number of cycles per second. See Frequency and Sample Rate, both of which are measured in Hz.
Interpolation: Completing waveform data by estimating missing values. The values are estimated as being between other known values. For example, converting a waveform recorded at 22000 samples per second to one at a higher rate such as 44000 samples per second requires interpolation.
LAME: A software library that converts audio to MP3 format.
Latency: A short delay between an audio signal being sent and received. In computer audio this is due to analog-to-digital and digital-to-analog conversion. Most commonly refers to the delay between recording a sound and a) hearing its playthrough or b) laying it down on disk.
Linear: A simple, straightforward, directly proportional, one-to-one relationship. A volume control would be a linear control. This term is used to contrast with logarithmic, or other more complex, relationships.
Logarithmic: A non-linear relationship where one item is proportional to the logarithm of the other item. Some measures, such as dB, are logarithmic by definition.
Lossless: A format for size-compressing audio that does not lose any information. The quality is exactly as good as before compression. An example is FLAC.
Lossy: A format for size-compressing audio that may sacrifice a small amount of quality in order to reduce the file size more than lossless compression. Examples are MP3 and OGG.
Low Pass Filter: A filter that lets low (bass) frequencies through.
MP3 CDs: A specific type of data CD containing only MP3 audio files. All computers can play them, as can some DVD and portable MP3 players.
Noise Floor: A level or amplitude representing the amount of noise present in the signal.
PCM: Pulse code modulation. A method of converting audio into binary numbers to represent it digitally, then back to audio. The waveform is measured at evenly spaced intervals and the amplitude of the waveform noted for each measurement.
Pitch: Generally synonymous with the fundamental frequency of a note, but in music, often also taken to imply a perceived measurement that can be affected by overtones above the fundamental.
Red Book: The most widely used standard for representing audio on CD, requiring stereo, 16-bit, 44100 Hz.
RMS: Root-mean-square. A method of calculating a numerical value for the average sound level of a waveform.
Sample: A discrete value at a point in a waveform representing the audio at that point. Also the act of taking a sequence of such values. All digital audio must be sampled at discrete points. By contrast, analog audio (such as the sound from a loudspeaker) is always a continuous signal.
Sample Format: Also known as Bit Depth or Word Size. The number of computer bits present in each audio sample. Determines the dynamic range of the audio.
Sample Rate: Measured in Hz like frequency, this represents the number of digital samples captured per second in order to represent the waveform.
Spectrum: Presentation of a sound in terms of its component frequencies.
Uncompressed Audio Format: An audio format in which every sample of the sound is represented by a binary number. Examples are WAV and AIFF.
VBR: Variable bit rate. A method for compressing audio which does not always use the same number of bits to record the same duration of sound.
Waveform: A visual representation of an audio signal.
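Several of the terms above - sample, sample rate, amplitude, RMS and dB - can be tied together in one short sketch. This is an illustration in Python of how the quantities relate, not anything Audacity itself does:

```python
import math

SAMPLE_RATE = 44100          # samples per second (Hz); the Red Book rate
FREQUENCY = 440.0            # pitch of the tone, in Hz
AMPLITUDE = 0.5              # peak level, where 1.0 is full scale

# Sample one second of a sine tone: one discrete value per sampling interval.
samples = [AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]

# RMS: square each sample, take the mean, then the square root.
rms = math.sqrt(sum(s * s for s in samples) / len(samples))

# dB is a logarithmic ratio; here the reference level is full scale (1.0).
peak_db = 20 * math.log10(AMPLITUDE)
rms_db = 20 * math.log10(rms)

print(f"RMS level: {rms:.4f}")    # a sine's RMS is its peak divided by sqrt(2)
print(f"peak: {peak_db:.1f} dB")  # half of full scale is about -6 dB
print(f"RMS:  {rms_db:.1f} dB")
```

Note how halving the amplitude costs about 6 dB, and how the RMS of a sine wave sits about 3 dB below its peak - both consequences of the logarithmic dB scale.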
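The Spectrum and Fourier Transform entries can be illustrated the same way. The sketch below computes a few bins of a (slow, naive) discrete Fourier transform of a 440 Hz tone; an FFT produces the same result far more quickly. The window size and sample rate are arbitrary choices for the example:

```python
import cmath
import math

SAMPLE_RATE = 8000
N = 800                       # analysis window: 0.1 seconds of audio
FREQ = 440.0                  # 440 Hz falls exactly on bin 44 of this window

samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE) for n in range(N)]

def dft_bin(x, k):
    """Magnitude of one frequency component (bin k) of signal x."""
    n_total = len(x)
    acc = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_total)
              for n in range(n_total))
    return abs(acc) / n_total

# Each bin k corresponds to the frequency k * SAMPLE_RATE / N.
for k in (22, 44, 88):
    freq = k * SAMPLE_RATE / N
    # The 440 Hz bin shows a strong component; the other bins are near zero.
    print(f"{freq:6.0f} Hz: {dft_bin(samples, k):.3f}")
```

The list of bin magnitudes over all frequencies is the spectrum of the signal: a pure tone shows a single spike, while a musical note would also show spikes at its harmonics.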

Audio File Formats

I'm unsure if a separate section is a good idea, given that I think we need entries for lossy/lossless and uncompressed/compressed audio formats in the main glossary (e.g. to explain the difference between audio signal compression and audio file compression). A lot depends on whether we have Audio File Formats as one page as an appendix. Play it by ear, but I don't think this table has quite enough detail for each format at the moment, even with a Wikipedia link. - Gale
ToDo: As we have now decided not to have separate pages for the different formats (only separate pages for format export options), we need to rethink where these Glossary entries link to. Best left until we decide whether we have an appendix containing details of audio file formats. - Gale
Term Description
AIFF: A container format, almost always used for lossless, uncompressed, PCM audio. The format is in Apple's big-endian byte order.
AU: A container format, used by Audacity for storage of lossless, uncompressed, PCM audio data.
FLAC: An Open Source lossless, size-compressed audio format.
MIDI: MIDI is a small-sized file format which stores instructions for playing notes, widely used for keyboard instruments. It is not an audio file format like WAV that uses thousands of samples to record the full sound of the notes actually being played.
MP2: A lossy, size-compressed audio format mainly used by the broadcast media.
MP3: A lossy, size-compressed audio format which is the main format for transmitting audio over the internet.
Ogg Vorbis: An Open Source lossy, size-compressed audio format.
WAV: A container format, almost always used for lossless, uncompressed, PCM audio. The format is in Microsoft's little-endian byte order.