Reverse Conducting Data for:

Mazurka in A Minor, Op. 7, No. 2 performed by Friedman (1930)

Label: Philips 456 784-2 (1999)
Track: 9        Track duration: 2:31
Related performances: Ashkenazy (1981), Bacha (1997), Barbosa (1983), Biret (1990), Blet (2003), Block (1995), Brailowsky (1960), Chiu (1999), Clidat (1994), Cohen (1997), Cortot (1951), Csalog (1996), Czerny-Stefanska (1989), Ezaki (2006), Falvay (1989), Ferenczy (1958), Fiorentino (1958), Flière (1977), Fou (1978), Fou (2005), François (1956), François (1966), Groot (1988), Hatto (1993), Hatto (2006), Indjic (1988), Iturbi (1959), Kapell (1951), Kushner (1990), Lilamand (2001), Luisada (1990), Magaloff (1977), Magin (1975), Milkina (1970), Mohovich (1999), Niedzielski (1931), Ohlsson (1999), Olejniczac (1990), Osinska (1989), Pobłocka (1999), Rangell (2001), Rubinstein (1939), Rubinstein (1952), Rubinstein (1966), Shebanova (2002), Smith (1975), Sztompka (1959), Uninsky (1959), Vardi (1988), Wasowski (1980)


Raw Data

Below are the 20 individual tempo tapping trials for the reverse conducting of this performance. Click on a trial number to view the timing data for a particular trial.

01  02  03  04  05  06  07  08  09  10  11  12  13  14  15  16  17  18  19  20  

Header information from first trial file:

title:             Mazurka in A minor, Op. 7, No. 2
reverse-conductor: Craig Stuart Sapp
performer:         Ignaz Friedman
label:             Philips 456 784-2
label-title:       Great Pianists of the 20th century, vol. 30
trial-hardware:    Sony Vaio PCG-R505GC laptop
trial-cpuspeed:    1193.084 MHz
trial-os:          Windows XP

The individual trials are combined into a single file for analysis in Mathematica:
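The combining step can be sketched in Python (rather than Mathematica) as follows. This is an illustrative sketch only: the file names (`trial-01.txt`, etc.) and the assumed format of one tap time in milliseconds per line are hypothetical, not taken from the actual trial files.

```python
# Hypothetical sketch of combining the 20 individual trial files into
# one table.  Assumes each trial file (e.g. "trial-01.txt") contains
# one tap time in milliseconds per line; names and format are
# illustrative, not the project's actual file layout.
from pathlib import Path

def combine_trials(directory, pattern="trial-*.txt"):
    """Return a list of tap-time lists, one list per trial file."""
    trials = []
    for path in sorted(Path(directory).glob(pattern)):
        taps = [float(line) for line in path.read_text().split()]
        trials.append(taps)
    return trials
```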

To estimate the time offset of the beat tap points within the original audio file, the following tap times were extracted manually from the original audio file:

Analysis Data

Here is the raw analysis file output from Mathematica:


The raw Mathematica output is then formatted into a Humdrum file for eventual linking to the Humdrum **kern data for the score:


The meaning of each column in this file:

  1. **kern -- The duration of the beat (always a quarter-note duration).
  2. **beat -- The beat number within the measure.
  3. **time -- The average time in milliseconds at which the beat is expected to occur in the original audio file.
  4. **dur -- The average duration in milliseconds from the current beat to the next beat.
  5. **min -- The minimum absolute time at which this beat was tapped over all trials.
  6. **max -- The maximum absolute time at which this beat was tapped over all trials.
  7. **cmin -- The lower bound of the 95% confidence interval for the average tap time of this beat.
  8. **cmax -- The upper bound of the 95% confidence interval for the average tap time of this beat.
  9. **sd -- The standard deviation of the taps around the average absolute time for this beat.
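The per-beat statistics listed above can be sketched as follows, assuming each trial is a list of absolute tap times (in milliseconds) with the same number of beats in every trial. The 1.96 z-value for a 95% confidence interval is a standard-normal approximation and an assumption on my part, not necessarily the formula used in the original Mathematica analysis.

```python
# Illustrative computation of the per-beat statistics described above.
# Assumes each trial is a list of absolute tap times (ms), one entry
# per beat, with all trials the same length.
import math

def beat_stats(trials, confidence_z=1.96):
    """Return one dict of statistics per beat position."""
    stats = []
    for beat_taps in zip(*trials):        # taps for one beat across trials
        n = len(beat_taps)
        avg = sum(beat_taps) / n
        sd = math.sqrt(sum((t - avg) ** 2 for t in beat_taps) / (n - 1))
        half = confidence_z * sd / math.sqrt(n)   # 95% CI half-width
        stats.append({
            "time": avg,                  # average tap time
            "min": min(beat_taps),
            "max": max(beat_taps),
            "cmin": avg - half,           # CI lower bound
            "cmax": avg + half,           # CI upper bound
            "sd": sd,
        })
    return stats
```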

The absolute beat times in the music are extracted from pid5667230-09.avg and stored in a plain text file. The following data is useful for generating clicktracks for the audio recording of the performance.

Next, the offsets computed in Mathematica are added to each individual trial in another combined trial file for use in further analysis of the individual trials. Averaging the data in the following file should give the average performance times listed in the file above:

Score Alignment Data

The beat times are first attached to the original score:


Then the times of sub-beats are interpolated from the beat times:
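The interpolation step can be sketched as follows. Linear interpolation between adjacent tapped beat times is an assumption here; the project's actual interpolation scheme may differ.

```python
# Minimal sketch of interpolating sub-beat times linearly between
# tapped beat times.  beat_times are absolute times in milliseconds;
# position is a fractional beat index (e.g. 1.5 = halfway between
# beat 1 and beat 2).  Linear interpolation is an assumption.

def interpolate_time(beat_times, position):
    i = int(position)
    frac = position - i
    if frac == 0 or i + 1 >= len(beat_times):
        return beat_times[min(i, len(beat_times) - 1)]
    return beat_times[i] + frac * (beat_times[i + 1] - beat_times[i])
```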


Performance Analysis Data

Finally, the timing information for all individual notes in the score is extracted to a file for use in automatic note identification in an audio file of the performance:


The columns represent the following information about a note:

  1. abstime -- average absolute time in milliseconds at which the note is expected to occur in the audio file, based on the human beat taps.
  2. duration -- expected duration of the note in milliseconds, based on its score duration.
  3. note -- MIDI note number of the pitch (60 = middle C, 61 = C-sharp/D-flat, etc.).
  4. metlev -- metric level of the note: 1 = occurs on a downbeat; 0 = occurs on another beat in the measure; -1 = occurs on an offbeat.
  5. measure -- the measure number in which the note occurs.
  6. absbeat -- the absolute beat number, counting from 0 at the first beat of the composition.
  7. mintime -- minimum absolute time of human tapping for this note.
  8. maxtime -- maximum absolute time of human tapping for this note.
  9. sd -- standard deviation of the human tapping times in milliseconds.
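The metlev value in column 4 can be derived from a note's absolute beat position. The sketch below assumes the 3/4 meter of a mazurka (three beats per measure, measures aligned with absolute beat 0); it is an illustration of the classification rule above, not the project's actual extraction code.

```python
# Hypothetical derivation of the "metlev" column from a note's
# absolute beat position, assuming 3/4 meter (three beats per
# measure) with absolute beat 0 falling on a downbeat.

def metric_level(absbeat, beats_per_measure=3):
    if absbeat != int(absbeat):
        return -1                  # offbeat: between beats
    if int(absbeat) % beats_per_measure == 0:
        return 1                   # downbeat: first beat of the measure
    return 0                       # another beat in the measure
```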

Manually Corrected Beat Times

The average tap times for this performance were manually corrected to match the beat times in the performance using a sound editor to listen to the audio in detail. Here is a file which contains the corrected absolute beat times:


The first column in the file contains the absolute beat number, starting from beat "0". Enumerating the beats in this way is useful for evaluating automatic beat-position detection. Some beats occur where no note events sound, so automatic event detection will find nothing at those times.
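One way such an evaluation might work is sketched below: an automatically detected beat counts as correct if it falls within some tolerance of a corrected reference beat. Both the matching scheme and the 70 ms tolerance are assumptions for illustration, not part of the original data or evaluation procedure.

```python
# Illustrative scoring of automatically detected beat times (ms)
# against the manually corrected reference times.  A detection
# matches a reference beat if it lies within `tolerance` ms of it;
# each detection may match at most one reference beat.  The 70 ms
# tolerance is an assumed value.

def match_beats(reference, detected, tolerance=70.0):
    """Return the number of reference beats matched by a detection."""
    matched = 0
    used = set()
    for ref in reference:
        for j, det in enumerate(detected):
            if j not in used and abs(det - ref) <= tolerance:
                used.add(j)
                matched += 1
                break
    return matched
```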