Did the Manhattan Transfer use Auto-Tune?
I recently came across an allegation on Amazon.com that got me thinking. The review in question is by Andrew Grobengieser, and it is critical of the Manhattan Transfer’s latest album, The Chick Corea Songbook. Grobengieser alleges:
As a lifetime fan, I was unbelievably excited to hear of the release of a Chick Corea songbook. And then I listened. It only took me a moment before a sinking feeling set in, as I realized that ManTran, one of the best-blending and most in-tune vocal ensembles in recorded-music history, has succumbed to the scourge of modern recording known as “Auto-Tune”. Yes, Manhattan Transfer fans, welcome to the world of GLEE and Cher. It’s all over the place on group harmonies, and even rears its ugly head on a few of the solo vocals.
I mean, really. Why ON EARTH would this production choice be made? It takes what are otherwise very hip and adventuresome arrangements, and makes them roboticized, metallic, cold, and inhuman.
It seems to me that it’s one thing to allege that a weekly TV musical is using Auto-Tune, but quite another to level the accusation at four vocal jazz icons.
I am by no means anything approaching an expert on this topic—just an interested fan. But the engineer in me was curious: Is it actually possible to detect the use of Auto-Tune?
First, I did a little background research. Auto-Tune is a tool that can be used to correct the pitch of recorded singing. Evidently it can be used in a subtle or blatant manner; Andy Hildebrand, the inventor of Auto-Tune, says:
At one extreme, Auto-Tune can be used very gently to nudge a note more accurately into tune. In these applications, it is impossible for skilled producers, musicians, or algorithms to determine that Auto-Tune has been used. On the other hand, when used as an effect, such as in hip-hop, Auto-Tune usage is obvious to all. Everything in between is subject to an individual’s unique listening skills.
This raises the question: Assuming that the Manhattan Transfer is attempting to use Auto-Tune in a subtle manner, how can Grobengieser detect its use? (In a follow-up comment to his review, he claims he is “a trained musician with years of experience dealing with vocal group intonation.”) Frankly, I didn’t believe he could detect it, so I decided to try to learn more.
According to one site:
The most important parameter is the retune speed – the time it takes Auto-Tune to glide the note to its perfect pitch. For maximum realism, the retune speed must be set to a value close to the retune speed of the singer’s natural voice. . . . But Auto-Tune’s retune speed can be set to any value right down to zero, which means that notes instantly jump to the exact pitch. This effect is decidedly un-natural. If the singer glides smoothly from one note to another, Auto-Tune will suddenly jump from one note to the next when the mid-point between them is reached.
I believe you can hear the unnatural Auto-Tune effect of a zero retune speed in this Cher song, which according to various web sources also seems to be the first use of Auto-Tune as a sound effect (in 1998).
But let’s assume that the Manhattan Transfer is trying to hide the use of Auto-Tune, in which case their recording engineer would presumably use a retune speed that approximates a “natural” value.
Hildebrand’s original patent for Auto-Tune, also from 1998, has a relatively clear explanation of his invention and how it works. (In my experience, the technical clarity is unusual for a patent!) If you’re interested, I recommend the discussion from the middle of column 3 to the middle of column 6.
I wondered whether we could detect Auto-Tune because the notes would be too perfect. The song “500 Miles High” begins with an a cappella intro in which it is easy to isolate the first note sung by Janis Siegel. I brought this song into Audacity, zoomed in to the first second of the left channel, and selected Analyze > Plot Spectrum.
This is a fairly crude method, but at a point in the music where a single voice can be isolated, it can reveal some interesting information. Above, if I’m remembering my music theory class correctly, you can see that Siegel is singing an “A”. The fundamental is the first peak, highlighted with the thin vertical line in the screenshot above; to the right are its harmonics.
As the plot shows, Siegel didn’t hit a perfect “A” (that would be 220 Hz). Instead, she’s at 216 Hz, roughly a third of a semitone flat, which would be noticeable. I am definitely no expert, but if you’re going to use Auto-Tune, why not land the note exactly on pitch?
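For anyone who wants to reproduce this without Audacity, a few lines of NumPy approximate the Plot Spectrum measurement and convert the offset from A3 (220 Hz) into cents. This is only a sketch: the function names are mine, I’m using a synthetic 216 Hz tone as a stand-in for the recording, and Audacity’s exact windowing may differ.

```python
import numpy as np

def dominant_frequency(samples, sample_rate, n_fft=2048):
    """Frequency of the largest peak in the magnitude spectrum,
    roughly what Audacity's Plot Spectrum reports."""
    windowed = samples[:n_fft] * np.hanning(n_fft)
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def cents_off(measured_hz, target_hz):
    """Deviation from a target pitch in cents (100 cents = 1 semitone)."""
    return 1200.0 * np.log2(measured_hz / target_hz)

# Synthetic stand-in for the vocal: one second of a 216 Hz tone at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 216.0 * t)

peak = dominant_frequency(tone, sr)
print(round(peak, 1))  # lands in the FFT bin nearest 216 Hz
print(round(cents_off(216.0, 220.0), 1))  # -31.8: about a third of a semitone flat
```

The cents figure is the useful part: 216 Hz versus 220 Hz works out to about 32 cents flat, well within what a trained ear can hear on a sustained note.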
There is a similar intro to the Manhattan Transfer song “Gentleman With a Family” from 1991’s The Offbeat of Avenues. I picked this song because it starts out similarly to “500 Miles,” and also because 1991 puts it well before Auto-Tune would have been in use. In this case the intro isn’t a cappella, so there is some instrumentation playing and it’s a little harder to isolate the singer’s voice. However, selecting the left channel from 20.5 to 21.5 seconds in this song yields the following frequency analysis:
I am pretty certain that the highlight is again on the fundamental of Siegel’s voice—she is hitting a C at 262 Hz. (I believe the peaks to the left are lower tones from the instruments.) Here, before the days of Auto-Tune, she’s dead-on. Of course, she was also 19 years younger!
There are many more sophisticated methods of analysis that suggest themselves. It would be interesting to plot the frequencies over time—perhaps a voice held on a long note without any variation would be a likely indicator of the use of Auto-Tune. If we could isolate each singer onto a separate voice track, it would even be possible to run the pitch detection portion of the Auto-Tune algorithm; if this indicated that tuning was necessary, it would probably be a good clue that Auto-Tune wasn’t used in the studio. My colleague Kevin Gross suggested looking at the vibrato and timbre, because vibrato is removed altogether by Auto-Tune (and then artificial vibrato is usually added back in, according to the patent), and timbre would be changed when samples are added or dropped as part of the tuning process.
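As a rough illustration of the plot-frequencies-over-time idea, here is a minimal autocorrelation pitch tracker in Python. To be clear, this is my own toy sketch, not the pitch detector from Hildebrand’s patent; the simulated “singer” is a 220 Hz tone with a 6 Hz vibrato. A heavily Auto-Tuned voice with zero retune speed would show an almost perfectly flat contour instead of this natural wobble.

```python
import numpy as np

def pitch_track(samples, sample_rate, frame_len=2048, hop=1024,
                fmin=80.0, fmax=500.0):
    """Crude pitch contour: one autocorrelation-based estimate per frame.
    A natural voice drifts and carries vibrato; a hard-tuned voice would
    sit almost perfectly flat on its target frequency."""
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    pitches = []
    for start in range(0, len(samples) - frame_len, hop):
        frame = samples[start:start + frame_len]
        frame = frame - frame.mean()  # remove DC before correlating
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = lag_min + np.argmax(ac[lag_min:lag_max])
        pitches.append(sample_rate / lag)
    return np.array(pitches)

# Simulated "singer": one second of A3 (220 Hz) with a 6 Hz vibrato
# about 20 cents deep.
sr = 44100
t = np.arange(sr) / sr
depth = 2 ** (20 / 1200) - 1  # 20 cents expressed as a frequency ratio
inst_freq = 220.0 * (1 + depth * np.sin(2 * np.pi * 6.0 * t))
voice = np.sin(2 * np.pi * np.cumsum(inst_freq) / sr)

contour = pitch_track(voice, sr)
print(contour.mean())  # close to 220 Hz
print(contour.std())   # nonzero: the vibrato survives in the contour
```

On a real recording you would first need to isolate a solo passage, as with the spectrum plots above, and the integer-lag resolution here is coarse; it is a starting point for the vibrato test Kevin suggested, not a finished detector.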
Obviously, I can’t really conclude anything from what I’ve done so far. In his Amazon review, Grobengieser doesn’t specify where he thinks he hears Auto-Tune on The Chick Corea Songbook; possibly he’s not talking about the intro to “500 Miles”. Or possibly my analysis tools are not sophisticated enough to detect the use of Auto-Tune. Or possibly if you are an audio engineer trying to sneak a little Auto-Tune into a jazz recording, you are smart enough not to correct to the exact pitch. I have no idea. To my ears the Transfer occasionally sounds just a little off-key on this album, which I ascribe to their age (but it also argues against the use of Auto-Tune). Again, though, I’m no expert.
I’d welcome your thoughts in the comments!
Howdy Pierce is a managing partner of Cardinal Peak with a technical background in multimedia systems, software engineering and operating systems.
Tags: audio processing, auto-tune
10 Responses to “Did the Manhattan Transfer use Auto-Tune?”
Your measurement of 216Hz as opposed to a perfect A of 220Hz is subject to measurement errors.
If the source audio is sampled at 44.1KHz, Nyquist gives a bandwidth of 22050Hz.
Dividing this by the number of buckets you’re using, 2048, gives a resolution of 10.77Hz per bucket (assuming the buckets are linearly spaced – I can’t recall from the algorithm/windowing function exactly how they’re computed, but this will provide an avenue for further thought regardless).
At 10.77Hz per bucket, the bucket measuring the “A” tone could actually end up anywhere between 214.6Hz and 225.4Hz – about a third of a semitone in either direction.
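The commenter’s arithmetic is easy to check. One caveat I’d add: this divides the Nyquist bandwidth by the bucket count, which is right if Audacity’s “2048” is the number of displayed buckets; if it is actually the FFT length, the spacing doubles to about 21.5 Hz, making the uncertainty even worse.

```python
# The comment's reasoning: resolution = analysis bandwidth / bucket count.
sample_rate = 44100.0
nyquist = sample_rate / 2            # 22050 Hz of usable bandwidth
buckets = 2048
resolution = nyquist / buckets       # ~10.77 Hz per bucket
print(round(resolution, 2))          # 10.77

# A peak in the bucket centered on 220 Hz could have come from anywhere
# within half a bucket on either side:
low, high = 220 - resolution / 2, 220 + resolution / 2
print(round(low, 1), round(high, 1))  # 214.6 225.4
```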
Wouldn’t you expect there to be some phase discontinuity where auto-tune was used? I don’t know squat about music theory but it seems like shifting a note higher or lower in frequency would require some phase shifting. Perhaps it would show up in the complex spectrum.
Also, can you try using Auto-Tune on, say, a recording of a C# on a piano and compare the before-and-after spectra to see what it does? How obvious is it?
It would be worth getting trial versions of Auto-Tune and Reaper and just noodling with them a bit. I’ve used Auto-Tune to varying degrees in production over the past five years. It’s really difficult to describe in words, but even quite subtle use becomes obvious to the ear once you’ve played with it a bit. Andy Hildebrand’s comments are true only insofar as a note or two is corrected over the course of a performance, but it’s common these days for it to be on throughout an entire recording. Can’t speak to the Manhattan Transfer album as I haven’t heard it, but I’d say trust your ears over the spectrum analysis.
Nic, that is a good point. I re-ran the analysis using 8192 buckets. I don’t think I can post a new picture in a comment, but the peak has indeed moved — down to 213 Hz! This is the occasional problem with software you didn’t write yourself: I’m now at the point where I’m not entirely sure what Audacity is doing there.
I was bit by the bug and had to investigate. I’m definitely hearing some pitch correction on 500 miles high, only on one voice for sure. I listened on amazon with headphones here – http://www.amazon.com/Chick-Corea-Songbook-Manhattan-Transfer/dp/B002IVLWG0. The pitch correction is apparent on the very first two notes of the lead female vocal (some-day) in the amazon sample, after the other harmonies come in it’s less easy to identify. Again, hard to explain in words, but that’s what autotune sounds like.
Auto-Tune definitely uses some formant-aware phase shifting, so that a slight pitch correction is essentially inaudible if properly used. Heck, back in the 1980s, when we only had the Eventide H3000 to save a poor performance, we could make a convincing approximation of the Auto-Tune effect without Auto-Tune… And when the DigiTech Vocalist MV-5 became available, it did a really decent job at pitch correction, and that was back in 1995. Do you really think that a state-of-the-art effects module used by a competent sound engineer in 2010 would be audible? No way, sir.
So, the non-qualified engineer has a thought: I think you’re looking at analogue outputs or processed versions of analogue outputs? I would have thought that statistically speaking you could identify application of Auto-Tune by looking at the digital data. For example, the filtering process would either reduce random error in the underlying data, or introduce new noise artifacts in “corrected” data that would be identifiable.
In the case of “500 Miles”, the analysis is performed looking at the digital samples that result from decoding the MP3 purchased on Amazon. (And clearly the MP3 compression is going to add some artifacts, but the only way around this is to buy the CD, and who does that anymore?) This song is new enough that I would be very surprised if it was ever converted back to analog in the studio after the original A/D conversion. And I would expect the original A/D conversion to have been performed at a very high sample rate, perhaps 96 kHz, which should eliminate any aliasing.
I just put this CD on a few moments ago for a casual background listen, and from the first few notes of the second song, “Prelude”, it was screamingly obvious to my ear that Autotune was used (and not lightly). It was hard to believe that this group would use it, so I did a quick Google search to see if anyone else heard it, and found this page. I wouldn’t say it’s used everywhere, and I’ve now only listened to a couple of the tunes, but I’ve heard it in at least several parts of several songs now, and upon a closer listen, I hear it on the first tune now in parts. To me that characteristic sound of Autotune is unmistakable. I can’t say I’m dead-set against the use of Autotune (it’s certainly more pleasant to listen to than out-of-tune-ness), but it does cheapen the heroics that otherwise are characteristic of super-talented and tremendously accomplished groups like this. For the record, I’ve never heard AutoTune on the likes of The Real Group or Take 6.
According to Wikipedia, Tim Hauser is 70 years old! Janis Siegel is almost 60. I’ve loved the group for years, but maybe it’s time to hand over the reins to Naturally 7, Take 6, etc. Auto-Tune is not cheating if used as an effect or acknowledged in the liner notes. Speaking of which, can Keith Richards still play the guitar?