His background may have been in tubed audio product design, but Theta Digital’s Mike Moffat is now at the forefront of computer-based digital processor development. His Theta D/A processors are among a handful of products that use Digital Signal Processing (DSP) chips and custom filtering software instead of off-the-shelf filter chips (footnote 1). I recently visited Mike at the Theta factory to get his current ideas on digital audio reproduction and what goes into designing a good-sounding processor. I began by asking Mike if he had always been an audiophile.
Mike Moffat: I’ve been an audiophile since high school. I went to UCLA, but got drafted in the middle of it and ended up graduating [in Electrical Engineering] two years late. I’m giving away my age. [laughs] I then worked briefly at Douglas Aircraft and decided I didn’t like a desk job. So I got a more adventurous job with Texas Instruments working on experimental digital tape recorders. These were 8-bit machines that sampled at 50Hz to pick up low-frequency information that geologists used. It was overseas employment and paid well. When I came back from overseas with all that money, I could afford to be in audio professionally. In 1977 I built a preamp called the Theta Preamp. I sold Theta to my partner in ’82, did interconnects for a while, and got involved in import and export.
All the old Theta analog stuff was tubed—a power amp, preamp, and tubed head-amp. It was kind of a novel design in its day: no feedback and 6DJ8 tubes—and there were no 12AX7s in it. My campaign at the time was against the 12AX7; it was a two-bit tube designed for cheap phonograph players.
The power amp was a 75W monoblock thing that sold for $700 or $800 apiece, the preamp was $700, and the head-amp was $500—this was 1977, now. The head-amp had two Western Electric 417As in it, which was the quietest tube ever made. It was fun getting the head-amp non-microphonic. We built about a hundred of those.
Robert Harley: Was the old Theta successful?
Moffat: It was doing fine at the time I sold it. My partner sort of lost interest in it, I guess. He got married, but still, to this day, he’ll work on the old Theta stuff. He’s down in San Diego and still takes care of whoever has them out there running.
Harley: You seem nostalgic about tube gear. Do you think tubes are inherently better sonically than transistors?
Moffat: My whole system at home is tubed—but I wouldn’t want to build a product with tubes in the 1990s. The worst thing is when they come back…
Harley: Do you mean reliability problems?
Moffat: I have heard that reliability is a problem from dealers who sell tube equipment in general. If it’s got tubes in it, it will be more problematical. That’s what I hear.
Harley: But do you think they sound better?
Moffat: Oh, yeah. So why don’t I put tubes in my own products? The answer is, I don’t want to see it back. Also, just putting tubes in a $4000 Theta DS Pro would take it up to probably $5500 or $6000—more than that if you do the power supplies right.
But I love them. Neil [Sinclair, Mike’s partner in Theta Digital] still has tubes. He probably won’t admit to using a tubed system in 1992, but I know he hasn’t sold his tube amps.
Harley: All your designs—even the $1250 DS Pro Prime—use computer-based DSP chips running custom filtering software. Do you think DSP-based digital filters are a requirement for state-of-the-art digital playback?
Moffat: I believe so, yes. That’s because there are no digital filters you can buy that optimize for time-domain performance. They are designed for best frequency-domain performance—minimum ripple in the passband and maximum attenuation in the stopband. They are frequency-domain devices only.
One of the purposes of an oversampling system, where you add dots [samples] between the existing dots, is to add more information. In the captive filter design [an off-the-shelf filter chip], that translates to improvements you see on spectrum analyzers—lower ripple and better stopband characteristics. But there is no optimization or enhancement of the time domain. So you’re constrained to whatever information is in the original recording. Whereas in a time-domain–optimized filter, you can improve [the time-domain characteristics] the way you would improve the frequency-domain characteristics of a captive filter. With DSP filters you get the best of both worlds.
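The "adding dots between the dots" that Moffat describes can be sketched in a few lines. This is a generic illustration of 2x oversampling—zero-stuffing followed by a low-pass interpolation filter—not Theta's proprietary algorithm; the filter taps below are assumed for illustration only.

```python
import numpy as np

def oversample_2x(samples, taps):
    """2x oversampling: insert a zero between each input sample
    (zero-stuffing), then low-pass filter so each inserted point
    becomes a weighted combination of its neighbors."""
    stuffed = np.zeros(2 * len(samples))
    stuffed[::2] = samples                    # originals at even positions
    # Convolve with the interpolation filter; gain of 2 restores amplitude
    return 2.0 * np.convolve(stuffed, taps, mode="same")

# A short symmetric FIR kernel. A real oversampling filter would use
# many more taps (e.g., a windowed sinc); these values are illustrative.
taps = np.array([0.0, 0.25, 0.5, 0.25, 0.0])

x = np.array([1.0, 0.0, -1.0, 0.0])           # a coarse input signal
y = oversample_2x(x, taps)
# The original samples survive at the even positions of y; the odd
# positions hold new samples interpolated from the existing ones.
```

The point Moffat makes is that the new samples are derived entirely from the existing ones—the filter's coefficients decide whether that derivation is optimized for frequency-domain behavior, time-domain behavior, or both.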
Harley: How much of your processors’ spatial qualities are a result of DSP-based filters and custom software?
Moffat: Almost all of it. Having done a number of experiments with captive filters—the NPC, Philips, Sony, etc.—I've found that the variations of the algorithm they all run don't do anything to optimize the time-domain performance. The algorithm we run is a specific time-domain enhancer. That's why I build processors that are Motorola-based [the Motorola 56001 DSP chip] as opposed to captive filter–based; there is a substantial difference in imaging and sense of space.
Harley: So soundstage size and image focus are primarily functions of the digital filter’s time-domain performance?
Moffat: Absolutely. Not only theoretically, but also empirically—I’ve done the experiments.
Harley: How much can you change the sound of a processor with the software?
Moffat: Quite a bit. You can put a Wadia-type [filtering] algorithm into a DS Pro Basic—we’ve done it—and it totally changes the sound of the processor. We’ve put in a frequency-domain optimization that completely changes the sound of the processor, particularly the imaging.
Harley: When you’re designing a processor, how do you decide which filtering algorithm is the best? What’s the point of reference? And do you make editorial decisions to try to make it sound “better?”
Moffat: Well, there are lots of ways to optimize digital filters. I would be bullshitting if I told you we've tried them all—we haven't. But they sound very different, even with the same parameters. Given the same passband, stopband, and transition-band parameters, they do sound different. I picked the filter we're using because it does both better than any other algorithm I've found: we get good frequency-domain and good time-domain performance. I picked it over filters with identical measured performance because it sounded better in the spaciousness aspect.
Harley: Is the spatial presentation heard from your processors inherent in the recording, or does the software create some of it? You used the word “enhancement” earlier in describing your digital filters’ time-domain performance.
Moffat: The software creates that sense of space based on information that’s in the recording. In other words, it works mathematically in the time domain the same way frequency-domain optimization works. It takes a weighted average of a group of samples. It’s very similar to the video algorithms JPL did for the Mars Lander to enhance surface detail.
So the answer is really yes to both; the sense of space is created by the software, but it’s based on information that’s originally there in the recording. It’s not making things up. It’s not re-creating some artificial image; the image it creates is based on information in the original samples.
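Moffat's description—"a weighted average of a group of samples"—is the basic operation of an FIR filter, the same operation used in image-enhancement processing. The sketch below shows the general idea; the weights are hypothetical and are not Theta's coefficients.

```python
import numpy as np

def weighted_average_filter(samples, weights):
    """Slide a window across the samples; each output point is a
    weighted average of the input samples under the window.
    Nothing is invented: every output value is a combination of
    samples already present in the recording."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so overall gain is unity
    return np.convolve(samples, w, mode="same")

# Hypothetical center-heavy weights, for illustration only:
weights = [1, 2, 4, 2, 1]
x = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # a single impulse
y = weighted_average_filter(x, weights)
# The impulse is spread across its neighbors according to the weights,
# but the total energy comes only from the original input sample.
```

Choosing those weights is exactly the design freedom Moffat describes: the same machinery can be tuned for frequency-domain goals, time-domain goals, or a balance of the two.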
Harley: So the additional time-domain information created in your DSP filter is analogous to the additional samples generated by an oversampling digital filter in the frequency domain?
Harley: Do you attend much live music?
Moffat: Every year we’ve been buying season tickets to the LA Opera, and a partial season to the LA Philharmonic. And finally this year our seats are in the Founder’s Circle, which are the best seats in the house for sound at the opera, because the orchestra is in the pit.
That’s the reference for my home system—that tells me how it sounds. At the opera, unlike Broadway shows or lighter theater, if anybody came out with a microphone taped on, tomatoes would come flying out of the audience. That’s just not okay. The only thing they ever mike in any opera, and only in some of the older operas, is the harpsichord, because it’s a feeble instrument acoustically.
Harley: Is it important to have live, unamplified music as a reference?
Moffat: Oh, absolutely. And that’s the only place you can get it any more. You can’t go to a jazz club without listening to amps. You can’t go to a Broadway show—everybody has microphones taped on. You can’t go to any light opera—Phantom of the Opera—and make judgments because you’re listening to their sound system. But if you go to the real opera and you go to the real classical concerts, there are no mikes. And that’s what tells me. That’s how I know when [the design] is right.
Footnote 1: The software in a DSP-based processor is the list of instructions that tell the DSP chips what to do to the audio data. It is contained in one or more Erasable Programmable Read-Only Memory (EPROM) chips.