A very instructive interview with Cliff Chase here; apologies to those who don't understand English:
http://www.musicgearsource.com(...).html
Excerpt:
So our approach is probably somewhat different from
the typical modeler's. First of all, we take a two-stage
approach: preamp and power amp. The preamp modeling
accounts for the equalization of the various stages
and the different nonlinearities present. It has the
usual tone-stack simulation and models the location of
the tone-stack (pre- or post-distortion). There's
also some very special stuff going on in the actual
modeling of the tube itself that I can't really talk
about.
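
As an aside, here is a minimal sketch of the two-stage preamp idea described above: a waveshaper standing in for a tube gain stage, plus a tone control that can sit before or after the distortion. The tanh curve and the one-pole "tone stack" are placeholders of my own, not Fractal's actual tube or tone-stack models.

```python
import numpy as np

def waveshaper(x, drive=5.0):
    """Static nonlinearity standing in for a tube gain stage (placeholder)."""
    return np.tanh(drive * x)

def tone_stack(x, tilt=0.5):
    """Crude 'tone' control: blends the dry signal with a one-pole low-passed copy."""
    y = np.zeros_like(x)
    lp = 0.0
    for n, s in enumerate(x):
        lp += 0.1 * (s - lp)               # one-pole low-pass
        y[n] = tilt * lp + (1.0 - tilt) * s
    return y

def preamp(x, tone_pre_distortion=True):
    """Same two blocks, in either of the orderings mentioned in the interview."""
    if tone_pre_distortion:
        return waveshaper(tone_stack(x))
    return tone_stack(waveshaper(x))
```

Swapping the order changes which frequencies hit the distortion hardest, which is why the tone-stack position (pre- or post-distortion) matters.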
The power amp modeling is where things get really
interesting. A real tube amp is a dynamic beast.
When you get to that sweet spot the amp becomes
sensitive to the nuances of your playing and feels
like an extension of yourself.
Although it requires an enormous amount of
computation, we try to replicate this behavior. The
Axe-Fx power amp modeling simulates the dynamic
behavior of a real tube amp. The frequency response
changes as you play harder. We even model the "sag"
of the power supply and the resulting compression.
Unlike most modelers we have a separate "Master
Volume" control that allows you to adjust how hard
this virtual power amp is driven.
The result is a modeler that has the "feel" of a real
amp. One that responds to your playing and cleans up
if you roll off the volume knob. You can even adjust
the amount of sag to your liking. So you can take a
Plexi model, for example, which in real life has a
solid-state rectifier and make it feel spongy like it
has a tube rectifier.
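
To give a feel for what "sag" plus the resulting compression can look like in code, here is a rough sketch: a virtual supply voltage that droops as the master-volume-scaled signal draws current, shrinking the clipping headroom. The parameter names, time constants, and the 0.3 floor are illustrative guesses of my own, not the Axe-Fx algorithm.

```python
import numpy as np

def power_amp(x, master_volume=2.0, sag_amount=0.2, fs=48000):
    """Toy power amp: a sagging supply lowers headroom, which compresses the output."""
    supply = 1.0                               # normalised supply voltage
    alpha = 1.0 - np.exp(-1.0 / (0.05 * fs))   # ~50 ms supply recovery
    y = np.empty_like(x)
    for n, s in enumerate(x):
        headroom = supply                      # less supply = earlier clipping
        y[n] = headroom * np.tanh(master_volume * s / headroom)
        # The supply recovers toward nominal but is pulled down in proportion
        # to the "current" being drawn; sag_amount sets how spongy it feels.
        supply += alpha * (1.0 - supply) - alpha * sag_amount * abs(y[n])
        supply = max(supply, 0.3)
    return y
```

In a sketch like this, turning sag_amount up makes a stiff, solid-state-rectifier-style clipper feel spongier, and master_volume sets how hard the virtual stage is driven, which is the kind of control described above.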
We also model the effects of the output transformer
and all the other subtleties of a real tube power amp.
We even model minute details like the snubber cap
across the phase inverter. It's painstaking but
offers a greater degree of realism.
The other big factor is just simply the quality of the
components and of the algorithms. With this much
compute power we don't need to skimp on our
algorithms. Digital effects have become much maligned
recently. The manufacturers are primarily to blame
for this. Rampant cost-cutting and overzealous
marketers have forced engineers to cut corners on the
quality of the components and more importantly on the
quality of their algorithms.
For example, I can show you two different ways to
implement a chorus algorithm. One of them will
require ten times the computational power of the other
but sounds much better. I'm sure you can guess which
algorithm most products use since it allows them to
use a lower-cost processor. And this is a real shame
since it has given digital effects a stigma. Analog
effects are in vogue now because guitar players CAN
hear the difference. Marketers advertise "120
simultaneous effects in one rack space" and then
expect the engineers to code that, which means they
have to use low-quality algorithms.
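
To illustrate the kind of trade-off Cliff is describing with the chorus example, here is a sketch of one modulated delay line read back two ways: cheap linear interpolation versus a much more expensive windowed-sinc interpolation. This is my own guess at the sort of difference he means, not the actual algorithms he compared; the sinc version costs roughly an order of magnitude more per sample but reproduces the delayed signal far more accurately.

```python
import numpy as np

def chorus(x, fs=48000, rate=0.8, depth_ms=3.0, base_ms=10.0, high_quality=False):
    """Single-voice chorus: an LFO-modulated delay line mixed with the dry signal."""
    delay = np.zeros(4096)
    y = np.empty_like(x)
    write = 0
    taps = 16                                  # length of the sinc interpolation kernel
    for n, s in enumerate(x):
        delay[write] = s
        d = (base_ms + depth_ms * np.sin(2 * np.pi * rate * n / fs)) * fs / 1000.0
        pos = write - d
        i = int(np.floor(pos))
        frac = pos - i
        if high_quality:
            # Windowed-sinc fractional delay: dozens of multiplies per output sample.
            k = np.arange(-taps // 2 + 1, taps // 2 + 1)
            h = np.sinc(k - frac) * np.hamming(taps)
            wet = np.dot(h, delay[(i + k) % delay.size])
        else:
            # Linear interpolation: two multiplies per sample, audibly duller and dirtier.
            wet = (1 - frac) * delay[i % delay.size] + frac * delay[(i + 1) % delay.size]
        y[n] = 0.5 * (s + wet)
        write = (write + 1) % delay.size
    return y
```

Run both versions on the same track and the work per sample differs by roughly a factor of ten, which lines up with the cheap-versus-good trade-off described in the interview.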