Quantitative modeling
We start every quantitative modeling project by asking the obvious but often neglected question of whether the phenomenon can and should be modeled quantitatively. Some phenomena are non-stationary, and some situations lack sufficiently abundant, high-quality data; these do not lend themselves to quantitative modeling. Nothing is worse than a quantitative model that does not work, is not understood intuitively, and blinds decision-makers. Our background as practitioners of the dismal science makes us hypersensitive to blind quantitative modeling. On the other hand, our experience in quantitative finance and derivatives pricing shows that mathematical methods can work remarkably well, until they do not.
We approach real-world questions with humility, an open mind, and a promiscuous disregard for disciplinary boundaries, happily mixing methods and approaches from physics, economics, social science, finance, statistics, and mathematics.
Our guiding principle is careful assessment: we aim to produce models that are sufficiently good for a given purpose and for a limited time. This view encompasses respect for the non-stationary nature of the world, cost efficiency, and client value.
As for our process, we first take the existing situation apart, then remodel and rethink the components of the problem, sometimes replacing and often refining existing models as we strive for incremental change.
Our approach to model performance and risk is that of large-perturbation dynamic scenarios. In other words, we stress our models with large changes to current input values and evaluate them across paths of predicted core driver variables, from which we infer other model inputs. This allows us to assess our models statistically across forward-looking paths. We also carry out traditional backtesting, but in many instances the lack of historical data makes such exercises difficult: with small and incomplete historical datasets, it is hard to both calibrate and backtest a model. In such cases, we often use historical Monte Carlo to generate statistically equivalent datasets.
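To make the two ideas above concrete, here is a minimal sketch in Python. The function names (`block_bootstrap`, `stress`), the block-bootstrap flavor of historical Monte Carlo, and the toy model in the usage example are our illustrative assumptions, not a description of any specific proprietary implementation.

```python
import random


def block_bootstrap(series, block_size, n_paths, path_length, seed=0):
    """Historical Monte Carlo sketch: generate statistically equivalent
    paths by resampling contiguous blocks of the observed series, which
    preserves short-range dependence up to the block length."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        path = []
        while len(path) < path_length:
            # Pick a random contiguous block from the historical data.
            start = rng.randrange(0, len(series) - block_size + 1)
            path.extend(series[start:start + block_size])
        paths.append(path[:path_length])
    return paths


def stress(model, base_inputs, shocks):
    """Large-perturbation scenario sketch: re-evaluate the model after
    applying a large relative shock to each input, one at a time."""
    results = {}
    for name, shock in shocks.items():
        perturbed = dict(base_inputs)
        perturbed[name] = perturbed[name] * (1.0 + shock)
        results[name] = model(**perturbed)
    return results


# Usage with a toy two-input model (purely illustrative).
toy_model = lambda rate, vol: rate - 0.5 * vol ** 2
scenarios = stress(toy_model, {"rate": 0.02, "vol": 0.2}, {"vol": 0.5})
paths = block_bootstrap([0.01, -0.02, 0.03, 0.00, -0.01, 0.02], 2, 100, 12)
```

In practice one would apply shocks jointly along simulated driver paths rather than one input at a time, but the separation above keeps the two building blocks visible.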