Algo tagging: where’s the logic in that?
When the German High Frequency Trading Act's requirement to tag algorithms comes into force this month, market participants may well feel hamstrung by the complexity of the regime. And while the regulatory goal of improving market surveillance and reducing systemic risk may be valid, some might wonder whether this requirement goes one step too far.
Requiring firms to tag orders that have been generated by a machine-based process sounds reasonable enough. From the perspective of those charged with maintaining fair and orderly markets, simply knowing that an order, or series of orders, is being generated by a machine can help to explain unusual trading patterns or behaviour that could be the result of a rogue algorithm.
Themes
- Regulators across the globe are looking to place restrictions on, and gain greater visibility into, automated trading strategies
- While there have been some high-profile examples of trading glitches associated with HFT, none to date has posed a systemic risk
- Empirical evidence is divided, but many studies suggest that the adoption of automated trading has contributed to tighter bid/ask spreads
But as with many things, these regulations are likely to prove far more complicated in practice than in theory. The problem lies in determining exactly what constitutes a distinct algorithm, and how to flag the different decision-making processes involved. Eurex issued guidance earlier this month that “the HFT law requires the entire decision path [of an algorithm] to be flagged”. But algorithmic trading is a highly complex discipline.
The decision-making processes around simple time- or volume-slicing algorithms are easier to model, but many statistical arbitrage and market-making algorithms require a multitude of data inputs to inform their decisions, and constant calibration to ensure they are performing profitably.
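To illustrate that asymmetry, here is a minimal, purely hypothetical sketch of a time-slicing algorithm whose entire decision path can be summarised in a handful of static fields attached to each child order. The field names and tag format are assumptions for illustration only, not anything drawn from the Eurex or BaFin flagging specifications.

```python
# Hypothetical sketch: tagging child orders from a simple time-slicing (TWAP)
# algorithm. Field names and the tag format are illustrative only and do not
# reflect the actual Eurex or BaFin flagging specifications.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class ChildOrder:
    instrument: str
    quantity: int
    send_time: datetime
    algo_tag: dict = field(default_factory=dict)  # flag carried on the order


def twap_slices(instrument, total_qty, start, duration_minutes, n_slices):
    """Split a parent order into equal time slices, tagging each child order."""
    interval = timedelta(minutes=duration_minutes / n_slices)
    qty_per_slice = total_qty // n_slices
    for i in range(n_slices):
        yield ChildOrder(
            instrument=instrument,
            quantity=qty_per_slice,
            send_time=start + i * interval,
            # For a TWAP slicer the whole "decision path" reduces to a few
            # static fields; a stat-arb or market-making engine would also
            # have to describe its live data inputs and calibration state.
            algo_tag={
                "algo_id": "TWAP_v1",
                "decision": "equal time slicing",
                "slice": i + 1,
                "of": n_slices,
            },
        )


for child in twap_slices("DE0005557508", 10_000, datetime(2013, 11, 1, 9, 0), 60, 4):
    print(child.send_time, child.quantity, child.algo_tag)
```

Even this trivial example needs several fields to describe what the algorithm did; flagging the "entire decision path" of an adaptive strategy in real time is a different order of problem.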
Typically, the market for any security is in a perpetual state of fluctuation between momentum and mean reversion. Being able to gauge which of those forces is in play, at any given time, is a highly prized skill. Therefore, requiring firms to tag their ‘entire decision path’ in real time would seem an incredibly complex requirement. Worse still, if market participants are forced to focus on correctly tagging their algorithmic decision processes, rather than on calibrating their algorithms to respond more adeptly to changing market circumstances, their performance may suffer.
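To make that calibration burden concrete, here is one crude, purely illustrative gauge of which force is dominant: the sign of the lag-1 autocorrelation of recent returns. The window length and the binary classification are assumptions; real strategies rely on far richer, continuously recalibrated models.

```python
# Purely illustrative: one crude gauge of whether recent trading looks like
# momentum or mean reversion, via the lag-1 autocorrelation of returns.
# The window length and the binary classification are assumptions.
from statistics import mean


def regime(prices, window=20):
    """Classify the recent regime from the lag-1 autocorrelation of returns."""
    recent = prices[-(window + 1):]
    returns = [b / a - 1 for a, b in zip(recent, recent[1:])]
    mu = mean(returns)
    var = sum((r - mu) ** 2 for r in returns)
    if var == 0:
        return "flat"
    cov = sum((r1 - mu) * (r2 - mu) for r1, r2 in zip(returns, returns[1:]))
    return "momentum" if cov / var > 0 else "mean reversion"


prices = [100, 101, 102, 104, 107, 111, 116, 122, 129, 137,
          146, 156, 167, 179, 192, 206, 221, 237, 254, 272, 291]
print(regime(prices))  # accelerating rise -> positive autocorrelation -> "momentum"
```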
An analogy could be made with Formula One racing. In the event of a crash, it would seem reasonable for regulators to want to identify who was driving which car, so they can understand what, or who, caused the crash. But requiring drivers to explain their decision-making processes in real time is a great deal more complicated, and may distract them from their core duty of controlling the car, potentially making the environment more hazardous.
Known unknowns
- What will BaFin and the regulated markets do with the data they collect?
- How will provisions to regulate HFT and algo trading within MiFID II align with the German HFT Act?
- Is there a need for a more coordinated approach across Europe?
Other measures brought about by the German HFT Act, and due to be enacted as part of MiFID II, such as circuit breakers, risk controls, caps on order-to-trade ratios and capital adequacy requirements, are clearly designed to enhance the safety of the markets. Continuing the racing analogy, they would be equivalent to introducing safety cars, advanced braking systems and pit-lane speed limits, and controlling exhaust emissions so that no one can generate smoke screens, all of which sound entirely reasonable.
But exactly what regulators hope to achieve by collecting information about individuals’ decision-making processes is unclear to us at JWG, and we would question whether the trading landscape will become any safer as a result.
It is also worth noting that European regulators have largely been alerted, rightly or wrongly, to the ‘dangers’ of algorithmic and high-frequency trading by incidents that occurred in the US, beginning with the 2010 flash crash and continuing with Knight Capital’s rogue algorithms and the IPO glitches at Facebook and BATS Global Markets. So far, touch wood, incidents of a similar scale and scope have not affected European markets.
In fact, an extensive piece of research commissioned by the UK’s Government Office for Science found that computer-based trading has had several beneficial effects on markets, helping to increase the efficiency of price discovery, improve liquidity provision and bring down transaction costs. It also cautioned that any new regulations ought to preserve those benefits and be introduced in a coordinated, global fashion if they are to meet their objectives, and that the adoption of standards should play a wider role in market infrastructure.
The German HFT Act will certainly provide a test case for MiFID II when the directive is implemented in 2016. But, considering that MiFID II will need further interpretation and implementation by national competent authorities, and that its implementation will come years after the German HFT Act’s, this does not bode well for a standardised and synchronised global approach.