About the Author: Jonathan Brewer is Managing Partner at IS Prime, a provider of aggregated liquidity, prime of prime capabilities, risk management access, and bespoke integration to an institutional client base.
As the cost of hardware, system processing capacity and “cloud computing” continues to fall, the barriers to entry to the aggregation technology market have been falling in parallel.
The natural effect is that many more companies now offer aggregation services. This is beginning to have an impact on the retail broker market, as some retail brokers have started to aggregate the feeds they receive from their Prime of Prime (“PoP”) providers.
Given that PoP feeds are themselves already aggregations of LP feeds, the concept of re-aggregation carries many hidden pitfalls. This article outlines why re-aggregating PoP feeds is not in the retail broker’s interest.
On the surface, there are a few simple reasons why it would seem sensible for a retail broker to aggregate PoP feeds.
- Slight improvement in top of book spreads
- Resiliency if “primary” LP has a technology problem
Slight Improvement in ToB Spreads
This is actually a false economy: the increased toxicity for the recipient LPs means that they widen their spreads, reduce their execution tolerance or ask to come out of the feed altogether. Over time this leads to poor execution quality, and your top of book spreads deteriorate to the same as, or worse than, they would have been with a single PoP. The best and cleanest way to guarantee tight, executable prices is to use a single PoP that offers competitive rates.
Resiliency / Failover
It is important to note that, whilst aggregating two feeds can add resiliency in one sense, it also introduces a single point of failure into your architecture: if the aggregator fails, both your primary and your backup pricing sources become inaccessible.
The most effective failover method is to take an independent backup feed from your primary PoP, so that you can fail over in the event of a failure in the primary environment.
Negative Impact of Re-Aggregation
Even if you were to achieve a sustainable improvement in top of book, it would be more than outweighed by the negative impacts of aggregation.
Holding Positions with 2 LPs (the most tangible problem when aggregating)
- You have to hold margin in 2 places
- It is possible that you could end up with a flat client position, but large equal and opposite positions at your 2 brokers. You would then be making large margin payments to each LP, and also paying swap away. You would also eventually have to pay the spread in order to close positions with both LPs.
- Operational costs and risks associated with having positions at 2 LPs
- You could have a large profit at 1 LP and a large loss at the other, which would give you further funding issues
- You forego the benefits of position netting (loss of swap revenue, higher margin requirements etc.)
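To make the funding point concrete, here is a minimal sketch of the netting effect. The 2% margin rate and 5mm position sizes are hypothetical, purely for illustration, not actual broker terms:

```python
# Hypothetical example: a broker's clients are net flat, but the broker
# ended up long 5mm EUR/USD at LP1 and short 5mm at LP2.
MARGIN_RATE = 0.02  # assumed 2% margin requirement at each LP

positions = {"LP1": 5_000_000, "LP2": -5_000_000}

# Margin is charged on the gross position held at each LP separately.
margin_two_lps = sum(abs(qty) * MARGIN_RATE for qty in positions.values())

# With a single PoP the positions net off, so no margin is tied up.
net_position = sum(positions.values())
margin_single_pop = abs(net_position) * MARGIN_RATE

print(margin_two_lps)     # 200000.0 tied up across two LPs
print(margin_single_pop)  # 0.0 when netted at a single PoP
```

On top of the trapped margin, the broker in this position is also paying swap away on both legs and would pay the spread twice to close out at each LP.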
Splitting your flow – if you do all of your flow with one provider, you naturally have better bargaining power on both liquidity and commissions. You will get better service at lower cost.
Additional technology steps – Adding an aggregator in the technology chain has implied costs, specifically relating to:
- Latency – aggregation providers may claim ultra-low internal latency, but even if everything is co-located and cross-connected (which is expensive), the communications latency cost of adding an aggregator is typically significant, especially during periods of high market activity. This reduces execution quality in the form of higher rejection rates and greater slippage
- Reliability – you are adding an additional potential failure point in the architecture
- Additional costs – there are explicit commission charges for aggregation; 2 layers of aggregation mean 2 layers of cost.
Toxicity of the flow for LPs – people in the retail market often have little sympathy for LPs, but in reality the more the LPs’ interests are protected, the tighter the prices and the better the execution they can provide. It is also not widely understood why double aggregation increases the toxicity of the flow that the LPs end up receiving, so it is worth explaining this phenomenon.
Trade Acceptance “Perfect Storm” – This is a slightly nuanced scenario that is quite difficult to explain, but it is worth understanding because it is the main issue with double aggregation.
Every LP sets a specific trade acceptance tolerance for every feed they send out. This tolerance is applied to the price deviation from the market reference (EBS/Reuters etc.) at the moment each inbound trade is received. For example, with a 0.5 pip tolerance, the LP will fill a trade even if the price is half a pip worse than the market rate when the trade arrives. This is particularly necessary on retail flow, because most trades are executed against quotes sent out up to, and over, half a second previously (due to the internal latency of MT4, end clients’ internet connections etc.); if LPs did not accept trades that are sometimes slightly offside, they would have rejection rates in the region of 50%.
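The acceptance check described above can be sketched as follows. This is an illustrative model only; the pip size, function name and 0.5 pip default are assumptions, not any specific LP's implementation:

```python
# Sketch of an LP's trade acceptance tolerance check (illustrative only).
# A trade is filled if the requested price is no more than `tolerance_pips`
# worse than the current market reference (e.g. EBS/Reuters) for the LP.
PIP = 0.0001  # pip size for EUR/USD

def accept_trade(side: str, requested_price: float,
                 market_price: float, tolerance_pips: float = 0.5) -> bool:
    """Return True if the trade is within the LP's acceptance tolerance."""
    # How far the requested price is offside from the LP's point of view,
    # in pips. Positive = worse for the LP.
    if side == "buy":
        offside = (market_price - requested_price) / PIP
    else:  # "sell"
        offside = (requested_price - market_price) / PIP
    return offside <= tolerance_pips

# Client buys at a quote that is now 0.3 pips below market: filled.
print(accept_trade("buy", 1.10000, 1.10003))  # True
# The same quote 0.8 pips offside breaches the 0.5 pip tolerance: rejected.
print(accept_trade("buy", 1.10000, 1.10008))  # False
```

Note that the check is symmetric around the reference price: trades that arrive in the LP's favour are always filled, and the tolerance only governs how far offside a trade can be before rejection.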
This is particularly relevant to the situation we are looking at here, because trades that LPs receive from a double-aggregated environment are far more likely to fall within the negative tolerance than the positive. The quotes shown in a double-aggregated environment are particularly latent (due to the additional technology steps), and the price the client hits is rarely in the LP’s favour, because a latent quote is only likely to be top of book when it is off market.
This is the dynamic that leads to an increase in the toxicity of the flow, and it is the reason why LPs will normally worsen their trade acceptance tolerance range, which leads to more LP rejections, which in turn leads to slippage and rejections for the client.
Double Hitting LPs
This is a simpler, although equally relevant, concept. It is very hard for an LP to manage its risk if it is showing a very competitive price in, for example, 1mm EUR/USD to IS Prime and also to the other PoP that you are aggregating alongside IS Prime. If a client hits you in 2mm EUR/USD, the same LP receives the trade from both IS Prime and PoP2 at the same time, meaning it is filled in 2 million at a price it was only quoting in 1mm.
This is particularly important because the most competitive LPs often price to PoPs on a risk-reducing basis, meaning they show a skewed, interest-based price in order to exit risk they are holding. This skew works both in the favour of the LP (because it can exit risk it wants to exit) and the client (because a skewed price leads to a tighter top of book spread). If the LP is being double hit, it becomes much harder for it to use your flow as risk-reducing flow, because it cannot control the size in which it gets hit.
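The double-hit scenario can be sketched numerically. The position sizes and PoP names here are illustrative assumptions, used only to show how a risk-reducing quote gets overfilled:

```python
# Illustrative sketch: an LP holding a long 1mm EUR/USD position skews
# its offer and quotes 1mm to two PoPs, hoping to exit its risk. In a
# re-aggregated environment it is top of book on both feeds, so a single
# 2mm client order hits it twice in the same instant.
lp_inventory = 1_000_000  # LP is long 1mm and wants to be flat

# The aggregator splits the 2mm order across both PoP feeds, and the
# same LP sells 1mm on each fill.
fills = {"IS Prime": -1_000_000, "PoP2": -1_000_000}

lp_inventory += sum(fills.values())
print(lp_inventory)  # -1000000: instead of flat, the LP is now short 1mm
```

Instead of exiting its risk, the LP has had its position reversed at a price it skewed against itself, which is exactly why such LPs stop showing skewed, risk-reducing prices into re-aggregated flow.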
Conclusion
While you may see some short-term spread benefit from aggregating feeds that are already aggregated, it is more than outweighed by the negative aspects of aggregating.
Essentially, the reason we avoid pricing into a re-aggregated environment is that we have found that clients who aggregate tend to experience constant problems, which in turn affect us. Re-aggregation creates an order flow that has negative impacts on every participant in the chain: the retail client and broker (increased rejections and slippage), the PoP (a poorer relationship with LPs and therefore worse pricing) and the liquidity provider (increased toxicity).
It is important to consider the bigger picture when thinking about re-aggregating PoP feeds, because there are material weaknesses that are inherent within this model, which eventually end up costing everyone involved.