With the ever-growing electrification of the foreign exchange market, the use of machine learning tools is gathering speed and changing the landscape once more. While early algorithms consisted mostly of buy and sell orders with relatively straightforward parameters, a truly quantitative approach to market making is now making strides in the eFX space.
After the simple first generation of algorithms evolved into more sophisticated strategies offering an increasingly quantitative approach to markets, investors started using dynamic pricing derived from mathematical theory.
The next step was order break-up strategies that minimise market impact and ultimately deliver better entry levels to investors. Slippage on large orders has traditionally been one of the major issues for currency traders.
The latest generation of algorithms gaining popularity with eFX traders is the Time-Weighted Average Price (TWAP) algo, which enables clients to select the time frame over which a trade is executed. A similar tool which has become very popular is the Volume-Weighted Average Price (VWAP) algo, which adjusts the execution schedule according to the expected distribution of trading volume.
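In essence, a TWAP schedule divides a parent order evenly across the chosen time window, while a VWAP schedule weights the child orders by the expected share of volume in each bucket. A minimal sketch in Python, assuming equal-sized time buckets and an externally supplied volume profile (the function names and parameters are illustrative, not any vendor's actual API):

```python
# Illustrative TWAP / VWAP order-slicing sketch (not any bank's implementation).
from datetime import datetime, timedelta

def twap_schedule(total_qty, start, horizon_minutes, n_slices):
    """Split a parent order into equal child orders spread evenly in time."""
    qty_per_slice = total_qty / n_slices
    interval = timedelta(minutes=horizon_minutes / n_slices)
    return [(start + i * interval, qty_per_slice) for i in range(n_slices)]

def vwap_schedule(total_qty, volume_profile):
    """Weight child orders by each bucket's expected share of total volume."""
    total_volume = sum(volume_profile)
    return [total_qty * v / total_volume for v in volume_profile]
```

For example, `twap_schedule(600, datetime(2019, 4, 1, 9, 0), 60, 6)` would place six equal child orders ten minutes apart, while `vwap_schedule(600, [1, 2, 3])` would tilt the same quantity towards the busier buckets.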
Customers of major liquidity providers in the eFX space have been increasingly keen to access such strategies, and banks have consequently been offering their clients a suite of algos designed to minimise market impact.
JPMorgan is sharing some valuable insights into its approach towards servicing its clients trading the foreign exchange market. The company calls its newest generation algo product DNA: Deep Neural Network for Algo Execution. The goal behind it is to enhance the use of FX algorithms and leverage machine learning to combine existing algos into a consistent execution strategy.
The company’s Head of Macro eCommerce, Chi Nzelu, explained that the tool is an optimisation feature which uses simulated data from different market situations and conditions. DNA selects the order placement and execution style best suited to minimising market impact.
“It then uses reinforcement learning – a subset of machine learning – to assess the performance of individual order placement choices,” Nzelu elaborated.
The company elaborated in a statement that while DNA is currently an enhancement for certain existing strategies, JPMorgan’s future goal is to create an all-encompassing algorithm that uses available data to provide users with information to improve execution under various market conditions.
The team developing DNA used reinforcement learning to design the tool. The approach was pioneered by Google’s London-based AI team DeepMind, which developed the AlphaGo software program. The search giant’s machine learning effort made the rounds when it beat the world’s No. 1 Go player, Ke Jie.
JPMorgan explains that the strategists behind DNA used reinforcement learning to deliver an algo with greater reasoning capacity. The approach is materially different from previous generations, which were built on human-based programming or rule-based executions.
The company uses an analogy to describe the approach it is using to perfect its DNA algorithm: teaching a robot how to walk. While rules-based technology would directly program the robot, and second-generation algos would show the robot billions of videos demonstrating how to walk, reinforcement learning throws the robot into different environments and forces it to learn to walk the way a toddler would.
By facing obstacles, falling down, and changing its strategy, the algo improves itself and learns the best approach to a given situation.
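That trial-and-error loop can be illustrated with a toy tabular Q-learning agent that chooses between passive and aggressive child orders to minimise simulated execution cost. Everything here — the fill probabilities, the cost numbers, the state space — is an invented illustration of the general technique, not JPMorgan’s model:

```python
# Toy Q-learning sketch of order-placement choice (all numbers invented).
import random

random.seed(0)
ACTIONS = ["passive", "aggressive"]

def step(remaining, action):
    """Simulated market: aggressive orders always fill but pay the spread;
    passive orders fill only sometimes, at a better price."""
    if action == "aggressive":
        return remaining - 1, -1.0   # certain fill, cost = full spread
    if random.random() < 0.5:
        return remaining - 1, -0.2   # passive fill at a better level
    return remaining, -0.1           # no fill, small waiting cost

def train(episodes=5000, horizon=10, alpha=0.1, gamma=0.95, eps=0.1):
    """Learn a value q[(remaining, action)] by repeated simulated episodes."""
    q = {(s, a): 0.0 for s in range(horizon + 1) for a in ACTIONS}
    for _ in range(episodes):
        remaining = horizon
        for _ in range(horizon * 2):
            if remaining == 0:
                break
            # epsilon-greedy: mostly exploit the best known action, sometimes explore
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(remaining, x)])
            nxt, reward = step(remaining, a)
            best_next = max(q[(nxt, x)] for x in ACTIONS) if nxt > 0 else 0.0
            q[(remaining, a)] += alpha * (reward + gamma * best_next - q[(remaining, a)])
            remaining = nxt
    return q
```

After enough simulated episodes, the table encodes which placement style is cheaper in each state — the same "assess the performance of individual order placement choices" idea Nzelu describes, reduced to its simplest form.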
“Artificial neural networks (ANN), of which DNA is a type, are inspired by the biological neural networks of the brain. They are capable of modelling complex non-linear relationships with little restriction in the inputs, which is useful when trying to model reality because relationships in real life are often complicated,” said Sam Nian, a Lead Strategist in the DNA initiative.
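As a concrete illustration of the non-linearity Nian refers to, a single hidden layer with a ReLU activation can already represent functions no linear model can, such as the absolute value. A self-contained sketch, with weights chosen by hand purely for the example:

```python
# Minimal feed-forward network: hidden = relu(W1 x + b1), out = w2 . hidden + b2.
def relu(x):
    return max(0.0, x)

def forward(x, w1, b1, w2, b2):
    """One hidden layer over input vector x; returns a scalar output."""
    hidden = [relu(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return sum(wo * h for wo, h in zip(w2, hidden)) + b2
```

With `w1 = [[1], [-1]]` and `w2 = [1, 1]`, the network computes `relu(x) + relu(-x) = |x|` — a simple non-linear relationship captured with two hidden units, which scales up to the "complex non-linear relationships" in the quote as layers and units are added.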
Another Lead Strategist on the project, Tanya Tang, explained that instead of relying on statistical regression, supervised learning, and human hard-coded rules, the reinforcement learning approach provides more flexibility and removes potential human bias when training the model.