A Parameter Research [Month 1 Progress Update]
Cognitive Atom
Summary
This is our first monthly update on our research into optimizing the amplitude (A) parameter for Curve pools. This month, our efforts have focused on (a) initial development and optimization of pool simulation code and (b) acquisition of high-quality market-wide exchange rate data. We have succeeded on both fronts and expect to post finalized simulation code to GitHub within the next week. This code will form the basis for our GUI, which will allow users to estimate optimal A parameters for current and proposed pools. We aim to have a webpage with an initial GUI running within one month.
Code Optimization
To ensure scalability to many pools, we have prioritized highly efficient code. The primary components to be optimized are (a) the pool functions themselves (i.e., simulation.py), (b) code simulating an optimal arbitrageur, and (c) code that finds the most profitable A parameter given a particular price feed and optimal arbitrage.
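To give a sense of what component (a) involves, the core of a pool simulation is solving Curve's StableSwap invariant for the quantity D. The sketch below is our own reconstruction of that solve using the usual fixed-point iteration, with illustrative names; it is not the project's actual code.

```python
# For n coins with balances x_i, the StableSwap invariant is:
#   A*n^n * sum(x_i) + D = A*D*n^n + D^(n+1) / (n^n * prod(x_i))
# D is found by iteration over integer arithmetic (hence Python's
# arbitrary-precision ints matter for performance).

def get_D(balances, A, n_iter=255):
    """Solve the invariant for D; assumes all balances > 0."""
    n = len(balances)
    S = sum(balances)
    if S == 0:
        return 0
    D = S  # initial guess: sum of balances
    Ann = A * n ** n
    for _ in range(n_iter):
        # D_P = D^(n+1) / (n^n * prod(x_i)), built up incrementally
        D_P = D
        for x in balances:
            D_P = D_P * D // (n * x)
        D_prev = D
        D = (Ann * S + D_P * n) * D // ((Ann - 1) * D + (n + 1) * D_P)
        if abs(D - D_prev) <= 1:
            return D
    return D
```

For a perfectly balanced pool, D reduces to the sum of balances, which makes a convenient sanity check.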
We compared a number of different implementations in Python and Julia and found, to our surprise, that the Python implementations were reliably fastest. For example, when simulating a sequence of trades, an optimized port of simulation.py to Julia was 30-40% slower than the original. We suspect this is due to Julia's handling of large integers (type BigInt), which receives fewer optimizations than regular Ints. We also found that Python's pandas package outperformed Julia for data manipulations (e.g., reformatting price feeds from different APIs). We have therefore proceeded with Python and are experimenting with PyPy for JIT compilation. Code for (b) and (c) is implemented but could be faster; we expect to release optimized versions on GitHub in ~1 week.
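At a high level, component (c) amounts to scanning candidate A values and keeping the one that maximizes simulated returns under a given price feed. The hypothetical sketch below shows that outer loop only; `simulate_pool_revenue` stands in for the full arbitrage simulation and is not the actual code.

```python
def best_A(price_feed, candidates, simulate_pool_revenue):
    """Return the candidate A with the highest simulated revenue."""
    results = {A: simulate_pool_revenue(price_feed, A) for A in candidates}
    return max(results, key=results.get)

# Toy stand-in revenue function that peaks at A = 100.
def toy_revenue(feed, A):
    return -(A - 100) ** 2

print(best_A([], [10, 50, 100, 500], toy_revenue))  # → 100
```

In practice the expensive part is each call to the simulation, so the scan parallelizes naturally across candidate A values.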
Data Acquisition
We compared a number of APIs for historical and real-time price feeds. Our primary requirements are (a) high temporal resolution, (b) broad token coverage, and (c) broad market coverage, including both CEXs and DEXs. Given these requirements, nomics.com proved the best option, providing minute-resolution historical data, broad market coverage across CEXs and DEXs, and data for any coin traded on any covered exchange.
Both coingecko.com and coincap.io provide free historical data, but Coingecko's intervals were unpredictable (e.g., varying randomly between 5m and 1h), and Coincap lacked data for many of the newer coins available on curve.fi. Coinmarketcap.com requires a prohibitively expensive enterprise subscription for our historical data needs.
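Irregular intervals of the kind described above are straightforward to normalize with pandas, which is one reason we settled on it for feed reformatting. The sketch below, with illustrative data and column names not tied to any particular API schema, resamples an uneven feed onto a fixed 5-minute grid:

```python
import pandas as pd

# An irregular feed: gaps jump from 5 minutes to 30 minutes to an hour.
raw = pd.DataFrame(
    {"price": [1.001, 0.998, 1.002, 0.999]},
    index=pd.to_datetime(
        ["2021-01-01 00:00", "2021-01-01 00:05",
         "2021-01-01 00:35", "2021-01-01 01:35"]
    ),
)

# Resample to 5-minute bars, forward-filling bars the source skipped.
regular = raw.resample("5min").last().ffill()
```

The result covers 00:00 through 01:35 at a uniform 5-minute step (20 rows), with skipped intervals carrying the last observed price forward.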
One potential downside of using a paid nomics subscription is that we cannot publicly share the raw data used in simulations. However, we intend to make charts of all data viewable in the browser, and to allow users to run simulations through the browser. We have also built the codebase in a modular fashion so that a public API can be substituted in the future.
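The modular substitution mentioned above can be pictured as a thin data-source interface that the simulation code depends on, so the paid nomics source can later be swapped for a public API without touching the simulations. All class and method names below are hypothetical, not the project's actual structure.

```python
from abc import ABC, abstractmethod

class PriceFeed(ABC):
    """Minimal interface the simulation code would depend on."""

    @abstractmethod
    def get_prices(self, pair, start, end):
        """Return a list of (timestamp, price) tuples for `pair`."""

class NomicsFeed(PriceFeed):
    def get_prices(self, pair, start, end):
        return []  # would call the paid nomics API here

class PublicFeed(PriceFeed):
    def get_prices(self, pair, start, end):
        return []  # drop-in public replacement with the same shape
```

Because both sources satisfy the same interface, swapping them is a one-line change at the point where the feed is constructed.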
Roadmap
Having developed the core codebase, our goal over the next month is to make it publicly usable. Following some further optimization, we will provide our simulation code on GitHub in about 1 week. We will then commit our time primarily to building and hosting a GUI for all current and proposed pools.
In subsequent months, we will focus on (a) developing methods for estimating optimal A without running full simulations (i.e., based on the distribution of historical prices), and (b) expanding the GUI so that users can estimate optimal A for custom pools.
We would like to thank the curve.fi team and community for supporting this research. We are happy to discuss any questions, comments, or suggestions either here or on the curve.fi Telegram and Discord channels.