Part 2: Building a Financial Advisor dApp on Avalanche
Now that we understand what we’re trying to build, let’s spend some time looking at the data points we need to access.
Before beginning, we need to understand where our data comes from. It can be divided into two categories:
- On-chain data
- Off-chain data
On-chain data resides on the blockchain, while off-chain data resides in the outside world. An example of on-chain data would be a transaction where one address sends AVAX to another; an example of off-chain data would be the USD price of the AVAX token.
A smart contract can access on-chain data relatively easily; however, to gain access to data that doesn’t reside on the chain, we need an oracle.
Currently no oracle exists within the Avalanche ecosystem. Don’t panic, though! It’s on the way, as per this tweet by Gabriel on 28th Feb 2021.
This means we can’t build our dApp as a smart contract yet. That’s not the end of the world, though: we can still build an off-chain financial advisor that makes recommendations for on-chain actions.
Let’s now explore the data sets we need access to:
- Current Swap Fee distribution for Liquidity Providers
- List of Pangolin Pairs
- Liquidity and Volume of Pangolin Pairs
- PNG Token distribution per Pool
- USD historical price of Pangolin Pairs
I’ve spent some time exploring these data sets and how we can access them. The access methods break down as follows:
- Manually maintained
- Through The Graph
- Via Coingecko’s API
Don’t worry if you don’t understand all the technology involved. I’ll take you through it all step by step.
I’ll be using a GitHub repo to store the code for this application. You can find it here:
Don’t hesitate to open an issue or create a pull request if you want to get involved.
Let’s now walk through populating all the data points.
Manually maintained
Current Fee distribution for Liquidity Providers
If you look at the Litepaper, you can see that swap fees are 0.30% of each trade, of which Liquidity Providers receive at most 0.25%. These figures may change once governance is in place, but let’s assume for now that they’re constant.
So we can load those two figures directly into the constants.ts file in the repository.
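As a rough sketch, assuming a plain constants file (the constant names here are my own invention, so check the repo for the actual ones), it could look like this:

```typescript
// constants.ts — fee figures taken from the Pangolin Litepaper.
// SWAP_FEE is the total fee charged on each trade; LP_FEE is the
// maximum portion that goes to liquidity providers.
export const SWAP_FEE = 0.003; // 0.30% of each trade
export const LP_FEE = 0.0025; // at most 0.25% to liquidity providers
```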
The Graph
The Pangolin Analytics source code uses The Graph extensively throughout the application. I’m going to use The Graph too, so it’s worth providing some context on what The Graph is and how it works.
Let’s look at how The Graph describes its service:
The Graph is an indexing protocol for querying networks like Ethereum and IPFS. Anyone can build and publish open APIs, called subgraphs, making data easily accessible.
Let’s break down a few of those concepts. First, “indexing protocol”: an indexing protocol lets us index data. Think about how Google delivers search results so quickly; it’s because they index the web. In the same way, The Graph indexes blockchain data so that we can query it and get fast results.
Next, APIs. Historically, most web APIs were built using either REST or (God help us) SOAP. They have their place, but in recent years a new way of querying data came about, called GraphQL.
GraphQL greatly simplifies querying and posting data by letting you specify exactly what you want, and it provides a query language that supports SQL-like operations. I’m a big fan and was really glad to see The Graph using it.
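To make that concrete, here’s a minimal TypeScript sketch of a GraphQL request over HTTP. The subgraph URL and field names are assumptions for illustration (they follow the usual exchange-subgraph shape), so treat them as placeholders rather than Pangolin’s confirmed schema:

```typescript
// Minimal sketch of querying a subgraph with a raw HTTP POST.
// Node 18+ ships a global fetch; on older versions, use node-fetch.
// The endpoint and entity names below are illustrative placeholders.
const SUBGRAPH_URL =
  "https://api.thegraph.com/subgraphs/name/pangolindex/exchange";

const query = `
  {
    pairs(first: 5, orderBy: volumeUSD, orderDirection: desc) {
      id
      token0 { symbol }
      token1 { symbol }
      volumeUSD
    }
  }
`;

async function fetchTopPairs(): Promise<void> {
  const response = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await response.json();
  console.log(data.pairs); // the five highest-volume pairs
}

fetchTopPairs().catch(console.error);
```

Notice that the query names exactly the fields we want back and nothing else. That’s the appeal over REST: no over-fetching, and no stitching together multiple endpoints.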
Coingecko’s API
I’m a huge fan of Coingecko’s API and use it a fair bit. Coingecko exposes a REST API rather than the GraphQL API used by The Graph. We’ll be using Coingecko to query historical price data for our tokens.
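As a small sketch of what that looks like: the /coins/{id}/market_chart endpoint is part of Coingecko’s public API, and “avalanche-2” is Coingecko’s identifier for AVAX, though double-check the docs for current parameters and rate limits:

```typescript
// Sketch: fetch historical USD prices for AVAX from Coingecko.
// Node 18+ ships a global fetch; on older versions, use node-fetch.
const COINGECKO = "https://api.coingecko.com/api/v3";

async function fetchAvaxHistory(days: number): Promise<[number, number][]> {
  const url =
    `${COINGECKO}/coins/avalanche-2/market_chart` +
    `?vs_currency=usd&days=${days}`;
  const response = await fetch(url);
  const { prices } = await response.json();
  return prices as [number, number][]; // [unix timestamp ms, USD price]
}

fetchAvaxHistory(30)
  .then((prices) => console.log(prices.slice(0, 3)))
  .catch(console.error);
```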
Wrapping up
There was a lot to take in over the course of this post, but I hope it serves as a good overview of some of the technologies we’ll be using.
In the next post I’ll start building out the queries in Postman so they can be run on a schedule in our TypeScript application. By the end, the TypeScript application will give us a daily report of the historical performance of all the pairs on Pangolin.
Let me know if you have any comments or feedback!