An analysis of Ethereum, EOS, and TRON dApps
2019 was pegged as the Year of the Decentralized Application (dApp).
The ICO fundraising hype train had completely derailed by late 2018, leaving investors who had bet big on a whitepaper and a dream with little to show but blood-red portfolios. So in the new year we needed something tangible, something real. Not a PROJECT, but a PRODUCT.
In stepped the dApp, and to a large extent the early success of these (d)applications in DeFi, Gambling, Gaming, and Social Media has improved sentiment about where our industry is headed in both the near and long term.
But any nascent industry still has issues, and one that my colleagues at ICO Alert and I have debated at length is how the average user can go about finding great dApps.
Imagine going to Amazon.com and not knowing whether every single review is fake, or Yelp and deciding between a few restaurants whose star-ratings are 90% inflated. Your UberBlack driver faked his rating and shows up in a Pinto (no offense to Pintos).
Our society has successfully decentralized the review process of traditional business, to the end benefit of both consumers and the great businesses that deserve to be recognized. But in blockchain, a new user still has no way to efficiently parse the misinformation and inflated statistics displayed by many dApps.
In my search for answers, I analyzed the oft-criticized industry of blockchain gambling, taking the Top 4 gambling dApps (7d Users) on the Ethereum, EOS, and TRON networks.
I wanted to know a few things:
- Who were the biggest outliers in the statistics they were displaying to the community?
- What was the average $/user on each network?
- Could I find a $/user average across all networks that would signify what a “healthy” vs. “unhealthy” dApp comparison looks like?
Data sourced from dapp.com
$161 average per user without outliers
As you can see, there were significant differences in statistics between dApps, both inter- and intra-network. However, when a large difference emerged intra-network, I classified that dApp as an “outlier” and recalculated the average without its statistics.
The theory here is that any dApp with more than $1,500/user is “whaling” (inflating volume by bankrolling high-net-worth users), while any dApp with less than $2/user is “juicing” (inflating user counts by having fake accounts interact with recorded wallets). This is not an automatic admission of guilt, but the statistics give us a clearer picture of the battleground.
After recalculating without outliers on each network, we get a more believable $/user comparison, with EOS seemingly having the most feverish gambling base. We also arrive at possibly the most important statistic: $/user across all networks, without outliers, is $161.
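To make the screen concrete, here is a minimal Python sketch of the outlier filter described above. The $1,500 and $2 thresholds come from the analysis; the dApp names and the volume/user figures are hypothetical placeholders, not the actual dapp.com data. Whether to average per-dApp ratios or aggregate totals is a design choice; this sketch divides total volume by total users.

```python
# Sketch of the "whaling"/"juicing" outlier screen described above.
# All dApp figures below are illustrative placeholders.

WHALING_THRESHOLD = 1500  # $/user above this suggests bankrolled whales
JUICING_THRESHOLD = 2     # $/user below this suggests fake accounts

# Hypothetical per-dApp stats: (name, 7d volume in $, 7d users)
dapps = [
    ("dapp_a", 900_000, 5_000),
    ("dapp_b", 4_000_000, 2_000),   # $2,000/user -> likely "whaling"
    ("dapp_c", 300_000, 200_000),   # $1.50/user  -> likely "juicing"
    ("dapp_d", 450_000, 3_000),
]

def is_outlier(volume: float, users: int) -> bool:
    """Flag a dApp whose $/user ratio falls outside the healthy band."""
    ratio = volume / users
    return ratio > WHALING_THRESHOLD or ratio < JUICING_THRESHOLD

# Recalculate the network average with outliers excluded.
kept = [(v, u) for _, v, u in dapps if not is_outlier(v, u)]
avg = sum(v for v, _ in kept) / sum(u for _, u in kept)
print(f"$/user without outliers: ${avg:.2f}")
```

With these placeholder numbers, two of the four dApps are excluded and the remaining average lands in the same healthy band the article describes.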
For reference, I’ve also added statistics from the Top 4 (7d Users) exchanges across each network:
Data sourced from dapp.com
In conclusion, the industry needs better review processes that hold ALL decentralized applications accountable for the information they release externally, especially when that information can affect, for better or worse, where consumers spend (and gamble) their assets. Until then, developing industry averages for what “healthy” dApp analytics look like may protect users from being taken advantage of when searching for great dApps.