I gauged the evolution of Twitter sentiment towards GRO

There’s been relatively limited Twitter activity about GRO so far. Nevertheless, I looked into it.

That’s a snapshot over the period Jul – Oct '21.

In the last few days Twitter interest in GRO has skyrocketed, hence the sentiment fluctuations. If you zoom in, you can see more detail.

That’s a snapshot of the last 9 days (1k tweets out of 1.6k total since July), including the days just before and after the token launch.

The more intense the tweet feed, the clearer the picture. Compare with the Twitter sentiment evolution towards Compound (160k tweets since Mar '20):

As for the method: an autoencoding BERT-derivative language model is pre-trained on a corpus of tweets, then fine-tuned on a different corpus of tweets with sentiment labels from human annotators. The Twitter feed for a particular protocol is scraped with inclusion/exclusion rules on @, #, $ and some other tags. The scraped feed is then run through the language model, and for each data point in the resulting time series a rolling mean is calculated over a heuristically chosen window (there’s a sweet spot: a smaller window gives too much noise, a larger one too little detail).
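To make the pipeline concrete, here is a minimal sketch of the scoring and smoothing steps, assuming a publicly available tweet-tuned sentiment model (cardiffnlp/twitter-roberta-base-sentiment on the HuggingFace hub, a RoBERTa model pre-trained on tweets and fine-tuned on human-annotated tweet sentiment) and a CSV of already-scraped tweets; the model choice, file name and window size are illustrative assumptions, not the exact setup used here:

```python
# A minimal sketch of the scoring and smoothing steps, not the exact pipeline.
import pandas as pd
from transformers import pipeline

# A RoBERTa model pre-trained on tweets and fine-tuned on human-annotated
# tweet sentiment (assumed stand-in for the BERT derivative described above).
sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment",
)

# Assumed CSV of already-scraped tweets with `timestamp` and `text` columns.
tweets = pd.read_csv("gro_tweets.csv", parse_dates=["timestamp"])

# Map the model's labels to a signed score (negative -1, neutral 0,
# positive +1), weighted by the model's confidence.
label_to_score = {"LABEL_0": -1.0, "LABEL_1": 0.0, "LABEL_2": 1.0}
preds = sentiment(tweets["text"].tolist(), truncation=True)
tweets["score"] = [label_to_score[p["label"]] * p["score"] for p in preds]

# Rolling mean over a heuristic window: smaller -> too much noise,
# larger -> too little detail.
series = tweets.set_index("timestamp")["score"].sort_index()
smoothed = series.rolling(window=50, min_periods=10).mean()
```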

I’m working on a tool for protocol exploration, with community exploration as one part of it. I’ll publish the code, comments and a roadmap.

More to come: learn.klimchitsky.com


Really interesting project, subscribed 🙂


It’s gonna be a tool for protocol analysis in the vein of what Gauntlet is doing, but based on a set of DeFi-native premises:

  1. Gauntlet runs their simulations with agent-based models representing users interacting with a protocol. A model of a user is based on a set of theoretical assumptions about user behaviour patterns. This approach was developed for TradFi, where the bulk of real-life data is either not digitised at all (much of the B2C interaction happens offline) or isn’t available (much of the market data is private). Hence, agent-based modelling with theoretical assumptions about incomplete data is justifiable there. For DeFi, however, agent-based modelling is a suboptimal legacy framework. Since all data about transactions and user interactions with a protocol is open and available for modelling, we can learn a model of the living protocol, or parts of it, and models of user interactions with it directly from real-life data (a toy version is sketched after this list). Moreover, this model can be continuously fine-tuned as new data emerges.
  2. The transaction model is only half of the story. The other half is community sentiment, manifested on Twitter, Discord and Discourse. In the offline economy, inflation expectations and consumer sentiment influence consumer behaviour, and the world’s central banks gauge them with polls when modelling national economies. In DeFi we have the luxury of modelling community sentiment not with approximating polls but, again, with real-life data, while constantly fine-tuning the model.
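As a toy illustration of point 1 (learning behaviour from real-life data instead of assuming it), here is a hedged sketch that fits a simple model of withdrawal behaviour from a protocol’s transaction history; the file name, columns, features and model choice are all assumptions made for the example, not the final tool:

```python
# A toy sketch of learning user behaviour from on-chain data instead of
# assuming it. File name, columns, features and model are all hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# One row per user interaction with the protocol, exported from a node or
# an indexer: amounts, gas, wallet age, and an observed outcome label.
txs = pd.read_csv("protocol_transactions.csv", parse_dates=["timestamp"])

# Features observed on-chain; label: did the wallet withdraw within 7 days?
X = txs[["amount_usd", "gas_price_gwei", "wallet_age_days"]]
y = txs["withdrew_within_7d"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Unlike a hand-written agent model, this one can simply be re-fit as new
# blocks arrive, continuously tracking how real users actually behave.
```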

We can learn models of on-chain activity and of community sentiment from real-life data, then merge them to get a true-to-life, DeFi-native model of a protocol which can be constantly fine-tuned. It can then be used to build tools for explorable + explainable DeFi serving the purposes of both DAOs and DeFi investors: running stress tests and alternative scenarios, classifying protocols and tokens, detecting user behaviour patterns, and forecasting certain protocol KPIs, like TVL, both for the protocol itself and for partnering protocols. A hedged sketch of the merge step follows.
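For instance, a minimal sketch of merging the two signals, assuming daily TVL and the smoothed daily sentiment score are already available as CSV files (the file names, features, seven-day horizon and regressor are illustrative assumptions):

```python
# A hedged sketch of the merge step: align daily TVL with the smoothed
# daily sentiment score and forecast TVL a week ahead from lagged values
# of both. File names, features, horizon and regressor are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

tvl = pd.read_csv("protocol_tvl.csv", parse_dates=["date"], index_col="date")["tvl_usd"]
sent = pd.read_csv("daily_sentiment.csv", parse_dates=["date"], index_col="date")["score"]

df = pd.DataFrame({"tvl": tvl, "sentiment": sent}).dropna()
df["tvl_next_7d"] = df["tvl"].shift(-7)       # forecast target: TVL a week ahead
df["tvl_lag_7d"] = df["tvl"].shift(7)         # on-chain activity feature
df["sent_lag_7d"] = df["sentiment"].shift(7)  # community sentiment feature
df = df.dropna()

X = df[["tvl", "tvl_lag_7d", "sentiment", "sent_lag_7d"]]
model = GradientBoostingRegressor().fit(X, df["tvl_next_7d"])

# Stress tests and alternative scenarios follow by replaying the fitted
# model under shocked sentiment or TVL paths.
```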


This sounds good, Pavel! I’d really like to see the end result!
