Let’s say the mechanism described by Vitalik is used to allocate funds in a charity DAO, and there’s a group of people who vote for (contribute to) all proposals regarding environmental issues. This clearly indicates interest, but over time their coordination coefficient k makes it so those proposals don’t get subsidized (k = 0). How can an application prevent this from happening?
First of all, there is no “over time”; in the proposal the formula is evaluated separately for each time period, so nothing gets progressively worse for anyone.
But in any case, I’d agree there are cases where there is a genuine disadvantage of the scheme; more precisely, it discriminates against genuinely strong preferences more than CLR does. Note that there is no extra loss from a set of people all contributing $1 to ten separate climate-related projects versus contributing $10 to the same one; in either case, the total subsidies work out to be the same size.
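A quick arithmetic check of that claim, using plain quadratic funding (matched total proportional to the square of the summed square roots of contributions); the donor count and dollar amounts here are illustrative:

```python
# Toy check: $1 to each of ten projects vs. $10 to one project
# yields the same total subsidy under the QF formula.
from math import sqrt

def qf_total(contributions):
    """Matched total under plain QF: (sum_i sqrt(c_i))^2."""
    return sum(sqrt(c) for c in contributions) ** 2

donors = 10

# Scenario A: each of 10 donors gives $1 to each of 10 separate projects.
spread = sum(qf_total([1.0] * donors) for _ in range(10))

# Scenario B: each of the same 10 donors gives $10 to a single project.
concentrated = qf_total([10.0] * donors)

print(spread, concentrated)  # both come out to 1000.0
```

Spreading or concentrating the same dollars changes nothing about the total match, which is the property claimed above.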
I would argue though that in many cases this property of penalizing correlation within groups can be a good thing: it penalizes tribalism (or rather, it doesn’t subsidize tribalism as much as it subsidizes cross-tribe bridge building). Close-knit groups often already have some mechanisms for coordination (social pressure; if we’re talking about local or national groups, then taxes, etc), so they “need the CLR less”. Looking at things from this view, the climate-related projects would benefit from this scheme because they attract constituencies from very different backgrounds, even those that normally agree on little.
If I am correct, this falls short of penalising an actor that can influence multiple groups for and/or against a cause in order to leverage a desired outcome.
For instance, a coordinated attack against any system happens at multiple layers and is phased; there is no consideration here for the differing intervals (“waves”) of forces acting for or against.
The system SHOULD include designs that identify each individual or agent pair, along with the force F of each M or W contribution, to determine the weight function, as well as a wave equation for both propagation and disturbances. This would allow a recalculation to spot and adjust for coordinated-discoordination coefficients and similar branches.
This is no different from identifying manipulation of a climate agenda (good or bad) designed to harness a particular outcome and set of actions. It would make more sense to include a metric measuring the force, et al.
What do you mean by “influence” here? Putting out propaganda on social media to influence people’s level of support for certain projects? Trying to understand better what you’re trying to mitigate.
“Influence” i.e “effect”.
For example, I want a solution to distribute fair value amongst users (gamers & customers), retailer(s), and the game (publisher, dev, or company if indie).
If the game reward is an item from a retailer and suitable for the user, how many retailers can thrive in an environment where more money can effect (influence) which game publishers and retailers develop partnerships? This squeezes the little guys (publishers and retailers with less money) into earning less and receiving less transactional value from the platform I would like to build.
It’s also the problem across many markets where advertising dollars are the “force” that pulls or pushes a product the “distance”. This directly influences the “work” done.
How can it be made fair if the effect and force aren’t calculated? It requires either an ordinary or a partial differential solution for force and effect on work done.
Also, consider the role of the system or platform here; it too needs to be fairly scrutinized.
I don’t believe the “anonymous” part is a necessary requirement. The identity system that is valuable to this design is one where a series of actions by any particular agent is traceable, and the interactions with other agents within the system are also traceable. However, traceability outside the system is not necessary. More practically speaking, what this means is that a contributor to a project can create a new pseudonymous identity for a new project that can remain mutually exclusive from their contributions to another project. The merits of the pseudonymous identity would then be directly proportional to contributions made to that single project only.
There’s some work being done on this idea for the did:git method. The nice aspect of this thinking is that it allows a contributor to maintain an identity separate from other contexts or other projects, but doesn’t disallow a contributor from carrying an identity across if they choose.
To phrase this differently (as I understand it), the mechanism described here limits the ability for two (or more) agents to trivially coordinate to extract value from the system (at the cost of inadvertently penalizing tribes). It doesn’t make the system coordination-proof, however: if I and a co-conspirator wish to extract value from the system, then we should separately run influence campaigns to encourage innocent third-parties to support ~fake projects of our choosing. By ensuring that we never contribute to the same project, we avoid the pairwise penalty; instead, the bulk of the contributions come from third-parties with whom we are not normally associated, meaning those projects still receive large subsidies.
In essence though this is no different from existing “pump and dump” schemes and so maybe isn’t such a big criticism (IMO one shouldn’t critique a mechanism for perpetuating existing problems, assuming it solves at least some others). The mechanism proposed makes trivial extraction (modulo an identity system) more difficult, and that seems like A Reasonably Good Thing.
(at the cost of inadvertently penalizing tribes)
I’d argue that penalizing tribes is a benefit, not a cost. The reason why is that it’s not penalizing tribes; tribes can still extract quite a lot of subsidies from the scheme for their projects, they just get subsidized less than if they were fully independent actors. This makes sense, because tribes already have internal mechanisms for cooperating, so they need the mechanism’s help less; there are fewer unsatisfied opportunities for very-high-value public goods within tribes than there are between tribes.
if I and a co-conspirator wish to extract value from the system, then we should separately run influence campaigns to encourage innocent third-parties to support ~fake projects of our choosing
Agree that there is this risk. But is this risk solvable in any public goods producing mechanism? It seems fundamentally impossible to distinguish between a project which is a genuine public good that benefits N people that signal the fact that they would benefit, and a fake public good that has tricked N people into believing that it’s good for them.
The one technique I can think of for mitigating this further is adding a time-based component: when you contribute $x then in N years you get refunded a percentage based on what percentage of people at that time think that the project was a good idea. I think ideas like that involving mixing together quadratic funding/voting for preference aggregations and some form of prediction markets for eliciting predictive information and rewarding competence could be really interesting.
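A minimal sketch of the time-based refund idea above, assuming a simple linear refund schedule (the function name and the numbers are made up for illustration):

```python
# Time-delayed refund sketch: a contributor's refund grows with the
# fraction of later evaluators who retrospectively approve the project.
# A linear schedule is an assumption; a prediction-market payout curve
# could be substituted.
def refund(contribution, approvals, evaluators):
    """Refund a share of the contribution proportional to later approval."""
    approval_rate = approvals / evaluators
    return contribution * approval_rate

# $100 contribution; N years later, 30 of 40 sampled evaluators approve.
print(refund(100.0, 30, 40))  # -> 75.0
```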
Sure, I can grok how inter-tribe coordination is easier and so incentivization should be reserved for inter-tribe coordination which is harder but arguably more valuable. If you wanted to you could probably model this with some type of gaussian mixed-membership model where everyone is part of many tribes and the subsidy k is a function of the hidden tribal memberships, which you observe via voting patterns. Computationally it would be more efficient to calculate k as a product of the tribal membership vectors and some weight matrix than to iterate over all pair-project triples but that is a conversation for another day.
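For what it’s worth, the mixed-membership idea above can be sketched roughly as follows; the membership matrix here is randomly generated rather than inferred from voting patterns, and the weight matrix is just the identity, so treat every number as illustrative:

```python
# Sketch: score each donor pair's coordination as a bilinear form over
# soft tribal-membership vectors, computed in one matrix product rather
# than an explicit loop over all pairs.
import numpy as np

rng = np.random.default_rng(0)
n_donors, n_tribes = 5, 3

# Row i = donor i's soft membership across tribes; each row sums to 1.
# (In practice this would be inferred from observed voting patterns.)
M = rng.dirichlet(np.ones(n_tribes), size=n_donors)

# W[a, b] = how strongly shared membership in tribes a and b counts as
# "coordination"; identity => only same-tribe overlap counts.
W = np.eye(n_tribes)

# Pairwise coordination scores for all donor pairs at once.
coord = M @ W @ M.T

# Subsidy coefficient: shrinks toward 0 as inferred coordination grows.
k = 1.0 - coord
print(k.shape)  # (5, 5)
```

With the identity weight matrix and rows summing to 1, every coordination score lands in [0, 1], so k stays in [0, 1] as well.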
I also agree that the problem of extra-model coordination is profoundly hard (much like identity, which IMO can only be asymptotically solved, i.e. never perfectly but increasingly well), which is why I said that we shouldn’t fault mechanisms for perpetuating existing problems, provided they solve at least one existing problem without re-introducing problems we’ve previously solved.
Regarding the time component, I strongly agree. Inasmuch as the world is fundamentally a process and yet all of our models of the world are static (to even model time, we must fix it with a symbol like t), the inclusion of actual time IMO imbues models with an essential dynamism. Along these lines, I think concepts like conviction voting will lead to invaluable tools for (cheaply) incorporating valuable (and hard to manipulate) information into models. I riffed a bit on this theme a few years back: http://kronosapiens.github.io/blog/2017/09/14/time-and-authority.html
For all those who are scared of mathematical formulas, this article shows the theory in practice! A must-read all the way to the conclusion! https://vitalik.ca/general/2019/10/24/gitcoin.html
Enjoyed reading all this other than the formulas.
My friend would say “Urbit fixes this”, but I would say “Diversification fixes this”:
Could the holy grail be achieved with diversification?
Example… all in one round together:
- 20% traditional CLR matching (whales can’t run the show)
- 20% pairwise-bounded quadratic matching (teams can’t team up)
- 20% CLR matching with Negative votes (shorts allow a free market)
- 10% single matching (good ol’ days)
- 10% 3x matching (encourages donations larger than 1 Dai)
- 10% Randomized matching (introduces a lottery element - play lotto for your cause!)
- 10% Sample-vote matching (David Chaum knows what he’s talking about)
- Hard to game; a gamer might go in circles
- With the right optimization may not need identity???
- Diversification often makes things better - the game-able characteristics of certain strategies would be highly reduced by the other strategies.
DeDivGiv = Decentralized Diversified Giving
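One way to picture the blended round: run each strategy over the same contribution data and mix the per-project results by the stated weights. A toy sketch, with only three of the seven strategies stubbed in and made-up data:

```python
# Diversified matching sketch: blend several matching strategies by
# weight. The strategy functions are stand-ins; real CLR / pairwise /
# negative-vote / randomized implementations would plug in instead.
from math import sqrt

contributions = {            # project -> list of donations (toy data)
    "projA": [1, 4, 9],
    "projB": [16, 1],
}

def clr(cs):                 # plain quadratic match
    return sum(sqrt(c) for c in cs) ** 2

def single(cs):              # 1:1 matching
    return sum(cs)

def triple(cs):              # 3x matching
    return 3 * sum(cs)

# (weight, strategy) pairs; the remaining strategies are omitted here.
portfolio = [(0.2, clr), (0.1, single), (0.1, triple)]

blended = {
    p: sum(w * f(cs) for w, f in portfolio)
    for p, cs in contributions.items()
}
print(blended)
```

Because each strategy only controls a slice of the pot, gaming any single one moves at most that slice, which is the “hard to game” intuition above.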
Hi from Gitcoin.
Our resident Data Scientist, Frank Chen, ran the Gitcoin Grants Round 7 data in a split test where we looked at what the difference in results would be with and without pairwise bounding. (The official results, detailed on Vitalik’s blog, were pairwise bounded.)
Here is the dataset, and here is a TLDR graph:
^ thanks for posting kevin
summary, using Gitcoin Grants R7 data:
- differences between pairwise and regular QF show that negative changes are more common (meaning normal QF generally awards more), but they’re all in the <= $100 range.
- the effect above occurs with grants that generally have fewer contributions.
- large $ differences don’t have a definite correlation with the percent difference, but it’s still important to note that if we are in favor of removing pairwise, we want small absolute differences in $ amounts.
- the strongest correlation was that the greatest percent differences seem to occur with high average contribution sizes, which might suggest that regular QF vs pairwise would disproportionately affect whale donators, but this leaves average contribution sizes (total/num contributions) < 50-ish generally unchanged.
- it seems that the largest differences affect about 15-20% of tech grants in this example, so about 15 out of the 99 that received a CLR match.
The main objective was to figure out the tradeoff between removing pairwise matching (at least on the Gitcoin App itself) vs. gaining calculation speed to show “live” calculations.
Pairwise matching is quite compute-intensive: every time the mechanism runs, we have to evaluate every unique contributor paired with every other unique contributor, per grant. Removing it would provide significant speed gains. However, we would lose the natural anti-collusion checking capabilities of pairwise.
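To make that cost concrete, here is a rough sketch of the pairwise-bounded computation as I read the original proposal (the attenuation constant M and the toy data are illustrative, not Gitcoin’s actual parameters): every pair of contributors per grant, with each pair needing a pass over all grants.

```python
# Pairwise-bounded QF sketch: a pair's weight k_ij shrinks with how
# much the two donors' giving overlaps across ALL grants, which is
# exactly what makes the computation heavy.
from itertools import combinations
from math import sqrt

# contributions[grant][donor] = amount (toy data)
contributions = {
    "g1": {"alice": 4, "bob": 4, "carol": 1},
    "g2": {"alice": 9, "bob": 1},
}

M = 10.0  # attenuation constant: smaller M -> harsher pairwise bound

def pair_overlap(i, j):
    """Total sqrt-overlap of donors i and j across all grants."""
    return sum(
        sqrt(cs[i] * cs[j])
        for cs in contributions.values()
        if i in cs and j in cs
    )

def pairwise_match(grant):
    cs = contributions[grant]
    total = 0.0
    # O(n^2) pairs per grant, each requiring a pass over all grants.
    for i, j in combinations(cs, 2):
        k_ij = M / (M + pair_overlap(i, j))
        total += k_ij * sqrt(cs[i] * cs[j])
    return total

print({g: round(pairwise_match(g), 2) for g in contributions})
```

Plain QF only needs one pass per grant, so dropping the pairwise bound removes the quadratic blow-up in the number of contributors.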
Working on a quadratic weighted staking protocol for DAOs to direct funding in a more dynamic way. Reach out if you’ve thought about this more.
working on something similar. How can I get in touch with you?
The saying goes “great minds think alike”, so I think what @Gerstep raised is a major problem: we punish random people with similar tastes and therefore similar contributions across various projects, which is an obvious thing to happen.
This is combined with the problem that it would, I guess, be trivial for me or anyone else to ask multiple third parties to officially make the contribution on my behalf if I seriously wanted to game the mechanism, hiding the correlation/pairing (?).
If these problems cannot be solved properly, I am worried whether the mechanism could ever be a big step towards solving the underlying problem with reasonable satisfaction. Very happy to be proven wrong!
If the voter sample remains the same, those voting systematically against environmental issues will also see their weights reduced.
The weight grows for those changing their mind.
Intuitively, this sounds pretty good.
Check out this post on the intersection of pairwise quadratic funding, soulbound tokens, and quadratic funding for Plurality: “How Soulbound Tokens Can Make Gitcoin Grants More Pluralistic” on the Gitcoin Governance forum. Soulbound attestations can significantly enrich the evidence for pairwise cooperation in quadratic funding ecosystems.
In turn, the improved evidence can algorithmically make the pairwise matching weights way more accurate to gradually increase cooperation across differences and mute established relationships.
in the gitcoin grants system, we use a modified version of the QF mechanism to allocate our matching funds to grants.
what if we used NLP to generate embeddings of the descriptions of the grants each user donates to? then we can use some distance metric like cosine similarity or euclidean distance to determine how close in concept they are to each other’s ideals.
more formally, we can set the subsidy coefficient (k[i, j]) for the user pairs as the normalized (relative to all other user pairs) cumulative distance between the set of grant descriptions they contributed to.
this ties in closely with the “social distance score” that glen weyl speaks of.
let’s say you have user 0 and user 1: user 0 donated to a set of grants G0 and user 1 donated to a set of grants G1. for each grant in G0, use an NLP model to embed (turn the text into a vector) the grant description; do the same for each grant in G1. then, use a nested loop to compare every embedding (text vector) in G0 against every embedding in G1. “compare” in this case means calculating the distance (euclidean/cosine similarity) between them, then summing up those distances (which are scalar values).
that sum will be unique to every user pair, so you can use this as the “k[i,j]” coefficient for users i and j.
this gets you a social distance measure, albeit in O(N^4) time. there are a few optimizations that can be done.
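A stdlib-only sketch of that pipeline, with a toy bag-of-words vectorizer standing in for a real NLP embedding model, and with the normalization across all user pairs omitted for brevity:

```python
# Embedding-distance sketch: cumulative cosine distance between two
# donors' grant descriptions. `embed` is a deliberately crude stand-in;
# a sentence-embedding model would replace it in practice.
from collections import Counter
from math import sqrt

def embed(text):
    """Toy stand-in for an NLP embedding: a word-count vector."""
    return Counter(text.lower().split())

def cosine_distance(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb)

def pair_distance(grants_i, grants_j):
    """Cumulative distance between two donors' grant descriptions."""
    total = 0.0
    for di in grants_i:              # the nested loop described above
        for dj in grants_j:
            total += cosine_distance(embed(di), embed(dj))
    return total

user0 = ["reforest the amazon basin", "solar panels for schools"]
user1 = ["plant trees in the amazon", "open source wallet library"]
print(pair_distance(user0, user1))
```

Normalizing these sums across all user pairs (as described above) would then give the k[i, j] coefficients directly.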