Contributor impact measurement
An impact measurement approach that assesses how much impact a contributor generates through their contribution efforts
Overview
Contributor impact measurement focuses on measuring the amount of impact generated by a person's contribution efforts across an ecosystem.
Moderate impact measurability (Score - 3)
Total impact measuring complexity - The other approaches reveal that it is at least moderately complex to determine the total impact that an executed idea or addressed priority has generated. This complexity carries over to determining a contributor's total generated impact, since it would be useful to know what impact their contribution outcomes have made for the ecosystem. Tracking quantifiable contributions could make it achievable to understand what percentage of an outcome a contributor is responsible for, but this still doesn't mean an accurate amount of total impact generated will be easy to determine. A contributor can support many different execution outcomes, which further increases the complexity due to overlapping efforts across different ideas. The quality of the contribution efforts is another important factor that influences the actual amount of impact a contributor has generated.
Ease of measurement - The quantifiable contributions could be somewhat easy to measure. The problem with these measurements is that they are not highly valuable on their own without further analysis of the quality of the contributions being made. One developer writing twice as much code as another means very little without understanding the complexity, novelty and thought required to write that code.
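As a rough, hypothetical illustration of how a quality adjustment changes the picture, the sketch below weights each contributor's raw output by a reviewer-assigned quality rating before computing their share of an outcome. The contributor names, numbers and weighting scheme are assumptions for illustration, not part of any existing tooling.

```python
# Hypothetical sketch: quality-weighted contribution shares for one executed idea.
# Raw output counts and reviewer quality ratings (0.0 - 1.0) are assumed inputs.

raw_contributions = {
    # contributor: (units of quantifiable output, reviewer quality rating)
    "alice": (40, 0.9),   # fewer units, but more complex and higher quality work
    "bob":   (80, 0.4),   # twice the units, but simpler and lower quality work
}

def weighted_shares(contributions):
    """Return each contributor's share of the quality-weighted output."""
    weighted = {name: units * quality for name, (units, quality) in contributions.items()}
    total = sum(weighted.values())
    return {name: value / total for name, value in weighted.items()}

print(weighted_shares(raw_contributions))
# alice ends up with the larger share (~0.53) despite producing half the raw output.
```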
Comparability - If contributions can be measured sufficiently, there is a moderate ability to compare the contribution outcomes and potential impact of different contributors. One way comparability can be improved is by adopting a monthly contribution log so that one contributor's outputs become easier to compare with another's.
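A minimal sketch of what a monthly contribution log could look like, assuming a simple per-month record with a few quantifiable fields and a free-text summary for harder-to-track work; the structure and field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class MonthlyLog:
    """One contributor's logged outputs for a single month (illustrative fields)."""
    contributor: str
    month: str                      # e.g. "2024-05"
    merged_changes: int = 0         # quantifiable output that can be tracked online
    reviews_completed: int = 0
    qualitative_summary: str = ""   # discussion, planning and other harder-to-track work

def compare_month(logs: list[MonthlyLog], month: str) -> list[MonthlyLog]:
    """Line up every contributor's log for the same month so outputs can be compared."""
    return sorted(
        (log for log in logs if log.month == month),
        key=lambda log: (log.merged_changes, log.reviews_completed),
        reverse=True,
    )

logs = [
    MonthlyLog("alice", "2024-05", merged_changes=6, reviews_completed=12,
               qualitative_summary="Led planning discussions for the grants process"),
    MonthlyLog("bob", "2024-05", merged_changes=9, reviews_completed=3),
]
for log in compare_month(logs, "2024-05"):
    print(log.contributor, log.merged_changes, log.reviews_completed)
```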
Very high future impact opportunity (Score - 5)
Usefulness for future decisions - Measuring the impact of different contributors effectively makes it possible to identify which contributors are most consistently associated with generating impactful outcomes. This is highly useful for a funding process as it enables a community to align future incentives with the contributors who most consistently generate impact. Contributors being aware that incentives are aligned with generating impact further strengthens their reasons to identify and execute the initiatives that can generate the most impact.
Repeatability - Highly performant and effective contributors are those who can execute ideas quickly and to a high standard. A contributor's abilities can gradually improve over time as they gain more skills and experience with how to best contribute in these ecosystems. Because of this, there is a high chance that these contributors could repeatedly help generate impact for the ecosystem.
Very high game theory risks (Score - 1)
Manipulated outcomes - Each contributor would be responsible for submitting their own information about the contributions they have made that help generate impact for the ecosystem. As this is a self-reported process, it could be far easier for people to lie about what they have actually done for contribution outcomes that can't be automatically tracked online.
Exaggerated outcomes - Some of the more challenging areas to properly verify are the quality of contributions made and the qualitative areas of impact. A contributor could help with many ecosystem areas and overstate their contributions and their influence in achieving impactful outcomes. A contributor would be more likely to exaggerate their contributions if doing so increased their chances of being selected for future compensation.
Very high impact verification time required (Score - 1)
Automated verification - Many digital contributions could be tracked and verified automatically, which helps keep an accurate log of what each contributor has done. This only covers some contribution areas, as others, such as collaborative efforts during discussion and planning, are more difficult to track.
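As one possible example of this kind of automated tracking, the sketch below counts the commits a contributor has authored in a public repository using the GitHub REST API. The repository and contributor names are placeholders, and commit counts are only one of many trackable signals.

```python
# Hypothetical sketch: automatically tracking one kind of digital contribution
# (commits to a public GitHub repository) via the GitHub REST API.
import requests

def count_commits(owner: str, repo: str, author: str, token: str | None = None) -> int:
    """Count commits authored by `author` in `owner/repo` (paginated, 100 per page)."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    total, page = 0, 1
    while True:
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/commits",
            params={"author": author, "per_page": 100, "page": page},
            headers=headers,
            timeout=30,
        )
        resp.raise_for_status()
        commits = resp.json()
        total += len(commits)
        if len(commits) < 100:
            return total
        page += 1

# Example (placeholder names): count_commits("example-org", "example-repo", "alice")
```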
Manual verification - Many of the contribution outcomes that could be verified automatically would still require manual verification to assess the quality of the contribution. It wouldn't matter if a developer had delivered more code than any other developer if that code was not of high quality or was comparatively much easier to develop than the more complex problems other developers had been solving. Qualitative contributions would also benefit from being verified in some capacity to ensure that contributors have not lied or exaggerated. A large amount of effort could be required to accurately verify the impact that contributors generate.
Total score = 10 / 20 (impact measurability 3 + future impact opportunity 5 + game theory risks 1 + verification time 1)