Should The Crowdsourcing Community Have More Influence?

Guest Author

Fred Winegust is an innovative Business Strategy and Value Animation Executive currently focused on the Sustainability and Risk Management space, with consulting experience in the Financial Services, Telecom and Industrial sectors as well as Non-Profit organizations, including Education. Fred has proven skills in focusing on common business objectives and fact-based analysis, which have enabled multiple and competing stakeholders to recognize synergies and bring business and personal values to life. He was recognized as a “Spark Plug” for his efforts in Toronto’s first ClimateSpark crowdsourcing competition in February 2011. Trained by former US Vice President Al Gore, Fred is a volunteer presenter in Canada for the Climate Reality Project. He has spoken to over 2,000 students, ranging from elementary school through post-graduate studies, about the reality of climate change and the actions that can be taken in response.

It is quite an honor to be asked by Don Tapscott to start blogging on Macrowikinomics. Our working relationship dates back almost 20 years, to my international marketing efforts around e-business at IBM.

The world has changed significantly since those early days. I hope my blogging will capture some of those changes through experiences I will relate using elements of Don’s Macrowikinomics as a point of departure.

The “Opening Up the Financial Services Industry” chapter introduces the concept of “Venture Capital 2.0”. At its heart is the premise that a collaborative model addresses some of the shortcomings of today’s seed venture capital system.

But the question is this: Do you get better results when you let three experts put in 1,000 hours each, or let 1,000 experts put in three hours each? And how transparent does the process need to be?

Both Don and I were among a pool of 25 “experts” in this crowdsourcing competition, known as the ClimateSpark Social Venture Challenge (SVC) and sponsored by the Toronto Atmospheric Fund.

A social venture is any undertaking (business, program, community initiative) that combines financial sustainability (a source of ongoing revenue) with a social good (community benefit, climate benefit). In the case of the ClimateSpark SVC, the effort was looking for social ventures that produce a climate benefit by, for example, increasing energy efficiency, reducing emissions or reducing waste.

The hope was that ten social ventures, each with at least one non-profit partner, could demonstrate a financially viable solution to reducing Toronto’s climate impact.

Various teams proposed solutions. No matter how well a team inspired its community to provide meaningful comments, vote in support of its solution, and avoid negative voting against competing solutions, it could influence only 30 percent of the eventual decision.

The remaining 70 percent of the decision rested with the “experts” and “funders” as a set of partners and collaborators who pulled together a $500,000+ pool of grants, loan funds and equity investments.

From a transparency perspective, transparency being a key element of the Macrowikinomics vision, you could see only the comments on each idea and the responses to them. There was no way to tell how many of the 1,634 non-experts and non-proposers, of whom only 557 offered at least one comment, came onto the site simply to register votes (1,781 in all). There was no restriction on the number of ideas you could vote for; you simply could not vote more than once for the same idea.

Only half of the 25 “experts” offered comments during the second round of the competition, when 20 competitors were being narrowed down to 10 finalists. Of those who did comment, all but one commented on at least two of the 19 ideas. What was disappointing was that, with the exception of three experts, none of those who commented to improve an idea stuck around to review and debate the responses that came back from the community.

What continues to be of interest as the competition unfolds is the lack of transparency around the 70 percent of the decision that was in the hands of experts and funders. Teams that didn’t reach the final 10 did not receive any feedback on how they could improve, or on what factors led to their elimination. Why did a 5th-place idea with 104 votes and 51 comments from 43 people get knocked out, while a 16th-place idea with 59 votes and 58 comments from 29 people got in?

If the decision is so much in the hands of experts and funders, then how should the community who participated and provided quality input feel? Should the crowdsourcing community have more influence?

“American Idol” and other shows of its kind seem to suffer from the same problem as social ventures and the venture capitalists that could fund them. Can an idea be improved through constructive suggestions from non-experts while that same idea is being ranked, popularity-contest style, by the same non-expert crowd?

So, do you get better results when you let three experts put in 1,000 hours each, or let 1,000 experts put in three hours each? How transparent does the process really need to be for the community contributing and improving the ideas, and the community which is funding it?

I look forward to hearing your thoughts on this.

Check it out and follow what happens between now and February 3, 2012.