Review on Quality Models for Open Source Software and its reflection on Social Coding
Social Coding Sites (SCSs) are social media services for sharing software development projects on the Web, and many open source projects are currently developed on SCSs. Assessing quality is crucial for selecting a project that serves people's requirements or needs. In this paper, we review traditional models that evolved prior to the emergence of open source software, as well as open source quality models. We evaluate the selected models according to how well they reflect the success factors of social coding projects: sociality, popularity, activity and supportability. Eight models were included in our review; we selected only models that introduce explicit, well-defined metrics for measurement, rather than a process or a generic methodology. Based on our selection criteria, our main finding is that existing models do not fully consider or cover social factors in open source software evaluation; hence there is a need for a model that measures the maturity and quality of open source projects from a social perspective. We also evaluated the existing models against a selected open source project hosted on the social coding site GitHub to assess each model's applicability. Some measurements from the existing models were not applicable for evaluation.
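The four success factors named above can be made concrete with a small sketch. Every field name, metric, and weight below is an invented illustration of the idea, not a metric taken from any of the reviewed models:

```python
from dataclasses import dataclass

@dataclass
class ProjectSnapshot:
    """Illustrative GitHub-style repository metrics (all hypothetical)."""
    stars: int
    forks: int
    contributors: int
    commits_last_90d: int
    open_issues: int
    closed_issues: int

def success_factors(p: ProjectSnapshot) -> dict:
    """Map raw repository metrics onto the four social coding success
    factors, using simple unweighted proxies."""
    total_issues = p.open_issues + p.closed_issues
    return {
        "sociality": p.contributors,        # breadth of the community
        "popularity": p.stars + p.forks,    # external interest in the project
        "activity": p.commits_last_90d,     # recent development pace
        # supportability: share of issues that got resolved
        "supportability": (p.closed_issues / total_issues) if total_issues else 0.0,
    }

snapshot = ProjectSnapshot(stars=1200, forks=310, contributors=45,
                           commits_last_90d=220, open_issues=30, closed_issues=270)
print(success_factors(snapshot))
```

A real evaluation would pull these counts from a hosting platform's API and calibrate weights per factor; the point here is only that social factors can be expressed as measurable quantities.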
Evaluating Innovation
In their pursuit of the public good, foundations face two competing forces -- the pressure to do something new and the pressure to do something proven. The epigraph to this paper, "Give me something new and prove that it works," is my own summary of what foundations often seek. These pressures come from within the foundations -- their staff or boards demand them, not the public. The aspiration to fund things that work can be traced to the desire to be careful, effective stewards of resources. Foundations' recognition of the growing complexity of our shared challenges drives the increased emphasis on innovation. Issues such as climate change, political corruption, and digital learning and work environments have enticed new players into the social problem-solving sphere and have convinced more funders of the need to find new solutions. The seemingly mutually exclusive desires for doing something new and doing something proven are not new, but as foundations have grown in number and size, the visibility of the paradox has risen accordingly. Even as foundations seek to fund innovation, they are also seeking measurements of those investments' success. Many people's first response to the challenge of measuring innovation is to declare the intention oxymoronic. Innovation is by definition amorphous, full of unintended consequences, and a creative, unpredictable process -- much like art. Measurement, assessment, and evaluation are -- also by most definitions -- about quantifying activities and products. There is always the danger of counting what you can count, even if what you can count doesn't matter. For all our awareness of the inherent irony of trying to measure something that we intend to be unpredictable, many foundations (and others) continue to try to evaluate their innovation efforts. They are, as Frances Westley, Brenda Zimmerman, and Michael Quinn Patton put it in "Getting to Maybe", grappling with "intentionality and complexity -- (which) meet in tension."
It is important to see the struggles to measure for what they are -- attempts to evaluate the success of the process of innovation, not necessarily the success of the individual innovations themselves. This is not a semantic difference. What foundations are trying to understand is how to go about funding innovation so that more of it can happen. Examples in this report were chosen because they offer a look at innovation within the broader scope of a foundation's work. This paper is the fifth in a series focused on field building. In this context I am interested in where evaluation fits within an innovation strategy and where these strategies fit within a foundation's broader funding goals. I will present a typology of innovation drawn from the OECD that can be useful in other areas. I lay the decisions about evaluation made by Knight, MacArthur, and the Jewish New Media Innovation Funders against their programmatic goals. Finally, I consider how evaluating innovation may improve our overall use of evaluation methods in philanthropy.
Accountability and Transparency of Entrepreneurial Journalism: Unresolved ethical issues in crowdfunded journalism projects
Crowdfunding is a new business model in which journalists rely -- and depend -- on (micro-)payments by a large number of supporters to finance their reporting. In this form of entrepreneurial journalism the roles of publisher, fundraiser and journalist often overlap. This raises questions about conflicts of interest, accountability and transparency. The article presents the results of selected case studies in four different European countries -- Germany (Krautreporter), Italy (Occhidellaguerra), the United Kingdom (Contributoria) and the Netherlands (De Correspondent) -- as well as one US example (Kickstarter). The study used a two-step methodological approach: first, a content analysis of the websites and Twitter accounts was undertaken with regard to practices of media accountability, transparency and user participation. The aim was to investigate how far ethical challenges in crowdfunded entrepreneurial journalism are accounted for. Second, we present findings from semi-structured interviews with journalists from each of the crowdfunding projects. The study provides evidence about the ethical issues in this area, particularly in relation to production transparency and responsiveness. The study also shows that in some cases of crowdfunding (platforms), accountability is outsourced and implemented only through audience participation.
Assessing technical candidates on the social web
This is the pre-print version of this article. The official published version can be accessed from the link below. Copyright @ 2012 IEEE.
The Social Web provides comprehensive, publicly available information about software developers: they can be identified as contributors to open source projects, as experts at maintaining weak ties on social network sites, or as active participants in knowledge-sharing sites. These signals, when aggregated and summarized, could be used to define individual profiles of potential candidates: job seekers, even those lacking a formal degree or changing their career path, could be qualitatively evaluated by potential employers through their online contributions. At the same time, developers are aware of the Web's public nature and the possible uses of published information when they determine what to share with the world. Some might even try to manipulate public signals of technical qualifications, soft skills, and reputation in their favor. Assessing candidates on the Web for technical positions presents challenges to recruiters and traditional selection procedures, the most serious being the interpretation of the provided signals. Through an in-depth discussion, we propose guidelines to help software engineers and recruiters interpret the value of, and trouble with, the signals and metrics they use to assess a candidate's characteristics and skills.
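The aggregation-and-interpretation step described above can be sketched minimally. Every signal source, metric name, and value here is invented for illustration and is not drawn from the article; the only substantive point it encodes is that some public signals are cheap to inflate and should be read skeptically:

```python
# Hypothetical public signals gathered per source (all values invented).
SIGNALS = {
    "open_source": {"merged_pull_requests": 14, "projects_contributed": 3},
    "qa_sites": {"accepted_answers": 27, "reputation": 4100},
    "social_network": {"followers": 350},
}

# Metrics a candidate can inflate cheaply; flagged so a reviewer
# knows to weigh them with extra skepticism.
EASILY_GAMED = {"followers", "reputation"}

def summarize_profile(signals):
    """Flatten per-source metrics into one candidate summary,
    marking each value with whether it is easy to manipulate."""
    profile = {}
    for source, metrics in signals.items():
        for name, value in metrics.items():
            profile[f"{source}.{name}"] = {
                "value": value,
                "easily_gamed": name in EASILY_GAMED,
            }
    return profile

for key, info in sorted(summarize_profile(SIGNALS).items()):
    flag = " (interpret with care)" if info["easily_gamed"] else ""
    print(f"{key}: {info['value']}{flag}")
```

The hard problem the article raises -- interpreting what a signal actually says about skill -- is of course not solved by flattening the data; the sketch only shows where such interpretation hooks would sit.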
New Voices: What Works
Reviews grantees' accomplishments in building community news sites, keys to sustainability, and lessons learned about engagement, staffing, business models, social media, technology, partnerships, and the limitations of university, youth, and radio projects.