
    Why Modern Open Source Projects Fail

    Open source is experiencing a renaissance, due to the appearance of modern platforms and workflows for developing and maintaining public code. As a result, developers are creating open source software at speeds never seen before. Consequently, these projects are also facing unprecedented mortality rates. To better understand the reasons for the failure of modern open source projects, this paper describes the results of a survey with the maintainers of 104 popular GitHub systems that have been deprecated. We provide a set of nine reasons for the failure of these open source projects. We also show that some maintenance practices -- specifically the adoption of contributing guidelines and continuous integration -- have an important association with a project's failure or success. Finally, we discuss and reveal the principal strategies developers have tried to overcome the failure of the studied projects.
    Comment: Paper accepted at 25th International Symposium on the Foundations of Software Engineering (FSE), pages 1-11, 201

    ROOT Status and Future Developments

    In this talk we will review the major additions and improvements made to the ROOT system in the last 18 months and present our plans for future developments. The additions and improvements range from modifications to the I/O sub-system to allow users to save and restore objects of classes that have not been instrumented by special ROOT macros, to the addition of a geometry package designed for building, browsing, tracking and visualizing detector geometries. Other improvements include enhancements to the quick analysis sub-system (TTree::Draw()), the addition of classes that allow inter-file object references (TRef, TRefArray), better support for templated and STL classes, amelioration of the Automatic Script Compiler and the incorporation of new fitting and mathematical tools. Efforts have also been made to increase the modularity of the ROOT system with the introduction of more abstract interfaces and the development of a plug-in manager. In the near future, we intend to continue the development of PROOF and its interfacing with GRID environments. We plan on providing an interface between Geant3, Geant4 and Fluka and the new geometry package. The ROOT GUI classes will finally be available on Windows, and we plan to release a GUI inspector and builder. In the last year, ROOT has drawn the endorsement of additional experiments and institutions. It is now officially supported by CERN and used as a key I/O component by the LCG project.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, Ca, USA, March 2003, 5 pages, MSWord, pSN MOJT00

    Clustering in ICT: From Route 128 to Silicon Valley, from DEC to Google, from Hardware to Content

    One of the pioneers in academic entrepreneurship and high-tech clustering is MIT and the Route 128/Boston region. Silicon Valley, centered around Stanford University, was originally a fast follower and only later emerged as a scientific and industrial hotspot. Several technology and innovation waves have shaped Silicon Valley over the years. The initial regional success of Silicon Valley started with electro-technical instruments and defense applications in the 1940s and 1950s (represented by companies such as Litton Engineering and Hewlett-Packard). In the 1960s and 1970s, the region became a national and international leader in the design and production of integrated circuits and computer chips, and as such became identified as Silicon Valley (e.g. Fairchild Semiconductor and Intel). In the 1970s and 1980s, Silicon Valley capitalised further on the development, manufacturing and sales of the personal computer and workstations (e.g. Apple, Silicon Graphics and SUN), followed by the proliferation of telecommunications and Internet technologies in the 1990s (e.g. Cisco, 3Com) and Internet-based applications and info-mediation services (e.g. Yahoo, Google) in the late 1990s and early 2000s. When the external and/or internal conditions of its key industries change, Silicon Valley has seemed to have an innate capability to restructure itself by a rapid and frequent reshuffling of people, competencies, resources and firms. To characterise the demise of one firm leading, directly or indirectly, to the formation of another, and the reconfiguration of business models and product offerings by the larger companies in emerging industries, Bahrami & Evans (2000) introduced the term 'flexible recycling'. This dynamic process of learning by doing, failing and recombining (i.e. allowing new firms to rise from the ashes of failed enterprises) is one of the key factors underlying the dominance of Silicon Valley in the new economy.
    Keywords: ICT; Clusters; Networks; Academic entrepreneurship; MIT; Silicon Valley; Stanford University; Flexible recycling; Route 128

    Privilege and Property. Essays on the History of Copyright

    Copyright law is the site of significant contemporary controversy. In recent years copyright history has transformed as a subject, from being one of interest to a few book historians to the focus of sustained historical investigation attracting the attention of scholars from across the humanities. This book comprises a collection of essays on copyright history by leading experts drawn from a range of countries and disciplinary perspectives. Covering the period from 1450 to 1900, these essays engage with a number of related themes. The first considers the general movement, from the sixteenth century onwards, from privilege to property-based conceptions of copyright protection. The second addresses the relationship between the protection provided for literary and print materials and that provided for other forms of cultural production. The third concerns the significance and relevance of these various histories in shaping and informing contemporary policy and academic practice. Essays include: 0. The History of Copyright History, by Kretschmer, Deazley & Bently; 1. From Gunpowder to Print: The Common Origins of Copyright and Patent, by Joanna Kostylo; 2. A Mongrel of Early Modern Copyright: Scotland in European Perspective, by Alastair Mann; 3. The Public Sphere and the Emergence of Copyright: Areopagitica, the Stationers' Company, and the Statute of Anne, by Mark Rose; 4. Early American Printing Privileges: The Ambivalent Origins of Authors' Copyright in America, by Oren Bracha; 5. Author and Work in the French Print Privileges System: Some Milestones, by Laurent Pfister; 6. A Venetian Experiment on Perpetual Copyright, by Maurizio Borghi; 7. Les formalités sont mortes, vive les formalités! Copyright Formalities in Nineteenth Century Europe, by Stef van Gompel; 8. The Berlin Publisher Friedrich Nicolai and the Reprinting Sections of the Prussian Statute Book of 1794, by Friedemann Kawohl; 9. Nineteenth Century Controversies Relating to the Protection of Artistic Property in France, by Frédéric Rideau; 10. Maps, Views and Ornament. Visualising Property in Art and Law: The Case of Pre-modern France, by Katie Scott; 11. Breaking the Mould? The Radical Nature of the Fine Art Copyright Bill 1862, by Ronan Deazley; 12. 'Neither bolt nor chain, iron safe nor private watchman, can prevent the theft of words': The Birth of the Performing Right in Britain, by Isabella Alexander; 13. The Return of the Commons: Copyright History as a Common Source, by Karl-Nikolaus Peifer; 14. The Significance of Copyright History for Publishing History and Historians, by John Feather; 15. Metaphors of Intellectual Property, by William St Clair. The volume is a companion to the digital archive Primary Sources on Copyright (1450-1900), funded by the UK Arts and Humanities Research Council (AHRC): www.copyrighthistory.or

    Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

    Internet... the final frontier: an ethnographic account: exploring the cultural space of the Net from the inside

    The research project The Internet as a space for interaction, which completed its mission in Autumn 1998, studied the constitutive features of network culture and network organisation. Special emphasis was given to the dynamic interplay of technical and social conventions regarding both the Net's organisation and its change. The ethnographic perspective chosen studied the Internet from the inside. Research concentrated upon three fields of study: the hegemonic operating technology of net nodes (UNIX), the network's basic transmission technology (the Internet Protocol, IP), and a popular communication service (Usenet). The project's final report includes the results of the three branches explored. Drawing upon the developments in these three fields, it is shown that changes that come about on the Net are neither anarchic nor arbitrary. Instead, the decentrally organised Internet is based upon technically and organisationally distributed forms of coordination, within which individual preferences collectively attain the power of developing into definitive standards.

    The invisible hand and the weightless economy

    As modern economies grow, production and consumption shift towards economic value that resides in bits and bytes, and away from that embedded in atoms and molecules. This paper discusses the implications of such changes for the nature of ongoing growth in advanced economies and for the dynamics of earnings and income distributions (polarization, inequality) across people within societies.
