How open is open enough?: Melding proprietary and open source platform strategies
Computer platforms provide an integrated architecture of hardware and software standards as a basis for developing complementary assets. The most successful platforms were owned by proprietary sponsors that controlled platform evolution and appropriated associated rewards.
Responding to the Internet and open source systems, three traditional vendors of proprietary platforms experimented with hybrid strategies which attempted to combine the advantages of open source software while retaining control and differentiation. Such hybrid standards strategies reflect the competing imperatives for adoption and appropriability, and suggest the conditions under which such strategies may be preferable to either the purely open or purely proprietary alternatives.
Efficient Prior Publication Identification for Open Source Code
Free/Open Source Software (FOSS) enables large-scale reuse of preexisting software components. The main drawback is increased complexity in software supply chain management. A common approach to tame such complexity is automated open source compliance, which consists in automating the verification of adherence to various open source management best practices about license obligation fulfillment, vulnerability tracking, software composition analysis, and nearby concerns. We consider the problem of auditing a source code base to determine which of its parts have been published before, which is an important building block of automated open source compliance toolchains. Indeed, if source code allegedly developed in house is recognized as having been previously published elsewhere, alerts should be raised to investigate where it comes from and whether this entails that additional obligations shall be fulfilled before product shipment. We propose an efficient approach for prior publication identification that relies on a knowledge base of known source code artifacts linked together in a global Merkle directed acyclic graph and a dedicated discovery protocol. We introduce swh-scanner, a source code scanner that realizes the proposed approach in practice, using as knowledge base Software Heritage, the largest public archive of source code artifacts. We validate the proposed approach experimentally, showing its efficiency in both abstract (number of queries) and concrete (wall-clock time) terms, performing benchmarks on 16,845 real-world public code bases of various sizes, from small to very large.
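The identifier scheme underlying such a knowledge base can be illustrated with a short sketch: Software Heritage content identifiers (SWHIDs) for file contents use the same hashing scheme as git blobs. The `known` set and the directory walk below are simplified stand-ins for the scanner's actual discovery protocol, which batches queries against the archive; they are illustrative assumptions, not swh-scanner's real API.

```python
import hashlib
from pathlib import Path

def swhid_for_content(data: bytes) -> str:
    """Compute the SWHID of a file's content (same scheme as a git blob hash)."""
    header = b"blob %d\x00" % len(data)
    digest = hashlib.sha1(header + data).hexdigest()
    return f"swh:1:cnt:{digest}"

def scan(root: str, known: set[str]) -> dict[str, bool]:
    """Map each file under `root` to whether its SWHID is in the known set.

    `known` stands in for the archive-backed knowledge base queried by the
    real scanner.
    """
    results = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            results[str(path)] = swhid_for_content(path.read_bytes()) in known
    return results
```

Because the identifier is content-derived, a match means the exact bytes were archived before, which is precisely the signal needed to trigger a provenance investigation.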
Introduction to the Special Issue on Software Architecture for Language Engineering
Every building, and every computer program, has an architecture: structural and organisational principles that underpin its design and construction. The garden shed
once built by one of the authors had an ad hoc architecture, extracted (somewhat painfully) from the imagination during a slow and non-deterministic process that, luckily, resulted in a structure which keeps the rain on the outside and the mower on the inside (at least for the time being). As well as being ad hoc (i.e. not informed by analysis of similar practice or relevant science or engineering) this architecture is implicit: no explicit design was made, and no records or documentation kept of the construction process. The pyramid in the courtyard of the Louvre, by contrast, was constructed in a process involving explicit design performed by qualified engineers with a wealth of theoretical and practical knowledge of the properties of materials, the relative merits and strengths of different construction techniques, et cetera. So it is with software: sometimes it is thrown together by enthusiastic amateurs; sometimes it is architected, built to last, and intended to be 'not something you finish, but something you start' (to paraphrase Brand (1994)). A number of researchers argued in the early and middle 1990s that the field of computational infrastructure or architecture for human language computation merited an increase in attention. The reasoning was that the increasingly large-scale and technologically significant nature of language processing science was placing increasing burdens of an engineering nature on research and development workers seeking robust and practical methods (as was the increasingly collaborative nature of research in this field, which puts a large premium on software integration and interoperation). Over the intervening period a number of significant systems and practices have been developed in what we may call Software Architecture for Language Engineering (SALE).
This special issue represented an opportunity for practitioners in this area to report their work in a coordinated setting, and to present a snapshot of the state-of-the-art in infrastructural work, which may indicate where further development and further take-up of these systems can be of benefit.
Search based software engineering: Trends, techniques and applications
© ACM, 2012. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version is available from the link below. In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semiautomated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives.
This article provides a review and classification of literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.
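As a toy illustration of the search-based flavour of SBSE (not any specific algorithm from the surveyed literature), the sketch below hill-climbs toward a smaller test suite that still covers a fixed set of requirements; the test/requirement data is invented.

```python
import random

def covered(suite, coverage):
    """Union of requirements covered by the selected tests."""
    return set().union(*(coverage[t] for t in suite)) if suite else set()

def hill_climb_minimize(coverage, all_reqs, iters=1000, seed=0):
    """Search for a small subset of tests that still covers all requirements.

    coverage: dict mapping test name -> set of requirements it covers.
    """
    rng = random.Random(seed)
    current = set(coverage)            # start from the full suite (always valid)
    for _ in range(iters):
        candidate = set(current)
        t = rng.choice(sorted(coverage))
        # Neighbourhood move: toggle one test in or out of the suite.
        candidate.symmetric_difference_update({t})
        if covered(candidate, coverage) == all_reqs and len(candidate) <= len(current):
            current = candidate        # accept equal-or-better valid neighbours
    return current

# Invented example: four tests, three requirements.
cov = {"t1": {"r1", "r2"}, "t2": {"r2"}, "t3": {"r3"}, "t4": {"r1", "r3"}}
best = hill_climb_minimize(cov, {"r1", "r2", "r3"})
```

Real SBSE work uses richer fitness functions (coverage, fault history, execution cost) and stronger metaheuristics such as genetic algorithms, but the fitness-guided neighbourhood search above is the common core.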
Code Reuse in Open Source Software
Code reuse is a form of knowledge reuse in software development that is fundamental to innovation in many fields. However, to date there has been no systematic investigation of code reuse in open source software projects. This study uses quantitative and qualitative data gathered from a sample of six open source software projects to explore two sets of research questions derived from the literature on software reuse in firms and open source software development. We find that code reuse is extensive across the sample and that open source software developers, much like developers in firms, apply tools that lower their search costs for knowledge and code, assess the quality of software components, and have incentives to reuse code. Open source software developers reuse code because they want to integrate functionality quickly, because they want to write preferred code, because they operate under limited resources in terms of time and skills, and because they can mitigate development costs through code reuse.
On Using Blockchains for Safety-Critical Systems
Innovation in the world of today is mainly driven by software. Companies need
to continuously rejuvenate their product portfolios with new features to stay
ahead of their competitors. For example, recent trends explore the application
of blockchains to domains other than finance. This paper analyzes the
state-of-the-art for safety-critical systems as found in modern vehicles like
self-driving cars, smart energy systems, and home automation focusing on
specific challenges where key ideas behind blockchains might be applicable.
Next, potential benefits unlocked by applying such ideas are presented and
discussed for the respective usage scenario. Finally, a research agenda is
outlined to summarize remaining challenges for successfully applying
blockchains to safety-critical cyber-physical systems.
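One of the key ideas behind blockchains that transfers most directly to such systems is the tamper-evident, hash-linked log. A minimal single-node sketch (no consensus or distribution; the event payloads are invented examples, not from the paper):

```python
import hashlib
import json

def append_entry(chain, payload):
    """Append a payload, linking it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every link; any modification to past entries breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"payload": entry["payload"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"event": "brake_command", "value": 0.8})
append_entry(log, {"event": "sensor_fault", "code": 17})
```

For a safety-critical audit trail this gives after-the-fact tamper evidence; the open challenges the paper discusses (real-time guarantees, distributed agreement among control units) sit on top of this primitive.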
Development of an open-source platform for calculating losses from earthquakes
Risk analysis plays a critical role in reducing casualties and damage due to earthquakes. Recognition of this role has led to a rapid rise in demand for accurate, reliable, and flexible numerical tools and software for risk assessment. In response to this need, the Global Earthquake Model (GEM) started the development of an open source platform called OpenQuake for calculating seismic hazard and risk at different scales. Alongside this framework, several other tools to support users in creating their own models and visualizing their results are currently being developed, and will be made available as a Modelers Tool Kit (MTK). In this paper, a description of the architecture of OpenQuake is provided, highlighting the current data model, the workflow of the calculators, and the main challenges raised when running this type of calculation at a global scale. In addition, a case study of the Marmara Region (Turkey) is presented, in which the losses for a single event are estimated, as well as the probabilistic risk for a 50-year time span.
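Probabilistic risk over a fixed time span conventionally follows from a Poisson occurrence model: the probability of at least one exceedance in T years at annual rate λ is 1 − exp(−λT). A minimal sketch of that standard relation (the 475-year return period is the conventional "10% in 50 years" hazard level, not a value taken from this paper):

```python
import math

def prob_exceedance(rate_per_year: float, t_years: float) -> float:
    """Poisson probability of at least one exceedance within t_years."""
    return 1.0 - math.exp(-rate_per_year * t_years)

# A 475-year return period corresponds to the classic "10% in 50 years" level.
p = prob_exceedance(1.0 / 475.0, 50.0)
```

Loss calculators like OpenQuake's combine such occurrence probabilities with per-event loss estimates to build loss exceedance curves.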
Monitoring in fog computing: state-of-the-art and research challenges
Fog computing has rapidly become a widely accepted computing paradigm to mitigate limitations of cloud computing-based infrastructure such as scarce bandwidth, high latency, and security and privacy issues. Fog computing resources and applications vary dynamically at run-time; they are highly distributed and mobile, and can appear and disappear rapidly at any time over the internet. Therefore, to ensure quality of service and experience for end-users, a comprehensive monitoring approach is necessary. However, the volatility and dynamism of fog resources make monitoring design complex and cumbersome. The aim of this article is therefore three-fold: 1) to analyse fog computing-based infrastructures and existing monitoring solutions; 2) to highlight the main requirements and challenges based on a taxonomy; 3) to identify open issues and potential future research directions. This work has been (partially) funded by H2020 EU/TW 5G-DIVE (Grant 859881) and H2020 5Growth (Grant 856709). It has also been funded by the Spanish State Research Agency (TRUE5G project, PID2019-108713RB-C52 / AEI / 10.13039/501100011033).
OpenMEEG: opensource software for quasistatic bioelectromagnetics
Background: Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present
Pet sense: a system for monitoring hospitalized animals
The observation and treatment of animals in veterinary hospitals is still very dependent on manual procedures, including the collection of vital signs (temperature, heart rate, respiratory rate and blood pressure). These manual procedures are time-consuming and invasive, affecting the animal’s well-being.
In this work, we propose the use of IoT technologies to monitor animals in hospitalization, wearing sensors to collect vitals, and low-cost hardware to forward them to a cloud backend that analyses and stores the data. The history of observed vitals and alarms can be accessed on the web, included in the Pet Universal software suite.
The overall architecture follows a stream processing approach, using telemetry protocols to transport data, and Apache Kafka Streams to analyse streams and trigger alarms on potential hazard conditions.
The system was fully implemented, although with laboratory sensors to emulate the smart devices to be worn by the animals. We were able to implement a data gathering and processing pipeline and integrate it with the existing clinical management information system.
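The per-reading alarm rule described above can be sketched in a language-neutral way (the actual pipeline runs as an Apache Kafka Streams topology on the JVM); the species-specific ranges below are hypothetical placeholders, not clinical reference values:

```python
# Hypothetical normal ranges per species; real thresholds would come from
# veterinary reference data, not from this sketch.
NORMAL_RANGES = {
    "dog": {"temperature": (37.5, 39.2), "heart_rate": (60, 140)},
    "cat": {"temperature": (38.0, 39.3), "heart_rate": (140, 220)},
}

def check_vitals(species: str, reading: dict) -> list[str]:
    """Return alarm messages for any vital outside its species-specific range."""
    alarms = []
    for vital, (low, high) in NORMAL_RANGES[species].items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alarms.append(f"{vital}={value} outside [{low}, {high}] for {species}")
    return alarms
```

In the stream-processing setting this function would be applied to each telemetry record as it arrives, with alarms published to a dedicated topic for the web frontend.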
The proposed solution can offer a practical way for long-term monitoring and detection of abnormal values of temperature and heart rate in hospitalized animals, taking into consideration the characteristics of the monitored individual (species and state).