A controlled experiment for the empirical evaluation of safety analysis techniques for safety-critical software
Context: Today's safety-critical systems are increasingly reliant on software, which is becoming responsible for most of their critical functions. Many different safety analysis techniques have been developed to identify system hazards. FTA and FMEA are the ones most commonly used by safety analysts. Recently, STPA has been proposed with the goal of coping better with complex systems, including software. Objective: This research aimed at comparing these three safety analysis techniques quantitatively with regard to their effectiveness, applicability, understandability, ease of use and efficiency in identifying software safety requirements at the system level. Method: We conducted a controlled experiment with 21 master's and bachelor's students applying the three techniques to three safety-critical systems: train door control, anti-lock braking and traffic collision avoidance. Results: The results showed no statistically significant difference between the techniques in terms of applicability, understandability and ease of use, but a significant difference in terms of effectiveness and efficiency. Conclusion: We conclude that STPA seems to be an effective method to identify software safety requirements at the system level. In particular, STPA addresses more different software safety requirements than the traditional techniques FTA and FMEA, but takes more time to carry out for safety analysts with little or no prior experience.
Comment: 10 pages, 1 figure. In Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (EASE '15). ACM, 2015
Towards Next Generation Teaching, Learning, and Context-Aware Applications for Higher Education: A Review on Blockchain, IoT, Fog and Edge Computing Enabled Smart Campuses and Universities
[Abstract] Smart campuses and smart universities make use of IT infrastructure similar to that required by smart cities, taking advantage of Internet of Things (IoT) and cloud computing solutions to monitor and act on the multiple systems of a university. As a consequence, smart campuses and universities need to provide connectivity to IoT nodes and gateways, and to deploy architectures that offer not only good communications range through the latest wireless and wired technologies, but also reduced energy consumption to maximize IoT node battery life. In addition, such architectures have to consider technologies like blockchain, which can deliver accountability, transparency, cyber-security and redundancy to the processes and data managed by a university. This article reviews the state of the art on the application of the latest key technologies for the development of smart campuses and universities. After defining the essential characteristics of a smart campus/university, the latest communications architectures and technologies are detailed and the most relevant smart campus deployments are analyzed. Moreover, the use of blockchain in higher education applications is studied. Therefore, this article provides useful guidelines to the university planners, IoT vendors and developers that will be responsible for creating the next generation of smart campuses and universities.
Xunta de Galicia; ED431C 2016-045
Xunta de Galicia; ED431G/01
Agencia Estatal de Investigación de España; TEC2016-75067-C4-1-
Programming by Example Made Easy
Programming by example (PBE) is an emerging programming paradigm that automatically synthesizes programs specified by user-provided input-output examples. Despite the convenience for end-users, implementing PBE tools often requires strong expertise in programming languages and synthesis algorithms. Such a level of knowledge is uncommon among software developers and greatly limits the broad adoption of PBE by industry. To facilitate the adoption of PBE techniques, we propose a PBE framework called Bee, which leverages an "entity-action" model based on relational tables to ease PBE development for a wide but constrained range of domains. Implementing PBE tools with Bee only requires adapting domain-specific data entities and user actions to tables, with no need to design a domain-specific language or an efficient synthesis algorithm. Bee's synthesis algorithm exploits bidirectional searching and constraint-solving techniques to address the challenge of value computation nested in table transformation. We evaluated Bee's effectiveness on 64 PBE tasks from three different domains and its usability with a human study of 12 participants. Evaluation results show that Bee is easier to learn and use than the state-of-the-art PBE framework, and its bidirectional algorithm achieves performance comparable to domain-specifically optimized synthesizers.
Comment: Accepted by ACM Transactions on Software Engineering and Methodology
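The PBE paradigm the abstract describes can be illustrated with a deliberately tiny example-driven synthesizer: enumerate candidate programs over a small hypothesis space and keep those consistent with every user-provided input-output pair. This is a generic sketch of the paradigm with an assumed toy candidate set, not Bee's entity-action model or its bidirectional, constraint-solving algorithm.

```python
# Minimal PBE sketch: the "program space" is a handful of string
# transformations; synthesis is exhaustive checking against the examples.
CANDIDATES = {
    "upper": str.upper,
    "lower": str.lower,
    "strip": str.strip,
    "first_word": lambda s: s.split()[0],
    "reverse": lambda s: s[::-1],
}

def synthesize(examples):
    """Return names of candidate programs consistent with all examples."""
    matches = []
    for name, fn in CANDIDATES.items():
        try:
            if all(fn(inp) == out for inp, out in examples):
                matches.append(name)
        except Exception:
            continue  # candidate not applicable to these inputs
    return matches

# The user specifies intent purely through examples:
print(synthesize([("hello world", "HELLO WORLD")]))  # -> ['upper']
print(synthesize([("hello world", "hello")]))        # -> ['first_word']
```

Real PBE engines replace the exhaustive loop with pruned search over a domain-specific language; the convenience for end-users, and the implementation burden the abstract mentions, both come from that search machinery.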
An Empirical Assessment of Bellon's Clone Benchmark
Context: Clone benchmarks are essential to the assessment and improvement of clone detection tools and algorithms. Among existing benchmarks, Bellon's benchmark is widely used by the research community. However, a serious threat to the validity of this benchmark is that the reference clones it contains were manually validated by Bellon alone; other researchers may disagree with Bellon's judgment. Objective: In this paper, we perform an empirical assessment of Bellon's benchmark. Method: We seek the opinion of eighteen participants on a subset of Bellon's benchmark to determine whether researchers should trust the reference clones it contains. Results: Our experiment shows that a significant portion of the reference clones is debatable, and this phenomenon can introduce noise in results obtained using this benchmark.
Opinion Mining for Software Development: A Systematic Literature Review
Opinion mining, sometimes referred to as sentiment analysis, has gained increasing attention in software engineering (SE) studies. SE researchers have applied opinion mining techniques in various contexts, such as identifying developers' emotions expressed in code comments and extracting users' criticisms of mobile apps. Given the large number of relevant studies available, it can take considerable time for researchers and developers to figure out which approaches they can adopt in their own studies and what perils these approaches entail.
We conducted a systematic literature review involving 185 papers. More specifically, we present 1) well-defined categories of opinion mining-related software development activities, 2) available opinion mining approaches, whether they have been evaluated when adopted in other studies, and how their performance compares, 3) available datasets for performance evaluation and tool customization, and 4) concerns or limitations SE researchers might need to take into account when applying or customizing these opinion mining techniques. The results of our study serve as a reference for choosing suitable opinion mining tools for software development activities, and provide critical insights for the further development of opinion mining techniques in the SE domain.
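The simplest family of techniques covered by such reviews is lexicon-based sentiment scoring, which can be sketched as follows. The word lists and threshold here are illustrative assumptions; SE-tuned tools such as SentiStrength-SE or Senti4SD use far richer lexicons and trained models.

```python
# Toy lexicon-based sentiment scorer applied to developer comments.
POSITIVE = {"great", "clean", "elegant", "fast", "works"}
NEGATIVE = {"hack", "ugly", "broken", "slow", "awful"}

def sentiment(comment):
    """Classify a code comment as 'positive', 'negative' or 'neutral'."""
    words = {w.strip(".,!?:").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("FIXME: this is an ugly hack, parser is broken"))  # -> negative
print(sentiment("Elegant solution, works great!"))                 # -> positive
```

A recurring finding in this literature is that general-purpose lexicons misfire on SE jargon ("kill a process", "fatal error"), which is one of the perils the review catalogues.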
Towards an Automation of the Traceability of Bugs from Development Logs
Context: Information and tracking of defects can be severely incomplete in almost every Open Source project, resulting in reduced traceability of defects into the development logs (i.e., version control commit logs). In particular, defect data often appears out of sync with what developers logged as their actions. Synchronizing or completing the missing data of the bug repositories with the logs detailing the actions of developers would benefit various branches of empirical software engineering research: prediction of software faults, software reliability, traceability, software quality, effort and cost estimation, bug prediction and bug fixing. Objective: To design a framework that automates the process of synchronizing and filling the gaps in the development logs and bug issue data of open source software projects. Method: We instantiate the framework with a sample of OSS projects from GitHub, parsing, linking and filling the gaps found in their bug issue data and development logs. UML diagrams show the relevant modules used to merge, link and connect the bug issue data with the development data. Results: Analysing a sample of over 300 OSS projects, we observed that around half of the bug-related data is present in either development logs or issue tracker logs; the rest is missing from one or the other source. We designed an automated approach that fills the gaps of either source by making use of the available data, and we successfully mapped all the missing data of the analysed projects when using one heuristic for annotating bugs; other heuristics remain to be investigated and implemented. Conclusion: We designed a framework to synchronize the development logs and bug data used in empirical software engineering, automatically filling the missing parts of development logs and of bug issue data.
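The linking step the abstract describes is often implemented by scanning commit messages for issue references. The regular expression and data shapes below are illustrative assumptions, a sketch of one plausible heuristic rather than the paper's exact framework.

```python
import re

# Match references like "#123", "fixes 123", "closes #123" (case-insensitive).
# This pattern is an assumption for illustration, not the paper's heuristic.
ISSUE_REF = re.compile(r"(?:#|\b(?:fix(?:es|ed)?|close[sd]?)\s+#?)(\d+)", re.I)

def link_commits_to_bugs(commits, bug_ids):
    """Map each known bug id to the commit hashes whose message mentions it.

    commits: iterable of (sha, message) pairs from the version control log.
    bug_ids: set of issue-tracker bug ids.
    """
    links = {bug: [] for bug in bug_ids}
    for sha, message in commits:
        for ref in ISSUE_REF.findall(message):
            bug = int(ref)
            if bug in links:
                links[bug].append(sha)
    return links

commits = [
    ("a1b2c3", "Fix NPE in parser, closes #42"),
    ("d4e5f6", "Refactor build scripts"),          # no bug reference
    ("0714aa", "fixes 42: add regression test"),
]
links = link_commits_to_bugs(commits, {42, 99})
# Bug 42 is traced to two commits; bug 99 has no linked commit,
# i.e. a gap the framework would try to fill from the other source.
```

Commits with no match (and bugs with no linked commit) are exactly the "missing half" the study reports, which the framework then attempts to recover from the complementary data source.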