10,489 research outputs found
A Tool for Rejuvenating Feature Logging Levels via Git Histories and Degree of Interest
Logging is an important programming practice. Due to the highly
transactional nature of modern software applications, massive amounts of
logs are generated every day, which may overwhelm developers. Log
information overload can be detrimental to software applications. Using log
levels, developers can print useful information while hiding verbose logs
at runtime. As software evolves, the log levels of logging statements
associated with the surrounding feature implementation may also need to
change. Maintaining log levels requires a significant amount of manual
effort. In this paper, we demonstrate an automated approach
that can rejuvenate feature log levels by matching the interest level of
developers in the surrounding features. The approach is implemented as an
open-source Eclipse plugin, building on two external plugins (JGit and Mylyn). It
was tested on 18 open-source Java projects consisting of ~3 million lines of
code and ~4K log statements. Our tool successfully analyzes 99.22% of logging
statements, increases log level distributions by ~20%, and increases the focus
of logs in bug fix contexts ~83% of the time. For further details, interested
readers can watch our demonstration video
(https://www.youtube.com/watch?v=qIULoAXoDv4).
Comment: 4 pages, ICSE '22 (tool demo track)
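The demo paper does not include code, but the core idea, mapping the
Mylyn-style degree of interest (DOI) of the surrounding feature onto the
log-level ladder, can be sketched briefly. The Python below is an
illustrative sketch only (the plugin itself is Java built on JGit and
Mylyn); the suggest_level function, its linear thresholding, and the
doi_score input are assumptions for demonstration.

    # Illustrative sketch, not the plugin's actual code: suggest a new level
    # for a feature logging statement from a Mylyn-style degree-of-interest
    # (DOI) score derived from Git history and task contexts.
    LEVELS = ["FINEST", "FINER", "FINE", "CONFIG", "INFO", "WARNING", "SEVERE"]

    def suggest_level(doi_score: float, max_doi: float) -> str:
        """Map a normalized DOI score onto the java.util.logging level
        ladder: the hotter the surrounding feature, the higher the level."""
        if max_doi <= 0:
            return LEVELS[0]  # no interest data: keep the statement quiet
        rank = round((doi_score / max_doi) * (len(LEVELS) - 1))
        return LEVELS[max(0, min(rank, len(LEVELS) - 1))]

    # A feature touched heavily in recent commits and Mylyn task contexts:
    print(suggest_level(doi_score=8.5, max_doi=10.0))  # -> WARNING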
An Exploratory Study on the Characteristics of Logging Practices in Mobile Apps: A Case Study on F-Droid
Logging is a common practice in software engineering. Prior research has investigated the characteristics of logging practices in system software (e.g., web servers or databases) as well as in desktop applications. However, despite the popularity of mobile apps, little is known about their logging practices. In this thesis, we study logging practices in mobile apps. In particular, we conduct a case study on 1,444 open source Android apps in the F-Droid repository. Through a quantitative study, we find that although logging is less pervasive in mobile apps than in server and desktop applications, it is leveraged in almost all studied apps. However, we find considerable differences between the logging practices of mobile apps and those that prior studies observed in server and desktop applications. To further understand these differences, we conduct firehouse email interviews and a qualitative annotation of the rationale for using logs in mobile app development. By comparing the logging level of each logging statement with the developers' rationale for using the logs, we find that all too often (35.4%) the chosen logging level and the rationale are inconsistent. Such inconsistency may prevent useful runtime information from being recorded or may generate unnecessary logs that cause performance overhead. Finally, to understand the magnitude of this overhead, we conduct a performance evaluation of eight mobile apps, comparing runs that generate all logs with runs that generate none. In general, we observe a statistically significant performance overhead across various performance metrics (response time, CPU, and battery consumption). In addition, we find that when an app exhibits significant logging overhead, disabling the unnecessary logs indeed provides a statistically significant performance improvement. Our results show the need for systematic guidance and automated tool support to assist in mobile logging practices.
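The thesis measures Android apps, but the overhead mechanism it evaluates
is general. As a minimal illustration (Python's logging module stands in
for android.util.Log, and the workload is invented, not taken from the
study), the sketch below times the same loop with verbose logging enabled
and then disabled via the logger's level, the same "disable the unnecessary
logs" mitigation described above.

    # Minimal illustration with an invented workload (not from the study):
    # the runtime cost of verbose logging versus disabling it via the level.
    import logging
    import os
    import time

    logger = logging.getLogger("app")
    logger.addHandler(logging.StreamHandler(open(os.devnull, "w")))

    def work(n: int) -> float:
        start = time.perf_counter()
        for i in range(n):
            # The guard mirrors Android's Log.isLoggable() idiom: skip even
            # the message formatting when the level is disabled.
            if logger.isEnabledFor(logging.DEBUG):
                logger.debug("step %d of %d", i, n)
        return time.perf_counter() - start

    logger.setLevel(logging.DEBUG)
    verbose_s = work(100_000)
    logger.setLevel(logging.WARNING)  # disable the unnecessary logs
    quiet_s = work(100_000)
    print(f"verbose: {verbose_s:.3f}s  quiet: {quiet_s:.3f}s")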
The Making of Cloud Applications: An Empirical Study on Software Development for the Cloud
Cloud computing is gaining more and more traction as a deployment and
provisioning model for software. While a large body of research already covers
how to optimally operate a cloud system, we still lack insights into how
professional software engineers actually use clouds, and how the cloud impacts
development practices. This paper reports on the first systematic study on how
software developers build applications in the cloud. We conducted a
mixed-method study, consisting of qualitative interviews of 25 professional
developers and a quantitative survey with 294 responses. Our results show that
adopting the cloud has a profound impact throughout the software development
process, as well as on how developers utilize tools and data in their daily
work. Among other things, we found that (1) developers need better means to
anticipate runtime problems and rigorously define metrics for improved fault
localization and (2) although the cloud offers an abundance of operational
data, developers still often rely on their experience and intuition rather
than on metrics. From our findings, we extracted a set of guidelines for
cloud development and identified challenges for researchers and tool
vendors.
Log4Perf: Suggesting and Updating Logging Locations for Web-based Systems' Performance Monitoring
Performance assurance activities are an essential step in the release cycle of software systems. Logs have become one of the most important sources of information used to monitor, understand, and improve software performance. However, developers often face the challenge of making logging decisions: neither logging too little nor logging too much is desirable. Although prior research has proposed techniques to assist in logging decisions, those automated logging guidance techniques are rather general and do not consider a particular goal, such as monitoring software performance. In this thesis, we present Log4Perf, an automated approach that suggests where to insert logging statements for the purpose of monitoring web-based systems' software performance. In particular, our approach builds and manipulates a statistical performance model to identify the locations in the source code that statistically significantly influence software performance. To evaluate Log4Perf, we conduct case studies on open source systems, i.e.,
CloudStore and OpenMRS, and one large-scale commercial system. Our evaluation results show that Log4Perf can build well-fit statistical performance models, indicating that such models can be leveraged to investigate the influence of source code locations on performance. Also, the suggested logging locations are often small and simple methods that do not have logging statements and that are not performance hotspots, making our approach an ideal complement to traditional approaches based on software metrics or performance hotspots. In addition, we propose approaches that can suggest the need to update logging locations as software evolves. After evaluating our approach, we manually examine the logging locations that are newly suggested or deprecated and identify seven root causes.
Log4Perf is integrated into the release engineering process of the commercial software to provide logging suggestions on a regular basis.
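The statistical performance model is described only at a high level here.
One plausible minimal reading, regressing a performance measure on
per-location execution counts and suggesting logging wherever a location's
coefficient is statistically significant, is sketched below; the data
layout, column names, and the 0.05 threshold are assumptions for
illustration, not Log4Perf's actual implementation.

    # Illustrative sketch of a Log4Perf-style statistical performance model
    # (assumed data layout, not the actual tool): regress response time on
    # per-location execution counts and suggest logging at locations whose
    # influence on performance is statistically significant.
    import pandas as pd
    import statsmodels.api as sm

    # One row per performance-test run; hypothetical columns: execution
    # counts of two candidate code locations plus the measured response time.
    runs = pd.DataFrame({
        "loc_checkout": [120, 340, 200, 410, 90, 380],
        "loc_search":   [900, 850, 910, 870, 880, 860],
        "response_ms":  [210, 540, 330, 640, 170, 600],
    })

    X = sm.add_constant(runs[["loc_checkout", "loc_search"]])
    model = sm.OLS(runs["response_ms"], X).fit()

    # Keep locations whose coefficients are significant at the 5% level.
    hits = [loc for loc, p in model.pvalues.drop("const").items() if p < 0.05]
    print("suggest logging at:", hits)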
Are They All Good? Studying Practitioners' Expectations on the Readability of Log Messages
Developers write logging statements to generate logs that provide run-time
information for various tasks. The readability of log messages in the logging
statements (i.e., the descriptive text) is crucial to the value of the
generated logs. Immature log messages may slow down or even obstruct the
process of log analysis. Despite the importance of log messages, there is still
a lack of standards on what constitutes good readability in log messages and
how to write them. In this paper, we conduct a series of interviews with 17
industrial practitioners to investigate their expectations on the readability
of log messages. Through the interviews, we derive three aspects related to the
readability of log messages, including Structure, Information, and Wording,
along with several specific practices to improve each aspect. We validate our
findings through a series of online questionnaire surveys and receive positive
feedback from the participants. We then manually investigate the readability of
log messages in large-scale open source systems and find that a large portion
(38.1%) of the log messages have inadequate readability. Motivated by this
observation, we further explore the potential of automatically classifying the
readability of log messages using deep learning and machine learning models. We
find that both deep learning and machine learning models can effectively
classify the readability of log messages with a balanced accuracy above 80.0%
on average. Our study provides comprehensive guidelines for composing log
messages to further improve practitioners' logging practices.
Comment: Accepted as a research paper at the 38th IEEE/ACM International
Conference on Automated Software Engineering (ASE 2023)
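The abstract does not name the models used; as an assumed baseline in the
same spirit, the sketch below trains a TF-IDF plus logistic-regression
classifier on invented labeled log messages and reports the balanced
accuracy metric the paper uses. The messages, labels, and feature choices
are illustrative only.

    # Assumed baseline in the spirit of the paper (not its actual models):
    # classify log-message readability and score with balanced accuracy.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import balanced_accuracy_score
    from sklearn.pipeline import make_pipeline

    # Toy labeled data (invented): 1 = adequate readability, 0 = inadequate.
    messages = [
        "Failed to connect to database host db1, retrying in 5s",
        "err!!",
        "User session expired; redirecting to login page",
        "x=3",
        "Cache miss for key order-123; falling back to origin",
        "done",
    ]
    labels = [1, 0, 1, 0, 1, 0]

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    clf.fit(messages, labels)

    # In-sample only to keep the sketch short; a real study would hold out
    # a test set (the paper reports > 80% balanced accuracy on average).
    preds = clf.predict(messages)
    print("balanced accuracy:", balanced_accuracy_score(labels, preds))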
CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020
The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and
Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market,
technical, ethical and governance challenges posed by the intersection of AI and cybersecurity,
focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder
by design and composed of academics, industry players from various sectors, policymakers and civil
society.
The Task Force is currently discussing issues such as the state and evolution of the application of AI
in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics
between cyber attackers and defenders; the increasing need for sharing information on threats and
how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and
possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.
As part of these activities, this report assesses the Ethics Guidelines for
Trustworthy AI presented by the High-Level Expert Group on AI (HLEG) on
April 8, 2019. In particular, this report analyses and
makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed
at helping the public and private sectors operationalise Trustworthy AI. The list is composed
of 131 items that are supposed to guide AI designers and developers throughout the process of
design, development, and deployment of AI, although it is not intended as
guidance for ensuring compliance with applicable laws. The list is in its
piloting phase and is currently undergoing a
revision that will be finalised in early 2020.
This report seeks to contribute to this revision by addressing in particular the interplay between
AI and cybersecurity. This evaluation has been made according to specific criteria: whether and how
the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental
Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks
are fundamentally different from traditional cyberattacks; whether they are compatible with
different risk levels; whether they are flexible enough to allow clear and
easy measurement and implementation by AI developers and SMEs; and, overall,
whether they are likely to create obstacles
for the industry.
The HLEG is a diverse group, with more than 50 members representing different stakeholders, such
as think tanks, academia, EU Agencies, civil society, and industry, who were given the difficult task of
producing a simple checklist for a complex issue. The public engagement
exercise looks successful overall, in that more than 450 stakeholders have
signed up and are contributing to the process.
The next sections of this report present the items listed by the HLEG followed by the analysis and
suggestions raised by the Task Force (see the list of Task Force members in Annex 1).