Toward Reverse Engineering of VBA Based Excel Spreadsheet Applications
Modern spreadsheet systems can be used to implement complex spreadsheet
applications including data sheets, customized user forms and executable
procedures written in a scripting language. These applications are often
developed by practitioners who follow no software engineering practice
and produce no design documentation, so spreadsheet applications can be
very difficult to maintain or restructure. In this position paper we
present in a nutshell two reverse engineering techniques and a tool that we are
currently developing for the abstraction of conceptual data models and business
logic models.
Comment: In Proceedings of the 2nd Workshop on Software Engineering Methods in
Spreadsheets (http://spreadsheetlab.org/sems15/)
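The abstraction of a conceptual data model from spreadsheet data can be illustrated with a minimal sketch. The function below is hypothetical and not from the paper: it merely infers a column-name-to-type mapping from tabular values, one small step a reverse engineering tool for spreadsheets might take.

```python
# Hypothetical sketch: infer a simple conceptual data model (column -> type)
# from a spreadsheet's tabular data. Illustrative only; not the paper's tool.
def infer_schema(header, rows):
    """Map each column name to the most specific type seen in its values."""
    def classify(value):
        for caster, name in ((int, "integer"), (float, "real")):
            try:
                caster(value)
                return name
            except ValueError:
                pass
        return "text"

    schema = {}
    for i, col in enumerate(header):
        kinds = {classify(row[i]) for row in rows if row[i] != ""}
        # A column mixing several kinds degrades to the most general type.
        schema[col] = kinds.pop() if len(kinds) == 1 else "text"
    return schema

print(infer_schema(["id", "price", "note"],
                   [["1", "9.99", "ok"], ["2", "12.50", ""]]))
# {'id': 'integer', 'price': 'real', 'note': 'text'}
```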
A Model-Driven Architecture Approach to the Efficient Identification of Services on Service-oriented Enterprise Architecture
Service-Oriented Enterprise Architecture requires the efficient development of loosely coupled and interoperable sets of services. Existing design approaches do not always take full advantage of the value and importance of the engineering invested in existing legacy systems. This paper proposes an approach to identify the key services of such legacy systems effectively. The approach identifies these services using a Model-Driven Architecture approach, supported by guidelines that cover a wide range of possible service types.
On the Effect of Semantically Enriched Context Models on Software Modularization
Many of the existing approaches for program comprehension rely on the
linguistic information found in source code, such as identifier names and
comments. Semantic clustering is one such technique for system modularization;
it relies on the informal semantics of the program, encoded in the
vocabulary used in the source code. Treating the source code as a collection of
tokens loses the semantic information embedded within the identifiers. We try
to overcome this problem by introducing context models for source code
identifiers to obtain a semantic kernel, which can be used both for deriving
the topics that run through the system and for clustering them. In the
first model, we abstract an identifier to its type representation and build on
this notion of context to construct contextual vector representation of the
source code. The second notion of context is defined based on the flow of data
between identifiers to represent a module as a dependency graph where the nodes
correspond to identifiers and the edges represent the data dependencies between
pairs of identifiers. We have applied our approach to 10 medium-sized open
source Java projects, and show that by introducing contexts for identifiers,
the quality of the modularization of the software systems is improved. Both of
the context models give results that are superior to the plain vector
representation of documents. In some cases, the authoritativeness of
decompositions is improved by 67%. Furthermore, a more detailed evaluation of
our approach on JEdit, an open source editor, demonstrates that inferred topics
through performing topic analysis on the contextual representations are more
meaningful compared to the plain representation of the documents. The proposed
approach in introducing a context model for source code identifiers paves the
way for building tools that support developers in program comprehension tasks
such as application and domain concept location, software modularization, and
topic analysis.
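The "plain" vector representation that the context models improve on can be sketched in a few lines. This is an illustrative baseline, not the paper's implementation: each module becomes a bag of identifier tokens, and lexically related modules score high cosine similarity, which a clustering step would then exploit.

```python
import math
import re

def tokens(source):
    # Split camelCase and snake_case identifiers into lowercase terms.
    parts = []
    for word in re.findall(r"[A-Za-z]+", source):
        parts += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", word)
    return [p.lower() for p in parts]

def vector(source):
    # Term-frequency vector over identifier tokens.
    v = {}
    for t in tokens(source):
        v[t] = v.get(t, 0) + 1
    return v

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(x * x for x in a.values()))
           * math.sqrt(sum(x * x for x in b.values())))
    return num / den if den else 0.0

m1 = vector("parseXmlDocument readXmlNode")
m2 = vector("xml_parser parse_node")
m3 = vector("drawButton paintWidget")
assert cosine(m1, m2) > cosine(m1, m3)  # the XML modules cluster together
```

The paper's contribution is precisely to replace this flat tokenization with type-based and data-flow-based contexts, which this baseline deliberately omits.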
Using Automatic Static Analysis to Identify Technical Debt
The technical debt (TD) metaphor describes a tradeoff between short-term and long-term goals in software development. Developers, in such situations, accept compromises in one dimension (e.g. maintainability) to meet an urgent demand in another dimension (e.g. delivering a release on time). Since TD accrues interest in the form of time spent to correct the code and accomplish quality goals, accumulation of TD in software systems is dangerous because it can lead to more difficult and expensive maintenance. The research presented in this paper focuses on the use of automatic static analysis to identify technical debt at code level with respect to different quality dimensions. The methodological approach is that of Empirical Software Engineering, and both past and current results are presented, focusing on functionality, efficiency and maintainability.
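The idea of flagging code-level technical debt with static rules can be illustrated with a toy pass. The rule names and thresholds below are assumptions for illustration; real analyzers of the kind the study relies on apply far richer, validated rule sets.

```python
# Toy static-analysis pass flagging common code-level technical-debt
# indicators. Rule names and thresholds are illustrative assumptions.
RULES = [
    ("long-method", lambda lines: len(lines) > 30),
    ("todo-marker", lambda lines: any("TODO" in l or "FIXME" in l
                                      for l in lines)),
    ("deep-nesting", lambda lines: any((len(l) - len(l.lstrip())) // 4 >= 4
                                       for l in lines)),
]

def debt_issues(source):
    """Return the names of all rules the given source fragment violates."""
    lines = source.splitlines()
    return [name for name, check in RULES if check(lines)]

snippet = "def f():\n    # TODO: handle errors\n    return 1\n"
print(debt_issues(snippet))  # ['todo-marker']
```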
Forensic Attacks Analysis and the Cyber Security of Safety-Critical Industrial Control Systems
Industrial Control Systems (ICS) and SCADA (Supervisory Control And Data Acquisition) applications monitor
and control a wide range of safety-related functions. These include energy generation where failures could have
significant, irreversible consequences. They also include the control systems that are used in the manufacture of
safety-related products. In this case, bugs in an ICS/SCADA system could introduce flaws in the production of
components that remain undetected before being incorporated into safety-related applications. Industrial Control
Systems, typically, use devices and networks that are very different from conventional IP-based infrastructures.
These differences prevent the re-use of existing cyber-security products in ICS/SCADA environments; the
architectures, file formats and process structures are very different. This paper supports the forensic analysis of
industrial control systems in safety-related applications. In particular, we describe how forensic attack analysis is
used to identify weaknesses in devices so that we can both protect components and determine the information
that must be analyzed during the aftermath of a cyber-incident. Simulated attacks detect vulnerabilities; a risk-based
approach can then be used to assess the likelihood and impact of any breach. These risk assessments are then used
to justify both immediate and longer-term countermeasures
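The risk-based step described above can be sketched minimally: each vulnerability exposed by a simulated attack is scored as likelihood times impact, and scores above a threshold justify immediate countermeasures. The scales, threshold, and vulnerability names here are assumptions for illustration, not values from the paper.

```python
# Illustrative risk-based triage: score = likelihood x impact (both 1-5,
# an assumed scale); scores at or above the threshold warrant immediate
# countermeasures. All names and numbers below are hypothetical.
def prioritize(vulns, threshold=12):
    """vulns: list of (name, likelihood, impact); returns urgent names,
    highest risk first."""
    scored = {name: l * i for name, l, i in vulns}
    urgent = [name for name, score in scored.items() if score >= threshold]
    return sorted(urgent, key=lambda n: -scored[n])

findings = [
    ("default-plc-password", 5, 4),   # score 20
    ("unsigned-firmware",    2, 5),   # score 10
    ("open-debug-port",      4, 3),   # score 12
]
print(prioritize(findings))  # ['default-plc-password', 'open-debug-port']
```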
HepData reloaded: reinventing the HEP data archive
We describe the status of the HepData database system, following a major
re-development in time for the advent of LHC data. The new HepData system
benefits from use of modern database and programming language technologies, as
well as a variety of high-quality tools for interfacing the data sources and
their presentation, primarily via the Web. The new back-end provides much more
flexible and semantic data representations than before, on which new external
applications can be built to respond to the data demands of the LHC
experimental era. The HepData re-development was largely motivated by a desire
to have a single source of reference data for Monte Carlo validation and tuning
tools, whose status and connection to HepData we also briefly review.
Comment: 7 pages, 3 figures, Presented at 13th International Workshop on
Advanced Computing and Analysis Techniques in Physics Research (ACAT 2010),
February 22-27, 2010, Jaipur, India
Reverse Engineering Encapsulated Components from Object-Oriented Legacy Code
Current component-directed reverse engineering approaches extract ADL-based components from legacy systems. ADL-based components need to be configured at code level for reuse, they cannot be re-deposited after composition for future reuse, and they cannot provide flexible reusability, as one has to bind all the ports in order to compose them. This paper proposes a solution to these issues by extracting X-MAN components from legacy systems. In this paper, we explain our component model and the mapping from object-oriented code to X-MAN clusters using basic scenarios of our rule base.
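One early step in this kind of extraction, grouping legacy classes into candidate clusters, can be sketched as finding connected components of the class-dependency graph. This is a generic illustration under assumed representations, not the paper's X-MAN rule base.

```python
# Hypothetical sketch: group legacy classes into candidate component
# clusters by treating dependencies as undirected edges and taking
# connected components. Not the paper's X-MAN mapping rules.
def clusters(deps):
    """deps: dict class -> set of classes it uses. Returns sorted clusters."""
    graph = {c: set() for c in deps}
    for c, uses in deps.items():
        for u in uses:
            graph.setdefault(c, set()).add(u)
            graph.setdefault(u, set()).add(c)

    seen, out = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:  # depth-first walk of one component
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack += graph[node] - comp
        seen |= comp
        out.append(sorted(comp))
    return sorted(out)

deps = {"Order": {"Invoice"}, "Invoice": set(), "Logger": set()}
print(clusters(deps))  # [['Invoice', 'Order'], ['Logger']]
```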