Using Codecharts for formally modelling and automating detection of patterns with application to Security Patterns
Software design patterns are solutions to recurring design problems. Many authors have introduced catalogues that describe these patterns using templates consisting of informal statements and UML diagrams. Security patterns are design patterns for specific security problem domains, and are therefore described in the same manner. However, the current catalogues describing security patterns contain a degree of ambiguity and imprecision. These issues can lead to incorrect implementations, resulting in critical and costly security flaws, especially after delivery. Subsequent software maintenance also becomes difficult, especially for poorly documented systems. It is therefore important to overcome these issues by formalising the patterns, so that everyone implementing a pattern shares the same understanding of it.
Current pattern formalisation approaches aim to translate UML diagrams using various formal methods. However, these diagrams are incomplete or suffer from ambiguity and imprecision. Furthermore, the diagram notations employed cannot depict the abstraction present in the pattern descriptions. In addition, current formalisation approaches cannot formalise some security properties shown in the diagrams, such as the system boundary.
Furthermore, detecting patterns in source code improves overall software maintainability, particularly for large and legacy systems whose documentation is often obsolete or lost. Current pattern detection approaches rely on translating the patterns' diagrams; consequently, they cannot detect patterns at higher levels of abstraction. They also lack generality, abstraction detection, and efficiency.
This research proposes the use of Codecharts for formalising security patterns and investigates relationships among patterns. Furthermore, it proposes a pattern detection approach that outperforms current pattern detection approaches in terms of generality and abstraction detection, while competing in performance with the most efficient existing approaches.
Customizable Feature based Design Pattern Recognition Integrating Multiple Techniques
Recovering design information from legacy applications is a complex, expensive, challenging, and time-consuming task, owing to the ever-increasing complexity of software and the advent of modern technologies. Given the growing demand for maintenance of legacy systems that can cope with the latest technologies and new business requirements, reusing artifacts from existing legacy applications in new developments has become very important, indeed vital, for the software industry. Due to the constant evolution of their architecture, legacy systems often have incomplete, inconsistent, and obsolete documents that do not provide enough information about the structure of these systems. Often the source code is the only reliable source of information for recovering artifacts from legacy systems. Extracting design artifacts from the source code of existing legacy systems supports program comprehension, maintenance, code refactoring, reverse engineering, redocumentation, and reengineering methodologies. The objective of the approach used in this thesis is to recover design information from legacy code, with particular focus on the recovery of design patterns. Design patterns are key artifacts for recovering design decisions from legacy source code. Patterns have been extensively tested in different applications, and reusing them yields quality software at reduced cost and in a shorter time frame. In the past, different techniques, methodologies, and tools have been used to recover patterns from legacy applications. Each technique recovers patterns with different precision and recall rates, because the same pattern can be specified and implemented in different ways. The approach used in this thesis is based on customizable and reusable feature types that use static and dynamic parameters to define variant pattern definitions. Each feature type allows the user to select among multiple search techniques (SQL queries, regular expressions, and source code parsers), which are used to match features of patterns with source code artifacts. The technique focuses on detecting variants of different design patterns by using static, dynamic, and semantic analysis. The integrated use of SQL queries, source code parsers, regular expressions, and annotations improves the precision and recall of pattern extraction from different legacy systems. The approach introduces new semantics for annotations in the source code of legacy applications, which reduce the search space and time for detecting patterns. The prototypical implementation of the approach, called UDDPRT, is used to recognize different design patterns in the source code of multiple languages (Java, C/C++, C#). The prototype is flexible and customizable, so that even a novice user can change the SQL queries and regular expressions to detect implementation variants of design patterns. The approach achieves significantly improved precision and recall of pattern extraction, as demonstrated by experiments on a number of open source systems taken as baselines for comparison.
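The feature-type idea, a single structural feature of a pattern matched by a selectable search technique, can be sketched minimally. The feature names and regular expressions below are illustrative assumptions for a Java Singleton, not UDDPRT's actual queries:

```python
import re

# Two illustrative "feature types" for a Java Singleton candidate; each
# feature is matched here with a regular expression, one of the three
# selectable search techniques (SQL queries, regexes, source parsers).
# The patterns are simplified assumptions, not UDDPRT's real definitions.
FEATURES = {
    "private_constructor": re.compile(r"private\s+\w+\s*\(\s*\)"),
    "static_instance_accessor": re.compile(
        r"public\s+static\s+\w+\s+getInstance\s*\(\s*\)"),
}

def match_features(source: str) -> dict:
    """Report which Singleton features occur in the source text."""
    return {name: bool(rx.search(source)) for name, rx in FEATURES.items()}

java_src = """
public class Config {
    private static Config instance;
    private Config() { }
    public static Config getInstance() {
        if (instance == null) instance = new Config();
        return instance;
    }
}
"""

hits = match_features(java_src)
print(hits)  # both features present -> Singleton candidate
```

A real detector would combine many such feature matches (possibly from different search techniques) before reporting a pattern instance, which is what makes the precision/recall trade-off configurable.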
Evolutionary Service Composition and Personalization Ecosystem for Elderly Care
Current demographic trends suggest that people are living longer, while the ageing process entails many necessities, calling for care services tailored to the individual senior's needs and lifestyle. Personalized provision of care services usually involves a number of stakeholders, including relatives, friends, caregivers, professional assistance organizations, enterprises, and other support entities. Traditional Information and Communication Technology based care and assistance services for the elderly have mainly focused on the development of isolated and generic services, considering a single service provider and adopting an excessively techno-centric approach.
In contrast, advances in collaborative networks for elderly care suggest integrating services from multiple providers, encouraging collaboration as a way to provide better personalized services. This approach requires a support system to manage the personalization process and to rank the {service, provider} pairs.
An additional issue is service evolution, as an individual's care needs are not static over time. Consequently, the care services need to evolve accordingly to keep the elderly's requirements satisfied. In response to these requirements, an Elderly Care Ecosystem (ECE) framework, a Service Composition and Personalization Environment (SCoPE), and a Service Evolution Environment (SEvol) are proposed.
The ECE framework provides the context for the personalization and evolution methods. The SCoPE method matches the customer's profile against the available {service, provider} pairs to identify suitable services and corresponding providers to meet the customer's needs. SEvol is a method to build an adaptive and evolutionary system based on the MAPE-K methodology, supporting solution evolution to cope with the elderly's new life stages.
To demonstrate the feasibility, utility and applicability of SCoPE and SEvol,
a number of methods and algorithms are presented, and illustrative scenarios are
introduced in which {service, provider} pairs are ranked based on a
multidimensional assessment method. Composition strategies are based on
customer’s profile and requirements, and the evolutionary solution is
determined considering customer’s inputs and evolution plans.
For the ECE evaluation process, the following steps are adopted: (i) feature selection and software prototype development; (ii) validation of the ECE framework based on applicability and utility parameters; (iii) development of a case study illustrating a typical scenario involving an elderly person and her care needs; and (iv) a survey based on a modified version of the technology acceptance model (TAM), considering three contexts: the technological, organizational, and collaborative environment.
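Ranking {service, provider} pairs against a customer's profile with a multidimensional assessment can be sketched as a weighted score. The dimension names, weights, and ratings below are hypothetical illustrations, not the actual ECE assessment model:

```python
# Hypothetical sketch: rank {service, provider} pairs by a weighted sum
# of per-dimension ratings. Dimensions and numbers are invented for
# illustration; the thesis's multidimensional method is not shown here.

def score(pair_ratings: dict, profile_weights: dict) -> float:
    """Weighted sum of a pair's per-dimension ratings (all in [0, 1])."""
    return sum(profile_weights[d] * pair_ratings.get(d, 0.0)
               for d in profile_weights)

def rank(pairs: dict, profile_weights: dict) -> list:
    """Return the {service, provider} pairs sorted best-first."""
    return sorted(pairs, key=lambda p: score(pairs[p], profile_weights),
                  reverse=True)

# This customer's profile weighs trust highest, then cost, then speed.
weights = {"trust": 0.5, "cost": 0.3, "response_time": 0.2}

candidates = {
    ("meal_delivery", "ProviderA"): {"trust": 0.9, "cost": 0.4, "response_time": 0.7},
    ("meal_delivery", "ProviderB"): {"trust": 0.6, "cost": 0.9, "response_time": 0.8},
}

for pair in rank(candidates, weights):
    print(pair, round(score(candidates[pair], weights), 2))
```

As the customer's needs evolve, re-running the ranking with updated weights (the evolution step SEvol addresses) can change which pair is preferred without changing the candidate set.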
Fujaba days 2009 : proceedings of the 7th international Fujaba days, Eindhoven University of Technology, the Netherlands, November 16-17, 2009
Fujaba is an Open Source UML CASE tool project started at the software engineering group of Paderborn University in 1997. In 2002, Fujaba was redesigned and became the Fujaba Tool Suite, with a plug-in architecture that allows developers to add functionality easily while retaining full control over their contributions. Multiple application domains: Fujaba has followed the model-driven development philosophy from its beginning in 1997. In its early days, Fujaba had a special focus on code generation from UML diagrams, resulting in a visual programming language with a special emphasis on object-structure-manipulating rules. Today, at least six rather independent tool versions are under development in Paderborn, Kassel, and Darmstadt, supporting (1) reengineering, (2) embedded real-time systems, (3) education, (4) specification of distributed control systems, (5) integration with the ECLIPSE platform, and (6) MOF-based integration of system (re-)engineering tools. International community: to our knowledge, quite a number of research groups have also chosen Fujaba as a platform for UML- and MDA-related research activities. In addition, many Fujaba users send requests for more functionality and extensions. Therefore, the 7th International Fujaba Days aimed at bringing together Fujaba developers and users from all over the world to present their ideas and projects and to discuss them with each other and with the Fujaba core development team.
The Digital Classicist 2013
This edited volume collects together peer-reviewed papers that initially emanated from presentations at Digital Classicist seminars and conference panels.
This wide-ranging volume showcases exemplary applications of digital scholarship to the ancient world and critically examines the many challenges and opportunities afforded by such research. The chapters included here demonstrate innovative approaches that drive forward the research interests of both humanists and technologists while showing that rigorous scholarship is as central to digital research as it is to mainstream classical studies.
As with the earlier Digital Classicist publications, our aim is not to give a broad overview of the field of digital classics; rather, we present here a snapshot of some of the varied research of our members in order to engage with and contribute to the development of scholarship both in the field of classical antiquity and in Digital Humanities more broadly.
Study of Fine-Grained, Irregular Parallel Applications on a Many-Core Processor
This dissertation demonstrates the possibility of obtaining strong speedups for a variety of parallel applications versus the best serial and parallel implementations on commodity platforms. These results were obtained using the PRAM-inspired Explicit Multi-Threading (XMT) many-core computing platform, which is designed to efficiently support execution of both serial and parallel code and switching between the two.
Biconnectivity: For finding the biconnected components of a graph, we demonstrate speedups of 9x to 33x on XMT relative to the best serial algorithm, using a relatively modest silicon budget. Further evidence suggests that speedups of 21x to 48x are possible. For graph connectivity, we demonstrate that XMT outperforms two contemporary NVIDIA GPUs of similar or greater silicon area. Prior studies of parallel biconnectivity algorithms achieved at most a 4x speedup; we could not find GPU biconnectivity code to compare against directly.
Triconnectivity: We present a parallel solution to the problem of determining the triconnected components of an undirected graph. We obtain significant speedups on XMT over the only published optimal (linear-time) serial implementation of a triconnected components algorithm running on a modern CPU. To our knowledge, no other parallel implementation of a triconnected components algorithm has been published for any platform.
Burrows-Wheeler compression: We present novel work-optimal parallel algorithms for Burrows-Wheeler compression and decompression of strings over a constant alphabet and their empirical evaluation. To validate these theoretical algorithms, we implement them on XMT and show speedups of up to 25x for compression, and 13x for decompression, versus bzip2, the de facto standard implementation of Burrows-Wheeler compression.
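For readers unfamiliar with the transform, a minimal serial Burrows-Wheeler transform and its inverse can be sketched as follows. This is only the textbook quadratic construction, not the work-optimal parallel algorithms of the thesis:

```python
# Minimal serial Burrows-Wheeler transform and inverse, shown only to
# illustrate what the parallel algorithms compute; real implementations
# (and the thesis's work-optimal parallel versions) are far more refined.

def bwt(s: str, sentinel: str = "$") -> str:
    """Burrows-Wheeler transform: last column of the sorted rotations."""
    s += sentinel  # unique end marker, assumed absent from s
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last: str, sentinel: str = "$") -> str:
    """Invert the transform by repeatedly prepending and re-sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)

print(bwt("banana"))               # "annb$aa": like characters cluster
print(inverse_bwt(bwt("banana")))  # "banana"
```

The clustering of equal characters in the output is what makes the subsequent move-to-front and entropy-coding stages of a bzip2-style pipeline effective.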
Fast Fourier transform (FFT): Using FFT as an example, we examine the impact that adoption of some enabling technologies, including silicon photonics, would have on the performance of a many-core architecture. The results show that a single-chip many-core processor could potentially outperform a large high-performance computing cluster.
Boosted decision trees: This chapter focuses on the hybrid memory architecture of the XMT computer platform, a key part of which is a flexible all-to-all interconnection network that connects processors to shared memory modules. First, to understand some recent advances in GPU memory architecture and how they relate to this hybrid memory architecture, we use microbenchmarks including list ranking. Then, we contrast the scalability of applications with that of routines. In particular, regardless of the scalability needs of full applications, some routines may involve smaller problem sizes, and in particular smaller levels of parallelism, perhaps even serial. To see how a hybrid memory architecture can benefit such applications, we simulate a computer with such an architecture and demonstrate the potential for a speedup of 3.3X over NVIDIA's most powerful GPU to date for XGBoost, an implementation of boosted decision trees, a timely machine learning approach.
Boolean satisfiability (SAT): SAT is an important performance-hungry problem with applications in many domains. However, most work on parallelizing SAT solvers has focused on coarse-grained, mostly embarrassing parallelism. Here, we study fine-grained parallelism that can speed up existing sequential SAT solvers. We show the potential for speedups of up to 382x across a variety of problem instances. We hope that these results will stimulate future research.