An Architecture Dynamic Modeling Language for Self-Healing Systems
As modern software-based systems increase in complexity, recovery from malicious attacks and rectification of system faults become more difficult, labor-intensive, and error-prone. These factors have motivated research on self-healing systems, which employ architectural models to monitor system behavior and use the inputs obtained from them to adapt themselves to the run-time environment. Numerous architectural description languages (ADLs) have been developed, each providing complementary capabilities for architectural development and analysis. Unfortunately, few ADLs embrace dynamic change as a fundamental consideration or support a broad class of adaptive changes at the architectural level. The Architecture Dynamic Modeling Language (ADML) is being developed as a new formal language and conceptual model for representing dynamic software architectures. The ADML couples the static information provided by the system requirements with the dynamic knowledge provided by tactics, and offers a uniform way to represent and reason about both static and dynamic aspects of self-healing systems. Because the ADML is based on the Dynamic Description Logic DDL, architectural ontology entailment for ADML languages can be reduced to knowledge base satisfiability in DDL.
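The reduction claimed in the final sentence follows the standard description-logic pattern; as a sketch in general terms (the action operators specific to DDL are elided here):

    \mathcal{K} \models \varphi \quad\Longleftrightarrow\quad \mathcal{K} \cup \{\lnot\varphi\}\ \text{is unsatisfiable}

That is, an architectural ontology entails a property exactly when adding the property's negation makes the knowledge base unsatisfiable, so a DDL satisfiability checker can double as an entailment checker.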
DAS: a data management system for instrument tests and operations
The Data Access System (DAS) is a metadata and data management software
system, providing a reusable solution for the storage of data acquired both
from telescopes and auxiliary data sources during the instrument development
phases and operations. It is part of the Customizable Instrument WorkStation
system (CIWS-FW), a framework for the storage, processing and quick-look at the
data acquired from scientific instruments. The DAS provides a data access layer
mainly targeted to software applications: quick-look displays, pre-processing
pipelines and scientific workflows. It is logically organized in three main
components: an intuitive and compact Data Definition Language (DAS DDL) in XML
format, aimed at the definition of user data types; an Application Programming
Interface (DAS API), which automatically adds classes and methods supporting
the DDL data types and provides an object-oriented query language; and a data
management component, which maps the metadata of the DDL data types into a
relational Database Management System (DBMS) and stores the data in a shared
(network) file
system. With the DAS DDL, developers define the data model for a particular
project, specifying for each data type the metadata attributes, the data format
and layout (if applicable), and named references to related or aggregated data
types. Together with the DDL user-defined data types, the DAS API acts as the
only interface to store, query and retrieve the metadata and data in the DAS
system, providing both an abstract interface and a data-model-specific one in
C, C++ and Python. The mapping of metadata in the back-end database is
automatic and supports several relational DBMSs, including MySQL, Oracle and
PostgreSQL.
Comment: Accepted for publication in the ADASS Conference Series
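To illustrate the storage pattern the abstract describes, metadata kept in a relational DBMS while the bulk data lives in a shared file system, here is a minimal self-contained Python sketch; sqlite3 stands in for the supported back ends, and the raw_frame type and its attributes are hypothetical, not part of the actual DAS API:

    import os
    import sqlite3

    DATA_DIR = "das-data"                 # stand-in for the shared (network) file system

    con = sqlite3.connect("das_meta.db")  # stand-in for the MySQL/Oracle/PostgreSQL back end
    con.execute("""CREATE TABLE IF NOT EXISTS raw_frame (
                       id INTEGER PRIMARY KEY,
                       obs_id TEXT,
                       exposure_s REAL,
                       path TEXT)""")

    def store(obs_id, exposure_s, payload):
        """Insert the metadata into the DBMS, write the data blob to the file system."""
        cur = con.execute(
            "INSERT INTO raw_frame (obs_id, exposure_s, path) VALUES (?, ?, '')",
            (obs_id, exposure_s))
        os.makedirs(DATA_DIR, exist_ok=True)
        path = os.path.join(DATA_DIR, f"raw_frame_{cur.lastrowid}.bin")
        with open(path, "wb") as f:
            f.write(payload)
        con.execute("UPDATE raw_frame SET path = ? WHERE id = ?", (path, cur.lastrowid))
        con.commit()
        return cur.lastrowid

    def load(obs_id):
        """Query by metadata, then read the associated data back from the file system."""
        for frame_id, path in con.execute(
                "SELECT id, path FROM raw_frame WHERE obs_id = ?", (obs_id,)):
            with open(path, "rb") as f:
                yield frame_id, f.read()

    store("OBS-001", 1.5, b"\x00" * 64)
    print([frame_id for frame_id, _ in load("OBS-001")])

Keeping only paths and searchable attributes in the database keeps the DBMS small while metadata queries stay fast; the payload bytes never pass through the relational layer.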
CIWS-FW: a Customizable Instrument Workstation Software Framework for instrument-independent data handling
The CIWS-FW is aimed at providing a common and standard solution for the
storage, processing and quick-look at the data acquired from scientific
instruments for astrophysics. The target system is the instrument workstation
either in the context of the Electrical Ground Support Equipment for
space-borne experiments, or in the context of the data acquisition system for
instrumentation. The CIWS-FW core includes software developed by team members
for previous experiments and provides new components and tools that improve the
software reusability, configurability and extensibility attributes. The CIWS-FW
mainly consists of two packages: the data processing system and the data access
system. The former provides the software components and libraries to support
the data acquisition, transformation, display and storage in near real time of
either a data packet stream or a sequence of data files generated by the
instrument. The latter is a meta-data and data management system, providing a
reusable solution for the archiving and retrieval of the acquired data. A
built-in operator GUI allows operators to control and configure the IW. In
addition, the framework provides mechanisms for system error handling and
logging. A web portal provides access to the CIWS-FW documentation, software
repository and bug tracking tools for CIWS-FW developers. We will describe the
CIWS-FW architecture and summarize the project status.
Comment: Accepted for publication in the ADASS Conference Series
Database Systems - Present and Future
Database systems nowadays play an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet continues to develop worldwide. In the current informatics context, developing database applications is the work of specialists, yet using databases, accessing a database from various applications, and related concepts have become accessible to all categories of IT users. This paper aims to summarize the curricular area covering the fundamental database systems issues necessary to train specialists in economic informatics higher education. Database systems integrate and interact with several informatics technologies and are therefore more difficult to understand and use; thus, students should already know a minimum set of mandatory concepts and their practical implementation: computer systems, programming techniques, programming languages, and data structures. The article also presents current trends in the evolution of database systems in the context of economic informatics.
Keywords: database systems (DBS), database management systems (DBMS), databases (DB), programming languages, data models, database design, relational databases, object-oriented systems, distributed systems, advanced database systems
Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support
A framework and methodology, termed LogiKEy, for the design and engineering
of ethical reasoners, normative theories and deontic logics is presented. The
overall motivation is the development of suitable means for the control and
governance of intelligent autonomous systems. LogiKEy's unifying formal
framework is based on semantical embeddings of deontic logics, logic
combinations and ethico-legal domain theories in expressive classical
higher-order logic (HOL). This meta-logical approach enables the provision of
powerful tool support in LogiKEy: off-the-shelf theorem provers and model
finders for HOL assist the LogiKEy designer of ethical intelligent agents in
flexibly experimenting with underlying logics and their combinations, with
ethico-legal domain theories, and with concrete examples, all at the same time.
Continuous improvements of these off-the-shelf provers translate, without
further effort, into improved reasoning performance in LogiKEy. Case studies,
in which the LogiKEy framework and methodology have been applied and tested,
give evidence that HOL's undecidability often does not hinder efficient
experimentation.
Comment: 50 pages; 10 figures
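To give a flavor of what a semantical embedding in HOL looks like, here is the standard Kripke-style shallow embedding of the obligation operator of a simple deontic logic; this is a sketch only, and LogiKEy's actual embeddings cover richer logics and their combinations:

    \varphi : \iota \to o \qquad \text{(formulas as predicates over worlds of type } \iota\text{)}
    R : \iota \to \iota \to o \qquad \text{(ideality/accessibility relation, assumed serial)}
    \mathbf{O}\,\varphi \;:=\; \lambda w.\ \forall v.\ R\,w\,v \rightarrow \varphi\,v
    \mathrm{valid}\ \varphi \;:=\; \forall w.\ \varphi\,w

Because the embedded connectives are ordinary HOL terms, unmodified HOL theorem provers and model finders apply directly to the resulting formulas, which is the mechanism behind the tool support described above.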
XML Security in Certificate Management - XML Certificator
The trend of rapidly growing use of the XML format in data/document management systems reveals that security measures should be urgently considered in next-generation data/document systems. This paper presents a new certificate management system developed on the basis of XML security mechanisms. The system is supported by the theory of XML security as well as object-oriented technology and databases, and it has been successfully implemented using C#, SQL, XML Signature and XML Encryption. Implementation metrics are also presented.
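The system described above is implemented in C#; as a language-neutral illustration of the core mechanism, signing canonicalized XML so that logically identical documents verify identically, here is a hedged Python sketch (assuming the lxml and cryptography packages; this shows the digest-and-sign step only, not a full XML Signature envelope):

    from lxml import etree
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    doc = etree.fromstring(b"<certificate><owner>Alice</owner></certificate>")

    # Canonicalize (C14N) so equivalent serializations yield identical bytes.
    payload = etree.tostring(doc, method="c14n")

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    signature = key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

    # Raises cryptography.exceptions.InvalidSignature if either the document
    # or the signature has been tampered with.
    key.public_key().verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())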
Hadoop Performance Analysis Model with Deep Data Locality
Background: Hadoop has become the base framework for big data systems, built on the simple idea that moving computation is cheaper than moving data. Hadoop increases data locality in the Hadoop Distributed File System (HDFS) to improve system performance: the network traffic among nodes is reduced as more tasks run on the machine that already holds their data. Previous research increased data locality in only one of the MapReduce stages to improve Hadoop performance, and there is currently no mathematical performance model for data locality in Hadoop. Methods: This study develops a Hadoop performance analysis model with data locality that covers the entire MapReduce process. The paper explains the data locality concept in the map and shuffle stages, and shows how to apply the model to improve the performance of a Hadoop system through deep data locality. Results: The benefit of deep data locality was demonstrated through three tests: a simulation-based test, a cloud test and a physical test. Across these tests, deep data locality improved Hadoop performance by over 34%. Conclusions: Deep data locality improved Hadoop performance by reducing data movement in HDFS.
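The abstract does not reproduce the model itself; as an illustration of the kind of relationship such a model captures, here is a hypothetical toy calculation in Python (the rates, counts and the map_stage_time function are made-up assumptions, not the paper's actual model):

    def map_stage_time(n_blocks, block_mb, locality, disk_mbps=150.0, net_mbps=40.0, slots=8):
        """Toy model: map-stage input time vs. the fraction of data-local tasks."""
        local = n_blocks * locality          # tasks reading their HDFS block from local disk
        remote = n_blocks - local            # tasks pulling their block over the network
        read_s = local * block_mb / disk_mbps + remote * block_mb / net_mbps
        return read_s / slots                # `slots` map tasks run concurrently

    # Raising locality from 60% to 95% shrinks the modeled read time markedly:
    for loc in (0.60, 0.95):
        print(f"locality={loc:.0%}: {map_stage_time(256, 128, loc):.0f} s")

Under these made-up rates the network-bound remote reads dominate, which is why pushing locality deeper into the pipeline, across both the map and shuffle stages, pays off.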