
    From fuzzy to annotated semantic web languages

    The aim of this chapter is to present a detailed, self-contained and comprehensive account of the state of the art in representing and reasoning with fuzzy knowledge in Semantic Web languages such as the triple languages RDF/RDFS, the conceptual languages of the OWL 2 family, and rule languages. We further show how one may generalise them to so-called annotation domains, which also cover, e.g., temporal and provenance extensions.
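
    To make the annotation-domain idea concrete, here is a minimal, hypothetical Scala sketch (the names and the max/min choice are illustrative, not taken from the chapter): an annotated RDFS inference step combines the annotations of its premises with the domain's meet operation, and the fuzzy unit interval is just one instance of the domain interface.

```scala
// Illustrative sketch (not from the chapter): RDF-style triples annotated
// with values from a generic annotation domain. Fuzzy degrees in [0, 1] are
// one instance; time intervals or provenance sets would be others.
case class Triple(s: String, p: String, o: String)

// An annotation domain supplies the operations used during inference:
// "join" merges alternative derivations of the same triple, while "meet"
// combines annotations along a single derivation.
trait AnnotationDomain[A] {
  def join(x: A, y: A): A
  def meet(x: A, y: A): A
}

// The fuzzy domain with max/min (Goedel semantics) as a special case.
object FuzzyDomain extends AnnotationDomain[Double] {
  def join(x: Double, y: Double): Double = math.max(x, y)
  def meet(x: Double, y: Double): Double = math.min(x, y)
}

object AnnotatedRdfDemo extends App {
  val kb = Map(
    Triple("tom", "rdf:type", "YoungPerson")           -> 0.8,
    Triple("YoungPerson", "rdfs:subClassOf", "Person") -> 1.0
  )
  // RDFS subclass rule (x type C) + (C subClassOf D) => (x type D),
  // annotated with the "meet" of the premises' annotations.
  val degree = FuzzyDomain.meet(
    kb(Triple("tom", "rdf:type", "YoungPerson")),
    kb(Triple("YoungPerson", "rdfs:subClassOf", "Person")))
  println(s"tom rdf:type Person holds to degree $degree") // 0.8
}
```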

    An exact approach for single machine scheduling with quadratic earliness and tardiness penalties

    In this paper, we consider the single machine scheduling problem with quadratic earliness and tardiness costs, and no machine idle time. We propose two different lower bounds, as well as a lower bounding procedure that combines these two bounds. Optimal branch-and-bound algorithms are then presented. These algorithms incorporate the proposed lower bound, as well as an insertion-based dominance test. The lower bounding procedure and the branch-and-bound algorithms are tested on a wide set of randomly generated problems. The computational results show that the branch-and-bound algorithms are capable of optimally solving, within reasonable computation times, instances with up to 20 jobs.
    Keywords: scheduling, single machine, quadratic earliness and tardiness, lower bounds, branch-and-bound.
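
    As a hedged illustration of the objective function (not the authors' code), the following Scala sketch evaluates a job sequence under quadratic earliness and tardiness penalties with no machine idle time; the weights and job data are invented for the example.

```scala
// Illustrative sketch (not the paper's code): cost of a job sequence under
// quadratic earliness and tardiness penalties, with no machine idle time.
object SchedulingDemo extends App {
  case class Job(proc: Int, due: Int, earlyWeight: Double, tardyWeight: Double)

  def cost(sequence: Seq[Job]): Double = {
    var completion = 0
    sequence.map { job =>
      completion += job.proc // no idle time: jobs run back-to-back
      val earliness = math.max(0, job.due - completion)
      val tardiness = math.max(0, completion - job.due)
      job.earlyWeight * earliness * earliness +
        job.tardyWeight * tardiness * tardiness
    }.sum
  }

  // A branch-and-bound search enumerates partial sequences and prunes any
  // branch whose lower bound already exceeds the best known complete cost;
  // for tiny instances, exhaustive enumeration yields the optimum directly.
  val jobs = Seq(Job(3, 5, 1.0, 2.0), Job(2, 4, 1.0, 2.0), Job(4, 12, 1.0, 2.0))
  println(jobs.permutations.map(cost).min)
}
```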

    A SURVEY OF THE PROPERTIES OF AGENTS

    In the past decade, agent systems have come to be regarded as one of the major fields of study within Artificial Intelligence (AI). Many different definitions of agents have been presented, and several distinct approaches to describing agency can be distinguished. While some authors have tried to define "what" an agent really is, others have tried to identify agents by means of the properties they should possess. Most authors agree on these properties (at least a basic set of them) as intrinsic to agents. Since definitions of agents are not consistent, we give an overview and list the properties intrinsic to an agent. Many different adjectives have been attached to the term agent, and many different kinds of agents and architectures have emerged. The aim of this paper is to give an overview of developments in the field, taking its main streams and projects into consideration. We also present some guidelines important when modelling agent systems, discuss security issues, and mention some existing problems that restrict the wider usage of agents.
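
    The property-based view of agency can be made concrete with a small, hypothetical Scala sketch (the trait and method names are invented; the property set follows the commonly cited basic one of autonomy, reactivity, pro-activeness, and social ability):

```scala
// Hypothetical sketch: the property-based view of agency encoded as traits.
// The property names follow the commonly cited basic set; everything else
// here is invented for illustration.
trait Autonomous    { def decide(): Unit }                  // acts without direct intervention
trait Reactive      { def perceive(event: String): Unit }   // responds to the environment
trait Proactive     { def pursueGoal(goal: String): Unit }  // takes the initiative
trait SocialAbility { def send(to: String, msg: String): Unit } // interacts with other agents

// An "agent" is then anything exhibiting (at least) this basic set.
class SimpleAgent extends Autonomous with Reactive with Proactive with SocialAbility {
  def decide(): Unit = println("choosing next action")
  def perceive(event: String): Unit = println(s"reacting to $event")
  def pursueGoal(goal: String): Unit = println(s"pursuing $goal")
  def send(to: String, msg: String): Unit = println(s"to $to: $msg")
}
```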

    On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems

    Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data have also become the cornerstone of strategic decision-making processes in businesses. For this purpose, there are numerous techniques for extracting knowledge and value from data. For example, optimisation algorithms excel at supporting decision-making processes to improve the use of resources, time, and costs in the organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. The support of Big Data technologies (which are based on distributed environments) is therefore required, given the volume, variety, and speed of the data. Then, in order to extract value from the data, a set of techniques or activities is applied in an orderly way and at different stages. Such sets of techniques or activities, which facilitate the acquisition, preparation, and analysis of data, are known in the literature as Big Data pipelines.

    This thesis tackles the improvement of three stages of Big Data pipelines: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, by focusing on each stage, or from a more complex and global perspective, implying the coordination of these stages to create data workflows. The first stage to improve is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with various levels of nested structures, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. This thesis therefore aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases: one is a general-purpose data transformation language, while the other is aimed at extracting event logs in a standard format for process mining algorithms.

    The second area for improvement is the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results; optimisation algorithms are a clear example, since data that are not sufficiently accurate and complete can severely affect the search space. This thesis therefore formulates a methodology for modelling Data Quality rules adjusted to the context of use, together with a tool that automates their assessment, making it possible to discard data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps select actions to improve the usability of the data.

    The third and last proposal involves the Data Analysis stage. Here, the thesis faces the challenge of supporting optimisation problems in Big Data pipelines. There is a lack of methodological solutions for computing exhaustive optimisation problems in distributed environments (i.e., optimisation problems that guarantee finding an optimal solution by exploring the whole search space). Solving this type of problem in the Big Data context is computationally complex, and can be NP-complete, for two reasons. On the one hand, the search space can increase significantly with the amount of data to be processed by the optimisation algorithms; this challenge is addressed through a technique for generating and grouping problems with distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial, so a proposal is presented for a particular case of this type of scenario. As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools integrated with the Apache Spark data processing engine, and the solutions have been validated through tests and use cases with real datasets.
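
    Since the thesis's tools integrate with Apache Spark, a minimal Spark snippet can illustrate the Data Preparation concern of flattening nested structures; this is a hypothetical example (the dataset and column names are invented), not the thesis's tooling.

```scala
// Minimal illustration (not the thesis's tooling): flattening a nested array
// column with Apache Spark, the engine the thesis's tools integrate with.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, explode}

object FlattenExample extends App {
  val spark = SparkSession.builder().master("local[*]").appName("flatten").getOrCreate()
  import spark.implicits._

  // Orders with one level of nesting: an array of line items per order.
  val orders = Seq(
    ("o1", Seq("itemA", "itemB")),
    ("o2", Seq("itemC"))
  ).toDF("orderId", "items")

  // explode() yields one row per array element -- the kind of structural
  // transformation a Data Preparation DSL would generate behind the scenes.
  val flat = orders.select(col("orderId"), explode(col("items")).as("item"))
  flat.show() // one (orderId, item) row per line item
  spark.stop()
}
```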

    Evolution of security engineering artifacts: a state of the art survey

    Security is an important quality aspect of modern open software systems. However, it is challenging to keep such systems secure because of their continuous evolution. Security evolution can only be managed adequately if it is considered for all artifacts throughout the software development lifecycle. This article surveys the state of the art on the evolution of security engineering artifacts, covering security requirements, security architectures, secure code, security tests, security models, and security risks, as well as security monitoring. For each of these artifacts, the authors give an overview of its evolution and security aspects and discuss the state of the art on its security evolution in detail. Based on this comprehensive survey, they summarise key issues and discuss directions of future research.

    Ontology Alignment using Biologically-inspired Optimisation Algorithms

    It is investigated how biologically-inspired optimisation methods can be used to compute alignments between ontologies. Independent of particular similarity metrics, the developed techniques demonstrate anytime behaviour and high scalability. Due to the inherent parallelisability of these population-based algorithms, it is possible to exploit dynamically scalable cloud infrastructures, a step towards the provisioning of Alignment-as-a-Service solutions for future semantic applications.
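
    A toy Scala sketch can convey why such algorithms parallelise and scale well; this is a generic population-based search with an invented similarity metric, not the thesis's algorithm.

```scala
// Toy sketch (not the thesis's algorithm): a population-based search over
// candidate alignments. An alignment assigns each source concept a target
// concept; fitness sums a pluggable similarity metric.
import scala.util.Random

object AlignmentDemo extends App {
  val source = Vector("Person", "Writes", "Paper")
  val target = Vector("Human", "Authors", "Article", "Venue")

  // Stand-in metric (character Jaccard); any similarity measure plugs in here.
  def similarity(a: String, b: String): Double =
    a.toSet.intersect(b.toSet).size.toDouble / a.toSet.union(b.toSet).size

  type Alignment = Vector[Int] // position i: index of the target matched to source(i)
  def fitness(al: Alignment): Double =
    source.indices.map(i => similarity(source(i), target(al(i)))).sum

  val rnd = new Random(42)
  def random(): Alignment = Vector.fill(source.size)(rnd.nextInt(target.size))
  def mutate(al: Alignment): Alignment =
    al.updated(rnd.nextInt(al.size), rnd.nextInt(target.size))

  // Keep the fitter half each generation and refill with mutants. Fitness
  // evaluation is embarrassingly parallel, which is what makes cloud
  // deployment and anytime behaviour natural for this class of algorithms.
  var population = Vector.fill(20)(random())
  for (_ <- 1 to 100) {
    val survivors = population.sortBy(al => -fitness(al)).take(10)
    population = survivors ++ survivors.map(mutate)
  }
  val best = population.maxBy(fitness)
  println(best.indices.map(i => s"${source(i)} -> ${target(best(i))}").mkString(", "))
}
```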

    CERN openlab Whitepaper on Future IT Challenges in Scientific Research

    This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.

    Engineering Self-Adaptive Collective Processes for Cyber-Physical Ecosystems

    The pervasiveness of computing and networking is creating significant opportunities for building valuable socio-technical systems. However, the scale, density, heterogeneity, interdependence, and QoS constraints of many target systems pose severe operational and engineering challenges. Beyond individual smart devices, cyber-physical collectives can provide services or solve complex problems by leveraging a "system effect" while coordinating and adapting to changes in context and environment. Understanding and building systems exhibiting collective intelligence and autonomic capabilities is a prominent research goal, partly covered, e.g., by the field of collective adaptive systems. Therefore, drawing inspiration from and building on the long-standing research activity on coordination, multi-agent systems, autonomic/self-* systems, spatial computing, and especially the recent aggregate computing paradigm, this thesis investigates concepts, methods, and tools for the engineering of possibly large-scale, heterogeneous ensembles of situated components that should be able to operate, adapt, and self-organise in a decentralised fashion. The primary contribution of this thesis consists of four main parts. First, we define and implement an aggregate programming language (ScaFi), internal to the mainstream Scala programming language, for describing collective adaptive behaviour, based on field calculi. Second, we conceive of a "dynamic collective computation" abstraction, also called aggregate process, formalised by an extension to the field calculus and implemented in ScaFi. Third, we characterise and provide a proof-of-concept implementation of a middleware for aggregate computing that enables the development of aggregate systems according to multiple architectural styles. Fourth, we apply and evaluate aggregate computing techniques in edge computing scenarios, and characterise a design pattern, called Self-organising Coordination Regions (SCR), that supports adjustable, decentralised decision-making and activity in dynamic environments.
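
    ScaFi is the thesis's own DSL, so a sketch of its canonical example, the self-healing gradient, gives a feel for aggregate programming. The import path and construct names below follow published ScaFi examples and may differ across library versions.

```scala
// The canonical "gradient" aggregate program, after published ScaFi examples
// (import path and construct names may vary across library versions).
// Every device repeatedly evaluates this expression against its neighbours:
// source devices output 0; each other device takes the minimum over
// neighbours of their distance estimate plus the estimated distance to them,
// yielding a self-healing field of distances from the nearest source.
import it.unibo.scafi.incarnations.BasicSimulationIncarnation._

class Gradient extends AggregateProgram with StandardSensors {
  override def main(): Double =
    rep(Double.PositiveInfinity) { distance =>
      mux(sense[Boolean]("source")) {
        0.0
      } {
        minHoodPlus(nbr(distance) + nbrRange())
      }
    }
}
```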

    MATrA: meta-modelling approach to traceability for avionics

    PhD Thesis. Traceability is the common term for mechanisms to record and navigate relationships between artifacts produced by development and assessment processes. Effective management of these relationships is critical to the success of projects involving the development of complex aerospace products. Practitioners use a range of notations to model aerospace products (often as part of a defined technique or methodology). Those appropriate to electrical and electronic systems (avionics) include Use Cases for requirements, Ada for development, and Fault Trees for assessment (others, such as PERT networks, support product management). Most notations used within the industry have tool support, although a lack of well-defined approaches to integration leads to inconsistencies and limits traceability between their respective data sets (internal models). Conceptually, the artifacts produced using such notations populate four traceability dimensions. Of these, three record links between project artifacts (describing the same product), while the fourth relates artifacts across different projects (and hence products), and across product families within the same project. The scope of this thesis is to define a meta-framework that characterises traceability dimensions for aerospace projects, and then to propose a concrete framework capturing the syntax and semantics of notations used in developing avionics for such projects, enabling traceability across the four dimensions. The concrete framework is achieved by exporting information from the internal models of tools supporting these notations to an integrated environment consisting of: i) a Workspace comprising a set of structures or meta-models (models describing models) expressed in a common modelling language representing selected notations (including appropriate extensions reflecting the application domain); ii) well-formedness constraints over these structures capturing properties of the notations (again reflecting the domain); and iii) associations between the structures. To maintain consistency and identify conflicts, elements of the structures are verified against a system model that defines common building blocks underlying the various notations. The approach is evaluated by (partial) tool implementation of the structures, which are populated using case study material derived from actual commercial specifications and industry standards.
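
    A small, hypothetical Scala sketch (all names invented, not from the thesis) conveys the core idea of typed trace links between artifacts from different notations, checked against a well-formedness constraint of the kind the framework's structures would capture:

```scala
// Hypothetical sketch (all names invented): artifacts produced by different
// notations, related by typed trace links, with one well-formedness
// constraint of the kind the framework's structures would capture.
sealed trait Artifact { def id: String }
case class UseCase(id: String) extends Artifact       // requirements notation
case class AdaUnit(id: String) extends Artifact       // development notation
case class FaultTreeNode(id: String) extends Artifact // assessment notation

case class TraceLink(from: Artifact, to: Artifact, kind: String)

object TraceDemo extends App {
  // Example constraint: every development unit traces back to a requirement.
  def wellFormed(units: Seq[AdaUnit], links: Seq[TraceLink]): Boolean =
    units.forall(u => links.exists(l => l.to == u && l.from.isInstanceOf[UseCase]))

  val links = Seq(TraceLink(UseCase("UC-1"), AdaUnit("pkg_nav"), "satisfies"))
  println(wellFormed(Seq(AdaUnit("pkg_nav")), links)) // true
}
```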