52 research outputs found

    Open Data

    Open data is freely usable, reusable, or redistributable by anybody, provided there are safeguards in place that protect the data’s integrity and transparency. This book describes how data retrieved from public open data repositories can improve the learning qualities of digital networking, particularly performance and reliability. Chapters address such topics as knowledge extraction, Open Government Data (OGD), public dashboards, intrusion detection, and artificial intelligence in healthcare.

    Hybrid approaches based on computational intelligence and semantic web for distributed situation and context awareness

    2011 - 2012. The research work focuses on Situation Awareness and Context Awareness. Specifically, Situation Awareness involves being aware of what is happening in the vicinity to understand how information, events, and one’s own actions will impact goals and objectives, both immediately and in the near future. Thus, Situation Awareness is especially important in application domains where the information flow can be quite high and poor decision making may lead to serious consequences. On the other hand, Context Awareness is considered a process to support user applications in adapting interfaces, tailoring the set of application-relevant data, increasing the precision of information retrieval, discovering services, making the user interaction implicit, or building smart environments. Despite being slightly different, Situation and Context Awareness involve common problems, such as: the lack of support for the acquisition and aggregation of dynamic environmental information from the field (i.e. sensors, cameras, etc.); the lack of formal approaches to knowledge representation (i.e. contexts, concepts, relations, situations, etc.) and processing (reasoning, classification, retrieval, discovery, etc.); and the lack of automated and distributed systems, with considerable computing power, to support reasoning on the huge quantity of knowledge extracted from sensor data. The thesis therefore investigates new approaches for distributed Context and Situation Awareness and proposes to apply them to related research objectives such as knowledge representation, semantic reasoning, pattern recognition and information retrieval. The research work starts from the study and analysis of the state of the art in terms of techniques, technologies, tools and systems to support Context/Situation Awareness. The main aim is to develop a new contribution in this field by integrating techniques from the fields of Semantic Web, Soft Computing and Computational Intelligence. From an architectural point of view, several frameworks are defined according to the multi-agent paradigm. Furthermore, preliminary experimental results have been obtained in application domains such as Airport Security, Traffic Management, Smart Grids and Healthcare. Finally, future challenges lie in the following directions: semantic modeling of fuzzy control, temporal issues, automatic ontology elicitation, extension to other application domains, and further experiments. [edited by author]
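    To make the hybrid approach concrete, here is a minimal Python sketch (not the author’s system) of how fuzzy membership functions can fuse raw field readings into a situation assessment; the sensor names, numeric scales and the single rule are hypothetical.

        def triangular(x, a, b, c):
            """Triangular fuzzy membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def assess_situation(crowd_density, noise_level):
            """Fuse two environmental readings into fuzzy severity degrees."""
            high_crowd = triangular(crowd_density, 0.4, 0.8, 1.2)  # hypothetical scale
            high_noise = triangular(noise_level, 60, 90, 120)      # dB, hypothetical
            # One Mamdani-style rule: IF crowd is high AND noise is high
            # THEN the situation is critical (AND realised as min, a common t-norm).
            critical = min(high_crowd, high_noise)
            return {"critical": critical, "normal": 1.0 - critical}

        print(assess_situation(crowd_density=0.75, noise_level=95))

    In the thesis’s setting, such degrees would feed semantic reasoning over an ontology of situations rather than being reported directly.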

    Dynamic and secure remote access to databases with integration of soft access policies

    The amount of data being created and shared has grown greatly in recent years, thanks in part to social media and the growth of smart devices. Managing the storage and processing of this data can give a competitive edge when used to create new services, to enhance targeted advertising, etc. To achieve this, the data must be accessed and processed. When applications that access this data are developed, tools such as Java Database Connectivity, ADO.NET and Hibernate are typically used. However, while these tools aim to bridge the gap between databases and the object-oriented programming paradigm, they focus only on the connectivity issue. This leads to increased development time, as developers need to master the access policies to write correct queries. Moreover, when these tools are used in database applications within uncontrolled environments, other issues emerge, such as theft of database credentials; application authentication; authorization and auditing of large groups of new users seeking access to data, potentially with vague requirements; network eavesdropping for data and credential disclosure; impersonation of database servers for data modification; application tampering for unrestricted database access and data disclosure; etc. Therefore, an architecture capable of addressing these issues is necessary to build a reliable set of access control solutions that expand and simplify the application scenarios of access control systems. The objective, then, is to secure remote access to databases, since database applications may be used in hard-to-control environments and physical access to the host machines/network may not always be protected. Furthermore, to handle large groups seeking access to data, the authorization process should dynamically grant appropriate permissions to users who have not been explicitly authorized. This includes scenarios where the definition of the access requirements is difficult due to their vagueness, usually requiring a security expert to authorize each user individually. This is achieved by integrating and auditing soft access policies, based on fuzzy set theory, in the access control decision-making process. A proof of concept of this architecture is provided alongside a functional and performance assessment.
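    As a hedged illustration of what a soft access policy based on fuzzy set theory can look like, the sketch below grants access when the aggregated membership degree of a user’s attributes clears a threshold; the attribute names, membership shapes and the 0.6 cut-off are assumptions, not the thesis’s actual policy model.

        def ramp(x, low, high):
            """Membership rising linearly from 0 at `low` to 1 at `high`."""
            if x <= low:
                return 0.0
            if x >= high:
                return 1.0
            return (x - low) / (high - low)

        def soft_access_decision(seniority_years, past_violations):
            trusted_tenure = ramp(seniority_years, 0, 5)       # more tenure, more trust
            clean_record = 1.0 - ramp(past_violations, 0, 3)   # violations erode trust
            # Aggregate with the product t-norm; other t-norms are equally valid.
            score = trusted_tenure * clean_record
            decision = "grant" if score >= 0.6 else "deny"
            return decision, score  # the score can also be logged for auditing

        print(soft_access_decision(seniority_years=4, past_violations=1))

    The point of the soft policy is that no security expert has to pre-authorize this user individually: the degree computed from the policy’s fuzzy sets drives the decision, and the same degree supports auditing.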

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This is a pattern similar to human reasoning, which draws conclusions in the absence of complete information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three AI approaches for implementing non-monotonic models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous and retractable pieces of evidence. Hence, reasoning applied to them is likely suitable for modelling by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability. In contrast, the variance of argument-based models was lower, showing superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
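    For readers unfamiliar with the third approach, the sketch below shows the core retraction mechanism of defeasible argumentation in Dung’s abstract style: a conclusion is withdrawn when attacked and reinstated when its attacker is itself defeated. The three-argument framework is invented for illustration and far simpler than the thesis’s models.

        def grounded_extension(arguments, attacks):
            """Least fixed point of Dung's characteristic function: accept an
            argument when every one of its attackers is counter-attacked by an
            already-accepted argument."""
            accepted = set()
            while True:
                defended = set()
                for a in arguments:
                    attackers = {b for (b, target) in attacks if target == a}
                    if all(any((c, b) in attacks for c in accepted) for b in attackers):
                        defended.add(a)
                if defended == accepted:
                    return accepted
                accepted = defended

        # C defeats B, which reinstates A: the conclusion A survives because its
        # attacker is itself defeated, i.e. the retraction pattern described above.
        print(grounded_extension({"A", "B", "C"}, {("B", "A"), ("C", "B")}))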

    Decision Support Systems

    Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate human cognitive decision-making functions based on artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, logical reasoning, etc.) in order to perform decision support functions. The applications of DSS cover many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting, and business management to internet search strategy. By combining knowledge bases with inference rules, DSS are able to provide suggestions to end users that improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses examining decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields. It will also be of value to established professionals as a text for self-study or for reference.
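    The "knowledge base plus inference rules" pattern mentioned above fits in a few lines: a forward-chaining loop fires rules against known facts until nothing new can be derived, and any derived suggestion is surfaced to the user. The clinical facts and rules below are invented placeholders, not taken from the book.

        facts = {"fever", "cough"}
        rules = [
            ({"fever", "cough"}, "suspect_flu"),
            ({"suspect_flu"}, "suggest: order influenza test"),
        ]

        changed = True
        while changed:               # forward-chain until no rule fires anew
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True

        print([f for f in facts if f.startswith("suggest:")])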

    Investigation of the Role of Service Level Agreements in Web Service Quality

    Context/Background: Use of Service Level Agreements (SLAs) is crucial for providing value-added services that successfully meet consumers’ requirements. SLAs also assure consumers of the expected Quality of Service (QoS). Aim: This study investigates how efficient structural representation and management of SLAs can help to ensure QoS in Web services during Web service composition. Method: Existing specifications and structures for SLAs for Web services do not fully formalize, or provide support for, the automatic and dynamic behavioural aspects needed for QoS calculation. This study addresses how to formalize and document the structure of SLAs for better service utilization and improved QoS results. The Service Oriented Architecture (SOA) is extended in this study with the addition of an SLAAgent, which uses structured SLA documents to help automate QoS calculation via Fuzzy Inference Systems, as well as service discovery, service selection, and SLA monitoring and management during service composition. Results: The proposed framework improves how SLAs are structured, managed and monitored during Web service composition, so that better Quality of Service is achieved effectively and efficiently. Conclusions: Automating SLAs to deal with different types of computational requirements is a challenge during Web service composition. This study shows the significance of SLAs for better QoS during the composition of services in SOA.
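    As a sketch of the kind of computation such an SLAAgent might automate (the metric names, membership shapes and weights are assumptions, not the thesis’s actual fuzzy inference system), monitored SLA metrics can be mapped to membership degrees and aggregated into a QoS score used to rank candidate services during composition:

        def ramp_down(x, low, high):
            """1 below `low`, falling linearly to 0 at `high`."""
            if x <= low:
                return 1.0
            if x >= high:
                return 0.0
            return (high - x) / (high - low)

        def qos_score(response_ms, availability_pct):
            fast = ramp_down(response_ms, 100, 1000)                    # fully "fast" under 100 ms
            available = min(max((availability_pct - 95) / 5, 0.0), 1.0)  # 0 at 95%, 1 at 100%
            # Weighted aggregation; a full Mamdani FIS would use rule firing and
            # defuzzification, but a weighted mean keeps the sketch short.
            return 0.4 * fast + 0.6 * available

        # Rank two candidate services by their fuzzy QoS score.
        candidates = {"serviceA": (250, 99.5), "serviceB": (800, 99.9)}
        print(max(candidates, key=lambda s: qos_score(*candidates[s])))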

    A framework to manage uncertainties in cloud manufacturing environment

    This research project aims to develop a framework to manage uncertainty in cloud manufacturing for small and medium enterprises (SMEs). The framework includes a cloud manufacturing taxonomy; guidance for dealing with uncertainty in cloud manufacturing, provided as a process to identify uncertainties; a detailed step-by-step approach to managing the uncertainties; a list of uncertainties; and response strategies to security and privacy uncertainties in cloud manufacturing. Additionally, an online assessment tool has been developed to implement the uncertainty management framework in a real-life context. To fulfil the aim and objectives of the research, a comprehensive literature review was first performed. Next, an uncertainty management technique was applied to identify, assess, and control uncertainties in cloud manufacturing. Two well-known approaches were used to evaluate the uncertainties: the Simple Multi-Attribute Rating Technique (SMART), to prioritise uncertainties, and a fuzzy rule-based system, to quantify security and privacy uncertainties. Finally, the framework was embedded into an online assessment tool and validated through expert opinion and case studies. Results from this research are useful for both academia and industry in understanding aspects of cloud manufacturing. The main contribution is a framework that offers new insights for decision makers on how to deal with uncertainty at the adoption and implementation stages of cloud manufacturing. The research also introduced a novel cloud manufacturing taxonomy; a list of uncertainty factors; an assessment process to prioritise uncertainties and quantify security- and privacy-related uncertainties; and a knowledge base providing recommendations and solutions.
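    For readers unfamiliar with SMART, the sketch below shows the technique’s core arithmetic as it would apply to prioritising uncertainties: normalised attribute weights multiplied by ratings and summed per uncertainty. The two uncertainties, the attributes and all numbers are invented for illustration.

        weights = {"likelihood": 40, "impact": 60}            # raw importance weights
        total = sum(weights.values())
        weights = {k: v / total for k, v in weights.items()}  # normalise to sum to 1

        uncertainties = {
            "data privacy breach":    {"likelihood": 6, "impact": 9},
            "service unavailability": {"likelihood": 4, "impact": 7},
        }

        scores = {name: sum(weights[a] * rating for a, rating in attrs.items())
                  for name, attrs in uncertainties.items()}

        for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
            print(f"{score:.2f}  {name}")   # highest score = highest priority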

    Memetic algorithms for ontology alignment

    2011 - 2012. Semantic interoperability is the capability of two or more systems to meaningfully and accurately interpret exchanged data so as to produce useful results. It is an essential feature of all distributed and open knowledge-based systems designed for both e-government and private business, since it enables machine interpretation, inferencing and computable logic. Unfortunately, achieving semantic interoperability is very difficult, because it requires that the meaning of any data be specified in appropriate detail in order to resolve any potential ambiguity. Currently, the best technology recognized for achieving such a level of precision in the specification of meaning is ontologies. According to the most frequently referenced definition [1], an ontology is an explicit specification of a conceptualization, i.e., the formal specification of the objects, concepts, and other entities that are presumed to exist in some area of interest and the relationships that hold among them [2]. However, different tasks or different points of view lead ontology designers to produce different conceptualizations of the same domain of interest. This means that the subjectivity of ontology modeling results in the creation of heterogeneous ontologies characterized by terminological and conceptual discrepancies. Examples of these discrepancies are the use of different words to name the same concept, the use of the same word to name different concepts, the creation of hierarchies for a specific domain region with different levels of detail, and so on. The resulting so-called semantic heterogeneity problem represents, in turn, an obstacle to achieving semantic interoperability... [edited by author]
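    Although the abstract is truncated before the method itself, the title names the technique; below is a hedged, toy-scale sketch of a memetic algorithm for alignment, i.e. evolutionary search over candidate concept correspondences combined with local hill-climbing refinement of each offspring. The two mini-ontologies, the purely lexical fitness and all parameters are illustrative assumptions.

        import random
        from difflib import SequenceMatcher

        SRC = ["Author", "Paper", "Conference"]
        TGT = ["Writer", "Article", "Meeting", "Journal"]

        def sim(a, b):
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def fitness(alignment):            # alignment: one TGT index per SRC concept
            return sum(sim(s, TGT[t]) for s, t in zip(SRC, alignment))

        def local_search(ind):
            """Memetic step: greedily improve one correspondence at a time."""
            for i in range(len(ind)):
                best = max(range(len(TGT)), key=lambda t: sim(SRC[i], TGT[t]))
                cand = ind[:i] + [best] + ind[i + 1:]
                if fitness(cand) > fitness(ind):
                    ind = cand
            return ind

        random.seed(0)
        pop = [[random.randrange(len(TGT)) for _ in SRC] for _ in range(10)]
        for _ in range(20):                # evolve: select, mutate, refine
            pop.sort(key=fitness, reverse=True)
            parent = pop[0]
            child = [g if random.random() > 0.3 else random.randrange(len(TGT))
                     for g in parent]
            pop[-1] = local_search(child)  # Lamarckian refinement of the offspring

        best = max(pop, key=fitness)
        print({s: TGT[t] for s, t in zip(SRC, best)})

    A realistic aligner would combine structural and semantic similarity measures and enforce one-to-one mappings; the sketch only shows the genetic-plus-local-search skeleton that makes the algorithm "memetic".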

    Knowledge base modelling and exploitation in the context of the Web of Things

    The Web of Things (WOT) is gradually becoming a reality as a result of the development of network and hardware technologies. Nowadays, there is an increasing number of objects that can be used in predesigned applications. The world is thus more tightly connected: various objects can share their information and be triggered through a Web-like structure. However, even if heterogeneous objects have the ability to connect to the Web, they cannot be used across different applications unless there is a common model through which their heterogeneity can be described and understood. In this thesis, we aim to provide a common model to describe heterogeneous objects and to use it to answer users’ requests. Users can have various requests, whether to find a particular object or to fulfil certain tasks. We thus highlight two research directions: the first is to model heterogeneous objects and the related concepts in the WOT; the second is to use this model to answer users’ requests efficiently. We first study the existing technologies, applications and domains where the WOT can be applied. We compare the existing description models in this domain and highlight their shortcomings for WOT applications. We then propose a new semantic model for describing objects in the WOT. This model is built on an ontology comprising three main components: the Core model, the Space model and the Agent model. It allows the description of both static information and the dynamic changes associated with the WOT...
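    As a loose illustration of what a common description model buys (the field names below merely echo the Core/Space/Agent split and are my own simplification, not the thesis’s ontology), a uniform record lets heterogeneous objects be matched against a user request in one query:

        from dataclasses import dataclass

        @dataclass
        class WotObject:
            name: str            # Core: what the object is
            capabilities: set    # Core: services it can perform
            location: str        # Space: where it is
            available: bool      # Agent/dynamic state: can it act right now?

        things = [
            WotObject("lamp-42", {"light"}, "living-room", True),
            WotObject("cam-07", {"video", "motion"}, "hallway", False),
        ]

        def find(capability, location):
            """Resolve a user request against the shared model."""
            return [t.name for t in things
                    if capability in t.capabilities
                    and t.location == location and t.available]

        print(find("light", "living-room"))   # -> ['lamp-42']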