36 research outputs found

    Quantum Algorithm of Imperfect KB Self-organization Pt I: Smart Control-Information-Thermodynamic Bounds

    A quantum self-organization algorithm model for designing wise knowledge bases for intelligent fuzzy controllers with a required robustness level is considered. The background of the model is a new model of quantum inference based on a quantum genetic algorithm. The quantum genetic algorithm is applied online to search for the type of quantum correlation between unknown solutions lying in a quantum superposition of imperfect knowledge bases of intelligent controllers designed with soft computing. Disturbance conditions for the analytical information-thermodynamic trade-off interrelations between the main control quality measures (as new design laws) are discussed in Part I. Smart control design with guaranteed achievement of these trade-off interrelations is the main goal of the quantum self-organization algorithm for imperfect KBs. A sophisticated synergetic quantum information effect is introduced in Part I (an autonomous robot in unpredicted control situations) and Part II (swarm robots exchanging imperfect KBs between "master" and "slaves"): a new robust smart controller is designed online from the responses of any imperfect KB to unpredicted control situations, applying quantum hidden information extracted from quantum correlations. Within the toolkit of classical intelligent control, a similar synergetic information effect cannot be achieved. Benchmarks of intelligent cognitive robotic control applications are considered.
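
    The abstract refers to a quantum genetic algorithm only at a high level; as a rough illustration of how such a search can be organized, the following is a minimal sketch of a generic quantum-inspired genetic algorithm (qubit chromosomes whose amplitudes are rotated toward the best measured solution). The bit-string encoding, rotation step, and toy fitness function are illustrative assumptions, not the authors' design.

```python
import numpy as np

def quantum_inspired_ga(fitness, n_bits=16, pop=20, gens=100, delta=0.05 * np.pi):
    """Minimal quantum-inspired GA: each gene is a qubit whose probability
    amplitudes are rotated toward the best observed solution."""
    # theta[i, j] parameterizes the j-th qubit of the i-th chromosome:
    # P(bit = 1) = sin(theta)^2, initialized to an equal superposition.
    theta = np.full((pop, n_bits), np.pi / 4)
    best_x, best_f = None, -np.inf
    for _ in range(gens):
        # "Measure" every qubit to collapse the superposition into bit strings.
        probs = np.sin(theta) ** 2
        X = (np.random.rand(pop, n_bits) < probs).astype(int)
        F = np.array([fitness(x) for x in X])
        if F.max() > best_f:
            best_f, best_x = F.max(), X[F.argmax()].copy()
        # Rotate each lagging chromosome's qubits toward the best solution's bits.
        direction = np.where(best_x == 1, 1.0, -1.0)
        worse = (F < best_f)[:, None]
        theta = np.clip(theta + worse * direction * delta, 0.0, np.pi / 2)
    return best_x, best_f

# Toy usage: maximize the number of ones in the bit string.
x, f = quantum_inspired_ga(lambda b: b.sum())
```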

    Breakthroughs and emerging insights from ongoing design science projects: Research-in-progress papers and poster presentations from the 11th international conference on design science research in information systems and technology (DESRIST) 2016. St. John, Newfoundland, Canada, May 23-25

    This volume contains selected research-in-progress papers and poster presentations from DESRIST 2016, the 11th International Conference on Design Science Research in Information Systems and Technology, held during 24-25 May 2016 at St. John's, Newfoundland, Canada. DESRIST provides a platform for researchers and practitioners to present and discuss Design Science research. The 11th DESRIST built on the foundation of ten prior highly successful international conferences held in Claremont, Pasadena, Atlanta, Philadelphia, St. Gallen, Milwaukee, Las Vegas, Helsinki, Miami, and Dublin. This year's conference placed a special emphasis on using Design Science to engage with the growing challenges that face society, including (but not limited to) demands on health care systems, climate change, and security. With these challenges in mind, individuals from academia and industry came together to discuss important ongoing work and to share emerging knowledge and ideas. Design Science projects often involve multiple sub-problems, meaning there may be a delay before the final set of findings can be laid out. Hence, this volume, "Breakthroughs and Observations from Ongoing Design Science Projects", presents preliminary findings from studies that are still underway. Completed research from DESRIST 2016 is presented in a separate volume entitled "Tackling Society's Grand Challenges with Design Science", published by Springer International Publishing, Switzerland. The final set of accepted papers in this volume reflects those presented at DESRIST 2016, including 11 research-in-progress papers and 4 abstracts for poster presentations. Each research-in-progress paper and each poster abstract was reviewed by a minimum of two referees. We would like to thank the authors who submitted their research-in-progress papers and poster presentations to DESRIST 2016, the referees who took the time to construct detailed and constructive reviews, and the Program Committee who made the event possible. Furthermore, we thank the sponsoring organisations, in particular Maynooth University, Claremont Graduate University, and Memorial University of Newfoundland, for their financial support. We believe the research described in this volume addresses some of the most topical and interesting design challenges facing the field of information systems. We hope that readers find the insights provided by the authors as valuable and thought-provoking as we have, and that the discussion of such early findings can help to maximise their impact.

    HUMAN PANCREAS AND LIVER MATHEMATICAL MODELING FOR NORMAL AND DISEASED STATE STUDIES

    Recent advances in biology, biochemistry, and medicine allow us to study, both qualitatively and quantitatively, the human body's response to a variety of perturbations. For example, we know quantitatively what will happen to insulin levels when a person eats a meal; in many cases, we also know qualitatively which genes and signaling molecules will be stimulated or suppressed. These qualitative and quantitative data can be curated for statistical meta-studies and/or for building mathematical models. Such mathematical models are based on principles of transport phenomena, thermodynamics, kinetics, and pharmacokinetics/pharmacodynamics (PKPD). This research is concerned with building mathematical models at the organ level (e.g., the liver and pancreas) and combining such organ models into a whole-body model so that we can better understand metabolism (for both normal and diseased people) under different conditions such as homeostasis, the postprandial state, exercise, and so on. In particular, we have made a detailed, yet not overly complicated, whole-body model of the body's response to glucose in normal people and in those with Type II Diabetes (T2D). Our results are presented in two parts; the other part is written by Hyun Park. In my M.S. thesis, pancreatic α/β-cell and liver organ models are developed using mathematical tools such as ordinary differential equations (ODEs), flux balance analysis (FBA), optimization, and sensitivity analysis. These models show the changes of metabolite concentrations and fluxes over time, parameter sensitivities, and so on. Ultimately, the goal is to understand and model the body as a complex system in terms of its components and interactions. With this we hope to capture the essential qualitative and quantitative features of T2D, with the hope of developing new strategies for treatment of this disease. Primary Reader: Marc D. Donohue. Secondary Reader: Michael J. Betenbaugh.
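
    As a rough illustration of the ODE-based organ modeling mentioned above, the following is a minimal sketch of a Bergman-style glucose-insulin minimal model solved with SciPy; the equations, parameter values, and initial conditions are textbook-style placeholders, not the pancreas and liver models developed in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative Bergman-style minimal model of plasma glucose G, remote insulin
# action X, and plasma insulin I after a glucose perturbation. All parameter
# values are placeholders, not values identified in the thesis.
p1, p2, p3, n = 0.03, 0.02, 1.3e-5, 0.14     # rate constants (1/min)
Gb, Ib = 90.0, 10.0                          # basal glucose (mg/dL) and insulin (uU/mL)

def minimal_model(t, y):
    G, X, I = y
    dG = -p1 * (G - Gb) - X * G              # hepatic balance + insulin-dependent uptake
    dX = -p2 * X + p3 * (I - Ib)             # remote insulin action compartment
    dI = -n * (I - Ib)                       # first-order insulin clearance
    return [dG, dX, dI]

# Simulate 3 hours after an intravenous glucose bolus raising G to 280 mg/dL.
sol = solve_ivp(minimal_model, (0, 180), [280.0, 0.0, 100.0], dense_output=True)
t = np.linspace(0, 180, 7)
print(np.round(sol.sol(t)[0], 1))            # glucose trajectory at 30-min intervals
```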

    A knowledge base system for overall supply chain performance evaluation : a multi-criteria decision-making approach

    With the advancement of technology that allows organizations to collect, store, organize, and use data in information systems for efficient decision making (DM), a new horizon of supply chain performance evaluation has opened. Today, DM is shifting from "information-driven" to "data-driven" for more precision in overall supply chain performance evaluation. Based on real-time information, fast decisions are important in order to deliver products more rapidly. Performance evaluation is critical to the success of the supply chain (SC). In managing an SC, many decisions must be taken at each level of multi-criteria decision making (MCDM), short-term or long-term, because many decisions and decision criteria (attributes) have an impact on overall supply chain performance. Therefore, it is essential for decision makers to know the effect of decisions and decision criteria on overall SC performance. However, existing supply chain performance models (SCPM) are not adequate for establishing a link between decisions and decision criteria and overall SC performance. Most decisions and decision attributes in an SC are conflicting in nature, and the performance measures of different criteria (attributes) at different levels of decisions (long-term and short-term) differ, which makes SC performance evaluation more intricate. SC performance depends heavily on how well the SC is designed. In other words, it is quite difficult to improve overall SC performance if decision criteria (attributes) are not embedded or considered during the SC design phase. The connection between SC design and supply chain management (SCM) is essential for an effective SC. Many successful companies, such as Wal-Mart and Dell, owe their success to effective SC design and management of SC activities. The purpose of this thesis is twofold. The first is to develop an integrated knowledge base system (KBS) based on Fuzzy-AHP that establishes a relationship between decisions and decision criteria (attributes) and evaluates overall SC performance. The proposed KBS assists organizations and decision makers in evaluating their overall SC performance and helps in identifying under-performing SC functions and their associated criteria. Finally, the proposed system has been implemented in a case company, and we developed an SC performance monitoring dashboard for the case company's top managers and operational managers. The second is to develop decision models that will help in calibrating decisions and improving overall SC performance.
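
    As an illustration of the Fuzzy-AHP machinery such a KBS builds on, the following is a minimal sketch of Buckley's geometric-mean method with triangular fuzzy numbers; the criteria and the pairwise comparison matrix are invented for the example and are not taken from the thesis or the case company.

```python
import numpy as np

# Minimal fuzzy-AHP sketch (Buckley's geometric-mean method with triangular
# fuzzy numbers). The comparison matrix below is a made-up example: each entry
# (l, m, u) is a fuzzy judgement of criterion i versus criterion j.
TFN = lambda l, m, u: np.array([l, m, u], dtype=float)

criteria = ["cost", "delivery", "quality"]
M = [
    [TFN(1, 1, 1),       TFN(2, 3, 4),   TFN(1, 2, 3)],
    [TFN(1/4, 1/3, 1/2), TFN(1, 1, 1),   TFN(1/3, 1/2, 1)],
    [TFN(1/3, 1/2, 1),   TFN(1, 2, 3),   TFN(1, 1, 1)],
]

# Fuzzy geometric mean of each row.
geo = [np.prod([M[i][j] for j in range(3)], axis=0) ** (1 / 3) for i in range(3)]
total = np.sum(geo, axis=0)
# Fuzzy weight of row i = geo_i * (1 / total), with the reciprocal reversed (u, m, l).
fuzzy_w = [g * (1.0 / total[::-1]) for g in geo]
# Defuzzify by the centroid of each triangular weight, then normalize.
crisp = np.array([w.mean() for w in fuzzy_w])
weights = crisp / crisp.sum()
print(dict(zip(criteria, np.round(weights, 3))))
```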

    Knowledge Extraction for Hybrid Question Answering

    Since Tim Berners-Lee's proposal of hypertext to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and is still growing. With the later proposed Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing, and understanding of data. Both the Document Web and the Web of Data (which is the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to a given piece of information, independent of the representation of the data. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data here stands for any type of textual information such as news, blogs, or tweets. While extracting structured data from unstructured data allows the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over arbitrary knowledge bases. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required. Humans have adapted their search behavior to current Web data through access paradigms such as keyword search so as to retrieve high-quality results; hence, most Web users only expect Web documents in return. However, humans think and most commonly express their information needs in natural language rather than in keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing data to be queried via natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time while reformulating the search intention into a machine-readable form. Furthermore, QA systems enable answering natural-language queries with concise results instead of links to verbose Web documents. Additionally, they allow and encourage the access to, and the combination of, knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work: First, addressing the Semantic Gap between the unstructured Document Web and the Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis.
This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities. Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites, which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare, since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. On the other hand, the issue of comparability of results is not to be regarded as intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. That data preparation is such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways: First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability, overcoming the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users, and researchers with easy-to-use interfaces that allow for the agile, fine-grained, and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as exemplified by current mobile applications, requires systems to deeply understand the underlying user information need. In consequence, a natural-language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously performing a search over full texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA, combining structured RDF and unstructured full-text data sources.
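
    The structured-data extraction described above, i.e., linking recognized entity mentions to KB URIs and emitting RDF, can be illustrated generically. The following sketch uses rdflib with a hard-coded mention list and a made-up provenance property under example.org; it is not the CETUS, AGDISTIS, or REX implementation.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Generic illustration of the Document-Web -> Web-of-Data step: recognized
# entity mentions are linked to KB URIs and serialized as RDF. The mentions
# and the EX.surfaceForm property are invented for the example.
EX = Namespace("http://example.org/resource/")
DBO = Namespace("http://dbpedia.org/ontology/")

mentions = [
    # (surface form, disambiguated URI, entity type)
    ("Tim Berners-Lee", URIRef("http://dbpedia.org/resource/Tim_Berners-Lee"), DBO.Person),
    ("CERN",            URIRef("http://dbpedia.org/resource/CERN"),            DBO.Organisation),
]

g = Graph()
g.bind("dbo", DBO)
sentence = URIRef(EX["doc1#sent1"])
for surface, uri, etype in mentions:
    g.add((uri, RDF.type, etype))                      # type assertion for the KB
    g.add((sentence, DBO.wikiPageWikiLink, uri))       # provenance: sentence mentions entity
    g.add((uri, EX.surfaceForm, Literal(surface)))     # keep the textual anchor
print(g.serialize(format="turtle"))
```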

    Proceedings of the 13th Nordic Workshop on Secure IT Systems, NordSec 2008, Kongens Lyngby, October 9-10, 2008


    HUMAN BRAIN, MUSCLE AND ADIPOSE TISSUES MATHEMATICAL MODELING FOR NORMAL AND DISEASED STATE STUDIES

    Recent advances in biology, biochemistry, and medicine allow us to study, both qualitatively and quantitatively, the human body's response to a variety of perturbations. For example, we know quantitatively what will happen to insulin levels when a person eats a meal; in many cases, we also know qualitatively which genes and signaling molecules will be stimulated or suppressed. These qualitative and quantitative data can be curated for statistical meta-studies and/or for building mathematical models. Such mathematical models are based on principles of transport phenomena, thermodynamics, kinetics, and pharmacokinetics/pharmacodynamics (PKPD). This research is concerned with building mathematical models at the organ level (e.g., the liver and pancreas) and combining such organ models into a whole-body model so that we can better understand metabolism (for both normal and diseased people) under different conditions such as homeostasis, the postprandial state, exercise, and so on. In particular, we have made a detailed, yet not overly complicated, whole-body model of the body's response to glucose in normal people and in those with Type II Diabetes (T2D). Our results are presented in two parts; the other part is written by Yifei Li. In my M.S. thesis, myocyte, adipose, and brain organ models are developed using mathematical tools such as ordinary differential equations (ODEs), flux balance analysis (FBA), optimization, and sensitivity analysis. These models show the changes of metabolite concentrations and fluxes over time, parameter sensitivities, and so on. Secondly, the current research on T2D is reviewed, and we present our hypotheses on the causes of T2D. Ultimately, the goal is to understand and model the body as a complex system in terms of its components and interactions. With this we hope to capture the essential qualitative and quantitative features of T2D, with the hope of developing new strategies for treatment of this disease. Primary Reader: Marc D. Donohue. Secondary Reader: Michael J. Betenbaugh.
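
    As a rough illustration of the flux balance analysis (FBA) mentioned above, the following sketch solves a small linear program: maximize a "biomass" flux at steady state under flux bounds. The two-metabolite toy network and its bounds are invented for the example and are not the myocyte, adipose, or brain models developed in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximize biomass flux subject to steady state S @ v = 0 and bounds.
# Reactions (columns):   uptake->A   A->B   B->biomass   B->export
S = np.array([[ 1, -1,  0,  0],    # metabolite A balance
              [ 0,  1, -1, -1]])   # metabolite B balance

c = [0, 0, -1, 0]                  # linprog minimizes, so negate the biomass flux
bounds = [(0, 10), (0, None), (0, None), (0, None)]   # uptake capped at 10 units
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", np.round(res.x, 2), "biomass:", -res.fun)
```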

    Organic Conductors

    This collection of articles focuses on different aspects of the study of organic conductors. Recent progress in both theoretical and experimental studies is covered in this Special Issue. Papers on a wide variety of studies are categorized into representative topics in chemistry and physics. Besides classical studies of crystalline organic conductors, applied studies on semiconducting thin films and a number of new topics shared with inorganic materials are also discussed.

    On the Design, Implementation and Application of Novel Multi-disciplinary Techniques for explaining Artificial Intelligence Models

    Artificial Intelligence is a non-stopping field of research that has experienced incredible growth over the last decades. Some of the reasons for this apparently exponential growth are improvements in computational power, sensing capabilities, and data storage, which result in a huge increase in data availability. However, this growth has mostly been led by a performance-based mindset that has pushed models towards a black-box nature. The performance prowess of these methods, along with the rising demand for their implementation, has triggered the birth of a new research field: Explainable Artificial Intelligence (XAI). As with any new field, XAI falls short in cohesiveness. Added to the consequences of dealing with concepts that are not from the natural sciences (explanations), the tumultuous scene is palpable. This thesis contributes to the field from two different perspectives: a theoretical one and a practical one. The former is based on a profound literature review that resulted in two main contributions: 1) the proposal of a new definition of Explainable Artificial Intelligence and 2) the creation of a new taxonomy for the field. The latter is composed of two XAI frameworks that accommodate some of the raging gaps found in the field, namely: 1) an XAI framework for Echo State Networks and 2) an XAI framework for the generation of counterfactuals. The first accounts for the gap concerning randomized neural networks, since they have never been considered within the field of XAI. Unfortunately, choosing the right parameters to initialize these reservoirs depends more on luck and the past experience of the scientist than on sound reasoning. The current approach for assessing whether a reservoir is suited for a particular task is to observe whether it yields accurate results, either by handcrafting the values of the reservoir parameters or by automating their configuration via an external optimizer. All in all, this poses tough questions to address when developing an ESN for a certain application, since knowing whether the created structure is optimal for the problem at hand is not possible without actually training it. Moreover, one of the main concerns holding back their application is the mistrust generated by their "black-box" nature. The second framework presents a new paradigm for counterfactual generation. Among the alternatives for reaching a universal understanding of model explanations, counterfactual examples are arguably the one that best conforms to human understanding principles when faced with unknown phenomena. Indeed, discerning what would happen should the initial conditions differ in a plausible fashion is a mechanism often adopted by humans when attempting to understand any unknown. The search for counterfactuals proposed in this thesis is governed by three different objectives. As opposed to the classical approach, in which counterfactuals are just generated following a minimum-distance approach of some type, this framework allows for an in-depth analysis of a target model by means of counterfactuals responding to: adversarial power, plausibility, and change intensity.
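
    As a rough illustration of counterfactual search guided by the three objectives named above (adversarial power, plausibility, change intensity), the following sketch scores randomly perturbed candidates against a scikit-learn classifier. The weighted scalarized loss, the random-search strategy, and the nearest-training-point plausibility proxy are assumptions made for the example, not the multi-objective framework developed in the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy setup: a linear classifier on synthetic data stands in for the black-box model.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, target, iters=2000, sigma=0.5, w=(1.0, 0.3, 0.3), seed=0):
    """Random search for a counterfactual of x scored on three illustrative criteria."""
    rng = np.random.default_rng(seed)
    best, best_loss = None, np.inf
    for _ in range(iters):
        x_cf = x + rng.normal(0.0, sigma, size=x.shape)            # candidate perturbation
        adversarial = 1.0 - clf.predict_proba([x_cf])[0, target]   # push toward target class
        intensity = np.linalg.norm(x_cf - x)                       # change intensity: stay near x
        plausibility = np.linalg.norm(X - x_cf, axis=1).min()      # stay near the data manifold
        loss = w[0] * adversarial + w[1] * intensity + w[2] * plausibility
        if loss < best_loss:
            best, best_loss = x_cf, loss
    return best

x0 = X[0]
x_cf = counterfactual(x0, target=1 - clf.predict([x0])[0])
print("original class:", clf.predict([x0])[0], "-> counterfactual class:", clf.predict([x_cf])[0])
```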