    A Taxonomy of Blockchain Technologies: Principles of Identification and Classification

    A comparative study across the most widely known blockchain technologies is conducted with a bottom-up approach. Blockchains are deconstructed into their building blocks; each building block is hierarchically classified into main components and subcomponents, and varieties of the subcomponents are then identified and compared. A taxonomy tree summarises the study and provides a navigation tool across the different blockchain architectural configurations.
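    To make the classification structure concrete, here is a minimal sketch of a taxonomy tree of the kind the study describes: building blocks broken into components and sub-component variants, with each root-to-leaf path reading as one architectural choice. The node names and the consensus fragment below are illustrative assumptions, not the paper's actual taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """A node in the taxonomy tree: a building block, sub-component, or variant."""
    name: str
    children: list["TaxonomyNode"] = field(default_factory=list)

    def add(self, child: "TaxonomyNode") -> "TaxonomyNode":
        self.children.append(child)
        return child

    def paths(self, prefix=()):
        """Yield every root-to-leaf path, i.e. one architectural configuration choice."""
        here = prefix + (self.name,)
        if not self.children:
            yield here
        for child in self.children:
            yield from child.paths(here)

# Illustrative fragment: consensus as one building block (names are assumptions).
root = TaxonomyNode("Blockchain")
consensus = root.add(TaxonomyNode("Consensus"))
proof = consensus.add(TaxonomyNode("Proof-based"))
proof.add(TaxonomyNode("Proof of Work"))
proof.add(TaxonomyNode("Proof of Stake"))
consensus.add(TaxonomyNode("Vote-based")).add(TaxonomyNode("PBFT"))

for path in root.paths():
    print(" > ".join(path))
```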

    A Model of Emotion as Patterned Metacontrol

    Adaptive agents use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control subsystem can be used to change four different elements of the controller: the parameters associated with the control model, the control model itself, the functional organization of the agent, and the functional realization of the agent. There are many change alternatives, and hence the complexity of the agent's space of potential configurations is daunting. The only viable alternative for space- and time-constrained agents (in practical, economical, evolutionary terms) is to reduce the dimensionality of this configuration space. Emotions play a critical role in this reduction, which is achieved by functionalization, interface minimization, and patterning, i.e. by selection among a predefined set of organizational configurations. This analysis lets us state, in strict functional terms, how autonomy emerges from the integration of the cognitive, emotional, and autonomic systems: autonomy is achieved by the closure of functional dependency. Emotion-based morphofunctional systems are able to exhibit complex adaptation patterns at a reduced cognitive cost. In this article we present a general model of how emotion supports functional adaptation and how biological emotional systems operate following this theoretical model. We also show how this model applies to the construction of a wide spectrum of artificial systems.
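    A toy sketch of the "patterning" idea above, under heavy simplifying assumptions: instead of searching its full configuration space, the agent keeps a small predefined repertoire of organizational configurations, and an emotion-like appraisal of the feedback signal selects among them. The pattern names, fields, and thresholds below are invented for illustration; they are not the authors' model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlPattern:
    """One predefined organizational configuration of the controller."""
    name: str
    gain: float          # parameter of the control model
    horizon: int         # how far ahead the control model plans
    sensors_active: int  # functional organization: how much sensing to pay for

# The predefined repertoire: the agent never searches outside these.
PATTERNS = {
    "calm":   ControlPattern("calm",   gain=0.5, horizon=20, sensors_active=2),
    "alert":  ControlPattern("alert",  gain=1.0, horizon=10, sensors_active=4),
    "escape": ControlPattern("escape", gain=2.0, horizon=2,  sensors_active=6),
}

def appraise(error: float, error_rate: float) -> str:
    """Map raw sensorimotor feedback to an emotion-like label (illustrative thresholds)."""
    if abs(error) > 5.0 or error_rate > 1.0:
        return "escape"
    if abs(error) > 1.0:
        return "alert"
    return "calm"

# The metacontrol loop reduces a huge configuration space to a three-way choice.
for error, rate in [(0.2, 0.0), (2.5, 0.3), (8.0, 2.0)]:
    pattern = PATTERNS[appraise(error, rate)]
    print(f"error={error:+.1f} -> pattern={pattern.name}, gain={pattern.gain}")
```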

    Diagnosis of Errors in Stalled Inter-Organizational Workflow Processes

    Fault-tolerant inter-organizational workflow processes help participant organizations complete their business activities and operations efficiently, without extended delays. The stalling of inter-organizational workflow processes is a common hurdle that causes organizations immense losses and operational difficulties. The complexity of software requirements, the inability of workflow systems to properly handle exceptions, and inadequate process modeling are the leading causes of errors in workflow processes. This dissertation is essentially about diagnosing errors in stalled inter-organizational workflow (IOWF) processes. Its goals and objectives were achieved by designing a fault-tolerant software architecture for the workflow system components/modules relevant to exception handling and troubleshooting (i.e., the workflow process designer, workflow engine, workflow monitoring, workflow administrative panel, service integration, and workflow client). The complexity and improper implementation of software requirements were addressed by building a framework of guiding principles and best practices for modeling and designing inter-organizational workflow processes. Theoretical and empirical/experimental research methodologies were used to find the root causes of errors in stalled workflow processes. Error detection and diagnosis are critical steps that can then be used to design a strategy for resolving the stalled processes. Diagnosis of errors in stalled workflow processes was in scope; resolution of the stalled processes was out of scope for this dissertation. The software architecture facilitated automatic and semi-automatic diagnosis of errors in stalled workflow processes from both real-time and historical perspectives. The empirical/experimental study was carried out by creating state-of-the-art inter-organizational workflow processes using an API-based workflow system, a low-code workflow automation platform, a supported high-level programming language, and a storage system. The empirical/experimental measurements and dissertation goals were explained by collecting, analyzing, and interpreting the workflow data. The methodology was evaluated on its ability to successfully diagnose errors (i.e., identify the root cause) in processes stalled by web service failures in inter-organizational workflow processes. Fourteen datasets were created to analyze, verify, and validate the hypotheses and the software architecture: seven for end-to-end IOWF process scenarios, including IOWF web service consumption, and seven for the IOWF web service alone. The results of the data analysis strongly supported and validated the software architecture and hypotheses. The guiding principles and best practices of workflow process modeling and design identify opportunities to prevent processes from stalling. The outcome of the dissertation, i.e., the diagnosis of errors in stalled inter-organizational processes, can be used to resolve those stalled processes.
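    As a rough illustration of what semi-automatic diagnosis over monitoring data could look like, the sketch below scans the step records of one workflow instance and labels a probable root cause, treating HTTP 5xx responses from partner web services as the failure class the dissertation focuses on. The record layout, stall threshold, and diagnostic rules are assumptions for the sketch, not the dissertation's architecture.

```python
import datetime as dt
from dataclasses import dataclass

STALL_THRESHOLD = dt.timedelta(minutes=30)  # assumed cutoff for "stalled"

@dataclass
class StepRecord:
    """One step of a workflow instance, as a monitoring module might log it."""
    step: str
    status: str              # e.g. "completed", "running", "failed"
    last_update: dt.datetime
    http_status: int | None  # set when the step called an external web service

def diagnose(steps: list[StepRecord], now: dt.datetime) -> str:
    """Return a root-cause label for a stalled instance (illustrative rules)."""
    for s in steps:
        if s.status == "failed" and s.http_status and s.http_status >= 500:
            return f"web service failure at step '{s.step}' (HTTP {s.http_status})"
        if s.status == "running" and now - s.last_update > STALL_THRESHOLD:
            return f"stalled at step '{s.step}': no progress for {now - s.last_update}"
    return "no stall detected"

now = dt.datetime(2023, 5, 1, 12, 0)
trace = [
    StepRecord("validate-order", "completed", now - dt.timedelta(hours=2), None),
    StepRecord("partner-credit-check", "failed", now - dt.timedelta(hours=1), 503),
]
print(diagnose(trace, now))
```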

    Developing Methods of Obtaining Quality Failure Information from Complex Systems

    The complexity of most engineering systems is constantly growing due to ever-increasing technological advancement. This results in a corresponding need for methods that adequately account for the reliability of such systems based on failure information from the components that make them up. This dissertation presents an approach to validating qualitative function-failure results from model abstraction details. The impact of the level of detail available to a system designer during the conceptual stages of design is considered for failure-space exploration in a complex system. Specifically, the study develops an efficient approach to the detailed function and behavior modeling required for complex system analyses. In addition, comprehensive research into and documentation of existing function failure analysis methodologies is synthesized into identified structural groupings. Using simulations, known governing equations are evaluated for component and system models to study responses to faults, accounting for detailed failure scenarios, component behaviors, fault propagation paths, and overall system performance. The components were simulated at nominal states and at varying degrees of fault representing actual modes of operation. Information on product design and provisions on the expected working conditions of components were used in the simulations to address areas normally overlooked during installation. The results of the system model simulations were investigated using clustering analysis to develop an efficient grouping method and a measure of confidence for the obtained results. The intellectual merit of this work is the use of a simulation-based approach to studying how generated failure scenarios reveal component fault interactions, leading to a better understanding of fault propagation within design models. The information gained from using varying-fidelity models for system analysis helps identify models that are sufficient at the conceptual design stages to highlight potential faults. This will reduce resources such as cost, manpower, and time spent during system design. A broader impact of the project is to help design engineers identify critical components, quantify the risks associated with using particular components in their prototypes early in the design process, and improve fault-tolerant system designs. This research eventually looks to establish a baseline for validating and comparing theories of complex systems analysis.
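    A hedged sketch of the simulate-then-cluster idea described above: component responses are generated from a toy governing equation at nominal and faulted parameter values, and the response curves are clustered so that similar fault behaviors group together. The first-order model, the fault modes, and the cluster count are illustrative assumptions, not the dissertation's actual models.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def simulate(gain: float, t: np.ndarray) -> np.ndarray:
    """Toy first-order step response; the fault degrades the component gain."""
    return gain * (1.0 - np.exp(-t)) + rng.normal(0.0, 0.02, t.size)

t = np.linspace(0.0, 5.0, 50)
# Nominal components plus two fault modes (degraded and near-total loss of gain).
responses = np.array([simulate(g, t) for g in [1.0, 1.0, 1.0, 0.6, 0.6, 0.1, 0.1]])

# Cluster the response curves; similar fault behaviors should group together.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(responses)
print("cluster label per simulated component:", labels)
```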

    Modeling Big Medical Survival Data Using Decision Tree Analysis with Apache Spark

    In many medical studies, the outcome of interest is not only whether an event occurred, but when it occurred; an example of this is Alzheimer's disease (AD). Identifying patients with Mild Cognitive Impairment (MCI) who are likely to develop Alzheimer's disease is highly important for AD treatment. Previous studies suggest that not all MCI patients will convert to AD. Massive amounts of data from longitudinal and extensive studies on thousands of Alzheimer's patients have been generated. Building a computational model that can predict conversion from MCI to AD can be highly beneficial for early intervention and treatment planning. This work presents a big data model that uses machine-learning techniques to determine the level of AD in a participant and predict the time of conversion to AD. The proposed framework considers one of the most widely used screening assessments for detecting cognitive impairment, the Montreal Cognitive Assessment (MoCA). The MoCA data set was collected from different centers and integrated into our large data framework storage using the Hadoop Distributed File System (HDFS); the data was then analyzed using the Apache Spark framework. The accuracy of the proposed framework was compared with that of a semi-parametric Cox survival analysis model.
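    A minimal PySpark sketch of such a pipeline: read MoCA records from HDFS, assemble features, and fit a decision tree to the conversion label. The file path, column names, and train/test split below are assumptions for illustration; the paper's actual features and survival-time handling are not reproduced here.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("moca-decision-tree").getOrCreate()

# Assumed layout: MoCA scores on HDFS with a label column marking conversion to AD.
df = spark.read.csv("hdfs:///data/moca.csv", header=True, inferSchema=True)

assembler = VectorAssembler(
    inputCols=["moca_total", "age", "education_years"],  # assumed feature names
    outputCol="features",
)
tree = DecisionTreeClassifier(labelCol="converted_to_ad", featuresCol="features")

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = Pipeline(stages=[assembler, tree]).fit(train)

accuracy = MulticlassClassificationEvaluator(
    labelCol="converted_to_ad", metricName="accuracy"
).evaluate(model.transform(test))
print(f"holdout accuracy: {accuracy:.3f}")
```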

    On the usage of the probability integral transform to reduce the complexity of multi-way fuzzy decision trees in Big Data classification problems

    We present a new distributed fuzzy partitioning method to reduce the complexity of multi-way fuzzy decision trees in Big Data classification problems. The proposed algorithm builds a fixed number of fuzzy sets for all variables and adjusts their shape and position to the real distribution of the training data. A two-step process is applied: 1) transformation of the original distribution into a standard uniform distribution by means of the probability integral transform; since the original distribution is generally unknown, the cumulative distribution function is approximated by computing the q-quantiles of the training set; 2) construction of a Ruspini strong fuzzy partition in the transformed attribute space using a fixed number of equally distributed triangular membership functions. Despite the aforementioned transformation, the definition of every fuzzy set in the original space can be recovered by applying the inverse cumulative distribution function (also known as the quantile function). The experimental results reveal that the proposed methodology allows the state-of-the-art multi-way fuzzy decision tree (FMDT) induction algorithm to maintain classification accuracy with up to 6 million fewer leaves. (Appeared in the 2018 IEEE International Congress on Big Data, BigData Congress.)
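    The two-step process lends itself to a compact single-machine sketch (plain NumPy, not the authors' distributed implementation): approximate the CDF by q-quantiles, map the attribute onto [0, 1], and lay down equally spaced triangular membership functions that form a Ruspini strong partition; the inverse CDF recovers the set cores in the original space. The quantile count and number of fuzzy sets are arbitrary choices here.

```python
import numpy as np

def fit_quantile_cdf(x: np.ndarray, q: int = 100):
    """Approximate the CDF by q-quantiles (step 1: probability integral transform)."""
    probs = np.linspace(0.0, 1.0, q + 1)
    cuts = np.quantile(x, probs)
    cdf = lambda v: np.interp(v, cuts, probs)      # original space -> [0, 1]
    inv_cdf = lambda u: np.interp(u, probs, cuts)  # [0, 1] -> original space
    return cdf, inv_cdf

def ruspini_triangular(u: np.ndarray, n_sets: int = 5) -> np.ndarray:
    """Membership of u in n equally spaced triangular fuzzy sets on [0, 1].
    Memberships at every point sum to 1, i.e. a Ruspini strong partition (step 2)."""
    centers = np.linspace(0.0, 1.0, n_sets)
    width = centers[1] - centers[0]
    mu = 1.0 - np.abs(u[:, None] - centers[None, :]) / width
    return np.clip(mu, 0.0, 1.0)

# Skewed training attribute: the quantile transform flattens it to ~uniform.
x = np.random.default_rng(0).lognormal(size=10_000)
cdf, inv_cdf = fit_quantile_cdf(x)
memberships = ruspini_triangular(cdf(x), n_sets=5)
assert np.allclose(memberships.sum(axis=1), 1.0)   # strong-partition property holds

# Cores of the fuzzy sets mapped back into the original attribute space.
print("set cores:", inv_cdf(np.linspace(0.0, 1.0, 5)).round(3))
```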