
    Novel modeling of task versus rest brain state predictability using a dynamic time warping spectrum: comparisons and contrasts with other standard measures of brain dynamics

    Dynamic time warping (DTW) is a powerful and domain-general sequence alignment method for computing a similarity measure. Dynamic programming-based techniques such as DTW are now the backbone of many bioinformatics methods and discoveries. In neuroscience DTW has seen far less use, though this has begun to change. We wanted to explore new ways of applying DTW: not simply as a measure with which to cluster or to compare similarity between features, but in a conceptually different way. We have used DTW to provide a spectral description of the data that is more interpretable than standard approaches such as the Fourier and related transforms. The DTW approach and the standard discrete Fourier transform (DFT) are assessed against benchmark measures of neural dynamics, including EEG microstates, EEG avalanches, and the sum squared error (SSE) of a multilayer perceptron (MLP) prediction of the EEG time series and of the simultaneously acquired fMRI BOLD signal. We explored the relationships between these variables of interest in an EEG-fMRI dataset acquired during a standard cognitive task, which allowed us to examine how DTW performs in different task settings. We found that, despite strong correlations between the DTW and DFT spectra, DTW was a better predictor for almost every measure of brain dynamics. Using these DTW measures, we show that predictability is almost always higher in task than in rest states, which is consistent with other theoretical and empirical findings and provides additional evidence for the utility of the DTW approach.
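    As a toy illustration of the general idea (not the authors' exact method), one way to form a DTW-based "spectrum" is to compute the DTW alignment cost between the signal and sinusoidal templates over a grid of frequencies; every name and parameter below is an assumption made for the sketch.

```python
import numpy as np

def dtw_cost(x, y):
    """Classic O(len(x) * len(y)) dynamic-programming DTW alignment cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_spectrum(signal, freqs, fs):
    """DTW "spectrum": alignment cost between the z-scored signal and a unit
    sinusoid at each candidate frequency (lower cost = better match)."""
    t = np.arange(len(signal)) / fs
    z = (signal - signal.mean()) / signal.std()
    return np.array([dtw_cost(z, np.sin(2 * np.pi * f * t)) for f in freqs])

# Toy usage: one second of a noisy 10 Hz oscillation sampled at 250 Hz.
fs = 250
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
freqs = np.arange(2, 30)
costs = dtw_spectrum(eeg, freqs, fs)
print("best-matching frequency:", freqs[np.argmin(costs)])
```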

    Generalized Additive Model Implementation for Germany Real Estate Market - Model, API, UI Development

    Internship report presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. The hedonic pricing approach is one of the most widely accepted methodologies for real estate price assessment, as it delivers attribute-based values: the value of a property changes with the condition of its attributes. In the real estate market, such changes can include property renovation, material and construction depreciation, or even expansion of the plot area. The scope of this internship report is to explain the development of a first prototype Generalized Additive Model that predicts house price per square metre, on the basis of hedonic pricing theory, for a certain region of Germany. In addition to the model development, this report explains how the model was brought into production via a REST API and a user interface. Data Science Service GmbH (DSS), the owner of the project, specializes in real estate property appraisal derived from statistical learning models, currently only in Austria; the outcome of this project enables the company to enter the German real estate market as well. The necessary data were provided by the German market partner Forschung und Beratung für Wohnen, Immobilien und Umwelt GmbH (F+B), while DSS is responsible for delivering the model product from beginning to end. The R package drake is used for parallel computation and for generating a maintainable, adaptive data pipeline. Parameter selection based on information criteria was performed for each model for every type of real estate property. Lastly, the statistical model is served through a REST API to the UI (a Shiny application), both developed in R.
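    The report's models are built in R (with the drake pipeline and a Shiny UI); purely as a language-agnostic sketch of the underlying idea, the snippet below fits an unpenalized additive spline model, a rough stand-in for a GAM, on invented hedonic attributes. All variable names, data, and the EUR/m^2 scale are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression

# Synthetic hedonic data: price per m^2 driven by living area and building age.
rng = np.random.default_rng(0)
n = 500
area = rng.uniform(40, 200, n)   # m^2
age = rng.uniform(0, 80, n)      # years since construction
price = 4000 - 8 * age + 300 * np.log(area) + rng.normal(0, 150, n)
X = np.column_stack([area, age])

# A cubic spline basis per attribute plus a linear fit gives an additive
# model in the spirit of a GAM (without the smoothing penalty a GAM adds).
model = make_pipeline(SplineTransformer(n_knots=6, degree=3), LinearRegression())
model.fit(X, price)
print("predicted EUR/m^2:", model.predict([[120.0, 15.0]])[0])
```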

    Attack time analysis in dynamic attack trees via integer linear programming

    Attack trees are an important tool in security analysis, and an important part of attack tree analysis is computing metrics. This paper focuses on dynamic attack trees and their min time metric, i.e. the minimal time needed to attack a system. For general attack trees, calculating min time efficiently is an open problem, the fastest current method being to enumerate all minimal attacks, which is NP-hard. This paper presents three tools for calculating min time. First, we introduce a novel method for general dynamic attack trees based on mixed integer linear programming. Second, we show how the computation can be sped up by identifying the modules of an attack tree, i.e. subtrees connected to the rest of the attack tree via only one node. Finally, we define a general semantics for dynamic attack trees that significantly relaxes the restrictions on attack trees compared to earlier work, allowing us to apply our methods to a wide variety of attack trees. Experiments on both a case study of a server cluster and a synthetic testing set of large attack trees verify that both the integer linear programming approach and modular analysis considerably decrease the computation time of attack time analysis.
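    The paper's encoding is not reproduced in the abstract; the sketch below is a toy big-M mixed integer linear program, written with PuLP, for the min time of a small invented dynamic attack tree. An OR gate picks one child via a binary variable, a SAND gate runs its children sequentially, and an AND gate would simply bound its completion time by each child's (a max); the tree, durations, and big-M constant are all made up.

```python
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum, LpBinary,
                  PULP_CBC_CMD, value)

# Hypothetical tree: root = OR(brute_force, SAND(phish, escalate)).
dur = {"brute_force": 200, "phish": 24, "escalate": 8}  # hours, invented
M = 10_000  # big-M, larger than any feasible completion time

prob = LpProblem("attack_min_time", LpMinimize)
T = {n: LpVariable(f"T_{n}", lowBound=0)
     for n in ["root", "seq", "brute_force", "phish", "escalate"]}

for leaf, d in dur.items():          # leaves finish after their duration
    prob += T[leaf] == d

prob += T["seq"] >= T["phish"] + T["escalate"]   # SAND: sequential children

x = {c: LpVariable(f"x_{c}", cat=LpBinary) for c in ["brute_force", "seq"]}
prob += lpSum(x.values()) == 1                   # OR: pick exactly one child
for c in x:                                      # root bound by chosen child only
    prob += T["root"] >= T[c] - M * (1 - x[c])

prob += T["root"]                                # objective: minimise root time
prob.solve(PULP_CBC_CMD(msg=0))
print("min time:", value(T["root"]))             # -> 32.0 (phish then escalate)
```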

    IRAF in the nineties

    The Image Reduction and Analysis Facility (IRAF) data reduction and analysis system has been around since 1981. Today it is a mature system with hundreds of applications, supported on all the major platforms. Many institutions, projects, and individuals in the US and around the world have developed software for IRAF; some of these packages are comparable in size to the IRAF core system itself. IRAF is both a data analysis system and a programming environment. As a data analysis system it can be easily installed by a user at a remote site and immediately used to view and process data. As a programming environment, IRAF contains a wealth of high- and low-level facilities for developing new applications for interactive and automated processing of astronomical or other data. As important as the applications programs and user interfaces are to the scientist using IRAF, the heart of the IRAF system is the programming environment. The programming environment determines to a large extent the types of applications which can be built within IRAF, what they will look like, and how they will interact with one another and with the user. While applications can easily be added to or removed from a software system, the programming environment must remain fairly stable, with carefully planned evolution and growth, over the lifetime of a system. The IRAF programming environment is the framework on which the rest of the IRAF system is built. This paper discusses the IRAF programming environment as it exists in 1992 and the work currently underway to enhance it. The structure of the programming environment as a class hierarchy is discussed, with emphasis on the work being done on the image data structures, graphics and image display interfaces, and user interfaces. The new technologies which we feel IRAF must deal with successfully over the coming years are discussed. Finally, a preview of what IRAF might look like to the user by the end of the decade is presented.

    Interfaces for arcControlTower

    Existing tools for ARC middleware job management provide only basic operations, and the effort required to manage jobs with these tools increases with the number of jobs to execute. Most of the existing tools also lack capabilities for managing jobs across multiple clusters. The arcControlTower (aCT) is a job management framework that can efficiently manage thousands of jobs over many clusters; however, it lacks a user interface that would enable its use as an alternative to other tools. Our goal was to extend aCT to enable the creation of various user interfaces. We achieved this by adding an application programming interface (API) to aCT. We then developed a command line interface (CLI) on top of the API, since this type of interface is the most commonly used among job management tools. Moreover, we developed a REST interface that enables a server setup of aCT, as web access to computing resources is becoming increasingly popular. These interfaces enabled us to use aCT to manage several thousand jobs from the ATLAS experiment.
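    Neither aCT's API nor its REST routes are given in the abstract, so the following Flask sketch is purely generic: it illustrates the layering described (job-management backend, then an API, then a REST front end) with entirely hypothetical endpoints, fields, and storage.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
JOBS = {}      # toy in-memory store standing in for the job-management backend
NEXT_ID = 0

@app.route("/jobs", methods=["POST"])
def submit():
    """Accept a JSON job description and queue it (hypothetical endpoint)."""
    global NEXT_ID
    NEXT_ID += 1
    JOBS[NEXT_ID] = {"descr": request.get_json(), "state": "submitted"}
    return jsonify({"id": NEXT_ID}), 201

@app.route("/jobs/<int:job_id>", methods=["GET"])
def status(job_id):
    """Poll the state of a previously submitted job."""
    job = JOBS.get(job_id)
    return (jsonify(job), 200) if job else ("not found", 404)

if __name__ == "__main__":
    app.run(port=8080)
```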

    Estimation of rest periods for newly constructed/reconstructed pavements

    Newly constructed and reconstructed highway pavements deteriorate progressively under the effects of traffic loading and climatic severity, and need a preservation intervention a certain number of years after their construction. In the literature, the term ‘rest period’ refers to the number of years that elapse between construction completion and the application of the first major repair activity. The rest period is a critical piece of information that agencies use not only to plan and budget for the first major repair activity but also to develop, with greater confidence, their life-cycle activity schedules for life-cycle costing, work programming, and long-term plans. However, the literature lacks established procedures for predicting rest periods on the basis of pavement performance thresholds. In the absence of such resources, highway agencies rely mostly on expert opinion to establish the rest periods for their pavement sections. To address this issue, this paper presents a statistical methodology for establishing the rest periods of newly constructed or reconstructed pavements. The methodology was demonstrated using empirical data from in-service pavements in a Midwestern state in the US. The paper’s results show that the rest periods of newly constructed and reconstructed highway pavements are significantly influenced by their functional class, surface material type, traffic loading level, and climate severity.
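    The paper's statistical methodology is not detailed in the abstract; as a minimal sketch of the threshold idea, the snippet below fits a simple deterioration curve (condition index versus pavement age) and solves for the age at which the curve crosses an intervention threshold. The model form, data, and threshold value are all assumptions.

```python
import numpy as np

# Synthetic condition data: index starts near 100 and decays with age (years).
rng = np.random.default_rng(1)
age = rng.uniform(0, 20, 200)
condition = 100 * np.exp(-0.035 * age) + rng.normal(0, 2, 200)

# Fit a log-linear deterioration model: log(condition) = a + b * age.
b, a = np.polyfit(age, np.log(condition), 1)

# Rest period = age at which the predicted condition hits the repair threshold.
threshold = 70.0
rest_period = (np.log(threshold) - a) / b
print(f"estimated rest period: {rest_period:.1f} years")
```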

    AndroParse - An Android Feature Extraction Framework & Dataset

    Android malware has become a major challenge. As a consequence, practitioners and researchers spend significant time analyzing Android applications (APKs). A common procedure (especially for data scientists) is to extract features such as permissions, APIs, or strings, which can then be analyzed. Current state-of-the-art tools have three major issues: (1) no single tool can extract all the significant features used by scientists and practitioners, (2) current tools are not designed to be extensible, and (3) existing parsers are not runtime-efficient. This work therefore presents AndroParse, an open-source Android parser written in Golang that currently extracts the four most common features: permissions, APIs, strings, and intents. AndroParse outputs JSON files, as these can easily be consumed by most major programming languages. Building the parser allowed us to create an extensive feature dataset, which can be accessed through our independent REST API. Our dataset currently contains 67,703 benign and 46,683 malicious APK samples.
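    AndroParse itself is written in Golang and its JSON schema is not given in the abstract; the sketch below only shows how a data scientist might consume per-APK JSON records of the kind described (permissions, APIs, strings, intents) to build a binary feature vector. All field names here are hypothetical.

```python
# Hypothetical per-APK record, matching the four feature kinds the parser
# extracts; the real AndroParse output may use different field names.
sample = {
    "sha256": "…",
    "permissions": ["android.permission.INTERNET",
                    "android.permission.SEND_SMS"],
    "apis": ["android/telephony/SmsManager;->sendTextMessage"],
    "strings": ["http://example.com"],
    "intents": ["android.intent.action.BOOT_COMPLETED"],
}

def permission_vector(record, vocabulary):
    """One-hot encode an APK's permissions against a fixed vocabulary."""
    held = set(record["permissions"])
    return [1 if p in held else 0 for p in vocabulary]

vocab = ["android.permission.INTERNET",
         "android.permission.SEND_SMS",
         "android.permission.CAMERA"]
print(permission_vector(sample, vocab))   # -> [1, 1, 0]
```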

    Logic Programming for Finding Models in the Logics of Knowledge and its Applications: A Case Study

    The logics of knowledge are modal logics that have been shown to be effective in representing and reasoning about knowledge in multi-agent domains. Relatively few computational frameworks have been proposed for computing models and useful transformations in logics of knowledge (e.g., to support multi-agent planning with knowledge actions and degrees of visibility). This paper explores the use of logic programming (LP) to encode interesting forms of logics of knowledge and to compute Kripke models. The LP modeling is extended with useful operators on Kripke structures to support multi-agent planning in the presence of both world-altering and knowledge actions. This results in the first implementation of a planner for this type of complex multi-agent domain.
    Comment: 16 pages, 1 figure, International Conference on Logic Programming 201
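    The paper's LP encoding is not shown here; purely as an illustration of the semantics such an encoding computes, the sketch below evaluates the knowledge modality over a tiny hand-built Kripke structure (M, w ⊨ K_a φ iff φ holds in every world agent a considers possible from w). Every world, agent, and atom in it is a made-up example.

```python
# A toy Kripke structure: worlds, per-agent accessibility, and valuations.
worlds = {"w1", "w2"}
access = {"alice": {("w1", "w1"), ("w1", "w2"), ("w2", "w2")}}
val = {"w1": {"p"}, "w2": {"p"}}  # atom p holds in both worlds

def holds(world, formula):
    """Evaluate a formula: an atom (str), ('not', f), ('and', f, g),
    or ('K', agent, f) for the knowledge modality."""
    if isinstance(formula, str):
        return formula in val[world]
    op = formula[0]
    if op == "not":
        return not holds(world, formula[1])
    if op == "and":
        return holds(world, formula[1]) and holds(world, formula[2])
    if op == "K":  # K_a f: f holds in every world accessible to the agent
        _, agent, f = formula
        return all(holds(v, f) for (u, v) in access[agent] if u == world)
    raise ValueError(op)

print(holds("w1", ("K", "alice", "p")))   # True: p holds in both w1 and w2
```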

    Non-linear Pattern Matching with Backtracking for Non-free Data Types

    Non-free data types are data types whose data have no canonical form. For example, multisets are non-free because the multiset {a,b,b} has two other equivalent but literally different forms, {b,a,b} and {b,b,a}. Pattern matching is known to provide a handy tool set for such data types. Although many studies on pattern matching, and implementations in practical programming languages, have been proposed so far, we observe that none of them satisfy all the criteria of practical pattern matching, which are: (i) efficiency of the backtracking algorithm for non-linear patterns, (ii) extensibility of the matching process, and (iii) polymorphism in patterns. This paper aims to design a new pattern-matching-oriented programming language that satisfies all three criteria. The proposed language features a clean Scheme-like syntax and efficient, extensible pattern-matching semantics. It is especially useful for processing complex non-free data types, including not only multisets and sets but also graphs and symbolic mathematical expressions. We discuss the importance of our criteria for practical pattern matching and how our language design naturally arises from these criteria. The proposed language has already been implemented and open-sourced as the Egison programming language.
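    As a language-neutral toy (Egison's actual syntax and matching semantics are far richer), the generator below matches a non-linear pattern, given as a list of variable names where a repeated name must bind to equal elements, against a multiset, backtracking over the choice of elements. It sketches only the backtracking idea, not the paper's algorithm.

```python
def match_multiset(pattern, values, binding=None):
    """Yield every binding of the pattern's variables to elements of the
    multiset `values` (order-insensitive); a repeated variable name is
    non-linear and must bind to equal elements. Leftover elements are
    ignored, like a trailing wildcard."""
    binding = binding or {}
    if not pattern:
        yield binding
        return
    var, rest_pat = pattern[0], pattern[1:]
    tried = set()
    for i, v in enumerate(values):
        if v in tried:                      # identical choice: skip duplicate
            continue
        tried.add(v)
        if var in binding and binding[var] != v:
            continue                        # non-linear conflict: backtrack
        yield from match_multiset(rest_pat, values[:i] + values[i + 1:],
                                  {**binding, var: v})

# The pattern (x, x, _) matches {a, b, b} only by binding x to b.
print(list(match_multiset(["x", "x"], ["a", "b", "b"])))  # -> [{'x': 'b'}]
```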