26 research outputs found

    Context-Aware Deep Sequence Learning with Multi-View Factor Pooling for Time Series Classification

    In this paper, we propose an effective, multi-view, multivariate deep classification model for time-series data. Multi-view methods show promise in their ability to learn correlation and exclusivity properties across different independent information sources. However, most current multi-view integration schemes employ only a linear model and, therefore, do not extensively utilize the relationships observed across different view-specific representations. Moreover, the majority of these methods rely exclusively on sophisticated, handcrafted features to capture local data patterns and, thus, depend heavily on large collections of labeled data. The multi-view, multivariate deep classification model proposed in this paper makes important contributions to address these limitations. The proposed model derives an LSTM-based deep feature descriptor to model both the view-specific data characteristics and the cross-view interactions in an integrated deep architecture, while driving the learning phase in a data-driven manner. The model employs a compact context descriptor that exploits view-specific affinity information to build a more insightful context representation. Finally, the model uses a multi-view factor-pooling scheme with a context-driven attention learning strategy to weight the most relevant feature dimensions while eliminating noise from the resulting fused descriptor. As shown by experiments, the proposed multi-view deep sequential learning approach improves classification performance by roughly 4% over existing multi-view methods on the UCI multi-view activity recognition dataset, while also showing significantly more robust, generalized representation capacity than its single-view counterparts in classifying several large-scale multi-view light-curve collections.
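    The fusion step described in the abstract can be illustrated with a minimal sketch. The names and dimensions below are invented for illustration, and the view descriptors are random stand-ins for the outputs of the paper's view-specific LSTM encoders; only the context-driven attention weighting and pooled fusion are shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: 3 views, each summarized by an 8-dim descriptor
    # (in the paper these would come from view-specific LSTM encoders).
    n_views, d = 3, 8
    view_feats = rng.standard_normal((n_views, d))

    # Compact context descriptor: here simply the mean of the view descriptors.
    context = view_feats.mean(axis=0)

    # Context-driven attention: score each view against the context, then softmax.
    scores = view_feats @ context
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()

    # Pooled fusion: attention-weighted combination of the view descriptors,
    # down-weighting views that carry little context-relevant signal.
    fused = weights @ view_feats

    print(weights.shape, fused.shape)  # (3,) (8,)
    ```

    The softmax guarantees the view weights are positive and sum to one, so the fused descriptor stays on the same scale as the individual view descriptors.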

    Towards a Visual Analytics Framework for Handling Complex Business Processes

    Organizing data that can come from anywhere in the complex business process in a variety of types is a challenging task. To tackle the challenge, we introduce the concepts of virtual sensors and process events. In addition, a visual interface is presented in this paper to aid deploying the virtual sensors and analyzing process-event information. The virtual sensors permit collection from the streams of data at any point in the process and transmission of the data in a form ready to be analyzed by the central analytics engine. Process events provide a uniform expression of data of different types in a form that can be automatically prioritized and that is readily meaningful to the users. Through the visual interface, the user can place the virtual sensors, interact with and group the process events, and delve into the details of the process at any point. The visual interface provides a multi-view investigative environment for sensemaking and decisive action by the user.
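    The virtual-sensor and process-event idea can be sketched as follows. The abstract gives no API, so every name here (`VirtualSensor`, `ProcessEvent`, the priority function) is a hypothetical illustration of a sensor tapping a stream at one point in the process and emitting uniform, automatically prioritizable events for a central engine.

    ```python
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class ProcessEvent:
        """Uniform event record; only priority participates in ordering."""
        priority: int                       # lower value = more urgent
        source: str = field(compare=False)  # which virtual sensor produced it
        payload: dict = field(compare=False)

    class VirtualSensor:
        """Hypothetical tap on a process data stream."""
        def __init__(self, name, priority_fn):
            self.name = name
            self.priority_fn = priority_fn  # maps a raw record to a priority

        def observe(self, raw):
            # Normalize a raw stream record into a uniform process event.
            return ProcessEvent(self.priority_fn(raw), self.name, raw)

    # Central analytics engine modeled as a priority queue over all sensors.
    queue = []
    sensor = VirtualSensor("invoice-step", lambda r: 0 if r.get("error") else 5)
    for record in [{"id": 1}, {"id": 2, "error": "timeout"}]:
        heapq.heappush(queue, sensor.observe(record))

    first = heapq.heappop(queue)
    print(first.priority, first.payload["id"])  # the error event surfaces first
    ```

    Because every sensor emits the same event shape, the engine can prioritize and group events without knowing anything about the underlying data types.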

    Specifying Dynamic Support for Collaborative Work within WORLDS

    In this paper, we present a specification language developed for WORLDS, a next-generation computer-supported collaborative work system. Our specification language, called Introspect, employs a meta-level architecture to allow run-time modifications to specifications. We believe such an architecture is essential to WORLDS' ability to provide dynamic support for collaborative work in an elegant fashion.
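    The meta-level idea can be sketched in a few lines. The abstract does not describe Introspect's syntax or API, so this is purely an invented illustration of the underlying principle: a specification stores its own rules as data, so they can be inspected and replaced while the system is running rather than being fixed at compile time.

    ```python
    class Specification:
        """Toy meta-level specification: rules are data, editable at run time."""

        def __init__(self):
            self._rules = {}              # meta-level store of named rules

        def define(self, name, rule):
            self._rules[name] = rule      # run-time modification of the spec

        def check(self, name, *args):
            return self._rules[name](*args)

    spec = Specification()
    spec.define("may_edit", lambda user, doc: user == doc["owner"])
    print(spec.check("may_edit", "ana", {"owner": "ana"}))   # True
    print(spec.check("may_edit", "bob", {"owner": "ana"}))   # False

    # Later, without restarting, the collaboration policy is relaxed in place:
    spec.define("may_edit", lambda user, doc: True)
    print(spec.check("may_edit", "bob", {"owner": "ana"}))   # True
    ```

    Keeping the rules in a mutable, named store is what makes the behavior of the system adjustable mid-session, which is the property the abstract argues is essential for dynamic collaborative work.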

    MUDdling through
