
    Early Warning Analysis for Social Diffusion Events

    There is considerable interest in developing predictive capabilities for social diffusion processes, for instance to permit early identification of emerging contentious situations, rapid detection of disease outbreaks, or accurate forecasting of the ultimate reach of potentially viral ideas or behaviors. This paper proposes a new approach to this predictive analytics problem, in which analysis of meso-scale network dynamics is leveraged to generate useful predictions for complex social phenomena. We begin by deriving a stochastic hybrid dynamical systems (S-HDS) model for diffusion processes taking place over social networks with realistic topologies; this modeling approach is inspired by recent work in biology demonstrating that S-HDS offer a useful mathematical formalism with which to represent complex, multi-scale biological network dynamics. We then perform formal stochastic reachability analysis with this S-HDS model and conclude that the outcomes of social diffusion processes may depend crucially upon the way the early dynamics of the process interact with the underlying network's community structure and core-periphery structure. This theoretical finding provides the foundation for developing a machine learning algorithm that enables accurate early warning analysis for social diffusion events. The utility of the warning algorithm, and the power of network-based predictive metrics, are demonstrated through an empirical investigation of the propagation of political memes over social media networks. Additionally, we illustrate the potential of the approach for security informatics applications through case studies involving early warning analysis of large-scale protest events and politically motivated cyber attacks.
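
    To make the meso-scale intuition concrete, here is a minimal toy sketch, not the paper's S-HDS model or warning algorithm: it simulates a simple contagion on a synthetic two-community network and computes an early-warning proxy based on how quickly early adoption crosses the community boundary. The graph generator, spreading parameters, and the boundary-spread metric are illustrative assumptions; networkx is assumed to be installed.

# Toy sketch (not the S-HDS model): a cascade on a two-community graph and an
# early-warning proxy measuring how fast adoption crosses the community boundary.
import itertools
import random

import networkx as nx


def two_community_graph(n=100, p_in=0.10, p_out=0.005, seed=1):
    """Two dense communities connected by sparse bridging edges."""
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_nodes_from(range(2 * n))
    community = {node: node // n for node in range(2 * n)}
    for u, v in itertools.combinations(range(2 * n), 2):
        p = p_in if community[u] == community[v] else p_out
        if rng.random() < p:
            g.add_edge(u, v)
    return g, community


def simulate_cascade(graph, seeds, p=0.15, steps=20, seed=2):
    """Independent-cascade-style diffusion; returns the adopter set after each step."""
    rng = random.Random(seed)
    active, frontier = set(seeds), set(seeds)
    history = [set(active)]
    for _ in range(steps):
        new = set()
        for node in frontier:
            for nbr in graph.neighbors(node):
                if nbr not in active and rng.random() < p:
                    new.add(nbr)
        active |= new
        frontier = new
        history.append(set(active))
    return history


def boundary_spread(history, community, early_step=3):
    """Early-warning proxy: share of early adopters outside the seed community."""
    early = history[min(early_step, len(history) - 1)]
    seed_community = community[next(iter(history[0]))]
    return sum(community[n] != seed_community for n in early) / len(early)


if __name__ == "__main__":
    g, community = two_community_graph()
    seeds = [0, 1, 2, 3, 4]  # seed the cascade inside community 0
    history = simulate_cascade(g, seeds)
    print("final reach:", len(history[-1]))
    print("early boundary spread:", round(boundary_spread(history, community), 3))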

    Local feature selection for multiple instance learning with applications.

    Feature selection is a data processing approach that has been successfully and effectively used in developing machine learning algorithms for various applications. It has been proven to effectively reduce the dimensionality of the data and increase the accuracy and interpretability of machine learning algorithms. Conventional feature selection algorithms assume that there is an optimal global subset of features for the whole sample space, so only one global subset of relevant features is learned. An alternative approach is based on the concept of Local Feature Selection (LFS), where each training sample can have its own subset of relevant features. Multiple Instance Learning (MIL) is a variation of traditional supervised learning, also known as single instance learning. In MIL, each object is represented by a set of instances, or a bag. While bags are labeled, the labels of their instances are unknown. The ambiguity of the instance labels makes feature selection for MIL challenging. Although feature selection in traditional supervised learning has been researched extensively, there are only a few methods for the MIL framework, and localized feature selection for MIL has not been researched. This dissertation focuses on developing a local feature selection method for the MIL framework. Our algorithm, called Multiple Instance Local Salient Feature Selection (MI-LSFS), searches the feature space to find the relevant features within each bag. We also propose a new multiple instance classification algorithm, called MILES-LFS, that integrates information learned by MI-LSFS during the feature selection process to identify a reduced subset of representative bags and instances. We show that using a more focused subset of prototypes can improve performance while significantly reducing the computational complexity. Other applications of the proposed MI-LSFS include a new method that uses the MI-LSFS algorithm to explore and investigate the features learned by a Convolutional Neural Network (CNN) model; a visualization method for CNN models, called Gradient-weighted Sample Activation Map (Grad-SAM), that uses the locally learned features of each sample to highlight its relevant and salient parts; and a novel explanation method, called Classifier Explanation by Local Feature Selection (CE-LFS), to explain the decisions of trained models. The proposed MI-LSFS and its applications are validated using several synthetic and real data sets. We report and compare quantitative measures such as Rand Index, Area Under Curve (AUC), and accuracy. We also provide qualitative evaluations by visualizing and interpreting the selected features and their effects.
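
    As a concrete picture of the data layout involved, the sketch below sets up bag-structured (multiple-instance) data and scores features locally, per bag. It is not the MI-LSFS algorithm from the dissertation; the bag generator and the relevance score are simplified assumptions for illustration, and numpy is assumed to be available.

# Toy multiple-instance data and a per-bag (local) feature score; not MI-LSFS.
import numpy as np

rng = np.random.default_rng(0)


def make_bag(n_instances, n_features, positive, signal_feature=2):
    """A bag is an (n_instances, n_features) array; a positive bag hides one
    'witness' instance with a shifted value on one feature."""
    bag = rng.normal(size=(n_instances, n_features))
    if positive:
        bag[0, signal_feature] += 3.0
    return bag


def local_feature_scores(bag, negative_bags):
    """Score each feature by how far the bag's strongest instance lies from
    what negative bags typically exhibit on that feature."""
    neg = np.vstack(negative_bags)
    return np.abs(bag.max(axis=0) - neg.mean(axis=0)) / (neg.std(axis=0) + 1e-9)


if __name__ == "__main__":
    n_features = 6
    pos_bags = [make_bag(10, n_features, positive=True) for _ in range(5)]
    neg_bags = [make_bag(10, n_features, positive=False) for _ in range(5)]
    for i, bag in enumerate(pos_bags):
        scores = local_feature_scores(bag, neg_bags)
        print(f"positive bag {i}: most relevant feature = {int(scores.argmax())}")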

    Modelling mobile health systems: an application of augmented MDA for the extended healthcare enterprise

    Mobile health systems can extend the enterprise computing system of the healthcare provider by bringing services to the patient any time and anywhere. We propose a model-driven design and development methodology for the development of the m-health components in such extended enterprise computing systems. The methodology applies a model-driven design and development approach augmented with formal validation and verification to address quality and correctness and to support model transformation. Recent work on modelling applications from the healthcare domain is reported. One objective of this work is to explore and elaborate the proposed methodology. At the University of Twente we are developing m-health systems based on Body Area Networks (BANs). One specialization of the generic BAN is the health BAN, which incorporates a set of devices and associated software components to provide some set of health-related services. A patient will have a personalized instance of the health BAN customized to their current set of needs. A health professional interacts with their patients' BANs via a BAN Professional System. The set of deployed BANs is supported by a server. We refer to this distributed system as the BAN System. The BAN System extends the enterprise computing system of the healthcare provider. Development of such systems requires a sound software engineering approach, and this is what we explore with the new methodology. The methodology is illustrated with reference to recent modelling activities targeted at real implementations. In the context of the Awareness project, BAN implementations will be trialled in a number of clinical settings, including epilepsy management and management of chronic pain.
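
    The distributed structure described above can be sketched with a few illustrative types, shown below. This is not the University of Twente implementation or its MDA models; the class and method names are hypothetical and only mirror the roles of the health BAN, the BAN Professional System, and the supporting server.

# Illustrative types only; not the University of Twente BAN implementation.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Observation:
    sensor: str
    value: float


@dataclass
class HealthBAN:
    """A personalized health BAN: devices plus health-related services."""
    patient_id: str
    services: List[str]
    buffer: List[Observation] = field(default_factory=list)

    def record(self, obs: Observation) -> None:
        self.buffer.append(obs)


@dataclass
class BANServer:
    """The server supporting the set of deployed BANs."""
    bans: Dict[str, HealthBAN] = field(default_factory=dict)

    def sync(self, ban: HealthBAN) -> None:
        self.bans[ban.patient_id] = ban


@dataclass
class BANProfessionalSystem:
    """The health professional's view onto their patients' BANs."""
    server: BANServer

    def review(self, patient_id: str) -> List[Observation]:
        return self.server.bans[patient_id].buffer


if __name__ == "__main__":
    server = BANServer()
    ban = HealthBAN("patient-42", services=["epilepsy management"])
    ban.record(Observation("accelerometer", 0.7))
    server.sync(ban)
    print(BANProfessionalSystem(server).review("patient-42"))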

    Integrated modeling of friction stir welding of 6xxx series Al alloys: Process, microstructure and properties

    Compared to most thermomechanical processing methods, friction stir welding (FSW) is a recent technique which has not yet reached full maturity. Nevertheless, owing to multiple intrinsic advantages, FSW has already replaced conventional welding methods in a variety of industrial applications, especially for Al alloys. This provides the impetus for developing a methodology towards optimization, from process to performance, using the most advanced approaches available in materials science and thermomechanics. The aim is to obtain guidance both for process fine tuning and for alloy design. Integrated modeling constitutes a way to accelerate the insertion of the process, especially regarding difficult applications where, for instance, ductility, fracture toughness, fatigue and/or stress corrosion cracking are key issues. Hence, an integrated modeling framework devoted to the FSW of 6xxx series Al alloys has been established and applied to the 6005A and 6056 alloys. The suite of models involves an in-process temperature evolution model, a microstructure evolution model with an extension to heterogeneous precipitation, a microstructure-based strength and strain hardening model, and a micromechanics-based damage model. The presentation of each model is supplemented by coverage of the relevant recent literature. The "model chain" is assessed against a wide range of experimental data. The final objective is to present routes for the optimization of the FSW process using both experiments and models. This strategy goes well beyond the case of FSW, illustrating the potential of model chains to support a "material by design" approach from process to performance.
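
    The idea of a "model chain" can be illustrated with a toy pipeline in which a thermal history feeds a precipitate (microstructure) model, which in turn feeds a strength estimate, shown below. The functional forms and constants are placeholders for illustration only, not the calibrated in-process temperature, precipitation, and strength models of the paper.

# Toy "model chain": thermal history -> precipitate state -> strength estimate.
# Functional forms and constants are placeholders, not the calibrated models.
import math


def weld_temperature(distance_mm, peak_c=500.0, width_mm=10.0, ambient_c=25.0):
    """Toy Gaussian peak temperature as a function of distance from the weld line."""
    return ambient_c + (peak_c - ambient_c) * math.exp(-(distance_mm / width_mm) ** 2)


def precipitate_fraction(temp_c, start_c=250.0, end_c=450.0):
    """Toy linear dissolution of strengthening precipitates with peak temperature."""
    if temp_c <= start_c:
        return 1.0
    if temp_c >= end_c:
        return 0.0
    return 1.0 - (temp_c - start_c) / (end_c - start_c)


def yield_strength(fraction, base_mpa=80.0, precipitation_mpa=200.0):
    """Toy strength model: solid-solution baseline plus precipitation hardening."""
    return base_mpa + precipitation_mpa * fraction


if __name__ == "__main__":
    for d in (0, 5, 10, 15, 20, 30):
        t = weld_temperature(d)
        s = yield_strength(precipitate_fraction(t))
        print(f"{d:3d} mm from weld line: peak {t:5.0f} C, strength ~ {s:5.0f} MPa")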

    OmniLRS: A Photorealistic Simulator for Lunar Robotics

    Developing algorithms for extra-terrestrial robotic exploration has always been challenging. Along with the complexity associated with these environments, one of the main issues remains the evaluation of said algorithms. With the regained interest in lunar exploration, there is also a demand for quality simulators that will enable the development of lunar robots. In this paper, we propose the Omniverse Lunar Robotic-Sim (OmniLRS), a photorealistic lunar simulator built on Isaac Sim, Nvidia's robotic simulator. The simulator provides fast procedural environment generation and multi-robot capabilities, along with a synthetic data pipeline for machine-learning applications. It comes with ROS1 and ROS2 bindings to control not only the robots but also the environments. We also perform sim-to-real rock instance segmentation to show the effectiveness of our simulator for image-based perception. Trained on our synthetic data, a YOLOv8 model achieves performance close to a model trained on real-world data, with a 5% performance gap. When fine-tuned with real data, the model achieves 14% higher average precision than the model trained on real-world data, demonstrating our simulator's photorealism. The code is fully open-source, accessible here: https://github.com/AntoineRichard/LunarSim, and comes with demonstrations. Comment: 7 pages, 4 figures.
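
    A sketch of the sim-to-real workflow, assuming the Ultralytics YOLOv8 API: pretrain an instance-segmentation model on synthetic images from the simulator, then fine-tune on a smaller real-world set. The dataset YAML names and hyperparameters below are hypothetical placeholders, not the authors' training configuration.

# Hypothetical sim-to-real training sketch using the Ultralytics YOLOv8 API.
from ultralytics import YOLO

# Train an instance-segmentation model on synthetic data from the simulator.
model = YOLO("yolov8n-seg.pt")
model.train(data="synthetic_rocks.yaml", epochs=100, imgsz=640)  # placeholder dataset

# Fine-tune the synthetic-data model on a smaller set of real images.
model.train(data="real_rocks.yaml", epochs=30, imgsz=640, lr0=1e-4)

# Evaluate on the real-world validation split (mask mAP@0.5).
metrics = model.val(data="real_rocks.yaml")
print(metrics.seg.map50)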

    Inferring Topology of Networks With Hidden Dynamic Variables

    Inferring the network topology from the dynamics of interacting units constitutes a topical challenge that drives research on its theory and applications across physics, mathematics, biology, and engineering. Most current inference methods rely on time series data recorded from all dynamical variables in the system. In applications, often only some of these time series are accessible, while other units or variables of all units are hidden, i.e., inaccessible or unobserved. For instance, in AC power grids, frequency measurements are often easily available, whereas determining the phase relations among the oscillatory units requires much more effort. Here, we propose a network inference method that allows us to reconstruct the full network topology even if all units exhibit hidden variables. We illustrate the approach in terms of a basic AC power grid model with two variables per node, the local phase angle and the local instantaneous frequency. Based solely on frequency measurements, we infer the underlying network topology as well as the relative phases that are inaccessible to measurement. The presented method may be extended to systems with more complex coupling functions and additional parameters, such as losses in power grid models. These results may thus contribute towards developing and applying novel network inference approaches in engineering, biology, and beyond.
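
    The following toy example, which is not the paper's reconstruction method, illustrates the setting: it simulates linearized swing-equation dynamics on a small network, observes only the nodal frequencies, rebuilds phase trajectories by integration (known only up to constant offsets), and estimates the coupling topology node by node with least squares. The dynamics, parameters, and detection threshold are illustrative assumptions, and numpy is assumed to be available.

# Toy illustration: infer coupling topology from frequency measurements only.
import numpy as np

rng = np.random.default_rng(3)
n, dt, steps = 6, 0.01, 2000
gamma = 0.5

# Ground-truth symmetric coupling matrix (the topology to be inferred).
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T
P = rng.normal(scale=0.2, size=n)       # nodal power injections

theta = rng.normal(scale=1.0, size=n)   # hidden phase angles
omega = np.zeros(n)                     # observable frequencies
omegas = np.empty((steps, n))
for t in range(steps):
    coupling = (A * (theta[None, :] - theta[:, None])).sum(axis=1)
    omega = omega + dt * (P - gamma * omega + coupling)
    theta = theta + dt * omega
    omegas[t] = omega

# Only the frequencies are observed: integrating them gives the phases up to
# unknown per-node offsets, which the regression intercept absorbs.
theta_hat = np.cumsum(omegas, axis=0) * dt
domega = np.gradient(omegas, dt, axis=0)

A_est = np.zeros((n, n))
for i in range(n):
    others = [j for j in range(n) if j != i]
    X = np.column_stack(
        [theta_hat[:, j] - theta_hat[:, i] for j in others]
        + [omegas[:, i], np.ones(steps)]
    )
    coef, *_ = np.linalg.lstsq(X, domega[:, i], rcond=None)
    A_est[i, others] = coef[: len(others)]

# Compare thresholded estimate against the true adjacency pattern.
recovered = (np.abs(A_est) > 0.5) == (A > 0)
np.fill_diagonal(recovered, True)
print(f"adjacency entries recovered: {recovered.sum()} / {n * n}")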

    TEMPOS: A Platform for Developing Temporal Applications on Top of Object DBMS

    This paper presents TEMPOS: a set of models and languages supporting the manipulation of temporal data on top of object DBMS. The proposed models exploit object-oriented technology to meet some important, yet traditionally neglected, design criteria related to legacy code migration and representation independence. Two complementary ways of accessing temporal data are offered: a query language and a visual browser. The query language, namely TempOQL, is an extension of OQL supporting the manipulation of histories regardless of their representations, through fully composable functional operators. The visual browser offers operators that facilitate several time-related interactive navigation tasks, such as studying a snapshot of a collection of objects at a given instant, or detecting and examining changes within temporal attributes and relationships. The TEMPOS models and languages have been formalized at both the syntactic and semantic levels and have been implemented on top of an object DBMS. The suitability of the proposals with regard to applications' requirements has been validated through concrete case studies.
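
    The two access patterns described above can be sketched with a minimal history abstraction, shown below. This is not the TEMPOS or TempOQL API; the Period and History types and their operators are hypothetical stand-ins for a representation-independent snapshot query and interactive change detection.

# Hypothetical history abstraction; not the TEMPOS/TempOQL API.
from dataclasses import dataclass
from typing import Any, List, Optional, Tuple


@dataclass
class Period:
    start: int            # instants modeled as integers for simplicity
    end: int              # exclusive upper bound

    def contains(self, instant: int) -> bool:
        return self.start <= instant < self.end


@dataclass
class History:
    """A temporal attribute: a list of (validity period, value) pairs."""
    states: List[Tuple[Period, Any]]

    def at(self, instant: int) -> Optional[Any]:
        """Snapshot operator: the value valid at a given instant."""
        for period, value in self.states:
            if period.contains(instant):
                return value
        return None

    def changes(self) -> List[Tuple[int, Any, Any]]:
        """Change-detection operator: instants where the value changes."""
        out = []
        for (_, v1), (p2, v2) in zip(self.states, self.states[1:]):
            if v1 != v2:
                out.append((p2.start, v1, v2))
        return out


if __name__ == "__main__":
    salary = History([(Period(2000, 2003), 30_000),
                      (Period(2003, 2007), 34_000),
                      (Period(2007, 2010), 34_000)])
    print(salary.at(2005))    # snapshot at an instant
    print(salary.changes())   # when and how the attribute changed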

    High-Performance Cloud Computing: A View of Scientific Applications

    Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis, and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible, service-based infrastructure supports multiple programming paradigms that allow Aneka to address a variety of scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain-imaging workflow. Comment: 13 pages, 9 figures, conference paper.
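
    As a conceptual stand-in for the task-farming pattern used in the gene-expression case study (not the Aneka API), the sketch below submits independent tasks, such as cross-validation folds, to a worker pool using Python's standard library; in a cloud deployment the pool would be backed by dynamically provisioned resources.

# Conceptual task-farming sketch with the standard library; not the Aneka API.
from concurrent.futures import ProcessPoolExecutor, as_completed
import random


def classify_fold(fold_id: int, seed: int) -> float:
    """Placeholder for training and evaluating one fold of a classifier."""
    rng = random.Random(seed)
    return 0.80 + 0.1 * rng.random()   # pretend accuracy for illustration


if __name__ == "__main__":
    folds = range(10)
    # Locally we use a process pool; a cloud platform would size the pool
    # according to the resources it provisions on demand.
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(classify_fold, f, 42 + f): f for f in folds}
        accuracies = [fut.result() for fut in as_completed(futures)]
    print(f"mean accuracy over {len(accuracies)} folds: "
          f"{sum(accuracies) / len(accuracies):.3f}")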