
    Tsunamis and Ripples: Effects of Scalar Waves on Screening in the Milky Way

    Modified gravity models which include an additional propagating degree of freedom are typically studied in the quasi-static limit, where the propagation is neglected and the wave equation of the field is replaced with a Poisson-type equation. Recently, it has been proposed that, in the context of models with symmetron- or chameleon-type screening, scalar waves from astrophysical or cosmological events could have a significant effect on the screening of the Solar System, and hence invalidate these models. Here, we quantitatively investigate the impact of scalar waves by solving the full field equation linearised in the wave amplitude. In the symmetron case, we find that the quantitative effect of waves is generally negligible, even for the largest wave amplitudes that are physically expected. In order to spoil the screening of the Solar System, a significant amount of wave energy would have to be focused on it by arranging the sources in a spherical shell centred on Earth. In the chameleon case, we are able to rule out any significant effects of propagating waves on Solar System tests. (11 pages, 2 figures)
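    For orientation, the approximation being relaxed can be written schematically. The following is a generic sketch of chameleon/symmetron-type scalar dynamics with an effective potential V_eff(phi; rho), not the paper's specific equations:

    % Full dynamics of a screened scalar field \phi coupled to the matter density \rho
    \ddot{\phi} - \nabla^2 \phi = -\frac{\partial V_{\rm eff}(\phi;\rho)}{\partial \phi}
    % Quasi-static limit: time derivatives dropped, leaving a Poisson-type equation
    \nabla^2 \phi \simeq \frac{\partial V_{\rm eff}(\phi;\rho)}{\partial \phi}
    % Wave analysis: linearise about the static profile, \phi = \bar{\phi} + \delta\phi,
    % keeping the full wave operator for the perturbation
    \ddot{\delta\phi} - \nabla^2 \delta\phi + m_{\rm eff}^2(\bar{\phi})\,\delta\phi \simeq 0,
    \qquad m_{\rm eff}^2 \equiv \left. \frac{\partial^2 V_{\rm eff}}{\partial \phi^2} \right|_{\bar{\phi}}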

    Spatial data fusion with visual feedback

    This paper outlines our ongoing work towards developing a system for extracting patterns embedded in heterogeneous data streams that contain people’s recorded movements in both physical and virtual spaces. Examples of such spatial data sources are satellite-based sensors (GPS), ultrasound acoustic trackers, radio-frequency sensors (WLAN, Bluetooth, UWB) and infrared-based sensors. The core work on pattern extraction relies on the spatial data fusion component, which aims to bring the various data types to a common format. An additional benefit of this system will be a graphical interface enabling interactive visualisation of the extracted patterns. The rationale for this work is outlined through the relevance of location-aware systems in the context of ubiquitous computing, which so far have received limited benefit from fields such as Human-Computer Interaction (HCI) and interactive visualisation.
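    As a rough illustration of what bringing such sources to a common format can involve, the minimal Python sketch below normalises GPS and indoor-tracker records into one shared schema. The record fields, helper names and numeric values are our own assumptions for illustration, not the authors' fusion component.

    import math
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Fix:
        """Common record: who, when, where (local metres), which sensor, nominal accuracy."""
        person_id: str
        timestamp: datetime
        x: float            # metres east of a chosen local origin
        y: float            # metres north of a chosen local origin
        source: str         # "gps", "wlan", "bluetooth", "uwb", "ultrasound", ...
        accuracy_m: float

    def from_gps(person_id, ts, lat, lon, origin_lat, origin_lon, accuracy_m=5.0):
        """Project WGS84 lat/lon onto a local tangent plane (adequate over small areas)."""
        m_per_deg_lat = 111_320.0
        m_per_deg_lon = 111_320.0 * math.cos(math.radians(origin_lat))
        return Fix(person_id, ts, (lon - origin_lon) * m_per_deg_lon,
                   (lat - origin_lat) * m_per_deg_lat, "gps", accuracy_m)

    def from_indoor(person_id, ts, x_m, y_m, source, accuracy_m):
        """Indoor trackers (WLAN/Bluetooth/UWB/ultrasound) often already report local x/y."""
        return Fix(person_id, ts, x_m, y_m, source, accuracy_m)

    # Once normalised, records from different sensors share one schema and can be
    # merged, time-sorted and handed to pattern extraction or visualisation.
    fixes = sorted([
        from_gps("p1", datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc),
                 51.5074, -0.1278, origin_lat=51.5070, origin_lon=-0.1280),
        from_indoor("p1", datetime(2024, 1, 1, 12, 0, 5, tzinfo=timezone.utc),
                    3.2, 7.8, "uwb", 0.3),
    ], key=lambda f: f.timestamp)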

    Observers are a key source of detection heterogeneity and biased occupancy estimates in species monitoring

    Reliable assessments of population status and trends underpin conservation management efforts but are complicated by the fact that imperfect detection is ubiquitous in monitoring data. We explored the variables most commonly believed to influence detection probabilities, quantifying how they affect detectability and assessing how occupancy estimates are impacted when a variable is ignored. To do so, we used data from two multi-species amphibian monitoring programmes, collected by volunteers and professional surveyors. Our results suggest that although detection rates varied substantially in relation to commonly considered factors such as seasonal and annual effects, ignoring these factors in the analysis of monitoring data had a negligible effect on estimated occupancy rates. Variation among surveyors in detection probabilities turned out to be most important: it was large, and failing to account for it led to occupancy being underestimated. Importantly, heterogeneity among observers was as high for professional surveyors as for volunteers, showing that this issue is not restricted to citizen-science monitoring. Occupancy modelling has greatly improved the reliability of inference from species monitoring data, yet capturing the relevant sources of variation remains a challenge. Our results highlight that variation among surveyors is a key source of heterogeneity, and that this issue is just as pertinent to data collected by experts as by volunteers. Detection heterogeneity should be accounted for when analysing monitoring data. Furthermore, improving the training of field crews and collecting data that quantify differences in observer ability are important steps towards avoiding biased inference caused by unmodelled observer differences.
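    For readers less familiar with the framework, the kind of model involved can be sketched as a standard single-season occupancy model with a random observer effect on detection. This is a generic formulation for illustration, not the exact model fitted in the paper; z_i is the true occupancy state of site i, y_ij the detection record for visit j, x_ij a vector of detection covariates (e.g. season, year), and obs(i,j) indexes the surveyor.

    % State process: true occupancy of site i
    z_i \sim \mathrm{Bernoulli}(\psi_i)
    % Observation process: detection possible only where the species is present
    y_{ij} \mid z_i \sim \mathrm{Bernoulli}(z_i \, p_{ij})
    % Detection probability with covariates and a random observer effect
    \mathrm{logit}(p_{ij}) = \alpha + \boldsymbol{\beta}^{\top} \mathbf{x}_{ij} + \gamma_{\mathrm{obs}(i,j)},
    \qquad \gamma_k \sim \mathcal{N}(0, \sigma_{\mathrm{obs}}^2)

    Omitting the observer term when the observer variance is large is the kind of unmodelled detection heterogeneity that, as reported above, leads occupancy to be underestimated.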

    ChatGPT Prompt Patterns for Improving Code Quality, Refactoring, Requirements Elicitation, and Software Design

    This paper presents prompt design techniques for software engineering, in the form of patterns, that address common problems encountered when using large language models (LLMs) such as ChatGPT to automate common software engineering activities, for example ensuring code is decoupled from third-party libraries or simulating a web application API before it is implemented. The paper provides two contributions to research on using LLMs for software engineering. First, it provides a catalog of prompt patterns for software engineering, classified according to the types of problems they solve. Second, it explores several prompt patterns that have been applied to improve requirements elicitation, rapid prototyping, code quality, refactoring, and system design.
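    To make the first of those problems concrete, the sketch below shows the kind of decoupling from a third-party library that such prompts aim to produce. It is an illustrative Python example under our own assumptions (including the placeholder URL api.example.com), not a pattern or code sample taken from the paper.

    from typing import Protocol
    import json
    import urllib.request

    class HttpClient(Protocol):
        """Application-owned interface; business code depends on this, not on any one library."""
        def get_json(self, url: str) -> dict: ...

    class UrllibClient:
        """One concrete adapter; could be swapped for requests, httpx, or a test stub."""
        def get_json(self, url: str) -> dict:
            with urllib.request.urlopen(url) as resp:
                return json.loads(resp.read().decode("utf-8"))

    def fetch_user(client: HttpClient, user_id: int) -> dict:
        # Business logic sees only the HttpClient interface, never the underlying library.
        return client.get_json(f"https://api.example.com/users/{user_id}")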

    Mining Medical Data: Bridging the Knowledge Divide

    Due to the significant amount of data generated by modern medicine, there is a growing reliance on tools such as data mining and knowledge discovery to help make sense of and comprehend such data. The success of this process requires collaboration and interaction between such methods and medical professionals. An important question is therefore: how can we strengthen the relationship between two traditionally separate fields (technology and medicine) so that they work together towards enhancing knowledge in modern medicine? To address this question, this study examines the application of data mining techniques to a large asthma medical dataset. A discussion is proposed that introduces various methods for a smooth approach, moving away from the 'jack of all trades, master of none' model towards a modular, cooperative approach to achieving a successful outcome. The results of this study support the use of data mining as a useful tool and highlight the advantages, on a global scale, of closer relations between the two distinct fields. The exploration of the CRISP methodology suggests that a 'one methodology fits all' approach is not appropriate, but rather that methods should be combined to create a hybrid, holistic approach to data mining.