
    Data analysis of gravitational-wave signals from spinning neutron stars. IV. An all-sky search

    We develop a set of data analysis tools for a realistic all-sky search for continuous gravitational-wave signals. The methods that we present apply to data from both the resonant bar detectors that are currently in operation and the laser interferometric detectors that are in the final stages of construction and commissioning. We show that with our techniques we shall be able to perform an all-sky 2-day-long coherent search of the narrow-band data from the resonant bar EXPLORER with no loss of signals with dimensionless amplitude greater than 2.8×10⁻²³. (REVTeX, 26 pages, 1 figure; submitted to Phys. Rev.)
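    At its core, a coherent narrow-band search like the one described above correlates the data stream against a bank of frequency templates and picks the template with the largest matched-filter power. The following toy sketch (Python with NumPy; all signal parameters are invented, and the real pipeline must also demodulate the Doppler shift from Earth's motion, which is omitted here) illustrates the idea:

    ```python
    import numpy as np

    def coherent_search(data, fs, freqs):
        """Matched-filter power of each frequency template; toy stand-in
        for a coherent search statistic (no Doppler demodulation)."""
        t = np.arange(len(data)) / fs
        powers = np.array([
            np.abs(np.sum(data * np.exp(-2j * np.pi * f * t))) ** 2 / len(data)
            for f in freqs
        ])
        return freqs[int(np.argmax(powers))], powers

    rng = np.random.default_rng(0)
    fs, f_true = 1024.0, 100.0               # Hz; hypothetical narrow-band data
    t = np.arange(0, 2.0, 1 / fs)            # a 2 s coherent stretch
    data = 0.5 * np.sin(2 * np.pi * f_true * t) + rng.normal(0.0, 1.0, t.size)
    freqs = np.arange(90.0, 110.0, 0.25)     # template bank
    f_hat, _ = coherent_search(data, fs, freqs)
    print(f_hat)                              # frequency of the loudest template
    ```

    Even at this signal-to-noise level the loudest template sits on the injected frequency; the actual search trades template spacing against computing cost in just this way, but over sky position and spin-down as well.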

    Vehicle Accident Alert and Locator (VAAL)

    An emergency is a deviation from planned or expected behaviour, or a course of events that endangers or adversely affects people, property, or the environment. This paper reports a complete research work on an accident (automobile) emergency alert situation. The authors were able to programme a GPS/GSM module incorporating a crash detector to report automatically via the GSM communication platform (using SMS messaging) to the nearest agencies such as police posts, hospitals, and fire services, giving the exact position of the point where the crash occurred. This allows early response and rescue of accident victims, saving lives and property. The paper reports its experimental results and gives appropriate conclusions and recommendations.
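    The alert logic described above can be sketched as a simple trigger-and-report loop: a crash detector fires when acceleration crosses a threshold, and the GSM modem sends an SMS containing the current GPS fix. The threshold, coordinates, and function names below are illustrative assumptions, not the authors' firmware:

    ```python
    # Hypothetical sketch of a crash-alert loop: threshold, message format
    # and callback wiring are assumptions for illustration only.
    CRASH_G_THRESHOLD = 4.0  # g-force above which the event is treated as a crash

    def format_alert_sms(lat, lon):
        """Build the SMS payload reporting the crash position."""
        return f"ACCIDENT ALERT: vehicle crash detected at lat={lat:.5f}, lon={lon:.5f}"

    def on_accelerometer_sample(g_force, gps_fix, send_sms):
        """Fire an alert through the GSM-modem callback when the threshold is crossed."""
        if g_force >= CRASH_G_THRESHOLD:
            send_sms(format_alert_sms(*gps_fix))
            return True
        return False

    sent = []  # stand-in for the GSM modem's outbox
    on_accelerometer_sample(6.2, (6.45407, 3.39467), sent.append)
    print(sent[0])
    ```

    In a deployed unit the `send_sms` callback would drive the GSM module's serial interface, and the fix would come from the GPS receiver's latest sentence rather than a constant.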

    Comparing observed damages and losses with modelled ones using a probabilistic approach: the Lorca 2011 case

    A loss and damage assessment was performed for the buildings of Lorca, Spain, considering an earthquake hazard scenario with characteristics similar to those of the real event that occurred on May 11th, 2011, in terms of epicentre, depth and magnitude, while also considering the local soil response. This low-to-moderate earthquake caused severe damage and disruption in the region and especially in the city. A building-by-building resolution database was developed and used for damage and loss assessment. The portfolio of buildings was characterized by means of indexes capturing information from a structural point of view, such as age, main construction materials, number of stories, and building class, as well as others related to vulnerability class. A replacement-cost approach was selected for the analysis in order to calculate the direct losses incurred by the event. Seismic hazard and vulnerability were modelled in a probabilistic way, considering their inherent uncertainties, which were also taken into account in the damage and loss calculation process. Losses are expressed in terms of the mean damage ratio of each dwelling and, since the analysis was performed on a geographical information system platform, the distribution of the damage and its categories was mapped for the entire urban centre. The simulated damages and losses were compared with the observed ones reported by the local authorities and institutions that inspected the city after the event.
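    A minimal sketch of the probabilistic loss step, under a hypothetical vulnerability model: each building's damage ratio is sampled from a distribution whose mean depends on its structural class, and replacement-cost losses are aggregated by Monte Carlo simulation. The classes, distributions, and costs below are invented for illustration and are not the paper's calibrated models:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed mean damage ratios per (hypothetical) building class.
    MEAN_DAMAGE = {"masonry": 0.35, "rc_frame": 0.15}

    def simulate_losses(buildings, n_sims=10_000):
        """Mean replacement-cost loss per building over Monte Carlo samples."""
        losses = {}
        for name, (cls, replacement_cost) in buildings.items():
            mu = MEAN_DAMAGE[cls]
            # Beta(a, b) parameterized so its mean equals the assumed damage ratio.
            a = 2.0
            b = a * (1.0 - mu) / mu
            damage_ratio = rng.beta(a, b, n_sims)   # sampled damage ratios in [0, 1]
            losses[name] = replacement_cost * damage_ratio.mean()
        return losses

    portfolio = {"B1": ("masonry", 200_000.0), "B2": ("rc_frame", 350_000.0)}
    losses = simulate_losses(portfolio)
    print(losses)
    ```

    The study's actual workflow draws correlated hazard and vulnerability samples per scenario; the point of the sketch is only that the mean damage ratio emerges from sampling, not from a deterministic curve lookup.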

    Improving Big Data Visual Analytics with Interactive Virtual Reality

    For decades, the growth and volume of digital data collection have made it challenging to digest large volumes of information and extract underlying structure. Coined 'Big Data', massive amounts of information have quite often been gathered inconsistently (e.g. from many sources, of various forms, at different rates, etc.). These factors impede the practices of not only processing data, but also analyzing and displaying it in an efficient manner to the user. Many efforts have been made in the data mining and visual analytics community to create effective ways to further improve analysis and achieve the knowledge desired for better understanding. Our approach for improved big data visual analytics is two-fold, focusing on both visualization and interaction. Given geo-tagged information, we are exploring the benefits of visualizing datasets in the original geospatial domain by utilizing a virtual reality platform. After running proven analytics on the data, we intend to represent the information in a more realistic 3D setting, where analysts can achieve an enhanced situational awareness and rely on familiar perceptions to draw in-depth conclusions on the dataset. In addition, developing a human-computer interface that responds to natural user actions and inputs creates a more intuitive environment. Tasks can be performed to manipulate the dataset and allow users to dive deeper upon request, adhering to desired demands and intentions. Due to the volume and popularity of social media, we developed a 3D tool visualizing Twitter on MIT's campus for analysis. Utilizing emerging technologies of today to create a fully immersive tool that promotes visualization and interaction can help ease the process of understanding and representing big data. (6 pages, 8 figures; 2015 IEEE High Performance Extreme Computing Conference, HPEC '15.)

    A qualitative approach to the identification, visualisation and interpretation of repetitive motion patterns in groups of moving point objects

    Discovering repetitive patterns is important in a wide range of research areas, such as bioinformatics and human movement analysis. This study puts forward a new methodology to identify, visualise and interpret repetitive motion patterns in groups of Moving Point Objects (MPOs). The methodology consists of three steps. First, motion patterns are qualitatively described using the Qualitative Trajectory Calculus (QTC). Second, a similarity analysis is conducted to compare motion patterns and identify repetitive patterns. Third, repetitive motion patterns are represented and interpreted in a continuous triangular model. As an illustration of the usefulness of combining these hitherto separated methods, a specific movement case is examined: Samba dance, a rhythmical dance with many repetitive movements. The results show that the presented methodology is able to successfully identify, visualise and interpret the contained repetitive motions.
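    The first step, the QTC encoding, can be sketched as follows: for each pair of synchronized trajectories, record per timestep whether each object moves towards (-), away from (+), or keeps its distance to (0) the other. This is a simplified reading of QTC-Basic, not the study's full implementation:

    ```python
    import math

    def qtc_symbol(d_before, d_after, eps=1e-9):
        """Qualitative sign of the change in inter-object distance."""
        if d_after < d_before - eps:
            return "-"   # moving towards the other object
        if d_after > d_before + eps:
            return "+"   # moving away from it
        return "0"       # distance qualitatively unchanged

    def qtc_b(track_k, track_l):
        """QTC-Basic string per timestep for two synchronized 2D trajectories."""
        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])
        states = []
        for t in range(len(track_k) - 1):
            # Each object's motion is judged against the other's current position.
            sk = qtc_symbol(dist(track_k[t], track_l[t]), dist(track_k[t + 1], track_l[t]))
            sl = qtc_symbol(dist(track_l[t], track_k[t]), dist(track_l[t + 1], track_k[t]))
            states.append(sk + sl)
        return states

    # k walks towards a stationary l, then steps away again.
    k = [(0, 0), (1, 0), (2, 0), (1, 0)]
    l = [(3, 0), (3, 0), (3, 0), (3, 0)]
    print(qtc_b(k, l))  # ['-0', '-0', '+0']
    ```

    Repetitive movements then show up as recurring substrings of these state sequences, which is what the similarity analysis in step two compares.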

    PreSEIS: A Neural Network-Based Approach to Earthquake Early Warning for Finite Faults

    The major challenge in the development of earthquake early warning (EEW) systems is the achievement of a robust performance at the largest possible warning time. We have developed a new method for EEW, called PreSEIS (Pre-SEISmic), that is as quick as methods based on single-station observations and, at the same time, shows a higher robustness than most other approaches. At regular timesteps after the triggering of the first EEW sensor, PreSEIS estimates the most likely source parameters of an earthquake using the available information on ground motions at different sensors in a seismic network. The approach is based on two-layer feed-forward neural networks to estimate the earthquake hypocenter location, its moment magnitude, and the expansion of the evolving seismic rupture. When applied to the Istanbul Earthquake Rapid Response and Early Warning System (IERREWS), PreSEIS estimates the moment magnitudes of 280 simulated finite-fault scenarios (4.5 ≤ M ≤ 7.5) with errors of less than ±0.8 units after 0.5 sec, ±0.5 units after 7.5 sec, and ±0.3 units after 15.0 sec. Over the same time intervals, the mean location error falls from 10 km, to 6 km, to less than 5 km, respectively. Our analyses show that the uncertainties of the estimated parameters (and thus of the warnings) decrease with time. This reveals a trade-off between the reliability of the warning on the one hand and the remaining warning time on the other. Moreover, the ongoing update of predictions with time allows PreSEIS to handle complex ruptures, in which the largest fault slips do not occur close to the point of rupture initiation. The estimated expansions of the seismic ruptures lead to a clear enhancement of alert maps, which visualize the level and distribution of likely ground shaking in the affected region seconds before the seismic waves arrive.
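    The magnitude-estimation component can be sketched as a small two-layer feed-forward network trained on per-station features. Everything below (synthetic data, network sizes, learning rate) is an illustrative assumption, not the PreSEIS configuration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic data: each of 10 stations reports a noisy magnitude proxy.
    n_sta, n_hidden, n = 10, 8, 2000
    M = rng.uniform(4.5, 7.5, n)                                 # true magnitudes
    X = (M[:, None] + rng.normal(0.0, 0.5, (n, n_sta))) - 6.0    # centred features

    W1 = rng.normal(0, 0.2, (n_sta, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.2, (n_hidden, 1));     b2 = np.zeros(1)

    def forward(x):
        """Two-layer network: tanh hidden layer, linear output."""
        return np.tanh(x @ W1 + b1) @ W2 + b2

    mae_before = np.abs(forward(X).ravel() - M).mean()

    lr = 0.05
    for _ in range(2000):                        # plain batch gradient descent
        h = np.tanh(X @ W1 + b1)
        err = (h @ W2 + b2).ravel() - M
        dh = err[:, None] @ W2.T * (1 - h ** 2)  # backprop through tanh
        W2 -= lr * (h.T @ err[:, None]) / n
        b2 -= lr * err.mean(keepdims=True)
        W1 -= lr * (X.T @ dh) / n
        b1 -= lr * dh.mean(axis=0)

    mae_after = np.abs(forward(X).ravel() - M).mean()
    print(round(float(mae_before), 2), round(float(mae_after), 2))
    ```

    In PreSEIS the input vector is re-evaluated at each timestep as more sensors trigger, which is what drives the shrinking error bounds quoted above; this sketch trains on a single static snapshot.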

    PlaNet - Photo Geolocation with Convolutional Neural Networks

    Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate location and occasionally an exact location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman levels of accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, we demonstrate that this model achieves a 50% performance improvement over the single-image model.
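    The reformulation of geolocation as classification can be sketched by discretising the globe into cells and using the cell index as the class label. PlaNet uses adaptive S2 cells sized by photo density; the toy version below uses fixed latitude/longitude grids at a few resolutions purely to show the idea:

    ```python
    # Toy stand-in for PlaNet's cell partitioning: fixed-degree lat/lon grids
    # instead of adaptive S2 cells. Scales and coordinates are illustrative.

    def cell_id(lat, lon, deg):
        """Index of the (deg x deg) grid cell containing a coordinate."""
        row = int((lat + 90.0) // deg)
        col = int((lon + 180.0) // deg)
        n_cols = int(360.0 / deg)
        return row * n_cols + col

    def multiscale_cells(lat, lon, scales=(10.0, 5.0, 1.0)):
        """Cell indices at coarse-to-fine resolutions, as classifier targets."""
        return {deg: cell_id(lat, lon, deg) for deg in scales}

    cells = multiscale_cells(48.8584, 2.2945)  # roughly the Eiffel Tower
    print(cells)
    ```

    A deep network then outputs a probability over cell ids instead of regressing latitude and longitude directly, which lets it express multimodal uncertainty ("this beach is in Florida or in Portugal") that a single-coordinate regressor cannot.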

    Trying to break new ground in aerial archaeology

    Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the earth’s atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach which has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power have emerged, their potential has not yet been fully exploited in order to re-invent and highly optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed to transform the way aerial archaeologists approach data acquisition and image processing. By addressing the very core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. In this paper, a possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. 
In this way, the proposed system addresses one of many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, the information contained in this paper should make it clear that with current technology it is feasible to do so. This can radically alter the basis for aerial prospection and move landscape archaeology forward, beyond the inherently biased patterns that are currently created by airborne archaeological prospection.
