
    Rates and Characteristics of Intermediate Mass Ratio Inspirals Detectable by Advanced LIGO

    Gravitational waves (GWs) from the inspiral of a neutron star (NS) or stellar-mass black hole (BH) into an intermediate-mass black hole (IMBH) with mass between ~50 and ~350 solar masses may be detectable by the planned advanced generation of ground-based GW interferometers. Such intermediate mass ratio inspirals (IMRIs) are most likely to be found in globular clusters. We analyze four possible IMRI formation mechanisms: (1) hardening of an NS-IMBH or BH-IMBH binary via three-body interactions, (2) hardening via Kozai resonance in a hierarchical triple system, (3) direct capture, and (4) inspiral of a compact object from a tidally captured main-sequence star; we also discuss tidal effects when the inspiraling object is an NS. For each mechanism we predict the typical eccentricities of the resulting IMRIs. We find that IMRIs will have largely circularized by the time they enter the sensitivity band of ground-based detectors. Hardening of a binary via three-body interactions, which is likely to be the dominant mechanism for IMRI formation, yields eccentricities under 10^-4 when the GW frequency reaches 10 Hz. Even among IMRIs formed via direct captures, which can have the highest eccentricities, around 90% will circularize to eccentricities under 0.1 before the GW frequency reaches 10 Hz. We estimate the rate of IMRI coalescences in globular clusters and the sensitivity of a network of three Advanced LIGO detectors to the resulting GWs. We show that this detector network may see up to tens of IMRIs per year, although rates of one to a few per year may be more plausible. We also estimate the loss in signal-to-noise ratio that will result from using circular IMRI templates for data analysis and find that, for the eccentricities we expect, this loss is negligible.
    Comment: Accepted for publication in ApJ; revised version reflects changes made to the article during the acceptance process
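The circularization claim can be illustrated with the orbit-averaged Peters (1964) relation between semi-major axis and eccentricity along a GW-driven inspiral. The sketch below is a minimal illustration under assumed, not paper-derived, parameters: the component masses, the initial orbit, and the quadrupole approximation f_GW = 2 f_orb are all assumptions.

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg

def peters_a_of_e(e, c0):
    """Semi-major axis along a GW-driven inspiral (Peters 1964)."""
    return c0 * e**(12 / 19) / (1 - e**2) * (1 + (121 / 304) * e**2)**(870 / 2299)

def ecc_at_frequency(m_total, a0, e0, f_gw):
    """Eccentricity when the quadrupole GW frequency (twice the orbital
    frequency) reaches f_gw, for an inspiral starting at (a0, e0)."""
    c0 = a0 / peters_a_of_e(e0, 1.0)                       # fix the integration constant
    a_target = (G * m_total / (np.pi * f_gw)**2)**(1 / 3)  # Kepler: f_gw = 2 * f_orb
    return brentq(lambda e: peters_a_of_e(e, c0) - a_target, 1e-12, e0)

# Hypothetical 10 + 100 M_sun IMRI starting at 1 AU with e = 0.999:
e_10hz = ecc_at_frequency(110 * M_SUN, 1.5e11, 0.999, 10.0)
```

Even this highly eccentric toy system reaches an eccentricity well below 0.01 by the time it enters the band, illustrating the strong circularization the abstract describes.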

    A Lightweight Service Placement Approach for Community Network Micro-Clouds

    Community networks (CNs) have gained momentum in the last few years with the increasing number of spontaneously deployed WiFi hotspots and home networks. These networks, owned and managed by volunteers, offer various services to their members and to the public. While Internet access is the most popular service, the provision of services of local interest within the network is enabled by the emerging technology of CN micro-clouds. By putting services closer to users, micro-clouds pursue not only better service performance, but also a low entry barrier for the deployment of mainstream Internet services within the CN. Unfortunately, the provisioning of these services is not so simple. Due to the large and irregular topology and the high software and hardware diversity of CNs, a "careful" placement of micro-cloud services over the network is required to optimize service performance. This paper proposes to leverage state information about the network to inform service placement decisions, and to do so through a fast heuristic algorithm, which is critical for reacting quickly to changing conditions. To evaluate its performance, we compare our heuristic with one based on random placement in Guifi.net, the biggest CN worldwide. Our experimental results show that our heuristic consistently outperforms random placement by 2x in bandwidth gain. We quantify the benefits of our heuristic on a real live video-streaming service, and demonstrate that video chunk losses decrease significantly, attaining a 37% decrease in the packet loss rate. Further, using a popular Web 2.0 service, we demonstrate that client response times decrease by up to an order of magnitude when using our heuristic. Since these improvements translate into the Quality of Experience (QoE) perceived by the user, our results contribute to a higher QoE, a crucial parameter for using services from volunteer-based systems and for adopting CN micro-clouds as an ecosystem for service deployment.
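Why a state-aware heuristic beats random placement can be seen even in a toy model. The snippet below is purely illustrative: the node names, bandwidth figures, and the greedy "pick the highest-bandwidth nodes" rule are assumptions for the sketch, not the paper's algorithm or Guifi.net data.

```python
import random

def greedy_placement(nodes, avail_bw, k):
    """State-aware heuristic (toy): place the k service instances on the
    nodes currently reporting the most available bandwidth."""
    return sorted(nodes, key=lambda n: avail_bw[n], reverse=True)[:k]

def random_placement(nodes, k, seed=0):
    """Baseline that ignores network state entirely."""
    return random.Random(seed).sample(nodes, k)

# Hypothetical snapshot of per-node available bandwidth (Mbit/s).
bw = {"n1": 4, "n2": 95, "n3": 12, "n4": 80, "n5": 7, "n6": 60}
greedy = greedy_placement(list(bw), bw, k=2)
rand = random_placement(list(bw), k=2)
bw_gain = sum(bw[n] for n in greedy) / sum(bw[n] for n in rand)
```

In any such snapshot the greedy choice matches or beats the random one in aggregate bandwidth; the paper's contribution is making state-informed decisions fast enough to track changing network conditions.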

    Influence of relative NK-DC abundance on placentation and its relation to epigenetic programming in the offspring

    Normal placentation relies on an efficient maternal adaptation to pregnancy. Within the decidua, natural killer (NK) cells and dendritic cells (DC) have a critical role in modulating the angiogenesis and decidualization associated with pregnancy. However, the contribution of these immune cells to the placentation process, and subsequently to fetal development, remains largely elusive. Using two different mouse models, we show here that optimal placentation and fetal development are sensitive to disturbances in the relative abundance of NK cells at the fetal–maternal interface. Depletion of NK cells during early gestation compromises the placentation process by causing alterations in placental function and structure. Embryos derived from NK-depleted dams suffer from intrauterine growth restriction (IUGR), a phenomenon that continued to be evident in the offspring on post-natal day 4. Further, we demonstrate that IUGR was accompanied by an overall reduction of global DNA methylation levels and by epigenetic changes in the methylation of specific hepatic gene promoters. Thus, temporary changes within the NK cell pool during early gestation influence placental development and function, subsequently affecting hepatic gene methylation and fetal metabolism.
    Authors: Freitag, Nancy (Medicine University of Berlin, Germany); Zwier, M. V. (University of Groningen, Netherlands); Barrientos, Gabriela Laura (Medicine University of Berlin, Germany; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Tirado González, Irene (Medicine University of Berlin, Germany); Conrad, Melanie L. (Medicine University of Berlin, Germany); Rose, Matthias (Medicine University of Berlin, Germany); Scherjon, S. A. (University of Groningen, Netherlands); Plösch, T. (University of Groningen, Netherlands); Blois, Sandra M. (Medicine University of Berlin, Germany)

    Post- and peritraumatic stress in disaster survivors: An explorative study about the influence of individual and event characteristics across different types of disasters

    Background: Examination of existing research on posttraumatic adjustment after disasters suggests that survivors’ posttraumatic stress levels might be better understood by investigating how the characteristics of the event experienced influenced what people thought and felt, both during the event and afterwards. Objective: To compare survivors’ perceived post- and peritraumatic emotional and cognitive reactions across different types of disasters, and additionally to investigate the influence of individual and event characteristics. Design: In a European multi-centre study, 102 survivors of different disasters (terror attack, flood, fire, and collapse of a building) were interviewed about their responses during the event. Survivors’ perceived posttraumatic stress levels were assessed with the Impact of Event Scale-Revised (IES-R). Peritraumatic emotional stress and risk perception were rated retrospectively. Influences of individual characteristics, such as socio-demographic data, and event characteristics, such as time and exposure factors, on post- and peritraumatic outcomes were analyzed. Results: Levels of reported post- and peritraumatic outcomes differed significantly between types of disasters. Type of disaster was a significant predictor of all three outcome variables, whereas gender, education, time since the event, injuries, and fatalities were significant only for certain outcomes. Conclusion: The results support the hypothesis that perceived post- and peritraumatic emotional and cognitive reactions differ after experiencing different types of disasters. However, these findings were explained not only by the type of disaster itself but also by individual and event characteristics. As the study followed an explorative approach, further research paths are discussed to better understand the relationships between variables.

    Detecting extreme mass ratio inspiral events in LISA data using the Hierarchical Algorithm for Clusters and Ridges (HACR)

    One of the most exciting prospects for the Laser Interferometer Space Antenna (LISA) is the detection of gravitational waves from the inspirals of stellar-mass compact objects into supermassive black holes. Detection of these sources is an extremely challenging computational problem due to the large parameter space and low amplitude of the signals. However, recent work has suggested that the nearest extreme mass ratio inspiral (EMRI) events will be sufficiently loud that they might be detected using computationally cheap, template-free techniques, such as a time-frequency analysis. In this paper, we examine a particular time-frequency algorithm, the Hierarchical Algorithm for Clusters and Ridges (HACR). This algorithm searches for clusters in a power map and uses the properties of those clusters to identify signals in the data. We find that HACR applied to the raw spectrogram performs poorly, but when the data is binned during the construction of the spectrogram, the algorithm can detect typical EMRI events at distances of up to ~2.6 Gpc. This is a little further than the simple Excess Power method that has been considered previously. We discuss the HACR algorithm, including tuning for single and multiple sources, and illustrate its performance for detection of typical EMRI events, and other likely LISA sources, such as white dwarf binaries and supermassive black hole mergers. We also discuss how HACR cluster properties could be used for parameter extraction.
    Comment: 21 pages, 11 figures, submitted to Class. Quantum Gravity. Modified and shortened in light of referee's comments. Updated results consider tuning over all three HACR thresholds, and show 10-15% improvement in detection rate
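The pipeline the abstract describes (spectrogram, pixel binning, threshold, connected clusters) can be sketched in a few lines. This is a toy stand-in rather than the tuned HACR implementation: the segment length, binning factor, and both thresholds are arbitrary choices here, whereas a real search tunes them against a target false-alarm rate.

```python
import numpy as np
from scipy import ndimage
from scipy.signal import spectrogram

def hacr_like_clusters(data, fs, nbin=4, pixel_thresh=3.0, cluster_thresh=30.0):
    """Toy HACR-style search: coarsen the spectrogram by summing nbin x nbin
    blocks of pixels (binning concentrates the power of slowly drifting
    tracks), keep pixels above pixel_thresh times the noise floor, group the
    survivors into connected clusters, and keep the high-power clusters."""
    _f, _t, sxx = spectrogram(data, fs=fs, nperseg=256, noverlap=128)
    nf = (sxx.shape[0] // nbin) * nbin
    nt = (sxx.shape[1] // nbin) * nbin
    binned = sxx[:nf, :nt].reshape(nf // nbin, nbin, nt // nbin, nbin).sum(axis=(1, 3))
    power = binned / np.median(binned)            # normalize to the noise floor
    labels, n = ndimage.label(power > pixel_thresh)
    cluster_powers = [power[labels == i].sum() for i in range(1, n + 1)]
    return [p for p in cluster_powers if p > cluster_thresh]
```

A loud chirp buried in white noise then shows up as one or more high-power clusters, while pure noise rarely pushes any binned pixel past the threshold.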

    A Robust Solution Procedure for Hyperelastic Solids with Large Boundary Deformation

    Compressible Mooney-Rivlin theory has been used to model hyperelastic solids, such as rubber and porous polymers, and more recently soft tissues in biomedical applications, undergoing large elastic deformations. We propose a solution procedure for the Lagrangian finite element discretization of a static nonlinear compressible Mooney-Rivlin hyperelastic solid. We consider the case in which the boundary condition is a large prescribed deformation, so that mesh tangling becomes an obstacle for straightforward algorithms. Our solution procedure begins with a largely geometric step to untangle the mesh: the solution of a sequence of linear systems that yields initial guesses for the interior nodal positions in which no element is inverted. After the mesh is untangled, we take Newton iterations to converge to a mechanical equilibrium. The Newton iterations are safeguarded by a line search similar to those used in optimization. Our computational results indicate that the algorithm is up to 70 times faster than a straightforward Newton continuation procedure and is also more robust (i.e., able to tolerate much larger deformations). For a few extremely large deformations, the deformed mesh could only be computed through the use of an expensive Newton continuation method with a tight convergence tolerance and very small steps.
    Comment: Revision of an earlier version of the paper. Submitted for publication in Engineering with Computers on 9 September 2010. Accepted for publication on 20 May 2011. Published online 11 June 2011. The final publication is available at http://www.springerlink.co
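The safeguard the abstract mentions is a standard globalization device: take the full Newton step when it reduces the residual, and backtrack otherwise. The generic sketch below shows this pattern for an arbitrary smooth nonlinear system; it is not the paper's finite element solver, and the step-acceptance constants are illustrative.

```python
import numpy as np

def safeguarded_newton(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Newton iteration with a backtracking (Armijo-style) line search on
    the residual norm: halve the step until the residual actually shrinks."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        rnorm = np.linalg.norm(r)
        if rnorm < tol:
            break
        step = np.linalg.solve(jacobian(u), -r)
        alpha = 1.0
        while np.linalg.norm(residual(u + alpha * step)) > (1 - 1e-4 * alpha) * rnorm:
            alpha *= 0.5
            if alpha < 1e-8:   # safeguard: accept a tiny step rather than stall
                break
        u = u + alpha * step
    return u
```

For example, solving u0^3 = 1, u1^3 = 8 from the start point (2, 1) converges to (1, 2) in a handful of iterations, with the line search damping the early steps where the cubic residual would otherwise grow.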

    A novel approach to measure brain-to-brain spatial and temporal alignment during positive empathy

    Empathy is defined as the ability to vicariously experience others' suffering (vicarious pain) or joy (vicarious reward). While most neuroimaging studies have focused on vicarious pain and report similar neural responses during observed and personally experienced negative affect, only initial evidence has been reported for the neural responses to others' rewards, i.e., positive empathy. Here we propose a novel approach, based on the simultaneous recording of EEG signals from multiple subjects, that exploits the wavelet coherence decomposition to measure the temporal alignment between ERPs in a dyad of interacting subjects. We used the Third-Party Punishment (TPP) paradigm to elicit personal and vicarious experiences. During a positive experience, we observed the simultaneous presence in both agents of the Late Positive Potential (LPP), an ERP component related to emotion processing, as well as an inter-subject ERP synchronization in the related time window. Moreover, the amplitude of the LPP synchronization was modulated by the presence of a human agent. Finally, the localized brain circuits underlying the ERP synchronization correspond to key regions of personal and vicarious reward. Our findings suggest that the temporal and spatial alignment of ERPs might be a novel and direct proxy measure of empathy.

    Multiple-Brain connectivity during third party punishment: an EEG hyperscanning study

    Compassion is a particular form of empathic reaction to harm that befalls others, accompanied by a desire to alleviate their suffering. This altruistic disposition is often manifested through altruistic punishment, wherein individuals penalize a wrongdoer's actions even when the harm is directed at strangers. By adopting a dual approach, we provide empirical evidence that compassion is a multifaceted prosocial behavior and can predict altruistic punishment. In particular, in this multiple-brain connectivity study in an EEG hyperscanning setting, compassion was examined during real-time social interactions in a third-party punishment (TPP) experiment. We observed that specific connectivity patterns were linked to behavioral and psychological intra- and interpersonal factors. Thus, our results suggest that an ecological approach based on simultaneous dual-scanning and multiple-brain connectivity is suitable for analyzing complex social phenomena.

    Extreme mass ratio inspiral rates: dependence on the massive black hole mass

    We study the rate at which stars spiral into a massive black hole (MBH) due to the emission of gravitational waves (GWs), as a function of the mass M of the MBH. In the context of our model, it is shown analytically that the rate depends on the MBH mass approximately as M^{-1/4}. Numerical simulations confirm this result and show that, for all MBH masses, the event rate is highest for stellar black holes, followed by white dwarfs, and lowest for neutron stars. The Laser Interferometer Space Antenna (LISA) is expected to see hundreds of these extreme mass ratio inspirals per year. Since the event rate derived here formally diverges as M->0, the model presented here cannot hold for MBHs of too low a mass, and we discuss the limitations of the model.
    Comment: Accepted to CQG, special LISA issue
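The quoted M^{-1/4} dependence is worth a numeric gloss, because it is shallow enough that the MBH mass hardly matters for the rate. The masses in the example below are chosen for illustration, not taken from the paper:

```latex
\Gamma(M) \propto M^{-1/4}
\quad\Longrightarrow\quad
\frac{\Gamma(10^{4}\,M_\odot)}{\Gamma(10^{7}\,M_\odot)}
  = \left(10^{3}\right)^{1/4} \approx 5.6 ,
```

so three orders of magnitude in MBH mass change the event rate by only a factor of a few.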

    Electrostatically Confined Monolayer Graphene Quantum Dots with Orbital and Valley Splittings

    The electrostatic confinement of massless charge carriers is hampered by Klein tunneling. Circumventing this problem in graphene mainly relies on carving out nanostructures or applying electric displacement fields to open a band gap in bilayer graphene. So far, these approaches suffer from edge disorder or insufficiently controlled localization of electrons. Here we realize an alternative strategy in monolayer graphene, by combining a homogeneous magnetic field and electrostatic confinement. Using the tip of a scanning tunneling microscope, we induce a confining potential in the Landau gaps of bulk graphene without the need for physical edges. Gating the localized states toward the Fermi energy leads to regular charging sequences with more than 40 Coulomb peaks exhibiting typical addition energies of 7-20 meV. Orbital splittings of 4-10 meV and a valley splitting of about 3 meV for the first orbital state can be deduced. These experimental observations are quantitatively reproduced by tight-binding calculations, which include the interactions of the graphene with the aligned hexagonal boron nitride substrate. The demonstrated confinement approach appears suitable to create quantum dots with well-defined wave function properties beyond the reach of traditional techniques.