
    Transits and Lensing by Compact Objects in the Kepler Field: Disrupted Stars Orbiting Blue Stragglers

    Kepler's first major discoveries are two hot objects orbiting stars in its field. These may be the cores of stars that have each been eroded or disrupted by a companion star. The companion, which is the star monitored today, is likely to have gained mass from its now-defunct partner, and can be considered to be a blue straggler. KOI-81 is almost certainly the product of stable mass transfer; KOI-74 may be as well, or it may be the first clear example of a blue straggler created through three-body interactions. We show that mass transfer binaries are common enough that Kepler should discover ~1000 white dwarfs orbiting main sequence stars. Most, like KOI-74 and KOI-81, will be discovered through transits, but many will be discovered through a combination of gravitational lensing and transits, while lensing will dominate for a subset. In fact, some events caused by white dwarfs will have the appearance of "anti-transits", i.e., short-lived enhancements in the amount of light received from the monitored star. Lensing and other mass measurement methods provide a way to distinguish white dwarf binaries from planetary systems. This is important for the success of Kepler's primary mission, in light of the fact that white dwarf radii are similar to the radii of terrestrial planets, and that some white dwarfs will have orbital periods that place them in the habitable zones of their stellar companions. By identifying transiting and/or lensing white dwarfs, Kepler will conduct pioneering studies of white dwarfs and of the end states of mass transfer. It may also identify orbiting neutron stars or black holes. The calculations inspired by the discovery of KOI-74 and KOI-81 have implications for ground-based wide-field surveys as well as for future space-based surveys.
    Comment: 29 pages, 6 figures, 1 table; submitted to The Astrophysical Journal
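
    A rough way to see when lensing beats occultation: the white dwarf blocks light in proportion to (R_WD/R_*)^2 but, in the small-lens limit, magnifies its companion by roughly 2(R_E/R_*)^2, where R_E is the Einstein radius evaluated at the orbital separation. The Python sketch below uses this standard approximation with illustrative numbers that are not taken from the paper; a negative depth means the event is an "anti-transit" (a brightening).

        # Net flux change when a small white dwarf crosses its companion's disk.
        # Small-lens approximation: lensing adds ~2(R_E/R_*)^2 to the flux,
        # occultation removes (R_WD/R_*)^2. All numbers are illustrative.
        import math

        G, c = 6.674e-11, 2.998e8                     # SI units
        M_sun, R_sun, AU = 1.989e30, 6.957e8, 1.496e11

        def net_depth(m_wd=0.6, r_wd=0.012, r_star=1.0, a=1.0):
            """m_wd in M_sun, r_wd and r_star in R_sun, separation a in AU."""
            r_e = math.sqrt(4 * G * m_wd * M_sun * a * AU) / c  # Einstein radius [m]
            return ((r_wd * R_sun)**2 - 2 * r_e**2) / (r_star * R_sun)**2

        # Negative result => the "transit" appears as a short-lived brightening.
        print(net_depth())

    For a 0.6 M_sun white dwarf 1 AU from a Sun-like star, the lensing term wins and the event is a brightening of a few parts in a thousand; at short orbital periods R_E shrinks and the occultation dip dominates instead.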

    Validation & Verification of an EDA automated synthesis tool

    Reliability and correctness are two mandatory features for automated synthesis tools. To reach these goals, several campaigns of Validation and Verification (V&V) are needed. This paper presents the extensive efforts set up to prove the correctness of a newly developed EDA automated synthesis tool. The target tool, MarciaTesta, is a multi-platform automatic generator of test programs for microprocessors' caches. Taking as input the selected March Test and some architectural details about the target cache memory, the tool automatically generates the assembly-level program to be run as Software-Based Self-Testing (SBST). The equivalence between the original March Test, the automatically generated assembly program, and the intermediate C/C++ program has been proved by resorting to sophisticated logging mechanisms. A set of proven libraries has been generated and extensively used during the tool development. A detailed analysis of the lessons learned is reported.
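
    The flow the abstract describes — a March Test plus cache parameters in, a deterministic test program out — can be sketched in miniature. Below is a minimal Python illustration using standard March notation (MATS+ as the example); the data layout and function names are hypothetical, not MarciaTesta's actual interface. The flat operation trace it produces is the kind of artifact a logging-based equivalence check can compare across the March description, the intermediate C/C++ program, and the assembly.

        # A March test as data, expanded into the exact sequence of memory
        # operations. Element syntax follows standard March notation.
        MATS_PLUS = [
            ("up",   ["w0"]),        # (w0): write 0 to every cell (either order)
            ("up",   ["r0", "w1"]),  # ascending: read 0, write 1
            ("down", ["r1", "w0"]),  # descending: read 1, write 0
        ]

        def expand(march, n_cells):
            """Turn a March test into a flat (operation, cell) trace."""
            trace = []
            for direction, ops in march:
                cells = range(n_cells) if direction == "up" \
                        else range(n_cells - 1, -1, -1)
                for cell in cells:
                    for op in ops:
                        trace.append((op, cell))  # e.g. ("r0", 5): read cell 5, expect 0
            return trace

        print(expand(MATS_PLUS, n_cells=4)[:6])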

    Location-Verification and Network Planning via Machine Learning Approaches

    In-region location verification (IRLV) in wireless networks is the problem of deciding whether user equipment (UE) is transmitting from inside or outside a specific physical region (e.g., a safe room). The decision process exploits the features of the channel between the UE and a set of network access points (APs). We propose a solution based on machine learning (ML) implemented by a neural network (NN) trained with the channel features (in particular, noisy attenuation values) collected by the APs for various positions both inside and outside the specific region. The output is a decision on the UE position (inside or outside the region). By seeing IRLV as a hypothesis testing problem, we address the optimal positioning of the APs for minimizing either the area under the curve (AUC) of the receiver operating characteristic (ROC) or the cross entropy (CE) between the NN output and the ground truth (available during training). In order to solve the minimization problem we propose a two-stage particle swarm optimization (PSO) algorithm. We show that for sufficiently long training and an NN with enough neurons the proposed solution achieves the performance of the Neyman-Pearson (N-P) lemma.
    Comment: Accepted for Workshop on Machine Learning for Communications, June 07 2019, Avignon, France
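
    The classification step is conceptually simple: each training sample is a vector of attenuations (one per AP) labeled inside/outside. A minimal sketch in Python with scikit-learn, on synthetic data; the path-loss means, noise levels, and network sizes here are illustrative assumptions, and the paper's AP-placement optimization via two-stage PSO is not shown.

        # An NN deciding inside/outside from noisy attenuation features.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n_aps, n_samples = 4, 2000

        # Hypothetical setup: positions inside the region see lower path loss
        # to the APs; Gaussian noise stands in for shadowing/measurement error.
        inside = rng.normal(loc=60.0, scale=5.0, size=(n_samples // 2, n_aps))
        outside = rng.normal(loc=75.0, scale=5.0, size=(n_samples // 2, n_aps))
        X = np.vstack([inside, outside])                       # attenuations [dB]
        y = np.hstack([np.ones(n_samples // 2), np.zeros(n_samples // 2)])

        clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                            random_state=0)
        clf.fit(X, y)  # trained with log-loss, i.e. the CE criterion above

        scores = clf.predict_proba(X)[:, 1]
        print("training ROC AUC:", roc_auc_score(y, scores))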

    Modelling sustainable human development in a capability perspective

    In this paper we model sustainable human development, as intended in Sen's capability approach, in a system dynamics framework. Our purpose is to verify the variations over time of some achieved functionings, due to structural dynamics and to variations of the institutional setting and instrumental freedoms (IF Vortex). The model is composed of two sections. The 'Left Side' captures the 'demand' for functionings in an ideal-world situation; the 'Right Side', representing the real world, indicates the 'supply' of functionings that the socio-economic system is able to provide individuals with. The general model, specifically tailored for Italy, can be simulated over desired time horizons: for each time period, we carry out a comparison between ideal-world and real-world functionings. On the basis of their distances, the model simulates some responses of decision makers. These responses, in turn influenced by institutions and instrumental freedoms, ultimately affect the dynamics of real-world functionings, i.e. of sustainable human development.
    Keywords: Functionings, Capabilities, Institutions, Instrumental Freedoms, Sustainable Human Development
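
    The gap-driven feedback loop the abstract describes can be caricatured in a few lines of discrete-time Python. The functional forms and parameter values below are illustrative assumptions, not the paper's actual model equations: a "friction" term stands in for how institutions and instrumental freedoms dampen the translation of policy responses into achieved functionings.

        # Discrete-time caricature of the demand/supply gap feedback.
        def simulate(ideal=1.0, real0=0.4, response=0.3, friction=0.5, steps=10):
            """friction in [0, 1]: institutional damping of responses."""
            real, path = real0, [real0]
            for _ in range(steps):
                gap = ideal - real                       # ideal vs real functionings
                real += response * (1 - friction) * gap  # decision makers' response
                path.append(real)
            return path

        print(simulate())  # real-world functionings converging toward the ideal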

    An area-efficient 2-D convolution implementation on FPGA for space applications

    The 2-D convolution is an algorithm widely used in image and video processing. Although its computation is simple, its implementation requires high computational power and intensive use of memory. Field Programmable Gate Array (FPGA) architectures have been proposed to accelerate the calculation of the 2-D convolution, with buffers implemented on the FPGA used to avoid direct memory access. In this paper we present an implementation of the 2-D convolution algorithm on an FPGA architecture designed to support this operation in space applications. The proposed solution dramatically decreases the area needed while keeping good performance, making it appropriate for embedded systems in critical space applications.
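
    For reference, this is the computation being accelerated, written out in plain Python/NumPy (no padding, stride 1). A hardware line-buffer design streams the image row by row and keeps only kernel-height rows on chip, but it produces the same output as this direct form; the Sobel kernel is just an example, not the paper's workload.

        # Direct 2-D convolution (correlation form), the reference computation.
        import numpy as np

        def conv2d(image, kernel):
            kh, kw = kernel.shape
            oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
            out = np.zeros((oh, ow))
            for i in range(oh):              # one output row per streamed input row
                for j in range(ow):
                    window = image[i:i + kh, j:j + kw]
                    out[i, j] = np.sum(window * kernel)
            return out

        image = np.arange(36, dtype=float).reshape(6, 6)
        sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        print(conv2d(image, sobel_x))        # 4x4 output for a 6x6 input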

    Topological Constraints in Eukaryotic Genomes and How They Can Be Exploited to Improve Spatial Models of Chromosomes

    Several orders of magnitude typically separate the contour length of eukaryotic chromosomes and the size of the nucleus where they are confined. The ensuing topological constraints can slow down the relaxation dynamics of genomic filaments to the point that mammalian chromosomes are never in equilibrium over a cell's lifetime. In this opinion article, we revisit these out-of-equilibrium effects and discuss how their inclusion in physical models can enhance the spatial reconstructions of interphase eukaryotic genomes from phenomenological constraints collected during interphase.
    Comment: 5 pages, 1 figure, opinion article, submitted for publication

    Comparison between three glass fiber post cementation techniques

    The aim of this experimental study was to compare traditional cement systems with those of the latest generation, to assess whether the latter could represent viable substitutes in the cementation of indirect restorations, and in the specific case of endodontic posts.

    On the evolution of elastic properties during laboratory stick-slip experiments spanning the transition from slow slip to dynamic rupture

    The physical mechanisms governing slow earthquakes remain unknown, as does the relationship between slow and regular earthquakes. To investigate the mechanism(s) of slow earthquakes and related quasi-dynamic modes of fault slip, we performed laboratory experiments on simulated fault gouge in the double direct shear configuration. We reproduced the full spectrum of slip behavior, from slow to fast stick-slip, by altering the elastic stiffness of the loading apparatus (k) to match the critical rheologic stiffness of the fault gouge (kc). Our experiments show an evolution from stable sliding, when k > kc, to quasi-dynamic transients when k ~ kc, to dynamic instabilities when k < kc. To evaluate the microphysical processes of fault weakening we monitored variations of elastic properties. We find systematic changes in P wave velocity (Vp) over laboratory seismic cycles. During the coseismic stress drop, seismic velocity drops abruptly, consistent with observations on natural faults. In the preparatory phase preceding failure, we find that accelerated fault creep causes a Vp reduction for the complete spectrum of slip behaviors. Our results suggest that the mechanics of slow and fast ruptures share key features and that both can occur on the same faults, depending on frictional properties. In agreement with seismic surveys on tectonic faults, our data show that the state of stress on a fault can be monitored by Vp changes during the seismic cycle. The observed reduction in Vp during the earthquake preparatory phase suggests that, if similar mechanisms are confirmed in nature, high-resolution monitoring of fault zone properties may be a promising avenue for reliable detection of earthquake precursors.
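
    The k-versus-kc criterion comes from standard rate-and-state friction for a spring-slider, where the critical stiffness is kc = (b - a) * sigma_n / Dc. The Python sketch below classifies the slip regime under that textbook formula; the parameter values are illustrative, not the gouge properties measured in this study.

        # Spring-slider stability criterion from rate-and-state friction.
        def slip_regime(k, sigma_n, a, b, d_c, tol=0.05):
            """k in MPa/m, sigma_n in MPa, d_c in m, a and b dimensionless."""
            if b <= a:
                return "velocity strengthening: stable creep"
            kc = (b - a) * sigma_n / d_c          # critical stiffness [MPa/m]
            if abs(k - kc) <= tol * kc:
                return f"quasi-dynamic transients (k ~ kc = {kc:.3g} MPa/m): slow slip"
            if k > kc:
                return f"stable sliding (k > kc = {kc:.3g} MPa/m)"
            return f"dynamic stick-slip (k < kc = {kc:.3g} MPa/m)"

        # Illustrative numbers: 25 MPa normal stress, (b - a) = 0.003, Dc = 10 um
        print(slip_regime(k=7.0e3, sigma_n=25.0, a=0.006, b=0.009, d_c=1e-5))

    With these numbers kc = 7.5e3 MPa/m, so a loading stiffness of 7.0e3 MPa/m falls just on the unstable side, mirroring how the experiments tune k relative to kc to move across the slip spectrum.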