
    Formation of hot subdwarf B stars with neutron star components

    Binary population synthesis predicts the existence of subdwarf B stars (sdBs) with neutron star (NS) or black hole (BH) companions. We systematically investigate the formation of sdB+NS binaries through binary evolution and aim to obtain clues for a search for such systems. Starting from a series of MS+NS systems, we determined the parameter spaces for producing sdB+NS binaries through the stable Roche-lobe overflow (RLOF) channel and through the common envelope (CE) ejection channel. Various NS accretion efficiencies and NS masses were examined to investigate their effects. We present the characteristics of the produced sdB+NS systems, such as the component masses, orbital period, semi-amplitude of the radial velocity (K), and spin of the NS component. The orbital period of sdB+NS binaries produced through the stable RLOF channel ranges from several days to more than 1000 days and moves toward the short-period (~hr) side with increasing initial MS mass. The sdB+NS systems that result from CE ejection have very short orbital periods and hence high values of K (up to 800 km s^-1). Such systems are born in very young populations (younger than 0.3 Gyr) and are potential gravitational wave sources that might be resolved by the Laser Interferometer Space Antenna (LISA) in the future. Gravitational wave radiation may bring them into contact again on a timescale of only ~Myr; as a consequence, they are rare and hard to discover. A pulsar signal is likely a feature of sdB+NS systems formed through stable RLOF, and some NS components in sdB binaries may be millisecond pulsars. Comment: 12 pages, 6 figures, 4 tables. Accepted for publication in A&A.
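
    As context for the quoted values of K, a minimal sketch of the standard Keplerian relation behind the radial-velocity semi-amplitude of the sdB component (general background, not a formula taken from the paper; it assumes a circular orbit with period P, orbital inclination i, and component masses M_sdB and M_NS):

        K_{\rm sdB} = \left(\frac{2\pi G}{P}\right)^{1/3} \frac{M_{\rm NS}\,\sin i}{\left(M_{\rm sdB}+M_{\rm NS}\right)^{2/3}},

    which makes explicit why the very short post-CE orbital periods push K toward the quoted ~800 km s^-1.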

    CFD-FEM simulation of water entry of aluminium flat stiffened plate structure considering the effects of hydroelasticity

    In this paper, the slamming loads and structural response of an aluminium flat stiffened-plate structure during calm-water entry are studied with a partitioned CFD-FEM two-way coupled method that accounts for hydroelasticity effects. The target structure is simplified as one segment of an idealized ship grillage structure, comprising a flat plate and stiffeners. Typical numerical results, such as the vertical displacement, velocity, acceleration, impact loads, and structural stress of the flexible flat-bottom grillage structure, are analyzed with the hydroelasticity and air-cushion effects taken into account under different free-fall height conditions. Drop-test results for the same structure and existing numerical data from both coupled and uncoupled solutions in the literature are used for comparison with the present simulations. This study provides a practical means of simulating the slamming behaviour and structural response of ship structures, which is useful for predicting loads on stiffened hull panels and for related structural design.
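
    To illustrate the partitioned two-way coupling idea in general terms, here is a minimal, purely schematic sketch (not the solvers or settings used in the paper): the fluid and structural "solvers" are stand-in linear responses so the fixed-point iteration with constant under-relaxation can run end to end, and all names and numbers are hypothetical.

        # Minimal sketch of a partitioned CFD-FEM two-way coupling step (illustrative only).
        # The real fluid/structure solvers are replaced by simple linear stand-ins so the
        # fixed-point iteration with constant under-relaxation can be demonstrated.
        import numpy as np

        N_NODES = 50          # interface nodes on the wetted plate (hypothetical)
        OMEGA = 0.5           # constant under-relaxation factor
        TOL, MAX_IT = 1e-8, 100

        def fluid_solver(displacement):
            """Stand-in CFD step: returns interface pressure given plate deflection."""
            base_pressure = 1.0e4 * np.exp(-np.linspace(0.0, 3.0, N_NODES))
            return base_pressure - 2.0e5 * displacement   # deflection relieves pressure

        def structure_solver(pressure):
            """Stand-in FEM step: returns plate deflection given interface pressure."""
            compliance = 1.0e-7
            return compliance * pressure

        # Gauss-Seidel fixed-point iteration within one time step
        disp = np.zeros(N_NODES)
        for it in range(MAX_IT):
            pressure = fluid_solver(disp)              # CFD with current deflection
            disp_new = structure_solver(pressure)      # FEM with updated loads
            residual = np.linalg.norm(disp_new - disp)
            disp = disp + OMEGA * (disp_new - disp)    # under-relaxed update
            if residual < TOL:
                break

        print(f"converged in {it + 1} iterations, max deflection = {disp.max():.3e} m")

    The inner iteration until the interface residual converges is what distinguishes a two-way (hydroelastic) coupling from a one-way load transfer.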

    H \rightarrow e^+ e^- at CEPC: ISR effect with MadGraph

    The Circular Electron Positron Collider (CEPC) is a future Higgs factory proposed by the Chinese high energy physics community. It will operate at a center-of-mass energy of 240-250 GeV and accumulate an integrated luminosity of 5 ab^{-1} over ten years of operation. With GEANT4-based full simulation samples for the CEPC, the Higgs boson decaying into an electron pair is studied. The upper limit on {\cal B}(H \rightarrow e^+ e^-) could reach 0.024\% at 95\% confidence level. The signal process is generated with MadGraph, with Initial State Radiation (ISR) implemented, as a first step toward adapting MadGraph for an electron-positron collider. Comment: Accepted version by J.P.
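
    As a rough illustration of how a branching-fraction upper limit scales in a counting analysis of this kind (the event numbers below are placeholders, not the paper's values, and the Gaussian approximation is far cruder than the full analysis):

        # Illustrative 95% CL upper limit on B(H -> e+ e-) from a simple counting estimate.
        # All inputs are hypothetical placeholders; the paper's full analysis is more involved.
        from math import sqrt

        n_higgs = 1.0e6    # assumed number of Higgs bosons produced (placeholder)
        eff_sig = 0.40     # assumed signal selection efficiency (placeholder)
        n_bkg   = 250.0    # assumed expected background after selection (placeholder)

        # Median expected upper limit on the signal yield in the Gaussian approximation
        s_95 = 1.645 * sqrt(n_bkg)

        # Translate the signal-yield limit into a branching-fraction limit
        br_95 = s_95 / (eff_sig * n_higgs)
        print(f"expected 95% CL upper limit: B < {br_95:.2e}")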

    Dense Pixel-to-Pixel Harmonization via Continuous Image Representation

    High-resolution (HR) image harmonization is of great significance in real-world applications such as image synthesis and image editing. However, due to high memory costs, existing dense pixel-to-pixel harmonization methods mainly focus on processing low-resolution (LR) images. Some recent works resort to combining them with color-to-color transformations but are either limited to certain resolutions or depend heavily on hand-crafted image filters. In this work, we explore leveraging the implicit neural representation (INR) and propose a novel image Harmonization method based on Implicit neural Networks (HINet), which, to the best of our knowledge, is the first dense pixel-to-pixel method applicable to HR images without any hand-crafted filter design. Inspired by the Retinex theory, we decouple the MLPs into two parts to capture the content and the environment of composite images, respectively. A Low-Resolution Image Prior (LRIP) network is designed to alleviate the Boundary Inconsistency problem, and we also propose new designs for the training and inference processes. Extensive experiments demonstrate the effectiveness of our method compared with state-of-the-art methods. Furthermore, some interesting and practical applications of the proposed method are explored. Our code is available at https://github.com/WindVChen/INR-Harmonization. Comment: Accepted by IEEE Transactions on Circuits and Systems for Video Technology (TCSVT).
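
    A minimal sketch of the general idea as we read it, coordinate-based MLPs split into a content branch and an environment branch and combined Retinex-style by multiplication; the layer sizes, feature source, and exact combination rule are assumptions, not the authors' architecture.

        # Minimal sketch of a Retinex-inspired coordinate MLP for harmonization.
        # Two small MLPs take continuous (x, y) coordinates plus an image feature vector
        # and predict a "content" (reflectance-like) and an "environment"
        # (illumination-like) component, combined element-wise. Sizes are illustrative.
        import torch
        import torch.nn as nn

        def make_mlp(in_dim, out_dim, hidden=256, layers=3):
            mods, d = [], in_dim
            for _ in range(layers):
                mods += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
                d = hidden
            mods.append(nn.Linear(d, out_dim))
            return nn.Sequential(*mods)

        class RetinexINR(nn.Module):
            def __init__(self, feat_dim=64):
                super().__init__()
                self.content_mlp = make_mlp(2 + feat_dim, 3)      # reflectance-like
                self.environment_mlp = make_mlp(2 + feat_dim, 3)  # illumination-like

            def forward(self, coords, feats):
                # coords: (N, 2) continuous coordinates in [-1, 1]; feats: (N, feat_dim)
                x = torch.cat([coords, feats], dim=-1)
                content = torch.sigmoid(self.content_mlp(x))
                environment = torch.sigmoid(self.environment_mlp(x))
                return content * environment                       # Retinex-style composition

        coords = torch.rand(1024, 2) * 2 - 1   # query at any resolution by densifying coords
        feats = torch.randn(1024, 64)          # per-pixel features from an encoder (assumed)
        rgb = RetinexINR()(coords, feats)      # (1024, 3) harmonized colors

    Because the network is queried per coordinate, the output resolution is decoupled from the training resolution, which is what makes an INR attractive for HR harmonization.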

    Continuous Cross-resolution Remote Sensing Image Change Detection

    Most contemporary supervised Remote Sensing (RS) image Change Detection (CD) approaches are customized for equal-resolution bitemporal images. Real-world applications raise the need for cross-resolution change detection, i.e., CD based on bitemporal images with different spatial resolutions. Given training samples with a fixed bitemporal resolution difference (ratio) between the high-resolution (HR) image and the low-resolution (LR) one, current cross-resolution methods may fit that ratio but lack adaptation to other resolution differences. Toward continuous cross-resolution CD, we propose scale-invariant learning to enforce the model to consistently predict HR results given synthesized samples of varying resolution differences. Concretely, we synthesize blurred versions of the HR image by random downsampled reconstructions to reduce the gap between HR and LR images, as sketched below. We introduce coordinate-based representations to decode per-pixel predictions by feeding the coordinate query and the corresponding multi-level embedding features into an MLP that implicitly learns the shape of land cover changes, thereby benefiting the recognition of blurred objects in the LR image. Moreover, considering that spatial resolution mainly affects local textures, we apply local-window self-attention to align bitemporal features during the early stages of the encoder. Extensive experiments on two synthesized and one real-world different-resolution CD datasets verify the effectiveness of the proposed method. Our method significantly outperforms several vanilla CD methods and two cross-resolution CD methods on the three datasets in both in-distribution and out-of-distribution settings. The empirical results suggest that our method could yield relatively consistent HR change predictions regardless of the bitemporal resolution ratio. Our code is available at \url{https://github.com/justchenhao/SILI_CD}. Comment: 21 pages, 11 figures. Accepted by IEEE TGRS.
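
    A minimal sketch of the "random downsampled reconstruction" augmentation described above: downsample an HR image by a random ratio and resize it back, so the model sees varying effective resolutions during training. The ratio range and interpolation modes here are assumptions, not the paper's exact settings.

        # Synthesize a blurred, LR-like version of an HR image at the original size.
        import random
        import torch
        import torch.nn.functional as F

        def random_downsampled_reconstruction(hr, max_ratio=8.0):
            """hr: (B, C, H, W) tensor. Returns a blurred HR-sized version of hr."""
            ratio = random.uniform(1.0, max_ratio)            # random resolution difference
            h, w = hr.shape[-2:]
            lr = F.interpolate(hr, size=(max(1, int(h / ratio)), max(1, int(w / ratio))),
                               mode="area")                   # simulate a coarser capture
            return F.interpolate(lr, size=(h, w), mode="bilinear", align_corners=False)

        hr_t2 = torch.rand(2, 3, 256, 256)                    # bitemporal image at time 2 (HR)
        blurred_t2 = random_downsampled_reconstruction(hr_t2) # paired with the HR change label

    Training against the HR label on such randomly blurred inputs is what pushes the model toward scale-invariant, resolution-ratio-agnostic predictions.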

    Implicit Ray-Transformers for Multi-view Remote Sensing Image Segmentation

    The mainstream CNN-based remote sensing (RS) image semantic segmentation approaches typically rely on massive labeled training data. Such a paradigm struggles with RS multi-view scene segmentation from limited labeled views because it does not consider the 3D information within the scene. In this paper, we propose the Implicit Ray-Transformer (IRT), based on Implicit Neural Representation (INR), for RS scene semantic segmentation with sparse labels (such as 4-6 labels per 100 images). We explore a new way of introducing multi-view 3D structure priors to the task for accurate and view-consistent semantic segmentation. The proposed method includes a two-stage learning process. In the first stage, we optimize a neural field to encode the color and 3D structure of the remote sensing scene based on multi-view images. In the second stage, we design a Ray Transformer to leverage the relations between the 3D neural-field features and 2D texture features for learning better semantic representations. Different from previous methods that consider only the 3D prior or only 2D features, we incorporate additional 2D texture information and the 3D prior by broadcasting CNN features to the different point features along each sampled ray. To verify the effectiveness of the proposed method, we construct a challenging dataset containing six synthetic sub-datasets collected from the Carla platform and three real sub-datasets from Google Maps. Experiments show that the proposed method outperforms CNN-based methods and state-of-the-art INR-based segmentation methods in both quantitative and qualitative metrics.
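
    A minimal sketch of the ray-level fusion described above, as we read it: per-point neural-field features along a sampled ray are concatenated with the 2D CNN feature of the pixel emitting that ray (broadcast to every sample) and processed by a small transformer. The dimensions, layer counts, and final pooling are illustrative assumptions, not the authors' exact design.

        # Illustrative fusion of 3D neural-field features with a broadcast 2D CNN feature
        # along a sampled ray, followed by a small transformer over the ray samples.
        import torch
        import torch.nn as nn

        N_SAMPLES, FIELD_DIM, CNN_DIM, D_MODEL, N_CLASSES = 64, 32, 64, 128, 6

        class RayFusion(nn.Module):
            def __init__(self):
                super().__init__()
                self.proj = nn.Linear(FIELD_DIM + CNN_DIM, D_MODEL)
                layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                                   batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)
                self.head = nn.Linear(D_MODEL, N_CLASSES)

            def forward(self, field_feats, cnn_feat):
                # field_feats: (B, N_SAMPLES, FIELD_DIM) per-point features along the ray
                # cnn_feat:    (B, CNN_DIM) 2D texture feature of the ray's pixel
                cnn_broadcast = cnn_feat.unsqueeze(1).expand(-1, field_feats.size(1), -1)
                tokens = self.proj(torch.cat([field_feats, cnn_broadcast], dim=-1))
                tokens = self.encoder(tokens)              # attend across ray samples
                return self.head(tokens.mean(dim=1))       # per-ray (per-pixel) class logits

        logits = RayFusion()(torch.randn(8, N_SAMPLES, FIELD_DIM), torch.randn(8, CNN_DIM))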