
    Forecasting the BAO Measurements of the CSST galaxy and AGN Spectroscopic Surveys

    The spectroscopic survey of the China Space Station Telescope (CSST) is expected to obtain a huge number of slitless spectra, including more than one hundred million galaxy spectra and millions of active galactic nucleus (AGN) spectra. With these spectra, the Baryon Acoustic Oscillation (BAO) signal can be measured over a large redshift range with excellent precision. In this work, we predict the CSST measurements of the post-reconstruction galaxy power spectra at 0 < z < 1.2 and the pre-reconstruction AGN power spectra at 0 < z < 4, and derive the BAO signals in different redshift bins by constraining the BAO scaling parameters with the Markov Chain Monte Carlo (MCMC) method. Our results show that the CSST spectroscopic survey can provide accurate BAO measurements, with precisions better than 1% and 3% for the galaxy and AGN surveys, respectively. Compared with current measurements over the same redshift range, this improves the precision by a factor of 2 to 3, and similar precision can be obtained in the pessimistic case. We also investigate the constraints on cosmological parameters using the BAO data measured by CSST, and obtain stringent constraints on the dark matter energy density, the Hubble constant, and the dark energy equation of state. Comment: 15 pages, 9 figures, 4 tables
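
    A minimal Python sketch of the fitting step described above: constraining a single BAO scaling parameter alpha with a Metropolis-Hastings sampler. The power-spectrum template, mock data, noise level, and proposal step below are illustrative placeholders, not CSST products.

        import numpy as np

        # Toy BAO template: smooth spectrum times wiggles; alpha rescales k.
        rng = np.random.default_rng(0)
        k = np.linspace(0.02, 0.3, 50)                     # wavenumbers in h/Mpc
        template = lambda kk: 1 + 0.05 * np.sin(150 * kk)  # placeholder wiggle model
        sigma = 0.01                                       # assumed measurement error
        data = template(1.0 * k) + rng.normal(0, sigma, k.size)  # mock data, alpha_true = 1

        def loglike(alpha):
            resid = data - template(alpha * k)
            return -0.5 * np.sum((resid / sigma) ** 2)

        # Metropolis-Hastings: propose, accept with probability min(1, L'/L).
        alpha, chain = 1.0, []
        for _ in range(20000):
            prop = alpha + rng.normal(0, 0.002)
            if np.log(rng.uniform()) < loglike(prop) - loglike(alpha):
                alpha = prop
            chain.append(alpha)
        chain = np.array(chain[5000:])                     # discard burn-in
        print(f"alpha = {chain.mean():.4f} +/- {chain.std():.4f}")

    In the paper's analysis the same idea is applied per redshift bin, using the measured power spectrum and its covariance rather than this toy model.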

    Cosmological Constraint Precision of the Photometric and Spectroscopic Multi-probe Surveys of China Space Station Telescope (CSST)

    As one of the Stage IV space-based telescopes, the China Space Station Telescope (CSST) can perform photometric and spectroscopic surveys simultaneously to explore the Universe efficiently and with extreme precision. In this work, we investigate several powerful CSST cosmological probes, including cosmic shear, galaxy-galaxy lensing, photometric and spectroscopic galaxy clustering, and number counts of galaxy clusters, and study the capability of these probes by forecasting the results of joint constraints on the cosmological parameters. Referring to real observational results, we generate mock data and estimate the measurement errors based on the CSST observational and instrumental designs. To study the systematic effects on the results, we also consider a number of systematics in the CSST photometric and spectroscopic surveys, such as intrinsic alignment, shear calibration uncertainties, photometric redshift uncertainties, galaxy bias, non-linear effects, and instrumental effects. The Fisher matrix method is used to derive the constraints from individual or joint surveys on the cosmological and systematic parameters. We find that the joint constraints from all these CSST cosmological probes can improve the results from current observations by at least one order of magnitude, giving Ω_m and σ_8 at <1% accuracy, and w_0 and w_a at <5% and <20% accuracy, respectively. This indicates that the CSST photometric and spectroscopic multi-probe surveys could provide powerful tools to explore the Universe and greatly improve studies of the relevant cosmological problems. Comment: 17 pages, 12 figures, 3 tables. Accepted for publication in MNRAS
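
    Since the forecasts above come from the Fisher matrix method, a minimal Python sketch of the combination step may help: Fisher matrices from independent probes add, and marginalized 1-sigma errors follow from the inverse. The matrices below are toy numbers for (Omega_m, sigma_8), not CSST values.

        import numpy as np

        # Toy Fisher matrices for two independent probes over (Omega_m, sigma_8).
        F_shear = np.array([[4.0e4, 1.5e4],
                            [1.5e4, 2.0e4]])
        F_clustering = np.array([[3.0e4, -1.0e4],
                                 [-1.0e4, 5.0e4]])

        F_joint = F_shear + F_clustering   # independent probes add in Fisher space
        cov = np.linalg.inv(F_joint)       # parameter covariance matrix
        errors = np.sqrt(np.diag(cov))     # marginalized 1-sigma errors
        for name, err in zip(["Omega_m", "sigma_8"], errors):
            print(f"sigma({name}) = {err:.5f}")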

    More complex encoder is not all you need

    U-Net and its variants have been widely used in medical image segmentation. However, most current U-Net variants confine their improvement strategies to building a more complex encoder, while leaving the decoder unchanged or adopting a simple symmetric structure. These approaches overlook the true functionality of the decoder: receiving low-resolution feature maps from the encoder and restoring feature-map resolution and lost information through upsampling. The decoder, especially its upsampling component, therefore plays a crucial role in enhancing segmentation outcomes. However, in 3D medical image segmentation, the commonly used transposed convolution can produce visual artifacts, an issue that stems from the absence of a direct relationship between adjacent pixels in the output feature map. Furthermore, a plain encoder already possesses sufficient feature extraction capability, because downsampling gradually expands the receptive field, but the information lost during downsampling is non-negligible. To address this gap in the research, we extend our focus beyond the encoder and introduce neU-Net (i.e., not-complex-encoder U-Net), which incorporates a novel Sub-pixel Convolution for upsampling to construct a powerful decoder. Additionally, we introduce a multi-scale wavelet input module on the encoder side to provide additional information. Our model design achieves excellent results, surpassing other state-of-the-art methods on both the Synapse and ACDC datasets.
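
    As a companion to the abstract, here is a minimal PyTorch sketch of sub-pixel convolution for 3D upsampling: a convolution expands the channel dimension by scale^3, and the extra channels are rearranged into space, so every output voxel is directly predicted rather than interpolated. This is a generic illustration of the technique, not the authors' neU-Net code.

        import torch
        import torch.nn as nn

        class SubPixelUpsample3d(nn.Module):
            """Sub-pixel upsampling: conv to out_ch * scale**3 channels,
            then shuffle the channel factor into depth/height/width."""
            def __init__(self, in_ch, out_ch, scale=2):
                super().__init__()
                self.scale = scale
                self.conv = nn.Conv3d(in_ch, out_ch * scale**3, kernel_size=3, padding=1)

            def forward(self, x):
                x = self.conv(x)
                b, c, d, h, w = x.shape
                r, oc = self.scale, c // self.scale**3
                # (B, oc*r^3, D, H, W) -> (B, oc, D*r, H*r, W*r)
                x = x.view(b, oc, r, r, r, d, h, w)
                x = x.permute(0, 1, 5, 2, 6, 3, 7, 4).contiguous()
                return x.view(b, oc, d * r, h * r, w * r)

        feat = torch.randn(1, 64, 8, 16, 16)     # a low-resolution feature map
        up = SubPixelUpsample3d(64, 32, scale=2)
        print(up(feat).shape)                    # torch.Size([1, 32, 16, 32, 32])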

    New Middle Jurassic Kempynin Osmylid Lacewings from China

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques could be compared, improved, and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180,000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields, or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency, and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for downloading the annotation data and for blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research. Peer reviewed
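
    To make the baselines concrete, here is a minimal scikit-learn sketch of the TF-IDF baseline: documents become TF-IDF vectors and candidates are ranked by cosine similarity to the seed article. The toy titles are invented for illustration; this is not the consortium's evaluation pipeline.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        seed = "benchmarking document similarity for biomedical literature search"
        candidates = [
            "a gold-standard benchmark for literature recommendation",
            "crystal structure of a bacterial membrane enzyme",
            "ranking PubMed articles by textual similarity",
        ]

        # Vectorize the seed together with the candidates, then rank by cosine.
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform([seed] + candidates)
        scores = cosine_similarity(X[0], X[1:]).ravel()
        for score, doc in sorted(zip(scores, candidates), reverse=True):
            print(f"{score:.3f}  {doc}")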

    Strategies and Implementation Paths for Curriculum Setting of Digital Media Major in the Digital Age

    The professional courses of the digital media major face several problems: too many influencing factors, unclear implementation strategies, and difficulty in evaluating the implementation effect. To solve these problems, this paper explores strategies and implementation paths for the professional courses of the digital media major in the digital age. Firstly, the essential elements of curriculum setting are discussed and the symptoms of the relevant problems summarized; on this basis, the authors derive detailed strategies for the curriculum setting of the digital media major. To ensure the effectiveness and reasonableness of these professional courses, an evaluation system and an evaluation model are constructed for judging their implementation effect. The research provides strong support for the smooth implementation of the designed professional courses, and a good reference for theorists and engineers in the relevant fields.

    A game theory approach for self-coexistence analysis among IEEE 802.22 networks

    The cognitive radio (CR) based IEEE 802.22 standard for wireless regional area networks (WRANs) allows the use of TV bands when no interference is caused to incumbents (i.e., TV receivers and microphones). Compared to other existing networks, a WRAN has a larger coverage range and provides broadband access in rural and remote areas with performance comparable to DSL and cable modems, making it a promising network for future wireless communications. When multiple networks deployed by different wireless service providers overlap in the same area, they must compete for access to the available TV channels. When more than one WRAN accesses the same channel, interference occurs, which can leave given quality of service (QoS) requirements unsatisfied. An appropriate strategy for finding an available channel at minimum cost is therefore needed. In this paper, we formulate the problem as a noncooperative game and establish its Nash equilibrium. Both theoretical and experimental analyses are conducted, demonstrating that the proposed strategy can find an available channel at a smaller cost than other strategies.
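
    A minimal Python sketch of the channel-selection idea as a noncooperative game, solved here by best-response dynamics: each WRAN repeatedly switches to its cheapest channel until no network can reduce its cost, i.e., a Nash equilibrium. The cost function (number of co-channel competitors) is a placeholder, not the cost model of the paper.

        import random

        N_WRANS, N_CHANNELS = 4, 3
        choice = [random.randrange(N_CHANNELS) for _ in range(N_WRANS)]

        def cost(i, ch):
            # Placeholder cost: number of other WRANs sharing channel ch.
            return sum(1 for j in range(N_WRANS) if j != i and choice[j] == ch)

        changed = True
        while changed:                      # stops at a Nash equilibrium
            changed = False
            for i in range(N_WRANS):
                best = min(range(N_CHANNELS), key=lambda ch: cost(i, ch))
                if cost(i, best) < cost(i, choice[i]):
                    choice[i] = best        # best response: switch channel
                    changed = True

        print("Equilibrium channel assignment:", choice)

    Because this toy game is a congestion game, each best-response update strictly decreases a potential function, so the loop is guaranteed to terminate.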