152 research outputs found

    From Conditional Quantile Regression to Marginal Quantile Estimation with Applications to Missing Data and Causal Inference

    No full text
    It is well known that information on the conditional distribution of an outcome variable given covariates can be used to obtain an enhanced estimate of the marginal outcome distribution: one simply integrates the conditional outcome distribution over the marginal covariate distribution. However, to date, no analogous link has been established between marginal quantile estimation and conditional quantile regression. This article provides that link. We propose two novel approaches to marginal quantile and marginal mean estimation through conditional quantile regression when some of the outcomes are missing at random. The first approach does not require specifying a propensity score. The second is doubly robust to model misspecification: it is consistent if either the conditional quantile regression model or the missingness mechanism of the outcome is correctly specified. Consistency and asymptotic normality of both estimators are established, and the doubly robust estimator achieves the semiparametric efficiency bound. Extensive simulation studies demonstrate the utility of the proposed approaches, and an application to causal inference is introduced. For illustration, we apply the proposed methods to a job training program dataset.
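The core identity behind this family of estimators, F_Y(y) = E_X[F_{Y|X}(y | X)], can be illustrated numerically. The sketch below is purely illustrative, not the paper's estimator: it assumes a toy linear model with logistic errors so the conditional quantile function is known in closed form, integrates it over a covariate sample to estimate the marginal CDF, and inverts by bisection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): Y = 1 + 2X + eps, X ~ Uniform(0, 1),
# eps ~ standard logistic, so the conditional quantile function is
# known in closed form: Q_{Y|X}(u | x) = 1 + 2x + log(u / (1 - u)).
n = 2_000
x = rng.uniform(0.0, 1.0, n)
u = np.linspace(0.005, 0.995, 199)          # grid over quantile levels

def cond_quantile(u, x):
    return 1.0 + 2.0 * x + np.log(u / (1.0 - u))

# Marginal CDF by integrating the conditional law over the covariates:
# F_Y(y) = E_X[F_{Y|X}(y | X)] ~= (1/nm) sum_{i,u} 1{Q(u, x_i) <= y}
q_all = cond_quantile(u[:, None], x[None, :])   # (m, n) quantile grid

def marginal_cdf(y):
    return (q_all <= y).mean()

def marginal_quantile(tau, lo=-10.0, hi=15.0):
    # Invert the estimated (monotone) marginal CDF by bisection.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if marginal_cdf(mid) < tau else (lo, mid)
    return 0.5 * (lo + hi)

print(round(marginal_quantile(0.5), 2))  # close to 2: Y is symmetric about 2
```

The same integrate-then-invert idea underlies the missing-data estimators, with the fitted conditional quantile process replacing the closed-form one.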

    Public awareness, news promptness and the measles outbreak in Hong Kong from March to April, 2019

    No full text
    Background: Globally, the resurgence of measles during the last decade may be attributed to many factors. An unexpected measles outbreak occurred in Hong Kong, infecting 29 airport staff between March and April 2019. The authorities updated the public on new cases daily, a public enquiry telephone/online platform was set up on March 23, and an emergency vaccination programme was launched targeting unvaccinated airport staff. We aimed to study this measles outbreak and its related factors. Methods: We quantified the transmissibility of the outbreak by the time-varying effective reproduction number, Reff(t), and inferred the time-varying basic reproduction number, R0(t). We examined the statistical associations of R0(t) with local public awareness and with reporting delay. Results: Our estimated average R0 is 10.7 (95% CI: 6.0–29.2). We found that R0(t) was negatively associated with the level of public awareness and with the promptness of situation updates on new cases. Conclusion: Public awareness via situation updates helped to control the outbreak. The medical effect of the vaccination programme came too late to shut down the outbreak immediately, but it boosted herd immunity, protecting against airport outbreaks in the next few years.
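For context, a common way to compute a time-varying effective reproduction number from an incidence series is the Cori-style ratio of new cases to the serial-interval-weighted sum of past cases. The sketch below is a generic illustration with made-up numbers, not the paper's data or exact method; `incidence` and the serial-interval weights `w` are hypothetical.

```python
import numpy as np

# Hypothetical daily case counts and a discretised serial-interval
# distribution w (illustrative values only, not the paper's data).
incidence = np.array([1, 0, 2, 3, 5, 8, 12, 9, 7, 4, 3, 2, 1, 1, 0, 0], float)
w = np.array([0.0, 0.05, 0.1, 0.2, 0.3, 0.2, 0.1, 0.05])  # sums to 1

def r_eff(incidence, w):
    # Cori-style instantaneous reproduction number:
    # R(t) = I(t) / sum_{s >= 1} I(t - s) * w(s)
    R = np.full(len(incidence), np.nan)
    for t in range(1, len(incidence)):
        lam = sum(incidence[t - s] * w[s] for s in range(1, min(t + 1, len(w))))
        if lam > 0:
            R[t] = incidence[t] / lam
    return R

print(np.round(r_eff(incidence, w), 2))
```

In practice one would smooth over a window and quantify uncertainty (e.g. with a gamma prior on R), which this sketch omits.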

    Using a Monotonic Density Ratio Model to Find the Asymptotically Optimal Combination of Multiple Diagnostic Tests

    No full text
    With the advent of new technology, biomarker studies have become essential in cancer research. To achieve optimal sensitivity and specificity, one needs to combine different diagnostic tests. The celebrated Neyman–Pearson lemma enables us to use the density ratio to optimally combine different diagnostic tests. In this article, we propose a semiparametric model that directly models the density ratio between the diseased and nondiseased populations as an unspecified monotonic nondecreasing function of a linear or nonlinear combination of multiple diagnostic tests. This method is appealing in that it does not require separate models for the diseased and nondiseased populations. Further, the proposed method provides an asymptotically optimal way to combine multiple test results. We use the pool-adjacent-violators algorithm to find the semiparametric maximum likelihood estimate of the receiver operating characteristic (ROC) curve. Using modern empirical process theory, we show cube-root-n consistency for the ROC curve and the underlying Euclidean parameter estimation. Extensive simulations show that the proposed method outperforms its competitors. We apply the method to two real-data applications. Supplementary materials for this article are available online.
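The pool-adjacent-violators algorithm mentioned above is the standard tool for monotone fitting. Below is a generic least-squares PAVA sketch, not the authors' semiparametric likelihood code: it merges adjacent blocks whenever monotonicity is violated, replacing each merged block with its weighted mean.

```python
import numpy as np

def pava(y, w=None):
    """Pool-Adjacent-Violators: least-squares monotone (nondecreasing) fit."""
    y = np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    means, weights, counts = [], [], []   # one entry per merged block
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); counts.append(1)
        # Merge the last two blocks while they violate monotonicity.
        while len(means) > 1 and means[-2] > means[-1]:
            m = (weights[-2] * means[-2] + weights[-1] * means[-1]) \
                / (weights[-2] + weights[-1])
            wsum = weights[-2] + weights[-1]
            c = counts[-2] + counts[-1]
            means[-2:] = [m]; weights[-2:] = [wsum]; counts[-2:] = [c]
    return np.repeat(means, counts)

print(pava([1, 3, 2, 4]))  # [1.  2.5 2.5 4. ]
```

The isotonized values here play the role of the monotone density-ratio estimate in the paper's likelihood maximization.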

    Group-based atrous convolution stereo matching network

    No full text
    Stereo matching is the key technology in stereo vision. Given a pair of rectified images, stereo matching determines correspondences between them and estimates depth from the disparity between corresponding pixels. Recent work has shown that depth estimation from a stereo pair can be formulated as a supervised learning task with an end-to-end framework based on Convolutional Neural Networks (CNNs). However, 3D CNNs impose a heavy burden on memory and computation, which significantly increases computation time. To alleviate this issue, atrous convolution was proposed to reduce the number of convolutional operations via a relatively sparse receptive field. However, this sparse receptive field makes it difficult to find reliable corresponding points in ambiguous areas, e.g., occluded and untextured areas, owing to the loss of rich contextual information. To address this problem, we propose Group-based Atrous convolution Spatial Pyramid Pooling (GASPP) to robustly segment objects at multiple scales with affordable computing resources. The main feature of the GASPP module is that each group contains convolutional layers with consecutive dilation rates, which reduces the impact of the holes introduced by atrous convolution on network performance. Moreover, we introduce a tailored cascade cost volume in a pyramid form to reduce memory usage and meet real-time requirements. The group-based atrous convolution stereo matching network is evaluated on the street-scene benchmark KITTI 2015 and on Scene Flow, and achieves state-of-the-art performance.
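To see why atrous convolution enlarges the receptive field without extra parameters, and why its "holes" skip input positions, a minimal 1-D sketch helps; this is a generic illustration of dilated convolution, not the GASPP module itself.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D atrous (dilated) convolution with 'valid' padding.
    Kernel taps are spaced `dilation` positions apart, so a k-tap kernel
    covers (k - 1) * dilation + 1 inputs with only k parameters; the
    skipped positions are the 'holes' the abstract refers to."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
print(dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2))  # [ 6.  9. 12. 15. 18. 21.]
```

Stacking such layers with consecutive dilation rates within a group, as GASPP does, lets the union of taps cover the receptive field without gaps.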

    Data_Sheet_1_From digital museuming to on-site visiting: The mediation of cultural identity and perceived value.docx

    No full text
    Introduction: Museums use digital resources to provide online services to the public, and a “digital museuming” boom has begun. How online museum visiting works, and how it affects willingness to visit on site, has become an important and widely discussed issue. Therefore, based on presence theory and cognitive-emotional-behavioral theory, this paper introduces perceived value and cultural identity as mediating variables to explore the influence of the digital museuming experience on willingness to visit on site from the audience’s perspective. Method: Questionnaires were distributed using the snowball sampling method, and 429 valid questionnaires were returned. Results: The empirical test presents the following results: (1) virtual reality technology expands the digital museuming experience along multiple dimensions; (2) immersion, interaction and available experience promote willingness to visit on site; (3) the hedonic experience during digital museuming cannot be ignored; and (4) perceived value and cultural identity play a mediating role. Discussion: User experience of visiting virtual museums, perceived value and cultural identity influence users’ willingness to visit museums in person, but perceived value does not enhance users’ cultural identity, possibly because the online experience cannot deepen the visit.

    LightLog: A lightweight temporal convolutional network for log anomaly detection on the edge

    No full text
    Log anomaly detection on edge devices is key to enhancing edge security when deploying IoT systems. Despite the success of many recently proposed deep-learning-based log anomaly detection methods, handling large-scale logs on edge devices remains a bottleneck: these devices have limited computational power, yet accurate anomaly detection demands real-time processing. In this work, we propose a novel lightweight log anomaly detection algorithm, named LightLog, to tackle this research gap. Specifically, we achieve real-time processing speed via two components: (i) a low-dimensional semantic vector space built with word2vec and a post-processing algorithm (PPA); and (ii) a lightweight temporal convolutional network (TCN) for detection. These two components significantly reduce the number of parameters and computations of a standard TCN while improving detection performance. Experimental results show that LightLog outperforms several benchmark methods, namely DeepLog, LogAnomaly and RobustLog, achieving an F1 score of 97.0 on the HDFS dataset and 97.2 on BGL with the smallest model size. This effective yet efficient method paves the way for deploying log anomaly detection on the edge. Our source code and datasets are freely available at https://github.com/Aquariuaa/LightLog
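The PPA step referred to above is commonly implemented by centring the embedding matrix and removing its projections onto the top principal components, which concentrates the remaining dimensions on discriminative directions. The sketch below is a generic version of that idea; the exact variant used by LightLog may differ, and `d` is an assumed number of components.

```python
import numpy as np

def ppa(embeddings, d=2):
    """Post-processing for word embeddings: subtract the mean vector,
    then remove the projections onto the top-d principal components."""
    X = embeddings - embeddings.mean(axis=0)
    # Principal directions via SVD of the centred matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    top = Vt[:d]                          # (d, dim) dominant directions
    return X - X @ top.T @ top            # project them out

vecs = np.random.default_rng(1).normal(size=(100, 16))
out = ppa(vecs, d=2)
print(out.shape)  # (100, 16)
```

A dimensionality-reduction step (e.g. PCA to a small dimension) typically follows, yielding the low-dimensional semantic space the TCN consumes.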

    GCT-UNET: U-Net image segmentation model for a small sample of adherent bone marrow cells based on a gated channel transform module

    No full text
    Pathological diagnosis is considered declarative and authoritative. However, reading pathology slides is a challenging task: different parts of a section are taken and read for different purposes and with different focuses, which adds difficulty to the pathologist’s diagnosis. In recent years, deep neural networks have made great progress in computer vision, and the main approach to image segmentation is the use of convolutional neural networks, which capture the spatial properties of the data. Among the wide variety of network structures, one of the more representative is U-Net, with its encoder-decoder structure. The biggest advantage of traditional U-Net is that it still performs well with a small number of samples; however, information in the feature map is lost during U-Net’s downsampling, and a large amount of spatially accurate detail is lost in the decoding part. This makes it difficult to accurately segment cell images that are dense and highly adherent. For this reason, we propose a new network structure based on U-Net that segments cell images by aggregating global contextual information across channels and assigning different weights to the corresponding channels through a gated adaptive mechanism. This improves U-Net’s performance on the cell segmentation task. We also apply an unsupervised segmentation method for secondary segmentation of our model’s predictions, and testing shows that the final results meet readers’ needs.
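A gated channel transform re-weights feature channels using global context and per-channel gating parameters. The sketch below is a simplified, SE/GCT-flavoured illustration in plain NumPy, not the paper's module; `alpha` and `beta` stand in for learnable per-channel parameters.

```python
import numpy as np

def gated_channel_weights(feat, alpha, beta):
    """Simplified gated channel re-weighting (illustrative sketch):
    aggregate global context per channel, normalise across channels,
    and gate each channel with learned scale/bias parameters."""
    # feat: (C, H, W); alpha, beta: per-channel parameters of shape (C,)
    context = np.sqrt((feat ** 2).sum(axis=(1, 2)))       # channel embedding
    norm = context * alpha
    norm = norm / (np.sqrt((norm ** 2).mean()) + 1e-5)    # cross-channel norm
    gate = 1.0 + np.tanh(norm + beta)                     # gate in (0, 2)
    return feat * gate[:, None, None]

feat = np.random.default_rng(0).normal(size=(8, 4, 4))
out = gated_channel_weights(feat, alpha=np.ones(8), beta=np.zeros(8))
print(out.shape)  # (8, 4, 4)
```

With `alpha = 0` and `beta = 0` the gate is identically 1, so the transform defaults to the identity, one reason this style of gating trains stably.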
