190 research outputs found

    Micropillar Arrays Fabricated by Light Induced Self-Writing: An Opportunity for Rapid, Scalable Formation of Hydrophobic Surfaces

    Superhydrophobic surfaces occur naturally in plants and animals and have inspired the development of artificial hydrophobic surfaces, which have drawn attention in multiple areas in recent years. Multiple approaches have been explored, achieving different levels of hydrophobicity. In this thesis, a series of hydrophobic surface structures were prepared with a photo-induced self-writing method and then coated with fluorocarbon compounds. Various fiber heights and coating methods were tested for their influence on hydrophobicity. Section views under an optical microscope showed uniform cone structures fabricated by photo-curing, and contact angle measurements exhibited static contact angles greater than 150°. The method is also suitable for creating translucent samples.

    Distributed Logistic Regression for Massive Data with Rare Events

    Large-scale rare-events data are commonly encountered in practice. To tackle massive rare-events data, we propose a novel method for estimating logistic regression on a distributed system. A distributed framework poses two challenges. The first is how to distribute the data; in this regard, two different distribution strategies (the RANDOM strategy and the COPY strategy) are investigated. The second is how to select an appropriate type of objective function so that the best asymptotic efficiency can be achieved; here, the under-sampled (US) and inverse probability weighted (IPW) types of objective functions are considered. Our results suggest that the COPY strategy together with the IPW objective function is the best solution for distributed logistic regression with rare events. The finite-sample performance of the distributed methods is demonstrated by simulation studies and a real-world Sweden Traffic Sign dataset.
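    As a minimal sketch of the under-sampling plus inverse-probability-weighting idea (not the paper's distributed implementation; the data, sampling rate, and tuning values below are all hypothetical), one can under-sample the majority class and then reweight it inside a weighted logistic likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rare-events data: intercept -4 makes positives rare (~2%).
n, p = 20000, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -1.0, 0.5])
prob = 1.0 / (1.0 + np.exp(-(X @ beta_true - 4.0)))
y = rng.binomial(1, prob)

# Under-sample the majority (negative) class, then give each retained
# negative the inverse of its sampling probability as its weight (IPW).
keep_rate = 0.1
keep = (y == 1) | (rng.random(n) < keep_rate)
Xs, ys = X[keep], y[keep]
w = np.where(ys == 1, 1.0, 1.0 / keep_rate)

def fit_weighted_logistic(X, y, w, lr=0.1, iters=500):
    """Weighted logistic regression by gradient ascent on the IPW
    log-likelihood; an intercept column is prepended to X."""
    Xb = np.hstack([np.ones((len(y), 1)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p_hat = 1.0 / (1.0 + np.exp(-(Xb @ beta)))
        grad = Xb.T @ (w * (y - p_hat)) / w.sum()
        beta += lr * grad
    return beta

beta_hat = fit_weighted_logistic(Xs, ys, w)
```

    Because each retained negative stands in for `1 / keep_rate` discarded ones, the weighted objective targets the same population coefficients as the full-sample likelihood.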

    Estimating Extreme Value Index by Subsampling for Massive Datasets with Heavy-Tailed Distributions

    Modern statistical analyses often encounter datasets with massive sizes and heavy-tailed distributions. For datasets of massive size, traditional estimation methods can hardly be used to estimate the extreme value index directly. To address this issue, we propose a subsampling-based method. Specifically, multiple subsamples are drawn from the whole dataset using simple random subsampling with replacement. Based on each subsample, an approximate maximum likelihood estimator is computed, and the resulting estimators are averaged to form a more accurate one. Under appropriate regularity conditions, we show theoretically that the proposed estimator is consistent and asymptotically normal. With the help of the estimated extreme value index, a normal range can be established for a heavy-tailed random variable; observations that fall outside this range should be treated as suspect records and can practically be regarded as outliers. Extensive simulation experiments demonstrate the promising performance of our method, and a real data analysis is presented for illustration purposes.
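    The Hill estimator is one standard approximate maximum likelihood estimator of the extreme value index for heavy-tailed data; taking it as the base estimator (the subsample sizes and counts below are illustrative choices, not the paper's), the subsample-and-average idea can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

def hill_estimator(x, k):
    """Hill estimator of the extreme value index from the top-k order statistics."""
    xs = np.sort(x)[::-1]            # descending order
    logs = np.log(xs[:k + 1])
    return np.mean(logs[:k] - logs[k])

def subsampled_evi(x, n_sub=50, sub_size=2000, k=100):
    """Average Hill estimates over simple random subsamples drawn with
    replacement (hypothetical tuning values)."""
    ests = []
    for _ in range(n_sub):
        sub = rng.choice(x, size=sub_size, replace=True)
        ests.append(hill_estimator(sub, k))
    return float(np.mean(ests))

# Pareto data with tail index 2 has extreme value index 1/2 = 0.5.
x = (1.0 / rng.random(200000)) ** 0.5
gamma_hat = subsampled_evi(x)
```

    Each subsample only needs its top order statistics, so the full dataset is never sorted in one piece; averaging the subsample estimates then reduces the variance.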

    Research on acceptance analysis of application programming learning platform for industrial robots

    Objective: To investigate college students' acceptance of solid-model versus virtual-model teaching. Methods: Several factors from the UTAUT (Unified Theory of Acceptance and Use of Technology) model (behavioral intention, effort expectation, and performance expectation) were analyzed using t-tests. Results: The experimental results showed that students had higher behavioral intention toward the solid model, as well as higher effort expectation and performance expectation, and the differences were significant. Compared with the virtual 3D model, students preferred a physical device that can be held and operated. Conclusion: In designing a robot application programming teaching platform, teaching with solid models should be appropriately introduced, combining the advantages of virtual and solid models.
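    A two-sample comparison of the kind used here can be sketched as follows; the Likert-style scores, group sizes, and effect sizes are hypothetical, and Welch's unequal-variance t statistic is used for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 5-point Likert scores for behavioral intention:
# one group rated the solid (physical) model, one the virtual model.
solid = rng.normal(4.2, 0.6, size=40).clip(1, 5)
virtual = rng.normal(3.6, 0.8, size=40).clip(1, 5)

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(solid, virtual)
```

    A t value well above 2 at these degrees of freedom corresponds to a significant difference at the usual 5% level, matching the pattern the abstract reports.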

    Estimating Mixture of Gaussian Processes by Kernel Smoothing

    When functional data are not homogeneous, for example, when there are multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this article, we propose a new estimation procedure for the mixture of Gaussian processes to incorporate both the functional and the inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures; the key difference is that smoothed structures are imposed on both the mean and covariance functions. The model is shown to be identifiable and can be estimated efficiently by combining ideas from the expectation-maximization (EM) algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset.
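    The functional model replaces scalar means and variances with kernel-smoothed mean and covariance functions, but the underlying EM machinery is the familiar one for Gaussian mixtures. A minimal scalar sketch (illustrative data and starting values; not the paper's kernel-smoothed estimator):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-component scalar Gaussian mixture: 30% at mean -2, 70% at mean 3.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

def em_gmm(x, iters=100):
    """Plain EM for a 2-component Gaussian mixture (illustrative sketch)."""
    pi, mu, sig = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 (the 1/sqrt(2*pi)
        # constant cancels in the ratio).
        d0 = pi * np.exp(-0.5 * ((x - mu[0]) / sig[0]) ** 2) / sig[0]
        d1 = (1 - pi) * np.exp(-0.5 * ((x - mu[1]) / sig[1]) ** 2) / sig[1]
        r = d0 / (d0 + d1)
        # M-step: responsibility-weighted updates.
        pi = r.mean()
        mu = np.array([np.sum(r * x) / r.sum(),
                       np.sum((1 - r) * x) / (1 - r).sum()])
        sig = np.sqrt(np.array([np.sum(r * (x - mu[0]) ** 2) / r.sum(),
                                np.sum((1 - r) * (x - mu[1]) ** 2) / (1 - r).sum()]))
    return pi, mu, sig

pi, mu, sig = em_gmm(x)
```

    In the functional version, the M-step weighted averages above become responsibility-weighted kernel regressions over the observation times.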

    Subnetwork Estimation for Spatial Autoregressive Models in Large-scale Networks

    Large-scale networks (e.g., Facebook and Twitter) are commonly encountered by researchers in practice. To study the network interaction between different nodes of large-scale networks, the spatial autoregressive (SAR) model has been popularly employed. Despite its popularity, estimating a SAR model on large-scale networks remains very challenging. On the one hand, due to policy limitations or high collection costs, it is often impossible for independent researchers to observe or collect all network information. On the other hand, even if the entire network is accessible, estimating the SAR model using the quasi-maximum likelihood estimator (QMLE) could be computationally infeasible due to its high computational cost. To address these challenges, we propose a subnetwork estimation method based on the QMLE for the SAR model. By using appropriate sampling methods, a subnetwork consisting of a much-reduced number of nodes can be constructed, and the standard QMLE can then be computed by treating the sampled subnetwork as if it were the entire network. This leads to a significant reduction in information collection and model computation costs, which makes the approach practically feasible. Theoretically, we show that the subnetwork-based QMLE is consistent and asymptotically normal under appropriate regularity conditions. Extensive simulation studies, based on both simulated and real network structures, are presented.
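    A rough sketch of the idea, assuming a simulated network, a simple induced-subgraph sample, and a grid-search Gaussian QMLE (the paper's particular sampling schemes and regularity conditions are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a SAR model Y = rho * W Y + eps on a random network
# (an illustrative stand-in for a real large-scale network).
n, rho_true = 400, 0.4
A = (rng.random((n, n)) < 5.0 / n).astype(float)
np.fill_diagonal(A, 0.0)
W = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)   # row-normalized
Y = np.linalg.solve(np.eye(n) - rho_true * W, rng.normal(size=n))

def sar_qmle(Y, W, grid=np.linspace(-0.8, 0.8, 161)):
    """Gaussian QMLE for rho via grid search on the concentrated
    log-likelihood  log|I - rho W| - (n/2) log(RSS(rho)/n)."""
    n, I = len(Y), np.eye(len(Y))
    best, best_ll = None, -np.inf
    for rho in grid:
        S = I - rho * W
        sign, logdet = np.linalg.slogdet(S)
        if sign <= 0:
            continue
        rss = np.sum((S @ Y) ** 2)
        ll = logdet - 0.5 * n * np.log(rss / n)
        if ll > best_ll:
            best, best_ll = rho, ll
    return best

rho_full = sar_qmle(Y, W)

# Subnetwork estimation: sample nodes, extract the induced subnetwork,
# re-normalize its rows, and run the same QMLE as if it were the whole network.
idx = rng.choice(n, size=150, replace=False)
W_sub = A[np.ix_(idx, idx)]
W_sub = W_sub / np.maximum(W_sub.sum(axis=1, keepdims=True), 1.0)
rho_sub = sar_qmle(Y[idx], W_sub)
```

    The induced-subgraph step drops edges to unsampled nodes, which is exactly why the choice of sampling method and the paper's regularity conditions matter for consistency.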

    Distributed Estimation and Inference for Spatial Autoregression Model with Large Scale Networks

    The rapid growth of online network platforms generates large-scale network data, which poses great challenges for statistical analysis using the spatial autoregression (SAR) model. In this work, we develop a novel distributed estimation and statistical inference framework for the SAR model on a distributed system. We first propose a distributed network least squares approximation (DNLSA) method, which yields a one-step estimator by taking a weighted average of local estimators from each worker. A refined two-step estimator is then designed to further reduce the estimation bias. For statistical inference, we utilize a random projection method to reduce the expensive communication cost. Theoretically, we show the consistency and asymptotic normality of both the one-step and two-step estimators, and we provide theoretical guarantees for the distributed statistical inference procedure. The theoretical findings and computational advantages are validated by several numerical simulations implemented on the Spark system. Lastly, an experiment on the Yelp dataset further illustrates the usefulness of the proposed methodology.
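    The one-step idea of averaging local estimators can be illustrated with ordinary least squares in place of the SAR model (a hypothetical setup, not the paper's DNLSA): each worker ships only a small Gram matrix and its local estimate, and the master forms a precision-weighted average.

```python
import numpy as np

rng = np.random.default_rng(5)

p, K, n_k = 3, 10, 500
beta_true = np.array([1.0, -2.0, 0.5])

# Each of K workers holds a local shard of the data.
shards = []
for _ in range(K):
    X = rng.normal(size=(n_k, p))
    y = X @ beta_true + rng.normal(size=n_k)
    shards.append((X, y))

# Worker side: local Gram matrix X'X and local least-squares estimate.
local = [(X.T @ X, np.linalg.solve(X.T @ X, X.T @ y)) for X, y in shards]

# Master side: precision-weighted average of the local estimates.
# Only a p x p matrix and a p-vector travel per worker, not the raw data.
H = sum(h for h, _ in local)
beta_onestep = np.linalg.solve(H, sum(h @ b for h, b in local))
```

    For linear least squares this precision-weighted average reproduces the full-sample estimator exactly; the SAR case needs the paper's two-step refinement because the local estimators are nonlinear in the data.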

    Subgroup Analysis via Recursive Partitioning

    Subgroup analysis is an integral part of comparative analysis, where assessing the treatment effect on a response is of central interest. Its goal is to determine the heterogeneity of the treatment effect across subpopulations. In this paper, we adapt the idea of recursive partitioning and introduce an interaction tree (IT) procedure to conduct subgroup analysis. The IT procedure automatically produces a number of objectively defined subgroups; in some of them the treatment effect is prominent, while in others the treatment has a negligible or even negative effect. The standard CART (Breiman et al., 1984) methodology is inherited to construct the tree structure. In addition, to extract factors that contribute to the heterogeneity of the treatment effect, a variable importance measure is made available via random forests of interaction trees. Both simulated experiments and an analysis of census wage data are presented for illustration.
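    A single interaction-tree split can be sketched by scanning candidate cut points and keeping the one that most separates the treatment effects of the two child nodes (toy data and an unstandardized gap criterion below; the paper's procedure uses a standardized test statistic within the CART framework):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data with a known subgroup structure: treatment helps only when x > 0.
n = 2000
x = rng.normal(size=n)
trt = rng.binomial(1, 0.5, size=n)
y = 0.5 * x + trt * (x > 0) * 1.0 + rng.normal(size=n)

def effect(y, trt):
    """Difference in mean response between treated and control units."""
    return y[trt == 1].mean() - y[trt == 0].mean()

def best_interaction_split(x, y, trt, quantiles=np.linspace(0.1, 0.9, 17)):
    """Pick the cut point maximizing the gap between the treatment effects
    of the two child nodes (sketch of the IT split criterion)."""
    best_c, best_gap = None, -np.inf
    for c in np.quantile(x, quantiles):
        left, right = x <= c, x > c
        gap = abs(effect(y[left], trt[left]) - effect(y[right], trt[right]))
        if gap > best_gap:
            best_c, best_gap = c, gap
    return best_c, best_gap

c_hat, gap = best_interaction_split(x, y, trt)
```

    On this toy data the chosen cut lands near the true subgroup boundary at zero, splitting the sample into a no-effect and a positive-effect subgroup; recursing on each child grows the full interaction tree.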