
    Adjusted inference for multiple testing procedure in group sequential designs

    Adjustment of statistical significance levels for repeated analyses in group sequential trials has been understood for some time. Similarly, methods for adjustment to account for testing multiple hypotheses are common. There is limited research, however, on simultaneously adjusting for both multiple hypothesis testing and multiple analyses of one or more hypotheses. We address this gap by proposing adjusted sequential p-values that reject an elementary hypothesis when its adjusted sequential p-value is less than or equal to the family-wise Type I error rate (FWER) in a group sequential design. We also propose sequential p-values for intersection hypotheses as a tool to compute adjusted sequential p-values for elementary hypotheses. We demonstrate the application using weighted Bonferroni tests and weighted parametric tests, comparing adjusted sequential p-values to a desired FWER for inference on each elementary hypothesis tested.
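The closure-principle construction behind such adjusted p-values can be illustrated with a plain (non-sequential) weighted Bonferroni test. The sketch below is a hedged simplification of the general idea, not the paper's group sequential procedure; the function names are ours, and with equal weights the construction reduces to Holm's adjusted p-values.

```python
from itertools import combinations

def weighted_bonferroni_p(pvals, weights):
    # p-value of an intersection hypothesis: min_i p_i / w_i, capped at 1
    return min(1.0, min(p / w for p, w in zip(pvals, weights)))

def closed_test_adjusted(pvals):
    """Adjusted p-value of each elementary hypothesis H_i: the largest
    intersection-hypothesis p-value over all intersections containing H_i
    (closure principle). Equal weights are used for illustration."""
    m = len(pvals)
    adjusted = []
    for i in range(m):
        worst = 0.0
        for r in range(1, m + 1):
            for subset in combinations(range(m), r):
                if i in subset:
                    w = [1.0 / len(subset)] * len(subset)
                    p_sub = [pvals[j] for j in subset]
                    worst = max(worst, weighted_bonferroni_p(p_sub, w))
        adjusted.append(worst)
    return adjusted

# reject H_i at FWER alpha when adj[i] <= alpha
adj = closed_test_adjusted([0.01, 0.04, 0.30])
```

With equal weights, `adj` equals Holm's step-down adjusted p-values, [0.03, 0.08, 0.30] here; the paper extends this construction with sequential p-values so the comparison to the FWER remains valid across interim analyses.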

    Evaluating Point Cloud Quality via Transformational Complexity

    Full-reference point cloud quality assessment (FR-PCQA) aims to infer the quality of distorted point clouds when references are available. Merging research in cognitive science with intuitions about the human visual system (HVS), the difference between the expected perceptual result and the actual perceptual reproduction in the visual center of the cerebral cortex indicates subjective quality degradation. Therefore, in this paper, we derive point cloud quality by measuring the complexity of transforming the distorted point cloud back to its reference, which in practice can be approximated by the code length of one point cloud when the other is given. For this purpose, we first segment the reference and the distorted point cloud into a series of local patch pairs based on a 3D Voronoi diagram. Next, motivated by predictive coding theory, we use a space-aware vector autoregressive (SA-VAR) model to encode the geometry and color channels of each reference patch, with and without the distorted patch, respectively. Specifically, assuming that the residual errors follow multivariate Gaussian distributions, we calculate the self-complexity of the reference and the transformational complexity between the reference and the distorted sample via covariance matrices. Besides the complexity terms, the prediction terms generated by SA-VAR are introduced as an auxiliary feature to promote the final quality prediction. Extensive experiments on five public point cloud quality databases demonstrate that the transformational-complexity-based distortion metric (TCDM) produces state-of-the-art (SOTA) results, and ablation studies of its key modules and parameters further show that the metric generalizes to various scenarios with consistent performance.
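Under the multivariate Gaussian residual assumption, the "code length" the abstract refers to is, up to constants, the differential entropy 0.5·log((2πe)^d·det Σ) of the residuals. The sketch below is an illustrative simplification that replaces the paper's SA-VAR predictor with ordinary least squares; function names and the ridge constant are our own assumptions.

```python
import numpy as np

def gaussian_code_length(residuals):
    """Approximate per-sample code length (nats) of residuals under a
    multivariate Gaussian model: 0.5 * log((2*pi*e)^d * det(Sigma)).
    A small ridge keeps the covariance positive definite."""
    d = residuals.shape[1]
    sigma = np.cov(residuals, rowvar=False) + 1e-9 * np.eye(d)
    _, logdet = np.linalg.slogdet(sigma)
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

def transformational_complexity(reference, distorted):
    """Code length of the reference patch given the distorted one,
    using least squares as a stand-in for the SA-VAR predictor."""
    coef, *_ = np.linalg.lstsq(distorted, reference, rcond=None)
    residuals = reference - distorted @ coef
    return gaussian_code_length(residuals)
```

A distorted patch that closely predicts its reference yields small residual covariance and hence a short code length (low complexity, high quality); an unrelated patch leaves large residuals and a long code length.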

    Point Cloud Quality Assessment using 3D Saliency Maps

    Point cloud quality assessment (PCQA) has become an appealing research field in recent years. Considering the importance of saliency detection in quality assessment, we propose an effective full-reference PCQA metric, point cloud quality assessment using 3D saliency maps (PQSM), which makes the first attempt to exploit saliency information to facilitate quality prediction. Specifically, we first propose a projection-based point cloud saliency map generation method in which depth information is introduced to better reflect the geometric characteristics of point clouds. Then, we construct point cloud local neighborhoods to derive three structural descriptors that capture the geometry, color and saliency discrepancies. Finally, a saliency-based pooling strategy is proposed to generate the final quality score. Extensive experiments are performed on four independent PCQA databases. The results demonstrate that the proposed PQSM shows competitive performance compared with multiple state-of-the-art PCQA metrics.
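The idea of saliency-based pooling can be sketched generically: each local distortion score is weighted by the saliency of its region, so degradations in visually salient areas dominate the final score. This is a minimal sketch of the generic weighting scheme, not the paper's exact pooling rule.

```python
import numpy as np

def saliency_weighted_pool(local_scores, saliency):
    """Pool per-patch distortion scores into one quality score,
    weighting each patch by its saliency value (generic
    saliency-based pooling; weights need not be normalized)."""
    saliency = np.asarray(saliency, dtype=float)
    local_scores = np.asarray(local_scores, dtype=float)
    return float((local_scores * saliency).sum() / saliency.sum())

# a heavily salient patch dominates the pooled score
score = saliency_weighted_pool([0.9, 0.1], [0.99, 0.01])
```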

    GPA-Net: No-Reference Point Cloud Quality Assessment with Multi-task Graph Convolutional Network

    With the rapid development of 3D vision, point clouds have become an increasingly popular form of 3D visual media. Due to their irregular structure, point clouds pose novel challenges for related research areas such as compression, transmission, rendering and quality assessment. Among these, point cloud quality assessment (PCQA) has attracted wide attention due to its significant role in guiding practical applications, especially in the many cases where a reference point cloud is unavailable. However, current no-reference metrics based on deep neural networks have apparent disadvantages. For example, to adapt to the irregular structure of point clouds, they require preprocessing such as voxelization and projection that introduces extra distortion, and the grid-kernel networks they apply, such as convolutional neural networks, fail to extract effective distortion-related features. Besides, they rarely consider the variety of distortion patterns or the principle that PCQA should be invariant to shifting, scaling and rotation. In this paper, we propose a novel no-reference PCQA metric named the graph convolutional PCQA network (GPA-Net). To extract effective features for PCQA, we propose a new graph convolution kernel, GPAConv, which attentively captures perturbations of structure and texture. Then, we propose a multi-task framework consisting of one main task (quality regression) and two auxiliary tasks (distortion type and degree prediction). Finally, we propose a coordinate normalization module to stabilize the results of GPAConv under shift, scale and rotation transformations. Experimental results on two independent databases show that GPA-Net achieves the best performance compared with state-of-the-art no-reference PCQA metrics, and in some cases even surpasses full-reference metrics.
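The shift- and scale-invariance part of coordinate normalization can be sketched simply: center the cloud at its centroid and divide by the largest distance to it, so translated or uniformly scaled copies of a shape map to identical coordinates. This is a hedged sketch of that generic step only; GPA-Net's actual module (and its handling of rotation) may differ.

```python
import numpy as np

def normalize_coordinates(points):
    """Center a point cloud at its centroid and scale by the largest
    distance to the centroid. Translated and uniformly scaled copies
    of the same shape normalize to the same coordinates; rotation
    invariance would need an extra alignment step (e.g., PCA)."""
    points = np.asarray(points, dtype=float)
    centered = points - points.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale

pts = np.random.default_rng(1).normal(size=(100, 3))
a = normalize_coordinates(pts)
b = normalize_coordinates(3.0 * pts + np.array([5.0, -2.0, 7.0]))
# a and b agree up to floating-point error
```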

    Effect of Trypsin Modification on Heat Resistance and Structural Properties of Liquid Egg White during Heat Sterilization

    To increase the pasteurization temperature and heat resistance of liquid egg white, this study investigated the effect of trypsin modification on the heat resistance and structural properties of liquid egg white. The samples consisted of two groups, unmodified and enzyme-modified; each group was kept at 25 ℃ (control) or sterilized at 56, 62, 68 or 72 ℃ for 3 min. Changes in heat resistance were measured by turbidity and supernatant protein content, and the structure of egg white protein was characterized by apparent viscosity, particle size, surface hydrophobicity, Fourier transform infrared (FTIR) spectroscopy, sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and scanning electron microscopy (SEM). Trypsin modification significantly reduced egg white turbidity and increased the protein content of the supernatant (P < 0.05). As the sterilization temperature increased, the turbidity and particle size of egg white gradually increased, while the protein content of the supernatant gradually decreased. At the same sterilization temperature, the turbidity and apparent viscosity of the modified egg white were significantly lower and the surface hydrophobicity significantly higher (P < 0.05) than those of the unmodified egg white, and the particle size distribution was closer to a normal distribution. Enzymatic modification could thus inhibit thermal aggregation of the protein and improve heat resistance. SEM results showed that enzymatic modification increased the surface porosity of egg white protein and the dispersity of the particles; at the same sterilization temperature, the number of particles retained on the surface was higher in modified than in unmodified egg white. SDS-PAGE analysis showed that enzymatic modification promoted the degradation of proteins of large molecular mass in egg white.
    FTIR spectroscopy showed that at temperatures below 68 ℃, the relative α-helix content of the modified egg white was significantly higher, and the relative random coil content significantly lower, than those of the unmodified egg white (P < 0.05). In conclusion, trypsin modification can effectively inhibit the thermal aggregation of egg white proteins during heat sterilization and improve the heat resistance of liquid egg white, which is important for expanding the product's sales radius.

    Exploiting wireless received signal strength indicators to detect evil-twin attacks in smart homes

    Evil-twin attacks are becoming common in smart home environments, where an attacker can set up a fake access point (AP) to compromise the security of connected devices. Current approaches to detecting evil-twin attacks rely on information such as SSIDs, the MAC address of the genuine AP or network traffic patterns. However, such information can be faked by the attacker, often leading to low detection rates and weak protection. This paper presents a novel evil-twin attack detection method based on the received signal strength indicator (RSSI). Our key insight is that the genuine AP rarely moves in a home environment, and as a result its RSSI is relatively stable. Our approach therefore treats the RSSI as a fingerprint of an AP and uses the fingerprint of the genuine AP to identify fake ones. We provide two detection schemes for two different scenarios, in which the genuine AP can be located at either a single location or multiple locations in the property, by exploiting the multipath effect of the Wi-Fi signal. As a departure from prior work, our approach does not rely on any professional measurement devices. Experimental results show that our approach successfully detects 90% of fake APs, at the cost of a one-off, modest connection delay.
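The single-location scheme can be sketched as a simple statistical fingerprint check: record the RSSI distribution of the genuine AP during a known-safe period, then flag any AP whose readings deviate too far from it. The threshold `k` and function names below are illustrative assumptions; the paper derives its actual decision rule (and the multi-location, multipath-based variant) empirically.

```python
import statistics

def build_fingerprint(rssi_samples):
    """RSSI fingerprint of the genuine AP: mean and standard deviation
    of readings collected while the network is known to be safe."""
    return statistics.mean(rssi_samples), statistics.stdev(rssi_samples)

def looks_like_evil_twin(fingerprint, observed_rssi, k=3.0):
    """Flag an AP whose RSSI deviates from the fingerprint mean by
    more than k standard deviations (k is an illustrative threshold)."""
    mean, std = fingerprint
    return abs(observed_rssi - mean) > k * std

fp = build_fingerprint([-42, -41, -43, -42, -44, -41])   # dBm readings
assert not looks_like_evil_twin(fp, -43)   # consistent with genuine AP
assert looks_like_evil_twin(fp, -70)       # attacker at another location
```

Because an attacker's device almost inevitably sits at a different physical location, its RSSI distribution differs from the fingerprint even when the SSID and MAC address are cloned.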

    S3IM: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields

    Recently, Neural Radiance Fields (NeRF) have shown great success in rendering novel-view images of a given scene by learning an implicit representation from only posed RGB images. NeRF and related neural field methods (e.g., neural surface representations) typically optimize a point-wise loss and make point-wise predictions, where one data point corresponds to one pixel. Unfortunately, this line of research has failed to use the collective supervision of distant pixels, although it is known that pixels in an image or scene can provide rich structural information. To the best of our knowledge, we are the first to design a nonlocal multiplex training paradigm for NeRF and related neural field methods, via a novel Stochastic Structural SIMilarity (S3IM) loss that processes multiple data points as a whole set instead of processing multiple inputs independently. Our extensive experiments demonstrate the unreasonable effectiveness of S3IM in improving NeRF and neural surface representations at nearly no extra cost. The improvements in quality metrics can be particularly significant for relatively difficult tasks: e.g., the test MSE loss unexpectedly drops by more than 90% for TensoRF and DVGO over eight novel view synthesis tasks, and NeuS achieves a 198% F-score gain and a 64% Chamfer L1 distance reduction over eight surface reconstruction tasks. Moreover, S3IM is consistently robust even with sparse inputs, corrupted images and dynamic scenes. Comment: ICCV 2023 main conference. Code: https://github.com/Madaoer/S3IM
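The core S3IM idea, randomly grouping rendered and ground-truth pixels into stochastic patches and scoring structural similarity over them, can be sketched with a reduced single-window SSIM. This is a hedged sketch keeping only the stochastic-grouping step; the paper's loss uses the full SSIM formulation, and the constants and function names here are our own assumptions.

```python
import numpy as np

def ssim_1d(x, y, c1=1e-4, c2=9e-4):
    # simplified single-window SSIM on flattened patches
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def s3im(pred, target, patch=64, rng=None):
    """Randomly permute pixels, group them into stochastic patches,
    and average a structural-similarity term over the patches."""
    if rng is None:
        rng = np.random.default_rng(0)
    order = rng.permutation(pred.size)
    p, t = pred.ravel()[order], target.ravel()[order]
    n = (p.size // patch) * patch
    scores = [ssim_1d(p[i:i + patch], t[i:i + patch])
              for i in range(0, n, patch)]
    return float(np.mean(scores))

# identical images score (near) 1.0; unrelated images score lower
```

Because the grouping is random, distant pixels supervise each other within a patch, which is exactly the nonlocal, collective supervision the point-wise losses lack.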

    Multifractal analysis of the heterogeneity of nanopores in tight reservoirs based on boosting machine learning algorithms

    Exploring the geological factors that affect fluid flow has long been a hot topic. For tight reservoirs, the pore structure and characteristics of different lithofacies reveal how fluids are stored in different reservoir environments, and the size, connectivity and distribution of fillers in different sedimentary environments have always posed a challenge for studying microscopic heterogeneity. In this paper, six logging curves (gamma-ray, density, acoustic, compensated neutron, shallow resistivity and deep resistivity) from two marker wells, J1 and J2, of the Permian Lucaogou Formation in the Jimsar Basin are analyzed with four machine learning algorithms: LogitBoost, GBM, XGBoost and KNN. The overall classification accuracy for training well J2 is 96%, 96%, 96% and 96%, and for validation well J1 it is 75%, 68%, 72% and 75%, respectively. Based on the lithofacies classification obtained with the machine learning algorithms, micropores, mesopores and macropores are comprehensively described by high-pressure mercury injection and low-pressure nitrogen gas adsorption tests. Multifractal theory serves for the quantitative characterization of the heterogeneity of the pore distribution in samples of different lithofacies; the higher-probability-measure area of the generalized fractal spectrum governs the heterogeneity of the local mesopore and macropore interval of the estuary dam facies. In the micropore and mesopore range, the heterogeneity of the evaporation lake facies varied widely under the influence of the higher-probability-measure area, while in the mesopore and macropore range it was controlled by the lower-probability-measure area.
    According to the correlation analysis, the single-fractal dimension is well correlated with the multifractal parameters, with individual fitting degrees reaching up to 99%, and can thus serve to characterize the uniformity of the pore size distribution. The combination of boosting machine learning and multifractal analysis helps to better characterize micro-heterogeneity under different sedimentary environments and across different pore size ranges, which is helpful for the exploration and development of oil fields.
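The generalized fractal spectrum used above can be sketched with textbook box-counting multifractal analysis of a normalized 1D measure (e.g., a pore-size distribution): for each box size ε, form the partition function χ(q, ε) = Σ p_i(ε)^q over non-empty boxes and fit log χ against log ε; D(q) = slope / (q − 1) for q ≠ 1. This is a generic sketch under those standard definitions, not the paper's exact workflow.

```python
import numpy as np

def generalized_dimensions(measure, qs, box_sizes):
    """Box-counting estimate of generalized fractal dimensions D(q)
    for a 1D measure (qs must exclude q = 1, which needs the
    entropy-based limit instead)."""
    measure = np.asarray(measure, dtype=float)
    measure = measure / measure.sum()        # normalize to a probability measure
    dims = []
    for q in qs:
        log_eps, log_chi = [], []
        for size in box_sizes:
            n = (measure.size // size) * size
            p = measure[:n].reshape(-1, size).sum(axis=1)  # box masses
            p = p[p > 0]
            log_eps.append(np.log(size / measure.size))    # relative box size
            log_chi.append(np.log((p ** q).sum()))         # partition function
        slope = np.polyfit(log_eps, log_chi, 1)[0]
        dims.append(slope / (q - 1))
    return dims
```

For a perfectly uniform measure, D(q) = 1 for every q; heterogeneity in the pore distribution shows up as D(q) decreasing with q, and the spread of the spectrum quantifies the heterogeneity the abstract discusses.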