
    Targeting the Uniformly Most Powerful Unbiased Test in Sample Size Reassessment Adaptive Clinical Trials with Deep Learning

    In recent pharmaceutical drug development, adaptive clinical trials have become increasingly appealing due to ethical considerations and their ability to accommodate uncertainty while the trial is conducted. Several methods have been proposed to optimize a given study design within a class of candidates, but finding an optimal hypothesis testing strategy for that design remains challenging, mainly due to the complex likelihood function involved. This problem is of great interest from both the patient and sponsor perspectives, because the optimal hypothesis testing method requires the smallest sample size to achieve a desired level of power. To address these issues, we propose a novel application of deep neural networks to construct the test statistic and the critical value with a controlled type I error rate in a computationally efficient manner. We apply the proposed method to MUSEC (MUltiple Sclerosis and Extract of Cannabis), a confirmatory adaptive study with sample size reassessment, and demonstrate that it outperforms existing alternatives. Simulation studies are also performed to demonstrate that our proposed method essentially recovers the underlying uniformly most powerful (UMP) unbiased test in several non-adaptive designs.
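
    The calibration idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's method: the toy trial model, the sample size reassessment rule, and the network architecture are all hypothetical stand-ins. A small network is trained on simulated trials to output a scalar test statistic, and the critical value is then set to the empirical (1 - alpha) quantile of that statistic under null simulations, which approximately controls the type I error.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def simulate_trials(theta, n_trials, n1=50, n2_max=100):
        """Toy two-stage adaptive trial: the stage-2 sample size is
        reassessed from the stage-1 estimate (hypothetical rule)."""
        x1 = theta + torch.randn(n_trials, n1)
        m1 = x1.mean(dim=1, keepdim=True)                  # stage-1 mean
        # Hypothetical reassessment: enroll more when stage 1 looks weak.
        n2 = torch.clamp((n2_max * (1 - m1.abs())).round(), 10, n2_max)
        m2 = theta + torch.randn(n_trials, 1) / n2.sqrt()  # stage-2 mean
        return torch.cat([m1, m2, n2 / n2_max], dim=1)     # summary features

    net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    # Train the statistic to separate null (theta = 0) from alternative draws.
    for step in range(500):
        feats = torch.cat([simulate_trials(0.0, 256), simulate_trials(0.3, 256)])
        labels = torch.cat([torch.zeros(256), torch.ones(256)])
        loss = nn.functional.binary_cross_entropy_with_logits(
            net(feats).squeeze(1), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Calibrate: critical value = empirical 95% quantile under the null,
    # so the type I error is (approximately) controlled at 5%.
    with torch.no_grad():
        null_stats = net(simulate_trials(0.0, 100_000)).squeeze(1)
        crit = torch.quantile(null_stats, 0.95)
        alt_stats = net(simulate_trials(0.3, 100_000)).squeeze(1)
        print(f"critical value {crit:.3f}, "
              f"power at theta = 0.3: {(alt_stats > crit).float().mean():.3f}")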

    Deep Neural Networks Guided Ensemble Learning for Point Estimation

    In modern statistics, interest has shifted from pursuing the uniformly minimum variance unbiased estimator to reducing the mean squared error (MSE) or residual squared error. Shrinkage-based estimation and regression methods offer better prediction accuracy and improved interpretability. However, the characterization of such optimal statistics in terms of minimizing MSE remains open and challenging in many problems, for example, estimating the treatment effect in adaptive clinical trials with pre-planned modifications to design aspects based on accumulated data. From an alternative perspective, we propose a deep-neural-network-based automatic method to construct an improved estimator from existing ones. Theoretical properties are studied to provide guidance on the applicability of our estimator in seeking potential improvement. Simulation studies demonstrate that the proposed method achieves a considerable finite-sample efficiency gain compared with several common estimators. In an important application to the Adaptive COVID-19 Treatment Trial (ACTT), our ensemble estimator contributes to a more ethical and efficient adaptive clinical trial with fewer patients enrolled. The proposed framework can be generally applied to various statistical problems and can serve as a reference measure to guide statistical research.
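
    The ensemble construction can be illustrated with a toy sketch. This is a hypothetical setup, not the paper's architecture or application: a small network combines a few standard estimators of a normal mean (sample mean, median, and a naive shrinkage estimate) and is trained on simulated data, where the true parameter is known, to minimize the empirical MSE.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    N = 20  # observations per simulated dataset

    def base_estimators(x):
        """Stack candidate estimators: mean, median, and a shrunken mean."""
        mean = x.mean(dim=1)
        median = x.median(dim=1).values
        shrunk = 0.8 * mean  # naive shrinkage toward zero
        return torch.stack([mean, median, shrunk], dim=1)

    net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    # Train on simulations where the true parameter is known, minimizing MSE.
    for step in range(2000):
        theta = torch.randn(512, 1)        # draw true means
        x = theta + torch.randn(512, N)    # simulate data given theta
        loss = ((net(base_estimators(x)) - theta) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Compare MSEs on fresh simulations.
    with torch.no_grad():
        theta = torch.randn(10_000, 1)
        feats = base_estimators(theta + torch.randn(10_000, N))
        mse_mean = ((feats[:, :1] - theta) ** 2).mean()
        mse_net = ((net(feats) - theta) ** 2).mean()
    print(f"sample-mean MSE {mse_mean:.4f} vs ensemble MSE {mse_net:.4f}")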

    Efficient and Generic Point Model for Lossless Point Cloud Attribute Compression

    The past several years have witnessed the emergence of learned point cloud compression (PCC) techniques. However, current learning-based lossless point cloud attribute compression (PCAC) methods suffer from either high computational complexity or degraded compression performance. Moreover, the significant variations in point cloud scale and sparsity encountered in real-world applications make developing an all-in-one neural model a challenging task. In this paper, we propose PoLoPCAC, an efficient and generic lossless PCAC method that achieves high compression efficiency and strong generalizability simultaneously. We formulate lossless PCAC as the task of inferring explicit distributions of attributes from group-wise autoregressive priors. A progressive random grouping strategy is first devised to efficiently partition the point cloud into groups, and the attributes of each group are then modeled sequentially from the accumulated antecedents. A locality-aware attention mechanism is utilized to exploit prior knowledge from context windows in parallel. Since our method operates directly on points, it naturally avoids the distortion caused by voxelization and can be executed on point clouds of arbitrary scale and density. Experiments show that our method can be instantly deployed once trained on a Synthetic 2k-ShapeNet dataset, while achieving consistent bit-rate reductions over the latest G-PCCv23 on various datasets (ShapeNet, ScanNet, MVUB, 8iVFB). Meanwhile, our method achieves shorter coding times than G-PCCv23 on the majority of sequences with a lightweight model size (2.6 MB), which is highly attractive for practical applications. The dataset, code, and trained model are available at https://github.com/I2-Multimedia-Lab/PoLoPCAC
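
    The group-wise autoregressive coding order can be sketched as follows. The group sizes, context size, and predictor below are hypothetical illustrations, not PoLoPCAC's actual configuration: points are randomly permuted and split into progressively larger groups, and each group's attributes are then predicted from the nearest already-coded points, with a simple neighbor average standing in for the learned locality-aware attention model.

    import numpy as np

    rng = np.random.default_rng(0)
    num_points = 4096
    xyz = rng.random((num_points, 3)).astype(np.float32)   # point positions
    attr = rng.integers(0, 256, (num_points, 3))           # e.g. RGB attributes

    # Progressive random grouping: group sizes grow geometrically.
    perm = rng.permutation(num_points)
    groups, start, size = [], 0, 64
    while start < num_points:
        groups.append(perm[start:start + size])
        start += size
        size *= 2

    def knn_context(coded_xyz, query_xyz, k=8):
        """Indices of the k nearest already-coded points per query point."""
        d = np.linalg.norm(query_xyz[:, None, :] - coded_xyz[None, :, :], axis=-1)
        return np.argsort(d, axis=1)[:, :k]

    # Autoregressive pass: the first group is coded with a flat prior; every
    # later group conditions on all previously coded points (its antecedents).
    coded = groups[0]
    for g in groups[1:]:
        ctx = knn_context(xyz[coded], xyz[g])       # local context windows
        ctx_attr = attr[coded][ctx]                 # (|g|, k, 3) neighbor attrs
        pred = ctx_attr.mean(axis=1)                # stand-in for the learned model
        residual = attr[g] - pred.round().astype(attr.dtype)
        # A real coder would entropy-code attr[g] under the model's predicted
        # distribution; here we just track how concentrated residuals are.
        coded = np.concatenate([coded, g])
    print("mean |residual| in last group:", np.abs(residual).mean())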

    A computational tool for Bayesian networks enhanced with reliability methods

    A computational framework for the reduction and computation of Bayesian Networks enhanced with structural reliability methods is presented. Over recent decades, the inherent flexibility of the Bayesian Network method, its intuitive graphical structure, and its strong mathematical foundations have attracted increasing interest across a wide variety of applications involving the joint probability of complex events and their dependencies. Furthermore, the fast-growing availability of computational power on the one hand and the implementation of robust inference algorithms on the other have further promoted the success of the method. Exact inference in Bayesian Networks is limited to discrete variables (with the sole exception of Gaussian distributions), whereas approximate approaches can handle continuous distributions but may be computationally inefficient or have unknown rates of convergence. This work provides a valid alternative to the traditional approach without giving up the reliability and robustness of exact inference. The adopted methodology combines Bayesian Networks with structural reliability methods, allowing random and interval variables to be integrated within the Bayesian Network framework in so-called Enhanced Bayesian Networks. In the following, the computational algorithms developed are described, and a simple structural application is presented to fully demonstrate the capabilities of the tool.
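
    The core reduction step can be illustrated with a toy example. The limit state function, distributions, and network below are hypothetical, not the tool's actual interface: a continuous relationship is reduced to a discrete conditional probability table by a structural reliability computation (plain Monte Carlo here, where FORM/SORM or more advanced sampling would also fit), after which exact discrete inference proceeds as usual.

    import numpy as np

    rng = np.random.default_rng(0)

    # Structural reliability step: P(failure | load level) for a limit
    # state g = R - S, with lognormal resistance R and normal load effect S.
    def failure_prob(load_mean, n=200_000):
        R = rng.lognormal(mean=np.log(10.0), sigma=0.1, size=n)  # resistance
        S = rng.normal(loc=load_mean, scale=1.0, size=n)         # load effect
        return float(np.mean(R - S < 0))                         # P(g < 0)

    # Discrete BN: Load (low/high) -> Failure (no/yes).
    p_load = np.array([0.7, 0.3])                       # P(Load)
    p_fail_given_load = np.array(
        [failure_prob(6.0), failure_prob(9.0)])         # CPT from reliability

    # Exact inference by enumeration: P(Load = high | Failure = yes).
    joint_yes = p_load * p_fail_given_load              # P(Load, Failure = yes)
    posterior = joint_yes / joint_yes.sum()
    print("P(failure):", joint_yes.sum())
    print("P(Load = high | Failure = yes):", posterior[1])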

    N-(5-Sulfanylidene-4,5-dihydro-1,3,4-thiadiazol-2-yl)acetamide dimethyl sulfoxide disolvate

    In the title compound, C4H5N3OS2·2C2H6OS, the five-membered heterocyclic ring and the N—(C=O)—C plane of the acetamide group are essentially co-planar, with a dihedral angle of 1.25 (3)°. Intermolecular N—H⋯O hydrogen bonds between the acetamide compound and the dimethyl sulfoxide molecules stabilize the crystal structure. The two dimethyl sulfoxide molecules are each disordered over two positions, with occupancy ratios of 0.605 (2):0.395 (2) and 0.8629 (18):0.1371 (18).

    LLM Agents can Autonomously Hack Websites

    In recent years, large language models (LLMs) have become increasingly capable and can now interact with tools (i.e., call functions), read documents, and recursively call themselves. As a result, these LLMs can now function autonomously as agents. With the rise in capabilities of these agents, recent work has speculated on how LLM agents will affect cybersecurity. However, not much is known about the offensive capabilities of LLM agents. In this work, we show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand. This capability is uniquely enabled by frontier models that are highly capable of tool use and of leveraging extended context. Namely, we show that GPT-4 is capable of such hacks, while existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs.