
    Towards Ontology-Based Program Analysis

    Program analysis is fundamental to program optimization, debugging, and many other tasks, but developing program analyses has long been a challenging and error-prone process for general users. Declarative program analysis has shown promise for dramatically improving productivity in the development of program analyses. However, current declarative program analysis suffers from major limitations: it poorly supports cooperation among analysis tools, offers limited guidance for program optimizations, and often requires substantial effort for repeated program preprocessing. In this work, we advocate integrating ontology into declarative program analysis. As a way to standardize the definitions of concepts in a domain and the representation of domain knowledge, ontology offers a promising means of addressing these limitations. We develop a prototype framework named PATO for conducting program analysis on an ontology-based program representation. Experiments on six program analyses confirm the potential of ontology to complement existing declarative program analysis: PATO supports multiple analyses without separate program preprocessing, promotes cooperative liveness analysis between two compilers, and effectively guides a data placement optimization for Graphics Processing Units (GPUs).
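
    As a minimal sketch of the idea (not PATO's actual schema or API), program facts can be stored as ontology triples and queried declaratively; here a toy intermediate-representation ontology is built with the rdflib library, and a SPARQL query matches definitions of a variable to its uses. The ir# namespace and statement names are illustrative assumptions.

        # Requires rdflib; the ontology layout below is purely illustrative.
        from rdflib import Graph, Namespace, RDF

        IR = Namespace("http://example.org/ir#")
        g = Graph()
        # Encode two IR statements as triples: stmt1 defines x, stmt2 uses x.
        g.add((IR.stmt1, RDF.type, IR.Assignment))
        g.add((IR.stmt1, IR.defines, IR.x))
        g.add((IR.stmt2, IR.uses, IR.x))

        # A declarative def-use query over the ontology, standing in for a real analysis.
        q = """
        PREFIX ir: <http://example.org/ir#>
        SELECT ?d ?u WHERE { ?d ir:defines ?v . ?u ir:uses ?v . }
        """
        for d, u in g.query(q):
            print(f"{d} reaches {u}")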

    A Macro-Micro Approach to Reconstructing Vehicle Trajectories on Multi-Lane Freeways with Lane Changing

    Vehicle trajectories offer the most precise and detailed depiction of traffic flow and serve as a critical component of traffic management and control applications. Various technologies have been applied to reconstruct vehicle trajectories from sparse fixed and mobile detection data. However, existing methods predominantly concentrate on single-lane scenarios and neglect the lane-changing (LC) behaviors that occur across multiple lanes, which limits their applicability in practical traffic systems. To address this gap, we propose a macro-micro approach for reconstructing complete vehicle trajectories on multi-lane freeways, in which macroscopic traffic state information and microscopic driving models are integrated to overcome the restrictions imposed by lane boundaries. In particular, macroscopic velocity contour maps are established for each lane to regulate the movement of vehicle platoons, while the velocity difference between adjacent lanes provides a criterion for guiding LC behaviors. At the microscopic level, car-following models are extended to supply lane-based candidate trajectories and to define the plausible range of LC positions. A two-stage trajectory fusion algorithm is then proposed to jointly infer both car-following and LC behaviors, in which the optimal LC positions are identified and candidate trajectories are adjusted according to their weights. The proposed framework was evaluated on the NGSIM dataset, and the results indicate a remarkable enhancement in both the accuracy and smoothness of the reconstructed trajectories, with performance indicators reduced by over 30% compared to two representative reconstruction methods. Furthermore, the reconstruction process effectively reproduces LC behaviors across contiguous lanes, adding to the framework's comprehensiveness and realism.
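
    The lane-based candidate trajectories mentioned above come from extended car-following models. As a rough illustration under assumed parameters (the Intelligent Driver Model is used here only as one common stand-in, not necessarily the model the paper adopts), a follower can be rolled forward behind a leader trajectory to produce one candidate:

        import numpy as np

        def idm_accel(v, gap, dv, v0=30.0, T=1.5, a=1.5, b=2.0, s0=2.0):
            """Intelligent Driver Model acceleration; parameter values are illustrative."""
            s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a * b))
            return a * (1.0 - (v / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

        def candidate_trajectory(leader_x, leader_v, x0, v_init, dt=0.1):
            """Roll a follower forward behind a leader trajectory; returns its positions."""
            x, v = x0, v_init
            xs = []
            for lx, lv in zip(leader_x, leader_v):
                acc = idm_accel(v, lx - x, v - lv)
                v = max(v + acc * dt, 0.0)   # no reversing
                x = x + v * dt
                xs.append(x)
            return np.array(xs)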

    Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness

    Smoothing deep-neural-network-based classifiers via isotropic Gaussian perturbation has recently been shown to be an effective and scalable way to provide state-of-the-art probabilistic robustness guarantees against ℓ2-norm-bounded adversarial perturbations. However, how to train a good base classifier that is both accurate and robust when smoothed has not been fully investigated. In this work, we derive a new regularized risk in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart while the base classifier is being trained. It is computationally efficient and can be implemented in parallel with other empirical defense methods. We discuss how to implement it under both the standard (non-adversarial) and the adversarial training scheme. We also design a new certification algorithm that can leverage the regularization effect to provide a tighter robustness lower bound that holds with high probability. Extensive experiments demonstrate the effectiveness of the proposed training and certification approaches on the CIFAR-10 and ImageNet datasets. Comment: AAAI202
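
    For context, the baseline certification that this work tightens proceeds roughly as follows. This is a sketch of the standard Monte-Carlo procedure for Gaussian-smoothed classifiers, not the paper's improved algorithm; `base_classifier` is a placeholder callable mapping a numpy array to a class index.

        import numpy as np
        from scipy.stats import norm, beta

        def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
            """Return (predicted class, certified L2 radius) for input x (numpy array)."""
            counts = np.zeros(num_classes, dtype=int)
            for _ in range(n):
                noisy = x + sigma * np.random.randn(*x.shape)
                counts[base_classifier(noisy)] += 1
            top = int(counts.argmax())
            k = counts[top]
            # One-sided Clopper-Pearson lower confidence bound on the top-class probability.
            p_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
            if p_lower <= 0.5:
                return top, 0.0  # abstain: no nontrivial radius certified
            return top, sigma * norm.ppf(p_lower)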

    EasyNet: An Easy Network for 3D Industrial Anomaly Detection

    3D anomaly detection is an emerging and vital computer vision task in industrial manufacturing (IM). Many advanced algorithms have been published recently, but most of them cannot meet the needs of IM: i) they are difficult to deploy on production lines because they rely heavily on large pre-trained models; ii) they greatly increase storage overhead through heavy use of memory banks; iii) their inference speed is not real-time. To overcome these issues, we propose an easy and deployment-friendly network (called EasyNet) that uses neither pre-trained models nor memory banks. First, we design a multi-scale, multi-modality feature encoder-decoder to accurately reconstruct the segmentation maps of anomalous regions and to encourage interaction between RGB images and depth images; second, we adopt a multi-modality anomaly segmentation network to obtain a precise anomaly map; third, we propose an attention-based information entropy fusion module for feature fusion during inference, making the model suitable for real-time deployment. Extensive experiments show that EasyNet achieves an anomaly detection AUROC of 92.6% without using pre-trained models or memory banks. In addition, EasyNet is faster than existing methods, reaching a high frame rate of 94.55 FPS on a Tesla V100 GPU.
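
    As a loose illustration of reconstruction-based scoring across the two modalities (this is not EasyNet's actual attention-based information-entropy fusion module, only a simplified weighting built on the same intuition, with the entropy-based weights as an assumption):

        import numpy as np

        def anomaly_map(rgb, rgb_recon, depth, depth_recon, eps=1e-8):
            """Fuse per-pixel reconstruction errors of RGB (HxWx3) and depth (HxW) inputs,
            weighting each modality by the inverse entropy of its error map."""
            err_rgb = np.mean((rgb - rgb_recon) ** 2, axis=-1)   # HxW error map
            err_d = (depth - depth_recon) ** 2                    # HxW error map

            def entropy(e):
                p = e / (e.sum() + eps)
                return -(p * np.log(p + eps)).sum()

            w_rgb = 1.0 / (entropy(err_rgb) + eps)
            w_d = 1.0 / (entropy(err_d) + eps)
            fused = (w_rgb * err_rgb + w_d * err_d) / (w_rgb + w_d)
            return fused / (fused.max() + eps)  # anomaly map normalized to [0, 1]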

    Diagnosing quantum phase transition via holographic entanglement entropy at finite temperature

    We investigate the behavior of the holographic entanglement entropy (HEE) in proximity to the quantum critical points (QCPs) of the metal-insulator transition (MIT) in the Einstein-Maxwell-dilaton-axions (EMDA) model. Because the ground-state entropy density of the EMDA model vanishes in the insulating phase but not in the metallic phase, one would expect the HEE itself to characterize the QCPs. This expectation is validated in certain cases; however, we make a noteworthy observation: in a specific scenario it is not the HEE itself but rather its second-order derivative with respect to the lattice wave number that effectively characterizes the quantum phase transition (QPT). This distinction arises from the influence of thermal effects. These findings offer novel insights into the interplay between HEE and QPTs in the context of the MIT, and have significant implications for studying QPTs at finite temperature. Comment: 15 pages, 5 figures
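
    For readers outside holography, the HEE referred to here is the standard Ryu-Takayanagi area formula, and the diagnostic named in the abstract is its second derivative with respect to the lattice wave number; the expressions below are the standard definitions only, not the paper's actual computation.

        % Ryu-Takayanagi prescription for the entanglement entropy of a boundary region A,
        % where \gamma_A is the minimal bulk surface anchored on \partial A:
        S_A \;=\; \frac{\mathrm{Area}(\gamma_A)}{4 G_N},
        % and the diagnostic quantity for the QPT in the specific scenario discussed,
        % taken at fixed temperature as a function of the lattice wave number k:
        \left.\frac{\partial^{2} S_A}{\partial k^{2}}\right|_{T\ \text{fixed}}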

    Real3D-AD: A Dataset of Point Cloud Anomaly Detection

    High-precision point cloud anomaly detection is the gold standard for identifying defects in advanced machining and precision manufacturing. Despite some methodological advances in this area, the scarcity of datasets and the lack of a systematic benchmark hinder its development. We introduce Real3D-AD, a challenging high-precision point cloud anomaly detection dataset that addresses these limitations. With 1,254 high-resolution 3D items containing from forty thousand to millions of points each, Real3D-AD is the largest dataset for high-precision 3D industrial anomaly detection to date. Real3D-AD surpasses existing 3D anomaly detection datasets in point cloud resolution (0.0010 mm-0.0015 mm), 360-degree coverage, and perfect prototypes. Additionally, we present a comprehensive benchmark for Real3D-AD, revealing the absence of baseline methods for high-precision point cloud anomaly detection. To address this, we propose Reg3D-AD, a registration-based 3D anomaly detection method incorporating a novel feature memory bank that preserves local and global representations. Extensive experiments on the Real3D-AD dataset highlight the effectiveness of Reg3D-AD. For reproducibility and accessibility, we provide the Real3D-AD dataset, benchmark source code, and Reg3D-AD on our website: https://github.com/M-3LAB/Real3D-AD
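
    A bare-bones version of nearest-neighbor scoring against a memory bank of normal points is sketched below. Reg3D-AD additionally registers each test cloud to a template and stores learned local/global features; here, already-aligned raw coordinates stand in for those features as an assumption.

        import numpy as np
        from scipy.spatial import cKDTree

        def point_anomaly_scores(test_points, memory_bank_points):
            """Score each test point (Nx3) by its distance to the nearest point in a
            memory bank (Mx3) built from anomaly-free reference scans; assumes the
            clouds are already registered into a common frame."""
            tree = cKDTree(memory_bank_points)
            dists, _ = tree.query(test_points, k=1)   # per-point anomaly scores
            object_score = float(dists.max())         # simple object-level score
            return dists, object_score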

    White Matter Injury After Intracerebral Hemorrhage

    Spontaneous intracerebral hemorrhage (ICH) accounts for 15% of all stroke cases. ICH is a devastating form of stroke associated with high morbidity, mortality, and disability. Preclinical studies have explored the mechanisms of neuronal death and gray matter damage after ICH, but few studies have examined the development of white matter injury (WMI) following ICH. Research on WMI indicates that its pathophysiological presentation involves axonal damage, demyelination, and loss of mature oligodendrocytes; however, the detailed relationship and mechanisms linking WMI and ICH remain unclear. Studies of other acute brain insults indicate that WMI is strongly correlated with cognitive deficits, neurological deficits, and depression, and the degree of WMI determines the short- and long-term prognosis of patients with ICH. This review describes the structure and functions of white matter in the healthy brain and discusses the pathophysiological mechanisms of WMI following ICH. Our review shows that the development of WMI after ICH is complex; therefore, comprehensive treatment is essential. Understanding the relationship between WMI and other brain cells may reveal therapeutic targets for the treatment of ICH.