
    Fractal geometry of nature (bone) may inspire medical devices shape

    Medical devices, such as orthopaedic prostheses and dental implants, have been designed over the years on the strength of mechanical, clinical, and biological indications. This sequence is the commonly accepted cognitive and research process: adapting the device to the surrounding environment (host tissue). Inverting this traditional logical approach, we started from bone microarchitecture analysis. Here we show that a unique geometric rule seems to underlie different morphologic and functional aspects of human jaw bone tissue: the fractal properties of white trabeculae in low-quality bone are similar to the fractal properties of black spaces in high-quality bone, and vice versa. These data inspired a fractal bone quality classification and were the starting point for reverse engineering specific dental implant threads. We introduce a new philosophy: decoding bone and, with these data, encoding devices. In the future, the method will be extended to the analysis of other human or animal tissues in order to design medical devices and biomaterials with a microarchitecture driven by nature.
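
    The fractal analysis described here is commonly estimated via box counting on a binarized trabecular image. The sketch below is a generic box-counting dimension estimator, not the authors' specific pipeline; the box sizes and trimming strategy are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(image, sizes=(2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image.

    Counts occupied boxes N(s) at several box sizes s and fits
    log N(s) ~ -D * log s; the slope magnitude D is the estimate.
    """
    image = np.asarray(image, dtype=bool)
    counts = []
    for s in sizes:
        # Trim so the grid tiles the image exactly, then pool s x s blocks.
        h = (image.shape[0] // s) * s
        w = (image.shape[1] // s) * s
        blocks = image[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# A solid filled region is (locally) 2-dimensional.
solid = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(solid), 2))  # -> 2.0
```

    On real trabecular scans, the white (bone) and black (marrow space) phases would each be thresholded and measured separately to compare their fractal properties, as the abstract suggests.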

    Logic tensor networks for semantic image interpretation

    Semantic Image Interpretation (SII) is the task of extracting structured semantic descriptions from images. It is widely agreed that the combined use of visual data and background knowledge is of great importance for SII. Recently, Statistical Relational Learning (SRL) approaches have been developed for reasoning under uncertainty and learning in the presence of data and rich knowledge. Logic Tensor Networks (LTNs) are an SRL framework which integrates neural networks with first-order fuzzy logic to allow (i) efficient learning from noisy data in the presence of logical constraints, and (ii) reasoning with logical formulas describing general properties of the data. In this paper, we develop and apply LTNs to two of the main tasks of SII, namely the classification of an image's bounding boxes and the detection of the relevant part-of relations between objects. To the best of our knowledge, this is the first successful application of SRL to such SII tasks. The proposed approach is evaluated on a standard image processing benchmark. Experiments show that background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-the-art Fast Region-based Convolutional Neural Networks (Fast R-CNN). Moreover, we show that the use of logical background knowledge adds robustness to the learning system when errors are present in the labels of the training data.
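
    The core LTN idea of evaluating logical constraints on soft truth values can be sketched with fuzzy-logic connectives. This is a minimal illustration assuming product t-norm semantics and a toy grounding; in the actual framework, predicates such as cat(x) are grounded by neural networks over bounding-box features, not fixed arrays.

```python
import numpy as np

def fuzzy_not(a):
    return 1.0 - a

def fuzzy_and(a, b):        # product t-norm
    return a * b

def fuzzy_implies(a, b):    # Reichenbach implication: 1 - a + a*b
    return 1.0 - a + a * b

def forall(truths):         # universal quantifier as mean aggregation
    return float(np.mean(truths))

# Toy grounding: degree to which each of 3 boxes is a "cat" / an "animal".
cat = np.array([0.9, 0.1, 0.8])
animal = np.array([0.95, 0.2, 0.9])

# Truth value of the constraint  forall x: cat(x) -> animal(x)
constraint = forall(fuzzy_implies(cat, animal))
print(round(constraint, 3))  # -> 0.932
```

    During training, (1 - constraint) would serve as a loss term, pushing the neural groundings toward consistency with the background knowledge.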

    Rule By Example: Harnessing Logical Rules for Explainable Hate Speech Detection

    Classic approaches to content moderation typically apply a rule-based heuristic approach to flag content. While rules are easily customizable and intuitive for humans to interpret, they are inherently fragile and lack the flexibility or robustness needed to moderate the vast amount of undesirable content found online today. Recent advances in deep learning have demonstrated the promise of using highly effective deep neural models to overcome these challenges. However, despite the improved performance, these data-driven models lack transparency and explainability, often leading to mistrust from everyday users and a lack of adoption by many platforms. In this paper, we present Rule By Example (RBE): a novel exemplar-based contrastive learning approach for learning from logical rules for the task of textual content moderation. RBE is capable of providing rule-grounded predictions, allowing for more explainable and customizable predictions compared to typical deep learning-based approaches. We demonstrate that our approach is capable of learning rich rule embedding representations using only a few data examples. Experimental results on 3 popular hate speech classification datasets show that RBE is able to outperform state-of-the-art deep learning classifiers as well as the use of rules in both supervised and unsupervised settings while providing explainable model predictions via rule-grounding. Comment: ACL 2023 Main Conference.
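
    The rule-grounding idea can be sketched as scoring an input embedding against learned rule embeddings and attributing the prediction to the closest rule. This is a hypothetical illustration with toy vectors and invented rule names; RBE itself learns the embeddings with a contrastive objective over exemplars.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def ground_prediction(text_emb, rule_embs, rule_names, threshold=0.5):
    """Return (is_flagged, best_rule): flag the text if its embedding is
    close enough to some rule, and name that rule as the explanation."""
    scores = [cosine(text_emb, r) for r in rule_embs]
    best = int(np.argmax(scores))
    return scores[best] >= threshold, rule_names[best]

# Toy rule embeddings and names (illustrative only).
rules = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
names = ["slur-lexicon rule", "threat-pattern rule"]

flagged, rule = ground_prediction(np.array([0.9, 0.1]), rules, names)
print(flagged, rule)  # -> True slur-lexicon rule
```

    The returned rule name is what makes the prediction explainable: a moderator sees which logical rule the flagged content was grounded to, not just a score.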

    Data-driven decoding of quantum error correcting codes using graph neural networks

    To leverage the full potential of quantum error-correcting stabilizer codes, it is crucial to have an efficient and accurate decoder. Accurate maximum-likelihood decoders are computationally very expensive, whereas decoders based on more efficient algorithms give sub-optimal performance. In addition, the accuracy will depend on the quality of models and estimates of error rates for idling qubits, gates, measurements, and resets, and will typically assume symmetric error channels. In this work, we instead explore a model-free, data-driven approach to decoding, using a graph neural network (GNN). The decoding problem is formulated as a graph classification task in which a set of stabilizer measurements is mapped to an annotated detector graph for which the neural network predicts the most likely logical error class. We show that the GNN-based decoder can outperform a matching decoder for circuit-level noise on the surface code given only simulated experimental data, even if the matching decoder is given full information about the underlying error model. Although training is computationally demanding, inference is fast and scales approximately linearly with the space-time volume of the code. We also find that we can use large, but more limited, datasets of real experimental data [Google Quantum AI, Nature {\bf 614}, 676 (2023)] for the repetition code, giving decoding accuracies on par with minimum-weight perfect matching. The results show that a purely data-driven approach to decoding may be a viable future option for practical quantum error correction that is competitive in terms of speed, accuracy, and versatility. Comment: 15 pages, 12 figures.
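
    The mapping from stabilizer measurements to a detector graph can be sketched as follows: each detection event becomes a node annotated with its space-time coordinate, and nearby events are connected by edges. The event layout and cutoff distance here are illustrative assumptions; the GNN classifier that would consume this graph is omitted.

```python
import itertools
import numpy as np

def build_detector_graph(events, cutoff=2.0):
    """Turn detection events into a graph for classification.

    events: list of (x, y, t) space-time coordinates of detection events.
    Returns (nodes, edges): node feature vectors and index pairs for
    events within `cutoff` Euclidean distance of each other.
    """
    nodes = [np.array(e, dtype=float) for e in events]
    edges = []
    for i, j in itertools.combinations(range(len(nodes)), 2):
        if np.linalg.norm(nodes[i] - nodes[j]) <= cutoff:
            edges.append((i, j))
    return nodes, edges

# Three detection events; only the two nearby ones get linked.
nodes, edges = build_detector_graph([(0, 0, 0), (1, 0, 1), (5, 5, 0)])
print(len(nodes), edges)  # -> 3 [(0, 1)]
```

    A GNN then treats the whole annotated graph as one input and predicts the logical error class, which is why inference can scale with the space-time volume of the code rather than with decoder-specific matching structure.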

    3D Simulation-based Analysis of Individual and Group Dynamic Behaviour in Video Surveillance

    The visual behaviour analysis of individual and group dynamics is a subject of extensive research in both academia and industry. Despite recent technological advancements, however, the problem remains difficult. Most approaches concentrate on direct extraction and classification of graphical features from the video feed, analysing the behaviour directly from the source. The major obstacle, which impacts real-time performance, is the need to combine the processing of enormous volumes of video data with complex symbolic data analysis. In this paper, we present the results of the experimental validation of a new method for dynamic behaviour analysis in a visual analytics framework whose core is an agent-based, event-driven simulator. Our method uses only limited data extracted from the live video to analyse the activities monitored by surveillance cameras. By combining the ontology of the visual scene, which accounts for the logical features of the observed world, with patterns of dynamic behaviour approximating the visual dynamics of the world, the framework recognizes behaviour patterns on the basis of logical events rather than physical appearance. This approach has several advantages. First, the simulation reduces the complexity of data processing by eliminating the need for precise graphical data. Second, the granularity and precision of the analysed behaviour patterns can be controlled by the parameters of the simulation itself. The experiments demonstrate convincingly that the simulation generates data rich enough to analyse dynamic behaviour in real time with sufficient precision, which is entirely adequate for many video surveillance applications.
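
    Recognizing behaviour from logical events rather than pixels can be sketched as pattern matching over a stream of symbolic observations. The event schema, zone names, and the "loitering" pattern below are invented for illustration; the paper's framework uses an ontology-backed simulator rather than this direct matcher.

```python
def detect_loitering(events, zone, min_duration=30):
    """Flag an agent that stays inside `zone` for at least min_duration.

    events: time-ordered list of (timestamp, agent_id, zone_id)
    observations extracted from the video feed.
    """
    first_seen = {}
    for t, agent, z in events:
        if z == zone:
            first_seen.setdefault(agent, t)
            if t - first_seen[agent] >= min_duration:
                return agent
        else:
            first_seen.pop(agent, None)  # agent left the zone; reset timer
    return None

stream = [(0, "a1", "lobby"), (10, "a1", "lobby"),
          (20, "a1", "exit"), (25, "a2", "lobby"), (60, "a2", "lobby")]
print(detect_loitering(stream, "lobby"))  # -> a2  (stayed >= 30 s)
```

    Because the matcher consumes only coarse symbolic events, its precision is governed by the granularity of the event stream, mirroring the paper's point that simulation parameters, not pixel fidelity, control the analysis.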

    Neural Decoder for Topological Codes using Pseudo-Inverse of Parity Check Matrix

    Recent developments in the field of deep learning have motivated many researchers to apply these methods to problems in quantum information. Torlai and Melko first proposed a decoder for surface codes based on neural networks. Since then, many other researchers have applied neural networks to a variety of decoding problems. An important development in this regard was due to Varsamopoulos et al., who proposed a two-step decoder using neural networks. Subsequent work by Maskara et al. used the same concept for decoding under various noise models. We propose a similar two-step neural decoder using the pseudo-inverse of the parity-check matrix for topological color codes. We show that it outperforms the state-of-the-art performance of non-neural decoders for the independent Pauli error noise model on a 2D hexagonal color code. Our final decoder is independent of the noise model and achieves a threshold of 10%. Our result is comparable to the recent work on neural decoders for quantum error correction by Maskara et al. Our decoder appears to have significant advantages in training cost and network complexity at larger code lengths compared to that of Maskara et al. The proposed method can also be extended to arbitrary dimensions and other stabilizer codes. Comment: 12 pages, 12 figures, 2 tables; submitted to the 2019 IEEE International Symposium on Information Theory.
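
    The pseudo-inverse step can be illustrated on a toy code: a fixed right inverse P of the parity-check matrix H over GF(2) maps any syndrome s to a "pure error" e with H e = s, leaving only the logical class for a neural network to predict. The 3-qubit repetition-code H and the hand-picked P below are illustrative assumptions; the color-code construction in the paper is larger.

```python
import numpy as np

# Parity checks Z1Z2 and Z2Z3 of a 3-qubit repetition code.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

# Right pseudo-inverse over GF(2), found by inspection: (H @ P) % 2 == I.
P = np.array([[1, 0],
              [0, 0],
              [0, 1]])

def pure_error(syndrome):
    """Map a syndrome to one error consistent with it (mod-2 arithmetic)."""
    return (P @ np.asarray(syndrome)) % 2

e = pure_error([1, 0])  # only the first check fires
print(e)                # -> [1 0 0]
```

    Any actual error differs from this pure error only by a stabilizer or a logical operator, so the network's job reduces to classifying which logical class applies, independent of the noise model.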