
    An Exploration of Controlling the Content Learned by Deep Neural Networks

    With the great success of the Deep Neural Network (DNN), how to obtain a trustworthy model has attracted more and more attention. Generally, people provide the raw data to the DNN directly during training. However, the entire training process is a black box, in which the knowledge learned by the DNN is out of control, and this carries many risks. The most common one is overfitting. As research on neural networks has deepened, additional and probably greater risks have been discovered. Related research shows that unknown clues can hide in the training data because of the randomization of the data and the finite scale of the training set. Some of these clues build meaningless but explicit links between the input data and the output data, called "shortcuts". When the DNN makes its decisions based on these "shortcuts", the phenomenon is called "network cheating". Learning such shortcuts ruins the training and makes the DNN's performance unreliable. Therefore, we need to control the raw data used in training. In this dissertation, we name the explicit raw data "content" and the implicit logic learned by the DNN "knowledge". By quantifying the information in the DNN's training, we find that the information learned by the network is much less than the information contained in the dataset. This indicates that it is unnecessary to train the neural network with all of the information: training with partial information can achieve a similar effect to training with full information. In other words, it is possible to control the content fed into the DNN, and the strategy shown in this study can reduce the risks mentioned above (e.g., overfitting and shortcuts). Moreover, using reconstructed data (with partial information) to train the network can reduce the complexity of the network and accelerate the training. In this dissertation, we provide a pipeline to implement content control in DNN training and use a series of experiments to prove its feasibility in two applications: human brain anatomical structure analysis, and human pose detection and classification.
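
    The dissertation's pipeline is not reproduced here, so the following is only a minimal sketch of the core idea: feed the network a reconstructed, partial-information version of each input rather than the raw data. The helper names (reconstruct, SmallNet) and the downsample-then-upsample reconstruction are illustrative assumptions, not the author's actual method.

```python
# Hypothetical sketch of "content control": train on reconstructed,
# partial-information inputs instead of raw images. The reconstruction
# below (downsample then upsample) is a stand-in for the dissertation's
# actual pipeline, not taken from it.
import torch
import torch.nn as nn
import torch.nn.functional as F

def reconstruct(x, keep_ratio=0.25):
    """Crude partial-information proxy: discard high-frequency detail
    that may carry dataset-specific shortcuts."""
    h, w = x.shape[-2:]
    small = F.interpolate(x, scale_factor=keep_ratio, mode="bilinear",
                          align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear",
                         align_corners=False)

class SmallNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SmallNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 3, 64, 64), torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(reconstruct(x)), y)  # controlled content in
opt.zero_grad(); loss.backward(); opt.step()
```

    Swapping reconstruct for an identity function recovers ordinary raw-data training, which makes the two regimes easy to compare on the same model.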

    Train the Neural Network by Abstract Images

    Like a textbook for students' learning, the training data plays a significant role in the network's training. In most cases, people use big data to train the network, which leads to two problems. First, the knowledge learned by the network is out of control. Second, big data occupies a huge amount of storage space. In this paper, we use concept-based knowledge visualization [33] to visualize the knowledge learned by the model. Based on the observed results and information theory, we make three conjectures about the key information provided by the dataset. Finally, we use experiments to prove that artificially abstracted data can be used in network training, which solves the problems mentioned above. The experiment is designed around Mask R-CNN, which is used to detect and classify three typical human poses on construction sites.
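
    The paper's abstraction procedure and pose labels are not given in this abstract, so the sketch below only shows the standard torchvision recipe for adapting Mask R-CNN to a small number of pose classes and running one training step on a binary, silhouette-like image. The class count and the dummy target are assumptions for illustration.

```python
# Illustrative only: fine-tune torchvision's Mask R-CNN for a three-pose
# detection/classification task on abstracted (silhouette-like) images.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 4  # background + three (hypothetical) human pose classes
model = maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so the network predicts our classes.
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256,
                                                   num_classes)

# One dummy training step on an abstract, binary image.
img = (torch.rand(3, 256, 256) > 0.5).float()
target = {
    "boxes": torch.tensor([[30.0, 30.0, 200.0, 220.0]]),
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 256, 256, dtype=torch.uint8),
}
model.train()
losses = model([img], [target])  # dict of classification/box/mask losses
sum(losses.values()).backward()
```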

    Inflammatory cellular response and cytokines IL-1β, IL-6 and TNFα in rat and human spinal cord injury.

    The goal of this study was to characterize the post-traumatic inflammatory responses and localize the cellular sources of IL-1β, IL-6 and TNFα following SCI. It was hypothesized that the pro-inflammatory cytokines IL-1β, IL-6 and TNFα act as messengers that coordinate the inflammatory cascade in secondary SCI, and that the cytokine response is greater in severe than in mild injury.
    Thesis (Ph.D.) -- University of Adelaide, Dept. of Surgery (Neurosurgery) and Institute of Medical and Veterinary Science, Dept. of Neuropathology, 200

    Excitotoxic model of posttraumatic syringomyelia in the rat

    Thesis (M.S.) -- University of Adelaide, Dept. of Surgery, 199

    Metasurface array for single-shot spectroscopic ellipsometry

    Spectroscopic ellipsometry is a potent method that is widely adopted for measuring thin-film thickness and refractive index. However, a conventional ellipsometer, which relies on a mechanically rotating polarizer and a grating-based spectrometer for spectropolarimetric detection, is bulky, complex, and does not allow real-time measurements. Here, we demonstrate a compact metasurface-array-based spectroscopic ellipsometry system that allows single-shot spectropolarimetric detection and accurate determination of thin-film properties without any mechanical movement. The silicon-based metasurface array, with its highly anisotropic and diverse spectral response, is combined with iterative optimization to reconstruct the full Stokes polarization spectrum of the light reflected by the thin film with high fidelity. Subsequently, the film thickness and refractive index can be determined with high accuracy by fitting the measurement results to a proper material model. Our approach opens up a new pathway towards a compact and robust spectroscopic ellipsometry system for high-throughput measurement of thin-film properties.
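
    The abstract only names the reconstruction step, so here is a toy version of it under assumed shapes: each metasurface pixel is modeled as a linear analyzer whose calibrated response row maps the Stokes vector to an intensity reading, and the vector is recovered per wavelength by regularized least squares. The matrix A, the noise level, and the Tikhonov regularizer are assumptions; the real system solves a joint spectro-polarimetric problem with iterative optimization.

```python
# Toy single-wavelength Stokes reconstruction from metasurface readings.
# Assumed model: m[i] = A[i] @ s + noise, with A calibrated in advance.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64                                # metasurface elements
A = rng.uniform(-1, 1, (n_pixels, 4))        # diverse polarization responses
A[:, 0] = rng.uniform(0.5, 1.0, n_pixels)    # intensity response is positive

s_true = np.array([1.0, 0.3, -0.2, 0.1])     # ground-truth Stokes vector
m = A @ s_true + 0.01 * rng.standard_normal(n_pixels)  # noisy readings

# Tikhonov-regularized least squares: s = (A^T A + lam*I)^{-1} A^T m.
lam = 1e-3
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ m)
print(np.round(s_hat, 3))  # close to s_true; fidelity depends on A's diversity
```

    The "highly anisotropic and diverse spectral response" mentioned above is what keeps A well-conditioned, which is why the recovery works from a single shot.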

    Evolution of Exhibition Space Strategies in Smart Museums: A Historical Transition from Traditional to Digital

    Museums have long been regarded as important cultural institutions for preserving artefacts and artworks and presenting them to the public. Over time, the strategies employed to showcase these collections have changed significantly, influenced by technological advances and shifting visitor expectations. This paper explores the historical evolution of smart-museum exhibition space strategies, focusing on the transition from traditional to digital approaches. It highlights the shift from static displays to interactive and immersive experiences that foster visitor engagement and enhance cultural understanding. The integration of technology and multimedia has revolutionized exhibition spaces, enabling smart museums to present diverse narratives and perspectives. This evolution has enriched the visitor experience and expanded the cultural value of museums, making them more accessible and relevant to a broader audience.

    Knowledge Distillation based Contextual Relevance Matching for E-commerce Product Search

    Online relevance matching is an essential task in e-commerce product search for boosting the utility of search engines and ensuring a smooth user experience. Previous work adopts either classical relevance matching models or Transformer-style models to address it. However, these approaches ignore the inherent bipartite graph structures that are ubiquitous in e-commerce product search logs, and they are too inefficient to deploy online. In this paper, we design an efficient knowledge distillation framework for e-commerce relevance matching that integrates the respective advantages of Transformer-style models and classical relevance matching models. In particular, for the core student model of the framework, we propose a novel method using k-order relevance modeling. Experimental results on large-scale real-world data (with sizes from 6 to 174 million) show that the proposed method significantly improves prediction accuracy in terms of human relevance judgment. We deploy our method on an anonymized online search platform, where A/B testing shows that it improves UV-value by 5.7% under the price sort mode.
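
    The paper's k-order relevance modeling and graph features are not spelled out in this abstract; the sketch below shows only the generic teacher-student distillation objective such a framework would rest on, with a standard temperature-scaled soft-label term. The function name, temperature, and mixing weight are assumptions.

```python
# Generic knowledge distillation loss: a Transformer-style teacher
# supervises a lightweight student for relevance classification.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-label distillation."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale to keep gradient magnitude comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Dummy batch: binary relevant/irrelevant labels for (query, product) pairs.
student_logits = torch.randn(16, 2, requires_grad=True)
teacher_logits = torch.randn(16, 2)  # from the frozen teacher
labels = torch.randint(0, 2, (16,))
loss = kd_loss(student_logits, teacher_logits, labels)
loss.backward()
```

    At serving time only the student runs, which is how the framework keeps the Transformer teacher's accuracy while remaining cheap enough to deploy online.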

    Saliency-Aware Spatio-Temporal Artifact Detection for Compressed Video Quality Assessment

    Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade visual quality. Subjective and objective measures capable of identifying and quantifying the various types of PEAs are critical for improving visual quality. In this paper, we investigate the influence of four spatial PEAs (blurring, blocking, bleeding, and ringing) and two temporal PEAs (flickering and floating) on video quality. For the spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For the temporal artifacts, we improve the self-attention-based TimeSformer to detect them. Based on the six types of PEAs, we propose a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM). Experimental results demonstrate that the proposed metric outperforms state-of-the-art metrics. We believe SSTAM will be beneficial for optimizing video coding techniques.
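
    The abstract does not define how SSTAM fuses the six artifact scores, so the following is a placeholder illustration of saliency-weighted pooling: each artifact map is pooled with the saliency map as a weight, then combined with per-artifact weights. The detectors, weights, and the final mapping to a quality score are all assumptions, not the paper's actual formula.

```python
# Placeholder fusion of per-artifact strength maps into one quality score,
# weighted by visual saliency (artifacts in salient regions hurt more).
import numpy as np

def sstam_like_score(pea_maps, saliency, weights):
    """pea_maps: dict of artifact name -> HxW strength map in [0, 1].
    saliency: HxW map in [0, 1]. weights: per-artifact importance."""
    total = 0.0
    for name, amap in pea_maps.items():
        pooled = (amap * saliency).sum() / (saliency.sum() + 1e-8)
        total += weights[name] * pooled
    return 1.0 - total  # higher = better perceived quality (toy mapping)

h, w = 36, 64
rng = np.random.default_rng(1)
sal = rng.random((h, w))
peas = {k: rng.random((h, w)) * 0.1
        for k in ["blurring", "blocking", "bleeding", "ringing",
                  "flickering", "floating"]}
wts = {k: 1.0 / 6 for k in peas}
print(round(sstam_like_score(peas, sal, wts), 3))
```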