19,360 research outputs found

    Technical Dimensions of Programming Systems

    Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art. In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too. We present a framework of technical dimensions which captures the underlying characteristics of programming systems and provides a means for conceptualizing and comparing them. We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated, rather than relying solely on personal impressions. Our framework is derived from a qualitative analysis of past programming systems. We outline two concrete ways of using it. First, we show how it can be used to analyze a recently developed novel programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems. Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable and/or powerful, but such efforts are disconnected. They are informal, guided by the personal vision of their authors, and can thus be evaluated and compared only on the basis of individual experience using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.

    Efficacy of Information Extraction from Bar, Line, Circular, Bubble and Radar Graphs

    With the emergence of enormous amounts of data, numerous ways to visualize such data have been developed. The ubiquitous bar, circular, line, radar and bubble graphs were investigated for their effectiveness. Fourteen participants performed four types of evaluations: between categories (cities), within categories (transport modes within a city), across all categories, and a direct reading within a category from a graph. The representations were presented in random order, and participants were asked to respond to sixteen questions to the best of their ability after visually scanning the related graph. There were two trials on two separate days for each participant. Eye movements were recorded using an eye tracker. Bar and line graphs showed superiority over circular and radial graphs in effectiveness, efficiency, and perceived ease of use, primarily due to eye saccades. The radar graph had the worst performance. The “vibration-type” fill pattern could be improved by adding colors and symbolic fills. Design guidelines are proposed for the effective representation of data, so that the presentation and communication of information are effective.

    ARA-net: an attention-aware retinal atrophy segmentation network coping with fundus images

    Background: Accurately detecting and segmenting areas of retinal atrophy is paramount for early medical intervention in pathological myopia (PM). However, segmenting retinal atrophic areas in a two-dimensional (2D) fundus image poses several challenges, such as blurred boundaries, irregular shapes, and size variation. To overcome these challenges, we propose an attention-aware retinal atrophy segmentation network (ARA-Net) to segment retinal atrophy areas in 2D fundus images. Methods: The ARA-Net adopts a strategy similar to UNet to perform the area segmentation. A skip self-attention connection (SSA) block, comprising a shortcut and a parallel polarized self-attention (PPSA) block, is proposed to deal with the blurred boundaries and irregular shapes of the retinal atrophic region. Further, we propose a multi-scale feature flow (MSFF) to address the size variation. We add the flow between the SSA connection blocks, capturing considerable semantic information to detect retinal atrophy across a range of area sizes. Results: The proposed method has been validated on the Pathological Myopia (PALM) dataset. Experimental results demonstrate that our method yields a high Dice coefficient (DICE) of 84.26%, Jaccard index (JAC) of 72.80%, and F1-score of 84.57%, significantly outperforming other methods. Conclusion: Our results demonstrate that ARA-Net is an effective and efficient approach for retinal atrophic area segmentation in PM.
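    The DICE and JAC figures above are closely related overlap metrics; a minimal sketch of how both are computed for binary segmentation masks (numpy arrays with illustrative values, not from the PALM dataset):

```python
import numpy as np

def dice_and_jaccard(pred: np.ndarray, target: np.ndarray):
    """Dice coefficient and Jaccard index for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    jac = inter / np.logical_or(pred, target).sum()
    return dice, jac

# Tiny illustrative masks; the two metrics always satisfy J = D / (2 - D)
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
d, j = dice_and_jaccard(pred, target)
```

    Because the two scores are monotonically related, reporting both mainly aids comparison with prior work that uses one or the other.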

    Copy-paste data augmentation for domain transfer on traffic signs

    City streets carry a lot of information that can be exploited to improve the quality of the services citizens receive. For example, autonomous vehicles need to act according to all the elements near the vehicle itself, such as pedestrians, traffic signs and other vehicles. It is also possible to use such information for smart city applications, for example to predict and analyze traffic or pedestrian flows. Among all the objects found on a street, traffic signs are very important because of the information they carry, which can be exploited both for autonomous driving and for smart city applications. Deep learning and, more generally, machine learning models, however, need huge quantities of data to learn. Even though modern models are very good at generalizing, the more samples a model has, the better it can generalize between different samples. Creating these datasets organically, namely with real pictures, is a very tedious task because of the wide variety of signs in the world and especially because of all the possible lighting and orientation conditions in which they can appear. In addition, it may not be easy to collect enough samples for all the possible traffic signs, because some of them are very rare. Instead of collecting pictures manually, it is possible to exploit data augmentation techniques to create synthetic datasets containing the signs that are needed. Creating this data synthetically allows control over the distribution and the conditions of the signs in the datasets, improving the quality and quantity of the training data. This thesis is about using copy-paste data augmentation to create synthetic data for the traffic sign recognition task.
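    The core copy-paste operation can be sketched in a few lines; a minimal, hypothetical implementation assuming numpy images and a binary sign mask (the thesis's actual pipeline is not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed for reproducibility

def copy_paste(background, sign, mask):
    """Paste a sign crop onto a background image at a random location.

    background: (H, W, 3) array; sign: (h, w, 3) crop; mask: (h, w) binary
    mask selecting the sign's pixels. Returns the augmented image and the
    pasted bounding box (x, y, w, h), usable as a detection label.
    """
    H, W, _ = background.shape
    h, w, _ = sign.shape
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    out = background.copy()
    region = out[y:y + h, x:x + w]
    m = mask.astype(bool)[..., None]          # broadcast mask over channels
    out[y:y + h, x:x + w] = np.where(m, sign, region)
    return out, (x, y, w, h)

# Demo: paste a white 10x10 "sign" onto a black 100x100 background
bg = np.zeros((100, 100, 3), dtype=np.uint8)
sign = np.full((10, 10, 3), 255, dtype=np.uint8)
aug, box = copy_paste(bg, sign, np.ones((10, 10)))
```

    A production version would also randomize scale, rotation and lighting of the pasted crop, which is exactly the distribution control the abstract highlights.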

    An efficient, lightweight MobileNetV2-based fine-tuned model for COVID-19 detection using chest X-ray images

    In recent years, deep learning's success in identifying cancer, lung disease and heart disease, among others, has contributed to its rising popularity. Deep learning has also contributed to the examination of COVID-19, which is currently the focus of considerable scientific debate. COVID-19 detection based on chest X-ray (CXR) images primarily depends on convolutional neural network transfer learning techniques. Moreover, the majority of these methods are evaluated using CXR data from a single source, which makes them prohibitively expensive, and current methods may not perform as well across a variety of datasets. Most current approaches also focus on COVID-19 detection alone. This study introduces a rapid and lightweight MobileNetV2-based model for accurate recognition of COVID-19 from CXR images, using machine vision algorithms that focus largely on robust and potent feature-learning capabilities. The proposed model is assessed on a dataset obtained from various sources, which, in addition to COVID-19, includes bacterial and viral pneumonia; the model is thus capable of identifying COVID-19 as well as other lung disorders. Experiments with each model were thoroughly analyzed. According to the findings of this investigation, MobileNetV2, with its 92% and 93% training validity and 88% precision, was the most applicable and reliable model for this diagnosis. As a result, one may infer that this study has practical value in giving a reliable reference to the radiologist, and theoretical significance in establishing strategies for developing robust features with great presentation ability.
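    The per-class precision quoted above can be recovered from a confusion matrix; a hedged sketch, where the 3-class matrix below is purely illustrative and not taken from the paper:

```python
import numpy as np

def per_class_precision(cm: np.ndarray) -> np.ndarray:
    """Precision for each class from a confusion matrix.

    cm[i, j] = number of samples with true class i predicted as class j,
    so column sums give the totals predicted for each class.
    """
    predicted_totals = cm.sum(axis=0)
    true_positives = np.diag(cm)
    return true_positives / predicted_totals

# Illustrative 3-class matrix: COVID-19, bacterial, viral pneumonia
cm = np.array([[88,  7,  5],
               [ 6, 85,  9],
               [ 6,  8, 86]])
prec = per_class_precision(cm)
```

    With multi-source data of the kind the study uses, reporting per-class rather than pooled precision shows whether the rarer classes are actually being recognized.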

    SOFIA and ALMA Investigate Magnetic Fields and Gas Structures in Massive Star Formation: The Case of the Masquerading Monster in BYF 73

    We present SOFIA+ALMA continuum and spectral-line polarisation data on the massive molecular cloud BYF 73, revealing important details about the magnetic field morphology, gas structures, and energetics in this unusual massive star formation laboratory. The 154 ÎŒm HAWC+ polarisation map finds a highly organised magnetic field in the densest, inner 0.55 × 0.40 pc portion of the cloud, compared to an unremarkable morphology in the cloud's outer layers. The 3 mm continuum ALMA polarisation data reveal several more structures in the inner domain, including a pc-long, ~500 M⊙ "Streamer" around the central massive protostellar object MIR 2, with magnetic fields mostly parallel to the east-west Streamer but oriented north-south across MIR 2. The magnetic field orientation changes from mostly parallel to the column density structures to mostly perpendicular, at thresholds N_crit = 6.6×10^26 m^-2, n_crit = 2.5×10^11 m^-3, and B_crit = 42±7 nT. ALMA also mapped Goldreich-Kylafis polarisation in ^12CO across the cloud, which traces, in both total intensity and polarised flux, a powerful bipolar outflow from MIR 2 that interacts strongly with the Streamer. The magnetic field is also strongly aligned along the outflow direction; energetically, it may dominate the outflow near MIR 2, comprising rare evidence for a magnetocentrifugal origin to such outflows. A portion of the Streamer may be in Keplerian rotation around MIR 2, implying a gravitating mass of 1350±50 M⊙ for the protostar+disk+envelope; alternatively, these kinematics can be explained by gas in free fall towards a 950±35 M⊙ object. The high accretion rate onto MIR 2 apparently occurs through the Streamer/disk, and could account for ~33% of MIR 2's total luminosity via gravitational energy release.
Comment: 33 pages, 32 figures, accepted by ApJ. Line-Integral Convolution (LIC) images and movie versions of Figures 3b, 7, and 29 are available at https://gemelli.spacescience.org/~pbarnes/research/champ/papers
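    The Keplerian mass estimate quoted above follows from M = vÂČr/G for circular rotation; a small sketch with illustrative numbers (the rotation speed below is a placeholder, not a value from the paper):

```python
# Enclosed-mass estimate from Keplerian rotation, M = v^2 r / G.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
PC = 3.086e16          # parsec, m

def keplerian_mass_msun(v_kms: float, r_pc: float) -> float:
    """Mass (solar masses) enclosed within radius r for circular speed v."""
    v = v_kms * 1e3    # km/s -> m/s
    r = r_pc * PC
    return v * v * r / G / M_SUN

# e.g. a ~3.4 km/s rotation speed measured at 0.25 pc radius
m = keplerian_mass_msun(3.4, 0.25)
```

    Since M scales as vÂČ, the distinction the abstract draws between rotation (1350 M⊙) and free fall (950 M⊙) hinges on how the observed velocities are interpreted.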

    Evaluation of image quality and reconstruction parameters in recent PET-CT and PET-MR systems

    In this PhD dissertation, we propose to evaluate the impact of using different PET isotopes on the National Electrical Manufacturers Association (NEMA) performance evaluation of the GE Signa integrated PET/MR. The methods were divided into three closely related categories: NEMA performance measurements, system modelling, and evaluation of the image quality of state-of-the-art clinical PET scanners. NEMA performance measurements characterizing spatial resolution, sensitivity, image quality, the accuracy of attenuation and scatter corrections, and noise equivalent count rate (NECR) were performed using clinically relevant and commercially available radioisotopes. We then modelled the GE Signa integrated PET/MR system using a realistic GATE Monte Carlo simulation and validated it against the NEMA measurements (sensitivity and NECR). Next, the effect of the 3T MR field on the positron range was evaluated for F-18, C-11, O-15, N-13, Ga-68 and Rb-82. Finally, to evaluate the image quality of state-of-the-art clinical PET scanners, a noise reduction study was performed using a Bayesian penalized-likelihood reconstruction algorithm on a time-of-flight PET/CT scanner to investigate whether, and to what extent, noise can be reduced. The outcome of this thesis will allow clinicians to reduce the PET dose, which is especially relevant for young patients. In addition, the Monte Carlo simulation platform for PET/MR developed for this thesis will allow physicists and engineers to better understand and design integrated PET/MR systems.
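    The NECR figure of merit combines true, scattered, and random coincidence rates; a minimal sketch of one common form (NEMA NU 2 uses 2R in the denominator when randoms are estimated from a delayed window, so treat the exact denominator as an assumption, and the rates below are illustrative):

```python
def necr(trues: float, scatter: float, randoms: float) -> float:
    """Noise equivalent count rate, NECR = T^2 / (T + S + R)."""
    return trues ** 2 / (trues + scatter + randoms)

# Illustrative coincidence rates in kcps, not measured dissertation values
rate = necr(trues=200.0, scatter=80.0, randoms=120.0)
```

    Because T enters quadratically while S and R only add noise, NECR peaks at a finite activity, which is why it is reported as a curve over activity concentration in NEMA tests.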

    Augmented classification for electrical coil winding defects

    A green revolution has accelerated over recent decades, seeking to replace existing transportation power solutions through the adoption of greener electrical alternatives. In parallel, the digitisation of manufacturing has enabled progress in the tracking and traceability of processes, and improvements in fault detection and classification. This paper explores electrical machine manufacture and the challenges faced in identifying failure modes during this life cycle, demonstrating state-of-the-art machine vision methods for the classification of electrical coil winding defects. We demonstrate how recent generative adversarial networks can be used to augment training of these models to further improve their accuracy on this challenging task. Our approach utilises pre-processing and dimensionality reduction to boost the performance of a standard convolutional neural network (CNN), leading to a significant increase in accuracy.
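    A dimensionality-reduction step of the kind mentioned can be sketched with PCA via the SVD; a minimal numpy version with illustrative shapes (the paper's actual pre-processing is not specified here):

```python
import numpy as np

def pca_reduce(X: np.ndarray, k: int) -> np.ndarray:
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                    # center each feature
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))    # 50 samples, 8 illustrative features
Z = pca_reduce(X, 3)            # reduced to 3 components
```

    Reducing the input dimensionality this way shrinks the CNN's input space, which can help when, as here, labeled defect images are scarce.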

    Diagnosis of Pneumonia Using Deep Learning

    Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines or software that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning and problem solving. Deep learning is a collection of algorithms used in machine learning; it is part of a broad family of methods based on learning representations of data. Deep learning can be used to produce pneumonia detection and classification models from X-ray imaging, for rapid and easy detection and identification of pneumonia. In this thesis, we review ways and mechanisms to use deep learning techniques to produce a model for pneumonia detection. The goal is to find a good and effective way to detect pneumonia from X-rays, helping the chest doctor make decisions easily, accurately and quickly. The model will be designed and implemented, covering both the image dataset and pneumonia detection through deep learning algorithms based on neural networks. The testing and evaluation will be applied to a range of chest X-ray images, and the results will be presented in detail and discussed. This thesis thus uses deep learning to detect and classify pneumonia.
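    The convolutional building block underlying such models can be illustrated in a few lines; a minimal sketch (not the thesis's actual architecture) showing a 'valid' convolution followed by a ReLU activation:

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Valid' 2D cross-correlation plus ReLU, the basic CNN building block."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)            # ReLU activation

# A vertical-edge kernel responding to an intensity step in a toy patch
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1.0, 1.0]])
feat = conv2d_valid(patch, edge)
```

    Stacks of such learned filters are what let a network pick out the opacities and consolidations that distinguish pneumonia in a chest X-ray.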

    Non-Thermal Optical Engineering of Strongly-Correlated Quantum Materials

    This thesis develops multiple optical engineering mechanisms to modulate the electronic, magnetic, and optical properties of strongly-correlated quantum materials, including polar metals, transition metal trichalcogenides, and copper oxides. We established the mechanisms of Floquet engineering and magnon bath engineering, and used optical probes, especially optical nonlinearity, to study the dynamics of these quantum systems. Strongly-correlated quantum materials host complex interactions between different degrees of freedom, offering a rich phase diagram to explore both in and out of equilibrium. While static methods of tuning these phases have seen great success, emerging optical engineering methods provide a more versatile platform. For optical engineering, the key to success lies in achieving the desired tuning while suppressing unwanted side effects such as laser heating. We used sub-gap optical driving to avoid electronic excitation, which allowed us to couple directly to low-energy excitations and to induce coherent light-matter interactions. To elucidate the exact microscopic mechanisms of the optical engineering effects, we performed photon energy-dependent measurements and thorough theoretical analysis. To experimentally access the engineered quantum states, we leveraged various probe techniques, including symmetry-sensitive optical second harmonic generation (SHG), and performed pump-probe experiments to study the dynamics of quantum materials. I will first introduce the background and motivation of this thesis, with an emphasis on the principles of optical engineering within the big picture of achieving quantum material properties on demand (Chapter I). I will then introduce the main probe technique used in this thesis, SHG, as well as the experimental setups that we developed and used for the work contained in this thesis (Chapter II). 
In Chapter III, I will introduce an often overlooked aspect of SHG studies: using SHG to study short-range structural correlations. Chapter IV contains the theoretical analysis and experimental realizations of using sub-gap and resonant optical driving to tune the electronic and optical properties of MnPS₃. The main tuning mechanism in this chapter is Floquet engineering, where light modulates material properties without being absorbed. In Chapter V, I will turn to another useful material property: magnetism. First, I will describe the extension of the Floquet mechanism to the renormalization of the spin exchange interaction. Then I will switch gears and describe demagnetization in Sr₂Cu₃O₄Cl₂ by resonant coupling between photons and magnons. I will end the thesis with a brief closing remark (Chapter VI).
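    As background for the symmetry sensitivity of SHG mentioned above: in the electric-dipole approximation, the second-order polarization radiating at twice the drive frequency is

```latex
P_i(2\omega) = \epsilon_0 \sum_{j,k} \chi^{(2)}_{ijk}\, E_j(\omega)\, E_k(\omega)
```

    Under spatial inversion, P and E both flip sign while the tensor of a centrosymmetric crystal must stay invariant, which forces χ⁝ÂČ⟠= 0. SHG therefore appears only where inversion symmetry is broken, which is what makes it a sensitive probe of structural and magnetic order.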