12,179 research outputs found

    Interpretable and explainable machine learning for ultrasonic defect sizing

    Despite its popularity in the literature, there are few examples of machine learning (ML) being used for industrial nondestructive evaluation (NDE) applications. A significant barrier is the ‘black box’ nature of most ML algorithms. This paper aims to improve the interpretability and explainability of ML for ultrasonic NDE by presenting a novel dimensionality reduction method: Gaussian feature approximation (GFA). GFA involves fitting a 2D elliptical Gaussian function to an ultrasonic image and storing the seven parameters that describe each Gaussian. These seven parameters can then be used as inputs to data analysis methods such as the defect sizing neural network presented in this paper. GFA is applied to ultrasonic defect sizing for inline pipe inspection as an example application. This approach is compared to sizing with the same neural network and two other dimensionality reduction methods (the parameters of 6 dB drop boxes and principal component analysis), as well as a convolutional neural network applied to raw ultrasonic images. Of the dimensionality reduction methods tested, GFA features produce the closest sizing accuracy to sizing from the raw images, with only a 23% increase in RMSE, despite a 96.5% reduction in the dimensionality of the input data. Implementing ML with GFA is implicitly more interpretable than doing so with principal component analysis or raw images as inputs, and gives significantly better sizing accuracy than 6 dB drop boxes. Shapley additive explanations (SHAP) are used to calculate how each feature contributes to the prediction of an individual defect’s length. Analysis of SHAP values demonstrates that the proposed GFA-based neural network displays many of the same relationships between defect indications and their predicted size as occur in traditional NDE sizing methods.
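    The GFA step described above can be illustrated with a minimal sketch: fit a rotated 2D elliptical Gaussian (amplitude, centre x/y, widths sigma_x/sigma_y, rotation angle, and offset — seven parameters) to an image with least squares, and keep those parameters as the reduced feature vector. The parameterization and fitting routine below are assumptions for illustration; the paper's exact formulation may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def elliptical_gaussian(coords, amp, x0, y0, sx, sy, theta, offset):
        # Rotated 2D elliptical Gaussian; seven free parameters in total.
        x, y = coords
        a = np.cos(theta)**2 / (2 * sx**2) + np.sin(theta)**2 / (2 * sy**2)
        b = -np.sin(2 * theta) / (4 * sx**2) + np.sin(2 * theta) / (4 * sy**2)
        c = np.sin(theta)**2 / (2 * sx**2) + np.cos(theta)**2 / (2 * sy**2)
        return (offset + amp * np.exp(-(a * (x - x0)**2
                                        + 2 * b * (x - x0) * (y - y0)
                                        + c * (y - y0)**2))).ravel()

    def gfa_features(image):
        """Fit one elliptical Gaussian to an image; return its 7 parameters."""
        ny, nx = image.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        # Initial guess: brightest pixel as centre, rough widths from image size.
        iy, ix = np.unravel_index(np.argmax(image), image.shape)
        p0 = [image.max() - image.min(), ix, iy, nx / 4, ny / 4, 0.0, image.min()]
        popt, _ = curve_fit(elliptical_gaussian, (xx, yy), image.ravel(), p0=p0)
        return popt  # [amp, x0, y0, sigma_x, sigma_y, theta, offset]
    ```

    The seven fitted values would then replace the full pixel grid as the input to a downstream sizing model, which is what makes the features directly inspectable (e.g. sigma_x relates to indication width) compared with PCA components or raw pixels.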

    A Pairwise Dataset for GUI Conversion and Retrieval between Android Phones and Tablets

    With the popularity of smartphones and tablets, users have become accustomed to using different devices for different tasks, such as using their phones to play games and tablets to watch movies. To conquer the market, one app is often available on both smartphones and tablets. However, although an app has similar graphical user interfaces (GUIs) and functionalities on phone and tablet, current app developers typically start from scratch when developing a tablet-compatible version of their app, which drives up development costs and wastes existing design resources. Researchers are attempting to employ deep learning in automated GUI development to enhance developers' productivity. Deep learning models rely heavily on high-quality datasets. There are currently several publicly accessible GUI page datasets for phones, but none for pairwise GUIs between phones and tablets. This poses a significant barrier to the employment of deep learning in automated GUI development. In this paper, we collect and make public the Papt dataset, a pairwise dataset for GUI conversion and retrieval between Android phones and tablets. The dataset contains 10,035 phone-tablet GUI page pairs from 5,593 phone-tablet app pairs. We describe our approach to collecting pairwise data and present a statistical analysis of the dataset. We also illustrate the advantages of our dataset compared to other current datasets. Through preliminary experiments on this dataset, we analyse the present challenges of utilising deep learning in automated GUI development and find that our dataset can assist the application of some deep learning models to tasks involving automatic GUI development. (Comment: 10 pages, 9 figures)

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.

    Automated Mapping of Adaptive App GUIs from Phones to TVs

    With the increasing interconnection of smart devices, users often desire to adopt the same app on quite different devices for identical tasks, such as watching the same movies on both their smartphones and TV. However, the significant differences in screen size, aspect ratio, and interaction styles make it challenging to adapt Graphical User Interfaces (GUIs) across these devices. Although there are millions of apps available on Google Play, only a few thousand are designed to support smart TV displays. Existing techniques to map a mobile app GUI to a TV either adopt a responsive design, which struggles to bridge the substantial gap between phone and TV, or use mirror apps for improved video display, which requires hardware support and extra engineering effort. Instead of developing another app for supporting TVs, we propose a semi-automated approach to generate corresponding adaptive TV GUIs, given the phone GUIs as the input. Based on our empirical study of GUI pairs for TV and phone in existing apps, we synthesize a list of rules for grouping and classifying phone GUIs, converting them to TV GUIs, and generating dynamic TV layouts and source code for the TV display. Our tool is beneficial not only to developers but also to GUI designers, who can further customize the generated GUIs for their TV app development. An evaluation and user study demonstrate the accuracy of our generated GUIs and the usefulness of our tool. (Comment: 30 pages, 15 figures)

    Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse

    This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses. This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups. In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users’ speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018 (6 November being the date of the midterm, i.e. non-Presidential, elections in the United States). The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent finds that there are regular sociodemographic differentials across clusters.
This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.

    Leveraging a machine learning based predictive framework to study brain-phenotype relationships

    An immense collective effort has been put towards the development of methods for quantifying brain activity and structure. In parallel, a similar effort has focused on collecting experimental data, resulting in ever-growing data banks of complex human in vivo neuroimaging data. Machine learning, a broad set of powerful and effective tools for identifying multivariate relationships in high-dimensional problem spaces, has proven to be a promising approach toward better understanding the relationships between the brain and different phenotypes of interest. However, applying machine learning within a predictive framework to the study of neuroimaging data introduces several domain-specific problems and considerations, leaving the overarching question of how to best structure and run experiments ambiguous. In this work, I cover two explicit pieces of this larger question: the relationship between data representation and predictive performance, and a case study on issues related to data collected from disparate sites and cohorts. I then present the Brain Predictability toolbox, a software package that explicitly codifies and makes more broadly accessible to researchers the recommended steps in performing a predictive experiment, everything from framing a question to reporting results. This unique perspective ultimately offers recommendations, explicit analytical strategies, and example applications for using machine learning to study the brain.

    High-throughput Tools and Techniques to Investigate Environmental Effects on Aging Behaviors in Caenorhabditis elegans

    Aging is modulated by genetic and environmental cues; however, it is difficult to study how these perturbations modulate the aging process in a robust, high-throughput manner. Methods to gather large-scale behavioral data for aging studies are labor-intensive, lack individual-level resolution, or lack precise spatiotemporal environmental control. In addition, tools to analyze large-scale behavioral data sets are difficult to scale, unable to be broadly applied across complex environments, or fail to detect subtle behavioral changes. In this thesis I develop tools to enable robust microfluidic culture and behavioral analysis of C. elegans, to examine how environmental cues, such as dietary restriction, influence longevity and behavior with age. In Aim 1, I engineer a robust pipeline for the long-term longitudinal culture and behavioral monitoring of C. elegans in aging studies with precise spatiotemporal environmental control. In Aim 2, I develop a flexible deep learning based pipeline for detecting and extracting postural information from large-scale behavioral datasets across heterogeneous environments. In Aim 3, I characterize how the full behavioral repertoire of individuals changes with age, and examine how these age-related behavioral changes are modulated by different dietary restriction regimes. The completion of this thesis provides 1) a new toolset to robustly explore how genetic or environmental effects influence longevity and healthspan, 2) a flexible pipeline for analyzing large-scale behavioral data in C. elegans, and 3) insight into how environmental perturbations influence health through age-related changes in behavior. (Ph.D. thesis)

    Intelligent computing : the latest advances, challenges and future

    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence, and the Internet of Things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing have long followed different paths of evolution and development but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing.

    Inclusive Intelligent Learning Management System Framework - Application of Data Science in Inclusive Education

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. Being a disabled student, the author faced higher education with a handicap; the experience of studying during COVID-19 confinement periods matched the findings of recent research on the importance of digital accessibility in more e-learning-intensive academic experiences. Narrative and systematic literature reviews provided context on the World Health Organization’s International Classification of Functioning, Disability and Health, the legal and standards framework, and the state of the art in information and communication technology. Assessing Portuguese higher education institutions’ websites revealed that only outlying institutions had implemented near-perfect websites, accessibility-wise. A gap was therefore identified between how accessible Portuguese higher education websites are, the needs of all students, including those with disabilities, and even the minimum legal accessibility requirements for digital products and services provided by public or publicly funded organizations. Identifying a problem in society and exploring the scientific base of knowledge for context and state of the art was the first stage of the Design Science Research methodology, followed by development and validation cycles of an Inclusive Intelligent Learning Management System Framework. The framework blends contributions from various fields of Data Science with accessibility-guideline-compliant interface design and content-upload accessibility-compliance assessment. Validation was provided by a focus group whose inputs were incorporated into the version presented in this dissertation.
As it was not the purpose of the research to deliver a complete implementation of the framework, and consistent data to have all the modules interacting with each other was lacking, the most relevant modules were tested with open data as a proof of concept. The rigor cycle of DSR started with the inclusion of the previous thesis in the Atlântica University Institute Scientific Repository and is to be completed with the publication of this thesis and of the already-started PhD’s findings in relevant journals and conferences.