
    Brain enhancement through cognitive training: A new insight from brain connectome

    Owing to recent advances in neurotechnology and progress in the understanding of brain cognitive functions, improving cognitive performance or accelerating learning with brain enhancement systems is no longer out of reach; on the contrary, it is a tangible target of contemporary research. Although a variety of approaches have been proposed, we focus mainly on cognitive training interventions, in which learners repeatedly perform cognitive tasks to improve their cognitive abilities. In this review article, we propose that the learning process during cognitive training can be facilitated by an assistive system that monitors cognitive workload using electroencephalography (EEG) biomarkers, and that the brain connectome approach can provide additional valuable biomarkers for facilitating learners' progress. For this purpose, we introduce studies on cognitive training interventions, EEG biomarkers of cognitive workload, and the human brain connectome. As cognitive overload and mental fatigue can reduce or even eliminate the gains of cognitive training interventions, real-time monitoring of cognitive workload can facilitate learning by flexibly adjusting the difficulty level of the training task. Moreover, cognitive training interventions act on brain sub-networks rather than on a single brain region, and graph-theoretical network metrics quantifying the topological architecture of the brain network can differentiate individual cognitive states as well as different individuals' cognitive abilities, suggesting that the connectome is a valuable approach for tracking learning progress. Although only a few studies have so far exploited the connectome approach to study alterations of the brain network induced by cognitive training interventions, we believe it will be a useful technique for capturing improvements in cognitive function.
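
    As a concrete illustration of the graph-theoretical metrics this abstract refers to, the sketch below derives two common topological measures from an EEG functional-connectivity matrix using networkx. The random coherence matrix, the channel count, and the 0.8 threshold are illustrative assumptions, not the authors' pipeline.

```python
# A minimal sketch (not the authors' method) of computing graph-theoretical
# network metrics from an EEG connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_channels = 32
coherence = rng.random((n_channels, n_channels))   # stand-in for real EEG coherence
coherence = (coherence + coherence.T) / 2          # symmetrize
np.fill_diagonal(coherence, 0.0)

# Threshold to a binary adjacency matrix; the 0.8 cutoff is arbitrary here,
# real connectome studies typically sweep a range of network densities.
adjacency = (coherence > 0.8).astype(int)
graph = nx.from_numpy_array(adjacency)

# Topological metrics commonly reported in connectome studies.
clustering = nx.average_clustering(graph)
efficiency = nx.global_efficiency(graph)
print(f"clustering={clustering:.3f}, global efficiency={efficiency:.3f}")
```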

    Notions of explainability and evaluation approaches for explainable artificial intelligence

    Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed, and tested, coupled with several studies attempting to define the concept of explainability and its evaluation. This systematic review contributes to the body of knowledge by clustering the scientific literature via a hierarchical system that classifies theories and notions related to the concept of explainability, together with the evaluation approaches for XAI methods. The structure of this hierarchy builds on an exhaustive analysis of existing taxonomies and peer-reviewed scientific material. Findings suggest that scholars have identified numerous notions and requirements that an explanation should meet in order to be easily understandable by end-users and to provide actionable information that can inform decision making. They have also suggested various approaches for assessing the degree to which machine-generated explanations meet these demands. Overall, these approaches can be clustered into human-centred evaluations and evaluations with more objective metrics. However, despite the vast body of knowledge developed around the concept of explainability, there is no general consensus among scholars on how an explanation should be defined, nor on how its validity and reliability should be assessed. Finally, this review critically discusses these gaps and limitations, and defines future research directions that treat explainability as a starting component of any artificial intelligent system.
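
    To make the "objective metrics" cluster concrete, the sketch below computes one widely used example: fidelity of a global surrogate, i.e. how closely an interpretable model reproduces a black box's predictions. The dataset, models, and depth limit are illustrative assumptions, not drawn from the review.

```python
# A minimal sketch of an objective XAI evaluation: surrogate fidelity.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global surrogate: a shallow, human-readable tree trained to mimic the
# black box's predictions rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: agreement between surrogate and black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity to black box: {fidelity:.2%}")
```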

    Using structural and semantic methodologies to enhance biomedical terminologies

    Biomedical terminologies and ontologies underlie various Health Information Systems (HISs), Electronic Health Record (EHR) systems, Health Information Exchanges (HIEs), and health administrative systems. Moreover, the proliferation of interdisciplinary research efforts in the biomedical field is fueling the need to overcome terminological barriers when integrating knowledge from different fields into a unified research project. Well-developed and well-maintained terminologies are therefore in high demand. Most biomedical terminologies are large and complex, which makes it impossible for human experts to manually detect and correct all errors and inconsistencies. Automated and semi-automated Quality Assurance methodologies that focus on areas more likely to contain errors and inconsistencies are therefore important. In this dissertation, structural and semantic methodologies are used to enhance biomedical terminologies. The dissertation work is divided into three major parts. The first part consists of structural auditing techniques for the Semantic Network of the Unified Medical Language System (UMLS), which serves as a vocabulary knowledge base for biomedical research in various applications. Techniques are presented for automatically identifying and preventing erroneous semantic type assignments to concepts. The Web-based adviseEditor system is introduced to help UMLS editors make correct multiple semantic type assignments to concepts. It has been made available to the National Library of Medicine for future use in maintaining the UMLS. The second part of the dissertation addresses how to enhance the conceptual content of SNOMED CT through semantic harmonization. By 2015, SNOMED CT will become the standard terminology for EHR encoding of diagnoses and problem lists. In order to enrich the semantics and coverage of SNOMED CT for clinical and research applications, the problem of semantic harmonization between SNOMED CT and six reference terminologies is approached by 1) comparing the vertical density of SNOMED CT with the reference terminologies to find potential concepts for export and import, and 2) categorizing the relationships between structurally congruent concepts from pairs of terminologies, with SNOMED CT being one terminology in each pair. Six kinds of configurations are observed, e.g., alternative classifications and suggested synonyms. For each configuration, a corresponding solution is presented for enhancing one or both of the terminologies. The third part applies Quality Assurance techniques based on “Abstraction Networks” to biomedical ontologies in BioPortal. The National Center for Biomedical Ontology provides BioPortal as a repository of over 350 biomedical ontologies covering a wide range of domains. It is extremely difficult to design a new Quality Assurance methodology for each ontology in BioPortal. Fortunately, groups of ontologies in BioPortal share common structural features, and can thus be grouped into families based on combinations of these features. A uniform Quality Assurance methodology designed for each family achieves improved efficiency, which is critical given the limited Quality Assurance resources available to most ontology curators. In this dissertation, a family-based framework covering 186 BioPortal ontologies and accompanying Quality Assurance methods based on abstraction networks are presented to tackle this problem.
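
    The core abstraction-network idea, partitioning concepts by shared structural features and flagging structurally unusual ones for review, can be sketched in a few lines. The toy ontology, relationship types, and the "singleton area" audit heuristic below are fabricated for illustration and are not the dissertation's actual algorithms.

```python
# A minimal sketch of the abstraction-network idea: group concepts into
# "areas" by the set of relationship types they participate in, then flag
# unusually small areas as quality-assurance candidates.
from collections import defaultdict

concept_relationships = {
    "Myocardial infarction": {"is_a", "finding_site", "causative_agent"},
    "Pneumonia":             {"is_a", "finding_site", "causative_agent"},
    "Fracture of femur":     {"is_a", "finding_site"},
    "Appendicitis":          {"is_a", "finding_site"},
    "Oddball concept":       {"is_a", "part_of"},   # structural outlier
}

areas = defaultdict(list)
for concept, rel_types in concept_relationships.items():
    areas[frozenset(rel_types)].append(concept)

for rel_types, members in areas.items():
    flag = "  <-- small area, audit candidate" if len(members) == 1 else ""
    print(sorted(rel_types), "->", members, flag)
```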

    How Does Refactoring Impact Security When Improving Quality? A Security Aware Refactoring

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/155871/1/RefactoringSecurityQMOOD__ICSE____Copy_.pd

    Enhancing health risk prediction with deep learning on big data and revised fusion node paradigm

    With recent advances in health systems, the amount of health data is expanding rapidly and in various formats. These data originate from many new sources, including digital records, mobile devices, and wearable health devices. Big health data offers more opportunities for health data analysis and for the enhancement of health services via innovative approaches. The objective of this research is to develop a framework to enhance health prediction using a revised fusion node and deep learning paradigms. The fusion node is an information fusion model for constructing prediction systems. Deep learning involves the complex application of machine-learning algorithms, such as Bayesian fusion and neural networks, for data extraction and logical inference. Deep learning, combined with information fusion paradigms, can be utilized to provide more comprehensive and reliable predictions from big health data. Based on the proposed framework, an experimental system is developed as an illustration of the framework implementation.
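
    The fusion idea can be illustrated with a simple stand-in: a node that combines the probabilistic outputs of several learners into one risk score. The models, the 0.4/0.6 weights, and the synthetic data below are illustrative assumptions, not the paper's revised fusion node paradigm.

```python
# A minimal sketch of a fusion node: a weighted average of member models'
# predicted risk probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

bayes = GaussianNB().fit(X_tr, y_tr)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=1).fit(X_tr, y_tr)

# Fusion node: weighted combination of the members' probability outputs.
weights = np.array([0.4, 0.6])   # assumed weights, not from the paper
fused = weights[0] * bayes.predict_proba(X_te) + weights[1] * net.predict_proba(X_te)
risk = fused[:, 1]               # probability of the positive (high-risk) class
print("fused accuracy:", ((risk > 0.5) == y_te).mean())
```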

    Multi-task deep learning for large-scale building detail extraction from high-resolution satellite imagery

    Understanding urban dynamics and promoting sustainable development requires comprehensive insights into buildings. While geospatial artificial intelligence has advanced the extraction of such details from Earth observational data, existing methods often suffer from computational inefficiencies and inconsistencies when compiling unified building-related datasets for practical applications. To bridge this gap, we introduce the Multi-task Building Refiner (MT-BR), an adaptable neural network tailored for the simultaneous extraction of spatial and attributional building details from high-resolution satellite imagery, exemplified by building rooftops, urban functional types, and roof architectural types. Notably, MT-BR can be fine-tuned to incorporate additional building details, extending its applicability. For large-scale applications, we devise a novel spatial sampling scheme that strategically selects limited but representative image samples. This scheme optimizes both the spatial distribution of samples and the urban environmental characteristics they contain, enhancing extraction effectiveness while curtailing data preparation expenditure. We further enhance MT-BR's predictive performance and generalization capabilities through the integration of advanced augmentation techniques. Our quantitative results highlight the efficacy of the proposed methods. Specifically, networks trained with datasets curated via our sampling method demonstrate improved predictive accuracy relative to those using alternative sampling approaches, with no alterations to the network architecture. Moreover, MT-BR consistently outperforms other state-of-the-art methods in extracting building details across various metrics. Its real-world practicality is demonstrated in an application across Shanghai, generating a unified dataset that encompasses both the spatial and attributional details of buildings.
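
    The multi-task pattern the abstract describes, one shared encoder feeding a dense rooftop-segmentation head plus image-level attribute heads, is sketched below in PyTorch. The layer sizes, class counts, and head design are illustrative assumptions; this is not the MT-BR architecture itself.

```python
# A minimal sketch of a multi-task network for building detail extraction:
# a shared encoder with one segmentation head and two classification heads.
import torch
import torch.nn as nn

class MultiTaskBuildingNet(nn.Module):
    def __init__(self, n_functional_types=8, n_roof_types=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Dense head: per-pixel rooftop mask logits.
        self.rooftop_head = nn.Conv2d(64, 1, 1)
        # Image-level heads: urban functional type and roof architectural type.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.functional_head = nn.Linear(64, n_functional_types)
        self.roof_head = nn.Linear(64, n_roof_types)

    def forward(self, x):
        feats = self.encoder(x)
        pooled = self.pool(feats).flatten(1)
        return {
            "rooftop_mask": self.rooftop_head(feats),
            "functional_type": self.functional_head(pooled),
            "roof_type": self.roof_head(pooled),
        }

out = MultiTaskBuildingNet()(torch.randn(2, 3, 128, 128))
print({k: tuple(v.shape) for k, v in out.items()})
```

    In practice, the per-task losses (e.g., pixel-wise binary cross-entropy for the mask, cross-entropy for the attribute heads) would be weighted and summed into a single training objective.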

    Modeling Crowd Feedback in the Mobile App Market

    Mobile application (app) stores, such as Google Play and the Apple App Store, have recently emerged as a new model of online distribution platform. These stores have expanded in size over the past five years to host millions of apps, offering end-users of mobile software virtually unlimited options to choose from. In such a competitive market, no app is too big to fail; in fact, recent evidence has shown that most apps lose their users within the first 90 days after initial release. App developers therefore have to remain up-to-date with their end-users' needs in order to survive. Staying close to the user not only minimizes the risk of failure, but also serves as a key factor in achieving market competitiveness as well as in managing and sustaining innovation. However, establishing effective communication channels with app users can be a very challenging and demanding process. Specifically, users' needs are often tacit, embedded in the complex interplay between the user, system, and market components of the mobile app ecosystem. Furthermore, such needs are scattered over multiple feedback channels, such as app store reviews and social media platforms. To address these challenges, in this dissertation we incorporate methods of requirements modeling, data mining, domain engineering, and market analysis to develop a novel set of algorithms and tools for automatically classifying, synthesizing, and modeling the crowd's feedback in the mobile app market. Our analysis includes a set of empirical investigations and case studies, utilizing multiple large-scale datasets of mobile user data, to devise, calibrate, and validate our algorithms and tools. The main objective is to introduce a new form of crowd-driven software models that app developers can use to effectively identify and prioritize their end-users' concerns, develop apps that meet these concerns, and uncover optimized pathways of survival in the mobile app ecosystem.
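
    The classification step described above can be sketched with a standard text-classification pipeline: routing raw reviews into requirements-relevant categories. The tiny labeled sample, the bug/feature/other category set, and the model choice are fabricated for illustration and are not the dissertation's actual algorithms.

```python
# A minimal sketch of classifying app-store reviews into feedback categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "app crashes every time I open the camera",
    "please add a dark mode option",
    "love this app, five stars",
    "freezes on login since the last update",
    "would be great to export data as csv",
    "great design and easy to use",
]
labels = ["bug", "feature", "other", "bug", "feature", "other"]

# TF-IDF features feeding a linear classifier; real systems would train on
# thousands of labeled reviews.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(reviews, labels)
print(clf.predict(["keeps crashing on startup", "add support for widgets"]))
```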

    Corporate Codes of Conduct: Is Common Environmental Content Feasible?

    In a developing country context, a policy to promote adoption of common environmental content for corporate codes of conduct (COCs) aspires to meaningful results on two fronts. First, adherence to COC provisions should offer economic benefits that exceed the costs of compliance; i.e., companies must receive a price premium, market expansion, efficiency gains, subsidized technical assistance, or some combination of these benefits in return for meeting the requirements. Second, compliance should produce significant improvements in environmental outcomes; i.e., the code must impose real requirements, and monitoring and enforcement must offer sufficient incentives to prevent evasion. With those goals in mind, we explore options for establishing common environmental content in voluntary COCs. Because the benefits of a COC rest on its ability to signal information, we ground our analysis in a review of experiences with a broad range of voluntary (and involuntary) information-based programs: not only existing corporate COCs, but also the International Organization for Standardization (ISO) family of standards, ecolabels, and information disclosure programs. We find some important tradeoffs between harmonization, applicability, feasibility, and efficacy.
    Keywords: corporate social responsibility, codes of conduct, environmental management

    An Incremental Language Conversion Method to Convert C++ into Ada95

    This thesis develops a methodology to incrementally convert a legacy object-oriented C++ application into Ada95. Drawing on the experience of converting a graphics application, the Remote Debriefing Tool (RDT), in the Graphics Lab of the Air Force Institute of Technology (AFIT), this effort defined a process for converting a C++ application into Ada95. The methodology consists of five phases: (1) reorganizing the software application, (2) breaking mutual dependencies, (3) creating package specifications to interface with the existing C++ classes, (4) converting C++ code into Ada programs, and (5) embellishing. The methodology uses GNAT's low-level C++ interfacing capabilities to support the incremental conversion. Its goal is not only to correctly convert C++ code into Ada95, but also to take advantage of Ada's features that support good software engineering principles.

    An Ontology for Product-Service Systems

    Industries are transforming their business strategy from a product-centric to a more service-centric nature by bundling products and services into integrated solutions that enhance the relationship with their customers. Since Product-Service Systems (PSS) design research is currently at a rudimentary stage, the development of a robust ontology for this area would be helpful. The advantage of a standardized ontology is that it could help researchers and practitioners communicate their views without ambiguity and thus encourage the conception and implementation of useful methods and tools. In this paper, an initial structure of a PSS ontology from the design perspective is proposed and evaluated.
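
    To show what a machine-readable fragment of such an ontology might look like, the sketch below encodes a few PSS concepts and one relationship as RDFS classes using rdflib. The class names, namespace, and "bundles" property are illustrative assumptions, not the ontology proposed in the paper.

```python
# A minimal sketch of a PSS ontology fragment expressed in RDFS.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

PSS = Namespace("http://example.org/pss#")
g = Graph()
g.bind("pss", PSS)

# Core concepts: products and services are kinds of offerings, and a
# product-service system bundles offerings.
for cls in (PSS.ProductServiceSystem, PSS.Offering,
            PSS.Product, PSS.Service, PSS.Stakeholder):
    g.add((cls, RDF.type, RDFS.Class))
g.add((PSS.Product, RDFS.subClassOf, PSS.Offering))
g.add((PSS.Service, RDFS.subClassOf, PSS.Offering))

g.add((PSS.bundles, RDF.type, RDF.Property))
g.add((PSS.bundles, RDFS.domain, PSS.ProductServiceSystem))
g.add((PSS.bundles, RDFS.range, PSS.Offering))
g.add((PSS.bundles, RDFS.comment,
       Literal("Links a product-service system to the offerings it bundles.")))

print(g.serialize(format="turtle"))
```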