    Performance Analysis of Family Welfare Empowerment Application: A Kanban Method Approach

    This research examines the application of the Kanban method in testing a family welfare empowerment application. The Kanban method, initially developed by Toyota for manufacturing, has since been applied effectively in software development. The study comprises a series of tests covering various features of the application, such as user registration, village data collection, and processing of family welfare empowerment data at the village/district level. Most tests were successful, demonstrating that the application executes essential functions such as user registration and event scheduling; the failures were concentrated in the input of village, hamlet, and community unit data. These results indicate that using the Kanban method in testing a family welfare empowerment application can potentially enhance development and testing efficiency. Metrics such as testing time, test success, and time efficiency provided valuable insights into the application's performance. In conclusion, this testing provides a foundation for further application development, focused on improving the areas where tests failed. It also opens up opportunities for further studies on using the Kanban method in software testing across other application development contexts.
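
    As a rough illustration of the kinds of metrics the study reports (testing time, test success, time efficiency), the following Python sketch aggregates test results; the feature names, timings, and outcomes below are hypothetical stand-ins, not the paper's data.

        from dataclasses import dataclass

        @dataclass
        class TestResult:
            feature: str      # feature under test
            passed: bool      # whether the test case succeeded
            minutes: float    # time spent executing the test

        # Hypothetical results; the paper's actual measurements are not reproduced here.
        results = [
            TestResult("user registration", True, 4.0),
            TestResult("event scheduling", True, 3.5),
            TestResult("village data input", False, 6.0),
        ]

        success_rate = sum(r.passed for r in results) / len(results)
        total_time = sum(r.minutes for r in results)
        print(f"success rate: {success_rate:.0%}, total testing time: {total_time:.1f} min")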

    Computational Testing for Automated Preprocessing: a Matlab toolbox to enable large-scale electroencephalography data processing

    Electroencephalography (EEG) is a rich source of information regarding brain function. However, the preprocessing of EEG data can be quite complicated, due to several factors. For example, the distinction between true neural sources and noise is indeterminate, and EEG datasets can be very large. These factors create a large number of subjective decisions, with a consequent risk of compound error. Existing tools present the experimenter with a large choice of analysis methods. Yet it remains a challenge for the researcher to integrate methods for batch-processing of typically large datasets, and to compare methods so as to choose an optimal approach across the many possible parameter configurations. Additionally, many tools still require a high degree of manual decision making for, e.g., the classification of artefacts in channels, epochs or segments. This introduces extra subjectivity, is slow, and is not reproducible. Batching and well-designed automation can help to regularise EEG preprocessing, and thus reduce human effort, subjectivity and consequent error. We present the computational testing for automated preprocessing (CTAP) toolbox, which facilitates: (i) batch-processing that is easy for experts and novices alike; and (ii) testing and manual comparison of preprocessing methods. CTAP extends the data structures and functions of the well-known Matlab-based EEGLAB toolbox and produces extensive quality-control outputs. CTAP is available under the MIT licence from https://github.com/bwrc/ctap.
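
    CTAP itself is a Matlab/EEGLAB toolbox; purely as a language-neutral illustration of the batch-processing idea it automates, here is a minimal Python sketch using the separate MNE-Python library. The pipeline steps, paths, and filter settings are assumptions for demonstration, not CTAP's API.

        from pathlib import Path
        import mne

        # An ordered, scripted pipeline applied identically to every recording:
        # this uniformity is the regularisation that batching provides.
        PIPELINE = [
            ("bandpass", lambda raw: raw.filter(l_freq=1.0, h_freq=40.0)),
            ("rereference", lambda raw: raw.set_eeg_reference("average")),
        ]

        def preprocess_all(data_dir: Path, out_dir: Path) -> None:
            out_dir.mkdir(parents=True, exist_ok=True)
            for fif in sorted(data_dir.glob("*.fif")):
                raw = mne.io.read_raw_fif(fif, preload=True)
                for name, step in PIPELINE:
                    step(raw)
                    print(f"{fif.name}: applied {name}")  # minimal quality-control log
                raw.save(out_dir / fif.name, overwrite=True)

        # preprocess_all(Path("raw_eeg"), Path("preprocessed"))  # hypothetical paths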

    Data Sharing in Neuroimaging Research

    Significant resources around the world have been invested in neuroimaging studies of brain function and disease. Easier access to this large body of work should have profound impact on research in cognitive neuroscience and psychiatry, leading to advances in the diagnosis and treatment of psychiatric and neurological disease. A trend toward increased sharing of neuroimaging data has emerged in recent years. Nevertheless, a number of barriers continue to impede momentum. Many researchers and institutions remain uncertain about how to share data or lack the tools and expertise to participate in data sharing. The use of electronic data capture (EDC) methods for neuroimaging greatly simplifies the task of data collection and has the potential to help standardize many aspects of data sharing. We review here the motivations for sharing neuroimaging data, the current data sharing landscape, and the sociological or technical barriers that still need to be addressed. The INCF Task Force on Neuroimaging Datasharing, in conjunction with several collaborative groups around the world, has started work on several tools to ease and eventually automate the practice of data sharing. It is hoped that such tools will allow researchers to easily share raw, processed, and derived neuroimaging data, with appropriate metadata and provenance records, and will improve the reproducibility of neuroimaging studies. By providing seamless integration of data sharing and analysis tools within a commodity research environment, the Task Force seeks to identify and minimize barriers to data sharing in the field of neuroimaging.

    BrainStat: A toolbox for brain-wide statistics and multimodal feature associations

    Analysis and interpretation of neuroimaging datasets has become a multidisciplinary endeavor, relying not only on statistical methods, but increasingly on associations with respect to other brain-derived features such as gene expression, histological data, and functional as well as cognitive architectures. Here, we introduce BrainStat, a toolbox for (i) univariate and multivariate linear models in volumetric and surface-based brain imaging datasets, and (ii) multidomain feature association of results with respect to spatial maps of post-mortem gene expression and histology, task-based fMRI meta-analysis, as well as resting-state fMRI motifs across several common surface templates. The combination of statistics and feature associations in a turnkey toolbox streamlines analytical processes and accelerates cross-modal research. The toolbox is implemented in both Python and MATLAB, two widely used programming languages in the neuroimaging and neuroinformatics communities. BrainStat is openly available and complemented by expandable documentation.
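
    A minimal sketch of the univariate workflow, following the fixed-effects interface shown in BrainStat's Python documentation; the arrays below are random stand-ins, and the exact keyword set should be checked against the toolbox's own docs.

        import numpy as np
        from brainstat.stats.terms import FixedEffect
        from brainstat.stats.SLM import SLM

        n_subjects, n_vertices = 20, 10242                  # hypothetical dimensions
        thickness = np.random.rand(n_subjects, n_vertices)  # vertex-wise measurements
        age = np.random.uniform(20, 60, n_subjects)         # per-subject covariate

        model = FixedEffect(age)                  # univariate fixed-effects model
        slm = SLM(model, contrast=age, correction=["fdr"])
        slm.fit(thickness)                        # fit the model at every vertex
        print(slm.t.shape)                        # t-statistic map
        print(slm.Q)                              # FDR-corrected p-values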

    Provision of acute and elective general surgical care at a tertiary facility in the era of subspecialisation

    Background. The need for an acute care and general surgical unit (ACGSU) to provide care for patients previously managed on an ad hoc basis by subspecialist units was recognised by the provincial government of the Western Cape Province, South Africa, the management of Groote Schuur Hospital (GSH) and the Department of Surgery. Objective. To describe the resulting ACGSU and its functioning. Methods. Data available from administrative records, patient files and operating room forms were collected in spreadsheet form for the period July 2013 - November 2016 inclusive. Results. The ACGSU comprised a medical care team of four consultants and four to five trainees. A total of 7 571 patients were seen during the study period, the majority (66.1%) referred from the GSH Emergency Centre. Skin and soft-tissue infections formed the major disease complex. A total of 3 144 operative records were available. The most common procedures were wound debridement and inguinal hernia repairs. Trainees acted as primary surgeon in most cases. Complications (Clavien-Dindo grades I - V) were noted in 25.0% of patients. Conclusions. The ACGSU provides patient management that would otherwise complicate care in the subspecialist surgical units. It serves as a training ground for registrars and stands as a model for other institutions. Further research into the effect on patient care is planned.

    A Comparison of Neuroelectrophysiology Databases

    As data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components in facilitating effective data sharing and collaboration. This paper compares four freely available intracranial neuroelectrophysiology data repositories: Data Archive for the BRAIN Initiative (DABI), Distributed Archives for Neurophysiology Data Integration (DANDI), OpenNeuro, and Brain-CODE. These archives provide researchers with tools to store, share, and reanalyze neurophysiology data, though the means of accomplishing these objectives differ. The Brain Imaging Data Structure (BIDS) and Neurodata Without Borders (NWB) standards are utilized by these archives to make data more accessible to researchers through a common format. While many tools are available to reanalyze data on and off the archives' platforms, this article features the Reproducible Analysis and Visualization of Intracranial EEG (RAVE) toolkit, developed specifically for the analysis of intracranial signal data and integrated with the discussed standards and archives. Neuroelectrophysiology data archives improve how researchers can aggregate, analyze, distribute, and parse these data, which can lead to more significant findings in neuroscience research.
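
    As a concrete taste of the NWB standard these archives build on, here is a minimal Python sketch using the pynwb reference library; the session metadata and signal are invented for illustration.

        from datetime import datetime, timezone
        import numpy as np
        from pynwb import NWBFile, NWBHDF5IO, TimeSeries

        nwbfile = NWBFile(
            session_description="illustrative intracranial recording",  # stand-in metadata
            identifier="demo-001",
            session_start_time=datetime(2021, 1, 1, tzinfo=timezone.utc),
        )
        signal = TimeSeries(
            name="ieeg",
            data=np.random.randn(1000, 8),  # samples x channels, stand-in data
            unit="volts",
            rate=500.0,                     # sampling rate in Hz
        )
        nwbfile.add_acquisition(signal)

        with NWBHDF5IO("demo.nwb", "w") as io:
            io.write(nwbfile)               # a file NWB-aware archives and tools can read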

    Assessing Trustworthy AI in Times of COVID-19: Deep Learning for Predicting a Multiregional Score Conveying the Degree of Lung Compromise in COVID-19 Patients

    This article's main contributions are twofold: 1) to demonstrate how to apply the European Union's High-Level Expert Group's (EU HLEG) general guidelines for trustworthy AI in practice in the healthcare domain, and 2) to investigate what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified during the pandemic by an interdisciplinary team with members from academia, public hospitals, and industry. The AI system aims to help radiologists estimate and communicate the severity of damage in a patient's lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses sociotechnical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.
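
    The abstract does not describe the network itself; purely as an illustration of the task (mapping a chest X-ray to per-region severity scores), here is a hedged PyTorch sketch. The six regions, the 0-3 score range, and every architectural choice below are assumptions, not the authors' design.

        import torch
        import torch.nn as nn
        from torchvision.models import resnet18

        N_REGIONS, N_CLASSES = 6, 4            # assumed: 6 lung regions, scores 0-3

        class RegionalScoreNet(nn.Module):
            """Illustrative only: a shared CNN backbone with one classifier per region."""
            def __init__(self):
                super().__init__()
                backbone = resnet18(weights=None)
                backbone.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)  # grayscale X-ray input
                backbone.fc = nn.Identity()    # expose the 512-dim pooled features
                self.backbone = backbone
                self.head = nn.Linear(512, N_REGIONS * N_CLASSES)

            def forward(self, x):
                logits = self.head(self.backbone(x))
                return logits.view(-1, N_REGIONS, N_CLASSES)  # (batch, region, score class)

        model = RegionalScoreNet()
        scores = model(torch.randn(2, 1, 224, 224)).argmax(dim=-1)  # predicted score per region
        print(scores.shape)  # torch.Size([2, 6])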