
    Computational Intelligence in Healthcare

    This book is a printed edition of the Special Issue Computational Intelligence in Healthcare that was published in Electronics.

    Computational Intelligence in Healthcare

    The volume of patient health data was estimated to reach 2,314 exabytes by 2020. Traditional data analysis techniques are unsuitable for extracting useful information from such vast quantities of data. Thus, intelligent data analysis methods that combine human expertise with computational models for accurate and in-depth analysis are necessary. The technological revolution and medical advances made possible by combining vast quantities of available data, cloud computing services, and AI-based solutions can provide expert insight and analysis on a mass scale and at a relatively low cost. Computational intelligence (CI) methods, such as fuzzy models, artificial neural networks, evolutionary algorithms, and probabilistic methods, have recently emerged as promising tools for the development and application of intelligent systems in healthcare practice. CI-based systems can learn from data and evolve with changes in their environment, while accounting for the uncertainty that characterizes health data, including omics, clinical, sensor, and imaging data. The use of CI in healthcare can improve the processing of such data to develop intelligent solutions for prevention, diagnosis, treatment, and follow-up, as well as for the analysis of administrative processes. The present Special Issue on computational intelligence for healthcare is intended to show the potential and the practical impact of CI techniques in challenging healthcare applications.
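    As a minimal illustration of one of the CI techniques named above, the sketch below grades a clinical measurement with simple fuzzy membership functions; the variable, thresholds, and category names are illustrative assumptions, not taken from the Special Issue.

```python
# Minimal sketch: fuzzy grading of a clinical variable (illustrative thresholds only).

def trimf(x, a, b, c):
    """Triangular membership function defined by points a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fever_degree(temp_c):
    """Fuzzy degrees of membership for a body-temperature reading."""
    return {
        "normal": trimf(temp_c, 35.5, 36.8, 37.5),
        "low_fever": trimf(temp_c, 37.0, 38.0, 39.0),
        "high_fever": trimf(temp_c, 38.5, 40.0, 42.0),
    }

if __name__ == "__main__":
    # A reading of 37.3 °C has partial membership in both "normal" and "low_fever".
    print(fever_degree(37.3))
```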

    Recent Advances in Embedded Computing, Intelligence and Applications

    The recent proliferation of Internet of Things (IoT) deployments and edge computing, combined with artificial intelligence, has led to exciting new application scenarios in which embedded digital devices are essential enablers. Moreover, new powerful and efficient devices are appearing to cope with workloads formerly reserved for the cloud, such as deep learning. These devices allow processing close to where data are generated, avoiding bottlenecks due to communication limitations. The efficient integration of hardware, software, and artificial intelligence capabilities deployed in real sensing contexts empowers the edge intelligence paradigm, which will ultimately foster the offloading of processing functionalities to the edge. In this Special Issue, researchers have contributed nine peer-reviewed papers covering a wide range of topics in the area of edge intelligence, among them hardware-accelerated implementations of deep neural networks, IoT platforms for extreme edge computing, neuro-evolvable and neuromorphic machine learning, and embedded recommender systems.
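    As a hedged sketch of the trade-off that motivates processing at the edge, the snippet below compares a rough estimate of on-device latency against the cost of shipping data to the cloud; all parameter names and numbers are hypothetical and are not drawn from the Special Issue papers.

```python
# Illustrative sketch: decide whether to process on the edge device or offload to the cloud,
# based on rough latency estimates (all parameters are hypothetical).

def local_latency_s(workload_gflop, device_gflops):
    """Time to run the workload on the embedded device."""
    return workload_gflop / device_gflops

def offload_latency_s(payload_mb, uplink_mbps, workload_gflop, cloud_gflops):
    """Time to transfer the data plus run the workload in the cloud."""
    transfer = (payload_mb * 8) / uplink_mbps
    return transfer + workload_gflop / cloud_gflops

if __name__ == "__main__":
    workload, payload = 5.0, 4.0  # GFLOP of compute, MB of sensor data
    edge = local_latency_s(workload, device_gflops=2.0)
    cloud = offload_latency_s(payload, uplink_mbps=10.0,
                              workload_gflop=workload, cloud_gflops=100.0)
    print("process locally" if edge <= cloud else "offload to cloud", edge, cloud)
```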

    Optimization Algorithms for Integrating Advanced Facility-Level Healthcare Technologies into Personal Healthcare Devices

    Healthcare is one of the most important services for preserving the quality of our daily lives, and it must cope with issues such as global aging, rising healthcare costs, and a shift in the medical paradigm from in-facility cure to prevention and care outside the facility. Accordingly, there has been growing interest in smart, personalized healthcare systems that allow individuals to diagnose and care for themselves. Such systems are capable of providing facility-level diagnosis services by using smart devices (e.g., smartphones, smart watches, and smart glasses). However, in realizing smart healthcare systems, it is very difficult, if not impossible, to directly integrate high-precision healthcare technologies or scientific theories into smart devices, owing to stringent limitations in computing power and battery lifetime as well as environmental constraints. In this dissertation, we propose three optimization methods, in the fields of cell counting and gait aids for Parkinson's disease patients, that address the problems arising when a specialized healthcare system used in facilities is integrated into mobile or wearable devices. First, we present an optimized cell counting algorithm based on heuristic optimization, which is a key building block for realizing mobile point-of-care platforms. Second, we develop a learning-based cell counting algorithm that maintains high performance and efficiency despite blurry, out-of-focus cells and varying background brightness caused by the limitations of the lens-free in-line holographic apparatus. Finally, we propose smart gait-aid glasses for Parkinson's disease patients based on mathematical optimization. ⓒ 2017 DGIST
    Contents: I. Introduction (global healthcare trends; smart healthcare systems, their benefits and challenges; optimization; aims and organization of the dissertation) -- II. Optimization of a cell counting algorithm for mobile point-of-care testing platforms (experimental setup; overview of cell counting; cell library optimization; NCC approximation; results; measurement using an Android device) -- III. Human-level blood cell counting system using an NCC-deep learning algorithm on lens-free shadow images (candidate point selection based on NCC; reliable cell counting using a CNN; evaluation on cropped cell images and blood sample images; elapsed-time evaluation) -- IV. Smart gait-aid glasses for Parkinson's disease patients (existing FOG detection methods and gait-aid systems; movement recognition; FOG detection on glasses; generation of visual patterns; experiments; FOG detection and gait-aid performance) -- V. Conclusion.
    Korean abstract (translated): This dissertation addresses the optimization problems involved in bringing professional healthcare systems, used in medical research facilities, hospitals, and laboratories, into smart healthcare systems that can be used in individuals' daily lives. With rising medical costs and global aging, the medical paradigm has shifted from receiving treatment inside a facility after a disease occurs to accessing medical services through portable personal devices, carried by patients or by healthy individuals interested in disease prevention and health management, so that disease can be prevented in advance. Accordingly, smart healthcare, which realizes hospital-level prevention and diagnosis anytime and anywhere using smart devices (smartphones, smart watches, smart glasses, etc.), has been attracting attention. However, integrating existing professional healthcare devices and scientific theories into smart devices is hindered by the limited computing power and battery of smart devices, and by environmental constraints that do not arise in research institutes or laboratories; optimization is therefore required so that these systems can operate in their target environments. This dissertation presents three problems that arise when integrating professional healthcare systems into smart healthcare, in the fields of cell counting and gait assistance for Parkinson's disease patients, and proposes three optimization algorithms (heuristic optimization, learning-based optimization, and mathematical optimization), along with systems based on them, to solve these problems.
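    The normalized cross-correlation (NCC) step at the core of the cell counting pipeline can be sketched as follows. This is a generic NCC template-matching illustration written with NumPy, not the dissertation's optimized or approximated implementation; the function names and threshold are assumptions.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between an image patch and a cell template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def candidate_cells(image, template, threshold=0.7):
    """Slide the template over the image and keep positions whose NCC exceeds the threshold."""
    th, tw = template.shape
    hits = []
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = ncc(image[y:y + th, x:x + tw], template)
            if score >= threshold:
                hits.append((y, x, score))
    return hits
```

    In practice, a brute-force scan like this is what motivates the library and approximation optimizations described in the dissertation, since the nested loops dominate runtime on mobile hardware.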

    Control and Automation

    Control and automation systems are at the heart of our everyday lives. This book is a collection of novel ideas and findings in these fields, published as part of the Special Issue on Control and Automation. The core focus of this issue was original ideas and potential contributions to both theory and practice. It received a total of 21 submissions, of which 7 were accepted. The published manuscripts tackle novel approaches in control, including fractional-order control systems, with applications in robotics, biomedical engineering, electrical engineering, vibratory systems, and wastewater treatment plants. This Special Issue has gathered a selection of novel research results regarding control systems in several distinct research areas. We hope that these papers will evoke new ideas, concepts, and further developments in the field.

    Enhancement of Metaheuristic Algorithm for Scheduling Workflows in Multi-fog Environments

    Whether in computer science, engineering, or economics, optimization lies at the heart of any challenge involving decision-making. Choosing between several options is part of the decision-making process, and our choices are driven by the desire to make the "better" decision. An objective function or performance index describes how good each alternative is, and the theory and methods of optimization are concerned with picking the best one. There are two types of optimization methods: deterministic and stochastic. The former are traditional approaches that work well for small, linear problems; however, they struggle to address most real-world problems, which are high-dimensional, nonlinear, and complex in nature. As an alternative, stochastic optimization algorithms are specifically designed to tackle these types of challenges and are more common nowadays. This study proposed two robust, stochastic, swarm-based metaheuristic optimization methods. Both are hybrid algorithms formulated by combining the Particle Swarm Optimization and Salp Swarm Optimization algorithms. These algorithms are then applied to an important and thought-provoking problem: scientific workflow scheduling in multiple fog environments. Many computing environments, such as fog computing, are plagued by security attacks that must be handled. DDoS attacks are particularly harmful to fog computing environments because they occupy the fog's resources and keep them busy. Thus, fog environments generally have fewer resources available during these attacks, and the scheduling of submitted Internet of Things (IoT) workflows is affected. Nevertheless, current systems disregard the impact of DDoS attacks in their scheduling process, increasing the number of workflows that miss their deadlines as well as the number of tasks offloaded to the cloud. Hence, this study proposed a hybrid optimization algorithm as a solution for the workflow scheduling issue across various fog computing locations. The proposed algorithm combines the Salp Swarm Algorithm (SSA) and Particle Swarm Optimization (PSO). To deal with the effects of DDoS attacks on fog computing locations, two discrete-time Markov-chain schemes were used: one estimates the average network bandwidth available in each fog, while the other determines the average number of virtual machines available in each fog. DDoS attacks are addressed at various levels, and the approach predicts their influence on fog environments. Based on the simulation results, the proposed method can significantly reduce the number of offloaded tasks transferred to cloud data centers, and it can also decrease the number of workflows with missed deadlines. Moreover, the significance of green fog computing is growing, since energy consumption plays an essential role in determining maintenance expenses and carbon dioxide emissions. Efficient scheduling methods can mitigate energy usage by allocating tasks to the most appropriate resources, considering the energy efficiency of each individual resource. To address these challenges, the proposed algorithm integrates the Dynamic Voltage and Frequency Scaling (DVFS) technique, which is commonly employed to enhance the energy efficiency of processors. The experimental findings demonstrate that combining the proposed method with DVFS yields improved outcomes, including reduced energy consumption, making it a more environmentally friendly and sustainable solution for fog computing environments.
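    A minimal sketch of the kind of hybrid swarm search combined here (a PSO velocity update for the leading half of the population and a Salp Swarm-style chain update for the followers) is given below. The update rules, parameters, and test objective are simplified assumptions for illustration only; they do not reproduce the thesis's scheduling model, DDoS-aware Markov chains, or DVFS integration.

```python
import numpy as np

def hybrid_pso_ssa(objective, dim=10, pop=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Toy hybrid swarm optimizer: leaders move with a PSO velocity rule,
    followers use a Salp Swarm-style chain update toward the member ahead of them."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (pop, dim))
    v = np.zeros((pop, dim))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()

    for t in range(iters):
        w = 0.9 - 0.5 * t / iters            # inertia weight decays over time
        half = pop // 2
        # PSO update for the leading half of the swarm
        r1, r2 = rng.random((half, dim)), rng.random((half, dim))
        v[:half] = w * v[:half] + 2.0 * r1 * (pbest[:half] - x[:half]) + 2.0 * r2 * (gbest - x[:half])
        x[:half] += v[:half]
        # SSA-style chain update for the following half
        for i in range(half, pop):
            x[i] = 0.5 * (x[i] + x[i - 1])
        x = np.clip(x, lb, ub)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

if __name__ == "__main__":
    sphere = lambda p: float((p ** 2).sum())   # toy objective standing in for a scheduling cost
    best, best_f = hybrid_pso_ssa(sphere)
    print(best_f)  # should approach 0
```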

    Automatic Segmentation of Intramedullary Multiple Sclerosis Lesions

    Context: The spinal cord is an essential component of the central nervous system. It contains neurons responsible for important functions and ensures the transmission of motor and sensory information between the brain and the peripheral nervous system. Damage to the spinal cord, caused by trauma or a neurodegenerative disease, can lead to serious impairment, including functional disabilities, paralysis and/or pain. In patients with multiple sclerosis (MS), the spinal cord is frequently affected by atrophy and/or lesions. Conventional magnetic resonance imaging (MRI) is widely used by researchers and clinicians to non-invasively assess and characterize microstructural alterations. A quantitative assessment of the structural damage to the spinal cord (e.g. atrophy severity, lesion extent) is essential for the diagnosis, prognosis and long-term monitoring of diseases such as MS. Moreover, the development of unbiased biomarkers is indispensable for evaluating the effect of new therapeutic treatments. Segmentation of the spinal cord and of intramedullary MS lesions is therefore clinically relevant, as well as a necessary step towards the interpretation of multi-parametric MR images. However, manual segmentation is an extremely time-consuming, tedious task that is prone to inter- and intra-rater variability. There is therefore a need to automate segmentation methods, which could improve the efficiency of analysis pipelines. Automatic lesion segmentation is challenging for several reasons: (i) the variability of lesions in terms of shape, size and location, (ii) lesion boundaries are most of the time hard to discern, (iii) lesion intensities on MR images are similar to those of normal-appearing structures. In addition, achieving robust segmentation across a multi-center MRI database is made difficult by the large variability of acquisition protocols (e.g. resolution, orientation, field of view). Despite considerable recent developments in spinal cord MR image processing, there is still no available method that can provide robust and reliable spinal cord segmentation across a broad spectrum of pathologies and acquisition protocols. Regarding intramedullary lesions, a thorough literature search did not yield any available automatic segmentation method. Goal: To develop a fully automatic framework to segment the spinal cord and intramedullary lesions on conventional human MRI data. Method: The presented approach is based on a cascade of two convolutional neural networks. The method was designed to address the main challenges posed by spinal cord MRI data. The segmentation framework was trained and validated on a private database of 1943 images acquired at 30 different sites with heterogeneous protocols. The scanned subjects comprise 459 healthy controls, 471 MS patients and 112 subjects with other pathologies affecting the spinal cord.
The spinal cord segmentation module was compared to an existing, community-recognized method, PropSeg. Results: The convolutional neural network based approach provided better results than PropSeg, reaching a median (interquartile range) Dice of 94.6 (4.6) vs. 87.9 (18.3)%. For the lesions, our automatic segmentation achieved a Dice of 60.0 (21.4)% against the manual segmentation, a true positive rate of 83 (34)%, and a precision of 77 (44)%. Conclusion: A fully automatic and novel method to segment the spinal cord and intramedullary MS lesions on MRI data was devised during this Master's project. The method was extensively validated on a clinical database. The robustness of the spinal cord segmentation method was demonstrated, even on pathological cases. Regarding lesion segmentation, the results are encouraging despite a relatively high false positive rate. I believe in the potential impact these tools can have for the research community. With this in mind, the methods have been integrated and documented in an open-source software package, the "Spinal Cord Toolbox". Some of the tools developed during this Master's project are already used in clinical study analysis pipelines involving MS and amyotrophic lateral sclerosis patients.----------ABSTRACT Context: The spinal cord is a key component of the central nervous system, which contains neurons responsible for complex functions, and ensures the conduction of motor and sensory information between the brain and the peripheral nervous system. Damage to the spinal cord, through trauma or neurodegenerative diseases, can lead to severe impairment, including functional disabilities, paralysis and/or pain. In multiple sclerosis (MS) patients, the spinal cord is frequently affected by atrophy and/or lesions. Conventional magnetic resonance imaging (MRI) is widely used by researchers and clinicians to non-invasively assess and characterize spinal cord microstructural changes. Quantitative assessment of the structural damage to the spinal cord (e.g. atrophy severity, lesion extent) is essential for the diagnosis, prognosis and longitudinal monitoring of diseases, such as MS. Furthermore, the development of objective biomarkers is essential to evaluate the effect of new therapeutic treatments. Spinal cord and intramedullary MS lesion segmentation is consequently clinically relevant, as well as a necessary step towards the interpretation of multi-parametric MR images. However, manual segmentation is highly time-consuming, tedious and prone to intra- and inter-rater variability. There is therefore a need for automated segmentation methods to facilitate the efficiency of analysis pipelines. Automatic lesion segmentation is challenging for various reasons: (i) lesion variability in terms of shape, size and location, (ii) lesion boundaries are most of the time not well defined, (iii) lesion intensities on MR data can be confounded with those of normal-appearing structures. Moreover, achieving robust segmentation across multi-center MRI data is challenging because of the broad variability of data features (e.g. resolution, orientation, field of view).
Despite recent substantial developments in spinal cord MRI processing, there is still no method available that can yield robust and reliable spinal cord segmentation across the very diverse spinal pathologies and data features. Regarding the intramedullary lesions, a thorough search of the relevant literature did not yield any available automatic segmentation method. Goal: To develop a fully-automatic framework for segmenting the spinal cord and intramedullary MS lesions from conventional human MRI data. Method: The presented approach is based on a cascade of two Convolutional Neural Networks (CNN). The method has been designed to face the main challenges of ‘real world’ spinal cord MRI data. It was trained and validated on a private dataset made up of 1943 MR volumes, acquired at 30 different sites with heterogeneous acquisition protocols. The scanned subjects include 459 healthy controls, 471 MS patients and 112 subjects with other spinal pathologies. The proposed spinal cord segmentation method was compared to a state-of-the-art spinal cord segmentation method, PropSeg. Results: The CNN-based approach achieved better results than PropSeg, yielding a median (interquartile range) Dice of 94.6 (4.6) vs. 87.9 (18.3)% when compared to the manual segmentation. For the lesion segmentation task, our method provided a median Dice overlap with the manual segmentation of 60.0 (21.4)%, a lesion-based true positive rate of 83 (34)% and a lesion-based precision of 77 (44)%. Conclusion: An original fully-automatic method to segment the spinal cord and intramedullary MS lesions on MRI data has been devised during this Master's project. The method was validated extensively against a clinical dataset. The robustness of the spinal cord segmentation has been demonstrated, even on challenging pathological cases. Regarding the lesion segmentation, the results are encouraging despite the fairly high false positive rate. I believe in the potential value of these developed tools for the research community. In this vein, the methods are integrated and documented into an open-source software package, the Spinal Cord Toolbox. Some of the tools developed during this Master's project are already integrated into automated analysis pipelines of clinical studies, including MS and Amyotrophic Lateral Sclerosis patients.
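    The Dice overlap used to report the segmentation accuracy above can be computed as in the hedged sketch below; this is a generic implementation over binary masks, not the Spinal Cord Toolbox code.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary segmentation masks (1.0 = perfect agreement)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Example: two toy 2-D masks that partially overlap
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(a, b))  # 2*2 / (3+3) ≈ 0.667
```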

    Refining Parkinson’s neurological disorder identification through deep transfer learning

    © 2019, Springer-Verlag London Ltd., part of Springer Nature. Parkinson's disease (PD), a multi-system neurodegenerative disorder that affects the brain slowly, is characterized by symptoms such as muscle stiffness, tremor in the limbs, and impaired balance, all of which tend to worsen with the passage of time. Available treatments target its symptoms, aiming to improve the quality of life. However, automatic diagnosis at early stages is still a challenging medical task, since a patient may behave identically to a healthy individual at the very early stage of the disease. Parkinson's disease detection through handwriting data is a significant classification problem for identifying PD at an early stage. In this paper, PD identification is realized with the help of handwriting images, which serve as one of the earliest indicators of PD. For this purpose, we propose a deep convolutional neural network classifier with transfer learning and data augmentation techniques to improve the identification. Two transfer learning approaches, freezing and fine-tuning, are investigated using the ImageNet and MNIST datasets independently as source tasks. The trained network achieved 98.28% accuracy using the fine-tuning-based approach with ImageNet and the PaHaW dataset. Experimental results on the benchmark dataset reveal that the proposed approach detects Parkinson's disease better than state-of-the-art work.
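    A hedged sketch of the two transfer-learning strategies mentioned above (freezing an ImageNet-pretrained backbone versus fine-tuning it) is shown below using torchvision. The backbone choice, function name, and two-class head are assumptions for illustration; they do not reproduce the paper's exact architecture or training setup.

```python
import torch.nn as nn
from torchvision import models

def build_pd_classifier(fine_tune: bool = True, num_classes: int = 2) -> nn.Module:
    """ImageNet-pretrained backbone adapted to PD vs. healthy handwriting images."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    if not fine_tune:
        # "Freeze" strategy: keep pretrained features fixed, train only the new head.
        for param in model.parameters():
            param.requires_grad = False
    # Replace the final fully connected layer with a task-specific head.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

frozen = build_pd_classifier(fine_tune=False)    # feature extractor + trainable head
finetuned = build_pd_classifier(fine_tune=True)  # all layers updated during training
```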

    Selected Papers from the 5th International Electronic Conference on Sensors and Applications

    This Special Issue comprises selected papers from the proceedings of the 5th International Electronic Conference on Sensors and Applications, held on 15–30 November 2018 on sciforum.net, an online platform for hosting scholarly e-conferences and discussion groups. In this 5th edition of the electronic conference, contributors were invited to provide papers and presentations in the field of sensors and applications at large, resulting in a wide variety of excellent submissions and topic areas. Papers that attracted the most interest on the web, or that provided a particularly innovative contribution, were selected for publication in this collection. These peer-reviewed papers are published with the aim of rapid and wide dissemination of research results, developments, and applications. We hope this conference series will grow rapidly in the future and become recognized as a new venue by which to (electronically) present new developments related to the field of sensors and their applications.

    WEATHER LORE VALIDATION TOOL USING FUZZY COGNITIVE MAPS BASED ON COMPUTER VISION

    The creation of scientific weather forecasts is troubled by many technological challenges (Stern & Easterling, 1999), while their utilization is generally dismal. Consequently, the majority of small-scale farmers in Africa continue to consult some form of weather lore to reach various cropping decisions (Baliscan, 2001). Weather lore is a body of informal folklore (Enock, 2013) associated with the prediction of the weather and based on indigenous knowledge and human observation of the environment. As such, it tends to be more holistic and more localized to the farmers' context. However, weather lore has limitations; for instance, it cannot offer forecasts beyond a season. Different types of weather lore exist, utilizing almost all available human senses (feel, smell, sight and hearing). Of all the types of weather lore in existence, it is the visual or observed weather lore that is mostly used by indigenous societies to come up with weather predictions. On the other hand, meteorologists continue to treat this knowledge as superstition, partly because there is no means to scientifically evaluate and validate it. The visualization and characterization of visual sky objects (such as the moon, clouds, stars, and rainbows) in forecasting weather are significant subjects of research. To realize the integration of visual weather lore into modern weather forecasting systems, there is a need to represent and scientifically substantiate this form of knowledge. This research was aimed at developing a method for verifying the visual weather lore that is used by traditional communities to predict weather conditions. To realize this verification, fuzzy cognitive mapping was used to model and represent causal relationships between selected visual weather lore concepts and weather conditions. The traditional knowledge used to produce these maps was obtained through case studies of two communities (in Kenya and South Africa). These case studies were aimed at understanding the weather lore domain as well as the causal effects between meteorological and visual weather lore. In this study, common astronomical weather lore factors related to cloud physics were identified as: bright stars, dispersed clouds, dry weather, dull stars, feathery clouds, gathering clouds, grey clouds, high clouds, layered clouds, low clouds, stars, medium clouds, and rounded clouds. Relationships between the concepts were also identified and formally represented using fuzzy cognitive maps.
    In implementing the verification tool, machine vision was used to recognize sky objects captured with a sky camera, while pattern recognition was employed in benchmarking and scoring the objects. A wireless weather station was used to capture real-time weather parameters. The verification tool was then designed and realized in the form of a software artefact, which integrated computer vision and fuzzy cognitive mapping for experimenting with visual weather lore, and verification using various statistical forecast skill scores and metrics. The tool consists of four main sub-components: (1) machine vision, which recognizes sky objects using support vector machine classifiers with shape-based feature descriptors; (2) pattern recognition, to benchmark and score objects using pixel orientations, Euclidean distance, Canny edge detection and the grey-level co-occurrence matrix; (3) fuzzy cognitive mapping, used to represent knowledge (an active Hebbian learning algorithm was used to learn until convergence); and (4) a statistical computing component used for verification and forecast skill scores, including the Brier score and contingency tables for deterministic forecasts. Rigorous evaluation of the verification tool was carried out using independent real-time images (not used in the training and testing phases) from Bloemfontein, South Africa, and Voi, Kenya. The real-time images were captured using a sky camera with GPS location services. The results of the implementation were tested for the selected weather conditions (for example rain, heat, cold, and dry conditions) and found to be acceptable (the verified prediction accuracies were over 80%). The recommendation of this study is to apply the implemented method to further processing tasks, towards verifying all other types of visual weather lore. In addition, use of the developed method also requires the implementation of modules for processing and verifying other types of weather lore, such as sounds and symbols of nature.
    Since time immemorial, from Australia to Asia and Africa to Latin America, local communities have relied on weather lore observations to predict seasonal weather as well as its effects on their livelihoods (Alcock, 2014). This is mainly based on many years of personal experience in observing weather conditions. However, when it comes to predictions for longer lead-times (i.e. beyond a season), weather lore is uncertain (Hornidge & Antweiler, 2012). This uncertainty has partly contributed to the current status whereby meteorologists and other scientists continue to treat weather lore as superstition (United-Nations, 2004) that is not capable of predicting weather. One of the problems in testing the confidence of weather lore in predicting weather is the wide variety of weather lore found in the details of indigenous sayings, which are tightly coupled to locality and pattern variations (Oviedo et al., 2008). This traditional knowledge is entrenched within the day-to-day socio-economic activities of the communities using it and is not globally available for comparison and validation (Huntington, Callaghan, Fox, & Krupnik, 2004). Further, this knowledge is based on local experience that lacks benchmarking techniques, so harmonizing and integrating it within science-based weather forecasting systems is a daunting task (Hornidge & Antweiler, 2012). It is partly for this reason that the question of validating weather lore has not yet been substantially investigated. Sufficiently expanded processes of gathering weather observations, combined with comparison and validation, can produce useful information. Since forecasting weather accurately is a challenge even with the latest supercomputers (BBC News Magazine, 2013), validated weather lore can be useful if it is incorporated into modern weather prediction systems. Validation of traditional knowledge is a necessary step in building integrated knowledge-based systems, and traditional knowledge incorporated into knowledge-based systems has to be verified to enhance system reliability. Weather lore knowledge exists in different forms as identified by traditional communities; hence it needs to be tied together for comparison and validation.
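    The fuzzy cognitive map component described above can be sketched as a simple iterative activation update over a signed weight matrix. The concepts, weights, and squashing function below are illustrative assumptions, not the maps elicited from the Kenyan and South African case studies, and the update rule is a generic FCM formulation rather than the thesis's active Hebbian learning procedure.

```python
import numpy as np

def fcm_simulate(weights, state, steps=20, lam=1.0):
    """Iterate a fuzzy cognitive map: each concept's activation becomes a squashed,
    weighted sum of the activations of the concepts that influence it."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-lam * v))
    for _ in range(steps):
        state = sigmoid(weights.T @ state)   # weights[i, j] = influence of concept i on concept j
    return state

# Toy map over three concepts: [gathering clouds, grey clouds, rain]; positive weights mean "promotes".
W = np.array([[0.0, 0.6, 0.7],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])
initial = np.array([0.9, 0.5, 0.0])      # strong "gathering clouds" observation from the camera
print(fcm_simulate(W, initial))          # converged activation for the "rain" concept rises
```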
The development of a weather lore validation tool that integrates a framework for acquiring weather data with methods of representing weather lore in verifiable forms can be a significant step towards validating weather lore against actual weather records gathered with conventional weather-observing instruments. Successful validation of weather lore could create the opportunity to integrate acceptable weather lore into modern weather prediction systems, improving actionable information for decision making that relies on seasonal weather prediction. In this study a hybrid method is developed that includes computer vision and fuzzy cognitive mapping techniques for verifying visual weather lore. The verification tool was designed around forecasting that mimics visual perception, together with fuzzy thinking based on the cognitive knowledge of humans. The method gives meaning to humanly perceivable sky objects so that computers can understand, interpret, and approximate visual weather outcomes. Questionnaires were administered in two case study locations (KwaZulu-Natal province in South Africa, and Taita-Taveta County in Kenya) between March and July 2015. The two case studies were conducted by interviewing respondents on how visual astronomical and meteorological weather concepts cause weather outcomes, and were used to identify the causal effects of visual astronomical and meteorological objects on weather conditions. This was followed by finding variations and comparisons between the visual weather lore knowledge in the two case studies. The results from the two case studies were aggregated in terms of seasonal knowledge. The causal links between visual weather concepts were investigated using these two case studies; the results were compared and aggregated to build up common knowledge, and the joint averages of the majority of responses were determined for each set of interacting concepts. The model of the weather lore verification tool consists of input, processing, and output components. The input data to the system are sky image scenes and actual weather observations from wireless weather sensors. The image recognition component performs three sub-tasks: detection of objects (concepts) in image scenes, extraction of detected objects, and approximation of the presence of the concepts by comparing extracted objects to ideal objects. The prediction process uses the approximated concepts generated in the recognition component to simulate scenarios using the knowledge represented in the fuzzy cognitive maps. The verification component evaluates the variation between the predictions and the actual weather observations to determine prediction errors and accuracy. To evaluate the tool, daily system simulations were run to predict and record probabilities of weather outcomes (i.e. rain, heat index/hotness, dryness, cold index). Weather observations were captured periodically using a wireless weather station. This process was repeated several times until there was sufficient data for the verification process. To match the range of the predicted weather outcomes, the actual weather observations (measurements) were transformed and normalized to the range [0, 1]. In the verification process, comparisons were made between the actual observations and the predicted weather outcome values by computing residuals (error values) from the observations.
The error values and squared errors were used to compute the Mean Squared Error (MSE) and the Root Mean Squared Error (RMSE) for each predicted weather outcome. Finally, the validity of the visual weather lore verification model was assessed using data from a different geographical location: actual data in the form of daily sky scenes and weather parameters were acquired from Voi, Kenya, from December 2015 to January 2016. The results on the use of hybrid techniques for the verification of weather lore are expected to provide an incentive for integrating indigenous knowledge on weather with modern numerical weather prediction systems to produce accurate and downscaled weather forecasts.
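    The verification arithmetic described above (normalizing observations to [0, 1], computing residuals against the predicted outcome values, then MSE and RMSE) can be sketched as follows; the sample values are made up purely for illustration.

```python
import numpy as np

def normalize(values):
    """Rescale raw weather observations to the [0, 1] range of the predictions."""
    values = np.asarray(values, dtype=float)
    span = values.max() - values.min()
    return np.zeros_like(values) if span == 0 else (values - values.min()) / span

def verify(predicted, observed_raw):
    """Residuals, MSE and RMSE between predicted outcome probabilities and observations."""
    observed = normalize(observed_raw)
    residuals = observed - np.asarray(predicted, dtype=float)
    mse = float(np.mean(residuals ** 2))
    return residuals, mse, mse ** 0.5

# Toy example: daily rain-probability predictions vs. measured rainfall (mm)
preds = [0.8, 0.2, 0.6, 0.1]
rain_mm = [12.0, 0.0, 5.0, 1.0]
print(verify(preds, rain_mm))
```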