64 research outputs found

    A fuzzy DEMATEL approach based on intuitionistic fuzzy information for evaluating knowledge transfer effectiveness in GSD projects

    The effectiveness of knowledge transfer between offshore and onsite teams is shaped by many kinds of factors. In this paper, we propose a knowledge transfer (KT) assessment framework that integrates four criteria for evaluating the KT effectiveness of global software development (GSD) teams: knowledge, team, technology, and organisation factors. In this context, we present a fuzzy DEMATEL approach for assessing GSD teams' KT effectiveness based on intuitionistic fuzzy numbers (IFNs). In this approach, decision makers provide their subjective judgments on the criteria, characterised using intuitionistic fuzzy sets. The intuitionistic fuzzy sets used in the fuzzy DEMATEL approach can effectively assess the KT effectiveness criteria and rank the alternatives. The entire process is illustrated with sample GSD KT evaluation criteria, and the factors are ranked using fuzzy linguistic variables mapped to IFNs. The IFNs are then converted into their corresponding basic probability assignments (BPAs), and Dempster-Shafer theory is used to combine the judgments in the group decision-making process. Finally, the applicability and usefulness of the proposed approach for multi-criteria group decision making under a fuzzy environment were tested by software professionals at Inowits Software Organisation in India.
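The fusion step at the end of the abstract, combining the experts' basic probability assignments (BPAs) with Dempster-Shafer theory, is well defined independently of the paper's DEMATEL details. A minimal sketch of Dempster's rule of combination follows; the hypotheses and mass values are hypothetical, not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs with Dempster's rule of combination.

    Each BPA maps frozenset focal elements to masses that sum to 1.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalise by the non-conflicting mass (1 - K)
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical experts weighing the "knowledge" (K) and "team" (T) criteria;
# frozenset("KT") is the set {K, T}, i.e. mass not committed to a single criterion
m1 = {frozenset("K"): 0.6, frozenset("T"): 0.3, frozenset("KT"): 0.1}
m2 = {frozenset("K"): 0.5, frozenset("T"): 0.2, frozenset("KT"): 0.3}
fused = dempster_combine(m1, m2)
```

Normalisation redistributes the conflicting mass over the surviving focal elements, so the fused masses again sum to one and concentrate on the hypotheses both experts support.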

    Hybrid ANFIS-Taguchi Method Based on PCA for Blood Bank Demand Forecasting

    Blood is a vital product needed by thousands of people every day as a result of disease, surgery, or injury. Blood banks, which meet the blood needs of hospitals, must therefore hold sufficient stock. Holding too little blood creates serious problems such as unmet demand and loss of life, while stocking too much leads to spoilage and leaves other hospitals without stock. In this study, the criteria affecting demand for red blood cell (erythrocyte) suspension, one of the blood components, are first identified; demand is then forecast from these criteria with the Adaptive Network-Based Fuzzy Inference System (ANFIS), a machine-learning method. Because many criteria affect demand, principal component analysis (PCA) is used to reduce their number by grouping them and to eliminate the dependencies between them. Furthermore, since ANFIS performance depends on correctly setting the parameter values that govern the model's structure and learning, the values giving the most accurate forecasts were determined with the Taguchi experimental design method. The developed PCA-based hybrid ANFIS-Taguchi method was applied at a regional blood centre, and its forecasting ability was evaluated with the correlation coefficient. At the end of the application, the forecast red blood cell suspension demand agreed with the realised demand at a rate of 88.1%.
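The PCA step described above, grouping correlated demand criteria into a smaller set of uncorrelated components before they feed ANFIS, can be sketched in a few lines of NumPy. The input matrix here is synthetic, not the blood-centre data:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its leading principal components via SVD.

    Returns the reduced scores and the explained-variance ratio
    of the retained components.
    """
    Xc = X - X.mean(axis=0)                # centre each criterion
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T      # decorrelated component scores
    var_ratio = S**2 / np.sum(S**2)
    return scores, var_ratio[:n_components]

# 36 months x 4 hypothetical demand criteria, with one deliberately
# correlated column to mimic the dependencies PCA is meant to remove
rng = np.random.default_rng(0)
X = rng.normal(size=(36, 4))
X[:, 3] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=36)
Z, ratio = pca_reduce(X, n_components=2)   # Z columns are uncorrelated
```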

    A fuzzy rule based inference system for early debt collection

    Nowadays, unpaid invoices and unpaid credits are increasingly common, and debt collection agencies collect and store large amounts of data about these debts. Early debt collection processes aim to collect payments from debtors before legal proceedings start. To be successful and collect the maximum amount of debt, collection agencies need to use their human resources efficiently and communicate with customers via the most convenient channel at minimum cost. Achieving these goals, however, requires processing, analysing, and evaluating customer data and inferring the right actions instantaneously. In this study, fuzzy-inference-based intelligent systems are used to empower early debt collection processes using the principles of data science. An early debt collection system composed of three different Fuzzy Inference Systems (FIS), one for credit debts, one for credit card debts, and one for invoices, is developed. These systems use inputs such as loan amount, debtor's wealth, debtor's past history, amount of other debts, customer tenure, credit limit, and criticality to produce as output the possibility of repaying the debt. This output is then used to determine the most convenient communication channel and communication activity profile.
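The abstract does not publish the rule base, but a zero-order Sugeno-style inference over two of the listed inputs (loan amount and past payment history) illustrates how such a FIS maps inputs to a repayment possibility. The membership breakpoints and rule consequents below are hypothetical, not the system's actual parameters:

```python
def ramp_up(x, a, b):
    """Membership that is 0 below a, 1 above b, linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def repayment_possibility(loan_amount, payment_score):
    """Two illustrative rules, combined zero-order Sugeno style.

    Rule 1: IF amount is low  AND history is good THEN possibility = 0.9
    Rule 2: IF amount is high AND history is poor THEN possibility = 0.2
    """
    high_amount = ramp_up(loan_amount, 10_000, 80_000)
    low_amount = 1.0 - high_amount
    good_hist = ramp_up(payment_score, 0.3, 0.8)   # score in [0, 1]
    poor_hist = 1.0 - good_hist

    w1 = min(low_amount, good_hist)    # firing strength of rule 1
    w2 = min(high_amount, poor_hist)   # firing strength of rule 2
    if w1 + w2 == 0.0:
        return 0.5                     # no rule fires: neutral output
    return (w1 * 0.9 + w2 * 0.2) / (w1 + w2)
```

A small loan held by a debtor with a good payment record fires rule 1 almost exclusively and yields a possibility near 0.9, which the system would then map to a low-cost communication channel.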

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was training in geospatial data acquisition and processing for students attending Architecture and Engineering courses, in order to start up a team of "volunteer mappers". Indeed, the project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration, and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997, which was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Performance Evaluation of Smart Decision Support Systems on Healthcare

    Medical activity requires responsibility not only for clinical knowledge and skill but also for the management of an enormous amount of information related to patient care. It is through proper treatment of information that experts can consistently build a sound wellness policy. The primary objective in developing decision support systems (DSSs) is to provide information to specialists when and where it is needed. These systems provide information, models, and data-manipulation tools to help experts make better decisions in a variety of situations. Most of the challenges that smart DSSs face come from the great difficulty of dealing with large volumes of information, continuously generated by the most diverse types of devices and equipment and requiring substantial computational resources. This situation makes such systems liable to fail to retrieve information quickly enough for decision making. As a result of this adversity, information quality and the provision of an infrastructure capable of promoting integration and articulation among different health information systems (HIS) become promising research topics in the field of electronic health (e-health), and for that reason they are addressed in this research. The work described in this thesis is motivated by the need to propose novel approaches to the problems inherent in the acquisition, cleaning, integration, and aggregation of data obtained from different sources in e-health environments, as well as their analysis. To ensure the success of data integration and analysis in e-health environments, machine-learning (ML) algorithms must ensure system reliability. However, in this type of environment a fully reliable scenario cannot be guaranteed, which makes smart DSSs susceptible to predictive failures that severely compromise overall system performance. Systems can also have their performance compromised by the information overload they must support. To address some of these problems, this thesis presents several proposals and studies on the impact of ML algorithms on the monitoring and management of hypertensive disorders related to high-risk pregnancy. The primary goal of the proposals presented in this thesis is to improve the overall performance of health information systems. In particular, ML-based methods are exploited to improve prediction accuracy and optimise the use of monitoring-device resources. It was demonstrated that this type of strategy and methodology contributes to a significant increase in the performance of smart DSSs, not only in precision but also in reducing the computational cost of the classification process. The observed results seek to advance the state of the art in AI-based methods and strategies that aim to overcome some of the challenges arising from the integration and performance of smart DSSs. With AI-based algorithms, it is possible to analyse a larger volume of complex data quickly and automatically and to focus on more accurate results, providing high-value predictions for better decision making in real time and without human intervention.

    Remote Sensing

    This dual conception of remote sensing brought us to the idea of preparing two different books: in addition to the first book, which displays recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.

    Renewable Energy Resource Assessment and Forecasting

    In recent years, several projects and studies have been launched on the development and use of new methodologies to assess, monitor, and support clean forms of energy. Accurate estimation of the available energy potential is of primary importance but is not always easy to achieve. The present Special Issue on 'Renewable Energy Resource Assessment and Forecasting' aims to provide a holistic approach to these issues by presenting multidisciplinary methodologies and tools able to support research projects and meet today's technical, socio-economic, and decision-making needs. In particular, research papers, reviews, and case studies on the following subjects are presented: wind, wave, and solar energy; biofuels; resource assessment of combined renewable energy forms; numerical models for renewable energy forecasting; integrated forecasting systems; energy for buildings; sustainable development; resource analysis tools and statistical models; and extreme value analysis and forecasting for renewable energy resources.

    Fluvial Processes in Motion: Measuring Bank Erosion and Suspended Sediment Flux using Advanced Geomatic Methods and Machine Learning

    Excessive erosion and fine sediment delivery to river corridors and receiving waters degrade aquatic habitat, add to nutrient loading, and impact infrastructure. Understanding the sources and movement of sediment within watersheds is critical for assessing ecosystem health and developing management plans to protect natural and human systems. As our changing climate continues to cause shifts in hydrological regimes (e.g., increased precipitation and streamflow in the northeast U.S.), the development of tools to better understand sediment dynamics takes on even greater importance. In this research, advanced geomatics and machine learning are applied to improve the (1) monitoring of streambank erosion, (2) understanding of event sediment dynamics, and (3) prediction of sediment loading using meteorological data as inputs. Streambank movement is an integral part of geomorphic changes along river corridors and also a significant source of fine sediment to receiving waters. Advances in unmanned aircraft systems (UAS) and photogrammetry provide opportunities for rapid and economical quantification of streambank erosion and deposition at variable scales. We assess the performance of UAS-based photogrammetry to capture streambank topography and quantify bank movement. UAS data were compared to terrestrial laser scanner (TLS) and GPS surveying from Vermont streambank sites that featured a variety of bank conditions and vegetation. Cross-sectional analysis of UAS and TLS data revealed that the UAS reliably captured the bank surface and was able to quantify the net change in bank area where movement occurred. Although it was necessary to consider overhanging bank profiles and vegetation, UAS-based photogrammetry showed significant promise for capturing bank topography and movement at fine resolutions in a flexible and efficient manner. 
    This study also used a new machine-learning tool to improve the analysis of sediment dynamics using three years of high-resolution suspended sediment data collected in the Mad River watershed. A restricted Boltzmann machine (RBM), a type of artificial neural network (ANN), was used to classify individual storm events based on the visual hysteresis patterns present in the suspended sediment-discharge data. The work expanded the classification scheme typically used for hysteresis analysis, and the results provided insights into the connectivity and sources of sediment within the Mad River watershed and its tributaries. A recurrent counterpropagation network (rCPN) was also developed to predict suspended sediment discharge at ungauged locations using only local meteorological data as inputs. The rCPN captured the nonlinear relationships between meteorological data and suspended sediment discharge, and outperformed the traditional sediment rating curve approach. The combination of machine-learning tools for analyzing storm-event dynamics and estimating loading at ungauged locations in a river network provides a robust method for estimating sediment production from catchments that informs watershed management.
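The traditional sediment rating curve that the rCPN is compared against is a power law, Qs = a * Q**b, conventionally fitted by linear regression in log space. A minimal sketch with synthetic data follows; the coefficients are illustrative, not the Mad River values:

```python
import numpy as np

def fit_rating_curve(Q, Qs):
    """Fit the power-law rating curve Qs = a * Q**b.

    Taking logs gives log(Qs) = log(a) + b*log(Q), a straight line
    fitted here by ordinary least squares.
    """
    b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
    return np.exp(log_a), b

# Synthetic discharge/sediment pairs generated from Qs = 0.5 * Q**1.8
rng = np.random.default_rng(1)
Q = rng.uniform(1.0, 100.0, size=50)
Qs = 0.5 * Q**1.8
a, b = fit_rating_curve(Q, Qs)   # recovers a = 0.5, b = 1.8
```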

    Molecular phylogeny of horseshoe crab using mitochondrial Cox1 gene as a benchmark sequence

    An effort was made to assess the utility of the 650 bp cytochrome c oxidase subunit I (COI; DNA barcode) gene in delineating members of the horseshoe crabs (family Xiphosura) from closely related sister taxa. A total of 33 sequences were retrieved from the National Center for Biotechnology Information (NCBI), including horseshoe crab, beetle, common crab, and scorpion sequences. The constructed phylogram showed that beetles are more closely related to horseshoe crabs than common crabs are, while scorpion spp. were distantly related to the xiphosurans. The phylogram and the observed genetic distance (GD) data also revealed that Limulus polyphemus is more closely related to Tachypleus tridentatus than to T. gigas, and that Carcinoscorpius rotundicauda is distantly related to L. polyphemus. The observed mean GD value was higher at the 3rd codon position in all the selected groups of organisms. Among the horseshoe crabs, the highest GC content was observed in L. polyphemus (38.32%) and the lowest in T. tridentatus (32.35%). We conclude that COI sequencing (barcoding) can be used to identify these species and to delineate evolutionary relatedness among closely related species.
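Two of the quantities reported above, GC content and genetic distance, are simple to compute from aligned sequences. The toy fragments below are illustrative, not the NCBI accessions used in the study:

```python
def gc_content(seq):
    """Percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

def p_distance(seq1, seq2):
    """Uncorrected genetic distance: the proportion of sites that
    differ between two aligned, equal-length sequences."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    diffs = sum(a != b for a, b in zip(seq1.upper(), seq2.upper()))
    return diffs / len(seq1)

s1 = "ATGGCCTTAGGC"   # toy 12 bp fragments
s2 = "ATGACCTTAGGT"
```

Distances such as those behind the phylogram are usually computed with a substitution model (e.g. Kimura 2-parameter) rather than the raw p-distance shown here, which undercounts multiple substitutions at the same site.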