
    CNN-enabled Visual Data Analytics and Intelligent Reasoning for Real-time Optimization and Simulation: An Application to Occupancy-aware Elevator Dispatching Optimization

    For most operational systems, the optimization problem is a combinatorial optimization problem, and the optimization performance largely determines the solution quality. Moreover, there is a trade-off between the computing time of the decision-making process and the optimization performance, which is particularly evident in systems that operate in real time. To obtain better solutions to the decision-making problem in a shorter time, many optimization algorithms have been proposed to improve search efficiency in the search space. However, information extraction from the environment is also essential for problem-solving. The environment information includes not only the optimization model inputs but also details of the current situation that may change the problem formulation and the optimization algorithm's parameter values. Due to the time constraint and the computation time of visual processing algorithms, most conventional operational systems collect environment data from sensor platforms but do not analyze image data, which contains situational information that can assist the decision-making process. To address this issue, this thesis proposes CNN-enabled visual data analytics and intelligent reasoning for real-time optimization, together with a closed-loop optimization structure with discrete event simulation to fit the use of situational information in the optimization model. In the proposed operational system, CNNs are used to extract context information from image data, such as the type and number of objects in the scene. Reasoning techniques and methodologies are then applied to deduce knowledge about the current situation and adjust the problem formulation and parameter settings. Discrete event simulation is conducted to test the optimization performance of the system, and adjustments can be made to better fit situational information into the optimization process. To validate the feasibility and effectiveness of the approach, an application to occupancy-aware elevator dispatching optimization is presented.
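    The occupancy-aware dispatching idea can be made concrete with a short sketch. The following Python fragment is illustrative only: the class names, the cost function, and the overflow penalty weight are hypothetical stand-ins for the thesis's CNN pipeline and combinatorial optimizer, showing how a CNN-estimated passenger count can reshape the dispatch cost.

    # A minimal sketch of occupancy-aware dispatching; all names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Car:
        floor: int       # current floor
        capacity: int    # maximum passengers
        load: int        # passengers currently on board

    @dataclass
    class HallCall:
        floor: int
        waiting: int     # CNN-estimated number of waiting passengers

    def dispatch_cost(car: Car, call: HallCall) -> float:
        # Travel distance plus a penalty when the estimated demand exceeds the
        # car's remaining capacity: the situational term extracted by the CNN.
        travel = abs(car.floor - call.floor)
        overflow = max(0, call.waiting - (car.capacity - car.load))
        return travel + 10.0 * overflow  # penalty weight is illustrative

    def assign(cars: list[Car], call: HallCall) -> Car:
        # Greedy single-call assignment; the thesis's combinatorial optimizer
        # would instead search over all pending calls and cars jointly.
        return min(cars, key=lambda c: dispatch_cost(c, call))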

    Variable precision rough set theory decision support system: With an application to bank rating prediction

    This dissertation considers the Variable Precision Rough Sets (VPRS) model and its development within a comprehensive software package (decision support system) incorporating methods of resampling and classifier aggregation. The concept of β-reduct aggregation is introduced as a novel approach to classifier aggregation within the VPRS framework. The software is applied to the credit rating prediction problem; in particular, a full exposition of the prediction and classification of Fitch's Individual Bank Strength Ratings (FIBRs) for a number of banks from around the world is presented. The ethos of the developed software was to rely heavily on a simple 'point and click' interface, designed to make a VPRS analysis accessible to an analyst who is not necessarily an expert in the field of VPRS or decision rule based systems. The development of the software also benefited from consultations with managers from one of Europe's leading hedge funds, who gave valuable insight, advice and recommendations on what they considered pertinent issues in data mining and what they would like to see from a modern data mining system. The elements within the developed software reflect each stage of the knowledge discovery process, namely pre-processing, feature selection, data mining, interpretation and evaluation. The developed software encompasses three packages: a pre-processing package incorporating some of the latest pre-processing and feature selection methods; a VPRS data mining package, based on a novel "vein graph" interface, which presents the analyst with selectable β-reducts over the domain of β; and a third, more advanced VPRS data mining package, which essentially automates the vein graph interface for incorporation into a resampling environment and also implements the introduced aggregated β-reduct, developed to optimise and stabilise the predictive accuracy of a set of decision rules induced from the aggregated β-reduct.
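    For readers unfamiliar with VPRS, the majority-inclusion rule behind β-reducts is easy to state: an equivalence class of objects belongs to the β-lower approximation of a concept X when the proportion of its members in X is at least β, with β = 1 recovering classical rough sets. The Python sketch below illustrates that standard definition; it is not code from the dissertation's package, and the toy data is invented.

    # Variable-precision lower approximation via majority inclusion.
    from collections import defaultdict

    def beta_lower_approximation(objects, attrs, concept, beta):
        # objects: list of dicts of attribute values; concept: set of object
        # indices belonging to X; beta in (0.5, 1.0].
        classes = defaultdict(list)
        for i, obj in enumerate(objects):
            classes[tuple(obj[a] for a in attrs)].append(i)
        lower = set()
        for members in classes.values():
            p = sum(1 for i in members if i in concept) / len(members)
            if p >= beta:            # majority-inclusion test
                lower.update(members)
        return lower

    # Toy example: the class {0, 1, 2} is 2/3 inside the concept, so it is
    # admitted at beta = 0.6 but would be rejected at beta = 1 (classical case).
    objs = [{"a": 1}, {"a": 1}, {"a": 1}, {"a": 0}]
    print(beta_lower_approximation(objs, ["a"], {0, 1, 3}, beta=0.6))  # {0, 1, 2, 3}

    A β-reduct is then, informally, a minimal subset of condition attributes that preserves the quality of classification at precision β; the aggregation introduced in the dissertation combines β-reducts obtained across resamples to stabilise the induced decision rules.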

    I2ECR: Integrated and Intelligent Environment for Clinical Research

    Clinical trials are designed to produce new knowledge about a certain disease, drug or treatment. During these studies, a huge amount of data is collected about participants, therapies, clinical procedures, outcomes, adverse events and so on. A multicenter, randomized, phase III clinical trial in hematology enrolls up to hundreds of subjects and evaluates post-treatment outcomes on stratified subgroups of subjects for a period of many years. Data collection in clinical trials is therefore becoming complex, involving a huge number of clinical and biological variables. Outside the medical field, data warehouses (DWs) are widely employed. A data warehouse is a "collection of integrated, subject-oriented databases designed to support the decision-making process". To verify whether DWs might be useful for data quality and association analysis, a team of biomedical engineers, clinicians, biologists and statisticians developed the "I2ECR" project. I2ECR is an Integrated and Intelligent Environment for Clinical Research where clinical and omics data stand together for clinical use (reporting) and for the generation of new clinical knowledge. I2ECR has been built from the "MCL0208" phase III, prospective clinical trial, sponsored by the Fondazione Italiana Linfomi (FIL); this is in fact a translational study, accounting for many clinical data, along with several clinical prognostic indexes (e.g. MIPI - Mantle Cell Lymphoma International Prognostic Index), pathological information, treatment and outcome data, biological assessments of disease (MRD - Minimal Residual Disease), as well as many ancillary biological studies, such as mutational analysis, Gene Expression Profiling (GEP) and pharmacogenomics. Forty-eight Italian medical centers were actively involved in this trial, for a total of 300 enrolled subjects. The main objectives of I2ECR are:
    • to propose an integration project built on clinical and molecular data quality concepts, applying clear raw-data analysis and clinical trial monitoring strategies to implement a digital platform where clinical, biological and "omics" data are imported from different sources and well integrated in a data warehouse;
    • to be a dynamic repository of data congruency quality rules. I2ECR makes it possible to monitor, in a semi-automatic manner, the quality of the clinical data imported from eCRFs (electronic Case Report Forms) and of the biological and mutational datasets edited internally by local laboratories. I2ECR is therefore able to detect missing data and mistakes arising from non-conventional data-entry activities at the centers;
    • to provide clinical stakeholders with a platform from which they can easily design statistical and data mining analyses. The term Data Mining (DM) identifies a set of tools for searching for hidden patterns of interest in large and multivariate datasets. The applications of DM techniques in the medical field range from outcome prediction and patient classification to genomic medicine and molecular biology. I2ECR allows clinical stakeholders to apply innovative methods of supervised and unsupervised feature extraction, data classification and statistical analysis to the heterogeneous datasets associated with the MCL0208 clinical trial.
    Although the MCL0208 study is the first example of data population of I2ECR, the environment will be able to import data from clinical studies designed for other onco-hematologic diseases, too.
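    As an illustration of the data congruency quality rule concept, the sketch below expresses rules as named predicates checked against records merged from eCRFs and laboratory datasets. It is a hypothetical reconstruction: the field names and the rules themselves are assumptions, not I2ECR's actual implementation.

    # Hypothetical congruency rules over merged eCRF/laboratory records.
    from typing import Callable

    Rule = tuple[str, Callable[[dict], bool]]  # (description, predicate)

    RULES: list[Rule] = [
        ("MIPI score requires age, ECOG, LDH and WBC",
         lambda r: r.get("mipi") is None
         or all(r.get(k) is not None for k in ("age", "ecog", "ldh", "wbc"))),
        ("MRD assessment implies an enrolled subject id",
         lambda r: r.get("mrd_status") is None or r.get("subject_id") is not None),
    ]

    def check(records: list[dict]) -> list[tuple[str, str]]:
        # Returns (subject id, violated rule) pairs flagged for manual review,
        # mirroring the semi-automatic monitoring described above.
        return [(str(r.get("subject_id", "?")), desc)
                for r in records for desc, pred in RULES if not pred(r)]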

    Data-driven disaster management in a smart city

    Disasters, both natural and man-made, are complex events that result in the loss of human life and/or the destruction of property. Advances in Information Technology (IT) and Big Data analysis represent an opportunity for the development of resilient environments, since the application of Big Data (BD) technologies makes it possible not only to extract patterns of occurrence of events but also to predict them. The work carried out in this dissertation applies the CRISP-DM methodology to conduct a descriptive and predictive analysis of the events that occurred in the city of Lisbon, with emphasis on the events that affected buildings. This research verified the existence of temporal and spatial patterns of occurrences, with some events concentrated in certain periods of the year, such as floods and collapses, which are recorded more frequently in periods of high precipitation. The spatial analysis showed that the city center is the area most affected by the occurrences, and it is in these areas that the largest proportion of buildings with major repair needs is concentrated. Finally, machine learning models were applied to the data, and the Random Forest model obtained the best result, with an accuracy of 58%. This research contributes to improving the resilience of the city, since the analysis allowed insights to be extracted regarding the events and their occurrence patterns that will help the decision-making process.
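    As a sketch of the predictive step, the fragment below trains a Random Forest on a tabular dataset of occurrences with scikit-learn. The file name and column names are hypothetical; the dissertation's actual features and tooling are not specified in the abstract.

    # Hypothetical reconstruction of the Random Forest experiment.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("occurrences.csv")                  # hypothetical dataset
    X = pd.get_dummies(df[["month", "parish", "precipitation"]])
    y = df["event_type"]                                 # e.g. flood, collapse

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"accuracy = {accuracy_score(y_te, model.predict(X_te)):.2f}")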

    Can bank interaction during rating measurement of micro and very small enterprises ipso facto determine the collapse of PD status?

    This paper begins with an analysis of trends - over the period 2012-2018 - for total bank loans, non-performing loans, and the number of active, working enterprises. A review survey was conducted on national data from Italy, with a comparison developed on a local subset from the Sardinia Region. Empirical evidence appears to support the hypothesis of the paper: can the rating class assigned by banks - using current IRB and A-IRB systems - to micro and very small enterprises, whose ability to replace financial resources by endogenous means is structurally impaired, ipso facto orient the performance results in the same terms as the PD assigned by the algorithm, thereby upending the principle of cause and effect? The thesis is developed through mathematical modeling that demonstrates the interaction of the measurement tool (the rating algorithm applied by banks) with the collapse of the loan status (default, performing, or some intermediate point) of the assessed micro-entity. Emphasis is given, in conclusion, to this phenomenon, using evidence of the intrinsically mutualistic link between the two populations of banks and (micro) enterprises provided by a system of differential equations.
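    The abstract does not reproduce the paper's system of differential equations. As an illustration of the form a two-population mutualistic model typically takes (all symbols here are assumptions, not the paper's notation), the bank population B and the micro-enterprise population E could be coupled as

    \frac{dB}{dt} = B\,(r_B - a_B B + c_B E), \qquad \frac{dE}{dt} = E\,(r_E - a_E E + c_E B)

    where r_B, r_E are intrinsic growth rates, a_B, a_E are self-limitation terms, and c_B, c_E are the cross-population benefits; positive c on both sides encodes the intrinsically mutualistic link between the two populations.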

    Recent Trends in Computational Intelligence

    Traditional models struggle to cope with complexity, noise, and changing environments, while Computational Intelligence (CI) offers solutions to complicated problems as well as inverse problems. The main feature of CI is adaptability, spanning the fields of machine learning and computational neuroscience. CI also comprises biologically inspired technologies such as swarm intelligence, as part of evolutionary computation, and encompasses wider areas such as image processing, data collection, and natural language processing. This book discusses the use of CI for the optimal solving of various applications, demonstrating its wide reach and relevance. The combination of optimization methods and data mining strategies makes a strong and reliable prediction tool for handling real-life applications.

    Efficient Decision Support Systems

    This series is directed at the diverse managerial professionals who are leading the transformation of individual domains by using expert information and domain knowledge to drive decision support systems (DSSs). The series addresses a broad range of subjects in specific areas such as health care, business management, banking, agriculture, environmental improvement, natural resource and spatial management, aviation administration, and hybrid applications of information technology aimed at interdisciplinary issues. The book series is composed of three volumes: Volume 1 covers general concepts and the methodology of DSSs; Volume 2 covers applications of DSSs in the biomedical domain; Volume 3 covers hybrid applications of DSSs in multidisciplinary domains. The series is built around decision support strategies for the new infrastructure, assisting readers in making full use of creative technology to manipulate input data and to transform information into useful decisions for decision makers.

    Innovative Methods and Materials in Structural Health Monitoring of Civil Infrastructures

    In the past, when structural elements were made of perishable materials such as wood, the maintenance of houses, bridges, etc., was considered of vital importance for their safe use and for preserving their efficiency. With the advent of materials such as reinforced concrete and steel, given their relatively long useful life, periodic and constant maintenance was often treated as a secondary concern. When it was realized that even structures fabricated with these materials have a useful life that ends, and that this end was being approached, planning maintenance became an important and non-negligible aspect. Thus, the concept of structural health monitoring (SHM) was introduced, designed, and implemented as a multidisciplinary method. It requires computational mechanics, static and dynamic analysis of structures, electronics, sensors, and, more recently, the Internet of Things (IoT) and artificial intelligence (AI); it is also important to consider new materials, especially those with intrinsic self-diagnosis characteristics, and to use measurement and survey methods typical of modern geomatics, such as satellite surveys and highly sophisticated laser tools.