
    Probability Transform Based on the Ordered Weighted Averaging and Entropy Difference

    Dempster-Shafer evidence theory can handle imprecise and unknown information, which has attracted wide attention. In most cases, the mass function can be translated into a probability distribution, which helps to expand the applications of D-S evidence theory. However, how to reasonably transfer the mass function to the probability distribution is still an open issue. Hence, this paper proposes a new probability transform method based on ordered weighted averaging and entropy difference. The new method calculates weights by ordered weighted averaging and adds the entropy difference as one of the measurement indicators; the transformation with minimum entropy difference is then achieved by adjusting the parameter r of the weight function. Finally, numerical examples are given to show that the new method is more reasonable and effective.
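    As a rough illustration of the ingredients the paper combines, the sketch below generates OWA weights from a RIM quantifier with parameter r and applies the classical pignistic transform to a small mass function, reporting the Shannon entropy of the result. The exact way the paper couples the OWA weights with the entropy difference is not reproduced here; the quantifier Q(x) = x**r, the example mass function, and all function names are illustrative assumptions.

```python
from math import log2

def pignistic_transform(mass):
    """Classical pignistic transform BetP: split each focal element's mass
    evenly among its singletons (the baseline such transform methods refine)."""
    prob = {}
    for focal, m in mass.items():
        for element in focal:
            prob[element] = prob.get(element, 0.0) + m / len(focal)
    return prob

def owa_weights(n, r):
    """OWA weights from a RIM quantifier Q(x) = x**r.
    r < 1 favours the largest arguments (OR-like), r > 1 the smallest (AND-like)."""
    return [(i / n) ** r - ((i - 1) / n) ** r for i in range(1, n + 1)]

def shannon_entropy(prob):
    return -sum(p * log2(p) for p in prob.values() if p > 0)

# Example mass function over the frame {a, b, c}; focal sets as frozensets.
mass = {frozenset('a'): 0.5, frozenset('ab'): 0.3, frozenset('abc'): 0.2}
betp = pignistic_transform(mass)
print(betp, shannon_entropy(betp), owa_weights(3, r=0.5))
```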

    Temporospatial Context-Aware Vehicular Crash Risk Prediction

    With the demand for more vehicles increasing, road safety is becoming a growing concern. Traffic collisions take many lives and cost billions of dollars in losses, which explains the growing interest of governments, academic institutions and companies in road safety. The vastness and availability of road accident data have provided new opportunities for gaining a better understanding of accident risk factors and for developing more effective accident prediction and prevention regimes. Much of the empirical research on road safety and accident analysis utilizes statistical models that capture only limited aspects of crashes. Data mining, on the other hand, has recently gained interest as a reliable approach for investigating road-accident data and for providing predictive insights. While some risk factors contribute more frequently to the occurrence of a road accident, the importance of driver behavior, temporospatial factors, and real-time traffic dynamics has been underestimated. This study proposes a framework for predicting crash risk based on historical accident data. The proposed framework incorporates machine learning and data analytics techniques to identify driving patterns and other risk factors associated with potential vehicle crashes. These techniques include clustering, association rule mining, information fusion, and Bayesian networks. Swarm intelligence based association rule mining is employed to uncover the underlying relationships and dependencies in collision databases, and data segmentation methods are employed to eliminate the effect of dependent variables. Extracted rules can be used along with real-time mobility data to predict crashes and their severity in real time. The national collision database of Canada (NCDB) is used in this research to generate association rules with crash-risk-oriented consequents, and to compare the performance of the swarm intelligence based approach with that of other association rule miners. Many industry-demanding datasets, including road-accident datasets, are deficient in descriptive factors, which is a significant barrier to uncovering meaningful risk factor relationships. To resolve this issue, this study proposes a knowledgebase approximation framework that enhances crash risk analysis by integrating pieces of evidence discovered from disparate datasets capturing different aspects of mobility. Dempster-Shafer theory is utilized as a key element of this knowledgebase approximation; this method can integrate association rules with acceptable accuracy under certain circumstances that are discussed in this thesis. The proposed framework is tested on the lymphography dataset and the road-accident database of Great Britain. The derived insights are then used as the basis for constructing a Bayesian network that can estimate crash likelihood and risk levels so as to warn drivers and prevent accidents in real time. This Bayesian network approach offers a way to implement a naturalistic driving analysis process for predicting traffic collision risk based on the findings from the data-driven model. A traffic incident detection and localization method is also proposed as a component of the risk analysis model; detecting and localizing traffic incidents enables timely response to accidents and facilitates effective and efficient traffic flow management. The results obtained from the experimental work conducted on this component are indicative of the capability of our Dempster-Shafer data-fusion-based incident detection method in overcoming the challenges arising from erroneous and noisy sensor readings.
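    The Dempster-Shafer combination step that underpins both the knowledgebase approximation and the data-fusion-based incident detection can be sketched in a few lines. The implementation below is the textbook rule for combining two mass functions over a common frame of discernment; the sensor mass values and frame labels are made up for illustration and do not come from the thesis.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                 # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting sources; rule undefined.")
    return {focal: m / (1.0 - conflict) for focal, m in combined.items()}

# Two hypothetical sensors reporting on incident vs. clear traffic.
s1 = {frozenset({'incident'}): 0.7, frozenset({'incident', 'clear'}): 0.3}
s2 = {frozenset({'incident'}): 0.6, frozenset({'clear'}): 0.1,
      frozenset({'incident', 'clear'}): 0.3}
print(dempster_combine(s1, s2))
```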

    Insights and Characterization of l1-norm Based Sparsity Learning of a Lexicographically Encoded Capacity Vector for the Choquet Integral

    This thesis aims to simultaneously minimize function error and model complexity for data fusion via the Choquet integral (CI). The CI is a generator function, i.e., it is parametric and yields a wealth of aggregation operators based on the specifics of the underlying fuzzy measure. It is often the case that we desire to learn a fusion from data, and the goal is to have the smallest possible sum of squared error between the trained model and a set of labels. However, we also desire to learn as “simple” a solution as possible. Herein, l1-norm regularization of a lexicographically encoded capacity vector relative to the CI is explored. The impact of regularization is explored in terms of what capacities and aggregation operators it induces under different common and extreme scenarios. Synthetic experiments are provided in order to illustrate the propositions and concepts put forth.
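    For readers unfamiliar with the aggregation operator being regularized, the following sketch computes the discrete Choquet integral of an input vector with respect to a capacity (fuzzy measure) stored on the full subset lattice; in the thesis the non-trivial capacity values are stacked into a lexicographically encoded vector and an l1 penalty on that vector is added to the sum-of-squared-error objective during learning, which is not shown here. The example capacity values are arbitrary but monotone.

```python
import numpy as np

def choquet_integral(x, capacity):
    """Discrete Choquet integral of the input vector x with respect to a
    fuzzy measure given as {frozenset of source indices: value}, with
    capacity[frozenset()] == 0 and capacity[frozenset(all sources)] == 1."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)[::-1]                 # sources sorted by descending value
    vals = np.append(x[order], 0.0)
    result = 0.0
    for i in range(len(x)):
        subset = frozenset(int(j) for j in order[: i + 1])
        result += (vals[i] - vals[i + 1]) * capacity[subset]
    return result

# Hypothetical 3-source capacity (full lattice; in lexicographic-vector form
# the 2^3 - 1 non-trivial values would be flattened into a single vector).
g = {frozenset(): 0.0,
     frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({2}): 0.2,
     frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.5, frozenset({1, 2}): 0.6,
     frozenset({0, 1, 2}): 1.0}
print(choquet_integral([0.9, 0.4, 0.7], g))
```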

    Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected Works), Vol. 4

    The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics. The contributions (see the List of Articles published in this book, at the end of the volume) have been published or presented, after the dissemination of the third volume (2009, http://fs.unm.edu/DSmT-book3.pdf), in international conferences, seminars, workshops and journals. The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importance and reliabilities, importance of sources, pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, 2-tuple linguistic labels, the Electre Tri method, hierarchical proportional redistribution, basic belief assignment, subjective probability measure, Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer Theory, Bayes fusion rule, frequentist probability, mean square error, controlling factor, optimal assignment solution, data association, the Transferable Belief Model, and others. More applications of DSmT have emerged in the years since the appearance of the third DSmT book in 2009. Subsequently, the second part of this volume is about applications of DSmT in correlation with Electronic Support Measures, belief functions, sensor networks, ground moving target and multiple target tracking, Vehicle-Borne Improvised Explosive Devices, the Belief Interacting Multiple Model filter, seismic and acoustic sensors, Support Vector Machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, Dynamic Data-Driven Application Systems, adjustment of secure communication trust analysis, and so on. Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, chronologically ordered.

    Towards an automated approach to map flooded areas from Sentinel-2 MSI data and soft integration of water spectral features

    In this work we propose an approach for mapping flooded areas from Sentinel-2 MSI (Multispectral Instrument) data based on soft fuzzy integration of evidence scores derived from both band combinations (i.e. Spectral Indices, SIs) and components of the Hue, Saturation and Value (HSV) colour transformation. Evidence scores are integrated with Ordered Weighted Averaging (OWA) operators, which model the user's decision attitude, varying smoothly between an optimistic and a pessimistic approach. The output is a map of global evidence degree showing the plausibility of being flooded for each pixel of the input Sentinel-2 (S2) image. Algorithm set-up and validation were carried out with data over three sites in Italy where water surfaces are extracted from stable water bodies (lakes and rivers), natural hazard flooding, and irrigated paddy rice fields. Validation showed more than satisfactory accuracy for the OR-like OWA operators (F-score > 0.90), with performance slightly decreased (F-score
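    A minimal pixel-wise OWA aggregation of evidence layers can be sketched as follows; the evidence values, layer names and weight vectors are placeholders, and the actual spectral indices, HSV components and weight parameterisation used in the paper are not reproduced here.

```python
import numpy as np

def owa_aggregate(evidence_stack, weights):
    """Pixel-wise OWA aggregation of evidence layers.
    evidence_stack: array of shape (n_layers, H, W) with values in [0, 1].
    weights: length-n_layers OWA weight vector (sums to 1), applied to the
    per-pixel evidence scores sorted in descending order."""
    sorted_desc = -np.sort(-evidence_stack, axis=0)   # sort layers per pixel, high to low
    w = np.asarray(weights).reshape(-1, 1, 1)
    return (sorted_desc * w).sum(axis=0)

# Three hypothetical evidence layers (e.g. a water index, a second index, an HSV component).
evidence = np.random.rand(3, 4, 4)
or_like  = [0.6, 0.3, 0.1]   # weight the high scores most -> optimistic (OR-like)
and_like = [0.1, 0.3, 0.6]   # weight the low scores most  -> pessimistic (AND-like)
flood_plausibility = owa_aggregate(evidence, or_like)
print(flood_plausibility.shape)
```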

    Timbre brownfield prioritization tool to support effective brownfield regeneration.

    In the last decade, the regeneration of derelict or underused sites fully or partly located in urban areas (so-called “brownfields”) has become more common, since free developable land (so-called “greenfields”) has increasingly become a scarce and, hence, more expensive resource, especially in densely populated areas. Although the regeneration of brownfield sites can offer development potential, the complexity of these sites requires considerable effort to successfully complete their revitalization projects, and the proper selection of promising sites is a pre-requisite to efficiently allocating the limited financial resources. The identification and analysis of success factors for brownfield site regeneration can support investors and decision makers in selecting those sites which are the most advantageous for successful regeneration. The objective of this paper is to present the Timbre Brownfield Prioritization Tool (TBPT), developed as a web-based solution to assist stakeholders responsible for wider territories or clusters of brownfield sites (portfolios) in identifying which brownfield sites should preferably be considered for redevelopment or further investigation. The prioritization approach is based on a set of success factors identified through a systematic stakeholder engagement procedure. Within the TBPT these success factors are integrated by means of a Multi Criteria Decision Analysis (MCDA) methodology, which includes stakeholders' requalification objectives and perspectives related to the brownfield regeneration process and takes into account the three pillars of sustainability (economic, social and environmental dimensions). The tool has been applied to the South Moravia case study (Czech Republic), considering two different requalification objectives identified by local stakeholders, namely the selection of suitable locations for the development of a shopping centre and a solar power plant, respectively. The application of the TBPT to the case study showed that it is flexible and easy to adapt to different local contexts, allowing the assessors to introduce locally relevant parameters identified according to their expertise and considering the availability of local data.
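    The abstract does not spell out the MCDA formula, so the sketch below shows a generic weighted-sum prioritisation over normalised success-factor scores, which conveys the flavour of ranking brownfield sites under stakeholder weights; the site data, weights and cost/benefit flags are hypothetical and the TBPT's actual aggregation scheme may differ.

```python
import numpy as np

def prioritize_sites(scores, weights, benefit):
    """Minimal weighted-sum MCDA ranking sketch.
    scores: (n_sites, n_criteria) raw success-factor values.
    weights: per-criterion stakeholder weights (sum to 1).
    benefit: one bool per criterion; True if larger is better, False if it is a cost."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(axis=0), s.max(axis=0)
    norm = (s - lo) / np.where(hi > lo, hi - lo, 1.0)   # min-max normalisation per criterion
    norm = np.where(benefit, norm, 1.0 - norm)          # invert cost criteria
    total = norm @ np.asarray(weights)
    return np.argsort(-total), total                    # best site first

ranking, utility = prioritize_sites(
    scores=[[3.2, 0.8, 12], [1.1, 0.9, 40], [2.5, 0.4, 25]],   # hypothetical site data
    weights=[0.5, 0.3, 0.2],
    benefit=[True, True, False])
print(ranking, utility)
```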

    Fuzzy Sets in Business Management, Finance, and Economics

    This book collects fifteen papers published in a Special Issue of Mathematics titled “Fuzzy Sets in Business Management, Finance, and Economics”, which was published in 2021. These papers cover a wide range of tools from Fuzzy Set Theory and applications in many areas of Business Management and other connected fields. Specifically, this book contains applications of such instruments as, among others, Fuzzy Set Qualitative Comparative Analysis, Neuro-Fuzzy Methods, the Forgotten Effects Algorithm, Expertons Theory, Fuzzy Markov Chains, Fuzzy Arithmetic, Decision Making with OWA Operators and Pythagorean Aggregation Operators, Fuzzy Pattern Recognition, and Intuitionistic Fuzzy Sets. The papers in this book tackle a wide variety of problems in areas such as strategic management, sustainable decisions by firms and public bodies, tourism management, accounting and auditing, macroeconomic modelling, the evaluation of public organizations and universities, and actuarial modelling. We hope that this book will be useful not only for business managers, public decision-makers, and researchers in the specific fields of business management, finance, and economics but also in the broader areas of soft mathematics in the social sciences. Practitioners will find methods and ideas that could be fruitful in current management issues. Scholars will find novel developments that may inspire further applications in the social sciences.

    Development and validation of HRCT airway segmentation algorithms

    Direct measurements of airway lumen and wall areas are potentially useful as a diagnostic tool and as an aid to understanding the pathophysiology underlying lung disease. Direct measurements can be made from images created by high resolution computed tomography (HRCT) by using computer-based algorithms to segment airways, but current validation techniques cannot adequately establish the accuracy and precision of these algorithms. A detailed review of HRCT airway segmentation algorithms was undertaken, from which three candidate algorithm designs were developed. A custom Windows-based software program was implemented to facilitate multi-modality development and validation of the segmentation algorithms. The performance of the algorithms was examined in clinical HRCT images. A centre-likelihood (CL) ray-casting algorithm was found to be the most suitable algorithm due to its speed and reliability in semi-automatic segmentation and tracking of the airway wall. Several novel refinements were demonstrated to improve the CL algorithm’s robustness in HRCT lung data. The performance of the CL algorithm was then quantified in two-dimensional simulated data to optimise customisable parameters such as edge-detection method, interpolation and number of rays. Novel correction equations to counter the effects of volume averaging and airway orientation angle were derived and demonstrated in three-dimensional simulated data. The optimal CL algorithm was validated with HRCT data using a plastic phantom and a pig lung phantom matched to micro-CT. Accuracy was found to be improved compared to previous studies using similar methods. The volume averaging correction was found to improve precision and accuracy in the plastic phantom but not in the pig lung phantom. When tested in a clinical setting, the results of the optimised CL algorithm were in agreement with the results of other measures of lung function. The thesis concludes that the relative contributions of confounders of airway measurement have been quantified in simulated data and that the CL algorithm’s performance has been validated in a plastic phantom as well as an animal model. This validation protocol has improved the accuracy and precision of measurements made using the CL algorithm.
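    A much-simplified version of the centre-likelihood ray-casting idea is sketched below: rays are cast from a seed point inside the lumen of a 2D HU slice and marched outward until the intensity crosses an air/wall threshold, and the centroid of the detected edge points gives a refined centre estimate that could seed the next iteration. The threshold-crossing edge detector, the parameter values and the synthetic phantom slice are illustrative assumptions; the thesis evaluates several edge-detection methods and adds corrections for volume averaging and airway orientation that are not shown here.

```python
import numpy as np

def cast_rays(slice_hu, seed, n_rays=64, wall_hu=-500, max_len=30.0, step=0.5):
    """Minimal ray-casting sketch for airway lumen edge detection in a 2D HU
    slice: march outward from the seed along each ray until the intensity
    crosses wall_hu (air-to-wall transition). Returns the edge points and the
    centroid of those points as a refined centre estimate."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    edges = []
    for theta in angles:
        direction = np.array([np.cos(theta), np.sin(theta)])
        r = step
        while r < max_len:
            y, x = np.round(seed + r * direction).astype(int)
            if not (0 <= y < slice_hu.shape[0] and 0 <= x < slice_hu.shape[1]):
                break
            if slice_hu[y, x] > wall_hu:          # left the air-filled lumen
                edges.append(seed + r * direction)
                break
            r += step
    edges = np.array(edges)
    centre = edges.mean(axis=0) if len(edges) else np.asarray(seed, dtype=float)
    return edges, centre

# Hypothetical 80x80 slice: an air-filled lumen (-1000 HU, radius 8) in soft tissue (0 HU).
yy, xx = np.mgrid[:80, :80]
slice_hu = np.where((yy - 40) ** 2 + (xx - 40) ** 2 < 8 ** 2, -1000.0, 0.0)
edge_pts, centre = cast_rays(slice_hu, seed=np.array([40.0, 40.0]))
print(len(edge_pts), centre)
```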