    Applying case based reasoning for prioritizing areas of business management

    Determining the importance of different management areas in a company provides guidance about where analysis and action should be focused. To do this, management must be decomposed into a coherent set of specific management areas, together with a way for the company to determine how important each area is to it. This paper presents a novel system that guides companies in obtaining a ranking of the management areas that matter most to them. It is built on case-based reasoning because the variability and evolution of companies over time call for techniques with learning capabilities. The proposed system provides automatic self-assessment that gives companies an ordered list of their most important management areas. The system was deployed a year ago for the evaluation of Spanish companies and is currently in production, providing relevant information about the management areas of these companies.
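
    The abstract does not detail the case-based reasoning (CBR) machinery. Below is a minimal sketch of the generic retrieve-and-reuse steps of CBR applied to ranking: retrieve the most similar past companies and aggregate their known area-importance scores. All names (e.g. rank_management_areas) and data are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of CBR retrieve-and-reuse for ranking management
# areas: find the k most similar stored cases, then combine their known
# area-importance scores with distance weighting. Illustrative only.
import numpy as np

def rank_management_areas(query, case_features, case_scores, k=5):
    """query: (n_features,) profile of the new company.
    case_features: (n_cases, n_features) profiles of past cases.
    case_scores: (n_cases, n_areas) importance scores per management area.
    Returns management-area indices ordered from most to least important."""
    # Retrieve: Euclidean distance from the query to every stored case.
    dists = np.linalg.norm(case_features - query, axis=1)
    nearest = np.argsort(dists)[:k]
    # Reuse: distance-weighted average of the neighbours' scores.
    weights = 1.0 / (dists[nearest] + 1e-9)
    predicted = weights @ case_scores[nearest] / weights.sum()
    return np.argsort(predicted)[::-1]

# Toy example: 3 past cases, 4 management areas.
cases = np.array([[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]])
scores = np.array([[3, 1, 4, 2], [1, 4, 2, 3], [2, 3, 3, 2]])
print(rank_management_areas(np.array([0.3, 0.7]), cases, scores, k=2))
```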

    Feature Sensitive Three-Dimensional Point Cloud Simplification using Support Vector Regression

    Contemporary three-dimensional (3D) scanning devices are characterized by high speed and resolution. They produce dense point clouds that contain abundant data about scanned objects and require computationally intensive, time-consuming processing. On the other hand, point clouds usually contain a large amount of redundant data that carries little or no additional information about the scanned object's geometry. To facilitate further analysis and extraction of relevant information from a point cloud, as well as faster transfer of data between computational devices, it is rational to simplify it at an early stage of processing. However, the data reduction has to preserve a high level of information content; the simplification has to be feature sensitive. In this paper we propose a method for feature-sensitive simplification of 3D point clouds based on epsilon-insensitive support vector regression (epsilon-SVR). The proposed method is intended for structured point clouds. It exploits the flatness property of epsilon-SVR to effectively recognize points in high-curvature areas of scanned lines. Points from these areas are kept in the simplified point cloud, along with a reduced number of points from flat areas. In addition, the method effectively detects points in the vicinity of sharp edges without additional processing. The proposed simplification method is experimentally verified on three real-world case studies. To estimate the quality of the simplification, we fit non-uniform rational B-splines to the initial and reduced scan lines.
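
    The following is a minimal sketch of the core idea as the abstract describes it, not the authors' implementation: fit an epsilon-insensitive SVR to each structured scan line, treat points whose residual leaves the flat epsilon-tube as high-curvature points to keep, and thin out the remaining flat-area points. The kernel, parameter values, and function name are assumptions.

```python
# Sketch: per-scan-line simplification with epsilon-insensitive SVR.
# Points outside the epsilon-tube (large residuals) lie in high-curvature
# regions and are kept; flat regions are subsampled. Illustrative only.
import numpy as np
from sklearn.svm import SVR

def simplify_scan_line(x, z, eps=0.05, flat_stride=10):
    """x, z: 1-D coordinates of one structured scan line.
    Returns a boolean mask of points to keep."""
    svr = SVR(kernel="rbf", C=100.0, epsilon=eps)
    svr.fit(x.reshape(-1, 1), z)
    residual = np.abs(z - svr.predict(x.reshape(-1, 1)))
    keep = residual >= eps              # high-curvature points
    keep[::flat_stride] = True          # sparse samples of flat areas
    return keep

# Toy example: a flat line with a sharp bump in the middle.
x = np.linspace(0.0, 1.0, 200)
z = np.where(np.abs(x - 0.5) < 0.05, 0.3, 0.0) + 0.001 * np.random.randn(200)
mask = simplify_scan_line(x, z)
print(f"kept {mask.sum()} of {len(x)} points")
```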

    A study of distributed clustering of vector time series on the grid by task farming

    Traditional data mining methods were limited by the availability of computing resources such as network bandwidth, storage space, and processing power. These algorithms worked around this problem by looking at a small cross-section of the available data. However, since a major chunk of the data was left out, the predictions were generally inaccurate and missed significant features that were part of the data. Today, with resources growing at almost the same pace as data, it is possible to rethink mining algorithms to work on distributed resources and, essentially, distributed data. Distributed data mining thus holds great promise. Using grid technologies, data mining can be extended to areas that were previously out of reach because of the volume of data being generated, such as climate modeling and web usage. An important characteristic of today's data is that it is highly decentralized and mostly redundant, so data mining algorithms that can make efficient use of distributed data have to be devised. Though it is possible to bring all the data together and run traditional algorithms, this carries a high overhead in terms of bandwidth used for transmission and preprocessing steps that must handle every format of the received data. By processing the data locally, the preprocessing stage can be made less bulky, and traditional data mining techniques can work on the data efficiently. The focus of this project is to apply an existing data mining technique, fuzzy c-means clustering, to distributed data in a simulated grid environment and to compare its performance against the traditional centralized approach.
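
    For reference, here is a minimal single-node sketch of fuzzy c-means, the algorithm the project distributes by task farming; a distributed variant would run these updates on local partitions and merge the results. The parameter values and data are illustrative, not from the project.

```python
# Standard fuzzy c-means: alternate centroid updates (membership-weighted
# means) and membership updates (inverse-distance weighting with
# fuzzifier m). Illustrative sketch only.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features). Returns (centroids, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                   # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)        # centroid update
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U = 1.0 / (D ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=0)               # membership update
    return V, U

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
V, U = fuzzy_c_means(X, c=2)
print(V)
```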

    Financial crises and bank failures: a review of prediction methods

    In this article we analyze the financial and economic circumstances associated with the U.S. subprime mortgage crisis and the global financial turmoil that has led to severe crises in many countries. We suggest that the level of cross-border holdings of long-term securities between the United States and the rest of the world may indicate a direct link between the turmoil in the securitized market that originated in the United States and that in other countries. We provide a summary of empirical results obtained in several Economics and Operations Research papers that attempt to explain, predict, or suggest remedies for financial crises or banking defaults, and we extensively outline the methodologies used in them. The intent of this article is to promote future empirical research on preventing financial crises.
    Keywords: Subprime mortgage; Financial crises
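
    The article surveys prediction methods rather than proposing one; a common baseline in the bank-failure literature it reviews is a logistic-regression early-warning model on financial ratios. The sketch below is generic and illustrative, with made-up features and data, and is not a method from the article.

```python
# Illustrative early-warning baseline: logistic regression predicting
# bank failure from two hypothetical financial ratios. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: capital-adequacy ratio, nonperforming-loan ratio.
X = np.column_stack([rng.normal(0.12, 0.03, n), rng.normal(0.05, 0.02, n)])
# Toy label: failure more likely with low capital and many bad loans.
logit = -0.5 - 40.0 * (X[:, 0] - 0.12) + 60.0 * (X[:, 1] - 0.05)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print("failure probability:", model.predict_proba([[0.06, 0.10]])[0, 1])
```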

    Longitudinal clustering analysis and prediction of Parkinson's disease progression using radiomics and hybrid machine learning

    Background: We employed machine learning approaches to (I) determine distinct progression trajectories in Parkinson's disease (PD) (unsupervised clustering task), and (II) predict progression trajectories (supervised prediction task) from early (years 0 and 1) data, making use of clinical and imaging features. Methods: We studied PD subjects derived from longitudinal datasets (years 0, 1, 2 & 4; Parkinson's Progression Markers Initiative). We extracted and analyzed 981 features, including motor, non-motor, and radiomics features extracted for each region of interest (ROIs: left/right caudate and putamen) using our standardized environment for radiomics analysis (SERA) software. Segmentation of ROIs on dopamine transporter single photon emission computed tomography (DAT SPECT) images was performed via magnetic resonance images (MRI). After performing cross-sectional clustering on 885 subjects (original dataset) to identify disease subtypes, we identified optimal longitudinal trajectories using hybrid machine learning systems (HMLSs), including principal component analysis (PCA) + the K-means algorithm (KMA) followed by the Bayesian information criterion (BIC), the Calinski-Harabasz criterion (CHC), and the elbow criterion (EC). Subsequently, prediction of the identified trajectories from early-year data was performed using multiple HMLSs, including 16 dimension reduction algorithms (DRAs) and 10 classification algorithms. Results: We identified 3 distinct progression trajectories. Hotelling's T-squared test (HTST) showed that the identified trajectories were distinct. The trajectories included those with (I, II) disease escalation (2 trajectories, 27% and 38% of patients) and (III) stable disease (1 trajectory, 35% of patients). For trajectory prediction from early-year data, HMLSs including the stochastic neighbor embedding algorithm (SNEA, as a DRA) as well as the locally linear embedding algorithm (LLEA, as a DRA), linked with the new probabilistic neural network classifier (NPNNC, as a classifier), resulted in accuracies of 78.4% and 79.2%, respectively, while other HMLSs such as SNEA + Lib_SVM (library for support vector machines) and t_SNE (t-distributed stochastic neighbor embedding) + NPNNC resulted in 76.5% and 76.1%, respectively. Conclusions: This study moves beyond cross-sectional PD subtyping to clustering of longitudinal disease trajectories. We conclude that combining medical information with SPECT-based radiomics features, and optimal utilization of HMLSs, can identify distinct disease trajectories in PD patients and enable effective prediction of disease trajectories from early-year data.
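
    As a reference point, here is a minimal sketch of one HMLS pipeline named in the abstract: PCA for dimension reduction, K-means for clustering, and the Calinski-Harabasz criterion to choose the number of clusters. The data is synthetic and the parameter choices are assumptions; this is not the study's code.

```python
# Sketch of a PCA + K-means HMLS with Calinski-Harabasz model selection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
# Stand-in for the feature matrix (subjects x clinical/radiomics features).
X = np.vstack([rng.normal(loc, 1.0, (100, 20)) for loc in (-3.0, 0.0, 3.0)])

Z = PCA(n_components=5).fit_transform(X)        # dimension reduction
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z)
    scores[k] = calinski_harabasz_score(Z, labels)

best_k = max(scores, key=scores.get)
print("Calinski-Harabasz scores:", scores, "-> chosen k =", best_k)
```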