    ART Neural Networks: Distributed Coding and ARTMAP Applications

    ART (Adaptive Resonance Theory) neural networks for fast, stable learning and prediction have been applied in a variety of areas. Applications include airplane design and manufacturing, automatic target recognition, financial forecasting, machine tool monitoring, digital circuit design, chemical analysis, and robot vision. Supervised ART architectures, called ARTMAP systems, feature internal control mechanisms that create stable recognition categories of optimal size by maximizing code compression while minimizing predictive error in an on-line setting. Special-purpose requirements of various application domains have led to a number of ARTMAP variants, including fuzzy ARTMAP, ART-EMAP, Gaussian ARTMAP, and distributed ARTMAP. ARTMAP has been used for a variety of applications, including computer-assisted medical diagnosis. Medical databases present many of the challenges found in general information management settings, where speed, efficiency, ease of use, and accuracy are at a premium. A direct goal of improved computer-assisted medicine is to help deliver quality emergency care in situations that may be less than ideal. Working with these problems has stimulated a number of ART architecture developments, including ARTMAP-IC [1]. This paper describes a recent collaborative effort, using a new cardiac care database for system development, that has brought together medical statisticians and clinicians at the New England Medical Center with researchers developing expert systems and neural networks, in order to create a hybrid method for medical diagnosis. The paper also considers new neural network architectures, including distributed ART (dART), a real-time model of parallel distributed pattern learning that permits fast as well as slow adaptation without catastrophic forgetting. Local synaptic computations in the dART model quantitatively match the paradoxical phenomenon of Markram-Tsodyks [2] redistribution of synaptic efficacy, as a consequence of global system hypotheses.
    Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)
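
    As a rough illustration of the ART dynamics described above, the following is a minimal sketch of unsupervised fuzzy ART category learning, with complement coding, a choice function, and a vigilance-gated match test. The function name, parameter defaults, and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def fuzzy_art_train(X, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal fuzzy ART clustering; X holds features pre-scaled to [0, 1]."""
    I_all = np.hstack([X, 1.0 - X])      # complement coding keeps |I| constant
    weights = []                         # long-term memory: one trace per category
    labels = np.zeros(len(I_all), dtype=int)
    for n, I in enumerate(I_all):
        chosen = -1
        if weights:
            W = np.array(weights)
            match = np.minimum(I, W).sum(axis=1)     # |I ^ w_j| (fuzzy AND)
            T = match / (alpha + W.sum(axis=1))      # choice function T_j
            for j in np.argsort(-T):                 # search categories by T_j
                if match[j] / I.sum() >= rho:        # vigilance (resonance) test
                    chosen = j
                    break
        if chosen < 0:                               # no resonance: new category
            weights.append(I.copy())
            chosen = len(weights) - 1
        else:                                        # w <- beta*(I^w) + (1-beta)*w
            w = weights[chosen]
            weights[chosen] = beta * np.minimum(I, w) + (1 - beta) * w
        labels[n] = chosen
    return np.array(weights), labels

rng = np.random.default_rng(0)
W, labels = fuzzy_art_train(rng.random((200, 4)), rho=0.6)
print(len(W), "categories learned")
```

    With beta=1.0 this is the fast-learning limit mentioned in the abstract; lowering beta gives slower adaptation of the kind distributed ART generalises.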

    A panel model for predicting the diversity of internal temperatures from English dwellings

    Using panel methods, a model for predicting daily mean internal temperature demand across a heterogeneous domestic building stock is developed. The model offers an important link that connects building stock models to human behaviour. It represents the first time a panel model has been used to estimate the dynamics of internal temperature demand from the natural daily fluctuations of external temperature combined with important behavioural, socio-demographic, and building efficiency variables. The model is able to predict internal temperatures across a heterogeneous building stock to within ~0.71°C at 95% confidence and to explain 45% of the variance of internal temperature between dwellings. The model confirms hypotheses from sociology and psychology that habitual behaviours are important drivers of home energy consumption. In addition, the model offers the possibility of quantifying take-back (the direct rebound effect) owing to increased internal temperatures following the installation of energy efficiency measures. The presence of thermostats or thermostatic radiator valves (TRVs) is shown to reduce average internal temperatures, whereas the use of an automatic timer is statistically insignificant. The number of occupants, household income, and occupant age are all important factors that explain a proportion of internal temperature demand. Households with children or retired occupants are shown to have higher average internal temperatures than households without. As expected, building typology, building age, roof insulation thickness, wall U-value, and the proportion of double glazing all have positive and statistically significant effects on daily mean internal temperature. In summary, the model can be used as a tool for predicting internal temperatures or for making statistical inferences. Its primary contribution, however, is the ability to calibrate existing building stock models to account for behavioural and socio-demographic effects, making it possible to back out more accurate predictions of domestic energy demand.
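
    The abstract does not give the model specification, but a minimal panel-style sketch can be written with statsmodels, using a random intercept per dwelling to capture the repeated daily observations; the file name and column names below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel layout: one row per dwelling per day, with daily mean
# internal/external temperatures plus behavioural and building covariates.
df = pd.read_csv("panel_temperatures.csv")

model = smf.mixedlm(
    "t_internal ~ t_external + occupants + income + has_trv + wall_u_value",
    data=df,
    groups=df["dwelling_id"],   # random intercept per dwelling
)
result = model.fit()
print(result.summary())
```

    For reference, a prediction band of ±0.71°C at 95% confidence corresponds to a residual standard deviation of roughly 0.71/1.96 ≈ 0.36°C on this scale.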

    Rain Rate Retrieval Algorithm for Conical-Scanning Microwave Imagers Aided by Random Forest, RReliefF, and Multivariate Adaptive Regression Splines (RAMARS)

    This paper proposes a rain rate retrieval algorithm for conical-scanning microwave imagers (RAMARS), as an alternative to the NASA Goddard profiling (GPROF) algorithm, that does not rely on any a priori information. The fundamental basis of RAMARS follows the concept of the GPROF algorithm, namely consistency with the Tropical Rainfall Measuring Mission (TRMM) precipitation radar rain rate observations, but independence from any auxiliary information. RAMARS is built upon a combination of state-of-the-art machine learning and regression techniques, comprising the random forest algorithm, RReliefF, and multivariate adaptive regression splines. RAMARS is applicable over ocean, land, and coastal surface terrains. It has been demonstrated that, when compared with TRMM Precipitation Radar observations, the performance of the RAMARS algorithm is comparable with that of the 2A12 GPROF algorithm. Furthermore, RAMARS has been applied to two cyclonic cases, hurricane Sandy in 2012 and cyclone Mahasen in 2013, showing a very good capability to reproduce the structure and intensity of the cyclone fields. RAMARS is highly flexible because of its four processing components, making it extremely suitable for use with other passive microwave imagers in the Global Precipitation Measurement (GPM) constellation.
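
    As a minimal sketch of the multi-stage idea (feature screening followed by nonlinear regression) in scikit-learn: random-forest importances stand in for the RReliefF screening, and a second random forest stands in for the final MARS regressor, since neither RReliefF nor MARS ships with scikit-learn. The file names and channel count are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical inputs: imager brightness temperatures (X) matched pixel-wise
# to TRMM Precipitation Radar rain rates (y).
X, y = np.load("tb_channels.npy"), np.load("pr_rain_rate.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Stage 1: rank channels by importance (a proxy for the RReliefF screening).
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
keep = np.argsort(rf.feature_importances_)[::-1][:5]       # top-5 channels

# Stage 2: regress rain rate on the screened channels (MARS would go here).
rf2 = RandomForestRegressor(n_estimators=200, random_state=0)
rf2.fit(X_tr[:, keep], y_tr)
print("R^2 on held-out pixels:", rf2.score(X_te[:, keep], y_te))
```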

    Defection Detection: Measuring and Understanding the Predictive Accuracy of Customer Churn Models

    The authors express their gratitude to Sanyin Siang (Managing Director, Teradata Center for Customer Relationship Management at the Fuqua School of Business, Duke University); research assistants Sarwat Husain, Michael Kurima, and Emilio del Rio; and an anonymous wireless telephone carrier that provided the data for this study. The authors also thank participants in the Tuck School of Business, Dartmouth College, Marketing Workshop for comments and the two anonymous JMR reviewers for their constructive suggestions. Finally, the authors express their appreciation to former editor Dick Wittink (posthumously) for his invaluable insights and guidance. This article provides a descriptive analysis of how methodological factors contribute to the accuracy of customer churn predictive models. The study is based on a tournament in which both academics and practitioners downloaded data from a publicly available Web site, estimated a model, and made predictions on two validation databases. The results suggest several important findings. First, methods do matter. The differences observed in predictive accuracy across submissions could change the profitability of a churn management campaign by hundreds of thousands of dollars. Second, models have staying power. They suffer very little decrease in performance if they are used to predict churn for a database compiled three months after the calibration data. Third, researchers use a variety of modeling "approaches," characterized by variables such as estimation technique, variable selection procedure, number of variables included, and time allocated to steps in the model-building process. The authors find important differences in performance among these approaches and discuss implications for both researchers and practitioners.
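
    Two accuracy measures widely used to benchmark churn models, top-decile lift and the Gini coefficient, can be computed as in the sketch below; the simulated labels and scores are purely illustrative and are not the tournament data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def top_decile_lift(y_true, y_score):
    """Churn rate among the 10% highest-scored customers divided by the
    overall churn rate; 1.0 means the model is no better than random."""
    order = np.argsort(y_score)[::-1]
    top = y_true[order[: max(1, len(y_true) // 10)]]
    return top.mean() / y_true.mean()

def gini(y_true, y_score):
    """Gini coefficient, equivalent to 2*AUC - 1."""
    return 2 * roc_auc_score(y_true, y_score) - 1

# Toy usage with simulated churn labels (~2% base rate) and noisy scores.
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.02, 10_000)
score = 0.6 * y + rng.normal(0.0, 0.3, y.size)
print(top_decile_lift(y, score), gini(y, score))
```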

    A novel framework for predicting patients at risk of readmission

    Uncertainty in decision-making about patients' risk of readmission arises from non-uniform data and a lack of knowledge of health system variables. Knowledge of the impact of risk factors will support better clinical decision-making and help reduce the number of patients admitted to hospital. Traditional approaches cannot account for the uncertain nature of the risk of hospital readmission, and further problems arise from the large amount of uncertain information. Patients can be at high, medium, or low risk of readmission, and these strata have ill-defined boundaries. We believe that our model, which adapts a fuzzy regression method, offers a novel approach to handling uncertain data and uncertain relationships between health system variables and the risk of readmission. Because the risk bands have ill-defined boundaries, this approach allows clinicians to target individuals near those boundaries; providing such individuals with proper care may make it possible to move patients from the high-risk to the low-risk band. In developing this algorithm, we aimed to help potential users assess patients at various risk score thresholds and avoid readmission of high-risk patients through proper interventions. A model for predicting patients at high risk of readmission will enable interventions to be targeted before costs have been incurred and health status has deteriorated. A risk score cut-off level would flag patients and could result in net savings even where intervention costs per patient are high. Preventing hospital readmissions is important for patients, and our algorithm may also have an impact on hospital income.
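
    To make the idea of ill-defined boundaries concrete, the sketch below assigns each patient graded membership in overlapping low/medium/high bands using triangular membership functions; the breakpoints are invented for illustration and are not calibrated values from the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def risk_memberships(score):
    """Overlapping bands: a patient near a boundary belongs partially to two
    bands instead of being forced into a single stratum."""
    return {
        "low":    tri(score, -0.20, 0.00, 0.45),
        "medium": tri(score,  0.25, 0.50, 0.75),
        "high":   tri(score,  0.55, 1.00, 1.20),
    }

# A score of 0.30 sits on the low/medium boundary and gets graded membership
# in both bands, exactly the population the authors suggest targeting.
print(risk_memberships(np.array([0.30, 0.60, 0.90])))
```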

    Microprocessor based signal processing techniques for system identification and adaptive control of DC-DC converters

    PhD Thesis
    Many industrial and consumer devices rely on switch mode power converters (SMPCs) to provide a reliable, well regulated DC power supply. A poorly performing power supply can potentially compromise the characteristic behaviour, efficiency, and operating range of the device. To ensure accurate regulation of the SMPC, optimal control of the power converter output is required. However, SMPC uncertainties such as component variations and load changes will affect the performance of the controller. To compensate for these time-varying problems, there is increasing interest in employing real-time adaptive control techniques in SMPC applications. It is important to note that many adaptive controllers constantly tune and adjust their parameters based upon on-line system identification. In the area of system identification and adaptive control, the Recursive Least Squares (RLS) method provides promising results in terms of fast convergence rate, small prediction error, accurate parametric estimation, and a simple adaptive structure. Despite being popular, RLS methods often have limited application in low cost systems, such as SMPCs, because their computationally heavy calculations demand significant hardware resources which, in turn, may require a high specification microprocessor to implement successfully. For this reason, this thesis presents research into lower complexity adaptive signal processing and filtering techniques for on-line system identification and control of SMPC systems. The thesis presents the novel application of a Dichotomous Coordinate Descent (DCD) algorithm to the system identification of a dc-dc buck converter. Two unique applications of the DCD algorithm are proposed: system identification and self-compensation of a dc-dc SMPC. Firstly, specific attention is given to the parameter estimation of the dc-dc buck SMPC. The proposed method is computationally efficient and uses an infinite impulse response (IIR) adaptive filter as a plant model. Importantly, it is able to identify the parameters quickly and accurately, thus offering an efficient hardware solution well suited to real-time applications. Secondly, a new alternative adaptive scheme that does not depend entirely on estimating the plant parameters is embedded with the DCD algorithm. The proposed technique is based on a simple adaptive filter method and uses a one-tap finite impulse response (FIR) prediction error filter (PEF). Experimental and simulation results clearly show that the DCD technique can be optimised to achieve comparable performance to classic RLS algorithms while being computationally superior, making it an ideal candidate for low cost microprocessor based applications.
    Iraq Ministry of Higher Education
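
    For context on the comparison, here is a sketch of classic RLS identification of a second-order ARX (equation-error IIR) plant model of the kind a buck converter presents; the thesis's DCD solver replaces the O(n^2) covariance update below with cheap shift-and-add coordinate iterations. The toy plant coefficients are illustrative only.

```python
import numpy as np

def rls_identify(u, y, n=2, lam=0.99, delta=100.0):
    """RLS estimate of y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2].
    Returns theta = [a1, a2, b1, b2]."""
    theta = np.zeros(2 * n)
    P = delta * np.eye(2 * n)                     # inverse correlation matrix
    for k in range(n, len(y)):
        phi = np.concatenate([-y[k - n:k][::-1], u[k - n:k][::-1]])
        e = y[k] - phi @ theta                    # a priori prediction error
        g = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + g * e
        P = (P - np.outer(g, phi @ P)) / lam      # O(n^2) covariance update
    return theta

# Toy second-order plant standing in for a buck converter's output response.
rng = np.random.default_rng(1)
u = rng.uniform(0, 1, 2000)                       # duty-cycle excitation
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.1 * u[k-1] + 0.05 * u[k-2]
print(rls_identify(u, y))                         # ~ [-1.5, 0.7, 0.1, 0.05]
```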

    Geo-spatial Technology for Landslide Hazard Zonation and Prediction

    Similar to other geohazards, landslides cannot be avoided in mountainous terrain. They are the most common natural hazard in mountain regions and cause enormous damage to both property and life every year. A better understanding of the hazard will help people to live in harmony with nature. Since India has 15% of its land area prone to landslides, the preparation of landslide susceptibility zonation (LSZ) maps for these areas is of utmost importance. These susceptibility zonation maps delineate the areas that are prone to landslides and the safe areas, which in turn helps administrators with safer planning and future development activities. There are various methods for the preparation of LSZ maps, based on fuzzy logic, artificial neural networks, discriminant analysis, direct mapping, regression analysis, the neuro-fuzzy approach, and other techniques. These different approaches apply different rating systems and weights, which are area- and factor-dependent. Therefore, these weights and ratings play a vital role in the preparation of susceptibility maps using any of the approaches. However, a technique that gives very high accuracy in one area might not be applicable to other parts of the world owing to changes in the various factors, weights, and ratings. Hence, no single method can be recommended for application in any other terrain, and an understanding of these approaches, factors, and weights needs to be developed so that their execution in a Geographic Information System (GIS) environment gives better results and yields realistic ground scenarios for landslide susceptibility mapping. The available and applicable approaches are therefore discussed in this chapter, along with a detailed account of the literature on LSZ mapping and a case study of the Garhwal area in which a Support Vector Machine (SVM) technique is used to prepare the LSZ. These LSZ maps will also be an important input for landslide risk assessment.
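
    A minimal sketch of the SVM step under assumed inputs: per-pixel causative factors rasterised in GIS, with a mapped landslide inventory as labels. The file names, factor set, and class thresholds are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-pixel factors (slope, aspect, lithology, drainage density,
# land cover, rainfall, ...) and a binary landslide inventory.
X = np.load("terrain_factors.npy")      # shape (n_pixels, n_factors)
y = np.load("landslide_inventory.npy")  # 1 = landslide pixel, 0 = stable

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)

# Susceptibility as P(landslide); fixed thresholds split it into LSZ classes
# (very low, low, moderate, high, very high).
p = clf.predict_proba(X)[:, 1]
lsz = np.digitize(p, [0.2, 0.4, 0.6, 0.8])
```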