819 research outputs found

    A Novel Two-Stage Spectrum-Based Approach for Dimensionality Reduction: A Case Study on the Recognition of Handwritten Numerals

    Dimensionality reduction (feature selection) is an important step in pattern recognition systems. Although there are several conventional approaches to feature selection, such as Principal Component Analysis, Random Projection, and Linear Discriminant Analysis, selecting optimal, effective, and robust features is usually a difficult task. In this paper, a new two-stage approach to dimensionality reduction is proposed. The method is based on one-dimensional and two-dimensional spectrum diagrams of the standard deviation and minimum-to-maximum distributions of the initial feature vector elements. The proposed algorithm is validated in an OCR application using two large standard benchmark handwritten-digit datasets, MNIST and Hoda. Initially, a 133-element feature vector was assembled from the most commonly used features proposed in the literature. The initial feature vector was then reduced from 100% to 59.40% of its size (79 elements) for the MNIST dataset and to 43.61% (58 elements) for the Hoda dataset, while the accuracy of the OCR system improved by 2.95% on MNIST and 4.71% on Hoda. The results show an improvement in system precision compared with the rival approaches, Principal Component Analysis and Random Projection. The proposed technique can also be used to generate decision rules in a pattern recognition system with rule-based classifiers.
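    The paper's exact two-stage spectrum-diagram algorithm is not reproduced here, but the general idea of ranking features by the spread of their values can be sketched as follows; the thresholds, function name, and synthetic data are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def select_features_by_spread(X, std_thresh=0.05, range_thresh=0.10):
    """Illustrative spread-based feature selection (not the paper's exact
    two-stage spectrum algorithm): keep features whose standard deviation
    and min-to-max range across samples exceed simple thresholds."""
    stds = X.std(axis=0)
    ranges = X.max(axis=0) - X.min(axis=0)
    return np.where((stds > std_thresh) & (ranges > range_thresh))[0]

# Toy usage with a random stand-in for a 133-element feature matrix.
rng = np.random.default_rng(0)
X = rng.random((1000, 133))
X[:, :20] *= 0.01          # make some features nearly constant
selected = select_features_by_spread(X)
print(f"kept {selected.size} of {X.shape[1]} features")
```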

    Comparison of Estimated Glomerular Filtration Rate Using Five Equations to Predict Acute Kidney Injury Following Total Joint Arthroplasty

    Introduction: Primary total joint arthroplasty (TJA) is one of the most common procedures in the United States, and as the incidence of this surgery increases, identifying methods for improving outcomes and reducing complications is essential. Acute kidney injury (AKI) following TJA is a potential source of morbidity and mortality. Estimated glomerular filtration rate (eGFR) is used as an indicator of renal function, and several equations are commonly used to calculate it. The purpose of this study was 1) to evaluate the agreement between five equations for calculating eGFR, and 2) to determine which equation best predicts AKI in patients undergoing TJA. Methods: 497,261 cases of TJA were queried from the National Surgical Quality Improvement Program (NSQIP) from 2012 to 2019. Preoperative eGFR was calculated using the Cockcroft-Gault, Modification of Diet in Renal Disease (MDRD) II, re-expressed MDRD II, Chronic Kidney Disease Epidemiology Collaboration, and Mayo quadratic (Mayo) equations. The primary outcome measure was AKI. Cases were stratified into two cohorts based on the development of postoperative AKI, and these cohorts were compared on demographic and preoperative factors. Multivariate regression analysis was used to evaluate independent associations between preoperative eGFR and postoperative renal outcomes. Results: Seven hundred seventy-seven (0.16%) patients experienced AKI after TJA. The Cockcroft-Gault equation yielded the highest mean eGFR (98.6 ± 32.7), while the re-expressed MDRD II equation yielded the lowest (75.1 ± 28.8). Multivariate regression analysis showed that a decrease in preoperative eGFR was independently associated with an increased risk of postoperative AKI for all five equations. The Akaike information criterion (AIC) was lowest for the Mayo equation (6546). Conclusions: A preoperative decrease in eGFR was independently associated with an increased risk of postoperative AKI for all five equations, and the Mayo equation was the best predictor of postoperative AKI following TJA.
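    For reference, the Cockcroft-Gault estimate named above is commonly written as CrCl = (140 − age) × weight / (72 × serum creatinine), multiplied by 0.85 for women. The sketch below implements that textbook form (units: years, kg, mg/dL); it is illustrative only and does not reflect the study's data handling.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Textbook Cockcroft-Gault creatinine clearance estimate (mL/min).
    Illustrative sketch; the study's preprocessing may differ."""
    crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: 70-year-old, 80 kg male with serum creatinine 1.1 mg/dL.
print(round(cockcroft_gault(70, 80, 1.1, female=False), 1))  # ≈ 70.7 mL/min
```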

    Development Planning and Dependence

    SUMMARY Much development planning theory and practice has been based on neo-classical approaches to development. Recent work on 'dependence' has been critical of these views and has suggested alternatives emphasizing objectives such as disengagement from the world capitalist system, provision for the basic needs of the population, and a radical transformation of the distribution of income and wealth. Adopting this model implies significant changes in the functions and definition of planning. Development is seen as the process of national integration, and the task of planning is to ensure that this process is irreversible.

    Nonbinary Associative Memory With Exponential Pattern Retrieval Capacity and Iterative Learning

    We consider the problem of neural association for a network of nonbinary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall the previously memorized patterns from their noisy versions. Prior work in this area considers storing a finite number of purely random patterns and has shown that the pattern retrieval capacity (the maximum number of patterns that can be memorized) scales only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting the redundancy and internal structure of the patterns to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e., comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is exponential in the number of neurons. The second result extends this finding to cases where the patterns have weak minor components, i.e., the smallest eigenvalues of the correlation matrix tend toward zero. We use these minor components (or the basis vectors of the pattern null space) to increase both the pattern retrieval capacity and the error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple algorithms are presented for the recall phase. Using analytical methods and simulations, we show that the proposed methods can tolerate a fair amount of error in the input while memorizing an exponentially large number of patterns.
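    A minimal numerical illustration of the subspace argument (with arbitrary small parameters, not the paper's construction): if the valid patterns are the integer-valued vectors lying in the null space of a constraint matrix, their count grows exponentially with the pattern length, in contrast to the linear capacity for unstructured random patterns.

```python
import itertools
import numpy as np

# Illustrative parameters (assumptions, not the paper's): patterns are
# length-n vectors over {0, ..., Q-1} constrained to the null space of H mod Q.
Q, n, m = 3, 6, 2
rng = np.random.default_rng(1)
H = rng.integers(0, Q, size=(m, n))

valid = [x for x in itertools.product(range(Q), repeat=n)
         if not np.any(np.asarray(x) @ H.T % Q)]

# With an m-row constraint of full rank, roughly Q**(n - m) patterns satisfy it:
# exponential in n, versus O(n) capacity when storing unstructured patterns.
print(len(valid), "structured patterns out of", Q**n, "possible")
```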

    Exponential Pattern Retrieval Capacity with Non-Binary Associative Memory

    We consider the problem of neural association for a network of non-binary neurons. Here, the task is to recall a previously memorized pattern from its noisy version using a network of neurons whose states assume values from a finite number of non-negative integer levels. Prior work in this area considers storing a finite number of purely random patterns and has shown that the pattern retrieval capacity (the maximum number of patterns that can be memorized) scales only linearly with the number of neurons in the network.

    Molecular associative memory: An associative memory framework with exponential storage capacity for DNA computing

    The associative memory problem is to find the stored vector closest (in Hamming distance) to a given query vector. An associative memory can be implemented in different ways, including with neural networks and with DNA strands. In a neural network, connection weights are adjusted to perform the association; the recall procedure is iterative and relies on simple neural operations, and the design criterion is to maximize the number of stored patterns C while retaining some noise tolerance. A molecular implementation is based on synthesizing C DNA strands as the stored vectors; recall is usually done in one shot via chemical reactions and relies on the high parallelism of DNA computing, and the design criterion is to find proper DNA sequences that minimize the probability of error during the recall phase. Current molecular associative memories are either low in storage capacity, if implemented using molecular realizations of neural networks, or very complex to implement, if all the stored sequences have to be synthesized. We introduce an associative memory framework with exponential storage capacity based on transcriptional networks of DNA switches. The advantages of the proposed approach over current methods are: 1. exponential storage capacities cannot be achieved with current neural network-based approaches; 2. other methods can in principle reach exponential storage capacities, but only at great complexity, since they require synthesizing an extraordinarily large number of DNA strands.
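    The recall problem stated above has a direct brute-force formulation, shown below purely as a reference point for what the neural and molecular implementations compute; the data and names are illustrative.

```python
import numpy as np

def recall(stored, query):
    """Return the stored binary pattern closest to `query` in Hamming distance.
    Brute-force reference for the associative memory problem; neural or
    DNA-based implementations avoid this exhaustive comparison."""
    distances = np.count_nonzero(stored != query, axis=1)
    return stored[np.argmin(distances)]

stored = np.array([[0, 1, 1, 0, 1],
                   [1, 1, 0, 0, 0],
                   [0, 0, 1, 1, 1]])
noisy_query = np.array([1, 1, 1, 0, 1])   # pattern 0 with its first bit flipped
print(recall(stored, noisy_query))        # -> [0 1 1 0 1]
```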

    Comparing five equations to calculate estimated glomerular filtration rate to predict acute kidney injury following total joint arthroplasty.

    BACKGROUND: Acute kidney injury (AKI) following total joint arthroplasty (TJA) is associated with increased morbidity and mortality. Estimated glomerular filtration rate (eGFR) is used as an indicator of renal function. The purpose of this study was (1) to assess each of the five equations that are used to calculate eGFR, and (2) to evaluate which equation may best predict AKI in patients following TJA. METHODS: The National Surgical Quality Improvement Program (NSQIP) was queried for all 497,261 cases of TJA performed from 2012 to 2019 with complete data. The Modification of Diet in Renal Disease (MDRD) II, re-expressed MDRD II, Cockcroft-Gault, Mayo quadratic, and Chronic Kidney Disease Epidemiology Collaboration equations were used to calculate preoperative eGFR. Two cohorts were created based on the development of postoperative AKI and were compared on demographic and preoperative factors. Multivariate regression analysis was used to assess independent associations between preoperative eGFR and postoperative renal failure for each equation, and the Akaike information criterion (AIC) was used to evaluate the predictive ability of the five equations. RESULTS: Seven hundred seventy-seven (0.16%) patients experienced AKI after TJA. The Cockcroft-Gault equation yielded the highest mean eGFR (98.6 ± 32.7), while the re-expressed MDRD II equation yielded the lowest (75.1 ± 28.8). Multivariate regression analysis demonstrated that a decrease in preoperative eGFR was independently associated with an increased risk of developing postoperative AKI for all five equations. The AIC was lowest for the Mayo equation. CONCLUSIONS: A preoperative decrease in eGFR was independently associated with an increased risk of postoperative AKI for all five equations. The Mayo equation was most predictive of the development of postoperative AKI following TJA and best identified patients at the highest risk, which may help providers make perioperative management decisions for these patients.
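    The AIC comparison described in the methods can be sketched with standard logistic regression tooling; the placeholder data and column names below are assumptions, not NSQIP fields or the study's models.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder data: one eGFR column per equation plus a binary AKI outcome.
rng = np.random.default_rng(0)
n = 5000
egfr = {name: rng.normal(90, 25, n) for name in
        ["cockcroft_gault", "mdrd2", "mdrd2_reexpressed", "ckd_epi", "mayo"]}
aki = rng.integers(0, 2, n)

# Fit one logistic model per equation and compare AIC (lower indicates a better
# fit for the same outcome); this mirrors the comparison described above.
for name, values in egfr.items():
    X = sm.add_constant(values)
    result = sm.Logit(aki, X).fit(disp=False)
    print(f"{name}: AIC = {result.aic:.1f}")
```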

    Comparison of Estimated Glomerular Filtration Rate Using Five Equations to Predict Acute Kidney Injury Following Hip Fracture Surgery

    Introduction: Hip fractures are a common injury and a source of disability and mortality in the aging population. Acute kidney injury (AKI) is a common and potentially serious complication following hip fracture surgery. Estimated glomerular filtration rate (eGFR) is used as an indicator of renal function, and several equations are commonly used to calculate it. The purpose of this study was 1) to evaluate the agreement between five equations for calculating eGFR, and 2) to determine which equation best predicts AKI in patients undergoing hip fracture surgery. Methods: 146,702 cases of surgical stabilization of hip fracture were queried from the National Surgical Quality Improvement Program (NSQIP) from 2012 to 2019. Preoperative eGFR was calculated using the Cockcroft-Gault, Modification of Diet in Renal Disease (MDRD) II, re-expressed MDRD II, Chronic Kidney Disease Epidemiology Collaboration, and Mayo quadratic (Mayo) equations. The primary outcome measure was AKI. Cases were stratified into two cohorts based on the development of postoperative AKI, and these cohorts were compared on demographic and preoperative factors. Multivariate regression analysis was used to evaluate independent associations between preoperative eGFR and postoperative renal outcomes. Results: Six hundred ninety-nine (0.73%) patients developed AKI after hip fracture surgery. The Mayo equation yielded the highest mean eGFR (83.8 ± 23.6), while the re-expressed MDRD II equation yielded the lowest (68.3 ± 35.6). Multivariate regression analysis showed that a decrease in preoperative eGFR was independently associated with an increased risk of postoperative AKI for all five equations. The Akaike information criterion (AIC) was lowest for the Mayo equation (5116). Conclusions: A preoperative decrease in eGFR was independently associated with an increased risk of postoperative AKI for all five equations, and the Mayo equation was the best predictor of postoperative AKI following hip fracture surgery.
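    Another of the equations named above, the 2009 CKD-EPI creatinine equation, is commonly published in the form sketched below; the coefficients are the widely cited 2009 values (an assumption worth verifying against the original publication), and the sketch does not reflect the study's implementation.

```python
def ckd_epi_2009(serum_creatinine_mg_dl, age_years, female, black):
    """2009 CKD-EPI creatinine eGFR (mL/min/1.73 m^2), as commonly published.
    Illustrative sketch only."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = serum_creatinine_mg_dl / kappa
    egfr = (141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: 75-year-old non-Black female with serum creatinine 1.0 mg/dL.
print(round(ckd_epi_2009(1.0, 75, female=True, black=False), 1))
```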

    PrivGenDB: Efficient and privacy-preserving query executions over encrypted SNP-Phenotype database

    Searchable symmetric encryption (SSE) has been used to protect the confidentiality of genomic data while providing substring search and range queries over genomic sequences, but it has not been studied for protecting single nucleotide polymorphism (SNP)-phenotype data. In this article, we propose a novel model, PrivGenDB, for securely storing and efficiently querying genomic data outsourced to an honest-but-curious cloud server. To instantiate PrivGenDB, we use SSE to ensure confidentiality while conducting different types of queries on encrypted genomic data, phenotypes, and other information about individuals, to support analysts and clinicians in their analysis and care. To the best of our knowledge, PrivGenDB is the first SSE-based construction that ensures the confidentiality of shared SNP-phenotype data through encryption while keeping the computation/query process efficient and scalable for biomedical research and care. Furthermore, it supports a variety of query types on genomic data, including count queries, Boolean queries, and k'-out-of-k match queries. Finally, the PrivGenDB model handles datasets containing both genotypes and phenotypes, and it also supports storing and managing other metadata, such as gender and ethnicity, privately. Computational evaluations on a dataset with 5,000 records and 1,000 SNPs demonstrate that a count/Boolean query and a k'-out-of-k match query over 40 SNPs take approximately 4.3 s and 86.4 µs, respectively, outperforming existing schemes.
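    PrivGenDB's actual construction is not reproduced here, but the flavor of an SSE-backed count query, a keyed token per (SNP, genotype) pair mapped to encrypted record identifiers, can be sketched as follows. The token scheme, record format, and libraries are illustrative assumptions, not the paper's protocol.

```python
import hashlib
import hmac
import os
from collections import defaultdict
from cryptography.fernet import Fernet

TOKEN_KEY = os.urandom(32)                      # client-side PRF key (illustrative)
record_cipher = Fernet(Fernet.generate_key())   # client-side record encryption key

def token(snp_id, genotype):
    """Deterministic keyed token for a (SNP, genotype) pair."""
    msg = f"{snp_id}|{genotype}".encode()
    return hmac.new(TOKEN_KEY, msg, hashlib.sha256).hexdigest()

# The client builds an encrypted inverted index and outsources it to the server.
index = defaultdict(list)
records = {"r1": {"rs123": "AA", "rs456": "AG"},
           "r2": {"rs123": "AG", "rs456": "AG"}}
for rid, snps in records.items():
    for snp_id, genotype in snps.items():
        index[token(snp_id, genotype)].append(record_cipher.encrypt(rid.encode()))

# Count query: the server only sees the opaque token, never the SNP or genotype.
def count_query(snp_id, genotype):
    return len(index[token(snp_id, genotype)])

print(count_query("rs456", "AG"))   # -> 2
```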