
    Towards On-line Domain-Independent Big Data Learning: Novel Theories and Applications

    Feature extraction is an extremely important pre-processing step in pattern recognition and machine learning problems. This thesis examines how best to extract features from data in a fully online and adaptive manner. The solution is given for both labeled and unlabeled datasets by presenting a number of novel online learning approaches. Specifically, the differential equation method for solving the generalized eigenvalue problem is used to derive a number of novel machine learning and feature extraction algorithms. The incremental eigen-solution method is used to derive a novel incremental extension of linear discriminant analysis (LDA). This incremental version is then combined with the extreme learning machine (ELM), in which the ELM is used as a preprocessor before learning. In this first key contribution, the dynamic random expansion characteristic of the ELM is combined with the proposed incremental LDA technique and shown to offer a significant improvement in maximizing the discrimination between points in two different classes, while minimizing the distance within each class, in comparison with other standard state-of-the-art incremental and batch techniques. In the second contribution, the differential equation method for solving the generalized eigenvalue problem is used to derive a novel, purely incremental version of the slow feature analysis (SFA) algorithm, termed the generalized eigenvalue based slow feature analysis (GENEIGSFA) technique. Time-series expansions of echo state networks (ESN) and radial basis functions (RBF) are used as pre-processors before learning, and higher-order derivatives are used as a smoothing constraint on the output signal. Finally, an online extension of the generalized eigenvalue problem, derived from James Stone's criterion, is tested, evaluated and compared with the standard batch version of slow feature analysis to demonstrate its comparative effectiveness. In the third contribution, lightweight extensions of the statistical technique known as canonical correlation analysis (CCA), for both twinned and multiple data streams, are derived using the same method of solving the generalized eigenvalue problem. The proposed method is further enhanced by maximizing the covariance between data streams while simultaneously maximizing the rate of change of variances within each data stream. A recurrent set of connections, as used by the ESN, serves as a pre-processor between the inputs and the canonical projections in order to capture shared temporal information in two or more data streams. A solution to the problem of identifying a low-dimensional manifold in a high-dimensional data space is then presented in an incremental and adaptive manner. Finally, an online, locally optimized extension of Laplacian Eigenmaps is derived, termed the generalized incremental Laplacian Eigenmaps technique (GENILE). Besides the benefit of its incremental nature, the projections produced by this method are shown, in most cases, to yield better classification accuracy than the standard batch versions of these techniques on both artificial and real datasets.
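    As a minimal illustration of the common machinery behind these contributions, the sketch below solves the batch generalized eigenvalue problem underlying LDA with SciPy. It shows only the standard batch formulation that the thesis's incremental, differential-equation-based solvers extend; the function and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def lda_directions(X, y, n_components=2):
    """Batch LDA as the generalized eigenvalue problem S_b w = lam * S_w w.

    Only the standard batch formulation the incremental solvers extend;
    names are illustrative.
    """
    d = X.shape[1]
    overall_mean = X.mean(axis=0)
    S_w = np.zeros((d, d))   # within-class scatter
    S_b = np.zeros((d, d))   # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_w += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        S_b += len(Xc) * (diff @ diff.T)
    # eigh solves the symmetric-definite pair; a small ridge keeps S_w
    # positive definite so the problem is well posed.
    vals, vecs = eigh(S_b, S_w + 1e-6 * np.eye(d))
    order = np.argsort(vals)[::-1]          # largest discrimination first
    return vecs[:, order[:n_components]]
```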

    Canonical Correlation Analysis of Video Volume Tensors for Action Categorization and Detection

    This paper addresses a spatiotemporal pattern recognition problem. The main purpose of this study is to find the right representation and matching of action video volumes for categorization. A novel method is proposed to measure video-to-video volume similarity by extending Canonical Correlation Analysis (CCA), a principled tool to inspect linear relations between two sets of vectors, to two multiway data arrays (or tensors). The proposed method analyzes video volumes as inputs, avoiding the difficult problem of explicit motion estimation required in traditional methods, and provides a way of spatiotemporal pattern matching that is robust to intraclass variations of actions. The proposed matching is demonstrated for action classification by a simple Nearest Neighbor classifier. We moreover propose an automatic action detection method, which performs a 3D window search over an input video with action exemplars. The search is sped up by dynamic learning of subspaces in the proposed CCA. Experiments on a public action data set (KTH) and a self-recorded hand gesture data set showed that the proposed method is significantly more accurate than various state-of-the-art methods. Our method has low time complexity and does not require any major tuning parameters. Index Terms: action categorization, gesture recognition, canonical correlation analysis, tensor, action detection, incremental subspace learning, spatiotemporal pattern classification.
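    For readers unfamiliar with the two-set vector form of CCA that the paper extends to tensors, a minimal sketch follows: canonical correlations computed via centering, whitening with a thin SVD, and a final SVD. The paper's tensor extension and dynamic subspace learning are not reproduced; names are illustrative.

```python
import numpy as np
from scipy.linalg import svd

def canonical_correlations(X, Y):
    """Classical CCA between two vector sets (rows are paired samples).

    Center each view, whiten it with a thin SVD, then the singular
    values of the product of the whitened bases are the canonical
    correlations (cosines of the principal angles between the views).
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Ux = svd(Xc, full_matrices=False)[0]     # orthonormal basis of view 1
    Uy = svd(Yc, full_matrices=False)[0]     # orthonormal basis of view 2
    corr = svd(Ux.T @ Uy, compute_uv=False)  # canonical correlations
    return np.clip(corr, 0.0, 1.0)
```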

    Building Deep Networks on Grassmann Manifolds

    Learning representations on Grassmann manifolds is popular in quite a few visual recognition tasks. In order to enable deep learning on Grassmann manifolds, this paper proposes a deep network architecture by generalizing the Euclidean network paradigm to Grassmann manifolds. In particular, we design full rank mapping layers to transform input Grassmannian data into more desirable forms, exploit re-orthonormalization layers to normalize the resulting matrices, study projection pooling layers to reduce the model complexity in the Grassmannian context, and devise projection mapping layers to respect Grassmannian geometry while achieving Euclidean forms for regular output layers. To train the Grassmann networks, we exploit a stochastic gradient descent setting on manifolds of the connection weights, and study a matrix generalization of backpropagation to update the structured data. The evaluations on three visual recognition tasks show that our Grassmann networks have clear advantages over existing Grassmann learning methods, and achieve results comparable with state-of-the-art approaches. Comment: AAAI'18 paper.
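    A minimal NumPy sketch of two of the layer types named in the abstract, under stated assumptions: re-orthonormalization via thin QR, and projection mapping as one common choice of Euclidean-friendly subspace representation. The paper's manifold-aware optimization and matrix backpropagation are not reproduced here.

```python
import numpy as np

def reorth_layer(X):
    """Re-orthonormalization layer: return an orthonormal basis for the
    column space of X via thin QR, mapping a previous layer's output
    back onto the Grassmann manifold."""
    Q, R = np.linalg.qr(X)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0   # avoid zeroing columns on a zero pivot
    return Q * signs          # flip columns so the factorization is unique

def projmap_layer(Y):
    """Projection mapping layer (one common choice): represent the
    subspace spanned by orthonormal basis Y as the projection matrix
    Y Y^T, a Euclidean form suitable for regular output layers."""
    return Y @ Y.T
```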

    Multi-Order Statistical Descriptors for Real-Time Face Recognition and Object Classification

    We propose novel multi-order statistical descriptors which can be used for high-speed object classification or face recognition from videos or image sets. We represent each gallery set with a global second-order statistic which captures correlated global variations in all feature directions as well as the common set structure. A lightweight descriptor is then constructed by efficiently compacting the second-order statistic using Cholesky decomposition. We then enrich the descriptor with the first-order statistic of the gallery set to further enhance the representation power. By projecting the descriptor into a low-dimensional discriminant subspace, we obtain further dimensionality reduction, while the discrimination power of the proposed representation is still preserved. Our method therefore represents a complex image set by a single descriptor of significantly reduced dimensionality. We apply the proposed algorithm to image set and video-based face and periocular biometric identification, object category recognition, and hand gesture recognition. Experiments on six benchmark data sets validate that the proposed method achieves significantly better classification accuracy with lower computational complexity than existing techniques. The proposed compact representations can be used for real-time object classification and face recognition in videos. © 2013 IEEE. This work was supported by NPRP through the Qatar National Research Fund (a member of Qatar Foundation) under Grant 7-1711-1-312.
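    A hedged sketch of the descriptor construction as described: a regularized second-order statistic compacted with a Cholesky factor, then enriched with the first-order statistic. The regularizer and omission of the subsequent discriminant projection are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def compact_set_descriptor(X, alpha=1e-3):
    """Single descriptor for an image set X (rows = per-frame features).

    Regularized covariance (second-order statistic) -> Cholesky factor
    (compaction) -> concatenation with the mean (first-order
    enrichment). `alpha` is an assumed regularizer.
    """
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + alpha * np.eye(X.shape[1])
    L = np.linalg.cholesky(C)                  # lower-triangular factor
    second_order = L[np.tril_indices_from(L)]  # keep only the non-zero half
    return np.concatenate([second_order, mu])
```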

    Interdisciplinary approaches to job design: A constructive replication with extensions.


    The predictive ability of corporate narrative disclosures: Australian evidence

    The main objective of this study is to contribute to the academic literature by investigating the relationship between narrative disclosures and corporate performance based on Australian evidence. The research design takes as its starting point the content analysis of discretionary narrative disclosures conducted by Smith and Taffler (2000), and extends their research by combining thematic content analysis and syntactic content analysis. This study focuses on the discretionary disclosures (the Chairman's Statement) of Australian manufacturing companies. Based on the Earnings per Share (EPS) movement between 2008 and 2009, 64 sample companies are classified into two groups: good performers and poor performers. This study is grounded in signalling theory and agency theory, and links these with impression management strategy. Based on two branches of impression management (rationalisation and enhancement), six groups of variables are collected to examine narrative disclosures from both quantity ("what to disclose") and quality ("how to disclose") perspectives. Manual coding and two computer-based software programs are employed in this study. This study finds that the word-based and theme-based variables derived from discretionary disclosures are significantly correlated with corporate performance. Moreover, word-based variables can successfully classify companies as good or poor performers with an accuracy of 86%. However, there is no significant relationship between corporate performance and report size, use of long words (as a proxy for jargon), FLESCH readability score, or persuasive language. The main value of this study is to build a classification model based on Australian evidence for continuing companies, since most prior research focuses on UK, US and New Zealand companies and is based on a healthy/failed distinction.
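    One of the quality variables tested is the FLESCH readability score, whose standard formula is Reading Ease = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words). The sketch below implements that formula with a crude vowel-group syllable counter, which is only a stand-in for the study's actual software.

```python
import re

def flesch_reading_ease(text):
    """Standard Flesch Reading Ease score; higher means easier to read.

    The syllable counter is a rough heuristic (runs of vowels per word),
    not the counting rules used by the study's software.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 206.835 - 1.015 * (n_words / sentences) \
                   - 84.6 * (syllables / n_words)
```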

    The Financial Implications and Organizational Cultural Perceptions of Implementing a Performance Management System in a Government Enterprise

    Successful organizations continually seek ways to improve productivity, reduce and control costs, and increase efficiency. Governmental entities likewise are driven by the need for increased efficiency and accountability in public service for their constituents. There is a continuing need for better tools, and a number of government entities have turned to performance management systems because of their promise of improvement in various areas of productivity and accountability. This research focused on one such system, Six Sigma, which has recently experienced widespread adoption in industry in the United States, internationally, and in some government organizations. In this study, Six Sigma was compared and contrasted with several performance management systems, and its effects and organizational cultural impacts on one organization were examined. The study investigated the financial implications and perceptions of organizational cultural change resulting from the Six Sigma system implementation in a large government enterprise. The first part of the study used the organization's published financial information from 1997 through 2006 to determine whether there was a tangible financial benefit of implementing Six Sigma. The analysis indicated that the financial implications were statistically significant and quantified them as material and relevant to the organization's two major business units. The second component of the research explored differences in organizational culture and attitudes among and between selected employee groups through the use of interviews and a survey instrument. Interviews were also conducted with a purposive sample of the executives who were involved in the decisions to implement Six Sigma. The Organizational Culture Inventory© and Organizational Effectiveness Inventory™ survey instruments were used to measure the organizational culture perceptions of the employee groups. Discriminant function analysis results suggested that the various groups shared a common organizational culture, which supports the null hypothesis that there were no differences in organizational cultural perceptions among the organizational groups investigated.
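    As an illustration of the kind of test described (not the study's actual procedure or data), a discriminant function analysis can be run as cross-validated linear discriminant classification: accuracy near chance is consistent with the groups sharing a common culture. All names below are hypothetical.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def culture_groups_separable(scale_scores, group_labels):
    """Cross-validated discriminant analysis on survey scale scores.

    `scale_scores`: (n_respondents, n_scales) array of culture-scale
    scores; `group_labels`: each respondent's employee group. A mean
    accuracy near 1/n_groups means the discriminant functions do not
    separate the groups.
    """
    acc = cross_val_score(LinearDiscriminantAnalysis(),
                          scale_scores, group_labels, cv=5)
    return acc.mean()
```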

    The impact of lean implementation on quality and efficiency of U.S. hospitals

    Thesis (D.B.A.)--Boston University. Lean has been implemented in the health care sector for over a decade to address the challenges of lowering cost and improving quality. Its impact, however, has not been conclusive, and the debate on its potential benefit has not been rigorous or systematic. This dissertation fills this gap in the literature and is composed of three papers. In the first paper, I develop a reliable and valid instrument to measure the extent of lean implementation in hospitals. I theoretically derive a more robust set of lean principles for the hospital environment (patient focus, standardized care, seamless coordination, and continuous improvement) and use them as the primary platform for analyzing the use of lean in the health care environment. The results show that hospitals currently implement lean principles at a relatively low level compared with the possible maximum; among the four principles, continuous improvement showed the highest implementation level. In the second paper, I assess the impact of lean principle implementation on quality and efficiency performance in hospitals by performing multivariate regression analysis with the lean principles as independent variables and hospital performance as the dependent variable. Multiple hospital performance indicators (adherence to evidence-based care processes, risk-adjusted mortality, patient satisfaction, and risk-adjusted cost) are used to measure the process quality, outcome quality, perceived quality, and efficiency of each hospital. The results show that the patient focus, standardized care, and continuous improvement principles are significantly associated with hospital quality, while the seamless coordination principle is not; no significant association is found between lean principles and hospital efficiency. In the last paper, I identify different lean implementation patterns in hospitals. Since lean is a multi-dimensional concept of four principles, which can be implemented individually or in combination, several implementation patterns are possible depending on the differing levels of emphasis on each principle. The results show that when lean is implemented holistically, it is effective in improving quality performance in the health care environment, as in manufacturing; no significant association is found between lean implementation patterns and hospital efficiency.
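    A minimal sketch of the second paper's regression setup, assuming hypothetical variable names: one performance indicator is regressed on the four lean principle scores with statsmodels, and the coefficient p-values indicate which principles are significantly associated with that indicator.

```python
import numpy as np
import statsmodels.api as sm

def lean_regression(principles, outcome):
    """OLS of one hospital performance indicator on the four lean
    principle scores (columns: patient focus, standardized care,
    seamless coordination, continuous improvement).

    `principles`: (n_hospitals, 4) array; `outcome`: (n_hospitals,)
    indicator such as risk-adjusted mortality.
    """
    X = sm.add_constant(np.asarray(principles, dtype=float))
    model = sm.OLS(np.asarray(outcome, dtype=float), X).fit()
    return model.params, model.pvalues  # sign and significance per principle
```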