An Efficient Hidden Markov Model for Offline Handwritten Numeral Recognition
Traditionally, the performance of OCR algorithms and systems is measured on the
recognition of isolated characters. When a system classifies an individual
character, its output is typically a character label or a reject marker that
corresponds to an unrecognized character. By comparing the output labels with the
correct labels, the numbers of correct recognitions, substitution errors
(misrecognized characters), and rejects (unrecognized characters) are determined.
Nowadays, although recognition of printed isolated characters is performed with
high accuracy, recognition of handwritten characters remains an open
research problem. The ability to identify machine-printed
characters in an automated or semi-automated manner has obvious applications
in numerous fields. Since creating an algorithm with a one-hundred-percent
correct recognition rate is quite probably impossible in our world of noise and
varied font styles, it is important to design character recognition
algorithms with these failures in mind, so that when mistakes are inevitably
made, they will at least be understandable and predictable to the person
working with the system.

Comment: 6 pages, 5 figures
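The counting procedure described above can be sketched in a few lines. This is an illustrative example, not code from the paper; the function name `score` and the use of `None` as the reject marker are assumptions for the sketch.

```python
# Hypothetical evaluation sketch: compare a classifier's output labels
# (None stands in for the reject marker) against the ground-truth labels
# and count correct recognitions, substitution errors, and rejects.

def score(outputs, truths, reject=None):
    correct = substitutions = rejects = 0
    for out, truth in zip(outputs, truths):
        if out == reject:
            rejects += 1          # unrecognized character
        elif out == truth:
            correct += 1          # correct recognition
        else:
            substitutions += 1    # misrecognized character
    return correct, substitutions, rejects

print(score(["3", "7", None, "1"], ["3", "9", "4", "1"]))  # → (2, 1, 1)
```

Recognition, substitution, and reject rates then follow by dividing each count by the total number of characters.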
Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T)
Deep neural networks (DNNs) have shown remarkable success in a variety of
machine learning applications. The capacity of these models (i.e., their number
of parameters) endows them with expressive power and allows them to reach the
desired performance. In recent years, there has been increasing interest in
deploying DNNs on resource-constrained devices (e.g., mobile devices) with
limited energy, memory, and computational budgets. To address this problem, we
propose Entropy-Constrained Trained Ternarization (EC2T), a general framework
for creating sparse and ternary neural networks that are efficient in terms of
storage (e.g., at most two binary masks and two full-precision values are
required to store a weight matrix) and computation (e.g., MAC operations are
reduced to a few accumulations plus two multiplications). This approach
consists of two steps. First, a super-network is created by scaling the
dimensions of a pre-trained model (i.e., its width and depth). Subsequently,
this super-network is simultaneously pruned (using an entropy constraint) and
quantized (that is, ternary values are assigned layer-wise) in a training
process, resulting in a sparse and ternary network representation. We validate
the proposed approach on the CIFAR-10, CIFAR-100, and ImageNet datasets, showing
its effectiveness in image classification tasks.

Comment: Proceedings of the CVPR'20 Joint Workshop on Efficient Deep Learning
in Computer Vision. Code is available at
https://github.com/d-becking/efficientCNN
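The storage and computation claims above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (their code is at the repository linked above); the scale values `w_p`, `w_n` and the mask construction are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch: a ternary weight matrix takes values in
# {-w_n, 0, +w_p}, so it can be stored as two binary masks plus two
# full-precision scalars, and a matrix-vector product reduces to
# accumulations under each mask followed by two multiplications.

rng = np.random.default_rng(0)
w_p, w_n = 0.42, 0.37                       # full-precision scale values (assumed)
mask_p = rng.integers(0, 2, (4, 8)).astype(bool)
mask_n = rng.integers(0, 2, (4, 8)).astype(bool) & ~mask_p  # disjoint supports

W = w_p * mask_p - w_n * mask_n             # equivalent dense ternary matrix
x = rng.standard_normal(8)

# Accumulate inputs under each mask, then apply the two scale
# multiplications once per output vector.
y = w_p * (mask_p @ x) - w_n * (mask_n @ x)

assert np.allclose(y, W @ x)                # matches the dense product
```

Because the masks are binary, `mask_p @ x` involves only additions; the two multiplications by `w_p` and `w_n` are all that remain of the original MACs.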
Is the physical vacuum a preferred frame?
It is generally assumed that the physical vacuum of particle physics should
be characterized by an energy-momentum tensor in such a way as to preserve exact
Lorentz invariance. On the other hand, if the ground state were characterized
by its energy-momentum vector, with zero spatial momentum and a non-zero
energy, the vacuum would represent a preferred frame. Since both theoretical
approaches have their own good motivations, we propose an experimental test to
decide between the two scenarios.

Comment: 12 pages, no figures
Combining goal-oriented and model-driven approaches to solve the Payment Problem Scenario
Motivated by the objective of improving the participation of business domain experts in the design of service-oriented integration solutions, we extend our previous work on using the COSMO methodology for service mediation by introducing a goal-oriented approach to requirements engineering. With this approach, business requirements, including the motivations behind the mediation solution, are better understood, specified, and aligned with their technical implementations. We use the Payment Problem Scenario of the SWS Challenge to illustrate the extension.
Motivations for OpenLearn: the Open University's Open Content Initiative
This short paper is a contribution to the Organisation for Economic Co-operation and Development (OECD) expert workshop to help identify "motivations, benefits and barriers for institutions producing open educational resources". The motivations are examined by looking at the reasons behind the launch by the Open University in the UK of a web-based collection of open educational resources, OpenLearn. OpenLearn launched on October 25th 2006 and reflects an initiative backed by the William and Flora Hewlett Foundation and the Open University to develop a learning environment (LearningSpace) and an accompanying educator environment (LabSpace) giving free access to material derived from Open University courses. There are of course many reasons for taking part in open educational resources, and so this paper considers motivations in community, organisational, technical and economic terms. The paper was initially prepared for the OECD experts meeting on Open Educational Resources, 26-27 October 2006, in Barcelona, Spain.