Utilizing Analytical Hierarchy Process for Pauper House Programme in Malaysia
In Malaysia, the selection and evaluation of candidates for the Pauper House Programme (PHP) are done manually. In this paper, a technique based on the Analytical Hierarchy Process (AHP) is designed and developed to evaluate and select PHP applications. The aim is to make the selection process more precise and accurate and to avoid bias. The technique is studied and designed based on the pauper assessment procedure used by one of the district offices in Malaysia. A hierarchy of indexes is designed based on the criteria used in the official PHP application form. Twenty-three (23) data samples, previously endorsed by the State Exco in Malaysia, are used to test the technique, and a comparison of the two methods is given in this paper. All calculations are performed in the Expert Choice software, version 11.5. Comparing the manual and AHP results shows that three (3) samples are not qualified. The developed technique is satisfactory in terms of accuracy and precision, but requires further study owing to some limitations, as explained in the recommendations of this paper.
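As a rough illustration of the weight calculation that underlies AHP (and that tools such as Expert Choice automate), the sketch below derives priority weights and a consistency ratio from a pairwise comparison matrix; the three criteria and the matrix values are hypothetical, not the paper's actual assessment data.

```python
# Hedged sketch of the standard AHP priority-weight computation (Saaty's
# eigenvector method). The example matrix and criteria are illustrative only.
import numpy as np

def ahp_weights(pairwise):
    """Return priority weights and consistency ratio for a pairwise matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    # Principal eigenvector gives the priority weights.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    # Consistency ratio: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    lam_max = eigvals.real[k]
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    cr = 0.0 if ri == 0 else ((lam_max - n) / (n - 1)) / ri
    return w, cr

# Illustrative 3-criterion comparison (e.g., income vs. dependents vs. housing condition).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print(weights, cr)  # weights sum to 1; CR < 0.1 indicates acceptable consistency
```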
Some Theorems for Feed Forward Neural Networks
In this paper we introduce a new method which employs the concept of
"Orientation Vectors" to train a feed-forward neural network and is suitable for
problems involving large dimensions where the clusters are characteristically
sparse. The new method does not become NP-hard as the problem size increases. We
`derive' the method by starting from Kolmogorov's method and then relaxing some
of its stringent conditions. We show that for most classification problems three
layers are sufficient and that the network size depends on the number of
clusters. We prove that as the number of clusters increases from N to N+dN, the
number of processing elements in the first layer increases only by d(log N) and
remains proportional to the number of classes, and that the method is not NP-hard.
Many examples are solved to demonstrate that the method of Orientation
Vectors requires much less computational effort than Radial Basis Function
methods and other techniques in which distance computations are required; in
fact, the effort of the present method increases only logarithmically with
problem size, compared to the Radial Basis Function method and other methods
that depend on distance computations, e.g., statistical methods where
probabilistic distances are calculated. A practical method of applying the
concept of Occam's razor to choose between two architectures which solve the
same classification problem is also described. The ramifications of the above
findings for the field of Deep Learning are briefly investigated, and we find
that they directly lead to the existence of certain types of NN architectures
which can be used as a "mapping engine" with the property of "invertibility",
thus improving the prospect of their deployment for solving problems involving
Deep Learning and hierarchical classification. The latter possibility has a lot
of future scope in the areas of machine learning and cloud computing. Comment: 15 pages, 13 figures
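One way to read the d(log N) scaling claim, under the assumption (made here only for illustration, not stated in the abstract) that the first-layer size grows proportionally to log N:

```latex
% Hedged sketch: assume the first-layer size grows as m(N) = c \log N for some
% constant c. Then moving from N to N + dN clusters adds only
%   dm = m(N + dN) - m(N) \approx c \, d(\log N) = c \, dN / N
% processing elements, so the increment shrinks as N grows.
\[
  m(N) = c \log N
  \quad\Longrightarrow\quad
  dm \;\approx\; c\, d(\log N) \;=\; c\,\frac{dN}{N}.
\]
```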
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL systems. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing
Artificial intelligence methodologies and their application to diabetes
In the past decade diabetes management has been transformed by the addition of continuous glucose monitoring and insulin pump data. More recently, a wide variety of functions and physiologic variables, such as heart rate, hours of sleep, number of steps walked and movement, have become available through wristbands or watches. New data, such as hydration, geolocation, and barometric pressure, among others, will be incorporated in the future. All these parameters, when analyzed, can support patients' and doctors' decisions. Similar new scenarios have appeared in most medical fields, such that in recent years there has been increased interest in the development and application of artificial intelligence (AI) methods for decision support and knowledge acquisition. Multidisciplinary research teams integrating computer engineers and doctors are more and more frequent, mirroring the need for cooperation in this new field. AI, as a science, can be defined as the ability to make computers do things that would require intelligence if done by humans. Increasingly, diabetes-related journals have been incorporating publications focused on AI tools applied to diabetes. In summary, diabetes management scenarios have undergone a deep transformation that forces diabetologists to incorporate skills from new areas. This newly needed knowledge includes AI tools, which have become part of diabetes health care. The aim of this article is to explain, in an easy and plain way, the most widely used AI methodologies in order to promote the involvement of health care providers (doctors and nurses) in this field.
3D object detection from point clouds with dense pose voters
Object recognition has always been a challenging task for Computer Vision. It finds application in many fields, mainly in industry, for example to allow a robot to locate the objects it has to grasp. In recent decades such tasks have found new ways of being accomplished thanks to the rediscovery of Neural Networks, in particular Convolutional Neural Networks. This type of network has achieved excellent results in many object recognition and classification applications. The trend now is to use such networks in the automotive industry as well, in an attempt to make the dream of self-driving cars a reality. There are many important works on detecting cars from images. In this thesis we present our Convolutional Neural Network architecture for detecting cars and their pose in space, using only lidar input. Storing the bounding-box information around each car at the point level ensures a good prediction even in situations where the cars are occluded. The tests are performed on the most widely used dataset for car and pedestrian detection in autonomous driving applications.
Neural Networks retrieving Boolean patterns in a sea of Gaussian ones
Restricted Boltzmann Machines are key tools in Machine Learning and are
described by the energy function of bipartite spin-glasses. From a statistical
mechanical perspective, they share the same Gibbs measure as Hopfield networks
for associative memory. In this equivalence, weights in the former play the role
of patterns in the latter. As Boltzmann machines usually require real-valued
weights in order to be trained with gradient-descent-like methods, while
Hopfield networks typically store binary patterns in order to retrieve them, the
investigation of a mixed Hebbian network, equipped with both real (e.g.,
Gaussian) and discrete (e.g., Boolean) patterns, naturally arises. We prove
that, in the challenging regime of a high storage of real patterns, where
retrieval is forbidden, an extra load of Boolean patterns can still be
retrieved, as long as the ratio between the overall load and the network size
does not exceed a critical threshold, which turns out to be the same as in the
standard Amit-Gutfreund-Sompolinsky theory. Assuming replica symmetry, we study
the case of a low load of Boolean patterns by combining the stochastic stability
and Hamilton-Jacobi interpolation techniques. The result can be extended to the
high-load regime by a non-rigorous but standard replica computation. Comment: 16 pages, 1 figure
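For orientation, a minimal sketch of the mixed Hebbian setting the abstract describes; the notation below is assumed here for illustration and may differ from the paper's.

```latex
% Hedged sketch: N binary spins \sigma_i = \pm 1, P real (Gaussian) patterns
% \xi^\mu and K Boolean patterns \eta^\nu, stored with a Hebbian coupling.
\[
  H_N(\sigma) \;=\; -\frac{1}{2N} \sum_{i \neq j}
  \Bigl( \sum_{\mu=1}^{P} \xi_i^{\mu} \xi_j^{\mu}
       + \sum_{\nu=1}^{K} \eta_i^{\nu} \eta_j^{\nu} \Bigr)\, \sigma_i \sigma_j .
\]
% The critical threshold mentioned in the abstract concerns the total load per
% neuron, \alpha = (P + K)/N, and coincides with the Amit-Gutfreund-Sompolinsky
% storage limit of the standard Hopfield model (\alpha_c \simeq 0.138).
```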
Self-Supervised Learning with an Information Maximization Criterion
Self-supervised learning (SSL) allows AI systems to learn effective
representations from large amounts of data using tasks that do not require
costly labeling. Mode collapse, i.e., the model producing identical
representations for all inputs, is a central problem in many self-supervised learning approaches,
making self-supervised tasks, such as matching distorted variants of the
inputs, ineffective. In this article, we argue that a straightforward
application of information maximization among alternative latent
representations of the same input naturally solves the collapse problem and
achieves competitive empirical results. We propose a self-supervised learning
method, CorInfoMax, that uses a second-order statistics-based mutual
information measure that reflects the level of correlation among its arguments.
Maximizing this correlative information measure between alternative
representations of the same input serves two purposes: (1) it avoids the
collapse problem by generating feature vectors with non-degenerate covariances;
(2) it establishes relevance among alternative representations by increasing
the linear dependence among them. An approximation of the proposed information
maximization objective simplifies to a Euclidean distance-based objective
function regularized by the log-determinant of the feature covariance matrix.
The regularization term acts as a natural barrier against feature space
degeneracy. Consequently, beyond avoiding complete output collapse to a single
point, the proposed approach also prevents dimensional collapse by encouraging
the spread of information across the whole feature space. Numerical experiments
demonstrate that CorInfoMax achieves better or competitive performance results
relative to the state-of-the-art SSL approaches
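A minimal sketch of the kind of objective the abstract describes: a Euclidean distance term between the representations of two views, regularized by the log-determinant of the feature covariance. The function below is illustrative only; the weighting mu and the diagonal shift eps are assumptions, not the paper's exact CorInfoMax formulation.

```python
# Hedged sketch of a Euclidean-distance objective regularized by the
# log-determinant of the feature covariance, as described in the abstract.
import torch

def corinfomax_style_loss(z1, z2, mu=1.0, eps=1e-4):
    """z1, z2: (batch, dim) representations of two augmented views."""
    # Invariance term: mean squared Euclidean distance between the two views.
    invariance = ((z1 - z2) ** 2).sum(dim=1).mean()

    # Log-det regularizer: encourages a non-degenerate (full-rank) covariance,
    # acting as a barrier against complete and dimensional collapse.
    def logdet_cov(z):
        zc = z - z.mean(dim=0, keepdim=True)
        cov = (zc.T @ zc) / (z.shape[0] - 1)
        cov = cov + eps * torch.eye(z.shape[1], device=z.device)
        return torch.logdet(cov)

    regularizer = logdet_cov(z1) + logdet_cov(z2)
    # Minimizing this loss maximizes the log-det terms while pulling the
    # two views' representations together.
    return invariance - mu * regularizer

# Illustrative usage with random features:
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = corinfomax_style_loss(z1, z2)
```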