A Processor Core Model for Quantum Computing
We describe an architecture based on a processing 'core' where multiple
qubits interact perpetually, and a separate 'store' where qubits exist in
isolation. Computation consists of single qubit operations, swaps between the
store and the core, and free evolution of the core. This enables computation
using physical systems where the entangling interactions are 'always on'.
Alternatively, for switchable systems our model constitutes a prescription for
optimizing many-qubit gates. We discuss implementations of the quantum Fourier
transform, Hamiltonian simulation, and quantum error correction.
Comment: 5 pages, 2 figures; improved some arguments as suggested by a referee
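The always-on idea can be illustrated with a small numerical sketch (our own illustration, not from the paper: the Ising form J Z⊗Z of the coupling, the value of J, and the timing convention are all assumptions). Free evolution of two core qubits under a fixed coupling for a quarter period, followed only by single-qubit phase corrections, reproduces a CZ gate — entanglement comes for free from the core's natural dynamics.

```python
import numpy as np

# Hypothetical always-on coupling H = J (Z x Z) between two core qubits.
# H is diagonal, so its free evolution is a diagonal phase unitary.
J = 1.0
t = np.pi / (4 * J)                      # quarter-period evolution time
zz = np.array([1, -1, -1, 1])            # eigenvalues of Z x Z
U = np.diag(np.exp(-1j * J * t * zz))    # free evolution of the core

# Undo the local parts with Rz rotations on each qubit plus a global phase;
# what remains is exactly the entangling CZ gate.
rz = np.diag(np.exp(1j * (np.pi / 4) * np.array([1, -1])))
local = np.kron(rz, rz)
CZ = np.diag([1, 1, 1, -1])
print(np.allclose(np.exp(-1j * np.pi / 4) * local @ U, CZ))  # → True
```

This is the sense in which single-qubit operations plus timed free evolution suffice: the two-qubit gate is never "switched on", only scheduled.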
Scientific requirements for an engineered model of consciousness
The building of a non-natural conscious system requires more than the design of physical or virtual machines with intuitively conceived abilities, philosophically elucidated architecture or hardware homologous to an animal’s brain. Human society might one day treat a type of robot or computing system as an artificial person. Yet that would not answer scientific questions about the machine’s consciousness or otherwise. Indeed, empirical tests for consciousness are impossible because no such entity is denoted within the theoretical structure of the science of mind, i.e. psychology. However, contemporary experimental psychology can identify whether a specific mental process is conscious in particular circumstances, by theory-based interpretation of the overt performance of human beings. Thus, if we are to build a conscious machine, the artificial systems must be used as a test-bed for theory developed from the existing science that distinguishes conscious from non-conscious causation in natural systems. Only such a rich and realistic account of hypothetical processes accounting for observed input/output relationships can establish whether or not an engineered system is a model of consciousness. It follows that any research project on machine consciousness needs a programme of psychological experiments on the demonstration systems, and that the programme should be designed to deliver a fully detailed scientific theory of the type of artificial mind being developed – a Psychology of that Machine.
Federated Embedded Systems – a review of the literature in related fields
This report is concerned with the vision of smart interconnected objects, a vision that has attracted much attention lately. In this paper, embedded, interconnected, open, and heterogeneous control systems are in focus, formally referred to as Federated Embedded Systems. To place FES into a context, a review of some related research directions is presented. This review includes such concepts as systems of systems, cyber-physical systems, ubiquitous
computing, internet of things, and multi-agent systems. Interestingly, the reviewed fields seem to overlap with each other in an increasing number of ways.
Complexity, BioComplexity, the Connectionist Conjecture and Ontology of Complexity
This paper develops and integrates major ideas and concepts on complexity and biocomplexity - the connectionist conjecture, universal ontology of complexity, irreducible complexity of totality and inherent randomness, perpetual evolution of information, emergence of criticality, and equivalence of symmetry and complexity. This paper introduces the Connectionist Conjecture, which states that the one and only representation of Totality is the connectionist one, i.e. in terms of nodes and edges. This paper also introduces the idea of a Universal Ontology of Complexity and develops concepts in that direction. The paper also develops ideas and concepts on the perpetual evolution of information, and the irreducibility and computability of totality, all in the context of the Connectionist Conjecture. The paper indicates that control and communication are the prime functionals responsible for the symmetry and complexity of complex phenomena. The paper takes the stand that the phenomenon of life (including its evolution) is probably the nearest to what we can describe with the term “complexity”. The paper also assumes that signaling and communication within the living world, and of the living world with the environment, create the connectionist structure of biocomplexity. With life and its evolution as the substrate, the paper develops ideas towards the ontology of complexity. The paper introduces new complexity-theoretic interpretations of fundamental biomolecular parameters. The paper also develops ideas on a methodology to determine the complexity of “true” complex phenomena.
Machine learning in spectral domain
Deep neural networks are usually trained in the space of the nodes, by
adjusting the weights of existing links via suitable optimization protocols. We
here propose a radically new approach which anchors the learning process to
reciprocal space. Specifically, the training acts on the spectral domain and
seeks to modify the eigenvectors and eigenvalues of transfer operators in
direct space. The proposed method is ductile and can be tailored to return
either linear or non-linear classifiers. The performance is competitive with
standard schemes, while allowing for a significant reduction of the learning
parameter space. Spectral learning restricted to eigenvalues could be also
employed for pre-training of the deep neural network, in conjunction with
conventional machine-learning schemes. Further, it is surmised that the nested
indentation of eigenvectors that defines the core idea of spectral learning
could help explain why deep networks work as well as they do.
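The eigenvalue-restricted variant can be sketched in a few lines (our own illustration, not the authors' code: the frozen orthonormal eigenbasis, the synthetic regression task, and the learning rate are assumptions). A layer W = Φ diag(λ) Φᵀ is trained by adjusting only the n eigenvalues λ, instead of all n² weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch = 20, 200

# Fixed orthonormal eigenbasis (frozen here; the paper can also train it)
Phi, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Synthetic linear regression target
W_true = rng.standard_normal((n, n)) * 0.1
X = rng.standard_normal((batch, n))
Y = X @ W_true.T

lam = np.zeros(n)          # n trainable parameters instead of n*n

def forward(lam, X):
    # Apply W = Phi diag(lam) Phi^T: rotate, scale per mode, rotate back
    return (X @ Phi) * lam @ Phi.T

loss0 = 0.5 * np.mean((forward(lam, X) - Y) ** 2)
lr = 0.5
for _ in range(200):
    err = forward(lam, X) - Y
    # dL/dlam_k = mean_b (Phi^T err_b)_k * (Phi^T x_b)_k
    grad = np.mean((err @ Phi) * (X @ Phi), axis=0)
    lam -= lr * grad
loss1 = 0.5 * np.mean((forward(lam, X) - Y) ** 2)
print(loss0, loss1)  # loss decreases with 20 rather than 400 parameters
```

The reduction in parameter count is the point: the spectral parametrization trades expressiveness (only maps diagonal in the chosen basis are reachable) for a much smaller learning space.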
Entity Identification Problem in Big and Open Data
Big and Open Data provide great opportunities for businesses to enhance their competitive advantage if utilized properly. However, over the past few years of research in Big and Open Data processing, we have encountered a major challenge in entity identification reconciliation when trying to establish accurate relationships between entities from different data sources. In this paper, we present our Intelligent Reconciliation Platform and Virtual Graphs solution that addresses this issue. With this solution, we are able to efficiently extract Big and Open Data from heterogeneous sources and integrate them into a common analysable format. Further enhanced with the Virtual Graphs technology, entity identification reconciliation is performed dynamically to produce more accurate results at system runtime. Moreover, we believe that our technology can be applied to a wide diversity of entity identification problems in several domains, e.g., e-Health, cultural heritage, and company identities in the financial world.
Ministerio de Ciencia e Innovación TIN2013-46928-C3-3-
End-to-end weakly-supervised semantic alignment
We tackle the task of semantic alignment where the goal is to compute dense
semantic correspondence aligning two images depicting objects of the same
category. This is a challenging task due to large intra-class variation,
changes in viewpoint and background clutter. We present the following three
principal contributions. First, we develop a convolutional neural network
architecture for semantic alignment that is trainable in an end-to-end manner
from weak image-level supervision in the form of matching image pairs. The
outcome is that parameters are learnt from rich appearance variation present in
different but semantically related images without the need for tedious manual
annotation of correspondences at training time. Second, the main component of
this architecture is a differentiable soft inlier scoring module, inspired by
the RANSAC inlier scoring procedure, that computes the quality of the alignment
based on only geometrically consistent correspondences thereby reducing the
effect of background clutter. Third, we demonstrate that the proposed approach
achieves state-of-the-art performance on multiple standard benchmarks for
semantic alignment.
Comment: In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018).
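The soft inlier idea can be conveyed with a toy NumPy version (a hedged sketch, not the paper's module: the paper scores CNN correlation maps, while here the sigmoid soft threshold, the affine transform, and the `tau`/`beta` values are our illustrative assumptions). Replacing RANSAC's hard inlier test with a sigmoid keeps the score differentiable with respect to the transform parameters, so it can sit inside an end-to-end trained network:

```python
import numpy as np

def soft_inlier_score(src, dst, A, t, tau=0.05, beta=0.01):
    """Differentiable count of geometrically consistent matches.

    src, dst : (N, 2) matched point coordinates
    A, t     : candidate affine transform (2x2 matrix, 2-vector)
    tau      : residual threshold; beta : softness of the threshold
    """
    resid = np.linalg.norm(src @ A.T + t - dst, axis=1)
    # Sigmoid in place of RANSAC's hard inlier test: near-1 weight for
    # residuals below tau, near-0 above, smooth in between.
    return 1.0 / (1.0 + np.exp((resid - tau) / beta))

rng = np.random.default_rng(1)
src = rng.random((100, 2))
A, t = np.eye(2), np.zeros(2)
dst = src.copy()
dst[50:] += 0.3            # half the matches are background clutter
w = soft_inlier_score(src, dst, A, t)
print(w[:50].sum(), w[50:].sum())  # consistent matches dominate the score
```

Maximizing such a score with respect to the transform rewards only the geometrically consistent correspondences, which is how the clutter suppression described in the abstract comes about.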