An Adaptive Dictionary Learning Approach for Modeling Dynamical Textures
Video representation is an important and challenging task in the computer
vision community. In this paper, we assume that image frames of a moving scene
can be modeled as a Linear Dynamical System. We propose a sparse coding
framework, named adaptive video dictionary learning (AVDL), to model a video
adaptively. The developed framework is able to capture the dynamics of a moving
scene by exploring both sparse properties and the temporal correlations of
consecutive video frames. The proposed method is compared with state-of-the-art
video processing methods on several benchmark data sequences that exhibit
appearance changes and heavy occlusions.
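A minimal sketch of the two ingredients (sparse codes over a dictionary, plus a linear dynamical model tying consecutive frames together) might look as follows. It uses generic ISTA sparse coding on synthetic data with illustrative dimensions; it is not the AVDL algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: frame dimension d, k dictionary atoms, T frames.
d, k, T = 64, 32, 10
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

def sparse_code(x, D, lam=0.1, iters=200):
    """Generic ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = a - D.T @ (D @ a - x) / L       # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# Synthetic "video": the codes evolve under a linear dynamical system
# a_{t+1} = A a_t, giving temporally correlated frames x_t = D a_t + noise.
A = 0.9 * np.eye(k)
a = np.zeros(k)
a[:3] = rng.standard_normal(3)              # sparse initial code
frames = []
for t in range(T):
    frames.append(D @ a + 0.01 * rng.standard_normal(d))
    a = A @ a

a_hat = sparse_code(frames[0], D)
print("nonzeros in recovered code:", int(np.count_nonzero(np.abs(a_hat) > 1e-3)))
```

In AVDL the dictionary itself is also adapted to the video; here it is fixed, since only the frame model is being illustrated.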
Blind Source Separation with Compressively Sensed Linear Mixtures
This work studies the problem of simultaneously separating and reconstructing
signals from compressively sensed linear mixtures. We assume that all source
signals share a common sparse representation basis. The approach combines
classical Compressive Sensing (CS) theory with a linear mixing model. It allows
the mixtures to be sampled independently of each other. If samples are acquired
in the time domain, this means that the sensors need not be synchronized. Since
Blind Source Separation (BSS) from a linear mixture is only possible up to
permutation and scaling, factoring out these ambiguities leads to a
minimization problem on the so-called oblique manifold. To solve it, we develop
a geometric conjugate subgradient method that scales to large systems. Numerical
results demonstrate the promising performance of the proposed algorithm compared
with several state-of-the-art methods.
Comment: 9 pages, 2 figures
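The sensing model can be sketched in a few lines. The sizes, the identity sparsity basis, and the Gaussian measurement matrices below are illustrative choices; the recovery algorithm on the oblique manifold is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions (illustrative only): r sparse sources of length n are
# combined by an m x r mixing matrix; each of the m mixtures is then
# compressively sampled by its OWN p x n matrix Phi_i, so the sensors
# need not share a sampling pattern (or, in the time domain, a clock).
n, r, m, p = 128, 2, 3, 48

S = np.zeros((r, n))                     # sources, sparse in the identity basis
for i in range(r):
    idx = rng.choice(n, size=5, replace=False)
    S[i, idx] = rng.standard_normal(5)

A = rng.standard_normal((m, r))          # unknown mixing matrix
X = A @ S                                # linear mixtures

# Independent compressive measurements, one matrix per mixture.
Y = np.stack([rng.standard_normal((p, n)) @ X[i] for i in range(m)])
print(Y.shape)                           # m measurement vectors of length p

# BSS is identifiable only up to permutation and scaling; normalizing the
# mixing columns to unit norm places them on the so-called oblique manifold,
# over which the paper's geometric method optimizes.
A_oblique = A / np.linalg.norm(A, axis=0)
```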
Grasp Motion Planning for box opening task by multi-fingered hands and arms
The aim of our project is to develop a robot that manipulates objects in human environments. In this paper, as a first step, we focus on opening a paper box, such as a tea box, and present a method to plan grasp motions for two arms with multi-fingered hands. We propose a task-priority-based scheme to plan a grasping area consistent with all steps of the given task procedure. Based on the grasping area and the concept of preshape, we derive the desired fingertip positions and the hand base position and orientation for the preshape. Based on the vector field approach, we propose a motion planning method for the planned grasp by multi-fingered hands that avoids any undesired collisions. The method can also be applied to regrasping and to motions in which collision is required.
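The vector-field idea can be illustrated with a minimal 2-D attract/repel field. This is a generic potential-field planner with made-up gains, shown only to convey the concept; in the paper the field is defined over the hand and arm configuration space, not the plane:

```python
import numpy as np

goal = np.array([1.0, 0.0])
obstacle = np.array([0.5, 0.05])

def field(x, k_att=1.0, k_rep=0.02, rho0=0.3):
    """Velocity field: attraction to the goal, repulsion near the obstacle."""
    v = k_att * (goal - x)                    # attraction toward the goal
    diff = x - obstacle
    d = np.linalg.norm(diff)
    if d < rho0:                              # repulsion only near the obstacle
        v += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * diff / d
    return v

x = np.array([0.0, 0.0])
path = [x.copy()]
for _ in range(400):                          # follow the field in small steps
    x = x + 0.02 * field(x)
    path.append(x.copy())

print("final point:", np.round(x, 2))
print("closest approach to obstacle:",
      round(min(np.linalg.norm(p - obstacle) for p in path), 2))
```

The point is carried around the obstacle toward the goal; deliberately allowing contact, as in the paper's "collision is required" motions, would correspond to relaxing the repulsive term for selected surfaces.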
“A jack of all trades” - The role of PIs in the establishment and management of collaborative networks in scientific knowledge commercialisation
The commercialisation of scientific knowledge has become a primary objective for universities worldwide. Collaborative research projects are viewed as the key to achieving this objective; however, the role of Principal Investigators (PIs) within these complex multi-stakeholder research projects remains under-researched. This paper explores how networks in the scientific knowledge collaboration process are initiated and maintained from a multi-stakeholder perspective. It is based on case study evidence from 82 stakeholders in 17 research collaboration projects in Irish and German universities, which provides a holistic view of the process, as opposed to prior research, which has tended to report findings based on the analysis of one or two stakeholders. It finds that PIs play a lead role in establishing and managing stakeholder networks. This finding is unanimous across all stakeholders, irrespective of research centre size, type and geographical location. Not unlike the entrepreneur, the PI has to be “a jack of all trades”, taking on the roles of project manager, negotiator and resource acquirer, as well as the traditional academic roles of Ph.D. supervision and mentoring. The findings suggest that PIs are better placed than Technology Transfer Office (TTO) managers to act as boundary spanners bridging the gap between science and industry.
From Adaptive Reasoning to Cognitive Factory: Bringing Cognitive Intelligence to Manufacturing Technology
There are two important aspects that will play central roles in future manufacturing systems: changeability and human-machine collaboration. The first aspect, changeability, concerns the ability of production tools to reconfigure themselves for new manufacturing settings, possibly with no prior information, while maintaining their reliability at the lowest cost. The second aspect, human-machine collaboration, emphasizes the ability of production tools to position themselves as humans' co-workers. The interplay between these two aspects will not only determine the economic success of a manufacturing process, but will also shape the future of the technology itself. To address this future challenge of manufacturing systems, the concept of the Cognitive Factory was proposed. Along this line, machines and processes are equipped with cognitive capabilities that allow them to assess and increase their scope of operation autonomously. However, the technical implementation of such a concept is still widely open for research, since several stumbling blocks limit the practicality of the proposed methods. In this paper, we introduce our method for achieving the goal of the Cognitive Factory. Our method is inspired by the working mechanisms of the human brain; it works by harnessing the reasoning capabilities of a cognitive architecture. By utilizing such an adaptive reasoning mechanism, we envision future manufacturing systems with cognitive intelligence. We provide illustrative examples from our current research to demonstrate that the proposed method is able to address the primary issues of the Cognitive Factory: changeability and human-machine collaboration.
Sample Complexity of Dictionary Learning and other Matrix Factorizations
Many modern tools in machine learning and signal processing, such as sparse
dictionary learning, principal component analysis (PCA), non-negative matrix
factorization (NMF), k-means clustering, etc., rely on the factorization of a
matrix obtained by concatenating high-dimensional vectors from a training
collection. While the idealized task would be to optimize the expected quality
of the factors over the underlying distribution of training vectors, it is
achieved in practice by minimizing an empirical average over the considered
collection. The focus of this paper is to provide sample complexity estimates
to uniformly control how much the empirical average deviates from the expected
cost function. Standard arguments imply that the performance of the empirical
predictor also exhibits such guarantees. The level of genericity of the approach
encompasses several possible constraints on the factors (tensor product
structure, shift-invariance, sparsity \ldots), thus providing a unified
perspective on the sample complexity of several widely used matrix
factorization schemes. The derived generalization bounds behave proportionally
to $\sqrt{\log(n)/n}$ w.r.t.\ the number of samples $n$ for the considered
matrix factorization techniques.
Comment: to appear
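The quantity being controlled, the deviation of the empirical average cost from its expectation, can be illustrated numerically for one fixed factorization. The snippet below uses a toy projection-residual cost; note that the paper's bounds are uniform over all admissible factors, which this single-factorization sketch does not attempt to capture:

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed "dictionary": an orthonormal basis of a 3-D subspace of R^8.
d = 8
D = np.linalg.qr(rng.standard_normal((d, 3)))[0]
P = D @ D.T                                   # projector onto span(D)

def cost(X):
    # Average squared residual of projecting each sample onto span(D).
    R = X - X @ P
    return np.mean(np.sum(R * R, axis=1))

# Large-sample proxy for the expected cost over the data distribution.
expected = cost(rng.standard_normal((200_000, d)))

def mean_dev(n, trials=200):
    """Average |empirical cost - expected cost| over random n-sample sets."""
    devs = [abs(cost(rng.standard_normal((n, d))) - expected)
            for _ in range(trials)]
    return float(np.mean(devs))

for n in (100, 400, 1600):
    print(n, round(mean_dev(n), 4))
# Quadrupling n roughly halves the deviation, consistent with an
# O(1/sqrt(n))-type sample complexity bound.
```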