Balance between openness and closeness of organizational boundaries in facilitating open innovation
The need for open innovation emerged from worldwide organizational competitiveness and from the need to keep improving and innovating in order to remain sustainable and add value. Organizations face new challenges and must present solutions for them. Frequently their in-house assets are not enough, yet they can be a precious key for joining efforts with external players and achieving better performance. Open innovation is about working with outside partners (namely suppliers, clients, competitors, etc.) to bring internal advances to market and to tap a wellspring of external development that can be commercialized. Through a case-study analysis of the clothing and textile industry in Portugal, this dissertation aims to contribute to the reflection on conceiving and maintaining a dynamic balance between the openness and closedness of open innovation. Developed at INESC TEC, a private non-profit association that hosted the researcher, this study is mostly qualitative. Drawing on interviews with some of the leading players in the Portuguese textile sector, the dissertation's findings suggest that open innovation is a combined model involving government, universities, research centres, associations, and enterprises (SMEs and large companies). It requires a symbiosis between closed and open processes, in which all players should act, interact, collaborate, and cooperate. This case shows how large firms and SMEs tend to view open innovation and how they put it into practice. It also presents one possible way of being a successful open innovator by combining open and closed features. What to share, how to value the process and its effects, and what the benefits of sharing are, are among the key questions answered.
This dissertation concludes that open innovation requires action and will have greater impact if pursued through a cluster approach, by all players, even if at different levels and with different approaches.
ScALPEL: A Scalable Adaptive Lightweight Performance Evaluation Library for application performance monitoring
As supercomputers continue to grow in scale and capabilities, it is becoming
increasingly difficult to isolate processor and system level causes of
performance degradation. Over the last several years, a significant number of
performance analysis and monitoring tools have been proposed and built. However,
these tools suffer from several important shortcomings, particularly in
distributed environments. In this paper we present ScALPEL, a Scalable Adaptive
Lightweight Performance Evaluation Library for application performance
monitoring at the functional level. Our approach provides several distinct
advantages. First, ScALPEL is portable across a wide variety of architectures,
and its ability to selectively monitor functions incurs low run-time
overhead, enabling its use for large-scale production applications. Second, it
is run-time configurable, enabling dynamic selection of both the functions to
profile and the events of interest on a per-function basis. Third, our
approach is transparent in that it requires no source code modifications.
Finally, ScALPEL is implemented as a pluggable unit by reusing existing
performance monitoring frameworks such as Perfmon and PAPI and extending them
to support both sequential and MPI applications.
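To make the selective-monitoring idea concrete, here is a minimal Python sketch of run-time-configurable, per-function profiling. It only illustrates the concept: the `PROFILED` set and `monitor` decorator are hypothetical stand-ins, and ScALPEL itself works natively through Perfmon/PAPI hardware counters rather than wall-clock timing.

```python
import functools
import time
from collections import defaultdict

# Hypothetical registry: the set of profiled functions can be changed at
# run time, mirroring ScALPEL's dynamic selection of functions and events.
PROFILED = {"compute_step"}
STATS = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def monitor(func):
    """Wrap a function; record its cost only if it is currently selected."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if func.__name__ not in PROFILED:   # selective monitoring:
            return func(*args, **kwargs)    # unselected calls pay ~nothing
        t0 = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            s = STATS[func.__name__]
            s["calls"] += 1
            s["seconds"] += time.perf_counter() - t0
    return wrapper

@monitor
def compute_step(n):
    return sum(i * i for i in range(n))

for _ in range(100):
    compute_step(10_000)
print(dict(STATS))
```

The transparency ScALPEL claims goes further than this sketch: it hooks functions without any source modification, whereas a decorator still touches the definition site.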
IMAGE CLASSIFICATION USING INVARIANT LOCAL FEATURES AND CONTEXTUAL INFORMATION
Ph.D. (Doctor of Philosophy)
DART: Distribution Aware Retinal Transform for Event-based Cameras
We introduce a generic visual descriptor, termed the distribution aware
retinal transform (DART), that encodes the structural context using log-polar
grids for event cameras. The DART descriptor is applied to four different
problems, namely object classification, tracking, detection and feature
matching: (1) The DART features are directly employed as local descriptors in a
bag-of-features classification framework and testing is carried out on four
standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS,
NCaltech-101). (2) Extending the classification system, tracking is
demonstrated using two key novelties: (i) For overcoming the low-sample problem
for the one-shot learning of a binary classifier, statistical bootstrapping is
leveraged with online learning; (ii) To achieve tracker robustness, the scale
and rotation equivariance property of the DART descriptors is exploited for the
one-shot learning. (3) To solve the long-term object tracking problem, an
object detector is designed using the principle of cluster majority voting. The
detection scheme is then combined with the tracker to result in a high
intersection-over-union score with augmented ground truth annotations on the
publicly available event camera dataset. (4) Finally, the event context encoded
by DART greatly simplifies the feature correspondence problem, especially for
spatio-temporal slices far apart in time, which has not been explicitly tackled
in the event-based vision domain.
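As an illustration of the log-polar encoding at the heart of DART, the following sketch bins event coordinates into a log-polar histogram around a center point. The ring/wedge counts and radius here are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def log_polar_descriptor(events, center, n_rings=8, n_wedges=16, r_max=31.0):
    """Histogram event coordinates into a log-polar grid around `center`.

    `events` is an (N, 2) array of (x, y) event addresses; ring/wedge
    counts and the radius are illustrative choices.
    """
    d = events - np.asarray(center, dtype=float)
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0])            # in [-pi, pi)
    keep = (r > 0) & (r <= r_max)
    # Log-spaced ring edges concentrate resolution near the center,
    # the defining property of a retina-like log-polar grid.
    ring_edges = np.geomspace(1.0, r_max, n_rings + 1)
    ring = np.clip(np.searchsorted(ring_edges, r[keep]) - 1, 0, n_rings - 1)
    wedge = ((theta[keep] + np.pi) / (2 * np.pi) * n_wedges).astype(int) % n_wedges
    hist = np.zeros((n_rings, n_wedges))
    np.add.at(hist, (ring, wedge), 1.0)
    n = np.linalg.norm(hist)
    return (hist / n if n > 0 else hist).ravel()    # normalized descriptor

rng = np.random.default_rng(0)
evts = rng.integers(0, 64, size=(500, 2))
print(log_polar_descriptor(evts, center=(32, 32)).shape)  # (128,)
```

Descriptors of this form change predictably under scale and rotation (shifts along the ring and wedge axes), which is the equivariance property the tracker exploits.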
HyNNA: Improved Performance for Neuromorphic Vision Sensor based Surveillance using Hybrid Neural Network Architecture
Applications in the Internet of Video Things (IoVT) domain have very tight
constraints with respect to power and area. While neuromorphic vision sensors
(NVS) may offer advantages over traditional imagers in this domain, the
existing NVS systems either do not meet the power constraints or have not
demonstrated end-to-end system performance. To address this, we improve on a
recently proposed hybrid event-frame approach by using morphological image
processing algorithms for region proposal and address the low-power requirement
for object detection and classification by exploring various convolutional
neural network (CNN) architectures. Specifically, we compare the results
obtained from our object detection framework against the state-of-the-art
low-power NVS surveillance system and show an improved accuracy of 82.16% from
63.1%. Moreover, we show that using multiple bits does not improve accuracy,
and thus, system designers can save power and area by using only single bit
event polarity information. In addition, we explore the CNN architecture space
for object classification and offer useful insights for trading off accuracy
against power, using less memory and fewer arithmetic operations.
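A minimal sketch of morphological region proposal on an event-count image is shown below; the threshold, structuring elements, and area cut are assumptions, not the exact pipeline of the paper.

```python
import numpy as np
from scipy import ndimage

def propose_regions(event_count, min_area=20):
    """Morphological region proposal on a 2-D histogram of NVS events.

    `event_count[y, x]` holds the number of events per pixel over a short
    window; threshold, structuring element, and area cut are assumptions.
    """
    mask = event_count > 0
    # Closing bridges small gaps between nearby active pixels; opening
    # then removes isolated noise events -- standard morphological cleanup.
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w >= min_area:
            boxes.append((sl[1].start, sl[0].start, w, h))  # (x, y, w, h)
    return boxes

rng = np.random.default_rng(0)
counts = (rng.random((64, 64)) > 0.97).astype(int)
counts[20:30, 10:25] = 3          # a dense synthetic "object"
print(propose_regions(counts))
```

Each proposed box would then be cropped and passed to the CNN classifier, which is where the single-bit polarity finding pays off in power and area.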
Low-power dynamic object detection and classification with freely moving event cameras
We present the first purely event-based, energy-efficient approach for dynamic object detection and categorization with a freely moving event camera. Compared to traditional cameras, event-based object recognition systems are considerably behind in terms of accuracy and algorithmic maturity. To this end, this paper presents an event-based feature extraction method devised by accumulating local activity across the image frame and then applying principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching that takes advantage of the low dimensionality of the feature representation. Additionally, the proposed k-d tree mechanism allows for feature selection to obtain a lower-dimensional object representation when hardware resources are too limited to implement PCA. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device, leading to a high performance-to-resource ratio. The proposed system is tested on real-world event-based datasets for object categorization, showing superior classification performance compared to state-of-the-art algorithms. Additionally, we verified the real-time FPGA performance of the proposed object detection method, trained with limited data as opposed to deep learning methods, under a closed-loop aerial vehicle flight mode. We also compare the proposed object categorization framework to pre-trained convolutional neural networks using transfer learning and highlight the drawbacks of using frame-based sensors under dynamic camera motion. Finally, we provide critical insights into the influence of the feature extraction method and the classification parameters on system performance, which aids in adapting the framework to various low-power (less than a few watts) application scenarios.
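The feature-extraction and matching steps can be sketched as PCA over flattened event neighborhoods followed by k-d tree lookup. Note the hedges: scipy's `cKDTree` is the standard (backtracking) structure and stands in for the paper's backtracking-free, hardware-oriented variant, and the patch data below is synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_features(patches, k=8):
    """Project flattened event-count patches onto their top-k principal axes.

    `patches` is (N, D); the output dimensionality k is an illustrative choice.
    """
    X = patches - patches.mean(axis=0)
    # SVD of the centered data yields the principal components in Vt.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 49))     # synthetic 7x7 patches, flattened
feats, axes = pca_features(train)
tree = cKDTree(feats)                  # standard k-d tree for illustration
dist, idx = tree.query(feats[:5], k=1)
print(idx)                             # each query matches itself here
```

The low dimensionality after PCA is what keeps nearest-neighbor queries cheap, and dropping the backtracking step trades a little matching accuracy for a much simpler FPGA datapath.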
Inverse problem of photoelastic fringe mapping using neural networks
This paper presents an enhanced technique for inverse analysis of photoelastic fringes using neural networks to determine the applied load. The technique may be useful in whole-field analysis of photoelastic images obtained under external loading, which may find application in a variety of specialized areas including robotics and biomedical engineering. The presented technique is easy to implement, does not require much computation, and can cope well with slight experimental variations. It requires image acquisition, filtering, and data extraction; the extracted data is then fed to the neural network, which outputs the applied load. The technique can be efficiently implemented for determining the applied load in applications where repeated loading is one of the main considerations. The results presented in this paper demonstrate the ability of this technique to solve the inverse problem directly from image data, and show that it offers better results for inverse photoelastic problems than previously published work.
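A minimal sketch of the regression step, assuming fringe features have already been extracted from filtered images, is given below. The network size, feature layout, and data are illustrative assumptions, not the authors' configuration; the training set here is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in training data: each row is a feature vector extracted from a
# filtered photoelastic image (e.g. fringe intensities sampled along a
# line), and the target is the applied load. Values are synthetic.
rng = np.random.default_rng(2)
loads = rng.uniform(10.0, 100.0, size=200)             # applied load
fringes = np.outer(loads, np.linspace(0.0, 1.0, 16))   # fake fringe features
fringes += rng.normal(scale=0.5, size=fringes.shape)   # experimental noise

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(fringes, loads)                                # features -> load
print(net.predict(fringes[:3]), loads[:3])
```

Because the network is trained once and then evaluated per image, the per-measurement cost at run time is a single cheap forward pass, which suits the repeated-loading scenario the paper targets.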
Towards Virtual Shared Memory for Non-Cache-Coherent Multicore Systems
Emerging heterogeneous architectures do not necessarily provide cache-coherent shared memory across all components of the system. Although there are good reasons for this architectural decision, it does present programmers with a challenge. Several programming models and approaches are currently available, including explicit data movement for offloading computation to coprocessors, and treating coprocessors as distributed-memory machines by using message passing. This paper examines the potential of distributed shared memory (DSM) for addressing this programming challenge. We discuss how our recently proposed DSM system and its memory consistency model map to the heterogeneous node context, and present experimental results that highlight the advantages and challenges of this approach.
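As a rough analogy of the shared-memory view the paper argues for, as opposed to explicit copies or message passing, the sketch below lets two processes operate on one buffer through Python's `multiprocessing.shared_memory`. It stands in for the idea only; it is not the authors' DSM system, and `join()` crudely plays the role of a consistency-model synchronization point.

```python
import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def worker(name, shape):
    # The "coprocessor" side attaches to the same buffer by name instead
    # of receiving an explicit copy -- a shared-memory view of the data.
    shm = SharedMemory(name=name)
    arr = np.ndarray(shape, dtype=np.float64, buffer=shm.buf)
    arr *= 2.0               # update in place; no message passing
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=8 * 4)   # four float64 values
    a = np.ndarray((4,), dtype=np.float64, buffer=shm.buf)
    a[:] = [1.0, 2.0, 3.0, 4.0]
    p = Process(target=worker, args=(shm.name, a.shape))
    p.start()
    p.join()                 # synchronization point: only after this may
    print(a)                 # the host safely read the worker's writes
    shm.close()
    shm.unlink()
```

The real difficulty on non-cache-coherent hardware is exactly what the `join()` hides here: deciding when one side's writes become visible to the other, which is the job of the DSM consistency model the paper discusses.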