5,471 research outputs found
A brief network analysis of Artificial Intelligence publication
In this paper, we present an overview of the history of Artificial
Intelligence (AI) through a statistical analysis of publications since 1940. We
collected and mined the IEEE publication database to analyse the geographical
and chronological variation in AI research activity. The connections between
different institutes are shown. The results show that the leading AI research
communities are mainly in the USA, China, Europe and Japan. The key institutes,
authors and research hotspots are identified. We find that the sets of research
institutes active in fields such as Data Mining, Computer Vision, Pattern
Recognition and other areas of Machine Learning are quite consistent, implying
strong interaction between the communities of these fields. The research on
Electronic Engineering and industrial or commercial applications is shown to be
very active in California, and Japan publishes many papers in robotics. Owing
to the limitations of the data source, the results may be overly influenced by
raw publication counts; we mitigate this by applying network key-node analysis
to the research community instead of merely counting publications.
Comment: 18 pages, 7 figures
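The abstract's key-node idea can be illustrated with a minimal sketch: rank institutes in a collaboration graph by degree centrality rather than by raw publication counts. The graph, institute names and edges below are purely illustrative, not data from the paper.

```python
# Hypothetical sketch: degree centrality over a co-publication graph.
# Edges are made-up (institute A, institute B) collaboration links.
from collections import defaultdict

edges = [
    ("MIT", "Stanford"), ("MIT", "Tsinghua"),
    ("Stanford", "Tsinghua"), ("Tokyo", "MIT"),
]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Normalised degree centrality: degree / (n - 1) possible neighbours.
n = len(degree)
centrality = {node: d / (n - 1) for node, d in degree.items()}
key_nodes = sorted(centrality, key=centrality.get, reverse=True)
print(key_nodes[0])  # MIT holds the most collaboration links here
```

A node with few papers but many cross-institute links can thus still rank as a key node, which is the point of using centrality instead of publication counts.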
Action Recognition in Videos: from Motion Capture Labs to the Web
This paper presents a survey of human action recognition approaches based on
visual data recorded from a single video camera. We propose an organizing
framework that highlights the evolution of the area, with techniques moving
from heavily constrained motion-capture scenarios towards more challenging,
realistic, "in the wild" videos. The proposed organization is based on the
representation used as input for the recognition task, emphasizing the
hypotheses assumed and, thus, the constraints imposed on the type of video that
each technique is able to address. Making the hypotheses and constraints
explicit renders the framework particularly useful for selecting a method for a
given application. Another advantage of the proposed organization is that it
allows the newest approaches to be categorized seamlessly alongside traditional
ones, while providing an insightful perspective on the evolution of the action
recognition task to date. That perspective is the basis for the discussion at
the end of the paper, where we also present the main open issues in the area.
Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4 tables
HALSIE - Hybrid Approach to Learning Segmentation by Simultaneously Exploiting Image and Event Modalities
Standard frame-based algorithms fail to retrieve accurate segmentation maps
in challenging real-time applications like autonomous navigation, owing to the
limited dynamic range and motion blur prevalent in traditional cameras. Event
cameras address these limitations by asynchronously detecting changes in
per-pixel intensity to generate event streams with high temporal resolution,
high dynamic range, and no motion blur. However, event camera outputs cannot be
directly used to generate reliable segmentation maps as they only capture
information at the pixels in motion. To augment the missing contextual
information, we postulate that fusing spatially dense frames with temporally
dense events can generate semantic maps with fine-grained predictions. To this
end, we propose HALSIE, a hybrid approach to learning segmentation by
simultaneously leveraging image and event modalities. To enable efficient
learning across modalities, our proposed hybrid framework comprises two input
branches, a Spiking Neural Network (SNN) branch and a standard Artificial
Neural Network (ANN) branch to process event and frame data respectively, while
exploiting their corresponding neural dynamics. Our hybrid network outperforms
state-of-the-art methods on the DDD17 and MVSEC semantic segmentation
benchmarks and shows comparable performance on the DSEC-Semantic dataset with
up to 33.23× fewer network parameters. Further, our method achieves up to
18.92× lower inference cost than existing SOTA approaches, making it suitable
for resource-constrained edge applications.
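The fusion premise above rests on turning the asynchronous event stream into a temporally dense tensor that a network branch can consume alongside a frame. The sketch below shows one common such representation, a polarity-accumulating voxel grid; it is illustrative only and not the paper's exact input pipeline.

```python
# Illustrative sketch (not HALSIE's exact pipeline): bin an asynchronous
# event stream of (x, y, t, polarity) tuples into a voxel grid with a
# fixed number of temporal bins, accumulating signed polarity per pixel.
import numpy as np

def events_to_voxel_grid(events, H, W, bins):
    """events: array of rows (x, y, t, p) with p in {-1, +1}."""
    grid = np.zeros((bins, H, W), dtype=np.float32)
    t = events[:, 2]
    span = max(float(t.max() - t.min()), 1e-9)
    # Map each timestamp into a bin index in [0, bins).
    b = ((t - t.min()) / span * (bins - 1e-6)).astype(int)
    for (x, y, _, p), k in zip(events, b):
        grid[k, int(y), int(x)] += p
    return grid

# Three toy events: two positive at pixel (1, 2), one negative at (3, 0).
events = np.array([[1, 2, 0.00, +1.0],
                   [1, 2, 0.05, +1.0],
                   [3, 0, 0.10, -1.0]])
voxels = events_to_voxel_grid(events, H=4, W=4, bins=2)
print(voxels.shape)  # (2, 4, 4)
```

A frame branch then sees a spatially dense image while the event branch sees this temporally dense grid, which is the complementarity the abstract exploits.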
Energy efficient enabling technologies for semantic video processing on mobile devices
Semantic object-based processing will play an increasingly important role in future multimedia systems due to the ubiquity of digital multimedia capture/playback technologies and increasing storage capacity. Although the object-based paradigm has many undeniable benefits, numerous technical challenges remain before such applications become pervasive, particularly on computationally constrained mobile devices. A fundamental issue is the ill-posed problem of semantic object segmentation. Furthermore, on battery-powered mobile computing devices, the additional algorithmic complexity of semantic object-based processing compared to conventional video processing is highly undesirable, both from a real-time operation and a battery-life perspective. This
thesis attempts to tackle these issues by firstly constraining the solution space and focusing on the
human face as a primary semantic concept of use to users of mobile devices. A novel face detection algorithm is proposed, which from the outset was designed to be amenable to offloading from the host microprocessor to dedicated hardware, thereby providing real-time performance and
reducing power consumption. The algorithm uses an Artificial Neural Network (ANN), whose topology and weights are evolved via a genetic algorithm (GA). The computational burden of the ANN evaluation is offloaded to a dedicated hardware accelerator, which is capable of processing
any evolved network topology. Efficient arithmetic circuitry, which leverages modified Booth recoding, column compressors and carry-save adders, is adopted throughout the design. To tackle the increased computational costs associated with object tracking and object-based shape encoding, a novel energy-efficient binary motion estimation architecture is proposed. Energy is reduced in the proposed motion estimation architecture by minimising the redundant operations inherent in the binary data. Both architectures are shown to compare favourably with the relevant prior art.
Generic object classification for autonomous robots
One of the main problems in autonomous robot interaction is scene knowledge. Recognition is central to addressing this problem and to allowing robots to interact in uncontrolled environments. In this paper, we present a practical application for object fitting, normalization and classification of triangular and circular signs. The system is deployed on Sony's Aibo robot to improve the robot's interaction behaviour.
The presented methodology has been tested in simulations and on real categorization problems, such as traffic sign classification, with very promising results.
Note: This document originally contains additional material and/or software available only at the Biblioteca de Ciència i Tecnologia.