Graphene-derived materials as oxygen reduction catalysts in alkaline conditions for energy applications
Graphene is a relatively new carbon material increasingly finding technological applications due to its unique physical and engineering properties. Here, its application as a catalyst for the oxygen reduction reaction (ORR) in alkaline media is investigated.
First, the role of graphene-related materials (including multi-walled carbon nanotubes) as catalyst supports is compared to the widely used carbon black, finding that the ORR follows a mixed behaviour between the direct 4-electron pathway and the indirect 2-step mechanism on graphene-supported platinum catalysts.
Further, different combinations of boron, nitrogen, phosphorus and sulphur metal-free doped-graphene catalysts have been systematically synthesised and evaluated, finding that dual-doped graphene catalysts yield the best ORR performance. Specifically, phosphorus and nitrogen dual-doped graphene (PN-Gr) demonstrates the highest catalytic activity, with 3.5 electrons transferred during the ORR.
Doped-graphene/perovskite oxide hybrid catalysts have also been tested, with PN-Gr/La0.8Sr0.2MnO3 yielding the best ORR activity in terms of measured current density, achieving 85% of the value reported for a commercial Pt/C catalyst. Moreover, SN-Gr/La0.8Sr0.2MnO3 produces the lowest peroxide formation, at only 10%.
These results confirm graphene-derived catalysts as promising alternatives to current platinum-based catalysts, and could help overcome the main issues hindering their practical application.
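The number of electrons transferred per oxygen molecule (such as the 3.5 reported above for PN-Gr) is conventionally estimated from rotating-disk electrode voltammetry via the Koutecky-Levich analysis; the standard relation is sketched here for reference (the symbols follow convention and are not taken from the text above):

```latex
\frac{1}{j} = \frac{1}{j_k} + \frac{1}{B\,\omega^{1/2}},
\qquad
B = 0.62\, n F C_{O_2} D_{O_2}^{2/3} \nu^{-1/6}
```

where $j$ is the measured current density, $j_k$ the kinetic current density, $\omega$ the electrode rotation rate, $F$ the Faraday constant, $C_{O_2}$ and $D_{O_2}$ the bulk concentration and diffusion coefficient of dissolved oxygen, and $\nu$ the kinematic viscosity of the electrolyte. The electron number $n$ is then obtained from the slope of $1/j$ versus $\omega^{-1/2}$; $n \approx 4$ indicates the direct pathway and $n \approx 2$ the indirect peroxide pathway.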
Automatically Updating a Dynamic Region Connection Calculus for Topological Reasoning
Proceedings of: Workshop on User-Centric Technologies and Applications (CONTEXTS 2011), Salamanca, Spain, April 6-8, 2011. In recent years, ontology-based applications have been designed without taking into account their limitations in terms of upgradeability. In parallel, new capabilities such as topological sorting of instances with spatial characteristics have been developed. Together, these facts may lead to a collapse in the operational capacity of this kind of application. This paper presents an ontology-centric architecture that solves the topological relationships between spatial objects automatically. The capability for automatic assertion is given by an object model based on geometries. The object model prioritizes optimization by using a dynamic spatial data structure. The ultimate goal of this architecture is the automatic storage of spatial relationships without a noticeable loss of efficiency. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
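To illustrate the kind of topological relations such an architecture must assert automatically, the following sketch classifies the Region Connection Calculus (RCC8) relation between two circular regions. The toy disc geometry is our own simplification for illustration; it is not the object model described in the paper, which handles arbitrary geometries.

```python
import math

# RCC8 relation between two closed discs A = (center_a, r_a), B = (center_b, r_b).
# A minimal sketch: real spatial reasoners compute these over arbitrary shapes.
def rcc8(center_a, r_a, center_b, r_b):
    d = math.dist(center_a, center_b)   # distance between centers
    if d == 0 and r_a == r_b:
        return "EQ"      # equal regions
    if d + r_a < r_b:
        return "NTPP"    # A strictly inside B (non-tangential proper part)
    if d + r_a == r_b:
        return "TPP"     # A inside B, touching its boundary
    if d + r_b < r_a:
        return "NTPPi"   # B strictly inside A
    if d + r_b == r_a:
        return "TPPi"    # B inside A, touching its boundary
    if d > r_a + r_b:
        return "DC"      # disconnected
    if d == r_a + r_b:
        return "EC"      # externally connected (boundaries touch)
    return "PO"          # partially overlapping
```

For instance, two unit discs centered at (0, 0) and (3, 0) are classified as "DC", while discs at (0, 0) and (2, 0) touch and yield "EC".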
Interactive Video Annotation Tool
Proceedings of: Fourth International Workshop on User-Centric Technologies and Applications (CONTEXTS 2010), Valencia, Spain, September 7-10, 2010. Abstract: The computer vision discipline increasingly needs annotated video databases to carry out assessment tasks. Manually providing ground-truth data for multimedia resources is very expensive in terms of effort, time and economic resources. Automatic and semi-automatic video annotation and labelling is the faster and more economical way to obtain ground truth for large video collections. In this paper, we describe a new automatic and supervised video annotation tool. The annotation tool is a modified version of the ViPER-GT tool. The standard version of ViPER-GT allows manually editing and reviewing video metadata to generate assessment data. Automatic annotation is possible thanks to an incorporated tracking system that can handle the visual data association problem in real time. The aim of this research is to offer a system that enables valid assessment models to be produced in less time.
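A minimal sketch of the data-association step such a tracking system must perform, matching new detections to existing tracks greedily by bounding-box overlap (intersection over union). This is a generic illustration under our own assumptions, not the algorithm used in the tool described above.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedy one-to-one matching: take the highest-IoU pairs first,
    skipping any track or detection that is already matched."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)),
                   reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score >= threshold and ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches  # list of (track_index, detection_index)
```

Detections left unmatched would typically spawn new tracks, and tracks left unmatched for several frames would be dropped.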
A General Purpose Context Reasoning Environment to Deal with Tracking Problems: An Ontology-based Prototype
Proceedings of: 6th International Conference on Hybrid Artificial Intelligence Systems (HAIS 2011), Wroclaw, Poland, May 23-25, 2011. The high complexity of extracting semantics through automatic video analysis has pushed researchers towards mixed approaches based on perceptual and context data. These mixed approaches do not usually take into account the advantages and benefits of the data fusion discipline. This paper presents a context reasoning environment to deal with general and specific tracking problems. The cornerstone of the environment is a symbolic architecture based on the Joint Directors of Laboratories fusion model. This architecture can build a symbolic data representation from any source, check data consistency, create new knowledge and refine it through inference, obtaining a higher-level understanding of the scene and providing feedback to correct tracking errors automatically. An ontology-based prototype has been developed to carry out experimental tests, and has been evaluated against occlusion problems in tracking analysis. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
Data fusion to improve trajectory tracking in a Cooperative Surveillance Multi-Agent Architecture
13 pages, 12 figures. In this paper we present a Cooperative Surveillance Multi-Agent System (CS-MAS) architecture extended to incorporate dynamic coalition formation. We illustrate specific coalition formation using fusion skills. In this case, the fusion process is divided into two layers: (i) a global layer in the fusion center, which initializes the coalitions, and (ii) a local layer within coalitions, where a local fusion agent is dynamically instantiated. There are several types of autonomous agent: surveillance-sensor agents, a fusion center agent, a local fusion agent, interface agents, record agents, planning agents, etc. Autonomous agents differ in their ability to carry out a specific surveillance task. A surveillance-sensor agent controls and manages individual sensors (usually video cameras). It has different capabilities depending on its functional complexity and limitations related to sensor-specific aspects. In the work presented here we add a new autonomous agent, called the local fusion agent, to the CS-MAS architecture, addressing specific problems of on-line sensor alignment, registration, bias removal and data fusion. The local fusion agent is dynamically created by the fusion center agent and involves several surveillance-sensor agents working in a coalition. We show how the inclusion of this new dynamic local fusion agent guarantees that, in a video-surveillance system, objects of interest are successfully tracked across the whole area, ensuring continuity and seamless transitions. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
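One common way a local fusion agent can combine estimates coming from several surveillance-sensor agents is inverse-variance weighting, sketched below. This is a standard textbook technique offered as an illustration; the abstract does not specify the paper's actual fusion rule.

```python
def fuse(estimates):
    """Fuse scalar estimates [(value, variance), ...] by inverse-variance
    weighting: sensors with lower variance (more certainty) contribute more.
    Returns the fused value and its combined (reduced) variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total
```

Two equally reliable sensors simply average, and the fused variance halves; a noisier sensor is automatically down-weighted rather than discarded outright.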
A Structured Representation to the Group Behavior Recognition Issue
Proceedings of: Workshop on User-Centric Technologies and Applications (CONTEXTS 2011), Salamanca, Spain, April 6-8, 2011. Behavior recognition has been one of the most prolific lines of research in computer vision in recent decades. Within this field, most research has focused on recognizing the activities carried out by a single individual; this paper, however, deals with the problem of recognizing the behavior of a group of individuals, in which the relations between the component elements are of great importance. For this purpose, we propose a new representation that concentrates all the necessary information concerning the peer-to-peer relations present in the group, the semantics of the different groups formed by individuals, and the formation (or structure) of each of them. The work is evaluated on the CVBASE06 dataset, which deals with European handball. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
A Feature Selection Approach to the Group Behavior Recognition Issue Using Static Context Information
This paper deals with the problem of group behavior recognition. Our approach is to merge all the possible features of group behavior (individuals, groups, relationships between individuals, relationships between groups, etc.) with static context information relating to particular domains. All this information is represented as a set of features for classification algorithms. This is a very high-dimensional problem, with which classification algorithms are unable to cope. For this reason, this paper also presents four feature selection alternatives: two wrappers and two filters. We present and compare the results of each method in the basketball domain. This work was supported in part by Projects MEyC TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).
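To illustrate what a filter-style alternative looks like, the sketch below ranks feature columns by the absolute value of their Pearson correlation with the class label and keeps the top k. This is a generic example of the filter family, not one of the four methods evaluated in the paper.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def filter_select(feature_columns, labels, k):
    """Filter method: score each column by |correlation with labels|
    independently of any classifier, and keep the k best column indices."""
    scores = [(abs(pearson(col, labels)), i)
              for i, col in enumerate(feature_columns)]
    scores.sort(reverse=True)
    return sorted(i for _, i in scores[:k])
```

Unlike a wrapper, which repeatedly retrains the classifier on candidate subsets, a filter like this scores each feature once, which is what makes it tractable in very high-dimensional settings.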
A multi-agent architecture to support active fusion in a visual sensor network
8 pages, 12 figures.-- Contributed to: Second ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC 2008), Stanford, California, US, September 7-11, 2008. One of the main characteristics of a visual sensor network environment is the large amount of data generated. In addition, processes such as object tracking generate a highly noisy output which may produce an inconsistent system output; by inconsistent output we mean large differences between the tracking information provided by the visual sensors. A visual sensor network with overlapping fields of view can exploit the redundancy between the fields of view of the visual sensors to avoid inconsistencies and obtain more accurate results. In this paper, we present a visual sensor network system with overlapping fields of view, modeled as a network of software agents. The communication between software agents allows the use of feedback information in the visual sensors, called active fusion. Results of an evaluation of the software architecture supporting the active fusion scheme in an indoor scenario are presented. This work was supported in part by Projects MADRINET, TEC2005-07186-C03-02, SINPROB, TSI2005-07344-C02-02.
Analysis of distributed fusion alternatives in coordinated vision agents
6 pages, 10 figures.-- Contributed to: 11th International Conference on Information Fusion (FUSION 2008), Cologne, Germany, June 30-July 3, 2008. In this paper, we detail some technical alternatives for building a coherent distributed visual sensor network using the multi-agent paradigm. We argue that the multi-agent paradigm fits well within the visual sensor network architecture, and we focus especially on the problem of distributed data fusion. Three different data fusion coordination schemes are proposed, and experimental results of passive fusion are presented and discussed. The main contributions of this paper are twofold: first, we propose the multi-agent paradigm as the visual sensor architecture and present results from a real system; second, we propose the use of feedback information in the visual sensors, called active fusion. The experimental results show that the multi-agent paradigm fits well within the visual sensor network and provides a novel mechanism for developing a real visual sensor network system. This work was partially supported by projects MADRINET, TEC2005-07186-C03-02, SINPROB, TSI2005-07344-C02-02.
A multi-agent architecture based on the BDI model for data fusion in visual sensor networks
30 pages, 18 figures.-- Article in press. The newest surveillance applications attempt more complex tasks such as the analysis of the behavior of individuals and crowds. These complex tasks may use a distributed visual sensor network in order to gain coverage and exploit the inherent redundancy of the overlapping fields of view. This article presents a multi-agent architecture based on the Belief-Desire-Intention (BDI) model for processing information and fusing data in a distributed visual sensor network. Instead of exchanging raw images between the agents involved in the visual network, local signal processing is performed and only the key observed features are shared. After a registration or calibration phase, the proposed architecture performs tracking, data fusion and coordination. Using the proposed multi-agent architecture, we focus on the means of fusing the estimated positions on the ground plane from different agents that refer to the same object. This fusion process is used for two different purposes: (1) to obtain continuity in tracking across the fields of view of the cameras involved in the distributed network, and (2) to improve the quality of the tracking by means of data fusion techniques and by discarding non-reliable sensors. Experimental results in two different scenarios show that the designed architecture can successfully track an object even when occlusions or sensor errors take place. Sensor errors are reduced by exploiting the inherent redundancy of a visual sensor network with overlapping fields of view. This work was partially supported by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
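The idea of discarding non-reliable sensors before fusing ground-plane positions can be sketched as a simple median-gated average: reject any estimate too far from the component-wise median, then average the survivors. This is a generic robust-fusion illustration under our own assumptions, not the paper's exact method.

```python
def robust_fuse(positions, gate=2.0):
    """Fuse 2-D ground-plane estimates [(x, y), ...]: discard any estimate
    farther than `gate` from the component-wise median, average the rest."""
    def median(vals):
        s = sorted(vals)
        m = len(s) // 2
        return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2
    mx = median([p[0] for p in positions])
    my = median([p[1] for p in positions])
    kept = [p for p in positions
            if ((p[0] - mx) ** 2 + (p[1] - my) ** 2) ** 0.5 <= gate]
    n = len(kept)
    return (sum(p[0] for p in kept) / n,
            sum(p[1] for p in kept) / n)
```

The median gate makes the fused position insensitive to a single grossly wrong sensor (for example, one camera mis-tracking during an occlusion), which a plain average would not be.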
- …