
    Fast model predictive control for hydrogen outflow regulation in ethanol steam reformers

    © 20xx IEEE. Personal use of this material is permitted; permission from IEEE must be obtained for all other uses.
    In recent years, the presence of alternative power sources, such as solar panels, wind farms, hydro pumps and hydrogen-based devices, has increased significantly. The reasons for this trend are clear: reducing gas emissions and dependency on fossil fuels. Hydrogen-based devices are of particular interest due to their efficiency and reliability, and reforming technologies are among the most economical and efficient ways of producing hydrogen. In this paper we consider the regulation of hydrogen outflow in an ethanol steam reformer (ESR). In particular, a fast model predictive control approach based on a finite step response model of the process is proposed. Simulations performed with a more realistic non-linear model show the effectiveness of the proposed approach in driving the ESR to different operating conditions while fulfilling input and output constraints.
    Peer reviewed. Postprint (author's final draft).
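
    The abstract gives no code; as a rough illustration of what a predictive controller built from a finite step response model looks like, here is a minimal unconstrained DMC-style sketch. All names, horizons and tuning values below are invented placeholders, and the constraint handling mentioned in the abstract is omitted.

```python
# Minimal sketch of step-response-based predictive control (unconstrained,
# DMC-style). NOT the paper's controller: the model, horizons and tuning
# below are invented placeholders, and constraint handling is omitted.
import numpy as np

def dmc_move(s, du_past, y_meas, y_ref, P=10, M=3, lam=0.1):
    """Next input increment for a SISO plant described by its step-response
    coefficients s[0..N-1] (assumed settled at s[-1]).
    du_past lists past input increments, most recent first."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    sat = lambda i: s[min(i, N) - 1]           # saturate the step index at N

    # Free response: current measurement plus the not-yet-settled effect
    # of past input increments (standard DMC bias correction).
    y_free = np.full(P, float(y_meas))
    for i in range(1, P + 1):
        y_free[i - 1] += sum((sat(i + j) - sat(j)) * du
                             for j, du in enumerate(du_past, start=1))

    # Dynamic matrix: A[i, j] = s_{i-j+1} (effect of future move j on output i).
    A = np.zeros((P, M))
    for i in range(P):
        for j in range(min(i + 1, M)):
            A[i, j] = s[min(i - j, N - 1)]

    e = np.full(P, float(y_ref)) - y_free      # predicted tracking error
    du = np.linalg.solve(A.T @ A + lam * np.eye(M), A.T @ e)
    return du[0]                               # receding horizon: apply first move

# Toy usage: first-order step response with gain 2 and a 5-sample time constant.
s = 2.0 * (1.0 - np.exp(-np.arange(1, 31) / 5.0))
print(dmc_move(s, du_past=[0.1, 0.05], y_meas=1.2, y_ref=1.5))
```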

    A semantic-based probabilistic approach for real-time video event recognition

    This is the author's version of a work that was accepted for publication in Computer Vision and Image Understanding; a definitive version was subsequently published in Computer Vision and Image Understanding, 116(9), 2012, DOI: 10.1016/j.cviu.2012.04.005.
    This paper presents an approach for real-time video event recognition that combines the accuracy of probabilistic approaches with the descriptive capabilities of semantic approaches. Based on a state-of-the-art knowledge representation, we define a methodology for building recognition strategies from event descriptions that takes into account the uncertainty of the low-level analysis. We then organize these strategies efficiently so that recognition is performed according to the temporal characteristics of the events. In particular, we use Bayesian Networks to recognize simple events and probabilistically extended Petri Nets to recognize complex events. To demonstrate the proposed approach, a framework has been implemented for recognizing human-object interactions in the video monitoring domain. The experimental results show that our approach improves event recognition performance compared to the widely used deterministic approach.
    This work has been partially supported by the Spanish Administration agency CDTI (CENIT-VISION 2007-1007), by the Spanish Government (TEC2011-25995 EventVideo), by the Consejería de Educación of the Comunidad de Madrid and by the European Social Fund.
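
    The paper's models are not reproduced here; the sketch below only illustrates the general idea of scoring a "simple event" from uncertain low-level evidence with a naive-Bayes-structured Bayesian network. The event, priors and likelihoods are invented for illustration, and the Petri-net layer for complex events is not shown.

```python
# Illustrative only: a naive-Bayes-structured Bayesian network scoring one
# "simple event" from uncertain low-level detections. Priors and likelihoods
# are made-up placeholders, not the paper's learned models.

def simple_event_posterior(prior, likelihoods, observations):
    """P(event | observations) for a binary event with conditionally
    independent evidence nodes.
    prior        : P(event = True)
    likelihoods  : {name: (P(obs=True | event=True), P(obs=True | event=False))}
    observations : {name: bool} produced by the low-level analysis
    """
    p_true, p_false = prior, 1.0 - prior
    for name, seen in observations.items():
        l_t, l_f = likelihoods[name]
        p_true *= l_t if seen else (1.0 - l_t)
        p_false *= l_f if seen else (1.0 - l_f)
    return p_true / (p_true + p_false)

# Toy "person picks up object" event from two noisy cues.
print(simple_event_posterior(
    prior=0.2,
    likelihoods={"person_near_object": (0.9, 0.3), "object_displaced": (0.8, 0.1)},
    observations={"person_near_object": True, "object_displaced": True},
))
```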

    A semantic-guided and self-configurable framework for video analysis

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00138-011-0397-x.
    This paper presents a distributed and scalable framework for video analysis that automatically estimates the optimal workflow required to analyze different application domains. It integrates several technologies related to data acquisition, visual analysis tools, communication protocols, and data storage. Moreover, hierarchical semantic representations are included in the framework to describe the application domain, the analysis capabilities, and the user preferences. The analysis workflow is determined automatically by selecting, for each domain, the most appropriate of the tools available in the framework, exploiting the relations between the semantic descriptions. Experimental results in the video surveillance domain demonstrate that the proposed approach successfully composes optimal workflows for video analysis applications.
    This work has been partially supported by the Spanish Government (TEC2011-25995), by the Consejería de Educación of the Comunidad de Madrid and by the European Social Fund.
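
    The framework's ontology is not reproduced here; the following toy sketch only illustrates the general mechanism of composing a workflow by matching domain requirements against declared tool capabilities. The stage names, tool registry and scoring rule are invented placeholders.

```python
# Toy capability matching for workflow composition; stage names, tools and
# scoring are invented placeholders, not the framework's semantic model.

TOOLS = {
    "motion_segmentation": [
        {"name": "frame_differencing", "capabilities": {"moving_camera": False, "shadows": False}},
        {"name": "mog_background_subtraction", "capabilities": {"moving_camera": False, "shadows": True}},
    ],
    "object_classification": [
        {"name": "blob_size_classifier", "capabilities": {"classes": {"person", "vehicle"}}},
    ],
}

def compose_workflow(domain_requirements):
    """Pick, per analysis stage, the tool whose declared capabilities satisfy
    the most domain requirements (a simple count stands in for semantic reasoning)."""
    workflow = []
    for stage, reqs in domain_requirements.items():
        def score(tool):
            caps = tool["capabilities"]
            return sum(1 for k, v in reqs.items()
                       if k in caps and (caps[k] == v or (isinstance(caps[k], set) and v <= caps[k])))
        workflow.append((stage, max(TOOLS[stage], key=score)["name"]))
    return workflow

print(compose_workflow({
    "motion_segmentation": {"moving_camera": False, "shadows": True},
    "object_classification": {"classes": {"person"}},
}))
```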

    Attitudes of students of a health sciences university towards the extension of smoke-free policies at the university campuses of Barcelona

    OBJECTIVE: To assess attitudes towards the extension of outdoor smoke-free areas on university campuses. METHODS: Cross-sectional study (n=384) conducted using a questionnaire administered to medical and nursing students in Barcelona in 2014. Information was obtained on support for indoor and outdoor smoking bans on university campuses and on the importance of acting as role models. Logistic regression analyses were performed to examine agreement. RESULTS: Most students agreed on the importance of health professionals and students as role models (74.9% and 64.1%, respectively), although there were statistically significant differences by smoking status and age. Exposure to smoke on campus was reported by 90% of students. Students expressed strong support for indoor smoke-free policies (97.9%), but only 39.3% of participants supported the regulation of outdoor smoking on university campuses. Non-smokers (OR=12.315; 95% CI: 5.377-28.204) and students ≥22 years old (OR=3.001; 95% CI: 1.439-6.257) were the strongest supporters. CONCLUSIONS: The students supported indoor smoke-free policies for universities. However, support for extending smoke-free regulations to outdoor areas of university campuses was limited. It is necessary to educate students about tobacco control and emphasise their importance as role models before extending outdoor smoke-free legislation to university campuses.
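
    As a reminder of how the reported figures relate to a logistic-regression fit, an odds ratio and its 95% confidence interval follow from a coefficient and its standard error as OR = exp(beta) and CI = exp(beta ± 1.96·SE). The coefficient and standard error in the sketch below are back-calculated from the interval reported for non-smokers, purely for illustration, and are not taken from the study's data.

```python
# Odds ratio and 95% CI from a logistic-regression coefficient; beta and se
# are back-calculated from the interval reported for non-smokers (illustrative).
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Return the odds ratio and its 95% confidence interval."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

print(odds_ratio_ci(2.511, 0.423))   # ~12.3 (5.4 - 28.2), consistent with the abstract
```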

    On the effect of motion segmentation techniques in description based adaptive video transmission

    Personal use of this material is permitted; permission from IEEE must be obtained for all other uses. J. C. San Miguel and J. M. Martínez, "On the effect of motion segmentation techniques in description based adaptive video transmission", in Proceedings of the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS '07), 2007, pp. 359-364.
    This paper presents the results of analyzing the effect of different motion segmentation techniques in a system that transmits the information captured by a static surveillance camera in an adaptive way, based on the on-line generation of descriptions and their transmission at different levels of detail. The video sequences are analyzed to detect the regions of activity (motion analysis) and to differentiate them from the background, and the corresponding descriptions (mainly MPEG-7 moving regions) are generated together with the textures of the moving regions and the associated background image. Depending on the available bandwidth, different levels of transmission are specified, ranging from sending only the generated descriptions to transmitting all the associated images corresponding to the moving objects and the background. We study the effect of three motion segmentation algorithms on several aspects, such as segmentation accuracy, size of the generated descriptions, computational efficiency, and quality of the reconstructed data.
    This work is partially supported by the Cátedra Infoglobal-UAM para Nuevas Tecnologías de video aplicadas a la seguridad. It is also supported by the Ministerio de Ciencia y Tecnología of the Spanish Government under project TIN2004-07860 (MEDUSA) and by the Comunidad de Madrid under project P-TIC-0223-0505 (PROMULTIDIS).
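
    The three motion segmentation algorithms compared in the paper are not specified in the abstract; as a generic baseline of the kind of activity-region detection being compared, here is a minimal running-average background-difference sketch. The threshold and learning rate are arbitrary illustrative values.

```python
# Generic baseline for activity-region detection, not one of the algorithms
# compared in the paper: running-average background model with a thresholded
# absolute difference. Threshold and learning rate are arbitrary.
import numpy as np

def update_and_segment(frame, background, threshold=25.0, alpha=0.02):
    """frame, background: grayscale float arrays of equal shape.
    Returns (binary activity mask, updated background model)."""
    mask = (np.abs(frame - background) > threshold).astype(np.uint8)
    background = (1.0 - alpha) * background + alpha * frame   # slow adaptation
    return mask, background
```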

    Shadow detection in video surveillance by maximizing agreement between independent detectors

    Personal use of this material is permitted; permission from IEEE must be obtained for all other uses. J. C. SanMiguel and J. M. Martínez, "Shadow detection in video surveillance by maximizing agreement between independent detectors", in Proceedings of the 16th IEEE International Conference on Image Processing (ICIP 2009), pp. 1141-1144.
    This paper starts from the idea of automatically choosing the appropriate thresholds for a shadow detection algorithm, based on maximizing the agreement between two independent shadow detectors without training data. This shadow detection algorithm is first described and then adapted to the analysis of video surveillance sequences. Some modifications are introduced to increase its robustness in generic surveillance scenarios and to reduce its overall computational cost, which is critical in some video surveillance applications. Experimental results show that the proposed modifications increase detection reliability compared to previous shadow detection algorithms and perform consistently well across a variety of surveillance scenarios.
    Work supported by the Spanish Government (TEC2007-65400 SemanticVideo), by the Cátedra Infoglobal-UAM for "Nuevas Tecnologías de video aplicadas a la seguridad", by the Spanish Administration agency CDTI (CENIT-VISION 2007-1007), by the Comunidad de Madrid (S-050/TIC-0223 - ProMultiDis), by the Consejería de Educación of the Comunidad de Madrid and by the European Social Fund.
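
    The paper's two detectors and its exact agreement criterion are not reproduced here; the sketch below only shows the generic idea of picking thresholds by maximizing how often two independent detectors agree, using crude placeholder detectors.

```python
# Generic illustration of threshold selection by maximizing the agreement of
# two independent shadow detectors; detector_a/detector_b are placeholders,
# not the paper's detectors.
import itertools
import numpy as np

def best_thresholds(frames, detector_a, detector_b, grid_a, grid_b):
    """Each detector(frame, threshold) returns a boolean shadow mask.
    Returns the (ta, tb) pair whose masks agree on the largest pixel fraction."""
    def agreement(ta, tb):
        scores = [np.mean(detector_a(f, ta) == detector_b(f, tb)) for f in frames]
        return float(np.mean(scores))
    return max(itertools.product(grid_a, grid_b), key=lambda p: agreement(*p))

# Toy usage with two crude luminance-based "detectors" on random frames.
rng = np.random.default_rng(0)
frames = [rng.random((40, 40)) for _ in range(3)]
da = lambda f, t: f < t            # "darker than t" taken as shadow
db = lambda f, t: f * 0.9 < t      # slightly different darkness cue
print(best_thresholds(frames, da, db,
                      np.linspace(0.2, 0.8, 7), np.linspace(0.2, 0.8, 7)))
```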

    Robust unattended and stolen object detection by fusing simple algorithms

    Personal use of this material is permitted; permission from IEEE must be obtained for all other uses. J. C. San Miguel and J. M. Martínez, "Robust unattended and stolen object detection by fusing simple algorithms", in Proceedings of the Fifth IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '08), 2008, pp. 18-25.
    In this paper, a new approach for detecting unattended or stolen objects in surveillance video is proposed. It is based on the fusion of evidence provided by three simple detectors. As a first step, the moving regions in the scene are detected and tracked. These regions are then classified as static or dynamic objects and as human or non-human objects. Finally, objects detected as static and non-human are analyzed with each detector, and the data from the detectors are fused to select the best detection hypotheses. Experimental results show that the fusion-based approach increases detection reliability compared to the individual detectors and performs consistently well across a variety of scenarios while operating in real time.
    This work is supported by the Cátedra Infoglobal-UAM for "Nuevas Tecnologías de video aplicadas a la seguridad", by the Spanish Government (TEC2007-65400 SemanticVideo), by the Comunidad de Madrid (S-050/TIC-0223 - ProMultiDis-CM), by the Consejería de Educación of the Comunidad de Madrid and by the European Social Fund.
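
    The three detectors and the actual fusion rule are not given in the abstract; the sketch below only illustrates one simple way to fuse per-detector confidences for a static, non-human region into an unattended/stolen decision. The detector names, weights and margin are invented placeholders.

```python
# Illustrative fusion of per-detector confidences for a static, non-human
# region; detector names, weights and margin are invented placeholders,
# not the paper's fusion rule.

def fuse_static_region(confidences, weights=None, min_margin=0.2):
    """confidences: {detector_name: score in [-1, 1]}; positive scores favour
    "unattended", negative scores favour "stolen". Returns (label, fused score)."""
    weights = weights or {name: 1.0 for name in confidences}
    fused = sum(weights[n] * c for n, c in confidences.items()) / sum(weights.values())
    if abs(fused) < min_margin:
        return "uncertain", fused
    return ("unattended" if fused > 0 else "stolen"), fused

print(fuse_static_region({"edge_energy": 0.6, "color_histogram": 0.2, "region_growing": -0.1}))
```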

    On-line adaptive video sequence transmission based on generation and transmission of descriptions

    Proceedings of the 26th Picture Coding Symposium (PCS 2007), Lisbon, Portugal, November 2007.
    This paper presents a system that transmits the information from a static surveillance camera in an adaptive way, from low to higher bit rates, based on the on-line generation of descriptions. The proposed system is based on a server/client model: the server is placed in the surveillance area and the client on the user side. The server analyzes the video sequence to detect the regions of activity (motion analysis), and the corresponding descriptions (mainly MPEG-7 moving regions) are generated together with the textures of the moving regions and the associated background image. Depending on the available bandwidth, different levels of transmission are specified, ranging from sending only the generated descriptions to transmitting all the associated images corresponding to the moving objects and the background.
    This work is partially supported by the Cátedra Infoglobal-UAM para Nuevas Tecnologías de video aplicadas a la seguridad. It is also supported by the Ministerio de Ciencia y Tecnología of the Spanish Government under project TIN2004-07860 (MEDUSA) and by the Comunidad de Madrid under project P-TIC-0223-0505 (PROMULTIDIS).
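
    As a toy illustration of the "different levels of transmission depending on the available bandwidth" idea, the sketch below picks the richest payload that fits an estimated bit budget. The level names and size estimates are invented, not the actual protocol of the described system.

```python
# Toy bandwidth-dependent level selection: pick the richest payload that fits
# the current bit budget. Level names and sizes are invented placeholders.

LEVELS = [                       # ordered from cheapest to richest
    ("descriptions_only", lambda d: d["desc_bits"]),
    ("descriptions_plus_object_textures", lambda d: d["desc_bits"] + d["object_bits"]),
    ("full", lambda d: d["desc_bits"] + d["object_bits"] + d["background_bits"]),
]

def choose_level(frame_data, available_bits):
    """Return the richest level whose estimated size fits the budget,
    falling back to descriptions only if nothing else fits."""
    chosen = LEVELS[0][0]
    for name, size in LEVELS:
        if size(frame_data) <= available_bits:
            chosen = name
    return chosen

print(choose_level({"desc_bits": 4_000, "object_bits": 60_000, "background_bits": 300_000},
                   available_bits=80_000))   # -> descriptions_plus_object_textures
```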

    On the evaluation of background subtraction algorithms without ground-truth

    Personal use of this material is permitted; permission from IEEE must be obtained for all other uses. J. C. San Miguel and J. M. Martínez, "On the evaluation of background subtraction algorithms without ground-truth", in Proceedings of the 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2013, pp. 180-187.
    In video-surveillance systems, the moving-object segmentation stage (commonly based on background subtraction) has to deal with several issues such as noise, shadows and multimodal backgrounds, so failures are inevitable and automatic evaluation is a desirable requirement for online analysis. In this paper, we propose a hierarchy of existing performance measures for video object segmentation that do not rely on ground truth. Four measures based on color and motion are then selected and examined in detail with different segmentation algorithms and standard test sequences for video object segmentation. Experimental results show that color-based measures perform better than motion-based measures and that background multimodality heavily reduces the accuracy of all the evaluation results obtained.
    This work is partially supported by the Spanish Government (TEC2007-65400 SemanticVideo), by the Cátedra Infoglobal-UAM for "Nuevas Tecnologías de video aplicadas a la seguridad", by the Consejería de Educación of the Comunidad de Madrid and by the European Social Fund.
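
    The paper's four color- and motion-based measures are not reproduced here; as a generic example of a ground-truth-free, color-based cue, the sketch below scores a segmentation by how much the pixels labelled foreground deviate from the background model (stronger deviation suggesting a more plausible foreground). It is a stand-in illustration, not one of the evaluated measures.

```python
# Generic ground-truth-free, color-based cue (a stand-in, not one of the
# paper's measures): deviation of foreground-labelled pixels from the
# background model, normalised to [0, 1].
import numpy as np

def color_deviation_score(frame, background, mask, max_dev=255.0):
    """frame, background: HxWx3 float arrays; mask: HxW boolean foreground mask.
    Returns the mean per-channel absolute deviation of foreground pixels from
    the background model, scaled to [0, 1] (0 = indistinguishable)."""
    if not mask.any():
        return 0.0                        # empty mask: nothing to assess
    dev = np.abs(frame[mask] - background[mask]).mean()
    return float(min(dev / max_dev, 1.0))
```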