
    New approach to calculating the fundamental matrix

    Estimating the fundamental matrix (F) determines the epipolar geometry and establishes the geometric relation between two images of the same scene or between consecutive video frames. The literature offers many techniques for robust estimation, such as RANSAC (random sample consensus), least median of squares (LMedS), and M-estimators. This article compares several feature detectors (Harris, FAST, SIFT, and SURF) in terms of the number of detected points, the number of correct matches, and the speed of computing F. Our method first extracts descriptors with SURF, chosen over the other detectors for its robustness; it then applies a uniqueness threshold to retain the best points, normalizes them, and ranks them according to a weighting function over the different image regions; finally, F is estimated with an eight-point M-estimator, and the average error and the computation time of F are measured. Experiments on real images with different viewpoint changes (for example rotation, lighting changes, and moving objects) give good results in terms of the speed of computing the fundamental matrix and an acceptable average error, suggesting that the technique is suitable for real-time applications.
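    As a rough illustration of the pipeline described above (SURF descriptors, a uniqueness threshold on the matches, and a robust estimate of F), the following Python/OpenCV sketch substitutes Lowe's ratio test for the uniqueness threshold and OpenCV's LMedS estimator for the eight-point M-estimator, since the paper's exact weighting scheme is not given; the image paths and thresholds are placeholders.

```python
# Minimal sketch, not the authors' implementation: SURF matching plus a robust
# estimate of the fundamental matrix F. Requires opencv-contrib-python built
# with the non-free modules for SURF.
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Lowe's ratio test as a stand-in for the paper's "uniqueness threshold".
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Robust estimation of F (LMedS here; the paper uses an eight-point M-estimator).
# findFundamentalMat normalizes the points internally.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)

# Mean algebraic epipolar error of the inliers as a simple quality measure.
in1 = pts1[inlier_mask.ravel() == 1]
in2 = pts2[inlier_mask.ravel() == 1]
h1 = cv2.convertPointsToHomogeneous(in1).reshape(-1, 3)
h2 = cv2.convertPointsToHomogeneous(in2).reshape(-1, 3)
errors = np.abs(np.sum(h2 * (F @ h1.T).T, axis=1))
print(len(good), "matches,", len(in1), "inliers, mean error", errors.mean())
```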

    Pedestrian Detection by Computer Vision.

    This document describes work aimed at determining whether the detection, by computer vision, of pedestrians waiting at signal-controlled road crossings could be made sufficiently reliable and affordable, using currently available technology, to be suitable for widespread use in traffic control systems. The work starts by examining the need for pedestrian detection in traffic control systems and then goes on to look at the specific problems of applying a vision system to the detection task. The most important distinctive features of the pedestrian detection task addressed in this work are:
    • The operating conditions are an outdoor environment with no constraints on factors such as variation in illumination, presence of shadows and the effects of adverse weather.
    • Pedestrians may be moving or static and are not limited to certain orientations or to movement in a single direction.
    • The number of pedestrians to be monitored is not restricted, such that the vision system must cope with the monitoring of multiple targets concurrently.
    • The background scene is complex and so contains image features that tend to distract a vision system from the successful detection of pedestrians.
    • Pedestrian attire is unconstrained, so detection must occur even when details of pedestrian shape are hidden by items such as coats and hats.
    • The camera's position is such that assumptions commonly used by vision systems to avoid the effects of occlusion, perspective and viewpoint variation are not valid.
    • The implementation cost of the system, in moderate volumes, must be realistic for widespread installation.
    A review of relevant prior art in computer vision with respect to the above demands is presented. Thereafter, techniques developed by the author to overcome these difficulties are developed and evaluated over an extensive test set of image sequences representative of the range of conditions found in the real world. The work has resulted in the development of a vision system which has been shown to attain a useful level of performance under a wide range of environmental and transportation conditions. This was achieved, in real time, using low-cost processing and sensor components, demonstrating the viability of developing the results of this work into a practical detector.

    Factors affecting blind mobility

    This thesis contains a survey of the mobility problems of blind people, experimental analysis and evaluation of these problems, and suggestions for ways in which the evaluation of mobility performance and the design of mobility aids may be improved. The survey revealed a low level of mobility among blind people, with no significant improvement since a comparable survey in 1967. A group of self-taught cane users was identified and their mobility was shown to be poor or potentially dangerous. Existing measures of mobility were unable to detect improvements in performance above that achieved by competent long-cane users. By using newly devised measures of environmental awareness and of gait, the advantages of the Sonic Pathfinder were demonstrated. Existing measures of psychological stress were unsatisfactory. Heart rate is affected by physical effort and has been shown to be a poor indicator of moment-to-moment stress in blind mobility. Analysis of secondary-task errors showed that they occurred while obstacles were being negotiated; they did not measure stress due to anticipation of obstacles or of danger. In contrast, step length, stride time and particularly speed all show significant anticipatory effects. The energy expended in walking a given distance is least at the walker's preferred speed. When guided, blind people walk at this most efficient pace. It is therefore suggested that the ratio of actual to preferred speed is the best measure of efficiency in mobility. Both guide dogs and aids which enhance preview allow pedestrians to walk at, or close to, their preferred speed. Further experiments are needed to establish the extent to which psychological stress is present during blind mobility, since none of the conventional measures, such as heart rate and mood checklists, show consistent effects. Walking speed may well prove to be the most useful measure of such stress.

    A Context Aware Classification System for Monitoring Driver’s Distraction Levels

    Understanding the safety measures surrounding the development of futuristic self-driving cars is a concern for decision-makers, civil society, consumer groups, and manufacturers. Researchers are trying to thoroughly test and simulate various driving contexts to make these cars fully secure for road users. Including the vehicle's surroundings offers an ideal way to monitor context-aware situations and incorporate the various hazards. In this regard, different studies have analysed drivers' behaviour under different scenarios and scrutinised the external environment to obtain a holistic view of the vehicle and its environment. Studies show that the primary cause of road accidents is driver distraction, and there is a thin line separating the transition from careless to dangerous driving. While there has been significant improvement in advanced driver assistance systems, current measures detect neither the severity of the distraction nor the context-aware situation, both of which can aid in preventing accidents. Moreover, no single study provides a complete model for transitioning control from the driver to the vehicle when a high degree of distraction is detected. The current study proposes a context-aware severity model to detect safety issues related to driver distraction, considering physiological attributes, activities, and context-aware factors such as the environment and the vehicle. First, a novel three-phase Fast Recurrent Convolutional Neural Network (Fast-RCNN) architecture addresses the physiological attributes. Secondly, a novel two-tier FRCNN-LSTM framework is devised to classify the severity of driver distraction. Thirdly, a Dynamic Bayesian Network (DBN) is used to predict driver distraction. The study further proposes the Multiclass Driver Distraction Risk Assessment (MDDRA) model, which can be adopted in a context-aware driving distraction scenario. Finally, a three-way hybrid CNN-DBN-LSTM model is developed to classify the degree of driver distraction according to severity level. In addition, a Hidden Markov Driver Distraction Severity Model (HMDDSM) is proposed for transitioning control from the driver to the vehicle when a high degree of distraction is detected. This work tests and evaluates the proposed models using the multi-view TeleFOT naturalistic driving study data and the American University of Cairo dataset (AUCD). The developed models were evaluated using cross-correlation, hybrid cross-correlation, and K-fold validation. The results show that the technique effectively learns and adopts safety measures related to the severity of driver distraction. The results also show that while a driver is in a dangerously distracted state, control can be shifted from driver to vehicle in a systematic manner.
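    The two-tier idea behind the FRCNN-LSTM framework (per-frame visual features followed by a temporal model over the frame sequence, ending in a severity class) can be illustrated with a minimal PyTorch sketch. The plain convolutional backbone, the layer sizes, and the four severity classes below are assumptions for illustration only and do not reproduce the thesis's architecture or datasets.

```python
# Minimal sketch, assuming four severity classes and small input clips:
# a per-frame CNN feeds an LSTM that classifies the whole clip.
import torch
import torch.nn as nn

class DistractionSeverityNet(nn.Module):
    def __init__(self, num_classes=4, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Per-frame feature extractor (stand-in for the richer FRCNN backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Temporal model over the sequence of frame features.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)      # (batch, time, hidden)
        return self.head(out[:, -1])   # severity logits from the last step

# Example: a batch of 2 clips, 8 frames each, 64x64 RGB.
logits = DistractionSeverityNet()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```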

    Cooperative social robots: accompanying, guiding and interacting with people

    The development of social robots capable of interacting with humans is one of the principal challenges in the field of robotics. More and more, robots are appearing in dynamic environments, such as pedestrian walkways, universities, and hospitals; for this reason, their interaction with people must be conducted in a natural, gradual, and cordial manner, given that their function may be to aid or assist people. Navigation and interaction among humans in these environments are therefore key skills that future generations of robots will need to have. Additionally, robots must also be able to cooperate with each other when necessary. This dissertation examines these challenges and describes the development of a set of techniques that allow robots to interact naturally with people in their environments as they guide or accompany humans in urban zones. In this sense, the robots' movements are informed by people's actions and gestures, by the determination of an appropriate personal space, and by common social conventions. The first issue this thesis tackles is the development of an innovative robot-companion approach based on the newly formulated Extended Social-Forces Model. We evaluate how people navigate and formulate a set of virtual social forces that describe the robot's behavior in terms of motion. Moreover, we introduce an analytical robot-companion metric to evaluate the system effectively. This assessment is based on the notion of "proxemics" and ensures that the robot's navigation is socially acceptable to the person being accompanied, as well as to other pedestrians in the vicinity. Through a user study, we show that people interpret the robot's behavior according to human social norms. In addition, a new framework for guiding people in urban areas with a set of cooperative mobile robots is presented. The proposed approach offers several significant advantages compared with those outlined in prior studies. Firstly, it allows a group of people to be guided within both open and closed areas; secondly, it uses several cooperative robots; and thirdly, it includes features that enable the robots to keep people from leaving the group by approaching them in a friendly and safe manner. At the core of our approach we propose a "Discrete Time Motion" model, which represents human and robot motions and predicts people's movements in order to plan a route and provide the robots with concrete motion instructions. This thesis then goes one step further by developing the "Prediction and Anticipation Model". This model enables us to determine the optimal distribution of robots for preventing people from straying from the formation in specific areas of the map, and thus to facilitate the task of the robots. Furthermore, we locally optimize the work performed by robots and people alike, thereby yielding a more human-friendly motion. Finally, an autonomous mobile robot capable of interacting with people to acquire human-assisted learning is introduced. First, we present different robot behaviors for approaching a person and successfully engaging with him or her. On the basis of this insight, we furnish our robot with a simple visual module for detecting human faces in real time. We observe that people ascribe different personalities to the robot depending on its behaviors. Once contact is initiated, people are given the opportunity to assist the robot in improving its visual skills.
    After this assisted learning stage, the robot is able to detect people using the enhanced visual methods. Both contributions are extensively and rigorously tested in real environments. As a whole, this thesis demonstrates the need for robots that are able to operate acceptably around people and to behave in accordance with social norms while accompanying and guiding them. Furthermore, this work shows that cooperation among a group of robots optimizes the performance of robots and people alike.
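    The motion side of the robot-companion approach builds on social forces. As a minimal sketch of that general idea, the snippet below implements the classic social-force update (goal attraction plus exponential repulsion from nearby pedestrians); the parameters and the function name are illustrative and do not correspond to the thesis's Extended Social-Forces Model.

```python
# Minimal social-force sketch, assuming 2D positions in metres.
import numpy as np

def social_force(robot_pos, robot_vel, goal, people, desired_speed=1.0,
                 relax_time=0.5, A=2.0, B=1.0):
    """Goal attraction plus exponential repulsion from nearby pedestrians.
    A, B and relax_time are illustrative defaults, not thesis values."""
    # Attraction: relax the current velocity toward the desired velocity.
    to_goal = goal - robot_pos
    desired_vel = desired_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)
    force = (desired_vel - robot_vel) / relax_time
    # Repulsion: each person pushes the robot away, decaying with distance.
    for p in people:
        diff = robot_pos - p
        dist = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-dist / B) * diff / dist
    return force

# Example: robot at the origin heading to (5, 0) with one pedestrian nearby.
f = social_force(np.array([0.0, 0.0]), np.array([0.0, 0.0]),
                 np.array([5.0, 0.0]), [np.array([1.0, 0.5])])
print(f)
```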

    Generation of regions of interest with high potential to contain pedestrians using non-dense 3D reconstruction from monocular vision

    Traffic accidents are a global public health problem due to the high number of human victims and the elevated economic and social costs they generate. In this context, pedestrians are among the most important and vulnerable elements of the road scene that need to be protected. This work therefore presents an innovative proposal in which monocular visual information is used to emulate stereo vision in order to: i) generate regions of interest (ROIs) with a high likelihood of containing a pedestrian, and ii) estimate the trajectory of the vehicle. Experiments were carried out on a dataset of images taken in several streets of Santiago (Región Metropolitana), Chile, obtained using an experimental vehicle under real daytime driving conditions. The ROI detection rate is 86.6 % for distances under 20 meters, 82.9 % for distances under 30 meters, and 76.2 % for distances under 40 meters. This project was funded by the Comisión Nacional de Ciencia y Tecnología de Chile (Conicyt) through Fondecyt project No. 11060251, by the Universidad de las Fuerzas Armadas-ESPE through the Plan de Movilidad con Fines de Investigación (Orden Rectorado 2017-109-ESPE-d) and research project No. 2014-PIT-007, and by the company Tecnologías I&H.
    Zubiaguirre-Bergen, I.; Torres-Torriti, M.; Flores-Calero, M. (2018). Generación de Regiones con Potencial de Contener Peatones usando Reconstrucción 3D No Densa a partir de Visión Monocular. Revista Iberoamericana de Automática e Informática Industrial 15(3), 243-251. https://doi.org/10.4995/riai.2017.8825
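    The core of the approach, emulating stereo by using two consecutive monocular frames to obtain a non-dense 3D reconstruction, can be sketched as follows with Python/OpenCV. The intrinsic matrix, frame paths, and thresholds are placeholders, and the final grouping of the triangulated points into candidate pedestrian ROIs is only indicated in a comment.

```python
# Minimal sketch, assuming a calibrated camera: sparse 3D points from two
# consecutive monocular frames, which could then be clustered into ROIs.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],   # placeholder intrinsic matrix
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Track sparse corners between the two frames (non-dense reconstruction).
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
pts0 = p0[status.ravel() == 1].reshape(-1, 2)
pts1 = p1[status.ravel() == 1].reshape(-1, 2)

# Relative camera motion (vehicle ego-motion, up to scale) from the essential matrix.
E, mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, pose_mask = cv2.recoverPose(E, pts0, pts1, K, mask=mask)

# Triangulate the inlier tracks into sparse 3D points.
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([R, t])
in0 = pts0[pose_mask.ravel() > 0]
in1 = pts1[pose_mask.ravel() > 0]
X_h = cv2.triangulatePoints(P0, P1, in0.T, in1.T)
X = (X_h[:3] / X_h[3]).T  # N x 3; clustering these by depth/position would yield ROIs

print(X.shape[0], "sparse 3D points reconstructed")
```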

    Scientific, Technical, and Forensic Evidence

    Materials from the conference on Scientific, Technical, and Forensic Evidence held by UK/CLE in February 2002