Multi-Sensor Person Following in Low-Visibility Scenarios
Person following with mobile robots has long been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as smoky environments, optical sensors are not a good solution. This paper proposes and compares alternative sensors and methods to perform person following in low-visibility conditions, such as the smoky environments of firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that estimates the amount of smoke in the environment. The smoke detection algorithm gives the robot the ability to use a different combination of sensors for navigation and person following depending on the visibility in the environment.
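The visibility-dependent sensor switching described above can be sketched as follows. This is a minimal illustration, not the paper's method: the threshold value, the sensor names, and the `smoke_level` input (assumed to come from the vision-based smoke detector) are all hypothetical.

```python
def select_sensors(smoke_level, threshold=0.5):
    """Pick a sensor combination for person following from a visibility
    estimate (hypothetical API; threshold and names are illustrative).

    smoke_level: fraction of the image classified as smoke, in [0, 1].
    Returns the list of sensors to fuse for tracking.
    """
    if smoke_level < threshold:
        # Clear air: the camera works and the laser gives precise ranges.
        return ["camera", "laser"]
    # Dense smoke blinds the camera and scatters the laser; fall back
    # to sonar, whose acoustic pulses penetrate smoke.
    return ["sonar"]

clear_choice = select_sensors(0.1)   # clear scene
smoky_choice = select_sensors(0.8)   # smoky scene
```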
Archaeology via underwater robots: mapping and localization within Maltese cistern systems
This paper documents the application of several underwater robot mapping and localization techniques during an archaeological expedition. The goal of this project was to explore and map ancient cisterns located on the islands of Malta and Gozo. The cisterns of interest acted as water storage systems for fortresses, private homes, and churches. They often consisted of several connected chambers, still containing water. A sonar-equipped Remotely Operated Vehicle (ROV) was deployed into these cisterns to obtain both video footage and sonar range measurements. Four different mapping and localization techniques were employed: 1) sonar image mosaics using stationary sonar scans, 2) Simultaneous Localization and Mapping (SLAM) while the vehicle was in motion, 3) SLAM using stationary sonar scans, and 4) localization using previously created maps. Two-dimensional maps of six different cisterns were successfully constructed. It is estimated that the cisterns were built as far back as 300 B.C.
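The geometry behind building a 2-D map from a stationary sonar scan can be sketched as below: each (bearing, range) return is projected into map coordinates through the vehicle pose. This is only the projection step under assumed inputs; the paper's actual pipeline registers many scans and runs SLAM on top.

```python
import math

def scan_to_points(scan, pose=(0.0, 0.0, 0.0)):
    """Project one stationary sonar scan into 2-D map coordinates.

    scan: list of (bearing_rad, range_m) returns from a full rotation.
    pose: (x, y, heading_rad) of the ROV when the scan was taken.
    Sketch of the geometry behind sonar map mosaicking, not the
    expedition's actual software.
    """
    x0, y0, th = pose
    pts = []
    for bearing, rng in scan:
        a = th + bearing  # beam direction in the map frame
        pts.append((x0 + rng * math.cos(a), y0 + rng * math.sin(a)))
    return pts

# One wall return straight ahead at 2 m, one to the left at 3 m.
pts = scan_to_points([(0.0, 2.0), (math.pi / 2, 3.0)])
```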
Pedestrian detection for mobile bus surveillance
In this paper, we present a system for pedestrian detection in scenes captured by mobile bus surveillance cameras in busy city streets. Our approach integrates scene localization, foreground/background separation, and pedestrian detection modules into a unified detection framework. The scene localization module performs a two-stage clustering of the video data: in the first stage, SIFT homography is applied to cluster frames by structural similarity, and the second stage further clusters these aligned frames by lighting. This produces clusters of images that share viewpoint and lighting conditions. A kernel density estimation (KDE) method over colour and gradient features is then used to construct a background model for each image cluster, which is subsequently used to detect foreground pixels. Finally, pedestrians are identified using a hierarchical template matching approach. We have tested our system on a set of real bus video datasets, and the experimental results verify that our system works well in practice.
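The per-pixel KDE background test at the core of the separation step can be sketched as follows. This is a simplified grayscale version under assumed parameters (bandwidth, threshold, synthetic data), not the paper's colour-and-gradient formulation.

```python
import numpy as np

def kde_foreground_mask(frame, background_samples, bandwidth=10.0, thresh=1e-3):
    """Per-pixel Gaussian-KDE background test (illustrative sketch).

    frame: (H, W) grayscale image.
    background_samples: (N, H, W) stack of aligned frames from one
    scene/lighting cluster, used as the KDE sample set per pixel.
    A pixel is foreground when its value is unlikely under the KDE.
    """
    diffs = frame[None, :, :] - background_samples            # (N, H, W)
    kernels = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    density = kernels.mean(axis=0) / (bandwidth * np.sqrt(2 * np.pi))
    return density < thresh                                   # True = foreground

rng = np.random.default_rng(0)
bg = rng.normal(100.0, 2.0, size=(20, 4, 4))  # stable background near 100
frame = bg[0].copy()
frame[0, 0] = 200.0                           # one bright foreground pixel
mask = kde_foreground_mask(frame, bg)
```

In the full system one such model is built per image cluster, so lighting variation is absorbed by the clustering rather than by the KDE bandwidth.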
Passenger monitoring in moving bus video
In this paper, we present a novel person detection system for public transport buses that tackles the problem of changing illumination conditions. Our approach integrates a stable SIFT (Scale-Invariant Feature Transform) background seat modeling mechanism with a human shape model in a weighted Bayesian framework to detect passengers on board buses. SIFT background modeling extracts stable local features in pre-annotated background seat areas and tracks these features over time to build a global statistical background model for each seat. Since SIFT features are partially invariant to lighting, this background model can robustly detect seat occupancy status even under severe lighting changes. The human shape model further confirms the presence of a passenger when a seat is occupied. Together these form a robust passenger monitoring system that is resilient to illumination changes. We evaluate the performance of the proposed system on a number of challenging video datasets obtained from bus cameras, and the experimental results show that it is superior to state-of-the-art people detection systems.
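One plausible reading of the weighted Bayesian fusion of the two cues can be sketched as below. The weights, the probability values, and the specific softening scheme (raising each likelihood to its weight) are illustrative assumptions, not the paper's definition.

```python
def occupancy_posterior(prior, p_sift_occ, p_sift_empty,
                        p_shape_occ, p_shape_empty,
                        w_sift=0.6, w_shape=0.4):
    """Weighted Bayes update for seat occupancy (illustrative sketch).

    prior: P(occupied) before observing the cues.
    p_*_occ / p_*_empty: likelihood of each cue given occupied / empty.
    Likelihoods are raised to their weights before the update, a common
    way to soften less reliable cues.
    """
    num = prior * (p_sift_occ ** w_sift) * (p_shape_occ ** w_shape)
    den = num + (1 - prior) * (p_sift_empty ** w_sift) * (p_shape_empty ** w_shape)
    return num / den

# Background SIFT features on the seat have disappeared and a human
# shape is detected: posterior occupancy rises well above the prior.
p = occupancy_posterior(0.3, 0.9, 0.1, 0.8, 0.2)
```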
Identification using face regions: Application and assessment in forensic scenarios
This is the author's version of a work that was accepted for publication in Forensic Science International. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Forensic Science International, 23, 1-3, (2013), DOI: 10.1016/j.forsciint.2013.08.020.

This paper reports an exhaustive analysis of the discriminative power of the different regions of the human face in various forensic scenarios. In practice, when forensic examiners compare two face images, they focus their attention not only on the overall similarity of the two faces: they also carry out an exhaustive morphological comparison region by region (e.g., nose, mouth, eyebrows, etc.). In this scenario it is very important to know, based on scientific methods, to what extent each facial region can help in identifying a person. This knowledge, obtained using quantitative and statistical methods on given populations, can then be used by the examiner to support or tune his observations. In order to generate such scientific knowledge useful for the expert, several methodologies are compared, such as manual and automatic facial landmark extraction, different facial region extractors, and various distances between the subject and the acquisition camera. Also, three scenarios of interest for forensics are considered, comparing mugshot and Closed-Circuit Television (CCTV) face images using the MORPH and SCface databases. One of the findings is that the discriminative power of the facial regions changes depending on the acquisition distance, in some cases performing better than the full face.
Brain Emotional Learning Based Intelligent Decoupler for Nonlinear Multi-Input Multi-Output Distillation Columns
The distillation process is vital in many fields of the chemical industry; for example, two coupled distillation columns usually form a highly nonlinear Multi-Input Multi-Output (MIMO) coupled process. The control of a MIMO process is usually implemented via a decentralized approach using a set of Single-Input Single-Output (SISO) loop controllers. Decoupling the MIMO process into a group of single loops requires proper input-output pairing and the development of a decoupling compensator unit. This paper proposes a novel intelligent decoupling approach for MIMO processes based on a new MIMO brain emotional learning architecture. A MIMO architecture of the Brain Emotional Learning Based Intelligent Controller (BELBIC) is developed and applied as a decoupler for a 4-input/4-output highly nonlinear coupled distillation columns process. Moreover, the performance of the proposed Brain Emotional Learning Based Intelligent Decoupler (BELBID) is enhanced using the Particle Swarm Optimization (PSO) technique. The performance is compared with a PSO-optimized steady-state decoupling compensation matrix. Mathematical models of the distillation columns and the decouplers are built and tested in a simulation environment by applying the same inputs. The results show the remarkable success of the BELBID in minimizing loop interactions without degrading the output that each input has been paired with.
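The baseline that the BELBID is compared against, a steady-state decoupling compensation matrix, can be sketched as follows. The 2x2 gain matrix here is a made-up example, not the paper's 4x4 distillation model: the idea is that with compensator D equal to the inverse of the steady-state gain G(0), the compensated plant is the identity at steady state, so each SISO loop sees only its own output.

```python
import numpy as np

# Illustrative steady-state gain matrix of a coupled 2x2 plant
# (off-diagonal terms are the loop interactions to cancel).
G0 = np.array([[1.0, 0.5],
               [0.4, 1.0]])

# Steady-state decoupling compensator: D = G(0)^-1, so G(0) @ D = I.
D = np.linalg.inv(G0)
compensated = G0 @ D
```

Unlike this fixed matrix, which only decouples exactly at steady state and for the nominal gains, the abstract's learning-based decoupler adapts to the nonlinear dynamics.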
Movement Control in Recovering UUV Based on Two-Stage Discrete T-S Fuzzy Model
A two-stage discrete T-S fuzzy model controller, formed by a motion controller and a dynamic controller connected in series, is presented to solve the movement control problem of a UUV (unmanned underwater vehicle) during recovery. The motion controller is designed based on the uncertain T-S model and the concept of a discrete fuzzy vector; it takes the position error between the UUV and the moving platform as input and converts it into the speed commands of the UUV at the next time step. The dynamic controller design is based on the theory of the fuzzy region model, and a relaxed condition for the Lyapunov stabilization function is derived in the form of linear matrix inequalities; this controller generates the force and torque required to complete the recovery task. The feasibility and efficiency of the proposed control scheme are illustrated through simulations in which the UUV follows a moving platform.
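The discrete T-S fuzzy model underlying both controller stages blends local linear models with state-dependent membership weights. A generic one-step sketch, with made-up matrices and membership functions rather than the paper's UUV model:

```python
import numpy as np

def ts_fuzzy_step(x, rules, memberships):
    """One step of a discrete T-S fuzzy model (generic sketch):
    x_next = sum_i h_i(x) * A_i @ x, with normalized weights h_i.

    rules: list of local system matrices A_i.
    memberships: list of functions x -> unnormalized firing strength.
    """
    h = np.array([m(x) for m in memberships], dtype=float)
    h = h / h.sum()  # normalize so the weights sum to 1
    return sum(hi * (A @ x) for hi, A in zip(h, rules))

# Two local linear models, blended by how far x[0] is from the origin.
A1 = np.array([[0.9, 0.1], [0.0, 0.8]])  # active near x[0] = 0
A2 = np.array([[0.5, 0.0], [0.1, 0.6]])  # active for large |x[0]|
memberships = [lambda x: max(0.0, 1 - abs(x[0])),
               lambda x: min(1.0, abs(x[0]))]

x = np.array([0.5, 0.2])
x_next = ts_fuzzy_step(x, [A1, A2], memberships)
```

The LMI conditions mentioned in the abstract then certify a Lyapunov function that decreases along such blended dynamics for every admissible weighting.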