33 research outputs found

    Study Of Human Activity In Video Data With An Emphasis On View-invariance

    Get PDF
    The perception and understanding of human motion and action is an important area of research in computer vision that plays a crucial role in applications such as surveillance, HCI, and ergonomics. In this thesis, we focus on the recognition of actions under varying viewpoints and different, unknown camera intrinsic parameters. The challenges to be addressed include perspective distortions, differences in viewpoint, anthropometric variations, and the large number of degrees of freedom of articulated bodies. In addition, we are interested in methods that require little or no training. Current solutions to action recognition usually assume that a huge dataset of actions is available so that a classifier can be trained. This means that, in order to define a new action, the user has to record a number of videos from different viewpoints with varying camera intrinsic parameters and then retrain the classifier, which is not very practical. We propose algorithms that overcome these challenges and require only a few instances of the action, taken from any viewpoint with any intrinsic camera parameters. Our first algorithm is based on the rank constraint on the family of planar homographies associated with triplets of body points. We represent an action as a sequence of poses and decompose each pose into triplets, so that a pose transition is broken down into a set of movements of body point planes. In this way, we transform the non-rigid motion of the body points into rigid motions of body point planes. We use the fact that the family of homographies associated with two identical poses has rank 4 to gauge the similarity of poses between two subjects observed by different perspective cameras and from different viewpoints. This method requires only one instance of the action. We then show that the concept of triplets can be extended to line segments. In particular, we establish that if we look at the movement of line segments instead of triplets, we gain more redundancy in the data, leading to better results. We demonstrate this concept using “fundamental ratios”: we decompose a human body pose into line segments instead of triplets and examine the set of movements of those line segments. This method needs only three instances of the action. If a larger dataset is available, we can also weight the line segments for better accuracy. The last method is based on the concept of “projective depth”: given a plane, we can compute the depth of a point relative to that plane. We propose three different ways of using projective depth: (i) triplets - the three points of a triplet, together with the epipole, define a plane, and the movement of points relative to these body planes can be used to recognize actions; (ii) ground plane - if we can extract the ground plane, we can compute the projective depth of the body points with respect to it, so the action recognition problem translates to curve matching; and (iii) mirror person - we can use the mirror view of the person to extract mirror-symmetric planes. This method also needs only one instance of the action. Extensive experiments are reported on view invariance, robustness to noisy localization and occlusion of body points, and action recognition. The experimental results are very promising and demonstrate the efficiency of the proposed invariants.
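    The rank constraint at the core of the first algorithm can be checked numerically. Below is a minimal sketch (in Python/NumPy, not the thesis's full pipeline), assuming the per-triplet homographies between the two views have already been estimated: it stacks the flattened, normalized homographies and uses the singular values of the resulting matrix to judge how close the family is to rank 4.

        import numpy as np

        def rank4_residual(homographies, eps=1e-12):
            """Score how close a family of 3x3 homographies is to rank 4.

            homographies : iterable of 3x3 arrays, one per body-point triplet,
                           assumed already estimated between the two views.
            Returns sigma_5 / sigma_4 of the stacked N x 9 matrix (N >= 5);
            a value near zero suggests the rank-4 constraint holds, i.e. the
            two observed poses match despite different cameras and viewpoints.
            """
            rows = [(H / np.linalg.norm(H)).ravel() for H in homographies]
            W = np.stack(rows)                      # N x 9 matrix of flattened homographies
            s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
            return float(s[4] / (s[3] + eps))

    In practice, such a residual would be thresholded, or compared across candidate action exemplars, to decide whether two observed pose sequences belong to the same action.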

    Calibration-based minimalistic multi-exposure digital sensor camera robust linear high dynamic range enhancement technique demonstration

    Get PDF
    A calibration-target-optimized method for finding the Camera Response Function (CRF) of a digital image sensor-based camera is demonstrated. The proposed method uses spatial averaging of pixel outputs over localized, known target zones together with histogram analysis for saturated-pixel detection. Using the proposed CRF generation method with an 87 dB High Dynamic Range (HDR) silicon CMOS image sensor camera viewing a 90 dB HDR calibration target, a non-linear CRF with a limited 40 dB linear zone is produced experimentally. Next, a 78 dB test target is deployed to test the camera with this measured CRF and its restricted 40 dB linear zone. By engaging the proposed minimal-exposure, weighting-free, multi-exposure imaging method with 2 images, a highly robust recovery of the test target is demonstrated. In addition, the recovery of the 78 dB test target with its 16 individual DR value patches stays robust over a factor-of-20 change in test target illumination. In comparison, a non-robust test target image recovery is produced by 5 leading prior-art multi-exposure HDR recovery algorithms using 16 images with 16 different exposure times, each consecutive image doubling the sensor dwell time. Further validation of the proposed HDR image recovery method is provided by two additional experiments, the first using a 78 dB calibrated target combined with a natural indoor scene to form a hybrid design target, and the second using an uncalibrated indoor natural scene. The proposed technique applies to all digital image sensor-based cameras having exposure time and illumination controls, and the proposed methods extend to various sensor technologies, spectral bands, and imaging applications.
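    As a rough illustration of the two-image idea, and only within the linear zone of a measured CRF, the sketch below merges a short and a long exposure by scaling each to common relative radiance units and falling back to the short exposure wherever the long one saturates. The saturation threshold and normalization are assumptions for illustration; this is not the paper's exact weighting-free procedure.

        import numpy as np

        def merge_two_exposures(img_short, img_long, t_short, t_long, sat_level=0.95):
            """Illustrative two-exposure linear HDR merge.

            img_short, img_long : float images normalized to [0, 1], captured with
                                  exposure times t_short < t_long through a CRF
                                  assumed linear over the usable zone.
            sat_level           : assumed saturation threshold for the long exposure.
            Each image is divided by its exposure time to express it in common
            relative radiance units; pixels saturated in the long exposure are
            replaced by the corresponding short-exposure estimates.
            """
            radiance_long = img_long / t_long
            radiance_short = img_short / t_short
            saturated = img_long >= sat_level
            return np.where(saturated, radiance_short, radiance_long)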

    Robust testing of displays using the extreme linear dynamic range CAOS camera

    Get PDF
    Robust testing of optical displays using the extreme linear Dynamic Range (DR) CAOS camera is proposed and demonstrated for the first time. Experiments highlight accurate and repeatable CAOS camera-based testing of standard 8-bit (i.e., 48 dB DR) and modified-DR 10-bit (i.e., 60 dB DR) computer Liquid Crystal Displays (LCDs). Results are compared with CMOS camera-based and light meter-based LCD testing, highlighting the robustness of the CAOS camera readings.

    Residential rooftop solar panel adoption behavior: Bibliometric analysis of the past and future trends

    Get PDF
    This study reviews residents' behavioral adoption of rooftop solar photovoltaics (solar PV). Solar PV imparts many benefits to the environment and to economic and social development. However, the literature offers no comprehensive understanding of the knowledge structure of solar PV adoption among households. Through a bibliometric approach, 564 publications on residents' adoption of solar PV were retrieved from the Web of Science (WoS), and co-citation and co-word analyses were performed to uncover past trends and predict future ones. The analysis produces significant themes related to residents' diffusion-of-innovation adoption and the motivations/predictors of solar PV uptake. This review contributes to a fundamental understanding of the critical determinants of residents' solar PV adoption. Theoretical and practical implications are discussed.

    Role of sustainable development goals in advancing the circular economy: A state-of-the-art review on past, present and future directions

    Get PDF
    The purpose of this study is to review the relationship between the highly anticipated concept of the circular economy (CE) and the sustainable development goals (SDGs). These two sustainability principles have transformed organizations and countries in their quest to achieve sustainable development. Despite their importance to the business and corporate realm, the discussion of the two concepts has developed in silos, only arbitrarily connected. Through a bibliometric approach, this study reviewed 226 journal publications and 16,008 cited references from the Web of Science (WoS) to understand the past, present and future trends of the two concepts and their impact on sustainable development. The bibliometric approach of citation, co-citation and co-word analysis uncovers the relevant and significant themes and research streams. Theoretical and practical implications are discussed within the broader business and governance perspective, with a view to developing a substantial triple bottom line and creating a sustainable future for civil society.

    Human Action Recognition In Video Data Using Invariant Characteristic Vectors

    No full text
    We introduce the concept of the 'characteristic vector' as an invariant vector associated with a set of freely moving points relative to a plane. We show that if the motions of two sets of points differ only up to a similarity transformation, then the elements of their characteristic vectors differ only up to scale, regardless of viewing directions and cameras. Furthermore, this invariant vector is given by any arbitrary homography that is consistent with the epipolar geometry. The characteristic vector of moving points can thus be used to recognize the transitions of a set of points in an articulated body during the course of an action, regardless of camera orientation and parameters. Our extensive experimental results on both motion capture data and real data indicate very good performance. © 2012 IEEE
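    Because the characteristic vectors of two matching point-set motions agree only up to an unknown scale, any comparison between them must be scale invariant. A minimal sketch of such a comparison (assuming the characteristic vectors have already been computed) is the cosine-based distance below.

        import numpy as np

        def up_to_scale_distance(v1, v2):
            """Distance between two vectors that should agree up to an unknown
            (possibly negative) scale factor: 1 - |cosine similarity|.
            Returns 0 when v1 and v2 are parallel and values near 1 otherwise.
            """
            u1 = v1 / np.linalg.norm(v1)
            u2 = v2 / np.linalg.norm(v2)
            return float(1.0 - abs(np.dot(u1, u2)))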

    Motion Retrieval Using Consistency Of Epipolar Geometry

    No full text
    In this paper, we present an efficient motion retrieval method based on the consistency of homographies with the epipolar geometry. We treat the body pose as a set of body point triplets and use the fact that each homography obtained from corresponding body point triplets should be consistent with the epipolar geometry to estimate the similarity of two poses. We show that our method is invariant to camera internal parameters and viewpoint. Experiments are performed on the CMU MoCap and IXMAS datasets, testing view invariance and action recognition. The results demonstrate that our method can accurately identify human actions from video sequences even when they are observed from totally different viewpoints with different camera parameters.
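    The consistency test underlying such a method can be illustrated with a standard result from two-view geometry: a homography H induced by a scene plane is compatible with the fundamental matrix F of the same view pair only if H^T F is skew-symmetric. The sketch below is an illustrative consistency measure built on that fact, not necessarily the paper's exact similarity score.

        import numpy as np

        def epipolar_consistency_error(H, F):
            """Measure how consistent a homography H is with the fundamental
            matrix F of the same view pair.

            For a homography induced by a real scene plane, H^T F must be
            skew-symmetric, so the Frobenius norm of the symmetric part of
            the normalized product H^T F should be close to zero.
            """
            M = H.T @ F
            M = M / np.linalg.norm(M)
            return float(np.linalg.norm(0.5 * (M + M.T)))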

    Robust Auto-Calibration Of A Ptz Camera With Non-Overlapping Fov

    No full text
    We consider the problem of auto-calibration of cameras that are fixed in location but are free to rotate while changing their internal parameters by zooming. Our method is based on line correspondences between two views, which may have non-overlapping fields of view; camera calibration from images with non-overlapping fields of view is the basic motivation behind this research. The key observation is that the planes formed by the optic center and corresponding lines are in fact the same plane. We use this fact, together with the orthonormality constraint on the rotation matrix, to estimate the unknown camera parameters. We show experimental results on synthetic and real data, and analyze the accuracy and stability of our method. © 2008 IEEE
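    The 'same plane' observation can be made concrete: an image line l back-projects to a plane through the optic center whose normal, in the camera frame, is K^T l, so for a purely rotating camera a corresponding line pair must satisfy K2^T l2 ~ R K1^T l1 up to scale. The sketch below (with hypothetical variable names, assuming homogeneous line coefficients) computes the angular residual that an auto-calibration routine could minimize over the unknown intrinsics and rotation.

        import numpy as np

        def line_plane_residual(l1, l2, K1, K2, R):
            """Angular residual for one line correspondence under pure rotation.

            l1, l2 : homogeneous line coefficients (3-vectors) in views 1 and 2.
            K1, K2 : 3x3 intrinsic matrices of the two views (zoom may differ).
            R      : relative rotation from view 1 to view 2.
            The back-projected plane normals, K1^T l1 rotated into view 2 and
            K2^T l2, should be parallel; the returned angle is zero when they are.
            """
            n1 = R @ (K1.T @ l1)
            n2 = K2.T @ l2
            c = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
            return float(np.arccos(np.clip(abs(c), -1.0, 1.0)))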