
    Measurement Function Design for Visual Tracking Applications

    Extracting human postural information from video sequences has proved a difficult research question. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining both the accuracy and the efficiency of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach that is commonly used in image segmentation problems but is not yet widely used in tracking applications.
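    The measurement function enters a particle filter in the weight-update step, where each particle is scored against the current observation. As a minimal illustration (not the paper's design), the sketch below re-weights and resamples particles under an assumed Gaussian observation likelihood; `measure` and `noise_std` are hypothetical names, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_measurement_update(particles, weights, observation, measure, noise_std):
    """Re-weight particles by how well each explains the observation.

    `measure` maps a particle state to a predicted observation; the shape
    of the likelihood below is what plays the role of the observational
    probability distribution (a simple Gaussian here, for illustration).
    """
    predicted = measure(particles)
    likelihood = np.exp(-0.5 * ((predicted - observation) / noise_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()

    # Systematic resampling keeps the particle set from degenerating.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)

# Toy example: particles spread widely around an unknown position of 3.0.
particles = rng.normal(0.0, 5.0, size=1000)
weights = np.full(1000, 1.0 / 1000)
for obs in [3.1, 2.9, 3.0]:
    particles, weights = pf_measurement_update(
        particles, weights, obs, measure=lambda s: s, noise_std=0.5)
```

    After a few updates the particle cloud concentrates near the true position; the choice of likelihood shape is exactly the design question the paper addresses.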

    Equivalent standard DEA models to provide super-efficiency scores

    DEA super-efficiency models were introduced originally with the objective of providing a tie-breaking procedure for ranking units rated as efficient in conventional DEA models. This objective has been expanded to include sensitivity analysis, outlier identification and inter-temporal analysis. However, not all units rated as efficient in conventional DEA models have feasible solutions in DEA super-efficiency models. We propose a new super-efficiency model that (a) generates the same super-efficiency scores as conventional super-efficiency models for all units having a feasible solution under the latter, and (b) generates a feasible solution for all units not having a feasible solution under the latter. Empirical examples are provided to compare the two super-efficiency models.
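    For reference, the conventional input-oriented (CCR) super-efficiency model the paper builds on can be posed as a linear program: unit k is removed from the reference set, and the model asks how far its inputs can be scaled while staying dominated by the remaining units. A sketch using `scipy.optimize.linprog` with illustrative data (this is the conventional model, not the authors' new one):

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, k):
    """Input-oriented CCR super-efficiency score for unit k.

    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Solves  min theta  s.t.  sum_{j!=k} lam_j x_j <= theta * x_k,
                             sum_{j!=k} lam_j y_j >= y_k,  lam >= 0.
    Returns None when the LP is infeasible -- the case the paper addresses.
    """
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != k]
    c = np.r_[1.0, np.zeros(n - 1)]                     # decision vars: [theta, lam...]
    A_in = np.hstack([-X[k][:, None], X[others].T])     # inputs:  lam.x - theta*x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y[others].T]) # outputs: -lam.y <= -y_k
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * (n - 1),
                  method="highs")
    return res.fun if res.success else None

# Three units, one input, one output; unit 0 is the most productive.
X = np.array([[1.0], [2.0], [4.0]])
Y = np.array([[2.0], [2.0], [3.0]])
score = super_efficiency(X, Y, 0)   # > 1: unit 0 remains efficient when excluded
```

    A score above 1 is the "super-efficiency" margin that breaks ties among efficient units; under variable returns to scale this LP can be infeasible, which motivates the paper's model.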

    Visual Odometry for Quantitative Bronchoscopy Using Optical Flow

    Optical flow, the extraction of motion from a sequence of images or a video stream, has been extensively researched since the late 1970s, but has been applied to the solution of few practical problems. To date, the main applications have been within fields such as robotics, motion compensation in video, and 3D reconstruction. In this paper we present the initial stages of a project to extract valuable information on the size and structure of the lungs using only the visual information provided by a bronchoscope during a typical procedure. The initial implementation provides a real-time estimate of the motion of the bronchoscope through the patient's airway, as well as a simple means of estimating the cross-sectional area of the airway.
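    As background, optical flow rests on the brightness-constancy equation Ix·u + Iy·v + It = 0, which can be solved in a least-squares sense. A minimal numpy sketch for a single global translation (far simpler than a bronchoscopy pipeline, and not the paper's implementation):

```python
import numpy as np

def global_flow(I1, I2):
    """Estimate one (u, v) translation between two frames by solving the
    brightness-constancy equation Ix*u + Iy*v + It = 0 in a least-squares
    sense over the whole image (Lucas-Kanade with a single window)."""
    Iy, Ix = np.gradient(I1)          # spatial gradients (axis 0 = rows = y)
    It = I2 - I1                      # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic frames: a smooth pattern shifted by (0.5, 0.3) pixels.
Y, X = np.mgrid[0:64, 0:64].astype(float)
frame1 = np.sin(0.3 * X) + np.cos(0.2 * Y)
frame2 = np.sin(0.3 * (X - 0.5)) + np.cos(0.2 * (Y - 0.3))
u, v = global_flow(frame1, frame2)    # recovers roughly (0.5, 0.3)
```

    A real visual-odometry system would estimate flow per window and integrate the motion over time, but the least-squares core is the same.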

    A high resolution smart camera with GigE Vision extension for surveillance applications


    Tracking with Multiple Cameras for Video Surveillance

    The large shape variability and partial occlusions challenge most object detection and tracking methods for nonrigid targets such as pedestrians. Single-camera tracking is limited in the scope of its applications because of the limited field of view (FOV) of a camera. This motivates the need for a multiple-camera system for completely monitoring and tracking a target, especially in the presence of occlusion. When the object is viewed with multiple cameras, there is a fair chance that it is not occluded in all of them simultaneously. In this paper, we develop a method for fusing tracks obtained from two cameras placed at different positions. First, the object to be tracked is identified on the basis of shape information measured by the MPEG-7 ART shape descriptor. Next, single-camera tracking is performed using the unscented Kalman filter, and finally the tracks from the two cameras are fused. A sensor network model is proposed to deal with situations in which the target moves out of the field of view of a camera and re-enters after some time. Experimental results demonstrate the effectiveness of the proposed scheme for tracking objects under occlusion.
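    The paper fuses unscented-Kalman-filter tracks from two cameras. One common fusion rule, shown here purely as an illustration and not necessarily the authors' exact scheme, weights each track's state estimate by its inverse covariance, so the more certain camera dominates:

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Fuse two independent track estimates (mean, covariance) by
    inverse-covariance weighting; assumes uncorrelated track errors."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)          # fused covariance is tighter
    x = P @ (P1i @ x1 + P2i @ x2)         # covariance-weighted mean
    return x, P

# Camera 1 is confident, camera 2 less so; the fusion leans toward camera 1.
x1, P1 = np.array([10.0, 5.0]), np.diag([0.5, 0.5])
x2, P2 = np.array([12.0, 6.0]), np.diag([2.0, 2.0])
x, P = fuse_tracks(x1, P1, x2, P2)
```

    When the target leaves one camera's FOV, the corresponding covariance grows (or the track is dropped), so the fused estimate falls back to the remaining camera, which is the situation the sensor network model handles.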

    Vision Processing in Intelligent CCTV for Mass Transport Security

    Intelligent surveillance systems are attracting unprecedented attention from research and industry. In this paper, we describe a real-life trial system in which various video analytics are used to detect events and objects of interest in a mass transport environment. The configuration and architecture of the system are presented. In addition to implementation and scalability challenges, we discuss issues related to ongoing trials in public spaces incorporating existing surveillance hardware.

    Design of a large span-distributed load flying-wing cargo airplane with laminar flow control

    A design study was conducted to add laminar flow control to a previously designed span-distributed-load airplane while maintaining constant range and payload. With laminar flow control applied to 100 percent of the wing and vertical-tail chords, the empty weight increased by 4.2 percent, the drag decreased by 27.4 percent, the required engine thrust decreased by 14.8 percent, and the fuel consumption decreased by 21.8 percent. When laminar flow control was applied over a lesser extent of the chord (approximately 80 percent), the empty weight increased by 3.4 percent, the drag decreased by 20.0 percent, the required engine thrust decreased by 13.0 percent, and the fuel consumption decreased by 16.2 percent. In both cases the required take-off gross weight of the aircraft was less than that of the original turbulent aircraft.

    Preliminary design characteristics of a subsonic business jet concept employing laminar flow control

    Aircraft configurations were developed with laminar flow control (LFC) and without it. The LFC configuration had approximately eleven percent less parasite drag and a seven percent higher maximum lift-to-drag ratio. Although these aerodynamic advantages were partially offset by the additional weight of the LFC system, the LFC aircraft burned six to eight percent less fuel on comparable missions. For the transatlantic design mission with the gross weight fixed, the LFC configuration would carry a greater payload with ten percent less fuel per passenger mile.

    Space VLBI Observations of 3C 279 at 1.6 and 5 GHz

    We present the first VLBI Space Observatory Programme (VSOP) observations of the gamma-ray blazar 3C 279 at 1.6 and 5 GHz. The combination of the VSOP and VLBA-only images at these two frequencies maps the jet structure on scales from 1 to 100 mas. On small angular scales the structure is dominated by the quasar core and the bright secondary component `C4', located 3 milliarcseconds from the core at this epoch. On larger angular scales the structure is dominated by a jet extending to the southwest, which at the largest scale seen in these images connects with the smallest-scale structure seen in VLA images. We have exploited two of the main strengths of VSOP: the ability to obtain images matched in resolution to higher-frequency ground-based images, and the ability to measure high brightness temperatures. A spectral index map was made by combining the VSOP 1.6 GHz image with a matched-resolution VLBA-only image at 5 GHz from our VSOP observation on the following day. The spectral index map shows the core to have a highly inverted spectrum, with some areas approaching the limiting spectral index for synchrotron self-absorbed radiation of 2.5. Gaussian model fits to the VSOP visibilities revealed high brightness temperatures (>10^{12} K) that are difficult to measure with ground-only arrays. An extensive error analysis was performed on the brightness temperature measurements. Most components did not have measurable upper limits on the brightness temperature, but lower limits were measured as high as 5x10^{12} K. This lower limit is significantly above both the nominal inverse-Compton and equipartition brightness temperature limits. The derived Doppler factor, Lorentz factor, and angle to the line of sight in the equipartition case are at the upper end of the range of expected values for EGRET blazars.

    Comment: 11 pages, 6 figures, emulateapj.sty. To be published in The Astrophysical Journal, v537, Jul 1, 200
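    For context, the brightness temperature of a fitted elliptical Gaussian component is commonly computed as T_b ≈ 1.22×10^{12} (1+z) S / (θ_maj θ_min ν²) K, with S in Jy, FWHM sizes in mas, and ν in GHz; the numerical constant depends on the Gaussian-FWHM convention. A small sketch with illustrative numbers (not the paper's actual model fits):

```python
def brightness_temperature(S_jy, theta_maj_mas, theta_min_mas, freq_ghz, z=0.0):
    """Brightness temperature (K) of an elliptical Gaussian VLBI component.

    Standard expression with flux density S in Jy, FWHM axes in mas and
    observing frequency in GHz; the 1.22e12 constant assumes the usual
    Gaussian-FWHM convention.
    """
    return 1.22e12 * (1.0 + z) * S_jy / (theta_maj_mas * theta_min_mas * freq_ghz**2)

# Illustrative: a compact 1 Jy component of 0.2 x 0.1 mas at 1.6 GHz,
# at the redshift of 3C 279 (z = 0.536), already exceeds ~1e12 K.
tb = brightness_temperature(1.0, 0.2, 0.1, 1.6, z=0.536)
```

    The size enters inversely, which is why space-VLBI baselines matter: only they resolve components compact enough to push the measured (or lower-limit) T_b above the inverse-Compton and equipartition limits.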

    MIME: A Gesture-Driven Computer Interface

    MIME (Mime Is Manual Expression) is a computationally efficient computer vision system for recognizing hand gestures. The system is intended to replace the mouse interface on a standard personal computer, controlling applications in a more intuitive manner. The system is implemented in C with no hardware acceleration and tracks hand motion at 30 fps on a standard PC. Using a simple two-dimensional model of the human hand, MIME employs a highly efficient, single-pass algorithm to segment the hand and extract its model parameters from each frame of the video input. The hand is tracked from one frame to the next using a constant-acceleration Kalman filter. Tracking and feature extraction are remarkably fast and robust, even when the hand is placed against difficult backdrops such as a typical cluttered desktop environment. Because of the efficient coding of the gesture tracking, adequate CPU power remains to run standard applications such as web browsers and presentations.
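    A constant-acceleration Kalman filter of the kind described can be sketched per image axis as follows (Python rather than the paper's C, and the noise covariances are illustrative assumptions, not MIME's tuning):

```python
import numpy as np

dt = 1.0 / 30.0                      # frame interval at 30 fps
# Constant-acceleration state per axis: [position, velocity, acceleration].
F = np.array([[1.0, dt, 0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])      # only position is measured each frame
Q = 1e-3 * np.eye(3)                 # process noise (illustrative value)
R = np.array([[1e-4]])               # measurement noise (illustrative value)

def kalman_step(x, P, z):
    """One predict/update cycle of the constant-acceleration filter."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with measurement z
    P = (np.eye(3) - K @ H) @ P
    return x, P

# Track a hand coordinate accelerating at 2 px/s^2 from rest.
x, P = np.zeros(3), np.eye(3)
for k in range(60):                   # two seconds of frames
    t = k * dt
    z = np.array([0.5 * 2.0 * t**2])  # noiseless synthetic measurement
    x, P = kalman_step(x, P, z)
```

    Because the motion model matches the measurements, the filtered position converges to the true trajectory; MIME runs the same predict/update cycle on the segmented hand's parameters each frame.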