
    Vision applications in agriculture

    From early beginnings in work on the visual guidance of tractors, the National Centre for Engineering in Agriculture has built up a portfolio of projects in which machine vision plays a prominent part. This presentation traces the history of this research, including some highly unusual topics.

    Automated soil hardness testing machine

    This paper describes the design and performance of a mechatronic system for controlling a standard drop-hammer mechanism that is commonly used in outdoor soil or ground hardness tests. A low-cost microcontroller drives a hydraulic actuator to repeatedly lift and drop a standard free-falling weight that strikes a pipe (the sampler), which is pushed deeper into the ground with each impact. The depth of the sampler pipe and the position of the hydraulic cylinder are constantly monitored, and the number of drops, soil penetration data and other variables are recorded in a database for later analysis. This device, known as the “EVH Trip Hammer”, allows the full automation and faster completion of what is typically a very labour-intensive and slow testing process that can involve human error and the risk of injury.
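    The lift-drop-measure loop described above can be sketched in a few lines. The sensor and actuator interfaces below are hypothetical stand-ins (the paper does not publish its firmware); the ground response is simulated so the loop runs end to end.

```python
# Sketch of the trip-hammer control loop: drop the weight, read the
# depth sensor, log the result, repeat until the target depth is reached.

def run_test(target_depth_mm, penetration_per_drop):
    """Drop the weight until the sampler reaches target depth.

    penetration_per_drop: callable giving mm of penetration for drop n,
    standing in for the depth-sensor reading after each impact.
    """
    depth = 0.0
    log = []  # (drop_number, cumulative_depth_mm) — the per-drop record
    drop = 0
    while depth < target_depth_mm:
        drop += 1                             # lift and release the weight
        depth += penetration_per_drop(drop)   # read the new sampler depth
        log.append((drop, round(depth, 2)))   # row written to the database
    return log

# Simulated hardening ground: each drop penetrates less than the last.
log = run_test(100.0, lambda n: 30.0 / n)
```

The returned log is exactly the drop-count and penetration record the abstract says is stored for later analysis.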

    The use of machine vision for assessment of fodder quality

    At present, fodder is assessed subjectively; the evaluation depends greatly on personal opinion, and there can be large variations between assessments. The project has investigated the use of machine vision in several ways to provide measures of fodder quality that are objective and independent of the assessor. Growers will be able to quote a quality measure that buyers can trust. The research includes the possibility of discerning colour differences that are beyond the capability of the human eye, while still using equipment of relatively modest cost.
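    One way to quantify colour differences finer than human assessment (a sketch of the general idea, not necessarily the project's method) is to convert camera RGB into the perceptually more uniform CIELAB space and report the ΔE*ab distance between two samples:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # undo the sRGB gamma curve
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> CIE XYZ (sRGB primaries, D65)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])  # normalise by the white point
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e(rgb1, rgb2):
    """CIE76 colour difference between two sRGB colours."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))
```

A ΔE of roughly 2.3 is often quoted as a just-noticeable difference, so a calibrated camera that reliably resolves sub-JND differences between fodder samples is, in this sense, measuring beyond the human eye.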

    Bovine intelligence for training horses

    A rail-mounted model of a small cow is to be used in the training of horses for camp-drafting contests. The paper concerns the addition of sensors and a control strategy that enable the machine to respond to the proximity of the horse in a manner representative of the behaviour of a live calf.

    Relating vanishing points to catadioptric camera calibration

    This paper presents the analysis and derivation of the geometric relation between vanishing points and camera parameters of central catadioptric camera systems. These vanishing points correspond to the three mutually orthogonal directions of the 3D world coordinate system (i.e. the X, Y and Z axes). Compared with vanishing points (VPs) under perspective projection, VPs under central catadioptric projection have two advantages: there are normally two vanishing points for each set of parallel lines, since lines are projected to conics in the catadioptric image plane, and these vanishing points are usually located inside the image frame. We show that knowledge of the VPs corresponding to the XYZ axes in a single image leads to a simple derivation of both the intrinsic and extrinsic parameters of the central catadioptric system. This novel theory is demonstrated and tested on both synthetic and real data with respect to noise sensitivity.
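    For contrast with the catadioptric case, the perspective baseline is easy to sketch: under a pinhole camera with intrinsics K, all lines parallel to a world direction d share a single vanishing point v ~ K d, the limit of projected points far along the lines. The intrinsic values below are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Assumed pinhole intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(P):
    """Project a 3D point onto the image plane (camera at the origin)."""
    x = K @ P
    return x[:2] / x[2]

# Vanishing point of world direction d: v ~ K d in homogeneous coordinates.
d = np.array([1.0, 0.0, 1.0])
v = K @ d
v = v[:2] / v[2]

# Points far along two parallel lines with direction d converge to v.
far1 = project(np.array([0.0, 0.0, 5.0]) + 1e7 * d)
far2 = project(np.array([2.0, -1.0, 8.0]) + 1e7 * d)
```

Under central catadioptric projection the same construction yields two antipodal vanishing points per direction, which is the property the paper exploits.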

    Deep Learning for Vanishing Point Detection Using an Inverse Gnomonic Projection

    We present a novel approach for vanishing point detection from uncalibrated monocular images. In contrast to the state of the art, we make no a priori assumptions about the observed scene. Our method is based on a convolutional neural network (CNN) which does not use natural images, but a Gaussian sphere representation arising from an inverse gnomonic projection of lines detected in an image. This allows us to rely on synthetic data for training, eliminating the need for labelled images. Our method achieves competitive performance on three horizon estimation benchmark datasets. We further highlight some additional use cases for which our vanishing point detection algorithm can be used.
    Comment: Accepted for publication at the German Conference on Pattern Recognition (GCPR) 2017. This research was supported by the German Research Foundation (DFG) within Priority Research Programme 1894 "Volunteered Geographic Information: Interpretation, Visualisation and Social Computing".
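    The Gaussian-sphere representation can be sketched in a few lines: an image line l (in homogeneous coordinates, l·x = 0) back-projects through assumed intrinsics K to a great circle on the unit sphere with normal proportional to Kᵀl, and the vanishing direction shared by two parallel scene lines is the intersection of their great circles, i.e. the cross product of the two normals. The lines and intrinsics below are hypothetical; the paper's own pipeline feeds this sphere representation to a CNN rather than intersecting circles directly.

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed pinhole intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def great_circle_normal(l):
    """Unit normal of the great circle that image line l maps to."""
    n = K.T @ l
    return n / np.linalg.norm(n)

# Two hypothetical image lines that both pass through the point (1120, 240):
l1 = np.cross([1120.0, 240.0, 1.0], [  0.0, 240.0, 1.0])
l2 = np.cross([1120.0, 240.0, 1.0], [320.0, 480.0, 1.0])

# Their great circles intersect in the common vanishing direction.
d = np.cross(great_circle_normal(l1), great_circle_normal(l2))
d /= np.linalg.norm(d)

# Re-projecting that direction recovers the shared vanishing point.
vp = K @ d
vp = vp[:2] / vp[2]
```

On the sphere, every line becomes a great circle regardless of where its vanishing point falls in the image, which is what makes the representation convenient for learning.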

    Using Points at Infinity for Parameter Decoupling in Camera Calibration


    Non-parametric Models of Distortion in Imaging Systems.

    Traditional radial lens distortion models are based on the physical construction of lenses. However, manufacturing defects and physical shock often cause the actual observed distortion to be different from what can be modeled by the physically motivated models. In this work, we initially propose a Gaussian process radial distortion model as an alternative to the physically motivated models. The non-parametric nature of this model helps implicitly select the right model complexity, whereas for traditional distortion models one must perform explicit model selection to decide the right parametric complexity. Next, we forego the radial distortion assumption and present a completely non-parametric, mathematically motivated distortion model based on locally-weighted homographies. The separation from an underlying physical model allows this model to capture arbitrary sources of distortion. We then apply this fully non-parametric distortion model to a zoom lens, where the distortion complexity can vary across zoom levels and the lens exhibits noticeable non-radial distortion. Through our experiments and evaluation, we show that the proposed models are as accurate as the traditional parametric models at characterizing radial distortion while flexibly capturing non-radial distortion if present in the imaging system.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120690/1/rpradeep_1.pd
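    The Gaussian-process idea can be illustrated with a minimal posterior-mean regression from radius r to radial displacement Δr. This is a sketch only: the kernel, hyperparameters, and the synthetic "true" distortion below are assumptions, not the thesis's actual model.

```python
import numpy as np

def rbf(a, b, ell=0.5, sf=1.0):
    """Squared-exponential (RBF) kernel between two 1-D sample vectors."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

# Training data: (r, dr) pairs such as a calibration target would supply;
# here generated from a quadratic "true" distortion plus sensor noise.
rng = np.random.default_rng(0)
r_train = np.linspace(0.0, 1.0, 20)
dr_train = 0.1 * r_train**2 + 0.01 * rng.standard_normal(20)

sn = 0.01  # assumed observation-noise standard deviation
K = rbf(r_train, r_train) + sn**2 * np.eye(20)
alpha = np.linalg.solve(K, dr_train)          # K^-1 y, precomputed once

r_test = np.array([0.25, 0.5, 0.9])
dr_pred = rbf(r_test, r_train) @ alpha        # GP posterior mean at r_test
```

Because the kernel, not a fixed polynomial degree, controls the effective complexity, the same code fits a gentle or a sharply varying distortion curve without explicit model selection, which is the property the abstract emphasises.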

    Visual Tracking of Human Hand and Head Movements and Its Applications

    Tracking of human body movements is an important problem in computer vision, with applications in visual surveillance and human-computer interaction. Tracking of a single hand moving in space is addressed first, and a set of applications in human-computer interaction is presented. In this approach, a disparity map and motion fields extracted from a stereo camera set are modelled using a robust estimation method. Then the absolute position and orientation of the hand in space are estimated, and the central region of the hand is tracked over time. Virtual drawing in space, a virtual marble game, and 3D object construction are shown as applications of the single-hand tracking. Algorithms are then presented for tracking the hands and head of a person, or of several interacting people, viewed by a set of cameras in 3D. The problem is first defined as a general multiple-object tracking problem in a multiple-sensor environment, and a two-layered solution is proposed: a low-level particle filtering layer tracks individual targets in parallel, and a finite state machine analyses the interactions between the targets and applies application-specific heuristics. A set of activity recognition experiments in visual surveillance shows the usefulness of the system; the recognised activities involve interactions between the hands and heads of people and objects. A colour analysis scheme and a technique for combining information from different cameras are presented; they are used to detect carried objects and exchanges between hands.
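    The low-level layer of the two-layered design can be sketched as one bootstrap particle filter per target. The 1-D state, random-walk motion model, and Gaussian observation likelihood below are illustrative stand-ins for the image-based models used in the thesis; the finite-state-machine layer would consume the per-target estimates this filter produces.

```python
import numpy as np

class ParticleFilter:
    """Bootstrap particle filter for one target (1-D state for brevity)."""

    def __init__(self, n, init, rng):
        self.rng = rng
        self.particles = init + 0.1 * rng.standard_normal(n)
        self.weights = np.full(n, 1.0 / n)

    def step(self, z, proc_std=0.1, obs_std=0.2):
        n = len(self.particles)
        # predict: random-walk motion model
        self.particles += proc_std * self.rng.standard_normal(n)
        # update: Gaussian observation likelihood around measurement z
        w = np.exp(-0.5 * ((z - self.particles) / obs_std) ** 2)
        self.weights = w / w.sum()
        # resample (multinomial, to keep the sketch short)
        idx = self.rng.choice(n, n, p=self.weights)
        self.particles = self.particles[idx]
        return self.particles.mean()  # state estimate passed to the FSM layer

rng = np.random.default_rng(1)
pf = ParticleFilter(500, init=0.0, rng=rng)
truth, est = 0.0, 0.0
for _ in range(50):
    truth += 0.05                              # target drifts steadily
    z = truth + 0.2 * rng.standard_normal()    # noisy per-frame detection
    est = pf.step(z)
```

Running several such filters in parallel, one per hand or head, and feeding their estimates to a finite state machine mirrors the separation of concerns the abstract describes.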