22 research outputs found
An automated, high-throughput method for standardizing image color profiles to improve image-based plant phenotyping
High-throughput phenotyping has emerged as a powerful method for studying plant biology. Large image-based datasets are generated and analyzed with automated image analysis pipelines. A major challenge associated with these analyses is variation in image quality that can inadvertently bias results. Images are made up of tuples of data called pixels, which consist of R, G, and B values arranged in a grid. Many factors, for example image brightness, can influence the quality of the image that is captured. These factors alter the values of the pixels within images and consequently can bias the data and downstream analyses. Here, we provide an automated method to adjust an image-based dataset so that brightness, contrast, and color profile are standardized. The correction method is a collection of linear models that adjusts pixel tuples based on a reference panel of colors. We apply this technique to a set of images taken in a high-throughput imaging facility and successfully detect variance within the image dataset. In this case, variation resulted from temperature-dependent light intensity throughout the experiment. Using this correction method, we were able to standardize images throughout the dataset, and we show that this correction enhanced our ability to accurately quantify morphological measurements within each image. We implement this technique in a high-throughput pipeline available with this paper, and it is also implemented in PlantCV.
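The per-channel linear correction described above can be sketched as follows. This is a minimal illustration with invented reference-panel values and a simple slope/intercept model per channel; the paper's pipeline and PlantCV's actual implementation may differ.

```python
# Illustrative per-channel linear color correction against a reference
# panel (toy data; not the paper's actual pipeline or PlantCV's API).

def fit_channel(observed, expected):
    """Least-squares fit of expected ~ a * observed + b for one channel."""
    n = len(observed)
    mean_x = sum(observed) / n
    mean_y = sum(expected) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(observed, expected))
    var = sum((x - mean_x) ** 2 for x in observed)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def correct_pixel(pixel, models):
    """Apply the per-channel linear models to one (R, G, B) pixel tuple."""
    return tuple(min(255, max(0, round(a * v + b)))
                 for v, (a, b) in zip(pixel, models))

# Hypothetical reference chips as captured (biased) vs. their known values.
observed = [(52, 48, 50), (130, 125, 128), (210, 200, 205)]
expected = [(64, 64, 64), (128, 128, 128), (192, 192, 192)]

models = [fit_channel([o[c] for o in observed], [e[c] for e in expected])
          for c in range(3)]
print(correct_pixel((130, 125, 128), models))
```

Fitting one model per channel against the known panel values is what lets the same transform be applied uniformly to every pixel tuple in an image.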
Analysis Algorithm for Sky Type and Ice Halo Recognition in All-Sky Images
Halo displays, in particular the 22° halo, have been captured in long time series of images obtained from total sky imagers (TSIs) at various Atmospheric Radiation Measurement (ARM) sites. Halo displays form if smooth-faced hexagonal ice crystals are present in the optical path. We describe an image analysis algorithm for long time series of TSI images which scores images with respect to the presence of 22° halos. Each image is assigned an ice halo score (IHS) for 22° halos, as well as a photographic sky type (PST), which differentiates cirrostratus (PST-CS), partially cloudy (PST-PCL), cloudy (PST-CLD), or clear (PST-CLR) within a near-solar image analysis area. The color-resolved radial brightness behavior of the near-solar region is used to define the discriminant properties used to classify photographic sky type and assign an ice halo score. The scoring is based on the tools of multivariate Gaussian analysis applied to a standardized sun-centered image produced from the raw TSI image, following a series of calibrations, rotation, and coordinate transformation. The algorithm is trained on a training set for each class of images. We present test results on halo observations and photographic sky type for the first 4 months of the year 2018, for TSI images obtained at the Southern Great Plains (SGP) ARM site. A detailed comparison of visual and algorithm scores for the month of March 2018 shows that the algorithm is about 90 % reliable in discriminating the four photographic sky types and identifies 86 % of all visual halos correctly. Numerous instances of halo appearances were identified for the period January through April 2018, with persistence times between 5 and 220 min. Varying by month, we found that between 9 % and 22 % of cirrostratus skies exhibited a full or partial 22° halo.
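As a toy illustration of the Gaussian scoring idea, the sky-type assignment could look like the sketch below. The two features, their training values, and the diagonal covariance are all simplifying assumptions; the real algorithm uses full multivariate Gaussians on color-resolved radial brightness profiles.

```python
# Toy Gaussian-discriminant sky-type classifier (invented features).

def fit_gaussian(samples):
    """Per-dimension mean and variance (diagonal covariance for simplicity)."""
    n, dim = len(samples), len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dim)]
    variances = [max(1e-9, sum((s[d] - means[d]) ** 2 for s in samples) / n)
                 for d in range(dim)]
    return means, variances

def sq_mahalanobis(x, means, variances):
    """Squared Mahalanobis distance under a diagonal covariance."""
    return sum((xi - m) ** 2 / v for xi, m, v in zip(x, means, variances))

def classify(x, classes):
    """Assign x to the photographic sky type with the nearest Gaussian."""
    return min(classes, key=lambda k: sq_mahalanobis(x, *classes[k]))

training = {  # invented near-solar feature vectors per sky type
    "PST-CLR": [(0.10, 0.90), (0.15, 0.84), (0.12, 0.89)],
    "PST-CS":  [(0.50, 0.50), (0.55, 0.46), (0.46, 0.55)],
    "PST-CLD": [(0.90, 0.10), (0.85, 0.20), (0.88, 0.15)],
}
classes = {k: fit_gaussian(v) for k, v in training.items()}
print(classify((0.50, 0.52), classes))
```

Training one Gaussian per labeled class and scoring a new image by its distance to each is the essence of the discriminant approach described in the abstract.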
On-line control of active camera networks
Large networks of cameras have been increasingly employed to capture dynamic events for tasks such as surveillance and training. When using active (pan-tilt-zoom) cameras to capture events distributed throughout a large area, human control becomes impractical and unreliable. This has led to the development of automated approaches for on-line camera control. I introduce a new approach that consists of a stochastic performance metric and a constrained optimization method. The metric quantifies the uncertainty in the state of multiple points on each target. It uses state-space methods with stochastic models of the target dynamics and camera measurements. It can account for static and dynamic occlusions, accommodate requirements specific to the algorithm used to process the images, and incorporate other factors that can affect its results. The optimization explores the space of camera configurations over time under constraints associated with the cameras, the predicted target trajectories, and the image processing algorithm. While an exhaustive exploration of this parameter space is intractable, through careful complexity analysis and application domain observations I have identified appropriate alternatives for reducing the space. Specifically, I reduce the spatial dimension of the search by dividing the optimization problem into subproblems, and then optimizing each subproblem independently. I reduce the temporal dimension of the search by using empirically-based heuristics inside each subproblem. The result is a tractable optimization that explores an appropriate subspace of the parameters, while attempting to minimize the risk of excluding the global optimum. The approach can be applied to conventional surveillance tasks (e.g., tracking or face recognition), as well as tasks employing more complex computer vision methods (e.g., markerless motion capture or 3D reconstruction). 
I present the results of experimental simulations of two such scenarios, using controlled and natural (unconstrained) target motions, simulated and real target tracks, realistic scenes, and realistic camera networks.
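The divide-and-optimize strategy above might be sketched as follows, with a stand-in uncertainty metric; the thesis's actual stochastic state-space metric and constraint handling are far richer than this toy.

```python
# Sketch of per-camera subproblem optimization: each camera's discretized
# (pan, zoom) space is searched exhaustively and independently. The
# uncertainty function is a toy stand-in for the stochastic metric.
import itertools
import math

def uncertainty(camera_pos, config, target):
    """Toy metric: grows with target distance and aiming error,
    shrinks with zoom."""
    pan, zoom = config
    dx = target[0] - camera_pos[0]
    dy = target[1] - camera_pos[1]
    dist = math.hypot(dx, dy)
    aim_err = abs(pan - math.atan2(dy, dx))
    return dist / zoom + aim_err

def best_config(camera_pos, targets, pans, zooms):
    """Solve one subproblem: the configuration minimizing total
    uncertainty over this camera's assigned targets."""
    return min(itertools.product(pans, zooms),
               key=lambda cfg: sum(uncertainty(camera_pos, cfg, t)
                                   for t in targets))

# One camera at the origin, one target along the x-axis: the best
# configuration aims straight at it with maximum zoom.
print(best_config((0, 0), [(1, 0)], pans=[-0.5, 0, 0.5], zooms=[1, 2, 4]))
```

Solving each camera's configuration independently, as here, is what reduces the spatial dimension of the joint search at the cost of possibly missing the global optimum.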
Person re-Identification over distributed spaces and time
Replicating the human visual system and cognitive abilities that the brain uses to process the
information it receives is an area of substantial scientific interest. With the prevalence of video
surveillance cameras, a portion of this scientific drive has been directed toward providing useful automated
counterparts to human operators. A prominent task in visual surveillance is that of matching
people between disjoint camera views, or re-identification. This allows operators to locate people
of interest, to track people across cameras and can be used as a precursory step to multi-camera
activity analysis. However, due to the contrasting conditions between camera views and their
effects on the appearance of people, re-identification is a non-trivial task. This thesis proposes
solutions for reducing the visual ambiguity in observations of people between camera views.
This thesis first looks at a method for mitigating the effects of differing lighting conditions
between camera views on the appearance of people. It builds on work modelling
inter-camera illumination based on known pairs of images. A Cumulative Brightness Transfer
Function (CBTF) is proposed to estimate the mapping of colour brightness values based on limited
training samples. Unlike previous methods that use a mean-based representation for a set of
training samples, the cumulative nature of the CBTF retains colour information from underrepresented
samples in the training set. Additionally, the bi-directionality of the mapping function
is explored to maximise re-identification accuracy by ensuring samples are accurately
mapped between cameras.
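A minimal sketch of the CBTF idea follows, on toy grey-level values; the thesis itself operates per colour channel on real training pairs.

```python
# Sketch of a Cumulative Brightness Transfer Function (CBTF): brightness
# values from camera A are mapped to camera B by matching cumulative
# histograms accumulated over the training samples (toy data below).

def cumulative_hist(values, levels=256):
    """Normalized cumulative histogram over integer brightness levels."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = sum(hist)
    cum, running = [], 0
    for h in hist:
        running += h
        cum.append(running / total)
    return cum

def cbtf(cum_a, cum_b):
    """For each brightness level in A, find the first level in B whose
    cumulative frequency reaches the same proportion."""
    return [next(i for i, cb in enumerate(cum_b) if cb >= ca)
            for ca in cum_a]

# Toy training pixels: camera B is systematically brighter than camera A.
pixels_a = [10, 20, 20, 30, 40, 40, 50]
pixels_b = [60, 70, 70, 80, 90, 90, 100]
f = cbtf(cumulative_hist(pixels_a), cumulative_hist(pixels_b))
print(f[20])
```

Because the mapping is built from cumulative frequencies, brightness values that occur rarely in the training set still shape the transfer function, which is the advantage over a mean-based representation noted above.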
Secondly, an extension is proposed to the CBTF framework that addresses the issue of changing
lighting conditions within a single camera. As the CBTF requires manually labelled training
samples it is limited to static lighting conditions and is less effective if the lighting changes. This
Adaptive CBTF (A-CBTF) differs from previous approaches that either do not consider lighting
change over time, or rely on camera transition time information to update. By utilising contextual
information drawn from the background in each camera view, an estimation of the lighting
change within a single camera can be made. This background lighting model allows the mapping
of colour information back to the original training conditions and thus removes the need for
retraining.
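The background-driven compensation could be caricatured as a per-channel gain estimated from the static background; the real A-CBTF lighting model is richer than this ratio-of-means sketch, and all values below are invented.

```python
# Toy sketch of the A-CBTF idea: estimate the lighting change within one
# camera from its (static) background, then map new observations back to
# the original training conditions before applying the learned CBTF.

def channel_gains(bg_train, bg_now):
    """Per-channel ratio of mean background brightness (assumed model)."""
    def mean(ch):
        return sum(ch) / len(ch)
    return tuple(mean(t) / mean(n) for t, n in zip(bg_train, bg_now))

def to_training_conditions(pixel, gains):
    """Rescale an observed (R, G, B) pixel back to training lighting."""
    return tuple(min(255, round(v * g)) for v, g in zip(pixel, gains))

# The background got darker by half, so gains of 2 restore the
# training-time conditions.
gains = channel_gains(([100, 120], [80, 100], [60, 80]),
                      ([50, 60], [40, 50], [30, 40]))
print(to_training_conditions((50, 40, 30), gains))
```

Mapping observations back to the training conditions in this way is what lets the originally learned CBTF keep working without manual relabelling.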
Thirdly, a novel reformulation of re-identification as a ranking problem is proposed. Previous
methods use a score based on a direct distance measure of set features to form a correct/incorrect
match result. Rather than offering an operator a single outcome, the ranking paradigm is to give
the operator a ranked list of possible matches and allow them to make the final decision. By utilising
a Support Vector Machine (SVM) ranking method, a weighting on the appearance features
can be learned that capitalises on the fact that not all image features are equally important to
re-identification. Additionally, an Ensemble-RankSVM is proposed to address scalability issues
by separating the training samples into smaller subsets and boosting the trained models.
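To illustrate the ranking formulation without a full RankSVM, here is a pairwise-perceptron stand-in that learns feature weights so that true matches outrank false ones; the data and the update rule are illustrative, not the thesis's method.

```python
# Pairwise ranking sketch: learn weights w so that similarity vectors of
# true matches score above those of false matches. A perceptron update is
# a simple stand-in for RankSVM's large-margin training.

def train_ranker(relevant, irrelevant, epochs=50, lr=0.1):
    """Learn w such that w . x_rel > w . x_irr for all training pairs."""
    w = [0.0] * len(relevant[0])
    for _ in range(epochs):
        for xr in relevant:
            for xi in irrelevant:
                # If a false match is not out-scored, nudge w toward the
                # difference of the two feature vectors.
                if sum(wk * (r - i) for wk, r, i in zip(w, xr, xi)) <= 0:
                    w = [wk + lr * (r - i) for wk, r, i in zip(w, xr, xi)]
    return w

def rank(gallery_ids, sim_vectors, w):
    """Return gallery identities sorted by learned score, best first."""
    scored = sorted(zip(gallery_ids, sim_vectors),
                    key=lambda gs: -sum(wk * s for wk, s in zip(w, gs[1])))
    return [g for g, _ in scored]

relevant = [(0.9, 0.2), (0.8, 0.3)]    # invented true-match similarities
irrelevant = [(0.3, 0.8), (0.2, 0.6)]  # invented false-match similarities
w = train_ranker(relevant, irrelevant)
print(rank(["A", "B"], [(0.85, 0.25), (0.25, 0.7)], w))
```

An ensemble variant would train such rankers on disjoint subsets of the data and combine their scores, mirroring the Ensemble-RankSVM idea for scalability.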
Finally, the thesis looks at a practical application of the ranking paradigm in a real-world system.
The system encompasses both the re-identification stage and the precursory extraction
and tracking stages to form an aid for CCTV operators. Segmentation and detection are combined
to extract relevant information from the video, while several matching techniques are combined
with temporal priors to form a more comprehensive overall matching criterion.
The effectiveness of the proposed approaches is tested on datasets obtained from a variety
of challenging environments including offices, apartment buildings, airports and outdoor public
spaces.
Methods for the automatic alignment of colour histograms
Colour provides important information in many image processing tasks such as object identification and
tracking. Different images of the same object frequently yield different colour values due to undesired
variations in lighting and the camera. In practice, controlling the source of these fluctuations is difficult,
uneconomical or even impossible in a particular imaging environment. This thesis is concerned with the
question of how to best align the corresponding clusters of colour histograms to reduce or remove the
effect of these undesired variations.
We introduce feature based histogram alignment (FBHA) algorithms that enable flexible alignment
transformations to be applied. The FBHA approach has three steps: 1) feature detection in the colour
histograms, 2) feature association, and 3) feature alignment. We investigate the choices for these three
steps on two colour databases: 1) a structured and labeled database of RGB imagery acquired under controlled
camera, lighting and object variation and 2) grey-level video streams from an industrial inspection
application. The design and acquisition of the RGB image and grey-level video databases are a key contribution
of the thesis. The databases are used to quantitatively compare the FBHA approach against
existing methodologies and show it to be effective. FBHA is intended to provide a generic method for
aligning colour histograms; it only uses information from the histograms and therefore ignores spatial
information in the image. Spatial information and other context-sensitive cues are deliberately avoided
to maintain the generic nature of the algorithm. By ignoring some of this important information, we gain
useful insights into the performance limits of a colour alignment algorithm that works from the colour
histogram alone, which helps delineate the limits of a generic approach to colour alignment.
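The three FBHA steps could be illustrated on 1-D grey-level histograms as below; treating peaks as the features and associating them in order are simplifying assumptions, whereas the thesis investigates several choices for each step.

```python
# Illustrative sketch of the three FBHA steps on 1-D grey-level histograms:
# 1) detect peak features, 2) associate peaks across histograms in order,
# 3) align by piecewise-linearly remapping bin positions.

def detect_peaks(hist):
    """Local maxima of a histogram (step 1)."""
    return [i for i in range(1, len(hist) - 1)
            if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]

def associate(peaks_a, peaks_b):
    """Pair peaks in order of position (step 2, simplest association)."""
    return list(zip(peaks_a, peaks_b))

def align_value(v, pairs):
    """Piecewise-linear mapping of a bin position in A to B (step 3)."""
    for (a0, b0), (a1, b1) in zip(pairs, pairs[1:]):
        if a0 <= v <= a1:
            return b0 + (v - a0) * (b1 - b0) / (a1 - a0)
    return v  # outside the feature range: leave unchanged

hist_a = [0, 1, 5, 1, 0, 2, 9, 2, 0]  # peaks at bins 2 and 6
hist_b = [0, 0, 1, 6, 1, 0, 2, 8, 1]  # same clusters shifted right by 1
pairs = associate(detect_peaks(hist_a), detect_peaks(hist_b))
print(align_value(4, pairs))
```

Only the histograms themselves are consulted, which matches the deliberately spatial-information-free design discussed above.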
Environmentally robust multiple camera tracking
A significant growth in the use of surveillance cameras has arisen from both the availability of low-cost home security and post-September 11th security measures. With such a plethora of surveillance cameras available and already in use, tracking a person or object accurately from one field of view to another is a challenging problem: recognising the same person at different spatial locations, under different lighting conditions, and at different scales and orientations. To address these challenges and provide a solution, a review of recent and past literature is provided.
The main theme of this research is investigating methods to improve tracking of objects and people in dynamic environments and applying computational techniques to provide solutions to optimise such tracking systems. Image processing techniques are explored and refactored to adapt to currently available single-board computing power. Optimisation methods for speed of computing are investigated, presenting the paradigm of parallel programming during the design of “computationally intense” algorithms. The research also addresses cross-platform software/ server application design.
In controlled environments current tracking systems perform well; however, this project explores methods to take multiple-camera tracking to a higher level, where systems can, in real time, robustly cope with rapid changes in lighting, track objects between indoor and outdoor scenarios at any time of day or in any weather conditions, handle severe image occlusion and rapid changes in the direction, orientation and velocity of the tracked object, and remain invariant to image clutter and noise. Thus the outputs are twofold: track a human/object across multiple cameras, and ensure the algorithm is fast enough to run in real time on a modern processor.
This research explores algorithms to deliver colour illumination invariance, also known as colour constancy. Colour illumination invariance can be applied as a pre-processing step to all cameras in a multi-camera environment. The research also investigates experimental assessment of multi-camera performance, focusing mainly on robustness to environmental changes.
There are three main objectives for a tracking algorithm being used in the proposed system. Firstly, the tracking algorithm must accurately detect objects independently of their scale change and rotation. Secondly, the tracking algorithm must accurately detect objects across multiple cameras in different lighting conditions. The third objective is that the tracking algorithm must attain a high level of colour constancy; this last objective can be implemented as a pre-processing step to the tracking algorithm. This research explores the use of the Scale Invariant Feature Transform (SIFT) and the Speeded-Up Robust Features (SURF) algorithms. These algorithms are discussed in detail in the literature review, as are methods for providing colour illumination invariance.
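As one example of a colour-constancy pre-processing step, the classic grey-world algorithm is sketched below; it is a standard baseline rather than this thesis's own method, and the pixel values are invented.

```python
# Grey-world colour constancy: assume the average scene colour is grey,
# so scale each channel until its mean equals the global mean intensity.

def grey_world(pixels):
    """Rebalance a list of (R, G, B) tuples under the grey-world assumption."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3
    return [tuple(min(255, round(p[c] * grey / means[c])) for c in range(3))
            for p in pixels]

# Under a reddish illuminant the red channel dominates; grey-world
# rebalances the channels toward a common mean.
img = [(200, 100, 100), (180, 90, 90), (220, 110, 110)]
balanced = grey_world(img)
print(balanced)
```

Applied uniformly to every camera in a multi-camera network, such a step reduces illumination differences before any feature-based matching such as SIFT or SURF is attempted.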