
    Modelling and analysis of plant image data for crop growth monitoring in horticulture

    Plants can be characterised by a range of attributes, and measuring these attributes accurately and reliably is a major challenge for the horticulture industry. The measurement of those plant characteristics that are most relevant to a grower has previously been tackled almost exclusively by a combination of manual measurement and visual inspection. The purpose of this work is to propose an automated image analysis approach that provides an objective measure of plant attributes, removing subjective factors from assessment and reducing labour requirements in the glasshouse. This thesis describes a stereopsis approach for estimating plant height, since height information cannot be easily determined from a single image. The stereopsis algorithm proposed in this thesis is efficient in terms of running time and more accurate than comparable algorithms. The estimated geometry, together with colour information from the image, is then used to build a statistical plant surface model, which represents all the information from the visible spectrum. A self-organising map approach can be adopted to model plant surface attributes, but the model can be improved by using a probabilistic model such as a mixture model formulated in a Bayesian framework. Details of both methods are discussed in this thesis. A Kalman filter is developed to track the plant model over time, extending the model to the time dimension, which enables smoothing of the noisy measurements to produce a development trend for a crop. The outcome of this work could lead to a number of potentially important applications in horticulture.
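
    The abstract does not give the tracking details; purely as an illustration, the sketch below shows how a constant-velocity Kalman filter can smooth a sequence of noisy height measurements into a growth trend. The function name, noise parameters, and example data are hypothetical and not taken from the thesis.

```python
import numpy as np

def smooth_heights(measurements, dt=1.0, process_var=1e-3, meas_var=4.0):
    """Smooth noisy plant-height readings with a constant-velocity Kalman filter.

    `measurements` is a 1-D array of observed heights (e.g. cm per day);
    the noise parameters are illustrative, not values from the thesis.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition: height, growth rate
    H = np.array([[1.0, 0.0]])                       # only height is observed
    Q = process_var * np.array([[dt**4 / 4, dt**3 / 2],
                                [dt**3 / 2, dt**2]])  # process noise
    R = np.array([[meas_var]])                       # measurement noise

    x = np.array([[measurements[0]], [0.0]])         # initial state
    P = np.eye(2) * 10.0                             # initial uncertainty
    smoothed = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return np.array(smoothed)

# Example: noisy daily height readings for one crop
heights = np.array([10.2, 11.0, 10.7, 12.1, 12.8, 13.5, 13.1, 14.6])
print(smooth_heights(heights))
```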

    Cyclist Detection, Tracking, and Trajectory Analysis in Urban Traffic Video Data

    The major objective of this thesis is to examine computer vision and machine learning detection methods, tracking algorithms, and trajectory analysis for cyclists in traffic video data, and to develop an efficient system for cyclist counting. Due to the growing number of cyclist accidents on urban roads, methods for collecting information on cyclists are of significant importance to the Department of Transportation. The collected information provides insights for solving critical problems related to transportation planning, implementing safety countermeasures, and managing traffic flow efficiently. Intelligent Transportation System (ITS) employs automated tools to collect traffic information from traffic video data. In comparison to other road users, such as cars and pedestrians, automated cyclist data collection is a relatively new research area. In this work, a vision-based method for gathering cyclist count data at intersections and road segments is developed. First, we develop a methodology for efficient detection and tracking of cyclists. A combination of classification features and motion-based properties is evaluated to detect cyclists in the test video data. A Convolutional Neural Network (CNN) based detector called You Only Look Once (YOLO) is implemented to increase the detection accuracy. In the next step, the detection results are fed into a tracker based on Kernelized Correlation Filters (KCF), which, in combination with a bipartite graph matching algorithm, allows multiple cyclists to be tracked concurrently. Then, a trajectory rebuilding method and a trajectory comparison model are applied to refine the accuracy of tracking and counting. The trajectory comparison is performed with a semantic similarity approach. The proposed counting method is the first cyclist counting method able to count cyclists under different movement patterns. The trajectory data obtained can be further utilized for cyclist behavioral modeling and safety analysis.
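
    As an illustration of the kind of bipartite matching step mentioned above, the sketch below assigns new detections to existing tracks with the Hungarian algorithm (scipy's linear_sum_assignment), using centroid distance as the cost. The cost metric, the gating threshold, and all names are assumptions made for illustration, not the thesis's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections_to_tracks(track_centroids, det_centroids, max_dist=75.0):
    """Bipartite matching between existing track positions and new detections.

    Cost is Euclidean distance between centroids (pixels); `max_dist` is an
    illustrative gating threshold, not a value from the thesis.
    Returns (matches, unmatched_tracks, unmatched_detections).
    """
    if len(track_centroids) == 0 or len(det_centroids) == 0:
        return [], list(range(len(track_centroids))), list(range(len(det_centroids)))

    cost = np.linalg.norm(
        track_centroids[:, None, :] - det_centroids[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm

    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [t for t in range(len(track_centroids)) if t not in matched_t]
    unmatched_dets = [d for d in range(len(det_centroids)) if d not in matched_d]
    return matches, unmatched_tracks, unmatched_dets

# Example: two tracked cyclists, three new detections (one is a new track)
tracks = np.array([[120.0, 240.0], [400.0, 310.0]])
dets = np.array([[403.0, 318.0], [118.0, 236.0], [650.0, 90.0]])
print(match_detections_to_tracks(tracks, dets))
```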

    Protein Tracking by CNN-Based Candidate Pruning and Two-Step Linking with Bayesian Network

    Protein trafficking plays a vital role in understanding many biological processes and disease. Automated tracking of protein vesicles is challenging due to their erratic behaviour, changing appearance, and visual clutter. In this paper we present a novel tracking approach which utilizes a two-step linking process that exploits a probabilistic graphical model to predict tracklet linkage. The vesicles are initially detected with the help of a candidate selection process, where the candidates are identified by a multi-scale spot enhancing filter. Subsequently, these candidates are pruned and selected by a lightweight convolutional neural network. At the linking stage, tracklets are formed based on distance and detection assignment, implemented via a combinatorial optimization algorithm. Each tracklet is described by a number of parameters used to evaluate the probability of tracklet connection through inference over the Bayesian network. The tracking results are presented for confocal fluorescence microscopy data of protein trafficking in epithelial cells. The proposed method achieves a root mean square error (RMSE) of 1.39 for vesicle localisation and a score of 0.7 for the degree of track matching with ground truth. The presented method is also evaluated against the state-of-the-art TrackMate framework.
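
    The paper's exact filter settings are not given here; the sketch below illustrates a generic multi-scale spot enhancing (Laplacian-of-Gaussian) filter for candidate detection, using scipy. The scales, threshold, and function names are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def spot_candidates(image, sigmas=(1.0, 1.5, 2.0, 3.0), threshold=0.05):
    """Multi-scale Laplacian-of-Gaussian (spot enhancing) filter.

    Returns (row, col) coordinates of candidate spots. `sigmas` and
    `threshold` are illustrative values, not taken from the paper.
    """
    image = image.astype(np.float64)
    # Scale-normalised LoG response; bright blobs give positive peaks.
    responses = np.stack([-(s ** 2) * gaussian_laplace(image, s) for s in sigmas])
    best = responses.max(axis=0)

    # Keep local maxima above the threshold as candidates (e.g. for CNN pruning).
    local_max = maximum_filter(best, size=5) == best
    return np.argwhere(local_max & (best > threshold))

# Tiny synthetic example: one bright spot on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.01, size=(64, 64))
img[30:33, 40:43] += 1.0
print(spot_candidates(img))
```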

    Integration of a minimalistic set of sensors for mapping and localization of agricultural robots

    Robots have recently become ubiquitous in many aspects of daily life. For in-house applications there are vacuuming, mopping, and lawn-mowing robots. Swarms of robots have been used in Amazon warehouses for several years. Autonomous cars, despite being set back by several safety issues, are undeniably becoming the standard of the automobile industry. Beyond commercial applications, robots can perform various tasks, such as inspecting hazardous sites and taking part in search-and-rescue missions. Regardless of the end-user application, autonomy plays a crucial role in modern robots. The essential capabilities required for autonomous operation are mapping, localization, and navigation. The goal of this thesis is to develop a new approach to solving the problems of mapping, localization, and navigation for autonomous robots in agriculture. This type of environment poses unique challenges, such as repetitive patterns and large-scale environments with sparse features, in contrast to urban scenarios, where good features such as pavements, buildings, road lanes, and traffic signs are abundant. In outdoor agricultural environments, a robot can rely on a Global Navigation Satellite System (GNSS) to determine its whereabouts, but this limits the robot's activities to areas with accessible GNSS signals and fails indoors. In such cases, different types of exteroceptive sensors, such as RGB, depth, and thermal cameras, laser scanners, and Light Detection and Ranging (LiDAR), and proprioceptive sensors, such as an Inertial Measurement Unit (IMU) and wheel encoders, can be fused to better estimate the robot's state. Generic approaches that combine several different sensors often yield superior estimation results, but they are not always optimal in terms of cost-effectiveness, modularity, reusability, and interchangeability. For agricultural robots, being robust for long-term operation is as important as being cost-effective for mass production. We tackle this challenge by exploring and selectively using a handful of sensors, such as RGB-D cameras, LiDAR, and IMU, for representative agricultural environments. The sensor fusion algorithms provide high precision and robustness for mapping and localization while assuring cost-effectiveness by employing only the sensors necessary for the task at hand. In this thesis, we extend LiDAR mapping and localization methods developed for urban scenarios to cope with agricultural environments, where the presence of slopes, vegetation, and trees causes traditional approaches to fail. Our mapping method substantially reduces the memory footprint for map storage, which is important for large-scale farms. We show how to handle the localization problem in dynamically growing strawberry polytunnels by using only a stereo visual-inertial (VI) and depth sensor to extract and track only invariant features, which eliminates the need for remapping to deal with dynamic scenes. As a demonstration of the minimalistic requirements for autonomous agricultural robots, we also show the ability to autonomously traverse between rows in a difficult zigzag-like polytunnel environment using only a laser scanner. Furthermore, we present an autonomous navigation capability using only a camera, without explicitly performing mapping or localization.
Finally, our mapping and localization methods are generic and platform-agnostic, and can be applied to different types of agricultural robots. All contributions presented in this thesis have been tested and validated on real robots in real agricultural environments, and all approaches have been published in or submitted to peer-reviewed conference papers and journal articles.
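
    The thesis's map representation is not detailed in the abstract; as a loose illustration of how a point-cloud map's memory footprint can be reduced, the sketch below applies a simple voxel-grid downsampling in NumPy. The voxel size and all names are hypothetical and unrelated to the actual method.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.2):
    """Reduce a LiDAR point cloud by keeping one centroid per occupied voxel.

    `points` is an (N, 3) array in metres; `voxel_size` is an illustrative
    resolution, not a value from the thesis.
    """
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that fall in the same voxel and average them.
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Synthetic 50 m x 50 m x 50 m scan with 100k points
cloud = np.random.rand(100_000, 3) * 50.0
compact = voxel_downsample(cloud, voxel_size=0.5)
print(cloud.shape, "->", compact.shape)
```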

    Contributions to autonomous robust navigation of mobile robots in industrial applications

    One aspect in which current mobile platforms lag behind the level already reached in industry is precision. The fourth industrial revolution brought with it the adoption of machinery in most industrial processes, and one of its strengths is repeatability. Autonomous mobile robots, which are the platforms offering the greatest flexibility, lack this capability, mainly because of the noise inherent in sensor readings and the dynamism present in most environments. For this reason, a large part of this work focuses on quantifying the error committed by the main mapping and localization methods for mobile robots, offering several alternatives to improve positioning. Moreover, the main sources of information with which mobile robots are able to perform the functions described are exteroceptive sensors, which measure the environment rather than the state of the robot itself. For this same reason, some methods are highly dependent on the scenario in which they were developed and do not achieve the same results when it changes. Most mobile platforms generate a map that represents their surrounding environment and base many of their computations on it to perform actions such as navigation. This map generation is a process that in most cases requires human intervention and has a major impact on the robot's subsequent operation. In the last part of this work, a method is proposed that aims to optimize this step in order to generate a richer model of the environment without requiring additional time to do so.
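
    As an illustration of quantifying localization error, the sketch below computes the absolute trajectory error (RMSE and mean) between an estimated and a ground-truth trajectory sampled at the same timestamps. The 2-D assumption, names, and synthetic data are illustrative only, not the thesis's evaluation protocol.

```python
import numpy as np

def absolute_trajectory_error(estimated, ground_truth):
    """RMSE and mean of per-pose position error between two aligned trajectories.

    Both inputs are (N, 2) arrays of x, y positions sampled at the same
    timestamps; the names and the 2-D assumption are illustrative only.
    """
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(errors ** 2))), float(np.mean(errors))

# Synthetic example: estimated track drifting slightly from ground truth
t = np.linspace(0.0, 10.0, 200)
gt = np.stack([t, np.sin(t)], axis=1)
est = gt + np.random.normal(0.0, 0.05, gt.shape) + np.array([0.02, 0.0]) * t[:, None]
rmse, mean_err = absolute_trajectory_error(est, gt)
print(f"ATE RMSE: {rmse:.3f} m, mean error: {mean_err:.3f} m")
```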