Commercialisation of precision agriculture technologies in the macadamia industry
A prototype vision-based yield monitor has been developed for the macadamia industry. The system estimates yield for individual trees by detecting nuts and their harvested location. The technology was developed by the National Centre for Engineering in Agriculture, University of Southern Queensland, to reduce the labour and cost of varietal assessment trials, where the yield of individual trees must be measured to indicate tree performance. The project was commissioned by Horticulture Australia Limited.
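The core bookkeeping of such a yield monitor, turning georeferenced nut detections into per-tree counts, can be sketched as assigning each detection to the nearest known tree position. Everything below (function name, coordinates, the `max_dist` cutoff) is illustrative, not the NCEA implementation.

```python
from math import hypot

def estimate_per_tree_yield(detections, tree_positions, max_dist=4.0):
    """Assign each detected nut (x, y) to the nearest tree and count.

    detections: list of (x, y) harvester-frame coordinates of detected nuts
    tree_positions: dict mapping tree_id -> (x, y) planting location
    max_dist: detections farther than this from every tree are discarded
    """
    counts = {tree_id: 0 for tree_id in tree_positions}
    for dx, dy in detections:
        tree_id, dist = min(
            ((tid, hypot(dx - tx, dy - ty)) for tid, (tx, ty) in tree_positions.items()),
            key=lambda pair: pair[1],
        )
        if dist <= max_dist:
            counts[tree_id] += 1
    return counts
```

In practice the cutoff would depend on row spacing, so stray detections between rows are not credited to the wrong tree.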
Improving the safety and efficiency of rail yard operations using robotics
Significant efforts have been expended by the railroad industry to make operations safer and more efficient through the intelligent use of sensor data. This work proposes to take the technology one step further and use this data for the control of physical systems designed to automate hazardous railroad operations, particularly those that require humans to interact with moving trains. To accomplish this, application-specific requirements must be established to design self-contained machine vision and robotic solutions that eliminate the risks associated with existing manual operations. Present-day rail yard operations have been identified as good candidates to begin development; manual uncoupling of rolling stock in classification yards, in particular, has been investigated. To automate this process, an intelligent robotic system must be able to detect, track, approach, contact, and manipulate constrained objects on equipment in motion. This work presents multiple prototypes capable of autonomously uncoupling full-scale freight cars using feedback from their surrounding environment. Geometric image processing algorithms and machine learning techniques were implemented to accurately identify cylindrical objects in point clouds generated in real time. Unique methods fusing velocity and vision data were developed to synchronize a pair of moving rigid bodies in real time. Multiple custom end-effectors with built-in compliance and fault tolerance were designed, fabricated, and tested for grasping and manipulating cylindrical objects. Finally, an event-driven robotic control application was developed to safely and reliably uncouple freight cars using data from 3D cameras, velocity sensors, force/torque transducers, and intelligent end-effector tooling. Experimental results in a lab setting confirm that modern robotic and sensing hardware can be used to reliably separate pairs of rolling stock moving at up to two miles per hour.
Additionally, subcomponents of the autonomous pin-pulling system (APPS) were designed to be modular to the point where they could be used to automate other hazardous, labor-intensive tasks found in U.S. classification yards. Overall, this work supports the deployment of autonomous robotic systems in semi-unstructured yard environments to increase the safety and efficiency of rail operations.
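The velocity-vision fusion used to synchronize the robot with a moving car can be illustrated with a minimal complementary filter: dead-reckon the target position from the velocity sensor between camera frames, and correct it whenever a 3D-vision fix arrives. The gain and the one-dimensional along-track state are illustrative simplifications, not the filter used in the APPS prototypes.

```python
class TargetTracker:
    """Track the 1-D along-track position of a moving coupler pin by
    fusing a high-rate velocity sensor with low-rate camera fixes."""

    def __init__(self, correction_gain=0.3):
        self.position = None        # metres along the track, None until first fix
        self.gain = correction_gain

    def predict(self, velocity, dt):
        """Dead-reckon between camera frames using the measured car velocity."""
        if self.position is not None:
            self.position += velocity * dt

    def correct(self, vision_fix):
        """Blend in a camera-derived position fix (complementary update)."""
        if self.position is None:
            self.position = vision_fix
        else:
            self.position += self.gain * (vision_fix - self.position)
```

A higher gain trusts the camera more; a lower gain smooths over noisy fixes but tracks velocity errors longer, the same trade-off any such synchronization scheme must balance.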
Cooperation between ambient camera networks and on-board vision on a mobile robot for the surveillance of public places
This thesis deals with the detection and tracking of people in a surveilled public place. There is currently a growing demand for deploying mobile robots in public spaces, and researchers have fielded prototype robotic systems in hospitals, supermarkets, museums, and office environments; as robots leave their isolated industrial settings and begin to interact with humans in a shared workspace, safe interaction becomes a principal concern. For a mobile robot to behave safely and acceptably, it needs to know the presence, location, and movements of people in order to better understand and anticipate their intentions and actions. The thesis proposes to include a mobile robot in classical surveillance systems based on environment-fixed sensors. The mobile robot brings two important benefits: (1) it acts as a mobile sensor with perception capabilities, and (2) it can serve as a means of action for service provision. As a first contribution, the thesis presents an optimized visual people detector based on Binary Integer Programming that explicitly takes the computational demand imposed on the robot into consideration. A pool of homogeneous and heterogeneous features is investigated under this framework, thoroughly tested, and compared with state-of-the-art detectors. The experimental results clearly highlight the improvements the detectors learned with this framework bring, including their effect on the robot's reactivity during on-line missions. As a second contribution, the thesis proposes and validates a cooperative framework to fuse information from wall-mounted ambient cameras and sensors on the mobile robot to better track people in the vicinity. The same framework is also validated by fusing data from the different sensors on the mobile robot alone when external perception is unavailable. Finally, we demonstrate the improvements brought by the developed perceptual modalities by deploying them on our robotic platform, illustrating the robot's ability to perceive people in public areas and respect their personal space during navigation.
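Combining a wall-camera track with the robot's own detection of the same person can be sketched as inverse-variance weighting of the two position estimates, a standard building block of such cooperative fusion frameworks. The isotropic-variance assumption and the numbers in the test are illustrative, not the thesis's fusion model.

```python
def fuse_estimates(pos_a, var_a, pos_b, var_b):
    """Inverse-variance fusion of two independent 2-D position estimates.

    pos_a, pos_b: (x, y) estimates, e.g. from an ambient camera and the robot
    var_a, var_b: scalar variances (isotropic uncertainty) of each estimate
    Returns the fused (x, y) and its (reduced) variance.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = tuple((w_a * a + w_b * b) / (w_a + w_b) for a, b in zip(pos_a, pos_b))
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var
```

The fused variance is always smaller than either input variance, which is why adding the robot as a mobile sensor improves the ambient system even where camera coverage already exists.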
Classification of road users detected and tracked with LiDAR at intersections
Data collection is a necessary component of transportation engineering. Manual data collection methods have proven to be inefficient and limited in terms of the data required for comprehensive traffic and safety studies. Automatic methods are being introduced to characterize the transportation system more accurately and are providing more information to better understand the dynamics between road users. Video data collection is an inexpensive and widely used automated method, but the accuracy of video-based algorithms is known to be degraded by obstacles and shadows, and the third dimension is lost in video-based data collection.
The impressive progress in sensing technologies has encouraged development of new methods for measuring the movements of road users. The Center for Road Safety at Purdue University proposed application of a LiDAR-based algorithm for tracking vehicles at intersections from a roadside location. LiDAR provides a three-dimensional characterization of the sensed environment for better detection and tracking results. The feasibility of this system was analyzed in this thesis using an evaluation methodology to determine the accuracy of the algorithm when tracking vehicles at intersections. According to the implemented method, the LiDAR-based system provides successful detection and tracking of vehicles, and its accuracy is comparable to the results provided by frame-by-frame extraction of trajectory data using video images by human observers.
After supporting the suitability of the system for tracking, the second component of this thesis focused on proposing a classification methodology to discriminate between vehicles, pedestrians, and two-wheelers. Four different methodologies were applied to identify the best method for implementation. The KNN algorithm, which creates adaptive decision boundaries based on the characteristics of similar observations, provided better performance when evaluating new locations. The multinomial logit model did not allow the inclusion of collinear variables. The classification tree and boosting methodologies showed overfitting of the training data and produced lower performance when applied to the test data. Although the ANOVA analysis did not indicate a statistically superior competitor, the KNN algorithm achieved the objective of classifying movements at intersections under diverse conditions and was chosen as the method to implement with the existing algorithm.
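A k-nearest-neighbours classifier of the kind selected above fits in a few lines: new observations are labelled by majority vote among the closest training samples in feature space, which is what gives KNN its adaptive decision boundaries. The features (object length in metres, speed in m/s) and the training points are invented for illustration, not the Purdue data.

```python
from collections import Counter
from math import dist

def knn_classify(sample, training_data, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples (Euclidean distance in feature space)."""
    neighbours = sorted(training_data, key=lambda item: dist(sample, item[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical training set: (length_m, speed_mps) -> road-user class
training = [
    ((4.5, 12.0), "vehicle"), ((5.0, 9.0), "vehicle"), ((4.2, 15.0), "vehicle"),
    ((0.5, 1.2), "pedestrian"), ((0.6, 1.5), "pedestrian"), ((0.4, 1.0), "pedestrian"),
    ((1.8, 5.0), "two-wheeler"), ((1.7, 6.5), "two-wheeler"), ((1.9, 4.5), "two-wheeler"),
]
```

Because KNN stores the training set rather than a fitted model, retraining for a new intersection is just a matter of swapping in locally collected samples, consistent with its better performance at new locations.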
Standardization Roadmap for Unmanned Aircraft Systems, Version 1.0
This Standardization Roadmap for Unmanned Aircraft Systems, Version 1.0 ("roadmap") represents the culmination of the UASSC's work to identify existing standards and standards in development, assess gaps, and make recommendations for priority areas where there is a perceived need for additional standardization and/or pre-standardization R&D.
The roadmap has examined 64 issue areas, identified a total of 60 gaps and corresponding recommendations across the topical areas of airworthiness; flight operations (both general concerns and application-specific ones including critical infrastructure inspections, commercial services, and public safety operations); and personnel training, qualifications, and certification. Of that total, 40 gaps/recommendations have been identified as high priority, 17 as medium priority, and 3 as low priority. A "gap" means no published standard or specification exists that covers the particular issue in question. In 36 cases, additional R&D is needed.
The hope is that the roadmap will be broadly adopted by the standards community and that it will facilitate a more coherent and coordinated approach to the future development of standards for UAS. To that end, it is envisioned that the roadmap will be widely promoted and discussed over the course of the coming year, to assess progress on its implementation and to identify emerging issues that require further elaboration
The University Defence Research Collaboration In Signal Processing
This chapter describes the development of algorithms for automatic detection of anomalies from multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing) and inter-node relations.
The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance, and computational cost. Data quality measures have been used to ensure that the models of normality are not corrupted by unreliable and ambiguous data. Exploiting the context of each node's activity in complex networks offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.
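The balance between true-positive and false-positive rate described above ultimately comes down to a detection threshold against a learned model of normality. A minimal sketch, assuming a Gaussian model of a single node's activity, flags observations that deviate by more than `z_thresh` standard deviations; the threshold and data are illustrative, not the UDRC algorithms.

```python
from statistics import mean, stdev

def detect_anomalies(history, observations, z_thresh=3.0):
    """Flag observations whose z-score against the learned normal model
    exceeds z_thresh. Raising z_thresh lowers the false-positive rate
    at the cost of the true-positive rate, and vice versa."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in observations if abs(x - mu) > z_thresh * sigma]
```

Richer models of normality (per-node context, temporal patterns) change how `mu` and `sigma` are estimated, but the same threshold trade-off remains.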
Single chip solution for stabilization control & monocular visual servoing of small-scale quadrotor helicopter
This thesis documents the research undertaken to develop a high-performing design of a small-scale quadrotor (four-rotor) helicopter capable of delivering the speed and robustness required for agile motion while also featuring autonomous visual servoing within a size, weight, and power (SWaP) constrained package. The state of the art was reviewed, and the areas in existing design methodologies that could be improved were identified: development of a comprehensive quadrotor dynamics model, design and construction of a performance-optimized prototype vehicle, high-performance actuator design, design of a robust attitude stabilization controller, and a single-chip solution for autonomous vision-based position control. The gaps in each component were addressed individually. The outcomes include a high-fidelity dynamics and control model of the vehicle, developed using a multi-body bond-graph modeling approach to incorporate the dynamic interactions between the frame body and the propulsion system. Using an algorithmic optimization of size, payload capacity, and flight endurance, a quadrotor prototype was designed and constructed. To conform to the optimized geometric and performance parameters, the prototype frame was constructed using printed circuit board (PCB) technology, and processing power was integrated on a single field-programmable gate array (FPGA) chip. Furthermore, to actuate the quadrotor at a high update rate while also improving the power efficiency of the actuation system, a ground-up FPGA-based brushless direct current (BLDC) motor driver was designed using a low-loss commutation scheme and Hall-effect sensors. A proportional-integral-derivative (PID) closed-loop motor speed controller was also implemented in the same FPGA hardware for precise speed control of the motors. In addition, a novel control law was formulated for robust attitude stabilization by adopting a cascaded architecture of active disturbance rejection control (ADRC) and PID control. Using the same single FPGA chip to drive an on-board downward-looking camera, a monocular visual servoing solution was developed to integrate an autonomous position control feature with the quadrotor. Accordingly, a numerically simple relative position estimation technique relying on a passive landmark/target for 3-D position estimation was implemented in the FPGA hardware.
The functionality and effectiveness of the synthesized design were evaluated by performance benchmarking experiments conducted on each individual component as well as on the complete system constructed from these components. The proposed small-scale quadrotor, though just 43 cm in diameter, can lift a 434 g payload while operating for 18 min. Among the ground-up designed components, the FPGA-based motor driver demonstrated up to a 4% reduction in power consumption while handling command updates at a rate of 16 kHz. The cascaded attitude stabilization controller asymptotically stabilizes the vehicle within 426 ms of a command update and shows robust performance under stochastic wind gusts. Finally, the single-chip FPGA-based monocular visual servoing solution estimates pose at the camera rate of 37 fps, allowing the quadrotor to autonomously climb, descend, and hover over a passive target.
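The PID speed loop described above can be sketched in software form to show the structure of what the FPGA implements in hardware; the gains and the duty-cycle interpretation of the output are illustrative assumptions, not the thesis's fixed-point implementation.

```python
class PIDSpeedController:
    """Discrete PID controller for BLDC motor speed. On the FPGA this
    arithmetic would run in fixed point at the 16 kHz command rate."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_rpm, measured_rpm):
        error = target_rpm - measured_rpm
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Output feeds the commutation stage as a duty-cycle-like command.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the cascaded attitude architecture, an outer ADRC loop would generate the rate commands that loops like this one track at the motor level.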