DEVELOPMENT OF A MODULAR AGRICULTURAL ROBOTIC SPRAYER
Precision Agriculture (PA) increases farm productivity, reduces pollution, and minimizes input costs. However, wide adoption of existing PA technologies for complex field operations, such as spraying, is slow due to high acquisition costs, low adaptability, and slow operating speed. In this study, we designed, built, optimized, and tested a Modular Agrochemical Precision Sprayer (MAPS), a robotic sprayer with an intelligent machine vision system (MVS). Our work focused on identifying and spraying targeted plants with low cost, high speed, and high accuracy in a remote, dynamic, and rugged environment. We first researched and benchmarked combinations of one-stage convolutional neural network (CNN) architectures with embedded or mobile hardware systems. Our analysis revealed that a TensorRT-optimized SSD-MobilenetV1 on an NVIDIA Jetson Nano provided sufficient plant detection performance with low cost and power consumption. We also developed an algorithm to determine the maximum operating velocity of a chosen CNN and hardware configuration through modeling and simulation. Based on these results, we developed a CNN-based MVS for real-time plant detection and velocity estimation. We used the Robot Operating System (ROS) to integrate the modules for easy expansion, and we developed a robust dynamic targeting algorithm to synchronize the spray operation with the robot's motion, which significantly increases productivity. The research proved successful: we built a MAPS with three independent vision and spray modules. In the lab test, the sprayer recognized and hit all targets with only a 2% incorrect spray rate. In a field test with an unstructured crop layout, a broadcast-seeded soybean field, the MAPS also successfully sprayed all targets with only a 7% incorrect spray rate.
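The idea of bounding the operating velocity by the detector's latency can be sketched simply: a plant must stay in the camera's field of view long enough to be detected a minimum number of times. The function and all numbers below are illustrative assumptions, not values or code from the study:

```python
def max_operating_velocity(fov_length_m, inference_time_s, detections_per_plant=2):
    """Upper bound on travel speed (m/s) so that each plant remains inside
    the camera's field of view for at least `detections_per_plant`
    inference cycles. Illustrative model only; the study derives its
    bound through its own modeling and simulation."""
    time_in_view_needed = detections_per_plant * inference_time_s
    return fov_length_m / time_in_view_needed

# Hypothetical numbers: 0.5 m field of view, 40 ms per inference
v = max_operating_velocity(0.5, 0.040)  # 6.25 m/s upper bound
```

In practice the bound would also be reduced by the spray valve's actuation delay, which is one reason a dynamic targeting algorithm is needed to synchronize spraying with robot motion.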
Comparing Computing Platforms for Deep Learning on a Humanoid Robot
The goal of this study is to test two different computing platforms with
respect to their suitability for running deep networks as part of a humanoid
robot software system. One of the platforms is the CPU-centered Intel NUC7i7BNH
and the other is an NVIDIA Jetson TX2 system that puts more emphasis on GPU
processing. The experiments addressed a number of benchmarking tasks, including
pedestrian detection using deep neural networks. Some of the results were
unexpected but demonstrate that the platforms exhibit both advantages and
disadvantages when the computational performance and electrical power
requirements of such a system are taken into account.

Comment: 12 pages, 5 figures
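Comparing platforms on both computational performance and power draw usually comes down to measuring latency and normalizing throughput by wattage. A minimal sketch of that methodology (the workload and all numbers are placeholders, not the paper's benchmarks):

```python
import time

def mean_latency(fn, n_iters=100):
    """Time a workload over many iterations and return mean latency in
    seconds. A warm-up call is issued first so caches and lazy
    initialization do not skew the measurement."""
    fn()  # warm-up
    t0 = time.perf_counter()
    for _ in range(n_iters):
        fn()
    return (time.perf_counter() - t0) / n_iters

def frames_per_watt(fps, watts):
    """Throughput per watt: the figure of merit when trading off
    computational performance against electrical power requirements."""
    return fps / watts

# Placeholder workload standing in for a network's forward pass
workload = lambda: sum(i * i for i in range(10_000))
lat = mean_latency(workload)
```

With a wattmeter reading per platform, `frames_per_watt(1 / lat, watts)` gives a single efficiency number per device, which is how CPU-centric and GPU-centric systems can be put on one axis.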
FPGA-based module for SURF extraction
We present a complete hardware and software solution for an FPGA-based embedded computer vision module capable of carrying out the SURF image feature extraction algorithm. Aside from image analysis, the module embeds a Linux distribution that makes it possible to run programs specifically tailored for particular applications. The module is based on a Virtex-5 FXT FPGA, which features powerful configurable logic and an embedded PowerPC processor. We describe the module hardware as well as the custom FPGA image processing cores that implement the algorithm's most computationally expensive stage, interest point detection. The module's overall performance is evaluated and compared to CPU- and GPU-based solutions. Results show that the embedded module achieves distinctiveness comparable to the SURF software implementation running on a standard CPU while being faster and consuming significantly less power and space. It therefore allows the SURF algorithm to be used in applications with power and space constraints, such as autonomous navigation of small mobile robots.
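SURF's interest point detection is cheap to parallelize in hardware because it approximates Gaussian derivatives with box filters, and any box sum costs only four lookups in an integral image. A software sketch of that core primitive (not the paper's FPGA implementation; array sizes are illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: entry (r, c) holds the sum of all pixels at or
    above row r and at or left of column c."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] in O(1) via four integral-image
    lookups -- the operation SURF's box filters reduce to."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(36, dtype=np.int64).reshape(6, 6)  # toy "image"
ii = integral_image(img)
```

Because each box sum is constant-time regardless of filter size, the Hessian responses at all scales map naturally onto fixed-latency FPGA pipelines.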
Mobile forensic triage for damaged phones using M_Triage
Mobile forensics triage is a useful technique in a digital forensics investigation for recovering lost, deliberately deleted, or hidden files from digital storage. It is particularly useful for solving very sensitive crimes, such as kidnapping, in a timely manner. However, existing mobile forensics triage tools cannot perform a triage examination on damaged mobile phones. This research addressed the problems of performing triage examinations on damaged Android mobile phones and of reducing the false positive results generated by current mobile forensics triage tools. It also addressed the problem of ignoring possible evidence residing in bad-block memory locations. A new forensics triage tool called M_Triage was introduced, extending Dec0de's framework to handle data retrieval challenges on damaged Android mobile phones. The tool was designed to obtain evidence quickly and accurately (i.e. valid address books, call logs, SMS, images, and videos) from damaged Android mobile phones. The front end was developed in C#, the back-end engines were written in C, and the tool was tested on five data sets. A comparison of computational processing time showed a 75% improvement over Dec0de, 36% over Lifter, 28% over XRY, and 71% over Xaver. In the same experiments on the five data sets, M_Triage carved more valid address books, call logs, SMS, images, and videos than Dec0de, Lifter, XRY, and Xaver, with average improvements of 90%, 30%, 40%, and 61%, respectively. M_Triage therefore saves time, carves more relevant files, and produces fewer false positives than the compared tools.
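The carving that such triage tools perform is, at its core, signature-based scanning of a raw dump for known file headers and footers. A minimal sketch of that technique for JPEGs (the signatures are real JPEG markers, but the scanner and dump are illustrative, not M_Triage internals):

```python
# Signature-based file carving sketch: scan a raw memory dump for
# JPEG start-of-image / end-of-image markers and report candidate
# file extents. Real triage tools add validation passes to cut the
# false positives a naive scan like this produces.
JPEG_SOI = b"\xff\xd8\xff"  # start-of-image marker
JPEG_EOI = b"\xff\xd9"      # end-of-image marker

def carve_jpegs(dump):
    """Return (start, end) byte offsets of candidate JPEGs in `dump`."""
    hits, pos = [], 0
    while True:
        start = dump.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = dump.find(JPEG_EOI, start)
        if end == -1:
            break
        hits.append((start, end + len(JPEG_EOI)))
        pos = end + len(JPEG_EOI)
    return hits

# Toy "dump": 8 filler bytes, one embedded JPEG, trailing filler
dump = b"\x00" * 8 + JPEG_SOI + b"payload" + JPEG_EOI + b"\x00" * 4
```

On a damaged phone the dump may contain remapped or bad blocks, which is why a tool that walks raw flash rather than the filesystem can recover evidence the filesystem no longer references.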
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
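An event stream is just a sequence of (timestamp, x, y, polarity) tuples, and one common bridge to conventional algorithms is to accumulate signed polarities over a time window into an "event frame". A minimal sketch (sensor size and events are made up for illustration):

```python
# Accumulate an event stream into a 2D event frame: each pixel sums
# the signed polarities of the events it received inside a time
# window. This is one simple event representation; many others
# (time surfaces, voxel grids, learned embeddings) exist.
def accumulate(events, width, height, t0, t1):
    frame = [[0] * width for _ in range(height)]
    for t, x, y, p in events:
        if t0 <= t < t1:
            frame[y][x] += 1 if p > 0 else -1
    return frame

events = [
    (0.001, 2, 1, +1),
    (0.002, 2, 1, +1),
    (0.003, 0, 0, -1),
    (0.020, 3, 2, +1),  # falls outside the window below
]
frame = accumulate(events, width=4, height=3, t0=0.0, t1=0.010)
```

Because only pixels whose brightness changed emit events, such frames are sparse, which is what makes low-latency, low-power processing of the stream feasible.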
A FPGA Spike-Based Robot Controlled with Neuro-inspired VITE
This paper presents a spike-based control system applied to a fixed
robotic platform. Our aim is to take a step toward a future complete
spike-processing architecture, from vision to direct motor actuation. This
paper covers the processing and actuation layers over an anthropomorphic robot.
The processing layer uses the neuro-inspired VITE algorithm to reach a target,
based on PFM, which takes advantage of the information a spike system carries:
its frequency. Thus, all blocks of the system are based on spikes. Each layer
is implemented on an FPGA board, and spike communication is encoded under the
AER protocol. The results show accurate behavior of the robotic platform, with
6-bit resolution over a 130º range per joint and automatic speed control of
the algorithm. Up to 96 motor controllers could be integrated in the same
FPGA, enabling positioning and object grasping by more complex anthropomorphic
robots.

Ministerio de Ciencia e Innovación TEC2009-10639-C04-02; Ministerio de Economía y Competitividad TEC2012-37868-C04-0
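The continuous dynamics that VITE (Vector Integration To Endpoint) implements can be sketched in a few lines: a difference vector is driven toward the gap between target and present position, and the position integrates that vector under a GO gain. This is a plain Euler-integration sketch with illustrative gains, not the paper's spike-based (PFM) hardware realization:

```python
# VITE reaching dynamics (1-DOF sketch):
#   dv/dt = alpha * (-v + (target - position))   difference-vector cell
#   dp/dt = go * v                               position integrator
# alpha, go, dt, and steps are illustrative choices, not paper values.
def vite_reach(target, position, alpha=5.0, go=2.0, dt=0.01, steps=1000):
    v = 0.0
    for _ in range(steps):
        v += dt * alpha * (-v + (target - position))
        position += dt * go * v
    return position

final = vite_reach(target=1.0, position=0.0)  # converges toward 1.0
```

The speed profile emerges from the interaction of `alpha` and the GO signal, which is what gives VITE its automatic speed control; in the spiking version those quantities are carried by spike frequencies rather than floats.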