Vision-based weed identification with farm robots
Robots in agriculture offer new opportunities for real-time weed identification and rapid removal operations. Weed identification and control remains one of the most challenging tasks in agriculture, particularly in organic farming practices. Owing to environmental impacts and food-quality concerns, the excessive use of chemicals for controlling weeds and diseases is declining, and the cost of herbicides and their field application must be optimized. As an alternative, a smart weed-identification technique followed by mechanical and thermal weed control can meet organic farmers’ expectations. The smart identification technique is based on the concepts of ‘shape matching’ and ‘active shape modeling’ of plant and weed leaves. The automated weed detection and control system consists of three major components: i) an eXcite multispectral camera, ii) the LTI image-processing library, and iii) the Hortibot robotic vehicle. These components are combined in a Linux environment on the PC associated with the eXcite camera. Laboratory experiments on active shape matching have shown promising results, which will be further developed into the automated weed detection system. The Hortibot robot will carry the camera unit at the front end and the mechanical weed remover at the rear end. The system will be upgraded for intensive commercial applications in maize and other row crops.
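The ‘shape matching’ concept above can be illustrated with a minimal sketch: compare leaf outlines via a scale-normalized centroid-distance signature. This is an illustrative stand-in, not the system's actual LTI-library implementation; all function names and the toy contours are invented for the example.

```python
import numpy as np

def shape_signature(contour, n_samples=64):
    """Centroid-distance signature: distance from each boundary point
    to the shape centroid, resampled to a fixed length and scale-normalized."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples)
    sig = np.interp(idx, np.arange(len(d)), d)
    return sig / sig.max()  # scale invariance

def match_score(sig_a, sig_b):
    """Mean squared difference; the minimum over circular shifts of the
    signature gives tolerance to the starting point of the contour trace."""
    shifts = [np.mean((np.roll(sig_a, k) - sig_b) ** 2)
              for k in range(len(sig_a))]
    return min(shifts)

# Toy contours standing in for leaf outlines: a circle vs. an elongated ellipse.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[2 * np.cos(t), 0.5 * np.sin(t)]

s_circle = shape_signature(circle)
s_ellipse = shape_signature(ellipse)
print(match_score(s_circle, s_circle))   # ~0.0: identical shapes
print(match_score(s_circle, s_ellipse))  # larger: dissimilar shapes
```

A real crop/weed classifier would compare a detected leaf's signature against a library of crop and weed templates and act on the best match.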
Optimized imaging using non-rigid registration
The extraordinary improvements of modern imaging devices offer access to data with unprecedented information content. However, widely used image-processing methodologies fall far short of exploiting the full breadth of information offered by numerous types of scanning probe, optical, and electron microscopies. In many applications, it is necessary to keep measurement intensities below a desired threshold. We propose a methodology for extracting an increased level of information by processing a series of data sets suffering, in particular, from a high degree of spatial uncertainty caused by complex multiscale motion during the acquisition process. An important role is played by a non-rigid pixel-wise registration method that can cope with low signal-to-noise ratios. This is accompanied by formulating objective quality measures that replace human intervention and visual inspection in the processing chain. Scanning transmission electron microscopy of siliceous zeolite material exhibits the above-mentioned obstructions and therefore serves as an orientation and test case for our procedures.
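The idea of registering a series of low-dose frames and combining them to raise the signal-to-noise ratio can be sketched in a much simplified form. The example below uses rigid, translation-only phase correlation followed by averaging, whereas the abstract's method is non-rigid and pixel-wise; the function names and toy data are invented for the illustration.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer-pixel translation of img relative to ref
    from the peak of the phase-correlation surface."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # wrap shifts into the signed range
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def register_and_average(frames):
    """Align every noisy frame to the first one, then average:
    signal adds coherently while noise averages out."""
    ref = frames[0]
    acc = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = phase_correlation_shift(ref, f)
        acc += np.roll(f, (-dy, -dx), axis=(0, 1))
    return acc / len(frames)

# Toy series: a bright square drifting across frames, plus strong noise.
rng = np.random.default_rng(0)
base = np.zeros((64, 64)); base[20:30, 20:30] = 1.0
frames = [np.roll(base, (s, s), axis=(0, 1)) + 0.5 * rng.standard_normal((64, 64))
          for s in range(5)]
avg = register_and_average(frames)
```

The non-rigid case replaces the single global shift with a smooth per-pixel deformation field, which is what allows correction of the complex multiscale motion the abstract describes.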
Disentangling astroglial physiology with a realistic cell model in silico
Electrically non-excitable astroglia take up neurotransmitters, buffer extracellular K+ and generate Ca2+ signals that release molecular regulators of neural circuitry. The underlying machinery remains enigmatic, mainly because the sponge-like astrocyte morphology has been difficult to access experimentally or explore theoretically. Here, we systematically incorporate multi-scale, three-dimensional astroglial architecture into a realistic multi-compartmental cell model, which we constrain by empirical tests and integrate into the NEURON computational biophysical environment. This approach is implemented as a flexible astrocyte-model builder, ASTRO. As a proof of concept, we explore an in silico astrocyte to evaluate basic features of cell physiology that are inaccessible experimentally. Our simulations suggest that currents generated by glutamate transporters or K+ channels have negligible distant effects on membrane voltage and that individual astrocytes can successfully handle extracellular K+ hotspots. We show how intracellular Ca2+ buffers affect Ca2+ waves and why the classical Ca2+ sparks-and-puffs mechanism is theoretically compatible with common readouts of astroglial Ca2+ imaging.
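The claim that transporter or channel currents have only local voltage effects follows from basic cable theory: in a leaky cable, a voltage deviation decays over a short space constant. A minimal steady-state discrete-cable sketch illustrates this; the parameters are invented for the example and are not those of the ASTRO model.

```python
import numpy as np

# Steady-state discrete cable: N compartments, axial conductance g_a between
# neighbours, leak conductance g_m from each compartment to rest, and current
# I injected at compartment 0. All values are illustrative, not ASTRO's.
N, g_a, g_m, I = 50, 10.0, 1.0, 1.0

# Build the conductance matrix G so that G @ v = i expresses Kirchhoff's
# current law in every compartment.
G = np.zeros((N, N))
for k in range(N):
    G[k, k] = g_m
    if k > 0:
        G[k, k] += g_a; G[k, k - 1] -= g_a
    if k < N - 1:
        G[k, k] += g_a; G[k, k + 1] -= g_a

i = np.zeros(N); i[0] = I
v = np.linalg.solve(G, i)  # voltage deviation from rest per compartment

# The deviation falls off roughly exponentially with distance (space constant
# ~ sqrt(g_a / g_m) compartments), so a leaky membrane keeps the
# depolarisation local -- consistent with the abstract's conclusion.
print(v[0], v[10], v[40])
```

With a high leak relative to axial coupling, the injected current's voltage footprint is confined to a few compartments, which is the intuition behind the "negligible distant effects" result.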
The sensing and perception subsystem of the NASA research telerobot
A useful space telerobot for on-orbit assembly, maintenance, and repair tasks must have a sensing and perception subsystem that can provide the locations, orientations, and velocities of all relevant objects in the work environment. This function must be accomplished with sufficient speed and accuracy to permit effective grappling and manipulation. Appropriate symbolic names must be attached to each object for use by higher-level planning algorithms. Sensor data and inferences must be presented to the remote human operator in a way that is both comprehensible enough to ensure safe autonomous operation and useful for direct teleoperation. Research at JPL toward these objectives is described.
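Estimating object positions and velocities from noisy sensor readings is classically done with a recursive filter; a one-dimensional constant-velocity Kalman filter serves as a generic sketch of the idea. The abstract does not specify JPL's estimator, so the dynamics, noise levels, and all names below are assumptions made for illustration.

```python
import numpy as np

# State x = [position, velocity]; we observe only noisy position.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics (assumed)
H = np.array([[1.0, 0.0]])             # measurement picks out position
Q = 1e-4 * np.eye(2)                   # process-noise covariance (assumed)
R = np.array([[0.05]])                 # measurement-noise covariance (assumed)

x = np.zeros((2, 1))                   # initial state estimate
P = np.eye(2)                          # initial estimate covariance
rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 1.0

for _ in range(100):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0.0, 0.05)   # noisy position measurement
    # predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # update step
    y = z - (H @ x)[0, 0]                  # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T / S[0, 0]                  # Kalman gain
    x = x + K * y
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())  # estimated [position, velocity], approx. [10, 1]
```

A real telerobot tracker would extend this to 3-D pose (position plus orientation) per object, but the predict/update structure is the same.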