A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision
Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach to achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment production procedures to optimize the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demands and has the potential to handle both complex jobs and highly labor-intensive tasks. The ARS prototype employs a decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish a general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems yields a real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning.
The ARS currently has 18 degrees of freedom, made up of two articulated arms, one movable robot head, two charge-coupled device (CCD) cameras for producing stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype has been demonstrated.
Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)
Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensors. A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems that can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge base employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the rules that control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial phase of the project has focused on the simulation of a point robot vehicle operating in a 2D environment.
A STUDY OF MACHINE VISION IN THE AUTOMOTIVE INDUSTRY
With the growth of industrial automation, it has become increasingly important to validate the quality of every manufactured part during production. Until now, human visual inspection aided by hard tooling or machines has been the primary means to this end, but the speed of today's production lines, the complexity of production equipment, and the high standards of quality to which parts must adhere frequently make the traditional methods of industrial inspection and control impractical, if not impossible.
Consequently, new solutions have been developed for the monitoring and control of industrial processes in real time. One such technology is machine vision. After many years of research and development, computerised vision systems are now leaving the laboratory and being used successfully in the factory environment. They are robust and competitively priced as a sensing technique, and they have opened up a whole new sector for automation.
Machine vision systems are becoming an important, integral part of the automotive manufacturing process, with applications ranging from inspection, classification, robot guidance, and assembly verification through to process monitoring and control. Although the number of systems in current use is still relatively small, there can be no doubt, given the issues at stake, that the automotive industry will once again lead the way with the implementation of machine vision, just as it has done with robotic technology.
This thesis considers the issue of machine vision and, in particular, its deployment within the automotive industry. It presents work on machine vision for the prospective end-user rather than the designer of such systems. It provides sufficient background about the subject to separate machine vision promises from reality, and to permit intelligent decisions regarding machine vision applications.
The initial part of the dissertation focuses on the strategic issues affecting the selection of machine vision at the planning stage, such as the factors justifying investment, the capability of the technology, and the types of problems associated with this relatively new but complex science.
Though it is widely accepted that no two industrial machine vision systems are identical, knowledge of the basic fundamentals which underpin the structure of the technology and its application is presented.
This work covers a structured description of typical hardware components, such as camera technology and lighting systems, which form an integral part of an industrial system, and discusses the criteria for their selection. To complement this work, a further section is devoted specifically to the bewildering array of vision software analysis techniques currently available. The various techniques applied to images in order to make use of and understand the data contained within them are described, discussed, and explored in detail.
Applications for machine vision fall into two main categories: robotic guidance and inspection. Within each category there are many further sub-groups. In this context, the latter part of the thesis presents a well-structured description of several industrial case studies drawn from the automotive industry, which illustrate that machine vision is capable of providing real-time solutions to manufacturing-based problems.
In conclusion, despite the limited availability of industrially based machine vision systems, the success of implementation is not always guaranteed, as the technology imposes technical limitations and introduces new human engineering considerations.
Success depends on understanding the application and the implications of the technical requirements on both the "staging" and the "image-processing" power required of the machine vision system. The thesis has shown that the most significant elements of a successful application are indeed the lighting, optics, component design, etc.: the "staging". In the case studies investigated, optimised "staging" reduced the computing power needed in the machine vision system. Inevitably, greater computing power not only requires more time but is generally more expensive.
The experience gained from this project has demonstrated that machine vision technology is a realistic alternative means of capturing data in real time, since the current limitations of the technology are well suited to the delivery of the quality function within the manufacturing process.
Managing traffic safety: an approach to the evaluation of new vehicle safety systems
Road traffic crashes kill more than one million road users per year worldwide. Preventive measures to decrease this number are warranted. Relevant measures include introducing systematic road traffic safety management and improving the safety properties of components in the road transport system. The so-called Vision Zero, a holistic system to improve road safety, is built around the general idea of building a "safe system" based on knowledge of human capacity, in which no predicted crash or collision results in death or serious injury. A Vision Zero model for safe traffic has been developed to illustrate how different system components interact.
The aim of this thesis was to investigate how two vehicle safety systems, electronic stability control (ESC) and intelligent seat belt reminders (SBR), could deliver improved road traffic safety. Further, the Plan-Do-Check-Act approach was investigated as a systematic means for evaluating the effects of new safety technologies.
The studies were mainly based on data in the form of police records and field observations. For some aspects, in-depth studies of fatal road crashes were used.
Two studies focused on ESC systems. The first investigated only crash involvement, independent of injury outcome level. In that study, a positive effect of ESC was found both overall and for accidents on wet, icy, and/or snowy roads. In a later study with a larger data set, the effect of ESC in crashes of various injury severities was investigated. The overall effectiveness on all injury crash types was found to be 16.7% (95% C.I. 7.4–25.0%), while for serious and fatal crashes the effectiveness was 21.6% (8.8–34.4%).
The highest effects were found on serious and fatal loss-of-control crashes on wet roads, where the effect was 56.2% (32.7–79.7%), and on roads covered with ice or snow, where the effect was 49.2% (19.0–79.4%). It was estimated that for Sweden, which at the time had a total of around 500 vehicle-related deaths annually, 80-100 fatalities could have been avoided if all cars had had ESC.
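Effectiveness figures like those above are commonly derived via an induced-exposure odds ratio, comparing crash types the system can influence ("sensitive") with crash types it cannot, for cars with and without the system. This is an assumption about the general method, not necessarily the exact procedure of the thesis, and the counts below are made up purely for illustration:

```python
# Illustrative induced-exposure estimate of a safety system's effectiveness.
# sens_*: counts of system-sensitive crashes; insens_*: counts of crashes
# the system cannot influence (used as the exposure control).

def effectiveness(sens_sys, insens_sys, sens_no, insens_no):
    # Odds of a sensitive crash, with vs. without the system.
    odds_ratio = (sens_sys / insens_sys) / (sens_no / insens_no)
    return 1.0 - odds_ratio  # estimated fraction of sensitive crashes avoided

print(round(effectiveness(100, 200, 300, 400), 3))  # 0.333
```

Confidence intervals such as the ones quoted above are then typically obtained from the sampling distribution of the log odds ratio.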
The effect of SBRs in increasing the use of seat belts was studied in eleven cities in Europe, five of them in Sweden. The seat belt use rate in cars without SBR ranged from 69.6% to 96.9%. In cars with SBR, seat belt wearing ranged from 92.6% to 99.8%. Considering all data, SBR increased seat belt use in traffic by 82.2% (73.6-90.8%). The fourth study includes an analysis of how safety aspects can be placed into the Vision Zero model for safe traffic. An important finding was that the model was helpful in understanding how road traffic safety aspects performed and interrelated.
In the last study, fatal crashes of modern cars were studied, with a focus on ESC and SBR. ESC reduced fatal loss-of-control crashes by 74%. Of the nine loss-of-control cases in cars with ESC, only one occurred under normal driving conditions; the other cases were related to very low friction, loss of control initiated beside the road surface, or extreme speeding. Seat belt use in fatal crashes was 74% for cars without SBR and 93% for cars with SBR.
The studies in this thesis illustrate how important it is to follow the introduction of new safety technologies, from many aspects, all the way to the ultimate goal of eliminating fatalities and serious injuries. The Vision Zero model was helpful in defining how safety parameters interact. Plan-Do-Check-Act was found to be a valuable approach. As safety systems become more sophisticated and fatality rates diminish, better evaluation processes will be needed.
Project for a European technological platform on organic agriculture: Vision for an Organic Food and Farming Research Agenda to 2025
Facing global problems such as food security, the unsustainable use of natural resources, the degradation of soils and biodiversity, and climate change, international experts demand a change of strategy in agriculture and in agricultural research. Such a change encompasses not only re-establishing principles like closing cycles in agro-ecosystems and making best use of regulating and supporting ecosystem services, but also making use of the indigenous or tacit knowledge of farmer communities in addition to technological progress. The reports of the "Millennium Ecosystem Assessment" and of the IAASTD highlighted the need for this change in 2005 and 2008. Influenced by these recommendations, the potential of organic food and farming systems has to be assessed for the future of agriculture. Consequently, it is important to debate the future development of organic food and farming systems. Does organic agriculture stick to a niche strategy of producing high-quality food for an elite of consumers? Or is organic farming a mainstream strategy for feeding the world while minimising negative impacts on the environment? Are these two objectives combinable? Organic farming is based on management strategies which are crucial for sustainable agriculture: the productivity of crops is maintained by closed circuits of nutrients and biomass, depending on multiple interfaces between livestock and cropping systems. Crop rotations integrate leguminous plants in order to make agriculture independent of external nitrogen supply, consequently reducing energy consumption and GHG emissions. Furthermore, the management and increase of biodiversity is an inherent approach of organic agriculture for controlling pests and diseases, as is the improvement of soil fertility in order to maintain high yields. Finally, organic farming has always used indigenous and tacit knowledge. Eco-functional intensification will be the major challenge of future agriculture.
That is why the IFOAM-EU group published a vision for the future of organic food and farming systems (see www.organicresearch.org) and made a first step towards its implementation by setting up a technology platform called "organics".
Smart Decision-Making via Edge Intelligence for Smart Cities
Smart cities are an ambitious vision for future urban environments. The ultimate aim of smart cities is to use modern technology to optimize city resources and operations while improving the overall quality-of-life of their citizens. Realizing this ambitious vision will require embracing advancements in information communication technology, data analysis, and other technologies. Because smart cities naturally produce vast amounts of data, recent artificial intelligence (AI) techniques are of interest due to their ability to transform raw data into insightful knowledge to inform decisions (e.g., using live road traffic data to control traffic lights based on current traffic conditions). However, training and providing these AI applications is non-trivial and requires sufficient computing resources. Traditionally, cloud computing infrastructure has been used to process computationally intensive tasks; however, due to the time-sensitivity of many smart city applications, novel computing hardware and technologies are required. The recent advent of edge computing provides a promising computing infrastructure to support the needs of the smart cities of tomorrow. Edge computing pushes compute resources close to end users to provide reduced latency and improved scalability, making it a viable candidate to support smart cities. However, it comes with hardware limitations that must be considered.
This thesis explores the use of the edge computing paradigm for smart city applications and how to make efficient, smart decisions related to their available resources, while considering the quality-of-service provided to end users. This work can be seen as four parts. First, this work addresses how to optimally place and serve AI-based applications on edge computing infrastructure to maximize quality-of-service to end users. This is cast as an optimization problem and solved with efficient algorithms that approximate the optimal solution. Second, this work investigates the applicability of compression techniques to reduce offloading costs for AI-based applications in edge computing systems. Finally, this thesis demonstrates how edge computing can support AI-based solutions for smart city applications, namely smart energy and smart traffic. These applications are approached using the recent paradigm of federated learning.
The contributions of this thesis include the design of novel algorithms and system design strategies for placement and scheduling of AI-based services on edge computing systems, formal formulation of trade-offs between delivered AI model performance and latency, compression of offloading decisions for communication reductions, and evaluation of federated learning-based approaches for smart city applications.
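As a hedged illustration of the placement problem described above (not the thesis's actual algorithm), a greedy heuristic assigns each AI service to the edge node offering the highest quality-of-service gain, subject to node capacity. The inputs and their shapes here are assumptions:

```python
# Greedy sketch: place AI services on capacity-limited edge nodes to
# (approximately) maximize total QoS. Inputs:
#   demand[s]   - resource units service s needs
#   capacity[n] - resource units available on edge node n
#   qos[(s, n)] - QoS gained by serving s from node n

def greedy_placement(demand, capacity, qos):
    remaining = dict(capacity)
    placement = {}
    # Commit the highest-value (service, node) pairs first.
    for (s, n) in sorted(qos, key=qos.get, reverse=True):
        if s not in placement and demand[s] <= remaining[n]:
            placement[s] = n
            remaining[n] -= demand[s]
    return placement
```

Greedy heuristics like this give no general optimality guarantee, which is why such placement problems are usually formalized and solved with approximation algorithms, as the thesis describes.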
Guidance and control of an autonomous underwater vehicle
A cooperative project between the Universities of Plymouth and Cranfield was aimed at designing and developing an autonomous underwater vehicle named Hammerhead. The work presented herein formulates an advanced guidance and control system and implements it in the Hammerhead. This involves the description of the Hammerhead hardware from a control system perspective. In addition to the control system, an intelligent navigation scheme and a state-of-the-art vision system were also developed; however, the development of these submodules is outside the scope of this thesis.
The traditional way to model an underwater vehicle is to derive painstaking mathematical models based on the laws of physics and then simplify and linearise them about some operating point. One of the principal novelties of this research is the use of system identification techniques on actual vehicle data obtained from full-scale in-water experiments. Two new guidance mechanisms have also been formulated for cruising-type vehicles. The first is a modification of the proportional navigation guidance used for missiles, whilst the other is a hybrid law combining several guidance strategies employed during different phases of the flight.
In addition to the modelling process and guidance systems, a number of robust control methodologies have been conceived for Hammerhead. A discrete-time linear quadratic Gaussian with loop transfer recovery based autopilot is formulated and integrated with the conventional and more advanced guidance laws proposed. A model predictive controller (MPC) has also been devised, constructed using artificial intelligence techniques such as genetic algorithms (GAs) and fuzzy logic. A GA is employed as an online optimization routine, whilst fuzzy logic is exploited as an objective function within the MPC framework. The GA-MPC autopilot has been implemented in Hammerhead in real time, and results demonstrate excellent robustness despite the presence of disturbances and ever-present modelling uncertainty. To the author's knowledge, this is the first successful application of a GA to real-time optimization for controller tuning in the marine sector, and the thesis thus makes a novel and useful contribution to control system design in general. The controllers are also integrated with the proposed guidance laws, which is also considered an invaluable contribution to knowledge. Moreover, the autopilots are used in conjunction with a vision-based altitude information sensor, and simulation results demonstrate the efficacy of the controllers in coping with uncertain altitude demands.
J&S MARINE LTD., QINETIQ, SUBSEA 7 AND SOUTH WEST WATER PL
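The GA-as-online-optimiser idea above can be sketched in miniature. This is a minimal illustration, not the Hammerhead autopilot: the toy plant model, the quadratic cost (standing in for the fuzzy-logic objective), and all GA settings are assumptions.

```python
# Sketch of a GA used as the online optimiser inside an MPC loop: each
# chromosome is a candidate control sequence over the horizon; only the
# first control of the best sequence is applied (receding horizon).
import random

def ga_mpc_step(x0, setpoint, step, horizon=5, pop=30, gens=20):
    def cost(u_seq):                       # quadratic tracking cost
        x, c = x0, 0.0
        for u in u_seq:
            x = x + step(x, u)             # user-supplied plant increment
            c += (x - setpoint) ** 2 + 0.01 * u * u
        return c

    population = [[random.uniform(-1, 1) for _ in range(horizon)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        parents = population[: pop // 2]   # elitist selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, horizon)        # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(horizon)             # gaussian mutation
            child[i] = min(1.0, max(-1.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return min(population, key=cost)[0]    # apply only the first control
```

In a real autopilot this step would run once per control cycle, with the GA warm-started from the previous cycle's best sequence.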
Visual system identification: learning physical parameters and latent spaces from pixels
In this thesis, we develop machine learning systems that are able to leverage knowledge of equations of motion (scene-specific or scene-agnostic) to perform object discovery, physical parameter estimation, position and velocity estimation, and camera pose estimation, and to learn structured latent spaces that satisfy physical dynamics rules. These systems are unsupervised, learning from unlabelled videos, and use as inductive biases the general equations of motion followed by objects of interest in the scene. This is an important task, as in many complex real-world environments ground-truth states are not available, although there is physical knowledge of the underlying system. Our goals with this approach, i.e. the integration of physics knowledge with unsupervised learning models, are to improve vision-based prediction, enable new forms of control, increase data-efficiency, and provide model interpretability, all of which are key areas of interest in machine learning. With the above goals in mind, we start by asking the following question: given a scene in which the objects' motions are known up to some physical parameters (e.g. a ball bouncing off the floor with unknown restitution coefficient), how do we build a model that uses such knowledge to discover the objects in the scene and estimate these physical parameters?
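As a toy instance of this question, consider estimating a bouncing ball's restitution coefficient from observed bounce apex heights: for an ideal bounce each apex is e² times the previous one, so e is recoverable from successive height ratios. This is a simplified stand-in for the full vision-based problem, where the heights would themselves be inferred from pixels:

```python
# Estimate the restitution coefficient e from apex heights of successive
# bounces, using the ideal-bounce relation h_{k+1} = e^2 * h_k.
import math

def estimate_restitution(heights):
    ratios = [heights[k + 1] / heights[k] for k in range(len(heights) - 1)]
    mean_ratio = sum(ratios) / len(ratios)   # average over observed bounces
    return math.sqrt(mean_ratio)

print(round(estimate_restitution([1.0, 0.64, 0.4096]), 2))  # 0.8
```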
Our first model, PAIG (Physics-as-Inverse-Graphics), approaches this problem from a vision-as-inverse-graphics perspective, describing the visual scene as a composition of objects defined by their location and appearance, which are rendered onto the frame in a graphics manner. This is a known approach in the unsupervised learning literature, where the fundamental problem then becomes that of derendering, that is, inferring and discovering these locations and appearances for each object. In PAIG we introduce a key rendering component, the Coordinate-Consistent Decoder, which enables the integration of the known equations of motion with an inverse-graphics autoencoder architecture (trainable end-to-end), to perform simultaneous object discovery and physical parameter estimation. Although trained on simple simulated 2D scenes, we show that knowledge of the physical equations of motion of the objects in the scene can be used to greatly improve future prediction and provide physical scene interpretability.
Our second model, V-SysId, tackles the limitations shown by the PAIG architecture, namely the training difficulty, the restriction to simulated 2D scenes, and the need for noiseless scenes without distractors. Here, we approach the problem from first principles by asking: are neural networks a necessary component to solve this problem? Can we use simpler ideas from classical computer vision instead? With V-SysId, we approach the problem of object discovery and physical parameter estimation from a keypoint extraction, tracking, and selection perspective, composed of 3 separate stages: proposal keypoint extraction and tracking; 3D equation fitting and camera pose estimation from 2D trajectories; and entropy-based trajectory selection. Since all the stages use lightweight algorithms and optimisers, V-SysId is able to perform joint object discovery, physical parameter estimation, and camera pose estimation from even a single video, drastically improving data-efficiency. Additionally, because it does not use a rendering/derendering approach, it can be used in real 3D scenes with many distractor objects. We show that this approach enables a number of interesting applications, such as vision-based robot end-effector localisation and remote breath rate measurement.
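A simplified stand-in for the trajectory-selection stage (an assumption for illustration, not V-SysId's exact entropy criterion): score each candidate keypoint trajectory by how well a fitted motion model explains it, here a least-squares line fit, and keep the best-explained one, so that structured object motion wins out over distractors.

```python
# Select the keypoint trajectory best explained by a simple motion model.
# A track is a list of (t, y) samples; the score is the residual variance
# after an ordinary least-squares line fit (lower = more structured).

def trajectory_score(track):
    n = len(track)
    mt = sum(t for t, _ in track) / n
    my = sum(y for _, y in track) / n
    sxx = sum((t - mt) ** 2 for t, _ in track)
    sxy = sum((t - mt) * (y - my) for t, y in track)
    slope = sxy / sxx if sxx else 0.0
    return sum((y - (my + slope * (t - mt))) ** 2 for t, y in track) / n

def select_trajectory(tracks):
    return min(tracks, key=trajectory_score)
```

In the actual method the fitted model is the 3D physical equation of motion and the selection criterion is entropy-based, but the pipeline shape (fit, score, select) is the same.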
Finally, we move into the area of structured recurrent variational models from vision, where we are motivated by the following observation: in existing models, applying a force in the direction from a start point to an end point (in latent space) does not result in a movement from the start point towards the end point, even in the simplest unconstrained environments. This means that the latent space learned by these models does not follow Newton's law, where the acceleration vector has the same direction as the force vector (in point-mass systems), and prevents the use of PID controllers, which are the simplest and best understood type of controller. We solve this problem by building inductive biases from Newtonian physics into the latent variable model, which we call NewtonianVAE. Crucially, Newtonian correctness in the latent space brings about the ability to perform proportional (or PID) control, as opposed to the more computationally expensive model predictive control (MPC). PID controllers are ubiquitous in industrial applications, but had thus far lacked integration with unsupervised vision models. We show that the NewtonianVAE learns physically correct latent spaces in simulated 2D and 3D control systems, which can be used to perform goal-based discovery and control in imitation learning, and path following via Dynamic Motion Primitives.
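The point about Newtonian latent spaces and proportional control can be made concrete with a toy point mass: if latent accelerations align with applied forces, a simple P(D) controller drives the latent state to a goal. The dynamics and gains below are illustrative, not the NewtonianVAE itself.

```python
# Toy point-mass rollout under PD control. Because acceleration equals the
# applied force (unit mass), the proportional term pulls the state toward
# the goal and the derivative term damps the motion.

def p_control_rollout(x, v, goal, kp=2.0, kd=1.0, dt=0.05, steps=400):
    for _ in range(steps):
        u = kp * (goal - x) - kd * v   # PD control law in the latent space
        v += u * dt                    # Newton: a = F / m, with m = 1
        x += v * dt                    # semi-implicit Euler integration
    return x

print(round(p_control_rollout(0.0, 0.0, 1.0), 2))  # 1.0
```

In a latent space that violates this force-acceleration alignment, the same controller would push the state in the wrong direction, which is exactly the failure mode the NewtonianVAE is designed to remove.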
Methods for Wheel Slip and Sinkage Estimation in Mobile Robots
Future outdoor mobile robots will have to explore larger and larger areas, performing difficult tasks, while preserving, at the same time, their safety. This will primarily require advanced sensing and perception capabilities. Video sensors supply contact-free, precise measurements and are flexible devices that can be easily integrated with multi-sensor robotic platforms. Hence, they represent a potential answer to the need of new and improved perception capabilities for autonomous vehicles. One of the main applications of vision in mobile robotics is localization. For mobile robots operating on rough terrain, conventional dead reckoning techniques are not well suited, since wheel slipping, sinkage, and sensor drift may cause localization errors that accumulate without bound during the vehicle's travel. Conversely, video sensors are exteroceptive devices, that is, they acquire information from the robot's environment; therefore, vision-based motion estimates are independent of the knowledge of terrain properties and wheel-terrain interaction. Indeed, like dead reckoning, vision could lead to accumulation of errors; however, it has been proved that, compared to dead reckoning, it allows more accurate results and can be considered as a promising solution to the problem of robust robot positioning in high-slip environments. As a consequence, in the last few years, several localization methods using vision have been developed. Among them, visual odometry algorithms, based on the tracking of visual features over subsequent images, have been proved particularly effective.
Accurate and reliable methods to sense slippage and sinkage are also desirable, since these effects compromise the vehicle's traction performance and energy consumption, and lead to gradual deviation of the robot from the intended path, possibly resulting in large drift and poor performance of localization and control systems. For example, the use of conventional dead-reckoning techniques is largely compromised, since it is based on the assumption that wheel revolutions can be translated into corresponding linear displacements. Thus, if one wheel slips, the associated encoder will register revolutions even though these revolutions do not correspond to a linear displacement of the wheel. Conversely, if one wheel skids, fewer encoder pulses will be counted. Slippage and sinkage measurements are also valuable for terrain identification according to classical terramechanics theory. This chapter investigates vision-based onboard technology to improve the mobility of robots on natural terrain. A visual odometry algorithm and two methods for online measurement of vehicle slip angle and wheel sinkage, respectively, are discussed. Test results are presented showing the performance of the proposed approaches using an all-terrain rover moving across uneven terrain.
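A minimal sketch of the slip idea discussed above (illustrative, not the chapter's algorithm): compare the displacement implied by wheel encoders with an exteroceptive, e.g. vision-based, displacement estimate. The longitudinal slip ratio is then the fraction of commanded travel that did not become actual motion.

```python
# Longitudinal slip ratio for a driving wheel:
#   i = (d_encoder - d_actual) / d_encoder
# where d_encoder comes from wheel revolutions and d_actual from an
# exteroceptive estimate such as visual odometry. i near 1 means the
# wheel spins in place; i near 0 means pure rolling.
import math

def slip_ratio(encoder_revs, wheel_radius, actual_displacement):
    commanded = 2 * math.pi * wheel_radius * encoder_revs
    if commanded == 0:
        return 0.0
    return (commanded - actual_displacement) / commanded

# 10 revolutions of a 0.1 m wheel command ~6.28 m; vision reports 4.7 m.
print(round(slip_ratio(10, 0.1, 4.7), 2))  # 0.25
```

A negative ratio would correspond to the skidding case described above, where the wheel travels farther than its encoder counts suggest.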