Multi-Robot Systems: Challenges, Trends and Applications
This book is a printed edition of the Special Issue entitled “Multi-Robot Systems: Challenges, Trends, and Applications” that was published in Applied Sciences. This Special Issue collected seventeen high-quality papers that discuss the main challenges of multi-robot systems, present the trends to address these issues, and report various relevant applications. Some of the topics addressed by these papers are robot swarms, mission planning, robot teaming, machine learning, immersive technologies, search and rescue, and social robotics.
Sense and Respond
Over the past century, the manufacturing industry has undergone a number of paradigm shifts: from the Ford assembly line (1900s) and its focus on efficiency to the Toyota production system (1960s) and its focus on effectiveness and JIDOKA; from flexible manufacturing (1980s) to reconfigurable manufacturing (1990s) (both following the trend of mass customization); and from agent-based manufacturing (2000s) to cloud manufacturing (2010s) (both deploying the value-stream complexity into the material and information flow, respectively). The next natural evolutionary step is to provide value by creating industrial cyber-physical assets with human-like intelligence. This will only be possible by further integrating strategic smart-sensor technology into the manufacturing cyber-physical value-creating processes, in which industrial equipment is monitored and controlled to analyze compression, temperature, moisture, vibration, and performance. For instance, in the new wave of the “Industrial Internet of Things” (IIoT), smart sensors will enable the development of new applications by interconnecting software, machines, and humans throughout the manufacturing process, thus enabling suppliers and manufacturers to respond rapidly to changing standards. This reprint of “Sense and Respond” aims to cover recent developments in the field of industrial applications, especially smart sensor technologies that increase the productivity, quality, reliability, and safety of industrial cyber-physical value-creating processes.
Semi-automatic fault localization
One of the most expensive and time-consuming components of the debugging process is locating the errors or faults. To locate faults, developers must identify statements involved in failures and select suspicious statements that might contain faults. In practice, this localization is done by developers in a tedious and manual way, using only a single execution, targeting only one fault, and having a limited perspective into a large search space.

The thesis of this research is that fault localization can be partially automated with the use of commonly available dynamic information gathered from test-case executions in a way that is effective, efficient, tolerant of test cases that pass but also execute the fault, and scalable to large programs that potentially contain multiple faults. The overall goal of this research is to develop effective and efficient fault-localization techniques that scale to programs of large size and with multiple faults. There are three principal steps performed to reach this goal: (1) develop practical techniques for locating suspicious regions in a program; (2) develop techniques to partition test suites into smaller, specialized test suites to target specific faults; and (3) evaluate the usefulness and cost of these techniques.

In this dissertation, the difficulties and limitations of previous work in the area of fault localization are explored. A technique, called Tarantula, is presented that addresses these difficulties. Empirical evaluation of the Tarantula technique shows that it is efficient and effective for many faults. The evaluation also demonstrates that the Tarantula technique can lose effectiveness as the number of faults increases. To address the loss of effectiveness for programs with multiple faults, supporting techniques have been developed and are presented. The empirical evaluation of these supporting techniques demonstrates that they can enable effective fault localization in the presence of multiple faults. A new mode of debugging, called parallel debugging, is developed, and empirical evidence demonstrates that it can provide savings in terms of both total expense and time to delivery. A prototype visualization is provided to display the fault-localization results as well as to provide a method to interact with and explore those results. Finally, a study on the effects of the composition of test suites on fault localization is presented.

Ph.D. Committee Chair: Harrold, Mary Jean; Committee Member: Orso, Alessandro; Committee Member: Pande, Santosh; Committee Member: Reiss, Steven; Committee Member: Rugaber, Spence
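The ranking heuristic at the heart of Tarantula can be sketched from its published formula: a statement is more suspicious the larger the fraction of failing (versus passing) tests that execute it. A minimal illustration, with coverage data invented for this sketch rather than taken from the dissertation's subject programs:

```python
# Sketch of the Tarantula suspiciousness metric computed from per-statement
# coverage of passing and failing test runs. Coverage data is illustrative.

def tarantula_suspiciousness(passed, failed, total_passed, total_failed):
    """suspiciousness(s) = %failed / (%failed + %passed) for statement s."""
    pass_ratio = passed / total_passed if total_passed else 0.0
    fail_ratio = failed / total_failed if total_failed else 0.0
    if pass_ratio + fail_ratio == 0:
        return 0.0  # statement never executed by any test
    return fail_ratio / (pass_ratio + fail_ratio)

# coverage[s] = (passing tests covering s, failing tests covering s)
coverage = {"s1": (5, 0), "s2": (3, 2), "s3": (0, 2)}
scores = {s: tarantula_suspiciousness(p, f, 5, 2) for s, (p, f) in coverage.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
# s3 is covered only by failing tests, so it ranks most suspicious
```

Developers would then examine statements in decreasing order of suspiciousness, which is what makes the metric tolerant of passing tests that also execute the fault: such tests lower, but do not zero out, a faulty statement's score.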
Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments
This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners and video cameras.
While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise model of heterogeneous sensors, the dynamism of the operating conditions and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degenerate to accuracy levels lower than their design specification. This observation necessitates the derivation of methods to integrate multi-sensor data considering sensor conflict, performance degradation and potential failure during operation.
Our work in this dissertation contributes the derivation of a fault-diagnosis framework inspired by information-complexity theory to the data-fusion literature. We implement the framework as opportunistic sensing intelligence that is able to evolve a belief policy on the sensors within the multi-agent 3D mapping systems to survive and counter concerns of failure in challenging operating conditions. The implementation of the information-theoretic framework, in addition to eliminating failed/non-functional sensors and avoiding catastrophic fusion, is able to minimize uncertainty during autonomous operation by adaptively deciding to fuse or choose believable sensors. We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, along with the opportunistic sensing intelligence, provides significant improvements towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
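The general idea of fusing only believable sensors can be illustrated with a toy rule. The sketch below uses a simple median-consensus conflict test and inverse-variance weighting; it is a deliberately simplified stand-in, not the information-theoretic framework the dissertation actually derives:

```python
# Illustrative sketch only: a variance-weighted fusion step that excludes
# sensors whose readings conflict with a robust consensus (the median).
import statistics

def fuse(readings, variances, conflict_threshold=3.0):
    """Fuse one reading per sensor, dropping sensors far from the median."""
    med = statistics.median(readings)
    believable = [
        (r, v) for r, v in zip(readings, variances)
        if abs(r - med) <= conflict_threshold * v ** 0.5
    ]
    # inverse-variance weighting over the surviving sensors
    weights = [1.0 / v for _, v in believable]
    return sum(w * r for (r, _), w in zip(believable, weights)) / sum(weights)

# Three sensors agree near 10.0; a failed sensor reporting 42.0 is excluded
# instead of being allowed to cause catastrophic fusion.
est = fuse([10.1, 9.9, 10.0, 42.0], [0.04, 0.04, 0.04, 0.04])
```

Even this toy version shows the key behavior the abstract describes: a sensor that intermittently degenerates below its design specification is down-weighted or excluded rather than corrupting the fused state estimate.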
Predictive Inference with Feature Conformal Prediction
Conformal prediction is a distribution-free technique for establishing valid
prediction intervals. Although conformal prediction is conventionally conducted
in the output space, this is not the only possibility. In this
paper, we propose feature conformal prediction, which extends the scope of
conformal prediction to semantic feature spaces by leveraging the inductive
bias of deep representation learning. From a theoretical perspective, we
demonstrate that feature conformal prediction provably outperforms regular
conformal prediction under mild assumptions. Our approach can be combined not
only with vanilla conformal prediction but also with other adaptive conformal
prediction methods. Apart from experiments on existing predictive inference
benchmarks, we also demonstrate the state-of-the-art performance of the
proposed methods on large-scale tasks such as ImageNet classification and
Cityscapes image segmentation. The code is available at
\url{https://github.com/AlvinWen428/FeatureCP}. Comment: Published as a conference paper at ICLR 202
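For readers unfamiliar with the output-space baseline the paper extends, here is a minimal sketch of vanilla split conformal prediction for regression. The model, data, and function names are placeholders for illustration, not the paper's code:

```python
# Minimal sketch of split (vanilla) conformal prediction for regression:
# calibrate absolute residuals on held-out data, then widen point
# predictions by the finite-sample-corrected quantile of those residuals.
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, x_test, alpha=0.1):
    """Return a (1 - alpha) prediction interval for x_test."""
    # nonconformity scores on the calibration set
    scores = np.abs(y_cal - predict(X_cal))
    n = len(scores)
    # ceil((n + 1)(1 - alpha)) / n correction gives coverage >= 1 - alpha
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    y_hat = predict(x_test)
    return y_hat - q, y_hat + q

# toy example: identity "model" with noisy calibration labels
rng = np.random.default_rng(0)
X_cal = rng.uniform(0.0, 1.0, 200)
y_cal = X_cal + rng.normal(0.0, 0.1, 200)
lo, hi = split_conformal_interval(lambda x: x, X_cal, y_cal, np.array([0.5]))
```

The paper's feature-space variant applies this same calibrate-then-widen recipe to nonconformity scores computed in an intermediate representation of a deep network rather than directly on the outputs.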
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient in solving complicated medical tasks or for creating insights using
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method that consists
of layers that transform the data non-linearly, thus, revealing hierarchical
relationships and structures. In this review we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep learning
in cardiology that also apply to medicine in general, while proposing certain
directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
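The "layers that transform the data non-linearly" description can be made concrete with a toy forward pass; weights below are random placeholders, and no cardiology data or trained model is implied:

```python
# Toy illustration of a deep network as stacked non-linear transforms:
# each layer applies an affine map followed by a ReLU non-linearity.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Apply each (W, b) affine map followed by a non-linearity."""
    for W, b in layers:
        x = relu(x @ W + b)  # successive layers build hierarchical features
    return x

layers = [(rng.normal(size=(8, 16)), np.zeros(16)),
          (rng.normal(size=(16, 4)), np.zeros(4))]
out = forward(rng.normal(size=(2, 8)), layers)
# output shape: (2, 4); every entry is non-negative after the final ReLU
```

Without the non-linearity, the stack would collapse to a single linear map, which is why depth alone is not enough to reveal the hierarchical relationships the review describes.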