An energy-aware architecture: a practical implementation for autonomous underwater vehicles
Energy awareness, fault tolerance and performance estimation are important aspects for extending the autonomy levels of today's autonomous vehicles. They relate to the concepts of survivability and reliability, two factors that often limit end users' trust in conducting large-scale deployments of such vehicles. With the aim of preparing the way for persistent autonomous operations, this work investigates these effects on underwater vehicles capable of long-term missions.
A novel energy-aware architecture for autonomous underwater vehicles (AUVs) is presented. By monitoring the vehicle's energy usage at runtime, it can detect and mitigate failures in the propulsion subsystem, one of the most common sources of mission-time problems. Furthermore, it estimates the vehicle's performance when operating in unknown environments and in the presence of external disturbances. These capabilities contribute significantly to reducing the operational uncertainty that most underwater platforms face during deployment. Using knowledge collected while conducting real missions, the proposed architecture allows on-board resource usage to be optimised. This improves the vehicle's effectiveness when operating in unknown stochastic scenarios or when facing resource scarcity.
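As an illustration of the kind of runtime energy monitoring described above, the sketch below flags a thruster whose power draw deviates sharply from its recent average. This is a hypothetical example, not the thesis's actual method: the class name, window size, and tolerance threshold are all invented for illustration.

```python
from collections import deque


class ThrusterMonitor:
    """Flag power-draw samples that deviate from a running average.

    A minimal sketch of runtime energy-usage monitoring: keep a sliding
    window of recent power samples and report an anomaly when a new
    sample deviates from the window mean by more than a fixed fraction.
    """

    def __init__(self, window=20, tolerance=0.5):
        self.window = deque(maxlen=window)   # recent power samples (watts)
        self.tolerance = tolerance           # fractional deviation treated as a fault

    def update(self, power_watts):
        """Record one sample; return True if it looks anomalous."""
        if len(self.window) == self.window.maxlen:
            mean = sum(self.window) / len(self.window)
            if abs(power_watts - mean) > self.tolerance * mean:
                self.window.append(power_watts)
                return True
        self.window.append(power_watts)
        return False
```

In a real vehicle such a check would feed a mitigation layer (e.g. reallocating thrust), but the anomaly test itself can stay this simple.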
The architecture has been implemented on a real vehicle, Nessie AUV, used for real sea experiments as part of multiple research projects. These provided the opportunity to evaluate the improvements of the proposed system on more complex autonomous tasks. Alongside Nessie AUV, the commercial platform IVER3 AUV has been involved in evaluating the feasibility of this approach. Results and operational experience, gathered both in real sea scenarios and in controlled-environment experiments, are discussed in detail, showing the benefits and the operational constraints of the introduced architecture, alongside suggestions for future research directions.
Investigation and development of an advanced virtual coordinate measuring machine
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University.

Dimensional measurement plays a critical role in product development and quality control. With the continuously increasing demand for tighter tolerances and more complex workpiece shapes in industry, dimensional metrology often becomes the bottleneck to taking the quality and performance of manufacturing to the next level. As one of the most useful and powerful classes of measuring instruments, coordinate measuring machines (CMMs) are widely employed in manufacturing industries. Since the accuracy and efficiency of a CMM have a vital impact on product quality, productivity and manufacturing cost, the evaluation and improvement of CMM performance have been important research topics since the invention of the CMM.
A novel Advanced Virtual Coordinate Measuring Machine (AVCMM) is proposed against this background. The proposed AVCMM is a software package that provides an integrated virtual environment in which the user can plan an inspection strategy for a given task, carry out a virtual measurement, and evaluate the uncertainty associated with the measurement result, all without the need for a physical machine. The obtained estimate of uncertainty can serve as rapid feedback for the user to optimize the inspection plan in the AVCMM before actual measurement, or as an evaluation of the result of a performed measurement. Without involving a physical CMM in inspection planning or uncertainty evaluation, the AVCMM can greatly reduce the time and cost of these processes. Furthermore, as the package offers a vivid 3D visual representation of the virtual environment and supports operations similar to those of a physical CMM, it not only allows the user to easily plan and optimise the inspection strategy, but also provides a cost-effective, risk-free solution for training CMM operators.
A modular, multitier architecture has been adopted to develop the AVCMM system, which incorporates a number of functional components covering CMM and workpiece modelling, error simulation, inspection simulation, feature calculation, uncertainty evaluation and 3D representation. A new collision/contact detection engine, suitable for the virtual environment of simulated CMM inspections, has been developed and utilized. A novel approach has been established to calculate the errors required for the error simulation, where the data are obtained from FEA simulations in addition to conventional experimental methods. The Monte Carlo method has been adopted for uncertainty evaluation and has been implemented with multiple options available to meet different requirements.
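The Monte Carlo uncertainty evaluation mentioned above can be sketched in miniature: perturb each probed coordinate with a simulated error model, repeat the virtual measurement many times, and take the spread of the results as the uncertainty estimate. The plain Gaussian error model and all parameters below are illustrative assumptions, not the AVCMM's actual error simulation.

```python
import random
import statistics


def monte_carlo_uncertainty(nominal_points, sigma=0.002, trials=5000, seed=1):
    """Estimate the uncertainty of a point-to-point distance measurement.

    Each of the two probed points is perturbed with independent Gaussian
    noise (sigma in mm, a stand-in for a real CMM error model); the
    distance is recomputed for every trial, and the mean and standard
    deviation of the results are returned as the Monte Carlo estimate.
    """
    rng = random.Random(seed)
    (x1, y1), (x2, y2) = nominal_points
    results = []
    for _ in range(trials):
        # Perturb both probed coordinates with the simulated error model.
        ax, ay = x1 + rng.gauss(0, sigma), y1 + rng.gauss(0, sigma)
        bx, by = x2 + rng.gauss(0, sigma), y2 + rng.gauss(0, sigma)
        results.append(((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5)
    return statistics.mean(results), statistics.stdev(results)
```

A real virtual CMM would draw the perturbations from machine-specific geometric and probing error maps rather than a single Gaussian, but the repeat-and-measure-the-spread structure is the same.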
A prototype of the proposed AVCMM system has been developed in this research. Its validity, usability and performance have been verified and evaluated through a set of experiments. The principles for utilising the AVCMM in practical use have also been established and demonstrated.
The results indicate that the proposed AVCMM system has great potential to improve the functionality and overall performance of CMMs. This research was supported by ORSAS and the School of Engineering and Design of Brunel University.
Analyzing audit trails in a distributed and hybrid intrusion detection platform
Efforts have been made over the last decades to design and perfect Intrusion Detection Systems (IDS). In addition to the widespread use of Intrusion Prevention
Systems (IPS) as perimeter defense devices in systems and networks, various IDS solutions are used together as elements of holistic approaches to cyber security incident detection and prevention, including Network-Intrusion Detection Systems
(NIDS) and Host-Intrusion Detection Systems (HIDS). Nevertheless, specific IDS and IPS technologies face several effectiveness challenges in responding to the increasing scale and complexity of information systems and the growing sophistication of attacks. The use of isolated IDS components, focused on one-dimensional approaches, strongly limits a common analysis based on evidence correlation. Today, most organizations' cyber-security operations centers still rely on conventional SIEM (Security Information and Event Management) technology. However, SIEM platforms also have significant drawbacks in dealing with heterogeneous and specialized security event sources, lacking support for flexible and uniform multi-level analysis of security audit trails involving distributed and heterogeneous systems.
In this thesis, we propose an auditing solution that leverages different intrusion detection components and synergistically combines them in a Distributed and Hybrid IDS (DHIDS) platform, taking advantage of their benefits while overcoming the effectiveness drawbacks of each. In this approach, security events are detected
by multiple probes forming a pervasive, heterogeneous and distributed monitoring
environment spread over the network, integrating NIDS, HIDS and specialized Honeypot probing systems. Events from those heterogeneous sources are converted to a canonical representation format, and then conveyed through a Publish-Subscribe
middleware to a dedicated logging and auditing system, built on top of an elastic and
scalable document-oriented storage system. The aggregated events can then be queried and matched against suspicious attack signature patterns, by means of a proposed declarative query-language that provides event-correlation semantics.
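The canonical event representation and the correlation idea can be sketched as follows. The field set, the normalizer, and the correlation rule below are hypothetical illustrations; the platform's actual schema and declarative query language are more elaborate.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CanonicalEvent:
    """A minimal common representation for heterogeneous IDS alerts."""
    source: str      # probe type, e.g. "nids", "hids", "honeypot"
    host: str        # affected host
    category: str    # normalized alert class, e.g. "scan"
    timestamp: float


def normalize_nids(raw):
    """Map a (fictional) raw NIDS record onto the canonical schema."""
    return CanonicalEvent("nids", raw["dst_ip"], raw["class"], raw["ts"])


def correlate(events, category, min_sources=2):
    """Toy correlation rule: hosts where `category` was reported by at
    least `min_sources` distinct probe types (cross-sensor evidence)."""
    by_host = {}
    for e in events:
        if e.category == category:
            by_host.setdefault(e.host, set()).add(e.source)
    return [host for host, sources in by_host.items()
            if len(sources) >= min_sources]
```

In the platform itself, normalized events flow through publish-subscribe middleware into the document store, and rules like `correlate` would be expressed in the declarative query language rather than in application code.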
Parallel Architectures for Planetary Exploration Requirements (PAPER)
The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is research oriented towards technology insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment the long-term efforts for space exploration, with particular reference to the research needs of NASA/LaRC (NASA Langley Research Center) for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased towards fault tolerance and may not be an ideal architecture for planetary and deep space probes due to high cost and complexity. The MAX concept appears to be a promising candidate, although more detailed information is required, and the feasibility of adding neural computation capability to this architecture needs to be studied. Key impact issues for the architectural design of computing systems meant for planetary missions were also identified.
Evaluating Storm Sewer Pipe Condition Using Autonomous Drone Technology
The United States Air Force (USAF) owns a total of 30.9 million linear feet (LF) of storm sewer pipes valued at approximately $2.3B in its vast portfolio of built infrastructure. Current inventory records reveal that 78% of the inventory (24.1 million LF) is over 50 years old and will soon exceed its estimated service life. Additionally, the USAF depends on contract support while its business processes undervalue in-service evaluations from long-term funding plans. Ultimately, this disconnect negatively impacts infrastructure performance and overall strategic success, and the USAF risks making uninformed decisions in a fiscally constrained environment.
This research presents a proof-of-concept effort to automate storm sewer evaluations for the USAF using unmanned ground vehicles and computer vision technology for autonomous defect detection. The results conceptually show that a low-cost autonomous system can be developed using commercial off-the-shelf (COTS) hardware and open-source software to quantify the condition of underground storm sewer pipes with an efficiency of 36%. While the results show that the prototype developed for this research is not sufficient for operational use, it does demonstrate that the USAF can leverage COTS systems in future asset management (AM) strategies to improve asset visibility at a significantly lower cost.
Quality-aware model-driven service engineering
Service engineering and service-oriented architecture, as an integration and platform technology, are a recent approach to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.
Self-Adaptation in Industry: A Survey
Computing systems form the backbone of many areas in our society, from
manufacturing to traffic control, healthcare, and financial systems. When
software plays a vital role in the design, construction, and operation, these
systems are referred to as software-intensive systems. Self-adaptation equips a
software-intensive system with a feedback loop that either automates tasks that
otherwise need to be performed by human operators or deals with uncertain
conditions. Such feedback loops have found their way to a variety of practical
applications; typical examples are an elastic cloud to adapt computing
resources and automated server management to respond quickly to business needs.
To gain insight into the motivations for applying self-adaptation in practice,
the problems solved using self-adaptation and how these problems are solved,
and the difficulties and risks that industry faces in adopting self-adaptation,
we performed a large-scale survey. We received 184 valid responses from
practitioners spread over 21 countries. Based on the analysis of the survey
data, we provide an empirically grounded overview of state-of-the-practice in
the application of self-adaptation. From that, we derive insights for
researchers to check their current research with industrial needs, and for
practitioners to compare their current practice in applying self-adaptation.
These insights also provide opportunities for the application of
self-adaptation in practice and pave the way for future industry-research
collaborations.
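The elastic-cloud example the survey mentions (automatically adapting computing resources) can be illustrated with a minimal sketch of one feedback-loop step. The function name, target utilization, and bounds below are invented for illustration, not taken from any surveyed system.

```python
def adapt_replicas(current, load_per_replica, target=0.6, minimum=1, maximum=10):
    """One monitor-analyze-plan-execute step for an elastic service.

    Given the current replica count and the observed average load per
    replica (0..1), return the replica count that brings utilization
    back toward the target, clamped to [minimum, maximum].
    """
    total_load = current * load_per_replica          # monitor/analyze
    desired = round(total_load / target)             # plan
    return max(minimum, min(maximum, desired))       # execute (clamped)
```

A production autoscaler would add smoothing and cooldown periods to avoid oscillation, but the closed loop of observe, compare against a goal, and act is the essence of self-adaptation.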
Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue
Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion
Towards an Efficient Multi-Cloud Observability Framework of Containerized Microservices in Kubernetes Platform
A recent trend in software development adopts the paradigm of distributed microservices architecture (MA). Kubernetes, a container-based virtualization platform, has become a de facto environment in which to run MA applications. Organizations may choose to run microservices at several cloud providers to optimize cost and satisfy security concerns. This leads to increased complexity, due to the need to observe the performance characteristics of distributed MA systems. Following a decision guidance models (DGM) approach, this research proposes a decentralized and scalable framework to monitor containerized microservices that run on the same or on distributed Kubernetes clusters. The framework introduces efficient techniques to gather, distribute, and analyze the observed runtime telemetry data. It offers extensible and cloud-agnostic modules that can exchange data using a multiplexing, reactive, and non-blocking data streaming approach. An experiment observing sample microservices deployed across different cloud platforms was used to evaluate the efficacy and usefulness of the framework. The proposed framework suggests an innovative approach for development and operations (DevOps) practitioners to observe services across different Kubernetes platforms. It could also serve as a reference architecture for researchers to guide further design options and analysis techniques.