Review on ship onboard machinery maintenance strategy selection using multi-criteria optimization
Abstract: Marine shipping is an important aspect of the transportation system in Canada. It is estimated that 70-80% of the items that surround us and that we use daily are brought by ships. Canadian businesses need to sell to the world, and ships carry their products abroad. For people living in Canada’s island and northern communities, marine shipping is often the only source of essentials. It is estimated that marine shipping directly contributes about $3 billion annually to Canada’s GDP through employment and other impacts. In a marine ship system, safety and reliability are very important considerations. The various system elements must be properly maintained, and organizations are now looking to maintenance optimization to achieve optimum safety, machinery reliability and reduced costs. Modern-day maintenance optimization is a decision-making problem that needs to satisfy multiple, conflicting criteria. Multi-Criteria Optimization (MCO) techniques have been used in maintenance optimization. Two main classes of maintenance MCO problems have been identified: strategy selection and interval optimization. In marine ships, maintenance strategy selection is a complex decision-making problem that has become ever more challenging to address and is accompanied by diverse constraints and economic considerations. Each maintenance strategy has its own characteristics, importance and drawbacks. The use of an inappropriate maintenance strategy affects the safety of the ship and crew, machinery reliability, maintenance cost, etc. MCO techniques have been used to select the optimal maintenance strategy for ship onboard machinery. Paper presented at the international congress held jointly by the Canadian Society for Mechanical Engineering (CSME) and the Computational Fluid Dynamics Society of Canada (CFD Canada), at Université de Sherbrooke (Québec), May 28-31, 2023.
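As a rough illustration of how a maintenance-strategy selection problem can be scored with an MCO technique, the sketch below ranks a few hypothetical strategies with a simple weighted-sum method; the strategies, criteria, weights and raw scores are assumptions for illustration, not data from the paper.

```python
# Minimal weighted-sum multi-criteria ranking sketch (illustrative only).
# Strategy names, criteria, weights and raw scores are hypothetical.

criteria = {            # weight, and whether a higher raw value is better
    "annual_cost": {"weight": 0.40, "higher_is_better": False},
    "reliability": {"weight": 0.35, "higher_is_better": True},
    "crew_safety": {"weight": 0.25, "higher_is_better": True},
}

strategies = {          # raw scores per maintenance strategy (assumed)
    "corrective":      {"annual_cost": 120, "reliability": 0.62, "crew_safety": 0.55},
    "time_based":      {"annual_cost": 180, "reliability": 0.80, "crew_safety": 0.75},
    "condition_based": {"annual_cost": 210, "reliability": 0.92, "crew_safety": 0.90},
}

def weighted_sum_rank(strategies, criteria):
    # Normalize each criterion to [0, 1], inverting cost-type criteria,
    # then aggregate the normalized values with the criterion weights.
    scores = {}
    for name, raw in strategies.items():
        total = 0.0
        for crit, spec in criteria.items():
            values = [s[crit] for s in strategies.values()]
            lo, hi = min(values), max(values)
            norm = (raw[crit] - lo) / (hi - lo) if hi > lo else 1.0
            if not spec["higher_is_better"]:
                norm = 1.0 - norm
            total += spec["weight"] * norm
        scores[name] = total
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for strategy, score in weighted_sum_rank(strategies, criteria):
        print(f"{strategy}: {score:.3f}")
```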
Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning
The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete-event techniques.
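One of the listed topics, anytime scheduling, can be illustrated with a short, generic sketch: a scheduler that always holds a feasible job order and keeps improving it by random swaps until a wall-clock budget expires. The jobs, the cost function and the improvement move are assumptions for illustration and are not drawn from any symposium system.

```python
import random
import time

# Hedged, generic anytime scheduling sketch: keep a feasible schedule at all
# times and keep improving it until the computation budget runs out.
# Job names, durations and weights are hypothetical.

jobs = [("inspect", 3, 2.0), ("overhaul", 7, 5.0), ("lubricate", 1, 1.0),
        ("calibrate", 4, 3.0), ("replace_seal", 2, 4.0)]

def weighted_completion_time(order):
    t, total = 0, 0.0
    for _name, duration, weight in order:
        t += duration
        total += weight * t
    return total

def anytime_schedule(jobs, budget_seconds=0.05):
    best = list(jobs)                       # any permutation is feasible here
    best_cost = weighted_completion_time(best)
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:      # interruptible improvement loop
        candidate = best[:]
        i, j = random.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        cost = weighted_completion_time(candidate)
        if cost < best_cost:                # always keep the best-so-far answer
            best, best_cost = candidate, cost
    return best, best_cost

if __name__ == "__main__":
    order, cost = anytime_schedule(jobs)
    print([name for name, _, _ in order], round(cost, 1))
```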
A feature-based comparison of the centralised versus market-based decision making under lens of environment uncertainty: Case of the mobile task allocation problem
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Decision-making problems are amongst the most common challenges facing managers at different management levels in the organisation: strategic, tactical, and operational. However, prior to reaching decisions at the operational level of the management hierarchy, operations management departments frequently have to deal with an optimisation process to evaluate the available decision alternatives. Industries with complex supply chain structures and service organisations that have to optimise the utilisation of their resources are examples. Conventionally, operational decisions used to be taken centrally by a decision-making authority located at the top of a hierarchically structured organisation. In order to take decisions, information related to the managed system and the affecting externalities (e.g. demand) must be globally available to the decision maker. The obtained information is then processed to reach the optimal decision. This approach usually makes extensive use of information systems (IS) containing a myriad of optimisation algorithms and meta-heuristics to process the large volume and complex nature of the data. The decisions reached are then broadcast to the passive actuators of the system for execution. On the other hand, recent advancements in information and communication technologies (ICT) have made it possible to distribute decision-making rights, and this has proved applicable in several sectors. The market-based approach is such a distributed decision-making mechanism, in which formerly passive actuators are delegated the right to take individual decisions matching their self-interest. Communication among the market agents is done through market transactions regulated by auctions. The system’s global optimisation therefore arises from the aggregated behaviour of the self-interested market agents. As opposed to the centralised approach, the main characteristics of the market-based approach are the market mechanism and the local knowledge of the agents.
The existence of both approaches has attracted several studies that compare them in different contexts. Recently, some studies have compared the centralised and market-based approaches in the context of transportation applications from an algorithmic perspective. Transportation applications and routing problems are assumed to be good candidates for this comparison given the distributed nature of the system and the presence of several sources of uncertainty. Uncertainty exceptions make decisions highly vulnerable and necessitate frequent corrective interventions to keep an efficient level of service. Motivated by the previous comparison studies, this research aims to further investigate the features of both approaches and to contrast them in the context of a distributed task allocation problem in light of environmental uncertainty. Similar applications are often faced by service industries with a mobile workforce. Contrary to the previous comparison studies that sought to compare these approaches at the mechanism level, this research attempts to identify the effect of the most significant characteristics of each approach in facing environmental uncertainty, which is reflected in this research by the arrival of dynamic tasks and the occurrence of stochastic delays. To achieve the aim of this research, a target optimisation problem from the VRP family is proposed and solved with both approaches. Given that this research does not aim to propose new algorithms, two basic solution mechanisms are adopted to compare the centralised and the market-based approach. The produced solutions are executed on a dedicated multi-agent simulation system. During execution, dynamism and stochasticity are introduced.
The research findings suggest that a market-based approach is attractive to implement in highly uncertain environments when the degree of local knowledge and workers’ experience is high and when the system tends to be complex with large dimensions. It is also suggested that a centralised approach fits better in situations where uncertainty is lower and the decision maker is able to make timely decision updates, which is in turn regulated by the size of the system at hand.
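As a loose sketch of the market-based mechanism described in this abstract, the example below allocates newly arriving tasks through a single-round sealed-bid auction in which each mobile worker bids its estimated marginal travel cost; the worker positions, cost model and bidding rule are illustrative assumptions rather than the thesis's actual mechanism.

```python
import math

# Hedged sketch of a market-based (auction) task allocation round.
# Worker positions, task locations and the marginal-cost bidding rule are
# hypothetical; the thesis's actual mechanism may differ.

workers = {
    "w1": {"pos": (0.0, 0.0), "assigned": []},
    "w2": {"pos": (5.0, 1.0), "assigned": []},
    "w3": {"pos": (2.0, 4.0), "assigned": []},
}

incoming_tasks = {"t1": (1.0, 1.0), "t2": (4.5, 0.5), "t3": (2.5, 3.5)}

def travel_cost(a, b):
    return math.dist(a, b)

def bid(worker, task_pos):
    # Each agent bids its marginal cost: distance from its last planned
    # location to the new task, using local knowledge only.
    last = worker["assigned"][-1][1] if worker["assigned"] else worker["pos"]
    return travel_cost(last, task_pos)

def auction_round(workers, tasks):
    # A simple sealed-bid, lowest-bid-wins auction per arriving task.
    for task_id, task_pos in tasks.items():
        bids = {wid: bid(w, task_pos) for wid, w in workers.items()}
        winner = min(bids, key=bids.get)
        workers[winner]["assigned"].append((task_id, task_pos))
        print(f"{task_id} -> {winner} (bid {bids[winner]:.2f})")

if __name__ == "__main__":
    auction_round(workers, incoming_tasks)
```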
Error Detection and Diagnosis for System-on-Chip in Space Applications
Thesis by compendium of publications.
Commercial electronic components, also known as Commercial-Off-The-Shelf (COTS),
are present in a wide variety of devices commonly used in our daily life. Particularly, the
use of microprocessors and highly integrated System-on-Chip (SoC) devices has fostered
the advent of increasingly intelligent electronic devices which sustain the lifestyles and the
progress of modern society. Microprocessors are present even in safety-critical systems,
such as vehicles, planes, weapons, medical devices, implants, or power plants. In any of
these cases, a fault could involve severe human or economic consequences. However, every
electronic system deals continuously with internal and external factors that could provoke
faults in its operation. The capacity of a system to operate correctly in the presence of faults
is known as fault-tolerance, and it is a requirement in the design and operation of
critical systems.
Space vehicles such as satellites or spacecraft also incorporate microprocessors to
operate autonomously or semi-autonomously during their service life, with the additional
difficulty that they cannot be repaired once in orbit, so they are considered critical systems.
In addition, the harsh conditions in space, and specifically radiation effects, pose a major
challenge for the correct operation of electronic devices. In particular, radiation-induced
soft errors have the potential to become one of the major risks for the reliability of systems
in space.
Large space missions, typically publicly funded as in the case of NASA or the European
Space Agency (ESA), have historically followed the requirement of avoiding risk at any
expense, regardless of any cost or schedule restriction. Because of that, the selection of
radiation-resistant components (known as rad-hard) specifically designed for use in
space has been the dominant methodology in the paradigm of the traditional space industry,
also known as "Old Space". However, rad-hard components commonly have a much higher
associated cost and much lower performance than other equivalent COTS devices. In fact,
COTS components have already been used successfully by NASA and ESA in missions
whose required performance could not be satisfied by any available rad-hard
component.
In recent years, access to space has become easier, in part due to the entry
of private companies into the space industry. Such companies do not always seek to avoid
risk at any cost, but must pursue profitability, so they perform a trade-off between
risk, cost, and schedule through risk management in a paradigm known as "New Space".
Private companies are often interested in delivering space-based services with the maximum
possible performance and benefit. With that objective, rad-hard components
are less attractive than COTS due to their higher cost and lower performance.
However, COTS components have not been specifically designed for use in space
and typically do not include specific techniques to avoid or mitigate radiation effects on their operation. COTS components are commercialized "as is", so it is not
possible to modify them to improve their resistance to radiation effects. Moreover,
the high levels of integration of complex, high-performance SoC devices hinder their
observability and the application of fault-tolerance techniques. This problem is especially
relevant in the case of microprocessors. Thus, there is a growing interest in the development
of techniques that make it possible to understand and improve the behavior of COTS microprocessors
under radiation without modifying their architecture and without interfering with their
operation. Such techniques may facilitate the use of COTS components in space and
maximize the performance of present and future space missions.
In this Thesis, novel techniques have been developed to detect, diagnose, and
mitigate radiation-induced errors in COTS microprocessors and SoCs using the trace
interface as an observation point. The trace interface is a resource commonly found
in modern microprocessors, mainly intended to support software development and
debugging activities during the design phase. However, it is commonly left unused
during the operational phase of the system, so it can be reused at no cost. The trace
interface constitutes a feasible connection point to observe microprocessor behavior in a
non-intrusive manner and without disturbing processor operation.
As a result of this Thesis, an IP module has been developed that is capable of gathering and
decoding the trace information of a modern, high-end COTS microprocessor. The IP is highly
configurable and customizable to support different applications and processor types. The
IP has been designed and validated using the Xilinx Zynq-7000 device as a development
platform, which is an interesting COTS device for the space industry. This device features a
dual-core ARM Cortex-A9 processor, which is a good representative of modern, high-end,
hard-core microprocessors. The resulting IP is compatible with the ARM CoreSight
technology, which enables access to trace information in ARM microprocessors. The IP is
able to detect errors in the execution flow of the microprocessor and in the application data
using trace information, in real time and with very low latency. The IP has been validated
in fault injection campaigns and also under proton and neutron irradiation campaigns in
specialized facilities. It has also been combined with other fault-tolerance techniques
to build hybrid error mitigation approaches. Experimental results demonstrate its high
detection capabilities and high potential for the diagnosis of radiation-induced errors.
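To give a rough feel for the kind of check such an IP can perform on trace data, the sketch below validates a stream of observed branch records against a whitelist of legal control-flow edges derived from a program's basic-block graph; the record format, addresses and edge set are illustrative assumptions, and the actual IP operates in hardware on ARM CoreSight trace rather than in software.

```python
# Hedged software sketch of trace-based control-flow error detection.
# The real IP works in hardware on ARM CoreSight trace packets; the record
# format, addresses and legal-edge set below are illustrative assumptions.

# Legal control-flow edges (source basic block -> allowed destinations),
# e.g. extracted offline from the binary's control-flow graph.
LEGAL_EDGES = {
    0x8000: {0x8010, 0x8030},   # conditional branch: taken / not taken
    0x8010: {0x8020},
    0x8020: {0x8000, 0x8040},   # loop back-edge or loop exit
    0x8030: {0x8040},
}

def check_trace(branch_records):
    """Flag any observed branch that is not a legal control-flow edge.

    branch_records: iterable of (source_addr, dest_addr) tuples decoded
    from the trace stream.
    """
    errors = []
    for source, dest in branch_records:
        allowed = LEGAL_EDGES.get(source)
        if allowed is None or dest not in allowed:
            errors.append((source, dest))     # control-flow error detected
    return errors

if __name__ == "__main__":
    observed = [(0x8000, 0x8010), (0x8010, 0x8020),
                (0x8020, 0x9FFC)]             # last edge corrupted by a soft error
    for src, dst in check_trace(observed):
        print(f"illegal control-flow edge: {src:#x} -> {dst:#x}")
```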
The result of this Thesis, developed in the framework of an Industrial Ph.D. between the
University Carlos III of Madrid (UC3M) and the company Arquimea, has been successfully
transferred to the company's business as a project sponsored by the European Space Agency to
continue its development and subsequent commercialization.
Doctoral Program in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Committee: President: María Luisa López Vallejo; Secretary: Enrique San Millán Heredia; Member: Luigi Di Lill
Making intelligent systems team players: Case studies and design issues. Volume 1: Human-computer interaction design
Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real-time fault management capabilities. Intelligent fault management systems within NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real-time fault management in aerospace domains; (2) recommendations and examples for improving intelligent system design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology integrating HCI design into intelligent system design.
Towards self-powered wireless sensor networks
Ubiquitous computing aims at creating smart environments in which computational and communication capabilities permeate the world at all scales, improving the human experience and quality of life in a totally unobtrusive yet completely reliable manner. According to this vision, a huge variety of smart devices and products (e.g., wireless sensor nodes, mobile phones, cameras, sensors, home appliances and industrial machines) are interconnected to realize a network of distributed agents that continuously collect, process, share and transport information. The impact of such technologies on our everyday life is expected to be massive, as it will enable innovative applications that will profoundly change the world around us. Remotely monitoring the conditions of patients and elderly people inside hospitals and at home, preventing catastrophic failures of buildings and critical structures, realizing smart cities with sustainable management of traffic and automatic monitoring of pollution levels, early detection of earthquakes and forest fires, monitoring water quality and detecting water leakages, and preventing landslides and avalanches are just some examples of life-enhancing applications made possible by smart ubiquitous computing systems.
To turn this vision into a reality, however, newly arising challenges have to be addressed, overcoming the limits that currently prevent the pervasive deployment of smart devices that are long-lasting, trusted, and fully autonomous. In particular, the most critical factor currently limiting the realization of ubiquitous computing is energy provisioning. In fact, embedded devices are typically powered by short-lived batteries that severely affect their lifespan and reliability, often requiring expensive and invasive maintenance.
In this PhD thesis, we investigate the use of energy-harvesting techniques to overcome the energy bottleneck problem suffered by embedded devices, particularly focusing on Wireless Sensor Networks (WSNs), which are one of the key enablers of pervasive computing systems. Energy harvesting makes it possible to use energy readily available from the environment (e.g., from solar light, wind, body movements, etc.) to significantly extend the typical lifetime of low-power devices, enabling ubiquitous computing systems that can last virtually forever. However, the design of energy-autonomous devices poses many challenges at both the hardware and software levels. This thesis addresses some of the most challenging problems of this emerging research area, such as devising mechanisms for energy prediction and management, improving the efficiency of the energy-scavenging process, developing protocols for harvesting-aware resource allocation, and providing solutions that enable robust and reliable security support.
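As a loose illustration of one of the problems mentioned above (energy prediction and management), the sketch below uses an EWMA-style predictor of per-slot harvested energy and scales a node's duty cycle to the predicted intake; the smoothing factor, slot data and power figures are illustrative assumptions, not the mechanisms developed in the thesis.

```python
# Hedged sketch of harvesting-aware energy management for a sensor node.
# The EWMA weighting, per-slot energy values and power budget are hypothetical.

ALPHA = 0.5                  # EWMA smoothing factor (assumed)
ACTIVE_COST_MJ = 20.0        # energy cost of one fully active slot (assumed)

def predict_next(harvested_mj, previous_estimate_mj):
    """EWMA prediction of the energy harvested in the next slot (mJ)."""
    return ALPHA * harvested_mj + (1.0 - ALPHA) * previous_estimate_mj

def choose_duty_cycle(predicted_mj, battery_mj, reserve_mj=50.0):
    """Scale the duty cycle so expected consumption matches expected intake,
    while keeping a minimum battery reserve."""
    spendable = max(0.0, predicted_mj + battery_mj - reserve_mj)
    return max(0.0, min(1.0, spendable / ACTIVE_COST_MJ))

if __name__ == "__main__":
    estimate, battery = 10.0, 60.0
    harvested_per_slot = [12.0, 8.0, 15.0, 3.0]      # measured intake (assumed)
    for harvested in harvested_per_slot:
        estimate = predict_next(harvested, estimate)
        duty = choose_duty_cycle(estimate, battery)
        battery = battery + harvested - duty * ACTIVE_COST_MJ
        print(f"predicted={estimate:.1f} mJ, duty={duty:.2f}, battery={battery:.1f} mJ")
```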
Fourth Conference on Artificial Intelligence for Space Applications
Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: space applications of expert systems in fault diagnostics, in telemetry monitoring and data collection, in design and systems integration, and in planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.
China's Evolving Surface Fleet
The missile fast-attack craft and amphibious fleets of the People's Liberation Army (PLA) Navy (PLAN) have undergone significant modernization over the past fifteen years. The capabilities of both categories of vessels have improved even if their actual numbers have not increased dramatically. Examined from the perspective of PLA doctrine and training, the missions of these forces represent the PLAN's past, present, and future.