10 research outputs found
Improved D-S evidence theory for pipeline damage identification
Identifying the location of damage in a pipeline is important and still needs improvement, owing to problems such as the limited number of sensors that can be placed in the pipeline and the uncertainty of the working environment. This paper presents a new approach with improved diagnostic performance. The approach combines an improved D-S evidence theory with an improved index built on three specific identification techniques known to the pipeline community. The improved D-S evidence theory is based on the idea of preserving similar evidence and using a weighted average of the evidence to represent conflicting evidence. The paper also presents an experiment that verifies the new approach by comparing its results with those of the traditional D-S approach; the experimental results show that the new approach is promising. The new approach also makes it possible to visualize the location of damage in the pipeline, which facilitates access to the damage for repair and any compensation measures.
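As a rough illustration of the weighted-average-evidence idea described above, the sketch below weights each body of evidence by its average similarity to the others, so that a highly conflicting sensor contributes less to the averaged evidence. The distance measure, the weighting scheme, and the hypothesis names are illustrative assumptions, not the paper's exact formulas.

```python
# Minimal sketch of credibility-weighted evidence averaging (assumed
# formulas, not the paper's): each mass function is weighted by its
# average similarity to the other mass functions, then a weighted
# average evidence is formed.

def credibility_weights(masses, hypotheses):
    """Weight each mass function by its average similarity to the others."""
    def dist(m1, m2):
        # Total-variation-style distance between two mass functions.
        return sum(abs(m1.get(h, 0.0) - m2.get(h, 0.0)) for h in hypotheses) / 2
    weights = []
    for i, mi in enumerate(masses):
        sims = [1.0 - dist(mi, mj) for j, mj in enumerate(masses) if j != i]
        weights.append(sum(sims) / len(sims))
    total = sum(weights)
    return [w / total for w in weights]

def weighted_average_evidence(masses, hypotheses):
    w = credibility_weights(masses, hypotheses)
    return {h: sum(wi * m.get(h, 0.0) for wi, m in zip(w, masses))
            for h in hypotheses}

# Hypothetical pipeline example: two sensors agree on "leak", one conflicts.
hyps = ["leak", "blockage", "normal"]
sensors = [
    {"leak": 0.8, "blockage": 0.1, "normal": 0.1},
    {"leak": 0.7, "blockage": 0.2, "normal": 0.1},
    {"leak": 0.1, "blockage": 0.1, "normal": 0.8},  # conflicting sensor
]
avg = weighted_average_evidence(sensors, hyps)
# The conflicting sensor receives a lower weight, so "leak" dominates
# the averaged evidence.
```

In a full scheme of this kind, the averaged evidence would then be combined with itself via Dempster's rule; the averaging step alone already damps the influence of the outlier.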
iDCR: Improved Dempster Combination Rule for Multisensor Fault Diagnosis
Data gathered from multiple sensors can be effectively fused for accurate monitoring in many engineering applications. In the last few years, one of the most sought-after applications of multisensor fusion has been fault diagnosis. Dempster-Shafer theory of evidence, together with Dempster's combination rule, is a very popular multisensor fusion method that can be successfully applied to fault diagnosis. However, if the information obtained from the different sensors is highly conflicting, the classical Dempster's combination rule may produce counter-intuitive results. To overcome this shortcoming, this paper proposes an improved combination rule for multisensor data fusion. Numerical examples are put forward to show the effectiveness of the proposed method, and a comparative analysis with existing methods demonstrates its superiority in multisensor fault diagnosis.
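For context, the counter-intuitive behaviour of the classical rule under high conflict can be reproduced in a few lines. This is a minimal sketch of Dempster's classical combination rule on Zadeh's well-known two-sensor example, not the iDCR rule the abstract proposes.

```python
# Sketch of classical Dempster's combination rule. Mass functions are
# dicts mapping frozenset hypotheses to mass. Under high conflict the
# rule behaves counter-intuitively, as the abstract notes.

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's classical rule."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict; rule undefined")
    k = 1.0 - conflict  # normalization constant
    return {s: v / k for s, v in combined.items()}

A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m1 = {A: 0.99, B: 0.01}   # sensor 1: almost certain it is fault A
m2 = {C: 0.99, B: 0.01}   # sensor 2: almost certain it is fault C
fused = dempster_combine(m1, m2)
# Although both sensors give B only 1% support, B ends up with
# essentially all the mass after normalization, because A and C
# conflict completely.
```

Improved rules such as the one this paper proposes redistribute the conflicting mass differently precisely to avoid this effect.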
Multi-agent System for Tracking and Classification of Moving Objects
In the past, computational barriers limited the complexity of video and image processing applications, but faster computers have recently enabled researchers to consider more complex algorithms that can deal successfully with vehicle and pedestrian detection. However, much of this work pays attention only to the accuracy of the final results, leaving aside computational efficiency. This paper therefore describes a system built on the multi-agent paradigm that is capable of regulating itself dynamically, taking into account parameters pertaining to detection, tracking, and classification, in order to keep the computational burden as low as possible at all times without in any way compromising the reliability of the results.
Fusion system based on multi-agent systems to merge data from WSN
This paper presents an intelligent multi-agent system aimed at improving healthcare and assistance for elderly and dependent people in geriatric residences and in their homes. The system is based on the PANGEA multi-agent architecture, which provides a high-level framework for intelligent information fusion and management. The system makes use of wireless sensor networks and a real-time locating system to obtain heterogeneous data, and it can provide autonomous responses according to the environment status. The high-level design of the system, which extracts and stores information, plays an essential role in dealing with the avalanche of context data. In our case, the multi-agent approach works satisfactorily because each agent, representing an autonomous entity with different capabilities and offering different services, works collaboratively with the others. Several tests have been performed on this platform to evaluate and demonstrate its validity.
A New Similarity Measure of Generalized Trapezoidal Fuzzy Numbers and Its Application on Rotor Fault Diagnosis
Fault diagnosis technology plays a vital role in a variety of critical engineering applications. Fuzzy approaches are widely employed to cope with decision-making problems because of their simplicity and widespread use. This paper proposes a new similarity measure of generalized trapezoidal fuzzy numbers for fault diagnosis. The presented measure combines the geometric distance, the center-of-gravity point, the perimeter, and the area of the generalized trapezoidal fuzzy numbers to calculate the degree of similarity between them, and it handles both standardized and non-standardized generalized trapezoidal fuzzy numbers. Some properties of the proposed similarity measure are proved, and 12 sets of generalized fuzzy numbers are used to compare its results with those of existing similarity measures; the comparison indicates that the proposed measure overcomes the drawbacks of the existing ones. Finally, a fault diagnosis experiment is carried out in the laboratory on a multifunctional flexible rotor experiment bench. Experimental results demonstrate that the proposed similarity measure is more effective than other methods for rotor fault diagnosis.
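To make the four ingredients concrete, the sketch below computes a similarity between generalized trapezoidal fuzzy numbers (a1, a2, a3, a4; w) from geometric distance, centre of gravity, perimeter, and area. The particular way the ingredients are combined here is an illustrative assumption, not the paper's formula; the centre-of-gravity formulas are the standard ones for generalized trapezoids.

```python
import math

def cog(a, w):
    """Centre of gravity of a generalized trapezoidal fuzzy number."""
    a1, a2, a3, a4 = a
    if a4 == a1:                      # degenerate (crisp) number
        return a1, w / 2.0
    y = w * ((a3 - a2) / (a4 - a1) + 2.0) / 6.0
    x = (y * (a2 + a3) + (w - y) * (a1 + a4)) / (2.0 * w)
    return x, y

def perimeter(a, w):
    a1, a2, a3, a4 = a
    return (math.hypot(a2 - a1, w) + math.hypot(a4 - a3, w)
            + (a3 - a2) + (a4 - a1))

def area(a, w):
    a1, a2, a3, a4 = a
    return 0.5 * w * ((a3 - a2) + (a4 - a1))

def similarity(a, wa, b, wb):
    """Illustrative combination of the four ingredients (assumed form)."""
    d_geo = sum(abs(x - y) for x, y in zip(a, b)) / 4.0
    (xa, ya), (xb, yb) = cog(a, wa), cog(b, wb)
    d_cog = math.hypot(xa - xb, ya - yb)
    # Perimeter and area enter as a min/max ratio; adding the perimeters
    # keeps the ratio defined even when an area is zero (crisp numbers).
    shape = ((min(perimeter(a, wa), perimeter(b, wb))
              + min(area(a, wa), area(b, wb)))
             / (max(perimeter(a, wa), perimeter(b, wb))
                + max(area(a, wa), area(b, wb))))
    return (1.0 - d_geo) * (1.0 - d_cog) * shape

A = (0.1, 0.2, 0.3, 0.4)
B = (0.3, 0.4, 0.5, 0.6)
s_identical = similarity(A, 1.0, A, 1.0)   # identical numbers: 1.0
s_shifted = similarity(A, 1.0, B, 1.0)     # shifted number: lower
```

Identical numbers score exactly 1, and the score decreases with translation; a published measure of this family would also be shown to satisfy symmetry and other axioms the abstract mentions.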
Methods and Systems for Fault Diagnosis in Nuclear Power Plants
This research mainly deals with fault diagnosis in nuclear power plants (NPPs), based on a framework that integrates contributions from fault scope identification, optimal sensor placement, sensor validation, equipment condition monitoring, and diagnostic reasoning based on pattern analysis. The research focuses in particular on applications where the data collected from the existing SCADA (supervisory control and data acquisition) system are not sufficient for the fault diagnosis system. Specifically, the following methods and systems are developed.
A sensor placement model is developed to guide optimal placement of sensors in NPPs. The model includes 1) a method to extract a quantitative fault-sensor incidence matrix for a system; 2) a fault diagnosability criterion based on the degree of singularities of the incidence matrix; and 3) procedures to place additional sensors to meet the diagnosability criterion. Usefulness of the proposed method is demonstrated on a nuclear power plant process control test facility (NPCTF). Experimental results show that three pairs of undiagnosable faults can be effectively distinguished with three additional sensors selected by the proposed model.
A wireless sensor network (WSN) is designed and a prototype is implemented on the NPCTF. A WSN is an effective tool for collecting data for fault diagnosis, especially in systems where additional measurements are needed; the WSN performs distributed data processing and information fusion for fault diagnosis. Experimental results on the NPCTF show that the WSN system can diagnose all six fault scenarios considered for the system.
A fault diagnosis method based on semi-supervised pattern classification is developed that requires significantly less training data than existing fault diagnosis models typically do. It is a promising tool for applications in NPPs, where it is usually difficult to obtain training data under fault conditions for a conventional fault diagnosis model. The proposed method has successfully diagnosed nine types of faults physically simulated on the NPCTF.
For equipment condition monitoring, a modified S-transform (MST) algorithm is developed by using shaping functions, particularly sigmoid functions, to modify the window width of the standard S-transform. The MST can achieve superior time-frequency resolution for applications that involve non-stationary multi-modal signals, where classical methods may fail. The effectiveness of the proposed algorithm is demonstrated using a vibration test system as well as in detecting a collapsed pipe support in the NPCTF. The experimental results show that, by observing changes in the time-frequency characteristics of vibration signals, one can effectively detect faults occurring in components of an industrial system.
To ensure that a fault diagnosis system does not suffer from erroneous data, a fault detection and isolation (FDI) method based on kernel principal component analysis (KPCA) is extended for sensor validation, where sensor faults are detected and isolated from the reconstruction errors of a KPCA model. The method is validated using measurement data from a physical NPP.
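The reconstruction-error idea can be sketched in a few lines. The example below uses scikit-learn's KernelPCA on synthetic "sensor" data; the kernel, threshold rule, and fault magnitude are illustrative assumptions, not the thesis's configuration, and the data are simulated, not NPP measurements.

```python
# Hedged sketch of KPCA-based sensor fault detection via reconstruction
# error (squared prediction error, SPE). Assumptions: RBF kernel,
# percentile threshold, synthetic data on a simple 1-D manifold.

import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Normal operating data: three correlated "sensor" channels driven by
# one latent variable t, plus small measurement noise.
t = rng.normal(size=(200, 1))
X_train = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(200, 3))

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True, alpha=1e-3)
kpca.fit(X_train)

def spe(X):
    """Squared prediction error (reconstruction error) per sample."""
    X_rec = kpca.inverse_transform(kpca.transform(X))
    return np.sum((X - X_rec) ** 2, axis=1)

# Detection threshold from normal data. A simple percentile is used
# here; a real system might use a chi-square approximation instead.
threshold = np.percentile(spe(X_train), 99)

# A biased sensor (channel 1 drifts by +5) pushes samples off the
# training manifold and inflates the SPE well beyond the threshold.
X_fault = X_train.copy()
X_fault[:, 1] += 5.0
flagged_fraction = (spe(X_fault) > threshold).mean()
```

Isolation of which sensor is faulty would additionally examine per-channel contributions to the reconstruction error, which this sketch omits.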
The NPCTF is designed and constructed in this research for experimental validations of fault diagnosis methods and systems. Faults can be physically simulated on the NPCTF. In addition, the NPCTF is designed to support systems based on different instrumentation and control technologies such as WSN and distributed control systems. The NPCTF has been successfully utilized to validate the algorithms and WSN system developed in this research.
In a real-world application, it is seldom the case that a single fault diagnostic scheme can meet all the requirements of a fault diagnostic system in a nuclear power plant. In fact, the value and performance of the diagnosis system can potentially be enhanced if some of the methods developed in this thesis are integrated into a suite of diagnostic tools. In such an integrated system, WSN nodes can be used to collect additional data deemed necessary by sensor placement models. These data can be integrated with those from existing SCADA systems for more comprehensive fault diagnosis. An online performance monitoring system monitors the condition of the equipment and provides key information for condition-based maintenance. When a fault is detected, the measured data are acquired and analyzed by pattern classification models to identify the nature of the fault. By analyzing the symptoms of the fault, its root causes can eventually be identified.
MSL Framework: (Minimum Service Level Framework) for Cloud Providers and Users
Cloud computing enables parallel computing and has emerged as an efficient technology for meeting the challenges of the rapid growth of data experienced in this Internet age. It is an emerging technology that offers subscription-based services and provides different models, such as IaaS, PaaS, and SaaS, to cater to the needs of different user groups. The technology has enormous benefits, but there are serious concerns and challenges related to the lack of uniform standards, or the nonexistence of a minimum benchmark for the level of services offered across the industry, for providing an effective, uniform, and reliable service to cloud users. As cloud computing gains popularity, organizations and users have problems adopting the service due to the lack of a minimum service level framework that can act as a benchmark in the selection of a cloud provider and provide quality of service according to the user's expectations. The situation becomes more critical given the distributed nature of service providers, which can offer services from any part of the world. Because there is no minimum service level framework to act as a benchmark for uniform service across the industry, serious concerns have been raised recently: security and data privacy breaches; authentication and authorization issues; lack of third-party audit and identity management problems; variable standards for integrity, confidentiality, and data availability; no uniform incident response and monitoring standards; interoperability and lack of portability standards; lack of infrastructure protection service standards; and weak governance and compliance standards are major causes of concern for cloud users. Due to confusion and the absence of universally agreed SLAs for a service model, different qualities of service are provided across the cloud industry. Currently there is no uniform performance model, agreed by all stakeholders, that can provide performance criteria to measure, evaluate, and benchmark the level of services offered by the various cloud providers in the industry. With the implementation of the General Data Protection Regulation (GDPR) and the demand from cloud users for Green SLAs that provide better resource allocation mechanisms, there will be serious implications for cloud providers and their consumers due to the lack of uniformity in SLAs and the variable standards of service offered by various providers.
This research examines weaknesses in the service level agreements offered by various cloud providers and the impact that the absence of a uniformly agreed minimum service level framework has on the adoption and usage of cloud services. The research is focused on a higher education case study and proposes a conceptual model, based on a uniform minimum service model, that acts as a benchmark for the industry to ensure quality of service to cloud users in higher education institutions and to remove barriers to the adoption of cloud technology. The proposed Minimum Service Level (MSL) framework provides a set of minimum, uniform standards in the key areas of concern raised by the participants from the HE institution, standards that are essential to cloud users and provide a minimum quality benchmark that can become a uniform standard across the industry. The proposed model produces cloud computing implementation evaluation criteria in an attempt to reduce the barriers to adoption of cloud technology and to set minimum uniform standards, followed by all cloud providers regardless of their hosting location, so that their performance can be measured, evaluated, and compared across the industry to improve the overall QoS (Quality of Service) received by cloud users, remove the adoption barriers and concerns of cloud users, and increase competition across the cloud industry.