A distributed architecture for fuzzy logic systems and its application in human activity recognition
Fuzzy Logic Systems (FLS) have great potential for handling imprecise and uncertain data, thanks to the inherent advantages of the Fuzzy Inference System (FIS). Traditionally, fuzzy logic systems are tied to specific hardware or software platforms. The literature review reveals that dispersed and distributed FLS architectures are in high demand because they can handle the complexity of fuzzy logic computations. However, the absence of best practices and standard methodologies prevents widespread adoption. As a result, capabilities found in many modern systems, such as web communications and Service-Oriented Architecture (SOA), are rarely adapted for FLSs. Exposing FLSs as web services (called Fuzzy-as-a-Service, or FaaS), in which the service is developed independently of any specific client platform, enables autonomy, openness, load balancing, efficient resource allocation and, ultimately, cost-effectiveness, particularly for computationally intensive FLSs.
The proposed novel architectural solution (FaaS) is a web-based service that distributes the main FLS services across multiple client and server nodes, so that they can reach multiple users. By extending the IEEE 1855-2016 standard in terms of system definition and data exchange, this research offers a standard solution for building FaaS as a novel way of implementing fuzzy logic systems that collect, process, and examine data in the cloud over the web. Recent advances in the standardisation of the Fuzzy Markup Language (IEEE 1855-2016) and its associated software libraries (such as JFML and Simpful) have made this achievable. Two different cloud service providers and software libraries (Amazon Web Services with JFML, a Java-based library, and Azure Web Services with Simpful, a Python-based library) are exploited to realise FaaS in the cloud.
As a case study to establish the efficacy of the proposed FaaS, Human Activity Recognition (HAR), which plays a pivotal role in monitoring the health status of Persons Under Observation (PUO), has been considered. To monitor HAR-related and physiological data, which are imprecise and uncertain in nature, previous researchers have developed numerous machine learning tools. However, such monitoring systems suffer from certain limitations due to the nature and amount of the data being analysed.
A number of experiments were carried out to showcase and evaluate FaaS performance in different HAR scenarios. The first scenario was real-time walking/running detection. Secondly, a fall detection system via FaaS was designed based on IEEE 1855-2016 and JFML. In view of the COVID-19 pandemic, the third application developed a system to determine the health status of individuals by remotely monitoring their oxygen saturation and heart rate using wearable sensors. Finally, a performance comparison between a stand-alone fuzzy system and a FaaS solution for fall detection was performed on two different cloud services, namely AWS and Azure. The findings show that while the proposed approach keeps the same accuracy as a stand-alone fuzzy system (90%), it significantly improves processing time, e.g., reducing the processing time for 10K data samples from 179 to 45 seconds (a 75% reduction).
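The walking/running scenario rests on a standard fuzzy inference cycle: fuzzify the sensor reading, evaluate the rules, and defuzzify into a crisp decision. The following stdlib-only Python sketch illustrates that cycle with a zero-order Sugeno system; the membership functions, universe of discourse and thresholds are invented for illustration and are not the thesis's actual JFML/Simpful rule base.

```python
# Illustrative zero-order Sugeno fuzzy classifier for walking/running.
# All membership shapes and cut-offs below are assumptions.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def activity_score(accel_magnitude):
    """Crisp score in [0, 1]: ~0 = walking, ~1 = running."""
    # Fuzzification (acceleration magnitude in m/s^2, hypothetical universe 0-20).
    low = tri(accel_magnitude, -1, 2, 8)      # membership in "low" (walking-like)
    high = tri(accel_magnitude, 6, 14, 21)    # membership in "high" (running-like)
    # Rule consequents (zero-order Sugeno): walking -> 0.0, running -> 1.0.
    num = low * 0.0 + high * 1.0
    den = low + high
    return num / den if den else 0.5          # neutral score if no rule fires

def classify(accel_magnitude):
    """Defuzzified, crisp activity label."""
    return "running" if activity_score(accel_magnitude) >= 0.5 else "walking"
```

In a FaaS deployment, a function like `classify` would sit behind a web endpoint, with the rule base exchanged in IEEE 1855-2016 FML rather than hard-coded.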
Towards the end of this PhD project, the proposed IEEE 1855 extension was submitted for consideration to the IEEE standards committee and, as of 2023, is in the final approval process.
Open Data
Open data is data that anybody can freely use, reuse, and redistribute, provided there are safeguards in place that protect the data's integrity and transparency. This book describes how data retrieved from public open data repositories can improve the learning qualities of digital networking, particularly performance and reliability. Chapters address topics such as knowledge extraction, Open Government Data (OGD), public dashboards, intrusion detection, and artificial intelligence in healthcare.
Hybrid approaches based on computational intelligence and semantic web for distributed situation and context awareness
2011 - 2012. The research work focuses on the topics of Situation Awareness and Context Awareness.
Specifically, Situation Awareness involves being aware of what is happening in the vicinity in order to understand how information, events, and one's own actions will impact goals and objectives, both immediately and in the near future. Situation Awareness is therefore especially important in application domains where the information flow can be quite high and poor decision-making may lead to serious consequences.
Context Awareness, on the other hand, is considered a process that supports user applications in adapting interfaces, tailoring the set of application-relevant data, increasing the precision of information retrieval, discovering services, making user interaction implicit, and building smart environments.
Despite being slightly different, Situation and Context Awareness involve common problems, such as: the lack of support for the acquisition and aggregation of dynamic environmental information from the field (i.e. sensors, cameras, etc.); the lack of formal approaches to knowledge representation (i.e. contexts, concepts, relations, situations, etc.) and processing (reasoning, classification, retrieval, discovery, etc.); and the lack of automated and distributed systems, with considerable computing power, to support reasoning over the huge quantity of knowledge extracted from sensor data.
The thesis therefore investigates new approaches to distributed Context and Situation Awareness and applies them to related research objectives such as knowledge representation, semantic reasoning, pattern recognition and information retrieval. The research work starts from the study and analysis of the state of the art in techniques, technologies, tools and systems that support Context/Situation Awareness. The main aim is to develop a new contribution to this field by integrating techniques from the fields of the Semantic Web, Soft Computing and Computational Intelligence. From an architectural point of view, several frameworks are defined according to the multi-agent paradigm.
Furthermore, preliminary experimental results have been obtained in application domains such as Airport Security, Traffic Management, Smart Grids and Healthcare.
Finally, future work will proceed in the following directions: semantic modelling of fuzzy control, temporal issues, automatic ontology elicitation, extension to other application domains, and further experiments.
Acesso remoto dinâmico e seguro a bases de dados com integração de políticas de acesso suave (Dynamic and secure remote access to databases with integration of soft access policies)
The amount of data being created and shared has grown greatly in recent
years, thanks in part to social media and the growth of smart devices.
Managing the storage and processing of this data can give a competitive edge
when used to create new services, to enhance targeted advertising, etc. To
achieve this, the data must be accessed and processed. When applications
that access this data are developed, tools such as Java Database Connectivity,
ADO.NET and Hibernate are typically used. However, while these tools aim to
bridge the gap between databases and the object-oriented programming
paradigm, they focus only on the connectivity issue. This leads to increased
development time as developers need to master the access policies to write
correct queries. Moreover, when used in database applications within uncontrolled environments, other issues emerge, such as theft of database credentials; application authentication; authorization and auditing of large groups of new users seeking access to data, potentially with vague requirements; network eavesdropping for data and credential disclosure; impersonation of database servers for data modification; and tampering with applications for unrestricted database access and data disclosure.
Therefore, an architecture capable of addressing these issues is necessary to
build a reliable set of access control solutions to expand and simplify the
application scenarios of access control systems. The objective, then, is to
secure the remote access to databases, since database applications may be
used in hard-to-control environments and physical access to the host
machines/network may not always be protected. Furthermore, the authorization process should dynamically grant the appropriate permissions to users who have not been explicitly authorized, in order to handle large groups seeking access to data. This includes scenarios where the definition of the access requirements is difficult due to their vagueness, usually requiring a security expert to authorize each user individually. This is achieved by integrating and auditing soft access policies, based on fuzzy set theory, in the access control decision-making process. A proof-of-concept of this architecture is provided alongside a
functional and performance assessment.
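A "soft" access policy of the kind described above can be sketched with fuzzy set theory: user attributes are fuzzified, combined by fuzzy connectives, and the resulting degree of authorization is thresholded into a crisp grant/deny (and audited). The attributes, membership functions and threshold below are hypothetical, not the thesis's actual policy model.

```python
# Minimal sketch of a fuzzy soft access-control decision.
# All attribute names, shapes and bounds are invented for illustration.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises a->b, flat b->c, falls c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def access_degree(years_in_role, data_sensitivity):
    """Degree in [0, 1] to which access should be granted."""
    experienced = trapezoid(years_in_role, 1, 3, 40, 41)    # mu(user is experienced)
    sensitive = trapezoid(data_sensitivity, 4, 7, 10, 11)   # mu(data is sensitive)
    # Soft policy: grant to the degree the user is experienced
    # AND the data is NOT highly sensitive (min as AND, 1-x as NOT).
    return min(experienced, 1.0 - sensitive)

def decide(years_in_role, data_sensitivity, threshold=0.5):
    """Crisp decision; the degree itself would be written to the audit log."""
    degree = access_degree(years_in_role, data_sensitivity)
    return "grant" if degree >= threshold else "deny"
```

The point of the fuzzy formulation is that users who were never explicitly authorized still receive a graded, auditable decision instead of requiring a security expert to rule on each one individually.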
Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty
Limited work exists on comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This resembles human reasoning, which draws conclusions in the absence of complete information but corrects them once new pieces of evidence arise. This thesis therefore focuses on a comparison of three AI approaches for implementing non-monotonic reasoning models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous and retractable pieces of evidence, so the reasoning applied to them is likely suitable for modelling by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common evaluation metrics for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were described in detail and qualitatively compared. Findings suggest that the variance of the inferences produced by the expert system and fuzzy reasoning models was higher, indicating poor stability.
In contrast, the variance of the argument-based models was lower, showing the superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing explainability properties, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design, and it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
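The retraction-and-reinstatement pattern that defines defeasible reasoning can be sketched with a Dung-style abstract argumentation framework and its grounded extension, computed by a simple fixed-point iteration. The example arguments below are invented, and this is a generic textbook construction, not the thesis's specific models.

```python
# Grounded-extension computation for an abstract argumentation framework:
# repeatedly accept arguments all of whose attackers are defeated.

def grounded_extension(arguments, attacks):
    """arguments: set of labels; attacks: set of (attacker, target) pairs."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in defeated:
                continue
            attackers = {a for (a, b) in attacks if b == arg}
            if attackers <= defeated:   # every attacker is already defeated
                accepted.add(arg)
                # anything attacked by an accepted argument is defeated
                defeated |= {b for (a, b) in attacks if a in accepted}
                changed = True
    return accepted

# Non-monotonic pattern: conclusion r ("patient at risk") would be retracted
# once defeater d arrives, but e defeats d, so r is reinstated.
args = {"r", "d", "e"}
atts = {("d", "r"), ("e", "d")}
extension = grounded_extension(args, atts)   # contains e and r, not d
```

Running the same computation with the attack `("e", "d")` removed drops `r` from the extension, which is exactly the retraction behaviour the thesis compares against expert systems and fuzzy reasoning.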
Decision Support Systems
Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. A DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate human cognitive decision-making functions based on artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, logical reasoning, etc.) in order to perform decision support functions. The applications of DSS cover many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting and business management to internet search strategy. By combining knowledge bases with inference rules, DSS are able to provide suggestions to end users that improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses on decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields, and will also be of value to established professionals as a text for self-study or reference.
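The "knowledge base plus inference rules" combination described above is often realised as forward chaining: rules fire on known facts until no new facts can be derived, and the derived facts drive a suggestion. The sketch below is a generic illustration of that mechanism; the rules and facts are invented, not taken from the book.

```python
# Tiny forward-chaining inference engine: derive facts to a fixed point.

def forward_chain(facts, rules):
    """facts: iterable of strings; rules: list of (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires, new fact derived
                changed = True
    return facts

# Hypothetical clinical-support rule base.
rules = [
    (("fever", "cough"), "possible_flu"),
    (("possible_flu", "high_risk_patient"), "recommend_clinical_review"),
]
derived = forward_chain({"fever", "cough", "high_risk_patient"}, rules)
```

Here the second rule only fires because the first one derived `possible_flu`, which is the chaining behaviour that lets a DSS turn raw observations into an actionable suggestion.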
Investigation of the Role of Service Level Agreements in Web Service Quality
Context/Background:
The use of Service Level Agreements (SLAs) is crucial for providing value-added services that successfully meet consumers' requirements. SLAs also assure consumers of the expected Quality of Service.
Aim:
This study investigates how efficient structural representation and management of SLAs can help to ensure the Quality of Service (QoS) in Web services during Web service composition.
Method:
Existing specifications and structures for SLAs for Web services do not fully formalize, or provide support for, the automatic and dynamic behavioural aspects needed for QoS calculation. This study addresses how to formalize and document the structures of SLAs for better service utilization and improved QoS. The Service-Oriented Architecture (SOA) is extended with an SLAAgent, which uses structured SLA documents to help automate QoS calculation with Fuzzy Inference Systems, as well as service discovery, service selection, and SLA monitoring and management during service composition.
Results:
The proposed framework improves how SLAs are structured, managed and monitored during Web service composition, achieving better Quality of Service effectively and efficiently.
Conclusions:
Automating SLAs during Web service composition is a challenge when dealing with different types of computational requirements. This study shows the significance of SLAs for achieving better QoS during the composition of services in SOA.
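One concrete task an SLAAgent-like component must perform is turning measured SLA metrics into a single QoS score so that services can be compared during composition. The sketch below uses a simple weighted normalised sum; the metric names, bounds and weights are assumptions for illustration (the study itself computes QoS with Fuzzy Inference Systems).

```python
# Hedged sketch: aggregate measured SLA metrics into a QoS score
# and pick the best candidate service. All numbers are invented.

def normalise(value, worst, best):
    """Map a raw metric onto [0, 1], where 1 is best (bounds may be inverted)."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

def qos_score(measured, weights, bounds):
    """Weighted sum of normalised SLA metrics (weights sum to 1)."""
    return sum(
        weights[m] * normalise(measured[m], *bounds[m]) for m in weights
    )

# (worst, best) per metric: lower latency is better, so bounds are inverted.
bounds = {"availability": (0.90, 1.0), "latency_ms": (500, 50)}
weights = {"availability": 0.6, "latency_ms": 0.4}

candidates = {
    "service_a": {"availability": 0.99, "latency_ms": 120},
    "service_b": {"availability": 0.95, "latency_ms": 80},
}
best = max(candidates, key=lambda s: qos_score(candidates[s], weights, bounds))
```

A fuzzy variant would replace `normalise` with membership functions over linguistic terms ("high availability", "low latency"), but the selection step over structured SLA data is the same.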
A framework to manage uncertainties in cloud manufacturing environment
This research project aims to develop a framework to manage uncertainty in cloud manufacturing for small and medium enterprises (SMEs). The framework includes a cloud manufacturing taxonomy; guidance to deal with uncertainty in cloud manufacturing, by providing a process to identify uncertainties; a detailed step-by-step approach to managing the uncertainties; a list of uncertainties; and response strategies to security and privacy uncertainties in cloud manufacturing. Additionally, an online assessment tool has been developed to implement the uncertainty management framework into a real life context.
To fulfil the aim and objectives of the research, a comprehensive literature review was performed in order to understand the research aspects. Next, an uncertainty management technique was applied to identify, assess, and control uncertainties in cloud manufacturing. Two well-known approaches were used in the evaluation of the uncertainties in this research: Simple Multi-Attribute Rating Technique (SMART) to prioritise uncertainties; and a fuzzy rule-based system to quantify security and privacy uncertainties. Finally, the framework was embedded into an online assessment tool and validated through expert opinion and case studies.
Results from this research are useful for both academia and industry in understanding aspects of cloud manufacturing. The main contribution is a framework that offers new insights for decision makers on how to deal with uncertainty at the adoption and implementation stages of cloud manufacturing. The research also introduces a novel cloud manufacturing taxonomy, a list of uncertainty factors, an assessment process to prioritise uncertainties and quantify security- and privacy-related uncertainties, and a knowledge base providing recommendations and solutions.
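The SMART (Simple Multi-Attribute Rating Technique) step used to prioritise uncertainties amounts to rating each uncertainty against weighted criteria and ranking by the weighted sum. The sketch below illustrates that calculation; the criteria, weights and ratings are invented, not the framework's actual data.

```python
# Hedged SMART sketch: rank uncertainties by weighted multi-attribute score.

def smart_rank(ratings, weights):
    """ratings: {option: {criterion: 0-100}}; returns options, best first."""
    total = sum(weights.values())
    norm = {c: w / total for c, w in weights.items()}   # normalise weights to 1
    def score(option):
        return sum(norm[c] * ratings[option][c] for c in norm)
    return sorted(ratings, key=score, reverse=True)

# Hypothetical cloud-manufacturing uncertainties rated by domain experts.
weights = {"likelihood": 40, "impact": 60}
ratings = {
    "data_breach":    {"likelihood": 50, "impact": 90},
    "vendor_lock_in": {"likelihood": 70, "impact": 40},
    "network_outage": {"likelihood": 60, "impact": 55},
}
priorities = smart_rank(ratings, weights)   # highest-priority uncertainty first
```

In the framework described above, the top-ranked (e.g. security and privacy) uncertainties would then be quantified further with the fuzzy rule-based system.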
Memetic algorithms for ontology alignment
2011 - 2012. Semantic interoperability represents the capability of two or more systems to
meaningfully and accurately interpret the exchanged data so as to produce
useful results. It is an essential feature of all distributed and open knowledge-based systems designed for both e-government and private businesses, since it enables machine interpretation, inferencing and computable logic.
Unfortunately, the task of achieving semantic interoperability is very difficult
because it requires that the meaning of any data be specified in appropriate detail in order to resolve any potential ambiguity.
Currently, the best technology recognized for achieving such a level of precision in the specification of meaning is represented by ontologies. According to the
most frequently referenced definition [1], an ontology is an explicit
specification of a conceptualization, i.e., the formal specification of the
objects, concepts, and other entities that are presumed to exist in some area of
interest, and the relationships that hold among them [2]. However, different tasks or
different points of view lead ontology designers to produce different
conceptualizations of the same domain of interest. This means that the
subjectivity of the ontology modeling results in the creation of heterogeneous
ontologies characterized by terminological and conceptual discrepancies.
Examples of these discrepancies are the use of different words to name the
same concept, the use of the same word to name different concepts, the
creation of hierarchies for a specific domain region with different levels of
detail, and so on. The resulting so-called semantic heterogeneity problem represents, in turn, an obstacle to achieving semantic interoperability...
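The terminological discrepancies described above (different words naming the same concept) are what an alignment algorithm must resolve. The stdlib-only sketch below illustrates just the lexical-matching ingredient, pairing concept names by string similarity; a memetic algorithm would then optimise such candidate mappings globally, which is beyond this sketch. The ontology names and threshold are invented.

```python
# Illustrative lexical matching step for ontology alignment, using
# difflib's similarity ratio. Names and threshold are hypothetical.

from difflib import SequenceMatcher

def best_matches(onto_a, onto_b, threshold=0.7):
    """Greedily match each concept in onto_a to its most similar name in onto_b."""
    matches = {}
    for a in onto_a:
        b, sim = max(
            ((b, SequenceMatcher(None, a.lower(), b.lower()).ratio())
             for b in onto_b),
            key=lambda pair: pair[1],
        )
        if sim >= threshold:   # keep only sufficiently similar pairs
            matches[a] = b
    return matches

# "Author" aligns with "Authors"; "Paper" finds no lexically close name,
# illustrating why purely terminological matching is insufficient.
matches = best_matches(["Author", "Paper"], ["Writer", "Article", "Authors"])
```

Cases like "Paper"/"Article", which name the same concept with unrelated words, are exactly where structural and semantic evidence, and search techniques such as memetic algorithms, become necessary.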
Modélisation et exploitation de base de connaissances dans le cadre du web des objets (Modelling and exploitation of knowledge bases in the context of the Web of Things)
The concept of the Web of Things (WOT) is gradually becoming a reality as a result of developments in network and hardware technologies. Nowadays, there is an increasing number of objects that can be used in predesigned applications. The world is thus more tightly connected: various objects can share their information and be triggered through a Web-like structure. However, even if heterogeneous objects have the ability to connect to the Web, they cannot be used in different applications unless there is a common model through which their heterogeneity can be described and understood.
In this thesis, we provide a common model to describe these heterogeneous objects and use them to solve users' problems. Users can have various requests, either to find a particular object or to fulfil certain tasks. We therefore highlight two research directions. The first is to model the heterogeneous objects and related concepts in the WOT; the second is to use this model to fulfil users' requests. We first study the existing technologies, applications and domains where the WOT can be applied. We compare the existing description models in this domain and highlight their shortcomings when applied to the WOT. We then propose a new semantic model for describing objects in the WOT. This model is built on an ontology comprising three main components: the Core model, the Space model and the Agent model. It can describe both the static information and the dynamic changes associated with the WOT...