Outlier-Resilient Web Service QoS Prediction
The proliferation of Web services makes it difficult for users to select the
most appropriate one among numerous functionally identical or similar service
candidates. Quality-of-Service (QoS) describes the non-functional
characteristics of Web services, and it has become the key differentiator for
service selection. However, users cannot invoke all Web services to obtain the
corresponding QoS values due to high time cost and huge resource overhead.
Thus, it is essential to predict unknown QoS values. Although various QoS
prediction methods have been proposed, few of them have taken outliers into
consideration, which may dramatically degrade the prediction performance. To
overcome this limitation, we propose an outlier-resilient QoS prediction method
in this paper. Our method utilizes Cauchy loss to measure the discrepancy
between the observed QoS values and the predicted ones. Owing to the robustness
of Cauchy loss, our method is resilient to outliers. We further extend our
method to provide time-aware QoS prediction results by taking the temporal
information into consideration. Finally, we conduct extensive experiments on
both static and dynamic datasets. The results demonstrate that our method is
able to achieve better performance than state-of-the-art baseline methods.
Comment: 12 pages, to appear at the Web Conference (WWW) 202
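The Cauchy loss at the heart of this method can be sketched in a few lines; the scale parameter `c` below is a hypothetical choice, not the paper's setting:

```python
import numpy as np

def cauchy_loss(residual, c=1.0):
    """Cauchy (Lorentzian) loss: grows only logarithmically for large
    residuals, so a single outlier contributes far less than it would
    under squared loss."""
    return (c ** 2 / 2.0) * np.log1p((residual / c) ** 2)

def squared_loss(residual):
    return 0.5 * residual ** 2

# A large residual (e.g. an outlying QoS observation) dominates the
# squared loss but is heavily damped by the Cauchy loss.
r = np.array([0.1, 0.5, 10.0])
print(cauchy_loss(r))
print(squared_loss(r))
```

Because the loss grows only logarithmically in the residual, a single outlying QoS observation cannot dominate the training objective the way it does under squared loss.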
TPMCF: Temporal QoS Prediction using Multi-Source Collaborative Features
Recently, with the rapid deployment of service APIs, personalized service
recommendations have played a paramount role in the growth of the e-commerce
industry. Quality-of-Service (QoS) parameters determining the service
performance, often used for recommendation, fluctuate over time. Thus, the QoS
prediction is essential to identify a suitable service among functionally
equivalent services over time. Contemporary temporal QoS prediction methods
rarely achieve the desired accuracy due to various limitations, such as the
inability to handle data sparsity and outliers or to capture higher-order
temporal relationships among user-service interactions. Even though some recent
recurrent neural-network-based architectures can model temporal relationships
among QoS data, their prediction accuracy degrades in the absence of other
features (e.g., collaborative features) that capture the relationships among
user-service interactions. This paper addresses the above challenges and
proposes a scalable strategy for Temporal QoS Prediction using Multi-source
Collaborative-Features (TPMCF), achieving high prediction accuracy and faster
responsiveness. TPMCF combines the collaborative features of users/services,
obtained by exploiting user-service relationships, with spatio-temporal
auto-extracted features, obtained by employing graph convolution and a
transformer encoder with multi-head self-attention. We validated our proposed
method on the WS-DREAM-2 datasets.
Extensive experiments showed that TPMCF outperformed major state-of-the-art
approaches in prediction accuracy while ensuring high scalability and
reasonably fast responsiveness.
Comment: 10 pages, 7 figures
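The multi-head self-attention used by TPMCF's transformer encoder can be illustrated with a minimal NumPy sketch; the shapes, random weights, and input below are illustrative only, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape
    (seq_len, d_model): each position attends to every other."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # (seq_len, seq_len) attention weights
    return softmax(scores) @ V        # (seq_len, d_k)

def multi_head(X, heads):
    """Concatenate several independent attention heads; each head is a
    (Wq, Wk, Wv) triple of projection matrices."""
    return np.concatenate([self_attention(X, *h) for h in heads], axis=-1)

rng = np.random.default_rng(0)
seq_len, d_model, d_k, n_heads = 8, 16, 4, 4      # hypothetical sizes
X = rng.normal(size=(seq_len, d_model))           # e.g. a QoS time-series embedding
heads = [tuple(rng.normal(size=(d_model, d_k)) for _ in range(3))
         for _ in range(n_heads)]
out = multi_head(X, heads)
print(out.shape)
```

With four heads of width four, the concatenated output recovers the model width, which is what allows an encoder to stack such layers.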
Information-Based Neighborhood Modeling
Since the inception of the World Wide Web, the amount of data on websites and internet infrastructure has grown so rapidly that researchers continuously develop new and more efficient ways of sorting and presenting information to end-users. Certain websites, such as e-commerce sites, filter data with the help of recommender systems. Over the years, methods have been developed to improve recommender accuracy, yet developers face a problem when new items or users enter the system: with little to no information on user or item preferences, recommender systems struggle to generate accurate predictions. This is the cold-start problem. Ackoff defines information as data structured around answers to the question words: what, where, when, who, and how many. This paper explores how Ackoff's definition of information might improve accuracy and alleviate cold-start conditions when applied to the neighborhood model of collaborative filtering (Ackoff, 1989, p. 3).
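The neighborhood model referred to above can be sketched as a similarity-weighted average over the k most similar users; the toy ratings matrix below is illustrative:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity computed over co-rated items only (NaN = unrated)."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(ratings, user, item, k=2):
    """User-based neighborhood prediction: similarity-weighted average of
    the k most similar users who actually rated `item`."""
    sims = []
    for u in range(ratings.shape[0]):
        if u != user and not np.isnan(ratings[u, item]):
            sims.append((cosine_sim(ratings[user], ratings[u]), u))
    sims.sort(reverse=True)
    top = sims[:k]
    num = sum(s * ratings[u, item] for s, u in top)
    den = sum(abs(s) for s, _ in top)
    return num / den if den else np.nan

# Toy users-by-items ratings matrix; NaN marks an unknown rating.
R = np.array([[5.0, 3.0, np.nan],
              [4.0, 3.0, 4.0],
              [5.0, 4.0, 5.0]])
print(round(predict(R, user=0, item=2), 2))
```

A cold-start user has no co-rated items with anyone, so every similarity is zero and the prediction is undefined; this is exactly the gap that Ackoff-style side information is proposed to fill.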
Prediction, Recommendation and Group Analytics Models in the domain of Mashup Services and Cyber-Argumentation Platform
Mashup application development is becoming a widespread software development practice due to its appeal for a shorter application development period. Application developers usually use web APIs from different sources to create a new streamlined service and provide various features to end-users. This kind of practice saves time and ensures reliability, accuracy, and security in the developed applications. Mashup application developers integrate these available APIs into their applications, but they have to go through thousands of available web APIs and choose only a few appropriate ones for their application. Recommending relevant web APIs might help application developers in this situation. However, very low API invocation from mashup applications creates a sparse mashup-web API dataset for the recommendation models to learn about the mashups and their web API invocation patterns. This research aims to analyze these mashup-specific critical issues, look for supplemental information in the mashup domain, and develop web API recommendation models for mashup applications. The developed recommendation model generates useful and accurate web API recommendations to reduce the impact of low API invocations in mashup application development.
Cyber-argumentation platforms face a similarly challenging issue. In large-scale cyber-argumentation platforms, participants express their opinions, engage with one another, and respond to feedback and criticism from others while discussing important issues online. Argumentation analysis tools capture the collective intelligence of the participants and reveal hidden insights from the underlying discussions. However, such analysis requires that the issues have been thoroughly discussed and that participants' opinions are clearly expressed and understood. Participants typically focus on only a few ideas and leave others unacknowledged and underdiscussed. This generates a limited dataset to work with, resulting in an incomplete analysis of the issues in the discussion. One solution to this problem is to develop an opinion prediction model for cyber-argumentation. This model would predict participants' opinions on ideas with which they have not explicitly engaged.
In cyber-argumentation, individuals interact with each other without any group coordination. However, the implicit group interaction can impact the participating users' opinions, attitudes, and the discussion outcome. One of the objectives of this research work is to analyze different group analytics in the cyber-argumentation environment. The objective is to design an experiment to inspect whether the critical concepts of the Social Identity Model of Deindividuation Effects (SIDE) are valid in our argumentation platform. This experiment can help us understand whether anonymity and a sense of group membership impact users' behavior on our platform. Another objective is to develop group interaction models that help us understand different aspects of group interaction on the cyber-argumentation platform.
These research works can help develop web API recommendation models tailored to the mashup-specific domain and opinion prediction models for the cyber-argumentation-specific area. Primarily, these models utilize domain-specific knowledge and integrate it with traditional prediction and recommendation approaches. Our work on group analytics can be seen as an initial step toward understanding these group interactions.
An Approach of QoS Evaluation for Web Services Design With Optimized Avoidance of SLA Violations
Quality of service (QoS) is an official agreement that governs the contractual commitments between service providers and consumers with respect to various nonfunctional requirements, such as performance, dependability, and security. While more Web services are available for the construction of software systems based upon service-oriented architecture (SOA), QoS has become a decisive factor for service consumers choosing among service providers who provide similar services. QoS is usually documented in a service-level agreement (SLA) to ensure the functionality and quality of services and to define monetary penalties in case of any violation of the written agreement. Consequently, service providers have a strong interest in keeping their commitments to avoid and reduce the situations that may cause SLA violations. However, there is a noticeable shortage of tools that service providers can use either to quantitatively evaluate the QoS of their services for the prediction of SLA violations or to actively adjust their design for the avoidance of SLA violations through optimized service reconfigurations. Developed in this dissertation research is an innovative framework that tackles the problem of SLA violations in three separate yet connected phases. For a given SOA system under examination, the framework employs sensitivity analysis in the first phase to identify factors that are influential to system performance; the impact of influential factors on QoS is then quantitatively measured with a metamodel-based analysis in the second phase. The results of these analyses are used in the third phase to search both globally and locally for optimal solutions via a controlled number of experiments. In addition to technical details, this dissertation includes experiment results to demonstrate that this new approach can help service providers not only predict SLA violations but also avoid unnecessary increases in operational cost during service optimization.
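Phase one of the framework, sensitivity analysis, can be illustrated with a simple one-at-a-time screening; the QoS model, factor names, and formula below are purely hypothetical:

```python
import numpy as np

def response_time(factors):
    """Hypothetical QoS model: response time as a function of three
    configuration factors (names and formula are illustrative only)."""
    threads, cache_mb, replicas = factors
    return 200.0 / threads + 50.0 / (1.0 + cache_mb / 64.0) + 30.0 / replicas

def one_at_a_time_sensitivity(model, baseline, delta=0.1):
    """One-at-a-time screening: perturb each factor by +/-delta (relative)
    and record the resulting change in the QoS metric."""
    base = model(baseline)
    effects = []
    for i, v in enumerate(baseline):
        hi = list(baseline); hi[i] = v * (1 + delta)
        lo = list(baseline); lo[i] = v * (1 - delta)
        effects.append(abs(model(hi) - model(lo)))
    return base, effects

base, effects = one_at_a_time_sensitivity(response_time, [4.0, 128.0, 2.0])
most_influential = int(np.argmax(effects))
print(base, effects, most_influential)
```

Factors whose perturbation barely moves the metric can be dropped before the more expensive metamodel-based phase, which is the point of screening first.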
Automatic management tool for attribution and monitorization of projects/internships
In the last academic year, students at ISEP need to complete a final project to obtain the
academic degree they aim to achieve. ISEP provides a digital platform where all the projects
that students can apply for can be viewed. Besides the advantages this platform has, it also
brings some problems, such as the difficult selection of projects suited for the student due to
the excessive offering and lack of filtering mechanisms. Additionally, there is also increased
difficulty in selecting a supervisor compatible with their project.
Once the student has chosen the project and the supervisor, the monitoring phase begins,
which also has its issues, such as using various tools that may lead to potential communication
problems and difficulty in maintaining a version history of the work done.
To address the mentioned problems, an in-depth study of recommendation systems applied to
Machine Learning and Learning Management Systems was conducted. For each of these
themes, similar systems that could solve the proposed problem were analysed, such as
recommendation systems developed in scientific papers, commercial applications, and tools
like ChatGPT.
Through the analysis of the state of the art, it was concluded that the solution to the proposed
problems would be the creation of a web application for students and supervisors that
combines the two analysed themes. The developed recommendation system uses collaborative
filtering with matrix factorization and content-based filtering with cosine similarity. The
technologies used in the system are centred around Python on the backend (with the use of
TensorFlow and NumPy for Machine Learning functionalities) and Svelte on the frontend. The
system was inspired by a microservices architecture, where each service is represented by its
own Docker container, and it was made available online through a public domain.
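The collaborative-filtering side of the system, matrix factorization, can be sketched with plain SGD in NumPy; the toy data and hyperparameters below are illustrative and not the configuration used in the thesis (which relied on TensorFlow):

```python
import numpy as np

def factorize(R, k=2, lr=0.01, reg=0.02, epochs=2000, seed=0):
    """Plain SGD matrix factorization: approximate the observed entries of
    R (NaN = unknown) as P @ Q.T with k latent factors. Hyperparameters
    here are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    obs = [(u, i, R[u, i]) for u in range(n_users)
           for i in range(n_items) if not np.isnan(R[u, i])]
    for _ in range(epochs):
        for u, i, r in obs:
            e = r - P[u] @ Q[i]                     # prediction error
            P[u] += lr * (e * Q[i] - reg * P[u])    # gradient step with L2 reg
            Q[i] += lr * (e * P[u] - reg * Q[i])
    return P, Q

# Toy students-by-projects preference matrix; NaN marks an unknown entry
# that the factorization fills in.
R = np.array([[5.0, 4.0, np.nan],
              [4.0, 5.0, 1.0],
              [1.0, 1.0, 5.0]])
P, Q = factorize(R)
pred = P @ Q.T
print(np.round(pred, 2))
```

The filled-in entries of `pred` are the recommendation scores; the content-based side would be computed separately with cosine similarity over item features and blended with these.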
The system was evaluated on three metrics: performance, reliability, and usability. The Quantitative
Evaluation Framework tool was used to define dimensions, factors, and requirements (and their
respective scores). The students who tested the solution rated the recommendation system
with a value of approximately 7 on a scale of 1 to 10, and the precision, recall, false positive
rate, and F-Measure values were evaluated at 0.51, 0.71, 0.23, and 0.59, respectively.
Additionally, both groups rated the application as intuitive and easy to use, with ratings around
8 on a scale of 1 to 10.
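The reported F-Measure is consistent with the reported precision and recall:

```python
# F-Measure (F1) is the harmonic mean of precision and recall:
# F1 = 2 * P * R / (P + R), using the values reported above.
precision, recall = 0.51, 0.71
f_measure = 2 * precision * recall / (precision + recall)
print(round(f_measure, 2))  # → 0.59, matching the reported value
```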