Assisted Reuse of Pattern-Based Composition Knowledge for Mashup Development
The first generation of the World Wide Web (WWW) gave users instantaneous access to a large diversity of knowledge. The second generation (Web 2.0) fundamentally changed the way people interact with and through the Web, turning it into a platform not only for communication and information sharing but also for software development (e.g., web service composition). Mashup development is a Web 2.0 approach in which users create applications by combining data sources, application logic, and UI components from the web to meet their situational application needs. In reality, however, creating even a simple mashup is a complex task that only skilled developers can manage.
Examples of ready-made mashup models are one of the main sources of help for users who do not know how to design a mashup, provided that suitable examples can be found, that is, examples analogous to the modeling situation the user faces. Tutorials, expert colleagues or friends, and, of course, Google are other typical means of finding help. However, searching for help does not always succeed, and the retrieved information is seldom immediately usable as is, since it is not contextual, i.e., not directly applicable to the modeling problem at hand.
Motivated by the development challenges faced by naive users of existing mashup tools, in this thesis we propose to aid such users by enabling assisted reuse of pattern-based composition knowledge. We show how to effectively assist these users in their development task with contextual, interactive recommendations of composition knowledge in the form of mashup model patterns. We study a set of recommendation algorithms with different levels of performance and describe a flexible pattern weaving approach for the one-click reuse of patterns. We demonstrate the generality of our algorithms and approach by implementing prototype tools for two different mashup platforms. Finally, we validate the usefulness of our assisted development approach through thorough empirical tests and two user studies with our prototype tools.
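The abstract does not specify the recommendation algorithms themselves. Purely as a hypothetical illustration of contextual pattern recommendation, the sketch below ranks stored mashup model patterns by their component overlap (Jaccard similarity) with the user's partial model; all names and the data layout are invented for this example.

```python
def recommend_patterns(partial_model, pattern_library, top_k=3):
    """Rank stored mashup model patterns by component overlap (Jaccard
    similarity) with the components already in the user's partial model."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    scored = sorted(((jaccard(partial_model, p["components"]), p)
                     for p in pattern_library), key=lambda sp: -sp[0])
    return [p for score, p in scored[:top_k] if score > 0]

# Toy usage: the user has placed two components so far.
library = [
    {"name": "geo-news", "components": ["map", "rss", "filter"]},
    {"name": "photo-wall", "components": ["gallery", "flickr"]},
]
print(recommend_patterns(["map", "rss"], library))
```

Recommendations computed this way are contextual in the sense the abstract uses: they depend on the partial model the user is editing, not on a global query.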
Deep Learning Framework for Online Interactive Service Recommendation in Iterative Mashup Development
Recent years have witnessed the rapid development of service-oriented computing technologies. The boom of Web services increases the selection burden on software developers building service-based systems (such as mashups). How to recommend suitable follow-up component services for developing new mashups has become a fundamental problem in service-oriented software engineering. Most existing service recommendation approaches are designed for mashup development in a single-round recommendation scenario, and they struggle to update recommendation results in time according to developers' requirements and behaviors (e.g., instant service selection). To address this issue, we propose a deep-learning-based interactive service recommendation framework named DLISR, which aims to capture the interactions among the target mashup, the selected services, and the next service to recommend. Moreover, an attention mechanism is employed in DLISR to weigh the selected services when recommending the next service. We also design two separate models for learning interactions from the perspectives of content information and historical invocation information, respectively, as well as a hybrid model called HISR. Experiments on a real-world dataset indicate that HISR outperforms several state-of-the-art service recommendation methods in the online interactive scenario of developing new mashups iteratively.
Comment: 15 pages, 6 figures, and 3 tables
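The abstract only summarizes DLISR's architecture. As a minimal, non-deep stand-in, the sketch below shows the role attention plays in weighing already-selected services when scoring a candidate next service; the embedding vectors and the additive scoring rule are assumptions for illustration, whereas the actual framework learns these interactions with neural layers.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def score_next_service(mashup_vec, selected_vecs, candidate_vec):
    """Score a candidate next service for an in-progress mashup.

    Attention weighs each already-selected service by its relevance to
    the candidate; the pooled context and the mashup's requirement
    vector both contribute to the final score.
    """
    if not selected_vecs:  # cold start: no services selected yet
        return float(mashup_vec @ candidate_vec)
    # Attention weights: relevance of each selected service to the candidate.
    attn = softmax(np.array([s @ candidate_vec for s in selected_vecs]))
    context = sum(w * s for w, s in zip(attn, selected_vecs))
    # Additive interaction; DLISR learns such interactions with deep layers.
    return float(mashup_vec @ candidate_vec + context @ candidate_vec)
```

Ranking all candidates by this score after every selection is what makes the recommendation interactive rather than single-round.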
Personalizing the web: A tool for empowering end-users to customize the web through browser-side modification
Web applications delegate the final rendering of their pages to the browser. This permits browser-based transcoding (a.k.a. Web Augmentation) that can be singularized for each browser installation, creating an opportunity for Web consumers to customize their Web experiences. This vision requires adequate tooling that makes Web Augmentation affordable to laymen. We consider this a special class of End-User Development that integrates Web Augmentation paradigms. The dominant paradigm in End-User Development is scripting through visual languages. This thesis advocates a Google Chrome browser extension for Web Augmentation, realized through WebMakeup, a visual DSL programming tool with which end-users customize their own websites. WebMakeup removes, moves, and adds web nodes across different web pages in order to reduce tab switching, scrolling, clicking, and cutting and pasting. Moreover, Web Augmentation extensions have difficulty finding web elements after a website update; as a consequence, browser extensions stop working and users may abandon them. This is why two different locators have been implemented with the aim of improving web locator robustness.
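WebMakeup itself is a Chrome extension operating on the live DOM, and the thesis does not detail its two locators. Purely to illustrate the fallback idea behind robust locators, here is a hypothetical Python sketch using BeautifulSoup: a brittle primary CSS selector is tried first, and a second, attribute-based locator takes over when a site redesign breaks it.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def locate(html, css_selector, fallback_attrs):
    """Find a target node by a primary CSS selector, falling back to
    attribute matching when a site redesign breaks the selector."""
    soup = BeautifulSoup(html, "html.parser")
    node = soup.select_one(css_selector)
    if node is not None:
        return node
    # Fallback: match any element carrying the remembered attributes.
    for el in soup.find_all(True):
        if all(el.get(k) == v for k, v in fallback_attrs.items()):
            return el
    return None

# Hypothetical usage: the id-based selector breaks, the data attribute survives.
node = locate(page_html, "#sidebar .weather", {"data-widget": "weather"})
```

Keeping two independent ways of identifying the same element is what lets an augmentation script survive moderate page restructurings.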
Advances in Technology Enhanced Learning
‘Advances in Technology Enhanced Learning’ presents a range of research projects which aim to explore how to make engagement in learning (and teaching) more passionate. This interactive and experimental resource discusses innovations which pave the way to open collaboration at scale. The book introduces methodological and technological breakthroughs via twelve chapters to learners, instructors, and decision-makers in schools, universities, and workplaces.
The Open University's Knowledge Media Institute and the EU TELMap project have brought together luminaries from the European research area to showcase their vision of the future of learning with technology via their recent research project work. The projects discussed range widely across the Technology Enhanced Learning area, from environments for responsive open learning and work-based reflection to work-based social creativity, serious games, and more.
Ubiquitous Computing
The aim of this book is to give a treatment of the actively developing domain of ubiquitous computing. Originally proposed by Mark D. Weiser, the concept of ubiquitous computing enables real-time global sensing, context-aware information retrieval, multi-modal interaction with the user, and enhanced visualization capabilities. In effect, ubiquitous computing environments offer fundamentally new ways to observe and interact with our habitat at any time and from anywhere. In this domain, researchers are confronted with many foundational, technological, and engineering issues that were previously unknown. Detailed cross-disciplinary coverage of these issues is needed for further progress and a widening of the application range. This book collects twelve original works by researchers from eleven countries, clustered into four sections: Foundations, Security and Privacy, Integration and Middleware, and Practical Applications.
Enhancing Geospatial Data: Collecting and Visualising User-Generated Content Through Custom Toolkits and Cloud Computing Workflows
This thesis sets out the hypothesis that, via a set of custom toolkits built on cloud computing, online user-generated content can be extracted from emerging large-scale data sets, allowing social scientists to collect, analyse, and visualise geospatial data. Using a custom-built suite of software known as the 'BigDataToolkit', we examine the need for and use of cloud computing and custom workflows to open up access to existing online data, as well as to set up processes for collecting new data. We examine the use of the toolkit to collect large amounts of data from various online sources, such as social media Application Programming Interfaces (APIs) and data stores, and to visualise the collected data in real time. Through the execution of these workflows, this thesis presents an implementation of a smart collector framework that automates the collection process to significantly increase the amount of data obtainable from standard API endpoints. By means of these interconnected methods and distributed collection workflows, the final system can collect and visualise more data in real time than the single-system collection processes used in traditional social media analysis. Aimed at researchers without a deep understanding of the intricacies of computer science, this thesis provides a methodology to open up new data sources not only to academics but also to wider participants, allowing the collection of user-generated geographic and textual content en masse. A series of case studies is provided, covering applications from a single researcher collecting data through to collection driven by televised media. These are examined in terms of the tools created and the opportunities opened up, allowing real-time analysis of data collected via the developed toolkit.
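The BigDataToolkit's internals are not described in the abstract. As a rough sketch of the general "smart collector" idea of exceeding a single client's rate limit, the hypothetical generator below round-robins a pool of API keys over a set of queries; the endpoint, parameter names, and response shape are all invented for illustration.

```python
import itertools
import time

import requests  # pip install requests

def collect(endpoint, queries, api_keys, interval=5.0):
    """Round-robin a pool of API keys over a set of queries so the
    aggregate collection rate exceeds what one key's limit allows.
    The endpoint, parameters, and JSON shape here are hypothetical."""
    keys = itertools.cycle(api_keys)
    while True:
        for q in queries:
            resp = requests.get(endpoint,
                                params={"q": q, "key": next(keys)},
                                timeout=10)
            if resp.ok:
                for item in resp.json().get("results", []):
                    yield item  # hand off to storage or visualisation
            time.sleep(interval)  # stay under each key's rate limit

# Hypothetical usage: stream items into a datastore as they arrive.
# for item in collect("https://api.example.com/search",
#                     ["flood", "storm"], ["KEY1", "KEY2"]):
#     store(item)
```

Distributing the same loop across several cloud workers, each with its own keys and query shard, is what turns this into the kind of distributed collection workflow the thesis describes.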
Mining biomedical information from scientific literature
The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact also verified in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the text mining task that aims to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable, and faster results. We started by tackling named entity recognition, a crucial initial task, with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize the extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance. This approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since the previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service, and widget. We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive, and real-time collaborative curation of biomedical documents through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributes to a more accurate update of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
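The abstract describes Neji as a modular, flexible framework of chained processing modules. As a language-agnostic toy illustration of that pipeline idea, the sketch below chains a tokenizer and a dictionary tagger over a shared document structure; the module interface, the document dict, and the two-entry lexicon are invented for this example, not Neji's actual API.

```python
import re

class Pipeline:
    """Chain of processing modules, in the spirit of a modular NER
    framework; each module reads and updates a shared document dict."""
    def __init__(self, *modules):
        self.modules = modules

    def run(self, text):
        doc = {"text": text, "tokens": [], "entities": []}
        for module in self.modules:
            module(doc)
        return doc

def tokenize(doc):
    doc["tokens"] = re.findall(r"\w+|\S", doc["text"])

def dictionary_tagger(lexicon):
    """Build a tagging module from a lexicon mapping each surface
    form to a (concept type, knowledge-base id) pair, so recognized
    concepts come out already normalized, as Neji's design intends."""
    def tag(doc):
        for form, (ctype, kb_id) in lexicon.items():
            for m in re.finditer(re.escape(form), doc["text"]):
                doc["entities"].append(
                    {"span": m.span(), "type": ctype, "id": kb_id})
    return tag

# Toy usage with a two-entry lexicon (identifiers shown for illustration).
lexicon = {"BRCA1": ("GENE", "HGNC:1100"),
           "aspirin": ("DRUG", "CHEBI:15365")}
pipe = Pipeline(tokenize, dictionary_tagger(lexicon))
print(pipe.run("BRCA1 mutations and aspirin response.")["entities"])
```

Swapping the dictionary tagger for a machine-learning module, or appending a normalizer, only means adding another callable to the chain, which is the flexibility the modular design buys.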