Semantic annotation of Web APIs with SWEET
Recent technology developments in the area of services on the Web are marked by the proliferation of Web applications and APIs. The development and evolution of applications based on Web APIs is, however, hampered by the lack of automation that can be achieved with current technologies. In this paper we present SWEET - Semantic Web sErvices Editing Tool - a lightweight Web application for creating semantic descriptions of Web APIs. SWEET directly supports the creation of mashups by enabling the semantic annotation of Web APIs, thus contributing to the automation of the service discovery, composition, and invocation tasks. Furthermore, it enables the development of composite SWS-based applications on top of Linked Data.
Reviving Static Charts into Live Charts
Data charts are prevalent across various fields due to their efficacy in
conveying complex data relationships. However, static charts may sometimes
struggle to engage readers and efficiently present intricate information,
potentially resulting in limited understanding. We introduce "Live Charts," a
new format of presentation that decomposes complex information within a chart
and explains the information pieces sequentially through rich animations and
accompanying audio narration. We propose an automated approach to revive static
charts into Live Charts. Our method integrates GNN-based techniques to analyze
the chart components and extract data from charts. Then we adopt large
language models to generate appropriate animated visuals along with a
voice-over to produce Live Charts from static ones. We conducted a thorough
evaluation of our approach, which involved the model performance, use cases, a
crowd-sourced user study, and expert interviews. The results demonstrate that
Live Charts offer a multi-sensory experience in which readers can follow the
information and understand the data insights better. We analyze the benefits
and drawbacks of Live Charts over static charts as a new information
consumption experience.
Automatic Synchronization of Multi-User Photo Galleries
In this paper we address the issue of photo galleries synchronization, where
pictures related to the same event are collected by different users. Existing
solutions to the problem are usually based on unrealistic assumptions, such as
time consistency across photo galleries, and often rely heavily on heuristics,
therefore limiting their applicability to real-world scenarios. We
propose a solution that achieves better generalization performance for the
synchronization task compared to the available literature. The method is
characterized by three stages: at first, deep convolutional neural network
features are used to assess the visual similarity among the photos; then, pairs
of similar photos are detected across different galleries and used to construct
a graph; eventually, a probabilistic graphical model is used to estimate the
temporal offset of each pair of galleries, by traversing the minimum spanning
tree extracted from this graph. The experimental evaluation is conducted on
four publicly available datasets covering different types of events,
demonstrating the strength of our proposed method. A thorough discussion of the
obtained results is provided for a critical assessment of the quality in
synchronization.

Comment: Accepted to IEEE Transactions on Multimedia
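The final stage of the pipeline described above, anchoring all galleries to a common timeline by traversing the minimum spanning tree, can be sketched in Python. The gallery names and offset values below are purely illustrative; in the actual method each edge's offset is estimated by a probabilistic graphical model rather than given directly.

```python
from collections import defaultdict, deque

def propagate_offsets(pairwise, reference):
    """Given pairwise temporal offsets along the edges of a spanning
    tree of galleries, anchor every gallery to a common timeline by
    traversing the tree from a reference gallery.

    pairwise: iterable of (gallery_a, gallery_b, offset) meaning
              "clock of b = clock of a + offset" (seconds).
    """
    adj = defaultdict(list)
    for a, b, off in pairwise:
        adj[a].append((b, off))
        adj[b].append((a, -off))  # traversing the edge backwards negates the offset

    offsets = {reference: 0.0}
    queue = deque([reference])
    while queue:
        node = queue.popleft()
        for nbr, off in adj[node]:
            if nbr not in offsets:
                offsets[nbr] = offsets[node] + off
                queue.append(nbr)
    return offsets
```

With hypothetical edges `[("A", "B", 30.0), ("B", "C", -12.5)]` and reference gallery `"A"`, the traversal places gallery C at +17.5 s relative to A, combining the two edge offsets along the tree path.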
VisAhoi: Towards a Library to Generate and Integrate Visualization Onboarding Using High-level Visualization Grammars
Visualization onboarding supports users in reading, interpreting, and
extracting information from visual data representations. General-purpose
onboarding tools and libraries are applicable for explaining a wide range of
graphical user interfaces but cannot handle specific visualization
requirements. This paper describes a first step towards developing an
onboarding library called VisAhoi, which is easy to integrate, extend,
semi-automate, reuse, and customize. VisAhoi supports the creation of
onboarding elements for different visualization types and datasets. We
demonstrate how to extract and describe onboarding instructions using three
well-known high-level descriptive visualization grammars - Vega-Lite,
Plotly.js, and ECharts. We show the applicability of our library through
two usage scenarios that describe the integration of VisAhoi, first, into a VA
tool for the analysis of high-throughput screening (HTS) data and, second, into a
Flourish template to provide an authoring tool for data journalists for a
treemap visualization. We provide a supplementary website that demonstrates the
applicability of VisAhoi to various visualizations, including a bar chart, a
horizon graph, a change matrix or heatmap, a scatterplot, and a treemap
visualization.
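As a rough illustration of grammar-based extraction (not VisAhoi's actual API; the function name and instruction texts are assumptions), a Vega-Lite-style spec already names the mark type and encoding fields, from which simple onboarding hints can be derived:

```python
def extract_onboarding(spec):
    """Derive simple reading instructions from a Vega-Lite-style spec dict."""
    messages = {
        "bar": "Each bar's length encodes a value; compare lengths across categories.",
        "point": "Each point encodes one record; look for clusters and outliers.",
        "rect": "Each cell's colour encodes a value; scan rows and columns for patterns.",
    }
    mark = spec.get("mark")
    if isinstance(mark, dict):  # Vega-Lite also allows {"type": "bar", ...}
        mark = mark.get("type")
    instructions = []
    if mark in messages:
        instructions.append(messages[mark])
    for channel in ("x", "y"):
        enc = spec.get("encoding", {}).get(channel, {})
        if "field" in enc:
            instructions.append(f"The {channel}-axis shows '{enc['field']}'.")
    return instructions
```

For a bar-chart spec with `month` on x and `sales` on y, this yields one hint for the mark and one per axis; VisAhoi's real extraction is considerably richer and grammar-aware.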
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles.

In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior.

In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g., "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment: strong and weak alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match the training data.
In Chapter 5, we propose to address this issue by augmenting the training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, in which we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and its control outputs (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
On the integration of model-based feature information in Product Lifecycle Management systems
As CAD models continue to become more critical information sources in the product's lifecycle, it is necessary to develop efficient mechanisms to store, retrieve, and manage larger volumes of increasingly complex data. Because of their unique characteristics, 3D annotations can be used to embed design and manufacturing information directly into a CAD model, which makes models effective vehicles to describe aspects of the geometry or provide additional information that can be connected to a particular geometric element. However, access to this information is often limited, difficult, and even unavailable to external applications. As model complexity and volume of information continue to increase, new and more powerful methods to interrogate these annotations are needed.
In this paper, we demonstrate how 3D annotations can be effectively structured and integrated into a Product Lifecycle Management (PLM) system to provide a cohesive view of product-related information in a design environment. We present a strategy to organize and manage annotation information which is stored internally in a CAD model, and make it fully available through the PLM. Our method involves a dual representation of 3D annotations with enhanced data structures that provides shared and easy access to the information. We describe the architecture of a system which includes a software component for the CAD environment and a module that integrates with the PLM server. We validate our approach through a software prototype that uses a parametric modeling application and two commercial PLM packages with distinct data models.

This work was supported by the Spanish Ministry of Economy and Competitiveness and the FEDER Funds, through the ANNOTA project (Ref. TIN2013-46036-C3-1-R).

Camba, J.; Contero, M.; Company, P.; Pérez Lopez, D. C. (2017). On the integration of model-based feature information in Product Lifecycle Management systems. International Journal of Information Management, 37(6), 611-621. https://doi.org/10.1016/j.ijinfomgt.2017.06.002
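A minimal sketch of the dual-representation idea follows: annotations held inside the CAD model are mirrored into a neutral structure that a PLM module could index and query. The class fields and function name are hypothetical illustrations, not the paper's actual data model.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Annotation3D:
    """Hypothetical mirror of a 3D annotation stored inside a CAD model."""
    ann_id: str
    text: str
    target_geometry: str   # id of the face/edge the note is attached to
    category: str          # e.g. "tolerance", "surface finish"

def export_for_plm(annotations):
    """Serialize CAD-internal annotations into a neutral JSON payload
    that a PLM integration module could store alongside the model."""
    return json.dumps([asdict(a) for a in annotations], indent=2)
```

The neutral JSON form is what makes the annotation content queryable from outside the CAD environment, which is the access problem the paper sets out to solve.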
FDDetector: A Tool for Deduplicating Features in Software Product Lines
Duplication is one of the model defects that affect software product lines during their evolution. Many approaches have been proposed to deal with duplication at the code level, while duplication in features has received little attention in the literature. With the aim of reducing maintenance cost and improving product quality at an early stage of a product line, we proposed in previous work a tool support based on a conceptual framework. The main objective of this tool, called FDDetector, is to detect and correct duplication in product line models. In this paper, we recall the motivation behind creating a solution for feature deduplication and present the progress made in the design and implementation of FDDetector.
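Name-level duplicate detection, the simplest case such a tool would cover, can be sketched as follows. The normalization rule is an assumption for illustration; FDDetector's conceptual framework also addresses correction, which this sketch omits.

```python
import re
from collections import defaultdict

def find_duplicate_features(features):
    """Group feature names that normalize to the same key, a naive
    proxy for duplication in a product-line model."""
    groups = defaultdict(list)
    for name in features:
        # Lowercase and strip punctuation/whitespace so that spelling
        # variants of the same feature collide on one key.
        key = re.sub(r"[^a-z0-9]", "", name.lower())
        groups[key].append(name)
    return [names for names in groups.values() if len(names) > 1]
```

For instance, `["Wi-Fi", "WiFi", "Bluetooth", "wifi"]` collapses the three Wi-Fi variants into one duplicate group while leaving Bluetooth untouched.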
WonderFlow: Narration-Centric Design of Animated Data Videos
Creating an animated data video enriched with audio narration takes a
significant amount of time and effort and requires expertise. Users not only
need to design complex animations, but also turn written text scripts into
audio narrations and synchronize visual changes with the narrations. This paper
presents WonderFlow, an interactive authoring tool that facilitates
narration-centric design of animated data videos. WonderFlow allows authors to
easily specify a semantic link between text and the corresponding chart
elements. Then it automatically generates audio narration by leveraging
text-to-speech techniques and aligns the narration with an animation.
WonderFlow provides a visualization structure-aware animation library designed
to ease chart animation creation, enabling authors to apply pre-designed
animation effects to common visualization components. It also allows authors to
preview and iteratively refine their data videos in a unified system, without
having to switch between different creation tools. To evaluate WonderFlow's
effectiveness and usability, we created an example gallery and conducted a user
study and expert interviews. The results demonstrated that WonderFlow is easy
to use and simplifies the creation of data videos with narration-animation
interplay.
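The narration-animation synchronization can be sketched as below. WonderFlow itself aligns animations against synthesized speech, whereas this sketch assumes a fixed speech rate and uses hypothetical chart-element ids.

```python
def align_narration(segments, speech_rate=14.0):
    """Turn (text, element_id) narration segments into an animation
    timeline, assuming a fixed speech rate in characters per second.

    Each chart element's animation window spans exactly the interval
    in which its narration segment is spoken.
    """
    timeline, t = [], 0.0
    for text, element in segments:
        duration = len(text) / speech_rate
        timeline.append({"element": element,
                         "start": round(t, 2),
                         "end": round(t + duration, 2),
                         "text": text})
        t += duration
    return timeline
```

Because each segment starts where the previous one ends, the highlighted chart element always changes in lockstep with the voice-over, which is the narration-animation interplay the paper targets.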
Supporting the creation of semantic RESTful service descriptions
Research on semantic Web services (SWS) has been devoted to reducing the extensive manual effort required for manipulating Web services by enhancing them with semantic information. Recently, the world around services on the Web, thus far limited to "classical" Web services based on SOAP and WSDL, has significantly evolved with the proliferation of Web applications and APIs, often referred to as RESTful Web services. However, despite their success, RESTful services are currently facing similar limitations to those identified for traditional Web service technologies and present even further difficulties, such as the lack of machine-processable service descriptions. In order to address these challenges and to enable the wider adoption of RESTful service technologies, we advocate an integrated lightweight approach for formally describing semantic RESTful services. The approach is based on the use of the hRESTS (HTML for RESTful Services) and MicroWSMO microformats, which enable the creation of machine-readable service descriptions and the addition of semantic annotations, respectively. Finally, we present SWEET - Semantic Web sErvices Editing Tool - which effectively supports users in creating semantic descriptions of RESTful services based on the aforementioned technologies.
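The hRESTS microformat marks up an otherwise plain HTML service description with class names such as `service`, `operation`, `method`, and `address`, making the operations machine-readable. A minimal sketch of extracting them in Python follows; this is illustrative code, not part of SWEET, and the sample HTML is invented.

```python
from html.parser import HTMLParser

class HRESTSParser(HTMLParser):
    """Collect hRESTS 'method' and 'address' values for each
    'operation' block in an annotated HTML service description."""
    def __init__(self):
        super().__init__()
        self.operations = []
        self._field = None  # which hRESTS field the next text node fills

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if "operation" in classes:
            self.operations.append({})
        elif self.operations and "method" in classes:
            self._field = "method"
        elif self.operations and "address" in classes:
            self._field = "address"

    def handle_data(self, data):
        if self._field and self.operations:
            self.operations[-1][self._field] = data.strip()
            self._field = None

html = """
<div class="service">
  <div class="operation">
    <span class="method">GET</span>
    <code class="address">/hotels/{id}</code>
  </div>
</div>
"""
parser = HRESTSParser()
parser.feed(html)
```

After parsing, `parser.operations` holds one dictionary per operation with its HTTP method and address template, which is exactly the kind of machine-processable description that MicroWSMO annotations can then enrich with semantics.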