4,173 research outputs found
Generating collaborative systems for digital libraries: A model-driven approach
This is an open access article shared under a Creative Commons Attribution 3.0 Licence (http://creativecommons.org/licenses/by/3.0/). Copyright © 2010 The Authors. The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, and of the graphical user interfaces providing access to them for the final user. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, while the CRADLE environment has been evaluated by using the cognitive dimensions framework.
Intelligent indexing of crime scene photographs
The Scene of Crime Information System's automatic image-indexing prototype goes beyond extracting keywords and syntactic relations from captions. The semantic information it gathers gives investigators an intuitive, accurate way to search a database of cases for specific photographic evidence. Intelligent, automatic indexing and retrieval of crime scene photographs is one of the main functions of SOCIS, our research prototype developed within the Scene of Crime Information System project. The prototype, now in its final development and evaluation phase, applies advanced natural language processing techniques to text-based image indexing and retrieval to tackle crime investigation needs effectively and efficiently.
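The abstract's key claim is that indexing captions by semantic relations beats plain keyword extraction. The toy sketch below contrasts the two; the regex relation pattern and the caption are made up for illustration and are not SOCIS's actual NLP pipeline.

```python
# Contrast plain keyword indexing with a toy relation extractor.
# The pattern below is a hypothetical heuristic; a real system like the
# one described would use full linguistic parsing of the caption.
import re

STOPWORDS = {"the", "a", "an", "on", "of", "with", "near", "under"}

def keyword_index(caption):
    """Plain keyword indexing: bag of non-stopword terms."""
    return [w for w in re.findall(r"[a-z]+", caption.lower())
            if w not in STOPWORDS]

def simple_relations(caption):
    """Toy 'object <verb> location' relation, e.g. (knife, lying, victim)."""
    m = re.search(r"(\w+) (lying|found|recovered) (on|near|under) (?:the )?(\w+)",
                  caption.lower())
    return [(m.group(1), m.group(2), m.group(4))] if m else []

caption = "A knife lying near the victim"
print(keyword_index(caption))     # ['knife', 'lying', 'victim']
print(simple_relations(caption))  # [('knife', 'lying', 'victim')]
```

A keyword index cannot distinguish "knife near the victim" from "victim near the knife"; the relation triple preserves that structure, which is what makes semantic search over case photographs more precise.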
PerspectivesX: A Proposed Tool to Scaffold Collaborative Learning Activities within MOOCs
In this work-in-progress paper, we introduce the PerspectivesX tool which
aims to scaffold collaborative learning activities within MOOCs. The
PerspectivesX tool has been designed to promote learner knowledge construction
and curation for a range of multi-perspective elaboration techniques (e.g.,
SWOT analysis and Six Thinking Hats). The PerspectivesX tool is designed to
store learner submissions in a searchable knowledge base which is able to be
persisted across course re-runs and promotes the use of natural language
processing techniques to allow course moderators to provide scalable feedback.
In this paper we outline the design principles that structured collaborative
learning tools need to adhere to, design a prototype tool (PerspectivesX) and
evaluate whether MOOC platform extension frameworks are able to support the
implementation of the tool. Comment: Accepted as a Work In Progress paper at EMOOC 201
From Interactive to Experimental Multimedia
Perhaps the most dramatic Information Society development witnessed today is the wide availability of social networking capabilities for users, orchestrated through a wide variety of virtual multimedia communication tools. Mobile and networked interactive multimedia applications are employed to promptly capture or create user-centered content.
A collaborative-project memory tool for participatory planning
Technology is increasingly providing planners and designers with tools and methods to collect and communicate spatial data and assist spatial analysis. When we think about new technologies supporting planning, we mainly think about GIS, urban modelling, simulation models, and virtual reality. But many other challenges in planning practice call for tools to support and improve planning activities. In this paper we discuss the need for new tools to support knowledge representation and knowledge sharing in participatory planning processes. The paper describes the use of a hypermedia and sensemaking tool (Compendium) to structure the knowledge produced in a real participatory planning process. In the present application, Compendium has been used not for real-time capturing but for a post-hoc analysis of a real participatory planning experience.
Compendium has been used to represent and reconstruct the group memory of consultation meetings in order to allow both the planning team and the citizens to navigate the contents of those meetings. Moreover, the paper describes the main features and potential of Compendium in the participatory planning domain, and it describes the results of the group memory reconstruction. Finally, the case study prompts reflection on the need for new planning technologies supporting participatory knowledge generation, representation, and management.
cphVB: A System for Automated Runtime Optimization and Parallelization of Vectorized Applications
Modern processor architectures, in addition to having ever more cores, also require ever more attention to memory layout in order to run at full capacity. The usefulness of most languages is diminishing, as their abstractions, structures, or objects are hard to map efficiently onto modern processor architectures.
The work in this paper introduces a new abstract machine framework, cphVB,
that enables vector oriented high-level programming languages to map onto a
broad range of architectures efficiently. The idea is to close the gap between
high-level languages and hardware optimized low-level implementations. By
translating high-level vector operations into an intermediate vector bytecode,
cphVB enables specialized vector engines to efficiently execute the vector
operations.
The primary success parameters are maintaining a complete abstraction from low-level details and providing efficient code execution across different modern processors. We evaluate the presented design through a setup targeting multi-core CPU architectures, measuring the performance of the implementation on Python implementations of well-known algorithms: a Jacobi solver, a kNN search, a shallow-water simulation, and a synthetic stencil simulation. All demonstrate good performance.
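The central mechanism the abstract describes is translating high-level vector operations into an intermediate vector bytecode that a pluggable vector engine then executes. The following is a minimal sketch of that idea only; the class names (`VectorBytecode`, `ToyEngine`) and opcodes are hypothetical, not cphVB's actual interfaces.

```python
# Hypothetical sketch of a vector-bytecode pipeline: high-level vector
# operations are recorded as bytecode instructions, and a vector engine
# executes them later. A real engine would dispatch each opcode to
# hardware-optimized kernels instead of Python loops.
import operator
from dataclasses import dataclass

@dataclass
class VectorBytecode:
    opcode: str       # e.g. "add", "mul"
    out: str          # name of the destination array
    operands: tuple   # names of the input arrays

class ToyEngine:
    """Executes a bytecode program over named arrays, element-wise."""
    OPS = {"add": operator.add, "mul": operator.mul}

    def __init__(self, arrays):
        self.arrays = dict(arrays)

    def execute(self, program):
        for bc in program:
            a, b = (self.arrays[name] for name in bc.operands)
            fn = self.OPS[bc.opcode]
            self.arrays[bc.out] = [fn(x, y) for x, y in zip(a, b)]
        return self.arrays

# z = (x + y) * y, expressed as two bytecode instructions
program = [
    VectorBytecode("add", "t0", ("x", "y")),
    VectorBytecode("mul", "z", ("t0", "y")),
]
result = ToyEngine({"x": [1, 2, 3], "y": [4, 5, 6]}).execute(program)
print(result["z"])  # [20, 35, 54]
```

The point of the intermediate bytecode is that the front-end language never sees the back-end: the same two-instruction program could be handed to a multi-core or GPU engine without changing the high-level code.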
A Training Assistant Tool for the Automated Visual Inspection System
This thesis considers the problem of assisting a human user setting up an automated Visual Inspection (VI) system. The VI system uses a stationary camera on an automobile assembly line to inspect cars as they pass by. The inspection process is intended to identify when parts have been missed or incorrect parts have been assembled. The result is reported to a human working on the assembly line who then can take corrective actions. As originally developed, the system requires a setup phase in which the human user places the camera and records a video of at least 30 minutes length to use for training the system. Training includes specifying regions of cars passing by that are to be inspected. After deployment of a number of systems, it was learned that users could benefit from being provided guidance in best practices to delineate training data. It was also learned that users could benefit from simple visual feedback to ascertain whether or not an inspection problem was suitable for a VI system or if the problem was too challenging. This thesis describes a few methods and a new software tool intended to address this need
Exploring techniques for vision based human activity recognition: Methods, systems, and evaluation
With the wide application of vision-based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition.
Soccer event detection via collaborative multimodal feature analysis and candidate ranking
This paper presents a framework for soccer event detection through collaborative analysis of the textual, visual, and aural modalities. The basic notion is to decompose a match video into smaller segments until ultimately the desired eventful segment is identified. Simple features are considered, namely the minute-by-minute reports from sports websites (i.e. text), the semantic shot classes of far and close-up views (i.e. visual), and the low-level features of pitch and log-energy (i.e. audio). The framework demonstrates that despite considering simple features, and by averting the use of labeled training examples, event detection can be achieved at very high accuracy. Experiments conducted on ~30 hours of soccer video show very promising results for the detection of goals, penalties, yellow cards, and red cards.
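The collaborative-ranking idea above — score each candidate segment per modality, then rank by combined evidence — can be sketched as follows. This is an assumed simplification, not the paper's algorithm: the segment names, scores, and the weighted-sum fusion are all illustrative.

```python
# Hedged sketch of multimodal candidate ranking: each candidate segment
# carries a score per modality (text, visual, audio); candidates are
# ranked by a weighted combination. All data below is made up.
def rank_candidates(candidates, weights=(1.0, 1.0, 1.0)):
    """candidates: list of (segment_id, text_score, visual_score, audio_score).
    Returns (segment_id, combined_score) pairs, best first."""
    wt, wv, wa = weights
    scored = [(seg, wt * t + wv * v + wa * a) for seg, t, v, a in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

candidates = [
    ("min_12_seg_3", 0.9, 0.7, 0.8),  # minute report mentions a goal,
                                      # close-up shot, loud crowd
    ("min_12_seg_1", 0.9, 0.2, 0.3),  # same minute, but far view, quiet
    ("min_45_seg_2", 0.1, 0.6, 0.4),
]
best_segment, best_score = rank_candidates(candidates)[0]
print(best_segment)  # min_12_seg_3
```

Because the text modality only localizes an event to a minute, the visual and audio scores do the tie-breaking within that minute, which mirrors the coarse-to-fine decomposition the abstract describes.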