A gap analysis of Internet-of-Things platforms
We are experiencing an abundance of Internet-of-Things (IoT) middleware
solutions that provide connectivity for sensors and actuators to the Internet.
To gain widespread adoption, these middleware solutions, referred to as
platforms, have to meet the expectations of different players in the IoT
ecosystem, including device providers, application developers, and end-users,
among others. In this article, we evaluate a representative sample of these
platforms, both proprietary and open-source, on the basis of their ability to
meet the expectations of different IoT users. The evaluation is thus more
focused on how ready and usable these platforms are for IoT ecosystem players,
rather than on the peculiarities of the underlying technological layers. The
evaluation is carried out as a gap analysis of the current IoT landscape with
respect to (i) the support for heterogeneous sensing and actuating
technologies, (ii) the data ownership and its implications for security and
privacy, (iii) data processing and data sharing capabilities, (iv) the support
offered to application developers, (v) the completeness of an IoT ecosystem,
and (vi) the availability of dedicated IoT marketplaces. The gap analysis aims
to highlight the deficiencies of today's solutions so as to improve their integration
into tomorrow's ecosystems. To strengthen the findings of our analysis,
we conducted a survey among the partners of the Finnish IoT program, counting
over 350 experts, to evaluate the most critical issues for the development of
future IoT platforms. Based on the results of our analysis and our survey, we
conclude this article with a list of recommendations for extending these IoT
platforms in order to fill in the gaps.

Comment: 15 pages, 4 figures, 3 tables. Accepted for publication in Computer Communications, special issue on the Internet of Things: Research challenges and solutions.
The Need to Support Data Flow Graph Visualization of Forensic Lucid Programs, Forensic Evidence, and their Evaluation by GIPSY
Lucid programs are data-flow programs and can be visually represented as data
flow graphs (DFGs) and composed visually. Forensic Lucid, a Lucid dialect, is a
language to specify and reason about cyberforensic cases. It includes the
encoding of the evidence (representing the context of evaluation) and the crime
scene modeling in order to validate claims against the model and perform event
reconstruction, potentially within large swaths of digital evidence. To aid
investigators in modeling the scene and evaluating it, instead of having them
type a Forensic Lucid program, we propose to extend the design and
implementation of Lucid DFG programming to Forensic Lucid case modeling and
specification, to enhance the usability of the language, the system, and its
behavior. We briefly discuss the related work on visual programming and DFG
modeling in an attempt to
define and select one approach or a composition of approaches for Forensic
Lucid based on various criteria such as previous implementation, wide use,
formal backing in terms of semantics and translation. In the end, we solicit
the readers' constructive opinions, feedback, comments, and recommendations
within the context of this short discussion.

Comment: 11 pages, 7 figures, index; extended abstract presented at VizSec'10 at http://www.vizsec2010.org/posters ; short paper accepted at PST'1
i-JEN: Visual interactive Malaysia crime news retrieval system
Supporting crime news investigation involves a mechanism to help monitor the current and past status of criminal events. We believe this could be well facilitated by focusing on the user interface and the crime event model aspects. In this paper we discuss the development of the Visual Interactive Malaysia Crime News Retrieval System (i-JEN) and describe the approach, the user studies conducted and planned, the system architecture, and the future plan. Our main objectives are to construct crime-based events; investigate the use of crime-based events in improving classification and clustering; develop an interactive crime news retrieval system; visualise crime news in an effective and interactive way; integrate these components into a usable and robust system; and evaluate its usability and performance. The system will serve as a news monitoring system that aims to automatically organise, retrieve, and present crime news in such a way as to support effective monitoring, searching, and browsing for the target user groups of the general public, news analysts, and policemen or crime investigators. The study will contribute to a better understanding of crime data consumption in the Malaysian context, as well as a system with visualisation features to address crime data and the eventual goal of combating crime.
Geoweb 2.0 for Participatory Urban Design: Affordances and Critical Success Factors
In this paper, we discuss the affordances of open-source Geoweb 2.0 platforms
to support the participatory design of urban projects in real-world
practices. We first introduce the two open-source platforms used in our study
for testing purposes. Then, based on evidence from five different field studies,
we identify five affordances of these platforms: conversations on alternative
urban projects, citizen consultation, design empowerment, design studio
learning, and design research. We elaborate on these in detail and identify a
key set of success factors for the facilitation of better practices in the
future.
Using SVG and XSLT for graphic representation
In this paper we will present an XML-based framework that can be used to produce graphical visualisations of scientific data. Rather than producing ordinary histogram and function diagram graphs, the approach tries to represent the information in a more graphically appealing and easy-to-understand way. For example, the approach gives the ability to represent a temperature as the level of coloured fluid in a thermometer.
The proposed framework is able to keep the values of the data strictly separated from the visual form of their representation (positions of elements, colours, visual representation, etc.).
By defining appropriate data structures and expressing them using XML, the framework gives the user the ability to create graphic representations using standard SVG and XSLT.
Since XML can be used to describe complex data, we represent every level of the graphic representation with an XML structure.
To describe our architecture we defined the following XML dialects, each one with different markup tags reflecting the semantic values of the elements.
Data definition level. Used to define the values of the data that can be used in the graphic representation.
Data representation level. Used to define the graphic representation; it defines how the values expressed by the data definition level are represented.
Both the data representation and data definition files are based on DTDs that impose their constraints.
The data representation level is the core of the system and defines a powerful language for representation.
Source primitives. Used to define the source of the graphic elements, for example a static file or SVG code.
Modification primitives. Used to define the modifications that can affect a graphic element, for example rotation, scaling or repetition.
Disposition primitives. Used to define the possible dispositions along the x, y and z axes, for example to impose an order in the representation of elements.
Action primitives. Used to define the possible actions that can be activated by graphic elements for different user behaviours. For example, a mouse action can activate a link to a different resource, change the value of any of the other primitives of the data structure (such as image source or disposition), or show a tooltip.
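To make the two levels concrete, a minimal sketch of the paired files might look as follows. All element and attribute names here are hypothetical, invented for illustration; the framework's actual DTDs define the real vocabulary:

```xml
<!-- Hypothetical data definition file: values only, no visual information -->
<data-definition>
  <datum id="room-temp" unit="celsius">21.5</datum>
</data-definition>

<!-- Hypothetical data representation file: how the value is drawn,
     using the four primitive families described above -->
<data-representation>
  <element ref="room-temp">
    <source type="svg-file" href="thermometer.svg"/>  <!-- source primitive -->
    <modification type="scale" min="0" max="40"/>     <!-- modification primitive -->
    <disposition x="10" y="10" z="0"/>                <!-- disposition primitive -->
    <action event="mouseover" type="tooltip"
            text="Room temperature"/>                 <!-- action primitive -->
  </element>
</data-representation>
```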
XSLT is used to output an SVG file derived from the two files describing the graphic representation.
Our aim is to provide an abstract language that can represent the same concept in different ways. In fact, we can link a data definition file with different data representation levels, providing different kinds and levels of complexity for the same concept. An example use could be the representation of the temperature described before, where the temperature could be represented either as the level of mercury in a thermometer or as the rotation of an arrow in a gauge.
The transformation process turns an XML source tree into an XML result tree, using XPath to define patterns. The XSLT transformation process is based on templates, which define actions (such as adding, removing, or sorting elements) to be performed when a part of the document matches a template.
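As an illustration of this template mechanism, the following fragment shows the standard XSLT/SVG pattern the framework relies on; the `datum` element is an invented stand-in for the data definition vocabulary, and an attribute value template computes SVG geometry directly from the matched value:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:svg="http://www.w3.org/2000/svg">
  <!-- Draw each datum as a bar whose height is its numeric value;
       {.} is an attribute value template evaluated with XPath -->
  <xsl:template match="datum">
    <svg:rect x="10" y="{100 - .}" width="20" height="{.}" fill="red"/>
  </xsl:template>
</xsl:stylesheet>
```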
To implement some of the more complex graphics operations, we use XSLT extensions that allow mathematical operations to be performed.
These XSLT extensions are not yet standard and require a specific compliant processor, such as Apache Xalan, which allows the developer to interface with Java classes in order to extend XSLT's areas of application, from simple node transformations to quite complex operations.
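A sketch of such an extension binding in Xalan, where a Java class is exposed through a namespace URI so its static methods become callable from XPath; the `datum` element and the computed rotation are illustrative, not part of the framework's actual vocabulary:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:svg="http://www.w3.org/2000/svg"
    xmlns:math="http://xml.apache.org/xalan/java/java.lang.Math"
    exclude-result-prefixes="math">
  <!-- Non-standard, Xalan-only: math:* calls resolve to java.lang.Math statics -->
  <xsl:template match="datum">
    <xsl:variable name="angle"
        select="math:toDegrees(math:atan2(number(.), 100))"/>
    <svg:line x1="0" y1="0" x2="30" y2="0"
        transform="rotate({$angle})" stroke="black"/>
  </xsl:template>
</xsl:stylesheet>
```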
User-centered development of a Virtual Research Environment to support collaborative research events
This paper discusses the user-centred development process within the Collaborative Research Events on the Web (CREW) project, funded under the JISC Virtual Research Environments (VRE) programme. After presenting the project, its aims and the functionality
of the CREW VRE, we focus on the user engagement approach, grounded in the method of co-realisation. We describe the different research settings and requirements of our three embedded user groups and the respective activities conducted so far. Finally we elaborate on
the main challenges of our user engagement approach and end with the project’s next steps.
Investigating the effectiveness of an efficient label placement method using eye movement data
This paper focuses on improving the efficiency and effectiveness of dynamic and interactive maps in relation to the user. A label placement method with an improved algorithmic efficiency is presented. Since this algorithm influences the actual placement of the name labels on the map, we test whether this more efficient algorithm also creates more effective maps: how well is the information processed by the user? We tested 30 participants while they were working on a dynamic and interactive map display. Their task was to locate geographical names on each of the presented maps. Their eye movements were registered together with the time at which a given label was found. The gathered data reveal no difference in the users' response times, nor in the number and duration of fixations, between the two map designs. The results of this study show that the efficiency of label placement algorithms can be improved without disturbing the user's cognitive map. Consequently, we created a more efficient map without affecting its effectiveness towards the user.
Using visual analytics to develop situation awareness in astrophysics
We present a novel collaborative visual analytics application for cognitively overloaded users in the astrophysics domain. The system was developed for scientists who need to analyze heterogeneous, complex data under time pressure, and make predictions and time-critical decisions rapidly and correctly under a constant influx of changing data. The Sunfall Data Taking system utilizes several novel visualization and analysis techniques to enable a team of geographically distributed domain specialists to effectively and remotely maneuver a custom-built instrument under challenging operational conditions. Sunfall Data Taking has been in production use for two years by a major international astrophysics collaboration (the largest data volume supernova search currently in operation), and has substantially improved the operational efficiency of its users. We describe the system design process by an interdisciplinary team, the system architecture, and the results of an informal usability evaluation of the production system by domain experts in the context of Endsley's three levels of situation awareness.
Report of the user requirements and web based access for eResearch workshops
The User Requirements and Web Based Access for eResearch Workshop, organized jointly by NeSC and NCeSS, was held on 19 May 2006. The aim was to identify lessons learned from e-Science projects that would contribute to our capacity to make Grid infrastructures and tools usable and accessible for diverse user communities. Its focus was on providing an opportunity for a pragmatic discussion between e-Science end users
and tool builders in order to understand usability challenges, technological options, community-specific content and needs, and methodologies for design and development. We invited members of six UK e-Science projects and one US project, trying as far as
possible to pair a user and developer from each project in order to discuss their contrasting perspectives and experiences. Three breakout group sessions covered the
topics of user-developer relations, commodification, and functionality. There was also extensive post-meeting discussion, summarized here.
Additional information on the workshop, including the agenda, participant list, and talk slides, can be found online at http://www.nesc.ac.uk/esi/events/685/
Reference: NeSC report UKeS-2006-07 available from http://www.nesc.ac.uk/technical_papers/UKeS-2006-07.pd