From the visual book to the WEB book : the importance of design
This paper presents the results of two studies into electronic book production. The Visual book study explored the importance of the visual component of the book metaphor for the production of more effective electronic books, while the WEB book study took the findings of the Visual book and applied them to the production of books for publication on the World Wide Web (WWW). Both studies started from an assessment of which kinds of paper book are more suitable for translation into electronic form. Both also distinguished publications meant to be used for reference from those read sequentially, and usually in their entirety. The reference group includes scientific publications and textbooks, which were used as the target group for both the Visual book and the WEB book experiments. In this paper we discuss the results of the two studies and how they could influence the design and production of more effective electronic books.
The WEB Book experiments in electronic textbook design
This paper describes a series of three evaluations of electronic textbooks on the Web, which focused on assessing how appearance and design can affect users' sense of engagement and directness with the material. The EBONI Project's methodology for evaluating electronic textbooks is outlined and each experiment is described, together with an analysis of results. Finally, some recommendations for successful design are suggested, based on an analysis of all experimental data. These recommendations underline the main findings of the evaluations: that users want some features of paper books to be preserved in the electronic medium, while also preferring electronic text to be written in a scannable style.
Report of the user requirements and web based access for eResearch workshops
The User Requirements and Web Based Access for eResearch Workshop, organized jointly by NeSC and NCeSS, was held on 19 May 2006. The aim was to identify lessons learned from e-Science projects that would contribute to our capacity to make Grid infrastructures and tools usable and accessible for diverse user communities. Its focus was on providing an opportunity for a pragmatic discussion between e-Science end users and tool builders in order to understand usability challenges, technological options, community-specific content and needs, and methodologies for design and development. We invited members of six UK e-Science projects and one US project, trying as far as possible to pair a user and developer from each project in order to discuss their contrasting perspectives and experiences. Three breakout group sessions covered the topics of user-developer relations, commodification, and functionality. There was also extensive post-meeting discussion, summarized here.
Additional information on the workshop, including the agenda, participant list, and talk slides, can be found online at http://www.nesc.ac.uk/esi/events/685/
Reference: NeSC report UKeS-2006-07 available from http://www.nesc.ac.uk/technical_papers/UKeS-2006-07.pd
Online Information on Dysmenorrhea: An Evaluation of Readability, Credibility, Quality, and Usability
Aims and objectives
To evaluate online information on dysmenorrhoea, including readability, credibility, quality and usability.
Background
Menstrual pain impacts 45%–95% of women of reproductive age globally and is the leading cause of school and work absences among women. Women often seek online information on dysmenorrhoea; however, little is known about the information quality.
Design
This was a descriptive study to evaluate online information on dysmenorrhoea.
Methods
We imitated the search strategies of the general public. Specifically, we employed the three most popular search engines worldwide (Google, Yahoo and Bing) and used the lay search terms "period pain" and "menstrual cramps". We screened 60 web pages. Following removal of duplicates and irrelevant web pages, 25 met the eligibility criteria. Two team members independently evaluated the included web pages using standardised tools. Readability was evaluated with the Flesch–Kincaid Reading Ease and Flesch–Kincaid Grade formulas; credibility, quality and usability were evaluated with established tools. We followed the STROBE checklist for reporting this study.
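For context, the standard Flesch–Kincaid formulas mentioned above are computed from word, sentence and syllable counts:

```latex
% Flesch Reading Ease (higher scores indicate easier reading)
\mathrm{RE} = 206.835 - 1.015\left(\frac{\text{words}}{\text{sentences}}\right) - 84.6\left(\frac{\text{syllables}}{\text{words}}\right)

% Flesch--Kincaid Grade Level (approximate US school grade)
\mathrm{GL} = 0.39\left(\frac{\text{words}}{\text{sentences}}\right) + 11.8\left(\frac{\text{syllables}}{\text{words}}\right) - 15.59
```

Patient-information guidelines commonly recommend writing at around a sixth-to-eighth-grade reading level, which is the benchmark against which the grade-level results are judged.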
Results
For readability, the mean Flesch–Kincaid level was 10th grade. For credibility, 8% of web pages referenced scientific literature and 28% stated the author's name and qualifications. For quality, no web page employed user-driven content production; 8% of web pages referenced evidence-based guidelines, 32% of web pages had accurate content, and 4% of web pages recommended shared decision-making. Most web pages were interactive and included nontextual information. Some nontextual information was inaccurate.
Conclusion
Online information on dysmenorrhoea has generally low readability, mixed credibility and variable quality.
Relevance to clinical practice
Strategies to improve health information on dysmenorrhoea include avoiding complex terms, incorporating visual aids, presenting evidence-based information and developing a decision aid to support shared decision-making. Healthcare providers should be aware of the problematic health information that individuals are exposed to and provide education about how to navigate online health information.
The Cognitive Atlas: Employing Interaction Design Processes to Facilitate Collaborative Ontology Creation
The Cognitive Atlas is a collaborative knowledge-building project that aims to develop an ontology that characterizes the current conceptual framework among researchers in cognitive science and neuroscience. The project objectives from the beginning focused on usability, simplicity, and utility for end users. Support for Semantic Web technologies was also a priority in order to support interoperability with other neuroscience projects and knowledge bases. Current off-the-shelf semantic web or semantic wiki technologies, however, do not often lend themselves to simple user interaction designs for non-technical researchers and practitioners; the abstract nature and complexity of these systems acts as a point of friction for user interaction, inhibiting usability and utility. Instead, we take an alternate interaction design approach driven by user-centered design processes rather than a base set of semantic technologies. This paper reviews the initial two rounds of design and development of the Cognitive Atlas system, including interactive design decisions and their implementation as guided by current industry practices for the development of complex interactive systems.
Digital interaction: where are we going?
In the framework of the AVI 2018 Conference, the interuniversity center ECONA organized a thematic workshop on "Digital Interaction: where are we going?". Six contributions from ECONA members investigate different perspectives on this theme.
Using SVG and XSLT for graphic representation
In this paper we will present an XML-based framework that can be used to produce graphical visualisations of scientific data. Rather than producing ordinary histogram and function diagram graphs, the approach tries to represent the information in a more graphically appealing and easy-to-understand way. For example, the approach gives the ability to represent temperature as the level of coloured fluid in a thermometer.
The proposed framework is able to keep the values of the data strictly separated from the visual form of their representation (positions of elements, colours, visual representation etc.).
By defining appropriate data structures and expressing them using XML, the framework gives the user the ability to create graphic representations using standard SVG and XSLT.
Since XML can be used for describing complex data information, we represent every level of the graphic representation with an XML structure.
To describe our architecture we defined the following XML dialects, each one with different markup tags reflecting the semantic values of the elements.
Data definition level. Used to define the values of the data that can be used in the graphic representation.
Data representation level. Used to define the graphic representation: it defines how the values expressed at the data definition level are represented.
Both the data representation and data definition files are based on a DTD that imposes their constraints.
The data representation level is the core of the system, and defines a powerful language for representation.
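As an illustration of the separation between the two levels, a data definition file might look like the following sketch (element and attribute names here are hypothetical, not those of the original framework):

```xml
<!-- Hypothetical data definition file: holds only values, no visual form.
     Element and attribute names are illustrative, not from the original DTDs. -->
<datadef>
  <value name="temperature" unit="celsius">23.5</value>
</datadef>
```

The visual form of this value (a thermometer, a gauge, and so on) would live entirely in a separate data representation file.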
Source primitives. Used to define the source of the graphic elements, for example a static file or SVG code.
Modification primitives. Used to define the modifications that can affect a graphic element, for example rotation, scaling or repetition.
Disposition primitives. Used to define the possible dispositions along the x, y and z axes, for example to impose an order in the representation of elements.
Action primitives. Used to define the possible actions that can be activated by graphic elements for different user behaviours. For example, a mouse action can activate a link to a different resource, can change the value of any of the other primitives of the data structure, such as image source or disposition, or can show a tooltip.
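A data representation file combining the four primitive classes might be sketched as follows (again, all names are hypothetical placeholders for whatever the framework's DTDs actually define):

```xml
<!-- Hypothetical data representation file: binds the "temperature" value
     to a visual element via the four primitive classes described above.
     All element and attribute names are illustrative only. -->
<representation>
  <element data-ref="temperature">
    <source type="svg-file" href="thermometer.svg"/>     <!-- source primitive -->
    <modification type="scale" axis="y" bind="value"/>   <!-- modification primitive -->
    <disposition x="10" y="20" z="1"/>                   <!-- disposition primitive -->
    <action event="mouseover" type="tooltip"
            text="Current temperature"/>                 <!-- action primitive -->
  </element>
</representation>
```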
XSLT is used to output an SVG file derived from the two files describing the graphic representation.
Our aim is to provide an abstract language that can be used to represent the same concept in different ways. In fact, we can link a data definition file with different data representation levels, providing different kinds and levels of complexity for the same concept. An example use could be the representation of the temperature described before, where the temperature itself could be represented either as the level of mercury in the thermometer, or as the rotation of an arrow in a gauge.
The transformation process turns an XML source tree into an XML result tree, using XPath to define patterns. The XSLT transformation process is based on templates, which define actions (such as adding, removing or sorting elements) to be performed when a part of the document matches a template.
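A minimal stylesheet in this style, matching the hypothetical data definition sketch used earlier (element names are assumptions, not the framework's actual dialect), could look like:

```xml
<?xml version="1.0"?>
<!-- Minimal XSLT sketch: renders the "temperature" value from a hypothetical
     data definition file as the height of an SVG rectangle standing in for
     the fluid column of a thermometer. Names are illustrative only. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:svg="http://www.w3.org/2000/svg">
  <xsl:template match="/datadef">
    <svg:svg width="40" height="120">
      <!-- Attribute value templates compute the bar geometry with XPath
           arithmetic: the column grows with the temperature value. -->
      <svg:rect x="15" width="10"
                height="{value[@name='temperature'] * 2}"
                y="{120 - value[@name='temperature'] * 2}"
                fill="red"/>
    </svg:svg>
  </xsl:template>
</xsl:stylesheet>
```

Swapping in a different stylesheet against the same data file is what yields the alternative renderings (thermometer versus gauge) without touching the data.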
To implement some of the complex graphics operations we use XSLT extensions that allow mathematical operations to be performed. These XSLT extensions are not yet standard and require a compliant processor, such as Apache Xalan, which allows the developer to interface with Java classes in order to extend XSLT's areas of application, from simple node transformations to quite complex operations.
Engineering of an Extreme Rainfall Detection System using Grid Computing
This paper describes a new approach for intensive rainfall data analysis. ITHACA's Extreme Rainfall Detection System (ERDS) is conceived to provide near real-time alerts related to potential exceptional rainfall worldwide, which can be used by WFP or other humanitarian assistance organizations to evaluate an event and understand the potentially floodable areas where their assistance is needed. This system is based on precipitation analysis and uses satellite rainfall data at worldwide extent. The project uses the Tropical Rainfall Measuring Mission Multisatellite Precipitation Analysis dataset, a NASA-delivered near real-time product for monitoring current rainfall conditions over the world. Considering the large amount of data to process, this paper presents an architectural solution based on Grid Computing techniques. Our focus is on the advantages of using a distributed architecture in terms of performance for this specific purpose.
DataCite as a novel bibliometric source: Coverage, strengths and limitations
This paper explores the characteristics of DataCite to determine its possibilities and potential as a new bibliometric data source for analyzing the scholarly production of open data. Open science and the increasing data sharing requirements from governments, funding bodies, institutions and scientific journals have led to a pressing demand for the development of data metrics. As a very first step towards reliable data metrics, we need to better comprehend the limitations and caveats of the information provided by sources of open data. In this paper, we critically examine records downloaded from DataCite's OAI API and elaborate a series of recommendations regarding the use of this source for bibliometric analyses of open data. We highlight issues related to metadata incompleteness, lack of standardization, and ambiguous definitions of several fields. Despite these limitations, we emphasize DataCite's value and potential to become one of the main sources for data metrics development.
Comment: Paper accepted for publication in Journal of Informetrics