User-author centered multimedia building blocks
The advances of multimedia models and tools have popularized the access to and production of multimedia content: in this new scenario, there is no longer a clear distinction between the authors and end-users of a production. These user-authors often work collaboratively. As end-users, they collectively participate in interactive environments, consuming multimedia artifacts. In their author role, instead of starting from scratch, they often reuse others' productions, which can be decomposed, fused, and transformed to meet their goals. Since the need for sharing and adapting productions is felt by many communities, there has been a proliferation of standards and mechanisms to exchange complex digital objects for distinct application domains. However, these initiatives have created another level of complexity, since people have to decide which share/reuse solution to adopt and may even have to resort to programming tasks. They also lack effective strategies to combine these reused artifacts. This paper presents a solution to this demand, based on a user-author centered multimedia building block model: the digital content component (DCC). DCCs upgrade the notion of digital objects to digital components, as they homogeneously wrap any kind of digital content (e.g., multimedia artifacts, software) inside a single component abstraction. The model is fully supported by a software infrastructure, which exploits the model's semantic power to automate low-level technical activities, thereby freeing user-authors to concentrate on creative tasks. Model and infrastructure improve recent research initiatives to standardize the means of sharing and reusing domain-specific digital content. The paper's contributions are illustrated using examples implemented in a DCC-based authoring tool, in real-life situations.
Interoperability for GIS document management in environmental planning
Environmental planning requires constant tracing and revision of activities. Planners must be provided with appropriate documentation tools to aid communication among them and to support plan enactment, revision, and evolution. Moreover, planners often work in distinct institutions, so these supporting tools must interoperate in distributed environments and in a semantically coherent fashion. Since semantics are strongly related to use, documentation also enhances the ways in which users can cooperate. The emergence of the Semantic Web created the need for documenting Web data and processes using specific standards. This paper addresses this problem through two contributions: (1) ways of documenting planning processes in three different aspects: what was done, how it was done, and why it was done that way; and (2) a framework that supports the management of those documents using Semantic Web standards.
Recommended from our members
Experiences with user-centered design for the tigres workflow API
Scientific data volumes have been growing exponentially. This has resulted in the need for new tools that enable users to operate on and analyze data. Cyberinfrastructure tools, including workflow tools, developed in the last few years have often fallen short of user needs and suffered from lack of wider adoption. The User-Centered Design (UCD) process has been used as an effective approach to develop usable software with high adoption rates. However, UCD has largely been applied to user interfaces, and there has been limited work in applying UCD to application program interfaces and cyberinfrastructure tools. We use an adapted version of UCD that we refer to as Scientist-Centered Design (SCD) to engage with users in the design and development of Tigres, a workflow application programming interface. Tigres provides a simple set of programming templates (e.g., sequence, parallel, split, merge) that can be used to compose and execute computational and data transformation pipelines. In this paper, we describe Tigres and discuss our experiences with the use of UCD for the initial development of Tigres. Our experience to date is that the UCD process not only resulted in better requirements gathering but also heavily influenced the architecture design and implementation details. User engagement during the development of tools such as Tigres is critical to ensure usability and increase adoption.
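The template concept described above (sequence, parallel, split, merge) can be illustrated with a minimal sketch. Note that this is a hypothetical illustration of the general pattern, not the actual Tigres API, whose function names and signatures differ.

```python
# Hypothetical sketch of workflow "templates" in the style the abstract
# describes; these helpers are illustrative, not the real Tigres API.

def sequence(data, *tasks):
    """Run tasks one after another, feeding each output to the next."""
    for task in tasks:
        data = task(data)
    return data

def parallel(items, task):
    """Apply the same task independently to each input item."""
    return [task(item) for item in items]

def split(data, *tasks):
    """Fan a single input out to several tasks."""
    return [task(data) for task in tasks]

def merge(results, combine):
    """Fan partial results back in with a combining function."""
    return combine(results)

# Example pipeline: sort, scale, then fan out to two summaries and merge.
raw = [3, 1, 2]
cleaned = sequence(raw, sorted, lambda xs: [x * 10 for x in xs])
stats = split(cleaned, sum, max)
print(merge(stats, tuple))  # (60, 30)
```

Composing pipelines from a handful of such templates, rather than a general graph language, is the simplicity argument the abstract makes for adoption.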
Integrated analysis of productivity and biodiversity in a southern Alberta prairie
Grasslands play important roles in ecosystem production and support a large farming and grazing industry. Accurate and efficient methods are needed to estimate grassland health and production, so that management can be monitored and adjusted to sustain products and other ecosystem services. Previous studies of grasslands have shown varying relationships between productivity and biodiversity, with most showing either a positive or a hump-shaped relationship in which productivity peaks at intermediate diversity. In this study, we used airborne imaging spectrometry combined with ground sampling and eddy covariance measurements to estimate the spatial pattern of production and biodiversity for two sites of contrasting productivity in a southern Alberta prairie ecosystem. The resulting patterns revealed that more diverse sites generally had greater productivity, supporting the hypothesis of a positive relationship between production and biodiversity for this site. We showed that the addition of evenness to richness (using the Shannon Index of dominant species instead of the number of dominant species alone) improved the correlation with optical diversity, an optically derived metric of biodiversity based on the coefficient of variation in spectral reflectance across space. Similarly, the Shannon Index was better correlated with productivity (estimated via the Normalized Difference Vegetation Index, NDVI) than the number of dominant species alone. Optical diversity provided a potent proxy for other, more traditional biodiversity metrics (richness and Shannon Index). Coupling field measurements and imaging spectrometry provides a method for assessing grassland productivity and biodiversity at a larger scale than can be sampled from the ground, and allows the integrated analysis of the productivity-biodiversity relationship over large areas.
Balancing the needs of consumers and producers for scientific data collections
Recent emphasis and requirements for open data publication have led to significant increases in data availability in the Earth sciences, which is critical to long-tail data integration. Currently, data are often published in a repository with an identifier and citation, similar to those for papers. Subsequent publications that use the data are expected to provide a citation in the reference section of the paper. However, the format of the data citation is still evolving, particularly with regard to citing dynamic data, subsets, and collections of data. Considering the motivations of both data producers and consumers, the most pressing need is to create user-friendly solutions that provide credit for data producers and enable accurate citation of data, particularly integrated data. Providing easy-to-use data citations is a critical foundation required to address the socio-technical challenges around data integration. Studies that integrate data from dozens or hundreds of datasets must often include data citations in supplementary material due to page limits. However, citations in the supplementary material are not indexed, making it difficult to track citations and thus to give credit to the data producer. In this paper, we discuss our experiences and the challenges we have encountered with current citation guidance. We also review the relative merits of the currently available mechanisms designed to enable compact citation of collections of data, such as data collections, data papers, and dynamic data citations. We consider these options for three data producer scenarios: a domain-specific data collection, a data repository, and a large-scale, multidisciplinary project. We posit that a new mechanism is also needed to enable citation of multiple datasets and credit to data producers.
A new data set monitors land-air exchanges
FLUXNET2015, the latest update of the longest global record of ecosystem carbon, water, and energy fluxes, features improved data quality, new data products, and more open data sharing policies.