
    Energy dissipation prediction of particle dampers

    This paper presents initial work on developing models for predicting the behaviour of particle dampers (PDs) using the Discrete Element Method (DEM). In the DEM approach, individual particles are typically represented as elements with mass and rotational inertia. Contacts between particles and with walls are represented using springs, dampers and sliding friction interfaces. In order to use DEM to predict damper behaviour adequately, it is important to identify representative models of the contact conditions. It is particularly important to strike the appropriate trade-off between accuracy and computational efficiency, as PDs contain so many individual elements. In order to identify appropriate models, experimental work was carried out to understand interactions between the typically small (1.5–3 mm diameter) particles used. Measurements were made of the coefficient of restitution and interface friction. These were used to give an indication of the level of uncertainty that the simplest (linear) models might assume. These data were then used to predict energy dissipation in a PD via a DEM simulation, and the results were compared with those of an experiment.
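    The linear contact model the abstract refers to can be sketched briefly. For a linear spring-dashpot contact, the damping coefficient that reproduces a measured coefficient of restitution e follows from the damped-oscillator solution, and the kinetic energy lost in one normal impact is (1/2)mv²(1 − e²). The material values below (a 1.5 mm steel sphere) are illustrative assumptions, not figures from the paper:

    ```python
    import math

    def damping_from_restitution(e, m, k):
        """Damping coefficient c for a linear spring-dashpot contact that
        reproduces coefficient of restitution e (mass m, spring stiffness k).
        Derived from e = exp(-zeta*pi/sqrt(1-zeta^2)), zeta = c/(2*sqrt(m*k))."""
        ln_e = math.log(e)
        return -2.0 * ln_e * math.sqrt(m * k) / math.sqrt(math.pi**2 + ln_e**2)

    def energy_dissipated(m, v_impact, e):
        """Kinetic energy lost in a single normal impact with restitution e."""
        return 0.5 * m * v_impact**2 * (1.0 - e**2)

    # Illustrative values: 1.5 mm diameter steel sphere (density ~7800 kg/m^3)
    m = (4.0 / 3.0) * math.pi * (0.75e-3) ** 3 * 7800.0
    e = 0.9                                   # assumed restitution coefficient
    c = damping_from_restitution(e, m, k=1e4)  # assumed contact stiffness, N/m
    dE = energy_dissipated(m, v_impact=0.5, e=e)
    print(c, dE)
    ```

    A full DEM simulation sums such losses over every particle-particle and particle-wall contact per time step; this sketch only shows the per-contact bookkeeping implied by the linear model.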

    mSpace Mobile: Exploring Support for Mobile Tasks

    In this paper we compare two Web application interfaces, mSpace Mobile and Google Local, in supporting location discovery tasks on mobile devices while stationary and while on the move. While mSpace Mobile performed well in both stationary and mobile conditions, performance with Google Local dropped significantly. We postulate that mSpace Mobile performed so well because it breaks the paradigm of the page for delivering Web content, thereby enabling new and more powerful interfaces to support mobility.

    Webbox+Page Blossom: exploring design for AKTive data interaction

    We give away our data to multiple data services without, for the most part, being able to get that data back to reuse in any other way, leaving us, at best, to re-find, re-cover, retype, remember and re-manage this material. In this work in progress, we hypothesize that if we facilitate easy interaction to store, access and reuse our personal, social and public data, we will not only decrease the time spent recreating it for multiple walled data contexts, but in particular will develop novel interactions for new kinds of knowledge building. To explore this hypothesis, we propose Page Blossom, an exemplar of such dynamic data interaction that is based on data reuse via our open data platform Webbox + AKTive (active knowledge technology) lenses.

    Hunter gatherer: within-web-page collection making

    Hunter Gatherer is a tool that lets Web users carry out three main tasks: (1) collect components from within Web pages; (2) represent those components in a collection; and (3) edit those collections. We report on the design and evaluation of the tool and contextualize tool use in terms of our research goals: to investigate possible shifts in information interaction practices resulting from tool use.

    Spatial Consistency and Contextual Cues for Incidental Learning in Browser Design

    This paper introduces the Backward Highlighting technique for mitigating an identified flaw in directional column-faceted browsers such as iTunes. The technique significantly enhances the information that can be learned from the columns and encourages further interaction with facet items that were previously restricted from use. After giving a detailed overview of faceted browsing approaches, the Backward Highlighting technique is described along with possible implementations. Two of these implementations are compared to a control condition to establish statistically the value of Backward Highlighting. The analysis produces design recommendations for implementing the Backward Highlighting technique within faceted browsers that take the directional column approach. The paper concludes with future work on how to further improve on the statistically demonstrated advantages of the technique.

    Continuum: designing timelines for hierarchies, relationships and scale

    Temporal events, while often discrete, also have interesting relationships within and across times: larger events are often collections of smaller, more discrete events (battles within wars; artists' works within a form), and events at one point also have correlations with events at other points (a play written in one period is related to its performance, or lack of performance, over a period of time). Most temporal visualisations, however, only represent discrete data points or single data types along a single timeline: this event started here and ended there; this work was published at this time; this tag was popular for this period. In order to represent richer, faceted attributes of temporal events, we present Continuum. Continuum enables hierarchical relationships in temporal data to be represented and explored; it enables relationships between events across periods to be expressed; and in particular it gives user-determined control over the level of detail of any facet of interest, so that the person using the system can determine a focus point no matter the level of zoom over the temporal space. We present the factors motivating our approach, and our evaluation and implementation of this new visualisation, which makes it easy for anyone to apply this interface to rich, large-scale datasets with temporal data.

    mSpace meets EPrints: a Case Study in Creating Dynamic Digital Collections

    In this case study we look at issues involved in (a) generating dynamic digital libraries that are on a particular topic but span heterogeneous collections at distinct sites, (b) supplementing the artefacts in that collection with additional information available either from databases at the artefact's home or from the Web at large, and (c) providing an interaction paradigm that will support effective exploration of this new resource. We describe how we used two available frameworks, mSpace and EPrints, to support this kind of collection building. The result of the study is a set of recommendations to improve the connectivity of remote resources both to one another and to related Web resources, and to reduce problems such as co-referencing, in order to enable the creation of new collections on demand.

    Using pivots to explore heterogeneous collections: A case study in musicology

    In order to provide a better e-research environment for musicologists, the musicSpace project has partnered with musicology’s leading data publishers, aggregated and enriched their data, and developed a richly featured exploratory search interface to access the combined dataset. There have been several significant challenges in developing this service, and intensive collaboration was required between musicologists (the domain experts) and computer scientists (who developed the enabling technologies). One challenge was the aggregation of the data itself, as it was supplied adhering to a wide variety of different schemas and vocabularies. Although the domain experts expended much time and effort in analysing commonalities in the data, as data sources of increasing complexity were added, earlier decisions regarding the design of the aggregated schema, particularly decisions made with reference to simpler data sources, were often revisited to take account of unanticipated metadata types. Additionally, in many domains a single source may be considered definitive for certain types of information. In musicology, this is essentially the case with the “works lists” of composers’ musical compositions given in Grove Music Online (http://www.oxfordmusiconline.com/public/book/omo_gmo), and so for musicSpace we have mapped all sources to the works lists from Grove for the purposes of exploration, specifically to exploit the accuracy of its metadata with respect to dates of publication, catalogue numbers, and so on. Rather than mapping all fields from Grove to a central model, it is far quicker (in terms of development time) to create a system that “pulls in” data from other sources that are mapped directly to the Grove works lists.
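    The pivot idea the abstract describes can be sketched minimally: each source maps its own record identifiers onto entries in the canonical Grove works list, and exploration then joins through that pivot rather than through a full central schema. All names, identifiers and record values below are hypothetical illustrations, not musicSpace's actual data model:

    ```python
    # Hypothetical sketch of a works-list pivot (illustrative data only).
    # The canonical "works list" plays the role of the Grove data.
    grove_works = {
        "BWV 846": {"composer": "J. S. Bach",
                    "title": "Prelude and Fugue in C major"},
    }

    # Each heterogeneous source keeps its own schema; only a mapping from
    # its record IDs to works-list entries is maintained.
    source_mappings = {
        "sourceA": {"rec-001": "BWV 846"},
        "sourceB": {"x9": "BWV 846"},
    }

    def records_for_work(work_id, sources):
        """Collect, per source, every record that pivots to the given work."""
        return {src: [rid for rid, wid in mapping.items() if wid == work_id]
                for src, mapping in sources.items()}

    print(records_for_work("BWV 846", source_mappings))
    # → {'sourceA': ['rec-001'], 'sourceB': ['x9']}
    ```

    The design advantage mirrors the abstract's point: adding a new source only requires a mapping to the works list, not a revision of a shared central schema.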