
    Investigating the Feasibility of Creating a Piece of Software for Practical Electrical Classes that Engages Learners of Different Learning Styles

    This paper examines the feasibility of creating a piece of software for practical electrical classes that engages learners of different learning styles. Traditional practical electrical classes are usually delivered using text-based resources, but advances in technology now make it possible to provide information in a variety of formats. The starting point of this research was to evaluate the preferred learning style of the typical apprentice learner using a learning style questionnaire based on the VARK model, which represents four learning styles: Visual, Auditory, Reading/Writing and Kinaesthetic. The results from the questionnaire then influenced the design of a workshop interface to suit the learner's particular learning style. The final design was informed by expert opinion in the area of learning styles as well as by subject area experts. The interface was evaluated by 28 electrical apprentices and six lecturers, who all agreed that it presented a new and innovative approach to delivering information within a practical workshop setting. The study concludes that it is possible to create a workshop interface that engages learners of different learning styles.
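    The VARK classification step described above can be illustrated with a brief sketch; the answer-to-style mapping, the sample responses and the tie-handling rule below are hypothetical illustrations, not details taken from the study's questionnaire.

```python
from collections import Counter

# Hypothetical: each questionnaire answer has already been mapped to one of the
# four VARK styles; the real instrument defines that mapping per item.
STYLES = ("Visual", "Auditory", "Reading/Writing", "Kinaesthetic")

def preferred_styles(responses):
    """Tally style-mapped answers and return the style(s) with the highest
    count; a respondent with tied counts would be treated as multimodal."""
    counts = Counter(responses)
    top = max(counts.values())
    return [s for s in STYLES if counts.get(s, 0) == top]

# Example: an apprentice whose answers lean kinaesthetic
answers = ["Kinaesthetic", "Visual", "Kinaesthetic", "Reading/Writing",
           "Kinaesthetic", "Auditory", "Kinaesthetic"]
print(preferred_styles(answers))  # ['Kinaesthetic']
```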

    Role-Play Simulations and System Dynamics for Sustainability Solutions around Dams in New England

    Research has shown that much of the science produced does not make its way to the decision-making table. This leads to a problematic gap between scientific and societal progress. This study tests a novel science-based negotiation simulation that integrates role-play simulations (RPSs) with a system dynamics model (SDM). In RPSs, stakeholders engage in a mock decision-making process (reflecting real-life institutional arrangements and scientific knowledge) for a set period. By playing an assigned role (different from the participant's real-life role), participants have a safe space to learn about each other's perspectives, develop a shared understanding of a complex issue, and collaborate on solving that issue. System dynamics models (SDMs) are visual tools used to simulate the interactions and feedback within a complex system. We test the integration of the two approaches toward problem-solving with real stakeholders in New Hampshire and Rhode Island via a series of two consecutive workshops in each state. The workshops are intended to engage representatives from diverse groups who are interested in dam-related issues to foster dialogue, learning, and creativity. Participants will discuss a hypothetical (yet realistic) dam-decision scenario to consider scientific information and explore dam management options that meet one another's interests. In the first workshop, participants will contribute to the design of the fictionalized dam decision scenario and the SDM, for which we have presented drafts based on a literature review, stakeholder interviews, and expert knowledge. In the second workshop, participants will assume another representative's role and discuss dam management options for the fictionalized scenario. We will report results on the effectiveness with which this new knowledge production process leads to more innovative and collaborative decision-making around New England dams.
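    To make the system dynamics side concrete, a minimal stock-and-flow sketch is given below; the reservoir stock, constant inflow and level-dependent release are illustrative assumptions, not the model drafted for the workshops.

```python
def simulate_reservoir(level=50.0, inflow=10.0, release_fraction=0.15, steps=20):
    """Minimal stock-and-flow loop: the reservoir level (the stock) rises with a
    constant inflow and falls with a release that feeds back on the current level."""
    history = [level]
    for _ in range(steps):
        release = release_fraction * level  # feedback: outflow depends on the stock
        level = level + inflow - release
        history.append(level)
    return history

levels = simulate_reservoir()
# The level drifts toward the equilibrium inflow / release_fraction (about 66.7 here).
print(f"start={levels[0]:.1f}, end={levels[-1]:.1f}")
```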

    DCU and UTA at ImageCLEFPhoto 2007

    Dublin City University (DCU) and the University of Tampere (UTA) participated in the ImageCLEF 2007 photographic ad-hoc retrieval task with several monolingual and bilingual runs. Our approach was language independent: text retrieval based on fuzzy s-gram query translation was combined with visual retrieval. Data fusion between text and image content was performed using unsupervised query-time weight generation approaches. Our baseline was a combination of dictionary-based query translation and visual retrieval, which achieved the best result. The best mixed-modality runs using fuzzy s-gram translation achieved on average around 83% of the performance of the baseline. Performance was more similar when only top-rank precision levels of P10 and P20 were considered. This suggests that fuzzy s-gram query translation combined with visual retrieval is a cheap alternative for cross-lingual image retrieval where only a small number of relevant items are required. Both sets of results emphasize the merit of our query-time weight generation schemes for data fusion, with the fused runs exhibiting marked performance increases over single modalities; this is achieved without the use of any prior training data.
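    As a rough illustration of the two components named above, the sketch below pairs a character s-gram matcher with a linear score-fusion step; the gram length, skip limit and fusion weights are assumptions for illustration, not the parameters or the unsupervised weighting scheme used in the actual runs.

```python
from itertools import combinations

def s_grams(term, n=2, max_skip=1):
    """Character s-grams: n-character subsequences drawn from windows that allow
    up to max_skip skipped characters, so near-identical spellings across
    languages still share many grams (parameters chosen for illustration)."""
    grams = set()
    window = n + max_skip
    for start in range(len(term) - n + 1):
        chunk = term[start:start + window]
        for combo in combinations(range(len(chunk)), n):
            if combo[0] == 0:  # anchor each gram at the window start
                grams.add("".join(chunk[i] for i in combo))
    return grams

def gram_similarity(a, b):
    """Dice-style overlap used to match a source-language query term against
    target-language index terms."""
    ga, gb = s_grams(a), s_grams(b)
    return 2 * len(ga & gb) / (len(ga) + len(gb)) if ga and gb else 0.0

def fuse(text_score, image_score, w_text=0.6, w_image=0.4):
    """Linear fusion of the text and visual expert scores for one image;
    the weights here are fixed placeholders rather than query-time generated."""
    return w_text * text_score + w_image * image_score

print(round(gram_similarity("colour", "color"), 2))  # spelling variants still match
print(fuse(text_score=0.7, image_score=0.4))
```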

    TRECVid 2007 experiments at Dublin City University

    In this paper we describe our retrieval system and experiments performed for the automatic search task in TRECVid 2007. We submitted the following six automatic runs:
    • F A 1 DCU-TextOnly6: baseline run using only ASR/MT text features.
    • F A 1 DCU-ImgBaseline4: baseline visual expert only run, no ASR/MT used; made use of query-time generation of retrieval expert coefficients for fusion.
    • F A 2 DCU-ImgOnlyEnt5: automatic generation of retrieval expert coefficients for fusion at index time.
    • F A 2 DCU-imgOnlyEntHigh3: combination of coefficient generation which combined the coefficients generated by the query-time approach and the index-time approach, with greater weight given to the index-time coefficient.
    • F A 2 DCU-imgOnlyEntAuto2: as above, except that greater weight is given to the query-time coefficient that was generated.
    • F A 2 DCU-autoMixed1: query-time expert coefficient generation that used both visual and text experts.
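    A hedged sketch of how index-time and query-time expert coefficients might be blended for fusion is shown below; the entropy-based query-time weight, the mixing factor alpha and the toy scores are illustrative assumptions rather than the schemes used in these runs.

```python
import math

def query_time_weight(scores):
    """Derive an expert's coefficient at query time from how peaked its score
    distribution is (a stand-in for the actual weight-generation scheme)."""
    total = sum(scores) or 1.0
    probs = [s / total for s in scores]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return 1.0 - entropy / math.log(len(scores))  # peaked ranking -> higher weight

def fuse_experts(expert_scores, index_time_weights, alpha=0.7):
    """Blend index-time and query-time coefficients (alpha is an assumed mixing
    factor) and linearly combine the experts' scores per shot."""
    fused = {}
    for name, scores in expert_scores.items():
        w_q = query_time_weight(list(scores.values()))
        w = alpha * index_time_weights[name] + (1 - alpha) * w_q
        for shot, s in scores.items():
            fused[shot] = fused.get(shot, 0.0) + w * s
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

experts = {
    "text":   {"shot1": 0.9, "shot2": 0.2, "shot3": 0.1},
    "visual": {"shot1": 0.4, "shot2": 0.5, "shot3": 0.45},
}
print(fuse_experts(experts, index_time_weights={"text": 0.6, "visual": 0.4}))
```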

    K-Space at TRECVid 2007

    In this paper we describe K-Space participation in TRECVid 2007. K-Space participated in two tasks, high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features which included visual, audio and temporal elements. Specific concept detectors (such as face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches including logistic regression and support vector machines (SVM). Finally, we also experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance. The first of the two systems was a 'shot'-based interface, where the results from a query were presented as a ranked list of shots. The second interface was 'broadcast'-based, where results were presented as a ranked list of broadcasts. Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
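    The early versus late fusion contrast mentioned above can be sketched as follows; the random feature matrices, synthetic labels and simple probability averaging are assumptions for illustration, not the K-Space features or classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Toy stand-ins for low-level shot features: rows are shots, columns are dimensions.
rng = np.random.default_rng(0)
visual = rng.random((40, 5))
audio = rng.random((40, 3))
labels = (visual[:, 0] + audio[:, 0] > 1.0).astype(int)  # synthetic concept labels

# Early fusion: concatenate modalities into one vector and train one classifier.
early_clf = SVC(probability=True).fit(np.hstack([visual, audio]), labels)

# Late fusion: train one classifier per modality, then average their per-shot
# concept probabilities (the averaging rule is an assumption).
vis_clf = LogisticRegression(max_iter=1000).fit(visual, labels)
aud_clf = LogisticRegression(max_iter=1000).fit(audio, labels)

test_visual, test_audio = rng.random((5, 5)), rng.random((5, 3))
early_scores = early_clf.predict_proba(np.hstack([test_visual, test_audio]))[:, 1]
late_scores = 0.5 * (vis_clf.predict_proba(test_visual)[:, 1]
                     + aud_clf.predict_proba(test_audio)[:, 1])
print(np.round(early_scores, 2), np.round(late_scores, 2))
```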

    Dublin City University at TRECVID 2008

    In this paper we describe our system and experiments performed for both the automatic search task and the event detection task in TRECVid 2008. For the automatic search task for 2008 we submitted 3 runs utilizing only visual retrieval experts, continuing our previous work in examining techniques for query-time weight generation for data fusion and determining what we can get from global visual-only experts. For the event detection task we submitted results for 5 required events (ElevatorNoEntry, OpposingFlow, PeopleMeet, Embrace and PersonRuns) and 1 optional event (DoorOpenClose).

    Towards more effective visualisations in climate services: good practices and recommendations

    Visualisations are often the entry point to information that supports stakeholders' decision- and policy-making processes. Visual displays can employ static, dynamic or interactive formats as well as various types of representations and visual encodings, which differently affect the attention, recognition and working memory of users. Despite being well suited for expert audiences, current climate data visualisations need to be further improved to make communication of climate information more inclusive for broader audiences, including people with disabilities. However, the lack of evidence-based guidelines and tools makes the creation of accessible visualisations challenging, potentially leading to misunderstanding and misuse of climate information by users. Taking stock of visualisation challenges identified in a workshop by climate service providers, we review good practices commonly applied by other visualisation-related disciplines, strongly based on users' needs, that could be applied to the climate services context. We show how lessons learned in the fields of user experience, data visualisation, graphic design and psychology yield useful recommendations for the development of more effective climate service visualisations. These include applying a user-centred design approach, using interaction in a suitable way in visualisations, paying attention to information architecture, and selecting the right type of representation and visual encoding. The recommendations proposed here can help climate service providers reduce users' cognitive load and improve their overall experience when using a service. These recommendations can be useful for the development of the next generation of climate services, increasing their usability while ensuring that their visual components are inclusive and do not leave anyone behind. The research leading to these results received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements no. 689029 (Climateurope), 776787 (S2S4E), 776467 (MED-GOLD) and 869565 (VitiGEOSS).

    Optimising visual solutions for complex strategic scenarios : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Psychology at Massey University, Wellington, New Zealand

    Attempts to pre-emptively improve post-disaster outcomes need to reflect an improved understanding of the cognitive adaptations made by collaborating researchers and practitioners. This research explored the use of visual logic models to enhance the quality of decisions made by these professionals. It examined the way visual representations serve to enhance these decisions, as part of cognitive adaptations to considering the complexity of relevant pre-disaster conditions constituting community resilience. It was proposed that a visual logic model display, using boxes and arrows to show linkages between activities and downstream objectives, could support effective, efficient and responsive approaches to relevant community resilience interventions carried out in a pre-disaster context. The first of the three phases comprising this thesis used Q-methodology to identify patterns of opinion concerning the building of a shared framework of pre-disaster community resilience indicators for this purpose. The three patterns identified helped to assess the needs for the applied research undertaken in phase two. The second phase entailed building an action-focused logic model to enhance associated collaborations between emergency management practitioners and researchers. An analysis of participant interviews determined that the process used to build this logic model served as a catalyst for research which could help improve community resilience interventions. The third phase used an experimental approach to different display formats produced during phase two to test whether a visual logic model display stimulated higher-quality decisions compared with a more conventional, text-based chart of key performance indicators. Results supported the use of similar methods for much larger-scale research to assess how information displays support emergency management decisions with wide-ranging, longer-term implications. Overall, results from these three phases indicate that certain logic model formats can help foster collaborative efforts to improve characteristics of community resilience against disasters. This appears to occur when a logic model forms an integrated component of efficient cognitive dynamics across a network of decision-making agents. This understanding of logic model function highlights clear opportunities for further research. It also represents a novel contribution to knowledge about using logic models to support emergency management decisions with complex, long-term implications.