59 research outputs found

    Turku Centre for Computer Science – Annual Report 2013

    Due to a major reform of the organization and responsibilities of TUCS, its role, activities, and even structures were under reconsideration in 2013. The traditional pillar of collaboration at TUCS, doctoral training, was reorganized due to changes at both universities in line with the renewed national system for doctoral education. Computer Science and Engineering and Information Systems Science are now accompanied by Mathematics and Statistics in newly established doctoral programs at both the University of Turku and Åbo Akademi University. Moreover, both universities granted sufficient resources to their respective programmes for doctoral training in these fields, so that joint activities at TUCS can continue. The outcome of this reorganization has the potential to prove a success in terms of scientific profile as well as the quality and quantity of scientific and educational results. International activities, characteristic of TUCS since its inception, remain strong. TUCS' participation in European collaboration through the EIT ICT Labs Master's and Doctoral School is now more active than ever. The new double degree programs at MSc and PhD level between the University of Turku and Fudan University in Shanghai, P.R. China were successfully set up and are now running for their first year. The joint students will add to the already international atmosphere of the ICT House. The four new thematic research programmes set up according to the decision of the TUCS Board have now established themselves, and a number of events and other activities saw the light of day in 2013. The TUCS Distinguished Lecture Series gathered a large audience with its several prominent speakers. The development of these and other research centre activities continues, and new practices and structures will be initiated to support the tradition of close academic collaboration. The TUCS slogan, Where Academic Tradition Meets the Exciting Future, has proven true throughout these changes. Despite the dark clouds in the national and European economic sky, science and higher education in the field have managed to retain all the key ingredients for success. Indeed, the future of ICT and Mathematics in Turku seems exciting.

    Acta Cybernetica: Volume 21, Number 3.


    Real-time programming and the big ideas of computational literacy

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003. Includes bibliographical references (p. 115-121). Though notoriously difficult, real-time programming offers children a rich new set of applications, and the opportunity to engage bodily knowledge and experience more centrally in intellectual enterprises. Moreover, the seemingly specialized problems of real-time programming can be seen as keys to longstanding difficulties of programming in general. I report on a critical design inquiry into the nature and potential of real-time programming by children. A cyclical process of design, prototyping and testing of computational environments has led to two design innovations: a language in which declarative and procedural descriptions of computation are given equal status, and can subsume each other to arbitrary levels of nesting; and a "live text" environment, in which real-time display of, and intervention in, program execution are accomplished within the program text itself. Based on children's use of these tools, as well as comparative evidence from other media and domains, I argue that the coordination of discrete and continuous process should be considered a central Big Idea in programming and beyond. In addition, I offer the theoretical notion of the "steady frame" as a way to clarify the user interface requirements of real-time programming, and also to understand the role of programming in learning to construct dynamic models, theories, and representations. Implications for the role of programming in education and for the future of computational literacy are discussed. By Christopher Michael Hancock, Ph.D.
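
The coordination of discrete and continuous process that the abstract names as a Big Idea can be sketched in a few lines. The Python toy below is not the thesis's actual language or environment; it is a minimal, invented illustration that pairs a declaratively described value (continuously recomputed every frame) with a procedural, event-driven state change inside one real-time loop.

```python
import time

# Continuous description (declarative flavour): brightness is always a
# function of the current distance, recomputed on every frame.
def brightness(distance):
    return max(0.0, min(1.0, 1.0 - distance / 10.0))

# Discrete description (procedural flavour): an event changes state in a step.
def on_button_press(state):
    state["direction"] *= -1

def run(frames=50, dt=0.02):
    state = {"distance": 5.0, "direction": 1}
    for frame in range(frames):
        # Continuous process: state evolves a little on every tick...
        state["distance"] += 0.1 * state["direction"]
        # ...while discrete events may interleave at any frame.
        if frame == 25:  # stand-in for an external button-press event
            on_button_press(state)
        print(f"frame {frame:2d}  brightness={brightness(state['distance']):.2f}")
        time.sleep(dt)

if __name__ == "__main__":
    run()
```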

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. Seamless interaction between High Performance Computing and Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Interactive Mediascape Designed for Collaborative Creativity

    We are living in a world where "together alone" has become a staple of daily life: our interaction with our physical environment is diminishing due to constant interaction with the device we carry, our phone. Developing applications in virtual space has been a booming field, producing very rapid growth in the number of users and in the body of scientific data over the last 20 years; the same cannot be said of our physical world. There are far fewer innovations that promote technology-mediated human interaction in physical space. The scientific knowledge base has recently been filling up with studies of the unfavorable side effects on the human mind and body of a disconnect from one's physicality and environment, a disconnect resulting from a lifestyle filled with virtual-presence experiences. As network systems become more robust and powerful, and with today's IoT advancements, we have an opportunity to bring interactive technologies into the physical environment and bring technology-mediated human communication back into tangible space. Moreover, we can create experiments and study how to design safe and playful interactive environments that best promote human collaboration, and then bring such systems into museums, galleries, public spaces, schools, libraries, shopping malls, and so on. This thesis focuses on technology-mediated experience (TME) in physical space. Although interactive installations and immersive experiences are not a new topic, there has been no attempt to describe such experiences in a systematic way, to find the common threads, structures, variables and general approach to designing interactive multi-modal experiences in physical space. The first question of this research is how to approach the development of Interactive Mediascapes (IM), so as to establish a common ground of knowledge for expanding this currently segmented and under-researched sector of technological creative innovation. Grounding the research in a multidisciplinary approach and bringing together concepts from HCI, IoT, TUI, spatial design, dramaturgy and the notion of agency in user experience, I propose an Interactive Mediascape Design Theory, starting by outlining its Ecosystem: a) a semantic model and b) a taxonomy of elements and attributes. The second question is then addressed: how can an IM be designed to form an environment that encourages collaboration and fosters creativity? By integrating the proposed IM ecosystem with concepts of playful creativity and the 3-C collaboration model, among others, a roadmap for developing IM for collaborative creativity is outlined. These findings are presented alongside an analysis of related practical work.

    An ontological approach to information visualization.

    Visualization is one of the indispensable means of addressing the rapid explosion of data and information. Although a large collection of visualization techniques has been developed over the past three decades, the majority of ordinary users have little knowledge of these techniques. Despite the many interactive visualization tools available in the public domain or commercially, producing visualizations remains a skilled and time-consuming task. One approach to the cost-effective dissemination of visualization techniques is to use captured expert knowledge to help ordinary users generate visualizations automatically. In this work, we propose to use knowledge captured in ontologies to reduce the parameter space, providing a more effective automated solution to the dissemination of visualization techniques to ordinary users. As an example, we consider the visualization of music chart data and football statistics on the web, and aim to generate visualizations automatically from the data. The work has three main contributions. Visualisation as Mapping: we consider the visualization process as a mapping task and assess this approach from both a tree-based and a graph-based perspective. We discuss techniques for automatic mapping and present a general approach to Information Perceptualisation through mapping, which we call Information Realisation. VizThis, tree-centric mapping: we have built a tree-based mapping toolkit that provides a pragmatic solution for visualising any XML-based source data using either SVG or X3D (or potentially any other XML-based target format). The toolkit has data cleansing and data analysis features. It also allows automatic mapping through a type-constrained system (AutoMap). If the user wishes to alter mappings, the system warns them about specific problem areas so that they can be corrected immediately. SemViz, graph-centric mapping: we present an ontology-based pipeline to automatically map tabular data to geometrical data, and to select appropriate visualization tools, styles and parameters. The pipeline is based on three ontologies: a Domain Ontology (DO) captures knowledge about the subject domain being visualized; a Visual Representation Ontology (VRO) captures the specific representational capabilities of different visualization techniques (e.g. Tree Map); and a Semantic Bridge Ontology (SBO) captures expert knowledge about valuable mappings between domain and representation concepts. In this way, we obtain an ontology-mapping algorithm that can dynamically score and rank potential visualizations. We also present the results of a user study assessing the validity and effectiveness of the SemViz approach.
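
To make the three-ontology pipeline concrete, here is a minimal hypothetical sketch in Python of the kind of scoring and ranking the abstract describes. All ontology contents, weights and names below are invented for illustration and are not drawn from SemViz itself.

```python
# Domain Ontology (DO): the fields of the data set and their data types.
DOMAIN_ONTOLOGY = {
    "chart_position": "ordinal",
    "artist": "categorical",
    "weeks_on_chart": "quantitative",
}

# Visual Representation Ontology (VRO): what each technique can encode.
VISUAL_REPRESENTATION_ONTOLOGY = {
    "bar_chart": {"categorical", "quantitative"},
    "tree_map": {"hierarchical", "quantitative"},
    "line_chart": {"ordinal", "quantitative"},
}

# Semantic Bridge Ontology (SBO): expert weights for pairing a domain
# data type with a representational capability.
SEMANTIC_BRIDGE = {
    ("ordinal", "ordinal"): 2.0,
    ("categorical", "categorical"): 2.0,
    ("quantitative", "quantitative"): 1.0,
}

def score(capabilities, domain):
    """Sum the bridge weights over every (field type, capability) pair."""
    return sum(SEMANTIC_BRIDGE.get((field_type, cap), 0.0)
               for field_type in domain.values()
               for cap in capabilities)

def rank(domain):
    """Score every known technique and rank the candidate visualizations."""
    return sorted(((name, score(caps, domain))
                   for name, caps in VISUAL_REPRESENTATION_ONTOLOGY.items()),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for name, s in rank(DOMAIN_ONTOLOGY):
        print(f"{name}: {s}")
```

The point of the bridge layer is that the expert knowledge lives in data rather than code, so adding a technique or a mapping rule extends the ranking without changing the algorithm.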

    A framework for the analysis and evaluation of enterprise models

    Bibliography: leaves 264-288. The purpose of this study is the development and validation of a comprehensive framework for the analysis and evaluation of enterprise models. The study starts with an extensive literature review of modelling concepts and an overview of the various reference disciplines concerned with enterprise modelling. This overview is more extensive than usual in order to accommodate readers from different backgrounds. The proposed framework is based on the distinction between syntactic, semantic and pragmatic model aspects and is populated with evaluation criteria drawn from an extensive literature survey. In order to operationalize and empirically validate the framework, an exhaustive survey of enterprise models was conducted. From this survey, an XML database of more than twenty relatively large, publicly available enterprise models was constructed. A strong emphasis was placed on the interdisciplinary nature of this database, and models were drawn from ontology research, linguistics and analysis patterns as well as the traditional fields of data modelling, data warehousing and enterprise systems. The resultant database forms the test bed for the detailed framework-based analysis, and its public availability should constitute a useful contribution to the modelling research community. The bulk of the research is dedicated to implementing and validating specific analysis techniques to quantify the various model evaluation criteria of the framework. The aim for each of the analysis techniques is that it can, where possible, be automated and generalised to other modelling domains. The syntactic measures and analysis techniques originate largely from the disciplines of systems engineering, graph theory and computer science. Various metrics to measure model hierarchy, architecture and complexity are tested and discussed. Many are found not to be particularly useful or valid for enterprise models; hence some new measures are proposed to assist with model visualization, and an original "model signature" consisting of three key metrics is proposed. Perhaps the most significant contribution of the research lies in the development and validation of a significant number of semantic analysis techniques, drawing heavily on current developments in lexicography, linguistics and ontology research. Some novel and interesting techniques are proposed to measure, inter alia, domain coverage, model genericity, quality of documentation, perspicuity and model similarity. Model similarity in particular is explored in depth by means of various similarity and clustering algorithms, as well as ways to visualize the similarity between models. Finally, a number of pragmatic analysis techniques are applied to the models. These include face validity, degree of use, authority of the model author, availability, cost, flexibility, adaptability, model currency, maturity and degree of support. This analysis relies mostly on searching for and ranking certain specific information details, often involving a degree of subjective interpretation, although more specific quantitative procedures are suggested for some of the criteria. To aid future researchers, a separate chapter lists some promising analysis techniques that were investigated but found to be problematic from a methodological perspective.
More interestingly, this chapter also presents a strong conceptual case for applying the proposed framework and the analysis techniques associated with its various criteria to many other information systems research areas. The case is presented on the grounds of the underlying isomorphism between the various research areas, and illustrated by suggesting the application of the framework to evaluate web sites, algorithms, software applications, programming languages, system development methodologies and user interfaces.
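
As a concrete illustration of one of the semantic analysis techniques mentioned, the following minimal Python sketch measures model similarity as the Jaccard overlap of the concept names two models contain. The models, the names and the choice of metric are invented for illustration; the thesis's own similarity and clustering algorithms are far richer and draw on lexical resources.

```python
# Toy enterprise models represented as sets of concept (entity) names.
models = {
    "model_a": {"customer", "order", "invoice", "product"},
    "model_b": {"customer", "order", "shipment", "product"},
    "model_c": {"employee", "department", "payroll"},
}

def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of the concept sets."""
    return len(a & b) / len(a | b)

# Print the pairwise similarity of every pair of models.
names = sorted(models)
for i, m in enumerate(names):
    for n in names[i + 1:]:
        print(f"sim({m}, {n}) = {jaccard(models[m], models[n]):.2f}")
```

A pairwise matrix of this kind is also the natural input to the clustering and similarity-visualization steps the abstract alludes to.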

    3rd EGEE User Forum

    We have organized this book as a sequence of chapters, each associated with an application or technical theme and introduced by an overview of its contents and a summary of the main conclusions coming out of the Forum for that topic. The first chapter gathers all the plenary session keynote addresses, and following this there is a sequence of chapters covering the application-flavoured sessions. These are followed by chapters with the flavour of Computer Science and Grid Technology. The final chapter covers the large number of practical demonstrations and posters exhibited at the Forum. Much of the work presented has a direct link to specific areas of science, and so we have created a Science Index, presented below. In addition, at the end of this book, we provide a complete list of the institutes and countries involved in the User Forum.