
    Understanding the bi-directional relationship between analytical processes and interactive visualization systems

    Interactive visualizations leverage the human visual and reasoning systems to increase the scale of information with which we can effectively work, thereby improving our ability to explore and analyze large amounts of data. Interactive visualizations are often designed with target domains in mind, such as analyzing unstructured textual information, which is a main thrust in this dissertation. Since each domain has its own existing procedures for analyzing data, a good start to a well-designed interactive visualization system is to understand the domain experts' workflow and analysis processes. This dissertation recasts the importance of understanding domain users' analysis processes and incorporating such understanding into the design of interactive visualization systems. To meet this aim, I first introduce considerations guiding the gathering of general and domain-specific analysis processes in text analytics. Two interactive visualization systems are designed by following these considerations. The first system is Parallel-Topics, a visual analytics system supporting analysis of large collections of documents by extracting semantically meaningful topics. Based on lessons learned from Parallel-Topics, this dissertation further presents a general visual text analysis framework, I-Si, to present meaningful topical summaries and temporal patterns, with the capability to handle large-scale textual information. Both systems have been evaluated by expert users and deemed successful in addressing domain analysis needs. The second contribution lies in preserving domain users' analysis process while using interactive visualizations. Our research suggests this preservation could serve multiple purposes. On the one hand, it could further improve the current system. On the other hand, users often need help in recalling and revisiting their complex and sometimes iterative analysis process with an interactive visualization system. This dissertation introduces multiple types of evidence available for capturing a user's analysis process within an interactive visualization and analyzes the cost/benefit ratios of the capturing methods. It concludes that tracking interaction sequences is the least intrusive and most feasible way to capture part of a user's analysis process. To validate this claim, a user study is presented to theoretically analyze the relationship between interactions and problem-solving processes. The results indicate that constraining the way a user interacts with a mathematical puzzle does have an effect on the problem-solving process. As later evidenced in an evaluative study, a fair amount of high-level analysis can be recovered through merely analyzing interaction logs.
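
    As a concrete illustration of the two threads above, extracting semantically meaningful topics and tracking interaction sequences, the following minimal Python sketch uses scikit-learn's LDA and a simple time-stamped event log. The library choice and all identifiers are illustrative assumptions; the abstract does not specify Parallel-Topics' actual implementation.

```python
# Minimal sketch (assumption: scikit-learn is available); not the
# dissertation's actual pipeline, just the general idea it describes.
import time

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "interactive visualization of large text corpora",
    "topic models summarize large document collections",
    "users explore temporal patterns in extracted topics",
]

# Bag-of-words representation of the document collection.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(documents)

# Extract a small number of topics from the collection.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the top words of each topic as a rough topical summary.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {top}")

# Tracking interaction sequences: append time-stamped events so that part of
# the user's analysis process can later be recovered from the log alone.
interaction_log = []

def log_interaction(action, target):
    interaction_log.append({"time": time.time(), "action": action, "target": target})

log_interaction("select_topic", 1)
log_interaction("filter_time_range", "2010-2012")
```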

    Metadata enrichment for digital heritage: users as co-creators

    This paper espouses the concept of metadata enrichment through an expert- and user-focused approach to metadata creation and management. To this end, it is argued that the Web 2.0 paradigm enables users to be proactive metadata creators. As Shirky (2008, p. 47) argues, Web 2.0's social tools enable “action by loosely structured groups, operating without managerial direction and outside the profit motive”. Lagoze (2010, p. 37) advises that “the participatory nature of Web 2.0 should not be dismissed as just a popular phenomenon [or fad]”. Carletti (2016) proposes a participatory digital cultural heritage approach in which Web 2.0 methods such as crowdsourcing can be used to enrich digital cultural objects, citing “heritage crowdsourcing, community-centred projects or other forms of public participation”. On the other hand, the new collaborative approaches of Web 2.0 neither negate nor replace contemporary standards-based metadata approaches. Hence, this paper proposes a mixed metadata approach in which user-created metadata augments expert-created metadata and vice versa. The metadata creation process is no longer the sole prerogative of the metadata expert; the Web 2.0 collaborative environment now allows users to participate in both adding and re-using metadata. Expert-created (standards-based, top-down) and user-generated (socially constructed, bottom-up) approaches to metadata are complementary rather than mutually exclusive, although the two are often, and incorrectly, treated as a dichotomy (Gruber, 2007; Wright, 2007). This paper further emphasises the importance of enriching digital information objects with descriptions pertaining to the about-ness of those objects. Such richness and diversity of description, it is argued, can chiefly be achieved by involving users in the metadata creation process. The paper presents the paradigm of metadata enriching and metadata filtering for the cultural heritage domain. Metadata enriching states that a priori metadata, instantiated and granularly structured by metadata experts, is continually enriched through socially constructed (post-hoc) metadata, whereby users are proactively engaged in co-creating metadata. The principle also states that enriched metadata is contextually and semantically linked and openly accessible. Metadata filtering, in turn, states that metadata resulting from the enriching principle should be displayed for users in line with their needs and convenience. In both enriching and filtering, users should be considered prosumers, resulting in what is called collective metadata intelligence.
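
    The enriching and filtering principles described above can be read as a simple data flow: expert records are merged with user contributions and then projected down to what a given audience needs. The sketch below illustrates this in Python; the field names and structures are hypothetical assumptions, not a schema proposed by the paper.

```python
# Minimal sketch of the mixed-metadata idea: a priori expert metadata is
# enriched with socially constructed user contributions, then filtered per
# user needs. All structures and field names are illustrative assumptions.
expert_record = {            # standards-based, top-down (expert-created)
    "title": "Bronze ritual vessel",
    "creator": "Unknown",
    "subject": ["ritual", "bronze age"],
}

user_contributions = [       # socially constructed, bottom-up (post-hoc)
    {"user": "u42", "tags": ["cooking vessel", "three-legged"]},
    {"user": "u17", "tags": ["ding", "bronze age"]},
]

def enrich(record, contributions):
    """Merge user tags into the expert record without discarding either source."""
    enriched = dict(record)
    enriched["user_tags"] = sorted({t for c in contributions for t in c["tags"]})
    return enriched

def filter_view(record, fields):
    """Metadata filtering: show only the fields a given user group needs."""
    return {k: v for k, v in record.items() if k in fields}

record = enrich(expert_record, user_contributions)
print(filter_view(record, {"title", "user_tags"}))
```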

    Data Mining Algorithms for Internet Data: from Transport to Application Layer

    Nowadays we live in a data-driven world. Advances in data generation, collection, and storage technology have enabled organizations to gather data sets of massive size. Data mining is a discipline that blends traditional data analysis methods with sophisticated algorithms to handle the challenges posed by these new types of data sets. The Internet is a complex and dynamic system in which new protocols and applications arise at a constant pace. All these characteristics make the Internet a valuable and challenging data source and application domain for research, both at the transport layer, analyzing network traffic flows, and up at the application layer, focusing on the ever-growing next-generation web services: blogs, micro-blogs, online social networks, photo-sharing services, and many other applications (e.g., Twitter, Facebook, Flickr, etc.). In this thesis we focus on the study, design, and development of novel algorithms and frameworks to support large-scale data mining activities over huge and heterogeneous data volumes, with a particular focus on Internet data as the data source, targeting network traffic classification, online social network analysis, recommendation systems, cloud services, and Big Data.
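
    As a small, hedged example of the transport-layer side of this work, the sketch below trains a toy classifier on hand-made flow features; the features, labels, and choice of a random forest are assumptions for illustration only, not the algorithms developed in the thesis.

```python
# Minimal sketch of flow-level traffic classification (assumption:
# scikit-learn is available). Data and model choice are illustrative.
from sklearn.ensemble import RandomForestClassifier

# Each flow: [mean packet size (bytes), flow duration (s), packets per second]
flows = [
    [1400.0, 12.0, 80.0],   # bulk transfer
    [ 120.0,  0.4, 15.0],   # web request
    [ 200.0, 60.0,  2.0],   # chat / messaging
    [1350.0,  9.0, 95.0],   # bulk transfer
]
labels = ["bulk", "web", "chat", "bulk"]

clf = RandomForestClassifier(n_estimators=10, random_state=0)
clf.fit(flows, labels)

# Classify an unseen flow based on its features.
print(clf.predict([[1300.0, 10.0, 90.0]]))
```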

    Data mining by means of generalized patterns

    The thesis is mainly focused on the study and application of pattern discovery algorithms that aggregate database knowledge to discover and exploit valuable correlations, hidden in the analyzed data, at different abstraction levels. The aim of the research effort described in this work is twofold: the discovery of associations, in the form of generalized patterns, from large data collections, and the inference of semantic models, i.e., taxonomies and ontologies, suitable for driving the mining process.
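
    A minimal sketch of what mining generalized patterns can look like is shown below: transaction items are propagated up a toy taxonomy so that frequent itemsets can emerge at higher abstraction levels. The taxonomy, transactions, and support threshold are illustrative assumptions, not the thesis' actual algorithms.

```python
# Minimal sketch of generalized pattern mining over a small taxonomy.
from collections import Counter
from itertools import combinations

taxonomy = {                       # child -> parent
    "espresso": "coffee", "latte": "coffee",
    "coffee": "beverage", "tea": "beverage",
}

def generalize(item):
    """Return the item plus all of its ancestors in the taxonomy."""
    items = {item}
    while item in taxonomy:
        item = taxonomy[item]
        items.add(item)
    return items

transactions = [{"espresso", "tea"}, {"latte"}, {"espresso"}]
extended = [set().union(*(generalize(i) for i in t)) for t in transactions]

# Count support of 1- and 2-itemsets over the extended transactions.
support = Counter()
for t in extended:
    for size in (1, 2):
        for itemset in combinations(sorted(t), size):
            support[itemset] += 1

min_support = 2
print([(s, c) for s, c in support.items() if c >= min_support])
# e.g. ('coffee',) and ('beverage',) reach support 3 even though no raw item
# appears in all three transactions.
```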

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of CHORUS and establishing the existing landscape in multimedia search engines, we have identified and analyzed gaps within the European research effort during our second year. In this period we focused on three directions, namely technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of requirements for technological challenges. Both studies have been carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    Listening to Museums: Sounds as objects of culture and curatorial care

    This practice-based project begins with an exploration of the acoustic environments of a variety of contemporary museums via field recording and sound mapping. Through a critical listening practice, this mapping leads to a central question: can sounds act as objects analogous to physical objects within museum practice – and if so, what is at stake in creating a museum that only exhibits sounds? Given the interest in collecting and protecting intangible culture within contemporary museum practice, as well as the evolving anthropological view of sound as an object of human culture, this project suggests that a re-definition of Pierre Schaeffer’s oft-debated term ‘sound object’ within the context of museum practice may be of use in re-imagining how sounds might function within traditionally object-based museum exhibition practices. Furthermore, the longstanding notion of ‘soundmarks’ – sounds that recur within local communities and help to define their unique cultural identity – is explored as a means by which post-industrial sounds, such as traffic signals for the visually impaired and those made by public transport, may be considered deserving of protection by museum practitioners. These ideas are then tested via creative practice by establishing an experimental curatorial project, the Museum of Portable Sound (MOPS), an institution dedicated to collecting, preserving, and exhibiting sounds as objects of culture and human agency. MOPS displays sounds, collected via the author’s field recording practice, as museological objects that, like the physical objects described by Stephen Greenblatt, ‘resonate’ with the outside world – but also with each other, via a careful selection and sequencing that calls back to the mix-tape culture of the late twentieth century. The unconventional form of MOPS – digital audio files on a single mobile phone accompanied by a museum ‘map’ and Gallery Guide – emphasizes social connections between the virtual and the physical. The project presents a viable format via which sounds may be displayed as culture while also interrogating what a museum can be in the twenty-first century.

    Text Analytics: the convergence of Big Data and Artificial Intelligence

    The analysis of the text content in emails, blogs, tweets, forums, and other forms of textual communication constitutes what we call text analytics. Text analytics is applicable to most industries: it can help analyze millions of emails; it can analyze customers’ comments and questions in forums; and it can support sentiment analysis, measuring positive or negative perceptions of a company, brand, or product. Text analytics has also been called text mining, and it is a subcategory of the Natural Language Processing (NLP) field, one of the founding branches of Artificial Intelligence dating back to the 1950s, when an interest in understanding text originally developed. Currently, text analytics is often considered the next step in Big Data analysis. Text analytics has a number of subdivisions: Information Extraction, Named Entity Recognition, Semantic Web annotated domain representation, and many more. Several techniques are currently in use, and some, such as Machine Learning for semi-supervised enhancement of systems, have gained a lot of attention, but they also present a number of limitations which mean they are not always the only or the best choice. We conclude with current and near-future applications of text analytics.
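
    As a hedged illustration of the sentiment-analysis use case mentioned above, the sketch below scores comments against tiny hand-made word lists; real text analytics pipelines are far more sophisticated, and the lexicon and scoring rule here are assumptions for illustration only.

```python
# Minimal lexicon-based sentiment sketch; the word lists and scoring rule
# are illustrative, not a production text-analytics pipeline.
POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"broken", "slow", "hate", "refund"}

def sentiment(text):
    """Score a comment: >0 positive, <0 negative, 0 neutral."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    "love the new interface, excellent support",
    "the update is slow and the app feels broken",
]
for comment in comments:
    print(sentiment(comment), comment)
```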

    Times of Change in the Demoscene: A Creative Community and Its Relationship with Technology

    The demoscene is a form of digital culture that emerged in the mid-1980s after home computers started becoming commonplace. Throughout its approximately thirty years of existence it has changed in a number of ways, due to both external and internal factors. The most evident external driver has been the considerable technological development of the period, which has forced the community to react in its own particular ways. A modest body of research on the demoscene already exists, even though several topics still remain unstudied. In this thesis I approach the scene from three different angles: community, artefacts, and its relationship with technology. The most important frames of reference are subcultural studies, the history of computing, game studies, the domestication of technology, and software studies. The research material is equally diverse, consisting of texts, creative works, and interviews. The study paints an uncommon picture of the scene as a meritocracy that actively and even aggressively debates technological change. Technical prowess does not imply embracing new gadgets uncritically, in particular because their perceived ease stands in stark contrast to the shared ethic that emphasises individuals’ skill. Practices, interests, and relationships to other communities – gamers in particular – are still subject to constant change and, therefore, we should not consider the demoscene a frozen monoculture, but rather a group of phenomena linked to different periods of time, locations, and computing platforms.