2,568 research outputs found

    A knowledge hub to enhance the learning processes of an industrial cluster

    Industrial clusters have been defined as "networks of production of strongly interdependent firms (including specialised suppliers), knowledge-producing agents (universities, research institutes, engineering companies), and institutions (brokers, consultants), linked to each other in a value-adding production chain" (OECD Focus Group, 1999). The industrial cluster's distinctive mode of production is specialisation, based on a sophisticated division of labour that leads to interlinked activities and a need for cooperation, with the consequent emergence of communities of practice (CoPs). CoPs are here conceived as groups of people and/or organisations bound together by shared expertise and a propensity towards joint work (Wenger and Snyder, 1999). Cooperation needs closeness: for just-in-time delivery, for communication, and for the exchange of knowledge, especially in its tacit form. Indeed, the knowledge exchanges between the CoP's specialised actors, in geographical proximity, lead to spillovers and synergies. In the digital economy landscape, collaborative technologies such as shared repositories, chat rooms and videoconferencing can, when appropriately used, have a positive impact on a CoP's exchanges of codified knowledge. On the other hand, systems for managing individual profiles, e-learning platforms and intelligent agents can also trigger socialisation mechanisms for tacit knowledge. In this perspective, we have set up a model of a Knowledge Hub (KH), driven by Information and Communication Technologies (ICT-driven), that enables the knowledge exchanges of a CoP.
In order to present the model, the paper is organised in the following logical steps:
- an overview of the most seminal and consolidated approaches to CoPs;
- a description of the ICT-driven KH model, conceived as a booster of the knowledge exchanges of a CoP, which adds to the economic benefits of geographical proximity the advantages of ICT-based organisational proximity;
- a discussion of some preliminary results obtained during the implementation of the model.

    Astrophysicists and physicists as creators of ArXiv-based commenting resources for their research communities. An initial survey

    This paper conveys the outcomes of what appears to be the first, though initial, overview of commenting platforms and related web 2.0 resources born within and for the astrophysical community (from 2004 to 2016). Further experiences, mainly from the physics domain, were added, for a total of 22 major items, including four epijournals, and four supplementary resources, thus casting some light onto an unexpected richness and consonance of endeavours. These experiences rest almost entirely on the contents of the ArXiv database, which adds to its merits that of potentially setting the grounds for web 2.0 resources, and research behaviours, to be explored. Most of the experiences retrieved are UK- and US-based, but the resulting picture is international, as various European countries, China and Australia have been actively involved. Final remarks about the creation patterns and outcomes of these resources are outlined. The results complement previous studies according to which the web 2.0 is presently of limited use for communication in astrophysics, and vouch for a greater-than-expected role of researchers in shaping their own professional communication tools. Collaterally, some aspects of ArXiv's recent pathway towards partial inclusion of web 2.0 features are touched upon. Further investigation is warranted. (Comment: Journal article 16 page)

    Analyzing the Language of Food on Social Media

    We investigate the predictive power behind the language of food on social media. We collect a corpus of over three million food-related posts from Twitter and demonstrate that many latent population characteristics can be directly predicted from this data: overweight rate, diabetes rate, political leaning, and home geographical location of authors. For all tasks, our language-based models significantly outperform the majority-class baselines. Performance is further improved with more complex natural language processing, such as topic modeling. We analyze which textual features have the most predictive power for these datasets, providing insight into the connections between the language of food, geographic locale, and community characteristics. Lastly, we design and implement an online system for real-time querying and visualization of the dataset. Visualization tools, such as geo-referenced heatmaps, semantics-preserving word clouds and temporal histograms, allow us to discover more complex, global patterns mirrored in the language of food. (Comment: An extended abstract of this paper will appear in IEEE Big Data 201)
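The kind of language-based prediction this abstract describes can be illustrated with a minimal sketch: a toy unigram Naive Bayes classifier that predicts a community characteristic (here, a hypothetical home region) from food words in posts. The data, labels and model choice below are illustrative assumptions; the paper's actual models and features are more sophisticated.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a unigram Naive Bayes model from (tokens, label) pairs."""
    word_counts = defaultdict(Counter)  # per-label word frequencies
    label_counts = Counter()            # label priors
    vocab = set()
    for tokens, label in docs:
        label_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def predict(model, tokens):
    """Return the most probable label, using Laplace smoothing."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Toy food-related posts labelled with a (hypothetical) home region.
posts = [
    (["grits", "bbq"], "south"),
    (["sweet", "tea", "bbq"], "south"),
    (["bagel", "lox"], "northeast"),
    (["pizza", "bagel"], "northeast"),
]
model = train_nb(posts)
```

A majority-class baseline would always answer the most frequent label regardless of the text; even this tiny model beats it on held-out posts whose food words are region-specific, which is the intuition behind the paper's result.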

    The CoMeRe corpus for French: structuring and annotating heterogeneous CMC genres

    Final version for the Special Issue of JLCL (Journal for Language Technology and Computational Linguistics, http://jlcl.org/): BUILDING AND ANNOTATING CORPORA OF COMPUTER-MEDIATED DISCOURSE: Issues and Challenges at the Interface of Corpus and Computational Linguistics (ed. by Michael Beißwenger, Nelleke Oostdijk, Angelika Storrer & Henk van den Heuvel). The CoMeRe project aims to build a kernel corpus of different Computer-Mediated Communication (CMC) genres with interactions in French as the main language, by assembling interactions stemming from networks such as the Internet or telecommunication, covering mono- and multimodal, synchronous and asynchronous communications. Corpora are assembled using a standard, the TEI (Text Encoding Initiative) format. This implies extending, through a European endeavour, the TEI model of text in order to encompass the richest and most complex CMC genres. This paper presents the Interaction Space model. We explain how this model has been encoded within the TEI corpus header and body. The model is then instantiated through the first four corpora we have processed: three corpora where interactions occurred in single-modality environments (text chat or SMS systems), and a fourth corpus where text chat, email and forum modalities were used simultaneously. The CoMeRe project has two main research perspectives: Discourse Analysis, only alluded to in this paper, and the linguistic study of idiolects occurring in different CMC genres. As NLP algorithms are an indispensable prerequisite for such research, we present our motivations for applying an automatic annotation process to the CoMeRe corpora. Our wish to guarantee generic annotations meant we did not consider any processing beyond morphosyntactic labelling, but prioritized the automatic annotation of any freely variant elements within the corpora.
We then turn to the decisions made concerning which annotations to make for which units, and describe the processing pipeline for adding them. All CoMeRe corpora are verified through a staged quality-control process, designed to allow corpora to move from one project phase to the next. Public release of the CoMeRe corpora is a short-term goal: corpora will be integrated into the forthcoming French National Reference Corpus, and disseminated through the national linguistic infrastructure ORTOLANG. We therefore highlight issues and decisions made concerning the Open Data perspective.
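As a rough illustration of what TEI-style encoding of one CMC turn can look like, the sketch below builds a minimal chat-post element with Python's standard XML library. The element and attribute names (`post`, `who`, `when`) echo the spirit of the TEI extensions for CMC discussed in the paper, but they are illustrative assumptions, not the CoMeRe project's actual schema.

```python
import xml.etree.ElementTree as ET

def encode_chat_post(who, when, text):
    """Build a minimal TEI-style <post> element for one chat turn.

    Illustrative only: element and attribute names follow the general
    shape of TEI CMC proposals, not the normative CoMeRe encoding.
    """
    post = ET.Element("post", {"who": who, "when": when})
    p = ET.SubElement(post, "p")
    p.text = text
    return ET.tostring(post, encoding="unicode")

# One hypothetical chat turn, serialized to an XML string.
xml = encode_chat_post("#user42", "2013-05-01T14:03:00",
                       "salut tout le monde !")
```

Encoding each turn as a small element with speaker and timestamp attributes is what makes corpus-wide queries (all turns by one participant, all turns in a time window) straightforward downstream.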

    Social Media Multidimensional Analysis for Intelligent Health Surveillance

    Background: Recent work in social network analysis has shown the usefulness of analysing and predicting outcomes from user-generated data in the context of Public Health Surveillance (PHS). Most proposals have focused on static datasets gathered from social networks, which are processed and mined off-line. However, little work has been done on providing a general framework to analyse the highly dynamic data of social networks from a multidimensional perspective. In this paper, we claim that such a framework is crucial for including social data in PHS systems. Methods: We propose a dynamic multidimensional approach to deal with social data streams. In this approach, dynamic dimensions are continuously updated by applying unsupervised text-mining methods. More specifically, we analyse the semantics and temporal patterns in posts to identify relevant events, topics and users. We also define quality metrics to detect relevant user profiles. In this way, the incoming data can be further filtered to meet the goals of PHS systems. Results: We have evaluated our approach over a long-term Twitter stream. We show how the proposed quality metrics allow us to filter out users that are out-of-domain as well as those with low-quality messages. We also explain how specific user profiles can be identified through their descriptions. Finally, we illustrate how the proposed multidimensional model can be used to identify the main events and topics, as well as to analyse their audience and impact. Conclusions: The results show that the proposed dynamic multidimensional model is able to identify relevant events and topics and analyse them from different perspectives, which is especially useful for PHS systems.
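The idea of filtering a stream by user quality can be sketched as follows. The metric here (average post length combined with lexical diversity) is a hypothetical stand-in for the paper's actual quality metrics, chosen only to show how repetitive, low-quality accounts could be dropped before further analysis.

```python
def user_quality(posts):
    """Score a user in [0, 1]; hypothetical metric, not the paper's own.

    Combines average post length (capped) with lexical diversity, so
    accounts posting short, repetitive messages score low.
    """
    tokens = [t for p in posts for t in p.lower().split()]
    if not tokens:
        return 0.0
    avg_len = len(tokens) / len(posts)          # words per post
    diversity = len(set(tokens)) / len(tokens)  # type/token ratio
    return min(avg_len / 10.0, 1.0) * diversity

def filter_users(user_posts, threshold=0.3):
    """Keep only users whose quality score reaches the threshold."""
    return {u for u, posts in user_posts.items()
            if user_quality(posts) >= threshold}

# Toy stream: a repetitive bot versus an informative account.
stream = {
    "spam_bot": ["buy pills now"] * 10,
    "reporter": ["flu cases rising sharply in my city this week",
                 "local clinic says rest and fluids help recovery"],
}
kept = filter_users(stream)
```

In a streaming setting, such a score would be recomputed incrementally as each user's new posts arrive, which is what makes the dimensions of the model dynamic rather than fixed at load time.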
