A knowledge hub to enhance the learning processes of an industrial cluster
Industrial clusters have been defined as "networks of production of strongly interdependent firms (including specialised suppliers), knowledge-producing agents (universities, research institutes, engineering companies) and institutions (brokers, consultants), linked to each other in a value-adding production chain" (OECD Focus Group, 1999). The industrial cluster's distinctive mode of production is specialisation, based on a sophisticated division of labour that leads to interlinked activities and a need for cooperation, with the consequent emergence of communities of practice (CoPs). CoPs are here conceived as groups of people and/or organisations bound together by shared expertise and a propensity towards joint work (Wenger and Snyder, 1999). Cooperation needs closeness for just-in-time delivery, for communication and for the exchange of knowledge, especially in its tacit form. Indeed, the knowledge exchanges between the CoP's specialised actors, in geographical proximity, lead to spillovers and synergies. In the digital economy landscape, collaborative technologies such as shared repositories, chat rooms and videoconferences can, when appropriately used, have a positive impact on a CoP's exchange of codified knowledge. On the other hand, systems for individual profile management, e-learning platforms and intelligent agents can also trigger some socialisation mechanisms of tacit knowledge. In this perspective, we have set up a model of a Knowledge Hub (KH), driven by Information and Communication Technologies (ICT-driven), that enables the knowledge exchanges of a CoP.
In order to present the model, the paper is organised in the following logical steps:
- an overview of the most seminal and consolidated approaches to CoPs;
- a description of the ICT-driven KH model, conceived as a booster of the knowledge exchanges of a CoP, which adds to the economic benefits of geographical proximity the advantages of ICT-based organisational proximity;
- a discussion of some preliminary results obtained during the implementation of the model.
Astrophysicists and physicists as creators of ArXiv-based commenting resources for their research communities. An initial survey
This paper conveys the outcomes of what appears to be the first, though preliminary, overview of commenting platforms and related web 2.0 resources born within and for the astrophysical community (from 2004 to 2016). Experiences were added, mainly from the physics domain, for a total of 22 major items, including four epijournals, and four supplementary resources, thus casting some light on an unexpected richness and consonance of endeavours. These experiences rest almost entirely on the contents of the ArXiv database, which adds to its merits that of potentially setting the grounds for web 2.0 resources, and research behaviours, to be explored.
Most of the experiences retrieved are UK- and US-based, but the resulting picture is international, as various European countries, China and Australia have been actively involved.
Final remarks about the creation patterns and outcomes of these resources are outlined. The results complement previous studies, according to which the web 2.0 is presently of limited use for communication in astrophysics, and vouch for a greater-than-expected role of researchers in shaping their own professional communication tools. Collaterally, some aspects of ArXiv's recent pathway towards partial inclusion of web 2.0 features are touched upon. Further investigation is hoped for.
Comment: Journal article, 16 pages.
Analyzing the Language of Food on Social Media
We investigate the predictive power behind the language of food on social media. We collect a corpus of over three million food-related posts from Twitter and demonstrate that many latent population characteristics can be directly predicted from this data: overweight rate, diabetes rate, political leaning, and home geographical location of authors. For all tasks, our language-based models significantly outperform the majority-class baselines. Performance is further improved with more complex natural language processing, such as topic modeling. We analyze which textual features have the most predictive power for these datasets, providing insight into the connections between the language of food, geographic locale, and community characteristics. Lastly, we design and implement an online system for real-time query and visualization of the dataset. Visualization tools, such as geo-referenced heatmaps, semantics-preserving word clouds, and temporal histograms, allow us to discover more complex, global patterns mirrored in the language of food.
Comment: An extended abstract of this paper will appear in IEEE Big Data 201
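The core setup above, predicting a population characteristic from food-related text and comparing against a majority-class baseline, can be sketched as follows. This is a toy illustration with invented posts and a hand-picked keyword rule standing in for the authors' trained language models, not their actual data or features:

```python
from collections import Counter

# Hypothetical food-related posts labelled with a coarse regional trait
# ("high" vs "low" overweight rate); data and labels are invented.
posts = [
    ("fried chicken and sweet tea tonight", "high"),
    ("deep fried oreos at the fair", "high"),
    ("bacon cheeseburger and a large soda", "high"),
    ("kale smoothie after my morning run", "low"),
    ("quinoa salad with grilled salmon", "low"),
]

def majority_baseline(labels):
    """Accuracy of always predicting the most frequent class."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

def keyword_classifier(text, terms=("fried", "bacon", "soda")):
    """A crude unigram rule standing in for a learned text classifier."""
    return "high" if any(t in text for t in terms) else "low"

labels = [y for _, y in posts]
baseline_acc = majority_baseline(labels)
model_acc = sum(keyword_classifier(x) == y for x, y in posts) / len(posts)
```

Even this trivial lexical signal beats the majority baseline on the toy data, which is the shape of the comparison the paper reports at scale with far richer features such as topic models.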
The CoMeRe corpus for French: structuring and annotating heterogeneous CMC genres
Final version for the Special Issue of JLCL (Journal of Language Technology and Computational Linguistics, http://jlcl.org/): Building and Annotating Corpora of Computer-Mediated Discourse: Issues and Challenges at the Interface of Corpus and Computational Linguistics (ed. by Michael Beißwenger, Nelleke Oostdijk, Angelika Storrer & Henk van den Heuvel).
The CoMeRe project aims to build a kernel corpus of different Computer-Mediated Communication (CMC) genres with interactions in French as the main language, by assembling interactions stemming from networks such as the Internet or telecommunication, as well as mono- and multimodal, synchronous and asynchronous communications. Corpora are assembled using a standard, thanks to the TEI (Text Encoding Initiative) format. This implies extending, through a European endeavour, the TEI model of text in order to encompass the richest and most complex CMC genres. This paper presents the Interaction Space model. We explain how this model has been encoded within the TEI corpus header and body. The model is then instantiated through the first four corpora we have processed: three corpora where interactions occurred in single-modality environments (text chat or SMS systems) and a fourth corpus where text chat, email and forum modalities were used simultaneously. The CoMeRe project has two main research perspectives: discourse analysis, only alluded to in this paper, and the linguistic study of idiolects occurring in different CMC genres. As NLP algorithms are an indispensable prerequisite for such research, we present our motivations for applying an automatic annotation process to the CoMeRe corpora. Our wish to guarantee generic annotations meant that we did not consider any processing beyond morphosyntactic labelling, but prioritised the automatic annotation of any freely variant elements within the corpora.
We then turn to decisions made concerning which annotations to make for which units, and describe the processing pipeline for adding these. All CoMeRe corpora are verified thanks to a staged quality-control process, designed to allow corpora to move from one project phase to the next. Public release of the CoMeRe corpora is a short-term goal: corpora will be integrated into the forthcoming French National Reference Corpus and disseminated through the national linguistic infrastructure ORTOLANG. We therefore highlight issues and decisions made concerning the OpenData perspective.
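The idea of encoding a CMC interaction in a TEI-style header and body can be sketched in miniature. The element names below echo TEI conventions (teiHeader, post) but form a deliberately simplified illustration, not the actual TEI-CMC schema or the CoMeRe pipeline:

```python
import xml.etree.ElementTree as ET

def encode_interaction(corpus_id, turns):
    """Build a minimal TEI-like document for a list of chat turns.

    Each turn is a (who, when, words) triple; the structure is a
    simplified sketch, not the real CoMeRe/TEI-CMC encoding.
    """
    tei = ET.Element("TEI")
    header = ET.SubElement(tei, "teiHeader")
    title = ET.SubElement(header, "title")
    title.text = corpus_id
    body = ET.SubElement(tei, "body")
    for who, when, words in turns:
        # A chat turn becomes a <post> with speaker and timestamp attributes.
        post = ET.SubElement(body, "post", {"who": who, "when": when})
        post.text = words
    return tei

doc = encode_interaction(
    "chat-demo", [("u1", "2014-01-01T10:00:00", "salut !")]
)
xml_text = ET.tostring(doc, encoding="unicode")
```

Serialising interactions this way is what makes a staged quality-control process practical: each phase can validate the documents against a schema before they move to the next.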
Contextual Semantics for Radicalisation Detection on Twitter
Much research aims to detect online radical content mainly using radicalisation glossaries, i.e., by looking for terms and expressions associated with religion, war, offensive language, etc. However, such crude methods are highly inaccurate on content that uses radicalisation terminology simply to report on current events, to share harmless religious rhetoric, or even to counter extremism.
Language is complex, and the context in which particular terms are used should not be disregarded. In this paper, we propose an approach for building a representation of the semantic context of the terms that are linked to radicalised rhetoric. We use this approach to analyse over 114K tweets that contain radicalisation terms (around 17K posted by pro-ISIS users and 97K posted by "general" Twitter users).
We report on how the contextual information differs for the same radicalisation terms in the two datasets, which indicates that contextual semantics can help to better discriminate radical content from content that merely uses radical terminology. The classifiers we built to test this hypothesis outperform those that disregard contextual information.
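One simple way to realise "a representation of the semantic context of a term" is a bag of co-occurring words within a window, compared across corpora with cosine similarity. The sketch below uses two invented one-line corpora and is only an illustration of the idea, not the paper's actual model:

```python
from collections import Counter
from math import sqrt

def context_vector(term, corpus, window=2):
    """Bag of words co-occurring with `term` within +/- `window` tokens."""
    vec = Counter()
    for text in corpus:
        tokens = text.lower().split()
        for i, tok in enumerate(tokens):
            if tok == term:
                lo, hi = max(0, i - window), i + window + 1
                vec.update(t for t in tokens[lo:hi] if t != term)
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented examples: the same term in two very different contexts.
pro = ["the brothers wage jihad against the enemy"]
gen = ["scholars debate the meaning of jihad as inner struggle"]
sim = cosine(context_vector("jihad", pro), context_vector("jihad", gen))
```

The same surface term yields disjoint context vectors here, so the similarity is low; it is this divergence of contexts between datasets that a contextual-semantics classifier can exploit where a glossary lookup cannot.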
Social Media Multidimensional Analysis for Intelligent Health Surveillance
Background: Recent work in social network analysis has shown the usefulness of analysing and predicting outcomes from user-generated data in the context of Public Health Surveillance (PHS). Most of the proposals have focused on dealing with static datasets gathered from social networks, which are processed and mined off-line. However, little work has been done on providing a general framework to analyse the highly dynamic data of social networks from a multidimensional perspective. In this paper, we claim that such a framework is crucial for including social data in PHS systems.
Methods: We propose a dynamic multidimensional approach to deal with social data streams. In this approach, dynamic dimensions are continuously updated by applying unsupervised text mining methods. More specifically, we analyse the semantics and temporal patterns in posts for identifying relevant events, topics and users. We also define quality metrics to detect relevant user profiles. In this way, the incoming data can be further filtered to cope with the goals of PHS systems.
Results: We have evaluated our approach over a long-term Twitter stream. We show how the proposed quality metrics allow us to filter out the users that are out-of-domain as well as those with low quality in their messages. We also explain how specific user profiles can be identified through their descriptions. Finally, we illustrate how the proposed multidimensional model can be used to identify main events and topics, as well as to analyse their audience and impact.
Conclusions: The results show that the proposed dynamic multidimensional model is able to identify relevant events and topics and analyse them from different perspectives, which is especially useful for PHS systems.
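The filtering step described in the Methods, scoring users by a quality metric and dropping out-of-domain or low-quality accounts, can be sketched as follows. The metric, the health-term list, and the users are all invented for illustration; the paper's actual metrics are more elaborate:

```python
def user_quality(posts):
    """Fraction of a user's posts that are on-topic and non-trivial.

    An invented quality metric: a post counts as good if it has at
    least four words and mentions a domain (health) term.
    """
    if not posts:
        return 0.0
    health_terms = {"flu", "fever", "vaccine", "cough"}
    good = sum(
        1 for p in posts
        if len(p.split()) >= 4 and any(t in p.lower() for t in health_terms)
    )
    return good / len(posts)

# Hypothetical stream grouped by user: one on-topic account, one spammer.
stream = {
    "alice": ["feeling feverish with a bad cough today",
              "flu season is hitting our town hard"],
    "spambot": ["WIN NOW", "click here", "free money fast"],
}
kept = {u for u, ps in stream.items() if user_quality(ps) >= 0.5}
```

Applying such a per-user score before the multidimensional analysis keeps the dynamic dimensions from being polluted by accounts that contribute no surveillance signal.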
Supporting Story Synthesis: Bridging the Gap between Visual Analytics and Storytelling
Visual analytics usually deals with complex data and uses sophisticated algorithmic, visual, and interactive techniques. Findings of the analysis often need to be communicated to an audience that lacks visual analytics expertise. This requires analysis outcomes to be presented in simpler ways than are typically used in visual analytics systems. However, not only may analytical visualizations be too complex for the target audience, but so may the information that needs to be presented. Hence, there exists a gap on the path from obtaining analysis findings to communicating them, which involves two aspects: information complexity and display complexity. We propose a general framework where data analysis and result presentation are linked by story synthesis, in which the analyst creates and organizes story contents. Differently from previous research, where analytic findings are represented by stored display states, we treat findings as data constructs. In story synthesis, findings are selected, assembled, and arranged in views using meaningful layouts that take into account the structure of the information and the inherent properties of its components. We propose a workflow for applying the proposed framework in designing visual analytics systems and demonstrate the generality of the approach by applying it to two domains: social media and movement analysis.
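The distinction between a finding as a stored display state and a finding as a data construct can be made concrete with a small data structure. The field names below are illustrative guesses at what such a construct might carry (statement, temporal and spatial scope, evidence), not the authors' actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """A finding as a data construct, independent of any display state."""
    statement: str   # what the analyst observed
    time_span: tuple # temporal scope of the finding
    region: str      # spatial scope of the finding
    evidence: list = field(default_factory=list)  # refs to supporting data

@dataclass
class Story:
    """Findings selected and arranged by the analyst during synthesis."""
    title: str
    findings: list = field(default_factory=list)

    def add(self, finding):
        self.findings.append(finding)
        return self

story = Story("Evening commute patterns")
story.add(Finding("Traffic peaks between 17:00 and 18:00",
                  ("17:00", "18:00"), "downtown"))
```

Because each finding carries its own scope and evidence rather than a snapshot of a view, the presentation layer is free to lay findings out by time, by place, or by topic for the target audience.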