Editorial
In this issue of Literacy and Numeracy Studies, Theres Bellander and Zoe Nikolaidou examine the online health literacy practices of parents whose child or unborn foetus has been diagnosed with a heart defect, and Julie Choi and Ulrike Najar report on their study of the authors' English language teaching of immigrant and refugee women in Australia.
Multimodal music information processing and retrieval: survey and future challenges
Towards improving performance in various music information processing tasks, recent studies exploit different modalities able to capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the applications they address. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.
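As a rough illustration of the fusion strategies surveyed here, the sketch below contrasts early (feature-level) and late (decision-level) fusion of two modalities, e.g. audio and lyrics. The feature shapes, class counts, and fusion weights are illustrative assumptions, not drawn from any particular system covered by the survey.

# Minimal sketch of early vs. late fusion for two modalities (illustrative only).
import numpy as np

def early_fusion(audio_feat: np.ndarray, lyrics_feat: np.ndarray) -> np.ndarray:
    """Concatenate per-modality feature vectors before a single classifier."""
    return np.concatenate([audio_feat, lyrics_feat], axis=-1)

def late_fusion(audio_probs: np.ndarray, lyrics_probs: np.ndarray,
                w_audio: float = 0.6, w_lyrics: float = 0.4) -> np.ndarray:
    """Weighted average of per-modality class posteriors (decision-level fusion)."""
    fused = w_audio * audio_probs + w_lyrics * lyrics_probs
    return fused / fused.sum(axis=-1, keepdims=True)

if __name__ == "__main__":
    # Toy genre posteriors from an audio model and a lyrics model (3 classes).
    audio_probs = np.array([0.7, 0.2, 0.1])
    lyrics_probs = np.array([0.4, 0.5, 0.1])
    print(late_fusion(audio_probs, lyrics_probs))  # fused class distribution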
Personalized Pancreatic Tumor Growth Prediction via Group Learning
Tumor growth prediction, a highly challenging task, has long been viewed as a mathematical modeling problem, where the tumor growth pattern is personalized based on imaging and clinical data of a target patient. Though mathematical models yield promising results, their prediction accuracy may be limited by the absence of population trend data and personalized clinical characteristics. In this paper, we propose a statistical group learning approach to predict the tumor growth pattern that incorporates both population trends and personalized data, in order to discover high-level features from multimodal imaging data. A deep convolutional neural network is developed to model the voxel-wise spatio-temporal tumor progression. The deep features are combined with the time intervals and clinical factors to feed a feature-selection process. Our predictive model is pretrained on a group data set and personalized on the target patient's data to estimate the future spatio-temporal progression of the patient's tumor. Multimodal imaging data at multiple time points are used in the learning, personalization and inference stages. Our method achieves a Dice similarity coefficient (DSC) of 86.8% ± 3.6% and a relative volume difference (RVD) of 7.9% ± 5.4% on a pancreatic tumor data set, outperforming the DSC of 84.4% ± 4.0% and RVD of 13.9% ± 9.8% obtained by a previous state-of-the-art model-based method.
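For reference, the sketch below shows how the two reported metrics, the Dice similarity coefficient (DSC) and the relative volume difference (RVD), can be computed on binary tumor masks. The absolute-value convention for RVD is an assumption; the paper's exact definition may differ.

# Sketch of DSC and RVD on binary segmentation masks (conventions assumed).
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def relative_volume_difference(pred: np.ndarray, gt: np.ndarray) -> float:
    """Absolute volume mismatch relative to the reference volume (assumed convention)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return abs(int(pred.sum()) - int(gt.sum())) / gt.sum()

if __name__ == "__main__":
    gt = np.zeros((10, 10, 10), dtype=bool)
    gt[2:7, 2:7, 2:7] = True    # toy ground-truth tumor volume
    pred = np.zeros_like(gt)
    pred[3:8, 2:7, 2:7] = True  # toy predicted volume, slightly shifted
    print(dice_coefficient(pred, gt), relative_volume_difference(pred, gt))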
Mobile Learning Revolution: Implications for Language Pedagogy
Mobile technologies, including cell phones and tablets, are a pervasive feature of everyday life with potential impact on teaching and learning. "Mobile pedagogy" may seem like a contradiction in terms, since mobile learning often takes place physically beyond the teacher's reach, outside the walls of the classroom. While pedagogy implies careful planning, mobility exposes learners to the unexpected. A thoughtful pedagogical response to this reality involves new conceptualizations of what is to be learned and new activity designs. This approach recognizes that learners may act in more self-determined ways beyond the classroom walls, where online interactions and mobile encounters influence their target language communication needs and interests. The chapter sets out a range of opportunities for out-of-class mobile language learning that give learners an active role and promote communication. It then considers the implications of these developments for language content and curricula and the evolving roles and competences of teachers.
CHORUS Deliverable 4.3: Report from CHORUS workshops on national initiatives and metadata
Minutes of the following Workshops:
• National Initiatives on Multimedia Content Description and Retrieval, Geneva, October 10th, 2007.
• Metadata in Audio-Visual/Multimedia production and archiving, Munich, IRT, 21st – 22nd November 2007.
Workshop in Geneva 10/10/2007
This highly successful workshop was organised in cooperation with the European Commission. The event brought together the technical, administrative and financial representatives of the various national initiatives that have recently been established in some European countries to support research and technical development in audio-visual content processing, indexing and searching for the next-generation Internet using semantic technologies, and which may lead to an internet-based knowledge infrastructure. The objective of the workshop was to provide a platform for mutual information and exchange between these initiatives, the European Commission and the participants. Top speakers were present from each of the national initiatives, and there was time for discussion with the audience and amongst the European national initiatives. The challenges, commonalities, difficulties, targeted/expected impact, success criteria, etc. were tackled. The workshop also addressed how these national initiatives could work together and benefit from each other.
Workshop in Munich 11/21-22/2007
Numerous EU and national research projects are working on the automatic or semi-automatic generation of descriptive and functional metadata derived from analysing audio-visual content. The owners of AV archives and production facilities are eagerly awaiting such methods, which would help them to better exploit their assets. Hand in hand with the digitization of analogue archives and the archiving of digital AV material, metadata should be generated at as high a semantic level as possible, preferably fully automatically. All users of metadata rely on a certain metadata model, and all AV/multimedia search engines, whether already developed or currently under development, have to maintain some compatibility or compliance with the metadata models in use. The purpose of this workshop was to draw attention to the specific problem of metadata models in the context of (semi-)automatic multimedia search.
Linking recorded data with emotive and adaptive computing in an eHealth environment
Telecare, and particularly lifestyle monitoring, currently relies on the ability to detect and respond to changes in individual behaviour using data derived from sensors around the home. This means that a significant aspect of behaviour, that of an individual's emotional state, is not accounted for in reaching a conclusion as to the form of response required. The linked concepts of emotive and adaptive computing offer an opportunity to include information about emotional state, and the paper considers how current developments in this area have the potential to be integrated within telecare and other areas of eHealth. In doing so, it looks at the development and current state of the art of both emotive and adaptive computing, including their conceptual background, and places them into an overall eHealth context for application and development.
TandemNet: Distilling Knowledge from Medical Images Using Diagnostic Reports as Optional Semantic References
In this paper, we introduce the semantic knowledge of medical images from their diagnostic reports to provide inspirational network training and an interpretable prediction mechanism with our proposed novel multimodal neural network, TandemNet. Inside TandemNet, a language model is used to represent the report text, which cooperates with the image model in a tandem scheme. We propose a novel dual-attention model that facilitates high-level interactions between visual and semantic information and effectively distills useful features for prediction. In the testing stage, TandemNet can make accurate image predictions with an optional report-text input. It also interprets its predictions by producing attention over informative image and text features, and by further generating diagnostic report paragraphs. On a dataset of pathological bladder cancer images and their diagnostic reports (BCIDR), extensive experiments demonstrate that our method effectively learns and integrates knowledge from both modalities and obtains significantly improved performance over the compared baselines.
Comment: MICCAI 2017 Oral
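As a loose illustration of the dual-attention idea described above, the sketch below lets visual region features and report-text token features attend over each other before a joint prediction. The layer sizes, the use of standard multi-head attention, and the mean-pooling scheme are assumptions made for illustration only, not the authors' exact TandemNet architecture.

# Rough dual-attention fusion sketch (assumed components, not the published model).
import torch
import torch.nn as nn

class DualAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, num_classes: int = 4):
        super().__init__()
        self.img_to_txt = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.txt_to_img = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, img_feats: torch.Tensor, txt_feats: torch.Tensor) -> torch.Tensor:
        # img_feats: (B, N_regions, dim) CNN region features
        # txt_feats: (B, N_tokens, dim) report-text token features
        img_ctx, _ = self.txt_to_img(img_feats, txt_feats, txt_feats)  # image queries text
        txt_ctx, _ = self.img_to_txt(txt_feats, img_feats, img_feats)  # text queries image
        pooled = torch.cat([img_ctx.mean(dim=1), txt_ctx.mean(dim=1)], dim=-1)
        return self.classifier(pooled)

if __name__ == "__main__":
    model = DualAttentionFusion()
    img = torch.randn(2, 49, 256)   # e.g. a 7x7 CNN feature map, flattened
    txt = torch.randn(2, 32, 256)   # toy token embeddings from a report
    print(model(img, txt).shape)    # torch.Size([2, 4])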