Prospective, open, multi-centre phase I/II trial to assess safety and efficacy of neoadjuvant radiochemotherapy with docetaxel and oxaliplatin in patients with adenocarcinoma of the oesophagogastric junction
Background: This phase I/II trial assessed the dose-limiting toxicities (DLT) and the maximum tolerated dose (MTD) of neoadjuvant radiochemotherapy (RCT) with docetaxel and oxaliplatin in patients with locally advanced adenocarcinoma of the oesophagogastric junction.
Methods: Patients received neoadjuvant radiotherapy (50.4 Gy) together with weekly docetaxel (20 mg/m² at dose levels (DL) 1 and 2, 25 mg/m² at DL 3) and oxaliplatin (40 mg/m² at DL 1, 50 mg/m² at DL 2 and 3) over 5 weeks. The primary endpoints were the DLT and the MTD of the RCT regimen. Secondary endpoints included overall response rate (ORR) and progression-free survival (PFS).
Results: A total of 24 patients were included: four were treated at DL 1, 13 at DL 2 and seven at DL 3. The MTD of the RCT regimen was DL 2, i.e. docetaxel 20 mg/m² and oxaliplatin 50 mg/m². An objective response (complete or partial response, CR/PR) was observed in 32% (7/22) of evaluable patients. Eighteen patients (75%) underwent surgery after RCT. The median PFS for all patients (n = 24) was 6.5 months and the median overall survival was 16.3 months; patients treated at DL 2 had a median overall survival of 29.5 months.
Conclusion: Neoadjuvant RCT with docetaxel 20 mg/m² and oxaliplatin 50 mg/m² was effective and showed a good toxicity profile. Future studies should consider adding targeted therapies to current neoadjuvant regimens to further improve the outcome of patients with advanced cancer of the oesophagogastric junction.
Trial Registration: NCT0037498
Knowledge Capture from Multiple Online Sources with the Extensible Web Retrieval Toolkit (eWRT)
Knowledge capture approaches in the age of massive Web data require robust and scalable mechanisms to acquire, consolidate and pre-process large amounts of heterogeneous data, both unstructured and structured. This paper addresses this requirement by introducing the Extensible Web Retrieval Toolkit (eWRT), a modular Python API for retrieving social data from Web sources such as Delicious, Flickr, Yahoo! and Wikipedia. eWRT has been released as an open-source library under the GNU GPLv3. It includes classes for caching and data management, and provides low-level text processing capabilities including language detection, phonetic string similarity measures, and string normalization.
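The kind of low-level functionality listed above can be illustrated with a few minimal, self-contained Python helpers: a disk cache, Unicode string normalization, and a simplified Soundex-style phonetic code. This is a sketch of the general techniques only; the names and signatures below are illustrative assumptions and do not reflect eWRT's actual API.

```python
# Illustrative sketch of caching, normalization and phonetic similarity in
# plain Python. NOTE: generic techniques only, not eWRT's actual API.
import hashlib
import json
import os
import re
import unicodedata


def disk_cache(cache_dir="./cache"):
    """Cache a function's JSON-serializable results on disk, keyed by its arguments."""
    os.makedirs(cache_dir, exist_ok=True)

    def decorator(fn):
        def wrapper(*args, **kwargs):
            key = hashlib.sha1(
                repr((fn.__name__, args, sorted(kwargs.items()))).encode()
            ).hexdigest()
            path = os.path.join(cache_dir, key + ".json")
            if os.path.exists(path):
                with open(path) as f:
                    return json.load(f)
            result = fn(*args, **kwargs)
            with open(path, "w") as f:
                json.dump(result, f)
            return result
        return wrapper
    return decorator


def normalize(text):
    """Lowercase, strip accents and collapse whitespace."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"\s+", " ", text).strip().lower()


def soundex(word):
    """Simplified Soundex-style phonetic code (e.g. 'Robert' -> 'R163')."""
    codes = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3", "l": "4", "mn": "5", "r": "6"}
    word = re.sub(r"[^a-z]", "", word.lower())
    if not word:
        return ""
    digits = [next((d for group, d in codes.items() if c in group), "") for c in word]
    out, prev = word[0].upper(), digits[0]
    for d in digits[1:]:
        if d and d != prev:
            out += d
        prev = d
    return (out + "000")[:4]


print(normalize("  Café  du   Monde "))        # -> 'cafe du monde'
print(soundex("Robert") == soundex("Rupert"))  # -> True (both 'R163')
```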
TextSweeper - A System for Content Extraction and Overview Page Detection
Web pages not only contain main content, but also other elements such as navigation panels, advertisements and links to related documents. Furthermore, overview pages (summarization pages and entry points) duplicate and aggregate parts of articles and thereby create redundancies. Both the noise elements within Web pages and overview pages degrade the performance of downstream processes such as Web-based Information Retrieval. Content extraction is the task of identifying and extracting the main content of a Web page. In this research-in-progress paper we present an approach that not only identifies and extracts the main content, but also detects overview pages so that they can be skipped. The content extraction part of the system extends existing text-to-tag ratio methods; overview page detection is accomplished with a net text length heuristic. Preliminary results and an ad-hoc evaluation indicate promising system performance; a formal evaluation and comparison with other state-of-the-art approaches is part of future work.
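A minimal sketch of the two underlying ideas, assuming a simple line-based text-to-tag ratio and a link-density proxy for the net text length heuristic (the paper's exact formulations are not given in this abstract):

```python
# Toy illustration of the two ideas named above; not TextSweeper's code.
import re

TAG = re.compile(r"<[^>]+>")


def text_to_tag_ratios(html):
    """Per line: characters of visible text divided by the number of tags.
    Main-content lines score high; navigation and ads score low."""
    ratios = []
    for line in html.splitlines():
        tags = len(TAG.findall(line))
        text = TAG.sub("", line).strip()
        ratios.append(len(text) / max(1, tags))
    return ratios


def extract_content(html, factor=2.0):
    """Keep lines whose ratio exceeds `factor` times the document mean."""
    ratios = text_to_tag_ratios(html)
    mean = sum(ratios) / max(1, len(ratios))
    lines = html.splitlines()
    return "\n".join(TAG.sub("", lines[i]).strip()
                     for i, r in enumerate(ratios) if r > factor * mean)


def net_text_share(html):
    """Share of text NOT inside <a> tags, a hedged proxy for the net text
    length heuristic: overview pages, dominated by link lists and teasers,
    yield a low value and can be skipped."""
    linked = "".join(re.findall(r"<a\b[^>]*>(.*?)</a>", html, re.S | re.I))
    all_text = TAG.sub("", html)
    net = len(all_text) - len(TAG.sub("", linked))
    return net / max(1, len(all_text))
```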
Media Watch on Climate Change – Visual Analytics for Aggregating and Managing Environmental Knowledge from Online Sources
This paper presents the Media Watch on Climate Change, a public Web portal that captures and aggregates large archives of digital content from multiple stakeholder groups. Each week it assesses the domain-specific relevance of millions of documents and user comments from news media, blogs, Web 2.0 platforms such as Facebook, Twitter and YouTube, the Web sites of companies and NGOs, and a range of other sources. An interactive dashboard with trend charts and complex map projections not only shows how often and where environmental information is published, but also provides a real-time account of concepts that stakeholders associate with climate change. Positive or negative sentiment is computed automatically, which not only sheds light on the impact of education and public outreach campaigns that target environmental literacy, but also helps to gain a better understanding of how others perceive climate-related issues.
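As a minimal illustration of what automatic sentiment computation can look like, a lexicon-based scorer fits in a few lines of Python; the portal's actual method is more sophisticated, and the word lists below are arbitrary examples:

```python
# Minimal lexicon-based sentiment sketch; NOT the portal's actual method.
POSITIVE = {"progress", "success", "improve", "protect", "sustainable"}
NEGATIVE = {"crisis", "risk", "damage", "pollution", "failure"}


def sentiment(text):
    """Return a score in [-1, 1]: (positive hits - negative hits) / all hits."""
    words = (w.strip(".,;:!?") for w in text.lower().split())
    hits = [1 if w in POSITIVE else -1 for w in words if w in POSITIVE | NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0


print(sentiment("Sustainable progress despite pollution risk"))  # -> 0.0
```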
From Web Intelligence to Knowledge Co-Creation – A Platform to Analyze and Support Stakeholder Communication
Organizations require tools to assess their online reputation as well as the impact of their marketing and public outreach activities. The Media Watch on Climate Change is a Web intelligence and online collaboration platform that addresses this requirement. It aggregates large archives of digital content from multiple stakeholder groups and enables the co-creation and visualization of evolving knowledge archives. This paper introduces the base platform and a context-aware document editor as an add-on that supports concurrent authoring by multiple users. While documents are being edited, semantic methods analyze them on the fly to recommend related content. Positive or negative sentiment is computed automatically to gain a better understanding of third-party perceptions. The editor is part of an interactive dashboard that uses trend charts and map projections to show how often and where relevant information is published, and to provide a real-time account of concepts that stakeholders associate with a topic.
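One plausible way to recommend related content while a document is being edited is sketched below, assuming a plain TF-IDF bag-of-words model over a small archive; the platform's actual semantic methods are not detailed in this abstract.

```python
# Hedged sketch of on-the-fly related-content recommendation via TF-IDF
# cosine similarity; an illustrative stand-in, not the platform's method.
import math
import re
from collections import Counter


def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())


def tfidf_vectors(docs):
    """Return one {term: weight} vector per document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(tokenize(doc)))
    return [{t: c * math.log(n / df[t]) for t, c in Counter(tokenize(doc)).items()}
            for doc in docs]


def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def recommend(draft, archive, k=3):
    """Rank archive documents by similarity to the draft being edited."""
    vecs = tfidf_vectors(archive + [draft])
    draft_vec, doc_vecs = vecs[-1], vecs[:-1]
    ranked = sorted(range(len(archive)),
                    key=lambda i: cosine(draft_vec, doc_vecs[i]), reverse=True)
    return [archive[i] for i in ranked[:k]]
```

Re-running `recommend` on each edit keeps the suggestions current as the draft evolves, which is the essence of the concurrent, context-aware authoring described above.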
Extraction and Interactive Exploration of Knowledge from Aggregated News and Social Media Content
The webLyzard media monitoring and Web intelligence platform (www.webLyzard.com) presented in this paper is a flexible tool for assessing the positioning of an organization and the effectiveness of its communications. The platform aggregates large archives of digital content from multiple stakeholders. Each week it processes millions of documents and user comments from news media, blogs, Web 2.0 platforms such as Facebook, Twitter and YouTube, and the Web sites of companies and NGOs. An interactive dashboard with trend charts and complex map projections shows how often and where information is published. It also provides a real-time account of topics that stakeholders associate with an organization. Positive or negative sentiment is computed automatically, which reflects the impact of public relations and marketing campaigns.
Reconstruction of primary vertices at the ATLAS experiment in Run 1 proton–proton collisions at the LHC
This paper presents the method and performance of primary vertex reconstruction in proton–proton collision data recorded by the ATLAS experiment during Run 1 of the LHC. The studies presented focus on data taken during 2012 at a centre-of-mass energy of √s = 8 TeV. The performance has been measured as a function of the number of interactions per bunch crossing over a wide range, from one to seventy. The measurement of the position and size of the luminous region and its use as a constraint to improve the primary vertex resolution are discussed. A longitudinal vertex position resolution of about 30 μm is achieved for events with a high multiplicity of reconstructed tracks. The transverse position resolution is better than 20 μm and is dominated by the precision on the size of the luminous region. An analytical model is proposed to describe the primary vertex reconstruction efficiency as a function of the number of interactions per bunch crossing and of the longitudinal size of the luminous region. Agreement between the data and the predictions of this model is better than 3% up to seventy interactions per bunch crossing.
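The abstract does not reproduce the analytical model itself, but the mechanism it captures can be illustrated with a toy Monte Carlo: interactions drawn along a Gaussian luminous region are merged when they fall closer than the vertex resolution, so the per-interaction efficiency drops as pile-up grows or the luminous region shrinks. All parameter values below are illustrative assumptions, not ATLAS numbers.

```python
# Toy Monte Carlo (NOT the paper's analytical model): count how many of mu
# simultaneous interactions survive as separately resolved vertices when
# vertices closer than `resolution` in z are merged.
import random


def resolved_vertices(mu, sigma_z=45.0, resolution=0.5, trials=2000):
    """Average number of distinct vertices for `mu` interactions per crossing.

    Interactions are drawn from a Gaussian luminous region of width sigma_z
    (mm, illustrative); neighbours closer than `resolution` (mm) merge.
    """
    total = 0
    for _ in range(trials):
        zs = sorted(random.gauss(0.0, sigma_z) for _ in range(mu))
        count = 1
        for a, b in zip(zs, zs[1:]):
            if b - a > resolution:
                count += 1
        total += count
    return total / trials


# Per-interaction reconstruction efficiency falls as pile-up grows:
for mu in (1, 20, 70):
    print(mu, resolved_vertices(mu) / mu)
```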