Quantum Nature of Plasmon-Enhanced Raman Scattering
We report plasmon-enhanced Raman scattering in graphene coupled to a single
plasmonic hotspot measured as a function of laser energy. The enhancement
profiles of the G peak show strong enhancement (up to ) and narrow
resonances (30 meV) that are induced by the localized surface plasmon of a gold
nanodimer. We observe the evolution of defect-mode scattering in a defect-free
graphene lattice in resonance with the plasmon. We propose a quantum theory of
plasmon-enhanced Raman scattering, where the plasmon forms an integral part of
the excitation process. Quantum interferences between scattering channels
explain the experimentally observed resonance profiles, in particular, the
marked difference in enhancement factors for incoming and outgoing resonance
and the appearance of the defect-type modes.
Comment: Keywords: plasmon-enhanced Raman scattering, SERS, graphene, quantum interferences, microscopic theory of Raman scattering. Content: 22 pages including 5 figures + 11 pages supporting information
Semantic Representation of Physics Research Data
Improvements in web technologies and artificial intelligence enable novel, more data-driven research practices for scientists. However, scientific knowledge generated from data-intensive research practices is disseminated in unstructured formats, which hinders scholarly communication in various respects. The traditional document-based representation of scholarly information hampers the reusability of research contributions. To address this concern, we developed the Physics Ontology (PhySci) to represent physics-related scholarly data in a machine-interpretable format. PhySci facilitates the exploration, comparison, and organization of such data by representing it as knowledge graphs. It establishes a unique conceptualization to increase the visibility and accessibility of the digital content of physics publications. We present the iterative design principles by outlining a methodology for its development and applying three different evaluation approaches: data-driven and criteria-based evaluation, as well as ontology testing.
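The core idea of representing scholarly data as machine-interpretable triples can be sketched in a few lines. This is an illustrative toy in Python, not PhySci itself; the `physci:` class and property names below are invented for the example and do not reflect the ontology's actual vocabulary.

```python
# Minimal sketch of a triple-based (knowledge graph) representation of a
# physics research contribution. All identifiers are illustrative.
triples = [
    ("paper:123", "rdf:type", "physci:Publication"),
    ("paper:123", "physci:reportsQuantity", "q:1"),
    ("q:1", "physci:symbol", "Gamma"),
    ("q:1", "physci:value", "30"),
    ("q:1", "physci:unit", "meV"),
]

def objects(triples, subject, predicate):
    """Return all objects for a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Once the data is triple-based, structured queries become trivial,
# e.g. "in which unit is quantity q:1 reported?":
print(objects(triples, "q:1", "physci:unit"))  # ['meV']
```

In practice such triples would be serialized in a standard RDF format (e.g. Turtle) and queried with SPARQL; the tuple list above only illustrates why a graph of typed statements is easier to compare and reuse than free text.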
How to feed the squerall with RDF and other data nuts?
Advances in data management methods have resulted in a wide array of storage solutions with varying query capabilities and support for different data formats. Traditionally, heterogeneous data was transformed off-line into a single format and migrated to a single data management system before being uniformly queried. However, with the increasing number of heterogeneous data sources, many of which are dynamic, modern applications prefer to access the original, fresh data directly. Addressing this requirement, we designed and developed Squerall, a software framework that enables querying large, heterogeneous data in its original form on the fly, without prior data transformation. Squerall is built from the ground up with extensibility in mind, e.g., supporting additional data sources. Here, we explain Squerall's extensibility aspect and demonstrate step by step how to add support for RDF data, a new extension to the previously supported range of data sources.
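The extensibility pattern described above — wrapping each source behind a uniform interface so new formats plug in without touching the query layer — can be sketched as follows. This is a language-agnostic illustration in Python, not Squerall's actual API; the names `SourceConnector` and `load` are hypothetical.

```python
from abc import ABC, abstractmethod

# Hypothetical connector interface: every data source, whatever its
# native format, exposes its entities as uniform rows (dicts).
class SourceConnector(ABC):
    @abstractmethod
    def load(self, entity):
        """Return the rows of `entity` as a list of dicts."""

class CSVConnector(SourceConnector):
    """An already-supported tabular source (rows stored as-is)."""
    def __init__(self, tables):
        self.tables = tables
    def load(self, entity):
        return self.tables[entity]

class RDFConnector(SourceConnector):
    """The new extension: flatten triples into one row per subject."""
    def __init__(self, triples):
        self.triples = triples
    def load(self, entity):
        rows = {}
        for s, p, o in self.triples:
            rows.setdefault(s, {"id": s})[p] = o
        return list(rows.values())

rdf = RDFConnector([("b1", "title", "Dune"), ("b1", "year", "1965")])
print(rdf.load("Book"))  # [{'id': 'b1', 'title': 'Dune', 'year': '1965'}]
```

Because both connectors return the same row shape, a query engine joining a CSV source with an RDF source never needs to know which format the data originally had — which is the point of adding sources as extensions rather than via off-line transformation.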
TripleCheckMate: A Tool for Crowdsourcing the Quality Assessment of Linked Data
Linked Open Data (LOD) comprises an unprecedented volume of structured datasets on the Web. However, these datasets are of varying quality, ranging from extensively curated datasets to crowdsourced and even extracted data of relatively low quality. We present a methodology for assessing the quality of linked data resources, which comprises a manual and a semi-automatic process. In this paper we focus on the manual process, whose first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. The second phase comprises the evaluation of a large number of individual resources, according to the quality problem taxonomy, via crowdsourcing. This process is implemented by the tool TripleCheckMate, wherein a user assesses an individual resource and evaluates each fact for correctness. This paper focuses on describing the methodology, the quality taxonomy, and the tool's system architecture, user perspective, and extensibility.
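The crowdsourced evaluation step — several users judging each fact for correctness — naturally raises the question of how individual judgements are combined. A minimal sketch, assuming simple majority voting (an illustration only, not TripleCheckMate's actual aggregation logic):

```python
from collections import Counter

# Illustrative sketch: aggregate per-fact crowd judgements by majority
# vote, flagging ties as "unsure" for later expert review.
def aggregate(judgements):
    verdicts = {}
    for fact, votes in judgements.items():
        counts = Counter(votes).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            verdicts[fact] = "unsure"   # tie: no majority
        else:
            verdicts[fact] = counts[0][0]
    return verdicts

votes = {
    ("dbr:Berlin", "dbo:country", "dbr:Germany"): ["correct", "correct", "incorrect"],
    ("dbr:Berlin", "dbo:country", "dbr:France"): ["incorrect", "incorrect"],
}
print(aggregate(votes))
```

Each verdict could then be attached to the corresponding category of the quality problem taxonomy, turning isolated crowd clicks into dataset-level quality statistics.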
Combination of a fast white light interferometer with a phase-shifting interferometric line sensor for form measurements of precision components
Modern industrial fabrication processes put high requirements on the quality of the surface form of precision-machined components, e.g. optical lenses or microelectromechanical systems (MEMS). Optical sensors provide high precision non-contact surface measurement to verify these quality requirements, even on fragile surfaces.
The low-cost line-scanning interferometer presented in this contribution is based on a Michelson interferometer configuration combined with a high-speed line-scan camera. The sensor can operate in scanning white-light or in optical path length modulation (OPLM) mode. The white-light mode is used to automatically align the sensor perpendicular to the surface of the specimen at the working distance of 13 mm. In OPLM mode, an oscillating reference mirror and a band-pass filtered light source are used to measure the form of a radially symmetric specimen with a diameter of up to 25.4 mm with interferometric accuracy. Several overlapping ring-shaped sub-apertures are measured iteratively at different radial positions until the whole surface is scanned. The sub-apertures are stitched together to reconstruct the complete 3D topography, while overlapping areas can be used to correct movement errors of the scanning axes. This concept is highly adaptive and can be applied to many different specimen geometries, e.g. planes, spheres, or aspheric glass lenses.
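The stitching step relies on the overlap between neighbouring sub-apertures: the height offset measured in the shared region estimates the movement error of the scanning axes, which is then removed before joining the profiles. A one-dimensional sketch of this idea (a toy constant-offset ("piston") correction, not the authors' actual stitching algorithm, which works on ring-shaped 2D sub-apertures):

```python
# Sketch: stitch two overlapping 1D height profiles after removing the
# piston (constant height offset) estimated from their overlap region.
def stitch(profile_a, profile_b, overlap):
    # Mean height difference over the shared samples estimates the
    # movement error between the two sub-aperture measurements.
    diffs = [a - b for a, b in zip(profile_a[-overlap:], profile_b[:overlap])]
    piston = sum(diffs) / overlap
    corrected_b = [h + piston for h in profile_b]
    # Keep profile_a over the overlap, append the rest of corrected b.
    return profile_a + corrected_b[overlap:]

a = [0.0, 0.1, 0.2, 0.3]
b = [-0.8, -0.7, -0.6, -0.5]       # same surface, measured 1.0 lower
print(stitch(a, b, 2))             # ≈ [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
```

A full implementation would also estimate tilt and higher-order axis errors from the overlap and fit all sub-apertures jointly, but the principle is the same: redundancy in the overlapping areas constrains the unknown stage motion.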
Encoding Knowledge Graph Entity Aliases in Attentive Neural Network for Wikidata Entity Linking
Collaborative knowledge graphs such as Wikidata rely heavily on the crowd to author their information. Since the crowd is not bound to a standard protocol for assigning entity titles, the knowledge graph is populated with non-standard, noisy, long, or even awkward titles. The issue of long, implicit, and non-standard entity representations is a challenge for Entity Linking (EL) approaches aiming at high precision and recall. The underlying KG is generally the source of target entities for EL approaches; however, it often contains other relevant information, such as aliases of entities (e.g., Obama and Barack Hussein Obama are aliases for the entity Barack Obama). EL models usually ignore such readily available entity attributes. In this paper, we examine the role of knowledge graph context in an attentive neural network approach for entity linking on Wikidata. Our approach contributes by exploiting the available context from the KG as a source of background knowledge, which is then fed into the neural network. This approach demonstrably helps address the challenges associated with entity titles (multi-word, long, implicit, case-sensitive). Our experimental study shows ≈8% improvement over the baseline approach and significantly outperforms an end-to-end approach for Wikidata entity linking.
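Why aliases matter for candidate retrieval is easy to see in isolation from the neural model. A toy sketch (illustrative only, not the paper's method): indexing entities under every alias lets a short or non-standard surface form reach the intended entity at all, before any attentive ranking takes place.

```python
# Toy alias-aware candidate index. Entity IDs follow the Wikidata Q-id
# style; the entries themselves are hypothetical sample data.
def build_index(entities):
    index = {}
    for qid, names in entities.items():
        for name in names:
            # Lower-casing handles the case-sensitivity of raw titles.
            index.setdefault(name.lower(), set()).add(qid)
    return index

entities = {
    "Q76": ["Barack Obama", "Obama", "Barack Hussein Obama"],
    "Q13133": ["Michelle Obama", "Obama"],
}
index = build_index(entities)
print(sorted(index["obama"]))  # ['Q13133', 'Q76']
```

The mention "Obama" is ambiguous between two candidates here; disambiguating among them is exactly where a context-fed attentive neural ranker, as in the paper, takes over from this purely lexical lookup.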
Analysing the evolution of computer science events leveraging a scholarly knowledge graph: a scientometrics study of top-ranked events in the past decade
The publish-or-perish culture of scholarly communication results in quality and relevance being subordinate to quantity. Scientific events such as conferences play an important role in scholarly communication and knowledge exchange. Researchers in many fields, such as computer science, often need to search for events in order to publish their research results, establish connections for collaboration with other researchers, and stay up to date with recent work. Researchers need a meta-research understanding of the quality of scientific events to publish in high-quality venues. However, there are many diverse and complex criteria to be explored for the evaluation of events. Thus, finding events using quality-related criteria becomes a time-consuming task for researchers and often results in an experience-based, subjective evaluation. OpenResearch.org is a crowd-sourcing platform that provides features to explore previous and upcoming computer science events, based on a knowledge graph. In this paper, we devise an ontology representing scientific event metadata. Furthermore, we introduce an analytical study of the evolution of computer science events leveraging the OpenResearch.org knowledge graph. We identify common characteristics of these events, formalize them, and combine them into a group of metrics. These metrics can be used by potential authors to identify high-quality events. On top of the improved ontology, we analyzed the metadata of renowned conferences in various computer science communities, such as VLDB, ISWC, ESWC, WIMS, and SEMANTiCS, in order to inspect their potential as event metrics.
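One way such formalized metrics operate on event metadata is illustrated below with a toy acceptance-rate trend. This is a hypothetical sketch: the field names and the sample figures are invented for the example and are not drawn from the OpenResearch.org knowledge graph or the paper's metric set.

```python
# Sketch of one possible event-quality metric: acceptance rate per
# edition, computed from (hypothetical) event metadata records.
def acceptance_rates(editions):
    return {
        e["year"]: round(e["accepted"] / e["submitted"], 2)
        for e in editions
        if e["submitted"]          # skip editions with missing counts
    }

demo_event = [
    {"year": 2018, "submitted": 200, "accepted": 50},
    {"year": 2019, "submitted": 250, "accepted": 55},
]
print(acceptance_rates(demo_event))  # {2018: 0.25, 2019: 0.22}
```

In a knowledge-graph setting the `editions` records would come from a SPARQL query over the event ontology, and several such metrics (e.g. continuity, community growth, acceptance rate) would be combined to rank venues.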