On systematic approaches for interpreted information transfer of inspection data from bridge models to structural analysis
In conjunction with improved methods for monitoring damage and degradation processes, interest in the reliability assessment of reinforced concrete bridges has been increasing in recent years. Automated image-based inspections of the structural surface provide valuable data from which quantitative information about deteriorations, such as crack patterns, can be extracted. However, the knowledge gain comes from processing this information in a structural context, i.e. relating the damage artifacts to building components; this enables the transfer to structural analysis. This approach sets two further requirements: availability of structural bridge information and standardized storage for interoperability with subsequent analysis tools. Since the large datasets involved can only be processed efficiently in an automated manner, this work targets the implementation of the complete workflow from damage and building data to structural analysis. First, domain concepts are derived from the back-end tasks: structural analysis, damage modeling, and life-cycle assessment. The common interoperability format, the Industry Foundation Classes (IFC), and the processes in these domains are then assessed. The need for user-controlled interpretation steps is identified, and the developed prototype therefore allows interaction at subsequent model stages. This has the advantage that interpretation steps can be separated individually into a structural analysis model, a damage information model, or a combination of both. This approach to damage information processing from the perspective of structural analysis is then validated in different case studies.
Reviewing Developments of Graph Convolutional Network Techniques for Recommendation Systems
Recommender systems are a vital information service on today's Internet.
Recently, graph neural networks have emerged as the leading approach for
recommender systems. We review recent literature on graph neural
network-based recommender systems, covering the background and development of
both recommender systems and graph neural networks. Categorizing
recommender systems by their settings and graph neural networks into spectral
and spatial models, we explore the motivation behind incorporating graph neural
networks into recommender systems. We also analyze challenges and open problems
in graph construction, embedding propagation and aggregation, and computational
efficiency. This guides us to better explore future directions and
developments in this domain.
Sentinel: a co-designed platform for semantic enrichment of social media streams
We introduce the Sentinel platform that supports semantic enrichment of streamed social media data for the purposes of situational understanding. The platform is the result of a co-design effort between computing and social scientists, iteratively developed through a series of pilot studies. The platform is founded upon a knowledge-based approach, in which input streams (channels) are characterized by spatial and terminological parameters, collected media is preprocessed to identify significant terms (signals), and data are tagged (framed) in relation to an ontology. Interpretation of processed media is framed in terms of the 5W framework (who, what, when, where, and why). The platform is designed to be open to the incorporation of new processing modules, building on the knowledge-based elements (channels, signals, and framing ontology) and accessible via a set of user-facing apps. We present the conceptual architecture for the platform, discuss the design and implementation challenges of the underlying stream-processing system, and present a number of apps developed in the context of the pilot studies, highlighting the strengths and importance of the co-design approach and indicating promising areas for future research.
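The channel/signal/framing pipeline the abstract describes can be illustrated with a minimal sketch. The slot-to-term mapping and function name below are hypothetical, for illustration only, and are not Sentinel's actual ontology or API.

```python
def frame_post(text, signals):
    """Tag a social-media post against a channel's signal terms,
    grouping matched terms by 5W slot (who/what/when/where/why).
    `signals` maps each slot to a set of terms; this mapping is
    an illustrative stand-in for an ontology lookup."""
    tokens = set(text.lower().split())
    return {slot: sorted(term for term in terms if term in tokens)
            for slot, terms in signals.items()}
```

For example, a channel configured with `{"where": {"london", "park"}, "what": {"flood", "fire"}}` would frame the post "Fire near the park in London" as `{"where": ["london", "park"], "what": ["fire"]}`.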
Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction
Relation Extraction (RE) is a vital step in completing Knowledge Graphs (KGs) by
extracting entity relations from texts. However, it usually suffers from the
long-tail issue: the training data mainly concentrates on a few types of
relations, leaving a lack of sufficient annotations for the remaining relation
types. In this paper, we propose a general approach to learn relation
prototypes from unlabeled texts, to facilitate long-tail relation extraction
by transferring knowledge from the relation types with sufficient training data.
We learn relation prototypes as an implicit factor between entities, which
reflects the meanings of relations as well as their proximities for transfer
learning. Specifically, we construct a co-occurrence graph from texts, and
capture both first-order and second-order entity proximities for embedding
learning. Based on this, we further optimize the distance from entity pairs
to corresponding prototypes, which can be easily adapted to almost arbitrary RE
frameworks. Thus, the learning of infrequent or even unseen relation types
benefits from semantically proximate relations through pairs of entities and
large-scale textual information. We have conducted extensive experiments on two
publicly available datasets: New York Times and Google Distant
Supervision. Compared with eight state-of-the-art baselines, our proposed model
achieves significant improvements (4.1% F1 on average). Further results on
long-tail relations demonstrate the effectiveness of the learned relation
prototypes. We further conduct an ablation study to investigate the impacts of
varying components, and apply the approach to four basic relation extraction
models to verify its generalization ability. Finally, we analyze several example
cases to give intuitive impressions as qualitative analysis. Our code will be
released later.
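The core prototype idea, assigning entity pairs to the nearest relation prototype, can be sketched in a few lines. This is a deliberately simplified stand-in for the paper's graph-based prototype learning (plain mean vectors instead of embeddings learned from co-occurrence proximities); the function names are illustrative.

```python
def mean_vec(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def relation_prototypes(pair_embeddings, labels):
    """One prototype per relation type: the mean of the entity-pair
    embeddings labeled with that relation."""
    return {rel: mean_vec([e for e, l in zip(pair_embeddings, labels) if l == rel])
            for rel in set(labels)}

def nearest_relation(pair_embedding, prototypes):
    """Assign an entity pair to the relation whose prototype is closest
    in Euclidean distance, so rare or unseen relation types can borrow
    strength from semantically proximate prototypes."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(pair_embedding, proto)) ** 0.5
    return min(prototypes, key=lambda r: dist(prototypes[r]))
```

In the paper the distance from entity pairs to their prototypes is an optimization target during training; here it is only used at assignment time, which is enough to show why the prototype acts as an implicit factor between entities.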
Trustworthiness in Social Big Data Incorporating Semantic Analysis, Machine Learning and Distributed Data Processing
This thesis presents several state-of-the-art approaches constructed for the purpose of (i) studying the trustworthiness of users in Online Social Network platforms, (ii) deriving concealed knowledge from their textual content, and (iii) classifying and predicting the domain knowledge of users and their content. The developed approaches are refined through proof-of-concept experiments, several benchmark comparisons, and appropriate, rigorous evaluation metrics to verify and validate their effectiveness and efficiency, and hence those of the applied frameworks.
Hermitian - non-Hermitian interfaces in quantum theory
In the global framework of quantum theory, individual quantum systems seem
clearly separated into two families, with manifestly Hermitian and hiddenly
Hermitian operators of their Hamiltonians, respectively. In the light of
certain preliminary studies, these two families seem to have an empty overlap.
In this paper we demonstrate that this is not so. We show that whenever the
interaction potentials are chosen weakly nonlocal, the separation of the two
families may disappear. The overlaps, alias interfaces, between the Hermitian
and non-Hermitian descriptions of a unitarily evolving quantum system may
become non-empty. This assertion is illustrated via a few analytically
solvable elementary models.
User-centred video abstraction
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. The rapid growth of digital video content in recent years has imposed the need for technologies capable of producing condensed but semantically rich versions of an input video stream in an effective manner. Consequently, the topic of Video Summarisation is becoming increasingly popular in the multimedia community, and numerous video abstraction approaches have been proposed. These techniques can be divided into two major categories, automatic and semi-automatic, according to the required level of human intervention in the summarisation process. The fully automated methods mainly adopt low-level visual, aural and textual features alongside mathematical and statistical algorithms to extract the most significant segments of the original video. However, the effectiveness of this type of technique is restricted by a number of factors such as domain dependency, computational expense and the inability to understand the semantics of videos from low-level features. The second category of techniques attempts to improve the quality of summaries by involving humans in the abstraction process to bridge the semantic gap. Nonetheless, a single user's subjectivity and other external contributing factors such as distraction can degrade the performance of this group of approaches. Accordingly, in this thesis we have focused on the development of three effective user-centred video summarisation techniques that can be applied to different video categories and generate satisfactory results. In our first proposed approach, a novel mechanism for user-centred video summarisation is presented for scenarios in which multiple actors are employed in the summarisation process, in order to minimise the negative effects of relying on a single user.
Based on our proposed algorithm, the video frames are initially scored by a group of video annotators 'on the fly'. These assigned scores are then averaged to generate a single saliency score for each video frame, and finally the highest-scored video frames, alongside the corresponding audio and textual content, are extracted for inclusion in the final summary. The effectiveness of our approach has been assessed by comparing the video summaries it generates against the results obtained from three existing automatic summarisation tools that adopt different modalities for abstraction. The experimental results indicate that our proposed method delivers strong outcomes in terms of Overall Satisfaction and Precision with an acceptable Recall rate, demonstrating the usefulness of involving user input in the video summarisation process. To provide a better user experience, we then propose a personalised video summarisation method able to customise the generated summaries according to the viewers' preferences. The end-user's priority levels towards different video scenes are captured and used to update the average scores previously assigned by the video annotators, after which our earlier summarisation method is adopted to extract the most significant audio-visual content of the video. Experimental results indicate that this approach delivers superior outcomes compared with our previous method and the three automatic summarisation tools. Finally, we attempt to reduce the required level of audience involvement for personalisation by proposing a new method for producing personalised video summaries, in which SIFT visual features are adopted to identify the semantic categories of video scenes.
By fusing this retrieved data with pre-built user profiles, personalised video abstracts can be created. Experimental results showed the effectiveness of this method in delivering superior outcomes compared to our previous algorithm and the three other automatic summarisation techniques.
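The score-averaging and personalisation steps described above can be sketched as follows. This is a minimal illustration of the averaging-then-reweighting idea, not the thesis implementation; the function names and the multiplicative preference weighting are assumptions for the sketch.

```python
def frame_saliency(annotator_scores):
    """Average several annotators' per-frame scores into a single
    saliency score per frame (the multi-actor averaging step)."""
    return [sum(frame) / len(frame) for frame in zip(*annotator_scores)]

def personalise(saliency, scene_of_frame, preferences):
    """Reweight each frame's saliency by the viewer's preference for
    that frame's scene category (weight 1.0 if no preference given)."""
    return [s * preferences.get(scene_of_frame[i], 1.0)
            for i, s in enumerate(saliency)]

def summarise(saliency, k):
    """Return the indices of the k highest-scoring frames, kept in
    temporal order for the final summary."""
    top = sorted(range(len(saliency)), key=lambda i: saliency[i], reverse=True)[:k]
    return sorted(top)
```

With two annotators scoring four frames as `[1, 5, 2, 4]` and `[3, 5, 2, 2]`, the averaged saliency is `[2.0, 5.0, 2.0, 3.0]` and a two-frame summary keeps frames 1 and 3.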