5,945,950 research outputs found
Pilot/Practice Digital Content Analysis
In this assignment, students pick a small sample of social media content, analyze it using a technique of their choice, and then write up their results and experience.
Moisture content analysis of wooden bridges
The article assesses the impact of moisture conditions in the wood mass of wooden bridge constructions on their lifespan in Central Europe. Wood moisture content, one of the main factors influencing the mechanical properties of wooden elements, was studied on seventeen wooden bridge constructions. The dependence of material moisture content on temperature and relative humidity was observed in both the summer and winter seasons. The lifespan of historical and modern wood structures is discussed as well.
The computational content of Nonstandard Analysis
Kohlenbach's proof mining program deals with the extraction of effective
information from typically ineffective proofs. Proof mining has its roots in
Kreisel's pioneering work on the so-called unwinding of proofs. The proof
mining of classical mathematics is rather restricted in scope due to the
existence of sentences without computational content which are provable from
the law of excluded middle and which involve only two quantifier alternations.
By contrast, we show that the proof mining of classical Nonstandard Analysis
has a very large scope. In particular, we will observe that this scope includes
any theorem of pure Nonstandard Analysis, where `pure' means that only
nonstandard definitions (and not the epsilon-delta kind) are used. In this
note, we survey results in analysis, computability theory, and Reverse
Mathematics.
Comment: In Proceedings CL&C 2016, arXiv:1606.0582
Content analysis: What are they talking about?
Quantitative content analysis is increasingly used to surpass surface-level analyses in Computer-Supported Collaborative Learning (e.g., counting messages), but critical reflection on accepted practice has generally not been reported. A review of CSCL conference proceedings revealed a general vagueness in definitions of units of analysis. In general, arguments for choosing a unit were lacking, and decisions made while developing the content analysis procedures were not made explicit. This article illustrates that the currently accepted practices concerning the ‘unit of meaning’ are not generally applicable to quantitative content analysis of electronic communication. Such analysis is affected by ‘unit boundary overlap’ and contextual constraints having to do with the technology used. The analysis of e-mail communication required a different unit of analysis and segmentation procedure. This procedure proved to be reliable, and the subsequent coding of these units for quantitative analysis yielded satisfactory reliabilities. These findings have implications and recommendations for current content analysis practice in CSCL research.
Video semantic content analysis based on ontology
The rapid increase in the available amount of video data is creating a growing demand for efficient methods for understanding and managing it at the semantic level. New multimedia standards, such as MPEG-4 and MPEG-7, provide the basic functionalities for manipulating and transmitting objects and metadata. Importantly, however, most of the semantic-level content of video data is outside the scope of these standards. In this paper, a video semantic content analysis framework based on ontology is presented. A domain ontology is used to define high-level semantic concepts and their relations in the context of the examined domain, and low-level features (e.g. visual and aural) and video content analysis algorithms are integrated into the ontology to enrich video semantic analysis. OWL is used for the ontology description. Rules in Description Logic are defined to describe how features and algorithms for video analysis should be applied according to different perceived content and low-level features. Temporal Description Logic is used to describe semantic events, and a reasoning algorithm is proposed for event detection. The proposed framework is demonstrated in the soccer video domain and shows promising results.
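To give a concrete flavor of mapping low-level features to semantic events via rules, here is a minimal Python sketch. It is not the paper's actual OWL/Temporal Description Logic implementation; the feature names, thresholds, and the "goal" rule are hypothetical illustrations of the general idea.

```python
# Illustrative sketch: rule-based event detection over low-level features,
# loosely mirroring how DL-style rules map features to semantic events.
# Feature names and the "goal" rule below are invented, not from the paper.

def detect_events(timeline, rules):
    """timeline: list of (time, {feature: value}) observations.
    rules: {event_name: predicate over a feature dict}.
    Returns the (time, event_name) pairs whose predicate fires."""
    events = []
    for t, features in timeline:
        for name, predicate in rules.items():
            if predicate(features):
                events.append((t, name))
    return events

# Hypothetical soccer-domain rule: flag a "goal" when the ball is in the
# goal area and the audio track registers loud crowd noise.
rules = {
    "goal": lambda f: f.get("ball_in_goal_area")
                      and f.get("crowd_noise_db", 0) > 80,
}

timeline = [
    (12.0, {"ball_in_goal_area": False, "crowd_noise_db": 55}),
    (47.5, {"ball_in_goal_area": True,  "crowd_noise_db": 92}),
]

print(detect_events(timeline, rules))  # → [(47.5, 'goal')]
```

A real system would derive such predicates from the ontology and reason over temporal relations between events, rather than hard-coding them.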
Benefits of Computer Based Content Analysis to Foresight
Purpose of the article: The present manuscript summarizes the benefits of using computer-based content
analysis in the generation phase of foresight initiatives. Possible advantages, disadvantages and limitations of
content analysis for foresight projects are discussed as well.
Methodology/methods: In order to specify the benefits and identify the limitations of content analysis
within foresight, the results of the generation phase of a particular foresight project performed first without
and subsequently with a computer-based content analysis tool were compared using two proposed
measurements.
Scientific aim: The generation phase of foresight is the most demanding part in terms of analysis duration,
costs and resources due to the significant amount of text to review. In addition, the conclusions of the foresight
evaluation depend on the personal views and perceptions of the foresight analysts, as the evaluation is
based merely on reading. Content analysis may partially or even fully replace the reading and provide an
important benchmark.
Findings: The use of a computer-based content analysis tool significantly reduced the time needed to conduct
the foresight generation phase. The content analysis tool showed very similar results to the evaluation
performed by standard reading: only ten percent of the results were not revealed by the content analysis tool.
On the other hand, several new topics were identified by means of the content analysis tool that had been
missed by reading.
Conclusions: The results of the two measurements should be subjected to further testing within more foresight
projects to validate them. A computer-based content analysis tool provides a valuable benchmark to
foresight analysts and can partially substitute for reading. However, a complete replacement of reading is not
recommended, as deep understanding is essential for interpreting weak signals in foresight.
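To make the idea of a computer-based content analysis concrete, here is a deliberately crude Python sketch that reduces it to keyword-frequency counting across a document set. Real foresight tools are far more sophisticated; the mini-corpus and the `min_count` cutoff are invented for illustration.

```python
from collections import Counter
import re

def keyword_frequencies(documents, min_count=2):
    """Count word occurrences across a document set and keep frequent
    terms: a crude stand-in for the generation-phase text analysis."""
    words = []
    for doc in documents:
        words.extend(re.findall(r"[a-z]+", doc.lower()))
    counts = Counter(words)
    return {w: c for w, c in counts.items() if c >= min_count}

# Invented mini-corpus of "reviewed texts"
docs = [
    "Sensors and robotics will reshape manufacturing.",
    "Cheap sensors enable predictive maintenance in manufacturing.",
]
print(keyword_frequencies(docs))  # → {'sensors': 2, 'manufacturing': 2}
```

Even this toy version shows the benchmark value described above: the recurring terms it surfaces can be compared against the topics an analyst identified by reading.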
Multimodal Content Analysis for Effective Advertisements on YouTube
The rapid advances in e-commerce and Web 2.0 technologies have greatly
increased the impact of commercial advertisements on the general public. As a
key enabling technology, a multitude of recommender systems exist which
analyze user features and browsing patterns to recommend appealing
advertisements to users. In this work, we seek to study the characteristics or
attributes that characterize an effective advertisement and recommend a useful
set of features to aid the designing and production processes of commercial
advertisements. We analyze the temporal patterns from multimedia content of
advertisement videos including auditory, visual and textual components, and
study their individual roles and synergies in the success of an advertisement.
The objective of this work is then to measure the effectiveness of an
advertisement, and to recommend a useful set of features to advertisement
designers to make it more successful and approachable to users. Our proposed
framework employs the signal processing technique of cross modality feature
learning where data streams from different components are employed to train
separate neural network models and are then fused together to learn a shared
representation. Subsequently, a neural network model trained on this joint
feature embedding representation is utilized as a classifier to predict
advertisement effectiveness. We validate our approach using subjective ratings
from a dedicated user study, the sentiment strength of online viewer comments,
and a viewer opinion metric of the ratio of the Likes and Views received by
each advertisement from an online platform.
Comment: 11 pages, 5 figures, ICDM 201
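The cross-modality fusion described above can be sketched minimally: each modality is embedded separately, the embeddings are concatenated into a shared representation, and a classifier scores effectiveness. This is an architectural illustration only; real systems train neural networks per modality, and the feature names, fixed weights, and threshold below are placeholders, not the paper's values.

```python
# Minimal architectural sketch of cross-modality feature fusion.

def embed(features, weights):
    """Toy per-modality 'encoder': one weighted-sum unit per output dim."""
    return [sum(f * w for f, w in zip(features, row)) for row in weights]

def fuse(*embeddings):
    """Shared representation: concatenation of per-modality embeddings."""
    return [x for e in embeddings for x in e]

def predict_effective(joint, threshold=1.0):
    """Toy classifier on the joint embedding: score against a threshold."""
    return sum(joint) > threshold

# Invented feature vectors for one advertisement video
audio  = [0.2, 0.9]   # e.g. loudness, tempo
visual = [0.7, 0.4]   # e.g. cut rate, brightness
text   = [0.5]        # e.g. sentiment of on-screen text

joint = fuse(
    embed(audio,  [[0.5, 0.5]]),   # placeholder weights, not trained
    embed(visual, [[1.0, 0.0]]),
    embed(text,   [[1.0]]),
)
print(predict_effective(joint))
```

In the trained setting, the per-modality encoders and the classifier on the joint embedding would be learned from labeled examples of effective and ineffective advertisements rather than fixed by hand.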