Data mining technology for the evaluation of learning content interaction
Interactivity is central to the success of learning. In e-learning and other educational multimedia environments, the evaluation of interaction and behaviour is particularly crucial. Data mining, a non-intrusive and objective analysis technology, is proposed as the central evaluation technology for analysing the usage of computer-based educational environments, and in particular the interaction with educational content. Basic mining techniques are reviewed and their application in a Web-based third-level course environment is illustrated. Analytic models capturing interaction aspects from the application domain (learning) and the software infrastructure (interactive multimedia) are required for the meaningful interpretation of mining results.
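As a minimal, hypothetical sketch of the kind of usage mining this abstract describes, one could count frequent navigation transitions in learner session logs; the session data and content names below are invented for illustration, not taken from the study:

```python
from collections import Counter

# Hypothetical session logs: ordered content items each learner visited.
sessions = [
    ["intro", "lesson1", "quiz1", "lesson2"],
    ["intro", "lesson1", "lesson2", "quiz1"],
    ["intro", "lesson1", "quiz1"],
]

# Count transitions (bigrams) between consecutively visited items.
transitions = Counter(
    (a, b) for session in sessions for a, b in zip(session, session[1:])
)

# The most frequent transitions hint at typical navigation paths
# through the educational content.
print(transitions.most_common(2))
```

Frequent-transition counts like these are only one basic mining technique; interpreting them still requires the analytic models of the learning domain that the abstract calls for.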
The Co-Evolution of Test Maintenance and Code Maintenance through the lens of Fine-Grained Semantic Changes
Automatic testing is a widely adopted technique for improving software
quality. Software developers add, remove and update test methods and test
classes as part of the software development process as well as during the
evolution phase, following the initial release. In this work we conduct a large
scale study of 61 popular open source projects and report the relationships we
have established between test maintenance, production code maintenance, and
semantic changes (e.g., statement added, method removed) performed in
developers' commits.
We build predictive models, and show that the number of tests in a software
project can be well predicted by employing code maintenance profiles (i.e., how
many commits were performed in each of the maintenance activities: corrective,
perfective, adaptive). Our findings also reveal that more often than not,
developers perform code fixes without performing complementary test maintenance
in the same commit (e.g., update an existing test or add a new one). When
developers do perform test maintenance, it is likely to be affected by the
semantic changes they perform as part of their commit.
Our work is based on studying 61 popular open source projects, comprised of
over 240,000 commits consisting of over 16,000,000 semantic change type
instances, performed by over 4,000 software engineers. Comment: postprint, ICSME 201
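The abstract does not specify the form of the predictive models; as a hedged sketch under the assumption of a simple linear relationship, a least-squares fit of test counts against a project's maintenance profile (commits per corrective, perfective, and adaptive category, with synthetic numbers invented here) might look like:

```python
import numpy as np

# Synthetic maintenance profiles: commits per category
# [corrective, perfective, adaptive] for several projects.
profiles = np.array([
    [120.0, 80.0, 30.0],
    [200.0, 150.0, 60.0],
    [50.0, 40.0, 10.0],
    [300.0, 100.0, 90.0],
])

# Synthetic test counts (invented for illustration only).
test_counts = np.array([500.0, 870.0, 210.0, 1180.0])

# Least-squares fit: test_count ~ profile @ coefficients.
coeffs, *_ = np.linalg.lstsq(profiles, test_counts, rcond=None)

# Predict the test count for a new project's maintenance profile.
new_profile = np.array([100.0, 60.0, 20.0])
predicted = float(new_profile @ coeffs)
print(predicted)
```

The paper's actual features and model family are not reproduced here; this only illustrates how a maintenance profile can serve as the input to such a prediction.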
Spectral comparison of large urban graphs
The spectrum of an axial graph is proposed as a means for comparison between spaces,
particularly for measuring between very large and complex graphs. A number of methods have
been used in recent years for comparative analysis within large sets of urban areas, both to
investigate properties of specific known types of street network or to propose a taxonomy of urban
morphology based on an analytical technique. In many cases, a single or small range of predefined,
scalar measures such as metric distance, integration, control or clustering coefficient have
been used to compare the graphs. While these measures are well understood theoretically, their
low dimensionality determines the range of observations that can ultimately be drawn from the data.
Spectral analysis represents each space by a high-dimensional vector; metric distance between these
vectors indicates the overall difference between two spaces, and subspaces may be extracted to
correspond to particular features. Here it is used to compare entire urban graphs and determine
similarities (and differences) in their overall structure.
Results are shown of a comparison of 152 cities distributed around the world. The clustering of
cities of similar properties in a high dimensional space is discussed. Principal and nonlinear
components of the data set indicate significant correlations in the graph similarities between cities
and their proximity to one another, suggesting that cultural features based on location are evident in
the city form and that these can be quantified by the proposed method. Results of classification
tests show that a city’s location can be estimated based purely on its form.
The high dimensionality of the spectra is beneficial for its utility in data-mining applications that can
draw correlations with other data sets such as land use information. It is shown how further
processing by supervised learning allows the extraction of relevant features. A methodological
comparison is also drawn with statistical studies that use a strong correlation between human
genetic markers and geographical location of populations to derive detailed reconstructions of
prehistoric migration. Thus, it is suggested that the method may be utilised for mapping the transfer
of cultural memes by measuring similarity between cities.
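The core idea of comparing graphs by their spectra can be sketched in a minimal form (this is not the authors' full pipeline, and real axial graphs differ in size, which the paper's method accommodates; the toy graphs here have equal node counts by construction):

```python
import numpy as np

def laplacian_spectrum(adjacency: np.ndarray) -> np.ndarray:
    """Sorted eigenvalues of the graph Laplacian L = D - A."""
    degrees = np.diag(adjacency.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(degrees - adjacency))

def spectral_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two spectra (equal node counts assumed)."""
    return float(np.linalg.norm(laplacian_spectrum(a) - laplacian_spectrum(b)))

# Toy graphs on 4 nodes: a path (0-1-2-3) and a cycle (0-1-2-3-0).
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)

print(spectral_distance(path, path))   # identical graphs: 0.0
print(spectral_distance(path, cycle))  # structurally different: > 0
```

Because the spectrum is a high-dimensional vector rather than a single scalar, distances between such vectors retain far more structural information than the low-dimensional measures (integration, control, clustering coefficient) discussed above.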
European Arctic Initiatives Compendium
Published version
Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration
There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also revealed that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems ‘expose’ relevant video-generated metadata events, such as triggered alerts, and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission, can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).
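As an illustrative sketch only (the paper does not specify FVSA's interfaces at this level of detail, and every name and field below is a hypothetical stand-in), a metadata-event repository queried uniformly across public and private camera owners might look like:

```python
from dataclasses import dataclass

@dataclass
class MetadataEvent:
    camera_id: str      # hypothetical camera identifier
    owner: str          # "public" or "private"
    timestamp: float    # seconds since epoch
    event_type: str     # e.g. "pedestrian", "vehicle", "alert"
    location: str       # hypothetical zone label

def query_events(events, event_type, since):
    """Return matching metadata events regardless of camera ownership."""
    return [e for e in events
            if e.event_type == event_type and e.timestamp >= since]

events = [
    MetadataEvent("cam-01", "public", 1000.0, "vehicle", "zone-a"),
    MetadataEvent("cam-17", "private", 1050.0, "pedestrian", "zone-b"),
    MetadataEvent("cam-17", "private", 1100.0, "vehicle", "zone-b"),
]

# A unified query spans both public and private cameras,
# touching only metadata, not raw video.
hits = query_events(events, "vehicle", since=1001.0)
print([e.camera_id for e in hits])
```

Exposing only metadata events, rather than raw footage, is what makes the cross-owner query in the abstract plausible from a privacy and bandwidth standpoint.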