3D FEM and DEM Analyses of Underground Openings in Competent Rock Masses
This paper compares the results of numerical analyses of underground openings in
competent rock masses, such as the Carrara Marble (Italy), using a real and well-documented case
study. More specifically, 3D FEM and DEM analyses were carried out on a rock-mass model affected by
two faults and three sets of discontinuities. The geometrical model is representative of deep underground
openings where spalling cracks and rock bursts can occur. PLAXIS 3D and 3DEC were used for the
analyses. The intact-rock and rock-mass characterization of Carrara Marble was inferred from the available
technical literature. The analysis results were compared in terms of principal stresses and displacements
at a number of monitoring points around the opening. The main practical interest is to identify a reliable
approach for evaluating the stability of very large openings in a competent rock mass such as Carrara Marble.
For this purpose, a number of available in-situ stress measurements were used.
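The comparison "in terms of principal stresses" at monitoring points can be sketched in a few lines: the principal stresses at a point are the eigenvalues of the symmetric 3D stress tensor there. The tensor values below are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical symmetric 3D stress tensor at a monitoring point (MPa);
# rows/columns correspond to the x, y, z components. Compression is negative.
sigma = np.array([
    [-12.0,  2.0,  0.5],
    [  2.0, -8.0,  1.0],
    [  0.5,  1.0, -20.0],
])

# Principal stresses are the eigenvalues of the stress tensor;
# sort so that sigma_1 >= sigma_2 >= sigma_3.
principal = np.sort(np.linalg.eigvalsh(sigma))[::-1]
print(principal)
```

Since the tensor is symmetric, `eigvalsh` is the appropriate (and numerically stable) eigenvalue routine; the invariant check that the eigenvalues sum to the trace is a quick sanity test on any such computation.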
Statistical Network Analysis for Functional MRI: Summary Networks and Group Comparisons
Comparing weighted networks in neuroscience is hard because the topological
properties of a given network necessarily depend on the number of edges
of that network. This problem arises in the analysis of both weighted and
unweighted networks. The term density is often used in this context
to refer to the mean edge weight of a weighted network, or to the number of
edges in an unweighted one. Comparing families of networks is therefore
statistically difficult because differences in topology are necessarily
associated with differences in density. In this review paper, we consider this
problem from two different perspectives, which include (i) the construction of
summary networks, such as how to compute and visualize the mean network from a
sample of network-valued data points; and (ii) how to test for topological
differences, when two families of networks also exhibit significant differences
in density. In the first instance, we show that the problem of summarizing a
family of networks can be addressed by adopting a mass-univariate approach,
which produces a statistical parametric network (SPN). In the second part of
this review, we then highlight the inherent problems associated with the
comparison of topological functions of families of networks that differ in
density. In particular, we show that a wide range of topological summaries,
such as global efficiency and network modularity, are highly sensitive to
differences in density. Moreover, these problems are not restricted to
unweighted metrics, as we demonstrate that the same issues remain present when
considering the weighted versions of these metrics. We conclude by encouraging
caution when reporting such statistical comparisons, and by emphasizing the
importance of constructing summary networks.
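The density sensitivity described above is easy to demonstrate. This sketch (using networkx, with arbitrary random-graph parameters chosen for illustration) generates two graphs from the same random model at different densities and shows that global efficiency rises with density, so a raw comparison of the metric conflates topology with edge count.

```python
import networkx as nx

# Two Erdos-Renyi graphs on the same node set, differing only in density.
sparse = nx.gnp_random_graph(30, 0.1, seed=1)
dense = nx.gnp_random_graph(30, 0.4, seed=2)

# Global efficiency (mean inverse shortest-path length) grows with density,
# even though the generating model is identical up to the edge probability.
eff_sparse = nx.global_efficiency(sparse)
eff_dense = nx.global_efficiency(dense)

print(nx.density(sparse), eff_sparse)
print(nx.density(dense), eff_dense)
```

Any between-group comparison of such a summary therefore needs to control for, or match on, density before attributing the difference to topology.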
The Hirsch spectrum: a novel tool for analysing scientific journals
This paper introduces the Hirsch spectrum (h-spectrum) for analyzing the academic reputation of a scientific journal. The h-spectrum is a novel tool based on the Hirsch (h) index. It is easy to construct: considering a specific journal over a specific interval of time, the h-spectrum is defined as the distribution of the h-indexes associated with the authors of the journal's articles. This tool makes it possible to define a reference profile of the typical author of a journal, to compare different journals within the same scientific field, and to provide a rough indication of the prestige/reputation of a journal in the scientific community. An h-spectrum can be associated with every journal. Ten specific journals in the Quality Engineering/Quality Management field are analyzed so as to preliminarily investigate the characteristics of the h-spectrum.
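As a sketch of the construction described above: compute the h-index of each author of the journal's articles, then take the distribution of those values. The author names and citation counts here are hypothetical, chosen only to make the example concrete.

```python
def h_index(citations):
    # h is the largest value such that the author has h papers
    # with at least h citations each.
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical per-author citation counts for a journal's authors
# over a chosen time window.
authors = {
    "A": [10, 8, 5, 4, 3],
    "B": [25, 8, 5, 3, 3],
    "C": [1, 1, 0],
}

# The h-spectrum is simply the distribution of the authors' h-indexes.
h_spectrum = [h_index(c) for c in authors.values()]
print(h_spectrum)  # → [4, 3, 1]
```

In practice the spectrum would be visualized as a histogram, from which the "typical author" profile and cross-journal comparisons follow.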
Optimizing feature extraction in image analysis using experimental designs: a case study evaluating texture algorithms for describing appearance retention in carpets
When performing image analysis, one of the most critical steps is the selection of appropriate techniques. A large number of features can be extracted with various techniques, and the selection is commonly performed based on expert knowledge. In this paper we present the theory of experimental designs as a tool for the objective selection of techniques in the image-analysis domain. We present a case study on evaluating appearance retention in textile floor coverings using texture features. The use of experimental design theory allowed us to select an optimal set of techniques for describing the texture changes due to degradation.
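A minimal sketch of how an experimental design can rank techniques objectively: lay out a full-factorial design over two factors (here a texture technique and a hypothetical window-size parameter) and compare main effects of the technique factor. The factor levels and response values below are invented, standing in for measured agreement with expert wear grades.

```python
import statistics

# Hypothetical 3x2 full-factorial design: texture technique x window size.
techniques = ["GLCM", "LBP", "wavelet"]
window_sizes = [8, 16]

# Hypothetical response for each factor combination (e.g. correlation of the
# extracted features with expert appearance-retention grades); in a real
# design these values come from the experiments themselves.
response = {
    ("GLCM", 8): 0.71, ("GLCM", 16): 0.78,
    ("LBP", 8): 0.83, ("LBP", 16): 0.86,
    ("wavelet", 8): 0.65, ("wavelet", 16): 0.70,
}

# Main effect of each technique: its mean response averaged over the levels
# of the other factor.
main_effect = {
    t: statistics.mean(response[(t, w)] for w in window_sizes)
    for t in techniques
}
best = max(main_effect, key=main_effect.get)
print(best)  # → LBP
```

A full analysis would also test interactions and effect significance (e.g. via ANOVA), but the factorial layout is what replaces ad-hoc expert choice with an objective comparison.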
The role of the information set for forecasting - with applications to risk management
Predictions are issued on the basis of certain information. If the
forecasting mechanisms are correctly specified, a larger amount of available
information should lead to better forecasts. For point forecasts, we show how
the effect of increasing the information set can be quantified by using
strictly consistent scoring functions, where it results in smaller average
scores. Further, we show that the classical Diebold-Mariano test, based on
strictly consistent scoring functions and asymptotically ideal forecasts, is a
consistent test for the effect of an increase in a sequence of information sets
on h-step point forecasts. For the value at risk (VaR), we show that the
average score, which corresponds to the average quantile risk, directly relates
to the expected shortfall. Thus, increasing the information set will result in
VaR forecasts which lead on average to smaller expected shortfalls. We
illustrate our results in simulations and applications to stock returns for
unconditional versus conditional risk management as well as univariate modeling
of portfolio returns versus multivariate modeling of individual risk factors.
The role of the information set for evaluating probabilistic forecasts by using
strictly proper scoring rules is also discussed. (Published in the Annals of
Applied Statistics, http://dx.doi.org/10.1214/13-AOAS709, by the Institute of
Mathematical Statistics, http://www.imstat.org.)
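The claim about strictly consistent scoring functions can be illustrated with a small simulation (a sketch under invented assumptions, not the paper's setup): returns have a volatility state that one forecaster observes and another does not. Scoring VaR forecasts with the quantile (pinball) score, which is strictly consistent for the quantile, the larger information set yields a smaller average score.

```python
import random
import statistics
from statistics import NormalDist

random.seed(1)
ALPHA = 0.05  # VaR level

def quantile_score(q, y, alpha=ALPHA):
    # Pinball loss: a strictly consistent scoring function
    # for the alpha-quantile.
    return (float(y <= q) - alpha) * (q - y)

# Returns are N(0, sigma), where the volatility sigma is an observable state.
states = [random.choice([1.0, 3.0]) for _ in range(20000)]
returns = [random.gauss(0, s) for s in states]

# Uninformed forecaster: the unconditional empirical alpha-quantile.
uncond_q = sorted(returns)[int(ALPHA * len(returns))]
# Informed forecaster: the conditional alpha-quantile given the state.
cond_q = {s: NormalDist(0, s).inv_cdf(ALPHA) for s in (1.0, 3.0)}

score_uninf = statistics.mean(quantile_score(uncond_q, y) for y in returns)
score_inf = statistics.mean(
    quantile_score(cond_q[s], y) for s, y in zip(states, returns))
print(score_inf, score_uninf)
```

Because the conditional quantile minimizes the expected pinball loss in each state, the informed average score is strictly smaller, which is exactly the mechanism by which consistent scoring functions reward a larger information set.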
The longer term value of creativity judgements in computational creativity
During research to develop the Standardised Procedure for Evaluating Creative Systems (SPECS) methodology for evaluating the creativity of ‘creative’ systems, an evaluation case study was carried out in 2011. The case study investigated how we can make a ‘snapshot’ decision, in a short space of time, on the creativity of systems in various domains. The systems to be evaluated were presented at the International Computational Creativity Conference in 2011. Evaluation was performed by people whose domain expertise ranged from expert to novice, depending on the system. The SPECS methodology was used for evaluation and was compared to two other creativity evaluation methods (Ritchie’s criteria and Colton’s Creative Tripod) and to results from surveying people’s opinions on the creativity of the systems under investigation. Here, we revisit those results, considering them in the context of what these systems have contributed to computational creativity development. Five years on, we now have data on how influential these systems were within computational creativity, and on the extent to which the work in these systems has influenced further developments in computational creativity research. This paper investigates whether the evaluations of the creativity of these systems have been helpful in predicting which systems would be more influential in computational creativity (as measured by paper citations and further development within later computational systems). While a direct correlation between evaluative results and longer-term impact is not discovered (and is perhaps too simplistic an aim, given the factors at play in determining research impact), some interesting alignments are noted between the 2011 results and the impact of papers five years on.