ShapeStacks: Learning Vision-Based Physical Intuition for Generalised Object Stacking
Physical intuition is pivotal for intelligent agents to perform complex
tasks. In this paper we investigate the passive acquisition of an intuitive
understanding of physical principles as well as the active utilisation of this
intuition in the context of generalised object stacking. To this end, we
provide: a simulation-based dataset featuring 20,000 stack configurations
composed of a variety of elementary geometric primitives richly annotated
regarding semantics and structural stability. We train visual classifiers for
binary stability prediction on the ShapeStacks data and scrutinise their
learned physical intuition. Due to the richness of the training data our
approach also generalises favourably to real-world scenarios achieving
state-of-the-art stability prediction on a publicly available benchmark of
block towers. We then leverage the physical intuition learned by our model to
actively construct stable stacks and observe the emergence of an intuitive
notion of stackability - an inherent object affordance - induced by the active
stacking task. Our approach performs well even in challenging conditions where
it considerably exceeds the stack height observed during training or in cases
where initially unstable structures must be stabilised via counterbalancing.
Comment: revised version to appear at ECCV 201
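The notion of structural stability the classifier must capture can be illustrated with a toy geometric criterion (a hand-coded baseline for intuition only, not the paper's learned vision model): a tower of blocks stands only if the combined centre of mass of every upper sub-stack rests over the horizontal extent of the block beneath it.

```python
def stack_is_stable(blocks):
    """Naive geometric stability check for a tower of axis-aligned
    blocks (illustrative baseline, not the paper's learned model).
    Each block is (x_centre, width); mass is taken proportional to
    width. The tower is stable iff, for every block, the combined
    centre of mass of everything above it lies within its extent."""
    for i in range(len(blocks) - 1):
        above = blocks[i + 1:]
        mass = sum(w for _, w in above)
        com = sum(x * w for x, w in above) / mass
        x, w = blocks[i]
        if not (x - w / 2 <= com <= x + w / 2):
            return False
    return True

# A slightly offset tower still stands; a heavily offset top block topples it.
print(stack_is_stable([(0.0, 1.0), (0.1, 1.0), (0.0, 1.0)]))  # True
print(stack_is_stable([(0.0, 1.0), (0.9, 1.0)]))              # False
```

The learned classifier in the paper replaces this hand-written criterion with a visual prediction from rendered images, which is what allows it to transfer to real-world block towers.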
On the stability of Dirac sheet configurations
Using cooling for SU(2) lattice configurations, purely Abelian constant
magnetic field configurations were left over after the annihilation of
constituents that formed metastable Q=0 configurations. These so-called Dirac
sheet configurations were found to be stable if emerging from the confined
phase, close to the deconfinement phase transition, provided their Polyakov
loop was sufficiently non-trivial. Here we show how this is related to the
notion of marginal stability of the appropriate constant magnetic field
configurations. We find a perfect agreement between the analytic prediction for
the dependence of stability on the value of the Polyakov loop (the holonomy) in
a finite volume and the numerical results studied on a finite lattice in the
context of the Dirac sheet configurations.
Scalable BGP Prefix Selection for Effective Inter-domain Traffic Engineering
Inter-domain traffic engineering for multi-homed networks faces a scalability
challenge as the size of the BGP routing table continues to grow. In this
context, the best path must potentially be chosen for each destination prefix,
requiring all available paths to be characterised (e.g., through measurements)
and compared with each other. Fortunately, it is well known that a small number
of prefixes carries the larger part of the traffic. As a natural consequence,
only a few prefixes need to be managed to engineer large volumes of traffic.
Yet, the traffic characteristics of a given prefix can vary greatly over time,
and little is known about the dynamics of traffic at this aggregation level,
including how to predict the set of the most significant prefixes in the near
future based on past observations. Sophisticated prediction methods will not
scale in such a context. In this paper, we study the relationship between
prefix volume, stability, and predictability, based on recent traffic traces
from nine different networks. We then propose three simple and
resource-efficient methods to select the prefixes associated with the most
important foreseeable traffic volume. The proposed methods select sets of
prefixes with both excellent representativeness (volume coverage) and stability
in time, for which the best routes are identified. The analysis carried out
confirms the potential benefits of a route decision engine.
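The core idea that a few prefixes carry most of the traffic can be sketched as a greedy cumulative-volume cut-off (an illustrative simplification; the paper proposes three more refined selection methods):

```python
def select_heavy_prefixes(volume_by_prefix, coverage=0.9):
    """Greedily pick the fewest prefixes whose summed volume reaches
    the requested fraction of total traffic. Illustrative only; the
    prefix names and volumes below are made-up examples."""
    total = sum(volume_by_prefix.values())
    selected, covered = [], 0.0
    for prefix, vol in sorted(volume_by_prefix.items(),
                              key=lambda kv: kv[1], reverse=True):
        if covered >= coverage * total:
            break
        selected.append(prefix)
        covered += vol
    return selected

traffic = {"10.0.0.0/8": 500, "192.0.2.0/24": 300,
           "198.51.100.0/24": 150, "203.0.113.0/24": 50}
# Two of four prefixes already cover 80% of the volume.
print(select_heavy_prefixes(traffic, coverage=0.8))
```

In practice the interesting part, which the paper studies, is whether the selected set stays representative over time without re-measuring every prefix.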
Banking Deregulation and Financial Stability: Is it Time to Re-regulate in Canada?
We provide new evidence of a worsening of the risk-return trade-off in Canadian banking. Surging OBS activities have led to increasingly volatile net operating revenues, and might have reduced well-known measures of bank profitability, such as return on assets and return on equity. In this context, a natural question arises: should we re-regulate? On this matter, we confirm the prediction of Calmùs (2003): a maturation process took place after 1997. Using a new approach based on ARCH-M estimation, we find that an additional risk premium has emerged. In this sense, there is no need to re-regulate.
Keywords: ARCH-M models, risk premium, financial stability
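The ARCH-M specification mentioned in the abstract makes the conditional mean depend on conditional volatility, which is exactly how a risk premium enters the return equation. A minimal simulation sketch of an ARCH(1)-in-mean process (hypothetical parameter values, not the paper's estimates):

```python
import random

def simulate_arch_m(n, mu=0.0, lam=0.5, omega=0.1, alpha=0.3, seed=42):
    """Simulate an ARCH(1)-in-mean process:
        r_t      = mu + lam * sigma_t + eps_t,   eps_t ~ N(0, sigma_t^2)
        sigma_t^2 = omega + alpha * eps_{t-1}^2
    lam is the risk-premium coefficient that ARCH-M estimation
    recovers from data. Parameter values here are illustrative."""
    rng = random.Random(seed)
    returns, eps_prev = [], 0.0
    for _ in range(n):
        sigma = (omega + alpha * eps_prev ** 2) ** 0.5
        eps = rng.gauss(0.0, sigma)
        returns.append(mu + lam * sigma + eps)
        eps_prev = eps
    return returns

r = simulate_arch_m(1000)
print(len(r))  # 1000
```

A positive estimate of lam on bank revenue data is the "additional risk premium" the abstract refers to: riskier periods (higher sigma_t) command higher expected returns.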
Dinosolve: A Protein Disulfide Bonding Prediction Server Using Context-Based Features to Enhance Prediction Accuracy
Background: Disulfide bonds play an important role in protein folding and structure stability. Accurately predicting disulfide bonds from protein sequences is important for modeling the structural and functional characteristics of many proteins. Methods: In this work, we introduce an approach to enhancing disulfide bonding prediction accuracy by taking advantage of context-based features. We first derive first-order and second-order mean-force potentials according to the amino acid environment around the cysteine residues, using a large number of cysteine samples. The mean-force potentials are integrated as context-based scores to estimate the favorability of a cysteine residue being in a disulfide bonding state, as well as of a cysteine pair being in disulfide bond connectivity. These context-based scores are then incorporated as features, together with other sequence and evolutionary information, to train neural networks for disulfide bonding state prediction and connectivity prediction. Results: The 10-fold cross-validated accuracy in classifying an individual cysteine residue as bonded or free is 90.8% at the residue level and 85.6% at the protein level, an improvement of around 2%. The average accuracy for disulfide bond connectivity prediction is also improved, yielding an overall sensitivity of 73.42% and specificity of 91.61%. Conclusions: Our computational results show that the context-based scores are effective features for enhancing the prediction accuracies of both disulfide bonding state prediction and connectivity prediction. Our disulfide prediction algorithm is implemented in a web server named Dinosolve, available at: http://hpcr.cs.odu.edu/dinosolve
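A first-order context score of this flavour can be sketched as a smoothed log-odds sum over the residues in a window around a cysteine (a deliberately simplified stand-in for the paper's mean-force potentials; the toy residue counts below are made up):

```python
from collections import Counter
from math import log

def context_score(window, bonded_counts, free_counts):
    """Toy first-order context score for the residues around a
    cysteine: sum of log-odds of each residue occurring near bonded
    vs. free cysteines, with +1 smoothing over the 20 amino acids.
    A positive score favours the bonded state. Hypothetical
    simplification of the paper's mean-force potentials."""
    score = 0.0
    b_total = sum(bonded_counts.values())
    f_total = sum(free_counts.values())
    for aa in window:
        p_b = (bonded_counts.get(aa, 0) + 1) / (b_total + 20)
        p_f = (free_counts.get(aa, 0) + 1) / (f_total + 20)
        score += log(p_b / p_f)
    return score

# Made-up environment counts: G/P/C seen near bonded cysteines,
# A/L/K seen near free ones.
bonded = Counter("GGPPCC")
free = Counter("AALLKK")
print(context_score("GP", bonded, free) > 0)  # favours bonded
```

In the paper such scores are not used directly for classification; they are fed as extra input features to the neural networks alongside sequence and evolutionary information.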
Episodic memory enhancement versus impairment is determined by contextual similarity across events
For over a century, stability of spatial context across related episodes has been considered a source of memory interference, impairing memory retrieval. However, contemporary memory integration theory generates a diametrically opposite prediction. Here, we aimed to resolve this discrepancy by manipulating local context similarity across temporally disparate but related episodes and testing the direction and underlying mechanisms of memory change. A series of experiments show that contextual stability produces memory integration and marked reciprocal strengthening. Variable context, conversely, seemed to result in competition such that new memories become enhanced at the expense of original memories. Interestingly, these patterns were virtually inverted in an additional experiment where context was reinstated during recall. These observations 1) identify contextual similarity across original and new memories as an important determinant in the volatility of memory, 2) present a challenge to classic and modern theories on episodic memory change, and 3) indicate that the sensitivity of context-induced memory changes to retrieval conditions may reconcile paradoxical predictions of interference and integration theory.
Linear system identification using stable spline kernels and PLQ penalties
The classical approach to linear system identification is given by parametric
Prediction Error Methods (PEM). In this context, model complexity is often
unknown so that a model order selection step is needed to suitably trade-off
bias and variance. Recently, a different approach to linear system
identification has been introduced, where model order determination is avoided
by using a regularized least squares framework. In particular, the penalty term
on the impulse response is defined by so-called stable spline kernels. They
embed information on regularity and BIBO stability, and depend on a small
number of parameters which can be estimated from data. In this paper, we
provide new nonsmooth formulations of the stable spline estimator. In
particular, we consider linear system identification problems in a very broad
context, where regularization functionals and data misfits can come from a rich
set of piecewise linear quadratic functions. Moreover, our analysis includes
polyhedral inequality constraints on the unknown impulse response. For any
formulation in this class, we show that interior point methods can be used to
solve the system identification problem, with complexity O(n^3) + O(mn^2) in
each iteration, where n and m are the numbers of impulse response coefficients
and measurements, respectively. The usefulness of the framework is illustrated
via a numerical experiment where output measurements are contaminated by
outliers.
Comment: 8 pages, 2 figures
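The first-order stable spline kernel referred to in the abstract (also known as the TC kernel) has the closed form K(i, j) = alpha**max(i, j) with 0 < alpha < 1; the geometric decay along the diagonal is what encodes BIBO stability of the impulse response. A minimal construction sketch:

```python
def stable_spline_kernel(n, alpha=0.8):
    """First-order stable spline (TC) kernel matrix for an impulse
    response of length n: K[i][j] = alpha**max(i, j), 0 < alpha < 1.
    alpha is the decay hyperparameter estimated from data; the value
    here is illustrative."""
    return [[alpha ** max(i, j) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

K = stable_spline_kernel(3, alpha=0.5)
# Diagonal decays geometrically: 0.5, 0.25, 0.125
print(K)
```

In the regularized least squares framework the estimated impulse response minimises the data misfit plus a penalty g' K^{-1} g built from this matrix; the paper's contribution is to let the misfit and penalty range over general piecewise linear quadratic functions while retaining efficient interior point solvers.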
Development of a Bankruptcy Prediction Model for the Banking Sector in Mozambique Using Linear Discriminant Analysis
In Mozambique there is no evidence of a bankruptcy prediction model developed in the national economic context. Yet, back in 2016, the national banking sector suffered a financial shock that resulted in Mozambique's Central Bank intervening in two banks (Moza Banco, S.A. and Nosso Banco, S.A.). This was a result of the deterioration of their financial and prudential indicators, although Mozambique had been adhering to the Basel Accords since 1994. The Basel Accords provide recommendations on banking sector supervision worldwide, with the aim of enhancing financial system stability. While the Accords do not predict bankruptcy, a prediction model can be used as an auxiliary tool to manage that risk, provided it is built in the national economic context. This paper develops a bankruptcy prediction model for Mozambique's banking sector through the linear discriminant analysis method, following two assumptions: (i) composition of the sample and (ii) robustness of the financial prediction indicators (capital structure, profitability, asset concentration and asset quality) from 2012 to 2020. The developed model attained an accuracy level of 84% one year before the Central Bank intervention (2015) with the entire population of 19 banks of the sector, which makes it recommendable as a risk management tool for this sector.
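The linear discriminant step can be sketched with a two-ratio Fisher discriminant on toy data (the ratios and figures below are illustrative stand-ins, not the paper's indicators or estimates): the discriminant direction is w = S_pooled^{-1}(mu_distressed - mu_healthy), and a bank is flagged when its score crosses the midpoint cutoff.

```python
def fisher_lda_2d(healthy, distressed):
    """Two-class Fisher discriminant on two financial ratios
    (hypothetical stand-ins for the paper's indicators). Returns a
    scoring function: scores above zero side with 'distressed'."""
    def mean(rows):
        n = len(rows)
        return [sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n]

    def pooled_cov(a, b, ma, mb):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for rows, m in ((a, ma), (b, mb)):
            for x, y in rows:
                dx, dy = x - m[0], y - m[1]
                s[0][0] += dx * dx; s[0][1] += dx * dy
                s[1][0] += dy * dx; s[1][1] += dy * dy
        k = len(a) + len(b) - 2
        return [[v / k for v in row] for row in s]

    m0, m1 = mean(healthy), mean(distressed)
    S = pooled_cov(healthy, distressed, m0, m1)
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    d = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * d[0] + inv[0][1] * d[1],
         inv[1][0] * d[0] + inv[1][1] * d[1]]
    # Midpoint cutoff between the two projected group means.
    cut = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return lambda x: w[0] * x[0] + w[1] * x[1] - cut

# Toy (capital ratio, profitability ratio) observations.
healthy = [(0.12, 0.9), (0.10, 0.8), (0.15, 0.85)]
distressed = [(0.02, 0.3), (0.01, 0.35), (0.03, 0.25)]
score = fisher_lda_2d(healthy, distressed)
print(score((0.02, 0.3)) > 0, score((0.12, 0.9)) > 0)  # True False
```

The paper applies this kind of discriminant to four indicator families over 2012-2020 and validates it against the banks actually intervened in 2016.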