
    Contextual Object Detection with a Few Relevant Neighbors

    A natural way to improve the detection of objects is to consider the contextual constraints imposed by the detection of additional objects in a given scene. In this work, we exploit the spatial relations between objects in order to improve detection capacity, as well as analyze various properties of the contextual object detection problem. To precisely calculate context-based probabilities of objects, we developed a model that examines the interactions between objects in an exact probabilistic setting, in contrast to previous methods that typically utilize approximations based on pairwise interactions. Such a scheme is facilitated by the realistic assumption that the existence of an object in any given location is influenced by only a few informative locations in space. Based on this assumption, we suggest a method for identifying these relevant locations and integrating them into a mostly exact calculation of probability based on their raw detector responses. This scheme is shown to improve detection results and provides unique insights into the process of contextual inference for object detection. We show that it is generally difficult to learn that a particular object reduces the probability of another, and that in cases where the context and detector strongly disagree this learning becomes virtually impossible for the purposes of improving the results of an object detector. Finally, we demonstrate improved detection results through the use of our approach as applied to the PASCAL VOC and COCO datasets.
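The core idea of conditioning on only a few informative context locations can be illustrated with a small sketch (not the authors' implementation): candidate context variables are ranked by empirical mutual information with the target detection, and the context-conditioned probability is then computed exactly by filtering on the selected neighbors. The binary presence-indicator data layout and all function names are illustrative assumptions.

```python
import numpy as np

def top_k_informative(labels, target_idx, k=2):
    """Rank candidate context variables by empirical mutual information
    with the target. labels: (n_samples, n_vars) binary presence array."""
    n, d = labels.shape
    t = labels[:, target_idx]
    scores = []
    for j in range(d):
        if j == target_idx:
            scores.append(-np.inf)  # never pick the target itself
            continue
        mi = 0.0
        for a in (0, 1):
            for b in (0, 1):
                p_ab = np.mean((t == a) & (labels[:, j] == b))
                p_a = np.mean(t == a)
                p_b = np.mean(labels[:, j] == b)
                if p_ab > 0:
                    mi += p_ab * np.log(p_ab / (p_a * p_b))
        scores.append(mi)
    return np.argsort(scores)[::-1][:k]

def context_prob(labels, target_idx, neighbor_idx, neighbor_vals):
    """Exact empirical P(target = 1 | selected neighbors take given values),
    computed by direct filtering rather than a pairwise approximation."""
    mask = np.all(labels[:, neighbor_idx] == np.asarray(neighbor_vals), axis=1)
    return labels[mask, target_idx].mean()
```

Selecting only k neighbors keeps the exact conditional computable: the enumeration cost grows with the number of conditioning variables, not with the full scene.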

    Succinic semialdehyde dehydrogenase deficiency: Lessons from mice and men

    Succinic semialdehyde dehydrogenase (SSADH) deficiency, a disorder of GABA degradation with subsequent elevations in brain GABA and GHB, is a neurometabolic disorder with intellectual disability, epilepsy, hypotonia, ataxia, sleep disorders, and psychiatric disturbances. Neuroimaging reveals increased T2-weighted MRI signal usually affecting the globus pallidus, cerebellar dentate nucleus, and subthalamic nucleus, and often cerebral and cerebellar atrophy. EEG abnormalities are usually generalized spike-wave, consistent with a predilection for generalized epilepsy. The murine phenotype is characterized by failure to thrive, progressive ataxia, and a transition from generalized absence to tonic-clonic to ultimately fatal convulsive status epilepticus. Binding and electrophysiological studies demonstrate use-dependent downregulation of GABA(A) and (B) receptors in the mutant mouse. Translational human studies similarly reveal downregulation of GABAergic activity in patients, utilizing flumazenil-PET and transcranial magnetic stimulation for GABA(A) and (B) activity, respectively. Sleep studies reveal prolonged REM latencies and a diminished percentage of stage REM. An ad libitum ketogenic diet was reported as effective in the mouse model, with unclear applicability to the human condition. Acute application of SGS-742, a GABA(B) antagonist, leads to improvement in epileptiform activity on electrocorticography. Promising mouse data using compounds available for clinical use, including taurine and SGS-742, form the framework for human trials.

    Discovering a junction tree behind a Markov network by a greedy algorithm

    In an earlier paper we introduced a special kind of k-width junction tree, called the k-th order t-cherry junction tree, in order to approximate a joint probability distribution. The approximation is best when the Kullback-Leibler divergence between the true joint probability distribution and the approximating one is minimal. Finding the best approximating k-width junction tree is NP-complete for k>2. In our earlier paper we also proved that the best approximating k-width junction tree can be embedded into a k-th order t-cherry junction tree. We introduce a greedy algorithm that yields very good approximations in reasonable computing time. In this paper we prove that if the underlying Markov network fulfills certain requirements then our greedy algorithm is able to find the true probability distribution or its best approximation in the family of k-th order t-cherry tree probability distributions. Our algorithm uses just the k-th order marginal probability distributions as input. We compare the results of the greedy algorithm proposed in this paper with those of the greedy algorithm proposed by Malvestuto in 1991. (Comment: The paper was presented at VOCAL 2010 in Veszprem, Hungary.)
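For the lowest-order case the construction essentially reduces to the classical Chow-Liu dependence tree, so the flavour of the greedy approach can be sketched as a maximum-weight spanning tree over pairwise mutual information. This is a simplified stand-in for illustration, not the authors' k-th order t-cherry algorithm; binary variables and the function name are assumptions.

```python
import numpy as np

def chow_liu_tree(samples):
    """Greedy (Kruskal-style) maximum spanning tree over pairwise mutual
    information of binary variables; returns a list of (i, j) tree edges."""
    n, d = samples.shape

    def mi(i, j):
        total = 0.0
        for a in (0, 1):
            for b in (0, 1):
                p_ab = np.mean((samples[:, i] == a) & (samples[:, j] == b))
                p_a = np.mean(samples[:, i] == a)
                p_b = np.mean(samples[:, j] == b)
                if p_ab > 0:
                    total += p_ab * np.log(p_ab / (p_a * p_b))
        return total

    # Greedily add the highest-MI edge that does not close a cycle,
    # tracked with a union-find structure.
    edges = sorted(((mi(i, j), i, j) for i in range(d)
                    for j in range(i + 1, d)), reverse=True)
    parent = list(range(d))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The greedy step mirrors the abstract's point: only low-order marginals (here, pairwise tables) are needed as input.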

    Modelling with non-stratified chain event graphs

    © 2019, Springer Nature Switzerland AG. Chain Event Graphs (CEGs) are recent probabilistic graphical modelling tools that have proved successful in modelling scenarios with context-specific independencies. Although the theory underlying CEGs supports appropriate representation of structural zeroes, the literature so far does not provide an adaptation of the vanilla CEG methods for a real-world application presenting structural zeroes, the so-called non-stratified CEG class. To illustrate these methods, we present a non-stratified CEG representing a public health intervention designed to reduce the risk and rate of falling in the elderly. We then compare the CEG model to the more conventional Bayesian Network model when applied to this setting.

    Cytogerontology since 1881: A reappraisal of August Weismann and a review of modern progress

    Cytogerontology, the science of cellular ageing, originated in 1881 with the prediction by August Weismann that the somatic cells of higher animals have limited division potential. Weismann's prediction was derived by considering the role of natural selection in regulating the duration of an organism's life. For various reasons, Weismann's ideas on ageing fell into neglect following his death in 1914, and cytogerontology has only reappeared as a major research area following the demonstration by Hayflick and Moorhead in the early 1960s that diploid human fibroblasts are restricted to a finite number of divisions in vitro. In this review we give a detailed account of Weismann's theory, and we reveal that his ideas were both more extensive in their scope and more pertinent to current research than is generally recognised. We also appraise the progress which has been made over the past hundred years in investigating the causes of ageing, with particular emphasis being given to (i) the evolution of ageing, and (ii) ageing at the cellular level. We critically assess the current state of knowledge in these areas and recommend a series of points as primary targets for future research.

    On the relationship between individual and population health

    The relationship between individual and population health is partially built on the broad dichotomization of medicine into clinical medicine and public health. Potential drawbacks of current views include seeing both individual and population health as absolute and independent concepts. I will argue that the relationship between individual and population health is largely relative and dynamic. Their interrelated dynamism derives from a causally defined life course perspective on health determination, starting from an individual’s conception through growth, development and participation in the collective until death, all seen within the context of an adaptive society. Indeed, it will become clear that neither individual nor population health is identifiable or even definable without informative contextualization within the other. For instance, a person’s health cannot be seen in isolation but must be placed in a rich contextual web, such as the socioeconomic circumstances and other health determinants of where they were conceived, born and bred, and how they shaped and were shaped by their environment and communities, especially given the prevailing population health exposures over their lifetime. We cannot discuss the “what” and “how much” of individual and population health until we know the cumulative trajectories of both, using appropriate causal language.

    Pediatric appendicitis rupture rate: a national indicator of disparities in healthcare access

    BACKGROUND: The U.S. National Healthcare Disparities Report is a recent effort to measure and monitor racial and ethnic disparities in health and healthcare. The Report is a work in progress and includes few indicators specific to children. An indicator worthy of consideration is racial/ethnic differences in the rate of bad outcomes for pediatric acute appendicitis. Bad outcomes for this condition are indicative of poor access to healthcare, which is amenable to social and healthcare policy changes. METHODS: We analyzed the KID Inpatient Database, a nationally representative sample of pediatric hospitalizations, to compare rates of appendicitis rupture between white, African American, Hispanic and Asian children. We ran weighted logistic regression models to obtain national estimates of the relative odds of rupture for the four groups, adjusted for developmental, biological, socioeconomic, health services and hospital factors that might influence disease outcome. RESULTS: Rupture was a much more burdensome outcome than timely surgery and rupture avoidance. Rupture cases had 97% higher hospital charges and 175% longer hospital stays than non-rupture cases on average. These burdens disproportionately affected minority children, who had 24%–38% higher odds of appendicitis rupture than white children, adjusting for age and gender. These differences were reduced, but remained significant, after adjusting for other factors. CONCLUSION: The racial/ethnic disparities in pediatric appendicitis outcome are large and preventable with timely diagnosis and surgery for all children. Furthermore, estimating this disparity using the KID survey is a relatively straightforward process. Therefore pediatric appendicitis rupture rate is a good candidate for inclusion in the National Healthcare Disparities Report. As with most other health and healthcare disparities, efforts to reduce disparities in income, wealth and access to care will most likely improve the odds of a favorable outcome for this condition as well.
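The adjusted-odds-ratio analysis described above can be sketched with a small weighted logistic regression fitted by Newton-Raphson. This is a generic illustration on simulated data, not the KID analysis; the variable names, the simulated effect size, and the uniform weights are all assumptions.

```python
import numpy as np

def fit_weighted_logit(X, y, w, iters=25):
    """Weighted logistic regression via Newton-Raphson.
    X: (n, p) design matrix with an intercept column, y: 0/1 outcomes,
    w: per-observation (e.g. survey) weights."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))                     # weighted score
        H = X.T @ (X * (w * p * (1 - p))[:, None])     # weighted information
        beta += np.linalg.solve(H, grad)
    return beta

# Simulated example: a binary group indicator raises the log-odds of a
# bad outcome by 0.7, i.e. a true odds ratio of roughly 2.
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)
logits = -1.0 + 0.7 * group
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)
X = np.column_stack([np.ones(n), group])
beta = fit_weighted_logit(X, y, np.ones(n))
odds_ratio = np.exp(beta[1])
```

Exponentiating the fitted coefficient recovers the group's adjusted odds ratio; in a survey setting the weights `w` would carry the sampling design.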

    Predictive coding and representationalism

    According to the predictive coding theory of cognition (PCT), brains are predictive machines that use perception and action to minimize prediction error, i.e. the discrepancy between bottom-up, externally-generated sensory signals and top-down, internally-generated sensory predictions. Many consider PCT to have an explanatory scope that is unparalleled in contemporary cognitive science and see in it a framework that could potentially provide us with a unified account of cognition. It is also commonly assumed that PCT is a representational theory of sorts, in the sense that it postulates that our cognitive contact with the world is mediated by internal representations. However, the exact sense in which PCT is representational remains unclear; neither is it clear that it deserves such status, that is, whether it really invokes structures that are truly and nontrivially representational in nature. In the present article, I argue that the representational pretensions of PCT are completely justified. This is because the theory postulates cognitive structures, namely action-guiding, detachable, structural models that afford representational error detection, that play genuinely representational functions within the cognitive system.
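The error-minimization dynamics at the heart of PCT can be made concrete with a deliberately tiny settling loop: a single latent estimate generates a top-down prediction and is nudged by the bottom-up prediction error until the two agree. This is a toy gradient-descent illustration of the idea only, not a model from the predictive-coding literature; all names and parameters are assumptions.

```python
def settle(sensory, generative_weight, mu0=0.0, lr=0.1, steps=200):
    """Minimal predictive-coding-style inference: a latent estimate mu
    produces a top-down prediction generative_weight * mu, and descends
    the squared bottom-up prediction error until prediction matches input."""
    mu = mu0
    for _ in range(steps):
        prediction = generative_weight * mu
        error = sensory - prediction            # bottom-up prediction error
        mu += lr * generative_weight * error    # gradient step on 0.5*error**2
    return mu
```

When the loop converges, the top-down prediction reproduces the sensory signal and the prediction error vanishes, which is the sense in which perception is cast as inference in PCT.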

    Evolutionary approaches for the reverse-engineering of gene regulatory networks: A study on a biologically realistic dataset

    Background: Inferring gene regulatory networks from data requires the development of algorithms devoted to structure extraction. When only static data are available, gene interactions may be modelled by a Bayesian Network (BN) that represents the presence of direct interactions from regulators to regulees by conditional probability distributions. We used enhanced evolutionary algorithms to stochastically evolve a set of candidate BN structures and found the model that best fits the data without prior knowledge. Results: We proposed various evolutionary strategies suitable for the task and tested our choices using simulated data drawn from a given bio-realistic network of 35 nodes, the so-called insulin network, which has been used in the literature for benchmarking. We assessed the inferred models against this reference to obtain statistical performance results. We then compared the performance of evolutionary algorithms using two kinds of recombination operators that operate at different scales in the graphs. We introduced a niching strategy that reinforces diversity throughout the population and avoids trapping the algorithm in a single local minimum in the early steps of learning. We show the limited effect of the mutation operator when niching is applied. Finally, we compared our best evolutionary approach with various well-known learning algorithms (MCMC, K2, greedy search, TPDA, MMHC) devoted to BN structure learning. Conclusion: We studied the behaviour of an evolutionary approach enhanced by niching for the learning of gene regulatory networks with BNs. We show that this approach outperforms classical structure learning methods in elucidating the original model. These results were obtained for the learning of a bio-realistic network and, more importantly, on various small datasets. This is a suitable approach for learning transcriptional regulatory networks from real datasets without prior knowledge.
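The overall evolutionary scheme (score candidate structures, keep the fittest, mutate) can be sketched in miniature. This toy restricts candidates to upper-triangular adjacency matrices so acyclicity holds by construction, uses a BIC-style score, and omits recombination and niching, so it is a simplification of the paper's operators; all names are assumptions.

```python
import numpy as np
from itertools import product

def loglik(data, adj):
    """Log-likelihood of binary data under the DAG `adj` (adj[i, j] = 1
    means i -> j), using empirical conditional frequencies as parameters."""
    n, d = data.shape
    ll = 0.0
    for j in range(d):
        parents = [i for i in range(d) if adj[i, j]]
        for vals in product((0, 1), repeat=len(parents)):
            mask = np.ones(n, dtype=bool)
            for p, v in zip(parents, vals):
                mask &= data[:, p] == v
            m = int(mask.sum())
            if m == 0:
                continue
            ones = int(data[mask, j].sum())
            for cnt in (ones, m - ones):
                if cnt > 0:
                    ll += cnt * np.log(cnt / m)
    return ll

def bic(data, adj):
    """Fit minus a complexity penalty per conditional-table parameter."""
    n, d = data.shape
    n_params = sum(2 ** int(adj[:, j].sum()) for j in range(d))
    return loglik(data, adj) - 0.5 * np.log(n) * n_params

def evolve(data, pop_size=20, gens=40, seed=0):
    """Toy evolutionary search over DAG structures with truncation
    selection and single-edge-flip mutation."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    pop = [np.triu(rng.integers(0, 2, (d, d)), 1) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: -bic(data, a))
        survivors = pop[: pop_size // 2]            # keep the fittest half
        children = []
        for a in survivors:
            c = a.copy()
            i, j = sorted(rng.choice(d, size=2, replace=False))
            c[i, j] ^= 1                            # mutation: flip one edge
            children.append(c)
        pop = survivors + children
    return max(pop, key=lambda a: bic(data, a))
```

A niching strategy, as in the paper, would additionally penalize candidates that crowd the same region of structure space so that diversity survives selection.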