
    Data Assimilation by Conditioning on Future Observations

    Conventional recursive filtering approaches, designed to quantify the state of an evolving uncertain dynamical system with intermittent observations, use a sequence of (i) an uncertainty propagation step followed by (ii) a step where the associated data are assimilated using Bayes' rule. In this paper we switch the order of the steps to: (i) one-step-ahead data assimilation followed by (ii) uncertainty propagation. This route leads to a class of filtering algorithms named "smoothing filters". For a system driven by random noise, our proposed methods require the probability distribution of the driving noise after the assimilation to be biased by a nonzero mean. The system noise, conditioned on future observations, in turn pushes the filtering solution forward in time closer to the true state, and so yields a more accurate approximate solution to the state estimation problem.
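
    To make the reordering concrete, below is a minimal sketch in the scalar linear-Gaussian setting (not the paper's algorithm; the model parameters F, H, Q, R and the observation value are assumptions for the demo). Conditioning the current state and the driving noise jointly on the next observation gives the noise a nonzero mean, and propagating with that biased noise reproduces the conventional predict-then-update Kalman step.

        import numpy as np

        # Scalar linear-Gaussian model (illustrative parameters, not from the paper):
        #   x_{k+1} = F*x_k + w_k,  w_k ~ N(0, Q);   y_{k+1} = H*x_{k+1} + v,  v ~ N(0, R)
        F, H, Q, R = 0.9, 1.0, 0.5, 0.2
        m, P = 0.0, 1.0            # current filtering mean and variance for x_k
        y_next = 1.3               # the future observation we condition on

        # (i) One-step-ahead assimilation: condition (x_k, w_k) jointly on y_{k+1}.
        S = H * (F * P * F + Q) * H + R        # innovation variance
        innov = y_next - H * F * m
        x_cond = m + (P * F * H / S) * innov   # smoothed estimate of x_k given y_{k+1}
        w_mean = (Q * H / S) * innov           # the driving noise acquires a nonzero mean

        # (ii) Uncertainty propagation with the biased noise.
        m_next = F * x_cond + w_mean
        P_pred = F * P * F + Q
        P_next = P_pred - (P_pred * H / S) * H * P_pred

        # Conventional predict-then-update Kalman step, for comparison.
        K = P_pred * H / S
        m_kf, P_kf = F * m + K * innov, (1.0 - K * H) * P_pred

        print(w_mean)                                       # nonzero bias of the conditioned noise
        print(np.allclose([m_next, P_next], [m_kf, P_kf]))  # True: identical posteriors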

    Receivership : a coordinated strategy to stabilize troubled properties

    Under the weight of municipal debt burdens, coupled with declining real estate prices and the US financial crisis, municipalities are looking for novel, cost-effective approaches to address abandoned, blighted, and/or foreclosed properties that threaten the quality of life of their communities. Receivership, the use of statutory power to seize buildings and place properties under the control of a judicially supervised 'receiver', can be an effective tool to tackle the problem of troubled properties that repeatedly violate safety and sanitary codes. Despite its potential, receivership requires significant coordination, as well as a committed team, to carry out the intricate process of running a successful receivership strategy.
    Subjects: Foreclosure - Massachusetts; Housing policy - Massachusetts

    RowHammer: Reliability Analysis and Security Implications

    As process technology scales down to smaller dimensions, DRAM chips become more vulnerable to disturbance, a phenomenon in which different DRAM cells interfere with each other's operation. For the first time in the academic literature, our ISCA paper exposes the existence of disturbance errors in commodity DRAM chips that are sold and used today. We show that repeatedly reading from the same address can corrupt data in nearby addresses. More specifically: when a DRAM row is opened (i.e., activated) and closed (i.e., precharged) repeatedly (i.e., hammered), it can induce disturbance errors in adjacent DRAM rows. This failure mode is popularly called RowHammer. We tested 129 DRAM modules manufactured within the past six years (2008-2014) and found 110 of them to exhibit RowHammer disturbance errors, the earliest of which dates back to 2010. In particular, all modules from the past two years (2012-2013) were vulnerable, which implies that the errors are a recent phenomenon affecting more advanced generations of process technology. Importantly, disturbance errors pose an easily exploitable security threat, since they are a breach of memory protection: accesses to one page (mapped to one row) modify the data stored in another page (mapped to an adjacent row).
    Comment: This is the summary of the paper titled "Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors", which appeared in ISCA in June 2014.

    A Hierarchical Bayesian Framework for Constructing Sparsity-inducing Priors

    Variable selection techniques have become increasingly popular amongst statisticians due to the growing number of regression and classification applications involving high-dimensional data where we expect some predictors to be unimportant. In this context, Bayesian variable selection techniques involving Markov chain Monte Carlo exploration of the posterior distribution over models can be prohibitively computationally expensive, so attention has turned to quasi-Bayesian approaches such as maximum a posteriori (MAP) estimation using priors that induce sparsity in such estimates. We focus on this latter approach, expanding on the hierarchies proposed to date to provide a Bayesian interpretation and generalization of state-of-the-art penalized optimization approaches, while simultaneously providing a natural way to include prior information about parameters within this framework. We give examples of how to use this hierarchy to compute MAP estimates for linear and logistic regression, as well as sparse precision-matrix estimates in Gaussian graphical models. In addition, an adaptive group lasso method is derived using the framework.
    Comment: Submitted for publication; corrected typo.
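
    As a minimal point of reference for the penalized-optimization view that the hierarchy generalizes (a generic lasso sketch on synthetic data, not the paper's method; the data, penalty lam, and iteration count are illustrative assumptions): MAP estimation under a Laplace prior reduces to an l1-penalized least-squares problem, which iterative soft-thresholding (ISTA) solves.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic sparse regression instance (illustrative, not from the paper).
        n, p = 50, 20
        X = rng.normal(size=(n, p))
        beta_true = np.zeros(p)
        beta_true[:3] = [2.0, -1.5, 1.0]
        y = X @ beta_true + 0.1 * rng.normal(size=n)

        # MAP under a Laplace prior <=> lasso: minimize 0.5*||y - X b||^2 + lam*||b||_1.
        lam = 5.0
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, with L the gradient's Lipschitz constant
        b = np.zeros(p)
        for _ in range(500):                     # ISTA: gradient step, then soft-threshold
            z = b - step * (X.T @ (X @ b - y))
            b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

        print(np.nonzero(b)[0])                  # the prior zeroes out most coordinates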

    A Bayesian space–time model for clustering areal units based on their disease trends

    Population-level disease risk across a set of non-overlapping areal units varies in space and time, and a large research literature has developed methodology for identifying clusters of areal units exhibiting elevated risks. However, almost no research has extended the clustering paradigm to identify groups of areal units exhibiting similar temporal disease trends. We present a novel Bayesian hierarchical mixture model for achieving this goal, with inference based on a Metropolis-coupled Markov chain Monte Carlo ((MC)^3) algorithm. The effectiveness of the (MC)^3 algorithm compared to a standard Markov chain Monte Carlo implementation is demonstrated in a simulation study, and the methodology is motivated by two important case studies in the United Kingdom. The first concerns the impact on measles susceptibility of the discredited paper linking the measles, mumps, and rubella vaccination to an increased risk of autism, and investigates whether all areas in Scotland were equally affected. The second concerns respiratory hospitalizations and investigates which parts of Glasgow have shown increased, decreased, or no change in risk over a 10-year period.
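
    For readers unfamiliar with Metropolis-coupled MCMC, here is a generic hedged sketch (a toy bimodal target, not the paper's mixture model; the temperatures, target, and iteration count are assumptions for the demo): several chains run at different temperatures, and occasional state swaps let the cold chain jump between well-separated modes that would trap a standard sampler.

        import numpy as np

        rng = np.random.default_rng(0)

        def log_target(x):
            # Toy bimodal posterior (a stand-in, not the paper's model).
            return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

        betas = [1.0, 0.5, 0.25, 0.1]   # inverse temperatures; chain 0 is the cold chain
        x = np.zeros(len(betas))        # current state of each tempered chain
        cold = []

        for _ in range(20000):
            # Within-chain Metropolis step at each temperature.
            for j, beta in enumerate(betas):
                prop = x[j] + rng.normal()
                if np.log(rng.uniform()) < beta * (log_target(prop) - log_target(x[j])):
                    x[j] = prop
            # Propose swapping the states of a random adjacent pair of chains.
            j = rng.integers(len(betas) - 1)
            log_acc = (betas[j] - betas[j + 1]) * (log_target(x[j + 1]) - log_target(x[j]))
            if np.log(rng.uniform()) < log_acc:
                x[j], x[j + 1] = x[j + 1], x[j]
            cold.append(x[0])           # only the beta = 1 chain targets the posterior

        print(np.mean(np.array(cold) > 0))  # close to 0.5: both modes are visited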

    Lines-of-inquiry and sources of evidence in work-based research

    There is synergy between the investigative practices of police detectives and social scientists, including work-based researchers. Both develop lines-of-inquiry and draw on multiple sources of evidence in order to make inferences about people, trends, and phenomena. However, the principles associated with lines-of-inquiry and sources of evidence have not so far been examined in relation to work-based research methods, which are often unexplored or ill-defined in the published literature. We explore this gap by examining the various direct and indirect lines-of-inquiry and the main sources of primary and secondary evidence used in work-based research, which is especially relevant because some work-based researchers are also police detectives. A clearer understanding of these intersections will be useful in emerging professional contexts where the work-based researcher, the detective, and the social scientist cohere in one person and their research project. The case we examined was a Professional Studies programme at a university in Australia, which has many police detectives doing work-based research, and from their experience we conclude that there is synergy between work-based research and lines-of-inquiry. Specifically, in the context of research methods, we identify seven sources of evidence: 1) creative, unstructured, and semi-structured interviews; 2) structured interviews; 3) consensus group methods; 4) surveys; 5) documentation and archives; 6) direct observations and participant observations; and 7) physical or cultural artefacts. We show their methodological features related to data and method type, reliability, validity, and types of analysis, along with their respective advantages and disadvantages. This study thereby unpacks and isolates those characteristics of work-based research that are relevant to a growing body of literature on the messy, co-produced, and wicked problems of private companies, government agencies, and non-government organisations, and the research methods used to investigate them.

    From Laser Induced Line Narrowing To Electromagnetically Induced Transparency: Closed System Analysis

    The laser-induced line narrowing effect, discovered more than thirty years ago, can also be applied to recent studies in high-resolution spectroscopy based on electromagnetically induced transparency. In this paper we first present a general form of the transmission width of electromagnetically induced transparency in a homogeneously broadened medium. We then analyze a Doppler-broadened medium by using a Lorentzian function as the atomic velocity distribution. The dependence of the transmission linewidth on the driving field intensity is discussed and compared to the laser-induced line narrowing effect. This dependence can be characterized by a parameter which can be regarded as "the degree of optical pumping".
    Comment: 8 pages, 5 figures.
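
    For context, a standard textbook three-level Lambda-system expression (not the general form derived in the paper, and correct only up to convention-dependent factors of two) gives the weak-probe coherence as

        \rho_{ab}(\delta) \propto \frac{\delta + i\gamma_{bc}}{(\delta + i\gamma_{bc})(\delta + i\Gamma) - |\Omega_c|^2/4},

    where \delta is the probe detuning, \Gamma the excited-state decay rate, \gamma_{bc} the ground-state decoherence rate, and \Omega_c the driving Rabi frequency. The transparency window around \delta = 0 then has a width of order \gamma_{bc} + |\Omega_c|^2/\Gamma, which grows with the driving intensity, the dependence the paper compares to laser-induced line narrowing.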