Asymptotic expansions for high-contrast elliptic equations
In this paper, we present a high-order expansion for elliptic equations in
high-contrast media. The background conductivity is taken to be one and we
assume the medium contains high (or low) conductivity inclusions. We derive an
asymptotic expansion with respect to the contrast and provide a procedure to
compute the terms in the expansion. The computation of the expansion does not depend on the contrast, which is important for simulations: it allows one to avoid increased mesh resolution around high-conductivity features. This work is partly motivated by our earlier work in \cite{ge09_1}, where we design
efficient numerical procedures for solving high-contrast problems. These
multiscale approaches require local solutions and our proposed high-order
expansion can be used to approximate these local solutions inexpensively. In
the case of a large number of inclusions, the proposed analysis can help to
design localization techniques for computing the terms in the expansion. In the
paper, we present a rigorous analysis of the proposed high-order expansion and
estimate its remainder. We consider both high and low conductivity inclusions.
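For intuition, here is a sketch of the kind of expansion involved (the notation, including the contrast parameter $\eta$, is ours, not necessarily the paper's):

```latex
% Hedged sketch of a contrast expansion; eta (the contrast, eta >> 1) and
% the labels u_j are our notation, not taken from the paper.
\[
  u_\eta \;=\; u_0 + \eta^{-1} u_1 + \eta^{-2} u_2 + \dots + \eta^{-J} u_J + R_J ,
\]
% where each u_j solves a local problem that is independent of eta, so all
% terms can be precomputed once. A remainder estimate of the form
\[
  \bigl\| u_\eta - \textstyle\sum_{j=0}^{J} \eta^{-j} u_j \bigr\| \;\le\; C\,\eta^{-(J+1)},
\]
% with C independent of eta, is what makes the truncated expansion usable
% as an inexpensive surrogate for the local solutions.
```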
The cost of continuity: performance of iterative solvers on isogeometric finite elements
In this paper we study how the use of a more continuous set of basis
functions affects the cost of solving systems of linear equations resulting
from a discretized Galerkin weak form. Specifically, we compare performance of
linear solvers when discretizing using $C^0$ B-splines, which span traditional finite element spaces, and $C^{p-1}$ B-splines, which represent maximum continuity. We provide theoretical estimates for the increase in cost of the
matrix-vector product as well as for the construction and application of
black-box preconditioners. We accompany these estimates with numerical results
and study their sensitivity to various grid parameters such as element size and polynomial order of approximation $p$. Finally, we present timing results
for a range of preconditioning options for the Laplace problem. We conclude
that the matrix-vector product operation is at most $33p^2/8$ times more expensive for the more continuous space, although for moderately low $p$, this number is significantly reduced. Moreover, if static condensation is not employed, this number further reduces to at most a value of 8, even for high $p$. Preconditioning options can be considerably more expensive to set up, although this difference significantly decreases for some popular preconditioners such as Incomplete LU factorization.
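As a quick sanity check on that bound (a back-of-the-envelope script of ours, not code from the paper), one can tabulate $33p^2/8$ for a few polynomial orders:

```python
# Tabulate the worst-case matrix-vector cost factor 33*p^2/8 quoted above
# for maximum-continuity B-splines, for a few polynomial orders p.
for p in range(1, 6):
    factor = 33 * p ** 2 / 8
    print(f"p = {p}: mat-vec at most {factor:6.2f}x more expensive")
```

For $p = 1$ the bound is about $4.1$ and for $p = 5$ about $103$, so the factor is modest at low orders and only becomes severe at high $p$.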
Computational complexity and memory usage for multi-frontal direct solvers in structured mesh finite elements
The multi-frontal direct solver is the state-of-the-art algorithm for the
direct solution of sparse linear systems. This paper provides computational
complexity and memory usage estimates for the application of the multi-frontal
direct solver algorithm on linear systems resulting from B-spline-based
isogeometric finite elements, where the mesh is a structured grid. Specifically, we provide the estimates for systems resulting from polynomial $C^0$ B-spline spaces and compare them to those obtained using $C^{p-1}$ spaces.
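For intuition about where such estimates come from (a generic nested-dissection model under simplifying assumptions of ours, not the paper's derivation), one can count the dense-factorization work over the separator hierarchy of a structured 2D grid:

```python
# Hedged sketch: rough flop count for a multi-frontal / nested-dissection
# factorization on an n x n structured grid. A front with s unknowns is
# charged s^3 flops; the cross-shaped separator of an n x n block has ~2n
# unknowns. This generic model is ours, not the paper's estimates.
def multifrontal_flops(n: int) -> int:
    if n <= 4:
        return (n * n) ** 3    # dense-factor the whole small block
    sep = 2 * n                # cross separator splits block into 4 quadrants
    return sep ** 3 + 4 * multifrontal_flops(n // 2)

for n in (64, 128, 256, 512):
    print(f"n = {n:4d}  flops ~ {multifrontal_flops(n):.2e}")
# Successive ratios approach 8 = 2^3: the work grows like n^3 = N^{3/2}
# for N = n^2 unknowns, the classic 2D nested-dissection scaling.
```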
On Stochastic Error and Computational Efficiency of the Markov Chain Monte Carlo Method
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by an ensemble average over a sample set containing a
large number of correlated samples. These samples are selected in accordance
with the probability distribution function, known from the partition function
of the equilibrium state. As the stochastic error of the simulation results is
significant, it is desirable to understand the variance of the estimation by
ensemble average, which depends on the sample size (i.e., the total number of
samples in the set) and the sampling interval (i.e., cycle number between two
consecutive samples). Although large sample sizes reduce the variance, they
increase the computational cost of the simulation. For a given CPU time, the
sample size can be reduced greatly by increasing the sampling interval, while
having the corresponding increase in variance be negligible if the original
sampling interval is very small. In this work, we report a few general rules
that relate the variance to the sample size and the sampling interval. These
results are observed and confirmed numerically. These variance rules are
derived for the MCMC method but are also valid for the correlated samples
obtained using other Monte Carlo methods. The main contribution of this work
includes the theoretical proof of these numerical observations and the set of
assumptions that lead to them.
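To make the trade-off concrete, here is a self-contained toy experiment (an AR(1) chain stands in for correlated MCMC output; the parameters and setup are ours, not the paper's):

```python
# Toy illustration: variance of the ensemble average of correlated samples
# versus sampling interval, with an AR(1) chain standing in for MCMC output.
# The choices rho = 0.95 and the chain length are assumptions of ours.
import numpy as np

rng = np.random.default_rng(0)
rho, n_steps, n_chains = 0.95, 20_000, 200

# n_chains independent AR(1) chains: x_t = rho * x_{t-1} + noise
x = np.zeros((n_chains, n_steps))
for t in range(1, n_steps):
    x[:, t] = rho * x[:, t - 1] + rng.standard_normal(n_chains)

for interval in (1, 5, 25, 125):
    sub = x[:, ::interval]      # keep every `interval`-th sample
    means = sub.mean(axis=1)    # ensemble average of each chain
    print(f"interval = {interval:3d}  sample size = {sub.shape[1]:6d}  "
          f"var(mean) = {means.var():.4f}")
```

While the sampling interval stays below the correlation time of the chain, the variance of the ensemble average barely changes even though the sample size (and hence the storage and processing cost) drops sharply; once the interval exceeds the correlation time, the variance begins to grow in earnest.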
People Can Be So Fake: A New Dimension to Privacy and Technology Scholarship
This article updates the traditional discussion of privacy and technology, focused since the days of Warren and Brandeis on the capacity of technology to manipulate information. It proposes a novel dimension: the impact of anthropomorphic or social design on privacy.
Technologies designed to imitate people (through voice, animation, and natural language) are increasingly commonplace, showing up in our cars, computers, phones, and homes. A rich literature in communications and psychology suggests that we are hardwired to react to such technology as though a person were actually present. Social interfaces accordingly capture our attention, improve interactivity, and can free up our hands for other tasks.
At the same time, technologies that imitate people have the potential to implicate long-standing privacy values. One of the well-documented effects on users of interfaces and devices that emulate people is the sensation of being observed and evaluated. Their presence can alter our attitude, behavior, and physiological state. Widespread adoption of such technology may accordingly lessen opportunities for solitude and chill curiosity and self-development. These effects are all the more dangerous in that they cannot be addressed through traditional privacy protections such as encryption or anonymization. At the same time, the unique properties of social technology also present an opportunity to improve privacy, particularly online.
Against Notice Skepticism in Privacy (and Elsewhere)
What follows is an exploration of innovative new ways to deliver privacy notice. Unlike traditional notice that relies upon text or symbols to convey information, emerging strategies of “visceral” notice leverage a consumer’s very experience of a product or service to warn or inform. A regulation might require that a cell phone camera make a shutter sound so people know their photo is being taken. Or a law could incentivize websites to be more formal (as opposed to casual) wherever they collect personal information, as formality tends to place people on greater guard about what they disclose. The thesis of this Article is that, for a variety of reasons, experience as a form of privacy disclosure is worthy of further study before we give in to calls to abandon notice as a regulatory strategy in privacy and elsewhere.
In Part I, the Article examines the promise of radical new forms of experiential or visceral notice based in contemporary design psychology. This Part also compares and contrasts visceral notice to other regulatory strategies that seek to “nudge” or influence consumer or citizen behavior.
Part II discusses why the further exploration of visceral notice and other notice innovation is warranted. Part III explores potential challenges to visceral notice—for instance, from the First Amendment—and lays out some thoughts on the best regulatory context for requiring or incentivizing visceral notice. In particular, this Part highlights the potential of safe harbors and goal-based rules, i.e., rules that look to the outcome of a notice strategy rather than dictate precisely how notice must be delivered.
This Article uses online privacy as a case study for several reasons. First, notice is among the only affirmative obligations that companies face with respect to privacy—online privacy is a quintessential notice regime. Second, the Internet is a context in which notice is widely understood to have failed, but where the nature of digital services means that viable regulatory alternatives are few and poor. Finally, the fact that websites are entirely designed environments furnishes unique opportunities for the sorts of untraditional interventions explored in Part I.
The Boundaries of Privacy Harm
Just as a burn is an injury caused by heat, so is privacy harm a unique injury with specific boundaries and characteristics. This Essay describes privacy harm as falling into two related categories. The subjective category of privacy harm is the perception of unwanted observation. This category describes unwelcome mental states—anxiety, embarrassment, fear—that stem from the belief that one is being watched or monitored. Examples of subjective privacy harms include everything from a landlord eavesdropping on his tenants to generalized government surveillance.
The objective category of privacy harm is the unanticipated or coerced use of information concerning a person against that person. These are negative, external actions justified by reference to personal information. Examples include identity theft, the leaking of classified information that reveals an undercover agent, and the use of a drunk-driving suspect’s blood as evidence against him. The subjective and objective categories of privacy harm are distinct but related. Just as assault is the apprehension of battery, so is the perception of unwanted observation largely an apprehension of information-driven injury. The categories represent, respectively, the anticipation and consequence of a loss of control over personal information.
This approach offers several advantages. It uncouples privacy harm from privacy violations, demonstrating that no person need commit a privacy violation for privacy harm to occur (and vice versa). It creates a “limiting principle” capable of revealing when another value—autonomy or equality, for instance—is more directly at stake. It also creates a “rule of recognition” that permits the identification of a privacy harm when no other harm is apparent. Finally, this approach permits the measurement and redress of privacy harm in novel ways.