A D.C. Programming Approach to the Sparse Generalized Eigenvalue Problem
In this paper, we consider the sparse eigenvalue problem wherein the goal is
to obtain a sparse solution to the generalized eigenvalue problem. We achieve
this by constraining the cardinality of the solution to the generalized
eigenvalue problem and obtain sparse principal component analysis (PCA), sparse
canonical correlation analysis (CCA) and sparse Fisher discriminant analysis
(FDA) as special cases. Unlike the $\ell_1$-norm approximation to the
cardinality constraint, which previous methods have used in the context of
sparse PCA, we propose a tighter approximation that is related to the negative
log-likelihood of a Student's t-distribution. The problem is then framed as a
d.c. (difference of convex functions) program and is solved as a sequence of
convex programs by invoking the majorization-minimization method. The resulting
algorithm is proved to exhibit \emph{global convergence} behavior, i.e., for
any random initialization, the sequence (subsequence) of iterates generated by
the algorithm converges to a stationary point of the d.c. program. The
performance of the algorithm is empirically demonstrated on both sparse PCA
(finding few relevant genes that explain as much variance as possible in a
high-dimensional gene dataset) and sparse CCA (cross-language document
retrieval and vocabulary selection for music retrieval) applications.
Comment: 40 pages
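The majorization-minimization idea described above can be illustrated informally. The sketch below is a heuristic, not the paper's exact d.c. program: the concave log-surrogate penalty is linearized at the current iterate, turning each step into a reweighted soft-thresholding update followed by projection onto the unit sphere. The function name and the penalty parameters `rho` and `eps` are hypothetical choices for illustration.

```python
import numpy as np

def sparse_pca_mm(A, rho=0.5, eps=1e-2, iters=100):
    """Heuristic one-vector sparse PCA via majorization-minimization.

    Approximately maximizes x'Ax - rho * sum(log(1 + |x_i|/eps)) over the
    unit sphere; a sketch of the MM principle, not the paper's algorithm.
    """
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        w = rho / (np.abs(x) + eps)      # weights from linearizing the concave penalty
        g = A @ x                        # ascent direction for the quadratic term
        x_new = np.sign(g) * np.maximum(np.abs(g) - w, 0.0)  # reweighted soft threshold
        nrm = np.linalg.norm(x_new)
        if nrm == 0:
            break
        x_new /= nrm                     # project back onto the unit sphere
        if np.linalg.norm(x_new - x) < 1e-10:
            x = x_new
            break
        x = x_new
    return x

# A covariance matrix with one dominant sparse direction: the recovered
# loading vector should concentrate on coordinate 0.
A = np.eye(4)
A[0, 0] += 10.0
x = sparse_pca_mm(A, rho=0.5)
```

The reweighting step is what distinguishes the log surrogate from a plain ℓ₁ penalty: small coordinates receive large weights and are driven to exactly zero, while large coordinates are penalized less.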
Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review
Community-based adaptation costing: An integrated framework for the participatory costing of community-based adaptations to climate change in agriculture
Understanding the cost associated with climate change adaptation interventions in agriculture is
important for mobilizing institutional support and providing timely resources to improve
resilience and adaptive capacities. Top-down national estimates of adaptation costs carry a risk
of mismatching the availability of funds with what is actually required on the ground.
Consequently, global and national policies require credible evidence from the local level, taking
into account microeconomic dynamics and community-appropriate adaptation strategies. These
bottom-up studies will improve adaptation planning (the how) and will also serve to inform and
validate top-down assessments of the total costs of adaptation (the how much).
Participatory Social Return on Investment (PSROI) seeks to provide a pragmatic, local-level
planning and costing framework suitable for replication by government and civil society
organizations. The ‘PSROI Framework’ is designed around a participatory workshop for
prioritizing and planning community-based adaptation (CBA) strategies, followed by an
analysis of the economic, social and environmental impacts of the priority measures using a
novel cost-benefit analysis framework.
The PSROI framework has been applied in three separate pilot initiatives in Kochiel and
Othidhe, Kenya, and Dodji, Senegal. This working paper seeks to outline the theoretical and
methodological foundations of the PSROI framework, provide case-study results from each
pilot study, and assess the strengths and weaknesses of the framework according to its
robustness, effectiveness, and scalability.
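The core arithmetic of a social-return-on-investment costing exercise is a ratio of discounted benefit and cost streams. The following is a minimal sketch of that calculation under assumed numbers; the function names, the discount rate, and the streams are hypothetical and not taken from the PSROI pilots.

```python
def present_value(stream, rate):
    # Discount a yearly stream (year 1, year 2, ...) back to today
    return sum(x / (1.0 + rate) ** t for t, x in enumerate(stream, start=1))

def sroi_ratio(benefit_stream, cost_stream, rate=0.1):
    # SROI ratio = present value of benefits / present value of costs;
    # a ratio above 1 indicates the adaptation pays back its cost.
    return present_value(benefit_stream, rate) / present_value(cost_stream, rate)

# Hypothetical example: a measure costing 100 up front (booked in year 1)
# yielding growing benefits over three years.
ratio = sroi_ratio([40.0, 60.0, 60.0], [100.0], rate=0.1)
```

In a participatory setting, the benefit stream would also monetize the social and environmental impacts that communities prioritize, which is where the framework departs from a conventional cost-benefit analysis.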
Property as the Law of Things
The New Private Law takes seriously the need for baselines in general and the traditional ones furnished by the law in particular. One such baseline is the “things” of property. The bundle of rights picture popularized by the Legal Realists downplayed things and promoted the expectation that features of property are detachable and tailorable without limit. The bundle picture captures too much to be a theory. By contrast, the information cost, or architectural, theory proposed here captures how the features of property work together to achieve property’s purposes. Drawing on Herbert Simon’s notions of nearly decomposable systems and modularity, the Article shows how property employs a thing-based exclusion-governance architecture to manage the complexity of the interactions between legal actors. Modular property first breaks this system of interactions into components, and this begins with defining the modular things of property. Property then specifies the interface between the modular components of property through governance strategies that make more direct reference to uses and purposes, as in the law of nuisance, covenants, and zoning. In contrast to the bundle of rights picture, the modular theory captures how a great number of features of property – ranging from in-rem-ness, the right to exclude, and the residual claim, through alienability, persistence, and compatibility, and beyond to deep aspects like recursiveness, scalability, and resilience – follow from the modular architecture. The Article then shows how the information cost theory helps explain some puzzling phenomena such as the pedis possessio in mining law, fencing in and fencing out, the unit rule in eminent domain, and the intersection of state action and the enforcement of covenants. The Article concludes with some implications of property as a law of modular things for the architecture of private law.
Autonomous Vehicle Coordination with Wireless Sensor and Actuator Networks
A coordinated team of mobile wireless sensor and actuator nodes can bring numerous benefits for various applications in the fields of cooperative surveillance, mapping of unknown areas, disaster management, automated highways, and space exploration. This article explores the idea of mobile nodes using vehicles on wheels, augmented with wireless, sensing, and control capabilities. One of the vehicles acts as a leader, being remotely driven by the user; the others are followers. Each vehicle has a low-power wireless sensor node attached, featuring a 3D accelerometer and a magnetic compass. Speed and orientation are computed in real time using inertial navigation techniques. The leader periodically transmits these measures to the followers, which implement a lightweight fuzzy logic controller for imitating the leader's movement pattern. We report in detail on all development phases, covering design, simulation, controller tuning, inertial sensor evaluation, calibration, scheduling, fixed-point computation, debugging, benchmarking, field experiments, and lessons learned.
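A lightweight fuzzy controller of the kind described can be sketched with triangular membership functions and weighted-average defuzzification. This is a minimal illustration of the technique, not the article's actual controller; the membership breakpoints and the ±30° steering outputs are assumed values.

```python
def tri(x, a, b, c):
    # Triangular membership function with support [a, c] and peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(heading_error_deg):
    """Map the follower's heading error to a steering correction (degrees).

    Three fuzzy sets (negative / zero / positive error) fire three rules
    (steer left / straight / right), combined by weighted-average
    defuzzification. Breakpoints and outputs are illustrative assumptions.
    """
    neg = tri(heading_error_deg, -90.0, -45.0, 0.0)
    zero = tri(heading_error_deg, -45.0, 0.0, 45.0)
    pos = tri(heading_error_deg, 0.0, 45.0, 90.0)
    num = neg * (-30.0) + zero * 0.0 + pos * 30.0
    den = neg + zero + pos
    return num / den if den else 0.0
```

On an embedded node, the same rule base would typically be evaluated in fixed-point arithmetic, which is consistent with the fixed-point computation phase the authors report.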
CASPR: Judiciously Using the Cloud for Wide-Area Packet Recovery
We revisit a classic networking problem -- how to recover from lost packets
in the best-effort Internet. We propose CASPR, a system that judiciously
leverages the cloud to recover from lost or delayed packets. CASPR supplements
and protects best-effort connections by sending a small number of coded packets
along the highly reliable but expensive cloud paths. When receivers detect
packet loss, they recover packets with the help of the nearby data center, not
the sender, thus providing quick and reliable packet recovery for
latency-sensitive applications. Using a prototype implementation and its
deployment on the public cloud and the PlanetLab testbed, we quantify the
benefits of CASPR in providing fast, cost-effective packet recovery. Using
controlled experiments, we also explore how these benefits translate into
improvements up and down the network stack.
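The abstract does not specify which code CASPR uses, but the idea of recovering a lost packet from a small number of coded packets can be illustrated with the simplest possible code: a single XOR parity packet protecting a block of equal-length payloads. The function names here are hypothetical.

```python
def xor_parity(packets):
    # Build one parity packet as the byte-wise XOR of equal-length payloads.
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(survivors, parity):
    # XOR the parity with every surviving payload; what remains is the
    # single lost packet (works only for one loss per protected block).
    out = bytearray(parity)
    for p in survivors:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

pkts = [b'hello', b'world', b'again']
par = xor_parity(pkts)                 # the coded packet sent via the cloud path
lost = recover([pkts[0], pkts[2]], par)  # receiver lost pkts[1]
```

Real deployments would use codes tolerant of multiple losses (e.g. Reed-Solomon), but the trade-off is the same: a small amount of redundancy on the reliable, expensive path protects a block of packets on the best-effort path.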
Large-scale adaptive mantle convection simulation
A new-generation parallel adaptive-mesh mantle convection code, Rhea, is described and benchmarked. Rhea targets large-scale mantle convection simulations on parallel computers, and thus has been developed with a strong focus on computational efficiency and parallel scalability of both mesh handling and numerical solvers. Rhea builds mantle convection solvers on a collection of parallel octree-based adaptive finite element libraries that support new distributed data structures and parallel algorithms for dynamic coarsening, refinement, rebalancing, and repartitioning of the mesh. In this study we demonstrate scalability to 122,880 compute cores and verify correctness of the implementation. We present the numerical approximation and convergence properties using 3-D benchmark problems and other tests for variable-viscosity Stokes flow and thermal convection.
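The octree-based adaptivity at the heart of such codes follows a simple recursive pattern: a cell whose error indicator is high is split into eight octants, and the process repeats on the children up to a maximum level. The sketch below illustrates that pattern only; it is not Rhea's implementation (which uses distributed octrees and parallel repartitioning), and the class and function names are hypothetical.

```python
class Oct:
    """One cell of a (serial) octree covering a cube of the domain."""

    def __init__(self, origin, size, level=0):
        self.origin, self.size, self.level = origin, size, level
        self.children = []

    def refine(self, indicator, max_level):
        # Split into 8 octants wherever the error indicator flags the cell.
        if self.level >= max_level or not indicator(self):
            return
        h = self.size / 2.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    o = (self.origin[0] + dx * h,
                         self.origin[1] + dy * h,
                         self.origin[2] + dz * h)
                    child = Oct(o, h, self.level + 1)
                    child.refine(indicator, max_level)
                    self.children.append(child)

def leaves(cell):
    # The leaf cells are the elements on which the finite element
    # discretization actually lives.
    if not cell.children:
        return [cell]
    return [l for c in cell.children for l in leaves(c)]

root = Oct((0.0, 0.0, 0.0), 1.0)
root.refine(lambda c: c.level < 1, max_level=3)  # toy indicator: refine root once
```

In a parallel setting, the leaves are additionally ordered along a space-filling curve and repartitioned across processes after each refinement step, which is the load-balancing problem the Rhea libraries address.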
Multilevel balancing domain decomposition at extreme scales
In this paper we present a fully distributed, communicator-aware, recursive, and interlevel-overlapped message-passing implementation of the multilevel balancing domain decomposition by constraints (MLBDDC) preconditioner. The implementation relies heavily on subcommunicators in order to achieve the desired effect of coarse-grain overlapping of computation and communication, and of communication and computation, among levels in the hierarchy (namely, interlevel overlapping). Essentially, the main communicator is split into as many nonoverlapping subsets of message-passing interface (MPI) tasks (i.e., MPI subcommunicators) as there are levels in the hierarchy. Provided that specialized resources (cores and memory) are devoted to each level, a careful rescheduling and mapping of all the computations and communications in the algorithm lets a high degree of overlapping be exploited among levels. All subroutines and associated data structures are expressed recursively, and therefore MLBDDC preconditioners with an arbitrary number of levels can be built while reusing significant and recurrent parts of the code. This approach leads to excellent weak scalability results as soon as level-1 tasks can fully overlap coarser-level duties. We provide a model that indicates how to choose the number of levels and the coarsening ratios between consecutive levels, and that qualitatively determines the scalability limits for a given choice. We have carried out a comprehensive weak scalability analysis of the proposed implementation for the three-dimensional Laplacian and linear elasticity problems on structured and unstructured meshes. Excellent weak scalability results have been obtained up to 458,752 IBM BG/Q cores and 1.8 million MPI tasks, the first time that exact domain decomposition preconditioners (based only on sparse direct solvers) have reached these scales. (An erratum is attached.)
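The communicator split described above amounts to assigning each MPI rank a "color" equal to the hierarchy level it serves; in MPI this color would then be passed to a routine such as mpi4py's `Comm.Split(color, key)` to carve one subcommunicator per level. The sketch below shows only the coloring logic, with no MPI dependency; the function name and the per-level resource counts are illustrative assumptions, not the paper's configuration.

```python
def level_color(rank, level_sizes):
    """Map an MPI rank to the hierarchy level it is reserved for.

    level_sizes[i] is the number of ranks devoted to level i (level 0 being
    the fine level). Ranks are assigned to levels in contiguous blocks, so
    each level's block becomes one nonoverlapping subcommunicator.
    """
    acc = 0
    for lvl, size in enumerate(level_sizes):
        acc += size
        if rank < acc:
            return lvl
    raise ValueError("rank %d exceeds the reserved resources" % rank)

# Illustrative 10-rank job: 8 ranks work on the fine level, 2 on the coarse one.
colors = [level_color(r, [8, 2]) for r in range(10)]
```

Because each level owns its own ranks, a fine-level rank can post its local solves while coarse-level ranks independently handle the coarse problem, which is exactly the interlevel overlap the implementation exploits.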