Diagnostic Applications for Micro-Synchrophasor Measurements
This report articulates and justifies the preliminary selection of diagnostic applications for data from micro-synchrophasors (µPMUs) in electric power distribution systems that will be further studied and developed within the scope of the three-year ARPA-E award titled Micro-synchrophasors for Distribution Systems.
O(N) methods in electronic structure calculations
Linear scaling methods, or O(N) methods, have computational and memory
requirements which scale linearly with the number of atoms in the system, N, in
contrast to standard approaches which scale with the cube of the number of
atoms. These methods, which rely on the short-ranged nature of electronic
structure, will allow accurate, ab initio simulations of systems of
unprecedented size. The theory behind the locality of electronic structure is
described and related to physical properties of systems to be modelled, along
with a survey of recent developments in real-space methods which are important
for efficient use of high performance computers. The linear scaling methods
proposed to date can be divided into seven different areas, and the
applicability, efficiency, and advantages of the methods in each area are
then discussed. The applications of linear scaling methods, as well as the
implementations available as computer programs, are considered. Finally, the
prospects for and the challenges facing linear scaling methods are discussed.
Comment: 85 pages, 15 figures, 488 references. Resubmitted to Rep. Prog. Phys. (small changes).
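The locality that O(N) methods rely on can be seen in a toy calculation. The sketch below (my own illustration, not from the review) builds a gapped 1D tight-binding Hamiltonian, forms the zero-temperature density matrix by conventional O(N^3) diagonalization, and shows its off-diagonal elements decaying rapidly away from the diagonal; it is this decay that permits linear-scaling truncation.

```python
import numpy as np

# Toy illustration (not from the review): the density matrix of a gapped
# system is short-ranged, which is the property O(N) methods exploit.
N = 200   # number of sites
t = 1.0   # hopping amplitude
# Alternating on-site energies open a gap, making the density matrix local.
H = np.diag([(-1.0) ** i for i in range(N)]) \
    - t * (np.eye(N, k=1) + np.eye(N, k=-1))

eps, C = np.linalg.eigh(H)   # conventional O(N^3) diagonalization
occ = C[:, : N // 2]         # occupy the lower band (N/2 states)
rho = occ @ occ.T            # zero-temperature density matrix

# Off-diagonal elements fall off rapidly with distance from the diagonal:
decay = [abs(rho[N // 2, N // 2 + d]) for d in range(0, 30, 5)]
print(decay)
```

Truncating `rho` beyond a fixed range then leaves O(N) nonzero entries, which is the starting point for several of the seven method families surveyed.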
Three real-space discretization techniques in electronic structure calculations
A characteristic feature of the state-of-the-art of real-space methods in
electronic structure calculations is the diversity of the techniques used in
the discretization of the relevant partial differential equations. In this
context, the main approaches include finite-difference methods, various types
of finite elements, and wavelets. This paper reports on the results of several
code development projects that approach problems related to the electronic
structure using these three different discretization methods. We review the
ideas behind these methods, give examples of their applications, and discuss
their similarities and differences.
Comment: 39 pages, 10 figures, accepted to a special issue of "physica status
solidi (b) - basic solid state physics" devoted to the CECAM workshop "State
of the art developments and perspectives of real-space electronic structure
techniques in condensed matter and molecular physics". v2: Minor stylistic
and typographical changes, partly inspired by referee comments.
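As a minimal illustration of the first of the three discretizations, the sketch below (my own toy example, not taken from the paper) sets up a 1D Schrödinger eigenproblem with a second-order finite-difference stencil; the grid size, box length, and harmonic potential are arbitrary choices.

```python
import numpy as np

# Toy illustration (my choices of grid and potential, not the paper's):
# a 1D Schrodinger eigenproblem discretized with the 3-point
# finite-difference stencil, one of the three techniques discussed.
n, L = 400, 20.0                     # grid points, box length
h = L / (n - 1)                      # grid spacing
x = np.linspace(-L / 2, L / 2, n)

# Kinetic term -(1/2) d^2/dx^2 via the stencil (-1, 2, -1) / h^2,
# plus the harmonic potential V(x) = x^2 / 2 on the diagonal.
kinetic = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * h ** 2)
H = kinetic + np.diag(0.5 * x ** 2)

E = np.linalg.eigvalsh(H)[:3]
print(E)   # approaches the exact levels 0.5, 1.5, 2.5 as h -> 0
```

Finite elements and wavelets replace the uniform stencil with basis functions on an adaptive mesh, but the resulting sparse eigenproblem has the same shape.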
Statistical and expert-based landslide susceptibility modeling on a national scale applied to North Macedonia
This article presents a Geographic Information System (GIS) assessment of Landslide Susceptibility Zonation (LSZ) in North Macedonia. Because of the weak landslide inventory, a statistical method (frequency ratio) is combined with the Analytical Hierarchy Process (AHP). In this study, lithology, slope, plan curvature, precipitation, land cover, distance from streams, and distance from roads were selected as precondition factors for landslide occurrence. The approach used has two advantages. The first is the possibility of comparing the results and cross-validating between the statistical and expert-based methods, with an indication of the advantages and drawbacks of each. The second is the possibility of better weighting the precondition factors for landslide occurrence, which can be useful in cases of a weak landslide inventory. The final result shows that, in the case of a weak landslide inventory, the LSZ map created with the combination of both models provides better overall results than either model separately.
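The frequency-ratio statistic at the core of the statistical model is simple to compute: for each class of a precondition factor, it is the share of landslide cells falling in that class divided by the share of the study area the class occupies. A hedged sketch, with invented class labels and landslide cells standing in for real rasters:

```python
import numpy as np

# Hedged sketch of the frequency-ratio (FR) statistic used in the
# statistical part of the model. The class labels and landslide cells
# below are invented; real inputs would be a raster of one precondition
# factor (e.g. slope classes) and the landslide inventory.
factor = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 2])  # factor class per cell
slides = np.array([0, 0, 0, 1, 1, 0, 1, 0, 0, 0])  # 1 = landslide cell

fr = {}
for c in np.unique(factor):
    in_class = factor == c
    # share of all landslide cells in class c over share of area in class c
    pct_slides = slides[in_class].sum() / slides.sum()
    pct_area = in_class.sum() / factor.size
    fr[int(c)] = pct_slides / pct_area

print(fr)   # FR > 1 marks classes more prone to landslides
```

The AHP side then supplies expert-derived weights for combining the per-factor FR maps, which is where the two methods complement each other.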
NASA Sea Ice Validation Program for the Defense Meteorological Satellite Program Special Sensor Microwave Imager
The history of the program is described along with the SSM/I sensor, including its calibration and geolocation correction procedures used by NASA, SSM/I data flow, and the NASA program to distribute polar gridded SSM/I radiances and sea ice concentrations (SIC) on CD-ROMs. Following a discussion of the NASA algorithm used to convert SSM/I radiances to SICs, results of 95 SSM/I-MSS Landsat ice concentration comparisons for regions in both the Arctic and the Antarctic are presented. The Landsat comparisons show that the overall algorithm accuracy under winter conditions is 7% on average, with a 4% negative bias. Next, high-resolution active and passive microwave image mosaics from coordinated NASA and Navy aircraft underflights over regions of the Beaufort and Chukchi seas in March 1988 were used to show that the algorithm's multiyear ice concentration accuracy is 11% on average, with a positive bias of 12%. Ice edge crossings of the Bering Sea by the NASA DC-8 aircraft were used to show that the SSM/I 15% ice concentration contour corresponds best to the location of the initial bands at the ice edge. Finally, a summary of results and recommendations for improving the SIC retrievals from spaceborne radiometers are provided.
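Assuming the NASA algorithm referred to is the NASA Team algorithm, it starts from two ratios of SSM/I brightness temperatures; the tie-point coefficients that map them to ice concentration are omitted here, and the sample values below are invented, so this is only a hedged sketch of the inputs:

```python
# Hedged sketch (sample brightness temperatures are invented): the NASA Team
# algorithm builds on two channel ratios of SSM/I brightness temperatures;
# the tie-point coefficients that turn them into ice concentration and the
# multiyear/first-year separation are omitted.
def pr(tb19v, tb19h):
    """Polarization ratio at 19 GHz."""
    return (tb19v - tb19h) / (tb19v + tb19h)

def gr(tb37v, tb19v):
    """Spectral gradient ratio, 37 GHz vs 19 GHz (vertical polarization)."""
    return (tb37v - tb19v) / (tb37v + tb19v)

# Open water is strongly polarized (large PR); consolidated ice is not.
print(pr(180.0, 100.0))   # water-like sample
print(pr(250.0, 235.0))   # ice-like sample
print(gr(200.0, 180.0))
```

The validation results quoted above (7% winter accuracy, 11% multiyear accuracy) measure how well the concentration derived from such ratios matches Landsat and aircraft reference imagery.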
Engineering Benchmarks for Planning: the Domains Used in the Deterministic Part of IPC-4
In a field of research about general reasoning mechanisms, it is essential to
have appropriate benchmarks. Ideally, the benchmarks should reflect possible
applications of the developed technology. In AI Planning, researchers more and
more tend to draw their testing examples from the benchmark collections used in
the International Planning Competition (IPC). In the organization of (the
deterministic part of) the fourth IPC, IPC-4, the authors therefore invested
significant effort to create a useful set of benchmarks. They come from five
different (potential) real-world applications of planning: airport ground
traffic control, oil derivative transportation in pipeline networks,
model-checking safety properties, power supply restoration, and UMTS call
setup. Adapting and preparing such an application for use as a benchmark in the
IPC involves inevitable (often drastic) simplifications, as well as a careful
choice between, and engineering of, domain encodings. For the first
time in the IPC, we used compilations to formulate complex domain features in
simple languages such as STRIPS, rather than just dropping the more interesting
problem constraints in the simpler language subsets. The article explains and
discusses the five application domains and their adaptation to form the PDDL
test suites used in IPC-4. We summarize known theoretical results on structural
properties of the domains, regarding their computational complexity and
provable properties of their topology under the h+ function (an idealized
version of the relaxed plan heuristic). We present new (empirical) results
illuminating properties such as the quality of the most widespread heuristic
functions (planning graph, serial planning graph, and relaxed plan), the growth
of propositional representations over instance size, and the number of actions
available to achieve each fact; we discuss these data in conjunction with the
best results achieved by the different kinds of planners participating in
IPC-4.
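The h+ function and the relaxed-plan heuristics named above all rest on the delete relaxation: delete effects are ignored, so facts only accumulate. The sketch below (with a toy logistics domain of my own, not an IPC-4 domain) counts the fact layers of the relaxed planning graph until the goal appears, the cheap lower-bound flavour of this idea:

```python
# Hedged sketch of the delete relaxation underlying h+ and the relaxed-plan
# heuristics: delete effects are ignored, so facts only accumulate. The toy
# STRIPS domain below is my own invention, not an IPC-4 domain.
actions = [
    # (name, preconditions, add effects) -- delete lists already dropped
    ("load",   {"at-pkg", "at-truck"},        {"in-truck"}),
    ("drive",  {"at-truck"},                  {"truck-at-goal"}),
    ("unload", {"in-truck", "truck-at-goal"}, {"pkg-at-goal"}),
]

def relaxed_layers(state, goal):
    """Fact layers of the relaxed planning graph until the goal appears."""
    facts, depth = set(state), 0
    while not goal <= facts:
        new = set()
        for _, pre, add in actions:
            if pre <= facts:
                new |= add
        if new <= facts:
            return None   # goal unreachable even under the relaxation
        facts |= new
        depth += 1
    return depth

print(relaxed_layers({"at-pkg", "at-truck"}, {"pkg-at-goal"}))   # -> 2
```

h+ itself is the length of an optimal plan in this relaxed problem (NP-hard to compute exactly), and the relaxed-plan heuristic approximates it by extracting one concrete relaxed plan from these layers.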
A self-calibrating system for finger tracking using sound waves
In this thesis, a system for tracking the fingers of a user using sound waves is developed. The proposed solution is to attach a small speaker to each finger and then have a number of microphones placed ad hoc around a computer monitor listening to the speakers. The system should then be able to track the positions of the fingers so that the coordinates can be mapped to the computer monitor and used for human-computer interfacing. The thesis focuses on a proof of concept of the system. The system pipeline consists of three parts: signal processing, system self-calibration, and real-time sound source tracking. In the signal processing step, four different signal methods are constructed and evaluated, and it is shown that multiple signals can be used in parallel. The signal method with the best performance uses a number of damped sine waves stacked on top of each other, with each sound wave having a different frequency within a specified frequency band. The goal was to use ultrasound frequency bands for the system, but experiments showed that they gave rise to substantial aliasing, rendering the higher frequency bands unusable. The second step, the system self-calibration, aims to perform a scene reconstruction to find the positions of the microphones and the sound source path using only the received signal transmissions. First, the time-difference-of-arrival (TDOA) values are estimated using robust techniques centred around GCC-PHAT. The time offsets are then estimated in order to convert the TDOA problem into a time-of-arrival (TOA) problem, so that the positions of the receivers and sound events can be calculated. Finally, a "virtual screen" is fitted to the sound source path to be used for coordinate projection. The scene reconstruction was successful in 80% of the test cases, in the sense that it managed to estimate the spatial positions at all.
The estimates for the microphone positions had errors of 11.8 +/- 5 centimetres on average over the successful test cases, which is worse than the results presented in previous research; however, the best test case outperformed the results of another paper. The newly developed and implemented technique for finding the virtual screen was far from robust and found a reasonable virtual screen in only 12.5% of the test cases. In the third step, the sound events were estimated, one sound event at a time, using the SRP-PHAT method with the CFRC improvement. Unfortunate choices of the search volumes made the calculations very computationally heavy. The results were comparable to those of the system self-calibration when using the same data and the estimated microphone positions.
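The GCC-PHAT correlation at the heart of the calibration step whitens the cross-spectrum so that only phase information survives, giving a sharp peak at the true delay. A hedged sketch, using synthetic white noise with a known 37-sample delay rather than the thesis's stacked-sine signals:

```python
import numpy as np

# Hedged sketch of GCC-PHAT, the correlation used in the calibration step to
# estimate TDOA values. The test signal is synthetic white noise with a known
# 37-sample delay, not the thesis's stacked-sine signals.
def gcc_phat(sig, ref, fs=1.0):
    n = len(sig) + len(ref)
    cross = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cross /= np.abs(cross) + 1e-12        # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:                    # map wrapped indices to negative lags
        shift -= n
    return shift / fs                     # delay of sig relative to ref

rng = np.random.default_rng(0)
ref = rng.standard_normal(1024)
sig = np.roll(ref, 37)                    # simulate a 37-sample delay
print(gcc_phat(sig, ref))
```

With one such delay per microphone pair, converting TDOA to TOA and solving for the receiver and source positions is the multilateration problem the self-calibration step tackles.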
Short Term Unit Commitment as a Planning Problem
‘Unit Commitment’, setting online schedules for generating units in a power system to ensure supply meets demand, is integral to the secure, efficient, and economic daily operation of a power system. Conflicting desires for security of supply at minimum cost complicate this. Sustained research has produced methodologies within a guaranteed bound of optimality, given sufficient computing time.
Regulatory requirements to reduce emissions in modern power systems have necessitated increased renewable generation, whose output cannot be directly controlled, increasing complex uncertainties. Traditional methods are thus less efficient, generating more costly schedules or requiring impractical increases in solution time.
Meta-heuristic approaches are studied to identify why this large body of work has had little industrial impact despite continued academic interest over many years. A discussion of lessons learned is given, which should be of interest to researchers presenting new Unit Commitment approaches, such as a Planning implementation.
Automated Planning is a sub-field of Artificial Intelligence in which a timestamped sequence of predefined actions, manipulating a system towards a goal configuration, is sought. This differs from previous Unit Commitment formulations found in the literature. A unit's online status switches, with each switch representing a Planning action, at far fewer points than there are free variables in a traditional formulation. Efficient reasoning about these actions could reduce solution time, enabling Planning to tackle Unit Commitment problems with high levels of renewable generation.
No existing Planning formulations for Unit Commitment were found. A successful formulation enumerating open challenges would constitute a good benchmark problem for the field. Thus, two models are presented. The first demonstrates the approach's strength in temporal reasoning over numeric optimisation. The second balances this, but current algorithms cannot handle it. Extensions to an existing algorithm are proposed alongside a discussion of immediate challenges and possible solutions. This is intended to form a base from which a successful methodology can be developed.
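The structure of the problem can be seen in a toy instance. The sketch below (my own example, not one of the thesis's models) brute-forces the commitment decisions for three units over four periods, ignoring ramp rates and minimum up/down times; all unit data and the demand profile are invented.

```python
from itertools import product

# Hedged toy example (not the thesis's models): brute-force Unit Commitment
# for three units over four periods, ignoring ramp rates and minimum up/down
# times. Each unit is (capacity_MW, no_load_cost, cost_per_MWh); the demand
# profile is invented.
units = [(100, 50, 10.0), (80, 30, 15.0), (50, 10, 30.0)]
demand = [90, 150, 200, 120]

def dispatch_cost(on, d):
    """Cheapest dispatch of the committed units `on` meeting demand d."""
    committed = [u for i, u in enumerate(units) if on[i]]
    if sum(u[0] for u in committed) < d:
        return None                        # committed capacity insufficient
    cost, left = sum(u[1] for u in committed), d
    for cap, _, mc in sorted(committed, key=lambda u: u[2]):
        gen = min(cap, left)               # merit order: cheapest unit first
        cost += gen * mc
        left -= gen
    return cost

best = None
for schedule in product(product([0, 1], repeat=len(units)), repeat=len(demand)):
    costs = [dispatch_cost(on, d) for on, d in zip(schedule, demand)]
    if None not in costs and (best is None or sum(costs) < best[0]):
        best = (sum(costs), schedule)
print(best)
```

The on/off schedule, with its few switching points, is exactly what a Planning formulation reasons over as actions; the exhaustive search here grows as 2^(units x periods), which is why planners, meta-heuristics, or MILP solvers are needed at realistic scale.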