Compound index for power quality evaluation and benchmarking
A high level of delivered power quality (PQ) is becoming one of the key performance indicators for both contemporary and future power networks. The increased proliferation of converter-connected generation and load in power networks, the increased sensitivity of some of these new types of devices to network disturbances, and requirements for more flexible operation of power networks have led to the revision of some PQ standards and the introduction of modified or, in some cases, new requirements for PQ compliance. Although almost all PQ phenomena, with the exception of voltage transients, are well defined and appropriate thresholds for individual phenomena are set in international standards, there is no standardised or commonly accepted way to describe and evaluate the overall PQ performance at buses. This study presents an analytic hierarchy process (AHP) inspired methodology for assessing the overall PQ performance at a bus based on several different PQ phenomena considered simultaneously. A compound bus PQ index is defined using AHP to represent the overall PQ performance at the bus with respect to voltage sags, harmonics and voltage unbalance. The application of the methodology is illustrated on a 295-bus generic distribution network.
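The AHP step described above can be sketched in a few lines: priority weights for the three phenomena are taken from the principal eigenvector of a pairwise comparison matrix, and the compound index is their weighted combination. The comparison judgments and sub-index values below are purely illustrative, not taken from the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty scale) for three PQ
# phenomena: voltage sag, harmonics, voltage unbalance. The judgments
# are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],   # sag vs. (sag, harmonics, unbalance)
    [1/3, 1.0, 2.0],   # harmonics
    [1/5, 1/2, 1.0],   # unbalance
])

# AHP priority weights: principal eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Per-phenomenon sub-indices in [0, 1] (0 = ideal, 1 = at the limit);
# the numbers are made up for the example.
sub_indices = np.array([0.4, 0.2, 0.1])

compound_pq_index = float(w @ sub_indices)
print(w.round(3), round(compound_pq_index, 3))
```

Under these illustrative judgments, voltage sag dominates the weighting, so the compound index is driven mainly by the sag sub-index.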
LabChain: an Interactive Prototype for Synthetic Peer-to-Peer Trade Research in Experimental Energy Economics
Blockchain-based peer-to-peer (P2P) electricity markets have received considerable attention in recent years, leading to a rich variety of proposed market designs. Yet there is little comparability and consensus on optimal market design, partly due to a lack of common evaluation and benchmarking infrastructure. This article describes LabChain, an interactive prototype serving as research infrastructure for conducting experiments in (simulated) P2P electricity markets involving real human actors. The software stack comprises: (i) an (open) data layer for experiment configuration, (ii) a blockchain layer to reliably document bids and transactions, (iii) an experiment coordination layer and (iv) a user interface layer for participant interactions. As an evaluation environment for human interactions within a laboratory setting, it lets researchers investigate patterns arising from the energy system and market setup, and compare and evaluate designs under real human behavior, allowing alignment of intentions and outcomes. This contributes to the discourse on evaluation and benchmarking infrastructure.
Shotgun metagenome data of a defined mock community using Oxford Nanopore, PacBio and Illumina technologies.
Metagenomic sequence data from defined mock communities is crucial for assessing sequencing platform performance and downstream analyses, including assembly, binning and taxonomic assignment. We report a comparison of shotgun metagenome sequencing and assembly metrics of a defined microbial mock community using the Oxford Nanopore Technologies (ONT) MinION, PacBio and Illumina sequencing platforms. Our synthetic microbial community BMock12 consists of 12 bacterial strains with genome sizes spanning 3.2-7.2 Mbp, 40-73% GC content, and 1.5-7.3% repeats. Size selection of both PacBio and ONT sequencing libraries prior to sequencing was essential to yield comparable relative abundances of organisms across all sequencing technologies. While the Illumina-based metagenome assembly yielded good coverage with few misassemblies, contiguity was greatly improved by both the Illumina + ONT and Illumina + PacBio hybrid assemblies, albeit at the cost of more misassemblies, most notably in genomes with high sequence similarity to each other. Our resulting datasets allow evaluation and benchmarking of bioinformatics software on Illumina, PacBio and ONT platforms in parallel.
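The contiguity comparison above is typically summarised with the standard N50 statistic: the contig length L such that contigs of length at least L together cover at least half the assembly. A minimal sketch, with illustrative contig lengths rather than BMock12 data:

```python
# N50: length L such that contigs of length >= L cover >= half the
# total assembly length. The contig lengths below are illustrative.
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

print(n50([100, 200, 300, 400, 500]))  # -> 400
```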
Evaluation and Benchmarking for Robot Motion Planning Problems Using TuLiP
Model checking is a technique commonly used in the verification of software and hardware. More recently, similar techniques have been employed to synthesize software that is correct by construction. TuLiP is a toolkit that interfaces with game solvers and model checkers to achieve this, producing a finite-state automaton representing a controller that satisfies the supplied specification. For motion planning in particular, a model checker may be employed in a deterministic scenario to produce a path satisfying a specification φ by checking against its negation ¬φ: if a counterexample is found, it is a trace that satisfies φ. This was achieved in the TuLiP framework using the linear temporal logic (LTL) model checkers NuSMV and SPIN. A benchmark scenario based on a regular grid-world with obstacle and goal regions and reachability properties was devised, and extended to allow control of various complexity parameters such as grid size, number of actors and specification type. Different measures of performance were explored, including CPU time, memory usage and path length, and the behavior of each checker with increasing problem complexity was analyzed using these metrics. The suitability of each checker for different classes and complexities of motion-planning problems was evaluated.
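The counterexample-as-plan idea can be illustrated on a tiny grid-world: asking a checker to verify "the goal is never reached" (¬φ) yields, as a counterexample, a trace that reaches the goal, i.e. a plan satisfying φ. In this sketch an explicit breadth-first search stands in for NuSMV or SPIN, and the grid is made up for the example ('#' marks an obstacle).

```python
from collections import deque

# Toy grid: 'S' start, 'G' goal, '#' obstacle, '.' free cell.
GRID = [
    "S..#",
    ".#.#",
    "...G",
]

def find_counterexample(grid):
    """BFS stand-in for a model checker verifying 'the goal is never
    reached'; any returned trace is a counterexample, i.e. a plan."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == "S")
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), trace = frontier.popleft()
        if grid[r][c] == "G":
            return trace  # trace violating the negated spec
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), trace + [(nr, nc)]))
    return None  # no counterexample: the goal really is unreachable

path = find_counterexample(GRID)
print(path)
```

A real checker additionally handles temporal operators and fairness; the sketch covers only the reachability case described in the abstract.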
Navigational algorithms evaluation and benchmarking
One of the fundamental problems in mobile robotics is navigating unexplored environments safely and efficiently. Efforts to address this problem fall into three categories: reactive approaches, which make instantaneous decisions; map-based approaches, which use grid or topological representations; and learning-based approaches. Evaluating and comparing approaches is essential to understanding them better, particularly how they perform in different problem environments and relative to each other. This information guides the development of further approaches, highlights problematic environments, and provides a clear mapping between approaches and environments. However, comparative studies to date have been confined to a single category, relying on a degree of similarity between the approaches compared; there has not yet been a comparative framework spanning the different categories of navigational robotics. This work therefore aims to develop an evaluation method that compares a variety of different approaches in the same environments, to achieve a better understanding of navigational algorithms. To this end, a framework is proposed that simulates these approaches in a common set of problem environments and scores them with the same set of metrics to compare their effectiveness and efficiency. The most common reactive and map-based approaches are implemented, and a generic, precise and empirical way to evaluate their performance on the set of environments, and against each other, is demonstrated. The resulting analysis shows that methods such as RRT* do not improve on RRT when benchmarked in this way, and the evaluation of the problem areas of the Potential Field approach led to the development of the novel Pheromone Potential Field approach. This work opens the door to more in-depth research into benchmarking across the different navigational categories in static and dynamic environments, which will lead to a better understanding of navigational approaches and significantly influence their future development. This research is a step toward dynamic navigational planners that match the different approaches to a set of problems or environments.
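For readers unfamiliar with the planners being benchmarked, a minimal RRT can be sketched in a few lines: grow a tree by sampling the free space and extending the nearest node a fixed step toward each sample. The sketch below works in an obstacle-free unit square with goal biasing; step size, iteration budget, and tolerances are illustrative, not those of the framework.

```python
import math
import random

def rrt(start, goal, step=0.05, iters=2000, goal_bias=0.2,
        goal_tol=0.05, seed=0):
    """Minimal 2-D RRT in the obstacle-free unit square (a toy
    stand-in for the planners benchmarked above)."""
    rng = random.Random(seed)
    parents = {start: None}  # tree as child -> parent map
    for _ in range(iters):
        # With some probability sample the goal itself (goal biasing).
        sample = goal if rng.random() < goal_bias else (rng.random(),
                                                        rng.random())
        # Extend the nearest tree node a bounded step toward the sample.
        near = min(parents, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        s = min(step, d)
        new = (near[0] + s * (sample[0] - near[0]) / d,
               near[1] + s * (sample[1] - near[1]) / d)
        parents[new] = near
        if math.dist(new, goal) <= goal_tol:
            # Walk the parent pointers back to the root to get the path.
            path = [new]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
    return None

path = rrt((0.1, 0.1), (0.9, 0.9))
```

RRT* differs by rewiring the tree toward lower-cost parents as it grows, which is exactly the refinement the analysis above found not to pay off under its benchmarks.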
Comparative evaluation of instrument segmentation and tracking methods in minimally invasive surgery
Intraoperative segmentation and tracking of minimally invasive instruments is a prerequisite for computer- and robot-assisted surgery. Since additional hardware such as tracking systems, or the robot encoders, is cumbersome and lacks accuracy, surgical vision is evolving as a promising technique for segmenting and tracking the instruments using only the endoscopic images. What has been missing so far, however, are common image data sets for consistent evaluation and benchmarking of algorithms against each other. This paper presents a comparative validation study of different vision-based methods for instrument segmentation and tracking in the context of robotic as well as conventional laparoscopic surgery. The contribution of the paper is twofold: we introduce a comprehensive validation data set that was provided to the study participants, and we present the results of the comparative validation study. Based on these results, we conclude that modern deep learning approaches outperform other methods in instrument segmentation tasks, but the results are still not perfect. Furthermore, we show that merging results from different methods significantly increases accuracy compared to the best stand-alone method. The results of the instrument tracking task, on the other hand, show that this is still an open challenge, especially during challenging scenarios in conventional laparoscopic surgery.
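Segmentation comparisons of this kind are commonly scored with overlap metrics between a predicted mask and the ground truth. A minimal sketch of two standard ones, Dice and intersection-over-union, on tiny illustrative binary masks (not data from the study):

```python
# Dice and IoU overlap metrics between flattened binary masks.
def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union

pred  = [1, 1, 0, 0, 1, 0]  # illustrative predicted mask
truth = [1, 0, 0, 1, 1, 0]  # illustrative ground-truth mask
print(dice(pred, truth), iou(pred, truth))  # -> 0.6666666666666666 0.5
```

Both metrics reward the same overlap; Dice weights it more generously, which is why the two are often reported together in validation studies like the one above.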