ImageJ2: ImageJ for the next generation of scientific image data
ImageJ is an image analysis program extensively used in the biological
sciences and beyond. Due to its ease of use, recordable macro language, and
extensible plug-in architecture, ImageJ enjoys contributions from
non-programmers, amateur programmers, and professional developers alike.
Enabling such a diversity of contributors has resulted in a large community
that spans the biological and physical sciences. However, a rapidly growing
user base, diverging plugin suites, and technical limitations have revealed a
clear need for a concerted software engineering effort to support emerging
imaging paradigms, to ensure the software's ability to handle the requirements
of modern science. Due to these new and emerging challenges in scientific
imaging, ImageJ is at a critical development crossroads.
We present ImageJ2, a total redesign of ImageJ offering a host of new
functionality. It separates concerns, fully decoupling the data model from the
user interface. It emphasizes integration with external applications to
maximize interoperability. Its robust new plugin framework allows everything
from image formats, to scripting languages, to visualization to be extended by
the community. The redesigned data model supports arbitrarily large,
N-dimensional datasets, which are increasingly common in modern image
acquisition. Despite the scope of these changes, backwards compatibility is
maintained such that this new functionality can be seamlessly integrated with
the classic ImageJ interface, allowing users and developers to migrate to these
new methods at their own pace. ImageJ2 provides a framework engineered for
flexibility, intended to support these requirements as well as accommodate
future needs.
Evaluation of Image Pixels Similarity Measurement Algorithm Accelerated on GPU with OpenACC
OpenACC is a directive-based parallel programming model that allows existing C, C++, and Fortran applications to be accelerated with minimal code modifications. By annotating the bottleneck-causing sections of the code with OpenACC directives, acceleration of the code is simplified, leading to high performance portability across different target Graphics Processing Units (GPUs). In this work, the portability of a parallelizable chi-square based pixel similarity measurement algorithm has been evaluated on two consumer- and professional-grade GPUs. To the best of our knowledge, this is the first performance evaluation report that applies the OpenACC optimization clauses (collapse and tile) on different GPUs to process a light workload (a low-resolution image of 581x429 pixels) and a heavy workload (a high-resolution image of 4500x3500 pixels), demonstrating the effectiveness and high portability of OpenACC.
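As a rough illustration of the kind of kernel this work targets, the sketch below computes a chi-square distance over two images and offloads the doubly nested pixel loop with an OpenACC collapse clause. The function name, image layout, and epsilon guard are illustrative assumptions, not the paper's actual implementation, and the tile-clause tuning is not reproduced here.

```c
/* Minimal sketch (assumptions noted above): chi-square pixel similarity
 * between two grayscale images, offloaded with OpenACC.
 * Compile, e.g., with the NVIDIA HPC compiler: nvc -acc chi2.c */
#include <stdio.h>
#include <stdlib.h>

double chi_square_similarity(const float *a, const float *b,
                             int height, int width)
{
    double sum = 0.0;
    /* Collapse the two image loops into one iteration space so the compiler
     * can map it onto the GPU; the reduction accumulates per-pixel terms. */
    #pragma acc parallel loop collapse(2) reduction(+:sum) \
        copyin(a[0:height*width], b[0:height*width])
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float pa = a[y * width + x];
            float pb = b[y * width + x];
            float denom = pa + pb + 1e-6f;   /* avoid division by zero */
            float diff = pa - pb;
            sum += (double)(diff * diff) / denom;
        }
    }
    return sum;
}

int main(void)
{
    int h = 429, w = 581;                    /* the low-resolution case */
    float *a = malloc(sizeof(float) * h * w);
    float *b = malloc(sizeof(float) * h * w);
    for (int i = 0; i < h * w; ++i) {        /* synthetic pixel data */
        a[i] = (float)(i % 256);
        b[i] = (float)((i + 3) % 256);
    }
    printf("chi-square distance: %f\n", chi_square_similarity(a, b, h, w));
    free(a);
    free(b);
    return 0;
}
```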
Investigation into scalable energy and performance models for many-core systems
PhD Thesis
It is likely that many-core processor systems will continue to penetrate
emerging embedded and high-performance applications. Scalable energy and
performance models are two critical aspects that provide insight into the
conflicting trade-offs between energy and performance as hardware and
software complexity grows. Traditional performance models, such as Amdahl’s Law,
Gustafson’s and Sun-Ni’s, have helped the research community and industry
to better understand the system performance bounds with given processing
resources, which is otherwise known as speedup. However, these models and
their existing extensions have limited applicability for energy and/or
performance-driven system optimization in practical systems. For instance,
these are typically based on software characteristics, assuming ideal and
homogeneous hardware platforms or limited forms of processor
heterogeneity. In addition, measuring the speedup and parallelization
factors of an application running on a specific hardware platform requires
instrumenting the original software code. Indeed, practical speedup and
parallelizability models of application workloads running on modern
heterogeneous hardware are critical for energy and performance models, as
they can be used to inform design and control decisions with an aim to
improve system throughput and energy efficiency.
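For reference, the classical fixed-workload and scaled-workload speedup bounds mentioned above can be written as follows (standard textbook forms, with parallelizable fraction p and n processors; these are not the extended models developed in the thesis):

```latex
% Amdahl's Law (fixed workload) and Gustafson's Law (scaled workload),
% for a program with parallelizable fraction p run on n processors.
S_{\mathrm{Amdahl}}(n) = \frac{1}{(1 - p) + \frac{p}{n}}
\qquad
S_{\mathrm{Gustafson}}(n) = (1 - p) + p\,n
```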
This thesis addresses these limitations, first, by developing novel and
scalable speedup and energy consumption models based on a more general
representation of heterogeneity, referred to as the normal form heterogeneity.
A method is developed whereby standard performance counters found in
modern many-core platforms can be used to derive speedup, and therefore
the parallelizability of the software, without instrumenting applications. This
extends the usability of the new models to scenarios where the
parallelizability of software is unknown, enabling potential Run-Time
Management (RTM) optimization of speedup and/or energy efficiency. The
models and optimization methods presented in this thesis are validated
through extensive experimentation, by running a number of different
applications in wide-ranging concurrency scenarios on several
homogeneous and heterogeneous Multi/Many Core Processor (M/MCP)
systems. These range from existing off-the-shelf platforms to potential future system
extensions. The practical use of these models and methods is demonstrated
through real examples such as studying the effectiveness of the system load
balancer.
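To make the speedup/parallelizability relation concrete, the sketch below back-solves the Amdahl parallelizable fraction from two measured execution times. This is only an illustration of the classical relation; the thesis's method derives these quantities from hardware performance counters without instrumentation, which is not reproduced here, and the timing numbers are made up.

```c
/* Minimal illustration (not the thesis's counter-based method):
 * back-solve the Amdahl parallelizable fraction p from two measured
 * execution times, T(1) on one core and T(n) on n cores. */
#include <stdio.h>

/* From S(n) = 1 / ((1 - p) + p/n) and S(n) = T(1)/T(n):
 * p = (1 - T(n)/T(1)) / (1 - 1/n) */
double amdahl_parallel_fraction(double t1, double tn, int n)
{
    return (1.0 - tn / t1) / (1.0 - 1.0 / (double)n);
}

int main(void)
{
    double t1 = 10.0;   /* seconds on 1 core (made-up numbers) */
    double t4 = 3.25;   /* seconds on 4 cores */
    double p  = amdahl_parallel_fraction(t1, t4, 4);
    printf("estimated parallelizable fraction p = %.3f\n", p);
    printf("predicted speedup on 16 cores: %.2f\n",
           1.0 / ((1.0 - p) + p / 16.0));
    return 0;
}
```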
The models and methodologies proposed in this thesis provide guidance towards
new opportunities for improving the energy efficiency of M/MCP systems.
Higher Committee of Education Development (HCED) in Iraq
Revisiting Shared Data Protection Against Key Exposure
This paper sheds new light on secure data storage inside distributed
systems. Specifically, it revisits computational secret sharing in a situation
where the encryption key is exposed to an attacker. It comes with several
contributions: First, it defines a security model for encryption schemes, where
we ask for additional resilience against exposure of the encryption key.
Precisely, we ask for (1) indistinguishability of plaintexts under full
ciphertext knowledge, and (2) indistinguishability for an adversary who learns
the encryption key plus all but one share of the ciphertext. (2) relaxes the
"all-or-nothing" property to a more realistic setting, where the ciphertext is
transformed into a number of shares such that the adversary cannot access one
of them. (1) asks that, unless the user's key is disclosed, no one other than the
user can retrieve information about the plaintext. Second, it introduces a new
computationally secure encryption-then-sharing scheme, that protects the data
in the previously defined attacker model. It consists of data encryption
followed by a linear transformation of the ciphertext, then its fragmentation
into shares, along with secret sharing of the randomness used for encryption.
The computational overhead in addition to data encryption is reduced by half
with respect to the state of the art. Third, it provides, for the first time,
cryptographic proofs in this context of key exposure. It emphasizes that the
security of our scheme relies only on a simple cryptanalysis resilience
assumption for block ciphers in public key mode: indistinguishability from
random of the sequence of differentials of a random value. Fourth, it provides
an alternative scheme relying on the more theoretical random permutation model.
It consists of encrypting with sponge functions in duplex mode and then, as
before, secret-sharing the randomness.
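To make the fragmentation step concrete, the sketch below shows a minimal (n-of-n) XOR splitting of a ciphertext into shares, where every share is needed for reconstruction. This is only an illustration of turning a ciphertext into shares; it is not the paper's encryption-then-sharing scheme (which additionally applies a linear transformation and secret-shares the encryption randomness), and it uses a non-cryptographic RNG purely for demonstration.

```c
/* Minimal sketch (assumptions noted above): (n,n) XOR splitting of a
 * ciphertext into shares. All n shares are needed to reconstruct; a proper
 * scheme would use a CSPRNG, which rand() below is NOT. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Split ct (len bytes) into n shares: shares[0..n-2] are random,
 * shares[n-1] = ct XOR shares[0] XOR ... XOR shares[n-2]. */
void split_ciphertext(const unsigned char *ct, size_t len,
                      unsigned char **shares, int n)
{
    for (size_t i = 0; i < len; ++i) {
        unsigned char acc = ct[i];
        for (int s = 0; s < n - 1; ++s) {
            shares[s][i] = (unsigned char)(rand() & 0xFF); /* placeholder RNG */
            acc ^= shares[s][i];
        }
        shares[n - 1][i] = acc;
    }
}

/* Reconstruct the ciphertext by XOR-ing all n shares together. */
void combine_shares(unsigned char **shares, int n, size_t len,
                    unsigned char *out)
{
    memset(out, 0, len);
    for (int s = 0; s < n; ++s)
        for (size_t i = 0; i < len; ++i)
            out[i] ^= shares[s][i];
}

int main(void)
{
    const unsigned char ct[] = "example ciphertext bytes"; /* stand-in data */
    size_t len = sizeof ct;
    int n = 3;
    unsigned char *shares[3];
    unsigned char recovered[sizeof ct];

    for (int s = 0; s < n; ++s)
        shares[s] = malloc(len);

    split_ciphertext(ct, len, shares, n);
    combine_shares(shares, n, len, recovered);
    printf("recovered matches: %s\n",
           memcmp(ct, recovered, len) == 0 ? "yes" : "no");

    for (int s = 0; s < n; ++s)
        free(shares[s]);
    return 0;
}
```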