Informational efficiency in distressed markets: The case of European corporate bonds
This paper investigates the effect of the 2008 financial crisis on informational efficiency by carrying out a long-memory analysis of European corporate bond markets. We compute the Hurst exponent for fifteen sectorial indices to scrutinise the time-varying behaviour of long-range memory, applying a shuffling technique to avoid short-term correlation. We find that the financial crisis affects the informational efficiency of all corporate bond sectors, but unevenly, with those related to financial services hit hardest. Their vulnerability is not homogeneous, however, and some nonfinancial sectors suffer only a transitory effect.
Fil: Fernández, Aurelio. Universitat Rovira i Virgili; España.
Fil: Guercio, María Belén. Universidad Provincial del Sudoeste; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Instituto de Investigaciones Económicas y Sociales del Sur. Universidad Nacional del Sur. Departamento de Economía. Instituto de Investigaciones Económicas y Sociales del Sur; Argentina.
Fil: Martinez, Lisana Belén. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Instituto de Investigaciones Económicas y Sociales del Sur. Universidad Nacional del Sur. Departamento de Economía. Instituto de Investigaciones Económicas y Sociales del Sur; Argentina. Universidad Provincial del Sudoeste; Argentina.
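The Hurst exponent at the heart of this analysis can be estimated, for instance, by rescaled-range (R/S) analysis: H ≈ 0.5 indicates a memoryless (efficient) market, while H > 0.5 signals long-range persistence. The sketch below is a generic R/S estimator, not the authors' exact procedure; shuffling the series, which destroys temporal correlation, gives a benchmark that should fall back towards H ≈ 0.5.

```python
import numpy as np

def hurst_rs(series):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    # Window sizes spaced logarithmically between 10 and n/2
    sizes = np.unique(np.logspace(np.log10(10), np.log10(n // 2), 20).astype(int))
    rs_vals = []
    for w in sizes:
        rs = []
        for start in range(0, n - w + 1, w):
            chunk = series[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range
            s = chunk.std()                         # standard deviation
            if s > 0:
                rs.append(r / s)
        rs_vals.append(np.mean(rs))
    # The slope of log(R/S) against log(window size) estimates H
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h
```

Comparing `hurst_rs(returns)` against `hurst_rs(np.random.permutation(returns))` then separates genuine long memory from artefacts of the marginal distribution.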
Suitability of lacunarity measure for blind steganalysis
Blind steganalysis performance is influenced by several factors, including the features used for classification. This paper investigates the suitability of the lacunarity measure as a potential feature vector for blind steganalysis. A Differential Box Counting (DBC) based lacunarity measure has been employed using both the traditional sequential grid (SG) and a new radial strip (RS) approach. The performance of the multi-class SVM based classifier fell short of expectations. However, the findings show that both the SG and RS lacunarity produce enough discriminating features to warrant further research.
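As a rough illustration (not the authors' implementation, and using a plain gliding box rather than their radial-strip sampling), differential box counting assigns each box a "mass" derived from the height of the intensity surface within it, and lacunarity is then the normalised second moment of those masses:

```python
import numpy as np

def dbc_lacunarity(img, box):
    """Gliding-box lacunarity of a grayscale image using DBC-style box masses."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    masses = []
    for i in range(h - box + 1):
        for j in range(w - box + 1):
            window = img[i:i + box, j:j + box]
            # DBC mass: relative height of the intensity surface in the box
            masses.append(window.max() - window.min() + 1)
    masses = np.asarray(masses)
    # Lacunarity = E[M^2] / E[M]^2 = var/mean^2 + 1; equals 1 for uniform texture
    return masses.var() / masses.mean() ** 2 + 1.0
```

Evaluating this at several box sizes yields a feature vector whose shape reflects the "gappiness" of the texture, which is the property the paper tests as a discriminator between cover and stego images.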
From Malware Samples to Fractal Images: A New Paradigm for Classification. (Version 2.0, Previous version paper name: Have you ever seen malware?)
To date, a large number of research papers have been written on the classification of malware: its identification, its assignment to families, and the distinction between malware and goodware. These works have been based on captured malware samples and have attempted to analyse malware and goodware using various techniques, including techniques from the field of artificial intelligence; neural networks, for example, have played a significant role in these classification methods. Some of this work also analyses malware through visualisation, usually by converting malware samples that capture the structure of the malware into image structures, which then become the object of image processing. In this paper, we propose a highly unconventional and novel approach to malware visualisation based on dynamic behaviour analysis, with the idea that the resulting images, which are visually very interesting, are then used to classify malware with respect to goodware. Our approach opens an extensive topic for future discussion and provides many new directions for research in malware analysis and classification, as discussed in the conclusion. The results of the presented experiments are based on a database of 6 589 997 goodware, 827 853 potentially unwanted applications and 4 174 203 malware samples provided by ESET, and selected experimental data (images, generating polynomial formulas and software generating images) are available on GitHub for interested readers. Thus, this paper is not a comprehensive compact study reporting results from comparative experiments, but rather an attempt to show a new direction in the field of visualisation with possible applications in malware analysis.
Comment: This paper is under review; the section describing the conversion from malware structure to fractal figure has been temporarily removed here to protect our idea. It will be replaced by the full version when accepted.
Structured Parallelism by Composition - Design and implementation of a framework supporting skeleton compositionality
This thesis is dedicated to the efficient compositionality of algorithmic skeletons, which are abstractions of common parallel programming patterns. Skeletons can be implemented in the functional parallel language Eden as mere parallel higher-order functions. The use of algorithmic skeletons greatly simplifies parallel programming, because they already implement the tedious details of parallel execution and can be specialised for concrete applications by providing problem-specific functions and parameters. Efficient skeleton compositionality is of particular importance because complex, specialised skeletons can be composed of simpler base skeletons. The resulting modularity is especially important in the context of functional programming and should not be missing from a functional language. We subdivide composition into three categories:
- Nesting: A skeleton is instantiated from another skeleton instance. Communication is tree-shaped, along the call hierarchy. This is directly supported by Eden.
- Composition in sequence: The result of one skeleton is the input of a succeeding skeleton. Function composition is expressed in Eden by the ( . ) operator. For performance reasons, the processes of both skeletons should be able to exchange results directly instead of taking the indirection via the caller process. We therefore introduce the remote data concept.
- Iteration: A skeleton is called in sequence a variable number of times. This can be defined using recursion and composition in sequence. We optimise the number of skeleton instances, the communication between the iteration steps, and the control of the loop. To this end, we developed an iteration framework in which iteration skeletons are composed from control and body skeletons.
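As a rough Python analogue (Eden itself extends Haskell, so this is only a sketch of the idea, not Eden code), a skeleton is simply a higher-order function encapsulating a fixed parallel pattern, and composition in sequence is ordinary function composition:

```python
from concurrent.futures import ThreadPoolExecutor

def par_map(f, xs, workers=4):
    """A minimal 'map' skeleton: the parallel pattern is fixed,
    the problem-specific part is just the function f."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, xs))   # pool.map preserves input order

def compose(g, f):
    """Composition in sequence of two skeleton instances."""
    return lambda xs: g(f(xs))

# Specialise the generic skeleton twice and compose the instances.
pipeline = compose(lambda ys: par_map(lambda y: y + 1, ys),
                   lambda xs: par_map(lambda x: x * x, xs))
result = pipeline([1, 2, 3])   # squares each element, then increments
```

In this naive form the second stage can only start once the first has fully finished and returned its list through the caller, which is exactly the indirection the remote data concept is designed to remove.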
Central to our composition concept is remote data. We send a remote data handle instead of the ordinary data; the handle is used at its destination to request the referenced data. Remote data can be used inside arbitrary container types for efficient skeleton composition, similar to ordinary distributed data types. The free combinability of remote data with arbitrary container types leads to a high degree of flexibility: the programmer is not restricted to a predefined set of distributed data types and (re-)distribution functions, and can moreover use remote data with arbitrary container types to elegantly create process topologies.
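A minimal sketch of the remote data idea (hypothetical names, using Python threads and queues; Eden's actual mechanism is based on dynamic channels between processes): the caller passes around only cheap handles, while the producing and consuming stages exchange the actual data directly.

```python
import threading
import queue

class RemoteData:
    """A handle to data that will be produced by another worker.
    Passing the handle is cheap; fetch() blocks until the producer
    has released the value, so two skeleton stages can exchange
    results directly instead of routing them via the caller."""
    def __init__(self):
        self._q = queue.Queue(maxsize=1)
    def release(self, value):       # producer side
        self._q.put(value)
    def fetch(self):                # consumer side, blocks until available
        return self._q.get()

def stage(f, handle_in, handle_out):
    """One skeleton stage: fetch its input, apply f, release its output."""
    def run():
        handle_out.release(f(handle_in.fetch()))
    t = threading.Thread(target=run)
    t.start()
    return t

# Compose two stages in sequence: data flows a -> stage1 -> b -> stage2 -> c
a, b, c = RemoteData(), RemoteData(), RemoteData()
t1 = stage(lambda x: x * 2, a, b)
t2 = stage(lambda x: x + 1, b, c)
a.release(20)
t1.join(); t2.join()
result = c.fetch()
```

The caller only wires handles together; the intermediate value never passes through it, which is the performance point made above.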
For the special case of skeleton iteration, we prevent the repeated construction and deconstruction of skeleton instances for each single iteration step, which is common for the recursive use of skeletons. This minimises the parallel overhead for process and channel creation and allows data to be kept local on persistent processes. To this end we provide a skeleton framework. This concept is independent of remote data; however, the use of remote data in combination with the iteration framework makes the framework more flexible.
For our case studies, both approaches perform competitively compared to programs with an identical parallel structure but implemented using monolithic skeletons, i.e. skeletons not composed from simpler ones.
Further, we present extensions of Eden which enhance composition support: generalisation of overloaded communication, generalisation of process instantiation, compositional process placement, and extensions of Box types used to adapt communication behaviour.
Algorithms as Folding: Reframing the Analytical Focus. 6(2): 1-12.
This article proposes an analytical approach to algorithms that stresses operations of folding. The aim of this approach is to broaden the common analytical focus on algorithms as biased and opaque black boxes, and instead to foreground the many relations with which algorithms are interwoven. Our proposed approach thus highlights how algorithms fold heterogeneous things: data, methods and objects, with multiple ethical and political effects. We exemplify the utility of our approach by proposing three specific operations of folding: proximation, universalisation and normalisation. The article develops these three operations through four empirical vignettes, drawn from different settings that deal with algorithms in relation to AIDS, Zika and stock markets. In proposing this analytical approach, we wish to draw attention to the many different attachments and relations that algorithms enfold, and so to produce accounts that show how algorithms dynamically combine and reconfigure different social and material heterogeneities, as well as the ethical, normative and political consequences of these reconfigurations.
Domain-Specific Modelling for Coordination Engineering
Multi-core processors offer increased speed and efficiency on various devices, from desktop computers to smartphones. But the challenge is not only how to gain the utmost performance, but also how to support portability, continuity with prevalent technologies, and the dissemination of existing principles of parallel software design. This thesis shows how model-driven software development can help in engineering parallel systems. Rather than simply offering yet another programming approach for concurrency, it proposes using an explicit coordination model as the first development artefact. Key topics include:
- Basic foundations of parallel software design, coordination models and languages, and model-driven software development
- How Coordination Engineering eases parallel software design by separating concerns and activities across roles
- How the Space-Coordinated Processes (SCOPE) coordination model combines coarse-grained choreography of parallel processes with fine-grained parallelism within these processes
- Extensive experimental evaluation of SCOPE implementations and the application of Coordination Engineering
Just-in-time Hardware generation for abstracted reconfigurable computing
This thesis addresses the use of reconfigurable hardware in computing platforms, in order to harness the performance benefits of dedicated hardware whilst maintaining the flexibility associated with software. Although the reconfigurable computing concept is not new, the low-level nature of the supporting tools normally used, together with the consequent limited level of abstraction and resultant lack of backwards compatibility, has prevented the widespread adoption of this technology. In addition, bandwidth and architectural limitations have seriously constrained the potential improvements in performance. A review of existing approaches and tool flows is conducted to highlight the current problems being faced in this field. The objective of the work presented in this thesis is to introduce a radically new approach to reconfigurable computing tool flows. The runtime-based tool flow introduces complete abstraction between the application developer and the underlying hardware. This new technique eliminates the ease-of-use and backwards-compatibility issues that have plagued the reconfigurable computing concept, and could pave the way for viable mainstream reconfigurable computing platforms. An easy-to-use, cycle-accurate behavioural modelling system is also presented, which was used extensively during the early exploration of new concepts and architectures. Some performance improvements produced by the new reconfigurable computing tool flow, when applied to both a MIPS-based embedded platform and the Cray XD1, are also presented. These results are then analysed, and the hardware and software factors affecting the performance increases that were obtained are discussed, together with potential techniques that could be used to further increase the performance of the system.
Lastly, a heterogeneous computing concept is proposed, in which a computer system containing multiple types of computational resource is envisaged, each type having its own strengths and weaknesses (e.g. DSPs, CPUs, FPGAs). A revolutionary new method of fully exploiting the potential of such a system, whilst maintaining scalability, backwards compatibility, and ease of use, is also presented.