3,129 research outputs found

    FINANCIAL VISUALIZATION APPLICATION ADOPTING BIMODAL VISUALIZATION

    The visualization system considers the needs of financial professionals and offers an alternative way to understand financial data. It is a process of analyzing and converting data from two different modalities into graphics, allowing financial decision makers and financial analysts to gain insight into the data, draw conclusions, and interact with the data directly. The purpose of this project is to develop a visualization for a financial instrument, equity, and to research the effectiveness of financial visualization for financial traders and investors. To make sure that financial visualization for equities will work, the project first focuses on another element of finance: the share price. The share price was chosen because it is one of the most important elements in the financial market. The visualization system deals with two modalities of information: numerical and textual. A financial trader or investor makes decisions based on the behaviour of equities or share prices over a certain period of time, and consults other sources of information related directly or indirectly to the instruments. These sources include internal factors of a given industry or economy, as well as sector-level and wider issues that may affect the instrument, which here is equities.
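
    As a rough illustration of the bimodal idea, the sketch below overlays a textual modality (dated headlines) on a numerical modality (a share-price series). This is a minimal sketch, not the project's actual system; the prices, dates, headlines, and plotting choices are all invented for illustration.

        # Minimal sketch of bimodal (numerical + textual) financial visualization.
        # All data below is illustrative; a real system would pull prices and
        # news from market-data and news feeds.
        import matplotlib.pyplot as plt
        import pandas as pd

        # Numerical modality: a daily share-price series (hypothetical values).
        dates = pd.date_range("2023-01-02", periods=10, freq="B")
        prices = [101.2, 102.5, 101.8, 104.0, 103.1,
                  105.6, 104.9, 106.3, 107.0, 105.4]

        # Textual modality: dated headlines that may explain price moves.
        news = {dates[3]: "Earnings beat expectations",
                dates[8]: "Sector downgrade"}

        fig, ax = plt.subplots(figsize=(8, 4))
        ax.plot(dates, prices, marker="o", label="Share price")

        # Project the textual modality onto the price chart as annotations.
        for day, headline in news.items():
            price = prices[list(dates).index(day)]
            ax.annotate(headline, xy=(day, price), xytext=(0, 20),
                        textcoords="offset points", ha="center",
                        arrowprops=dict(arrowstyle="->"))

        ax.set_xlabel("Date")
        ax.set_ylabel("Price")
        ax.legend()
        plt.tight_layout()
        plt.show()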

    Reflecting on the Physics of Notations applied to a visualisation case study

    This paper presents a critical reflection upon the concept of the 'physics of notations' proposed by Moody. It is based upon the post hoc application of the concept in the analysis of a visualisation tool developed for a commonplace mathematics tool. Although this is not the intended design and development approach presumed or preferred by the physics of notations, there are benefits to analysing an extant visualisation. In particular, our analysis benefits from the visualisation having been developed and refined employing graphic design professionals and extensive formative user feedback; hence the rationale for specific visualisation features is to some extent traceable. This reflective analysis shines a light on features of both the visualisation and the domain visualised, illustrating that it could have been analysed more thoroughly at design time. However, the same analysis raises a variety of interesting questions about the viability of scoping practical visualisation design within the framework proposed by the physics of notations.

    Sharper asset ranking from total drawdown durations

    The total duration of drawdowns is shown to provide a moment-free, unbiased, efficient and robust estimator of Sharpe ratios, both for Gaussian and for heavy-tailed price returns. We then use this quantity to infer an analytic expression for the bias of moment-based Sharpe ratio estimators as a function of the return distribution tail exponent. The heterogeneity of tail exponents among assets at any given time implies that our new method yields significantly different asset rankings than those of moment-based methods, especially in periods of large volatility. This is fully confirmed using 20 years of historical data on 3449 liquid US equities.
    Comment: 21 pages, 12 figures
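
    To make the two quantities concrete, here is a minimal sketch that computes the total drawdown duration of a price series alongside the usual moment-based Sharpe ratio. The paper's actual mapping from drawdown durations to a Sharpe estimator is not reproduced here, and the simulated Gaussian returns are purely illustrative.

        # Minimal sketch: total drawdown duration vs. moment-based Sharpe ratio.
        # The paper's duration-based Sharpe estimator is not reproduced; this
        # only illustrates the two raw quantities involved.
        import numpy as np

        rng = np.random.default_rng(0)
        returns = rng.normal(loc=0.0004, scale=0.01, size=5000)  # illustrative daily returns
        prices = np.cumprod(1.0 + returns)

        # Moment-based (annualised) Sharpe ratio estimate.
        sharpe = np.mean(returns) / np.std(returns, ddof=1) * np.sqrt(252)

        # A day is "in drawdown" when the price sits below its running maximum;
        # the total drawdown duration is the count of such days.
        running_max = np.maximum.accumulate(prices)
        in_drawdown = prices < running_max
        total_drawdown_duration = int(in_drawdown.sum())

        print(f"moment-based Sharpe: {sharpe:.3f}")
        print(f"total drawdown duration: {total_drawdown_duration} of {len(prices)} days")

    Intuitively, the higher the Sharpe ratio, the less total time the price spends below its running maximum, which is the link a duration-based estimator can exploit.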

    Engineering Resilient Collective Adaptive Systems by Self-Stabilisation

    Collective adaptive systems are an emerging class of networked computational systems, particularly suited to application domains such as smart cities, complex sensor networks, and the Internet of Things. These systems tend to feature large scale, heterogeneity of communication models (including opportunistic peer-to-peer wireless interaction), and require inherent self-adaptiveness properties to address unforeseen changes in operating conditions. In this context, it is extremely difficult (if not seemingly intractable) to engineer reusable pieces of distributed behaviour so as to make them provably correct and smoothly composable. Building on the field calculus, a computational model (and associated toolchain) capturing the notion of aggregate network-level computation, we address this problem with an engineering methodology coupling formal theory and computer simulation. On the one hand, functional properties are addressed by identifying the largest-to-date field calculus fragment generating self-stabilising behaviour, guaranteed to eventually attain a correct and stable final state despite any transient perturbation in state or topology, and including highly reusable building blocks for information spreading, aggregation, and time evolution. On the other hand, dynamical properties are addressed by simulation, empirically evaluating the different performances that can be obtained by switching between implementations of building blocks with provably equivalent functional properties. Overall, our methodology sheds light on how to identify core building blocks of collective behaviour, and how to select implementations that improve system performance while leaving overall system function and resiliency properties unchanged.
    Comment: To appear in ACM Transactions on Modeling and Computer Simulation
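
    As an illustration of the kind of self-stabilising building block described above, the sketch below simulates a classic gradient computation for information spreading: every node repeatedly re-estimates its distance to a source from its neighbours' current estimates, so any transient perturbation of state is eventually repaired. This is a generic example, not the field calculus toolchain itself; the graph, link weights, and synchronous iteration scheme are assumptions made for the sketch.

        # Minimal sketch of a self-stabilising "gradient" building block: each
        # node repeatedly recomputes its distance estimate to a source from its
        # neighbours' estimates. Regardless of the initial state, the estimates
        # converge to shortest-path distances.
        import math

        # Illustrative undirected network: node -> {neighbour: link weight}.
        graph = {
            "a": {"b": 1.0, "c": 4.0},
            "b": {"a": 1.0, "c": 1.5, "d": 2.0},
            "c": {"a": 4.0, "b": 1.5},
            "d": {"b": 2.0},
        }
        source = "a"

        # Arbitrary (even adversarial) initial state: self-stabilisation means
        # the starting values do not matter.
        estimate = {node: 123.0 for node in graph}

        def gradient_round(estimate):
            """One synchronous round: every node updates from its neighbours."""
            new = {}
            for node, neighbours in graph.items():
                if node == source:
                    new[node] = 0.0
                else:
                    new[node] = min((estimate[n] + w for n, w in neighbours.items()),
                                    default=math.inf)
            return new

        for _ in range(10):  # enough rounds for this small network to stabilise
            estimate = gradient_round(estimate)

        print(estimate)  # converges to {'a': 0.0, 'b': 1.0, 'c': 2.5, 'd': 3.0}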

    The seven ages of Fortran

    When IBM's John Backus first developed the Fortran programming language, back in 1957, he certainly never dreamt that it would become a world-wide success and still be going strong many years later. Given the oft-repeated predictions of its imminent demise, starting around 1968, it is a surprise, even to some of its most devoted users, that this much-maligned language is not only still with us, but is being further developed for the demanding applications of the future. What has made this programming language succeed where most slip into oblivion? One reason is certainly that the language has been regularly standardized. In this paper we will trace the evolution of the language from its first version and through six cycles of formal revision, and speculate on how this might continue. Now, modern Fortran is a procedural, imperative, compiled language with a syntax well suited to a direct representation of mathematical formulas. Individual procedures may be compiled separately or grouped into modules, either way allowing the convenient construction of very large programs and procedure libraries. Procedures communicate via global data areas or by argument association. The language now contains features for array processing, abstract data types, dynamic data structures, object-oriented programming and parallel processing.

    Secret-free security: a survey and tutorial

    Classical keys, i.e., secret keys stored permanently in digital form in nonvolatile memory, appear indispensable in modern computer security, but they also constitute an obvious attack target in any hardware containing them. This contradiction has led to a perpetual battle between key extractors and key protectors over the decades. It has long been known that physical unclonable functions (PUFs) can at least partially overcome this issue, since they enable secure hardware without the above classical keys. Unfortunately, recent research has revealed that many standard PUFs still contain other types of secrets deeper in their physical structure, whose disclosure to adversaries breaks security as well: examples include the manufacturing variations in SRAM PUFs, the power-up states of SRAM PUFs, or the signal delays in Arbiter PUFs. Most of these secrets have already been extracted in viable attacks, breaking PUF security in practice. However, a second generation of physical security primitives now shows potential to resolve this remaining problem. In certain applications, so-called Complex PUFs, SIMPLs/PPUFs, and UNOs are able to realize not just hardware that is free of classical keys in the above sense, but hardware that is completely secret-free. In the resulting hardware systems, adversaries could hypothetically be allowed to inspect every bit and every atom, and learn any information present in any form in the system, without being able to break security. Secret-free hardware would hence promise to be innately and permanently immune against any physical or malware-based key extraction: there simply is no security-critical information left to extract. Our survey and tutorial paper takes the described situation as its starting point and categorizes, formalizes, and overviews the recently evolving area of secret-free security. We propose making hardware completely secret-free as a promising endeavor in future hardware designs, at least in those application scenarios where this is logically possible. In others, we suggest that secret-free techniques could be combined with standard PUFs and classical methods to construct hybrid systems with notably reduced attack surfaces.
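
    For readers unfamiliar with PUF-based authentication, the sketch below shows the generic challenge-response pattern that lets hardware authenticate without a stored digital key. The PUF is modelled here as an opaque device-specific function; real PUFs derive their responses from physical manufacturing variation, and every name and parameter below is an illustrative assumption, not an API from the paper.

        # Minimal sketch of generic PUF-style challenge-response authentication.
        # A real PUF's response comes from physical manufacturing variation; it
        # is modelled here, purely for illustration, as a keyed hash per device.
        import hmac, hashlib, secrets

        class ToyPUF:
            """Stand-in for a physical PUF: maps challenges to device-unique responses."""
            def __init__(self):
                # In real hardware this uniqueness is physical, not a stored
                # key; the stored key here is only a modelling convenience.
                self._physical_uniqueness = secrets.token_bytes(32)

            def respond(self, challenge: bytes) -> bytes:
                return hmac.new(self._physical_uniqueness, challenge,
                                hashlib.sha256).digest()

        device = ToyPUF()

        # Enrollment: the verifier records challenge-response pairs (CRPs)
        # while it still has trusted access to the device.
        crp_table = {}
        for _ in range(4):
            c = secrets.token_bytes(16)
            crp_table[c] = device.respond(c)

        # Authentication in the field: replay a fresh, never-reused challenge
        # and compare the device's response against the enrolled one.
        challenge, expected = crp_table.popitem()  # consume the CRP (no reuse)
        assert hmac.compare_digest(device.respond(challenge), expected)
        print("device authenticated")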

    Improving the Deductive System DES with Persistence by Using SQL DBMS's

    This work presents how persistent predicates have been included in the in-memory deductive system DES by relying on external SQL database management systems. We introduce how persistence is supported from a user's point of view and the possible applications the system opens up, as its deductive expressive power is projected onto relational databases. We also describe how computations of the deductive engine and the external database can be intermixed, explaining the implementation and some optimizations. Finally, a performance analysis is undertaken, comparing the system with current relational database systems.
    Comment: In Proceedings PROLE 2014, arXiv:1501.0169
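
    To illustrate the general idea of projecting deductive expressive power onto a relational database (not DES's actual implementation), the sketch below persists a predicate's facts as an SQL table and lets the SQL engine evaluate a recursive rule; the parent/ancestor schema and predicate names are illustrative assumptions.

        # Minimal sketch: a Datalog-style predicate persisted as an SQL table,
        # with a recursive rule (ancestor/2) evaluated by the SQL engine itself.
        # This illustrates the general idea only, not DES's implementation.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE parent (x TEXT, y TEXT)")  # persistent predicate parent/2
        conn.executemany("INSERT INTO parent VALUES (?, ?)",
                         [("ann", "bob"), ("bob", "cat"), ("cat", "dan")])

        # Deductive rules:
        #   ancestor(X, Y) :- parent(X, Y).
        #   ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
        # projected onto SQL as a recursive common table expression.
        rows = conn.execute("""
            WITH RECURSIVE ancestor(x, y) AS (
                SELECT x, y FROM parent
                UNION
                SELECT p.x, a.y FROM parent AS p JOIN ancestor AS a ON p.y = a.x
            )
            SELECT x, y FROM ancestor ORDER BY x, y
        """).fetchall()

        for x, y in rows:
            print(f"ancestor({x}, {y})")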