8 research outputs found

    LegitimID: A federative digital identity system for strong authentication

    The growing use of online services has driven the emergence of digital identity as a mechanism for data security and personal information protection that can increase trust among online users and applications. This paper introduces a new security system built around the digital identity concept, implemented using a federative multifactor strong authentication framework and tested in an authentic online educational setting to cover the complete life cycle of business privacy. System performance, evaluated on a sample of 108 students, revealed excellent acceptance of and confidence in the system among the users.
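    The abstract does not specify which authentication factors the framework combines; purely as an illustration of one common second factor in multifactor schemes, the sketch below implements standard HOTP/TOTP one-time passwords (RFC 4226/6238) with the Python standard library. The function names are our own and do not come from the paper.

    ```python
    import hashlib
    import hmac
    import struct
    import time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """RFC 4226 HMAC-based one-time password over an 8-byte counter."""
        msg = struct.pack(">Q", counter)
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F  # dynamic truncation offset
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(secret: bytes, step: int = 30, at=None) -> str:
        """RFC 6238 time-based OTP: HOTP keyed to the current 30 s window."""
        t = int((time.time() if at is None else at) // step)
        return hotp(secret, t)
    ```

    A server-side check would compare the user-submitted code against `totp(secret)` for the current window (and usually the adjacent windows, to tolerate clock drift).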

    Portfolio Volatility Estimation Relative to Stock Market Cross-Sectional Intrinsic Entropy

    Selecting stock portfolios and assessing their relative volatility risk compared to the market as a whole, market indices, or other portfolios is of great importance to professional fund managers and individual investors alike. Our research uses the cross-sectional intrinsic entropy (CSIE) model to estimate the cross-sectional volatility of the stock groups that can be considered together as portfolio constituents. The CSIE market volatility estimate is based on daily traded prices—open, high, low, and close (OHLC)—along with the daily traded volume for symbols listed on the considered market. In our study, we benchmark portfolio volatility risks against the volatility of the entire market provided by the CSIE and against the volatility of market indices computed using longitudinal data. This article introduces CSIE-based betas to characterise the relative volatility risk of a portfolio against market indices and the market as a whole. We empirically prove that, through CSIE-based betas, one can discover, for any given time interval, multiple sets of symbols that outperform the market indices in terms of rate of return while maintaining the same level of risk as, or even a lower level than, the market index. These sets of symbols can be used as constituent stock portfolios and, in connection with the perspective provided by the CSIE volatility estimates, their relative volatility risk can be assessed hierarchically within the broader context of the overall volatility of the stock market.
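    The exact definition of the CSIE-based beta is in the paper; as a minimal sketch, assuming the beta is the ratio of a portfolio's volatility estimate to the benchmark's estimate over the same window (our reading of "relative volatility risk", not a formula quoted from the article):

    ```python
    import numpy as np

    def relative_volatility_beta(portfolio_vol, benchmark_vol) -> float:
        """Average ratio of portfolio volatility estimates to benchmark
        volatility estimates over a common window; values below 1 indicate
        lower volatility risk than the benchmark."""
        p = np.asarray(portfolio_vol, dtype=float)
        m = np.asarray(benchmark_vol, dtype=float)
        return float(np.mean(p / m))
    ```

    With daily CSIE series for a candidate portfolio and for the whole market, such a ratio would let portfolios be ranked by relative risk while their returns are compared separately.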

    The Cross-Sectional Intrinsic Entropy—A Comprehensive Stock Market Volatility Estimator

    To take into account the temporal dimension of uncertainty in stock markets, this paper introduces a cross-sectional estimation of stock market volatility based on the intrinsic entropy model. The proposed cross-sectional intrinsic entropy (CSIE) is defined and computed as a daily volatility estimate for the entire market, grounded on the daily traded prices—open, high, low, and close (OHLC)—along with the daily traded volume for all symbols listed on The New York Stock Exchange (NYSE) and The National Association of Securities Dealers Automated Quotations (NASDAQ). We perform a comparative analysis between the time series obtained from the CSIE and the historical volatility as provided by the estimators close-to-close, Parkinson, Garman–Klass, Rogers–Satchell, Yang–Zhang, and intrinsic entropy (IE), defined and computed from historical OHLC daily prices of the Standard & Poor's 500 index (S&P 500), the Dow Jones Industrial Average (DJIA), and the NASDAQ Composite index, respectively, for various time intervals. Our study uses an approximately 6000-day reference interval, starting 1 January 2001 and ending 23 January 2022, for both the NYSE and the NASDAQ. We found that the CSIE market volatility estimator is consistently at least 10 times more sensitive to market changes compared to the volatility estimate captured through the market indices. Furthermore, beta values confirm a consistently lower volatility risk for market indices overall, between 50% and 90% lower than the volatility risk of the entire market, across various time intervals and rolling windows.
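    Two of the classical benchmark estimators named above have simple closed forms that can be computed directly from OHLC series. The sketch below implements the standard Parkinson and Garman–Klass daily estimators (annualization omitted); it illustrates the benchmarks, not the paper's CSIE model itself.

    ```python
    import numpy as np

    def parkinson_vol(high, low) -> float:
        """Parkinson (1980) range estimator:
        sigma^2 = mean( ln(H/L)^2 ) / (4 ln 2)."""
        h, l = np.asarray(high, float), np.asarray(low, float)
        return float(np.sqrt(np.mean(np.log(h / l) ** 2) / (4 * np.log(2))))

    def garman_klass_vol(open_, high, low, close) -> float:
        """Garman-Klass (1980) OHLC estimator:
        sigma^2 = mean( 0.5 ln(H/L)^2 - (2 ln 2 - 1) ln(C/O)^2 )."""
        o, h = np.asarray(open_, float), np.asarray(high, float)
        l, c = np.asarray(low, float), np.asarray(close, float)
        term = (0.5 * np.log(h / l) ** 2
                - (2 * np.log(2) - 1) * np.log(c / o) ** 2)
        return float(np.sqrt(np.mean(term)))
    ```

    Both use the intraday high–low range, which is why they react faster than close-to-close volatility; a cross-sectional estimator such as the CSIE instead aggregates across all listed symbols within a single day.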

    Integrating R and Java for Enhancing Interactivity of Algorithmic Data Analysis Software Solutions

    Conceiving software solutions for statistical processing and algorithmic data analysis involves handling diverse data, fetched from various sources and in different formats, and presenting the results in a suggestive, tailorable manner. Our ongoing research aims to design programming techniques for integrating the R development environment with the Java programming language, achieving interoperability at the source-code level. The goal is to combine the intensive data-processing capabilities of the R programming language, along with its multitude of statistical function libraries, with the flexibility offered by the Java programming language and platform in terms of graphical user interfaces and mathematical function libraries. Both development environments are multiplatform oriented and can complement each other through interoperability. R is a comprehensive and concise programming language, benefiting from a continuously expanding and evolving set of packages for statistical analysis developed by the open-source community. While it is a very efficient environment for statistical data processing, the R platform lacks support for developing user-friendly, interactive graphical user interfaces (GUIs). Java, on the other hand, is a high-level, object-oriented programming language that supports designing and developing performant, interactive frameworks for general-purpose software solutions through the Java Foundation Classes, JavaFX, and various graphical libraries. In this paper we treat both directions of integration and interoperability: embedding Java code into R applications, and bringing R processing sequences into Java-driven software solutions. Our research has been conducted through case studies concerning pattern recognition and cluster analysis.

    Sustainable development in education – automating curriculum assessment

    The perpetual need to develop a sustainable economic environment places education policies at the foundation of social adaptability. Creating and maintaining curriculum content that meets the demands of a continuously changing society, and the challenges that such rapid evolution puts on the labour market, is one of the top priorities for any education system and for any institution involved in education at any level. This paper proposes a cognitive computing solution for assessing, in a programmatic manner, large corpora of curriculum content created by teachers in the lower secondary education environment for Informatics instruction in Romanian schools. The result of this initiative at the national level is a corpus of curricular content that must be evaluated to verify the degree to which the material meets the requirements of the national curriculum. We addressed this crucial yet tedious process by designing and implementing a solution for automating curriculum assessment through cognitive computing. The paper outlines a sustainable framework for evaluating curriculum content in an automated fashion and for providing timely, critical feedback both to content creators and to the policy makers responsible for creating economically viable and future-adaptable education strategies. First published online 05 July 202

    R Spatial and GIS Interoperability for Ethnic, Linguistic and Religious Diversity Analysis in Romania

    Diversity aspects, particularly ethnic, linguistic and religious ones, have become global concerns, attracting broad interest and proving extremely sensitive in recent years. Traditionally, these were issues concerning only particular countries and/or regions, due to specific historical conditions. The recent waves of mass migration towards wealthier countries have raised serious problems regarding populations that arrive with radically different ethnic, linguistic and religious backgrounds compared to the local population. Our research focuses on analysing ethnic, linguistic and religious diversity in Romania at the Local Administrative Units level (LAU2), along with segregation analysis of the same aspects at the county (NUTS3) and region (NUTS2) levels, by integrating R processing flexibility with Geographic Information Systems (GIS) presentation abilities. The R programming language offers support for developing integrated analysis solutions, based on specialized packages for computing diversity/segregation indices, in connection with packages that allow processing and visualising data geospatially, through interoperability with popular GIS such as ArcGIS and QGIS. Romanian census data is employed as the data source for the analysis, with a focus on the latest census data, from 2011.
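    The abstract does not name the specific indices computed; as an illustration of the kind of diversity measure such packages provide, the sketch below implements two standard ones, the Shannon entropy index and the Gini–Simpson index, over group counts for one administrative unit.

    ```python
    import math

    def shannon_diversity(counts) -> float:
        """Shannon diversity H = -sum p_i ln p_i, where p_i is the share
        of group i in the unit's population; 0 for a homogeneous unit."""
        total = sum(counts)
        return -sum((c / total) * math.log(c / total)
                    for c in counts if c > 0)

    def simpson_diversity(counts) -> float:
        """Gini-Simpson index 1 - sum p_i^2: the probability that two
        randomly chosen residents belong to different groups."""
        total = sum(counts)
        return 1.0 - sum((c / total) ** 2 for c in counts)
    ```

    Applied per LAU2 unit and then aggregated at NUTS3/NUTS2 level, such indices can be joined to administrative boundary layers for choropleth mapping in a GIS.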

    An Intrinsic Entropy Model for Exchange-Traded Securities

    This paper introduces an intrinsic entropy model which can be employed as an indicator for gauging investors' interest in a given exchange-traded security, along with the state of the overall market corroborated by individual security trading data. Although the syntagma of intrinsic entropy might sound somewhat pleonastic, since entropy itself characterizes the fundamentals of a system, we would like to make a clear distinction between entropy models based on the values that a random variable may take and the model that we propose, which employs actual stock exchange trading data. The model that we propose for the intrinsic entropy does not include any exogenous factor that could influence the level of entropy. The intrinsic entropy signals whether the market is inclined to buy the security or rather to sell it. We further explore the usage of the intrinsic entropy model for algorithmic trading, in order to demonstrate the value of our model in assisting investors' intraday stock portfolio selection, along with timely generated signals for supporting the buy/sell decision-making process. The test results provide empirical evidence that the proposed intrinsic entropy model can be used as an indicator for evaluating the direction and the intensity of intraday trading activity of an exchange-traded security. The data employed for testing consisted of historical intraday transactions executed on The Bucharest Stock Exchange (BVB).
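    The paper defines the model precisely; the sketch below is only our simplified reading of the abstract: each executed trade's volume share acts as an entropic probability, and the sign of its price movement orients the corresponding entropy term, so a positive total suggests buying pressure and a negative one selling pressure. The function name and the exact weighting are illustrative assumptions, not the paper's formula.

    ```python
    import math

    def intraday_entropy_signal(trades) -> float:
        """Illustrative buy/sell signal from intraday trades.

        trades: iterable of (price_change, volume) per executed transaction.
        Each volume share p = v / V contributes -p ln p, signed by the
        direction of the trade's price change; trades with no price
        change contribute nothing.
        """
        total_volume = sum(v for _, v in trades)
        signal = 0.0
        for dp, v in trades:
            p = v / total_volume
            if p > 0 and dp != 0:
                signal += math.copysign(1.0, dp) * (-p * math.log(p))
        return signal
    ```

    Under this reading, the magnitude of the signal reflects how dispersed the volume is across trades, while its sign reflects the prevailing direction of price movement.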

    A Volatility Estimator of Stock Market Indices Based on the Intrinsic Entropy Model

    Grasping the historical volatility of stock market indices and accurately estimating it are two of the major focuses of those involved in the financial securities industry and in derivative instruments pricing. This paper presents the results of employing the intrinsic entropy model as a substitute for estimating the volatility of stock market indices. Diverging from the widely used volatility models that take into account only elements related to the traded prices, namely the open, high, low, and close prices of a trading day (OHLC), the intrinsic entropy model takes into account the traded volumes during the considered time frame as well. We adjust the intraday intrinsic entropy model that we introduced earlier for exchange-traded securities in order to connect daily OHLC prices with the ratio of the corresponding daily volume to the overall volume traded in the considered period. The intrinsic entropy model conceptualizes this ratio as an entropic probability, or market credence, assigned to the corresponding price level. The intrinsic entropy is computed using historical daily data for traded market indices (S&P 500, Dow 30, NYSE Composite, NASDAQ Composite, Nikkei 225, and Hang Seng Index). We compare the results produced by the intrinsic entropy model with the volatility estimates obtained for the same data sets using widely employed industry volatility estimators. The intrinsic entropy model proves to consistently deliver reliable estimates for various time frames, while showing peculiarly high values for the coefficient of variation, with the estimates falling in a significantly lower interval range compared with those provided by the other advanced volatility estimators.
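    The abstract's core construction, treating each day's share of the period's total volume, p_t = v_t / V, as an entropic probability, can be sketched directly; the full model then attaches these weights to the daily OHLC price levels, which the paper specifies and this sketch does not.

    ```python
    import math

    def daily_volume_entropy(volumes) -> float:
        """Shannon entropy of the daily-volume distribution over a window:
        p_t = v_t / V and H = -sum p_t ln p_t. These p_t are the
        'market credence' weights the model assigns to each trading day."""
        total = sum(volumes)
        return -sum((v / total) * math.log(v / total)
                    for v in volumes if v > 0)
    ```

    Evenly spread volume maximizes the entropy at ln(n) for an n-day window, while volume concentrated in a few days pushes it toward zero, which is what makes the volume ratio informative beyond prices alone.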