1,962 research outputs found

    Toward Business Integrity Modeling and Analysis Framework for Risk Measurement and Analysis

    Financialization has contributed to economic growth, but it has also caused scandals, mis-selling, rogue trading, tax evasion, and market speculation, and to a certain extent it has contributed to social and economic instability. It is therefore an important aspect of Enterprise Security, Privacy, and Risk (ESPR), particularly in risk research and analysis. To minimize the damage caused by a lack of regulatory compliance, governance, ethical responsibility, and trust, we propose a Business Integrity Modeling and Analysis (BIMA) framework that unifies business integrity with performance using big data predictive analytics and business intelligence. Its services include modeling risk and asset prices and aligning them with business strategies informed by market trend analysis, so that the resulting services are both transparent and fair. The BIMA framework uses Monte Carlo simulation, the Black–Scholes–Merton model, and the Heston model to perform financial, operational, and liquidity risk analysis, and presents the outputs as analytics and visualizations. Our results demonstrate supplier bankruptcy modeling, risk pricing, high-frequency pricing simulations, London Interbank Offered Rate (LIBOR) simulation, and speculation detection, providing a variety of critical risk analyses. These approaches to problems arising from financial services and operational risk show that the BIMA framework, as an output of our data analytics research, can effectively combine integrity and risk analysis with overall business performance and can contribute to operational risk research.
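    The abstract names Monte Carlo simulation and the Black–Scholes–Merton model as two of the risk-pricing techniques in the BIMA framework. The sketch below is only a minimal illustration of how those two techniques relate (an analytic option price versus a simulated one under geometric Brownian motion); the function names and parameter values are assumptions, not part of the BIMA implementation.

```python
# Minimal sketch: Black-Scholes-Merton closed-form pricing vs. a Monte Carlo
# estimate under geometric Brownian motion. Illustrative only; not the BIMA code.
import numpy as np
from scipy.stats import norm

def bsm_call_price(s0, k, r, sigma, t):
    """Closed-form Black-Scholes-Merton price of a European call."""
    d1 = (np.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s0 * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

def mc_call_price(s0, k, r, sigma, t, n_paths=100_000, seed=0):
    """Monte Carlo estimate of the same price from simulated terminal values."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

if __name__ == "__main__":
    # Hypothetical inputs: spot 100, strike 105, 2% rate, 20% vol, 1-year maturity.
    print(bsm_call_price(100, 105, 0.02, 0.2, 1.0))  # analytic
    print(mc_call_price(100, 105, 0.02, 0.2, 1.0))   # simulated, converges to the analytic value
```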

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Fundamental design decisions in big data systems typically include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution for a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL appears to be the technology that fulfills its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
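    To make the four data models the paper compares more concrete, the sketch below shows one hypothetical customer record expressed in the shapes implied by three of them. It is an illustration of the modeling difference only, not an example from the paper, and all keys and field names are invented.

```python
# Illustrative sketch: the same entity shaped for different NoSQL data models.

# Document-oriented: the whole entity is one nested, self-describing document.
document = {
    "_id": "cust-42",
    "name": "Acme Ltd",
    "orders": [{"order_id": 7, "total": 129.90}],
}

# Key-value: opaque values addressed by a composite key; the store knows nothing
# about the structure inside each value.
key_value = {
    "cust-42/profile": b'{"name": "Acme Ltd"}',
    "cust-42/order/7": b'{"total": 129.90}',
}

# Wide-column: rows grouped into column families, with sparse per-row columns.
wide_column = {
    "cust-42": {
        "profile": {"name": "Acme Ltd"},
        "orders": {"7:total": 129.90},
    },
}

# A graph model would instead represent (customer)-[:PLACED]->(order) as nodes and edges.
print(document["orders"][0]["total"], wide_column["cust-42"]["orders"]["7:total"])
```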

    PiCo: A Domain-Specific Language for Data Analytics Pipelines

    In the world of Big Data analytics, there is a series of tools aimed at simplifying the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data, and execution models, for which only informal (and often confusing) semantics are generally provided, all share a common underlying model, namely the Dataflow model. Using this model as a starting point, it is possible to categorize and analyze almost all aspects of Big Data analytics tools from a high-level perspective. This analysis can be considered a first step toward a formal model to be exploited in the design of a (new) framework for Big Data analytics. By putting clear separations between all levels of abstraction (i.e., from the runtime to the user API), it is easier for a programmer or software designer to avoid mixing low-level with high-level aspects, as often happens in state-of-the-art Big Data analytics frameworks. From the user-level perspective, we think that clearer and simpler semantics are preferable, together with a strong separation of concerns. For this reason, we use the Dataflow model as a starting point to build a programming environment with a simplified programming model implemented as a Domain-Specific Language, sitting on top of a stack of layers that forms a prototypical framework for Big Data analytics. The contribution of this thesis is twofold: first, we show that the proposed model is (at least) as general as existing batch and streaming frameworks (e.g., Spark, Flink, Storm, Google Dataflow), thus making it easier to understand high-level data-processing applications written in such frameworks. As a result of this analysis, we provide a layered model that can represent tools and applications following the Dataflow paradigm, and we show how the analyzed tools fit into each level. Second, we propose a programming environment based on this layered model in the form of a Domain-Specific Language (DSL) for processing data collections, called PiCo (Pipeline Composition). The main entity of this programming model is the Pipeline, basically a DAG-composition of processing elements. The model is intended to give the user a single interface for both stream and batch processing, completely hiding data management and focusing only on operations, which are represented by Pipeline stages. Our DSL is built on top of the FastFlow library, exploiting both shared-memory and distributed parallelism, and is implemented in C++11/14 with the aim of porting C++ into the Big Data world.
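    As a rough illustration of the Dataflow-style pipeline composition the thesis describes (a Pipeline built from processing stages, with one interface over batch and stream input), here is a minimal Python sketch. It is not the PiCo C++ API; the class and method names are assumptions chosen only to mirror the idea of stage composition.

```python
# Minimal sketch of pipeline composition in the Dataflow style: each stage is a
# lazy transformation over a collection, so the same pipeline can consume a
# finite batch or an unbounded stream. Illustrative only; not the PiCo DSL.
from typing import Any, Callable, Iterable

class Pipeline:
    def __init__(self):
        self.stages: list[Callable[[Iterable[Any]], Iterable[Any]]] = []

    def map(self, fn):
        self.stages.append(lambda items: (fn(x) for x in items))
        return self

    def flatmap(self, fn):
        self.stages.append(lambda items: (y for x in items for y in fn(x)))
        return self

    def filter(self, pred):
        self.stages.append(lambda items: (x for x in items if pred(x)))
        return self

    def run(self, source: Iterable[Any]):
        data = source  # may be a finite batch or an unbounded stream
        for stage in self.stages:
            data = stage(data)
        return data

# Word-count-style usage over a small batch; every stage is a generator, so the
# same Pipeline object could also be fed a streaming source.
p = Pipeline().flatmap(str.split).map(str.lower).filter(lambda w: len(w) > 3)
print(list(p.run(["Big Data analytics", "Dataflow model"])))
```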

    A Mechanism-Based Explanation of the Institutionalization of Semantic Technologies in the Financial Industry

    This paper explains how the financial industry is solving its data, risk management, and associated vocabulary problems using semantic technologies. The paper is the first to examine this phenomenon and to identify the social and institutional mechanisms being applied to socially construct a standard common vocabulary using ontology-based models. This standardized ontology-based common vocabulary will underpin the design of the next generation of semantically enabled information systems (IS) for the financial industry. The mechanisms that are helping institutionalize this common vocabulary are identified using a longitudinal case study whose embedded units of analysis focus on central agents of change: the Enterprise Data Management Council and the Object Management Group. All this has important implications for society, as it is intended that semantically enabled IS will, for example, provide stakeholders such as regulators with better transparency over systemic risks to national and international financial systems, thereby mitigating or avoiding future financial crises.
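    For readers unfamiliar with what an "ontology-based common vocabulary" looks like in practice, the sketch below publishes one shared class and one shared property under a common namespace, which different firms' systems could then reuse. It is illustrative only: the namespace URI and term names are hypothetical, rdflib is assumed to be available, and this is not a fragment of the standard the paper studies.

```python
# Illustrative sketch of a shared, ontology-based vocabulary term using rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

FIN = Namespace("http://example.org/financial-vocabulary#")  # hypothetical namespace

g = Graph()
g.bind("fin", FIN)

# A shared class and a shared property that every participating system would reuse.
g.add((FIN.InterestRateSwap, RDF.type, OWL.Class))
g.add((FIN.InterestRateSwap, RDFS.label, Literal("Interest Rate Swap", lang="en")))
g.add((FIN.hasNotionalAmount, RDF.type, OWL.DatatypeProperty))
g.add((FIN.hasNotionalAmount, RDFS.domain, FIN.InterestRateSwap))

print(g.serialize(format="turtle"))
```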

    Economic Trends in Enterprise Search Solutions

    Enterprise search technology retrieves information within organizations. This data may be proprietary or public, and access to it may or may not be restricted. Enterprise search solutions make business processes more efficient, particularly in data-intensive companies. This technology is key to increasing the competitiveness of the digital economy; it therefore constitutes a strategic market for the European Union. The Enterprise Search Solution (ESS) market was worth close to one billion USD in 2008 and is expected to grow more quickly than the overall market for information and knowledge management systems. Optimistic market forecasts expect the market size to exceed 1,200 million USD by the end of 2010; other market analyses see the growth rate slowing down and stabilizing at around 10% a year in 2010. Even in the least favourable case, enterprise search remains an attractive market, particularly because of the opportunities expected to arise from the convergence of ESS and information systems. This report looks at the demand and supply sides of ESS and provides data about the market. It presents the evolution of market dynamics over the past decade and describes the current situation. Our main thesis is that ESS currently sits at the point where two established markets, namely web search and the management of information systems, overlap. The report offers evidence that these two markets are converging and discusses the role of the different stakeholders (providers of web search engines, enterprise resource management tools, pure enterprise search tools, etc.) in this changing context.