
    Designing Traceability into Big Data Systems

    Providing an appropriate level of accessibility and traceability to data or process elements (so-called Items) in large volumes of often Cloud-resident data is an essential requirement in the Big Data era. Enterprise-wide data systems need to be designed from the outset to support usage of such Items across the spectrum of business use, rather than from any specific application view. The design philosophy advocated in this paper is to drive the design process using a so-called description-driven approach, which enriches models with meta-data and descriptions and focuses the design process on Item re-use, thereby promoting traceability. Details are given of the description-driven design of big data systems at CERN, in health informatics, and in business process management. Evidence is presented that the approach leads to design simplicity and consequent ease of management, thanks to loose typing and the adoption of a unified approach to Item management and usage. Comment: 10 pages; 6 figures; in Proceedings of the 5th Annual International Conference on ICT: Big Data, Cloud and Security (ICT-BDCS 2015), Singapore, July 2015. arXiv admin note: text overlap with arXiv:1402.5764, arXiv:1402.575
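The Item-centric, description-driven idea summarised above can be illustrated with a short sketch: each loosely typed Item carries its own meta-data and lineage links, so traceability falls out of the data model itself. All class and field names here are hypothetical, not taken from the CERN systems the abstract refers to.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """Loosely typed element enriched with description (meta-data)."""
    item_id: str
    payload: object                                     # any data or process element
    meta: dict = field(default_factory=dict)            # description: origin, version, ...
    derived_from: list = field(default_factory=list)    # lineage links enabling traceability

def trace(item, registry):
    """Walk derived_from links back to the original source Items."""
    lineage = [item.item_id]
    for parent_id in item.derived_from:
        lineage.extend(trace(registry[parent_id], registry))
    return lineage

# Hypothetical example: an aggregate Item derived from a raw detector reading.
raw = Item("raw-001", payload=[1, 2, 3], meta={"origin": "detector"})
agg = Item("agg-001", payload=6, meta={"op": "sum"}, derived_from=["raw-001"])
registry = {i.item_id: i for i in (raw, agg)}
```

Because every Item records what it was derived from, `trace(agg, registry)` recovers the full lineage without any application-specific schema.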

    Multifaceted modelling of complex business enterprises

    We formalise and present a new generic multifaceted complex-system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historical data, survey data, management planning data, expert knowledge, and incomplete data. The structural complexities of the complex-system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex-system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations, such as planning and control.
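The conversion of diverse data types into Bayesian probability forms mentioned above can be sketched roughly as follows: historic counts become a prior, and expert or survey input becomes a likelihood that updates it. The delay states and all numbers are invented for illustration, not taken from the paper's railway case study.

```python
from collections import Counter

def counts_to_distribution(counts):
    """Normalise raw event counts (e.g. historic data) into a distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def bayes_update(prior, likelihood):
    """Combine a prior with per-state likelihoods (e.g. expert knowledge)."""
    unnorm = {k: prior[k] * likelihood.get(k, 0.0) for k in prior}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# Hypothetical historic data: observed delay states on a rail line.
historic = Counter({"on_time": 80, "minor_delay": 15, "major_delay": 5})
prior = counts_to_distribution(historic)

# Hypothetical expert knowledge: likelihood of a disruption report per state.
likelihood = {"on_time": 0.05, "minor_delay": 0.4, "major_delay": 0.9}

posterior = bayes_update(prior, likelihood)
```

After observing a disruption report, the posterior shifts probability mass toward the delay states, which is the kind of evidence integration a decision-support model needs.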

    Intra-Organizational Boundary Spanning: A Machine Learning Approach

    With the ubiquity of data, new opportunities have emerged for applying data science and machine learning to enhance the efficiency and effectiveness of knowledge management. With the growing use of social media technologies in enterprise settings, one specific area of knowledge management warranting the use of big data analytics is cross-boundary knowledge creation and management. The objective of this paper is to develop and test a machine learning approach that can assist knowledge managers in detecting three types of intra-organizational boundary-spanning activities, with the goal of predicting and improving such important outcomes as team effectiveness, collaboration, knowledge sharing, and innovation.
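As a rough sketch of the kind of classifier such an approach might use (the paper does not specify its model here), a bag-of-words nearest-centroid classifier over labelled activity snippets illustrates detecting boundary-spanning types. The three labels and all training snippets are invented.

```python
from collections import Counter
import math

# Hypothetical labelled snippets for three boundary-spanning activity types.
TRAIN = [
    ("sharing our roadmap with the analytics team", "information_sharing"),
    ("forwarded the design doc to another department", "information_sharing"),
    ("joint workshop between sales and engineering", "coordination"),
    ("scheduled a cross-team planning meeting", "coordination"),
    ("introduced the new hire to the platform group", "brokering"),
    ("connected two groups working on similar problems", "brokering"),
]

def vectorise(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One centroid (summed bag-of-words) per activity type.
centroids = {}
for text, label in TRAIN:
    centroids.setdefault(label, Counter()).update(vectorise(text))

def classify(text):
    v = vectorise(text)
    return max(centroids, key=lambda lbl: cosine(v, centroids[lbl]))
```

A real deployment would use richer features and a learned model, but the pipeline shape (text to vector to nearest activity type) is the same.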

    Digital Hash Data Encryption for IoT Financial Transactions using Blockchain Security in the Cloud

    Blockchain security via the Internet of Things (IoT) will reshape the decision-making function of the data-driven smart enterprise, realising the vision of a connected world of things. Enterprise IoT connects devices, personnel, and systems in such a way that they may communicate with each other through the Internet. A blockchain records enterprise financial transactions, and its digital network is a distributed transaction ledger. Today, enterprises need massive global data management and rapid trading volume to keep operations running and growing, which creates enterprise business challenges of security, transparency, and problem complexity. A poorly secured enterprise architecture offers a thief several avenues to obtain a specific user account, application, or device access, and therefore does not provide the necessary security. The proposed Digital Hash Data Encryption (DHDE) scheme is used to secure transaction data based on embedded systems and blockchain. Integrating blockchain and IoT technology may bring numerous benefits; the proposed DHDE algorithm is therefore discussed comprehensively as a blockchain technology integration system. The DHDE algorithm encrypts transaction data so that an unauthorized person cannot access enterprise transaction data based on embedded systems and blockchain.
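A minimal sketch of the hash-plus-encryption idea behind a scheme like DHDE (the paper's actual algorithm is not reproduced here): each transaction payload is encrypted with a keyed keystream and then linked into a hash chain, as in a blockchain ledger. The key and cipher construction are illustrative only, not production-grade cryptography.

```python
import hashlib
import hmac
import json

def block_hash(prev_hash, payload_bytes):
    """Link a transaction into a hash chain, as in a blockchain ledger."""
    return hashlib.sha256(prev_hash + payload_bytes).hexdigest()

def keystream_encrypt(key, data):
    """Toy stream cipher: XOR data with an HMAC-SHA256-derived keystream.
    Applying it twice with the same key decrypts. Illustrative only --
    real deployments should use a vetted AEAD cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Hypothetical transaction and pre-shared key.
tx = json.dumps({"from": "acct-1", "to": "acct-2", "amount": 250}).encode()
key = b"shared-enterprise-key"
cipher = keystream_encrypt(key, tx)
h = block_hash(b"\x00" * 32, cipher)   # genesis previous-hash of zeros
```

An unauthorized reader sees only `cipher` and `h`; without the key the payload is opaque, while the hash chain makes tampering detectable.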

    Graph Processing in Main-Memory Column Stores

    More and more, novel and traditional business applications leverage the advantages of a graph data model, such as the offered schema flexibility and an explicit representation of relationships between entities. As a consequence, companies are confronted with the challenge of storing, manipulating, and querying terabytes of graph data for enterprise-critical applications. Although these business applications operate on graph-structured data, they still require direct access to the relational data and typically rely on an RDBMS to keep a single source of truth and access. Existing solutions performing graph operations on business-critical data either use a combination of SQL and application logic or employ a graph data management system. For the first approach, relying solely on SQL results in poor execution performance caused by the functional mismatch between typical graph operations and the relational algebra. Worse still, graph algorithms expose a tremendous variety in structure and functionality caused by their often domain-specific implementations, and can therefore hardly be integrated into a database management system other than with custom coding. Since the majority of these enterprise-critical applications run exclusively on relational DBMSs, employing a specialized system for storing and processing graph data is typically not sensible. Besides the maintenance overhead of keeping the systems in sync, combining graph and relational operations is hard to realize, as it requires data transfer across system boundaries. Traversal operations are a basic ingredient of graph queries and algorithms, and a fundamental component of any database management system that aims at storing, manipulating, and querying graph data. Well-established graph traversal algorithms are standalone implementations relying on optimized data structures.
The integration of graph traversals as an operator into a database management system requires a tight integration into the existing database environment and the development of new components, such as a graph topology-aware optimizer with accompanying graph statistics, graph-specific secondary index structures to speed up traversals, and an accompanying graph query language. In this thesis, we introduce and describe GRAPHITE, a hybrid graph-relational data management system. GRAPHITE is a performance-oriented graph data management system built into an RDBMS, allowing processing of graph data to be seamlessly combined with relational data in the same system. We propose a columnar storage representation for graph data to leverage the already existing and mature data management and query processing infrastructure of relational database management systems. At the core of GRAPHITE we propose an execution engine based solely on set operations and graph traversals. Our design is driven by the observation that different graph topologies pose different algorithmic requirements on the design of a graph traversal operator. We derive two graph traversal implementations targeting the most common graph topologies and demonstrate how graph-specific statistics can be leveraged to select the optimal physical traversal operator. To accelerate graph traversals, we devise a set of graph-specific, updateable secondary index structures to improve the performance of vertex neighborhood expansion. Finally, we introduce a domain-specific language with an intuitive programming model to extend graph traversals with custom application logic at runtime. We use the LLVM compiler framework to generate efficient code that tightly integrates the user-specified application logic with our highly optimized built-in graph traversal operators.
Our experimental evaluation shows that GRAPHITE can outperform native graph management systems by several orders of magnitude while providing all the features of an RDBMS, such as transaction support, backup and recovery, and security and user management, effectively providing a promising alternative to specialized graph management systems that lack many of these features and require expensive data replication and maintenance processes.
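The columnar storage of graph topology and the set-based traversal engine described above can be sketched in a few lines: adjacency stored as two columns (CSR-style offsets and targets) and a level-synchronous expansion built from set operations. This is a generic CSR sketch, not GRAPHITE's actual storage format.

```python
# Columnar (CSR-style) adjacency: an offsets column and a targets column,
# mirroring how a column store can lay out graph topology.
# Graph: 0->1, 0->2, 1->3, 2->3, 3->4.
offsets = [0, 2, 3, 4, 5, 5]   # targets[offsets[v]:offsets[v+1]] = neighbors of v
targets = [1, 2, 3, 3, 4]

def bfs_levels(start):
    """Level-synchronous traversal: each frontier is expanded by slicing the
    targets column, then deduplicated and pruned with set operations."""
    visited = {start}
    frontier = {start}
    levels = [frontier]
    while frontier:
        nxt = set()
        for v in frontier:
            nxt.update(targets[offsets[v]:offsets[v + 1]])
        frontier = nxt - visited   # set difference removes already-seen vertices
        visited |= frontier
        if frontier:
            levels.append(frontier)
    return levels
```

The columnar layout keeps neighborhood expansion a contiguous scan, which is exactly the access pattern column stores are optimized for.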

    POPIM: Pragmatic online project information management for collaborative product development

    POPIM (Pragmatic Online Project Information Management) is a prototype web-based platform for managing collaborative product development projects within an extended enterprise environment. A suite of facilities is provided for geographically dispersed project team members to communicate, share, and collaborate on a project in a common workspace, where they enjoy online access to the most up-to-date project information, maintain a high level of data consistency, and accumulate experience and a knowledge base. In addition to standard project management functionality such as defining work breakdown structures, determining work schedules, teaming up with specialists, and allocating resources, POPIM incorporates workflow management (including dependency management) and deliverable management (document management, if documents are considered one kind of deliverable). Individual members have personalized accounts according to their skills and roles/responsibilities in a project. A project team and its members may maintain their own journals/records. More application-specific functions, such as product design review and engineering change management, can be implicitly performed through online document forms.
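The dependency management a platform like POPIM incorporates boils down to ordering tasks by their prerequisites; a minimal sketch using a topological sort over a hypothetical work-breakdown illustrates the idea (task names are invented, and `graphlib` requires Python 3.9+):

```python
from graphlib import TopologicalSorter

# Hypothetical work-breakdown: task -> set of prerequisite tasks,
# the kind of dependency data a workflow manager tracks.
deps = {
    "concept_design": set(),
    "detail_design": {"concept_design"},
    "design_review": {"detail_design"},
    "prototype": {"design_review"},
    "change_request": {"design_review"},
}

# static_order yields a valid execution sequence respecting all dependencies.
order = list(TopologicalSorter(deps).static_order())
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, the same validation a workflow manager needs before scheduling.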

    Hi, how can I help you?: Automating enterprise IT support help desks

    Question answering is one of the primary challenges of natural language understanding. In realizing such a system, providing complex long answers to questions is a challenging task, as opposed to factoid answering, since the former needs context disambiguation. The different methods explored in the literature can be broadly classified into three categories: 1) classification based, 2) knowledge graph based, and 3) retrieval based. Individually, none of them addresses the need for an enterprise-wide assistance system for an IT support and maintenance domain. In this domain the variance of answers is large, ranging from factoid to structured operating procedures; the knowledge is spread across heterogeneous data sources like application-specific documentation and ticket management systems, and any single technique for general-purpose assistance is unable to scale across such a landscape. To address this, we have built a cognitive platform with capabilities adapted for this domain. Further, we have built a general-purpose question answering system leveraging the platform that can be instantiated for multiple products and technologies in the support domain. The system uses a novel hybrid answering model that orchestrates across a deep learning classifier, a knowledge graph based context disambiguation module, and a sophisticated bag-of-words search system. This orchestration performs context switching for a provided question and also does a smooth hand-off of the question to a human expert if none of the automated techniques can provide a confident answer. This system has been deployed across 675 internal enterprise IT support and maintenance projects. Comment: To appear in IAAI 201
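The hybrid orchestration described above, trying the classifier, knowledge-graph, and retrieval components in turn and handing off to a human below a confidence threshold, can be sketched as follows. The three component functions are stand-ins with hard-coded answers, not the paper's actual models.

```python
THRESHOLD = 0.7   # minimum confidence for an automated answer (assumed value)

def classifier_answer(q):
    """Stand-in for the deep learning classifier."""
    return ("Restart the service via the admin console.", 0.9) if "restart" in q else (None, 0.0)

def knowledge_graph_answer(q):
    """Stand-in for the knowledge-graph context disambiguation module."""
    return ("See the deployment runbook, section 3.", 0.8) if "deploy" in q else (None, 0.0)

def search_answer(q):
    """Stand-in for the bag-of-words search system; always low confidence here."""
    return ("Top matching document: FAQ entry 12.", 0.5)

def orchestrate(question):
    """Try each technique in turn; hand off to a human if none is confident."""
    for technique in (classifier_answer, knowledge_graph_answer, search_answer):
        answer, confidence = technique(question)
        if answer is not None and confidence >= THRESHOLD:
            return answer
    return "Handing off to a human expert."
```

The ordering and threshold are the design levers: a stricter threshold trades automation coverage for answer precision, with the human expert as the fallback.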

    Intelligent Neural GRID System and its Applications

    The Intelligent Neural GRID (INGRID) is a specific GRID system running artificial intelligence software and capable of solving analogue problems in real time. INGRID also has strong forecasting and classification capabilities. In this architecture the data input can be in different locations, but the evaluation of data is a global process using the shared resources of the GRID and the analogue processing of Cellular Neural Networks. This means that the local databases of different regions are evaluated in relation to each other with an efficient data mining technology. Using pattern recognition and pattern analysis, INGRID can give global and local forecasts on analyzed processes. Potential applications of INGRID technology include the phasing of enterprise resource planning, more efficient data mining, forecasting of market events, traffic control, knowledge resource sharing, integration of information, and visual search. Other application possibilities include meteorology, climate control, environmental management, and pollution and inundation control of rivers.

    Fine-tuning Central Banks Web Communications: Usability Tests & Content Management

    Business processes, especially in Central Banks, are more fully integrated and streamlined than ever before, and realistic system landscapes often consist of many systems. Disconnected silos of unstructured information continue to pile up for each organizational function, and different interfaces are often implemented using the technology considered ideal for the respective interface. There often appears to be no Enterprise Content Management strategy, leading to significant business challenges such as untrustworthy business information due to inaccurate, outdated, or conflicting information; longer financial cycles and generally inefficient processes; system performance degradation and poor data organization; and an inconsistent, confusing user interface with frequent context switching. There is therefore the need for an effective enterprise content management strategy. Web content management systems are often used for storing, controlling, versioning, and publishing industry-specific documentation. Usability testing of web sites is an essential element of quality assurance and a true test of how people actually use Central Banks' web sites: a test of whether outsiders can successfully use the Banks' Web site. Although formal usability tests are expensive, time-consuming, and often prohibitive, periodic user testing is an important element in developing and maintaining a reader-friendly Web site. Usability should emphasise clarity of communication, accessibility, consistency, navigation design, maintenance, and good visual presentation. A solution to corporate intranet/internet chaos is the Enterprise Portal. An enterprise portal is the gateway to the end user: it offers a central point of access to information, applications, and services in an enterprise.
    It is one-stop shopping for knowledge workers; the portal is both a gateway to and a destination on the enterprise network that provides transparent, tailored access to distributed digital resources. An Enterprise Portal provides numerous benefits to users, allowing them to interact with relevant information and applications, both internal and external to the company, collaborate with others both inside and outside the Central Banks through self-service publishing, and customise and tailor a Web page with information that is easily found. This paper discusses the issues of Usability Tests and Web Content Management that enhance user productivity. Drawing from some award-winning intranets, some best practices for financial services such as the African Central Banks are highlighted vis-à-vis the infrastructural problems facing the African Continent.