
    Lean Middleware

    This paper describes an approach to achieving data integration across multiple sources in an enterprise in a manner that is cost-efficient and economically scalable. We present an approach that does not rely on major investment in structured, heavy-weight database systems for data storage or heavy-weight middleware responsible for integrated access. The approach is centered around pushing any required data structure and semantics functionality (schema) to application clients, as well as pushing integration specification and functionality to clients, where integration can be performed on the fly.
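The client-side integration idea can be sketched as follows. This is a minimal illustration under assumed names and structures (the sources, field names, and mapping format are hypothetical), not the system described in the paper:

```python
# Hypothetical sketch of the "lean middleware" idea: the schema and the
# integration mapping are pushed to the client, which merges data from
# lightweight sources on the fly. All names here are illustrative.

# Two lightweight sources with different local schemas.
SOURCE_A = [{"cust_id": 1, "cust_name": "Acme"}]
SOURCE_B = [{"id": 1, "balance": 250.0}]

# The "pushed" schema: per-source field mappings into one client-side view.
MAPPINGS = {
    "a": {"cust_id": "id", "cust_name": "name"},
    "b": {"id": "id", "balance": "balance"},
}

def normalize(record, mapping):
    """Rename source fields into the client's integrated schema."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def integrate_on_the_fly():
    """Merge records from both sources on the shared 'id' key, client-side."""
    view = {}
    for source_name, records in (("a", SOURCE_A), ("b", SOURCE_B)):
        for rec in records:
            norm = normalize(rec, MAPPINGS[source_name])
            view.setdefault(norm["id"], {}).update(norm)
    return list(view.values())
```

Because the mapping is just data, adding a source means shipping a new mapping entry to clients rather than changing central middleware.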

    Service-oriented modeling for e-business applications components

    The emerging trends for e-business engineering revolve around specialisation and cooperation. Successful companies focus on their core competences, and rely on a network of business partners for the support services required to compose a comprehensive offer for their customers. Modularity is crucial for a flexible e-business infrastructure, but related requirements seldom reflect on the design and operational models of business information systems. Software components are widely used for the implementation of e-business applications, with proven benefits in terms of system development and maintenance. We propose a service-oriented componentisation of e-business systems as a way to close the gap with the business models they support. Blurring the distinction between external services and internal capabilities, we propose a homogeneous model for the definition of e-business application components. After a brief discussion on the foundational aspects of the approach, we present the process-based technique we adopted for component modelling. We then present an infrastructure compliant with the proposed model that we built on top of an EJB (Enterprise Java Beans) platform.

    Transfer and Inventory Components of Developing Repository Services

    4th International Conference on Open Repositories. This presentation was part of the session: Conference Presentations. Date: 2009-05-19, 10:00 AM – 11:30 AM. At the Library of Congress, our most basic data management needs are not surprising: How do we know what we have, where it is, and who it belongs to? How do we get files, new and legacy, from where they are to where they need to be? And how do we record and track events in the life cycle of our files? This presentation describes current work at the Library in implementing tools to meet these needs as a set of modular services -- Transfer, Transport, and Inventory -- that will fit into a larger scheme of repository services to be developed. These modular services do not equate to everything needed to call a system a repository, but they do cover many aspects of "ingest" and "archiving": the registry of a deposit activity, the controlled transfer and transport of files, and an inventory system that can be used to track files, record events in those files' life cycles, and provide basic file-level discovery and auditing. This is the first stage in the development of a suite of tools to help the Library ensure long-term stewardship of its digital assets.
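The three questions the abstract opens with (what do we have, where is it, what happened to it) map naturally onto a checksum-keyed inventory with an event log. The sketch below is a hypothetical illustration of that pattern, not the Library of Congress tools themselves; the class and field names are assumptions:

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical sketch of an inventory service: register files by content
# checksum, log life-cycle events, and audit fixity later.

class Inventory:
    def __init__(self):
        self.files = {}  # checksum -> {"path": ..., "events": [...]}

    def register(self, path, content: bytes) -> str:
        """Record a file by content checksum so 'what we have' is auditable."""
        digest = hashlib.sha256(content).hexdigest()
        self.files[digest] = {"path": path, "events": []}
        self.log_event(digest, "registered")
        return digest

    def log_event(self, digest, action):
        """Track an event in the file's life cycle (transfer, transport, ...)."""
        self.files[digest]["events"].append(
            {"action": action, "at": datetime.now(timezone.utc).isoformat()})

    def audit(self, digest, content: bytes) -> bool:
        """Verify a file still matches its registered checksum."""
        return hashlib.sha256(content).hexdigest() == digest
```

Keying the inventory on a content digest rather than a path is what lets the transfer and transport services move files around without losing track of them.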

    Towards structured sharing of raw and derived neuroimaging data across existing resources

    Data sharing efforts increasingly contribute to the acceleration of scientific discovery. Neuroimaging data is accumulating in distributed domain-specific databases, and there is currently neither an integrated access mechanism nor an accepted format for the critically important meta-data that is necessary for making use of the combined, available neuroimaging data. In this manuscript, we present work from the Derived Data Working Group, an open-access group sponsored by the Biomedical Informatics Research Network (BIRN) and the International Neuroinformatics Coordinating Facility (INCF) focused on practical tools for distributed access to neuroimaging data. The working group develops models and tools facilitating the structured interchange of neuroimaging meta-data and is making progress towards a unified set of tools for such data and meta-data exchange. We report on the key components required for integrated access to raw and derived neuroimaging data as well as associated meta-data and provenance across neuroimaging resources. The components include (1) a structured terminology that provides semantic context to data, (2) a formal data model for neuroimaging with robust tracking of data provenance, (3) a web service-based application programming interface (API) that provides a consistent mechanism to access and query the data model, and (4) a provenance library that can be used for the extraction of provenance data by image analysts and imaging software developers. We believe that the framework and set of tools outlined in this manuscript have great potential for solving many of the issues the neuroimaging community faces when sharing raw and derived neuroimaging data across the various existing database systems for the purpose of accelerating scientific discovery.
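Component (2), provenance tracking for derived data, can be illustrated with a minimal sketch: each derived item records which tool produced it and from which inputs, and a query walks that chain back to the raw data. The record structure and names below are assumptions for illustration, not the actual BIRN/INCF data model:

```python
from dataclasses import dataclass, field

# Hypothetical provenance record for one derivation step.
@dataclass
class ProvenanceRecord:
    output: str                                  # derived data item produced
    software: str                                # tool that produced it
    version: str
    inputs: list = field(default_factory=list)   # upstream data items

def lineage(records, output):
    """Walk the provenance chain of `output` back to its raw inputs."""
    by_output = {r.output: r for r in records}
    raw, stack = [], [output]
    while stack:
        item = stack.pop()
        if item in by_output:
            stack.extend(by_output[item].inputs)  # keep walking upstream
        else:
            raw.append(item)                      # no producer: raw data
    return sorted(raw)
```

A shared terminology (component 1) would then standardize the strings used for tools and data items, so lineage queries work across resources.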

    Mapping service components to EJB business objects

    The emerging trends for e-business engineering revolve around specialisation and cooperation. Successful companies focus on their core competencies and rely on a network of business partners for the support services required to compose a comprehensive offer for their customers. Modularity is crucial for a flexible e-business infrastructure, but related requirements seldom reflect on the design and operational models of business information systems. Software components are widely used for the implementation of e-business applications, with proven benefits in terms of system development and maintenance. We propose a service-oriented componentisation of e-business systems as a way to close the gap with the business models they support. Blurring the distinction between external services and internal capabilities, we propose a homogeneous model for the definition of e-business application components and present a process-based technique for component modelling. We finally present an Enterprise Java Beans extension that implements the model.

    Online Integration of Semistructured Data

    Data integration systems play an important role in the development of distributed multi-database systems. Data integration collects data from heterogeneous and distributed sources, and provides a global view of data to the users. Systems need to process users' applications in the shortest possible time. The virtualization approach to data integration systems ensures that the answers to user requests are the most up-to-date ones. In contrast, the materialization approach reduces data transmission time at the expense of data consistency between the central and remote sites. The virtualization approach to data integration systems can be applied in either batch or online mode. Batch processing requires all data to be available at a central site before processing is started. Delays in transmission of data over a network contribute to a longer processing time. On the other hand, in an online processing mode data integration is performed piece-by-piece as soon as a unit of data is available at the central site. An online processing mode presents partial results to the users earlier. Due to the heterogeneity of data models at the remote sites, a semistructured global view of data is required. The performance of data integration systems depends on an appropriate data model and the appropriate data integration algorithms used. This thesis presents a new algorithm for immediate processing of data collected from remote and autonomous database systems. The algorithm utilizes the idle processing states while the central site waits for completion of data transmission to produce instant partial results. A decomposition strategy included in the algorithm balances the computations between the central and remote sites to achieve maximum resource utilization at both sites. The thesis chooses the XML data model for the representation of semistructured data, and presents a new formalization of the XML data model together with a set of algebraic operations.
The XML data model is used to provide a virtual global view of semistructured data. The algebraic operators are consistent with the operations of relational algebra, such that any existing syntax-based query optimization technique developed for the relational model of data can be directly applied. The thesis shows how to optimize online processing by generating one online integration plan for several data increments. Further, the thesis shows how each independent increment expression can be processed in a parallel mode on a multi-core processor system. The dynamic scheduling system proposed in the thesis is able to defer or terminate a plan such that materialization updates and unnecessary computations are minimized. The thesis shows that processing data chunks of fragmented XML documents allows for data integration in a shorter period of time. Finally, the thesis provides a clear formalization of the semistructured data model, a set of algorithms with high-level descriptions, and running examples. These formal foundations show that the proposed algorithms are implementable.
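The core online-processing idea, integrating each XML fragment as soon as it arrives and emitting a partial result rather than waiting for the whole document, can be sketched as follows. The element names, the join key, and the fragment format are illustrative assumptions, not the thesis's actual algebra:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of piece-by-piece (online) integration: each incoming
# XML fragment is merged into the global view immediately, and a partial
# result is yielded after every fragment instead of after the full batch.

def online_integrate(fragments):
    """Yield a growing integrated view after each incoming fragment."""
    view = {}
    for fragment in fragments:                   # arrives piece-by-piece
        elem = ET.fromstring(fragment)
        key = elem.get("id")                     # assumed join attribute
        view.setdefault(key, {}).update(elem.attrib)
        # copy inner dicts so earlier partial results are not mutated later
        yield {k: dict(v) for k, v in view.items()}

fragments = [
    '<item id="1" name="sensor"/>',    # e.g. from remote site A
    '<item id="1" reading="42"/>',     # from remote site B, same entity
]
```

Because the function is a generator, the first partial result is available after one fragment, which is the point of online mode: the central site does useful work during what would otherwise be idle transmission-wait time.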

    Making XML Pay: Revising Existing Electronic Payments Law to Accommodate Innovation

    Many businesses today are rushing to embrace e-Business technologies in a mad scramble to remain competitive. Only a few years ago, simply using email instead of faxes or phone calls, converting a purchasing system to EDI technology, or building a corporate Web site might have seemed like important advances in the use of new information technologies. Businesses are now moving beyond such electronic commerce technologies and trying to integrate their disparate information systems and business processes into a comprehensive new e-Business structure. At the heart of this new model for business organization is the idea that information and resources should be able to flow to where they are most needed at a moment's notice. Such fluidity in access and control over information and resources is very difficult to achieve in traditional hierarchical corporate organizations. By adopting new technologies, including XML, businesses can set up a more flexible, decentralized form of organization that can be more nimble in recognizing and responding to changing market conditions. The assimilation of these and other electronic commerce technologies into established businesses permits those businesses to provide goods and services to existing customers more efficiently. For example, General Electric, one of the world's largest diversified manufacturing companies, has used electronic commerce technologies to reduce the amount of time required to process purchase orders and to reduce the cost paid for materials by using a secure Internet site to link customers and suppliers to manufacturing resource planning software. Efficiencies of this type are generally referred to as a function of supply chain reengineering when they take place in traditional manufacturing industries between purchasers and vendors, or value chain reengineering when the same type of efficiencies are sought more generally throughout more diverse types of organization and industries.
eXtensible Markup Language (XML) is a new standard that governs the way information is organized and exchanged. Use of the XML standard in organizing the information businesses need to conduct business would permit greater use of electronic searching technologies to identify potential trading partners, greater use of automated processes in negotiating the terms of transactions, and greater automation in tracking the execution and fulfillment of transactions after deals are struck. A major stumbling block on the path to realizing the e-Business model is the difficulty most businesses face when trying to integrate electronic payment processes into other business processes. Financial transactions normally need to be controlled with more rigorous security procedures than other transactions. Financial markets were early adopters of electronic communications technologies, and as a result have a huge installed base of older technologies that are very reliable and stable. These legacy computer systems, however, integrate poorly with newer Internet-based systems developed for other business processes. As a result, most businesses in the United States still rely heavily on paper checks as their primary payment device, even for transactions entered into electronically. The adoption of XML standards by retail merchants and financial service providers will create new risks and opportunities for consumers using electronic funds transfers. In consumer markets, one challenge posed by the adoption of new technologies such as XML is designing appropriate human-computer interfaces rather than achieving interoperability among existing computer systems. In addition, new technologies will facilitate greater reliance by consumers on new automated contracting processes such as electronic agent software.
Unlike the law that governs business-to-business electronic funds transfers, the law and regulations governing consumer electronic funds transfers often reflect anachronistic models of technology and consumer protection. Since the mid-1990s, federal regulations governing consumer electronic funds transfers have been under review and are in the process of being updated. It is possible that even very recent revisions may soon appear anachronistic in light of the rapid pace of innovation in business processes. Regulators should not focus on preserving the form of existing consumer protection regulations, but on advancing their underlying objective of consumer empowerment in new environments. The development of new user interfaces for payments products should include information that helps consumers understand the functional differences between different forms of electronic payments, and the different risks that may be associated with each. Consumers, consumer advocates, and regulators will need to contribute to the standard-setting processes to make sure that the concerns and preferences of consumers are reflected in standards that gain widespread acceptance.
