
    A language for information commerce processes

    Automating information commerce requires languages to represent typical information commerce processes. Existing languages and standards either cover only very specific types of business models or are too general to capture the specific properties of information commerce processes concisely. We introduce a language designed specifically for information commerce. It can be used directly to implement the processes and communication that information commerce requires, and it covers existing business models known from standards proposals and from information commerce applications on the Internet. The language has a concise logical semantics. In this paper we present the language concepts and an implementation architecture.
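The kind of process such a language describes can be sketched as a small state machine: a purchase negotiation moving through offer, order, delivery, and payment steps. The states and transitions below are hypothetical illustrations, not the paper's actual language constructs.

```python
# Hypothetical information-commerce process as a state machine:
# each (state, action) pair maps to the next process state.
TRANSITIONS = {
    ("offered", "order"): "ordered",
    ("ordered", "deliver"): "delivered",
    ("delivered", "pay"): "paid",
}

def step(state, action):
    """Advance the process, rejecting actions not allowed in the current state."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")

# A complete purchase run: offer accepted, goods delivered, payment made.
state = "offered"
for action in ("order", "deliver", "pay"):
    state = step(state, action)
```

A concise semantics for such a process reduces to reachability over these transitions: an action sequence is valid exactly when each step is defined.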

    An Architecture for Information Commerce Systems

    The increasing use of the Internet in business and commerce has created a number of new business opportunities and the need for supporting models and platforms. One of these opportunities is information commerce (i-commerce), a special case of e-commerce focused on the purchase and sale of information as a commodity. In this paper we present an architecture for i-commerce systems, using OPELIX (Open Personalized Electronic Information Commerce System) [11] as an example. OPELIX provides an open information commerce platform that enables enterprises to produce, sell, deliver, and manage information products and related services over the Internet. We focus on the notion of an information marketplace, a virtual location that enables i-commerce; describe the business and domain model for an information marketplace; and discuss the role of intermediaries in this environment. The domain model is used as the basis for the software architecture of the OPELIX system. We discuss the characteristics of the OPELIX architecture and compare our approach to related work in the field.

    Distributing Real Time Data From a Multi-Node Large Scale Contact Center Using Corba

    This thesis researches and evaluates the technologies currently available for propagating real-time data from a large-scale enterprise server to large numbers of registered clients on the network. The enterprise server being implemented is a Contact Centre Server, which can be a standalone system or part of a multi-node system. This paper makes three contributions to the study of scalable real-time notification services. First, it surveys the different technologies available for distributed objects and their implementations in today's computing landscape. Second, it explains how we addressed key design challenges faced when implementing a Notification Service for TAO, our CORBA-compliant real-time Object Request Broker (ORB), and shows how to integrate and configure CORBA features to provide real-time event communication. Finally, it analyzes the results of the implementation and compares them to existing technologies used for propagating real-time data.
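The core pattern of a notification service is an event channel that decouples suppliers from consumers: suppliers push typed events into the channel, which fans them out to every registered consumer. The sketch below is a minimal Python stand-in for that pattern; the names (`EventChannel`, `push`, `subscribe`) are illustrative, not the actual TAO/CORBA Notification Service API.

```python
from collections import defaultdict

class EventChannel:
    """Routes events from suppliers to consumers by event type,
    so neither side needs to know about the other."""

    def __init__(self):
        self._consumers = defaultdict(list)

    def subscribe(self, event_type, callback):
        # A consumer registers interest in one event type.
        self._consumers[event_type].append(callback)

    def push(self, event_type, payload):
        # A supplier pushes an event; the channel fans it out
        # to every consumer subscribed to that type.
        for callback in self._consumers[event_type]:
            callback(payload)

# A contact-centre client subscribing to agent-state updates:
received = []
channel = EventChannel()
channel.subscribe("agent_state", received.append)
channel.push("agent_state", {"agent": 42, "state": "ready"})
```

In a real-time ORB such as TAO, the channel additionally enforces quality-of-service properties (priorities, delivery deadlines) that a plain callback list cannot express.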

    Integration of Configurable Dynamic Notification System with CSIBER Website

    In this digital era, every academic institution and commercial enterprise invests heavily in hosting and maintaining a website, which plays a critical role in an organization's success by making it reachable across a wide geographical area at any time. A carefully designed website showcases an institute's best assets and delivers tremendous first-hand information to any user at any time, irrespective of geographical location. Staying competitive requires constantly changing the look, feel, and content of the website and incorporating dynamism into it; since the website is accessible to the public, it must be kept constantly updated. As new website data pertaining to events, notifications, and similar items is continually generated and old data quickly becomes obsolete, keeping the dynamically changing data current demands continuous manual effort. Automating this task can save a tremendous amount of human effort and time, enabling meaningful data to be displayed on the website with very little human intervention. New technologies such as jQuery, JSON, and AngularJS are continually emerging to facilitate this. In the current paper, the author proposes an algorithm for integrating a dynamic notification system with the existing CSIBER website. The algorithm is implemented in PHP and MySQL and hosted on a web server through the web hosting service availed by the organization. The dynamic module is scheduled by the Cron utility to execute daily, and a server-side include is dynamically created and embedded in the home page. Each month's events can be scheduled and stored in the backend database, which the dynamic module parses to generate the required data. As an efficiency measure, the tool is executed once per day instead of once for every user request.
    Two options are proposed for integration, one on the client side and one on the server side. The dialog displaying the notification data is rendered mobile friendly and is validated against Google's mobile-friendly test.
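The daily generation step described above can be sketched as follows: filter the scheduled events down to the current month and render an HTML fragment for server-side inclusion. This is a hypothetical Python stand-in; the paper's version is implemented in PHP against a MySQL backend, and the event fields shown here are assumed.

```python
from datetime import date

def build_notification_html(events, today=None):
    """Keep only events for the current month and render them as a list
    suitable for embedding in the home page as a server-side include."""
    today = today or date.today()
    current = [e for e in events
               if e["date"].year == today.year and e["date"].month == today.month]
    items = "".join(f"<li>{e['date'].isoformat()}: {e['title']}</li>"
                    for e in current)
    return f"<ul class='notifications'>{items}</ul>"

# Example run as the cron job would see it on 2024-05-02:
events = [
    {"date": date(2024, 5, 10), "title": "Admission notification"},
    {"date": date(2024, 6, 1), "title": "Seminar"},
]
html = build_notification_html(events, today=date(2024, 5, 2))
```

Running this once per day and writing `html` to a file that the home page includes gives the same effect as regenerating the fragment on every request, at a fraction of the cost.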

    A Review of IFC Standardization – Interoperability Through Complementary Development Approaches

    The Industry Foundation Classes (IFC) data model has been in development by an industry consortium since 1994; during this time the industry context, standardization organization, resource availability, and technology development have exposed the standardization process to a dynamic environment. While the overarching mission of IFC standardization has always been to provide interoperability between AEC/FM software applications and actors, both the goals and the views on how to best achieve those goals have changed throughout the years. Despite the fact that IFC has enjoyed sustained professional and scholarly interest throughout its development, reflective socio-technical studies on the subject are largely non-existent. This study reviews the major shifts in the development process of the IFC standard from its origins in the early 1990s up to 2011, splitting the timeline into four distinct phases. A finding of the review is that the IFC standardization process has utilized complementary minimalist and structuralist approaches for different phases of the standardization process, balancing exhaustive structuralism and implementable minimalism. The concepts behind Model View Definitions (MVD), Information Delivery Manuals (IDM), and the International Framework for Dictionaries (IFD) were not documented from the start and only became relevant as standardization progressed, with each of the components contributing minimalism to a structurally constructed data model.

    Managing Information System Integration Technologies--A Study of Text Mined Industry White Papers

    Industry white papers are increasingly being used to explain the philosophy and operation of a product in a marketplace or technology context. Senior managers use these explanations for strategic planning in an organization. This research explores the effectiveness of white papers and strategies by which managers can learn about technologies through them. The research was conducted by collecting industry white papers in the area of Information System Integration and gleaning relevant information using the text-mining tool Vantage Point. The text-mined information is analyzed to provide solutions for practical problems in the systems integration market. The indirect findings of the research are new system integration business models, methods for calculating the ROI of a system integration project, and approaches to managing implementation failures.

    PROPOSED MIDDLEWARE SOLUTION FOR RESOURCE-CONSTRAINED DISTRIBUTED EMBEDDED NETWORKS

    The explosion in processing power of embedded systems has enabled distributed embedded networks to perform more complicated tasks. Middleware encapsulates common and network/operating-system-specific functionality into generic, reusable frameworks for managing such distributed networks. This thesis surveys and categorizes popular middleware implementations into three adapted layers: host-infrastructure, distribution, and common services. It then applies a quantitative approach to grading these implementations and proposes a single middleware solution across all layers for two target platforms: CubeSats and autonomous unmanned aerial vehicles (UAVs). CubeSats are 10x10x10 cm nanosatellites popular for university-level space missions, and they impose power and volume constraints. Autonomous UAVs are similarly popular hobbyist-level vehicles that exhibit comparable power and volume constraints. The MAVLink middleware from the host-infrastructure layer is proposed as the middleware to manage the distributed embedded networks powering these platforms in future projects. Finally, this thesis presents a performance analysis of MAVLink managing the ARM Cortex-M 32-bit processors that power the target platforms.
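What makes host-infrastructure middleware like MAVLink suitable for such constrained processors is a compact, fixed-size message header that is cheap to pack and parse. The sketch below illustrates that style of framing in Python; the field layout is a simplified illustration and is not the exact MAVLink wire format (which also carries a component ID and a CRC).

```python
import struct

# Simplified fixed header: magic byte, payload length, sequence number,
# system id, message id. One struct, five unsigned bytes.
HEADER = struct.Struct("<BBBBB")
MAGIC = 0xFE

def frame(seq, sysid, msgid, payload):
    """Prepend the fixed header to a raw payload."""
    return HEADER.pack(MAGIC, len(payload), seq, sysid, msgid) + payload

def parse(data):
    """Validate and unpack a framed message back into its fields."""
    magic, length, seq, sysid, msgid = HEADER.unpack_from(data)
    if magic != MAGIC or len(data) != HEADER.size + length:
        raise ValueError("malformed frame")
    return {"seq": seq, "sysid": sysid, "msgid": msgid,
            "payload": data[HEADER.size:]}

# Round-trip a tiny two-byte payload, e.g. a heartbeat-like status message:
msg = frame(seq=7, sysid=1, msgid=0, payload=b"\x01\x02")
decoded = parse(msg)
```

On a Cortex-M class device the equivalent C code is a single `memcpy` into a packed struct, which is precisely why byte-oriented framing scales down to these platforms.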
