44 research outputs found

    Free/Open Source Software - Open Standards

    Get PDF
    Co-published with Elsevier, a division of Reed Elsevier India Private Limited. This primer is part of a series of primers on Free and Open Source Software (FOSS) from IOSN, serving as introductory documents to FOSS in general as well as covering particular topic areas deemed important to FOSS, such as open standards. Open standards are not the same as FOSS. However, like FOSS, they can minimize the possibility of technology and vendor lock-in and level the playing field. They can also play an important role in promoting interoperability between FOSS and proprietary software, which is crucial in the current mixed information technology (IT) environment. Being a primer in the IOSN FOSS series, it approaches the issues concerning open standards from the FOSS and software perspectives, with emphasis on the relationship that some of these standards have with FOSS. The definition of an open standard has generated much controversy over whether it may contain patents licensed under reasonable and non-discriminatory (RAND) terms. The FOSS community generally holds that such RAND-encumbered standards should not be considered open standards, but most standards development organizations and bodies do accept patents available under RAND terms in their standards. The primer incorporates definitions of open standards from both sides and puts into perspective the minimal characteristics that an open standard should have. It is hoped that this primer will give the reader a better understanding of why open standards are important and how they can complement FOSS in fostering a more open IT environment. As users and consumers, readers of this primer should demand conformance to open standards from their software as far as possible. In addition to promoting interoperability and making more choices available, this will make it easier for FOSS to co-exist with and take root in environments filled with proprietary software.

    Site Security Handbook

    Full text link

    CPA WebTrust practitioners' guide

    Get PDF

    The Architecture of a Worldwide Distributed System

    Get PDF

    RESTful Service Composition

    Get PDF
    Service-Oriented Architecture (SOA) has become one of the most popular approaches to building large-scale network applications. Web service technologies are the de facto implementation of SOA, and the Simple Object Access Protocol (SOAP) is their key, fundamental technology. Service composition is a way to deliver complex services based on existing partner services. Service orchestration, supported by the Web Services Business Process Execution Language (WSBPEL), is the dominant approach to web service composition. WSBPEL-based service orchestration inherits SOAP's interoperability issues and has been further challenged on performance, scalability, reliability, and modifiability. In this thesis I present an architectural approach to service composition that addresses these challenges. An architectural solution is generic enough to apply to a large spectrum of problems. I name the architectural style RESTful Service Composition (RSC) because many of its elements and constraints are derived from Representational State Transfer (REST), the architectural style developed to describe the Web. The Web has demonstrated outstanding interoperability, performance, scalability, reliability, and modifiability. RSC is designed for service composition on the Internet. The RSC style is composed of specific element types, including the RESTful service composition client, RESTful partner proxy, composite resource, resource client, functional computation, and relaying service. A service composition is partitioned into stages; each stage is represented as a computation with a uniform identifier and a set of uniform access methods; and the transitions between stages are driven by computational batons. RSC is supplemented by a programming model that emphasizes on-demand functions, map-reduce, and continuation passing.
An RSC-style composition depends on neither a central conductor service nor a common choreography specification, which distinguishes it from service orchestration and service choreography. Four scenarios are used to evaluate the performance, scalability, reliability, and modifiability improvements of the RSC approach compared to orchestration; an RSC-style solution and an orchestration solution are compared side by side in every scenario. The first scenario evaluates the performance improvement of the X-Ray Diffraction (XRD) application in ScienceStudio; the second evaluates the scalability improvement of the Process Variable (PV) snapshot application; the third evaluates, by simulation, the reliability improvement of a notification application; and the fourth evaluates the modifiability improvement of the XRD application in fulfilling emerging requirements. The results show that the RSC approach outperforms the orchestration approach in every aspect.
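The staged, baton-driven composition this abstract describes can be sketched minimally as follows. This is an illustrative reading of the style, not code from the thesis: the names `Baton`, `stage`, and the `/collect` and `/reduce` stages are hypothetical, and real RSC stages would be network-addressable resources rather than in-process functions.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Baton:
    """The 'computational baton' handed from one stage to the next:
    the uniform identifier of the next stage plus intermediate state."""
    next_stage: Optional[str]
    state: dict

# Each stage is addressable by a uniform identifier; control is relayed
# stage to stage, so no central conductor drives the composition.
STAGES: dict[str, Callable[[dict], Baton]] = {}

def stage(name: str):
    def register(fn: Callable[[dict], Baton]) -> Callable[[dict], Baton]:
        STAGES[name] = fn
        return fn
    return register

@stage("/collect")
def collect(state: dict) -> Baton:
    state["raw"] = [1, 2, 3]          # gather inputs from partner services
    return Baton("/reduce", state)    # continuation: hand off to /reduce

@stage("/reduce")
def reduce_stage(state: dict) -> Baton:
    state["total"] = sum(state["raw"])
    return Baton(None, state)         # terminal stage: no continuation

def run(start: str, state: dict) -> dict:
    """Follow batons from stage to stage until a terminal stage."""
    baton = Baton(start, state)
    while baton.next_stage is not None:
        baton = STAGES[baton.next_stage](baton.state)
    return baton.state

result = run("/collect", {})
# result["total"] == 6
```

The key property the sketch preserves is that each transition is decided by the stage itself (the baton it returns), rather than by a central orchestrator holding the whole process definition.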

    CPA's guide to information security

    Get PDF

    Snake-Oil Security Claims: The Systematic Misrepresentation of Product Security in the E-Commerce Arena

    Get PDF
    The modern commercial systems and software industry in the United States has grown up in a snake-oil salesman's paradise. The largest sector of this industry by far is composed of standard commercial systems marketed to provide specified functionality (e.g., Internet web server, firewall, router, etc.). Such products are generally provided with a blanket disclaimer stating that the purchaser must evaluate the suitability of the product for use and assumes all liability for its behavior. In general, users cannot evaluate, and cannot be expected to evaluate, the security claims of a product. The ability to analyze security claims is important because a consumer may place unwarranted trust in the security abilities of a web server (or other computer device) to perform its stated purpose, thereby putting his own organization at risk, as well as third parties (consumers, business partners, etc.). All but the largest and most capable organizations lack the resources or expertise to evaluate the security claims of a product. More importantly, no reasonable and knowledgeable person would expect them to be able to do so. The normal legal presumptions of approximate equality of bargaining power and comparable sophistication in evaluating benefits and risks are grievously unjust in the context of software security. In these transactions, it is far wiser to view the general purchaser, even a sizable corporation, as an ignorant consumer. Hence, purchasers often accept what appear to be implied merchantability claims of the vendor, or salespersons' claims made outside the context of a written document. These claims frequently have little, if any, basis in fact. These standard commercial systems form the bulk of the critical infrastructure of existing Internet functionality and e-commerce systems.
Often these systems are not trustworthy, yet their use by misinformed purchasers has created massive vulnerability for both purchasers and third parties (including a substantial fraction of both U.S. and international citizens). The frequent disclosure of individual credit card information from supposedly secure commercial systems illustrates one aspect of this vulnerability and raises serious questions concerning the merchantability of these systems. While it is impossible to avoid all risks, they can be reduced to a very small fraction of their current level. Vendors have willfully taken approaches and used processes that do not allow assurance of appropriate security properties, while simultaneously and recklessly misrepresenting the security properties of their products to their customers.

    Satellite Networks: Architectures, Applications, and Technologies

    Get PDF
    Since global satellite networks are moving to the forefront in enhancing national and global information infrastructures, owing to communication satellites' unique networking characteristics, a workshop was organized to assess the progress made to date and chart the future. The workshop provided a forum to assess the current state of the art, identify key issues, and highlight emerging trends in next-generation architectures, data protocol development, communication interoperability, and applications. Presentations covering overviews, the state of the art in research, development, deployment, and applications, and future trends in satellite networks are assembled.

    Deterministic Object Management in Large Distributed Systems

    Get PDF
    Caching is a widely used technique to improve the scalability of distributed systems. A central issue with caching is maintaining object replicas consistent with their master copies. Large distributed systems, such as the Web, typically deploy heuristic-based consistency mechanisms, which increase delay and place extra load on the servers, while not providing guarantees that cached copies served to clients are up-to-date. Server-driven invalidation has been proposed as an approach to strong cache consistency, but it requires servers to keep track of which objects are cached by which clients. We propose an alternative approach to strong cache consistency, called MONARCH, which does not require servers to maintain per-client state. Our approach builds on a few key observations. Large and popular sites, which attract the majority of the traffic, construct their pages from distinct components with various characteristics. Components may have different content types, change characteristics, and semantics. These components are merged together to produce a monolithic page, and the information about their uniqueness is lost. In our view, pages should serve as containers holding distinct objects with heterogeneous type and change characteristics while preserving the boundaries between these objects. Servers compile object characteristics and information about relationships between containers and embedded objects into explicit object management commands. Servers piggyback these commands onto existing request/response traffic so that client caches can use these commands to make object management decisions. The use of explicit content control commands is a deterministic, rather than heuristic, object management mechanism that gives content providers more control over their content. The deterministic object management with strong cache consistency offered by MONARCH allows content providers to make more of their content cacheable. 
Furthermore, MONARCH enables content providers to expose the internal structure of their pages to clients. We evaluated MONARCH using simulations with content collected from real Web sites. The results show that MONARCH provides strong cache consistency for all objects, even unpredictably changing ones, and incurs smaller byte and message overhead than heuristic policies. The results also show that as the request arrival rate or the number of clients increases, the amount of server state maintained by MONARCH remains the same, while the server state incurred by server-driven invalidation mechanisms grows.
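The piggybacking idea described above can be sketched as follows. This is a hedged illustration of the general mechanism, not MONARCH's actual command set or wire format: the `Response` fields and the `invalidate` command name are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Response:
    """A server response carrying piggybacked object-management commands,
    e.g. ("invalidate", "/some/url"), alongside the requested object."""
    url: str
    body: str
    commands: list = field(default_factory=list)

class ClientCache:
    """A client cache that obeys explicit server commands instead of
    guessing freshness with heuristics such as TTL-based expiry."""
    def __init__(self):
        self.objects: dict[str, str] = {}

    def handle(self, resp: Response) -> None:
        self.objects[resp.url] = resp.body
        # Apply deterministic object-management commands the server
        # piggybacked onto this response.
        for op, target in resp.commands:
            if op == "invalidate":
                self.objects.pop(target, None)

cache = ClientCache()
cache.handle(Response("/page", "<html>...</html>"))
# A later response piggybacks an invalidation for the earlier object:
cache.handle(Response("/news", "headlines",
                      commands=[("invalidate", "/page")]))
# "/page" is now gone from the cache; "/news" is cached.
```

Because the commands ride on existing request/response traffic, the server needs no per-client bookkeeping to keep caches consistent, which is the property the abstract contrasts with server-driven invalidation.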

    New Waves of IoT Technologies Research – Transcending Intelligence and Senses at the Edge to Create Multi Experience Environments

    Get PDF
    The next wave of the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) brings new technological developments that incorporate radical advances in Artificial Intelligence (AI), edge computing, new sensing capabilities, stronger security protection, and autonomous functions, accelerating progress towards IoT systems that can self-develop, self-maintain, and self-optimise. The emergence of hyper-autonomous IoT applications with enhanced sensing, distributed intelligence, edge processing, and connectivity, combined with human augmentation, has the potential to power the transformation and optimisation of industrial sectors and to change the innovation landscape. This chapter reviews the most recent advances in the next wave of the IoT, looking not only at the technology enabling the IoT but also at the platforms and smart-data aspects that will bring intelligence, sustainability, dependability, and autonomy, and will support human-centric solutions.