Data standardization
With data rapidly becoming the lifeblood of the global economy, the ability to improve its use significantly affects both social and private welfare. Data standardization is key to facilitating and improving the use of data when data portability and interoperability are needed. Absent data standardization, a "Tower of Babel" of different databases may be created, limiting synergetic knowledge production. Based on interviews with data scientists, this Article identifies three main technological obstacles to data portability and interoperability: metadata uncertainties, data transfer obstacles, and missing data. It then explains how data standardization can remove at least some of these obstacles and lead to smoother data flows and better machine learning. The Article then identifies and analyzes additional effects of data standardization. As shown, data standardization has the potential to support a competitive and distributed data collection ecosystem and to ease policing in cases where rights are infringed or unjustified harms are created by data-fed algorithms. At the same time, increasing the scale and scope of data analysis can create negative externalities in the form of better profiling, increased harms to privacy, and cybersecurity harms. Standardization also has implications for investment and innovation, especially if lock-in to an inefficient standard occurs. The Article then explores whether market-led standardization initiatives can be relied upon to increase welfare, and what role government-facilitated data standardization should play, if any.
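A minimal sketch of how standardization can dissolve one of the obstacles named above, metadata uncertainty: two hypothetical source datasets whose field names, units, and missing-data markers differ are mapped onto one common schema. The field names, unit conversion, and sentinel values are illustrative assumptions, not any particular standard.

```python
# Hypothetical records from two providers; field names and units differ.
source_a = [{"temp_f": 68.0, "ts": "2024-01-01T00:00:00Z"}]
source_b = [{"temperature_c": 20.0, "timestamp": "2024-01-01T00:00:00Z", "qc": "-999"}]

# Illustrative common schema: canonical name -> (source field, converter).
MAPPINGS = {
    "a": {"temperature_c": ("temp_f", lambda f: (f - 32) * 5 / 9),
          "timestamp":     ("ts", str)},
    "b": {"temperature_c": ("temperature_c", float),
          "timestamp":     ("timestamp", str)},
}

SENTINELS = {"-999", "", None}  # assumed missing-data markers

def standardize(record, mapping):
    """Rewrite one record into the common schema, dropping sentinel values."""
    out = {}
    for canonical, (field, convert) in mapping.items():
        value = record.get(field)
        out[canonical] = None if value in SENTINELS else convert(value)
    return out

merged = [standardize(r, MAPPINGS["a"]) for r in source_a] + \
         [standardize(r, MAPPINGS["b"]) for r in source_b]
print(merged)  # both records now share one schema and one unit convention
```

Once both sources speak the common schema, downstream analysis and machine learning can treat them as a single dataset.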
Interoperability standards for cloud architecture
Enabling cloud infrastructures to evolve into a transparent platform raises interoperability issues. Interoperability requires standard data models and communication technologies compatible with the existing Internet infrastructure. To reduce vendor lock-in, cloud computing must implement common strategies regarding standards, interoperability, and portability. Open standards are of critical importance and need to be embedded into interoperability solutions. Interoperability is determined at the data level as well as the service level. Relevant modelling standards and integration solutions are analysed in the context of clouds.
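A minimal sketch of the standard-data-model idea, assuming a hypothetical provider-neutral resource description (loosely in the spirit of open cloud standards such as OCCI or OVF, but not an implementation of either): a compute resource described once in a common model can be serialized and handed to any provider adapter, reducing lock-in.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ComputeResource:          # illustrative common data model
    name: str
    vcpus: int
    memory_gb: int
    image: str                  # provider-neutral image identifier

def to_portable_json(res: ComputeResource) -> str:
    """Serialize to a self-describing JSON document for transfer."""
    return json.dumps({"kind": "compute", "spec": asdict(res)}, indent=2)

vm = ComputeResource(name="web-1", vcpus=2, memory_gb=4, image="ubuntu-22.04")
print(to_portable_json(vm))    # the same document can drive any provider adapter
```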
Towards Grid Interoperability
The Grid paradigm promises to provide global access to computing resources, data storage, and experimental instruments. It also offers an elegant solution to many resource administration and provisioning problems while providing a platform for collaboration and resource sharing. Although substantial progress has been made towards these goals, much work remains before the Grid can deliver on its promises. One of the central issues is the development of standards and Grid interoperability. Job execution is one of the key capabilities in all Grid environments; it is a well understood, mature area with standards and implementations. This paper describes proof-of-concept experiments demonstrating interoperability between various Grid environments.
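A minimal sketch of the job-execution interoperability idea: one abstract job description is rendered into two middleware-specific submission formats. The field names and the two "dialects" below are illustrative placeholders, not the actual standards (e.g. JSDL) or systems evaluated in the paper.

```python
job = {"executable": "/bin/hostname", "arguments": [], "cpu_count": 1}

def to_dialect_a(job):
    """Render for a hypothetical key=value style submission language."""
    return "\n".join([f'executable = {job["executable"]}',
                      f'cpus = {job["cpu_count"]}',
                      "queue"])

def to_dialect_b(job):
    """Render for a hypothetical XML-based job description language."""
    return (f'<job><exec>{job["executable"]}</exec>'
            f'<count>{job["cpu_count"]}</count></job>')

# The same logical job can now be submitted to either Grid environment.
print(to_dialect_a(job))
print(to_dialect_b(job))
```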
ClouNS - A Cloud-native Application Reference Model for Enterprise Architects
The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations, which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well-standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research, and development processes for cloud-native applications and for vendor lock-in aware enterprise architecture engineering methodologies.
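A minimal sketch of the lock-in aware idea: service categories are tagged by their degree of standardization, and an application's dependencies are checked against the well-standardized subset. The categories and ratings below are illustrative placeholders, not the paper's actual survey results.

```python
STANDARDIZATION = {          # hypothetical ratings per service category
    "virtual-machines": "open-standard",   # widely portable IaaS
    "block-storage":    "open-standard",
    "managed-database": "proprietary",     # typical lock-in prone category
    "serverless":       "proprietary",
}

def lock_in_risks(dependencies):
    """Return the dependencies that fall outside the standardized subset."""
    return [d for d in dependencies if STANDARDIZATION.get(d) != "open-standard"]

app = ["virtual-machines", "block-storage", "managed-database"]
print(lock_in_risks(app))   # -> ['managed-database']
```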
MSUO Information Technology and Geographical Information Systems: Common Protocols & Procedures. Report to the Marine Safety Umbrella Operation
The Marine Safety Umbrella Operation (MSUO) facilitates cooperation between Interreg-funded marine safety projects and maritime stakeholders. The main aim of MSUO is to permit efficient operation of new projects through Project Cooperation Initiatives; these include the review of the common protocols and procedures for Information Technology (IT) and Geographical Information Systems (GIS).
This study, carried out by CSA Group and the National Centre for Geocomputation (NCG), reviews current spatial information standards in Europe and the data management methodologies associated with different marine safety projects.
International best practice was reviewed based on the combined experience of spatial data research at NCG and initiatives in the US, Canada, and the UK relating to marine security service information and the acquisition and integration of large marine datasets for ocean management purposes.
This report identifies the most appropriate international data management practices that could be adopted for future MSUO projects.
Quality-aware model-driven service engineering
Service engineering and service-oriented architecture as an integration and platform technology is a recent approach to software systems integration. Quality aspects ranging from interoperability and maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures, and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering; the focus on architecture-level models is a consequence of the black-box character of services.
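A minimal sketch of model-level quality evaluation: service models carry declared (black-box) quality attributes, and an architectural constraint is checked before anything is implemented. The attribute names, composition rule, and budget are illustrative assumptions, not the paper's actual modelling notation.

```python
from dataclasses import dataclass

@dataclass
class ServiceModel:
    name: str
    avg_latency_ms: float     # declared performance attribute of the service

def pipeline_latency(services):
    """Estimate end-to-end latency of a sequential service composition."""
    return sum(s.avg_latency_ms for s in services)

composition = [ServiceModel("auth", 20), ServiceModel("orders", 45)]
BUDGET_MS = 100               # assumed architectural performance constraint
assert pipeline_latency(composition) <= BUDGET_MS, "model violates latency budget"
print(pipeline_latency(composition), "ms, within budget")
```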
Engineering Workflow: The Process in Product Data Technology
The prevailing paradigm for enterprises in the new decade is undoubtedly speed. This enterprise view is driven by the availability of e-business technology that enables new forms of collaboration between companies. The rapid developments in e-business also have an impact on the future of engineering organizations. This paper focuses on the early phases of a product's life cycle, i.e. between initial concept and release to manufacturing. New engineering workflow capabilities are presented that have been tailored to speed up the engineering of new products.
Space Generic Open Avionics Architecture (SGOAA) reference model technical guide
This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six-class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low-level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
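A minimal sketch of the interface-classification idea: each interface point in a hardware/software system is tagged with one of a fixed set of interface classes so the right implementation standards can be applied to it. The class names and interface points below are placeholders; the actual six classes are defined in the SGOAA report and are not reproduced here.

```python
from enum import Enum

class InterfaceClass(Enum):       # placeholder names, not SGOAA's own
    CLASS_1 = 1
    CLASS_2 = 2
    CLASS_3 = 3
    CLASS_4 = 4
    CLASS_5 = 5
    CLASS_6 = 6

# Hypothetical interface points, each mapped to a class for standards work.
interface_points = {
    ("sensor", "data_processor"): InterfaceClass.CLASS_1,
    ("data_processor", "bus"):    InterfaceClass.CLASS_2,
}

def points_in_class(cls):
    """List the interface points to which one class's standards apply."""
    return [p for p, c in interface_points.items() if c is cls]

print(points_in_class(InterfaceClass.CLASS_1))
```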
Cloud computing services: taxonomy and comparison
Cloud computing is a highly discussed topic in the technical and economic world, and many of the big players of the software industry have entered the development of cloud services. Several companies want to explore the possibilities and benefits of incorporating such cloud computing services in their business, as well as the possibilities of offering their own cloud services. However, with the number of cloud computing services increasing quickly, the need for a taxonomy framework rises. This paper examines the available cloud computing services and identifies and explains their main characteristics. Next, it organizes these characteristics and proposes a tree-structured taxonomy, which allows quick classification of the different cloud computing services and makes it easier to compare them. Building on existing taxonomies, this taxonomy provides more detailed characteristics and hierarchies. Additionally, it offers a common terminology and baseline information for easy communication. Finally, the taxonomy is explained and verified using existing cloud services as examples.
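A minimal sketch of a tree-structured taxonomy used for quick classification: a service is classified by walking a path of labels down the tree. The branches shown are illustrative placeholders; the paper's taxonomy is considerably more detailed.

```python
TAXONOMY = {
    "cloud-service": {
        "infrastructure": {"compute": {}, "storage": {}},
        "platform":       {"database": {}, "runtime": {}},
        "software":       {"crm": {}, "office": {}},
    }
}

def classify(path, tree=TAXONOMY["cloud-service"]):
    """Walk the tree along a path of labels; True if the path is valid."""
    for label in path:
        if label not in tree:
            return False
        tree = tree[label]
    return True

# e.g. a hosted database service classified under platform/database:
print(classify(["platform", "database"]))   # True
print(classify(["platform", "compute"]))    # False
```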
- …