A Geographic Information System Framework for the Management of Sensor Deployments
A prototype Geographic Information System (GIS) framework has been developed to map, manage, and monitor sensors with respect to other geographic features, including land-base and in-plant features. The GIS framework supports geographic placement and subsequent discovery, query, and tasking of sensors in a network-centric environment using Web services. The framework couples the GIS feature-placement logic for sensors with an extensible ontology that captures the capabilities, properties, protocols, integrity constraints, and other parameters of interest for a large variety of sensor types. The approach is significant in that, by leveraging the service-oriented computing infrastructure within the GIS framework, custom GIS-based interfaces that integrate sensors and sensor networks into applications can be rapidly developed without detailed knowledge of the sensors' underlying device drivers.
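The placement-and-discovery idea in this abstract can be illustrated with a minimal sketch: sensors registered as geographic features carrying ontology-style properties, then discovered via a spatial bounding-box query. All class and parameter names here are assumptions for illustration, not the framework's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:
    sensor_id: str
    lat: float
    lon: float
    # Ontology-like description: capabilities, protocols, constraints, etc.
    properties: dict = field(default_factory=dict)

class SensorRegistry:
    """Hypothetical stand-in for the GIS framework's placement/discovery service."""
    def __init__(self):
        self._sensors = []

    def place(self, sensor: Sensor):
        """Geographically place (register) a sensor feature."""
        self._sensors.append(sensor)

    def discover(self, min_lat, min_lon, max_lat, max_lon, **filters):
        """Discover sensors inside a bounding box, optionally filtered
        by ontology properties (e.g. capability='temperature')."""
        hits = []
        for s in self._sensors:
            if not (min_lat <= s.lat <= max_lat and min_lon <= s.lon <= max_lon):
                continue
            if all(s.properties.get(k) == v for k, v in filters.items()):
                hits.append(s)
        return hits

registry = SensorRegistry()
registry.place(Sensor("t1", 40.0, -75.0, {"capability": "temperature"}))
registry.place(Sensor("c1", 41.5, -75.2, {"capability": "camera"}))
found = registry.discover(39.0, -76.0, 41.0, -74.0, capability="temperature")
```

In the real framework this query would be exposed as a Web service and the property filter backed by the extensible ontology rather than a flat dictionary.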
Semantic Web Technologies in Support of Service Oriented Architecture Governance
As Service Oriented Architecture (SOA) deployments gradually mature, they also grow in size and complexity. The number of service providers, services, and service consumers increases, and so do the dependencies among these entities and the various artefacts that describe how services operate, or how they are meant to operate under specific conditions. Appropriate governance over the various phases and activities associated with the service lifecycle is therefore indispensable in order to prevent a SOA deployment from dissolving into an unmanageable infrastructure. The employment of Semantic Web technologies for describing and reasoning about service properties and governance requirements has the potential to greatly enhance the effectiveness and efficiency of SOA Governance solutions by increasing the levels of automation in a wide range of tasks relating to service lifecycle management. The goal of the proposed research work is to investigate the application of Semantic Web technologies in the context of service lifecycle management, and to propose a concrete theoretical and technological approach for supporting SOA Governance through the realisation of semantically enhanced registry and repository solutions.
Planning and Design of an SOA Architecture Blueprint
Service Oriented Architecture (SOA) is a framework for integrating business processes and supporting IT infrastructure as secure, standardized components (services) that can be reused and combined to address changing business priorities. Services are the building blocks of SOA, and new applications can be constructed by consuming these services and orchestrating them within a business process. In SOA, services map to the business functions that are identified during business process analysis. Upon a successful implementation of SOA, the enterprise gains benefits by reducing development time, utilizing a flexible and responsive application structure, and enabling dynamic connectivity of application logic between business partners. This paper presents an SOA reference architecture blueprint comprising the building blocks of SOA: services, service components, and flows that together support enterprise business processes and business goals.
Leveraging Semantic Web Service Descriptions for Validation by Automated Functional Testing
Recent years have seen the utilisation of Semantic Web Service descriptions for automating a wide range of service-related activities, with a primary focus on service discovery, composition, execution, and mediation. An important area which has so far received less attention is service validation, whereby advertised services are proven to conform to required behavioural specifications. This paper proposes a method for validation of service-oriented systems through automated functional testing. The method leverages ontology-based and rule-based descriptions of service inputs, outputs, preconditions, and effects (IOPE) for constructing a stateful Extended Finite State Machine (EFSM) specification. The specification is subsequently utilised for functional testing and validation using the proven Stream X-machine (SXM) testing methodology. Complete functional test sets are generated automatically at an abstract level and are then applied to concrete Web services, using test drivers created from the Web service descriptions. The testing method comes with completeness guarantees and provides a strong method for validating the behaviour of Web services.
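The core idea of deriving abstract test sequences from a state-machine specification can be sketched in a few lines. This is an illustrative simplification, not the paper's SXM implementation: it generates one input sequence per transition (transition coverage), whereas a real Stream X-machine test set also exercises the machine's memory and carries the completeness guarantees the abstract describes. The login-service states and inputs are hypothetical.

```python
from collections import deque

class Spec:
    """A tiny finite-state specification: transitions map
    (state, input) pairs to successor states."""
    def __init__(self, initial, transitions):
        self.initial = initial
        self.transitions = transitions

    def paths_covering_transitions(self):
        """One abstract test per transition: a shortest input sequence
        (found by BFS) that reaches the transition's source state,
        followed by the transition's input."""
        reach = {self.initial: []}          # state -> shortest input sequence
        queue = deque([self.initial])
        while queue:
            s = queue.popleft()
            for (src, inp), dst in self.transitions.items():
                if src == s and dst not in reach:
                    reach[dst] = reach[s] + [inp]
                    queue.append(dst)
        return [reach[src] + [inp]
                for (src, inp) in self.transitions
                if src in reach]

# Hypothetical service behaviour expressed as states and inputs.
spec = Spec("LoggedOut", {
    ("LoggedOut", "login"): "LoggedIn",
    ("LoggedIn", "query"): "LoggedIn",
    ("LoggedIn", "logout"): "LoggedOut",
})
tests = spec.paths_covering_transitions()
```

In the paper's setting, each abstract input would then be translated by a generated test driver into a concrete Web service invocation.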
Leveraging upon standards to build the Internet of things
Smart embedded objects will become an important part of what is called the Internet of Things. However, the integration of embedded devices into the Internet introduces several challenges, since many existing Internet technologies and protocols were not designed for this class of devices. In the past few years, there have been many efforts to extend Internet technologies to constrained devices. Initially, this resulted in proprietary protocols and architectures. Later, the integration of constrained devices into the Internet was embraced by the IETF, moving towards standardized IP-based protocols. For a long time, most efforts focused on the networking layer. More recently, the IETF CoRE working group started work on an embedded counterpart of HTTP, allowing the integration of constrained devices into existing service networks. In this paper, we briefly review the history of integrating constrained devices into the Internet, with a prime focus on the IETF standardization work in the ROLL and CoRE working groups. This is further complemented with research results that illustrate how these novel technologies can be extended or used to tackle other problems. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n°258885 (SPITFIRE project), from the iMinds ICON projects GreenWeCan and O'CareCloudS, and a VLIR PhD scholarship to Isam Ishaq.
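The "embedded counterpart of HTTP" mentioned above is CoAP (RFC 7252). As a sketch of why it suits constrained devices, the following encodes a complete confirmable GET request in a handful of bytes. It is deliberately simplified: only short Uri-Path options (option number and length below 13) are handled, and the token, extended option encodings, and payload are omitted.

```python
def encode_coap_get(message_id: int, uri_path: str) -> bytes:
    """Encode a minimal CoAP confirmable GET request (RFC 7252)."""
    VERSION, TYPE_CON, TKL = 1, 0, 0    # version 1, confirmable, no token
    CODE_GET = 0x01                     # code 0.01 = GET
    URI_PATH_OPTION = 11
    # 4-byte fixed header: Ver(2) | Type(2) | TKL(4), Code(8), Message ID(16).
    header = bytes([
        (VERSION << 6) | (TYPE_CON << 4) | TKL,
        CODE_GET,
        (message_id >> 8) & 0xFF,
        message_id & 0xFF,
    ])
    options = b""
    prev_option = 0
    for segment in uri_path.strip("/").split("/"):
        seg = segment.encode("utf-8")
        delta = URI_PATH_OPTION - prev_option   # option numbers are delta-encoded
        assert delta < 13 and len(seg) < 13     # extended encodings not handled here
        options += bytes([(delta << 4) | len(seg)]) + seg
        prev_option = URI_PATH_OPTION
    return header + options

msg = encode_coap_get(0x1234, "/temperature")
```

The whole request is 16 bytes, versus well over a hundred for an equivalent HTTP/1.1 GET, which is precisely the kind of saving that makes the protocol viable on constrained devices.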
Cancer Informatics for Cancer Centers (CI4CC): Building a Community Focused on Sharing Ideas and Best Practices to Improve Cancer Care and Patient Outcomes.
Cancer Informatics for Cancer Centers (CI4CC) is a grassroots, nonprofit 501(c)(3) organization intended to provide a focused national forum for the engagement of senior cancer informatics leaders, primarily aimed at academic cancer centers anywhere in the world but with a special emphasis on the 70 National Cancer Institute-funded cancer centers. Although each of the participating cancer centers is structured differently, and leaders' titles vary, we know firsthand there are similarities in both the issues we face and the solutions we achieve. As a consortium, we have initiated a dedicated listserv, an open-initiatives program, and targeted biannual face-to-face meetings. These meetings are a place to review our priorities and initiatives, providing a forum for discussion of the strategic and pragmatic issues we, as informatics leaders, individually face at our respective institutions and cancer centers. Here we provide a brief history of the CI4CC organization and meeting highlights from the latest CI4CC meeting, which took place in Napa, California from October 14-16, 2019. The focus of this meeting was "intersections between informatics, data science, and population science." We conclude with a discussion of "hot topics" on the horizon for cancer informatics.
Enabling Interactive Analytics of Secure Data using Cloud Kotta
Research, especially in the social sciences and humanities, is increasingly reliant on the application of data science methods to analyze large amounts of (often private) data. Secure data enclaves provide a solution for managing and analyzing private data. However, such enclaves do not readily support discovery science: a form of exploratory or interactive analysis by which researchers execute a range of (sometimes large) analyses in an iterative and collaborative manner. The batch computing model offered by many data enclaves is well suited to executing large compute tasks; however, it is far from ideal for day-to-day discovery science. As researchers must submit jobs to queues and wait for results, the high latencies inherent in queue-based batch computing systems hinder interactive analysis. In this paper we describe how we have augmented the Cloud Kotta secure data enclave to support collaborative and interactive analysis of sensitive data. Our model uses Jupyter notebooks as a flexible analysis environment and Python language constructs to support the execution of arbitrary functions on private data within this secure framework.

Comment: To appear in Proceedings of the Workshop on Scientific Cloud Computing, Washington, DC, USA, June 2017 (ScienceCloud 2017), 7 pages.
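The interaction model the abstract describes can be sketched as follows: a researcher-defined Python function is shipped to the enclave and executed against private data server-side, with only the result returned. The `Enclave` class is a local stand-in for illustration, not Cloud Kotta's actual API, and the dataset values are invented.

```python
import statistics

class Enclave:
    """Stand-in for a secure data enclave: it holds private datasets
    and runs submitted functions on them, returning only results."""
    def __init__(self, datasets):
        self._datasets = datasets   # private; never shipped to the client

    def run(self, dataset_name, func):
        """Execute an arbitrary analysis function inside the enclave."""
        data = self._datasets[dataset_name]
        return func(data)

# From a notebook, the researcher submits an analysis function directly,
# rather than packaging it as a batch job and waiting in a queue.
enclave = Enclave({"incomes": [32_000, 48_500, 51_000, 44_250]})
result = enclave.run("incomes", lambda rows: statistics.median(rows))
```

The low-latency, function-at-a-time interaction is what makes the iterative, notebook-driven discovery science described above feasible over a secure enclave.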