Web services robustness testing
Web services are a new paradigm for building software applications that has many advantages over previous paradigms; however, Web Services are still not widely used because Service Requesters do not trust services built by others. Testing can assuage this problem because it can be used to assess the quality attributes of Web Services. This thesis proposes a framework, and presents a proof-of-concept tool, for testing the robustness and other related attributes of a Web Service; the tool can easily be enhanced to assess other quality attributes. The framework is based on analyzing the Web Services Description Language (WSDL) document of a Web Service to find which faults could affect its robustness, and then using these faults to build test case generation rules that assess the robustness quality attribute of the service. The framework gives a better understanding of the faults that may affect the robustness quality attribute of Web Services, of how these faults relate to the interface or contract of a Web Service under test, and of which testing techniques can detect such faults. The approach for building test cases was applied to many examples to demonstrate its effectiveness; these examples show that the approach and the proof-of-concept tool can assess the robustness of Web Service implementations and Web Service platforms. Based on the test case rules, the tool automatically built four hundred and two test clients to assess the robustness of these example Web Services. These test clients detected eleven robustness failures in the Web Service implementations and nine robustness failures in the Web Service platforms. The approach also helped compare the robustness of two different Web Service platforms, namely Axis and GLUE: after the same Web Services were deployed on both platforms, Axis showed fewer robustness and security failures than GLUE.
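The rule-driven generation can be pictured with a small sketch. The following Java fragment is a minimal illustration, assuming a simplified in-memory view of a WSDL parameter; the class names and the specific fault-injection rules are hypothetical stand-ins, not the thesis's actual rule set:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of WSDL-driven robustness test generation.
// A parameter's declared XSD type selects a set of fault-injection
// rules; each rule yields one malformed or boundary input.
public class RobustnessTestGenerator {

    record Parameter(String name, String xsdType) {}

    // Generate candidate invalid inputs for one parameter.
    static List<String> faultInputs(Parameter p) {
        List<String> inputs = new ArrayList<>();
        inputs.add(null);                        // missing value
        inputs.add("");                          // empty value
        inputs.add("x".repeat(100_000));         // oversized value
        switch (p.xsdType()) {
            case "xsd:int" -> {
                inputs.add(String.valueOf(Integer.MAX_VALUE));
                inputs.add(String.valueOf(Integer.MIN_VALUE));
                inputs.add("not-a-number");      // type violation
            }
            case "xsd:string" -> inputs.add("<tag>unclosed");  // malformed XML
            default -> inputs.add("\0");         // control character
        }
        return inputs;
    }

    public static void main(String[] args) {
        Parameter amount = new Parameter("amount", "xsd:int");
        // Each generated input becomes the body of one test client call.
        faultInputs(amount).forEach(v ->
                System.out.println("test case: amount=" + v));
    }
}
```

Each generated input would become the payload of one automatically built test client, which then observes whether the service under test fails gracefully or exposes a robustness failure.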
Adaptive Traffic Fingerprinting for Darknet Threat Intelligence
Darknet technology such as Tor has been used by various threat actors for organising illegal activities and data exfiltration. As such, there is a case for organisations to block such traffic, or to try to identify when it is used and for what purposes. However, anonymity in cyberspace has always been a domain of conflicting interests: while it gives nefarious actors enough power to mask their illegal activities, it is also a cornerstone of freedom of speech and privacy. We present a proof of concept for a novel algorithm that could form the fundamental pillar of a darknet-capable Cyber Threat Intelligence platform. The solution can reduce the anonymity of Tor users, and considers the existing visibility of network traffic before optionally initiating targeted or widespread BGP interception. In combination with server HTTP response manipulation, the algorithm attempts to reduce the candidate data set by eliminating the client-side traffic that is least likely to be responsible for the server-side connections of interest. Our test results show that MITM-manipulated server responses lead to the expected changes received by the Tor client. Using simulation data generated by Shadow, we show that the detection scheme is effective, with a false positive rate of 0.001, while the sensitivity for detecting non-targets was 0.016 ± 0.127. Our algorithm could assist collaborating organisations willing to share their threat intelligence or cooperate during investigations.
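To make the elimination step concrete, the sketch below reduces a candidate set of monitored flows to those that ever exhibit a size perturbation injected at the server. It is a deliberately simplified illustration; the flow model, threshold, and class names are hypothetical and stand in for the far richer fingerprinting the paper describes:

```java
import java.util.List;
import java.util.stream.Collectors;

// Minimal sketch of the candidate-elimination idea: the server response
// is manipulated (e.g. padded to a distinctive size), and any monitored
// client-side flow that never exhibits the induced pattern is discarded.
// The threshold and flow model are hypothetical simplifications.
public class CandidateFilter {

    record Flow(String clientId, List<Integer> cellSizes) {}

    // A flow stays a candidate only if it carries at least one burst
    // close to the size perturbation injected at the server.
    static boolean matchesInjectedPattern(Flow f, int injectedSize, int tolerance) {
        return f.cellSizes().stream()
                .anyMatch(s -> Math.abs(s - injectedSize) <= tolerance);
    }

    static List<Flow> reduceCandidates(List<Flow> observed, int injectedSize) {
        return observed.stream()
                .filter(f -> matchesInjectedPattern(f, injectedSize, 64))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Flow> flows = List.of(
                new Flow("A", List.of(514, 514, 8192)),   // shows the pattern
                new Flow("B", List.of(514, 514, 514)));   // never does
        // Only client A survives the elimination step.
        reduceCandidates(flows, 8200).forEach(f -> System.out.println(f.clientId()));
    }
}
```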
Performance impact of web services on Internet servers
While traditional Internet servers mainly served static, and later also dynamic, content, the popularity of Web services is increasing rapidly. Web services incorporate additional overhead compared to traditional web interaction. This overhead increases the demand on Internet servers, which is of particular importance when the request rate to the server is high. We conduct experiments showing that the overhead imposed by Web services is non-negligible during server overload: in our experiments, the response time for Web services is more than 30% higher, and the server throughput more than 25% lower, compared to traditional web interaction using dynamically created HTML pages.
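A measurement of this kind can be approximated with a simple client-side benchmark. The sketch below times repeated requests against a SOAP endpoint and a dynamic-HTML endpoint and reports the relative overhead; both URLs are hypothetical, and a plain GET stands in for the full SOAP exchange used in the actual experiments:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of the kind of measurement behind the reported numbers:
// time N requests against a Web service endpoint and a dynamic-HTML
// endpoint, then compare mean response times. Both URLs are hypothetical.
public class OverheadBenchmark {

    static double meanMillis(HttpClient client, String url, int n) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).build();
        long total = 0;
        for (int i = 0; i < n; i++) {
            long t0 = System.nanoTime();
            client.send(req, HttpResponse.BodyHandlers.discarding());
            total += System.nanoTime() - t0;
        }
        return total / (n * 1_000_000.0);  // mean in milliseconds
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        double soap = meanMillis(client, "http://server/axis/QuoteService", 100);
        double html = meanMillis(client, "http://server/servlet/QuotePage", 100);
        System.out.printf("SOAP: %.1f ms, HTML: %.1f ms, overhead: %.0f%%%n",
                soap, html, 100 * (soap - html) / html);
    }
}
```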
Machine-Readable Privacy Certificates for Services
Privacy-aware processing of personal data on the web of services requires managing a number of issues arising from both the technical and the legal domain. Several approaches have been proposed for matching privacy requirements (on the client side) with privacy guarantees (on the service provider side). Still, the assurance of effective data protection (when possible) relies on substantial human effort and exposes organizations to significant (non-)compliance risks. In this paper we put forward the idea that a privacy certification scheme producing and managing machine-readable artifacts, in the form of privacy certificates, can play an important role toward solving this problem. Digital privacy certificates represent the reasons why a privacy property holds for a service and describe the privacy measures supporting it. Privacy certificates can also be used to automatically select services whose certificates match the client policies (privacy requirements). Our proposal relies on an evolution of the conceptual model developed in the Assert4Soa project and on a certificate format specifically tailored to represent privacy properties. To validate our approach, we present a worked-out instance showing how the privacy property Retention-based unlinkability can be certified for a banking financial service.
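The automatic selection step can be illustrated with a small sketch: a client policy is satisfied only if every requirement is met by a certified property. The flat key-value certificate below is a hypothetical simplification; a real certificate would carry signed, structured evidence for each property:

```java
import java.util.Map;

// Minimal sketch of certificate/policy matching. The certificate fields
// are hypothetical; a real scheme would attach signed evidence to each
// privacy property rather than a flat map of strings.
public class CertificateMatcher {

    record PrivacyCertificate(String service, Map<String, String> properties) {}
    record ClientPolicy(Map<String, String> requirements) {}

    // A service qualifies only if every client requirement is met
    // verbatim by a certified property.
    static boolean matches(PrivacyCertificate cert, ClientPolicy policy) {
        return policy.requirements().entrySet().stream()
                .allMatch(r -> r.getValue()
                        .equals(cert.properties().get(r.getKey())));
    }

    public static void main(String[] args) {
        PrivacyCertificate cert = new PrivacyCertificate("BankingService",
                Map.of("retention", "30d", "unlinkability", "retention-based"));
        ClientPolicy policy = new ClientPolicy(
                Map.of("unlinkability", "retention-based"));
        System.out.println(matches(cert, policy));  // true
    }
}
```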
RAFDA: A Policy-Aware Middleware Supporting the Flexible Separation of Application Logic from Distribution
Middleware technologies often limit the way in which object classes may be
used in distributed applications due to the fixed distribution policies that
they impose. These policies permeate applications developed using existing
middleware systems and force an unnatural encoding of application level
semantics. For example, the application programmer has no direct control over
inter-address-space parameter passing semantics. Semantics are fixed by the
distribution topology of the application, which is dictated early in the design
cycle. This creates applications that are brittle with respect to changes in
distribution. This paper explores technology that provides control over the
extent to which inter-address-space communication is exposed to programmers, in
order to aid the creation, maintenance and evolution of distributed
applications. The described system permits arbitrary objects in an application
to be dynamically exposed for remote access, allowing applications to be
written without concern for distribution. Programmers can conceal or expose the
distributed nature of applications as required, permitting object placement and
distribution boundaries to be decided late in the design cycle and even
dynamically. Inter-address-space parameter passing semantics may also be
decided independently of object implementation and at varying times in the
design cycle, again possibly as late as run-time. Furthermore, transmission
policy may be defined on a per-class, per-method or per-parameter basis,
maximizing plasticity. This flexibility is of utility in the development of new
distributed applications, and the creation of management and monitoring
infrastructures for existing applications.
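The per-method transmission policy can be pictured as a run-time registry consulted at call time. The sketch below is a minimal illustration of that idea; the names are hypothetical and do not reproduce RAFDA's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of per-method transmission policy in the spirit of the
// paper: a registry records whether a given method's parameters cross
// address spaces by value or by reference, and the policy can be
// changed as late as run time. All names here are hypothetical.
public class TransmissionPolicyRegistry {

    enum Policy { BY_VALUE, BY_REFERENCE }

    private final Map<String, Policy> perMethod = new HashMap<>();
    private Policy defaultPolicy = Policy.BY_REFERENCE;

    // Key format "Class#method" keeps the example self-contained.
    void set(String method, Policy p) { perMethod.put(method, p); }

    Policy resolve(String method) {
        return perMethod.getOrDefault(method, defaultPolicy);
    }

    public static void main(String[] args) {
        TransmissionPolicyRegistry registry = new TransmissionPolicyRegistry();
        registry.set("Account#getBalance", Policy.BY_VALUE);
        // The policy is consulted at call time, so it stays independent
        // of the object implementation and of the distribution topology.
        System.out.println(registry.resolve("Account#getBalance"));  // BY_VALUE
        System.out.println(registry.resolve("Account#transfer"));    // BY_REFERENCE
    }
}
```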
Web Service Trust: Towards A Dynamic Assessment Framework
Trust in software services is a key prerequisite for the success and wide adoption of service-oriented computing (SOC) in an open Internet world. However, trust is poorly assessed by existing methods and technologies, especially in dynamically composed and deployed SOC systems. In this paper, we discuss current methods for assessing trust in service-oriented computing and identify gaps in current platforms, in particular with regard to runtime trust assessment. To address these gaps, we propose a model of runtime trust assessment of software services and introduce a framework for realizing the model. A key characteristic of our approach is the support it offers for customizable assessment of trust based on evidence collected during the operation of software services, and its ability to combine this evidence with subjective assessments coming from service clients.
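One simple way to combine the two evidence sources is a weighted average of an objective runtime score and the mean of client ratings. The sketch below illustrates that idea; the weighting scheme is a hypothetical example, not the model proposed in the paper:

```java
import java.util.List;

// Minimal sketch of one way runtime evidence and subjective client
// ratings could be combined into a single trust score. The weighting
// scheme is a hypothetical illustration, not the paper's model.
public class TrustScore {

    // Evidence and ratings are normalized to [0, 1].
    static double combine(double evidenceScore, List<Double> clientRatings,
                          double evidenceWeight) {
        double subjective = clientRatings.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.5);
        return evidenceWeight * evidenceScore
                + (1 - evidenceWeight) * subjective;
    }

    public static void main(String[] args) {
        double evidence = 0.98;                        // e.g. observed success rate
        List<Double> ratings = List.of(0.8, 0.9, 0.6); // client feedback
        // Weight runtime evidence more heavily than opinion.
        System.out.printf("trust = %.3f%n", combine(evidence, ratings, 0.7));
    }
}
```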
MonALISA: A Distributed Monitoring Service Architecture
The MonALISA (Monitoring Agents in A Large Integrated Services Architecture)
system provides a distributed monitoring service. MonALISA is based on a
scalable Dynamic Distributed Services Architecture which is designed to meet
the needs of physics collaborations for monitoring global Grid systems, and is
implemented using JINI/JAVA and WSDL/SOAP technologies. The scalability of the system derives from the use of multithreaded Station Servers to host a variety of loosely coupled, self-describing dynamic services; from the ability of each service to register itself and then be discovered and used by any other service or client that requires such information; and from the ability of all services and clients subscribing to a set of events (state changes) in the system to be notified automatically. The framework integrates several existing
monitoring tools and procedures to collect parameters describing computational
nodes, applications and network performance. It has built-in SNMP support and
network-performance monitoring algorithms that enable it to monitor end-to-end
network performance as well as the performance and state of site facilities in
a Grid. MonALISA is currently running around the clock on the US CMS test Grid
as well as an increasing number of other sites. It is also being used to
monitor the performance and optimize the interconnections among the reflectors
in the VRVS system.
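The subscribe/notify behaviour at the heart of the architecture can be sketched as a plain observer pattern: clients register a listener and are called back on every state change. This is an illustrative simplification, not MonALISA's JINI-based implementation:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal sketch of the subscribe/notify pattern the architecture
// describes: services and clients subscribe to state-change events and
// are notified automatically. A plain observer illustration only.
public class MonitorStation {

    interface Listener { void onEvent(String node, String metric, double value); }

    private final List<Listener> listeners = new CopyOnWriteArrayList<>();

    void subscribe(Listener l) { listeners.add(l); }

    // Called whenever a monitoring module collects a new parameter.
    void publish(String node, String metric, double value) {
        listeners.forEach(l -> l.onEvent(node, metric, value));
    }

    public static void main(String[] args) {
        MonitorStation station = new MonitorStation();
        station.subscribe((node, metric, value) ->
                System.out.printf("%s %s = %.1f%n", node, metric, value));
        station.publish("node01", "cpu.load", 0.73);
    }
}
```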
End-to-End QoS Support for a Medical Grid Service Infrastructure
Quality of Service support is an important prerequisite for the adoption of Grid technologies for medical applications. The GEMSS Grid infrastructure addresses this issue by offering end-to-end QoS in the form of explicit timeliness guarantees for compute-intensive medical simulation services. Within GEMSS, parallel applications installed on clusters or other HPC hardware may be exposed as QoS-aware Grid services, for which clients may dynamically negotiate QoS constraints with respect to response time and price using Service Level Agreements. The GEMSS infrastructure and middleware are based on standard Web services technology and rely on a reservation-based approach to QoS, coupled with application-specific performance models. In this paper we present an overview of the GEMSS infrastructure, describe the available QoS and security mechanisms, and demonstrate the effectiveness of our methods with a Grid-enabled medical imaging service.
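The reservation-based negotiation can be sketched as follows: the service consults an application-specific performance model plus its current reservations, and commits to an SLA only if both the deadline and the price cap can be met. The linear model and all numbers below are hypothetical:

```java
// Minimal sketch of reservation-based QoS negotiation: a client asks for
// a deadline and a price cap, and the service checks a performance model
// plus current reservations before committing to an SLA. The linear
// model and the constants are hypothetical illustrations.
public class QosNegotiator {

    record Request(double problemSize, long deadlineMs, double maxPrice) {}
    record Offer(long promisedMs, double price) {}

    private long reservedMs = 0;  // time already committed to other SLAs

    // Application-specific performance model: runtime grows with problem size.
    long predictRuntimeMs(double problemSize) {
        return (long) (problemSize * 40);
    }

    Offer negotiate(Request r) {
        long finish = reservedMs + predictRuntimeMs(r.problemSize());
        double price = finish * 0.01;
        if (finish > r.deadlineMs() || price > r.maxPrice()) {
            return null;  // no feasible SLA; the client may retry elsewhere
        }
        reservedMs = finish;  // commit the reservation
        return new Offer(finish, price);
    }

    public static void main(String[] args) {
        Offer o = new QosNegotiator().negotiate(new Request(500, 30_000, 500));
        System.out.println(o != null ? o : "rejected");
    }
}
```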
The OMII Software – Demonstrations and Comparisons between two different deployments for Client-Server Distributed Systems
This paper describes the key elements of the OMII software and the scenarios in which it can be deployed to achieve distributed computing in the UK e-Science community; two different deployments of client-server distributed systems are demonstrated. Scenarios and experiments for each deployment are described, and their advantages and disadvantages are compared and analyzed. We conclude that our first deployment is more relevant for system administrators or developers, while the second is more suitable from the users' perspective, as it lets them submit and check the status of hundreds of job submissions.
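The user-facing pattern of the second deployment, submitting many jobs and then polling their status, can be sketched as below; the in-memory JobService and its methods are hypothetical stand-ins for a real OMII endpoint:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Minimal sketch of the user-facing pattern described above: submit many
// jobs to a remote service, then poll each one's status. The in-memory
// JobService stands in for a real endpoint; its API is hypothetical.
public class BatchClient {

    static class JobService {
        private final Map<String, String> status = new HashMap<>();

        String submit(String script) {
            String id = UUID.randomUUID().toString();
            status.put(id, "PENDING");  // a real service would queue the script
            return id;
        }

        String check(String id) { return status.getOrDefault(id, "UNKNOWN"); }
    }

    public static void main(String[] args) {
        JobService service = new JobService();
        String[] ids = new String[100];
        for (int i = 0; i < ids.length; i++) {
            ids[i] = service.submit("run-simulation --case " + i);
        }
        // Poll the status of every submitted job.
        for (String id : ids) {
            System.out.println(id + " -> " + service.check(id));
        }
    }
}
```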