A Framework to assess the value of web services
Large organizations often begin to adopt new software technologies prior to establishing appropriate value frameworks. This approach may produce sub-optimal investment decisions and technology adoption rates, and introduce excessive risk. In this thesis, a value-based framework is developed for assessing the impact of Web Services technology investments on business systems development. The value factors included in the framework are data management, application development and deployment, system integration, and response time to market opportunities.
Why The United Kingdom's Proposal For A "Package Of Platform Safety Measures" Will Harm Free Speech
This article critiques key proposals of the United Kingdom's "Online Harms" White Paper, in particular the proposal for a new digital regulator and the imposition of a "duty of care" on platforms. While acknowledging that a duty of care backed by sanctions works well in some environments, we argue that it is not appropriate for policing the White Paper's identified harms, as it could result in the blocking of legal but subjectively harmful content. Furthermore, the proposed regulator lacks the necessary independence and could be subject to political interference. We conclude that the imposition of a duty of care would have an unacceptable chilling effect on free expression, resulting in a draconian regulatory environment for platforms and adversely affecting users' digital rights.
Discovering Network Control Vulnerabilities and Policies in Evolving Networks
The range and number of new applications and services are growing at an unprecedented rate. Computer networks need to be able to provide connectivity for these services and meet their constantly changing demands. This requires not only support for new network protocols and security requirements, but often architectural redesigns for long-term improvements to efficiency, speed, throughput, cost, and security. Networks are now facing a drastic increase in size and are required to carry a constantly growing amount of heterogeneous traffic. Unfortunately, such dynamism greatly complicates the security not only of the end nodes in the network, but also of the nodes of the network itself. To make matters worse, just as applications are being developed at faster and faster rates, attacks are becoming more pervasive and complex. Networks need to be able to understand the impact of these attacks and protect against them.
Network control devices, such as routers, firewalls, censorship devices, and base stations, are elements of the network that make decisions on how traffic is handled. Although network control devices are expected to act according to specifications, in practice there are various reasons why they may not: protocols may be flawed, ambiguous, or incomplete; developers may introduce unintended bugs; or attackers may find and exploit vulnerabilities in the devices. Malfunction can intentionally or unintentionally threaten the confidentiality, integrity, and availability of end nodes and of the data that passes through the network. It can also impact the availability and performance of the control devices themselves and the security policies of the network. The fast-paced evolution and scalability of current and future networks create a dynamic environment in which it is difficult to develop automated tools for testing new protocols and components. At the same time, they make such tools vital for discovering implementation flaws and protocol vulnerabilities as networks become larger and more complex, and as new and potentially unrefined architectures are adopted. This thesis presents the design, implementation, and evaluation of a set of tools for understanding the implementation of network control nodes and how they react to changes in traffic characteristics as networks evolve. We first introduce Firecycle, a test bed for analyzing the impact of large-scale attacks and Machine-to-Machine (M2M) traffic on the Long Term Evolution (LTE) network. We then discuss Autosonda, a tool for automatically discovering rule implementations and finding triggering traffic features in censorship devices.
This thesis provides the following contributions:
1. The design, implementation, and evaluation of two tools that discover models of network control nodes in two scenarios of evolving networks: a mobile network and a censored Internet.
2. The first test bed for analyzing the impact of large-scale attacks and of traffic scalability on LTE mobile networks.
3. The first LTE test bed that can be scaled to arbitrary size and that deploys traffic models based on real traces taken from a tier-1 operator.
4. An analysis of traffic models of various categories of Internet of Things (IoT) devices.
5. The first study demonstrating the impact of M2M scalability and signaling overload on the packet core of LTE mobile networks.
6. A specification for modeling censorship device decision models.
7. A means of automating the discovery of the features used in censorship device decision models, the comparison of these models, and the discovery of their rules.
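The feature-discovery idea in contribution 7 can be sketched as a probing loop: vary traffic features one combination at a time against a device and record which combinations trigger blocking. The device model and feature names below are hypothetical stand-ins, not the thesis's actual tooling:

```python
from itertools import product

# Hypothetical stand-in for a censorship device: blocks requests whose
# host contains a banned keyword, but only on the standard port.
def device_blocks(request):
    return "forbidden" in request["host"] and request["port"] == 80

def discover_rules(probe_fn, feature_space):
    """Probe every combination of feature values and record which
    combinations trigger blocking, yielding a simple decision table."""
    names = list(feature_space)
    blocked = []
    for values in product(*feature_space.values()):
        request = dict(zip(names, values))
        if probe_fn(request):
            blocked.append(request)
    return blocked

feature_space = {
    "host": ["example.com", "forbidden.example.com"],
    "port": [80, 8080],
}
rules = discover_rules(device_blocks, feature_space)
print(rules)  # only the (banned host, port 80) combination is blocked
```

A real tool would replace `device_blocks` with live network probes and would need to cope with noise and rate limiting, but the exhaustive feature sweep captures the core idea of recovering a device's decision model from its observable behavior.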
Web Service Discovery Based on Past User Experience
Web service technology provides a way to simplify interoperability among different organizations. A piece of functionality available as a web service can be incorporated into a new business process. Given the steadily growing number of available web services, it is hard for developers to find services appropriate for their needs. The main research efforts in this area focus on developing mechanisms for semantic web service description and matching. In this paper, we present an alternative approach for supporting users in web service discovery. Our system implements the implicit culture approach for recommending web services to developers based on the history of decisions made by other developers with similar needs. We explain the main ideas underlying our approach and report on experimental results.
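The history-based recommendation idea can be sketched with a toy similarity measure: score each service by how closely the needs of developers who previously chose it match the current need. The log entries and service names are illustrative, not the paper's actual system:

```python
from collections import Counter

# Hypothetical interaction log: (developer's stated need, service chosen).
history = [
    ({"currency", "conversion"}, "FxRatesService"),
    ({"currency", "exchange", "rates"}, "FxRatesService"),
    ({"weather", "forecast"}, "WeatherService"),
]

def jaccard(a, b):
    """Similarity of two keyword sets: shared terms over all terms."""
    return len(a & b) / len(a | b)

def recommend(need, history, top_n=1):
    """Accumulate per-service similarity to past needs and return the
    services most often chosen in similar situations."""
    scores = Counter()
    for past_need, service in history:
        scores[service] += jaccard(need, past_need)
    return [service for service, _ in scores.most_common(top_n)]

print(recommend({"currency", "rates"}, history))  # ['FxRatesService']
```

The implicit culture approach described in the abstract is richer than keyword overlap, but the shape is the same: decisions made by similar users in similar situations drive the ranking.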
Cyberbullying Detection System with Multiple Server Configurations
Due to the proliferation of online networking, friendships, and relationships, social communication has reached a whole new level. As a result, there is increasing evidence that social applications are frequently used for bullying. State-of-the-art studies in cyberbullying detection have mainly focused on the content of the conversations while largely ignoring the users involved. To address this problem, we have designed a distributed cyberbullying detection system that detects bullying messages and drops them before they are sent to the intended receiver. A prototype has been created using principles of NLP, machine learning, and distributed systems. Preliminary studies conducted with it indicate that our approach is promising.
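The drop-before-delivery idea can be sketched with a trivial keyword scorer standing in for the trained classifier the abstract describes. The lexicon, threshold, and function names are all hypothetical:

```python
# Toy lexicon standing in for a trained model; a real system would score
# messages with an NLP classifier, not a word list.
ABUSIVE_TERMS = {"loser", "stupid", "ugly"}

def bullying_score(message):
    """Fraction of words in the message that appear in the lexicon."""
    words = message.lower().split()
    return sum(w.strip(".,!?") in ABUSIVE_TERMS for w in words) / max(len(words), 1)

def deliver(message, threshold=0.2):
    """Return the message for delivery, or None to drop it server-side
    before it ever reaches the intended receiver."""
    if bullying_score(message) >= threshold:
        return None
    return message

print(deliver("See you at practice tomorrow"))   # delivered unchanged
print(deliver("You are such a loser, stupid!"))  # None: dropped
```

Placing the check on the server path, rather than in the client, matches the distributed design the abstract outlines: the receiver never sees the dropped message.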
Methodological challenges in online trials: an update and insights from the REACT trial
There has been a growth in the number of web-based trials of web-based interventions, adding to an increasing evidence base for their feasibility and effectiveness. However, there are challenges associated with such trials, which researchers must address. This discussion paper follows the structure of the Down Your Drink trial methodology paper, providing an update from the literature for each key trial parameter (recruitment, registration eligibility checks, consent and participant withdrawal, randomization, engagement with a web-based intervention, retention, data quality and analysis, spamming, cybersquatting, patient and public involvement, and risk management and adverse events), along with our own recommendations based on designing the Relatives Education and Coping Toolkit randomized controlled trial for relatives of people with psychosis or bipolar disorder. The key recommendations outlined here are relevant for future web-based and hybrid trials and studies using iterative development and test models such as the Accelerated Creation-to-Sustainment model, both within general health research and specifically within mental health research for relatives. Researchers should continue to share lessons learned from conducting web-based trials of web-based interventions to benefit future studies.
RESTest: automated black-box testing of RESTful web APIs
Testing RESTful APIs thoroughly is critical due to their key role in software integration. Existing tools for the automated generation of test cases in this domain have shown great promise, but their applicability is limited as they mostly rely on random inputs, i.e., fuzzing. In this paper, we present RESTest, an open-source black-box testing framework for RESTful web APIs. Based on the API specification, RESTest supports the generation of test cases using different testing techniques, such as fuzzing and constraint-based testing, among others. RESTest is developed as a framework and can be easily extended with new test case generators and test writers for different programming languages. We evaluate the tool in two scenarios: offline and online testing. In the former, we show how RESTest can efficiently generate realistic test cases (test inputs and test oracles) that uncover bugs in real-world APIs. In the latter, we show RESTest's capabilities as a continuous testing and monitoring framework. Demo video: https://youtu.be/1f_tjdkaCKo. Funding: Junta de Andalucía APOLO (US-1264651); Junta de Andalucía EKIPMENT-PLUS (P18-FR-2895); Ministerio de Ciencia, Innovación y Universidades RTI2018-101204-B-C21 (HORATIO); Ministerio de Educación, Cultura y Deporte FPU17/0407.
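The contrast the abstract draws between fuzzing and constraint-based generation can be sketched for a single hypothetical endpoint with an inter-parameter dependency ('order' only makes sense when 'sort' is set). The parameter names and value pools are illustrative, not RESTest's actual API:

```python
import random
from itertools import product

# Hypothetical search endpoint: three parameters, one dependency.
PARAMS = {
    "q": ["tea", "", "x" * 500],
    "sort": [None, "price"],
    "order": [None, "asc"],
}

def satisfies_constraints(case):
    """Inter-parameter dependency: 'order' requires 'sort' to be set."""
    return case["order"] is None or case["sort"] is not None

def fuzz_case(rng):
    """Pure fuzzing: draw each parameter independently at random.
    May well produce cases that violate the dependency."""
    return {name: rng.choice(values) for name, values in PARAMS.items()}

def constraint_based_cases():
    """Enumerate only the combinations that satisfy the dependency."""
    names = list(PARAMS)
    for values in product(*PARAMS.values()):
        case = dict(zip(names, values))
        if satisfies_constraints(case):
            yield case

rng = random.Random(0)
sample = fuzz_case(rng)          # random case, constraints not guaranteed
valid = list(constraint_based_cases())
print(len(valid))  # → 9 (12 combinations minus 3 violating the dependency)
```

Both kinds of cases are useful: invalid fuzzed inputs should elicit 4xx responses, while constraint-satisfying inputs exercise the API's intended behavior; this is the distinction the paper's offline evaluation leans on.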
Web browsing automation for applications quality control
Context: Quality control comprises the set of activities aimed at evaluating whether software meets its specification and delivers the functionality expected by consumers. These activities are often omitted in the development process and, as a result, the final software product usually lacks quality. Objective: We propose a set of techniques to automate quality control for web applications from the client side, guiding the process by functional and non-functional requirements (performance, security, compatibility, usability, and accessibility). Method: The first step towards automation is to define the structure of the web navigation. Existing software artifacts from the analysis and design phases are reused. Then, the independent navigation paths are found, and each path is traversed automatically using real browsers while different kinds of assessments are carried out. Results: The processes and methods proposed in this paper have been implemented by means of a reference architecture and open-source tools. A laboratory experiment and an industrial case study have been performed to validate the proposal. Conclusion: The definition of navigation paths is a rich approach to modeling web applications. Grey-box (combining black-box and white-box) methods have proved very valuable for web assessment. The Chinese Postman Problem (CPP) is an optimal way to find the independent paths in a web navigation modeled as a directed graph.
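The path-finding step can be sketched for the special case where the directed navigation graph is balanced (every page has equal in- and out-degree): there the Chinese Postman tour reduces to an Eulerian circuit, found below with Hierholzer's algorithm. The page names are hypothetical, and the general CPP (which first duplicates edges to balance the graph) is not shown:

```python
def eulerian_circuit(edges, start):
    """Hierholzer's algorithm. edges: dict mapping each node to a list of
    successor nodes, one list entry per directed edge. Assumes the graph
    is connected and balanced, so a circuit covering every edge exists."""
    remaining = {u: list(vs) for u, vs in edges.items()}
    stack, circuit = [start], []
    while stack:
        u = stack[-1]
        if remaining.get(u):
            stack.append(remaining[u].pop())  # follow an unused edge
        else:
            circuit.append(stack.pop())       # dead end: emit node
    return circuit[::-1]

site = {  # hypothetical web navigation graph
    "home": ["catalog", "cart"],
    "catalog": ["home"],
    "cart": ["home"],
}
tour = eulerian_circuit(site, "home")
print(tour)  # traverses every link exactly once and returns to 'home'
```

Driving a real browser along `tour` (e.g. with a WebDriver client) then exercises every navigation link exactly once, which is the economy the abstract attributes to the CPP formulation.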