
    A novel composite web service selection based on quality of service

    Using the internet, a dynamic environment by virtue of its distributed nature, for web service deployment has become a crucial issue in QoS-driven service composition. Accurate adaptation must be undertaken to provide a reliable service composition in which the composed services execute appropriately. That is, the critical aspect of service composition is the proper execution of a combination of web services, with service adaptation performed with respect to predetermined functional and non-functional characteristics. In this paper, we examine optimization approaches in order to devise an appropriate scheme for QoS-based composite web service selection.
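
    To make the selection step concrete, below is a minimal sketch of weighted-sum QoS scoring, one common approach in this literature; it is not the scheme proposed in the paper above. The candidate services, QoS attributes, and weights are illustrative assumptions.

```python
# Minimal sketch of QoS-driven service selection: score each candidate
# service by a weighted sum of normalised QoS attributes and pick the
# best candidate per abstract task. All names and values are illustrative.

CANDIDATES = {
    "payment": [
        {"name": "PayA", "response_time": 120, "availability": 0.99, "cost": 0.05},
        {"name": "PayB", "response_time": 80,  "availability": 0.95, "cost": 0.09},
    ],
    "shipping": [
        {"name": "ShipA", "response_time": 200, "availability": 0.98, "cost": 0.02},
        {"name": "ShipB", "response_time": 150, "availability": 0.97, "cost": 0.04},
    ],
}

# Positive weights reward an attribute; negative weights penalise it.
WEIGHTS = {"response_time": -0.4, "availability": 0.4, "cost": -0.2}

def normalise(services, attr):
    """Min-max normalise one attribute across a task's candidates."""
    values = [s[attr] for s in services]
    lo, hi = min(values), max(values)
    return {s["name"]: 0.0 if hi == lo else (s[attr] - lo) / (hi - lo)
            for s in services}

def select_composition(candidates, weights):
    """Pick the highest-scoring candidate for each abstract task."""
    plan = {}
    for task, services in candidates.items():
        norms = {attr: normalise(services, attr) for attr in weights}
        def score(s):
            return sum(w * norms[attr][s["name"]] for attr, w in weights.items())
        plan[task] = max(services, key=score)["name"]
    return plan

print(select_composition(CANDIDATES, WEIGHTS))
```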

    A security- and quality-aware system architecture for Internet of Things

    Internet of Things (IoT) is characterized, at the system level, by high diversity with respect to enabling technologies and supported services. IoT must also deal with huge amounts of heterogeneous data generated by devices, transmitted by the underpinning infrastructure, and processed to support value-added services. In order to provide users with valuable output, the IoT architecture should guarantee the suitability and trustworthiness of the processed data. This is a major requirement of such systems in order to guarantee robustness and reliability at the service level. In this paper, we introduce a novel IoT architecture able to support security, privacy and data quality guarantees, thereby effectively boosting the diffusion of IoT services.
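
    As an illustration of the kind of quality-and-trust gate such an architecture might interpose between device data and value-added services, here is a minimal sketch; the field names, thresholds, and trust registry are assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch: admit a sensor reading only if it is fresh, in a
# plausible range, and from a sufficiently trusted source. The trust
# registry and thresholds below are hypothetical.

import time

TRUSTED_SOURCES = {"sensor-42": 0.9, "sensor-17": 0.4}  # hypothetical trust scores
VALID_RANGE = (-40.0, 85.0)   # plausible temperature range, degrees C
MAX_AGE_S = 60                # discard stale readings
MIN_TRUST = 0.5

def admit(reading):
    """Return True if a reading passes freshness, range, and trust checks."""
    fresh = time.time() - reading["timestamp"] <= MAX_AGE_S
    in_range = VALID_RANGE[0] <= reading["value"] <= VALID_RANGE[1]
    trusted = TRUSTED_SOURCES.get(reading["device_id"], 0.0) >= MIN_TRUST
    return fresh and in_range and trusted

reading = {"device_id": "sensor-42", "value": 21.5, "timestamp": time.time()}
print(admit(reading))  # True: fresh, in-range, and from a trusted device
```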

    Trusted content-based publish/subscribe trees

    Publish/Subscribe systems make strong assumptions about the expected behaviour of clients and routers, as it is assumed they all abide by the matching and routing protocols. Assumptions of implicit trust between the components of the publish/subscribe infrastructure are acceptable where the underlying event distribution service is under the control of one or more co-operating administrative entities and contracts between clients and these authorities exist; however, there are application contexts where these presumptions do not hold. In such environments, such as ad hoc networks, there is the possibility of selfish and malicious behaviour that can disrupt the routing and matching algorithms. The most commonly researched approach to security in publish/subscribe systems is role-based access control (RBAC). RBAC is suitable for ensuring confidentiality, but because it assumes strong identities associated with well-defined roles, and lacks monitoring systems that would allow policies to adapt to the changing behaviour of clients, it is not appropriate for environments where identities cannot be assigned to roles in the absence of a trusted administrative entity; where long-lived identities of entities do not exist; and where the threat model consists of highly adaptable malicious and selfish entities. Motivated by recent work applying trust and reputation to Peer-to-Peer networks, where past behaviour is used to generate trust opinions that inform future transactions, we propose an approach in which the publish/subscribe infrastructure is constructed and re-configured with respect to the trust preferences of clients and routers. In this thesis, we show how Publish/Subscribe trees (PSTs) can be constructed with respect to the trust preferences of publishers and subscribers and the overhead costs of event dissemination. Using social welfare theory, it is shown that individual trust preferences over clients and routers, informed by a variety of trust sources, can be aggregated to give a social preference over the set of feasible PSTs. By combining this with the existing work on PST overheads, the Maximum Trust PST with Overhead Budget problem is defined and shown to be NP-complete. An exhaustive search algorithm is proposed that is shown to be suitable only for very small problem sizes. To improve scalability, a faster tabu search algorithm is presented, which is shown to scale to larger problem instances and to give good approximations of the optimal solutions. The research contributions of this work are: the use of social welfare theory to provide a mechanism for establishing the trustworthiness of PSTs; the finding that individual trust is not interpersonally comparable, as is assumed in much of the trust literature; the Maximum Trust PST with Overhead Budget problem; and algorithms to solve this problem.
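
    To illustrate the aggregation step, the following is a minimal sketch of combining individual trust rankings over feasible PSTs into a social preference under an overhead budget, using a Borda count as one standard social welfare rule. The rankings, overheads, and budget are hypothetical, and this is not the thesis's actual algorithm; note that ranks rather than raw trust scores are aggregated, consistent with the finding that individual trust is not interpersonally comparable.

```python
# Minimal sketch: Borda-count aggregation of per-client trust rankings
# over feasible publish/subscribe trees, then selection of the best
# tree whose dissemination overhead fits a budget. All data is hypothetical.

from collections import defaultdict

# Each client ranks the feasible trees from most to least trusted.
rankings = {
    "publisher_1":  ["T1", "T3", "T2"],
    "subscriber_1": ["T3", "T1", "T2"],
    "subscriber_2": ["T1", "T2", "T3"],
}

overheads = {"T1": 12, "T2": 7, "T3": 9}  # hypothetical dissemination costs
BUDGET = 10

def social_preference(rankings):
    """Borda count: a tree ranked i-th out of n earns n-1-i points."""
    scores = defaultdict(int)
    for ranking in rankings.values():
        n = len(ranking)
        for i, tree in enumerate(ranking):
            scores[tree] += n - 1 - i
    return scores

scores = social_preference(rankings)
feasible = [t for t in scores if overheads[t] <= BUDGET]
best = max(feasible, key=lambda t: scores[t])
print(best, dict(scores))  # 'T3' wins once over-budget T1 is excluded
```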

    On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems

    Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data has also become the cornerstone of strategic decision-making processes in businesses. For this purpose, there are numerous techniques for extracting knowledge and value from data. For example, optimisation algorithms excel at supporting decision-making processes to improve the use of resources, time and costs in the organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. The support of Big Data technologies (which are based on distributed environments) is therefore required, given the volume, variety and velocity of data. Then, in order to extract value from the data, a set of techniques or activities is applied in an orderly way and at different stages. This set of techniques or activities, which facilitates the acquisition, preparation, and analysis of data, is known in the literature as a Big Data pipeline. In this thesis, the improvement of three stages of Big Data pipelines is tackled: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, by focussing on each stage, or from a more complex and global perspective, implying the coordination of these stages to create data workflows. The first stage to improve is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with various levels of nested structures, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. Therefore, this thesis aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases: one is a general-purpose data transformation language, while the other is a DSL aimed at extracting event logs in a standard format for process mining algorithms. The second area for improvement is the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results. A clear example is optimisation algorithms: if the data are not sufficiently accurate and complete, the search space can be severely affected. Therefore, this thesis formulates a methodology for modelling Data Quality rules adjusted to the context of use, as well as a tool that automates their assessment. This makes it possible to discard data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps to select actions to improve the usability of the data. The third and last proposal involves the Data Analysis stage. Here, the thesis faces the challenge of supporting the use of optimisation problems in Big Data pipelines. There is a lack of methodological solutions for computing exhaustive optimisation problems (i.e., those that guarantee finding an optimal solution by exploring the whole search space) in distributed environments. The resolution of this type of problem in the Big Data context is computationally complex, and can be NP-complete. This is caused by two different factors. On the one hand, the search space can grow significantly as the amount of data to be processed by the optimisation algorithms increases. This challenge is addressed through a technique to generate and group problems with distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial, so a proposal is presented for a particular case of this type of scenario. As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools that are integrated with the Apache Spark data processing engine. The solutions have been validated through tests and use cases with real datasets.
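
    Since the thesis's tools integrate with Apache Spark, here is a minimal PySpark sketch of the Data Quality assessment idea: declarative rules partition a dataset into usable and rejected rows. The column names, rules, and thresholds are assumptions for illustration; the thesis's actual rule language and tooling are not reproduced here.

```python
# Minimal sketch: apply declarative data quality rules in Spark and
# split the data into rows that pass all rules and rows that fail.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()

df = spark.createDataFrame(
    [("o1", 120.0, "2023-01-05"), ("o2", None, "2023-01-06"), ("o3", -5.0, None)],
    ["order_id", "amount", "order_date"],
)

# Hypothetical quality rules: completeness and accuracy checks per column.
rules = [
    F.col("amount").isNotNull(),          # completeness
    F.col("amount") >= 0,                 # accuracy: no negative amounts
    F.col("order_date").isNotNull(),      # completeness
]

passes_all = rules[0]
for rule in rules[1:]:
    passes_all = passes_all & rule

clean = df.filter(passes_all)       # rows usable for analysis
rejected = df.filter(~passes_all)   # rows to repair or discard
clean.show()
rejected.show()
```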

    Metrics, indicators and analytics to support government excellence programme: the case of Dubai Government Website Excellence Model (WEM)

    This research focuses on the construction of composite indicators: a complex process involving various steps that have a significant impact on the results. One of the main problems in constructing composite indicators is their reliance on multiple subjective judgments (Cherchye et al., 2008). This was clearly demonstrated in the case of Website Excellence Model (WEM) scores, whose main purpose is to assess and compare the performance of Dubai Government departments' websites. Many subjective judgments were made by different parties in each of the three main stages of the WEM process: the pre-assessment, assessment and post-assessment stages. This level of subjectivity led to many departments being unsatisfied with the overall scores and with the general process of deriving the results. This research indicates that at each stage of the WEM process, the reliability, validity and fairness of the results were affected. To construct a more accurate, flexible, equitable and transparent WEM scoring methodology, we propose the use of a geometric data envelopment analysis model (G-DEA), along with some general guidelines to be followed during different stages of the process. The G-DEA methodology combines positive characteristics of geometric aggregation, the Analytical Hierarchy Process (AHP) and DEA. Geometric aggregation brings improvements on two levels. First, it is better suited to constructing WEM scores than the "standard" additive aggregation, for much the same reasons that the Human Development Index switched from additive to geometric aggregation in 2010. Second, it allows DEA-like models to be easily extended and applied to a composite indicator regardless of how complex its hierarchy structure may be. The elements of AHP and DEA contribute through their own well-known properties, such as the reduction of decision bias (AHP and DEA) and an equitable evaluation of departments relative to the observed best practices (DEA). In short, this thesis proposes the use of the G-DEA model and discusses the most relevant theoretical and practical aspects and features of that method when applying it to WEM scores. The G-DEA methodology is well suited to the WEM scoring framework, but there are certainly many other applications relating to the construction of composite indicators that could benefit from the same methodology. Overall, this study aims to provide both practitioners and academics in the field of composite indicators with a clear application focus on using G-DEA to assess website performance, an area that has so far not been explored in the context of composite indicators. In addition, this study clearly illustrates how G-DEA can combine many good qualities of different well-known techniques for constructing composite indicators.
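
    To show why the switch from additive to geometric aggregation matters, here is a minimal worked sketch comparing the two on weighted, normalised sub-indicator scores. The sub-indicators, scores, and weights are illustrative assumptions, not the actual WEM criteria or the full G-DEA model.

```python
# Minimal sketch: additive vs weighted geometric aggregation of
# normalised sub-indicator scores for one hypothetical website.

import math

# Normalised sub-indicator scores in (0, 1]; names are illustrative.
scores = {"usability": 0.9, "content": 0.8, "accessibility": 0.3}
weights = {"usability": 0.4, "content": 0.4, "accessibility": 0.2}

def additive(scores, weights):
    """Weighted arithmetic mean: sum(weight_k * score_k)."""
    return sum(weights[k] * scores[k] for k in scores)

def geometric(scores, weights):
    """Weighted geometric mean: prod(score_k ** weight_k).
    A weak dimension drags the index down and cannot be fully
    offset by strength elsewhere (limited compensability)."""
    return math.prod(scores[k] ** weights[k] for k in scores)

print(f"additive:  {additive(scores, weights):.3f}")   # 0.740
print(f"geometric: {geometric(scores, weights):.3f}")  # ~0.689, penalises the weak 0.3
```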

    Context Aware Computing for The Internet of Things: A Survey

    As we are moving towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown significant growth in sensor deployments over the past decade and has predicted a significant increase in the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data, we need to understand it. The collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of the context life cycle. We evaluate a subset of 50 projects, which represent the majority of research and commercial solutions proposed in the field of context-aware computing over the last decade (2001-2011), based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT.
    Comment: IEEE Communications Surveys & Tutorials Journal, 201