
    Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing

    Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations imposed by the SOA environment. One of the most important problems in ScST is realistic test data generation. Realistic test data is often generated manually or drawn from an existing source, making it hard to automate and laborious to produce. Another limitation that makes ScST challenging is the cost of invoking services during the testing process. This thesis aims to provide solutions to these two problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, our approach achieved between 93% and 100% success rates in generating realistic data, while state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates it as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto-optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions of between 69% and 98.6% in the monetary cost of service invocations during testing.
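The cost-aware, multi-objective minimisation described above can be illustrated with a small sketch: a candidate test suite is Pareto-optimal when no other suite has lower-or-equal invocation cost and higher-or-equal coverage with at least one strict improvement. All suite names, costs and coverage figures below are invented for illustration; the thesis's actual formulation is richer.

```python
# Illustrative sketch of multi-objective test suite minimisation:
# each candidate suite has a service-invocation cost (lower is better)
# and a coverage score (higher is better). A suite is Pareto-optimal
# if no other suite is at least as good on both objectives and strictly
# better on one. All names and numbers here are hypothetical.

def pareto_front(suites):
    """Return the names of suites not dominated on (cost, coverage)."""
    front = []
    for name, cost, cov in suites:
        dominated = any(
            c2 <= cost and v2 >= cov and (c2 < cost or v2 > cov)
            for _, c2, v2 in suites
        )
        if not dominated:
            front.append(name)
    return front

candidates = [
    ("full suite",    100.0, 0.99),
    ("greedy subset",  40.0, 0.95),
    ("random subset",  40.0, 0.70),  # dominated by 'greedy subset'
    ("cheap subset",   10.0, 0.60),
]

print(pareto_front(candidates))  # ['full suite', 'greedy subset', 'cheap subset']
```

A real selection step would then pick one suite from this front according to the tester's budget.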

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and what interaction to perform.
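The combination of acoustic and tactile features described above could, in the simplest case, take the form of feature-level concatenation followed by nearest-centroid classification. Everything below (event names, feature values, the distance metric) is an illustrative assumption, not the dissertation's actual method.

```python
# Hypothetical sketch: concatenate an acoustic and a tactile feature
# vector and classify the input event by the nearest centroid.
import math

# training centroids: event -> concatenated (acoustic + tactile) features
CENTROIDS = {
    "knock": [0.9, 0.1, 0.8, 0.2],
    "swipe": [0.2, 0.7, 0.1, 0.9],
}

def classify(acoustic, tactile):
    """Return the event whose centroid is closest to the fused features."""
    features = acoustic + tactile  # simple feature-level fusion
    return min(
        CENTROIDS,
        key=lambda event: math.dist(features, CENTROIDS[event]),
    )

print(classify([0.85, 0.15], [0.75, 0.25]))  # knock
```

The same fused features could, with further centroids, distinguish users or surfaces as the abstract suggests.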

    Challenges of rapid migration to fully virtual education in the age of the Corona virus pandemic: experiences from across the world

    The social disruption caused by the sudden eruption of the Corona Virus pandemic has shaken the whole world, influencing all levels of education immensely. Notwithstanding, there was a lack of preparedness for this global public health emergency, which continues to affect all aspects of work and life. The problem is, naturally, multifaceted, fast evolving and complex, affecting everyone, threatening our well-being, the global economy, the environment, and all societal and cultural norms and everyday activities. A recent UNESCO report notes that nearly 1.25 billion learners (67.7% of the total) have been affected by the Corona Virus pandemic worldwide. The education sector at all levels has been one of the hardest hit, particularly as the academic/school year was in full swing. The impact of the pandemic is widespread, representing a health hazard worldwide. As such, it profoundly affects society as a whole and its members, in particular: i) individuals (learners, their parents, educators, support staff); ii) schools, training organisations, pedagogical institutions and education systems; and iii) policies, methods and pedagogies, quickly transformed to serve the newly emerged needs of the former. Developments of such scale usually take years of consultation, strategic planning and implementation. In addition to raising awareness across the population of the dangers of virus transmission and instigating total lockdown, it has been necessary to develop mechanisms for continuing the delivery of education, as well as mechanisms for assuring the quality of the educational experience and educational results. There is often scepticism about securing quality standards in such a fast-moving situation. Often in the recent past, the perception was that courses and degrees leading to an award were inferior if the course modules (and sometimes their assessment components) were wholly online.
    Over the last three decades most Higher Education institutions developed both considerable infrastructure and know-how enabling distance-mode delivery, whereas schools (Primary and Secondary) had hardly any of the necessary infrastructure or adequate know-how for enabling virtual education. In addition, community education and various training programmes were mainly delivered face-to-face, and these had to either stop altogether or rapidly convert materials, exercises and tests for online delivery and testing. A high degree of flexibility and commitment was demanded of all involved, and particularly of the educators, who undertook to produce new educational materials in order to provide online support to pupils and students. Apart from the delivery mode of education, which serves certificated programmes, it is essential to ensure that learners’ needs are thoroughly and continuously addressed and efficiently supported throughout the Coronavirus lockdown or any future one. Such lockdowns can originate from causes that vary in nature, whether natural or socioeconomic. Readiness, in addition to preparedness, is thus the key question and solution when it comes to quality education during any lockdown. In most countries, the compulsory primary and secondary education sectors have been facing a more difficult challenge than Higher Education. The poor or, in many cases, non-existent technological infrastructure and the low technological expertise of teachers, instructors and parents make the delivery of virtual education difficult or even impossible. This, coupled with phenomena such as social exclusion and the digital divide, where thousands of households do not have adequate access to broadband Internet, Wi-Fi infrastructure or personal computers, hampers even the most promising virtual solutions. The shockwaves of the sudden demands on all sectors of society and on individuals required rapid decisions and actions.
    We will not attempt to answer the question “Why was the world unprepared for the onslaught of the Coronavirus pandemic?”, but rather to ascertain the level of preparedness and readiness, particularly of the education sector, to effect the required rapid transition. We aimed to identify the challenges and problems faced by educators and their institutions. Through first-hand experiences we also identify best practices and the solutions reached. To this end, we constructed a questionnaire to gather our own responses as well as the experiences of family, friends, and colleagues. This paper reports the first-hand experiences and knowledge of 33 co-authors from 27 institutions in 13 different countries across Europe, Asia, and Africa. The communication technologies and development platforms used are identified; the challenges faced, as well as solutions and best practices, are reported. The findings are consolidated into the four areas explored, i.e. Development Platforms, Communications Technologies, Challenges/Problems and Solutions/Best Practices. The conclusion summarises the findings into emerging themes and similarities. Reflections on the lasting impact of Coronavirus on education, limitations of the study, and indications of future work complete the paper.

    Approaches to implement and evaluate aggregated search

    Aggregated search or aggregated retrieval can be seen as a third paradigm for information retrieval following the Boolean retrieval paradigm and the ranked retrieval paradigm. In the first two, we are returned respectively sets and ranked lists of search results.
It is up to the time-poor user to scroll through this set/list, scan different documents and assemble the information he or she needs. Alternatively, aggregated search aims not only at the identification of relevant information nuggets, but also at the assembly of these nuggets into a coherent answer. In this work, we first present an analysis of work related to aggregated search, organised within a general framework composed of three steps: query dispatching, nugget retrieval and result aggregation. Existing work is listed alongside different related domains such as relational search, federated search, question answering, natural language generation, etc. Among the possible research directions, we then focused on the two we believe to be the most promising: relational aggregated search and cross-vertical aggregated search. * Relational aggregated search targets relevant information, but also the relations between relevant information nuggets, which are used to assemble a coherent final answer. In particular, three types of queries would easily benefit from this paradigm: attribute queries (e.g. president of France, GDP of Italy, mayor of Glasgow, ...), instance queries (e.g. France, Italy, Glasgow, Nokia e72, ...) and class queries (countries, French cities, Nokia mobile phones, ...). We call these relational queries, and we tackle three important problems concerning information retrieval and aggregation for them. First, we propose an attribute retrieval approach, after arguing that attribute retrieval is one of the crucial problems to be solved. Our approach relies on HTML tables on the Web. It is capable of identifying useful and relevant tables, which are used to extract relevant attributes for arbitrary queries. The experimental results show that our approach is effective: it can answer many queries with high coverage and it outperforms state-of-the-art techniques.
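Attribute retrieval from Web tables can be sketched minimally by taking the header cells of an HTML table as candidate attribute names. The real approach also scores table quality and attribute relevance; this sketch, including the sample table, is purely illustrative.

```python
# Hypothetical sketch of attribute extraction from Web tables: collect
# the text of <th> cells as candidate attribute names for the entities
# the table describes. No table-quality scoring is done here.
from html.parser import HTMLParser

class HeaderExtractor(HTMLParser):
    """Collect the text of <th> cells, i.e. candidate attribute names."""
    def __init__(self):
        super().__init__()
        self.in_th = False
        self.attributes = []

    def handle_starttag(self, tag, attrs):
        if tag == "th":
            self.in_th = True

    def handle_endtag(self, tag):
        if tag == "th":
            self.in_th = False

    def handle_data(self, data):
        if self.in_th and data.strip():
            self.attributes.append(data.strip().lower())

html = """
<table>
  <tr><th>Country</th><th>Capital</th><th>GDP</th></tr>
  <tr><td>France</td><td>Paris</td><td>2.9T</td></tr>
</table>
"""

parser = HeaderExtractor()
parser.feed(html)
print(parser.attributes)  # ['country', 'capital', 'gdp']
```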
Second, we deal with result aggregation, where we are given relevant instances and attributes for a given query. The problem is particularly interesting for class queries, where the final answer is a table with many instances and attributes. To guarantee the quality of the aggregated result, we propose the use of different weights on instances and attributes to promote the most representative and important ones. The third problem we deal with concerns instances of the same class (e.g. France, Germany and Italy are all instances of the same class). Here, we propose an approach that can massively extract instances of the same class from HTML lists on the Web. All proposed approaches are applicable at Web scale and can play an important role in relational aggregated search. Finally, we propose 4 different prototype applications for relational aggregated search. They can answer different types of queries with relevant and relational information. More precisely, we retrieve not only attributes and their values, but also passages and images, which are assembled into a final focused answer. An example is the query "Nokia e72", which will be answered with attributes (e.g. price, weight, battery life, ...), passages (e.g. description, reviews, ...) and images. Results are encouraging and illustrate the utility of relational aggregated search. * The second research direction we pursued concerns cross-vertical aggregated search, which consists of assembling results from different vertical search engines (e.g. image search, video search, traditional Web search, ...) into one single interface. Here, different approaches exist in both research and industry. Our contribution concerns mostly the evaluation and the interest (advantages) of this paradigm. We propose 4 different studies which simulate different search situations. Each study is tested with 100 different queries and 9 vertical sources.
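The weighting of instances and attributes for class-query aggregation could be sketched as follows; the data, the coverage-based weighting and the threshold are invented stand-ins for the thesis's actual weighting scheme.

```python
# Toy sketch of result aggregation for a class query: given candidate
# (instance, attribute, value) triples gathered from several sources,
# weight attributes by how many instances they cover and keep the
# best-covered ones. The data and the coverage threshold are invented.
from collections import defaultdict

triples = [
    ("France",  "capital",    "Paris"),
    ("France",  "population", "68M"),
    ("Italy",   "capital",    "Rome"),
    ("Italy",   "population", "59M"),
    ("Germany", "capital",    "Berlin"),
    ("Germany", "anthem",     "Deutschlandlied"),  # sparse attribute
]

def aggregate(triples, min_coverage=0.5):
    instances = {inst for inst, _, _ in triples}
    cover = defaultdict(set)
    for inst, attr, _ in triples:
        cover[attr].add(inst)
    # keep attributes present for at least min_coverage of instances
    kept = {a for a, insts in cover.items()
            if len(insts) / len(instances) >= min_coverage}
    table = defaultdict(dict)
    for inst, attr, val in triples:
        if attr in kept:
            table[inst][attr] = val
    return dict(table)

print(aggregate(triples))  # 'anthem' is dropped: it covers 1 of 3 instances
```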
Here, we could clearly identify new advantages of this paradigm and different issues with evaluation setups. In particular, we observe that traditional information retrieval evaluation is not the fastest, but remains the most realistic. To conclude, we propose different studies with respect to two promising research directions. On the one hand, we deal with three important problems of relational aggregated search, followed by real prototype applications with encouraging results. On the other hand, we investigated the interest and evaluation of cross-vertical aggregated search, where we could clearly identify some of the advantages and evaluation issues. In a long-term perspective, we foresee a possible combination of these two kinds of approaches to provide relational and cross-vertical information retrieval, incorporating more focus, structure and multimedia in search results.
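Cross-vertical aggregation, at its simplest, blends ranked lists from several vertical engines into one list. The round-robin interleaving below is only a baseline sketch (real systems use learned source selection and blending); the verticals and result identifiers are invented.

```python
# Minimal sketch of cross-vertical aggregation: round-robin interleaving
# of ranked result lists from several vertical engines into one blended
# list, taking one result per vertical per tier.
from itertools import zip_longest

def interleave(*verticals):
    """Blend ranked lists, one result per vertical per rank tier."""
    blended = []
    for tier in zip_longest(*verticals):
        blended.extend(r for r in tier if r is not None)
    return blended

web    = ["w1", "w2", "w3"]
images = ["i1", "i2"]
video  = ["v1"]

print(interleave(web, images, video))  # ['w1', 'i1', 'v1', 'w2', 'i2', 'w3']
```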

    From Resilience-Building to Resilience-Scaling Technologies: Directions -- ReSIST NoE Deliverable D13

    This document is the second product of workpackage WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence. The problem that ReSIST addresses is achieving sufficient resilience in the immense systems of ever-evolving networks of computers and mobile devices, tightly integrated with human organisations and other technology, that are increasingly becoming a critical part of the information infrastructure of our society. This second deliverable, D13, provides a detailed list of research gaps identified by experts from the four working groups related to assessability, evolvability, usability and diversity.

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, though their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reusability of recurrent generalised models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.

    A Theory and Practice of Website Engagibility

    This thesis explores the domain of website quality. It presents a new study of website quality - an abstraction and synthesis, a measurement methodology, and analysis - and proposes metrics which can be used to quantify it. The strategy employed involved revisiting software quality, modelling its broader perspectives and identifying quality factors which are specific to the World Wide Web (WWW). This resulted in a detailed set of elements which constitute website quality, a method for quantifying a quality measure, and a demonstration of an approach to benchmarking eCommerce websites. The thesis has two dimensions. The first is a contribution to the theory of software quality - specifically website quality. The second focuses on two perspectives of website quality - quality-of-product and quality-of-use - and uses them to present a new theory and methodology which are important first steps towards understanding metrics and their use when quantifying website quality. Once quantified, websites can be benchmarked by evaluators and website owners for comparison with competitor sites. The thesis presents a study of five mature eCommerce websites. The study involves identifying, defining and collecting data counts for 67 site-level criteria for each site. These counts are specific to website product quality and include criteria such as occurrences of hyperlinks and menus which underpin navigation, occurrences of activities which underpin interactivity, and counts relating to a site’s eCommerce maturity. The lack of automated count-collecting tools necessitated online visits to 537 HTML pages and manual counting. The thesis formulates a new approach to measuring website quality, named Metric Ratio Analysis (MRA). It demonstrates how one website quality factor - engagibility - can be quantified and used for website comparison analysis, and proposes a detailed theoretical and empirical validation procedure for MRA.
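The Metric Ratio Analysis idea of comparing sites through ratios of criterion counts can be sketched as follows; the sites, criteria and counts are invented, and normalising by page count is only one plausible choice of ratio, not necessarily the thesis's.

```python
# Hedged sketch of the Metric Ratio Analysis idea: express raw site-level
# counts as per-page ratios so sites of different sizes can be compared.
# All sites, criteria and counts below are invented for illustration.

SITES = {
    "shopA": {"pages": 120, "hyperlinks": 2400, "menus": 240, "activities": 360},
    "shopB": {"pages": 60,  "hyperlinks": 900,  "menus": 60,  "activities": 300},
}

def metric_ratios(counts):
    """Normalise each criterion count by the number of pages."""
    pages = counts["pages"]
    return {k: v / pages for k, v in counts.items() if k != "pages"}

for site, counts in SITES.items():
    print(site, metric_ratios(counts))
# shopB offers more interactive 'activities' per page despite being smaller
```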

    La volonté machinale: understanding the electronic voting controversy

    Contains fulltext: 32048_voloma.pdf (publisher's version, Open Access). Radboud Universiteit Nijmegen, 21 January 2008. Promotor: Jacobs, B.P.F. Co-promotores: Poll, E.; Becker, M. 226 p.

    Security policy architecture for web services environment

    An enhanced observer (e-observer) is a model that observes the behaviour of a service and automatically reports any changes in the state of that service to an evaluator model. The e-observer observes the state of a service to determine whether it conforms to and obeys its intended behaviour or policy rules. E-observer techniques address most of these problems and provide a proven solution that is reusable in similar contexts. This leads to the organisation and formalisation of policy, which is the engine of the e-observer model. Policies refer to specific security rules for particular systems. They are derived from management goals that describe the desired behaviour of distributed heterogeneous systems and networks. These policies should be defended by security, which has become a coherent and crucial issue: security aims to protect these policies whenever possible, and it is the first line of protection for resources or assets against events such as loss of availability, unauthorised access or modification of data. The techniques devised to protect information from intruders are general-purpose in nature and therefore cannot directly enforce security, which has no universal definition. The high degree of assurance of the security properties of systems used in security-critical areas, such as business, education and finance, is usually achieved by verification. In addition, security policies express the protection requirements of a system in a precise and unambiguous form. They describe the requirements and mechanisms for securing the resources and assets shared between the parties of a business transaction. Service-Oriented Computing (SOC), meanwhile, is a paradigm of computing that considers "services" to be fundamental elements for developing applications and solutions. SOC has many advantages that help IT improve and increase its capabilities, and it allows flexibility to be integrated into application development.
This allows services to be provided in a highly distributed manner by Web services. Many organisations and enterprises have undertaken developments using SOC. Web services (WSs) are examples of SOC. WSs have become more powerful and sophisticated in recent years and are being used successfully for interoperable solutions across various networks. The main benefit of web services is that they use machine-to-machine interaction, which leads us first to explore the "quality" aspect of these services. Quality of Service (QoS) describes techniques that prioritise one type of traffic or programme operating across a network connection. QoS rules determine which requests have priority and are used, for example, to give priority to real-time communications. These rules can be sophisticated and expressed as policies that constrain the behaviour of the services; such rules (policies) should be addressed and enforced by the security mechanism. Moreover, in SOC, and in particular in web services, services are black boxes whose behaviour may be completely determined by their interaction with other services in a confederated system. We therefore propose the design and implementation of the “behaviour of services”, constrained by QoS policies. We formulate and implement novel techniques for web service policy-based QoS, leading to the development of a framework for observing services that interact with each other and verifying them in a formal and systematic manner. This framework can be used to specify security policies in a succinct and unambiguous manner; thus, we developed a set of rules that can be applied inductively to verify the set of traces generated by the specification of our model’s policy. These rules could also be used for verifying the functionality of the system.
To demonstrate the protection features of an information system that can specify and concisely describe the set of traces generated, we subsequently consider the design and management of the Ponder policy language to express QoS and its associated criteria, such as security. An algorithm was composed for analysing the observations constrained by policies, followed by a prototype system demonstrating the observation architecture within the education sector. Finally, an enforcement system was used to successfully deploy the prototype’s infrastructure over Web services in order to define an optimisation model that captures efficiency requirements. Our approach is therefore to trace and observe the communication between services and then take decisions based on their behaviour and history. The central issue is: how do we ensure that given security requirements are satisfied and enforced? The scenario is a confederated system with the following characteristics: the system’s components are Web services; these components are black boxes, designed and built by various vendors; and the topology is highly changeable. Consequently, the main contributions are: ‱ the proposal, design and development of a prototype observation system that manages security policy and its associated aspects by evaluating the outcomes via the evaluator model; ‱ taming the design complexity of the observation system by leaving considerable degrees of freedom for its structure and behaviour, bestowing upon it certain characteristics, and enabling it to learn and adapt to dynamically changing environments. Saudi Arabian Cultural Bureau.
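The inductive application of policy rules to verify generated traces might be sketched like this; the rules and event names are hypothetical, and the thesis's actual formalisation (in Ponder) is richer.

```python
# Illustrative sketch of verifying a service-interaction trace against
# policy rules: each rule is a predicate over the trace, and a trace is
# accepted only if every rule holds. Rules and events are hypothetical.

def authenticated_before_access(trace):
    """No 'access' event may occur before an 'authenticate' event."""
    seen_auth = False
    for event in trace:
        if event == "authenticate":
            seen_auth = True
        elif event == "access" and not seen_auth:
            return False
    return True

def no_forbidden_calls(trace):
    """The trace must never invoke a forbidden service."""
    return "forbidden_service" not in trace

RULES = [authenticated_before_access, no_forbidden_calls]

def verify(trace):
    """A trace conforms to the policy iff all rules accept it."""
    return all(rule(trace) for rule in RULES)

print(verify(["authenticate", "access", "logout"]))  # True
print(verify(["access", "authenticate"]))            # False
```

An evaluator model in the sense of the abstract would consume these verdicts and report non-conforming behaviour.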

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed for every step. The discussion also covers language and tool support and the challenges arising from the transformation.