
    Support for Ad-Hoc applications in ubiquitous computing

    This thesis presents work within the area of ubiquitous computing, an area based on a vision of computers blending into the background. The work has been done within the EU project PalCom, which introduces palpable computing. Palpable computing offers a new perspective on ubiquitous computing by focusing on human understandability. The goals of the thesis are to allow for ad-hoc combinations of services and non-preplanned interaction in ubiquitous computing networks. This is not possible with traditional technologies for network services, which are based on standardization of service interfaces at the domain level. In contrast to those, our approach is based on standardization at a generic level and on self-describing services. We propose techniques for ad-hoc applications that allow users to inspect and combine services, and to specify their cooperation in assemblies. A key point is that the assembly is external to the services. That makes it possible to adapt to changes in one service without rewriting the other coordinated services. A framework has been implemented for building services that can be combined into ad-hoc applications, and example scenarios have been tested on top of the framework. A browser tool has been built for discovering services, for interacting with them, and for combining them. Finally, discovery and communication protocols for palpable computing have been developed that support ad-hoc applications.
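
    As a rough illustration of what "the assembly is external to the services" can mean in code, the sketch below wires two self-describing services together without either of them referring to the other. All class and method names are invented for this example; they are not the PalCom framework API.

# Hypothetical sketch of external assemblies over self-describing services.
# The names below are invented for illustration; they do not reproduce the
# PalCom framework API.

class Service:
    """A service describes its own commands so that tools can inspect it at run time."""

    def __init__(self, name, commands):
        self.name = name
        self.commands = commands      # command name -> callable
        self.listeners = []           # callbacks invoked with (command, payload) on output

    def describe(self):
        return {"name": self.name, "commands": sorted(self.commands)}

    def invoke(self, command, payload=None):
        return self.commands[command](payload)

    def emit(self, command, payload):
        for listener in self.listeners:
            listener(command, payload)


class Assembly:
    """Coordination logic kept outside the services: a list of wiring rules."""

    def __init__(self):
        self.rules = []

    def connect(self, source, output, target, input_command):
        self.rules.append((source.name, output, target.name, input_command))
        source.listeners.append(
            lambda cmd, payload, t=target, o=output, i=input_command:
                t.invoke(i, payload) if cmd == o else None)


# Example: a camera service and a display service combined ad hoc.
camera = Service("camera", {"snapshot": lambda _: camera.emit("image", "raw-bytes")})
display = Service("display", {"show": lambda data: print("displaying", data)})

assembly = Assembly()
assembly.connect(camera, "image", display, "show")

print(camera.describe())   # a browser tool could render this self-description
camera.invoke("snapshot")  # triggers display.show via the assembly, not via the camera

    Because the wiring lives only in the assembly, adapting to a change in the camera's output requires editing the assembly rule, not the display service.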

    Exploring regression testing and software product line testing - research and state of practice

    In large software organizations with a product line development approach, selective testing of product variants is necessary in order to keep pace with the decreased development time for new products enabled by systematic reuse. The close relationship between products in a product line indicates an opportunity to reduce the testing effort due to redundancy. In many cases test selection is performed manually, based on test leaders’ expertise. This makes the cost and quality of the testing highly dependent on the skills and experience of the test leaders. There is a need in industry for systematic approaches to test selection. The goal of our research is to improve the control of the testing and to reduce the amount of redundant testing in the product line context by applying regression test selection strategies. In this thesis, the state of the art of regression testing and software product line testing is explored. Two extensive systematic reviews are conducted, as well as an industrial survey of the state of practice in regression testing and an industrial evaluation of a pragmatic regression test selection strategy. Regression testing is not an isolated one-off activity, but rather an activity of varying scope and preconditions, strongly dependent on the context in which it is applied. Several techniques for regression test selection have been proposed and evaluated empirically, but in many cases the context is too specific for a technique to be easily applied directly by software developers. In order to improve the possibility of generalizing empirical results on regression test selection, guidelines for reporting the testing context are discussed in this thesis. Software product line testing is a relatively new research area. The understanding of its challenges is well established, but when looking for solutions to these challenges we mostly find proposals, and empirical evaluations are sparse. Regression test selection strategies proposed in the literature are not easily applicable in the product line context. Instead, control may be increased by improved visibility of the effects of testing and proper measurements of software quality. The focus of our future work will be on how to guide the planning and assessment of regression testing activities in large, complex, reuse-based systems, by visualizing the quality achieved in different parts of the system and by evaluating the effects of different selection strategies when applied in various regression testing situations.
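
    As a loose illustration of the general idea of regression test selection (not the specific pragmatic strategy evaluated in the thesis), a change-based selection can be sketched as follows: map each test case to the components it exercises and re-run only the tests that touch changed components.

# Minimal sketch of change-based regression test selection (illustrative only).

def select_tests(test_coverage, changed_components):
    """Return the tests that exercise at least one changed component.

    test_coverage: dict mapping test name -> set of component names it covers
    changed_components: set of component names modified since the last test run
    """
    return sorted(
        test for test, covered in test_coverage.items()
        if covered & changed_components
    )


coverage = {
    "test_login":   {"auth", "ui"},
    "test_billing": {"billing", "db"},
    "test_search":  {"search", "db"},
}

# Only tests touching the components changed for the new product variant are re-run.
print(select_tests(coverage, {"db"}))   # ['test_billing', 'test_search']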

    Content-Aware Multimedia Communications

    The demand for fast, economical, and reliable dissemination of multimedia information is steadily growing within our society. While people and the economy increasingly rely on communication technologies, engineers still struggle with their growing complexity. Complexity in multimedia communication originates from several sources. The most prominent is the unreliability of packet networks like the Internet. Recent advances in scheduling and error control mechanisms for streaming protocols have shown that the quality and robustness of multimedia delivery can be improved significantly when protocols are aware of the content they deliver. However, the proposed mechanisms require close cooperation between transport systems and application layers, which increases the overall system complexity. Current approaches also require expensive metrics and focus only on specific encoding formats. A general and efficient model has been missing so far. This thesis presents efficient and format-independent solutions to support cross-layer coordination in system architectures. In particular, the first contribution of this work is a generic dependency model that enables transport layers to access content-specific properties of media streams, such as dependencies between data units and their importance. The second contribution is the design of a programming model for streaming communication and its implementation as a middleware architecture. The programming model hides the complexity of protocol stacks behind simple programming abstractions, but exposes cross-layer control and monitoring options to application programmers. For example, our interfaces allow programmers to choose appropriate failure semantics at design time while refining error protection and the visibility of low-level errors at run time. Using several examples, we show how our middleware simplifies the integration of stream-based communication into large-scale application architectures. An important result of this work is that despite cross-layer cooperation, neither application nor transport protocol designers experience an increase in complexity. Application programmers can even reuse existing streaming protocols, which effectively increases system robustness.
    Our society's demand for cost-effective and reliable communication is growing steadily. While we make ourselves ever more dependent on modern communication technologies, the engineers of these technologies must both satisfy the demand for the rapid introduction of new products and master the growing complexity of their systems. The transmission of multimedia content such as video and audio data in particular is not trivial. One of the most prominent reasons for this is the unreliability of today's networks, such as the Internet. Packet losses and fluctuating delays can severely impair presentation quality. As recent developments in streaming protocols show, however, the quality and robustness of the transmission can be controlled efficiently if streaming protocols exploit information about the content of the data they transport. Existing approaches that describe the content of multimedia data streams are, however, mostly specialized to individual compression schemes and use computationally expensive metrics, which considerably reduces their practical value. Moreover, this exchange of information requires close cooperation between applications and transport layers. Since the interfaces of current system architectures are not prepared for this, either the interfaces must be extended or alternative architectural concepts must be devised; the danger of both variants, however, is that the complexity of a system may increase further. The central goal of this dissertation is therefore to achieve cross-layer coordination while at the same time reducing complexity. The work makes two contributions to the current state of research. First, it defines a universal model for describing content attributes, such as importance values and dependency relations within a data stream; transport layers can use this knowledge for efficient error control. Second, the work describes the Noja programming model for multimedia middleware. Noja defines abstractions for transmitting and controlling multimedia streams that enable the coordination of streaming protocols with applications. For example, programmers can select appropriate failure semantics and communication topologies and then refine and control the concrete error protection at run time.
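
    To make the notion of a content-aware transport more concrete, the following sketch shows a dependency model in which each data unit carries an importance value and references to the units it depends on, so that a scheduler can drop the least important independent units first under bandwidth pressure. The classes and the drop policy are assumptions made for this example; they are not the dependency model or the Noja interfaces defined in the thesis.

# Illustrative sketch of a format-independent dependency model for a media stream.

from dataclasses import dataclass, field


@dataclass
class DataUnit:
    seq: int
    importance: float                              # e.g. derived from frame type or layer
    depends_on: set = field(default_factory=set)   # sequence numbers this unit needs


def select_for_budget(units, budget):
    """Keep at most `budget` units, dropping the least important ones first.

    A unit is only dropped if no kept unit depends on it, so decodable
    units are never orphaned.
    """
    keep = {u.seq for u in units}
    for unit in sorted(units, key=lambda u: u.importance):
        if len(keep) <= budget:
            break
        needed = any(unit.seq in other.depends_on and other.seq in keep
                     for other in units)
        if not needed:
            keep.discard(unit.seq)
    return [u for u in units if u.seq in keep]


stream = [
    DataUnit(1, importance=1.0),                   # e.g. an intra-coded frame
    DataUnit(2, importance=0.3, depends_on={1}),
    DataUnit(3, importance=0.2, depends_on={1}),
    DataUnit(4, importance=0.4, depends_on={1}),
]
print([u.seq for u in select_for_budget(stream, budget=3)])   # unit 3 is dropped first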

    Advances in component-oriented programming

    WCOP 2006 is the eleventh event in a series of highly successful workshops, which have taken place in conjunction with every ECOOP since 1996. Component-oriented programming (COP) has been described as the natural extension of object-oriented programming to the realm of independently extensible systems. Several important approaches have emerged over recent years, including component technology standards such as CORBA/CCM, COM/COM+, J2EE/EJB, and .NET, but also the increasing appreciation of software architecture for component-based systems, and the consequent effects on organizational processes and structures as well as on the software development business as a whole. COP aims at producing software components for a component market and for late composition. Composers are third parties, possibly the end users, who are not able or willing to change components. This requires standards that allow independently created components to interoperate, and specifications that put the composer in a position to decide what can be composed under which conditions. On these grounds, WCOP'96 led to the following definition: "A component is a unit of composition with contractually specified interfaces and explicit context dependencies only. Components can be deployed independently and are subject to composition by third parties." After WCOP'96 focused on the fundamental terminology of COP, the subsequent workshops expanded into the many related facets of component software. WCOP 2006 emphasizes reasons for using components beyond reuse. While software components are often considered as a technical means to increase software reuse, other reasons for investing in component technology tend to be overlooked. For example, components play an important role in frameworks and product lines to enable configurability (even if no component is reused). Another role of components beyond reuse is to increase the predictability of the properties of a system. The use of components as contractually specified building blocks restricts the degrees of freedom during software development compared to classic line-by-line programming. This restriction is beneficial for the predictability of system properties. For an engineering approach to software design, it is important to understand the implications of design decisions on a system's properties. Therefore, approaches to evaluating and predicting properties of systems by analyzing their components and architecture are of high interest. To strengthen the relation between architectural descriptions of systems and components, a comprehensible mapping to component-oriented middleware platforms is important. Model-driven development, with its use of generators, can provide a suitable link between architectural views and technical component execution platforms. WCOP 2006 accepted 13 papers, which are organised according to the program below. The organisers are looking forward to an inspiring and thought-provoking workshop. The organisers thank Jens Happe and Michael Kuperberg for preparing the proceedings volume.
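
    As a toy illustration of the definition quoted above, and not of any of the mentioned technologies such as CCM, EJB, or .NET, a component can be rendered in code as a unit that provides only contractually specified interfaces and declares its context dependencies explicitly, so that a third-party composer can wire it without reading its implementation.

# Toy illustration of "contractually specified interfaces and explicit context
# dependencies only"; not tied to CORBA/CCM, COM+, EJB, .NET, or any real
# component model.

from abc import ABC, abstractmethod


class Logger(ABC):                          # contract required from the context
    @abstractmethod
    def log(self, message: str) -> None: ...


class Greeter(ABC):                         # contract provided by the component
    @abstractmethod
    def greet(self, name: str) -> str: ...


class GreeterComponent(Greeter):
    """Deployable unit: provides Greeter and explicitly requires a Logger."""

    def __init__(self, logger: Logger):     # context dependency made explicit
        self._logger = logger

    def greet(self, name: str) -> str:
        self._logger.log(f"greet called for {name}")
        return f"Hello, {name}!"


# A third-party composer supplies the context and performs the composition.
class PrintLogger(Logger):
    def log(self, message: str) -> None:
        print("[log]", message)


print(GreeterComponent(PrintLogger()).greet("WCOP"))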

    Prioritizing Program Elements: A Pre-testing Effort To Improve Software Quality

    Test effort prioritization is a powerful technique that enables the tester to effectively utilize the test resources by streamlining the test effort. The distribution of test effort is important to a test organization. We address prioritization-based testing strategies in order to do the best possible job with limited test resources. Our proposed techniques benefit the tester when applied under looming deadlines and limited resources. Some parts of a system are more critical and sensitive to bugs than others, and thus should be tested thoroughly. The rationale behind this thesis is to estimate the criticality of various parts within a system and prioritize the parts for testing according to their estimated criticality. We propose several prioritization techniques at different phases of the Software Development Life Cycle (SDLC). Different chapters of the thesis aim at setting test priority based on various factors of the system. The purpose is to identify and focus on the critical and strategic areas and to detect the important defects as early as possible, before the product release. Focusing on the critical and strategic areas helps to improve the reliability of the system within the available resources. We present code-based and architecture-based techniques to prioritize the testing tasks. In these techniques, we analyze the criticality of a component within a system using a combination of its internal and external factors. We have conducted a set of experiments on case studies and observed that the proposed techniques are efficient and address the challenge of prioritization. We propose a novel idea of calculating the influence of a component, where influence refers to the contribution or usage of the component at every execution step. This influence value serves as a metric in test effort prioritization. We first calculate the influence through static analysis of the source code and then refine our work by calculating it through dynamic analysis. We have experimentally shown that decreasing the reliability of an element with a high influence value drastically increases the failure rate of the system, which is not true for an element with a low influence value. We estimate the criticality of a component within a system by considering both its internal and external factors, such as influence value, average execution time, structural complexity, severity, and business value. We prioritize the components for testing according to their estimated criticality. We have compared our approach with a related approach in which the components were prioritized on the basis of their structural complexity only. From the experimental results, we observed that our approach helps to reduce the failure rate in the operational environment. The consequences of the observed failures were also lower compared to the related approach. Priority should be established by order of importance or urgency. As the importance of a component may vary at different points of the testing phase, we propose a multi-cycle-based test effort prioritization approach, in which we assign different priorities to the same component at different test cycles. Test effort prioritization at the initial phases of the SDLC has a greater impact than prioritization at a later phase. As the analysis and design stage is critical compared to other stages, detecting and correcting errors at this stage is less costly than at later stages of the SDLC. Designing metrics at this stage helps the test manager in making resource allocation decisions. We propose a technique to estimate the criticality of a use case at the design level. The criticality is computed on the basis of complexity and business value. We evaluated the complexity of a use case analytically through a set of data collected at the design level. We experimentally observed that assigning test effort to various use cases according to their estimated criticality improves the reliability of a system under test. Test effort prioritization based on risk is a powerful technique for streamlining the test effort. The tester can exploit the relationship between risk and testing effort. We propose a technique to estimate the risk associated with various states at the component level and the risk associated with use case scenarios at the system level. The estimated risks are used to enhance resource allocation decisions. An intermediate graph, called the Inter-Component State-Dependence Graph (ISDG), is introduced for obtaining the complexity of a component state, which is used for risk estimation. We empirically evaluated the estimated risks. We assigned test priority to the components/scenarios within a system according to their estimated risks. We performed an experimental comparative analysis and observed that the testing team guided by our technique achieved higher test efficiency compared to a related approach.
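
    As a rough sketch of how an influence value obtained from dynamic analysis could be combined with other factors into a criticality score for ranking components, consider the following example. The factor names, weights, and the linear scoring formula are assumptions made for illustration; they are not the exact metrics defined in the thesis.

# Illustrative sketch of influence-based test effort prioritization.

def influence(execution_traces, component):
    """Fraction of all executed steps that involve the component (dynamic analysis)."""
    steps = [step for trace in execution_traces for step in trace]
    return steps.count(component) / len(steps) if steps else 0.0


def criticality(factors, weights):
    """Weighted combination of normalized internal and external factors."""
    return sum(weights[name] * value for name, value in factors.items())


traces = [["ui", "auth", "db"], ["ui", "search", "db"], ["ui", "auth", "billing", "db"]]
components = {
    # the remaining factors are assumed to be normalized to [0, 1]
    "auth":    {"complexity": 0.6, "severity": 0.9, "business_value": 0.8},
    "search":  {"complexity": 0.4, "severity": 0.3, "business_value": 0.5},
    "billing": {"complexity": 0.7, "severity": 0.8, "business_value": 0.9},
}
weights = {"influence": 0.4, "complexity": 0.2, "severity": 0.2, "business_value": 0.2}

ranked = sorted(
    components,
    key=lambda c: criticality({**components[c], "influence": influence(traces, c)}, weights),
    reverse=True,
)
print(ranked)   # components to test first, most critical first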

    A platform-independent domain-specific modeling language for multiagent systems

    Associated with the increasing acceptance of agent-based computing as a novel software engineering paradigm, a lot of recent research addresses the development of suitable techniques to support agent-oriented software development. The state of the art in agent-based software development is to (i) design the agent system based on an agent-oriented methodology and (ii) take the resulting design artifact as a basis for manually implementing the agent system using existing agent-oriented programming languages or general-purpose languages like Java. Apart from errors made when manually transforming an abstract specification into a concrete implementation, the gap between design and implementation may also cause the two to diverge. The framework discussed in this dissertation presents a platform-independent domain-specific modeling language for multiagent systems (MASs) called Dsml4MAS that allows agent systems to be modeled in a platform-independent and graphical manner. Apart from the abstract design, Dsml4MAS also makes it possible to automatically (i) check the generated design artifacts against a formal semantic specification to guarantee the well-formedness of the design and (ii) translate the abstract specification into a concrete implementation. Taken together, Dsml4MAS ensures that for any well-formed design an associated implementation will be generated, closing the gap between design and code.
    Owing to the growing acceptance of agent systems for tackling complex problems, research in agent-oriented software engineering focuses above all on suitable development tools. The state of the art is to specify the agent design by means of an agent methodology and to use the resulting artifacts as the basis for manual programming. Errors that arise during this manual transformation make the abstract design in particular less useful with respect to the sustainability of the developed software application. The framework discussed in this dissertation presents a platform-independent domain-specific modeling language for multiagent systems called Dsml4MAS, which allows agent systems to be represented in a platform-independent and graphical manner. The modeling language comprises (i) an abstract syntax that defines the vocabulary of the language, (ii) a concrete syntax that specifies the graphical representation, and (iii) a formal semantics that gives the vocabulary a precise meaning. Dsml4MAS is part of a (semi-automatic) methodology that makes it possible (i) to refine the abstract specification step by step down to a concrete implementation and (ii) to guarantee interoperability with alternative software paradigms such as service-oriented architectures.
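
    To convey the overall workflow of checking a platform-independent agent model for well-formedness and then generating an implementation from it, here is a deliberately simplified sketch. The model format, the constraints, and the generated output are invented for this example; they do not reflect the actual Dsml4MAS metamodel, its formal semantics, or its code generators.

# Toy sketch: validate a platform-independent agent model, then generate code.

model = {
    "agents": [
        {"name": "Broker", "roles": ["Mediator"], "plans": ["match_offers"]},
        {"name": "Buyer", "roles": ["Requester"], "plans": ["request_quote"]},
    ],
    "interactions": [
        {"name": "Negotiation", "participants": ["Broker", "Buyer"]},
    ],
}


def check_well_formed(model):
    """Return a list of constraint violations; an empty list means well-formed."""
    errors = []
    agent_names = {agent["name"] for agent in model["agents"]}
    for agent in model["agents"]:
        if not agent["plans"]:
            errors.append(f"agent {agent['name']} has no plan")
    for interaction in model["interactions"]:
        for participant in interaction["participants"]:
            if participant not in agent_names:
                errors.append(f"{interaction['name']}: unknown participant {participant}")
    return errors


def generate(model):
    """Emit one skeleton class per agent as a stand-in for platform-specific code."""
    for agent in model["agents"]:
        plans = ", ".join(agent["plans"])
        yield f"class {agent['name']}Agent:  # plans: {plans}\n    pass\n"


violations = check_well_formed(model)
if violations:
    print("\n".join(violations))
else:
    print("\n".join(generate(model)))      # generation only runs for well-formed models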