3,591 research outputs found

    Should I stay or should I go? : On forces that drive and prevent MBSE adoption in the embedded systems industry

    [Context] Model-based Systems Engineering (MBSE) comprises a set of models and techniques that is often suggested as a solution to cope with the challenges of engineering complex systems. Although many practitioners agree with the arguments about the potential benefits of these techniques, companies struggle with the adoption of MBSE. [Goal] In this paper, we investigate the forces that prevent or impede the adoption of MBSE in companies that develop embedded software systems. We contrast these hindering forces with the issues and challenges that drive these companies towards introducing MBSE. [Method] Our results are based on 20 interviews with experts from 10 companies. Following an exploratory research design, we analyze the results by means of thematic coding. [Results] Forces that prevent MBSE adoption mainly relate to immature tooling, uncertainty about the return on investment, and fears about migrating existing data and processes. On the other hand, MBSE adoption also has strong drivers, and participants have high expectations, mainly with respect to managing complexity, adhering to new regulations, and reducing costs. [Conclusions] We conclude that bad experiences with and frustration about MBSE adoption originate from false or inflated expectations. Nevertheless, companies should not underestimate the effort required to convince employees and address their anxiety.

    Implementing Agile Methodology: Challenges and Best Practices

    Information Technology (IT) projects have a reputation for not delivering on business requirements. Historical challenges in meeting cost, quality, and timeline targets persist despite the extensive experience most organizations have in managing projects of all sizes. The profession continues to produce high-profile failures that make headlines, such as the recent healthcare.gov initiative. This research surveys literature on agile methodology that can be used to help improve project processes and outcomes.

    Developing a diagnostic heuristic for integrated sugarcane supply and processing systems.

    Doctoral Degrees. University of KwaZulu-Natal, Pietermaritzburg. Innovation is a valuable asset that gives supply chains a competitive edge. However, the adoption of innovative research recommendations in agricultural value chains, and in integrated sugarcane supply and processing systems (ISSPS) in particular, has been relatively slow compared with other industries such as electronics and automotive. The slow adoption is attributed to the complex, multidimensional nature of ISSPS and the perceived lack of a holistic approach when dealing with certain issues. Most interventions into ISSPS view the system as characterised by tame problems; hence the widespread application of traditional operations research approaches. Integrated sugarcane supply and processing systems are, nonetheless, also characterised by wicked problems. Interventions into such contexts should therefore embrace tame and/or wicked issues. Systemic approaches are important and have in the past identified several system-scale opportunities within ISSPS. Such interventions are multidisciplinary and employ a range of methodologies spanning several paradigms. The large number of methodologies available, however, makes choosing the right method, or a combination thereof, difficult. In this context, a novel overarching diagnostic heuristic for ISSPS was developed in this research. The heuristic is used to diagnose relatively small but pertinent ISSPS constraints and opportunities. It includes a causal model that determines and ranks linkages between the many domains that govern integrated agricultural supply and processing systems (IASPS), viz. biophysical, collaboration, culture, economics, environment, future strategy, information sharing, political forces, and structures. Furthermore, a diagnostic toolkit based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was developed.
The toolkit comprises diagnostic criteria and a suite of systemic tools, and determines the suitability of each tool for diagnosing any of the IASPS domains. Overall, the diagnostic criteria include accessibility, interactiveness, transparency, iterativeness, feedback, cause-and-effect logic, and time delays. The tools considered for the toolkit were current reality trees, fuzzy cognitive maps (FCMs), network analysis approaches, rich pictures (RP), stock and flow diagrams, cause and effect diagrams (CEDs), and causal loop diagrams (CLDs). Results from the causal model indicate that collaboration, structure, and information sharing had high direct leverage over the other domains, as these were associated with a larger number of linkages. Collaboration and structure further provided dynamic leverage, as they were also part of feedback loops. Political forces and culture, in contrast, provided low leverage, as these domains were only directly linked to collaboration. It was further revealed that each tool provides a different facet of complexity; hence the need for methodological pluralism. All the tools except RP could be applied, to a certain extent, across both appreciation and analysis criteria. Rich pictures lack causal analysis capabilities, viz. cause-and-effect logic, time delays, and feedback. Stock and flow diagrams and CLDs, conversely, met all criteria. All the diagnostic tools in the toolkit could be used across all the system domains except for FCMs. Fuzzy cognitive maps are explicitly subjective, and their contribution lies outside the objective world; caution should therefore be exercised when FCMs are applied within the biophysical domain. The heuristic is only an aid to decision making: the decision to select a tool, or a combination thereof, remains with the user(s). Even though the heuristic was demonstrated at the Mhlume sugarcane milling area, it is recommended that other areas be considered in future research.
The heuristic itself should be continuously updated with new criteria, tools, and other domain dimensions.
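The TOPSIS ranking that underlies the toolkit follows a standard recipe: normalise the decision matrix, weight it, measure each alternative's distance to the ideal-best and ideal-worst points, and rank by closeness to the ideal. The sketch below illustrates this generic procedure with hypothetical scores and weights; it is not the thesis's actual toolkit or data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix:  (n_alternatives, n_criteria) raw scores
    weights: criterion weights (should sum to 1)
    benefit: True where higher is better, False where lower is better
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalise each criterion column, then apply the weights.
    v = (m / np.linalg.norm(m, axis=0)) * w
    # Ideal best/worst per criterion depend on benefit vs cost direction.
    ideal_best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    ideal_worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal_best, axis=1)
    d_worst = np.linalg.norm(v - ideal_worst, axis=1)
    # Relative closeness to the ideal solution: 1 is best, 0 is worst.
    return d_worst / (d_best + d_worst)

# Hypothetical scores for three diagnostic tools on three criteria
# (e.g. accessibility, transparency, feedback), all benefit criteria.
scores = [[7, 9, 6],
          [8, 7, 8],
          [5, 6, 9]]
closeness = topsis(scores, [0.5, 0.3, 0.2], [True, True, True])
ranking = np.argsort(closeness)[::-1]  # best alternative first
```

Under these illustrative weights the second tool ranks first; in the toolkit, the same closeness scores would indicate which diagnostic tool best suits a given IASPS domain.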

    Squeak and Rattle Prediction for Robust Product Development in the automotive industry

    Squeak and rattle are nonstationary, irregular, and impulsive sounds that are audible inside the car cabin. For decades, customer complaints about squeak and rattle have been, and still are, among the top quality issues in the automotive industry. These annoying sounds are perceived as indications of quality defects and burden car manufacturers with warranty costs. Today, quality improvements regarding the persistent types of sound in the car, as well as the increasing popularity of electric motors as green and quiet propulsion solutions, stress the necessity of attenuating annoying sounds like squeak and rattle more than in the past. Economical and robust solutions to this problem are to be sought in the pre-design-freeze phases of product development and through design-concept-related practices. To achieve this goal, prediction and evaluation tools and methods are required to deal with squeak and rattle quality issues upfront in the product development process. The available tools and methods for predicting squeak and rattle sounds in the pre-design-freeze phases of car development are not yet sufficiently mature. The complexity of squeak and rattle events, the existing knowledge gap about the mechanisms behind these sounds, the lack of accurate simulation and post-processing methods, and the computational cost of complex simulations are some of the significant hurdles behind this immaturity. This research addresses the problem by identifying a framework for the prediction of squeak and rattle sounds based on a cause-and-effect diagram. The main domains, as well as the elements and sub-contributors to the problem in each domain within this framework, are determined through literature studies, field explorations, and descriptive studies conducted on the subject. Further, improvement suggestions for squeak and rattle evaluation and prediction methods are proposed through prescriptive studies.
The applications of some of the proposed methods in the automotive industry are demonstrated and examined on industrial problems. The outcome of this study enhances the understanding of some of the parameters involved in squeak and rattle generation. Simulation methods are proposed to actively involve the contributing factors studied in this work in squeak and rattle risk evaluation. To enhance the efficiency and accuracy of the risk evaluation process, methods were investigated and proposed for system excitation efficiency, modelling accuracy and efficiency, and quantification of the response in the time and frequency domains. The demonstrated simulation methods, together with the improved understanding of the mechanisms behind the phenomenon, can facilitate a more accurate and robust prediction of squeak and rattle risk during the pre-design-freeze stages of car development.

    Automotive UX design and data-driven development: Narrowing the gap to support practitioners

    The development and evaluation of In-Vehicle Information Systems (IVISs) is strongly based on insights from qualitative studies conducted in artificial contexts (e.g., driving simulators or lab experiments). However, the growing complexity of these systems and the uncertainty about the contexts in which they are used create a need to augment qualitative data with quantitative data collected during real-world driving. In contrast to many digital companies that are already successfully using data-driven methods, Original Equipment Manufacturers (OEMs) are not yet succeeding in realizing the potential such methods offer. We aim to understand what prevents automotive OEMs from applying data-driven methods, what needs practitioners formulate, and how collecting and analyzing usage data from vehicles can enhance UX activities. We adopted a Multiphase Mixed Methods approach comprising two interview studies with more than 15 UX practitioners and two action research studies conducted with two different OEMs. From the four studies, we synthesize the needs of UX designers, extract limitations within the domain that hinder the application of data-driven methods, elaborate on unleveraged potential, and formulate recommendations to improve the usage of vehicle data. We conclude that, in addition to modernizing the legal, technical, and organizational infrastructure, UX and Data Science must be brought closer together by reducing silo mentality and increasing interdisciplinary collaboration. New tools and methods need to be developed, and UX experts must be empowered to make data-based evidence an integral part of the UX design process.

    Software engineering perspectives on physiological computing

    Physiological computing is an interesting and promising concept for widening the communication channel between (human) users and computers, thereby increasing software systems' contextual awareness and rendering them smarter than they are today. Using physiological inputs in pervasive computing systems allows re-balancing the information asymmetry between the human user and the computer system: while pervasive computing systems are well able to flood the user with information and sensory input (such as sounds, lights, and visual animations), users have only a very narrow input channel to computing systems, most of the time restricted to keyboards, mice, touchscreens, accelerometers, and GPS receivers (e.g., through smartphone usage). Interestingly, this information asymmetry often forces users to submit to the quirks of the computing system to achieve their goals; for example, users may have to provide information the software system demands through a narrow, time-consuming input mode even when the system could sense it implicitly from the human body. Physiological computing is a way to circumvent these limitations; however, systematic means for developing and moulding physiological computing applications into software are still lacking. This thesis proposes a methodological approach to the creation of physiological computing applications that makes use of component-based software engineering. Components help impose a clear structure on software systems in general and can thus be used for physiological computing systems as well. As an additional benefit, using components allows physiological computing systems to leverage reconfiguration as a means to control and adapt their own behaviour. This adaptation can be used to adjust the behaviour both to the human and to the available computing environment in terms of resources and available devices, an activity that is crucial for complex physiological computing systems.
With the help of components and reconfigurations, it is possible to structure the functionality of physiological computing applications in a way that makes them manageable and extensible, thus allowing a stepwise and systematic extension of a system's intelligence. Using reconfigurations entails a larger issue, however: understanding and fully capturing the behaviour of a system under reconfiguration is challenging, as the system may change its structure in ways that are difficult to predict. Therefore, this thesis also introduces a means for the formal verification of reconfigurations based on assume-guarantee contracts. With the proposed assume-guarantee contract framework, it is possible to prove that a given system design (including component behaviours and reconfiguration specifications) satisfies real-time properties expressed as assume-guarantee contracts, using a variant of real-time linear temporal logic introduced in this thesis: metric interval temporal logic for reconfigurable systems. Finally, this thesis embeds both the practical approach to the realisation of physiological computing systems and the formal verification of reconfigurations into Scrum, a modern and agile software development methodology. The surrounding methodological approach is intended to provide a frame for the systematic development of physiological computing systems, from first psychological findings to a working software system with both satisfactory functionality and software quality.
By integrating practical and theoretical aspects of software engineering into a self-contained development methodology, this thesis proposes a roadmap and guidelines for the creation of new physiological computing applications.
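The assume-guarantee contracts mentioned in this abstract follow a standard pattern, sketched here in its generic form; the thesis's specific real-time variant (metric interval temporal logic for reconfigurable systems) may formulate it differently.

```latex
% A contract pairs an assumption A on the environment with a
% guarantee G on the component: a component M satisfies the
% contract if, whenever the environment meets A, M delivers G.
\mathcal{C} = (A, G), \qquad
M \models \mathcal{C} \;\iff\;
\llbracket M \rrbracket \cap \llbracket A \rrbracket \subseteq \llbracket G \rrbracket
```

In a real-time setting, $A$ and $G$ are formulas of a timed logic such as MITL; for instance, a guarantee like $\Box\,(request \rightarrow \Diamond_{[0,2]}\, response)$ bounds the response time of a component to two time units.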