204 research outputs found

    Transdisciplinarity seen through Information, Communication, Computation, (Inter-)Action and Cognition

    Similar to oil, which acted as a basic raw material and key driving force of industrial society, information acts as a raw material and principal mover of the knowledge society in the production, propagation, and application of knowledge. New developments in information processing and information communication technologies allow increasingly complex and accurate descriptions, representations, and models, which are often multi-parameter, multi-perspective, multi-level, and multidimensional. This leads to the necessity of collaborative work between different domains with corresponding specialist competences, sciences, and research traditions. We present several major transdisciplinary unification projects for information and knowledge, which proceed on the descriptive and logical levels and at the level of generative mechanisms. A parallel process of boundary crossing and transdisciplinary activity is under way in the applied domains. Technological artifacts are becoming increasingly complex, and their design is strongly user-centered, which brings in not only function and various technological qualities but also other aspects, including aesthetics, user experience, ethics, and sustainability with its social and environmental dimensions. When integrating knowledge from a variety of fields, with contributions from different groups of stakeholders, numerous challenges arise in establishing a common view and a common course of action. In this context, information is our environment, and informational ecology determines both epistemology and spaces for action. We present some insights into the current state of the art of transdisciplinary theory and practice in information studies and informatics. We depict different facets of transdisciplinarity as seen from our different research fields, which include information studies, computability, human-computer interaction, multi-operating-system environments, and philosophy. Comment: Chapter in the forthcoming book Information Studies and the Quest for Transdisciplinarity (World Scientific), Mark Burgin and Wolfgang Hofkirchner, Editors.

    A Methodological Process for the Design of Frameworks Oriented to Infotainment User Interfaces

    The objective of this paper was to propose a methodological process for the design of frameworks oriented to infotainment user interfaces. The proposed process comprises four stages: conceptualization, structuring, documentation, and evaluation; these stages include activities, tasks, and deliverables to guide a work team during the design of a framework. To determine the stages and their components, an analysis of 42 papers was carried out through a systematic literature review, looking for similarities in the design processes of frameworks related to user interfaces. Evaluation by a panel of experts was used to determine the validity of the proposal: the conceptual proposal was provided to a panel of 10 experts for analysis, and a Likert-scale questionnaire was then used to collect their validation feedback. The results of the evaluation indicated that the methodological process is valid for meeting the objective of designing a framework oriented to infotainment user interfaces.

    Sharing GPUs for Real-Time Autonomous-Driving Systems

    Autonomous vehicles at mass-market scales are on the horizon. Cameras are the least expensive among common sensor types and can preserve features such as color and texture that other sensors cannot. Therefore, realizing full autonomy in vehicles at a reasonable cost is expected to entail computer-vision techniques. These computer-vision applications require massive parallelism provided by the underlying shared accelerators, such as graphics processing units, or GPUs, to function “in real time.” However, when computer-vision researchers and GPU vendors refer to “real time,” they usually mean “real fast”; in contrast, certifiable automotive systems must be “real time” in the sense of being predictable. This dissertation addresses the challenging problem of how GPUs can be shared predictably and efficiently for real-time autonomous-driving systems. We tackle this challenge in four steps. First, we investigate NVIDIA GPUs with respect to scheduling, synchronization, and execution. We conduct an extensive set of experiments to infer NVIDIA GPU scheduling rules, which are unfortunately undisclosed by NVIDIA and are beyond access owing to their closed-source software stack. We also expose a list of pitfalls pertaining to CPU-GPU synchronization that can result in unbounded response times of GPU-using applications. Lastly, we examine a fundamental trade-off for designing real-time tasks under different execution options. Overall, our investigation provides an essential understanding of NVIDIA GPUs, allowing us to further model and analyze GPU tasks. Second, we develop a new model and conduct schedulability analysis for GPU tasks. We extend the well-studied sporadic task model with additional parameters that characterize the parallel execution of GPU tasks. We show that NVIDIA scheduling rules are subject to fundamental capacity loss, which implies a necessary total utilization bound. 
We derive response-time bounds for GPU task systems that satisfy our schedulability conditions. Third, we address the industrial challenge of supplying the throughput performance of computer-vision frameworks needed to support adequate coverage and redundancy from an array of cameras. We re-think the design of convolutional neural network (CNN) software to better utilize hardware resources and achieve increased throughput (number of simultaneous camera streams) without any appreciable increase in per-frame latency (camera to CNN output) or reduction in per-stream accuracy. Fourth, we apply our analysis to finer-grained graph scheduling of a computer-vision standard, OpenVX, which explicitly targets embedded and real-time systems. We evaluate both the analytical and empirical real-time performance of our approach. Doctor of Philosophy
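The necessary total utilization bound mentioned in the abstract can be sketched as a simple check on a sporadic GPU task set (an illustrative sketch; the task parameters, the bound value, and all names are assumptions, not the dissertation's actual analysis):

```python
from dataclasses import dataclass

@dataclass
class GpuTask:
    cost: float    # worst-case GPU execution time per job (ms)
    period: float  # minimum inter-arrival time (ms)

def total_utilization(tasks):
    # Long-run fraction of GPU capacity the task set demands.
    return sum(t.cost / t.period for t in tasks)

def passes_utilization_bound(tasks, bound):
    # Necessary condition: a task set whose total utilization exceeds
    # the platform's capacity bound (< 1.0 when the scheduler suffers
    # capacity loss) cannot be schedulable.
    return total_utilization(tasks) <= bound

tasks = [GpuTask(cost=5, period=30),
         GpuTask(cost=10, period=60),
         GpuTask(cost=4, period=40)]
print(round(total_utilization(tasks), 3))  # → 0.433
```

A check like this is only necessary, not sufficient: passing the bound does not by itself guarantee that response-time bounds hold.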

    Real-time scheduling for 3D rendering on automotive embedded systems

    The use of 3D graphics is increasingly popular in the automotive domain. For example, Mercedes-Benz showed in the F125 prototype car how the analog needles of the instrument cluster can be replaced by digital displays. The trend toward 3D applications moves in two directions: on the one hand toward more critical applications such as the speedometer, and on the other toward third-party applications obtained, for example, from an app store. To satisfy isolation requirements, new functions in a car have traditionally been implemented on new, dedicated electronic control units. However, to save cost, energy, and installation space, all 3D applications should share a single hardware platform and thus a single GPU. For time-sensitive applications such as the speedometer, this raises the challenge of guaranteeing real-time rendering, which requires effective real-time GPU scheduling concepts that can guarantee safety and isolation for 3D rendering. Since current GPUs are not preemptible, a deadline-based scheduler must know the execution time of GPU commands in advance. Existing scheduling concepts unfortunately do not support dynamic tasks or periodic real-time deadlines, or they assume preemptible execution. This thesis describes the requirements relevant to automotive HMI rendering. Based on these requirements, it presents the concept of a virtualized automotive graphics system (VAGS), which uses a hypervisor to ensure isolation between different VMs, in particular between the head unit and the instrument cluster. Furthermore, a novel framework is presented that measures the execution time of GPU commands and predicts it based on OpenGL ES 2.0. Prediction models are presented for the relevant GPU commands, such as draw and SwapBuffers. For draw commands, two heuristics are proposed that estimate the number of fragments, along with two concepts that predict the execution time of the shaders and an optional online correction mechanism. The number of fragments is estimated either from a bounding box of the rendered model, to which the vertex shader's projection is applied, or from a subset of the rendered triangles, which is used to determine the average size of a triangle. To estimate a shader's runtime, it is either executed in a calibration environment in a separate OpenGL context, or an offline-trained MARS model is used. The implementation and evaluation of the framework demonstrate its feasibility and show that good prediction accuracy can be achieved. When rendering a scene of the well-known benchmark Glmark2, for example, fewer than 0.4% of the samples were underestimated by more than 100 μs and fewer than 0.2% were overestimated by more than 100 μs. Over long runs, our implementation typically adds less than 25% of CPU time, and in some scenarios the overhead is even negligible; with the most efficient approach, application startup is delayed by only about 30 ms. In addition, a real-time 3D GPU scheduling framework is presented that gives guarantees to critical applications while making the remaining GPU resources available to less critical applications, thereby achieving high GPU utilization. Since current GPUs are not preemptible, the proposed execution-time prediction concepts are used to make priority-based scheduling decisions. The implementation is based on an automotive-grade embedded system running Linux. Evaluations on this system demonstrate the feasibility and effectiveness of the proposed concepts. The GPU scheduler meets the respective real-time requirements for a variable number of applications producing different GPU command sequences. In a demanding scenario with 17 applications, it achieves a high GPU utilization of 99% while meeting 99.9% of the deadlines of the highest-priority application. Moreover, scheduling itself is executed efficiently in real time, with less than 9 μs latency.
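The bounding-box fragment-estimation heuristic described in this abstract can be sketched roughly as follows (illustrative only; the function names and the projection interface are assumptions, not the thesis's implementation):

```python
def estimate_fragments(bbox_min, bbox_max, project, viewport_w, viewport_h):
    """Estimate the number of fragments a draw call will produce by
    projecting the model's bounding box to screen space. A heuristic,
    not an exact rasterizer count."""
    # The 8 corners of the model's axis-aligned bounding box.
    corners = [(x, y, z)
               for x in (bbox_min[0], bbox_max[0])
               for y in (bbox_min[1], bbox_max[1])
               for z in (bbox_min[2], bbox_max[2])]
    xs, ys = [], []
    for c in corners:
        # project() stands in for the vertex shader's model-view-projection
        # and returns normalized device coordinates (x, y) in [-1, 1].
        ndc_x, ndc_y = project(c)
        # Map NDC to pixel coordinates, clamped to the viewport.
        xs.append(min(max((ndc_x + 1) * 0.5 * viewport_w, 0), viewport_w))
        ys.append(min(max((ndc_y + 1) * 0.5 * viewport_h, 0), viewport_h))
    # Screen-space area of the projected box approximates the fragment count.
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

# Identity projection: a unit cube then covers the whole 800x600 viewport.
identity = lambda p: (p[0], p[1])
print(estimate_fragments((-1, -1, -1), (1, 1, 1), identity, 800, 600))  # 480000.0
```

The estimate is conservative for concave or sparse geometry, which is one reason the thesis pairs it with a triangle-sampling alternative and an online correction mechanism.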

    A Survey of Customer Service System Based on Learning

    With the rapid development of artificial intelligence, customer service has shifted from manual handling toward intelligent customer service systems. An intelligent customer service system is generally a chatbot based on natural language processing, i.e., a dialogue system. It therefore plays a vital role in many fields, especially e-commerce. In this article, to help researchers further study customer service systems for e-commerce, we survey learning-based methods for dialogue understanding, dialogue management, and dialogue response generation in customer service systems. In particular, we compare the advantages and disadvantages of these methods and point out directions for further research.

    Towards a Cyber-Physical Manufacturing Cloud through Operable Digital Twins and Virtual Production Lines

    In the last decade, the paradigm of Cyber-Physical Systems (CPS) has integrated industrial manufacturing systems with Cloud Computing technologies for Cloud Manufacturing. Up to 2015, many CPS-based manufacturing systems collected real-time machining data to perform remote monitoring, prognostics and health management, and predictive maintenance. However, these CPS-integrated, network-ready machines were not directly connected to the elements of Cloud Manufacturing and required a human in the loop. Addressing this gap, in 2017 we introduced a new paradigm, the Cyber-Physical Manufacturing Cloud (CPMC), which bridges the gap between physical machines and virtual space. CPMC virtualizes machine tools in the cloud through web services for direct monitoring and operation over the Internet. CPMC differs fundamentally from contemporary manufacturing paradigms. For instance, CPMC virtualizes machining tools in the cloud using remote services and establishes direct Internet-based communication, which is overlooked in existing Cloud Manufacturing systems. Another contemporary paradigm, cyber-physical production systems, enables networked access to machining tools; CPMC, in contrast, virtualizes manufacturing resources in the cloud and monitors and operates them over the Internet. This dissertation defines the fundamental concepts of CPMC and expands its horizon into different aspects of cloud-based virtual manufacturing, such as Digital Twins and Virtual Production Lines. The Digital Twin (DT) is another concept, evolving since 2002, that creates as-is replicas of machining tools in cyber space. Up to 2018, many researchers proposed state-of-the-art DTs that focused only on monitoring production lifecycle management through simulations and data-driven analytics, overlooking the execution of manufacturing processes through DTs from virtual space. This dissertation identifies that DTs can be made more productive if, besides monitoring, they engage directly in the execution of manufacturing operations. Toward this novel approach, this dissertation proposes a new operable DT model of CPMC that inherits the features of direct monitoring and operation from the cloud. This research envisions and opens the door to future manufacturing systems in which resources are developed as cloud-based DTs for remote and distributed manufacturing. The proposed concepts and visions of DTs have spawned the following fundamental research. In 2019, this dissertation proposed the novel concept of DT-based Virtual Production Lines (VPL) in CPMC. It presents a service-oriented architecture of DTs that virtualizes physical manufacturing resources in CPMC; the proposed DT architecture offers a more compact and integral service-oriented virtual representation of manufacturing resources. To re-configure a VPL, one requirement is to establish DT-to-DT collaborations in manufacturing clouds, which mirror concurrent resource-to-resource collaborations on shop floors. Satisfying these requirements, this research designs a novel framework to easily re-configure, monitor, and operate VPLs using the DTs of CPMC. CPMC publishes individual web services for machining tools, a traditional approach in the domain of service computing, but this approach overcrowds service registry databases. In 2020, this dissertation introduced OpenDT, a novel service publication and discovery approach that publishes DTs with collections of services. Experimental results show easier discovery and remote access of DTs when re-configuring VPLs. The research proposed in this dissertation has received numerous citations from both industry and academia, demonstrating the impact of its contributions.
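The OpenDT idea of publishing a digital twin as a bundle of services, rather than one registry entry per service, can be sketched with a minimal in-memory registry (illustrative only; the registry layout, twin names, and endpoints are hypothetical, not the dissertation's actual API):

```python
# Registry keyed by digital-twin name; each twin bundles all of its
# services, instead of registering every machine-tool service separately.
registry = {}

def publish_twin(name, services):
    # services: mapping of service name -> endpoint path.
    registry[name] = dict(services)

def discover(service_name):
    # Find every twin that offers the requested service.
    return {twin: svcs[service_name]
            for twin, svcs in registry.items()
            if service_name in svcs}

publish_twin("mill_01", {"monitor": "/mill_01/monitor",
                         "operate": "/mill_01/operate"})
publish_twin("lathe_02", {"monitor": "/lathe_02/monitor"})

print(discover("operate"))  # {'mill_01': '/mill_01/operate'}
```

Bundling keeps the registry at one entry per twin regardless of how many services each machine tool exposes, which is the crowding problem the abstract describes.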

    Proceedings of the 4th Workshop of the MPM4CPS COST Action

    Proceedings of the 4th Workshop of the MPM4CPS COST Action, containing the presentations delivered during the workshop and papers with extended versions of some of them.

    Progress toward multi‐robot reconnaissance and the MAGIC 2010 competition

    Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges, including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human-robot interfaces. This paper describes our 14-robot team, which won the MAGIC 2010 competition. It was designed to perform urban reconnaissance missions. In the paper, we describe a variety of autonomous systems that require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, which is essential for autonomous planning and for giving humans situational awareness, required the development of fast loop-closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. We will describe technical contributions throughout our system that played a significant role in its performance. We will also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain. © 2012 Wiley Periodicals, Inc. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/93532/1/21426_ftp.pd
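Centralized task allocation of the kind mentioned above can be illustrated with a greedy nearest-task sketch (purely illustrative; the MAGIC team's actual planner is far more sophisticated, and all names and coordinates here are made up):

```python
import math

def allocate_tasks(robots, tasks):
    """Greedy centralized allocation: each robot, in turn, claims the
    closest unclaimed task. Robots then execute myopically while the
    central planner re-runs allocation as the map evolves."""
    assignments = {}
    remaining = list(tasks)
    for name, pos in robots.items():
        if not remaining:
            break
        best = min(remaining, key=lambda t: math.dist(pos, t))
        assignments[name] = best
        remaining.remove(best)
    return assignments

robots = {"r1": (0, 0), "r2": (10, 10)}
tasks = [(1, 1), (9, 9)]
print(allocate_tasks(robots, tasks))  # {'r1': (1, 1), 'r2': (9, 9)}
```

Greedy assignment is cheap enough to re-run every planning cycle, which suits an architecture where central coordination is decoupled from each robot's myopic execution.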

    Design of a Scenario-Based Immersive Experience Room

    Klopfenstein, Cuno Lorenz

    Practical, appropriate, empirically-validated guidelines for designing educational games

    There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and game design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. It is intended that this paper can both focus educational game designers on the features of games that are genuinely useful for education, and also introduce a successful form of teaching with which this audience may not yet be familiar.
