Simulation of Mixed Critical In-vehicular Networks
Future automotive applications, ranging from advanced driver assistance to
autonomous driving, will greatly increase the demands on in-vehicular
networks. Data flows with high-bandwidth or low-latency requirements, and in
particular the many additional communication relations, will introduce a new
level of complexity to the in-car communication system. Future communication
backbones, which interconnect sensors and actuators with ECUs in cars, are
expected to be built on Ethernet technologies. However, signalling from
different application domains demands network services with tailored
attributes, including the real-time transmission protocols defined in the
TSN Ethernet extensions. These QoS constraints increase network complexity
even further. Event-based simulation is a key technology for mastering the
challenges of in-car network design. This chapter introduces the
domain-specific aspects and simulation models for in-vehicular networks and
presents an overview of the car-centric network design process. Starting
from a domain-specific description language, we cover the corresponding
simulation models with their workflows and apply our approach to a case
study of an in-car network for a premium car.
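The event-based approach described above can be illustrated with a minimal sketch. The following toy model (our illustration, not a model from the chapter) simulates frames on a single link with strict priority queuing, a simplification of TSN traffic shaping; the function name, frame format, and all numbers are hypothetical:

```python
import heapq

def simulate_priority_link(frames, link_bps):
    """Event-based sketch of one Ethernet link with strict priority
    queuing (a simplification of TSN traffic shaping).

    frames: list of (arrival_s, priority, size_bytes); a lower
    priority value is more urgent. Returns (arrival, priority,
    latency) per frame in transmission order.
    """
    pending = sorted(frames)           # frames ordered by arrival time
    queue = []                         # heap of (priority, arrival, size)
    now, i, latencies = 0.0, 0, []
    while i < len(pending) or queue:
        # admit every frame that has arrived by the current time
        while i < len(pending) and pending[i][0] <= now:
            a, p, s = pending[i]
            heapq.heappush(queue, (p, a, s))
            i += 1
        if not queue:                  # link idle: jump to next arrival
            now = pending[i][0]
            continue
        p, a, s = heapq.heappop(queue)
        tx = s * 8 / link_bps          # transmission time of this frame
        now += tx
        latencies.append((a, p, now - a))
    return latencies
```

With two simultaneous frames on a 1 Mbit/s link, the small high-priority frame overtakes the queued bulk frame, which is exactly the behavior a designer would inspect in a real event-based network simulation.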
Designing and Composing for Interdependent Collaborative Performance with Physics-Based Virtual Instruments
Interdependent collaboration is a system of live musical performance in which performers can directly manipulate each other's musical outcomes. While most collaborative musical systems implement electronic communication channels between players that allow for parameter mappings, remote transmission of actions and intentions, or exchanges of musical fragments, these channels interrupt the energy continuum between gesture and sound, breaking our cognitive representation of gesture-to-sound dynamics.
Physics-based virtual instruments allow for acoustically and physically plausible behaviors that are related to (and can be extended beyond) our experience of the physical world. They inherently maintain and respect a representation of the gesture to sound energy continuum.
This research explores the design and implementation of custom physics-based virtual instruments for real-time interdependent collaborative performance. It leverages the inherently physically plausible behaviors of physics-based models to create dynamic, nuanced, and expressive interconnections between performers. Design considerations, criteria, and frameworks are distilled from the literature in order to develop three new physics-based virtual instruments and associated compositions intended for dissemination and live performance by the electronic music and instrumental music communities. Conceptual, technical, and artistic details and challenges are described, and reflections and evaluations by the composer-designer and performers are documented.
Interoperability Among Unmanned Maritime Vehicles: Review and First In-field Experimentation
Complex maritime missions, both above and below the surface, have traditionally been carried out by manned surface ships and submarines equipped with advanced sensor systems. Unmanned Maritime Vehicles (UMVs) are increasingly demonstrating their potential for improving existing naval capabilities due to their rapid deployability, easy scalability, and high reconfigurability, offering a reduction in both operational time and cost. In addition, they mitigate the risk to personnel by keeping the human far from the risk but in the loop of decision making. In the long term, a clear interoperability framework between unmanned systems, human operators, and legacy platforms will be crucial for effective joint operations planning and execution. However, present multi-vendor, multi-protocol solutions for multi-domain UMV activities are hard to interoperate without common mission-control interfaces and communication-protocol schemes. Furthermore, the underwater domain presents significant challenges that cannot be met by solutions developed for terrestrial networks. In this paper, the interoperability topic is discussed by blending a review of technological growth from 2000 onwards with the authors' recent in-field experience; finally, important directions for future research are given. Within the broad framework of interoperability in general, the paper focuses on interoperability among UMVs without neglecting the role of the human operator in the loop. The picture emerging from the review demonstrates that interoperability is currently receiving a high level of attention and a great deal of diverse effort.
In addition, the manuscript describes the experience from a sea-trial exercise in which interoperability was demonstrated by integrating heterogeneous autonomous UMVs into the NATO Centre for Maritime Research and Experimentation (CMRE) network, using different robotic middlewares and acoustic modem technologies to implement a multistatic active sonar system. A perspective on interoperability in marine robotics missions emerges in the paper through a discussion of current capabilities, in-field experience, and future advanced technologies unique to UMVs. Nonetheless, the spread of UMV applications is slowed by a lack of human confidence. In fact, an interoperable system-of-systems of autonomous UMVs will require operators to be involved only at a supervisory level. As trust develops, endorsed by stable and mature interoperability, human monitoring can be reduced to exploit the tremendous potential of fully autonomous UMVs.
OSCER state of the Center
Biography
Dr. Henry Neeman is the Director of the OU Supercomputing Center for Education & Research, Assistant Vice President, Information Technology – Research Strategy Advisor, Associate Professor in the College of Engineering, and Adjunct Associate Professor in the School of Computer Science at the University of Oklahoma. He received his BS in computer science and his BA in statistics with a minor in mathematics from the State University of New York at Buffalo in 1987, his MS in CS from the University of Illinois at Urbana-Champaign in 1990, and his PhD in CS from UIUC in 1996. Prior to coming to OU, Dr. Neeman was a postdoctoral research associate at the National Center for Supercomputing Applications at UIUC, and before that served as a graduate research assistant both at NCSA and at the Center for Supercomputing Research & Development.
In addition to his own teaching and research, Dr. Neeman collaborates with dozens of research groups, applying High Performance Computing techniques in fields such as numerical weather prediction, bioinformatics and genomics, data mining, high energy physics, astronomy, nanotechnology, petroleum reservoir management, river basin modeling and engineering optimization. He serves as an ad hoc advisor to student researchers in many of these fields.
Dr. Neeman's research interests include high performance computing, scientific computing, parallel and distributed computing, and computer science education. Presented at the Oklahoma Supercomputing Symposium 2013, October 2, 2013. The OU Supercomputing Center for Education & Research (OSCER) celebrates its 11th anniversary on August 31, 2013. In this report, we examine what OSCER is, what OSCER does, what OSCER has accomplished in its 11 years, and where OSCER is going.
A Networked Dataflow Simulation Environment for Signal Processing and Data Mining Applications
In networked signal processing systems, dataflow graphs can be used to
describe the processing on individual network nodes. However, to analyze the
correctness and performance of these systems, designers must understand the
interactions across these individual "node-level" dataflow graphs, as they
communicate across the network, in addition to the characteristics of the
individual graphs.
In this thesis, we present a novel simulation environment called the
NS-2/TDIF SIMulation environment (NT-SIM). NT-SIM provides integrated
co-simulation of networked systems, combining the network analysis
capabilities of the Network Simulator (ns) with the scheduling capabilities
of a dataflow-based framework, thereby providing novel features for more
comprehensive simulation of networked signal processing systems.
Through a novel integration of advanced tools for network and dataflow-graph
simulation, our NT-SIM environment allows comprehensive simulation and
analysis of networked systems. We present two case studies that concretely
demonstrate the utility of NT-SIM in the contexts of heterogeneous signal
processing and data mining system design.
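As a toy illustration of the node-level dataflow semantics that such a framework schedules (our sketch, not NT-SIM code; all names are hypothetical), an actor fires once every one of its input queues holds a token:

```python
from collections import deque

class Actor:
    """A dataflow actor: fires when every input queue holds a token."""
    def __init__(self, name, func, inputs):
        self.name, self.func, self.inputs = name, func, inputs
        self.queues = {src: deque() for src in inputs}

    def ready(self):
        return all(self.queues[src] for src in self.inputs)

    def fire(self):
        # consume one token per input edge, produce one output token
        args = [self.queues[src].popleft() for src in self.inputs]
        return self.func(*args)

def run_graph(actors, edges, sources, steps):
    """Push source tokens, then repeatedly fire ready actors.
    edges maps a producer name to the consumer Actors it feeds."""
    outputs = {}
    for name, tokens in sources.items():
        for consumer in edges.get(name, []):
            consumer.queues[name].extend(tokens)
    for _ in range(steps):
        for actor in actors:
            if actor.ready():
                result = actor.fire()
                outputs.setdefault(actor.name, []).append(result)
                for consumer in edges.get(actor.name, []):
                    consumer.queues[actor.name].append(result)
    return outputs
```

A co-simulator in the spirit of NT-SIM would drive such node-level graphs while a network simulator delays the tokens that travel between nodes.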
A Decade of Research in Fog computing: Relevance, Challenges, and Future Directions
Recent developments in the Internet of Things (IoT) and real-time
applications have led to unprecedented growth in connected devices and the
data they generate. Traditionally, this sensor data is transferred to and
processed in the cloud, and control signals are sent back to the relevant
actuators as part of IoT applications. This cloud-centric IoT model results
in increased latencies and network load, and compromises privacy. To address
these problems, the term Fog Computing was coined by Cisco in 2012, a decade
ago; fog computing utilizes proximal computational resources for processing
sensor data. Ever since its proposal, fog computing has attracted
significant attention, and the research fraternity has focused on addressing
different challenges such as fog frameworks, simulators, resource
management, placement strategies, quality-of-service aspects, and fog
economics. However, after a decade of research, we still do not see
large-scale deployments of public/private fog networks that could be
utilized in realizing interesting IoT applications. In the literature, we
see only pilot case studies, small-scale testbeds, and the use of simulators
to demonstrate the scale of the proposed models addressing the respective
technical challenges. There are several reasons for this; most importantly,
fog computing has not yet presented a clear business case for companies and
participating individuals. This paper summarizes the technical,
non-functional, and economic challenges that have been posing hurdles to
adopting fog computing, consolidating them across different clusters. The
paper also summarizes the relevant academic and industrial contributions in
addressing these challenges and provides future research directions for
realizing real-time fog computing applications, also considering emerging
trends such as federated learning and quantum computing.
Comment: Accepted for publication at Wiley Software: Practice and Experience journal.
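The latency argument against the cloud-centric model can be made concrete with a back-of-the-envelope sketch (our illustration; the function name and every number are made up, not measurements from the paper):

```python
def control_loop_latency(rtt_s, proc_s, payload_bytes, uplink_bps):
    """Toy end-to-end latency of one sense -> process -> actuate cycle:
    transmit the sensor payload, wait one network round trip, process."""
    return payload_bytes * 8 / uplink_bps + rtt_s + proc_s

# Illustrative numbers: a distant cloud vs. a nearby fog node. The fog
# node processes more slowly but sits a few milliseconds away.
cloud = control_loop_latency(rtt_s=0.080, proc_s=0.005,
                             payload_bytes=2000, uplink_bps=10e6)
fog = control_loop_latency(rtt_s=0.004, proc_s=0.010,
                           payload_bytes=2000, uplink_bps=100e6)
```

Under these assumed figures the fog cycle completes several times faster than the cloud cycle, because the wide-area round trip, not compute speed, dominates the loop.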