Analysis and Mitigation of Shared Resource Contention on Heterogeneous Multicore: An Industrial Case Study
In this paper, we address the industrial challenge put forth by ARM at ECRTS
2022. We systematically analyze the effect of shared resource contention on an
augmented reality head-up display (AR-HUD) case-study application of the
industrial challenge on a heterogeneous multicore platform, the NVIDIA Jetson
Nano. We configure the AR-HUD application so that it can process incoming image
frames in real time at 20 Hz on the platform. We use micro-architectural
denial-of-service (DoS) attacks as the challenge's aggressor tasks and show
that they can dramatically impact the latency and accuracy of the AR-HUD
application, resulting in significant deviations of the estimated trajectories
from the ground truth, despite our best efforts to mitigate their influence
with cache partitioning and real-time scheduling of the AR-HUD application. We
show that dynamic LLC (or DRAM, depending on the aggressor) bandwidth
throttling of the aggressor tasks is an effective means to ensure real-time
performance of the AR-HUD application without resorting to over-provisioning
the system.
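The dynamic bandwidth-throttling mitigation described above can be sketched as a per-period budget regulator in the spirit of MemGuard-style memory-bandwidth regulation. The class name, interface, and all numbers below are illustrative assumptions, not the paper's implementation:

```python
# Sketch of per-period bandwidth throttling: each aggressor core gets a
# budget of memory transactions per regulation period; requests beyond the
# budget are denied (i.e., the core would be stalled) until the next period.
class BandwidthThrottle:
    def __init__(self, budget_per_period):
        self.budget = budget_per_period
        self.remaining = budget_per_period

    def new_period(self):
        # In a real regulator this is driven by a periodic timer interrupt.
        self.remaining = self.budget

    def request(self, transactions):
        """Grant up to `transactions`; return how many were allowed."""
        granted = min(transactions, self.remaining)
        self.remaining -= granted
        return granted

t = BandwidthThrottle(budget_per_period=1000)
print(t.request(800))   # 800 granted, 200 left this period
print(t.request(500))   # only the remaining 200 granted
t.new_period()
print(t.request(500))   # full budget restored
```

A real implementation would count transactions with hardware performance counters and stall the offending core, but the budget arithmetic is the same.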
sMolBoxes: Dataflow Model for Molecular Dynamics Exploration
We present sMolBoxes, a dataflow representation for the exploration and analysis of long molecular dynamics (MD) simulations. When MD simulations reach millions of snapshots, frame-by-frame observation is no longer feasible. Thus, biochemists rely largely on quantitative analysis of geometric and physico-chemical properties. However, using abstract methods to study inherently spatial data hinders exploration and poses a considerable workload. sMolBoxes link quantitative analysis of a user-defined set of properties with interactive 3D visualizations. They enable visual explanations of molecular behaviors, which lead to efficient discovery of biochemically significant parts of the MD simulation. sMolBoxes follow a node-based model for flexible definition, combination, and immediate evaluation of the properties to be investigated. Progressive analytics enables fluid switching between multiple properties, which facilitates hypothesis generation. Each sMolBox provides quick insight into an observed property or function, available in more detail in the bigBox View. Our case studies illustrate that even with relatively few sMolBoxes, it is possible to express complex analytical tasks, and that their use in exploratory analysis is perceived as more efficient than traditional scripting-based methods.
Detecting Anomalous Microflows in IoT Volumetric Attacks via Dynamic Monitoring of MUD Activity
IoT networks are increasingly becoming the target of sophisticated new
cyber-attacks. Anomaly-based detection methods are promising for finding new
attacks, but they face practical challenges: false-positive alarms, results
that are hard to explain, and difficulty scaling cost-effectively. The recent
IETF standard called Manufacturer Usage Description (MUD) is promising for
limiting the attack surface of IoT devices by formally specifying their
intended network behavior. In this paper, we use SDN to enforce and monitor the
expected behaviors of each IoT device, and we train one-class classifier
models to detect volumetric attacks.
Our specific contributions are fourfold. (1) We develop a multi-level
inferencing model to dynamically detect anomalous patterns in network activity
of MUD-compliant traffic flows via SDN telemetry, followed by packet inspection
of anomalous flows. This provides enhanced fine-grained visibility into
distributed and direct attacks, allowing us to precisely isolate volumetric
attacks with microflow (5-tuple) resolution. (2) We collect traffic traces
(benign and a variety of volumetric attacks) from network behavior of IoT
devices in our lab, generate labeled datasets, and make them available to the
public. (3) We prototype a full working system (with modules released as open
source), demonstrate its efficacy in detecting volumetric attacks on several
consumer IoT devices with high accuracy and low false positives, and provide
insights into the cost and performance of our system. (4) We demonstrate how
our models scale in environments with a large number of connected IoT devices
(with datasets collected from a network of IP cameras on our university
campus) by considering various training strategies (per device unit versus per
device type) and balancing prediction accuracy against model cost in terms of
size and training time.
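The one-class idea behind contribution (1) — learn only the benign profile of a flow's telemetry and flag deviations — can be illustrated with a toy detector. The features (packet rate, byte rate), the z-score formulation, and the threshold are illustrative assumptions, not the paper's actual models:

```python
# Toy one-class anomaly scoring over per-flow telemetry: fit mean/std of
# benign counters, then flag flows whose maximum per-feature z-score is high
# (a volumetric burst dwarfs the benign profile).
import math

def fit_benign_profile(samples):
    """samples: list of (pkt_rate, byte_rate) tuples from benign traffic."""
    n = len(samples)
    means = [sum(s[i] for s in samples) / n for i in range(2)]
    stds = [math.sqrt(sum((s[i] - means[i]) ** 2 for s in samples) / n) or 1.0
            for i in range(2)]
    return means, stds

def anomaly_score(flow, profile):
    """Max absolute z-score across features: high => anomalous microflow."""
    means, stds = profile
    return max(abs(flow[i] - means[i]) / stds[i] for i in range(2))

benign = [(10, 1200), (12, 1500), (9, 1100), (11, 1300)]
profile = fit_benign_profile(benign)
print(anomaly_score((11, 1250), profile) < 3.0)    # typical flow: True
print(anomaly_score((500, 90000), profile) > 3.0)  # volumetric burst: True
```

The paper trains richer one-class classifiers per device (or per device type) over SDN telemetry; this sketch only conveys the "model benign, score deviation" pattern.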
Graph Neural Networks for Link Prediction with Subgraph Sketching
Many Graph Neural Networks (GNNs) perform poorly compared to simple
heuristics on Link Prediction (LP) tasks. This is due to limitations in
expressive power such as the inability to count triangles (the backbone of most
LP heuristics) and because they cannot distinguish automorphic nodes (those
having identical structural roles). Both expressiveness issues can be
alleviated by learning link (rather than node) representations and
incorporating structural features such as triangle counts. Since explicit link
representations are often prohibitively expensive, recent works have resorted to
subgraph-based methods, which have achieved state-of-the-art performance for
LP, but suffer from poor efficiency due to high levels of redundancy between
subgraphs. We analyze the components of subgraph GNN (SGNN) methods for link
prediction. Based on our analysis, we propose a novel full-graph GNN called
ELPH (Efficient Link Prediction with Hashing) that passes subgraph sketches as
messages to approximate the key components of SGNNs without explicit subgraph
construction. ELPH is provably more expressive than Message Passing GNNs
(MPNNs). It outperforms existing SGNN models on many standard LP benchmarks
while being orders of magnitude faster. However, it shares the common GNN
limitation that it is only efficient when the dataset fits in GPU memory.
Accordingly, we develop a highly scalable model, called BUDDY, which uses
feature precomputation to circumvent this limitation without sacrificing
predictive performance. Our experiments show that BUDDY also outperforms SGNNs
on standard LP benchmarks while being highly scalable and faster than ELPH.
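The sketching idea — summarize each node's neighborhood so that neighborhood overlap (the raw material of common-neighbor LP heuristics) can be estimated without building per-link subgraphs — can be illustrated with a toy MinHash estimator. The hash construction, signature length, and example graph are illustrative assumptions, not ELPH's exact sketches:

```python
# Toy MinHash sketching of node neighborhoods: the fraction of matching
# signature slots estimates the Jaccard overlap of two neighbor sets, which
# link-prediction heuristics use as an affinity signal.
import random

K = 128  # signature slots; more slots => tighter estimate
random.seed(0)
SALTS = [random.getrandbits(32) for _ in range(K)]

def minhash(neighbors):
    """K-slot MinHash signature of a neighbor set."""
    return [min(hash((salt, v)) & 0xFFFFFFFF for v in neighbors)
            for salt in SALTS]

def est_jaccard(sig_u, sig_v):
    """Fraction of matching slots estimates Jaccard(N(u), N(v))."""
    return sum(a == b for a, b in zip(sig_u, sig_v)) / K

Nu = set(range(0, 100))    # neighbors of u
Nv = set(range(50, 150))   # neighbors of v; true Jaccard = 50/150 = 0.33
print(round(est_jaccard(minhash(Nu), minhash(Nv)), 2))
```

Because signatures are per-node and fixed-size, they can be passed as messages in a full-graph GNN, which is the efficiency win the abstract describes.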
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent and possibilities are endless, through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when it is developed properly,
including technology, gaming, education, art, and culture. Nevertheless,
developing the Metaverse environment to its full potential is an ambitious
task that needs proper guidance and direction. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To
the best of our knowledge, this survey is the most comprehensive to date,
allowing users, scholars, and entrepreneurs to gain an in-depth understanding
of the Metaverse ecosystem and to identify their opportunities for
contribution.
Теорія систем мобільних інфокомунікацій. Системна архітектура [Theory of Mobile Infocommunication Systems: System Architecture]
This textbook describes the logical and physical structures, procedures,
algorithms, protocols, and principles of construction and operation of
cellular mobile communication networks (up to 3G) and mobile infocommunication
networks (4G and above), with attention to the general architectures of mobile
operators' networks, their management and coordination, and the continuous
evolution of how such networks operate and deliver services. The textbook has
seven sections and is structured so that the complexity of the material
increases with each subsequent chapter. It is intended for students pursuing a
bachelor's degree in specialty 172 "Telecommunications and Radio Engineering"
and will also be useful to graduate students and to scientific and engineering
professionals working on information and telecommunication systems and
technologies.
Exploring the Training Factors that Influence the Role of Teaching Assistants to Teach to Students With SEND in a Mainstream Classroom in England
As inclusive education has become increasingly valued over the years, the training of Teaching Assistants (TAs) is now more important than ever, given that they work alongside pupils with special educational needs and disabilities (hereinafter SEND) in mainstream education classrooms. The current study explored the training factors that influence the role of TAs in teaching SEND students in mainstream classrooms in England during their one-year training period. This work aimed to increase understanding of how TA training is seen to influence the development of their personal knowledge and professional skills. The study has significance for our comprehension of the connection between TAs' training and the quality of education in the classroom. In addition, this work investigated whether there was a correlation between the teaching experience of TAs and their background information, such as gender, age, grade level taught, years of teaching experience, and qualification level.
A critical realist theoretical approach was adopted for this two-phase study, which combined adaptive and grounded theory respectively. The multi-method project featured 13 case studies, each involving a trainee TA, his/her college tutor, and the classroom teacher supervising the trainee TA. The analysis was based on semi-structured interviews, various questionnaires, and non-participant observation for each case study during the TA's one-year training period. The primary analysis compared the data collected from the participants in the first and second data collection stages of each case. Further analysis involved cross-case analysis using a grounded theory approach, which made it possible to draw conclusions and put forth several core propositions. Compared with previous research, the findings of the current study reveal many implications for the training and deployment conditions of TAs, challenge prevailing approaches in several respects, and offer more diversified, enriched, and comprehensive explanations of critical pedagogical issues.
Hardware Acceleration of Neural Graphics
Rendering and inverse-rendering algorithms that drive conventional computer
graphics have recently been superseded by neural representations (NRs). NRs
have been used to learn the geometric and material properties of scenes and to
use that information to synthesize photorealistic imagery, promising a
replacement for traditional rendering algorithms with scalable quality and
predictable performance. In this work we ask the question: does neural
graphics (NG) need hardware support? We studied representative NG applications
and show that, if we want to render 4K resolution at 60 FPS, there is a gap of
1.5X-55X between the desired performance and what current GPUs deliver. For
AR/VR applications, there is an even larger gap of 2-4 orders of magnitude
between the desired performance and the required system power. We identify the
input encoding and MLP kernels as the performance bottlenecks, consuming 72%,
60%, and 59% of application time for multi-resolution hashgrid,
multi-resolution densegrid, and low-resolution densegrid encodings,
respectively. We propose the NG processing cluster (NGPC), a scalable and
flexible hardware architecture that directly accelerates the input encoding
and MLP kernels through dedicated engines and supports a wide range of NG
applications. We also accelerate the remaining kernels by fusing them together
in Vulkan, which leads to a 9.94X kernel-level performance improvement
compared to an unfused implementation of the pre-processing and
post-processing kernels. Our results show that NGPC delivers up to a 58X
end-to-end application-level performance improvement; for multi-resolution
hashgrid encoding, the average benefits across the four NG applications are
12X, 20X, 33X, and 39X for scaling factors of 8, 16, 32, and 64, respectively.
Our results show that with multi-resolution hashgrid encoding, NGPC enables
rendering at 4K resolution at 30 FPS for NeRF and at 8K resolution at 120 FPS
for all our other NG applications.